no-problem/9910/cond-mat9910145.html
# Three-dimensional Ising model in the fixed-magnetization ensemble: a Monte Carlo study ## I Introduction Second order phase transitions display many interesting and subtle properties associated with scale invariance and universality at critical points. Some of these, such as power-law singularities of the free energy and other quantities at the critical point, and their critical exponents and amplitudes, have been studied rather thoroughly (see, e.g., ). Among the less investigated items are the universal characteristics of finite-size effects. These are important for the analysis of experiments with finite samples, as well as for computer simulations, which necessarily have to deal with finite-size systems. In clear distinction from, for example, critical indices, the finite-size effects depend crucially on the nature of the statistical ensemble under consideration. To be concrete, let us consider one of the standard model systems of phase transition theory — the three-dimensional (3D) Ising model on the simple cubic lattice, with nearest-neighbor interactions. According to universality, this model describes the critical properties of a wide range of systems of different physical nature, including second order phase transitions in uniaxial magnetic systems and the liquid-vapor critical point. The statistical ensemble most commonly used (usually denoted as the canonical ensemble) is defined by the partition function $$Z_c(h)=\sum _{\{s_i\}}\mathrm{exp}\left\{\beta \sum _{\langle ij\rangle }s_is_j+h\sum _is_i\right\},\qquad s_i=\pm 1,$$ (1) where the sum runs over all the $`2^N`$ possible configurations of a total number of $`N`$ spins. This ensemble is perfectly natural for applications to magnetic phase transitions, where $`s_i=\pm 1`$ corresponds to the physical spin at lattice site $`i`$ pointing up or down, respectively. 
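For very small lattices, a partition function of the form of Eq. (1) can be evaluated by brute-force enumeration, which is handy for testing Monte Carlo code against exact values. The following is an illustrative sketch, not the paper's method; the periodic bond construction and the 2×2×2 size are our own choices:

```python
import itertools
import math

def canonical_partition(L, beta, h):
    """Brute-force Z_c(h) of Eq. (1) for an L^3 Ising lattice with
    periodic nearest-neighbor bonds (illustrative only: 2^(L^3) terms;
    note that for L = 2 each periodic bond is counted twice)."""
    N = L ** 3
    sites = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]
    index = {s: i for i, s in enumerate(sites)}
    # each bond stored once, via the +x, +y, +z neighbor with periodic wrap
    bonds = [(index[(x, y, z)], index[((x + dx) % L, (y + dy) % L, (z + dz) % L)])
             for (x, y, z) in sites
             for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        E = sum(spins[i] * spins[j] for i, j in bonds)
        Z += math.exp(beta * E + h * sum(spins))
    return Z
```

A quick sanity check: at `beta = 0` the spins decouple and Z reduces to `(2*cosh(h))**N` exactly.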
However, in the language of the lattice gas, $`s_i=\pm 1`$ corresponds to an occupied or unoccupied site, respectively: the total number of particles fluctuates. The canonical partition sum Eq. (1) in the spin language thus corresponds to the grand partition sum in the lattice-gas language. To avoid confusion, we emphasize that in the rest of this paper we are using the word ‘canonical’ in the language of the Ising model: it means that the magnetization is allowed to fluctuate freely. In contrast, many real experiments and simulations of liquid-vapor systems are performed with a fixed number of particles. Fixing the number of lattice-gas particles is equivalent, in the language of magnetic systems, to fixing the total magnetization $`M_{\mathrm{total}}\equiv \sum _{i=1}^Ns_i`$ of the system, or, in other words, fixing the average magnetization per spin $`M\equiv \frac{1}{N}M_{\mathrm{total}}=\frac{1}{N}\sum _{i=1}^Ns_i`$. Thus we will be interested in the properties of the 3D Ising model in the fixed-magnetization ensemble, $$Z_f(M)=\sum _{\{s_i\}:\sum _is_i=NM}\mathrm{exp}\left\{\beta \sum _{\langle ij\rangle }s_is_j\right\}=\sum _{\{s_i\}}\delta _{NM,\sum _is_i}\mathrm{exp}\left\{\beta \sum _{\langle ij\rangle }s_is_j\right\}.$$ (2) Note that the magnetic field $`h`$ is absent; it could only contribute a constant factor. One of the main difficulties encountered in computer simulation studies of critical phenomena is the critical-slowing-down phenomenon. For a number of spin models in the canonical ensemble this problem has been largely overcome with the invention of cluster algorithms . Until recently, no similarly useful algorithm was available for the fixed-magnetization ensemble. This situation has now changed with the development of a geometric cluster algorithm . We have used this algorithm extensively in this work to efficiently simulate systems of fixed magnetization at the critical point. In the canonical Ising model described by Eq. 
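The fixed-magnetization sum of Eq. (2) differs from Eq. (1) only by the constraint on $`\sum _is_i`$. A matching brute-force sketch (again our own illustrative construction, not the paper's geometric cluster algorithm):

```python
import itertools
import math

def fixed_m_partition(L, beta, m_total):
    """Brute-force Z_f of Eq. (2) for an L^3 periodic Ising lattice at
    fixed total magnetization sum_i s_i = m_total.  Illustrative only
    (2^(L^3) terms); for L = 2 each periodic bond is counted twice."""
    sites = [(x, y, z) for x in range(L) for y in range(L) for z in range(L)]
    index = {s: i for i, s in enumerate(sites)}
    bonds = [(index[(x, y, z)], index[((x + dx) % L, (y + dy) % L, (z + dz) % L)])
             for (x, y, z) in sites
             for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1))]
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=L ** 3):
        if sum(spins) != m_total:
            continue  # the delta constraint of Eq. (2)
        Z += math.exp(beta * sum(spins[i] * spins[j] for i, j in bonds))
    return Z
```

At `beta = 0` every allowed configuration contributes 1, so `Z_f` just counts configurations with the requested magnetization (a binomial coefficient), and `Z_f(M) = Z_f(-M)` by spin-flip symmetry.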
(1), the magnetic field $`h`$ is an adjustable external parameter. In contrast, the magnetization is a fluctuating observable. For each configuration taken from the ensemble one can sample its magnetization per spin $`M=\frac{1}{N}\sum _{i=1}^Ns_i`$. Having accumulated $`M`$ over a sufficiently large set of configurations, one can construct the probability distribution $`P(M)`$ , and determine various expectation values such as $`\langle M\rangle `$, $`\langle M^2\rangle `$. Examples of such probability distributions at the critical point are shown in Fig. 1. On the other hand, for systems in the fixed-magnetization ensemble described by Eq. (2), the roles of $`h`$ and $`M`$ are interchanged: now $`M`$ is the adjustable parameter, and it is intuitively clear that there should be some way to define an observable, which we denote by $`\stackrel{~}{h}`$ (to avoid confusion with $`h`$ in Eq. (1)), that corresponds to the magnetic field. Thus $`\stackrel{~}{h}`$ will be a fluctuating quantity that can be sampled on a microscopic level from configurations taken from the fixed-magnetization ensemble. In the limit $`N\to \infty `$ in both ensembles (such that the correlation length becomes negligible in comparison with the system size), the fluctuations in $`M`$ and $`\stackrel{~}{h}`$ become negligible, and the difference between $`h`$ and $`\stackrel{~}{h}`$ vanishes. In Section II we discuss the definition of $`\stackrel{~}{h}`$ and its properties. In Section III we establish the relation between the function $`\stackrel{~}{h}(M)`$ in the fixed-$`M`$ ensemble and the probability distribution $`P(M)`$ in the canonical ensemble. We conclude with a discussion of the relation between the function $`\stackrel{~}{h}(M)`$ in the fixed-$`M`$ ensemble and the function $`h(M)`$ in the canonical ensemble, and with a summary of our main results. 
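A minimal way to generate such a $`P(M)`$ histogram is single-spin-flip Metropolis sampling. The sketch below uses a small 2D lattice purely for brevity (the paper's systems are 3D and are simulated with cluster algorithms, which this sketch does not attempt to reproduce):

```python
import math
import random

def sample_p_of_m(L, beta, sweeps, seed=0):
    """Metropolis sampling of the total-magnetization histogram for a
    2D L x L periodic Ising model with Boltzmann weight
    exp(beta * sum_<ij> s_i s_j), as in Eq. (1) with h = 0.
    Returns {total magnetization: number of sweeps observed}."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    M = sum(map(sum, spins))
    hist = {}
    for _ in range(sweeps):
        for _ in range(L * L):
            x, y = rng.randrange(L), rng.randrange(L)
            nb = (spins[(x + 1) % L][y] + spins[(x - 1) % L][y]
                  + spins[x][(y + 1) % L] + spins[x][(y - 1) % L])
            # flipping s changes the exponent beta*sum(s_i*s_j) by -2*beta*s*nb
            if rng.random() < math.exp(-2 * beta * spins[x][y] * nb):
                spins[x][y] = -spins[x][y]
                M += 2 * spins[x][y]
        hist[M] = hist.get(M, 0) + 1
    return hist
```

Since each flip changes the total magnetization by ±2, all recorded values share the parity of $`N`$.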
## II The magnetic observable $`\stackrel{~}{h}(M)`$ for the fixed-$`M`$ ensemble We will now describe a definition of $`\stackrel{~}{h}(M)`$ that is based on a statistical analysis of the local environment of a given spin. By local environment we mean the set of neighbors with which this spin interacts. In our particular case of the Ising model with nearest-neighbor interactions on the simple cubic lattice, the local environment consists of 6 spins on the neighboring sites. The local environment has $`2^6`$ possible configurations, which divide into 7 types: 0+6$``$ (zero spins up, 6 down), 1+5$``$, 2+4$``$, 3+3$``$, 4+2$``$, 5+1$``$, 6+0$``$ (six spins up, zero down). The simplest way to Monte Carlo sample $`\stackrel{~}{h}(M)`$ is on the basis of the symmetric case: 3+3$``$ . For every Monte Carlo configuration, go through all sites, and select all spins with the required 3+3$``$ local environment. Then compute the average $`\langle s_0\rangle `$ of the selected spins, and define $`\stackrel{~}{h}`$ by $$\langle s_0\rangle =\mathrm{tanh}\stackrel{~}{h}.$$ (3) It is also possible to employ, instead of 3+3$``$, any other of the 7 types of local environment. In these non-symmetric cases the definition reads $$\langle s_0\rangle =\mathrm{tanh}\left(\stackrel{~}{h}+\beta \sum _{i=1}^{6}s_{0i}\right),$$ (4) where the $`s_{0i}`$ are the nearest neighbors of $`s_0`$. One easily notices that the definition is constructed in such a way that $`\stackrel{~}{h}`$ corresponds, on the mean-field level, to the external field $`h`$ in Eq. (1). It is now interesting to see what the Monte Carlo results for $`\stackrel{~}{h}(M)`$ look like. As has already been demonstrated in , at the critical temperature $`\stackrel{~}{h}(M)`$ practically coincides with the relation $`h(M)`$ in the canonical ensemble as obtained by Monte Carlo simulations, provided $`M`$ is sufficiently large, so that the correlation length is sufficiently small in comparison with the system size and the finite-size effects are suppressed. 
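As an illustration, the 3+3 recipe of Eq. (3) can be written down directly for a single stored configuration. This is our own sketch (`spins` mapping lattice coordinates to ±1 is an assumed data layout), and the estimate is only meaningful when accumulated over many configurations:

```python
import math

def h_tilde_symmetric(spins, L):
    """h~ from the symmetric 3+3 environments of one 3D L^3 periodic
    configuration, via Eq. (3): <s0> = tanh(h~).  `spins` maps
    (x, y, z) -> +/-1.  Returns None when no site qualifies; note that
    math.atanh diverges if every selected spin has the same sign."""
    shifts = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1))
    tot = cnt = 0
    for x in range(L):
        for y in range(L):
            for z in range(L):
                nb = sum(spins[((x + dx) % L, (y + dy) % L, (z + dz) % L)]
                         for dx, dy, dz in shifts)
                if nb == 0:  # three neighbors up, three down
                    tot += spins[(x, y, z)]
                    cnt += 1
    if cnt == 0:
        return None
    return math.atanh(tot / cnt)
```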
At the same time, the striking feature of $`\stackrel{~}{h}(M)`$ for not-so-large $`M`$ is its nonmonotonic behavior. First $`\stackrel{~}{h}(M)`$ goes negative at small $`M`$, then it begins to grow, and finally it assumes the usual behavior at larger $`M`$ . This is clearly seen in Fig. 2 (diamonds), which shows Monte Carlo results obtained by means of the geometric cluster algorithm . In the remaining part of the paper, we explain this behavior (which turns out to be a peculiar kind of finite-size effect) by establishing a close relation between $`\stackrel{~}{h}(M)`$ in the fixed-$`M`$ ensemble and the probability distribution $`P(M)`$ in the canonical ensemble. ## III Connection between $`\stackrel{~}{h}(M)`$ in the fixed-$`M`$ ensemble and the probability distribution of $`M`$ in the canonical ensemble Considering the fixed-$`M`$ ensemble, Eq. (2), one notices that it can be obtained by taking the canonical ensemble (1) and cutting from it the subset satisfying the constraint $`\sum _is_i=NM`$. Within this subset we still have the usual Boltzmann probabilities $`\mathrm{exp}\{\beta \sum _{\langle ij\rangle }s_is_j\}`$ for individual configurations. This makes it possible to establish a relation between $`\stackrel{~}{h}(M)`$ in the fixed-$`M`$ ensemble and the properties of the system in the canonical ensemble. The definition of $`\stackrel{~}{h}(M)`$ described in Sect. II is equivalent to the following. Let us take the fixed-$`M`$ ensemble and concentrate our attention on one particular lattice site, and on the spin located there. Let us perform the following measurement. For every configuration consider the local environment of our selected site. If it is not 3+3$``$, do not measure anything for this configuration. If it is 3+3$``$, measure the selected spin $`s_0`$ and store it. Finally, find $`\langle s_0\rangle `$, and use Eq. (3) to determine $`\stackrel{~}{h}`$. 
One notices that, as long as we are performing a thought experiment, we need not care about the Monte Carlo statistics. We can just stick to one site and get the same $`\langle s_0\rangle `$ without averaging over all sites, because they are equivalent. Up to now we have distinguished between 7 types of local environments, such as 3+3$``$. Let us go a bit further and treat separately all $`2^6`$ possible local environments. In other words, the measurement of $`s_0`$ is now performed in an even smaller subset of the fixed-$`M`$ ensemble: also the six spins forming the local environment of $`s_0`$ are fixed. In the case that the predetermined local environment is of the 3+3$``$ type, it may seem that there is no interaction between $`s_0`$ and the remaining system of $`N-7`$ spins. Nevertheless, the fixed-$`M`$ ensemble probabilities that $`s_0`$ is $`+1`$ or $`-1`$, which we denote, respectively, by $`P_+`$ and $`P_{-}`$, are not equal in general. These probabilities may still depend on the magnetization of the remaining system, which is coupled to $`s_0`$ by the overall magnetization constraint: $$s_0+\sum _{i=1}^{6}s_{0i}+\sum _{i\in RS}s_i=\sum _{i=1}^{N}s_i\equiv NM.$$ (5) The total magnetization of the system is thus expressed as the sum of three terms: the local spin $`s_0`$, the sum of its six neighbors, and the magnetization of the remaining $`N-7`$ spins, denoted as $`\sum _{i\in RS}s_i`$, where $`RS`$ stands for “remaining system”. The conditional probabilities $`P_\pm `$ can be written more explicitly as $$P_\pm =P(s_0=\pm 1|NM,s_{01}\mathrm{\dots }s_{06})$$ (6) The two conditional arguments specify the total magnetization $`NM`$ and the states of the 6 neighbor spins. We now make the connection with the canonical probabilities $`P_c`$, which include the magnetization as an unconditional argument. We use the zero-field canonical probabilities, i.e., $`h=0`$ in Eq. (1). 
$$P_\pm =P_c^{-1}(NM|s_{01}\mathrm{\dots }s_{06})P_c(s_0=\pm 1,NM|s_{01}\mathrm{\dots }s_{06})$$ (7) We may slightly rewrite this by substituting for the probability $`P_c`$ the probability $`\widehat{P}_c`$, which is equal to it but uses the magnetization of the $`N-7`$ remaining spins as its second argument: $$P_\pm =P_c^{-1}(NM|s_{01}\mathrm{\dots }s_{06})\widehat{P}_c(s_0=\pm 1,\sum _{i\in RS}s_i=NM-\sum _{i=1}^{6}s_{0i}-s_0|s_{01}\mathrm{\dots }s_{06})$$ (8) Let us first consider the simplest case $`\sum _{i=1}^6s_{0i}=0`$. Then the canonical probability $`\widehat{P}_c`$ does not depend on its first argument, which can thus be skipped: $$P_\pm =\frac{1}{2}P_c^{-1}(NM|s_{01}\mathrm{\dots }s_{06})\widehat{P}_c(\sum _{i\in RS}s_i=NM\mp 1|s_{01}\mathrm{\dots }s_{06})$$ (9) Therefore, $$\frac{P_+}{P_{-}}=\frac{\widehat{P}_c(\sum _{i\in RS}s_i=NM-1|s_{01}\mathrm{\dots }s_{06})}{\widehat{P}_c(\sum _{i\in RS}s_i=NM+1|s_{01}\mathrm{\dots }s_{06})}.$$ (10) The condition $`s_{01}\mathrm{\dots }s_{06}`$ in effect introduces a defect in the remaining system: an octahedron-shaped bubble with the six spins at its vertices fixed, while the spin $`s_0`$ in the middle is decoupled and plays no role any more. Obviously, the ratio (10) could be obtained by performing a usual canonical-ensemble simulation of such a system with a defect, and measuring the probability distribution of its overall magnetization $`\sum _{i\in RS}s_i`$. The value of the ratio (10) would then be given by the ratio of the heights of two neighboring bins in the corresponding histogram. In all cases of practical interest for the study of the scaling limit (sufficiently large systems, sufficiently small magnetization) the ratio (10) is close to 1. Otherwise a difference of one unit in the total magnetization would lead to a large change of probability, which would obviously be far from the scaling limit. Thus we always work with $`\stackrel{~}{h}\ll 1`$. 
It is convenient to introduce the shorter notation $`P_{RS}(x)\equiv \widehat{P}_c(\sum _{i\in RS}s_i=Nx|s_{01}\mathrm{\dots }s_{06})`$, where $`x`$ may be read as the magnetization of the remaining system if the factor $`N`$ instead of $`N-7`$ makes a negligible difference, i.e., for large systems. We have to keep in mind that the notation $`P_{RS}`$ refers to a system with a defect whose type is not explicitly shown. Again restricting ourselves to local environments of the type 3+3$``$, we obtain $$\langle s_0\rangle =\frac{P_+-P_{-}}{P_++P_{-}}=\frac{P_{RS}(M-\frac{1}{N})-P_{RS}(M+\frac{1}{N})}{P_{RS}(M-\frac{1}{N})+P_{RS}(M+\frac{1}{N})}.$$ (11) Thus we arrive at $$\langle s_0\rangle \approx -\frac{1}{N}\frac{1}{P_{RS}(M)}\frac{dP_{RS}(M)}{dM}.$$ (12) Also, due to $`\stackrel{~}{h}\ll 1`$, Eq. (3) reduces to $$\langle s_0\rangle =\stackrel{~}{h},$$ (13) and we get $$\stackrel{~}{h}=-\frac{1}{N}\frac{d}{dM}\mathrm{log}P_{RS}(M).$$ (14) Defining the effective potential $`V_{\mathrm{eff}}^{(RS)}(M)`$ (i.e., the Ginzburg–Landau fixed-$`M`$ free energy) of the present system (with defect) by $$P_{RS}(M)\propto \mathrm{exp}\{-NV_{\mathrm{eff}}^{(RS)}(M)\},$$ (15) we immediately get $$\stackrel{~}{h}=\frac{dV_{\mathrm{eff}}^{(RS)}(M)}{dM}.$$ (16) For large systems, the relative contribution of the defect is small, and thus $`\stackrel{~}{h}(M)`$ is well approximated by the derivative of $`V_{\mathrm{eff}}(M)`$ for the finite system without a defect: $$P(M)\propto \mathrm{exp}\{-NV_{\mathrm{eff}}(M)\},$$ (17) $$\stackrel{~}{h}=\frac{dV_{\mathrm{eff}}(M)}{dM}+\mathrm{\dots }$$ (18) where the ellipsis stands for corrections vanishing at large $`N`$. As is well known, for finite 3D Ising models in a cubic box with periodic boundary conditions, the distribution $`P(M)`$ has a double-peak structure at the critical point. Thus $`V_{\mathrm{eff}}(M)`$ has a double-well shape, which immediately explains why $`\stackrel{~}{h}`$ goes negative for small values of $`M`$. In Fig. 2 we show the quantitative comparison of $`\stackrel{~}{h}(M)`$ (depicted by diamonds) and $`dV_{\mathrm{eff}}/dM`$ (solid line). 
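A relation of the form of Eq. (14) suggests a simple numerical recipe: given a measured histogram of the magnetization, estimate h̃ by a central difference of log P. The sketch below uses our own conventions (integer total-magnetization bins in steps of 2), not the paper's actual analysis code:

```python
import math

def h_tilde_from_histogram(pm_total, N):
    """Estimate h~(M) = -(1/N) d log P / dM (cf. Eq. (14)) by central
    differences.  pm_total maps the total magnetization (an integer in
    steps of 2) to a probability; only interior bins are returned,
    keyed by M = m/N."""
    dM = 2.0 / N  # neighboring bins differ by 2 spins, i.e. dM = 2/N
    out = {}
    for m in pm_total:
        if m - 2 in pm_total and m + 2 in pm_total:
            dlogp = (math.log(pm_total[m + 2])
                     - math.log(pm_total[m - 2])) / (2 * dM)
            out[m / N] = -dlogp / N
    return out
```

For a synthetic single-well distribution log P = −N M²/2 the central difference is exact (log P is quadratic), and the recipe returns h̃(M) = M.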
One observes that the correspondence between the points and the line clearly improves with increasing lattice size. To extract $`V_{\mathrm{eff}}`$ from the Monte-Carlo-generated $`P(M)`$ (Fig. 1) we have exploited the fact, reported in , that for the system under consideration $`P(M)`$ can be well approximated by the ansatz $$P(M)\propto \mathrm{exp}\left\{-\left(\frac{M^2}{M_0^2}-1\right)^2\left(a\frac{M^2}{M_0^2}+c\right)\right\},$$ (19) which applies to the finite-size regime, i.e., when the finite system size is small compared to the bulk correlation length. We have fitted the Monte Carlo generated $`P(M)`$ data accordingly, determined the parameters $`a`$ and $`c`$, and thus obtained $`dV_{\mathrm{eff}}(M)/dM`$ in a simple polynomial form. It is also worth mentioning that the shape of $`P(M)`$ for a given geometry (in our case, a cubic box with periodic boundaries) is universal at the critical point. That is, the parameters $`a`$ and $`c`$ have well-defined scaling limits when the system size grows to infinity. These values, $`a=0.158(2)`$, $`c=0.776(2)`$, have been determined in by making use of a special model in the 3D Ising universality class, which has almost no corrections to scaling . The corresponding scaling form of $`dV_{\mathrm{eff}}(M)/dM`$ is plotted as the dashed line in Fig. 2. One observes that the deviations from scaling (between the solid and the dashed lines) go down with increasing size, as they should. The results in Fig. 2 confirm the relation between the observable $`\stackrel{~}{h}(M)`$ as defined above in the fixed-$`M`$ ensemble and the probability distribution $`P(M)`$ in the canonical ensemble. The remaining discrepancy (between the diamonds and the solid line in Fig. 2) is due to the “defect” discussed above. The question arises whether it is possible to modify our definition of $`\stackrel{~}{h}`$ in order to suppress this discrepancy. We have found that this is indeed the case. 
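A fit of the form of Eq. (19) is linear in a and c once M0 is held fixed, so it can be done with plain least squares. Below is a self-contained sketch (our own implementation; treating M0 as a caller-supplied constant is our simplification):

```python
import math

def fit_ansatz(ms, ps, m0):
    """Least-squares fit of the critical-point ansatz, Eq. (19):
    log P = const - (u - 1)^2 * (a*u + c),  u = (M/M0)^2.
    With M0 held fixed this is linear in (a, c, const), so the 3x3
    normal equations are solved directly (stdlib-only sketch)."""
    rows, ys = [], []
    for M, p in zip(ms, ps):
        u = (M / m0) ** 2
        w = (u - 1.0) ** 2
        rows.append([-w * u, -w, 1.0])  # coefficients of (a, c, const)
        ys.append(math.log(p))
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for k in range(3):  # Gaussian elimination with partial pivoting
        piv = max(range(k, 3), key=lambda i: abs(ata[i][k]))
        ata[k], ata[piv] = ata[piv], ata[k]
        aty[k], aty[piv] = aty[piv], aty[k]
        for i in range(k + 1, 3):
            f = ata[i][k] / ata[k][k]
            for j in range(k, 3):
                ata[i][j] -= f * ata[k][j]
            aty[i] -= f * aty[k]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        x[i] = (aty[i] - sum(ata[i][j] * x[j] for j in range(i + 1, 3))) / ata[i][i]
    return x[0], x[1]  # (a, c)
```

On noiseless synthetic data the fit recovers the input parameters to numerical precision.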
Up to this point we restricted ourselves to symmetric local environments (3+3$``$) to define $`\stackrel{~}{h}`$ via Eq. (3). As has already been mentioned, using Eq. (4) one may use other types of local environments as well. In those cases the neighbor magnetization $`k\equiv \sum _{i=1}^{6}s_{0i}`$ enters the definition: $$\langle s_0\rangle =\mathrm{tanh}(\stackrel{~}{h}+\beta k).$$ (20) Following the same arguments as before, we decompose the system into the local spin $`s_0`$, its fixed neighbors, and the remaining system ($`RS`$). This leads to the following generalization of Eq. (11): $`\langle s_0\rangle `$ $`=`$ $`{\displaystyle \frac{e^{\beta k}P_{RS}(M-\frac{k}{N}-\frac{1}{N})-e^{-\beta k}P_{RS}(M-\frac{k}{N}+\frac{1}{N})}{e^{\beta k}P_{RS}(M-\frac{k}{N}-\frac{1}{N})+e^{-\beta k}P_{RS}(M-\frac{k}{N}+\frac{1}{N})}}`$ (21) $`\approx `$ $`\mathrm{tanh}\beta k-{\displaystyle \frac{1}{\mathrm{cosh}^2\beta k}}{\displaystyle \frac{1}{N}}{\displaystyle \frac{1}{P_{RS}(M-\frac{k}{N})}}{\displaystyle \frac{dP_{RS}(M-\frac{k}{N})}{dM}}`$ (22) $`\approx `$ $`\mathrm{tanh}\left(\beta k-{\displaystyle \frac{1}{N}}{\displaystyle \frac{1}{P_{RS}(M)}}{\displaystyle \frac{dP_{RS}(M)}{dM}}\right).`$ (23) Thus we arrive once again at Eqs. (14–16), but now with a different type of defect in the remaining system, and with a shift of $`k/N`$ in the magnetization of the remaining system; we neglect the latter effect. It now seems plausible that one can suppress the influence of the defect by averaging over all configurations of the defect, weighted with their natural occurrence probabilities. Such an averaging should more faithfully reproduce the characteristics of a system without a defect. The modified determination of $`\stackrel{~}{h}`$ is as follows. Sample configurations from the fixed-$`M`$ ensemble. For each spin determine its orientation ($`+`$ or $`-`$) and the type of its local environment (type 0, …, 6 for 0+6$``$, …, 6+0$``$). 
Accumulate these data by incrementing one out of 14 bins $`N_{q,+}`$, $`N_{q,-}`$, where $`q=0\mathrm{\dots }6`$ denotes the type of local environment, and $`+`$ or $`-`$ denotes the local spin. The resulting population numbers satisfy $`\sum _{q=0}^6(N_{q,+}+N_{q,-})=N`$. Then, for each $`q`$, find $`\langle s_0\rangle _q=(N_{q,+}-N_{q,-})/(N_{q,+}+N_{q,-})`$ and compute $`\stackrel{~}{h}_q`$ according to Eq. (4). Finally, $$\stackrel{~}{h}_{\mathrm{improved}}=\frac{1}{N}\sum _{q=0}^{6}\stackrel{~}{h}_q(N_{q,+}+N_{q,-}).$$ (24) Applying this definition to our simulation data, we observe that, within the statistical accuracy, the discrepancy between $`\stackrel{~}{h}(M)`$ and $`dV_{\mathrm{eff}}(M)/dM`$ is indeed eliminated (Fig. 2). ## IV Discussion and conclusions The relation (18) looks exactly the same as the standard relation between the field and the magnetization in the canonical ensemble: $$h=\frac{d\stackrel{~}{V}_{\mathrm{eff}}(M)}{dM}.$$ (25) The observed differences between the properties of $`\stackrel{~}{h}(M)`$ in the fixed-$`M`$ ensemble and $`h(M)`$ in the canonical ensemble, the most prominent of which is the nonmonotonic behavior of $`\stackrel{~}{h}(M)`$ instead of the monotonic behavior of $`h(M)`$, can be traced to the different definitions of the effective potential. The one that occurs in Eq. (18) is the fixed-$`M`$ free energy, $$V_{\mathrm{eff}}(M)=-(1/N)\mathrm{log}Z_f(M),$$ (26) while the one that enters Eq. (25) is defined via a Legendre transformation: $$\stackrel{~}{V}_{\mathrm{eff}}(M)=-(1/N)\mathrm{log}Z_c(h)+hM,$$ (27) where $$M=\langle M\rangle _h$$ (28) is the canonical average of the magnetization in an external field $`h`$. The partition functions $`Z_f(M)`$ and $`Z_c(h)`$ were defined in Section I. In a situation where fluctuations become negligible, the term $`hM`$ in $`\stackrel{~}{V}_{\mathrm{eff}}`$ cancels the field dependence of the Boltzmann weights. 
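The improved estimator of Eq. (24) can be sketched for a single stored configuration as follows. This is our own simplified version: types whose per-type average is exactly ±1, for which Eq. (4) cannot be inverted, are simply skipped here:

```python
import math

def h_tilde_improved(spins, L, beta):
    """Improved estimator of Eq. (24) for one stored 3D L^3 periodic
    configuration: bin every site by environment type q (number of up
    neighbors) and its own sign, invert Eq. (4) per type, and weight
    by occupancy.  Our simplification: types with <s0>_q = +/-1 are
    skipped, since atanh diverges there."""
    n_plus, n_minus = [0] * 7, [0] * 7
    shifts = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1))
    for x in range(L):
        for y in range(L):
            for z in range(L):
                up = sum(1 for dx, dy, dz in shifts
                         if spins[((x + dx) % L, (y + dy) % L, (z + dz) % L)] == 1)
                if spins[(x, y, z)] == 1:
                    n_plus[up] += 1
                else:
                    n_minus[up] += 1
    N = L ** 3
    h = 0.0
    for q in range(7):
        tot = n_plus[q] + n_minus[q]
        if tot == 0:
            continue
        s0 = (n_plus[q] - n_minus[q]) / tot
        if abs(s0) == 1.0:
            continue
        k = 2 * q - 6  # sum over the six neighbor spins
        h += (math.atanh(s0) - beta * k) * tot / N  # h~_q weighted by occupancy
    return h
```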
Then both definitions of the effective potential become equivalent, and both effective potentials approach the bulk form so that the difference between $`h`$ and $`\stackrel{~}{h}`$ vanishes. In a finite system, due to fluctuations, $`\stackrel{~}{V}_{\mathrm{eff}}`$ differs from $`V_{\mathrm{eff}}`$. For instance, at the Ising critical point, the double-well form of $`V_{\mathrm{eff}}`$ is absent: $`\stackrel{~}{V}_{\mathrm{eff}}`$ has a single-well form. Returning to Eq. (16), there is another finite-size effect: the difference between $`V_{\mathrm{eff}}^{(RS)}(M)`$ and $`V_{\mathrm{eff}}(M)`$ due to the presence of the defect. The relative contribution of the defect becomes small for large systems, and it is further suppressed by the improved definition of $`\stackrel{~}{h}`$, Eq. (24). Thus for large systems and sufficiently high $`M`$, when the correlation length is small in comparison with the system size, the finite size effects are suppressed, the difference between $`V_{\mathrm{eff}}(M)`$ and the bulk effective potential disappears, and our definition of $`\stackrel{~}{h}(M)`$ reproduces the expected bulk behavior. In conclusion, we have studied the critical three-dimensional Ising model in the fixed-magnetization ensemble, in a cubic geometry with periodic boundary conditions. This was done by means of the recently developed geometric cluster Monte Carlo algorithm. We have defined a magnetic field-like observable $`\stackrel{~}{h}`$ for this ensemble, studied its dependence on the magnetization $`M`$ and explained its counter-intuitive nonmonotonic behavior: $`\stackrel{~}{h}`$ first becomes negative and then positive with increasing $`M`$ (Fig. 2). We have provided a quantitative description of $`\stackrel{~}{h}(M)`$ by establishing a close relation with $`P(M)`$ — the probability distribution of the magnetization in the canonical ensemble. 
The nonmonotonic behavior of $`\stackrel{~}{h}(M)`$ can be understood as a manifestation of the same finite-size effect that is responsible for the double-peak shape of $`P(M)`$ at the critical point. Furthermore we have shown that, when fluctuations are negligible, our definition reduces to the standard canonical relation $`M(h)`$. Finally, we note that in the different context of the simulation of a system of particles whose number is fixed, a similar line of reasoning enables the determination of the chemical potential of the particles . ###### Acknowledgements. We thank INTAS (grant CT93-0023) and DRSTP (Dutch Research School for Theoretical Physics) for enabling one of us (M.T.) to visit Delft University.
no-problem/9910/astro-ph9910015.html
# Collisional Ring Galaxies in Small Groups ## 1 Introduction The evolution of galaxies in small groups is governed by gravitational interaction. Even if the lifetime of a dynamical system is 5–10 times larger than the crossing time, small groups of galaxies should have merged into one single galaxy in much less than a Hubble time. It has been suggested that small groups are constantly replenished through the infall of galaxies from looser groups (Diaferio, Geller & Ramella 1994). Another scenario, supported by numerical simulations, is that dark matter halos may delay the merging process of small groups, in particular if the halo envelopes the whole group (Barnes 1985; Athanassoula et al. 1997). The presence of a ring galaxy in a group may help provide constraints on a number of parameters, such as the dynamical timescales and the dark matter distribution. Rings form after the passage of a smaller galaxy through the center of a disk galaxy. They are expanding ring waves, made more visible through the blue light of the young stars born from the gas that has been compressed in the ring wave. From the size and the expansion velocity of the ring (as measured for instance by high-resolution emission-line observations), it is possible to estimate the age of the ring. The relative spacing of the rings gives information on the dark matter distribution in that galaxy. The large-scale atomic gas distribution traces the history of the interaction and, combined with numerical N-body simulations, provides some insight into the evolution of those groups. One of the best studied northern ring systems is VIIZw466, for which Hi observations have provided information on the kinematics of the gas in the ring and its nearby companions and have revealed the presence of a plume extending from one of the companions towards the ring (Appleton, Charmandaris & Struck 1996). 
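The kinematic age estimate mentioned above (ring radius divided by expansion velocity) reduces to a unit conversion once a constant expansion speed is assumed. The numbers in the sketch below are illustrative, not taken from the paper:

```python
KM_PER_KPC = 3.0857e16   # kilometres in one kiloparsec
SEC_PER_MYR = 3.156e13   # seconds in one megayear

def ring_age_myr(radius_kpc, v_expansion_kms):
    """Kinematic age t = R / v of an expanding ring wave, assuming a
    constant expansion speed (a crude estimate; real ring waves
    decelerate).  Inputs in kpc and km/s, output in Myr."""
    return radius_kpc * KM_PER_KPC / v_expansion_kms / SEC_PER_MYR
```

For a hypothetical 20 kpc ring expanding at 90 km/s this gives a little over 200 Myr, the right order of magnitude for collisional ring systems.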
The effect of an asymmetric compression in the ring was seen in the mid-infrared (using $`ISO`$) and in the radio continuum emission (Appleton, Charmandaris, Horellou et al. 1999). Here we discuss two more ring galaxies located in small groups: the Cartwheel and Arp 119; we present a model of the Cartwheel that reproduces most of the observed features, and the first high-resolution Hi observations of Arp 119. ## 2 A Model for the Cartwheel The results of our simulation assuming the Cartwheel’s most distant companion (called G3) as the intruder are shown in Fig. 1. We chose G3 as the “bullet” rather than one of the two nearer companions because of the existence of an extended Hi plume reaching from the Cartwheel towards that galaxy (Higdon 1996). Our model is based on the nearly central collision of a rigid companion galaxy with a self-gravitating disk containing both stars and gas (see Horellou & Combes 2000a for more details). The halo of our computer-made Cartwheel is about three times as massive as the disk + bulge, and the mass of the companion is half that of the target. The model reproduces the main features of the Cartwheel and the observed position and radial velocity difference between the Cartwheel and G3. It is interesting that a good fit can also be obtained by assuming one of the nearer companions as the intruder, on a trajectory perpendicular to the Cartwheel’s disk (Bosma et al., this volume). However, neither Bosma et al.’s model nor ours involving a rigid intruder is able to reproduce the $`\sim `$ 100 kpc long Hi tail towards G3. In a more elaborate calculation in which both the companion and the target are deformable, we found that a gas-rich companion plunging through the center of the target on a prograde orbit can extrude a gaseous plume reminiscent of that in the Cartwheel (Horellou & Combes 2000b). ## 3 Arp 119 Arp 119 is a more distorted system with two nearby companions, an elliptical and an irregular. 
In addition to the material directly associated with the galaxies, Hi emission is detected south of the ring-like galaxy and north of the southern irregular companion Mrk 983 (see Fig. 2; Horellou, Charmandaris & Combes, in prep). The two features may be connected, forming a bridge between the two galaxies. The Hi in Arp 119 is distributed in a broad rotating ring with several condensations, some of which coincide with the knots terminating the “spokes” that are visible in the optical picture. Those features may be due to the head-on collision between Mrk 983 and Arp 119. The nucleus of Arp 119 presents the characteristics of a LINER. It is not clear whether the head-on collision of two gas-rich galaxies can trigger an active nucleus, although the fraction of ring galaxies with a Seyfert nucleus is rather high (four out of about 30). ## 4 Conclusions As originally shown by Few & Madore (1986), ring galaxies which present signs of having undergone a collision have, on average, a higher number of neighbors than galaxies in a control sample. When two companions are present, Hi observations help reveal the identity of the intruder. X-ray maps of groups containing a ring would give further constraints on the degree of relaxation of the group. So far, the Cartwheel is the only ring galaxy in which X-ray emission has been detected (ROSAT observations by Wolter, Trinchieri & Iovino 1999), and the emission is concentrated in the southern ring quadrant where most of the star formation occurs. The VIIZw466 system – which contains an asymmetric ring produced in an off-center collision, an edge-on spiral and a massive elliptical with distorted isophotes that may have accreted several companions – would be an ideal target for observations with Chandra. ###### Acknowledgements. The author enjoyed discussions with Vassilis Charmandaris, Françoise Combes and Phil Appleton on ring galaxy projects. 
Special thanks to John and Steven Black for their support and their interest in telescopes and galaxies.
no-problem/9910/cond-mat9910371.html
# Dynamics of Vibrated Granular Monolayers ## Abstract We study the statistical properties of vibrated granular monolayers using molecular dynamics simulations. We show that at high excitation strengths the system is in a gas state, particle motion is isotropic, and the velocity distributions are Gaussian. As the vibration strength is lowered, the system’s dimensionality is reduced from three to two. Below a critical excitation strength, a gas-cluster phase occurs, and the velocity distribution becomes bimodal. In this phase, the system consists of clusters of immobile particles arranged in close-packed hexagonal arrays, and gas particles whose energy equals the first excited state of an isolated particle on a vibrated plate. PACS: 81.05.Rm, 05.70.Fh, 02.70.Ns Granular media, i.e., ensembles of hard macroscopic particles, exhibit rich, interesting, and only partially understood collective behavior . The dynamics of driven or excited granular media is particularly important since the external energy source balances the energy loss due to collisions. Collective behavior of such systems is therefore important for establishing a more complete theoretical description of granular media. Here, we focus on monolayer geometries, where collapse, clustering, and long-range order have been observed recently . In particular, experimental studies reported that the velocity distribution function may exhibit both Gaussian and non-Gaussian behavior under different driving conditions. In this study, we carry out molecular dynamics simulations of vertically vibrated granular monolayers. To validate the simulation method, we verified the experimentally observed transition from a gas-like phase at high vibration strengths to a cluster-gas phase at low vibration strengths . Additionally, we checked that several other details, including the transition point and the statistics of the horizontal velocities, agree quantitatively with the experimental results. 
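A standard diagnostic for a Gaussian-to-bimodal change in a velocity distribution is the flatness ⟨v⁴⟩/⟨v²⟩², which equals 3 for a Gaussian and falls well below 3 for a strongly bimodal distribution. This is a generic sketch of that diagnostic, not the paper's specific analysis:

```python
import random

def flatness(vs):
    """Flatness <v^4> / <v^2>^2 of a (zero-mean) velocity sample:
    3 for a Gaussian, below 3 for a strongly bimodal distribution."""
    n = len(vs)
    m2 = sum(v * v for v in vs) / n
    m4 = sum(v ** 4 for v in vs) / n
    return m4 / (m2 * m2)

rng = random.Random(1)
gauss = [rng.gauss(0.0, 1.0) for _ in range(200000)]
# two narrow peaks at +/-1, mimicking a bimodal velocity distribution
bimodal = [rng.choice((-1.0, 1.0)) + rng.gauss(0.0, 0.1)
           for _ in range(200000)]
```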
In particular, the horizontal energy vanishes linearly near the transition point, and the corresponding velocity distribution changes from a Gaussian to a non-Gaussian as the vibration strength is reduced. In contrast with the experimental studies, the simulations enable us to probe the vertical motion, an important characteristic of the dynamics. Our results show that as the system approaches the transition point, the vertical energy drops by several orders of magnitude. Furthermore, the gas-cluster phase is characterized by a coexistence of clusters of immobile particles and energetic gas particles, whose energy can be understood by considering an isolated particle on a vibrated plate. We also find that the deviation from Gaussian behavior in the gas phase is directly related to the development of an anisotropy in the motion, i.e., significant differences between the horizontal and the vertical velocities. To study the dynamics of vibrated monolayers, we used the standard molecular dynamics simulation technique . We considered an ensemble of $`N`$ identical weakly deformable spheres of mass $`m`$, radius $`R`$, and moment of inertia $`I=\frac{2}{5}mR^2`$. The simulation integrates the equations of motion for the linear and angular momenta, $`m\ddot{𝐫}_i=\sum _{j\ne i}𝐅_{ij}^n-mg\widehat{z}`$ and $`I\dot{\omega }_i=\sum _{j\ne i}𝐫_{ij}\times 𝐅_{ij}^t`$, respectively. Here, $`𝐫_i`$ is the position of the $`i`$th particle, $`\omega _i`$ is its angular velocity, and $`g`$ is the gravitational acceleration. The force due to contact with the $`j`$th particle in the direction normal (tangential) to the vector $`𝐫_{ij}=𝐫_j-𝐫_i`$ is denoted by $`𝐅_{ij}^n`$ ($`𝐅_{ij}^t`$). The force between two particles is nonzero only when they overlap, i.e., $`𝐅_{ij}=0`$ when $`|𝐫_{ij}|>2R`$. 
When there is an overlap, the normal contact force $`𝐅_{ij}^n=𝐅_{ij}^{\mathrm{rest}}+𝐅_{ij}^{\mathrm{diss}}`$ between the particles is a sum of the following forces: (a) a restoring force, $`𝐅_{ij}^{\mathrm{rest}}=Ym_i(|𝐫_{ij}|-2R)𝐫_{ij}/|𝐫_{ij}|`$, with $`Y`$ the Young’s modulus, and (b) an inelastic dissipative force, $`𝐅_{ij}^{\mathrm{diss}}=-\gamma _nm_i𝐯_{ij}^n`$. The tangential force is the frictional force $`𝐅_{ij}^t=𝐅_{ij}^{\mathrm{shear}}=-\gamma _sm_i𝐯_{ij}^t`$. In the above, $`𝐯_{ij}^n=(𝐯_{ij}\cdot 𝐫_{ij})𝐫_{ij}/|𝐫_{ij}|^2`$ and $`𝐯_{ij}^t=𝐯_{ij}-𝐯_{ij}^n`$ are the projections of the relative velocity in the normal and tangential directions, respectively. The coefficients $`\gamma _n`$ and $`\gamma _s`$ account for the dissipation due to the relative motion in the normal and tangential directions, respectively. Overall, the molecular dynamics method has the advantage that it is amenable to parallel implementation, and that it allows handling of collisions involving an arbitrary number of particles. Initially, particles are randomly distributed on the vibrating plate, with a filling fraction $`\rho `$. The velocities were drawn independently from an isotropic Gaussian distribution. The plate undergoes harmonic oscillations in the vertical direction according to $`z_p(t)=A(t)\mathrm{cos}(\omega t)`$, with $`A`$ the vibration amplitude and $`\omega =2\pi \nu `$, where $`\nu `$ is the frequency. When a particle collides with the plate, it experiences the same force as if it were to collide with another particle moving with the plate velocity. The simulation was carried out in a finite box with a height chosen to be large enough so that no collisions can occur with the box ceiling. Periodic boundary conditions were implemented horizontally.
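The pairwise contact law just described can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' code: the parameter values are taken from the text, and the sign convention for the relative velocity (here $`𝐯_{ij}=𝐯_i-𝐯_j`$) is chosen so that the dissipative terms damp the relative motion.

```python
import numpy as np

# Illustrative sketch of the contact-force model described above:
# restoring force + normal dissipation + tangential friction.
# Parameter values follow the text; this is a reconstruction, not the
# authors' production code.
R = 0.595e-3      # particle radius [m]
Y = 1.0e7         # "Young's modulus" parameter [1/s^2]
GAMMA_N = 200.0   # normal dissipation coefficient [1/s]
GAMMA_S = 100.0   # tangential (shear) dissipation coefficient [1/s]
M = 1.0           # particle mass (set to unity in the text)

def contact_force(r_i, r_j, v_i, v_j):
    """Contact force on particle i due to an overlapping particle j."""
    r_ij = r_j - r_i
    dist = np.linalg.norm(r_ij)
    if dist >= 2.0 * R:                    # no overlap -> no force
        return np.zeros(3)
    n_hat = r_ij / dist                    # unit vector from i toward j
    overlap = 2.0 * R - dist               # > 0 for interpenetrating spheres
    v_ij = v_i - v_j                       # relative velocity (sign convention ours)
    v_n = np.dot(v_ij, n_hat) * n_hat      # normal component
    v_t = v_ij - v_n                       # tangential component
    f_rest = -Y * M * overlap * n_hat      # repulsion, pushes i away from j
    f_diss = -GAMMA_N * M * v_n            # inelastic normal damping
    f_shear = -GAMMA_S * M * v_t           # tangential friction
    return f_rest + f_diss + f_shear
```

Summing this force over all overlapping neighbours, plus gravity, would give the right-hand side of the translational equation of motion above.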
Overall, the simulation parameters were chosen to be as close as possible to the experimental values : $`R=0.595mm`$, $`g=9.8m/s^2`$, $`Y=10^7/s^2`$, $`\nu =70Hz`$, $`N=2000`$, $`\rho =0.463`$, $`\gamma _s=100/s`$, and $`\gamma _n=200/s`$. The above normal dissipation parameter leads to a restitution coefficient of $`r=0.95`$. Without loss of generality, we set the particle mass to unity, $`m=1`$. We verified that the results reported in this paper were independent of the value of most of these parameters, as well as of the nature of the boundary conditions. We are primarily interested in statistical properties of the system in the steady state, and especially their dependence on the vibration strength, which can be quantified by the dimensionless acceleration $`\mathrm{\Gamma }=A\omega ^2/g`$. The quantity $`\mathrm{\Gamma }`$ can be tuned by varying either $`\omega `$ or $`A`$. We chose to fix the frequency and vary the vibration amplitude. This method should be valid as long as the time scale underlying the variation is larger than the system’s intrinsic relaxation time scales. Each of our simulations was initially run for $`10^3`$ oscillation cycles at a constant amplitude $`A_0`$. Then the amplitude was slowly reduced in a linear fashion according to $`A/A_0=1-t/\tau `$, with the decay time $`\tau \simeq 10^3s`$ (or alternatively $`10^5`$ cycles). Throughout this paper we report measurements of average quantities such as the temperature and the velocity distribution. These were obtained by averaging over 100 consecutive oscillation cycles. Qualitatively, we observe that above a critical vibration intensity, $`\mathrm{\Gamma }>\mathrm{\Gamma }_c`$, particles are in a gas phase, in which their motion is random, as seen in Fig. 1. When $`\mathrm{\Gamma }<\mathrm{\Gamma }_c`$, hexagonally ordered clusters form in addition to particles in the gas phase, as shown in a snapshot of the system.
Furthermore, particles inside these hexagonal clusters are stationary, while particles outside the clusters move appreciably. This behavior is rather robust, as it is independent of many of the underlying parameters, including the dissipation parameters. Nevertheless, the simulations indicate that the critical acceleration $`\mathrm{\Gamma }_c`$ is primarily determined by $`\gamma _n`$, while the stability of the clusters is governed by $`\gamma _s`$. Additionally, the relaxation time scale $`\tau `$ had to be sufficiently large for the system to be able to fully relax any transient behavior. Experimental measurements of the horizontal temperature, defined by $`T_H=\langle v_x^2+v_y^2\rangle `$, indicate a linear dependence in the vicinity of the critical point $$T_H\propto (\mathrm{\Gamma }-\mathrm{\Gamma }_c).$$ (1) Our simulations confirm this linear behavior, as shown in Fig. 2 and Fig. 3. Furthermore, the horizontal energy practically vanishes below the transition point, as this quantity decreases by 3 orders of magnitude for $`\mathrm{\Gamma }<\mathrm{\Gamma }_c`$. This is reminiscent of a sharp phase transition and it is therefore sensible to view $`\mathrm{\Gamma }_c`$ as a critical point. This linear behavior can be used to estimate the critical point, and a linear least-squares fit yields $`\mathrm{\Gamma }_c=0.763`$, in good agreement with the experimental observation, $`\mathrm{\Gamma }_c=0.77`$ . We conclude that the near critical behavior observed numerically agrees both qualitatively and quantitatively with the experimental observations. Simulations also allow measurements of the vertical velocities. We find that the vertical energy $`T_V=2\langle v_z^2\rangle `$ decreases sharply near the transition point as well. However, in contrast with the horizontal energy, it does not vanish below the transition point. Therefore, the velocities develop a strong anisotropy, as the vertical and the horizontal velocities behave quite differently.
This reflects the fact that the system is far from equilibrium. More detailed velocity statistics are provided by the velocity distribution. We observe that at high accelerations the particle motion is nearly isotropic, i.e., the ratio of horizontal to vertical energy is of order unity. Indeed, this ratio approaches a value of roughly 0.5 (see Fig. 2). In addition, the velocity distribution is Gaussian (see Fig. 4) and the system is practically three-dimensional. However, as the acceleration is decreased, the two-dimensional geometry becomes more and more pronounced. The vertical motion dominates over the horizontal one, and the horizontal velocity distribution departs strongly from a Gaussian distribution. Near the phase transition point the large velocity tail becomes nearly exponential (see Fig. 4). Below the transition point, a significant fraction of the particles have a nearly vanishing horizontal velocity, and the distribution of horizontal velocities is strongly enhanced near the origin. The deviation from the Gaussian behavior can be quantified using the kurtosis, defined via the fourth and second moments of the distribution, $`\kappa =\langle v^4\rangle /\langle v^2\rangle ^2`$. Indeed, in the limit of high vibration intensities, $`\mathrm{\Gamma }\gg 1`$, this parameter approaches the Gaussian value $`\kappa \simeq 3`$. On the other hand, near the phase transition point, i.e., as $`\mathrm{\Gamma }\to \mathrm{\Gamma }_c`$, this parameter approaches the exponential value $`\kappa \simeq 6`$. It proves useful to examine how the kurtosis depends on $`T_H/T_V`$, the ratio between the horizontal and the vertical energies. As shown in the inset to Fig. 4, the smaller the ratio (or equivalently, the larger the anisotropy), the larger the deviation from a Gaussian distribution. Hence, whether the velocity distribution is Gaussian or not reflects the degree of anisotropy in the particle motion.
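The two limiting values quoted here are easy to verify numerically: a Gaussian gives $`\kappa =3`$, while a two-sided exponential (Laplace) distribution gives $`\kappa =6`$. A minimal check on sampled distributions (not on simulation data):

```python
import numpy as np

# Numerical check of the two limiting kurtosis values quoted in the
# text: kappa = <v^4>/<v^2>^2 is 3 for a Gaussian and 6 for a
# two-sided exponential (Laplace) distribution.
def kurtosis(v):
    return np.mean(v**4) / np.mean(v**2)**2

rng = np.random.default_rng(0)
n = 2_000_000
kappa_gauss = kurtosis(rng.normal(size=n))   # close to 3
kappa_exp = kurtosis(rng.laplace(size=n))    # close to 6
```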
Non-Gaussian distributions have been observed experimentally, theoretically , and numerically in two- and three-dimensional geometries. The distribution of vertical velocities can be used to distinguish between cluster and gas particles. Indeed, the vertical velocity distribution changes its character from a unimodal to a bimodal distribution in the cluster-gas phase (see Fig. 5), with the low velocity peak corresponding to the cluster particles, and the high velocity peak corresponding to the gas particles. Interestingly, the location of the high velocity peak does not change as $`\mathrm{\Gamma }`$ decreases. In fact, this energy can be understood by considering the first excited state energy of a single particle bouncing on a vibrating plate, which can be calculated to be $$E_1=\frac{1}{6}\left(\frac{\pi g}{\omega }\right)^2,$$ (2) or in our case $`E_1=8.16cm^2/s^2`$. We verified this result by simulating the motion of a single particle on a vibrating plate. Interestingly, the energy of the gas particles falls within less than 10% of this value. Therefore, below the phase transition point, particles residing in clusters are in the ground state, i.e., they are moving with the plate. Furthermore, the rest of the particles, constituting the gas phase, are in the first excited state of an isolated ball on a vibrating surface. This indicates that particles in the gas phase are essentially noninteracting. The vertical velocity distribution can be used to study the fraction of particles in each phase by simply integrating the area under the respective energy peaks. As shown in Fig. 6, $`P_0`$, the fraction of particles in the gas phase, is almost independent of the vibration intensity below the transition point. As the transition point is approached this fraction rapidly decreases and ultimately vanishes for $`\mathrm{\Gamma }\to \mathrm{\Gamma }_c`$.
Although this quantity does not undergo a sharp transition, its behavior is consistent with our previous estimate of the transition point from the horizontal energy behavior. In summary, we have studied the dynamics of vibrated granular monolayers using molecular dynamics simulations. We find that the transition between the gas and the cluster-gas phases can be regarded as a sharp phase transition, and that the horizontal energy decreases linearly near the transition point. We have shown that at high vibration strengths, the particle motion is isotropic, and the velocity distributions are Gaussian. The deviations from a Gaussian distribution were found to be closely related to the degree of anisotropy in the motion. We have also shown that below the phase transition, the velocity distribution is bimodal. The cluster particles move with the plate, while the gas particles behave as a noninteracting gas, as their energy agrees with the first excited state of an isolated vibrated particle. Our results agree both qualitatively and quantitatively with the experimental data. This shows that the underlying phenomena can be explained solely by the simulated interactions, i.e., contact force interactions, no attractive forces, and dissipative collisions. Other mechanisms possibly present in the experiment, such as electrostatic forces, are therefore not responsible for the phase transition. It will be interesting to use molecular dynamics simulations to determine the full phase diagram of this system by varying the density and the driving frequency.
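As a closing consistency check, the gas-particle energy quoted above follows directly from Eq. (2) with the driving frequency used in the simulations ($`\nu =70`$ Hz); note that $`\pi g/\omega =g/2\nu `$, so the numbers can be reproduced in a few lines:

```python
import math

# Evaluate Eq. (2), E_1 = (1/6)(pi*g/omega)^2, for the simulation's
# driving frequency nu = 70 Hz (so pi*g/omega = g/(2*nu) = 0.07 m/s).
g = 9.8                               # m/s^2
nu = 70.0                             # Hz
omega = 2.0 * math.pi * nu
E1 = (math.pi * g / omega)**2 / 6.0   # in m^2/s^2
E1_cm = E1 * 1.0e4                    # about 8.17 cm^2/s^2, as quoted
```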
# Figure 1: 1(a), 1(b), and 1(c) show the speckle image, PSF, and deconvolved image of HR 5138, respectively. The numbers on the axes denote pixel numbers, with each pixel equal to 0.02 arc sec. BLIND ITERATIVE DECONVOLUTION OF BINARY STAR IMAGES S.K. Saha and P. Venkatakrishnan Indian Institute of Astrophysics, Bangalore 560034 Abstract The technique of Blind Iterative Deconvolution (BID) was used to remove the atmospherically induced point spread function (PSF) from short exposure images of two binary stars, HR 5138 and HR 5747, obtained at the Cassegrain focus of the 2.34 meter Vainu Bappu Telescope (VBT), situated at the Vainu Bappu Observatory (VBO), Kavalur. The position angles and separations of the binary components were seen to be consistent with results of the auto-correlation technique, while the Fourier phases of the reconstructed images were consistent with published observations of the binary orbits. Keywords: Speckle Imaging, Image Reconstruction, Binary stars. 1. Introduction Atmospheric turbulence degrades the images obtained by ground based astronomical telescopes. Schemes like speckle interferometry (Labeyrie, 1970), the Knox-Thompson algorithm (Knox and Thompson, 1974), shift and add (Lynds et al., 1976), and triple correlation (Lohmann et al., 1983) have been successfully employed to restore the degraded images. All these schemes depend on the statistical treatment of a large number of images. Often, it may not be possible to record a large number of images within the time interval over which the statistics of the atmospheric turbulence remains stationary. In such cases, where only a few images can be used, there are a number of schemes to restore the image using some prior information about the image. The maximum entropy method (Jaynes, 1982), the CLEAN algorithm (Hogbom, 1974), and BID (Ayers and Dainty, 1988) are examples of such schemes. In this paper, we employ a version of BID developed by P.
Nisenson (Nisenson, 1991), on degraded images of two binaries, HR 5138 ($`m_v`$=5.57, $`\mathrm{\Delta }`$m=0.2) and HR 5747 ($`m_v`$=3.68, $`\mathrm{\Delta }`$m=1.5), obtained at the 2.34 meter VBT at Kavalur. 2. Observations The 2.34 meter VBT has two accessible foci for backend instrumentation: a prime focus (f/3.25 beam) and a Cassegrain focus (f/13 beam). The latter was used for the observations described in this paper. The Cassegrain focus has an image scale of 6.7 arcseconds per mm. This was further magnified to about 1.21 arcseconds per mm using a Barlow lens arrangement (Saha et al., 1987; Chinnappan et al., 1991). This enlarged image was recorded through a 5 nm filter centred on H$`\alpha `$ using an EEV intensified CCD camera, which provides a standard CCIR video output of the recorded scene. The interface between the intensifier screen and the CCD chip is a fibre-optic bundle which reduces the image size by a factor of 1.69. A DT-2851 frame grabber card digitises the video signal. This digitiser resamples the pixels of each row (385 CCD columns to 512 digitized samples) and introduces a net reduction in the row direction by a factor of 1.27. The frame grabber can store up to two images in its onboard memory. These images are then written onto the hard disc of a personal computer. The images were stored on floppy diskettes and later analysed on a Pentium. The observing conditions were fair, with an average seeing of about 2 arcseconds, during the nights of 16/17 March 1990. 3. Data Processing The blind iterative deconvolution technique is described in detail in the literature (Bates and McDonnell, 1986). Essentially, it consists of using very limited information about the image, such as positivity and image size, to iteratively arrive at a deconvolved image of the object, starting from a blind guess of either the object or the convolving function. The implementation of the particular version of BID used by us is described in Nisenson (1991).
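One cycle of this scheme, Wiener deconvolution followed by image-space constraints, can be sketched as follows. This is a simplified illustration of the procedure, not Nisenson's code: the constant noise power and the centered circular support mask are our simplifying assumptions.

```python
import numpy as np

# Schematic single cycle of blind iterative deconvolution:
# Wiener-deconvolve the frame with the current PSF estimate, then
# impose positivity and a finite circular support on the object.

def wiener_deconvolve(image, psf, noise_power=1e-3):
    """O = C P* / (P P* + N N*) in the Fourier domain (constant N N*)."""
    C = np.fft.fft2(image)
    P = np.fft.fft2(psf, s=image.shape)
    O = C * np.conj(P) / (P * np.conj(P) + noise_power)
    return np.real(np.fft.ifft2(O))

def apply_constraints(obj, support_radius):
    """Zero the object outside a circular support and clip negatives."""
    ny, nx = obj.shape
    y, x = np.ogrid[:ny, :nx]
    support = (x - nx // 2)**2 + (y - ny // 2)**2 <= support_radius**2
    out = np.where(support, obj, 0.0)
    return np.clip(out, 0.0, None)

def bid_iteration(image, psf_est, support_radius):
    """Object update followed by a PSF update (one BID cycle)."""
    obj = apply_constraints(wiener_deconvolve(image, psf_est), support_radius)
    psf = np.clip(wiener_deconvolve(image, obj), 0.0, None)
    psf /= psf.sum() + 1e-30      # keep the PSF estimate normalized
    return obj, psf
```

Repeating bid_iteration until both estimates stop changing mimics the convergence problem discussed later in the paper.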
The algorithm has the degraded image $`c(x,y)`$ as the operand. An initial estimate of the point spread function (PSF) $`p(x,y)`$ has to be provided. The degraded image is deconvolved from the guess PSF by Wiener filtering, which is an operation of multiplying a suitable Wiener filter (constructed from the Fourier transform $`P(u,v)`$ of the PSF) with the Fourier transform $`C(u,v)`$ of the degraded image as follows: $`O(u,v)=C(u,v)\frac{P^{\ast }(u,v)}{P(u,v)P^{\ast }(u,v)+N(u,v)N^{\ast }(u,v)}`$ where $`O`$ is the Fourier transform of the deconvolved image and $`N`$ is the noise spectrum. This result $`O`$ is transformed to image space, the negatives in the image are set to zero, and the positives outside a prescribed domain (called the object support) are set to zero. The average of the negative intensities within the support is subtracted from all pixels. The process is repeated until the negative intensities decrease below the noise. A new estimate of the PSF is next obtained by Wiener filtering the original image $`c(x,y)`$ with a filter constructed from the constrained object $`o(x,y)`$. This completes one iteration. This entire process is repeated until the derived values of $`o(x,y)`$ and $`p(x,y)`$ converge to sensible solutions. 4. Results The results for HR 5138 and HR 5747 were arrived at after 350 iterations. Since the intensities of the stars were different, the values of the Wiener filter parameters also had to be chosen accordingly. Figures 1(a), 1(b), 1(c) show the speckle image, PSF, and deconvolved image of HR 5138 respectively. Figures 2(a), 2(b), 2(c) show the same for HR 5747. The companions of HR 5138 and HR 5747 are separated by 0.27 and 0.20 arcseconds respectively. Measurements of position angle (235 and 110 degrees for HR 5138 and HR 5747 respectively) and separation were compatible with the values published in the CHARA catalogue (McAlister and Hartkopf, 1988). Horch (1994), too, found similar results.
The magnitude differences for the reconstructed objects were 0.04 and 1.65 for HR 5138 and HR 5747, respectively. Although these values compare quite well with those published in the Bright Star Catalogue (Hoffleit and Jaschek, 1982), one needs to treat many more objects before one can characterise the photometric quality of the reconstructions. 5. Discussion and Conclusions The chief problem of the present scheme of BID is convergence. It is indeed an art to decide when to stop the iterations. The results are also vulnerable to the choice of various parameters, such as the support radius and the level of high frequency suppression during the Wiener filtering. The prior knowledge of the object available through the autocorrelation of the degraded image was found to be very useful for specifying the object support radius. In spite of the care taken in the choice of the support radius, the PSF for each star contains residual signatures of the binary sources. Although suggestions have been made for improving the convergence (cf. Jefferies and Christou, 1993), these improved algorithms require more than a single speckle frame. For the present, it is noteworthy that useful reconstructions are possible using single speckle frames. Questions regarding their dynamic range and linearity can be answered only after examining a wide range of reconstructions. The iterative nature of the algorithm does not lend itself to an explicit estimation of these parameters from a limited sample. New developments in camera electronics promise the capability to acquire several images within a short time. This will also reduce the level of artifacts. The chief achievement of this paper is a demonstration of the scientific potential of BID for resolving bright objects acquired using simple apparatus. Acknowledgment The authors are grateful to Dr. P. Nisenson of the Center for Astrophysics, Cambridge, USA, for the BID code as well as for useful discussions. References Ayers G.R.
& Dainty J.C., 1988, Optics Letters, 13, 547. Bates R.H.T. & McDonnell M.J., 1986, “Image Restoration and Reconstruction”, Oxford Engineering Science 16, Clarendon Press, Oxford. Chinnappan V., Saha S.K. & Faseehana, 1991, Kod. Obs. Bull., 11, 87. Hoffleit D. & Jaschek C., 1982, “The Bright Star Catalogue”, Yale University Observatory. Hogbom J., 1974, Ap.J. Suppl., 15, 417. Horch E. P., 1994, Ph.D. Thesis, Stanford University. Jaynes E.T., 1982, Proc. IEEE, 70, 939. Jefferies S.M. & Christou J.C., 1993, ApJ, 415, 862. Knox K.T. & Thompson B.J., 1974, Ap.J. Lett., 193, L45. Labeyrie A., 1970, A&A, 6, 85. Lohmann A.W., Weigelt G., Wirnitzer B., 1983, Appl. Opt., 22, 4028. Lynds C.R., Worden S.P., Harvey J.W., 1976, Ap.J., 207, 174. McAlister H. A. & Hartkopf W. I., 1988, CHARA Contribution No. 2. Nisenson P., 1991, in Proc. ESO-NOAO Conf. on High Resolution Imaging by Interferometry, ed. J. M. Beckers & F. Merkle, p. 299. Saha S.K., Venkatakrishnan P., Jayarajan A.P. & Jayavel N., 1987, Current Science, 56, 985.
# The Large Zenith Telescope: Redshifts and Physical Parameters of Faint Objects from a Principal Component Analysis ## 1. The Large Zenith Telescope Survey The Large Zenith Telescope (LZT) is a 6-m liquid mirror telescope (Hickson 1998; Borra et al. 1992; Cabanac et al. 1998). It is a transit instrument dedicated to a survey of faint objects (Table 1). The final output of the survey will be a catalog giving the photometric fluxes in about 40 narrow-band filters (3700Å-1$`\mu `$ range), as well as basic morphological information, for $`10^6`$ objects. Here we investigate the potential of Principal Component Analysis (PCA) to extract meaningful physical parameters from such a large dataset. ## 2. Principal Component Analysis and Mock Catalogs The application of the PCA to the classification of galaxies is described elsewhere (Murtagh & Heck 1987; Galaz & de Lapparent 1998 and references therein). After a PCA, the spectrophotometric energy distribution (SED) of each object can be approximated as $`S_{approx}=\alpha _1E_1+\alpha _2E_2+\alpha _3E_3+\mathrm{\cdots }`$, where $`S_{approx}`$ is the SED, $`\alpha _1,\alpha _2,\alpha _3,\mathrm{\cdots }`$ are the eigenvalues, and $`E_1,E_2,E_3,\mathrm{\cdots }`$ are the eigenvectors. We build realistic mock catalogs of LZT SEDs to a limiting magnitude $`R_{mag}<23`$ for stars (number counts and color distribution from the Bahcall-Soneira model, ARAA 1986), galaxies (Bruzual & Charlot GISSEL and Fioc & Rocca-Volmerange PEGASE spectra, luminosity functions by Autofib for $`0<z_{shift}<0.5`$, and the CFRS luminosity function for $`0.5<z_{shift}<2`$), and QSOs (number counts and counts vs $`z_{shift}`$ from the 2dF QSO Survey). ## 3. Results: morphological discrimination and redshift (i) Most of the information in the LZT spectral energy distributions is contained in the continua. Only strong emission lines are detected ($`W_{line}>50\AA `$). (ii) The PCA is robust in discriminating different species at a resolution of about 40, and at low $`S/N`$.
(iii) Redshifts of galaxies can be derived for $`z_{shift}<1.5`$. Strong degeneracies occur at higher redshifts between high $`z_{shift}`$ blue galaxies and local red galaxies because the H & K (Ca II) break exits the observed spectral range. ## References Borra, E.F., Content, R., & Girard, L. 1992, ApJ, 393, 829 Cabanac, R.A., Borra E.F., & Beauchemin, M. 1998, ApJ, 509, 309 Murtagh & Heck, 1987, Multivariate Data Analysis (Dordrecht: Reidel) Hickson, P., et al. 1998, Proc. SPIE, 3356, Kona Galaz, G., & de Lapparent, V. 1998, A&A, 332, 459
# Extended Warm Inflation ## I Introduction Since the advent of inflationary models, the roles played by scalar fields at different epochs of the cosmic history have been extensively investigated. Such fields have been invoked in a variety of disparate scenarios with rather different goals. Some known examples are: (i) the inflaton, the field that drives inflation , (ii) the axion, a cold dark matter candidate , and (iii) the dilaton, the field appearing in the low energy string action, which addresses the same issues as inflation and may provide a solution to the singularity problem . More recently, inspired by the existing observational data and theoretical speculations, some authors have also suggested scalar fields (sometimes called “quintessence”) as the sought-after non-baryonic dark matter. These “remnant” fields might have important consequences for the formation of the large scale structure, and may also be responsible for the present-day accelerated phase , as indicated by the latest Type Ia supernovae observations . Despite their widespread use in the cosmological framework, the physical situations in which the scalar fields are commonly considered are rather particular. For example, in the new inflation case there is not a fundamental justification for “turning off” all the possible couplings during the slow rollover phase and doing the opposite just at the onset of the thermalisation phase of reheating (as well as afterwards, if some potential energy is still available). This can be achieved only by considering very strict initial conditions, which weakens the viability of such a scenario. But this is not an isolated case. Neglecting possible thermal couplings between scalar fields and the other constituents of the universe is a general feature assumed in nearly all scalar field models presented in the literature. One important exception is the explosive reheating period required for the success of any version of isentropic inflation.
Along this process, as the field coherently oscillates about the minimum of the potential, its energy is drained to the matter and radiation components. Either in its earlier version or in its modern version based on parametric resonance (sometimes called preheating), the reheating mechanism is a relatively fast process, and virtually all the entropy of the present universe may be generated in this way. This is certainly not the most general case. In principle, a permanent or temporary coupling of the scalar field $`\varphi `$ with other fields might also lead to dissipative processes producing entropy at different eras of the cosmic evolution. It is expected that progress in the non-equilibrium statistics of quantum fields will provide the necessary theoretical framework for discussing dissipation in more general cases (see for example and references therein). Another possibility is the so-called “instant preheating” . In this process, the inflaton decays continuously into another scalar field as it rolls down the potential. This second field is very short-lived and rapidly decays into fermions, thus furnishing a sustained entropy generation, even for quintessence-like models. Although a justification from first principles for dissipative effects has not been firmly achieved, such effects should not be ruled out on grounds of simplicity alone. Much work can be done on phenomenological grounds as, for instance, by applying nonequilibrium thermodynamic techniques to the problem or even studying particular models with dissipation. An interesting example of the latter case is the warm inflationary picture recently proposed . Like in new inflation, a phase transition driving the universe to an inflationary period dominated by the scalar field potential is assumed.
However, a standard phenomenological friction-like term $`\mathrm{\Gamma }\dot{\varphi }^2`$ is inserted into the scalar field equation of motion to represent a continuous energy transfer from $`\varphi `$ to the radiation field. This persistent thermal contact during inflation is so finely adjusted that the scalar field evolves all the time in a damped regime, generating an isothermal expansion. As a consequence, the subsequent reheating mechanism is not needed, and thermal fluctuations produce the primordial spectrum of density perturbations (see also reference ). Warm inflation was originally formulated in a phenomenological setting, but some attempts at a fundamental justification have also been presented . Furthermore, a dynamical systems analysis showed that a smooth transition from an inflationary to a radiation phase is attained for many values of the friction parameter, thereby showing that the warm scenario may be a workable variant of inflation. As it appears, its unique negative aspect is closely related to a possible thermodynamic fine-tuning, because an isothermal evolution of the radiative component is assumed from the very beginning in some versions of warm inflation (for comments on this issue, see ). In other words, the thermal coupling acting during inflation is so powerful and finely adjusted that the scalar field decays ensuring a constant temperature even considering the exponential expansion of the universe. In brief, the aim of this paper is to relax this hypothesis. However, instead of proposing another particular inflationary model, we discuss how the differences between the isentropic and the isothermal inflationary scenarios can be depicted in a convenient parameter space. As we shall see, these models are only two extreme cases of an infinite two-parametric family. Hopefully, this unified view may indicate ways to a consistent phenomenological treatment of these models based on the methods of nonequilibrium thermodynamics.
We also discuss how the standard slow roll conditions are modified due to the scalar field decay. ## II Scalar Field with Dissipation We will limit our analysis to homogeneous and isotropic universes, described by the flat Friedmann-Robertson-Walker (FRW) line element $$ds^2=dt^2-a^2(t)\left(dr^2+r^2d\theta ^2+r^2\mathrm{sin}^2\theta d\varphi ^2\right),$$ (1) where $`a(t)`$ is the scale factor (in our units $`\mathrm{\hbar }=c=1`$). The source of this spacetime is a mixture of a real and minimally coupled scalar field interchanging energy with a perfect fluid representing all the other fields. The Lagrangian density for the scalar field is $$\mathcal{L}=\frac{1}{2}\partial ^\mu \varphi \partial _\mu \varphi -V(\varphi )+\mathcal{L}_{\mathrm{int}},$$ (2) where the interaction is implied by the term $`\mathcal{L}_{\mathrm{int}}`$ and $`V(\varphi )`$ is the scalar field potential. This field has the stress-energy tensor given by $$T_\varphi ^{\mu \nu }=\partial ^\mu \varphi \partial ^\nu \varphi -g^{\mu \nu }\mathcal{L}.$$ (3) The other component of the mixture is a simple fluid with energy-stress tensor $$T_m^{\mu \nu }=(\rho +p)u^\mu u^\nu -pg^{\mu \nu },$$ (4) where energy density and pressure are given respectively by $`\rho `$ and $`p`$.
The total energy stress tensor of the system $`T_t^{\mu \nu }=T_\varphi ^{\mu \nu }+T_m^{\mu \nu }`$ obeys Einstein’s field equations, from which we obtain the equations of motion $$3H^2=\frac{8\pi }{m_{\mathrm{pl}}^2}\left(\frac{\dot{\varphi }^2}{2}+V(\varphi )+\rho \right),$$ (5) $$3H^2+2\dot{H}=-\frac{8\pi }{m_{\mathrm{pl}}^2}\left(\frac{\dot{\varphi }^2}{2}-V(\varphi )+p\right),$$ (6) where a dot means time derivative, $`H=\dot{a}/a`$ is the Hubble parameter, $`m_{\mathrm{pl}}^2=1/G`$ is the Planck mass, and we have used that the scalar field energy density and pressure are, respectively $`\rho _\varphi ={\displaystyle \frac{1}{2}}\dot{\varphi }^2+V(\varphi )`$ (7) $`p_\varphi ={\displaystyle \frac{1}{2}}\dot{\varphi }^2-V(\varphi ).`$ (8) Now, assuming that the perfect fluid complies with the “gamma law” equation of state, $$p=(\gamma -1)\rho ,$$ (9) the energy conservation law for this interacting selfgravitating mixture can be cast in the form $$\dot{\varphi }(\ddot{\varphi }+3H\dot{\varphi }+V^{\prime }(\varphi ))=-\dot{\rho }-3\gamma H\rho ,$$ (10) where the prime denotes derivative with respect to the field $`\varphi `$. In order to decouple the two sides of the above equation and allow for the decay of the scalar field into ordinary matter and radiation, we will assume the usual phenomenological friction-like term (3$`\mathrm{\Gamma }\dot{\varphi }^2`$), thereby splitting (10) in two equations $$\ddot{\varphi }+3H\dot{\varphi }+V^{\prime }(\varphi )=-3\mathrm{\Gamma }\dot{\varphi }$$ (11) and $$\dot{\rho }+3\gamma H\rho =3\mathrm{\Gamma }\dot{\varphi }^2,$$ (12) where the dissipative coefficient $`\mathrm{\Gamma }`$ is the decay width of the scalar field and the factor 3 has been introduced for mathematical convenience. For models endowed with an isentropic inflationary stage, we know that $`\rho \simeq 0`$ at the onset of the coherent oscillations, that is, when the field $`\varphi `$ oscillates about the minimum of its potential $`V(\varphi )`$.
In these cases, the term $`\mathrm{\Gamma }\dot{\varphi }^2`$ is generally inefficient for describing the first stages of the reheating process and is not acting during exponential inflation. However, in the presence of a nonnegligible thermal component, a friction term can be justified under special conditions during a de Sitter regime . Applications of this theory lead to complicated but promising warm inflationary models, with an unreasonably large quantity of light fields coupled to the inflaton , although some of these models might be physically interpreted in terms of string theory. In what follows, the validity of this friction term, representing the thermal contact between $`\varphi `$ and the other fields, will be assumed for any epoch. We also consider the “thermal decay width” $`\mathrm{\Gamma }`$ as a generic function of the temperature, or equivalently, of the cosmic time. Let us now define the following dimensionless parameters: $$x\equiv \frac{\dot{\varphi }^2}{\dot{\varphi }^2+\gamma \rho },$$ (13) and $$\alpha \equiv \frac{3H\dot{\varphi }^2}{3H\dot{\varphi }^2+|\dot{\rho }|}.$$ (14) The adoption of these parameters is somewhat natural and may be understood as follows. The warm picture departs quantitatively from new inflation because during the inflationary stage the energy density of the material component is not negligible. More precisely, it is not negligible only in comparison to the scalar field kinetic term, since the potential $`V(\varphi )`$ must be strongly dominant in order to generate exponential expansion. This means that one needs to compare the kinetic term $`\dot{\varphi }^2=\rho _\varphi +p_\varphi `$ with a quantity involving the energy of the material component written in a convenient form.
In this way, recalling the energy conservation law, $`x`$ is defined by the ratio between two “inertial mass” terms which redshift away due to the universe expansion. Additionally, it is not enough to compare the values of $`\dot{\varphi }^2`$ and $`\gamma \rho `$, since we also need to quantify how they evolve in the course of the expansion. This explains the introduction of the parameter $`\alpha `$ involving $`3H\dot{\varphi }^2`$ and $`\dot{\rho }`$. The presence of $`|\dot{\rho }|`$ in this parameter is also reasonable. It comes into play because we are adopting the original isothermal scenario proposed by Berera as a limiting case. Indeed, it seems to be an extreme theoretical situation where the cooling rate of the radiation due to expansion is fully compensated by the transfer of particles from the scalar field to the ordinary constituents. It is implicit in the definition of $`\alpha `$ that $`\dot{\rho }\le 0`$, since a reheating phase is unnecessary in warm inflation. However, we should point out that a positive $`\dot{\rho }`$ is possible and has been investigated . The convenience of these parameters is apparent since by construction they are dimensionless and constrained to the intervals $`0\le x\le 1`$ and $`0\le \alpha \le 1`$. In particular, for isothermal inflation we have $`\alpha =1`$ and $`x\to 0`$, because $`\dot{\rho }=0`$ and $`\gamma \rho \gg \dot{\varphi }^2`$. For isentropic inflation one has $`x=1`$ and $`\alpha =1`$, since $`\gamma \rho \ll \dot{\varphi }^2`$ and $`|\dot{\rho }|\ll 3H\dot{\varphi }^2`$ (the radiation becomes exponentially negligible). Similarly, noninteracting quintessence-like models lie at some intermediary value of $`\alpha =x`$ between 0 and 1. When both parameters are equal to zero, we have the standard model (with a possible cosmological constant). Therefore, the most common solutions, with or without thermal couplings, can be portrayed in this bidimensional parameter space (see Fig. 1). 
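The limiting cases just described follow directly from the definitions (13) and (14); a minimal sketch (the numerical inputs below are arbitrary illustrative values):

```python
def parameter_point(phidot2, gamma_rho, H, rhodot):
    """Locate a model in the (x, alpha) plane of Eqs. (13)-(14)."""
    x = phidot2 / (phidot2 + gamma_rho)
    alpha = 3.0 * H * phidot2 / (3.0 * H * phidot2 + abs(rhodot))
    return x, alpha

# isothermal limit: rhodot = 0 and gamma*rho >> phidot^2  ->  alpha = 1, x -> 0
x_iso, a_iso = parameter_point(phidot2=1e-6, gamma_rho=1.0, H=1.0, rhodot=0.0)

# isentropic limit: gamma*rho << phidot^2, |rhodot| << 3 H phidot^2  ->  x = alpha = 1
x_ise, a_ise = parameter_point(phidot2=1.0, gamma_rho=1e-6, H=1.0, rhodot=-1e-6)
```

A noninteracting (quintessence-like) model, for which the conservation law gives $`|\dot{\rho }|=3\gamma H\rho `$, lands on the diagonal $`\alpha =x`$ of this plane.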
## III Inflation, Dissipation and Slow Roll Conditions A fundamental ingredient of many inflationary variants is the period of “slow roll” evolution of the scalar field, during which the inflaton evolves so slowly that its kinetic term remains always much smaller than $`V(\varphi )`$. A full description of the two-component mixture requires the introduction of a dynamical parameter which is a function of the total energy density of the mixture. In order to discuss slow roll dynamical conditions, a suitable choice of such a parameter is $`\gamma _{\mathrm{eff}}`$, derived from manipulations on Eqs. (5) and (6) $$\gamma _{\mathrm{eff}}=-\frac{2\dot{H}}{3H^2}=\frac{\dot{\varphi }^2+\gamma \rho }{\rho _t}.$$ (15) Slow roll conditions are imposed to assure (nearly) de Sitter solutions for an amount of time, say, $`\mathrm{\Delta }t_I`$, which must be long enough to solve the problems of the hot big bang. By inspection of Eqs. (5), (6) and (15) one obtains that the condition for having a de Sitter universe is $$3H^2\gg 2|\dot{H}|\Longleftrightarrow \gamma _{\mathrm{eff}}\simeq 0\Longleftrightarrow V(\varphi )\gg \rho +\frac{\dot{\varphi }^2}{2}.$$ (16) Additionally, a sufficient amount of inflation requires the above relations to be valid along the interval $`\mathrm{\Delta }t_I`$. This usually implies constraints on the slope and curvature of $`V(\varphi )`$. However, as we shall see, the coupling between $`\varphi `$ and the $`\gamma `$-fluid during inflation may relax these constraints for a large range of intermediary situations between the isentropic and warm scenarios. In our case, these conditions are represented by (see also ) $$H^2\simeq \frac{8\pi }{3m_{\mathrm{pl}}^2}V(\varphi ),$$ (17) $$3(H+\mathrm{\Gamma })\dot{\varphi }\simeq -V^{\prime }(\varphi ).$$ (18) Notice that the discussion that follows is independent of constraints on the sign of $`\dot{\rho }`$, so that we are not limited to the definition of $`\alpha `$. 
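The way a large $`\mathrm{\Gamma }/H`$ in (18) relaxes the flatness requirement can be seen with a one-line numerical estimate. The sketch below uses the standard flatness parameter $`(m_{\mathrm{pl}}^2/16\pi )(V^{\prime }/V)^2`$ suppressed by the dissipative factor $`(1+\mathrm{\Gamma }/H)^{-2}`$; the coefficients are illustrative, not a verbatim transcription of the constraints derived in the text.

```python
import math

def eps_eff(dlogV, gamma_over_H, m_pl=1.0):
    """Flatness parameter (m_pl^2/16 pi)(V'/V)^2, suppressed by (1 + Gamma/H)^-2."""
    return m_pl**2 / (16.0 * math.pi) * dlogV**2 / (1.0 + gamma_over_H)**2

steep = 10.0   # |V'/V| in Planck units: far too steep for ordinary slow roll
no_dissipation = eps_eff(steep, 0.0)        # comes out ~2: inflation fails
strong_dissipation = eps_eff(steep, 100.0)  # ~2e-4: slow roll restored by friction
```

The same steep potential that violates the usual slow-roll bound satisfies it comfortably once $`\mathrm{\Gamma }\gg H`$, which is the quantitative content of the relaxation discussed next.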
The approximated expressions (17) and (18) give us the following constraints on the shape of $`V(\varphi )`$ $$\left(\frac{V^{\prime }(\varphi )}{V(\varphi )}\right)^2\frac{m_{\mathrm{pl}}^2}{16\pi }\left(1+\frac{\mathrm{\Gamma }}{H}\right)^{-2}\sim \frac{\dot{\varphi }^2}{2V(\varphi )}\ll 1$$ (19) and $$\frac{V^{\prime \prime }(\varphi )}{V(\varphi )}\frac{m_{\mathrm{pl}}^2}{24\pi }\ll \left(1+\frac{\mathrm{\Gamma }}{H}\right)+\frac{\dot{\varphi }^2+\gamma \rho }{2V(\varphi )}-\frac{1}{3H}\frac{d}{dt}\left(\frac{\mathrm{\Gamma }}{H}\right),$$ (20) We recall that the standard slow roll conditions appearing in isentropic inflation are recovered for $`\mathrm{\Gamma }=0`$ (decoupled mixture). They imply that inflation is possible only if the potential is extremely flat, by the last inequality in (16). However, in the limit ($`\mathrm{\Gamma }\gg H`$), one may see that the first and second derivatives of the potential need not be small in order to guarantee the continued accelerated expansion. Thus, the extremely flat potential of usual inflation, which assures the slowing down of the scalar field for a long enough time, may be replaced by a large friction-like term with no extreme behavior of the derivatives of $`V(\varphi )`$. The field does not accelerate, with $`\dot{\varphi }^2`$ never becoming comparable to $`V(\varphi )`$, because the friction may be really large and not because the potential is unusually flat. This explains why this class of extended scenarios may provide a solution for the fine-tuning problems plaguing the new inflationary picture. ## IV A Toy Model The standard procedure in scalar field cosmological model building is to assume a specific potential $`V(\varphi )`$, motivated or not by particle physics. This potential is used to solve the $`\varphi `$ equation of motion (11). Subsequently, if there is a nonnegligible energy density stored in the other fields, the solution for $`\varphi `$ is inserted into (12), which is solved for the energy density $`\rho `$. 
As is widely known, even for a given coupling, many solutions are possible just by changing the potential $`V(\varphi )`$. To fix terminology, this method for generating solutions will be called the dynamic approach. On the other hand, the parameter space ($`\alpha ,x`$) can also be used as a guide in the search for new models. To show that a different (thermodynamic) route is also possible, we first rewrite (12) in terms of $`x`$ and $`\alpha `$. We have $$\frac{\mathrm{\Gamma }}{H}=\frac{1}{x}-\frac{1}{\alpha }.$$ (21) It is worth noticing that (21) does not include the potential, since it is representative only of the possible couplings, but not of the dynamics. In what follows we obtain a simple model example based on equations (13), (14) and (21). In principle, a more quantitative analysis will clarify the interesting features contained in the parameter space ($`\alpha ,x`$). In order to be as generic as possible, we also include the possibility of a decaying $`\varphi `$ during any era. Since $`x`$ and $`\alpha `$ do not depend on $`V(\varphi )`$, one may show that equation (12) for the material medium can be rewritten as $$\dot{\rho }=-3\gamma H\rho \left(1-\delta \right),$$ (22) where for short we have introduced the quantity $$\delta \equiv \frac{\alpha -x}{\alpha (1-x)}.$$ (23) It is apparent that the simplest toy model is the one where the parameters $`\alpha `$ and $`x`$ are constants. In this case, the solution of (22) is $$\rho (a)=\rho _\mathrm{I}\left(\frac{a}{a_\mathrm{I}}\right)^{-3\gamma (1-\delta )},$$ (24) where the integration constants, $`\rho _\mathrm{I}`$ and $`a_\mathrm{I}`$, are “initial” conditions just at the onset of the inflationary stage. It should be noticed that if $`\delta =1`$, or equivalently, if $`\alpha =1`$, the energy density $`\rho `$ remains constant ($`\rho =\rho _\mathrm{I}`$). In particular, if $`x\to 0`$, that is, if $`\gamma \rho \gg \dot{\varphi }^2`$, we recover the isothermal inflationary scenario proposed by Berera (see also Fig. 1). 
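Eqs. (23) and (24) are simple enough to evaluate directly; a minimal sketch:

```python
def delta(alpha, x):
    # Eq. (23)
    return (alpha - x) / (alpha * (1.0 - x))

def rho_of_a(a_over_aI, rho_I, gamma, alpha, x):
    # Eq. (24): rho = rho_I (a/a_I)^(-3 gamma (1 - delta))
    return rho_I * a_over_aI ** (-3.0 * gamma * (1.0 - delta(alpha, x)))
```

For $`\alpha =1`$ one gets $`\delta =1`$ for any $`x`$, so $`\rho `$ stays at $`\rho _\mathrm{I}`$ (isothermal behavior); for $`\alpha =x`$ one gets $`\delta =0`$ and the adiabatic dilution $`\rho \propto a^{-3\gamma }`$.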
This shows more clearly why models with values of $`0<x<1`$ may also evolve isothermally during inflation (exponential or power-law). The other extreme situation is obtained if $`\delta =0`$, that is, if $`\alpha =x`$, with the energy density scaling as $`\rho \propto a^{-3\gamma }`$. This behavior is typical of an adiabatic expansion (very weak coupling), and in Fig. 1 it corresponds to the adiabatic line. Note that even in these circumstances, where the parameters are constants, one may expect that an intermediary situation ($`0<\delta <1`$) is physically more probable. Under the above conditions, the energy density of the scalar field may also be readily obtained. First we rewrite (11) as $$\dot{\rho }_\varphi =-3H\dot{\varphi }^2\left(1+\frac{\mathrm{\Gamma }}{H}\right).$$ (25) Now, since $`x=\mathrm{const}`$ we see from (13) that $`\dot{\varphi }^2=\frac{x}{1-x}\gamma \rho `$, and using (21) and (24), this equation can be integrated in terms of the scale factor. The result is $$\rho _\varphi =\rho _{\varphi _I}+\rho _I\left(\delta +\frac{x}{1-x}\right)\left[\frac{\left(\frac{a}{a_I}\right)^{-3\gamma (1-\delta )}-1}{1-\delta }\right],$$ (26) where the constant $`\rho _{\varphi _I}`$ is the scalar field energy density when $`a=a_I`$. In the adiabatic limit ($`\delta =0`$) the above expression reduces to $$\rho _\varphi =\rho _{\varphi _I}+\rho _I\frac{x}{1-x}\left[\left(\frac{a}{a_I}\right)^{-3\gamma }-1\right].$$ (27) The isothermal case ($`\delta =1`$) is also readily obtained using that $`lim_{q\to 1}\frac{f^{1-q}-1}{1-q}=\mathrm{ln}f`$. One finds $$\rho _\varphi =\rho _{\varphi _I}-\frac{3\gamma }{1-x}\rho _I\mathrm{ln}\left(\frac{a}{a_\mathrm{I}}\right).$$ (28) If $`\delta \ne 1`$, using (24) and (26), we have that the total energy density may be written as $$\rho _t=\rho _{\varphi _I}-B\rho _I+(1+B)\rho ,$$ (29) where $`B=\frac{x(1+\frac{\mathrm{\Gamma }}{H})}{(1-x)(1-\delta )}`$ is a constant. 
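The closed forms (26)-(28) can be packaged into a single routine that switches to the logarithmic branch as $`\delta \to 1`$; the small tolerance below is an implementation choice, not part of the model:

```python
import math

def rho_phi(a_over_aI, rho_phi_I, rho_I, gamma, x, dlt, tol=1e-9):
    """Scalar field energy density, Eqs. (26)-(28)."""
    if abs(1.0 - dlt) < tol:
        # Eq. (28): isothermal (delta -> 1) limit via the logarithm
        return rho_phi_I - 3.0 * gamma / (1.0 - x) * rho_I * math.log(a_over_aI)
    pref = rho_I * (dlt + x / (1.0 - x))
    bracket = (a_over_aI ** (-3.0 * gamma * (1.0 - dlt)) - 1.0) / (1.0 - dlt)
    return rho_phi_I + pref * bracket
```

One can check that the generic branch approaches the $`\delta =1`$ branch continuously, which is just the $`lim_{q\to 1}`$ identity used in the text.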
The time dependence of the scale factor may also be obtained from (13) and (15), which give $`\gamma _{\mathrm{eff}}=\frac{\gamma \rho }{(1-x)\rho _t}`$. We see that the possible evolution laws depend critically on the initial conditions. In particular, if $`B\ll \frac{\rho _{\varphi _I}}{\rho _I}`$ we have $`\gamma _{\mathrm{eff}}\simeq \frac{\gamma \rho _I}{(1-x)\rho _{\varphi _I}}`$. Thus, if $`\rho _{\varphi _I}\gg \rho _I`$ we have exponential inflation, and if $`\rho _\varphi `$ dominates only moderately, the scale factor will evolve as power law inflation with a coupled thermal component. As one may check, in this case the potential $`V(\varphi )`$ scales as $`e^{-\lambda \varphi }`$, where $`\lambda (\delta )`$ is a positive parameter. Note also that if the first two terms in (29) do not cancel each other, we may have a “remnant” cosmological constant for large values of the cosmological time. As we have seen, this simple model is representative of two different ways of extending the original warm inflation. One is keeping the isothermal condition ($`\alpha =\delta =1`$) for generic values of $`x`$. Another approach is to consider nonvanishing parameters $`\alpha `$ and $`x`$ ($`\alpha \ne x`$), which allows for warm power law inflationary models. Hopefully, in a more general framework, where the pair of parameters ($`\alpha ,x`$) are time dependent functions, a consistent unified picture containing the new and warm inflation, as well as all the intermediary situations, may be obtained. ## V Conclusions Our analysis might be useful as a heuristic tool for building models with scalar fields coupled to a material medium. In principle, it can be applied to warm inflation-like models (with or without phase transitions involved), reheating (even for the old scenario) and quintessence-like models. As a rule, it could be tried as a first step, before attempting different potentials for the scalar fields. 
As we have shown, this happens because, depending on the two parameters, the conditions for distinct dynamics of the scale factor (including inflationary regimes) are relatively independent of the shape of the potential, since a large dissipative term may provide a natural slow rolling for the field. This may relax the conditions on the flatness of the potential in such a way that even exotic models like the oscillating inflation of Damour and Mukhanov may provide the necessary number of e-foldings and a sufficiently high post-inflationary radiation temperature. The toy model presented in the last section somewhat suggests that any inflationary interpolating solution between the isentropic and isothermal limits can be represented in the bidimensional parameter space ($`\alpha ,x`$). Particular examples will be discussed elsewhere . As it appears, a more comprehensive phenomenological treatment of this matter should necessarily include thermodynamical constraints, thus requiring the methods and techniques of nonequilibrium thermodynamics. As shown recently, a phenomenological coupling term explicitly dependent on the created particles (and not only on the scalar field) should be a natural outcome of these methods when the decay products thermalize with the heat bath. Such an approach may have interesting consequences for the old reheating and warm inflation models. Acknowledgements The authors are grateful for the support of CAPES and CNPq (Brazilian research agencies). One of us (JASL) was also supported by the project PRONEX/FINEP (No. 41.96.0908.00).
# Population Synthesis of the GRB Progenitors and their Brightness and Redshift Distribution ## 1. Introduction For many years the binary relativistic star merger has remained one of the most valuable models of gamma ray bursts (Blinnikov et al, 1984, Paczyński, 1991). Some recent alternative models, involving the collapse of a massive rotating star (Paczyński, 1998), may more easily fit the energy requirements for GRB, and are in nice agreement with the discoveries of GRB optical counterparts in actively star forming galaxies (Bloom et al, 1998). Nevertheless, the old merger model is not excluded, as the observed binary radio pulsars display examples of definite merger precursors. This paper is devoted to the study of close binary evolution which leads to the mergers of binary neutron star and black hole systems (NS+NS and NS+BH), which are suspected to be able to produce a GRB. The collapsar GRB and the merger GRB can have similar physical mechanisms for energy extraction (Lee, 1999) and production of radiation (Piran, 1999, Usov, 1999). Nevertheless, they can be discriminated by evolutionary considerations. ## 2. The model of evolution We use the “Scenario Machine”, initiated by Kornilov and Lipunov (1995), and later developed by Lipunov et al (1996), as a stellar evolution engine. The basic evolution model is similar to that of Vanbeveren et al (1998), which best reproduces the galactic population of massive binaries. During the evolution of a binary its orbital separation changes in various modes of mass and angular momentum transfer. The most dramatic change of the orbit takes place during a supernova explosion, or a collapse into a black hole. It is usually supposed that a newly formed compact object obtains a kick velocity due to some asymmetry in the collapse. To explain the observed velocities of radio pulsars, one should assume the kick velocity to be of the order of $`200`$–$`600`$ km/s (Lyne, Lorimer, 1994). 
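A common way to draw kick velocities in such simulations is an isotropic Maxwellian distribution; this particular functional form is an assumption for illustration here, since the text only quotes the 200–600 km/s range:

```python
import math
import random

def sample_kick(sigma, rng):
    """|v| for an isotropic Maxwellian kick with 1D dispersion sigma (km/s)."""
    return math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))

rng = random.Random(42)
kicks = [sample_kick(250.0, rng) for _ in range(100000)]
mean_kick = sum(kicks) / len(kicks)   # analytic mean is sigma*sqrt(8/pi) ~ 399 km/s
```

Scanning the dispersion over the stated range then amounts to rerunning the population synthesis with different `sigma` values.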
As the observations suffer from many selection effects, the exact shape of the distribution is not precisely known. So we perform the calculations for several values of the kick velocity, in the range from $`0`$ to $`600`$ km/s, to show the influence of this parameter on the results. ## 3. The event age distribution The life of a merging system consists of two important phases: first, the nuclear powered evolution of the normal stars, and second, the gravity wave powered orbit shrinking phase. The characteristic time scale of the first phase occupies the range from $`3\times 10^6`$ years for the most massive stars to $`10^8`$ years for the least massive stars able to produce a neutron star. The gravitational inspiral time is determined by the parameters of the orbit after the formation of the second compact object, and can be much greater than the nuclear lifetime, being on average of the order of a billion years. The age of a merging binary is the sum of the nuclear lifetime and the gravitational inspiral duration. The distribution of the ages of merging binaries is a direct output of the population synthesis procedure. Another interpretation of this distribution is the merger rate time history after a simultaneous ($`\delta `$-like) star formation. It is shown in fig. 1 for both NS+NS and NS+BH mergers. The merger age distribution shows a power-law behavior with a slope of $`-1`$. For NS+NS mergers, there is a strong dependence of the sharpness of the age distribution on the kick velocity. High kicks make the mergers on average younger. Without a kick, there are no mergers with ages less than $`10^8`$ years. For NS+BH mergers, the presence of a kick velocity is very important: without a kick, all the binaries are very wide and have very long inspiral times, usually greater than the Hubble time. 
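A merger-rate law $`dN/dt\propto 1/t`$ of this kind is easy to sample by inverse transform; the cutoffs below ($`10^8`$ yr from the nuclear lifetime, roughly a Hubble time at the top) are illustrative choices, not the paper's fitted values:

```python
import random

def sample_merger_age(rng, t_min=1e8, t_max=1.4e10):
    """Draw an age from p(t) proportional to 1/t on [t_min, t_max] (years)."""
    return t_min * (t_max / t_min) ** rng.random()

rng = random.Random(7)
ages = [sample_merger_age(rng) for _ in range(200000)]
late = sum(a > 1e9 for a in ages) / len(ages)   # fraction merging after 1 Gyr
```

Even though the median age is a few hundred Myr, roughly half of the sampled systems merge after 1 Gyr, which is the "heavy tail" behavior discussed next.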
The power law distribution has a “heavy tail”: though half of the mergers take place before several hundred million years after the star formation, there is also a significant fraction of mergers that take place after billions of years. ## 4. The distribution of merger redshifts The distribution of merger redshifts can be obtained by a convolution of the star formation rate history and the merger age distribution. The present knowledge of the star formation rate (SFR) history is based on the works of Madau (1996). Earlier (Jørgensen et al, 1995) we assumed a simple two-parametric model of SFR, containing an initial star formation burst (where a fraction $`ϵ`$ of all the stars were formed) at $`z\sim 5`$, and a subsequent constant SFR. In this work we build on the Madau observational SFR, but still introduce an initial star formation burst, which should be responsible for the formation of the elliptical galaxies and the spheroidal components of the spiral galaxies. Then, according to Fukugita et al (1998), the value of $`ϵ`$ should be $`2/3`$. The adopted cosmological model was ($`H_0=75`$ km/s/Mpc, $`\mathrm{\Omega }_m=0.3`$, $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$). The obtained NS+NS and NS+BH merger redshift distributions are displayed in fig. 2 for spiral and elliptical galaxies (Madau and burst-like SFR, respectively) and different values of the mean kick velocity. Obviously, these event redshift distributions are different for spirals and ellipticals. This allows us to propose a new observational test for the GRB progenitor. If we detect a GRB in an elliptical galaxy, it should most likely be a NS+NS or NS+BH merger and not a collapsar, because the population of the ellipticals is very old and does not contain massive normal stars. Thus, future identifications of GRB host galaxies can solve the collapsar/merger dilemma. ## 5. 
The GRB brightness distribution – $`\mathrm{log}N`$-$`\mathrm{log}P`$ Another implication of the merger age distributions is an attempt to construct a GRB brightness distribution ($`\mathrm{log}N`$-$`\mathrm{log}P`$). This is done by a classic procedure (Weinberg, 1972) on the basis of the redshift distribution (fig. 2) and a luminosity function. Observations do not directly constrain the spread of the GRB intrinsic luminosities, though some techniques allow one to estimate it roughly (Petrosian, 1999). We assumed a power-law luminosity function characterized by its slope and range. For comparison with the observations, we have chosen the long ($`T_{90}>1.5`$ s) and relatively bright ($`>1`$ photon/cm<sup>2</sup>) bursts from the 4th BATSE GRB catalog (Meegan et al, 1998). Realistic GRB spectra were taken to estimate the spectral K-correction (Oke, Sandage, 1968). The best fits are displayed in fig. 4. The $`\mathrm{log}N`$-$`\mathrm{log}P`$ distribution is shown in a differential form multiplied by $`P^{5/2}`$ in order to outline the difference from the Euclidean case, in which the distribution should look like a horizontal line. No acceptable fit can be obtained if the luminosity spread is less than $`2`$ orders of magnitude. In the best fit, the luminosity function spread is $`2.5`$ orders of magnitude, and the most distant observed GRB are at redshift $`4.5`$. ## 6. Conclusions The computer simulations of the evolution of the possible GRB progenitors provide useful information which facilitates the interpretation of the GRB statistics. In the present paper, we would like to outline two main conclusions: A new interesting method which can help to determine the nature of GRB progenitors is the determination of the ratio of the rates of bursts in spiral and elliptical hosts. For GRB produced by mergers, the rate in ellipticals should be only several times less than that in spirals, and should increase with redshift. 
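A toy version of this brightness-distribution construction can be sketched with Euclidean geometry in place of the full cosmology; the luminosity-function slope and normalization below are illustrative assumptions, and only the 2.5-decade spread is taken from the text:

```python
import math
import random

def sample_peak_flux(rng, slope=-1.5, l_ratio=10**2.5):
    """P = L / (4 pi d^2): homogeneous Euclidean sources, power-law L."""
    d = (1.0 - rng.random()) ** (1.0 / 3.0)   # uniform in a unit sphere, d in (0, 1]
    u = rng.random()                          # inverse CDF for p(L) ~ L^slope on [1, l_ratio]
    L = (1.0 + u * (l_ratio ** (slope + 1.0) - 1.0)) ** (1.0 / (slope + 1.0))
    return L / (4.0 * math.pi * d * d)

rng = random.Random(3)
fluxes = [sample_peak_flux(rng) for _ in range(50000)]
```

Binning `fluxes` differentially and multiplying by $`P^{5/2}`$ reproduces the flat (horizontal) Euclidean reference against which the real, evolution-distorted distribution is compared.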
For the massive collapsar GRB it is practically impossible to occur in an elliptical galaxy. The analysis of the observed BATSE $`\mathrm{log}N`$-$`\mathrm{log}P`$ distribution, taking into account the evolutionary effects, shows that the spread of the GRB intrinsic luminosity function cannot be less than two orders of magnitude. ### Acknowledgments. The author is grateful to the conference organizers, Dr Roland Svensson and Dr Juri Poutanen, for their hospitality. This work was performed with the support of the INTAS project No. 96-0315, the “Universities of Russia” project No. 5559 and RFBR grant No. 98-02-16801. ## References Blinnikov, S.I., Novikov, I.D., Perevodchikova, T.V., Polnarev, A.G., 1984, PAZh, 10, 422; Sov. Astronomy Let., 10, 177 Bloom, J.S., Djorgovski, S.G., Kulkarni, S.R., Frail, D.A., 1998, ApJ, 507, 25 Fukugita, M., Hogan, C.J., Peebles, P.J.E., 1998, ApJ, 501, 518 Jørgensen, H.E., Lipunov, V.M., Postnov, K.A., Prokhorov, M.E., Panchenko, I.E., 1995, ApJ, 454, 593 Kornilov, V.G., Lipunov, V.M., 1983, AZh, 60, 574 Lee, 1999, this volume Lipunov, V.M., Postnov, K.A., Prokhorov, M.E., 1996, A&A, 310, 489 Lyne, A.G., Lorimer, D.R., 1994, Nature, 369, 127 Madau, P. et al, 1996, MNRAS, 283, 1388 Meegan, C.A. et al, 1998, In: Gamma-Ray Bursts: 4th Huntsville Symposium, Edited by Charles A. Meegan, Robert D. Preece, and Thomas M. Koshut. Woodbury, New York: American Institute of Physics (AIP conference proceedings; 428), 3 Oke, J.B., Sandage, A., 1968, ApJ, 154, 21 Paczyński, B., 1991, Acta Astr., 41, 257 Paczyński, B., 1998, ApJ, 494, L45 Petrosian, V., 1999, this volume Usov, V.V., 1999, this volume Vanbeveren, D., et al 1998, New Astronomy, 3, 443 Weinberg, S., 1972, Gravitation and Cosmology, Wiley & Sons, NY
# Techniques for QCD calculations by numerical integration ## I Introduction This paper concerns a method, which was introduced in , for performing perturbative calculations in quantum chromodynamics (QCD) and other quantum field theories. The method is intended for calculations of quantities in which one measures something about the hadronic final state produced in a collision and in which the observable is infrared safe – that is, insensitive to long-distance effects. Examples include jet cross sections in hadron-hadron and lepton-hadron scattering and in $`e^+e^{-}\to \mathrm{ℎ𝑎𝑑𝑟𝑜𝑛𝑠}`$. There have been many calculations of this kind carried out at next-to-leading order in perturbation theory. These calculations are based on a method introduced by Ellis, Ross, and Terrano in the context of $`e^+e^{-}\to \mathrm{ℎ𝑎𝑑𝑟𝑜𝑛𝑠}`$. Stated in the simplest terms, the Ellis-Ross-Terrano method is to do some integrations over momenta $`\stackrel{}{\mathrm{ℓ}}_i`$ analytically, others numerically. In the method discussed here, one does all of these integrations numerically. Evidently, if one performs all of the integrations numerically, one gains the flexibility to quite easily modify the integrand. There may be other advantages, as well as some disadvantages, to the numerical integration method compared to the numerical/analytical method. In this paper, I address only the process $`e^+e^{-}\to \mathrm{ℎ𝑎𝑑𝑟𝑜𝑛𝑠}`$. I discuss three-jet-like infrared safe observables at next-to-leading order, that is, order $`\alpha _s^2`$. Examples of such observables include the thrust distribution and the fraction of events that have three jets. The main techniques of the numerical integration method for $`e^+e^{-}\to \mathrm{ℎ𝑎𝑑𝑟𝑜𝑛𝑠}`$ were presented briefly in . The principal purpose of this paper is to explain in detail some of the most important of these techniques. 
In the numerical/analytical method, one has to work hard to implement the cancellation of “collinear” and “soft” divergences that occur in the integrations. In the numerical method, as we will see, this cancellation happens automatically. On the other hand, in the completely numerical method one has the complication of having to deform some of the integration contours into the complex plane. We will see how to do this deformation. In both the numerical/analytical method and the completely numerical method, one must arrange that the density of integration points is singular near a soft gluon singularity of the integrand (even after cancellations). However, the precise behavior of the densities needed in the two cases is different. We will see what is needed in the numerical method. These techniques are presented in Secs. II-VI. They are illustrated in Sec. VII with a numerical example. Although a full understanding of the example requires the preceding sections, the reader may want to look briefly at Sec. VII before starting on Secs. II-VI. A brief summary of techniques not presented in detail in this paper is given in Sec. VIII. In , I presented results from a concrete implementation of the numerical method in computer code. Since then, one logical error in the code has been discovered and fixed and the performance of the program has been improved. Results from the improved code are presented in Sec. IX. Let us begin with a precise statement of the problem. We consider an observable such as a particular moment of the thrust distribution. 
The observable can be expanded in powers of $`\alpha _s/\pi `$, $$\sigma =\sum _n\sigma ^{[n]},\sigma ^{[n]}\propto \left(\alpha _s/\pi \right)^n.$$ (1) The order $`\alpha _s^2`$ contribution has the form $`\sigma ^{[2]}`$ $`=`$ $`{\displaystyle \frac{1}{2!}}{\displaystyle \int d\stackrel{}{k}_1\int d\stackrel{}{k}_2\frac{d\sigma _2^{[2]}}{d\stackrel{}{k}_1d\stackrel{}{k}_2}𝒮_2(\stackrel{}{k}_1,\stackrel{}{k}_2)}`$ $`+{\displaystyle \frac{1}{3!}}{\displaystyle \int d\stackrel{}{k}_1\int d\stackrel{}{k}_2\int d\stackrel{}{k}_3\frac{d\sigma _3^{[2]}}{d\stackrel{}{k}_1d\stackrel{}{k}_2d\stackrel{}{k}_3}𝒮_3(\stackrel{}{k}_1,\stackrel{}{k}_2,\stackrel{}{k}_3)}`$ $`+{\displaystyle \frac{1}{4!}}{\displaystyle \int d\stackrel{}{k}_1\int d\stackrel{}{k}_2\int d\stackrel{}{k}_3\int d\stackrel{}{k}_4\frac{d\sigma _4^{[2]}}{d\stackrel{}{k}_1d\stackrel{}{k}_2d\stackrel{}{k}_3d\stackrel{}{k}_4}𝒮_4(\stackrel{}{k}_1,\stackrel{}{k}_2,\stackrel{}{k}_3,\stackrel{}{k}_4)}.`$ (4) Here the $`d\sigma _n^{[2]}`$ are the order $`\alpha _s^2`$ contributions to the parton level cross section, calculated with zero quark masses. Each contains momentum and energy conserving delta functions. The $`d\sigma _n^{[2]}`$ include ultraviolet renormalization in the $`\overline{\mathrm{MS}}`$ scheme. The functions $`𝒮`$ describe the measurable quantity to be calculated. We wish to calculate a “three-jet-like” quantity. That is, $`𝒮_2=0`$. The normalization is such that $`𝒮_n=1`$ for $`n=2,3,4`$ would give the order $`\alpha _s^2`$ perturbative contribution to the total cross section. There are, of course, infrared divergences associated with Eq. (4). For now, we may simply suppose that an infrared cutoff has been supplied. The measurement, as specified by the functions $`𝒮_n`$, is to be infrared safe, as described in Ref. : the $`𝒮_n`$ are smooth functions of the parton momenta and $$𝒮_{n+1}(\stackrel{}{k}_1,\mathrm{\dots },\lambda \stackrel{}{k}_n,(1-\lambda )\stackrel{}{k}_n)=𝒮_n(\stackrel{}{k}_1,\mathrm{\dots },\stackrel{}{k}_n)$$ (5) for $`0\le \lambda <1`$. 
That is, collinear splittings and soft particles do not affect the measurement. It is convenient to calculate a quantity that is dimensionless. Let the functions $`𝒮_n`$ be dimensionless and eliminate the remaining dimensionality in the problem by dividing by $`\sigma _0`$, the total $`e^+e^{-}`$ cross section at the Born level. Let us also remove the factor of $`(\alpha _s/\pi )^2`$. Thus, we calculate $$ℛ=\frac{\sigma ^{[2]}}{\sigma _0(\alpha _s/\pi )^2}.$$ (6) Our problem is thus to calculate $`ℛ`$. Let us now see how to set up this problem in a convenient form. We note that $`ℛ`$ is a function of the c.m. energy $`\sqrt{s}`$ and the $`\overline{\mathrm{MS}}`$ renormalization scale $`\mu `$. We will choose $`\mu `$ to be proportional to $`\sqrt{s}`$: $`\mu =A_{UV}\sqrt{s}`$. Then $`ℛ`$ depends on $`A_{UV}`$. But, because it is dimensionless, it is independent of $`\sqrt{s}`$. This allows us to write $$ℛ=\int _0^{\mathrm{\infty }}d\sqrt{s}\,h(\sqrt{s})\,ℛ(A_{UV},\sqrt{s}),$$ (7) where $`h`$ is any function with $$\int _0^{\mathrm{\infty }}d\sqrt{s}\,h(\sqrt{s})=1.$$ (8) The quantity $`ℛ`$ can be expressed in terms of cut Feynman diagrams, as in Fig. 1. The dots where the parton lines cross the cut represent the function $`𝒮_n(\stackrel{}{k}_1,\mathrm{\dots },\stackrel{}{k}_n)`$. Each diagram is a three loop diagram, so we have integrations over loop momenta $`\mathrm{ℓ}_1^\mu `$, $`\mathrm{ℓ}_2^\mu `$ and $`\mathrm{ℓ}_3^\mu `$. We first perform the energy integrations. For the graphs in which four parton lines cross the cut, there are four mass-shell delta functions $`\delta (k_J^2)`$. These delta functions eliminate the three energy integrals over $`\mathrm{ℓ}_1^0`$, $`\mathrm{ℓ}_2^0`$, and $`\mathrm{ℓ}_3^0`$ as well as the integral (8) over $`\sqrt{s}`$. For the graphs in which three parton lines cross the cut, we can eliminate the integration over $`\sqrt{s}`$ and two of the $`\mathrm{ℓ}_J^0`$ integrals. One integral over the energy $`E`$ in the virtual loop remains. 
We perform this integration by closing the integration contour in the lower half $`E`$ plane. This gives a sum of terms obtained from the original integrand by some algebraic substitutions, as we will see in the following sections. Having performed the energy integrations, we are left with an integral of the form $$ℛ=\int d\stackrel{}{\mathrm{ℓ}}_1\int d\stackrel{}{\mathrm{ℓ}}_2\int d\stackrel{}{\mathrm{ℓ}}_3\underset{G}{\sum }\underset{C}{\sum }g(G,C;\stackrel{}{\mathrm{ℓ}}_1,\stackrel{}{\mathrm{ℓ}}_2,\stackrel{}{\mathrm{ℓ}}_3).$$ (9) Here there is a sum over graphs $`G`$ (of which one is shown in Fig. 1) and there is a sum over the possible cuts $`C`$ of a given graph. The problem of calculating $`ℛ`$ is now set up in a convenient form for calculation. If we were using the Ellis-Ross-Terrano method, we would calculate some of the integrals in Eq. (9) numerically and some analytically. In the method described here, we first perform certain contour deformations, then calculate all of the integrals by Monte Carlo numerical integration. In the following sections, we will learn the main techniques for performing the integrations in Eq. (9). We will do this by studying a simple model problem that will enable us to see the essential features of the numerical method with as few extraneous difficulties as possible. ## II A simplified model In the following sections, we consider a simplified model in which all complications that are not needed for a first understanding of the numerical method are stripped away. The model is represented by the graph shown in Fig. 2. There are contributions from all of the two and three parton cuts of this diagram, as shown in Fig. 3. Since QCD numerator functions do not play a major role, we consider this graph in $`\varphi ^3`$ theory. Thus, also, we can avoid the complications of ultraviolet renormalization. We consider the incoming momentum $`\stackrel{}{q}`$ to be fixed and nonzero. We calculate the integral of the graph over the incoming energy $`q^0`$. 
This is analogous to the technical trick of integrating over $`\sqrt{s}`$ in the full three loop QCD calculation (see Sec. I) and serves to provide three energy integrations to perform against three mass-shell delta functions for the three-parton cuts. We need a nontrivial measurement function $`𝒮`$. As an example, we choose to measure the transverse energy in the final state normalized to the total energy: $`𝒮_2(\stackrel{}{k}_1,\stackrel{}{k}_2)`$ $`=`$ $`(|\stackrel{}{k}_{T,1}|+|\stackrel{}{k}_{T,2}|)/(|\stackrel{}{k}_1|+|\stackrel{}{k}_2|)`$ (10) $`𝒮_3(\stackrel{}{k}_1,\stackrel{}{k}_2,\stackrel{}{k}_3)`$ $`=`$ $`(|\stackrel{}{k}_{T,1}|+|\stackrel{}{k}_{T,2}|+|\stackrel{}{k}_{T,3}|)/(|\stackrel{}{k}_1|+|\stackrel{}{k}_2|+|\stackrel{}{k}_3|),`$ (11) where $`\stackrel{}{k}_{T,j}`$ is the part of the momentum $`\stackrel{}{k}_j`$ of the $`j`$th final state particle that is orthogonal to $`\stackrel{}{q}`$. There are two loops in our diagram. We choose the independent loop momenta to be $`\mathrm{ℓ}_2^\mu `$ and $`\mathrm{ℓ}_4^\mu `$. The other momenta are understood to be expressed in terms of $`\mathrm{ℓ}_2^\mu `$, $`\mathrm{ℓ}_4^\mu `$, and $`q^\mu `$. Thus the example integral that we seek to calculate is $$ℛ=\frac{g^4}{2}\int \frac{dq^0}{2\pi }\int \frac{d^4\mathrm{ℓ}_2}{(2\pi )^4}\int \frac{d^4\mathrm{ℓ}_4}{(2\pi )^4}\,𝒲.$$ (12) Here $`g`$ is the coupling, $`1/2`$ is the statistical factor for this graph, and the integrand $`𝒲`$ consists of four parts, one for each of the cuts in Fig. 
3: $$𝒲=𝒲_a+𝒲_b+𝒲_c+𝒲_d,$$ (13) where $`𝒲_a`$ $`=`$ $`i𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_5){\displaystyle \frac{1}{\mathrm{}_1^2+iϵ}}{\displaystyle \frac{1}{\mathrm{}_2^2+iϵ}}{\displaystyle \frac{1}{\mathrm{}_3^2+iϵ}}(2\pi )\mathrm{\Delta }(\mathrm{}_4)(2\pi )\mathrm{\Delta }(\mathrm{}_5),`$ (14) $`𝒲_b`$ $`=`$ $`-i𝒮_2(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_3)(2\pi )\mathrm{\Delta }(\mathrm{}_1){\displaystyle \frac{1}{\mathrm{}_2^2-iϵ}}(2\pi )\mathrm{\Delta }(\mathrm{}_3){\displaystyle \frac{1}{\mathrm{}_4^2-iϵ}}{\displaystyle \frac{1}{\mathrm{}_5^2-iϵ}},`$ (15) $`𝒲_c`$ $`=`$ $`𝒮_3(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_5)(2\pi )\mathrm{\Delta }(\mathrm{}_1)(2\pi )\mathrm{\Delta }(\mathrm{}_2){\displaystyle \frac{1}{\mathrm{}_3^2}}{\displaystyle \frac{1}{\mathrm{}_4^2}}(2\pi )\mathrm{\Delta }(\mathrm{}_5),`$ (16) $`𝒲_d`$ $`=`$ $`𝒮_3(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_3){\displaystyle \frac{1}{\mathrm{}_1^2}}(2\pi )\mathrm{\Delta }(\mathrm{}_2)(2\pi )\mathrm{\Delta }(\mathrm{}_3)(2\pi )\mathrm{\Delta }(\mathrm{}_4){\displaystyle \frac{1}{\mathrm{}_5^2}}.`$ (17) Here we have used the notation $$\mathrm{\Delta }(k)=\delta (k^2)\theta (k^0).$$ (18) ## III The integration over energies We begin by performing the integrals over the energies in Eq. (12). In the case of three partons in the final state, the three delta functions eliminate the three integrations. In the case of two partons in the final state, the two delta functions eliminate two of the energy integrations. This leaves one integration over the energy that circulates around the virtual loop. There are three poles in the upper half plane and three in the lower half plane. Closing the contour in one half plane or the other gives three contributions. Each of these contributions corresponds to putting one of the particles in the loop on shell. Thus altogether there are eight contributions to $`ℐ`$, as indicated in Fig. 4. 
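The common structure of the measurement functions in Eqs. (10) and (11) — total transverse energy normalized to total energy — can be sketched in a few lines of Python. This is an illustration only; the function name and the sample momenta are our own choices, not part of the original calculation:

```python
import numpy as np

def transverse_energy_fraction(momenta, q):
    """Measurement function of Eqs. (10)-(11): the sum of |k_T| over
    final-state particles divided by the sum of |k|, where k_T is the
    part of each momentum orthogonal to the incoming momentum q."""
    q_hat = np.asarray(q, dtype=float)
    q_hat = q_hat / np.linalg.norm(q_hat)
    e_transverse = 0.0
    e_total = 0.0
    for k in momenta:
        k = np.asarray(k, dtype=float)
        k_t = k - np.dot(k, q_hat) * q_hat  # component orthogonal to q
        e_transverse += np.linalg.norm(k_t)
        e_total += np.linalg.norm(k)
    return e_transverse / e_total

q = np.array([0.0, 0.0, 1.0])
k1 = np.array([0.4, 0.0, 1.0])
k2 = np.array([-0.4, 0.1, 1.2])
s2 = transverse_energy_fraction([k1, k2], q)
# Infrared safety: splitting k2 into two collinear fragments does not
# change the measured value, the property exploited in Sec. IV below.
s3 = transverse_energy_fraction([k1, 0.3 * k2, 0.7 * k2], q)
assert np.isclose(s2, s3)
```

The final assertion is the two-to-three particle matching of Eq. (42) at exactly collinear kinematics.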
We write $`ℐ`$ as $$ℐ=\frac{g^4}{2(2\pi )^6}\int 𝑑\stackrel{}{\mathrm{}}_4\int 𝑑\stackrel{}{\mathrm{}}_2𝒢,$$ (19) where the integrand $`𝒢`$ has eight parts: $$𝒢=𝒢_{a1}+𝒢_{a2}+𝒢_{a3}+𝒢_{b5}+𝒢_{b2}+𝒢_{b4}+𝒢_c+𝒢_d.$$ (20) The contributions to $`𝒢`$ are $`𝒢_{a1}`$ $`=`$ $`𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_5){\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_1|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|-|\stackrel{}{\mathrm{}}_4|)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}+iϵ}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|-|\stackrel{}{\mathrm{}}_4|-|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_3^{\mathrm{\hspace{0.17em}2}}+iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5|}},`$ (21) $`𝒢_{a2}`$ $`=`$ $`𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_5){\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_2|+|\stackrel{}{\mathrm{}}_4|)^2-\stackrel{}{\mathrm{}}_1^{\mathrm{\hspace{0.17em}2}}+iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_2|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_2|-|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_3^{\mathrm{\hspace{0.17em}2}}+iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5|}},`$ (22) $`𝒢_{a3}`$ $`=`$ $`𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_5){\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_3|+|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_1^{\mathrm{\hspace{0.17em}2}}+iϵ}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_3|+|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}+iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_3|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5|}},`$ (23) $`𝒢_{b5}`$ $`=`$ $`𝒮_2(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_3){\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_1|}}{\displaystyle 
\frac{1}{(|\stackrel{}{\mathrm{}}_3|+|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}-iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_3|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|+|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}-iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5|}},`$ (24) $`𝒢_{b2}`$ $`=`$ $`𝒮_2(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_3){\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_1|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_2|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_3|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_2|)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}-iϵ}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_3|-|\stackrel{}{\mathrm{}}_2|)^2-\stackrel{}{\mathrm{}}_5^{\mathrm{\hspace{0.17em}2}}-iϵ}},`$ (25) $`𝒢_{b4}`$ $`=`$ $`𝒮_2(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_3){\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_1|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|-|\stackrel{}{\mathrm{}}_4|)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}-iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_3|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|-|\stackrel{}{\mathrm{}}_4|)^2-\stackrel{}{\mathrm{}}_5^{\mathrm{\hspace{0.17em}2}}-iϵ}},`$ (26) $`𝒢_c`$ $`=`$ $`𝒮_3(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_5){\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_1|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_2|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_2|+|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_3^{\mathrm{\hspace{0.17em}2}}}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_2|)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5|}},`$ (27) $`𝒢_d`$ 
$`=`$ $`𝒮_3(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_3){\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_2|+|\stackrel{}{\mathrm{}}_4|)^2-\stackrel{}{\mathrm{}}_1^{\mathrm{\hspace{0.17em}2}}}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_2|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_3|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{(|\stackrel{}{\mathrm{}}_2|+|\stackrel{}{\mathrm{}}_3|)^2-\stackrel{}{\mathrm{}}_5^{\mathrm{\hspace{0.17em}2}}}}.`$ (28) So far, the operations that we have performed have been purely algebraic. They are evidently of a sort that can be easily implemented in a computer program in an automatic fashion. We are left with an integral over the loop momenta $`\stackrel{}{\mathrm{}}_2`$ and $`\stackrel{}{\mathrm{}}_4`$. We seek to perform this integration numerically. However, the integrand $`𝒢`$ has singularities, so it is not completely self-evident how to proceed. It is to this question that we now turn. ## IV Cancellation of singularities In this section, we discuss the cancellation of singularities in a numerical calculation of the integral in Eq. (19). Let us concentrate to begin with on the cut shown in Fig. 3(a). Then there is a virtual loop consisting of the propagators with momentum labels $`\mathrm{}_1`$, $`\mathrm{}_2`$ and $`\mathrm{}_3`$. Recall that we are taking $`\stackrel{}{\mathrm{}}_2`$ and $`\stackrel{}{\mathrm{}}_4`$ as the independent loop momenta. Put the integration over $`\stackrel{}{\mathrm{}}_2`$ inside the integration over $`\stackrel{}{\mathrm{}}_4`$. Then we can consider $`\stackrel{}{\mathrm{}}_4`$ as fixed while $`\stackrel{}{\mathrm{}}_2`$ varies. Fig. 5 illustrates the space of the loop momentum $`\stackrel{}{\mathrm{}}_2`$ for a particular choice of $`\stackrel{}{q}`$ and at a particular point in the integration over $`\stackrel{}{\mathrm{}}_4`$. The origin of coordinates is at the point labeled $`\stackrel{}{\mathrm{}}_2=0`$. 
The vector $`\stackrel{}{\mathrm{}}_4`$ is indicated as an arrow with its head at $`\stackrel{}{\mathrm{}}_2=0`$. Then the point $`\stackrel{}{\mathrm{}}_1=0`$ is at the tail of this vector, as indicated. The vector $`\stackrel{}{\mathrm{}}_5=\stackrel{}{q}-\stackrel{}{\mathrm{}}_4`$ is indicated as an arrow with its tail at $`\stackrel{}{\mathrm{}}_2=0`$. Then the point $`\stackrel{}{\mathrm{}}_3=0`$ is at the head of this vector, as indicated. Finally, the vector $`\stackrel{}{q}`$ is indicated as an arrow with its tail at $`\stackrel{}{\mathrm{}}_1=0`$. Where are the singularities of the integrand for our graph? There is, first of all, a singularity when the momentum of any propagator vanishes since there is always a contribution in which that propagator is put on-shell, with a singularity $`1/(2|\stackrel{}{\mathrm{}}|)`$. Since an integration $`\int 𝑑\stackrel{}{\mathrm{}}/(2|\stackrel{}{\mathrm{}}|)`$ is convergent in the infrared by two powers, these singularities do not cause much difficulty. We simply have to choose a density of points with a matching $`1/|\stackrel{}{\mathrm{}}|`$ singularity, as described later in Sec. VI. We do not discuss these singularities further in this section. The singularities of concern to us here are 1) A collinear singularity at $`\stackrel{}{\mathrm{}}_2=-x\stackrel{}{\mathrm{}}_4`$ with $`0<x<1`$. 2) A collinear singularity at $`\stackrel{}{\mathrm{}}_2=x\stackrel{}{\mathrm{}}_5`$ with $`0<x<1`$. 3) A soft singularity at $`\stackrel{}{\mathrm{}}_2=0`$. 4) A scattering singularity at $`|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|=|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|`$. The locations of these singularities are indicated in Fig. 6. ### A The collinear singularities In this subsection, we examine the collinear singularity at $`\stackrel{}{\mathrm{}}_2=-x\stackrel{}{\mathrm{}}_4`$ with $`0<x<1`$. The principles that we discover for this case will hold for the other collinear singularities as well. 
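The geometry of this list of singularities can be made concrete with a short numerical helper. The momentum routing $`\stackrel{}{\mathrm{}}_1=\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4`$, $`\stackrel{}{\mathrm{}}_3=\stackrel{}{\mathrm{}}_2-\stackrel{}{\mathrm{}}_5`$ used below is our reading of the figure description above, so treat it as an assumption:

```python
import numpy as np

q = np.array([0.0, 0.0, 2.0])
l4 = np.array([0.3, 0.0, 1.2])
l5 = q - l4          # as in the description of Fig. 5

def singularity_distances(l2):
    """Measures of distance from l2 to each singularity in the list.
    Assumed routing: l1 = l2 + l4 and l3 = l2 - l5 (so l3 = 0 sits at
    the head of the l5 arrow, matching the figure description)."""
    l1 = l2 + l4
    l3 = l2 - l5
    n1, n3, n4, n5 = (np.linalg.norm(v) for v in (l1, l3, l4, l5))
    return {
        "soft": np.linalg.norm(l2),
        # sine of the angle between l1 and l4; zero on collinear line 1
        "collinear_1": np.linalg.norm(np.cross(l1, l4)) / (n1 * n4),
        # sine of the angle between l3 and l5; zero on collinear line 2
        "collinear_2": np.linalg.norm(np.cross(l3, l5)) / (n3 * n5),
        # deviation from the scattering surface |l1|+|l3| = |l4|+|l5|
        "scattering": n1 + n3 - n4 - n5,
    }

# The soft point l2 = 0 lies on the scattering surface as well.
d0 = singularity_distances(np.zeros(3))
assert d0["soft"] == 0.0 and abs(d0["scattering"]) < 1e-12
# A point on the segment between l2 = 0 and the point where l1 = 0:
d1 = singularity_distances(-0.4 * l4)
assert d1["collinear_1"] < 1e-9
```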
The terms $`𝒢_{a1}`$ and $`𝒢_c`$ in the integrand $`𝒢`$, Eq. (20), are singular along the line $`\stackrel{}{\mathrm{}}_2=-x\stackrel{}{\mathrm{}}_4`$, $`0<x<1`$. In order to examine this singularity, let us write $`𝒢_{a1}`$ and $`𝒢_c`$ as given in Eqs. (21) and (27) in the form $$𝒢_{a1}=\frac{1}{2|\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4|}\frac{1}{\left(E_2^{(a1)}\right)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}}\frac{1}{2|\stackrel{}{\mathrm{}}_4|}ℋ(E_1,E_2^{(a1)},E_5,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_4)𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{q}-\stackrel{}{\mathrm{}}_4),$$ (29) $$𝒢_c=\frac{1}{2|\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4|}\frac{1}{2|\stackrel{}{\mathrm{}}_2|}\frac{1}{\left(E_1-E_2^{(c)}\right)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}}ℋ(E_1,E_2^{(c)},E_5,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_4)𝒮_3(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_2,\stackrel{}{q}-\stackrel{}{\mathrm{}}_4).$$ (30) Here the first factors exhibit the denominators for the three propagators that carry collinear momenta at the singularity, $`ℋ`$ denotes the rest of the Feynman graph, and the $`𝒮`$ functions are the measurement functions for the final state particles. The functions $`ℋ`$ depend on the loop momenta $`\stackrel{}{\mathrm{}}_2`$ and $`\stackrel{}{\mathrm{}}_4`$ and on three loop energies, which we take to be $`E_1=\mathrm{}_1^0`$, $`E_2=\mathrm{}_2^0`$ and $`E_5=\mathrm{}_5^0`$. The energies are determined by the on-shell delta functions for the two contributions. 
For $`E_1`$ and $`E_5`$, the values are the same for the two contributions: $`E_1`$ $`=`$ $`|\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4|,`$ (31) $`E_5`$ $`=`$ $`|\stackrel{}{q}-\stackrel{}{\mathrm{}}_4|.`$ (32) For $`E_2`$, the values are different: $`E_2^{(a1)}`$ $`=`$ $`|\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4|-|\stackrel{}{\mathrm{}}_4|,`$ (33) $`E_2^{(c)}`$ $`=`$ $`-|\stackrel{}{\mathrm{}}_2|.`$ (34) In order to examine the behavior of $`𝒢_{a1}`$ and $`𝒢_c`$ near the singularity, let $$\stackrel{}{\mathrm{}}_2=-x\stackrel{}{\mathrm{}}_4+\stackrel{}{\mathrm{}}_T,$$ (35) where $`\stackrel{}{\mathrm{}}_T\cdot \stackrel{}{\mathrm{}}_4=0`$. The singularity is at $`\stackrel{}{\mathrm{}}_T\to 0`$. In $`𝒢_{a1}`$ the denominator $`\left(E_2^{(a1)}\right)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}`$ vanishes as $`\stackrel{}{\mathrm{}}_T\to 0`$: $$\left(E_2^{(a1)}\right)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}=-\frac{\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}}{1-x}\left(1+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}})\right).$$ (36) Thus there is a $`1/\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}`$ singularity which would give a logarithmically divergent result for the integral of $`𝒢_{a1}`$ alone. Altogether, the denominator factors for $`𝒢_{a1}`$ are $$\frac{1}{2|\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4|}\frac{1}{\left(E_2^{(a1)}\right)^2-\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}}\frac{1}{2|\stackrel{}{\mathrm{}}_4|}=-\frac{1}{4\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}}\frac{1}{\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}}\left(1+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}})\right).$$ (37) Let us now look at the denominator factors for $`𝒢_c`$. 
The denominator $`\left(E_1-E_2^{(c)}\right)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}`$ takes the form $$\left(E_1-E_2^{(c)}\right)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}=\frac{\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}}{x(1-x)}\left(1+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}})\right),$$ (38) so that the denominator factors together take the form $$\frac{1}{2|\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4|}\frac{1}{2|\stackrel{}{\mathrm{}}_2|}\frac{1}{\left(E_1-E_2^{(c)}\right)^2-\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}}=\frac{1}{4\stackrel{}{\mathrm{}}_4^{\mathrm{\hspace{0.17em}2}}}\frac{1}{\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}}\left(1+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}})\right).$$ (39) Again, we have a $`1/\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}`$ singularity. Note, however, that the denominator factors in Eqs. (37) and (39) are equal except for their sign, up to corrections that are not singular as $`\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}\to 0`$. Thus if the remaining factors $`ℋ`$ and $`𝒮`$ were exactly the same for $`𝒢_{a1}`$ and $`𝒢_c`$ there would be no singularity in their sum. We thus need to explore the matching of $`ℋ`$ and $`𝒮`$. The two versions of $`ℋ`$ are the same functions with the same arguments except for the fact that $`E_2^{(c)}\ne E_2^{(a1)}`$. 
However, $$E_2^{(c)}=E_2^{(a1)}+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}).$$ (40) Thus $$ℋ(E_1,E_2^{(c)},E_5,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_4)=ℋ(E_1,E_2^{(a1)},E_5,\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_4)+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}).$$ (41) For the functions $`𝒮`$ used in our example, we have $$𝒮_3((1-x)\stackrel{}{\mathrm{}}_4+\stackrel{}{\mathrm{}}_T,x\stackrel{}{\mathrm{}}_4-\stackrel{}{\mathrm{}}_T,\stackrel{}{q}-\stackrel{}{\mathrm{}}_4)=𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{q}-\stackrel{}{\mathrm{}}_4)+𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}).$$ (42) Using these matching equations we find that $$𝒢_{a1}+𝒢_c=𝒪(1)$$ (43) as $`\stackrel{}{\mathrm{}}_T\to 0`$. There is no collinear singularity in $`𝒢`$. How general is this result? First of all note that, in the part of the argument not involving the measurement functions $`𝒮`$, we used only the explicit structure of the denominators for three propagators that meet at a vertex. In the limit in which the momenta carried on these propagators become collinear, there is a cancellation of the collinear singularity arising from these denominators. The three propagators can be part of a much larger graph, and there can be non-trivial numerator factors, as in QCD. All of the other factors can be lumped into a function $`ℋ`$ and treated as above. Thus this cancellation works in QCD as well as $`\varphi ^3`$ theory and it works for cut graphs with at most one virtual loop at any order of perturbation theory. 
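The cancellation between Eqs. (37) and (39) can be checked numerically. This sketch uses the momentum routing $`\stackrel{}{\mathrm{}}_1=\stackrel{}{\mathrm{}}_2+\stackrel{}{\mathrm{}}_4`$ and on-shell energies read off from Eqs. (31)–(34); the explicit signs are our reconstruction, so treat the example as an illustration rather than a transcription:

```python
import numpy as np

l4 = np.array([0.0, 0.0, 1.0])
x = 0.3   # position along the collinear line

def denominator_factors(t):
    """Denominator products of Eqs. (37) and (39) at transverse
    distance t from the collinear line (signs are our reconstruction)."""
    l_T = np.array([t, 0.0, 0.0])          # orthogonal to l4
    l2 = -x * l4 + l_T                     # approach the collinear line
    l1 = l2 + l4                           # assumed routing
    n1, n2, n4 = (np.linalg.norm(v) for v in (l1, l2, l4))
    e2_a1 = n1 - n4                        # on-shell loop energy, cut (a1)
    e1, e2_c = n1, -n2                     # on-shell energies, cut (c)
    d_a1 = 1.0 / (2 * n1) / (e2_a1**2 - l2 @ l2) / (2 * n4)
    d_c = 1.0 / (2 * n1) / (2 * n2) / ((e1 - e2_c)**2 - l4 @ l4)
    return d_a1, d_c

d_a1, d_c = denominator_factors(1e-3)
# Each factor by itself diverges like 1/l_T^2, but the divergent parts
# cancel in the sum, leaving a remainder of order one as in Eq. (43).
assert abs(d_a1) > 1.0e4 and abs(d_c) > 1.0e4
assert abs(d_a1 + d_c) < 1.0e-3 * abs(d_a1)
```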
As for the measurement functions, in general we need to consider the difference between the measurement functions with $`n`$ and $`n+1`$ particles in the final state, $$F(\stackrel{}{\mathrm{}}_T)=𝒮_{n+1}(\stackrel{}{k}_1,\mathrm{},\stackrel{}{k}_{n-1},x\stackrel{}{k}_n-\stackrel{}{\mathrm{}}_T,(1-x)\stackrel{}{k}_n+\stackrel{}{\mathrm{}}_T)-𝒮_n(\stackrel{}{k}_1,\mathrm{},\stackrel{}{k}_{n-1},\stackrel{}{k}_n).$$ (44) Assuming that $`F`$ is an analytic function of $`\stackrel{}{\mathrm{}}_T`$, it will have an expansion around $`\stackrel{}{\mathrm{}}_T=0`$ of the form $$F(\stackrel{}{\mathrm{}}_T)=a+b_i\mathrm{}_T^i+c_{ij}\mathrm{}_T^i\mathrm{}_T^j+\mathrm{}.$$ (45) Infrared safety requires that $`a=0`$. If $`b_i\ne 0`$ then $`F`$ vanishes on a surface that intersects the point $`\stackrel{}{\mathrm{}}_T=0`$. Measurement functions $`𝒮`$ with this property would define an infrared safe measurement, but I do not know of any example in common use. More typically, $`F`$ is non-zero in a neighborhood of $`\stackrel{}{\mathrm{}}_T=0`$ while vanishing at $`\stackrel{}{\mathrm{}}_T=0`$. Then both $`a`$ and the $`b_i`$ must vanish and the $`c_{ij}`$ should be a positive definite (or negative definite) matrix. Thus, for typical measurement functions, $$F(\stackrel{}{\mathrm{}}_T)=𝒪(\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}})$$ (46) as $`\stackrel{}{\mathrm{}}_T\to 0`$. Then the integrand does not have collinear singularities. For an atypical measurement function with $`b_i\ne 0`$, one would be left with an integrable singularity of the form $`\stackrel{}{b}\cdot \stackrel{}{\mathrm{}}_T/\stackrel{}{\mathrm{}}_T^{\mathrm{\hspace{0.17em}2}}`$. The current version of the computer code has a mechanism to deal with this contingency, but I do not discuss it here since I know of no case in which it is needed. ### B The soft singularities In this subsection, we examine the soft singularity at $`\stackrel{}{\mathrm{}}_2=0`$. Let us concentrate to begin with on the cut graph shown in Fig. 3(a). 
When we perform the integration over the energy circulating in the virtual loop, there is a contribution from the term in which the propagator carrying momentum $`\mathrm{}_1^\mu `$ is put on shell, as in Fig. 4(a1). This contribution is $`𝒢_{a1}`$ in Eq. (21). Let us examine this contribution in the limit $`\stackrel{}{\mathrm{}}_2\to 0`$. Expanding in powers of $`\stackrel{}{\mathrm{}}_2`$, we have $$\mathrm{}_1^0=|\stackrel{}{\mathrm{}}_1|=|\stackrel{}{\mathrm{}}_4+\stackrel{}{\mathrm{}}_2|=|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_2|\stackrel{}{u}_2\cdot \stackrel{}{u}_4+\mathrm{},$$ (47) where we adopt the notation $$\stackrel{}{u}_J=\stackrel{}{\mathrm{}}_J/|\stackrel{}{\mathrm{}}_J|.$$ (48) Then $$\mathrm{}_2^2=-|\stackrel{}{\mathrm{}}_2|^2[1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)^2]+\mathrm{}$$ (49) and $$\mathrm{}_3^2=2|\stackrel{}{\mathrm{}}_5||\stackrel{}{\mathrm{}}_2|\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)+\mathrm{}.$$ (50) Thus $`𝒢_{a1}`$ $`≈`$ $`-𝒮_2{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{|\stackrel{}{\mathrm{}}_2|^2[1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)^2]}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5||\stackrel{}{\mathrm{}}_2|\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)+iϵ}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_4|}}{\displaystyle \frac{1}{2|\stackrel{}{\mathrm{}}_5|}}`$ (51) $`=`$ $`-{\displaystyle \frac{𝒮_2}{16|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}}{\displaystyle \frac{1}{|\stackrel{}{\mathrm{}}_2|^3}}{\displaystyle \frac{1}{1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)^2}}{\displaystyle \frac{1}{\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)+iϵ}}.`$ (52) We proceed in this fashion to evaluate the contribution corresponding to Fig. 4(a2). Then we evaluate the contribution of Fig. 4(a3), but we find that this contribution is not singular as $`\stackrel{}{\mathrm{}}_2\to 0`$. Adding the three contributions, we obtain the net integrand for the cut graph of Fig. 
3(a) in the soft limit $`\stackrel{}{\mathrm{}}_2\to 0`$: $$𝒢_a≈-\frac{𝒮_2}{32|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1}{1+\stackrel{}{u}_2\cdot \stackrel{}{u}_4}\frac{1}{1-\stackrel{}{u}_2\cdot \stackrel{}{u}_5}\frac{2-\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)}{\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)+iϵ}.$$ (53) Some comments are in order here. First, we have included the leading term, with a $`1/|\stackrel{}{\mathrm{}}_2|^3`$ singularity, and dropped less singular terms. If we decompose the integration over $`\stackrel{}{\mathrm{}}_2`$ into $`\int 𝑑\mathrm{\Omega }_2|\stackrel{}{\mathrm{}}_2|^2d|\stackrel{}{\mathrm{}}_2|`$, then a $`1/|\stackrel{}{\mathrm{}}_2|^3`$ singularity produces a logarithmic divergence in the integration over $`|\stackrel{}{\mathrm{}}_2|`$. The less singular terms will lead to a finite integration over $`|\stackrel{}{\mathrm{}}_2|`$, although the integration $`\int 𝑑\mathrm{\Omega }`$ over the angles $`\stackrel{}{u}_2`$ can still be divergent. There are, in fact, singularities in the angular integration. The factor $`1/[1-\stackrel{}{u}_2\cdot \stackrel{}{u}_5]`$ is singular when $`\stackrel{}{\mathrm{}}_2`$ is collinear with $`\stackrel{}{\mathrm{}}_5`$, while the factor $`1/[1+\stackrel{}{u}_2\cdot \stackrel{}{u}_4]`$ is singular when $`\stackrel{}{\mathrm{}}_2`$ is collinear with $`\stackrel{}{\mathrm{}}_4`$. These singularities produce logarithmically divergent integrations over $`\stackrel{}{u}_2`$. However, the analysis of the previous subsection shows that the collinear singularities cancel among the cuts of our graph. There is also a singularity on the plane $`\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)=0`$. This is the scattering singularity on the ellipse $`|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|=|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|`$. 
This ellipse passes through the point $`\stackrel{}{\mathrm{}}_2=0`$ and the plane tangent to the ellipse at this point is the plane $`\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)=0`$. I will have more to say about this singularity later. Here we note simply that it comes with an $`iϵ`$ prescription, which has been preserved in Eq. (53). We now consider the cut graph shown in Fig. 3(b). Again, there are three contributions to consider, corresponding to the diagrams $`(b5)`$, $`(b2)`$ and $`(b4)`$ in Fig. 4. Adding the three contributions, we obtain the net integrand for the cut graph of Fig. 3(b) in the soft limit $`\stackrel{}{\mathrm{}}_2\to 0`$: $$𝒢_b≈\frac{𝒮_2}{32|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1}{1-\stackrel{}{u}_2\cdot \stackrel{}{u}_4}\frac{1}{1+\stackrel{}{u}_2\cdot \stackrel{}{u}_5}\frac{2+\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)}{\stackrel{}{u}_2\cdot (\stackrel{}{u}_5-\stackrel{}{u}_4)+iϵ}.$$ (54) As in Eq. (53), there are a scattering singularity and two collinear singularities. However, the signs that indicate the location of the collinear singularities are reversed compared to Eq. (53). If we add $`𝒢_a`$ and $`𝒢_b`$ we obtain $$𝒢_a+𝒢_b≈-\frac{𝒮_2}{16|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1+(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)(\stackrel{}{u}_2\cdot \stackrel{}{u}_5)}{[1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)^2][1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_5)^2]}.$$ (55) Thus, the overall $`1/|\stackrel{}{\mathrm{}}_2|^3`$ singularity remains and the collinear singularities remain, but the scattering singularities cancel in the soft limit, $`\stackrel{}{\mathrm{}}_2\to 0`$, between the two cuts that leave virtual subgraphs. There are two more cut graphs to consider. The graph shown in Fig. 
3(c) gives $$𝒢_c≈\frac{𝒮_3}{32|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1}{[1+\stackrel{}{u}_2\cdot \stackrel{}{u}_4]}\frac{1}{[1+\stackrel{}{u}_2\cdot \stackrel{}{u}_5]}.$$ (56) The graph shown in Fig. 3(d) gives $$𝒢_d≈\frac{𝒮_3}{32|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1}{[1-\stackrel{}{u}_2\cdot \stackrel{}{u}_4]}\frac{1}{[1-\stackrel{}{u}_2\cdot \stackrel{}{u}_5]}.$$ (57) Adding these together, we find $$𝒢_c+𝒢_d≈\frac{𝒮_3}{16|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1+(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)(\stackrel{}{u}_2\cdot \stackrel{}{u}_5)}{[1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_4)^2][1-(\stackrel{}{u}_2\cdot \stackrel{}{u}_5)^2]}.$$ (58) We note that when we add the contributions of the cuts which leave virtual subgraphs to the contributions of the cuts which have no virtual subgraphs, the leading soft singularity cancels: $$𝒢_a+𝒢_b+𝒢_c+𝒢_d\to 0.$$ (59) That is, after cancellation, the overall singularity is at worst proportional to $`1/|\stackrel{}{\mathrm{}}_2|^2`$. It is thus an integrable singularity provided that all of the singularities of the angular integration over $`\stackrel{}{u}_2`$ cause no problems. The cancellation of the leading soft singularity is built into the structure of Feynman diagrams, so that we do not have to do anything special to make it happen. However, there is a certain subtlety in arranging for the singularities in the angular integrations to be convergent in a Monte Carlo integration. Thus, we will return to the cancellation of the soft singularity after we have discussed contour deformations in the following section. ## V The scattering singularity and contour deformation Consider the contribution from Fig. 4(a1), as given in Eq. (21). 
There is a factor $$\frac{1}{(|\stackrel{}{\mathrm{}}_1|-|\stackrel{}{\mathrm{}}_4|-|\stackrel{}{\mathrm{}}_5|)^2-\stackrel{}{\mathrm{}}_3^{\mathrm{\hspace{0.17em}2}}+iϵ}=\frac{1}{(|\stackrel{}{\mathrm{}}_3|+|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|-|\stackrel{}{\mathrm{}}_1|)(|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|-|\stackrel{}{\mathrm{}}_1|-|\stackrel{}{\mathrm{}}_3|+iϵ)},$$ (60) which has a singularity when $`|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|=|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|`$. In an analysis using time-ordered perturbation theory, the singular factor emerges from the energy denominator associated with the intermediate state consisting of partons 1 and 3, $$E_F-E(\stackrel{}{\mathrm{}}_2)+iϵ,$$ (61) where $`E_F=|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5|`$ and $$E(\stackrel{}{\mathrm{}}_2)=|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|=|\stackrel{}{\mathrm{}}_4+\stackrel{}{\mathrm{}}_2|+|\stackrel{}{\mathrm{}}_2-\stackrel{}{\mathrm{}}_5|.$$ (62) Thus the singularity appears when the momenta are right for particles 1 and 3 to be on-shell and scatter to produce the final state particles 4 and 5. The contribution from Fig. 4(b4) has a scattering singularity at the same place as that from the cut diagram (a1). However, these singularities do not cancel in general because the functions $`𝒮_2(\stackrel{}{\mathrm{}}_4,\stackrel{}{\mathrm{}}_5)`$ and $`𝒮_2(\stackrel{}{\mathrm{}}_1,\stackrel{}{\mathrm{}}_3)`$ do not match. We thus have a problem if we would like to perform the integration numerically. We notice, however, that the singularity is protected by an $`iϵ`$ prescription. The $`iϵ`$ in the denominator tells us what to do in an analytic calculation and it also tells us what to do in a numerical calculation: we need to deform the integration contour. We are integrating over a loop momentum $`\stackrel{}{\mathrm{}}_2`$. 
Let us replace $`\stackrel{}{\mathrm{}}_2`$ by a complex momentum $`\stackrel{}{\mathrm{}}_{2,c}=\stackrel{}{\mathrm{}}_2+i\stackrel{}{\kappa }`$, where $`\stackrel{}{\kappa }`$ is a function, which remains to be determined, of $`\stackrel{}{\mathrm{}}_2`$. Then as we integrate over the real vector $`\stackrel{}{\mathrm{}}_2`$, we are integrating over a contour in the space of the complex vector $`\stackrel{}{\mathrm{}}_{2,c}`$. When we deform the original contour $`\stackrel{}{\mathrm{}}_{2,c}=\stackrel{}{\mathrm{}}_2`$ to the new contour $`\stackrel{}{\mathrm{}}_{2,c}=\stackrel{}{\mathrm{}}_2+i\stackrel{}{\kappa }`$, the integral does not change provided that we do not cross any points where the integrand is singular and provided that we include a Jacobian $$𝒥(\stackrel{}{\mathrm{}}_2)=det\left(\frac{\partial \mathrm{}_{2,c}^i}{\partial \mathrm{}_2^j}\right).$$ (63) There are some subtleties associated with this; the relevant theorem is proved in the Appendix. We need to choose $`\stackrel{}{\kappa }`$ as a function of $`\stackrel{}{\mathrm{}}_2`$. Consider first the direction of $`\stackrel{}{\kappa }`$. On the deformed contour, the energy denominator (61) has the form $$E_F-E(\stackrel{}{\mathrm{}}_2+i\stackrel{}{\kappa })+iϵ.$$ (64) In order to fix the direction of deformation, it is useful to consider what happens when we deform the contour just a little way from the real $`\stackrel{}{\mathrm{}}_2`$ space. For small $`\kappa `$, we have $$E(\stackrel{}{\mathrm{}}_2+i\stackrel{}{\kappa })≈|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3|+i\stackrel{}{\kappa }\cdot \stackrel{}{w},$$ (65) where $$\stackrel{}{w}=\frac{\stackrel{}{\mathrm{}}_1}{|\stackrel{}{\mathrm{}}_1|}+\frac{\stackrel{}{\mathrm{}}_3}{|\stackrel{}{\mathrm{}}_3|}.$$ (66) Thus the energy denominator is $`E_F-E(\stackrel{}{\mathrm{}}_2)-i\stackrel{}{\kappa }\cdot \stackrel{}{w}+iϵ`$ for small $`\stackrel{}{\kappa }`$. 
In order to keep on the proper side of the singularity, we want $`\stackrel{}{\kappa }\cdot \stackrel{}{w}`$ to be negative. The simplest way to ensure this is to choose $`\stackrel{}{\kappa }`$ opposite to the direction of $`\stackrel{}{w}`$. Thus we choose $$\stackrel{}{\kappa }=-D(\stackrel{}{\mathrm{}}_2)\stackrel{}{w},D(\stackrel{}{\mathrm{}}_2)\ge 0.$$ (67) Then the singular factor is approximately $$\frac{1}{E_F-E(\stackrel{}{\mathrm{}}_2)+iD(\stackrel{}{\mathrm{}}_2)\stackrel{}{w}^{\mathrm{\hspace{0.17em}2}}}$$ (68) for a small deformation. For a larger deformation, it is not so simple to see that we stay on the correct side of the singularity, but it is easy to check numerically. The next question is: how should we choose $`D(\stackrel{}{\mathrm{}}_2)`$? We want $`D`$ not to be small when $`\stackrel{}{\mathrm{}}_2`$ is near the surface $`E(\stackrel{}{\mathrm{}}_2)=E_F`$ in order that the integrand not be large there. We want $`D(\stackrel{}{\mathrm{}}_2)`$ not to grow as $`\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}\to \mathrm{}`$ in order to satisfy the conditions for the theorem that deforming the contour does not change the value of the integral. Since there is no reason to keep any finite contour deformation for large $`\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}`$, we will simply choose to have $`D(\stackrel{}{\mathrm{}}_2)\to 0`$ as $`\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}\to \mathrm{}`$. There is another condition that $`D(\stackrel{}{\mathrm{}}_2)`$ should obey: it should vanish at points where $`𝒢`$ has collinear and soft singularities. To see why takes some discussion. Consider the contributions from three-parton cuts, for which there is no virtual loop. For these contributions, we do not want to deform the contours. This is because if any of the loop momenta were complex then at least one of the momenta of the final state particles would be complex. 
In principle, one could have complex momenta for final state particles as long as the measurement functions $`𝒮_n(\stackrel{}{k}_1,\mathrm{},\stackrel{}{k}_n)`$ are analytic. However, I have in mind applications in which the numerical integration program acts as a subroutine that produces “events” with final state particle momenta $`\{\stackrel{}{k}_1,\mathrm{},\stackrel{}{k}_n\}`$ and weights computed by the subroutine. Then the events could be the input to, for example, a Monte Carlo program that generates parton showers and hadronization. Surely complex momenta for the final state particles are not desirable. Now recall that there is a cancellation among the contributions $`𝒢_C`$ from different cuts $`C`$ at points where the $`𝒢_C`$ have collinear and soft singularities. Evidently, if we deform the contour for a contribution with a virtual graph but do not deform the contour for the canceling contribution, then the cancellation can be spoiled. We can avoid spoiling the cancellation if we make the contours match at the singularity. That is, $`D(\stackrel{}{\mathrm{}}_2)`$ should vanish at the points where the $`𝒢_C`$ have collinear and soft singularities. We also need to determine how fast $`D(\stackrel{}{\mathrm{}}_2)`$ needs to approach zero as $`\stackrel{}{\mathrm{}}_2`$ approaches a singularity. Since the integration is in a multidimensional complex space, we need an analysis that makes use of the multidimensional contour deformation theorem. This analysis is given in the Appendix. Here, I present a simpler one-dimensional analysis that can serve to clarify the issue. Consider the following toy integral, $$I=\int _0^{x_{\mathrm{max}}}\frac{dx}{x}\left\{\frac{f_V(x)}{x-1+iϵ}+f_R(x)\right\}.$$ (69) Here the endpoint singularity at $`x=0`$ plays the role of the collinear or soft singularities. The function $`f_V/(x-1+iϵ)`$ plays the role of the integrand for the contribution with a virtual subgraph. 
In this contribution, there is a singularity at $`x=1`$ that comes with an $`iϵ`$ prescription. The function $`f_R`$ plays the role of the integrand for the contribution with no virtual subgraph. We assume that $`f_V(z)`$ and $`f_R(z)`$ are analytic functions. We also assume that $`f_V(0)=f_R(0)`$, so that the apparent singularity at $`x=0`$ cancels. Now the $`iϵ`$ prescription on the singularity at $`x=1`$ tells us that we can deform the integration contour into the upper half plane, replacing $`x`$ by $`z=x+iy(x)`$ where $`y(0)=y(x_{\mathrm{max}})=0`$. Thus $$I=_0^{x_{\mathrm{max}}}𝑑x\frac{1+iy^{}(x)}{x+iy(x)}\left\{\frac{f_V\left(x+iy(x)\right)}{x1+iy(x)}+f_R\left(x+iy(x)\right)\right\}.$$ (70) Suppose, however, that we want to keep the contour for $`f_R`$ on the real axis. Then we might hope that $`I=\stackrel{~}{I}`$, where $$\stackrel{~}{I}=\underset{x_{\mathrm{min}}0}{lim}_{x_{\mathrm{min}}}^{x_{\mathrm{max}}}𝑑x\left\{\frac{1+iy^{}(x)}{x+iy(x)}\frac{f_V\left(x+iy(x)\right)}{x1+iy(x)}+\frac{f_R\left(x\right)}{x}\right\}.$$ (71) The difference is $$\stackrel{~}{I}I=\underset{x_{\mathrm{min}}0}{lim}_{x_{\mathrm{min}}}^{x_{\mathrm{max}}}𝑑x\left\{\frac{f_R\left(x\right)}{x}[1+iy^{}(x)]\frac{f_R\left(x+iy(x)\right)}{x+iy(x)}\right\}.$$ (72) If we note that $`[f_R(z)f_R(0)]/z`$ is an analytic function even at $`z=0`$ and that the integral of an analytic function around a closed contour vanishes, we have $$0=\underset{x_{\mathrm{min}}0}{lim}_{x_{\mathrm{min}}}^{x_{\mathrm{max}}}𝑑x\left\{\frac{f_R\left(x\right)f_R\left(0\right)}{x}[1+iy^{}(x)]\frac{f_R\left(x+iy(x)\right)f_R\left(0\right)}{x+iy(x)}\right\}.$$ (73) Subtracting these and performing the integral, we have $$\stackrel{~}{I}I=f_R(0)\underset{x_{\mathrm{min}}0}{lim}_{x_{\mathrm{min}}}^{x_{\mathrm{max}}}𝑑x\left\{\frac{1}{x}\frac{1+iy^{}(x)}{x+iy(x)}\right\}=f_R(0)\underset{x_{\mathrm{min}}0}{lim}\mathrm{log}\left(1+i\frac{y(x_{\mathrm{min}})}{x_{\mathrm{min}}}\right).$$ (74) We can draw two conclusions. 
First, as long as $`y(x)0`$ at least as fast as $`x^1`$ as $`x0`$, we will realize the cancellation of the $`x0`$ singularity and obtain a finite value for $`\stackrel{~}{I}`$. Second, if we choose $`y(x)x`$ as $`x0`$, $`\stackrel{~}{I}`$ will be finite, but it will not be equal to the correct result $`I`$. In order to get a result $`\stackrel{~}{I}`$ that is not only finite but also correct, we need $`y(x)/x0`$ as $`x0`$. A convenient choice is $`y(x)x^2`$ as $`x0`$. We conclude from the multidimensional extension of this analysis, given in the Appendix, that as $`\stackrel{}{\mathrm{}}_2`$ approaches a singularity, $`D(\stackrel{}{\mathrm{}}_2)`$ should approach zero quadratically with the distance to the singularity. We now use the qualitative criteria just developed to give a specific choice of deformation. We have chosen $$\stackrel{}{\mathrm{}}_{2,c}=\stackrel{}{\mathrm{}}_2iD(\stackrel{}{\mathrm{}}_2)\stackrel{}{w},$$ (75) where $`\stackrel{}{w}`$, Eq. (66), specifies the direction of deformation. We now specify a deformation function $`D(\stackrel{}{\mathrm{}}_2)`$ that satisfies our criteria. We write $`D`$ in the form $$D=CG.$$ (76) The factor $`C`$ is designed to ensure that the deformation vanishes quadratically near the collinear and soft singularities. The factor $`G`$ is designed to turn the deformation off for large $`\stackrel{}{\mathrm{}}_2`$. These factors are explained below and defined precisely in Eqs. (80) and (83). First, we discuss the factor $`C`$. We want the deformation to vanish at the line $`\stackrel{}{\mathrm{}}_2=x\stackrel{}{\mathrm{}}_4`$ with $`0x1`$, where the amplitude has a collinear singularity. (Since $`\stackrel{}{\mathrm{}}_4=\stackrel{}{\mathrm{}}_1\stackrel{}{\mathrm{}}_2`$, this line is also $`\stackrel{}{\mathrm{}}_1=\lambda \stackrel{}{\mathrm{}}_2`$ with $`0<\lambda <\mathrm{}`$.)
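The conclusions of this toy analysis are easy to check numerically. The sketch below is my own illustration, not part of the original calculation: it takes f_V(x) = exp(-x) and f_R(x) = 1/(1+x), so that f_V(0) = f_R(0), and compares a deformation that is quadratic near x = 0 with one that is only linear there.

```python
import numpy as np

# Toy model of Eqs. (69)-(74): f_V and f_R are analytic with f_V(0) = f_R(0),
# so the endpoint singularity at x = 0 cancels between the two terms.
# These particular functions and parameter values are illustrative choices.
f_V = lambda z: np.exp(-z)       # stands in for the virtual contribution
f_R = lambda z: 1.0 / (1.0 + z)  # stands in for the real contribution
x_max, lam = 2.0, 0.5

def deformed_integral(y, yp, deform_fR):
    """Midpoint-rule evaluation of Eq. (70) (deform_fR=True) or Eq. (71)."""
    n = 400_000
    x = (np.arange(n) + 0.5) * x_max / n
    z = x + 1j * y(x)
    jac = 1.0 + 1j * yp(x)
    virt = jac / z * f_V(z) / (z - 1.0)  # contour misses x = 1: no i*eps needed
    real = jac / z * f_R(z) if deform_fR else f_R(x) / x
    return np.sum(virt + real) * x_max / n

# y ~ x^2 near x = 0 (allowed) and y ~ x near x = 0 (not allowed);
# both vanish at the endpoints x = 0 and x = x_max.
y_good, yp_good = (lambda x: lam * x**2 * (x_max - x),
                   lambda x: lam * (2 * x * x_max - 3 * x**2))
y_bad, yp_bad = (lambda x: lam * x * (x_max - x),
                 lambda x: lam * (x_max - 2 * x))

I_ref = deformed_integral(y_good, yp_good, deform_fR=True)   # Eq. (70): exact I
I_tilde_good = deformed_integral(y_good, yp_good, deform_fR=False)
I_tilde_bad = deformed_integral(y_bad, yp_bad, deform_fR=False)

print(abs(I_tilde_good - I_ref))  # small: the quadratic deformation gives I
print(I_tilde_bad - I_ref)        # finite offset log(1 + i*lam*x_max), Eq. (74)
```

With the quadratic deformation, the partially deformed integral reproduces the fully deformed one; with the linear deformation it is finite but shifted by log(1 + i y'(0)), just as Eq. (74) predicts.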
Define $$d_{12}=\frac{\left||\stackrel{}{\mathrm{}}_2|\stackrel{}{\mathrm{}}_1+|\stackrel{}{\mathrm{}}_1|\stackrel{}{\mathrm{}}_2\right|}{|\stackrel{}{\mathrm{}}_1\stackrel{}{\mathrm{}}_2|}=\frac{\left||\stackrel{}{\mathrm{}}_2|\stackrel{}{\mathrm{}}_1+|\stackrel{}{\mathrm{}}_1|\stackrel{}{\mathrm{}}_2\right|}{|\stackrel{}{\mathrm{}}_4|}.$$ (77) This function is zero on the line $`\stackrel{}{\mathrm{}}_2=x\stackrel{}{\mathrm{}}_4`$ with $`0x1`$, and furthermore, it vanishes linearly as $`\stackrel{}{\mathrm{}}_2`$ approaches this line. Similarly, we want the deformation to vanish on the line $`\stackrel{}{\mathrm{}}_2=x\stackrel{}{\mathrm{}}_5`$ with $`0x1`$, where the amplitude has its other collinear singularity. The function $`d_{23}`$, where $$d_{23}=\frac{\left||\stackrel{}{\mathrm{}}_3|\stackrel{}{\mathrm{}}_2+|\stackrel{}{\mathrm{}}_2|\stackrel{}{\mathrm{}}_3\right|}{|\stackrel{}{\mathrm{}}_2\stackrel{}{\mathrm{}}_3|}=\frac{\left||\stackrel{}{\mathrm{}}_3|\stackrel{}{\mathrm{}}_2+|\stackrel{}{\mathrm{}}_2|\stackrel{}{\mathrm{}}_3\right|}{|\stackrel{}{\mathrm{}}_5|},$$ (78) vanishes linearly as $`\stackrel{}{\mathrm{}}_2`$ approaches this line. (To see this, use $`\stackrel{}{\mathrm{}}_5=\stackrel{}{\mathrm{}}_2\stackrel{}{\mathrm{}}_3`$.) Let $$d=\mathrm{min}(d_{12},d_{23}).$$ (79) Then $`d`$ vanishes linearly with the distance to either of the collinear singularities. It also vanishes linearly with the distance to the soft singularity at $`\stackrel{}{\mathrm{}}_2=0`$. Now, we have seen that the deformation should vanish quadratically with the distance to any of the singularities. We can achieve this by letting $$C(d^2)=\frac{\alpha d^2}{1+4\beta d^2/(|\mathrm{}_4|+|\mathrm{}_5|+|\stackrel{}{q}|)^2},$$ (80) where $`\alpha `$ and $`\beta `$ are adjustable dimensionless parameters. Note that, for large $`d`$, $`C(d^2)`$ approaches a constant. Next, we discuss the factor $`G`$. 
We want to ensure that the contour deformation vanishes for large $`\stackrel{}{\mathrm{}}_2`$. Let us define $$a=|\stackrel{}{\mathrm{}}_1|+|\stackrel{}{\mathrm{}}_3||\stackrel{}{q}|$$ (81) and $$A=|\stackrel{}{\mathrm{}}_4|+|\stackrel{}{\mathrm{}}_5||\stackrel{}{q}|.$$ (82) Then the singularity that we are avoiding by means of contour deformation is at $`a=A`$. We can turn the deformation off for $`aA`$ by setting $$G(a)=\frac{1}{A+\gamma a},$$ (83) where $`\gamma `$ is an adjustable dimensionless parameter. There is a subsidiary reason for this choice. At the singularity, $`G=1/[(1+\gamma )A]`$. The factor $`1/A`$ serves to enhance the deformation in the case that $`\stackrel{}{\mathrm{}}_4`$ and $`\stackrel{}{\mathrm{}}_5`$ are nearly collinear, in which case $`d`$ is small on the ellipse $`a=A`$ and the deformation would otherwise be too small. The reader will note that, while there is a certain uniqueness in defining the direction of the deformation in Eq. (75) to be given by the vector $`\stackrel{}{w}`$, Eq. (66), the normalization $`D=CG`$ with $`C`$ and $`G`$ given in Eqs. (80) and (83) is rather ad hoc. Within the requirements that the deformation should vanish quadratically at the collinear and soft singularities and should vanish for large $`\stackrel{}{\mathrm{}}_2`$, many other choices would be possible. The choice given here is used in the current version of the code . Surely there is some other choice that is better. ## VI The Monte Carlo integration After the contour deformations, we have an integral of the form $$=𝑑\mathrm{}\underset{C}{}𝒥(C;\mathrm{})g(C;\mathrm{}+i\kappa (C;\mathrm{})),$$ (84) where we use $`\mathrm{}`$ for the loop momenta collectively, $`\mathrm{}=\{\stackrel{}{\mathrm{}}_2,\stackrel{}{\mathrm{}}_4\}`$. The index $`C`$ labels the cut, $`a`$, $`b`$, $`c`$, or $`d`$ in Fig. 3. 
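As a concrete illustration of the construction of the preceding section, here is a small sketch of my own, with the formulas implemented exactly as printed and with the made-up parameter values alpha = beta = gamma = 1, of the distance measure d of Eqs. (77)-(79) and the factors C and G of Eqs. (80) and (83):

```python
import numpy as np

# Sketch (my own) of the ingredients of the deformation magnitude D = C*G of
# Eqs. (76)-(83), with d12, d23, C and G implemented exactly as printed;
# the parameter values alpha = beta = gamma = 1 are illustrative, not
# prescribed by the source.
alpha, beta, gamma = 1.0, 1.0, 1.0

def d_collinear(l1, l2, l3):
    """The distance measure d = min(d12, d23) of Eqs. (77)-(79)."""
    n1, n2, n3 = (np.linalg.norm(v) for v in (l1, l2, l3))
    d12 = np.linalg.norm(n2 * l1 + n1 * l2) / np.linalg.norm(l1 - l2)
    d23 = np.linalg.norm(n3 * l2 + n2 * l3) / np.linalg.norm(l2 - l3)
    return min(d12, d23)

def C_factor(d, scale):
    """Eq. (80); `scale` stands for |l4| + |l5| + |q|."""
    return alpha * d**2 / (1.0 + 4.0 * beta * d**2 / scale**2)

def G_factor(a, A):
    """Eq. (83): turns the deformation off for a >> A."""
    return 1.0 / (A + gamma * a)

# C vanishes quadratically for small d and levels off for large d,
# while G decreases monotonically in a.
print(C_factor(1e-6, scale=10.0) / 1e-12)   # close to alpha
print(C_factor(1e3, scale=10.0))            # close to alpha*scale**2/(4*beta)
print(G_factor(2.0, A=1.0) < G_factor(1.0, A=1.0))
```

The two limits of C are the properties used in the text: a quadratic zero at the singularities and an approximately constant value far from them.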
There is a contour deformation that depends on the cut, as specified by $`\kappa (C;\mathrm{})`$, and there is a corresponding jacobian $`𝒥(C;\mathrm{})`$, Eq. (63). Define $$f(\mathrm{})=\mathrm{}\left\{\underset{C}{}𝒥(C;\mathrm{})g(C;\mathrm{}+i\kappa (C;\mathrm{}))\right\}.$$ (85) We know that $``$ is real, so $$=𝑑\mathrm{}f(\mathrm{}).$$ (86) To perform the integration, we use the Monte Carlo method. We choose points $`\mathrm{}`$ with a density $`\rho (\mathrm{})`$, with $$𝑑\mathrm{}\rho (\mathrm{})=1.$$ (87) After choosing $`N`$ points $`\mathrm{}_1,\mathrm{},\mathrm{}_N`$, we have an estimate for the integral: $$_N=\frac{1}{N}\underset{i}{}\frac{f(\mathrm{}_i)}{\rho (\mathrm{}_i)}.$$ (88) This is an approximation for the integral in the sense that if we repeat the procedure a lot of times the expectation value for $`_N`$ is $$_N=.$$ (89) The expected r.m.s. error is $``$, where $$^2=\left(_N\right)^2=\frac{1}{N}𝑑\mathrm{}\frac{f(\mathrm{})^2}{\rho (\mathrm{})}\frac{^2}{N}.$$ (90) One can rewrite this as $$^2=\frac{1}{N}𝑑\mathrm{}\rho (\mathrm{})\left(\frac{|f(\mathrm{})|}{\rho (\mathrm{})}\stackrel{~}{}\right)^2+\frac{\stackrel{~}{}^2^2}{N},$$ (91) where $$\stackrel{~}{}=𝑑\mathrm{}|f(\mathrm{})|.$$ (92) We see, first of all, that the expected error decreases proportionally to $`1/\sqrt{N}`$. Second, we see that the ideal choice of $`\rho (\mathrm{})`$ would be $`\rho (\mathrm{})=|f(\mathrm{})|/\stackrel{~}{}`$. Of course, it is not possible to choose $`\rho `$ in this way. But we know that $`|f|`$ has singularities at places where propagator momenta vanish and we know the structure of these singularities. We are not really able to choose $`\rho `$ so that $`|f(\mathrm{})|/\rho (\mathrm{})`$ is a constant, but at least we can choose it so that $`|f(\mathrm{})|/\rho (\mathrm{})`$ is not singular at the singularities of $`|f(\mathrm{})|`$. Note that it is easy to combine methods for choosing Monte Carlo points. 
Suppose that we have a recipe for choosing points with a density $`\rho _1`$ that is singular when one propagator momentum vanishes, a recipe for choosing points with a density $`\rho _2`$ that is singular when another propagator momentum vanishes, and in general recipes for choosing points with densities $`\rho _i`$ with several goals in mind. Then we can devote a fraction $`\lambda _i`$ of the points to the choice with density $`\rho _i`$ and obtain a net density $$\rho (\mathrm{})=\underset{i}{}\lambda _i\rho _i(\mathrm{}).$$ (93) ### A The density near where a propagator momentum vanishes Let $`\stackrel{}{\mathrm{}}_J`$ be the momentum of one of the propagators in our graph. We have seen that when particle $`J`$ appears in the final state, there is a factor $`1/|\stackrel{}{\mathrm{}}_J|`$ in the integrand. When propagator $`J`$ is part of a virtual loop, the contribution corresponding to this propagator being put on shell also contains a factor $`1/|\stackrel{}{\mathrm{}}_J|`$. Thus there is a singularity $`1/|\stackrel{}{\mathrm{}}_J|`$ for every propagator in the graph. The analysis given in the introduction to this section indicates that for each propagator $`J`$ one of the terms $`\rho _i`$ in the density function should have a singularity that is at least as strong as $$\rho _i(\mathrm{})1/|\stackrel{}{\mathrm{}}_J|$$ (94) as $`\stackrel{}{\mathrm{}}_J0`$. It is, of course, easy to choose points with a density proportional to $`1/|\stackrel{}{\mathrm{}}_J|^A`$ as $`\stackrel{}{\mathrm{}}_J0`$ as long as $`A<3`$. (The limitation on $`A`$ arises because for $`A3`$ we would have $`𝑑\stackrel{}{\mathrm{}}_J\rho (\mathrm{})=\mathrm{}`$.) Thus it is easy to arrange that the density of points has the requisite singularities. 
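The combination in Eq. (93) is straightforward to use in practice. The following toy sketch (my own example, not from the source) mixes a recipe with a 1/|ℓ| singularity at the origin with a uniform recipe, and uses the combined density as the weight in the estimator of Eq. (88):

```python
import numpy as np

# Toy illustration of Eq. (93) (my own example): combine two sampling recipes,
# rho_1 singular like 1/r at the origin and rho_2 uniform in a ball, and
# estimate the integral of f(l) = exp(-r)*(1 + cos(theta))/r over R^3,
# whose exact value is 4*pi.
rng = np.random.default_rng(0)
N, R, lam1 = 200_000, 5.0, 0.5        # lam1 = fraction of points from rho_1

def random_directions(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Recipe 1: radial density r*exp(-r) (a Gamma(2,1) variate), so the 3D
# density is exp(-r)/(4*pi*r); isotropic angles.
n1 = int(lam1 * N)
r1 = -np.log(rng.random(n1)) - np.log(rng.random(n1))
# Recipe 2: uniform in the ball r < R.
r2 = R * rng.random(N - n1) ** (1.0 / 3.0)
r = np.concatenate([r1, r2])
points = random_directions(N) * r[:, None]

def rho(ell):
    rr = np.linalg.norm(ell, axis=1)
    rho1 = np.exp(-rr) / (4.0 * np.pi * rr)          # normalized to 1
    rho2 = np.where(rr < R, 3.0 / (4.0 * np.pi * R**3), 0.0)
    return lam1 * rho1 + (1.0 - lam1) * rho2         # Eq. (93)

def f(ell):
    rr = np.linalg.norm(ell, axis=1)
    return np.exp(-rr) * (1.0 + ell[:, 2] / rr) / rr

estimate = np.mean(f(points) / rho(points))          # the estimator, Eq. (88)
print(estimate)   # close to 4*pi ~ 12.566
```

Because the singular component rho_1 matches the 1/r behavior of f, the weight f/rho stays bounded and the estimate converges at the 1/sqrt(N) rate discussed above.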
Specifically, we can choose $`\stackrel{}{\mathrm{}}_J`$ with the density $$\stackrel{~}{\rho }(\stackrel{}{\mathrm{}}_J)=\frac{1}{2\pi K_0^3}\frac{1}{\left[1+\left(|\stackrel{}{\mathrm{}}_J|/K_0\right)^2\right]^2}\frac{K_0}{|\stackrel{}{\mathrm{}}_J|},$$ (95) where $`K_0`$ is a momentum scale determined by the other, previously chosen, loop momenta. The singularity when $`\stackrel{}{\mathrm{}}_J0`$ can be more severe than $`1/|\stackrel{}{\mathrm{}}_J|`$, depending on the structure of the graph. Consider first the cases $`J=1,3,4,5`$. Here, the singularities for particular cuts, as given in Eq. (28), are $`1/|\stackrel{}{\mathrm{}}_J|^2`$. However, there is a cancellation after one sums over cuts (as for the singularity for $`\stackrel{}{\mathrm{}}_20`$), leaving a singularity $`1/|\stackrel{}{\mathrm{}}_J|`$. For $`J=2`$ there is a severe singularity of the form $`1/|\stackrel{}{\mathrm{}}_J|^3`$ for particular contributions to Eq. (28). A $`1/|\stackrel{}{\mathrm{}}_2|^3`$ singularity would not be integrable, but, as we have seen in detail, there is a cancellation among the contributions so that only a $`1/|\stackrel{}{\mathrm{}}_2|^2`$ singularity is left. However, it will not do simply to choose $`\rho _i(\mathrm{})1/|\stackrel{}{\mathrm{}}_2|^2`$ because there is also a singularity in the space of the angles of $`\stackrel{}{\mathrm{}}_2`$. It is to this subject that we now turn. ### B The soft parton singularity When two partons can scatter by exchanging a parton before they enter the final state, there is a severe singularity as the momentum of the exchanged parton goes to zero. For our graph, this happens for $`\stackrel{}{\mathrm{}}_20`$. In this subsection, we consider the behavior of the integrand for small $`\stackrel{}{\mathrm{}}_2`$ as a function of its magnitude $`|\stackrel{}{\mathrm{}}_2|`$ and of its direction $`\stackrel{}{u}_2=\stackrel{}{\mathrm{}}_2/|\stackrel{}{\mathrm{}}_2|`$. The singularity for individual cuts, as given in Eq.
(28), is of the form $`1/|\stackrel{}{\mathrm{}}_2|^3`$ when we let $`|\stackrel{}{\mathrm{}}_2|0`$ with $`\stackrel{}{u}_2`$ held fixed. This singularity is not integrable. However, as we have seen, the leading term cancels when we sum over cuts, leaving a $`1/|\stackrel{}{\mathrm{}}_2|^2`$ singularity for $`|\stackrel{}{\mathrm{}}_2|0`$ with $`\stackrel{}{u}_2`$ fixed. Let us now recall from Eq. (53) that, before we deform the integration contour, the contribution for small $`\stackrel{}{\mathrm{}}_2`$ from the cut $`a`$ of Fig. 3 has, in addition to a factor $`1/|\stackrel{}{\mathrm{}}_2|^3`$, a factor $`1/[\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)+iϵ]`$. That is, there is a singularity on a surface in the space of $`\stackrel{}{\mathrm{}}_2`$ whose tangent plane is the plane perpendicular to $`\stackrel{}{u}_5\stackrel{}{u}_4`$. We have avoided this singularity by deforming the integration contour. However, the deformation vanishes as $`\stackrel{}{\mathrm{}}_20`$. Thus we must face the question of what happens to the cancellation near the soft parton singularity when the contour deformation is taken into account. First, let us recall from Eq. (75) that for cut $`a`$ in Fig. 3 the deformation has the form $$\stackrel{}{\mathrm{}}_{2,c}=\stackrel{}{\mathrm{}}_2iD(\stackrel{}{\mathrm{}}_2)\stackrel{}{w},$$ (96) where $`\stackrel{}{w}=\stackrel{}{u}_1+\stackrel{}{u}_3`$. For $`\stackrel{}{\mathrm{}}_20`$, $$\stackrel{}{w}\stackrel{}{u}_4\stackrel{}{u}_5,$$ (97) while $`D`$ has the form $$D(\stackrel{}{\mathrm{}}_2)\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}\stackrel{~}{D}(\stackrel{}{u}_2).$$ (98) Here $`\stackrel{~}{D}`$ vanishes for $`\stackrel{}{u}_2=\stackrel{}{u}_4`$ and for $`\stackrel{}{u}_2=\stackrel{}{u}_5`$ and is positive elsewhere. 
Thus $$\stackrel{}{\mathrm{}}_{2,c}=|\stackrel{}{\mathrm{}}_2|\left(\stackrel{}{u}_2+i|\stackrel{}{\mathrm{}}_2|\stackrel{~}{D}(\stackrel{}{u}_2)(\stackrel{}{u}_5\stackrel{}{u}_4)+𝒪(\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}})\right).$$ (99) Substituting $`\mathrm{}_{2,c}`$ as given above for $`\stackrel{}{\mathrm{}}_2`$ in Eq. (53), we obtain an expression for the contribution from cut $`a`$ to the integrand on the deformed contour near the soft singularity: $$𝒢_a\frac{𝒮_2}{32|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1}{1+\stackrel{}{u}_2\stackrel{}{u}_4}\frac{1}{1\stackrel{}{u}_2\stackrel{}{u}_5}\frac{2\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)}{\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)+i|\stackrel{}{\mathrm{}}_2|\stackrel{~}{D}(\stackrel{}{u}_2)(\stackrel{}{u}_5\stackrel{}{u}_4)^2}.$$ (100) There are two cases to consider. First, when $`|\stackrel{}{\mathrm{}}_2|0`$ with $`\stackrel{}{u}_2`$ fixed, we can drop the second term in the last denominator. Then $`𝒢_ah(\stackrel{}{u}_2)/|\stackrel{}{\mathrm{}}_2|^3`$, where the function $`h(\stackrel{}{u}_2)`$ is the same as on the undeformed contour. As we have seen, the leading $`1/|\stackrel{}{\mathrm{}}_2|^3`$ terms cancel when one sums over cuts. Thus, as noted earlier, the net integrand behaves like $$𝒢h_{\mathrm{tot}}(\stackrel{}{u}_2)/|\stackrel{}{\mathrm{}}_2|^2$$ (101) when $`|\stackrel{}{\mathrm{}}_2|0`$ with $`\stackrel{}{u}_2`$ fixed. The second case is more interesting. Consider $`|\stackrel{}{\mathrm{}}_2|0`$ and $`\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)0`$ with $`\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)/|\stackrel{}{\mathrm{}}_2|`$ fixed. Then $`𝒢_a`$ is more singular, $`𝒢_a1/|\stackrel{}{\mathrm{}}_2|^4`$. To see what happens in this region, we analyze the contribution from cut $`b`$ in Fig. 3 in the same fashion. 
The contour deformation for cut $`b`$ is different from that for cut $`a`$, but the deformations match at leading order as $`|\stackrel{}{\mathrm{}}_2|0`$. (This is an important feature of the choice of contour deformations.) Thus we can use Eq. (99) in Eq. (54) to obtain $$𝒢_b\frac{𝒮_2}{32|\stackrel{}{\mathrm{}}_4|^2|\stackrel{}{\mathrm{}}_5|^2}\frac{1}{|\stackrel{}{\mathrm{}}_2|^3}\frac{1}{1\stackrel{}{u}_2\stackrel{}{u}_4}\frac{1}{1+\stackrel{}{u}_2\stackrel{}{u}_5}\frac{2+\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)}{\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)+i|\stackrel{}{\mathrm{}}_2|\stackrel{~}{D}(\stackrel{}{u}_2)(\stackrel{}{u}_5\stackrel{}{u}_4)^2}.$$ (102) We see that $`𝒢_b`$ is also proportional to $`1/|\stackrel{}{\mathrm{}}_2|^4`$ in the problematic region. However, since $`\stackrel{}{u}_2\stackrel{}{u}_4`$ is approximately equal to $`\stackrel{}{u}_2\stackrel{}{u}_5`$ in this region, the leading $`1/|\stackrel{}{\mathrm{}}_2|^4`$ behavior cancels when we add $`𝒢_b`$ to $`𝒢_a`$. We are left with the next term, proportional to $`1/|\stackrel{}{\mathrm{}}_2|^3`$. For the two remaining cuts there is no contour deformation. The contributions from these cuts are each proportional to $`1/|\stackrel{}{\mathrm{}}_2|^3`$. Calculation shows that there is no further cancellation. Thus the net behavior of the integrand is $$𝒢1/|\stackrel{}{\mathrm{}}_2|^3$$ (103) when $`|\stackrel{}{\mathrm{}}_2|0`$ and $`\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)0`$ with $`\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)/|\stackrel{}{\mathrm{}}_2|`$ fixed. ### C Density near a soft parton singularity According to the analysis at the beginning of this section, we should choose a density of integration points that has a singularity that is at least as strong as that of $`|𝒢|`$ near the soft singularity at $`\stackrel{}{\mathrm{}}_20`$.
Thus we should choose one of the $`\rho _i`$ so that $`\rho _i(\mathrm{})`$ $``$ $`{\displaystyle \frac{1}{|\stackrel{}{\mathrm{}}_2|^p}},|\stackrel{}{\mathrm{}}_2|0,\stackrel{}{u}_2\mathrm{fixed},`$ (104) $`\rho _i(\mathrm{})`$ $``$ $`{\displaystyle \frac{1}{|\stackrel{}{\mathrm{}}_2|^{p+1}}},|\stackrel{}{\mathrm{}}_2|0,{\displaystyle \frac{\stackrel{}{u}_2(\stackrel{}{u}_5\stackrel{}{u}_4)}{|\stackrel{}{\mathrm{}}_2|}}\mathrm{fixed},`$ (105) with $`p2`$. Specifically, having chosen $`\stackrel{}{\mathrm{}}_4`$ we can choose the remaining loop momentum $`\stackrel{}{\mathrm{}}_2`$ with the density $$\stackrel{~}{\rho }(\stackrel{}{\mathrm{}}_2)=\frac{1}{2\pi K_0^3}\frac{1}{\left[1+\left(|\stackrel{}{\mathrm{}}_2|/K_0\right)^{(3p)}\right]^{(5p)/(3p)}}\left(\frac{K_0}{|\stackrel{}{\mathrm{}}_2|}\right)^p\frac{1}{\mathrm{\Gamma }\sqrt{\mathrm{cos}^2(\theta )+\stackrel{}{\mathrm{}}_2^{\mathrm{\hspace{0.17em}2}}/K_0^2}}.$$ (106) Here $`K_0`$ is a momentum scale determined by $`\stackrel{}{\mathrm{}}_4`$, $`\theta `$ is the angle between $`\stackrel{}{\mathrm{}}_2`$ and $`(\stackrel{}{u}_5\stackrel{}{u}_4)`$, and $$\mathrm{sinh}(\mathrm{\Gamma })=K_0/|\stackrel{}{\mathrm{}}_2|.$$ (107) It is easy to choose points with this density by first choosing $`|\stackrel{}{\mathrm{}}_2|`$, then choosing $`\mathrm{cos}(\theta )`$, and finally choosing the corresponding azimuthal angle $`\varphi `$ with a uniform density. Accounting for the fact that $`\mathrm{\Gamma }\mathrm{log}(|\stackrel{}{\mathrm{}}_2|)`$ for $`\stackrel{}{\mathrm{}}_20`$, we see that $`\stackrel{~}{\rho }`$ will have a singularity stronger than that of $`𝒢`$ provided that $`p>2`$. We will see how this works in a numerical example in the next section. ## VII Numerical Example In this section, I illustrate the principles developed above by means of a particular example. We consider the integral in Eq. (19). 
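Before proceeding, note that the density of Eq. (106) can indeed be generated by elementary inverse transforms, as claimed above. The sketch below is my own reconstruction of those sampling steps, with K_0 and p as illustrative inputs; it draws the magnitude from the radial factor, then cos(theta) from the angular factor, then the azimuth uniformly.

```python
import numpy as np

# Sketch (my own reconstruction) of inverse-transform sampling from the
# density of Eq. (106). K0 and p are illustrative inputs.
rng = np.random.default_rng(1)

def sample_l2(n, K0=2.0, p=2.2):
    # Radial part: with t = (r/K0)**(3-p), the radial probability density of
    # |l2| becomes proportional to (1+t)**(-q), q = (5-p)/(3-p), which has a
    # closed-form inverse cumulative distribution.
    u = rng.random(n)
    t = (1.0 - u) ** (-(3.0 - p) / 2.0) - 1.0
    r = K0 * t ** (1.0 / (3.0 - p))
    # Angular part: pdf(cos theta) ~ 1/(Gamma*sqrt(cos(theta)**2 + (r/K0)**2))
    # with sinh(Gamma) = K0/r, Eq. (107); this also inverts in closed form.
    a = r / K0
    big_gamma = np.arcsinh(1.0 / a)
    c = a * np.sinh(big_gamma * (2.0 * rng.random(n) - 1.0))
    phi = 2.0 * np.pi * rng.random(n)
    s = np.sqrt(np.maximum(0.0, 1.0 - c**2))
    # take the polar axis along u5 - u4, the direction singled out in Eq. (106)
    return np.stack([r * s * np.cos(phi), r * s * np.sin(phi), r * c], axis=1)

pts = sample_l2(200_000)
r = np.linalg.norm(pts, axis=1)
print(np.mean(r < 2.0))  # fraction below K0; analytically 1 - 2**(-2.5) ~ 0.823
```

As |l2| shrinks, Gamma grows and the sampled cos(theta) values pile up near zero, which reproduces the concentration of points near the singular plane that the density is designed to provide.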
We hold $`\stackrel{}{\mathrm{}}_4`$ fixed and consider the integrand as a function of $`\stackrel{}{\mathrm{}}_2`$. In order to simplify the labelling, I define $$\stackrel{}{\mathrm{}}_2\stackrel{}{\mathrm{}}.$$ (108) There is a contribution for each cut $`C`$, with $`C=a,b,c,\mathrm{or}d`$. For each contribution from a cut $`C`$ in which there is a virtual loop, we want to deform the integration contour as discussed in Sec. V. Thus $`\stackrel{}{\mathrm{}}`$ gets replaced by a complex vector $`\stackrel{}{\mathrm{}}_c=\stackrel{}{\mathrm{}}+i\stackrel{}{\kappa }_C`$ and we need to supply a jacobian $`𝒥_C(\stackrel{}{\mathrm{}})`$, Eq. (63). Then the integration over $`\stackrel{}{\mathrm{}}`$ has the form $$𝑑\stackrel{}{\mathrm{}}\underset{C}{}𝒥_C(\stackrel{}{\mathrm{}})𝒢_C(\stackrel{}{\mathrm{}}).$$ (109) The functions $`𝒢_C`$ are the analytic continuations to the deformed contours of the functions given in Eq. (28). As discussed in Sec. VI, the quantity that is relevant for the convergence of the Monte Carlo integration is the integrand divided by the density of points chosen for the integration. In this section, I consider only the integration over $`\stackrel{}{\mathrm{}}`$, so I discuss a choice for the density of integration points $`\rho (\stackrel{}{\mathrm{}})`$ at a fixed $`\stackrel{}{\mathrm{}}_4`$ and display plots of the functions $$F_C(\stackrel{}{\mathrm{}})\frac{1}{\rho (\stackrel{}{\mathrm{}})}𝒥_C(\stackrel{}{\mathrm{}})𝒢_C(\stackrel{}{\mathrm{}})$$ (110) and $`F(\stackrel{}{\mathrm{}})=_CF_C(\stackrel{}{\mathrm{}})`$, as well as plots of the deformation and the density. 
For the numerical examples, I choose $$\stackrel{}{q}=(3,0.5,0)$$ (111) and then take $`\stackrel{}{\mathrm{}}_4`$ at the point $$\stackrel{}{\mathrm{}}_4=(2,1,0).$$ (112) Since $`\stackrel{}{\mathrm{}}_5=\stackrel{}{q}\stackrel{}{\mathrm{}}_4`$ we have $$\stackrel{}{\mathrm{}}_5=(1,0.5,0).$$ (113) The singularities of the functions $`𝒢_C(\stackrel{}{\mathrm{}})`$ lie in the plane of $`\stackrel{}{\mathrm{}}_4`$ and $`\stackrel{}{\mathrm{}}_5`$, that is the $`\mathrm{}_z=0`$ plane. In the plots, I choose $`\mathrm{}_z=0`$, so that we see the effect of the singularities. I plot $`|\stackrel{}{\kappa }_a|`$, $`|\stackrel{}{\kappa }_b|`$, $`\rho `$, $`F_a`$, $`F_b`$, $`F_c+F_d`$ and $`F`$ as functions of $`\mathrm{}_x`$ and $`\mathrm{}_y`$ in the domain $`2.5<\mathrm{}_x<1.0`$ and $`1.0<\mathrm{}_y<2.0`$. Consider first the contour deformation for cut $`a`$, $`\stackrel{}{\mathrm{}}\stackrel{}{\mathrm{}}_c=\stackrel{}{\mathrm{}}+i\stackrel{}{\kappa }_a`$. I take $`\stackrel{}{\kappa }_a=D\stackrel{}{w}`$ as given in Eqs. (75-83) with $`\alpha =\beta =\gamma =1`$. In Fig. 7, I show a graph of $`|\stackrel{}{\kappa }_a|`$ versus $`\mathrm{}_x`$ and $`\mathrm{}_y`$. We see that the deformation is not small. I also display in the figure the lines $`\stackrel{}{\mathrm{}}=x\stackrel{}{\mathrm{}}_4`$ with $`0<x<1`$ and $`\stackrel{}{\mathrm{}}=x\stackrel{}{\mathrm{}}_5`$ with $`0<x<1`$, where the collinear singularities for cut $`a`$ are located. We see that, as desired, the deformation vanishes quadratically as $`\stackrel{}{\mathrm{}}`$ approaches these lines. There is a different contour deformation for cut $`b`$. The same formulas apply as for cut $`a`$ with the replacements $`\stackrel{}{\mathrm{}}_4\stackrel{}{\mathrm{}}_1`$, $`\stackrel{}{\mathrm{}}_5\stackrel{}{\mathrm{}}_3`$, $`\stackrel{}{\mathrm{}}\stackrel{}{\mathrm{}}`$ and with the sign of $`\stackrel{}{\kappa }`$ reversed. 
I show a graph of $`|\stackrel{}{\kappa }_b|`$ versus $`\mathrm{}_x`$ and $`\mathrm{}_y`$ in Fig. 8. (This figure does not look like Fig. 7 because $`\stackrel{}{\mathrm{}}`$ varies with $`\stackrel{}{\mathrm{}}_4`$ held fixed, not with $`\stackrel{}{\mathrm{}}_1`$ held fixed as would be needed if we applied the replacement $`\stackrel{}{\mathrm{}}_4\stackrel{}{\mathrm{}}_1`$ to Fig. 7.) I also display in the figure the lines $`\stackrel{}{\mathrm{}}=\lambda \stackrel{}{\mathrm{}}_4`$ with $`0<\lambda `$ and $`\stackrel{}{\mathrm{}}=\lambda \stackrel{}{\mathrm{}}_5`$ with $`0<\lambda `$, where the collinear singularities for cut $`b`$ are located. The deformation vanishes quadratically as $`\stackrel{}{\mathrm{}}`$ approaches these lines. The jacobian functions $`𝒥_a(\stackrel{}{\mathrm{}})`$ and $`𝒥_b(\stackrel{}{\mathrm{}})`$ associated with the contour deformations are quite unremarkable, so I omit showing them. Consider next the density of integration points. I choose $$\rho (\stackrel{}{\mathrm{}})=0.2\rho _1(\stackrel{}{\mathrm{}}_1)+0.6\rho _2(\stackrel{}{\mathrm{}})+0.2\rho _3(\stackrel{}{\mathrm{}}_3),$$ (114) as shown in Fig. 9. The function $`\rho _1`$ has a mild singularity as $`\stackrel{}{\mathrm{}}_10`$ and is given by Eq. (95) with $`\stackrel{}{\mathrm{}}_1=\stackrel{}{\mathrm{}}_4\stackrel{}{\mathrm{}}`$ and with $`K_0`$ set equal to 2. The function $`\rho _3`$ has a mild singularity as $`\stackrel{}{\mathrm{}}_30`$; I use the same functional form with $`\stackrel{}{\mathrm{}}_3=\stackrel{}{\mathrm{}}\stackrel{}{\mathrm{}}_5`$. For $`\rho _2`$, I use the function given in Eq. (106) with $`K_0=2`$ and with the power $`p`$ taken as $`p=2.2`$. Then $`\rho _2`$ has a strong $`1/[|\stackrel{}{\mathrm{}}|^{2.2}\mathrm{log}(|\stackrel{}{\mathrm{}}|)]`$ singularity as we approach the point $`\stackrel{}{\mathrm{}}=0`$.
Furthermore, the density of points is largest near the plane $`\mathrm{}_y=0`$, the plane that is tangent at $`\stackrel{}{\mathrm{}}=0`$ to the ellipsoidal surface that (if we turn off the deformation) contains the scattering singularity. In order to display the dependence of $`\rho `$ on angle near $`\stackrel{}{\mathrm{}}=0`$, I plot in Fig. 10 the angle dependent factor in $`\rho _2`$, namely the factor $$\frac{|\stackrel{}{\mathrm{}}|/K_0}{\sqrt{\mathrm{cos}^2(\theta )+\stackrel{}{\mathrm{}}^{\mathrm{\hspace{0.17em}2}}/K_0^2}}$$ (115) in Eq. (106), in a region near $`\stackrel{}{\mathrm{}}=0`$. Here $`\mathrm{cos}(\theta )=\mathrm{}_y/|\stackrel{}{\mathrm{}}|`$. We see that the density of integration points is heavily concentrated very near the plane $`\mathrm{}_y=0`$ when $`|\stackrel{}{\mathrm{}}|`$ is small. We are now ready to look at the contribution $`F_a=𝒥_a𝒢_a/\rho `$ to $`F`$ from cut $`a`$. This function is displayed in Fig. 11 with a small rectangle near $`\stackrel{}{\mathrm{}}=0`$ removed from the graph. We see the two collinear singularities, at $`\stackrel{}{\mathrm{}}=x\stackrel{}{\mathrm{}}_4`$ and at $`\stackrel{}{\mathrm{}}=x\stackrel{}{\mathrm{}}_5`$ with $`0<x<1`$. As $`\stackrel{}{\mathrm{}}`$ approaches one of these singularities, $`F_a`$ approaches $`\mathrm{}`$. In the standard method for calculating $``$, we would perform the integration over $`\stackrel{}{\mathrm{}}`$ analytically for the contribution from cut $`a`$. Because of the singularities, the integration is divergent. However, we can get a finite answer if we regulate the integral by working in $`32ϵ`$ spatial dimensions. Then the result contains terms proportional to $`1/ϵ^2`$ and $`1/ϵ`$ as well as a remainder that is finite as $`ϵ0`$. What about the contribution to $`F`$ from cut $`b`$, the other cut for which there is a virtual subgraph? This function is displayed in Fig. 12 with the same small rectangle near $`\stackrel{}{\mathrm{}}=0`$ removed from the graph. 
We see the two collinear singularities, at $`\stackrel{}{\mathrm{}}=\lambda \stackrel{}{\mathrm{}}_4`$ and at $`\stackrel{}{\mathrm{}}=\lambda \stackrel{}{\mathrm{}}_5`$ with $`0<\lambda <\mathrm{}`$. As with $`F_a`$, as $`\stackrel{}{\mathrm{}}`$ approaches one of these singularities, $`F_b`$ approaches $`\mathrm{}`$. There are two cuts, $`c`$ and $`d`$, for which there are no virtual subgraphs. In Fig. 13 I show the contribution $`F_c+F_d`$ from these cuts. We see that $`F_c+F_d`$ approaches $`+\mathrm{}`$ at just the singularities where $`F_a`$ and $`F_b`$ approach $`\mathrm{}`$. In the standard method for QCD calculations, we would perform the integration over $`\stackrel{}{\mathrm{}}`$ partially numerically for the contribution from cuts $`c`$ and $`d`$. Of course, we would have to do something about the collinear and soft singularities, since otherwise we would obtain an infinite result. For instance, if we were to use the phase-space slicing method, we would slice away a small part of the integration domain near the singularities and calculate its contribution analytically in $`32ϵ`$ spatial dimensions in the limit that the region sliced away is small. Then we would be left with a numerical integration of $`𝒢_c+𝒢_d`$ in the remaining region (in exactly 3 spatial dimensions). Evidently, the density of points used in the present method would not do for this purpose; we would need to expend more points on the region near the collinear singularities. We see that the standard method for performing the integrations, in which some parts of the integrations are performed analytically and some are performed numerically, is, of necessity, rather complicated. In the numerical method, we simply combine $`𝒢_a`$, $`𝒢_b`$, $`𝒢_c`$, and $`𝒢_d`$ and integrate numerically. The argument in the preceding sections showed that the contributions from the various cuts cancel as one approaches the collinear singularities. This is illustrated in Fig. 
14, where I plot $`F_a+F_b+F_c+F_d`$ versus $`\mathrm{}_x`$ and $`\mathrm{}_y`$. We see, first of all, that the singular behaviors at the collinear singularities cancel, just as the calculation of Sec. IV showed. There is also a cancellation at the soft singularity at $`\stackrel{}{\mathrm{}}=0`$. There is still a singularity in the integrand at $`\stackrel{}{\mathrm{}}=0`$, but it is integrable and is removed from $`F`$ by choosing a suitable density of points $`\rho `$. Thus $`F`$ remains less than about 20 everywhere. We can see the remnants of the scattering singularity, which is located on an ellipsoidal surface that intersects the plane $`\mathrm{}_z=0`$. If it were not for the contour deformation, $`F`$ would be singular on this surface, approaching $`+\mathrm{}`$ as one approached the surface from one side and approaching $`\mathrm{}`$ as one approached the surface from the other side. Since the deformed contour avoids the singularity, the singular behavior is removed and we are left with a ridge and valley near the ellipsoid. This structure is illustrated in Fig. 15, in which we see a slice through the ridge and valley at $`\mathrm{}_x=0.3`$. Since the amount of deformation vanishes as one approaches $`\stackrel{}{\mathrm{}}=0`$, the width of the ridge and valley structure becomes more and more narrow as $`|\stackrel{}{\mathrm{}}|0`$. Recall that the density $`\rho `$ of integration points is designed to match this increasing narrowness as $`|\stackrel{}{\mathrm{}}|0`$, so that the integration points are concentrated where the structure is. ## VIII Other issues In the preceding sections, we have seen the most important features of the method of numerical integration for one loop QCD calculations. There are other important issues that are outside of the scope of this paper. I mention these briefly here. First, full QCD has a much more complicated structure than $`\varphi ^3`$ theory, which was the example for this paper. 
However, the complications of full QCD are in the numerators of the expressions representing Feynman diagrams, while the cancellations and the analytic structure related to the contour deformation have to do with the denominator structure. Thus one can simply generate the numerator structure with computer algebra and carry it along. Second, the denominator structure in the example used in this paper is not the only denominator structure that one needs to treat. In the QCD calculation for three-jet-like quantities in electron-positron annihilation at order $`\alpha _s^2`$, there are five possibilities for how a virtual subgraph can occur inside an amplitude. The possibilities are indicated in Fig. 16. For each possibility there is an entering line representing the virtual photon or $`Z`$ boson, which we take to have zero three momentum, and there are three on-shell lines entering the final state. There are graphs of two types, (a) and (b), containing two point virtual subgraphs. There are graphs of two types, (c) and (d), containing three point virtual subgraphs. There is one type, (e), of graph with a four point virtual subgraph. In structure (c), a line with non-zero three momentum enters the three point virtual subgraph and two on-shell lines leave. This is the case that we analyzed in the example of this paper. The structure of the graph led to the singularity structure depicted in Fig. 6. Amplitudes of types (d) and (e) have different singularity structures from that studied here. Case (d) is simpler than the case we have studied, while case (e) is somewhat more complicated. However, the essential features are those that we have already studied. This leaves virtual self-energy subgraphs. In case (a), there is a self energy subgraph on a propagator that enters the final state. This case requires a treatment different from that discussed in the previous sections. This is evident because there is a nominal $`1/k^2`$ where $`k^2=0`$. 
The treatment required is to represent the virtual self-energy via a dispersion relation . In this representation, the subgraph is expressed as an integral over the three-momentum in the virtual loop with an integrand that is closely related to the integrand for the corresponding cut self-energy graph. The point-by-point cancellation between real and virtual graphs is then manifest. It is convenient also to use the dispersive representation for the much easier case (b). Third, we have to do something about ultraviolet divergences in virtual subgraphs. These are easily removed by subtracting an integrand that, in the region of large loop momenta, matches the integrand of the divergent subdiagram. The integrand of the subtraction term should depend on a mass parameter $`\mu `$ that serves to make the subtraction term well behaved in the infrared. Then, with the aid of a small analytical calculation for each of the one loop divergent subdiagrams that occur in QCD, one can arrange the definition so that this ad hoc subtraction has exactly the same effect as $`\overline{\mathrm{MS}}`$ subtraction with scale parameter $`\mu `$. Fourth, I use Feynman gauge for the gluon field, but then self-energy corrections on a gluon propagator require special attention . The one loop gluon self-energy subgraph, $`\pi ^{\mu \nu }(k)`$, contains a term proportional to $`k^\mu k^\nu `$ that contributes quadratic infrared divergences , while the cancellation mechanism that we have studied in this paper takes care of logarithmic divergences. This problem can be solved by replacing $`\pi ^{\mu \nu }`$ by $`P_\alpha ^\mu \pi ^{\alpha \beta }P_\beta ^\nu `$, where $`P_\alpha ^\mu =g_\alpha ^\mu -k^\mu \stackrel{~}{k}_\alpha /\stackrel{~}{k}^2`$, with $`\stackrel{~}{k}=(0,\stackrel{}{k})`$.
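The defining property of this projector is that it annihilates $`k^\alpha `$, so the problematic $`k^\mu k^\nu `$ term drops out. This is easy to check numerically; the script below is only an illustration, with an arbitrary momentum and the metric convention $`g=\mathrm{diag}(1,-1,-1,-1)`$ assumed:

```python
# Quick numerical check that P^mu_alpha k^alpha = 0 for the projector above,
# with g = diag(1,-1,-1,-1) and ktilde = (0, kvec). The momentum is an
# arbitrary illustrative choice.
g = [1.0, -1.0, -1.0, -1.0]

def dot(a, b):
    # Minkowski product a.b with the metric g
    return sum(g[m] * a[m] * b[m] for m in range(4))

k = [2.0, 0.3, -1.1, 0.7]
ktilde = [0.0, k[1], k[2], k[3]]

# Contract P^mu_alpha = delta^mu_alpha - k^mu ktilde_alpha / ktilde^2 with k^alpha
Pk = [k[m] - k[m] * dot(ktilde, k) / dot(ktilde, ktilde) for m in range(4)]
print(Pk)  # all four components vanish
```

The cancellation is exact because $`\stackrel{~}{k}`$ has no time component, so the Minkowski product $`\stackrel{~}{k}k`$ equals $`\stackrel{~}{k}^2`$ and the contraction gives $`k^\mu (1-1)=0`$.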
The terms added to $`\pi ^{\mu \nu }`$ are proportional to either $`k^\mu `$ or $`k^\nu `$ and thus vanish when one sums over different ways of inserting the dressed gluon propagator into the remaining subgraph. Since $`P_\alpha ^\mu k^\alpha =0`$, the problematic $`k^\mu k^\nu `$ term is eliminated. Effectively, this is a change of gauge for dressed gluon propagators from Feynman gauge to Coulomb gauge. Most of these issues are discussed briefly in . A quite detailed, but not very pedagogical, treatment can be found in . Further analysis of these issues is left for a future paper. I have also not given a complete presentation of algorithms for choosing integration points. As discussed in Sec. VI C, the crucial issue is to have the right singularities in the density of points near a soft parton singularity of the Feynman diagram. This is not the only issue that needs to be addressed in a complete algorithm. Of course, the demonstration program has a complete algorithm. However, this algorithm is quite a hodge-podge of methods and it seems that a detailed exposition should be reserved for a better and more systematic method, which remains to be developed. ## IX Results and conclusions In the preceding sections, we have seen some of the techniques needed for the numerical integration method for QCD calculations. Of course, since not all of the techniques have been explained, the explanation does not constitute a very convincing argument that such a calculation is feasible. A truly exhaustive explanation would help, but an actual computer program that demonstrates the techniques is better. Results from such a program were presented in . Since then, I have found and corrected one bug that resulted in errors a little bit bigger than 1% and have made some other improvements in the code. The resulting code and documentation are available at . The program is a parton-level event generator. 
The user is to supply a subroutine that calculates how an event with three or four partons in the final state contributes to the observable to be calculated. The program supplies events, each consisting of a set of parton momenta $`\{\stackrel{}{k}_1,\stackrel{}{k}_2,\stackrel{}{k}_3\}`$ or $`\{\stackrel{}{k}_1,\stackrel{}{k}_2,\stackrel{}{k}_3,\stackrel{}{k}_4\}`$, together with weights $`w`$ for the events. Then the user routine calculates the observable, call it $`\mathcal{I}`$, according to $$\mathcal{I}\approx \frac{1}{N}\sum _{i=1}^{N}w_i𝒮(k_i).$$ (116) The weights used are the real parts of complex weights; the imaginary parts can be dropped since we always know in advance that $`\mathcal{I}`$ is real. Thus the weights are both positive and negative. It would, of course, be more convenient to have only positive weights, but one can hardly have quantum interference without having negative numbers along with positive numbers. The first general purpose program for QCD calculation of three-jet-like quantities in $`e^+e^{-}\to \mathrm{hadrons}`$ at order $`\alpha _s^2`$ was that of Kunszt and Nason . This program uses the numerical/analytical method of Ellis, Ross, and Terrano. In Table I, I compare the results of the Kunszt-Nason program to those obtained with the numerical method for the $`\alpha _s^2`$ contributions to moments of the thrust distribution, $$M_n=\frac{1}{\sigma _0(\alpha _s/\pi )^2}\int _0^1dt\,(1-t)^n\frac{d\sigma ^{[2]}}{dt}.$$ (117) In Table II, I compare the results of the two methods for moments of the $`y_{\mathrm{cut}}`$ distribution for the three jet cross section. To define these quantities, let $`f_3(y_{\mathrm{cut}})`$ be the cross section to produce three jets according to the Durham algorithm with resolution parameter $`y_{\mathrm{cut}}`$.
Let $`g_3(y_{\mathrm{cut}})`$ be the negative of its derivative, $$g_3(y_{\mathrm{cut}})=-\frac{df_3(y_{\mathrm{cut}})}{dy_{\mathrm{cut}}}.$$ (118) Then we calculate moments of the $`\alpha _s^2`$ contribution to this quantity, $$M_n=\frac{1}{\sigma _0(\alpha _s/\pi )^2}\int _0^1dy_{\mathrm{cut}}\,(y_{\mathrm{cut}})^ng_3^{[2]}.$$ (119) In each table, the results for the numerical method are shown with their statistical and systematic errors. (The systematic error is estimated by changing the cutoffs that remove small regions near the singularities where roundoff errors start to become a problem.) The corresponding results for the numerical/analytical method are shown with the statistical errors as reported by the program. Inspection of the tables shows that there is good agreement between the two methods. We have explored some of the most important techniques necessary for a QCD calculation for three-jet-like quantities in electron-positron annihilation at order $`\alpha _s^2`$ using numerical integration throughout the calculation. For the techniques covered, this explanation expands on the brief presentation in . We have also seen that the method works. The older and very successful numerical/analytical method for QCD calculations has its complications. The numerical method has its own complications, but they are different complications. Thus one may expect that the classes of problems for which each of the methods is well adapted may be different. There may be some classes of problems for which the natural flexibility of the numerical method makes it the more useful method. It remains for the future to explore the possibilities. ###### Acknowledgements. I thank M. Seymour and Z. Kunszt for providing advice and results from their programs to help with debugging the program described here. I thank P. Nason for providing the Kunszt-Nason program that I used in preparing Tables I and II. I thank M. Krämer for criticisms of the manuscript.
Finally, I am most grateful to the TH division of CERN for its hospitality during a sabbatical year in which much of the writing of this paper was accomplished. ## Contour deformation in many dimensions The calculational method described in this paper makes use of Cauchy’s theorem in a multi-dimensional complex space. Since this theorem is not proved in most textbooks on complex analysis, I provide a proof here, including the special case, needed for our application, in which there is a singularity on the integration contour. Let $`f(z)`$ be a function of $`N`$ complex variables $`z^\mu =x^\mu +iy^\mu `$, with $`\mu =1,\mathrm{},N`$, where $`x^\mu `$ and $`y^\mu `$ are real variables. Consider a family of integration contours $`𝒞(t)`$ labeled by a parameter $`t`$ with $`0\le t\le 1`$ and specified by $$z^\mu (x^1,\mathrm{},x^N;t)=x^\mu +iy^\mu (x^1,\mathrm{},x^N;t),\mu =1,\mathrm{},N.$$ (120) Let $`\mathcal{I}(t)`$ be the integral of $`f`$ over the contour $`𝒞(t)`$, $$\mathcal{I}(t)=\int _{𝒞(t)}dz\,f(z)=\int dx\,det\left(\frac{\partial z(x;t)}{\partial x}\right)f(z(x;t)).$$ (121) Suppose that $`f(z)`$ is analytic in a region that contains the contours. Then we have Cauchy’s theorem: $$\mathcal{I}(1)=\mathcal{I}(0).$$ (122) To prove this theorem, we simply prove that $`d\mathcal{I}(t)/dt=0`$. Define $$A_\nu ^\mu =\frac{\partial z^\mu }{\partial x^\nu }=\delta _\nu ^\mu +i\frac{\partial y^\mu }{\partial x^\nu }.$$ (123) Let $`B_\nu ^\mu `$ be the inverse matrix to $`A_\nu ^\mu `$. Then $$B_\nu ^\mu detA=\frac{1}{(N-1)!}ϵ^{\mu \mu _2\mathrm{}\mu _N}ϵ_{\nu \nu _2\mathrm{}\nu _N}\frac{\partial z^{\nu _2}}{\partial x^{\mu _2}}\mathrm{}\frac{\partial z^{\nu _N}}{\partial x^{\mu _N}},$$ (124) where $`ϵ^{\mu _1\mathrm{}\mu _N}`$ is the completely antisymmetric tensor with $`N`$ indices, normalized to $`ϵ^{12\mathrm{}N}=1`$, and $`ϵ_{\mu _1\mathrm{}\mu _N}`$ is the same tensor.
This has the immediate consequence that $$\frac{\partial }{\partial x^\mu }\left(B_\nu ^\mu detA\right)=0.$$ (125) Also, $$detA=\frac{1}{N!}ϵ^{\mu _1\mathrm{}\mu _N}ϵ_{\nu _1\mathrm{}\nu _N}\frac{\partial z^{\nu _1}}{\partial x^{\mu _1}}\mathrm{}\frac{\partial z^{\nu _N}}{\partial x^{\mu _N}},$$ (126) so $$\frac{\partial }{\partial t}detA=\frac{\partial A_\mu ^\nu }{\partial t}B_\nu ^\mu detA=i\frac{\partial ^2y^\nu }{\partial x^\mu \partial t}B_\nu ^\mu detA.$$ (127) We need one more result: $$\frac{\partial f}{\partial x^\mu }=\frac{\partial z^\nu }{\partial x^\mu }\frac{\partial f}{\partial z^\nu },$$ (128) so $$\frac{\partial f}{\partial z^\nu }=B_\nu ^\mu \frac{\partial f}{\partial x^\mu }.$$ (129) Then, using the results (125), (127), and (129) and an integration by parts, we find $`{\displaystyle \frac{d}{dt}}\mathcal{I}(t)`$ $`=`$ $`{\displaystyle \int dx\frac{d}{dt}\left[detA\,f\right]}`$ (130) $`=`$ $`{\displaystyle \int dx\,detA\left\{i\frac{\partial ^2y^\nu }{\partial x^\mu \partial t}B_\nu ^\mu f+\frac{\partial f}{\partial z^\nu }i\frac{\partial y^\nu }{\partial t}\right\}}`$ (131) $`=`$ $`{\displaystyle \int dx\,detA\left\{i\frac{\partial ^2y^\nu }{\partial x^\mu \partial t}B_\nu ^\mu f+iB_\nu ^\mu \frac{\partial f}{\partial x^\mu }\frac{\partial y^\nu }{\partial t}\right\}}`$ (132) $`=`$ $`i{\displaystyle \int dx\,B_\nu ^\mu detA\frac{\partial }{\partial x^\mu }\left\{\frac{\partial y^\nu }{\partial t}f\right\}}`$ (133) $`=`$ $`-i{\displaystyle \int dx\,\frac{\partial y^\nu }{\partial t}f\frac{\partial }{\partial x^\mu }\left\{B_\nu ^\mu detA\right\}}`$ (134) $`=`$ $`0.`$ (135) This proves the theorem. Consider now a more complicated problem. Suppose that we have an integral of the form $$\mathcal{I}=\int dx[f(x)+g(x)],$$ (136) where $`f`$ and $`g`$ are both singular on a surface $`𝒫`$ in the space of the real variables $`x`$. Suppose that the strengths of the singularities are such that the integral of either function would be logarithmically divergent. Suppose further that there is a cancellation in the sum such that the integral of $`f+g`$ is convergent. Let $`d(x)`$ be the distance from any point $`x`$ to the surface $`𝒫`$. Let us cut out a region of radius $`R`$ around $`𝒫`$ and write $$\mathcal{I}=\underset{R\to 0}{lim}\left[\int _{d>R}dx\,f(x)+\int _{d>R}dx\,g(x)\right].$$ (137) Now we wish to explore the consequences of deforming the integration contour for the integral of $`f`$.
Thus we investigate (with the same notation as above) $$\mathcal{I}(t)=\underset{R\to 0}{lim}\left[\int _{d>R}dx\,det\left(\frac{\partial z(x;t)}{\partial x}\right)f(z(x;t))+\int _{d>R}dx\,g(x)\right].$$ (138) Following the previous proof we find that there is a surface term in the integration by parts $`{\displaystyle \frac{d}{dt}}\mathcal{I}(t)`$ $`=`$ $`\underset{R\to 0}{lim}\left[i{\displaystyle \int _{d<R}}dx{\displaystyle \frac{\partial }{\partial x^\mu }}\left\{B_\nu ^\mu detA{\displaystyle \frac{\partial y^\nu }{\partial t}}f\right\}\right]`$ (139) $`=`$ $`\underset{R\to 0}{lim}\left[i{\displaystyle \int dS_\mu \left\{B_\nu ^\mu detA\frac{\partial y^\nu }{\partial t}f\right\}}\right],`$ (140) where the integration is over the surface $`d=R`$ and $`dS_\mu `$ is the surface area differential normal to the surface. We want to arrange the deformation specified by $`y^\mu (x;t)`$ so that $`d\mathcal{I}(t)/dt=0`$. For this to happen, it is clear that $`y`$ will have to approach 0 as $`x`$ approaches the surface $`𝒫`$. Then $`B_\nu ^\mu \to \delta _\nu ^\mu `$ and $`detA\to 1`$ as $`x`$ approaches $`𝒫`$. Let the dimensionality of the singular surface $`𝒫`$ be $`N-a`$. If the function $`f`$ were such that the original integral was logarithmically divergent, then $`f\sim R^{-a}`$ for $`R\to 0`$. The integration over the surface gives a factor $`dS^\mu \sim R^{a-1}`$ for $`R\to 0`$. Suppose that the deformation vanishes proportionally to $`R^b`$. Then $$\frac{d}{dt}\mathcal{I}(t)\sim \underset{R\to 0}{lim}\left[R^{a-1}R^{-a}R^b\right].$$ (141) Then $`d\mathcal{I}(t)/dt=0`$ if $`b>1`$. The choice made in the main text of the paper is $`b=2`$.
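The contour-independence statement proved above is easy to verify numerically in the simplest case $`N=1`$. The sketch below deforms the real axis by $`y(x;t)=te^{-x^2}`$, includes the Jacobian $`dz/dx`$ (the $`N=1`$ analogue of $`detA`$), and integrates an entire function; both the deformation profile and the test function are arbitrary choices made for illustration:

```python
import cmath, math

# Deform z(x; t) = x + i t y(x) with y(x) = exp(-x^2) and integrate an
# analytic function with the Jacobian dz/dx. The result should not depend on t.
def contour_integral(f, t, a=-5.0, b=5.0, n=2000):
    h = (b - a) / n
    total = 0.0 + 0.0j
    for i in range(n + 1):
        x = a + i * h
        y = math.exp(-x * x)                  # deformation, vanishing as |x| grows
        z = x + 1j * t * y
        dzdx = 1.0 + 1j * t * (-2.0 * x) * y  # Jacobian dz/dx (the N = 1 "det A")
        w = 0.5 if i in (0, n) else 1.0       # trapezoid weights
        total += w * f(z) * dzdx
    return total * h

f = lambda z: cmath.exp(-z * z)               # entire, so no singularities are crossed
vals = [contour_integral(f, t) for t in (0.0, 0.5, 1.0)]
print(vals)  # all three agree with sqrt(pi) ~ 1.7724539
```

All three deformations reproduce the undeformed Gaussian integral to high accuracy, exactly as Eq. (122) requires for an analytic integrand.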
# Measurement of the Inclusive Charm Cross Section at 4.03 GeV and 4.14 GeV ## I Introduction The hadronic cross section of $`e^+e^{-}`$ at all energies is needed to calculate the effects of vacuum polarization on parameters of the Standard Model. The energy region which contributes the largest uncertainty is the charm threshold region, where the hadronic cross section has only been measured with an accuracy of 15–20%. Traditionally, $`\sigma _{hadron}`$ is measured by counting hadronic events. This method requires a detailed understanding of trigger conditions, the efficiency of hadronic event selection criteria, and a subtraction of two photon events and other backgrounds. An alternative is measuring $`\sigma _{charm}`$ and adding this to an extrapolation of the $`\sigma _{u,d,s}`$ contribution from the region below charm threshold. The charmed mesons used in this study are the $`D^0`$ and $`D^+`$. The $`D_s`$ cross sections are taken from earlier works. There is no evidence for continuum charmonium production. The charm counting method is intrinsically less sensitive to trigger conditions, beam-related backgrounds, and two photon backgrounds due to the distinctive topology of charmed meson events. ## II Data Selection The data used for this analysis were accumulated with the Beijing Spectrometer; the total integrated luminosity at 4.03 (4.14) GeV was 22.3 (1.5) $`pb^{-1}`$. Candidate tracks were required to have a good track fit passing within 1.5 cm of the collision point in $`R`$ and 15 cm in $`z`$, and satisfying $`|\mathrm{cos}\theta |<0.85`$. Particle identification was provided by an array of time of flight scintillation counters (TOF) and specific ionization measurements in the drift chamber used for charged particle tracking ($`dE/dx`$). For pions, consistency $`(CL(\pi )>0.1\%)`$ and loose electron rejection $`(L_\pi /(L_\pi +L_e)>0.2)`$ were required, where $`L_X`$ is the likelihood for hypothesis $`X`$.
Kaon identification required consistency $`(CL(K)>0.1\%)`$, pion rejection $`(L_K/(L_K+L_\pi )>0.5)`$ and loose electron rejection $`(L_\pi /(L_\pi +L_e)>0.2)`$. Multiple counting of $`D^0`$ candidates was removed by positively identifying pions using the selection $`(L_K/(L_K+L_\pi )<0.5)`$. Muons are rejected using a momentum-dependent criterion based on track penetration into the muon detector. ## III $`D^0`$ and $`D^+`$ Signal Inclusive $`K^{-}\pi ^+`$ and $`K^{-}\pi ^+\pi ^+`$ invariant mass distributions are shown in Figs. 1 and 2 for 4.03 and 4.14 GeV, respectively. (Here and throughout this paper, reference to a state also implies its charge conjugate state.) Each histogram was fit to a function which included a Gaussian signal plus a background function. For the $`K\pi `$ distributions (Figs. 1a and 2a), the background function consisted of a Gaussian centered at 1.60 GeV to account for the contribution to the $`K\pi `$ spectrum from $`D^0\to K\pi \pi ^0`$ decays plus a third order polynomial background. For the $`K\pi \pi `$ distributions (Figs. 1b and 2b), the background function consisted of a third order polynomial. The number of signal events in each mode is given in Table I. The momentum spectra of $`D`$ candidates with a mass between 1.81 and 1.91 GeV are presented in Fig. 3 for $`\sqrt{s}=4.03`$ GeV. The three momentum regions near 0.15, 0.55, and 0.75 GeV correspond to $`D^{*}\overline{D}^{*}`$, $`D^{*}\overline{D}`$, and $`D\overline{D}`$ production, respectively. The spectrum for $`\sqrt{s}=4.14`$ GeV is shown in Fig. 4. The momentum regions near 0.45, 0.70, and 0.90 GeV correspond to $`D^{*}\overline{D}^{*}`$, $`D^{*}\overline{D}`$ and $`D\overline{D}`$ production, respectively. The shapes of the $`D^{*}\overline{D}^{*}`$ spectrum and part of the $`D^{*}\overline{D}`$ spectrum are broadened due to Doppler-smeared $`D`$ mesons coming from $`D^{*}`$ decays. In addition, a small low momentum tail on each structure is expected due to initial state radiation.
The background shape under the momentum spectrum is not flat, making a direct subtraction difficult. ## IV Cross Section If the reconstruction efficiency for $`D`$ mesons were constant with respect to momentum, the observed cross section could be determined using $`\sigma (e^+e^{-}\to DX)={\displaystyle \frac{N(\mathrm{signal})}{ϵB\mathcal{L}}},`$ (1) where $`N`$(signal) is the number of signal events, $`ϵ`$ is the efficiency, $`B`$ is the branching fraction of the $`D`$ meson to a decay mode and $`\mathcal{L}`$ is the luminosity. However, Monte Carlo studies show some momentum dependence to the reconstruction efficiency (Fig. 5). In order to measure the cross section, the momentum spectrum for $`D^0`$ and $`D^+`$ mesons from 50 to 850 MeV was divided into 20 (40) MeV slices for 4.03 (4.14) GeV data. For each momentum slice, the invariant mass distribution was fit with a Gaussian plus a polynomial background. The central value of the Gaussian was fixed at the nominal $`D`$ mass; the width was fixed to a momentum-dependent value determined by a fit to a coarser slicing of the data (Fig. 6). The differential cross section with respect to momentum for this data is shown in Figs. 7 and 8 for 4.03 and 4.14 GeV, respectively. The cross section times branching fractions of $`D^0`$ and $`D^+`$ mesons and cross sections calculated using the branching fractions of ref. are shown in Table II. The $`\sigma B`$ values from this measurement are compatible with previous measurements by Mark I and Mark II as shown in Fig. 9. ## V Corrections for Initial State Radiation The tree level cross sections for charm at $`\sqrt{s}=4.03`$ GeV and $`\sqrt{s}=4.14`$ GeV were obtained by correcting the observed cross section for the effects of initial state radiation (ISR). The ISR correction is dependent on the cross section for all energies less than the nominal energy. Since these measurements were performed only at two energies, some theoretical modeling of the cross section distribution was required.
Two different theoretical predictions for these cross sections were used in this analysis, the Coupled Channel Model, and a $`P`$-wave phase space formalism. The models provide predictions for $`\sigma _{B,i}(s_{\mathrm{eff}})`$, the tree level (Born) cross section as a function of the effective center of mass energy squared for production mode $`i`$, where $`i=D^0\overline{D}^0`$, $`D^+D^{-}`$, $`D^{*0}\overline{D}^0`$, $`D^{*+}D^{-}`$, $`D^{*0}\overline{D}^{*0}`$, $`D^{*+}D^{*-}`$. The effective center of mass energy squared is given by $$s_{\mathrm{eff}}\equiv s_{\mathrm{nom}}(1-k),$$ (2) where $`kE_{\mathrm{beam}}`$ is the energy of radiated photons and $`s_{\mathrm{nom}}`$ is the nominal center of mass energy squared. The tree level cross sections are convoluted with a sampling function $`f(k,s_{\mathrm{nom}})`$ that represents a first order calculation of the effective luminosity for radiated photons in a two body radiation model, giving the observed cross section at $`s_{\mathrm{nom}}`$: $`\sigma _{\mathrm{obs},\mathrm{i}}(s_{\mathrm{nom}})=`$ (3) $`{\displaystyle \int _0^1}dk\,f(k,s_{\mathrm{nom}})\sigma _{B,i}(s_{\mathrm{eff}})(1+\delta _{\mathrm{VP}}(s_{\mathrm{eff}})).`$ (4) The vacuum polarization correction $`(1+\delta _{\mathrm{VP}})`$ includes both leptonic and hadronic terms. It varies from charm threshold to 4.14 GeV by less than $`\pm `$2%. It is treated as a constant with the value $$(1+\delta _{\mathrm{VP}})=1.047\pm 0.024$$ (5) and moved outside the integrand. With this simplification, the ISR correction, $`g_i(s_{\mathrm{nom}})`$, is defined by $$g_i(s_{\mathrm{nom}})\equiv \frac{\sigma _{\mathrm{obs},i}(s_{\mathrm{nom}})}{\sigma _{B,i}(s_{\mathrm{nom}})(1+\delta _{\mathrm{VP}})}.$$ (6) The $`D^0`$ and $`D^+`$ branching fractions are used to calculate $`N_{D^0,i}`$ and $`N_{D^+,i}`$, the mean number of $`D^0`$ and $`D^+`$ per production mode $`i`$ event.
The $`N_{D^0,i}`$ and $`N_{D^+,i}`$ values are used to weight the ISR correction for each mode to obtain the ISR correction averaged over all production modes: $$g_{D^0}(s_{\mathrm{nom}})=\frac{\sum _ig_i(s_{\mathrm{nom}})N_{D^0,i}}{\sum _iN_{D^0,i}}$$ (7) $$g_{D^+}(s_{\mathrm{nom}})=\frac{\sum _ig_i(s_{\mathrm{nom}})N_{D^+,i}}{\sum _iN_{D^+,i}}.$$ (8) This procedure results in the corrections shown in Fig. 10. Since neither method models the data precisely, and the two models vary differently with energy, a systematic uncertainty is assigned to be one half the rms difference between the two models over the energy range 3.9 GeV to 4.2 GeV, excluding the region from 4.021 to 4.027 GeV where the Coupled Channel Model $`D\overline{D}`$ cross section is tiny, causing the ISR correction to diverge. The corrections for initial state radiation at $`\sqrt{s}=4.03`$ GeV and $`\sqrt{s}=4.14`$ GeV are: $`4.03\mathrm{GeV}:`$ $`g_{D^0}=0.67\pm 0.05`$ (9) $`g_{D^+}=0.73\pm 0.05`$ (10) $`4.14\mathrm{GeV}:`$ $`g_{D^0}=0.83\pm 0.06`$ (11) $`g_{D^+}=0.84\pm 0.06.`$ (12) The ISR correction for $`D_s`$ mesons was calculated using the Coupled Channel Model and the same $`P`$-wave phase space formalism. Figure 11 shows a prediction for the tree level cross section of $`D_s`$ and $`D_s^{*}`$, the observed cross section of $`D_s`$ and $`D_s^{*}`$, the fraction of $`D_s`$ mesons from direct production, and the fraction of $`D_s`$ mesons from $`D_s^{*}`$ decays. Figure 10c shows the ISR correction for $`D_s`$ mesons.
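The structure of Eqs. (3)–(6) can be sketched numerically. In the toy below, both the radiator function (a leading-log-like $`\beta k^{\beta -1}`$ spectrum) and the Born cross section shape (a $`P`$-wave-like threshold factor) are stand-ins invented for illustration; the analysis itself used the models described above:

```python
# Toy numerical version of the ISR convolution g = sigma_obs / sigma_born,
# with the vacuum polarization factor treated as a constant and divided out.
S_NOM = 4.03 ** 2                      # nominal center of mass energy squared, GeV^2
S_TH = (2 * 1.865) ** 2                # toy threshold at twice the D mass

def sigma_born(s_eff):
    # invented P-wave-like threshold shape, zero below threshold
    return 0.0 if s_eff <= S_TH else (s_eff - S_TH) ** 1.5

def isr_correction(s_nom, beta=0.08, n=20_000):
    # radiator ~ beta * k**(beta - 1); substituting u = k**beta makes the
    # integral over the radiated-energy fraction k flat in u
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        k = u ** (1.0 / beta)          # fraction of energy carried by ISR photons
        total += sigma_born(s_nom * (1.0 - k))
    return (total / n) / sigma_born(s_nom)

g = isr_correction(S_NOM)
print(g)  # less than 1: radiation pushes s_eff down toward threshold
```

Because the Born cross section falls steeply toward threshold, even a modest radiated-energy tail suppresses the observed cross section, which is why the quoted $`g`$ values sit well below 1.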
The ISR contribution to the systematic error for $`D_s`$ production is taken as one half the rms difference between the two models: $`4.03\mathrm{GeV}:`$ $`g_{D_s}=0.73\pm 0.04`$ (13) $`4.14\mathrm{GeV}:`$ $`g_{D_s}=0.78\pm 0.05.`$ (14) The ISR and vacuum polarization corrections are applied to the observed $`D^0`$ and $`D^+`$ cross sections found in Table II to obtain the tree level cross section for $`D^0`$ and $`D^+`$ at 4.03 GeV and 4.14 GeV as shown below: $`4.03\mathrm{GeV}:`$ $`\sigma _{D^0}+\sigma _{\overline{D}^0}=19.9\pm 0.6\pm 2.3\mathrm{nb}`$ (15) $`\sigma _{D^+}+\sigma _{D^{-}}=6.5\pm 0.2\pm 0.8\mathrm{nb}`$ (16) $`4.14\mathrm{GeV}:`$ $`\sigma _{D^0}+\sigma _{\overline{D}^0}=9.3\pm 2.1\pm 1.1\mathrm{nb}`$ (17) $`\sigma _{D^+}+\sigma _{D^{-}}=1.9\pm 0.9\pm 0.2\mathrm{nb}.`$ (18) Systematic uncertainties are treated in Section VI. The BES observed $`D_s`$ cross section at 4.03 GeV is $`\sigma _{D_s^+}+\sigma _{D_s^{-}}=0.62\pm 0.12\pm 0.20`$ nb. After applying the ISR and vacuum polarization corrections, the tree level $`D_s`$ cross section is $`\sigma _{D_s^+}+\sigma _{D_s^{-}}=0.81\pm 0.16\pm 0.27`$ nb. The Mark III observations of $`D_s`$ at 4.14 GeV give $`\sigma _{D_s^+}+\sigma _{D_s^{-}}=1.34\pm 0.32\pm 0.34`$ nb. After correction, the tree level value is $`\sigma _{D_s^+}+\sigma _{D_s^{-}}=1.64\pm 0.39\pm 0.42`$ nb. ## VI Systematics Several systematic checks were performed. The numbers of signal events in the distributions shown in Figs. 1 and 2 were compared to the sum of signal events from each momentum slice as shown in Table III. Good agreement for the 4.03 GeV data validates the slicing technique. For the 4.14 GeV data set, which is much smaller, the agreement is poorer. This could be due to statistical fluctuations. The analysis was repeated using wrong sign combinations $`(K^+\pi ^+,K^+\pi ^+\pi ^{-})`$ to explore systematic bias from the slicing and fitting procedure.
As expected, the structures that are so evident in the right sign spectra are absent in the wrong sign spectra (Fig. 12). There is, however, a small excess when the $`K^{-}\pi ^{-}\pi ^+`$ spectrum is integrated: $`(\sigma _{D^0}+\sigma _{\overline{D}^0})_{WS}=0.29\pm 0.21\mathrm{nb}`$ (19) $`(\sigma _{D^+}+\sigma _{D^{-}})_{WS}=0.7\pm 0.2\mathrm{nb}.`$ (20) Both the data and the charged MC show a small excess that was not present in neutral decays. A much larger MC study would be required to prove whether the source is procedural, or if there is a small feed down from right sign channels into the wrong sign analysis. If the source of the excess is procedural, the MC efficiency calculation should correct for the effect in the data. In either case, no correction is required. Systematic errors arising from the choice of parameters were evaluated by repeating the analyses using different bin sizes, fitting ranges, and background function shapes. Electron particle identification dominates the integrated luminosity uncertainty as determined from wide angle Bhabha scattering events. The uncertainty is evaluated by comparing samples selected independently using $`dE/dx`$ and the barrel shower counter. In addition there are systematic errors due to the uncertainties in the Monte Carlo-determined reconstruction efficiency, errors in the charmed meson branching fractions, and the uncertainties in the evaluation of the ISR correction scheme as discussed above. Magnitudes of these systematic uncertainties are shown in Table IV. Sources of systematic uncertainty are segregated into components that are common or independent for $`D^0`$, $`D^+`$, and $`D_s`$ measurements. The common components are the integrated luminosity measurement, the ISR correction, the vacuum polarization correction, and a portion of the $`D`$ branching fraction uncertainties.
Since the absolute branching fraction scale for $`D^+`$ mesons depends on the $`D^0`$ branching fraction scale, the total percentage uncertainty for the $`D^+`$ branching fraction (6.7%) is split into a common component that matches the percentage uncertainty for the $`D^0`$ branching fraction (2.3%) and an independent component (6.2%). All other systematic uncertainties are treated as independent and added in quadrature. Values are found in Table IV. The total observed $`D^0`$ and $`D^+`$ cross sections are shown in Table II. Tree level $`D^0`$ and $`D^+`$ cross sections are shown in Table V. ## VII Total Inclusive Charm Cross Section Since all $`D`$ mesons are produced in pairs, the tree level non-strange $`D`$ cross sections are: $`4.03\mathrm{GeV}:`$ $`\sigma (D\overline{D}X)`$ $`=13.2\pm 0.3\pm 1.4\mathrm{nb}`$ (21) $`4.14\mathrm{GeV}:`$ $`\sigma (D\overline{D}X)`$ $`=5.6\pm 1.1\pm 0.6\mathrm{nb}.`$ (22) Adding the tree level $`D`$ cross sections to the tree level $`D_s`$ cross sections gives the total tree level charm cross section: $`4.03\mathrm{GeV}:`$ $`\sigma _{\mathrm{charm}}`$ $`=13.6\pm 0.3\pm 1.5\mathrm{nb}`$ (23) $`4.14\mathrm{GeV}:`$ $`\sigma _{\mathrm{charm}}`$ $`=6.4\pm 1.2\pm 0.7\mathrm{nb}.`$ (24) These results are compared to Coupled Channel model predictions in Table V. ## VIII Measurement of $`R_D`$ and $`R`$ A measurement of $`R_D`$ is obtained by dividing $`2\times \sigma _{\mathrm{charm}}`$ by the QED prediction for the tree level muon pair cross section $$\sigma (e^+e^{-}\to \mu ^+\mu ^{-})=\frac{86.8\mathrm{nb}}{s(\mathrm{GeV})^2}$$ (25) giving: $`4.03\mathrm{GeV}:`$ $`R_D=5.10\pm 0.12\pm 0.55`$ (26) $`4.14\mathrm{GeV}:`$ $`R_D=2.53\pm 0.46\pm 0.27.`$ (27) The contribution to $`R`$ in the charm threshold region from the light quarks, $`R_{uds}`$, is estimated to be $`2.5\pm 0.25`$. This value was compiled from an average of measurements of $`R`$ below charm threshold.
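The arithmetic of this section is easy to reproduce from the quoted central values at 4.03 GeV (small differences in the last digit come from rounding in the quoted inputs):

```python
# Reproduce the 4.03 GeV central values: R_D = 2 * sigma_charm / sigma_mumu,
# and R = R_D / 2 + R_uds, using the numbers quoted in the text.
s = 4.03 ** 2                    # GeV^2
sigma_mumu = 86.8 / s            # nb, tree level QED muon pair cross section
sigma_charm = 13.6               # nb, tree level total charm cross section
R_uds = 2.5                      # light-quark contribution below charm threshold

R_D = 2 * sigma_charm / sigma_mumu
R = R_D / 2 + R_uds
print(R_D, R)  # about 5.09 and 5.04, matching the quoted 5.10 and 5.05 within rounding
```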
The theoretical expectation is that $`R_{uds}`$ is approximately independent of center of mass energy in this region. The value of $`R`$ is evaluated using $`R=R_D/2+R_{uds}`$ giving: $`4.03\mathrm{GeV}:`$ $`R=5.05\pm 0.06\pm 0.37`$ (28) $`4.14\mathrm{GeV}:`$ $`R=3.76\pm 0.23\pm 0.28.`$ (29) This measurement is more precise than, but compatible with, previous $`R`$ measurements using the total cross section method shown in Figures 13a and 13b and a previous measurement employing a similar $`R_D`$ technique shown in Figure 13c. Charm-counting complements direct $`R`$ measurements since the two methods feature different systematic uncertainties. We would like to thank the staffs of the BEPC accelerator and the Computing Center at the Institute of High Energy Physics (Beijing). This work was supported in part by the National Natural Science Foundation of China under contract No. 19290400 and the Chinese Academy of Sciences under contract No. KJ85 (IHEP); by the Department of Energy under contract Nos. DE-FG03-92ER40701 (Caltech), DE-FG03-93ER40788 (Colorado State University), DE-AC02-76ER03069 (MIT), DE-AC03-76SF00515 (SLAC), DE-FG03-91ER40679 (UC Irvine), DE-FG03-94ER40833 (University of Hawaii), DE-FG03-95ER40925 (UT Dallas); by the U.S. National Science Foundation Grant No. PHY9203212 (University of Washington).
# Intrinsic Charm in the Nucleon ## Abstract The quark flavor and spin structure of the nucleon is discussed in the SU(4) symmetry breaking chiral quark model. The flavor and spin contents for charm quarks and anti-charm quarks are predicted and compared with the results given by other models. The intrinsic charm quark contribution to the Ellis-Jaffe sum rule is discussed. preprint: INPP-UVA-99-02 October, 1999 hep-ph/9910515 I. Introduction As suggested by several authors a long time ago , there are so-called ‘intrinsic’ heavy quark components in the proton wave function. The ‘extrinsic’ heavy quarks are created on a short time scale in association with a large transverse momentum reaction, and their distributions can be derived from QCD bremsstrahlung and pair production processes, which lead to standard QCD evolution. The intrinsic heavy quarks, however, exist over a long time scale independent of any external probe momentum. They are created from the quantum fluctuations associated with the bound state hadron dynamics. Hence the probability of finding the intrinsic heavy quark in the hadron is completely determined by a nonperturbative mechanism. Since the chiral quark model provides a useful tool for studying the quark spin-flavor structure of the nucleon in a nonperturbative way, we will use this model to discuss the heavy quark components of the proton. The quark components from the bottom and top quarks are negligible in the proton at the scale $`m_c^2`$ or lower, hence we only discuss the intrinsic charm (IC) quarks in this paper. Although the SU(3) chiral quark model with symmetry breaking has been quite successful in explaining the quark flavor and spin contents in the nucleon, the model is unnatural from the point of view of the standard model. According to the symmetric GIM model , one should deal with the weak axial current in the framework of SU(4) symmetry.
This implies that the charm quark should play some role in determining the spin and flavor structure of the nucleon. An interesting question in high energy spin physics is whether intrinsic charm exists in the proton and, if it does, what the size of the IC contribution to the flavor and spin observables of the proton is. There are many publications on this topic (see for instance ). In an ICTP internal report , we extended the SU(3) chiral quark model given in to the SU(4) case and obtained the main results for the spin and flavor contents in the SU(4) chiral quark model. In this paper, we give more detailed results on the contributions to the structure of the proton from the intrinsic charm and anticharm quarks. The results are compared with those given by other approaches. The intrinsic charm contribution to the Ellis-Jaffe sum rule is also discussed. II. SU(4) chiral quark model with symmetry breaking In the framework of SU(4), there are sixteen pseudoscalar bosons, a 15-plet and a singlet.
The effective Lagrangian describing interaction between quarks and the bosons is $$L_I=g_{15}\overline{q}\left(\begin{array}{cccc}G_u^0& \pi ^+& \sqrt{ϵ}K^+& \sqrt{ϵ_c}\overline{D}^0\\ \pi ^{}& G_d^0& \sqrt{ϵ}K^0& \sqrt{ϵ_c}D^{}\\ \sqrt{ϵ}K^{}& \sqrt{ϵ}\overline{K}^0& G_s^0& \sqrt{ϵ_c}D_s^{}\\ \sqrt{ϵ_c}D^0& \sqrt{ϵ_c}D^+& \sqrt{ϵ_c}D_s^+& G_c^0\end{array}\right)q,$$ $`(1)`$ where $`G_{u(d)}^0`$ and $`G_{s,c}^0`$ are defined as $$G_{u(d)}^0=+()\frac{\pi ^0}{\sqrt{2}}+\sqrt{ϵ_\eta }\frac{\eta ^0}{\sqrt{6}}+\zeta ^{}\frac{\eta ^0}{\sqrt{3}}\sqrt{ϵ_c}\frac{\eta _c}{4}$$ $`(2a)`$ $$G_s^0=\sqrt{ϵ_\eta }\frac{2\eta ^0}{\sqrt{6}}+\zeta ^{}\frac{\eta ^0}{4\sqrt{3}}\sqrt{ϵ_c}\frac{\eta _c}{4}$$ $`(2b)`$ $$G_c^0=\zeta ^{}\frac{\sqrt{3}\eta ^0}{4}+\sqrt{ϵ_c}\frac{3\eta _c}{4}$$ $`(2c)`$ with $$\pi ^0=\frac{1}{\sqrt{2}}(u\overline{u}d\overline{d});\eta =\frac{1}{\sqrt{6}}(u\overline{u}+d\overline{d}2s\overline{s})$$ $`(3a)`$ $$\eta ^{}=\frac{1}{\sqrt{3}}(u\overline{u}+d\overline{d}+s\overline{s});\eta _c=(c\overline{c}).$$ $`(3b)`$ The breaking effects are explicitly included in (1) and the SU(4) singlet term has been neglected. Defining $`a|g_{15}|^2`$, which denotes the transition probability of splitting $`u(d)d(u)+\pi ^{+()}`$, then $`ϵa`$ denotes the probability of splitting $`u(d)s+K^{(0)}`$. Similar definitions are used for $`ϵ_\eta a`$ and $`ϵ_ca`$. If the breaking effects are dominated by mass differences, we expect $`0<ϵ_ca<ϵ_\eta aϵa<a`$. We also have $`0<\zeta ^2<<1`$ as shown in . For a valence u-quark with spin-up, the allowed fluctuations are $$u_{,()}d_{,()}+\pi ^+,u_{}s_{}+K^+,u_{}u_{}+G_u^0,u_{}c_{}+\overline{D}^0,u_{}u_{}.$$ $`(4)`$ Similarly, one can write down the allowed fluctuations for $`u_{}`$, $`d_{}`$, $`d_{}`$, $`s_{}`$, and $`s_{}`$. Since we are only interested in the spin-flavor structure of non-charmed baryons, the fluctuations from a valence charmed quark are not discussed here. 
The spin-up and spin-down quark or antiquark contents in the proton, up to first order of fluctuation, can be written as $$n_p(q_,^{},\mathrm{or}\overline{q}_{}^{}{}_{,}{}^{})=\underset{q=u,d}{}\underset{h=,}{}n_p^{(0)}(q_h)P_{q_h}(q_,^{},\mathrm{or}\overline{q}_{}^{}{}_{,}{}^{}),$$ $`(5)`$ where $`P_{q_,}(q_,^{})`$ and $`P_{q_,}(\overline{q}_,^{})`$ are the probabilities of finding a quark $`q_,^{}`$ or an antiquark $`\overline{q}_,^{}`$ arising from all chiral fluctuations of a valence quark $`q_,`$. The probabilities $`P_{q_,}(q_,^{})`$ and $`P_{q_,}(\overline{q}_,^{})`$ can be obtained from the effective Lagrangian (1). In Table I only $`P_q_{}(q_,^{})`$ and $`P_q_{}(\overline{q}_,^{})`$ are listed. Those arising from $`q_{}`$ can be obtained by using the relations $$P_q_{}(q_,^{})=P_q_{}(q_,^{}),P_q_{}(\overline{q}_,^{})=P_q_{}(\overline{q}_,^{}).$$ $`(6)`$ The notations appearing in Table I are defined as $$f\equiv \frac{1}{2}+\frac{ϵ_\eta }{6}+\frac{\zeta ^2}{48},f_s\equiv \frac{2ϵ_\eta }{3}+\frac{\zeta ^2}{48}+\frac{ϵ_c}{16}$$ $`(7a)`$ and $$\stackrel{~}{A}\equiv \frac{1}{2}\frac{\sqrt{ϵ_\eta }}{6}\frac{\zeta ^{}}{12},\stackrel{~}{B}\equiv \frac{\sqrt{ϵ_\eta }}{3}+\frac{\zeta ^{}}{12},\stackrel{~}{C}\equiv \frac{2\sqrt{ϵ_\eta }}{3}+\frac{\zeta ^{}}{12},\stackrel{~}{D}\equiv \frac{\sqrt{ϵ_c}}{4}.$$ $`(7b)`$ The special combinations $`\stackrel{~}{A}`$, $`\stackrel{~}{B}`$, $`\stackrel{~}{C}`$, and $`\stackrel{~}{D}`$ stem from the quark and antiquark contents in the neutral bosons $`G_{u,d,s,c}^0`$ appearing in the effective Lagrangian (1) and defined in (2a)-(2c). The numbers $`fa`$ and $`f_sa`$ stand for the probabilities of the quark splittings $`u_{}(d_{})u_{}(d_{})+G_{u(d)}^0`$ and $`s_{}s_{}+G_s^0`$ respectively. In the limit $`ϵ_c\to 0`$ and changing $`\zeta ^{}`$ to $`4\zeta ^{}`$, $`f`$ and $`f_s`$ reduce to the corresponding quantities of the SU(3) case.
We also have $$\stackrel{~}{A}\to \frac{1}{3}A_{SU(3)},\stackrel{~}{B}\to \frac{1}{3}B_{SU(3)},\stackrel{~}{C}\to \frac{1}{3}C_{SU(3)},\stackrel{~}{D}\to 0$$ $`(8)`$ III. Quark flavor and spin contents We note that the quark helicity flips in the chiral splitting processes $`q_{,()}q_{,()}`$+GB, i.e. the first four processes in (4), but not in the last one. In the valence approximation, the SU(3)$`\otimes `$SU(2) proton wave function gives $$n_p^{(0)}(u_{})=\frac{5}{3},n_p^{(0)}(u_{})=\frac{1}{3},n_p^{(0)}(d_{})=\frac{1}{3},n_p^{(0)}(d_{})=\frac{2}{3}.$$ $`(9)`$ Using (5), (9) and the probabilities $`P_{q_,}(q_,^{})`$ and $`P_{q_,}(\overline{q}_,^{})`$ listed in Table I, we obtain the quark and antiquark flavor contents $$u=2+\overline{u},d=1+\overline{d},s=0+\overline{s},c=0+\overline{c},$$ $`(10a)`$ where $$\overline{u}=a[1+\stackrel{~}{A}^2+2(1-\stackrel{~}{A})^2],\overline{d}=a[2(1+\stackrel{~}{A}^2)+(1-\stackrel{~}{A})^2],$$ $`(10b)`$ $$\overline{s}=3a[ϵ+\stackrel{~}{B}^2],\overline{c}=3a[ϵ_c+\stackrel{~}{D}^2].$$ $`(10c)`$ From (10b), one obtains $$\frac{\overline{u}}{\overline{d}}=1-\frac{6\stackrel{~}{A}}{(3\stackrel{~}{A}-1)^2+8}$$ $`(11a)`$ $$\overline{d}-\overline{u}=2a\stackrel{~}{A}$$ $`(11b)`$ Similarly, one can obtain $`2\overline{c}/(\overline{u}+\overline{d})`$, $`2\overline{c}/(q+\overline{q})`$ and other flavor observables. It is easy to verify that in the limit $`ϵ_c\to 0`$, all results reduce to those given in the SU(3) case .
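Treating the extraction-lost minus signs in (10b) as restored, i.e. $`\overline{u}=a[1+\stackrel{~}{A}^2+2(1-\stackrel{~}{A})^2]`$ and $`\overline{d}=a[2(1+\stackrel{~}{A}^2)+(1-\stackrel{~}{A})^2]`$, the closed forms (11a) and (11b) follow by direct algebra. A minimal numerical check (the chosen values of $`a`$ and $`\stackrel{~}{A}`$ are illustrative inputs):

```python
# Numerical check of Eqs. (11a)-(11b) against the definitions in Eq. (10b).
def ubar(a, A):    # ubar = a [1 + A^2 + 2 (1 - A)^2]
    return a * (1 + A**2 + 2 * (1 - A)**2)

def dbar(a, A):    # dbar = a [2 (1 + A^2) + (1 - A)^2]
    return a * (2 * (1 + A**2) + (1 - A)**2)

def ratio_11a(A):  # Eq. (11a): ubar/dbar = 1 - 6A / ((3A - 1)^2 + 8)
    return 1 - 6 * A / ((3 * A - 1)**2 + 8)

a = 0.145                      # value of a used in Sec. IV
for A in (0.1, 0.3, 0.5):      # illustrative values of A-tilde
    assert abs(ubar(a, A) / dbar(a, A) - ratio_11a(A)) < 1e-12
    assert abs(dbar(a, A) - ubar(a, A) - 2 * a * A) < 1e-12   # Eq. (11b)
print("Eqs. (11a) and (11b) are consistent with Eq. (10b)")
```

Note that (11b) reproduces the familiar light-sea asymmetry $`\overline{d}>\overline{u}`$ whenever $`\stackrel{~}{A}>0`$.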
For the quark spin contents, we have $$\mathrm{\Delta }u=\frac{4}{3}[1-a(ϵ+ϵ_c+2f)]-a$$ $`(12a)`$ $$\mathrm{\Delta }d=-\frac{1}{3}[1-a(ϵ+ϵ_c+2f)]-a$$ $`(12b)`$ $$\mathrm{\Delta }s=-aϵ$$ $`(12c)`$ $$\mathrm{\Delta }c=-aϵ_c$$ $`(12d)`$ $$\mathrm{\Delta }\mathrm{\Sigma }\underset{q=u,d,s,c}{}\mathrm{\Delta }q=1-2a(1+ϵ+ϵ_c+f)$$ $`(12e)`$ and $$\mathrm{\Delta }\overline{q}=0,(q=u,d,s,c)$$ $`(12f)`$ Compared to the SU(3) case, a new $`ϵ_c`$ term has been included in $`\mathrm{\Delta }u`$, $`\mathrm{\Delta }d`$ and $`\mathrm{\Delta }\mathrm{\Sigma }`$, but there is no change for $`\mathrm{\Delta }s`$. In the SU(4) chiral quark model, the charm quark helicity $`\mathrm{\Delta }c`$ is nonzero and definitely negative. The size of the intrinsic charm (IC) helicity depends on the parameters $`ϵ_c`$ and $`a`$. We will see below that the range of $`ϵ_c`$ is about 0.1$``$0.3. Since $`a\approx 0.14`$, one has $$\mathrm{\Delta }c\approx -0.03$$ $`(13a)`$ The ratio $`\mathrm{\Delta }c/\overline{c}`$, however, is a constant, $$\frac{\mathrm{\Delta }c}{\overline{c}}=-\frac{16}{51}\approx -0.314$$ $`(13b)`$ which does not depend on any chiral parameters. This is a special prediction of the chiral quark model. In the framework of the SU(4) parton model, the first moment of the spin structure function $`g_1(x,Q^2)`$ of the proton is $$_0^1g_1^p(x,Q^2)𝑑x=\frac{1}{2}[\frac{4}{9}\mathrm{\Delta }u+\frac{1}{9}\mathrm{\Delta }d+\frac{1}{9}\mathrm{\Delta }s+\frac{4}{9}\mathrm{\Delta }c]$$ $`(14)`$ which can be rewritten as $$_0^1g_1^p(x,Q^2)𝑑x=\frac{1}{12}[a_3+\frac{\sqrt{3}}{3}a_8-\frac{\sqrt{6}}{3}a_{15}+\frac{5}{3}a_0]$$ $`(15)`$ where the notations $$a_3=\mathrm{\Delta }u-\mathrm{\Delta }d,a_8=\frac{1}{\sqrt{3}}[\mathrm{\Delta }u+\mathrm{\Delta }d-2\mathrm{\Delta }s],a_{15}=\frac{1}{\sqrt{6}}[\mathrm{\Delta }u+\mathrm{\Delta }d+\mathrm{\Delta }s-3\mathrm{\Delta }c]$$ $`(16a)`$ and $$a_0=\mathrm{\Delta }u+\mathrm{\Delta }d+\mathrm{\Delta }s+\mathrm{\Delta }c$$ $`(16b)`$ have been introduced. IV. Numerical results and discussion.
To estimate the size of $`\mathrm{\Delta }c`$ and other intrinsic charm contributions, we use the same parameter set as given in : $`a=0.145`$, $`ϵ_\eta ϵ=0.46`$, $`\zeta ^2=0.10`$. We choose $`ϵ_c`$ as a variable; other quark flavor and helicity contents can then be expressed as functions of $`ϵ_c`$. We find that $$ϵ_c0.10.3$$ $`(17)`$ Our model results, and data and theoretical predictions from other approaches, are listed in Tables II and III respectively, where $`ϵ_c=0.20\pm 0.10`$ is assumed. One can see that the fit to the existing data is as good as in the SU(3) case. Several remarks are in order: (1) The chiral quark model predicts an intrinsic charm component of the nucleon ($`2\overline{c}/(q+\overline{q})`$) of around $`3\%`$, which agrees with the result given in and the earlier number given in . But the result given in is much smaller (0.5$`\%`$) than ours. (2) The prediction of the intrinsic charm polarization $`\mathrm{\Delta }c=-0.029\pm 0.015`$ from the chiral quark model is very close to the result $`\mathrm{\Delta }c=-0.020\pm 0.005`$ given by the instanton QCD vacuum model . We note that the size of $`\mathrm{\Delta }c`$ given in is about two orders of magnitude smaller than ours. Hence further investigation of this matter is needed. (3) We plot the ratio $`\mathrm{\Delta }c/\mathrm{\Delta }\mathrm{\Sigma }`$ as a function of $`ϵ_c`$ in Fig. 1. In the range $`0.1<ϵ_c<0.3`$, we have $$\frac{\mathrm{\Delta }c}{\mathrm{\Delta }\mathrm{\Sigma }}=-0.08\pm 0.05$$ $`(18)`$ which agrees well with the prediction given in and is also not inconsistent with the result given in . (4) For the first moments of the spin structure functions $`g_1^{(p,n)}`$, we have included the QCD radiative corrections, and the results agree well with the data. To summarize, we have discussed the intrinsic charm contribution to the quark flavor and spin observables in the chiral quark model with symmetry breaking. The results are compatible with other theoretical predictions.
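As a quick numerical cross-check (with the extraction-lost minus signs restored, i.e. $`\mathrm{\Delta }c=-aϵ_c`$ from (12d) and $`\overline{c}=3a[ϵ_c+ϵ_c/16]`$ from (10c) with $`\stackrel{~}{D}^2=ϵ_c/16`$), the central value $`\mathrm{\Delta }c=-0.029`$ for $`a=0.145`$, $`ϵ_c=0.2`$ and the parameter-free ratio $`\mathrm{\Delta }c/\overline{c}=-16/51`$ of (13b) both follow in a few lines; a sketch:

```python
# Check Delta_c = -a*eps_c and the parameter-free ratio Delta_c/cbar = -16/51.
def delta_c(a, eps_c):             # Eq. (12d), minus sign restored
    return -a * eps_c

def cbar(a, eps_c):                # Eq. (10c) with D-tilde^2 = eps_c / 16
    return 3 * a * (eps_c + eps_c / 16)

a = 0.145                          # parameter set of Sec. IV
assert abs(delta_c(a, 0.2) - (-0.029)) < 1e-9          # central value in the text
for eps_c in (0.1, 0.2, 0.3):                          # quoted range of eps_c
    assert abs(delta_c(a, eps_c) / cbar(a, eps_c) - (-16 / 51)) < 1e-12
print("Delta_c(eps_c=0.2) = -0.029; Delta_c/cbar = -16/51 for all eps_c")
```

The ratio drops out of the parameters because both $`\mathrm{\Delta }c`$ and $`\overline{c}`$ are linear in $`aϵ_c`$.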
Acknowledgments The author would like to thank S. Brodsky for useful comments and suggestions. This work was supported in part by the U.S. DOE Grant No. DE-FG02-96ER-40950, the Institute of Nuclear and Particle Physics, University of Virginia, and the Commonwealth of Virginia.
no-problem/9910/astro-ph9910080.html
## 1 Introduction There is much observational evidence that the incoming matter has the potential to become as hot as its virial temperature $`T_{virial}10^{13}`$ $`K`$ . Through various cooling effects, this incoming matter is usually cooled down to produce hard and soft states . In the accretion disk, matter in the sub-Keplerian region generally remains hotter than in Keplerian disks. The matter is so hot that, after big-bang nucleosynthesis, this is the most favourable temperature at which to produce significant nuclear reactions. The energy generation due to nucleosynthesis could be high enough to destabilize the flow, and the modified composition may come out through winds and affect the metallicity of the galaxy \[3-7\]. Previous work on nucleosynthesis in disks was done for cooler thick accretion disks. Since the sub-Keplerian region is much hotter than the Keplerian region, and also than the central temperature ($`10^7`$K) of stars, here we study nucleosynthesis in the hot sub-Keplerian region of accretion disks. ## 2 Basic Equations and Physical Systems In 1981, Paczyński & Bisnovatyi-Kogan initiated the study of viscous transonic flow, although the global solutions of advective accretion disks were obtained much later ; we use the latter here. In advective disks, matter must have radial motion, which is transonic. The supersonic flow must be sub-Keplerian and therefore must deviate from a Keplerian disk away from the black hole.
The basic equations which matter obeys while falling towards the black hole from the boundary between the Keplerian and sub-Keplerian regions are given below (for details, see ): (a) The radial momentum equation: $$\vartheta \frac{d\vartheta }{dx}+\frac{1}{\rho }\frac{dP}{dx}+\frac{\lambda _{Kep}^2-\lambda ^2}{x^3}=0,$$ $`(1a)`$ (b) The continuity equation: $$\frac{d}{dx}(\mathrm{\Sigma }x\vartheta )=0,$$ $`(1b)`$ (c) The azimuthal momentum equation: $$\vartheta \frac{d\lambda (x)}{dx}-\frac{1}{\mathrm{\Sigma }x}\frac{d}{dx}(x^2W_{x\varphi })=0,$$ $`(1c)`$ (d) The entropy equation: $$\frac{2na\rho \vartheta h(x)}{\gamma }\frac{da}{dx}-\frac{a^2\vartheta h(x)}{\gamma }\frac{d\rho }{dx}=fQ^+$$ $`(1d)`$ where the equation of state is chosen as $`a^2=\frac{\gamma P}{\rho }`$. Here, $`\lambda `$ is the specific angular momentum of the infalling matter; $`\lambda _{Kep}`$ is the Keplerian value, defined as $`\lambda _{Kep}^2=\frac{x^3}{2(x-1)^2}`$ ; $`\mathrm{\Sigma }`$ is the vertically integrated density; $`W_{x\varphi }`$ is the stress tensor; $`a`$ is the sound speed; $`h(x)`$ is the half-thickness of the disk ($`\propto ax^{1/2}(x-1)`$); $`n=\frac{1}{\gamma -1}`$ is the polytropic index; $`f`$ is the cooling factor, which is kept constant throughout our study; and $`Q^+`$ is the heat generation due to the viscous effect of the disk. For the time being we neglect the magnetic heating term. During infall, different nuclear reactions take place and nuclear energy is released. Since our study is exploratory, we do not include the heating due to nuclear reactions in the heating term $`Q^+`$. (Work including the nuclear energy release term is in .) Another parameter, $`\beta `$, is defined as the ratio of gas pressure to total pressure and is assumed to be constant throughout a particular case. The factor $`\beta `$ is used to take into account the cooling due to Comptonization.
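The Keplerian specific angular momentum used above, $`\lambda _{Kep}^2=x^3/2(x-1)^2`$, is the pseudo-Newtonian (Paczyński-Wiita) form, with $`x`$ in units of the Schwarzschild radius. A minimal sketch of its evaluation (the sample points are illustrative):

```python
import math

def lambda_kep(x):
    """Keplerian specific angular momentum: lambda_Kep^2 = x^3 / (2 (x-1)^2),
    valid for x > 1, with x in units of the Schwarzschild radius."""
    return math.sqrt(x**3 / (2.0 * (x - 1.0)**2))

assert abs(lambda_kep(2.0) - 2.0) < 1e-12     # x = 2: lambda^2 = 8/2 = 4
# lambda_Kep has a minimum at x = 3 and grows monotonically at large x:
assert lambda_kep(3.0) < lambda_kep(2.5) and lambda_kep(3.0) < lambda_kep(4.0)
assert lambda_kep(100.0) > lambda_kep(10.0)
```

The minimum at $`x=3`$ corresponds to the marginally stable orbit of this potential, so the sub-Keplerian condition $`\lambda <\lambda _{Kep}`$ is easiest to satisfy near that radius.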
To compute the temperature of the Comptonized flow in the advective region, which may or may not have shocks, we follow the works and methods of Chakrabarti & Titarchuk and Chakrabarti . The temperature is computed from $$T=\frac{a^2\mu m_p\beta }{\gamma k}.$$ $`(2)`$ It is seen that, due to the hotter nature of the advective disk, especially when the accretion rate is low, Compton cooling is negligible. The major process of hydrogen burning is then the rapid proton capture process, which operates at $`T\stackrel{>}{}0.5\times 10^9`$K, much higher than the operating temperatures of the PP chain ($`T0.010.2\times 10^9`$K) and the CNO cycle ($`T0.020.5\times 10^9`$K), which take place in stellar nucleosynthesis, where the temperature is much lower. Also, in the stellar case the same set of reactions takes place at different radii, while in a disk different reactions (or different sets of reactions) can take place simultaneously at different radii. These are the basic differences between nucleosynthesis in stars and in disks. For simplicity, we take the solar abundance as the initial abundance of the disk, and our computation starts where matter leaves the Keplerian disk. According to and , the black hole remains in the hard state when the viscosity and accretion rate are small. In this case, $`x_K`$ (the radius at which matter deviates from the Keplerian to the sub-Keplerian region) is large. In this parameter range the protons remain hot ($`T_p110\times 10^9`$K). The corresponding factor $`f(=1Q^+/Q^{})`$ is not low enough to cool down the disk (in , it is indicated that $`\dot{m}/\alpha ^2`$ is a good indicator of the cooling efficiency of the hot flow), where $`Q^+`$ and $`Q^{}`$ are the heat gain and heat loss due to the viscosity of the disk.
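Equation (2) is just the ideal-gas relation between sound speed and proton temperature, with $`\beta `$ discounting the share of the pressure not carried by the gas. For an illustrative sound speed of $`a0.1c`$ (an assumed input, not taken from the text) it gives $`T`$ of order $`10^9`$ K, consistent with the quoted $`T_p110\times 10^9`$K; a sketch in CGS units:

```python
# Proton temperature from Eq. (2): T = a^2 * mu * m_p * beta / (gamma * k)
m_p = 1.6726e-24     # proton mass [g]
k_B = 1.3807e-16     # Boltzmann constant [erg/K]

def temperature(a_sound, mu=0.5, beta=0.03, gamma=4.0 / 3.0):
    # a_sound in cm/s; mu = 0.5 for fully ionized hydrogen (assumed here)
    return a_sound**2 * mu * m_p * beta / (gamma * k_B)

T = temperature(3.0e9)       # a ~ 0.1 c, an illustrative value
assert 1e8 < T < 1e10        # of order 10^9 K, as in the text
```

With $`\beta =0.03`$ most of the pressure is radiative, which is why even a mildly relativistic sound speed maps to "only" a few $`\times 10^9`$ K.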
We have studied a large region of the parameter space, with $`0.0001\stackrel{<}{}\alpha \stackrel{<}{}1`$, $`0.001\stackrel{<}{}\dot{m}\stackrel{<}{}100`$, $`0.01\stackrel{<}{}\beta \stackrel{<}{}1`$, $`4/3\stackrel{<}{}\gamma \stackrel{<}{}5/3`$. We also study a case with a standing shock. In selecting the reaction network we kept in mind that hotter flows may produce heavier elements through triple-$`\alpha `$ and rapid proton and $`\alpha `$ capture processes. Furthermore, due to photo-dissociation, significant numbers of neutrons may be produced, and there is a possibility of the production of neutron-rich isotopes. Thus, we consider a sufficient number of isotopes on either side of the stability line. The network thus contains protons, neutrons, and isotopes up to $`{}_{}{}^{72}Ge`$ – altogether 255 nuclear species. The standard reaction rates were taken . ## 3 Results Here we present a typical case containing a shock wave in the advective region . We express length in units of one Schwarzschild radius, $`\frac{2GM}{c^2}`$, where $`M`$ is the mass of the black hole; velocity is expressed in units of the velocity of light $`c`$; and the unit of time is $`\frac{2GM}{c^3}`$. We use the mass of the black hole $`M/M_{}=10`$ ($`M_{}=`$ solar mass), the $`\mathrm{\Pi }`$-stress viscosity parameter $`\alpha _\mathrm{\Pi }=0.05`$, the location of the inner sonic point $`x_{in}=2.8695`$, the value of the specific angular momentum at the inner edge of the black hole $`\lambda _{in}=1.6`$, and the polytropic index $`\gamma =4/3`$ as free parameters. The net accretion rate is $`\dot{m}=1`$ in units of the Eddington rate, the cooling factor due to Comptonization is $`\beta =0.03`$, and $`x_K=481`$. The proton temperature (in units of $`10^9`$), velocity distribution (in units of $`10^{10}`$ cm sec<sup>-1</sup>) and density distribution (in units of $`2\times 10^8`$ gm cm<sup>-3</sup>) are shown in Fig. 1(a). In Fig.
1b, we show composition changes close to the black hole, both for the shock-free branch (dotted curves) and for the shocked branch of the solution (solid curves). Only the prominent elements are plotted. The difference between the shocked and shock-free cases is that in the shocked case similar burning takes place farther away from the black hole, because of the much higher temperature in the post-shock region. A significant amount of neutrons (with a final abundance of $`Y_n10^{-3}`$) is produced by the photo-dissociation process. Note that closer to the black hole, $`{}_{}{}^{12}C`$, $`{}_{}{}^{16}O`$, $`{}_{}{}^{24}Mg`$ and $`{}_{}{}^{28}Si`$ are all destroyed completely. Among the new species formed closer to the black hole are $`{}_{}{}^{30}Si`$, $`{}_{}{}^{46}Ti`$ and $`{}_{}{}^{50}Cr`$. Note that the final abundance of $`{}_{}{}^{20}Ne`$ is significantly higher than the initial value. Thus a significant metallicity could be supplied by winds from the centrifugal barrier. In Fig. 1c we show the change in abundance of neutrons ($`n`$), deuterium ($`D`$) and lithium ($`{}_{}{}^{7}Li`$). Note that near the black hole a significant amount of neutrons is formed, although initially the neutron abundance was almost zero. Also, $`D`$ and $`{}_{}{}^{7}Li`$ are totally burnt out near the black hole, which contradicts the major claim of Yi & Narayan , who found significant lithium in the disk. It is true that through the spallation reaction, i.e., $${}_{}{}^{4}He+^4He\to ^7Li+p$$ $`{}_{}{}^{7}Li`$ may be formed in the disk, but due to photo-dissociation at high temperature all $`{}_{}{}^{4}He`$ is burnt out before any significant $`{}_{}{}^{7}Li`$ can form, i.e. the formation rate of $`{}_{}{}^{4}He`$ from $`D`$ is much slower than its burning rate. Yi & Narayan did not include the possibility of photo-dissociation in the hot disk. In Fig. 1d, we show the nuclear energy release/absorption for the flow in units of erg sec<sup>-1</sup> gm<sup>-1</sup>.
The solid curve represents the nuclear energy release/absorption for the shocked flow, and the dotted curve that for the unstable shock-free flow. As matter leaves the Keplerian region, rapid proton captures such as $`p+^{18}O\to ^{15}N+^4He`$ burn hydrogen and release energy into the disk. At around $`x=50`$, $`D\to n+p`$ dissociates $`D`$, and this endothermic reaction causes the nuclear energy release to become ‘negative’, i.e., a huge amount of energy is absorbed from the disk. At around $`x=15`$ the energy release is again dominated by the original processes, because no deuterium is left to burn. Due to the excessive temperature, $`{}_{}{}^{3}He`$ immediately breaks down into deuterium, and through the dissociation of $`D`$ a huge amount of energy is again absorbed from the disk. Note that the energy absorption due to photo-dissociation, the magnitude of the energy release due to the proton capture process, and that due to viscous dissipation ($`Q^+`$) are all very similar (save in the region where endothermic reactions dominate). This suggests that, even with nuclear reactions, at least some part of the advective disk may be perfectly stable. We now present another interesting case, in which a lower accretion rate ($`\dot{m}=0.01`$) but higher viscosity ($`0.2`$) were used and the efficiency of cooling is not $`100\%`$ ($`f=0.1`$). This means that the temperature of the flow is high ($`\beta =0.1`$, maximum temperature $`T_9^{max}=11`$). In this case $`x_K=8.8`$. If the high viscosity is due to a stochastic magnetic field, protons will be drifted towards the black hole by the magnetic viscosity, but the neutrons will not be drifted until they decay. This principle has been used in the simulation of this case. The modified composition in one sweep is allowed to interact with freshly accreting matter, with the understanding that the accumulated neutrons do not drift radially. After a few iterations or sweeps, a steady distribution of the composition may be achieved.
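The size of the ‘negative’ energy release from $`D\to n+p`$ can be estimated from the deuteron binding energy, 2.224 MeV per dissociation. Assuming a solar-like deuterium mass fraction of a few $`\times 10^{-5}`$ (an assumed input, not quoted in the text), the total absorption is of order $`10^{13}`$ erg gm<sup>-1</sup>, indeed comparable to the gravitational and viscous terms; a sketch:

```python
# Energy absorbed per gram if all deuterium is photo-dissociated (D -> n + p).
B_D = 2.224           # deuteron binding energy [MeV]
MeV = 1.602e-6        # [erg]
m_u = 1.661e-24       # atomic mass unit [g]
X_D = 4.0e-5          # assumed deuterium mass fraction (solar-like; illustrative)

n_D_per_gram = X_D / (2.0 * m_u)     # deuterons per gram of gas
E_abs = n_D_per_gram * B_D * MeV     # [erg/g]

assert 1e13 < E_abs < 1e14           # of order 10^13 erg per gram
```

The same bookkeeping, run in reverse, gives the energy later released when free neutrons recombine in a cooler wind.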
Figure 2 shows the neutron distributions for iteration numbers $`1`$, $`4`$, $`7`$ and $`11`$ (from bottom to top curves) in the advective region. The formation of a ‘neutron torus’ is very apparent in this result, and generally in all hot advective flows. In 1987, Hogan & Applegate showed that the formation of a neutron torus is possible with a high accretion rate. But a high accretion rate means a high rate of photons dumped into the sub-Keplerian region and a high rate of the inverse Compton process, through which the matter cools down; that is why photo-dissociation is then less prominent. The formation of neutrons is also possible through the photo-dissociation of deuterium in the hot disk, which occurs prominently in our parameter region, where the neutron torus is formed. Details are in Chakrabarti & Mukhopadhyay . ## 4 Discussions and Conclusions In this paper, we have explored the possibility of nuclear reactions in advective accretion flows around black holes. The temperature in this region is controlled by the efficiencies of the bremsstrahlung and Comptonization processes . For a higher Keplerian rate and higher viscosity, the inner edge of the Keplerian component comes closer to the black hole and the advective region becomes cooler . However, as the viscosity is decreased, the inner edge of the Keplerian component moves away and Compton cooling becomes less efficient. The composition changes especially in the centrifugal-pressure-supported denser region, where matter is hotter and slowly moving. Since the centrifugal-pressure-supported region can be treated as an effective surface of the black hole, which may generate winds and outflows in the same way as a stellar surface, one could envisage that the winds produced in this region would carry away the modified composition \[16-18\].
In very hot disks, a significant amount of free neutrons is produced; while coming out through winds, these may recombine with outflowing protons in a cooler environment, possibly forming deuterium. A few related questions have been asked lately. Can the lithium in the universe be produced in black hole accretion? We believe that this is not possible. When the full network is used, we find that in the hotter disks, where spallation would have been important, helium also photo-dissociates into deuterium and then into protons and neutrons before any significant production of lithium. Another question is: could the metallicity of the galaxy be explained, at least partially, by these nuclear reactions? We believe that this is quite possible. Details are in . Another important finding is that in the case of hot inflows the formation of neutron tori is a very distinct possibility . The presence of a neutron torus around a black hole would also help the formation of neutron-rich species, a process hitherto attributed only to supernova explosions. It can also help the production of Li on the companion star surface (see and references therein). The advective disks as we know them today do not match a Keplerian disk perfectly. The shear, i.e., $`d\mathrm{\Omega }/dx`$, is always very small in the advective flow compared to that of a Keplerian disk near the outer boundary of the advective region. Thus some improvement of the disk model at the transition region is needed. Since the major reactions occur closer to the black hole, we believe that such modifications of the model would not change our conclusions. The neutrino luminosity in a steady disk is generally very small compared to the photon luminosity , but occasionally it is seen to be very high. In these cases, we predict that the disk would be unstable. The neutrino luminosity from a cool advective disk is low.
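The spallation reaction $`{}_{}{}^{4}He+^4He\to ^7Li+p`$ invoked in the lithium discussion is in fact strongly endothermic, which is part of why it only matters in very hot gas, precisely where, as argued above, $`{}_{}{}^{4}He`$ photo-dissociates first. Using standard mass-excess values (quoted from nuclear mass tables, not from the text), the Q-value comes out near $`-17.3`$ MeV; a sketch:

```python
# Q-value of 4He + 4He -> 7Li + p from mass excesses [MeV] (standard table values).
delta = {"He4": 2.425, "Li7": 14.907, "H1": 7.289}

Q = 2 * delta["He4"] - (delta["Li7"] + delta["H1"])
assert Q < 0                    # endothermic: needs very energetic collisions
assert abs(Q + 17.35) < 0.05    # Q is about -17.35 MeV
```

So the reaction has a high threshold, and in the disk the competing photo-dissociation of $`{}_{}{}^{4}He`$ wins at those temperatures.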
In all the cases, even when the nuclear composition changes are not very significant, we note that the nuclear energy release due to exothermic reactions, or the absorption of energy due to endothermic reactions, is of the same order as the gravitational binding energy release. Like the energy release due to viscous processes, the nuclear energy release depends strongly on temperature. This additional energy source or sink may destabilize the flow . ## 5 Acknowledgment I would like to thank Prof. Sandip K. Chakrabarti for introducing me to this subject and for helpful discussions throughout the work.
no-problem/9910/cond-mat9910416.html
# The N-steps Invasion Percolation Model ## 1 Introduction When a nonviscous liquid is slowly injected into a porous medium already filled with a viscous fluid, the predominant forces on the interface are of capillary nature. These forces are such as to make the injected fluid spontaneously displace the viscous one. The interface between the fluids advances pore by pore, its dynamics determined by the capillary rule: the smallest pore is invaded first. Invasion percolation is a theoretical model used to describe fluid-fluid displacement. As has been pointed out , invasion percolation is a kind of self-organized criticality . The system exhibits scale-invariant behaviour in space and time, achieving the critical state without a mechanism of fine tuning to a particular parameter. There is now strong evidence that this critical state corresponds to that of critical standard percolation . Under some modifications, the invasion percolation model has been successfully applied to describe fingering phenomena in soils and fluid flow with a preferred direction . In its original formulation, invasion percolation assumes that all the pores situated on the perimeter of the cluster exchange information in such a way that, at growth stage $`t`$, no matter how far apart they are, only a unique pore is invaded - the one with the smallest size. After this pore has been invaded the fluid flow stops, a new perimeter is determined and the process continues. Recently, a multiple invasion percolation model was proposed , permitting not just one but many pores belonging to the actual perimeter to be simultaneously invaded. But here again, the fluid does not exhibit inertia. By this we mean the tendency of the fluid to proceed further, invading the sites surrounding the pore of the perimeter where the invasion process is taking place.
We implement this idea by proposing a model in which, at growth stage $`t`$, the fluid occupies not only the smallest perimeter site $`j`$ but also invades (one by one, always following the capillary rule) $`N-1`$ additional pores in $`j`$’s neighborhood. At $`t+1`$, a new perimeter is determined and the process continues. We can say that the fluid walks (occupies) $`N`$ steps (pores) before losing its inertia. We call this process N-steps invasion percolation. Ordinary invasion percolation corresponds to the case $`N=1`$. The N-steps invasion percolation model exhibits very different behaviours in two and three dimensions. In two dimensions and for large $`N`$, the walks can easily be blocked, since hindrances to the growth are set everywhere by parts of the cluster itself. The walks are most of the time incomplete. In such walks, the number of steps actually executed is always equal to or smaller than the external (and fixed) parameter $`N`$. The mean number of steps $`n`$ is bounded, and its maximum value depends on the lattice geometry. For an infinite square lattice we find that $`n`$ cannot be greater than 35. The clusters have fractal dimensions very close to that of ordinary invasion percolation but very different mean coordination numbers. The acceptance profiles show a strong dependence of the critical threshold on $`n`$. In three dimensions, the possibility of blocking is so small that $`n`$ always coincides with $`N`$. The difference between the two- and three-dimensional behaviours is reminiscent of what happens in many growth models, as, e. g., in the invasion percolation model with trapping . ## 2 The Model Before treating the N-steps invasion percolation model, let us briefly recall how ordinary invasion percolation is simulated.
Assign a random number $`r`$, uniformly distributed in the range $`[0,1]`$, to each lattice site and choose the central site as the seed of the growth. The perimeter sites of the cluster are identified as the growth sites. At each growth step, a unique perimeter site is occupied - the one with the smallest associated random number. The growth process is interrupted after the cluster reaches the lattice boundary. It is important to note that in ordinary invasion percolation the occupation of one site corresponds to one growth step. In the construction of the N-steps invasion percolation cluster, we define one growth step, taking place at growth stage $`t`$ (which should not be confused with time; see below for details), as composed of the following procedures. First, as in ordinary invasion percolation, the smallest perimeter site is occupied. Then, starting from this site, $`N-1`$ additional sites are sequentially invaded. In each invasion the capillary rule is obeyed, i. e., among the few sites surrounding the actual growth site, only the smallest empty site is invaded. In the language of fluid flow, these supplementary $`N-1`$ invasions correspond to the inertia of the fluid - that is, after the perimeter rupture has occurred, the fluid cannot stop instantaneously: it walks $`N-1`$ steps further. It may happen that the occupation of exactly $`N`$ sites is not allowed, owing to restrictions imposed by the growth process itself and the lattice geometry. Many times, an incomplete walk ends up in a ’cul de sac’. In such cases, of course, a smaller number of sites is in fact invaded. In any case, the new cluster perimeter is then determined. This completes what we call one growth step. The growth stage is then updated to $`t+1`$ and the process is repeated until the cluster touches one of the lattice boundaries.
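The growth step just described can be sketched in a few lines (a minimal square-lattice implementation; the lattice size and $`N`$ are illustrative, and the debt mechanism of the next section is omitted):

```python
import heapq, random

def n_steps_invasion(L=31, N=5, seed=1):
    """Grow one N-steps invasion percolation cluster on an L x L square lattice.
    Returns the set of invaded sites once the cluster touches a boundary."""
    rng = random.Random(seed)
    r = [[rng.random() for _ in range(L)] for _ in range(L)]
    def neighbors(s):
        x, y = s
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < L and 0 <= y + dy < L]
    occupied = {(L // 2, L // 2)}                # central seed
    while True:
        # capillary rule on the whole perimeter: smallest random number first
        perimeter = []
        for s in occupied:
            for t in neighbors(s):
                if t not in occupied:
                    heapq.heappush(perimeter, (r[t[0]][t[1]], t))
        _, site = heapq.heappop(perimeter)
        occupied.add(site)
        # inertia: up to N-1 further steps, each to the smallest empty neighbor
        for _ in range(N - 1):
            empty = [s for s in neighbors(site) if s not in occupied]
            if not empty:
                break                            # walk blocked in a 'cul de sac'
            site = min(empty, key=lambda s: r[s[0]][s[1]])
            occupied.add(site)
        if any(x in (0, L - 1) or y in (0, L - 1) for x, y in occupied):
            return occupied                      # cluster reached the boundary

cluster = n_steps_invasion()
assert (15, 15) in cluster and len(cluster) > 15
```

Rebuilding the full perimeter at every growth step keeps the sketch simple; an efficient implementation would instead update the heap incrementally.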
## 3 Numerical Simulations As mentioned above, sometimes, at a growth stage $`t`$, the presence of already occupied sites prevents the walk from completing all $`N`$ steps, and the walk is blocked. This means that the number of steps actually executed is a wildly fluctuating function of the growth stage $`t`$ and, more importantly, is always smaller than or equal to $`N`$. On average, this would bring the mean number of steps too far from $`N`$. We are then faced with the problem of having an external (and fixed) parameter $`N`$ disconnected from the mean number of steps really performed. In order to bring the values of these two quantities as close as possible, we devised the following compensation mechanism in our simulations. Any time a walk has performed, say, $`\overline{N}`$ steps ($`\overline{N}<N`$), the debt $`N-\overline{N}`$ is recorded and, in the next growth stage, a total of $`N+(N-\overline{N})`$ steps is permitted. So, sometimes the number of steps actually executed can be larger than $`N`$, sometimes smaller, and the mean number of steps fluctuates around $`N`$. This scheme resembles those used in systems with self-organized criticality. In our case, it is the mean number of steps which is spontaneously tuned to the value $`N`$. As we shall see later, this tuning is always possible in three dimensions but not in two. Let $`n(t)`$ be the number of steps really executed and $`n_e(t)`$ the expected number of steps to be executed at growth stage $`t`$ ($`t`$ is an integer, $`t\ge 1`$). The debt is defined as $`D(t)=n_e(t)-n(t)`$, with $`n_e(t=1)=N`$. This debt should be paid in the next growth stage $`t+1`$ by executing a longer walk, i. e., one with an expected number of steps $`n_e(t+1)=N+D(t)`$. The quantities $`n(t)`$ and $`n_e(t)`$ are plotted in Fig. 1 for one typical realization. From the fluctuating character of $`n(t)`$ we conclude that the relevant parameter to be measured is the mean number of steps $`n`$.
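The compensation mechanism itself is a two-line recursion: since $`n(t)=N+D(t-1)-D(t)`$, the running mean of executed steps telescopes to $`N-D(T)/T`$, so it self-tunes to $`N`$ whenever the debt stays bounded. A sketch with an illustrative random blocking rule standing in for the lattice geometry:

```python
import random

def mean_steps(N=10, T=20000, seed=0):
    """Debt bookkeeping of Sec. 3: n_e(t+1) = N + D(t), D(t) = n_e(t) - n(t).
    Blocking is mimicked by a random cap on the executed steps (toy model)."""
    rng = random.Random(seed)
    debt, total = 0, 0
    for _ in range(T):
        n_expected = N + debt
        n_executed = min(n_expected, rng.randint(N // 2, 2 * N))  # toy blocking
        debt = n_expected - n_executed
        total += n_executed
    return total / T

n_mean = mean_steps()
assert abs(n_mean - 10) < 0.2     # the mean number of steps self-tunes to N
```

In two dimensions, for $`N>N_{max}`$, the analogue of this cap is so tight that the debt grows without bound, which is exactly the breakdown reported below.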
If, from a total of $`S`$ realizations, the number of growth steps in the $`i`$th realization is $`T_i`$, then $$n=\frac{1}{S}\sum _{i=1}^{S}\frac{\sum _{t=1}^{T_i}n_i(t)}{T_i}$$ (1) where $`n_i(t)`$ stands for the number of steps really executed at growth stage $`t`$ in the $`i`$th realization. We performed numerical simulations of the N-steps invasion percolation model on two- and three-dimensional lattices. For the square and honeycomb lattices, sizes of $`201`$, $`401`$, $`801`$, and $`1601`$ were used, and the mean number of steps $`n`$ (averaged over $`400`$–$`2000`$ experiments) was determined for several values of $`N`$. Even for small $`N`$, we observe the presence of blocking. In order to measure how frequent these blockings are for fixed $`N`$, we calculate the quantity $$f_b=\frac{1}{S}\sum _{i=1}^{S}\frac{\text{total number of blockings that occurred in the }i\text{th realization}}{T_i}$$ (2) As can be seen in Fig. 2(a), this fraction of blocking increases with $`N`$ and reaches 100% around $`N=35`$ for the square lattice. Our finite-lattice-size analysis indicates that, as the thermodynamic limit of an infinite lattice is approached, the curve extrapolates to a cusp. Fig. 2(b) shows the dependence of the mean number of steps $`n`$ on $`N`$. They coincide up to the upper bound $`N_{max}=35`$. This means that, no matter how much larger $`N`$ is than $`35`$, the ultimate N-steps invasion percolation model is the one with $`N=35`$. We conclude that the model is well defined only for $`N\in [1,35]`$, where $`n`$ and $`N`$ coincide. This blocking phenomenon is also found in the honeycomb lattice, but it is more frequent than that observed for the square lattice. As a result, the breakdown occurs slightly earlier, i.e., $`N_{max}=30`$. From studies of the kinetic growth walk model we know that the blocking phenomenon is especially acute in two dimensions but irrelevant in higher dimensions.
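Both averages are straightforward to compute from per-realization step records; a sketch (function names and the simplified blocking criterion are ours):

```python
def mean_steps(realizations):
    """Eq. (1): the average over S realizations of the per-realization mean
    of the number of steps n_i(t) actually executed."""
    return sum(sum(n_t) / len(n_t) for n_t in realizations) / len(realizations)

def blocking_fraction(realizations, N):
    """Eq. (2), simplified: the fraction of growth stages at which fewer
    than N steps were executed, averaged over realizations (the full
    bookkeeping would compare against the debt-adjusted expectation)."""
    return sum(sum(1 for n in n_t if n < N) / len(n_t)
               for n_t in realizations) / len(realizations)

# two toy realizations with N = 3, each containing one blocked stage
runs = [[3, 3, 2, 3], [3, 1, 3, 3, 3]]
```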
For the three-dimensional case we used the simple cubic lattice with $`L=51`$, $`75`$, $`101`$, $`151`$, and $`201`$ ($`400`$–$`2000`$ experiments). Here, blocking is so rare that $`n`$ and $`N`$ coincide over the whole interval $`1\le N\le 100`$. ## 4 Cluster Structure The clusters of ordinary invasion percolation are fractal objects in the sense that their mean mass $`M`$ scales with their gyration radius $`R_g`$ with a non-integer exponent. This exponent is the fractal dimension $`D_F`$. The values of $`D_F`$ are known from several studies: $`D_F=1.89`$ for two-dimensional systems and $`D_F=2.52`$ for the three-dimensional case. The fractal dimension and the mean coordination number are two simple ways of characterizing the cluster structure. We determined these quantities for the N-steps invasion percolation model defined on the square and simple cubic lattices. The estimated fractal dimensions as a function of $`n`$ are shown in Table 1. For the square lattice only a small dependence is detected. On the other hand, universality is definitely broken for the cubic lattice, with $`D_F`$ varying from $`2.52`$ to $`2.77`$. These results are shown in Fig. 3. The mean coordination number $`z`$ of the clusters is the number of first neighbours averaged over all sites of the cluster. For ordinary site invasion percolation this quantity has the values $`2.51(1)`$ for the square lattice and $`2.31(1)`$ for the simple cubic lattice. Our estimated values are shown in Table 1. The mean coordination number increases with $`n`$ but quickly stabilizes. These considerable changes of $`z`$ with $`n`$ indicate that some visual differences between the clusters should exist. This can be seen in Fig. 4. We note the formation of globules, i.e., regions with higher densities, as a result of increasing $`n`$. ## 5 The acceptance profile The acceptance profile $`a(r)`$ concept was introduced by Wilkinson and Willemsen to study ordinary invasion percolation.
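The fractal dimensions quoted in Table 1 come from the scaling $`MR_g^{D_F}`$; a minimal estimator is a least-squares fit in log-log space, checked here on synthetic data (the helper names and the synthetic clusters are ours):

```python
import math

def gyration_radius(sites):
    """R_g of a set of (x, y) lattice sites about their center of mass."""
    n = len(sites)
    cx = sum(x for x, y in sites) / n
    cy = sum(y for x, y in sites) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in sites) / n)

def fractal_dimension(masses, radii):
    """Least-squares slope of log M versus log R_g, since M ~ R_g^D_F."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# synthetic clusters obeying M = 2 R_g^1.89 recover the input exponent
radii = [10.0, 20.0, 40.0, 80.0, 160.0]
masses = [2.0 * r ** 1.89 for r in radii]
```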
They defined $`a(r)`$ as the ratio of the number of random numbers in the interval $`[r,r+dr]`$ which were accepted into the cluster to the number of random numbers in that range which became available. In the limit of an infinite lattice the acceptance profile tends to a step function with the discontinuity located at the ordinary percolation threshold $`p_c`$. We determined the acceptance profile $`a(r)`$ of the N-steps invasion percolation as a function of $`n`$ for both the square and simple cubic lattices. Fig. 5(a) shows how the finite size of the lattice affects $`a(r)`$. As the lattice size is increased, the acceptance profile develops a plateau up to a threshold $`r_c\approx 0.35`$ (indicating that all the small random numbers which became available were accepted into the cluster). Beyond this value, however, a tail appears and remains finite even in the thermodynamic limit. This happens because the dynamics of the model forces the invasion of some larger pores. Fig. 5(b) shows the acceptance profile for many values of $`n`$ for the square lattice. The threshold $`r_c`$ of the plateau clearly depends on the mean number of steps, i.e., $`r_c(n)`$. For $`n=1`$ the acceptance profile tends to a step function with $`r_c(n=1)\approx 0.59`$, and the ordinary invasion percolation behaviour is recovered. Increasing $`n`$ diminishes $`r_c`$ down to the ultimate value $`r_c(35)\approx 0.24`$. A similar behaviour was observed for the acceptance profile of the simple cubic lattice, the only difference being the probable nonexistence of an upper limiting value of $`N`$, due to the absence of blocking. ## 6 Conclusions We proposed a new kind of invasion percolation model in which the inertia of the invading fluid is taken into account. The additional number of steps (or pores) $`N`$ governs the impetus of the fluid. In two dimensions, the appearance of the blocking phenomenon imposes on $`N`$ an upper bound value $`N_{max}`$.
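As an aside on measurement, the acceptance profile is estimated in practice by binning the random numbers; a sketch (the binning scheme and names are ours):

```python
def acceptance_profile(offered, accepted, bins=20):
    """a(r): per-bin ratio of the number of random numbers accepted into
    the cluster to the number that became available (offered) in that bin."""
    off = [0] * bins
    acc = [0] * bins
    for r in offered:
        off[min(int(r * bins), bins - 1)] += 1
    for r in accepted:
        acc[min(int(r * bins), bins - 1)] += 1
    return [a / o if o else 0.0 for a, o in zip(acc, off)]

# toy data: everything below 0.5 accepted, nothing above -> a step profile
offered = [0.1, 0.2, 0.3, 0.7, 0.8]
accepted = [0.1, 0.2, 0.3]
profile = acceptance_profile(offered, accepted, bins=2)
```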
This means that the proposed mechanism can only be implemented with $`N`$ varying from $`1`$ to $`N_{max}`$. The $`N_{max}`$ values were estimated to be $`35`$ and $`30`$ for the square and honeycomb lattices, respectively. There is a strong dependence of the mean coordination number $`z`$ on $`N`$. As $`N`$ is increased, we find that globules form inside the cluster structure. For the simple cubic lattice, the fractal dimension depends strongly on $`N`$ and places the N-steps invasion percolation models definitively in different universality classes. Let us now discuss in what sense we believe our model may contribute to a better understanding of fluid flow in porous media. Inertial forces in a porous medium are directly proportional to the square of the fluid velocity and inversely proportional to the pore diameter. The Reynolds number is given by the ratio of inertial to viscous forces and, consequently, depends linearly on the fluid velocity. If this number is small (low velocity), then Darcy’s law (which states that the pressure gradient is linearly proportional to the velocity) can be applied. However, if the fluid velocity is increased but still kept below the turbulent regime, then a departure from Darcy’s law can be measured experimentally. This departure is macroscopically well described by the Forchheimer equation, which adds a quadratic velocity term to Darcy’s law. It is believed that this quadratic term comes essentially from inertial-force contributions and can be detected and measured by increasing the fluid velocity. Recently, the Forchheimer equation has been investigated by numerical solution of the continuous Navier-Stokes equations. In our model, the parameter $`N`$ can be related to the fluid velocity. This can be seen if we remember that the volume invaded per growth step (i.e., the fluid flux) increases with $`N`$.
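For reference, the macroscopic relations invoked here can be written out in their standard forms (the notation $`\mu `$ for viscosity, $`k`$ for permeability, $`\beta `$ for the inertial coefficient, $`\rho `$ for density, $`v`$ for the flow velocity, and $`d`$ for the pore diameter is ours):

```latex
% Darcy's law: pressure gradient linear in the velocity
-\nabla p = \frac{\mu}{k}\,v
% Forchheimer equation: Darcy's law plus a quadratic (inertial) term
-\nabla p = \frac{\mu}{k}\,v + \beta\,\rho\,v^{2}
% Reynolds number (inertial over viscous forces), linear in v
\mathrm{Re} = \frac{\rho\,v\,d}{\mu}
```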
To check this directly will require finding workable definitions of average pressure gradients and velocities valid in the context of invasion percolation models. ## 7 Acknowledgments We acknowledge CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) for the financial support. Table 1 | Square Lattice | | | Simple Cubic Lattice | | | | --- | --- | --- | --- | --- | --- | | $`n`$ | $`D_F`$ | $`z`$ | $`n`$ | $`D_F`$ | $`z`$ | | $`1`$ | $`1.89(1)`$ | $`2.51(1)`$ | $`1`$ | $`2.52(2)`$ | $`2.31(1)`$ | | $`2`$ | $`1.89(1)`$ | $`2.65(1)`$ | $`2`$ | $`2.52(2)`$ | $`2.42(1)`$ | | $`4`$ | $`1.90(1)`$ | $`2.84(1)`$ | $`8`$ | $`2.58(2)`$ | $`2.51(1)`$ | | $`15`$ | $`1.91(1)`$ | $`2.89(1)`$ | $`15`$ | $`2.64(3)`$ | $`2.52(1)`$ | | $`30`$ | $`1.92(1)`$ | $`2.89(1)`$ | $`30`$ | $`2.69(3)`$ | $`2.52(1)`$ | | $`35`$ | $`1.92(1)`$ | $`2.89(1)`$ | $`45`$ | $`2.75(3)`$ | $`2.53(1)`$ | | $`40`$ | $`1.92(1)`$ | $`2.89(1)`$ | $`60`$ | $`2.77(3)`$ | $`2.53(1)`$ | | $`60`$ | $`1.92(1)`$ | $`2.89(1)`$ | $`100`$ | $`2.77(3)`$ | $`2.53(1)`$ | FIGURE CAPTIONS Figure 1. Simulation of the number of executed steps $`n(t)`$ and the number of expected steps $`n_e(t)`$ as a function of the growth stage $`t`$. Figure 2. (a) The fraction of blocking $`f_b`$ as a function of $`N`$ for the square lattice; (b) the mean number of steps versus $`N`$. The dotted line is the straight line $`n=N`$. Figure 3. Logarithm of the cluster average mass plotted versus the logarithm of the average gyration radius for the cubic and square lattices and for some values of $`n`$. Figure 4. The upper figure shows a typical cluster of the ordinary invasion percolation model for a square lattice with $`L=201`$. The lower one is a cluster of the N-steps invasion percolation model with $`n=30`$ and $`L=201`$. Figure 5.
The dependence of the acceptance profiles on the random number $`r`$ as the lattice size $`L`$ and the mean number of steps $`n`$ are varied (panels (a) and (b), respectively). TABLE CAPTION Table 1. The fractal dimensions and mean coordination numbers of the clusters on the square and simple cubic lattices for several values of $`n`$.
# The Extent and Cause of the Pre-White Dwarf Instability Strip ## 1 Introduction Until about twenty years ago, the placement of stars in the transition region between the planetary nebula nucleus track and the upper end of the white dwarf cooling sequence was problematic. This was due not only to the rapidity with which stars must make this transition (making observational examples hard to come by) but also to the difficulty of specifying log $`g`$ and $`T_{\mathrm{eff}}`$ for such objects. Determining these quantities from spectra requires that we construct a reasonable model of the star’s atmosphere. This is very difficult for compact stars with $`T_{\mathrm{eff}}`$ in excess of 50,000 K. The assumption of local thermal equilibrium (LTE), so useful in modeling the spectra of cooler stars, breaks down severely at such high temperatures and gravities. Fortunately, the known sample of stars that occupy this phase of the evolutionary picture has grown over the last two decades. The most important discovery was that of a new spectral class called the PG 1159 stars. They are defined by the appearance of lines of HeII, CIV, and OVI (and sometimes NV) in their spectra. Over two dozen are known, ranging in $`T_{\mathrm{eff}}`$ from over 170,000 K down to 80,000 K. About half are central stars of planetary nebulae. The most evolved PG 1159 stars merge with the log $`g`$ and $`T_{\mathrm{eff}}`$ of the hottest normal white dwarfs. This class thus forms a complete evolutionary sequence from PNN to the white dwarf cooling track (Werner 1995, Dreizler & Huber 1998). About half of the PG 1159 stars are pulsating variable stars, spread over the entire range of log $`g`$ and $`T_{\mathrm{eff}}`$ occupied by members of the spectral class. This represents the widest instability “strip” (temperature-wise) in the H-R diagram. Central star variables are usually denoted as PNNV stars (planetary nebula nucleus variables).
Variable PG 1159 stars with no nebula make up the GW Virginis (or simply GW Vir) stars. PG 1159 thus serves as the prototype for both a spectroscopic class and a class of variable stars. Farther down the white dwarf sequence, we find two additional instability strips. At surface temperatures between about 22,000 and 28,000 K (Beauchamp et al. 1999), we find the DBV (variable DB) stars. Even cooler are the ZZ Ceti (variable DA) stars, with $`T_{\mathrm{eff}}`$ between 11,300 and 12,500 K (Bergeron et al. 1995). Variability in all three strips results from $`g`$-mode pulsation (for the ZZ Cetis, see Warner & Robinson 1972; for the DBVs, see Winget et al. 1982; for the PG 1159 variables, see Starrfield et al. 1983). The pulsation periods provide a rich mine for probing the structure of white dwarf and pre-white dwarf (PWD) stars. Despite the wealth of pulsational data available to us in studying the variable PWDs, they have so far resisted coherent generalizations of their group properties. Such a classification is required for understanding possible driving mechanisms or explaining the observed period distribution. Until now, the error bars associated with spectroscopic determination of mass, luminosity, and $`T_{\mathrm{eff}}`$ for a given variable star spanned a significant fraction of the instability strip. Even given the limited information provided by spectroscopic determinations of log $`g`$ and $`T_{\mathrm{eff}}`$, many attempts have been made to form a coherent theory of their pulsations. Starrfield et al. (1984) proposed that GW Vir and PNNV stars are driven by cyclical ionization of carbon (C) and oxygen (O). However, their proposal suffers from the deficiency that a helium (He) mass fraction of only 10% in the driving zone will “poison” C/O driving.
Later, they managed to create models unstable to (C-driven) pulsation with surface He abundance as high as 50%, but only at much lower $`T_{\mathrm{eff}}`$ than most GW Vir stars (Stanghellini, Cox, & Starrfield 1991; see also the review by Cox, 1993). Another problem is the existence of non-pulsating stars within the strip with spectra nearly identical to those of the pulsators (Werner 1995). More precise observations can detect subtle differences between the pulsators and non-pulsators. For instance, Dreizler (1998) finds NV in the spectra of all the pulsators but only some of the non-pulsators. It remains to be seen if these differences are important. More recently, Bradley & Dziembowski (1996) studied the effects of using the newer OPAL opacities in creating unstable stellar models. Their models pulsate via O driving, and—in exact opposition to Starrfield, Stanghellini, and Cox—they require no C (or He) in the driving region to obtain unstable modes that match the observed periods. Their models also require radii up to 30% larger than those derived from prior asteroseismological analyses of GW Vir stars, in order to match the observed range of periods. Finally, Saio (1996) and Gautschy (1997) proposed driving by a “metallicity bump,” where the opacity derivative changes sign due to the presence of metals such as iron in the envelope. This $`\kappa `$ mechanism is similar to those currently thought to drive pulsation in the $`\beta `$-Cephei variables (see for instance Moskalik & Dziembowski, 1992) and in the recently discovered subdwarf B variables (Charpinet et al. 1996). Unfortunately, their simplified models are inconsistent with the evolutionary status of PG 1159 stars. More importantly, their period structures do not match published WET observations of real pre-white dwarfs (Winget et al. 1991, Kawaler et al. 1995, Bond et al. 1996, O’Brien et al. 1998).
With so many different explanations for pulsational driving, stricter constraints on the observable conditions of the pulsators and non-pulsators are badly needed. The most effective way to thin the ranks of competing theories (and perhaps point the way to explanations previously unthought of) is to obtain better knowledge of when the pulsations begin and end for PWD stars of a given mass. Of course, with only a few stars available to study, even complete asteroseismological solutions for all of them might not prove significantly illuminating. Even if their properties follow recognizable patterns, it is difficult to show this compellingly given only a few cases. However, with asteroseismological analyses now published for the majority of the GW Vir stars, we can finally attempt to investigate their behavior as a class of related objects. In the next section, we outline the analytic theory of PWD pulsation. In § 3, we use the observed properties of the variable PWDs to show that they do follow compelling trends, spanning their entire range of stellar parameters, to which any model of PWD pulsation must conform. Next we introduce a new set of numerical models developed to help interpret this behavior. In § 5, we show how the evolution (and eventual cessation) of pulsation in PWDs can be governed by the changing relationship of the driving timescale to the maximum sustainable $`g`$-mode period. These results suggest several fruitful directions for future work, both theoretical and observational, which we discuss in the concluding section. ## 2 Theory The set of periods excited to detectable limits in white dwarf stars is determined by the interplay of several processes. The first is the driving of pulsation by some—as yet unspecified—mechanism (no matter how melodious the bell, it must be struck to be heard). Second is the response of the star to the driving taking place somewhere in its interior.
A pulsating PWD star is essentially a (spherically symmetric) resonant cavity, capable of sustained vibration at a characteristic set of frequencies. Those frequencies are determined by the structure of the star, its mass and luminosity, as well as the thickness of its surface layers. Finally, the actual periods we see are affected by the mechanism through which internal motions are translated into observable luminosity variations. This is the so-called “transfer function,” and clues to its nature are to be found in the observed variations as well. ### 2.1 Asymptotic Relations If we wish to make the most of the observed periods, we must understand the process of driving, and response to driving, in as much detail as possible. However, we can learn a great deal by simply comparing the periods of observed light variations to the normal-mode periods of model stars. The normal mode oscillations of white dwarf and PWD stars are most compactly described using the basis set of spherical harmonic functions, $`Y_m^{\ell }`$, coupled with an appropriate set of radial wave functions $`R_n`$. Here $`n`$ is the number of nodal surfaces in the radial direction, $`\ell `$ denotes the number of nodal planes orthogonal to such surfaces, and $`m`$ is the number of these planes which include the pulsational axis of the star. The periods of $`g`$-modes of a given $`\ell `$ are expected to increase monotonically as the number of radial nodes $`n`$ increases. The reason is that the buoyant restoring force is proportional to the total mass displaced, and this mass gets smaller as the number of radial nodes increases. A weaker restoring force implies a longer period (see, e.g., Cox 1980).
In the “asymptotic limit” that $`n\rightarrow \infty `$, the periods of consecutive modes should obey the approximate relation $$\mathrm{\Pi }_n\approx \frac{\mathrm{\Pi }_0}{[\ell (\ell +1)]^{1/2}}(n+ϵ),\qquad n\rightarrow \infty ,$$ (1) where $`\mathrm{\Pi }_n`$ is the $`g`$-mode period for a given value of $`n`$, and $`\mathrm{\Pi }_0`$ is a constant that depends on the overall structure of the star (see, e.g., Tassoul 1980, Kawaler 1986).<sup>1</sup><sup>1</sup>1The additional constant $`ϵ`$ is assumed to be small, though its exact value depends on the boundary conditions. Since the actual boundary conditions depend on the period, $`ϵ`$ probably does, too. Equation (1) implies that modes of a given $`\ell `$ should form a sequence based upon a fundamental period spacing $`\mathrm{\Delta }\mathrm{\Pi }=\mathrm{\Pi }_0/\sqrt{\ell (\ell +1)}`$. This is the overall pattern identified by various investigators in the lightcurves of most of the GW Vir stars. Once a period spacing is identified, we can compare this spacing to those computed in models to decipher the star’s structure. Kawaler (1986) found that the parameter $`\mathrm{\Pi }_0`$ in static PWD models is dependent primarily on the overall stellar mass, with a weak dependence on luminosity. Kawaler & Bradley (1994) present the approximate relation $$\mathrm{\Pi }_0\approx 15.5\left(\frac{M}{M_{}}\right)^{-1.3}\left(\frac{L}{100L_{}}\right)^{-0.035}\left(\frac{q_y}{10^{-3}}\right)^{-0.00012}$$ (2) where $`q_y`$ is the fraction by mass of He at the surface.<sup>2</sup><sup>2</sup>2Note that the sign of the exponent in the $`L`$ term is in error in Kawaler & Bradley (1994). Other questions, however, can only be answered from knowledge of the cause of the light variations we measure. In the case of the PWDs, this is chief among the mysteries we would like to solve.
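As a numerical check of Equations (1) and (2), we can evaluate the fit directly. A sketch follows; note that we read all three exponents of Equation (2) as negative (so that a heavier or more luminous model has a smaller $`\mathrm{\Pi }_0`$), and the mass, luminosity, and $`q_y`$ values below are illustrative rather than fitted:

```python
import math

def pi0(mass, lum, q_y=1e-3):
    """Kawaler & Bradley (1994) fit for Pi_0 in seconds; mass in M_sun,
    lum in L_sun.  All three exponents are taken as negative here (an
    assumption: heavier or more luminous models get a smaller Pi_0)."""
    return (15.5 * mass ** -1.3 * (lum / 100.0) ** -0.035
            * (q_y / 1e-3) ** -0.00012)

def period_spacing(mass, lum, ell=1, q_y=1e-3):
    """Asymptotic spacing Delta Pi = Pi_0 / sqrt(l (l + 1)), from Eq. (1)."""
    return pi0(mass, lum, q_y) / math.sqrt(ell * (ell + 1))

# PG 1159-like parameters (illustrative): the l = 1 spacing lands near
# the ~21 s value observed for the GW Vir stars
dp = period_spacing(mass=0.59, lum=200.0)
```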
The most telling clue will be the extent and location of the region of instability in the H-R diagram—derived in part from pulsational analysis of the structure of stars which bracket this region. We will now provide some of the background needed to attack these issues. ### 2.2 Pulsation in Pre-White Dwarf Stars In a star, energy generally flows down the temperature gradient from the central regions to the surface in a smooth, relatively unimpeded fashion. Of course small, random perturbations to this smooth flow constantly arise. The situation is stable as long as such perturbations quickly damp out, restoring equilibrium. For instance, equilibrium is restored by the forces of buoyancy and pressure; these forces define the nature of $`g`$\- and $`p`$-modes. They resist mass motions and local compression or expansion of material away from equilibrium conditions. #### 2.2.1 Driving and Damping Thermodynamics and opacity affect local equilibrium. In general, if a parcel of material in a star is compressed, its temperature goes up while its opacity decreases. The higher temperature causes more radiation to flow out to the surrounding material, while lower opacity decreases the efficiency with which radiation is absorbed. From the first law of thermodynamics, an increasing temperature accompanied by net heat loss implies that work is being done on the parcel by its surroundings. Similar arguments show that work is done on the parcel during expansion, also. Thus any initial perturbation will be quickly damped out—each parcel demands work to do otherwise. The requirement that a region lose heat when compressing and gain heat when expanding is the fundamental criterion for stability. When the opposite is true, and work is done by mass elements on their surroundings during compression and expansion, microscopic perturbations can grow to become the observed variations in pulsating stars.
Under certain circumstances, the sign of the opacity derivative changes compared to that described above. If the opacity, $`\kappa `$, increases upon compression, then heat flowing through a mass element is trapped there more efficiently. Within regions where this is true, work is done on the surrounding material. Thus, these regions can help destabilize the star, if this driving is not overcome elsewhere in the star. However, other regions, where work is required to compress and expand material, tend to damp such pulsation out. Global instability arises only when the work performed by the driving regions outweighs the work done on the damping regions over a pulsation cycle. In this case, the flow of thermal energy can do mechanical work, and this work is converted into the pulsations we observe. This method of driving pulsation is called the $`\kappa `$ mechanism. A region within a star will drive pulsation via this mechanism if the opacity derivatives satisfy the condition (see for instance Cox 1980) $$\frac{d}{dr}\left(\kappa _T+\frac{\kappa _\rho }{\mathrm{\Gamma }_3-1}\right)>0$$ (3) where $$\kappa _T\equiv \left(\frac{\partial \mathrm{ln}\kappa }{\partial \mathrm{ln}T}\right)_\rho ,\kappa _\rho \equiv \left(\frac{\partial \mathrm{ln}\kappa }{\partial \mathrm{ln}\rho }\right)_T,\mathrm{and}\mathrm{\Gamma }_3-1\equiv \left(\frac{\partial \mathrm{ln}T}{\partial \mathrm{ln}\rho }\right)_S.$$ Here $`S`$ represents the specific entropy, $`\kappa `$ is the opacity in cm<sup>2</sup>/g, and the variables $`r`$, $`\rho `$, and $`T`$ all have their usual meaning. Equation (3) is satisfied most commonly when some species within a star is partially ionized. In particular, $`\kappa _T`$ usually increases in the hotter (inner) portion of a partial ionization zone and decreases in the cooler (outer) portion. Thus the inner part of an ionization zone may drive while the outer part damps pulsation. The adiabatic exponent, $`\mathrm{\Gamma }_3-1`$, is always positive but usually reaches a minimum when material is partially ionized.
This enhancement of the $`\kappa `$ mechanism is called the $`\gamma `$ mechanism. Physically, the $`\gamma `$ mechanism represents the conversion of some of the work of compression into further ionization of the species in question. This tends to compress the parcel more, aiding the instability. Release of this ionization energy during expansion likewise increases the perturbation. Since they usually occur together, instabilities caused by both the opacity and ionization effects are known as the $`\kappa `$-$`\gamma `$ mechanism. The pulsations of Cepheid and RR Lyrae variables, for instance, are driven via the $`\kappa `$-$`\gamma `$ mechanism operating within a region where HeI and HeII have similar abundances. This same partial ionization zone is apparently the source of instability for the DBV white dwarfs. The variations observed in ZZ Ceti white dwarfs have long been attributed to partial ionization of H. However, Goldreich & Wu (1999a,b,c) have recently shown that the ZZ Ceti pulsations can be driven through a different mechanism in efficient surface convection.<sup>3</sup><sup>3</sup>3Such convection zones often accompany regions of partial ionization associated with the $`\kappa `$-$`\gamma `$ mechanism. However, the driving proposed by Goldreich & Wu is not directly related to ionization. It is possible this theory might eventually be expanded to account for DBV pulsation as well. It is unlikely to find application in PWDs, though, since models of PG 1159 stars generally do not support convection. Great efforts have been expended by theorists attempting to explain GW Vir and PNNV pulsations in terms of some combination of C and O partial ionization (Starrfield et al. 1983; Stanghellini, Cox, & Starrfield 1989; Bradley & Dziembowski 1996). A primary difficulty arises from the damping effects of He (and C in the O-driving scheme proposed by Bradley & Dziembowski 1996) in the driving zone, which can “poison” the driving.
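Before moving on, note that the driving condition of Equation (3) can be checked numerically on any tabulated opacity profile; a sketch with a toy partial ionization zone (all numbers invented purely for illustration):

```python
def driving_intervals(kappa_T, kappa_rho, gamma3_minus_1, radii):
    """Finite-difference test of Eq. (3): an interval drives where
    kappa_T + kappa_rho / (Gamma_3 - 1) increases outward."""
    q = [kt + kr / g for kt, kr, g in zip(kappa_T, kappa_rho, gamma3_minus_1)]
    return [(q2 - q1) / (r2 - r1) > 0
            for q1, q2, r1, r2 in zip(q, q[1:], radii, radii[1:])]

# toy zone: kappa_T rises then falls going outward, and Gamma_3 - 1
# dips where the species is partially ionized
kT = [0.5, 1.5, 2.5, 1.5, 0.5]
krho = [0.8] * 5
g31 = [0.40, 0.30, 0.25, 0.30, 0.40]
r = [1.0, 1.1, 1.2, 1.3, 1.4]
flags = driving_intervals(kT, krho, g31, r)  # inner part drives, outer damps
```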
More recently, Saio (1996) and Gautschy (1997) attempted to explain driving in terms of an “opacity bump” in the models, without partial ionization—in other words, using the $`\kappa `$ mechanism alone. A fundamental problem has been the lack of information on the exact extent of the instability strip and the structure of the pulsating stars themselves. The initial goal in this paper, therefore, is to define as precisely as possible the observational attributes which any proposed driving source must reproduce. Whether or not the mechanism we later identify is correct, we hope to lay the groundwork for future studies which will eventually provide a definitive answer to this question. If a star pulsates, we wish to know what relationship the driving has to the periods we observe. In general, no star will respond to driving at just any arbitrary period; the pulsation period range is determined by the driving mechanism. Cox (1980) showed that the approximate pulsation period is determined by the thermal timescale of the driving zone: $$\mathrm{\Pi }\approx \tau _{\mathrm{th}}=\frac{c_\mathrm{v}Tm_{\mathrm{dz}}}{L}.$$ (4) Here $`\tau _{\mathrm{th}}`$ is the thermal timescale, $`c_\mathrm{v}`$ is the heat capacity, and $`m_{\mathrm{dz}}`$ is the mass above the driving zone. This equation gives the approximate time it takes the star to radiate away—via its normal luminosity, $`L`$—the energy contained in the layers above the region in question. Though Cox derived this relationship for radial modes, Winget (1981) showed that it applies equally well to nonradial $`g`$-mode pulsation. The basic idea behind Equation (4) is that energy must be modulated at approximately the same rate at which it can be dammed up and released by the driving zone. Consider the question of whether a given zone can drive pulsation near a certain period. If the driving zone is too shallow, then the thermal timescale is shorter than the pulsation period.
Any excess heat is radiated away before the compression increases significantly. Thus, it can’t do work to create mechanical oscillations on the timescale in question. If the driving zone is too deep ($`\tau _{\mathrm{th}}>\mathrm{\Pi }`$), then excess heat built up during contraction is not radiated away quickly enough during expansion; it then works against the next contraction cycle. Of course, this relationship is only approximate, and other factors might intervene to limit pulsation to periods far from those implied by Equation (4). #### 2.2.2 Period Limits One such factor is the range of pulsation modes a star can sustain in the form of standing waves. Hansen, Winget, & Kawaler (1985) showed that there is a maximum period for white dwarf and PWD stars above which oscillations will propagate only as running waves that quickly damp out. They attempt to calculate this maximum $`g`$-mode period, $`\mathrm{\Pi }_{\mathrm{max}}`$, to explain the observed trend of ZZ Ceti periods with $`T_{\mathrm{eff}}`$. We can recast their equations (6) through (8) as $$\mathrm{\Pi }_{\mathrm{max}}\approx 940\mathrm{s}\left(\frac{\mu }{\ell (\ell +1)}\right)^{0.5}\left(\frac{R}{0.02R_{}}\right)\left(\frac{T_{\mathrm{eff}}}{10^5\mathrm{K}}\right)^{-0.5},$$ (5) or, using the relation $`R^2=L/4\pi \sigma T_{\mathrm{eff}}^4`$, $$\mathrm{\Pi }_{\mathrm{max}}\approx 940\mathrm{s}\left(\frac{\mu }{\ell (\ell +1)}\right)^{0.5}\left(\frac{L}{35L_{}}\right)^{0.5}\left(\frac{T_{\mathrm{eff}}}{10^5\mathrm{K}}\right)^{-2.5}.$$ (6) where $`R`$ is the stellar radius, $`\mu `$ represents the mean molecular weight, and $`\ell `$ is the pulsation index introduced earlier. For a complete derivation of these equations, see the Appendix. For cool white dwarfs of a given mass, the radius is roughly constant with time, and $`\mathrm{\Pi }_{\mathrm{max}}`$ is expected to increase as stars evolve to lower $`T_{\mathrm{eff}}`$ and the driving zone sinks deeper.
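Both timescales in play here, the driving timescale of Equation (4) and the maximum sustainable period of Equation (6), are easy to evaluate; a sketch (we read the $`T_{\mathrm{eff}}`$ exponent as negative, so that cooling at fixed radius lengthens $`\mathrm{\Pi }_{\mathrm{max}}`$, and every numerical value below is an assumption for illustration, not a model result):

```python
import math

M_SUN = 1.989e33   # g
L_SUN = 3.846e33   # erg / s

def thermal_timescale(c_v, T, m_dz, lum_cgs):
    """Eq. (4): tau_th = c_v T m_dz / L in seconds (cgs inputs)."""
    return c_v * T * m_dz / lum_cgs

def pi_max(mu, ell, lum_solar, teff):
    """Eq. (6) with the T_eff exponent taken as -2.5: maximum sustainable
    g-mode period in seconds, for lum in L_sun and teff in K."""
    return (940.0 * math.sqrt(mu / (ell * (ell + 1)))
            * (lum_solar / 35.0) ** 0.5 * (teff / 1e5) ** -2.5)

# a driving zone at T ~ 1e6 K with ~1e-9 of a 0.6 M_sun star above it,
# in a 100 L_sun pre-white dwarf: tau lands near observed GW Vir periods
tau = thermal_timescale(c_v=1.5e8, T=1.0e6,
                        m_dz=1e-9 * 0.6 * M_SUN, lum_cgs=100 * L_SUN)
# a hot, luminous PNNV-like star versus a cool, faint GW Vir-like star
hot = pi_max(mu=2.0, ell=1, lum_solar=5000.0, teff=1.4e5)
cool = pi_max(mu=2.0, ell=1, lum_solar=10.0, teff=8.0e4)
```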
Cooler ZZ Ceti and DBV white dwarf stars should in general have longer periods; for the ZZ Cetis at least, this is indeed the case. However, in PWDs degeneracy is not yet sufficient to have halted contraction, and the $`R`$ dependence is still a factor in determining how $`\mathrm{\Pi }_{\mathrm{max}}`$ varies through the instability strip. We cannot say a priori which dominates: the shrinking radius which tends to decrease $`\mathrm{\Pi }_{\mathrm{max}}`$, or the falling $`T_{\mathrm{eff}}`$ which has the opposite effect. We cannot even predict in advance whether $`\mathrm{\Pi }_{\mathrm{max}}`$ is a factor in determining the pulsation periods at all. In § 5, we answer these questions in view of the observed properties of both the PNNV and GW Vir stars. In summary, we can measure individual white dwarf mass and luminosity through identification of the period spacing. The resulting determinations of GW Vir structure, summarized in the next section, will tell us the precise boundaries of the instability strip, in other words, when the pulsations begin and end in the evolution of PWD stars of a given mass. This knowledge, coupled with the timescales of driving and the maximum $`g`$-mode period in models matched to the observations, will provide strict constraints on the allowable form of the driving mechanism. This information is absolutely necessary to any research program designed to discover why PWDs pulsate in the first place. ## 3 Observed Temperature Trends of the PG 1159 Pulsators ### 3.1 Mass Distribution White dwarfs exhibit a very narrow mass distribution, centered at about 0.56–0.59$`M_{}`$ (Bergeron, Saffer & Liebert 1992, Weidemann & Koester 1984). If, for a given $`T_{\mathrm{eff}}`$, pulsating PWD masses conform to the same distribution expected of non-variables, then we are led to certain expectations concerning the pulsations seen in the former. 
Based on Equations (1) and (2), period spacing (proportional to $`\mathrm{\Pi }_0`$) should increase monotonically as the luminosity and $`T_{\mathrm{eff}}`$ of a given star decrease. Four PWD stars have so far yielded to asteroseismological scrutiny: the GW Vir stars PG 1159 (Winget et al. 1991), PG 2131 (Kawaler et al. 1995), and PG 0122 (O’Brien et al. 1998), and the central star of the planetary nebula NGC 1501 (Bond et al. 1996). We summarize the parameters of these four stars in Table 1. All show patterns of equal period spacing very close to $`21`$s. This is a remarkable trend, or more accurately, a remarkable lack of a trend! If these stars follow the narrow mass distribution observed in white dwarf stars, then the period spacing is expected to increase with decreasing $`T_{\mathrm{eff}}`$ as they evolve from the blue to the red edge of the instability strip. Observationally, this is not the case. For instance, PG 1159, with a $`T_{\mathrm{eff}}`$ of 140,000 K, should see its period spacing increase by about 20%, from 21 s to 26 s, by the time it reaches the effective temperature of PG 0122—80,000 K. In other words, the farther PG 0122’s period spacing is from 26 s, the farther is its mass from that of PG 1159. In fact these two objects, representing the high and low $`T_{\mathrm{eff}}`$ extremes of the GW Vir stars, have almost exactly the same period spacing despite enormous differences in luminosity and temperature. For PG 0122, its low $`T_{\mathrm{eff}}`$ pushes it toward longer $`\mathrm{\Delta }\mathrm{\Pi }`$; this must be offset by a higher mass. With such a significant $`T_{\mathrm{eff}}`$ difference, the mass difference between PG 0122 and PG 1159 must be significant also, and it is: $`0.69M_{\odot }`$ versus $`0.58M_{\odot }`$. Two stars with a coincident period spacing—despite widely different mass and luminosity—might simply be curious. In fact all four PWDs with relatively certain period spacing determinations have the same spacing to within 2 s, or 10%.
This includes the central star of NGC 1501, which has a luminosity over three orders of magnitude larger than that of PG 0122. Hence, NGC 1501 must have an even lower mass than PG 1159 by comparison. Figure 1 shows the mass versus luminosity values for the known GW Vir stars plus NGC 1501, based on the values of $`\mathrm{\Delta }\mathrm{\Pi }`$ from Table 1. Figure 2 shows the implications of the common 21-22 s spacing for the instability “strip” in the log $`g`$–log $`T_{\mathrm{eff}}`$ plane. The observational region of instability has shrunk significantly. It exhibits such a striking slope in the figure that, unlike most other instability regions in the H-R diagram, it can no longer be referred to accurately as an instability strip (in temperature) at all. Nevertheless, we will continue to refer to the region of instability pictured in Figure 2 as the GW Vir “instability strip” with the understanding that the effective temperatures of the red and blue edges are highly dependent on log $`g`$ (or $`L`$). Why should the PWD instability strip apparently straddle a line of approximately constant period spacing? Normally, theorists search for explanations for the observed boundaries (the red and blue edges) of an instability strip based on the behavior of a proposed pulsation mechanism. That behavior is determined by the thermal properties of a PWD star, while its period spacing is determined by its mechanical structure. In degenerate or nearly degenerate stars, the thermal properties are determined by the ions, while the mechanical structure is determined by the degenerate electrons, and normally the two are safely treated as separate, isolated systems. If the 21-22 s period spacing is somehow a prerequisite for pulsation, then this implies an intimate connection between the mechanical oscillations and the thermal pulsation mechanism.
The alternative is that the mass-luminosity relationship along the instability strip is caused by some other process—or combination of processes—which approximately coincides with the relationship that governs the period spacing. In this case, some mechanism must shut off observable pulsation in low mass PWDs before they reach low temperature, and delay observable pulsation in higher mass PWDs until they reach low temperature.<sup>4</sup><sup>4</sup>4We use the phrase “observable pulsation” to indicate that possible solutions might reside in some combination of observational selection effects as well as the intrinsic behavior of a driving mechanism. We will explore mechanisms which meet these criteria in § 5. ### 3.2 Period Distribution The PWD pulsators exhibit another clear observational trend: their periods decrease with decreasing luminosity (increasing surface gravity). Figure 3 shows the luminosity versus dominant period (that is, the period of the largest amplitude mode) for the same stars from Figure 1. The trend apparent in the figure is in marked contrast to the one seen in the ZZ Ceti stars, which show longer periods with lower $`T_{\mathrm{eff}}`$. The ZZ Ceti period trend is generally attributed to the changing thermal timescale in the driving zone, which sinks deeper (to longer timescales) as the star cools. If the same effect determines the periods in GW Vir and PNNV stars, then Figure 3 might be taken to indicate that the driving zone becomes more shallow with decreasing $`T_{\mathrm{eff}}`$. We will show that this is not the case in PWD models. We conclude that some other mechanism must be responsible for setting the dominant period in PWD variables. Are the trends seen in Figures 1 through 3 related? To explore this question in detail, we developed a new set of PWD evolutionary models, which we summarize in the next section.
In § 5 we analyze the behavior of driving zones in PWD models in light of—and to seek an explanation for—the trends just discussed. ## 4 A New Set of Pre-White Dwarf Evolutionary Models To understand the various trends uncovered in the hot pulsating PWDs, we appeal to stellar models. Models have been essential for exploiting the seismological observation of individual stars. For this work, though, we needed models over the entire range of the GW Vir stellar parameters of mass and luminosity. Our principal computational tool is the stellar evolution program ISUEVO, which is described in some detail by Dehner (1996; see also Dehner & Kawaler 1995). ISUEVO is a “standard” stellar evolution code that is optimized for the construction of models of PWDs and white dwarfs. The seed model for the models used in this section was generated with ISUEVO by evolution of a $`3M_{\odot }`$ model from the Zero Age Main Sequence through the thermally pulsing AGB phase. After reaching a stable thermally pulsing stage (about 15 thermal pulses), mass loss was invoked until the model evolved to high temperatures. This model (representing a PNN) had a final mass of $`0.573M_{\odot }`$, and a helium-rich outer layer. To obtain self-consistent models within a small range of masses, we used the $`0.573M_{\odot }`$ model, and scaled the mass up or down. For example, to obtain a model at $`0.60M_{\odot }`$, we scaled all parameters by the factor $`0.60/0.573`$ for an initial model. Relaxation to the new conditions was accomplished by taking many very short time steps with ISUEVO. Following this relaxation, the evolution of the new model proceeded as before. In this way, we produced models that were as similar as possible, with mass being the only difference. Comparison of our evolutionary tracks and trends with the earlier model grids of Dehner (1996) shows very close agreement, given the different evolutionary histories.
Dehner’s initial models were derived from a single model with a simplified initial composition profile, while our models are rooted in a self-consistent evolutionary sequence. We note that the work by Dehner (1996) included elemental diffusion (principally by gravitational settling), while the models we use here did not include diffusion. Within the temperature range of the GW Vir stars, however, observations of their surface abundances indicate that the effects of diffusion have only a small influence. ## 5 Selection Effects, Driving, and the Blue and Red Edges ### 5.1 The Observed “Blue” and “Red” Edges In explaining the observed distribution of pulsating stars with respect to stellar parameters, we must distinguish observational selection effects from causes intrinsic to the objects under study. Usually, understanding selection effects is an important part of decoding the shape of the distribution in terms of physical effects. In this case, the blue and red edges exhibit a similar slope in the log $`g`$–log $`T_{\mathrm{eff}}`$ plane, but we must still separate out selection effects from the intrinsic shape of one or both of them. The more rapid the evolution through a particular region of the H-R diagram, the less likely it is that stars will be found there. Also, the relative sample volume is larger for stars with higher luminosity, since they are detectable at greater distances. Some combination of these effects will determine the odds of finding stars of a particular mass at a particular point in their evolution. One of the most common ways to explore these combined effects is to construct a luminosity function, which is simply a plot of the expected number density of stars per unit luminosity, based on how bright they are and how fast they are evolving at different times.
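As a concrete illustration of this bookkeeping, the sketch below assembles a schematic luminosity function for a single hypothetical track; the power-law $`L(t)`$ and the $`L^{3/2}`$ volume weight for a flux-limited sample are placeholder assumptions for illustration only, not outputs of the ISUEVO models of § 4.

```python
# Schematic luminosity function for one cooling/contraction track.
# The track L(t) is a hypothetical power law; the point is the bookkeeping:
# dN/dL ~ time spent per luminosity bin, optionally weighted by the survey
# volume (~ L^1.5 for a flux-limited sample).
import numpy as np

t = np.linspace(1.0, 100.0, 10_000)   # evolutionary time, arbitrary units
L = 1000.0 * t ** -1.5                # luminosity fades as the star evolves

bins = np.logspace(np.log10(L.min()), np.log10(L.max()), 21)
counts, edges = np.histogram(L, bins=bins)      # ~ time spent per luminosity bin
centers = np.sqrt(edges[:-1] * edges[1:])
weighted = counts * centers ** 1.5              # flux-limited sample-volume weight

# Slower evolution at low L piles up "stars" in the faint bins:
assert counts[0] > counts[-1]
```

The unweighted histogram rises steeply toward low luminosity, while the volume weight pulls the other way, which is exactly the competition behind the expected excess of low-mass, low-$`T_{\mathrm{eff}}`$ pulsators discussed next.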
Figure 4 shows schematic luminosity functions for PWD stars of two different masses, based on the models described in the previous section and normalized according to the white dwarf mass distribution (see for instance Wood 1992). One important result of this figure is that higher mass models always achieve a given number density per unit luminosity later (at lower $`L`$ and $`T_{\mathrm{eff}}`$) than lower mass models. Also, the number density per unit luminosity increases for all models as they evolve to lower $`T_{\mathrm{eff}}`$, due to the increasing amount of time spent in a luminosity bin. These effects together imply that, if PWDs of all mass pulsate all the way down to 80,000 K, then the distribution of known pulsators should be skewed heavily toward those with both low $`T_{\mathrm{eff}}`$ and low mass. We don’t see such stars; thus the red edge must exclude the low mass, low $`T_{\mathrm{eff}}`$ stars from the distribution. While the observed red edge actually marks the disappearance of pulsations from PWD stars, selection effects change our chances of finding high-mass, high-$`T_{\mathrm{eff}}`$ pulsators. In this case, we expect that the likelihood of finding stars of a given mass within the instability strip will increase the closer those stars get to the red edge. This would tend to render the theoretical blue edge (as defined by the onset of pulsation in models of a given mass) undetectable in real PWD stars—given the small number of known variables. In other words, stars “bunch up” against the red edge due to their continually slowing rate of evolution, causing the apparent blue edge to shadow the slope of the red edge in the log $`g`$–log $`T_{\mathrm{eff}}`$ plane. This could explain the approximately linear locus of pulsating stars found within that plane implied by their tight distribution of $`\mathrm{\Delta }\mathrm{\Pi }`$.
We are left to explain the observed red edge in terms of the intrinsic properties of the stars themselves, which we defer to § 5.3. First, however, we will discuss the effects of the observed mass distribution along the strip on the period trend seen in Figure 3. ### 5.2 The $`\mathrm{\Pi }`$ versus $`T_{\mathrm{eff}}`$ Trend As mentioned previously (and as we show in § 5.3, below), the depth of an ionization-based driving zone increases—moves to larger thermal timescales—with decreasing $`T_{\mathrm{eff}}`$ for PWD stars of a given mass. This implies a period trend opposite to that observed in Figure 3. Bradley & Dziembowski (1996) discuss this very problem, since their models predict that the maximum period of unstable modes should increase as $`T_{\mathrm{eff}}`$ decreases. They suggest that the composition of the driving zone might somehow change with time, or that the stellar radii shrink much more quickly than is currently thought (or some combination of the two), in such a way as to make the maximum unstable period decrease with decreasing $`T_{\mathrm{eff}}`$. However, no one has yet calculated how (and whether) these suggestions could reasonably account for the observed trend. It is clear, however, that something other than the depth of the driving zone alone determines the observed period range. What other mechanisms might affect the observed periods? One such mechanism is the changing value of the maximum sustainable $`g`$-mode period, $`\mathrm{\Pi }_{\mathrm{max}}`$. For the ZZ Ceti stars, $`\mathrm{\Pi }_{\mathrm{max}}`$ probably does not influence the period distribution much, since from Equations (5) and (6) it increases through the ZZ Ceti instability strip. In PWDs, however, the $`R`$ dependence in Equation (5) must be taken into account; we cannot be certain that the trend implied by a lengthening driving timescale won’t find itself at odds with a decreasing $`\mathrm{\Pi }_{\mathrm{max}}`$. 
Figure 5 shows the (arbitrarily normalized) run of $`\mathrm{\Pi }_{\mathrm{max}}`$ versus $`T_{\mathrm{eff}}`$ for three PWD model sequences of different mass. Clearly, $`\mathrm{\Pi }_{\mathrm{max}}`$ decreases significantly as models evolve along all three sequences, with high-mass stars exhibiting a smaller $`\mathrm{\Pi }_{\mathrm{max}}`$ than low-mass stars at all $`T_{\mathrm{eff}}`$.<sup>5</sup><sup>5</sup>5If this trend continued all the way to the cooler white dwarf instability strips, then ZZ Ceti stars could only pulsate at very short periods, but $`T_{\mathrm{eff}}`$ begins to dominate below around 60,000 to 70,000 K, pushing $`\mathrm{\Pi }_{\mathrm{max}}`$ back to longer and longer periods once the stars approach their minimum radius at the top of the white dwarf cooling track. These two effects, when combined with the PWD mass distribution, imply that $`\mathrm{\Pi }_{\mathrm{max}}`$ should plummet with increasing log $`g`$ in the models shown in Figure 2. For example, the ratio of $`\mathrm{\Pi }_{\mathrm{max}}`$ for PG 1159 to that of PG 0122 is expected to be $`1.42`$:1, while the ratio of their observed dominant periods is 540:400 = 1.35:1. The period distribution seen in Figure 3 is thus consistent with the idea that the value of $`\mathrm{\Pi }_{\mathrm{max}}`$ determines the dominant period in GW Vir stars. As we will see in the next section, $`\mathrm{\Pi }_{\mathrm{max}}`$ probably also plays an important part in determining the more fundamental question of when a given star is likely to pulsate. ### 5.3 Driving Zone Depth, $`\mathrm{\Pi }_{\mathrm{max}}`$, and the Red Edge In § 2.2, we discussed the relationship between the depth of the driving zone and the period of $`g`$-mode oscillations. Equation (4) implies that the dominant period should increase in response to the deepening driving zone, as long as other amplitude limiting effects do not intervene.
Figure 5 shows how one particular effect—the decreasing maximum period—might reverse the trend connected to driving zone depth, and the observed periods of GW Vir stars support the suggestion that $`\mathrm{\Pi }_{\mathrm{max}}`$ is the key factor in setting the dominant period. While $`\mathrm{\Pi }_{\mathrm{max}}`$ limits the range of periods that can respond to driving, $`\tau _{\mathrm{th}}`$ in the driving zone limits the periods that can be driven. However, $`\tau _{\mathrm{th}}`$ increases steadily for all GW Vir stars, and $`\mathrm{\Pi }_{\mathrm{max}}`$ decreases steadily. Eventually, therefore, every pulsator will reach a state where $`\tau _{\mathrm{th}}>\mathrm{\Pi }_{\mathrm{max}}`$ over the entire extent of the driving zone. In such a state, the star can no longer respond to driving at all, and pulsation will cease. If $`\mathrm{\Pi }_{\mathrm{max}}`$ remains the most important amplitude limiting factor for stars approaching the red edge, then the red edge itself could be caused by the situation just described. We can test this idea by asking if it leads to the kind of mass-dependent red edge we see. To answer this question, we need to know how the depth of the driving zone (as measured by $`\tau _{\mathrm{th}}`$) changes with respect to $`\mathrm{\Pi }_{\mathrm{max}}`$ for stars of various mass. Figure 6 depicts the driving regions for three different model sequences ($`M=0.57,0.60`$ and $`0.63M_{\odot }`$) at three different effective temperatures (144,000 K, 100,000 K, and 78,000 K). The driving strength, $`dk/dr`$, is determined from Equation (3), where $`k`$ represents the expression inside parentheses. The vertical axis has not been normalized and is on the same scale in all three panels. The surface of each model is on the left, at $`\tau _{\mathrm{th}}=0`$. The vertical lines in the figure represent $`\mathrm{\Pi }_{\mathrm{max}}`$, for each model, normalized to 1000 s in the $`0.57M_{\odot }`$ model at 144,000 K.
We have made no attempt to calculate the actual value of $`\mathrm{\Pi }_{\mathrm{max}}`$ (however, see the Appendix); the important thing is its changing relationship to the depth of the driving zone with changing mass and $`T_{\mathrm{eff}}`$. A couple of important trends are clear in the figure. First, the driving zone in models of a given mass sinks to longer $`\tau _{\mathrm{th}}`$, and gets larger and stronger, with decreasing $`T_{\mathrm{eff}}`$. In the absence of other effects, this trend would lead to ever increasing periods of larger and larger amplitude as $`T_{\mathrm{eff}}`$ decreases. Meanwhile, $`\mathrm{\Pi }_{\mathrm{max}}`$ changes more moderately, moving toward slightly shorter timescales with decreasing $`T_{\mathrm{eff}}`$ and increasing mass. If we make the somewhat crude assumption that the effective driving zone consists only of those parts of the full driving zone with $`\tau _{\mathrm{th}}<\mathrm{\Pi }_{\mathrm{max}}`$, then a picture of the red edge emerges. At $`T_{\mathrm{eff}}`$ = 144,000 K, the driving zone is relatively unaffected in all three model sequences. As $`T_{\mathrm{eff}}`$ decreases, and the driving zone sinks to longer $`\tau _{\mathrm{th}}`$, the $`\mathrm{\Pi }_{\mathrm{max}}`$-imposed limit encroaches on the driving zone more and more for every mass. Thus the longer periods, while driven, are eliminated, moving the locus of power to shorter periods than would be seen if $`\mathrm{\Pi }_{\mathrm{max}}`$ were not a factor. Eventually, all the driven periods are longer than the maximum period at which a star can respond, and pulsation ceases altogether. This is the red edge. Pulsations will not shut down at the same $`T_{\mathrm{eff}}`$ in stars of every mass. High mass models retain more of their effective driving zones than low mass models at a given $`T_{\mathrm{eff}}`$.
This occurs because at a given $`T_{\mathrm{eff}}`$, the top of the driving zone moves toward the surface (to smaller $`\tau _{\mathrm{th}}`$) with increasing mass. This effect of decreasing $`\tau _{\mathrm{th}}`$ at the top of the driving zone outstrips the trend to lower $`\mathrm{\Pi }_{\mathrm{max}}`$ with increasing mass. The result is that, at 78,000 K, the driving zone in the $`0.57M_{\odot }`$ model (upper panel of Figure 6) has moved to timescales entirely above the $`\mathrm{\Pi }_{\mathrm{max}}`$ limit, while the $`0.63M_{\odot }`$ model (appropriate for PG 0122: see the bottom panel in Figure 6) still produces significant driving at thermal timescales below $`\mathrm{\Pi }_{\mathrm{max}}`$. Recall from Table 1 and Figure 2 that GW Vir stars of lower $`T_{\mathrm{eff}}`$ have higher mass. Why? We can now give an answer: at low $`T_{\mathrm{eff}}`$, the low mass stars have stopped pulsating because they only drive periods longer than those at which they can respond to pulsation! Higher mass stars have shallower ionization zones (at a given $`T_{\mathrm{eff}}`$) that still drive periods shorter than the maximum allowed $`g`$-mode period, even at low $`T_{\mathrm{eff}}`$. This causes the red edge to move to higher mass with decreasing luminosity and $`T_{\mathrm{eff}}`$, the same trend followed by lines of constant period spacing. The interplay between driving zone depth and $`\mathrm{\Pi }_{\mathrm{max}}`$ enforces the strikingly small range of period spacings ($`\mathrm{\Delta }\mathrm{\Pi }\approx 21.5`$s) seen in Table 1 and Figure 2. These calculations are not an attempt to predict the exact location of the red edge at a given mass. We are only interested at this point in demonstrating how the position of the red edge is expected to vary with mass, given an ionization-based driving zone with an upper limit placed on it by $`\mathrm{\Pi }_{\mathrm{max}}`$.
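The logic of the preceding paragraphs can be distilled into a toy criterion: a model pulsates only while part of its driving zone satisfies $`\tau _{\mathrm{th}}<\mathrm{\Pi }_{\mathrm{max}}`$. In the sketch below, every functional form and constant is an invented placeholder chosen to mimic the qualitative trends just described (a driving zone that sinks steeply as $`T_{\mathrm{eff}}`$ falls and grows shallower with mass faster than $`\mathrm{\Pi }_{\mathrm{max}}`$ shrinks); nothing here is fit to the models of Figure 6.

```python
# Toy red-edge criterion; all functional forms and constants are hypothetical,
# chosen only to reproduce the qualitative trends described in the text.
def tau_top(mass, teff):
    """Thermal timescale (s) at the top of the driving zone: sinks (grows)
    steeply as Teff falls, and is shallower (smaller) at higher mass."""
    return 300.0 * (1.44e5 / teff) ** 3 * (0.57 / mass) ** 12

def pi_max(mass, teff):
    """Maximum sustainable g-mode period (s): shrinks mildly with falling
    Teff and with rising mass."""
    return 1000.0 * (teff / 1.44e5) ** 0.3 * (0.57 / mass)

def pulsates(mass, teff):
    # Pulsation requires part of the driving zone above the Pi_max cutoff.
    return tau_top(mass, teff) < pi_max(mass, teff)

# Near the hot end both masses pulsate; by 78,000 K only the heavier star does.
assert pulsates(0.57, 1.44e5) and pulsates(0.63, 1.44e5)
assert not pulsates(0.57, 7.8e4)
assert pulsates(0.63, 7.8e4)
```

The design choice is deliberate: the mass-dependent red edge emerges whenever the driving zone's shallowing with mass outpaces the decline of $`\mathrm{\Pi }_{\mathrm{max}}`$, whatever the exact functional forms.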
In the particular models shown in Figure 6, the top of the driving zone corresponds to the ionization temperature of OVI, but the behavior of the red edge should be the same no matter what species causes the driving. Its absolute location, though, would probably be different given driving by different species. In order to use the location of the red edge for stars of different mass to identify the exact species that accomplishes driving, we would need to calculate $`\mathrm{\Pi }_{\mathrm{max}}`$ precisely for all the models in Figure 6 and better understand exactly how the value of $`\mathrm{\Pi }_{\mathrm{max}}`$ affects the amplitudes of modes nearby in period. Such a calculation is beyond the scope of this paper, and we leave it to future investigations to attempt one. Alternatively, the discovery of more GW Vir stars would help us better understand these processes by defining the red edge with greater observational precision. The simplest test of our theories would be to find low-$`T_{\mathrm{eff}}`$ GW Vir pulsators of low mass. If they exist, then $`\mathrm{\Pi }_{\mathrm{max}}`$ is probably not a factor in determining their periods, and variation should be sought in their lightcurves at longer periods than in the GW Vir stars found so far; Figure 6 suggests their dominant periods should be in the thousands of seconds. It is possible that these stars could be quite numerous (and should be quite numerous if they exist at all, given their slower rate of evolution) and still escape detection, since standard time-series aperture photometry is not generally effective at these timescales. Most current CCD photometers are quite capable of searching for such variability, however. Based on these results, we encourage future studies to determine whether or not low-mass, low-$`T_{\mathrm{eff}}`$ GW Vir pulsators do indeed exist. 
## 6 Summary and Conclusions Our purpose is to understand how and why PWD stars pulsate, so that astronomers can confidently apply knowledge of PWD structure—gained via asteroseismology—to study how white dwarfs form and evolve and better understand the physics that governs these processes. We have pursued PWD structure via the pulsation periods and the additional constraints they provide for our models. A surprising similarity emerged among them: their patterns of average period spacing span a very small range from 21–23 s. Including the PNNV star NGC 1501, this uniformity extends over three orders of magnitude in PWD luminosity. Since the average period spacing increases with decreasing luminosity—and decreases with increasing mass—this result implies a trend toward significantly higher mass with decreasing luminosity through the instability strip. This trend has several important implications for our understanding of PWD pulsations. The instability strip is severely sloped toward lower $`T_{\mathrm{eff}}`$, with stellar mass increasing down the strip. To understand this sloped instability strip, we needed information inherent in the other fundamental observed trend: the dominant period in PWD pulsators decreases with decreasing luminosity. We found that the observed dominant period is correlated with the theoretical maximum $`g`$-mode period, $`\mathrm{\Pi }_{\mathrm{max}}`$. If $`\mathrm{\Pi }_{\mathrm{max}}`$ is a factor in determining the range of periods observed in pulsating PWD stars, then it should play a role in determining when pulsation ceases, because the driving zone tends to drive longer and longer periods in models of given mass as the models evolve to lower $`T_{\mathrm{eff}}`$. At low enough temperatures, the driving zone is only capable of driving periods longer than $`\mathrm{\Pi }_{\mathrm{max}}`$, and pulsations should then cease.
Since the top of the driving zone moves to shorter timescales with increasing mass, and does so faster than $`\mathrm{\Pi }_{\mathrm{max}}`$ decreases with increasing mass, higher mass models should pulsate at lower $`T_{\mathrm{eff}}`$ than lower mass models. This behavior is compatible with the observed slope of the red edge in the log $`g`$–log $`T_{\mathrm{eff}}`$ plane. This mechanism does not account for the lack of observed high-mass pulsators at high $`T_{\mathrm{eff}}`$, however. Theoretical luminosity functions for the PWD stars indicate the observed blue edge is probably significantly affected by selection effects due to rapid evolution at high $`T_{\mathrm{eff}}`$. Since higher mass models are both less numerous and less luminous at a given $`T_{\mathrm{eff}}`$ than models of lower mass, we are more likely to detect low-mass than high-mass PWDs at a given temperature. This will cause stars to “bunch up” against the red edge in the observed instability strip, and the apparent blue edge will thus “artificially” resemble the shape of the red edge—no matter what the true shape might be. This selection effect, in isolation, would imply that low-mass, low $`T_{\mathrm{eff}}`$ GW Vir pulsators should be most numerous of all. That we in fact find none strengthens our contention that the mass-dependence of the red edge is a real effect—we find no low-mass, low-$`T_{\mathrm{eff}}`$ GW Vir stars because they don’t exist. The boundaries of the PWD instability strip derived from our pulsation studies are far more restrictive than those based on spectroscopic measurements alone. Though the actual range of log $`g`$ and $`T_{\mathrm{eff}}`$ spanned by the known pulsators is no smaller than before, the newly discovered mass-luminosity relationship implies that the width of the strip in $`T_{\mathrm{eff}}`$ at a given log $`g`$ is quite small, and vice versa.
We can therefore no longer say for certain that any non-pulsators occupy this newly diminished instability strip, since the uncertainties in log $`g`$ and $`T_{\mathrm{eff}}`$ as determined from spectroscopy are larger now than the observed width of the strip itself at a given log $`g`$ or $`T_{\mathrm{eff}}`$. This highlights the importance of finding additional pulsators with which to further refine our knowledge of the instability strip boundaries. In particular, we still have no observations with which to constrain theories of the blue edge, since there is no reason that the intrinsic blue edge lies near the observed blue edge at any effective temperature. If the trend we have discovered continues down to effective temperatures below the coolest known pulsating PWDs, then high-mass ($`1M_{\odot }`$ or greater) white dwarfs might pulsate at temperatures as low as 50,000–60,000 K (and log $`g\approx 8`$). Their dominant periods (again, assuming they follow the trends outlined in § 3 and § 5) would be shorter than those of any known PWDs, perhaps as low as 200-300 s. Such stars would not be PWDs at all but rather white dwarfs proper. Their discovery would complete a “chain” of variable stars from PNN stars to “naked” PWDs to hot white dwarfs, and as such they would represent an incalculable boon to astronomers who study the late stages of stellar evolution. On the theory side, we need to understand how PWD stars react to driving near the maximum $`g`$-mode period. Studies should be undertaken to determine the actual maximum period in PWD models. PWD evolution sequences should be constructed that contain all the elements thought to exist in PWD stars. We have observational information to test all these calculations, and with the observational program proposed above, we will gain more. There is much to do. The author expresses deepest gratitude to Steve Kawaler and Chris Clemens, who together taught him the crafts of astronomy: theory, observation, and communication.
Their sage counsel and unblinking criticism greatly enhanced the quality of this work. I am also indebted to Paul Bradley, whose thoughtful insights substantially improved both the content and presentation of this paper. ## Appendix A The Maximum Period A variable pre-white dwarf star is a resonant cavity for non-radial $`g`$-mode oscillations. This spherically symmetric cavity is bounded by the stellar center (or the outer edge of the degenerate core in ZZ Ceti and DBV white dwarfs) and surface. At sufficiently long periods, however, the surface layers no longer reflect internal waves. The pulsation energy then leaks out through the surface, damping the pulsation. This idea was first applied to white dwarf pulsations by Hansen, Winget & Kawaler (1985), who attempted to calculate the approximate critical frequencies to explain both the red edge and maximum observed periods in ZZ Ceti stars. Assuming an Eddington gray atmosphere, they derive the following expression for the dimensionless critical $`g`$-mode frequency: $$\omega _g^2\approx \frac{\ell (\ell +1)}{V_g}$$ (A1) where $$V_g=\frac{3g\mu R}{5N_akT_{\mathrm{eff}}}.$$ (A2) Here $`g`$ and $`R`$ represent the photospheric surface gravity and radius, and $`N_a`$, $`k`$, and $`\mu `$ have their usual meanings.
The dimensionless frequency, $`\omega `$, is related to the pulsation frequency, $`\sigma `$, according to $$\omega ^2=\frac{\sigma ^2R}{g}.$$ (A3) The pulsation period is $`\mathrm{\Pi }=2\pi /\sigma `$, so combining Equations (A1) through (A3) we arrive at the maximum $`g`$-mode period: $$\mathrm{\Pi }_{\mathrm{max}}\approx 940\mathrm{s}\left(\frac{\mu }{\ell (\ell +1)}\right)^{0.5}\left(\frac{R}{0.02R_{\odot }}\right)\left(\frac{T_{\mathrm{eff}}}{10^5\mathrm{K}}\right)^{-0.5},$$ (A4) or, using the relation $`R^2=L/4\pi \sigma T_{\mathrm{eff}}^4`$, $$\mathrm{\Pi }_{\mathrm{max}}\approx 940\mathrm{s}\left(\frac{\mu }{\ell (\ell +1)}\right)^{0.5}\left(\frac{L}{35L_{\odot }}\right)^{0.5}\left(\frac{T_{\mathrm{eff}}}{10^5\mathrm{K}}\right)^{-2.5}.$$ (A5) For PG 1159, with $`L=200L_{\odot }`$ and $`T_{\mathrm{eff}}=140,000`$ K, Equation (A5) predicts $`\mathrm{\Pi }_{\mathrm{max}}=850`$ s for $`\ell =1`$ modes and 492 s for $`\ell =2`$ modes.<sup>6</sup><sup>6</sup>6Assuming a mix of C:O:He = 0.4:0.3:0.3 by mass, implying $`\mu =1.59`$. PG 0122, with $`L=5.6L_{\odot }`$ and $`T_{\mathrm{eff}}=80,000`$ K, would have $`\mathrm{\Pi }_{\mathrm{max}}=580`$ s for $`\ell =1`$ and 340 s for $`\ell =2`$. Because of the simplicity of the gray atmosphere assumption, these numbers are more useful in comparison to each other than as quantitative diagnostics of the maximum period. For instance, Hansen, Winget & Kawaler (1985) find that the values of $`\mathrm{\Pi }_{\mathrm{max}}`$ derived from this analysis are “within a factor of 2” of those based on more rigorous calculations. We are more interested here in the run of $`\mathrm{\Pi }_{\mathrm{max}}`$ with respect to global stellar quantities such as $`L`$ and $`T_{\mathrm{eff}}`$. If $`\mathrm{\Pi }_{\mathrm{max}}`$ determines the long-period cutoff in GW Vir pulsators, then the longest period $`\ell =1`$ modes should be (approximately) a factor of 1.73 longer than the longest period $`\ell =2`$ modes.
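A quick numerical check, evaluating the luminosity form of the maximum period with the footnote's $`\mu =1.59`$, reproduces the quoted values to within a couple of percent (the small offsets presumably reflect rounding in the 940 s prefactor), and shows that the $`\ell =1`$ to $`\ell =2`$ ratio is exactly $`\sqrt{3}\approx 1.73`$, independent of $`\mu `$, $`L`$, and $`T_{\mathrm{eff}}`$.

```python
# Evaluate Eq. (A5) for PG 1159 and PG 0122, assuming mu = 1.59 (footnote 6).
def pi_max(ell, l_sun, teff_k, mu=1.59):
    return (940.0 * (mu / (ell * (ell + 1))) ** 0.5
            * (l_sun / 35.0) ** 0.5 * (teff_k / 1.0e5) ** -2.5)

pg1159_l1 = pi_max(1, 200.0, 1.4e5)   # ~864 s (quoted: 850 s)
pg1159_l2 = pi_max(2, 200.0, 1.4e5)   # ~499 s (quoted: 492 s)
pg0122_l1 = pi_max(1, 5.6, 8.0e4)     # ~586 s (quoted: 580 s)
pg0122_l2 = pi_max(2, 5.6, 8.0e4)     # ~338 s (quoted: 340 s)

# The l=1 to l=2 ratio is exactly sqrt(3), since mu, L, and Teff cancel:
assert abs(pg1159_l1 / pg1159_l2 - 3 ** 0.5) < 1e-9
```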
PG 1159 is the only GW Vir star with positively identified $`\ell =2`$ modes. In the period list of Winget et al. (1991), the longest period (positively identified) $`\ell =2`$ mode has a period of 425 s, while the longest period $`\ell =1`$ mode has a period of 840 s. The ratio of these two periods is 1.98, close to the predicted ratio. The periods themselves are also surprisingly close to the calculated values. The predicted ratios between the longest period modes also hold for intercomparison of different stars. The longest period mode so far identified for PG 0122 is 611 s, shorter than the longest period in PG 1159 by a factor of 1.37, again very close to the predicted ratio of 1.43 from Equations (A4) and (A5). The agreement between $`\mathrm{\Pi }_{\mathrm{max}}`$ from our rough calculations and the observed maximum periods in GW Vir stars is impressive enough to warrant further study. In particular, more rigorous theoretical calculations of $`\mathrm{\Pi }_{\mathrm{max}}`$ should be undertaken to corroborate or refute these results.
no-problem/9910/hep-th9910045.html
ar5iv
text
# Structure of Spinning Particle Suggested by Gravity, Supergravity and Low Energy String Theory ## 1 Introduction The Kerr solution is well known as the field of a rotating black hole. However, for the case of a large angular momentum $`L`$, with $`a=L/m\gg m`$, all the horizons of the Kerr metric are absent, and a naked ring-like singularity appears. This naked singularity has many unpleasant manifestations and must be hidden inside a rotating disk-like source. The Kerr solution with $`a\gg m`$ displays some remarkable features indicating a relation to the structure of spinning elementary particles. In 1969 Carter observed that if the three parameters of the Kerr-Newman solution are adopted to be ($`\mathrm{}=c=1`$) $`e^2\approx 1/137,m\approx 10^{-22},a\approx 10^{22},L=ma=1/2,`$ then one obtains a model for the four parameters of the electron: charge, mass, spin and magnetic moment, and the gyromagnetic ratio is automatically the same as that of the Dirac electron. Israel introduced a disk-like source for the Kerr field, and it was shown by Hamity that this source represents a rigid relativistic rotator. A model of a “microgeon” with the Kerr metric was suggested, together with an analogy between this model and string models . Then a model of the Kerr-Newman source in the form of an oblate spheroid was suggested . It was shown that the material of the source must have very exotic properties: null energy density and negative pressure. An attempt to explain these properties on the basis of the volume Casimir effect was given in . The electromagnetic properties of the material are close to those of a superconductor , which allows one to consider the singular ring of the Kerr source as a closed vortex string like the Nielsen-Olesen and Witten superconducting strings. Since 1992 black holes have attracted the attention of string theory. In 1992 the Kerr solution was generalized by Sen to low energy string theory . 
It was shown that black holes can be considered as fundamental string states, and the point of view has appeared that some black holes can be treated as elementary particles . The recently obtained super-Kerr-Newman solution represents a natural combination of the Kerr spinning particle and superparticle models and predicts the existence of an extra axial singularity and fermionic traveling waves on the Kerr-Newman background. ## 2 Kerr singular ring The Kerr string-like singularity appears in rotating BH solutions in place of the point-like singularity of the non-rotating BH. A simple solution possessing the Kerr singular ring was obtained by Appel in 1887 (!) . It can be considered as a Newtonian or Coulomb analogue of the Kerr solution. When the point-like source of the Coulomb solution $`f=1/\stackrel{~}{r}=1/\sqrt{(x-x_o)^2+(y-y_o)^2+(z-z_o)^2}`$ is shifted to a complex point of space, $`(x_o,y_o,z_o)\to (0,0,ia)`$, the Kerr singular ring arises on the real slice of space-time. The complex equation of the singularity, $`\stackrel{~}{r}=0`$, represents a ring as an intersection of a plane and a sphere. The complex radial distance $`\stackrel{~}{r}`$ can be expressed in the oblate spheroidal coordinates $`r`$ and $`\theta `$: $`\stackrel{~}{r}=r+ia\mathrm{cos}\theta `$. The Kerr singular ring is a branch line of the space into two sheets: a “positive” one covered by $`r\ge 0`$, and a “negative” one, an anti-world, covered by $`r\le 0`$. The sheets are connected by the disk $`r=0`$ spanned by the singular ring. The physical fields change signs and directions on the “negative” sheet. Truncation of the negative sheet allows one to avoid the two-sheetedness. In this case the fields acquire a shock crossing the disk, and some material sources have to be spread over the disk surface to satisfy the field equations. 
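The claim that the complex shift turns the point source into a ring can be checked directly: the complexified radial distance vanishes exactly on the circle x² + y² = a², z = 0, and nowhere else on the real slice. A minimal numerical sketch (the sign chosen for the shift does not affect the location of the ring):

```python
import cmath

def r_tilde(x, y, z, a):
    """Complexified radial distance for a point source shifted to (0, 0, i*a)."""
    return cmath.sqrt(x ** 2 + y ** 2 + (z - 1j * a) ** 2)

a = 1.0
# On the ring x^2 + y^2 = a^2, z = 0: a^2 + (-i a)^2 = a^2 - a^2 = 0,
# so the 'potential' f = 1/r_tilde is singular there:
print(abs(r_tilde(a, 0.0, 0.0, a)))
# Off the ring, f stays finite:
print(abs(1.0 / r_tilde(2.0, 0.0, 0.0, a)))
```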
The structure of the electromagnetic field near the disk then suggests that the “negative” sheet of space can be considered as a mirror image of the real world in a rotating superconducting mirror. The source of the Kerr-Newman solution, like the Appel solution, can be considered from the complex point of view as a “particle” propagating along a complex world-line $`x^i(\tau )`$ parametrized by complex time $`\tau `$. Objects described by complex world-lines occupy an intermediate position between particle and string. Like strings, they form two-dimensional surfaces, or world-sheets, in space-time. It was shown that the complex Kerr source may be considered as a complex hyperbolic string, which requires an orbifold-like structure of the world-sheet. It induces a related orbifold-like structure of the Kerr geometry, which is closely connected with the above-mentioned two-sheetedness. ## 3 Kerr congruence and disk-like source A second remarkable peculiarity of the Kerr solution is the twisting principal null congruence (PNC), which can be considered as a vortex of null radiation. This vortex propagates via the disk from the negative sheet of space onto the positive one, forming a caustic at the singular ring. The PNC plays a fundamental role in the structure of the Kerr geometry. The Kerr metric can be represented in the Kerr-Schild form $`g_{ik}=\eta _{ik}+2hk_ik_k,`$ where $`\eta `$ is the metric of an auxiliary Minkowski space and $`h`$ is a scalar function. The vector field $`k_i(x)`$ is null, $`k_ik^i=0,`$ and tangent to the PNC. The Kerr PNC is geodesic and shear-free . Congruences with such properties are described by the Kerr theorem via a complex function $`Y(x)`$ representing a projective spinor coordinate $`Y(x)=\overline{\mathrm{\Psi }}^{\dot{2}}/\overline{\mathrm{\Psi }}^{\dot{1}}`$. The null vector field $`k_i(x)`$ can be expressed in spinor form, $`k\propto \overline{\mathrm{\Psi }}\sigma _idx^i\mathrm{\Psi }`$. 
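A convenient algebraic property of the Kerr-Schild form is that, precisely because k is null, the inverse metric is simply η^ik − 2h k^i k^k: the cross terms cancel and the quartic term vanishes with k_i k^i = 0. The sketch below checks this numerically for an arbitrary null vector and an arbitrary value of h (a generic illustration, not the actual Kerr function h):

```python
# Kerr-Schild check: g_ik = eta_ik + 2 h k_i k_k with NULL k has inverse
# eta^ik - 2 h k^i k^k.  Signature (-,+,+,+); for this diagonal eta the
# numerical arrays for eta_ik and eta^ik coincide, so one array serves both.

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(4)) for j in range(4)]
            for i in range(4)]

eta = [[-1.0, 0.0, 0.0, 0.0],
       [0.0, 1.0, 0.0, 0.0],
       [0.0, 0.0, 1.0, 0.0],
       [0.0, 0.0, 0.0, 1.0]]
k_up = [1.0, 0.6, 0.8, 0.0]  # k^i, null: -1 + 0.36 + 0.64 = 0
k_dn = [sum(eta[i][j] * k_up[j] for j in range(4)) for i in range(4)]  # k_i
h = 0.37                     # arbitrary scalar, stands in for the Kerr h

g = [[eta[i][j] + 2.0 * h * k_dn[i] * k_dn[j] for j in range(4)] for i in range(4)]
g_inv = [[eta[i][j] - 2.0 * h * k_up[i] * k_up[j] for j in range(4)] for i in range(4)]

prod = matmul(g, g_inv)
max_dev = max(abs(prod[i][j] - (1.0 if i == j else 0.0))
              for i in range(4) for j in range(4))
print(max_dev)  # zero up to round-off, because k_i k^i = 0
```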
The above complex representation of the source allows one to obtain the Kerr congruence by a retarded-time construction . The complex light cone with vertex at some point $`x_0`$ of the complex world line, $`(x_i-x_{0i})(x^i-x_0^i)=0`$, can be split into two families of null planes: “left” and “right”. In spinor form this splitting can be described as $$x_i=x_{0i}+\mathrm{\Psi }\sigma _i\stackrel{~}{\mathrm{\Psi }},$$ (1) where the “right” (or “left”) null planes are obtained by keeping $`\mathrm{\Psi }`$ constant and varying $`\stackrel{~}{\mathrm{\Psi }}`$, or by keeping $`\stackrel{~}{\mathrm{\Psi }}`$ constant and varying $`\mathrm{\Psi }`$. The rays of the twisting Kerr congruence arise as the real slice of the “left” null planes of the complex light cones emanating from the complex world line . Replacement of the negative sheet by a disk-like source at the surface $`r=0`$ allows one to avoid the two-sheetedness of the Kerr space. However, there is still a small region of causality violation on the positive sheet of space. Following López's suggestion, this region also has to be covered by the source . The minimal value of $`r`$ covering this region is the ‘classical radius’ $`r_e=\frac{e^2}{2m}`$. The resulting disk-like source has a thickness of order $`r_e`$, and its degree of oblateness is $`\alpha ^{-1}\approx 137`$. ## 4 Stringy suggestions In 1974, in the framework of Einstein gravity, the model of a microgeon with the Kerr-Newman metric was considered , where the singular ring was used as a waveguide for wave excitations. It was soon recognized that the singular ring represents in fact a string with traveling waves. Further, in dilaton gravity, string solutions with traveling waves have attracted considerable attention. Sen's generalization of the Kerr solution to low energy string theory with axion and dilaton was analyzed in . 
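The quoted oblateness follows from one line of arithmetic: the ring radius a = L/m = 1/(2m) divided by the thickness r_e = e²/(2m) gives 1/e² = α⁻¹ ≈ 137, independently of the mass. A sketch with Carter's order-of-magnitude electron parameters:

```python
# Aspect ratio of the disk-like source: ring radius a = L/m = 1/(2m) versus
# thickness r_e = e^2/(2m); the ratio is 1/e^2 = 1/alpha ~ 137 for any m.
alpha = 1.0 / 137.0  # e^2 in units hbar = c = 1 (fine-structure constant)
m = 1.0e-22          # electron mass in Planck units (Carter's order of magnitude)

a = 0.5 / m              # radius fixed by the spin condition L = m a = 1/2
r_e = alpha / (2.0 * m)  # 'classical radius' e^2 / (2 m)

print(a / r_e)  # 1/alpha = 137, independent of m
```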
It was shown that, in spite of the strong deformation of the metric by the dilaton (leading to a change of the metric from type D to type I), the Kerr PNC survives in the Kerr-Sen solution and retains the property of being geodesic and shear-free. This means that the Kerr theorem and the above complex representation are valid for the Kerr-Sen solution too. It was also found that the field of the Kerr-Sen solution near the Kerr singular ring is similar to the field around a fundamental heterotic string, which suggested a stringy interpretation of the Kerr singular ring. ## 5 Supergeneralization A description of the spinning particle based only on bosonic fields cannot be complete. On the other hand, fermionic models of spinning particles and superparticles based on Grassmann coordinates have attracted considerable attention. A natural way to combine the Kerr spinning particle and superparticle models was suggested , leading to a non-trivial super-Kerr-Newman black hole solution. The simplest consistent supergeneralization of Einstein gravity represents a unification of the gravitational field $`g_{ik}`$ with a spin-3/2 Rarita-Schwinger field $`\psi _i`$ . There exists a problem of triviality of supergravity solutions: any exact solution of Einstein gravity is indeed a trivial solution of the supergravity field equations with a zero field $`\psi _i`$. Starting from such a solution and using supertranslations, one can easily turn the gravity solution into a form containing the spin-3/2 field $`\psi _i`$. However, since this spin-3/2 field can be gauged away by the reverse transformations, such supersolutions have to be considered trivial. A hint as to how to avoid this triviality problem was given by the complex representation of the Kerr geometry. One notes that from the complex point of view the Schwarzschild and Kerr geometries are equivalent and connected by a trivial complex shift. 
The non-trivial twisting structure of the Kerr geometry arises as a result of the real slice being shifted relative to the source . Similarly, it is possible to turn a trivial super black hole solution into a non-trivial one if one finds an analogue of the real slice in superspace. The trivial supershift can be represented as a replacement of the complex world line by a superworldline $`X_0^i(\tau )=x_0^i(\tau )-i\theta \sigma ^i\overline{\zeta }+i\zeta \sigma ^i\overline{\theta },`$ parametrized by Grassmann coordinates $`\zeta ,\overline{\zeta }`$, or as a corresponding coordinate supershift $`x^i{}^{}=x^i+i\theta \sigma ^i\overline{\zeta }-i\zeta \sigma ^i\overline{\theta };\theta ^{}=\theta +\zeta ,\overline{\theta }^{}=\overline{\theta }+\overline{\zeta }.`$ Assuming that the coordinates $`x^i`$ before the supershift are the usual c-number coordinates, one sees that the coordinates acquire nilpotent Grassmann contributions after supertranslations. Therefore, there appears a natural splitting of the space-time coordinates into a c-number ‘body’ part and a nilpotent part, the so-called ‘soul’. The ‘body’ subspace of superspace, or B-slice, is the submanifold where the nilpotent part is equal to zero, and it is a natural analogue of the real slice in the complex case. Reproducing the real-slice procedure of the Kerr geometry in superspace, one obtains the condition of proportionality of the commuting spinors $`\overline{\mathrm{\Psi }}(x)`$, determining the PNC of the Kerr geometry, and the anticommuting spinors $`\overline{\theta }`$ and $`\overline{\zeta }`$. As a consequence of the B-slice and superlightcone constraints one obtains a submanifold of superspace, $`\theta =\theta (x),\overline{\theta }=\overline{\theta }(x).`$ The initial supergauge freedom is now lost, and there appears a non-linear realization of broken supersymmetry introduced by Volkov and Akulov and considered in N=1 supergravity by Deser and Zumino . 
It is assumed that this construction is similar to the Higgs mechanism of the usual gauge theories, and that $`\zeta ^\alpha (x),\overline{\zeta }^{\dot{\alpha }}(x)`$ represent a Goldstone fermion which can be eaten by an appropriate local supertransformation with a corresponding redefinition of the tetrad and the spin-3/2 field $`\psi _i`$. However, the complex character of the supertranslations demands an extension of this scheme to N=2 supergravity. In this way self-consistent super-Kerr-Newman solutions to broken N=2 supergravity coupled to a Goldstone fermion field were obtained . The solution describes a massless Dirac wave field propagating on the Kerr-Newman background along the Kerr congruence. Besides the Kerr singular ring, the solution contains an extra axial singularity and traveling waves propagating along the ring-like and axial singularities. The ‘axial’ singularity is a half-infinite line threading the Kerr singular ring and passing to the ‘negative’ sheet of the Kerr geometry. The position and character of the axial singularity depend on the index $`n`$ of the elementary excitation. The case $`n=1/2`$ is exceptional: there are two ‘decreasing’ singularities situated symmetrically at $`\theta =0`$ and $`\theta =\pi `$. ## 6 Problem of hard core The supergeneralization obtained above is based on a massless Goldstone field. At the present stage of investigation, our knowledge of the origin of the Goldstone fermion is very incomplete. Analyzing the Wess-Zumino model of super-QED and some other schemes of spontaneously broken supersymmetry , one sees that they can lead to massless Goldstone fermions, at least in the region of massless fields outside the BH horizons. However, for the known parameters of spinning particles the angular momentum is very high relative to the mass parameter, and the black hole horizons disappear. The resulting object is “neither black nor a hole”, and the disk-like ‘hard core’ region considered above is naked. 
The structure of this region represents a very important and extremely complicated problem. Among the possible field models for the description of this region one could mention the Landau-Ginzburg model, super-QED, non-abelian gauge models, the Seiberg-Witten theory, as well as the recent ideas on confinement caused by extra dimensions in the bulk/boundary (AdS/CFT correspondence) models . Apparently, this problem is very far from resolution at present, and one of the most difficult points may be reconciling the field model with the rotating disk-like bag of the Kerr geometry. ## 7 Suggestions for experimental test The predicted comparatively big size of the disk-like bag looks like a serious contradiction to the traditional view of the structureless, point-like electron. However, the virtual photons suggested by QED, which surround the electron in a region of Compton size, the zitterbewegung, and the vacuum zero-point fluctuations spreading the position of the electron can be treated as indirect evidence for the existence of a geometrical structure in the Compton region. At least, one can assume that the region of virtual photons has a tendency to be very ordered, with the formation of the Kerr congruence and the ring-like singularity. The modern progress in the formation of polarized beams of spinning particles suggests possible methods for an experimental test of the main predicted feature of the Kerr spinning particle, its highly oblate form. In particular, it could be the method proposed in , based on an estimation of the cross-section differences between transversely and longitudinally polarized states in proton-proton collisions. We propose that a similar experiment could be more effective for electron-electron collisions. Another possible way of experimental testing could be the analysis of the diffraction of photons on polarized electrons. 
Apparently, the strong influence of vacuum fluctuations will not allow one to observe the predicted very high oblateness of the electron. Nevertheless, one expects that an essential effect should be observed if the Kerr source model reflects reality.
no-problem/9910/cond-mat9910352.html
ar5iv
text
# Evidences Against Temperature Chaos in Mean Field and Realistic Spin Glasses ## Abstract We discuss temperature chaos in mean field and realistic $`3D`$ spin glasses. Our numerical simulations show no trace of a temperature chaotic behavior for the system sizes considered. We discuss the experimental and theoretical implications of these findings. The problem of chaos in spin glasses has been under investigation for many years . Even in the Sherrington-Kirkpatrick (SK) model, which is well understood thanks to the Parisi solution of the mean field theory, the possible presence or absence of temperature chaos is still an open problem. On the contrary, for example, chaos induced by a magnetic field $`h`$ was already discussed by Parisi $`15`$ years ago , and it is a clear feature of the Replica Symmetry Breaking (RSB) scenario. We will give here numerical evidence of the fact that, for all lattice sizes we are able to investigate using state-of-the-art optimized Monte Carlo methods , there is no trace of temperature chaos in mean field (infinite range) and realistic spin glasses, in contradiction with previous claims. The question about temperature chaos can be phrased by considering a typical equilibrium configuration at temperature $`T`$, and one (under the same realization of the quenched disorder) at $`T^{}=T+dT`$, where $`dT`$ is small: how similar are these two configurations? In a chaotic scenario, for any non-zero $`dT`$ the typical overlap would be exponentially small in the system size. We study both the SK and Diluted Mean Field (DMF) models. We consider the DMF model in its version with constant connectivity $`c=6`$: each lattice site is connected to $`c`$ other sites chosen at random. It is interesting to check whether this model has the same features as the SK model. We also study the $`3D`$ Edwards-Anderson (EA) realistic spin glass. 
In all models the spin variables are Ising-like ($`\sigma =\pm 1`$), and the couplings $`J`$ can take the two values $`\pm 1`$ with probability $`\frac{1}{2}`$. Our Monte Carlo dynamics is based on Parallel Tempering (PT) : we run in parallel two sets of copies of the system, and always take the overlap of configurations from two different Markov chains. We use all standard precautions for checking the thermalization of our data . The indicator of a potential chaotic behavior will be the two-temperature overlap $`q_{T^{},T}^{(2),(N)}\equiv \overline{\left(\frac{1}{N}\sum _{i=1}^N\sigma _i^{(T)}\tau _i^{(T^{})}\right)^2}`$. The usual square overlap $`q_{T,T}^{(2),(N)}`$ is a special case of $`q_{T^{},T}^{(2),(N)}`$. Let us start with the analysis of our data for the SK model. In figure 1 we plot the square overlap for the two temperature values $`(0.4,T)`$ (i.e. the overlap of a copy of the system at temperature $`T^{}=0.4`$ with a copy of the system at $`T\in (0.4,1.35)`$), and the one at equal temperatures $`(T,T)`$. The two upper (on the left side of the plot) dashed curves (merging at $`T=0.4`$ at a value close to $`0.42`$) are for $`N=512`$ spins (a small lattice size), the upper one being the $`(0.4,T)`$ curve and the lower one the $`(T,T)`$ curve. The two lower curves (merging at $`T=0.4`$ at a value close to $`0.40`$) are for $`N=4096`$ (our largest lattice for the SK model): of these two lower curves the solid, upper curve is for $`(0.4,T)`$, while the dashed lower one is the $`(T,T)`$ $`q^{(2)}`$. The fifth curve from the top, which stops at $`T=0.7`$, is the perturbative result for $`q_{T,T}^{(2),(\mathrm{\infty })}`$ (useful for checking our numerics and the quality of the approach to the asymptotic large volume limit). Here we only plot data from two lattice sizes, and do not show the statistical errors, which are small enough not to affect any of the issues discussed here but would make the picture less readable. 
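The two-temperature estimator is just a disorder average of squared site overlaps between configurations from the two chains. A minimal sketch of the measurement itself (the 'configurations' below are random placeholders, not actual equilibrated spin-glass samples):

```python
import random

def overlap_sq(sigma, tau):
    """Squared site overlap ((1/N) sum_i sigma_i tau_i)^2 of two configurations."""
    q = sum(s * t for s, t in zip(sigma, tau)) / len(sigma)
    return q * q

random.seed(7)
N = 1000
# Placeholder configurations; in the real measurement sigma comes from the
# chain equilibrated at T and tau from an independent chain at T'.
sigma = [random.choice((-1, 1)) for _ in range(N)]
tau = [random.choice((-1, 1)) for _ in range(N)]

print(overlap_sq(sigma, sigma))  # identical configurations: q^2 = 1
print(overlap_sq(sigma, tau))    # uncorrelated configurations: O(1/N)
```

Taking the two configurations from independent Markov chains avoids the spurious correlations one would get by overlapping a chain with itself.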
We show data for the lowest temperature we have been able to thermalize, $`T^{}=0.4`$. The same qualitative picture holds for larger $`T^{}`$ values ($`T^{}<T_c`$). One notices at first glance from figure 1 that for both $`N`$ values (and, as we will see, for all $`N`$ values and all the different systems we have analyzed) $`q_{0.4,T}^{(2),(N)}`$ $`\stackrel{>}{\sim }`$ $`q_{T,T}^{(2),(N)}`$ ($`T\ge 0.4`$). This is what happens in non-chaotic systems (for example ferromagnets, where $`q_{T^{},T}^{(2),(N)}=M(T)^2M(T^{})^2`$, with $`M(T)`$ the magnetization at temperature $`T`$), and is very different from what would happen in a system with $`T`$-chaotic states. The second crucial observation is that the distance between $`q_{0.4,T}^{(2),(N)}`$ and $`q_{T,T}^{(2),(N)}`$, at fixed $`T>0.4`$, decreases with $`N`$; the two curves even seem to collapse at large $`N`$. This kind of behavior shows the absence of temperature chaos in the Sherrington-Kirkpatrick system and, as we will discuss in the following, in the diluted mean field and $`3D`$ EA spin glasses. This evidence, together with an understanding of the physical mechanism that is at the origin of this behavior (thanks to the analysis of $`P(q)`$), is the main point of this note. A more quantitative evidence comes from figure 2, where we plot $`q_{0.4,T}^{(2),(N)}-q_{T,T}^{(2),(N)}`$ as a function of $`T`$ for the SK model with $`N=256`$, $`512`$, $`1024`$, $`2048`$ and $`4096`$. Here the errors are computed by an analysis of sample-to-sample fluctuations (it is important not to forget that the points for different temperatures are strongly correlated, since they involve the same $`T=0.4`$ data, or data from different temperatures but nevertheless from the same PT simulation). In the large volume limit both contributions to the difference are zero for $`T>T_c=1`$, so that the non-zero value of the curves in this regime gives us a measure of finite size effects. Larger lattices have larger fluctuations. 
This is connected to the non-self-averaging nature of $`P_J(q)`$: the peaks of $`P_J(q)`$ become narrower for larger lattices (eventually approaching $`\delta `$-functions in the large volume limit), and averaging them to compute expectation values of the overlap gives a wiggling behavior, which becomes smooth only for a very large number of disorder samples. We are not able to keep under control a precise fit of the data of figure 2 for $`N\to \mathrm{\infty }`$, but the strong decrease of the difference at large $`N`$ is clear, and the possibility that the limit is zero everywhere looks very plausible (it would be very interesting to understand this behavior theoretically). We use figure 3 to try to better understand the mechanism governing how stable states of the system vary as a function of $`T`$. We plot the probability distribution $`P_J(q)`$ for a given disorder realization of the SK model with $`N=4096`$. We show, from top to bottom, the results at $`T=0.50`$, $`0.45`$ and $`0.40`$. The function $`P_J(q)`$ should be symmetric around $`q=0`$, since we are at zero magnetic field. The level of symmetry reached by our finite-statistics sample is a measure of the quality of our thermalization: from figure 3 it looks very good. Note that there is no peak close to, or at, the origin: this disorder realization carries little weight in the $`q\approx 0`$ region. At the lowest $`T`$ value there are $`5`$ peaks for positive $`q`$, three of which are very well separated. It is interesting to follow the evolution of $`P_J(q)`$ from $`T=0.50`$ down to $`T=0.40`$. At $`T=0.50`$ there are basically two very broad peaks, which get resolved at $`T=0.45`$: one broad peak gets divided into two clear peaks (which become very clear at $`T=0.4`$), while the other forms a $`3`$-peak structure, whose peaks acquire different weights at $`T=0.4`$. 
What one sees in figure 3 is interesting since it constitutes a typical pattern: when lowering $`T`$, states start to contribute to $`P_J(q)`$ through bifurcations (new peaks emerge) and through smooth rearrangements of the weights. One never sees dramatic changes involving strong redistributions of weight among far-away peaks, which would be typical of a chaotic situation: the phase space is obviously very complex, as it has to be in a situation characterized by RSB , but the $`T`$-dependence of the phase space is smooth and non-chaotic. The situation in the DMF model (where $`T_c\approx 2.07`$) is very similar to the one in the SK model. In figure 4 we show the analogue of figure 2 for $`N=64`$, $`512`$ and $`4096`$ spins. The two figures are very similar, and even the size of the difference we are plotting is very similar in the two models when comparing the same values of $`N`$. The situation in the DMF model looks exactly the same as in the SK model: there is no temperature chaos. The situation in the $`3D`$ EA model is different only in that finite size effects are very large (this is well known from numerical simulations ). In figure 5 we show $`q^{(2)}`$ at equal and different $`T`$ values for $`L=4`$, $`8`$ and $`16`$. It is clear that $`q^{(2),(N)}`$ decreases noticeably with $`N=L^3`$ for all values of $`T`$. It is also remarkable that even at very large $`T`$ values (with $`T`$ far larger than the estimated value of $`T_c\approx 1.16`$) $`q^{(2),(N)}`$ is different from zero even at $`N=4096=16^3`$. Apart from that, figure 5 shows a situation very similar to that of figure 1. We are definitely not in a situation where $`q_{0.4,T}^{(2),(N)}`$ goes to zero exponentially while $`q_{T,T}^{(2),(N)}`$ goes to a non-zero limit (even if the distance between the two curves for $`L=16`$ has become very small, and even negative in a temperature region). In figure 6 we show the $`3D`$ analogue of figures 2 and 4. 
Again, even for $`T>T_c`$, on the smaller lattices one has non-zero differences: finite size effects are large, but apart from that the emerging picture is analogous to the one we have found in mean field (diluted or not). Now, before discussing the data, we give a few details about our runs. For the SK model we use $`T_{min}=0.4=0.4T_c`$. We simulate $`N=256`$, $`512`$, $`1024`$, $`2048`$ and $`4096`$: for the different $`N`$ cases we have from $`26`$ to $`142`$ samples, a set of from $`38`$ to $`75`$ temperature values, and a $`dT`$ going from $`0.025`$ to $`0.0125`$. We run $`200000`$ sweeps, except for the $`N=4096`$ and $`N=2048`$ lattices, where we run $`400000`$ sweeps (we always use only the second part of the run for measurements). For the DMF model we have $`T_{min}=0.8\approx 0.4T_c`$. We use from $`640`$ to $`1024`$ samples for $`N=64`$, $`512`$ and $`4096`$. Here $`T_{max}`$ is $`3`$, the number of temperatures goes from $`45`$ to $`89`$, and the number of iterations from $`100000`$ to $`200000`$. In the $`3D`$ EA model we use $`T_{min}=0.4`$, $`T_{max}=2.075`$ (here $`T_c\approx 1.16`$) and a $`dT`$ going from $`0.050`$ to $`0.025`$. We have $`1344`$ samples for $`L=4`$ ($`200000`$ sweeps) and $`L=8`$ ($`300000`$ sweeps), and $`64`$ samples for $`16^3`$ (where $`T_{min}=0.5`$, with $`3450000`$ sweeps; this is very many sweeps of many tempering copies). Our SK program was multi-spin coded on different sites of the same system (we store $`64`$ spins of the system in the same word), while the DMF and $`3D`$ codes are multi-spin coded on different copies of the system . We want to note that, compared to previous numerical simulations, we have been able (thanks to a large computational effort and to the use of PT) to thermalize the systems at very low $`T`$ values. It is also interesting to notice that in the $`N=4096`$ case the $`3D`$ EA model requires many more sweeps than the DMF and SK models. In all our simulations we do not observe any temperature chaos effect. 
This is true for the SK model, the diluted mean field model and the $`3D`$ EA model: the three models behave very similarly. The differences we have plotted, which would decrease exponentially in a chaotic scenario, do not decrease faster than logarithmically. Obviously, from our numerical findings we cannot be sure that things will not change for very large system sizes, but again, we can claim that the absence of any temperature chaotic behavior is crystal clear at our lattice sizes. Two final comments are in order. First, as we already said, we cannot be sure about the behavior in the very large system limit: the difference of $`q_{T^{},T}^{(2)}`$ and $`q_{T,T}^{(2)}`$ (for $`T^{}\approx 0.4T_c`$ and $`T>T^{}`$) decreases with the system volume, and is close to zero for the largest lattice sizes we can simulate. This difference could eventually become negative, and the correlation at $`T\ne T^{}`$ could eventually drop exponentially on very large lattices: we can only say we do not see any trace of that. The second comment is that, in any case, our results have an experimental relevance: the number of spins that are equilibrated during a real experiment is of an order of magnitude only slightly larger than the one we can thermalize in our numerical simulations , so our results strongly suggest the absence of temperature chaos in real experiments. The previous work of other authors on chaos pointed toward the presence of temperature chaos. On the one hand, in this context the analytic computations by no means have an unambiguous meaning, since they are based on strong assumptions or on a perturbative and/or approximate treatment. On the other hand, numerical computations of older generations were much more limited in scope compared to what we can do now. 
For example, Ritort's numerical computation , which, within the limits of the gathered data, correctly found chaos, used a starting $`T`$ value of $`0.4`$ and a $`dT`$ of $`0.5`$ (as compared to the $`0.0125`$ we have been able to use here), i.e. it compared $`T=0.4`$ to $`T=0.9`$ (where $`T_c=1`$) on reasonably small lattices. In this case the decrease of the overlap is clear, but it turns out to be due to a finite size effect (since even the equal-$`T`$ overlap has to go to zero at $`T_c`$). A last comment (following for example ) concerns the relevance of the absence of chaos for the description of realistic, finite dimensional spin glasses. In short, the absence of a temperature chaotic behavior makes a modified droplet-like description of realistic spin glasses impossible (the original droplet model cannot work, for example, because of the observed dynamical scaling of the energy barriers). Following , one notices that the very weak dependence of spin glass physical properties on the cooling rate is not plausible in a scenario of activated domain growth. Only by arguing that there is temperature chaos can one reconcile the negligible effect of the cooling rate with a droplet picture. The absence of temperature chaos makes this reconciliation impossible. We are aware that G. Parisi and T. Rizzo, in a perturbative computation close to $`T_c`$, find an absence of temperature chaos (at the order they compute, but not necessarily at all orders in perturbation theory), both in the SK and in the DMF model. S. Franz and I. Kondor have obtained related evidence that excludes temperature chaos at the lowest orders close to $`T_c`$. We deeply thank all of them, together with J.-P. Bouchaud and F. Ritort, for interesting conversations. The numerical simulations have used, together with a number of workstations, computer time from the Grenoble T3E Cray and the Cagliari Linux cluster Kalix2 (funded by the Italian MURST under a COFIN grant).
no-problem/9910/hep-lat9910006.html
ar5iv
text
# A lattice NRQCD computation of the bag parameters for Δ⁢𝐵=2 operators (presented by N. Yamada) ## 1 Introduction The NRQCD calculation is essential to obtain a prediction for $`B_B`$ with a precision better than $`O(20\%)`$, as the size of the $`1/M`$ correction, which is not included in the static calculations, is expected to be $`\mathrm{\Lambda }_{QCD}/m_b`$ = 0.1–0.2. We update our study of $`B_B`$ using NRQCD, which was previously presented at the last lattice conference . A paper version is also available . Based on the same calculation method, we have also calculated $`B_S`$, which is relevant to the width difference of the $`B_s`$ meson system . ## 2 Method The bag parameter $`B_{X_q}(m_b)`$ is defined using the vacuum saturation approximation (VSA) as $`B_{X_q}(m_b)`$ $`=`$ $`{\displaystyle \frac{\langle \overline{B}_q^0|𝒪_{X_q}(m_b)|B_q^0\rangle }{\langle \overline{B}_q^0|𝒪_{X_q}(m_b)|B_q^0\rangle _{\mathrm{VSA}}}},`$ (1) where the $`\mathrm{\Delta }B`$=2 operators $`𝒪_{X_q}`$ are $`𝒪_{L_q}=\overline{b}\gamma _\mu P_Lq\overline{b}\gamma _\mu P_Lq`$ or $`𝒪_{S_q}=\overline{b}P_Lq\overline{b}P_Lq`$ ($`P_L`$ is the projection operator $`P_L=1-\gamma _5`$). The subscript $`q`$ denotes the valence light quark $`d`$ or $`s`$, and we omit it in the following when there is no risk of confusion. We use the notation $`B_L`$ instead of $`B_B`$ to emphasize that it is a matrix element of $`𝒪_L`$ and to distinguish it from $`𝒪_S`$. 
Using the operators constructed on the lattice with static heavy and clover light quark $`O_X^{\mathrm{lat}}`$, the continuum operators defined with the $`\overline{MS}`$ scheme $`𝒪_X`$ are written as $`𝒪_L(m_b)`$ $`=`$ $`{\displaystyle \underset{X=\{L,S,R,N\}}{\sum }}Z_{L,X}O_X^{\mathrm{lat}}(a^{-1}),`$ (2) $`𝒪_S(m_b)`$ $`=`$ $`{\displaystyle \underset{X=\{S,L,R,P\}}{\sum }}Z_{S,X}O_X^{\mathrm{lat}}(a^{-1}),`$ (3) $`𝒜_0`$ $`=`$ $`Z_AA_0^{\mathrm{lat}},`$ (4) where new operators $`𝒪_R`$, $`𝒪_N`$ and $`𝒪_P`$ are involved: $`𝒪_R`$ $`=`$ $`\overline{b}\gamma _\mu P_Rq\overline{b}\gamma _\mu P_Rq,`$ $`𝒪_N`$ $`=`$ $`2\overline{b}\gamma _\mu P_Lq\overline{b}\gamma _\mu P_Rq+4\overline{b}P_Lq\overline{b}P_Rq,`$ $`𝒪_P`$ $`=`$ $`2\overline{b}\gamma _\mu P_Lq\overline{b}\gamma _\mu P_Rq+12\overline{b}P_Lq\overline{b}P_Rq.`$ $`Z_{L,X}`$ and $`Z_{S,X}`$ are perturbative matching factors obtained at one-loop level . We also write the matching of the heavy-light axial current $`𝒜`$ with the renormalization constant $`Z_A`$. The bag parameters are, then, written in terms of the corresponding quantities measured on the lattice $`B_X^{\mathrm{lat}}`$ as $`B_L(m_b)`$ $`=`$ $`{\displaystyle \underset{X=\{L,S,R,N\}}{\sum }}Z_{L,X/A^2}B_X^{\mathrm{lat}}(a^{-1}),`$ (5) $`B_S(m_b)/0.734`$ $`=`$ $`{\displaystyle \underset{X=\{S,L,R,P\}}{\sum }}Z_{S,X/A^2}B_X^{\mathrm{lat}}(a^{-1}).`$ (6) Here $`Z_{L,X/A^2}`$ denotes a ratio of matching constants $`Z_{L,X}/Z_A^2`$, and $`B_X^{\mathrm{lat}}`$ is defined by $`B_X^{\mathrm{lat}}(a^{-1})`$ $`=`$ $`{\displaystyle \frac{\langle \overline{B}^0|𝒪_X^{\mathrm{lat}}(a^{-1})|B^0\rangle }{c\langle \overline{B}^0|A_\mu ^{\mathrm{lat}}|0\rangle \langle 0|A_\mu ^{\mathrm{lat}}|B^0\rangle }}.`$ (7) The numerical constant $`c`$ is 8/3 or $`-5/3`$ in $`B_L`$ or in $`B_S`$ respectively. The vacuum saturation of the operator $`𝒪_S`$ introduces a matrix element of the pseudoscalar density $`P=\overline{b}\gamma _5q`$, which is often rewritten in terms of $`A_\mu `$ using the equation of motion.
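The matchings in Eqs. (5) and (6) are simply linear combinations of lattice bag parameters weighted by the ratios $`Z_{L,X}/Z_A^2`$. A minimal sketch of this bookkeeping; all numerical values below are illustrative placeholders, not the entries of Table 1:

```python
# Sketch of the linear matching of Eq. (5): B_L(m_b) = sum_X Z_{L,X/A^2} * B_X^lat(1/a).
# The Z ratios and lattice values are hypothetical placeholders, NOT Table 1 entries.

def match_bag_parameter(z_ratios, b_lat):
    """Combine lattice bag parameters with perturbative matching ratios."""
    assert set(z_ratios) == set(b_lat), "operator bases must agree"
    return sum(z_ratios[x] * b_lat[x] for x in z_ratios)

z_L = {"L": 0.90, "S": -0.05, "R": 0.01, "N": -0.02}   # placeholder Z_{L,X}/Z_A^2
b_lat = {"L": 0.85, "S": 1.00, "R": 0.02, "N": 0.10}   # placeholder B_X^lat(1/a)

B_L = match_bag_parameter(z_L, b_lat)   # continuum B_L(m_b) in this toy example
```

With real inputs one would replace the two dictionaries by the Table 1 ratios and the measured lattice matrix elements; the $`B_S`$ case of Eq. (6) is the same sum over {S, L, R, P}, divided by 0.734.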
In doing so, a factor $`(\overline{m}_b(m_b)+\overline{m}_s(m_b))^2/M_{B_s}^2`$ appears, for which we use $`m_b=4.8\mathrm{GeV},`$ $`\overline{m}_b(m_b)=4.4\mathrm{GeV},`$ $`\overline{m}_s(m_b)=0.2\mathrm{GeV},`$ $`M_{B_s}=5.37\mathrm{GeV}`$ as in Ref. , and obtain the factor 0.734 given in Eq. (6). Unfortunately the one-loop coefficients for the perturbative matching are not yet available for the NRQCD action. We therefore use the one-loop coefficients calculated in the static limit as an approximation. This introduces a systematic error of $`O(\alpha _s/(am_Q))`$, but no logarithmic divergence appears. The numerical values of $`Z_{L,X/A^2}`$ and $`Z_{S,X/A^2}`$ at $`\beta `$=5.9 are given in Table 1, in which we linearize the perturbative expansion of $`Z_{L,X}/Z_A^2`$ and neglect all the $`O(\alpha _s^2)`$ terms. For the coupling constant, $`\alpha _V(q^{*})`$ with $`q^{*}=1/a`$ and $`\pi /a`$ is used throughout this paper. Our simulation was carried out on a quenched $`16^3\times 48`$ lattice at $`\beta `$=5.9. We have increased the statistics to 250 configurations from the 100 available at the time of Lattice 98 . We performed two sets of simulations with the NRQCD actions and currents improved through $`O(1/m_Q)`$ and $`O(1/m_Q^2)`$, which enables us to study the higher order effects in the $`1/m_Q`$ expansion explicitly. The light quark is described by the clover action with the tadpole improved clover coefficient $`c_{sw}=1/u_0^3`$. The inverse lattice spacing is determined from the rho meson mass as $`a^{-1}`$ = 1.62 GeV. ## 3 $`B_L`$ Figure 1 shows the $`1/M_P`$ dependence of $`B_{L_d}(m_b)`$ ($`q^{*}`$=$`1/a`$) with $`M_P`$ the pseudoscalar heavy-light meson mass. Open circles denote the results with the $`O(1/m_Q)`$ NRQCD action and open triangles denote those with the $`O(1/m_Q^2)`$ action. We fit the $`O(1/m_Q^2)`$ results to a quadratic function of $`1/M_P`$ (dashed line) and obtain the value in the static limit (small open triangle).
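The static-limit extrapolation described above is a quadratic fit in $`1/M_P`$ evaluated at $`1/M_P=0`$. A sketch with synthetic data (the actual points of Figure 1 are not reproduced here; the coefficients are made up):

```python
import numpy as np

# Fit B_L = c0 + c1*(1/M_P) + c2*(1/M_P)^2 and read off the static limit c0.
# Synthetic data generated from a known quadratic, NOT the Figure 1 points.
c0_true, c1_true, c2_true = 0.83, -0.30, 0.10       # hypothetical coefficients
inv_mp = np.array([0.10, 0.14, 0.18, 0.22, 0.26])   # 1/M_P in GeV^-1 (illustrative)
b_l = c0_true + c1_true * inv_mp + c2_true * inv_mp**2

coeffs = np.polyfit(inv_mp, b_l, 2)   # highest power first: [c2, c1, c0]
static_limit = coeffs[-1]             # value of the fit at 1/M_P = 0
```

In a real analysis the fit would of course be correlated and carry statistical errors from the ensemble; this only illustrates the extrapolation step itself.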
We also plot the previous results in the static limit by UKQCD (filled diamond), the Kentucky group (filled circle) and Giménez and Martinelli (filled triangle). In order to make a consistent comparison we reanalyzed their data using the same matching procedure described in the last section. Our data extrapolated to the static limit nicely agree with these direct simulation results, as they should. From Figure 1 we observe that $`B_L`$ has a small negative slope in $`1/M_P`$, which is well described by the vacuum saturation approximation and is also observed in the lattice calculations with relativistic actions . We also find that the $`O(1/m_Q^2)`$ corrections to the action and current give only a few per cent contribution to $`B_L`$. The dominant uncertainty in our result comes from the unknown one-loop coefficients for the NRQCD action. A crude estimate with order counting suggests that the corresponding systematic error is of $`O(\alpha _s/(am_b))`$, roughly 10%. Other possible systematic errors are the discretization errors of $`O(a^2\mathrm{\Lambda }_{QCD}^2)`$ and of $`O(\alpha _sa\mathrm{\Lambda }_{QCD})`$, the relativistic correction of $`O(\mathrm{\Lambda }_{QCD}^2/m_b^2)`$, and a small uncertainty in the chiral extrapolation. Taking them into account, we obtain the following values as our final results from the quenched lattice, $`B_{B_d}(m_b)=0.75(3)(12),{\displaystyle \frac{B_{B_s}}{B_{B_d}}}=1.01(1)(3),`$ (8) where the first error is statistical and the second a sum of the systematic errors in quadrature. In estimating the error in the ratio $`B_{B_s}/B_{B_d}`$ we consider the error from the chiral extrapolation only, assuming that other uncertainties cancel in the ratio. ## 4 $`B_S`$ Figure 2 shows the $`1/M_P`$ dependence of $`B_{S_s}(m_b)`$ with $`q^{*}=1/a`$. We see a significant increase of $`B_S`$ with the $`1/M`$ correction, which is 20–30%.
Our preliminary result with a similar error analysis as in $`B_L`$ is $`B_{S_s}(m_b)=1.19(2)(20).`$ (9) The width difference in the $`B_s\overline{B}_s`$ mixing, $`\mathrm{\Delta }\mathrm{\Gamma }_s`$, is theoretically calculated using the $`1/M`$ expansion as $`\left({\displaystyle \frac{\mathrm{\Delta }\mathrm{\Gamma }}{\mathrm{\Gamma }}}\right)_s=\left({\displaystyle \frac{f_{B_s}}{210\mathrm{M}\mathrm{e}\mathrm{V}}}\right)^2`$ $`\times `$ $`\left[0.006B_{L_s}(m_b)+0.150B_{S_s}(m_b)-0.063\right].`$ Using our results for $`B_L`$ and $`B_S`$, and a recent dynamical lattice result $`f_{B_s}`$ = 245(30) MeV , we obtain $`(\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma })_s`$ = 0.16(3)(4), where the errors are from $`f_{B_s}`$ and from $`B_S`$ respectively. Numerical calculations have been done on the Paragon XP/S at the Institute for Numerical Simulations and Applied Mathematics in Hiroshima University. We are grateful to S. Hioki for allowing us to use his program to generate gauge configurations. S.H. and T.O. are supported in part by the Grants-in-Aid of the Ministry of Education (Nos. 11740162, 10740125). K-I.I. would like to thank the JSPS for a research fellowship for Young Scientists.
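Plugging the central values quoted above into the width-difference formula reproduces the quoted $`(\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma })_s`$; a quick numerical check (central values only, no error propagation):

```python
# Central-value check of
#   (dGamma/Gamma)_s = (f_Bs / 210 MeV)^2 * [0.006*B_Ls + 0.150*B_Ss - 0.063]
# using the numbers quoted in the text.
f_bs = 245.0          # MeV, dynamical lattice result quoted above
b_ls = 0.75 * 1.01    # B_{L_s} from B_{B_d} = 0.75 and the ratio B_{B_s}/B_{B_d} = 1.01
b_ss = 1.19           # B_{S_s}(m_b)

dgamma_over_gamma = (f_bs / 210.0)**2 * (0.006 * b_ls + 0.150 * b_ss - 0.063)
# the central value comes out close to the quoted 0.16
```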
no-problem/9910/astro-ph9910282.html
ar5iv
text
## Section 1 Introduction The speckle interferometric technique (Labeyrie, 1970) has made a major breakthrough in observational astronomy by counteracting the effect of atmospheric turbulence on the structure of stellar images, allowing measurements of a wide range of celestial objects (Barlow et al., 1986, Wood et al., 1987, Afanas’jev et al., 1988, McAlister, 1988, Ridgway, 1988, Ebstein et al., 1989, Foy, 1992, Saha et al., 1997a, Saha and Venkatakrishnan, 1997). The principle of this technique is described in detail in the literature (Dainty, 1975). A new instrument based on the above technique has been developed to operate at both foci, the prime (f/3.25) as well as the Cassegrain (f/13), of the 2.34 meter Vainu Bappu Telescope (VBT) at Vainu Bappu Observatory (VBO), Kavalur, India. Since the afore-mentioned interferometer is a very sensitive instrument, any inaccuracies will lead to erroneous conclusions from the observations. Emphasis was therefore placed on the accuracy of the mechanical mounts and housing so as to maintain the optical path precisely. Most of the critical mounts were machined out of a solid piece. Care was taken in the design analysis and manufacturing to achieve the required dimensional and geometrical accuracies. The salient features of the important elements are described in this paper. Finally, we present the size of r<sub>o</sub>, measured from the speckle-grams of $`\alpha `$-Andromedae obtained with this interferometer at the VBT, Kavalur. ## Section 2 Design Features ### 2.1 Optical Design Figure 1 shows the optical design of the speckle camera system. Provisions have been made in the design to observe at both foci of the 2.34 meter VBT. The wave front falls on an optical flat made of low expansion glass, placed in the focal plane, with a high precision hole of aperture (about 350 $`\mu `$m) at an angle of 15<sup>o</sup> to its surface (Saha et al., 1997).
The image of the object passes to the microscope objective through this aperture, which slows the telescope beam down to f/130. A narrow band filter is placed before the detector to avoid chromatic blurring. The surrounding star field of diameter $`\varphi `$ 10 mm gets reflected from the optical flat on to a plane mirror and is re-imaged on an intensified CCD, henceforth ICCD (Chinnappan et al., 1991). ### 2.2 Mechanical Design Requirements The instrument has been designed with the help of computer aided design and analysis techniques and manufactured accurately on precision machine tools. The design imposes the following requirements. The instrument should primarily be light in weight besides being rigid when subjected to the bending loads arising from the multiple orientations of the telescope. The deflection of the instrument should be minimal under its own weight, as well as when it is fitted with all the accessories required for observation. Provisions for fine focusing of the image are to be incorporated for better resolution and clarity in observation. Besides these, it has to be provided with linear motion systems with no rotation of the optical elements and rigid locking of the lens position. The instrument should also be insensitive to temperature effects. The instrument is made of martensitic stainless steel (SS410), a material with a low coefficient of linear expansion (about half that of normal steel). #### 2.2.1 Design Analysis Since the optical enclosure of the interferometer has to support various mounts and the detector without any deflection in the various orientations of the telescope, before arriving at the final configuration and sizes the finite element method (FEM) was used (Zienkiewicz, 1967) to analyze the enclosure and to obtain prior information about its deflections, stresses, flexure, etc. For the structural analysis, Pro-Mechanica software has been used to optimize the structural members.
The analysis shows that when the instrument holds a detector or camera weighing 20 kg, kept at a distance of 270 mm from the mounting flange of the interferometer, it undergoes a maximum deflection of 1.65 $`\mu `$m. The model was also analyzed for strength and deflection under a load of 20 kg distributed over the span of the instrument; a deflection of only 0.7 $`\mu `$m was observed. Figure 2 shows the FEM model of the speckle interferometer. It depicts the analysis carried out for two kinds of loads in the two windows: the left window shows the deformation pattern for load 1 and the right window that for load 2. The grey-scale blocks in the picture indicate the extent of the deformations at individual places in the model of the interferometer. ### 2.3 Elements of the Interferometer An overall view of the interferometer is shown in Figure 3. It can be seen that the instrument basically consists of an optical enclosure, which houses the following assemblies: (i) microscope assembly, (ii) filter assembly, (iii) detector mount, (iv) guiding system. #### 2.3.1 Optical Enclosure The instrument has been conceived as a box made of two end plates of thickness 8 mm joined by struts of 22 mm square section. The struts have been machined from cylinders of $`\varphi `$ 25 mm diameter and the required length. The struts are provided with spigots of $`\varphi `$ 18 mm diameter at the ends for locating the end plates. The diameters of $`\varphi `$ 25 mm and $`\varphi `$ 18 mm at both ends are ground between centers to ensure concentricity of the spigots and the main cylinder. The end plates are provided with four holes to receive the spigots and are machined together in one setting on a wire-cut electro-discharge machine (EDM) to assure the size and the required center-distance accuracies. The end plates, when locked with the struts, form a light-weight box structure with the required strength to house the designed mounts with minimum deflection.
A bottom plate has been clamped on to two of the struts, and this forms the platform on which the various mounts of the microscope can be mounted. Once the reference is established parallel to the path of the wave front, the mounts can be manufactured in such a way that the lens mounting holes are at an accurate distance from their bases. In case of any error, the base of the mounts can be ground to get the required center height. Thus the plane in which the light travels through the microscope and the mirrors is maintained accurately. One of the sides has been provided with side plates shaped in such a way as to reduce the overall volume occupied by the enclosure while ensuring no light leakage. The optical enclosure has been blackened to ensure that no stray light is received by the optical elements. A special process has been used to blacken the stainless steel material. #### 2.3.2 Microscope Assembly The microscope mount assembly is shown in Figure 4. The main body of the assembly has a base and a vertical face in which a bore has been made to receive the focal plane mirror and the microscope. The axis of the bore is maintained parallel to the base and its distance is maintained accurately, as for all the other mounts. The focal plane mirror mount is mounted at an angle of 15<sup>o</sup> with the axis of the aperture (350 $`\mu `$m), as mentioned earlier (see section 2.1). After mounting the focal plane mirror, the aperture lies on the axis of the microscope mount. The image of the object is passed on to the microscope objective through this aperture. In the design, a fine focusing arrangement has been provided to get the optimum focus of the image. Accurate aligning, guiding and locating have been ensured to get an accurate straight-line motion of the objective along the direction of the optical axis. A rigid locking system has been provided, taking into consideration that the image should not be disturbed either during or after locking.
Care has also been taken to ensure that the axis of the 15<sup>o</sup> bore and the optical axis intersect at a predetermined point. A mirror cell has been conceived and designed to hold the focal-point flat elliptical mirror, taking its safety into consideration. A further development of this incorporates a nano-adjusting mechanism which helps in ultra-fine focusing of the microscope objective. A flexure mechanism is being used to achieve the nanometric motion of the same. #### 2.3.3 Detector Mount High speed electronic detectors are being used to record the speckle-grams of faint point sources. This may take the form of either detecting the photon events frame by frame, up to a frame rate of 50 Hz (Blazit et al., 1977, Blazit, 1986), or detecting individual photon events with a time resolution of a few $`\mu `$sec (Papaliolios et al., 1985, Durand et al., 1987, Graves et al., 1993). In this design too, we made an arrangement for fine focusing the speckles by moving the detector back and forth precisely in line with the optical axis. A locking arrangement has been provided to avoid the effects of backlash. In addition, we made an arrangement for the micrometric x–y movement of the detector which ensures its position precisely. A good quality, manually adjusted iris kept in front of the detector eliminates the extraneous light. Provisions for mounting different narrow band filters for observation have also been made. #### 2.3.4 Guiding System The surrounding star field of a few arc min gets reflected from the elliptical mirror and falls on to the flat mirror at an angle of 30<sup>o</sup> with respect to the optical axis. The mount of the flat mirror has been relieved suitably to ensure that the image arriving from the elliptical mirror reaches the flat mirror without any obstruction and is fully received by it. The re-imaging lens mount is designed with provisions for lens mounting, focusing and a rigid locking mechanism.
The lens collects the reflected rays from the afore-mentioned optical flat, and the field is re-imaged on to an ICCD so as to enable one to guide remotely from the console room of the telescope. #### 2.3.5 Spacer Assembly It is found that the best focus at the Cassegrain end of the 2.34 meter VBT is at a distance of about 570 mm from its mounting flange. In the design of this interferometer, we have kept the focal point, the aperture (350 $`\mu `$m) on the elliptical mirror described in section 2.1, 110 mm away from the surface of the mounting flange at the prime focus of this telescope. Therefore, it is essential to design a spacer of about 450 mm length so as to enable us to observe at the Cassegrain end of the VBT. The design of the spacer has to satisfy the following requirements: light weight, high rigidity and low deflection at the various orientations of the telescope. When mounted, both the mounting diameters should be perfectly concentric to each other, and the spacer should not allow the formation of eddies that may be produced by hot air entrapment. In order to satisfy the above requirements, the spacer is designed with two plates separated by six pillars of equal height. The pillar rods have concentric spigots at both ends. These spigots are located in holes provided on a pitch circle diameter concentric with the reference diameters. The pillars are rigidly bolted on to the plates at both ends. This design ensures a rigid, light-weight spacer which allows free movement of air and has concentric diameter plates at both ends. Figure 5 depicts the design of the spacer. ## Section 3 Observations at VBT Observations of several close binary systems (separation $`<`$1 arc sec), and of other stars, using this newly built interferometer were successfully carried out on 29/30 November, 1996, at the Cassegrain focus of the VBT through a 5 nm filter centered on H$`\alpha `$ using an uncooled ICCD.
The image scale at the afore-mentioned focus of this telescope thus becomes 0.67 arcsecond per mm. The CCD gives a video signal as output. The images were acquired at an exposure time of 20 ms using a frame grabber card DT-2861 manufactured by Data Translation<sup>TM</sup> and stored on the hard disk of a PC486 computer. The frame grabber can store up to 64 frames in a few seconds. The interface between the intensifier screen and the CCD chip is a fibre-optic bundle which reduces the image size by a factor of 1.69. The frame grabber card digitises the video signal and resamples the pixels of each row (385 CCD columns to 512 digitised samples), introducing a net reduction in the row direction by a factor of 1.27. These images were analysed on a Pentium PC. ## Section 4 Measurement of r<sub>o</sub> The resolution $`\theta `$ of a large telescope, limited by the atmospheric turbulence, as defined by the Strehl criterion, is $$\theta =\frac{4}{\pi }\frac{\lambda }{r_o}$$ (1) where $`\lambda `$ is the wavelength of observation and r<sub>o</sub> is Fried’s parameter. Hence, r<sub>o</sub> is directly related to the seeing at any place. The conventional method of measuring the seeing from a star image is to measure the full width at half maximum (FWHM) of a long exposure stellar image at zenith: $$FWHM=0.98\frac{\lambda }{r_o}$$ (2) There are different methods of measuring the seeing at any given place. Through a large telescope, the size of the seeing disk is often estimated by comparing it to the known angular separation of a double star. The seeing value can be estimated from star trails too. Another quantitative method is to measure r<sub>o</sub> from short exposure images using the speckle interferometric technique. The speckle life time is an important atmospheric parameter since it describes the longest possible exposure time for recording speckle interferograms.
It is an important parameter too for testing the atmospheric conditions at existing and upcoming astronomical sites (Vernin et al., 1991). In this technique, the area of the telescope aperture divided by the estimated number of speckles gives the wavefront coherence area $`\sigma `$, from which r<sub>o</sub> can be found by using the relation $$\sigma =0.342r_o^2$$ (3) Von der Lühe (1984) suggested that the squared modulus of the average observed Fourier transform should be divided by the averaged observed power spectrum. This expression depends only on the seeing conditions that prevail during the period covered by the time series and on the signal-to-noise ratio. The method may be useful for observations of extended objects where a reference is not available. If many short exposure images and their autocorrelations are summed, the summed images have the shape of the seeing disk, while the summed autocorrelations contain the autocorrelation of the seeing disk together with the autocorrelation of the mean speckle cell (Wood, 1985). It is the width of the speckle component of the autocorrelation that provides information on the size of the object being observed. We have used the speckle interferometric technique to measure Fried’s parameter at the 2.34 meter VBT, VBO, Kavalur. Ten consecutive speckle-grams of $`\alpha `$-Andromedae were processed. Figure 6 depicts the autocorrelation of the seeing disk together with the autocorrelation of the mean speckle cell. The value of r<sub>o</sub> is found to be 11.44 cm at the FWHM. ## Section 5 Discussions and Conclusions Optimized instruments and meticulous observing procedures are part of the important ground work for addressing the basic astrophysical problems. The new interferometer would enable one to observe many interesting objects and to map their high resolution features, taking advantage of the position of the site at Kavalur.
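Both estimators of r<sub>o</sub> discussed above can be inverted directly. A short sketch, written with the coherence-area relation in the dimensionally consistent form $`\sigma =0.342r_o^2`$; the speckle count used below is illustrative, not the measured one:

```python
import math

ARCSEC = math.pi / (180 * 3600)   # radians per arcsecond

def ro_from_fwhm(fwhm_arcsec, lam):
    """Invert Eq. (2): FWHM = 0.98 * lam / r_o for a long-exposure image at zenith."""
    return 0.98 * lam / (fwhm_arcsec * ARCSEC)

def ro_from_speckle_count(diameter, n_speckles):
    """Coherence area sigma = aperture area / speckle count; then sigma = 0.342 r_o^2."""
    sigma = math.pi * (diameter / 2.0)**2 / n_speckles
    return math.sqrt(sigma / 0.342)

# r_o = 11.44 cm at H-alpha (656.3 nm) corresponds to roughly 1.16 arcsec seeing
seeing = 0.98 * 656.3e-9 / 0.1144 / ARCSEC
```

For the 2.34 m VBT, a count of about a thousand speckles in the pupil would correspond to an r<sub>o</sub> near 11 cm, consistent with the value measured here.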
The latitude of this observatory (12.58<sup>o</sup>) gives us access to almost 70<sup>o</sup> south of the celestial equator; therefore, most of the observational results obtained earlier at high latitude stations for objects beyond 30<sup>o</sup> south of their zenith can be confirmed. Before arriving at the final concept of the design of this instrument, several experiments were carried out at various telescopes (Saha et al., 1987, Chinnappan et al., 1991), as well as in the laboratory (Saha et al., 1988). Emphasis was given to the design analysis using the modern FEM technique while designing the instrument, to obtain the required dimensional and geometrical accuracies of the mechanical mounts and housing so as to avoid erroneous conclusions from the observations. The quality of the image degrades for the following reasons: (i) variations of the air mass X (= 1/cos Z) or of its time average between the object and the reference, (ii) seeing differences between the modulation transfer function (MTF) for the object and its estimation from the reference, (iii) deformation of the mirrors or a slight misalignment while changing the pointing direction, (iv) bad focusing, (v) thermal effects from the telescope, etc. (Foy, 1988). Measurement of r<sub>o</sub> is of paramount importance to estimate the seeing at any astronomical site. Systematic studies of this parameter would enable one to understand the various causes of the local seeing, for example, thermal inhomogeneities associated with the building, aberrations in the design, manufacture and alignment of the optical train, etc. A significant improvement of the seeing compared to earlier observations (Saha and Chinnappan, 1998) was noticed during the observations of the speckles of close binaries after introducing the afore-mentioned spacer, which does not allow the formation of eddies produced by hot air entrapment, as an interface between the telescope and the interferometer. ## Acknowledgements The authors are grateful to Prof.
J C Bhattacharyya for his constant encouragement during the progress of the work. The personnel of the mechanical division of IIA, in particular Messrs. B R Madhava Rao, R M Paulraj, K Sagayanathan and A Selvaraj, provided excellent support during the execution of the work. The help rendered by Mr. J R K Murthy and Y V Ramana Murthy of CMTI in the computer analysis of the design, and by Dr. Indira Rajagopal of the National Aerospace Laboratory, Bangalore in the black chrome plating, is also gratefully acknowledged.
no-problem/9910/cond-mat9910307.html
ar5iv
text
# Investigation of Single Boron Acceptors at the Cleaved Si:B (111) Surface ## I Introduction In recent years much interest has been devoted to identifying individual dopant atoms at the surface of doped semiconductors by scanning tunneling microscopy (STM). For instance, on GaAs (110) substitutional Si donors (Si<sub>Ga</sub>) and Be or Zn acceptors in the top few surface layers appear as protrusions due to the local change of the tip-induced band-bending arising from the Coulomb potential of the ionized dopant . At low temperatures and negative sample bias, circularly modulated structures in the topographic image have been interpreted as Friedel oscillations around the ionized donors, induced by the accumulated electrons in the space-charge region . Furthermore, in $`n`$-doped InAs the scattering states of ionized dopants at low temperatures have been explored only recently . In addition to the delocalized features in the subsurface region, localized features have also been found which were attributed to a local change of the electronic structure around dopants at the GaAs (110) surface . Remarkably, ab initio calculations predict that the additional electron of the substitutional Si<sub>Ga</sub> atom is trapped by a localized midgap level due to a local modification of the electronic structure around the Si atom . Furthermore, different kinds of dopant-induced features and surface defects have been distinguished by voltage dependent imaging of the occupied and unoccupied electronic states . Apart from studies of the segregation or adsorption of group III or group IV elements on the Si (111) surface, most of the work on the local electronic structure around individual dopant atoms focussed on doped III-V semiconductors rather than on elemental semiconductors like Si, although these surfaces have been investigated in great detail by STM and scanning tunneling spectroscopy (STS) .
The question of how the dopant atoms are spatially distributed is of crucial interest for the investigation of the metal-insulator transition in Si doped with phosphorus and/or boron . In a previous report we have shown that individual P donors at the ($`2\times 1`$) reconstructed (111) surface of $`n`$-doped Si can be identified by voltage dependent imaging and STS at room temperature . Electronic surface states energetically located in the bulk band gap pin the Fermi level at $`0.4`$ eV above the top of the valence band at the surface. Consequently, the bands are bent upward towards the surface in contrast to GaAs (110) where the bands remain flat at zero bias. In Si:P the Coulomb potential around the ionized donor causes a local down-shift of the electronic density of states (DOS) and the donor appears as a protrusion at positive sample bias and as an indentation at negative bias. In order to further investigate the change of the local electronic structure around dopant atoms at the Si (111) surface we performed STM and STS measurements on cleaved $`p`$-doped Si (Si:B) at room temperature. Previous studies on the adsorption of B on Si (111) revealed different stages of B incorporation in the surface depending on coverage and thermal treatment in contrast to other group III adsorbates . For the cleaved Si:B (111) surface one might naively expect that apart from the change of the sign of the Coulomb potential the influence of the negatively ionized B acceptor on the electronic structure can be explained in a way similar to Si:P. However, we will show that unlike in Si:P, the DOS in Si:B is strongly modified at the acceptor site possibly due to the different electronic configuration of substitutional B in Si compared to P. ## II Experimental Measurements were performed with an Omicron STM in ultra-high vacuum (UHV) at room temperature. 
STM tips were prepared from electro-chemically etched tungsten wire and further cleaned in UHV by repeated cycles of annealing and consecutive Ar<sup>+</sup> sputtering. Samples ($`0.3\times 4\times 10`$ mm<sup>3</sup>) with boron concentrations $`N_A=7\times 10^{18}\mathrm{cm}^{-3}`$ and $`4.5\times 10^{19}\mathrm{cm}^{-3}`$ were cut from Czochralski-grown single crystals and cleaved in situ to expose the (111) surface to the tip. STM images were acquired directly after cleavage without further heat treatment of the samples to maintain the original dopant distribution at the surface. Images were taken in the constant-current mode with the voltage applied to the sample and the tip grounded. Hence, at positive voltages unoccupied electronic states of the sample are imaged whereas at negative voltages occupied states are imaged, assuming that the DOS of the tip varies smoothly with energy. Scans of opposite polarity were acquired quasi-simultaneously by scanning each line forward and backward with reversed polarities. ## III Results and Discussion ### A Identification and Distribution of B Acceptors Fig. 1 shows STM images of the cleaved Si(111) surface at +1.2 V and $`-1.2`$ V respectively. The bright rows are characteristic of the ($`2\times 1`$) reconstruction of the cleaved (111) surface as investigated in detail by Feenstra et al. . The reconstruction is explained by a revised $`\pi `$-bonded chain model where the $`p_z`$ orbitals representing ‘dangling bonds’ (DB) retain their local character and are therefore ideally suited for imaging with an STM. Fig. 2 shows a schematic sketch of this reconstruction where atoms marked 3 and 4 are fourfold coordinated and lowered, while the outer atoms 1 and 2 have one DB each which is mainly occupied at atom 1 and mainly unoccupied at atom 2 . In the STM only the elevated atoms 1 and 2 are imaged.
At negative sample voltage electrons tunnel out of the mainly occupied orbitals at position 1 which therefore appear as bright chains in the image, separated by dark stripes corresponding to atoms at positions 3 and 4. At low positive voltages the electrons are able to tunnel into the mainly unoccupied states located at atom 2 which therefore appear as bright chains. The small shift of the rows along the \[2̄11\] direction is verified by comparing images of opposite polarity. At even higher positive voltages electrons tunnel into unoccupied states that are located at the bonds between atoms 1 and 2 so that the rows appear as a zigzag chain. Several types of defects are observed. Those marked by arrows have a characteristic voltage-dependent contrast. These defects appear as protrusions (bright) at $`-1.2`$ V in contrast to images taken at +1.2 V where they appear as indentations (dark). Hence, the contrast is exactly inverted with respect to Si:P. These characteristic defects are observed for both investigated boron concentrations. We count a total of 67 defects in a surface area of 21323 $`\mathrm{nm}^2`$ for $`N_A=7\times 10^{18}\mathrm{cm}^{-3}`$ and 106 defects in 8357 $`\mathrm{nm}^2`$ for $`N_A=4.5\times 10^{19}\mathrm{cm}^{-3}`$. These values correspond to dopant densities of $`3.14\times 10^{-3}\mathrm{nm}^{-2}`$ and $`1.26\times 10^{-2}\mathrm{nm}^{-2}`$, in good agreement with the respective surface densities of $`2.2\times 10^{-3}\mathrm{nm}^{-2}`$ and $`1.4\times 10^{-2}\mathrm{nm}^{-2}`$ obtained from the bulk concentrations, where we assume that only atoms in the outermost layer (atoms 1 to 4, Fig. 2) give rise to a contrast in the image. We therefore ascribe these features to single B acceptors in the surface layer. It is reassuring that structures with this specific voltage-dependent contrast have not been found on the previously investigated Si:P (111) surface . Vice versa, the characteristic voltage-dependent contrast ascribed to P donors on Si:P has not been found on the Si:B surface investigated here.
The identification of these defects as individual B dopants allows us to check whether the dopant distribution in Si:B is random. The probability of finding the nearest neighbor B atom at a distance $`r`$ from a given atom at the surface is $`f(r)=2\pi r\rho e^{-\rho \pi r^2}`$, assuming a Poisson distribution of dopants with the dopant surface density $`\rho `$. For the determination of the nearest neighbor distances, B atoms lying nearer to the image border than to any other B atom have been discarded. The distances $`r`$ have been grouped into 0.665 nm wide intervals corresponding to the unit cell dimension along \[2̄11\]. Fig. 3 shows a histogram together with $`f(r)`$ normalized to the area under the histogram. The behavior does not change considerably when the intervals are shifted by the row-to-row distance of 0.332 nm. We find reasonably good agreement between the statistical and experimental distribution, although there seems to be a cut-off at low $`r`$, similar to what has been observed for Si:P. The shortest distance between two B atoms was found to be $`1.67\mathrm{nm}`$. This is in reasonable agreement with the distance of $`2.3\mathrm{nm}`$ that follows from the solubility limit of $`1.2\mathrm{at}\%`$ ($`T<1400\mathrm{K}`$) of B in Si for a random distribution . The reduced counts at longer distances are likely to be due to the restricted size of the STM images. For this reason, the statistical analysis has been carried out for the higher concentration of $`N_A=4.5\times 10^{19}\mathrm{cm}^{-3}`$ only, because most of the images taken on the sample with $`N_A=7\times 10^{18}\mathrm{cm}^{-3}`$ showed only one or two B defects. We conclude that B acceptors in Si:B are distributed randomly with a short-distance cut-off, similar to P donors in Si:P. ### B Electronic Structure of Single B Acceptors We now discuss the voltage-dependent contrast in more detail. Fig. 4 shows STM images of a B-induced feature for different voltages.
At all negative voltages boron appears as an isotropic shallow protrusion with a diameter of $`\sim `$2 nm. In contrast, at positive voltages B appears as a needle-shaped protrusion at $`+0.4`$ V but can hardly be distinguished from Si at $`+0.9`$ V. Further increase of the voltage leads to a reversal of the image contrast compared to $`+0.4`$ V, and the acceptor appears as an indentation. This change in contrast with voltage is explained by a local change of the surface DOS around the acceptor. Fig. 5 shows the surface DOS for the $`p`$-type cleaved Si (111) surface together with the near-surface bulk energy bands as inferred from photoemission and STS experiments . In $`p`$-type Si, the acceptor level at 45 meV above the valence band edge is occupied at room temperature and the boron atom is negatively charged. This occurs for B acceptors at the surface as well, due to the low hole binding energy of 25 meV estimated for high carrier concentrations . In addition, due to the presence of electronic surface states in the bulk band gap (Fig. 5, solid line), holes (majority carriers) are accumulated at the surface and the positive charge is compensated by the formation of a hole depletion layer of depth $`d`$. The negative space charge of the depletion layer gives rise to a downward band bending towards the surface. Furthermore, the Fermi level is pinned 0.4 eV above the valence band edge at the surface, almost independent of dopant concentration for moderately doped Si . For a boron concentration of $`N_A=4.5\times 10^{19}`$ cm<sup>-3</sup> and a band bending of $`\sim `$0.4 V, a depletion-layer depth $`d=3.4`$ nm and a surface charge corresponding to $`1.5\times 10^{13}e`$cm<sup>-2</sup> are estimated . This charge corresponds to a large hole density of $`2.6\times 10^{20}\mathrm{cm}^{-3}`$ near the surface, owing to the fact that the surface states decay into the volume with a short decay length of 1 - 2 lattice constants .
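The quoted depletion-layer depth and surface charge are consistent with the standard abrupt-depletion estimate $`d=\sqrt{2\epsilon \epsilon _0V_{bb}/(eN_A)}`$ and $`\sigma =N_Ad`$; a minimal sketch (ours, assuming $`\epsilon _{Si}=11.7`$ — the cited reference may use a slightly different model):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
E = 1.602e-19      # elementary charge, C
EPS_SI = 11.7      # relative permittivity of Si (our assumption)

def depletion_depth_nm(n_a_cm3, v_bb):
    """Abrupt depletion approximation: d = sqrt(2*eps*eps0*V_bb/(e*N_A))."""
    n_a_m3 = n_a_cm3 * 1e6
    d = math.sqrt(2 * EPS_SI * EPS0 * v_bb / (E * n_a_m3))
    return d * 1e9  # m -> nm

def areal_charge_cm2(n_a_cm3, d_nm):
    """Ionized-acceptor areal density sigma = N_A * d (per cm^2)."""
    return n_a_cm3 * d_nm * 1e-7  # nm -> cm

d = depletion_depth_nm(4.5e19, 0.4)   # ~3.4 nm
sigma = areal_charge_cm2(4.5e19, d)   # ~1.5e13 cm^-2
print(d, sigma)
```

Both numbers match the values quoted above.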
Adopting the simple description which worked successfully for Si:P, the surface DOS at the location of the boron atom is energetically shifted upward due to the screened negative Coulomb potential of the B acceptor (Fig. 5, dashed line). For the occupied states below $`E_F`$ this upward shift leads to a local increase of the surface DOS near the Fermi level. At negative voltages, electrons near the Fermi level of the sample have the highest transmission factor for tunneling into unoccupied states of the tip. Therefore, at the site of the B atom more electrons are able to tunnel into the tip than at Si sites. Thus, B appears as a protrusion for all negative voltages. At positive voltages, electrons at energies near the Fermi level of the tip have the highest probability for tunneling into unoccupied surface states of the sample. At $`+0.4`$ V there are more states available at the position of B, which therefore appears as a bright protrusion. At $`+0.9`$ V the surface DOS of B and Si are alike, so that B and Si can hardly be distinguished. At even higher voltages ($`+1.2`$ V) there are more surface states on Si, leading to a higher tunneling probability at the Si position than at the B position, and B appears as an indentation. Hence, in this simple picture of a locally shifted DOS the contrast changes from bright to dark with increasing positive voltage, as experimentally observed. A more detailed analysis of the image taken at $`+0.4`$ V shows that apparently one unoccupied orbital in the row containing the defect is missing, compared to the image for $`U=-0.4`$ V (Fig. 6). The same behavior was found for $`\pm 0.6`$ V bias. The fact that the orbital is present for higher voltages confirms that the feature is not due to a surface vacancy. The missing orbital is better seen in the cross-section lines, where each maximum corresponds to an unoccupied orbital (Fig. 6).
At the position of the B acceptor a minimum appears where actually a maximum is expected. Counting the number of maxima along the two lines B, B’, i.e. along the \[011̄\] direction, one maximum is missing in the line taken across the defect (B) compared to a scan away from the acceptor (B’). We stress that this behavior was found for about 50 % of all B-induced features, while 50 % appeared regular. In Si:P the appearance of an additional orbital at the position of the P donor at $`U=-0.4`$ V was explained as being due to Si surface states just above $`E_F`$, which are usually unoccupied but are shifted to below the Fermi level by the Coulomb potential of the positively charged P donor. These states refer to the dangling-bond orbital at atom 2 (cf. Fig. 2). Therefore, this orbital becomes occupied and appears as an additional “atom” at negative voltages. As is apparent from Fig. 5, a corresponding explanation invoking simply a local shift of the surface DOS cannot account for a missing orbital at the acceptor site in Si:B. Rather, a qualitative change of the surface DOS at the B site must be invoked. We believe that this is due to the different substitutional positions boron can occupy at the surface. For instance, trivalent B located at position 1 or 2 would participate in three covalent bonds, although the preferred flat $`sp^2`$-hybrid configuration cannot be realized. In contrast, B at position 3 or 4 has to satisfy four bonds, making a charge transfer from adjacent Si atoms very likely. Such an effect of the B substitutional site on the electronic structure has been reported for B deposited on Si (111), where B does indeed occupy immediate subsurface positions . In addition, the Si-B bond length is about 12 % shorter than the Si-Si bond length, mainly due to the smaller covalent radius of B compared to Si .
This will lead to a relaxation of the surface structure around B, which could be connected with a charge transfer between Si and B and/or a reformation of the unoccupied DB at/nearest to the B site towards a more back-bonded type. A missing orbital in the STM image at $`+0.4`$ V could either be explained by a complete occupation of the orbital at position 2, which then would not be imaged for $`U>0`$, or by a reformation of the DB towards a more back-bonded character. Because only atoms 1 or 2 are imaged, the effect of B at position 3 or 4 on the image contrast is presumably weaker, since an acceptor at position 3 or 4 can only be “imaged” via an electronic interaction with Si atoms at positions 1 or 2. Hence, we conclude that the effect of a missing orbital is presumably due to B located at positions 1 or 2. This is in good agreement with the fact that we found a missing orbital at roughly 50 % of the B sites. We emphasize that the detailed structure of the B-induced feature clearly calls for theoretical calculations. Moreover, a more sophisticated model has to include many-body effects, which are important for highly localized electronic states . The local change of the DOS at the acceptor site is confirmed by scanning tunneling spectroscopy (STS). During $`I(U)`$ data acquisition the tip was retracted from the surface by $`0.05\mathrm{nm}/\mathrm{V}`$ to compensate for the exponential increase of current with voltage. Several $`I(U)`$ curves were averaged to reduce the scatter of the data. As demonstrated earlier, the logarithmic derivative $`(dI/dU)/(I/U)`$ is independent of the tip-sample distance and is related to the surface DOS, with $`U=0`$ corresponding to $`E_\mathrm{F}`$ . Due to the numerical evaluation of $`(dI/dU)/(I/U)`$, data points around $`U=0`$ were omitted because division by zero leads to unreasonable results, while $`(dI/dU)/(I/U)|_{U=0}=1`$. Fig.
7 shows spectra taken above a Si atom away from the acceptor, which exhibit peaks at $`U=-0.9,+0.5,+1.4\mathrm{V}`$, in agreement with previous results for $`p`$-doped Si where similar peaks were observed at somewhat lower voltages, $`U=-1.0,-0.35,+0.17,+1.25`$ V, due to the lower doping level compared to the present samples. These peaks can be attributed to the surface DOS of Si observed in photoemission experiments . A peak expected at $`U=-0.35\mathrm{V}`$, corresponding to the occupied DB states, could not be resolved due to the numerical difficulties around $`U=0`$. In comparison, the spectrum taken above the acceptor is strongly modified, apart from the peak at $`U=+1.45\mathrm{V}`$, which is only slightly shifted, by $`\mathrm{\Delta }U\approx 0.1`$ V, compared to the peak on Si. The most striking result is the strong reduction of the large peak observed on Si at $`+0.5`$ V. The corresponding unoccupied electronic states are located at the DB orbitals at position 2. Strong modifications of the STS spectra have also been reported for B deposited on Si (111) . In the present case the reduction of the surface DOS above $`E_\mathrm{F}`$ confirms the absence of a localized DB state at the B site and the effect of a missing orbital in the STM image at $`U=0.4`$–0.6 V. Furthermore, the peak at $`-0.9`$ V, representing the bottom of the occupied surface band, is considerably shifted upward, by $`0.5`$ V, at the B site. Since at negative bias states near $`E_\mathrm{F}`$ of the sample dominate the tunneling current, this confirms the bright image contrast at the B site observed for all negative voltages. A further question concerns the apparent anisotropic shape of the B defect. As mentioned above, at $`-0.4`$ V the extension of the B-induced feature seems to be isotropic, while it appears strongly elongated along the \[011̄\] direction at positive voltage.
Such an effect could be due to a different electronic interaction between Si and B along the $`\pi `$-bonded chains compared to the perpendicular direction and needs further investigation. We mention that the P-induced feature on Si:P appeared to be isotropic, although its overall extension was smaller. An explanation for the larger extent of the B-derived features could be a different distance between the tip and the surface compared to Si:P caused by a higher tunneling current of 0.7 nA compared to 0.3 nA in the latter case. ## IV Summary The Si (111) ($`2\times 1`$) surface of cleaved B-doped single crystals has been investigated by STM. The individual B acceptors have been identified by a voltage-dependent contrast and have been further characterized by STS. Similarly to Si:P the general change in contrast is governed by the local shift of the surface DOS due to the Coulomb potential of the charged dopant atom. In addition, the local change of the atomic and electronic structure around the B acceptor gives rise to a missing orbital observed in 50 % of the images taken at $`U=+0.4`$ V. These acceptors are presumably located at the upper positions 1 and 2 of the $`(2\times 1)`$ reconstructed surface. The unoccupied DOS of B at the surface as determined by STS is strongly reduced around 0.4 eV, in contrast to Si:P where such drastic changes are not observed. This shows that the electronic configuration of substitutional dopants has a decisive influence on the local electronic structure. The fact that in Si:B and Si:P the surface-densities of dopants estimated from the STM data are in good agreement with the surface densities derived from the bulk concentrations strongly confirms the assumption that only dopants in the outermost layer are imaged. This is in strong contrast to GaAs, where features arising from dopants in several subsurface layers are observed . Presumably, the screening of dopants is very different in doped Si and needs to be investigated further. 
###### Acknowledgements. This work was supported by the Deutsche Forschungsgemeinschaft through the Graduiertenkolleg “Kollektive Phänomene im Festkörper” and the Sonderforschungsbereich 195. We would like to thank T. Trappmann and M. Wenderoth for useful discussions.
## Section 1 Introduction It is generally believed that the energy released in solar flares is stored in nonpotential magnetic fields. The energy buildup could be a response of the coronal magnetic field to changes in the photospheric magnetic environment caused mostly by sunspot motions and emerging fluxes. Since the measurement of magnetic fields in the corona is not available, field measurements at the photospheric level have been widely used for the study of magnetic nonpotentiality in flare-producing active regions. In particular, the change of some nonpotentiality parameters during flare activity is regarded as a very important clue for understanding flare mechanisms. The MSFC (Marshall Space Flight Center) group studied the magnetic nonpotentiality associated with solar flares using the MSFC magnetograph (Hagyard et al., 1982; Hagyard et al., 1984). They examined the vertical current density, source field and magnetic shear derived from vector magnetograms of active regions in parallel with flare observations (Hagyard et al., 1984; Gary et al., 1987; Hagyard et al., 1990). On the basis of these studies, they suggested a few important points characterizing flare-producing active regions: a large shear angle, a strong transverse magnetic field along the neutral line, and a twist of long neutral lines (Gary et al., 1991). The BBSO (Big Bear Solar Observatory) and Huairou (Beijing Observatory) groups have also presented several interesting studies on the change of nonpotentiality parameters associated with solar flares (e.g., Wang et al., 1996; Wang 1997). Wang (1992) defined a transverse-field-weighted angular shear and made a first attempt to study the change of the weighted mean shear angle just after a flare from vector magnetograph measurements. He found that the weighted mean shear angle jumped about 5 degrees, coinciding with a flare.
He also showed that five more X-class and M-class flares exhibited the same pattern of shear angle variation (see Table I in Wang 1997). It is to be noted that the shear increases in some active regions during flare activities coincide with emerging flux loops (e.g., Wang and Tang, 1993). On the other hand, the MSFC group reported that the shear may increase, decrease or remain the same after major flares (Hagyard, West, and Smith, 1993; Ambastha, Hagyard, and West, 1993). Also, Chen et al. (1994) observed that there were no detectable changes in magnetic shear after 18 M-class flares. Thus, the time variation of magnetic shear before and after a solar flare still remains controversial. For a quantitative understanding of nonpotentiality parameters such as angular shear, we have to keep in mind that there are several technical problems calling for careful treatment. First, the calibration from polarization signals to vector fields should be well established. Second, the $`180^{\circ }`$ ambiguity of the observed transverse field has to be properly resolved. Most nonpotentiality parameters, such as vertical current density and magnetic shear, depend critically on how well the $`180^{\circ }`$ ambiguity problem is solved. Third, it is noted that Stokes line profiles could be changed due to the variation of thermodynamic parameters associated with flaring processes. So far, studies on the change of magnetic shear have been made mainly with filter-based magnetographs, thanks to their wide fields of view and high time resolutions (Zirin, 1995; Wang et al., 1996; Wang, 1997). However, filter-based magnetographs have some problems which originate from insufficient spectral information. Wang et al. (1996) reviewed in detail the reliability and limitations of filter-based magnetographs.
The calibration of filter-based magnetographs has often been made by employing the line slope method (Varsik, 1995) under the weak field approximation (Jefferies and Mickey, 1991) or by using nonlinear calibration curves based on theoretical atmospheric models (Hagyard et al., 1988; Sakurai et al., 1995; Kim, 1997). In these methods, the longitudinal and transverse field components are independently given by $$B_L=K_L\frac{V}{I},$$ (1) $$B_T=K_T\left[\frac{Q^2+U^2}{I^2}\right]^{1/4},$$ (2) in which $`K_L`$ and $`K_T`$ are calibration coefficients or curves which depend on the observing system or the models considered. Hagyard and Kineke (1995) developed an iterative method for the Fe I 5250.22 Å line by considering theoretical polarization signals with various inclination angles of the magnetic field. Recently, Moon, Park, and Yun (1999b) have devised an iterative calibration method for the Fe I 6302.5 Å line following Hagyard and Kineke (1995). They applied this calibration method to the dipole model of Skumanich (1992) in determining the field configuration of a simple round sunspot. This study revealed that the conventional calibration methods considerably underestimate transverse field strengths in the inner penumbra. Most previous studies on magnetic shear have adopted the potential method for resolving the $`180^{\circ }`$ ambiguity (Wang et al., 1994). However, the method may break down for flaring active regions, which generally have a strong shear near the polarity inversion line. In filter-based magnetographs, magnetic field vectors are derived from polarization signals integrated over the filter passband, and their calibration relations are determined without considering the variation of thermodynamic parameters. That is to say, the derived magnetic fields may be affected by various physical processes involved in flares.
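As a minimal illustration (ours, with arbitrary placeholder coefficients $`K_L`$ and $`K_T`$ — actual values depend on the instrument and the atmospheric model), the linear and quartic calibration relations (1)-(2) map Stokes signals to field components as follows:

```python
import numpy as np

def calibrate_weak_field(I, Q, U, V, K_L=10000.0, K_T=6000.0):
    """Sketch of Eqs. (1)-(2): B_L = K_L * V/I and
    B_T = K_T * ((Q^2 + U^2)/I^2)**(1/4).
    K_L and K_T are placeholder coefficients, not instrument values."""
    I, Q, U, V = (np.asarray(a, dtype=float) for a in (I, Q, U, V))
    B_L = K_L * V / I
    B_T = K_T * ((Q**2 + U**2) / I**2) ** 0.25
    return B_L, B_T

# toy example: 1% circular and 0.01% linear polarization signal
B_L, B_T = calibrate_weak_field(I=1.0, Q=1e-4, U=0.0, V=1e-2)
```

Note the quartic root in Eq. (2): the transverse field responds to the square root of the linear polarization degree, which is one reason weak transverse signals are harder to calibrate than longitudinal ones.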
In this study, we use a set of MSO magnetograms of AR 6919 obtained by the Haleakala Stokes Polarimeter, which provides simultaneous full Stokes polarization profiles. Since the calibration is based on the nonlinear least square method (Skumanich and Lites, 1987), the magnetic field components derived are expected to be less affected by flare-related physical processes. Since it takes about an hour to scan a whole active region with the polarimeter, we cannot detect such an abrupt change of magnetic shear as Wang (1992) found. In our study, emphasis is given to the evolution of magnetic nonpotentiality with the progress of an X-class flare which occurred in AR 6919. A similar study for AR 5747 is being prepared by Moon et al. (1999c). We expect that our data are able to yield a more reasonable estimate of various nonpotentiality parameters. In Section 2, an account is given of the magnetic nonpotentiality parameters that we have considered in this study. The observation and analysis are presented in Section 3 and the evolution of nonpotentiality parameters is discussed in Section 4. Finally, a brief summary and conclusion are delivered in Section 5. ## Section 2 Magnetic Nonpotentiality Parameters ### 2.1 Electric Current Density It is well accepted that electric currents play an important role in the process of energy buildup and relaxation of solar active regions. Since the observation of vector magnetic fields is available only in the photosphere, the vertical current at the photosphere is widely used in studies of solar active regions. According to Ampere’s law, the vertical current density is given by $$J_z=\frac{1}{\mu _0}\left(\frac{\partial B_y}{\partial x}-\frac{\partial B_x}{\partial y}\right),$$ (3) in which $`\mu _0=4\pi \times 10^{-7}\mathrm{H}/\mathrm{m}\approx 0.0126\mathrm{G}\mathrm{m}/\mathrm{A}`$ is the magnetic permeability of free space. ### 2.2 Magnetic Angular Shear Hagyard et al.
(1984) defined the magnetic angular shear (or magnetic shear) as the angular difference between the observed transverse field and the azimuth of the transverse component of the potential field, which is computed employing the observed longitudinal field as a boundary condition. That is, the magnetic angular shear $`\theta _a`$ is given by $$\theta _a=\theta _o-\theta _p,$$ (4) in which $`\theta _o=\mathrm{arctan}(B_y/B_x)`$ is the azimuth of the observed transverse field and $`\theta _p=\mathrm{arctan}(B_{py}/B_{px})`$ is that of the corresponding potential field component. Noting that flares are associated with magnetic shear of strong transverse fields, Wang (1992) proposed a transverse-field-weighted mean angular shear given by $$\overline{\theta }_a=\frac{\sum B_t\theta _a}{\sum B_t},$$ (5) in which $`B_t`$ is the transverse field strength and the sum is taken over all the pixels in the considered region. In this work we use the horizontal field strength in heliographic coordinates instead of the transverse field strength. ### 2.3 Magnetic Shear Angle A new nonpotentiality indicator was suggested in 1993: the angle between the observed magnetic field vector and the corresponding potential magnetic field vector. By definition, the shear angle $`\theta _s`$ can be expressed as $$\theta _s=\mathrm{arccos}\left(\frac{𝐁_𝐨\cdot 𝐁_𝐩}{|𝐁_𝐨||𝐁_𝐩|}\right)$$ (6) where $`𝐁_𝐨`$ and $`𝐁_𝐩`$ are the observed and potential magnetic field vectors, respectively. In this equation, $`B_{pz}`$ is identical with $`B_{oz}`$ as explained earlier. In our study, we proceed a step further and consider a field-strength-weighted mean shear angle defined by $$\overline{\theta }_s=\frac{\sum |𝐁|\theta _s}{\sum |𝐁|}$$ (7) where $`|𝐁|`$ is the field strength and the sum is taken over all the pixels in the considered region.
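Equations (3)-(7) translate directly into array operations on a magnetogram grid. The following sketch (ours, not the authors' reduction code; SI units assumed) computes $`J_z`$ by centered differences and the two weighted mean shear angles:

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space (SI); fields in T, grid in m

def vertical_current_density(Bx, By, dx, dy):
    """Eq. (3): J_z = (dBy/dx - dBx/dy)/mu0, via centered differences."""
    dBy_dx = np.gradient(By, dx, axis=1)
    dBx_dy = np.gradient(Bx, dy, axis=0)
    return (dBy_dx - dBx_dy) / MU0

def weighted_mean_angular_shear(Bx, By, Bpx, Bpy):
    """Eqs. (4)-(5): transverse-field-weighted mean of theta_o - theta_p."""
    theta_a = np.arctan2(By, Bx) - np.arctan2(Bpy, Bpx)
    # wrap into (-pi, pi] so differences near +-180 deg are handled consistently
    theta_a = np.arctan2(np.sin(theta_a), np.cos(theta_a))
    Bt = np.hypot(Bx, By)
    return np.sum(Bt * theta_a) / np.sum(Bt)

def weighted_mean_shear_angle(Bo, Bp):
    """Eqs. (6)-(7): |B|-weighted angle between observed and potential
    field vectors; Bo and Bp have shape (ny, nx, 3)."""
    dot = np.sum(Bo * Bp, axis=-1)
    norm = np.linalg.norm(Bo, axis=-1) * np.linalg.norm(Bp, axis=-1)
    theta_s = np.arccos(np.clip(dot / norm, -1.0, 1.0))
    w = np.linalg.norm(Bo, axis=-1)
    return np.sum(w * theta_s) / np.sum(w)
```

A field everywhere identical to its potential counterpart gives zero shear, while a transverse field rotated by 90 degrees everywhere gives a weighted mean angular shear of π/2, as expected.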
### 2.4 Magnetic Free Energy Density The density of magnetic free energy is given by $$\rho _f=\frac{(𝐁_𝐨-𝐁_𝐩)^2}{8\pi }=\frac{𝐁_𝐬^2}{8\pi }$$ (8) where $`𝐁_𝐬`$ is the nonpotential part of the magnetic field, which was defined as the source field by Hagyard, Low, and Tandberg-Hanssen (1981). It can also be expressed as (Wang et al., 1996) $$\rho _f=\frac{(|𝐁_𝐨|-|𝐁_𝐩|)^2}{8\pi }+\frac{|𝐁_𝐨||𝐁_𝐩|}{2\pi }\mathrm{sin}^2(\theta _s/2),$$ (9) where $`|𝐁_𝐨|`$ and $`|𝐁_𝐩|`$ are the magnitudes of the observed field and the computed potential field, respectively. The tensor virial theorem can be utilized to estimate the total magnetic free energy of a solar active region (e.g., Metcalf et al., 1995). However, as McClymont et al. (1997) pointed out (for details, see Appendix A of their paper), there are several controversial problems in estimating the magnetic free energy of a real active region with this method. In this study, we examine an observable quantity, the sum of the magnetic free energy density over the field of view. This quantity is expected to indicate the degree of nonpotentiality at least near the photosphere. ## Section 3 Observation and Analysis For the present work, we have selected a set of MSO magnetogram data of AR 6919 taken on Nov. 15, 1991. The magnetogram data were obtained by the Haleakala Stokes polarimeter (Mickey, 1985), which provides simultaneous Stokes I, Q, U, V profiles of the Fe I 6301.5, 6302.5 $`\mathrm{\AA }`$ doublet. The observations were made by a rectangular raster scan with a pixel spacing of 2.8<sup>′′</sup> (high resolution scan) or 5.6<sup>′′</sup> (low resolution scan) and a dispersion of 25 $`\mathrm{m}\mathrm{\AA }/\mathrm{pixel}`$. Most of the analyzing procedure is well described in Canfield et al. (1993).
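Incidentally, the equivalence of Eq. (8) and Eq. (9) is just the law of cosines, $`(𝐁_𝐨-𝐁_𝐩)^2=(|𝐁_𝐨|-|𝐁_𝐩|)^2+4|𝐁_𝐨||𝐁_𝐩|\mathrm{sin}^2(\theta _s/2)`$; a quick numerical check (ours) on a random pair of vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
Bo = rng.normal(size=3)  # arbitrary observed field vector
Bp = rng.normal(size=3)  # arbitrary potential field vector

# Eq. (8): rho_f from the source field B_s = B_o - B_p
rho_8 = np.dot(Bo - Bp, Bo - Bp) / (8 * np.pi)

# Eq. (9): same quantity from the magnitudes and the shear angle theta_s
mo, mp = np.linalg.norm(Bo), np.linalg.norm(Bp)
theta_s = np.arccos(np.dot(Bo, Bp) / (mo * mp))
rho_9 = (mo - mp)**2 / (8 * np.pi) + mo * mp / (2 * np.pi) * np.sin(theta_s / 2)**2

assert np.isclose(rho_8, rho_9)
```

The two expressions agree to machine precision for any pair of vectors.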
To derive the magnetic fields from the Stokes profiles, we have used a nonlinear least square fitting method (Skumanich and Lites, 1987) for fields stronger than 100 G and an integral method (Ronan, Mickey, and Orrall, 1987) for weaker fields. In the fitting, the Faraday rotation effect, which is one of the error sources for strong fields, is properly taken into account. The noise level in the original magnetogram is about 70 G for transverse fields and 10 G for longitudinal fields. The basic observational parameters of the magnetograms used in this study are presented in Table I. To resolve the $`180^{\circ }`$ ambiguity and to transform the image plane (longitudinal and transverse) magnetograms to heliographic (vertical and horizontal) ones, we have adopted the multi-step method of Canfield et al. (1993) (for details, see the Appendix of their paper). In Steps 3 and 4 of their method, they chose the orientation of the transverse field that minimizes the angle between neighboring field vectors and the field divergence $`|\nabla \cdot 𝐁|`$. Moon et al. (1999a) have already discussed in detail the problem of $`180^{\circ }`$ ambiguity resolution for the present magnetograms. ## Section 4 Evolution of Magnetic Nonpotentiality Moon et al. (1999a) already explained in detail the three magnetograms under consideration and their characteristics. Here we present the second image plane vector magnetogram of AR 6919 superposed on white light images in Figure 1. According to the Solar Geophysical Data, a 3B H$`\alpha `$ flare and an X1.5 flare occurred around 22:35 UT on November 15, 1991, at the heliographic coordinates S13, W19. Considering the timing of the flaring events and the weak projection effect, our selected magnetograms are useful for studying the change of magnetic field structures associated with the flare.
This active region was also studied in terms of coordinated Mees/Yohkoh observations (Canfield et al., 1992; Wülser et al., 1994), X-ray imaging observations (Sakao et al., 1992), white light flare observations (Hudson et al., 1992), and its preflare phenomena (Canfield and Reardon, 1998). From the three vector magnetograms in heliographic coordinates, we have derived the set of nonpotentiality parameters described in Section 2. In this study, we pay attention to their time variation associated with the X-class flare. To examine the change of magnetic fluxes before and after the flare occurrence, we have plotted the time variation of magnetic fluxes of positive and negative polarities in Figure 2, in which the asterisked curves show the fluxes in a $`\delta `$ sunspot region. As seen in the figure, the magnetic fluxes of the $`\delta `$ sunspot region for both polarities decreased with time. It is also noted that there were several emerging fluxes (P1, P2, N1, N2 in Figure 1) outside the $`\delta `$ sunspot region, which were accompanied by eruptions of H$`\alpha `$ arch filaments (Canfield and Reardon, 1998). The new emerging positive fluxes such as P1 and P2 account for the total flux increase of positive polarities. These facts imply that the X-class flare should be associated with both cancellation of the $`\delta `$-type magnetic fields and new emerging fluxes. We show the vertical current density and the inversion lines inferred from the longitudinal fields in Figure 3. As seen in Figures 3-b) and 3-c), the strongest vertical current density kernel is located near the inversion line of the $`\delta `$ sunspot. As seen in the figures, the vertical current density increased considerably just before the onset of the X-class flare. Figures 4 and 5 show the angular shear weighted with transverse field strength and the shear angle weighted with total field strength, derived from the vector magnetograms and the computed potential fields (Sakurai, 1992).
As seen in the figures, high values of shear angles are located near the $`\delta `$ spot region. The changes of the two mean shear angles with time are given in Figure 6, which shows that the shear angles for the $`\delta `$ spot area (denoted by Local in the figure) increased before the X-class flare and then decreased after it, while those for the whole active region decreased before the flare. It is to be noted that a similar pattern is also found in the evolution of a measure of magnetic field discontinuity, MAD (see Figure 7 in Moon et al. 1999a). The derived mean shear angles are listed in Table II, in which for comparison we also tabulate the corresponding values obtained with the potential field method for $`180^{\circ }`$ ambiguity resolution. It is interesting that the mean shear angles for the $`\delta `$ spot area (denoted by Local in Table II) obtained with the potential method continuously increased during the flaring activity, unlike those obtained with our method. This eloquently demonstrates that totally different conclusions can be drawn on the shear angle evolution throughout a flare, depending critically on the $`180^{\circ }`$ ambiguity resolution method employed in the data reduction. The magnetic free energy densities are shown in Figure 7. The free energy density near the $`\delta `$ spot region decreased substantially before the flare. We also list mean energy densities and total energy densities in Table II and plot their variations with time in Figure 8. These data indicate that magnetic free energy, at least in the neighborhood of the $`\delta `$ spot, was indeed released during the flaring process. It is particularly interesting, and not easy to understand, that in Figures 6 and 8 the curvatures of the curves for the whole active region are of opposite sign to those for the local $`\delta `$ spot region. This may create some skepticism about the conventional picture of the flare mechanism.
In Figure 6, where the shear angle variations are displayed, the curves (Local $`\theta _a`$ and Local $`\theta _s`$) for the $`\delta `$ spot region seem to be consistent with the conventional flare picture, according to which the magnetic shear increases to build up free energy until the flare onset and then decreases as free energy is released. The free energy curve for the same $`\delta `$ spot region in Figure 8, however, shows that the energy decrease started even before the flare onset, although a more drastic energy release followed the onset. This indicates that relaxation of the twisted magnetic field mildly starts even hours before the visible flare onset. The decrease of the mean shear angle for the whole active region before the flare supports this argument. Why, then, does the local shear increase before the flare? This may be interpreted as the development of a current sheet of macroscopic size in the possible reconnection region near the $`\delta `$ spot. Magnetic fields tend to take the lowest possible energy state under given constraints. The lowest energy state is expected to have a smooth spatial variation of the magnetic field. However, a smooth state may not exist if the field has undergone too much twist. Then, a current sheet must develop in some regions while the field takes a smooth configuration in other regions. The process of current sheet development may be either a pure ideal MHD process, as described above, or a process involving a lot of small-scale magnetic reconnections. The latter process is analogous to an avalanche of a sand pile initiated by the slide of a few sand grains. This scenario was proposed and investigated by Lu et al. (1991, 1993). The increase of the planar sum of the free energy density over the whole active region shown in Figure 8 still requires more investigation. However, this can also be explained within the conventional picture of solar flares.
When a magnetic field is twisted, the whole field tends to expand to occupy more volume per flux. The magnetic energy is thus more concentrated near the surface in a potential field than in a twisted field. The increase of the free energy after the flare onset might imply that the coronal field shrinks down while only part of the total free energy is released by the flaring process. However, our speculations based on only one flare case study definitely need to be examined by investigation of more cases. ## 5 Summary and Conclusion In this study, we have investigated the magnetic nonpotentiality of AR 6919 associated with an X-class flare which occurred on November 15, 1991, using MSO magnetograms. The magnetogram data were obtained before and after the X-class flare using the Haleakala Stokes Polarimeter. A nonlinear least-squares method was adopted to derive the magnetic field components from the observed Stokes profiles, and a multi-step ambiguity solution method was employed to resolve the $`180^{}`$ ambiguity. From the $`180^{}`$ ambiguity-resolved vector magnetograms, we have derived a set of physical quantities characterizing the field configuration. The results emerging from this study can be summarized as follows: 1) There were flux decreases for both polarities in a $`\delta `$ sunspot pair as well as flux increases outside it, which implies that the energy release of the X-class flare should be associated with flux cancellation and emergence. 2) It was found that the vertical current near the $`\delta `$ sunspot increased considerably before the flare. 3) We also found that magnetic shear near the $`\delta `$ sunspot increased before the flare and then decreased after it, while magnetic shear in the whole active region decreased before the flare. 4) The sum of magnetic energy density decreased before the flare, indicating that magnetic free energy was released by the flaring event. 
However, we also found different evolutionary tendencies of the nonpotentiality parameters for the whole active region and for the local $`\delta `$ spot region. These differences must be examined through further studies of many other flare-bearing active regions. If some of them could be confirmed, they would serve as important clues to understanding flaring mechanisms. ## Acknowledgements We wish to thank Dr. Metcalf for allowing us to use some of his numerical routines for analyzing vector magnetograms and Dr. Labonte for helpful comments. The data from the Mees Solar Observatory, University of Hawaii are produced with the support of NASA grant NAG 5-4941 and NASA contract NAS8-40801. This work has been supported in part by the Basic Research Fund (99-1-500-00 and 99-1-500-21) of Korea Astronomy Observatory and in part by the Korea-US Cooperative Science Program under KOSEF (995-0200-004-2).
# COMPTEL limits on <sup>26</sup>Al 1.809 MeV line emission from $`\gamma ^2`$ Velorum ## 1 Introduction The 1.809 MeV gamma-ray line from radioactive decay of <sup>26</sup>Al $`[`$mean lifetime $`(1.07\pm 0.04)\mathrm{\hspace{0.33em}10}^6`$ yr Endt (1990)$`]`$ traces recent nucleosynthesis in the Galaxy. This was the first gamma-ray line detected from the interstellar medium Mahoney et al. (1984), and had been predicted from nucleosynthesis calculations of explosive carbon burning in core-collapse supernovae Ramaty & Lingenfelter (1977); Arnett (1977). Other <sup>26</sup>Al production sites have been proposed as well, covering a wide range of densities (0.5 – $`\mathrm{3\hspace{0.33em}10}^5`$ g cm<sup>-3</sup>), temperatures ($`\mathrm{3\hspace{0.33em}10}^7`$ – $`\mathrm{3\hspace{0.33em}10}^9`$ K), and time scales (1 – $`10^{14}`$ s) at which proton capture on <sup>25</sup>Mg (within the Mg-Al chain) would create <sup>26</sup>Al: explosive hydrogen burning in novae and supernovae Arnould et al. (1980), neon burning in the presupernova and supernova stage (e.g. Woosley & Weaver, 1995), neutrino-induced nucleosynthesis in supernovae Woosley et al. (1990), convective core hydrogen burning in very massive stars Dearborn & Blake (1985), and hydrogen shell burning or “hot bottom burning” at the base of the convective stellar envelope in asymptotic giant branch (AGB) stars Nørgaard (1980); Forestini et al. (1991). <sup>26</sup>Al nucleosynthesis and observations have been reviewed by Clayton & Leising (1987); MacPherson et al. (1995); Prantzos & Diehl (1996); Diehl & Timmes (1998). Current theories still predict significant amounts of <sup>26</sup>Al from core-collapse supernovae Woosley & Weaver (1995); Thielemann et al. (1996); Timmes et al. (1995), the wind phases of very massive stars where <sup>26</sup>Al is produced on the main sequence and ejected mainly in the Wolf-Rayet stage Langer et al. (1995); Meynet et al. 
(1997) or possibly (in the case of fast-rotating stars) in a red supergiant stage as well Langer et al. (1997), and from the most massive AGB stars Bazán et al. (1993). The expected <sup>26</sup>Al contribution of chemically enriched novae was lowered after major revisions of key reaction rates José et al. (1997); Starrfield et al. (1998). All models suffer from large uncertainties in <sup>26</sup>Al yields, ranging from factors of three to orders of magnitude for the various proposed astrophysical sites. Recent results from the COMPTEL telescope aboard the Compton Gamma-Ray Observatory showed growing evidence for a young (massive) population dominating the galaxy-wide <sup>26</sup>Al production Diehl et al. (1995b); Knödlseder et al. (1996b); Oberlack et al. (1996); Oberlack (1997); Knödlseder (1997). While low-mass AGB stars and novae can be ruled out as the main <sup>26</sup>Al source, a distinction between supernovae and hydrostatic production in very massive stars appears difficult due to the similar evolutionary time scales involved. (See Knödlseder (1999) for arguments favouring WR stars as the dominant source of Galactic <sup>26</sup>Al.) Therefore, detection of individual objects would be essential as a calibrator, but the sensitivity of current instruments restricts this approach to very few objects. Upper limits for five individual supernova remnants have been derived by Knödlseder et al. (1996a), and the possibility of interpreting 1.8 MeV emission from the Vela region with individual objects has been discussed by Oberlack et al. (1994) and Diehl et al. (1999). $`\gamma ^2`$ Vel (WR 11, van der Hucht et al., 1981) is a WC8$`+`$O8–8.5III binary system at $`(l,b)=(262.8^{},-7.7^{})`$, containing the nearest Wolf-Rayet star to the Sun at a distance of $`258_{-31}^{+41}`$ pc, as determined by parallax measurements with the HIPPARCOS satellite van der Hucht et al. (1997); Schaerer et al. (1997). 
$`[`$Another recent investigation determines a spectral type of O7.5 for the O star De Marco & Schmutz (1999).$`]`$ A previous 1.8 MeV flux limit (2$`\sigma `$) for $`\gamma ^2`$ Vel of $`\mathrm{1.9\hspace{0.33em}10}^{-5}`$ $`\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}`$ had been determined by Diehl et al. (1995a), based on observations of the first 2 1/2 CGRO mission years. In Sect. 2, we describe our search for 1.8 MeV emission from $`\gamma ^2`$ Vel and the derivation of upper flux limits for several emission models. Sect. 3 discusses the initial mass range of the WR star, implications of our flux limit for stellar models, and proposes an alternative interpretation of the “IRAS Vela shell”. We summarize in Sect. 4. ## 2 Data analysis and source models The COMPTEL telescope spans an energy range from 0.75 to 30 MeV with a spectral resolution of 8% (FWHM) at 1.8 MeV, and performs imaging with an angular resolution of $`3.8^{}`$ FWHM at 1.8 MeV within a $`\sim 1`$ sr field of view. It features the Compton scattering detection principle through a coincidence measurement in two detector planes (see Schönfelder et al. (1993) for details). Imaging analysis and model fitting occur in a three-dimensional dataspace consisting of angles $`(\chi ,\psi ,\overline{\phi })`$ describing the scattered photon’s direction and the estimated Compton scatter angle, respectively Diehl et al. (1992). In this paper, we concentrate on fitting sky models, convolved with the instrumental response, in the imaging dataspace. The present analysis makes use of all data from observations 0.1 – 522.5, which have been combined in a full-sky dataset comprising 5 years of observing time between May 1991 and June 1996. Events were collected within a 200 keV wide energy window from (1.7 – 1.9) MeV into the imaging dataspace, with $`1^{}\times 1^{}`$ binning in $`(\chi ,\psi )`$ (in galactic coordinates) and $`2^{}`$ binning in $`\overline{\phi }`$. 
The dataspace has been restricted to $`l\in [185^{},320^{}]`$ and $`|b|\le 50^{}`$ to concentrate on emission from the Vela region, but not to lose information from the up to $`100^{}`$ wide response cone (at $`\overline{\phi }=50^{}`$). The instrumental and celestial continuum background have been modelled by interpolation from adjacent energies, with enhancements from Monte Carlo modelling of identified activation background components. An earlier version of the background handling and the event selections has been described by Oberlack et al. (1996) and Knödlseder et al. (1996c); more details on the recently improved background handling and the complete dataset are reported in Oberlack (1997) \[Oberlack et al., in prep.\]. For the derivation of upper limits, the maximum likelihood ratio test has been applied Cash (1979): A null hypothesis $`H_0`$ is compared with an extended alternative hypothesis $`H_1`$, which includes $`q`$ additional continuous model parameters, using the likelihood function $`\mathcal{L}`$, which is the product of the Poisson probabilities $`p_k`$ in $`N`$ dataspace cells: $$\mathcal{L}=\prod _{k=1}^{N}p_k,p_k=\{\begin{array}{cc}\frac{\mu _k^{n_k}}{n_k!}e^{-\mu _k}\hfill & \text{ for }\mu _k>0\hfill \\ 1\hfill & \text{ for }\mu _k=0,n_k=0\hfill \\ 0\hfill & \text{ for }\mu _k=0,n_k>0\hfill \end{array}$$ (1) where $`n_k`$ is the number of counts in cell $`k`$ and $$\mu _k=\sum _{s=1}^{n_{\mathrm{src}}}a^{(s)}\mu _k^{(s)}+b\mu _k^{\mathrm{bgd}}$$ (2) is the predicted number of counts due to sources and the background model, which includes the scaling parameters $`a^{(s)}`$, $`b`$ varied in the fit. 
Each source model $`(s)`$ can be described by a (normalized) flux map $`\{f_j^{(s)}\}`$ convolved with the response matrix $`R_{jk}`$: $$\mu _k^{(s)}=\sum _jR_{jk}f_j^{(s)}$$ (3) If the null hypothesis is true, the probability distribution of the ratio of the maximum likelihood $`\widehat{\mathcal{L}}_1`$ achieved by fitting $`H_1`$ to the data over the maximum likelihood $`\widehat{\mathcal{L}}_0`$ achieved by fitting $`H_0`$ to the data can be described analytically: $$p\left(2\mathrm{ln}\left(\frac{\widehat{\mathcal{L}}_1}{\widehat{\mathcal{L}}_0}\right)\right)=p(\chi _q^2)$$ (4) where $`p(\chi _q^2)`$ is the tabulated $`\chi ^2`$ probability distribution with $`q`$ degrees of freedom. Fig. 1 shows a Maximum-Entropy deconvolved map of the 1.8 MeV emission from the Carina / Vela / Puppis regions. While no 1.8 MeV flux is detected from the position of $`\gamma ^2`$ Vel, significant extended emission is observed in nearby regions, with an intensity peak around $`(267\stackrel{}{.}5,-0\stackrel{}{.}5)`$. Due to the broad COMPTEL response, such emission needs to be modelled. We test its impact on flux limits for $`\gamma ^2`$ Vel with four different emission models (additional to the background model), guided by the deconvolved image, by candidate <sup>26</sup>Al sources in the region, and by results on the galaxy-wide <sup>26</sup>Al distribution. These are the tested source models in addition to the background model: 1. A single point source at the position of $`\gamma ^2`$ Vel. 2. A detailed model describing the observed emission empirically and including 1.8 MeV candidate sources of the region (Fig. 1): an exponential disk with emissivity $`\mathrm{exp}\{-R/R_0\}\mathrm{exp}\{-|z|/z_0\}`$, galactocentric scale length $`R_0=4`$ kpc, and scale height $`z_0=180`$ pc as an approximation to the large-scale <sup>26</sup>Al distribution, an additional homogeneous stripe around $`l=310^{}`$ for simplified modeling of excess emission in this region, and 8 point sources including $`\gamma ^2`$ Vel. 
Details are listed in Table 1. 3. An intermediate model: 2 point sources at the positions of maximum intensity in Fig. 1 at $`(l,b)`$: $`(286\stackrel{}{.}0,0\stackrel{}{.}0)`$, $`(267\stackrel{}{.}5,-0\stackrel{}{.}5)`$, plus a point source for $`\gamma ^2`$ Vel. 4. Like model (c) but replacing the point source for $`\gamma ^2`$ Vel by a model for the IRAS Vela Shell: a spherical shell with a thickness of 10% of the $`8^{}`$ radius around $`(l,b)=(263^{},-7^{})`$ Sahu & Blaauw (1993). The first three models assume that all <sup>26</sup>Al from $`\gamma ^2`$ Vel is kept within an observed ’ejecta-type’ wind shell around the binary system with a diameter of 57' Marston et al. (1994). Given COMPTEL’s angular resolution, this can be modelled by a point source. For completeness, we consider a fourth model, which places <sup>26</sup>Al into the so-called IRAS Vela shell, an extended ($`8^{}`$ radius) structure almost centered on $`\gamma ^2`$ Vel, which has been identified by Sahu (1992) from IRAS 25$`\mu `$m – 100$`\mu `$m maps and investigations of cometary globules. This is a region of relatively bright H$`\alpha `$ emission within the Gum nebula Chanot & Sivan (1983), where young stellar objects are forming Prusti et al. (1992). Sahu found the Vela shell to be a structure distinct from the Gum nebula and presented an interpretation as a supershell from the aged Vela OB2 association, of which $`\gamma ^2`$ Vel may be a member de Zeeuw et al. (1997). (As a consequence of HIPPARCOS parallaxes, $`\gamma ^2`$ Vel would be located on the very near side of the association.) In this case, part of the ejected <sup>26</sup>Al may have traversed the gas bubble and be accumulated in the dense outer shell. We discuss a different interpretation, however, in Section 3.1. Fig. 2 shows the logarithmic maximum likelihood ratio $`2\mathrm{ln}(\widehat{\mathcal{L}}/\widehat{\mathcal{L}}_0)`$ as a function of model flux. No significant flux from $`\gamma ^2`$ Vel is found for any of the tested models. 
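The fitting machinery of Eqs. (1)–(4) can be illustrated with a toy one-dimensional analogue. Everything below (the templates, count levels, and the crude grid search) is invented for illustration and is not the actual COMPTEL dataspace or fitting code:

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_lnlike(n, mu):
    """Poisson log-likelihood, dropping the n-independent ln(n!) term,
    which cancels in likelihood ratios (cf. Eq. 1)."""
    mu = np.maximum(mu, 1e-12)
    return np.sum(n * np.log(mu) - mu)

# toy 1-D "dataspace": a flat background template and a Gaussian source
bgd = np.full(100, 50.0)
src = np.exp(-0.5 * ((np.arange(100) - 50.0) / 5.0) ** 2)
n = rng.poisson(1.0 * bgd + 20.0 * src)        # simulated counts

# H0 (background only): the ML scaling is analytic, b = sum(n)/sum(bgd)
b0 = n.sum() / bgd.sum()
lnl0 = poisson_lnlike(n, b0 * bgd)

# H1 (source + background): crude grid search over the scalings (a, b)
lnl1 = max(poisson_lnlike(n, a * src + b * bgd)
           for a in np.linspace(0.0, 60.0, 121)
           for b in np.linspace(0.8, 1.2, 81))

# test statistic of Eq. (4): chi^2-distributed with q=1 dof under H0
ts = 2.0 * (lnl1 - lnl0)
print(f"TS = {ts:.1f}")
```

A 2$`\sigma `$ upper limit on the source scaling $`a`$ then corresponds to the point where $`2\mathrm{ln}\widehat{\mathcal{L}}`$ has dropped by 4 from its maximum along the $`a`$ axis, which is how profile curves like those in Fig. 2 are read.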
The following 2$`\sigma `$ upper limits are derived: $$\begin{array}{cccc}\text{Model a:}\hfill & \hfill f& <& \mathrm{1.6\hspace{0.33em}10}^{-5}\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}\hfill \\ \text{Model b:}\hfill & \hfill f& <& \mathrm{0.9\hspace{0.33em}10}^{-5}\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}\hfill \\ \text{Model c:}\hfill & \hfill f& <& \mathrm{1.1\hspace{0.33em}10}^{-5}\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}\hfill \\ \text{Model d:}\hfill & \hfill f& <& \mathrm{1.9\hspace{0.33em}10}^{-5}\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}\hfill \end{array}$$ Given our 1.8 MeV map of the region, model (a) seems overly simplistic. Lacking alternative source models, part of the observed Vela emission is attributed to $`\gamma ^2`$ Vel and yields the most conservative upper limit for a point source model. Model (b) contains many free parameters and may approach an over-fit of the data, in which statistical fluctuations of the background are fitted. Although the fitted background scaling factor is lowest for this model, the lowest (even slightly negative) flux is attributed to $`\gamma ^2`$ Vel. Consistent with the map in Fig. 1, the flux from Vela-Puppis is better described by sources within the galactic plane than by $`\gamma ^2`$ Vel. Model (c) is “intermediate” in that it accounts for the strongest features in the map with a minimum of free model parameters. We adopt its result for the $`\gamma ^2`$ Vel flux limit and consider models (a) and (b) as the extreme values for the systematic uncertainty due to the choice in modelling of other emission close ($`5^{}`$ – $`10^{}`$) to $`\gamma ^2`$ Vel. The limit from model (d) is highest due to the large extent of this source model. Yet, we do not consider this model a likely representation of <sup>26</sup>Al from $`\gamma ^2`$ Vel, even if the structure itself may well be related to the binary system, as discussed in the next section. 
The 1.8 MeV flux directly translates into the “alive” <sup>26</sup>Al mass in the circumstellar medium, for a point source via: $$f_{1.8\mathrm{MeV}}=\mathrm{1.8\hspace{0.33em}10}^{-5}\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}\left(\frac{258}{d[\mathrm{pc}]}\right)^2\frac{M_{26}[\mathrm{M}_{}]}{\mathrm{1.0\hspace{0.33em}10}^{-4}}$$ Therefore, our $`2\sigma `$ upper flux limit corresponds to a maximum <sup>26</sup>Al mass from $`\gamma ^2`$ Vel of $$M_{26}^{\mathrm{WR11}}<\left(6.3_{-1.4}^{+2.1}\right)\mathrm{\hspace{0.33em}10}^{-5}\mathrm{M}_{}$$ (5) where the $`1\sigma `$ distance uncertainties from the HIPPARCOS measurement have been taken into account. ## 3 Discussion ### 3.1 Interpretation of the IRAS Vela shell Is our upper limit for the <sup>26</sup>Al yield from $`\gamma ^2`$ Vel realistic, being derived for a point source model, or should we rather consider the higher value implied by potential <sup>26</sup>Al accumulation in the IRAS Vela shell? The refined distance of $`\gamma ^2`$ Vel suggests a new interpretation of the IRAS Vela shell as the *main sequence bubble* of the WR progenitor star in the $`\gamma ^2`$ Vel system. This would make a significant <sup>26</sup>Al contamination of the shell unlikely, since this isotope is expected to appear at the stellar surface only in later stages of stellar evolution, together with the products of core hydrogen burning, after the hydrogen envelope has been expelled. Evidence for our interpretation stems from observations of interstellar reddening by Franco (1990), who studied two regions within the Gum nebula, one of which is seen in projection against the IRAS shell. Only this field showed clear evidence for a dust wall at a distance of $`200\pm 20`$ pc, interpreted by the authors as the near edge of the Gum nebula. 
If this is now interpreted as the near edge of the IRAS shell, its angular extent would correspond to a radius of 32 pc, and the centre would be placed at a distance of $`230\pm 20`$ pc, well within the $`1\sigma `$ uncertainty of the $`\gamma ^2`$ Vel distance. This would argue against a supershell interpretation, since the distance to the centre of Vela OB2, the assumed origin of the supershell, has been measured precisely by HIPPARCOS as $`415\pm 10`$ pc de Zeeuw et al. (1997). Scaling the (uncertain) shell mass estimate of Sahu down from the distance of 450 pc she assumed to the reduced distance of $`\gamma ^2`$ Vel yields a mass of $`2\times 10^5`$ $`\mathrm{M}_{}`$. Combined with the observed expansion velocity of $`\sim 10`$ km s<sup>-1</sup> Sahu & Sahu (1993), this corresponds to a kinetic energy of $`2\times 10^{43}`$ J ($`2\times 10^{50}`$ erg), within the range of stellar wind energy release by a massive star. In a hydrodynamic model coupled with a stellar evolution code for a 60 $`\mathrm{M}_{}`$ star, Garcia-Segura et al. (1996) even find a total energy release of $`3.3\times 10^{44}`$ J (70% – 80% thereof ejected before a “luminous blue variable” \[LBV\] phase) with a 45 pc radius O-star bubble. This number scales only weakly with ambient density ($`n^{1/5}`$, $`n=20\text{cm}^{-3}`$ assumed in the model), but is considered an upper limit by the authors because effects like heat conduction or cloud evaporation are ignored here. Within the uncertainties of the model, our interpretation of the IRAS Vela shell therefore seems plausible. In the further discussion we will hence adopt our 1.8 MeV mass limit, which was derived from the assumption that all <sup>26</sup>Al from $`\gamma ^2`$ Vel is contained within a region $`1^{}`$ in diameter around the system. 
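Two of the numbers in this section can be checked from first principles: the flux-to-mass coefficient behind Eq. (5), and the radius and kinetic energy of the IRAS shell just quoted. A minimal sketch (the constants and names are mine; one 1.809 MeV photon per <sup>26</sup>Al decay is assumed, the branching ratio being close to 100%):

```python
import math

# constants (SI)
M_SUN = 1.989e30          # kg
M_U = 1.6605e-27          # kg, atomic mass unit
PC_M = 3.086e16           # m
YR_S = 3.156e7            # s
TAU_26AL = 1.07e6 * YR_S  # 26Al mean lifetime from the introduction

def flux_1809(m26_msun, d_pc):
    """1.809 MeV photon flux (cm^-2 s^-1) from m26_msun solar masses
    of live 26Al at distance d_pc, one photon per decay assumed."""
    n_atoms = m26_msun * M_SUN / (26.0 * M_U)
    rate = n_atoms / TAU_26AL                  # decays per second
    d_cm = d_pc * PC_M * 100.0
    return rate / (4.0 * math.pi * d_cm**2)

# reproduces the coefficient of the f_1.8MeV relation to within ~5%
f_ref = flux_1809(1.0e-4, 258.0)               # ~1.7e-5, vs 1.8e-5 quoted

# IRAS shell: 8 deg angular radius at the inferred 230 pc centre distance
r_pc = 230.0 * math.tan(math.radians(8.0))     # ~32 pc, as quoted

# kinetic energy of a 2e5 Msun shell expanding at ~10 km/s
e_kin = 0.5 * (2.0e5 * M_SUN) * (10.0e3)**2    # ~2e43 J (~2e50 erg)
```

Inverting `flux_1809` for the adopted model (c) limit, $`f<\mathrm{1.1\hspace{0.33em}10}^{-5}`$ $`\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}`$, gives a mass limit of roughly 6 $`\times `$ 10<sup>-5</sup> $`\mathrm{M}_{}`$ at 258 pc, consistent with Eq. (5).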
### 3.2 Current mass estimate for $`\gamma ^2`$ Vel Model predictions of <sup>26</sup>Al yields for massive stars are a strong function of initial stellar mass, e.g., $`M_{26}\propto M_\mathrm{i}^{2.8}`$ Meynet et al. (1997). A direct determination of the WR star initial mass (and thus its predicted <sup>26</sup>Al yield) from comparison of luminosity and effective temperature with stellar evolution tracks, as is typically done for other types of stars, is not feasible due to the lack of sufficiently accurate models of WR wind atmospheres (and the additional complications from the binary nature of this stellar system). While binarity makes modeling more complicated due to additional degrees of freedom in parameter space, it allows measurement of current masses, which can be matched with theoretical predictions together with the generic spectral type of the WR star. Spectroscopic determination of $`M_{1,2}\mathrm{sin}^3i`$ (where $`i`$ is the inclination) based on Doppler-shifted absorption (O star) and emission lines (WR star) led to contradictory results Pike et al. (1983); Moffat et al. (1986). A recent redetermination of orbital parameters by Schmutz et al. (1997) yields spectroscopic masses of: $$M_{\mathrm{WR}}\mathrm{sin}^3i=6.8\pm 0.6\mathrm{M}_{}M_\mathrm{O}\mathrm{sin}^3i=21.6\pm 1.1\mathrm{M}_{}$$ They reject an earlier inclination measurement from polarisation data of $`i=70^{}\pm 10^{}`$ by St.-Louis et al. (1988) and rather state wider inclination limits of $`57^{}<i<86.3^{}`$ from other evidence, corresponding to a factor $`1/\mathrm{sin}^3i=1.0`$ – 1.7 or a range from 6 to 12 $`\mathrm{M}_{}`$ for the current mass of the WR star. Relying on a mass-luminosity relation for the O star from *single* star evolution models, Schaerer et al. 
(1997) derive $`M_\mathrm{O}=29\pm 4`$ $`\mathrm{M}_{}`$, which, in turn, yields a consistent, but model-dependent, inclination estimate of $`i=65^{}\pm 8^{}`$ or a mass estimate for the WR star of $`M_{\mathrm{WR}}=9_{-1.2}^{+2.5}`$ $`\mathrm{M}_{}`$ Schmutz et al. (1997). A different analysis of the same spectral data yields a slightly higher luminosity for the O star, leading to a slightly higher O star mass estimate of $`30\pm 2`$ $`\mathrm{M}_{}`$ De Marco & Schmutz (1999), albeit with the same mass-luminosity relation from single-star models. Another observational hint at the total mass of the binary system has been derived from interferometric measurements of the major half-axis of the binary system $`a^{\prime \prime }=(4.3\pm 0.5)`$ mas by Hanbury Brown et al. (1970). Kepler’s law leads to a consistent, but barely constraining, mass estimate: $$M_{\mathrm{WR}}+M_\mathrm{O}=\frac{(2\pi )^2}{G}\frac{(a^{\prime \prime }d)^3}{T^2}=(30\pm 16)\mathrm{M}_{}$$ (6) The large relative uncertainty in total mass of $`54\%`$ is due in about equal amounts to the uncertainties in $`a^{\prime \prime }`$ and $`d`$. Additional systematic errors, however, may affect the determination of $`a^{\prime \prime }`$ by a fit in which other quantities like orbital parameters, inclination, and brightness ratio of the two stars had to be taken as fixed parameters due to limited data quality. Those values were taken from other measurements available at the time. Most notably, an eccentricity of 0.17 had been assumed, which is distinctly lower than a recent value of $`0.326\pm 0.01`$ Schmutz et al. (1997). Future interferometric measurements could improve this situation significantly. ### 3.3 Single star models Given the remaining uncertainties in the current mass determinations, it is not surprising that *initial* mass estimates and predicted <sup>26</sup>Al yields vary significantly. 
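The steep yield scaling with initial mass quoted in Sect. 3.2 (roughly $`M_{26}\propto M_\mathrm{i}^{2.8}`$) makes the predicted yields very sensitive to the adopted initial mass; a one-line illustration (the power law is only approximate, so tabulated model yields deviate somewhat from it):

```python
# relative 26Al yield for two candidate initial masses of the WR star,
# using the approximate M26 ~ Mi^2.8 scaling quoted in Sect. 3.2
ratio = (60.0 / 40.0) ** 2.8
print(f"yield(60 Msun) / yield(40 Msun) ~ {ratio:.1f}")   # ~3.1
```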
To start with the simpler case, we first discuss our results in the context of single star models, assuming that the stellar structure of the primary (defined in terms of which star evolved first, i.e. now the WR star) has not been significantly altered due to the presence of its binary companion and that all of the mass expelled from the primary has reached the ISM without being captured by the secondary. This corresponds to fully non-conservative mass transfer, which is usually parametrized by the fraction $`\beta `$ of mass that is accreted by the secondary out of the mass lost by the primary (i.e. fully non-conservative mass transfer means $`\beta =0`$). For the Geneva single star models, a minimum initial mass of about 40 $`\mathrm{M}_{}`$ is required at solar metallicity to yield a WC-type Wolf-Rayet star Meynet et al. (1994) as observed in $`\gamma ^2`$ Vel. Therefore, these models would predict a minimum <sup>26</sup>Al yield of $`5.5\times 10^{-5}`$ $`\mathrm{M}_{}`$ for $`\gamma ^2`$ Vel with an initial mass of the WR component of $`M_\mathrm{i}=40`$ $`\mathrm{M}_{}`$, and even $`1.2\times 10^{-4}`$ $`\mathrm{M}_{}`$ for $`M_\mathrm{i}=60`$ $`\mathrm{M}_{}`$, the value used by Meynet et al. (1997) for the description of $`\gamma ^2`$ Vel. These yields are consistent with the yields predicted by the single-star models of Langer et al. (1995). Schaerer et al. (1997) estimate an initial mass of $`M_\mathrm{i}=57\pm 15`$ $`\mathrm{M}_{}`$ for the WR star, using the Geneva models. Has all <sup>26</sup>Al been expelled yet, or is some fraction still buried invisibly inside the star? The spectral type of the Wolf-Rayet star is WC, i.e. the stellar wind is carbon-rich, which means that products of core helium burning have reached the stellar surface. 
Since remaining <sup>26</sup>Al in the stellar core is efficiently destroyed in helium burning due to neutron captures, standard stellar evolution models predict that the wind ejection phase of hydrostatically produced <sup>26</sup>Al has ended already. How much <sup>26</sup>Al has already decayed? Typical WR lifetimes from observational constraints and theoretical models are on the order of $`5\times 10^5`$ yr, which is shorter than the <sup>26</sup>Al lifetime by a factor of 2. This could account for a reduction of observable <sup>26</sup>Al in the ISM of at most 40% if the WR star were close to the end of its evolution. Some models predict WR lifetimes in excess of a million years for the most massive stars Meynet et al. (1997), but these models also predict sufficiently large <sup>26</sup>Al yields such that the prolonged time available for decay would not reduce the observable amount below the measured limit. Also note that WR lifetimes strongly depend on the mass loss description applied in the model. A straightforward explanation for the missing 1.8 MeV flux would be sub-solar metallicity, since the <sup>26</sup>Al yields scale approximately like $`Z^2`$. Yet, sub-solar abundances in the ISM observed in the Vela direction can readily be understood by dust formation (and therefore depletion of the gaseous phase) rather than by intrinsically low metallicity Fitzpatrick (1996). Analysis of spectroscopic data for the O star in the binary system is indeed consistent with solar metallicity De Marco & Schmutz (1999). With these considerations, Fig. 3 shows that <sup>26</sup>Al yields predicted from single star models are barely compatible with the COMPTEL flux limit. This would suggest that model parameters such as the mass loss description, which has the greatest impact on the stellar structure for initial masses $`\gtrsim 40`$ $`\mathrm{M}_{}`$, or internal mixing parameters like core overshooting or semi-convection may have to be modified. 
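The "at most 40%" decay correction quoted above is just exponential decay over a WR lifetime, using the mean lifetime from the introduction; a one-line check:

```python
import math

TAU_YR = 1.07e6    # 26Al mean lifetime (yr), from the introduction
T_WR_YR = 5.0e5    # typical WR lifetime (yr), from the text

# fraction of 26Al ejected at the start of the WR phase that has
# already decayed by its end
decayed = 1.0 - math.exp(-T_WR_YR / TAU_YR)
print(f"max decayed fraction ~ {decayed:.0%}")   # ~37%, i.e. "at most 40%"
```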
### 3.4 Binary models Differences in the stellar evolution of the primary star in a relatively wide binary system such as $`\gamma ^2`$ Vel ($`a\sim 1`$ AU) stem from mass loss, while effects of tidal forces on the stellar structure are negligible during the main sequence phase when <sup>26</sup>Al is produced in the core. In addition to the mass loss mechanisms of single stars, Roche Lobe Overflow (RLOF) can change the stellar structure of both the primary and the secondary star. For the discussion of <sup>26</sup>Al yields, we can concentrate on the evolution of the primary, since the secondary O star merely reveals unprocessed material from the stellar envelope at its surface, but might bury some of the processed material transferred from the primary. Langer (1995) proposes that binary stars with $`M_\mathrm{i}\gtrsim 40`$ $`\mathrm{M}_{}`$ undergoing RLOF are essentially indistinguishable from single stars in the same mass range undergoing a phase of very intense mass loss as LBVs. Differences in the <sup>26</sup>Al yield in the circumstellar medium would merely result from the fraction of expelled mass captured by the secondary companion, i.e. some fraction $`\beta `$ could be buried in the surface layers of the secondary. (Stellar winds from primary and secondary will always transport some fraction of <sup>26</sup>Al into the ISM.) Vanbeveren (1991) argues that this LBV mass loss may even prevent any occurrence of RLOF, which means that <sup>26</sup>Al yields should remain the same as for single stars. For initial masses below $`\sim 40`$ $`\mathrm{M}_{}`$, RLOF can provide additional mass loss not attainable in the single star scenario, pushing down the lower initial mass limit for the formation of WR stars to about 20 $`\mathrm{M}_{}`$ Vanbeveren et al. (1998). 
Yet, adopting current masses of 9 $`\mathrm{M}_{}`$ for the WR star and 29 $`\mathrm{M}_{}`$ for the O star and assuming a fraction of mass transferred to the secondary as high as $`\beta =0.5`$ would yield a minimum initial mass for the primary of $`\sim 30`$ $`\mathrm{M}_{}`$, given that the primary star must have been the more massive partner initially to evolve faster. The observed current mass loss rate of $`(2.8_{-0.9}^{+1.2})\times 10^{-5}\mathrm{M}_{}\text{/yr}`$ Schaerer et al. (1997), quite typical for this type of star, supports a minimum mass lost into the ISM of at least 10 $`\mathrm{M}_{}`$ over the last few $`10^5`$ years. Vanbeveren et al. (1998) quote a minimum initial mass of 38 $`\mathrm{M}_{}`$ for the WC star in $`\gamma ^2`$ Vel based on detailed models of binary evolution. Overall, the minimum initial mass for the WR star in binary models is found close to the values obtained by single star models, namely around 40 $`\mathrm{M}_{}`$. While the real initial mass may well have been larger, it is apparent from Fig. 3 that discrepancies between predicted <sup>26</sup>Al yields and the measured 1.8 MeV flux limit become quite severe for larger masses. Even for $`M_\mathrm{i}=40`$ $`\mathrm{M}_{}`$, models are in clearly better agreement with the flux limit if a significant fraction of the ejected mass of the primary was accreted onto the secondary. The binary models of Braun & Langer (1995) lead to a typical reduction in <sup>26</sup>Al yield of about 40% for models with $`\beta =0.5`$, as displayed in the figure. Information regarding a quantitative estimate of $`\beta `$ is still sparse, but some statements can be made. If we consider $`M_\mathrm{i}\sim 40`$ $`\mathrm{M}_{}`$ the lowest possible initial mass for the WR star, the fact that binaries with initial mass ratios (secondary / primary) $`q<0.2`$ are expected to merge Vanbeveren et al. 
(1998) leads to a minimum initial mass for the O star of $`\sim 8`$ $`\mathrm{M}_{}`$, hence an initial total mass of the system of $`\gtrsim 48`$ $`\mathrm{M}_{}`$. Assuming a current mass of $`(29+9=38)`$ $`\mathrm{M}_{}`$ as for the initial mass estimate, a minimum of 10 $`\mathrm{M}_{}`$ must have been expelled to the ISM, while about 20 $`\mathrm{M}_{}`$ would have been transferred to the secondary, corresponding to $`\beta \approx 2/3`$. Any larger value of $`q`$, i.e. a larger initial mass of the O star, would yield a lower $`\beta `$. For larger values of the initial mass of the WR star, the maximum allowed $`\beta `$ for $`q>0.2`$ rapidly decreases, as illustrated in Fig. 4. Observation of an “ejecta-type” ring nebula around $`\gamma ^2`$ Vel with 57' angular diameter Marston et al. (1994), corresponding to a radius of $`\sim 2.1`$ pc, demonstrates that significant mass has been expelled from the system, probably during a preceding LBV / WN phase, even though the mass of the shell has not yet been determined. If indeed the IRAS Vela shell were the remnant of the main sequence bubble of the primary star, the mass transfer to the secondary would be negligible compared to the ejected mass, and the WR star could be reasonably modeled by an $`M_\mathrm{i}\sim 60`$ $`\mathrm{M}_{}`$ star Garcia-Segura et al. (1996). This scenario is supported by the recent finding that the helium abundance in the companion O star is not enriched De Marco & Schmutz (1999), as one might expect if significant amounts of processed material had been transferred to the secondary star. This, however, would imply a major conflict of the <sup>26</sup>Al yields predicted by corresponding stellar models with our measurement. There are additional qualitative arguments both for *very little* mass transfer to the secondary and for *some* mass transfer: The relatively high eccentricity of the orbit favours little mass transfer, which usually tends to reduce the eccentricity. 
On the other hand, the high rotation velocity of the secondary of $`v_{\mathrm{rot}}\mathrm{sin}i=220`$ km s<sup>-1</sup> Baade et al. (1990) may be the result of spin-up by accretion. More detailed modeling is required to resolve these issues. ## 4 Conclusion Given the small distance of $`\gamma ^2`$ Vel from HIPPARCOS measurements and the predicted <sup>26</sup>Al yields of current stellar models, the non-detection of 1.8 MeV emission by COMPTEL comes as a surprise. Combined with other observations regarding current masses, metallicity and mass transfer, only a very small volume of the model parameter space remains consistent with the COMPTEL $`2\sigma `$ upper limit of $`M_{26}^{\mathrm{WR11}}<\left(6.3_{-1.4}^{+2.1}\right)\times 10^{-5}\mathrm{M}_{}`$. Single star models are in conflict with this value. Binary models alleviate the discrepancy if significant mass transfer to the secondary occurred, burying some fraction of the <sup>26</sup>Al in the surface layers of the O star, and if the initial mass of the WR star was close to its minimum value of $`40`$ $`\mathrm{M}_{}`$. It may be more likely, however, that adjustment of some of the model parameters, e.g. the parametrization of mass loss, is required. Unfortunately, $`\gamma ^2`$ Vel is the only known WR star for which there was hope of obtaining a positive detection with current instruments. The WR stars next closest to the Sun (WR142, WR145, and WR147 in the Cygnus region) are at least a factor of two more distant, and their expected fluxes are therefore out of reach for COMPTEL. For the forthcoming INTEGRAL mission, these stars may just become detectable, and the 1.8 MeV flux from $`\gamma ^2`$ Vel will be tested down to significantly lower values.
Yet, probing the 1.8 MeV flux from individual WR stars within the radius of completeness of the current WR catalogue ($`2.5`$–3 kpc) would require a next-generation instrument with a sensitivity of $`10^{-7}`$ $`\gamma \mathrm{cm}^{-2}\mathrm{s}^{-1}`$ and an angular resolution $`<0.2^{\circ }`$. ###### Acknowledgements. The COMPTEL project is supported by the German government through DARA grant 50 QV 90968, by NASA under contract NAS5-26645, and by the Netherlands Organization for Scientific Research NWO. The authors are grateful for discussions with Norbert Langer, Orsola De Marco, Georges Meynet, Tony Marston, Nikos Prantzos, Daniel Schaerer, and Werner Schmutz. We thank the referee Dieter Hartmann for helpful comments.
no-problem/9910/cond-mat9910064.html
ar5iv
text
# Observation of two time scales in the ferromagnetic manganite La1-xCaxMnO3, 𝒙≈0.3 After several years of study it is clear that the richness in the temperature-composition phase diagram for La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub> is produced by a strong interplay between the spin, charge and lattice degrees of freedom in these materials . Of particular interest has been the ferromagnetic (FM) insulator-to-metal transition and its accompanying large magnetoresistance. Millis and collaborators first suggested that the theoretical description of this transition must include the electron-phonon Jahn-Teller (JT) coupling, in addition to the double-exchange (DE) spin-spin interaction, thus invoking polaronic degrees of freedom. Experimental signatures of small-polaron hopping have emerged from transport measurements above the FM critical temperature $`T_C`$ , leading to a theoretical description of magnetoelastic polarons in terms of a small cluster of aligned Mn spins surrounding a local JT lattice distortion . Despite these important insights, an experimental and theoretical description of the FM transition encompassing all three important degrees of freedom (lattice, charge, and spin) remains incomplete. The persistence of local lattice distortions below $`T_C`$ is clear from neutron pair-distribution function (PDF) and x-ray absorption fine structure (XAFS) measurements . A two-fluid model for the charge degrees of freedom is consistent with the evolution of localized charges into itinerant carriers near $`T_C`$ . A comparable description of the behavior of the spin system near $`T_C`$ is lacking, however. In this Letter we present new muon spin relaxation ($`\mu `$SR) and neutron spin echo (NSE) measurements in FM La<sub>1-x</sub>Ca<sub>x</sub>MnO<sub>3</sub>, $`x0.3`$. 
Neutron scattering has been used to study the low-temperature magnetic properties in FM perovskites , and has revealed a broad peak centered around zero energy transfer which coexists with Mn spin waves near $`T_C`$ . An unambiguous explanation for this peak is still lacking, however. The NSE technique allows a direct measure of the spin-spin correlation function $`S(q,t)`$ at much higher resolution and for longer correlation times than are achievable with conventional neutron scattering. The muon is a local probe (bound within about 1 Å of an oxygen atom in oxide materials ) and is, therefore, sensitive to spatial inhomogeneity in spin fluctuation rates. Our $`\mu `$SR and NSE results are consistent with a spatially distributed FM transition involving at least two interacting regions with very different spin dynamics. We are able to follow the evolution of these different regions with $`\mu `$SR over the temperature range $`0.7T_C\lesssim T\lesssim T_C`$. The $`\mu `$SR data were taken at the Paul Scherrer Institute in Villigen, Switzerland, and at TRIUMF in Vancouver, Canada, between 10 and 300 K. The NSE data were taken at 275 K with the IN11 spectrometer at the Institut Laue-Langevin, using incident neutron wavelengths of 4 Å and 6 Å. All measurements were performed in zero applied field ($`\lesssim 1`$ Oe) on polycrystalline samples. (NSE measurements below $`T_C`$ were impossible in zero field because the neutrons were completely depolarized by randomly oriented FM domains.) The $`\mu `$SR sample \[$`x=0.33`$, $`T_C`$ = $`262(3)`$ K from magnetization measurements\] was from the same batch as that used in Ref. . For the NSE sample $`x=0.30`$ and $`T_C`$ = $`250(3)`$ K. In general, polycrystalline samples of (La,Ca)MnO<sub>3</sub> are more compositionally homogeneous than comparable volumes of single crystals, because they are less affected by Ca evaporation and fluctuating growth conditions.
The $`\mu `$SR data could be fit to a relaxation function $`G_z(t)=G_{\mathrm{osc}}(t)+G_{\mathrm{rlx}}(t)`$, corresponding to oscillating and relaxing terms, respectively. $`G_{\mathrm{osc}}(t)`$ is given by $`A_{\mathrm{osc}}\mathrm{exp}(-t/T_2)\mathrm{cos}(2\pi \nu _\mu t+\varphi _\mu )`$, where $`\nu _\mu `$ is the muon precession frequency, $`1/T_2`$ is the inhomogeneous damping rate, and $`2\pi \nu _\mu =\gamma _\mu B`$, with $`\gamma _\mu `$ the muon gyromagnetic ratio and $`B`$ the local magnetic field. $`G_{\mathrm{rlx}}(t)`$ was first fit to the stretched-exponential form $`A_{\mathrm{rlx}}\mathrm{exp}[-(t/T_1)^K]`$, where $`1/T_1`$ is a characteristic spin-lattice relaxation rate and a value of $`K<1`$ implies a distribution of rates. For our experiment $`A_{\mathrm{osc}}+A_{\mathrm{rlx}}\approx 0.2`$ independent of temperature. The current $`\mu `$SR data are of greater statistical precision than previously reported , and are taken at smaller temperature intervals near $`T_C`$, allowing a more refined interpretation of the relaxation phenomena. Fit parameters are shown in Figure 1. The observed zero-field frequencies $`\nu _\mu (T)`$ tend to zero at a temperature of $`266\pm 2`$ K, in agreement with $`T_C`$ from the magnetization data mentioned above. The amplitudes A<sub>osc</sub> (not shown) indicate that the entire sample volume experiences growth of the sublattice magnetization. The higher statistical accuracy allows observation of the rapid decline of $`K`$ to about $`0.2`$ just below $`T_C`$, whereas previously it was necessary to freeze $`K`$ at a somewhat arbitrary value of 0.5 below about 270 K . Because $`K`$ and $`1/T_1`$ are highly correlated, the $`1/T_1`$ values reported here are different from those reported earlier.
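As a concrete sketch, the fit model above can be written out directly; the parameter values below are illustrative placeholders rather than the fitted ones (time in μs, frequency in MHz):

```python
import math

def G_z(t, A_osc, T2, nu, phi, A_rlx, T1, K):
    """G_z(t) = G_osc(t) + G_rlx(t): precessing term plus stretched-exponential term."""
    G_osc = A_osc * math.exp(-t / T2) * math.cos(2 * math.pi * nu * t + phi)
    G_rlx = A_rlx * math.exp(-(t / T1) ** K)
    return G_osc + G_rlx

# Illustrative parameters only (not the fitted values); the total asymmetry
# A_osc + A_rlx = 0.2 matches the value quoted in the text.
params = dict(A_osc=0.12, T2=0.5, nu=20.0, phi=0.0, A_rlx=0.08, T1=2.0, K=0.5)

print(G_z(0.0, **params))   # A_osc + A_rlx = 0.2 at t = 0
```

At $`t=0`$ both exponentials equal 1, so the model reproduces the full asymmetry; the oscillating term damps on the scale $`T_2`$ and the relaxing term on the scale $`T_1`$.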
For rapid fluctuations the local muon relaxation rate $`\lambda `$ is given by $`\lambda \propto \gamma _\mu ^2\sum _q|\delta B(q)|^2\tau (q)`$, where $`|\delta B(q)|`$ is the amplitude of the fluctuating local field and $`\tau (q)`$ is the Mn-ion correlation time. A distribution of $`\lambda `$ implies that $`\delta B`$ and/or $`\tau `$ are distributed but does not determine the distributions separately, whereas $`S(q,t)`$ obtained from NSE measurements gives a direct measure of the distribution of $`\tau (q)`$ only. Figure 2 displays $`S(q,t)`$ at $`T=275`$ K in La<sub>0.70</sub>Ca<sub>0.30</sub>MnO<sub>3</sub>. The data can be fit to $`S(q,t)=\mathrm{exp}(-(t/\tau (q))^\beta )`$, $`S(q,0)=1`$; the parameters from these fits are given in Table I. These results, together with the $`\mu `$SR data, directly confirm that the Mn-ion correlation times are spatially distributed. Note that $`\beta \approx 1`$ for $`q>0.10`$ Å<sup>-1</sup> and $`\beta <1`$ for $`q\le 0.10`$ Å<sup>-1</sup>. The use of a stretched exponential to describe the $`\mu `$SR and NSE relaxation functions is not unique and may not be physically appropriate, as we now discuss. A stretched exponential relaxation function implies a continuous distribution $`P(\lambda )`$ of rates $`\lambda `$ defined by $`\int d\lambda \,P(\lambda )\mathrm{exp}(-\lambda t)=\mathrm{exp}[-(\mathrm{\Lambda }t)^\alpha ]`$, which is broad for small $`\alpha `$. Such a distribution is applicable to dilute spin glasses, where a distribution of energy barriers yields a broad distribution of relaxation rates, and a given spin may relax via many possible channels . Thus NSE measurements in the dilute spin glass CuMn show no appreciable $`q`$ dependence of $`S(q,t)`$, an indication of the many relaxation channels available to a given spin. This is in marked contrast to La<sub>0.70</sub>Ca<sub>0.30</sub>MnO<sub>3</sub> (Fig. 2), where the $`q`$-dependence of $`S(q,t)`$ is likely due to a spatial distribution of relaxation rates, as discussed below.
We have therefore chosen to adopt a bimodal distribution $`P(\lambda )\propto A_f\delta (\lambda -\lambda _f)+A_s\delta (\lambda -\lambda _s)`$, corresponding to a two-exponential fit for $`S(q,t)`$ and $`G_{\mathrm{rlx}}(t)`$. Further justification for this is given below. The results from fitting $`S(q,t)=A_F(q)\mathrm{exp}(-t/\tau _F(q))+A_S(q)\mathrm{exp}(-t/\tau _S(q))`$, where $`A_F+A_S=1`$, are given in Table I. The two components are labeled ‘fast’ (small $`\tau `$) and ‘slow’ (large $`\tau `$). The slow component sets in only for $`q\le 0.10`$ Å<sup>-1</sup>, and the correlation times $`\tau _F`$ and $`\tau _S`$ differ by about an order of magnitude at the smallest $`q`$ values. The $`\mu `$SR data are also well described using a two-exponential form $`G_{\mathrm{rlx}}(t)=A_f\mathrm{exp}(-\lambda _ft)+A_s\mathrm{exp}(-\lambda _st)`$. Here $`A_f+A_s=A_{\mathrm{rlx}}`$. We have labeled the rates $`\lambda _f`$ and $`\lambda _s`$, in analogy to the NSE analysis. Again, the two exponential rates differ by at least a factor of ten, allowing good convergence of the fits. We note that non-uniqueness of the fitting function for both the NSE and $`\mu `$SR data is a general property of sub-exponentially decaying curves . The justification for which function to use must therefore ultimately reside in a physical interpretation of the data. Figure 3 shows the temperature dependence of $`A_f`$, $`A_s`$, $`\lambda _f`$, and $`\lambda _s`$. For temperatures below 185 K the overall relaxation rate is too small to resolve two exponentials clearly, and above 273 K the exponent $`K`$ approaches 1. Note that there is consistency between the NSE and $`\mu `$SR measurements, taking into account that $`\mu `$SR rates involve a sum over all $`q`$, which in ferromagnets is most heavily weighted for $`q\rightarrow 0`$. The amplitudes of the fast and slow components near $`T_C`$ are roughly equal.
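The near-degeneracy of the two fitting functions is easy to demonstrate numerically: a bimodal rate distribution with rates differing by a factor of ten and comparable amplitudes produces a decay whose effective stretching exponent, estimated from the slope of ln(−ln S) versus ln t, is well below 1. The sketch below uses illustrative rates, not values fitted to the data:

```python
import math

def two_exp(t, a_f=0.5, lam_f=1.0, lam_s=0.1):
    """Bimodal P(lambda): two exponentials with rates differing by a factor of ten."""
    return a_f * math.exp(-lam_f * t) + (1 - a_f) * math.exp(-lam_s * t)

# Effective stretching exponent K: least-squares slope of ln(-ln S) versus ln t
# over a log-spaced time window (illustrative parameters, not fitted values).
ts = [0.1 * 1.26 ** i for i in range(28)]
xs = [math.log(t) for t in ts]
ys = [math.log(-math.log(two_exp(t))) for t in ts]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
K = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(round(K, 2))  # clearly below 1: the bimodal decay mimics a stretched exponential
```

This is why the choice between the two forms cannot rest on fit quality alone and must instead be physically motivated, as argued in the text.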
Although the $`\mu `$SR data do not give precise values for $`\tau _f`$ and $`\tau _s`$, because the $`\delta B(q)`$ are not known, the fact that $`\lambda _s/\lambda _f`$ and $`\tau _S/\tau _F`$ (at small $`q`$) are both $`\sim 10`$ is quite consistent. The $`\mu `$SR data in Fig. 3 exhibit the following important trends. The peak value of $`\lambda _f`$ coincides with $`\nu _\mu (T)\rightarrow 0`$ at $`T\approx T_C`$, unlike the stretched-exponential fits in Fig. 1, where the peak values of $`1/T_1`$ occur at distinctly lower temperatures ($`\approx 255`$ K) than where $`\nu _\mu (T)\rightarrow 0`$. Ordinarily the spin-lattice relaxation rate is expected to peak at $`T_C`$, where the susceptibility and the spin-spin correlation times reach their maxima. This does not occur for the stretched-exponential fits because the rapidly changing values of the exponent $`K`$ near $`T_C`$ result in a changing functional form of $`G_{\mathrm{rlx}}(T)`$; the derived values of $`1/T_1`$ are then not consistent from one temperature to the next. Thus the two-exponential model function seems more appropriate. By contrast, the temperature dependence of $`\lambda _s`$ shows no significant maximum near $`T_C`$, but rises slowly below the critical temperature. Near $`T_C`$, $`A_f`$ is about 60% of the total relaxing amplitude and gradually increases as the temperature is reduced, reaching about 75% at $`T/T_C\approx 0.7`$. The dotted lines (Fig. 3, top) are guides to the eye, and indicate that the trend of $`A_f`$ is to approach 100% of the fluctuating amplitude at $`T=0`$ K. The $`\mu `$SR and NSE data suggest the following qualitative interpretation: there are two spatially separated regions in the sample, characterized in our measurements by very different Mn spin dynamics and temperature-dependent volumes. The muon relaxation samples these regions locally through its short-ranged dipolar coupling (matrix element $`\propto r^{-6}`$), predominantly to its two nearest-neighbor Mn spins.
The $`\mu `$SR rates involve sums over all $`q`$, but are dominated by FM fluctuations. NSE is sensitive to the spatial configuration of the spin fluctuations through the $`q`$ dependence of the correlation times. The single-exponential relaxation observed for larger $`q`$ reflects nearly single-ion dynamics which are not associated with the FM transition, whereas the two-component relaxation evident for smaller $`q`$ shows that the FM dynamics are inhomogeneous. The length scale associated with this crossover is a few lattice spacings, which is roughly consistent with the ‘droplets’ found in small-angle neutron scattering in lightly-doped manganites . We have also studied $`\mu `$SR relaxation in undoped LaMnO<sub>3</sub> and CaMnO<sub>3</sub>, both of which exhibit temperature-independent single-exponential relaxation above 140 K with rates 0.05–$`0.10\mu \mathrm{s}^{-1}`$. This is very different from the behavior of either component in La<sub>0.67</sub>Ca<sub>0.33</sub>MnO<sub>3</sub>, which is strong evidence against large-scale separation of Mn<sup>3+</sup> and Mn<sup>4+</sup> ions. We postulate that the two observed relaxation components constitute the spin signature of magnetoelastic polarons as the system becomes FM. The more rapidly relaxing component, which shows a peak in the relaxation rate at $`T_C`$ (as expected for a FM phase transition), is attributed to spins inside the overlapping polaron regions. The volume fraction of these regions grows below $`T_C`$ as the polarons continue to grow. The second component is characterized by much slower Mn-ion relaxation rates, shows no sign of critical slowing down, and occupies a diminishing volume fraction as the temperature is lowered. We associate this component with the spins at the boundaries of, or between, the polarons.
In this picture the interior regions are characterized below $`T_C`$ by relatively low (metallic) resistivity, an average Mn valence between $`3^+`$ and $`4^+`$ (corresponding to the rapid motion of charge carriers via the DE interaction), and a smaller local JT lattice distortion than is found in LaMnO<sub>3</sub>. The rapid band-like carrier motion produces more rapid Mn spin fluctuations than in the less-metallic exterior regions, where spin and charge motion are frustrated by more extreme local lattice distortions. These larger distortions limit the polaron size at a given temperature. Note that the two regions must be in close proximity to one another to explain the $`q`$ dependence of $`S(q,t)`$, and thus their spins should interact. This interaction presumably causes weak FM alignment in the less-metallic regions, and may also explain the broad peak in $`\lambda _f`$ around $`T_C`$, which is uncharacteristic of conventional critical dynamics. This breadth may also be caused by a distribution of polaron sizes (and local $`T_C`$ values). We compare these data to other measurements. Our picture is conceptually similar to the two-fluid model of resistivity , with one exception. The $`\mu `$SR data indicate that slowly fluctuating spins in relatively less metallic regions persist far below $`T_C`$, whereas in the two-fluid model the growth of the conducting charge fraction reaches essentially 100% just below $`T_C`$. This can be explained by realizing that when a conducting path is reached (at $`T_C`$) the more resistive regions are short-circuited, even though they may still occupy a considerable volume fraction. Persistence of inhomogeneity below $`T_C`$ is reinforced by local probes of the lattice structure , which are consistent with the gradual loss of structural inhomogeneity below $`T_C`$, similar to that reflected in the spin-lattice relaxation rates reported here. 
Finally, it is likely that the central peak observed in inelastic neutron scattering corresponds to our slowly relaxing component. We have presented evidence for two time scales in the spin system of (La,Ca)MnO<sub>3</sub>, and associated these with a fine-scale spatially-distributed FM transition due to the formation and accretion of magnetoelastic polarons. Realistic theories of these slow inhomogeneous spin dynamics have yet to be developed. The spatial distribution of spin fluctuation rates is probably related to the disordered spatial distribution of La and Ca ions and the corresponding local fluctuations of the local lattice distortions, which influence the spin subsystem via magnetoelastic coupling. A theoretical model which includes disorder and couples JT pseudo-spin variables with the Mn-spin degrees of freedom may shed light on the mystery of the slow, inhomogeneous spin dynamics reported here. Work at Los Alamos was performed under the auspices of the U.S. DOE. Work elsewhere was supported in part by the U.S. NSF, Grant no. DMR-9731361 (U.C. Riverside), and by the Dutch Foundations FOM and NWO (Leiden).
no-problem/9910/hep-lat9910020.html
ar5iv
text
# An Investigation of the Soft Pion Relation in Quenched Lattice QCD ## 1 Introduction Heavy-quark and chiral symmetries combine to predict the following relation for a heavy-light meson H whose heavy-quark mass is larger than some arbitrary hadronic scale, and for a soft pion: $$f_0(q_{\mathrm{max}}^2)_{H\pi }=f_H/f_\pi $$ (1) where $`f_0(q^2)`$ is the form factor in the $`0^+`$ channel for the vector current hadronic matrix element $`\langle H(p)|V^\mu |\pi (p-q)\rangle `$. The two quantities on the right-hand side are the corresponding PS meson decay constants. Equation 1 is the soft pion relation for heavy-light PS mesons . Results are given for $`f_0(q_{\mathrm{max}}^2)_{H\pi }`$ and $`f_H/f_\pi `$ for heavy-light PS mesons and are extrapolated to the B mass. From these, the B meson decay constant is determined to be $$f_B=190(5)_{\mathrm{stat}.}(10)_{\mathrm{sys}.}\mathrm{MeV},$$ (2) where the errors are statistical and systematic, respectively. ## 2 Extracting the form factors The form factors for the hadronic matrix element of semileptonic heavy pseudoscalar decay to a pion are extracted from mass extrapolations of the appropriate matrix elements of quenched lattice QCD with the SW action at $`a^{-1}`$=2.6(1) GeV and $`\beta `$=6.2 on a $`24^3\times 48`$ lattice, using 216 gauge configurations generated with a combination of the Cabibbo-Marinari algorithm and an over-relaxed algorithm . The matrix elements are calculated using the method of extended quark propagators with all combinations of the following hopping parameters: $`\kappa _{\mathrm{heavy}}`$ $`\in `$ $`\{0.1200,0.1233,0.1266,0.1299\}`$ $`\kappa _{\mathrm{light}}`$ $`\in `$ $`\{0.1346,0.1351,0.1353\}`$ $`\kappa _{\mathrm{spectator}}`$ $`\in `$ $`\{0.1346,0.1351\}.`$ From the light PS meson masses $`\kappa _{\mathrm{crit}}`$=0.13582(1)(2) is found. The action and operators are improved to O($`a`$) and renormalized to a continuum scheme, using coefficients determined non-perturbatively where possible .
The following values are used: $`Z_V`$ = 0.792, $`Z_A`$ = 0.807, $`b_V`$ = 1.41, $`b_A`$ = 1.11, $`c_V`$ = $`-1.58\times 10^{-2}`$, $`c_A`$ = $`-3.71\times 10^{-2}`$, $`c_{SW}`$ = 1.61, $`b_M`$ = 0.583. Of these, only $`c_V`$, $`b_A`$ and $`b_M`$ are determined from a perturbative scheme . In the continuum, the form factors $`f_+`$ and $`f_0`$ are defined from hadronic matrix elements as follows: $`\langle H(p)|V^\mu |\pi (p-q)\rangle =`$ $`f_+(q^2)\left(2p^\mu -\left[\left(M_H^2-M_\pi ^2\right)/q^2+1\right]q^\mu \right)`$\+ $`f_0(q^2)\left(\left[M_H^2-M_\pi ^2\right]/q^2\right)q^\mu `$ ## 3 Pion physics on the lattice Lattice correlators in this study are directly relevant to the physics of heavy-light mesons with mass 1300–2200 MeV, and of light-light mesons with mass 350–850 MeV. Form factors at the same value of $`q^2`$ for decays to a pion can be estimated by assuming the following dependence of a form factor $`f\in \{f_+,f_0\}`$ on the meson masses: $$f(q^2,M,m)=f(q^2,M_{\mathrm{chiral}},0)+a\times \mathrm{\Delta }M+b\times m^2$$ (3) and fitting the data to determine $`f(0)`$ and $`a`$. Similarly, light-light and heavy-light meson decay constants $`f_L`$ and $`f_H`$ are extrapolated in the meson mass according to the following: $$f_L(m)=f_L(0)+a^{}\times m^2$$ (4) $$f_H(M)=f_H(M_{\mathrm{chiral}})+a^{\prime \prime }\times \mathrm{\Delta }M$$ (5) where $`\mathrm{\Delta }M\equiv M-M_{\mathrm{chiral}}`$. The $`q^2`$ dependence of the form factors can be modelled using the following parameterization . $`f_+(q^2)`$ $`=`$ $`{\displaystyle \frac{c_B(1-\alpha )}{(1-q^2/M_{}^2)(1-\alpha q^2/M_{}^2)}}`$ (6) $`f_0(q^2)`$ $`=`$ $`{\displaystyle \frac{c_B(1-\alpha )}{(1-q^2/\beta M_{}^2)}}`$ (7) where $`M_{}`$ is the heavy-light vector meson mass. $`f_0(q_{\mathrm{max}}^2)`$ is extracted by extrapolating $`f_0`$ upwards in $`q^2`$ using the best fit curve from a fit of Eq. (7) (fig. 1). One finds the soft pion relation to be well satisfied at the simulated values of heavy quark mass.
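A minimal sketch of the parameterization in Eqs. (6)–(7) is given below, with invented parameter values (the fitted values are not reproduced here); it illustrates the built-in constraint that both form factors agree at $`q^2=0`$ and the extrapolation of $`f_0`$ toward $`q_{\mathrm{max}}^2`$:

```python
def f_plus(q2, c_B, alpha, m_v2):
    """f_+(q^2) of Eq. (6); m_v2 is the heavy-light vector meson mass squared."""
    return c_B * (1 - alpha) / ((1 - q2 / m_v2) * (1 - alpha * q2 / m_v2))

def f_zero(q2, c_B, alpha, beta, m_v2):
    """f_0(q^2) of Eq. (7), a single pole at q^2 = beta * m_v2."""
    return c_B * (1 - alpha) / (1 - q2 / (beta * m_v2))

# Illustrative parameter values (not the fitted ones)
c_B, alpha, beta = 0.4, 0.4, 1.2
m_v2 = 28.4              # GeV^2, of the order of a B* mass squared

# The parameterization satisfies f_+(0) = f_0(0) = c_B * (1 - alpha) by construction
assert abs(f_plus(0.0, c_B, alpha, m_v2) - f_zero(0.0, c_B, alpha, beta, m_v2)) < 1e-12

# f_0 rises monotonically toward q^2_max = (M_H - M_pi)^2
M_H, M_pi = 5.28, 0.14   # GeV, illustrative B and pion masses
q2_max = (M_H - M_pi) ** 2
print(f_zero(q2_max, c_B, alpha, beta, m_v2))
```

Because $`\beta >1`$ places the $`f_0`$ pole above $`q_{\mathrm{max}}^2`$, the extrapolation upwards in $`q^2`$ stays finite.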
## 4 The soft pion relation for $`B\pi `$ Heavy quark effective theory predicts the following form for the dependence of $`f_0`$ on the heavy quark mass, at constant recoil variable $`v\cdot (p-q)`$: $$f_0(M,\omega )\mathrm{\Theta }(M)\sqrt{M}=a+b/M+c/M^2+O(\frac{1}{M^3})$$ (8) and for a heavy-light pseudoscalar decay constant: $$f_H(M)\mathrm{\Theta }(M)\sqrt{M}=a^{}+b^{}/M+c^{}/M^2+O(\frac{1}{M^3})$$ (9) where $`\mathrm{\Theta }(M)`$ is a perturbative matching coefficient function whose value is very close to 1. These prescriptions are used as fit ansätze to determine a best-fit quadratic in 1/M to the data in table 1. The curve is shown in fig. 2. Values for $`f_0(q_{\mathrm{max}}^2)`$ and $`f_H`$ extrapolated to the b mass are presented in table 2. It is instructive to repeat the whole procedure, extrapolating form factors for a pion which is not massless but has its physical mass. Results are presented in table 3 and are practically unchanged from the massless case. ## 5 Systematic error Systematic error in this work arises from: 1. discretization errors of O($`a^2`$) 2. the quenched approximation 3. estimation of the correlation matrix for fits 4. interpolation of lattice form factors in $`q^2`$ 5. corrections to heavy quark effective theory 6. choice in setting the scale. The quoted systematic error is an estimate of the variation arising from the last sources, 3–6, achieved by repeating the analysis with different numbers of bootstrap sets, varying the model function used to interpolate the form factor in $`q^2`$, changing the degree of the polynomial in 1/M used as an ansatz for the extrapolation in heavy mass, and setting the scale alternatively against $`M_\rho `$ and the gluonic scale $`r_0`$. Quenching error is not quantified here.
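The heavy-mass extrapolation of Eqs. (8)–(9), whose polynomial degree in 1/M is one of the variations in the systematic-error analysis above, amounts to fitting a polynomial in 1/M to the scaled quantity $`f\sqrt{M}`$ and evaluating it at the B mass. A sketch with invented data points (not the simulation values from the tables):

```python
def lagrange_eval(points, x):
    """Evaluate the polynomial through `points` [(x_i, y_i)] at x (Lagrange form)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Invented data: the scaled quantity f_H * sqrt(M) at three simulated heavy
# masses (GeV units); these are NOT the simulation values from the tables.
masses = [1.8, 2.0, 2.2]
scaled = [0.48, 0.47, 0.465]
pts = [(1.0 / m, y) for m, y in zip(masses, scaled)]

# Quadratic in 1/M through the three points, evaluated at 1/M_B
M_B = 5.28
fB = lagrange_eval(pts, 1.0 / M_B) / M_B ** 0.5
print(fB)  # extrapolated f_B in these made-up units
```

With more simulated masses than fit coefficients, a least-squares fit would replace the exact interpolation, and varying the polynomial degree probes the extrapolation systematic, as described in the text.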
Residual discretization errors may also be significant, particularly in the heavy extrapolation, where the fitted curve may be diverted substantially by a correction to the lattice form factor at the largest simulated heavy quark mass, for which $`(Ma)^2\approx 0.65`$. ## 6 Conclusions For a simulated heavy-light decay whose meson mass is within 20% of $`M_D`$, the soft pion relation is reproduced in this study. On extrapolating the form factor and the heavy-light decay constant to the B mass, systematic errors become O(30%), to within which precision the soft pion relation holds for B meson decays. Other approaches to date have generally violated this relation, with the form factor lying significantly below the ratio of decay constants. Work is ongoing with the prospect of controlling the systematic error further, and a simulation at $`\beta =6.0`$ including a static heavy quark will address a large source of systematic uncertainty as well as providing scaling information .
no-problem/9910/cond-mat9910337.html
ar5iv
text
# Two-stage superconducting-quantum-interference-device amplifier in a high-Q gravitational wave transducer ## Abstract We report on the total noise from an inductive motion transducer for a gravitational-wave antenna. The transducer uses a two-stage SQUID amplifier and has a noise temperature of 1.1 mK, of which 0.70 mK is due to back-action noise from the superconducting quantum interference device (SQUID) chip. The total noise includes thermal noise from the transducer mass, which has a measured Q of $`2.60\times 10^6`$. The noise temperature exceeds the expected value of 3.5 $`\mu `$K by a factor of 200, primarily due to voltage noise at the input of the SQUID. Noise from flux trapped on the chip is found to be the most likely cause. Detection of gravitational waves from astronomical sources requires extremely low noise antennas and amplifiers . The dominant noise source is the first stage electrical amplifier, which has typically been made from a superconducting quantum interference device (SQUID). Our previous work on gravitational wave transducers using a commercial SQUID from Quantum Design of San Diego California found a noise temperature of 3.9 mK, with 1.2 mK attributed to SQUID back-action noise . Here we report noise measurements for an integrated two-SQUID system in which one SQUID amplifies the output of another. We use the interaction between the SQUID input and the high-Q transducer circuit to distinguish different noise sources internal to the SQUIDs. The first-stage SQUID is used as an electrical amplifier and immediately follows the transducer, which is connected to the gravitational-wave antenna. The transducer used for these measurements was a Paik-style , inductive, resonant mass. Figure 1(a) shows a schematic of the antenna with a transducer mass. There are two coils of superconducting wire on either side of the transducer’s proof mass, as shown in Fig. 1(b). 
Conservation of magnetic flux in a superconducting circuit requires that the persistent current stored in these coils changes with the inductance. This signal current is shunted to the transformer with primary $`L_{t1}`$. The secondary, $`L_{t2}`$, of this transformer is connected to the input of the SQUID chip. The transducer was made from a round plate of niobium out of which circular grooves were milled on both faces, defining a central mass. The remaining thin niobium annulus acts as the mechanical spring connecting the central proof mass to the case. The case would be bolted rigidly to the antenna during gravitational-wave searches. The proof mass was electropolished in an acid solution and was heat treated at 1500 °C to improve the quality factor of the resonance. Two other niobium plates were then bolted to either side of the proof mass and contain the sensing coils $`L_1`$ and $`L_2`$. Measurements revealed 10 $`\mu `$m gaps between the coils and the proof mass at room temperature and 25 $`\mu `$m gaps at the operating temperature of 4.2 K. The SQUID amplifier comprised two SQUIDs, the first serving as a preamplifier for the second . Both SQUIDs had junctions made from Nb-Al/AlO<sub>x</sub>-Nb trilayers and were impedance matched to the transducer circuit using a 40:1, thin-film, on-chip transformer. The first SQUID was kept in a flux-locked loop by modulating the second SQUID with a 500 kHz square-wave signal. The demodulated signal was negatively fed back to the first SQUID to linearize the amplifier response. A second feedback loop was employed to keep the flux gain between the SQUIDs at a maximum (see Figure 2). The additional loop modulated the second SQUID with a small amplitude flux signal at 8 kHz. This signal was demodulated at 16 kHz and the resulting low frequency signal was negatively fed back to the second SQUID.
Using the second harmonic of the input signal as the source of the feedback flux made this loop sensitive to the second derivative of the inverse of the function $$\mathrm{\Phi }_2(\mathrm{\Phi }_1)=\frac{G_\mathrm{\Phi }\mathrm{\Phi }_0}{2\pi }\mathrm{sin}\{\frac{2\pi }{\mathrm{\Phi }_0}[\mathrm{\Phi }_1+\mathrm{\Phi }_{\mathrm{B1}}(t)]\}+\mathrm{\Phi }_{\mathrm{B2}}(t),$$ (1) where $`\mathrm{\Phi }_2`$ is the flux in the second SQUID, $`\mathrm{\Phi }_1`$ is the flux in the first SQUID, $`G_\mathrm{\Phi }`$ is the maximal flux gain between the two SQUIDs, $`\mathrm{\Phi }_0`$ is the quantum of magnetic flux, and $`\mathrm{\Phi }_{\mathrm{B1}}(t)`$ and $`\mathrm{\Phi }_{\mathrm{B2}}(t)`$ are the (possibly time-varying) background fluxes in the first and second SQUID, respectively. This process is equivalent to maximizing the flux gain $`d\mathrm{\Phi }_2/d\mathrm{\Phi }_1`$ by shifting the background fluxes. Determining and then maintaining this maximized flux gain against changing an external background flux $`\mathrm{\Phi }_{\mathrm{B2}}(t)`$ is necessary to minimize the effect of noise from both SQUIDs. To make noise measurements, the SQUID amplifier was connected to the transducer (the transducer was not attached to an antenna in these experiments but was instead suspended from a vibration isolator). A current of 6.5025 A was stored in the two coils surrounding the proof mass. The noise was recorded as voltage across the feedback resistor $`R_{\mathrm{fb}}`$. Figure 3 shows the resulting root-mean-square flux noise spectrum in a 62.5 Hz bandwidth around the transducer resonance at 892 Hz. We use the observed contrast between constructive and destructive interference between correlated noise above and below the transducer resonance frequency, evident in Fig. 3, to make qualitative and quantitative inferences about the magnitude and source of back-action noise.
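The second-harmonic feedback scheme can be illustrated with a short numerical sketch (illustrative units with Φ₀ = 1 and an invented flux gain): the 2f Fourier component of the sinusoidally dithered output is proportional to the local curvature of the transfer function, so it vanishes exactly where the slope, i.e. the flux gain, is extremal, and changes sign across that point:

```python
import math

def phi2(phi1, g=10.0, phi0=1.0):
    """Sinusoidal transfer function with zero background fluxes (illustrative units)."""
    return (g * phi0 / (2 * math.pi)) * math.sin(2 * math.pi * phi1 / phi0)

def second_harmonic(phi_dc, a=0.01, n=400):
    """Fourier cosine amplitude at 2*theta of phi2(phi_dc + a*sin(theta))."""
    total = 0.0
    for k in range(n):
        th = 2 * math.pi * k / n
        total += phi2(phi_dc + a * math.sin(th)) * math.cos(2 * th)
    return 2 * total / n

# The 2f component vanishes where the flux gain is maximal (phi_dc = 0 here)
# and has opposite signs on either side: a usable error signal for the loop.
left, centre, right = (second_harmonic(x) for x in (-0.05, 0.0, 0.05))
print(left < 0.0, abs(centre) < 1e-9, right > 0.0)
```

Feeding this bipolar error signal back to the second SQUID therefore locks the operating point at the maximum flux gain, which is the behavior the text describes.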
Two sources of noise were expected to dominate the total noise: additive noise from the SQUID amplifier and thermal force noise from dissipation in the proof mass. The expected signal-to-noise ratio density, $`r(\omega )`$, can be written , $$r(\omega )=\frac{2\mathrm{Re}[Z_n]\mu \omega ^2}{k_BT_n|k-\mu \omega ^2+j\omega Z_n|^2},$$ (2) where $`Z_n`$ is the mechanical noise impedance, $`\mu `$ is the reduced mass of the proof mass and case mass, $`k`$ is the spring constant between the proof mass and the case, $`k_B`$ is Boltzmann’s constant, and $`T_n`$ is the noise temperature. This quantity $`r(\omega )`$ represents the sensitivity of the transducer per unit energy deposited in the transducer resonance by a signal. A more complete discussion of its derivation and use can be found in and . After calibrating the transducer response to a mechanical signal, we used Eq. 2 to describe the total noise in terms of a noise temperature $`T_n`$ and a complex noise impedance $`Z_n`$. The values of $`T_n`$ and $`Z_n`$ describe the total force and velocity noises and their correlation and can be found from fitting the noise data to Eq. 2. We found from this fit $`T_n`$ $`=`$ $`1.08\times 10^{-3}\mathrm{K},`$ (3) $`Z_n`$ $`=`$ $`(16.9+4.24j)\mathrm{kg}/\mathrm{s}.`$ (4) The double-sided spectral density of force noise, velocity noise, and their correlation can then be obtained from these parameters . Through additional calibration measurements , we were able to characterize the electromechanical circuit parameters in the transducer and SQUID chip to allow us to quantitatively relate $`T_n`$ and $`Z_n`$ to various possible mechanical and electrical noise sources internal to the transducer and SQUIDs. The back-action noise due to the SQUID was determined by subtracting the thermal force noise from the total force noise observed. The thermal noise arises from the dissipation of the transducer resonance.
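As a numerical illustration of Eq. (2), the sketch below evaluates r(ω) near the transducer resonance using the fitted noise temperature and noise impedance; the reduced mass and spring constant are not quoted in this excerpt, so the values used for them are placeholder assumptions chosen to put the resonance at 892 Hz:

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant

def snr_density(omega, Tn, Zn, mu, k):
    """Signal-to-noise ratio density r(omega) of Eq. (2)."""
    denom = abs(k - mu * omega ** 2 + 1j * omega * Zn) ** 2
    return 2 * Zn.real * mu * omega ** 2 / (k_B * Tn * denom)

# Fitted values quoted in the text; mu and k are ASSUMED placeholders
# (not quoted in this excerpt), chosen so the resonance sits at 892 Hz.
Tn = 1.08e-3                            # K
Zn = 16.9 + 4.24j                       # kg/s
mu = 0.25                               # kg (assumption)
k = mu * (2 * math.pi * 892.0) ** 2     # N/m

vals = [snr_density(2 * math.pi * f, Tn, Zn, mu, k) for f in (880.0, 892.0, 904.0)]
print(vals[1] > vals[0] and vals[1] > vals[2])  # True: r peaks at the resonance
```

The peak at resonance reflects the fact that signal energy is deposited most efficiently there, which is why the fit of Eq. (2) to the measured noise spectrum pins down both $`T_n`$ and $`Z_n`$.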
The magnitude of the thermal force noise can be predicted using the fluctuation-dissipation theorem and the measured exponential decay time of the resonance. The Q for this mode was measured directly from the damped oscillation and found to be $$Q=2.60\times 10^6,$$ (5) which includes contributions from the mechanical spring and the electrical spring created by the stored currents. This Q-value has been corrected for cold damping produced by the SQUID feedback loop so that it gives the passive dissipation in thermal equilibrium at the transducer temperature. By studying the dependence of Q on stored current, the total Q could be broken down into mechanical and electrical components : $`Q_m`$ $`=`$ $`3.15\times 10^6,`$ (6) $`Q_e`$ $`=`$ $`2.52\times 10^5.`$ (7) The thermal force noise can be found using the total $`Q`$ from $$S_f=2k_BT\frac{\omega _0\mu }{Q},$$ (8) where $`T`$ is the measured physical temperature, $`\omega _0`$ is the resonance frequency, $`\mu `$ is the reduced mass of the proof mass and the case, and $`Q`$ is the measured, passive Q of the resonance. After subtracting this thermal noise, the noise temperature of the SQUID amplifier was found to be $$T_s=6.99\times 10^{-4}\mathrm{K}.$$ (9) Using a circuit model for the transducer and SQUID input circuit, we calculated the SQUID’s electrical noise impedance: $$Z_s=(5.9\times 10^{-3}-8.9\times 10^{-6}j)\mathrm{\Omega }.$$ (10) We also performed tests on the SQUID without the transducer attached. The input port was left open-circuited and the entire SQUID chip was contained in a niobium box. The SQUID’s energy sensitivity in this configuration was $`9.22\times 10^{-6}`$ K. We note that with only one port on the amplifier available, the noise can only be expressed as a single number and a true noise temperature cannot be calculated. 
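Eq. (8) is a one-line calculation once the passive $`Q`$ is known. In the sketch below, the physical temperature and reduced mass are placeholders (neither is quoted in this excerpt); only the functional form follows the text.

```python
K_B = 1.380649e-23  # Boltzmann constant (J/K)

def thermal_force_noise(t, omega0, mu, q):
    """Double-sided spectral density of thermal force noise, Eq. (8):
    S_f = 2 k_B T omega0 mu / Q.

    t: physical temperature (K); omega0: resonance angular frequency
    (rad/s); mu: reduced mass (kg); q: passive quality factor."""
    return 2.0 * K_B * t * omega0 * mu / q
```

The inverse dependence on $`Q`$ is the point of the high-Q transducer: doubling the quality factor halves the thermal force noise floor at fixed temperature.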
We used the method of Clarke, Tesche, and Giffard , as extended by Martinis and Clarke , to calculate the minimum or true noise temperature of the SQUID expected from the Johnson noise of its shunt resistors: $$T_{\mathrm{CTG}}=3.5\times 10^{-6}\mathrm{K}.$$ (11) Table I presents the Clarke-Tesche-Giffard (CTG) predictions for each noise component as well as the experimental results. The data are presented both as the experimental result and as a minimum limit derived from the data. These limits come from using the observed high/low frequency asymmetry seen in the noise spectrum while assuming the thermal noise is large enough that an accurate force noise subtraction cannot be done. The voltage noise and current noise must be at least as large as this limiting case. The measured noise temperature is a factor of 200 above the expected noise of $`T_{\mathrm{CTG}}`$, thereby reducing the sensitivity of the transducer through Eq. 2. This noise is almost exclusively due to voltage noise at the input of the SQUID as the excess noise is observable only when the SQUID is coupled to the high-Q transducer. We examined possible explanations for this excess voltage noise. Flux creep in the large sensing coils $`L_1`$ and $`L_2`$ (see Figure 1 (b)) was eliminated as a possible source based on the predicted signature of the two noise components. Noise from a varying-magnitude RF signal was also considered but was rejected because we expect an accompanying large current noise, which was not observed. Noise from flux lines moving between pinning sites in the on-chip transformer $`L_{s4}`$ was modeled using the method of Ferrari et al. . We found that the product $$nS_r=9.1\times 10^{-9}\mathrm{Hz}^{-1},$$ (12) where $`n`$ is the flux vortex density and $`S_r`$ is the spectral density of the flux’s radial motion, would give rise to the observed voltage noise. 
This same $`nS_r`$ would predict values of $`S_I(\omega _0L)/k_B`$ $`=`$ $`1.8\pm 0.7\mu \mathrm{K},`$ (13) $`|S_{VI}|/k_B`$ $`=`$ $`80\pm 30\mu \mathrm{K}.`$ (14) The predicted value of the current noise is below the CTG value, so on-chip flux noise would not be significant. The correlation noise agrees within the error bars with the experimental limit in Table I. While the back-action noise we observe would allow only modest sensitivity improvement for detectors operating at 4 K, the potential for millikelvin detectors may be better. We were unable to measure the SQUID in a transducer at millikelvin temperatures, but separate noise measurements on a similar chip in a dilution refrigerator indicate flux motion noise may be much less at 90 mK, depending on the source of the flux. Our SQUID is being considered for use in a multimode transducer on the ALLEGRO detector and, with further research, it may be possible to improve the 4 K performance by using a heater to expel flux. This work was supported by grant PHY93-12229 from the National Science Foundation and by the Gravitational Wave Lab at Louisiana State University.

Figure 1 (a), (b) - G. M. Harry et al.
Figure 2 - G. M. Harry et al.
Figure 3 - G. M. Harry et al.
# The Star Formation Histories of Two Northern LMC Fields ## 1 INTRODUCTION The LMC is the Milky Way’s best-studied companion galaxy, a result of both its proximity ($`\sim `$50 kpc) and high galactic latitude. Its stellar content is easily accessible from ground-based telescopes, allowing 1m-class telescopes to obtain effectively deeper data than the HST can in more distant Local Group members such as WLM and LGS 3. The global properties of the LMC are well-studied, with a total mass of $`3.5\times 10^9M_{\odot }`$ and neutral gas mass of $`5.2\times 10^8M_{\odot }`$ (15% of the total), including HeI, determined by Kim et al. (1998b). Kim et al. (1998a) also overlaid an H$`\alpha `$ map over their HI map, showing the highest concentration of H$`\alpha `$ in the 30 Doradus region. As a result of the LMC’s proximity, a great deal of work has already been done on this subject, much of which is well summarized in a review by Olszewski, Suntzeff & Mateo (1996). The majority of star clusters seem mostly to have formed within the past 4 Gyr, with the remaining clusters formed over 12 Gyr ago. The clusters also show a rise in metallicity over the galaxy’s history, from \[Fe/H\] = -1.8 some 15 Gyr ago to -0.5 at present, both values with a spread of $`\pm `$0.5 dex. Whether the formation times and metallicities of the clusters trace those of the global star formation, however, is uncertain. The distribution of ages of the field stars is more difficult to measure. An early attempt at studying the disk stars was made by Butcher (1977), who identified a break in the main-sequence luminosity function as indicating that most of the stars were formed in the past 3-5 Gyr. A study of the star formation history of the LMC field stars using comparisons with synthetic CMDs was made by Bertelli et al. (1992), which confirmed the Butcher (1977) results. It was determined that a burst of star formation activity began some 3-5 Gyr ago, with a star formation rate as high as ten times the rate at older ages. 
These results confirm the expectations from the cluster studies. Recently, however, HST results have brought this seemingly firm picture into question. Holtzman et al. (1997) take HST data for the same field studied by Butcher (1977) and Bertelli et al. (1992), and obtain a different result. They find that the star formation rate increase happened more recently (2-3 Gyr ago) and that, more importantly, the rate increase was only approximately a factor of 3. This result was confirmed with the addition of ground-based data of the same field and surrounding regions by Stappers et al. (1997), and Geha et al. (1998) find that this star formation history is also correct for two other outer LMC fields. It has also been hinted that the star formation rates may have begun to decrease during the past Gyr. Olsen (1999) studied the HST field surrounding NGC 1754, which also lies in the northern part of the disk, as well as the fields surrounding four globular clusters in the bar. Using a method much like that used in this work, he found that the Holtzman et al. (1997) results were qualitatively correct, but that the star formation enhancement during the past 3 Gyr has not been a uniform star formation rate. Olsen (1999) also finds evidence of the decrease in star formation rate during the past 0.5 Gyr. Similarly, Holtzman et al. (1999) find that the field star formation history differs from the observed star cluster formation history, and have results consistent with those of Olsen (1999). For the upper main sequence, Holtzman et al. (1997) find a metallicity of between \[Fe/H\]=-0.7 and -0.4, while Geha et al. (1998) find the \[Fe/H\]=-0.4 isochrone to fit the best. The abundances of the old stars are less constrained, with both groups finding that the red giant branch appears to have an old population with \[Fe/H\]=-1.7 but could have the upper RGB dominated by younger (less than 2 Gyr) stars with \[Fe/H\]=-1.3. 
This study will present data for two fields near the Constellation III region, in the northern part of the LMC disk, as well as a star formation history analysis. Goals for this study will be: * Determining a photometric distance to the LMC, and comparing with other measurements. * Dating the oldest stars in the galaxy, and determining star formation rates since that time. Special importance will be attached to the 3-5 Gyr range, and the ratio of recent to intermediate-age star formation rates, as all previous ground-based work has supported one interpretation, while all HST-based work has supported another. * Determining the chemical enrichment history, and whether the red giant branch contains a significant number of young stars. * Comparing the two fields for differences in recent star formation rates. ## 2 DATA ### 2.1 Observations $`UBV`$ images of five fields inside Constellation III and two fields outside the region were obtained by Deidre Hunter at the Cerro Tololo Inter-American Observatory (CTIO) 26, 27, 30, and 31 January 1993 with a 2048$`\times `$2048 Tektronix CCD on the 0.9 m telescope. Conditions were photometric (except as mentioned below), with seeing typically 1-2″. The fields outside the Constellation will be the ones studied here, referred to as the SE and NW fields, with positions given in Table 1. Additionally, Figure 1 of Dolphin & Hunter (1997) shows rough sizes and locations of the seven fields. Images in each filter were obtained with at least two integration times to give greater dynamic range, as well as for cosmic ray cleaning. Standard stars were also observed, chosen from the lists of Graham (1982) and Landolt (1973; 1983). Image reduction was done normally, although there were some problems with the images. The CCD was non-linear, with an error of 0.015 magnitudes per magnitude, which was corrected using a relation determined by Alistair Walker (private communication). 
Some standard star images showed a step-function in the bias, for which only the part of the images with the normal bias was used so that no additional uncertainty was added to the photometry. This bias problem was not observed in the data frames. Additionally, all images with integration times of $``$5 seconds were corrected for the finite time required for the shutter to open and close. Bright but unsaturated stars also had additional structures in their PSFs that could not be accounted for by PSF fitting, so those stars were not included in the photometry. An iterative fit for the transformations to standard filters was made, with a zero point, airmass term, and colour term for each filter. We required that the colour term and zero point were constant for the four nights, allowing only the extinction to vary. In the data for the second night, there were two $`U`$ images and one $`B`$ image for which the standard star photometry was 0.15 magnitudes fainter than the rest. Because the images were interleaved and because the offset was exactly the same in each case, the offset is unlikely to be due to clouds, although no other problems were obvious. All other images agreed with the brighter level, so those three images were removed from the solution. The rms’s of the final solutions were 0.05 in $`U`$ and 0.02 for $`B`$ and $`V`$. ### 2.2 Reduction The reduction procedure for this data is detailed in Dolphin & Hunter (1997). Only a brief outline is given here. DAOPHOT (Stetson 1987) in IRAF was used to obtain photometry, following the procedure described by Massey & Davis (1992). Because of significant PSF variations in the chip, we were forced to use a quadratically varying PSF solution, which created minor differences from the standard DAOPHOT photometry recipe. The output from DAOPHOT was processed to minimize contamination from bad pixels, cosmic rays, and field edges. 
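The transformation fit described above — a common zero point and colour term across the four nights, with a per-night extinction coefficient — can be cast as a single linear least-squares problem. The function below is an illustrative reconstruction under that assumption, not the authors' actual pipeline; all names are invented for the sketch.

```python
import numpy as np

def solve_transformation(m_inst, m_std, airmass, colour, night):
    """Least-squares fit of m_std - m_inst = zp + c*colour + k[night]*X,
    with one extinction coefficient k per night but a common zero point
    zp and colour term c, as the text describes.

    Returns [zp, c, k_night1, k_night2, ...] for nights in sorted order.
    """
    night = np.asarray(night)
    nights = sorted(set(night.tolist()))
    n = len(m_inst)
    a = np.zeros((n, 2 + len(nights)))
    a[:, 0] = 1.0      # zero point column
    a[:, 1] = colour   # colour-term column
    for j, nt in enumerate(nights):
        mask = night == nt
        a[mask, 2 + j] = np.asarray(airmass)[mask]  # per-night extinction
    rhs = np.asarray(m_std) - np.asarray(m_inst)
    coef, *_ = np.linalg.lstsq(a, rhs, rcond=None)
    return coef
```

Iterating this fit after clipping discrepant frames (as done with the three faint second-night images) is one plausible reading of the "iterative fit" in the text.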
Stars were removed from the list based on proximity to a bad column, a high $`\chi ^2`$ value, or a sharpness greater than 2$`\sigma `$ from the mean at its magnitude. Finally, stars were matched between the many exposures for each filter, and a $`UBV`$ photometry list and a $`BV`$ photometry list were created. Photometry was compared with the Balona & Jerzykiewicz (1993) $`UBV`$ CCD photometry near the NGC 2004 field and the Lucke (1972) $`BV`$ photoelectric photometry of LH 77 (two of the Constellation III fields). Our values were consistent with the Balona & Jerzykiewicz (1993) results, but calibration problems on the first night (in which the SE field was observed) forced us to use the Lucke (1972) photoelectric photometry as calibration, with offsets of $`\mathrm{\Delta }V=0.010\pm 0.009`$ and $`\mathrm{\Delta }B=0.031\pm 0.011`$. $`V,B-V`$ CMDs for both fields are shown in Figure 1. Unfortunately no data was available for foreground star subtraction (the fields studied here were taken as the background fields of Dolphin & Hunter 1997), meaning that foreground contamination cannot be corrected for. A calculation using field star densities from Ratnatunga & Bahcall (1985) indicates that the combined CMD would have approximately 600 Galactic foreground stars with $`V`$ less than 21, much less than the 21000 stars we observe. Thus the overall CMD is very clean, with only an estimated 3% of stars being Galactic. However, the red giant branch, defined by $`V`$ brighter than 19 and $`B-V`$ redder than 0.8, contains 1710 observed stars and an estimated 125 foreground stars for a contamination fraction of over 7%. Contamination at this level will certainly affect the quality of the fits, but because the Galactic stars will not fall along LMC isochrones, the primary effect should be to increase the $`\chi ^2`$ of the fits rather than altering the solved star formation history. 
### 2.3 Artificial Stars Artificial star tests were made on all fields, and are described in detail by Dolphin & Hunter (1997). The DAOPHOT reduction of the fields with artificial stars added was identical to the original photometry, so that the artificial star photometry would be of comparable quality to the stars. The primary difference between the artificial and observed stars is that the artificial stars, naturally, had no PSF-fitting errors. Figures 2 and 3 show the completeness levels in the three filters, while Figures 4 and 5 show the photometric error as determined by the artificial star tests. The measured errors from the artificial star tests were significantly higher than the DAOPHOT uncertainties, which is expected for crowded field work. The levels at which completeness in the SE field drops below half are $`B`$=22.3 and $`V`$=21.8; the levels at which photometric uncertainty (including crowding errors) increases above 0.2 magnitudes are $`B`$=19.5 and $`V`$=19.3. Values for the NW field are approximately 0.5 magnitudes less. ## 3 ANALYSIS ### 3.1 Star Formation History Solutions For all CMD analyses in this paper, the Padova isochrones (Girardi et al. 1996; Fagotto et al. 1994) were used. These models include later (post-helium flash) phases of stellar evolution, despite the large uncertainties in those phases, providing completeness at the cost of accuracy. For the purpose of matching the entire observed CMD with synthetic data, however, it is far better to use models with slightly incorrect AGB tracks than ones with no AGB tracks. A standard least-$`\chi ^2`$ fit can be easily thrown off when dealing with common observational errors, most notably the presence of observed “stars” (cosmic rays, noise, binary stars, Galactic foreground contamination, etc) in regions of the CMD where the models do not predict any stars. 
Such a case will give a number of expected stars of $`0\pm 0`$ in that region of the CMD, which will clearly make it the most important part of the fit. To compensate, a small constant value was added to the $`\sigma ^2`$ denominators of the $`\chi ^2`$ calculations. In addition, the fit parameter was modified so that any points with worse than a 3 $`\sigma `$ error would contribute $`3\times \chi `$ to the fit, rather than $`\chi ^2`$. Both of these modifications were made so that unfittable regions of the CMD (where no stars were predicted but stars were observed, or where models poorly reproduce the data) would not dominate the fit. ### 3.2 Era of Initial Star Formation of the LMC The time of initial star formation in this field is an essential parameter to calculate before determining the galaxy’s star formation history. This is because the evolutionary models reproduce neither the horizontal branch nor the red clump structures very well, and thus a blind solution using the models can easily be thrown off in dealing with these old populations. In the case of the LMC, doing so could cause the models to attempt to recreate the tight red clump as a red horizontal branch. But clearly the CMD has no pronounced horizontal feature, which sets the upper limit on large amounts of star formation to $`\sim `$12 Gyr ago. This limit has been adopted into the models below, which have a 12 Gyr maximum age. To determine the age of initial star formation, a set of history calculations was made using the method described in Dolphin (1997) that resolved the age of initial star formation, with the distance, extinction, and enrichment history allowed to vary. The Padova isochrones (Girardi et al. 1996; Fagotto et al. 1994) were used for this and all other solutions in this paper. Of the possible starting points, the oldest (12 Gyr) gave the best fits and is used below. 
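The two modifications above — a constant floor added to the $`\sigma ^2`$ denominator and a linear $`3\times \chi `$ contribution beyond 3 $`\sigma `$ — can be sketched as follows. This is an illustrative reconstruction that assumes Poisson variances for the model CMD bins and a floor of 1; the paper's exact constant and variance model are not stated.

```python
import numpy as np

def robust_fit_parameter(observed, predicted, sigma_floor=1.0):
    """Robustified fit parameter described in the text: a small constant
    is added to the sigma^2 denominator so that empty model bins
    (0 +/- 0 predicted stars) cannot dominate, and any bin deviating by
    more than 3 sigma contributes linearly (3*|chi|) instead of chi^2."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    variance = pred + sigma_floor          # Poisson variance + floor (assumed)
    chi = (obs - pred) / np.sqrt(variance)
    return float(np.sum(np.where(np.abs(chi) > 3.0,
                                 3.0 * np.abs(chi),
                                 chi**2)))
```

With the floor, a bin with 4 observed stars and 0 predicted contributes a finite penalty instead of an infinite one, and the linear tail keeps any single irreproducible CMD region from dominating the solution.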
The question of whether the galaxy actually had no star formation until 12 Gyr ago, or whether the galaxy had a non-zero but very small star formation rate cannot be solved from this data, as there are stars where the horizontal branch would be expected, but no horizontal feature is seen in the CMD. Regardless of whether or not the galaxy waited until 12 Gyr ago to form any stars, it waited until that point to form a measurable number of stars. ### 3.3 Global Star Formation History and Enrichment In order to allow for the fullest use of the CMD-matching algorithm used to determine the star formation histories, a solution for the distance and extinction to the fields was made in addition to the solution for the star formation history. To do this, many star formation history solutions were made, each with a different combination of those parameters. The best fits were then combined, with standard deviations of the parameters of the best fits taken as the uncertainties in the solutions. Monte Carlo simulations were also run to estimate the uncertainties inherent in the fitting procedure itself, and these were added in quadrature to the scatter among the best fits. Thus the quoted standard errors are the total uncertainty in the measurements, excluding effects of uncertainties in the models. The solution was made using 22576 stars with $`B`$ and $`V`$ between 15.5 and 21.5. The determined extinction to the two fields was $`A_V=0.30\pm 0.05`$ and the distance modulus was $`(m-M)_0=18.41\pm 0.04`$, which are discussed in the summary. There is no evidence for differential reddening in the fields studied. (Note that the broad swath of stars running diagonally from the top of the CMD to the red edge is due to Galactic foreground stars, which for reasons given above could not be subtracted.) The star formation history with formal errors is shown in Table 2 and Figure 6. 
Average star formation rates for each era are given assuming a Salpeter IMF with cutoffs at 120 and 0.1 $`M_{\odot }`$, which is consistent with the Holtzman et al. (1997) results. An IMF error would simply change the ratio of old to young stars, but would not qualitatively alter the two-burst structure observed. A synthetic CMD for the region reconstructed from the models, artificial star results, and determined star formation history is shown in Figure 7b, with the observed data shown in Figure 7a. The artificial star tests for this data set clearly do not account for all observational effects, as the main sequence sharpness, empty region between the main sequence and blue loops, and other effects occur in the reconstructed CMD. This could be a result of any combination of cosmic rays, PSF fitting errors (see above), bad pixels, etc. Additionally, the reconstructed CMD assumes a single metallicity for each age of stars, so a broader main sequence could be constructed with a spread in the age-metallicity relation. A small red horizontal branch appears in the reconstructed data, indicating that either the time of initial star formation or the initial metallicity could be incorrect, or that the isochrones are incorrect. This is of less concern, as the theoretical models do not reproduce the red clump or the horizontal branch terribly well in any case. For the most part, the RGB is well-reproduced. The split RGB seen in the reconstructed CMD is an artifact of assuming that all stars in an age bin (i.e., 9 to 12 Gyr old) have the same metallicity. A continuous age-metallicity relation would eliminate this split. On the main sequence, the observed data appears to cut off near $`V`$ of 16, while the reconstructed main sequence extends well above that. This is a result of the chosen binning and assumption of constant star formation rate within each bin. 
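Converting counts of CMD stars into star formation rates in $`M_{\odot }`$/yr requires the number and mass integrals of the adopted IMF. A minimal sketch, assuming the quoted cutoffs of 0.1 and 120 $`M_{\odot }`$ and the canonical Salpeter slope $`\alpha =2.35`$ (the slope itself is not stated in the text):

```python
def salpeter_mass_integrals(m_lo=0.1, m_hi=120.0, alpha=2.35):
    """Closed-form integrals of the Salpeter IMF xi(m) = A * m**(-alpha)
    between the cutoffs, per unit normalization A: the total number of
    stars and the total mass formed."""
    number = (m_lo**(1.0 - alpha) - m_hi**(1.0 - alpha)) / (alpha - 1.0)
    mass = (m_lo**(2.0 - alpha) - m_hi**(2.0 - alpha)) / (alpha - 2.0)
    return number, mass

# The mean stellar mass is mass/number; a count of stars formed in an
# age bin, divided by the bin duration and multiplied by the mean mass,
# gives the average star formation rate in solar masses per year.
```

With these cutoffs the mean stellar mass comes out near 0.35 $`M_{\odot }`$, which is why changing the IMF mainly rescales the rates rather than reshaping the two-burst history.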
In this case, the youngest bin was 0 to 200 Myr old, meaning that a main sequence turnoff between $`M_V`$ of -2 and -3 is irreproducible with the chosen binning. The young stellar population will be dealt with more thoroughly (with higher resolution solutions) in the following section. Because of the large amount of foreground contamination (for example the Sun, at 1 kpc, would fall squarely in the blue loop region), comparison between observed and reconstructed CMDs in the blue loop region will not be fruitful. Ideally, data would have been taken outside the LMC to sample foreground stars, but the data used for this paper was taken as a background field for the Constellation III study (Dolphin & Hunter 1998). A more detailed comparison is shown in Figures 8 and 9, which show the binned CMDs that were used for the solution (0.1 magnitude resolution in $`B-V`$ and 0.3 magnitude resolution in $`B`$ for Figure 8; 0.05 $`B-V`$ and 0.15 $`B`$ for Figure 9), as well as the residuals and the fit parameter values. Magnitude limits in the plots are $`15.5<B<20.9`$ and $`0.4<VI<2.4`$. The subtracted CMD is shown at twice the contrast level of the two CMDs (darker being undersubtracted and lighter oversubtracted), while the Hess diagrams are shown on a scale from -9 $`\sigma `$ to +9 $`\sigma `$. The fitting procedure was able to fit the main sequence reasonably well, with all points in the main sequence area being fit within 3 $`\sigma `$. A broader theoretical main sequence (either from larger photometric errors or an age-metallicity spread) would have eliminated most of the larger errors in the main sequence fit. The upper red giant branch was also well-fit, with nearly all points being fit within 2 $`\sigma `$. By far the worst-fit part of the CMD was the region with the red clump and horizontal branch, which contained the 9 $`\sigma `$ errors in the fitting. Given the limitations of the isochrones used, these errors are not surprising. 
Since the robust fit parameter used here was less sensitive to this than a traditional $`\chi ^2`$ fit would have been, this had a minimal effect on the overall star formation history solutions. The star formation in the LMC apparently began with a significant episode, which accounted for about half of the LMC’s total star formation (by mass of stars formed) in the first 3 Gyr. After that, the star formation rate appears to have slowly declined from 11 Gyr ago until about 2.5 Gyr ago. During more recent times, beginning between 1.5 and 2.5 Gyr ago, the LMC has undergone a second large episode of star formation, which is studied in greater detail in the following section. This history appears to support the recent HST-based results (Holtzman et al. 1997; Stappers et al. 1997; Geha et al. 1998; and Olsen 1999), with the recent burst beginning about 2.5 Gyr ago. The metallicity of the LMC’s oldest population of stars is \[Fe/H\] = -1.63 $`\pm `$ 0.10. This is consistent with the Holtzman et al. (1997) estimate of \[Fe/H\] = -1.7 for an old RGB. It has climbed very steadily since then, to its present value of -0.38 $`\pm `$ 0.10, consistent with the findings of Holtzman et al. (1997) and Geha et al. (1998). ### 3.4 Recent Star Formation History With more recent star formation, it is beneficial to study the two fields separately, as recent star formation in the two fields is likely to be quite different. Specifically, the fields are on either side of LMC Constellation III, a $`\sim `$1 kpc diameter superbubble where significant amounts of recent star formation have taken place (Dolphin & Hunter 1998). Although the Constellation III event happened very recently (in the past $`\sim `$20 Myr), it is possible that the recent star formation histories in these two fields have been quite different over a longer period. 
Using the distance and extinction determined in the previous section, solutions for the two fields were run that had higher resolution for recent star formation rates and events. The number of stars in each field was 10354 in NW and 12222 in SE. Resulting star formation rates are given in Table 3 and Figure 10. Reconstructed CMDs of the two fields are shown in Figure 11, which can be compared with the observed data in Figure 1. Aside from the fitting problems mentioned above in relation to the combined CMD, the agreement is excellent. Again, the split red giant branch is a result of the assumption that the metallicity is constant over each age bin. Thus there is a jump between \[Fe/H\] of -1.63 $`\pm `$0.10 and -1.32 $`\pm `$0.09 for the 9-12 and 7-8 Gyr old bins, respectively, when in reality the metallicity would have gradually increased. However, the total width of the red giant branch is well-reconstructed. The star formation histories of the two fields are remarkably similar, and are consistent with each other until about 200 Myr ago. From 200 to 2500 Myr ago, the average star formation rate in the two fields is $`1.4\times 10^{-4}M_{\odot }`$/yr, $`\sim 30\%`$ higher than the average star formation rate between 2.5 and 12 Gyr ago. The NW field, overall, is consistent to within 2.5 $`\sigma `$ with a constant star formation rate of $`1.4\times 10^{-4}M_{\odot }`$/yr over the recent 2500 Myr episode. The SE field, however, has shown a very strong increase in star formation activity over the past 200 Myr, as shown in Table 3 (as well as with the casual observation that the SE field’s CMD shows many more upper main sequence stars than does the NW field’s CMD). The star formation rate during this period is consistent with a constant value of $`3.4\times 10^{-4}M_{\odot }`$/yr, a 140% increase over the 2500 Myr average rate. 
With both fields having consistent star formation rates for each bin until very recently, it follows that whatever conditions caused this recent (2500 Myr) outburst of star formation happened over a large scale. Given their physical separation of $`\sim `$2 kpc, it seems unlikely that a single localized event could have triggered the large amount of star formation seen here. Also, with other studies of the LMC also finding a recent burst, there must have been some large change of environment that triggered an era of star formation throughout the galaxy. The more recent (200 Myr) event was localized, only affecting the SE field. ## 4 SUMMARY Ground-based UBV photometry of two fields in the northern disk of the LMC has been presented. A distance modulus of 18.41 $`\pm `$ 0.04 and an extinction of 0.30 $`\pm `$ 0.05 have been calculated. The distance is smaller than the SN 1987A distance of 18.55 $`\pm `$ 0.05 (Panagia 1998), but consistent with the red clump distance of 18.36 $`\pm `$ 0.17 (Cole 1998). The extinction agrees with the average extinction of $`A_V=0.40\pm 0.11`$ found in the two fields by Dolphin & Hunter (1998). The LMC’s oldest significant population appears to have a metallicity \[Fe/H\] of -1.63 $`\pm `$ 0.10, with an age of initial star formation of roughly 12 Gyr to account for the lack of a horizontal branch in the observed data. About half of the LMC’s stars (by mass) formed by 9 Gyr ago. After this initial event, the star formation tapered off slowly, until a recent burst beginning about 2.5 Gyr ago. This result is consistent with most other LMC star formation history works, as it follows the on-off-on star formation history. The metallicity has been constantly rising, and is currently at \[Fe/H\] = -0.38 $`\pm `$ 0.10, consistent with the findings of Geha et al. (1998) and Holtzman et al. (1997). 
This recent episode dramatically strengthened in the SE field about 200 Myr ago, but the NW field is consistent with a constant star formation rate for the past 2.5 Gyr. In summary: * The distance modulus for this portion of the LMC was determined photometrically to be $`(m-M)_0=18.41\pm 0.04`$. This is shorter than the SN 1987A distance of 18.55 $`\pm `$ 0.05 (Panagia 1998), but consistent with a red clump distance of 18.36 $`\pm `$ 0.17 (Cole 1998). * The oldest significant star formation in the LMC occurred about 12 Gyr ago, with about half of the LMC’s star formation occurring in the first 3 Gyr. After decreasing star formation until about 2.5 Gyr ago, a recent burst has rekindled the LMC. In agreement with the recent HST-based results, the burst appears to have begun in the 2-3 Gyr range (rather than the 3-5 Gyr range favored by previous ground-based studies). * The metallicity of this part of the LMC has increased from an initial value of -1.63 $`\pm `$ 0.10 to a present value of -0.38 $`\pm `$ 0.10. * The two fields studied show very similar star formation histories over the past 2.5 Gyr, but have deviated significantly from one another during the past 200 Myr. Given the separation of $`\sim `$2 kpc, the environmental change that caused this burst must have occurred over a very large, perhaps global scale in the LMC, with the recent ($`<`$200 Myr) star formation beginning to show regional differences. ## Acknowledgments I am indebted to Deidre Hunter for obtaining the data used in this paper.
# Star Formation Histories of the Galactic Satellites ## 1 Introduction The bulk of the stellar populations in the Galactic halo field and globular cluster stars show a well-defined turn-off, at $`B-V\sim 0.4`$, implying that the vast majority of the stars are old. The fraction of stars which lie blueward of this well-defined turn-off, with metallicities similar to that of the present dSphs, was analysed by Unavane, Wyse, & Gilmore (1996; hereafter UWG96) to place limits on the importance of the recent accretion of stellar systems similar to the extant (surviving?) dwarf satellite galaxies. UWG96 showed that very few ($`\sim `$10 per cent) stars were found to be bluer (and by implication, younger) than the dominant turnoff limit, with the highest value found for the more metal-rich halo (\[Fe/H\]$`>-1.5`$). Direct comparison of this statistic with the colour distribution of the turnoff stars in the Carina dwarf allowed UWG96 to derive an upper limit on the number of mergers of such satellite galaxies into the halo of the Milky Way. This upper limit was $`\sim 60`$ Carina-like galaxies. The higher metallicity data constrain satellite galaxies like the Fornax dwarf; only $`\sim `$6 of these could have been accreted within the last $`\sim `$10 Gyr. No galaxy like either Magellanic Cloud can ever have merged. Interestingly, a limit of zero also applies to the Sagittarius dwarf galaxy (Ibata, Gilmore & Irwin 1995) if recent suggestions that it has a substantial membership of near-solar metallicity are correct. This result has recently taken on even more general significance: comparison of the evolution of the halo formation rate in model galaxies, calculated ab initio from hierarchical structure formation models, with the observed evolution of the global star formation history indicates that the latter is not inconsistent with being largely driven by halo mergers at $`z>1`$. 
Thus one would like to be confident that the Milky Way Galaxy really has not suffered such mergers, before concluding that the single good test case appears not to match the single available good model. The question arises from the UWG96 analysis as to the validity of adopting Carina as a ‘representative’ dSph galaxy. Ideally, of course, one would also like to consider the age distribution function of the dSph and the field halo stellar populations, rather than merely the colour data. Motivated by this, and the many other scientific applications of the method, Hernandez, Valls-Gabaud & Gilmore (1999; henceforth HVGG; see also Gilmore, Hernandez, & Valls-Gabaud 1999) developed an objective technique to derive star formation histories. Application of this method to the set of dSph galaxies provides an objective determination of the age distribution in the dSph stellar populations, for comparison with the field halo (old) ages. This comparison has been achieved by Hernandez, Gilmore & Valls-Gabaud (1999, henceforth HGVG). HGVG used their reduction of archive HST observations of the resolved populations of a sample of dSph galaxies (Carina, LeoI, LeoII, Ursa Minor and Draco), uniformly taken and reduced, to recover the star formation histories (henceforth $`SFR(t)`$) of each, applying their new non-parametric maximum likelihood variational calculus method.

## 2 Objective Determination of Star Formation Histories

A detailed description of our new method for objective determination of star formation histories, which uses a variational calculus approach supplemented by maximum likelihood statistical analysis, is provided in HVGG. That reference also describes the extensive numerical tests carried out to ensure numerical reliability in the implementation. We provide just a short summary here.
The available and necessary information is an observed colour-magnitude diagram, extending below the main-sequence turnoff of the oldest population of interest, an independent determination of the stellar metallicities, and a set of model isochrones. Given all that, we can construct the probability that the $`n`$ observed stars resulted from a certain star formation history, $`SFR(t)`$. This will be given by: $$\mathcal{L}=\prod_{i=1}^{n}\left(\int_{t_0}^{t_1}SFR(t)G_i(t)dt\right),$$ (1) where $$G_i(t)=\frac{\rho (l_i;t)}{\sqrt{2\pi }\sigma (l_i)}\mathrm{exp}\left(-\frac{\left[C(l_i;t)-c_i\right]^2}{2\sigma ^2(l_i)}\right)$$ In the above expression $`\rho (l_i;t)`$ is the density of points along the isochrone of age $`t`$, around the luminosity of star $`i`$, and is determined by an assumed IMF (the results are not sensitive to this) together with the duration of the differential phase around the luminosity of star $`i`$. $`t_0`$ and $`t_1`$ are the minimum and maximum times that need to be considered, for example 0 and 15 Gyr. $`\sigma (l_i)`$ is the amplitude of the observational errors in the colour of the stars, which is a function of the luminosity of the stars. This function is supplied by the particular observational sample one is analysing. Finally, $`C(l_i;t)`$ is the colour the observed star would actually have if it had formed at time $`t`$. HVGG refer to $`G_i(t)`$ as the likelihood matrix, since each element represents the probability that a given star, $`i`$, was actually formed at time $`t`$. Since the colour of a star of given luminosity and age can sometimes be a multi-valued function, in practice we check along a given isochrone to find all possible masses a given observed star might have as a function of time, and add all contributions (mostly 1, sometimes 2 and occasionally 3) into the same $`G_i(t)`$. In this construction we are only considering observational errors in the colour, and not in the luminosity of the stars.
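To make the construction concrete, here is a minimal numerical sketch of the likelihood matrix $`G_i(t)`$ and of Eq. (1) on a discretised age grid. The functions `C`, `rho` and `sigma` below are toy stand-ins for the isochrone colour, the isochrone point density, and the photometric colour error; they are illustrative assumptions, not the HVGG implementation (in particular, a real isochrone colour can be multi-valued, whereas this toy one is not).

```python
import numpy as np

# Toy stand-ins for the isochrone-derived quantities in the text:
def C(l, t):      # model colour of a star of luminosity l formed at age t
    return 0.5 + 0.02 * t - 0.01 * l
def rho(l, t):    # density of points along the age-t isochrone near l
    return 1.0
def sigma(l):     # photometric colour error at luminosity l
    return 0.05

def likelihood_matrix(lum, col, t_grid):
    """G[i, k]: probability that star i (luminosity lum[i], colour col[i])
    formed at age t_grid[k], per the Gaussian kernel of the text."""
    G = np.empty((len(lum), len(t_grid)))
    for i, (l, c) in enumerate(zip(lum, col)):
        s = sigma(l)
        G[i] = [rho(l, t) / (np.sqrt(2.0 * np.pi) * s)
                * np.exp(-(C(l, t) - c) ** 2 / (2.0 * s ** 2))
                for t in t_grid]
    return G

def likelihood(sfr, G, dt):
    """Eq. (1): product over stars of the SFR-weighted age integral."""
    return float(np.prod(G @ (sfr * dt)))

# Tiny demonstration: two stars, a 16-bin age grid, flat SFR.
lum = np.array([1.0, 2.0])
col = np.array([0.60, 0.55])
t_grid = np.arange(16.0)          # ages 0..15 "Gyr"
G = likelihood_matrix(lum, col, t_grid)
L = likelihood(np.ones(16), G, dt=1.0)
```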
Although the generalisation to a two-dimensional error ellipsoid is trivial, the observational errors in colour dominate to the extent of making this refinement unnecessary. The condition that $`\mathcal{L}(SFR)`$ has an extremum can be written as $$\delta \mathcal{L}(SFR)=0,$$ and a variational calculus treatment of the problem applied. This in effect transforms the problem from one of searching for a function which maximizes a product of integrals to one of solving an integro-differential equation. The numerical implementation required to ensure convergence to the maximum likelihood SFR(t) is described fully in HVGG, as are the extensive tests and simulations using synthetic HR diagrams. The main advantages of our method over other maximum likelihood schemes are the totally non-parametric approach the variational calculus treatment allows, and the efficient computational procedure. No time-consuming repeated comparisons between synthetic and observational CMDs are necessary, as the optimal star formation history, independent of any preconceptions or assumptions, is solved for directly.

## 3 Star Formation Histories of the dSph satellites

Application of the variational calculus method to archival HST data for five representative dSph galaxies has been completed by HGVG, where further details may be found. The star formation histories of these five galaxies cover all possible combinations. Stars in UMi are exclusively old, with the star formation history of that galaxy resembling that of a metal-poor Galactic globular cluster. At the other extreme, LeoI shows continuing star formation over all times, rising to a gentle maximum about 3 Gyr ago. Carina illustrates a more constant rate of star formation, though again continuing over the whole history of the Local Group. Its use as a template for the mean star formation history is indeed justified. The CMD and derived star formation history for Carina are shown below (Figure 1), to illustrate the results and their diversity.
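For comparison, the maximum-likelihood SFR on a discretised age grid can also be found without variational machinery: treating each age bin as a mixture component turns the fit into a standard mixture-weight problem, for which the EM update below monotonically increases the likelihood of Eq. (1). This is a hypothetical cross-check, not a reproduction of the HVGG variational scheme.

```python
import numpy as np

def em_sfr(G, dt, n_iter=200):
    """EM-style maximum-likelihood estimate of a discretised SFR(t).
    Each age bin is treated as a mixture component with weight
    w_k = SFR_k * dt; the standard EM update for mixture weights
    is applied, then the weights are converted back to a rate."""
    n, m = G.shape
    w = np.full(m, 1.0 / m)                   # uniform starting guess
    for _ in range(n_iter):
        resp = w * G                          # unnormalised responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                 # M-step for mixture weights
    return w / dt                             # back to an SFR density

# Demonstration: two stars whose likelihood both peaks in the last bin,
# so the recovered SFR should concentrate there.
G_demo = np.array([[0.1, 0.1, 5.0],
                   [0.2, 0.1, 4.0]])
sfr_hat = em_sfr(G_demo, dt=1.0)
```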
A special feature of the method we have developed and applied here is that the derived star formation rate has real units, most conveniently solar masses per Myr, integrated over the whole dSph galaxy. Thus it is straightforward to sum the independent star formation histories, to provide a real history of the dSph galaxy system. This is the star formation history of the metal-poor outer halo. The result is shown in Figure 2. It is worth noting that the decline of star formation to zero recently is an artefact of definition. Those satellite galaxies which are still forming stars today are not dSph, but are Irregulars, notably the LMC and SMC. Both of course are also quite metal rich, and are very large, compared to the metal-poor galaxies of relevance here. The rather massive Sgr dSph is also missing from this study, since inadequate photometric and abundance data exist as yet.

## 4 Conclusions

We have developed a new methodology, which removes the guesswork from derivation of star formation histories corresponding to a given colour-magnitude diagram. Application of this method to HST data for five representative dSph satellite galaxies provides the star formation history of the metal-poor Galactic satellite system. This age distribution can then be directly compared with the equivalent distribution for Galactic halo field stars, and globular clusters. The Galactic field stars are, to better than 90% accuracy, all old. Thus, late Galactic mergers can have formed no more than some 10% of the field halo in the last $`\sim`$10 Gyr. This conclusion is profoundly at variance with standard galaxy formation models, which predict early star formation in dwarfs that later merge to form Milky Way-sized spirals. Either the dwarfs merged before they formed stars, or they never formed.

## References

> Gilmore, G., Hernandez, X., & Valls-Gabaud, D., 1999, in: Astrophysical Dynamics, eds D. Berry, D. Breitschwerdt, A. da Costa, & J.
Dyson, ApSpSci special conference issue, in press
> Hernandez, X., Gilmore, G., & Valls-Gabaud, D., 1999, MNRAS, in press (HGVG)
> Hernandez, X., Valls-Gabaud, D., & Gilmore, G., 1999, MNRAS, 304, 705 (HVGG)
> Ibata, R., Gilmore, G., & Irwin, M., 1995, MNRAS, 277, 781
> Unavane, M., Wyse, R.F.G., & Gilmore, G., 1996, MNRAS, 278, 727
Figure 1: Space trajectories of bright stars within 10 parsecs, over timescales of 0.8 × 10<sup>5</sup> years. Dots are current star positions. Regions of high and low density interstellar matter are denoted, based on results from Genova et al. (1990, ApJ, v355, p150). Note that stars located in the galactic center hemisphere, and at low galactic latitudes, are more likely to be immersed in clouds yielding accretion of ISM onto planetary atmospheres.

# The Galactic Environments of Nearby Cool Stars

Priscilla C. Frisch, University of Chicago, Dept. Astronomy & Astrophysics

## Abstract

The definition of nearby star systems is incomplete without an understanding of the dynamical interaction between the stars and ambient interstellar material. The Sun itself has been immersed in the Local Bubble interior void for millions of years, and entered the outflow of interstellar material from the Scorpius-Centaurus Association within the past $`\sim`$10<sup>5</sup> years. Heliosphere dimensions have been relatively large during this period. A subset of nearby stars have similar recent histories, and astrosphere properties are predictable providing ambient interstellar matter and stellar activity cycles are understood. The properties of astrospheres can be used to probe the interstellar medium, and in turn outer planets are more frequently immersed in raw interstellar material than inner planets.

## Astrospheres of Nearby Stars

Interstellar matter (ISM) governs the interplanetary environment of nearby cool star systems since neutral interstellar gas penetrates stellar wind bubbles (astrospheres). In the case of the heliosphere (the solar wind bubble around the Sun), 98% (by number) of the diffuse gas in the heliosphere is interstellar gas. By analogy with the heliosphere, stellar astrospheres can be modeled by equating the ram pressure of the stellar wind (which depends on activity cycles) and the ram pressure of the surrounding interstellar cloud.
Heliosphere models explain many observable particle populations and phenomena seen by spacecraft, such as the distribution and ionization of interstellar neutrals, the pickup ion and anomalous cosmic ray daughter products, and the distribution of interstellar dust (see Landgraf paper, this volume). Outer planets are more likely to be immersed in raw interstellar material than inner planets. The interstellar pressure on an astrosphere is set by charged ISM components which are excluded from the astrosphere by the Lorentz force – interstellar ions (including those formed by charge exchange in the astropause regions), low energy cosmic rays, and the smallest interstellar dust grains. Astrosphere dimensions for several nearby G stars have been estimated and are shown in the attached table (from Frisch 1993). For example, our nearest star, the Sun, has a heliosphere radius of $`\sim`$120 AU, and a weak bow shock may be located at $`\sim`$200 AU. Heliosphere properties vary with both ambient interstellar properties and with the solar activity cycle, of which the well-known Forbush decrease in cosmic-ray intensity is an example.

## Local Interstellar Matter

We have a basic understanding of the properties of ISM within 25 pc of the Sun. The interstellar cloud surrounding the solar system is warm, partially ionized and low density (T$`\sim`$7000 K, n(H<sup>o</sup>)$`\sim`$0.22 $`\mathrm{cm}^{-3}`$, n(H<sup>+</sup>)$`\sim`$0.1 $`\mathrm{cm}^{-3}`$; e.g. Frisch et al. 1999). The relative cloud-Sun velocity is $`\sim`$26 $`\mathrm{km}\,\mathrm{s}^{-1}`$. The thumbprint of 100–200 $`\mathrm{km}\,\mathrm{s}^{-1}`$ interstellar shocks is apparent from the enhanced abundances of refractory elements observed in nearby ISM, resulting from interstellar grain destruction (Frisch et al. 1999). The distribution of nearby (d$`<`$30 pc) interstellar material is highly asymmetric, with the bulk of material located in the galactic-center hemisphere and at low galactic latitudes (see Fig. 1).
Local ISM is structured, indicating inhomogeneous densities. If a density clump of n(H<sup>o</sup>)=10 $`\mathrm{cm}^{-3}`$ were embedded in the cloud surrounding the solar system and encountered by the Sun, the heliosphere radius would shrink to about 15 AU (from the current $`\sim`$120 AU) and the heliopause would become unstable. The mass density of interstellar neutrals would increase from $`\sim`$1 x 10<sup>-25</sup> to $`\sim`$5 x 10<sup>-24</sup> g $`\mathrm{cm}^{-3}`$ at the 1 AU location of the Earth after such an encounter, dramatically altering the Earth’s interplanetary environment (see Zank and Frisch 1999). Outer planets are therefore more likely to be exposed to raw interstellar material than inner planets. The outflow of interstellar material from the nearby Scorpius-Centaurus Association governs the galactic environment of the Sun and other nearby stars. This author believes that the Sun is immersed in the leading edge of a superbubble shell associated with the latest epoch of star formation in the Scorpius-Centaurus Association (Frisch 1998b). Velocities of nearby ISM clouds cluster about a vector motion consistent with a gas flow from the Scorpius-Centaurus Association. A bulk flow velocity in the LSR of –20 $`\mathrm{km}\,\mathrm{s}^{-1}`$ from the direction l=315<sup>o</sup>, b=–3<sup>o</sup> provides a better match to radial velocities of interstellar clouds observed towards nearby stars than does the LSR ”standard” velocity frame. (This value for the LSR upwind direction depends on the value used for the solar apex motion; a recent apex velocity based on Hipparcos data yields an interstellar flow velocity of V=–15 $`\mathrm{km}\,\mathrm{s}^{-1}`$, arriving from l=344<sup>o</sup>, b=–2<sup>o</sup>; Frisch 1999.)

## Paleoastrospheres

The historical astrosphere of a star can be predicted by comparing the stellar space trajectory with the distribution and motions of interstellar clouds. Space motions of stars can be extrapolated back in time for several million years.
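The heliosphere-shrinking estimate quoted above (a clump of n = 10 cm⁻³ reducing the ∼120 AU heliopause to ∼15 AU) can be roughly reproduced with a pure ram-pressure-balance scaling, R ∝ n^(−1/2). The sketch below is only this leading-order scaling, and the ∼0.32 cm⁻³ reference density (neutrals plus ions) is an assumption taken from the cloud parameters quoted earlier, not a value from the text.

```python
import math

def astropause_radius(r_ref_au, n_ref, n_new):
    """Leading-order ram-pressure standoff scaling, R ~ n^(-1/2),
    for a fixed stellar wind; not a full heliosphere model."""
    return r_ref_au * math.sqrt(n_ref / n_new)

# Reference: ~120 AU heliopause in an assumed total (neutral + ion)
# density of ~0.32 cm^-3; clump density 10 cm^-3 from the text:
r_clump = astropause_radius(120.0, 0.32, 10.0)
# r_clump comes out near 21 AU - the same order as the ~15 AU quoted
# above; the difference reflects physics beyond this crude scaling.
```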
The dynamics of interstellar clouds are governed by star formation activity (for diffuse clouds) and spiral arm patterns (for molecular clouds). The ISM is highly structured, with tenuous hot plasmas and warm diffuse low density material both yielding large astrospheres (although the interplanetary environments differ for these two cases). Astrosphere properties can be predicted based on the properties of the interstellar clouds surrounding each star, and on the relative space motions of the star and ISM. The following nearby stars are predicted to have had astrospheres unchanged over the past several million years, with astropause radii $`\sim`$65-75 AU, based on the space trajectories of each star (from Frisch 1993).

## Conclusions

The physical properties of external planetary systems must necessarily be highly sensitive to the dynamical interactions of stellar astrospheres with ambient interstellar matter. This sensitivity is demonstrated by spacecraft observations of the best-observed example, the solar heliosphere. The space motions of nearby stars demonstrate that nearby planetary systems will have dramatically variable interplanetary environments, depending on the location of the star and the stellar trajectory relative to nearby interstellar clouds (Fig. 1).

## References

* ”The Galactic Environment of the Sun” Frisch, P. 1999, J. Geophysical Research - Blue, in press
* ”Dust in the Local Interstellar Wind” P. Frisch, J. Dorschner, J. Geiss, M. Greenberg, E. Gruen, P. Hoppe, A. Jones, W. Kraetschmer, M. Landgraf, T. Linde, G. Morfill, W. Reach, J. Slavin, J. Svestka, A. Witt, G. Zank, 1999, Ap. J., October 20, 1999 issue
* ”Consequences of a Change in the Galactic Environment of the Sun” G. Zank and P. Frisch, 1999, Ap. J., 518, 596 (June 20, 1999)
* ”Galactic Environments of the Sun and Cool Stars” Frisch, P. 1998a, in press, Planetary Sciences - The Long View, eds. L. M. Celnikier and J.
Tran Than Van, Editions Frontieres
* ”The Local Bubble, Local Fluff, and Heliosphere” Frisch, P. 1998b, in The Local Bubble and Beyond, Proceedings of IAU Colloquium No. 166 (Springer-Verlag, eds. Dieter Breitschwerdt and Michael Freyberg), Lecture Notes in Physics Series, 506, 305
* ”Characteristics of Nearby Interstellar Matter” Frisch, P. 1995, Space Science Rev., 72, 499
* ”G Star Astropauses – A Test for Interstellar Pressure” Frisch, P. 1993, Ap. J., 407, 198
## 1 INTRODUCTION

Electron scattering may be characterized according to distance and time scales (or momentum and energy transfer). At large distances mesons and nucleons are the relevant degrees of freedom. We can study peripheral properties of the nucleon near threshold, or meson exchange processes at high energies. At short distances and short time scales, the coupling involves elementary quark and gluon fields, governed by pQCD, and we can map out parton distributions in the nucleon. At intermediate distances, quarks and gluons are relevant; however, confinement is important, and they appear as constituent quarks and glue. We can study interactions between these constituents via their excitation spectra and wave functions. This is the region I will be focussing on. These regions obviously overlap, and the hope is that hadron structures may eventually be described in a unified approach based on fundamental theory. Because the electro-magnetic and -weak probes are well understood, they are best suited to provide the data for such an endeavor.

### 1.1 Open problems in nucleon structure at intermediate distances

QCD has not been solved for processes at intermediate distance scales and, therefore, the internal structure of nucleons is generally poorly known in this regime. On the other hand, theorists are not challenged due to the lack of high quality data in many areas. The following are areas where the lack of high quality data is most noticeable:

* The electric form factors of the nucleon are poorly known, especially for the neutron, but also for the proton. This means that the charge distribution of the most common form of matter in the universe is virtually unknown.
* What role do strange quarks play in the wave function of ordinary matter?
* The nucleon spin structure has been explored for more than two decades at high energies. The transition from the deep inelastic regime to the confinement regime has not been explored at all.
* To understand the ground state nucleon we need to understand the full excitation spectrum as well as the continuum. Few transitions to excited states have been studied well, and many states are missing from the spectrum as predicted by our most accepted models.
* The role of the glue in the baryon excitation spectrum is completely unknown, although gluonic excitations of the nucleon are likely produced copiously.
* The long-known connection between the deep inelastic regime and the regime of confinement (quark-hadron duality) remained virtually unexplored for decades.

Carrying out an experimental program that will address these questions has become feasible due to the availability of CW electron accelerators, modern detector instrumentation with high speed data transfer techniques, and the routine availability of spin polarization. The main contributor to this field is now the CEBAF accelerator at Jefferson Lab in Newport News, Virginia, USA. A maximum energy of 5.6 GeV is currently available, and the three experimental halls can receive polarized beam simultaneously, with different or the same beam energies.

## 2 ELASTIC SCATTERING

### 2.1 Electromagnetic form factors

This process probes the charge and current distribution in the nucleon in terms of the electric ($`G_E^\gamma `$) and magnetic ($`G_M^\gamma `$) form factors. Early experiments from Bonn, DESY, and CEA showed a violation of the so-called ”scaling law”, which may be interpreted as indicating that the spatial distributions of charge and magnetization are not the same, and the corresponding form factors have different $`Q^2`$ dependencies. The data showed a downward trend for the ratio $`R_{EM}^\gamma =G_E^\gamma /G_M^\gamma `$ as a function of $`Q^2`$. Adding the older and newer SLAC data sets confuses the picture greatly (Figure 1). Part of the data are incompatible with the other data sets. They also do not show the same general trend as the other data sets.
Reliable data were urgently needed to clarify the experimental situation and to constrain theoretical developments. The best way to get reliable data at high $`Q^2`$ is via double polarization experiments, and the first experiments of this type have now produced results. Since the ratio $`R_{EM}^\gamma `$ is accessed directly in the double polarization asymmetry $$A_{\vec{e}\vec{p}}=\frac{k_1R_{EM}^\gamma }{k_2(R_{EM}^\gamma )^2+k_3}$$ this experiment has much lower systematic uncertainties than previous experiments at high $`Q^2`$ (Figure 1). They confirm the trend of the early data, improve the accuracy at high $`Q^2`$ significantly, and extend the $`Q^2`$ range. The data illustrate the power of utilizing polarization in electromagnetic interactions.

### 2.2 Strangeness form factors

From the analysis of deep inelastic polarized structure function experiments we know that the strange quark sea is polarized, and contributes at the 5 - 10% level to the nucleon spin. One may then ask: what are the strange quark contributions to the nucleon wave function and their corresponding form factors? The flavor-neutral photon coupling does not distinguish s-quarks from u- or d-quarks. However, the tiny contribution of the $`Z^o`$ is parity violating, and allows measurement of the strangeness contribution. The effect is measurable due to the interference with the single photon graph. The asymmetry $$A_{\vec{e}p}=\frac{G_FQ^2}{\sqrt{2}\pi \alpha }\frac{ϵG_E^\gamma G_E^Z+\tau G_M^\gamma G_M^Z}{ϵ(G_E^\gamma )^2+\tau (G_M^\gamma )^2}$$ in polarized electron scattering contains combinations of electromagnetic and weak form factors which can be expressed in terms of the electromagnetic and the strangeness form factors ($`G^s`$). For example, the weak electric form factor can be written: $$G_E^Z=\left(\frac{1}{4}-\mathrm{sin}^2\theta _W\right)G_{Ep}^\gamma -\frac{1}{4}(G_{En}^\gamma +G_E^s)$$ A similar relation holds for the magnetic form factors.
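The double polarization asymmetry quoted above, A = k₁R/(k₂R² + k₃), can be inverted for the form-factor ratio R by solving a quadratic. The sketch below uses made-up kinematic factors purely to illustrate the round trip; k₁, k₂, k₃ are not the experiment's actual kinematics.

```python
import math

def ratio_from_asymmetry(A, k1, k2, k3):
    """Invert A = k1*R / (k2*R**2 + k3) for the form-factor ratio R.
    The kinematic factors k1..k3 are placeholders, not real kinematics.
    Of the two quadratic roots, the branch that vanishes as A -> 0
    (so that R is proportional to A for small asymmetries) is kept."""
    disc = k1 ** 2 - 4.0 * k2 * k3 * A ** 2
    if disc < 0.0:
        raise ValueError("asymmetry too large for these kinematics")
    return (k1 - math.sqrt(disc)) / (2.0 * k2 * A)

# Round trip with made-up kinematic factors:
k1, k2, k3 = 0.5, 1.0, 2.0
R_true = 0.8
A = k1 * R_true / (k2 * R_true ** 2 + k3)
R_rec = ratio_from_asymmetry(A, k1, k2, k3)   # recovers R_true
```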
The $`G^s`$ form factors can be measured since the $`G^\gamma `$ form factors are known. The elastic $`\vec{e}p`$ results of the JLAB HAPPEX experiment, measured at $`Q^2=0.47GeV^2`$, show that strangeness contributions are small, consistent with zero, when measured in a combination of $`G_E^s`$ and $`G_M^s`$ : $$G_E^s+0.4G_M^s=0.023\pm 0.034(stat)\pm 0.022(syst)\pm 0.026(G_E^n)$$ At least a factor 2 smaller statistical error will be obtained in the 1999 run, so that the systematic error is limited by our knowledge of the neutron electric form factor! New measurements of $`G_E^n`$ should remedy this situation .

## 3 NUCLEON SPIN STRUCTURE - FROM SHORT TO LARGE DISTANCES

The nucleon spin has been of central interest ever since the EMC experiment found that at small distances the quarks carry only a fraction of the nucleon spin. Going from shorter to larger distances the quarks are dressed with gluons and $`q\overline{q}`$ pairs and acquire more and more of the nucleon spin. How is this process evolving with the distance scale? At the two extreme kinematic regions we have two fundamental sum rules: the Bjorken sum rule (Bj-SR), which holds for the proton-neutron difference in the asymptotic limit, and the Gerasimov Drell-Hearn sum rule (GDH-SR) at $`Q^2=0`$: $$I_{GDH}=\frac{M^2}{8\pi ^2\alpha }\int \frac{\sigma _{1/2}(\nu ,Q^2=0)-\sigma _{3/2}(\nu ,Q^2=0)}{\nu }d\nu =-\frac{1}{4}\kappa ^2.$$ The integral is taken over the entire inelastic energy regime. The quantity $`\kappa `$ is the anomalous magnetic moment of the target. One connection between these regions is given by the constraint due to the GDH-SR – it defines the slope of the Bjorken integral ($`\mathrm{\Gamma }_1^{pn}(Q^2)=\int g_1^{pn}(x,Q^2)dx`$) at $`Q^2=0`$: $$I_{GDH}^{pn}(Q^2\to 0)=\frac{2M^2}{Q^2}\mathrm{\Gamma }_1^{pn}(Q^2\to 0)$$ Phenomenological models have been proposed to extend the GDH-SR integral for the proton and neutron to finite $`Q^2`$ and connect it to the deep inelastic regime .
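The GDH sum-rule targets are easy to evaluate numerically: the snippet below simply computes −κ²/4 for the proton and neutron anomalous magnetic moments (κ_p ≈ 1.793, κ_n ≈ −1.913, in nuclear magnetons) and their difference, showing how small the proton-neutron combination is compared to either nucleon alone.

```python
# GDH sum-rule targets, I_GDH = -kappa**2 / 4, from the anomalous
# magnetic moments (values in nuclear magnetons):
kappa_p = 1.7928
kappa_n = -1.9130

I_p = -kappa_p ** 2 / 4.0        # proton:  about -0.80
I_n = -kappa_n ** 2 / 4.0        # neutron: about -0.91
I_pn = I_p - I_n                 # p-n difference: about +0.11
# The p-n combination is an order of magnitude smaller than either
# nucleon's integral, consistent with the reduced resonance
# contributions discussed in the text.
```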
An interesting question is whether the transition from the Bj-SR to the GDH-SR for the proton-neutron difference can be described in terms of fundamental theory. While for the proton and neutron alone the GDH-SR is nearly saturated by low-lying resonances, with the largest contributions coming from the excitation of the $`\mathrm{\Delta }(1232)`$, this contribution is absent in the pn difference. Other resonance contributions are reduced as well, and the $`Q^2`$ evolution may take on a smooth transition to the Bj-SR regime. A crucial question in this connection is how low in $`Q^2`$ the Bj-SR can be evolved using the modern techniques of higher order QCD expansion. Recent estimates suggest as low as $`Q^2=0.5`$ GeV<sup>2</sup>. At the other end, at $`Q^2=0`$, where hadrons are the relevant degrees of freedom, chiral perturbation theory is applicable at very small $`Q^2`$, and may allow evolution of the GDH-SR to finite $`Q^2`$. Theoretical effort is needed to bridge the remaining gap. The importance of such efforts cannot be overemphasized, as it would mark the first time that hadronic structure is described by fundamental theory in the entire kinematic regime, from short to large distances. Experiments have been carried out at JLAB on $`NH_3`$, $`ND_3`$, and $`{}_{}{}^{3}He`$ targets to extract the $`Q^2`$ evolution of the GDH integral for protons and neutrons in a range of $`Q^2=0.1`$–$`2.0`$ GeV<sup>2</sup>, and from the elastic to the deep inelastic regime. Results are expected in the year 2000. Figure 2 shows a raw asymmetry from an experiment on polarized $`NH_3`$. The positive elastic asymmetry, the negative asymmetry in the $`\mathrm{\Delta }`$ region, and the switch back to positive asymmetry for higher mass resonances and the high energy continuum are evident.

## 4 EXCITATION OF NUCLEON RESONANCES

A large effort is being expended on the study of excited states of the nucleon.
The transition form factors contain information on the spin structure of the transition and the wave function of the excited state. We test predictions of baryon structure models and strong interaction QCD. Another aspect is the search for, so far, unobserved states which are missing from the spectrum but are predicted by the QCD inspired quark model . Also, are there states other than $`|Q^3>`$ states? Gluonic excitations of the nucleon, i.e. $`|Q^3G>`$ states, should be copious, and some resonances may be “molecules” of baryons and mesons, $`|Q^3Q\overline{Q}>`$. Finding at least some of these states is important to clarify the intrinsic quark-gluon structure of baryons and the role played by the glue and mesons in hadron spectroscopy and structure. Electroproduction is an important tool in these studies as it probes the internal structure of hadronic systems. The scope of the $`N^{*}`$ program at JLAB is to measure many of the possible decay channels of resonances in a large kinematic range.

### 4.1 The $`\gamma N\to \mathrm{\Delta }`$ transition

The lowest excitation of the nucleon is the $`\mathrm{\Delta }(1232)`$ ground state. The electromagnetic excitation is due dominantly to a quark spin flip corresponding to a magnetic dipole transition. The interest today is in measuring the small electric and scalar quadrupole transitions, which are predicted to be sensitive to possible deformation of the nucleon or the $`\mathrm{\Delta }(1232)`$ . Contributions at the few % level may come from the pion cloud at large distances, and gluon exchange at small distances. An intriguing prediction is that in the hard scattering limit the electric quadrupole contribution should be equal in strength to the magnetic dipole contribution. An experiment at JLAB Hall C measured $`p\pi ^o`$ production in the $`\mathrm{\Delta }(1232)`$ region at high momentum transfer, and found values for $`|E_{1+}/M_{1+}|<5\%`$ at $`Q^2=4GeV^2`$.
There are no indications that the asymptotic value of +1 may be reached soon.

### 4.2 Higher mass resonances

The inclusive spectrum shows only 3 or 4 enhancements; however, more than 20 states are known in the mass region up to 2 GeV. By measuring the electromagnetic transitions of many of these states we can study symmetry properties between excited states and obtain a more complete picture of the nucleon structure. For example, in the single-quark-transition model only one quark participates in the interaction. It predicts transition amplitudes for a large number of states based on a few measured amplitudes. The goal of the N\* program at JLAB with the CLAS detector is to provide data in the entire resonance region, by measuring most channels, with large statistics, including many polarization observables. The yields of several channels recorded simultaneously are shown in Figures 3 and 4. Resonance excitations seem to be present in all channels. These yields illustrate how the various channels have different sensitivity to various resonance excitations. For example, the $`\mathrm{\Delta }^{++}\pi ^{-}`$ channel clearly shows resonance excitation near 1720 MeV, while single pion production is more sensitive to a resonance near 1680 MeV . The $`p\omega `$ channel shows resonance excitation near threshold, similar to the $`p\eta `$ channel. No resonance has been observed in this channel so far. For the first time, $`n\pi ^+`$ electroproduction has been measured throughout the resonance region. Figure 4 illustrates the vast improvement in data volume for the $`\mathrm{\Delta }^{++}\pi ^{-}`$ channel. The top panel shows DESY data taken more than 20 years ago. The other panels show samples of the data taken so far with CLAS. At higher $`Q^2`$, resonance structures not seen before in this channel are revealed.

### 4.3 Missing quark model states

These are states predicted in the $`|Q^3>`$ model to populate the mass region around 2 GeV.
However, they have not been seen in $`\pi N`$ elastic scattering, our main source of information on the nucleon excitation spectrum. How do we search for these states? Channels which are predicted to couple strongly to these states are $`N(\rho ,\omega )`$ or $`\mathrm{\Delta }\pi `$. Some may also couple to $`KY`$ or $`p\eta ^{\prime }`$ . Figure 5 shows preliminary data from CLAS on $`\omega `$ production on protons. The process is expected to be dominated by diffraction-like $`\pi ^o`$ exchange with strong peaking at forward $`\omega `$ angles, or low t, and a monotonic fall-off at large t. The data show clear deviations from the smooth fall-off in the W range near 1.9 GeV, where some of the “missing” resonances are predicted, in comparison with the high W region. Although indications for resonance production are strong, analysis of more data and a full partial wave study are needed before definite conclusions may be drawn. The SAPHIR experiment, with an analysis of just 250 $`p\eta ^{\prime }`$ events at ELSA, found evidence for two states with masses of 1.9 and 2.0 GeV . The quark model indeed predicts two resonances in this mass range with coupling to the $`N\eta ^{\prime }`$ channel. CLAS has already collected 50,000 $`\eta ^{\prime }`$ events in photoproduction, and many more are forthcoming later this year. Production of $`\eta ^{\prime }`$ has also been observed in electron scattering for the first time with CLAS. This channel may also provide a new tool in the search for missing states. $`K\mathrm{\Lambda }`$ or $`K\mathrm{\Sigma }`$ production may be yet another source of information on resonant states. The K$`\mathrm{\Lambda }`$ data from SAPHIR show a bump near W = 1.72 GeV, which could be due to resonance decay of the $`P_{11}(1710)`$ and $`S_{11}(1650)`$, both of which couple to the K$`\mathrm{\Lambda }`$ channel. Possible resonance excitation is also seen in $`K\mathrm{\Lambda }(1520)`$ production at SAPHIR, compatible with a predicted state with a mass near 2 GeV.
New data with much higher statistics are being accumulated with the CLAS detector, both in photo- and electroproduction. Strangeness production could open up a new window for light quark baryon spectroscopy, not available in the past. ## 5 QUARK-HADRON DUALITY I began my talk by expressing the expectation that we may eventually arrive at a unified description of hadronic structure from short to large distances. Then there should be obvious connections visible in the data between these regimes. Strong connections have indeed been observed by Bloom and Gilman, in the observation that the scaling curves from the deep inelastic cross sections also describe the average inclusive cross sections in the resonance region. This observation has recently been given further empirical support using inclusive ep scattering data from JLAB. Remarkably, elastic form factors or resonance excitations of the nucleon can be predicted approximately just using data from inclusive deep inelastic scattering at completely different momentum transfers. Figure 6 shows the ratio of measured integrals over resonance regions, and predictions using deep inelastic data only. The agreement is surprisingly good, though not perfect, indicating that the concept of duality is likely a non-trivial consequence of the underlying dynamics. ## 6 OUTLOOK The ongoing experimental effort will provide us with a wealth of data in the first decade of the next millennium to address many open problems in hadronic structure at intermediate distances. The experimental effort must be accompanied by a significant theoretical effort to translate this into real progress in our understanding of the complex regime of strong interaction physics. New instrumentation will become available, e.g. the $`G^0`$ experiment at JLAB, allowing a broad program in parity violation to study strangeness form factors in electron scattering in a large kinematics range. Moreover, there are new physics opportunities on the horizon.
Recently, it was shown that in exclusive processes the soft part and the hard part factorize for longitudinal photons at sufficiently high $`Q^2`$. A new set of “skewed parton distributions” can then be measured which are generalizations of the inclusive structure functions measured in deep inelastic scattering. For example, low-t $`\rho `$ production probes the unpolarized parton distributions, while pion production probes the polarized structure functions. Experiments to study these new parton distributions need to have sufficient energy transfer and momentum transfer to reach the pQCD regime, high luminosity to measure the small exclusive cross sections, and good resolution to isolate exclusive reactions. This new area of research may become a new frontier of electromagnetic physics well into the next century. To accommodate new physics requirements, an energy upgrade in the 10-12 GeV range has been proposed for the CEBAF machine at JLAB. This upgrade will be accompanied by the construction of a new experimental hall for tagged photon experiments with a 4$`\pi `$ solenoid detector to study exotic meson spectroscopy, and production of other heavy mesons. Existing spectrometers in Hall C will be upgraded to reach higher momenta and improvements of CLAS will allow it to cope with higher multiplicities. This will give us access to kinematics where copious hybrid meson production is expected, higher momentum transfer can be reached for form factor measurements, and we may begin to map out the new generalized parton distributions.
no-problem/9910/astro-ph9910148.html
# N-body simulations of globular cluster tides ## 1 Introduction Globular clusters are fascinating systems since, contrary to their apparent geometrical simplicity, they are the sites of many complex physical phenomena. The two-body relaxation time is on average of the order of 10<sup>9</sup> yr, shorter than their life-time, so that the relaxation is very efficient, especially in the core, where the memory of the cluster’s initial conditions is expected to be washed out (see the review by Spitzer 1987). The result of this relaxation is a slow collapse of the core, while the less bound stars in the envelope evaporate. Moreover, the relaxation tends to establish equipartition of energy, and mass segregation, so that the low-mass stars are preferentially expelled into the envelope. Mass loss due to stellar evolution is no longer significant for the old clusters in our Galaxy, but internal dynamical evolution alone could destroy a large fraction of the population (Hénon 1961). In terms of the half-mass relaxation time t<sub>rh</sub>, the collapse of the core occurs in ∼ 15 t<sub>rh</sub>, while total evaporation occurs in ∼ 100 t<sub>rh</sub>. The core collapse is however unspectacular, involving less than 1% of the stars; it can be reversed, and gravothermal oscillations can occur, depending on the amount of heating provided by binaries (Makino 1996, Kim et al. 1998). External perturbations can considerably accelerate the evolution of globular clusters: compressive shocks at the crossing of the galactic plane, tidal interaction with the bulge, or with the dark matter halo. These external perturbations do much more than tidally limit a cluster in a circular orbit, which would correspond to a time-independent external potential; they also cause tidal stripping and heating of the cluster (Allen & Richstone 1988). Paradoxically, they also accelerate the core collapse (Spitzer & Chevalier 1973).
Much effort has been devoted to quantifying the dynamical evolution of clusters, since it is crucial to be able to deduce their initial number and distribution, and thereby to go back to the formation of the Galaxy. This has been done with the Fokker-Planck method, orbit-averaging the relaxation effects, and estimating the tidal shocks through impulse approximations (Chernoff et al. 1986, Aguilar et al. 1988). It was already concluded that the number of globular clusters remaining now is a small fraction of those formed initially (Aguilar et al. 1988). The majority of them were destroyed early in the evolution of the Galaxy, through violent dynamical interactions. Kundić & Ostriker (1995) and Gnedin & Ostriker (1997) have revised these estimates, showing that the second-order tidal shocking ($`<\mathrm{\Delta }E^2>`$) is even more important than the first order ($`<\mathrm{\Delta }E>`$); the corresponding shock-induced relaxation could dominate the two-body relaxation. It follows that about 75% of all present globular clusters will be destroyed in the next Hubble time, which is compatible with observational estimates (Hut & Djorgovski 1992). Since the more fragile ones are already missing today, and in particular there is a depletion of clusters orbiting within the central 3 kpc of the galaxy, it is likely that the initial cluster population was more than an order of magnitude more numerous than today. Many uncertainties remain when trying to quantify the present mass loss of globular clusters. The effect of irregularities in the disk potential on cluster evolution has been studied: Giant Molecular Clouds (Chernoff et al. 1986), spiral arms and bars (Ostriker et al. 1989, Long et al. 1992) are found to be only a secondary effect in the destruction of the clusters, the crossing of a thin plane being the dominant factor (and also the bulge crossing in the inner parts, Nordquist et al. 1999).
Multi-mass models undergo more rapid evolution than single-mass models (Lee & Goodman 1995): the rate of mass loss can be doubled per half-mass relaxation time. Internal rotation of clusters, which was more important in the past, might be a significant factor too. Recently Grillmair et al. (1995) observed the outer parts of 12 Galactic globular clusters, using deep two-color star counts. They discovered huge tidal tails, consisting of stars escaping the clusters, which can help to quantify the mass loss and to put some constraints on the cluster orbits. We have also carried out such a photometric study of 20 Galactic clusters (Leon et al. 1999), and report on the characteristics of the tidal tails, once the main observational biases (extinction, background galaxy clusters, etc.) are taken into account. In the present work, we try to reproduce the observations, in order to better quantify the effect of external perturbations from the Galaxy on clusters, with mass segregation and rotation included. In particular we want to identify the fate of escaped stars, and to relate the tidal tail morphology to the cluster orbits. In Section 2 we describe the methods used, together with the models adopted for the globular clusters and the Galaxy potential; the results of the simulations are described in Section 3, and Section 4 summarizes the results. ## 2 Numerical methods ### 2.1 Overview One of the most important problems in simulating the dynamical evolution of globular clusters is the wide range of time-scales involved. Two kinds of methods have been used. The first is an N-body integration with various algorithms, following the internal stellar orbits, which have a dynamical time of the order of 1 Myr; this method is expensive, since the total simulation must be carried out over Gyrs, and the two-body relaxation might be over- or under-estimated, depending on the number of particles used and the softening.
Alternatively, the Fokker-Planck or Monte Carlo methods are used, with orbit-averaging and diffusion coefficients to take into account the two-body relaxation effects. But then external gravitational perturbations cannot be evaluated exactly, and approximations are used, such as adiabatic invariants, the impulse approximation, steady potentials, etc. Weinberg (1994a,b,c) has shown that the adiabatic approximation, which consists in neglecting the effect of perturbations slow enough with respect to the internal stellar periods, is generally not valid for stellar systems, which are heated even by slow perturbations. Also widely used is the impulse approximation, which assumes a very rapid perturbation (or shock) with respect to internal motions; however, this is only a crude estimate (Johnston et al. 1999). Adiabatic corrections are required, and are rather critical, since they can multiply the cluster destruction time by a factor of two (Gnedin et al. 1998, 1999, Gnedin & Ostriker 1999). Oh et al. (1992a,b, 1995) developed a hybrid method, using the Fokker-Planck equations for the cluster center, monitoring the effects of two-body relaxation, and a three-body integration for the envelope. They were able to follow escaped particles out to 10 initial limiting radii after 20 orbits around the galaxy. Recently Johnston et al. (1999) used the SCF (self-consistent field) method developed by Hernquist & Ostriker (1992), which allows clusters to be simulated with the actual number of stars (of the order of 10<sup>6</sup>). The Poisson noise therefore has its natural amplitude; however, the two-body relaxation is under-estimated, except in rare cases when a Fokker-Planck diffusion scheme is added to the N-body code. For a long time, Fokker-Planck calculations appeared to yield much shorter lifetimes for clusters in the Galaxy than N-body simulations. Both methods are now converging (Takahashi & Portegies Zwart 1999).
Our aim here is to determine and quantify the tidal effects of the Galaxy on a globular cluster along one orbit, on a time-scale of the order of t<sub>rh</sub>; we do not focus on long-term effects, and do not follow the cluster until its destruction. The two-body relaxation is only approximated, and we do not take into account dynamical friction, which occurs on a very long time-scale and for very massive clusters only. We assume the globular cluster to be old enough that the effects of stellar evolution are negligible. We do, however, take into account the mass segregation inside the cluster, since it can affect the tidal tail behaviour or the mass loss, most of the low stellar masses being confined to the outer parts. We will focus on the characteristics of the tidal tails, their amplitude and 3D shapes, in order to compare with observations, and thereby to put constraints on the globular cluster orbits and mass loss. ### 2.2 Algorithm The N-body code used is an FFT algorithm, using the method of James (1977) to avoid the periodic images. This method finds correction charges on the 2D boundary surfaces which, once convolved with the Green function, cancel out the effect of images. It considerably increases the efficiency of the FFT method, especially in 3D, since it avoids multiplying by 8 the volume in which the FFT is computed. We used a 128<sup>3</sup> grid with N=1.5 10<sup>5</sup> particles, which required 2.7s of CPU per time step on a Cray-C94. The Green function used is the g<sub>2</sub> function from James (1977), so that the deviation from the Newtonian law is of the order of $`R^{-5}`$ at large distance $`R`$; the resulting softening parameter is of the order of the grid size, i.e. 1 pc. The units used in the simulations are pc, km/s, Myr, and G=1 (the corresponding unit of mass is then 2.32 10<sup>2</sup> M<sub>☉</sub>). ### 2.3 Cluster model We used several initial cluster models, built from Michie-King multi-mass distributions.
We divide the particles into 10 mass bins, distributed logarithmically. The stellar masses in old globular clusters range essentially from 0.12 to 1.2 M<sub>☉</sub>, so we simulate a mass spectrum over one decade. Although the mass turn-over is around 0.8 M<sub>☉</sub> for such old star populations, one should also take into account the remnants of massive stars, which contribute to replenish the high-mass end. We adopt the Salpeter mass function for the spectrum, i.e. $`dN/dm\propto m^{-2.35}`$. With such a spectrum, which represents the observations quite well (Richer et al. 1991), the total number of stars is of the order of 1.2 10<sup>6</sup> for a cluster mass of 3 10<sup>5</sup> M<sub>☉</sub> (or an average mass of 0.24 M<sub>☉</sub>). Since we use a constant total number of particles of 1.6 10<sup>5</sup> for all our models, whatever their total mass, there will not be an exact correspondence between particles and stars, but the particles are statistically representative of the stars. Therefore, we ignore the possible depletion of stars at both ends, and only consider power-law mass spectra within the two mass limits. To find the distribution function for each mass class, we integrate the Poisson equation iteratively, with the method described by da Costa & Freeman (1976). The starting solution for each mass is the single-mass distribution function $$f(ϵ)=\rho _0(2\pi \sigma _0^2)^{-3/2}(e^{ϵ/\sigma _0^2}-1),ϵ>0$$ where $`ϵ=\mathrm{\Psi }-\frac{1}{2}v^2`$ and $`\mathrm{\Psi }(r)=\mathrm{\Phi }(r_t)-\mathrm{\Phi }(r)`$, $`\mathrm{\Phi }`$ being the gravitational potential, $`\sigma _0`$ the central velocity dispersion and $`r_t`$ the tidal radius. Each model is determined by three initial parameters: the King core radius r<sub>0</sub>, the depth of the potential $`W_0`$, and the central velocity dispersion $`\sigma _0`$. The central density $`\rho _0`$ is then derived through the relation: $$r_0^2=\frac{9\sigma _0^2}{4\pi G\rho _0}$$ The velocity dispersions for each mass class are determined through equipartition of energy.
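The starting solution and the core-radius relation above are straightforward to evaluate. A minimal sketch in the units of Sect. 2.2 (pc, km/s, M<sub>☉</sub>), with an assumed value of G; the function names are illustrative, not those of the code used here:

```python
import math

# G in pc (km/s)^2 / M_sun; note 1/G ~ 2.32e2 M_sun, the code mass unit when G = 1
G = 4.301e-3

def central_density(r0, sigma0):
    """Central density rho_0 (M_sun/pc^3) from r0^2 = 9 sigma0^2 / (4 pi G rho_0)."""
    return 9.0 * sigma0**2 / (4.0 * math.pi * G * r0**2)

def f_start(eps, rho0, sigma0):
    """Starting (lowered Maxwellian) distribution function; zero for eps <= 0."""
    if eps <= 0.0:
        return 0.0
    return rho0 * (2.0 * math.pi * sigma0**2) ** -1.5 * (math.exp(eps / sigma0**2) - 1.0)
```

For instance, a core radius of 1 pc and $`\sigma _0`$ = 7 km/s imply a central density of order 10<sup>4</sup> M<sub>☉</sub> pc<sup>-3</sup>.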
Only a few iterations are required for a relative accuracy of 10<sup>-3</sup>, and the resulting solution gives the total mass, the limiting radius, and the final radial density distributions plotted in Fig. 1. The degree of mass segregation can be estimated from Fig. 3. ### 2.4 Rotation Rotation is at present very weak in globular clusters. It has been measured convincingly only in the brightest cluster, $`\omega `$ Centauri (Meylan & Mayor 1986, Merritt et al. 1997), where the rotation is almost solid-body out to about 15% of the tidal radius, and then falls off quickly. It seems that rotation is correlated with flattening (Meylan & Mayor 1986, Davoust & Prugniel 1990), which is compatible with the result of an ”isotropic oblate rotator” found in $`\omega `$ Cen. The average observed flattening is $`b/a`$ = 0.9, but an important fraction of the clusters have axis ratios smaller than that, between 0.8-0.9. Since two-body relaxation is efficient in the core, and erases all possible primordial anisotropy, the flattening of globular clusters is likely due to rotation, contrary to elliptical galaxies. The influence of rotation on the dynamical evolution of clusters has been investigated by Lagoute & Longaretti (1996) and Longaretti & Lagoute (1997a,b). The rate of evaporation is increased significantly (by up to a factor of 3 to 4) per relaxation time, although the latter is somewhat lengthened. Stellar escape reduces the amount of rotation and flattening, a result compatible with the observed decrease of flattening with age (Frenk & Fall 1982). Globular clusters with shorter relaxation times are also rounder (Davoust & Prugniel 1990), which supports the loss of rotation with relaxation. Gravitational shocks at disk crossing produce an apparent flattening, mainly parallel to the galactic plane.
It has long been argued that the observed flattening could not be due to the galactic tidal field, because its direction is not aligned with the galactic center (see Lagoute & Longaretti 1996); but this is not the expectation for compressive shocks. Also, it is possible that tidal interactions increase the rotation of the clusters. These effects are investigated below. To incorporate rotation in the initial cluster models, a distribution function $`f(ϵ,L_z)`$ should be chosen, corresponding to a flattened density distribution $`\rho (r,z)`$; however, only a few studies have been developed in this domain (cf. Wilson 1975, Dejonghe 1986), and no analytic function has been found corresponding to the likely rotation curve of clusters. The density is only a function of the even distribution $`f_+=0.5(f(ϵ,L_z)+f(ϵ,-L_z))`$, since of course the sense of rotation does not influence the spatial density, so that for a given flattened cluster an infinite number of distribution functions could be chosen, the odd function $`f_{-}=0.5(f(ϵ,L_z)-f(ϵ,-L_z))`$ determining the rotational velocity. Since the rotation is only a very weak effect, we have selected another scheme to introduce it. From a non-rotating cluster model, we select a certain fraction of the particles, and reverse the sign of their velocity (from $`𝐯`$ to $`-𝐯`$) if their projected angular momentum $`L_z`$ is not positive. This process consists in introducing an $`L_z`$-odd part $`f_{-}`$ in the initial distribution. This algorithm allows control of the amount of rotation through the fraction selected, as a function of radius. In practice, selecting particles whatever their radius already results in a rotation curve very compatible with that observed for $`\omega `$ Cen by Merritt et al. (1997). The cluster is not exactly in equilibrium at the start, but is left to violently relax (varying potential) during a few crossing times (∼ 10 Myr), and quickly reaches a flattened relaxed state, with a flattening of the order $`b/a`$ ∼ 0.9-0.95.
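The velocity-flipping scheme just described reduces to one comparison per particle. A pure-Python sketch (the function and variable names are ours, not those of the code used in the paper):

```python
import random

def inject_rotation(pos, vel, fraction, seed=0):
    """Give the cluster net rotation about z: for a randomly selected
    fraction of the particles, reverse v -> -v whenever L_z <= 0.
    Positions, and hence the density (the even part f_+), are untouched;
    only the L_z-odd part f_- of the distribution changes."""
    rng = random.Random(seed)
    out = []
    for (x, y, z), (vx, vy, vz) in zip(pos, vel):
        lz = x * vy - y * vx
        if rng.random() < fraction and lz <= 0.0:
            vx, vy, vz = -vx, -vy, -vz
        out.append((vx, vy, vz))
    return out
```

With fraction = 1 every particle ends up with $`L_z`$ ≥ 0, while all speeds, and thus the kinetic energy, are strictly conserved.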
The resulting rotation profile is displayed in Fig. 2. Table 1 displays the principal parameters of the cluster models: the mass, the concentration c = log ($`r_t/r_c`$), the core radius $`r_c`$ where the surface density is halved, the half-mass radius $`r_h`$, the tidal radius $`r_t`$, which is here initially the limiting radius of the King-Michie model, the depth of the central potential $`W_0=\mathrm{\Psi }_0/\sigma _0^2`$, the central number density of stars n<sub>0</sub>, the central relaxation time t<sub>r0</sub>, and the half-mass relaxation time t<sub>rh</sub>, as defined by Spitzer (1987): $$t_{r0}=\frac{0.065v_m^3}{n_0m^2G^2ln\mathrm{\Lambda }}$$ and $$t_{rh}=0.138\frac{N^{1/2}r_h^{3/2}}{m^{1/2}G^{1/2}ln\mathrm{\Lambda }}=\frac{1.7\times 10^{-4}r_h(\mathrm{pc})^{3/2}N^{1/2}}{m(M_{\odot })^{1/2}}\mathrm{Gyr}$$ where $`N`$ is the true star number, $`m`$ the mean stellar mass, and $`v_m^2`$ the mean square velocity (and $`ln\mathrm{\Lambda }\approx `$ 12); the ratio N/N<sub>s</sub> between the actual star number N and the simulated number of particles N<sub>s</sub> is also given. The nature of the orbit is also shown (together with the Galaxy model, see below), and the effective peri- and apocenter values. Let us note that the concentrations chosen are in the low range, because of computing constraints: large concentrations require a high spatial resolution, i.e. a large number of particles. ### 2.5 Galaxy model We model the potential of the Galaxy by three components: bulge, disk and dark matter halo. The bulge is a spherical Plummer law: $$\mathrm{\Phi }_b(r)=-GM_b(r^2+a_b^2)^{-1/2}$$ corresponding to the total mass $`M_b`$ and characteristic size $`a_b`$ ($`r`$ is the spherical radius); the disk is a Miyamoto-Nagai model, with mass $`M_d`$ and scale parameters $`a_d`$ and $`h_d`$: $$\mathrm{\Phi }_d(r_c,z)=-GM_d\left(r_c^2+(a_d+\sqrt{z^2+h_d^2})^2\right)^{-1/2}$$ where $`r_c`$ is the cylindrical radius.
The dark matter halo is added to obtain a flat Galactic rotation curve $`V=V_h`$ in the outer parts, i.e.: $$\mathrm{\Phi }_h(r)=\frac{1}{2}V_h^2ln(r^2+a_h^2)$$ We tried some extreme models, maximum disk or not, to explore all possibilities for the disk mass, which can give very different disk-shocking efficiencies for the same rotation curve. The first two models (Gal-1 and Gal-2, see Tables 1 and 2) have spherical dark matter haloes, but Gal-2 has the more concentrated disk, so that its disk surface density is much higher towards the center. Both have comparable rotation curves in the Galaxy plane. The thickness of Gal-2 is equivalent to that of an exponential of scale-height 300 pc, still somewhat higher than the recent estimate of 250 pc (Haywood et al. 1997) for the Milky Way. A summary of all parameters used is displayed in Table 2, and the resulting rotation curve of Gal-2 is compared in Fig. 4 to the observed Milky Way rotation curve. The density distribution in the first two models of the galaxy is displayed through density contours in central cuts in Fig. 5. The third model, Gal-3, has been run to test an extreme flattening of the dark matter halo component. For that reason, we have not chosen to generalise the above spherical logarithmic potential to isopotentials on concentric ellipsoids, since at large flattening it corresponds to unphysical density distributions. In this third model, the visible components are of the same form as previously, and the dark halo mass density is given by a pseudo-isothermal ellipsoid (Sackett & Sparke 1990): $$\rho _h(r_c,z)=\rho _0\left[1+\frac{r_c^2+z^2/q^2}{a_h^2}\right]^{-1}$$ where $`\rho _0`$ is the central density, $`a_h`$ the core radius, and $`q`$ is the axial ratio of the isodensity curves, which vary from spherical ($`q`$ = 1) to flattened ellipsoids ($`q<`$ 1).
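For the first two models, the rotation curve compared to observations in Fig. 4 is the quadratic sum of the three contributions, v<sub>c</sub><sup>2</sup> = r dΦ/dr evaluated in the z = 0 plane. A sketch with illustrative parameter values (the actual ones are in Table 2, which is not reproduced here; G is the assumed astrophysical value):

```python
import math

G = 4.301e-3  # pc (km/s)^2 / M_sun

def vcirc(r, Mb, ab, Md, ad, hd, Vh, ah):
    """Circular velocity (km/s) at cylindrical radius r (pc) in the z=0 plane
    for Plummer bulge + Miyamoto-Nagai disk + logarithmic halo
    (masses in M_sun, radii in pc, Vh in km/s)."""
    v2 = G * Mb * r**2 / (r**2 + ab**2) ** 1.5            # Plummer bulge
    v2 += G * Md * r**2 / (r**2 + (ad + hd)**2) ** 1.5    # Miyamoto-Nagai disk at z=0
    v2 += Vh**2 * r**2 / (r**2 + ah**2)                   # logarithmic halo
    return math.sqrt(v2)
```

By construction the curve vanishes at r = 0 and tends to $`V_h`$ at large radii, where the halo dominates.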
The potential corresponding to this density is: $$\mathrm{\Phi }_h(r_c,z)=2\pi Gq\rho _0a_h^2\int _0^{1/q}\frac{ln\left[1+\frac{x^2}{a_h^2}(\frac{r_c^2}{x^2ϵ^2+1}+z^2)\right]dx}{x^2ϵ^2+1}$$ where $`ϵ^2=1-q^2`$. Forces can be derived analytically, in cylindrical coordinates (Sackett et al. 1994) and in ellipsoidal coordinates (de Zeeuw & Pfenniger 1988). This model also gives an asymptotically flat rotation curve ($`V_h`$), which we will refer to instead of the central density $`\rho _0`$: $$V_h^2=4\pi G\rho _0a_h^2qArccos(q)/ϵ$$ and the mass included in the ellipsoid of axes $`a`$ and $`aq`$ is: $$M_h=\frac{V_h^2a_hϵ}{GArccos(q)}\left[\frac{a}{a_h}-Arctan\frac{a}{a_h}\right]$$ The model used (Gal-3, parameters in Table 2) is chosen with an extreme flattening of $`q=0.2`$ to probe the effect. In these potentials, essentially two kinds of orbits were selected: nearly polar orbits, and ”disk” orbits, in which the cluster crosses the disk very frequently (see Fig. 6). ## 3 Results and Discussion ### 3.1 Relaxation First we want to estimate the degree of relaxation provided by our N-body scheme. The relaxation in actual star clusters is due to the granularity of the potential created by the finite number of stars. This granularity is larger with a reduced number of particles, so the relaxation is accelerated by simulating a number of particles smaller than the real one. On the contrary, the two-body relaxation is reduced by the softening of the potential at small scales. The two effects partially compensate in a complex way, and we can only estimate the resulting rate of relaxation numerically. In any case, even if the resulting relaxation time is comparable to the actual one, the effects will not be the same in terms of spatial frequency.
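The relaxation times compared in this section follow from the Spitzer (1987) formulae of Sect. 2.4. A sketch of t<sub>rh</sub> in the units used here (assumed G value; 1 pc/(km/s) ≈ 0.978 Myr):

```python
import math

G = 4.301e-3           # pc (km/s)^2 / M_sun
MYR_PER_UNIT = 0.978   # 1 pc/(km/s) expressed in Myr

def t_rh_gyr(N, r_h, m, ln_lambda=12.0):
    """Half-mass relaxation time (Spitzer 1987) in Gyr, for N stars of
    mean mass m (M_sun) and half-mass radius r_h (pc)."""
    t = 0.138 * math.sqrt(N) * r_h**1.5 / (math.sqrt(m * G) * ln_lambda)
    return t * MYR_PER_UNIT / 1.0e3  # pc/(km/s) -> Myr -> Gyr
```

With N = 1.2 10<sup>6</sup> stars of mean mass 0.24 M<sub>☉</sub> and r<sub>h</sub> = 10 pc, this gives ≈ 12 Gyr, the order of magnitude of the t<sub>rh</sub> values quoted for our models.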
In Table 1 are indicated the real relaxation times, and the ratio N/N<sub>s</sub> between the actual star number N and the simulated number of particles N<sub>s</sub>; without softening, the expected relaxation time in the simulations is shorter than t<sub>rh</sub> by the factor N/N<sub>s</sub>. We have run the model m20 in isolation to measure the rate of evaporation due purely to relaxation. The measured mass loss after 0.86 Gyr is 0.3% (see Fig. 7). Since the expected loss by evaporation alone is of the order of 4% per t<sub>rh</sub> (Hénon 1961, Gnedin & Ostriker 1997), we infer that the equivalent t<sub>rh</sub> is 12 Gyr, not so far from the actual one (15.3 Gyr). At least, the enhanced relaxation in the simulations does not perturb too much the dynamical evolution we want to follow here. ### 3.2 Mass loss The computation of the number of unbound particles at a given epoch is delicate. The concept of a tidal radius separating the bound and unbound stars is clear only when the globular cluster is embedded in a steady potential, which is the case, for instance, for an ideal circular orbit in the galactic plane, without any disk crossing (e.g. Spitzer 1987). Globular clusters are observed with a limiting radius, where the surface density drops (e.g. Freeman & Norris, 1981). Attempts have been made through cluster modelling to relate this observed cut-off to the tidal radius, as defined by King (1962). Keenan (1981) found that the limiting radius was close to the tidal radius at pericenter, while it was interpreted as the local tidal radius by Innanen et al. (1983). Since globular cluster orbits are not precisely known, and the limiting radii are determined only with large uncertainties, the situation remains unclear. However, for very eccentric orbits, the tidal radius varies considerably along the orbit, and not all particles which become unbound at pericenter are still so at apocenter.
Moreover, the strongest tidal force is in fact the force perpendicular to the plane, felt by particles at disk crossing, everywhere except in the bulge (see Fig. 8). Although the vertical forces always correspond to a compression, they nevertheless give energy to the particles, trigger a rebound or oscillation, produce a vertical tidal tail, and can unbind the particles. The vertical force gradient is therefore dominant in the Galaxy, except in the bulge and the remote regions dominated by the halo. For each cluster model, we have taken into account the galactic force gradient required to explain its limiting radius when choosing its corresponding orbit. This ensures that the cluster is not launched in a completely unrealistic manner, with its limiting radius much larger (or much smaller) than its tidal radius, in which case it would quickly have lost a large fraction of its mass (or it would have relaxed over a long time to another limiting radius, without mass loss). We therefore hope to reach a quasi-steady state quickly, and to determine the corresponding tidal tail while estimating the mass loss rate. In determining the mass loss at a given epoch, we take into account the dynamical evolution of the cluster (in concentration and mass), but keep the force gradients of the galaxy constant, and equal to those at pericenter. We solve at each epoch the equation for the tidal radius, which slightly decreases as evolution proceeds, and consider stars unbound when their energy relative to the tidal boundary is positive, i.e. when $`ϵ<`$ 0, where: $$ϵ=\mathrm{\Phi }(r_t)-\mathrm{\Phi }(r)-\frac{1}{2}v^2$$ The lost-mass fraction for the models m2, m2r (with rotation), and m20 (the comparison isolated cluster) is plotted in Fig. 7, and for all other runs in Fig. 9. For almost all the runs, the gravitational shocks are more efficient than the evaporation by a factor of 1 to 100.
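The energy criterion above amounts to one comparison per star. A sketch (potentials and squared velocities in (km/s)<sup>2</sup>; the function name is illustrative):

```python
def unbound_fraction(phi, v2, phi_rt):
    """Fraction of stars with eps = Phi(r_t) - Phi(r) - v^2/2 < 0,
    i.e. with enough energy to pass the (pericenter) tidal boundary.
    phi: list of Phi(r) per star; v2: list of squared speeds;
    phi_rt: potential at the tidal radius."""
    eps = [phi_rt - p - 0.5 * u for p, u in zip(phi, v2)]
    return sum(e < 0.0 for e in eps) / len(eps)
```

A star deep in the potential well with a modest speed has $`ϵ>`$ 0 and is counted as bound; a fast star near the boundary has $`ϵ<`$ 0 and is counted as lost.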
Only run m1, which has a very short relaxation time (3 Gyr), has an evaporation time-scale shorter than the gravitational-shock time-scale in the thick-disk Galaxy model Gal-1. Moreover, the relatively low concentration of the clusters simulated here places them in the most sensitive branch of the curve of evaporation time versus concentration shown by Gnedin & Ostriker (1997): $`T_{evap}/T_{rh}`$ varies between 20 and 30 for our set of simulations. Disk/bulge shocking destroys the cluster m22 in a very rapid phase, because of the thinner disk of the Gal-2 model. In the similar case of run m3, in spite of the Gal-1 model, the mass loss is important, because of the large size of the cluster. Observational studies of mass loss, combined with cluster parameters and reliable orbits, will strongly constrain the disk/bulge parameters. ### 3.3 Influence of rotation Merritt et al. (1997) have studied in detail the rotation of $`\omega `$ Centauri, from radial velocity data of about 500 stars. The cluster is in axisymmetric, non-cylindrical rotation, with a peak of 7.9 km/s (at 11 pc from the center, i.e. 0.15 $`r_t`$). Their rotational velocity profile corresponds well with our rotating model (see Fig. 2). Drukier et al. (1998) also analysed in detail the kinematics of 230 stars in M15, and found that a model with rotation is marginally favored over one without rotation. They find that the velocity dispersion increases slightly in the outer parts, indicating possible heating by the galactic tides. Keenan & Innanen (1975) have shown that clusters rotating in a retrograde sense are more stable in the tidal field of the Galaxy than directly rotating or non-rotating clusters. They also followed the orbits of escaped stars, and found that these can stay in the tidal tail for a large part of the cluster orbit. However, they used a three-body integration scheme, taking no account of self-gravity and relaxation.
We have run several models with and without rotation, to test the effect on the mass loss of globular clusters in the Galaxy. When the rotation of the cluster is in the direct sense with respect to its orbit, the mass loss appears higher than for models without rotation: this is the case for m2/m2r (Fig. 7) and m4/m4r (Fig. 9). However, the difference is negligible when the rotation of the cluster is in the retrograde direction (cf. m22/m2ret, Fig. 9). This phenomenon has been seen and explained in many circumstances, including galaxy interactions. It comes from the fact that particles rotating in the direct sense in the cluster resonate more with the galaxy potential, while the relative velocities of cluster stars and the Galaxy are higher in the retrograde case, so that the perturbation is then more averaged out. As a consequence, the directly rotating clusters are disrupted earlier, and there should remain today an excess of retrograde clusters. This is difficult to check statistically (it can be noted that the angular momentum of $`\omega `$ Cen is anti-parallel to that of the Galaxy, and therefore compatible with predictions). ### 3.4 Influence of the cluster concentration and of its orbit Figs. 7 and 9 demonstrate clearly the effect of disk shocking: between 1 and 20 kpc in radius, the z-acceleration of the disk close to the plane decreases by a factor of 100, and between the models Gal-1 and Gal-2 (maximum disk) the acceleration is multiplied by 4 near the center. This explains why runs m4 and m4r lead to the destruction of the cluster, while only a slight mass loss is observed for m2 and m2r. The more concentrated clusters (m1 and m6) are also much less affected, even at low pericenter and with maximum disk. For polar orbits, the mass loss curves show clear flat stages, between steps corresponding to each disk crossing (m1, m5, m8).
There is even some bouncing effect: particles unbound from the cluster by a disk shock can be re-captured during a quieter phase (m5). Less steep undulations are observed for the disk orbits, at each disk crossing (m3, m4). To better quantify the disk shocking effects, and to compare them to the heating expected from the impulse approximation, the heating per disk crossing at the beginning of the runs is estimated in Fig. 10. The shock strength is defined as the shock heating per unit mass in the impulsive approximation, $`\mathrm{\Delta }E_{imp}`$, normalized to the internal velocity dispersion of the cluster, i.e. (Spitzer 1987): $$\mathrm{\Delta }E_{imp}=2\langle z^2\rangle g_m^2/V^2$$ where $`g_m`$ is the maximum z-acceleration due to the disk, $`\langle z^2\rangle =r_h^2/3`$ characterizes the typical size of the cluster, and $`V`$ is the z-velocity at which it crosses the disk. To estimate the heating per unit mass and per shock in the simulations, we computed the total energy of the cluster in the first 100-200 Myr of its orbit, averaging the heating over 3-7 shocks (except for the polar runs m5 and m8, where we considered only one disk crossing, since the second occurs after 400 Myr). Fig. 10 (upper panel) shows that there is a rough correlation between the two quantities, $`\mathrm{\Delta }E/\sigma _0^2`$ and $`\mathrm{\Delta }E_{imp}/\sigma _0^2`$, as expected if the impulsive approximation is applicable. The polar runs m5 and m8 are however outside this correlation, since their interaction with the Galaxy cannot be approximated at all by shocks at disk crossing, given the weakness of the disk they encounter on their orbits. All other runs display a heating rate below that of the impulse approximation, as expected when an adiabatic correction is added.
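The impulsive shock-heating estimate is straightforward to evaluate. The sketch below (a minimal illustration; the half-mass radius, disk acceleration, crossing velocity, and velocity dispersion are assumed values, not the paper's) computes the dimensionless shock strength $`\mathrm{\Delta }E_{imp}/\sigma _0^2`$:

```python
def impulsive_shock_heating(r_h_kpc, g_m, v_z):
    """Shock heating per unit mass in the impulse approximation
    (Spitzer 1987): dE = 2 <z^2> g_m^2 / V^2, with <z^2> = r_h^2 / 3.
    Units: kpc for r_h, (km/s)^2/kpc for g_m, km/s for v_z."""
    z2 = r_h_kpc**2 / 3.0
    return 2.0 * z2 * g_m**2 / v_z**2

# Illustrative values (assumed): r_h = 10 pc, g_m = 2000 (km/s)^2/kpc,
# vertical crossing velocity 150 km/s, central dispersion 7 km/s
dE = impulsive_shock_heating(r_h_kpc=0.010, g_m=2000.0, v_z=150.0)
sigma0 = 7.0
print(dE / sigma0**2)  # dimensionless shock strength dE/sigma0^2
```

With these numbers the shock is weak (well below unity), consistent with single crossings causing only gradual mass loss.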
We define this correction as usual from the product of the internal frequency $`\omega `$ and the time-scale of the perturbation $`\tau =H/V`$ (the crossing time of the disk height $`H`$), where $`\omega `$ is related to the central density $`\rho _0`$ through $$\omega ^2=\frac{4\pi }{3}G\rho _0$$ The ratio of the observed heating rate to that expected in the impulse approximation is plotted versus this adiabatic parameter in Fig. 10 (lower panel). It shows clearly that a negative adiabatic correction should be added. This correction is not as steep as an exponential, as predicted by Spitzer (1987), and corresponds more closely to a power law, as found by Gnedin & Ostriker (1999). A more detailed estimate of this correction would involve individual stellar orbits in the cluster, and is beyond the scope of this study. ### 3.5 Influence of the dark matter flattening Two models were run with an extreme flattening of the dark halo (an axis ratio of $`q`$ = 0.2). In the first one (m7), the small radius of the orbit places it in a region where the dark halo contribution to the potential is not large, and the difference with a spherical halo is not significant. For the second one (m8), the orbit peri- and apocenters are both in a region dominated by the dark halo, and the orbit is polar, so that the effect of halo flattening should be maximal. The difference with the comparable run with a spherical halo (m5) is easily seen (Fig. 9), but the overall mass loss rates are comparable. This means that the tidal shocks at the crossing of the flat halo are not strong enough to compete with disk shocking. Also, there is less mass in the flattened halo model than in the spherical model, for the same rotation curve. Inside R = 26 kpc, the dark halo mass is 1.1 and 1.8 10<sup>11</sup> M for the flattened and spherical models, respectively.
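To make the adiabatic parameter concrete, a small sketch follows. The two suppression factors below are illustrative stand-ins (an exponential cutoff in the spirit of Spitzer 1987, and a power law in the spirit of Gnedin & Ostriker 1999 — their exact forms differ in detail), and all physical numbers are assumed:

```python
import math

G = 4.30e-6  # gravitational constant in kpc (km/s)^2 / Msun

def adiabatic_parameter(rho0, H, V):
    """x = omega * tau, with omega^2 = (4 pi / 3) G rho0 and tau = H / V
    (crossing time of a disk of scale height H at vertical speed V)."""
    omega = math.sqrt(4.0 * math.pi * G * rho0 / 3.0)
    return omega * H / V

def exponential_correction(x):   # steep cutoff, cf. Spitzer (1987)
    return math.exp(-2.0 * x**2)

def power_law_correction(x):     # shallower decline, cf. Gnedin & Ostriker (1999)
    return (1.0 + x**2) ** -1.5

# Assumed values: rho0 = 100 Msun/pc^3 = 1e11 Msun/kpc^3, H = 300 pc, V = 150 km/s
x = adiabatic_parameter(1.0e11, 0.3, 150.0)
print(x, exponential_correction(x), power_law_correction(x))
```

For moderately concentrated clusters the power-law form suppresses the heating far less than the exponential one, which is the distinction drawn in the text.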
Near the plane, the force per unit mass can be approximated as $`K_z`$z, and the value of $`K_z=\frac{\partial ^2\mathrm{\Phi }}{\partial z^2}`$ is about 5 times higher for the flattened than for the spherical dark halo. But the disk $`K_z`$ is always larger than that of the flattened dark halo; they only become equal at radii of about 30 kpc. This explains why the halo shocking is not significant. Globular cluster dynamics is therefore not useful for constraining the flattening of dark haloes; nevertheless, the halo geometry will affect the destruction rate of the remote clusters. ### 3.6 Mass segregation It is well known that the mass distribution function for low-mass stars in an old globular cluster is very close to the IMF; this is especially true at the radius $`r_h`$, a region which is more robust against mass segregation and tidal stripping (Vesperini, 1997), the two processes somewhat compensating at this radius to preserve the IMF. In the multi-mass King-Michie model that we adopted as initial conditions, the mass function varies with radius, as sketched in Fig. 12. It is also expected that tidal stripping increases the mass segregation, by acting together with the relaxation of the central parts. Since the envelope is preferentially populated by low-mass stars, they are stripped more than the high-mass ones. This is shown in Fig. 11, where the slope of the mass function is plotted (in gray-scale and in contours) as a function of time and radius in the cluster. Core relaxation is a way to replenish the low-mass star content of the envelope. Recent HST results have shown that some globular clusters indeed have mass functions depleted in low-mass stars (Sosin & King 1997, Piotto et al. 1997); the depleted clusters are very likely candidates for recent tidal shocking, judging from their presumed present galactic positions and orbits (Dauphole et al. 1996).
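The comparison of disk and halo vertical forces can be illustrated numerically. The sketch below uses a Miyamoto–Nagai disk as a generic stand-in (the actual Gal-1/Gal-2 parameters are not reproduced here; mass and scale lengths are assumed) and evaluates $`K_z=\frac{\partial ^2\mathrm{\Phi }}{\partial z^2}`$ at z = 0 by central finite differences:

```python
import math

G = 4.30e-6  # kpc (km/s)^2 / Msun

def phi_disk(R, z, M=5.0e10, a=3.0, b=0.3):
    """Miyamoto-Nagai disk potential (illustrative stand-in for the
    Galaxy disk models used in the simulations)."""
    s = a + math.sqrt(z**2 + b**2)
    return -G * M / math.sqrt(R**2 + s**2)

def kz(R, dz=1e-3):
    """K_z = d^2(Phi)/dz^2 at z = 0, by central finite differences;
    near the plane the vertical force is then F_z ~ -K_z * z."""
    return (phi_disk(R, dz) - 2.0 * phi_disk(R, 0.0) + phi_disk(R, -dz)) / dz**2

# K_z drops steeply with radius -- the gradient that drives disk shocking
for R in (1.0, 8.5, 20.0):
    print(R, kz(R))
```

Even this crude model shows $`K_z`$ falling by roughly two orders of magnitude between 1 and 20 kpc, of the order quoted in Section 3.4.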
The evolution of the mass function of a globular cluster through its dynamical life has been widely studied, with the goal of deriving the low-mass IMF slope of the galactic halo itself, which has important implications for the nature of dark matter. Gnedin & Ostriker (1997) showed that about 75% of the present globular cluster population will be destroyed in a Hubble time, and therefore that the majority of the initial clusters are now destroyed and form a large fraction of the stellar halo and bulge. Vesperini (1998), using an analytical scheme for the destruction processes, estimated the initial population of globular clusters to be about 300 clusters; the contribution of disrupted clusters to the halo would then be about $`5.5\times 10^7M_{\odot }`$, of the same order as the stellar mass in the halo (Binney & Merrifield, 1998). This means that the remaining clusters must have evolved considerably, and in particular their mass functions. Capaccioli et al. (1993) have observed that the mass function slope is correlated with the position of the globular cluster in the Galaxy, and in particular with its height above the plane: the mass function is steeper at large distances. They show through analytical and N-body calculations that this could be due to disk shocking, which flattens the mass function. Johnston et al. (1999) have studied through N-body simulations the mass loss rates in mass-segregated systems. They confirm that the mass function is considerably flattened during tidal evolution; the slope x can fall from 1.35 to almost 0 in the most tide-vulnerable cases (small clusters on disk orbits). ### 3.7 Tidal tails Once the particles are unbound, they slowly drift along the globular cluster path from the point where they were launched, and form a huge tidal tail. They can still form recognizable features well outside the cluster envelope, hundreds of pc away, as is observed on the sky (Grillmair et al. 1995, Leon et al. 1999).
The tidal tails of 9 different models are displayed in Fig. 13, at the same spatial scale. The unbound particles, and therefore the tails, are a tracer of the cluster orbit. The tails look asymmetrical, with a heavier tail on one side than on the other, but this is a projection effect, due to the particular shapes of the tails. There are sometimes peculiar wiggles at the base of the tails, in the cluster envelope, which are not due to the rotation of the clusters, since they are also seen around non-rotating clusters (Fig. 13). These will be interpreted in Section 3.8. Analysis of the faint tails is best performed with the wavelet decomposition, which achieves multi-resolution, as in Fig. 14. The tidal tails can contain a few percent of the mass of the cluster. Fig. 13, 14 and especially Fig. 20 show the clumpy structure of the tidal tails. The denser unbound clumps are the tracers of the strongest phases of the gravitational shocks: the two symmetrical counterparts are visible on each side of the tails. Although these clumps are not bound but are transient caustics, the structure of the tail remains clumpy throughout the simulations, even if it is not the same stars in the same clumps. We have followed in the simulations a group of particles that formed a clump at a given epoch: the group stays together for some time, until the packet of particles moves away from the cluster; then another clump is formed by a new tidal shock. It takes more than 800 Myr for a clump to disperse. A typical clump can contain 0.5% of the cluster stars. Some observed clusters (Leon et al. 1999) show evidence for such features in their tidal tails (e.g. NGC 5264, Pal 12). These overdensities in tidal tails are related to the “streamers” or moving groups in the halo (Aguilar, 1997). Such symmetrical features have also been detected for open clusters oscillating in the galactic plane (Bergond et al. 1999).
How closely the tails trace the globular cluster orbit can be clearly seen in Fig. 15, showing a large-scale view of the globular cluster of run m4, on a disk-like orbit. This cluster experienced strong disk shocks, due to the thin-disk model chosen, and was disrupted in 0.5 Gyr. Unbound particles, outside the tidal radius of the cluster, spread out in density like a power law, with an average slope of –4 (Fig. 16). This kind of behaviour has been found for tidal extensions in numerical simulations of interacting galaxies (cf Aguilar & White 1986), and is that expected of an unbound cloud of particles in a 1/r potential with a continuous spread in energy, starting from just zero relative energy, since at large distance the globular cluster can be considered as a point mass, with a 1/r potential: for an almost constant probability $`N(E)`$, one has $`N(r)dr\propto \rho r^2dr\propto N(E)dE`$ with $`dE\propto dr/r^2`$, hence $`\rho \propto r^{-4}`$. The slope does not depend on the source of the mass loss, nor significantly on the Galaxy potential, since the latter’s gradients develop on larger scales. The corresponding surface density around the cluster falls as $`r^{-3}`$ or steeper, which is far from the prediction of Johnston et al. (1999) for independent tidal debris, free to move and spread in the Galaxy potential: their expected slope of the surface density is –1. Although the majority of these particles are unbound from the cluster (they have positive energy, as defined in Section 3.2), they cannot be considered as completely free, but are still under the influence of the cluster potential; in particular, the closest ones can still be re-captured by the cluster. The discrepancy with the observations (Grillmair et al. 1995, Leon et al. 1999), where most of the slopes are around –1, is likely the consequence of noisy background-foreground subtraction. ### 3.8 Flattening At smaller scales, closer to the tidal radius, the newly escaping particles are in general oriented perpendicular to the plane, just after a disk crossing.
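A hedged sketch of how such a density slope can be measured from simulation output: bin particle radii logarithmically, convert counts to volume densities, and fit log ρ against log r by least squares. Here synthetic radii are drawn from exactly $`\rho \propto r^{-4}`$ (not from the paper's data), so the fit should recover a slope close to –4:

```python
import math, random

def sample_radii(n, rmin=1.0, rmax=100.0, seed=42):
    """Radii with number density N(r) ~ r^-2, i.e. rho(r) ~ r^-4,
    drawn by inverting the cumulative distribution."""
    rng = random.Random(seed)
    a, b = 1.0 / rmin, 1.0 / rmax
    return [1.0 / (a - rng.random() * (a - b)) for _ in range(n)]

def density_slope(radii, nbins=20):
    """Least-squares slope of log10(rho) vs log10(r) in log-spaced bins."""
    lo, hi = math.log10(min(radii)), math.log10(max(radii))
    edges = [10 ** (lo + (hi - lo) * i / nbins) for i in range(nbins + 1)]
    xs, ys = [], []
    for r0, r1 in zip(edges, edges[1:]):
        count = sum(1 for r in radii if r0 <= r < r1)
        if count == 0:
            continue
        shell_volume = 4.0 * math.pi / 3.0 * (r1**3 - r0**3)
        xs.append(math.log10(math.sqrt(r0 * r1)))   # bin center in log r
        ys.append(math.log10(count / shell_volume))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

print(density_slope(sample_radii(200000)))  # close to -4
```

Because the bins are of equal logarithmic width, the finite-bin bias is a constant offset in log ρ and does not affect the fitted slope.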
The flattening of the cluster can be monitored through a Fourier analysis. Fig. 17 shows the resulting even harmonics ($`m=`$ 2, 4, and 6) of the surface density projected in the plane, or perpendicular to it. When the cluster has no rotation, the periodic compression of the cluster at its crossings of the plane, and the subsequent relaxation, sometimes corresponding to an extension of the cluster in the vertical direction, is easily seen through the orientation of the density maxima. In the case of a rotating cluster (m2r), the flattening in the xz plane is dominated by the rotation, although it varies slightly at the disk crossings. In the equatorial plane (xy), there is also an $`m=2`$ perturbation, which appears to tumble in the sense of rotation (Fig. 18). To better characterize the deformation of the globular cluster and the direction of the flattening, we have computed the (3×3) inertia tensor $$I_{xy}=\frac{\mathrm{\Sigma }_nm(n)xy/r^2}{\mathrm{\Sigma }_nm(n)}$$ with all combinations of (x,y,z) taken into account. This computation is carried out on the globular cluster and its immediate envelope, and avoids the tidal tails. The diagonalisation of this matrix gives the three eigenvalues plotted in Fig. 19, and the eigenvectors give the orientation of the major axis ($`\theta `$ and $`\varphi `$ in spherical coordinates). In most of the models the eigenvalues are all almost equal to 1/3 (no large perturbations with respect to a spherical shape). In the most perturbed cases, it is possible to see a clear prolate shape (the two almost equal axes are the smallest ones), with the major axis located at an almost constant angle to the z-direction, and precessing around this z-axis (perpendicular to the plane) in a retrograde sense (see Fig. 19). Some bouncing effects can also be noted, for example in model m4.
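The shape analysis described here — build the normalized tensor, diagonalize it, read off the eigenvalues and the major axis — is a few lines with numpy. In this sketch the particle positions are mock data (a Gaussian blob stretched along z), not simulation output:

```python
import numpy as np

def shape_tensor(pos, mass):
    """I_ij = sum_n m(n) x_i x_j / r^2 / sum_n m(n), as in the text;
    returns eigenvalues in ascending order and the major-axis vector."""
    pos = np.asarray(pos, dtype=float)
    mass = np.asarray(mass, dtype=float)
    r2 = np.sum(pos**2, axis=1)
    I = np.einsum('n,ni,nj->ij', mass / r2, pos, pos) / mass.sum()
    vals, vecs = np.linalg.eigh(I)
    return vals, vecs[:, -1]   # eigenvector of the largest eigenvalue

# Mock prolate cluster: isotropic Gaussian stretched by 3x along z
rng = np.random.default_rng(0)
xyz = rng.normal(size=(20000, 3)) * np.array([1.0, 1.0, 3.0])
vals, major = shape_tensor(xyz, np.ones(len(xyz)))
print(vals)    # two small, nearly equal eigenvalues and one large: prolate
print(major)   # close to the z unit vector (up to sign)
```

Since the tensor is built from $`x_ix_j/r^2`$, its trace is exactly 1, so the spherical case gives three eigenvalues of 1/3, as stated in the text.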
The period of precession does not depend on the nature of the orbit, and has nothing to do with the time-scale between two crossings of the plane (it is in general much longer). On the contrary, the periods are similar for runs with the same initial state of the cluster. We interpret this period as an eigenfrequency of the cluster itself: once a perturbation is excited, it develops with these proper frequencies (see e.g. Prugniel & Combes 1992). It is interesting to remark that this prolate shape, taken by the globular clusters under the tidal forces, orients the mass loss just at the beginning of the tidal tails: unbound stars preferentially escape in the direction of the major axis, which gives the crooked shape of the global tail that follows the cluster orbit at large scale (see Fig. 20). ## 4 Conclusions We have carried out about a dozen N-body simulations of the tidal interactions between a globular cluster and the Galaxy, in order to characterize the perturbations experienced at disk crossings, to determine the geometry and density distribution of the tidal tails, and to quantify the mass loss as a function of cluster properties and Galaxy models. Our main conclusions can be summarised as follows: * All runs show that the clusters are always surrounded by tidal tails and debris, even those that suffered only a very slight mass loss. These unbound particles distribute in volume density like a power law as a function of radius, with a slope around –4. This slope is much steeper than in the observations, where the background-foreground contamination dominates at very large scales. * These tails are preferentially composed of low-mass stars, since they come from the external radii of the cluster; due to the mass segregation built up by two-body relaxation, the external radii preferentially gather the low-mass stars. * The mass loss is enhanced for a cluster in direct rotation with respect to its orbit. No effect is seen for retrograde rotation.
* For sufficiently high and rapid mass loss, the cluster takes a prolate shape, whose major axis precesses around the z-axis. * When the tidal tail is very long (high mass loss) it follows the cluster orbit: the observation of the tail geometry is thus a way to deduce cluster orbits. Stars are not distributed homogeneously along the tails, but form clumps, and the densest of them, located symmetrically in the tails, are the tracers of the strongest gravitational shocks. * Mass loss is strongly enhanced with a “maximum disk” model for the Galaxy. On the contrary, the flattening of the dark halo has a negligible effect on the clusters, for a given rotation curve. Finally, these N-body experiments help to understand the recent observations of extended tidal tails around globular clusters (Grillmair et al. 1995, Leon et al. 1999): systematic observations of the geometry of these tails should bring much information on the orbits, dynamics, and mass loss history of the clusters. ###### Acknowledgements. All simulations have been carried out on the Cray C-94 and C-98 of IDRIS, the CNRS computing center at Orsay, France.
no-problem/9910/nucl-th9910042.html
ar5iv
text
# Particle-Number Reprojection in the Shell Model Monte Carlo Method: Application to Nuclear Level Densities ## Abstract We introduce a particle-number reprojection method in the shell model Monte Carlo that enables the calculation of observables for a series of nuclei using a Monte Carlo sampling for a single nucleus. The method is used to calculate nuclear level densities in the complete $`(pf+g_{9/2})`$-shell using a good-sign Hamiltonian. Level densities of odd-$`A`$ and odd-odd nuclei are reliably extracted despite an additional sign problem. Both the mass and the $`T_z`$ dependence of the experimental level densities are well described without any adjustable parameters. The single-particle level density parameter is found to vary smoothly with mass. The odd-even staggering observed in the calculated backshift parameter follows the experimental data more closely than do empirical formulae. The interacting shell model has successfully described a variety of nuclear properties. However, the size of the model space increases rapidly with the number of valence nucleons and/or orbits, and exact diagonalization of the nuclear Hamiltonian in a full major shell is limited to nuclei with $`A\lesssim 50`$ . The development of quantum Monte Carlo methods for the nuclear shell model allowed realistic calculations of finite- and zero-temperature observables in model spaces that are much larger than those treated by conventional diagonalization techniques . The Monte Carlo method was successfully adapted to the microscopic calculations of nuclear level densities . Accurate level densities are needed for estimating nuclear reaction rates, e.g., neutron and proton capture rates. The nucleosynthesis of many of the heavy elements proceeds by radiative capture of neutrons ($`s`$ and $`r`$ processes) or protons ($`rp`$ process) in competition with beta decay .
Theoretical approaches to level densities are often based on the Fermi gas model, i.e., the Bethe formula , which describes the many-particle level density in terms of the single-particle level density parameter $`a`$. Shell corrections and two-body correlations are taken into account empirically by defining a fictitious ground state energy. In the backshifted Bethe formula (BBF) the ground state energy is shifted by an amount $`\mathrm{\Delta }`$. This formula describes well the experimental level densities of many nuclei if both $`a`$ and $`\mathrm{\Delta }`$ are fitted for each individual nucleus . While these parameters have been discussed in terms of their global systematics, it is difficult to predict their values for particular nuclei. The nuclear shell model offers an attractive framework to calculate level densities, but the model space required to calculate level densities at excitation energies in the neutron resonance regime is usually too large for conventional diagonalization methods. We have recently developed a method to calculate exact level densities using the shell model Monte Carlo (SMMC) approach, and applied it to calculate the level densities of even-even nuclei from iron to germanium . Fermionic Monte Carlo methods are usually hampered by the so-called sign problem, which causes a breakdown of the method at low temperatures. A practical solution in the nuclear case was developed in Ref. but the associated extrapolation errors are too large for reliable calculations of level densities. In Ref. the sign problem was overcome by constructing a good-sign interaction in the $`(pf+g_{9/2})`$-shell that includes correctly the dominating collective components of realistic effective interactions . The SMMC level densities are well-fitted by the BBF, and both $`a`$ and $`\mathrm{\Delta }`$ can be extracted from the microscopic calculations. The SMMC approach is computationally intensive. 
In particular, the SMMC level densities require calculations of the thermal energy at all temperatures. The weight function used in the random walk is temperature dependent, and a new Monte Carlo sampling is required at each temperature. Since this procedure has to be repeated for each nucleus, the calculations are time-consuming. In this paper we describe a particle-reprojection method that allows us to calculate observables for a series of nuclei using Monte Carlo sampling for one nucleus only. The random walk is done with a weight function proportional to the partition function of a given even-even nucleus (which is positive definite for a good-sign interaction), and the thermal observables are then calculated for several nuclei by reprojection on different particle numbers (both even and odd). This method allows significantly more economical calculations of level densities. We apply the method in the full $`(pf+g_{9/2})`$-shell to study the systematics of $`a`$ and $`\mathrm{\Delta }`$ for even-even, odd-$`A`$ and odd-odd manganese, iron and cobalt nuclei. A direct comparison with both experimental data and empirical formulae is presented. The agreement with the data is remarkably good with no adjustable parameters in the microscopic calculations. Furthermore, we find that the SMMC values follow the data more closely than do the empirical values. The Monte Carlo method is based on the Hubbard-Stratonovich representation of the imaginary-time propagator, $`e^{-\beta H}=\int D[\sigma ]G(\sigma )U_\sigma `$, where $`G(\sigma )`$ is a Gaussian weight and $`U_\sigma `$ is the propagator of non-interacting nucleons moving in fluctuating auxiliary fields $`\sigma `$ which depend on imaginary time. The canonical thermal expectation value of an observable $`O`$ is given by $`\langle O\rangle _𝒜=\int D[\sigma ]G(\sigma )\mathrm{Tr}_𝒜(OU_\sigma )/\int D[\sigma ]G(\sigma )\mathrm{Tr}_𝒜U_\sigma `$, where $`\mathrm{Tr}_𝒜`$ denotes a trace in the subspace of a fixed number of particles $`𝒜`$.
In actual calculations we project on both neutron number $`N`$ and proton number $`Z`$, and in the following $`𝒜`$ will denote $`(N,Z)`$. We rewrite $$\langle O\rangle _𝒜=\left\langle \mathrm{Tr}_𝒜(OU_\sigma )/\mathrm{Tr}_𝒜U_\sigma \right\rangle _W,$$ (1) where we have introduced the notation $$\langle X_\sigma \rangle _W\equiv \frac{\int D[\sigma ]W(\sigma )X_\sigma }{\int D[\sigma ]W(\sigma )},$$ (2) and $`W(\sigma )\equiv G(\sigma )\mathrm{Tr}_𝒜U_\sigma `$. For an even number of particles with a good-sign interaction, $`W(\sigma )`$ is positive definite. In the Monte Carlo method we choose $`M`$ samples (each denoted by $`\sigma _k`$) according to the weight function $`W(\sigma )`$, and estimate $`\langle X_\sigma \rangle _W\approx \sum _kX_{\sigma _k}/M`$. We assume that the Monte Carlo sampling is done for a nucleus with particle number $`𝒜`$, and consider the ratio $`Z_{𝒜^{\prime }}/Z_𝒜`$ between the partition function of a nucleus with $`𝒜^{\prime }`$ particles and the partition function of the original nucleus. In the notation of Eq. (2) $`{\displaystyle \frac{Z_{𝒜^{\prime }}(\beta )}{Z_𝒜(\beta )}}\equiv {\displaystyle \frac{\mathrm{Tr}_{𝒜^{\prime }}e^{-\beta H}}{\mathrm{Tr}_𝒜e^{-\beta H}}}=\left\langle {\displaystyle \frac{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }{\mathrm{Tr}_𝒜U_\sigma }}\right\rangle _W.`$ (3) Similarly, the expectation value of an observable $`O`$ for the nucleus with $`𝒜^{\prime }`$ particles can be calculated from $$\langle O\rangle _{𝒜^{\prime }}=\frac{\left\langle \left(\frac{\mathrm{Tr}_{𝒜^{\prime }}OU_\sigma }{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }\right)\left(\frac{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }{\mathrm{Tr}_𝒜U_\sigma }\right)\right\rangle _W}{\left\langle \frac{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }{\mathrm{Tr}_𝒜U_\sigma }\right\rangle _W}.$$ (4) The Monte Carlo walk is carried out by projection on a fixed $`𝒜`$, and Eqs. (3) and (4) are then used to calculate the partition functions and observables for a family of nuclei with $`𝒜^{\prime }\ne 𝒜`$. We applied the method to nuclei in the $`(pf+g_{9/2})`$-shell, using the Hamiltonian of Ref. .
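The reprojection in Eqs. (3)-(4) is, at bottom, a reweighting of a single sample set. The following toy illustration is not the SMMC machinery: a single Gaussian variable stands in for the auxiliary field, and scalar exponentials stand in for the traces. Its virtue is that the ratio-of-averages structure of Eq. (4) can be checked analytically — the reprojected mean must equal the target parameter:

```python
import math, random

# Toy analogue of Eq. (4): sigma is one Gaussian variable, the "traces" are
# Tr_A U_sigma = exp(a*sigma). Sampling with weight W ~ G(sigma) Tr_A U_sigma
# means sigma ~ N(a, 1); observables of the "A'" system (parameter a_prime)
# follow by reweighting with the trace ratio exp((a_prime - a)*sigma).
rng = random.Random(1)
a, a_prime = 0.5, 1.2
samples = [rng.gauss(a, 1.0) for _ in range(200000)]

ratio = [math.exp((a_prime - a) * s) for s in samples]   # Tr_A' / Tr_A
numerator = sum(s * w for s, w in zip(samples, ratio)) / len(samples)
denominator = sum(ratio) / len(samples)
print(numerator / denominator)  # reprojected <sigma>; exact value is a_prime = 1.2
```

As in the real method, the statistical error grows with the "distance" between the sampled and reprojected systems, here the spread of the weights exp((a′−a)σ).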
The single-particle energies are computed in a central Woods-Saxon potential $`V(r)`$ plus spin-orbit interaction, while the two-body interaction includes a monopole isovector pairing of strength $`g_0`$ plus a separable surface-peaked interaction $`v(𝒓,𝒓^{\prime })=\chi (dV/dr)(dV/dr^{\prime })\delta (\widehat{𝒓}-\widehat{𝒓}^{\prime })`$. The surface-peaked interaction is expanded into multipoles and only the quadrupole, octupole and hexadecupole terms are kept. The strength $`\chi `$ is determined self-consistently and renormalized. The strength of the pairing interaction, $`g_0\approx 0.2`$, is determined from experimental odd-even mass differences. Both the pairing and the surface-peaked interactions are attractive and lead to a good-sign Hamiltonian. A repulsive isospin-dependent interaction leads to a sign problem, and was included perturbatively in recent level density calculations in $`sd`$-shell and $`pf`$-shell nuclei. In the particle-reprojection method described above we have assumed that the Hamiltonian $`H`$ is independent of $`𝒜`$. Suitable corrections should be made if some of the Hamiltonian parameters vary with $`𝒜`$. Since $`\chi `$ depends only weakly on the mass number $`A`$ ($`A^{1/3}`$), and the pairing strength $`g_0`$ is constant through the shell, the largest variation is that of the single-particle energies of the orbit $`a`$, $`ϵ_a(𝒜)`$. To correct for this variation we approximate the thermal energy of $`𝒜^{\prime }(N^{\prime },Z^{\prime })`$ particles by $$E_{𝒜^{\prime }}(\beta )\approx \sum _a[ϵ_a(𝒜^{\prime })-ϵ_a(𝒜)]\langle n_a\rangle _{𝒜^{\prime }}+\langle H\rangle _{𝒜^{\prime }}$$ (5) where $`H`$ is the Hamiltonian for a nucleus with $`𝒜`$ particles. In calculating the energy for $`𝒜^{\prime }`$ particles from (5), we used in the propagator ($`e^{-\beta H}`$) the Hamiltonian $`H`$ for nucleus $`𝒜`$ rather than $`𝒜^{\prime }`$. To minimize the error we reproject on nuclei with $`N^{\prime }-Z^{\prime }`$ values close to $`N-Z`$ (the Woods-Saxon potential depends on $`N-Z`$). In the applications below we have checked that the resulting error in the level density is negligible.
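Eq. (5) is a simple first-order correction, as the sketch below makes explicit. All numbers are hypothetical (they are not the paper's Woods-Saxon values); only the structure of the formula is taken from the text:

```python
import numpy as np

def corrected_energy(eps_target, eps_sampled, occupations, H_expect):
    """Eq. (5): first-order correction of the thermal energy for the
    change of single-particle energies between the sampled nucleus A
    and the reprojected nucleus A'."""
    shift = np.asarray(eps_target) - np.asarray(eps_sampled)
    return float(np.dot(shift, occupations) + H_expect)

# Hypothetical numbers for illustration only:
eps_A  = [-8.0, -6.5, -4.2, -1.0]   # orbit energies for the sampled nucleus
eps_Ap = [-8.1, -6.6, -4.3, -1.1]   # slightly shifted for the reprojected one
n_occ  = [ 5.7,  3.9,  1.8,  0.3]   # <n_a> from the reprojected trace
E = corrected_energy(eps_Ap, eps_A, n_occ, H_expect=-62.4)
print(E)
```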
We used the reprojection method to calculate the thermal energies versus $`\beta `$ for <sup>50-56</sup>Mn, <sup>52-58</sup>Fe, and <sup>54-60</sup>Co, including odd-$`A`$ and odd-odd nuclei. We sampled according to the even-even nucleus <sup>56</sup>Fe and reprojected onto <sup>53-56</sup>Mn, <sup>54-58</sup>Fe, and <sup>54-60</sup>Co, while the nuclei <sup>50-52</sup>Mn and <sup>52,53</sup>Fe are reprojected from Monte Carlo sampling of the odd-odd $`N=Z`$ nucleus <sup>54</sup>Co. The calculations are done for values of $`\beta `$ between $`\beta =0`$ and 1 MeV<sup>-1</sup> in steps of $`\mathrm{\Delta }\beta =1/16`$, and between 1 and 2.5 in steps of $`\mathrm{\Delta }\beta =1/8`$. At each $`\beta `$ we used about 4000 independent samples. Reprojected energy calculations typically have larger statistical errors at larger values of $`\beta `$. Therefore we also performed direct Monte Carlo runs (without reprojection) for the above nuclei at several values of $`\beta `$ between $`1.75`$ and $`3.0`$ MeV<sup>-1</sup>. For odd-$`A`$ and odd-odd nuclei, a typical statistical error for the energy at $`\beta \approx 2.5`$ is $`0.5`$ MeV, while for $`\beta \gtrsim 3`$ the error is too large for the data to be useful. This is just another manifestation of the sign problem for nuclei with an odd number of protons and/or neutrons. Fortunately, because of the high degeneracy in the vicinity of the ground state of these nuclei, the thermal energy is already close to its asymptotic value. Fig. 4 shows the calculated SMMC thermal energies versus $`\beta `$ for a series of cobalt isotopes. The effect of pairing on the thermal energies at low temperatures (i.e. large $`\beta `$) is clearly seen in their uneven spacings. The inset of Fig. 4 shows the SMMC thermal energies (triangles with error bars) for <sup>60</sup>Co for the large values of $`\beta `$ only. In calculating the level density versus excitation energy, it is important to get accurate values of the ground state energy. In Ref.
we used, for even-even nuclei, a two-state model ($`0^+`$ and $`2^+`$) to obtain a two-parameter fit to the thermal energy and $`\langle 𝑱^2\rangle `$. For odd-$`A`$ and odd-odd nuclei this method is not useful since in general we do not know the spin of the ground state and first excited state. Moreover, these nuclei do not have a gap, and often more than two levels contribute to the thermal energy at the lowest temperatures for which Monte Carlo calculations are still possible. We estimate the ground state energy of these nuclei by taking an average of the large-$`\beta `$ SMMC values of the thermal energy. The diamonds in the inset of Fig. 4 are such average values for <sup>60</sup>Co. We estimate the ground state energy of the odd-$`A`$ and odd-odd nuclei to be reliable to about $`0.3`$ MeV. To calculate the level density we first find the partition function $`Z_{𝒜^{\prime }}`$ by integrating the relation $`\partial \mathrm{ln}Z_{𝒜^{\prime }}/\partial \beta =-E_{𝒜^{\prime }}`$. The level density is then given by $`\rho _{𝒜^{\prime }}=(2\pi \beta ^2C_{𝒜^{\prime }})^{1/2}e^{S_{𝒜^{\prime }}},`$ (6) in terms of the canonical entropy $`S_{𝒜^{\prime }}=\beta E_{𝒜^{\prime }}+\mathrm{ln}Z_{𝒜^{\prime }}`$ and the heat capacity $`C_{𝒜^{\prime }}=-\beta ^2dE_{𝒜^{\prime }}/d\beta `$. The level densities for the cobalt isotopes of Fig. 4 are shown in Fig. 4 as a function of excitation energy. These total level densities are fitted to $`\rho (E_x)\approx g{\displaystyle \frac{\sqrt{\pi }}{24}}a^{-\frac{1}{4}}(E_x-\mathrm{\Delta }+t)^{-\frac{5}{4}}e^{2\sqrt{a(E_x-\mathrm{\Delta })}},`$ (7) where $`t`$ is a thermodynamic temperature defined by $`E_x-\mathrm{\Delta }=at^2-t`$ and $`g=2`$. Eq. (7) is a modified version of the BBF derived by Lang and Le Couteur . It differs from the usual BBF by the additional “temperature” term $`t`$ in the pre-exponential factor, and provides a better fit to the calculated level density at lower excitation energies. The solid lines in Fig. 4 are the BBF level densities (7) fitted to the SMMC level densities in the range $`E_x<20`$ MeV.
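The chain $`\beta \mathrm{ln}Z S,CE_x`$ in Eq. (6) can be sketched numerically from a table of thermal energies, using the standard canonical relations $`\partial \mathrm{ln}Z/\partial \beta =-E`$, $`S=\beta E+\mathrm{ln}Z`$, $`C=-\beta ^2dE/d\beta `$ and the saddle-point formula. Below, the pipeline is exercised on a toy system (independent two-level units) whose exact state count is binomial, so the result can be sanity-checked; the grid and parameters are illustrative, not SMMC output:

```python
import numpy as np

def level_density(betas, E, lnZ_first):
    """Canonical-to-microcanonical conversion: integrate dlnZ/dbeta = -E
    starting from lnZ at betas[0], then apply the saddle-point formula
    rho = (2 pi beta^-2 C)^(-1/2) exp(S), with S = beta*E + lnZ and
    C = -beta^2 dE/dbeta."""
    lnZ = lnZ_first - np.concatenate(
        ([0.0], np.cumsum(0.5 * (E[1:] + E[:-1]) * np.diff(betas))))
    S = betas * E + lnZ
    minus_dE_dbeta = -np.gradient(E, betas)        # equals C / beta^2
    return np.exp(S) / np.sqrt(2.0 * np.pi * minus_dE_dbeta)

# Toy check: 50 independent two-level units with unit gap; E(beta) and lnZ
# are known exactly, and the exact count at excitation k is binomial(50, k).
n = 50
betas = np.linspace(0.2, 5.0, 200)
E = n / (np.exp(betas) + 1.0)
rho = level_density(betas, E, lnZ_first=n * np.log(1.0 + np.exp(-betas[0])))
i = int(np.argmin(np.abs(betas - np.log(4.0))))    # grid point where E ~ 10
print(rho[i])   # ~1e10, to be compared with binomial(50, 10) ~ 1.03e10
```

The few-percent agreement with the exact binomial count illustrates why the saddle-point (Bethe-type) approximation is adequate at the excitation energies of interest.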
In general we obtain a good fit down to energies of $`1`$ MeV or smaller. The inset shows the low-energy fit for <sup>55</sup>Co. The dashed line is a fit to the BBF without $`t`$; this approximation starts to diverge around $`2`$ MeV due to the singularity of the pre-exponential factor $`(E_x-\mathrm{\Delta })^{-5/4}`$. Notice that the level density of an odd-odd cobalt isotope (e.g., <sup>54</sup>Co) is higher than that of the subsequent odd-even one (e.g., <sup>55</sup>Co), even though the latter has a larger mass. This is due to reduced pairing correlations in the odd-odd nucleus, which lead to a smaller backshift $`\mathrm{\Delta }`$. We extracted the level density parameters $`a`$ and $`\mathrm{\Delta }`$ for the above nuclei by fitting Eq. (7) to the SMMC level densities. The results for $`a`$ and $`\mathrm{\Delta }`$ versus mass number $`A`$ are shown in Fig. 4. The Monte Carlo results (solid squares) are compared with the experimental data (X’s) quoted in Refs. and . The solid lines describe the results of the empirical formulae of Refs. . The calculated values of $`a`$ depend smoothly on the mass, unlike some of the empirical results, and in the case of the cobalt isotopes follow the data more closely. The staggering seen in the behavior of $`\mathrm{\Delta }`$ versus $`A`$ is a result of pairing effects. In the empirical formulae, $`\mathrm{\Delta }\approx 0`$ for odd-even nuclei, is positive for even-even nuclei and is negative for odd-odd nuclei. We see that the present values of $`\mathrm{\Delta }`$ follow the experimental values rather closely, and are in general in better agreement than the empirical values. The lower values of $`a`$ (relative to the experimental values) for the odd-odd manganese nuclei are compensated by correspondingly lower values of $`\mathrm{\Delta }`$, and thus do not cause significant discrepancies in the level densities for $`E_x\lesssim 10`$ MeV.
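The fitting function of Eq. (7) is easy to implement once the implicit temperature is solved from $`E_x-\mathrm{\Delta }=at^2-t`$ (positive root of the quadratic). In this sketch the parameter values are hypothetical, merely of the right order of magnitude for nuclei in this mass region:

```python
import math

def bbf_lang_lecouteur(Ex, a, Delta, g=2.0):
    """Modified backshifted Bethe formula, Eq. (7); valid for Ex > Delta.
    The thermodynamic temperature t solves Ex - Delta = a t^2 - t."""
    U = Ex - Delta
    t = (1.0 + math.sqrt(1.0 + 4.0 * a * U)) / (2.0 * a)
    return (g * math.sqrt(math.pi) / 24.0 * a**-0.25
            * (U + t)**-1.25 * math.exp(2.0 * math.sqrt(a * U)))

def bbf_plain(Ex, a, Delta, g=2.0):
    """Ordinary BBF (no +t term); diverges as Ex -> Delta."""
    U = Ex - Delta
    return (g * math.sqrt(math.pi) / 24.0 * a**-0.25
            * U**-1.25 * math.exp(2.0 * math.sqrt(a * U)))

# Hypothetical parameters: a in MeV^-1, Delta in MeV
a, Delta = 6.0, 1.0
for Ex in (1.5, 3.0, 10.0):
    print(Ex, bbf_lang_lecouteur(Ex, a, Delta), bbf_plain(Ex, a, Delta))
```

Comparing the two functions near $`E_x=\mathrm{\Delta }`$ reproduces the behavior described above: the plain BBF blows up while the +t version stays finite.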
To demonstrate how the Monte Carlo results improve on the empirical formulae, we show in Fig. 4 the calculated level densities of three $`A=55`$ nuclei (Mn, Fe and Co). According to the empirical formulae, $`\mathrm{\Delta }\approx 0`$ for odd-$`A`$ nuclei, and the values of $`a`$ are similar (since $`A`$ is the same). The empirical formulae therefore predict similar level densities for these nuclei. However, the SMMC level densities of these three nuclei are seen to be quite different from each other. Indeed we find that $`\mathrm{\Delta }`$ is positive for <sup>55</sup>Co, close to zero for <sup>55</sup>Fe and negative for <sup>55</sup>Mn. The experimental level densities (dashed lines) are in good agreement with the Monte Carlo calculations, suggesting a $`T_z=(N-Z)/2`$ dependence of the level density, which is usually ignored in empirical formulae but is clearly observed in our microscopic calculations. In conclusion, we have described a particle-number reprojection method in the SMMC that enables the calculation of thermal properties for a series of nuclei using Monte Carlo sampling for a single nucleus. We applied the method to the calculation of level densities. This work was supported in part by the Department of Energy grant No. DE-FG-0291-ER-40608, and by the Ministry of Education, Science, Sports and Culture of Japan (grants 08044056 and 11740137). Computational cycles were provided by the Cornell Theory Center, the San Diego Supercomputer Center and the NERSC high performance computing facility at LBL.
# LU TP 99–33 October 1999 hep-ph/9910415 Models of Low Energy Effective Theory applied to Kaon Non-leptonic Decays and Other Matrix Elements (work supported in part by TMR, EC-Contract No. ERBFMRX-CT980169 (EURODAΦNE); to appear in the proceedings of the International Workshop on Hadron Physics ‘Effective Theories of Low Energy QCD’, Coimbra, Portugal, September 10-15, 1999) ## Introduction The problem of describing non-leptonic decays is a very old one and is still not fully solved today. In this talk I will describe the large $`N_c`$ method first suggested in a series of papers by Bardeen, Buras and Gérard BBG and later extended by several other authors. I will illustrate most of the problems and solutions on the example of the charged and neutral pion mass difference, and afterwards show how this method can be extended systematically to the case of non-leptonic weak decays as well. The main results described here are those of BP1 and are also described in the talks . The subject of this meeting was hadronic physics, so why are we interested in these extra quantities? Because they provide a very strong test of our understanding of the strong interaction at all length scales. Our present knowledge of the strong interaction can be summarized in three regimes. At short distances we are in the perturbative QCD domain; here QCD has had many successes, and we count this region as understood. At long distances we are in the Chiral Perturbation Theory (CHPT) regime CHPTreview ; many successes again, and basically understood. In between lies the domain of models supplemented with various arguments, sum rules, lattice QCD results, etc., and this is the most difficult regime. In the type of observables covered in this talk all three regimes are important. We consider processes with incoming and outgoing hadrons but with an internally exchanged photon or weak boson. The difficulty now resides in the fact that even if the external hadrons all have low momenta, we need to integrate over all momenta of the internal $`\gamma `$ or $`W^+`$. 
This means that all regimes come into play and that they need to be connected properly to each other. The last step is known as matching. The main part is in Sect. I, where I show how we can explain the mass difference $`m_{\pi ^+}^2-m_{\pi ^0}^2`$ using this class of methods. Here we can also see how the model approach and the correct answer agree. Sect. II then covers the extra problems involved in non-leptonic weak decays and how the $`X`$-boson method of BP1 can be used to solve those. Finally I present numerical results for this case and conclusions. ## I A simple example: The $`\pi ^+`$-$`\pi ^0`$ mass difference. This non-leptonic matrix element has several features that make it simpler. 1. We can neglect $`m_u`$ and $`m_d`$ to a rather good approximation. This then allows current algebra to relate the electromagnetic mass difference to a vacuum to vacuum matrix element only das . This can then be related to the measured hadronic cross-sections in electron-positron annihilation, so in this case we know the correct answer. 2. There are no large masses involved, so there are no large logarithms that need resummation. 3. The photon itself provides for an easy identification of the correct scales. Basically the procedure is now to evaluate $$m_{em}^2=\langle M|e^2\int \frac{d^4q}{(2\pi )^4}\frac{J_\mu (q)J_\nu (-q)}{q^2}\left(g_{\mu \nu }-\xi \frac{q_\mu q_\nu }{q^2}\right)|M\rangle .$$ (1) where $`M`$ stands for the meson under consideration and $`J_\mu `$ for the electromagnetic current. $`\xi `$ is a gauge parameter. The procedure is now as follows: 1: We rotate the integral over photon momenta in Eq. (1) to Euclidean space. This has two advantages: in Euclidean space thresholds and poles are smoothed out, making their treatment easier, and Euclidean space momenta have all components small if $`q_E^2`$ is small. The latter allows for a simpler identification of long and short distances than in Minkowski space. 
2: The final step is now to set $$\int d^4q_E=\int _0^{\mathrm{\infty }}q_E^3𝑑q_E𝑑\mathrm{\Omega }=\underset{\text{long-distance}}{\underbrace{\int _0^\mu q_E^3𝑑q_E𝑑\mathrm{\Omega }}}+\underset{\text{short-distance}}{\underbrace{\int _\mu ^{\mathrm{\infty }}q_E^3𝑑q_E𝑑\mathrm{\Omega }}}$$ (2) and perform both integrals separately. Notice that the scale $`\mu `$ is just a splitting scale in the integral and is not directly related to any subtraction scale in the calculation itself. Therefore, if both the long-distance part (from 0 to $`\mu `$) and the short-distance part are calculated with high enough precision, the final result should be independent of the value of $`\mu `$. We check this by varying $`\mu `$ in all our calculations, i.e. we check the matching. ### I.1 Short-Distance The short-distance contribution was first calculated in BBGpi using the sum rule by Das et al. das . It was later rederived using the Operator Product Expansion in dashen . The diagrams in Fig. 1 depict the main contributions. Performing the photon integral leads to a set of four-quark operators that can be evaluated at leading order in $`1/N_c`$, since we can then apply factorization. The result is BBGpi ; dashen $$\left(m_{\pi ^+}^2-m_{\pi ^0}^2\right)_{SD}=\frac{3\alpha _S\alpha }{\mu ^2}F^2B_0^2=\frac{3\alpha _S\alpha }{\mu ^2}\frac{\langle \overline{q}q\rangle ^2}{F^2},$$ (3) with $`F`$ the pion decay constant in the chiral limit and $`B_0`$ the parameter in lowest order CHPT describing the quark condensate. ### I.2 Long-Distance In the previous subsection we could use perturbative QCD, but that is not possible in the long-distance domain. So here we have to put in the things we know and try various models. This can be done for $`\mu `$ rather small and leads to $$\left(m_{\pi ^+}^2-m_{\pi ^0}^2\right)_{LD}=\frac{3\alpha }{4\pi }\left(\mu ^2+\frac{2L_{10}}{F^2}\mu ^4\right).$$ (4) The first term was first obtained in BBGpi and the chiral correction in Bijnens ; Knecht1 . The two contributing diagrams are depicted in Fig. 2. 
An important point is that the gauge dependence cancels only between the two diagrams in Fig. 2. This was done in BdR and gives only a marginal improvement. Note that we cannot use the usual dimensional regularization here but must use the cut-off in the photon propagator. There is the additional problem that at first sight only the equivalent of the diagram of Fig. 2(a) appears, which is a two-loop diagram, and the result is not gauge invariant. Only after the equivalent of (b) is added, which is a three-loop diagram, does the gauge dependence cancel as required BdR . We have to include here Weinberg’s constraint on the couplings to obtain a unique result; otherwise the result will be very dependent on the specific model used. E.g. a hidden gauge model with only vector mesons is still quadratic in $`\mu ^2`$ but with a negative coefficient. Using Weinberg’s constraints leads to $$\left(m_{\pi ^+}^2-m_{\pi ^0}^2\right)_{LD}=\frac{3\alpha }{2\pi }M_V^2\mathrm{log}\left[\frac{M_V^2+\mu ^2}{M_A^2+\mu ^2}\frac{M_A^2}{M_V^2}\right].$$ (5) But beware of partial results: using a linear vector representation even gave a quartic dependence on $`\mu `$ Bijnens . The result in (5) for $`\mu \to \mathrm{\infty }`$ is basically the result of das and was also obtained in Ecker89 . It also has several nice features. Expanding in $`\mu `$ for small $`\mu `$ reproduces the CHPT result with the meson dominated value for $`L_{10}`$. For large $`\mu `$ it goes as $`1/\mu ^2`$, so it can match on very well to the earlier short-distance result. This basically coincides with the previous result and was first obtained in BRZ . Extensions of the above exist for nonzero quark masses (see dashen ; BP2 and references therein) and also with more large $`N_c`$ arguments to underpin the matching Knecht1 . ### I.3 Discussion Numerical results are shown in Fig 3 for all cases. The experimental value and the one with the sub-leading in $`1/N_c`$ subtracted are shown as the horizontal lines. 
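The behaviour of the long-distance result in Eq. (5), interpolating between the CHPT form at small $`\mu `$ and a $`1/\mu ^2`$ fall-off at large $`\mu `$, can be illustrated numerically. The input masses below are assumed illustrative values (not taken from the text):

```python
import math

alpha = 1.0 / 137.036        # fine-structure constant
M_V, M_A = 0.770, 1.230      # GeV; rho(770)- and a1(1260)-like masses (assumed)

def dm2_LD(mu):
    """Long-distance piece of eq (5), in GeV^2, as a function of the split scale mu."""
    return (3 * alpha / (2 * math.pi)) * M_V**2 * math.log(
        (M_V**2 + mu**2) / (M_A**2 + mu**2) * M_A**2 / M_V**2)

# For mu -> infinity, eq (5) saturates at the Das et al.-type value:
dm2_infty = (3 * alpha / (2 * math.pi)) * M_V**2 * math.log(M_A**2 / M_V**2)
# dm2_infty is of the same order as the observed
# m_{pi+}^2 - m_{pi0}^2 ~ 1.26e-3 GeV^2.
```

The monotonic rise and saturation of `dm2_LD` is what makes the matching to the $`1/\mu ^2`$ short-distance piece possible.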
The subtracted part is the chiral logarithm contribution as estimated in BP2 . Notice that CHPT starts deviating quickly from the VMD and ENJL results. The CHPT result is only reliable up to about 500 MeV. The VMD result and the ENJL result basically coincide here; the difference is due to the precise input values. Both these curves also follow essentially the correct answer as obtained from electron-positron annihilation and the sum rule of das . Notice that the ENJL model has the correct matching on to the low-$`\mu `$ CHPT result and is a considerable improvement over it at higher $`\mu `$. Notice also the almost perfect agreement with the estimated leading-$`N_c`$ part of the mass difference. From this section we can conclude: 1. Different low energy models give quite different results, and we have to use short-distance constraints and phenomenological inputs to improve the long-distance contribution above the regime where CHPT is applicable. 2. CHPT alone for the long-distance regime is acceptable as a first guesstimate but starts differing from the correct answer at a scale of about 500 MeV. 3. Even for this low-momentum dominated observable the short-distance contributions are sizable at scales around 800 MeV. ## II Kaon non-leptonic Decays One of the difficult unresolved problems is to understand the origin of the $`\mathrm{\Delta }I=1/2`$ rule. The underlying process is $`W^+`$-exchange leading to an operator of the quark structure $`(\overline{s}u)(\overline{u}d)`$, which has both isospin $`1/2`$ and isospin $`3/2`$ pieces. If we assume the $`W^+`$ couples directly to hadrons, the process $`K^+\to \pi ^+\pi ^0`$ goes simply via the diagrams in Fig. 4, but there are no such diagrams for $`K^0\to \pi ^0\pi ^0`$ because of charge conservation. So we would expect that $`\mathrm{\Gamma }(K^+\to \pi ^+\pi ^0)\gg \mathrm{\Gamma }(K^0\to \pi ^0\pi ^0)`$. 
The experimental numbers are $`\mathrm{\Gamma }(K^+\to \pi ^+\pi ^0)=1.1\times 10^{-14}\text{MeV}`$ and $`\mathrm{\Gamma }(K^0\to \pi ^0\pi ^0)=\frac{1}{2}\mathrm{\Gamma }(K_S\to \pi ^0\pi ^0)=2.3\times 10^{-12}\text{MeV,}`$ precisely the opposite. Translated into isospin amplitudes for the decays, see e.g. BPP for the precise definitions, we obtain $`\left|A_0/A_2\right|_{\text{exp}}=22.1`$. The problem is not due to chiral corrections, since using the estimate of BPP ; KMW we can extract them and get $$\left|A_0/A_2\right|_{\text{chiral}}=16.4\qquad \left(\underset{\text{naive}}{=}\sqrt{2}\right).$$ (6) where the last number is the one using naive $`W^+`$-exchange as depicted in Fig. 4. In the notation used in BP1 ; BPP we have $$A_0=C(9G_8+G_{27})\sqrt{6}/9F_0(m_K^2-m_\pi ^2)\qquad A_2=C\,10G_{27}\sqrt{6}/9F_0(m_K^2-m_\pi ^2)$$ (7) which after subtracting the estimated chiral corrections from experiment yields $$G_8=6.2\pm 0.7\qquad G_{27}=0.48\pm 0.06.$$ (8) Both $`G_8`$ and $`G_{27}`$ are equal to one in the $`W^+`$-exchange limit; the constant $`C`$ was chosen to achieve this. We thus have to explain the large deviation from 1 using the corrections suppressed by $`1/N_c`$. This is not a hopeless task, as the sub-leading corrections coming from the diagrams in Fig. 1 with the photon replaced by the gluon are of order $$\frac{\alpha _S}{N_c}\mathrm{log}\frac{M_W^2}{\mu ^2}$$ (9) compared to the leading contribution, and this is in fact larger than one. Luckily we know how to resum this type of logarithm buraslectures . At a high scale we can replace the effect of $`W^+`$-exchange by a sum of local operators by virtue of the operator product expansion. We can then use the whole renormalization group machinery to run this sum over local four-quark operators down to a low scale $`\mu _R`$. This is explained in great detail in buraslectures . The end result is $$H_W=\sum _{i=1}^{10}C_i(\mu _R)Q_i$$ (10) with a series of known coefficients $`C_i(\mu _R)`$, the Wilson coefficients. 
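The claim that the correction of Eq. (9) is "larger than one" is easy to check numerically. The value of $`\alpha _S`$ at a 1 GeV scale below is an assumed illustrative input:

```python
import math

# Rough size of the 1/N_c-suppressed but log-enhanced correction of eq (9)
alpha_S = 0.4    # assumed value of alpha_S at mu ~ 1 GeV
N_c = 3          # number of colors
M_W, mu = 80.4, 1.0   # GeV

correction = (alpha_S / N_c) * math.log(M_W**2 / mu**2)
# correction comes out around 1.2, i.e. larger than one, which is why these
# logarithms must be resummed with the renormalization group
```

This is the motivation for trading $`W^+`$-exchange for the running sum of local operators in Eq. (10).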
The final answer is then the matrix element of this sum over four-quark operators, $`\langle \pi \pi |H_W|K\rangle `$. But here we have two problems: 1. In the previous section the short- and long-distance contributions were separated via the photon momentum. Here we have to link this somehow to the scale $`\mu _R`$ appearing in the weak Hamiltonian $`H_W`$. 2. To next-to-leading order in the renormalization group the coefficients $`C_i(\mu _R)`$ also depend rather strongly on the precise definition of the local four-quark operators $`Q_i`$ in QCD perturbation theory. In BPBK we showed how the method of BBG supplemented with the correct momentum routing BBGpi ; BGK solved problem 1. In BP1 and in the various already published talks talks we showed how a careful identification across the long–short-distance boundary is also possible in this case. The basic idea is to go at the scale $`\mu _R`$ back from the local four-quark operators to the exchange of a series of $`X`$-bosons. These $`X`$-bosons can then be treated in exactly the same way as we did the photon in the previous section, thus allowing a correct calculation at all length scales. So we replace, using a single operator as an example, $$C_1Q_1=C_1(\overline{s}_L\gamma _\mu d_L)(\overline{u}_L\gamma ^\mu u_L)\to X_\mu \left[g_1(\overline{s}_L\gamma ^\mu d_L)+g_2(\overline{u}_L\gamma ^\mu u_L)\right].$$ (11) Using the tree level diagrams of Fig. 5 this gives $`C_1=g_1g_2/M_X^2`$. If we now include the one-loop diagrams we obtain instead: $$C_1\left(1+\alpha _S(\mu _R)r_1\right)=\frac{g_1g_2}{M_X^2}\left(1+\alpha _S(\mu )a_1+\alpha _S(\mu _R)b_1\mathrm{log}\frac{M_X^2}{\mu _R^2}\right).$$ (12) On the l.h.s. the scheme dependence disappears, but there is a dependence in $`r_1`$ on the choice of external states. The exact same dependence in $`a_1`$ cancels this. 
We now split the integral over the $`X`$-boson momentum as in the previous section, $$\int _0^{\mathrm{\infty }}𝑑p_X=\int _0^\mu 𝑑p_X+\int _\mu ^{\mathrm{\infty }}𝑑p_X.$$ (13) In the final answer all $`M_X`$ dependence drops out; the logarithm proportional to $`b_1`$ shows up in precisely the same way in the evaluation of the short-distance part of (13), which is proportional to $`g_1g_2/M_X^2\left\{\alpha _S(\mu )a_2+\alpha _S(\mu )b_1\mathrm{log}(M_X^2/\mu ^2)\right\}`$. The coefficients $`r_1`$, $`a_1`$ and $`a_2`$ give the corrections to the naive $`1/N_c`$-method. We now use the $`X`$-boson method described above and put $`\mu =\mu _R`$. The low energy part can be calculated using CHPT; this is the approach used originally by BBG and presently pursued by Hambye , without including the corrections due to the change in scheme when going to the long-distance part. Their results coincide with ours when we restrict our results to their approximations. We obtain BP1 $`B_K=0.6\text{–}0.8`$ $`B_K^\chi =0.25\text{–}0.40`$ $`G_8=4.3\text{–}7.5`$ $`G_{27}=0.25\text{–}0.40`$ $`G_8^{\prime }=0.8\text{–}1.1`$ $`B_6(\mu =0.8\,\text{GeV})\approx 2.2`$ (14) $`B_K`$ is the bag parameter relevant for $`K^0`$–$`\overline{K^0}`$ mixing at the physical quark masses and $`B_K^\chi `$ the same in the chiral limit. The quark mass corrections are quite sizable. The results for $`G_8`$ and $`G_{27}`$ are obtained without any free input and agree within the uncertainties of the method with the experimental values. We conclude that we now basically have a first-principles understanding of the $`\mathrm{\Delta }I=1/2`$ rule. We discuss the various contributions below. $`G_8^{\prime }`$ is the coefficient of the weak mass term; it contributes at leading order to processes like $`K_L\to \gamma \gamma `$ BPP and is often forgotten in those analyses. Finally, $`B_6`$ is much larger than the values used in all the analyses of the recent experimental results for $`ϵ^{\prime }/ϵ`$ ktevna48 , which is very encouraging. The final result for $`G_8`$ is depicted in Fig. 6. 
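The statement that all $`M_X`$ dependence drops out can be illustrated at the level of the $`b_1`$ logarithms: the $`\mathrm{log}(M_X^2/\mu _R^2)`$ extracted from Eq. (12) and the $`\mathrm{log}(M_X^2/\mu ^2)`$ from the short-distance part of (13) enter with opposite relative signs, leaving only a $`\mathrm{log}`$ of $`\mu _R`$ and $`\mu `$. A small symbolic sketch (illustrative only, not the actual calculation):

```python
import sympy as sp

b1, MX, mu, muR = sp.symbols('b_1 M_X mu mu_R', positive=True)

# eq (12) trades the couplings for C_1 minus a log(M_X^2/mu_R^2) piece;
# the short-distance part of (13) adds a log(M_X^2/mu^2) piece back in
combo = -b1 * sp.log(MX**2 / muR**2) + b1 * sp.log(MX**2 / mu**2)

# the M_X-dependent parts cancel; only the mu, mu_R dependence survives
combo_expanded = sp.expand_log(combo, force=True)
```

Setting $`\mu =\mu _R`$, as done in the text, then removes the residual logarithm as well.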
We have shown the one-loop result (1-loop), the two-loop result with NDR Wilson coefficients (2-loop) and the two-loop result with the correction for the long-distance scheme added (SI), using our results for the long-distance part. We also show what naive factorization with the SI Wilson coefficients would give and what the method of Hambye would give in the chiral limit with the same Wilson coefficients (SI quad). If we look at the various contributions to $`G_8`$ we see in Fig. 7 that the contributions of $`Q_1`$ and $`Q_2`$ are both large and fairly constant, while $`Q_6`$ contributes 20% or less. If we look inside the calculation we see that the difference with the $`G_{27}`$ evolution is mainly given by the long-distance Penguin-like contributions to $`Q_2`$. The behaviour of $`B_6`$ is more difficult: it is ill-defined in the chiral limit in the factorizable approximation BP1 , and we can thus only define it with respect to the full large-$`N_c`$ limit. Calculating it in CHPT only then gives fairly low values, as is visible in the second line of Table 1. Adding higher order corrections we immediately obtain a strongly enhanced value, as is obvious from Table 1. ## III Conclusions #### The $`X`$-boson method in combination with large $`N_c`$ arguments allows one to identify quantities correctly across theory boundaries, assuming we can identify currents across the boundary. #### The mass difference $`m_{\pi ^+}^2-m_{\pi ^0}^2`$ is well described by these methods, with a surprisingly large short-distance contribution. #### The $`\mathrm{\Delta }I=1/2`$ rule is now quantitatively understood to about 30% with NO free input. This calculation passes all requirements usually asked for in this context, but there are many technical subtleties. #### $`B_6\approx 2.2`$ is good news for those trying to explain the observed values of $`ϵ^{\prime }/ϵ`$ within the standard model. #### This program has been quite successful but we need new ideas to calculate more complex processes.
# The diffusion equation and the principle of minimum Fisher information ## I Introduction The derivation of the diffusion equation from a fixed end-point variational principle is well known . The Lagrangian that is normally used leads simultaneously to two equations for two real functions: the diffusion equation for a function $`\psi `$, and its adjoint (time reversed) equation for a function $`\psi ^{\ast }`$. This Lagrangian is usually introduced formally, without physical justification (consider the following quote from Ref. : “The introduction of the mirror-image field $`\psi ^{\ast }`$, in order to set up a Lagrange function from which to obtain the diffusion equation, is probably too artificial a procedure to expect to obtain much of physical significance from it”). We wish to show that this Lagrangian results from applying an information-theoretic approach to the solution of the following interpolation problem. Consider an experiment where the probability density $`\rho (x,t)`$ and the average velocity field $`v(x,t)`$ of a cloud of particles of mass $`m`$ are measured at times $`t_0`$ and $`t_1`$ (for simplicity, we consider only motion in one dimension). Assume that $`\rho `$ satisfies the continuity equation $$\frac{\partial \rho }{\partial t}+\frac{\partial }{\partial x}\left(\rho v\right)=0.$$ (1) Without additional assumptions regarding the dynamics of the system, the problem of determining the probability density and velocity field at times $`t`$ (where $`t_0<t<t_1`$) cannot be solved, since there are an infinite number of probability densities and velocity fields that will interpolate between the values measured at times $`t_0`$ and $`t_1`$. However, we would still like to find best estimates of $`\rho `$ and $`v`$, perhaps by adding some assumptions about the physical processes that determine the motion of the cloud of particles, and by using some principle of inference to select the most likely probability distribution that might describe its evolution. 
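The non-uniqueness of the interpolation is easy to see concretely: any density evolution paired with a matching velocity field satisfies Eq. (1). As a minimal sketch (the spreading-Gaussian example and its width function are my own illustrative choice, not from the text), take $`\rho `$ Gaussian with width $`s(t)`$ and $`v=x\dot{s}/s`$, and check continuity numerically:

```python
import math

def s(t):            # assumed width function s(t) = sqrt(1 + t)
    return math.sqrt(1.0 + t)

def rho(x, t):       # normalized Gaussian of width s(t)
    return math.exp(-x * x / (2 * s(t)**2)) / (math.sqrt(2 * math.pi) * s(t))

def v(x, t):         # v = x s'(t)/s(t) = x / (2 (1 + t)) for this s(t)
    return x / (2.0 * (1.0 + t))

# central finite differences for d(rho)/dt + d(rho v)/dx at a sample point
h, x0, t0 = 1e-5, 0.7, 0.3
drho_dt = (rho(x0, t0 + h) - rho(x0, t0 - h)) / (2 * h)
dflux_dx = (rho(x0 + h, t0) * v(x0 + h, t0) - rho(x0 - h, t0) * v(x0 - h, t0)) / (2 * h)
```

Replacing `s(t)` by any other smooth width history with the same endpoints gives another valid interpolation, which is exactly the ambiguity the paper resolves with minimum Fisher information.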
The main result of this paper is to show that the dynamics of such a system will be determined uniquely by the diffusion equation and its adjoint equation, $$\frac{\partial \psi }{\partial t}=-\frac{D}{2m}\frac{\partial ^2\psi }{\partial x^2},$$ (2) $$\frac{\partial \psi ^{\ast }}{\partial t}=\frac{D}{2m}\frac{\partial ^2\psi ^{\ast }}{\partial x^2},$$ (3) (where $`\psi `$ and $`\psi ^{\ast }`$, defined by eqs. (13) and (14), are real functions of $`\rho `$ and $`\sigma `$, and $`D/2m`$ is the diffusion constant) provided we make the following two assumptions about the system: that the velocity field can be derived from a potential function $`\sigma (x,t)`$, according to $$v=\frac{1}{m}\frac{\partial \sigma }{\partial x},$$ (4) and that the probability density $`\rho `$ that interpolates between times $`t_0`$ and $`t_1`$ is the one that minimizes the Fisher information $`I`$ associated with $`\rho `$, which we define by (see Appendix A) $$I=\frac{1}{m}\int _{t_0}^{t_1}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\frac{1}{\rho }\left(\frac{\partial \rho }{\partial x}\right)^2𝑑x𝑑t.$$ (5) The first assumption is equivalent to introducing a particular physical model, in which the motion of the cloud of particles corresponds to that of a fluid with no vorticity. The second assumption is an information-theoretical assumption. ## II Derivation of the diffusion equation from a variational principle Eqs. (1) and (4) lead to the continuity equation $$\frac{\partial \rho }{\partial t}+\frac{\partial }{\partial x}\left(\rho \frac{1}{m}\frac{\partial \sigma }{\partial x}\right)=0.$$ (6) Eq. (6) can be derived from the Lagrangian $`L_{CL}`$ by fixed end-point variation with respect to $`\sigma `$, $$L_{CL}=\int _{t_0}^{t_1}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\rho \left(\frac{\partial \sigma }{\partial t}+\frac{1}{2m}\left(\frac{\partial \sigma }{\partial x}\right)^2\right)𝑑x𝑑t.$$ (7) Note also that fixed end-point variation with respect to $`\rho `$ leads trivially to the Hamilton-Jacobi equation, $$\frac{\partial \sigma }{\partial t}+\frac{1}{2m}\left(\frac{\partial \sigma }{\partial x}\right)^2=0.$$ (8) Therefore, variation of $`L_{CL}`$ with respect to both $`\rho `$ and $`\sigma `$ leads to the equations of motion for a classical ensemble, eqs. (6) and (8). 
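The two Euler-Lagrange equations of the classical Lagrangian can be obtained mechanically with a computer algebra system. A minimal sketch (my own check, assuming the Lagrangian density $`\rho (\sigma _t+\sigma _x^2/2m)`$ stated in the text):

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, t, m = sp.symbols('x t m', positive=True)
rho = sp.Function('rho')(x, t)
sigma = sp.Function('sigma')(x, t)

# Lagrangian density of eq (7) (the integrand of L_CL)
L = rho * (sp.diff(sigma, t) + sp.diff(sigma, x)**2 / (2 * m))

eq_rho, eq_sigma = euler_equations(L, [rho, sigma], [x, t])
# varying rho:   sigma_t + sigma_x^2/(2m) = 0        -> Hamilton-Jacobi, eq (8)
# varying sigma: rho_t + (1/m) d/dx(rho sigma_x) = 0 -> continuity, eq (6)
#                (up to an irrelevant overall sign)
```

This is the pattern reused below when the Fisher-information term is added to the Lagrangian.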
There is still considerable freedom in the choice of probability density that can be used to describe the system, since it is only subject to eq. (6). To derive the diffusion equation and its adjoint, we need to restrict the choice of probability densities using the principle of minimum Fisher information. We consider therefore the Lagrangian $`L_D`$, $$L_D=\int _{t_0}^{t_1}\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\rho \left\{\frac{\partial \sigma }{\partial t}+\frac{1}{2m}\left[\left(\frac{\partial \sigma }{\partial x}\right)^2-\frac{D^2}{4}\frac{1}{\rho ^2}\left(\frac{\partial \rho }{\partial x}\right)^2\right]\right\}𝑑x𝑑t.$$ (9) The Lagrangian $`L_D`$ equals $`L_{CL}`$ plus an additional term proportional to the Fisher information $`I`$, $$L_D=L_{CL}-\frac{1}{2}\frac{D^2}{4}I.$$ (10) Fixed end-point variation of $`L_D`$ with respect to $`\sigma `$ leads once more to eq. (6), while variation with respect to $`\rho `$ leads to a modified Hamilton-Jacobi equation that includes a term $`Q`$ which is of the form of Bohm’s quantum potential (but notice that it appears here within the context of a classical theory), $$\frac{\partial \sigma }{\partial t}+\frac{1}{2m}\left(\frac{\partial \sigma }{\partial x}\right)^2+Q=0$$ (11) with $$Q=-\frac{D^2}{8m}\left[\frac{1}{\rho ^2}\left(\frac{\partial \rho }{\partial x}\right)^2-\frac{2}{\rho }\left(\frac{\partial ^2\rho }{\partial x^2}\right)\right].$$ (12) Eqs. (6) and (11) are identical to eqs. (2) and (3) provided we set $$\psi =\sqrt{\rho }e^{+\sigma /D},$$ (13) $$\psi ^{\ast }=\sqrt{\rho }e^{-\sigma /D}.$$ (14) It can be shown (see Appendix B) that the Fisher information $`I`$ increases when $`\rho `$ is varied while $`\sigma `$ is kept fixed. Therefore, the solution derived here is the one that minimizes the Fisher information for a given $`\sigma `$. ## III Connection to Brownian motion Although $`\psi `$ is a solution of the diffusion equation, it will not correspond in general to the case of Brownian motion. 
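That $`\psi =\sqrt{\rho }e^{+\sigma /D}`$ obeys a diffusion-type equation once the continuity and modified Hamilton-Jacobi equations hold can be checked symbolically. The sign conventions below (the form of $`Q`$ as obtained by varying the Lagrangian density of eq. (9) with respect to $`\rho `$) are assumptions of this sketch, not the authors' code:

```python
import sympy as sp

x, t, D, m = sp.symbols('x t D m', positive=True)
rho = sp.Function('rho', positive=True)(x, t)
sigma = sp.Function('sigma')(x, t)

# continuity:  rho_t = -(1/m) d/dx ( rho * sigma_x )
rho_t = -sp.diff(rho * sp.diff(sigma, x), x) / m
# modified Hamilton-Jacobi, with Q from varying the Fisher term of eq (9):
Q = -(D**2 / (8 * m)) * (sp.diff(rho, x)**2 / rho**2 - 2 * sp.diff(rho, x, 2) / rho)
sigma_t = -sp.diff(sigma, x)**2 / (2 * m) - Q

psi = sp.sqrt(rho) * sp.exp(sigma / D)
# substitute the equations of motion into psi_t + (D/2m) psi_xx
expr = sp.diff(psi, t) + D / (2 * m) * sp.diff(psi, x, 2)
expr = expr.subs({sp.Derivative(rho, t): rho_t, sp.Derivative(sigma, t): sigma_t})
residual = sp.simplify(sp.expand(expr / psi))  # vanishes identically
```

The analogous substitution with $`\sigma \to -\sigma `$ gives the adjoint (time reversed) equation for $`\psi ^{\ast }`$.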
Here, $`\psi =\sqrt{\rho }e^{+\sigma /D}`$ is proportional to the square root of a probability distribution, while in Brownian motion the probability distribution $`\rho `$ is the function that satisfies the diffusion equation. The $`\psi `$ that we have derived here is essentially the “wave function” of Euclidean quantum mechanics . The case of Brownian motion corresponds to a particular solution of eqs. (6) and (11), one for which $$\sigma _{BM}=\frac{D}{2}\mathrm{ln}\rho .$$ (15) In this case, the velocity field takes the form $$v_{BM}=\frac{D}{2m}\frac{\partial \mathrm{ln}\rho }{\partial x}.$$ (16) Eq. (16) is known as the osmotic equation, and $`v_{BM}`$ is the osmotic velocity. If we substitute $`\sigma _{BM}`$ into eqs. (13) and (14), the “wave functions” become $$\psi _{BM}=\rho $$ (17) $$\psi _{BM}^{\ast }=1$$ (18) which solve eqs. (2) and (3) provided the probability density $`\rho `$ is a solution of the diffusion equation. One can also check that eqs. (6) and (11) both reduce to $$\frac{\partial \rho }{\partial t}+\frac{D}{2m}\frac{\partial ^2\rho }{\partial x^2}=0$$ (19) when eq. (15) holds. ## IV Discussion It has been shown that the diffusion equation and its adjoint (time reversed) equation can be derived using an information-theoretic approach that is based on the principle of minimum Fisher information. In the information-theoretic approach followed here, the emphasis is on using the principle of minimum Fisher information to complement a physical picture derived from a particular hydrodynamical model of the system. Variation of the Lagrangian (9) can be interpreted as the minimization of the Fisher information subject to the constraint that the probability density satisfy the continuity equation (6), which arises naturally in the hydrodynamical model. An alternative approach to the diffusion equation that also uses minimum Fisher information can be found in Ref. . 
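As a quick cross-check of the Brownian special case above, substituting $`\sigma _{BM}`$ of eq. (15) into the continuity equation (6) does reduce it to eq. (19). A few symbolic lines suffice (a sketch with the conventions as stated in the text):

```python
import sympy as sp

x, t, D, m = sp.symbols('x t D m', positive=True)
rho = sp.Function('rho', positive=True)(x, t)

sigma_BM = D / 2 * sp.log(rho)                   # eq (15)
flux = rho * sp.diff(sigma_BM, x) / m            # rho * v, with v from eq (4)
continuity = sp.Derivative(rho, t) + sp.diff(flux, x)       # eq (6)
target = sp.Derivative(rho, t) + D / (2 * m) * sp.diff(rho, x, 2)  # eq (19)
```

The check for the modified Hamilton-Jacobi equation (11) proceeds the same way.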
This derivation, however, differs from the present one in two crucial respects; in particular, the equation is not derived from a Lagrangian, and the derivation does not make reference to the hydrodynamical model. The approach followed here provides a physically well motivated derivation of the diffusion equation which distinguishes between physical and information-theoretical assumptions. A similar approach leads to the Schrödinger and Pauli equations. ## V Appendix A Let $`\mu `$ be a measure defined on $`\mathbb{R}^n`$, let $`P(y^i)`$ be a probability density with respect to $`\mu `$ which is a function of $`n`$ continuous parameters $`y^i`$, and let $`P(y^i+\mathrm{\Delta }y^i)`$ be the density that results from a small change in the $`y^i`$. Expand $`P(y^i+\mathrm{\Delta }y^i)`$ in a Taylor series, and calculate the cross-entropy up to the first non-vanishing term, $$\int P(y^i+\mathrm{\Delta }y^i)\mathrm{ln}\frac{P(y^i+\mathrm{\Delta }y^i)}{P(y^i)}d\mu (y^i)\approx \frac{1}{2}\sum _{j,k}^{n}\left[\int \frac{1}{P(y^i)}\frac{\partial P(y^i)}{\partial y^j}\frac{\partial P(y^i)}{\partial y^k}𝑑\mu (y^i)\right]\mathrm{\Delta }y^j\mathrm{\Delta }y^k$$ (20) The terms in square brackets are the elements of the Fisher information matrix (while this is not the most general definition of the Fisher information matrix, it is one that applies to the present case . For the general case, see Ref. ). 
If $`P`$ is defined over an $`n`$-dimensional manifold $`M`$ with (positive) metric $`g^{ik}`$, there is a natural definition of the amount of information $`I`$ associated with $`P`$, which is obtained by contracting the metric $`g^{ik}`$ with the elements of the Fisher information matrix, $$I=\sum _{i,k}^{n}g^{ik}\int \frac{1}{P}\left(\frac{\partial P}{\partial y^i}\right)\left(\frac{\partial P}{\partial y^k}\right)𝑑\mu (y^i).$$ (21) In the case where $`M`$ is the $`n+1`$ dimensional extended configuration space $`Q\times T`$ (with coordinates $`\{t,x^1,\mathrm{\dots },x^n\}`$) of a non-relativistic particle of mass $`m`$, the natural metric is the one used to define the kinematical line element in configuration space, which is of the form $`g^{ik}=\mathrm{diag}(0,1/m,\mathrm{\dots },1/m)`$ . Note that with this metric, it is straightforward to generalize the results of the paper to the case of diffusion in many space dimensions. In particular, we replace the velocity field in eq. (4) by the expression $`v^i=g^{ik}\partial \sigma /\partial x^k`$, and the Lagrangian in eq. (9) by $$L_D=\int \rho \left\{\frac{\partial \sigma }{\partial t}+\frac{1}{2}\sum _{i,k}^{n}g^{ik}\left[\frac{\partial \sigma }{\partial x^i}\frac{\partial \sigma }{\partial x^k}-\frac{D^2}{4}\frac{1}{\rho ^2}\frac{\partial \rho }{\partial x^i}\frac{\partial \rho }{\partial x^k}\right]\right\}d^nx𝑑t.$$ (22) In the case of one time and one space dimension, eq. (21) reduces to eq. (5). To express $`I`$ in units of energy, we need to introduce a conversion factor with units of action squared and multiply eq. (5) by this factor. In the case of the diffusion process, we can set the conversion factor proportional to $`D^2`$, although it is also possible to introduce a universal constant of action, such as $`\mathrm{\hbar }`$, and set the conversion factor proportional to $`\mathrm{\hbar }^2`$. ## VI Appendix B We want to examine the extremum obtained from the fixed end-point variation of the Lagrangian $`L_D`$. In particular, we wish to show the following: given $`\rho `$ and $`\sigma `$ that satisfy eqs. 
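The one-dimensional specialization of the Fisher information in Eq. (21) is easy to test on a concrete family. For a Gaussian location family the spatial integral $`\int (1/\rho )(\partial \rho /\partial x)^2\,dx`$ equals $`1/s^2`$, the classic Fisher information of a Gaussian of width $`s`$; a minimal numerical check (the width value below is an assumed example):

```python
import math

s = 1.5  # assumed illustrative width

def rho(x):
    return math.exp(-x * x / (2 * s * s)) / (math.sqrt(2 * math.pi) * s)

def integrand(x):
    drho = -x / (s * s) * rho(x)   # d(rho)/dx for the Gaussian
    return drho * drho / rho(x)

# midpoint rule on [-10, 10]; tails beyond are negligible for s = 1.5
h = 1e-3
I_spatial = sum(integrand(-10.0 + (i + 0.5) * h) * h for i in range(20000))
# I_spatial is close to 1/s**2
```

Narrower densities (smaller $`s`$) thus carry more Fisher information, which is why minimizing $`I`$ favors smooth, spread-out interpolating densities.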
(6) and (11), a small variation of the probability density $`\rho (x,t)\to \rho ^{\prime }(x,t)=\rho (x,t)+ϵ\delta \rho (x,t)`$ for fixed $`\sigma `$ will lead to an increase in $`L_D`$, as well as an increase in the Fisher information $`I`$. We assume fixed end-point variations ($`\delta \rho =0`$ at the boundaries), and variations $`ϵ\delta \rho `$ that are well defined in the sense that $`\rho ^{\prime }`$ will have the usual properties required of a probability distribution (such as $`\rho ^{\prime }>0`$ and normalization). Let $`\rho \to \rho ^{\prime }=\rho +ϵ\delta \rho `$. Since $`\rho `$ and $`\sigma `$ are solutions of the variational problem, the terms linear in $`ϵ`$ vanish. If we keep terms up to order $`ϵ^2`$, we find that $`\mathrm{\Delta }L_D`$ $`\equiv `$ $`L_D(\rho ^{\prime },\sigma )-L_D(\rho ,\sigma )`$ (23) $`=`$ $`ϵ^2{\displaystyle \frac{D^2}{8m}}{\displaystyle \int \left\{\frac{(\delta \rho )^2}{\rho ^3}\left(\frac{\partial \rho }{\partial x}\right)^2-\frac{2\delta \rho }{\rho ^2}\left(\frac{\partial \rho }{\partial x}\right)\left(\frac{\partial \delta \rho }{\partial x}\right)+\frac{1}{\rho }\left(\frac{\partial \delta \rho }{\partial x}\right)^2\right\}𝑑x𝑑t}+O\left(ϵ^3\right).`$ (24) Using the relation $$\frac{\partial }{\partial x}\left(\frac{\delta \rho }{\rho }\right)=\frac{1}{\rho }\frac{\partial \delta \rho }{\partial x}-\frac{1}{\rho ^2}\frac{\partial \rho }{\partial x}\delta \rho ,$$ (25) we can write $`\mathrm{\Delta }L_D`$ as $$\mathrm{\Delta }L_D=ϵ^2\frac{D^2}{8m}\int \rho \left[\frac{\partial }{\partial x}\left(\frac{\delta \rho }{\rho }\right)\right]^2𝑑x𝑑t+O\left(ϵ^3\right),$$ (26) which shows that $`\mathrm{\Delta }L_D>0`$ for small variations, and therefore the extremum of $`\mathrm{\Delta }L_D`$ is a minimum. Furthermore, since $`\mathrm{\Delta }L_D\propto D^2`$, it is the Fisher information term $`I`$ in the Lagrangian $`\mathrm{\Delta }L_D`$ that increases, and the extremum is also a minimum of the Fisher information.
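The step from the three-term integrand of (24) to the perfect square of (26) is a pointwise algebraic identity, which can be confirmed at arbitrary sample values (the numbers below are assumed test values, nothing more):

```python
# Pointwise identity behind eqs (24)-(26):
#   (drho)^2/rho^3 * rho_x^2 - (2 drho/rho^2) rho_x drho_x + (1/rho) drho_x^2
#     = rho * [ (1/rho) drho_x - (rho_x/rho^2) drho ]^2,
# where the bracket is d/dx(drho/rho), eq (25).
rho, rho_x = 0.7, -0.3      # sample values of rho and its x-derivative
drho, drho_x = 0.2, 0.5     # sample values of delta-rho and its x-derivative

lhs = drho**2 / rho**3 * rho_x**2 - 2 * drho / rho**2 * rho_x * drho_x + drho_x**2 / rho
rhs = rho * (drho_x / rho - rho_x * drho / rho**2)**2
```

Since the right-hand side is manifestly non-negative for $`\rho >0`$, the sign of the second-order variation follows immediately.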
# Signature of exotic particles in light by light scattering thanks: E–mail: gtv@fis.cinvestav.mxthanks: E–mail: jtoscano@fcfm.buap.mx ## Abstract We discuss the implications on light by light scattering of two kinds of exotic particles: doubly charged scalar bosons and doubly charged fermions; the virtual effects of a nonstandard singly charged gauge boson are also examined. These particles, if their masses lie in the range 0.1–1.0 TeV, will have a clear signature in the future linear colliders. The present analysis has the advantage that it depends only on electromagnetic symmetry, so it is applicable to any model which predicts this class of particles. In particular, our results have interesting consequences for left-right models and their supersymmetric extension. The major dream in particle physics is a final theory of elementary interactions. The standard model (SM) leaves many unaddressed questions, so it is only one step toward the achievement of such a theory. In this letter we will examine the implications of different kinds of exotic particles on light by light scattering , which has been proposed recently as a useful mode to detect virtual effects of new physics at the future linear colliders (LC) . Our study includes doubly charged scalar bosons and fermions, as well as a singly charged gauge boson heavier than the SM one. The mass range studied will be 0.1–1 TeV, which would be within the reach of LC . Since the $`\gamma \gamma `$ scattering amplitude is proportional to the electric charge factor $`Q^4`$, the contribution of particles with charge greater than unity, in terms of the positron charge, would enhance dramatically the respective cross section, resulting in a distinctive signal of new physics. 
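The size of this enhancement follows from simple charge counting (no dynamics needed): for a doubly charged particle the $`Q^4`$ amplitude factor, and hence the squared-amplitude cross section, grows by a large fixed factor relative to a singly charged one.

```python
Q = 2                                   # doubly charged particle
amplitude_factor = Q**4                 # the Q^4 scaling of the gamma-gamma amplitude
cross_section_factor = amplitude_factor**2   # sigma ~ |amplitude|^2
# relative to Q = 1, the cross section is enhanced by a factor 2**8 = 256
```

This factor of a few hundred is what makes the signature so distinctive at a photon collider.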
Moreover, the structure of the cross section is entirely dictated by the spin of the particles circulating in the loop and by electromagnetic symmetry, thus it is not necessary to make further assumptions about a specific model. As a consequence, our analysis is applicable to any model which predicts such particles. Nevertheless, the main motivation of our work resides in two popular extensions of the standard model, namely left-right symmetric models (LRM) and their supersymmetric extension (SUSYLR) , where these exotic particles are a natural prediction and they might provide the most distinctive signature of these models. We will begin by presenting a brief outline of these extensions. Doubly charged Higgs bosons emerge in many extensions of the SM Higgs sector . Unacceptable effects on the electroweak tree level relation $`\rho \equiv m_W^2/(\mathrm{cos}^2\theta _Wm_Z^2)=1`$ may arise from exotic representations which have triplets or higher representations with a neutral member, but it is possible to overcome this problem by resorting to extra assumptions. This is the case in a popular class of models with both doublets and triplets, where a custodial $`\mathrm{SU}(2)`$ symmetry is invoked to protect the $`\rho `$ relation . In general, the $`\rho `$ problem is avoided in representations without neutral members or else in models where the vacuum expectation value (VEV) of the neutral component of the Higgs multiplets vanishes. More complicated possibilities arise in extensions of the $`\mathrm{SU}(2)\times \mathrm{U}(1)`$ gauge group. For instance, the minimal version of LRM, which is based on the $`\mathrm{SU}(2)_\mathrm{L}\times \mathrm{SU}(2)_\mathrm{R}\times \mathrm{U}(1)_{\mathrm{B}-\mathrm{L}}`$ gauge group, requires the presence of one bidoublet as well as left and right triplets.
These models predict the existence of new physics at an intermediate scale provided by the VEV of the neutral component of the right triplet; parity symmetry would be restored at this scale. The VEVs of the neutral components of the bidoublet are identified with the Fermi scale. As far as the left triplet is concerned, it is only required to preserve left-right symmetry, and its neutral component can get a VEV constrained to be small to maintain the $`\rho `$ relation. In its minimal realization, LRM predict fourteen physical Higgs bosons, but we are only interested in the doubly charged bosons, as it is likely that they give a clear signature of new physics. In the gauge sector, the only new contribution to light by light scattering comes from a charged right–handed gauge boson. The low-energy implications of LRM depend strongly on the structure and the vacuum stability of the Higgs potential. It was shown that a careful analysis of the most general Higgs potential consistent with the low-energy data might not lead to new physics at a low scale . However, if discrete symmetries are imposed, the most delicate terms of the Higgs potential can be eliminated, allowing the existence of a scale that would be accessible at LC. In particular, in these scenarios some Higgs bosons might be light, with masses of the order of the Fermi scale. With regard to doubly charged fermions, they first appeared in natural lepton models . More recently, they have emerged naturally from the supersymmetric extension of LRM. In this respect, there is the belief that supersymmetry (SUSY), and in particular its minimal low-energy realization, the supersymmetric standard model (MSSM), is a natural candidate to supersede the SM.
Although the MSSM offers solutions to problems not explained by the SM, it has many undesirable features, namely it predicts large $`CP`$ violating effects, it allows the presence of baryon and lepton number violating terms in the Lagrangian, and it forbids the existence of massive neutrinos if global $`R`$–parity is to be conserved. These problems may be cured by considering the supersymmetric extension of LRM, SUSYLR , which also has the attractive feature of a low-energy scale $`m\sim M_R^2/M_{\mathrm{Planck}}`$, where $`M_R`$ is the scale of left-right symmetry breaking. It happens that some singly and doubly charged Higgs scalars and the respective superpartners have their masses proportional to $`m`$. Very recently it has been argued that two interesting possibilities arise depending on whether the vacuum of the theory does or does not conserve $`R`$–parity . When the vacuum conserves $`R`$–parity, low energy data set a lower limit on $`M_R`$ of about $`10^{10}`$ GeV. On the other hand, in the scenario where the vacuum state breaks $`R`$–parity spontaneously, there exists an upper limit of about 10 TeV for $`M_R`$. It follows that even in the case of $`M_R`$ in the range $`10^{10}`$–$`10^{12}`$ GeV, there is the possibility of some light doubly charged Higgs bosons and Higgsinos. This is an important motivation to study the implications of doubly charged scalars and fermions at LC through light by light scattering. The main virtue of light by light scattering is that to a certain extent it is a model-independent process: the structure of its amplitude is entirely dictated by the spin of the virtual particles as well as electromagnetic symmetry, and of course by the number of such particles. Therefore, the only dependence on a specific model is given by the mass and electric charge of its particle content.
As was pointed out in , in the SM the helicity amplitudes of the $`\gamma \gamma `$ scattering are almost purely imaginary at high energies, precisely in the range where there is the possibility of observing the appearance of fields associated with models beyond the SM. It follows that if the contribution arising from additional charged particles has an appreciable imaginary part, the virtual effects would become evident through the interference with the SM contribution even if the respective contribution is too small to be detected by itself. Another interesting feature of light by light scattering is that the amplitude arising from loops with the same particle circulating in them turns out to be proportional to the charge factor $`Q^4`$; the consequence is that the cross section gets enhanced dramatically when the contribution of a doubly charged particle is considered. With all these properties, the process $`\gamma \gamma \to \gamma \gamma `$ provides an excellent mechanism to search for virtual effects of new physics at LC: the signature of a certain particle with a particular spin and charge depends only on its mass, and it is not necessary to consider model-dependent parameters or make further assumptions, as is the case when direct production is studied. We will proceed to discuss our results. We have considered the implications on light by light scattering of doubly charged scalars and fermions with masses in the range 0.1–0.5 TeV. As far as the singly charged gauge boson is concerned, we have studied the case in which its mass is greater than the existing bound of $`550`$ GeV, obtained by assuming a light right-handed neutrino . For the purpose of this work, it is sufficient to analyze unpolarized cross sections.
A more detailed study will be presented elsewhere , including polarized cross sections and the implications of the technical details of LC, together with the study of exotic particles not discussed here: doubly charged gauge bosons and exotic quarks with charges $`5/3e`$ and $`4/3e`$, which are predicted by some $`\mathrm{SU}(3)_\mathrm{c}\times \mathrm{SU}(3)_\mathrm{L}\times \mathrm{U}(1)_\mathrm{N}`$ models . The helicity amplitudes of contributions of loops with scalars, fermions and gauge bosons are well known . To obtain the cross section we have worked with the exact amplitudes without making any simplification, but to consider the observable cross section the angular integration has been constrained to $`|\mathrm{cos}\theta |\le \mathrm{cos}30^\mathrm{o}`$. The numerical analysis was done with the program FF . The case where singly charged fermions and scalar bosons are involved was studied in detail in , in the context of SUSY models <sup>1</sup>We do not show these results, but it must be noticed that we find a nice agreement with . . The remarkable properties of $`\gamma \gamma `$ scattering acquire new dimensions when particles with a charge greater than unity, in units of the positron charge, are involved. This is shown through Figs. 1-6, where we have plotted separately the contributions of each exotic particle for different values of its mass. In Fig. 1 we show the respective contribution, as well as the interference between a doubly charged fermion and the SM contribution, scaled by the SM unpolarized cross section $`\sigma _{\mathrm{SM}}`$. It must be noticed that, though the virtual effects arise predominantly from the interference term, in the case of a light fermion with a mass of about 100 GeV even its own contribution plays an important role. This seems to contradict the well known result that at energies of about 300 GeV the top contribution is negligible with respect to the $`W_L`$ term.
The explanation resides in the powerful charge factor: as the helicity amplitudes for the contribution of a certain particle are proportional to the factor $`Q^4`$, the cross section arising from a doubly charged fermion turns out to be enhanced by the factor $`2^8/(3(2/3)^4)^2=3^6`$ in comparison with the case of an up–type quark, whereas the interference term gets enhanced by the factor $`3^3`$. When the mass $`M_{\widehat{\delta }_{++}}`$ of the doubly charged fermion is greater than 200 GeV, its contribution to the unpolarized cross section tends to be suppressed with respect to the interference term. It is also interesting to note that the virtual effects are always observed above the threshold $`\sqrt{s}=2M_{\widehat{\delta }_{++}}`$. This fact is explained by the predominantly imaginary character of the SM helicity amplitudes at energies above 300 GeV; at the same time, the new contributions are purely real below the threshold and complex above it. As the interference is given by $`2\mathrm{Re}(A_{\mathrm{SM}}A_{\mathrm{New}}^{*})`$, it follows that the virtual effects will become evident above the threshold. To realize the magnitude of the deviation from the SM, we have plotted in Fig. 2 the effect that would be observed if, in addition to the SM particle content, a doubly charged fermion is included. It is clear that the signature of such an exotic particle would be very distinctive. The sensitivity of the unpolarized cross section at a linear collider running at energies in the range of 350–800 GeV has been examined for the case of a singly charged fermion ; it was found that for a chargino with a mass of 100-250 GeV the signal varies between 3 SD and 1 SD. It is evident that in the case of a doubly charged fermion we should expect an important increment . Although the signature of a doubly charged scalar is not as spectacular as that of a fermion with the same charge, the situation is also promising, as depicted in Figs. 3–4. In Fig.
3 it can be seen that the virtual effects come predominantly from the interference with the SM particles; the result is reflected in the deviation from the unpolarized cross section $`\sigma _{\mathrm{SM}}`$ (Fig. 4). If we compare this particular case with that of a singly charged scalar with about the same mass, we find that, since the dominant effect comes from interference, the deviation for a doubly charged scalar is larger by a factor of $`2^4`$. As a result, the possibility of observing the virtual effects of a relatively heavy doubly charged scalar would be increased as compared to the case of the singly charged scalar. Finally, for completeness we have studied the potential effects of a relatively light singly charged gauge boson. The motivation is that it has been noticed that the existence of such a particle would have important implications for elucidating some aspects of SUSYLR . We have plotted in Figs. 5–6 the deviation from the SM cross section as caused by a singly charged gauge boson with a mass in the range 0.55–1 TeV. It can be seen that, as expected, the enhancement of the cross section is not as important as those arising from doubly charged particles. However, at energies of about 2 TeV, the signal would be more important than the one coming from a doubly charged scalar. In contrast to the situation of fermions and scalars, where the deviation from the SM cross section is most important near the threshold, the one coming from a gauge boson is larger far beyond it. Our results show that the signature of exotic particles would be distinctive enough in $`\gamma \gamma `$ scattering to provide evidence of new physics. In particular, in the context of SUSYLR, an interesting implication is that a doubly charged Higgs scalar boson or a doubly charged Higgsino with masses in the range 100-500 GeV would produce a clear signature.
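The enhancement factors quoted above ($`3^6`$ and $`3^3`$ for the doubly charged fermion, $`2^4`$ for the scalar interference) are pure $`Q^4`$ arithmetic; a minimal sketch checking them with exact rationals:

```python
from fractions import Fraction

# One-loop gamma-gamma amplitudes scale as N_c * Q^4 per species.
doubly_fermion = Fraction(2) ** 4            # Q = 2, colorless
up_quark = 3 * Fraction(2, 3) ** 4           # N_c = 3, Q = 2/3

assert doubly_fermion**2 / up_quark**2 == 3**6  # cross-section ratio, = 729
assert doubly_fermion / up_quark == 3**3        # interference ratio, = 27

# Doubly vs. singly charged scalar: interference ratio is just Q^4 = 2^4.
assert Fraction(2)**4 / Fraction(1)**4 == 16
```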
This signal would be more distinctive at LC than the one which a chargino or a sfermion with the same or even a lighter mass could provide. In addition, the existence of several doubly charged particles will spectacularly enhance the cross section, and as a result particle counting might be possible through light by light scattering. If SUSYLR is realized in nature, there would exist the possibility of low–energy remnant doubly charged Higgs bosons and doubly charged Higgsinos, as has been suggested recently in some conjectured scenarios . If this possibility turned out to be true, it is likely that this kind of particle would be discovered by direct production by the time LC becomes a reality. In this situation, $`\gamma \gamma `$ scattering might be an effective process to probe the theory with high precision. In conclusion, due to its outstanding properties, light by light scattering arises as an invaluable process to search indirectly for signals of doubly charged particles at the planned linear colliders. The most remarkable feature is that the signal arising from a certain particle depends only on its mass. As an alternative to direct production, where model–dependent parameters have to be considered, light by light scattering might offer the possibility of testing in great detail the properties of new charged particles. In particular, it would also help to elucidate some characteristics of a specific model, for instance the number of particles of a certain kind. We acknowledge support from CONACYT and SNI (México).
# Quantum Instantons and Quantum Chaos ## Abstract We suggest a closed form expression for the path integral of quantum transition amplitudes to construct a quantum action. Based on this we propose rigorous definitions of both quantum instantons and quantum chaos. As an example we compute the quantum instanton of the double well potential. 1. Introduction Instantons and chaos both play an important role in modern science. Chaos occurs in physics, chemistry, biology, physiology, meteorology, economy etc. Both concepts are defined by classical physics. On the other hand, for instanton solutions occurring in quantum field theory of gauge theories, quantum effects are very important. So far quantum effects have been computed mostly perturbatively (WKB). The chaotic behavior of quantum liquids and quantum dots requires a quantum mechanical description. However, the concept of quantum chaos has so far eluded a rigorous definition, let alone a quantitative computation. In this work we propose how to define quantum instantons and quantum chaos. As an example, we compute quantitatively the quantum instanton for a 1-dim quantum system. Chaotic phenomena were found in a number of quantum systems. For example, the hydrogen atom in a strong magnetic field shows strong irregularities in the spectrum . Irregular patterns were observed in the wave functions of the quantum mechanical model of the stadium billiard . Billiard-like boundary conditions have been realized experimentally in mesoscopic quantum systems, like quantum dots and quantum corrals, formed by atoms in semi-conductors . While classical chaos theory, using Lyapunov exponents and Poincaré sections, is based on identifying trajectories in phase space, this does not carry over to quantum mechanics, because Heisenberg’s uncertainty relation $`\mathrm{\Delta }x\mathrm{\Delta }p\ge \hbar /2`$ does not allow one to specify a point in phase space with zero error.
Thus workers have studied chaotic phenomena of a quantum system in a semi-classical regime, e.g. highly excited states of a hydrogen atom (Rydberg states). In this regime Gutzwiller’s trace formula derived from the path integral has been very useful. Also it has been quite popular to look for signatures of chaos in quantum systems which have classically chaotic counterparts. According to Bohigas’ conjecture , quantum chaos manifests itself in a characteristic spectral distribution, being of Wigner-type. The spectral distribution is a global property of the system. One may ask the question: Is it possible to obtain more specific and local information about the chaotic behavior of a quantum system? Like quantum mechanics for a metal conductor explains the existence of conducting and non-conducting bands, there may co-exist regular and chaotic domains in a quantum system (as they are known to exist in classical Hamiltonian chaos). To answer this question it is natural to look for a bridge between classical physics and quantum physics. The virtue of Gutzwiller’s trace formula is that it forms such a bridge, valid in the semi-classical regime. More generally, a bridge between classical physics and quantum physics is renormalisation and the effective action. The effective potential $`V^{eff}`$ and effective action $`\mathrm{\Gamma }`$ introduced in quantum field theory some decades ago is defined also in quantum mechanics, viewed as QFT in $`0+1`$ dimensions. $`Z[J]`$ $`=`$ $`e^{iW[J]}`$ $`{\displaystyle \frac{\delta }{\delta J(x)}}W[J]`$ $`=`$ $`<0|\varphi (x)|0>_J`$ $`\varphi _{cl}(x)`$ $`=`$ $`<0|\varphi (x)|0>_J`$ $`\mathrm{\Gamma }[\varphi _{cl}]`$ $`=`$ $`W[J]-{\displaystyle \int d^4yJ(y)\varphi _{cl}(y)}.`$ (1) Cametti et al. have computed the effective action $`\mathrm{\Gamma }`$ in quantum mechanics, using perturbation theory and the loop expansion.
They consider the Lagrangian $$L(q,\dot{q},t)=\frac{m}{2}\dot{q}^2-V(q),V(q)=\frac{m}{2}\omega ^2q^2+U(q),$$ (2) where $`U(q)`$ is, e.g., a quartic potential $`U(q)\propto q^4`$. Then the effective action takes the form $`\mathrm{\Gamma }[q]`$ $`=`$ $`{\displaystyle \int 𝑑t\left(-V^{eff}(q(t))+\frac{Z(q(t))}{2}\dot{q}^2(t)+A(q(t))\dot{q}^4(t)+B(q(t))(d^2q/dt^2)^2(t)+\dots \right)}`$ $`V^{eff}`$ $`=`$ $`{\displaystyle \frac{1}{2}}m\omega ^2q^2+U(q)+\hbar V_1^{eff}(q)+O(\hbar ^2)`$ $`Z(q)`$ $`=`$ $`m+\hbar Z_1(q)+O(\hbar ^2)`$ $`A(q)`$ $`=`$ $`\hbar A_1(q)+O(\hbar ^2)`$ $`B(q)`$ $`=`$ $`\hbar B_1(q)+O(\hbar ^2).`$ (3) There are higher loop corrections to the effective potential $`V^{eff}`$ as well as to the mass renormalisation $`Z`$. The most important property is the occurrence of higher time derivative terms. Actually, there is an infinite series of increasing order. This is due to a non-local structure coming from the expansion around a time-dependent classical path. Here comes the problem: When we want to interpret $`\mathrm{\Gamma }`$ as effective action, e.g. for the purpose of analyzing quantum chaos, the higher time derivatives require more boundary conditions than the classical action. Those boundary conditions are unknown. Moreover, analytical evaluation of the effective action to higher order in perturbation theory becomes prohibitively difficult, and finally such a perturbation expansion does not converge. This makes the effective action in its standard form a tool unsuitable for the analysis of quantum chaos. 2. Quantum action In the following we will present an alternative way to construct an action taking into account quantum corrections.
The classical trajectory from $`x_{in}`$ to $`x_{fi}`$ corresponds in quantum mechanics to the probability amplitude for the transition, given in terms of the path integral by $$G(x_{fi},t_{fi};x_{in},t_{in})=\int [dx]\mathrm{exp}[\frac{i}{\hbar }S[x]]|_{x_{in},t_{in}}^{x_{fi},t_{fi}}.$$ (4) The classical trajectory $`x_{cl}`$ is defined as extremum of the classical action $$S=\int 𝑑tL(x,\dot{x})=\int 𝑑t\left[\frac{m}{2}\dot{x}^2-V(x)\right].$$ (5) In Ref. we have proposed a conjecture, establishing a new link between classical mechanics and quantum mechanics. Conjecture: For a given classical action $`S`$ with a local interaction $`V(x)`$ there is a renormalized/quantum action $$\stackrel{~}{S}=\int 𝑑t\left[\frac{\stackrel{~}{m}}{2}\dot{x}^2-\stackrel{~}{V}(x)\right],$$ (6) such that the transition amplitude is given by $`G(x_{fi},t_{fi};x_{in},t_{in})=\stackrel{~}{Z}\mathrm{exp}\left[{\displaystyle \frac{i}{\hbar }}\stackrel{~}{S}[\stackrel{~}{x}_{class}]|_{x_{in},t_{in}}^{x_{fi},t_{fi}}\right],`$ (7) where $`\stackrel{~}{x}_{class}`$ denotes the classical path corresponding to the action $`\stackrel{~}{S}`$. The renormalized action is independent of the boundary points $`x_{in}`$, $`x_{fi}`$, but depends on the transition time $`T=t_{fi}-t_{in}`$. Because the renormalized action, describing quantum physics, has mathematically the form of a classical action, this bridges the gap between classical and quantum physics. It opens a new vista to investigate quantum phenomena, the definition of which comes from classical physics. Prominent examples are instantons and chaos. 3. Effective action vs. quantum action The effective action is defined via the vacuum-vacuum transition amplitude at infinite time. It allows one to compute the ground state energy. The effective action is a function of the classical path. The quantum action is defined as a transition amplitude from a set of initial positions to some set of final positions. It depends on some finite transition time. It does not depend on a classical path.
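For a quadratic action the path integral is Gaussian and Eq. (7) holds exactly with $`\stackrel{~}{S}=S`$, the prefactor $`\stackrel{~}{Z}`$ being the fluctuation determinant; for the Euclidean harmonic oscillator the amplitude is the standard Mehler kernel. A sketch (with $`\hbar =m=\omega =1`$, chosen here only for illustration) checking numerically that this closed form composes correctly as a transition amplitude:

```python
import math

def G_ho(x_f, x_i, T, m=1.0, w=1.0):
    # Euclidean harmonic-oscillator amplitude (Mehler kernel):
    # prefactor * exp(-S_cl), with the classical Euclidean action
    # S_cl = (m w / 2 sinh wT) [ (x_i^2 + x_f^2) cosh wT - 2 x_i x_f ].
    s, c = math.sinh(w * T), math.cosh(w * T)
    S_cl = m * w / (2.0 * s) * ((x_i**2 + x_f**2) * c - 2.0 * x_i * x_f)
    return math.sqrt(m * w / (2.0 * math.pi * s)) * math.exp(-S_cl)

# A genuine transition amplitude must compose (semigroup property):
# G(x_f, x_i; T1 + T2) = integral dy G(x_f, y; T2) G(y, x_i; T1).
x_i, x_f, T1, T2 = 0.3, -0.7, 0.8, 1.1
dy = 1e-3
integral = sum(G_ho(x_f, -8.0 + k * dy, T2) * G_ho(-8.0 + k * dy, x_i, T1)
               for k in range(16001)) * dy
assert abs(integral - G_ho(x_f, x_i, T1 + T2)) < 1e-5
```

The nontrivial content of the conjecture is that an analogous closed form, with renormalized parameters, continues to hold once the action is no longer quadratic.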
In order to compare numerically the effective action with the quantum action, we have considered a harmonic oscillator with a weak anharmonic perturbation, $$S=\int 𝑑t\frac{m}{2}\dot{x}^2+v_2x^2+\lambda v_4x^4,\hbar =m=v_2=m\omega ^2/2=v_4=1.$$ (8) We have varied the parameter $`\lambda `$ in the range $`0\le \lambda \le 0.1`$ (weak perturbation). The effective action to one loop order and up to 2nd order of $`\lambda `$ yields $`v_2^{eff}`$ $`=`$ $`v_2+\delta v_2,\delta v_2={\displaystyle \frac{3}{m\omega }}\lambda `$ $`v_4^{eff}`$ $`=`$ $`v_4+\delta v_4,\delta v_4={\displaystyle \frac{9}{m\omega ^2}}\lambda ^2.`$ (9) The comparison with the quantum action, computed numerically for transition time $`T=4`$, is shown in Fig.. One observes that the effective potential and the potential of the quantum action are close for small $`\lambda `$. At $`\lambda \approx 0.06`$, one observes a discrepancy, which shows the onset of non-perturbative effects. We would like to point out that the quantum action in imaginary time can be interpreted as a quantum action at finite temperature. According to the laws of quantum mechanics and thermodynamical equilibrium, the expectation value of some observable $`O`$, like, e.g., the average energy, is given by $$<O>=\frac{Tr\left[O\mathrm{exp}[-\beta H]\right]}{Tr\left[\mathrm{exp}[-\beta H]\right]}$$ (10) where $`\beta `$ is related to the temperature $`\tau `$ by $`\beta =1/(k_B\tau )`$. On the other hand the (Euclidean) transition amplitude is given by $$G(x_{fi},T;x_{in},0)=<x_{fi}|\mathrm{exp}[-HT/\hbar ]|x_{in}>$$ (11) Thus from the definition of the quantum action, Eqs.(6,7), one obtains $$<O>=\frac{\int _{-\infty }^{+\infty }𝑑x\int _{-\infty }^{+\infty }𝑑y<x|O|y>\mathrm{exp}[-\stackrel{~}{S}_\beta |_{x,0}^{y,\beta }]}{\int _{-\infty }^{+\infty }𝑑x\mathrm{exp}[-\stackrel{~}{S}_\beta |_{x,0}^{x,\beta }]},$$ (12) if we identify $`\beta =\frac{1}{k_B\tau }=T/\hbar `$.
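This identification can be illustrated on a system whose spectrum is known in closed form; a sketch (harmonic oscillator with $`\omega =\hbar =k_B=1`$, an assumption made purely for this example) comparing the trace formula (10) with the exact thermal energy $`(\omega /2)\mathrm{coth}(\beta \omega /2)`$:

```python
import math

def thermal_average_energy(beta, w=1.0, nmax=400):
    # <E> = Tr[H exp(-beta H)] / Tr[exp(-beta H)], with E_n = (n + 1/2) w
    energies = [(n + 0.5) * w for n in range(nmax)]
    weights = [math.exp(-beta * E) for E in energies]
    return sum(E * p for E, p in zip(energies, weights)) / sum(weights)

for beta in (0.5, 1.0, 4.0):
    exact = 0.5 / math.tanh(0.5 * beta)  # (w/2) coth(beta w / 2), w = 1
    assert abs(thermal_average_energy(beta) - exact) < 1e-10
```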
As a result, the quantum action $`\stackrel{~}{S}_\beta `$ computed from transition time $`T`$, describes equilibrium thermodynamics at $`\beta =T/\hbar `$, i.e. temperature $`\tau =1/(k_B\beta )`$. Consequently, the quantum action for some finite transition time can be used for the study of quantum instantons and quantum chaos at finite temperature.

Tab.1. Double well potential $`V(x)=\frac{1}{2}(x^2-1)^2`$. $`T=0.5`$.

|  | $`\stackrel{~}{m}`$ | $`\stackrel{~}{v_0}`$ | $`\stackrel{~}{v_1}`$ | $`\stackrel{~}{v_2}`$ | $`\stackrel{~}{v_3}`$ | $`\stackrel{~}{v_4}`$ | interval |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fit MC | 0.9959(1) | 1.5701(54) | 0.000(2) | -0.739(5) | 0.000(2) | 0.487(4) | \[-1.2,+1.2\] |
| Fit MC | 0.9961(2) | 1.5714(17) | 0.000(2) | -0.747(10) | 0.000(2) | 0.489(7) | \[-1.4,+1.4\] |
| Fit MC | 0.9961(1) | 1.5732(11) | 0.000(3) | -0.760(5) | 0.000(3) | 0.499(3) | \[-1.6,+1.6\] |
| Fit MC | 0.9959(1) | 1.5713(10) | 0.000(2) | -0.747(4) | 0.000(2) | 0.495(2) | \[-1.8,+1.8\] |
| Fit MC | 0.9959(1) | 1.5744(11) | 0.000(3) | -0.754(4) | 0.000(2) | 0.498(2) | \[-2.0,+2.0\] |
| Fit MC | 0.9959(2) | 1.5694(19) | 0.000(2) | -0.739(6) | 0.000(2) | 0.492(3) | \[-2.2,+2.2\] |
| Fit MC | 0.9964(2) | 1.5718(16) | 0.000(3) | -0.745(6) | 0.000(2) | 0.491(2) | \[-2.4,+2.4\] |
| Fit MC | 0.9962(8) | 1.5685(18) | 0.000(2) | -0.740(7) | 0.000(1) | 0.492(3) | \[-2.6,+2.6\] |
| Fit MC | 0.9963(2) | 1.5731(7) | 0.000(0) | -0.742(2) | 0.000(2) | 0.492(1) | \[-2.8,+2.8\] |
| Fit MC | 0.9966(3) | 1.5695(2) | 0.000(4) | -0.744(8) | 0.000(3) | 0.492(2) | \[-3.0,+3.0\] |
| Average | 0.9961(2) | 1.5710(17) | 0.000(2) | -0.745(6) | 0.000(2) | 0.493(3) |  |

4. Quantum Instantons Instantons are classical solutions of field theories in imaginary time (Euclidean field theory) . Instantons are believed to play an important role in gauge theories. They are the tunneling solutions between the $`\theta `$-vacua . ’t Hooft has solved the $`U_A(1)`$ problem , showing that due to instanton effects, the Goldstone boson is not a physical particle in this case. In $`QCD`$ instantons may be responsible for quark confinement.
It has recently been predicted by Shuryak that $`QCD`$ at high temperature and density not only displays a transition to the quark-gluon plasma, but has a much richer phase structure, due to strong instanton effects. Instantons in gauge theories can be characterized by topological quantum numbers. Tunneling and instantons play an important role also in the inflationary scenario of the early universe. During inflation, quantum fluctuations of the primordial field expand exponentially and terminate their evolution as a classical field. Its deviation from its average value is of the order of the size of the horizon . The evolution of those classical fluctuations eventually leads to the formation of galaxies . There are theoretical contradictions in scenarios where inflation terminates by a first order phase transition . Here we suggest that besides the classical instanton solution also its quantum descendent can be precisely defined and quantitatively computed. We want to discuss this matter in the context of quantum mechanics. We consider in 1-D a particle of mass $`m`$ interacting with a quartic potential, which is symmetric under parity and displays two minima of equal depth (double well potential). This is an analogue of ”degenerate vacua” occurring in field theory. The potential $$V(x)=A^2(x^2-a^2)^2,$$ (13) has minima located at $`x=\pm a`$. In particular, we have chosen the potential $`V(x)=\frac{1}{2}(x^2-1)^2`$ ($`A=1/\sqrt{2}`$, $`a=1`$), and $`m=1`$, $`\hbar =1`$. The double well potential has a classical instanton solution. It is obtained by solving the Euler-Lagrange equations of motion of the Euclidean classical action, with the initial conditions $`x(t=-\infty )=-a`$, $`\dot{x}(t=-\infty )=0`$, $`x_{inst}^{cl}(t)=a\text{tanh}[\sqrt{2/m}Aat].`$ (14) The classical instanton goes from $`x(t=-\infty )=-a`$ to $`x(t=+\infty )=+a`$ (see Fig.). What is the problem with the quantum instanton?
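Before turning to that question, one can verify numerically that Eq. (14) solves the Euclidean equation of motion $`m\ddot{x}=V^{\prime }(x)`$ (motion in the inverted potential); a sketch with the values used in the text ($`m=1`$, $`A=1/\sqrt{2}`$, $`a=1`$):

```python
import math

m, A, a = 1.0, 1.0 / math.sqrt(2.0), 1.0
c = math.sqrt(2.0 / m) * A * a       # width parameter in Eq. (14)

def x_inst(t):
    return a * math.tanh(c * t)      # classical instanton, Eq. (14)

def dV(x):
    return 4.0 * A**2 * x * (x**2 - a**2)  # V'(x) for V = A^2 (x^2 - a^2)^2

# Euclidean equation of motion: m x'' = V'(x); check by central differences.
h = 1e-4
for t in (-2.0, -0.5, 0.0, 0.7, 1.5):
    xddot = (x_inst(t + h) - 2.0 * x_inst(t) + x_inst(t - h)) / h**2
    assert abs(m * xddot - dV(x_inst(t))) < 1e-5
```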
Due to Heisenberg’s uncertainty relation, there is no quantum solution corresponding to the initial conditions of the classical instanton (specifying initial position and momentum with zero uncertainty). The same problem is the reason why one cannot define Lyapunov exponents in quantum chaos, and hence cannot apply the concepts of classical chaos. In the following we will define the quantum instanton via the renormalized action, for which we make the following ansatz, $$\stackrel{~}{S}=\int 𝑑t\left[\frac{1}{2}\stackrel{~}{m}\dot{x}^2-\sum _{k=0}^{4}\stackrel{~}{v}_kx^k\right].$$ (15) Note that one expects higher terms to occur in the renormalized potential. In a first step, we have restricted the search for the renormalized potential parameters to fourth order polynomials, for the sake of numerical simplicity. The term $`\stackrel{~}{v}_0`$ represents the constant term $`\stackrel{~}{Z}`$ in Eq.(7). We have worked in imaginary (Euclidean) time. We have computed the transition matrix element $`G`$, Eq.(7), by using the Monte Carlo method and, as an alternative, from the spectral decomposition by solving the Schrödinger equation for the lowest 7 and 30 states, respectively. The parameters $`\stackrel{~}{m}`$ and $`\stackrel{~}{v}_k`$ have been obtained by minimizing the difference between the l.h.s. and the r.h.s. of Eq.(7), for a given time interval $`T=t_{fi}-t_{in}`$ and different combinations of boundary points $`x_{fi}`$, $`x_{in}`$. For more details about the numerical method see Ref.. Results of the renormalized action at $`T=0.5`$ from Monte Carlo are presented in Tab. 1. The renormalized parameters appear to be reasonably independent of the position of the boundary points, distributed over the interval $`[-a,+a]`$. The data correspond to $`J=6`$ initial and final boundary points. The renormalized action parameters as a function of $`T`$ are shown in Fig.\[2a,b\].
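The spectral route mentioned above requires the low-lying spectrum of the double well; a finite-difference sketch (grid size and box length are arbitrary choices) for $`V(x)=\frac{1}{2}(x^2-1)^2`$ with $`m=\hbar =1`$, whose lowest eigenvalue can be compared against the ground state energy $`E_{gr}=0.568893`$ used in the text:

```python
import numpy as np

# Finite-difference diagonalization of H = -1/2 d^2/dx^2 + V(x)
# for the double well V(x) = (1/2)(x^2 - 1)^2, i.e. Eq. (13) with
# A = 1/sqrt(2), a = 1.  Grid parameters below are illustrative only.
N, L = 1200, 5.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
V = 0.5 * (x**2 - 1.0) ** 2

H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

E = np.linalg.eigvalsh(H)[:7]   # lowest 7 states, as used in the text
E_gr = E[0]                     # to be compared with E_gr = 0.568893
```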
Fig.\[2a\] shows the mass $`\stackrel{~}{m}`$ and the constant term $`\stackrel{~}{v}_0`$ of the potential as a function of $`T`$, while Fig.\[2b\] shows the quadratic term $`\stackrel{~}{v}_2`$ and the quartic term $`\stackrel{~}{v}_4`$. The data in Fig. can also be interpreted as parameters of the quantum action versus finite temperature $`\tau `$ ($`\tau =\hbar /(k_BT)`$), and $`T\to \infty `$ corresponds to the zero-temperature limit. We have introduced the time scale $`T_{sc}=\hbar /E_{gr}`$, via the ground state energy $`E_{gr}=0.568893`$, giving $`T_{sc}=1.75779`$. Over a wide range of $`T`$ values, one observes that $`\stackrel{~}{v}_0`$ can be fitted by $`\stackrel{~}{v}_0\approx A+B/T`$. For $`T\to \infty `$ one finds the asymptotic behavior $`\stackrel{~}{v}_0\to 0.5677\pm 0.015`$, compatible with $`\stackrel{~}{v}_0=E_{gr}`$. The mass $`\stackrel{~}{m}`$ changes little in the regime $`0<T<T_{sc}`$, it undergoes a drastic change between $`T_{sc}<T<7`$, and stabilizes for $`T>7`$. The upper part of the error bars indicates the error of the fit, while the lower part indicates the $`\sigma `$ of the estimator under variation of intervals of boundary points. It turns out that both errors are quite small for $`\stackrel{~}{v}_0`$. One observes asymptotic convergence of the renormalized parameters when $`T/T_{sc}`$ becomes large. On the other hand, the region of large $`T`$ is also the regime of the instanton (which goes from $`t=-\infty `$ to $`t=+\infty `$). Thus we suggest the following Definition of the quantum instanton: The quantum instanton is the classical instanton solution of the renormalized action in the regime of asymptotic convergence in time $`T`$ (zero temperature). The quantum instanton at finite temperature $`\tau `$ is the classical instanton solution of the renormalized action at the corresponding finite transition time $`T`$.
For example, at $`T=0.5`$ we find the following parameters $`\stackrel{~}{m}=0.9961(2)`$, $`\stackrel{~}{v}_0=1.5710(17)`$, $`\stackrel{~}{v}_1=0.000(2)`$, $`\stackrel{~}{v}_2=-0.745(6)`$, $`\stackrel{~}{v}_3=0.000(2)`$, $`\stackrel{~}{v}_4=0.493(3)`$. By adding a constant, the renormalized potential can be expressed as $$\stackrel{~}{V}(x)=\stackrel{~}{A}^2(x^2-\stackrel{~}{a}^2)^2,$$ (16) where $`\stackrel{~}{A}^2=\stackrel{~}{v}_4`$ and $`\stackrel{~}{a}^2=-\stackrel{~}{v}_2/(2\stackrel{~}{v}_4)`$, giving $`\stackrel{~}{A}=0.702(2)`$ and $`\stackrel{~}{a}=0.869(6)`$. The minima of the potential $`\stackrel{~}{V}`$ are located at $`\pm \stackrel{~}{a}`$. Thus, like the classical action, also the renormalized action at $`T=0.5`$ displays ”degenerate vacua”. Hence it has an instanton solution, corresponding to $`T=0.5`$, given by $$x_{inst}^{T=0.5}(t)=\stackrel{~}{a}\text{tanh}[\sqrt{2/\stackrel{~}{m}}\stackrel{~}{A}\stackrel{~}{a}t]\approx 0.869\text{tanh}[0.865t].$$ (17) Similarly, we find an instanton solution for any larger value of $`T`$. The quantum instanton is obtained in the asymptotic limit $`T\to \infty `$. The evolution of the instantons under variation of $`T`$, i.e. under variation of the temperature, is depicted in Fig.. It shows the transition from the classical instanton (at infinite temperature) to the quantum instanton (at zero temperature). One observes asymptotic convergence for $`T>7`$ and a drastic change between the classical instanton ($`T=0`$) and the quantum instanton ($`T\ge 9`$). The corresponding instanton action is shown in Fig.. While the action of the classical instanton is of order $`\hbar =1`$, the action of the quantum instanton is smaller by one order of magnitude. This is due to a lower barrier of the potential in the quantum action. 5. New Vista to Quantum Chaos For classical chaos, we have today precise mathematical definitions of the concepts, many experimental observations and high precision computer simulations.
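As a side note to the preceding section, the numbers entering Eq. (17) follow directly from the fitted parameters of Tab. 1; a quick check:

```python
import math

# Fitted quantum-action parameters at T = 0.5 (Tab. 1 / text)
m_t, v2_t, v4_t = 0.9961, -0.745, 0.493

A_t = math.sqrt(v4_t)                  # A~ = sqrt(v~4)
a_t = math.sqrt(-v2_t / (2.0 * v4_t))  # a~ = sqrt(-v~2 / (2 v~4))
width = math.sqrt(2.0 / m_t) * A_t * a_t

assert abs(A_t - 0.702) < 1e-3    # A~ = 0.702(2)
assert abs(a_t - 0.869) < 1e-3    # a~ = 0.869(6)
assert abs(width - 0.865) < 1e-3  # coefficient in Eq. (17)
```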
However, the physical understanding of its quantum correspondence, the so-called quantum chaos, has not matured to the same degree. When we speak of quantum chaos we mean the quantum descendent of a system which is classically chaotic. An example is the kicked rotor. In quantum mechanics it corresponds to an atom interacting with a pulsed laser. Chaos has also been investigated in scattering systems . The signals of quantum chaos are less clear in such situations. For quantum systems one has neither been able to define nor to compute quantum Lyapunov exponents, quantum Poincaré sections, quantum Kolmogorov-Sinai entropy etc. Let us consider as an example a Hamiltonian system in D=2 dimensions, suggested by Henon and Heiles in the context of planetary motion. Its classical action is given by $$S=\int dt\frac{1}{2}(\dot{x}^2+\dot{y}^2)-\frac{1}{2}(x^2+y^2+2x^2y-\frac{2}{3}y^3).$$ (18) According to the conjecture, there is a renormalized action $`\stackrel{~}{S}`$ and all quantum transition amplitudes can be written as a sum over the classical paths corresponding to the renormalized action. Then one can apply the tools of classical chaos theory to the action $`\stackrel{~}{S}`$ in order to decide if $`\stackrel{~}{S}`$ produces classical chaos. On the other hand, $`\stackrel{~}{S}`$ describes quantum physics. Thus we suggest the following criterion to decide whether the Henon-Heiles system is quantum mechanically chaotic: Definition of quantum chaos: Consider a classical system, described by a classical action $`S`$. Its corresponding quantum mechanical system is said to display quantum chaos (at zero temperature), if the renormalized action $`\stackrel{~}{S}`$ in the regime of asymptotic convergence in time $`T`$ displays classical chaos. Likewise, we speak of quantum chaos at finite temperature $`\tau `$, if the renormalized action $`\stackrel{~}{S}`$ at the corresponding finite transition time $`T`$ is classically chaotic. 6. 
Concluding Discussion What is the physical interpretation of the quantum action and its corresponding trajectory? First, in the path integral based on the classical action, quantum physics is due to a sum over histories (paths) including the classical path, but mostly paths different from the classical one. In the quantum action, by construction, there is only one path, which is a classical path of the quantum action. Quantum physics effects are captured in the coefficients of the new action. What is the interpretation of the trajectory of the quantum action? Is there a particle following this trajectory? We have introduced an effective (quasi) particle. Its behavior differs from that of a quantum particle. E.g., we have seen that it has a different mass. Moreover, its behavior between initial and final points of propagation is different from that of a quantum mechanical particle: It follows a smooth trajectory, while a quantum particle on average performs a zig-zag motion (a fractal curve of Hausdorff dimension $`d_H=2`$). However, like the quantum mechanical particle, the effective particle leaves from and arrives at the same boundary points, with the same probability amplitude! The concept of effective particles or quasi particles is well known in physics, e.g., describing collective degrees of freedom in nuclei or Cooper pairs in the BCS theory of superconductivity. Acknowledgements H.K. and K.J.M.M. have been supported by NSERC Canada. X.Q.L. has been supported by NSF for Distinguished Young Scientists of China, by Guangdong Provincial NSF and by the Ministry of Education of China. H.K. is grateful for discussions with L.S. Schulman. Figure Captions Quantum corrections in action parameters for a weakly anharmonic perturbed harmonic oscillator, Eq.(8). Comparison of perturbative prediction from effective action versus quantum action. Double well potential. 
(a) Parameters $`\stackrel{~}{m}`$, $`\stackrel{~}{v}_0`$, and (b) parameters $`\stackrel{~}{v}_2`$ and $`\stackrel{~}{v}_4`$ of quantum action versus transition time $`T`$ (inverse temperature $`\beta `$). Quantum instanton solution of quantum action corresponding to transition time $`T`$ (inverse temperature $`\beta `$). Value of quantum action of quantum instanton solution corresponding to transition time $`T`$ (inverse temperature $`\beta `$).
no-problem/9910/quant-ph9910046.html
ar5iv
text
# Quantum information processing in localized modes of light within a photonic band-gap material \[ ## Abstract The single photon occupation of a localized field mode within an engineered network of defects in a photonic band-gap (PBG) material is proposed as a unit of quantum information (qubit). Qubit operations are mediated by optically-excited atoms interacting with these localized states of light as the atoms traverse the connected void network of the PBG structure. We describe conditions under which this system can have independent qubits with controllable interactions and very low decoherence, as required for quantum computation. \] The field of quantum information has experienced an explosion of interest due in large part to the creation of quantum algorithms that are far more efficient at solving certain computational problems than their classical counterparts . However, despite the development of error correction protocols designed to preserve a desired quantum state and several promising proposals for the implementation of quantum information processing , experimental progress to date has been limited to systems of only 2 or 3 quantum bits (qubits). This is due to the difficulties associated with the precise preparation and manipulation of a quantum state, as well as decoherence; that is, the degradation of a quantum state brought about by its inevitable coupling to the degrees of freedom available in its environment. In this Letter, we propose a scheme for quantum information processing in which our qubit is the single photon occupation of a localized defect mode in a three–dimensional photonic crystal exhibiting a full photonic band-gap (PBG). The occupation and entanglement of multiple qubits is mediated by the interaction between these localized states and an atom with a radiative transition that is nearly resonant with the localized modes. 
The atom passes between defects through a matter–waveguide channel in the extensive void network of the PBG material. We argue that such materials may provide independent qubit states, low error qubit (quantum gate) operations, a long decoherence time relative to the time for a gate operation, the potential for scalability to a large number of qubits, and considerable flexibility in the means of controlling the quantum state of the system. PBG materials have themselves been an area of intensive research over the past decade . These are highly porous, three–dimensionally periodic materials of high refractive index with pore periodicity on the length scale of the relevant wavelength of light. The combination of Bragg scattering from the dielectric backbone and Mie scattering resonances of individual voids leads to the complete exclusion of electromagnetic modes over a continuous range of frequencies. A PBG material amenable to fabrication at microwave and optical wavelengths is the stacked wafer or “woodpile” structure, constructed in analogy with the arrangement of atoms in crystalline silicon . An optical PBG material is the inverse opal structure , which is a self–organizing FCC array of connected air spheres in a high index material such as Si, Ge, or GaP (Fig. 1). The void regions in both of these structures allow line of sight propagation of an atomic beam through the crystal. For example, the Si inverse opal consists of approximately $`75\%`$ connected void regions, can be readily grown on the scale of hundreds of lattice constants, and can exhibit a PBG spanning $`10\%`$ of the gap center frequency. An atom with a transition in a PBG will be unable to spontaneously emit a photon; instead, a long–lived photon–atom bound state is formed . By introducing isolated voids that are larger (air defect) or smaller (dielectric defect) than the rest of the array, strongly localized single mode states of light can be engineered within the otherwise optically empty PBG . 
For isolated defect modes in a high quality dielectric material, Q-factors of $`10^{10}`$ or higher should be attainable (see below). Excited atoms passing through or near a defect can exchange energy coherently with this localized state, thus preparing the state of our qubit. Furthermore, such “point defects” in PBG materials can give rise to modal confinement to within the wavelength of the mode , giving an enhancement of the cavity Rabi frequency of $`550`$ times over conventional microcavities. These facts make localized modes in PBG materials excellent candidates for the strong coupling regime of cavity quantum electrodynamics (CQED) . Along with the point defects described above, line defects (waveguides) can be engineered within a PBG material. This can be accomplished by modifying the initial templating mold of a single inverted opal crystal , or by removing or modifying selected dielectric rods in a woodpile crystal. Such extended defects can be used to inject a coherent light field into the PBG of the system, thereby controllably altering the Bloch vector of a two–level atom that passes through the illuminated region without the generation of unwanted quantum correlations in the system. The information thereby input to the system can then be transferred from the atom to the localized modes. Our qubits are protected from the narrowly confined externally injected fields by the surrounding dielectric lattice. Our basic scheme for the entanglement of localized modes is shown in Fig. 1. A two–level atom of excited state $`|e`$ and ground state $`|g`$ is prepared in an initial state $`|\psi =c_e|e+c_g|g`$ in an illuminated line defect as it passes through the crystal with velocity $`v`$ (path A). It then successively interacts with localized states $`p`$, $`q`$, and $`p^{}`$. The state of the atom may be further modified in mid-flight as it passes though additional line defects. 
Due to the absence of spontaneous emission in a PBG, the coherence of the atom mediating the entanglement may be maintained over many defect spacings. This allows for the entanglement of more than two defects or of pairs of more distant defects with lower gate error than is possible with conventional microcavity arrays. The lack of spontaneous emission may also enable the use of atomic or molecular excitations that would be too short–lived for use in conventional CQED. Measurements of the states of defect modes can be made via ionization measurements of the state of a probe atom after it has interacted with individual defects (e.g. path B). In principle, our proposal applies to systems with atomic transitions in the microwave or in the optical/near-IR. However, to perform a precise sequence of many gate operations will require that single atoms of known velocities be sent through the crystal with known trajectories at well–defined times. In a microwave PBG material, the void channels are sufficiently large ($`3\mathrm{m}\mathrm{m}`$) that experiments involving small numbers of qubits can be performed with atoms velocity selected from a thermal source which initially emits atoms with a Poissonian velocity distribution . The outcome of these simple quantum algorithms can then be evaluated by statistical measures on an ensemble of appropriately prepared atoms passing through the defect network. Because optical PBG materials have micron–sized void channels, atomic waveguiding may be necessary in order to prevent the van der Waals adhesion of the atoms to the dielectric surfaces of the crystal. To this end, a field mode excited from the lower photonic band edge resides almost exclusively in the dielectric fraction of the crystal, producing an evanescent field at the void–dielectric interface. This field acts as a repulsive atomic potential if blue–shifted from an atomic transition . 
Similarly, one may guide atoms with such evanescent waves through engineered line defects which support only field modes far detuned from the qubit atomic transition, in a manner analogous to atomic waveguiding in hollow optical fibers . Using such techniques, optical CQED and atom interferometry experiments using thermally excited atoms or atoms dropped into a PBG material from a magneto–optical trap should currently be possible. We note that long–lived localized modes may also be used to entangle the electronic states of ions in ion traps . Unlike with atomic beams, trapped ions may be precisely manipulated into and out of a defect mode in a PBG material. This may make the ionic system more amenable to large scale implementations of quantum information. We first consider the dynamics of a two–level atom passing through a point defect. In the dipole and rotating wave approximations, the evolution of an atom at position $`𝐫`$ in a localized defect mode is given by a position dependent Jaynes–Cummings Hamiltonian , $$H(𝐫)=\frac{\hbar \omega _a}{2}\sigma _z+\hbar \omega _da^{\dagger }a+\hbar G(𝐫)\left(a\sigma _++a^{\dagger }\sigma _{-}\right),$$ (1) where $`\sigma _z`$, $`\sigma _\pm =\sigma _x\pm i\sigma _y`$ are the usual Pauli spin matrices for a two–level atom with transition frequency $`\omega _a`$, and $`a`$, $`a^{\dagger }`$ are respectively the annihilation and creation operators for a photon in a defect mode of frequency $`\omega _d`$. The atom–field coupling strength may be expressed as $`G(𝐫)=\mathrm{\Omega }_0\left(\widehat{𝐝}_{21}\cdot \widehat{𝐞}(𝐫)\right)f(𝐫)`$ , where $`\mathrm{\Omega }_0`$ is the peak atomic Rabi frequency over the defect mode, $`\widehat{𝐝}_{21}`$ is the orientation of the atomic dipole moment, and $`\widehat{𝐞}(𝐫)`$ is the direction of the electric field vector at the position of the atom. In general, the three-dimensional mode structure will be a complicated function of the size and shape of the defect . 
However, we need only consider the one–dimensional mode profile that intersects the atom’s linear path, i.e. $`G(𝐫)\rightarrow G(r)`$. The profile $`f(r)`$ will have an exponential envelope centered about the point in the atom’s trajectory that is nearest to the center of the defect mode, $`r_0`$. Within this envelope, the field intensity will oscillate sinusoidally, and for fixed dipole orientation, variations in the relative orientation of the dipole and the electric field will also give a sinusoidal contribution . For a PBG material with lattice constant $`a`$, we can thus set $`\widehat{𝐝}_{21}\cdot \widehat{𝐞}(r)=1`$ and write $`f(r)=e^{-\left|r-r_0\right|/R_{\mathrm{def}}}\mathrm{cos}\left[\frac{\pi }{a}(r-r_0)+\varphi \right]`$ (see Fig. 2). Because we want to transfer energy between the atom and the localized mode, we wish to use modes that are highly symmetric about the atom’s path. We therefore set $`\varphi =0`$. $`R_{\mathrm{def}}`$ defines the spatial extent of the mode, and is at most a few lattice constants for a strongly confined mode deep in a PBG . The atom–field state function after an initially excited atom has passed through a defect can be written in the form $$|\mathrm{\Psi }_f=u|e,0+w|g,1.$$ (2) $`u`$ and $`w`$ are obtained by replacing $`r-r_0`$ by $`vt-b`$ in the Hamiltonian (1) and integrating the corresponding Schrödinger equation from $`t=0`$ to $`2b/v`$. $`b`$ is chosen so that the interaction is negligible at $`t=0`$; we set $`b=10R_{\mathrm{def}}`$. Fig. 2 plots $`\left|u\right|^2`$ for the $`\lambda =780\mathrm{n}\mathrm{m}`$ transition of an initially excited Rb atom traveling through a defect in an optical PBG material (e.g., GaP) at thermally–accessible velocities ($`v\sim 100`$$`600\mathrm{m}/\mathrm{s}`$) for various detunings of the atomic transition frequency from a defect resonance, $`\delta \equiv \omega _a-\omega _d`$. Such a detuning can be achieved by applying an external field to Stark shift the atomic transition as the atom passes through selected defects. 
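The integration described here is straightforward to sketch numerically. The following is not the authors' code: it propagates the one-excitation amplitudes $`u`$, $`w`$ of Eq. (2) with a fourth-order Runge–Kutta step for a coupling profile of the stated form, $`G(t)=\mathrm{\Omega }_0e^{-|vt-b|/R_{\mathrm{def}}}\mathrm{cos}[\pi (vt-b)/a]`$; all parameter values are illustrative placeholders in dimensionless units ($`\hbar =1`$).

```python
import math

def G(t, Om0=1.0, v=1.0, b=10.0, Rdef=1.0, a=0.5):
    # coupling along the atom's path, with r - r0 replaced by v*t - b
    s = v * t - b
    return Om0 * math.exp(-abs(s) / Rdef) * math.cos(math.pi * s / a)

def evolve(delta=0.0, dt=1e-3, T=20.0):
    """RK4 integration of the rotating-frame amplitudes (hbar = 1):
       i u' = +(delta/2) u + G(t) w,   i w' = -(delta/2) w + G(t) u."""
    u, w = 1.0 + 0.0j, 0.0 + 0.0j     # atom initially excited, mode empty
    def rhs(t, u, w):
        g = G(t)
        return (-1j * (0.5 * delta * u + g * w),
                -1j * (-0.5 * delta * w + g * u))
    t, n = 0.0, int(T / dt)
    for _ in range(n):
        k1u, k1w = rhs(t, u, w)
        k2u, k2w = rhs(t + dt/2, u + dt/2*k1u, w + dt/2*k1w)
        k3u, k3w = rhs(t + dt/2, u + dt/2*k2u, w + dt/2*k2w)
        k4u, k4w = rhs(t + dt, u + dt*k3u, w + dt*k3w)
        u += dt/6 * (k1u + 2*k2u + 2*k3u + k4u)
        w += dt/6 * (k1w + 2*k2w + 2*k3w + k4w)
        t += dt
    return u, w

# integrate over t in [0, 2b/v], as in the text; check unitarity
u, w = evolve(delta=0.0)
assert abs(abs(u)**2 + abs(w)**2 - 1.0) < 1e-8
```

With $`b=10R_{\mathrm{def}}`$ and $`T=2b/v`$ this matches the integration window used in the text; scanning `delta` and `v` reproduces the qualitative sensitivity of the final inversion discussed above.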
The final state of the atom is seen to be a sensitive function of $`\delta `$ and $`v`$. Similar velocity–dependent atomic inversions are obtained for the above system with the atom in free fall ($`v\sim .1\mathrm{m}/\mathrm{s}`$), and for the $`5.9\mathrm{mm}`$ Rydberg transition of Rb using a thermal beam of atoms passing through an appropriate microwave PBG material. At thermal velocities, the inversion can be finely tuned within the $`\sim 1\mathrm{m}/\mathrm{s}`$ resolution of thermally generated atomic beams. To show that our system is capable of encoding quantum algorithms for realistic values of system parameters, we demonstrate the viability of producing a maximally entangled state of two defect modes after an atom prepared in its excited state has passed through both defects. The final state of this system after the two defect entanglement can be written as $$|\mathrm{\Psi }=\alpha |g,1,0+\beta |g,0,1+\gamma |e,0,0,$$ (3) where $`|i,j,k`$ refers to the state of the atom and the photon occupation number of the first and second defect modes respectively. $`\alpha `$, $`\beta `$ and $`\gamma `$ can be expressed in terms of the probability amplitudes for an atom that has passed through only defect 1 or 2; in the notation of Eq. (2), $`\alpha =w(1)`$, $`\beta =u(1)w(2)`$, and $`\gamma =u(1)u(2)`$. Maximal defect entanglement is obtained for $`\left|\alpha \right|=\left|\beta \right|=1/\sqrt{2}`$, $`\left|\gamma \right|=0`$, which leaves the atom disentangled from the defect modes. As an example, using the system of Fig. 2 with $`v=278\mathrm{m}/\mathrm{s}`$ and with the atom on resonance with the first defect, maximally entangled states are obtained for atomic detunings of $`\delta _2/\mathrm{\Omega }_0=.07`$, $`.13`$ and $`.31`$ from the second defect mode. As discussed, by passing a 2–level atom through an externally illuminated line defect, the atom’s Bloch vector can be initialized or modified in–flight without becoming correlated with the state of a defect mode . 
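The composition rule for $`\alpha `$, $`\beta `$, $`\gamma `$ above can be checked directly; in this sketch the single-pass amplitudes $`(u_i,w_i)`$ are illustrative placeholders (unitary by construction), not solutions of the Schrödinger equation:

```python
import math

def compose(u1, w1, u2, w2):
    """Amplitudes of Eq. (3): alpha = w(1), beta = u(1)w(2), gamma = u(1)u(2)."""
    return w1, u1 * w2, u1 * u2

# placeholder single-pass amplitudes, each with |u|^2 + |w|^2 = 1
u1, w1 = 1 / math.sqrt(2), 1j / math.sqrt(2)   # half transfer at defect 1
u2, w2 = 0.0, 1.0                              # full transfer at defect 2
alpha, beta, gamma = compose(u1, w1, u2, w2)

norm = abs(alpha)**2 + abs(beta)**2 + abs(gamma)**2
assert abs(norm - 1.0) < 1e-12                 # composition preserves norm
# maximal defect entanglement: |alpha| = |beta| = 1/sqrt(2), gamma = 0
assert abs(abs(alpha) - 1/math.sqrt(2)) < 1e-12
assert abs(abs(beta) - 1/math.sqrt(2)) < 1e-12
assert abs(gamma) < 1e-12
```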
For simplicity, we assume that an atom sees a uniform mode profile as it crosses such an optical waveguide mode of width $`2\lambda `$. For an injected single–mode field resonant with the atomic transition, the Bloch vector will then rotate at the semi–classical Rabi frequency, $`\mathrm{\Omega }_R\equiv 𝐝_{21}\cdot 𝐄/2\hbar `$, where $`𝐄`$ is the amplitude of the applied electric field . At a thermal velocity of $`100\mathrm{m}/\mathrm{s}`$, the minimum field strength, $`E`$, required to fully rotate the Bloch vector for the $`\lambda =5.9\mathrm{mm}`$ transition in Rb ($`d_{21}=2.0\times 10^{-26}\mathrm{C}\mathrm{m}`$) is $`E\sim 1\mathrm{m}\mathrm{V}/\mathrm{m}`$. For the $`\lambda =780\mathrm{n}\mathrm{m}`$ optical transition of Rb ($`d_{21}=1.0\times 10^{-29}\mathrm{C}\mathrm{m}`$), $`E\sim 9\mathrm{k}\mathrm{V}/\mathrm{m}`$ at $`v=100\mathrm{m}/\mathrm{s}`$, whereas in free fall ($`v=.3\mathrm{m}/\mathrm{s}`$), $`E\sim 30\mathrm{V}/\mathrm{m}`$. The weak RF field required in the microwave system implies that such experiments must be conducted at low temperatures ($`\lesssim .6\mathrm{K}`$) in order to prevent the modification of the atomic state by thermal photons. In the optical, the required field strengths are attainable using a cw laser whose output is coupled into waveguide channels of $`\lambda ^2`$ cross section, and are well below the ionization field strengths of both the messenger atoms and the PBG material. For quantum computation, a universal 2 qubit gate can be constructed in analogy with conventional CQED . We outline the construction of the associated controlled-NOT (CNOT) gate, which flips the occupation state of a target defect $`p`$ only if a second control defect $`q`$ is occupied (Fig. 1b). The state of defect $`p`$ is first transferred to an incident atom via a near–resonant atom–defect interaction, such that $`(\alpha |1_p+\beta |0_p)|g\rightarrow (i\alpha |e+\beta |g)|0_p`$. 
This is analogous to an integrated $`\pi /2`$ interaction in a spatially uniform cavity, i.e., one for which $`G(𝐫)=\mathrm{\Omega }_0`$. The atomic Bloch vector is then rotated by $`\mathrm{\Omega }_Rt=\pi /4`$ by applying a classical field in line defect $`R_1`$, thus generating the transformation $`|e\rightarrow (|e+i|g)/\sqrt{2}`$, $`|g\rightarrow (i|e+|g)/\sqrt{2}`$. Next, a dispersive interaction is created by detuning the atom far from the defect resonance (but still within the PBG) as it passes through $`q`$. This is used to produce a phase rotation of $`\pi `$ on the excited state, conditioned on the presence of a photon in $`q`$, thus causing the amplitude of $`|e`$ to change sign if $`q`$ is occupied. The inverse Bloch vector rotation is then performed in $`R_2`$, which switches states $`|e`$ and $`|g`$ relative to their initial values only if a photon was present in $`q`$. Finally, the state of the atom is transferred to defect $`p^{\prime }`$ by a near–resonant $`3\pi /2`$ interaction, leaving the atom in its ground state. $`p^{\prime }`$ then carries the result of the CNOT operation. We now turn to the crucial issue of the decoherence and energy loss of a photon in a defect mode. In general, these processes can have very different time scales, as the former is a result of the entanglement of a quantum state with its environment, which can occur well before the dissipation of energy . It has however been shown that for the (linear) coupling between a photon and a non–absorbing linear dielectric, the coherence of a small number of photons in a given mode is not destroyed by the interaction . Therefore, the coherence of excited defect modes in a high quality PBG material is essentially limited by energy loss. We note that phonon mediated spontaneous Raman and Brillouin scattering of photons out of a defect mode are ineffective loss mechanisms due to the vanishing overlap between a localized field mode and the extended states of the electromagnetic continuum . 
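The core of the CNOT sequence described above (rotation, conditional $`\pi `$ phase, inverse rotation) can be sketched as $`2\times 2`$ matrices acting on the atomic state in the basis $`(|e,|g)`$; this is a minimal consistency check of the gate logic, not a simulation of the actual defect interactions:

```python
import math

s = 1 / math.sqrt(2)
# rotation in R1: |e> -> (|e> + i|g>)/sqrt(2), |g> -> (i|e> + |g>)/sqrt(2)
R = [[s, 1j * s], [1j * s, s]]
Rinv = [[s, -1j * s], [-1j * s, s]]     # inverse rotation in R2
Z = [[-1, 0], [0, 1]]                   # conditional pi phase on |e> (photon in q)
I = [[1, 0], [0, 1]]                    # no phase (q empty)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# q empty: the two rotations cancel and the atomic state is unchanged
M0 = matmul(Rinv, matmul(I, R))
assert abs(M0[0][0] - 1) < 1e-12 and abs(M0[1][1] - 1) < 1e-12
assert abs(M0[0][1]) < 1e-12 and abs(M0[1][0]) < 1e-12

# q occupied: |e> and |g> are switched (up to phases), i.e. a NOT on the atom
M1 = matmul(Rinv, matmul(Z, R))
assert abs(M1[0][0]) < 1e-12 and abs(M1[1][1]) < 1e-12
assert abs(abs(M1[0][1]) - 1) < 1e-12 and abs(abs(M1[1][0]) - 1) < 1e-12
```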
Unlike most qubit elements, photons do not interact significantly with one another, preventing the propagation of single bit errors through a network of defect modes. However, a photon may “hop” from one defect to another either via direct tunneling or through phonon–assisted hopping . The likelihood of both of these processes decreases exponentially with the spatial separation of the defects, and is negligible for a defect separation of $`\gtrsim 10`$ lattice constants for a strongly confined mode. For an isolated point defect engineered well inside a large–scale PBG material, the Q–factor will then be limited by absorption and impurity scattering from the dielectric backbone of the crystal . In the optical and microwave frequency regimes, away from the electronic gap and the restrahlen absorption frequencies of a high quality semiconductor material, $`ϵ_2`$ can be as low as $`10^{-9}`$ at room temperature, and may be reduced at lower temperatures . One can further reduce absorptive losses by minimizing the fraction of the mode in the dielectric (e.g., a strongly localized mode in an air defect). Judicious defect fabrication in a high quality dielectric material should then give Q–factors of $`10^{10}`$ (assuming 10% of the mode is in the dielectric) or higher, which corresponds to photon lifetimes of $`10^{-1}\mathrm{sec}`$ and $`10^{-4}\mathrm{sec}`$ at microwave and optical frequencies respectively. This Q–value is comparable with present microwave CQED experiments, and is in excess of that currently used in optical CQED. Assuming defects are separated by 10 lattice constants, we obtain a decoherence to gate time ratio of $`\sim 200`$ even at room temperature, showing that our system is potentially capable of encoding complex quantum algorithms. We thank K. Busch, H.M. van Driel, and J.M. Raimond for valuable discussions. N.V. acknowledges support from the Ontario Graduate Scholarship Program. 
This work was sponsored in part by the Killam Foundation, the New Energy and Industrial Technology Organization (NEDO) of Japan, Photonics Research Ontario, and the NSERC of Canada.
no-problem/9910/cond-mat9910137.html
ar5iv
text
# Schottky barriers in carbon nanotube heterojunctions \[ ## Abstract Electronic properties of heterojunctions between metallic and semiconducting single-wall carbon nanotubes are investigated. Ineffective screening of the long range Coulomb interaction in one-dimensional nanotube systems drastically modifies the charge transfer phenomena compared to conventional semiconductor heterostructures. The length of depletion region varies over a wide range (from the nanotube radius to the nanotube length) sensitively depending on the doping strength. The Schottky barrier gives rise to an asymmetry of the I-V characteristics of heterojunctions, in agreement with recent experimental results by Yao et al. and Fuhrer et al. Dynamic charge build-up near the junction results in a step-like growth of the current at reverse bias. \] Single-wall carbon nanotubes (SWNTs) are giant linear fullerene molecules which can be studied individually by methods of nanophysics . Depending on the wrapping of a graphene sheet, SWNTs can either be one-dimensional (1D) metals or semiconductors with the energy gap in sub-electronvolt range . While metallic nanotubes can play a role of interconnects in future electronic circuits, their semiconducting counterparts can be used as basic elements of switching devices. An example is the field effect transistor on semiconducting SWNT operating at room temperature . Of particular interest are all-nanotube devices . The simplest can be fabricated by contacting two SWNTs with different electronic properties. The SWNTs can be seamlessly joined together by introducing topological defects (pentagon-heptagon pairs) into the hexagonal graphene network . The resulting on-tube junction generically has the shape of a kink. Electronic properties of such junctions have been investigated theoretically (see e.g. Refs. ) within the model of non-interacting electrons. Electron transport in nanotube heterojunctions has been studied in two recent experiments. Yao et al. 
treated junctions in SWNTs with kinks whereas Fuhrer et al. explored contacts of crossed nanotubes . Both groups observed non-linear and asymmetric $`IV`$ characteristics resembling that of rectifying diodes. On one hand, the rectifying behavior can be naturally interpreted in terms of Schottky barriers (SBs). On the other hand, formation of a SB might be surprising since one expects no charge transfer in junctions between two SWNTs made of the same material. A possible reason for the charge transfer might be the doping of the nanotubes forming the heterojunction . The doping can be caused by introduction of dopant atoms into the nanotubes or by charge transfer from metallic electrodes. In the latter case the doping strength can also be controlled by the gate voltage. It is important to mention that screening of the Coulomb interaction is ineffective in one-dimensional nanotubes. For this reason the effect of the doping is long-ranged: the density of the transferred charge decays slowly with the distance from the electrodes and might be appreciable at the heterojunction . The long-range Coulomb interaction should be properly taken into account when treating the charge transfer in the heterojunction itself. Unfortunately, this was not accomplished in Ref. , where the electric field was assumed to be fully screened in the region of a few atomic layers near the junction. In this Letter we study charge transfer phenomena in nanotube heterojunctions with true long-range Coulomb interaction. We concentrate on the metal-semiconductor SWNT junction and analyze its equilibrium and non-equilibrium properties (SB parameters, I-V characteristics) by solving the Poisson equation self-consistently. As a model system we consider ”straight” junction between metallic ($`x<0`$) and semiconducting ($`x>0`$) SWNTs (Fig. 1). We assume that the conducting $`p_z`$ electrons in SWNTs are confined to the surface of a cylinder of radius $`R`$. 
The nanotubes are surrounded by a coaxial cylindrical gate electrode of radius $`R_s\gg R`$. The Fourier components of the 1D Coulomb interaction are given by $$U(q)=\frac{2e^2}{\kappa }\left\{I_0(qR)K_0(qR)-\frac{I_0^2(qR)K_0(qR_s)}{I_0(qR_s)}\right\},$$ (1) with the dielectric constant of the medium $`\kappa `$ and the modified Bessel functions $`I_0`$, $`K_0`$. Equation (1) describes the long-range Coulomb interaction, $`U(x)=e^2/\kappa x`$, for $`R\ll x\ll R_s`$. The interaction is screened at large distances $`x\gtrsim R_s`$, so that $`U(0)=e^2/C=(2e^2/\kappa )\mathrm{ln}(R_s/R)`$, $`C`$ being the capacitance of SWNT per unit length. The kernel (1) relates the electrostatic potential $`\phi `$ at the surface of SWNTs to the 1D charge density $`e\rho `$ ($`e>0`$), $$e\phi _q=U(q)\rho _q.$$ (2) Since experimental values of the conductance of heterojunctions are small, $`G/(e^2/h)\sim 10^{-2}`$, we will assume low transparency $`T\ll 1`$ of the barrier between two SWNTs. In this case the electrons in the nanotubes are described by the equilibrium Fermi distribution $`f(E)`$, even when the voltage $`V`$ is applied to the system. In equilibrium, the charge density is related to the energy $`\stackrel{~}{E}_0(x)=E_0(x)-E_F(x)`$ of the gapless point (charge neutrality level) of graphite $`E_0`$ counted from the Fermi level $`E_F`$, $$\rho (x)=\int dE\,sign(E)\nu (E)f[(E-\stackrel{~}{E}_0(x))sign(E)],$$ (3) with the density of electronic states $`\nu `$ . Equation (3) is valid provided that $`\stackrel{~}{E}_0(x)`$ varies slowly on the scale of the Fermi wavelength. We restrict our consideration to low energies $`|\stackrel{~}{E}_0|<\mathrm{\Delta }^{(1)}`$, $`k_BT\ll \mathrm{\Delta }^{(1)}`$ and neglect the effect of higher 1D subbands ($`\mathrm{\Delta }^{(1)}/(\hbar v_F/R)=1,2/3`$ for metallic/semiconducting SWNT). 
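The zero-momentum value quoted after Eq. (1) follows from the small-argument behavior of the Bessel functions, $`I_0(z)\rightarrow 1`$ and $`K_0(z)\rightarrow \mathrm{ln}(2/z)-\gamma _E`$; a short derivation sketch:

```latex
U(q\to 0) \simeq \frac{2e^2}{\kappa}\left\{ K_0(qR) - K_0(qR_s) \right\}
          \simeq \frac{2e^2}{\kappa}\left\{ \left[\ln\frac{2}{qR}-\gamma_E\right]
                 - \left[\ln\frac{2}{qR_s}-\gamma_E\right] \right\}
          = \frac{2e^2}{\kappa}\,\ln\frac{R_s}{R} \;=\; \frac{e^2}{C}.
```

The logarithmic divergences cancel, leaving the finite charging scale $`e^2/C`$.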
The densities of states in metallic and semiconducting SWNTs are given by $$\nu _M=\frac{4}{\pi \hbar v_F},\nu _S=\frac{4}{\pi \hbar v_F}\frac{|E|\mathrm{\Theta }(|E|-\mathrm{\Delta })}{\sqrt{E^2-\mathrm{\Delta }^2}},$$ (4) with the Fermi velocity $`v_F\approx 8.1\times 10^5`$ m/s and the energy gap $`2\mathrm{\Delta }=2\hbar v_F/3R`$ in semiconducting SWNT ($`\mathrm{\Delta }\approx 0.3`$ eV for generic SWNTs with $`R=0.5`$$`0.7`$ nm). In the limit of zero temperature Eq. (3) may be inverted as, $$\stackrel{~}{E}_0(\rho )=\{\begin{array}{c}\rho /\nu _M,x<0,\\ \sqrt{\mathrm{\Delta }^2+\left(\rho /\nu _M\right)^2},x>0.\end{array}$$ (5) The charge neutrality level $`\stackrel{~}{E}_0(x)`$ is related to the electrostatic potential (2), $$\stackrel{~}{E}_0(x)+e\phi (x)=\mu +eVsign(x)/2,$$ (6) $`\mu \mp eV/2`$ being the electro-chemical potentials for holes in metallic and semiconducting SWNTs. The potential $`\mu =\alpha (\mathrm{\Delta }W-eV_g)`$ can be controlled by the gate voltage $`V_g`$ (Fig. 1). It also incorporates the difference $`\mathrm{\Delta }W=W_M-W_{NT}`$ of the work functions of the gate electrode and SWNT (the coefficient $`\alpha `$ characterizes the mutual capacitance of the nanotubes to the gate and is equal to unity in our case). We solve Eqs. (2), (3), (6) self-consistently by numerical minimization of the corresponding energy functional. The Coulomb energy is computed in the Fourier space. Figures 2, 3 display the results for the following parameters: $`R_s/R=75`$ ($`R_s\approx 50`$ nm for (10,10) SWNTs) and $`\nu _MU(0)/\mathrm{ln}(R_s/R)=5`$. The latter value corresponds to the dielectric constant $`\kappa \approx 1.4`$ which can be inferred from the experimental data (see Fig. 4 of Ref. ). The band bending diagrams (Fig. 
2) display the charge neutrality level $`\overline{E}_0(x)=\stackrel{~}{E}_0(x)-eVsign(x)/2`$ counted from the ”average” Fermi level of metallic and semiconducting SWNTs, as well as the energies $`E_{c,v}=\overline{E}_0\pm \mathrm{\Delta }`$ of the conduction and valence bands in semiconducting SWNT. Let us start from the case of zero bias, $`V=0`$ (Figs. 2(a), 2(b)). At zero electro-chemical potential, $`\mu =0`$, the Fermi level of the nanotubes coincides with the gapless point of graphite and the system is charge neutral (Fig. 2(a)). This situation occurs for isolated nanotubes. The barriers for the electron and hole transport are equal to $`\mathrm{\Delta }`$ (Fig. 3(a)). To make contact with the experiments we will concentrate on p-doped SWNTs ($`\mu >0`$). Due to the larger number of electronic states $`\int _0^{\stackrel{~}{E}_0}dE\,\nu (E)`$ the metallic SWNT acquires more charge and has a higher electrostatic potential $`\phi (-\infty )`$ (lower charge neutrality level $`\stackrel{~}{E}_0(-\infty )`$) compared to the semiconducting SWNT kept at the same electrochemical potential, see Eqs. (5), (6). The electric field induced by this charge bends the bands in the semiconducting part downwards so that a SB is formed near the interface (Fig. 2(b)). For $`|\mu |<\mathrm{\Delta }`$, there are no free charges in the semiconducting SWNT. Our numerical results indicate that the electrostatic potential $`\phi (x)`$ decays logarithmically at $`R\ll x\ll R_s`$ so that the band bending extends over long distances $`x\sim R_s`$ (the analytical estimate, $`\phi (x)\sim e\nu _M\mu \mathrm{ln}(R_s/x)/\kappa `$, is available in the limit of weak interaction, $`\nu _MU(0)\ll 1`$). At $`\mu =\mathrm{\Delta }`$ holes enter the semiconducting SWNT. With increasing electro-chemical potential the holes come closer to the junction, reducing the length $`l`$ and the height $`u`$ of a SB (Fig. 2(b)). 
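The parameter values used above can be cross-checked from fundamental constants; a rough sketch in SI units (the extraction of $`\kappa `$ from $`\nu _MU(0)/\mathrm{ln}(R_s/R)=5`$ with $`\nu _M=4/\pi \hbar v_F`$ is our inference from the quoted numbers, not a statement from the text):

```python
import math

hbar = 1.0546e-34      # J s
e = 1.602e-19          # C
eps0 = 8.854e-12       # F/m
vF = 8.1e5             # m/s, Fermi velocity quoted in the text

# energy gap 2*Delta = 2*hbar*vF/(3R) of a semiconducting SWNT
def gap_eV(R):
    return (hbar * vF / (3.0 * R)) / e   # Delta in eV

# Delta ~ 0.3 eV for generic radii R = 0.5-0.7 nm, as quoted
assert 0.25 < gap_eV(0.7e-9) < gap_eV(0.5e-9) < 0.37
assert abs(gap_eV(0.6e-9) - 0.3) < 0.05

# dielectric constant from nu_M U(0)/ln(Rs/R) = 2 e^2 nu_M / kappa = 5,
# with e^2 -> e^2/(4 pi eps0) in SI (Gaussian units in the text)
e2 = e**2 / (4.0 * math.pi * eps0)       # J m
kappa = 2.0 * e2 * 4.0 / (math.pi * hbar * vF) / 5.0
assert abs(kappa - 1.4) < 0.1            # consistent with kappa ~ 1.4
```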
In the case of weakly doped semiconducting SWNT, $`\mu =\mathrm{\Delta }(1+\delta )`$, $`\delta \ll 1`$, a rough estimate of the depletion length $`l`$ can be made, $`\mathrm{ln}(l/R_s)\approx \delta \mathrm{ln}(R/R_s)`$, for $`R\ll l\ll R_s`$. Therefore, the depletion length changes rapidly from $`l\sim R_s`$ to $`l\sim R`$ with increasing doping in this regime. The height of a SB can be estimated from the difference of the charge neutrality levels in semiconducting and metallic SWNTs, $`u\approx \stackrel{~}{E}_0(\mathrm{\infty })-\stackrel{~}{E}_0(-\mathrm{\infty })`$. The latter evaluate at $`\stackrel{~}{E}_0(-\mathrm{\infty })=\mu /(1+\nu _MU(0))`$ and $`\stackrel{~}{E}_0(\mathrm{\infty })=\mathrm{\Delta }`$ for $`\delta \ll \nu _MU(0)`$, see Eqs. (2), (5), (6). Since the band bending occurs predominantly in the semiconducting part (Figs. 2(a), 2(b)) and $`\nu _MU(0)\gg 1`$, one expects that $`u\approx \mathrm{\Delta }`$ for $`\delta \ll \nu _MU(0)`$. Note that the SB persists up to remarkably large values of the electro-chemical potential, $`\mu \approx 14\mathrm{\Delta }`$ (Fig. 3(a)), though it becomes rather short ($`l<R`$) for $`\mu \gtrsim 8\mathrm{\Delta }`$. Figure 3(a) shows the result for the SB height, defined as the minimum energy of an electron or hole excitation required to transfer the elementary charge across the junction in the absence of tunneling through the SB. The SB height shows pronounced asymmetry as a function of the bias voltage. Under a forward/backward bias the charge density in metallic SWNT decreases/increases. This reduces/enhances the band bending (Figs. 2(c), 2(d)) in the semiconducting part (the charge density and the band bending change sign for $`eV>2\mu `$, cf. Fig. 3(a)). As a result, the SB height decreases faster under a forward bias. This gives rise to the asymmetry $`V_+<|V_{}|`$ of the positive $`V_+`$ and negative $`V_{}`$ threshold voltages at which the SB vanishes ($`u\to 0`$) and the onset of the conductance occurs. The positive threshold voltage is relatively insensitive to the doping strength, Fig. 3(a).
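The depletion-length estimate above inverts to $`l\approx R_s(R/R_s)^\delta `$, which makes the rapid crossover from $`l\sim R_s`$ to $`l\sim R`$ easy to see. A sketch assuming $`R=0.65`$ nm and $`R_s=50`$ nm (values consistent with $`R_s/R\approx 75`$ quoted earlier; not the paper's own numerics):

```python
# The rough estimate ln(l/R_s) ~ delta * ln(R/R_s) inverts to
# l(delta) ~ R_s * (R/R_s)**delta: the depletion length interpolates
# between R_s (delta -> 0) and R (delta -> 1) exponentially fast.
R = 0.65e-9   # SWNT radius, m (assumed generic value)
R_S = 50e-9   # distance to the gate, m (so R_S/R ~ 77, close to 75)

def depletion_length(delta):
    """Depletion length for weak doping mu = Delta*(1 + delta)."""
    return R_S * (R / R_S) ** delta

for delta in (0.0, 0.25, 0.5, 1.0):
    print(f"delta = {delta:4.2f}  ->  l = {depletion_length(delta) * 1e9:6.2f} nm")
```

Even $`\delta =0.25`$ already shrinks $`l`$ by a factor of a few, illustrating why the SB length is so sensitive to doping in this regime.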
This can be used for a rough estimate of the gap $`\mathrm{\Delta }`$ from experimental data, $`\mathrm{\Delta }\approx eV_+`$, for $`\mu \lesssim \mathrm{\Delta }`$. Note that we assume weak interband tunneling in semiconducting SWNT so that the electronic states in the conduction band are empty in Fig. 2(d). We will proceed with the analysis of non-equilibrium electron transport. The current through the heterojunction is given by the Landauer formula, $$I=\frac{2e}{\pi \mathrm{\hslash }}\int 𝑑ET(E)\left\{f(E-eV/2)-f(E+eV/2)\right\},$$ (7) with the energy-dependent transmission coefficient $`T(E)`$ of the junction. It is natural to separate the contribution $`T_i(E)`$ of a barrier at the interface between SWNTs and the contribution $`T_S(E)`$ of a SB to the total transmission. As a minimal model, we assume that the transparency $`T_i`$ is energy independent whereas the transparency $`T_S(E)`$ increases from zero to unity when the energy $`E`$ crosses the edge of a SB. In this case the total transmission reads $`T(E)=0`$ ($`T_i`$), for the energies in (out of) the SB range $`[E_{\mathrm{min}},E_{\mathrm{max}}]`$. In the case of downward bending (Fig. 2) the SB range is given by $`[E_v(0),E_c(\mathrm{\infty })]`$, in the absence of charge carriers in the conduction band, $`\mu +eV/2>-\mathrm{\Delta }`$ (Figs. 2(a-d)), and by $`[E_v(0),E_c(0)]`$, in their presence, $`\mu +eV/2<-\mathrm{\Delta }`$ (Fig. 2(e)). The results for the $`I`$–$`V`$ characteristics at zero temperature are presented in Figs. 2(f), 3(b). With increasing forward bias the SB (Fig. 2(c)) disappears and the hole transport channel opens at $`V>V_+`$. The cusp in the $`I`$–$`V`$ characteristics (Fig. 2(f)) at somewhat higher voltages corresponds to the onset of the electron channel. Note that at high (forward or reverse) bias both the electron and hole channels are open and the current is given by $`I=(2eT_i/\pi \mathrm{\hslash })[eV-2\mathrm{\Delta }sign(V)]`$. The onset of the conductance under reverse bias depends critically on the electro-chemical potential $`\mu `$ (Figs.
2(f), 3(b)). At low doping, $`\mu \lesssim 1.8\mathrm{\Delta }`$, the current increases abruptly at $`V_{}=V_c`$, $`eV_c=-2(\mathrm{\Delta }+\mu )`$. The voltage $`V_c`$ corresponds to the alignment of the Fermi level with the conduction band of semiconducting SWNT. Electrons entering the conduction band cause the charge build-up near the junction. The reconstruction of the band profile (cf. Figs. 2(d) and 2(e)) results in the onset of the electron and hole channels of transport giving rise to a step-like growth of the current. At higher doping, $`\mu \gtrsim 1.8\mathrm{\Delta }`$, the threshold voltage $`V_{}>V_c`$ corresponds to the opening of the hole channel. The current gradually grows under reverse bias $`V_c<V<V_{}`$ until the reconstruction of the band profile occurs at $`V=V_c`$ (see the curve for $`\mu =3\mathrm{\Delta }`$ in Fig. 2(f)). We now consider quantum tunneling through the SB. The transparency $`T_S(E)`$ of the SB can be evaluated using the WKB method and the effective mass approximation. For a triangular barrier of length $`l`$ and height $`u`$ we obtain, $$T_S\sim \mathrm{exp}\left(-\frac{4l}{9R}\sqrt{\frac{2u}{\mathrm{\Delta }}}\right).$$ (8) The transparency $`T_S`$ increases considerably near the boundaries of the transport blockade region $`[V_{},V_+]`$ (Fig. 3(a)) due to decreasing $`u`$ and $`l`$. For example, $`T_S\approx 2.5\times 10^{-3}`$ for the SB in Fig. 2(b), whereas $`T_S\approx 1`$ for the SBs in Figs. 2(c), 2(d). This gives rise to a substantial leakage current in the blockade region. The asymmetry of the $`I`$–$`V`$ characteristics and threshold voltages has been discovered in recent experiments . According to the data of Ref. , both the thresholds $`V_+`$, $`V_{}`$ shift upwards with the gate voltage. Moreover, the positive threshold shifts less than the negative one. Such behavior is consistent with our model in the regime of moderate doping, $`0.5<\mu /\mathrm{\Delta }\lesssim 1.8`$ (Fig. 3).
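The decaying WKB exponential of Eq. (8) is straightforward to evaluate. In the sketch below the barrier dimensions are illustrative choices, not the actual fitted values for the figures:

```python
import math

# WKB transparency of a triangular Schottky barrier, Eq. (8):
# T_S ~ exp(-(4l/9R) * sqrt(2u/Delta)).  Lengths enter only through
# the dimensionless ratio l/R, heights through u/Delta.
def transparency(l_over_R, u_over_Delta):
    return math.exp(-(4.0 * l_over_R / 9.0) * math.sqrt(2.0 * u_over_Delta))

# A long, tall barrier is nearly opaque ...
print(transparency(10.0, 0.9))
# ... while a short barrier near a threshold is almost transparent.
print(transparency(1.0, 0.2))
```

With $`l/R=10`$ and $`u/\mathrm{\Delta }=0.9`$ the exponent is about $`-6`$, i.e. a transparency of order $`10^{-3}`$, comparable to the value quoted for the barrier of Fig. 2(b).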
However, the blockade region of $`3`$–$`4`$ V detected in the experiment is somewhat wider than the theoretical estimate, $`e(V_+-V_{})\approx 6.5\mathrm{\Delta }\approx 2`$ eV. The extra voltage drop could be due to potential disorder in semiconducting SWNT and/or an additional SB at the interface between semiconducting SWNT and the metallic electrode. We now check the model against the experimental data of Ref. . The measured width of the blockade region, $`0.5`$–$`0.7`$ V, agrees with the theoretical estimate. The gap in semiconducting SWNT, $`\mathrm{\Delta }\approx eV_+`$, evaluates at $`\mathrm{\Delta }=0.19`$, $`0.29`$ eV for the two devices studied . These values are in the expected range $`\mathrm{\Delta }\approx 0.25`$–$`0.35`$ eV . A smooth onset of the current over the range $`0.1`$–$`0.3`$ eV around the threshold voltages is naturally associated with quantum tunneling through a “leaky” SB (thermal energies are much smaller, $`k_BT\approx 5`$ meV). Finally, the step-like feature of the current under reverse bias almost certainly corresponds to the reconstruction of the band profile due to the Fermi level entering the conduction band of semiconducting SWNT. Gradual onset of the differential conductance following the reconstruction might be associated with the increasing conductance of disordered semiconducting SWNT under doping . To conclude, we have studied the electronic properties of carbon nanotube heterojunctions and provided an explanation for the main features of recent experimental data . Due to the long-range Coulomb interaction, the charge transfer phenomena in one-dimensional nanotube systems differ drastically from those in conventional semiconductor heterostructures. This creates new challenges in the design of novel electronic devices. In particular, the long-range electrostatic potential in underdoped junctions might affect other components of a circuit, whereas the substantial leakage current in overdoped junctions spoils the rectification.
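For orientation, the high-bias asymptote quoted earlier, $`I=(2eT_i/\pi \mathrm{\hslash })[eV-2\mathrm{\Delta }sign(V)]`$, fixes the overall current scale of the $`I`$–$`V`$ curves. In this sketch $`T_i=0.5`$ and $`\mathrm{\Delta }=0.3`$ eV are assumed, not measured, values:

```python
import math

# High-bias Landauer asymptote for the heterojunction,
# I = (2e*T_i / pi*hbar) * [eV - 2*Delta*sign(V)],
# valid once both the electron and hole channels are open.
E = 1.602176634e-19   # C
HBAR = 1.0545718e-34  # J*s

def high_bias_current(V, T_i=0.5, Delta_eV=0.3):
    sign = 1.0 if V > 0 else -1.0
    energy = E * V - 2.0 * Delta_eV * E * sign   # bracketed factor, in J
    return (2.0 * E * T_i / (math.pi * HBAR)) * energy

print(f"I(+1.5 V) = {high_bias_current(+1.5) * 1e6:.1f} uA")
print(f"I(-1.5 V) = {high_bias_current(-1.5) * 1e6:.1f} uA")
```

The prefactor $`2e^2/\pi \mathrm{\hslash }=4e^2/h`$ is the two-channel conductance quantum of a nanotube, so with these assumed parameters the current reaches tens of microamperes at a bias of 1.5 V; the asymptote itself is antisymmetric, the rectification living entirely in the blockade region.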
In view of these challenges a new concept of functional devices at the molecular level might be needed. In the process of writing this paper I became aware of the preprint by Léonard and Tersoff, who investigated equilibrium properties of junctions between semiconducting SWNTs and found long-range charge-transfer phenomena in these systems (see also Ref. ). The author wishes to thank B.L. Altshuler, G.E.W. Bauer, Yu.V. Nazarov, S. Tarucha, Y. Tokura, Z. Yao, and, especially, C. Dekker and P. McEuen for stimulating discussions. F. Léonard and J. Tersoff are acknowledged for sharing the results of Ref. before publication. This work was supported by the Royal Dutch Academy of Sciences (KNAW).
no-problem/9910/astro-ph9910473.html
ar5iv
text
# High Redshift Clusters and Protoclusters ## 1. Introduction The evolution of galaxy clusters is quite sensitive to the physical processes which dominate the formation of structure and to the cosmological parameters. Cluster evolution is inherently complex both because clusters are not closed systems and because the 3 main mass components (dark matter, intracluster gas, and galaxies) evolve differently. As a consequence, different cluster parameters evolve on different timescales depending on the thermal and dissipative properties of the mass component(s) which most strongly control each cluster parameter. Furthermore, it is becoming evident that the process of cluster formation extends over a moderately broad range in redshift. Given the complexity of the problem, breakthroughs in understanding the formation and evolution of clusters of galaxies must rely on observations spread over a large range of wavelength and redshift. With the advent of the Keck 10m telescopes, the restored resolution of HST imagery, improvements in IR arrays, and the enhanced x-ray imaging and spectroscopic capabilities of ROSAT and ASCA, constraints on the properties of the ICM and the cluster galaxy population have been extended out to $`z\approx 1`$ and beyond. Indeed, cluster candidates have been identified out to $`z\approx 3`$ (see Table 1). The large temporal baseline these data cover now allows much tighter constraints on scenarios for cluster evolution. ## 2. Search Strategies The most distant cluster candidates have been found by conducting searches in the vicinity of high-z radio galaxies or quasars. The evidence for the presence of a cluster in these cases is typically based on fewer than 5 spectroscopically confirmed members plus a statistical excess of red galaxies or Ly$`\alpha `$ emitters.

Table 1. Examples of $`z>1`$ Cluster Candidates

| Name | Redshift | Reference |
| --- | --- | --- |
| MRC 0316-257 | 3.14 | Le Fevre et al. 1996 |
| QSO 0953+545 | 2.50 | Malkan et al. 1996 |
| QSO 1312+4237 | 2.50 | Campos et al. 1999 |
| 53W002 | 2.39 | Pascarelle et al. 1995 |
| QSO 2139-4434 | 2.38 | Francis et al. 1996 |
| 3C294 | 1.79 | Dickinson et al. 1999 |
| RXJ0848+4453<sup>a</sup> | 1.27 | Stanford et al. 1997; Rosati et al. 1999 |
| 3C324 | 1.21 | Dickinson 1997 |
| AXJ2019+112 | 1.01 | Benitez et al. 1998 |
| 3C184<sup>a</sup> | 1.00 | Deltorn et al. 1997 |

<sup>a</sup> Spectroscopic confirmation based on more than 10 $`z`$’s

Hall & Green (1998) have also performed a search around a sample of radio-loud quasars and have identified 31 possible clusters in the range $`1<z<2`$. The physical properties of $`z>1`$ cluster candidates are not well quantified because of the limited amount of spectroscopic data available to date. While looking for clusters in the vicinities of radio galaxies or quasars is fruitful, the resulting samples will naturally suffer from selection effects associated with limiting one’s search to such interesting environments. Nonetheless, it appears that overdensities which may be the progenitors of present day clusters exist at $`z\approx 3`$ ($`\approx 15`$% of the current age of the universe). A more complete picture of the properties of high-redshift clusters can be obtained at $`z<1.3`$ through objective searches of wide areas of sky in optical, NIR, and x-ray passbands. The advantage of x-ray cluster selection is two-fold: 1) emission from the hot intracluster medium (ICM) directly indicates the presence of a gravitationally bound system and 2) the ICM comprises 70 to 80% of the cluster’s baryonic mass. Nearly all x-ray selected high-$`z`$ clusters are rich and elliptical dominated. The Extended Medium Sensitivity Survey (EMSS; Gioia et al. 1990a; Henry et al. 1992) has been used to identify clusters out to $`z\approx 0.85`$ over an area of $`\approx 850`$ deg<sup>2</sup> and the ROSAT Distant Cluster Survey (RDCS; Rosati et al. 1998) has been used to find systems out to $`z\approx 1.3`$ over an area of $`\approx 30`$ deg<sup>2</sup>.
The RDCS, in particular, includes 100 spectroscopically confirmed clusters. Of these, 33% have $`z>0.4`$ and 25% have $`z>0.5`$ (Rosati 1998). While past x-ray telescopes have had fairly low effective areas, new observatories, like XMM, will provide at least an order of magnitude improvement. Equally exciting are developments in the use of the Sunyaev-Zeldovich Effect to locate clusters. Mohr et al. (1999) indicate that SZE facilities in the near future will be able to detect $`\sim 100`$ $`z>1`$ clusters per year. Searching for distant clusters in the optical and NIR, however, also has significant advantages. From a practical point of view, there are more telescopes and larger area mosaic cameras available in the optical/NIR than in x-rays. Optical/NIR searches will also find clusters spanning a wider range of x-ray luminosity and total mass (e.g., Holden et al. 1997). Although the spurious detection rate at high-$`z`$ can be $`\approx 30`$%, the use of photometric redshifts can dramatically reduce the number of false positives. Some of the largest area and deepest optical/NIR distant cluster surveys include the Palomar Distant Cluster Survey (5.1 deg<sup>2</sup>; Postman et al. 1996), the ESO Imaging Survey (12 deg<sup>2</sup>; Scodeggio et al. 1999), The Deeprange Survey (16 deg<sup>2</sup>; Postman et al. 1998), and the NOAO Deep-Wide Survey (18 deg<sup>2</sup>; Jannuzi & Dey 1999). An optimal strategy, of course, is to combine x-ray and optical/NIR data obtained over the same region of sky. This allows a full assessment of the selection biases to be made and is likely to reveal subtle effects which can be important in interpreting, for example, the abundance of high-$`z`$ clusters. The benefits of such joint searches are already being realized: Scharf et al.
(1999) used 22 deep ROSAT PSPC fields as targets for deep optical imaging to study the effects of optical and x-ray selection on derived cluster evolution and to look for correlations in the large-scale distribution of diffuse x-ray emission and the galaxy distribution. Preliminary results include the possible first direct detection of x-ray emission from an intercluster filament at $`z\approx 0.4`$–$`0.5`$. Stanford et al. (1997) and Rosati et al. (1999) have identified a supercluster at $`z=1.27`$ in the Lynx field which was initially detected in the NIR (K-band) and, subsequently, in x-rays. ## 3. Cluster Abundance The abundance of clusters as a function of redshift is one of the fundamental constraints on both structure formation and cosmological models. The space density of clusters at $`z>0.5`$, for example, is highly sensitive to $`\mathrm{\Omega }_m`$ (Bahcall, Fan, & Renyue 1997; Donahue & Voit 1999). Present observational constraints from x-ray surveys (followed up by optical spectroscopy) indicate that the comoving space density of clusters per unit $`L_x`$ is invariant out to at least $`z=0.8`$ for systems with $`L_x\lesssim 3\times 10^{44}`$ erg sec<sup>-1</sup> (Henry et al. 1992; Rosati et al. 1998). For more luminous (massive) clusters, mild negative evolution has been reported (Gioia et al. 1990b; Henry et al. 1992; Vikhlinin et al. 1998) although the deficit, expressed in absolute numbers, is only a dozen or so EMSS clusters at $`z>0.4`$ (small enough that one might worry about subtle selection biases at low x-ray surface brightness levels in the existing surveys). The distribution of poor to moderately rich optically selected clusters is also consistent with a constant comoving space density to at least $`z=0.6`$ (Postman et al. 1996; Holden et al. 1998). At $`z>1`$, our constraints on cluster abundances presently suffer from a lack of data.
There are at least 5 known clusters with $`0.75<z<1.3`$ that have velocity dispersions in the range $`700\lesssim \sigma _{1D}\lesssim 1400`$ km sec<sup>-1</sup>. At least two of these, MS-1137 ($`z=0.78`$) and MS-1054 ($`z=0.83`$), have relatively high kinetic gas temperatures ($`T_x`$) – 5.7 keV and 12.4 keV, respectively (Donahue et al. 1998, 1999). The existence of massive ($`>5\times 10^{14}`$ M<sub>⊙</sub>) clusters at $`z\approx 1`$ is, thus, no longer in doubt. Interestingly, the space density of $`z\approx 1`$ clusters inferred from the RDCS coverage of the Lynx region is $`\approx 2.4\times 10^{-6}h^3`$ Mpc<sup>-3</sup> (Rosati 1999), within a factor of 2 of the density of $`z\approx 3`$ structures delineated by Lyman break galaxies (Steidel et al. 1998). Whether this is indicative of an evolutionary connection or mere coincidence remains to be decided. ## 4. Evolution of the Gravitational Potentials of Clusters The evolution of the ICM and its correlation with global cluster kinematics provide direct constraints on the growth of the gravitational potentials which we call clusters. The relationship between $`L_x`$ and $`T_x`$ in low-$`z`$ clusters is remarkably tight but is somewhat steeper than that predicted by bremsstrahlung emission from a population of virialized, structurally identical clusters with constant gas fraction (Arnaud & Evrard 1999). However, with reasonable constraints on cluster structure, the same authors find the fractional variation in cluster gas fraction is $`<15`$%. Mushotsky & Scharf (1997) demonstrate that the $`L_x`$–$`T_x`$ relationship exhibits no significant evolution out to $`z\approx 0.4`$. Donahue et al. (1999) have extended this work out to $`z\approx 0.9`$: if the evolution of the relation is parameterized as $`L_x=T_x^\alpha (1+z)^A`$, then they find that $`A\geq 1.5`$ is rejected with greater than $`3\sigma `$ confidence (for $`q_o=0.5`$). Values of $`A=2`$–$`3`$ would be required to explain the lack of evolution in the x-ray luminosity function cited above if $`\mathrm{\Omega }_m=1`$. Donahue et al.
(1998) have further shown (see Figure 1) that the relation between the cluster velocity dispersion ($`\sigma _{1D}`$) and $`T_x`$ is invariant out to $`z\approx 0.8`$. Cluster potentials are clearly well established in the universe by $`z\approx 0.9`$ and, on average, the x-ray properties of the ICM are similar to those in current epoch clusters. The distribution of the optically luminous mass in clusters, as delineated by the member galaxies, may be experiencing more recent evolution than the ICM. Clusters exhibit significantly more asymmetry in their galaxy distribution at $`z>0.7`$ than at the present epoch (Lubin & Postman 1996) – the observed profiles are inconsistent with azimuthal symmetry at the 99.9% confidence level, in strong contrast with the situation at $`z<0.3`$. In some cases, like MS-1054, the clumpiness seen in the galaxy distribution is also seen in the x-ray brightness distribution (Donahue et al. 1998) and in a mass map based upon weak lensing distortions (Hoekstra, Franx, & Kuijken 1999) – characteristic of recent merger activity. Indeed, mergers of group-size clumps at $`z\approx 1`$ may be the origin of some of the current epoch richness class 1 and 0 clusters (Lubin et al. 1998; Gioia et al. 1999). However, the majority of the known high-$`z`$ clusters appear to have been in existence since at least $`z\approx 2`$, as observations discussed below suggest. ## 5. Constraining the Epoch of Cluster Formation The processes which control the formation of clusters leave observable signatures in the evolution of the morphological and spectrophotometric properties of cluster galaxies. This is a key reason why observational work in this area has been a major component of recent extragalactic telescope programs. The evolution of the mass function of cluster galaxies, in particular, provides critical constraints on and tests of cluster formation scenarios.
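To give a feel for the $`L_x=T_x^\alpha (1+z)^A`$ parameterization discussed above, the evolution factor at the highest redshift probed, $`z\approx 0.9`$, grows quickly with the exponent $`A`$. A back-of-envelope Python check:

```python
# Size of the evolution factor (1+z)^A in L_x = T_x**alpha * (1+z)**A,
# evaluated at z = 0.9, roughly the redshift limit of the sample
# discussed above.  The exponents span the constrained/required range.
z = 0.9
for A in (1.5, 2.0, 3.0):
    print(f"A = {A}: (1+z)^A = {(1 + z) ** A:.2f}")
```

Even the rejected value $`A=1.5`$ corresponds to a factor-of-2.6 luminosity enhancement at fixed temperature, and the $`A=2`$–$`3`$ needed for $`\mathrm{\Omega }_m=1`$ would imply factors of 3.6–6.9, which is why the observed non-evolution is constraining.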
At high-$`z`$, cluster galaxy mass determinations are difficult to obtain, but the K-band cluster galaxy luminosity function (KLF) can provide a reliable substitute because it probes the total stellar mass component and is not strongly sensitive to the instantaneous star formation rate (e.g., see Gavazzi, Pierini, Boselli 1996). De Propris et al. (1999) have derived the KLF for 38 clusters at $`0.1<z<1`$. Their main result is that the KLF departs from no-evolution predictions at $`z>0.4`$; however, the changes observed are consistent with simple, passive evolution (aging of the existing stellar population) and a narrow formation epoch around $`z=2`$ (if $`\mathrm{\Lambda }=0.7`$) or $`z=3`$ (if $`\mathrm{\Lambda }=0`$). Comparison of the broadband colors and spectral features of the early type cluster members at $`0.76\leq z\leq 0.92`$ with spectral synthesis models suggests these galaxies are old (mean ages $`3\pm 2`$ Gyr at the observed redshifts), implying a relatively early formation at $`z>2`$ as well (e.g., Bower, Kodama, Terlevich 1998; Postman, Lubin, Oke 1998; Stanford, Eisenhardt, Dickinson 1998). Such observations also suggest that the mass-to-light ratios of the early type cluster galaxies have evolved passively since at least $`z\approx 1.2`$ (Kelson et al. 1997; van Dokkum et al. 1998). Taken in concert with the results on the KLF evolution, one may conclude that the mass function of cluster galaxies has remained roughly invariant since $`z\approx 1.2`$. An additional constraint on the duration of the cluster galaxy formation era comes from the optical/NIR color-magnitude relations for the red galaxy population, which are well-defined and exhibit remarkably low scatter in clusters from $`0<z<1`$ (Stanford, Eisenhardt, Dickinson 1998). This places a stringent constraint on their formation synchronicity of $`\mathrm{\Delta }t\lesssim 4`$ Gyr (roughly the time between $`z=10`$ and $`z=1.5`$ in a $`\mathrm{\Omega }_o=0.2,h=0.6,\mathrm{\Lambda }=0`$ cosmology).
The coeval nature of cluster elliptical evolution is also reflected in observations which are consistent with exponentially decaying star formation rates with relatively short $`e`$-folding times ($`0.1<\tau <0.6`$ Gyr; Postman, Lubin, Oke 1998). There is some evidence (Fuller, West, Bridges 1999) that the brightest cluster galaxies are preferentially aligned with the global cluster galaxy distribution, an effect also suggestive of an early formation epoch. The metallicities of cluster ellipticals in the redshift range $`0.5<z<1`$ are consistent, on average, with the close-to-solar values observed in current epoch ellipticals. Similarly, there does not appear to have been much change in the metallicity of the ICM ($`0.2`$–$`0.45`$ solar) between now and $`z\approx 0.8`$ (Donahue et al. 1999). This suggests reprocessing of the baryonic mass component of clusters had been on-going for a few Gyr prior to the current lookback time. Hierarchical models do predict, however, that some star formation activity should be occurring in clusters since $`z\approx 1`$. This is indeed seen in the spiral members and in the surrounding field galaxy population. For example, the fraction of cluster members with strong OII emission (EW $`>`$ 15Å), a reliable star formation indicator, increases by a factor of 3–4 between now and $`z\approx 0.92`$ (Balogh et al. 1998; Postman, Lubin, Oke 1998). The percentage of cluster galaxies with post-starburst spectral features increases nearly tenfold between now and $`0.3<z<0.5`$ (Dressler et al. 1999). ## 6. Evolution of the Morphological Mix Using the high angular resolution imaging provided by the Hubble Space Telescope, several teams (e.g., Dressler et al. 1997; Oemler et al. 1997; Smail et al. 1997; Lubin et al. 1999) have conducted morphological surveys of clusters in the range $`0.3<z<1`$.
Upon comparison with similar ground-based studies of low-$`z`$ clusters, it appears that the distribution of cluster galaxy morphologies has undergone rather substantial evolution between $`0.4<z<1`$ but remains relatively invariant between $`z=0.4`$ and the present epoch. One indicator that has been used to gauge the evolution is the ratio of the number of lenticulars (S0) to ellipticals in the central regions of clusters. Figure 2 summarizes the current constraints on the redshift dependence of the S0/E ratio in clusters observed by Dressler et al. 1997 (D97) and Lubin et al. 1999. Although the D97 results have been widely cited as evidence for substantial morphological evolution even since $`z=0.4`$, re-analysis of the data by Andreon (1998) and new data at higher $`z`$ from Lubin et al. 1999 suggest that the metamorphosis may not be as dramatic as originally thought. However, the high-$`z`$ data from Lubin et al. also show that while ellipticals still preferentially reside in the highest density environments, the Spiral/S0 morphology-density relation at $`z>0.5`$ is much less well-defined than it is now. There is further evidence that morphological modifications are occurring at least as recently as $`z\approx 0.3`$–$`0.4`$: Dressler et al. (1999) find that the OII equivalent widths, for a given morphological type, are lower in $`0.3<z<0.5`$ cluster galaxies than in the field and that the actively star-forming galaxies in these clusters have a more extended spatial distribution than the non-active galaxies. The kinematic properties of the early and late type cluster galaxies appear to differ as well, although these differences exist both in the present epoch and at $`z\approx 1`$ (Adami et al. 1998; Lubin et al. 1999). Specifically, the spiral population tends to have a higher velocity dispersion than the elliptical members, suggestive of an on-going spiral infall process. Poggianti et al.
(1999) find, however, evidence for widespread cessation of star formation activity in intermediate-$`z`$ clusters over a relatively short ($`\approx 1`$ Gyr) timescale. Specifically, 90% of the spiral cluster members they studied show spectral signatures of either enhanced or suppressed star formation relative to local spirals. ## 7. A Possible Scenario I will now propose a possible evolutionary sequence which incorporates the myriad of constraints derived from observations of clusters. Note that some of the steps below have not been observationally confirmed or are still controversial! * In a universe with $`\mathrm{\Omega }_m<1`$ (and possibly $`\mathrm{\Lambda }\ne 0`$), protoclusters form at $`z>3`$. * The sites of formation are located at the intersections of filamentary matter flows and the first cluster galaxies form during the first generation of matter crossings (This is a conjecture based solely on N-body simulations and popular dark matter models). * The richest, current epoch clusters formed first. Some of the poorer clusters seen today may have developed via group-group mergers since $`z\approx 1`$. * Primordial ICM shocks and begins to emit x-rays at $`z\approx 2`$ (and perhaps earlier). Enrichment of the ICM most likely occurs in the $`z>1`$ era. From $`z\approx 1`$ to now, there is little evolution of the ICM. * The brightest cluster galaxies grow via cannibalism until $`z\approx 1.5`$. Most merger activity ceases by $`z\approx 1`$ and subsequent evolution is passive. Other massive ellipticals assemble prior to $`z\approx 2`$ and are the first to reach dynamical equilibrium with the cluster potential. * The most active periods of star formation within the cluster occur at $`z>1`$. Most star formation is quenched, however, by $`z\approx 0.4`$–$`0.5`$. * Infall of spirals results in morphological and color gradients within the cluster. This process continues up to the present epoch. * The S0 and dwarf elliptical populations develop within the cluster core, certainly by $`z\approx 0.5`$ and more likely by $`z\approx 1`$.
The likely relevant processes involved are ram pressure stripping, mergers, and tidal stress. S0’s may descend from high surface brightness spirals, dE’s from low surface brightness spirals (Moore at al. 1998). There are notable exceptions to the above scenario such as low-$`z`$ spiral rich clusters (e.g., Virgo, Hercules) and low-$`z`$ irregular clusters (e.g., Abell 1185), which are probably still dynamically young, suggesting that some cluster evolution is still occurring at the present epoch. Furthermore, our knowledge of cluster evolution at $`z>1`$ is still quite rudimentary. Thus, while great strides have been made, there remain many steps to go before our understanding of the cluster formation process is complete. Some of the observational programs which will take us farther towards this goal are now, or soon will be, underway. These include more complete and larger $`z>1`$ cluster samples, more objective and precise studies of the $`z=0`$ cluster population (e.g., the Sloan Digital Sky Survey, the 2dF survey, the REFLEX survey), improved x-ray observations from XMM and Chandra and optical/IR observations with HST (ACS, WF3) and SIRTF, extended spectroscopic studies of high-$`z`$ clusters using the growing suite of 8 – 10m ground-based telescopes, construction of mass-selected cluster catalogs from SZE surveys, and ultra deep 21 cm searches for protoclusters at $`z>2`$ using the Giant Meter-wave Radio Telescope. ### Acknowledgments. I thank the SOC for their generous travel support which made attendance of this meeting possible. I also wish to thank Megan Donahue and Caleb Scharf for providing results based on their x-ray observations of clusters in advance of publication. ## References Adami, C. et al. 1998, A&A, 331, 439 Andreon, S. 1998, ApJ, 501, 533 Arnaud, M., Evrard, A. 1999, MNRAS, 305, 631 Bahcall, N., Fan, X., Renyue, C. 1997, ApJ, 485, L53 Balogh, M. et al. 1998, ApJ, 504, L75 Benitez, N. et al. 
1998, astro-ph/9812218 Bower, R., Kodama, T., Terlevich, A. 1998, MNRAS, 299, 1193 Campos, A. et al., 1999, ApJ, 511, L1 De Propris, R. et al. 1999, AJ, 118, 719 Deltorn, J. M. et al. 1997, ApJ, 483, L21 Dickinson, M. 1997, in HST and the High Redshift Universe, eds. N. Tanvir, A. Aragon-Salamanca, and J.V. Wall, (Singapore: World Scientific), p. 207 Dickinson, M. et al. 1999, ApJ, submitted Donahue, M. & Voit, M. 1999, ApJ, 523, L137 Donahue, M. et al. 1998, ApJ, 502, 550 Donahue, M. et al. 1999, ApJ, in press, also astro-ph/9906295 Dressler, A. 1980, ApJ, 236, 351 Dressler, A. et al. 1997, ApJ, 490, 577 Dressler, A. et al. 1999, ApJS, 122, 51 Francis, P. et al. 1996, ApJ, 457, 490 Fuller, T., West, M., Bridges, T. 1999, ApJ, 519, 22 Gavazzi, G., Pierini, D., Boselli, A. 1996, A&A, 312, 397 Gioia, I. et al. 1990a, ApJS, 116, 247 Gioia, I. et al. 1990b, ApJ, 365, 35 Gioia, I. et al. 1999, AJ, 117, 2608 Hall, P., Green, R. 1998, ApJ, 507, 558 Henry, J. et al. 1992, ApJ, 386, 408 Hoekstra, H., Franx, M., Kuijken, K. 1999, ApJ, in press Holden, B. et al. 1997, AJ, 114, 1701 Holden, B. et al. 1998, AAS, 193, 3817 Jannuzi, B., Dey, A. 1999, AAS, 194, 8803 Kelson, D. et al. 1997, ApJ, 478, L13 Le Fevre, O. et al. 1996, ApJ, 471, L11 Lubin, L., Postman, M. 1996, AJ, 111, 1795 Lubin, L. et al. 1998, AJ, 116, 584 Lubin, L. et al. 1999, AJ, submitted Malkan, M., Teplitz, H., Mclean, I. 1996, ApJ, 468, L9 Mohr, J. et al. 1999, astro-ph/9905256 Oemler, A., Dressler, A., Butcher, H. 1997, ApJ, 474, 561 Pascarelle, S.M. et al. 1996, ApJ, 456, L21 Poggianti, B. et al. 1999, ApJ, 518, 576 Postman, M. et al. 1996, AJ, 111, 615 Postman, M. et al. 1998, ApJ, 506, 33 Postman, M., Lubin, L., Oke, J. B. 1998, AJ, 116, 560 Rosati, P. 1998, in Wide Field Surveys in Cosmology, eds. S. Colombi, Y. Mellier, B. Raban, p. 219 Rosati, P. et al. 1998, ApJ, 492, L21 Rosati, P. 1999, private comm. Rosati, P. et al. 1999, AJ, 118, 76 Scharf, C. et al. 1999, ApJ, in press Scodeggio, M. et al. 
1999, A&AS, 137, 83 Smail, I. et al. 1997, ApJS, 110, 213 Stanford, S. et al. 1997, AJ, 114, 2232 Stanford, S., Eisenhardt, P., Dickinson, M. 1998, ApJ, 492, 461 Steidel, C. et al. 1998, ApJ, 492, 428 van Dokkum, P. et al. 1998, ApJ, 504, L17 Vikhlinin, A. et al. 1998, ApJ, 502, 558
no-problem/9910/math9910115.html
ar5iv
text
# On the non-existence of tight contact structures ## 1. Introduction In 1971, Martinet showed that every 3-manifold admits a contact structure. But in the subsequent twenty years, through the work of Bennequin and Eliashberg , it became apparent that not all contact structures are created equal in dimension 3. Specifically, contact structures fall into one of two classes: tight or overtwisted. In this new light, what Martinet actually showed was that every 3-manifold admits an overtwisted contact structure. In Eliashberg classified overtwisted contact structures on closed 3-manifolds by proving the weak homotopy equivalence of the space of overtwisted contact structures up to isotopy and the space of 2-plane fields up to homotopy — hence overtwisted contact structures could now be understood via homotopy theory. On the other hand, it has become apparent that tight contact structures have surprising and deep connections to the topology of 3-manifolds, not limited to homotopy theory. For example, Rudolph and Lisca and Matić found connections with slice knots and slice genus, Kronheimer and Mrowka found connections with Seiberg-Witten theory, and Eliashberg and Thurston found connections with foliation theory. Thus, whether or not every 3-manifold admits a tight contact structure became a central question in 3-dimensional contact topology. The first candidate for a 3-manifold without a tight contact structure was the Poincaré homology sphere $`M=\mathrm{\Sigma }(2,3,5)`$ with reverse orientation. The difficulty of constructing a holomorphically fillable contact structure on $`M`$ was highlighted in Gompf’s paper . Subsequently Lisca , using techniques from Seiberg-Witten theory, proved that $`M`$ has no symplectically semi-fillable contact structure. In this paper we prove the following nonexistence result. ###### Theorem 1. There exist no positive tight contact structures on the Poincaré homology sphere $`\mathrm{\Sigma }(2,3,5)`$ with reverse orientation. 
This is the first example of a closed 3-manifold which does not carry a positive tight contact structure. ###### Corollary 2. Let $`M`$ be the Poincaré homology sphere with reverse orientation. Then the connect sum $`M\mathrm{\#}\overline{M}`$, where $`\overline{M}`$ is $`M`$ with the opposite orientation, does not carry any tight contact structure, positive or negative. This follows from Theorem 1, since a tight structure on a reducible manifold may be decomposed into tight structures on its summands. ## 2. Contact topology preliminaries We assume the reader is familiar with the basic ideas of contact topology in dimension 3 (see for example ). A thorough understanding of would be helpful but we include a brief summary of the ideas and terminology. The reader might also find useful for various parts of this section. In this paper the ambient manifold $`M`$ will be an oriented 3-manifold, and the contact structure $`\xi `$ will be positive, i.e., given by a 1-form $`\alpha `$ with $`\alpha \wedge d\alpha >0`$. Throughout this section we only consider $`(M,\xi )`$ tight. Also, when we refer to Legendrian curves we mean closed curves, in contrast to Legendrian arcs. ### 2.1. Convexity Recall an oriented embedded surface $`\mathrm{\Sigma }`$ in $`(M,\xi )`$ is called convex if there is a vector field $`v`$ transverse to $`\mathrm{\Sigma }`$ whose flow preserves $`\xi .`$ Perhaps the most important feature of convex surfaces is the dividing set. 
If $`\mathcal{F}`$ is a singular foliation on $`\mathrm{\Sigma }`$ then a disjoint union of (properly) embedded curves $`\mathrm{\Gamma }`$ is said to divide $`\mathcal{F}`$ if $`\mathrm{\Gamma }`$ divides $`\mathrm{\Sigma }`$ into two parts $`\mathrm{\Sigma }^\pm ,`$ $`\mathrm{\Gamma }`$ is transverse to $`\mathcal{F},`$ and there is a vector field $`X`$ directing $`\mathcal{F}`$ and a volume form $`\omega `$ on $`\mathrm{\Sigma }`$ such that $`\pm L_X\omega >0`$ on $`\mathrm{\Sigma }^\pm `$ and $`X`$ points transversely out of $`\mathrm{\Sigma }^+.`$ If $`\alpha `$ is a contact 1-form for $`\xi `$ then the zeros of $`\alpha (v)`$ provide dividing curves for the characteristic foliation $`\mathrm{\Sigma }_\xi .`$ It is sometimes useful to keep in mind that the dividing curves are where $`v`$ is tangent to $`\xi .`$ An isotopy $`F:\mathrm{\Sigma }\times [0,1]\to M`$ of $`\mathrm{\Sigma }`$ is called admissible if $`F(\mathrm{\Sigma }\times \{t\})`$ is transversal to $`v`$ for all $`t.`$ An important result concerning convex surfaces we will (implicitly) be using throughout this paper is: ###### Theorem 3 (Giroux ). Let $`\mathrm{\Gamma }`$ be the dividing set for $`\mathrm{\Sigma }_\xi `$ and $`\mathcal{F}`$ another singular foliation on $`\mathrm{\Sigma }`$ divided by $`\mathrm{\Gamma }.`$ Then there is an admissible isotopy $`F`$ of $`\mathrm{\Sigma }`$ such that $`F(\mathrm{\Sigma }\times \{0\})=\mathrm{\Sigma },`$ $`F(\mathrm{\Sigma }\times \{1\})_\xi =\mathcal{F}`$ and the isotopy is fixed on $`\mathrm{\Gamma }.`$ Roughly speaking, this says that the dividing set $`\mathrm{\Gamma }`$ dictates the geometry of $`\xi `$ near $`\mathrm{\Sigma }`$, not the precise characteristic foliation. We will let $`\mathrm{\#}\mathrm{\Gamma }`$ denote the number of connected components of $`\mathrm{\Gamma }`$. If there is any ambiguity, we will also write $`\mathrm{\Gamma }_\mathrm{\Sigma }`$ instead of $`\mathrm{\Gamma }`$ to denote the dividing set of $`\mathrm{\Sigma }`$. ### 2.2. 
Edge-rounding Let $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$ be compact convex surfaces with Legendrian boundary, which intersect transversely along a common boundary Legendrian curve $`L`$. The neighborhood of the common boundary Legendrian is locally isomorphic to the neighborhood $`\{x^2+y^2\le \epsilon \}`$ of $`M=𝐑^2\times (𝐑/𝐙)`$ with coordinates $`(x,y,z)`$ and contact 1-form $`\alpha =\mathrm{sin}(2\pi nz)dx+\mathrm{cos}(2\pi nz)dy`$, for some $`n\in 𝐙^+`$. Let $`A_i\subset \mathrm{\Sigma }_i`$, $`i=1,2`$, be an annular collar of the boundary component $`L`$. We may choose our local model so that $`A_1=\{x=0,0\le y\le \epsilon \}`$ and $`A_2=\{y=0,0\le x\le \epsilon \}`$ (or the same with $`A_1`$ and $`A_2`$ switched). Assuming the former, if we join $`\mathrm{\Sigma }_1`$ and $`\mathrm{\Sigma }_2`$ along $`x=y=0`$ and round the common edge, the resulting surface is convex, and the dividing curve $`z=\frac{k}{2n}`$ on $`\mathrm{\Sigma }_1`$ will connect to the dividing curve $`z=\frac{k}{2n}-\frac{1}{4n}`$ on $`\mathrm{\Sigma }_2`$, where $`k=0,\mathrm{\dots },2n-1`$. ### 2.3. Bypasses We now introduce the main idea on which this work is based. Let $`L`$ be a Legendrian arc. A half-disk $`D`$ is called a bypass for $`L`$ if $`\partial D`$ is Legendrian and consists of two arcs $`a_0=D\cap L`$ and $`a_1`$ such that, if we orient $`D`$, $`a_0\cap a_1`$ are both positive (negative) elliptic singular points in $`D_\xi ,`$ there is a negative (positive) elliptic point along $`a_0`$, and the singular points along $`a_1`$ are all positive (negative) and alternate between elliptic and hyperbolic. See Figure 1. We also allow for the degenerate case when $`L`$ is a closed curve and the endpoints of $`a_0`$ are the same (i.e., $`L=a_0`$). We refer to such a bypass as a bypass of degenerate type. 
Once an orientation on $`D`$ is fixed, the sign of the bypass is the sign of the elliptic point on the interior of $`a_0.`$ The reason for the name ‘bypass’ is that, instead of traveling along the Legendrian arc $`L`$, we may go around and drive through $`L^{}=(L\backslash a_0)\cup a_1`$ — this has the effect of increasing the twisting of $`\xi `$ along the Legendrian curve. (Here we are using the convention that left twists are negative.) One can then show: ###### Theorem 4 (Honda ). Assume $`M`$ has a convex boundary $`\mathrm{\Sigma }`$ and there exists a bypass $`D`$ along the Legendrian curve $`L\subset \mathrm{\Sigma }.`$ Then we can find a neighborhood $`N`$ of $`\mathrm{\Sigma }\cup D`$ with $`\partial N=\mathrm{\Sigma }\cup \mathrm{\Sigma }^{}`$, and $`\mathrm{\Gamma }_\mathrm{\Sigma }^{}`$ is related to $`\mathrm{\Gamma }_\mathrm{\Sigma }`$ as shown in Figure 2. Consider a convex torus $`T`$. If $`\xi `$ is tight, then one can show that no dividing curve for $`T`$ bounds a disk. Thus $`\mathrm{\Gamma }_T`$ consists of $`2n`$ parallel dividing curves with $`n\in \text{Z}^+`$. Assume for now that the dividing curves are all horizontal. Using Theorem 3 we can isotop $`T`$ so that there is one closed curve of singularities in $`T_\xi `$ in each region of $`T\backslash \mathrm{\Gamma }_T`$ — these are called Legendrian divides and are parallel to the dividing curves. We can further assume all the other leaves of $`T_\xi `$ form a 1-parameter family of parallel closed curves transverse to the Legendrian divides. These Legendrian ruling curves can have any slope not equal to the slope of the dividing curves. A convex torus $`T`$ is said to be in standard form if $`T_\xi `$ has this nongeneric form, consisting of Legendrian divides and Legendrian ruling curves. Suppose $`n>1`$ and we find a bypass for one of the ruling curves for $`T`$, then we may isotop $`T`$ across the bypass as in Theorem 4 and reduce the number $`n`$ by one. If $`n=1`$ we have a configuration change as follows: ###### Theorem 5 (Honda ). 
Let $`T`$ be a convex torus with $`\mathrm{\#}\mathrm{\Gamma }_T=2`$ and in some basis for $`T`$ the slope of the dividing curves is 0. If we find a bypass for a ruling curve of slope between $`-\frac{1}{m}`$ and $`-\frac{1}{m+1}`$, $`m\in \text{Z},`$ then after pushing $`T`$ across the bypass the new torus has two dividing curves of slope $`-\frac{1}{m+1}.`$ ### 2.4. Legendrian curves and the twisting number Let $`\gamma `$ be a Legendrian curve in $`M.`$ We can always find a neighborhood $`N`$ of $`\gamma `$ whose boundary is a convex torus with two dividing curves. The linking number of a dividing curve on $`\partial N`$ with $`\gamma `$ is the Thurston-Bennequin invariant of $`\gamma .`$ We call the slope of these dividing curves the boundary slope of $`N.`$ If $`\gamma `$ is not null-homologous, then the framing induced on $`\gamma `$ relative to some pre-assigned trivialization of the normal bundle of $`\gamma `$ will be called the twisting number of $`\gamma .`$ In contrast to transverse curves, neighborhoods of Legendrian curves have a quantifiable thickness — no matter how small a (nice) neighborhood of a Legendrian curve one takes, the boundary slope of the neighborhood is fixed. If $`N`$ is the neighborhood of a Legendrian curve with twisting number $`m`$ (relative to a framing that is already fixed), then the slopes of the dividing curves on $`\partial N`$ are $`\frac{1}{m}`$. On the other hand, any tight contact structure on the solid torus $`S^1\times D^2`$ with boundary slope $`\frac{1}{m}`$ is contact isotopic to a neighborhood of a Legendrian curve with twisting number $`m,`$ see . It is easy to decrease the twisting number inside $`N`$ by adding ‘zigzags’ (see below for an explanation of terminology). Hence, increasing the twisting number is one way of thickening $`N`$. We now show how to use bypasses to increase the twisting number of $`\gamma .`$ ###### Lemma 6 (Twist Number Lemma: Honda ). Let $`\gamma `$ be a Legendrian curve in $`M`$ with a fixed framing. 
Let $`N`$ be a standard neighborhood of $`\gamma `$ and $`n`$ the twisting number of $`\gamma .`$ If there exists a bypass attached to a Legendrian ruling curve of $`\partial N`$ of slope $`r`$ and $`\frac{1}{r}\ge n+1,`$ then there exists a Legendrian curve with larger twisting number isotopic to $`\gamma .`$ This lemma is a very useful formulation of the observation that if one has a bypass for a Legendrian knot then one can increase its twisting number. ### 2.5. Finding bypasses Now that we see bypasses are useful let us consider how to find them. Let $`\mathrm{\Sigma }`$ be a convex surface with Legendrian boundary $`L.`$ If the Thurston-Bennequin invariant of $`L`$ (or let us say twisting number with respect to $`\mathrm{\Sigma }`$) is negative then we can arrange that all the singularities of $`\mathrm{\Sigma }_\xi `$ along $`L`$ are (half)-elliptic. Moreover we can assume the singular foliation $`\mathrm{\Sigma }_\xi `$ is Morse-Smale. If $`n`$ is the twisting number of $`L`$, then $`\mathrm{\Gamma }_\mathrm{\Sigma }`$ intersects $`L`$ at $`-2n`$ points. In this situation we can use the flow of $`X`$ (the vector field directing $`\mathrm{\Sigma }_\xi `$ in the definition of the dividing set) to flow any dividing curve to a Legendrian arc on $`\mathrm{\Sigma }`$, all of whose singularities are of the same sign. Now, starting with a dividing curve $`c`$ with endpoints along $`L`$ isotopic to the boundary of $`\mathrm{\Sigma }`$, we push $`c`$ ‘back into $`\mathrm{\Sigma }`$ along $`X`$’ and eventually arrive at a Legendrian arc which cuts a bypass for $`L`$ off of $`\mathrm{\Sigma }.`$ Thus to find bypasses we need only look for these $`\partial `$-compressible dividing curves. ### 2.6. Layering neighborhoods of Legendrian knots We now explore the neighborhood of a Legendrian knot in more detail. 
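Before doing so, the inequality in the Twist Number Lemma can be checked mechanically. The following sketch is not part of the argument; it simply encodes the condition (read here as $`\frac{1}{r}\ge n+1`$, a reconstruction, since the inequality symbol appears to have been lost in this copy) and tests it on the values used later in the paper.

```python
from fractions import Fraction

def can_increase(n: int, r: Fraction) -> bool:
    """Twist Number Lemma test, reading its condition as 1/r >= n + 1.

    n: twisting number of the Legendrian curve gamma
    r: slope of the ruling curve along which the bypass is attached
    """
    return Fraction(1, 1) / r >= n + 1

# A curve with twisting -2 and a bypass along a ruling of slope -3 can be
# improved (1/(-3) = -1/3 >= -1), but once the twisting reaches -1 the
# same bypass no longer helps (-1/3 < 0).
```

This matches the way the lemma is applied in Section 3, where rulings of slope $`-3`$ and $`-5`$ are used to raise twisting numbers up to $`-1`$ but no further.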
To this end let $`N`$ be a standard neighborhood of a Legendrian knot $`\gamma `$ with a fixed framing so that $`N=S^1\times D^2`$ and $`\gamma `$ has twisting number $`n\le -1`$ in this framing. Recall this means that $`\partial N`$ is convex with two dividing curves (and two Legendrian divides) of slope $`\frac{1}{n}.`$ Inside of $`N`$ we can isotop $`\gamma `$ (but not Legendrian isotop) to $`\gamma ^{}`$ so as to decrease the twisting number by one. There are actually two different ways to do this. If $`\gamma `$ were a knot in $`\text{R}^3`$ with the standard tight contact structure then this process would correspond to stabilizing the knot (reducing the Thurston-Bennequin invariant) by adding zigzags, and one can do this while increasing or decreasing the rotation number by one. To see how to detect this difference in $`N`$ we do the following: first, fix an orientation on $`N`$ and orient $`\gamma `$ so that $`\gamma `$ and $`\{pt\}\times D^2`$ intersect positively. Let $`N^{}`$ be a standard neighborhood of $`\gamma ^{}`$. Then consider the layer $`U=T^2\times [0,1]=N\backslash N^{}`$, where we set $`T^2\times \{0\}=\partial N^{}`$ and $`T^2\times \{1\}=\partial N.`$ We assume the Legendrian ruling curves are vertical so that $`A=S^1\times [0,1]`$ is an annulus with Legendrian boundary. Note that $`S^1\times \{0\}`$ intersects the dividing curves on $`\partial N^{},`$ $`-2n+2`$ times and $`S^1\times \{1\}`$ intersects the dividing curves on $`\partial N,`$ $`-2n`$ times. So there will be a bypass (in fact just one bypass) on $`A`$ for $`S^1\times \{0\}.`$ If $`A`$ is oriented so that the orientation on $`S^1\times \{0\}`$ agrees with the one chosen above on $`\gamma `$, then the sign of the bypass is what distinguishes the two possible $`\gamma ^{}`$’s. 
If $`c`$ is a curve on $`T^2\times \{0\}`$ whose slope is not $`\frac{1}{n}`$ or $`\frac{1}{n+1}`$ then we can make the Legendrian ruling curves on $`U`$ parallel to $`c.`$ Later it will be useful to know what the dividing curves on $`c\times [0,1]`$ will look like. The relative Euler class in the next paragraph can be useful for this purpose. ### 2.7. Euler class Let $`\xi `$ be any tight structure on a toric annulus $`U=T^2\times [0,1]`$ (not necessarily the same one as in the above paragraphs). Assume the boundary is convex and in standard form. Let $`v_b`$ be a section of $`\xi |_{\partial U}`$ which is transverse to and twists (with $`\xi `$) along the Legendrian ruling curves. We also take $`v_b`$ to be tangent to the Legendrian divides. We may now form the relative Euler class $`e(\xi ,v_b)`$ in $`H^2(U,\partial U;\text{Z}).`$ First note that $`e(\xi ,v_b)`$ is unchanged if we perform a $`C^0`$-small isotopy of $`\partial U`$ so as to alter the slopes of the ruling curve. Now given an oriented curve $`c`$ on $`T^2\times \{0\}`$ we can assume the annulus $`A=c\times [0,1]`$ has Legendrian boundary and is also convex. We orient $`A`$ so that it induces the correct orientation on $`c.`$ Now we have (1) $$e(\xi ,v_b)(A)=\chi (A_+)-\chi (A_{-}),$$ where $`A_\pm `$ are the positive and negative regions into which the dividing curves cut $`A.`$ This formula follows from Proposition 6.6 in once one observes that $`v_b`$ may be homotoped in $`\xi |_{\partial U}`$ so as to be tangent to and define the correct orientation on $`\partial A.`$ ### 2.8. Twisting We end this section by making precise the notion of twisting along $`T^2\times [0,1]`$. Let $`\xi _1`$ be the kernel of $`\alpha _1=\mathrm{sin}zdx+\mathrm{cos}zdy`$ on $`T^3=T^2\times S^1=(\text{R}^2/\text{Z}^2)\times S^1`$ with coordinates $`((x,y),z)`$. This contact structure is tight. The characteristic foliation on $`T^2\times \{p\}`$ is by lines of a fixed slope. The slope “decreases” as $`p`$ moves around $`S^1`$ in the positive direction. 
Let $`[a,b]`$ be an interval in $`[0,\pi ]`$ so that at one end $`T_a=T^2\times \{a\}`$ has slope $`\frac{1}{n}`$ and at the other end $`T_b=T^2\times \{b\}`$ has slope $`\frac{1}{n+1}.`$ We can $`C^{\mathrm{\infty }}`$-perturb $`T_a`$ and $`T_b`$ so that they are convex and each has two dividing curves of slope $`\frac{1}{n}`$ and $`\frac{1}{n+1},`$ respectively. Let $`U=T^2\times [0,1]`$ be the layer between $`N^{}`$ and $`N`$ as in 2.6. It is possible to show that $`T^2\times [a,b]`$ is contactomorphic to $`U`$ with the contact structure constructed above (see ). The slopes of the characteristic foliations on $`T^2\times \{pt\}`$ vary from $`\frac{1}{n}`$ to $`\frac{1}{n+1}`$ in $`T^2\times [a,b]`$ so they do likewise in $`U.`$ From this one can conclude that if $`N`$ is a standard neighborhood of a Legendrian knot $`\gamma `$ with twist number $`n`$ (with respect to some fixed framing), then for any slope in $`[\frac{1}{n},0)`$ one can find a torus around $`\gamma `$ whose characteristic foliation has this slope (note if $`n>0`$ then $`[\frac{1}{n},0)`$ means $`[-\mathrm{\infty },0)\cup [\frac{1}{n},\mathrm{\infty }]`$ where $`\mathrm{\infty }`$ is identified with $`-\mathrm{\infty }`$). More generally, if $`T^2\times [0,1]`$ has boundary slopes $`s_i`$ for $`T^2\times \{i\}`$, then we may find convex tori parallel to $`T^2\times \{i\}`$ with any slope $`s`$ in $`[s_1,s_0]`$ (if $`s_0<s_1`$ then this means $`[s_1,\mathrm{\infty }]\cup [-\mathrm{\infty },s_0]`$). This follows from the classification of tight contact structures on $`T^2\times I`$ — in the proof we layer $`T^2\times I`$ into ‘thin’ toric annuli, each of which is isomorphic to $`T^2\times [a,b]`$ above (see ). ## 3. Thickening the singular fibers Consider the Seifert fibered space $`M`$ with 3 singular fibers over $`S^2`$. $`M`$ is described by the Seifert invariants $`(\frac{\beta _1}{\alpha _1},\frac{\beta _2}{\alpha _2},\frac{\beta _3}{\alpha _3})`$. 
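The wrap-around convention for slope intervals in 2.8 above is easy to misread, so here is a minimal sketch of that rule, with $`\mathrm{\infty }`$ and $`-\mathrm{\infty }`$ identified as in the text. It is only a bookkeeping aid, not part of the proof.

```python
import math

def realizable(s: float, s1: float, s0: float) -> bool:
    """Whether slope s occurs among convex tori parallel to the boundary
    of T^2 x [0,1] with boundary slopes s1 and s0, per the rule above:
    every slope in [s1, s0] occurs, and when s0 < s1 the interval wraps
    around through infinity (i.e. [s1, inf] union [-inf, s0])."""
    if s1 <= s0:
        return s1 <= s <= s0
    return s >= s1 or s <= s0

# A layer with slopes -1/2 and 0 realizes -1/3 but contains no vertical
# torus; a layer with slopes 1 and 1/3 wraps around and does contain one.
```

This wrap-around case is exactly the "large twisting" situation exploited repeatedly in Sections 3 and 4 to produce convex tori of infinite slope.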
Let $`V_i`$, $`i=1,2,3`$, be the neighborhoods of the singular fibers $`F_i`$, isomorphic to $`S^1\times D^2`$, and identify $`M\backslash (\cup _iV_i)`$ with $`S^1\times \mathrm{\Sigma }_0,`$ where $`\mathrm{\Sigma }_0`$ is a sphere with three punctures. Then $`A_i:\partial V_i\to \partial (M\backslash V_i)`$ is given by $`A_i=\left(\begin{array}{cc}\alpha _i& \gamma _i\\ \beta _i& \delta _i\end{array}\right)\in SL(2,\text{Z})`$. We identify $`\partial V_i=\text{R}^2/\text{Z}^2`$, by choosing $`(1,0)^T`$ as the meridional direction, and $`(0,1)^T`$ as the longitudinal direction with respect to the product structure on $`V_i`$. We identify $`\partial (M\backslash V_i)=\text{R}^2/\text{Z}^2,`$ by letting $`(0,1)^T`$ be the direction of an $`S^1`$-fiber, and $`(1,0)^T`$ be the direction given by $`\partial (M\backslash V_i)\cap (\{pt\}\times \mathrm{\Sigma }_0)`$. Let $`M`$ be the Poincaré homology sphere $`\mathrm{\Sigma }(2,3,5)`$ with reverse orientation. It is a Seifert fibered space over $`S^2`$ with Seifert invariants $`(\frac{1}{2},-\frac{1}{3},-\frac{1}{5})`$. In the case $`V=V_1`$, we choose $`A_1=\left(\begin{array}{cc}2& -1\\ 1& 0\end{array}\right)`$. Notice we have some freedom to choose $`\gamma _1`$, $`\delta _1`$, since we could have postmultiplied by $`\left(\begin{array}{cc}1& m\\ 0& 1\end{array}\right)`$ if we changed our framing for $`V_1`$. Similarly, let $`A_2=\left(\begin{array}{cc}3& 1\\ -1& 0\end{array}\right)`$, and $`A_3=\left(\begin{array}{cc}5& 1\\ -1& 0\end{array}\right)`$. Now let $`\xi `$ be a positive contact structure on $`M`$. Assume $`\xi `$ is tight. The goal of the paper is to obtain a contradiction by finding overtwisted disks inside $`M`$. In the first stage of the proof, we will try to thicken neighborhoods $`V_i`$ of the singular fibers $`F_i`$ (we may assume the singular fibers are Legendrian after isotopy). This is done by maximizing the twisting number $`m_i`$ among Legendrian curves isotopic to $`F_i`$, subject to the condition that all three Legendrian curves be simultaneously isotopic to $`(F_1,F_2,F_3)`$. 
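The linear algebra behind these identifications can be sanity-checked mechanically. In the sketch below the minus signs inside the $`A_i`$ are an assumption (they appear to have been lost in this copy of the text), chosen precisely so that each matrix has determinant 1, as membership in $`SL(2,\text{Z})`$ requires; the check then recovers the stated Seifert data.

```python
from fractions import Fraction

# Candidate gluing matrices; the minus signs are an assumption,
# chosen so that det A_i = 1 (required for A_i in SL(2,Z)).
A1 = ((2, -1), (1, 0))
A2 = ((3, 1), (-1, 0))
A3 = ((5, 1), (-1, 0))

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

# Seifert invariants beta_i/alpha_i read off the images of the meridians
# (1,0)^T, which hit the first columns (alpha_i, beta_i)^T:
meridian_images = [apply(A, (1, 0)) for A in (A1, A2, A3)]
invariants = [Fraction(b, a) for (a, b) in meridian_images]
# up to sign convention, the Euler number is the sum of the invariants:
euler_number = sum(invariants)   # 1/2 - 1/3 - 1/5 = -1/30
```

The value $`\pm \frac{1}{30}`$ is the expected Euler number for the Poincaré sphere with one of its two orientations, which is some evidence that the restored signs are consistent.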
Let $`V_i`$ be a standard tubular neighborhood of $`F_i`$ with minimal convex boundary and boundary slope $`\frac{1}{m_i}`$ (note this is negative). It is useful to note how the dividing curves map under $`A_i`$. $`A_1:(m_1,1)^T\mapsto (2m_1-1,m_1)^T`$, $`A_2:(m_2,1)^T\mapsto (3m_2+1,-m_2)^T`$, and $`A_3:(m_3,1)^T\mapsto (5m_3+1,-m_3)^T`$. Therefore, the boundary slopes are $`\frac{m_1}{2m_1-1}`$, $`\frac{-m_2}{3m_2+1}`$ and $`\frac{-m_3}{5m_3+1}`$, when viewed on $`\partial (M\backslash V_i)`$. Warning: We will often call the same surface by different names, such as $`\partial V_i`$ and $`\partial (M\backslash V_i)`$. Although the surfaces themselves are the same, their identifications with $`\text{R}^2/\text{Z}^2`$ are not. Therefore, when we refer to slopes on $`\partial V_i`$, we implicitly invoke the identification of $`\partial V_i`$ with $`\text{R}^2/\text{Z}^2`$ given in the first paragraph of this section. ###### Lemma 7. We can increase $`m_i`$ so that $`m_1=0`$, $`m_2=m_3=-1`$, and thicken $`V_i`$ to $`V_i^{}`$ so that the slopes of $`\partial (M\backslash V_i^{})`$ are all infinite. ###### Proof. We may modify the Legendrian rulings on both $`\partial (M\backslash V_2)`$ and $`\partial (M\backslash V_3)`$ so that they are vertical. Take a vertical annulus $`S^1\times I`$ spanning from a vertical Legendrian ruling curve on $`\partial (M\backslash V_2)`$ to a vertical Legendrian ruling curve on $`\partial (M\backslash V_3)`$. Here ‘vertical’ means ‘in the direction of the $`S^1`$-fibers’. Note $`S^1\times I`$ intersects the dividing curves on $`\partial V_2`$ and $`\partial V_3,`$ $`-(3m_2+1)`$ and $`-(5m_3+1)`$ times respectively. Assume $`m_2,m_3\le -1`$. If $`3m_2+1\ne 5m_3+1`$, then there exists a bypass (due to the imbalance), attached along a vertical Legendrian curve of $`\partial (M\backslash V_2)`$ or $`\partial (M\backslash V_3)`$. We transform $`(0,1)^T`$ via $`A_i^{-1}`$ to use the Twist Number Lemma. $`A_2^{-1}:(0,1)^T\mapsto (-1,3)^T`$, and $`A_3^{-1}:(0,1)^T\mapsto (-1,5)^T`$. The Legendrian rulings will have slope $`-3`$ and $`-5`$, and, therefore, we may increase the twisting number by 1 if the twisting number is $`<-1`$. 
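As an aside, the slope bookkeeping so far reduces to matrix-vector arithmetic. The sketch below reuses the sign conventions assumed for the $`A_i`$ above (itself a reconstruction) and evaluates the formulas at the values $`m_1=0`$, $`m_2=m_3=-1`$ reached in this lemma; it is a check of the arithmetic only, not of the contact geometry.

```python
from fractions import Fraction

A1 = ((2, -1), (1, 0))   # assumed signs, chosen so det = 1
A2 = ((3, 1), (-1, 0))
A3 = ((5, 1), (-1, 0))

def apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def inverse(A):
    (a, b), (c, d) = A
    return ((d, -b), (-c, a))   # valid because det A = 1

def slope(v):
    return Fraction(v[1], v[0])

m1, m2, m3 = 0, -1, -1
# dividing curves (m_i, 1) on the V_i side, viewed in the complement:
s1 = slope(apply(A1, (m1, 1)))   # 0
s2 = slope(apply(A2, (m2, 1)))   # -1/2
s3 = slope(apply(A3, (m3, 1)))   # -1/4
# ruling slopes fed into the Twist Number Lemma:
r2 = slope(apply(inverse(A2), (0, 1)))   # -3
r3 = slope(apply(inverse(A3), (0, 1)))   # -5
```

The values $`-\frac{1}{2}`$ and $`-\frac{1}{4}`$ reappear below as the boundary slopes of $`\partial (M\backslash V_2)`$ and $`\partial (M\backslash V_3)`$.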
Next, assume $`3m_2+1=5m_3+1`$ and there are no bypasses on the vertical annulus. Then we may take $`S^1\times I`$ to be standard, with vertical rulings, and parallel Legendrian divides spanning from $`\partial V_2`$ to $`\partial V_3`$. Cutting along $`S^1\times I`$ and rounding the corners, we obtain the torus boundary of $`M\backslash (V_2\cup V_3\cup (S^1\times I))`$; if we identify this torus with $`\text{R}^2/\text{Z}^2`$ in the same way as $`\partial (M\backslash V_1)`$, then the boundary slope is $`\frac{m_2+m_3+1}{3m_2+1}=\frac{\frac{8}{5}m_2+1}{3m_2+1}`$. When $`m_2=-5`$, then the slope is $`\frac{1}{2}`$, and any Legendrian divide gives rise to an overtwisted disk — this is because $`A_1:(1,0)^T\mapsto (2,1)^T`$, which corresponds to a slope of $`\frac{1}{2}`$ on $`\partial (M\backslash V_1)`$. When $`m_2<-5`$, then the slope is $`>\frac{1}{2}`$, which implies that on the $`S^1\times D^2=M\backslash (V_2\cup V_3\cup (S^1\times I))`$, the twisting in the radial direction is almost $`\pi `$. In particular, there exists a convex torus with slope $`\mathrm{\infty }`$ inside $`S^1\times D^2`$, and hence a vertical Legendrian curve with zero twisting, obtained as a Legendrian divide of the convex torus. Any vertical annulus taken from this vertical Legendrian to $`\partial V_2`$ (or $`\partial V_3`$) will give bypasses. We can now assume $`m_2=m_3=-1`$. The boundary slopes of $`\partial (M\backslash V_2)`$ and $`\partial (M\backslash V_3)`$ are $`-\frac{1}{2}`$ and $`-\frac{1}{4}`$, respectively. Again look at the vertical annulus $`S^1\times I`$ spanning from $`\partial V_2`$ to $`\partial V_3`$, with Legendrian boundary. There are three possibilities: (1) There are no bypasses along $`S^1\times \{0\}`$ (on the $`\partial V_2`$ side) — in this case, we can cut along the annulus as before, and get a boundary component with boundary slope $`\frac{1}{2}`$, contradicting tightness. (2) There is one bypass along $`S^1\times \{0\}`$ — cutting along the annulus again, we find that the boundary slope is $`1`$. 
This means that the twisting of $`T^2\times I=M\backslash (V_1\cup V_2\cup V_3\cup (S^1\times I))`$ is large, and that we have a convex torus in standard form with Legendrian divides of infinite slope as before. (3) There are two bypasses. In any case, there exist vertical Legendrian curves with twisting number $`0`$ with respect to the framing from the fibers. Since $`A_1^{-1}:(0,1)^T\mapsto (1,2)^T`$, we have bypasses for $`V_1`$ as well, and we can increase to $`m_1=0`$ using the Twist Number Lemma. Next, taking a vertical annulus from $`\partial V_2`$ to one such vertical Legendrian curve, we obtain two bypasses for $`V_2`$ and, similarly, we obtain four bypasses for $`V_3`$. By attaching these bypasses, we obtain a thickening $`V_i^{}`$ of $`V_i`$, $`i=1,2,3`$, so that $`\partial (M\backslash V_i^{})`$ has two vertical dividing curves. ∎ ## 4. The Fibration In this section we use the structure of $`M\backslash V_3`$ as a punctured torus bundle over $`S^1`$ to complete the proof of our main theorem. The strategy is as follows: First we use the bundle structure to increase $`m_3`$ to 0 by finding a bypass along the boundary of the punctured torus fiber. We then show how to increase $`m_3`$ to 1 by making the boundary of the punctured torus fiber Legendrian with twist number 0. When $`m_3=1`$ the corresponding neighborhood of $`F_3`$ almost contains an overtwisted disk. By increasing it a little further it does contain one. This will complete the proof of our main theorem. ### 4.1. Bundle structure We now describe the fibration as a punctured torus bundle over $`S^1`$. In the previous section, $`V_i`$ was the neighborhood of a Legendrian curve $`F_i`$ with twisting number $`m_i`$, and $`V_i^{}`$ its thickening. Write $`M\backslash (\cup _iV_i^{})=S^1\times \mathrm{\Sigma }`$, where $`\mathrm{\Sigma }`$ is a 3-holed sphere. 
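Returning briefly to the counts in the proof of Lemma 7 above (two bypasses for $`V_2`$, four for $`V_3`$): these come from intersection numbers on the torus. A minimal sketch, under the same assumed sign conventions as before, is the following; the annulus runs to a Legendrian divide, which contributes no endpoints, so every dividing arc is boundary-parallel and each bypass uses two endpoints.

```python
def geom_int(u, v):
    # geometric intersection number of two (coprime) classes on T^2
    return abs(u[0] * v[1] - u[1] * v[0])

vertical = (0, 1)
div_V2 = (-2, 1)   # a dividing curve of slope -1/2 (assumed sign)
div_V3 = (-4, 1)   # a dividing curve of slope -1/4 (assumed sign)

# each torus carries two dividing curves, so the endpoint counts on the
# two boundary curves of the vertical annulus are:
endpoints_V2 = 2 * geom_int(vertical, div_V2)   # 4
endpoints_V3 = 2 * geom_int(vertical, div_V3)   # 8

bypasses_V2 = endpoints_V2 // 2   # 2
bypasses_V3 = endpoints_V3 // 2   # 4
```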
Let $`\gamma `$ be an embedded arc on $`\mathrm{\Sigma }`$ connecting $`\partial V_2^{}`$ to $`\partial V_1^{}`$, $`A=S^1\times \gamma `$ an annulus connecting $`\partial V_2^{}`$ to $`\partial V_1^{}`$, and $`V`$ a neighborhood of $`A`$ in $`M\backslash (V_1^{}\cup V_2^{})`$. Define $`M^{}=V_1^{}\cup V_2^{}\cup V`$, which is $`M\backslash V_3`$ with a $`T^2\times I`$ layer removed from the boundary. Let $`D_1`$ and $`D_2`$ be meridional disks for $`V_1^{}`$ and $`V_2^{},`$ respectively. The slope of $`\partial D_1`$ on $`\partial (M\backslash V_1^{})`$ is $`\frac{1}{2}`$ and the slope of $`\partial D_2`$ on $`\partial (M\backslash V_2^{})`$ is $`-\frac{1}{3}.`$ Thus we can take two copies of $`D_2`$ and three copies of $`D_1`$ and glue them together with six copies of $`V\cap \mathrm{\Sigma }`$ to obtain a punctured torus $`T`$ in $`M^{}`$ with boundary on $`\partial M^{}.`$ (See Figure 7.) Parallel copies of $`T`$ will fiber $`M^{}`$ (and $`M\backslash V_3`$) as a punctured torus bundle over $`S^1.`$ The slope of $`\partial T`$ on $`\partial (M\backslash V_3)`$ is $`-\frac{1}{6}.`$ Thus the slope on $`\partial V_3`$ is 1, so $`T`$ provides a Seifert surface for $`F_3.`$ We can show (for example by using Kirby calculus) that $`M,`$ represented in Figure 3(a), is diffeomorphic to Figure 3(b), and that $`M\backslash V_3`$ is diffeomorphic to the complement of the right-handed trefoil knot in $`S^3.`$ Thus we may identify the monodromy of the punctured torus bundle as $$\left(\begin{array}{cc}1& -1\\ 1& 0\end{array}\right).$$ ### 4.2. Dividing curves on the punctured torus fiber Recall we have arranged that $`m_3=-1`$, so the dividing curves on $`\partial V_3`$ have slope $`-1.`$ Thus, if we make $`\partial T`$ a Legendrian curve on $`\partial V_3`$, then $`tb(\partial T)=-2.`$ Here is a preliminary lemma: ###### Lemma 8. We can always find a bypass along $`T`$ (after possibly isotoping $`T`$). ###### Proof. We begin by showing that either we can find a bypass for $`T`$ or arrange that the dividing curves on $`T`$ consist of exactly two parallel arcs. If there are no bypasses, then the dividing set must consist of two parallel arcs and an even number, say $`2m,`$ of closed parallel curves. 
Consider $`M^{}\backslash T=T\times [0,1]`$ with $`T`$ identified with $`T\times \{0\}.`$ Here we are viewing $`M^{}`$ as $`M\backslash V_3`$. Let $`\alpha `$ be a closed Legendrian curve on $`T`$ parallel to the dividing curves. Then consider $`𝒜=\alpha \times [0,1]`$ which we may assume is convex with Legendrian boundary. The dividing curves do not intersect $`\alpha \times \{0\}`$, but they intersect $`\alpha \times \{1\}`$ at least $`2m`$ times. Thus we may find a bypass $`D`$ for $`T\times \{1\}.`$ The inner boundary $`T^{}`$ of a neighborhood of $`(T\times \{1\})\cup D`$ is a convex torus with either a bypass for $`T^{}`$ or two fewer dividing curves than $`T.`$ Repeating this argument $`m`$ times will result in a bypass for $`T`$ or a convex torus whose dividing set consists of exactly two parallel arcs. Now suppose $`\mathrm{\Gamma }_T`$ consists of two parallel arcs, since otherwise we are done. Then $`\mathrm{\Gamma }_{T\times \{0\}}`$ consists of two arcs and $`\mathrm{\Gamma }_{T\times \{1\}}`$ consists of the image of these two arcs under the monodromy map for the bundle. Let $`\alpha `$ be a closed curve on $`T\times \{0\}`$ parallel to the dividing curves, and $`𝒜=\alpha \times [0,1].`$ We may arrange for $`\alpha \times \{0,1\}`$ to be Legendrian and for $`𝒜`$ to be convex. Now the dividing curves on $`𝒜`$ do not intersect $`\alpha \times \{0\}`$ but intersect $`\alpha \times \{1\}`$ at least two times. Thus we have a bypass $`D`$ for the dividing curves on $`T\times \{1\}.`$ Assume first that the intersection number is exactly two. Then the annulus $`𝒜`$ may be split into two annuli, one of which, call it $`𝒜^{},`$ intersects $`T\times \{1\}`$ and contains $`D`$, which is of degenerate type. The boundary of a small neighborhood of $`𝒜^{}`$ in $`T\times (0,1)`$ is an annulus $`𝒜^{\prime \prime }`$ and has dividing curves as shown in Figure 4. 
The boundary of $`𝒜^{\prime \prime }`$ sits on $`T\times \{1\}.`$ If we cut out the annulus in $`T\times \{1\}`$ that cobounds a solid torus with $`𝒜^{\prime \prime },`$ glue in $`𝒜^{\prime \prime }`$, and smooth corners, then the dividing curves on the new $`T\times \{1\}`$ are shown in Figure 4. In particular we have a bypass for $`T.`$ If the intersection number is greater than two, then simply attach the bypass $`D`$ onto $`T\times \{1\}`$ — either the new $`T\times \{1\}`$ has a bypass, in which case we are done, or we can take a new $`𝒜`$ which has fewer (but $`\ge 2`$) intersections. We continue until the intersection number becomes two. ∎ ### 4.3. Twist number increase We are now ready to finish the first part of the program outlined in the beginning of this section. ###### Lemma 9. We may increase $`m_3`$ to $`0`$ while keeping $`m_1=0`$ and $`m_2=-1.`$ Moreover, $`V_2`$ may be further thickened to $`V_2^{\prime \prime }`$ so that $`\partial (M\backslash V_2^{\prime \prime })`$ has dividing curves of slope $`-1.`$ ###### Proof. Since we have a bypass on $`T`$ we may apply the Twist Number Lemma to increase the twist number of $`F_3`$ to $`m_3=0.`$ So we have two vertical dividing curves on $`\partial V_3`$ and on $`\partial (M\backslash V_3)`$ they have slope 0. Now repeating the argument in Lemma 7 we see that we can arrange $`m_1=0`$ and $`m_2=-1.`$ Note the dividing curves on $`\partial (M\backslash V_2)`$ and $`\partial (M\backslash V_1)`$ have slope $`-\frac{1}{2}`$ and 0, respectively. Thus taking a vertical annulus between $`\partial V_1`$ and $`\partial V_2`$, we find a vertical bypass along $`\partial V_2.`$ With this bypass we can thicken $`V_2`$ to $`V_2^{\prime \prime }`$ whose dividing curves have slope $`-1.`$ Note that now $`V_1`$, $`V_2`$, $`V_3`$ are neighborhoods of Legendrians with $`m_1=0`$, $`m_2=-1`$, $`m_3=0`$, and $`V_2^{\prime \prime }`$ has slope $`-1`$ on $`\partial (M\backslash V_2^{\prime \prime })`$. ∎ ### 4.4. 
Thinning before thickening Note if we had another bypass along $`T`$ we could increase $`m_3`$ to 1 and from here we could then find an overtwisted disk (see below). However, we know of no direct way to prove this bypass exists. Therefore we use the following strategy, which might be called ‘thinning before thickening’. Notice we have thickened $`V_3`$ so that $`m_3=0`$; we will now backtrack by having $`V_3`$ relinquish some of its thickness (we peel off a toric annulus from $`V_3`$ and reattach it to $`V_1`$ and $`V_2`$), and then thicken again to obtain a contradiction. Let $`D_i`$ be the meridional disk to $`V_i`$ for $`i=1,2.`$ If we arrange that the Legendrian rulings on $`\partial V_i`$ are horizontal and the $`D_i`$ are convex, then $`\mathrm{\#}\mathrm{\Gamma }_{D_1}=1`$ and $`\mathrm{\#}\mathrm{\Gamma }_{D_2}=2`$. The dividing sets divide $`D_1`$ into two regions, one positive and one negative, and $`D_2`$ into three regions — without loss of generality we can assume that two regions are positive and one is negative. Note that the signs of these regions depend on an orientation on the disks. Pick an orientation for the fibers of the Seifert fibration, and the disks will be oriented so that their intersection with a fiber is positive. Note we can write $`V_3=C\cup U`$, where $`C`$ is a solid torus with convex boundary and dividing curves of slope $`-1`$ (on $`\partial C`$), $`U=T^2\times [0,1]`$, $`T^2\times \{0\}=\partial C`$, and $`T^2\times \{1\}`$ is convex with vertical dividing curves. We do this by stabilizing (reducing the twist number) $`F_3`$ with $`m_3=0`$ in $`V_3.`$ In $`U`$ we can find a $`T^2`$, say $`T^2\times \{\frac{1}{2}\},`$ with dividing curves of slope $`-5`$ (which correspond to vertical dividing curves from the point of view of $`\partial (M\backslash V_3)`$). 
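That a torus of slope $`-5`$ inside $`V_3`$ looks vertical from the complement is a single matrix multiplication, using the same assumed $`A_3`$ as in the earlier sanity checks (again, only a check of the bookkeeping, not of the geometry):

```python
A3 = ((5, 1), (-1, 0))   # assumed signs, det = 1

def apply(A, v):
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

# a dividing curve of slope -5 on the V_3 side is the class (1, -5);
# its image should be vertical, i.e. proportional to (0, 1):
image = apply(A3, (1, -5))   # (0, -1)
```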
By performing the correct stabilization we can arrange that on a convex annulus in $`T^2\times [\frac{1}{2},1]`$ of slope $`-5,`$ with Legendrian boundary on $`T^2\times \{\frac{1}{2}\}`$ and $`T^2\times \{1\},`$ there is a negative bypass along the boundary component on $`T^2\times \{1\}.`$ We can use this bypass to thicken $`V_i`$, $`i=1,2`$, to $`V_i^{}`$ so that $`\partial (M\backslash V_i^{})`$ have vertical slopes. Using the relative Euler class we can easily see that each meridional disk $`D_i^{}`$ in $`V_i^{}`$ has an extra positive region. Specifically, if $`e`$ is the relative Euler class on $`V=V_1^{}\backslash V_1`$ (as defined in Section 2) and $`\mu `$ and $`\lambda `$ are the horizontal and vertical curves on $`V`$ then one may easily compute that $`e(\lambda \times [0,1])=1`$ and $`e((\mu -\lambda )\times [0,1])=0.`$ Thus $`e(c\times [0,1])=1`$ where $`c`$ is the boundary of the meridional disk in $`V_1.`$ From this and Equation 1, we can conclude that the dividing curves are as in Figure 5 (a), and hence a convex meridional disk in $`V_1^{}`$ will have one negative and two positive regions in the complement of the dividing curves. Similarly, a convex meridional disk in $`V_2^{}`$ will have one negative and three positive regions since a relative Euler class argument will show its intersection with $`V_2^{}\backslash V_2`$ is shown in Figure 5 (b). ### 4.5. Tight structures on $`S^1\times \mathrm{\Sigma }`$ We would now like to piece these meridional disks together to form a punctured torus fiber for $`M\backslash V_3^{}`$, but to do this we first need to understand the complement of the singular fibers. Recall all the boundary slopes of $`S^1\times \mathrm{\Sigma }=M\backslash (V_1^{}\cup V_2^{}\cup V_3^{})`$ are infinite. Take $`\mathrm{\Sigma }=\{0\}\times \mathrm{\Sigma }`$, which we assume is convex with Legendrian boundary. All the boundary components of $`\mathrm{\Sigma }`$ have exactly two half-elliptic points. ###### Lemma 10. 
Each dividing curve of $`\mathrm{\Sigma }`$ must connect one boundary component to another boundary component. ###### Proof. There are several possible configurations of the dividing curves. See Figure 6. We argue that if there is a $`\partial `$-compressible dividing arc (as in (B), (C), (D)) on $`\mathrm{\Sigma }`$, then $`M`$ is overtwisted. The $`\partial `$-compressible dividing arc implies the existence of a bypass (of degenerate type) along some $`\partial (M\backslash V_i^{})`$. Hence, there exists a layer $`L_1=T^2\times I\subset S^1\times \mathrm{\Sigma }`$ with one boundary component $`\partial (M\backslash V_i^{})`$ and boundary slopes $`0`$ and $`\infty `$. However, since there are vertical Legendrian curves with twisting number 0 outside of $`V_i^{}\cup L_1`$, we obtain another layer $`L_2=T^2\times I`$, this time with boundary slopes $`\infty `$ and $`0`$. Therefore, $`V_i^{}\cup L_1\cup L_2`$ has too much radial twisting, and is overtwisted. The only possible configuration without a $`\partial `$-compressible dividing arc is (A). ∎ If we take signs into consideration, there are two possible tight structures on $`S^1\times \mathrm{\Sigma }=M\backslash (V_1^{}\cup V_2^{}\cup V_3^{})`$, depending on whether, in Figure 6(A), the dividing curve from the top hits $`V_2^{}`$ or $`V_3^{}`$. ###### Lemma 11. For each of the two configurations of dividing curves on $`\mathrm{\Sigma }`$, the tight contact structure on $`S^1\times \mathrm{\Sigma }=M\backslash (V_1^{}\cup V_2^{}\cup V_3^{})`$ is unique. ###### Proof. We cut $`S^1\times \mathrm{\Sigma }`$ along $`\mathrm{\Sigma }`$, round the edges, and examine the dividing curves on the boundary of the resulting genus 2 handlebody. We then cut along the meridional disks of the handlebody, which we may assume have Legendrian boundary, and eventually obtain a 3-ball. Since each meridional disk of the handlebody intersects the dividing set only twice, the configuration of dividing curves on the meridional disks is unique. 
Therefore the initial configuration of dividing curves on $`\mathrm{\Sigma }`$ uniquely determines the tight structure on $`S^1\times \mathrm{\Sigma }`$. ∎ ### 4.6. The final stretch The lemma implies that the tight structure on $`S^1\times \mathrm{\Sigma }`$ is a translation-invariant tight structure on $`T^2\times I`$ with infinite boundary slopes, with the standard neighborhood of a vertical Legendrian curve with zero twisting removed. We view this $`T^2\times I`$ (minus $`S^1\times D^2`$) as the region between $`V_1^{}`$ and $`V_2^{}`$ (minus $`V_3^{}`$). We may think of the $`I`$-factor as being quite small, and then isotop one of the Legendrian divides on $`\partial V_1^{}`$ to one of the Legendrian divides on $`\partial V_2^{}`$ and finally identify small neighborhoods (in the tori) of these divides. We may then isotop $`\partial V_i^{}`$, $`i=1,2`$, away from these neighborhoods so that the meridional disk $`D_i^{}`$ in $`V_i^{}`$ has Legendrian boundary. Forming the fiber $`T`$ in the fibration of $`M\backslash V_3^{}`$ from three copies of $`D_1`$ and two copies of $`D_2`$, we can arrange that $`\partial T`$ is Legendrian and the dividing curves on $`T`$ are as in Figure 7. Thus there are six bypasses for $`T`$. We can use these to increase the twisting number of $`\partial T`$ to 0 (i.e., $`\partial T`$ will be a Legendrian divide on some thickening of $`V_3`$), which corresponds to an increase of $`m_3`$ to 1. Thus the slope of the dividing curves on $`\partial (M\backslash V_3)`$ is $`-\frac{1}{6}`$. Now repeat the argument in Lemma 7: take a vertical annulus from $`\partial (M\backslash V_2)`$ to $`\partial (M\backslash V_3)`$, and start with $`m_2`$ small. Since the denominator of $`-\frac{1}{6}`$ is never equal to $`3m_2+1`$, we eventually arrive at $`m_2=1`$. This implies the existence of a vertical bypass for $`\partial (M\backslash V_3)`$, with which we can increase the slope of the dividing curves on $`\partial (M\backslash V_3)`$ to $`-\frac{1}{5}`$. Thus the Legendrian divide on our thickened solid torus bounds a meridional disk in $`V_3`$, so we have found an overtwisted disk. 
This completes the proof of the main theorem. ∎
## 1 Introduction It is well known that supersymmetric (SUSY) models allow for new possibilities for CP violation. The soft SUSY breaking terms are in general complex. For instance, in the minimal supersymmetric standard model (MSSM), there are complex phases in the parameters $`A`$, $`B`$ (the coefficients of the SUSY-breaking trilinear and bilinear couplings, respectively), $`M`$ (the gaugino masses), and $`\mu `$ (the mass coefficient of the bilinear term involving the two Higgs doublets). However, only two of these phases are physical (they cannot be rotated away by field redefinitions). These phases give large one-loop contributions to the electric dipole moments (EDM) of the neutron and electron, which exceed the current limits $`d_n`$ $`<`$ $`6.3\times 10^{-26}e\mathrm{cm},`$ $`d_e`$ $`<`$ $`4.3\times 10^{-27}e\mathrm{cm}.`$ (1) Hence the SUSY phases, which are generally quite constrained by (1), have to be of order $`10^{-3}`$ for SUSY particle masses of order the weak scale . However, it was pointed out that there are internal cancellations among the various contributions to the EDM (including the chromoelectric and purely gluonic operator contributions), thereby allowing for large phases . We have shown that in effective supergravity (derived from string theory) such cancellations are accidental and occur only at a few points in the parameter space . Recently, it was argued that non-universal gaugino masses and their relative phases are crucial for having sufficient cancellations among the contributions to the EDMs . These cancellations have been studied in the framework of a D-brane model where $`SU(3)_C\times U(1)`$ arises from one set of five branes and $`SU(2)`$ from another set of five branes . This model leads to non-universal gaugino masses, which are necessary to ensure these cancellations. 
In such a case one expects these large phases to have an important impact on the lightest supersymmetric particle (LSP) relic density and its detection rates. In Ref. the effect of the SUSY phases on the LSP mass, purity, relic density, elastic cross section, and detection rates was considered within models with universal, hence real, gaugino masses. It was shown that the phases have no significant effect on the LSP relic abundance but that they have a substantial impact on the LSP detection rates. In this paper we study the cosmological implications of the gaugino phases. In particular, we consider the D-brane model recently proposed in Ref. , which allows large values of the SUSY phases without exceeding the experimental upper limits (1) on the neutron and electron EDMs. It turns out that the LSP of this model, depending on the ratio between $`M_1`$ and $`M_2`$, can be bino-like or wino-like. In the region where the EDMs satisfy the upper limits (1), the mass of the LSP is very close to that of the lightest chargino. Hence, in this case, the co-annihilation between the LSP and the lightest chargino becomes very important, and it greatly reduces the relic density. The phases have no important effect on the LSP relic abundance, as in the case studied in Ref. . However, their effect on the detection rates is very significant, and it is larger than what is found in the case of real gaugino masses . In section 2 we briefly review the formulae for the soft SUSY breaking terms in the D-brane model . We also study the effect of the phases on the LSP mass and its composition. In section 3 we compute the relic density of the LSP, including the co-annihilation with the lightest chargino. We also comment on the effect of the SUSY phases on the LSP detection rates. We give our conclusions in section 4. ## 2 Non-universal gaugino masses In the framework of the MSSM and minimal supergravity (SUGRA) the universality of the gaugino masses is usually assumed, i.e., $`M_a=M_{1/2}`$ for $`a=1,2,3`$. 
Despite the simplicity of this assumption, it is a very particular case, and there exist classes of models in which non-universal gaugino masses can be derived . A type I string-derived model recently proposed in Ref. leads to non-universal gaugino masses. This property, as emphasized in Ref. , is very important for the cancellation mechanism mentioned in the previous section. It has been shown that for this model there exists a special region in the parameter space where both the electron and the neutron EDMs satisfy the experimental constraint (1), and large values of the SUSY phases and a light SUSY spectrum are allowed. The soft SUSY breaking terms in type I string theories depend on the embedding of the standard model (SM) gauge group in the D-brane sector. If the SM gauge group is not associated with a single set of branes, the gaugino masses are non-universal . We consider the case in which $`SU(3)_C`$ and $`U(1)_Y`$ are associated with one set of five branes (say $`5_1`$) and $`SU(2)_L`$ with a second set $`5_2`$ . The soft SUSY breaking terms then take the following form $`M_1`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{cos}\theta \mathrm{\Theta }_1e^{i\alpha _1}=M_3=A,`$ (2) $`M_2`$ $`=`$ $`\sqrt{3}m_{3/2}\mathrm{cos}\theta \mathrm{\Theta }_2e^{i\alpha _2},`$ (3) where $`A`$ is the trilinear coupling. The soft scalar masses squared are given by $`m_Q^2`$ $`=`$ $`m_L^2=m_{H_1}^2=m_{H_2}^2=m_{3/2}^2(1-\frac{3}{2}\mathrm{sin}^2\theta ),`$ (4) $`m_D^2`$ $`=`$ $`m_U^2=m_E^2=m_{3/2}^2(1-3\mathrm{cos}^2\theta ),`$ (5) and $`\mathrm{\Theta }_1^2+\mathrm{\Theta }_2^2=1`$. In this case, by using the appropriate field redefinitions and the $`R`$-rotation, we end up with four physical phases which cannot be rotated away. These phases can be chosen to be: the phase of $`M_1`$ ($`\varphi _1`$), the phase of $`M_3`$ ($`\varphi _3`$), the phase of $`A`$ ($`\varphi _A`$), and the phase of $`\mu `$ ($`\varphi _\mu `$). The phase of $`B`$ is fixed by the condition that $`B\mu `$ is real. 
We notice that at the GUT scale $`\varphi _1=\varphi _3=\varphi _A=\alpha _1-\alpha _2`$, while the phase of $`\mu `$ is arbitrary and scale independent. The effect of these phases on the EDMs of the electron and the neutron (taking into account the cancellation mechanism between the different contributions) has been examined in Ref. . It was shown that large values of these phases can be accommodated while the electron and neutron EDMs satisfy the experimental constraints. It is interesting to note, however, that the EDMs impose a constraint on the ratio $`M_1/M_2`$. In fact, in order to have an overlap between the electron and neutron EDM allowed regions, $`M_2`$ should be smaller than $`M_1`$. In particular, as explained in Ref. , a precise overlap between these two regions occurs at $`\mathrm{\Theta }_1=0.85`$. Such a constraint has an important impact on the LSP. In this case we have the following ratios of the gaugino masses at the string scale $$|M_3|:|M_2|:|M_1|=1:\frac{\mathrm{\Theta }_2}{\mathrm{\Theta }_1}:1,$$ (6) where $`\frac{\mathrm{\Theta }_2}{\mathrm{\Theta }_1}<1`$, so that $`M_2`$ is the lightest gaugino at the GUT scale. However, at the weak scale we approximately have $$|M_3|:|M_2|:|M_1|=7:2\frac{\mathrm{\Theta }_2}{\mathrm{\Theta }_1}:1,$$ (7) since $`\alpha _1:\alpha _2:\alpha _3\simeq 1:2:7`$ at $`M_Z`$. In Figure (1) we show the running values of $`|M_i|`$ for $`m_{3/2}`$ of order 100 GeV and $`\mathrm{\Theta }_1=0.85`$. In our analysis we restrict ourselves to the region found in Ref. , where the electron and neutron EDMs are smaller than the limits (1), i.e., we take $`\mathrm{tan}\beta \simeq 2`$, $`\theta =0.2`$, $`\mathrm{\Theta }_1=0.85`$, $`\varphi _\mu \simeq 10^{-1}`$ and $`\varphi _1`$ between $`\pi `$ and $`1.5\pi `$. This restriction suggests that in this scenario the lightest neutralino is bino-like. 
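As a quick consistency check of Eq. (7): at one loop the ratio $`M_a/\alpha _a`$ is RG invariant, so the string-scale ratios of Eq. (6) get multiplied by the weak-scale gauge-coupling ratios. The sketch below assumes only the $`\alpha _1:\alpha _2:\alpha _3\simeq 1:2:7`$ ratio quoted in the text and the value $`\mathrm{\Theta }_1=0.85`$; the overall normalization is arbitrary.

```python
# One-loop RG invariance of M_a/alpha_a: reproduce Eq. (7) from Eq. (6).
# alpha_1:alpha_2:alpha_3 ~ 1:2:7 at M_Z and Theta_1 = 0.85 are the
# values quoted in the text; everything else here is just arithmetic.
import math

theta1 = 0.85
theta2 = math.sqrt(1.0 - theta1**2)      # Theta_1^2 + Theta_2^2 = 1

# String-scale ratios |M_3| : |M_2| : |M_1| = 1 : Theta_2/Theta_1 : 1
gut = [1.0, theta2 / theta1, 1.0]        # entries ordered as M_3, M_2, M_1

# Scale each M_a by alpha_a(M_Z), proportional to (7, 2, 1)
alpha_weak = [7.0, 2.0, 1.0]
weak = [m * a for m, a in zip(gut, alpha_weak)]

# Normalize to |M_1| = 1:
weak = [m / weak[2] for m in weak]
print(weak)   # [7.0, 2*Theta_2/Theta_1, 1.0], i.e. Eq. (7)
```

With $`\mathrm{\Theta }_1=0.85`$ the middle entry is about 1.24, so $`M_2`$ is indeed the lightest gaugino at the weak scale as well.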
Indeed we find that the lightest neutralino, which in general is a linear combination of the Higgsinos $`\stackrel{~}{H}_1^0`$, $`\stackrel{~}{H}_2^0`$ and the two neutral gauginos $`\stackrel{~}{B}^0`$ (bino) and $`\stackrel{~}{W}_3^0`$ (wino), $$\chi =N_{11}\stackrel{~}{B}+N_{12}\stackrel{~}{W}^3+N_{13}\stackrel{~}{H}_1^0+N_{14}\stackrel{~}{H}_2^0,$$ is bino-like, with gauge function $`f_g=|N_{11}|^2+|N_{12}|^2\simeq 0.98`$. Moreover, it turns out that the LSP mass is close to the lightest chargino mass, which is equal to the mass of the next-to-lightest neutralino $`(\stackrel{~}{\chi }_2^0)`$. Figure (2) shows that the mass splitting between the LSP and the lightest chargino, $`\mathrm{\Delta }m_{\chi ^+}=m_{\chi _1^+}/m_\chi -1`$, is less than $`20\%`$. Therefore the co-annihilations between the bino and the chargino, as well as the next-to-lightest neutralino, are very important and have to be included in the calculation of the relic density. ## 3 Relic Abundance and Co-annihilation effect In this section we compute the relic density of the LSP. Moreover, we study the effect of the SUSY CP violating phases and of the co-annihilation on both the relic density and the upper bound on the LSP mass. As usual, since the LSP is bino-like, the annihilation is predominantly into leptons via the exchange of the right sleptons. Without the co-annihilation, the constraint on the relic density $`0.025<\mathrm{\Omega }_{LSP}h^2<0.22`$ imposes a severe constraint on the LSP mass, namely $`m_\chi <150`$ GeV. As shown in Figure (3), the SUSY phases have no significant effect in relaxing this severe constraint. We now turn to the calculation of the cosmological relic density of the LSP including the co-annihilation of $`\chi `$ with $`\chi _1^+`$ and $`\stackrel{~}{\chi }_2^0`$. As shown in Figure (3), $`\mathrm{\Omega }_{LSP}h^2`$ increases to unacceptably high values as $`m_\chi `$ approaches $`300`$ GeV. This result imposes severe constraints on the entire parameter space. 
Therefore, in order to reduce the LSP relic density to an acceptable level, it is very important to include the co-annihilation. Several studies explaining the effect of the co-annihilation with the next-to-lightest SUSY particle (NLSP) have recently been reported . In these studies it was shown that, in models with large $`\mathrm{tan}\beta `$, the NLSP turns out to be the stau, and its co-annihilation with the LSP is crucial to reduce the relic density to an acceptable region. Following Ref. , we define the effective number of LSP degrees of freedom $$g_{eff}=\sum _ig_i(1+\mathrm{\Delta }_i)^{3/2}e^{-\mathrm{\Delta }_ix},$$ (8) where $`\mathrm{\Delta }_i=m_i/m_{\stackrel{~}{\chi }_1^0}-1`$, $`x=m_{\stackrel{~}{\chi }_1^0}/T`$, with $`T`$ the photon temperature, and $`g_i=2,4,2`$ $`(i=\stackrel{~}{\chi }_1^0,\stackrel{~}{\chi }_1^+,\stackrel{~}{\chi }_2^0)`$ are the numbers of degrees of freedom of the particles. Note that the neutralinos $`\chi _{1,2}^0`$ and the chargino $`\chi ^\pm `$, which are Majorana and Dirac fermions, have two and four degrees of freedom, respectively. The Boltzmann equation for the total number density $`n=\sum _in_i`$ is given by $$\frac{dn}{dt}=-3Hn-\langle \sigma _{eff}v\rangle (n^2-(n^{eq})^2),$$ (9) where $`H`$ is the Hubble parameter and $`v`$ is the relative velocity of the annihilating particles. The number densities in thermal equilibrium satisfy $`n_i/n\simeq n_i^{eq}/n^{eq}=r_i`$. The effective cross section $`\sigma _{eff}`$ is defined by $$\sigma _{eff}=\sum _{i,j}\sigma _{ij}r_ir_j$$ where $`\sigma _{ij}`$ is the pair annihilation cross section of the particles $`\chi _i`$ and $`\chi _j`$. Here $`r_i`$ is given by $$r_i=\frac{g_i(1+\mathrm{\Delta }_i)^{3/2}e^{-\mathrm{\Delta }_ix}}{g_{eff}}.$$ Due to the fact that the LSP is an almost pure bino, the co-annihilation processes go predominantly into fermions. 
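The weights $`r_i`$ of Eq. (8) can be evaluated directly. The sketch below uses illustrative values $`\mathrm{\Delta }_i=0.1`$ and $`x=20`$ (typical for freeze-out, not values fitted in the paper) to show why the co-annihilation matters: despite the Boltzmann suppression $`e^{-\mathrm{\Delta }_ix}`$, the chargino and second neutralino still carry an appreciable share of the total number density.

```python
# Sketch of Eq. (8): effective degrees of freedom and the weights r_i
# for the LSP / lightest chargino / second neutralino system.
import math

def g_eff(deltas, gs, x):
    # g_eff = sum_i g_i (1 + Delta_i)^{3/2} exp(-Delta_i x)
    return sum(g * (1 + d)**1.5 * math.exp(-d * x) for g, d in zip(gs, deltas))

def weights(deltas, gs, x):
    # r_i = g_i (1 + Delta_i)^{3/2} exp(-Delta_i x) / g_eff
    ge = g_eff(deltas, gs, x)
    return [g * (1 + d)**1.5 * math.exp(-d * x) / ge for g, d in zip(gs, deltas)]

gs     = [2, 4, 2]          # chi_1^0, chi_1^+, chi_2^0 (as in the text)
deltas = [0.0, 0.1, 0.1]    # Delta_i = m_i/m_LSP - 1 (illustrative, < 20%)
x      = 20.0               # m_LSP / T at freeze-out (typical)

r = weights(deltas, gs, x)
print([round(v, 3) for v in r])   # the weights sum to 1
```

With these inputs the heavier states carry roughly a third of the total weight, so dropping them from $`\sigma _{eff}`$ would badly misestimate the relic density.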
However, since the coupling $`\stackrel{~}{\chi }_2^0f\stackrel{~}{f}`$ is proportional to $`Z_{2j}`$, it is smaller than the corresponding $`\stackrel{~}{\chi }_1^+f\stackrel{~}{f^{}}`$ coupling. We find that the dominant contribution is due to the co-annihilation channel $`\stackrel{~}{\chi }_1^+\chi \to f\overline{f}^{\prime}`$. We also include the $`\stackrel{~}{\chi }_1^+\chi \to W^+\gamma `$ channel, which is estimated to give a few percent contribution. We can then calculate the relic abundance from the equation $$\mathrm{\Omega }_\chi h^2\simeq \frac{1.07\times 10^9\mathrm{GeV}^{-1}}{g_{*}^{1/2}M_Px_F^{-1}\int _{x_F}^{\infty }\langle \sigma _{eff}v\rangle x^{-2}dx}.$$ (10) Here $`M_P`$ is the Planck scale, $`g_{*}\simeq 81`$ is the effective number of massless degrees of freedom at freeze-out, and $`x_F`$ is given by $$x_F=\mathrm{ln}\frac{0.038g_{eff}M_P(c+2)cm_{\stackrel{~}{\chi }_1^0}\langle \sigma _{eff}v\rangle (x_F)}{g_{*}^{1/2}x_F^{1/2}},$$ (11) where the constant $`c`$ is equal to $`1/2`$. In Figure (4) we plot the LSP relic abundance $`\mathrm{\Omega }_\chi h^2`$ versus $`m_\chi `$. These values have been estimated using Eq. (10), including the co-annihilations. The results in Figure (4) show how the co-annihilation processes can play a crucial role in reducing the values of $`\mathrm{\Omega }_\chi h^2`$. The lower bound on the relic density, $`\mathrm{\Omega }_\chi h^2>0.025`$, leads to $`m_\chi <400`$ GeV. Also, here the effect of the SUSY phases is insignificant, and the same upper bound on the LSP mass is obtained for vanishing and non-vanishing phases. It is worth mentioning that the gaugino phases, especially the phase of $`M_3`$, have a relevant impact on generating a large $`\varphi _A`$ at the electroweak (EW) scale: the phase of $`M_3`$ dominantly contributes to the phase of the $`A`$-term during the renormalization from the GUT scale to the EW scale. Thus the radiative corrections to $`\varphi _A`$ are very small, and the phase of $`A`$ can be kept large at the EW scale. 
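The freeze-out machinery of Eqs. (10) and (11) can be sketched numerically: Eq. (11) is a fixed-point equation for $`x_F`$, and for a constant effective cross section the integral in Eq. (10) collapses to $`\langle \sigma _{eff}v\rangle /x_F`$. The inputs below (a 200 GeV LSP and $`\langle \sigma _{eff}v\rangle =10^{-9}`$ GeV<sup>-2</sup>) are purely illustrative, and the relic-density line follows the familiar textbook constant-cross-section estimate rather than the authors' full computation, whose averaging conventions may differ.

```python
# Sketch of the freeze-out computation of Eqs. (10)-(11).
import math

M_P    = 1.22e19      # Planck mass in GeV
g_star = 81.0         # effective massless d.o.f. at freeze-out (text value)
c      = 0.5          # the constant c = 1/2 of Eq. (11)

def solve_xF(m_chi, sigma_v, g_eff=2.0, x0=20.0, iters=50):
    """Fixed-point iteration of Eq. (11) for x_F = m_chi/T at freeze-out."""
    x = x0
    for _ in range(iters):
        x = math.log(0.038 * g_eff * M_P * (c + 2.0) * c * m_chi * sigma_v
                     / (math.sqrt(g_star) * math.sqrt(x)))
    return x

def omega_h2(m_chi, sigma_v):
    """Textbook constant-<sigma v> estimate of the relic abundance."""
    xF = solve_xF(m_chi, sigma_v)
    return 1.07e9 * xF / (math.sqrt(g_star) * M_P * sigma_v)

xF = solve_xF(200.0, 1e-9)
print(round(xF, 1))   # of order 20-25, as usual for weak-scale relics
print(round(omega_h2(200.0, 1e-9), 2))
```

The iteration converges in a few steps because the right-hand side depends on $`x_F`$ only logarithmically.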
However, as we have shown, such large phases do not affect the LSP mass and the relic abundance. In fact, this result can be explained as follows: the LSP is bino-like (so it depends only slightly on the phase of $`\mu `$), and the contribution of the phases can be relevant only if there is significant mixing in the sfermion mass matrix. In the class of models we consider, the off-diagonal elements are much smaller than the diagonal elements. As shown in Ref. , the SUSY phases have a significant effect on the direct detection rate ($`R`$) and the indirect detection rate ($`\mathrm{\Gamma }`$): the phase $`\varphi _A`$ increases the values of $`R`$ and $`\mathrm{\Gamma }`$ . Furthermore, the enhancement of the ratios of the rates with non-vanishing $`\varphi _A`$ to the rates in the absence of this phase is even larger than what is found in Ref. . Indeed, due to the gluino contribution (through the renormalization), the phase $`\varphi _A`$ can reach larger values at the EW scale. ## 4 Conclusions We have considered a type I string-derived model which leads to non-universal gaugino masses and phases. As recently shown, this non-universality is very important for having sufficient cancellations among the different contributions to the EDMs. Moreover, the EDMs of the electron and neutron impose a constraint on the ratio of the gaugino masses $`M_1`$ and $`M_2`$. This implies that the mass of the LSP (bino-like) is close to the lightest chargino mass. The co-annihilation between the LSP and the lightest chargino is crucial to reduce the relic density to an interesting region. The phases have no significant effect on the LSP mass and its relic density, but have a substantial effect on the direct and indirect detection rates. ## Acknowledgments This work is supported by a Spanish Ministerio de Educación y Cultura research grant.
# On Inherited Duality in 𝒩=1 𝑑=4 Supersymmetric Gauge Theories ## Appendix In this appendix we show that the application of our techniques to the $`SU(2)`$ theory with two adjoints reproduces the results of for $`SO(3)`$ with two triplets. Consider $`SU(2)`$ $`𝒩=4`$ Yang-Mills with gauge coupling $`\tau `$ deformed by $`𝒩=1`$–preserving masses, giving the superpotential $$W=\sqrt{2}\mathrm{tr}\varphi _1[\varphi _2,\varphi _3]+m^{ij}u_{ij}$$ where classically $`u_{ij}=\frac{1}{2}\mathrm{tr}(\varphi _i\varphi _j)`$. Denote the three eigenvalues of $`m^{ij}`$ by $`m_i`$, and define $`u\equiv u_{11}`$. Our complexified flavor rotation trick implies that for $`m_1=0`$, the low-energy effective coupling $`\tau _L`$ on the Coulomb branch of this theory is equal to that of the $`SU(2)`$ $`𝒩=2`$ theory with a fundamental hypermultiplet of mass $`\widehat{m}`$, where $`\widehat{m}^2=m_2m_3`$. In , $`\tau _L`$ is given as the modular parameter of the auxiliary torus $$y^2=\prod _{i=1}^3\left(x-e_i(q)\stackrel{~}{u}+\frac{1}{4}e_i^2(q)m_2m_3\right),$$ (8) where $$\stackrel{~}{u}\equiv u+(1/8)e_1(q)m_2m_3,$$ (9) $`q\equiv e^{2\pi i\tau }`$, and the $`e_i(q)`$ are the usual modular forms associated with the torus, satisfying $`e_1+e_2+e_3=0`$, $`e_1-e_2=[\theta _3(\tau )]^4`$, etc., with small-$`q`$ expansions $`e_1(q)`$ $`=`$ $`\frac{2}{3}+16q+𝒪(q^2),`$ (10) $`e_2(q)`$ $`=`$ $`-\frac{1}{3}-8q^{1/2}-8q+𝒪(q^{3/2}),`$ (11) $`e_3(q)`$ $`=`$ $`-\frac{1}{3}+8q^{1/2}-8q+𝒪(q^{3/2}).`$ (12) For fixed $`m_2m_3`$ and $`\stackrel{~}{u}`$ the torus is $`SL(2,𝐙)`$ invariant. This follows from the modular properties of the $`e_i`$, which are interchanged with one another under $`SL(2,𝐙)`$. In particular, $`e_1\leftrightarrow e_2`$ under $`\tau \to -1/\tau `$, while $`e_2\leftrightarrow e_3`$ under $`\tau \to \tau +1`$. Note that with the definition of $`\stackrel{~}{u}`$ given in Eq. (9), the Coulomb branch coordinate $`u`$ transforms under $`SL(2,𝐙)`$. 
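The expansions (10)–(12) can be checked numerically: the truncated series sum to zero, and $`e_1-e_2`$ reproduces $`[\theta _3(\tau )]^4`$, summed directly from its $`q`$-series in the convention $`q=e^{2\pi i\tau }`$, up to the neglected $`𝒪(q^{3/2})`$ terms. This is only a consistency sketch of the stated formulas.

```python
# Numeric check of the small-q expansions (10)-(12).
def e1(q): return  2/3 + 16*q
def e2(q): return -1/3 - 8*q**0.5 - 8*q
def e3(q): return -1/3 + 8*q**0.5 - 8*q

def theta3_4(q, nmax=20):
    # theta_3 = sum over n of q^{n^2/2} in the nome convention q = e^{2 pi i tau}
    th = sum(q**(n*n/2) for n in range(-nmax, nmax + 1))
    return th**4

q = 1e-4
assert abs(e1(q) + e2(q) + e3(q)) < 1e-12   # e1 + e2 + e3 = 0 exactly
# e1 - e2 agrees with theta_3^4 up to the dropped O(q^{3/2}) pieces:
print(e1(q) - e2(q), theta3_4(q))
```

Both printed numbers agree to about $`32q^{3/2}`$, which is exactly the size of the first neglected term in the expansions.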
There is, however, an ambiguity involved in the definitions of $`u`$ and the masses. The symmetries of the theory permit non-perturbative redefinitions of the form $$u\to k_1(q)u+qk_2(q)m_2m_3,k_i(q)=1+𝒪(q),$$ (13) and $$m_i\to \ell _i(q)m_i,\ell _i(q)=1+𝒪(q).$$ (14) The reason is that $`u`$ and the bare masses are only defined in the weak coupling ($`q\to 0`$) limit, which the above redefinitions preserve. (More generally, $`u_{ij}`$, classically $`\frac{1}{2}\mathrm{tr}(\varphi _i\varphi _j)`$, and $`m^{ij}`$ can suffer such redefinitions, with $`u_{ij}`$ mixing at order $`q`$ with the subdeterminants of $`m^{ij}`$.) It is convenient to use the above freedom to redefine the masses by $$m_2m_3\to 9e_2(q)e_3(q)m_2m_3,$$ (15) which can be checked to be of the form of Eq. (14) using Eqs. (10)–(12). The virtue of this redefinition is that, when used in Eq. (9), $`u`$ is $`SL(2,𝐙)`$ invariant. This is convenient for keeping manifest $`SL(2,𝐙)`$ invariance in our calculations when we integrate $`u`$ out, as we will do shortly. We can now recover the results of for the $`SO(3)`$ theory with two triplets and no superpotential. This is the limit of our theory in which $`m_3\to \infty `$, $`q\to 0`$, keeping $`\mathrm{\Lambda }\equiv m_3\sqrt{q}`$ fixed. In three descriptions of the theory were found, in terms of electric, magnetic, and dyonic states. The electric description has no superpotential; the magnetic (dyonic) description (upon integrating out some massive singlets used in the presentation of ) has superpotential $$W=\frac{\eta }{8\mathrm{\Lambda }}\underset{j,k}{det}[\mathrm{tr}(\varphi _j\varphi _k)]$$ (16) where $`\eta =1`$ $`(-1)`$ in the magnetic (dyonic) description. Now, the $`SU(2)`$ theory with one massless and two massive adjoints has a superpotential which enforces the relation (8), with the substitution of Eq. 
(15), by way of a Lagrange multiplier $`\lambda `$ $$W=\lambda \left[y^2-\prod _{r=1}^3\left(x-e_r\left[u+\frac{9}{8}e_1e_2e_3m_2m_3\right]+\frac{1}{4}e_r^2m_2m_3\right)\right].$$ Adding a mass for $`\varphi _1`$ takes $`W\to W+m_1u`$. Upon integrating $`u`$ out, one finds a low energy superpotential with three branches $$W_{r,L}=m_1u_r(q,\widehat{m})|_{\frac{dW}{du}=0}=\frac{1}{8}m_1m_2m_3[9e_1(q)e_2(q)e_3(q)+2e_r(q)],$$ (17) where $`u_r`$ is one of the three values of $`u`$ at which the torus (8) becomes singular. (See also .) Taking $`m_1m_2=det\stackrel{~}{m}`$, where $`\stackrel{~}{m}^{jk}`$ for $`j,k=1,2`$ is the mass matrix for $`\varphi _{1,2}`$, and integrating the two adjoint fields $`\varphi _1`$ and $`\varphi _2`$ back in, as in , gives $$W_r=\left[W_{r,L}-\stackrel{~}{m}^{jk}u_{jk}\right]_{\frac{dW}{d\stackrel{~}{m}}=0}=-\frac{2}{m_3[9e_1e_2e_3+2e_r]}\underset{j,k=1,2}{det}u_{jk}.$$ The $`q\to 0`$, $`m_3\to \infty `$ limit, for which $`u_{ij}\to \frac{1}{2}\mathrm{tr}(\varphi _i\varphi _j)`$, gives $`W_1`$ $`=`$ $`-\frac{4\underset{j,k}{det}u_{jk}}{m_3e_1(9e_2e_3+2)}\to 0,`$ (18) $`W_2`$ $`=`$ $`-\frac{4\underset{j,k}{det}u_{jk}}{m_3e_2(9e_1e_3+2)}\to \frac{\underset{j,k}{det}[\mathrm{tr}(\varphi _j\varphi _k)]}{8\mathrm{\Lambda }},`$ (19) $`W_3`$ $`=`$ $`-\frac{4\underset{j,k}{det}u_{jk}}{m_3e_3(9e_1e_2+2)}\to -\frac{\underset{j,k}{det}[\mathrm{tr}(\varphi _j\varphi _k)]}{8\mathrm{\Lambda }},`$ (20) matching the electric, magnetic, and dyonic superpotentials of Eq. (16), respectively. In this way we see explicitly how the $`SL(2,𝐙)`$ duality transformations $`\tau \to -1/\tau `$ and $`\tau \to \tau +1`$ correspond to electric-magnetic duality and magnetic-dyonic duality. A $`\mathrm{\Gamma }_2`$ subgroup of $`SL(2,𝐙)`$ leaves the descriptions invariant, while $`SL(2,𝐙)/\mathrm{\Gamma }_2\simeq S_3`$ permutes the three descriptions. 
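Two small numerical checks of the preceding manipulations, using only the truncated expansions (10)–(12): first, $`9e_2(q)e_3(q)=1+𝒪(q)`$, so the redefinition (15) is indeed of the allowed form (14), the $`q^{1/2}`$ pieces cancelling in the product; second, the denominators $`e_r(9e_je_k+2)`$ of Eqs. (18)–(20) stay finite on the first branch but vanish like $`\mp 16q^{1/2}`$ on the other two, which is what converts the $`1/m_3`$ prefactor into $`1/\mathrm{\Lambda }=1/(m_3q^{1/2})`$ there.

```python
# Consistency sketch using the truncated expansions (10)-(12).
def e1(q): return  2/3 + 16*q
def e2(q): return -1/3 - 8*q**0.5 - 8*q
def e3(q): return -1/3 + 8*q**0.5 - 8*q

# (i) the mass redefinition (15): 9 e2 e3 = 1 + O(q), with no q^{1/2} piece
for q in (1e-4, 1e-6, 1e-8):
    print(q, 9*e2(q)*e3(q) - 1)          # deviation scales linearly in q

# (ii) the branch denominators of Eqs. (18)-(20)
q = 1e-8
D1 = e1(q) * (9*e2(q)*e3(q) + 2)         # -> 2 as q -> 0 (first branch)
D2 = e2(q) * (9*e1(q)*e3(q) + 2)         # ~ -16 q^{1/2}
D3 = e3(q) * (9*e1(q)*e2(q) + 2)         # ~ +16 q^{1/2}
print(D1, D2 / q**0.5, D3 / q**0.5)
```

The opposite signs of the last two denominators are what distinguish the two branches with a nonzero superpotential, consistent with the sign flip under $`q^{1/2}\to -q^{1/2}`$.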
These facts were already understood in from the connection of the theory with the duality of $`𝒩=4`$ $`SU(2)`$ gauge theory; here we understand it as following from the $`SL(2,𝐙)`$ duality of the $`𝒩=1`$ theory itself. Furthermore, it is easy to check that redefining the fields and parameters by putting in arbitrary $`g(q)`$, $`k_i(q)`$, and $`\ell _i(q)`$ \[see Eqs. (4), (13), and (14)\] has no effect on the above calculation. One could also attempt to start from the low energy superpotentials in Eq. (17) and integrate back in all three $`\varphi _j`$; we then expect to recover the $`𝒩=4`$ theory with coupling $`q`$. However, this is more difficult than in the above two-flavor case because we must introduce not only $`\mathrm{tr}(\varphi _i\varphi _j)`$ but also the gauge invariant non-quadratic operator $`det(\varphi )`$. (Here $`\varphi `$ is a $`3\times 3`$ matrix in flavor and color.) So we instead consider going in the other direction, attempting to recover Eq. (16) in the theory with two adjoints by integrating massive $`\varphi _3`$ out of the $`𝒩=4`$ theory. Consider the theory with three adjoints, superpotential $`W=\sqrt{2}\beta det\varphi `$, and coupling $`q`$. For $`\beta =1`$ the theory is conformal. For $`\beta \ne 1`$ the theory has only $`𝒩=1`$ supersymmetry, but flows until it reaches the IR attractive $`𝒩=4`$ fixed point where the physical (nonholomorphic) coupling $`\beta =1`$. As discussed in , the quantity $`t\equiv \beta ^4q`$ is invariant under this RG flow (more generally, $`\beta ^{2C_2(G)}q`$ is invariant), and the low-energy $`𝒩=4`$ conformal theory has coupling $`t`$. The theory is nowhere weakly coupled unless $`t\ll 1`$. Adding $`\frac{1}{2}m_3u_{33}`$ to the superpotential and integrating out $`\varphi _3`$, we find that symmetries ensure that the low-energy superpotential is $$W_L=\frac{\beta ^2s(t)}{m_3}\underset{i,j=1,2}{det}\mathrm{tr}(\varphi _i\varphi _j),$$ (21) where $`s(t)`$ is an unknown function. 
(The symmetries ensure that in this case $`u_{ij}`$ and $`\frac{1}{2}\mathrm{tr}(\varphi _i\varphi _j)`$ are proportional, differing by another function of $`t`$.) For $`\beta `$ fixed and $`q\to 0`$, the theory is weakly coupled and we may integrate out $`\varphi _3`$ classically, which reveals that $`s(t=0)=1`$. At finite $`t`$, $`s(t)`$ is undetermined. However, we know the $`\beta =1`$, $`q\to 0`$ theory is $`SL(2,𝐙)`$–dual to $`\beta =1`$, $`q\to 1`$ and $`\beta =1`$, $`q\to e^{2\pi i}`$, which are the magnetic and dyonic descriptions. In these descriptions, where $`t\simeq 1`$, the classical analysis is not valid, and $`s(t)`$ may differ from 1. We now determine $`s(1)`$ in the scheme used in . Taking $`\beta =1`$, $`q\to 0`$, $`m_3\to \infty `$, with $`\mathrm{\Lambda }=m_3q^{1/2}`$ held fixed, Eq. (21) obviously yields the expected electric low-energy superpotential, Eq. (16) with $`\eta =0`$. A magnetic description of the same theory should be obtained by studying the theory with $`\beta =1`$ and $`q\to 1`$, but it is convenient instead to study a theory with the same infrared physics, namely one with $`\beta =q^{-1/4}`$ and $`q\to 0`$; both theories have $`t=1`$. The latter theory can be defined by holding the strong coupling scale $`\mathrm{\Lambda }\equiv m_3q^{1/2}`$ fixed, as in . From Eq. (21) this limit has superpotential $$W_L=\frac{s(1)}{m_3q^{1/2}}\underset{i,j=1,2}{det}\mathrm{tr}(\varphi _i\varphi _j),$$ which agrees with Eq. (16) for $`\eta =1`$ provided $`s(1)=1/8`$. The dyonic description is given by taking $`q\to qe^{2\pi i}`$, which changes the sign of the superpotential, in agreement with Eq. (16).
# Jet driven molecular outflows in Orion ## 1 Introduction The Orion Molecular Cloud (hereafter OMC 1) has played a central role in the study of the formation and evolution of high-mass stars. This molecular cloud harbors massive stars in the stage of energetic mass loss. Two regions contain high-velocity (HV hereafter) molecular outflows: the IRc2/I outflow in the BN/KL nebula and the Orion–S region. The first evidence of massive star formation in the Orion–S region, located $`100^{\prime \prime }`$ south of IRc2, came from the presence of warm gas (Ziurys et al., 1981) and large dust column densities (Keene, Hildebrand & Whitcomb, 1982). Subsequent observations of CO with higher angular resolution showed the presence of a long filament at moderate velocities ($`\sim 30`$ km s<sup>-1</sup>), which has been interpreted as a redshifted monopolar jet powered by FIR 4 (Schmid-Burgk et al., 1990). H<sub>2</sub>O masers have been detected in the vicinity of FIR 4, but they are not related to the monopolar jet (Gaume et al., 1998). In fact, the H<sub>2</sub>O masers seem to be associated with the very fast bullets ($`v_{LSR}>50`$ km s<sup>-1</sup>) found in the compact bipolar outflow recently detected by Rodríguez-Franco et al. (1999), which is perpendicular to the low velocity jet. Due to its nearness and intense CO emission, the prominent high-velocity CO outflow in the KL nebula has been the subject of intense study in several rotational transitions of this molecule (Kwan & Scoville, 1976; Zuckerman, Kuiper & Rodríguez-Kuiper, 1976; Wannier & Phillips, 1977; Phillips et al., 1977; Goldsmith et al., 1981; Van Vliet et al., 1981). The first observations showed the presence of outflowing gas at velocities up to $`100`$ km s<sup>-1</sup> from the cloud velocity. 
Strong shocks produced by the interaction of the outflow from the powering source with the ambient material give rise to strong emission of vibrationally excited molecular hydrogen H<sub>2</sub> (see e.g. Nadeau & Geballe, 1979), H<sub>2</sub>O masers around the source IRc2/I (Gaume et al., 1998), and molecules like SiO, SO, SO<sub>2</sub> (Wright et al., 1996) produced by shock chemistry. In spite of the large number of observations of this molecular outflow, its nature is so far unclear. Observations with high angular resolution of the HV CO emission at moderate velocities (Erickson et al., 1982; Chernin & Wright, 1996) have shown a weak bipolarity of the outflow. Based on these observations, it was proposed that this is a conical outflow with a wide ($`130^\mathrm{o}`$) opening angle, oriented in the southwest-northeast direction and powered by source I (Chernin & Wright, 1996). However, the wide opening angle model cannot account for a number of observational facts, such as the spatial distribution and kinematics of the high and low velocity H<sub>2</sub>O and SiO masers (Greenhill et al., 1998; Doeleman et al., 1999). Furthermore, the morphology of the HV CO emission at intermediate velocities shows that the outflow is offset from IRc2/I by $`10^{\prime \prime }`$, indicating that a complex channeling of the HV gas is needed if IRc2/I is the powering source (Wilson et al., 1986). Recently, Rodríguez-Franco et al. (1999) have mapped the IRc2/I molecular outflow with high sensitivity. The spatial distribution of the most extreme velocities shows the lack of bipolarity around IRc2/I expected from the wide opening angle model. These maps also reveal a ring-like structure of high-velocity bullets which is also difficult to account for if the outflow has a wide opening angle. Rodríguez-Franco et al. 
(1999) have proposed that the ring of HV CO bullets and the distribution and kinematics of the H<sub>2</sub>O masers could be explained if the outflow is driven by a precessing jet oriented along the line of sight. If so, this would allow us to identify the main entrainment mechanism in molecular outflows (see e.g. Raga & Biro, 1993; Cabrit, 1995). In this paper we present new maps of the CO emission, analyze in detail the morphology of the CO emission at moderate and extreme velocities in the IRc2/I and Orion–S molecular outflows, and present new arguments supporting the idea that the Orion outflows are young and driven by high velocity jets. ## 2 Observations and results The observations of the $`J=2-1`$ and the $`J=1-0`$ lines of CO were carried out with the IRAM 30-m telescope at Pico Veleta (Spain). Both rotational transitions were observed simultaneously with SIS receivers tuned to single side band (SSB) with an image rejection of $`8`$dB. The SSB noise temperatures of the receivers at the rest frequencies of the CO lines were 300 K and 110 K for the $`J=2-1`$ and the $`J=1-0`$ lines, respectively. The half power beam width (HPBW) of the telescope was $`12^{\prime \prime }`$ for the $`J=2-1`$ line and $`24^{\prime \prime }`$ for the $`J=1-0`$ line. As spectrometers, we used two filter banks of $`512\times 1`$MHz that provided velocity resolutions of 1.3 and 2.6 km s<sup>-1</sup> for the $`J=2-1`$ and the $`J=1-0`$ lines of CO, respectively. The observing procedure was position switching, with the reference taken at a fixed position located 15 away in right ascension. The mapping was carried out by combining 5 on-source spectra with one reference spectrum. The typical integration times were 20 sec for the on-positions and 45 sec for the reference spectrum. The rms sensitivity of a single $`J=2-1`$ spectrum is 0.6 K.
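The velocity resolutions quoted above follow directly from the 1 MHz channel width of the filter banks. A minimal check; the CO rest frequencies (230.538 and 115.271 GHz) are standard values assumed here, not stated in the text:

```python
# Velocity width of a 1 MHz filter-bank channel at the CO line frequencies.
C_KMS = 299792.458  # speed of light, km/s

def channel_width_kms(delta_nu_mhz, nu_ghz):
    """Velocity width corresponding to a channel delta_nu at frequency nu."""
    return C_KMS * (delta_nu_mhz * 1e6) / (nu_ghz * 1e9)

dv_21 = channel_width_kms(1.0, 230.538)  # CO J=2-1
dv_10 = channel_width_kms(1.0, 115.271)  # CO J=1-0
print(f"{dv_21:.1f} km/s, {dv_10:.1f} km/s")  # 1.3 km/s, 2.6 km/s, as quoted
```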
Pointing was checked frequently on nearby continuum sources (mainly Jupiter), and the pointing errors were $`\stackrel{<}{}`$4<sup>′′</sup>. The calibration was made by measuring the sky and hot and cold loads. The line intensity scale is in units of main beam brightness temperature, using main beam efficiencies of 0.60 for the $`J=1-0`$ line and 0.45 for the $`J=2-1`$ line. We have made an unbiased search for high-velocity molecular gas in OMC 1 by mapping with high sensitivity the $`J=1-0`$ and $`J=2-1`$ lines of CO in a region of $`14^{\prime }\times 14^{\prime }`$ around IRc2/I. In this article we will analyze the very high-velocity gas associated with molecular outflows. The widespread CO emission at moderate velocities ($`|v_{LSR}-9|\stackrel{<}{}40`$ km s<sup>-1</sup>) will be discussed elsewhere (Martín–Pintado & Rodríguez–Franco, 2000). In our CO maps we have detected only the known molecular outflows with $`|v_{LSR}-9|\stackrel{>}{}30`$ km s<sup>-1</sup>, IRc2 and Orion–S. Figure 1 shows a sample of line profiles taken towards selected positions around IRc2 and Orion–S. ## 3 High-velocity gas around IRc2/I ### 3.1 The morphology Fig. 2 shows the spatial distribution of the CO $`J=2-1`$ integrated intensity around IRc2 in radial velocity intervals of 20 km s<sup>-1</sup> for the blueshifted and redshifted gas. The spatial distribution of the HV gas at moderate velocities ($`\stackrel{<}{}`$60 km s<sup>-1</sup>), hereafter MHV, in our data is similar to that observed by Wilson et al. (1986). Our new maps, however, have better sensitivity and reveal for the first time the spatial distribution of the high-velocity gas at the most extreme velocities ($`|v_{LSR}-9|>60`$ km s<sup>-1</sup>), hereafter EHV, of the molecular outflow, which shows a different morphology from that of the MHV gas. The maximum of the CO emission for the blueshifted gas always appears northwest of IRc2/I.
The offset between the CO maximum and IRc2 increases systematically, from 10<sup>′′</sup> at radial velocities of $`-40`$ km s<sup>-1</sup> up to 25<sup>′′</sup> for the gas at $`-110`$ km s<sup>-1</sup>. The location of the redshifted CO emission maxima does not show such a clear trend. At moderate velocities (up to 75 km s<sup>-1</sup>), the maximum CO emission occurs in a ridge with two peaks of nearly equal intensity located southeast and northwest of IRc2/I. The northwest peak of the redshifted emission occurs at the same position as the blueshifted peak at lower radial velocities. For the most redshifted gas ($`\sim 100`$ km s<sup>-1</sup>), the CO emission breaks up into several condensations located around IRc2/I. At this radial velocity, the most intense CO condensation peaks, as for the blueshifted gas, northwest of IRc2; weaker emission is found 20<sup>′′</sup> west and southeast of IRc2/I. Figs. 3a and b summarize the distribution of the high-velocity molecular gas in Orion at moderate and extreme velocities. The new data show that the MHV molecular gas around IRc2 (Fig. 3a) has a very different morphology from that of the EHV gas (Fig. 3b). At moderate velocities, the CO emission shows a very weak bipolarity (if any) around IRc2 in the southeast–northwest direction (see Fig. 3a). At radial velocities close to the terminal velocity, the strongest CO emission (see Fig. 3b) does not show any bipolarity around IRc2/I. The only possible bipolar morphology is observed around a position located $`\sim 20^{\prime \prime }`$ northwest of IRc2 and $`10^{\prime \prime }`$ south of IRc9, marked as a filled square in Fig. 3. ### 3.2 The size-velocity relation in the HV gas The terminal velocities of the blue and redshifted MHV gas show similar spatial distributions with an elliptical-like shape centred in the vicinity of IRc2/I, and a systematic decrease of the size of the HV gas with increasing velocity (see Fig. 2).
These two characteristics are illustrated in Figs. 3c and d, where we plot the size of the HV CO emission as a function of the terminal velocity for the red- and blueshifted gas, respectively. The contours in these figures show the location of the gas with the same terminal velocity: the area enclosed by a given contour level corresponds to the region containing HV molecular gas with terminal velocities larger than that of the contour level. At moderate velocities ($`\stackrel{<}{}`$60 km s<sup>-1</sup>), the size of the HV gas is $`\sim 110^{\prime \prime }`$, decreasing to $`\sim 40^{\prime \prime }`$ at the extreme velocities. At most radial velocities, the HV gas is concentrated in a region with an elliptical-like shape centered at ($`4^{\prime \prime }`$, $`4^{\prime \prime }`$) with respect to IRc2/I. ### 3.3 High-velocity “Bullets” The detection of these high velocity bullets in the Orion IRc2/I outflow was reported by Rodríguez–Franco et al. (1999). For completeness, we summarize their main characteristics. Fig. 1 shows several examples of CO profiles towards the IRc2/I region, where the locations of the HV bullets are indicated by vertical arrows. The typical line-width of the CO HV bullets is $`20-30`$ km s<sup>-1</sup>. The bullets are distributed in a ring-like structure of size $`10^{\prime \prime }\times 50^{\prime \prime }`$ ($`0.02\times 0.1`$pc) and thickness $`12^{\prime \prime }-20^{\prime \prime }`$ ($`0.02-0.04`$pc), with IRc2 located at the southeast edge of the bullet ring (see Figs. 3e and f). ## 4 High velocity gas around Orion–S ### 4.1 The morphology and characteristics of the high-velocity bipolar outflow Orion–S Moderately high velocity gas ($`\sim 30`$ km s<sup>-1</sup>) has been detected in this region in CO and SiO (Schmid–Burgk et al., 1990; Ziurys & Friberg, 1987). The CO emission at moderate velocities has been associated with a monopolar outflow (a low velocity redshifted jet) in Orion–S discovered by Schmid–Burgk et al. (1990).
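The angular-to-linear conversions quoted above (e.g. the $`10^{\prime \prime }\times 50^{\prime \prime }`$ ring corresponding to $`0.02\times 0.1`$pc) follow from the small-angle relation; the adopted Orion distance of roughly 450 pc is our assumption, since the text does not state it:

```python
# Small-angle conversion of the quoted angular sizes into parsecs,
# assuming a distance to Orion of ~450 pc (our assumption; the text
# quotes only the resulting parsec sizes).
ARCSEC_PER_RAD = 206265.0
D_PC = 450.0

def arcsec_to_pc(theta_arcsec, d_pc=D_PC):
    """Linear size (pc) subtended by theta_arcsec at distance d_pc."""
    return theta_arcsec * d_pc / ARCSEC_PER_RAD

print(f"{arcsec_to_pc(10):.2f} x {arcsec_to_pc(50):.2f} pc")  # 0.02 x 0.11 pc
```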
Rodríguez–Franco et al. (1999) reported the detection of a new compact and fast bipolar outflow (hereafter the Orion-South Fast Bipolar Outflow, or Orion–SFBO) with terminal velocities of $`\sim 100`$ km s<sup>-1</sup>. In this work, we present the main features of this new outflow. To avoid confusion with other HV features in Orion–S, such as the H II region (see Martín–Pintado et al., 2000) and the monopolar outflow (Schmid–Burgk et al., 1990), we will consider only the CO emission at radial velocities $`<-5`$ km s<sup>-1</sup> and $`>25`$ km s<sup>-1</sup>. Fig. 4a shows the integrated intensity map in the $`J=2-1`$ rotational transition of CO for the blueshifted (solid contours) and the redshifted line wings (dashed contours) of the Orion–SFBO. This new outflow shows a clear bipolar structure in the southeast–northwest direction, perpendicular to the low velocity redshifted jet. The morphology of the Orion–SFBO suggests that the axis of the bipolar outflow is oriented close to the plane of the sky. The blueshifted emission peaks $`40^{\prime \prime }`$ northwest of FIR 4, while the redshifted gas has its maximum intensity $`15^{\prime \prime }`$ northeast of that source. Since the morphology of the bipolar SiO emission (Ziurys & Friberg, 1987) is similar to that of the CO emission reported in this paper, the SiO emission is very likely related to the Orion–SFBO rather than to the low velocity monopolar jet (Schmid–Burgk et al., 1990). The structure of the HV gas in the Orion–SFBO as a function of radial velocity is shown in Fig. 4b. The HV gas in the blue and redshifted wings shows different behavior. While the redshifted CO emission peak is located at the same position at all radial velocities, the blueshifted CO peak moves to larger distances ($`10^{\prime \prime }`$, nearly one beam) from FIR 4 as the radial velocity increases from $`-70`$ to $`-100`$ km s<sup>-1</sup>.
The most extreme velocities in the blue lobe (between $`-80`$ and $`-110`$ km s<sup>-1</sup>) arise from a condensation of $`11^{\prime \prime }`$ located $`36^{\prime \prime }`$ northwest of FIR 4. High velocity bullets (see the vertical arrows in Fig. 1) also appear in both the blue- and redshifted lobes. A description can be found in Rodríguez–Franco et al. (1999). ### 4.2 Exciting source The source(s) powering the high-velocity gas in Orion–S remain, so far, unknown. The source FIR 4 has been proposed as the exciting source of the low velocity jet (Schmid–Burgk et al., 1990). However, FIR 4 cannot be the powering source of this outflow, because it is located $`20^{\prime \prime }`$ south of the geometrical center defined by the two lobes (see Figure 4a). One can use the kinematics and the morphology of the HV gas to estimate the position of the exciting source; this procedure has been successfully used for outflows associated with low-mass stars (see Bachiller et al., 1990). From the velocity–position diagram along the direction of the outflow axis we have estimated the location of the possible exciting source, by considering that the source should be located at the position where the radial velocity changes from blueshifted to redshifted. The position of the exciting source inferred with this procedure is $`20^{\prime \prime }`$ north of FIR 4, and it is shown in Fig. 4 as a filled square. In contrast to outflows powered by low-mass stars, whose exciting sources appear as strong millimeter and faint centimeter emitters, the source(s) of the Orion–SFBO have not been detected so far in the mm or cm continuum emission (see e.g. Gaume et al., 1998). The upper limit to the cm radio continuum emission is a factor of 10 smaller than that predicted by the relationship of Anglada et al. (1998) for collisional ionization.
This could be due to an underestimate of the dynamical age of the outflow or, more likely, to a time-variable jet (see Section 6.3). ## 5 The physical conditions of the HV gas in the IRc2/I outflow and in Orion–SFBO The H<sub>2</sub> densities of the HV gas in the IRc2/I outflow are high enough ($`10^5`$cm<sup>-3</sup>, see Boreiko et al., 1989; Boreiko & Betz, 1989; Graf et al., 1990) to thermalize the low rotational ($`J`$) lines at a kinetic temperature of $`\stackrel{>}{}`$$`70`$K (Boreiko et al., 1989). Under these conditions, one can estimate the opacities of the HV gas from the intensity ratio between the $`J=2-1`$ and the $`J=1-0`$ lines. Fig. 6 shows the profiles of the $`J=2-1`$ and the $`J=1-0`$ lines of CO and the ratio between the lines towards IRc2/I. In order to account for the different beam sizes of the two lines, the $`J=2-1`$ line has been smoothed to the resolution of the $`J=1-0`$ line. The expected contamination of the $`J=1-0`$ line of CO by the recombination line H38$`\alpha `$ is also shown, as a dotted line, in the central panel of Fig. 6. Since the contribution of the recombination lines is negligible for the CO HV gas, we can use the CO line ratio over the whole velocity range. We are interested only in the relatively compact ($`\stackrel{<}{}`$40<sup>′′</sup>) emission of the HV gas in the IRc2/I region and, therefore, the line intensity ratio in Fig. 6 has been calculated using the main beam brightness temperature scale. To obtain the corresponding ratio for extended sources, the values in Fig. 6 should be divided by a factor of 1.5, i.e., the ratio of the main beam efficiencies of the telescope for the two lines. The line ratio shows a remarkably symmetric distribution around 5 km s<sup>-1</sup> (the ambient cloud velocity), suggesting that any contamination of the data by emission from other molecular species is negligible.
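The conversion from the 2–1/1–0 intensity ratio to opacity described above can be sketched as follows, assuming both lines are thermalized at a common excitation temperature well above the level energies, in which case τ(2−1) ≈ 4 τ(1−0) and the main-beam ratio runs from 4 (optically thin) to 1 (optically thick). The factor of 4 and the bisection solver are our assumptions, not details given in the text:

```python
import math

def ratio_21_10(tau10, alpha=4.0):
    """Main-beam brightness ratio T(2-1)/T(1-0) for two lines thermalized
    at the same Tex >> h*nu/k, with tau(2-1) = alpha * tau(1-0)."""
    return (1.0 - math.exp(-alpha * tau10)) / (1.0 - math.exp(-tau10))

def tau10_from_ratio(r, lo=1e-6, hi=50.0):
    """Invert the (monotonically decreasing) ratio by bisection;
    r must lie between 1 (thick limit) and alpha (thin limit)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ratio_21_10(mid) > r:   # ratio too high -> need larger opacity
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(tau10_from_ratio(3.5))  # ~0.09: nearly optically thin
print(tau10_from_ratio(1.5))  # ~1.1: moderate opacity, as at the bullet velocities
```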
At the ambient velocities of the gas (between 0 and 15 km s<sup>-1</sup>) the emission is extended, and the ratio of $`1.5`$ (1 on the $`T_a^{*}`$ scale) corresponds to the value expected for extended optically thick emission. As the radial velocity increases, the ratio increases up to values around 2.7 at velocities of $`|v_0-v_{\mathrm{LSR}}|\sim 50`$ km s<sup>-1</sup>. In this velocity range our results are consistent with those of Snell et al. (1984), once the contribution from the extended emission within their larger beam is taken into account. For radial velocities larger than $`|v_0-v_{\mathrm{LSR}}|\sim 50`$ km s<sup>-1</sup>, the line ratio decreases again. The minimum value of 1.5, close to that expected for optically thick emission, occurs at $`\pm 55`$ km s<sup>-1</sup>, just at the radial velocities where the bullet features are found (Rodríguez–Franco et al., 1999). The smaller line ratio is consistent with the increase in CO column density due to the CO HV bullets. The line ratio rises to its maximum value, 3.5, at radial velocities near $`\pm 70`$ km s<sup>-1</sup>. These data suggest that the CO emission over most of the velocity range is optically thin, except at the radial velocities where the bullets are found. Then, except for the CO HV bullets, the CO integrated intensity can be translated directly into CO column densities. Table 2 gives the physical characteristics of the outflow and of the CO HV bullets. The opacity of the bullets is unknown. To derive their properties, we have assumed optically thin emission and the ring morphology observed by Rodríguez–Franco et al. (1999). This gives a lower limit to the mass in the CO bullet ring. The total mass in the bullet ring represents $`\stackrel{>}{}`$20% of the total mass of the outflow. We have also estimated the physical parameters of the high velocity gas and of the bullets associated with the Orion–SFBO.
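The optically thin LTE conversion from integrated intensity to CO column density can be sketched as below. The line constants are standard CO $`J=2-1`$ values, and the 80 K excitation temperature matches the value adopted in the text; the 100 K km s<sup>-1</sup> example intensity is purely illustrative:

```python
import math

# Optically thin LTE column density for CO J=2-1.
# Constants in CGS; line data are standard CO values (assumed, not from the text).
H = 6.62607e-27    # Planck constant, erg s
K = 1.38065e-16    # Boltzmann constant, erg/K
C = 2.99792e10     # speed of light, cm/s
NU = 230.538e9     # Hz, CO J=2-1 rest frequency
A_UL = 6.91e-7     # s^-1, Einstein A coefficient of J=2-1
EU_K = 16.60       # K, energy of the J=2 level
G_U = 5.0          # statistical weight 2J+1 for J=2
B0 = 57.6358e9     # Hz, CO rotational constant

def n_co_thin(int_T_dv_K_kms, tex=80.0):
    """Total CO column density (cm^-2) from integrated intensity (K km/s),
    assuming optically thin emission in LTE at temperature tex."""
    # Upper-level column from the thin, Rayleigh-Jeans radiative-transfer relation
    n_u = (8 * math.pi * K * NU**2 / (H * C**3 * A_UL)) * int_T_dv_K_kms * 1e5
    q = K * tex / (H * B0) + 1.0 / 3.0   # rigid-rotor partition function
    return n_u * (q / G_U) * math.exp(EU_K / tex)

n = n_co_thin(100.0)        # e.g. 100 K km/s of integrated wing emission
n_h2 = n / 1e-4             # CO abundance of 1e-4, as adopted in the text
print(f"N(CO) = {n:.2e} cm^-2, N(H2) = {n_h2:.2e} cm^-2")
```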
We have assumed optically thin emission, LTE excitation at a temperature of 80 K, and a typical CO abundance of 10<sup>-4</sup>. The results, given in Table 1, show the characteristics of the molecular outflow and of the bullets. The characteristics of these CO HV bullets are similar to those found in outflows from low mass stars. ## 6 A jet driven molecular outflow in the IRc2/I region The morphology, the presence of HV bullets, and the high degree of collimation of the Orion–SFBO are clear evidence that this outflow is jet driven, with the jet oriented close to the plane of the sky (see also Rodríguez–Franco et al., 1999). The situation for the IRc2/I outflow is less clear, and two models (wide-opening-angle and jet-driven outflow) have been proposed to explain the kinematics and structure of this molecular outflow. Any model proposed to explain the origin of the IRc2/I molecular outflow must account for the following observational facts: * The EHV gas does not show any clear bipolarity around IRc2/I; if any, it appears at a position located $`20^{\prime \prime }`$ north of this source. * The blue- and redshifted HV CO emission show a similar spatial distribution with an elliptical shape, centred near IRc2/I. * There is a systematic trend for the size of the HV gas to decrease with increasing radial velocity (see Fig. 2). * The CO HV bullets are distributed in a thin elliptical ring-like structure surrounding the EHV gas (see Figs. 3e and f). We will now discuss how the proposed models account for these observational results. ### 6.1 Wide-opening-angle conical outflow powered by source I The model used to explain the IRc2/I outflow at moderate velocities is a wide-opening-angle ($`130^\mathrm{o}`$) conical outflow oriented in the southeast–northwest direction and powered by source I (Chernin & Wright, 1996). This model was based on the weak bipolarity of the MHV gas in the Orion IRc2 outflow reported by Erickson et al. (1982).
Observations of the MHV CO emission with higher angular resolution seem to support this type of model (Chernin & Wright, 1996). Recent VLA observations of the SiO maser distribution can also be explained by this kind of model. However, the kinematics and the morphology of the low velocity H<sub>2</sub>O maser emission cannot be explained by the SiO outflow. Two alternatives have been suggested: an additional outflow powered by the same source and expelled perpendicular to that producing the SiO masers (Greenhill et al., 1998), and a flared outflow (Doeleman et al., 1999). Furthermore, from the morphology of the HV gas, Wilson et al. (1986) pointed out that if the HV CO emission were produced by a wide-opening-angle bipolar outflow driven by IRc2/I, a complex channeling of the outflowing gas would be required to account for the morphology of the MHV CO emission. This situation is even more extreme when the morphology of the EHV gas presented in this paper is considered (see also Rodríguez–Franco et al., 1999). If the outflow had a wide opening angle, one could use the morphology of the EHV gas to locate the powering source, as in the case of low mass stars (see Chernin & Wright, 1996). If so, the source(s) driving the outflow should be located $`20^{\prime \prime }`$ north of IRc2, and IRc2 itself would then not be the origin of the CO outflow (see Figs. 3c and d). Similar arguments would also apply to other wide-opening-angle wind models like those of Li & Shu (1996). ### 6.2 Multiple molecular outflows scenario One alternative to explain the CO morphologies would be the presence of several molecular outflows in the region: a wide-opening-angle bipolar outflow powered by source I, producing the CO emission at moderate velocities, and a compact, highly collimated, and very fast outflow (the EHV outflow) powered by an undetected source located approximately $`20^{\prime \prime }`$ north of IRc2.
However, such a multiple outflow scenario would require yet another outflow to explain the low velocity H<sub>2</sub>O maser emission, making this one of the highest concentrations of outflows found in a star forming region. Although the multiple outflow scenario is possible, all the observational features of the IRc2/I outflow (the HV CO emission, the low and high velocity H<sub>2</sub>O masers in the IRc2 region, the SiO masers, and the HV CO molecular bullets) could instead be caused by a single molecular outflow driven by a variable precessing jet oriented along the line of sight and powered by source I (Johnston et al., 1992; Rodríguez–Franco et al., 1999). ### 6.3 A molecular outflow driven by a wandering jet Rodríguez–Franco et al. (1999) have presented a number of arguments in favor of the possibility that the IRc2 outflow is driven by a wandering jet oriented along the line of sight. To strengthen the arguments for this model, we analyze in detail the size–terminal velocity dependence found in the previous section, and the mass distribution of the HV gas as a function of the radial velocity. #### 6.3.1 The size–velocity relation In Fig. 7a we present the area enclosed by the contour at a given terminal velocity as a function of that terminal velocity (see Figs. 3c and d). As already noted, the blue (filled squares) and the redshifted (filled triangles) HV gas show very similar distributions. This similarity could be due to a spherical or elliptical expansion at constant velocity, like that proposed for the H<sub>2</sub>O masers in Orion and in other massive star forming regions (Genzel & Downes, 1983; Greenhill et al., 1998). However, the area–velocity dependence expected for this kind of expansion (see the dashed and dotted lines in Fig. 7a) is inconsistent with the CO data, indicating that the isotropic low velocity outflow modelled for the H<sub>2</sub>O masers does not appear in the bulk of the HV gas.
The H<sub>2</sub>O masers therefore represent only a small fraction of the outflowing gas, and we exclude this possibility for the CO emission. We now consider that the observed size–velocity distribution is produced by a jet oriented along the line of sight. First, we compare the size–velocity distribution observed for the IRc2/I outflow with those of low mass outflows which are known to be driven by jets. Unfortunately, the Orion–SFBO cannot be used because, perpendicular to the jet, it is only slightly resolved by our beam. Good examples of jet driven outflows are L 1448 and IRAS 03282+3035 (hereafter I 3282) (see e.g. Bachiller et al., 1990, 1991). These two outflows are driven by low mass stars and their jets are oriented at a small angle ($`45^\mathrm{o}`$) relative to the plane of the sky. To compare the area–velocity distribution measured in these outflows with that expected when the jet is aligned along the line of sight, we have to rotate the outflow axis to point towards the observer. To do this, we have assumed that the outflows have a cylindrical geometry. The radial velocities have not been corrected for projection effects, since this is a constant factor for all velocities. The results for L 1448 and I 3282 are shown in Figs. 7b and c, respectively. Remarkably, the area–velocity dependence derived for both jet driven molecular outflows is very similar to that found for the IRc2/I outflow. These results strongly support the idea that the IRc2/I outflow is also driven by a jet oriented along the line of sight. To explain the area–velocity dependence found for these outflows, we have considered a very simple model which mimics the kinematics of a bipolar jet. In this simple model, the ejected material is very fast and well collimated around the symmetry axis.
Away from the jet axis, the material surrounding the jet is entrained, generating the more extended low velocity outflow with lower terminal velocities. We have modeled these kinematics by using a simple velocity law given by $$\stackrel{}{v}(x,y,z)=v_{jet}\mathrm{exp}\left(-\frac{x^2+z^2}{2p^2}\right)\stackrel{}{ȷ}$$ (1) where $`x`$, $`y`$ and $`z`$ are the spatial coordinates, $`p`$ is the collimation parameter of the outflow (i.e. the radius of the jet), $`v_{jet}`$ is the velocity of the molecular jet and $`\stackrel{}{ȷ}`$ is the direction in which the material is moving. When the material is within the jet ($`x^2+z^2\le 2p^2`$), $`\stackrel{}{ȷ}`$ is along the jet direction, while it is in the radial direction outside the jet. The bipolar morphology is taken into account by assuming that the HV gas is restricted to a biconical geometry. The model combines two kinds of parameters: the intrinsic outflow parameters, such as $`p`$ and $`v_{jet}`$, and the geometry (i.e. the cone parameters). The collimation parameter is derived by fitting the observations, and $`v_{jet}`$, which cannot be determined since the inclination of the jet to the line of sight is unknown, has been taken to be the terminal velocity measured from the data. If the symmetry axis of the cone is along the line of sight, the semi-axes of the ellipses on the plane of the sky ($`a`$ and $`c`$) are directly measured from the size corresponding to the minimum terminal velocity contour (see Figs. 3c and d). The length of the cone along the line of sight, $`b`$, is a free parameter determined from the model. Under the assumptions discussed above, the model contains only two free parameters: the collimation parameter and the size of the molecular outflow along the line of sight. The results of this simple model for the best fit to the data for the three outflows are shown in Figs. 7a, b, and c as solid lines, and the derived parameters are given in Table 3.
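For a jet pointed at the observer, the Gaussian law of Eq. (1) implies an analytic area–terminal-velocity relation on the sky: the contour with terminal velocity $`v_t`$ sits at radius $`r`$ with $`v_t=v_{jet}e^{-r^2/2p^2}`$, so the enclosed area is $`A(v_t)=2\pi p^2\mathrm{ln}(v_{jet}/v_t)`$. A minimal sketch of this on-axis projection (ignoring the radially redirected material outside the jet and the biconical truncation; the numerical inputs are illustrative):

```python
import math

def area_above_vt(v_t, v_jet, p):
    """Sky area (units of p^2) enclosing terminal velocities > v_t for the
    Gaussian velocity law v(r) = v_jet * exp(-r^2 / (2 p^2))."""
    if v_t >= v_jet:
        return 0.0
    return 2.0 * math.pi * p**2 * math.log(v_jet / v_t)

# Illustrative numbers: p = 0.06 pc (the jet radius derived for IRc2/I)
# and v_jet = 110 km/s (roughly the largest terminal velocity observed).
for v_t in (40.0, 60.0, 100.0):
    print(f"v_t = {v_t:5.1f} km/s -> A = {area_above_vt(v_t, 110.0, 0.06):.4f} pc^2")
```

The area shrinks logarithmically toward the terminal velocity, which is the qualitative behavior of the contours in Figs. 3c and d.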
The results are very sensitive to small changes in the collimation parameter, but very insensitive to the size of the outflow along the line of sight. Changes of $`20\%`$ in the collimation parameter greatly worsen the fit to the data. The data, however, can be fitted with any outflow length larger than the minor semi-axis of the ellipsoid. Our results indicate a similar collimation parameter for the two outflows driven by low mass stars, but a factor of 2 larger for the IRc2/I outflow. This difference can be related either to the low mass star outflows being much younger than the IRc2/I outflow, or to different types of stars driving the outflows. We conclude that the overall kinematics and the morphology of the EHV gas observed in the IRc2/I outflow are consistent with a jet driven molecular outflow oriented along the line of sight with a jet radius of 0.06 pc. ### 6.4 The mass distribution of the gas We have shown that a jet driven molecular outflow can explain the terminal velocities and the spatial distribution of the HV gas. The next question is, how does this model explain the mass distribution of the HV gas in the outflow? In Figs. 8a and b we show the masses derived from the CO line intensities, integrated in 5 km s<sup>-1</sup> intervals, as a function of velocity. This figure shows that the bulk of the mass at moderate radial velocities is also found at the locations where the gas shows the largest terminal velocities. In the proposed jet driven model, this corresponds to material located at small projected distances from the outflow axis (i.e. in the jet direction). At first glance, these results are surprising, since one expects to find only the highest velocity gas in the jet direction. However, there are two possibilities which could explain the derived mass distribution in the IRc2/I outflow.
In the first, one assumes that, in the vicinity of the exciting source, large quantities of gas move at all velocities as a result of the dragging of the ambient material by the action of the jet. In the second, one assumes just the opposite, i.e. that the mass at moderate velocities is located far from the exciting source, only in the jet heads, where the jet impacts on the ambient medium. In this case, the HV gas will appear at all velocities only in the direction of the jet. Unfortunately, the orientation of the IRc2/I outflow along the line of sight prevents us from determining which of the two proposed scenarios accounts for the data. Again, we compare the results for the IRc2/I outflow with those obtained from the jet driven outflows L 1448 and I 3282, as described in Section 6.3.1. We have estimated the expected mass distribution of the L 1448 and I 3282 outflows if their jets were oriented along the line of sight. For these two outflows it is possible to separate the contributions to the total mass of the outflow from two different regions: the head lobes and the vicinity of the exciting source. Figs. 8c to e show the mass distribution as a function of the radial velocity in these two regions for the L 1448 and the I 3282 outflows, in velocity intervals of 10 km s<sup>-1</sup>. Using the results for L 1448 and I 3282, the bulk of the mass at all terminal velocities would arise from the head lobes close to the jet axis, where one also observes the largest terminal velocities. Only a small fraction of the mass is found close to the exciting source. These findings are consistent with the observations of the IRc2/I outflow and imply that the largest fraction of the mass of the outflowing gas at low velocities is concentrated preferentially in the head lobes, close to the jet axis.
The similarity between the results obtained for the bipolar outflows L 1448 and I 3282 and those found in the IRc2/I outflow suggests that the three outflows have a very similar mass distribution. This would indicate that the massive outflow driven by source I is produced by a jet, and that a large fraction of the outflowing gas at low or moderate velocities is located at the ends of the jets, in the head lobes just in front of and behind the exciting source. ## 7 The interaction between the jet and the ambient material The precessing jet scenario proposed for the molecular outflow in Orion has important consequences for explaining the different phenomena produced by the interaction of the outflow with the ambient gas, such as the vibrationally excited H<sub>2</sub> emission (hereafter H$`{}_{2}^{*}`$), the shock chemistry found in this region, and the location and origin of the low velocity and high velocity H<sub>2</sub>O masers. Figure 9 shows a sketch of the proposed model, indicating the regions where the different emissions could arise. According to this model (Rodríguez–Franco et al., 1999), a precessing jet with very high velocities (larger than 100 km s<sup>-1</sup>) is aligned close to the line of sight. The HV jet interacts with the surrounding material, sweeping up a considerable amount of gas and dust. The impact of the precessing jet on the ambient molecular gas produces a number of bow shocks at the heads of the two lobes. Within the most recent bow shocks one expects to observe H$`{}_{2}^{*}`$ emission, bullets, and the low and high velocity H<sub>2</sub>O masers, as discussed in detail by Rodríguez–Franco et al. (1999). A precessing jet would explain the distribution of the HV bullets and the large number of H<sub>2</sub>O masers at low radial velocities.
We will now discuss how the proposed model can also explain the overall properties of the H$`{}_{2}^{*}`$ emission and of the “plateau” emission. ### 7.1 The H$`{}_{2}^{*}`$ emission The most straightforward evidence of the interaction between the high velocity jets and the ambient medium comes from H$`{}_{2}^{*}`$ (see e.g., Garden et al., 1986; Doyon & Nadeau, 1988). In our model, the H$`{}_{2}^{*}`$ emission should appear mostly at the head lobes (see Fig. 9). The H$`{}_{2}^{*}`$ emission is expected to be located in the intermediate layer between the jet and the HV CO emission (see Rodríguez–Franco et al., 1999), and one would therefore expect a similar extent for both emissions. This is illustrated in the central panel of Fig. 10, where we compare the spatial distribution of the CO emission in the bow shock with that of the H$`{}_{2}^{*}`$. In the proposed geometry, the H$`{}_{2}^{*}`$ emission should be affected by the extinction produced by the HV blueshifted gas and dust located between the observer and the redshifted H$`{}_{2}^{*}`$ layer. Indeed, though the overall H$`{}_{2}^{*}`$ and CO bow shock emissions are very similar, there are also important differences. As shown in Fig. 10, the most intense CO bow shock emission is located between IRc2 and the H$`{}_{2}^{*}`$ Peak 1 (Beckwith et al., 1978), just in the region where the H$`{}_{2}^{*}`$ emission shows a minimum. This difference agrees with the idea of a large accumulation of material near the jet heads, close to the direction where the jet impinges. From our CO data, the largest column density in the blueshifted bow shock is found $`12^{\prime \prime }`$ north of IRc2/I.
At this position we derive, from the HV CO data, an H<sub>2</sub> column density of $`4.5\times 10^{21}`$cm<sup>-2</sup>, which corresponds to an extinction at 2.1$`\mu `$m of $`0.5`$mag, in good agreement with the extinction of 0.6 mag derived by Geballe et al. (1986). Additional evidence in favor of the proposed geometry comes from the variation of the extinction as a function of velocity derived from the H$`{}_{}{}^{}{}_{2}{}^{}`$ emission. Scoville et al. (1982) and Geballe et al. (1986) have found that the extinction in the H$`{}_{}{}^{}{}_{2}{}^{}`$ line wings is larger than at the line center by $`1`$mag. If the ejection is jet-driven, a large quantity of ambient material accumulates at the head of the lobes near the outflow axis, where one observes the largest radial velocities. Therefore, if the jet axis is oriented along the line of sight, the highest velocities should be the most affected by the extinction produced by the gas and dust pushed by the outflow. In a similar way, the proposed geometry can naturally explain the asymmetry observed in the H$`{}_{}{}^{}{}_{2}{}^{}`$ line emission, in which the blueshifted emission is less extincted (between 0.1 and $`1.5`$mag) than the redshifted emission. With the proposed scenario, one can estimate the ratio between the H$`{}_{}{}^{}{}_{2}{}^{}`$ and the CO HV material in the bow shock. For the position of Pk1, the vibrationally excited hydrogen column density is $`6\times 10^{17}`$cm<sup>-2</sup> (Brand et al., 1988), and the CO column density obtained for both outflow wings is $`1.8\times 10^{17}`$cm<sup>-2</sup>. The CO/H$`{}_{}{}^{}{}_{2}{}^{}`$ ratio would then be 0.2–0.3. This indicates that, as expected, only a very small fraction of the shocked gas is hot enough to be detected in the H$`{}_{}{}^{}{}_{2}{}^{}`$ lines.
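The two numerical estimates above can be reproduced in a few lines (a back-of-the-envelope sketch, not the authors' code; the gas-to-extinction conversion factor at 2.1 μm is an assumed standard value consistent with the quoted 0.5 mag):

```python
# Hypothetical check of the column density and ratio quoted above.
N_H2 = 4.5e21          # cm^-2, H2 column from the HV CO data toward the bow shock
N_per_mag_21um = 9e21  # cm^-2 mag^-1, assumed gas-to-extinction conversion at 2.1 um
A_21um = N_H2 / N_per_mag_21um
print(f"A(2.1um) ~ {A_21um:.1f} mag")   # ~0.5 mag, cf. 0.6 mag in Geballe et al. (1986)

N_CO  = 1.8e17         # cm^-2, CO column in both outflow wings at Pk1
N_H2s = 6.0e17         # cm^-2, vibrationally excited H2 column (Brand et al., 1988)
print(f"CO/H2* ~ {N_CO / N_H2s:.1f}")   # ~0.3
```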
Even after the correction for extinction, the thickness of the H$`{}_{}{}^{}{}_{2}{}^{}`$ layer must be at least two orders of magnitude smaller than that of the colder shocked CO layer.

### 7.2 The “plateau” emission

The “plateau” component is the source of broad line wings in a number of molecules which are believed to be produced by shock chemistry. Based on observations of molecules like SO, SO<sub>2</sub>, SiO and HCO<sup>+</sup>, which are good tracers of the low-velocity outflow, Plambeck et al. (1982) and Wright et al. (1995) have suggested that this low velocity emission arises from a ring or “doughnut” of gas expanding outward from IRc2/I. The origin of these molecules is closely related to shocks (see Martín–Pintado et al., 1992), and they are produced by the interaction of the outflowing gas with the dense ambient clouds (Bachiller & Pérez–Gutiérrez, 1997). In the proposed model, the bulk of the shocked gas occurs in the bow shocks produced at the heads of the two lobes, in the direction of the line of sight. As in the low mass outflows (Bachiller & Pérez–Gutiérrez, 1997), the bow shocks will favor the observation of molecules characteristic of shock chemistry, like HCO<sup>+</sup>, SO, SO<sub>2</sub> or SiO, in the region of high density. This might explain why this emission has low velocity and is observed just around the outflow axis. A similar situation has been observed in L 1157, an outflow driven by a low mass star, whose axis lies almost in the plane of the sky (Bachiller & Pérez–Gutiérrez, 1997). In this outflow the abundance of molecules like SiO, CH<sub>3</sub>OH, H<sub>2</sub>CO, HCN, CN, SO and SO<sub>2</sub> is enhanced by at least an order of magnitude in the shocked region at the head of both lobes, with low abundances in the vicinity of the exciting source.

### 7.3 Models of jet-driven molecular outflows

Most of the mass observed in molecular outflows consists of ambient material entrained by a “primary wind” from the central source.
Two basic entrainment processes have been considered to explain the bulk of the mass in bipolar outflows (see e.g. Cabrit, 1995).

* Steady-state viscous mixing layers, produced via Kelvin–Helmholtz (KH) instabilities at the interface between the outflow and the ambient cloud (Stahler, 1994).
* Prompt entrainment, produced at the jet head in a curved bow shock that accelerates and sweeps up the ambient gas, creating a dense shell with a low density cocoon surrounding the jet (Raga & Cabrit, 1993).

Studies of the CO line profiles in several molecular outflows indicate that prompt entrainment at the jet head is the main mechanism for molecular entrainment (see e.g. Chernin et al., 1994). Furthermore, precessing jets have also been invoked to explain the large opening angles of bipolar outflows (Chernin & Masson, 1995). However, it has been argued that the precession angles are small, and the propagation of a large bow shock seems to be the dominant mechanism in the formation of bipolar outflows around low mass stars (Gueth et al., 1996). The scenario proposed here (a jet driven molecular outflow) for the IRc2/I outflow confirms that the main entrainment mechanism in this outflow is also prompt entrainment at the jet heads, in agreement with the Raga & Cabrit (1993) model. However, the presence of HV bullets (see Figs. 3e and f) is best explained (see Rodríguez–Franco et al., 1999) by the model of a precessing jet (Raga & Biro, 1993). Furthermore, the area-velocity relation is well explained, both in low mass star molecular outflows (L 1448, I 3282) and in the massive IRc2/I outflow, by a radial expansion from the exciting source similar to that found in the L 1157 outflow (Gueth et al., 1996). As illustrated in the sketch in Fig.
9, this is easily explained in the framework of the precessing jet with prompt entrainment, since the shocked material will always be moving in the direction of the jet, i.e., radially away from the exciting source. Another important result from our data is the large transverse velocities measured from the CO data, which can be up to 20–30% of the jet velocity. The large transverse velocities in jet driven molecular outflows would also explain the shell-like outflows as more evolved objects. The typical time scale for a young jet-like molecular outflow to evolve into a shell-like molecular outflow with a cavity of $`3`$pc is $`10^5`$years for the typical transverse velocity measured in the IRc2/I outflow. This is in agreement with the ages found in shell-like outflows powered by intermediate mass stars (NGC 7023, Fuente et al., 1998) and low mass stars (L 1551-IRS 5, Plambeck & Snell, 1995).

## 8 Conclusions

We have mapped the Orion region in the J=2–1 line of CO with the 30 m telescope. From these maps we have detected high velocity gas in two regions: the IRc2/I outflow and the Orion–S outflow. The main results for the Orion–S outflow can be summarized as follows.

* The bipolar molecular outflow in the Orion-S region presented in this paper is very fast ($`110`$km s<sup>-1</sup>) and compact ($`\stackrel{<}{}`$$`0.16`$pc). It is highly collimated and shows HV bullets. It is perpendicular to the monopolar low velocity ($`<30`$km s<sup>-1</sup>) outflow known in this region (Schmid–Burgk et al., 1990).
* The location of the possible exciting source is estimated, from the kinematics of the high velocity gas, to be $`20^{\prime \prime }`$ north of the position of FIR$`\mathrm{\hspace{0.17em}4}`$. At this position no continuum source at cm or mm wavelengths has been detected.
* The morphology of this bipolar outflow suggests a very young (dynamical age of $`10^3`$years) jet driven molecular outflow similar to those powered by low mass stars.

For the IRc2/I outflow the main results are:

* While the HV gas with moderate velocities ($`v_{LSR}55`$km s<sup>-1</sup>) is centred on IRc2/I, with only very weak bipolarity around it, the morphology of the blue and redshifted HV gas at the most extreme velocities $`v_{LSR}9`$$`\stackrel{<}{}`$80 km s<sup>-1</sup> (EHV) peaks $`20^{\prime \prime }`$ northwest of IRc2/I and $`10^{\prime \prime }`$ south of IRc9. The EHV gas does not show any clear bipolarity around IRc2/I. The only possible bipolarity, in the east–west direction, is found $`20^{\prime \prime }`$ north of IRc2. The blue and redshifted HV CO emission show a similar spatial distribution, with an elliptical-like shape centred in the vicinity of IRc2/I, and a systematic trend of the size of the HV gas to decrease as a function of velocity.
* The morphology and kinematics of the HV CO emission cannot be accounted for by the most widely accepted model, the wide opening angle outflow. We discuss other alternatives such as multiple outflows and a precessing jet driven molecular outflow oriented along the line of sight.
* We have compared the size-velocity dependence and the mass distribution in the Orion IRc2/I outflow with those derived from the jet driven molecular outflows powered by low mass stars (L$`\mathrm{\hspace{0.17em}1448}`$ and I$`\mathrm{\hspace{0.17em}3282}`$) when these are projected to have the jet oriented along the line of sight. We find very good agreement between the jet driven molecular outflows of low mass stars and the Orion IRc2/I outflow, indicating that this outflow can be jet driven.
* The size-velocity dependence found for the outflows is fit using a simple velocity law which considers the presence of a highly collimated jet and entrained material.
The velocity decreases exponentially away from the jet, and the entrained material outside the jet moves in the radial direction. We derive similar collimation parameters of 0.03 pc for the L$`\mathrm{\hspace{0.17em}1448}`$ and I$`\mathrm{\hspace{0.17em}3282}`$ outflows, and a value a factor of two larger for the Orion–IRc2/I outflow. This difference might be an age effect, or due to the different types of exciting stars.

* From a comparison of the mass distribution as a function of velocity, we conclude that the bulk of the HV gas in the Orion IRc2/I outflow is produced by prompt entrainment at the head of the jet.
* The morphology and kinematics of the shock tracers (H$`{}_{}{}^{}{}_{2}{}^{}`$, H<sub>2</sub>O masers, H<sub>2</sub> bullets, the “plateau” emission) and the radial direction of the entrained HV gas in the IRc2/I outflow are qualitatively explained within the scenario of a molecular outflow driven by a precessing jet oriented along the line of sight. The large transverse velocities found in this outflow can explain the shell-type outflows as the final evolutionary stage of the younger jet driven outflows.

Acknowledgements. This work was partially supported by the Spanish CAICYT under grant number PB93-0048.
# Stability of Ferromagnetism in Hubbard models with degenerate single-particle ground states

## 1 Introduction

The problem of ferromagnetism in itinerant electron systems has a long history. It is clear that ferromagnetism (as any other ordering in itinerant electron systems) occurs due to the interaction of the electrons, or, to be more precise, due to a subtle interplay between the kinetic motion of the electrons and the interaction. In 1963, Hubbard , Kanamori , and Gutzwiller formulated and studied a simple tight-binding model of electrons with an on-site Coulomb repulsion of strength $`U`$. This model is usually called the Hubbard model. Although the assumption that a realistic system can be described by a purely local repulsion of the electrons is artificial, the Hubbard model became a paradigm for the study of correlated electron systems. The reason is that already a pure on-site interaction can produce many ordering effects that have been observed in electronic systems. The mechanisms that are responsible for some long range order in the ground state of the Hubbard model are probably the same in more complicated (and more realistic) models. From a theoretical point of view the Hubbard model is very interesting, because it offers the possibility to derive ordering phenomena in a simple model that does not contain special interactions favouring this order. In this paper we present a result on ferromagnetism in the Hubbard model. This is an old problem, which has been studied extensively using various approximative methods. The simplest approach is the Hartree-Fock approximation. It yields the Stoner criterion $`U\rho _F>1`$ for the occurrence of ferromagnetism in the Hubbard model. $`\rho _F`$ is the density of states at the Fermi energy. It is well known that this criterion overestimates the occurrence of ferromagnetism. There are situations where $`\rho _FU`$ is infinite and the ground state of the Hubbard model is not ferromagnetic.
Ferromagnetism is not a universal property of the Hubbard model. As far as we know it occurs on special lattices and in special regions of the parameter space. In the discussion of ferromagnetism in correlated electron systems, more realistic models with e.g. an additional ferromagnetic interaction between the electrons or a Hund’s coupling between several bands in a multi-band system have been discussed as well. It is clear that in a realistic description of itinerant ferromagnetism such additional interactions are present and may favour the occurrence of ferromagnetism. But it is a challenging problem to derive conditions for the occurrence of ferromagnetism in a Hubbard model, which does not explicitly contain such interactions. The hope is that results for this model contribute substantially to the understanding of ferromagnetism in more realistic models. The first rigorous result on ferromagnetism in the Hubbard model is the so-called Theorem of Nagaoka: on a large class of lattices, the Hubbard model has a ferromagnetic ground state if the Coulomb repulsion is infinite and if there is one electron fewer than lattice sites. A very general proof of this theorem has been given by Tasaki . A second class of systems, for which the existence of ferromagnetic ground states has been shown rigorously, are the so-called flat band models. In 1989, Lieb proved an important theorem on the Hubbard model: at half filling (one electron per lattice site) and on a bipartite lattice, the ground state of the Hubbard model is unique up to the usual spin degeneracy. The spin of the ground state is given by $`S=\frac{1}{2}\left||A|-|B|\right|`$, where $`|A|`$ and $`|B|`$ are the numbers of lattice sites of the two sublattices of the bipartite lattice. When this quantity is extensive, the system is ferrimagnetic. In that case, the model has strongly degenerate single particle eigenstates at the Fermi level; $`\rho _F`$ is infinite.
On a translationally invariant lattice, such a model has several bands, one of which is flat. Later, it has been shown that a multiband Hubbard model for which the lowest band is flat shows ferromagnetism. These lattices are not bipartite; generally they contain triangles or next-nearest-neighbor hoppings. A typical example is the Hubbard model on the kagomé lattice . There are several extensions of the flat band ferromagnetism. The most important result has been derived by Tasaki . He discussed the question whether the flat band ferromagnet is stable with respect to small perturbations. He showed under very general assumptions that for a class of multi-band Hubbard models with a nearly flat lowest band the ferromagnetic state is stable with respect to single spin flips if the Coulomb repulsion $`U`$ is sufficiently large and if the nearly flat band is half filled. This local stability of the ferromagnetic state suggests its global stability. The class of models for which Tasaki was able to prove this important result consists of models for which the nearly flat, lowest band is separated from the rest of the spectrum by a sufficiently large gap. Therefore one would expect that these models describe an insulating ferromagnet. This is a general problem for the flat band ferromagnetism as well. The flat band models show ferromagnetism if the flat band is half filled or less than half filled. Even if the flat band is less than half filled, or if the model has no gap between the flat band and the other bands (this is the case for the kagomé lattice), the system may be an insulator. The reason is that for an entirely flat band, the system can be described by localized states as well. Furthermore, the existence of a basis of localized states was an essential part of the proofs. Another extension of the flat band ferromagnetism are models with a partially flat band.
In , a general necessary and sufficient condition for the uniqueness of ferromagnetic ground states has been derived for a model with a degenerate single particle ground state. This result holds only if the number of electrons is equal to the number of degenerate single particle ground states. But it does not require a gap in the spectrum or an entirely flat band; a partially flat band is sufficient. A Hubbard model with a single band far away from half filling is expected to be a conductor. This remains true if the band is partially flat. Therefore these models may describe a metallic ferromagnet. A generalization of this result to a situation where the number of electrons is less than the number of single particle states has recently been published . The main result of that letter is that in a single band Hubbard model with a degenerate single particle ground state, local stability of ferromagnetism implies global stability if the number of electrons is less than or equal to the number of degenerate single particle ground states. Stability is meant here in the sense of absolute stability: the ferromagnetic ground state is the only ground state of the system. That local stability of ferromagnetism implies global stability has often been assumed but is by no means guaranteed. It would certainly be useful to know in which situations this is the case. The aim of the present paper is to generalize the result in to a general Hubbard model with degenerate single particle ground states. It is not necessary to have a single band model. It is not even necessary to have translational invariance, although this would be a natural assumption. Let us mention that Hubbard models with a partially flat band are not merely academic toy models. Very recently Arita et al used such a model to explain the negative magnetoresistance of certain organic conductors. They mention that standard band-structure calculations for these materials yield a partially flat band.
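As a concrete illustration (not taken from this paper), the sawtooth chain is a standard example of a lattice whose lowest band is completely flat: with hopping $`t`$ along the chain and $`t^{\prime }=\sqrt{2}t`$ on the zigzag bonds, the lower Bloch band sits at $`2t`$ for every $`k`$. A minimal numerical check:

```python
import numpy as np

def sawtooth_bands(t=1.0, nk=201):
    """Bloch bands of the sawtooth (Delta) chain; for tp = sqrt(2)*t
    the lower band is flat at -2t."""
    tp = np.sqrt(2.0) * t
    ks = np.linspace(-np.pi, np.pi, nk)
    bands = []
    for k in ks:
        hk = np.array([[2 * t * np.cos(k), tp * (1 + np.exp(-1j * k))],
                       [tp * (1 + np.exp(1j * k)), 0.0]])
        bands.append(np.linalg.eigvalsh(hk))  # sorted ascending per k
    return np.array(bands)                    # shape (nk, 2)

bands = sawtooth_bands()
print(bands[:, 0].min(), bands[:, 0].max())   # both -2.0: the band is flat
```

Detuning $`t^{\prime }`$ slightly away from $`\sqrt{2}t`$ gives the lower band a weak dispersion, which is exactly the kind of "nearly flat" or "partially flat" situation discussed above.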
The paper is organized as follows. The next section contains the main definitions and results. The proof of the result combines the ingredients of the proofs in and . The main part of the proof is the choice of a suitable basis. This choice is discussed in Section 3. Section 4 contains a new proof of the result in . The proof in used an induction in the number of degenerate single particle states and was not intuitive at all. On the other hand, the basic idea why the condition in should be true is simple. The new proof is based on this basic idea and is much easier. Furthermore, it can be generalized to situations where the number of electrons is less than the number of degenerate single particle states. This generalization is presented in Section 5.

## 2 Main result

We consider a Hubbard model on a finite lattice with $`N_s`$ sites. The Hamiltonian is
$$H=H_{\text{hop}}+H_{\text{int}},$$ (1)
where
$$H_{\text{hop}}=\sum _{x,y,\sigma }t_{xy}c_{x\sigma }^{\dagger }c_{y\sigma },$$ (2)
and
$$H_{\text{int}}=\sum _xU_xn_{x\uparrow }n_{x\downarrow }.$$ (3)
$`x`$ and $`y`$ are lattice sites. As usual $`c_{x\sigma }^{\dagger }`$ and $`c_{x\sigma }`$ are the creation and annihilation operators of an electron on the site $`x`$ with a spin $`\sigma =\uparrow ,\downarrow `$. They satisfy the anticommutation relations $`[c_{x\sigma },c_{y\tau }^{\dagger }]_+=\delta _{xy}\delta _{\sigma \tau },`$ and $`[c_{x\sigma },c_{y\tau }]_+=[c_{x\sigma }^{\dagger },c_{y\tau }^{\dagger }]_+=0.`$ The number operator is defined as $`n_{x\sigma }=c_{x\sigma }^{\dagger }c_{x\sigma }.`$ The hopping matrix $`T=(t_{xy})`$ is real symmetric, and the on-site Coulomb repulsion $`U_x`$ is positive. We do not need to assume any kind of translational symmetry; therefore the lattice is simply a collection of sites. We allow the local Coulomb repulsion to depend on $`x`$. The total number of electrons is $`N_e=\sum _{x\in \mathrm{\Lambda }}(n_{x\uparrow }+n_{x\downarrow })`$.
The Hubbard model has an $`SU(2)`$ spin symmetry; the Hamiltonian commutes with the spin operators
$$\stackrel{}{S}=\sum _x\sum _{\sigma ,\tau =\uparrow ,\downarrow }c_{x\sigma }^{\dagger }(\stackrel{}{p})_{\sigma \tau }c_{x\tau }/2,$$ (4)
where $`\stackrel{}{p}`$ is the vector of Pauli matrices,
$$p_1=\left(\begin{array}{cc}0& 1\\ 1& 0\end{array}\right),p_2=\left(\begin{array}{cc}0& -i\\ i& 0\end{array}\right),p_3=\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right).$$ (5)
We denote by $`S(S+1)`$ the eigenvalue of $`\stackrel{}{S}^2`$. The eigenstates of the hopping matrix $`T`$ are $`\phi ^i`$, the corresponding eigenvalues are $`ϵ_i`$, $`i=1,\dots ,N_s`$. Let $`ϵ_i\le ϵ_j`$ for $`i<j`$. In the following we will discuss a Hubbard model with $`N_d`$ degenerate single particle ground states. The energy scale is chosen such that $`ϵ_i=0`$ for $`i\le N_d`$. The main result of the present paper can now be formulated.

> *Theorem* – In a Hubbard model with $`N_d`$ degenerate single particle ground states and $`N_e\le N_d`$ electrons, local stability of ferromagnetism implies global stability: the model has only ferromagnetic ground states with a spin $`S=\frac{N_e}{2}`$ if and only if there are no single spin-flip ground states (ground states with a spin $`S=\frac{N_e}{2}-1`$).

This theorem is true for any positive Coulomb repulsion $`U_x`$. The existence of ferromagnetic ground states is indeed trivial: any multi-particle state that contains only electrons with spin up in single particle states $`\phi ^i`$, $`i\le N_d`$, is a ground state of the Hamiltonian. It is even a ground state of the kinetic part and of the interaction part of the Hamiltonian separately. The problem is to show that there are no further ground states. The above theorem yields a necessary and sufficient condition for the existence of non-ferromagnetic ground states.
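The $`SU(2)`$ algebra behind Eqs. (4)–(5) can be verified mechanically (a generic consistency check, not code from the paper): with the standard Pauli matrices and $`S_a=p_a/2`$, one must have $`[S_a,S_b]=i\epsilon _{abc}S_c`$.

```python
import numpy as np

# Standard Pauli matrices p_1, p_2, p_3
p = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
S = [m / 2 for m in p]

# Totally antisymmetric Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for a in range(3):
    for b in range(3):
        comm = S[a] @ S[b] - S[b] @ S[a]
        expected = 1j * sum(eps[a, b, c] * S[c] for c in range(3))
        assert np.allclose(comm, expected)
print("SU(2) commutation relations verified")
```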
As already mentioned in , one can use degenerate perturbation theory to generalize the result to a situation where the flat part of the band does not lie at the bottom of the single-particle spectrum. A concrete model where such a situation occurs has been investigated by Arita et al . They investigated a special type of one-dimensional Hubbard model used for the description of atomic quantum wires. These models have a flat band that is not situated at the bottom of the spectrum.

## 3 Choice of the basis

The main part of the proof of the theorem is the choice of an appropriate basis. It turns out that the single particle basis used in is useful. In this section we give a detailed construction of such a basis. The starting point is the representation
$$T=\left(\begin{array}{cc}C^{\dagger }T_0C& C^{\dagger }T_0\\ T_0C& T_0\end{array}\right)$$ (6)
of the hopping matrix. Here $`T_0`$ is a positive $`(N_s-N_d)\times (N_s-N_d)`$-matrix, $`\text{rank}T_0=N_s-N_d`$. $`C`$ is an $`(N_s-N_d)\times N_d`$-matrix. This representation of $`T`$ can be obtained as follows: since $`\text{rank}T=N_s-N_d`$, one can find $`N_s-N_d`$ rows (or columns) of $`T`$ which are linearly independent. We label the corresponding sites by $`x=N_d+1,\dots ,N_s`$. $`T_0`$ is the submatrix $`(t_{xy})_{x,y\in \{N_d+1,\dots ,N_s\}}`$. Since $`T`$ is non-negative, $`T_0`$ is positive. The matrix $`C`$ is given by $`(T_0)^{-1}T_{01}`$, where $`T_{01}=(t_{xy})_{x\in \{N_d+1,\dots ,N_s\},y\in \{1,\dots ,N_d\}}`$. The other matrix elements of $`T`$ are fixed, since $`T`$ is symmetric and since the other rows of $`T`$ are linearly dependent. By construction, the single particle ground states obey $`T\psi =0`$. This holds if and only if
$$\psi =\left(\begin{array}{c}\overline{\psi }\\ -C\overline{\psi }\end{array}\right)$$ (7)
A basis of single particle ground states can be obtained by choosing an arbitrary set of $`N_d`$ linearly independent vectors $`\overline{\psi }`$.
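The block construction of Eq. (6) is easy to test numerically (a sketch under stated assumptions: $`T_0`$ a random positive definite matrix, $`C`$ random). Writing $`T=(C,1)^{\dagger }T_0(C,1)`$ reproduces the four blocks, and the $`N_d`$ vectors $`(\overline{\psi },-C\overline{\psi })`$ — note the relative sign — span its kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
Ns, Nd = 8, 3
M = Ns - Nd

A = rng.normal(size=(M, M))
T0 = A @ A.T + M * np.eye(M)     # positive definite (Ns-Nd)x(Ns-Nd) block
C = rng.normal(size=(M, Nd))     # arbitrary real (Ns-Nd)xNd matrix

# T = (C,1)^T T0 (C,1), assembled blockwise as in Eq. (6)
T = np.block([[C.T @ T0 @ C, C.T @ T0],
              [T0 @ C,       T0      ]])

evals = np.sort(np.linalg.eigvalsh(T))
assert np.allclose(evals[:Nd], 0)    # exactly Nd degenerate ground states at 0
assert evals[Nd] > 0                 # rest of the spectrum strictly positive

# kernel vectors psi = (psi_bar, -C psi_bar)
for i in range(Nd):
    pb = np.eye(Nd)[:, i]
    psi = np.concatenate([pb, -C @ pb])
    assert np.allclose(T @ psi, 0)
print("Nd zero modes confirmed")
```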
We choose the basis $`\{\psi _i:\overline{\psi }_i(x)=\delta _{x,i}\}`$. Since the $`t_{xy}`$ are real, the $`\psi _i(x)`$ are real. This basis is not orthonormal. The matrix $`B=(b_{ij})_{i,j=1,\dots ,N_d}`$ with $`b_{ij}=\sum _x\psi _i(x)\psi _j(x)`$ is positive, and the dual basis is formed by $`\psi _i^d(x)=\sum _j(B^{-1})_{ij}\psi _j(x).`$ One has $`\sum _x\psi _i^d(x)\psi _j(x)=\delta _{i,j}`$. We introduce creation operators for electrons in the state $`\psi _i(x)`$,
$$a_{i\sigma }^{\dagger }=\sum _x\psi _i(x)c_{x\sigma }^{\dagger }$$ (8)
and the corresponding dual annihilation operators
$$a_{i\sigma }=\sum _x\psi _i^d(x)c_{x\sigma }$$ (9)
They obey the anticommutation relations $`[a_{i\sigma },a_{j\tau }^{\dagger }]_+=\delta _{i,j}\delta _{\sigma ,\tau }`$. These creation and annihilation operators can now be used to construct multi-particle states. The unique ferromagnetic ground state with $`N_e=N_d`$ electrons and $`S=S_3=N_d/2`$ is
$$\psi _{0F}=\prod _ia_{i\uparrow }^{\dagger }|0\rangle $$ (10)
A general ground state of the kinetic part of the Hamiltonian is given by
$$\psi ^{n,m}(\alpha )=S_{-}^{n,m}(\alpha )\psi _{0F}$$ (11)
where
$$S_{-}^{n,m}(\alpha )=\sum _{j_1\dots j_m;i_1\dots i_n}\alpha _{j_1\dots j_m;i_1\dots i_n}\prod _ka_{j_k\downarrow }^{\dagger }\prod _ka_{i_k\uparrow }$$ (12)
This state has $`N_e=N_d-n+m`$ electrons. In the following I assume that $`N_e\le N_d`$, i.e. $`m\le n`$. $`\psi ^{n,m}(\alpha )`$ is a state with $`S_3=(N_d-n-m)/2=N_e/2-m`$. It obeys $`S_+\psi ^{n,m}(\alpha )=0`$ if and only if
$$\sum _k\alpha _{k,j_1\dots j_{m-1};k,i_1\dots i_{n-1}}=0.$$ (13)
In that case it is a state with a spin $`S=S_3`$. We want to derive a condition for $`\psi ^{n,m}(\alpha )`$ to be a ground state of the Hamiltonian. A necessary and sufficient condition is
$$c_{x\uparrow }c_{x\downarrow }\psi ^{n,m}(\alpha )=0$$ (14)
for all $`x`$. For $`x\le N_d`$ one obtains $`a_{i\uparrow }a_{i\downarrow }\psi ^{n,m}(\alpha )=0`$. Therefore $`\alpha _{j_1\dots j_m;i_1\dots i_n}=0`$ if $`\{j_1\dots j_m\}`$ is not a subset of $`\{i_1\dots i_n\}`$.
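The biorthogonality of this basis and its dual can likewise be checked directly (continuing the random toy construction; the sign convention $`\psi =(\overline{\psi },-C\overline{\psi })`$ for the zero modes is assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
Ns, Nd = 8, 3
C = rng.normal(size=(Ns - Nd, Nd))

# columns of Psi are the basis states psi_i with psi_bar_i(x) = delta_{x,i}
Psi = np.vstack([np.eye(Nd), -C])             # shape (Ns, Nd)

B = Psi.T @ Psi                               # overlap matrix b_ij, positive definite
assert np.all(np.linalg.eigvalsh(B) > 0)

PsiD = Psi @ np.linalg.inv(B)                 # dual basis psi_i^d
assert np.allclose(PsiD.T @ Psi, np.eye(Nd))  # sum_x psi_i^d(x) psi_j(x) = delta_ij

# single particle density matrix rho = sum_j psi_j psi_j^d
rho = PsiD @ Psi.T
assert np.allclose(rho @ rho, rho)            # a projector onto the zero-mode space
print("dual basis is biorthogonal")
```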
It turns out that this fact is important, since it simplifies the proof substantially.

## 4 The case $`N_e=N_d`$

This case has already been discussed in . In the following we obtain a simplified proof of the result in . Let us first discuss the stability with respect to single spin flips. A ferromagnetic ground state is called stable with respect to a single spin flip if there is no single spin-flip state with the same energy. I derive a necessary and sufficient condition for $`\psi _{0F}`$ to be stable with respect to a single spin flip. A general state with a single spin flip can be written in the form
$$\psi =\sum _{j,k}\alpha _{j;k}a_{j\downarrow }^{\dagger }a_{k\uparrow }\psi _{0F}$$ (15)
If and only if $`\alpha _{j;k}\propto \delta _{j,k}`$, $`\psi `$ is the unique ferromagnetic ground state with $`S=N_d/2`$, $`S_z=N_d/2-1`$. If and only if $`\sum _j\alpha _{j;j}=0`$, $`\psi `$ is a state with $`S=S_z=N_d/2-1`$. Therefore I assume $`\sum _j\alpha _{j;j}=0`$. $`\psi `$ is a ground state if and only if $`c_{x\uparrow }c_{x\downarrow }\psi =0`$.
$$c_{x\uparrow }c_{x\downarrow }\psi =\sum _{j,k,l}\alpha _{j;k}\psi _j(x)\psi _l(x)a_{l\uparrow }a_{k\uparrow }\psi _{0F}$$ (16)
The right hand side vanishes if and only if
$$\sum _j\psi _j(x)\left(\alpha _{j;k}\psi _l(x)-\alpha _{j;l}\psi _k(x)\right)=0\quad \forall k,l$$ (17)
I introduce
$$\stackrel{~}{\psi }_k(x)=\sum _j\alpha _{j;k}\psi _j(x)$$ (18)
The condition for $`\alpha _{j;k}`$ yields
$$\stackrel{~}{\psi }_k(x)\psi _l(x)-\stackrel{~}{\psi }_l(x)\psi _k(x)=0\quad \forall k,l,x$$ (19)
A trivial solution is $`\stackrel{~}{\psi }_k(x)=\psi _k(x)`$. It corresponds to $`\alpha _{j;k}\propto \delta _{j,k}`$ and has been excluded above. Multiplying the condition for $`\stackrel{~}{\psi }_k(x)`$ with $`\psi _l^d(y)\psi _k^d(z)`$ and summing over $`k`$ and $`l`$ yields
$$\stackrel{~}{\rho }_{y,x}\rho _{x,z}-\rho _{y,x}\stackrel{~}{\rho }_{x,z}=0$$ (20)
where $`\rho _{y,x}=\sum _j\psi _j(x)\psi _j^d(y)`$, $`\stackrel{~}{\rho }_{y,x}=\sum _j\stackrel{~}{\psi }_j(x)\psi _j^d(y)`$.
If the matrix $`\rho _{x,y}`$ is irreducible, the only solution is $`\stackrel{~}{\rho }_{y,x}=\rho _{y,x}`$. It corresponds to $`\alpha _{j;k}\propto \delta _{j,k}`$ and has been excluded above. If the matrix $`\rho _{x,y}`$ is reducible, the equation for $`\stackrel{~}{\rho }_{y,x}`$ has another, non-trivial solution. From the non-trivial solution for $`\stackrel{~}{\rho }_{y,x}`$ one obtains a solution for $`\alpha _{j;k}`$, from which one can easily construct a solution with $`\sum _j\alpha _{j;j}=0`$. Thus $`\psi _{0F}`$ is stable with respect to a single spin flip if and only if $`\rho _{x,y}`$ is irreducible. This is the condition derived previously in . To derive this condition, it was not necessary to use the special single particle basis introduced in Section 3. The use of this basis is useful for the investigation of multi spin-flip states. Let us now consider a multi spin-flip state $`\psi ^{n,m}(\alpha )`$. It is a ground state if and only if
$$\sum _P(-1)^P\sum _{j_1}\psi _{j_1}(x)\psi _{k_{P(n+1)}}(x)\alpha _{j_1\dots j_n;k_{P(1)}\dots k_{P(n)}}=0$$ (21)
Since $`\alpha _{j_1\dots j_n;i_1\dots i_n}`$ is antisymmetric in the last $`n`$ indices, it is sufficient to sum over all cyclic permutations:
$$\sum _{r=1}^{n+1}(-1)^{n-r}\sum _{j_1}\psi _{j_1}(x)\psi _{k_r}(x)\alpha _{j_1\dots j_n;k_{r+1}\dots k_{n+1},k_1\dots k_{r-1}}=0$$ (22)
Since $`\alpha _{j_1\dots j_n;i_1\dots i_n}\ne 0`$ only if $`\{j_1\dots j_n\}=\{i_1\dots i_n\}`$, we obtain for $`n=1`$ (the single spin-flip case)
$$\psi _k(x)\psi _{k^{\prime }}(x)(\alpha _{k;k}-\alpha _{k^{\prime };k^{\prime }})=0$$ (23)
With $`\stackrel{~}{\psi }_k(x)=\alpha _{k;k}\psi _k(x)`$ this yields the original condition (19). This means that with this choice of the basis, the functions $`\stackrel{~}{\psi }_k(x)`$ are either equal to $`\psi _k(x)`$ or vanish.
A solution exists if the set $`\{\psi _k(x),k=1\dots N_d\}`$ decays into two subsets such that $`\psi _k(x)\psi _{k^{\prime }}(x)=0`$ whenever the two factors are from different subsets. This is equivalent to the above condition on the single particle density matrix $`\rho _{x,y}`$. For $`n>1`$ we now use the fact that the set $`\{j_1,\dots ,j_n\}`$ is a subset of $`\{k_1,\dots ,k_{n+1}\}`$. I choose $`j_2=k_1`$, $`j_3=k_2`$, etc. in (22). With this choice, only the terms with $`r\ge n`$ in the sum over $`r`$ do not vanish. For $`r=n`$ the only non-vanishing contribution in the sum over $`j_1`$ is $`j_1=k_{n+1}`$. For $`r=n+1`$ one has $`j_1=k_n`$. This finally yields
$$\psi _{k_n}(x)\psi _{k_{n+1}}(x)(\alpha _{k_1\dots k_n;k_1\dots k_n}-\alpha _{k_{n+1}k_1\dots k_{n-1};k_{n+1}k_1\dots k_{n-1}})=0$$ (24)
For some fixed $`k_1,\dots ,k_{n-1}`$ I let $`\alpha _{k;k}=\alpha _{kk_1\dots k_{n-1};kk_1\dots k_{n-1}}`$. The indices $`k_1\dots k_{n-1}`$ are chosen such that $`\alpha _{k;k}`$ does not vanish identically, which is possible since $`\alpha _{j_1\dots j_n;i_1\dots i_n}`$ does not vanish identically. This shows that the existence of a multi spin-flip ground state implies the existence of a single spin-flip ground state. Therefore the ferromagnetic ground state of the Hubbard model with $`N_e=N_d`$ electrons is the unique ground state (up to the spin degeneracy due to the $`SU(2)`$ symmetry) if and only if $`\rho _{xy}`$ is irreducible.

## 5 The case $`N_e<N_d`$

It is now very easy to generalize the second part of the above derivation to the case $`N_e<N_d`$. We will show that the existence of a multi spin-flip ground state implies the existence of a single spin-flip ground state. Let $`\psi ^{1,n}(\alpha )`$, $`n>1`$, be a single spin-flip ground state for $`N_e=N_d-n+1`$ electrons.
The condition that this is a ground state yields
$$\sum _{r=1}^{n+1}(-1)^{n-r}\sum _{j_1\in \{k_1,\dots ,k_{n+1}\}\setminus \{k_r\}}\psi _{j_1}(x)\psi _{k_r}(x)\alpha _{j_1;k_{r+1}\dots k_{n+1}k_1\dots k_{r-1}}=0$$ (25)
The sum over $`j_1`$ is restricted to the set $`\{k_1,\dots ,k_{n+1}\}\setminus \{k_r\}`$, since otherwise $`\alpha _{j_1;k_{r+1}\dots k_{n+1}k_1\dots k_{r-1}}`$ vanishes. The analogous condition for a multi spin-flip ground state $`\psi ^{m,n+m-1}(\alpha )`$ is
$$\sum _{r=1}^{n+m}(-1)^{(n+m-1)-r}\sum _{j_1\in \{k_1,\dots ,k_{n+m}\}\setminus \{k_r,j_2\dots j_m\}}\psi _{j_1}(x)\psi _{k_r}(x)\alpha _{j_1\dots j_m;k_{r+1}\dots k_{n+m}k_1\dots k_{r-1}}=0$$ (26)
I let $`j_r=k_{n+r}`$, $`r\ge 2`$. Then the sum over $`r`$ runs from $`1`$ to $`n+1`$ and the sum over $`j_1`$ runs over all elements of $`\{k_1,\dots ,k_{n+1}\}\setminus \{k_r\}`$; all other terms vanish identically. One obtains
$$\sum _{r=1}^{n+1}(-1)^{n-r}\sum _{j_1\in \{k_1,\dots ,k_{n+1}\}\setminus \{k_r\}}\psi _{j_1}(x)\psi _{k_r}(x)\alpha _{j_1k_{n+2}\dots k_{n+m};k_{r+1}\dots k_{n+1}k_1\dots k_{r-1}k_{n+2}\dots k_{n+m}}=0$$ (27)
Therefore we can choose $`\stackrel{~}{\alpha }_{j_1;k_{r+1}\dots k_{n+1}k_1\dots k_{r-1}}=\alpha _{j_1k_{n+2}\dots k_{n+m};k_{r+1}\dots k_{n+1}k_1\dots k_{r-1}k_{n+2}\dots k_{n+m}}`$ for some fixed $`k_{n+2},\dots ,k_{n+m}`$, such that $`\stackrel{~}{\alpha }_{j_1;k_{r+1}\dots k_{n+1}k_1\dots k_{r-1}}`$ does not vanish identically. This is possible since $`\alpha _{j_1\dots j_m;k_1\dots k_n}`$ does not vanish identically. The corresponding single spin-flip state $`\psi ^{1,n}(\stackrel{~}{\alpha })`$ is thus a ground state for $`N_e=N_d-n+1`$ electrons. The proof presented here is considerably simpler than the proof in .
Compared to the proof in , it has the advantage that the existence of the single spin-flip ground state is trivial, whereas in a lengthy calculation (hidden in footnote 13) was necessary to show this. On the other hand, for a translationally invariant multi-band system the basis used here is clearly artificial. But if one uses a natural basis of Bloch states, it is very hard to construct the single spin-flip states from multi spin-flip states. ## 6 Summary and Outlook In this paper it is shown that for a general Hubbard model with degenerate single particle ground states, local stability of ferromagnetism implies global stability of ferromagnetism. To be more precise: if there are no single spin-flip ground states, all ground states have the maximal spin. This result holds if the number of electrons is less than or equal to the number of degenerate single particle ground states. A similar result has been proven for a single band Hubbard model in , and the present result can be seen as a generalization. It holds for a Hubbard model with more than one band, and it holds even if the model does not have translational invariance. Furthermore, the proof presented here is much simpler than the proof in . The result is important since in many cases it is much simpler to show local stability of ferromagnetism than global stability. In , Tasaki showed that under very general conditions the flat band ferromagnetism can be extended to situations where the lowest band is not flat but has a weak dispersion. He was able to prove that in such a situation the ferromagnetic ground state is locally stable. It would be very interesting to obtain conditions under which in that case the ferromagnetic ground state is globally stable, i.e., where it is the true ground state of the system. This is clearly a very difficult project. The present theorem does not apply since Tasaki's model does not have degenerate single particle ground states. But one may hope that a generalization is possible. 
If in a situation with degenerate single particle ground states the ferromagnetic ground state is the only one, it is possible that ferromagnetism is stable with respect to small perturbations of the Hamiltonian. The main problem is clearly that for a general model there is no gap in the single particle spectrum like the one in the models discussed by Tasaki . From a physics point of view, ferromagnetism for models with a partially flat band, as studied in this paper, differs from flat-band ferromagnetism, since a flat-band ferromagnet is typically an insulator, whereas models with a partially flat band describe metals. Therefore our new approach is a step towards the understanding of metallic ferromagnetism in the Hubbard model. ### Acknowledgement I am grateful to Hideo Aoki for drawing my attention to refs. and .
# Refined Factorizations of Solvable Potentials ## I Introduction We shall begin this section by recalling some basic facts of the standard factorization method, as can be found for instance in , mainly to fix the notation. Afterwards, we will lay out the general lines along which more general factorizations can be defined, and the way they depart from the conventional ones previously characterized. Let us consider a sequence of stationary one dimensional Schrödinger equations, labeled by an integer number $`\mathrm{}`$, written in the form $$H^{\mathrm{}}\psi _n^{\mathrm{}}\left\{\frac{d^2}{dx^2}+V^{\mathrm{}}(x)\right\}\psi _n^{\mathrm{}}(x)=E_n^{\mathrm{}}\psi _n^{\mathrm{}}(x),$$ (1) where the constants $`\mathrm{}`$ and $`m`$ have been conveniently reabsorbed. If such a set (or ‘hierarchy’) of Hamiltonians can be expressed as $$H^{\mathrm{}}=X_{\mathrm{}}^+X_{\mathrm{}}^{}q(\mathrm{})=X_\mathrm{}1^{}X_\mathrm{}1^+q(\mathrm{}1),$$ (2) where $$X_{\mathrm{}}^\pm =\frac{d}{dx}+w_{\mathrm{}}(x),$$ (3) with the $`w_{\mathrm{}}(x)`$ being functions and the $`q(\mathrm{})`$ constants, then we will say that they admit a factorization. From (3) we have that $`X_{\mathrm{}}^\pm `$ are Hermitian conjugates of each other, $`\left(X_{\mathrm{}}^{}\right)^{}=X_{\mathrm{}}^+`$, with respect to the usual inner product of the Schrödinger equation. This is consistent with the factorization (2) and the hermiticity of $`H^{\mathrm{}}`$. We shall focus on studying the discrete spectrum of each Hamiltonian, so we further impose that the equation $$X_{\mathrm{}}^{}\psi _{\mathrm{}}^{\mathrm{}}(x)=0$$ (4) determines the ground state of $`H^{\mathrm{}}`$ if it exists. Of course many other properties related to the continuous spectrum can also be derived with the help of factorizations, but they lie outside our present scope. Some consequences that can immediately be derived from the previous conditions are enumerated below. * Spectrum. 
Let $`\psi _{\mathrm{}}^{\mathrm{}}`$ be the ground state of $`H^{\mathrm{}}`$ as stated in (4); then its energy is precisely $`E_{\mathrm{}}^{\mathrm{}}=q(\mathrm{})`$. When there are excited bounded states $`\psi _n^{\mathrm{}}`$, with $`n=\mathrm{},\mathrm{}+1,\mathrm{}`$, their energy is given by $`E_n^{\mathrm{}}=q(n)`$. Therefore, in these circumstances, $`q(\mathrm{})`$ should be an increasing function of $`\mathrm{}`$. * Eigenfunctions. It is straightforward to check, for each $`\mathrm{}`$, the intertwining relations $$H^{\mathrm{}}X_{\mathrm{}}^+=X_{\mathrm{}}^+H^{\mathrm{}+1},H^{\mathrm{}+1}X_{\mathrm{}}^{}=X_{\mathrm{}}^{}H^{\mathrm{}}.$$ (5) Let us designate by $`^{\mathrm{}}=\{\psi _n^{\mathrm{}}\}_n\mathrm{}`$ the Hilbert space spanned by the bounded states of $`H^{\mathrm{}}`$, for $`\mathrm{}\text{}`$. Then, due to (5) the operators $`X_{\mathrm{}}^\pm `$ link these spaces as $$\begin{array}{cc}X_{\mathrm{}}^{}:^{\mathrm{}}^{\mathrm{}+1}& X_{\mathrm{}}^+:^{\mathrm{}+1}^{\mathrm{}}\\ X_{\mathrm{}}^{}\psi _n^{\mathrm{}}(x)\psi _n^{\mathrm{}+1}(x),& X_{\mathrm{}}^+\psi _n^{\mathrm{}+1}(x)\psi _n^{\mathrm{}}(x).\end{array}$$ (6) Note that the action of $`X_{\mathrm{}}^\pm `$ preserves the label $`n`$, that is, they connect eigenfunctions with the same energy $`E_n^{\mathrm{}}`$. If the eigenfunctions are normalized, we can be more explicit: up to an arbitrary phase factor, $$\begin{array}{cc}X_{\mathrm{}}^{}\psi _n^{\mathrm{}}(x)=\sqrt{q(\mathrm{})q(n)}\psi _n^{\mathrm{}+1}(x),\hfill & n\mathrm{}\hfill \\ X_{\mathrm{}}^+\psi _n^{\mathrm{}+1}(x)=\sqrt{q(\mathrm{})q(n)}\psi _n^{\mathrm{}}(x),\hfill & n>\mathrm{}.\hfill \end{array}$$ (7) Similar considerations would also apply if the ground states were defined through $`X^+`$. 
Depending on the particular problem we will use one of the following notations $`X_\mathrm{}1^+(r)\psi _{\mathrm{}}^{\mathrm{}}=0,\mathrm{if}\mathrm{}0,`$ (8) $`X_\mathrm{}1^+(r)\psi _{\mathrm{}}^{\mathrm{}}=0,\mathrm{if}\mathrm{}0.`$ (9) For such a case $`q(\mathrm{})`$ must be a decreasing function of $`\mathrm{}`$. We shall also have the opportunity to illustrate this situation in some examples in the next sections. Now, it is natural to define a set of free-index linear operators $`\{X^\pm ,L\}`$ acting on the direct sum of the Hilbert spaces $`_{\mathrm{}}^{\mathrm{}}`$ by means of $$X^{}\psi _n^{\mathrm{}}:=X_{\mathrm{}}^{}\psi _n^{\mathrm{}},X^+\psi _n^{\mathrm{}}:=X_\mathrm{}1^+\psi _n^{\mathrm{}},L\psi _n^{\mathrm{}}:=\mathrm{}\psi _n^{\mathrm{}},$$ (10) where one must keep in mind (6) and (7). That is, the operators $`X^\pm `$ act on each function $`\psi _n^{\mathrm{}}(x)`$ by means of the differential operators (3) changing $`\mathrm{}`$ into $`\mathrm{}1`$. The action on any other vector of $``$ can be obtained from (10) by linearization, but we shall never need it. At this point we are not in a position to guarantee that the space $``$ is invariant under this action (it might happen that the action of $`X^\pm `$ on $``$ could lead us to the continuous spectrum, or even to an unphysical eigenfunction), but we postpone this problem to the examples of Section III. Taking into account our definitions (10), it is straightforward to arrive at the following commutators, $$[L,X^\pm ]=X^\pm ,[X^+,X^{}]=q(L)q(L1).$$ (11) It is clear that the set of operators $`\{X^\pm ,L\}`$ in general does not close a Lie algebra; relations (11) only allow us to speak formally of an associative algebra. There are many aspects of the conventional factorizations characterized above which can be modified, mainly with the objective of being applicable to a wider class of systems (see for example ). 
However, in this paper we are interested in exploring more deeply the possibilities of this method for a class of systems where the usual factorization can already be applied, so that it could supply us with additional information. With this aim, we shall stress two points that will be useful in the next sections. First, we shall assume that the operators $`X_{\mathrm{}}^\pm `$ do not necessarily have to take the form given in (3). In particular, if we have a family of invertible operators $`D_{\mathrm{}}`$ and define $`Y_{\mathrm{}}^+=X_{\mathrm{}}^+D_{\mathrm{}}^1`$, $`Y_{\mathrm{}}^{}=D_{\mathrm{}}X_{\mathrm{}}^{}`$, we will also have $$H^{\mathrm{}}=X_{\mathrm{}}^+X_{\mathrm{}}^{}q(\mathrm{})=Y_{\mathrm{}}^+Y_{\mathrm{}}^{}q(\mathrm{}).$$ (12) The new factor $`D_{\mathrm{}}`$ may be a function (which would add nothing especially new), but it may also be a local operator, i.e., an operator acting on wavefunctions in the form $$D_{\mathrm{}}\psi (x)=\psi (g_{\mathrm{}}(x)),$$ (13) where $`g_{\mathrm{}}`$ is a bijective real function. An example of such an operator, which was already used in , is given by the dilation, $$D(\mu )\psi (x)=\psi (\mu x),\mu >0.$$ (14) Second, an eigenvalue equation can be characterized by more than one label; this consideration has also been explored by Barut et al. , but in another context. In the next section we shall deal with two real labels; this will enable us to have more possible ways to factorize the Hamiltonian hierarchy, and the sequence of labels will not be limited (essentially) to the integers, but will instead be constituted by a lattice of points in $`\text{}^2`$. This increase in the number of factorizations will be reflected in a larger algebra of free-index operators. In particular, among them, there can be lowering and raising operators for each Hamiltonian, which can never be obtained by the conventional factorization method. 
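Before moving to the two-label scheme, the operator identity behind the conventional factorization (2)-(3) can be checked symbolically. The short Python (sympy) sketch below assumes the usual sign convention $`X_\ell ^\pm =\mp \frac{d}{dx}+w_\ell (x)`$; the sample superpotential $`w(x)=x^2+1`$ is an arbitrary illustration, not one of the hierarchies treated later. It verifies that $`X^+X^{}`$ acts as the Schrödinger operator $`-\frac{d^2}{dx^2}+w^2-w^{}`$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
psi = sp.Function('psi')(x)

# sample superpotential; any differentiable w(x) works the same way
w = x**2 + 1

Xminus = lambda f: sp.diff(f, x) + w*f     # X^- = d/dx + w
Xplus = lambda f: -sp.diff(f, x) + w*f     # X^+ = -d/dx + w

# X^+ X^- psi should equal (-d^2/dx^2 + w^2 - w') psi
lhs = Xplus(Xminus(psi))
rhs = -sp.diff(psi, x, 2) + (w**2 - sp.diff(w, x))*psi
print(sp.simplify(sp.expand(lhs - rhs)))   # 0
```

Comparing $`X^+X^{}`$ and $`X^{}X^+`$ computed in this way for two consecutive members of a hierarchy is precisely the condition that (2) imposes on the functions $`w_\ell `$ and the constants $`q(\ell )`$.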
Section III will illustrate how our general method works when it is applied to three well-known potentials: radial oscillator, Morse, and radial Coulomb. For each of these potentials we shall see that the results so obtained can be used to recover, as special cases, those corresponding to the standard factorizations. Finally, some comments and remarks will end this paper. ## II Refined Factorizations Once the spectrum $`E_n^{\mathrm{}}`$ of the hierarchy $`H^{\mathrm{}}`$ is known, we propose a somewhat more general factorization of the eigenvalue equations than the one already displayed in (2), as follows: $$h_{n,\mathrm{}}(x)\left[H^{\mathrm{}}E_n^{\mathrm{}}\right]=B_{n,\mathrm{}}A_{n,\mathrm{}}\varphi (n,\mathrm{})=A_{\stackrel{~}{n},\stackrel{~}{\mathrm{}}}B_{\stackrel{~}{n},\stackrel{~}{\mathrm{}}}\varphi (\stackrel{~}{n},\stackrel{~}{\mathrm{}}).$$ (15) This must be understood as a series of relationships valid for a class of allowed values of the parameters $`(n,\mathrm{})\text{}^2`$. Here $`B_{n,\mathrm{}}`$ and $`A_{n,\mathrm{}}`$ are first order differential operators in the wider sense specified in the previous section, $`h_{n,\mathrm{}}(x)`$ denote functions, and $`\varphi (n,\mathrm{})`$ are constants. The $`(\stackrel{~}{n},\stackrel{~}{\mathrm{}})`$ values depend on $`(n,\mathrm{})`$, i.e., $`(\stackrel{~}{n},\stackrel{~}{\mathrm{}})=F(n,\mathrm{})`$, where $`F:\text{}^2\text{}^2`$ is an invertible map defined on a certain domain. The iterated action of $`F`$ or $`F^1`$ on a fixed initial point $`(n_0,\mathrm{}_0)\text{}^2`$ generates a sequence of points in $`\text{}^2`$ that will play a role similar to that of the integer sequence $`\mathrm{}`$ in the usual factorizations. In principle the points $`(n,\mathrm{})`$ obtained by this new approach can take integer values for both arguments, but we do not discard other possibilities a priori. 
The problem of finding solutions to this kind of factorizations becomes more involved because we have additional functions $`h_{n,\mathrm{}}(x)`$ to be determined. Nevertheless, an important and immediate consequence of (15) is that the operators $`B_{n,\mathrm{}},A_{n,\mathrm{}}`$ share properties similar to (5) with respect to their analogs $`\{X_{\mathrm{}}^\pm \}`$ : $$\begin{array}{c}\left[h_{\widehat{n},\widehat{\mathrm{}}}(x)\left(H^\widehat{\mathrm{}}E_{\widehat{n}}^\widehat{\mathrm{}}\right)\right]A_{n,\mathrm{}}=A_{n,\mathrm{}}\left[h_{n,\mathrm{}}(x)\left(H^{\mathrm{}}E_n^{\mathrm{}}\right)\right],\hfill \\ B_{n,\mathrm{}}\left[h_{\widehat{n},\widehat{\mathrm{}}}(x)\left(H^\widehat{\mathrm{}}E_{\widehat{n}}^\widehat{\mathrm{}}\right)\right]=\left[h_{n,\mathrm{}}(x)\left(H^{\mathrm{}}E_n^{\mathrm{}}\right)\right]B_{n,\mathrm{}},\hfill \end{array}$$ (16) where $`F(\widehat{n},\widehat{\mathrm{}})=(n,\mathrm{})`$. Therefore, using the same notation as in (6), $$\begin{array}{cc}A_{n,\mathrm{}}:^{\mathrm{}}^\widehat{\mathrm{}}& B_{n,\mathrm{}}:^\widehat{\mathrm{}}^{\mathrm{}}\\ A_{n,\mathrm{}}\psi _n^{\mathrm{}}(x)\psi _{\widehat{n}}^\widehat{\mathrm{}}(x),& B_{n,\mathrm{}}\psi _{\widehat{n}}^\widehat{\mathrm{}}(x)\psi _n^{\mathrm{}}(x).\end{array}$$ (17) In this case, the most relevant differences with respect to the usual factorizations are: * $`B_{n,\mathrm{}},A_{n,\mathrm{}}`$ in general do not preserve the energy eigenvalue, they may change both labels $`n`$ and $`\mathrm{}`$. * $`A_{n,\mathrm{}}`$ does not act on the whole space $`^{\mathrm{}}`$, it acts just on the eigenfunction $`\psi _n^{\mathrm{}}(x)^{\mathrm{}}`$ (the same can be said of $`B_{n,\mathrm{}}`$ with respect to $`\psi _{\widehat{n}}^\widehat{\mathrm{}}(x)^\widehat{\mathrm{}}`$). When $`n=\widehat{n}`$ and $`h_{n,\mathrm{}}(x)=1`$, we recover the conventional case with $`B_{n,\mathrm{}}`$, $`A_{n,\mathrm{}}`$ playing the role of $`X_{\mathrm{}}^+`$, $`X_{\mathrm{}}^{}`$, respectively. 
However, the hermiticity properties for the general case are lost because the product $`B_{n,\mathrm{}}A_{n,\mathrm{}}`$ does not give the Hamiltonian operator alone, but includes a non-constant multiplicative factor. We can define the free-index operators $`\{A,B,L,N\}`$ as we did in (10), where the latter is defined by $`N\psi _n^{\mathrm{}}=n\psi _n^{\mathrm{}}`$. They satisfy the following commutation rules $$\begin{array}{ccc}[L,B]=B(\stackrel{~}{L}L),\hfill & [N,B]=B(\stackrel{~}{N}N),\hfill & [B,A]=\varphi (N,L)\varphi (\stackrel{~}{N},\stackrel{~}{L})\hfill \\ [L,A]=(L\stackrel{~}{L})A,\hfill & [N,A]=(N\stackrel{~}{N})A,\hfill & [N,L]=0,\hfill \end{array}$$ (18) where $`(\stackrel{~}{N},\stackrel{~}{L})=F(N,L)`$. As the operators $`L,N`$ commute, their eigenvalues are used to label the common eigenfunctions $`\psi _n^{\mathrm{}}(x)`$. We must also notice that the equation $`A_{n,\mathrm{}}\psi _n^{\mathrm{}}(x)=0`$ (or $`B_{n,\mathrm{}}\psi _{\widehat{n}}^\widehat{\mathrm{}}(x)=0`$) does not necessarily give an eigenfunction of $`H^{\mathrm{}}`$ (or $`H^\widehat{\mathrm{}}`$); this happens to be the case only when $`\varphi (n,\mathrm{})=0`$. ## III Applications ### A Radial Oscillator Potential As usual the Hamiltonian of the two dimensional harmonic oscillator includes the effective radial potential $`V^{\mathrm{}}(r)=r^2+\frac{(2\mathrm{}+1)(2\mathrm{}1)}{4r^2}`$, where $`\mathrm{}=0,1\mathrm{}`$ is the angular momentum. 
The related stationary Schrödinger equation has discrete eigenvalues denoted according to the following convention, $$E_n^{\mathrm{}}=2n+2,n=2\nu +\mathrm{};\nu =0,1,\mathrm{}$$ It can be factorized in two ways according to our general scheme: $$\frac{1}{4}\left[H^{\mathrm{}}E_n^{\mathrm{}}\right]=\frac{1}{4}\left[\frac{d^2}{dr^2}r^2\frac{(2\mathrm{}+1)(2\mathrm{}1)}{4r^2}+E_n^{\mathrm{}}\right]=B_{n,\mathrm{}}^iA_{n,\mathrm{}}^i\varphi ^i(n,\mathrm{}),i=1,2$$ (19) with $`\varphi ^i(n,\mathrm{})`$ given by $$\varphi ^1(n,\mathrm{})=\frac{1}{2}(n+\mathrm{}+2),\varphi ^2(n,\mathrm{})=\frac{1}{2}(n\mathrm{}+2),$$ (20) and where the action on the parameters associated to each factorization is given respectively by the functions $$(n,\mathrm{})=F^1(n+1,\mathrm{}+1),(n,\mathrm{})=F^2(n+1,\mathrm{}1).$$ (21) This can also be written in an easier notation, $$\begin{array}{c}A^1:(n,\mathrm{})(n+1,\mathrm{}+1)\hfill \\ B^1:(n+1,\mathrm{}+1)(n,\mathrm{})\hfill \end{array}\begin{array}{c}A^2:(n,\mathrm{})(n+1,\mathrm{}1)\hfill \\ B^2:(n+1,\mathrm{}1)(n,\mathrm{}).\hfill \end{array}$$ (22) The explicit form of these intertwining operators is $$\{\begin{array}{c}A_{n,\mathrm{}}^1(r)=\frac{1}{2}\left[\frac{d}{dr}r(\mathrm{}+1/2)\frac{1}{r}\right]\hfill \\ B_{n,\mathrm{}}^1(r)=\frac{1}{2}\left[\frac{d}{dr}+r+(\mathrm{}+1/2)\frac{1}{r}\right]\hfill \end{array}\{\begin{array}{c}A_{n,\mathrm{}}^2(r)=\frac{1}{2}\left[\frac{d}{dr}r+(\mathrm{}1/2)\frac{1}{r}\right]\hfill \\ B_{n,\mathrm{}}^2(r)=\frac{1}{2}\left[\frac{d}{dr}+r(\mathrm{}1/2)\frac{1}{r}\right]\hfill \end{array}$$ (23) Observe that in this case, as $`h_{n,\mathrm{}}(r)`$ is a constant, we are able to implement also the hermiticity properties $`(A^i)^{}=B^i`$. 
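The factorization (19) can be verified symbolically. The sympy sketch below assumes the conventional signs $`H^\ell =-\frac{d^2}{dr^2}+r^2+\frac{(2\ell +1)(2\ell -1)}{4r^2}`$, $`A^1=\frac{1}{2}[\frac{d}{dr}-r-(\ell +1/2)\frac{1}{r}]`$ and $`B^1=-\frac{1}{2}[\frac{d}{dr}+r+(\ell +1/2)\frac{1}{r}]`$ (a reading consistent with $`(A^1)^\dagger =B^1`$), and checks that $`B^1A^1-\varphi ^1(n,\ell )`$ reproduces $`\frac{1}{4}[H^\ell -E_n^\ell ]`$ identically in $`n`$ and $`\ell `$:

```python
import sympy as sp

r, n, l = sp.symbols('r n ell', positive=True)
psi = sp.Function('psi')(r)

# radial oscillator Hamiltonian and spectrum E_n = 2n + 2
H = lambda f: -sp.diff(f, r, 2) + (r**2 + (2*l + 1)*(2*l - 1)/(4*r**2))*f
E = 2*n + 2

# refined factorization operators (signs as assumed in the text above)
A1 = lambda f: (sp.diff(f, r) - r*f - (l + sp.Rational(1, 2))*f/r)/2
B1 = lambda f: -(sp.diff(f, r) + r*f + (l + sp.Rational(1, 2))*f/r)/2
phi1 = (n + l + 2)/2

# B1 A1 - phi1 should equal (H - E)/4 as an operator identity
residual = sp.simplify(sp.expand(B1(A1(psi)) - phi1*psi - (H(psi) - E*psi)/4))
print(residual)   # 0
```

Under the same assumptions, the $`i=2`$ pair passes the analogous check with the superpotential $`r-(\ell -1/2)\frac{1}{r}`$ in place of $`r+(\ell +1/2)\frac{1}{r}`$.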
The nonvanishing commutation rules for the free-index operators $`\{N,L,A^i,B^i;i=1,2\}`$ are shown to be, in agreement with (18), $$\begin{array}{ccc}[L,B^i]=(1)^iB^i,\hfill & [N,B^i]=B^i,\hfill & [A^i,B^i]=1,\hfill \\ [L,A^i]=(1)^iA^i,\hfill & [N,A^i]=A^i,\hfill & i=1,2.\hfill \end{array}$$ (24) These commutators correspond to two independent boson algebras with $`N,L`$ being linear combinations of their number operators. Formally we can extend the values of $`\mathrm{}`$ so as to include the negative integers. This is physically appealing because in two space dimensions (only!) $`\mathrm{}`$ represents the $`L_z`$-component of angular momentum, so that it could take negative integer values. Of course the extension $`\psi _n^{\mathrm{}}(r):=\psi _n^{\mathrm{}}(r)`$ proposed above is consistent with such an interpretation: (i) The radial components for opposite $`L_z`$-values have to coincide, and (ii) The potential $`V^{\mathrm{}}`$ is invariant under the interchange $`\mathrm{}\mathrm{}`$. With this convention, the Hilbert space $``$ of bounded states is invariant under the action of the operators $`\{N,L,A^i,B^i;i=1,2\}`$, so that it constitutes the support for a lowest weight irreducible representation for the algebra (24) based on the fundamental state $`\psi _{n=0}^{\mathrm{}=0}`$. It is worth noticing that, taking into account (22), the composition $`\{A^1A^2,B^1B^2\}`$ constitutes the lowering and raising operators for each Hamiltonian $`H^{\mathrm{}}`$, while the pair $`\{A^1B^2,A^2B^1\}`$ connects states of different Hamiltonians $`H^{\mathrm{}}`$ with the same energy, changing only the label $`\mathrm{}`$. We shall compare briefly the above results with the conventional factorizations of the two-dimensional radial oscillator potential . 
It is well known that there are two such factorizations, which we will write in the form: $`(a)`$ $`X_{\mathrm{}}^+X_{\mathrm{}}^{}q_x(\mathrm{})=H_x^{\mathrm{}}=H^{\mathrm{}}2\mathrm{}`$ (25) $`(b)`$ $`Z_{\mathrm{}}^+Z_{\mathrm{}}^{}q_z(\mathrm{})=H_z^{\mathrm{}}=H^{\mathrm{}}+2\mathrm{},`$ (26) with $`H^{\mathrm{}}=\frac{d^2}{dr^2}+r^2+\frac{(2\mathrm{}+1)(2\mathrm{}1)}{4r^2}`$. Then we have the following identification: Case $`(a)`$ 1. Operators: $`X_{\mathrm{}}^+=2B_{n,\mathrm{}}^1`$, $`X_{\mathrm{}}^{}=2A_{n,\mathrm{}}^1`$, $`q_x(\mathrm{})=4\mathrm{}2`$. 2. Ground states: $`X_\mathrm{}1^+\psi _{\mathrm{}}^{\mathrm{}}=0`$, $`\mathrm{}0`$. 3. Energy eigenvalues: $`E_n^{\mathrm{}}=4n+2`$, with $`n\text{}^+`$ and $`n\mathrm{}`$. In this case we have used a notation in agreement with (8). Case $`(b)`$ 1. Operators: $`Z_{\mathrm{}}^+=2A_{n1,\mathrm{}+1}^2`$, $`Z_{\mathrm{}}^{}=2B_{n1,\mathrm{}+1}^2`$, $`q_z(\mathrm{})=4\mathrm{}2`$. 2. Ground states: $`Z_{\mathrm{}}^{}\psi _{\mathrm{}}^{\mathrm{}}=0`$, $`\mathrm{}0`$. 3. Energy eigenvalues: $`E_n^{\mathrm{}}=4n+2`$, with $`n\text{}^+`$ and $`n\mathrm{}`$. Therefore, as there is a correspondence between the results of the conventional and our factorizations, one might conclude that the two treatments are totally equivalent. However, we make a remark worth taking into account: the conventional factorizations make use of two Hamiltonian hierarchies, $`H_x^{\mathrm{}}`$ and $`H_z^{\mathrm{}}`$, whose terms differ by a constant $`4\mathrm{}`$, while the new factorizations use only one $`H^{\mathrm{}}`$. If we want both factorizations $`(a)`$ and $`(b)`$ to be valid within the same hierarchy, it is necessary to adopt the properties of our approach in the following sense: either the operators $`X_{\mathrm{}}^\pm `$ or $`Z_{\mathrm{}}^\pm `$ (or both pairs) must change not only the quantum number $`\mathrm{}`$ but also $`n`$. 
In this way we have shown, by means of this simple example, that the factorizations presented here prove to be quite useful, directly providing a more natural viewpoint. ### B Morse Potential In this case we have eigenvalue Schrödinger equations for the whole real line $`x\text{}`$ with the potentials $$V^{\mathrm{}}(x)=\left(\frac{\alpha }{2}\right)^2\left(e^{2\alpha x}2(\mathrm{}+1)e^{\alpha x}\right),\alpha >0,\mathrm{}0.$$ (27) Often in the literature the Morse potentials are written $`V(y)=A\left(e^{2\alpha y}2e^{\alpha y}\right)`$. This form can be reached from (27) by a simple change of variable, $`x=y+k`$, with $`e^{\alpha k}=\mathrm{}+1`$. The energy eigenvalues can be expressed as $$E_n^{\mathrm{}}=\frac{\alpha ^2}{4}n^2,n=\mathrm{}2\nu >0;\nu =0,1,2\mathrm{}$$ (28) In order to have bounded states the restriction $`\mathrm{}>0`$ is necessary; the critical value $`\mathrm{}=0`$ has in this respect a special limiting character, and it is convenient to take it into account as we shall see later. According to (28), the eigenfunctions $`\psi _n^{\mathrm{}}`$ are characterized by labels satisfying $`n\mathrm{}`$; this means that the ground states will be defined through (9). 
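A direct symbolic check of (27)-(28) at the lowest level $`\nu =0`$ (i.e., $`n=\ell `$) is easy to set up. The sympy sketch below assumes the conventional signs $`H^\ell =-\frac{d^2}{dx^2}+(\frac{\alpha }{2})^2(e^{-2\alpha x}-2(\ell +1)e^{-\alpha x})`$ and $`E_\ell ^\ell =-\frac{\alpha ^2\ell ^2}{4}`$, together with the closed form $`\psi _\ell ^\ell (x)\propto \mathrm{exp}(-\frac{1}{2}e^{-\alpha x}-\frac{\alpha \ell x}{2})`$ for the unnormalized ground state; that closed form is standard Morse-potential material, stated here as an assumption rather than taken from the text:

```python
import sympy as sp

x = sp.symbols('x', real=True)
alpha, l = sp.symbols('alpha ell', positive=True)

V = (alpha/2)**2*(sp.exp(-2*alpha*x) - 2*(l + 1)*sp.exp(-alpha*x))
E = -alpha**2*l**2/4                  # E_n of (28) at the ground level n = l

# candidate (unnormalized) ground state psi_l^l
psi = sp.exp(-sp.exp(-alpha*x)/2 - alpha*l*x/2)

residual = sp.simplify(sp.expand(-sp.diff(psi, x, 2) + V*psi - E*psi))
print(residual)   # 0
```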
There are two new factorizations $$\frac{e^{\alpha x}}{\alpha ^2}\left[H^{\mathrm{}}E_n^{\mathrm{}}\right]=B_{n,\mathrm{}}^i(x)A_{n,\mathrm{}}^i(x)\varphi ^i(n,\mathrm{}),i=1,2,$$ (29) with $`\varphi ^i(n,\mathrm{})`$ given by $$\varphi ^1(n,\mathrm{})=\frac{1}{2}(\mathrm{}+n+2),\varphi ^2(n,\mathrm{})=\frac{1}{2}(\mathrm{}n+2),$$ (30) and the action on the parameters $`(n,\mathrm{})`$ for each factorization by the functions $$(n,\mathrm{})=F^1(n+1,\mathrm{}+1),(n,\mathrm{})=F^2(n1,\mathrm{}+1).$$ (31) The explicit form of the intertwining operators (29) is $`\{\begin{array}{c}B_{n,\mathrm{}}^1(x)=\frac{e^{\alpha x/2}}{\alpha }\frac{d}{dx}+\frac{1}{2}e^{\alpha x/2}+\frac{n+1}{2}e^{\alpha x/2}\hfill \\ A_{n,\mathrm{}}^1(x)=\frac{e^{\alpha x/2}}{\alpha }\frac{d}{dx}\frac{1}{2}e^{\alpha x/2}\frac{n}{2}e^{\alpha x/2}\hfill \end{array}`$ (34) (35) $`\{\begin{array}{c}B_{n,\mathrm{}}^2(x)=\frac{e^{\alpha x/2}}{\alpha }\frac{d}{dx}+\frac{1}{2}e^{\alpha x/2}\frac{n1}{2}e^{\alpha x/2}\hfill \\ A_{n,\mathrm{}}^2(x)=\frac{e^{\alpha x/2}}{\alpha }\frac{d}{dx}\frac{1}{2}e^{\alpha x/2}+\frac{n}{2}e^{\alpha x/2}\hfill \end{array}`$ (38) As in the oscillator case we have two pairs of operators that change simultaneously two types of labels: one, $`\mathrm{}`$, is related to the intensity of the potential, although here it can not be interpreted as due to a centrifugal term. The second one, $`n`$, is directly related to the energy through formula (28). The (nonvanishing) commutators of the free-index operators are $$\begin{array}{ccc}[L,B^i]=B^i,\hfill & [N,B^i]=(1)^iB^i,\hfill & [A^i,B^i]=1\hfill \\ [L,A^i]=A^i,\hfill & [N,A^i]=(1)^iA^i,\hfill & i=1,2.\hfill \end{array}$$ (39) Observe that in this case the function $`h_{n,\mathrm{}}(x)=e^{\alpha x}/\alpha ^2`$ is not a constant, so the hermiticity relations among the operators $`\{A^i,B^i;i=1,2\}`$ are spoiled. 
Let us take $`\mathrm{}\text{}^+`$, and formally allow for negative $`n`$-values in (28), i.e., $`\pm n=\mathrm{}2\nu `$; this is admissible because in the operators of (34)-(38) we have a symmetry under the change $`nn`$. Then the Hilbert space $``$ of bounded states enlarged with the (non-square-integrable) states $`\psi _{n=0}^{\mathrm{}}`$, $`\mathrm{}=0,1,2\mathrm{}`$, will be invariant under the action of all the operators defined in this section. The role of the lowest weight state is played in this case by a non-square-integrable wavefunction, $`\psi _{n=0}^{\mathrm{}=0}`$. We can of course build other operators out of the previous ones, changing exclusively one of the labels: the pair $`\{A^1A^2,B^1B^2\}`$ changes $`\mathrm{}`$ (in $`+2`$ or $`2`$ units, respectively), while $`\{A^1B^2,A^2B^1\}`$ changes $`n`$ (also in $`+2`$ or $`2`$ units, respectively). It is interesting to show explicitly the form taken by the former couple: $$\{\begin{array}{c}(B^1B^2)_{n,\mathrm{}}=\frac{1}{\alpha }\frac{d}{dx}+\frac{1}{2}\left(e^{\alpha x}(\mathrm{}+2)\right)\hfill \\ (A^1A^2)_{n,\mathrm{}}=\frac{1}{\alpha }\frac{d}{dx}+\frac{1}{2}\left(e^{\alpha x}(\mathrm{}+2)\right),\hfill \end{array}$$ (40) where $`(A^1A^2)_{n,\mathrm{}}=A_{n1,\mathrm{}+1}^1A_{n,\mathrm{}}^2`$ and $`(B^1B^2)_{n,\mathrm{}}=B_{n,\mathrm{}}^1B_{n+1,\mathrm{}+1}^2`$, according to the rules of the action of free-index operators (17), (10). They can be identified with the usual factorization operators for the Morse Hamiltonians $`H^{\mathrm{}}`$ described in the first section in the following way: 1. Factorization: $`X_{\mathrm{}^{}}^+X_{\mathrm{}^{}}^{}q(\mathrm{}^{})=H^2\mathrm{}^{},\mathrm{}^{}\text{}^+.`$ 2. Operators: $`X_{\mathrm{}^{}}^+=\alpha (B^1B^2)_{n,2\mathrm{}^{}}`$ , $`X_{\mathrm{}^{}}^{}=\alpha (A^1A^2)_{n,2\mathrm{}^{}}`$ , $`q(\mathrm{}^{})=\alpha ^2(\mathrm{}^{}+1)^2`$ . 3. Ground states: $`X_\mathrm{}^{}1^+\psi _{\mathrm{}^{}}^{\mathrm{}^{}}=0`$, $`\mathrm{}^{}>0`$. 4. 
Energy eigenvalues: $`E_{2n^{}}^2\mathrm{}^{}=\alpha ^2(n^{})^2`$, with $`n=2n^{}`$, $`n^{}\text{}^+`$, and $`0n^{}\mathrm{}^{}`$. This time the notation, as was mentioned above, is in agreement with (9). ### C Radial Coulomb Potential After the separation of the angular variables, the stationary radial Schrödinger equation for the Coulomb potential in two dimensions takes the form $$H^{\mathrm{}}\psi _n^{\mathrm{}}(r)=\left\{\frac{d^2}{dr^2}+\frac{(2\mathrm{}+1)(2\mathrm{}1)}{4r^2}\frac{2}{r}\right\}\psi _n^{\mathrm{}}(r)=E_n^{\mathrm{}}\psi _n^{\mathrm{}}(r),$$ (41) where the values of the orbital angular momentum are non-negative integers $`\mathrm{}=0,1,2\mathrm{}`$. The discrete spectrum associated to the bounded states of $`H^{\mathrm{}}`$ can be easily obtained by means of the conventional factorizations (2) with $$X_{\mathrm{}}^\pm =\frac{d}{dr}\frac{2\mathrm{}+1}{2r}+\frac{2}{2\mathrm{}+1},q(\mathrm{})=\frac{1}{\left(\mathrm{}+1/2\right)^2}.$$ (42) Therefore, according to the results quoted in Section I, we have $$E_n^{\mathrm{}}=\frac{1}{(n+1/2)^2},n=\mathrm{}+\nu ,\nu =0,1,\mathrm{}$$ (43) When our method is applied to the hydrogen Hamiltonians $`H^{\mathrm{}}`$ of equation (41) with the eigenvalues $`E_n^{\mathrm{}}`$ (43), we obtain two independent solutions that read as follows $`B_{n,\mathrm{}}^1A_{n,\mathrm{}}^1+\mathrm{}+n+1={\displaystyle \frac{(2n+1)r}{4}}\left[H^{\mathrm{}}E_n^{\mathrm{}}\right],`$ (44) $`B_{n,\mathrm{}}^2A_{n,\mathrm{}}^2\mathrm{}+n+1={\displaystyle \frac{(2n+1)r}{4}}\left[H^{\mathrm{}}E_n^{\mathrm{}}\right].`$ (45) The explicit form of the operators $`\{A^i,B^i\}_{i=1,2}`$ is displayed below: $`\{\begin{array}{c}B_{n,\mathrm{}}^1(r)=(2n+1)^{1/2}\left(\frac{r^{1/2}}{2}\frac{d}{dr}+\frac{r^{1/2}}{2n+1}+\frac{\mathrm{}}{2r^{1/2}}\right)c_n^{1/2}D(c_n)\hfill \\ 
A_{n,\mathrm{}}^1(r)=D(c_{n}^{}{}_{}{}^{1})c_n^{1/2}(2n+1)^{1/2}\left(\frac{r^{1/2}}{2}\frac{d}{dr}\frac{r^{1/2}}{2n+1}\frac{2\mathrm{}+1}{4r^{1/2}}\right)\hfill \end{array}`$ (48) (49) $`\{\begin{array}{c}B_{n,\mathrm{}}^2(r)=(2n+1)^{1/2}\left(\frac{r^{1/2}}{2}\frac{d}{dr}+\frac{r^{1/2}}{2n+1}\frac{\mathrm{}}{2r^{1/2}}\right)c_n^{1/2}D(c_n)\hfill \\ A_{n,\mathrm{}}^2(r)=D(c_{n}^{}{}_{}{}^{1})c_n^{1/2}(2n+1)^{1/2}\left(\frac{r^{1/2}}{2}\frac{d}{dr}\frac{r^{1/2}}{2n+1}+\frac{2\mathrm{}+1}{4r^{1/2}}\right)\hfill \end{array}`$ (52) The symbol $`D(\mu )`$ in (48)–(52) denotes the dilation operator (14), and $`c_n=\frac{2n+2}{2n+1}`$. Thus, in this example we are dealing with general first order differential operators as explained in Section I. For the first couple $`\{A^1,B^1\}`$ we have $`(\widehat{n},\widehat{\mathrm{}})=(n+1/2,\mathrm{}+1/2)`$, while for the second pair $`\{A^2,B^2\}`$, $`(\widehat{n},\widehat{\mathrm{}})=(n+1/2,\mathrm{}1/2)`$. The nonvanishing commutators among the free-index operators are $$\begin{array}{ccc}[N,B^i]=\frac{1}{2}B^i,\hfill & [L,B^i]=(1)^i\frac{1}{2}B^i,\hfill & [A^i,B^i]=I,\hfill \\ [N,A^i]=\frac{1}{2}A^i,\hfill & [L,A^i]=(1)^i\frac{1}{2}A^i,\hfill & i=1,2.\hfill \end{array}$$ (53) In other words, as in the previous examples, we have a set of two independent boson operator algebras. The problem with these operators is that they change the quantum numbers $`(n,\mathrm{})`$ by half-units, so that they do not stay within the sector of physical wavefunctions. To avoid this problem we can build quadratic operators $`\{A^iA^j,B^iA^j,B^iB^j\}_{i,j=1,2}`$ satisfying this requirement; such second-order operators close the Lie algebra $`sp(4,\text{})`$ , which includes the subalgebra $`su(2)`$ (whose generators connect eigenstates with the same energy but different $`\mathrm{}`$’s). 
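The conventional factorization (42) behind the spectrum (43) can itself be verified symbolically. The sympy sketch below assumes the reading $`X_m^\pm =\mp \frac{d}{dr}-\frac{2m+1}{2r}+\frac{2}{2m+1}`$ (an assumption about the signs in (42)), and checks the two factorizations of (2), $`H^\ell =X_\ell ^+X_\ell ^{-}-\frac{1}{(\ell +1/2)^2}=X_{\ell -1}^{-}X_{\ell -1}^+-\frac{1}{(\ell -1/2)^2}`$; together with $`X_\ell ^{-}\psi _\ell ^\ell =0`$ this reproduces $`E_n^\ell =-\frac{1}{(n+1/2)^2}`$:

```python
import sympy as sp

r, l = sp.symbols('r ell', positive=True)
psi = sp.Function('psi')(r)

# 2D radial Coulomb Hamiltonian of eq. (41)
H = lambda f: -sp.diff(f, r, 2) + ((2*l + 1)*(2*l - 1)/(4*r**2) - 2/r)*f

w = lambda m: -(2*m + 1)/(2*r) + 2/(2*m + 1)    # assumed superpotential
Xm = lambda f, m: sp.diff(f, r) + w(m)*f        # X_m^-
Xp = lambda f, m: -sp.diff(f, r) + w(m)*f       # X_m^+

half = sp.Rational(1, 2)
d1 = sp.simplify(sp.expand(Xp(Xm(psi, l), l) - psi/(l + half)**2 - H(psi)))
d2 = sp.simplify(sp.expand(Xm(Xp(psi, l - 1), l - 1) - psi/(l - half)**2 - H(psi)))
print(d1, d2)   # 0 0
```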
It is worth writing down these quadratic operators: $$\{\begin{array}{c}(B^2A^1)_{n,\mathrm{}}=\frac{(2\mathrm{}+1)(2n+1)}{2}\left(\frac{1}{2}\frac{d}{dr}+\frac{2\mathrm{}+1}{4r}\frac{1}{2\mathrm{}+1}\right)\hfill \\ (B^1A^2)_{n,\mathrm{}}=\frac{(2\mathrm{}+1)(2n+1)}{2}\left(\frac{1}{2}\frac{d}{dr}+\frac{2\mathrm{}+1}{4r}\frac{1}{2\mathrm{}+1}\right)\hfill \end{array}$$ (54) They constitute, up to global constants, the usual factorization operators given in (42): $`X_{\mathrm{}}^+(A^1B^2)_{n,\mathrm{}}`$, $`X_{\mathrm{}}^{}(A^2B^1)_{n,\mathrm{}}`$. Another subalgebra is $`su(1,1)`$ (relating states with the same $`\mathrm{}`$ but different energies or $`n`$ values). Once the negative $`\mathrm{}`$ values are included, as we did for the radial oscillator potential, the space $``$ is the support for what is called a ‘singleton representation’ of $`so(3,2)sp(4,\text{})`$. There is one lowest weight eigenvector $`\psi _{n=0}^{\mathrm{}=0}`$, from which the whole representation space is generated by applying raising operators. ## IV Conclusions and Remarks We have shown that a refinement of the factorization method allows us to study the full set of relations among the Hamiltonian hierarchies, relations that the conventional factorizations are not able to capture. The operators involved obey commutation rules which show the connection existing among the three examples dealt with in this paper: they have the same underlying Lie algebra associated with confluent hypergeometric functions. On other occasions the conventional factorizations have been used in this respect, but we have seen that such an approach is partial and far from complete. Usually the Hamiltonian hierarchies are obtained from higher dimensional systems after separation of variables (or by any other way of reduction). Such systems have symmetries that are responsible for their analytical treatment. 
These symmetries are reflected in the many factorizations that the hierarchies can give rise to by means of the technique we have developed. We have limited our study to $`N=2`$ space dimensions for the radial oscillator and Coulomb potentials because they are the simplest cases to deal with. For other dimensions there appear certain subtleties, in the sense that the Hilbert space $``$ of bounded states is no longer invariant under the operators involved . Finally, let us mention that we have limited ourselves to some examples (all of them within the class of shape invariant potentials ), but it is clear that the whole treatment is applicable to the remaining Hamiltonians in the classification of Infeld and Hull . ### Acknowledgements This work has been partially supported by a DGES project (PB94–1115) from Ministerio de Educación y Cultura (Spain), and also by Junta de Castilla y León (CO2/197). ORO acknowledges support from SNI and CONACyT (Mexico), and the kind hospitality at the Departamento de Física Teórica (Univ. de Valladolid).
## 1 Abstract We employ a number of statistical measures to characterize neural discharge activity in cat retinal ganglion cells (RGCs) and in their target lateral geniculate nucleus (LGN) neurons under various stimulus conditions, and we develop a new measure to examine correlations in fractal activity between spike-train pairs. In the absence of stimulation (i.e., in the dark), RGC and LGN discharges exhibit similar properties. The presentation of a constant, uniform luminance to the eye reduces the fractal fluctuations in the RGC maintained discharge but enhances them in the target LGN discharge, so that the neural activities in the pair no longer mirror each other. A drifting-grating stimulus yields RGC and LGN driven spike trains similar in character to those observed in the maintained discharge, with two notable distinctions: action potentials are reorganized along the time axis so that they occur only during certain phases of the stimulus waveform, and fractal activity is suppressed. Under both uniform-luminance and drifting-grating stimulus conditions (but not in the dark), the discharges of pairs of LGN cells are highly correlated over long time scales; in contrast, the discharges of RGCs are nearly uncorrelated with each other. This indicates that action-potential activity at the LGN is subject to a common fractal modulation to which the RGCs are not subjected. ## 2 Introduction The sequence of action potentials recorded from cat retinal ganglion cells (RGCs) and lateral-geniculate-nucleus (LGN) cells is always irregular. This is true whether the retina is in the dark, or whether it is adapted to a stimulus of fixed luminance. It is also true for time-varying visual stimuli such as drifting gratings. With few exceptions, the statistical properties of these spike trains have been investigated from the point of view of the interevent-interval histogram, which provides a measure of the relative frequency of intervals of different durations. 
The mathematical model most widely used to describe the interevent-interval histogram under all of these stimulus conditions derives from the gamma renewal process , though point processes incorporating refractoriness have also been investigated . However, there are properties of a sequence of action potentials, such as long-duration correlation or memory, that cannot generally be inferred from measures that reset at short times such as the interevent-interval histogram . The ability to uncover features such as these demands the use of measures such as the Allan factor, the periodogram, or rescaled range analysis (R/S), which can extend over time (or frequency) scales that span many events. RGC and LGN spike trains exhibit variability and correlation properties over a broad range of time scales, and the analysis of these discharges reveals that the spike rates exhibit fractal properties. Fractals are objects which possess a form of self-similarity: parts of the whole can be made to fit to the whole by shifting and stretching. The hallmark of fractal behavior is power-law dependence in one or more statistical measures, over a substantial range of the time or frequency scale at which the measurement is conducted . Fractal behavior represents a form of memory because the occurrence of an event at a particular time increases the likelihood of another event occurring at some time later, with this likelihood decaying in power-law fashion. Fractal signals are also said to be self-similar or self-affine. This fractal behavior is most readily illustrated by plotting the estimated firing rate of a sequence of action potentials for a range of averaging times. This is illustrated in Fig. 1A for the maintained discharge of a cat RGC. The rate estimates are formed by dividing the number of spikes in successive counting windows of duration $`T`$ by the counting time $`T`$. The rate estimates of the shuffled (randomly reordered) version of the data are presented in Fig. 1B. 
This surrogate data set maintains the same relative frequency of interevent-interval durations as the original data, but destroys any long-term correlations (and therefore fractal behavior) arising from other sources, such as the relative ordering of the intervals. Comparing Figs. 1A and B, it is apparent that the magnitude of the rate fluctuations decreases more slowly with increasing counting time for the original data than for the shuffled version. Fractal processes exhibit slow power-law convergence: the standard deviation of the rate decreases more slowly than $`1/T^{1/2}`$ as the averaging time increases. Nonfractal signals, such as the shuffled RGC spike train, on the other hand, exhibit fluctuations that decrease precisely as $`1/T^{1/2}`$. The data presented in Fig. 1 are typical of all RGC and LGN spike trains. ## 3 Analysis Techniques ### 3.1 Point Processes The statistical behavior of a neural spike train can be studied by replacing the complex waveforms of each individual electrically recorded action potential (Fig. 2, top) by a single point event corresponding to the time of the peak (or other designator) of the action potential (Fig. 2, middle). In mathematical terms, the neural spike train is then viewed as an unmarked point process. This simplification greatly reduces the computational complexity of the problem and permits use of the substantial methodology previously developed for stochastic point processes . The occurrence of a neural spike at time $`t_n`$ is therefore simply represented by an impulse $`\delta (t-t_n)`$ at that time, so that the sequence of action potentials is represented by $$s(t)=\sum _n\delta (t-t_n)$$ A realization of a point process is specified by the set of occurrence times of the events, or equivalently, of the times $`\{\tau _n\}`$ between adjacent events, where $`\tau _n=t_{n+1}-t_n`$. 
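In code, the reduction of spike times to the interval sequence $`\{\tau _n\}`$ and the counting sequence $`\{Z_n\}`$ of Fig. 2 is immediate. The sketch below uses Python/NumPy purely for illustration, with hypothetical spike times:

```python
import numpy as np

# Hypothetical spike times t_n in seconds (in practice, recorded peak times).
t = np.array([0.1, 0.35, 0.5, 1.2, 1.9, 2.4, 3.3])

# Interevent intervals tau_n = t_{n+1} - t_n (Fig. 2, lower left).
tau = np.diff(t)

# Counting sequence {Z_n}: events per contiguous window of T sec
# (Fig. 2, lower right), and the rate estimates lambda_n = Z_n / T.
T = 1.0
edges = np.arange(0.0, t[-1] + T, T)
Z, _ = np.histogram(t, bins=edges)
rates = Z / T
```

All of the count-based measures discussed below start from a reduction of exactly this kind.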
A single realization of the data is generally all that is available to the observer, so that the identification of the point process, and elucidation of the mechanisms that underlie it, must be gleaned from this one realization. One way in which the information in an experimental sequence of events can be made more digestible is to reduce the data into a statistic that emphasizes a particular aspect of the data, at the expense of other features. These statistics fall into two broad classes which have their origins, respectively, in the sequence of interevent intervals $`\{\tau _n\}`$ illustrated at the lower left of Fig. 2, or in the sequence of counts $`\{Z_n\}`$ shown at the lower right of Fig. 2. #### 3.1.1 Examples of Point Processes The homogeneous Poisson point process, which is the simplest of all stochastic point processes, is described by a single parameter, the rate $`\lambda `$. This point process is memoryless: the occurrence of an event at any time $`t_0`$ is independent of the presence (or absence) of events at other times $`t\ne t_0`$. Because of this property, both the intervals $`\{\tau _n\}`$ and counts $`\{Z_n\}`$ form sequences of independent, identically distributed (iid) random variables. The homogeneous Poisson point process is therefore completely characterized by the interevent-interval distribution (which is exponential) or the event-number distribution (which is Poisson) together with the iid property. This process serves as a benchmark against which other point processes are measured; it therefore plays the role that the white Gaussian process enjoys in the realm of continuous-time stochastic processes. A related point process is the nonparalyzable fixed-dead-time-modified Poisson point process, a close cousin of the homogeneous Poisson point process that differs only by the imposition of a dead-time (refractory) interval after the occurrence of each event, during which other events are prohibited from occurring. 
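A nonparalyzable dead-time-modified Poisson train can be sketched by adding a fixed refractory interval to each exponential waiting time. This is an illustrative construction with hypothetical parameter values, not the authors' code:

```python
import numpy as np

def dead_time_poisson(rate, dead_time, n_events, rng):
    """Nonparalyzable fixed-dead-time-modified Poisson process: each
    interevent interval is the dead time plus an exponential waiting
    time, so no interval can be shorter than the dead time."""
    tau = dead_time + rng.exponential(1.0 / rate, size=n_events)
    return np.cumsum(tau)  # occurrence times t_n

rng = np.random.default_rng(0)
t = dead_time_poisson(rate=100.0, dead_time=0.002, n_events=10000, rng=rng)
tau = np.diff(t)  # every interval >= 2 ms; mean = dead_time + 1/rate
```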
Another cousin is the gamma-$`r`$ renewal process which, for integer $`r`$, is generated from an homogeneous Poisson point process by permitting every $`r`$th event to survive while deleting all intermediate events . Both the dead-time-modified Poisson point process and the gamma renewal process require two parameters for their description. All the examples of point process presented above belong to the class of renewal point processes, which will be defined in Sec. 3.2.1. However, spike trains in the visual system cannot be adequately described by renewal point processes; rather, nonrenewal processes are required . Of particular interest are fractal-rate stochastic point processes, in which one or more statistics exhibit power-law behavior in time or frequency . One feature of such processes is the relatively slow power-law convergence of the rate standard deviation, as illustrated in Fig. 1A. We have previously shown that a fractal, doubly stochastic point process that imparts multiscale fluctuations to the gamma-$`r`$ renewal process provides a reasonable description of the RGC and LGN maintained discharges . ### 3.2 Interevent-Interval Measures of a Point Process Two statistical measures are often used to characterize the discrete-time stochastic process $`\{\tau _n\}`$ illustrated in the lower left corner of Fig. 2. These are the interevent-interval histogram (IIH) and rescaled range analysis (R/S). #### 3.2.1 Interevent-Interval Histogram The interevent-interval histogram (often referred to as the interspike-interval histogram or ISIH in the physiology literature) displays the relative frequency of occurrence $`p_\tau (\tau )`$ of an interval of size $`\tau `$; it is an estimate of the probability density function of interevent-interval magnitude (see Fig. 2, lower left). It is, perhaps, the most commonly used of all statistical measures of point processes in the life sciences. 
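The gamma-$`r`$ construction just described, keeping every $`r`$th event of a homogeneous Poisson process, is equally short to sketch (illustrative parameter values; the $`1/\sqrt{r}`$ interval coefficient of variation serves as a sanity check):

```python
import numpy as np

def gamma_r_renewal(rate, r, n_events, rng):
    """Keep every r-th event of a homogeneous Poisson process of the given
    rate; the surviving interevent intervals are gamma-r distributed."""
    parent = np.cumsum(rng.exponential(1.0 / rate, size=r * n_events))
    return parent[r - 1 :: r]  # survivors: events r, 2r, 3r, ...

rng = np.random.default_rng(1)
t = gamma_r_renewal(rate=100.0, r=4, n_events=20000, rng=rng)
tau = np.diff(t)

# Each surviving interval is a sum of r exponentials, so the interval
# coefficient of variation should be close to 1/sqrt(r) = 0.5 here.
cv = tau.std() / tau.mean()
```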
The interevent-interval histogram provides information about the underlying process over time scales that are of the order of the interevent intervals. Its construction involves the loss of interval ordering, and therefore dependencies among intervals; a reordering of the sequence does not alter the interevent-interval histogram since the order plays no role in the relative frequency of occurrence. Some point processes exhibit no dependencies among their interevent intervals at the outset, in which case the sequence of interevent intervals forms a sequence of iid random variables and the point process is completely specified by its interevent-interval histogram. Such a process is called a renewal process, a definition motivated by the replacement of failed parts (such as light bulbs), each replacement of which forms a renewal of the point process. The homogeneous Poisson point process, dead-time-modified Poisson point process, and gamma renewal process are all renewal processes, but experimental RGC and LGN spike trains are not. #### 3.2.2 Rescaled Range (R/S) Analysis Rescaled range (R/S) analysis provides information about correlations among blocks of interevent intervals. For a block of $`k`$ interevent intervals, the difference between each interval and the mean interevent interval is obtained and successively added to a cumulative sum. The normalized range $`R(k)`$ is the difference between the maximum and minimum values that the cumulative sum attains, divided by the standard deviation of the interval size. $`R(k)`$ is plotted against $`k`$. Information about the nature and the degree of correlation in the process is obtained by fitting $`R(k)`$ to the function $`k^H`$, where $`H`$ is the so-called Hurst exponent . For $`H>0.5`$ positive correlation exists among the intervals, whereas $`H<0.5`$ indicates the presence of negative correlation; $`H=0.5`$ obtains for intervals with no correlation. Renewal processes yield $`H=0.5`$. 
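A minimal rescaled-range computation along the lines described above might read as follows. It is a sketch under our own conventions (non-overlapping blocks, one averaged $`R(k)`$ per block size, and a two-point fit of $`k^H`$); renewal data should give $`H`$ near 0.5:

```python
import numpy as np

def rescaled_range(tau, k):
    """Average rescaled range R(k) over non-overlapping blocks of k intervals."""
    rs = []
    for b in range(len(tau) // k):
        block = tau[b * k : (b + 1) * k]
        dev = np.cumsum(block - block.mean())  # cumulative deviation from the mean
        r = dev.max() - dev.min()              # range of the cumulative sum
        s = block.std()                        # interval standard deviation
        if s > 0:
            rs.append(r / s)
    return np.mean(rs)

# iid (renewal) intervals: R(k) should grow roughly as k^0.5, i.e. H near 0.5.
rng = np.random.default_rng(2)
tau = rng.exponential(1.0, size=20000)
H = np.log(rescaled_range(tau, 400) / rescaled_range(tau, 100)) / np.log(4.0)
```

In practice one sweeps many block sizes $`k`$ and fits the slope on doubly logarithmic axes rather than using only two points.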
For negatively correlated intervals, an interval that is larger than the mean tends, on average, to be preceded or followed by one smaller than the mean. This widely used measure is generally assumed to be well suited to processes that exhibit long-term correlation or have a large variance , but it appears not to be very robust since it exhibits large systematic errors and highly variable estimates of the Hurst coefficient for some fractal sequences . Nevertheless, it provides a useful indication of correlation in a point process arising from the ordering of the interevent intervals alone. ### 3.3 Event-Number Measures of a Point Process It is advantageous to study some characteristics of a point process in terms of the sequence of event numbers (counts) $`\{Z_n\}`$ rather than via the sequence of intervals $`\{\tau _n\}`$. Figure 2 illustrates how the sequence is obtained. The time axis is divided into equally spaced, contiguous time windows (center), each of duration $`T`$ sec, and the (integer) number of events in the $`n`$th window is counted and denoted $`Z_n`$. This sequence $`\{Z_n\}`$ forms a random counting process of nonnegative integers (lower right). Closely related to the sequence of counts is the sequence of rates (events/sec) $`\lambda _n`$, which is obtained by dividing each count $`Z_n`$ by the counting time $`T`$. This is the measure used in Fig. 1. We describe several statistical measures useful for characterizing the counting process $`\{Z_n\}`$: the Fano factor, the Allan factor, and the event-number-based power spectral density estimate (periodogram). #### 3.3.1 Fano Factor The Fano factor is defined as the event-number variance divided by the event-number mean, which is a function of the counting time $`T`$: $$F(T)\equiv \frac{\mathrm{Var}\left[Z_n(T)\right]}{\mathrm{E}\left[Z_n(T)\right]}.$$ This quantity provides an abbreviated way of describing correlation in a sequence of events. 
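The definition above reduces to a few lines once the counting sequence $`\{Z_n\}`$ is in hand (an illustrative sketch with our own helper names; the homogeneous Poisson benchmark of Sec. 3.1.1 should give $`F(T)`$ near unity):

```python
import numpy as np

def counts(t, T):
    """Event counts {Z_n} in contiguous windows of duration T."""
    n_win = int(t[-1] // T)
    Z, _ = np.histogram(t, bins=np.arange(0.0, (n_win + 1) * T, T))
    return Z[:n_win].astype(float)

def fano_factor(t, T):
    Z = counts(t, T)
    return Z.var() / Z.mean()

# Homogeneous Poisson benchmark: F(T) = 1 for all T.
rng = np.random.default_rng(3)
t = np.cumsum(rng.exponential(0.01, size=100000))  # ~100 events/sec
F = fano_factor(t, T=0.5)
```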
It indicates the degree of event clustering or anticlustering in a point process relative to the benchmark homogeneous Poisson point process, for which $`F(T)=1`$ for all $`T`$. The Fano factor must approach unity at sufficiently small values of the counting time $`T`$ for any regular point process . In general, a Fano factor less than unity indicates that a point process is more orderly than the homogeneous Poisson point process at the particular time scale $`T`$, whereas an excess over unity indicates increased clustering at the given time scale. This measure is sometimes called the index of dispersion; it was first used by Fano in 1947 for characterizing the statistical fluctuations of the number of ions generated by individual fast charged particles. For a fractal-rate stochastic point process the Fano factor assumes the power-law form $`T^{\alpha _F}`$ ($`0<\alpha _F<1`$) for large $`T`$. The parameter $`\alpha _F`$ is defined as an estimate of the fractal exponent (or scaling exponent) $`\alpha `$ of the point-process rate. Though the Fano factor can detect the presence of self-similarity even when it cannot be discerned in a visual representation of a sequence of events, mathematical constraints prevent it from increasing with counting time faster than $`T^1`$ . It therefore proves to be unsuitable as a measure for fractal exponents $`\alpha >1`$; it also suffers from bias for finite-length data sets . For these reasons we employ other count-based measures. #### 3.3.2 Allan Factor The reliable estimation of a fractal exponent that may assume a value greater than unity requires the use of a measure whose increase is not constrained as it is for the Fano factor, and which remains free of bias. In this section we present a measure we first defined in 1996 , and called the Allan factor. 
The Allan factor is the ratio of the event-number Allan variance to twice the mean: $$A(T)\equiv \frac{\mathrm{E}\left\{\left[Z_n(T)-Z_{n+1}(T)\right]^2\right\}}{2\mathrm{E}\left[Z_n(T)\right]}.$$ The Allan variance was first introduced in connection with the stability of atomic-based clocks . It is defined in terms of the variability of differences of successive counts; as such it is a measure based on the Haar wavelet. Because the Allan factor functions as a derivative, it has the salutary effect of mitigating against linear nonstationarities. More complex wavelet Allan factors can be constructed to eliminate polynomial trends . Like the Fano factor, the Allan factor is also a useful measure of the degree of event clustering (or anticlustering) in a point process relative to the benchmark homogeneous Poisson point process, for which $`A(T)=1`$ for all $`T`$. In fact, for any point process, the Allan factor is simply related to the Fano factor by $$A(T)=2F(T)-F(2T)$$ so that, in general, both quantities vary with the counting time $`T`$. In particular, for a regular point process the Allan factor also approaches unity as $`T`$ approaches zero. For a fractal-rate stochastic point process and sufficiently large $`T`$, the Allan factor exhibits a power-law dependence that varies with the counting time $`T`$ as $`A(T)\sim T^{\alpha _A}`$ ($`0<\alpha _A<3`$); it can rise as fast as $`T^3`$ and can therefore be used to estimate fractal exponents over the expanded range $`0<\alpha _A<3`$. #### 3.3.3 Periodogram Fourier-transform methods provide another avenue for quantifying correlation in a point process. The periodogram is an estimate of the power spectral density of a point process, revealing how the power is concentrated across frequency. The count-based periodogram is obtained by dividing a data set into contiguous segments of equal length $`𝒯`$. 
Within each segment, a discrete-index sequence $`\{W_m\}`$ is formed by further dividing $`𝒯`$ into $`M`$ equal bins, and then counting the number of events within each bin. A periodogram is then formed for each of the segments according to $$S_W(f)=\frac{1}{M}\left|\stackrel{~}{W}(f)\right|^2,$$ where $`\stackrel{~}{W}(f)`$ is the discrete Fourier transform of the sequence $`\{W_m\}`$ and $`M`$ is the length of the transform. All of the segment periodograms are averaged together to form the final averaged periodogram $`S(f)`$, which estimates the power spectral density in the frequency range from $`1/𝒯`$ to $`M/2𝒯`$ Hz. The periodogram $`S(f)`$ can also be smoothed by using a suitable windowing function . The count-based periodogram, as opposed to the interval-based periodogram (formed by Fourier transforming the interevent intervals directly), provides direct undistorted information about the time correlation of the underlying point process because the count index increases by unity every $`𝒯/M`$ seconds, in proportion to the real time of the point process. In the special case when the bin width $`𝒯/M`$ is short in comparison with most interevent intervals $`\tau `$, the count-based periodogram essentially reduces to the periodogram of the point process itself, since the bins reproduce the original point process to a good approximation. For a fractal-rate stochastic point process, the periodogram exhibits a power-law dependence that varies with the frequency $`f`$ as $`S(f)\sim f^{-\alpha _S}`$; unlike the Fano and Allan factor exponents, however, $`\alpha _S`$ can assume any value. Thus in theory the periodogram can be used to estimate any value of fractal exponent, although in practice fractal exponents $`\alpha `$ rarely exceed a value of $`3`$. Compared with estimates based on the Allan factor, periodogram-based estimates of the fractal exponent $`\alpha _S`$ suffer from increased bias and variance . 
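The count-based measures of Secs. 3.3.2 and 3.3.3 can be sketched together (illustrative Python with our own helper names, repeated here so the sketch is self-contained). For a homogeneous Poisson train the Allan factor sits near unity, the spectrum away from DC is flat at the mean count per bin, and the identity $`A(T)=2F(T)-F(2T)`$ can be checked numerically up to finite-data scatter:

```python
import numpy as np

def counts(t, T):
    """Event counts {Z_n} in contiguous windows of duration T."""
    n_win = int(t[-1] // T)
    Z, _ = np.histogram(t, bins=np.arange(0.0, (n_win + 1) * T, T))
    return Z[:n_win].astype(float)

def fano_factor(t, T):
    Z = counts(t, T)
    return Z.var() / Z.mean()

def allan_factor(t, T):
    """A(T) = E{[Z_n - Z_{n+1}]^2} / (2 E[Z_n]): a Haar-wavelet measure."""
    Z = counts(t, T)
    d = Z[:-1] - Z[1:]
    return np.mean(d ** 2) / (2.0 * Z.mean())

def count_periodogram(t, bin_width, m):
    """Average the periodograms of m-bin count segments (Sec. 3.3.3)."""
    W = counts(t, bin_width)
    W = W[: (len(W) // m) * m].reshape(-1, m)
    return (np.abs(np.fft.rfft(W, axis=1)) ** 2 / m).mean(axis=0)

rng = np.random.default_rng(4)
t = np.cumsum(rng.exponential(0.01, size=100000))  # Poisson, ~100 events/sec

A = allan_factor(t, T=0.5)                                # ~1 for Poisson
identity = 2 * fano_factor(t, 0.5) - fano_factor(t, 1.0)  # A(T) = 2F(T) - F(2T)
S = count_periodogram(t, bin_width=0.01, m=256)
flat = S[1:].mean()  # away from DC, ~mean count per bin (= 1 here)
```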
Other methods also exist for investigating the spectrum of a point process, some of which highlight fluctuations about the mean rate . #### 3.3.4 Relationship Among Fractal Exponents For a fractal-rate stochastic point process with $`0<\alpha <1`$, the theoretical Fano factor, Allan factor, and periodogram curves all follow power-law forms with respect to their arguments, and in fact we obtain $`\alpha _F=\alpha _A=\alpha _S=\alpha `$. For $`1\le \alpha <3`$, the theoretical Fano factor curves saturate, but the relation $`\alpha _A=\alpha _S=\alpha `$ still obtains. The fractal exponent $`\alpha `$ is ambiguously related to the Hurst exponent $`H`$, since some authors have used the quantity $`H`$ to index fractal Gaussian noise whereas others have used the same value of $`H`$ to index the integral of fractal Gaussian noise (which is fractional Brownian motion). The relationship between the quantities is $`\alpha =2H-1`$ for fractal Gaussian noise and $`\alpha =2H+1`$ for fractional Brownian motion. In the context of this paper, the former relationship holds, and we can define another estimate of the fractal exponent, $`\alpha _R=2H_R-1`$, where $`H_R`$ is the estimate of the Hurst exponent $`H`$ obtained from the data at hand. In general, $`\alpha _R`$ depends on the theoretical value of $`\alpha `$, as well as on the probability distribution of the interevent intervals. The distributions of the data analyzed in this paper, however, prove simple enough so that the approximate theoretical relation $`\alpha _R=\alpha `$ will hold in the case of large amounts of data. ### 3.4 Correlation Measures for Pairs of Point Processes Second-order methods prove useful in revealing correlations between sequences of events, which indicate how information is shared between pairs of spike trains. 
Such methods may not detect subtle forms of interdependence to which information-theoretic approaches are sensitive , but the latter methods suffer from limitations due to the finite size of the data sets used. We consider two second-order methods here: the normalized wavelet cross-correlation function (NWCCF) and the cross periodogram. #### 3.4.1 Normalized Wavelet Cross-Correlation Function We define the normalized wavelet cross-correlation function $`A_2(T)`$ as a generalization of the Allan factor (see Sec. 3.3.2). It is a Haar-wavelet-based version of the correlation function and is therefore insensitive to linear trends. It can be readily generalized by using other wavelets and can thereby be rendered insensitive to polynomial trends. To compute the normalized wavelet cross-correlation function at a particular counting time $`T`$, the two spike trains first are divided into contiguous counting windows $`T`$. The number of spikes $`Z_{1,n}`$ falling within the $`n`$th window is registered for all indices $`n`$ corresponding to windows lying entirely within the first spike-train data set, much as in the procedure to estimate the Allan factor. This process is repeated for the second spike train, yielding $`Z_{2,n}`$. The difference between the count numbers in a given window in the first spike train $`\left(Z_{1,n}\right)`$ and the one after it $`\left(Z_{1,n+1}\right)`$ is then computed for all $`n`$, with a similar procedure followed for the second spike train. 
Paralleling the definition of the Allan factor, the normalized wavelet cross-correlation function is defined as: $$A_2(T)\equiv \frac{\mathrm{E}\left\{\left[Z_{1,n}(T)-Z_{1,n+1}(T)\right]\left[Z_{2,n}(T)-Z_{2,n+1}(T)\right]\right\}}{2\left\{\mathrm{E}\left[Z_{1,n}(T)\right]\mathrm{E}\left[Z_{2,n}(T)\right]\right\}^{1/2}}.$$ The normalization has two salutary properties: 1) it is symmetric in the two spike trains, and 2) when the same homogeneous Poisson point process is used for both spike trains the normalized wavelet cross-correlation function assumes a value of unity for all counting times $`T`$, again in analogy with the Allan factor. To determine the significance of a particular value for the normalized wavelet cross-correlation function, we make use of two surrogate data sets: a shuffled version of the original data sets (same interevent intervals but in a random order), and homogeneous Poisson point processes with the same mean rate. Comparison between the value of the normalized wavelet cross-correlation function obtained from the data at a particular counting time $`T`$ on the one hand, and from the surrogates at that time $`T`$ on the other hand, indicates the significance of that particular value. #### 3.4.2 Cross Periodogram The cross periodogram is a generalization of the periodogram for individual spike trains (see Sec. 3.3.3), in much the same manner as the normalized wavelet cross-correlation function derives from the Allan factor. Two data sets are divided into contiguous segments of equal length $`𝒯`$, with discrete-index sequences $`\{W_{1,m}\}`$ and $`\{W_{2,m}\}`$ formed by further dividing each segment of both data sets into $`M`$ equal bins, and then counting the number of events within each bin. 
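A sketch of the normalized wavelet cross-correlation function follows directly from the definition (illustrative helper names of our own; for independent Poisson trains $`A_2(T)`$ fluctuates about zero, while feeding the same train in twice reduces it to the Allan factor, near unity):

```python
import numpy as np

def counts(t, T, t_max):
    """Event counts in contiguous windows of duration T up to t_max."""
    n_win = int(t_max // T)
    Z, _ = np.histogram(t, bins=np.arange(0.0, (n_win + 1) * T, T))
    return Z[:n_win].astype(float)

def nwccf(t1, t2, T):
    """A_2(T): Haar-wavelet cross-correlation of two spike trains,
    normalized by the geometric mean of the two count means."""
    t_max = min(t1[-1], t2[-1])
    Z1 = counts(t1, T, t_max)
    Z2 = counts(t2, T, t_max)
    d1 = Z1[:-1] - Z1[1:]
    d2 = Z2[:-1] - Z2[1:]
    return np.mean(d1 * d2) / (2.0 * np.sqrt(Z1.mean() * Z2.mean()))

rng = np.random.default_rng(5)
t1 = np.cumsum(rng.exponential(0.01, size=50000))
t2 = np.cumsum(rng.exponential(0.01, size=50000))

A2_indep = nwccf(t1, t2, T=0.5)  # independent trains: ~0
A2_self = nwccf(t1, t1, T=0.5)   # same train twice: Allan factor, ~1
```

In practice the value obtained from a data pair would be compared against the shuffled and Poisson surrogates described above to assess its significance.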
With the $`M`$-point discrete Fourier transform of the sequence $`\{W_{1,m}\}`$ denoted by $`\stackrel{~}{W_1}(f)`$ (and similarly for the second sequence), we define the segment cross periodograms as $$S_{2,W}(f)\equiv \frac{1}{2M}\left[\stackrel{~}{W_1}^{*}(f)\stackrel{~}{W_2}(f)+\stackrel{~}{W_1}(f)\stackrel{~}{W_2}^{*}(f)\right]=\frac{1}{M}\mathrm{Re}\left[\stackrel{~}{W_1}^{*}(f)\stackrel{~}{W_2}(f)\right],$$ where $`{}^{*}`$ represents complex conjugation and $`\mathrm{Re}()`$ represents the real part of the argument. As with the ordinary periodogram, all of the segment cross periodograms are averaged together to form the final averaged cross periodogram, $`S_2(f)`$, and the result can be smoothed. This form is chosen to be symmetric in the two spike trains, and to yield a real (although possibly negative) result. In the case of independent spike trains, the expected value of the cross periodogram is zero. We again employ the same two surrogate data sets (shuffled and Poisson) to provide significance information about cross-periodogram values for actual data sets. The cross periodogram and normalized wavelet cross-correlation function will have different immunity to nonstationarities and will exhibit different bias-variance tradeoffs, much as their single-dimensional counterparts do . ## 4 Results for RGC and LGN Action-Potential Sequences We have carried out a series of experiments to determine the statistical characteristics of the dark, maintained, and driven neural discharge in cat RGC and LGN cells. Using the analysis techniques presented in Sec. 3, we compare and contrast the neural activity for these three different stimulus modalities, devoting particular attention to their fractal features. The results we present all derive from on-center X-type cells. ### 4.1 Experimental Methods The experimental methods are similar to those used by Kaplan and Shapley and Teich et al. . Experiments were carried out on adult cats. 
Anesthesia was induced by intramuscular injection of xylazine (Rompun 2 mg/kg), followed 10 minutes later by intramuscular injection of ketamine HCl (Ketaset 10 mg/kg). Anesthesia was maintained during surgery with intravenous injections of thiamylal (Surital 2.5%) or thiopental (Pentothal 2.5%). During recording, anesthesia was maintained with Pentothal (2.5%, 2–6 (mg/kg)/hr). The local anesthetic Novocain was administered, as required, during the surgical procedures. Penicillin (750,000 units intramuscular) was also administered to prevent infection, as was dexamethasone (Decadron, 6 mg intravenous) to forestall cerebral edema. Muscular paralysis was induced and maintained with gallamine triethiodide (Flaxedil, 5–15 (mg/kg)/hr) or vecuronium bromide (Norcuron, 0.25 (mg/kg)/hr). Infusions of Ringer’s saline with 5% dextrose at 3–4 (ml/kg)/hr were also administered. The two femoral veins and a femoral artery were cannulated for intravenous drug infusions. Heart rate and blood pressure, along with expired CO<sub>2</sub>, were continuously monitored and maintained in physiological ranges. For male cats, the bladder was also cannulated to monitor fluid outflow. Core body temperature was maintained at 37.5 °C throughout the experiment by wrapping the animal’s torso in a DC heating pad controlled by feedback from a subscapular temperature probe. The cat’s head was fixed in a stereotaxic apparatus. The trachea was cannulated to allow for artificial respiration. To minimize respiratory artifacts, the animal’s body was suspended from a vertebral clamp and a pneumothorax was performed when needed. Eyedrops of 10% phenylephrine hydrochloride (Neo-synephrine) and 1% atropine were applied to dilate the pupils and retract the nictitating membranes. Gas-permeable hard contact lenses protected the corneas from drying. Artificial pupils of 3-mm diameter were placed in front of the contact lenses to maintain fixed retinal illumination. 
The optical quality of the animal’s eyes was regularly examined by ophthalmoscopy. The optic discs were mapped onto a tangent screen, by back-projection, for use as a positional reference. The animal viewed a CRT screen (Tektronix 608, 270 frames/sec; or CONRAC, 135 frames/sec) that, depending on the stimulus condition, was either dark, uniformly illuminated with a fixed luminance level, or displayed a moving grating. A craniotomy was performed over the LGN (center located 6.5 mm anterior to the earbars and 9 mm lateral to the midline of the skull), and the dura mater was resected. A tungsten-in-glass microelectrode (5–10-$`\mu `$m tip length) was lowered until spikes from a single LGN neuron were isolated. The microelectrode simultaneously recorded RGC activity, in the form of S potentials, and LGN spikes, with a timing accuracy of 0.1 msec. The output was amplified and monitored using conventional techniques. A cell was classified as Y-type if it exhibited strong frequency doubling in response to contrast-reversing high-spatial-frequency gratings, and X-type otherwise . The experimental protocol was approved by the Animal Care and Use Committee of Rockefeller University, and was in accord with the National Institutes of Health guidelines for the use of higher mammals in neuroscience experiments. ### 4.2 RGC and LGN Dark Discharge Results for simultaneously recorded RGC and target LGN spike trains of 4000-sec duration are presented in Fig. 3, when the retina is thoroughly adapted to the dark (this is referred to as the “dark discharge”). The normalized rate functions (A) for both the RGC (solid curve) and LGN (dashed curve) recordings exhibit large fluctuations over the course of the recording; each window corresponds to a counting time of $`T=100`$ sec. Such large, slow fluctuations often indicate fractal rates . 
The two recordings bear a substantial resemblance to each other, suggesting that the fractal components of the rate fluctuations either have a common origin or pass from one of the cells to the other. The normalized interevent-interval histogram (B) of the RGC data follows a straight-line trend on a semi-logarithmic plot, indicating that the interevent-interval probability density function is close to an exponential form. The LGN data, however, yields a nonmonotonic (bimodal) interevent-interval histogram. This distribution favors longer and shorter intervals at the expense of those near half the mean interval, reflecting clustering in the event occurrences over the short term. Various kinds of unusual clustering behavior have been previously observed in LGN discharges . R/S plots (C) for both the RGC and LGN recordings follow the $`k^{0.5}`$ line for sums less than 1000 intervals, but rise sharply thereafter in a roughly power-law fashion as $`k^{H_R}=k^{(\alpha _R+1)/2}`$, suggesting that the neural firing pattern exhibits fractal activity for times greater than about 1000 intervals (about 120 sec for these two recordings). Both smoothed periodograms (D) decay with frequency as $`f^{-\alpha _S}`$ for small frequencies, and the Allan factors (E) increase with time as $`T^{\alpha _A}`$ for large counting times, confirming the fractal behavior. The 0.3-Hz component evident in the periodograms of both recordings is an artifact of the artificial respiration; it does not affect the fractal analysis. As shown in Table 1, the fractal exponents calculated from the various measures bear rough similarity to each other, as expected ; further, the onset times also agree reasonably well, being in the neighborhood of 100 sec. 
The coherence among these statistics leaves little doubt that these RGC and LGN recordings exhibit fractal features with estimated fractal exponents of $`1.9\pm 0.1`$ and $`1.8\pm 0.1`$ (mean $`\pm `$ standard deviation of the three estimated exponents), respectively. Moreover, the close numerical agreement of the RGC and LGN estimated fractal exponents suggests a close connection between the fractal activity in the two spike trains under dark conditions . Curves such as those presented in Fig. 3 are readily simulated by using a fractal-rate stochastic point process, as described in . With the exception of the interevent-interval distribution, it is apparent from Fig. 3 that the statistical properties of the dark discharges generated by the RGC and its target LGN cell prove to be remarkably similar. ### 4.3 RGC and LGN Maintained Discharge Figure 4 presents analogous statistical results for simultaneously recorded maintained-discharge RGC and target-LGN spike trains of 7000-sec duration when the stimulus presented by the CRT screen was a 50 cd/m<sup>2</sup> uniform luminance. The cell pair from which these recordings were obtained is different from the pair whose statistics are shown in Fig. 3. As is evident from Table 1, the imposition of a stimulus increases the RGC firing rate, though not that of the LGN. In contrast to the results for the dark discharge, the RGC and LGN action-potential sequences differ from each other in significant ways under maintained-discharge conditions. We previously investigated some of these statistical measures, and their roles in revealing fractal features, for the maintained discharge . The rate fluctuations (A) of the RGC and the LGN no longer resemble each other. At these counting times, the normalized RGC rate fluctuations are suppressed, whereas those of the LGN are enhanced, relative to the dark discharge shown in Fig. 3. 
Significant long-duration fluctuations are apparently imparted to the RGC S-potential sequence at the LGN, through the process of selective clustered passage . Spike clustering is also imparted at the LGN over short time scales; the RGC maintained discharge exhibits a coefficient of variation (CV) much less than unity, whereas that of the LGN significantly exceeds unity (see Table 1). The normalized interevent-interval histogram (B) of the RGC data resembles that of a dead-time-modified Poisson point process (fit not shown), consistent with the presence of relative refractoriness which becomes more important at higher rates . Dead-time effects in the LGN are secondary to the clustering that it imparts to the RGC S-potentials, in part because of its lower rate. The R/S (C), periodogram (D), and Allan factor (E) plots yield results that are consistent with, but different from, those revealed by the dark discharge shown in Fig. 3. Although both the RGC and LGN recordings exhibit evidence of fractal behavior, the two spike trains now behave quite differently in the presence of a steady-luminance stimulus. For the RGC recording, all three measures are consistent with a fractal onset time of about 1 sec, and a relatively small fractal exponent ($`0.7\pm 0.3`$). For the LGN, the fractal behavior again appears in all three statistics, but begins at a larger onset time (roughly 20 sec) and exhibits a larger fractal exponent ($`1.4\pm 0.6`$). Again, all measures presented in Fig. 4 are well described by a pair of fractal-rate stochastic point processes . ### 4.4 RGC and LGN Driven Discharge Figure 5 presents these same statistical measures for simultaneously recorded 7000-sec duration RGC and LGN spike trains in response to a sinusoidal stimulus (drifting grating) at 4.2 Hz frequency, 40% contrast, and 50 cd/m<sup>2</sup> mean luminance. The RGC/LGN cell pair from which these recordings were obtained is the same as the pair illustrated in Fig. 4. 
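The coefficient of variation quoted here and in Table 1 is simply the ratio of the standard deviation of the interevent intervals to their mean; a minimal sketch (the helper name is ours):

```python
import numpy as np

def interval_cv(event_times):
    """Coefficient of variation of the interevent intervals."""
    tau = np.diff(event_times)  # interevent intervals
    return tau.std() / tau.mean()
```

CV is near unity for a Poisson process, falls below unity when refractoriness regularizes the train (as for the RGC maintained discharge), and exceeds unity for clustered firing (as for the LGN).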
The results for this stimulus resemble those for the maintained discharge, but with added sinusoidal components associated with the restricted phases of the stimulus during which action potentials occur. Using terminology from auditory neurophysiology, these spikes are said to be “phase locked” to the periodicity provided by the drifting-grating stimulus. The firing rate is greater than that observed with a steady-luminance stimulus, particularly for the LGN (see Table 1). Again, the RGC and LGN spike trains exhibit different behavior. The rate fluctuations (A) of the LGN still exceed those of the RGC, but not to as great an extent as in Fig. 4. Both action-potential sequences exhibit normalized interevent-interval histograms (B) with multiple maxima, but the form of the histogram is now dominated by the modulation imposed by the oscillatory stimulus. Over long times and small frequencies, the R/S (C), periodogram (D), and Allan factor (E) plots again yield results in rough agreement with each other, and also with the results presented in Fig. 4. The most obvious differences arise from the phase locking induced by the sinusoidal stimulus, which appears directly in the periodogram as a large spike at 4.2 Hz, and in the Allan factor as local minima near multiples of $`(4.2\text{ Hz})^{-1}=0.24`$ sec. The RGC results prove consistent with a fractal onset time of about 3 sec, and a relatively small fractal exponent ($`0.7\pm 0.1`$), whereas for the LGN the onset time is about 20 sec and the fractal exponent is $`1.7\pm 0.4`$. For both spike trains fractal behavior persists in the presence of the oscillatory stimulus, though its magnitude is slightly attenuated. ### 4.5 Correlation in the Discharges of Pairs of RGC and LGN Cells We previously examined information exchange among pairs of RGC and LGN spike trains using information-theoretic measures . 
While these approaches are very general, finite data length renders them incapable of revealing relationships between spike trains over time scales longer than about 1 sec. We now proceed to investigate various RGC and LGN spike-train pairs in terms of the correlation measures for pairs of point processes developed in Sec. 3.4. Pairs of RGC discharges are only weakly correlated over long counting times. This is readily illustrated in terms of normalized rate functions such as those presented in Fig. 6A, in which the rate functions of two RGCs are computed over a counting time $`T=100`$ sec. Calculation of the correlation coefficient ($`\rho =+0.27`$) shows that the fluctuations are only mildly correlated. Unexpectedly, however, significant correlation turns out to be present in pairs of LGN discharges over long counting times. This is evident in Fig. 6B, where the correlation coefficient $`\rho =+0.98`$ ($`p<10^{-16}`$) for the rates of two LGN discharges computed over the same counting time $`T=100`$ sec. For shorter counting times, there is little cross correlation for pairs of either RGC or LGN spike trains (not shown). However, strong correlations are present in the spike rates of an RGC and its target LGN cell as long as the rate is computed over times shorter than 15 sec for this particular cell pair. The cross correlation can be quantified at all time and frequency scales by the normalized wavelet cross-correlation function (see Sec. 3.4.1) and the cross periodogram (see Sec. 3.4.2), respectively. Figure 6C shows the normalized wavelet cross-correlation function, as a function of the duration of the counting window, between an RGC/LGN spike-train pair recorded under maintained-discharge conditions, as well as for two surrogate data sets (shuffled and Poisson). For this spike-train pair, it is evident that significant correlation exists over time scales less than 15 seconds. 
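A correlation coefficient of this kind can be obtained as the Pearson correlation between the two normalized rate sequences at a common counting time $`T`$. The sketch below is our own minimal implementation of that idea, not the authors' code:

```python
import numpy as np

def normalized_rate(event_times, T, duration):
    """Rate in adjacent windows of duration T, divided by the mean rate."""
    n_win = int(duration // T)
    counts, _ = np.histogram(event_times, bins=np.arange(n_win + 1) * T)
    rate = counts / T
    return rate / rate.mean()

def rate_correlation(times_a, times_b, T, duration):
    """Pearson correlation between two normalized rate sequences."""
    ra = normalized_rate(times_a, T, duration)
    rb = normalized_rate(times_b, T, duration)
    return np.corrcoef(ra, rb)[0, 1]
```

For two independent trains the coefficient hovers near zero; for a train paired with itself it equals one, and intermediate values such as the quoted +0.27 and +0.98 measure shared long-time rate fluctuations.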
The constant magnitude of the normalized wavelet cross-correlation function for $`T<15`$ sec is likely associated with the selective transmission properties of the LGN . Figure 6D presents the normalized wavelet cross-correlation function for the same RGC/LGN spike-train pair shown in Fig. 6C (solid curve), together with that between two RGC action-potential sequences (long-dashed curve), and between their two associated LGN spike trains (short-dashed curve). Also shown is a dotted line representing the aggregate behavior of the normalized wavelet cross-correlation function absolute magnitude for all surrogate data sets, which resemble each other. While the two RGC spike trains exhibit a normalized wavelet cross-correlation function value which remains below 7, the two LGN action-potential sequences yield a curve that steadily grows with increasing counting window $`T`$, attaining a value in excess of 1000. Indeed, a logarithmic scale was chosen for the ordinate to facilitate the display of this wide range of values. It is of interest to note that the LGN/LGN curve begins its steep ascent just as the RGC/LGN curve abruptly descends. Further, the normalized wavelet cross-correlation function between the two LGN recordings closely follows a power-law form, indicating that the two LGN action-potential rates are co-fractal. One possible origin of this phenomenon is a fractal form of correlated modulation of the random-transmission processes in the LGN that results in the two LGN spike trains. Some evidence exists that global modulation of the LGN might originate in the parabrachial nucleus of the brain stem; the results presented here are consistent with such a conclusion. Analogous results for the cross-periodograms, which are shown in Figs. 6E and F, provide results that corroborate, but are not as definitive as, those obtained with the normalized wavelet cross-correlation function. 
The normalized wavelet cross-correlation functions for pairs of driven spike trains, shown in Fig. 7, closely follow those for pairs of maintained discharges, shown in Fig. 6, except for the presence of structure at the stimulus period imposed by the drifting grating. ## 5 Discussion The presence of a stimulus alters the manner in which spike trains in the visual system exhibit fractal behavior. In the absence of a stimulus, RGC and LGN dark discharges display similar fractal activity (see Fig. 3). The normalized rate functions of the two recordings, when computed for long counting times, follow similar paths. The R/S, Allan factor, and periodogram quantify this relationship, and these three measures yield values of the fractal exponents for the two spike trains that correspond reasonably well (see Table 1). The normalized interevent-interval histogram, a measure which operates only over relatively short time scales, shows a significant difference between the RGC and LGN responses. Such short-time behavior, however, does not affect the fractal activity, which manifests itself largely over longer time scales. The presence of a stimulus, either a constant luminance (Fig. 4), or a drifting grating (Fig. 5), causes the close linkage between the statistical character of the RGC and LGN discharges over long times to dissipate. The normalized rate functions of the LGN spike trains display large fluctuations about their mean, especially for the maintained discharge, while the RGC rate functions exhibit much smaller fluctuations that are minimally correlated with those of the LGN. Again, the R/S, Allan factor, and periodogram quantify this difference, indicating that fractal activity in the RGC consistently exhibits a smaller fractal exponent (see also Table 1), and also a smaller fractal onset time (higher onset frequency). 
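The fractal exponents discussed here are slopes of least-squares fits on doubly logarithmic plots (see the Table 1 caption), with the R/S slope giving the Hurst exponent $`H_R`$ and $`\alpha _R=2H_R-1`$. A minimal sketch of such a fit (helper names are ours):

```python
import numpy as np

def loglog_slope(x, y):
    """Least-squares slope of log10(y) versus log10(x)."""
    slope, _intercept = np.polyfit(np.log10(x), np.log10(y), 1)
    return slope

def alpha_from_hurst(h_r):
    """Convert an R/S Hurst-exponent estimate to a fractal exponent."""
    return 2.0 * h_r - 1.0
```

Applied to R(k) versus k above the onset, to the periodogram at small frequencies (whose slope is the negative of alpha_S), or to A(T) at large T, this yields the three exponent estimates compared in Table 1.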
Both the R/S and Allan-factor measures indicate that the LGN exhibits more fluctuations than the RGC at all scales; the periodogram does not, apparently because it is the only one of the three constructed without normalization. In the driven case (Fig. 5), the oscillatory nature of the stimulus phase-locks the RGC and LGN spike trains to each other at shorter time scales. The periodogram displays a peak at 4.2 Hz, and the Allan factor exhibits minima at multiples of $`(4.2\text{ Hz})^{-1}=0.24`$ sec, for both action-potential sequences. The normalized interevent-interval histogram also suggests a relationship between the two recordings mediated by the time-varying stimulus; both RGC and LGN histograms achieve a number of maxima. Although obscured by the normalization, the peaks do indeed coincide for an unnormalized plot (not shown). In the presence of a stimulus, RGCs are not correlated with their target LGN cells over the long time scales at which fractal behavior becomes most important, but significant correlation does indeed exist between pairs of LGN spike trains for both the maintained and driven discharges (see Figs. 6 and 7, respectively). These pairs of LGN discharges, exhibiting linked fractal behavior, may be called co-fractal. The normalized wavelet cross-correlation function and cross periodogram plots between RGC 1 and LGN 1 remain significantly above the surrogates for small times (Figs. 6C and 6E). The results for the two RGCs suggest some degree of co-fractal behavior, but no significant correlation over short time scales for the maintained discharge (Figs. 6D and 6F). Since the two corresponding RGC spike trains do not appear co-fractal nearly to the degree shown by the LGN recordings, the co-fractal component must be imparted at the LGN itself. This suggests that the LGN discharges may experience a common fractal modulation, perhaps provided from the parabrachial nucleus in the brain stem, which engenders co-fractal behavior in the LGN spike trains. 
Although similar data for the dark discharge are not available, the tight linkage between RGC and LGN firing patterns in that case (Fig. 3) suggests that a common fractal modulation may not be present in the absence of a stimulus, and therefore that discharges from nearby LGN cells would in fact not be co-fractal; this remains to be experimentally demonstrated. Correlations in the spike trains of relatively distant pairs of cat LGN cells have been previously observed in the short term for drifting-grating stimuli ; these correlations have been ascribed to low-threshold calcium channels and dual excitatory/inhibitory action in the corticogeniculate pathway . In the context of information transmission, the LGN may modulate the fractal character of the spike trains according to the nature of the stimulus present. Under dark conditions, with no signal to be transmitted, the LGN appears to pass the fractal character of the individual RGCs on to more central stages of visual processing, which could serve to keep them alert and responsive to all possible input time scales. If, as appears to be the case, the responses from different RGCs do not exhibit significant correlation with each other, then the LGN spike trains also will not, and the ensemble average, comprising a collection of LGN spike trains, will display only small fluctuations. In the presence of a constant stimulus, however, the LGN spike trains develop significant degrees of co-fractal behavior, so that the ensemble average will exhibit large fluctuations . Such correlated fractal behavior might serve to indicate the presence of correlation at the visual input, while still maintaining fluctuations over all time scales to ready neurons in later stages of visual processing for any stimulus changes that might arrive. 
Finally, a similar behavior obtains for a drifting-grating stimulus, but with somewhat reduced fractal fluctuations; perhaps the stimulus itself, though fairly simple, serves to keep more central processing stages alert. ### 5.1 Prevalence and Significance of Fractal and Co-Fractal <br>Behavior Fractal behavior is present in all 50 of the RGC and LGN neural spike-train pairs that we have examined, under dark, maintained-discharge, and drifting-grating stimulus conditions, provided they are of sufficient length to manifest this behavior. Indeed, fractal behavior is ubiquitous in sensory systems. Its presence has been observed in cat striate-cortex neural spike trains ; and in the spike train of a locust visual interneuron, the descending contralateral movement detector . It is present in the auditory system of a number of species; primary auditory (VIII-nerve) nerve fibers in the cat , chinchilla, and chicken all exhibit fractal behavior. It is exhibited at many biological levels, from the microscopic to the macroscopic; examples include ion-channel behavior , neurotransmitter exocytosis at the synapse , and spike trains in rabbit somatosensory-cortex neurons and mesencephalic reticular-formation neurons . In almost all cases, the upper limit of the observed time over which fractal correlations exist is imposed by the duration of the recording. The significance of the fractal behavior is not fully understood. Its presence may serve as a stimulus to keep more central stages of the sensory system alert and responsive to all possible time scales, awaiting the arrival of a time-varying stimulus whose time scale is a priori unknown. It is also possible that fractal activity in spike trains provides an advantage in terms of matching the detection system to the expected signal since natural scenes have fractal spatial and temporal noise . 
## 6 Conclusion Using a variety of statistical measures, we have shown that fractal activity in LGN spike trains remains closely correlated with that of their exciting RGC action-potential sequences under dark conditions, but not with stimuli present. The presence of a visual stimulus serves to increase long-duration fluctuations in LGN spike trains in a coordinated fashion, so that pairs of LGN spike trains exhibit co-fractal behavior largely uncorrelated with activity in their associated RGCs. Such large correlations are not present in pairs of RGC spike trains. A drifting-grating stimulus yields similar results, but with fractal activity in both recordings somewhat suppressed. Co-fractal behavior in LGN discharges under constant luminance and drifting-grating stimulus conditions suggests that a common fractal modulation may be imparted at the LGN in the presence of a visual stimulus. ## 7 Acknowledgments This work was supported by the U.S. Office of Naval Research under grants N00014-92-J-1251 and N0014-93-12079, by the National Institute for Mental Health under grant MH5066, by the National Eye Institute under grants EY4888 and EY11276, and by the Whitaker Foundation under grant RG-96-0411. E. Kaplan is Jules and Doris Stein Research-to-Prevent-Blindness Professor at Mt. Sinai School of Medicine. 
## 8 Table

| Stimulus | Cell | Mean | CV | $`\alpha _R`$ | $`\alpha _S`$ | $`\alpha _A`$ |
| --- | --- | --- | --- | --- | --- | --- |
| Dark | RGC | 112 msec | 1.54 | 1.71 | 1.89 | 1.96 |
| | LGN | 152 msec | 1.62 | 1.66 | 1.75 | 1.85 |
| Maintained | RGC | 32 msec | 0.52 | 0.53 | 0.58 | 0.99 |
| | LGN | 284 msec | 1.63 | 0.89 | 2.01 | 1.41 |
| Driven | RGC | 27 msec | 1.21 | 0.79 | 0.54 | 0.74 |
| | LGN | 77 msec | 1.15 | 1.35 | 2.10 | 1.76 |

Neural-discharge statistics for cat retinal ganglion cells (RGCs) and their associated lateral geniculate nucleus (LGN) cells, under three stimulus conditions: dark discharge in the absence of stimulation (data duration $`L=4000`$ sec); maintained discharge in response to a uniform luminance of 50 cd/m<sup>2</sup> (data duration $`L=7000`$ sec); and driven discharge in response to a drifting grating (4.2 Hz frequency, 40% contrast, and 50 cd/m<sup>2</sup> mean luminance; data duration $`L=7000`$ sec). All cells are on-center X-type. The maintained and driven data sets were recorded from the same RGC/LGN cell pair, whereas the dark discharge derived from a different cell pair. Statistics, from left to right, are the moments (mean interevent interval and interevent-interval coefficient of variation, CV = standard deviation divided by mean), and fractal exponents estimated by least-squares fits on doubly logarithmic plots of 1) the rescaled range (R/S) statistic for $`k>1000`$, which yields an estimate of the Hurst exponent $`H_R`$, and of $`\alpha _R`$, in turn, through the relation $`\alpha _R=2H_R-1`$; 2) the count-based periodogram for frequencies between 0.001 and 0.01 Hz, which yields $`\alpha _S`$; and 3) the Allan factor for counting times between $`L/100`$ and $`L/10`$, where $`L`$ is the duration of the recording, which yields $`\alpha _A`$. ## 9 Figure Captions Figure 1: Rate estimates formed by dividing the number of events in successive counting windows by the counting time $`T`$. 
The stimulus was a uniformly illuminated screen (with no temporal or spatial modulation) of luminance 50 cd/m<sup>2</sup>. A) Rate estimate for a cat RGC generated using three different counting times ($`T=`$ 1, 10, and 100 sec). The fluctuations in the rate estimates converge relatively slowly as the counting time is increased. This is characteristic of fractal-rate processes. The convergence properties are quantified by measures such as the Allan factor and periodogram. B) Rate estimates from the same recording after the intervals are randomly reordered (shuffled). This maintains the same relative frequency of interval sizes but destroys the original relative ordering of the intervals, and therefore any correlations or dependencies among them. For such nonfractal signals, the rate estimate converges more quickly as the counting time $`T`$ is increased. The data presented here are typical of the 50 data sets examined. Figure 2: A sequence of action potentials (top) is reduced to a set of events (represented by arrows, middle) that form a point process. A sequence of interevent intervals $`\{\tau _n\}`$ is formed from the times between successive events, resulting in a discrete-time, positive, real-valued stochastic process (lower left). All information contained in the original point process remains in this representation, but the discrete-time axis of the sequence of interevent intervals is distorted relative to the real-time axis of the point process. The sequence of counts $`\{Z_n\}`$, a discrete-time, nonnegative, integer-valued stochastic process, is formed from the point process by recording the numbers of events in successive counting windows of duration $`T`$ (lower right). This process of mapping the point process to the sequence $`\{Z_n\}`$ results in a loss of information, but the amount lost can be made arbitrarily small by reducing $`T`$. An advantage of this representation is that no distortion of the time axis occurs. 
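The shuffled surrogate of Fig. 1B, like the shuffled surrogates used later for the cross-correlation significance checks, can be generated by permuting the interevent intervals and re-accumulating them. A sketch (the helper and its seed argument are ours):

```python
import numpy as np

def shuffled_surrogate(event_times, seed=0):
    """Shuffle interevent intervals: same interval histogram, no ordering."""
    rng = np.random.default_rng(seed)
    intervals = rng.permutation(np.diff(event_times))
    # re-accumulate from the original first event time
    return event_times[0] + np.concatenate(([0.0], np.cumsum(intervals)))
```

Because only the ordering of the intervals is destroyed, the interevent-interval histogram of the surrogate matches the data exactly, while long-range correlations, and hence the fractal rate fluctuations, are removed.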
Figure 3: Statistical measures of the dark discharge from a cat on-center X-type retinal ganglion cell (RGC) and its associated lateral geniculate nucleus (LGN) cell, for data of duration $`L=4000`$ sec. RGC results appear as solid curves, whereas LGN results are dashed. A) Normalized rate function constructed by counting the number of neural spikes occurring in adjacent 100-sec counting windows, and then dividing by 100 sec and by the average rate. B) Normalized interevent-interval histogram (IIH) vs normalized interevent interval constructed by dividing the interevent intervals for each spike train by the mean, and then obtaining the histogram. C) Normalized range of sums $`R(k)`$ vs number of interevent intervals $`k`$ (see Sec. 3.2.2). D) Periodogram $`S(f)`$ vs frequency $`f`$ (see Sec. 3.3.3). E) Allan factor $`A(T)`$ vs counting time $`T`$ (see Sec. 3.3.2). Figure 4: Statistical measures of the maintained discharge from a cat on-center X-type RGC and its associated LGN cell, at a steady luminance of 50 cd/m<sup>2</sup>, for data of duration $`L=7000`$ sec. This cell pair is different from the one illustrated in Fig. 3. The results for the RGC discharge appear as solid curves, whereas those for the LGN are presented as dashed curves. Panels A)–E) as in Fig. 3. Figure 5: Statistical measures of the driven discharge from a cat on-center X-type RGC and its associated LGN cell, for a drifting-grating stimulus with mean luminance 50 cd/m<sup>2</sup>, 4.2 Hz frequency, and 40% contrast, for data of duration $`L=7000`$ sec. This cell pair is the same as the one illustrated in Fig. 4. The results for the RGC discharge appear as solid curves, whereas those for the LGN are presented as dashed curves. Panels A)–E) as in Figs. 3 and 4. Figure 6: Statistical measures of the maintained discharge from pairs of cat on-center X-type RGCs and their associated LGN cells, stimulated by a uniform luminance of 50 cd/m<sup>2</sup>, for data of duration $`L=7000`$ sec. 
RGC and LGN spike trains denoted “1” are those that have been presented in Figs. 4 and 5, while those denoted “0” are another simultaneously recorded pair. A) Normalized rate functions constructed by counting the number of neural spikes occurring in adjacent 100-sec counting windows, and then dividing by 100 sec and by the average rate, for RGC 1 and RGC 0. Note that the ordinate scale differs from that in (A). B) Normalized rate functions for the two corresponding target LGN cells, LGN 1 and LGN 0. C) Normalized wavelet cross-correlation function (NWCCF) between the RGC 1 and LGN 1 recordings (solid curve), shuffled surrogates of these two data sets (long-dashed curve), and Poisson surrogates (short-dashed curve). Unlike the Allan factor $`A(T)`$, the normalized wavelet cross-correlation function can assume negative values and need not approach unity in certain limits. Negative normalized wavelet cross-correlation function values for the data or the surrogates are not printed on this doubly logarithmic plot, nor are they printed in panel (D). Comparison between the value of the normalized wavelet cross-correlation function obtained from the data at a particular counting time $`T`$ on the one hand, and from the surrogates at that time $`T`$ on the other hand, indicates the significance of that particular value. D) Normalized wavelet cross-correlation functions between RGC 1 and LGN 1 (solid curve, repeated from panel (C)), the two RGC spike trains (long-dashed curve), and the two LGN spike trains (short-dashed curve). Also included is the aggregate behavior of both types of surrogates for all three combinations of recordings listed above (dotted line). E) Cross periodograms of the data sets displayed in panel (C). F) Cross periodograms of the data sets displayed in panel (D). 
Figure 7: Statistical measures of the driven discharge from pairs of cat on-center X-type RGCs and their associated LGN cells, stimulated by a drifting grating with a mean luminance of 50 cd/m<sup>2</sup>, 4.2 Hz frequency, and 40% contrast, for data of duration $`L=7000`$ sec. RGC and LGN spike trains denoted “1” are recorded from the same cell pair that have been presented in Figs. 4–6, while those denoted “0” are recorded simultaneously from the other cell pair, that was presented in Fig. 6 only. Panels A)–F) as in Fig. 6.
# Fluid-fluid transitions of hard spheres with a very short-range attraction ## Abstract Hard spheres with an attraction of range a tenth to a hundredth of the sphere diameter are constrained to remain fluid even at densities at which monodisperse particles at equilibrium would have crystallised, in order to permit comparison with experimental systems which remain fluid. They are found to have a fluid-fluid transition at high density. As the range of the attraction tends to zero, the density at the critical point tends towards the random-close-packing density of hard spheres. PACS: 82.70.Dd Colloids, 61.20.Gy Theory and models of liquid structure Argon forms a liquid because argon atoms attract each other and these dispersion attractions between the atoms are relatively long-ranged; the volume over which one argon atom attracts another is comparable to the volume one argon atom of the pair excludes to another. If we could reduce the range of the attraction between argon atoms then the liquid phase would disappear from the equilibrium phase diagram when the volume over which the atoms attract was of order one tenth of the volume they exclude to each other. Of course we cannot change the interaction between argon atoms but there are well-established colloidal systems whose interactions we can change. The liquid phase disappears from the equilibrium phase diagram because the fluid-fluid transition is preempted by the crystallisation of the fluid. But although the fluid-fluid transition has disappeared from the equilibrium phase diagram of monodisperse particles, experiments often do not observe crystallisation, presumably due to a combination of a large free energy barrier to crystallisation and the destabilising effect of small amounts of polydispersity on the crystalline phase. As crystallisation does not occur it does not preempt the fluid-fluid transition, which is therefore observable. 
With this in mind we study the behaviour of spherical particles with a short-range attraction which are constrained to remain fluid. We study attraction ranges down to a hundredth of the diameter of the hard core — this is what we mean by very short-range attractions. We find that as the range decreases, the density at the critical point increases to very high values. For a sufficiently short range the critical point lies above the density of the kinetic glass transition observed in experiments on hard-sphere-like colloids. Experiments on colloids with very short-range attractions have found a threshold beyond which diffusion ceases , and Poon, Pirie and Pusey have previously suggested that this is due to an arrested fluid-fluid phase separation. If as we find, the dense fluid has a density above that of the hard-sphere glass transition, then it is not surprising that the dynamics of phase separation become arrested. Here we will not consider the crystalline phase at all. Our results are for a system of particles which is constrained to remain fluid at all temperatures and pressures; see refs. for a discussion of the application of constraints to stabilise a phase which would otherwise be metastable or unstable. Although experiments on near-monodisperse colloidal spheres show that they crystallise readily, at least as long as the attraction is not too strong, polydisperse colloidal spheres often never crystallise and the presence of a very short-range attraction makes the crystalline phase even more sensitive to polydispersity . By polydisperse spheres we mean that the spherical particles do not all have the same diameter but have a range of diameters. Our theory is a perturbation theory about a hard-sphere fluid and so completely neglects the crystal. Thus we will not need to explicitly apply a constraint within the theory. 
We do, however, need to assume that it is possible to apply a constraint to the system which has almost no effect on the fluid phase but completely prevents crystallisation. We chose a simple potential with a hard-sphere core and an attraction in the form of a Yukawa function. The hard-sphere+Yukawa potential is a spherically symmetric pair potential so the interaction energy $`v`$ depends only on the separation $`r`$ of the centres of the two particles, $$v(r)=\{\begin{array}{cc}\mathrm{}\hfill & r\le \sigma \hfill \\ -ϵ\frac{\sigma }{r}\mathrm{exp}[\kappa (1-r/\sigma )]\hfill & \sigma <r\hfill \end{array},$$ (1) where $`\sigma `$ is the hard-sphere diameter and $`ϵ`$ is the energy of interaction for touching spheres. With this potential the thermodynamic functions depend on the reduced temperature $`kT/ϵ`$, and the reduced density $`\eta =(N/V)(\pi /6)\sigma ^3`$ which is the fraction of the volume occupied by the cores of the particles. $`k`$, $`T`$, $`N`$ and $`V`$ are Boltzmann’s constant, the temperature, the number of particles and the volume, respectively. We require a free energy for this potential which is accurate up to very high densities, up to near random close packing which is at a volume fraction $`\eta \approx 0.64`$-$`0.65`$ . Speedy has obtained, from computer simulation data, an accurate equation of state of hard spheres up to random close packing. This enables us to use a perturbation theory, i.e., to start from the Helmholtz free energy in the infinite temperature limit of our model, which is hard spheres, and add on the energy as a perturbation. Then our expression for the Helmholtz free energy per particle $`a`$ at a temperature $`T`$ and a volume fraction $`\eta `$ is $$\beta a(\eta ,T)=\beta a_{hs}(\eta )+\beta u(\eta ,T),$$ (2) where $`a_{hs}`$ is the Helmholtz free energy of hard spheres, $`u`$ is the energy per particle and $`\beta =1/kT`$. 
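For numerical work, eq. (1) is simple to evaluate. The sketch below measures lengths in units of $`\sigma `$ by default and is an illustration, not the authors' code:

```python
import numpy as np

def v_yukawa(r, kappa, sigma=1.0, eps=1.0):
    """Hard-sphere core plus attractive Yukawa tail, eq. (1)."""
    r = np.asarray(r, dtype=float)
    tail = -eps * (sigma / r) * np.exp(-kappa * (r / sigma - 1.0))
    return np.where(r <= sigma, np.inf, tail)
```

Approaching contact from outside the potential tends to the well depth of minus epsilon, and the attraction decays over a distance of order sigma/kappa, which is what makes kappa = 10 to 100 a very short-range attraction.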
As the energy of a fluid of hard spheres is zero, $`\beta a_{hs}=-s_{hs}/k`$, where $`s_{hs}`$ is the entropy per particle of hard spheres, which is, according to Speedy $$\frac{s_{hs}}{k}=1-\mathrm{ln}\rho +C\mathrm{ln}(\eta _0-\eta )+S_0+N^{-1}\mathrm{ln}N_g(\eta _0),$$ (3) where $`C=2.8`$, $`S_0=0.25`$ and $`N_g`$ is $$N_g(\eta _0)=\mathrm{exp}\left[N\left(\alpha -\gamma (\eta _0-\eta _m)^2\right)\right],$$ (4) where $`\alpha =2`$, $`\gamma =193`$ and $`\eta _m=0.555`$. In these equations the value of $`\eta _0`$ at any density is determined by minimising the free energy at that density. This form of the free energy is optimised for the dense fluid. Essentially, if we start from any configuration of the dense fluid and begin to expand all the spheres (so increasing the volume fraction) then at some point the spheres will touch and then the spheres cannot be expanded further. At this point the volume fraction is $`\eta _0`$; this can be seen from the log term in eq. (3) which diverges when $`\eta =\eta _0`$. If we start from different configurations then after expansion of the spheres we may end up with a different value of $`\eta _0`$. The larger the difference $`\eta _0-\eta `$ then the more room the spheres have, which increases the entropy. However, simulations show that there are few arrangements of the spheres which have a large $`\eta _0`$, and therefore there is an entropic cost to being in an arrangement with a large $`\eta _0`$. $`N_g(\eta _0)`$, eq. (4), is essentially the number of ways of arranging spheres such that the maximum possible volume fraction is $`\eta _0`$; it is maximal at $`\eta _0=\eta _m`$. The competition between the third and fifth terms in eq. (3) then determines the value of $`\eta _0`$. As the spheres touch when $`\eta =\eta _0`$ and if we assume that the expansion is isotropic then the separation $`b`$ of spheres at a given $`\eta `$ and $`\eta _0`$ is $$b/\sigma =(\eta _0/\eta )^{1/3},$$ (5) just as in a crystal. 
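In the infinite-temperature (hard-sphere) limit, minimising the free energy over the packing fraction parameter amounts to maximising eq. (3), with the last term in its per-particle form alpha minus gamma times the squared deviation from eta_m, as in eq. (4). An illustrative grid search with sigma = 1 (the function and its grid bounds are our own choices):

```python
import numpy as np

C, S0, ALPHA, GAMMA, ETA_M = 2.8, 0.25, 2.0, 193.0, 0.555

def hs_entropy(eta):
    """Speedy entropy per particle s/k, eqs. (3)-(4), maximized over eta0."""
    rho = 6.0 * eta / np.pi                     # number density for sigma = 1
    eta0 = np.linspace(eta + 1e-5, 0.75, 4000)  # candidate values of eta0
    s = (1.0 - np.log(rho) + C * np.log(eta0 - eta) + S0
         + ALPHA - GAMMA * (eta0 - ETA_M) ** 2)
    i = int(np.argmax(s))                       # max s = min of -s/k = beta*a_hs
    return s[i], eta0[i]
```

The competition between the logarithmic term and the quadratic penalty places the optimal eta0 a little above eta_m at liquid-like densities, illustrating the trade-off described in the text.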
The energy of attraction is approximated by the energy of interaction of each sphere with its six neighbours at a separation $`b`$ $$u=3v(b)=3v(\sigma (\eta _0/\eta )^{1/3}).$$ (6) As the energy depends on $`\eta _0`$, the total free energy, eq. (2), is minimised to obtain $`\eta _0`$ at each density and temperature. Guides to the accuracy of our free energy are obtained by comparison with existing simulation data. For $`\kappa =7`$, Hagen and Frenkel find a fluid-fluid critical point at $`kT/ϵ=0.41`$, $`\eta =0.26`$, whereas we predict $`kT/ϵ=0.54`$, $`\eta =0.30`$. The agreement is fair although not quantitative, and we expect our theory to do better at higher densities. Applying an approximation of the type of eq. (6) to a face-centred-cubic (fcc) crystal yields an fcc-crystal–fcc-crystal critical point when $`\kappa =100`$ at $`kT/ϵ=1.1`$, $`\eta =0.69`$. Bolhuis, Hagen and Frenkel, using computer simulation and perturbation theory, predict $`kT/ϵ=0.70`$, $`\eta =0.71`$. Again there is fair but not quantitative agreement. Results for four short ranges are plotted in fig. 1. A simple liquid such as argon is reasonably well modeled by an attraction of inverse range $`\kappa =1.8`$. The results are for inverse ranges up to two orders of magnitude greater. The notable feature is that the critical densities and the densities of the liquid phase are high and move to higher density as the range decreases. At high density the particles are pushed together until they are within range of the attraction. This occurs at separations between the surfaces of the spheres $`b-\sigma =𝒪(\sigma \kappa ^{-1})`$. With the particles just within range of the attraction there is a clear energetic driving force towards phase separation: the fluid lowers its energy at fixed overall density by some of the fluid condensing into a dense fluid where all the spheres are well within the range of the attraction of their nearest neighbours. 
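Putting eqs. (2)-(6) together, the minimisation over $`\eta _0`$ at fixed density and temperature can be sketched as follows. This is only a schematic, self-contained implementation under our reading of the equations (reduced units $`\sigma =ϵ=1`$, large-$`N`$ limit, a crude grid search, and an upper cut-off on $`\eta _0`$ near random close packing):

```python
import math

C, S0 = 2.8, 0.25
ALPHA, GAMMA, ETA_M = 2.0, 193.0, 0.555

def beta_a(eta, t_red, eta0, kappa=100.0):
    """beta*a of eq. (2) for a trial eta0 > eta, at reduced temperature
    t_red = kT/eps: the hard-sphere part is -s_hs/k (eq. (3)), and the
    energy is u = 3*v(b) with b = (eta0/eta)^(1/3) (eqs. (5) and (6))."""
    rho = 6.0 * eta / math.pi
    s_over_k = (1.0 - math.log(rho) + C * math.log(eta0 - eta)
                + S0 + ALPHA - GAMMA * (eta0 - ETA_M) ** 2)
    b = (eta0 / eta) ** (1.0 / 3.0)
    u = -3.0 * (1.0 / b) * math.exp(kappa * (1.0 - b))
    return -s_over_k + u / t_red

def minimise_over_eta0(eta, t_red, kappa=100.0, n=2000, eta0_max=0.66):
    """Grid search for the eta0 minimising beta*a; a simple stand-in
    for a proper one-dimensional minimiser."""
    best_f, best_eta0 = float("inf"), None
    for i in range(1, n + 1):
        eta0 = eta + (eta0_max - eta) * i / (n + 1)
        f = beta_a(eta, t_red, eta0, kappa)
        if f < best_f:
            best_f, best_eta0 = f, eta0
    return best_eta0
```

Lowering the temperature weights the (negative) energy term more heavily, which pulls $`\eta _0`$ down towards $`\eta `$, i.e. pushes the spheres into range of each other's attraction.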
This is just what was observed by Bolhuis, Hagen and Frenkel in the fcc crystal. In the absence of the crystalline phase, due to polydispersity perhaps, the transition simply shifts over to a fluid-fluid transition, and it occurs at a lower density because the random close-packing density, which is the maximum density of amorphous spheres, is lower than the maximum density of spheres achievable in an fcc crystal. Because of the smaller number of neighbours in the dense fluid as compared to the crystal, the transition shifts to a lower temperature, but in both cases the critical temperature varies little with changing range. Grant and Russel, and Verduin and Dhont, have performed experiments on colloids which are hard-sphere-like at high temperature but, as the temperature is reduced, the solvent becomes a poor solvent for the alkane chains which are grafted to the surface of the colloids. There is then a very short-range attraction when the grafted layers of two colloids overlap. Verduin and Dhont estimate the range of the attraction to be less than 1nm for colloids with a diameter 80nm. They assess the strength of the attraction via a parameter $`\tau `$ which is related to the second virial coefficient $`B_2`$ by $$\tau =\frac{1}{4}\left(1-B_2/B_2^{hs}\right)^{-1},$$ (7) where $`B_2^{hs}`$ is the second virial coefficient of hard spheres. We have replotted the phase diagrams for $`\kappa =20`$ and 100 in the density-$`\tau `$ plane in fig. 2. The experimental results are in the range $`\tau =0.1`$ to 0.2, and $`\eta =0.1`$ to $`\eta =0.4`$. These densities and temperatures lie within the coexistence region for $`\kappa =100`$. They locate both a spinodal and a ‘static percolation’ line where diffusion of the particles stops. At these densities and temperatures, the fluid is unstable with respect to phase separation into a more dilute phase and a very dense fluid phase — this phase has a volume fraction $`\approx 0.56`$. 
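Eq. (7), $`\tau =\frac{1}{4}(1-B_2/B_2^{hs})^{-1}`$, can be evaluated for the hard-sphere + Yukawa potential by computing $`B_2=2\pi \int (1-e^{-v(r)/kT})r^2dr`$ numerically; with $`B_2^{hs}=2\pi \sigma ^3/3`$ the hard-core part cancels and only the tail integral survives. A sketch (the function name and quadrature settings are ours; $`\sigma =ϵ=1`$):

```python
import math

def tau_from_b2(t_red, kappa, r_max=5.0, n=20000):
    """Baxter's stickiness parameter tau = (1/4)*(1 - B2/B2_hs)^(-1),
    eq. (7).  For the hard-sphere + Yukawa potential (sigma = 1),
    1 - B2/B2_hs = 3 * int_1^inf (exp(-v(r)/t_red) - 1) r^2 dr;
    the integral is done by Simpson's rule."""
    beta = 1.0 / t_red

    def integrand(r):
        v = -(1.0 / r) * math.exp(kappa * (1.0 - r))   # Yukawa tail, eq. (1)
        return (math.exp(-beta * v) - 1.0) * r * r

    h = (r_max - 1.0) / n          # n must be even for Simpson's rule
    s = integrand(1.0) + integrand(r_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * integrand(1.0 + i * h)
    integral = s * h / 3.0

    return 0.25 / (3.0 * integral)
```

Stronger attraction (lower $`kT/ϵ`$) or longer range (smaller $`\kappa `$) makes $`B_2`$ more negative and $`\tau `$ smaller, i.e. the spheres stickier.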
Beyond a volume fraction of 0.56-0.58, the relaxation time of a fluid of hard spheres exceeds typical experimental times, which are of the order of 1000 s. This is referred to as a kinetic glass transition; above it the samples are not equilibrated on the experimental time scale and so are not an equilibrium fluid. Thus as phase separation proceeds and domains of this dense phase appear, the phase separation dynamics may become arrested due to the very slow relaxation within these dense domains. This is the scenario suggested by Poon, Pirie and Pusey on the basis of their experiments on colloids with a longer range but still short-range attraction. Alternatively, as Grant and Russel have suggested, the phase separation may be fluid-crystal phase separation. We predict a critical point at a density which increases as the range decreases and at a value of $`\tau `$ which also increases as the range decreases. Although we cannot perform calculations at zero range, $`\kappa =\mathrm{\infty }`$, extrapolation of our results, together with the results of Bolhuis, Hagen and Frenkel, who were able to study the zero-range limit in the crystal, suggests that in the zero-range limit there is a fluid-fluid critical point at the random-close-packing density. The critical point would be at roughly the same temperature as the critical points in fig. 1, which implies that, as in the crystal, it occurs at infinite $`\tau `$. Baxter solved the Percus-Yevick (PY) approximation for hard spheres with a zero-range attraction. Within the PY approximation there are two routes to the thermodynamic functions. If the compressibility route is used the critical point is at the low volume fraction $`\eta =0.12`$ and at $`\tau =0.098`$, whereas via the energy route the prediction is $`\eta =0.32`$ and $`\tau =0.12`$. Our results suggest that the critical point predicted by the PY approximation may be an artifact of this approximation. 
Stell has shown that the virial expansion is pathological for all finite $`\tau `$ in the limit of the range of the attraction tending to zero. However, this pathology originates in crystalline clusters, which we have eliminated with our constraint that crystalline configurations are not allowed. So the pathological virial expansion shows that in the zero-range limit the equilibrium, unconstrained, fluid is completely unstable at finite $`\tau `$; see Refs. for the equilibrium phase diagram in the zero-range limit. In conclusion, we have determined the phase diagram of hard spheres with an attraction with a range of order 0.1 or 0.01 of the hard-core diameter, which are constrained not to crystallise. The fluid-fluid transition persists, according to our approximate theory, for all ranges of the attraction. As the range decreases, the density at the critical point increases and can become very high, near the random-close-packing density of hard spheres. As the density is so high, observing the transition will be difficult, as the dynamics are very slow at these densities; the densities can exceed that of the glass transition of hard spheres. Due to these slow dynamics a glass-glass transition may be observed instead of a fluid-fluid transition, but again it may be impossible to observe directly. The difficulty in observing fully equilibrated coexistence does not mean that the transition has no observable consequences. Out-of-equilibrium systems tend to head toward equilibrium, and even if they do not reach equilibrium their final state may be, roughly speaking, the point on the path to equilibrium where the dynamics stop. This is our tentative interpretation of the results on colloids with a very short-range attraction: the ceasing of diffusion observed is arrested fluid-fluid phase separation. 
One final point is that as the range decreases the critical point, with its associated large fluctuations and critical slowing down of the dynamics, will approach the kinetic glass transition. What effect this will have on the kinetic glass transition is unknown.
# X-ray Detection of SN1994W in NGC 4041? ## 1 Introduction X-ray emission from supernovae arises either from Compton-scattered $`\gamma `$ rays from the radioactive decay of <sup>56</sup>Co or from the interaction of the circumstellar matter with the supernova’s shock wave. For the case of circumstellar interaction, X-rays provide a view of the last years of the progenitor’s life, specifically, the years of mass loss. Supernovae undergoing circumstellar interaction are expected to emit X-rays with energies from 1 keV to 100 keV (Chevalier & Fransson, 1994). The recently-recognized hypernova class, supernovae which show evidence of extreme blast wave energies ($`\sim `$10<sup>52</sup> ergs) (Wang, 1999; Fryer & Woosley, 1998; Paczyński, 1998) and association with GRBs, also contributes X-ray emission. Currently, nine “normal” supernovae have been detected in the X-ray band: SN1978K, SN1979C, SN1980K, SN1986J, SN1987A, SN1988Z, SN1993J, SN1994I, and SN1995N (Schlegel (1995) and references therein for supernovae earlier than 1994; Lewin et al. (1995) for SN1995N; Immler, Pietsch, & Aschenbach (1998a) for SN1979C and Immler, Pietsch, & Aschenbach (1998b) for SN1994I). Of these, the early X-ray emission from one (SN1987A) largely arose from Compton-scattered $`\gamma `$-rays from the radioactive decay of <sup>56</sup>Co. The emission from the other eight is believed to come from the interaction of the SN shock with a circumstellar medium (the currently increasing late X-ray emission from SN1987A also comes from the shock interaction (Hasinger, Aschenbach, & Trümper, 1996)). A tenth supernova, SN1998bw, identified as a possible hypernova and perhaps associated with GRB980425, has been detected in X-rays (Pian et al., 1998). This paper describes the detection of X-ray emission from the location of SN1994W. ## 2 Summary of Discovery SN1994W in NGC 4041 was discovered by G. Cortini and M. Villi on 1994 July 29.85 (Cortini & Villi, 1994). 
The supernova was located approximately 17<sup>′′</sup>.5 N and 7<sup>′′</sup>.8 W of the nucleus of NGC 4041 (Pollas, 1994). Bragaglia, Munari, & Barbon (1994) obtained a spectrum using the Bologna Astronomical Observatory 1.5-m that showed a flat continuum with strong H$`\alpha `$ and H$`\beta `$ emission lines defining SN1994W as an SN II. No P Cygni profiles were observed in the first spectrum. Filippenko & Barth (1994) reported that a spectrum obtained with the Lick Observatory 3-m confirmed SN1994W as a peculiar SN II. The emission lines showed a narrow component (FWHM $`\sim `$1200 km s<sup>-1</sup>) sitting on a broad base (FWHM $`\sim `$5000 km s<sup>-1</sup>). Narrow Fe II emission lines were detected. Narrow absorption components were visible in the cores of the emission lines with FWHM $`\sim `$300 km s<sup>-1</sup>. Subsequent spectroscopy showed narrow (FWHM $`\sim `$1200 km s<sup>-1</sup>) P Cygni profiles. Cumming, Lundqvist, & Meikle (1994) described a spectrum obtained about two weeks later that showed little change in the emission lines. They suggested that the lack of change implied the supernova illuminated a dense circumstellar shell. Radio and X-ray emission could be expected. ## 3 X-ray Observation The ROSAT HRI was used to observe SN1994W on 21-23 October 1997 for an on-source time of 33.7 ksec. The MJD of the middle of the observation is 50742.04. The original processing of the data contained a 10<sup>′′</sup> boresight error, so the data were re-processed after a patch was applied to fix the pipeline software. Sollerman, Cumming, & Lundqvist (1998) (hereafter, SCL98) established the date of outburst as 1994 July 14$`{}_{-4}^{+2}`$, so the HRI observation date corresponds approximately to an age of 1180 days. The particle background of the data was removed using the software described by Snowden (1998). The deadtime-corrected exposure totaled 33460.8 sec. 
The HRI data were binned to 4<sup>′′</sup> pixels and registered with an optical image from the Digitized Sky Survey 2 atlas (image obtained from http://archive.eso.org/dss/). The binned data were then overlaid on the optical image. No smoothing was applied to the X-ray data. The figure shows the results with X-ray contours over an optical image. A point source at the position of SN1994W is visible in the figure. At the position of the X-ray source, the contours are 2.5, 3.0, and 3.5 counts pixel<sup>-1</sup>. The 2.5 counts pixel<sup>-1</sup> contour is 2-3 times the background rate. The coordinates of the X-ray source are 12:02:11.0, +62:08:31.7 (J2000) while the coordinates for SN1994W are 12:02:10.9, +62:08:32.6. The differences, defined as X-ray minus optical, are +0<sup>s</sup>.1 in RA and -0<sup>′′</sup>.9 in Dec. To check the positional accuracy, the coordinates of the nucleus of NGC 4041 were extracted and compared to published values (Russell et al., 1990; Pollas, 1994). The differences (X-ray minus published) are +0<sup>s</sup>.1 in RA and +1<sup>′′</sup>.1 in Dec. The absolute position of SN1994W is accurate to $`\sim `$1<sup>′′</sup>.4, a value well within the pointing error of the HRI ($`\sim `$6<sup>′′</sup>). A net total of 31$`\pm `$7.3 counts was extracted from a 3<sup>′′</sup> radius circle centered on SN1994W’s position. These counts correspond to a net source rate of 9.3$`\pm `$2.2$`\times `$10<sup>-4</sup> counts s<sup>-1</sup>. The extraction circle contained 50% of the PSF, so the count rate must be increased by a factor of 2. The background that was subtracted was obtained from an annulus surrounding the galaxy that had an inner radius of 1 and an outer radius of 2. 
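The quoted numbers can be checked with a few lines of arithmetic. The flux-to-luminosity step uses the unabsorbed flux and the 25.4 Mpc distance quoted later in the text; the variable names are ours:

```python
import math

# Net counts in the 3" aperture and the deadtime-corrected exposure.
net_counts = 31.0
exposure_s = 33460.8
rate = net_counts / exposure_s        # aperture count rate, counts/s
psf_corrected = 2.0 * rate            # the aperture holds 50% of the PSF

# Flux-to-luminosity conversion, L_X = 4*pi*d^2*F, with the quoted
# unabsorbed flux and a distance of 25.4 Mpc.
CM_PER_MPC = 3.086e24
d_cm = 25.4 * CM_PER_MPC
flux = 1.1e-13                        # erg/s/cm^2, 0.1-2.4 keV
l_x = 4.0 * math.pi * d_cm ** 2 * flux
```

Both results reproduce the quoted values: a rate of 9.3×10<sup>-4</sup> counts s<sup>-1</sup> and L<sub>X</sub> of 8.5×10<sup>39</sup> ergs s<sup>-1</sup>.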
The probability of a source falling at the exact location of SN1994W will be given by the probability of a random background fluctuation at that location, an estimate based on the log N-log S for the measured flux, or approximately by the number of sources detected in the galaxy divided by the galaxy area. This last number will be the larger of the three. For example, from a ROSAT-measured log N-log S relation (e.g., Hasinger, Schmidt, & Trümper (1991)), one source, of flux $`\sim `$10<sup>-13</sup> erg s<sup>-1</sup> cm<sup>-2</sup>, is expected per square degree (i.e., a probability of $`\sim `$2$`\times `$10<sup>-6</sup> for the detect cell used here). The figure shows at least 4 sources at or above that flux. The D<sub>25</sub> radius of the galaxy is $`\sim `$2.7 (Tully, 1988). If we exclude the $`\sim `$8<sup>′′</sup> nuclear region, the resulting annulus has an area of $`\sim `$7.5$`\times `$10<sup>4</sup> arcsec<sup>2</sup> in which about 8-10 sources are located. This gives a probability of $`\sim `$10<sup>-4</sup> per arcsec<sup>2</sup>; multiplying by the size of the detect cell gives a probability of $`\sim `$3$`\times `$10<sup>-3</sup> of a source falling precisely on the SN1994W position. We judge this to be sufficiently small to associate SN1994W with the X-ray source on a provisional basis. The Galactic reddening in the direction of NGC 4041 is E<sub>B-V</sub> $`\sim `$0.017 (Schlegel, Finkbeiner, & Davis, 1998) which converts to N<sub>H</sub> $`\sim `$9$`\times `$10<sup>19</sup> cm<sup>-2</sup> using the conversion of Predehl & Schmitt (1995). However, SCL98 estimate a value for E<sub>B-V</sub> of 0.17$`\pm `$0.06 from the interstellar Na I D absorption line using the calibration of Munari & Zwitter (1997). That E<sub>B-V</sub> converts to a column density of $`\sim `$9$`\times `$10<sup>20</sup> cm<sup>-2</sup>. We adopt the higher column because SCL98 detected the Na I line directly toward SN1994W. 
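The chance-coincidence estimate works out as follows (taking 9 as the midpoint of the quoted 8-10 sources; that midpoint is our choice, the other numbers are from the text):

```python
import math

n_sources = 9                   # midpoint of the quoted 8-10 sources
annulus_area = 7.5e4            # arcsec^2, D25 annulus minus the nucleus
surface_density = n_sources / annulus_area    # ~1e-4 per arcsec^2

detect_cell = math.pi * 3.0 ** 2   # area of the 3" extraction circle
p_chance = surface_density * detect_cell      # ~3e-3
```

The product lands at about 3×10<sup>-3</sup>, consistent with the text's estimate.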
The difference between the Galactic and the Na I-determined values for E<sub>B-V</sub> is a measure of the absorption local to the SN1994W region. Using an assumed thermal bremsstrahlung spectrum with kT $`\sim `$5 keV, the count rate converts to an unabsorbed flux in the 0.1-2.4 keV ROSAT band of $`\sim `$1.1$`\pm `$0.3$`\times `$10<sup>-13</sup> ergs s<sup>-1</sup> cm<sup>-2</sup>. Using a distance of 25.4 Mpc (SCL98), the L<sub>X</sub> is 8.5$`\pm `$2.0$`\times `$10<sup>39</sup> ergs s<sup>-1</sup>. ## 4 Discussion The positional coincidence and the optical spectral behavior argue that SN1994W has been detected in the X-ray band. The detection of narrow absorption lines in the optical spectra requires the existence of a dense circumstellar shell. SCL98 estimate a number density for the shell of $`>`$10<sup>8</sup> cm<sup>-3</sup> from the presence of optically thick Fe II lines. The estimated X-ray luminosity is comparable to that of SN1986J, SN1988Z, and SN1995N (Schlegel, 1995) and lies near the upper end of the X-ray luminosity range for supernovae. No other X-ray observations have been made of NGC 4041. The IRAS observation of NGC 4041 had, at the best spatial resolution, a beam size of 0.77 which provides essentially no spatial information (Soifer et al., 1989). A VLA image at 4.85 GHz was obtained at 1, again providing no spatial information (Becker, White, Edwards, 1991). Certainty that SN1994W has been detected in the X-ray band will only be possible with additional observations, particularly with Chandra. The X-ray light curve, whether increasing or decreasing, is of interest. Table 1 lists parameters generated from the competing models of Chevalier & Fransson (1994) (hereafter, CF94), which applies to normal Type II supernovae exploding into a red giant wind, and Terlevich et al. (1992) (hereafter, the ‘cSNR’ model), which describes a supernova expanding into a very dense circumstellar environment (equations summarized in Aretxaga et al. (1999)). 
The parameters for the CF94 model are calculated for two different values of the power law index $`n`$ that describes the expanding gas. Each model is calculated for two different ages, at a template value of 30 days and at the age of the ROSAT observation. For the number density, we adopt the lower bound of 3$`\times `$10<sup>8</sup> cm<sup>-3</sup> (SCL98). Both models show similar behavior: temperatures and velocities decrease from day 30 to day 1180 and shell radii and luminosities increase. In detail, however, the models differ. The initial velocities of the CF94 $`n=7`$ models are too high; the observed FWZI velocities near day 30 were $`\sim `$5000 km s<sup>-1</sup>. For the $`n=20`$ model, the initial velocity is correct, but the day 1180 velocity remains too high and the X-ray luminosity is too low. The cSNR model appears to be a better match to the observations. The luminosities in the cSNR model are too large by factors of 100-1000, but we have little or no information for the efficiency of the X-ray production. An efficiency value of $`\sim `$0.1-1% would bring the prediction into line with the observation. Optically, SN1994W showed P Cygni profiles at H$`\alpha `$ and, for the first 100 days, showed a Type IIP light curve (SCL98). The spectrum also resembled the spectra of Type IIn supernovae such as SN1988Z and SN1987B (Filippenko, 1997; Schlegel, 1990; Schlegel et al., 1996). On the basis of its optical behavior, SN1994W may fall near the middle of a continuum that has at one extreme the IIn supernovae, with dense circumstellar shells, and at the other extreme “normal” Type II supernovae with little or no circumstellar medium. The X-ray behavior, however, places SN1994W among the most luminous IIn supernovae (assuming that the distance to NGC 4041 is known accurately). We note in passing that the face-on spiral galaxy NGC 4041 is itself of interest. 
At least six point sources have been positively detected in the HRI image; another six may exist near the nucleus. The nuclear emission appears to be extended, although the emission may be the blended emission of nearby point sources. The X-ray study of this galaxy will benefit from an observation with Chandra. In summary, an observation of NGC 4041 with the ROSAT HRI has revealed the existence of an X-ray source at the position of SN1994W. The signature of a dense circumstellar shell in the optical spectrum plus the positional coincidence of the X-ray source with the optical position support the identification of the X-ray source as SN1994W, the eleventh supernova discovered to emit X-rays. I thank the referee for comments that improved the presentation. This research was supported by NASA Grant NAG5-6923 to the Smithsonian Astrophysical Observatory.
# Untitled Document Karl Popper and the Copenhagen Interpretation Asher Peres Department of Physics, Technion—Israel Institute of Technology, 32 000 Haifa, Israel Abstract > Popper conceived an experiment whose analysis led to a result that he deemed absurd. Popper wrote that his reasoning was based on the Copenhagen interpretation and therefore invalidated the latter. Actually, Popper’s argument involves counterfactual reasoning and violates Bohr’s complementarity principle. The absurdity of Popper’s result only confirms Bohr’s approach. I called thee to curse mine enemies, and, behold, thou hast altogether blessed them. Numbers 24:10 The emergence of quantum mechanics led to considerable progress in our understanding of physical phenomena. However, it also led to serious misconceptions. In my current work as a theoretical physicist, I recently examined a conceptual experiment that was proposed some time ago by Karl Popper (1982). Its feasibility was challenged by Collett and Loudon (1987) who claimed that such an experiment would be inconclusive. Nevertheless, an actual experiment is currently under way (Kim and Shih, 1999). The rigorous theoretical analysis of these experiments is quite intricate and I shall only briefly outline it here. Most of the present article is an attempt to analyze the meaning of what Popper wrote and to understand his way of reasoning. I found it most surprising when I read the original argument in his book. Popper’s experiment is a variant of the one considered long ago by Einstein, Podolsky, and Rosen (1935): a source S emits pairs of particles having a broad angular distribution but precisely opposite momenta, $$𝐩_1+𝐩_\mathrm{𝟐}=0.$$ (1) The example given by Popper is that of pairs of photons emitted by the decay of positronium at rest. 
Actually, the wavelength of gamma rays emitted by positronium is much too short for realizing Popper’s experiment, but pairs of photons resulting from parametric down-conversion in a nonlinear crystal (Kim and Shih, 1999) are suitable for that purpose: these photons have precisely correlated (though not opposite) momenta, and this is all we need. If we wish, we can refer our calculations to a Lorentz frame moving with a constant velocity $`c^2(𝐩_1+𝐩_\mathrm{𝟐})/(E_1+E_2)`$, so that Eq. (1) holds in that frame. Note that Eq. (1) seems to conflict with the quantum “uncertainty principle.” Popper writes “we consider pairs of particles that move in opposite directions along the positive and negative $`x`$-axis.” If these were classical particles, opposite momenta would indeed lead to opposite positions. However, the quantum dynamical variables in Eq. (1) do not commute with $`(𝐪_\mathrm{𝟏}+𝐪_\mathrm{𝟐})`$. For the components along any axis, we have uncertainty relations $$\mathrm{\Delta }(p_1+p_2)\mathrm{\Delta }(q_1+q_2)\ge \hbar ,$$ (2) which set a limit on how precisely opposite the positions of the particles will be observed. This issue was analyzed by Collett and Loudon (1987) who came to the conclusion that Popper’s experiment (described below) could not give conclusive results. This is just one example of how hazardous it is to use classical reasoning when we discuss quantum phenomena. I shall return to this point later. However, it is no less hazardous to make heuristic use of the “uncertainty principle” in order to draw quantitative conclusions. What must be done in case of doubt is to write the Schrödinger equation that describes the physical situation, and to derive rigorously unambiguous results. As will be shown below, the analysis of Popper’s experiment is much subtler than either Popper, or Collett and Loudon, were inclined to think. 
Popper’s proposed experiment proceeds as follows: two observers, whom I shall call Alice and Bob in accordance with current practice in quantum information theory, are located on opposite sides of the source, with arrays of detectors as shown in Figure 1. Alice can place an opaque screen with a narrow slit of width $`a`$ in the way of her photons, so that those passing through the slit are diffracted by an angle of the order of $`\lambda /a`$, where $`\lambda `$ is the wavelength of the photons. The narrower the slit, the wider is the scattering angle. On this point, Popper writes that “the wider scattering angles go with a narrower slit, according to the Heisenberg relations.” Actually, the diffraction angle $`\lambda /a`$ is a well known result of classical optics. The wavelength of the photons, which is the quantity that we can actually measure, is related to their momentum by the relation $`\lambda =h/p`$, which readily follows from Einstein’s equation for the photoelectric effect, $`E=h\nu `$. The latter predates Heisenberg’s uncertainty principle by more than 20 years. Still before Heisenberg, it was de Broglie’s bold intuition to extend the relation $`\lambda =h/p`$ to massive particles, and in that case $`\lambda `$ is called the de Broglie wavelength. However, the issue is not just one of misappropriation of credit. Here, Popper wanted to invoke Heisenberg’s “uncertainty” because he had in mind that the detection of a particle that had passed through Alice’s slit was a measurement of the $`y`$-coordinate of that particle as it passed through the slit, and therefore also a virtual measurement of the position of the other particle, since the two had precisely opposite directions. 
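The classical-optics estimate is easy to make quantitative. With illustrative numbers that are not from the text (a 500 nm photon and a 10 µm slit), the diffraction angle $`\lambda /a`$ and the photon momentum $`h/\lambda `$ are:

```python
H = 6.626e-34                  # Planck's constant, J s

def diffraction_angle(wavelength_m, slit_width_m):
    """Angular spread of order lambda/a for a slit of width a (radians)."""
    return wavelength_m / slit_width_m

def photon_momentum(wavelength_m):
    """p = h/lambda, in kg m/s."""
    return H / wavelength_m

# Halving the slit width doubles the diffraction angle; this scaling is
# what makes the hierarchy of counterfactual 'virtual slits' discussed
# below mutually inconsistent.
```

For the numbers above the angle is 0.05 rad; a 5 µm slit would double it.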
Let us examine Popper’s text: > According to the EPR argument, we have measured $`q_y`$ for both particles … with the precision $`\mathrm{\Delta }q_y`$ \[$`a`$\] … We can now calculate the $`y`$-coordinate of the \[other\] particle with approximately the same precision … We thus obtain fairly precise ‘knowledge’ about the $`q_y`$ position of this particle—we have ‘measured’ its position indirectly. And since it is, according to the Copenhagen interpretation, our knowledge which is described by the theory … we should expect that the momentum of the \[second\] beam scatters as much as that of the beam that passes through the slit … > > To sum up: if the Copenhagen interpretation is correct, then any increase of our mere knowledge of the position … of the particles … should increase their scatter … The italics that appear in the above excerpt are those in the book. Popper refrains from openly saying that the above prediction is absurd (as it obviously is). He only says that he is “inclined to predict” that the test will decide against the Copenhagen interpretation. On this, I have several comments. First, it is not at all clear why Popper associates this absurd prediction (particle scatter due to potential knowledge by an observer) with the Copenhagen interpretation. This is another example of credit misappropriation, much worse than having quoted Heisenberg instead of Einstein or de Broglie. Whatever the “Copenhagen interpretation” is (a point that I shall discuss later), it is reasonable to expect that it is somehow related to the views expressed by Niels Bohr. However, Popper himself wrote explicitly that his proposed experiment was an extension of the argument of Einstein, Podolsky, and Rosen (1935). It is well known that their argument was promptly criticized by Bohr (1935). 
I find it quite remarkable that an opinion which is diametrically opposite to Bohr’s be called the “Copenhagen interpretation.” I also have other, more serious objections to the terminology used in the passage quoted above. In particular, I take exception to the phrase “we have measured $`q_y`$” of some particle. Here however, my criticism is not aimed at Popper because we are all guilty of occasionally talking like that. This is a misleading language, as explained long ago by Kemble (1937): > We have no satisfactory reason for ascribing objective existence to physical quantities as distinguished from the numbers obtained when we make the measurements which we correlate with them. There is no real reason for supposing that a particle has at every moment a definite, but unknown, position which may be revealed by a measurement of the right kind, or a definite momentum which can be revealed by a different measurement. On the contrary, we get into a maze of contradictions as soon as we inject into quantum mechanics such concepts carried over from the language and philosophy of our ancestors… It would be more exact if we spoke of “making measurements” of this, that, or the other type instead of saying that we measure this, that, or the other “physical quantity.” Terms that Popper used, such as “knowledge of the $`y`$-coordinate … or the $`q_y`$ position of this particle” are flagrant (and admittedly quite common) abuses of an improper language. When we are discussing quantum theory, we should refrain from using classical terminology—or at least be aware that we do so at our own risk. In classical mechanics, a particle has (ideally) a precise position and a precise momentum. We can in principle measure them with arbitrary accuracy and thereby determine their numerical values. In quantum mechanics, a particle also has a precise position and a precise momentum. 
However, the latter are mathematically represented by self-adjoint operators in a Hilbert space, not by ordinary numbers. Their nature is quite different from that of the classical position and momentum. In the early quantum literature, operators were called $`q`$-numbers, while plain numbers were $`c`$-numbers (Dirac, 1926). Likewise, to avoid confusion, we should have used in quantum theory names such as $`q`$-position and $`q`$-momentum, while the corresponding classical dynamical variables would have been called $`c`$-position and $`c`$-momentum. If such a distinction had been made, it would have helped to prevent much of the present confusion about quantum theory. It is the imperfect translation from the $`q`$-language to the $`c`$-language that led to the unfortunate introduction of the term “uncertainty” in that context. We may note, incidentally, that the theory of relativity did not cause as much misunderstanding and controversy as quantum theory, because people were careful to avoid using the same nomenclature as in nonrelativistic physics. For example, elementary textbooks on relativity theory distinguish “rest mass” from “relativistic mass” (hard core relativists call them simply “mass” and “energy”). The criticism above was aimed at the terminology used by Popper in proposing his experiment. Now, it is time to analyze the substance. First, we have to find out how precisely the two particles of each pair will be aligned opposite to each other, in spite of the uncertainty relation in Eq. (2). Note that, contrary to the so-called “uncertainty principle” which is an ill defined concept and has only a heuristic meaning, Eq. (2) is a rigorous mathematical consequence of the quantum formalism. It puts a lower bound on the product of the standard deviations of the results of a large number of measurements performed on identically prepared systems. 
Each one of these measurements is assumed to have perfect accuracy (any experimental inaccuracy would have to be added to the quantum dispersion). There is no “uncertainty” connotation here, unless this uncertainty merely refers to future outcomes of potential, perfectly accurate measurements that may be performed on such systems (Ballentine, 1970). A long calculation (to be published separately) is needed to estimate how precise is the angular alignment of two particles emitted with opposite momenta. Actually, what Eq. (2) says is that if an ensemble of pairs of particles is prepared in such a way that $`(p_1+p_2)`$ is sharp, then the positions of the points halfway between the particles are very broadly distributed. It says nothing on the angular alignment of distant particles. On that issue, a detailed calculation shows that if one particle is found in the direction given by polar and azimuthal angles $`\theta `$ and $`\varphi `$, then the other will be found very nearly in the opposite direction, with angles $`\pi -\theta `$ and $`\varphi \pm \pi `$, respectively. The allowed deviation from perfect alignment is too small to be of any consequence in the present discussion. It is therefore correct to assume, as Popper did, that if a particle is detected behind Alice’s slit, and if an identical slit were placed by Bob in a symmetric position, then Bob would definitely detect the other particle of that pair there. However, this does not mean that Bob’s knowledge creates a “virtual slit” through which his particles are diffracted by the same angle $`\lambda /a`$. Bob’s knowledge has no physical consequence because it is manifestly counterfactual. This can easily be seen by considering other counterfactual experiments. 
For example, Bob also knows, after he was informed by Alice of what she found, that if he had placed a slit of width $`a/2`$ at a position whose distance from the source is one half of the distance of Alice’s slit, then he would have detected his particle within that slit with certainty. In that case, his “virtual slit” is narrower, and therefore the diffraction angle is wider by a factor of 2. In brief, we can imagine infinitely many such counterfactual experiments (which are mutually exclusive, of course), and each one of these conceptual slits leads to a different observable diffraction angle, which is absurd. There is no doubt that Popper was right when he was “inclined to predict” that the test would give a negative result. However, Popper concluded that “the test decides against the Copenhagen interpretation” and this assertion requires further scrutiny. What is, indeed, the Copenhagen interpretation? There seem to be at least as many different Copenhagen interpretations as there are people who use that term; probably there are more. For example, in two classic articles on the foundations of quantum mechanics, Ballentine (1970) and Stapp (1972) give diametrically opposite definitions of “Copenhagen.” There is no real conflict between Ballentine and Stapp on how to understand quantum mechanics, except that one of them calls Copenhagen interpretation what the other considers as the exact opposite of the Copenhagen interpretation. I shall now explain my own Copenhagen interpretation. It relies on articles written by Niels Bohr. Whether or not you agree with Bohr, he is the definitive authority for deciding what is genuine Copenhagen. Quantum mechanics provides statistical predictions for the results of measurements performed on physical systems that have been prepared in specified ways (Peres, 1995). (I hope that everyone agrees at least with that statement. The only question here is whether there is more than that to say about quantum mechanics.) 
The preparation of quantum systems and their measurement are performed by using laboratory hardware which is described in classical terms. If you have doubts about that, just have a look at any paper on experimental physics. The necessity of using a classical terminology was emphasized by Bohr (1949) whose insistence on this point was very strict: > However far the \[quantum\] phenomena transcend the scope of classical physical explanation, the account of all evidence must be expressed in classical terms. The argument is simply that by the word ‘experiment’ we refer to a situation where we can tell others what we have done and what we have learned and that, therefore, the account of the experimental arrangement and the results of the observations must be expressed in unambiguous language with suitable application of the terminology of classical physics. The keywords in that excerpt are: classical terms … unambiguous language … terminology of classical physics. Bohr did not say that there are in nature classical systems and quantum systems. There are physical systems for which we may use a classical description or a quantum description, according to circumstances, and with various degrees of approximation. It is according to our assessment of the physical circumstances that we decide whether the $`q`$-language or the $`c`$-language is appropriate. Physics is not an exact science, it is a science of approximations. Unfortunately, Bohr was misunderstood by some (perhaps most) physicists who were unable to make the distinction between language and substance, and he was also misunderstood by philosophers who disliked his positivism. It is remarkable that Bohr never considered the measuring process as a dynamical interaction between an apparatus and the system under observation. Measurement had to be understood as a primitive notion. Bohr thereby eluded questions which caused considerable controversy among other authors (Wheeler and Zurek, 1983). 
Bohr willingly admitted that any intermediate systems used in the measuring process could be treated quantum mechanically, but the final instrument always had a purely classical description (Bohr, 1939): > In the system to which the quantum mechanical formalism is applied, it is of course possible to include any intermediate auxiliary agency employed in the measuring process \[but\] some ultimate measuring instruments must always be described entirely on classical lines, and consequently kept outside the system subject to quantum mechanical treatment. Yet, a quantum measurement is not a supernatural process. Measuring apparatuses are made of the same kind of matter as everything else and they obey the same physical laws. It therefore seems natural to use quantum theory in order to investigate their behavior during a measurement. This was first attempted by von Neumann (1932) in his treatise on the mathematical foundations of quantum theory. In the last section of that book, as in an afterthought, von Neumann represented the apparatus by a single degree of freedom whose value was correlated to that of the dynamical variable being measured. Such an apparatus is not, in general, left in a definite pure state, and does not admit a classical description. Therefore, von Neumann introduced a second apparatus which observes the first one, and possibly a third apparatus, and so on, until there is a final measurement, which is not described by quantum dynamics and has a definite result (for which quantum mechanics can only give statistical predictions). The essential point that was suggested, but not proved by von Neumann, is that the introduction of this sequence of apparatuses is irrelevant: the final result is the same, irrespective of the location of the “cut” between classical and quantum physics. 
(At this point, von Neumann also speculated that a final step would involve the consciousness of the observer—a rather bizarre statement in a mathematically rigorous monograph.) These different approaches of Bohr and von Neumann were reconciled by Hay and Peres (1998), who introduced a dual description for the measuring apparatus. It obeys quantum mechanics while it interacts with the system under observation, and then it is “dequantized” and is described by a classical Liouville density, which provides the probability distribution for the results of the measurement. Alternatively, the apparatus may always be treated by quantum mechanics, and be measured by a second apparatus which has such a dual description. Hay and Peres showed that these two different methods of calculation give the same result, provided that the measuring apparatus satisfies appropriate conditions (otherwise, it is not a valid measuring apparatus). The other fundamental feature of Bohr’s presentation of quantum theory is the principle of complementarity, which asserts that when some types of predictions are possible, others are not, because they are related to mutually incompatible experiments. For example, in the situation described by Einstein, Podolsky, and Rosen (1935), the choice of the experiment performed on the first system determines the type of prediction that can be made for the results of experiments performed on the second system (Bohr, 1935). In Popper’s experiment, Bob can predict what would have happened if he had placed slits of various sizes at various positions, or no slit at all. However, all these possible setups are mutually incompatible. In particular, if Bob puts no slit at all, the result he obtains is not the one he would have obtained if he had put a slit. Counterfactual experiments need not have consistent results (Peres, 1978). Note that Bohr did not contest the validity of counterfactual reasoning. 
He wrote (Bohr, 1935): > Our freedom of handling the measuring instruments is characteristic of the very idea of experiment … we have a completely free choice whether we want to determine the one or the other of these quantities … Thus, Bohr found it perfectly legitimate to consider counterfactual alternatives: observers have free will and can arbitrarily choose their experiments. However, each experimental setup must be considered separately. In particular, no valid conclusion can be drawn from the comparison of possible results of mutually incompatible experiments. Bohr was sometimes accused of being elusive, because his approach does not provide answers to questions in which people may be interested. There are indeed questions that seem reasonable but do not correspond to any conceivable experiment: quantum theory has no obligation to answer meaningless questions. To conclude this article, let me report the result of a rigorous analysis of Popper’s experimental setup, where only Schrödinger’s equation is used, without invoking any controversial interpretation. The irony of the answer is that Bob does observe a diffraction broadening, as if he had a virtual slit! However, that slit is not located between him and the source, but is precisely located where Alice’s real slit is, and is indeed identical to it. An experiment similar to Popper’s proposal was actually performed by Strekalov et al. (1995), who used a double slit, so that Bob had a virtual double slit, producing a neat interference pattern, not only a diffraction broadening. Figure 2 is a simplified sketch of that experiment. Its complete theoretical analysis involves advanced concepts of quantum optics and is quite intricate. I shall now give a brief outline of the theory, based on Schrödinger’s equation. The only “knowledge” needed in the analysis of the experiment is the factual one, on the preparation and observation procedures. 
That knowledge is formally encapsulated in the Hilbert-space vectors $`|\mathrm{\Psi }_0\rangle `$ and $`|\mathrm{\Psi }_d\rangle `$, whose coordinate-space representation is localized in the source of particles and in the detectors that were excited, respectively. (These vectors are also known as “quantum states.”) Schrödinger’s equation asserts that the initial vector $`|\mathrm{\Psi }_0\rangle `$ evolves in time, as long as there is no detection event, according to a unitary transformation $$|\mathrm{\Psi }_0\rangle \to |\mathrm{\Psi }_t\rangle =U_t|\mathrm{\Psi }_0\rangle ,$$ (3) where $`U_t=e^{-iHt/\hbar }`$ for a time-independent Hamiltonian $`H`$. In the present case, the double slit can be represented by an infinite potential in $`H`$, or by an equivalent boundary condition. Born’s rule (which makes the connection between the quantum formalism and observed probabilities of macroscopic events) asserts that the probability that a particular pair of detectors will “click” at time $`t`$ is $`P=|\langle \mathrm{\Psi }_d,\mathrm{\Psi }_t\rangle |^2`$, where the symbol $`\langle u,v\rangle `$ denotes the scalar product of two vectors, $`|u\rangle `$ and $`|v\rangle `$. We thus have (Peres, 1995) $$P=|\langle \mathrm{\Psi }_d,U_t\mathrm{\Psi }_0\rangle |^2\equiv |\langle U_t^{\dagger }\mathrm{\Psi }_d,\mathrm{\Psi }_0\rangle |^2,$$ (4) where $`U_t^{\dagger }=U_{-t}`$ is the unitary operator for the time-reversed dynamics. It may be practically impossible to realize experimentally that reversed dynamics, but it is legitimate to perform the calculation of the ordinary dynamics by proceeding backwards, starting at the detectors and ending at the source. In the present case, this is indeed much easier, because $`|\mathrm{\Psi }_0\rangle `$ is entangled and has to satisfy Eq. (1), while $$|\mathrm{\Psi }_d\rangle =|\psi _1\rangle \otimes |\psi _2\rangle ,$$ (5) is a tensor product of two vectors, whose coordinate-space representations are well separated, since they are localized in the two detectors. Moreover, the Hamiltonian is the sum of those of the two particles, since the latter do not interact after they leave the source. 
Therefore the unitary evolution also factorizes: $`U_t=U_1\otimes U_2`$. We thus propagate $`|\psi _1\rangle `$ and $`|\psi _2\rangle `$ from the detectors toward the source. We have to compute $$P=|\langle \mathrm{\Psi }_0,(U_1\psi _1)\otimes (U_2\psi _2)\rangle |^2.$$ (6) Now, since $`|\mathrm{\Psi }_0\rangle `$ satisfies Eq. (1), the only contribution to $`P`$ comes from components of $`|U_1\psi _1\rangle \otimes |U_2\psi _2\rangle `$ with opposite momenta that also satisfy Eq. (1). This is illustrated in Figure 2. For example, if we record all the detections on Bob’s side that are in coincidence with one particular detector of Alice, then Bob will observe an ordinary double-slit interference pattern, generated by a “virtual” double-slit, that actually is Alice’s real slit. Note that it is necessary, for such an observation to be possible, that the region of the nonlinear crystal from where the rays emerge be very broad (Hong and Mandel, 1985) and the emergence point be undetermined. Likewise, if the experiment were done with positronium as Popper originally suggested, the positronium ought to be prepared with $`\mathrm{\Delta }y`$ much larger than the distance between the slits. Expressed in an informal language, the requirement is that each one of the two photons that pass through both slits must also originate in both regions of the source. This demand is similar to the conditions required for the Pfleegor and Mandel (1967) experiment, where a single photon originates from two different lasers and gives rise to first order interference. A similar analysis also applies to Popper’s original experiment with a single slit (however, it would be more difficult to draw for it a figure like Figure 2). In summary, according to the Copenhagen interpretation, as Bohr apparently understood it, quantum theory is not a description of physical reality. It also does not deal with anthropomorphic notions such as knowledge or consciousness. All it does is to provide correct answers to meaningful questions about experiments done with physical systems. 
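These formal statements are easy to check numerically. The sketch below (an illustration with randomly generated finite-dimensional states and Hamiltonians, not taken from the text) verifies the equality of the two forms of $`P`$ in Eq. (4), and that propagating the detector states backwards as in Eq. (6) — implemented here with $`U^{\dagger }`$ — gives the same probability:

```python
import numpy as np

rng = np.random.default_rng(7)

def random_state(d):
    """Normalized random complex vector, standing in for a quantum state."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def random_evolution(d, t):
    """U_t = exp(-iHt) for a random Hermitian H (hbar = 1)."""
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    h = (a + a.conj().T) / 2
    w, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

d, t = 3, 0.7
U1, U2 = random_evolution(d, t), random_evolution(d, t)
psi1, psi2 = random_state(d), random_state(d)   # detector states, Eq. (5)
Psi0 = random_state(d * d)                      # source state, may be entangled
Psid = np.kron(psi1, psi2)                      # |Psi_d> = |psi_1> x |psi_2>
Ut = np.kron(U1, U2)                            # factorized evolution U_1 x U_2

# Eq. (4): forward and time-reversed expressions agree
P_fwd = abs(np.vdot(Psid, Ut @ Psi0)) ** 2
P_bwd = abs(np.vdot(Ut.conj().T @ Psid, Psi0)) ** 2

# Eq. (6): propagate psi_1 and psi_2 separately back toward the source
P_sep = abs(np.vdot(Psi0, np.kron(U1.conj().T @ psi1, U2.conj().T @ psi2))) ** 2
```

The three numbers coincide to machine precision, which is the content of the “backwards calculation” used in the text.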
Acknowledgments

I am grateful to Rainer Plaga for bringing Popper’s experiment to my attention, and to Amiram Ron for clarifying discussions. This work was supported by the Gerard Swope Fund and the Fund for Encouragement of Research.

References

Ballentine, L. E. (1970) ‘The Statistical Interpretation of Quantum Mechanics’ Reviews of Modern Physics 42, 358–381.

Bohr, N. (1935) ‘Can Quantum-Mechanical Description of Physical Reality be Considered Complete?’ Physical Review 48, 696–702.

Bohr, N. (1939) ‘The causality problem in modern physics’ in New Theories in Physics (Paris: International Institute of Intellectual Cooperation) pp. 11–45.

Bohr, N. (1949) ‘Discussion with Einstein on Epistemological Problems in Atomic Physics’ in P. A. Schilpp (ed.) Albert Einstein, Philosopher-Scientist (Evanston: Library of Living Philosophers) pp. 201–241.

Collett, M. J., and Loudon, R. (1987) ‘Analysis of a proposed crucial test of quantum mechanics’ Nature 326, 671–672.

Dirac, P. A. M. (1926) ‘Quantum mechanics and a preliminary investigation of the hydrogen atom’ Proceedings of the Royal Society A (London) 110, 561–569.

Einstein, A., Podolsky, B., and Rosen, N. (1935) ‘Can Quantum-Mechanical Description of Physical Reality be Considered Complete?’ Physical Review 47, 777–780.

Hay, O., and Peres, A. (1998) ‘Quantum and classical descriptions of a measuring apparatus’ Physical Review A 58, 116–122.

Hong, C. K., and Mandel, L. (1985) ‘Theory of parametric frequency down conversion of light’ Physical Review A 31, 2409–2418.

Kemble, E. C. (1937) The Fundamental Principles of Quantum Mechanics (New York: McGraw-Hill; reprinted by Dover) pp. 243–244.

Kim, Y. H., and Shih, Y. H. (1999) ‘Experimental realization of Popper’s experiment: violation of the uncertainty principle?’ Fortschritte der Physik (in press).

Peres, A. (1978) ‘Unperformed experiments have no results’ American Journal of Physics 46, 745–747.

Peres, A. (1993) Quantum Theory: Concepts and Methods (Dordrecht: Kluwer Academic Publishers).

Pfleegor, R. L., and Mandel, L. (1967) ‘Interference of independent photon beams’ Physical Review 159, 1084–1088.

Popper, K. R. (1982) Quantum Theory and the Schism in Physics (London: Hutchinson) pp. 27–29.

Stapp, H. P. (1972) ‘The Copenhagen Interpretation’ American Journal of Physics 40, 1098–1116.

Strekalov, D. V., Sergienko, A. V., Klyshko, D. N., and Shih, Y. H. (1995) ‘Observation of two-photon “ghost” interference and diffraction’ Physical Review Letters 74, 3600–3603.

von Neumann, J. (1932) Mathematische Grundlagen der Quantenmechanik (Berlin: Springer); transl. by R. T. Beyer (1955) Mathematical Foundations of Quantum Mechanics (Princeton: Princeton University Press).

Wheeler, J. A., and Zurek, W. H., editors (1983) Quantum Theory and Measurement (Princeton: Princeton University Press).

FIGURE 1. Popper’s conceptual experiment. A pair of photons with opposite momenta is emitted by the source S. Alice’s detectors are on the left, those of Bob on the right.

FIGURE 2. Simplified sketch of the experiment of Strekalov et al. (1995). The figure shows a single pair of photons with opposite momenta, emitted by the source S. When many such pairs are detected in coincidence, interference patterns appear on both sides.
# RU-NHETC-99-38 hep-th/9910238 Hypermultiplet Moduli Space and Three Dimensional Gauge Theories ## 1 Introduction The duality between the heterotic string on $`K3\times T^2`$ and type II string theory on CY manifolds can be used to study the moduli space of four dimensional string vacua with 8 supercharges. The moduli space factorizes locally into the hyper- and vector multiplet moduli spaces. It is then possible to study the hypermultiplet moduli space classically on the heterotic string side, since the heterotic string dilaton is a member of a vector multiplet. This is to be compared with calculations on the type II side, as in . The detailed study of singularities in the moduli space is, however, not always possible at tree level. There are some singularities that drive the string coupling to diverge, no matter how small it is asymptotically. In those cases the metric on moduli space can be computed reliably, but the physics of the singularity is non-perturbative. Well-known singularities of that sort are the type II conifold , and the non-perturbative gauge symmetry at the core of a small instanton . In , a detailed study of singularities which do not involve non-perturbative string physics was initiated. An example is the heterotic string near an A-D-E type singularity of a $`K3`$ surface. Since the heterotic string coupling becomes weak near the singularity, one may study the singularity in the heterotic CFT. Indeed, for the case of the $`A_1`$ singularity, the moduli space was found to be the Atiyah-Hitchin manifold. The classical singularity is resolved by a combination of one loop and worldsheet instanton effects, with no recourse to stringy corrections. The pattern in which the $`\alpha ^{\prime }`$ corrections smooth out the classical singularity is reminiscent of the work in . 
This analogy suggested a relation between the two frameworks <sup>1</sup><sup>1</sup>1 The relation between the hypermultiplet moduli space and three dimensional gauge theory was noted in the type II context in .. Therefore it was conjectured in that the hypermultiplet moduli space of the heterotic compactification on an A-D-E type singularity is identical to the moduli space of a three dimensional pure gauge theory with 8 supercharges and the corresponding A-D-E gauge group. In this note we establish this relation by using the duality between the heterotic string on $`T^3`$ and M-theory on $`K3`$. Furthermore, one can construct slightly more general backgrounds which keep the heterotic coupling perturbative (in leading order in $`\alpha ^{\prime }`$). This is done by putting a number of heterotic fivebranes at the A-D-E singularity. This does not break the supersymmetry further. In order for the heterotic string coupling to be small near the singularity, the number of fivebranes is bounded, as discussed below. The hypermultiplet moduli space in this case turns out to be the moduli space of a three dimensional gauge theory with matter. Dualities in three dimensional gauge theories exchanging the Coulomb and Higgs branches were discussed in , and were named Mirror symmetry. Realization of the mirror symmetry by an embedding in string theory has concentrated mainly on realizing the gauge theory on D-branes (see, however, for a closely related discussion). Here, the embedding in M-theory (or string theory) involves the dynamics of coincident membranes in M-theory, or coincident 5-branes (compactified on $`T^3`$) in the heterotic string theory. As is shown below in a particular case, the requirement that the heterotic string be perturbative is a necessary condition for the absence of singularities in the hypermultiplet moduli space. It would be interesting to verify that this is also a sufficient condition, along the lines of , or using the gauge theory representation. 
For $`A_N`$ singularities with no small instantons, there is an independent argument for the smoothness (and identification) of the hypermultiplet moduli space . ## 2 Heterotic String on A-D-E Singularities We consider a compactification of the heterotic string on $`K3`$, in the limit where the $`K3`$ develops an A-D-E type singularity, associated with a gauge group which we call $`G`$. Concentrating on the behavior near the singularity allows one to replace the $`K3`$ surface by a non-compact ALE space which looks identical near the singularity. One therefore is allowed to ignore constraints of tadpole cancellation which arise upon further compactification to lower dimensions . We call this ALE space $`X_G`$. Concentrating on the singularity results in a low energy six dimensional theory which decouples from gravity. The hypermultiplet moduli space is a hyperKahler manifold in this limit. Upon further toroidal compactification, the moduli space factorizes into the vector- and the hypermultiplet moduli spaces. This factorization follows from the different R-symmetry transformation laws for the scalars in the corresponding multiplets. We are interested in particular in compactifying further on $`T^3`$, yielding a three dimensional theory with 8 supercharges. The low energy theory has an R-symmetry $`SO(4)=SU(2)_L\times SU(2)_R`$. The scalars in the vector- and hypermultiplets transform in different $`SU(2)`$ factors. Therefore the moduli space factorizes. In particular, the hypermultiplet moduli space is independent of the value of vector moduli, such as the radii of $`T^3`$ and the heterotic string coupling. It is the same in all dimensions and does not receive stringy corrections. For the heterotic string background discussed here, the three dimensional R-symmetry $`SU(2)_L\times SU(2)_R`$ originates from the factors $`T^3`$ and $`X_G`$, respectively. 
The scalars in the hypermultiplet moduli space are distinguished by a transformation law under $`SU(2)_R`$, namely the one that rotates the 3 complex structures of $`X_G`$. Since one can safely go to strong heterotic coupling, it is natural to consider the situation in dual variables. In M-theory variables the background in question is $`K3\times X_G`$. The first factor comes from the duality between M-theory on $`K3`$ and the heterotic string on $`T^3`$, and the second factor is the ALE space introduced above. Since the moduli space in question does not depend on the moduli of $`K3`$, we can take it to be a generic, non-singular surface. M-theory near an A-D-E singularity is well known to yield a seven dimensional gauge theory with a gauge group $`G`$ . Further reduction on a smooth $`K3`$ surface (with a trivial gauge bundle) yields at low energies a pure gauge theory in three dimensions, with 8 supercharges and gauge group $`G`$. The scalars in the vector multiplet transform under the R-symmetry $`SU(2)`$ which originates from $`X_G`$. This identifies the hypermultiplet moduli space on the heterotic side with the moduli space of the three dimensional gauge theory with gauge group $`G`$. The analogy between the calculations in and can be explained by this identification. For example a heterotic worldsheet instanton is an elementary string wrapped around a 2-cycle in $`X_G`$, which we call $`\mathrm{\Sigma }`$. This maps to an $`M5`$-brane wrapping $`K3`$ and the 2-cycle $`\mathrm{\Sigma }`$. This 5-brane is a magnetic source for the 2+1 dimensional gauge field which comes from the 11 dimensional 3-form reduced on the harmonic 2-form Poincare dual to $`\mathrm{\Sigma }`$. In the gauge theory therefore one gets the usual instantonic monopole, which corrects the moduli space metric, as in . ## 3 Small Instantons and Matter In , the question of resolutions of classical singularities by the heterotic CFT was investigated. 
One then is interested in classical singularities which do not drive the heterotic string coupling to be strong. A general set of examples can be obtained following . The condition for the heterotic string to remain perturbative near the singularity can be derived from , equation (1.1): $$\Box \varphi =\mathrm{tr}(F_{ij}F^{ij})-\mathrm{tr}(R_{ij}R^{ij})$$ (1) This relates the variation of the dilaton to the curvature sources of the metric and gauge bundle near the singularity. The dilaton blows up near a small instanton singularity, and tends to zero near an A-D-E singularity. It is therefore possible to add a number of small instantons sitting at the A-D-E singularity, such that the heterotic coupling does not blow up. The number of small instantons is bounded. This bound can be found by integrating equation (1) around the singularity. The condition for the dilaton not to blow up at the singularity is found to be: $$s=N_f-N_c\ge 0$$ (2) where the instanton number at the singularity is denoted by $`N_f`$ (a notation to be explained shortly), and the rank of the A-D-E gauge group by $`N_c-1`$ ($`N_c`$ equals the Euler characteristic of the corresponding ALE space). This bound may be modified by $`\alpha ^{\prime }`$ corrections to (1), and is therefore only a necessary condition for the string coupling to stay small. Whether or not the string coupling is small near the singularity has to be decided dynamically in the worldsheet theory, or alternatively in the gauge theory description. With instantons included, the local factorization of the moduli space is less effective. For example, even though the Wilson lines on $`T^3`$ are vector multiplets in three dimensions, they do affect the hypermultiplet moduli space. As the Wilson lines are tuned to enhance the gauge symmetry to any gauge group $`H`$, the hypermultiplet moduli space contains a part which is the moduli space of $`H`$ instantons. 
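The bound (2) above can be made plausible as follows (a schematic sketch, with normalizations and boundary contributions suppressed): integrating (1) over the ALE space, the gauge-field source contributes the instanton number $`N_f`$, while the curvature source contributes the Euler characteristic $`N_c`$,

$$\int _{X_G}\Box \varphi \;\propto \;\frac{1}{8\pi ^2}\int _{X_G}\mathrm{tr}(F\wedge F)-\frac{1}{8\pi ^2}\int _{X_G}\mathrm{tr}(R\wedge R)\;=\;N_f-N_c,$$

so demanding that the dilaton stay finite at the singularity reproduces $`s=N_f-N_c\ge 0`$.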
It is only locally, away from enhanced symmetry points, that the hypermultiplet moduli space is independent of the Wilson lines. For the set of examples defined above one may be able to study the resolution of the classical singularity by the heterotic CFT. Using the duality above, it is straightforward to map the small instanton to the dual variables. One gets M-theory on $`K3\times X_G`$ with $`N_f`$ membranes spanning the 2+1 noncompact directions. The low energy limit is then a gauge theory with matter, in a limit described below. An alternative representation of the membranes is as follows. The 7 dimensional theory obtained by concentrating near the singularity is a gauge theory with a gauge group $`G`$. One of the terms in the action is (normalizing the M-theory membrane tension to 1): $$I=\frac{1}{8\pi ^2}\int C\wedge \mathrm{tr}(F\wedge F)$$ (3) where $`C`$ is the M-theory 3-form and $`F`$ is the seven dimensional gauge field. This term comes from the term $`C\wedge dC\wedge dC`$ in 11 dimensional supergravity. Therefore, gauge instantons in seven dimensions carry membrane charge: the membranes spanning the 2+1 noncompact directions can be represented as $`G`$-instantons on $`K3`$. The low energy theory now has additional fields, representing the moduli of the instantons. Those fields are hypermultiplets, identified as such by their R-symmetry representation. The instantons have a finite size on the Higgs branch, and become membranes on the Coulomb branch of the gauge theory. A precise identification of the matter content on the Coulomb branch (where the instantons are of zero size) requires a better understanding of coincident membranes in M-theory. The cases relevant to the six dimensional heterotic hypermultiplet moduli space require tuning the second $`K3`$ to an enhanced symmetry point before taking the limit corresponding to a decompactification of the $`T^3`$ on the heterotic side. Results relevant to the identification of the matter content in those cases can be found in . 
We conclude by commenting on the case of $`A_{N_c}`$ singularities, with a generic $`K3`$ surface, and membranes separated on it. We concentrate on this case in order to interpret the bound (2) in the gauge theory. The space $`X_G`$ can be described locally by a multi-Taub-NUT metric in the limit of $`N_c`$ coincident centers, since both of these spaces have an $`A_{N_c}`$ singularity at the origin. The Taub-NUT metric is a circle fibration, and one can use the circle to reduce to type IIA theory. In type IIA language this is a D2-D6 system, where the D6 brane wraps the $`K3`$ surface. The gauge theory on the D2-branes is $`U(1)^{N_f}`$, with an infinite classical gauge coupling, related to the infinite asymptotic radius of the circle fiber in the ALE space. The gauge theory on the 6-branes is $`SU(N_c)`$, as before. The 2-6 strings generate $`N_f`$ hypermultiplets in the fundamental representation of $`SU(N_c)`$. For the $`A_{N_c}`$ case, $`N_c`$ and $`N_f`$ are the number of colors and flavors in the gauge theory $`SU(N_c)`$. One can then identify the number $`s=N_f-N_c`$ defined above as half the number of fermionic zero modes in the presence of an $`SU(N_c)`$ monopole. In , $`s`$ was required to be positive in order to have (multi-) monopole corrections to the metric on the Coulomb branch. Such monopoles in the present case correct the metric of both the $`SU(N_c)`$ and $`U(1)^{N_f}`$ factors , thereby providing a possibility for smoothing out the complete vector moduli space. In the absence of those corrections, the moduli space is singular, and the origin of the Coulomb branch represents a non-trivial conformal field theory. Returning to the heterotic string frame, we see that the condition for the heterotic string to be perturbative is complementary to the condition in . It is then a necessary condition (at least for the $`A_{N_c}`$ case) for the absence of singularities on the hypermultiplet moduli space. 
It is not clear whether this condition also guarantees a smooth moduli space. ## 4 Acknowledgments We thank T. Banks, M. Douglas, K. Intriligator, A. Rajaraman , E. Witten for useful conversations, and O. Aharony for collaboration in parts of this paper.
# Effect of intergalactic absorption in the TeV 𝛾-ray spectrum of Mkn 501 ## Introduction The ground-based detectors, utilizing the so-called imaging air Čerenkov technique, offer an effective tool to study cosmic TeV $`\gamma `$-rays. Recently, a number of celestial objects have been identified as TeV $`\gamma `$-ray emitters by use of this technique 1 . Among them there are two active galactic nuclei (AGN) – Mkn 421 and Mkn 501 – which, despite their nearly equal redshifts of 0.031 and 0.034 respectively, have very different properties of TeV $`\gamma `$-ray emission. In particular, TeV $`\gamma `$-ray fluxes from Mkn 421 and Mkn 501 differ in variability time scale and spectral behavior. Mkn 421 has shown significant flux variations within a time period as short as 15 minutes 2 , whereas Mkn 501 may outburst during a period of 6 months with an extraordinarily high $`\gamma `$-ray flux of more than 3 Crabs on average 3 . The energy spectrum of Mkn 501, as measured in the energy range from 0.5 TeV up to 20 TeV, shows evident curvature ($`dJ_\gamma /dE\propto E^{-1.9}\mathrm{exp}(-E/6.2\mathrm{TeV})`$) and the spectrum shape does not depend on the flux level 4 . At the same time the Mkn 421 energy spectrum is very steep and consistent with a pure power law ($`dJ_\gamma /dE\propto E^{-3.1}`$) over the energy range 0.5-7 TeV, at least during the low state of emission 5 . All this points to an intrinsic difference in the mechanism of the TeV $`\gamma `$-ray emission, which is widely believed to be inverse Compton scattering of electrons within a relativistic jet directed along the observer’s line of sight (for a review see 1 ). In addition the measured spectra of TeV $`\gamma `$-rays from such distant sources as Mkn 421 and Mkn 501 might be affected by $`\gamma `$-ray absorption on the diffuse intergalactic infrared (IR) background. Here we discuss how important the effect of such absorption might be on the spectra of the two observed AGNs, in particular Mkn 501, whose spectrum shows a spectacular shape. 
## Observations During the extraordinary outburst of TeV $`\gamma `$-rays from Mkn 501 in the 1997 observation period, this object was monitored by several ground-based imaging air Čerenkov telescopes (IACTs) 3 . The HEGRA stereoscopic system of 4 IACTs observed Mkn 501 for a total exposure time of $`\sim 110`$ hours 4 . The unprecedented statistics of about 38,000 TeV photons, combined with the good energy resolution of $`\sim 20`$%, allowed determination of the spectrum over the energy range from $`\sim 500`$ GeV up to $`24`$ TeV. The shape of the spectrum does not depend on the intensity of the source, which justifies the determination of a time-averaged Mkn 501 spectrum. The energy spectrum of Mkn 501, as measured by the HEGRA group, shows apparent curvature over the entire energy range. The shape of the spectrum may be well described by a power law with an exponential cutoff. A fit of the data gives ($`E`$ in TeV): $`\mathrm{d}N/\mathrm{d}E=10.8\times 10^{-11}E^{-1.92}\mathrm{exp}\left[-E/6.2\right],\mathrm{cm}^{-2}\mathrm{s}^{-1}\mathrm{TeV}^{-1}`$ (1) The detailed systematic analysis of the fit parameters is discussed in 4 . The HEGRA data are also shown in Figure 1. ## Discussion on spectrum shape The curvature in the Mkn 501 energy spectrum may have several causes. The curved energy spectrum of TeV $`\gamma `$-rays may be attributed to (i) the intrinsic spectrum of TeV $`\gamma `$-ray emission within the synchrotron self-Compton or external inverse Compton scenarios; (ii) absorption of TeV $`\gamma `$-rays by pair production inside the source, or (iii) in the intergalactic medium; finally, the observed energy spectrum may be affected by a combination of the reasons noted above. Recent calculations based on the synchrotron self-Compton (SSC) and external Compton (EC) models explain rather well the currently established variability time scales of the X-ray and TeV emission of Mkn 421 and Mkn 501 (see e.g., 7 ). 
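As a quick numerical sanity check of Eq. (1) (a sketch only; the normalization, index and cutoff energy are taken directly from the fit quoted above, with $`E`$ in TeV):

```python
import math

# HEGRA best-fit spectrum, Eq. (1): dN/dE = 10.8e-11 * E^-1.92 * exp(-E/6.2),
# in cm^-2 s^-1 TeV^-1, with E in TeV.
def hegra_fit(e_tev):
    return 10.8e-11 * e_tev ** -1.92 * math.exp(-e_tev / 6.2)

f1 = hegra_fit(1.0)    # cutoff already removes ~15% of the pure power-law flux at 1 TeV
f20 = hegra_fit(20.0)  # at 20 TeV the exponential suppression alone is a factor ~25
```

Evaluating the fit at a few energies like this makes the curvature explicit: the spectrum is close to a power law at 1 TeV but falls far below it by 20 TeV.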
Thus the observed variability of the TeV $`\gamma `$-ray flux on a time scale of $`\sim 1`$ hr 8 at least limits the Doppler boosting factor of the emitting jet to $`\delta \gtrsim 10`$. For such a large Doppler factor, $`\gamma `$-ray absorption within the source does not play an important role and $`\gamma `$-ray photons can easily escape from the emitting region 9 . The shape of the Mkn 501 energy spectrum as measured by the HEGRA collaboration can not be easily fitted by pure SSC and EC models (see e.g., 9 ). In particular, the spectrum appears to be very steep above 5 TeV. In addition, observations with the Rossi X-Ray Timing Explorer have shown that the X-ray spectrum of Mkn 501 varies strongly, with a generally very hard spectral index extending to much higher energies ($`\sim 100`$ keV) 10 . In contrast, the TeV $`\gamma `$-ray spectrum does not show any variations in its shape 4 . The simultaneous variations in the X-ray and TeV $`\gamma `$-ray fluxes may be well described by a change in the maximum energy to which electrons can be accelerated, $`\gamma _{max}`$ 11 (hereafter $`\gamma _{max}`$ is the corresponding maximum Lorentz factor). Thus the energy spectrum of TeV $`\gamma `$-rays can be extremely soft, e.g., $`\alpha \simeq 3.0`$ ($`\alpha `$ is the index of a power-law energy spectrum), due to the cutoff in the spectrum of emitting electrons. However, in that case variations in $`\gamma _{max}`$ would lead to a significant change of the spectral slope in TeV $`\gamma `$-rays, which is not the case for the HEGRA observations. It is more likely that synchrotron photons of approximately 1-20 keV, emitted by the electrons accelerated within the jet, are upscattered by the same electrons to TeV energies. In such a scenario, X-ray variability caused by hardening of the initial electron spectrum does not necessarily lead to variations of the TeV $`\gamma `$-ray spectral slope, which is relatively flat, $`\alpha \simeq 2.0`$, and remains constant. 
Interestingly, the TeV $`\gamma `$-ray spectrum as measured by HEGRA shows a nearly constant spectral slope in the energy range below 5 TeV, whereas it deviates strongly from the power law in the high energy part. Such behavior might be naturally explained by the effect of intergalactic absorption. ## IR absorption in TeV spectrum of Mkn 501 While propagating in the intergalactic medium, TeV $`\gamma `$-rays may be attenuated through the pair production process in the intergalactic infrared radiation field (IIRF) 12 . The corresponding opacity of the intergalactic medium is determined by the spectral energy distribution (SED) of the IR photon field (see Figure 1). The absorption of $`\gamma `$-rays in the energy range from 0.5 to 20 TeV depends on the IR SED between 1 and 50 μm. Recent measurements, as well as low upper limits on the SED, strongly constrain its shape in the range relevant to TeV $`\gamma `$-rays. A compilation of present data is shown in Figure 1. We also show two models of the SED, from 13 and 14 . Note that the recent tentative detection of IR photons at 3.5 μm by COBE 15 is consistent with both models, whereas the ISOCAM lower limit on the IR photon field, if true, favours the model of 13 with a rather flat SED in the mid-IR region. The optical depth for $`\gamma `$-ray absorption as a function of energy and redshift, $`\tau =\tau (E_\gamma ,z)`$, was calculated in 16 using the predictions for the SED of the intergalactic IR photon field according to 13 . These data may be used to unfold the Mkn 501 energy spectrum measured by the HEGRA group, $`(dN_\gamma /dE)_m`$, in order to obtain the “de-absorbed” intrinsic energy spectrum of Mkn 501, $`(dN_\gamma /dE)_i`$: $$(dN_\gamma /dE)_i=(dN_\gamma /dE)_me^{\tau (E,z)}$$ (2) The “de-absorbed” HEGRA data are shown in Figure 1 together with a power law fit. 
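Equation (2) amounts to multiplying each measured flux point by $`e^{\tau (E,z)}`$. A minimal sketch of this unfolding step follows; the optical-depth values below are purely hypothetical placeholders, not the actual $`\tau (E,z)`$ of Ref. 16:

```python
import numpy as np

# De-absorption, Eq. (2): (dN/dE)_i = (dN/dE)_m * exp(tau(E, z)).
E = np.array([0.5, 1.0, 5.0, 10.0, 20.0])            # TeV
measured = 10.8e-11 * E ** -1.92 * np.exp(-E / 6.2)  # HEGRA fit, Eq. (1)
tau = np.array([0.1, 0.2, 1.0, 2.0, 3.5])            # hypothetical tau(E, z = 0.034)
intrinsic = measured * np.exp(tau)                   # de-absorbed spectrum
```

Since $`\tau `$ grows with energy, the correction steepens toward high energies, which is exactly how an exponential-looking cutoff in the measured spectrum can unfold into a flatter intrinsic power law.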
We find 17 that the data points can be well fitted by $$(dN_\gamma /dE)_i=(1.32\pm 0.04)\times 10^{-10}(E/1\mathrm{TeV})^{-2.0\pm 0.03}.$$ (3) Note that similar results were shown at this Workshop by the Telescope Array group, using their measurement of the Mkn 501 TeV $`\gamma `$-ray spectrum. We show in Figure 3 the large scale spectral energy distribution of Mkn 501 calculated assuming the absorption. ## Comparison of Mkn 501 and Mkn 421 spectra As reported in 18 , the spectra of Mkn 421 and Mkn 501 measured by the Whipple group in the high state of emission show a noticeable difference in spectral shape over the energy range 0.3-10 TeV. The spectrum of Mkn 421 is a power law whereas the spectrum of Mkn 501 is apparently curved. Since the two objects Mkn 421 and Mkn 501 have almost the same redshift, one may conclude that they have different intrinsic energy spectra of TeV $`\gamma `$-rays 18 . However, the Whipple data for the Mkn 421 and Mkn 501 energy spectra at energies above 1 TeV do not show a prominent difference, and both could be well fitted by a power law. The apparent difference between the two spectra is at energies below 1 TeV, namely in the range where absorption of TeV $`\gamma `$-rays in the intergalactic IR photon field strongly affects the spectra. The similar behavior of both spectra in the energy range above 5 TeV does not contradict the effect of absorption at these energies, as stated above. The spectrum of Mkn 421 as measured by the HEGRA collaboration in the low state shows power law behavior $`dN_\gamma /dE\propto E^{-3.1}`$ 5 . The HEGRA data allow the spectral measurements to be extended only up to 7 TeV. Such a steep spectrum can most likely be attributed to a very soft intrinsic source spectrum and does not disprove the effect of absorption of TeV $`\gamma `$-rays (see Figure 3). ## Conclusion We propose a possible scenario explaining the spectral shape of the Mkn 501 energy spectrum as measured by the HEGRA collaboration. 
Strong variations of the X-ray emission argue in favor of a rather flat intrinsic spectrum of TeV $`\gamma `$-rays. We conclude that absorption in the intergalactic IR photon field plays an important role and produces the apparent curvature observed in the Mkn 501 spectrum. The SSC fit of the spectral shape then requires a rather high value of the Doppler boosting factor of the emitting jet, $`\delta >50`$. Future multi-wavelength observations, as well as detections of other BL Lac objects, will help us to understand the mechanisms of TeV $`\gamma `$-ray emission and propagation.
# The Photon-Box Bohr-Einstein Debate Demythologized ## Abstract The legendary discussion between Einstein and Bohr concerning the photon box experiment is critically analyzed. It is shown that Einstein’s argument is flawed and Bohr’s reply is wrong. I. INTRODUCTION The disagreement between Bohr and Einstein concerning quantum mechanics has become legendary in physics. One of their discussions, perhaps the one that has received the most attention, is about the famous photon box experiment, a Gedankenexperiment devised by Einstein in order to show an apparent flaw in quantum mechanics: a violation of the time-energy indeterminacy relation. Bohr’s reply to it, constructed during a sleepless night after Einstein’s presentation, used the redshift formula, a result of general relativity. This discussion has been publicized as a crucial moment in the Bohr-Einstein debate. Bohr himself said that the “discussion took quite a dramatic turn”. Although some authors have criticized the validity of Bohr’s reply, there is a general belief that the photon box experiment was the arena of a master fight between two giants (a view we also once held). However, in this work we will see that this reputation is highly undeserved. Indeed, a careful and irreverent study of the issue shows that Einstein’s argument is flawed and Bohr’s reply is wrong. There are some indications that neither Einstein nor Bohr was satisfied with his part in the discussion. Most probably Einstein noticed that Bohr’s reply was not conclusive; however, he did not insist on his argument, probably because he no longer believed in it. On the other hand, it is reported that Bohr had, on the day of his death, a drawing of the photon box on his blackboard. This could be thought of as a hunter’s trophy, but it is also possible to interpret it as a sign that Bohr was not satisfied with his reply and was still looking for something better. 
The Bohr-Einstein discussion concerning the photon box has been treated by many authors, either criticizing or justifying and improving Bohr’s argumentation. To our knowledge, none of them makes a critical analysis of Einstein’s argument. This is probably due to the fact that his argument is extremely simple and apparently requires no further comment. We will see, however, that his argument is seriously flawed, but not for the reasons given by Bohr. Although the core of the discussion took place at the Solvay meeting in Brussels in 1930, we will take as an authoritative source for the argument and the reply Bohr’s account of them, presented in his article published in 1949 in a book that has become a standard reference. Einstein read this article and had the opportunity (not used) to comment on these issues in the same volume. In this work we will use the terms “indeterminacy” and “uncertainty” with different meanings. The first term, “indeterminacy”, denotes the impossibility of assigning precise values to the observables of a system as prescribed by quantum mechanics, whereas “uncertainty” will refer to the lack of precision in the knowledge of the value assigned to an observable due to apparatus or experimental limitations. According to this convention, Heisenberg’s relations refer to indeterminacies and not to uncertainties. Therefore, “uncertainty” will have in this work a classical origin and its nature is gnoseological, whereas “indeterminacy” is essentially quantum-mechanical, regardless of whether its nature is ontological or gnoseological, an issue not decided among the experts in the foundations of quantum mechanics. In particular, referring to the indeterminacies either in space-time or in energy-momentum, two opposite points of view can be taken. In one of them it is claimed that the particles do have a precise localization in energy and momentum as well as in space and time, but quantum mechanics is unable to calculate or predict them simultaneously. 
This position implies that quantum mechanics is an incomplete theory. In the opposing view, quantum mechanics is a complete theory but the particles do not have the classical property of having exact values assigned to all observables. II. EINSTEIN’S ARGUMENT Einstein proposed to consider a box with perfectly reflecting walls containing electromagnetic radiation. Inside the box, an ideal clock mechanism could open a shutter at a predetermined moment $`T`$ for a time $`\mathrm{\Delta }T`$ short enough to let only one photon escape. Therefore, since $`\mathrm{\Delta }T\to 0`$, the time of emission of the photon can be, according to Einstein, exactly known. Before and after the emission, we could weigh the box at all leisure with unlimited precision, and from the difference in weight we could exactly determine the energy $`E`$ of the photon. Therefore, argued Einstein, the escaping electromagnetic radiation violated the relation $$\mathrm{\Delta }E\mathrm{\Delta }T\gtrsim \hbar ,$$ (1) where $`\mathrm{\Delta }E`$ is the indeterminacy in the energy of the photon and $`\mathrm{\Delta }T`$ is the indeterminacy in the moment of emission. The meaning of the time-energy relation has been a subject of profound analysis. The main difficulty with it is due to the fact that “time” is not a quantum mechanical observable and relation (1) can not be derived from a commutation relation, as is done with the position-momentum indeterminacy relations. For the purpose of this work we don’t need to worry about these difficulties because there are ways to derive time-energy indeterminacy relations that do not require the existence of a time operator. In any case, the violation of this relation, with the meaning given above, would be fatal to quantum mechanics. If Einstein’s argument were right, it would not only be fatal for the indeterminacy principle in quantum mechanics but it would also present an unsolvable contradiction between Einstein’s own concept of a photon and classical electrodynamics! 
Indeed, if the shutter is open during a vanishing time interval (for just one photon to escape, Einstein thought) then the electromagnetic pulse must be very sharp, ideally a Dirac delta. According to classical electrodynamics, the Fourier components of such a pulse involve a wide spectrum of frequencies. Therefore the electromagnetic pulse does not have a precisely defined frequency. On the other hand, the unique escaping photon should have, according to Einstein, a precisely defined energy, that is, a precisely defined frequency ($`E=h\nu `$), in contradiction with the sharp pulse. At least in 1949, Einstein was well aware of this contradiction, as he stated that “…indivisible point-like localized quanta of the energy $`h\nu `$ (and momentum $`h\nu /c`$)… contradicts Maxwell’s theory”. We know today that the photon concept is compatible with Maxwell’s theory provided that we abandon the simultaneous requirement of point-like localization and precise energy-momentum. In order to appreciate Einstein’s argument and to understand its weakness, it is useful to review the history of the photon as a quantum mechanical particle and its relation to a classical electromagnetic pulse. In the year 1905, when Einstein postulated its existence, the photon was not considered to be a particle but rather an indivisible “parcel of electromagnetic energy” involved in the interaction with matter. It was with the Compton effect, observed in 1923, that it became clear that the photon, in its collision with an electron, was exchanging energy and momentum, that is, individual particle properties. Only in 1926, when the photon was more than 20 years old, was it given its name. Today we may think of the photon as a full-fledged quantum mechanical particle, like an electron or a neutrino, obeying typical indeterminacy relations. A photon can be prepared in a quantum state with precise energy and momentum at the cost of losing space-time localization. 
On the other hand, a well localized photon escaping Einstein’s box through a shutter opened during a very short time will have unsharp energy. We should notice, however, that sometimes in the bibliography the term “photon” has a more restrictive meaning, being reserved for the case of quantum states prepared with sharp energy-momentum (and therefore not localized), that is, those states generated by creation operators applied to the vacuum $`a_𝐤^{}|0`$. With appropriate superpositions of these photon states of well defined energy, we can build states corresponding to any desired configuration of the electromagnetic fields. For instance, it can be shown that the so-called coherent states correspond to a monochromatic linearly polarized plane electromagnetic wave. The quantum mechanical description of the electromagnetic field is a very rich subject that we will not treat here. For our purpose it is sufficient to mention that the space-time localization of the quantum state obtained as a superposition of photon states is in correspondence with the space-time width of a classical electromagnetic pulse. From the proposed experiment, it follows that Einstein was thinking of the photon as a sharply localized object with a sharply defined energy (frequency). Such an object does not exist! Einstein’s experiment can not be realized, not even in Gedanken, because it involves a nonexistent object. Of course one can weigh the box before and after a short opening interval, but the results of these measurements can not be attributed to a nonexistent physical system. Even if the “localized photon” did exist, the measurement performed on the box would perturb its state, as is today generally accepted as a consequence of the projection postulate, no matter how far away the photon is. Nonlocality is indeed the most astonishing aspect of the photon box experiment. 
Einstein’s argument would be applicable if the physical system emitted by the box were a point-like object with precisely defined energy, like a classical particle for instance. But then the argument would be useless, because it would not be surprising that a classical apparatus with a classical physical system violates quantum mechanics. Einstein needed the photon in his argument because of its essential quantum nature. His error was to assign, to this quantum mechanical object, classical features of localization and sharp energy. This is not possible, as shown by the combination of classical electrodynamics with the definition of the photon energy. III. BOHR’S REPLY We have seen that the main weakness of Einstein’s argument lies on the “photon” side because it requires a nonexistent object. However, Bohr looked for an error in another place: on the box side. The set of formulas used by Bohr in his reply to Einstein in order to derive the inequality (1) is: $`\mathrm{\Delta }E=\mathrm{\Delta }mc^2,`$ (2) $`\mathrm{\Delta }p\mathrm{\Delta }q\gtrsim \hbar ,`$ (3) $`\mathrm{\Delta }p<Tg\mathrm{\Delta }m,`$ (4) $`{\displaystyle \frac{\mathrm{\Delta }T}{T}}={\displaystyle \frac{g\mathrm{\Delta }q}{c^2}}.`$ (5) This is a hybrid set involving classical mechanics (4), special relativity (2), quantum mechanics (3) and (4), and general relativity (5). We will see that this hybrid mixture is precisely the root of the weakness of the argument. Of course, with trivial manipulations of these formulas one can readily arrive at the inequality (1). However, in order to provide a proof of the inequality, the relations (2) to (5) must be valid and the symbols used in these formulas must have the same meaning as in the inequality (1). We will see that these two requirements are not satisfied by Bohr’s reply. 
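For completeness, the “trivial manipulations” can be spelled out: combining (3) and (4) gives $`\mathrm{\Delta }q\gtrsim \hbar /\mathrm{\Delta }p>\hbar /(Tg\mathrm{\Delta }m)`$, and inserting this into (5) and using (2) yields $$\mathrm{\Delta }T=\frac{Tg\mathrm{\Delta }q}{c^2}>\frac{\hbar }{c^2\mathrm{\Delta }m}=\frac{\hbar }{\mathrm{\Delta }E},$$ which is relation (1).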
In Bohr’s reply to Einstein, $`T`$ is the “interval of balancing procedure”, $`\mathrm{\Delta }m`$ is the “weighing … accuracy”, $`\mathrm{\Delta }q`$ is the “position … accuracy” and $`\mathrm{\Delta }p`$ is the “minimum latitude in the control of the momentum of the box”. In these definitions there is a mixture of classical uncertainties and quantum indeterminacies. We agree with other authors that the symbol $`\mathrm{\Delta }`$ in Bohr’s arguments denotes some unspecified measure of width whose nature (classical or quantum) is not clearly stated. This ambiguity is typical of Bohr’s confusing argumentation style, which was tolerated because of his (deserved) authority. For this reason his argument still “raises many questions which have never been satisfactorily answered”. The main difficulty in this task is that there is absolutely no unanimity as to the correct reading of Bohr’s argument. The number of different interpretations of the argument and of the meaning of the several symbols is astounding. In any case, the symbols involved in relation (1) are quantum indeterminacies, and whatever meaning Bohr had in mind, sooner or later quantum indeterminacies must enter the scene in order to end with relation (1). We will therefore analyze Bohr’s argument assuming that he also refers to quantum indeterminacies. With this assumption we will show that Bohr’s argument is wrong. The first difficulty that we find with Bohr’s argument is that the symbol $`T`$ does not have the same meaning in the set of relations (2) to (5) as in relation (1). In Einstein’s argument, $`\mathrm{\Delta }T`$ is the indeterminacy in the moment of escape of the “photon” (more precisely, the time-width of the electromagnetic pulse), whereas in Bohr’s it means the indeterminacy in the balancing time of the box during the weighing procedure. These indeterminacies need not be the same. The weighing of the box can, indeed, be made a long time after the escape of the electromagnetic pulse. 
We have here sufficient reason to take Bohr’s reply as inconclusive. The next difficulty is that Bohr claims that the inequality (4) is “obvious”. This was not obvious to us, and we therefore made a bibliographic search. Many authors simply quote Bohr literally without further explanation. Other authors attempt some explanation of it. In several cases an equality is derived instead of the inequality (4), involving classical uncertainties that are later replaced by quantum indeterminacies. None of the authors consulted stated clearly the difference between classical quantities causally related and essential quantum indeterminacies. We reached the conclusion that relation (4) is not obvious and, if valid, it must be proved with more care. Since relation (4) has been claimed to be a consequence of the weighing procedure of the box in a spring balance, it is important to analyze that procedure in every detail. The procedure consists in hanging or taking away progressively smaller weights until the box remains at rest with the pointer at the zero of the scale. At this point the force of the spring cancels the total weight of the box and we can neglect them both. There is one subtle point worth mentioning about the weighing procedure. In the light of the mass-energy relation, the weighing procedure must be such that there can not be any energy dissipation to the environment, because this would imply a loss of mass. In particular, a damping mechanism to stop the box could not be tolerated. The weighing procedure must involve only transfer of elastic, gravitational and kinetic energy to stop the box, without dissipation. We assume that the required experimental skills exist for this to be possible. When the balancing weights get smaller, the movement of the pointer becomes slower and longer balancing times are required. 
The weighing terminates with an experimental precision $`\delta m`$ when the addition or subtraction of such a mass would not produce any noticeable displacement of the pointer during a previously chosen balancing time $`T`$. If we are willing to spend more time we can reach better precision. Clearly, the box can not be at absolute rest but will be moving with a momentum $`\delta p`$ that must be negligibly small, so small that during the balancing time $`T`$ the displacement $`\delta q`$ of the box around the zero position remains smaller than any perceivable distance. The gravitational force $`g\delta m`$ associated with the mass $`\delta m`$, acting during the time $`T`$, would then cause a change $`\delta p`$ in the momentum of the box. That is, $$\delta p=g\delta mT.$$ (6) We must emphasize that all quantities mentioned are classical uncertainties, typical of any measurement procedure. The mass (energy) uncertainty $`\delta m`$ and the momentum uncertainty $`\delta p`$ are causally related by the equation above and can be estimated for a given experiment. They should not be confused with quantum indeterminacies, which are in general not causally related. This equation is formally similar to Bohr’s relation (4) and we could be tempted to replace the uncertainties $`\delta p`$ and $`\delta m`$ by the corresponding indeterminacies $`\mathrm{\Delta }p`$ and $`\mathrm{\Delta }m`$ and somehow argue for the $`<`$ sign. We should resist this temptation, which leads to a wrong result, as we will see. More will be said below about this temptation. The closest we can get to relating the classical uncertainties with the quantum indeterminacies is to notice that the indeterminacies of the quantum state of the macroscopic apparatus are of course much smaller than the experimental uncertainties: $$\mathrm{\Delta }p<\delta p;\mathrm{\Delta }m<\delta m.$$ (7) Clearly, Bohr’s relation (4) does not follow from relations (6) and (7) above. 
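For illustration, Eq. (6) can be put into numbers; every value below is an assumed, hypothetical choice of balance parameters, used only to show the orders of magnitude involved:

```python
# Classical uncertainty estimate from Eq. (6): delta_p = g * delta_m * T.
g = 9.8          # m s^-2
delta_m = 1e-12  # kg: assumed weighing precision
T = 1e3          # s: assumed balancing time

delta_p = g * delta_m * T            # kg m s^-1, Eq. (6)
# For an assumed 1 kg box, the corresponding drift during the weighing is
box_mass = 1.0                       # kg
delta_q = (delta_p / box_mass) * T   # ~ 1e-5 m, indeed barely perceivable
```

These are causally related classical uncertainties, in the sense of the text: doubling the balancing time doubles $`\delta p`$ for fixed $`\delta m`$.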
Even more, we will next see that there are quantum states of the box that violate relation (4). An appropriate classical model for a spring balance could be a damped harmonic oscillator, if we can assume that the dissipated energy (mass) is negligible compared with the weighed mass. In our quantum case, however, the uncertainty in the measured mass must be very small and we can not tolerate energy losses; therefore we will take as a model for the box hanging from a spring in a constant gravitational field an (undamped) harmonic oscillator. The constant force of gravity is canceled by the spring and can therefore be ignored (it just amounts to an offset in the rest position of the box). The quantum states of the box are then given by the harmonic oscillator energy eigenfunctions $`\{\varphi _n\}`$. We may assume that the preparation of the system, involved in the weighing procedure, leaves the box, including the enclosed radiation, in the energy ground state $`\varphi _0`$. This is the simplest assumption, but we can be more general and choose for the state of the box some superposition of energy states with an energy indeterminacy $`\mathrm{\Delta }E`$ much smaller than the experimental uncertainty $`\delta mc^2`$. Among these choices we could consider the harmonic oscillator coherent states $$\psi _\alpha =\sum _{n=0}^{\mathrm{\infty }}\mathrm{exp}(-\frac{|\alpha |^2}{2})\frac{\alpha ^n}{\sqrt{n!}}\varphi _n.$$ (8) These states are particularly interesting because they are the quantum states that most resemble the classical states, in the sense that they have the least indeterminacy product. Notice that $`\alpha `$ can take any (complex) value and the ground state corresponds to the special case $`\alpha =0`$. 
The indeterminacies in momentum and energy for these states are $$\mathrm{\Delta }^2p=\hbar m\omega /2,\mathrm{\Delta }^2E=\hbar ^2\omega ^2|\alpha |^2,$$ (9) where $`m`$ is the mass of the oscillator (the box) and $`\omega `$ its oscillation frequency. The momentum indeterminacy is the same for all these states and, taking $`\alpha `$ small enough, we can reach a state with an energy (mass) indeterminacy small enough to violate inequality (4), providing a counterexample to Bohr’s “obvious” relation. That is, $$|\alpha |<\frac{c^2}{Tg}\sqrt{\frac{m}{2\hbar \omega }}\mathrm{\Longrightarrow }\mathrm{\Delta }p>Tg\mathrm{\Delta }m.$$ (10) Regardless of the meaning that Bohr gave to the symbols, we can not use the set of relations (2) to (5), assuming that the $`\mathrm{\Delta }`$’s mean quantum indeterminacies, in order to derive relation (1), because one of them is wrong, as the counterexample shows. The last inequality above has the reversed sign compared with relation (4) and, if the rest of Bohr’s argument were correct, it would lead to a result like (1) but with the reversed sign! A boomerang for Bohr. With the harmonic oscillator model for the box, a rather decent model, we have shown that relation (4) is not generally true and presumably a model-independent proof of the relation is impossible. In any case, it is far from being “obvious”. Here we would like to emphasize that quantum indeterminacies can not be treated as classical variables because they are not causally related. An indeterminacy in momentum $`\mathrm{\Delta }p`$ is not a change in momentum caused by the action of some force. The indeterminacies are just that, indeterminacies without a classical cause. For this reason it would be wrong to place $`\mathrm{\Delta }m`$ and $`\mathrm{\Delta }p`$ instead of $`\delta m`$ and $`\delta p`$ in Eq. (6). 
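The counterexample can be checked numerically. The values below (box mass, spring frequency, weighing time) are arbitrary assumptions; the conclusion is insensitive to them as long as $`|\alpha |`$ stays below the bound of Eq. (10):

```python
import math

# Coherent-state counterexample to Bohr's relation (4), using Eqs. (9)-(10).
hbar, c, g = 1.05e-34, 3.0e8, 9.8   # SI units
m, omega, T = 1.0, 10.0, 100.0      # assumed: 1 kg box, 10 rad/s spring, 100 s weighing
alpha = 1.0                         # coherent-state amplitude, far below the bound

bound = (c ** 2 / (T * g)) * math.sqrt(m / (2 * hbar * omega))  # Eq. (10) threshold
dp = math.sqrt(hbar * m * omega / 2)     # Delta p, Eq. (9)
dm = hbar * omega * alpha / c ** 2       # Delta m = Delta E / c^2, Eqs. (9) and (2)
```

With these numbers the bound on $`|\alpha |`$ is astronomically large, so essentially any laboratory-scale coherent state of the box satisfies $`\mathrm{\Delta }p>Tg\mathrm{\Delta }m`$, the reverse of relation (4).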
It follows from the formal definition of the indeterminacies in quantum mechanics that when two observables $`A`$ and $`B`$ are related by a function, $`B=F(A)`$, their indeterminacies are not related by the same function (not even for a linear function with operator-valued coefficients!). For example, the energy and momentum of a free particle are related by $`E=p^2/2m`$, but their indeterminacies are not related in the same way. As another example, the position observables at different times ($`t`$ and $`0`$) for a free particle in a given quantum state are related by $`x(t)=x(0)+vt`$, where $`v=p/m`$ is the velocity observable. However, in this case the indeterminacies in the position at those times are related by $$\mathrm{\Delta }x(t)=\sqrt{\mathrm{\Delta }^2x(0)+\left(\langle xv+vx\rangle -2\langle x\rangle \langle v\rangle \right)t+\mathrm{\Delta }^2v\,t^2},$$ (11) where the expectation values are taken in the state at time $`t=0`$. It is therefore wrong to use the Heisenberg indeterminacy relations in order to derive other relations by careless mathematical manipulation of the indeterminacies. This illegal use of the Heisenberg relations has prompted D. Griffiths to say: “when you hear a physicist invoke the uncertainty principle, keep a hand on your wallet”. IV. CONCLUSION When we differentiate the meaning and the mathematical treatment of the classical uncertainties and the quantum indeterminacies, it becomes clear that it is impossible to prove a quantum mechanical relation, like relation (1), by means of an argument concerning classical uncertainties. If this were possible, then quantum mechanics would be a consequence of classical mechanics. On the other hand, we have just seen that hybrid arguments mixing classical and quantum concepts are often meaningless and may lead to wrong results. Therefore the only way to prove the validity of a quantum relation is to require logical consistency within the quantum theory (and, of course, agreement with the experiments). 
Every indeterminacy relation is a consequence of the formalism of quantum mechanics, where states are represented by Hilbert space elements and observables by hermitian operators. Therefore any correct quantum mechanical treatment of the photon box experiment will be consistent with the indeterminacy relations. Such treatments are not the purpose of this work, but we want to mention that they will involve the quantum description of the electromagnetic radiation inside and outside the box, with states that are coupled by energy conservation. The description of the outgoing radiation can be conveniently done in terms of states built as superpositions of photon number eigenstates. A well localized electromagnetic pulse will involve a large number of photon states, implying a large energy spread. On the other hand, if the energy is sharp, the electromagnetic pulse will not be localized. Since the radiation inside and outside the box are coupled by energy conservation, the state reduction involved in the measurement of the energy inside the box will affect the state of the radiation outside the box, no matter how far away it is. Again, nonlocality appears as one of the most astonishing features of quantum mechanics. In this work we have seen that the reputation of the Bohr-Einstein discussion concerning the photon-box experiment is not justified. Their arguments have been uncritically propagated by many authors. It is unfortunate that these very deficient arguments of Bohr and Einstein have been given such a high priority in their debate. Indeed, arguments that are meant to destroy quantum mechanics, and the reply to save quantum mechanics from destruction, should have a level of rigor not reached in the photon-box debate. Of course, the weaknesses in the arguments of Bohr and Einstein are easily detected today in the light of the present knowledge of quantum mechanics and it would be an anachronistic error to blame them for that. 
These weaknesses should by no means tarnish their fundamental contributions to quantum theory. On Bohr’s side, the idea of complementarity showing that physical reality has a level of sophistication veiled to our naive observations; and on Einstein’s side, the EPR argument exhibiting unexpected correlations whose study has dominated the research on the foundations of quantum mechanics. Compared with their gigantic contributions, their errors are insignificant. The important mistake that we do want to point out is the uncritical and authoritarian propagation of a coherent combination of errors that leads to the right conclusion that quantum mechanics is correct. This work has received partial support from “Consejo Nacional de Investigaciones Científicas y Técnicas” (CONICET), Argentina (PIP grant Nr. 4342/96). A.D. and I.G.M. would like to thank the “Comisión de Investigaciones Científicas” (CIC) for financial support.
no-problem/9910/astro-ph9910233.html
ar5iv
text
# The Evolution of Galaxy Clustering in Hierarchical Models ## 1. Introduction Observations now probe the properties of galaxy populations over a large fraction of the age of the universe (e.g. Steidel et al. 1996; Ellis et al. 1996; Lilly et al. 1996; Adelberger et al. 1998). Furthermore, we can look forward to a much more detailed study of the high redshift universe with the many instruments soon to be commissioned on the growing generation of new 8-m class telescopes. Over the lookback time probed by these observations the conventional cold dark matter dominated models of structure formation predict very strong evolution of the distribution of dark matter. It is therefore the job of theorists to progress to the stage where these observations can be interpreted and further properties predicted within the framework of the hierarchical evolution of dark matter halos. The main approaches that have been taken towards this goal can be divided into two classes. The first, direct simulation, involves solving explicitly the gravitational and hydrodynamical equations in the expanding universe using numerical N-body techniques (e.g. Katz et al. 1992; Evrard et al. 1994; Navarro & Steinmetz 1997; Frenk et al. 1999; Pearce et al. 1999). The second approach, now commonly known as “semi-analytic modelling of galaxy formation”, calculates the evolution of the baryonic component using simple analytic models, and uses a Monte-Carlo technique to generate merger trees that describe the hierarchical growth of dark matter halos. The two modelling techniques have complementary strengths. The major advantage of direct simulations is that the dynamics of the cooling gas are calculated in full generality, without the need for simplifying assumptions. 
The main disadvantage is that even with the best codes and fastest computers available, the attainable resolution is still some orders of magnitude below that required to fully resolve the formation and internal structure of individual galaxies in cosmological volumes. In addition, a phenomenological model, similar to that employed in semi-analytic models, is required to include star formation and feedback processes. Semi-analytic models do not suffer from such resolution limitations. Their major disadvantage is the need for simplifying assumptions in the calculation of gas properties, such as the spherical symmetry that is imposed when estimating the cooling rate of halo gas. An important advantage of semi-analytic models is their flexibility, which allows the effects of varying assumptions or parameter choices to be readily investigated and makes it possible to calculate a wide range of observable galaxy properties, such as luminosities, sizes, mass-to-light ratios, bulge-to-disk ratios, circular velocities, etc. Semi-analytic models of galaxy formation based on Monte-Carlo methods for generating halo merger trees were pioneered by two groups, one now based in Munich (e.g. Kauffmann, White & Guiderdoni 1993; Kauffmann & Charlot 1994; Kauffmann 1995a,b; Diaferio et al. 1999) and the other in Durham (e.g. Cole et al. 1994; Baugh et al. 1998; Governato et al. 1998; Benson et al. 1999a,b; Cole et al. 1999). There is now a third, well established independent group (Somerville & Primack 1999; Somerville & Kolatt 1999; Somerville et al. 1999) and the field continues to grow with interesting variants being developed, for example, by Roukema et al. (1997), Avila-Reese & Firmani (1998), Wu, Fabian & Nulsen (1998) and van Kampen, Jimenez & Peacock (1999). The numerous contributions of these groups to this meeting are an indication of the versatility and usefulness of this approach to modelling galaxy formation. 
There are typically a large number of differences between the detailed assumptions made in any two of the above models. Many of these relate to the prescription for generating the halo merger trees. These differences typically have little effect on model predictions. One can also expect this aspect of the various approaches to converge, because in each case the models are attempting to emulate the evolution of dark matter halos seen in high resolution N-body simulations. The most important differences relate to assumptions regarding star formation, e.g. the importance of merger induced bursts and the manner in which stellar feedback operates. In this respect all the models are, inevitably, oversimplified and probably the best way forward is to confront the models continually with ever more detailed and accurate data. The great value of the semi-analytic approach is its ability to address a wide range of observational data from galaxy luminosity functions, colour and metallicity distributions to clustering statistics within a single coherent model. We have found this to be a particular strength of the models as it often allows robust predictions to be made despite the intrinsic uncertainty in the physical processes that are being modelled. In the remainder of this article we briefly describe the latest Durham model and use it to illustrate the main processes that are incorporated in semi-analytic models of galaxy formation. We then present a comprehensive set of results for the galaxy clustering properties predicted by this model for a $`\mathrm{\Lambda }`$CDM ($`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }_0=0.7`$) cosmology, after the model is constrained to reproduce the bright end of the observed galaxy luminosity function. ## 2. 
The Model A full description of the current Durham semi-analytic galaxy formation model, complete with an exploration of how the predictions depend on parameter variations and how they compare to observational data, can be found in Cole et al. (1999). Here we simply describe the main features of the model. ### 2.1. Merger Trees We use a simple new Monte-Carlo algorithm to generate merger trees that describe the formation paths of randomly selected dark matter halos. Our algorithm is based directly on the analytic expression for halo merger rates derived by Lacey & Cole (1993). The algorithm enables the merger process to be followed with high time resolution, as timesteps are not imposed on the tree but rather are controlled directly by the frequency of mergers. Also, there is no quantization of the masses of the halos. ### 2.2. Halo Structure and Gas Cooling We assume that the dark matter in virialized halos is well described by the NFW density profile (Navarro, Frenk & White 1997). We further assume that any diffuse gas present during a halo merger is shock-heated to the virial temperature of the halo. The density profile we adopt for the hot gas is less centrally concentrated than that of the dark matter and is chosen to be in agreement with the results of high resolution simulations of non-radiative gas (e.g. Frenk et al. 1999). We estimate the fraction of gas that can cool in a halo by computing the radius at which the radiative cooling time of the gas equals the age of the halo. The gas that cools is assumed to conserve angular momentum and settle into a rotationally supported disk. Thus, the initial angular momentum of the halo, which we assign using the well characterised distribution of spin parameters found for halos in N-body simulations, determines the size of the resulting galaxy disk. In computing the size of the disk we also take account of the contraction of the inner part of the halo caused by the gravity of the disk. ### 2.3. 
Star Formation and Feedback The processes of star formation and stellar feedback are the most uncertain to model. We adopt a flexible approach in which the star formation rate in the disk of cold gas is given by $`\dot{M}_{\ast }=M_{\mathrm{cold}}/\tau _{\ast }`$, with the timescale $`\tau _{\ast }`$ parameterized as $$\tau _{\ast }=\epsilon _{\ast }^{-1}\tau _{\mathrm{disk}}(V_{\mathrm{disk}}/200\,\mathrm{km}\,\mathrm{s}^{-1})^{\alpha _{\ast }}.$$ (1) We also adopt a feedback model in which for every solar mass of stars formed, $$\beta =(V_{\mathrm{disk}}/V_{\mathrm{hot}})^{-\alpha _{\mathrm{hot}}}$$ (2) solar masses of gas are assumed to be reheated and ejected from the disk as a result of energy input from young stars and supernovae. In these formulae, $`\tau _{\mathrm{disk}}`$ and $`V_{\mathrm{disk}}`$ are the dynamical time and circular velocity of the disk; $`\epsilon _{\ast }`$, $`\alpha _{\ast }`$, $`\alpha _{\mathrm{hot}}`$ and $`V_{\mathrm{hot}}`$ are the model parameters. ### 2.4. Galaxy Mergers Mergers between galaxies can occur, subsequent to the merger of their dark matter halos, if dynamical friction causes the orbits of the galaxies to decay. The result of a merger depends on the mass ratio of the merging galaxies. If they are comparable, $`M_{\mathrm{smaller}}>f_{\mathrm{ellip}}M_{\mathrm{larger}}`$, then the merger is said to be violent and results in the formation of a spheroid. At this point any cold gas present in the merger is assumed to undergo a burst of star formation, with a timescale equal to the dynamical time of the forming spheroid and with feedback estimated using equation (2), but with the circular velocity of the spheroid replacing that of the disk. The size of the resulting spheroid is estimated assuming energy conservation in the merger (once dynamical friction has eroded the orbits to the point where the galaxies interpenetrate) and the virial theorem. 
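As an illustration of how the star-formation timescale (1) and the feedback factor (2) act, here is a minimal numerical sketch. The parameter values below are illustrative placeholders, not the fitted values of the Durham model, and the sign convention is chosen so that feedback is stronger in low circular-velocity disks, as described later in the text.

```python
def star_formation_rate(m_cold, tau_disk, v_disk,
                        eps_star=0.01, alpha_star=-1.5):
    """SFR = M_cold / tau_star, with the timescale of equation (1):
    tau_star = eps_star**-1 * tau_disk * (v_disk / 200 km/s)**alpha_star."""
    tau_star = tau_disk / eps_star * (v_disk / 200.0) ** alpha_star
    return m_cold / tau_star


def reheated_mass_per_star(v_disk, v_hot=200.0, alpha_hot=2.0):
    """beta of equation (2): solar masses of gas reheated and ejected per
    solar mass of stars formed; large in slowly rotating (low-mass) disks."""
    return (v_disk / v_hot) ** (-alpha_hot)
```

With these (hypothetical) parameters, a disk half as fast as `v_hot` returns four solar masses of gas to the halo per solar mass of stars formed, which is the sense in which feedback suppresses galaxy formation in small halos.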
For minor mergers, $`M_{\mathrm{smaller}}<f_{\mathrm{ellip}}M_{\mathrm{larger}}`$, we assume the cold gas is accreted by the disk and the stars by the bulge of the larger galaxy. ### 2.5. Stellar Population Synthesis and Dust To convert the calculated star formation histories of each galaxy into observable luminosities and colours we use the stellar population synthesis model of Bruzual & Charlot (1993,1999) and, in addition, the 3-dimensional dust model of Ferrara et al. (1999). For the former we adopt the IMF of the solar neighbourhood as parameterized by Kennicutt (1983) and for the latter we adopt their Milky Way extinction law and assume that the dust/gas ratio in the cold gas disk scales with metallicity. ### 2.6. Galaxy Clustering Given a list of halo masses, at the present day or at some redshift $`z`$, the above model can be used to determine the number, luminosity and other properties of the galaxies that inhabit them. It is then straightforward to compute the amplitude of their correlation function on large scales using the formalism of Mo & White (1996). Here we use a more direct approach, that of using the positions of halos from an N-body simulation, as this enables the correlation function to be studied down to smaller scales and allows other aspects of galaxy clustering to be investigated. Details of our procedure can be found in Benson et al. (1999a). ## 3. Observational Constraints The number of parameters in our galaxy formation model is not small and the intrinsic range of behaviour that the model can produce is large. This is inevitable as galaxy formation, at the very least, involves all the processes which are included in the model and quite possibly others as well. Thus, progress can only be made if a set of constraints is applied to fix model parameters. Our approach, set out in detail in Cole et al. (1999), is to fix these parameters using the observed properties of the local galaxy population, e.g. 
the B and K-band luminosity functions, the slope of the Tully-Fisher relation and the gas fractions and metallicities of disk galaxies. The result of applying these constraints is a well specified model whose properties can be examined and critically compared to other observational data such as galaxy clustering or high redshift observations. Fig. 1 shows the $`\mathrm{b}_\mathrm{J}`$ and K-band luminosity functions of a $`\mathrm{\Lambda }`$CDM model ($`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }_0=0.7`$) constrained in this way. It turns out that the predicted low redshift galaxy clustering is insensitive to changes in the model parameters provided only that the model is constrained to produce a reasonable match to the bright end of the luminosity function (Benson et al. 1999a). ## 4. Results We now look at a variety of clustering properties that we predict for the constrained $`\mathrm{\Lambda }`$CDM model and compare them with available observational data. The N-body simulation used to assign positions to our galaxies is the $`\mathrm{\Lambda }`$CDM “GIF” simulation carried out by the Virgo consortium. These same simulations have been analyzed in great detail by the Munich group, with results presented at this meeting and in Kauffmann et al. (1999a,b) and Diaferio et al. (1999). Their approach is more sophisticated than ours in that they extract the halo merger trees directly from the N-body simulations and are able to follow individual galaxies in the simulation from one epoch to another. However, these differences in approach do not significantly affect the predictions of the clustering and kinematic properties of the galaxy populations. In particular, with Antonaldo Diaferio, we were able to verify that if we use our algorithm to assign galaxies positions, but start with the Munich group’s list of galaxies as a function of halo mass, we recover very similar results to those reported in Diaferio et al. (1999). These tests are discussed in Benson et al. 
(1999b), where we conclude that in the few cases where significant differences exist between our results and those of the Munich group, they are largely a result of the differing constraints that have been applied to the models. ### 4.1. Low Redshift Clustering We start, in Fig. 2a, by showing a slice through the N-body simulation with the positions of galaxies superimposed on the dark matter distribution. It is worth noting that the way in which galaxies trace the dark matter is non-trivial, with galaxies avoiding the large underdense regions and concentrating in filaments and clusters. In our model this distribution is entirely determined by the combination of the dark matter distribution and the distribution of the number of galaxies within halos as a function of halo mass. One representation of this distribution is Fig. 2b, which shows the variation of total mass-to-light ratio of halos as a function of halo mass, with the errorbars indicating the 10th and 90th centiles of this distribution. This dependence is produced naturally by the physics incorporated into the semi-analytic model. Galaxy formation is most efficient (M/L lowest) in intermediate mass halos. The efficiency is reduced in low mass halos due to feedback and in the most massive halos due to long cooling times. The low efficiency in low mass halos leads to the production of large voids in the galaxy distribution, while the inefficiency at high masses leads to an anti-bias in small scale clustering, as shown by the real-space correlation functions of Fig. 3a. In fact the real-space galaxy correlation function has a nearly power law form in quite good agreement with that observed. The underrepresentation of galaxies in clusters also leads to a reduced pairwise velocity dispersion (Fig. 3b) which is close to the observed value. 
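The real-space correlation functions discussed above can be estimated directly from a catalogue of galaxy positions. Purely as an illustration (this is not the machinery of Benson et al.), here is a minimal sketch using the natural estimator with an analytic random term in a periodic box; real analyses use tree or grid pair counters and more robust estimators such as Landy–Szalay.

```python
import numpy as np

def xi_of_r(pos, box, r_edges):
    """Real-space two-point correlation function xi(r) for points in a
    periodic cube of side `box`, via xi = DD / DD_random - 1.
    `r_edges` is an array of radial bin edges."""
    pos = np.asarray(pos, dtype=float)
    n = len(pos)
    # pairwise separations with the minimum-image convention (O(N^2) memory)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)
    r = np.sqrt((d ** 2).sum(axis=-1))
    i, j = np.triu_indices(n, k=1)
    dd, _ = np.histogram(r[i, j], bins=r_edges)
    # expected pair counts for an unclustered (uniform) distribution
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
    dd_random = 0.5 * n * (n - 1) * shell_vol / box ** 3
    return dd / dd_random - 1.0
```

For an unclustered point set this returns values consistent with zero, while any excess of close pairs, such as galaxies concentrated in halos, drives xi above zero on small scales, which is exactly the anti-bias comparison being made against the dark matter in Fig. 3a.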
Strangely, the different dark matter and galaxy peculiar velocities act to produce very similar redshift space correlation functions for both components and these match well the observed redshift-space galaxy correlation function. Fig. 4 compares the skewness, $`S_3`$, in redshift space as a function of cell size with two recent observational estimates. The skewness of the model galaxy distribution is substantially less than that of the dark matter and in remarkably good agreement with the observed values. ### 4.2. The Evolution of Galaxy Clustering A strong test of the models will come from comparing their predictions against high redshift observations. One highly successful comparison has already been made. In Baugh et al. (1998) and Governato et al. (1998) firm predictions for the Lyman-break galaxy correlation function were made. These later proved to agree remarkably well with observations (Adelberger et al. 1998). Unfortunately the predictions for both low and high $`\mathrm{\Omega }_0`$ models are very similar and so the observations do not discriminate between cosmological models. In order to make detailed comparisons of the galaxy formation models with the ever increasing quantity of high redshift data we are developing techniques to simulate deep pencil beam surveys and accurately match observational selection criteria. Fig. 5 shows a simulated deep R-band image constructed using our technique of outputting a light-cone from an evolving N-body simulation and then using the semi-analytic galaxy formation model to populate its halos with galaxies. Given the limited resolution and volume of the simulation used to construct these prototypes one should be somewhat cautious to avoid over-interpreting their predictions. Nevertheless it is interesting to make a preliminary comparison with some recent observations. Fig. 6 compares predictions for the clustering amplitude at one degree as a function of limiting K-magnitude and for redshift slices. 
The model predictions are based on an ensemble of simulated light cones with the errorbars indicating the rms scatter. The model agrees well with the K-band observations of McCracken (1999). The model predicts very little variation of clustering amplitude with redshift, whereas Magliocchetti & Maddox (1999) find quite a strong trend. However, the errorbars are large and larger area surveys will be required to critically test the models. ## 5. Conclusions Physically motivated semi-analytic models of galaxy formation which, importantly, include the evolution of structure, provide a framework in which very diverse properties of galaxies can be modelled and understood. Using a subset of the locally observed galaxy properties these models can be constrained and then employed to make useful predictions. Such predictions include all aspects of galaxy evolution and also galaxy clustering. We have only examined the predicted clustering properties for a couple of cosmological models (Benson et al. 1999a). However, it is intriguing that the results of the $`\mathrm{\Lambda }`$CDM model presented here appear to match galaxy clustering data remarkably well and significantly better than a $`\tau `$CDM model. We have invested in the technology necessary to extend the model predictions to high redshift by simulating deep pencil beams. Soon, such data will provide very interesting tests of galaxy formation models.
## References
Adelberger, K.L., Steidel, C.C., Giavalisco, M., Dickinson, M., Pettini, M., Kellogg, M. 1998, ApJ, 505, 1
Avila-Reese, V., Firmani, C. 1998, ApJ, 505, 37
Baugh, C.M. 1996, MNRAS, 280, 267
Baugh, C.M., Cole, S., Frenk, C.S., Lacey, C.G. 1998, ApJ, 498, 504
Benson, A.J., Cole, S., Frenk, C.S., Baugh, C.M., Lacey, C.G. 1999a, MNRAS, in press
Benson, A.J., Cole, S., Baugh, C.M., Frenk, C.S., Lacey, C.G. 1999b, MNRAS, submitted
Bruzual, A.G., Charlot, S. 1993, ApJ, 405, 538
Bruzual, A.G., Charlot, S. 1999, in preparation
Cole, S., Aragón-Salamanca, A., Frenk, C.S., Navarro, J.F., Zepf, S.E. 1994, MNRAS, 271, 781
Cole, S., Lacey, C.G., Baugh, C.M., Frenk, C.S. 1999, MNRAS, submitted
Diaferio, A., Kauffmann, G., Colberg, J.M., White, S.D.M. 1999, MNRAS, 307, 537
Ellis, R.S., Colless, M., Broadhurst, T., Heyl, J., Glazebrook, K. 1996, MNRAS, 285, 613
Evrard, A.E., Summers, F., Davis, M. 1994, ApJ, 422, 11
Ferrara, A., Bianchi, S., Cimatti, A., Giovanardi, C. 1999, ApJS, 123, 437
Frenk, C.S., et al. 1999, ApJ, in press
Gardner, J.P., Sharples, R.M., Frenk, C.S., Carrasco, B.E. 1997, ApJ, 480, L99
Glazebrook, K., Peacock, J.A., Miller, L., Collins, C.A. 1995, MNRAS, 275, 169
Governato, F., Baugh, C.M., Frenk, C.S., Cole, S., Lacey, C.G., Quinn, T., Stadel, J. 1998, Nature, 392, 359
Guzzo, L., et al. 1999 (astro-ph/9901378)
Hoyle, F., Szapudi, I., Baugh, C.M. 1999, MNRAS, submitted
Jing, Y.P., Mo, H.J., Borner, G. 1998, ApJ, 494, L1
van Kampen, E., Jimenez, J., Peacock, J.A. 1999, MNRAS, submitted
Katz, N., Hernquist, L., Weinberg, D.H. 1992, ApJ, 399, L109
Kauffmann, G. 1995a, MNRAS, 274, 153
Kauffmann, G. 1995b, MNRAS, 274, 161
Kauffmann, G., White, S.D.M., Guiderdoni, B. 1993, MNRAS, 264, 201
Kauffmann, G., Charlot, S. 1994, ApJ, 430, L97
Kauffmann, G., Colberg, J.M., Diaferio, A., White, S.D.M. 1999a, MNRAS, 303, 188
Kauffmann, G., Colberg, J.M., Diaferio, A., White, S.D.M. 1999b, MNRAS, 307, 529
Kennicutt, R.C. 1983, ApJ, 272, 54
Lacey, C.G., Cole, S. 1993, MNRAS, 262, 627
Lilly, S.J., Le Fevre, O., Hammer, F., Crampton, D. 1996, ApJ, 460, 1
Loveday, J., Peterson, B.A., Efstathiou, G., Maddox, S.J. 1992, ApJ, 390, 338
Magliocchetti, M., Maddox, S.J. 1999, MNRAS, 306, 998
McCracken, H.J. 1999, Ph.D. thesis, University of Durham
Mo, H.J., White, S.D.M. 1996, MNRAS, 282, 347
Navarro, J.F., Frenk, C.S., White, S.D.M. 1997, ApJ, 490, 493
Navarro, J.F., Steinmetz, M. 1997, ApJ, 478, 13
Pearce, F.R., Jenkins, A., Frenk, C.S., Colberg, J.M., White, S.D.M., Thomas, P.A., Couchman, H.M.P., Peacock, J.A., Efstathiou, G. 1999, ApJ, 521, L9
Ratcliffe, A., Shanks, T., Parker, Q.A., Fong, R. 1998, MNRAS, 294, 147
Roukema, B.F., Peterson, B.A., Quinn, P.J., Rocca-Volmerange, B. 1997, MNRAS, 292, 835
Somerville, R.S., Kolatt, T.S. 1999, MNRAS, 305, 1
Somerville, R.S., Lemson, G., Kolatt, T.S., Dekel, A. 1999, MNRAS, submitted (astro-ph/9807277)
Somerville, R.S., Primack, J.R. 1999, MNRAS, in press (astro-ph/9802268)
Steidel, C.C., Giavalisco, M., Pettini, M., Dickinson, M., Adelberger, K.L. 1996, ApJ, 462, 17
Wu, K.K.S., Fabian, A.C., Nulsen, P.E.J. 1998, MNRAS, 301, L20
Zucca, E., et al. 1997, A&A, 326, 477
no-problem/9910/cond-mat9910389.html
ar5iv
text
# Propagation of solitons of the magnetization in magnetic nano-particle arrays ## Abstract It is clarified for the first time that solitons originating from the dipolar interaction in ferromagnetic nano-particle arrays are stably created. The characteristics can be well controlled by the strength of the dipolar interaction between particles and the shape anisotropy of the particle. The soliton can propagate from a particle to a neighboring particle at a clock frequency even faster than 100 GHz using materials with a large magnetization. Such arrays of nano-particles might be feasible in an application as a signal transmission line. The recently developed nanofabrication techniques make it possible to fabricate ferromagnetic particles to a length of 20-30 nanometers. Since the size of such nano-particles is small enough to be a single domain, the magnetization is homogeneous over a particle and can be described by a magnetic moment. These particles interact with each other due to a dipolar interaction of the magnetic moments. In such a nano-particle, the magnetic anisotropy energy can be dominated by the particle’s shape rather than the magnetocrystalline anisotropy. Therefore, the magnetic anisotropy energy can be well controlled by changing the shape of the particle. When the distance between particles is short enough that dipolar interaction becomes comparable with the anisotropy energy, the direction of the magnetization of each particle is determined in order to minimize both energies, and the direction of the magnetization moves in a collective manner among the particles. It is known that the magnetic domain wall in bulk magnetic materials is a kind of soliton of nonlinear waves. Usually, the soliton originates from the exchange interaction between spins. 
This letter clarifies for the first time that, in a one-dimensional array of nano-particles, solitons originating from the dipolar interaction between particles are present, and that their characteristics can be well controlled by the strength of the dipolar interaction and the shape anisotropy. This ease of control is in sharp contrast to the case of the solitons of domain walls in bulk materials. In this letter, we analyze the characteristics of the solitons in arrays of ferromagnetic nano-particles by varying the experimentally controllable parameter: the shape anisotropy energy. When the damping is small, the soliton propagates from a particle to a neighboring particle at 50 GHz, or even faster than 100 GHz using materials with a large magnetization. Such arrays of nano-particles might be feasible in an application as a signal transmission line. We examine a one-dimensional array of particles in the $`x`$-direction. Each particle has a magnetic moment $`\stackrel{}{M}_i`$ ($`|\stackrel{}{M}_i|=M`$). The dipolar interaction and anisotropy energy are modeled as $$H=\underset{i\ne j}{\sum }\frac{\mu _0}{4\pi r_{ij}^3}\left[\stackrel{}{M}_i\cdot \stackrel{}{M}_j-\frac{3}{r_{ij}^2}(\stackrel{}{M}_i\cdot \stackrel{}{r}_{ij})(\stackrel{}{M}_j\cdot \stackrel{}{r}_{ij})\right]+\underset{i}{\sum }\frac{1}{M}\left[K_yM_{iy}^2+K_zM_{iz}^2\right],$$ (2) where $`K_y`$ and $`K_z`$ describe the shape anisotropy. The dipolar interaction is characterized by $`J=\mu _0M^2/(4\pi a^3)`$, where $`a`$ is the spacing between neighboring particles. In this letter, we consider the case of $`K_z\gg J`$ and $`K_z\gg K_y`$, and thus the $`xy`$ plane is an easy plane for the magnetization. Further, we consider the case where the thermal fluctuation in the direction of $`\stackrel{}{M}_i`$, which frequently leads to the super-paramagnetism, is well-suppressed by $`K_y`$ and/or $`J`$. 
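To make the energy balance in the Hamiltonian above concrete, here is a sketch of the dipolar energy of a single nearest-neighbour pair, specialized to moments confined to the easy ($`xy`$) plane of a chain along $`x`$, with $`\theta `$ measured from the chain axis. This specialization is our illustration, not code from the original work.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability [T m / A]

def dipolar_pair_energy(theta_i, theta_j, moment, spacing):
    """Dipolar energy of two in-plane moments sitting on the x axis:
    E = J * (cos(ti - tj) - 3 cos(ti) cos(tj)),  J = mu0 m^2 / (4 pi a^3)."""
    J = MU0 * moment ** 2 / (4.0 * math.pi * spacing ** 3)
    return J * (math.cos(theta_i - theta_j)
                - 3.0 * math.cos(theta_i) * math.cos(theta_j))
```

Head-to-tail alignment along the chain ($`\theta _i=\theta _j=0`$) gives the dipolar minimum $`-2J`$, while parallel transverse moments ($`\theta =\pi /2`$) cost $`+J`$ and antiparallel transverse moments gain energy; it is this competition, together with $`K_y`$, that selects between the two ground-state configurations discussed next.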
As shown below, the condition can be sufficiently satisfied since $`J`$ can exceed $`10,000`$ K even for particles of a size of 20 nm, when these are aligned closely to each other. The magnetization of each nano-particle obeys the Bloch equation: $$\frac{d\stackrel{}{M}_i}{dt}=\frac{\nu }{M}\stackrel{}{M}_i\times \left(\frac{\partial H}{\partial \stackrel{}{M}_i}-\frac{\alpha M}{\nu }\frac{d\stackrel{}{M}_i}{dt}\right),$$ (3) where $`\nu =g\mu _B/\hbar `$ is a gyromagnetic constant, and $`\alpha `$ describes the damping due to energy dissipation, such as magnetoelastic dissipation and eddy current loss in metallic ferromagnets. In nano-particles, however, it has been shown that the contribution of these dissipation mechanisms is negligibly small. In fact, considering the eddy current loss, $`\alpha \sim 10^{-4}`$ for Fe-particles of a size of 20 nm, for which the soliton can propagate beyond 1,000 sites. Therefore, we exclusively consider the case of $`\alpha =0`$ in this letter. Before discussing the results of the numerical simulations of Eq. (3), it is worth considering the continuum limit. There are two types of configurations of $`\stackrel{}{M}_i`$ giving the minimum total energy depending on $`K_y`$. When only the nearest-neighbor interaction is considered for simplicity, the configuration is type I \[Fig. 1 (a)\] and type II \[Fig. 1 (b)\] for $`K_y<J`$ and $`K_y>J`$, respectively. In both cases, the Bloch equation (3) is approximated by the sine-Gordon (SG) equation: $$\frac{\partial ^2\theta ^{\prime }(x,t)}{\partial x^2}-\frac{1}{c^2}\frac{\partial ^2\theta ^{\prime }(x,t)}{\partial t^2}-\frac{1}{\lambda ^2}\mathrm{sin}\,\theta ^{\prime }(x,t)=0,$$ (4) where $`c^2=3Ja^2K_z\nu ^2/M^2`$, $`\lambda ^2=3Ja^2/(4|K_y-J|)`$, and $`\theta ^{\prime }(x,t)/2=\theta (x,t)`$ is the azimuthal angle of $`\stackrel{}{M}(x)`$ in the continuum limit of $`\stackrel{}{M}_i`$. Note that the definitions of $`\theta _i`$ for even and odd sites are different as shown in Fig. 1. 
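The sine-Gordon equation above admits the well-known moving kink, $`\theta ^{\prime }(x,t)=4\,\mathrm{arctan}\,\mathrm{exp}[(x-vt)/l]`$ with $`l=\lambda \sqrt{1-v^2/c^2}`$. A quick finite-difference check that this profile indeed solves the equation, in the illustrative normalization $`c=\lambda =1`$ (our choice, not the paper's units):

```python
import math

def kink(x, t, v):
    """Moving sine-Gordon kink, 4 arctan(exp((x - v t)/l)), in units c = lam = 1."""
    l = math.sqrt(1.0 - v ** 2)
    return 4.0 * math.atan(math.exp((x - v * t) / l))

def sg_residual(x, t, v, h=1e-3):
    """Central-difference residual of theta_xx - theta_tt - sin(theta) = 0;
    should vanish (up to O(h^2) discretization error) for the kink profile."""
    d2x = (kink(x + h, t, v) - 2.0 * kink(x, t, v) + kink(x - h, t, v)) / h ** 2
    d2t = (kink(x, t + h, v) - 2.0 * kink(x, t, v) + kink(x, t - h, v)) / h ** 2
    return d2x - d2t - math.sin(kink(x, t, v))
```

The profile interpolates between $`\theta ^{\prime }=0`$ far behind and $`\theta ^{\prime }=2\pi `$ far ahead, i.e. a full rotation of the staggered magnetization angle over a width that shrinks as the velocity approaches $`c`$.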
Consequently, the solution in the form of the soliton is present for both cases of type I and II, whose examples are shown below Figs. 1 (a) and (b). The solution has a velocity ($`v`$) as a parameter, and its wavelength is $`l=\lambda \sqrt{1-v^2/c^2}`$. The maximum velocity is limited to $`c`$. When the long range part of the interaction is taken into account, $`|K_y-J|`$ in $`\lambda ^2`$ is replaced by $`|K_y-5\zeta (3)J/4|\approx |K_y-1.5J|`$ with $`\zeta (x)`$ being Riemann’s zeta function. To gain a large value of $`J`$, however, it is better to align particles closely in such a way that the distance between particles ($`a`$) is comparable with the diameter of the dot ($`d`$), where the nearest-neighbor interaction becomes dominant. Therefore, we only take into account the nearest-neighbor interaction in Eq. (2) hereafter, although the actual form of the interaction may slightly deviate from that in Eq. (2) for $`a\approx d`$. Figure 2 shows the time-evolution of the center coordinate of the soliton obtained from the numerical calculations of Eq. (3) for various values of $`K_y`$. As an initial condition for $`t=0`$, we choose a solution of the SG equation (4) with an initial velocity $`v=0.2c`$. For both cases of type I and II, the soliton stably propagates keeping the initial velocity when $`K_y`$ is close to $`J`$. The velocity begins to decrease when $`|K_y-J|`$ becomes large, and the amount of the decrease is more significant for the case of type II than type I. The soliton for type II stops after propagating beyond 30 sites for $`K_y=1.2J`$, while the soliton for type I can propagate even for $`K_y=0`$. Therefore, the configuration of type I with $`K_y`$ close to $`J`$ is suitable for the stable propagation of the soliton. We performed the same calculations for the type I case for various values of the initial velocity, and obtained the average velocity ($`\overline{v}`$) in the range, for example, of $`200\le x\le 400`$. The results are shown in Fig. 3 (a). 
The average velocity $`\overline{v}`$ keeps the initial velocity for smaller velocities. This implies that the solution of the continuum limit is appropriate for this range of the velocity. With the increase of the initial velocity, however, the effects of the discreteness of the array become important, and $`\overline{v}`$ begins to saturate and comes to depend little on the initial velocity. With the further increase of the initial velocity, $`\overline{v}`$ for $`K_y\gtrsim 0.3J`$ increases suddenly with a finite jump, and $`\overline{v}`$ takes an almost completely fixed maximum value. It is interesting to note that the corresponding wavelength of the soliton is always close to the distance between particles ($`a`$) independent of $`K_y`$, while the corresponding fixed value of $`\overline{v}`$ depends on $`K_y`$. In the case of $`K_y\lesssim 0.1J`$, although the jump becomes very broad and the velocity is not fixed so rigorously, similar behavior also remains. It should be noted that the sudden jump and fixing in the velocity is absent in the case of type II. According to the numerical results, the maximum of the average velocity ($`v_c`$) and the corresponding wavelength ($`l_c`$) satisfies an approximate relation similar to the continuum limit: $$\left(\frac{v_c}{c}\right)^2+\left(\frac{l_c-0.25a}{\lambda }\right)^2=0.76,$$ (5) as shown in the inset of Fig. 3. Therefore, due to the effects of the discreteness of the array, the wavelength is shifted by a constant, and $`c`$ and $`\lambda `$ are renormalized simultaneously, although the amount of the shift and the renormalization may depend on the distance of the soliton propagation. The corresponding wavelength $`l_c`$ is always close to $`a`$ as discussed above. 
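Relation (5), together with the empirical result $`l_c\approx a`$, fixes the maximum velocity in closed form. Substituting $`l_c=a`$ and $`\lambda ^2=3Ja^2/[4(J-K_y)]`$ (type I, $`K_y<J`$) gives

$$\left(\frac{v_c}{c}\right)^2=0.76-(0.75)^2\frac{a^2}{\lambda ^2}=0.76-0.75\left(1-\frac{K_y}{J}\right)=0.01+0.75\frac{K_y}{J},$$

which is how the closed-form estimate of the maximum soliton velocity quoted below arises.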
Therefore, the maximum velocity of the soliton is approximately given by $`K_y`$ as $$v_c\approx c\sqrt{0.01+0.75\frac{K_y}{J}}.$$ (6) It is possible to generate similar solitons when the magnetization at the edge of the array ($`\stackrel{}{M}_0`$) is simply rotated and flipped by an external force. In fact, the average velocity of the solitons generated in such a way is plotted in Fig. 3 (b) as a function of the flipping time. When flipping is faster than $`M/(J\nu )`$, the average velocity is fixed without regard to the flipping time. The corresponding wavelength of the soliton is also fixed to $`a`$, and the velocity is approximately given by Eq. (6) again. For slower flipping times ($`>3M/(J\nu )`$), the average velocity gradually decreases with the increase of the flipping time, and we have checked that the soliton can be generated even for a much slower flipping time ($`\sim 100M/(J\nu )`$). Finally, we mention the actual values of parameters for an experimental setup. It is better to align particles closely ($`a\approx d`$) in order to gain a large $`J`$ as discussed before. Since the particle is a single domain, $`M\propto d^3`$, and thus $`J\propto M^2/a^3\propto d^3`$. Therefore, $`J`$ decreases with the decrease of the size of the particle. For a cylinder of Fe with $`d=20`$ nm and a height of $`10`$ nm, however, $`J`$ is as large as $`J\approx 2\times 10^4`$ K. The soliton propagates to the nearest-neighbor site in a time of $`t_0/\sqrt{3K_z/J}`$ with $`t_0=M/(J\nu )`$, which does not depend too much on $`d`$. For an Fe-particle of the above size, $`t_0/\sqrt{3K_z/J}\approx 2\times 10^{-11}`$ sec ($`50`$ GHz) assuming $`K_z/J=5`$. The maximum velocity in the continuum limit is $`c=\sqrt{3K_z/J}(a/t_0)\approx 1\times 10^3`$ m/s for $`a=20`$ nm. When the particles are fabricated to have a shape anisotropy of $`K_y\approx 0.7J`$, the actual velocity is about 70% of $`c`$ according to Eq. (6). 
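The order-of-magnitude estimates above are easy to reproduce. The sketch below assumes a saturation magnetization $`M_s\approx 1.7\times 10^6`$ A/m for Fe and $`g=2`$, which are textbook values not taken from this letter, with the geometry and $`K_z/J=5`$ as stated in the text.

```python
import math

MU0 = 4.0e-7 * math.pi   # vacuum permeability [T m / A]
MU_B = 9.274e-24         # Bohr magneton [J / T]
HBAR = 1.0546e-34        # reduced Planck constant [J s]
K_B = 1.381e-23          # Boltzmann constant [J / K]

def chain_estimates(d=20e-9, height=10e-9, a=20e-9,
                    Ms=1.7e6, g=2.0, Kz_over_J=5.0):
    """Order-of-magnitude numbers for a chain of cylindrical Fe particles:
    returns (J in kelvin, site-to-site hop time in s, max velocity c in m/s)."""
    m = Ms * math.pi * (d / 2.0) ** 2 * height     # particle moment [A m^2]
    J = MU0 * m ** 2 / (4.0 * math.pi * a ** 3)    # dipolar coupling [J]
    nu = g * MU_B / HBAR                           # gyromagnetic constant [1/(T s)]
    t0 = m / (J * nu)
    t_hop = t0 / math.sqrt(3.0 * Kz_over_J)        # time to hop one site
    c = math.sqrt(3.0 * Kz_over_J) * a / t0        # maximum soliton velocity
    return J / K_B, t_hop, c
```

For the default values this returns a coupling of a few times $`10^4`$ K, a hop time of order $`2\times 10^{-11}`$ s (roughly a 50 GHz clock), and $`c\sim 10^3`$ m/s, consistent with the estimates in the text.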
Since the clock frequency shown above is proportional to the magnetization, it might be able to exceed 100 GHz for materials with a large magnetization. To conclude, we have shown for the first time that a soliton excitation mode is present in magnetic nano-particle arrays with dipolar interactions for both configurations of the magnetization (type I and type II) giving the minimum total energy, depending on the shape anisotropy. The soliton mode in the type I configuration exists in a wider range of the shape anisotropy than that in the type II configuration. The solitons with the maximum velocity, which depends on the shape anisotropy, always have a wavelength close to the distance between particles. These solitons are stable independent of the initial conditions, and they can be generated by fast flipping of the magnetization at the edge of the array. The soliton can propagate from a particle to a neighboring particle at a clock frequency even faster than 100 GHz, and may be feasible for application as a signal transmission line. Finally, it should be noted that the dominant mechanism of the damping in the nano-particle arrays and the effects of disorder in the alignment of the particles and their shape are not clarified yet. Further intensive studies will be necessary on these effects. We would like to thank Takeshi Honda, Jun-ichi Fujita, and Toshio Baba for helpful discussions.
# FIRST RESULTS FROM VIPER: DETECTION OF SMALL-SCALE ANISOTROPY AT 40 GHZ ## 1. Introduction Most theories of the early universe predict the presence of a peak in the cosmic microwave background (CMB) anisotropy power spectrum (White, Scott, and Silk (1994)). In cold dark matter models the position of this peak, the first acoustic peak, is $`\mathrm{\ell }\simeq 220\mathrm{\Omega }_{total}^{-\frac{1}{2}}`$, where $`\mathrm{\Omega }_{total}=\mathrm{\Omega }_{matter}+\mathrm{\Omega }_\lambda `$ is determined by the total matter/energy density in the universe (Kamionkowski, Spergel, and Sugiyama (1994)). In many inflation models $`\mathrm{\Omega }_{total}`$ is forced to one; these inflation models make the specific prediction that the power spectrum will peak at $`\mathrm{\ell }\simeq 220`$. CMB anisotropy data have recently been analyzed in comparison to theoretical models (Lineweaver and Barbosa (1998), Ratra, et al. (1999), Hu (1999), Scott (1999)) and there is evidence for a peak in the power spectrum. That conclusion has in the past been based on combined analysis of many different experiments at various angular scales, since most previous individual experiments did not cover the range of $`\mathrm{\ell }`$ best suited to search for the expected peak. The Viper telescope, when used at 40 GHz, has a $`0.26^{\circ }`$ beam (FWHM) which sweeps $`3.6^{\circ }`$ across the sky. Thus Viper has sensitivity from $`\mathrm{\ell }\simeq 100`$ to $`\mathrm{\ell }\simeq 600`$, spanning the range of interest for tests of inflation and other cosmological models. ## 2. Instrument The optics of the Viper telescope consist of four mirrors arranged in an off-axis configuration. The 2.15 m primary mirror, together with the secondary mirror, forms an aplanatic Gregorian. Radiation from a distant object, converging to a focus after reflection from the primary and secondary, is reflected again by a flat, electrically driven chopping mirror, and then directed into the photometer feed horns using a fast hyperbolic condensing mirror. 
The chopping mirror is placed at an exit pupil of the Gregorian, i.e. at an image of the primary formed by the secondary. This means that tilting the chopping mirror is nearly equivalent, optically, to tilting the primary. Because the optical design has a clear aperture, the secondary and condensing mirrors can be built oversized without the blockage that would occur in an on-axis telescope. We have done this to improve optical efficiency and reduce pickup of earth emission. The primary mirror has incoherent extension panels that increase the effective diameter to 3 m, and the entire telescope is housed in a 10 meter diameter conical reflecting baffle to further reduce pickup of earth emission. Details of the instrument design are available at the Viper web page (http://cmbr.phys.cmu.edu/viper). For the observations reported here, the photometer used on Viper was a two-pixel receiver based on HEMT (high electron mobility transistor) amplifiers cooled to 10–20 K. The amplifiers are coupled to the telescope through corrugated feed horns chosen to provide a half-power illumination pattern on the primary of about 1 m diameter. This photometer, called Corona, measures the total power from 38 to 44 GHz, in two sub-bands. The instrument is calibrated using ambient-temperature and liquid-nitrogen-immersed calibrators, temporarily inserted so that they fill the illumination pattern of the feed horns. The efficiency of the optics, $`90\pm 5`$%, is measured by tracing the beam through the telescope using ambient-temperature absorbers, and is checked by measuring the brightness temperature of the Moon. The total calibration uncertainty is 8%. ## 3. Observations The chopping mirror oscillates at a frequency of $`2.35`$ Hz, causing the beam to sweep back and forth across the sky in the co-elevation direction at nearly constant velocity. 
Using observations of Venus (figure 3), we find that the chopper throw is $`3.60\pm 0.01^{\circ }`$, and the beam is $`0.26\pm 0.01^{\circ }`$ wide (FWHM) with a Gaussian shape and no noticeable eccentricity. This is consistent with similar measurements made using Centaurus A and a remote Gunn oscillator. For the observations discussed here, the telescope slews at a constant declination, $`-52.03^{\circ }`$ (epoch 2000.0), between 11 overlapping fields spaced $`0.77^{\circ }`$ apart. It dwells at each field for 13.7 s, and spends 5.0 seconds slewing to the next field. For fields $`i=1\mathrm{\dots }11`$, the movement pattern is: i=1,3,5,7,9,11,10,8,6,4,2. In this manner, a single scan containing $`151`$ s of data is completed in $`206`$ s. A total of 135 hours of data, recorded in June 1998, are included in this analysis. Interspersed with these observations, the Carina nebula was swept for a few minutes every 2 hours as a pointing check. Over the period of this data set, the measurements of the nebula center varied by $`<1`$ arcminute (rms). We take this to be our relative pointing precision. To determine the absolute pointing accuracy of the instrument, we scanned 6 bright ($`\sim `$ 25 mK) objects at declinations $`-63.0\le \delta \le -47.5`$ in the galactic plane. We use a simple 5-parameter pointing model (the parameters are: the offsets of the azimuth and elevation encoders, the distance of the two feed horns from the optical center, and the dewar angle in the focal plane) to align these objects with their radio coordinates, determined using the Parkes-MIT-NRAO (PMN) 4.8 GHz survey (Griffith and Wright (1993), Wright, et al. (1994)). We find a pointing residual of 2.4 arcminutes rms. Using this pointing model, we scan 30 Doradus ($`5h38m43s`$, $`-69^{\circ }06^{\prime }03^{\prime \prime }`$) and find it 3.8 arcminutes from its PMN coordinates. 
This is the largest discrepancy between the radio positions and the positions we find at 40 GHz, so we believe our absolute pointing to be accurate to within 4 arcminutes in the region $`-69.1\le \delta \le -47.5`$. ## 4. Synchronous Offset Subtraction As the chopping mirror moves, sweeping the beam across the sky, the photometer uses different areas of the telescope mirrors. The illumination of the primary mirror hardly changes as the chopping mirror moves, but the illumination pattern on the secondary changes substantially. The emissivity of the secondary might vary across its surface if, for example, emissive snow had accumulated in an uneven pattern. In addition, scattering by snow grains on the optics might vary as the chopper sweeps. These effects produce what are called synchronous offsets, which appear at the detector output to be variations of sky brightness but are in fact instrumental in origin. If these offsets come from the telescope they will produce the same apparent sky structure regardless of where the telescope is pointed, so they can be identified and removed. The removal process is described in the next section. ## 5. Data Reduction If any sample deviates by more than 5 sigma from the average, the sweep containing that sample is deleted. This serves to remove electrical interference that appears in a few places in the data. If more than 2 sweeps are deleted from any scan, the entire scan is deleted. We then determine the synchronous offset. For each scan, we co-add over all the sweeps in each field to produce a single waveform. We then co-add all 11 fields to produce an average chopper-synchronous offset waveform for the scan. This offset waveform, typically less than 4 mK rms in size, is subtracted from the waveform for each field. Finally we look for rapid changes in the synchronous offset. The offset might change if snow is falling on the telescope mirrors, or if clouds are moving through the scan. We co-add the first and last 5 fields in each scan. 
When these two waveforms are subtracted, the standard deviation of the residual indicates the rate of change of the offset over the period of a scan. If this exceeds 2.5 mK rms the scan is deleted. This removes a total of 42 hours of data. ## 6. Modulations By multiplying the waveform for each field by the weighting functions in figure 5 we synthesize 16 orthogonal beam patterns. The weighting functions $`w_{jk}`$ are derived using the orthogonality and normalization conditions: $`{\displaystyle \underset{k=1}{\overset{N}{\sum }}}w_{j_1k}w_{j_2k}=`$ $`\mathrm{\hspace{0.17em}0}`$ $`\mathrm{\hspace{1em}}j_1\ne j_2,\mathrm{\hspace{0.17em}}1\le j\le N`$ (1) $`{\displaystyle \underset{k=1}{\overset{N}{\sum }}}|w_{jk}|=`$ $`\mathrm{\hspace{0.17em}2}`$ (2) Here $`j`$ is an index identifying the function, and $`k`$ indexes $`N`$ positions across the sky. Because fluctuations of atmospheric emission (sky noise) typically have much larger spatial scales than the region of sky we sweep, the dominant contributions of the atmospheric emission to our measured signals appear as constant or gradient waveforms across the sweep. To generate the $`w_{jk}`$, we start with a constant $`w_{1k}`$ and a gradient $`w_{2k}`$. Each successive modulation $`w_{jk}`$ is described by a polynomial of order $`j-1`$. This constraint, along with equations (1) and (2), generates a unique set of functions that are similar to the Legendre polynomials, except that they are defined over a discrete basis. For $`j>2`$ these weighting functions offer excellent sky-noise rejection because they are orthogonal to the constant and gradient waveforms. In this analysis we have divided the sweeps into 4 segments, and each segment is weighted with $`w_{1k}\mathrm{\dots }w_{4k}`$. We then divide each segment into 4 smaller bins, which are similarly weighted. Some of the resulting 16 patterns are translated equivalents of others. There are seven unique patterns. The constant weighting provides no information on sky structure and is discarded. 
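One concrete way to realize the conditions (1)–(2) is Gram–Schmidt orthogonalization of discrete monomials; the construction method below is our assumption (the text only states the conditions and the polynomial-order constraint):

```python
import numpy as np

def weighting_functions(N):
    """Discrete weights w_jk: w_j is a polynomial of order j-1 in the
    position index k, mutually orthogonal, normalized so sum_k |w_jk| = 2."""
    k = np.arange(N, dtype=float)
    ws = []
    for j in range(N):
        w = k ** j                          # start from the monomial of order j
        for prev in ws:                     # Gram-Schmidt: project out earlier w's
            w = w - prev * (prev @ w) / (prev @ prev)
        ws.append(w)
    return [2.0 * w / np.abs(w).sum() for w in ws]

ws = weighting_functions(4)                 # the 4 segment weightings w_1k..w_4k
assert abs(np.dot(ws[0], ws[1])) < 1e-12                      # Eq. (1)
assert all(abs(np.abs(w).sum() - 2.0) < 1e-12 for w in ws)    # Eq. (2)
```

Since $`w_1`$ is constant and $`w_2`$ a gradient, any $`w_j`$ with $`j>2`$ is automatically orthogonal to constant and gradient waveforms, which is the sky-noise rejection property used above.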
We had not expected to be able to detect sky structure with the gradient weighting $`w_{2k}`$, but we find that the South Pole winter atmosphere is so stable that statistically significant structure is in fact detected. We continue the analysis with the six remaining unique weighting functions. The beam patterns for the six weightings are shown in figure 6 and the corresponding window functions are shown in figure 8. ## 7. Band Power Estimates The goal of our observations is to measure $`C_{\mathrm{\ell }}`$, the power spectrum of CMB anisotropy (see Hu, Sugiyama and Silk (1997), Hu (1999)). For each modulation we determine the band-power (Bond et al. (1998)), using the matrix method (Netterfield, et al. (1997)). That is, we generate two matrices, a theory matrix and a noise matrix. The theory matrix accounts for correlations of modulated sky temperature structure that would be expected in a noiseless observation of the sky. This matrix consists of correlations calculated between pairs of modulated observations on separated fields. The theory matrix is calculated without reference to the data from the sky, using lagged window functions. It accounts for the fact that our fields overlap and that pairs of temperature values for nearby locations are expected to be correlated. In contrast to the theory matrix, the noise matrix is calculated from the set of measured temperatures of the sky (Lineweaver et al. (1994)). There are a number of possible sources of correlated noise in these observations. For example, on days with patchy cloud cover we see variations of apparent sky brightness that are actually coming from the cloud pattern passing over the telescope (sky noise). This noise is likely to be correlated since individual clouds move from field to field. These correlations must be accounted for if the uncertainty of the observation is to be correctly estimated. 
The noise matrix accomplishes this since its elements are the cross-correlations of data sets recorded for each field. To determine the range of band-power values that reasonably fit our data we carry out a likelihood test for each modulation, establishing a most likely $`C_{\mathrm{\ell }}`$ value. We also estimate an uncertainty in $`C_{\mathrm{\ell }}`$ by determining a confidence interval containing 67% of the integrated likelihood. These values are plotted in figure 8. In estimating $`C_{\mathrm{\ell }}`$ we used the method of Church, et al. (1998) to compensate for the effect of offset subtraction. The error bars in figure 8 do not include the calibration uncertainty. ## 8. Foregrounds Because these observations were made over a small range of observing frequency, we make use of observations made with other telescopes to constrain possible astronomical foregrounds. The observations reported here lie within the region previously studied with Python (Coble, et al. (1999)), a region which was carefully selected to have a very low level of foreground emission. Using Python the region has been mapped with $`1^{\circ }`$ angular resolution at 40 GHz and 90 GHz. These maps show strongly correlated sky structure. The sky structure detected with Python has a frequency spectrum consistent with the CMB and not consistent with the spectrum of any single known foreground source. Using data from the Infra-Red Astronomical Satellite (IRAS) (Beichman, et al. (1988)) and using PMN data, the Python team has estimated that of the $`\sim `$80 $`\mu `$K rms sky structure detected, less than $`\sim `$1 $`\mu `$K rms is due to dust, synchrotron or free-free emission. Galactic structure has a spatial power spectrum that falls rapidly with $`\mathrm{\ell }`$, and the Viper observations reported here cover a higher range of $`\mathrm{\ell }`$ than the Python results. 
We therefore accept the Python foreground analysis as indicating that extended (Galactic) foregrounds make no significant contribution to the sky structure measured on these fields with Viper. Viper has a beam about 1/16 the area of the Python beam, so foregrounds due to extra-galactic point sources are a more significant concern. To estimate the contribution due to unresolved sources, we examined a total of 160 PMN 4.8 GHz sources that fall close to the region we observed. The brightest source in the vicinity of our observations is PMNJ2256-5158. This source can be expected to contribute substantially to the sky structure we detect because it has flux 700 mJy at 4.8 GHz, our sweep passes directly over it, and its spectrum has been measured to fall only slowly between 2.8 and 4.8 GHz ($`S_\nu \propto \nu ^\beta ;\beta (\mathrm{2.8..4.8})=-0.7`$). We compared our data for the region near this source to that expected from 5158 and found a statistically significant match. If we treat our measured sky brightnesses in this region as due entirely to 5158 (i.e. ignoring any CMB structure), we can estimate a spectral index for this source by comparing our data to the 4.8 GHz PMN flux. We get $`\beta (\mathrm{4.8..40})=-0.9\pm 0.2`$. Because this falls in line with the PMN spectral index, we elect to make a correction for this source, by extrapolating from 4.8 GHz, using spectral index $`-0.9`$, and then removing the contribution to $`C_{\mathrm{\ell }}`$ due to 5158. The correction is small: the greatest effect is in the high-$`\mathrm{\ell }`$ bands and amounts to about 2 $`\mu `$K. All other PMN sources in the region are at least five times weaker and would require spectral index $`>1.0`$ to contribute significantly. We make no correction for any other PMN source. The data presented in figure 6 include the emission of 5158 (i.e. we have applied no correction to these data), but the $`C_{\mathrm{\ell }}`$ spectrum in figure 8 has been corrected. 
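The point-source extrapolation used above is a plain power law; a minimal sketch (the index $`-0.9`$ and the frequencies are from the text, the function name is ours):

```python
def extrapolate(S_ref, nu_ref_ghz, nu_ghz, beta):
    """Power-law flux extrapolation S(nu) = S_ref * (nu/nu_ref)^beta."""
    return S_ref * (nu_ghz / nu_ref_ghz) ** beta

# fraction of the 4.8 GHz flux surviving at the 40 GHz Viper band
ratio = extrapolate(1.0, 4.8, 40.0, -0.9)
print(f"S(40 GHz) / S(4.8 GHz) = {ratio:.3f}")   # ~0.148
```

A falling spectrum like this suppresses the 4.8 GHz flux by almost an order of magnitude at 40 GHz, which is why only the brightest PMN source needs a correction.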
Data taken with IRAS at 100 $`\mu `$m (3000 GHz) show eight sources near the region observed. Extrapolating from 3000 GHz to 40 GHz, using a conservative emissivity law $`ϵ\propto \nu `$, we find that none of the sources can contribute as much as 1 $`\mu `$K. We make no correction for IRAS sources. We cannot rule out the possibility that an undetected population of extra-galactic objects contributes to fine-scale sky structure at 40 GHz. Our study of possible extra-galactic foregrounds assumes that sources detected at lower frequencies have spectra that fall with frequency when compared to the CMB, as radio-bright galaxies do. We also assume that the objects detected at 100 $`\mu `$m with IRAS have spectra that rise with frequency over this frequency range. This is indeed the pattern seen in infrared-bright galaxies. There remains the possibility, however, that a class of previously undetected objects exists with spectra that closely mimic the CMB. As an example, consider a dusty galaxy at redshift $`z=10`$. At that epoch the CMB temperature was 30 K, so dust in such a galaxy might have been heated to just a few kelvin above the CMB temperature. That dust emission, redshifted to millimeter wavelengths, would have a frequency spectrum peaking just above the CMB peak and would be exceedingly difficult to distinguish from CMB structure. While there is currently no evidence that any such objects did exist at redshift 10, it will take fine-beam millimeter-wavelength sky survey data to rule out the contribution of such sources. ## 9. Discussion We interpret the sky structure we have detected as CMB anisotropy. The increase above COBE anisotropy levels, expected in flat inflation models to appear near $`\mathrm{\ell }\simeq 220`$, is evident in the data, as is a lower anisotropy level near the expected position of the first null at $`\mathrm{\ell }\simeq 400`$. Models with a significant cosmological constant appear to fit the data better than those with $`\mathrm{\Lambda }=0`$. 
This work was supported by the National Science Foundation under cooperative agreement OPP-8920223 with the Center for Astrophysical Research in Antarctica and was also supported with internal CMU funds. We wish to thank Ted Griffith and Laszlo Varga for constructing and installing the Viper telescope. Mark Dragovan and Brian Crone did the initial optical design, and Hien Nguyen modified this design. We thank the following for contributions to Viper assembly and testing: Pamela Brann, Nicole Cook, Alex van Gaalen, Michael Vincent, Alex Japour, Mike Masterman and Mark Williams. We also thank the staff of the Amundsen-Scott research station at the South Pole.
# Anomalous diffusion with absorption: Exact time-dependent solutions ## I Introduction The ubiquity of the anomalous diffusion phenomenon in nature has attracted the interest of researchers from both the theoretical and experimental points of view. Anomalous diffusion has been found in the transport of fluids in porous media and surface growth , in NMR relaxometry of liquids in porous glasses , in a two-dimensional fluid flow , to name just a few among the large variety of physical phenomena where it is present. A related aspect is the case of density-dependent diffusivities, as found in some biological systems , in polymers , and in hydrogen diffusion in metals (see also ). Some recent papers have investigated a class of nonlinear generalizations of diffusion and Fokker–Planck equations , as a model of correlated anomalous diffusion. Some of those studies were based on a nonextensive thermodynamical formalism . In particular, in Ref. exact solutions for the nonlinear Fokker–Planck equation subject to a linear force have been found. Here we want to show how these solutions can be extended to the case where an absorption process is also present. We start by recalling the “full” anomalous diffusion equation or “nonlinear” Fokker-Planck equation solved in Ref. $$\frac{\partial }{\partial t}[p(x,t)]^\mu =-\frac{\partial }{\partial x}\{F(x)[p(x,t)]^\mu \}+D\frac{\partial ^2}{\partial x^2}[p(x,t)]^\nu .$$ (1) When $`F(x)=0`$, Eq. (1) can be interpreted as a diffusion equation for $`\mathrm{\Phi }(x,t)=[p(x,t)]^\mu `$, where the diffusivity depends on $`\mathrm{\Phi }`$: $`{\displaystyle \frac{\partial }{\partial t}}\mathrm{\Phi }={\displaystyle \frac{\partial ^2}{\partial x^2}}(D(\mathrm{\Phi })\mathrm{\Phi })`$ (2) $`D(\mathrm{\Phi })=\mathrm{\Phi }^{\frac{\nu }{\mu }-1}.`$ (3) There are several real situations where this power-law dependence of the diffusivity is found. 
It occurs in the flow of gases through porous media ($`\nu /\mu \simeq 2`$ ), the flow of water in unsaturated soils ($`\nu /\mu =5`$ ), and simultaneous diffusion and adsorption in porous samples where the adsorption isotherm is of power-law type ($`\nu /\mu 1`$ for a Freundlich type of adsorption isotherm ). Clearly, in those cases, the diffusivity vanishes (diverges) for $`\mathrm{\Phi }=0`$ when $`\nu /\mu >1`$ ($`\nu /\mu <1`$). It is worth remembering that Eq. (1) corresponds to the so-called “Porous Media Equation” when $`\mu =1`$ . There are a large number of situations where describing anomalous diffusion plus absorption is of relevance, notably those related to the diffusion of some (reactive) substance in the gaseous phase through a porous medium or a membrane, where it can react and be adsorbed at sites inside the pores . Our interest here is to solve the same Eq. (1), but now including terms that describe some kind of absorption process. A general form of such an equation is $$\frac{\partial }{\partial t}[p(x,t)]^\mu =-\frac{\partial }{\partial x}\{F(x)[p(x,t)]^\mu \}+D\frac{\partial ^2}{\partial x^2}[p(x,t)]^\nu -\alpha [p(x,t)]^{\mu ^{\prime }},$$ (4) where $`\alpha `$ plays the role of an absorption rate (and becomes the usual one for $`\mu ^{\prime }=\mu `$). The presence of reaction terms like the one in Eq. (4) (with $`\alpha \ne 0`$ and $`\mu ^{\prime }\ne 0`$) is not at all unexpected considering the large amount of work on the problematics of diffusion–limited reactions. Among the diversity of systems that have been studied we only recall here the so-called one-species coagulation, that is: $`A+A\to 0`$ or $`mA\to lA`$ (with $`m>l`$), which has been associated, among others, with catalytic processes in regular, heterogeneous or disordered systems . The reaction term may account, in the case $`\mu =\mu ^{\prime }`$, for an irreversible first-order reaction of the transported substance, so that the rate of removal is $`\alpha C`$ . 
This extra term also appears when a tracer undergoing radioactive decay is transported through a porous medium, where $`\alpha `$ is the reciprocal of the tracer’s mean lifetime , as well as in heat flow involving heat production at a rate which is a linear function of the temperature . Finally, in solute transport through adsorbent samples the adsorption rate, at small solute concentration, is usually proportional to the concentration in solution, and Eq. (4) applies. In it has been shown that $`p_q(x,t)`$, the solution of Eq. (1) for a linear force $`F(x)`$, has the form $$p_q(x,t)=\frac{\{1-\beta (t)(1-q)[x-x_M(t)]^2\}^{1/(1-q)}}{Z_q(t)},$$ (5) where $`q=1+\mu -\nu `$, $`\beta (t)`$ depends on the width of the distribution, $`x_M(t)`$ is the average of the coordinate and $`Z_q(t)`$ is a normalization factor. All of them depend on the diffusion parameter $`D`$ as well as on $`\mu ,\nu `$ and the force (see for details). For completeness, as well as for reference, we write Eq. (1) without the “drift” $`F(x)`$, as we will refer to the solutions of such an equation in the following sections: $$\frac{\partial }{\partial t}[p(x,t)]^\mu =D\frac{\partial ^2}{\partial x^2}[p(x,t)]^\nu .$$ (6) According to the results from Ref. we can write that the solution $`p_q^{(0)}(x,t)`$ of Eq. (6) has the form given in Eq. (5), with $`x_M(t)`$ $`=`$ $`x_0`$ (7) $`Z_q(t)`$ $`=`$ $`\left({\displaystyle \frac{2\nu }{\mu }}(\nu +\mu )\pi Dt\right)^{\frac{1}{\nu +\mu }}`$ (8) $`\beta (t)`$ $`=`$ $`\pi \left({\displaystyle \frac{2\nu }{\mu }}(\nu +\mu )\pi Dt\right)^{-\frac{2\mu }{\nu +\mu }},`$ (9) where we have used the relation $`\beta (0)Z_q(0)^{2\mu }=\pi `$, which shall be fulfilled if we want to have a $`\delta `$-like initial condition. In the present work we intend to analyze the specific case of a linear drift, namely $`F(x)=k_1-k_2x`$. 
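Before treating the drift, the force-free solution can be checked directly. The sketch below (our own illustration, with an arbitrarily chosen sample point and the sample exponents $`\mu =2`$, $`\nu =1`$, i.e. $`q=2`$) verifies by finite differences that the q-Gaussian (5), with $`x_M=x_0=0`$ and the $`Z_q(t)`$, $`\beta (t)`$ of Eqs. (7)–(9), satisfies Eq. (6):

```python
import math

D, mu, nu = 1.0, 2.0, 1.0
q = 1.0 + mu - nu                      # = 2 for this sample choice

def Z(t):                              # Eq. (8)
    return (2.0*nu/mu * (nu + mu) * math.pi * D * t) ** (1.0/(nu + mu))

def beta(t):                           # Eq. (9)
    return math.pi * (2.0*nu/mu * (nu + mu) * math.pi * D * t) ** (-2.0*mu/(nu + mu))

def p(x, t):                           # Eq. (5) with x_M = 0
    return (1.0 - beta(t)*(1.0 - q)*x**2) ** (1.0/(1.0 - q)) / Z(t)

# finite-difference residual of  d/dt p^mu = D d^2/dx^2 p^nu  at one point
x0, t0, h, dt = 0.3, 1.0, 1e-4, 1e-6
lhs = (p(x0, t0 + dt)**mu - p(x0, t0 - dt)**mu) / (2.0*dt)
rhs = D * (p(x0 + h, t0)**nu - 2.0*p(x0, t0)**nu + p(x0 - h, t0)**nu) / h**2
print(abs(lhs - rhs) / abs(lhs))       # tiny: agreement to truncation-error level
```

The same check passes for other $`(\mu ,\nu )`$ pairs with $`1-\beta (t)(1-q)x^2>0`$ at the chosen point.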
This case, where the potential is harmonic (a typical approximation), is the simplest nontrivial one where analytic solutions can be obtained just by means of changing the variables to suitable ones, namely a simple extension of the well-known Boltzmann transformation . In section II we start by considering the simple case $`\mu =\mu ^{\prime }`$. We analyze first the case of a constant external force ($`k_2=0`$). In this case we can firstly reduce Eq. (4) by proposing a solution of the form $$p(x,t)=e^{-\frac{\alpha }{\mu }t}\widehat{p}(x,t),$$ (10) that yields an equation for $`\widehat{p}(x,t)`$ given by $`{\displaystyle \frac{\partial }{\partial t}}[\widehat{p}(x,t)]^\mu `$ $`=`$ $`-{\displaystyle \frac{\partial }{\partial x}}\{F(x)[\widehat{p}(x,t)]^\mu \}`$ (11) $`+`$ $`De^{\alpha (1-\nu /\mu )t}{\displaystyle \frac{\partial ^2}{\partial x^2}}[\widehat{p}(x,t)]^\nu .`$ (12) Although this reduction to a nonlinear Fokker-Planck equation like that in Eq. (1) appears to be always possible (when $`k_2=0`$), for the case of a linear force (when $`k_2\ne 0`$) such a reduction cannot be made in a simple way, and we will need a more general treatment. The linear force situation, tightly related to the so-called Uhlenbeck-Ornstein process ($`k_1=0;k_2\ne 0`$), is treated in section III, while in section IV we discuss the most general case, that is, $`\mu \ne \mu ^{\prime }`$. In the last section we make some final remarks. ## II Solution for a constant force As indicated above, here we consider the case of a constant force, that is, $`F(x)=k_1`$. 
Equation (11) can be further reduced by making the change of variables $`\xi =x-k_1t`$, which results in $$\frac{\partial }{\partial t}[\widehat{p}(\xi ,t)]^\mu =De^{\alpha (1-\nu /\mu )t}\frac{\partial ^2}{\partial \xi ^2}[\widehat{p}(\xi ,t)]^\nu .$$ (13) Now we change the time variable according to $`\widehat{p}(\xi ,t)`$ $`\to `$ $`\widehat{p}(\xi ,z(t)),\mathrm{\hspace{1em}}{\displaystyle \frac{\partial }{\partial t}}=\dot{z}(t){\displaystyle \frac{\partial }{\partial z}}`$ (14) $`z(t)`$ $`=`$ $`{\displaystyle _0^t}e^{\alpha (1-\nu /\mu )\tau }𝑑\tau ={\displaystyle \frac{1-e^{-\gamma t}}{\gamma }},\mathrm{\hspace{1em}}t\ge 0,`$ (15) with $`\gamma =\alpha (\nu -\mu )/\mu `$, and obtain the following equation, valid for all $`t\ge 0`$: $$\frac{\partial }{\partial z}[\widehat{p}(\xi ,z)]^\mu =D\frac{\partial ^2}{\partial \xi ^2}[\widehat{p}(\xi ,z)]^\nu .$$ (16) Hence, if $`p_q^{(0)}(x,t)`$ is the solution of Eq. (6), the solution with $`F(x)=k_1`$ plus absorption results to be $$p_q(x,t)=e^{-\frac{\alpha }{\mu }t}p_q^{(0)}(x-k_1t,z(t)).$$ (17) It is easy to check that this solution has the right limits for $`\alpha \to 0`$ and for $`\alpha >0`$ and $`\mu =\nu =1`$, i.e., the standard Fokker-Planck equation (plus absorption). The new variable $`z(t)`$ plays the role of an effective time for the dispersion process. It exhibits markedly different behaviors depending on the ratio $`\mu /\nu `$. The case $`\mu /\nu >1`$ $`(\gamma <0)`$ corresponds to superdiffusive transport when there is no absorption . When absorption is present this superdiffusion is enhanced, since the effective time $`z`$ grows exponentially as a function of the real time $`t`$. In the case $`\mu /\nu <1`$ $`(\gamma >0)`$, which leads to subdiffusion for $`\alpha =0`$, the presence of absorption also plays a key role. The effective time $`z(t)`$ converges to an asymptotic value: $`\mathrm{lim}_{t\to \mathrm{\infty }}z(t)=z_{\mathrm{\infty }}=1/\gamma `$. Therefore, the distribution $`p_q^{(0)}(\xi ,z)`$ evolves toward an asymptotic curve $`p_q^{(0)}(\xi ,z_{\mathrm{\infty }})`$. Let us note that $`z_{\mathrm{\infty }}`$ diverges whenever $`\alpha \to 0`$ or $`\mu /\nu \to 1`$. In Fig. 
1 we compare, in the $`\mu /\nu <1`$ case, the time evolution of the distributions for $`\alpha \ne 0`$ and $`\alpha =0`$. We also compare these distributions with the shape of the $`\alpha \ne 0`$ asymptotic curve. For completeness, we include in Fig. 2 the behavior of $`z`$ as a function of $`t`$, also illustrating its dependence on $`\mu /\nu `$. Finally, in the case $`\mu =\nu `$ (normal diffusion $`+`$ absorption for the quantity $`\mathrm{\Phi }(x,t)=\left[p(x,t)\right]^\mu `$), the change of variables becomes a simple time scaling. As already mentioned, Eq. (4) can be viewed as a classical diffusion equation for $`\mathrm{\Phi }(x,t)=\left[p(x,t)\right]^\mu `$, where the diffusivity depends on $`\mathrm{\Phi }(x,t)`$ through $`D(\mathrm{\Phi })=D\mathrm{\Phi }^{\nu /\mu -1}`$. Therefore, it becomes clear that absorption can enhance (reduce) the diffusive transport whenever $`\mu /\nu >1`$ ($`\mu /\nu <1`$), namely: as absorption proceeds $`\mathrm{\Phi }`$ decreases, yielding an increase or not in $`D(\mathrm{\Phi })`$ according to the $`\mu /\nu `$ ratio. This qualitative description is in complete agreement with the previous quantitative results. ## III Solution for a linear force case We now consider the case of a linear force, given by $`F(x)=k_1-k_2x`$ (here and in what follows we assume that $`k_2>0`$), whose general solution without absorption was found in . To start with, we assume that $`\mu =\mu ^{\prime }`$. With the hint of the change of variables made in the previous section we propose the following changes, which define the new variables: $`p(x,t)`$ $`=`$ $`e^{wt}\widehat{p}(\xi ,z(t))`$ (18) $`\xi `$ $`=`$ $`xg(t)+f(t),`$ (19) with $`w,g(t)`$ and $`f(t)`$ to be determined. 
In terms of these new variables the time and space derivatives become $`{\displaystyle \frac{\partial }{\partial t}}`$ $`=`$ $`{\displaystyle \frac{\partial }{\partial t}}+\left(x\dot{g}(t)+\dot{f}(t)\right){\displaystyle \frac{\partial }{\partial \xi }}+\dot{z}(t){\displaystyle \frac{\partial }{\partial z}}`$ (20) $`{\displaystyle \frac{\partial }{\partial x}}`$ $`=`$ $`g(t){\displaystyle \frac{\partial }{\partial \xi }}.`$ (21) Taking into account these results and the form proposed for the solution (given by Eq. (18)), each separate term of Eq. (4) becomes $`{\displaystyle \frac{\partial }{\partial t}}[p(x,t)]^\mu `$ $`=`$ $`w\mu e^{w\mu t}[\widehat{p}(\xi ,z)]^\mu `$ (22) $`+`$ $`e^{w\mu t}(x\dot{g}(t)+\dot{f}(t)){\displaystyle \frac{\partial }{\partial \xi }}[\widehat{p}(\xi ,z)]^\mu `$ (23) $`+`$ $`e^{w\mu t}\dot{z}(t){\displaystyle \frac{\partial }{\partial z}}[\widehat{p}(\xi ,z)]^\mu ,`$ (24) $`-{\displaystyle \frac{\partial }{\partial x}}\left\{(k_1-k_2x)[p(x,t)]^\mu \right\}`$ $`=`$ $`k_2[p(x,t)]^\mu -(k_1-k_2x){\displaystyle \frac{\partial }{\partial x}}[p(x,t)]^\mu `$ (25) $`=`$ $`e^{w\mu t}\left\{k_2[\widehat{p}(\xi ,z)]^\mu -(k_1-k_2x)g(t){\displaystyle \frac{\partial }{\partial \xi }}[\widehat{p}(\xi ,z)]^\mu \right\},`$ (26) $$D\frac{\partial ^2}{\partial x^2}[p(x,t)]^\nu =Dg^2(t)e^{w\nu t}\frac{\partial ^2}{\partial \xi ^2}[\widehat{p}(\xi ,z)]^\nu .$$ (27) In this way, the equation we obtain for $`\widehat{p}(\xi ,z)`$ replacing into Eq. (4) with $`F(x)=k_1-k_2x`$ results (after arranging terms and multiplying by $`e^{-w\mu t}`$) $`\dot{z}(t){\displaystyle \frac{\partial }{\partial z}}[\widehat{p}(\xi ,z)]^\mu `$ $`=`$ $`[-w\mu +k_2-\alpha ][\widehat{p}(\xi ,z)]^\mu `$ (30) $`-[(k_1-k_2x)g(t)+x\dot{g}(t)+\dot{f}(t)]{\displaystyle \frac{\partial }{\partial \xi }}[\widehat{p}(\xi ,z)]^\mu `$ $`+Dg^2(t)e^{-w(\mu -\nu )t}{\displaystyle \frac{\partial ^2}{\partial \xi ^2}}[\widehat{p}(\xi ,z)]^\nu .`$ In order to reduce the last equation to one with a form similar to Eq. (6), we need to cancel the first two terms on the rhs, and reduce the coefficient of the third one to a constant. To operate with the second term, we shall cancel it for all values of $`x`$. 
These conditions yield the following equations $`0`$ $`=`$ $`w\mu +k_2\alpha `$ (31) $`0`$ $`=`$ $`k_2g(t)+\dot{g}(t)`$ (32) $`0`$ $`=`$ $`k_1g(t)+\dot{f}(t)`$ (33) $`1`$ $`=`$ $`g^2(t)e^{w(\mu \nu )t}\dot{z}(t)^1,`$ (34) rendering $`w`$ $`=`$ $`(k_2+\alpha )\mu ^1`$ (35) $`g(t)`$ $`=`$ $`Ge^{k_2t}`$ (36) $`f(t)`$ $`=`$ $`HG{\displaystyle \frac{k_1}{k_2}}e^{k_2t}`$ (37) $`z(t)z(0)`$ $`=`$ $`G^2\left\{1e^{\gamma t}\right\}\gamma ^1,`$ (38) with $`\gamma =\left(k_2(\mu +\nu )+\alpha (\nu \mu )\right)\mu ^1`$. In the general case, the values of the constants shall be chosen to fulfill some particular initial condition. Here, to simplify, we choose $`G=1`$, implying that we do not change the $`x`$ scale at $`t=0`$. Also, in order to make the change of space variables in such a way that it is centered at the potential minimum ($`\xi ^{}=(x\frac{k_1}{k_2})`$) we adopt $`H=0`$. Finally we choose $`z(0)=0`$ to preserve the time origin. With these values we have $`\xi `$ $`=`$ $`\left(x{\displaystyle \frac{k_1}{k_2}}\right)e^{k_2t}`$ (39) $`z(t)`$ $`=`$ $`\left\{1e^{\gamma t}\right\}\gamma ^1,`$ (40) and the solution of Eq. (4) with $`F(x)=k_1k_2x`$ is $$p_q(x,t)=e^{\frac{k_2\alpha }{\mu }t}p_q^{(0)}(\xi ,z).$$ (41) This is the final result for the present case ($`\mu =\mu ^{}`$). It is trivial to check its validity in some limits; the most obvious one is to choose $`\alpha =0`$, recovering the solution of Ref. . With $`\mu =\nu =1`$ and $`\alpha 0`$ we recover the simple case of diffusion in a harmonic potential with absorption. Also, if we consider the case $`\mu =\nu `$ and $`\alpha 0`$, we immediately obtain (remember that $`\mu =\nu `$ gives $`q=1`$!) 
$$p_1(x,t)=e^{\frac{\alpha }{\mu }t}\frac{e^{\frac{k_2}{2\mu D}\frac{[x\frac{k_1}{k_2}x_0e^{k_2t}]^2}{1e^{2k_2t}}}}{[\frac{2\pi \mu D}{k_2}(1e^{k_2t})]^{1/2\mu }}.$$ (42) This result becomes obvious after making the change $`p_1(x,t)^\mu =\varphi (x,t)`$, reducing the problem to an effective one of diffusion in a harmonic potential with absorption for $`\varphi (x,t)`$. Clearly, even though the solution has a Gaussian form (times a decaying exponential term), the width of the Gaussian factor behaves “anomalously” as it differs from the one in the associated Ornstein-Uhlenbeck process . As in the constant-force case, the absorption process markedly influences the time evolution of the dispersion. A straightforward calculation yields the dispersion of the distribution in the present case (i.e. $`\mu =\mu ^{}`$) $$(xx)^2=\frac{1}{\beta [z(t)]}e^{2k_2t}.$$ (43) In the superdiffusive case ($`\mu /\nu >1`$) $`\beta (z(t))`$ becomes asymptotically exponential, namely, $`\beta (z(t))\mathrm{exp}\left(2\frac{k_2(\mu +\nu )+\alpha (\mu \nu )}{\mu +\nu }t\right)`$. Replacing this result in Eq. (43) we obtain the long time behavior of the dispersion $$(xx)^2e^{2\alpha \frac{(\mu \nu )}{(\mu +\nu )}t}.$$ (44) Therefore, the superdiffusive transport enhanced by absorption yields an exponentially increasing dispersion even in an attractive potential. The subdiffusive case ($`\mu /\nu <1`$) presents two different situations. Although in both cases the dispersion decays exponentially, when the absorption rate is small ($`\gamma <0,\alpha <k_2(\nu +\mu )/(\nu \mu )`$), absorption is the rate controlling process for dispersion $$(xx)^2e^{2\alpha \frac{(\nu \mu )}{(\nu +\mu )}t}.$$ (45) In the other case, when the absorption rate is large ($`\gamma >0`$), the attractive force becomes rate limiting for the dispersion process $$(xx)^2e^{2k_2t}.$$ (46) In order to compare the influence of the absorption term on the solutions we have found, in Fig. 3 we depict the solution given in Eq. 
(41), in the case $`\mu /\nu >1`$ (superdiffusion), for $`\alpha =0`$ and $`\alpha 0`$ at different times. In Fig. 4 we compare, in the subdiffusive case, the solution in Eq. (41) when absorption is the rate limiting process for dispersion to the case when the attractive force controls dispersion. In both figures the differences between the characteristics of the different situations are apparent. ## IV General Absorption Term In this section we consider Eq. (4), in the general case $`\mu \mu ^{}`$. The following “simple” kinetic equation $$\frac{}{t}[p(t)]^\mu =\alpha \left[p(t)\right]^\mu ^{},$$ (47) whose solution is $$p(t)=[1(1q^{})\frac{\alpha }{\mu }t]^{\frac{1}{(1q^{})}},$$ (48) where $`q^{}=1\mu ^{}+\mu `$, strongly suggests replacing the exponential in the change of variables in Eqs. (17) and (18) by the q’-exponential function defined by Eq. (48). The ordinary exponential function is recovered when $`q^{}1`$. If we try this possibility, together with the Ansatz in Eq. (7), it immediately leads to the condition $`\mu =\mu ^{}`$. This result becomes apparent when analyzing Eq. (1) in terms of $`\mathrm{\Phi }(x,t)=\left[p(x,t)\right]^\mu `$. The form proposed in Eq. (10) allows one to reduce the general equation, eliminating the absorption term, only when absorption is proportional to $`\mathrm{\Phi }(x,t)`$. However, the general case with $`\mu \mu ^{}`$ will have a solution whose scaling properties can be determined. To find such scaling behavior it is enough to consider the simplified situation without external force, that is $$\frac{}{t}[p(x,t)]^\mu =D\frac{^2}{x^2}[p(x,t)]^\nu \alpha [p(x,t)]^\mu ^{}.$$ (49) We consider the following Ansatz $$p(x,t)=\phi (t)\mathrm{\Theta }(\xi ),$$ (50) with $`\xi =\psi (t)x`$. Replacing this into Eq. 
(49), we obtain the functions $`\phi (t)`$ and $`\psi (t)`$ as $`\phi (t)`$ $`=`$ $`\left[1+(1q^{})(\alpha /\mu )t\right]^{\frac{1}{1q^{}}}`$ (51) $`\psi (t)`$ $`=`$ $`\left[1+(1q^{})(\alpha /\mu )t\right]^{\frac{1}{2}\frac{\mu ^{}\nu }{(1q^{})}}`$ (52) where $`q^{}=1\mu ^{}+\mu `$. Hence, Eq. (49) for $`\mathrm{\Theta }(\xi )`$ reduces to an ordinary differential equation on the variable $`\xi `$ $$\mathrm{\Theta }^\mu +\left[\frac{\mu ^{}\nu }{2\mu }\right]\xi \frac{d}{d\xi }\mathrm{\Theta }^\mu =D\frac{d^2}{d\xi ^2}\mathrm{\Theta }^\nu +\mathrm{\Theta }^\mu ^{}$$ (53) Once again, the previous results can be interpreted in terms of $`\mathrm{\Phi }(x,t)`$. Writing the absorption term as $`(\alpha \mathrm{\Phi }^{\mu ^{}/\mu 1})\mathrm{\Phi }`$, it can be seen that in the case $`\mu ^{}<\mu `$ the absorption process is enhanced as $`\mathrm{\Phi }`$ decreases with time. This leads to a finite time $`t_c=\mu /\alpha (\mu \mu ^{})`$, where $`\mathrm{\Phi }`$ becomes zero. On the other hand, when $`\mu ^{}>\mu `$ we can obtain the asymptotic dispersion, even though the ordinary differential equation for $`\mathrm{\Theta }(\xi )`$, Eq. (53), is too complicated to be solved analytically. 
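The scaling functions $`\phi (t)`$ and $`\psi (t)`$ in Eqs. (51)–(52) are built from the q’-exponential of Eq. (48). A minimal numerical sketch (Python; function name and parameter values are ours, using the standard Tsallis sign convention, which may differ from the extracted equations) checking that this function reduces to the ordinary exponential as $`q^{}1`$ and is cut off at a finite argument otherwise:

```python
import math

def q_exp(x, q):
    """q-exponential: [1 + (1 - q) x]^{1/(1-q)}, recovering e^x as q -> 1.
    The cut-off convention returns 0 where the bracket becomes negative."""
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    if base <= 0.0:
        return 0.0
    return base ** (1.0 / (1.0 - q))

# q' -> 1 recovers the ordinary exponential decay of Eq. (48)
for t in (0.5, 1.0, 2.0):
    assert abs(q_exp(-t, 1.0 + 1e-9) - math.exp(-t)) < 1e-5

# away from q' = 1 the decay can terminate at a finite argument (cut-off)
assert q_exp(-3.0, 0.5) == 0.0   # bracket 1 + 0.5*(-3) = -0.5 < 0
assert q_exp(0.0, 0.7) == 1.0    # normalization at t = 0
```

The cut-off branch mirrors the finite extinction time $`t_c`$ discussed above, at which the rescaled density vanishes identically.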
However, for the moments of the distribution, we obtain $`x^{2n}`$ $`=`$ $`\left[{\displaystyle 𝑑xx^{2n}p(x,t)}\right]/\left[{\displaystyle 𝑑xp(x,t)}\right]`$ (54) $`=`$ $`\left[{\displaystyle 𝑑xx^{2n}\phi (t)\mathrm{\Theta }(\psi (t)x)}\right]/\left[{\displaystyle 𝑑x\phi (t)\mathrm{\Theta }(\psi (t)x)}\right]`$ (55) $`=`$ $`\psi (t)^{2n}\left[{\displaystyle 𝑑\xi \xi ^{2n}\mathrm{\Theta }(\xi )}\right]/\left[{\displaystyle 𝑑\xi \mathrm{\Theta }(\xi )}\right]`$ (56) $`=`$ $`\psi (t)^{2n}A_{2n}`$ (57) $`x^{2n+1}`$ $`=`$ $`0,`$ (58) yielding $$\left(xx\right)^2=x^2=\psi (t)^2A_2t^{\left(\frac{\mu ^{}\nu }{(1q^{})}\right)}=t^{\left(\frac{\mu ^{}\nu }{\mu ^{}\mu }\right)}.$$ (59) Hence, it is clear that, as in the previous case ($`\mu ^{}=\mu `$), $`\mu /\nu <1`$ corresponds to subdiffusion whereas the case $`\mu /\nu >1`$ corresponds to superdiffusive transport. ## V Final Remarks The Fokker–Planck equation that was generalized to a nonextensive scenario has been further generalized to include the possibility of an absorption process. We have shown that the exact solutions of Eq. (4) (a nonlinear Fokker–Planck equation subject to linear forces) found in when $`\alpha =0`$, can be extended to the case $`\alpha 0`$ and $`\mu ^{}=\mu `$. However, in the general case $`\mu ^{}\mu `$, we have only been able to obtain the scaling properties of the solution (whose analytical form we cannot obtain), and the asymptotic behavior of the whole hierarchy of moments. 
Summarizing our results for the nonlinear process of anomalous diffusion plus absorption, as described by Eq. (4), we have found that the solution
- in a constant force field and for $`\mu /\nu >1`$ shows a superdiffusive behaviour that is enhanced when $`\alpha 0`$ ($`\gamma <0`$),
- when $`\gamma >0`$ ($`\mu /\nu <1`$), shows a concentration that reaches an asymptotic constant profile,
- for a linear force and $`\gamma <0`$, shows an exponentially increasing dispersion for superdiffusion,
- also in the linear force case, shows an exponentially decreasing dispersion for subdiffusion ($`\gamma >0`$), where absorption is the rate controlling process for dispersion when the absorption rate is small ($`\alpha <k_2(\nu +\mu )/(\nu \mu )`$), while the attractive force becomes the rate limiting process when absorption is large enough ($`\alpha >k_2(\nu +\mu )/(\nu \mu )`$).
The present results give further support to the argument that a generalized thermostatistics including nonextensivity constitutes an adequate framework within which it is possible to unify both normal and correlated anomalous diffusion , extended now to the case when an absorption process is also present. Also, as indicated in , this kind of work points out the convenience of paying more attention to the thermodynamic aspects of non–Fickian diffusion. Moreover, it has been suggested that even Lévy-like anomalous diffusion (which can be discussed by means of linear Fokker–Planck equations with fractional derivatives) can be included within the present common framework of nonlinear Fokker–Planck equations. This work also opens the possibility of analyzing reaction–diffusion systems on a fractal substratum, by considering nonlinear Fokker–Planck equations with other forms of reaction terms. This problem will be the subject of further work. Acknowledgments: GD acknowledges support from UBA and CONICET. 
HSW acknowledges financial support from CONICET and ANPCyT (Argentinian agencies). CT acknowledges the warm hospitality extended to him during his stay at Instituto Balseiro, and partial support from CNPq, FAPERJ and PRONEX (Brazilian agencies).
# Stochastic modelling of nonlinear dynamical systems ## Contexts Probabilistic concepts are ubiquitous in diverse areas of nonlinear science. Deterministic dynamical systems may give rise to random transport that is an intrinsic feature of their complex behaviour, augmented by a possible choice of random initial or boundary data and/or suitable scaling limits. Nonequilibrium statistical physics directly employs random processes, Gaussian and non-Gaussian, which ultimately implement a nonlinear transport. Apart from a concrete identification of genuine sources of randomness in the dynamics of classical systems or in quantum theory, quite often we need quantitative methods to deal with random-looking phenomena. Basically that refers to situations when origins of randomness are either uncontrollable (allowing merely for probabilistic predictions about the future behaviour of the system) or not definitely settled. The so-called Schrödinger boundary data and stochastic interpolation problem sets a conceptual and formal basis for a surprisingly rich group of topics. Here, stochastic analysis methods are used to deduce the most likely (generally approximate) underlying dynamics from the given (possibly phenomenological) input-output statistics data, pertaining to a certain dynamical process that is bound to take place in a finite time interval. The pertinent motion scenarios range from processes arising in nonequilibrium phenomena, through classical dynamics of complex systems (deterministic chaos in terms of densities), to searches for a stochastic counterpart of quantum phenomena. They involve random processes that go beyond the standard Gaussian basis and enable a consistent usage of the jump-type processes (Lévy and their perturbed versions) associated with the anomalous transport. 
In the diffusion processes context, we have identified Newton's third law for mean velocity fields as being capable of generating anomalous (enhanced) or non-dispersive diffusion-type processes through ”perturbations of noise”. Since the stochastic interpretation of various differential equations is our major target, let us mention typical examples that are amenable to our methodology. Those are: Boltzmann, Navier-Stokes, Euler, Burgers (more or less standard transport of matter), Hamilton-Jacobi, Hamilton-Jacobi-Bellmann, Kardar-Parisi-Zhang (an issue of viscosity solutions and the interface profile growth), Fokker-Planck, Kramers (standard random propagation related to the Brownian motion), both linear and nonlinear Schrödinger equations (probabilistic interpretation of solutions, in Euclidean and non-Euclidean versions of the problem, also with reference to an escape over a barrier and decay of a metastable state). In most of the above cases a natural linearisation of a nonlinear problem is provided by generalized diffusion (heat) equations in their forward and/or backward versions. On the contrary, a suitable coupled pair of time-adjoint nonlinear diffusion equations admits a linearisation in terms of the familiar Schrödinger equation. That involves Markovian diffusion processes with a feedback (enhancement mechanism named ”the Brownian recoil principle”). ## Concepts: Schrödinger’s interpolation problem There are many procedures to deduce an intrinsic dynamics of a physical system from observable data. None of them is free of arbitrary assumptions needed to reduce the level of ambiguity and so none can yield a clean choice of the modelling strategy. As a standard example one may invoke the time series analysis that is a respected tool in the study of complex signals and is routinely utilised for a discrimination between deterministic and random inputs. 
Our objective is to reconstruct a random dynamics that is consistent with the given input-output statistics data. We shall outline an algorithm allowing to reproduce an admissible microscopic motion scenario under an additional assumption that the sought for dynamics actually is a Markovian diffusion process. This reconstruction method is based on solving the so-called Schrödinger boundary-data and interpolation problem. Given two strictly positive (usually on an open space-interval) boundary probability densities $`\rho _0(\stackrel{}{x}),\rho _T(\stackrel{}{x})`$ for a process with the time of duration $`T0`$. One can single out a unique Markovian diffusion process which is specified by solving the Schrödinger boundary data problem: $$m_T(A,B)=_Ad^3x_Bd^3ym_T(\stackrel{}{x},\stackrel{}{y})$$ (1) $$d^3ym_T(\stackrel{}{x},\stackrel{}{y})=\rho _0(\stackrel{}{x}),d^3xm_T(\stackrel{}{x},\stackrel{}{y})=\rho _T(y)$$ (2) where the joint probability distribution has a density $$m_T(\stackrel{}{x},\stackrel{}{y})=u_0(\stackrel{}{x})k(\stackrel{}{x},0,\stackrel{}{y},T)v_T(\stackrel{}{y})$$ (3) and the two unknown functions $`u_0(\stackrel{}{x}),v_T(\stackrel{}{y})`$ come out as (unique) solutions, of the same sign, of the integral identities. To this end, we need to have at our disposal a continuous bounded strictly positive (ways to relax this assumption are known) function $`k(\stackrel{}{x},s,\stackrel{}{y},t),0s<tT`$, which for our purposes (an obvious way to secure the Markov property) is chosen to be represented by familiar Feynman-Kac integral kernels of contractive dynamical semigroup operators: $$k(\stackrel{}{y},s,\stackrel{}{x},t)=exp[_s^tc(\stackrel{}{\omega }(\tau ),\tau )𝑑\tau ]𝑑\mu _{(\stackrel{}{x},t)}^{(\stackrel{}{y},s)}(\omega )$$ (4) In the above, $`d\mu _{(\stackrel{}{x},t)}^{(\stackrel{}{y},s)}(\omega )`$ is the conditional Wiener measure over sample paths of the standard Brownian motion. 
(Another choice of the measure allows to extend the framework to jump-type processes.) The pertinent (interpolating) Markovian process can be ultimately determined by means of positive solutions (it is desirable to have them bounded) of the adjoint pair of generalised heat equations: $$_tu(\stackrel{}{x},t)=\nu \mathrm{}u(\stackrel{}{x},t)c(\stackrel{}{x},t)u(\stackrel{}{x},t)$$ (5) $$_tv(\stackrel{}{x},t)=\nu \mathrm{}v(\stackrel{}{x},t)+c(\stackrel{}{x},t)v(\stackrel{}{x},t).$$ (6) Here, a function $`c(\stackrel{}{x},t)`$ is restricted only by the positivity and continuity demand for the kernel. Solutions, upon suitable normalisation give rise to the Markovian diffusion process with the factorised probability density $`\rho (\stackrel{}{x},t)=u(\stackrel{}{x},t)v(\stackrel{}{x},t)`$ which, while evolving in time, interpolates between the boundary density data $`\rho (\stackrel{}{x},0)`$ and $`\rho (\stackrel{}{x},T)`$. The interpolation admits an Itô realisation with the respective forward and backward drifts defined as follows: $$\stackrel{}{b}(\stackrel{}{x},t)=2\nu \frac{v(\stackrel{}{x},t)}{v(\stackrel{}{x},t)}$$ (7) $$\stackrel{}{b}_{}(\stackrel{}{x},t)=2\nu \frac{u(\stackrel{}{x},t)}{u(\stackrel{}{x},t)}$$ (8) in the prescribed time interval $`[0,T]`$. For the forward interpolation, the familiar Fokker-Planck (second Kolmogorov) equation holds true: $$_t\rho (\stackrel{}{x},t)=\nu \mathrm{}\rho (\stackrel{}{x},t)[\stackrel{}{b}(\stackrel{}{x},t)\rho (\stackrel{}{x},t)]$$ (9) with $`\rho (\stackrel{}{x},0)`$ given, while for the backward interpolation (starting from $`\rho (\stackrel{}{x},T)`$) we have: $$_t\rho (\stackrel{}{x},t)=\nu \mathrm{}\rho (\stackrel{}{x},t)[\stackrel{}{b}_{}(\stackrel{}{x},t)\rho (\stackrel{}{x},t)].$$ (10) The drifts are gradient fields, $`curl\stackrel{}{b}=0`$. 
As a consequence, those that are allowed by any prescribed choice of the function $`c(\stackrel{}{x},t)`$ must fulfill the compatibility condition $$c(\stackrel{}{x},t)=_t\mathrm{\Phi }+\frac{1}{2}(\frac{b^2}{2\nu }+b)$$ (11) which establishes the Girsanov-type connection of the forward drift $`\stackrel{}{b}(\stackrel{}{x},t)=2\nu \mathrm{\Phi }(\stackrel{}{x},t)`$ with the Feynman-Kac potential $`c(\stackrel{}{x},t)`$. In the considered Schrödinger’s interpolation framework, the forward and backward drift fields are connected by the identity $`\stackrel{}{b}_{}=\stackrel{}{b}2\nu ln\rho `$. For Markovian diffusion processes the notion of the backward transition probability density $`p_{}(\stackrel{}{y},s,\stackrel{}{x},t)`$ can be consistently introduced on each finite time interval, say $`0s<tT`$: $$\rho (\stackrel{}{x},t)p_{}(\stackrel{}{y},s,\stackrel{}{x},t)=p(\stackrel{}{y},s,\stackrel{}{x},t)\rho (\stackrel{}{y},s)$$ (12) so that $`\rho (\stackrel{}{y},s)p(\stackrel{}{y},s,\stackrel{}{x},t)d^3y=\rho (\stackrel{}{x},t)`$ and $`\rho (\stackrel{}{y},s)=p_{}(\stackrel{}{y},s,\stackrel{}{x},t)\rho (\stackrel{}{x},t)d^3x`$. The transport (density evolution) equations refer to processes running in opposite directions in a fixed time interval, common to both. The forward one executes an interpolation from the Borel set $`A`$ to $`B`$, while the backward one executes an interpolation from $`B`$ to $`A`$. Let us mention at this point that various partial differential equations associated with Markovian diffusion processes are known not to be invariant under time reversal. That implies dissipation and links them with irreversible physical phenomena. However, the corresponding processes are known to admit a statistical inversion and asking for a statistical past of the process makes sense. 
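Numerically, the boundary-data system (1)–(3) can be solved on a spatial grid by alternately enforcing the two marginal conditions, a Sinkhorn-type fixed-point iteration for $`u_0`$ and $`v_T`$. A minimal 1D sketch (Python; the grid, kernel width and boundary densities are illustrative choices of ours, with the free heat kernel, i.e. $`c=0`$):

```python
import numpy as np

# grid and (unnormalized) heat kernel k(x,0,y,T) for the Wiener case
x = np.linspace(-5.0, 5.0, 200)
nu, T = 0.5, 1.0
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (4.0 * nu * T))

# illustrative boundary densities, stored as probability masses on the grid
rho0 = np.exp(-((x + 1.5) ** 2)); rho0 /= rho0.sum()
rhoT = np.exp(-((x - 1.5) ** 2) / 0.5); rhoT /= rhoT.sum()

# alternately enforce the two marginal conditions of Eq. (2)
u0, vT = np.ones_like(x), np.ones_like(x)
for _ in range(2000):
    vT = rhoT / (K.T @ u0)
    u0 = rho0 / (K @ vT)

m = u0[:, None] * K * vT[None, :]       # joint density m_T(x, y), Eq. (3)
assert np.allclose(m.sum(axis=1), rho0)             # forward marginal
assert np.allclose(m.sum(axis=0), rhoT, atol=1e-6)  # backward marginal
```

The iteration converges for any strictly positive kernel, which is why the positivity demand on the Feynman-Kac kernel matters for the uniqueness of $`u_0`$ and $`v_T`$.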
In particular, the knowledge of the Feynman-Kac kernel implies that the transition probability density of the forward process reads: $$p(\stackrel{}{y},s,\stackrel{}{x},t)=k(\stackrel{}{y},s,\stackrel{}{x},t)\frac{v(\stackrel{}{x},t)}{v(\stackrel{}{y},s)}.$$ (13) while the corresponding transition probability density of the backward process has the form: $$p_{}(\stackrel{}{y},s,\stackrel{}{x},t)=k(\stackrel{}{y},s,\stackrel{}{x},t)\frac{u(\stackrel{}{y},s)}{u(\stackrel{}{x},t)}.$$ (14) Obviously in the time interval $`0s<tT`$ there holds: $$u(\stackrel{}{x},t)=u_0(\stackrel{}{y})k(\stackrel{}{y},s,\stackrel{}{x},t)d^3y$$ (15) $$v(\stackrel{}{y},s)=k(\stackrel{}{y},s,\stackrel{}{x},T)v_T(\stackrel{}{x})d^3x.$$ (16) Consequently, we have fully determined the underlying (Markovian) random motions, forward and backward, respectively. All that accounts for perturbations of (and conditioning upon) the Wiener noise. ## Particularities If we consider a fluid in thermal equilibrium as the noise carrier, a kinetic theory viewpoint amounts to visualizing the constituent molecules that collide not only with each other but also with the tagged (colloidal) particle, so enforcing and maintaining its observed erratic motion. The Smoluchowski approximation takes us away from those kinetic theory intuitions by projecting the phase-space theory of random motions into its configuration space image which is a spatial Markovian diffusion process, whose formal infinitesimal encoding reads: $$d\stackrel{}{X}(t)=\frac{\stackrel{}{F}}{m\beta }dt+\sqrt{2D}d\stackrel{}{W}(t).$$ (17) In the above $`m`$ stands for the mass of a diffusing particle, $`\beta `$ is a friction parameter, D is a diffusion constant and $`\stackrel{}{W}(t)`$ is a normalised Wiener process. 
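Eq. (17) can be integrated directly with the Euler–Maruyama scheme. A minimal sketch (Python; all parameter values are illustrative) for a harmonic force $`F=kx`$ with $`V=kx^2/2`$, for which the stationary density is a Gaussian of variance $`Dm\beta /k`$:

```python
import numpy as np

rng = np.random.default_rng(0)
m_beta, k, D = 1.0, 2.0, 0.5        # m*beta, force constant, diffusion constant
dt, n_steps, n_paths = 1e-3, 10000, 2000

X = np.zeros(n_paths)               # all sample paths start at the origin
for _ in range(n_steps):
    F = -k * X                      # external harmonic force
    X += (F / m_beta) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)

# the cloud relaxes to the stationary Gaussian with variance D * m * beta / k
assert abs(X.mean()) < 0.05
assert abs(X.var() - D * m_beta / k) < 0.05
```

The total simulated time (10 in these units) is much longer than the relaxation time $`m\beta /k`$, so the empirical variance matches the stationary value within sampling error.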
The Smoluchowski forward drift can be traced back to a presumed selective action of the external force $`\stackrel{}{F}=\stackrel{}{}V`$ on the Brownian particle that has a negligible effect on the thermal bath but in view of frictional resistance imparts to a particle the mean velocity $`\stackrel{}{F}/m\beta `$ on the $`\beta ^1`$ time scale. The noise carrier (fluid in the present considerations) statistically remains in the state of rest, with no intrinsic mean flows, hence unperturbed in the mean (all other cases we associate with the term ”perturbations of noise”). At first glance, a formal replacement of the Smoluchowski forward drift in Eq. (17) by any space-time dependent driving velocity field would suggest a legitimate procedure to implement a net transport that would combine dispersion due to a diffusion process with deterministic mean flows due to external agencies (consider Euler, Navier-Stokes or Burgers velocity fields as reference examples). However, the situation is not that simple. It is well known that a spatial diffusion (Smoluchowski) approximation of the phase-space process allows one to reduce the number of independent local conservation laws to two only. Therefore the Fokker-Planck equation can always be supplemented by another (independent) partial differential equation to form a closed system. If we assign a probability density $`\rho _0(\stackrel{}{x})`$ with which the initial data $`\stackrel{}{x}_0=\stackrel{}{X}(0)`$ are distributed, then the emergent Fick law would reveal a statistical tendency of particles to flow away from higher probability residence areas. 
This feature is encoded in the corresponding Fokker-Planck equation: $`_t\rho =\stackrel{}{}(\stackrel{}{v}\rho )=\stackrel{}{}[(\frac{\stackrel{}{F}}{m\beta }D\frac{\stackrel{}{}\rho }{\rho })\rho ]`$ where a diffusion current velocity is $`\stackrel{}{v}(\stackrel{}{x},t)=\stackrel{}{b}(\stackrel{}{x},t)D\frac{\stackrel{}{}\rho (\stackrel{}{x},t)}{\rho (\stackrel{}{x},t)}`$ while the forward drift reads $`\stackrel{}{b}(\stackrel{}{x},t)=\frac{\stackrel{}{F}}{m\beta }`$. Clearly, the local diffusion current (a local flow that might be experimentally observed for a cloud of suspended particles in a liquid) $`\stackrel{}{j}=\stackrel{}{v}\rho `$ gives rise to a non-negligible matter transport on the ensemble average, even if no intrinsic mean flows are in existence in the random medium. We may surely consider a formal replacement of the previous $`\frac{\stackrel{}{F}}{m\beta }`$ by a local velocity field $`\stackrel{}{v}(\stackrel{}{x},t)`$. However, irrespectively of whether we utilize $`\frac{\stackrel{}{F}}{m\beta }`$ or $`\stackrel{}{v}(\stackrel{}{x},t)`$ in the formalism, the velocity field must obey the natural (local) momentum conservation law which directly originates from the rules of the Itô calculus for Markovian diffusion processes and from the first moment equation in the diffusion approximation (!) of the Kramers theory: $$_t\stackrel{}{v}+(\stackrel{}{v}\stackrel{}{})\stackrel{}{v}=\stackrel{}{}(\mathrm{\Omega }Q).$$ (18) An effective potential function $`\mathrm{\Omega }(\stackrel{}{x})`$ can be expressed in terms of the Smoluchowski forward drift $`\stackrel{}{b}(\stackrel{}{x})=\frac{\stackrel{}{F}(\stackrel{}{x})}{m\beta }`$ as follows: $`\mathrm{\Omega }=\frac{\stackrel{}{F}^2}{2m^2\beta ^2}+\frac{D}{m\beta }\stackrel{}{}\stackrel{}{F}`$. That is to be compared with Eq. (11) which governs the more general space-time dependent situation. 
Moreover, in the would-be Euler equation (18), instead of the standard pressure term, there appears a contribution from a probability density $`\rho `$-dependent potential $`Q(\stackrel{}{x},t)`$. It is given in terms of the so-called osmotic velocity field $`\stackrel{}{u}(\stackrel{}{x},t)=D\stackrel{}{}ln\rho (\stackrel{}{x},t)`$: $`Q(\stackrel{}{x},t)=\frac{1}{2}\stackrel{}{u}^2+D\stackrel{}{}\stackrel{}{u}`$ and is generic to a local momentum conservation law respected by isothermal Markovian diffusion processes. To analyze general perturbations of the medium and then the resulting intrinsic (mean) flows, a function $`\stackrel{}{b}(\stackrel{}{X}(t),t)`$ must replace the Smoluchowski drift. Under suitable restrictions, we can relate probability measures corresponding to different (in terms of forward drifts!) Fokker-Planck equations and processes by means of the Cameron-Martin-Girsanov theory of measure transformations. For suitable forward drifts, which are gradient fields, this yields the most general form of an auxiliary potential (cf. Eq. (11)) $`\mathrm{\Omega }(\stackrel{}{x},t)=2D[_t\varphi +\frac{1}{2}(\frac{\stackrel{}{b}^2}{2D}+\stackrel{}{}\stackrel{}{b})].`$ We denote $`\stackrel{}{b}(\stackrel{}{x},t)=2D\stackrel{}{}\varphi (\stackrel{}{x},t)`$. Mathematical features of the formalism appear to depend crucially on the properties (like continuity, local and global boundedness, Rellich class) of the auxiliary potential $`\mathrm{\Omega }`$. Let us consider a continuous function $`\mathrm{\Omega }(\stackrel{}{x},t)`$ that is bounded from below (local boundedness from above is useful as well). 
Then, by means of the gradient field ansatz for the diffusion current velocity ($`\stackrel{}{v}=\stackrel{}{}S_t\rho =\stackrel{}{}[(\stackrel{}{}S)\rho ]`$) we can transform the local momentum conservation law of a Markovian diffusion process to a universal Hamilton-Jacobi form: $$\mathrm{\Omega }=_tS+\frac{1}{2}|\stackrel{}{}S|^2+Q$$ (19) where $`Q(\stackrel{}{x},t)`$ was defined before. By applying the gradient operation we recover the conservation law (18). In the above, the contribution due to $`Q`$ is a direct consequence of an initial probability measure choice for the diffusion process, while $`\mathrm{\Omega }`$ does account for an appropriate forward drift of the process via Eq. (19). Thus, in the context of Markovian diffusion processes, we can consider a closed system of partial differential equations which comprises the continuity equation $`_t\rho =\stackrel{}{}(\stackrel{}{v}\rho )`$ and the Hamilton-Jacobi equation, plus suitable initial (and/or boundary) data. The underlying isothermal diffusion process is specified uniquely. Those two partial differential equations set ultimate limitations in the old-fashioned problem of ”how much nonlinear” and ”how much space-time dependent” the driving velocity field can be to yield a consistent stochastic diffusion process (or the Langevin-type dynamics)
# Spectral Aspects of the Evolution of Gamma-Ray Bursts ## 1. Introduction The discovery of gamma-ray bursts (GRBs) at the end of the 1960’s (Klebesadel, Strong, & Olson 1973) revealed a phenomenon which has been unwilling to allow us to gain a clear insight into its origin. Until the recent attention given to the GRB afterglow emission (see, e.g., Metzger et al. 1997) the prompt non-thermal flash of gamma-rays has been the main source of information. The data collected by the Compton Gamma-Ray Observatory, CGRO (Fishman et al. 1989), and the BeppoSAX (Boella et al. 1997) satellites have provided an unprecedented wealth of information. This has led, after an initial phase of confusion, to a more detailed knowledge, for instance, of the spectral shape and its evolution in time. The observed gamma-ray light curves exhibit a large diversity in duration, strength, and morphology. Some are very complex, having stochastic spiky structures, while others are smooth and have only a few, well-shaped pulses. The duration of the gamma-ray emission ranges from as short as a few milliseconds up to several hundreds of seconds. The spectra, even though not as diverse in their characteristics as the light curves, have not given any clear signature of the underlying physical emission process(es). The spectra have a non-thermal appearance and can evolve considerably during the burst. Undoubtedly, the key to the understanding of the phenomenon lies in this spectral behavior. Is the large diversity mainly due to varying physical properties of the source or is it due to other effects such as different appearances to the observer? Are there any characteristics that can correctly describe the GRB temporal behavior, and do these have typical values for all GRBs? In other words, how broad are their true, intrinsic distributions? Much study has been devoted to the search for empirical relations and correlations between observable quantities, and to systematizing the diverse appearances of the data. 
Correlations for both large ensembles of GRBs and within individual bursts have been studied. This is a natural step in astronomy and can be compared to the early advances in our understanding of stellar evolution using the optical color-color diagrams, and, for instance, using the Hertzsprung-Russell diagram of globular clusters to determine their ages. The behavior of low-mass X-ray binaries is studied in X-ray color-color diagrams, leading to the classification of the sources into two populations based on their behavior in the diagrams: Z sources and atoll sources. In this review, the study of the temporal-spectral behavior within individual bursts will be addressed. It will mainly concern the efforts to understand the continuum spectral shape and its evolution in time. These results should trigger and guide theoretical work and lead to physical models capable of reproducing the observed features. In §2, the main features of the light curves and spectra as observed to date are summarized, followed by a description of the spectral evolution in §3. This is succeeded by a discussion on how the spectra and the intensity evolve relative to each other in §4. §5 is devoted to a discussion on different aspects of the observations which could affect the results. Finally, an overview discussion on the constraints put on the physical models describing the data is given in §6. ## 2. Burst Properties In the following, the photon flux will be denoted by $`N(t)`$ (photons cm<sup>-2</sup> s<sup>-1</sup>) and its spectrum by $`N_\mathrm{E}(E,t)`$ (photons cm<sup>-2</sup> s<sup>-1</sup> keV<sup>-1</sup>) and correspondingly, the energy flux by $`F(t)`$ (keV cm<sup>-2</sup> s<sup>-1</sup>). Intensity will denote a general flux entity and not necessarily the intensity-entity usually used in astronomy. In GRB astronomy the sources are not resolved, thus making the latter less meaningful. 
The terms light curve and time history will be used synonymously with the time evolution of the intensity. The hardness of the spectrum will refer to the overall spectral property of the burst, mainly the peak energy of the power output. A power-law spectrum is steep if it is dominated by soft photons and flatter as the fraction of hard photons increases. ### 2.1. Gamma-Ray Burst Light Curves A remarkable feature of the observed properties of GRBs is the large diversity of the light curves, both morphologically and in strength and duration<sup>1</sup><sup>1</sup>1This has been summarized as: ‘Have you seen one, you have seen one’. Several examples of light curves, observed by the Burst and Transient Source Experiment (BATSE) on the CGRO will be presented in this paper (Figure 1, Figure 5, and Figure 9a; see, e.g., the current BATSE catalog<sup>2</sup><sup>2</sup>2The BATSE GRB catalog is available online at: http://www.batse.msfc.nasa.gov/ data/grb/catalog/.). Different approaches to the understanding of the light curve morphology have been pursued. It is generally believed that the fundamental constituent of a GRB light curve is a time structure having a sharp rise and a slower decay, with the decay rate decreasing smoothly (e.g., Fishman et al. 1994; Norris et al. 1996; Stern & Svensson 1996). This shape is denoted by the acronym FRED, fast-rise and exponential-decay, even though the decay is not necessarily exponential. A burst can consist of only a few such pulses, clearly separable, producing a simple and smooth light curve, as in the left-hand panels in Figure 1 and Figure 9a. More complex light curves, such as in Figure 5 are superpositions of many such fundamental pulses. Mixtures of the two types are also common. Such interpretations have been shown to be able to explain and partly reproduce many observed light curve morphologies. 
To reveal the underlying process of GRBs, the fundamental pulses are of extra interest as they will show the clearest signature of the physics. To model pulses, often a stretched exponential is used: $`N(t)\propto \mathrm{exp}[-(|t-t_{\mathrm{max}}|/\sigma _{\mathrm{r},\mathrm{d}})^\nu ]`$, where $`t_{\mathrm{max}}`$ is the time of the pulse’s maximum intensity, $`\sigma _{\mathrm{r},\mathrm{d}}`$ are the time constants for the rise and the decay, and $`\nu `$ is the peakedness parameter. Such a function gives the flexibility to describe most pulses, and to give characteristics of the pulses for statistical analysis. Norris et al. (1996) studied a sample of bursts observed by the BATSE Large Area Detectors (LADs) and stored in the four energy channel data type<sup>3</sup><sup>3</sup>3For the different data types from BATSE, see, e.g., Fishman et al. (1989). They modeled the light curves in detector counts in the four channels separately and found that the decay generally lay between a pure exponential ($`\nu =1`$) and a Gaussian ($`\nu =2`$). Lee et al. (1998) studied approximately 2500 individual channel pulse structures in the high time resolution BATSE TTS data, using this general stretched exponential function, and confirmed the general behavior that pulses tend to have shorter rise times than decay times. Norris et al. (1996) also used the stretched exponential to create an algorithm to separate overlapping pulses based on $`\chi ^2`$ fitting. Another pulse-identification algorithm was introduced by Li & Fenimore (1996) and similarly by Pendleton et al. (1997), who identified pulses based on the depth of the minima between peaks, which has the advantage that it does not depend on any particular peak shape. The large amplitude variations observed within a burst are demonstrated by Stern (1999), who shows a few examples of GRB pulses with near-exponential tails that are traceable over almost 4 orders of magnitude in intensity.
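The stretched-exponential pulse profile described above can be written down directly; a minimal sketch (the function name and all parameter values are illustrative, not taken from the cited fits):

```python
import math

def pulse_profile(t, t_max, sigma_r, sigma_d, nu, n_peak=1.0):
    """Stretched-exponential pulse: N(t) = n_peak * exp(-(|t - t_max|/sigma)**nu),
    with sigma = sigma_r before the peak and sigma_d after it."""
    sigma = sigma_r if t < t_max else sigma_d
    return n_peak * math.exp(-((abs(t - t_max) / sigma) ** nu))

# nu = 1 gives a pure exponential decay; nu = 2 gives a Gaussian.
# Pulses typically rise faster than they decay (sigma_r < sigma_d):
flux = [pulse_profile(t, t_max=5.0, sigma_r=1.0, sigma_d=3.0, nu=1.0)
        for t in range(0, 20)]
```

With separate rise and decay time constants, the profile is asymmetric about $`t_{\mathrm{max}}`$, reproducing the FRED-like shapes discussed above.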
Schaefer & Dyson (1996) studied the decay phase of 10 smooth FRED pulses in the four separate energy channels and found that most of them are not exponential, although a few cases come close. A power-law fit passes most of their statistical tests. Most studies use the LAD 4 spectral channel data, with the channels covering approximately 20-50 keV, 50-100 keV, 100-300 keV, and 300 keV - 2 MeV. Often the studies make use of individual channels or the sum of channels 2 and 3 (50-300 keV) in count space, without using detailed knowledge of the spectral behavior. Frequently, the count rates are normalized in the four channels. It is, however, of interest to study the intensity curve in photon flux over the maximal available energy bandwidth instead of in detector counts, to obtain physical values for the fitted parameters. This is done by correctly considering the effects of the detector response. The spectra must then be deconvolved for every time bin, for instance, using direct inversion techniques, which can be model independent (Loredo & Epstein 1989). Alternatively, forward-folding techniques can be used, fitting an empirical spectral model to the data by minimizing the $`\chi ^2`$ between the model count spectrum and the observed count spectrum. Ryde & Svensson (1999b) used the LAD 128 spectral channel data, between 25-1900 keV, to study GRB pulses in photon flux. They identify a subgroup of pulses for which the early intensity decay follows a power-law, $`N(t)\propto (1+t/\tau )^{-1}`$, where the time coordinate, $`t`$, is taken from the maximum of the light curve and $`\tau `$ is the time constant. This behavior eventually changes into a faster decay, such as an exponential. A detailed discussion on this issue is given in $`\mathrm{\S }6.4`$. The light curve has also been described as having an overall envelope with a FRED-like shape (see, e.g., Fenimore et al. 1996).
The actual light curve can either follow the envelope closely or have more or less strong deviations, giving rise to a stochastic, spiky appearance, see Figure 1. In that model, even a simple pulse is thus not caused by a single event but is the result of several. There has also been a proposal that the light curve can be decomposed into two uncorrelated radiation components, which dominate different parts of the spectrum and behave differently (Chernenko & Mitrofanov 1995). The corresponding spectral behavior has also been studied by Chernenko et al. (1998), who model the spectrum with 9 parameters describing the two emission components. The authors studied approximately 10 strong BATSE bursts, which could all be explained by the model. In several works, the averaged behavior over the entire burst has been studied. By aligning the light curves (summed over all four LAD channels) to the time of the peak of the event, the averaged, peak-aligned profile is obtained. This has been done by, for instance, Mitrofanov et al. (1994, 1996), Norris et al. (1994), Stern (1996) and Stern et al. (1997, 1999). Stern (1996) showed that the averaged, peak-aligned profile has an overall stretched exponential form with the index $`\nu =1/3`$. Another alternative average is the averaged, duration-aligned profile, which is obtained by rescaling the time structures to a standard duration. The decay is then found to fit a linear function in time, but not exponentials or power-laws (Fenimore 1999). A general remark on the time histories is that there are no typical starting points of the emission that, for instance, could be associated with the primary event. ### 2.2. Gamma-Ray Burst Spectra #### Pre-BATSE Results. Important results concerning the GRB spectrum were obtained with a number of experiments prior to the CGRO, such as IMP 6 (Cline et al. 1973) and IMP 7 (Cline & Desai 1975), SIGNE/Venera (Chambon et al. 1979), KONUS/Venera (Mazets et al.
1982), the GRS/Solar Maximum Mission (Matz et al. 1985), and Ginga (Yoshida et al. 1989). The results of these experiments indicated that the spectral continuum, in the keV to MeV range, consisted of three major components: (i) a low-energy component resembling the thermal bremsstrahlung spectrum (with a Gaunt factor of 1) of an optically-thin hot plasma (e.g., Rybicki & Lightman 1979): $`N_\mathrm{E}(E)\propto E^{-1}\mathrm{exp}(-E/E_0)`$, where $`E`$ is the spectral energy, and $`E_0`$ is the $`e`$-folding energy. (ii) a steep high-energy power-law with no obvious cut-off, $`N_\mathrm{E}(E)\propto E^{-2.5}`$. Matz et al. (1985) showed that 60 $`\%`$ of their sample had emission above 1 MeV. (iii) an X-ray component ($`<10`$ keV) which contains 1-2 $`\%`$ of the total power. There were also several reports of emission and absorption features. Mazets et al. (1981) presented observations of features from KONUS/Venera and Hueter et al. (1987) from HEAO-1. Additional reports on spectral features were given by Murakami et al. (1988) from the Ginga observations. #### BATSE Results. The BATSE detectors (20-1900 keV) have provided a large database, which has refined these results. Palmer et al. (1994) studied 192 bursts with the spectroscopy detectors (SDs). The SDs have the ability to see spectral lines with the characteristics reported by Ginga. No convincing line features were found in the BATSE data, ruling out the previous results. This result is confirmed by the high energy resolution TGRS (transient gamma-ray spectrometer) on the WIND spacecraft (Palmer et al. 1996; Seifert et al. 1997). It now seems likely that if line features exist they are very rare. Briggs et al. (1998) and Golenetskii et al. (1998) report on a few possible candidates from BATSE and KONUS/WIND, respectively. Pendleton et al. (1994) found that broad cusps in the energy range 40-100 keV could be explained by a superposition of hard and soft spectral sub-components.
This was also suggested within the picture presented in Ryde & Svensson (1999a). A detailed discussion on the methodology of identifying line features in gamma-ray spectra and a review of the field is given in Briggs (1999). Schaefer et al. (1992) and Band et al. (1993) studied the continuum spectral shape of the BATSE bursts. The latter study comprised a sample of 54 BATSE bursts and successfully fitted most spectra with an empirical model similar to the optically-thin bremsstrahlung spectrum previously used; a low-energy power-law exponentially joined with a high-energy power-law. The success of this model, fitting both the time-integrated spectra and the time-resolved spectra, has led to its widespread use, and it is often referred to as the ‘GRB-function’ or the ‘Band-model’: $$N_\mathrm{E}(E)=\{\begin{array}{cc}A\left(\frac{E}{100\mathrm{keV}}\right)^\alpha e^{-E/E_0}\hfill & \text{if }E\le (\alpha -\beta )E_0;\hfill \\ A^{\prime }\left(\frac{E}{100\mathrm{keV}}\right)^\beta \hfill & \text{if }E>(\alpha -\beta )E_0\hfill \end{array}$$ (1) where $`E_0`$ is the $`e`$-folding energy (in units of keV), and $$A^{\prime }=A\left[\frac{(\alpha -\beta )E_0}{100\mathrm{keV}}\right]^{\alpha -\beta }e^{-(\alpha -\beta )},$$ (2) with $`N_\mathrm{E}(E)`$ being a continuous and continuously differentiable function. Often the energy at which the power is maximal, $`E_{\mathrm{pk}}=(2+\alpha )E_0`$ (the peak in the logarithmic $`E^2N_\mathrm{E}`$-spectrum), is used as the measure of spectral hardness, rather than $`E_0`$. A power peak exists only in the case of $`\beta <-2`$. The Band et al. (1993) study did not identify any universal values for the GRB-function parameters, which were found to have a large diversity. The peak energy lies mainly in the interval 100 keV to 1 MeV, clustering around 100-200 keV. The study also confirmed the existence of a hard tail. In a recent study by Preece et al.
(1999), the mean of the distribution of the peak energies was determined to be $`E_{\mathrm{pk}}=250_{-143}^{+433}`$ keV. The GRB-function index, $`\alpha `$, shows a broad peak between $`-1.25`$ and $`-0.25`$, instead of the universal value earlier claimed of $`-1`$ (Band et al. 1993), while the high-energy power-law index, $`\beta `$, clusters fairly narrowly around $`-2.12\pm 0.30`$, even though there exist super-soft bursts with $`\beta <-3`$ (Preece et al. 1998a). Schaefer & Walker (1999) noted, for instance, that the spectrum of GRB 920229 has an extremely sharp high-energy cut-off. Some studies have tried to identify statistically averaged shapes by various methods of averaging the spectra. Fenimore (1999) studied the average spectrum from the duration-aligned light curves of a sample of GRBs and found that $`\alpha =-1.03`$ and $`\beta =-3.31`$. The peak energy lay at $`390`$ keV. #### Outside the BATSE Spectral Range. How far do the power-laws persist towards lower and higher energies? Occasionally the GRB lies in the field of view of the other CGRO instruments (COMPTEL, EGRET, OSSE) and a broader spectrum can be studied. An example is given in Figure 2, where a composite spectrum of GRB 910503, using the full capability of the CGRO’s four experiments, is shown. A similar study was done by Schaefer et al. (1998), who studied GRB 910503, GRB 910601 and GRB 910814 from approximately 20 keV to a few hundred MeV. Such broad band studies are, more or less, consistent with a continuation of the BATSE spectrum. In a few cases, very hard radiation has been observed to be emitted late in, or even after, the main (lower energy) part of the burst. For instance GRB 940217, a burst lasting for 160 s as observed between 15 keV and 2 GeV, emitted GeV photons up to 1.5 hours after the trigger, with one photon having an energy of 18 GeV (a significant detection; Hurley et al. 1994). Barat et al.
(1998) present the spectra between 0.1 and 10 MeV of the 20 most intense bursts observed by PHEBUS/Granat and report on the existence of a sharp break at typical energies of either around 1 MeV or around 2 MeV. They fit a 6 parameter function allowing for a second, high-energy sharp break. The fit to the spectrum of GRB 900520a is shown in the right-hand panel in Figure 3. Strohmayer et al. (1998) studied a number of GRBs observed by Ginga, covering an energy band below the BATSE range (2-400 keV), and found a substantial number of bursts with breaks below 10 keV, i.e., below the observable range of BATSE. The authors propose that the observations are due to the existence of two breaks in the GRB spectrum, one in the BATSE range and one below this, close to 5 keV. This is also consistent with their finding that the X-ray spectra are often hard, with positive $`\alpha `$s in 40 $`\%`$ of their studied sample. The observed ratio of the energy emitted as X-rays (2-10 keV) relative to the gamma-rays (50-300 keV) is often a few $`\%`$, but in some cases it can be substantially larger, giving an average of 24 $`\%`$, with a logarithmic average of 7 $`\%`$, for the 22 bursts studied by Strohmayer et al. (1998). #### Soft Excess and Spectral Subclasses. Several early studies occasionally observed significant emission in the X-ray range of GRBs (2-20 keV). This was done, for instance, by XMON/P78-1 (Laros et al. 1984) and by WATCH/GRANAT (Castro-Tirado 1994). Preece et al. (1996c) studied the time-averaged spectra from 86 BATSE bursts in search of a soft excess above the extrapolated low-energy power-law. They used the 256 channel SD data and combined them with the lowest energy SD discriminator channel, leading to a useful spectral coverage from approximately 5 keV to 2 MeV, after making a post-launch calibration of the 5-20 keV region. They searched for soft emission below 20 keV and found this in 14 $`\%`$ of the cases.
The enhancement was 1.2-5.8 times the standard power-law model flux, exceeding 5 $`\sigma `$ in significance. Not a single case had a low-energy flux deficit. In their study they also identified 4 cases with a peak energy below 45 keV and with $`\beta \sim -2`$. For the cases which had a peak energy larger than 100 keV, the averaged low-energy power-law had $`\alpha \sim -1.0`$, and for the cases with a peak energy below 100 keV the averaged value was $`\alpha \sim -0.3`$. Pendleton et al. (1994) searched for spectral subclasses. They studied 206 bursts with the LAD 4 channel data and used a direct spectral inversion technique to obtain the photon spectrum. They found that the distribution of spectral states is broad and that there are no clear sub-classes, albeit a weak clustering. The authors also found that the peak fluence (the fluence in the 64 ms interval with the highest count rate in channels 2 and 3) is significantly harder than the total fluence (from all counts in the burst interval) in the range 20-100 keV, which indicates that the time-resolved spectra are flatter than the time-integrated spectrum. This is also observed to be the case, for instance, by Ford et al. (1995), Liang & Kargatis (1996), and Crider et al. (1997, 1998a). The time-integrated spectra differ from the instantaneous spectra, and it is of great importance that this is considered when physical models are tested. This is especially a problem in cases when the spectra evolve markedly, which makes the two spectra differ notably (see §3 and, e.g., Crider et al. 1997; Ryde & Svensson 1999a). Continuing the pursuit of spectral sub-classes, Pendleton et al. (1997) identified two distinct types of spectra. They studied a sample of 882 bursts with the LAD 4 channel data. The ‘not-high-energy-bursts’ (NHEB) have a marked lack of fluence above 300 keV.
The authors even study individual pulses within the bursts and find that ‘high-energy-bursts’ (HEB) can consist of the sum of high-energy pulses and not-high-energy pulses, while the NHEB can consist only of not-high-energy pulses. Bonnell & Norris (1999a, 1999b) argue that the NHE class of GRB is probably due to a brightness bias in the observations. ## 3. Spectral Evolution ### 3.1. Time Evolution of the Spectral Hardness (Peak Energy) It was discovered early on that the time-resolved (instantaneous) spectra in general soften with time (Mazets et al. 1982; Teegarden et al. 1982). In the survey of spectral evolution of BATSE bursts, Ford et al. (1995) found a number of common trends. They studied 37 bursts using the SD data and concluded that the peak energy rises with, or slightly precedes, intensity increases and softens for the remainder of the pulse. They also found that successive pulses are usually softer, as well as that there is a general softening in time outside of the main pulses over the entire burst. Furthermore, bursts for which the bulk of the flux comes well after the trigger also tend to be softer. There were also a few bursts which did not show these behaviors. Figure 4 shows the result of the analysis of the strong burst GRB 921207 (BATSE trigger #2083) in Ford et al. (1995). For the BATSE observations, the peak energy varies, in general, by a factor of 5 over the burst, with some cases reaching up to a factor of 15. Complex bursts have only a weak and slow time evolution. The softening over the burst can, in some cases, be spectacular and have a complex behavior. Occasionally there is a correlation between the spectral hardness and the intensity. In most pulses, the hardness decays monotonically, creating the hard-to-soft pulses, while in some the hardness tracks the intensity, creating the tracking pulses, cf. §4. Besides the general trends, there are examples of bursts with very diverse behaviors.
For instance, GRB 980519 (#6764), which was observed by BeppoSAX and BATSE with a total energy range of 2-1900 keV (in’t Zand et al. 1999), exhibited a soft-to-hard-to-soft evolution. The whole evolution seems to be connected, suggesting that the soft initial phase is not preburst X-ray activity, but may have a common origin with the main GRB emission. ### 3.2. The Evolution of the Spectral Shape A systematic investigation of the shape of the spectrum below the peak energy was made by Crider et al. (1997), using the 128 channel LAD data. They studied the slope of the asymptotic low-energy power-law, in terms of its index $`\alpha `$, in a sample of 79 bursts, and found that $`\alpha `$ evolves in 58 $`\%`$ of the cases. Some bursts exhibit substantial evolution in $`\alpha `$ over the burst. Furthermore, they conclude that $`\alpha `$ follows the evolution of the peak energy, both for hard-to-soft pulses and for tracking pulses, albeit with less confidence for the latter result. The averaged values of the power-law slope during the rise phase of the pulses are significantly harder for the hard-to-soft pulses, with 40 $`\%`$ of them having a positive averaged $`\alpha `$-value. The most extreme example of spectral shape evolution is found in GRB 910927 (#829), in which the low-energy power-law index evolves from approximately $`+1.6`$ down to $`-0.5`$. The maximal value of $`\alpha `$ is somewhat dependent on the analysis and could actually be lower. However, it is beyond doubt that the $`\alpha `$s can be large, and values close to 0 are certain. For the tracking pulses the averaged value remains negative during the rise phase. The spectral shape above the break energy, $`E_{\mathrm{pk}}`$, i.e., the high-energy power-law, does not change as much as has been observed for the low-energy power-law. Preece et al. (1998a) studied the behavior of the high-energy power-law in detail, using the 128 channel LAD data, which became useful after an in-orbit calibration.
126 bursts were studied: 122 of these had a spectrum consistent with a power-law, and for the evolution of this power-law 34 $`\%`$ were inconsistent with a constant $`\beta `$. The value of $`\beta `$ averaged over the burst has a narrow distribution, $`-2.12\pm 0.30`$. There were a few events classified as super soft, having $`\beta <-3`$, cf. Pendleton (1997). 100 events showed a hard-to-soft evolution, with an averaged change of $`\beta `$ of $`\mathrm{\Delta }\beta =-0.37\pm 0.52`$. Furthermore, it was found that the behavior of $`\beta `$ is independent of the rest of the spectral evolution. ## 4. Connection between the Spectral and the Light Curve (Intensity) Behavior The spectral evolution is more or less coupled with the intensity of the burst. By studying narrow time bins of the light curve, the instantaneous spectra can be studied, giving information on the temporal behavior of the spectrum over the burst. Correspondingly, the light curves from different spectral energy channels show how the intensities at different energies compare with each other. The burst evolution can be described as taking place in an imaginary cube, having the spectral energy and the time axes in the x-y-plane and the intensity on the z-axis: ‘the GRB-cube’. The full evolution can then be illustrated as contour plots of the intensity on the energy-time plane. An example of such a plot is given in Figure 6 for GRB 950403 (#3491), whose light curve, from all four LAD channels, is shown in Figure 5. The contours are from the fitted GRB-function, and the evolution of the peak energy is indicated. To describe this evolution and to systematize the observations in order to see the general trends, empirical relations between observables have been sought. The physical reason for these correlations is then explored.
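The contours just mentioned come from fits of the GRB-function of equations (1) and (2). A minimal sketch of that function (the parameter values in the checks below are illustrative, not fitted to any burst):

```python
import math

def band_function(E, A, alpha, beta, E0):
    """GRB-function (Band model) photon spectrum N_E(E), eqs. (1)-(2);
    E and E0 in keV, A in photons cm^-2 s^-1 keV^-1."""
    E_break = (alpha - beta) * E0
    if E <= E_break:
        return A * (E / 100.0) ** alpha * math.exp(-E / E0)
    # The matching constant A' of eq. (2) makes N_E continuous and
    # continuously differentiable at the break energy.
    A_prime = A * (E_break / 100.0) ** (alpha - beta) * math.exp(-(alpha - beta))
    return A_prime * (E / 100.0) ** beta

def peak_energy(alpha, E0):
    """Peak of the E^2 N_E spectrum; defined only for beta < -2."""
    return (2.0 + alpha) * E0
```

For typical values ($`\alpha =-1`$, $`\beta =-2.5`$, $`E_0=200`$ keV) the two branches join smoothly at $`(\alpha -\beta )E_0=300`$ keV, and the power output peaks at $`E_{\mathrm{pk}}=200`$ keV.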
The main observables studied are the instantaneous photon (or energy) flux, the spectral hardness characterized by the peak (break) energy or, equivalently, the temperature or color, the fluence and the total flux, the spectral shape parameters (e.g., the power-law indices), and the duration of the pulse and burst. ### 4.1. Quantitative Correlations #### Hardness-Intensity Correlations (HIC). The relation between the intensity and the hardness has been well investigated, and it has been shown that there is no ubiquitous trend of spectral evolution that can characterize all bursts; several types of behavior exist. Firstly, Norris et al. (1986) found that the most common trend of spectral evolution is a hard-to-soft behavior over a pulse, with the hardness decreasing monotonically as the flux rises and falls. They studied 10 bursts observed by the Solar Maximum Mission satellite. This behavior was also seen to be the most common trend by Kargatis et al. (1994), who studied 16 SIGNE/Venera bursts. There are also a few cases which exhibit soft-to-hard and even soft-to-hard-to-soft evolution. Band (1997) studied 209 BATSE bursts with the LAD discriminator rates, which give high time resolution. The spectral evolution was studied through auto- and cross-correlation between light curves from the four LAD channels. Most of the bursts in the sample showed a hard-to-soft behavior. Secondly, there is a tracking behavior between the intensity and the hardness, first noted by Golenetskii et al. (1983). Kargatis et al. (1994) confirmed the existence of such a HIC, even though it was less common than the hard-to-soft trend. However, in the decay phase of hard-to-soft pulses the HIC is often seen. Kargatis et al. (1995) found the hardness-intensity correlation in 28 pulse decays in 15 out of 26 GRBs with prominent pulses. Ryde & Svensson (1999b) also studied the HIC for the decay phases of a number of strong burst pulses.
Thirdly, there are bursts that do not exhibit any correlation at all, but rather a chaotic behavior. Indeed, the main conclusion drawn by Jourdain (1990), who studied several bursts observed by the APEX experiment, and Laros et al. (1985), who studied a few Pioneer Venus Orbiter bursts, is that there does not exist any correlation between the spectral evolution and the time history in their samples of GRBs. Over the whole GRB, there often does not exist any pure correlation, even though the tracks in the hardness-intensity plane are confined to an area from hard and intense to soft and weak, indicating an overall trend of luminosity increasing with hardness (Kargatis et al. 1994). A chaotic behavior in the plane may be a result of the superposition of several short hard-to-soft pulses that cannot be resolved. Several of the different types of trends can also be seen in a single GRB (e.g., Hurley et al. 1992). The variety of behaviors is also manifested in Band et al. (1993) and Ford et al. (1995). Bhat et al. (1994) studied 19 time structures, which have a FRED-like shape with a short rise time ($`<4`$ s), and found that most had a good correlation between the hardness and the intensity. The tracking behavior has been described quantitatively. Golenetskii et al. (1983) found the power-law relation between the instantaneous luminosity ($`\approx `$ the energy flux) and the peak energy $$L\propto (kT)^\gamma ,$$ (3) where the peak of the spectrum was quantified as the temperature, $`T`$, in the thermal bremsstrahlung model ($`k`$ is Boltzmann’s constant). The power-law index (the correlation index), $`\gamma `$, was found to have a typical value of 1.5-1.7. Figure 7 shows fits to the hardness-intensity correlation for three GRBs discussed in the original paper by Golenetskii et al. (1983). This analysis was criticized by several workers, including Laros et al. (1985), Norris et al. (1986), and Kargatis (1994).
It was speculated that the correlation could possibly be an artifact of the way the temperature was derived from the two-channel count rates. Furthermore, Golenetskii et al. (1983) excluded the hard initial phase of the bursts. Ford et al. (1995) suggested that the low time-resolution may result in the initial, non-tracking, hard behavior being missed. However, Kargatis et al. (1994) confirmed the existence of the Golenetskii et al. (1983) correlation in approximately half of their cases. The spread was substantially wider, $`\gamma =2.2\pm 1`$. In the Kargatis et al. (1995) study, in which the decay phase of a number of prominent pulses was examined, it was found that the distribution of the correlation index peaks at 1.7 and has a substantial spread. Bhat et al. (1994) found a corresponding spread in the HIC index. In the study of the Ginga data, Strohmayer et al. (1998) investigated the evolution of the peak energy versus energy flux and found the power-law correlation to be valid here too, with, for instance, $`\gamma \sim 3`$ for GRB 890929 (in the Ginga energy range). #### Hardness-Fluence Correlation (HFC). As mentioned above, the hardness and the flux (luminosity) do not always show any clear correlation over the entire burst, or even over a pulse. However, by studying the relation between the hardness and the running time-integral of the flux, the fluence, a clear correlation is often revealed over the entire pulse. Liang & Kargatis (1996) consequently found an empirical relation defining how the instantaneous spectrum evolves as a function of photon fluence, $`\mathrm{\Phi }(t)=\int ^tN(t^{\prime })dt^{\prime }`$.
They found that the power peak energy of the time-resolved spectra of a single pulse decays exponentially as a function of $`\mathrm{\Phi }(t)`$, i.e., $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},\mathrm{max}}e^{-\mathrm{\Phi }(t)/\mathrm{\Phi }_0},$$ (4) where $`E_{\mathrm{pk},\mathrm{max}}`$ is the maximum value of $`E_{\mathrm{pk}}`$ within the pulse and $`\mathrm{\Phi }_0`$ is the exponential decay constant. The photon fluence is the photon flux integrated from the time of $`E_{\mathrm{pk},\mathrm{max}}`$. Figure 8 shows examples of fitted correlations from the original work. The authors found that 35 of the 37 pulses in the study were consistent or marginally consistent with the relation. Furthermore, they concluded that the decay constant is constant from pulse to pulse within a GRB. This view was, however, changed by Crider et al. (1998a), who dismissed the apparent constancy as consistent with drawing values out of a narrow statistical distribution of $`\mathrm{\Phi }_0`$, which they found to be log-normal with a mean of $`\mathrm{lg}\mathrm{\Phi }_0=1.75\pm 0.07`$ and a FWHM of $`\mathrm{\Delta }\mathrm{lg}\mathrm{\Phi }_0=1.0\pm 0.1`$. This result is probably affected by selection effects. They expanded the study to include 41 pulses within 26 bursts, using the algorithm introduced by Norris et al. (1996) to identify pulses. Another approach was also introduced, in which they used the energy fluence instead of the photon fluence. The two approaches are very similar and do not fundamentally change the observed trends of the decay. These results confirm the correlation and extend the number of pulses in which the correlation is found. The relation between the two approaches is discussed in §5.3. #### Other Correlations. A few other correlations within individual GRBs should also be mentioned. Norris et al. (1996) introduced the asymmetry/width/softness paradigm for pulses, in which the quantities are correlated.
They detect, however, only a slight trend that more symmetric, narrower pulses are harder. Furthermore, Kouveliotou et al. (1992) reported on a trend that pulses with a short rise time are harder in single-pulse events. Using PVO bursts, Lochner (1992) noted a negative correlation between hardness and time between pulses. The spectral evolution in gamma-ray color-color diagrams, i.e., the correlation between hardness ratios, has been studied. Kouveliotou et al. (1993) could classify about half of the 30 bursts they studied into three types of behavior: crescent, island-like and flat. They did not find any striking correlation between the temporal profile of the bursts and the shape in its color-color diagram. In the study of Lee et al. (1998), the 2500 individual channel pulse structures analyzed also confirmed the general behavior that pulses are narrower and occur earlier at high energies. There is also a negative correlation between the peak flux and the pulse width, and between the pulse fluence and the pulse duration, within a burst. Petrosian et al. (1999) discuss this and note that these correlations are the same as the ones attributed to cosmological effects found in ensembles of bursts. They therefore draw the conclusion that these ensemble correlations cannot be cosmological signatures alone, but must arise from the intrinsic properties of the GRBs. ### 4.2. Quantitative Temporal Descriptions A few attempts have been made at quantitatively describing the temporal intensity-spectral evolution within a GRB pulse. Fenimore et al. (1995) studied how the width of a GRB pulse changes as a function of spectral energy and found that it scales as $`E^{-0.4}`$. This result was found both by using the autocorrelation function for GRBs and by using the width of the average pulse profiles for the four BATSE channels (see in ’t Zand & Fenimore 1996 for a discussion on the autocorrelation function).
The behavior was also observed over the whole band from 1.5 to 700 keV by BeppoSAX (Piro et al. 1997), which suggests that the emission mechanism is the same from soft X-rays to gamma-rays. Following the notation (with some modifications) of Fenimore & Bloom (1995), the light curve in each of the 4 BATSE channels ($`k=1,2,3,4`$) can be described as $$H_k(t)=\int _0^{\mathrm{\infty }}R_k(E)A(E,t)N_\mathrm{E}(E)dE$$ (5) assuming that the time structure can be separated from the spectral shape, $`N_\mathrm{E}(E,t)=A(E,t)N_\mathrm{E}(E)`$. $`R_k(E)`$ is the effective area of the detector for each channel. The scaling factor, $`A(E,t)`$, was modeled by Fenimore & Bloom (1995) as $`A(E,t)=\mathrm{exp}[-t/\tau (E)]`$ with $`\tau (E)=S_1(E/100)^{-S_2}`$, where $`E`$ is measured in keV; typically, $`S_1=0.45`$ and $`S_2=0.39`$ for the decay phase of the pulse, and $`S_1=0.22`$ and $`S_2=0.40`$ for the rise phase. Neither the hardness-intensity correlation (Golenetskii et al. 1983) nor the hardness-fluence correlation (Liang & Kargatis 1996) includes the time dependence of the spectral evolution. However, combined they do, as the fluence is the time integral of the flux. This was used by Ryde & Svensson (1999b) to synthesize and find a compact and quantitative description of the time evolution of the decay phase of a GRB pulse. This description is for the intensity-time plane of the GRB-cube, rather than for the single-spectral-channel light curves, which have been studied extensively in connection with their dependence on energy. Ryde & Svensson (1999b) identify a subgroup of GRB pulses for which the two empirical relations are valid and show that for these the decay phase of the pulse should follow a power-law. For the decay phase the HFC becomes $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},0}e^{-\mathrm{\Phi }(t)/\mathrm{\Phi }_0},$$ (6) where $`E_{\mathrm{pk},0}`$ is the peak energy at the start of the decay.
Note that $`E_{\mathrm{pk},\mathrm{max}}`$ could be even larger, for instance for hard-to-soft bursts. For a moderate spectral shape evolution the HIC can be rewritten using the photon flux, as, in that case, it holds that $`E_{\mathrm{pk}}(t)N(t)\propto F(t)`$. For the decay phase we then have $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},0}\left[\frac{N(t)}{N_0}\right]^\delta ,$$ (7) where $`N_0`$ is the photon flux at the same time as $`E_{\mathrm{pk}}=E_{\mathrm{pk},0}`$, i.e., at the beginning of the decay phase. The correlation index, $`\delta `$, corresponds approximately to $`1/(\gamma -1)`$, where $`\gamma `$ is the index used by Golenetskii et al. (1983). These two relations, given by equations (6) and (7), fully describe the evolution and especially the time dependence. If these two relations are fulfilled the time evolution can be described by a vector function $`𝐆(t)=(N(t),E_{\mathrm{pk}}(t))`$ given by $`N(t)`$ $`=`$ $`{\displaystyle \frac{N_0}{(1+t/\tau )}};`$ (8) $`E_{\mathrm{pk}}(t)`$ $`=`$ $`{\displaystyle \frac{E_{\mathrm{pk},0}}{(1+t/\tau )^\delta }},`$ (9) where the initial value is $`𝐆(0)=(N_0,E_{\mathrm{pk},0})`$ and the number of additional parameters is limited to two: the time constant $`\tau `$ $`\left[N(t=\tau )=N_0/2\right]`$ and the HIC index $`\delta `$. Note that the origin of the time variable, $`t`$, is at the time of the intensity peak. The peak energy has a similar dependence as the intensity, differing only by the power-law index $`\delta `$. The exponential decay constant of the HFC is given by $`\mathrm{\Phi }_0=N_0\tau /\delta `$, and thus the characteristic time scale of the decay, the time constant, is $`\tau =\delta \mathrm{\Phi }_0/N_0`$. The formulation, given by equations (8) and (9), is a condensate of the HIC and the HFC, which have been proven to be valid in several cases. Ryde & Svensson (1999b) studied a number of GRB pulses in this context, fitting both the original correlations as well as the new equivalent formulation. 
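The compact description above can be checked numerically: with $`\mathrm{\Phi }_0=N_0\tau /\delta `$, equations (8) and (9) reproduce the HFC of equation (6) exactly, since $`\mathrm{exp}[-\delta \mathrm{ln}(1+t/\tau )]=(1+t/\tau )^{-\delta }`$. A minimal sketch; the parameter values are hypothetical, chosen only for illustration:

```python
import math

# Compact pulse description (eqs. [8]-[9]); hypothetical parameter values.
N0, Epk0 = 10.0, 300.0   # photon flux and peak energy at the start of the decay
tau, delta = 2.0, 0.7    # time constant [s] and HIC index


def N(t):
    """Photon flux, eq. (8)."""
    return N0 / (1.0 + t / tau)


def Epk(t):
    """Peak energy, eq. (9)."""
    return Epk0 / (1.0 + t / tau) ** delta


def fluence(t):
    """Photon fluence: the analytic integral of N(t) from 0 to t."""
    return N0 * tau * math.log(1.0 + t / tau)


# Consistency check: the HFC, Epk = Epk0 exp(-Phi/Phi0), holds exactly
# with Phi0 = N0 * tau / delta.
Phi0 = N0 * tau / delta
t = 5.0
hfc = Epk0 * math.exp(-fluence(t) / Phi0)
print(abs(hfc - Epk(t)) / Epk(t))  # relative difference, essentially zero
```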
The fits of the behavior of the decay phases of two such strong pulses are shown in Figure 9. This shows, among other things, that if the HIC and the HFC are valid, the decay part of the light curve (intensity) should follow the power-law $`N(t)\propto (1+t/\tau )^{-1}`$. This behavior cannot persist for too long, as the integrated flux (the fluence) has a divergent behavior, $`\mathrm{\Phi }(t)=N_0\tau \mathrm{ln}(1+t/\tau )`$. The decay of the intensity must thus change into a more rapid one, such as an exponential, or possibly be turned off completely. ### 4.3. Relation Between the Time-Integrated and Time-Resolved Spectra The hardness-fluence correlation gives us the possibility to understand in what way the instantaneous and the time-integrated spectra are related over a pulse. What time-integrated spectrum does this relation give rise to? I.e., what does the spectrum on the intensity-energy plane of the GRB-cube look like? The exponential decay of the peak energy with fluence means that a linear increase in fluence by equal steps of $`\mathrm{\Phi }_0`$ photons cm<sup>-2</sup> corresponds to a decrease of $`E_{\mathrm{pk}}`$ in equal logarithmic steps. As the instantaneous spectra are, roughly, dominated by the $`N\sim \mathrm{\Phi }_0`$ photons cm<sup>-2</sup> in a logarithmic interval around the peak energy, $`dN/d\mathrm{ln}E=EdN/dE\propto EN_\mathrm{E}(E)`$ is a constant $`=\mathrm{\Phi }_0`$. In other words, the time-integrated, specific flux spectrum is a constant function of energy, and thus the time-integrated photon spectrum, $`N_\mathrm{E}(E)`$, of a single pulse has a power-law slope of $`-1`$. This is a direct result of the specific evolution defined by the hardness-fluence correlation, i.e., the spectral shape is a result of the exponential decay of the peak energy versus photon fluence. This spectral shape is reminiscent of the optically-thin thermal bremsstrahlung spectrum. 
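This argument can be illustrated with a minimal numerical sketch (hypothetical parameters; the instantaneous spectrum is approximated by a $`\delta `$-function at the peak energy): stepping the photon fluence in equal increments while the peak energy decays exponentially with fluence deposits equal numbers of photons in equal logarithmic energy bins, giving a flat $`dN/d\mathrm{ln}E`$ and hence a photon spectrum of slope $`-1`$:

```python
import math

# Hypothetical parameters for the sketch.
Epk0, Phi0 = 300.0, 20.0            # initial peak energy [keV], HFC constant
n_steps, dPhi = 2000, 0.05          # total fluence = 100 photons cm^-2

counts = {}                          # photons accumulated per 0.1-dex energy bin
for i in range(n_steps):
    Phi = (i + 0.5) * dPhi                    # fluence at mid-step
    E = Epk0 * math.exp(-Phi / Phi0)          # HFC: Epk decays exponentially
    b = int(math.floor(math.log10(E) * 10))   # logarithmic bin index
    counts[b] = counts.get(b, 0.0) + dPhi     # all dPhi photons land near Epk

interior = [counts[b] for b in sorted(counts)[1:-1]]   # drop partial edge bins
print(max(interior) / min(interior))  # close to 1: flat dN/dlnE, slope -1
```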
This was studied in detail by Ryde & Svensson (1999a), who showed analytically how the time evolution of the instantaneous spectra is related to the resulting time-integrated spectrum. They studied mainly the spectra of single FRED pulses and showed that the exponential decay of the peak energy with photon fluence indeed does lead to a general, low-energy slope, normalized by the decay constant $`\mathrm{\Phi }_0`$ and having the underlying $`E^{-1}`$ behavior. This general result is affected by the finite range over which the peak energy evolves; the less it evolves, the more the spectrum is affected. The way the spectrum is affected can be found analytically, leading to a function that can be used to fit the time-integrated spectra and whose parameters describe the instantaneous spectra. From the fit to the time-integrated spectrum one can thus deduce information about the instantaneous spectra, which is of interest as these carry more direct physical information. This is not the case for the time-integrated spectra, as they are merely the result of the exponential decay of the peak energy. ## 5. Analysis Methodology ### 5.1. Data Analysis The observed distribution of the spectral parameters could very well be different from the parent distribution due to observational biases, such as truncation of the data set resulting from the trigger procedure. For instance, is this the case for the narrow range of peak energies found by the BATSE instrument? Cohen et al. (1998) argued that the detection efficiency of BATSE could lead to a spurious paucity of hard bursts, and they suggest that there could exist a large, unobserved population of hard (MeV) bursts. If the luminosity at the peak energies is represented by a standard candle, high peak energies will result in fewer photons, letting fewer bursts pass the trigger. Very low peak energies will correspondingly affect the triggered fraction, as the spectra would have their cut-offs below the detectable range. 
This is also noted by Lloyd & Petrosian (1999) and Petrosian et al. (1999), who show that the effects of selection biases and data truncations are to produce observed distributions that are narrower than the parent distributions. They also present methods to properly account for this. Furthermore, are the assigned values of the spectral characteristics, mainly the peak energy, $`E_{\mathrm{pk}}`$, and the slope of the asymptotic, low-energy power-law, $`\alpha `$, correctly measured? As it is the asymptotic value of the power-law that is measured, problems can arise the closer the energy break gets to the low-energy cut-off of the energy window of the detector (generally 15-25 keV for BATSE data). The fitting can lead to a wrong value being ascribed to $`\alpha `$, and consequently also to $`E_{\mathrm{pk}}`$, as the exponential turnover (curvature) is not modeled correctly. This is also evident when comparing with the results from fitting a sharply broken power-law to the data instead. This function does not take the exponential turnover into account and therefore gives a steeper (lower $`\alpha `$) power-law compared to the asymptotic value. The errors in the fitted $`\alpha `$-parameter also increase as the available energy range for fitting decreases. Preece et al. (1998b) use the power-law tangent to the ‘GRB’ function at some chosen low energy (e.g., 25 keV) as the upper bound of the low-energy power-law behavior within the observed energy window. This value is, however, always smaller than the asymptotic value. In addition, the values assigned to the fitted parameters can be erroneous due to the existence of any soft component, any previous pulses adding soft photons to the spectrum, or any completely unresolved pulses with lower break energies. These issues could affect the estimated fraction of pulses over which the spectral shape actually changes. Kargatis et al. 
(1995) and Liang & Kargatis (1996) freeze the power-law values to their average values during the burst, which reduces the spectral variations to merely the hardness variation. Correspondingly, the measured value of $`\beta `$ is sensitive to the amount of high-energy signal available for its determination. The power peak, in some cases, does not lie within the BATSE band, e.g., when $`\beta >-2`$. For a detailed discussion of these issues see Preece (1999). To be able to study the spectral evolution on finer time scales a coarser hardness measure is needed. Often the hardness ratio between different energy bands is used. Bhat et al. (1994) compared the analysis of the spectral evolution with the two different hardness definitions, conventional spectral fitting and hardness ratios, and found that the two were consistent. Furthermore, does the BATSE spectrum, between 20 keV and 1900 keV, represent a correct measure of the bolometric flux? The energy spectrum often peaks within the BATSE band and thus it should be a good measure of the bolometric energy flux. Such considerations made Crider et al. (1999) prefer to study the spectral evolution in terms of energy flux rather than photon flux, cf. §5.3. Moreover, the evolution of the spectral characteristics and the correlations between them could also be affected by various limitations in the observations. Schaefer (1993) discusses methodological problems in connection with the study of the HIC and expresses concern with the fitting technique used (Isobe 1990). He emphasizes the importance of having high spectral resolution rather than time resolution, so as not to introduce artificial correlations. For spectral analysis to be done correctly, it has been shown that the signal-to-noise ratio should be of the order of $`45`$ in the BATSE range (Preece et al. 1998a). ### 5.2. Detailed Spectral Modeling To model the time-integrated photon count spectra, from which the background has been subtracted, the ‘GRB’ function (Band et al. 
1993) is the most commonly used, cf. §2.2. It is a purely empirical model described by 4 parameters. Besides the normalization, the two power-laws and the break energy are fitted. Earlier studies used the ‘optically-thin thermal bremsstrahlung’ (OTTB) spectrum, $`N_\mathrm{E}(E)\propto E^{-1}\mathrm{exp}(-E/kT)`$, and the ‘thermal synchrotron’ (TS) spectrum from an optically-thin, mildly relativistic, thermal plasma in a magnetic field $`B`$, $`N_\mathrm{E}(E)\propto \mathrm{exp}\left[-(E/E_\mathrm{c})^{1/3}\right]`$, where $`T`$ is the temperature and $`E_\mathrm{c}`$ is the critical energy, proportional to $`BT^2`$ (Liang 1982). The empirical models are often used only to determine the general shape, e.g., the hardness, and not to determine physical characteristics of the source, like a temperature. The value assigned to the break of the spectrum depends on the model used and can thus affect the correlations sought. The break is determined from the fit to the overall continuum shape, which is modeled in different ways. This was noted, for instance, by Schaefer et al. (1992) and Kargatis et al. (1994). In the latter study the authors used the OTTB and TS models and found that many cases gave consistent results but that there were cases for which rather different values were obtained. Another such study was done by Ryde (1999), where 10 GRB spectra were fitted with three different models: a sharply broken power-law and the ‘GRB’ model, both with 4 parameters, and the smoothly broken power-law, described by a broken power-law smoothly and evenly connected through a hyperbolic function, with a total of 5 parameters. The extra free parameter describes the width of the transition region (see, e.g., Preece et al. 1996a; Ryde 1999). In some cases, the peak energies attributed to the data differ considerably. 
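For concreteness, the empirical model shapes mentioned here can be written down directly; a sketch with arbitrary normalizations, where the parameter values in the check are hypothetical:

```python
import math

def ottb(E, kT):
    """Optically-thin thermal bremsstrahlung: N_E ~ E^-1 exp(-E/kT)."""
    return E ** -1.0 * math.exp(-E / kT)

def ts(E, Ec):
    """Thermal synchrotron: N_E ~ exp[-(E/Ec)^(1/3)]."""
    return math.exp(-((E / Ec) ** (1.0 / 3.0)))

def band(E, alpha, beta, E0):
    """Band et al. (1993) 'GRB' function (normalized at the 100 keV scale)."""
    Eb = (alpha - beta) * E0                  # energy where the power-laws join
    if E < Eb:
        return (E / 100.0) ** alpha * math.exp(-E / E0)
    return ((Eb / 100.0) ** (alpha - beta) * math.exp(beta - alpha)
            * (E / 100.0) ** beta)

# The nuF_nu (= E^2 N_E) peak of the Band function lies at Epk = (2 + alpha) E0:
alpha, E0 = -1.0, 300.0
grid = [1.0 + 0.5 * i for i in range(2000)]
peak = max(grid, key=lambda E: E * E * band(E, alpha, -2.5, E0))
print(peak)  # near (2 + alpha) * E0 = 300 keV
```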
As bursts can have an actual curvature that is sharper than the fixed exponential curvature in the ‘GRB’ function, which is determined solely by $`E_0`$, the resulting fits may differ. Adding extra parameters to the model function is meaningful only if the data are good enough to constrain the parameters. ### 5.3. Energy or Photon Flux? The intensity can be characterized either in terms of the photon/count flux or of the energy flux. For instance, Golenetskii et al. (1983) studied the HIC based on the energy flux (luminosity), while Bhat et al. (1994) studied the correlation using the detector count flux instead, allowing for higher time resolution by using hardness ratios to characterize the spectrum. The studies arrive at similar conclusions for the correlation. Liang & Kargatis (1996) chose to study the photon flux in their search for the HFC. This was motivated by the fact that the decay of the hardness versus energy fluence was more difficult to establish, because the deconvolved energy flux has larger statistical errors. Liang & Kargatis (1996) thus discovered the HFC using the photon fluence. In a larger study of the HFC, Crider et al. (1999) used the energy flux instead, as more detailed spectral fitting can reduce the statistical errors in the energy fluence. The authors found the HFC in the energy flux fits for all the 41 cases they studied. In parallel, they also studied the decay versus photon fluence suggested originally, and confirmed the discovery. The reason they prefer the energy fluence over the photon fluence is their argument that the energy fluence represents a more physical quantity and that it is a better measure of the bolometric flux. However, they point out that the two approaches do not represent fundamentally different trends. Ryde & Svensson (1999b) used the photon flux both for the HIC and for the HFC in their synthesis of the spectral evolution of a GRB pulse and showed the relations to hold. 
What then is the difference, and when is a difference between the approaches expected to be seen? Can we determine which correlation is the most fundamental, i.e., the one always valid and not merely a consequence of the other? The decay tested for is either the exponential decay of the peak energy versus photon fluence, $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},\mathrm{max}}e^{-\mathrm{\Phi }(t)/\mathrm{\Phi }_0},$$ (10) or the linear decay of the peak energy versus energy fluence, $`\mathcal{E}(t)`$, $$E_{\mathrm{pk}}(t)=E_{\mathrm{pk},\mathrm{max}}-\frac{1}{\mathrm{\Phi }_0}\int _0^tF(t^{\prime })dt^{\prime }=E_{\mathrm{pk},\mathrm{max}}-\frac{1}{\mathrm{\Phi }_0}\mathcal{E}(t).$$ (11) Differentiating equation (10) gives the decay rate of the peak as $$\frac{dE_{\mathrm{pk}}(t)}{dt}=-\frac{E_{\mathrm{pk}}(t)}{\mathrm{\Phi }_0}N(t)\approx -\frac{F(t)}{\mathrm{\Phi }_0}.$$ (12) The last step in equation (12) is generally only approximately true. The equivalence between the exponential decay in photon fluence, equation (10), and the linear decay in energy fluence, equation (11), depends on the validity of this approximation. The energy flux is $$F(t)=\int _0^{\infty }EN_\mathrm{E}(E,t)dE=N(t)\int _0^{\infty }Ef_\mathrm{E}(E,t)dE\equiv N(t)\langle E\rangle ,$$ (13) where $`\langle E\rangle `$ is the flux-weighted, average energy, i.e., the mean energy, and $`f_\mathrm{E}`$ is the normalized spectrum. The assumption that the two decays are the same is equivalent to $`\langle E\rangle =E_{\mathrm{pk}}`$. This is exactly the case in the often illustrative Dirac $`\delta `$-function approximation of the spectrum. It is also the case when the spectral shape, i.e., the function $`f_\mathrm{E}(E,t)`$, is symmetric around the peak energy. The approximation is better the more peaked the logarithmic $`EN_\mathrm{E}`$-spectrum is. A complication to the discussion stems from the fact that $`\langle E\rangle `$ can shift as $`f_\mathrm{E}(E,t)`$ changes, i.e., when the spectral shape varies, for instance, with an evolving $`\alpha `$. 
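Whether $`\langle E\rangle `$ actually tracks the power peak can be tested numerically. A minimal sketch (hypothetical parameters) for a cut-off power-law spectrum, $`N_\mathrm{E}\propto E^\alpha e^{-E/E_0}`$, i.e., a Band function with $`\beta \to -\infty `$:

```python
import math

def mean_energy(alpha, E0, Emin=0.1, Emax=1.0e4, n=100000):
    """Mean energy <E> for N_E ~ E^alpha exp(-E/E0), via the midpoint rule."""
    num = den = 0.0
    dE = (Emax - Emin) / n
    for i in range(n):
        E = Emin + (i + 0.5) * dE
        NE = E ** alpha * math.exp(-E / E0)
        num += E * NE * dE
        den += NE * dE
    return num / den

alpha, E0 = -0.5, 300.0
m = mean_energy(alpha, E0)
# For this broad spectrum <E> is close to (1 + alpha) E0 = 150 keV, the
# logarithmic photon-number peak, well below the power peak (2 + alpha) E0.
print(m)
```

For such a broad spectrum the $`\delta `$-function approximation $`\langle E\rangle =E_{\mathrm{pk}}`$ fails by a factor of a few, illustrating why the photon-fluence and energy-fluence formulations can differ in practice.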
Furthermore, when the spectral shape changes, the relation between the power peak, $`E_{\mathrm{pk}}`$, and the photon number peak, $`E_\mathrm{p}`$, varies, as $`E_{\mathrm{pk}}=[(2+\alpha )/(1+\alpha )]E_\mathrm{p}`$. For which measured peak energy do the relations hold best? Ryde & Svensson (1999a) argue that $`E_\mathrm{p}`$ is the important measure. General uncertainty in the measurement of the peak energy has already been discussed above. These issues add to the fact that the data cannot clearly demonstrate which relation is the correct one. In other words, we cannot make any conclusive statement based on information gleaned from the data alone. ## 6. Discussion Gamma-ray bursts are at cosmological distances, as indicated by recent observations of the afterglows, giving high redshift values (e.g., Costa et al. 1997; Metzger et al. 1997). One plausible origin of the huge energy release needed is a dissipative, relativistically expanding fireball (blast-wave), or, equivalently, a propagating jet with low baryonic contamination (e.g., Mészáros & Rees 1997). The motivation for this scenario is based on the requirement that the observed amount of energy must be injected inside a very small volume, given by the characteristic time scales of GRBs. The photon energy densities imply that the radiation is super-Eddington (by orders of magnitude, particularly if the radiation is emitted isotropically) and lead to the creation of an optically thick, dense radiation and electron-positron-pair fluid, expanding under its own pressure and cooling adiabatically. The observed radiation can thus not come from the surface of the central energy source. Initially the fireball is thermal and converts the radiation energy into bulk kinetic energy. The thermal emission from the fireball will not be visible in the gamma-ray band, but may be visible at lower energies. 
The fireball then becomes optically thin, and the kinetic energy of the wind will be tapped by a dissipation mechanism, such as shocks or magnetohydrodynamic (MHD) turbulence, converting it into internal energy and accelerating relativistic particles. The shocks could occur as the fireball crashes into the circum-burst, low-density environment, or as different shells, with different Lorentz factors, within the fireball catch up with each other. The general hard-to-soft evolution of the burst spectra could then be explained by the expansion and deceleration of the blast-wave and/or by the decline in the average available energy, as more particles reach the shocks. The pair plasma wind has to be highly relativistic, with Lorentz factors of $`\mathrm{\Gamma }=10^2`$–$`10^3`$, to avoid photon-photon degradation through pair-production, as high-energy photons are observed. This implies that the baryonic pollution of the radiation fields cannot be very high (a ‘clean fireball’). Lower Lorentz factors are, however, possible, but then the production of the high-energy radiation cannot be directly connected to the lower-energy gamma-rays. The primary source and the signatures of the underlying mechanism are enshrouded by the optically-thick pair plasma at the beginning of the life of the fireball; details are washed out and cannot be seen directly in the observations. The observed radiation from this initial phase is from, and outside of, the pair photosphere. Thus the nature of the primary energy release will not greatly affect the resulting expanding fireball. Models including stellar-mass, compact objects, such as a merging neutron star and black hole, meet the requirements of occurrence and energetics. This event can either be a single, short-lived, catastrophic event, producing a single fireball, or result in a recurrent central engine capable of producing several shells. 
The latter could, for instance, be a long-lived accretion system with the debris of the disrupted neutron star accreting onto the black hole. The orbital and spin energy in such a system can be tapped, for instance, through electromagnetic torques. As the expanding fireball is relativistic, the radiation will undergo beaming, time-transformation, and Lorentz-boosting, blueshifting the emission into the gamma-ray band. Furthermore, the emission radiated from the blast-wave at a given comoving time will contribute to a broad observer-time interval, due to light travel-time effects. Fenimore et al. (1996) argue that FRED-like envelope shapes of light curves are expected from a relativistically expanding shell, and find that the decay phase should follow a power-law. Gamma-ray observations may give hints of possible physical causes of the continuum spectral emission, i.e., information on the processes which convert the kinetic energy into the observed radiation. Empirical studies of the dynamics of the burst spectra enable the systematization and the investigation of the underlying distributions of parameters. These empirical properties provide important clues for the theoretical efforts to unravel the physical processes. These relations probably do not point directly and unambiguously to the physical processes responsible for the radiation: several different radiation processes could be involved, as well as purely kinematic and relativistic effects, making the physical interpretation difficult and complex. However, if one of the effects is dominant, the observations will give us direct information on physical entities, such as the distribution of the particles emitting the radiation and the optical depth. A physical GRB model must, under all circumstances, be able to reproduce the severe constraints that these relations and observations give. 
The large diversity, time scales, and variability of light curves must be naturally explained, as should the shape of, and the breaks in, the spectra. The connection between the spectral and intensity evolution, as described above, must also be addressed by any successful model. The general trend of the spectrum becoming softer over the whole burst could indicate a single emission region having a memory of previous emission events. Simple radiation processes all have some difficulty in describing the observations. Even on the smallest time scales the observed spectra are broad, much broader than a black-body spectrum. Is it a multi-temperature black body mimicking a power-law, e.g., thermal spectra from many short-lived events, or do other processes produce the broad spectrum? From the early spectral observations there were suggestions of thermal bremsstrahlung from an optically-thin, hot plasma. The spectra have, however, been shown generally not to follow such a spectrum. The low-energy power-law does not always follow $`\alpha =-1`$. Thermal bremsstrahlung is also too inefficient (Liang 1982). Furthermore, the hardness-intensity correlation index is $`\gamma =0.5`$–$`1`$ for mildly relativistic thermal bremsstrahlung from a plasma cloud (if the emission measure is constant). A part of the internal energy will take the form of magnetic fields, which will make the relativistic electrons radiate synchrotron radiation, a very efficient radiation mechanism. As the fireball crashes into the surrounding low-density gas, it will form a relativistic, collisionless shock and radiate by optically-thin synchrotron radiation, which will be boosted into the gamma-ray band (the synchrotron shock model; Mészáros & Rees 1993). The electrons, giving rise to the synchrotron spectrum, are assumed to have a truncated power-law distribution. 
In the comoving frame, the minimum Lorentz factor is approximately equal (depending on the equipartition between electrons and protons) to the bulk Lorentz factor of the blast wave, while the maximum Lorentz factor is set by the balance of radiative losses and energy gain from the acceleration mechanism at work for the most energetic electrons. If the cooling time of the electrons is very long, the electron distribution around the low-energy cut-off will not change and the emerging synchrotron spectrum will have a photon index of $`\alpha =-2/3`$. However, if the cooling time is much shorter than the comoving pulse duration, the electrons will settle into a cooled distribution, emitting a synchrotron spectrum with a low-energy photon index between $`-2/3`$ and $`-3/2`$, depending on the strength of the magnetic field. The electron distribution must have a sufficiently sharp low-energy cut-off during the whole evolution to be able to give rise to the observed spectrum. The photon index can, in this model, never exceed the value $`-2/3`$, creating a testable ‘line of death’ for the synchrotron shock model in its simplest version. Preece et al. (1998b) found that 23 bursts out of their sample of 137, for which they did time-resolved spectroscopy, violate the synchrotron shock model, as the low-energy power-law spectra are harder than the maximally allowed $`\alpha =-2/3`$. Cohen et al. (1997) fitted the time-integrated spectra and found them to confirm the synchrotron shock model. However, as emphasized earlier, it is the time-resolved spectra which should be studied. The spectral index should also be constant, which clearly is not the case, as a softening of the spectra is observed in many bursts (Crider et al. 1997). The pulse-width scaling with energy, $`W\propto E^{-0.45}`$, is, however, consistent with the prediction from radiative cooling by synchrotron losses (Tavani 1996). 
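The ‘line of death’ is a simple, directly testable criterion; a sketch (the fitted indices below are hypothetical):

```python
# Optically-thin synchrotron cannot produce a low-energy photon index
# harder (larger) than -2/3; fast cooling pushes the index toward -3/2.
ALPHA_MAX = -2.0 / 3.0

def violates_ssm(alpha):
    """True if a fitted low-energy index breaks the synchrotron limit."""
    return alpha > ALPHA_MAX

# Hypothetical time-resolved fits for one pulse:
fits = [-1.4, -0.9, -0.5, 0.1]
print([a for a in fits if violates_ssm(a)])  # the two hardest indices violate it
```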
Synchrotron self-absorption would increase the low-energy power-law index, to $`+3/2`$ for a non-thermal plasma; the optical depth must then, however, be greater than unity. The inclusion of Comptonization of the soft synchrotron photons by the emitting particles themselves can modify the spectrum (Liang et al. 1997). The spectrum will then be an inverse-Compton image of the synchrotron continuum in the comoving frame. The relativistic expansion will then boost the radiation to even higher energies. The empirical correlations point to saturated Comptonization and, as noted by Crider et al. (1998b), there also has to be an initial increase in Thomson depth to explain the initial increase of $`\alpha `$ observed in many pulses. Here the $`W\propto E^{-0.4}`$ relation, found in the $`1.5`$–$`700`$ keV range, could be an argument against the synchrotron self-Compton mechanism, as the X-rays and $`\gamma `$-rays would, in that case, be expected to have the same duration, since they would be produced by the same population of electrons (Piro et al. 1997). Another emission scenario involves photon-starved Comptonization in an optically-thick pair plasma with moderate Lorentz factors, $`\mathrm{\Gamma }=30`$–$`50`$. This scenario cannot explain the very high-energy (GeV) emission observed. It is, however, attractive, as it naturally provides a thermostat and can produce a stable spectral break. The nonlinear nature can also explain the highly variable light curves and the large-amplitude variations, as such a system can be turned off quickly. See, e.g., Thompson (1998) and Ghisellini & Celotti (1999) for more detailed discussions. 
Detailed calculations of the resulting light curves and spectra in the blast-wave model, including synchrotron and synchrotron self-Compton emission, are given in Dermer & Chiang (1998) and Chiang & Dermer (1999), showing, e.g., how the injected electron distribution is reflected in the radiated spectrum, for various combinations of non-thermal electron distributions and magnetic fields. See also, e.g., Mészáros et al. (1994) and Panaitescu et al. (1997). Daigne & Mochkovitch (1998) calculated the emitted spectrum in an internal shock scenario and are able to reproduce many of the observations. An observational feature that has to be explained is the highly variable light curves. The time scales for the variations in classical GRBs can be as low as $`10^{-1}`$ s and are found to be self-similar, with the average power density spectrum of long bursts being a power-law over more than two decades of frequency (Beloborodov et al. 1999). Such a behavior is difficult to explain with just variations in the external medium or shock collisions. Stern (1999) suggests that complex dynamic processes in the shock evolution make the outflow inhomogeneous, giving rise to the observations. This could be MHD turbulence with reconnection, or instabilities such as the Rayleigh-Taylor instability. ## 7. Conclusion Gamma-ray bursts are observed at a rate of 1 per day with current detectors. Their time histories are a morphological zoo with a large diversity in shape. The energy spectra are peaked, broken power-laws, and the instantaneous spectra evolve, sometimes markedly, both within a pulse and over the whole GRB. The intensity and its spectral characteristics are often correlated. This can be described by empirical relations. These are the result of the true intrinsic correlations of the GRB event as well as of relativistic effects. 
Understanding the intrinsic correlations, for instance within a burst, will eventually lead to the unraveling of the secret behind the energy release and the radiation processes in GRBs. It has been emphasized in this review that it is important to consider the spectral evolution when GRBs are studied, for instance through their light curves. Furthermore, it is the instantaneous spectra, which are more peaked than the time-integrated spectra, and their time evolution that reflect the physical processes responsible for the GRB emission. The time-integrated spectrum is generally a result of the specific spectral evolution taking place during the burst. Unfortunately, most theoretical spectral models assume that it is the time-integrated spectrum that reflects the underlying physical emission mechanism. Among the spectral characteristics, the low-energy power law is the best studied and puts constraints on any physical radiation mechanism proposed to be responsible for the observed radiation. At the moment theory lags behind the observational advances. The observations give a number of constraints that have to be met by any successful physical description of the GRB phenomenon. ##### Acknowledgments. Thanks are due to L. Borgonovo, V. Petrosian, R. Preece, J. Poutanen and R. Svensson for discussions and comments on the manuscript.
## 1 Introduction Seiberg and Witten proposed exact results for $`SU(2)`$, $`N=2`$ supersymmetric Yang-Mills theory with and without matter multiplets. These include, in particular, an exact expression for the mass spectrum of BPS states in these theories. Their solutions also provide a mechanism, based on monopole condensation, for chiral symmetry breaking and confinement in $`N=2`$ Yang-Mills with and without coupling to fundamental hypermultiplets, respectively. The results of Seiberg and Witten have since been generalised to a variety of gauge groups, describing a number of interesting new phenomena. On another front, the low energy effective theories arising from non-Abelian YM-theory have been identified with those describing the low energy dynamics of certain intersecting brane configurations in string theory. This approach to field theory has the advantage of providing an elegant geometrical representation of the low energy dynamics of strongly coupled supersymmetric gauge theory. For gauge group $`SU(2)`$ and $`N_F\le 3`$ massless flavours it has since been shown that the solutions are indeed the only ones compatible with supersymmetry and asymptotic freedom in these theories. However, a number of issues have still resisted an exact treatment. In particular, the precise relation between the low energy effective coupling $`\tau _{\text{eff}}`$ and the microscopic coupling $`\tau `$ in the scale invariant $`N_F=4`$ theory has not been understood so far. Indeed, while the two couplings were first assumed to be identical, it was later found by explicit computation that there are, in fact, perturbative as well as instanton corrections. On the other hand, explicit instanton calculus is so far limited to topological charge $`k\le 2`$. A related observation has been made in the D-brane approach to scale invariant theories. The details of the conclusions reached there are, however, somewhat different. 
The purpose of the present paper is to fill this gap. Our analysis combines analytic results from the theory of conformal mappings with known results from instanton calculus. More precisely, we consider the sequence $`\tau \to z\in 𝐂\to \tau _{\text{eff}}`$. We will then argue that, given the Seiberg-Witten curve together with some suitable assumptions on the singular behaviour of the instanton contributions, there is a one-parameter family of admissible maps $`\tau _{\text{eff}}\to \tau `$. The remaining free parameter is in turn determined by the two-instanton contribution to the asymptotic expansion at weak coupling. This coefficient has been computed explicitly. Combining these results then determines the map completely. Although we are not able to give a closed form of the map $`\tau \to \tau _{\text{eff}}`$ globally, the higher order instanton coefficients can be determined iteratively. We further discuss some global properties of the map qualitatively. In particular we will see that it is not single valued, meaning that the instanton corrections lead to a cut in the strong coupling regime. In this note we restrict ourselves to gauge group $`SU(2)`$, leaving the extension to higher groups for future work. ## 2 Review of N=2 Yang-Mills with 4 Flavours To prepare the ground let us first review some of the relevant features of the theory of interest, that is $`N=2`$ YM-theory with $`4`$ hypermultiplets $`Q^r`$ and $`\stackrel{~}{Q}_r`$, $`r=1,\dots ,4`$, in the fundamental representation. In $`N=1`$ language the hypermultiplets are described by two chiral multiplets containing the left-handed quarks and anti-quarks, respectively. These are in isomorphic representations of the gauge group $`SU(2)`$. The global symmetry group therefore contains an $`SO(8)`$ or, more precisely, an $`O(8)`$, due to invariance under the $`𝐙_2`$ “parity” which exchanges a left-handed quark with its antiparticle, $`Q_1\leftrightarrow \stackrel{~}{Q}_1`$, with all other fields invariant. 
At the quantum level this $`𝐙_2`$ is anomalous due to contributions from odd instantons. We consider the Coulomb branch with a constant scalar $`\phi =a\ne 0`$ in the $`N=2`$ vector multiplet. This breaks the gauge group $`SU(2)\to U(1)`$. The charged hypermultiplets then have mass $`M=\sqrt{2}|a|`$ and transform as a vector under $`SO(8)`$, rather than $`O(8)`$, due to the $`𝐙_2`$-anomaly. In addition, there are magnetic monopole solutions leading to $`8`$ fermionic zero modes from the $`4`$ hypermultiplets. These turn the monopoles into spinors of $`SO(8)`$. The symmetry group is therefore the universal cover of $`SO(8)`$, Spin$`(8)`$, with centre $`𝐙_2\times 𝐙_2`$. We label the representations of $`𝐙_2\times 𝐙_2`$ according to the Spin$`(8)`$ representations, that is, the trivial representation $`o`$, the vector representation $`v`$ and the two spinor representations $`s`$ and $`c`$. To decide in which spinor representation the monopoles and dyons transform, one considers the action of an electric charge rotation $`e^{\pi iQ}`$ on these states. Here the electric charge $`Q`$ is normalised such that the massive gauge bosons have charge $`\pm 2`$. This action is conveniently described by $$e^{\pi iQ}=e^{in_m\theta }(-1)^H,$$ (1) where states with even, odd $`n_e`$ are $`(-1)^H`$ even, odd respectively. On the other hand, for consistency, the monopole anti-monopole annihilation process requires a correlation between chirality in Spin$`(8)`$ and electric charge. We therefore identify $`(-1)^H`$ with the chirality operator in the spinor representations of Spin$`(8)`$. Hence dyons with even and odd electric charge transform in one or the other spinor representation of Spin$`(8)`$, respectively. There is an outer automorphism $`S_3`$ of Spin$`(8)`$ that permutes the three non-trivial representations $`v`$, $`s`$ and $`c`$. It is closely connected to the proposed duality group $`SL(2,𝐙)`$ of the quantum theory. 
Indeed there is a homomorphism $`SL(2,𝐙)\to S_3`$, so that the invariance group of the spectrum is given by the semi-direct product of Spin$`(8)`$ and $`SL(2,𝐙)`$. The kernel of this homomorphism plays an important part in our analysis below. It consists of the matrices congruent to $`1`$ (mod $`2`$). They are conjugate to the subgroup $`\mathrm{\Gamma }_0(2)`$. The fundamental domain of this subgroup is the space of inequivalent effective couplings $`\tau _{\text{eff}}`$. One can further formalise this structure in terms of the hyperelliptic curve that controls the low energy behaviour of the model. For this one seeks a curve $`y^2=F(x,u,\tau )`$ such that the differential form $$\omega =\frac{\sqrt{2}}{8\pi }\frac{\text{d}x}{y}$$ (2) has the periods $`(\frac{\partial a_D}{\partial u},\frac{\partial a}{\partial u})`$ with $`(a_D,a)`$ given by $$a=\sqrt{\frac{u}{2}}\quad \text{and}\quad a_D=\tau _{\text{eff}}a.$$ (3) The curve consistent with $`SL(2,𝐙)`$ duality is given by $$y^2=(x-ue_1(\tau ))(x-ue_2(\tau ))(x-ue_3(\tau )),$$ (4) where $`e_i(\tau )`$ are the modular forms corresponding to the three subgroups of $`SL(2,𝐙)`$ conjugate to the index $`3`$ subgroup $`\mathrm{\Gamma }_0(2)`$. ## 3 Map: $`\tau \to \tau _{\text{eff}}`$ We now have the necessary ingredients to determine the precise relation between $`\tau _{\text{eff}}`$ and $`\tau `$. We begin with the observation that, according to the structure of the effective theory presented above, the fundamental domain $`D_\mathrm{\Gamma }`$ of any of the three subgroups conjugate to $`\mathrm{\Gamma }_0(2)`$ can be used as the space of inequivalent effective couplings. The three choices are then connected by the Spin$`(8)`$ “triality” relating the $`3`$ different non-trivial representations $`v`$, $`s`$ and $`c`$. Each fundamental domain is described by a triangle in the upper half plane ($`\text{Im}(\tau )>0`$), bounded by circular arcs. 
The $`3`$ singularities are conjugate to the points $`(i\infty ,1,-1)`$ corresponding to the weak coupling regime, massless monopoles and massless dyons with charge $`(n_m,n_e)=(1,1)`$ mod$`(2)`$, respectively. In the absence of perturbative and instanton corrections the effective coupling is identified with the microscopic coupling $`\tau `$. This is the case in $`N=4`$ theories. In the scale invariant $`N=2`$ theory considered here the situation is different. The effective coupling is finitely renormalised at the one-loop level and furthermore receives instanton corrections. As a result the fundamental domain $`\overline{D}`$ of inequivalent microscopic couplings and the fundamental domain $`D_\mathrm{\Gamma }`$ of inequivalent effective couplings are not the same. We will now determine the exact relation between them. Some information about the fundamental domain $`\overline{D}`$ of microscopic couplings $`\tau `$ is obtained from the following observations: a) As the microscopic coupling does not enter the mass formula, its fundamental domain is not constrained to be that of a subgroup of $`SL(2,𝐙)`$. Nevertheless we require that the imaginary part of $`\tau `$ be bounded from below. Correspondingly, the different determinations of $`\tau `$ for a given $`\tau _{\text{eff}}`$ must be related by a transformation in $`G\subset PSL(2,𝐑)`$. Hence, a particular determination of $`\tau `$ will lie within a fundamental domain of $`G`$ or some covering thereof. That is, $$\tau \in C[H/G]\quad \text{where}\quad G\subset PSL(2,𝐑),$$ (5) where $`C[]`$ denotes a certain covering. Any such domain is bounded by circular arcs and is thus conformally equivalent to the punctured $`2`$-sphere, $`𝐂\{a_1,\dots ,a_n\}`$. b) We know of no principle excluding the possibility that the number of vertices of $`\overline{D}`$ be different from that of $`D_\mathrm{\Gamma }`$. On the other hand, such extra singularities have no obvious physical interpretation. 
We therefore discard this possibility. Equipped with this information we will now determine the holomorphic map that takes $`D_\mathrm{\Gamma }`$ into the fundamental domain of microscopic couplings<sup>1</sup><sup>1</sup>1As will become clear below, the two domains cannot be isomorphic. $`\overline{D}`$. It follows from general arguments that this map has an expansion of the form $$\tau _{\text{eff}}=\tau +\sum _{n=0}^{\infty }c_nq^n\quad \text{where}\quad q=e^{\pi i\tau }.$$ (6) The coefficients $`c_i`$ represent the perturbative one-loop ($`i=0`$) and instanton ($`i>0`$) corrections, respectively. The contributions from odd instantons to $`\tau _{\text{eff}}`$ vanish. This is due to the fact that the part of the effective action determining the effective coupling is invariant under the $`𝐙_2`$-“parity” described in the last section. The first two non-vanishing coefficients of the expansion (6) are known. They are $$c_0=\frac{i}{\pi }4\mathrm{ln}2\quad \text{and}\quad c_2=-\frac{i}{\pi }\frac{7}{2^53^6}.$$ (7) To continue we use some elements of the theory of conformal mappings. That is, we consider the maps from the punctured $`2`$-sphere $`S=𝐂\{a_1,\dots ,a_n\}`$ to polygons in the upper half plane, bounded by circular arcs. Concretely we consider the sequence $`\tau _{\text{eff}}\to z\in S\to \tau `$ (see Fig.1). The form of such mappings is generally complicated. However, their Schwarzian derivative takes a remarkably simple form $$\{\tau ,z\}=\sum _i\frac{1}{2}\frac{1-\alpha _i^2}{(z-a_i)^2}+\frac{\beta _i}{z-a_i}.$$ (8) The parameters $`\alpha _i`$ measure the angles of the polygon in units of $`\pi `$. The accessory parameters $`\beta _i`$ do not have a simple geometric interpretation but are determined uniquely up to a $`SL(2,𝐂)`$ transformation of $`S`$. 
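As an aside, the two properties of the Schwarzian derivative that are used in what follows can be checked symbolically: it vanishes on Möbius maps (which is why the accessory parameters are fixed only up to an $`SL(2,𝐂)`$ transformation), and it obeys a composition rule that lets the two polygon maps be subtracted. The sketch below uses sympy; the function name and the sample maps are ours, not taken from the paper.

```python
import sympy as sp

z, w = sp.symbols('z w')

def schwarzian(f, x):
    """Schwarzian derivative {f, x} = f'''/f' - (3/2)(f''/f')^2."""
    fp = sp.diff(f, x)
    return sp.simplify(sp.diff(f, x, 3)/fp
                       - sp.Rational(3, 2)*(sp.diff(f, x, 2)/fp)**2)

# (i) A Moebius map has vanishing Schwarzian, so the accessory
#     parameters are determined only up to an SL(2,C) transformation.
assert schwarzian((2*z + 3)/(5*z + 7), z) == 0

# (ii) Composition rule {g(f), z} = {g, w}|_{w=f(z)} * (f'(z))^2 + {f, z},
#      which is what allows the two polygon maps to be subtracted below.
f, g = z**2 + 1, w**3
lhs = schwarzian(g.subs(w, f), z)
rhs = schwarzian(g, w).subs(w, f)*sp.diff(f, z)**2 + schwarzian(f, z)
assert sp.simplify(lhs - rhs) == 0
```

Both identities are exact, so the symbolic checks reduce to cancellation of rational functions.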
Furthermore they satisfy the conditions $$\sum _{i=1}^n\beta _i=0,\quad \sum _{i=1}^n\left[2a_i\beta _i+1-\alpha _i^2\right]=0,\quad \sum _{i=1}^n\left[\beta _ia_i^2+a_i\left(1-\alpha _i^2\right)\right]=0.$$ (9) In the present situation it is convenient to orient the polygons such that they have a vertex at infinity with zero angle (see Fig. 1), corresponding to the weak coupling singularity $`(\tau =\tau _{\text{eff}}=i\infty )`$. The above conditions then simplify to $$\sum _{i=1}^{n-1}\beta _i=0,\quad \sum _{i=1}^{n-1}\left(2a_i\beta _i-\alpha _i^2\right)=2-n.$$ (10) As explained at the beginning of this section, in the case at hand the polygon on the $`\tau _{\text{eff}}`$-side corresponds to the fundamental domain of $`\mathrm{\Gamma }_0(2)`$. The corresponding parameters $`(a_i,\alpha _i,\beta _i)`$ are given by $$a_1=-a_{-1}=1,\quad \beta _1=-\beta _{-1}=-\frac{1}{4},\quad \alpha _i=0.$$ (11) The parameters for the polygon on the $`\tau `$-side, $`(\stackrel{~}{a}_i,\stackrel{~}{\alpha }_i,\stackrel{~}{\beta }_i)`$, are to be determined. However, the conditions (10) together with the symmetry $`\tau \to -\overline{\tau }`$ leave only one free parameter. Indeed, without restricting the generality we can choose $`\stackrel{~}{a}_i=a_i`$. Furthermore $`\stackrel{~}{\alpha }_{-1}=\stackrel{~}{\alpha }_1`$. Then (10) implies $$\stackrel{~}{\beta }_1=-\stackrel{~}{\beta }_{-1}=-\frac{1}{4}\left(1-2\stackrel{~}{\alpha }_1^2\right),$$ (12) leaving only one parameter, $`\stackrel{~}{\alpha }_1`$ say, undetermined. As we shall now see, this parameter is in turn determined by the two-instanton contribution in (6). 
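The bookkeeping in (10)-(12) can be verified symbolically. Because some minus signs are lost in the equations as displayed, the sketch below encodes our reading of them, which is the standard sign convention for accessory parameters of circular-arc polygons: finite vertices at $`z=\pm 1`$ plus one at infinity with zero angle, so $`n=3`$.

```python
import sympy as sp

alpha = sp.symbols('alpha')      # plays the role of the free angle ~alpha_1

a    = {1: sp.Integer(1), -1: sp.Integer(-1)}            # finite vertices
beta = {1: -(1 - 2*alpha**2)/4, -1: (1 - 2*alpha**2)/4}  # our reading of (12)
ang  = {1: alpha, -1: alpha}                             # symmetric angles
n = 3                                                    # vertices incl. infinity

# first condition in (10): the accessory parameters sum to zero
assert sp.simplify(sum(beta.values())) == 0

# second condition in (10): sum(2 a_i beta_i - alpha_i^2) = 2 - n
assert sp.simplify(sum(2*a[i]*beta[i] - ang[i]**2 for i in (1, -1)) - (2 - n)) == 0

# the Gamma_0(2) polygon of (11) is recovered at alpha = 0
assert beta[1].subs(alpha, 0) == sp.Rational(-1, 4)
```

The check confirms that once the angles and vertex positions are fixed by symmetry, a single angle parameter exhausts the remaining freedom.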
For this we make use of the identity $$\{\tau ,\tau _{\text{eff}}\}\left(\frac{\partial \tau _{\text{eff}}}{\partial z}\right)^2=\{\tau ,z\}-\{\tau _{\text{eff}},z\}=\sum _{i=1}^{\stackrel{~}{n}-1}\left[\frac{1}{2}\frac{1-\stackrel{~}{\alpha }_i^2}{(z-\stackrel{~}{a}_i)^2}+\frac{\stackrel{~}{\beta }_i}{z-\stackrel{~}{a}_i}\right]-\sum _{i=1}^{n-1}\left[\frac{1}{2}\frac{1-\alpha _i^2}{(z-a_i)^2}+\frac{\beta _i}{z-a_i}\right].$$ (13) To continue we invert (6) as $$\tau =\tau _{\text{eff}}-c_0-c_2e^{-2\pi ic_0}q_{\text{eff}}^2+\left(2\pi ic_2^2-c_4\right)e^{-4\pi ic_0}q_{\text{eff}}^4+O(q_{\text{eff}}^6).$$ (14) Finally we need the form of $`\tau _{\text{eff}}(z)`$, that is the inverse modular function for $`\mathrm{\Gamma }_0(2)`$ $$\tau _{\text{eff}}=i\frac{{}_{2}F_{1}(\frac{1}{2},\frac{1}{2};1;\frac{z-1}{z+1})}{{}_{2}F_{1}(\frac{1}{2},\frac{1}{2};1;\frac{2}{1+z})}.$$ (15) This function has the asymptotic expansion for large $`z`$ $$\tau _{\text{eff}}(z)=\frac{i}{\pi }\left[3\mathrm{ln}2+\mathrm{ln}z-\frac{5}{16}\frac{1}{z^2}\right]+O(z^{-3}).$$ (16) Substituting (16) into the right hand side of (13) we end up with $$\{\tau ,\tau _{\text{eff}}\}\left(\frac{\partial \tau _{\text{eff}}}{\partial z}\right)^2=-\frac{7}{2\cdot 3^5}\frac{1}{z^4}+\left(\frac{77213}{2^53^{10}}+2^{10}i\pi c_4\right)\frac{1}{z^6}.$$ (17) Substitution of (17) into (13) then leads to $$\stackrel{~}{\alpha }_1^2=\frac{7}{2^23^5},$$ (18) which then fixes the erstwhile free parameter in $`\{\tau ,z\}`$. This is the result we have been aiming at. 
Indeed all higher instanton coefficients are now determined implicitly by the equation $$\{\tau ,\tau _{\text{eff}}\}=\left(\frac{\partial z}{\partial \tau _{\text{eff}}}\right)^2\left[\{\tau ,z\}-\{\tau _{\text{eff}},z\}\right].$$ (19) In order to integrate (19) one notices that any solution of (8) can be written as a quotient $$\tau (z)=\frac{u_1(z)+du_2(z)}{eu_1(z)+fu_2(z)},$$ (20) where $`u_1,u_2`$ are two linearly independent solutions of the hypergeometric differential equation $$(1+z)(1-z)\frac{d^2}{dz^2}u(z)+\left((c-2)z+c-2a-2b\right)\frac{d}{dz}u(z)+\frac{2ab}{1+z}u(z)=0,$$ (21) with $$c=1,\quad b(c-a)+a(c-b)=\frac{1}{2}\quad \text{and}\quad (a-b)^2=\stackrel{~}{\alpha }_1^2.$$ (22) The coefficients $`d,e,f`$ in (20) are determined by the asymptotic expansion $$\tau (z)=\frac{i}{\pi }\mathrm{ln}\frac{z}{2}+\frac{i}{8\pi }\frac{1}{z^2}\left(-\frac{5}{2}+\frac{7}{3^6}\right)+O(z^{-4}).$$ (23) Finally we substitute $`z`$ in (20) by $$z(\tau _{\text{eff}})\equiv \lambda _1=\frac{2}{\lambda _0(\tau _{\text{eff}})}-1,$$ (24) where $`\lambda _0`$ is the automorphic function inverse to $`\tau _{\text{eff}}`$, $$\lambda _0:\tau _{\text{eff}}\mapsto \lambda _0(\tau _{\text{eff}})\in 𝐂,\qquad \lambda _0(\tau _{\text{eff}})=16q_e\prod _{n=1}^{\infty }\left(\frac{1+q_e^{2n}}{1+q_e^{2n-1}}\right)^8\quad \text{with}\quad q_e=\mathrm{exp}(i\pi \tau _{\text{eff}}).$$ (25) This then integrates (19). To extract the instanton coefficients one needs the inverse map $`\tau _{\text{eff}}(\tau )`$. This can be done iteratively. We have done this to $`O(q^4)`$, allowing us to predict the $`4`$-instanton coefficient $$c_4=\frac{i}{\pi }\frac{717421}{2^63^{10}521}.$$ (26) We close with the observation that globally the inverse function $`\tau _{\text{eff}}(\tau )`$ cannot be single valued. Indeed, the existence of a single-valued inverse function $`z(\tau )`$ requires $`\stackrel{~}{\alpha }_1=p`$ or $`\stackrel{~}{\alpha }_1=1/p`$, $`p\in 𝐙`$. 
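The q-product in (25) is the classical product representation of the modular $`\lambda `$ function, which makes a quick numerical sanity check possible: the special value $`\lambda _0(i)=1/2`$, and $`\lambda _0\to 0`$ at weak coupling, so $`z=2/\lambda _0-1`$ indeed runs off to infinity there. A minimal sketch (the function name and truncation are ours):

```python
import cmath

def lam0(tau_eff, nmax=60):
    """Modular lambda function via the truncated q-product of eq. (25),
    with q_e = exp(i*pi*tau_eff)."""
    q = cmath.exp(1j*cmath.pi*tau_eff)
    prod = complex(1.0)
    for n in range(1, nmax + 1):
        prod *= ((1 + q**(2*n))/(1 + q**(2*n - 1)))**8
    return 16*q*prod

# classical special value: lambda(i) = 1/2
assert abs(lam0(1j) - 0.5) < 1e-10

# weak coupling: lambda -> 0 as Im(tau_eff) grows, so z = 2/lambda - 1 -> infinity
assert abs(lam0(8j)) < 1e-9
```

Since $`|q_e|<1`$ in the upper half plane, the product converges geometrically and a modest truncation already reproduces the special value to machine precision.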
As a consequence, the instanton corrected effective coupling $`\tau _{\text{eff}}(\tau )`$ has a cut somewhere in the strong coupling region. It would certainly be interesting to understand the origin of this branch cut from the non-perturbative physics of this model. Thus far this remains elusive to us. Acknowledgements: I.S. would like to thank the Department of Mathematics at Kings College London for hospitality during the writing up of this work. I.S. was supported by a Swiss Government TMR Grant, BBW Nr. 970557.
# First principles study of Li intercalated carbon nanotube ropes ## Abstract We studied Li intercalated carbon nanotube ropes by using first principles methods. Our results show charge transfer between Li and C and a small structural deformation of the nanotubes due to intercalation. Both the inside of the nanotubes and the interstitial space are susceptible to intercalation. The Li intercalation potential of a SWNT rope is comparable to that of graphite and almost independent of the Li density up to around LiC<sub>2</sub>, as observed in recent experiments. This density is significantly higher than that of Li intercalated graphite, making nanoropes a promising candidate for anode materials in battery applications. Carbon nanotubes are currently attracting interest as constituents of novel nanoscale materials and device applications. Novel mechanical, electronic, magnetic and chemical properties of these one-dimensional materials have been discovered. Single-walled nanotubes (SWNTs) form nanorope bundles with close-packed two-dimensional triangular lattices. These rope crystallites might offer an all-carbon host lattice for intercalation and energy storage. By analogy with Li intercalated graphite, carbon nanoropes are expected to be candidates for anode materials in Li-ion battery applications. Recent experiments found a much higher Li capacity (Li<sub>1.6</sub>C<sub>6</sub>) in SWNTs than in graphite (LiC<sub>6</sub>). The Li capacity can be further improved up to Li<sub>2.7</sub>C<sub>6</sub> after ball-milling the nanotube samples. This high Li capacity of nanoropes implies lower weight and longer lifetime in battery applications. First principles calculations have been successfully used to identify cathode materials for lithium batteries. In previous theoretical work, K-doped small individual carbon nanotubes were studied by first principles electronic structure calculations. An empirical force field model was also employed to simulate K-doped SWNT ropes. 
However, there has been no first principles study of Li-intercalated SWNT ropes, and many questions remain open: (1) what is the maximum Li intercalation density; (2) where do the intercalated Li ions sit; (3) what is the nature of the interaction between Li and the carbon nanotube; (4) does the intercalation modify the structure of the nanotube. In this letter, we present results obtained by first principles SCF pseudopotential calculations. Several model systems of intercalated nanotube bundles are studied and the results are discussed in the light of the available experiments. In this work, first principles SCF pseudopotential total-energy calculations and structural minimizations are carried out within the framework of the local-density approximation on a plane-wave basis with an energy cutoff of 35 Ry. The Car-Parrinello algorithm with the $`\mathrm{\Gamma }`$-point approximation is used in the electronic structure minimization. The ion-electron interaction is modeled by Troullier-Martins norm-conserving nonlocal pseudopotentials in the Kleinman-Bylander form. The plane-wave pseudopotential program CASTEP is used for structural minimization on some selected systems. The tube bundle is modeled by a uniform two-dimensional hexagonal lattice. The SWNTs studied here include both (10,0) and (12,0) zigzag and (8,8) and (10,10) armchair tubes. Li intercalated graphite and bulk Li are also investigated as references. The initial configurations of the Li atoms are assumed to be on high-symmetry sites which maximize the Li-Li distance (see Table II for details). In Fig.1, we plot the relaxed structure and charge density of a (10,0) tube bundle with 5 Li atoms per unit cell. After structural minimization, the Li atoms shift only slightly from their initial symmetric configuration. This shows that symmetry and maximum Li-Li distance are good criteria for choosing the Li configuration. The Li intercalation also slightly modifies the shape of the carbon nanotubes (see Fig.1 and Fig.2). 
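The site-selection criterion described above, symmetric candidate positions keeping the Li-Li separation as large as possible, can be sketched as a greedy max-min search over candidate sites with periodic boundary conditions. All coordinates and the cell dimensions below are made-up illustrations, not the actual geometries of Table II:

```python
import itertools
import math

def min_image_dist(p, q, box):
    """Distance between two sites under the minimum-image convention."""
    d2 = 0.0
    for x, y, length in zip(p, q, box):
        dx = x - y
        dx -= length*round(dx/length)     # wrap into [-L/2, L/2]
        d2 += dx*dx
    return math.sqrt(d2)

def pick_sites(candidates, k, box):
    """Greedily pick k sites, each maximizing its distance to those chosen."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        rest = [c for c in candidates if c not in chosen]
        chosen.append(max(rest, key=lambda c: min(min_image_dist(c, s, box)
                                                  for s in chosen)))
    return chosen

# hypothetical 10 x 10 x 4.2 Angstrom cell with a 4x4 grid of candidate sites
box = (10.0, 10.0, 4.2)
cands = [(2.5*i, 2.5*j, 0.0) for i, j in itertools.product(range(4), range(4))]
sites = pick_sites(cands, 4, box)
closest = min(min_image_dist(p, q, box)
              for p, q in itertools.combinations(sites, 2))
assert abs(closest - 5.0) < 1e-9   # the 4 chosen sites end up 5 A apart
```

Greedy farthest-point selection is only a heuristic, but it reproduces the "maximum Li-Li distance" starting configurations before relaxation.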
This result differs from a previous empirical force field simulation of K-doped (10,10) SWNTs, in which significant deformation of the nanotubes was found (using the same force field model, we also obtain large deformations). The discrepancy between the first principles and empirical calculations demonstrates the importance of quantum effects and the insufficiency of empirical potentials for such systems. Fig.1 shows that there is almost no total charge density in the space between the SWNTs. To further understand the charge distribution, in Fig.2 we present contour plots of both occupied (a) and empty (b) orbitals near the Fermi level. We find that the conduction band orbitals are concentrated on the carbon tubes, while the empty states have some distribution passing through the Li sites. In Fig.3, we compare the band structure near the Fermi energy of the tube bundle with that of the intercalated bundle. Although the individual (10,0) tube and its bundle are both semiconductors, the intercalated tube bundle is found to be metallic. For the valence band, only a small modification upon intercalation is observed. In contrast, the hybridization between lithium and carbon has a significant influence on the conduction band and introduces some new states, similar to earlier findings. All these analyses show that there is almost complete charge transfer and that the conduction electrons mainly occupy the bands originating from the carbon nanotubes. Our calculations on other nanotube bundles, such as the (10,10) tube bundle (Li<sub>5</sub>C<sub>40</sub>), show similar charge transfer between the Li ions and the carbon host. The observation of charge transfer agrees with previous ab initio calculations on K-doped individual small carbon nanotubes. A similar effect is well known in alkali-metal-doped fullerenes. Experimentally, the charge transfer is supported by Raman and NSR measurements on alkali-metal-doped SWNT materials. These suggest that the cohesion between Li and the carbon nanotubes is mainly ionic. 
However, we have tried a simple model in which complete charge transfer and a uniform charge distribution on the C atoms are assumed. Interactions with different screening lengths were tested. We find that this simple model is not sufficient to describe our results, indicating the importance of screening and electron correlations. To understand where the Li ions can be intercalated, we compare the intercalation energies of two typical high-symmetry Li sites, the center of the tube and the interstitial site of the hexagonal lattice. The total energy and equation of state of the tube bundle are calculated via the Car-Parrinello electronic minimization method. The optimal distance between neighboring tubes is then determined. The intercalation energy is obtained by subtracting the energy of the pure nanorope from the total energy of the intercalated system. SWNT bundles composed of (10,0), (8,8), (12,0) and (10,10) tubes are studied (Table I). In general, the energy of the Li atoms inside the tube is found to be lower than or comparable to that of those outside the tube, implying that both the inside and the outside of the nanotube are favorable for Li intercalation. For smaller tubes, the center of the tube is less favorable because of the strong core repulsion between the Li ions and the carbon walls. To study this issue further, we consider nanoropes with different intercalation densities, putting a certain number of lithium atoms at both interstitial sites and sites inside the tube. Typical Li configurations are briefly illustrated in Table II. We find that the energies of the Li sites outside and inside the nanotube are comparable even up to rather high intercalation densities. For instance, the energy difference between nine Li ions all inside and all outside the (10,10) tube is only 0.36 eV per Li atom. We also find that the intercalation energy is not sensitive to the Li arrangement at higher concentrations. 
All these results imply that both the inside and the outside of the tubes can be intercalated simultaneously to achieve a higher Li density. In recent experiments, the intercalation density of an as-prepared SWNT bundle sample was found to be Li<sub>1.6</sub>C<sub>6</sub> , improving to Li<sub>2.7</sub>C<sub>6</sub> after proper ball-milling . The experimental size of the carbon nanotubes is close to that of the (10,10) tube. We suggest that the ball-milling process creates defects or breaks the nanotubes, allowing the Li ions to intercalate inside the tube. To understand the experimental intercalation density, we study intercalated (10,0), (12,0) and (10,10) tube bundles with intercalation densities ranging from 0 to 28 Li ions per unit cell. Typical Li configurations are given in Table II. The intercalation energy as a function of intercalation density for the nanoropes is compared with that of graphite in Fig.4. We find that the intercalation energy per carbon atom increases linearly with the intercalation density for the different tube bundles up to about Li<sub>0.6</sub>C. In contrast, Li intercalation in graphite already saturates at around Li<sub>0.35</sub>C. We find that the intercalation potentials, defined by taking the derivative of the intercalation energy with respect to the intercalation density, are almost the same for all the tube bundles. They are comparable to the intercalation potential of graphite and about 0.1 eV higher than the formation energy of bulk lithium. From the above results, we conclude that the nanorope has a higher capacity for hosting Li atoms if Li can penetrate into the interior space of the nanotubes. This agrees with the experimental finding that the intercalation density in ball-milled SWNT bundles can reach up to Li<sub>2.7</sub>C<sub>6</sub>, much higher than the LiC<sub>6</sub> of graphite . The higher Li capacity of the nanotubes can be related to the low carbon density of the nanotube bundle. 
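The intercalation potential defined above is just a derivative, which for tabulated energy-density points reduces to finite differences. The sketch below illustrates this with made-up numbers (a linear energy curve with an arbitrarily chosen slope), not with the calculated data of Fig.4:

```python
def intercalation_potential(energies, densities):
    """Intercalation potential as the derivative of the intercalation
    energy with respect to intercalation density, approximated here by
    forward finite differences between successive tabulated points.
    """
    return [(e2 - e1) / (d2 - d1)
            for e1, e2, d1, d2 in zip(energies, energies[1:],
                                      densities, densities[1:])]

# A linear energy-vs-density curve (slope -2.0 eV, a placeholder)
# yields a density-independent potential, the behaviour reported for
# the tube bundles up to about Li_0.6 C.
dens = [0.0, 0.25, 0.5, 0.75]    # Li atoms per C atom (illustrative)
ener = [-2.0 * d for d in dens]  # intercalation energy per C atom
print(intercalation_potential(ener, dens))  # [-2.0, -2.0, -2.0]
```

A saturating energy curve, as in graphite above Li<sub>0.35</sub>C, would instead show the finite-difference potential rising toward zero.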
For example, the average atomic volume for carbon in the (10,10) tube bundle is about 60$`\%`$ larger than that of graphite. The calculated saturation intercalation density is also about 60$`\%`$ higher in the (10,10) tube bundle than in graphite. Additional understanding of the high Li concentration can be gained by examining the work function (WF) of the nanotube. Although we are unaware of any experimental measurements of the WF of SWNTs, one might expect it to be close to that of C<sub>60</sub> thin films (4.85 eV) and higher than the WF of graphite (4.44 eV) . Thus, the electrons in a nanorope have lower energy than those in graphite. In summary, we have performed first-principles calculations of the total energy and electronic structure of Li-intercalated SWNT nanoropes. The main conclusions are: (1) almost complete charge transfer occurs between the Li atoms and the SWNTs; (2) the deformation of the nanotube structure after intercalation is relatively small; (3) energetically, the inside of the tube is as favorable as the interstitial sites for intercalation; (4) the intercalation potential of Li/SWNT is comparable to the formation energy of bulk Li and independent of the Li density up to about Li<sub>0.5</sub>C; (5) the intercalation density of SWNT bundles is significantly higher than that of graphite. These results suggest that nanoropes are a promising candidate anode material for battery applications. This work is supported by the U.S. Army Research Office Grant No. DAAG55-98-1-0298, the Office of Naval Research Grant No. N00014-98-1-0597 and NASA Ames Research Center. The authors thank Dr. O. Zhou and Mr. B. Gao for helpful discussions. We gratefully acknowledge computational support from the North Carolina Supercomputer Center. : zhaoj@physics.unc.edu : jpl@physics.unc.edu
# Metallicity distribution of bulge planetary nebulae and the [O/Fe] × [Fe/H] relation ## 1 Introduction Recent chemical evolution models usually predict different relationships between the \[$`\alpha `$-elements/Fe\] and the metallicity as measured by the \[Fe/H\] ratio for the different phases that comprise the Galaxy, namely the disk, bulge and halo (see for example Pagel pagel (1997)). These relationships basically reflect the rate at which these elements are produced in different scenarios, which is usually faster in the bulge and halo compared to the disk in the framework of an inside-out model for Galaxy formation. Metallicities of bulge stars are poorly known compared with those of disk objects, as only limited samples of well measured stars are available. As a consequence, the derived metallicity distribution and the corresponding ratios between the $`\alpha `$-elements and metallicity are not well known. In this work, a sample of bulge planetary nebulae (PN) with relatively accurate abundances is used to shed some light on the \[O/Fe\] $`\times `$ \[Fe/H\] relationship adopted for the bulge. It will be shown that this relation probably exaggerates the amount of oxygen produced at a given metallicity. ## 2 Metallicity distribution of bulge PN Many bulge, or type V, PN (Maciel maciel89 (1989)) are known, but only recently have accurate abundances been obtained. Recent work by Cuisinier et al. (cuisinier (1999)) and Costa & Maciel (costa (1999)) has led to He, O, N, Ar and S abundances for about 40 bulge PN, with an uncertainty comparable to that of disk objects, namely up to 0.2 dex for O/H. The bulge O/H abundances are generally comparable with those of the disk, and the O/H, Ar/H and S/H ratios can be higher than their disk counterparts even though very metal rich PN are missing in the bulge. Since underabundant nebulae are also present, these results suggest that the bulge contains a mixed population, so that star formation in the bulge spans a wide time interval. 
Chemical abundances of PN in the bulge of M31 have been recently studied by Jacoby & Ciardullo (jacoby (1999)), who have included in their analysis some results by Stasińska et al. (stasinska (1998)) and Richer et al. (richer (1999)). Fig. 1 shows a comparison of this sample (42 PN, top panel) with the galactic bulge objects from Ratag et al. (ratag (1997)) (103 PN, lower panel, thin line) and Cuisinier et al. (cuisinier (1999)) (30 PN, lower panel, thick line). The oxygen abundance is given in the usual notation, $`ϵ(\mathrm{O})=\mathrm{log}(\mathrm{O}/\mathrm{H})+12`$. It can be seen that all the O/H abundance distributions are similar, peaking around 8.7 dex, and showing very few if any super metal rich objects with supersolar abundances. In order to compare the PN metallicity distribution with the stellar distributions, it is necessary to convert the measured nebular O/H abundances into the usual \[Fe/H\] metallicities relative to the Sun. Direct measurements are of limited usefulness, for two main reasons. First, a very small number of planetary nebulae have measured iron lines, due to their weakness and the relatively large distances of the nebulae, so that the derived values are more uncertain than the usual O, N or Ar abundances. Second, all available measurements indicate a strong depletion usually attributed to grain formation, so that the measured Fe abundances should be considered as lower limits to the total abundances at the times of formation of the PN progenitor stars. Both these aspects are illustrated in Fig. 2, where we plot the \[Fe/H\] abundances against oxygen for a group of four disk planetary nebulae from Perinotto et al. (perinotto (1999), squares with error bars) and one object by Pottasch & Beintema (pottasch (1999), triangle with error bars). In all conversions we have used the solar iron abundance $`ϵ(\mathrm{Fe})_{\odot }=7.5`$ (Anders & Grevesse anders (1989)) and average O/H error bars. 
A better way to convert O/H abundances into \[Fe/H\] metallicities is to use theoretical \[O/Fe\] $`\times `$ \[Fe/H\] relationships, such as those recently derived by Matteucci et al. (matteucci (1999)) both for the solar neighbourhood and the galactic bulge. The corresponding relations are more suitably plotted in Fig. 2 using $`ϵ(\mathrm{O})_{\odot }=8.9`$ (Anders & Grevesse anders (1989)), both for the bulge (solid line) and the solar neighbourhood (dotted line). Taking into account the relations shown in Fig. 2, the O/H distribution of bulge PN from Cuisinier et al. (cuisinier (1999)) can be converted into a \[Fe/H\] distribution, as shown in Fig. 3. Again the solid line corresponds to the \[O/Fe\] $`\times `$ \[Fe/H\] relationship for the bulge, and the dotted line was obtained using the solar neighbourhood relation shown in Fig. 2. As a comparison, the dashed histogram in Fig. 3 shows the metallicity distribution of bulge K giant stars in Baade’s Window by McWilliam & Rich (mcwilliam (1994)). It can be seen that the PN metallicity distribution looks similar to the K giant distribution if the solar neighbourhood \[O/Fe\] $`\times `$ \[Fe/H\] relation is adopted, but when the bulge relation is taken into account the derived distribution is displaced towards lower metallicities by roughly 0.5 dex. The PN samples were carefully selected not to include disk objects (see a detailed discussion in Cuisinier et al. cuisinier (1999)), and their progenitor stars are clearly not massive enough to appreciably change their initial O/H composition. Therefore, the bulge PN metallicity distribution is expected to be similar to that of the K giants. This conclusion is strengthened by the earlier measurements of Sadler et al. 
(sadler (1996)) for Mg and by the results recently presented by Feast (feast (2000)), who has shown that the metallicity distribution of bulge Mira variables peaks around \[Fe/H\] $`\sim 0`$, which is again higher by roughly 0.5 dex than that of the bulge PN shown by the solid histogram in Fig. 3. Since both classes of objects are basically the offspring of the evolution of intermediate- to low-mass stars, their metallicity distributions should also be similar. These objects may be located within a certain distance from the galactic centre, but, as discussed by Frogel (frogel (1999)), any gradient between Baade’s Window and the Centre must be small, amounting to at most a few tenths of a dex. ## 3 Discussion Several reasons can be considered in order to explain the discrepancy between the bulge PN distribution shown in Fig. 3 and the K giants of McWilliam & Rich (mcwilliam (1994)): (i) systematic errors in the O/H abundances of planetary nebulae, leading to mild to strong underestimates of this quantity; (ii) ON cycling in the PN progenitor stars, which would lead to a depleted O/H ratio; (iii) statistical uncertainties due to the fact that the considered samples are relatively small and probably incomplete; and (iv) uncertainties in the adopted \[O/Fe\] $`\times `$ \[Fe/H\] relationships. Let us briefly consider each of these possibilities. First, some recent work has raised the possibility that the O/H abundances of planetary nebulae may be underestimated, in view of the discrepancies between the forbidden-line and recombination-line values for a number of objects (Mathis & Liu mathis (1999), Liu & Danziger liu (1993)). In principle, this would apply to all the PN samples considered in this work, namely the bulge PN of Ratag et al. (ratag (1997)) and Cuisinier et al. (cuisinier (1999)) and the objects in the bulge of M31 of Jacoby & Ciardullo (jacoby (1999)). 
There are no detailed published (under)abundance analyses for a large number of objects, but in order to explain the discrepancy shown in Fig. 3, the O/H abundances would have to be underestimated by about 0.5 dex, or a factor of 3. This factor should also be applied to disk PN, as their abundances are derived using basically the same methods used for bulge PN. However, Cuisinier et al. (cuisinier (1999)) have shown that their metallicity distribution for bulge PN is similar to the corresponding distribution of disk PN given by Maciel & Köppen (mk94 (1994)). This is confirmed by the recent (although smaller) sample of disk PN used by Maciel & Quireza (mq99 (1999)) to study radial abundance gradients in the galactic disk. This sample includes 128 disk PN, and an application of the solar neighbourhood \[O/Fe\] $`\times `$ \[Fe/H\] relationship of Matteucci et al. (matteucci (1999)) (dotted line in Fig. 2) produces a \[Fe/H\] distribution peaked at \[Fe/H\] $`\sim -0.3`$. This is very similar to the recent G-dwarf metallicity distribution of the solar neighbourhood by Rocha-Pinto & Maciel (rpm96 (1996)), within the usually adopted uncertainties in the PN abundances of up to 0.2 dex for oxygen. Therefore, if we were to apply a correction factor of about 0.5 dex to the O/H abundances, the derived disk PN distribution would imply a very large number of extremely metal-rich objects, whose nature would be very difficult to explain. Moreover, the PN distribution would peak at about 9.2 dex, which means that most disk PN would be more oxygen rich than the Sun by about a factor of 2. Even considering that the populations of PN and G-dwarfs have somewhat different ages, no known age-metallicity relation would be sufficient to explain such a large overabundance (see for example the age-metallicity relation by Twarog twarog (1980) or the recent work by Rocha-Pinto et al. rpsmf99 (1999) and references therein). 
We conclude that any underestimate of the O/H abundances in PN would have to be much lower than 0.5 dex, and could be well accommodated within an average uncertainty of 0.2 dex. Of course, this does not exclude the possibility that some individual nebulae may have strongly underestimated abundances, but the average factor is probably much lower than 0.5 dex. Further work is needed, possibly including a sizable sample of disk PN with well measured forbidden-line and recombination-line abundances. Second, the possibility of ON cycling has often been mentioned for planetary nebulae (see for example Maciel maciel92 (1992)), basically on the grounds of an apparent anticorrelation of the N/O ratio with He/H for disk PN. However, this phenomenon is only expected to occur in Type I PN (cf. Peimbert peimbert (1978)), which are formed mainly from the higher-mass progenitor stars. These objects have an excess of He/H and/or N/O, and their oxygen abundances are in fact slightly lower than those of the most common Type II objects (see for example Maciel maciel00 (2000)). However, the amount by which the O/H ratio is decreased is very small (roughly 0.1 dex), and Type I objects are explicitly excluded from the disk sample of Maciel & Quireza (mq99 (1999)). Regarding the bulge PN, almost all objects show no trace of He/H or N/O enrichment, so this possibility cannot explain the discrepancy shown in Fig. 3. Third, statistical uncertainties are more difficult to analyze, since the considered samples are relatively small and may be affected by observational selection effects. However, all the distributions can be understood in terms of general models for the chemical evolution of the Galaxy, and the similarities between the metallicity distributions of different objects such as G-dwarfs and disk PN, or bulge PN, Mira variables and K giants, fit rather nicely into the framework of galactic evolution, within the uncertainties of the derived abundances. 
Considering the bulge PN in particular, the similarity of the three different distributions shown in Fig. 1 is striking, even though they reflect different samples studied with different techniques. Small differences, such as the lack of oxygen-rich PN in the Cuisinier et al. (cuisinier (1999)) sample, may be explained by the smaller size of this sample. However, we are interested in very broad aspects of these distributions, and not in the detailed behaviour at a given metallicity or range of metallicities, so it is unlikely that statistical uncertainties could produce the discrepancy shown in Fig. 3. Fourth, we are left with the possibility that the bulge \[O/Fe\] $`\times `$ \[Fe/H\] relation as given by Matteucci et al. (matteucci (1999)) or, equivalently, the \[Fe/H\] $`\times \mathrm{log}`$ O/H relation given by the solid line in Fig. 2 might be responsible for all or most of the discrepancy in the metallicity distributions shown in Fig. 3. Such a relation assumes a faster evolution during the bulge formation, so that at a given metallicity the \[O/Fe\] ratio is higher than in the solar neighbourhood. Although this is correct in principle, the present results suggest that the amount of oxygen produced has been overestimated, leading to an excess of O/H for a given metallicity. In fact, the earlier models of Matteucci & Brocato (mattbroc (1990)) predict a lower \[O/Fe\] enhancement, producing a better agreement with the present results. On the other hand, there is some independent evidence that the theoretical \[O/Fe\] in the bulge may be overestimated. A recent analysis by Barbuy and collaborators of one star in the bulge globular cluster NGC 6528 (see for example Barbuy barbuy (1999)) shows that \[O/Fe\] $`\simeq 0.35`$ for a metallicity \[Fe/H\] $`\simeq -0.6`$, so that \[O/H\] $`\simeq -0.25`$ and $`\mathrm{log}`$ O/H + 12 $`\simeq 8.65`$. This object is also shown in Fig. 2 (asterisk) and is closer to the dotted curve than to the bulge relation (solid line). 
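The numbers quoted for the NGC 6528 star follow from the standard bracket identities, [O/H] = [O/Fe] + [Fe/H] and ϵ(O) = ϵ(O)(solar) + [O/H], with the solar oxygen abundance of 8.9 adopted in this work. A minimal check:

```python
EPS_O_SUN = 8.9  # solar log(O/H) + 12 from Anders & Grevesse (1989)

def eps_O_from_bracket(fe_h, o_fe):
    """Convert [Fe/H] and [O/Fe] into eps(O) = log(O/H) + 12,
    using the identity [O/H] = [O/Fe] + [Fe/H] and then
    eps(O) = eps(O)_solar + [O/H]."""
    o_h = o_fe + fe_h
    return o_h, EPS_O_SUN + o_h

# The NGC 6528 star quoted in the text: [O/Fe] ~ 0.35 at [Fe/H] ~ -0.6
o_h, eps_o = eps_O_from_bracket(fe_h=-0.6, o_fe=0.35)
print(round(o_h, 2), round(eps_o, 2))  # -0.25 8.65, as quoted
```

The same two-line conversion, with [O/Fe] taken from a theoretical relation instead of a measurement, is what turns the O/H histograms of Fig. 1 into the [Fe/H] distributions of Fig. 3.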
Also, for all six bulge giants with measurable \[OI\] features in the McWilliam & Rich (mcwilliam (1994)) sample, the \[O/Fe\] ratio is lower by 0.2 to 0.7 dex than predicted by the \[O/Fe\] $`\times `$ \[Fe/H\] relationship of Matteucci et al. (matteucci (1999)). These stars are also shown in Fig. 2 (crosses), and are clearly located to the left of the bulge curve, even allowing for some uncertainty in the O/H abundances. The average difference is about $`0.5`$ dex, which is just what is needed to eliminate the offset of the bulge PN metallicity distribution shown in Fig. 3. Therefore, it can be concluded that the theoretical \[O/Fe\] $`\times `$ \[Fe/H\] relation for the bulge probably overestimates the oxygen enhancement relative to iron by 0.3 to 0.5 dex, at least for metallicities \[Fe/H\] $`\gtrsim -1.5`$ dex. ###### Acknowledgements. I am indebted to B. Barbuy, F. Matteucci, F. Cuisinier and R.D.D. Costa for some helpful discussions. This work was partially supported by CNPq and FAPESP.
# Ordering in a spin glass under applied magnetic field \[ ## Abstract Torque, torque relaxation, and magnetization measurements on a AuFe spin glass sample are reported. The experiments, carried out up to $`7T`$, show a transverse irreversibility line in the $`(H,T)`$ plane up to high applied fields, and a distinct strong longitudinal irreversibility line at lower fields. The data demonstrate that for this type of sample, a Heisenberg spin glass with moderately strong anisotropy, the spin glass ordered state survives under high applied fields, in contrast to predictions of certain ”droplet”-type scaling models. The overall phase diagram closely resembles those of mean field or chiral models, which both have replica symmetry breaking transitions. \] For the infinite dimension or mean field spin glass, there is a true replica symmetry breaking (RSB) phase transition under an applied field with, for Heisenberg spins, a transverse irreversibility onset followed at lower fields by a crossover to longitudinal irreversibility . The mean field $`(H,T)`$ phase diagram including the effect of anisotropy has been extensively studied theoretically. It has been strongly argued that the physics of spin glasses below the upper critical dimension $`d=6`$ is basically similar to that in infinite dimension . If real spin glasses in dimension three undergo an RSB transition, one should expect to find an in-field phase diagram qualitatively similar to that of mean field theory. Alternatively, if the standard Fisher-Huse scaling (or ”droplet”) scenario is physically correct, in three dimensions a true transition should exist only in zero field and no irreversibilities should be seen under applied fields. The experimental situation is not clear cut and is complicated by the fact that it has been hard to find a crucial physical measurement to rule out or alternatively to definitively establish the existence of an in-field frozen state. 
Magnetization experiments have been analysed in terms of transition lines in the $`(H,T)`$ plane, while susceptibility data (on an Ising-like material) have been interpreted as demonstrating an absence of ordering in a finite field . Torque measurements have the advantage of being directly sensitive to transverse irreversibility. The Dzyaloshinski-Moriya (DM) interaction is the source of magnetic anisotropy in spin glasses, leading to a torque when an applied field is turned . Because of the special character of the spin glass anisotropy, this torque is observed only if there is a frozen-in spin arrangement. If the spin glass is in a paramagnetic state, meaning that the spins can reorganize themselves locally as soon as the field is turned, there will be no torque for an isotropic polycrystalline sample. The torque criterion for identifying a frozen spin glass state was exploited early on over a restricted field range , but although the question of spin glass ordering and aging has been addressed by progressively more sophisticated magnetization and susceptibility experiments (see for instance ), the torque technique has been neglected; in particular, there have been no systematic comparisons of torque and magnetization on one and the same sample over an extended field range. Thus the theoretical predictions have only been incompletely tested. Here we report extensive torque, torque relaxation, and magnetization measurements to high fields on a sample of the archetypal spin glass, AuFe. The torque data show a clear transverse irreversibility transition line below which the spin arrangement remains frozen over very long times, even under strong applied fields. The magnetization data indicate a quite distinct strong longitudinal irreversibility line. The experimental in-field phase diagram bears a striking qualitative resemblance to that of the Heisenberg mean field model with strong anisotropy. We studied a sample of Au 5% Fe prepared by standard metallurgical techniques. 
AuFe is a Heisenberg spin glass with moderately strong DM anisotropy . The sample was heavily cold worked and then annealed to guarantee homogeneity. The $`T_g`$ estimated with the applied field extrapolated to zero is $`20.6K`$. The torque measurements were performed using a capacitance method ; applied fields up to $`7T`$ were provided by a horizontal superconducting Helmholtz coil. The main experimental difficulty was eliminating parasitic signals arising from the interaction of the sample moment with a residual field gradient from the coils. Magnetization measurements were carried out on a commercial SQUID instrument. The principal protocol used for the torque measurements was to field cool (FC) the sample in an applied field $`H`$ to the measuring temperature $`T`$; once the temperature was established, the field was turned, typically by an angle of $`5^{\circ }`$. The torque was measured from a few seconds after the turn and for times up to an hour. We will first describe the overall pattern of the torque signals as a function of $`H`$ and $`T`$. Fig.1 shows the observed torque values; each point corresponds to a separate FC run. We have chosen to plot the points measured 30 seconds after the field was turned. Relaxation effects will be discussed later on. The DM anisotropy is due to a sum of terms of the form $`𝐃_{ij}\cdot (𝐒_i\times 𝐒_j)`$ . Each time a sample is cooled, either in field or in zero field, the spins conspire to minimize the total spin-spin interaction plus anisotropy energy by taking up an appropriate configuration. Once a rigid configuration has been established, turning it bodily costs energy, leading to anisotropy with respect to its original orientation. If the spins can completely rearrange, they can take up a configuration which is different on the microscopic level, so the anisotropy reorients, and the torque disappears. Zero torque thus indicates a paramagnetic state. 
Suppose that a spin glass has a strictly rigid spin configuration with magnetization $`M(H)`$ and a field-independent spin glass anisotropy $`K`$. Then the torque signal $`\mathrm{\Gamma }`$ when the applied field $`H`$ is turned by an angle $`\theta `$ is given by $$\theta /\mathrm{\Gamma }=1/K+1/(M_rH)$$ (1) For a series of points each taken after cooling in field, the torque signal $`\mathrm{\Gamma }(H)`$ will initially increase with field as $`H^2`$, because the magnetization is proportional to $`H`$; when the limit $`HM\gg K`$ is reached, the torque will saturate at a field-independent value depending only on $`K\mathrm{sin}\theta `$. This is what is observed at the lowest temperatures in Fig.1. However, at higher temperatures the observed torque signal initially increases with field as at low temperatures; it reaches a peak at a field $`H_p(T)`$ and then for higher fields it decreases again until it becomes unobservably small at a critical field $`H_c(T)`$. Thus the data show that at low temperatures $`K`$ tends to become field-independent for the range of fields available to us, i.e. the spin configuration is almost rigid and the DM anisotropy after cooling in field is almost independent of the value of the field. With increasing temperature the low-field behaviour is still of the same form, so $`K(T)`$ is still essentially field-independent, but $`K(T)`$ decreases regularly. This is because local spin-wave-like fluctuations reduce the time-averaged effective local spin moments $`<𝐒_i>`$, so each term in the DM expression above becomes weaker on increasing temperature. Higher fields lead to the peak effect, showing that a combination of temperature and field begins to weaken the rigid state. The spins are still frozen, but they have become free to select configurations for which the DM terms are weaker, producing a progressive reduction of $`K(H,T)`$ with field. 
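Equation (1) already encodes the low-field rise and high-field saturation described here. A short numerical sketch makes this explicit; the values of K, the susceptibility and the turning angle below are arbitrary illustrative choices (with the angle small enough that sin θ ≈ θ), not values fitted to the data:

```python
import math

def torque(H, K, chi, theta):
    """Torque from Eq. (1) for a rigid spin configuration:
    theta / Gamma = 1/K + 1/(M_r * H), with the magnetization taken
    as M_r = chi * H in the low-field, linear-response regime.
    All quantities in arbitrary units.
    """
    if H == 0.0:
        return 0.0
    M_r = chi * H
    return theta / (1.0 / K + 1.0 / (M_r * H))

K, chi, theta = 1.0, 1.0, math.radians(5.0)

# Low field: Gamma grows as H^2, so doubling H roughly quadruples it.
low = torque(0.01, K, chi, theta)
print(round(torque(0.02, K, chi, theta) / low, 2))  # 4.0

# High field: Gamma saturates at K * theta, independent of H.
print(round(torque(1000.0, K, chi, theta) / (K * theta), 4))  # 1.0
```

The peak and subsequent decrease seen at higher temperatures are not contained in this rigid-configuration sketch; they require K itself to weaken with field, as described above.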
Finally, above $`H_c(T)`$ the system can completely rearrange the spins on a timescale short compared with that of the measurement, and there is no longer any observable anisotropy. Above this critical field the system has entered the paramagnetic state. (The precise behaviour of a spin glass with weak anisotropy may well be rather different.) This measurement gives detailed information on the progressive manner in which a spin glass system loses rigidity under increasing applied fields and temperatures. Individual points on the $`H_c(T)`$ curve were estimated by plotting $`\mathrm{log}(\mathrm{\Gamma }(H,T))`$ against $`T`$ and observing the intersection with the noise level, which was typically $`0.01`$ in the units of Figure 1. We have also estimated the position of the critical line $`H_c(T)`$ in a complementary and more sensitive way by measuring the torque signal as a function of time after turning. $`H_c(T)`$ is then defined as the field above which there is no observable torque relaxation (and so no observable torque above the noise). The two sets of estimates are entirely consistent. Error bars are indicated on Figure 2. Magnetic measurements provide an alternative method for identifying an irreversibility line. Field cooled and zero field cooled (ZFC) magnetizations are compared; the onset of a difference between the two indicates irreversibility. For a CuMn sample, Kenning, Chu and Orbach observed ”strong” and ”weak” irreversibility lines; they identified the latter with a transverse irreversibility, as had been seen in torque measurements over a restricted range of fields . We have carried out magnetization measurements on the present sample. Following Kenning et al, we have plotted the difference between $`M_{FC}`$ and $`M_{ZFC}`$. $`[M_{FC}-M_{ZFC}]/M_{FC}(5K)<10^{-3}`$ gives a criterion which defines an effective critical temperature at each field. 
(In any case, theory suggests this line is a crossover and so intrinsically fuzzy.) To the precision of our SQUID measurements, we could not observe a weak irreversibility line, and our critical points $`H_{cm}(T)`$ correspond to the strong irreversibility of Kenning et al. It can be noted that while the torque gives a transverse irreversibility criterion for $`H_c(T)`$ which is very clear-cut experimentally, the weak longitudinal irreversibility criterion of Kenning et al requires painstaking measurements of tiny magnetization differences between successive FC and ZFC runs. Even the strong irreversibility signal becomes small at high fields (less than 1 percent of $`M_{FC}`$ at $`6K`$ by $`3T`$). Although the transverse irreversibility line can be taken as representing a true transition, it is a ”stealthy” transition: essentially invisible in any longitudinal measurement, whether by magnetization differences or a.c. susceptibility. This implies that except at low fields, no longitudinal magnetization measurement can be used as a reliable probe of the onset of true spin freezing, and transverse irreversibility must be studied in order to establish an $`(H,T)`$ phase diagram. The results for the phase diagram using these alternative criteria are displayed in Fig.2. The $`H_c(T)`$ line is of similar form to that already observed in torque measurements on a Au 2% Fe sample at low fields ; the present results extend the torque data by an order of magnitude in field range and provide longitudinal measurements on one and the same sample. The $`H_c(T)`$ form is characteristic of RSB predictions for samples with strong anisotropy, where the transverse transition line follows AT behaviour at low fields and then crosses over towards GT-like behaviour at high fields (see inset). Clearly the present $`H_c(T)`$ line resembles the full line in the inset, reaching the crossover region but not the GT limit. 
We can note that for this sample the peak-field $`H_p(T)`$ line from the torque experiments lies very close to the longitudinal irreversibility line. Although the qualitative agreement between theory and experiment appears excellent, there is an important caveat. It would not appear meaningful to use the standard model to analyse the field dependence of the transition temperature, because the three-dimensional Heisenberg spin glass transition calculated with the standard Edwards-Anderson order parameter is already at zero temperature in zero field , so an alternative model must be sought. A most attractive explanation for the observation of finite temperature transitions in real Heisenberg spin glasses is that of Kawamura , who proposes that the transition is fundamentally chiral, and that it is ”revealed” by the presence of even weak anisotropy. Calculations show that the chiral model $`(H,T)`$ transition behaviour is of an RSB type, and mimics the mean field behaviour . For fields strong compared to the anisotropy, the transverse irreversibility transition line lies at $`H_c\propto (T_gT)^{0.5}`$ as for the GT line. For low fields $`H_c\propto (T_gT)^\varphi `$ with $`\varphi `$ between 1.3 and 1.5, much as for the AT line . There is an anisotropy-dependent crossover from AT-like behaviour to GT-like behaviour, as in the mean field model. In the chiral model the transition irreversibility line is a true transition line, but the transition may well be of a very different nature from that in the mean field model (it might be one-step RSB, for instance ). The present transverse irreversibility data are completely compatible with the chiral model predictions for the irreversibility onset if we consider that the steep rise at lower temperatures indicates that even at $`7T`$ the system is still in the crossover regime, and the true GT-like regime would require yet stronger fields (cf. ). 
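The two limiting forms of the transition line can be sketched schematically. The amplitude below is arbitrary and the exponents are the GT-like and AT-like values quoted above, so this only illustrates how the line with the larger exponent falls below the square-root line close to the glass temperature:

```python
def critical_field(T, Tg, phi, amplitude=1.0):
    """Schematic transition line H_c(T) = A * (Tg - T)**phi for T < Tg.
    phi = 0.5 gives the GT-like form and phi ~ 1.3-1.5 the AT-like form;
    the amplitude A is arbitrary here, not a fitted value.
    """
    if T >= Tg:
        return 0.0
    return amplitude * (Tg - T) ** phi

Tg = 20.6  # zero-field glass temperature of the AuFe sample (in K)

# Close to Tg the AT-like line (larger exponent) lies below the
# GT-like one; a real sample crosses over between the two regimes.
T = 20.0
print(critical_field(T, Tg, 1.5) < critical_field(T, Tg, 0.5))  # True
```

A single-exponent form like this cannot describe the whole measured line; the data above indicate an AT-like regime near Tg giving way to a steeper, GT-approaching rise at lower temperatures.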
The longitudinal irreversibility $`H_{cm}`$ line follows an AT-like behaviour, with an exponent $`\varphi `$ of about 1.5 from $`T_g`$ to near $`0.8T_g`$, and then takes a larger exponent. We now turn briefly to the question of relaxation. In the region below $`H_c(T)`$ the torque signal always relaxes with time in the algebraic or quasi-logarithmic manner familiar from spin glass magnetization measurements, Fig.3. This is true both above and below the $`H_{cm}(T)`$ line. This form of signal decay means that there is no maximum characteristic time for the relaxation, a criterion indicating that the system is in a glassy frozen state and that the signal decay reflects a form of magnetic creep. We conclude that whenever the torque signal is observable, the system is frozen in this sense; there is a line of true freezing transitions at or very near to the $`H_c(T)`$ line in Fig.2. Though chiral model simulations have so far only been carried out in zero field, the slow quasi-algebraic form of relaxation found experimentally appears compatible with the equilibrium simulation relaxation data . The experimental torque aging effects after cooling in field appear to be negligible (as in CuMn ), in contrast to those always observed in spin glass magnetization experiments at zero field , and to the aging observed in the zero-field chiral simulations . Simulations to check for an in-field suppression of aging in the chiral approach would provide an important verification of the model. 
In conclusion, by combining torque and magnetization information over a wide range of applied fields, we find that a Heisenberg spin glass with strong random DM anisotropy has an in-field phase diagram which is remarkably similar to that of the chiral ordering model (which mimics the well established mean field type ): a transverse irreversibility onset line corresponding to a true RSB freezing transition, plus a lower and quite distinct strong longitudinal irreversibility line which can be identified from magnetization data. The two lines fuse at low applied fields. To make a realistic quantitative comparison with theory concerning characteristics like the in-field aging and the transverse magnetization decay, we must await full in-field simulations in the chiral ordering scenario with strong anisotropy. Already the striking qualitative similarities between the experimental phase diagram and the chiral (or mean field) spin glass model phase diagram are strong evidence that the essential physics of real-life Heisenberg spin glasses is very close to that of the RSB class of models. Scaling approaches of the Fisher-Huse droplet type do not appear to be compatible with the experimental data, as they predict that whenever the applied field is non-zero, there will be no transition to a frozen state at any finite temperature. Acknowledgements. We would like to thank Professor H. Kawamura for very helpful information on the chiral model.
no-problem/9910/astro-ph9910022.html
# Searching for Jets in Asymmetrical Nebulae with the Hubble Space Telescope ## 1. Introduction Observations of Herbig-Haro objects like HH30 and HH34 have provided perhaps the best images of narrow, continuous jets. These jets are known to originate in the accretion disks surrounding the associated T-Tauri stars. Recently, morphological studies of planetary nebulae (PNe) have revealed many “point-symmetric” structures, which would have a natural explanation if the nebula were once subjected to the effects of precessing jets. But the existence of jets seems at variance with the conventional picture of the PN central star, which does not involve an accretion disk. In this context, any observations of jets in PNe, which provide a more direct indication of stellar accretion disks, are especially interesting. HST observations of NGC 6543 (the “Cat’s Eye Nebula”) confirmed the earlier indications (Miranda & Solf 1992) of a remarkable pair of jets in this object (Figure 1). We wanted to see if similar structures exist in other PNe. With this in mind, we carried out a “snapshot” imaging program of likely nebulae with the HST. ## 2. The HST Snapshot Program We selected our targets based on three criteria: (1) reports of high velocity flows (e.g., He 3-1475, the Eskimo), (2) point-symmetric morphologies, (3) ground-based images that showed “jet-like” structures. Since we found that in the case of NGC 6543 the jets were best seen in the ratio of the \[N II\] $`\lambda `$6584 to H$`\alpha `$ filter images (Figure 1 is such a ratio image), we requested \[N II\] and H$`\alpha `$ images of our targets. Snapshot programs limit the exposure time; all our images are 10-minute exposures. When the program ended on 29 June 1999, 14 objects had been observed, most in \[N II\], but only 6 in both filters. The nebulae we observed are listed in Table 1. 
Since the program was basically a morphological survey, we set the public release time for our data at 2 months – all are now available from the STScI archives. ## 3. Results While we have found a variety of interesting radial structures in these nebulae, we have come to the conclusion that, at least for mature PNe (as opposed to proto-PNe), real jets are rare. We consider a true jet to be a narrow, continuous, high-velocity flow. There are two types of long, radial structures which we feel have no relation to jets: we will call them cometary structures and rays. Cometary structures have a bright “head” at the end nearest the star, from which a low-ionization tail, often sinuous, extends outward. These features were first seen in the Helix over three decades ago. HST images have shown that the heads are neutral globules which are photo-evaporating. The globules are drifting outward more slowly than the surrounding ionized gas; the tails presumably result as evaporated material is dragged back by the flow. Of the nebulae imaged for this program, we found that the Eskimo has an extensive set of cometary structures. Their sinuous tails could indicate a subsonic flow past the globules. Such comets are seen in other PNe, such as NGC 6543 and A 30 (Borkowski et al. 1995). The tails may be quite long: the “jet” in NGC 7354 looks like a comet to us. There is, however, one linear feature in the Eskimo at P.A. 90 which is different: very thin, straight and directed exactly away from the star. This feature also has a clump at its head. It seems too narrow to be a jet. In NGC 6543 there is a bundle of such rays outside the northern cap. It seems probable that the rays are “shadow columns”, where the gas, shielded from the direct stellar radiation, is ionized and heated only by the diffuse field. Such gas would be less highly ionized, cooler and hence denser. Such shadow columns might only form if the gas is sufficiently quiescent. 
Our images of nebulae chosen for point-symmetry (Hu 2-1, J320, M 3-1, PC 19) show no jet-like structures, but the symmetry is seen to be much more detailed and precise than was apparent from ground-based observations. Hb-4 is a curious case. It has two jet-like structures well away from the main nebulosity, but they are not even approximately co-axial. López et al. (1997) found these structures have velocities of $`\pm `$150 km/s. We would class these features as “FLIERs”. Our HST images reveal that these structures have a distinct “corkscrew” morphology; hints of similar structure are seen in the ansae of NGC 6543. Perhaps the best case for a real jet in our sample – aside from the proto-PN discussed below – is IC 4593. Two “bullets” emerge from the main body of the nebula on an axis directly through the central star. Trails of material are seen connecting the bullets with the inner nebulosity. Several studies of this nebula have been published (e.g., Corradi et al. 1997), and it was found that the bullets have low velocities – but they may be moving nearly in the plane of the sky. Our images show: (a) that there are distinct bow-shocks around the bullets, indicative of outward motion (see Figure 2), and (b) that one of the trails/jets leads back to a conical structure in the inner nebulosity. What is curious is that, although the bullets that terminate the jets are aligned with the star, the jet leading to the conical structure shows a pronounced lack of alignment with the star. One possible explanation is that the jet which produced the trail out to the bullet has turned off, and subsequent motions of the inner nebulosity have shifted the inner part of the trail. The type of conical structure noted in the inner part of IC 4593, which is widest near the star and narrows to an apex as we move away from the star, is also seen in other objects: M1-66, M1-92, and He 3-1475. The apex may be followed by an opening cone or outward spray. 
Such structures suggest flows that are being focused on a large scale – perhaps by oblique cooling shocks – rather than jets emerging from stellar accretion disks. The most spectacular object is the proto-PN He 3-1475 (Borkowski et al. 1997). It was the first object observed in our program, and we have since followed up with spectroscopic, infrared and polarization observations with Hubble’s STIS, NICMOS, and FOC instruments. The STIS results confirm the high velocities found by Riera et al. (1995). The velocity increases down the axis of the approaching jet, reaches a maximum of $`-970`$ km/s a bit before the cone apex, then declines through the apex, followed by an abrupt deceleration of over 500 km/s when the flow hits the first knot. The flow in the receding jet is similar, with a maximum velocity of $`+895`$ km/s. Though one would expect shocks at the knots to produce very high temperatures, NICMOS images show H<sub>2</sub> emission from the knots. This work was supported by NASA through grant GO-06347.01-95A from STScI, which is operated by AURA under NASA contract NAS5-26555. ## References Borkowski, K.J., Blondin, J.M. & Harrington, J.P. 1997, ApJ, 482, L97. Borkowski, K.J., Harrington, J.P. & Tsvetanov, Z.I. 1995, ApJ, 449, L143. Corradi, R.L.M., Guerrero, M., Manchado, A. & Mampaso, A. 1997, New Astronomy, 2, 461. López, J.A., Steffen, W., & Meaburn, J. 1997, ApJ, 485, 697. Miranda, L.F. & Solf, J. 1992, A&A, 260, 397. Riera, A.A., Garcia-Lario, P., Manchado, A., Pottasch, S.R. & Raga, A.C. 1995, A&A, 302, 137.
no-problem/9910/hep-lat9910007.html
# Glueball Mass Predictions of the Valence Approximation to Lattice QCD ## I Introduction In recent articles we described calculations of the infinite volume, continuum limit of scalar and tensor glueball masses in the valence (quenched) approximation to lattice QCD . For a single value of lattice spacing and lattice volume, we reported also a calculation of the decay coupling constants of the lightest scalar glueball to pairs of pseudoscalar mesons. The mass and decay calculations combined support the identification of $`f_0(1710)`$ as primarily composed of the lightest scalar glueball . Evaluation of the mass of the lightest scalar quarkonium states and of quarkonium-glueball mixing amplitudes then yields a glueball component for $`f_0(1710)`$ of $`73.8\pm 9.5`$ %. In the present article, we describe the glueball mass data of Ref. in greater detail along with an improved evaluation of the mass predictions which follow from these data. For the scalar and tensor glueball masses we obtain $`1648\pm 58`$ MeV and $`2267\pm 104`$ MeV, respectively. The valence approximation, on which our results depend, may be viewed as replacing the momentum dependent color dielectric constant arising from quark-antiquark vacuum polarization with its zero-momentum limit and, for flavor singlet mesons, shutting off transitions between valence quark-antiquark pairs and gluons. The valence approximation is expected to be fairly reliable for low lying flavor nonsinglet hadron masses, which are determined largely by the low momentum behavior of the chromoelectric field. This expectation is supported by recent valence approximation calculations of the masses of the lowest flavor multiplets of spin 1/2 and 3/2 baryons and pseudoscalar and vector mesons. The predicted masses are all within about 10% of experiment. 
For the lowest valence approximation glueball masses, the error arising from the valence approximation’s omission of the momentum dependence of quark-antiquark vacuum polarization we thus also expect to be 10% or less. Refs. show this error should tend to lower valence approximation masses below those of full QCD. For flavor singlet configurations whose quantum numbers, if realized as quarkonium, require nonzero orbital angular momentum, it is shown in Ref. that the additional error arising from the valence approximation’s suppression of transitions between valence quark-antiquark pairs and gluons is likely to introduce an additional error of the order of 5% or less. For the lowest scalar glueball this error is examined in detail in Ref. and found to shift the valence approximation mass by about 5% below its value in full QCD. It is perhaps useful to mention that, for glueball masses, the valence approximation simply amounts to a reinterpretation of the predictions of pure gauge theory. In Section II we define a family of operators used to construct glueball propagators. In Section III we describe the set of lattices on which propagators were evaluated and the algorithms we used to generate gauge configurations and estimate error bars. In Sections IV and V we present our results for scalar and tensor glueball propagators, respectively, and masses extracted from these propagators. In Section VI we estimate the difference between the scalar and tensor masses we obtain in finite volumes and the corresponding infinite volume limits. In Section VII we extrapolate scalar and tensor masses to their continuum limits. In Section VIII we compare our calculations with work by other groups . For combined world average valence approximation scalar and tensor glueball masses we obtain $`1656\pm 47`$ MeV and $`2302\pm 62`$ MeV, respectively. ## II Smeared Operators We evaluated glueball propagators using operators built out of smeared link variables. 
Glueball operators built from link variables with an optimal choice of smearing couple more weakly to excited glueball states than do corresponding operators built from unsmeared links. As a consequence, the plateau in effective mass plots for optimally smeared operators begins at a smaller time separation between source and sink operators, extends over a larger number of time units, and yields a fitted mass with smaller statistical noise than would be obtained from operators made from unsmeared link variables. Examples of the improvements which we obtained by a choice of smeared operators will be given in Sections IV and V. Initially, we constructed smeared operators from gauge links fixed to Coulomb gauge. This method gave adequate results for the largest values of lattice spacing we considered. As the lattice spacing was made smaller, however, we found that the computer time required to gauge fix a large enough ensemble of configurations to obtain useful results became unacceptably large. We then switched to a gauge invariant smearing method. For the lattice sizes used in our extrapolation to the continuum limit, the gauge invariant mass results had statistical uncertainties typically a factor of three smaller than our earlier Coulomb gauge results. In the remainder of the present article we discuss only the gauge invariant results. A summary of our Coulomb gauge mass calculations is given in Ref. . A family of gauge invariant smeared operators we construct following the adaptation in Ref. of the smearing method of Ref. . A related method of gauge invariant smearing is proposed in Refs. . For $`n>0`$, $`ϵ>0`$, we define iteratively a sequence of smeared, space-direction link variables $`U_i^{nϵ}(x)`$, with $`U_i^{0ϵ}(x)`$ given by the unsmeared link variable $`U_i(x)`$. 
Let $`u_i^{(n+1)ϵ}(x)`$ be $`u_i^{(n+1)ϵ}(x)`$ $`=`$ $`U_i^{nϵ}(x)+ϵ{\displaystyle \underset{j}{\sum }}U_j^{nϵ}(x)U_i^{nϵ}(x+\widehat{j})[U_j^{nϵ}(x+\widehat{i})]^{\dagger }+`$ (2) $`ϵ{\displaystyle \underset{j}{\sum }}[U_j^{nϵ}(x-\widehat{j})]^{\dagger }U_i^{nϵ}(x-\widehat{j})U_j^{nϵ}(x-\widehat{j}+\widehat{i}),`$ where the sum is over the two space directions $`j`$ orthogonal to direction $`i`$. The projection of $`u_i^{(n+1)ϵ}(x)`$ into SU(3) defines the new smeared link variable $`U_i^{(n+1)ϵ}(x)`$. To find $`U_i^{(n+1)ϵ}(x)`$ we maximize over SU(3) the target function $`ReTr\{U_i^{(n+1)ϵ}(x)[u_i^{(n+1)ϵ}(x)]^{\dagger }\}.`$ (3) The required maximum is constructed by repeatedly applying an algorithm related to the Cabibbo-Marinari-Okawa Monte Carlo method. We begin with $`U_i^{(n+1)ϵ}(x)`$ chosen to be 1. We then multiply $`U_i^{(n+1)ϵ}(x)`$ by a matrix in the SU(2) subgroup acting only on gauge index values 1 and 2, chosen to maximize the target function over this subgroup. This multiplication and maximization step is repeated for the SU(2) subgroup acting only on index values 2 and 3, then for the subgroup acting only on index values 1 and 3. The entire three step process is then repeated five times. We found five repetitions sufficient to produce a $`U_i^{(n+1)ϵ}(x)`$ satisfactorily close to the true maximum of the target function in Eq. (3). Iteratively maximizing the target function over SU(2) subgroups turns out to be much easier to program than a direct maximization over all of SU(3). The additional computer time required for this iterative maximization, on the other hand, was a negligible fraction of the total time required for our calculation. From the $`U_i^{nϵ}(x)`$ we construct $`W_{kl}^{nϵs}(x)`$ by taking the trace of the product of $`U_i^{nϵ}(x)`$ around the boundary of an $`s\times s`$ square of links beginning in the $`k`$ direction. The sum of $`W_{kl}^{nϵs}(x)`$ over all sites with a fixed time value $`t`$ gives the zero-momentum loop variable $`W_{kl}^{nϵs}(t)`$. 
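The iterative SU(2)-subgroup maximization used for the SU(3) projection admits a closed form on each subgroup. The following NumPy sketch is illustrative only (not the authors' code; function names are made up), using the standard result that the subgroup element maximizing $`ReTr(VM)`$ is the normalized conjugate of the "SU(2) part" of the relevant 2x2 block:

```python
import numpy as np

def su2_subgroup_maximizer(M, p, q):
    """SU(3) matrix (identity outside rows/cols p,q) whose SU(2) block
    maximizes Re Tr(V M) over that subgroup."""
    a = M[np.ix_([p, q], [p, q])]
    k = 0.5 * (a[0, 0] + np.conj(a[1, 1]))
    l = 0.5 * (a[0, 1] - np.conj(a[1, 0]))
    n = np.sqrt(abs(k) ** 2 + abs(l) ** 2)
    V = np.eye(3, dtype=complex)
    if n > 1e-14:
        B = np.array([[k, l], [-np.conj(l), np.conj(k)]]) / n  # in SU(2)
        V[np.ix_([p, q], [p, q])] = B.conj().T
    return V

def project_su3(u, sweeps=5):
    """Approximate argmax over U in SU(3) of Re Tr(U u^dagger),
    as in Eq. (3), by iterating over the three SU(2) subgroups."""
    U = np.eye(3, dtype=complex)
    for _ in range(sweeps):
        for p, q in [(0, 1), (1, 2), (0, 2)]:
            V = su2_subgroup_maximizer(U @ u.conj().T, p, q)
            U = V @ U  # each subgroup step cannot decrease the target
    return U
```

Each subgroup step changes only a 2x2 block, and the anti-SU(2) part of the block contributes nothing to the real trace, which is why the closed-form maximizer exists.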
For each triple $`(n,ϵ,s)`$, a field coupling the vacuum only to zero-momentum scalar states is $`S^{nϵs}(t)`$ $`=`$ $`{\displaystyle \underset{ij}{\sum }}ReW_{ij}^{nϵs}(t),`$ (4) where the sums are over space directions $`i`$ and $`j`$. A possible choice of the two independent operators coupling the vacuum only to zero-momentum tensor states is $`T_1^{nϵs}(t)`$ $`=`$ $`2ReW_{12}^{nϵs}(t)-ReW_{23}^{nϵs}(t)-ReW_{31}^{nϵs}(t)`$ (5) $`T_2^{nϵs}(t)`$ $`=`$ $`\sqrt{3}ReW_{23}^{nϵs}(t)-\sqrt{3}ReW_{31}^{nϵs}(t).`$ (6) The optimal choice of $`(n,ϵ,s)`$ for each operator and lattice spacing will be considered in the next section. ## III Lattices, Monte Carlo Algorithm and Error Bars The set of lattices on which we evaluated scalar and tensor glueball propagators is listed in Table I. On each lattice, an ensemble of gauge configurations was generated by a combination of the Cabibbo-Marinari-Okawa algorithm and the overrelaxed method of Ref. . To update a gauge link we first performed a microcanonical update in the SU(2) subgroup acting on gauge indices 1 and 2. This was then repeated for the SU(2) subgroup acting on indices 2 and 3, and the subgroup acting on indices 1 and 3. These three update steps were then repeated on each link of the lattice. After four lattice sweeps each consisting of the three microcanonical steps on each link, we carried out one Cabibbo-Marinari-Okawa sweep of the full lattice. At least 10,000 sweeps were used in each case to generate an initial equilibrium configuration. The number of sweeps skipped between each configuration used to calculate propagators and the total number of configurations in each ensemble are listed in the third and fourth columns, respectively, of Table I. Although the number of sweeps skipped in each case was not sufficient to permit successive configurations to be treated as statistically independent, we found successive configurations to be sufficiently independent to justify the cost of evaluating glueball operators. 
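The scalar and tensor combinations in Eqs. (4)-(6) are simple linear forms in the per-timeslice loop sums. A schematic version, up to an overall normalization (array names are illustrative; each argument holds $`ReW_{ij}(t)`$ for one spatial plane):

```python
import numpy as np

def glueball_operators(w12, w23, w31):
    """Scalar (Eq. 4) and tensor (Eqs. 5-6) operators from
    zero-momentum spatial loop sums, one real value per timeslice."""
    S = w12 + w23 + w31            # couples only to 0++ states
    T1 = 2.0 * w12 - w23 - w31     # two independent 2++ combinations
    T2 = np.sqrt(3.0) * (w23 - w31)
    return S, T1, T2
```

Under a relabeling of the three spatial planes, S is invariant and T1^2 + T2^2 is invariant, which is one quick consistency check of the normalizations.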
For the propagators, effective masses and fitted masses to be discussed in Sections IV and V, we determined statistical uncertainties by the bootstrap method . The bootstrap algorithm can be applied directly, however, only to determine the uncertainties in quantities obtained from an ensemble whose individual members are statistically independent. We therefore partitioned each ensemble of correlated gauge configurations into successive disjoint bins with a fixed bin size. Bootstrap ensembles were then formed by randomly choosing a number of entire bins equal to the number of bins in the original partitioned ensemble. For bins sufficiently large, propagator averages found on distinct bins will be nearly independent. It follows that for large enough bins, the binned bootstrap estimate of errors will be reliable. It is not hard to show that once bins are made large enough to produce nearly independent bin averages, further increases in bin size will leave bootstrap error estimates nearly unchanged. The only variation in errors as the bin size is increased further will come from statistical fluctuations in the error estimates themselves. To determine the required bin size for a particular error estimate to be reliable we applied the bootstrap method repeatedly with progressively larger bin sizes until the estimated error became nearly independent of bin size. The final bin size we adopted for each lattice, chosen to be large enough for all of the error estimates done on that lattice, is given in the fifth column of Table I. ## IV Scalar Propagators and Masses From the scalar operator of Eq. 4, a propagator for scalars is defined to be $`P_S^{nϵs}(t_1-t_2)=<S^{nϵs}(t_1)S^{nϵs}(t_2)>-<S^{nϵs}(t_1)><S^{nϵs}(t_2)>.`$ (7) To reduce statistical noise, $`P_S^{nϵs}(t_1-t_2)`$ is then averaged over reflections and time direction displacements of $`t_1`$ and $`t_2`$. 
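The binned bootstrap described in Section III can be sketched as follows (schematic NumPy, not the authors' code): time-ordered measurements are first averaged within disjoint bins, and whole bins are then resampled with replacement.

```python
import numpy as np

def binned_bootstrap(samples, bin_size, n_boot=500, seed=0):
    """Error estimate for the mean of autocorrelated Monte Carlo data.
    Returns (bootstrap mean, bootstrap standard error)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(samples, dtype=float)
    n_bins = len(x) // bin_size
    # average within disjoint bins, dropping any leftover samples
    bin_means = x[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    # resample whole bins with replacement
    boots = np.array([
        bin_means[rng.integers(0, n_bins, n_bins)].mean()
        for _ in range(n_boot)
    ])
    return boots.mean(), boots.std()
```

Increasing bin_size until the returned error stops growing reproduces the stability criterion used to pick the bin sizes in Table I.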
The collection of values of smearing iterations $`n`$, smearing parameter $`ϵ`$, and loop size $`s`$ for which propagators were evaluated for each lattice are given in Table II. At $`\beta `$ of 5.70, and at $`\beta `$ of 5.93 on the lattice $`16^3\times 24`$, we ran with relatively larger ranges of parameters to try to find values which coupled efficiently to the lightest scalar glueball. For other lattices, the parameter range was then narrowed to choices which, in physical units, were about the same as the value range which gave best results at $`\beta `$ of 5.93. From the existence of a self-adjoint, positive, bounded transfer matrix for lattice QCD, it follows that a spectral resolution can be constructed for $`P_S^{nϵs}(t)`$, $`P_S^{nϵs}(t)`$ $`=`$ $`{\displaystyle \underset{i}{\sum }}Z_i\{exp(-E_it)+exp[-E_i(L-t)]\},`$ (8) $`Z_i`$ $`=`$ $`|<i|S^{nϵs}(0)|vacuum>|^2,`$ (9) where the sum is over all zero-momentum, scalar states $`<i|`$, $`E_i`$ is the energy of $`<i|`$, and $`L`$ is the lattice period in the time direction. For large values of $`t`$ and $`L`$, the sum in Eq. (8) approaches the asymptotic form $`P_S^{nϵs}(t)\approx Z\{exp(-mt)+exp[-m(L-t)]\}`$ (10) where $`m`$ is the smallest $`E_i`$ and thus the mass of the lightest scalar glueball and $`Z`$ is the corresponding $`Z_i`$. Fitting $`P_S^{nϵs}(t)`$ to the asymptotic form in Eq. (10) at $`t`$ and $`t+1`$ gives the scalar effective mass $`m(t)`$, which at large $`t`$ approaches $`m`$. To extract values of $`m`$ from our data sets, we began by examining effective mass graphs to find combinations of $`n`$ and $`s`$ for which $`m(t)`$ shows a plateau at $`t`$ values for which we have data, and to determine which of these combinations of $`n`$ and $`s`$ have the best plateaus. Among the data sets used in our final extrapolation of the scalar mass to zero lattice spacing, we included the largest range of values of $`n`$ and $`s`$ for the lattice $`16^3\times 24`$ with $`\beta `$ of 5.93. 
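Because of the backward-propagating term in Eq. (10), the effective mass $`m(t)`$ defined from the correlator at $`t`$ and $`t+1`$ must be obtained numerically. A minimal sketch (bisection on the monotone model ratio; function names are illustrative):

```python
import numpy as np

def effective_mass(P, L):
    """m(t) solving P(t)/P(t+1) = C(m,t)/C(m,t+1),
    with C(m,t) = exp(-m t) + exp(-m (L - t)) as in Eq. (10).
    Valid for t + 1 < L/2, where the ratio is monotone in m."""
    def ratio(m, t):
        C = lambda tt: np.exp(-m * tt) + np.exp(-m * (L - tt))
        return C(t) / C(t + 1)

    masses = []
    for t in range(len(P) - 1):
        target = P[t] / P[t + 1]
        lo, hi = 1e-8, 10.0
        for _ in range(200):  # bisection
            mid = 0.5 * (lo + hi)
            if ratio(mid, t) < target:
                lo = mid
            else:
                hi = mid
        masses.append(0.5 * (lo + hi))
    return np.array(masses)
```

On exact data of the form of Eq. (10) this returns the input mass at every $`t`$, so any $`t`$ dependence in real data measures excited-state contamination.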
Scalar effective masses obtained for this case with $`n`$ of $`5`$ and $`s`$ of 3 \- 7 are shown in Figures 1 \- 5, respectively. As the loop size $`s`$ is increased, initially the effective mass graphs become flatter, as shown, for example, by a decrease in the difference between $`m(0)`$ and $`m(2)`$. It follows that the relative coupling of the corresponding operators to the lightest scalar glueball increases with $`s`$. Beyond $`s`$ of 5, however, this trend reverses. Thus, as might be expected, the relative coupling to the lightest state becomes weaker again when the loop is made too large. For $`s`$ of 7 the effective mass graph shows no sign of becoming flat even at the largest $`t`$ for which we have statistically significant data. For $`n`$ of 5, the best coupling to the lightest state appears to occur with $`s`$ of 4 or 5. For $`s`$ fixed at 4, Figure 2 and Figures 6 \- 8 show the variation in the effective mass graph as $`n`$ runs from 5 to 8, respectively. The difference between $`m(0)`$ and $`m(2)`$ is least at $`n`$ of 6 and then grows again as $`n`$ is raised toward 8. In Figures 1 \- 8 the statistical uncertainty in effective masses grows as $`t`$ is made larger and tends to grow also if $`n`$ or $`s`$ is increased. Both of these phenomena are explained by the discussion in Ref. of the statistical uncertainty in propagators. Figures 9 \- 13 show scalar effective masses for each of the values of lattice size and $`\beta `$ listed in Table I. The parameters $`n`$ and $`s`$ for the data in Figures 9 \- 13 are chosen, for each lattice and $`\beta `$, from among the set which couple best to the lightest scalar. For each combination of lattice size and $`\beta `$, we determined a final value of the scalar mass from the collection of propagators for which the effective mass graph showed at least some evidence of a plateau at large $`t`$. For several different choices of $`t`$ interval, each of these propagators was fitted to the asymptotic form in Eq. 
(10) by minimizing the fit’s correlated $`\chi ^2`$. The upper limit of each fitting interval $`t_{max}`$ we fixed at the largest $`t`$ for which we had statistically significant propagator data. The lower limit of the fitting interval $`t_{min}`$ was then progressively increased from 1 to $`t_{max}-2`$. As $`t_{min}`$ was increased, the fitted mass and the fit’s $`\chi ^2`$ per degree of freedom both generally decreased and the statistical error bar increased. For each $`n`$ and $`s`$, the final choice of $`t_{min}`$ we took to be the smallest value for which the corresponding mass was within the error bars of all the fits with the same $`n`$ and $`s`$ and larger $`t_{min}`$. Our intent in this procedure was to extract a mass from the largest time interval for which the propagator for each combination of $`n`$ and $`s`$ was consistent with the asymptotic form of Eq. (10). The solid horizontal lines in Figures 2 \- 5 and Figures 6 \- 13 show the best fitted mass in each case and extend over the interval of $`t`$ on which these fits were made. The dashed lines in these figures extend the solid lines to smaller $`t`$ to show the approach, with increasing $`t`$, of effective masses to the final mass values. For the lattice $`16^3\times 24`$ with $`\beta `$ of 5.93, Tables III \- VI show the results of our fits for all the combinations of $`n`$, $`s`$, $`t_{min}`$ and $`t_{max}`$ which we examined. The best choice of $`t_{min}`$ and $`t_{max}`$ turned out to be 2 and 8, respectively, for all $`n`$ and $`s`$. Tables VII \- XI show the fitted masses found with the best choice of $`t_{min}`$ and $`t_{max}`$ for all the lattice sizes, $`\beta `$, $`n`$ and $`s`$ for which our effective mass data showed a plateau at large $`t`$. As expected, for each lattice size and $`\beta `$, the fitted masses in Tables VII \- XI vary with $`n`$ and $`s`$ by an amount generally less than the statistical uncertainty in each mass. 
There is also a weak tendency for masses to fall initially with increasing $`n`$ and $`s`$, as the corresponding operator’s relative coupling to the lightest glueball increases. Then, in some cases, when $`n`$ and $`s`$ become too large and the coupling to the lightest state decreases, the fitted masses show some tendency to rise again. To reduce this small remaining statistical uncertainty and systematic bias, our final value of mass for each lattice size and $`\beta `$ was obtained by an additional fit of a single common mass to a set of masses from a range of several $`n`$ and $`s`$. The common mass was chosen to minimize the correlated $`\chi ^2`$ of the fit of the common mass to the collection of different best mass values. The correlation matrix among the best mass values was determined by the bootstrap method. The set of $`n`$ and $`s`$ used in each final fit was chosen by examining a decreasing sequence of sets, starting with all $`n`$ and $`s`$, and progressively eliminating the smallest and largest $`n`$ and $`s`$ until a $`\chi ^2`$ per degree of freedom below 2.0 was obtained. The final fit was taken from the largest set of channels yielding a $`\chi ^2`$ below 2.0. If several sets of equal size gave $`\chi ^2`$ per degree of freedom below 2.0, we chose among these the set with the smallest $`\chi ^2`$ per degree of freedom. Tables XII \- XVI show these combined fits and the set of $`n`$ and $`s`$ chosen for the final mass value for each lattice and $`\beta `$. In all of these tables, it is clear that once enough of the largest and smallest $`n`$ and $`s`$ are eliminated to give an acceptable $`\chi ^2`$ per degree of freedom, the fitted values vary only by small fractions of their statistical uncertainty as additional changes are made in the set of $`n`$ and $`s`$. The final mass values are collected in Table XVII. 
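Minimizing the correlated $`\chi ^2`$ for a single common mass over several $`(n,s)`$ channels has a closed-form solution once the bootstrap covariance matrix C of the channel masses is in hand. A schematic version (not the paper's code):

```python
import numpy as np

def combined_mass(masses, cov):
    """Common mass m minimizing chi^2 = (x - m*1)^T C^{-1} (x - m*1).
    Returns (m, statistical error, chi^2, degrees of freedom)."""
    x = np.asarray(masses, dtype=float)
    Cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    ones = np.ones_like(x)
    w = Cinv @ ones
    m = (w @ x) / (w @ ones)       # stationary point of chi^2
    err = 1.0 / np.sqrt(w @ ones)
    r = x - m * ones
    return m, err, r @ Cinv @ r, len(x) - 1
```

With a diagonal covariance this reduces to the familiar inverse-variance weighted mean; the off-diagonal entries matter precisely in the highly correlated situation described above.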
At several points in Tables XII \- XVI, combined fits including several nearby values of $`n`$ and $`s`$ yield large $`\chi ^2`$ while separate fits to smaller subsets of $`n`$ and $`s`$ give nearly equal masses and acceptable $`\chi ^2`$. This phenomenon, we have found, does not indicate a problem with our data or our fits and arises instead because propagators with nearby values of $`n`$ and $`s`$ in some cases are very highly correlated and yield slightly different masses. A similar problem would arise in trying to fit a single value $`x`$ to, say, a gaussian random variable $`X`$ with dispersion 1, and a shifted copy $`X+0.0001`$. For any choice of $`x`$ the fit’s $`\chi ^2`$ is infinite. Nonetheless, for a Monte Carlo ensemble of 1000 $`X`$ values, taking $`x`$ as either $`<X>\pm 1/\sqrt{1000}`$ or $`<X>+0.0001\pm 1/\sqrt{1000}`$ is a reliable estimate of the mean of $`X`$ with systematic error much smaller than the statistical error. An alternative way to extract a single mass from glueball propagators for a range of $`n`$, $`ϵ`$ and $`s`$ uses the matrix of propagators $`M_S^{k\delta rnϵs}(t_1-t_2)=<S^{k\delta r}(t_1)S^{nϵs}(t_2)>-<S^{k\delta r}(t_1)><S^{nϵs}(t_2)>.`$ (11) For large $`t`$ and lattice time direction period $`L`$, $`M_S^{k\delta rnϵs}(t)`$ has the asymptotic form $`M_S^{k\delta rnϵs}(t)\approx Z^{k\delta rnϵs}\{exp(-mt)+exp[-m(L-t)]\}`$ (12) where $`m`$ is the mass of the lightest scalar glueball and $`Z^{k\delta rnϵs}`$ is a matrix independent of $`t`$. In principle, $`M_S^{k\delta rnϵs}(t)`$ can be extracted from our data and fitted to Eq. (12) to produce a value for $`m`$. To find the best $`m`$ and $`Z^{k\delta rnϵs}`$ by minimizing the fit’s $`\chi ^2`$, however, requires the statistical correlation matrix among the fitted $`M_S^{k\delta rnϵs}(t)`$. If we fit, for example, to three choices of $`(k,\delta ,r)`$, three choices of $`(n,ϵ,s)`$ and four values of $`t`$, the correlation matrix has 1296 entries. 
Our underlying data set is too small to provide reliable entries for such a large correlation matrix. As a consequence the value of $`m`$ determined this way will have a statistical error which can not be estimated reliably. In practice, we found that the value of $`m`$ produced by this method was not stable as we varied the sets of $`(k,\delta ,r)`$ and $`(n,ϵ,s)`$ and the range of $`t`$ used in the fit. ## V Tensor Propagators and Masses A propagator for tensors is defined to be $`P_T^{nϵs}(t_1-t_2)={\displaystyle \underset{i}{\sum }}[<T_i^{nϵs}(t_1)T_i^{nϵs}(t_2)>-<T_i^{nϵs}(t_1)><T_i^{nϵs}(t_2)>].`$ (13) where $`T_1`$ and $`T_2`$ are the tensor glueball operators of Eqs. (5) and (6), and $`P_T^{nϵs}(t_1-t_2)`$ is then averaged over reflections and time direction displacements of $`t_1`$ and $`t_2`$ to reduce statistical noise. Tensor propagators were found for gauge configuration ensembles and operator parameters listed in Tables I and II. A tensor glueball mass was extracted from propagators by fitting the data to the tensor version of Eq. (10). We obtained a satisfactory tensor glueball mass signal only for the lattices with $`\beta `$ of 5.93, 6.17 and 6.40. We did not find an acceptable tensor signal at $`\beta `$ of 5.70. Overall, the statistical errors in the tensor data are larger than those in the scalar data of Section IV and, as a result, the fitting process encounters complications not present in the scalar fits. Tables XVIII \- XXI list tensor masses for each gauge ensemble with $`\beta `$ of 5.93 and above, for each set of operator parameters in Table II, fitted on one or, in some cases, two choices of time interval. For all fits the high end of the fitting range $`t_{max}`$ is chosen to be the largest value at which a statistically significant effective mass is found. The low end of the fitting range $`t_{min}`$ is then progressively increased. 
The smallest $`t_{min}`$ yielding a mass within one standard deviation of the masses for all larger $`t_{min}`$ is selected as the lower bound for an initial choice of the fitting range. For the lattice $`16^3\times 24`$ at $`\beta `$ of 5.93 and for the lattice $`32^2\times 30\times 40`$ at $`\beta `$ of 6.40, however, we found that for almost all choices of operator parameters a $`t_{min}`$ one unit larger than the initial choice yielded a noticeably lower mass. These second values of $`t_{min}`$ and the corresponding masses are also listed in Tables XIX and XXI. Effective mass plots for tensors are shown in Figures 14 \- 17, for the four lattices with $`\beta `$ of 5.93 and larger, for typical choices of operator parameters. The solid line in each figure indicates the mass obtained from a fit over the time interval which the line spans. The dashed lines in each figure extend the solid lines to smaller $`t`$ to show the approach of effective masses to the fitted values. Tables XXII \- XXV list tensor masses found by combining, as discussed in Section IV, the masses fitted to various sets of operators and choices of time interval. Table XXII corresponds to the lattice $`12^3\times 24`$ at $`\beta `$ of 5.93 with fits using the single time interval given in Table XVIII. Table XXIV corresponds to the lattice $`24^3\times 36`$ at $`\beta `$ of 6.17 with fits using the single time interval in Table XX. In Tables XXII and XXIV, all combined fits with acceptable $`\chi ^2`$ per degree of freedom give masses consistent with each other to within statistical uncertainties. In each case, the mass corresponding to the largest set with acceptable $`\chi ^2`$, marked with an arrow, is chosen as the final value. Table XXIII for the lattice $`16^3\times 24`$ at $`\beta `$ of 5.93 shows combined fits using both choices of $`t_{min}`$ of Table XIX. The combined fits using the smaller $`t_{min}`$ have unacceptably high $`\chi ^2`$ per degree of freedom. 
For the fits using the larger $`t_{min}`$ the $`\chi ^2`$ is acceptable, and the fitted masses are all consistent with each other within statistical uncertainties. The mass for the largest set of operators with the larger $`t_{min}`$ is chosen as the final number. Table XXV for the lattice $`32^2\times 30\times 40`$ at $`\beta `$ of 6.40 also gives combined fits for both $`t_{min}`$ in Table XXI. Most fits for both $`t_{min}`$ have acceptable $`\chi ^2`$ per degree of freedom. The masses obtained from the larger $`t_{min}`$ all lie one standard deviation or a bit more below the masses found with the smaller $`t_{min}`$, however, and all have significantly better $`\chi ^2`$ than the fits with the smaller $`t_{min}`$. The mass found from the largest set of operators for the larger $`t_{min}`$ is therefore chosen as the final result. The collection of final tensor masses is listed in Table XXVI. ## VI Volume Dependence We now consider an estimate of the difference between the scalar and tensor glueball masses in Tables XVII and XXVI at finite lattice period $`L`$ and the infinite volume limits of these quantities. For large values of $`L`$, the scalar $`m_0(L)`$ and tensor $`m_2(L)`$ glueball masses deviate from their infinite volume limits, $`m_0`$ and $`m_2`$, respectively, according to $`m_s(L)=m_s\{1-g_s{\displaystyle \frac{exp(-\frac{\sqrt{3}m_0L}{2})}{m_0L}}+O[{\displaystyle \frac{exp(-m_0L)}{m_0L}}]\}`$ (14) where $`s`$ is 0 or 2. In Ref. for $`\beta `$ near 6.0, data for $`m_0(L)`$ is shown to fit the two leading terms in Eq. (14) reasonably well at 4 values of $`L`$ ranging from $`6/m_0`$ to $`12/m_0`$. This result is plausible since for $`L`$ ranging from $`6/m_0`$ to $`12/m_0`$, the third term in Eq. (14) is smaller than the second by a factor ranging from $`O(0.4)`$ to $`O(0.2)`$. For our data at $`\beta `$ of 5.93, Table XVII shows that $`m_0`$ is above 0.75 so that $`L`$ of 12 and 16 are larger than $`8/m_0`$ and $`12/m_0`$, respectively.
Thus we believe that for the data at $`\beta `$ of 5.93, the leading two terms of Eq. (14) likely provide a fairly reliable estimate of the $`L`$ dependence of $`m_0(L)`$ and $`m_2(L)`$. Fitting the $`\beta =5.93`$ data in Table XVII to the two leading terms of Eq. (14) yields $`m_0`$ of $`0.783\pm 0.012`$ and $`g_0`$ of $`1500\pm 1100`$. In addition a bootstrap calculation yields with 95% probability $`{\displaystyle \frac{m_0-m_0(16)}{m_0}}\le 0.0037.`$ (15) At $`\beta =5.93`$, Table XXVI combined with the leading two terms of Eq. (14) gives $`m_2`$ of $`1.236\pm 0.037`$ and $`g_2`$ of $`1300\pm 1200`$. A bootstrap calculation yields with 95% probability $`{\displaystyle \frac{m_2-m_2(16)}{m_2}}\le 0.0048.`$ (16) Overall, it appears to us safe to conclude that at $`\beta `$ of 5.93 the differences between the scalar and tensor masses for $`L`$ of 16 and their infinite volume limits are of the order of 0.5% or less. In Section VII we show that the scalar and tensor glueball masses in Tables XVII and XXVI with $`\beta `$ of 5.93 and greater and $`m_0L`$ fixed at about 13 are not far from asymptotic scaling. We therefore expect the fractional volume dependent errors found in these masses to be about the same as the errors at $`\beta `$ of 5.93. Thus the finite volume errors in all masses in Tables XVII and XXVI with $`\beta `$ of 5.93 and greater and $`m_0L`$ of about 13 should be 0.5% or less. ## VII Continuum Limit We now convert the nonzero lattice spacing scalar and tensor glueball masses in lattice units, given in Tables XVII and XXVI, respectively, to physical units and extrapolate them to zero lattice spacing. To convert masses in lattice units to physical units, we divide by a known mass measured in lattice units. One natural choice for this conversion factor is the rho mass $`m_\rho (a)a`$. Values of $`m_\rho (a)a`$ for three of the four $`\beta `$ in Tables XVII and XXVI are given in Ref. . For the largest $`\beta `$ in Tables XVII and XXVI, Ref.
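The size of the finite-volume shift quoted above can be checked by evaluating the leading correction term of Eq. (14) at the central fitted values. A minimal sketch (the function name is ours; only the central values are used, ignoring the quoted uncertainties):

```python
import math

def finite_volume_shift(g, m0, L):
    """Fractional shift (m - m(L))/m from the leading correction term
    of Eq. (14): g * exp(-sqrt(3)*m0*L/2) / (m0*L)."""
    return g * math.exp(-math.sqrt(3) * m0 * L / 2.0) / (m0 * L)

# Central fitted values quoted in the text for beta = 5.93, L = 16.
shift_scalar = finite_volume_shift(1500.0, 0.783, 16)   # scalar, g_0
shift_tensor = finite_volume_shift(1300.0, 0.783, 16)   # tensor, g_2
```

With the central values both shifts come out at a fraction of a percent, consistent with the 95% bootstrap bounds of Eqs. (15) and (16) and the "0.5% or less" conclusion.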
does not report $`m_\rho (a)a`$. For the three $`\beta `$ considered in Ref. , however, the ratio $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]/[m_\rho (a)a]`$ is found to be independent of $`\beta `$ to within statistical errors. Here $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ is obtained by the 2-loop Callan-Symanzik equation from $`\alpha _{\overline{MS}}`$ found from its mean-field improved relation to $`\beta `$. Since $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]/[m_\rho (a)a]`$ is constant within errors, converting to physical units using $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ and then extrapolating to zero lattice spacing should give results nearly equivalent to those found using $`m_\rho (a)a`$. Table XXVII lists, for each $`\beta `$, the corresponding mean-field improved $`\alpha _{\overline{MS}}`$ and $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$. The $`\beta `$ dependence of valence approximation glueball masses is determined entirely by the pure gauge part of the QCD action. The leading irrelevant operator in the pure gauge plaquette action has lattice spacing dependence of $`O(a^2)`$. Thus for the scalar and tensor glueball masses $`m_0`$ and $`m_2`$, respectively, we extrapolate to the continuum limit by $`{\displaystyle \frac{m_s(a)a}{\mathrm{\Lambda }_{\overline{MS}}^{(0)}a}}`$ $`=`$ $`{\displaystyle \frac{m_s}{\mathrm{\Lambda }_{\overline{MS}}^{(0)}}}+C[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]^2,`$ (17) where $`s`$ is 0 or 2. If $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ in Eq. (17) were replaced by $`m_\rho (a)a`$, then since the leading irrelevant operator in the quark action has lattice spacing dependence of $`O(a)`$ it might be argued that the quadratic $`O(a^2)`$ term on the equation’s right hand side should be replaced by a linear $`O(a)`$ term. This in turn would contradict our claim that extrapolation using either $`m_\rho (a)a`$ or $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ will give nearly equal results.
An answer to this objection is that the approximate constancy of $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]/[m_\rho (a)a]`$ implies that the $`O(a)`$ irrelevant contribution to $`m_\rho (a)a`$ is quite small. The constancy of $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]/[m_\rho (a)a]`$ as a function of $`a`$, or equivalently as a function of $`\beta `$, cannot be explained by a cancellation of an $`O(a)`$ term in $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ with an $`O(a)`$ term in $`m_\rho (a)a`$ since $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ is defined to fulfill the true continuum two-loop Callan-Symanzik equation and itself has no $`O(a)`$ corrections. The leading correction to the $`\beta `$ dependence of $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ is by a multiplicative factor of $`[1+O(\beta ^{-2})]`$. If $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ is replaced by $`m_\rho (a)a`$, any significant $`a`$ dependence which appears will come from the $`O(a^2)`$ term in $`m_s(a)a`$. Thus Eq. (17) will remain correct even with $`m_\rho (a)a`$ substituted for $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$. The scalar data of Table XVII, combined with $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ of Table XXVII and fitted to Eq. (17) at the three largest $`\beta `$, are shown in Figure 18. The predicted continuum limit $`m_0/\mathrm{\Lambda }_{\overline{MS}}^{(0)}`$ is $`7.016\pm 0.167`$. The fit in Figure 18 has a $`\chi ^2`$ of 0.6 over a range in which the term $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]^2`$ varies by more than a factor of 3.4. The variation of $`[m_s(a)a]/[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]`$ over the fitting range, however, is only slight. Each of the three nonzero lattice spacing values of $`[m_s(a)a]/[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]`$ is within 1.6 standard deviations of the extrapolated zero lattice spacing result.
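The extrapolation in Eq. (17) is an ordinary straight-line fit in the variable $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]^2`$, with the intercept giving the continuum ratio. A minimal unweighted least-squares sketch is below; the three x, y values are hypothetical stand-ins (Table XXVII is not reproduced here), and a real analysis would weight the points by their errors:

```python
def linear_fit(x, y):
    """Unweighted least-squares fit y = b + c*x; returns (b, c)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    c = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - c * mx, c

# Hypothetical values of x = [Lambda*a]^2 spanning roughly a factor of
# 3.4, as in the text, with y = m_s(a)a / [Lambda*a] lying on an exact
# line so the intercept (continuum limit) is recovered exactly.
x = [0.0030, 0.0060, 0.0105]
y = [7.0 + 5.0 * xi for xi in x]
b, c = linear_fit(x, y)   # b plays the role of m_s / Lambda
```

On exact linear data the intercept and slope are recovered to rounding accuracy; with real data the intercept carries the statistical error quoted in the text.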
Thus we believe the extrapolation to zero lattice spacing is quite reliable and would expect the predicted continuum mass to be not very different from what would be obtained by any other reasonable, smooth extrapolation of the data. The tensor data of Table XXVI, combined with $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}a`$ of Table XXVII and fitted to Eq. (17) at the three largest $`\beta `$, the only $`\beta `$ for which tensor masses were found, are shown in Figure 19. The predicted continuum limit $`m_2/\mathrm{\Lambda }_{\overline{MS}}^{(0)}`$ is $`9.65\pm 0.36`$. The fit in Figure 19 has a $`\chi ^2`$ of 0.8, while, as before, the term $`[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]^2`$ in Eq. (17) varies by more than a factor of 3.4 over the fitting range. To obtain scalar and tensor glueball masses in units of MeV, we combine the continuum limit $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}/m_\rho `$ of $`0.305\pm 0.008`$ with $`m_\rho `$ of 770 MeV to give $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}`$ of $`234.9\pm 6.2`$ MeV. The scalar glueball mass becomes $`1648\pm 58`$ MeV and the tensor mass becomes $`2267\pm 104`$ MeV. The continuum limit results are summarized in Table XXVIII. For $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}/m_\rho `$ we take the value given in Ref. for a lattice with period of about 2.4 fermi. For the rho mass obtained at $`\beta `$ of 5.7 from a combination of propagators for rho operators with smearing parameters 0, 1 and 2, the 2.4 fermi result differs from the result for period 3.6 fermi by a bit over one standard deviation. This difference appears to be largely a consequence of a slightly poorer separation of the rho component of the propagator from excited state components in the 2.4 fermi rho mass calculation than in the 3.6 fermi calculation .
For the rho operator with smearing parameter 4, which couples more weakly to excited states, the difference at $`\beta `$ of 5.7 between the 2.4 fermi and 3.6 fermi predictions is much less than one standard deviation. Thus overall it appears to us reasonable to take the 2.4 fermi calculations as the infinite volume limit, within statistical errors. The continuum limit values of $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}/m_\rho `$ for the data combining smearings 0, 1 and 2 and for the data from smearing 4 are nearly identical. ## VIII Comparison with Other Results An independent calculation of the infinite volume, continuum limit of the valence approximation to several glueball masses is reported in Ref. . A second, more recent, calculation appears in Ref. . A comparison of Ref. with the original analysis of our results appears in Ref. . The calculation of Ref. uses the same plaquette action we use but takes a different set of glueball operators. The gauge field ensembles of Ref. range from 1000 to 3000 configurations. For the scalar and tensor masses Ref. reports $`1550\pm 50`$ MeV and $`2270\pm 100`$ MeV, respectively. The predicted zero lattice spacing masses are not actually found by extrapolation to zero lattice spacing, but are obtained instead from calculations at $`\beta `$ of 6.40 of glueball masses in units of the square root of the string tension, $`\sqrt{\sigma }`$, then converted to MeV using an assumed $`\sqrt{\sigma }`$ of 440 MeV with zero uncertainty. The uncertainties given in the masses are entirely the uncertainties in the $`\beta `$ of 6.40 calculations of masses in units of $`\sqrt{\sigma }`$ and are thus missing at least a contribution from the uncertainty in $`\sqrt{\sigma }`$. A graph shown in Ref. suggests that the $`\beta `$ of 6.40 value of $`[m_0(a)a]/[\sqrt{\sigma (a)}a]`$ is about 50 MeV below the data’s zero lattice spacing limit. An additional error of $`\pm 50`$ MeV in the scalar mass is therefore proposed in Ref.
as a consequence of the absence of extrapolation to zero lattice spacing. Since $`[m_0(a)a]/[\sqrt{\sigma (a)}a]`$ of Ref. is clearly rising as the lattice spacing falls, it does not appear to us that a symmetric error of $`\pm 50`$ MeV is an accurate representation of the effect of the absence of extrapolation. If the statistical error and extrapolation error in the scalar mass are, nonetheless, taken at face value and combined, the result is a prediction of $`1550\pm 71`$ MeV. No estimate is given for the extrapolation error in the tensor mass, which is found to be only weakly dependent on lattice spacing if measured in units of $`\sqrt{\sigma }`$. A scalar mass of $`1550\pm 71`$ MeV is a bit over one standard deviation below the result $`1648\pm 58`$ MeV in Table XXVIII, while the tensor mass of $`2270\pm 100`$ MeV is in close agreement with our value of $`2267\pm 104`$ MeV. If the continuum limit of the Ref. data is found by extrapolation to zero lattice spacing of $`[m_0(a)a]/[\mathrm{\Lambda }_{\overline{MS}}^{(0)}a]`$, following Section VII, the result for $`m_0/\mathrm{\Lambda }_{\overline{MS}}^{(0)}`$ is $`6.67\pm 0.33`$. Converted to MeV using $`\mathrm{\Lambda }_{\overline{MS}}^{(0)}`$ of $`234.9\pm 6.2`$ MeV, $`m_0`$ becomes $`1567\pm 88`$ MeV. This value is less than a standard deviation below the prediction $`1648\pm 58`$ MeV in Table XXVIII. The calculation of Ref. uses an improved action with the time direction lattice spacing chosen smaller than the space direction spacing. The gauge field ensembles range in size from 4000 to 10000 configurations. Masses measured in units of the parameter $`r_0^{-1}`$ are extrapolated to zero lattice spacing, then converted to MeV using a value of $`r_0^{-1}`$ found by extrapolation of $`r_0^{-1}/m_\rho `$ to zero lattice spacing.
As a result of working at relatively large values of lattice spacing, some ambiguity is encountered in matching the scalar mass’s lattice spacing dependence to the small lattice spacing asymptotic behavior expected for the improved action. Taking this uncertainty into account, the scalar mass is predicted to be $`1730\pm 94`$ MeV. The tensor mass, for which the extrapolation to zero lattice spacing encounters no problem, is predicted to be $`2400\pm 122`$ MeV. Both numbers are a bit under one standard deviation above the predictions in Table XXVIII. For the ratio $`m_2/m_0`$ Ref. predicts $`1.39\pm 0.04`$, in good agreement with the value $`1.375\pm 0.066`$ in Table XXVIII. Thus the difference between Table XXVIII and Ref. is almost entirely a discrepancy in overall mass scale. Combining our extrapolation of $`6.67\pm 0.33`$ for the data in Ref. with $`7.016\pm 0.167`$ in Table XXVIII gives $`6.95\pm 0.15`$ for $`m_0/\mathrm{\Lambda }_{\overline{MS}}^{(0)}`$, thus $`1631\pm 55`$ MeV. Combining $`1631\pm 55`$ MeV with $`1730\pm 94`$ MeV of Ref. gives a world average valence approximation scalar mass of $`1656\pm 47`$ MeV. This number is consistent with the unmixed scalar mass of $`1622\pm 29`$ MeV found in Ref. taking the observed states $`f_0(1710)`$, $`f_0(1500)`$ and $`f_0(1400)`$ as the mixed versions of the scalar glueball and the two isoscalar spin zero quarkonium states, respectively. The state $`f_0(1710)`$ in this calculation is assigned a glueball component of $`73.8\pm 9.5`$ %. Combining $`2270\pm 100`$ MeV, $`2267\pm 104`$ MeV and $`2400\pm 122`$ MeV gives a world average valence approximation tensor mass of $`2302\pm 62`$ MeV.
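The combinations quoted above are inverse-variance weighted averages. A short sketch (the helper name is ours) reproduces the numbers in the text from the individual determinations:

```python
import math

def weighted_average(values, errors):
    """Inverse-variance weighted mean and its standard error."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    err = 1.0 / math.sqrt(sum(weights))
    return mean, err

# m0/Lambda: our extrapolation of the Ref. data combined with Table XXVIII.
ratio, ratio_err = weighted_average([6.67, 7.016], [0.33, 0.167])
# World-average scalar mass in MeV from the two independent determinations.
mass, mass_err = weighted_average([1631.0, 1730.0], [55.0, 94.0])
```

The first combination gives 6.95 ± 0.15 and the second 1656 ± 47 MeV, matching the values quoted in the text.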
# Hadronic $`\sigma _{tot}^{pp}`$ from accelerators and cosmic rays ## Hadronic $`\sigma _{tot}^{pp}`$ from accelerators and cosmic rays Since the first results of the Intersecting Storage Rings (ISR) at CERN arrived in the 70s, it has been a well-established fact that $`\sigma _{tot}^{pp}`$ rises with energy , . The CERN $`S\overline{p}pS`$ Collider found this rise to hold for $`\sigma _{tot}^{\overline{p}p}`$ as well . Several parametrizations (purely theoretical, empirical or semi-empirical) fit the data quite well. All of them agree that at the energies of the future CERN Large Hadron Collider (LHC) (14 TeV in the centre of mass) or higher the rise will continue. A thorough discussion of these problems may be found in , . For our purposes we have chosen a parametrization used by experimentalists to fit their data . The most interesting piece is the one controlling the high-energy behaviour, given by a $`ln^2(s)`$ term, in order to be compatible, asymptotically, with the Froissart-Martin bound . The parametrization assumes $`\sigma _{tot}^{pp}`$ and $`\sigma _{tot}^{\overline{p}p}`$ to be the same asymptotically. It has shown its validity by predicting, from the ISR data (23-63 GeV in the centre of mass), the $`\sigma _{tot}^{\overline{p}p}`$ value found at the $`S\overline{p}pS`$ Collider (546 GeV), one order of magnitude higher in energy , . With the same well-known technique and using the most recent results it is possible to obtain estimations of $`\sigma _{tot}^{pp}`$ at the energies of the LHC and beyond . These estimations, together with our present experimental knowledge of both $`\sigma _{tot}^{pp}`$ and $`\sigma _{tot}^{\overline{p}p}`$, are summarized in Table 1 and plotted in fig. 1. We have also plotted the cosmic ray experimental data , . The curve is the result of the fit described in . The increase in $`\sigma _{tot}`$ as the energy increases is clearly seen.
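A Froissart-compatible $`ln^2(s)`$ parametrization of the kind described above can be sketched as follows. The parameters A, B and s0 below are illustrative, roughly of the size found in published fits; they are not the fitted values behind Table 1:

```python
import math

def sigma_tot(s, A=35.0, B=0.31, s0=29.0):
    """Total cross section in mb as a function of s in GeV^2, using a
    ln^2(s/s0) form compatible asymptotically with the Froissart-Martin
    bound.  A (mb), B (mb) and s0 (GeV^2) are hypothetical parameters."""
    return A + B * math.log(s / s0) ** 2

# sqrt(s) in GeV for the ISR, the SppS, the Tevatron and the LHC.
energies = [62.5, 546.0, 1800.0, 14000.0]
sigmas = [sigma_tot(rs ** 2) for rs in energies]
```

With these illustrative parameters the cross section rises monotonically and comes out in the 100-125 mb range at 14 TeV, the same ballpark as the prediction quoted below.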
The main conclusions from this analysis based on accelerator results are the predictions $`\sigma _{tot}=109\pm 8`$ mb at $`\sqrt{s}=14`$ TeV and $`\sigma _{tot}=130\pm 13`$ mb at $`\sqrt{s}=40`$ TeV. Cosmic ray experiments give us $`\sigma _{tot}^{pp}`$ as derived from cosmic ray extensive air shower (EAS) data . The primary interaction involved in EAS is proton-air; what is determined through EAS is the $`p`$-air inelastic cross section, $`\sigma _{inel}^{pair}`$. But the determination of $`\sigma _{inel}^{pair}`$ (or its relation with $`\sigma _{tot}^{pp}`$) is model dependent. A theory for interactions with nuclei must be used, usually Glauber’s theory . The AKENO Collaboration has quoted, from their results for a centre-of-mass energy in the interval 6-25 TeV, $`\sigma _{tot}^{pp}=133\pm 10`$ mb at $`\sqrt{s}=40`$ TeV. On the other hand, an analysis of the Fly’s Eye experiment results claims $`\sigma _{tot}^{pp}=175_{-27}^{+40}`$ mb at $`\sqrt{s}=40`$ TeV. It has been argued by Nikolaev that this contradiction between the values of both experiments disappears if, in the AKENO analysis, $`\sigma _{inel}^{pair}`$ is identified with an absorption cross section . He obtains $`\sigma _{tot}^{pp}=160-170`$ mb at $`\sqrt{s}=40`$ TeV, which solves the discrepancy. ## Are accelerator and cosmic ray $`\sigma _{tot}^{pp}`$ compatible? The results from cosmic ray experiments, after the previous analysis, have been made compatible among themselves. But they have shifted away from the estimations obtained with extrapolations using the data from accelerators. The validity of these extrapolations, of course, may be discussed. But we would like to point to the fact that most extrapolations (such as those using a $`ln(s)`$ term to control the high-energy behaviour) predict even lower values for $`\sigma _{tot}^{pp}`$. That makes the difference bigger. We have tackled the problem using the multiple-diffraction model , .
In a recent version of it the parameters of the model are determined by fitting the $`pp`$ accelerator data in the interval $`13.8\le \sqrt{s}\le 62.5`$ GeV. The $`\sigma _{tot}^{pp}`$ values obtained when extrapolating to higher energies seem to confirm the above quoted compatible values of the cosmic ray experiments. That would imply that the extrapolation cherished by experimentalists is wrong. But this approach predicts a value for $`\sigma _{tot}^{pp}`$ at the Fermilab Collider (1.8 TeV) which seems to be very high: $`91.6`$ mb (no error quoted). In table 1 we see that the measured $`\sigma _{tot}^{\overline{p}p}`$ at that energy is much smaller. It may be argued that $`\sigma _{tot}^{pp}`$ and $`\sigma _{tot}^{\overline{p}p}`$ are different at high energies. This is the “Odderon hypothesis”, which has been very much weakened recently . Taking this into account, in our multiple-diffraction analysis we assume the same high-energy behaviour for $`\sigma _{tot}^{pp}`$ and $`\sigma _{tot}^{\overline{p}p}`$. Results are summarized in table 2 and plotted in fig. 2.

| $`\sqrt{s}`$ (TeV) | $`\sigma _{tot}`$ (mb) | $`\sigma _{upp}`$ (mb) | $`\sigma _{low}`$ (mb) | $`\sigma _{tot}`$ (mb) | $`\sigma _{upp}`$ (mb) | $`\sigma _{low}`$ (mb) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.55 | 69.39 | 77.77 | 62.0 | 62.24 | 63.56 | 60.98 |
| 0.9 | 78.04 | 89.62 | 67.87 | 67.94 | 69.35 | 66.59 |
| 1.8 | 91.74 | 108.64 | 76.99 | 76.44 | 78.14 | 74.84 |
| 14 | 143.86 | 182.45 | 110.32 | 104.17 | 108.57 | 99.85 |
| 40 | 177.23 | 230.32 | 130.95 | 118.99 | 125.98 | 111.75 |
| | (a) | | | (b) | | |

Table 2: Predicted $`\sigma _{tot}^{pp}`$ from fitting accelerator data (a) at $`\sqrt{s}\le `$ 62.5 GeV; (b) including data at 546 GeV and 1.8 TeV.
Our results indicate that, if in the phenomenological multiple-diffraction approach we limit our fitting calculations to the accelerator domain $`\sqrt{s}\le 62.5`$ GeV, the extrapolation to high energies is in complete agreement with the analysis carried out by Nikolaev , and with the experimental data of the Fly’s Eye and Akeno collaborations, because their quoted errors fall within the error band of our extrapolations. That is, such an extrapolation produces an error band so large at cosmic ray energies that any cosmic ray results become compatible with results at accelerator energies. However, if additional data at higher accelerator energies are included, the error band obviously narrows and things change. This can be seen in fig. 2b, where we have considered data at 0.546 TeV and 1.8 TeV (according to Table 1), in which case the values of $`\sigma _{tot}^{pp}`$ predicted from our extrapolation at $`\sqrt{s}=40`$ TeV, $`\sigma _{tot}^{pp}=119\pm 7`$ mb, are much lower than those illustrated in fig. 2a, and clearly incompatible with the reinterpreted Fly’s Eye and Akeno results by several standard deviations. Concerning the quoted error bands, we employed the so-called forecasting technique of regression analysis . We conclude that, when all available experimental data are taken into account, the estimated values for $`\sigma _{tot}^{pp}`$ obtained by extrapolation from present high-energy accelerators and those obtained from cosmic ray experiments are incompatible in the region around $`\sqrt{s}=40`$ TeV.
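The forecasting technique of regression analysis mentioned above attaches to each extrapolated point a standard error that grows with the distance from the fitted data, which is why adding the 0.546 and 1.8 TeV points narrows the band at 40 TeV. A generic sketch for a straight-line fit (the toy data below are illustrative, not our actual fit variables):

```python
import math

def forecast_band(x, y, x_new):
    """Least-squares line plus the standard error of a forecast at x_new,
    following the usual regression forecasting formula
    s * sqrt(1 + 1/n + (x_new - xbar)^2 / Sxx)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    intercept = ybar - slope * xbar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r ** 2 for r in resid) / (n - 2))
    se = s * math.sqrt(1.0 + 1.0 / n + (x_new - xbar) ** 2 / sxx)
    return intercept + slope * x_new, se

# Toy fit: five nearly linear points; forecasts further from the data
# carry a wider band.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
pred_near, se_near = forecast_band(x, y, 6.0)
pred_far, se_far = forecast_band(x, y, 10.0)
```

The band at the far point is wider than at the near one, mirroring how our error band grows between accelerator and cosmic ray energies.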
# A Simple Model of the Evolution of Simple Models of Evolution ## Acknowledgments The authors thank Marc Abrahams and Nigel Snoad for helpful discussion, and Prof. Per Bak for providing a continual source of inspiration. CRS thanks Prof. David Griffeath and the undergraduate students at Madison for providing financial support, and Prof. Yuri Klimontovich, whose book (Klimontovich (1990)) first alerted him to the possibilities of simple models of evolution by physicists. WAT has already thanked an undisclosed set of people of analytically determined size, and expects eventually to be thanked by others in kind due simply to expansionary propagation of that original thankfulness (described in a subsequent paper, now in preparation).
# Star formation along a misaligned bar in the peculiar starburst galaxy Mkn 439 ## 1 Introduction Mkn 439 (NGC 4369, UGC 7489, IRAS 12221+3939) is a nearby early type starburst galaxy (z=0.0035). It has been classified as a starburst by Balzano (balz (1983)) based on the equivalent width of $`H\alpha `$ and also belongs to the $`\mathrm{IRAS}`$ Bright Galaxy Sample (Soifer et al. soif (1987)). On the basis of multiaperture near infrared photometry and optical spectroscopy, Devereux (dev (1989)) describes this galaxy as an M82-type starburst galaxy. Rudnick & Rix (rudrix (1998)) report an azimuthal asymmetry in the stellar mass distribution of Mkn 439 based on the $`R`$ band surface brightness. The peculiar morphology of Mkn 439 attracted our attention during the course of an optical imaging study of a sample of starburst galaxies derived from the Markarian lists (Chitre chitre (1999)). In long exposure images the galaxy appeared nearly circular and featureless. The outer isophotes were smooth and nearly circular in the $`B`$ and $`R`$ bands. However, the isophotal contours show highly complex features in the inner parts. Moreover, the strength of these features is wavelength dependent. Wiklind & Henkel (wik (1989)) report the detection of a molecular bar in the central region of this galaxy based on the results of CO mapping. No detailed surface photometric studies of this galaxy have been reported. Usui, Saito & Tomita (usui (1998)) report the detection of two regions bright in $`H\alpha `$ that are displaced from the nucleus and faint emission from the nucleus. However, their data were obtained at a seeing of 5″. In order to study the spatial distribution of various stellar populations in Mkn 439, we imaged this galaxy in the $`B`$, $`R`$, $`H\alpha `$ and $`H`$ bands. The $`B`$ and $`R`$ band continuum traces the intermediate age populations while $`H\alpha `$ traces the young, massive stellar populations.
The infrared continuum of galaxies is dominated by evolved stellar populations. Hence, the $`H`$ band and the line emission images can be used along with the optical continuum images to separate the young and old stellar populations spatially. ## 2 Observations and data reduction ### 2.1 Optical ($`B`$,$`R`$) and $`H\alpha `$ imaging The $`B`$, $`R`$ and $`H\alpha `$ images were obtained under photometric conditions with the 1.2m telescope at Gurushikhar, Mt. Abu. The images were taken at the Cassegrain focus employing a thinned back illuminated Tektronix 1K $`\times `$ 1K CCD. Binning of 2 $`\times `$ 2 was employed before recording the images to increase the signal-to-noise ratio of the measurements and to reduce the data storage requirements. The final resolution was 0.634″/pixel, which is sufficient to sample the point spread function (PSF) appropriately. Typical seeing (full width at half maximum (FWHM) of the stellar images) was $`\sim `$ 1.8″ for the images. For the $`H\alpha `$ images a filter having FWHM of 80 Å was used. Another off-band filter of the same FWHM was used to measure the galactic red continuum. About 3-4 exposures were taken in each of the photometric bands. The total exposure times were 510 sec, 360 sec and 1600 sec in $`B`$, $`R`$ and $`H\alpha `$ respectively. Standard stars from Landolt (land (1992)) were observed to calibrate the broad band data. Twilight flats were taken and median filtered to construct the master flats. The data were reduced using IRAF <sup>1</sup><sup>1</sup>1IRAF is distributed by National Optical Astronomy Observatories, which is operated by the Association of Universities Inc. (AURA) under cooperative agreement with the National Science Foundation, USA. on the IBM-6000 RISC at PRL, Ahmedabad. A detailed reduction procedure can be found in Chitre & Joshi (cucj (1999)). ### 2.2 $`H`$ band images The $`H`$ band images were recorded with a 256$`\times `$256 NICMOS array at the 1.2 m Gurushikhar telescope.
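The master-frame construction used in the reduction, for the optical as well as the $`H`$ band data described next, reduces to pixelwise median combination and frame arithmetic. A toy pure-Python sketch (real reductions used IRAF tasks on full frames; the 2×2 arrays are illustrative only):

```python
from statistics import median

def median_stack(frames):
    """Pixelwise median of a list of equally sized 2-D frames (lists of
    lists), used to build master dark, sky and flat frames; the median
    rejects outliers such as cosmic-ray hits."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

def subtract(frame, other):
    """Pixelwise difference of two frames (e.g. sky subtraction)."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(frame, other)]

# Toy 2x2 frames standing in for sky exposures; one has a cosmic-ray hit.
skies = [[[10, 10], [10, 10]],
         [[10, 10], [10, 10]],
         [[10, 99], [10, 10]]]   # the 99 is rejected by the median
master_sky = median_stack(skies)
galaxy = [[30, 30], [30, 30]]
sky_subtracted = subtract(galaxy, master_sky)
```

Flat-field correction follows the same pattern with a pixelwise division in place of the subtraction.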
An exposure of 30 sec ensured that the background and the galaxy signal were in the linear portion of the detector response curve (Joshi et al. jetal (1999)). Observations were made by alternating between the galaxy and positions 4′-5′ to the north and south till a total integration time of 600 seconds on the galaxy was achieved. Several dark frames having the same time sequences as those of the galaxy or sky frames were taken and median filtered master dark frames were constructed. The median filtered master sky frames were constructed using several sky frames with integration times equal to those given for the galaxy. All the source frames were corrected for the sky background by subtracting the master sky frame from the source frames. As the program galaxy does not occupy the whole detector array, the residual sky was determined from the image corners and the images were then corrected for the residual sky. The dark subtracted sky frame was used to construct the master flat. The sky corrected galaxy frames were corrected for the flat field response of the detector by dividing the galaxy frames by the master flat. Finally, the galaxy images were aligned by finding the center of the galaxy nucleus using the IMCNTR task in IRAF and co-added to improve the S/N ratio. The plate scale was selected to be 0.5″ per pixel. Faint standard stars from the UKIRT lists were observed for calibration. ## 3 Morphology of Mkn 439 Fig. 1 illustrates the isophotal contours of the inner 25″ of Mkn 439 in the $`B`$, $`R`$, continuum subtracted $`H\alpha `$ and $`H`$ bands. A comparison of the various panels in Fig. 1 shows that the morphological structures vary at different wavelengths. The morphology of Mkn 439 in the $`B`$ band is characterized by smooth outer isophotes and a very complex light distribution in the inner region. The central region is elliptical and is elongated in the NS direction. Faint indications of a spiral arm in the NE direction are seen in the isophotal maps in $`B`$ and $`R`$.
The contour maps show two projections - one along the NW and the other along the SE from the nuclear region. These projections are most prominent in the $`B`$ continuum, getting progressively fainter at longer wavelengths and nearly disappearing in $`H`$. The $`B`$ band image shows another condensation to the SW of the nuclear region. Similar to the projections, this feature also gets progressively fainter at longer wavelengths. The $`H`$ band image shows smoother isophotes. The signature of the projections is absent at this wavelength. As seen in the $`R`$ band, the outer isophotes are nearly circular. However, unlike in the other optical bands, there are no spurs or bar-like features apparent in the $`H`$ band image. The continuum subtracted $`H\alpha `$ image shows an elongated bar-like structure corresponding to the projections seen in the contour maps. $`H\alpha `$ emission is seen along the bar in the form of clumps. Emission is most intense at the ends of the bar, though it is found to extend throughout the body of the galaxy. Emission from the nucleus is much fainter compared with that from the clumps at the bar ends. $`H\alpha `$ emission is maximum in Spot 1. The clump of $`H\alpha `$ emission seen to the E of the extended emission has no counterpart in the continuum colour map. The bright blobs of emission in $`H\alpha `$ have no counterparts in the $`H`$ band. This indicates that the HII regions are young and have not yet evolved enough to form a considerable number of red giants and supergiants to start influencing the light in the $`H`$ band. It is also seen that the latest episode of star formation is misaligned with the isophotal contours of the near infrared continuum. The ($`B`$-$`H`$) colour map (Fig. 2) was constructed by scaling the images, rotating and aligning them. It shows interesting features. A bar-like structure made up of blue clumps is seen in the central part of the galaxy.
A spiral arm starts from the nuclear region and curves towards the eastern side. A distinct blue clump is present at either end of the bar, marked as Spot 1 and Spot 2 in Fig. 2. These correspond to the ends of the two projections seen in the isophotal contours in $`B`$. Another blue region (Spot 3) is seen about 8″ to the south of Spot 1. The ($`B`$-$`R`$) and ($`B`$-$`H`$) colours of these regions are listed in Table 2. The isophotal contours of Mkn 439 appear different in the optical, the near infrared and the line emission, indicating the spatial separation of these various populations. The gaseous component in the galaxy appears to be under the influence of a potential which has distributed it in the form of a gaseous bar. Compression of the gas in the bar has led to the formation of young, massive stars which are seen as clumpy HII regions along the bar. We infer that the latest dynamical episode experienced by the galaxy has given rise to the formation of young, massive stars along the bar as a result of the response of the gas to the perturbing potential. A comparison of the $`H\alpha `$ contours in Fig. 1 and Fig. 2 reveals that no HII regions are seen in the blue spiral arm-like feature emerging from the nucleus, indicating that the blue spiral arm is made up of an intermediate age stellar population. Wiklind & Henkel (wik (1989)) report the detection of a molecular bar in the central region of this galaxy based on CO mapping. They observed Mkn 439 in both the J=1-0 and J=2-1 lines of CO and found that the ratio of the J=2-1 to the J=1-0 intensity varies with position, and inferred that this was due to changing physical conditions in the molecular cloud population. The contour maps of these two transitions can be found in Wiklind (wikthes (1990)).
Many galaxies with weak stellar bars have been found to contain pronounced bar-like gas distributions similar to the one found in Mkn 439. For example, the center of the nearby Scd galaxy IC 342 harbors a bar-like molecular gas structure and a modest nuclear starburst (Lo et al. lo (1984); Ishizuki et al. ish (1990)). Other examples of galaxies having a molecular bar at their centers are NGC 253 and M83. Simulations by Combes (combes (1994)) describe the formation of a gas bar which is phase shifted from the stellar component in the innermost regions of a galaxy due to the existence of perpendicular orbits. However, her models describe the situation for nuclear bars in the innermost 1 kpc region. An alternative explanation could be that two unequal mass spirals have merged to form the S0 galaxy. Bekki (bek (1998)) suggests that S0 galaxies are formed by the merging of spirals, and that when the two spirals are of unequal mass, the S0 galaxy thus formed has an outer diffuse stellar envelope, or a diffuse disk-like component, and a central thin stellar bar composed mainly of new stars. ## 4 Isophotal analysis In order to provide a quantitative description of the morphological aspects at various wavelengths, we explored Mkn 439 using ellipse fitting techniques. The procedure consists of fitting elliptical isophotes to the galaxy images and deriving one-dimensional azimuthally averaged radial profiles of the surface brightness, ellipticity and position angle, based on the algorithm given by Jedrejewski (jedr (1987)). This technique has been used successfully in studying various structures in galaxies such as bars, rings and shells, and in searching for dust in them (Bender & Möllenhoff bend (1987); Wozniak et al. woz (1995) and Jungweirt, Combes & Axon jung (1997)). Multiband isophotal analysis can also be used to indicate whether the reddening seen in colour maps is due to a redder stellar population or due to the presence of dust (Prieto et al. 1992a , 1992b ). 
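As an illustration of this kind of analysis, the sketch below builds a synthetic, circularly symmetric exponential disk, extracts an azimuthally averaged radial surface-brightness profile, and recovers the disk scale length from a log-linear fit. It is a simplified stand-in for the full ellipse-fitting procedure (which also fits ellipticity and position angle); all numerical values are illustrative.

```python
import math

def synthetic_disk(size=201, i0=100.0, h=20.0):
    """Circular exponential disk I(r) = I0 * exp(-r/h), centered on the grid."""
    c = size // 2
    img = [[i0 * math.exp(-math.hypot(x - c, y - c) / h)
            for x in range(size)] for y in range(size)]
    return img, c

def radial_profile(img, c, rmax):
    """Azimuthally averaged profile in unit-width circular annuli."""
    sums = [0.0] * rmax
    counts = [0] * rmax
    for y, row in enumerate(img):
        for x, val in enumerate(row):
            r = int(math.hypot(x - c, y - c))
            if r < rmax:
                sums[r] += val
                counts[r] += 1
    return [s / n for s, n in zip(sums, counts)]

def fit_scale_length(profile, r1, r2):
    """Least-squares fit of ln I(r) = a - r/h over the radial range [r1, r2)."""
    rs = list(range(r1, r2))
    ys = [math.log(profile[r]) for r in rs]
    n = len(rs)
    rbar = sum(rs) / n
    ybar = sum(ys) / n
    slope = (sum((r - rbar) * (y - ybar) for r, y in zip(rs, ys))
             / sum((r - rbar) ** 2 for r in rs))
    return -1.0 / slope

img, c = synthetic_disk()
prof = radial_profile(img, c, 80)
h_fit = fit_scale_length(prof, 5, 80)
```

With real data the annuli are ellipses rather than circles, and the fit is restricted to the outer disk; the scale lengths quoted later for Mkn 439 come from exactly this kind of log-linear fit to the outer surface-brightness profile.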
The surface brightness distribution and the variation of the position angle and ellipticity of the isophotes in each filter (Fig. 3) were obtained by fitting ellipses to the images in each filter using the ISOPHOTE package within STSDAS (the Space Telescope Science Data Analysis System, distributed by the Space Telescope Science Institute). The detailed fitting procedure used is outlined in Chitre (chitre (1999)). The radial distribution of the colour indices (Fig. 4) was derived from the surface brightness profiles. Fitting isophotes to the images reveals changing ellipticity and position angle throughout the body of the galaxy (see Fig. 3). The luminosity profile is smooth except for small features at 5″ and 10″ in the optical bands. An inspection of Fig. 4 shows that the galaxy is bluest near the center and gets redder outwards. The ellipticity of the elliptical feature is maximum at the center and decreases outwards, unlike a bar, in which the ellipticity increases while the position angle stays constant. The ellipticity profile shows a double peaked structure in the inner region. The first peak is seen between 2″-3″ and the second peak at 5″. The ellipticity of the first peak is wavelength dependent, the isophotes at shorter wavelengths being rounder. The colour map also shows a small local redder region between 3″ and 4″. The surface brightness profiles also show a small dip in the intensity at shorter wavelengths at 4″. All these features indicate the presence of dust in the inner 4″ of this galaxy. van den Bergh & Pierce (van (1990)) do not find any trace of dust in Mkn 439 from a direct inspection of the $`B`$ band images on a CCD frame. However, ellipse fitting analysis has been successfully employed in the present study to infer the presence of dust in the inner regions of this galaxy based on multiband observations. The other peak occurs at 5″, which corresponds to the brightest region seen in H$`\alpha `$. 
The depth of the dip between the two peaks decreases at longer wavelengths. The first peak and the dip are probably due to dust, while the second peak corresponds to the blue region at the end of the bar. Both factors, dust and star forming regions, contribute most strongly at shorter wavelengths. At longer wavelengths, the effects of both dust and the star forming regions are reduced, so we see the underlying old stellar population. As a result the depth of the dip is reduced at longer wavelengths. Beyond 5″, the ellipticity starts dropping, reaches a low value ($``$0.05) at 15″ and remains low beyond that in all filters. Between 5″ and 15″, the isophotes at shorter wavelengths are rounder than the corresponding isophotes at longer wavelengths, indicating the presence of dust in this region of Mkn 439. The position angle is nearly constant in the inner 10″. The luminosity profiles show an inner steeply rising part and an outer exponential disk. We derived the scale lengths of Mkn 439 in each of the filter bands. This was done by marking the disk and fitting an exponential to the surface brightness profile in this region. The range of the fit was taken to be from 18″ to the region where the signal falls to 2$`\sigma `$ of the background. The fit to the $`H`$ band is shown in Fig. 3. The scale lengths derived were 0.97$`\pm 0.14`$ kpc in $`B`$, 0.84$`\pm 0.02`$ kpc in $`R`$ and 0.61$`\pm 0.03`$ kpc in $`H`$. ## 5 Conclusions 1. Mkn 439 is a peculiar galaxy made up of three distinct components: an elliptical structure in the inner regions, a smooth outer envelope in which this structure is embedded, and a bar. We detect massive star formation along the bar in Mkn 439. This bar is misaligned with the main body of the galaxy. 2. The signature of the bar gets progressively fainter at longer wavelengths. 3. The stars in the bar are young and have not yet started influencing the light in the near infrared region. 
This indicates that the galaxy has undergone some perturbation which triggered the bar formation and the starburst along the bar in recent times. 4. There are indications of the presence of dust in the inner 15″ of the galaxy. ###### Acknowledgements. We are grateful to the anonymous referee for useful suggestions. One of the authors, A. Chitre, wishes to thank Tommy Wiklind for useful discussions. The authors are thankful to Dr. K.S. Baliyan for helping with the observations. This work was supported by the Department of Space, Government of India.
# Completeness and confusion in the identification of Lyman-break galaxies ## 1 Introduction The field of the $`z=3.8`$ quasar pair PC1643+4631 A & B contains a Cosmic Microwave Background decrement (Jones et al. 1997) which may be the Sunyaev-Zel’dovich effect of a cluster of galaxies at $`z>>1`$ (Saunders et al. 1997; Kneissl et al. 1998). In an attempt to detect such a cluster, we have carried out deep $`UGVRI`$ imaging of the field. No cluster is immediately obvious in the images, so we carried out Monte-Carlo simulations to quantify our ability to detect a cluster of galaxies in our images (full details are given in Haynes et al. 1999 and Cotter et al. 1999). ## 2 Model high-$`z`$ cluster galaxies Model clusters were created using simulated colours of evolving galaxies in the redshift range $`0<z<4`$, and added to our WHT images. We then used photometric redshift techniques to try to recover the simulated cluster. As the cluster redshift reached $`z\sim 1`$ and beyond, the lack of strong spectral features in the optical made the cluster increasingly difficult to detect. However, even at $`z\sim 3`$, where the characteristic Lyman-limit break became detectable, a large fraction of the fake cluster galaxies were still recovered from the simulation with ambiguous colours (Fig 1). Indeed, in our $`z=3.0`$ simulation, only one in five of the model cluster galaxies was identified as such by $`UGR`$ selection. The recovered colours were skewed towards the red in $`G-R`$; this is a result of confusion with other objects in the field. ## 3 Recovery of model $`z\sim 3`$ galaxies Our original search for $`z\sim 3`$ Lyman-break galaxies (LBGs) in these images (Cotter & Haynes 1998) had recovered a reasonable number of candidates—approximately 1.1 arcmin<sup>-2</sup>, similar to the findings of the surveys of Steidel et al. (1996,1998). 
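The completeness side of such a simulation can be sketched in a few lines: inject fake sources of known brightness into a noisy image, run a simple threshold detection at the injected positions, and count the recovered fraction. This toy version (pure Python, single-pixel sources, illustrative numbers) only stands in for the real procedure, which uses realistic galaxy profiles and a full source-extraction pass.

```python
import random

def completeness(amp, n_src=100, size=128, noise=1.0, thresh=10.0, seed=1):
    """Fraction of injected sources recovered above `thresh` (in noise units)."""
    rng = random.Random(seed)
    # background: Gaussian sky noise
    img = [[rng.gauss(0.0, noise) for _ in range(size)] for _ in range(size)]
    # inject point sources at random interior positions
    positions = [(rng.randrange(5, size - 5), rng.randrange(5, size - 5))
                 for _ in range(n_src)]
    for (y, x) in positions:
        img[y][x] += amp
    # "detect": is the pixel at the injected position above threshold?
    recovered = sum(1 for (y, x) in positions if img[y][x] > thresh * noise)
    return recovered / n_src
```

In the real analysis the detection step is a full FOCAS run and the recovered fraction is tallied as a function of magnitude; the 53% completeness quoted below for galaxies with input R < 25.5 is the analogue of this fraction.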
The fact that our recovery of simulated high-$`z`$ cluster galaxies was so inefficient therefore prompted us to measure the effects of completeness and confusion specifically for $`z\sim 3`$ galaxies. We ran 1000 simulations, each time adding ten fake LBGs to our images. Fake LBGs were drawn from a Schechter luminosity function with $`R_{*}=24`$ and $`\alpha =-1.06`$; all had input colours $`G-R=2.2`$, $`U-G=0.3`$. We used input half-light radii of 0.2-0.3″ (Giavalisco et al. 1998). Then, using FOCAS, we attempted to recover the LBGs. Our selection criteria are chosen to be as close as possible to those of Steidel et al. We select only those galaxies clearly detected with $`R<25.5`$ above the 3-$`\sigma `$ isophote in $`R`$, measure magnitudes in $`U`$ and $`G`$ through this $`R`$-band isophote, and then impose a colour cut of $`U-G>2`$, $`U-G>4(G-R)+0.5`$, which is closely equivalent to the “robust” colour selection of Steidel et al. First, we find that 53% of galaxies with input $`R<25.5`$ are selected to the isophotal $`R=25.5`$ limit. Second, we find that, as for our fake cluster galaxies, a large fraction of the fake LBGs are scattered far away from their input colours. In total, only 23% of the input LBGs with $`R<25.5`$ remain within the $`z\sim 3`$ region of the $`UGR`$ plane (Fig 2). Therefore, the true number of LBG candidates in our images may be four times greater than the 1.1 arcmin<sup>-2</sup> we measure. Again we stress that, while our $`UGR`$ filter set is slightly different from the $`U_nG`$ used by Steidel et al., our search for genuine LBG candidates in these images finds a surface density at least as great as that of Steidel et al. These results suggest that ground-based $`UGR`$ selection, while extremely successful at identifying $`z\sim 3`$ galaxies, may miss a significant fraction of the population. 
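The colour cut quoted above is easy to encode; the sketch below applies it to a small catalogue of (U, G, R) magnitudes. The limits and the cut follow the text (R < 25.5, U−G > 2, U−G > 4(G−R) + 0.5); treat it as an illustration rather than the exact pipeline, which applies the cut to isophotal magnitudes measured by FOCAS.

```python
def is_lbg_candidate(u, g, r, r_lim=25.5):
    """Apply the UGR Lyman-break colour selection described in the text."""
    if r >= r_lim:          # must be clearly detected above the R limit
        return False
    ug, gr = u - g, g - r   # colours
    return ug > 2.0 and ug > 4.0 * gr + 0.5

catalogue = [
    (27.0, 24.5, 24.2),   # strong U-band dropout: selected
    (25.0, 24.6, 24.3),   # too blue in U-G: rejected
    (28.0, 25.8, 25.6),   # fainter than the R limit: rejected
]
flags = [is_lbg_candidate(*m) for m in catalogue]
```

The hypothetical magnitudes above are chosen only to exercise each branch of the cut.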
This may have a bearing on the apparent discrepancy between the surface densities of LBGs measured in the Hubble Deep Field (HDF) and in ground-based surveys. There are 12 galaxies in the HDF which have spectroscopic redshifts $`2.8<z<3.5`$ and $`V_{606}<25.5`$ (Dickinson 1997). These galaxies correspond to those which would be detected by the “robust” LBG candidate criteria of Steidel et al. (1998). However, one would expect, given the published surface-density of “robust” LBG candidates of $`0.7`$ arcmin<sup>-2</sup> (Steidel et al. 1998), to find only three such LBGs in the HDF. Of course, cosmic variance will be significant for the HDF; but it is striking that the apparent overdensity of LBGs in the HDF corresponds with our estimate of the fraction of LBGs lost to incompleteness and confusion in the ground-based images. ## 4 Conclusions We have carried out simulations to examine the effectiveness of searches for high-redshift galaxies in deep optical images typical of those obtained with 4-m telescopes. We find that the scatter of the recovered colours of model galaxies away from their model colours is two to three times greater than that expected from the photometric errors alone and arises as a result of confusion between the simulated galaxies and the real objects in the field. Because of the effects of incompleteness and confusion, the surface densities of LBGs based on ground-based imaging may be underestimated by a factor of four; this is consistent with the surface density of LBGs measured in the HDF. To investigate these effects further, we will carry out more detailed simulations using our present data to investigate how the inferred luminosity function of the LBGs is affected by incompleteness and confusion. We also plan deep imaging programmes with the new generation of wider-field, high-image-quality instrumentation as it becomes available.
# The resummed thrust distribution in DIS ## 1 Introduction For some time now it has been standard practice in $`e^+e^{-}`$ reactions to compare event-shape distributions with resummed perturbative predictions (see for instance ). The resummation is necessary because in the two-jet limit (small values of the shape variable) the presence of large soft and collinear logarithms spoils the convergence of the fixed-order calculations. Such resummed analyses have led to valuable information about the strong coupling constant and also about non-perturbative effects. At HERA, studies of event-shape distributions are being carried out by both collaborations, but as yet no perturbative resummed calculations exist for comparison. Here we present preliminary results on such a calculation. In DIS, event shapes are defined in the current hemisphere of the Breit frame to reduce contamination from remnant fragmentation, which is beyond perturbative control. The distribution of emissions in the current hemisphere ($`\mathcal{H}_\text{C}`$) is analogous to that in a single hemisphere of $`e^+e^{-}`$. Differences arise from $`e^+e^{-}`$ however because the momentum of the current quark (the quark struck by the photon) depends on remnant-hemisphere ($`\mathcal{H}_\text{R}`$) emissions through recoil effects. This necessitates a resummed treatment of the space-like branching of the incoming parton. Currently such a simultaneous resummation of the space-like and time-like (double logarithmic) contributions exists only for jet multiplicities, or for cross sections at large $`x`$. 
For the thrust $`T`$ (as defined in the following section) the recoil from remnant hemisphere emissions can be divided into two parts: a piece from soft and collinear emissions, which gives double logarithms $`\alpha _\text{s}^n\mathrm{ln}^{2n}(1-T)`$, identical to those from half a hemisphere of $`e^+e^{-}`$, and a piece from purely collinear emissions which gives single logarithms $`\alpha _\text{s}^n\mathrm{ln}^n(1-T)`$ that can be identified with a change of scale in the parton distribution. This is outlined in the next section. A valuable cross-check of any resummation is to expand it to NLO and compare the result to that from fixed order Monte Carlo programs such as DISASTER++ and DISENT . This is illustrated in section 3. Finally in section 4, we comment on possible future developments. ## 2 The Thrust in DIS There are several possible definitions of the thrust in DIS, which differ according to a choice of axis and the normalisation. We here consider the thrust with respect to the photon axis, $`\vec{n}_\gamma `$, and normalised to $`Q/2`$, $$T=\frac{2}{Q}\underset{i\in \mathcal{H}_\text{C}}{\sum }\vec{k}_i\cdot \vec{n}_\gamma ,$$ (1) where the sum extends over all particles in the current hemisphere, $`H_C`$. At lowest order $`T=1`$, so the region requiring resummation will be that of $`1-T`$ close to zero. 
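As a concrete illustration of Eq. (1), the sketch below evaluates T for a toy set of final-state three-momenta in the Breit frame, keeping only particles in the current hemisphere (taken here as those with positive momentum component along the photon axis). The conventions and the numerical values are illustrative, not taken from the paper.

```python
def thrust(momenta, q):
    """T = (2/Q) * sum of k . n over current-hemisphere particles.

    `momenta` is a list of (kx, ky, kz) three-momenta in the Breit frame,
    with the photon axis n taken along +z; the current hemisphere is kz > 0.
    """
    return (2.0 / q) * sum(kz for (kx, ky, kz) in momenta if kz > 0.0)

Q = 10.0
# lowest order: a single current quark with kz = Q/2 gives T = 1
t_born = thrust([(0.0, 0.0, 5.0), (0.0, 0.0, -5.0)], Q)
# extra emissions sharing the current-hemisphere momentum reduce T below 1
t_emit = thrust([(1.0, 0.0, 3.0), (0.0, 1.0, 1.0), (0.0, -1.0, -4.0)], Q)
```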
Expressing all momenta in terms of their Sudakov components $$k_i=\alpha _iP+\beta _iP^{\prime }+k_{t,i},\alpha _i\beta _i=\vec{k}_{t,i}^2/Q^2,$$ (2) where $`P=\frac{1}{2}Q(1,0,0,1)`$ and $`P^{\prime }=\frac{1}{2}Q(1,0,0,-1)`$ are along the remnant and current directions respectively, we can write the thrust as $$1-T=\underset{i\in \mathcal{H}_\text{R}}{\sum }\beta _i+\underset{i\in \mathcal{H}_\text{C}}{\sum }\alpha _i+\alpha _q,$$ (3) where the sums now run over emitted particles only and $`\alpha _q`$ is the $`\alpha `$ component of the current quark (this expression must be modified when the current quark goes into the remnant hemisphere — but such a situation is not relevant for small $`1-T`$). Dependence on the remnant hemisphere emissions arises both through the $`\sum _{i\in \mathcal{H}_\text{R}}\beta _i`$ term and through $$\alpha _q=\frac{k_{t,q}^2}{\beta _qQ^2}\simeq \frac{1}{Q^2}\left|\underset{i}{\sum }\vec{k}_{t,i}\right|^2,$$ (4) (we have made use of the fact that $`\beta _q\simeq 1`$). We now consider the cross section $`\sigma (\tau )`$ for $`1-T`$ to be smaller than $`\tau `$. Roughly it will contain virtual corrections from the exclusion of all emissions whose contribution to (3) is larger than $`\tau `$. In the remnant hemisphere we exclude soft and collinear emissions, $`k_t>\beta >\tau `$, where the $`k_t>\beta `$ condition ensures that the particle is in $`\mathcal{H}_\text{R}`$. The integration over $`k_t`$ and $`\beta `$ gives a double logarithm. In $`\mathcal{H}_\text{C}`$ we exclude the soft and collinear region $`k_t>\alpha >\tau `$. Again this gives us a double logarithm. The exclusion $`\alpha _q\simeq k_{t,i}^2/Q^2>\tau `$ is just a limit on the maximum emitted transverse momentum: this implies ‘stopping’ the DGLAP parton evolution at a scale $`\tau Q^2`$, which gives single collinear logarithms. 
These aspects are illustrated in the first order result for $`\sigma (\tau )`$: $$\begin{array}{c}\frac{\sigma (\tau )}{\sigma }=1-\frac{\alpha _\text{s}C_F}{2\pi }\left[2\mathrm{ln}^2\frac{1}{\tau }-3\mathrm{ln}\frac{1}{\tau }\right]\hfill \\ \hfill -\frac{\alpha _\text{s}}{2\pi }\frac{\mathrm{ln}1/\tau }{q(x)}\int _x^1\frac{dz}{z}\left[q\left(\frac{x}{z}\right)P_{qq}(z)+g\left(\frac{x}{z}\right)P_{qg}(z)\right].\end{array}$$ (5) The first line contains the soft and collinear double logarithms (which turn out to be identical to the $`e^+e^{-}`$ result) from the conditions on the $`\alpha _i`$ and $`\beta _i`$. The second line contains the collinear logarithm associated with the restriction on the DGLAP evolution. Photon-gluon fusion plays a role only through this collinear logarithm, in the convolution with the gluon distribution. The actual details of the resummation will be presented in . Schematically, the result is $$\frac{\sigma (\tau )}{\sigma }=\left[1+\alpha _\text{s}C(x)\right]\frac{q(x,\tau Q^2)}{q(x,Q^2)}\mathrm{\Sigma }(\tau ),\mathrm{\Sigma }(\tau )=e^{-\frac{\alpha _\text{s}C_F}{2\pi }\left[2\mathrm{ln}^2\frac{1}{\tau }-3\mathrm{ln}\frac{1}{\tau }\right]+𝒪\left(\alpha _\text{s}^2\mathrm{ln}^3\frac{1}{\tau }\right)},$$ (6) where $`\mathrm{\Sigma }(\tau )`$ is just the corresponding $`e^+e^{-}`$ quantity of and $`C(x)`$ is a $`\tau `$-independent coefficient function which is given in . The ratio $`\frac{q(x,\tau Q^2)}{q(x,Q^2)}`$ comes from the suppression of radiation with $`k_t^2>\tau Q^2`$, as mentioned above. ## 3 Comparison with fixed order programs. Expanding (6) to NLO it is possible to perform a comparison to the fixed order Monte Carlo programs DISASTER++ and DISENT . We actually look at the coefficient of $`(\alpha _\text{s}/2\pi )^2`$ in $$\frac{\tau }{\sigma _0}\frac{d\sigma (\tau )}{d\tau },$$ (7) where we have normalised to the Born cross section $`\sigma _0`$ for simplicity. At second order eq. 
(7) contains terms $`\mathrm{ln}^n\tau `$, $`n\le 3`$, and for the resummation to be correct to next-to-leading order we should correctly control the terms with $`1\le n\le 3`$. So the difference between our expanded result and the exact result should at most be a constant for small $`\tau `$. The upper two plots of figure 1 show our results for the quark and gluon-initiated components of the answer compared to the predictions from DISASTER++. The shape of the distributions is well reproduced for small $`\tau `$. The lower two plots show the difference between the DISASTER++ results and ours: one sees that for small $`\tau `$ this difference is indeed compatible with a constant, as required. Comparisons have also been made with DISENT, which seems to disagree with our result in the gluon sector at the level of a term proportional to $`\mathrm{ln}^2\tau `$, and perhaps also in the quark sector. ## 4 Outlook Although we have results for the resummation of the thrust distribution in DIS, some work remains to be done for the practical implementation of our results for comparison with experimental data. In particular, prescriptions need to be defined for the matching of our resummed result with the fixed order results, in order to extend the range of applicability of the calculation (without the matching the results are applicable only to very small $`\tau `$). Work is also in progress on the resummation of other DIS event-shape variables. Among these, the thrust normalised to the energy in the current hemisphere is close to completion. Other variables to be studied include the jet mass, the $`C`$-parameter and the jet-broadening (the resummation of the latter is relevant also for predicting the form of the power correction). 
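To get a feel for the size of the Sudakov suppression in Eq. (6), the sketch below evaluates the double-logarithmic factor Σ(τ) = exp[−(αs CF/2π)(2 ln²(1/τ) − 3 ln(1/τ))], dropping the O(αs² ln³(1/τ)) correction and the parton-distribution ratio. The coupling is frozen at an arbitrary value, so the numbers are purely illustrative.

```python
import math

CF = 4.0 / 3.0

def sigma_suppression(tau, alpha_s=0.118):
    """Double-log factor Sigma(tau), as in Eq. (6), with a fixed coupling."""
    L = math.log(1.0 / tau)
    return math.exp(-(alpha_s * CF / (2.0 * math.pi)) * (2.0 * L * L - 3.0 * L))

# suppression grows rapidly as tau -> 0 (the two-jet limit)
values = {tau: sigma_suppression(tau) for tau in (0.5, 0.1, 0.01, 0.001)}
```

Note that at moderate τ the single-log term (−3 ln(1/τ)) can push Σ slightly above 1; the double log dominates, and strongly suppresses the cross section, only for small τ.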
## Acknowledgements We benefited much from continuous discussions of this and related subjects with Yuri Dokshitzer, Pino Marchesini and Mike Seymour. One of us (VA) is grateful to Yuri Dokshitzer and Pino Marchesini for introducing him to perturbative QCD, and to the organisers and the participants of the 1999 U.K. Phenomenology Workshop on Collider Physics for the invitation and for the stimulating atmosphere. He would also like to thank N. Brook and V. Khoze for useful discussions. This research was supported in part by MURST, Italy and by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12-MIHT).
# Absolute Negative Conductivity and Spontaneous Current Generation in Semiconductor Superlattices with Hot Electrons ## Abstract We study electron transport through a semiconductor superlattice subject to an electric field parallel to and a magnetic field perpendicular to the growth axis. Using a single miniband, semiclassical balance equation model with both elastic and inelastic scattering, we find that (1) the current-voltage characteristic becomes multistable in a large magnetic field; and (2) “hot” electrons display novel features in their current-voltage characteristics, including absolute negative conductivity (ANC) and, for sufficiently strong magnetic fields, a spontaneous dc current at zero bias. We discuss possible experimental situations providing the necessary hot electrons to observe the predicted ANC and spontaneous dc current generation. As first realized by Esaki and Tsu , semiconductor superlattices (SSLs) are excellent systems for exploring nonlinear transport effects, since their long spatial periodicity implies that SSLs have small Brillouin zones and very narrow “minibands.” Applied fields accelerate Bloch electrons in a band according to Bloch’s acceleration theorem, $`\dot{𝐤}=-(e/\hbar )[𝐄+(𝐯\times 𝐁)/c]`$, where $`𝐤`$ is the crystal momentum of the electron, $`e`$ the magnitude of its charge, $`𝐄`$ the electric field, $`𝐁`$ the magnetic field, $`𝐯`$ the electron’s velocity, and $`c`$ the speed of light. In SSLs, both the velocity and effective mass depend on the crystal momentum; in fact, the effective mass is negative above the band’s inflection point, corresponding to the fact that electrons slow down to zero velocity as they reach the edge of the Brillouin zone. The acceleration of the external fields is balanced by scattering processes that limit the crystal momentum of electrons. In clean SSLs with only modest fields, electrons can reach the negative effective mass (NEM) portion of the miniband before scattering. 
For an electric field oriented along the SSL growth axis, the current-voltage characteristic exhibits a peak followed by negative differential conductivity (NDC) when a significant fraction of electrons explore the NEM region of the miniband ; with an additional magnetic field perpendicular to the growth axis, NDC occurs at a larger bias because the magnetic field impedes the increase of crystal momentum along the growth axis . In this letter, we study electron transport through a single miniband of a spatially homogeneous SSL with growth axis in the $`z`$-direction in the presence of a constant magnetic field, $`B`$, in the $`x`$-direction and an electric field, $`E`$, in the $`z`$-direction. We assume a tight-binding dispersion relation for the SSL miniband, $`ϵ(𝐤)=\hbar ^2k_y^2/2m^{*}+(\mathrm{\Delta }/2)[1-\mathrm{cos}(k_za)]`$, where $`ϵ`$ is the energy of an electron with crystal momentum $`𝐤`$, $`m^{*}`$ is the effective mass within the plane of the quantum wells (QWs) that form the SSL, $`\mathrm{\Delta }`$ is the miniband width, and $`a`$ the SSL period. Generalizing the approach of to include the effects of the magnetic field, we obtain the following balance equations $`\dot{V_y}`$ $`=`$ $`-{\displaystyle \frac{eB}{m^{*}c}}V_z-\gamma _{vy}V_y`$ (1) $`\dot{V_z}`$ $`=`$ $`-{\displaystyle \frac{e}{m(\epsilon _z)}}[E-{\displaystyle \frac{BV_y}{c}}]-\gamma _{vz}V_z`$ (2) $`\dot{\epsilon _z}`$ $`=`$ $`-eEV_z+{\displaystyle \frac{eB}{c}}V_yV_z-\gamma _\epsilon [\epsilon _z-\epsilon _{eq,z}].`$ (3) The average electron velocity, $`𝐕=(V_y,V_z)`$, is obtained by integrating the distribution function satisfying the Boltzmann transport equation over the Brillouin zone; and $`\gamma _{vy}`$ and $`\gamma _{vz}`$ are the relaxation rates for the corresponding components of $`𝐕`$ following from elastic impurity, interface roughness and disorder scattering, and inelastic phonon scattering. 
It is convenient to separate the total energy of the electrons into parts associated with longitudinal and transverse motion. Doing so, $`\epsilon _z`$ is the average energy of motion along the growth axis with equilibrium value $`\epsilon _{eq,z}`$; $`\gamma _\epsilon `$ represents its relaxation rate due mainly to inelastic phonon scattering (elastic scattering that reduces the energy of motion along the superlattice growth axis and increases the energy of (transverse) motion within the QWs also contributes). Note that the balance equations contain an effective mass term dependent on $`\epsilon _z`$, $`m(\epsilon _z)=m_0/(1-2\epsilon _z/\mathrm{\Delta })`$, which follows from the crystal momentum dependence of the effective mass tensor; in this expression, $`m_0=2\hbar ^2/\mathrm{\Delta }a^2`$ is the effective mass at the bottom of the SSL miniband. Because of the constant effective mass for motion within the plane of the QWs, the energy of this motion does not enter the balance equations. While the magnetic field does not change the total electron energy, it does transfer energy between in-plane motion and $`\epsilon _z`$; hence Eq. (3) contains the magnetic field-dependent term. For an intuitive understanding of the balance equations, we can consider them as describing an “average” electron whose velocity changes according to Newton’s second law, $`\dot{𝐕}=𝐅/𝐦(\epsilon )`$, with $`𝐅`$ representing electric, magnetic and damping forces. The mass tensor $`𝐦(\epsilon )`$ is diagonal and $`m_{zz}`$ depends on the energy of motion in the $`z`$-direction; this component of the energy evolves according to $`\dot{\epsilon _z}=F_zV_z-P_{damp}`$. Inelastic scattering to the average energy $`\epsilon _{eq,z}`$ (which may not be the bottom of the miniband) leads to the damping term, $`P_{damp}`$. This gratifyingly intuitive picture should not obscure the result that our balance equations have been derived systematically from the full Boltzmann transport equation. 
For numerical simulations, we introduce the scalings $`v_y=((m_0m^{*})^{1/2}a/\hbar )V_y`$, $`v_z=(m_0a/\hbar )V_z`$, $`w=(\epsilon _z-\mathrm{\Delta }/2)/(\mathrm{\Delta }/2)`$, $`w_0=(\epsilon _{eq,z}-\mathrm{\Delta }/2)/(\mathrm{\Delta }/2)`$, $`\mathcal{B}=eB/(m^{*}m_0)^{1/2}c`$ and $`\omega _B=eEa/\hbar `$ (the Bloch frequency of the electric field). Note that the average electron energy is scaled such that -1 (+1) corresponds to the bottom (top) of the miniband. In terms of the scaled variables, the balance equations read $`\dot{v_y}`$ $`=`$ $`-\mathcal{B}v_z-\gamma _{vy}v_y`$ (4) $`\dot{v_z}`$ $`=`$ $`\omega _Bw-\mathcal{B}v_yw-\gamma _{vz}v_z`$ (5) $`\dot{w}`$ $`=`$ $`-\omega _Bv_z+\mathcal{B}v_yv_z-\gamma _\epsilon (w-w_0).`$ (6) The current across the superlattice $`I=-eNA(\mathrm{\Delta }a/2\hbar )v_{z,ss}`$, where $`N`$ is the carrier concentration, $`A`$ the cross-sectional area and $`v_{z,ss}`$ the steady-state solution to Eq. (5). By setting the time derivatives in Eqs. (4)-(6) to zero, we obtain the following equation relating $`v_{z,ss}`$ and hence the SSL current to the applied bias, $$C^2v_{z,ss}^3+2C\omega _Bv_{z,ss}^2+[\gamma _{vz}\gamma _\epsilon +\omega _B^2-\gamma _\epsilon w_0C]v_{z,ss}-\gamma _\epsilon w_0\omega _B=0,$$ (7) where $`C=\mathcal{B}^2/\gamma _{vy}`$. This cubic equation for $`v_{z,ss}`$ implies that there may be up to three steady-state current values for a given bias. In figure 1, we plot $`v_{z,ss}`$, which is proportional to the current across the SSL, as a function of scaled voltage, $`\omega _B`$, for various scaled magnetic field strengths, $`C`$, with equal momentum and energy relaxation rates, $`\gamma _{vz}=\gamma _\epsilon `$. With no magnetic field (Fig. 1a), the current exhibits a peak followed by negative differential conductance (NDC) and satisfies the well-known expression $`v_{z,ss}=(w_0/\gamma _{vz})\omega _B/(1+\omega _B^2/\gamma _{vz}\gamma _\epsilon )`$. 
A magnetic field increases the value of the electric field at which the current reaches its maximum value (Fig. 1b), as has been observed in recent experiments . Finally, for larger magnetic fields (Fig. 1c), the current-voltage characteristic from the balance equations has a region of multistability with three possible currents. For SSL parameters of $`\mathrm{\Delta }=23`$ meV, $`a=84`$Å, and $`\gamma _{vy}=\gamma _{vz}=\gamma _\epsilon =1.5\times 10^{13}`$sec<sup>-1</sup> , multistability requires a magnetic field of 21 T, but the semiclassical balance equations are not applicable at such large fields. However, for SSL parameters of $`\mathrm{\Delta }=22`$meV, $`a=90`$Å, and $`\gamma _{vy}=\gamma _{vz}=\gamma _\epsilon =10^{12}`$sec<sup>-1</sup> , multistability should occur for a modest magnetic field of 1.4 T. Let us now consider the situation of “hot” electrons. In this case the electron distribution is highly non-thermal, even without the applied fields. The electrons do not have time to relax to the bottom of the miniband before leaving the SSL. We can effectively describe these hot carriers as relaxing to the top half of the miniband, i.e., as having $`w_0>0`$. This may happen in a very clean SSL, at very low temperatures, when the inelastic mean free path is comparable with the SSL size. The hot electrons may be obtained by injection or by an optical excitation. Below we will discuss how to achieve this situation experimentally. For zero or small magnetic fields (Fig. 2a), absolute negative conductance (ANC) occurs as the current flows in the opposite direction to the applied bias. Then, in larger magnetic fields (Fig. 2b), a region of multistability appears around zero bias; a linear stability analysis shows that the zero current solution becomes unstable as soon as the nonzero current solutions emerge. In other words, the SSL will spontaneously develop a current across it at zero bias. 
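The structure of the steady-state cubic, Eq. (7), is easy to explore numerically. The sketch below finds its real roots by scanning for sign changes and bisecting; the parameter values are illustrative, chosen only to show a single root in the weak-field case and three roots (multistability) at zero bias for hot electrons with w0 C > γvz.

```python
def cubic(v, wB, C, w0, gvz=1.0, ge=1.0):
    """Left-hand side of the steady-state cubic, Eq. (7), for v = v_z,ss."""
    return (C**2 * v**3 + 2.0 * C * wB * v**2
            + (gvz * ge + wB**2 - ge * w0 * C) * v - ge * w0 * wB)

def real_roots(wB, C, w0, lo=-2.0, hi=2.0, n=4000):
    """Real roots of Eq. (7) in [lo, hi], via sign-change scan plus bisection."""
    roots = []
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    for a, b in zip(xs, xs[1:]):
        fa, fb = cubic(a, wB, C, w0), cubic(b, wB, C, w0)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            for _ in range(60):          # bisection refinement
                m = 0.5 * (a + b)
                if cubic(a, wB, C, w0) * cubic(m, wB, C, w0) <= 0.0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

# hot electrons (w0 > 0) at zero bias with w0*C > gvz: three steady states
zero_bias = real_roots(wB=0.0, C=5.0, w0=0.5)
```

For C = 0 the cubic degenerates to a linear equation whose root reproduces the Esaki-Tsu-like expression quoted above; for w0 = -1 (carriers relaxing to the band bottom) that root has the ordinary sign of the drift velocity.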
The three possible steady-state velocities at zero bias are $$v_{z,ss}=0,\pm \left(\frac{\gamma _\epsilon w_0C-\gamma _{vz}\gamma _\epsilon }{C^2}\right)^{1/2},$$ (8) so a spontaneous current will appear when the condition $`w_0C>\gamma _{vz}`$ (in other words, $`w_0\mathcal{B}^2/\gamma _{vy}>\gamma _{vz}`$) is satisfied. Since $`C`$ and $`\gamma _{vz}`$ are always positive, this requires that $`w_0`$ be positive; neither thermal effects nor doping can fulfill the necessary condition for a zero-bias current: hot electrons are required. Physically, one clearly needs energy to create the spontaneous current, and this energy is supplied by hot electrons. These two results for hot electrons, i.e. ANC and spontaneous current generation, follow from their negative effective mass in the top half of the miniband. To understand the origin of the ANC, consider a one-dimensional SSL with electrons at their equilibrium position at the bottom of the band, $`w_0=-1`$, and no electric field; when a positive bias is applied, $`\omega _B>0`$, the electrons move through the band according to $`\dot{k_z}a=\omega _B`$ until a scattering event occurs. Elastic scattering conserves energy, sending an electron across the band in this one-dimensional case. Inelastic scattering changes the electron energy to $`w_0`$, i.e. $`k_z`$=0. (Hot electrons inelastically scatter to $`w_0>0`$, possibly gaining energy.) In Fig. 3, the electric field accelerates the electrons from their equilibrium position at point A; inelastic scattering prevents many electrons from passing point B, so electrons are found mainly in the segment AB. Elastic scattering sends electrons into the segment AC, which contains fewer electrons than the segment AB. 
In this tight-binding miniband, the velocity of an electron with crystal momentum $`k_z`$ is $`𝒱(k_z)=\hbar ^{-1}\partial ϵ/\partial k_z=(\mathrm{\Delta }a/2\hbar )\mathrm{sin}(k_za)`$; because the segment AB has more electrons, there is a net negative velocity, or a positive current, as expected for a positive voltage. In the presence of hot electrons, when $`w_0>0`$, the two points labeled D1 and D2 initially are occupied with equal numbers of electrons and no current flows. Once applied, the electric field accelerates electrons such that they occupy the segments D1E1 and D2E2, as inelastic (non-energy-conserving) scattering returns them to their quasi-equilibrium energy at points D1 and D2; elastic scattering leads to a smaller number of electrons in the segments D1F1 and D2F2. The speed of electrons above the inflection point of the miniband decreases as the magnitude of their crystal momentum increases towards the edge of the Brillouin zone; thus the electrons in the segment D2E2 have a larger speed than those in the segment D1E1. A positive net velocity or, in other words, a negative current results; this is the absolute negative conductance shown in Fig. 2a. An intuitive picture of the spontaneous current generation also follows from the miniband structure of an SSL in an external magnetic field. Consider a small, positive current fluctuation across the SSL resulting from extra electrons at the initial energy $`w_0>0`$, point D2 in Fig. 3. The crystal momentum evolves according to $`\dot{𝐤}=(e/\hbar c)𝒱\times 𝐁`$; with $`B_x>0`$, initially $`\dot{k_y}>0`$, hence $`\dot{k_z}<0`$. The electron moves from point D2 towards E2 with increasing speed, until inelastic scattering returns the electron to its quasi-equilibrium initial position or elastic scattering sends it across the band.
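The velocity argument above can be checked directly from the tight-binding dispersion; the units below are illustrative ($`\hbar =\mathrm{\Delta }=a=1`$), not the SSL parameters quoted earlier.

```python
import numpy as np

hbar, Delta, a = 1.0, 1.0, 1.0   # illustrative units, not the SSL values above

def velocity(kz):
    """Tight-binding miniband velocity V(kz) = (Delta*a/(2*hbar)) * sin(kz*a)."""
    return (Delta * a / (2.0 * hbar)) * np.sin(kz * a)

# Below the inflection point (|kz*a| < pi/2) the speed grows with |kz| ...
assert abs(velocity(0.4 * np.pi)) > abs(velocity(0.3 * np.pi))
# ... above it (the NEM region) the speed decreases towards the zone edge,
# which is why the D2E2 electrons outrun the D1E1 ones in Fig. 3.
assert abs(velocity(0.9 * np.pi)) < abs(velocity(0.6 * np.pi))
```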
For a large enough magnetic field, small enough elastic scattering and electrons far enough into the NEM region of the miniband (as specified by the requirement $`w_0^2>\gamma _{vy}\gamma _{vz}`$), the initial current fluctuation will increase, the zero current state will be unstable to such small fluctuations, and the SSL will develop a spontaneous current. Experimentally, it is possible to obtain these hot electrons with $`w_0>0`$ by injecting electrons into the NEM portion of the miniband, as was described recently in references . In this injection geometry, two mechanisms contribute to the current through the SSL: first, coherent tunneling through the whole SSL, and, second, incoherent transport of scattered electrons that do not maintain phase information . The balance equations describe these latter electrons. While the electrons in the NEM region can support a current instability, those that have scattered to the bottom of the miniband cannot, so it is vitally important to keep the miniband width below the LO phonon energy of 36 meV in order to limit phonon scattering. In this case, the balance equations describe the behavior of electrons that have scattered elastically, primarily because of disorder. When the injection energy is in the forbidden region below the miniband, there is no appreciable current through the SSL; as the injection energy is swept through the miniband, the current increases, since electrons incident at the miniband energy can traverse the SSL. The current then decreases again when the injection energy passes through the top half of the miniband. The sharpness of this decrease depends on the miniband width because phonon replicas emerge when electrons having undergone LO phonon scattering are at the miniband position. For a sharp feature, a narrow miniband is important (see Fig. 3 in reference ), such that the width of the incident wavepacket (about 17meV) plus the miniband width is less than the LO phonon energy (36meV). 
To observe the hot electron effects we predict, the transmitted current-injection energy characteristic must first be measured with no external fields; the experiment must then be repeated with a magnetic field in the plane of the quantum well. While such a field reduces the coherent current , if sufficiently strong, it can lead to spontaneous current generation for electrons incident near the top of the miniband, i.e., between the peaks in the current-injection energy curve. This current instability would cause the current to flatten, or even increase, between the main peak and its phonon replica. To observe the ANC, the current-injection energy curves must be measured for positive and negative voltages; it is known that under a positive or negative bias the location of the current peak shifts due to the voltage drop across the drift region, and the coherent current decreases . When the phonon-replica current is small, it may also be possible to observe a change in the shape of the current peak. For positive bias, the current below the peak, at injection energies in the lower half of the miniband, increases. Meanwhile the current above the peak, for injection energies in the top half of the miniband, decreases due to ANC; the current drops off more rapidly on the high-injection-energy side of the current peak. Just the opposite occurs for a negative bias: since ANC causes the current to increase for injection energies in the top half of the miniband, the peak drops less sharply. Finally, we note that recently Kempa and coworkers have studied the possibility of generating non-equilibrium plasma instabilities through a similar selective energy injection scheme . The other possibility for creating hot electrons is optical excitation of electron-hole pairs. As far as we are aware, this approach has not yet been used specifically as a method of injecting hot electrons into an SSL.
In summary, we have described new physical effects—incoherent current flow opposite to the direction of the applied electric bias and spontaneous current generation for hot electrons in a transverse magnetic field—in an SSL with nonequilibrium electron excitations and have suggested how they might be observed in experiments. We hope our experimental colleagues will search for these effects. We are grateful to Lawrence Eaves for stimulating discussions. F.V.K. thanks the Department of Physics at the University of Illinois at Urbana-Champaign for its hospitality. This work was partially supported by NATO Linkage Grant NATO LG 931602 and INTAS. E.H.C. acknowledges support by a graduate traineeship under NSF GER93-54978.
no-problem/9910/cond-mat9910165.html
ar5iv
text
# Exact Demonstration of Magnetization Plateaus and First Order Dimer-Néel Phase Transitions in a Modified Shastry-Sutherland Model for SrCu2(BO3)2 \[ ## Abstract We study a generalized Shastry-Sutherland model for the material SrCu<sub>2</sub>(BO<sub>3</sub>)<sub>2</sub>. Along a line in the parameter space, we can show rigorously that the model has a first order phase transition between Dimerized and Néel-ordered ground states. Furthermore, when a magnetic field is applied in the Dimerized phase, magnetization plateaus develop at commensurate values of the magnetization. We also discuss various aspects of the phase diagram and properties of this model away from this exactly soluble line, which include gap-closing continuous transitions between Dimerized and magnetically ordered phases. \] In recent years, many novel magnetic materials have been synthesized which exhibit spin-gap behavior. In these materials the ground state is a spin-singlet and there is a gap to all spin excitations. Such phenomena have long been studied in quasi-one-dimensional systems, but much recent interest has arisen from the discovery of the quasi-two-dimensional spin-gap materials CaV<sub>4</sub>O<sub>9</sub> , Na<sub>2</sub>Ti<sub>2</sub>Sb<sub>2</sub>O and SrCu<sub>2</sub>(BO<sub>3</sub>)<sub>2</sub> . The latter material is particularly interesting in that, by virtue of the crystal geometry, it is an experimental realization of the Shastry-Sutherland model , for which an exact dimerized singlet eigenstate can be written down, which for a range of parameters is the ground state of the model. Among the interesting experimental findings for SrCu<sub>2</sub>(BO<sub>3</sub>)<sub>2</sub> are that the system appears to be very close to a transition to a Néel phase and that it shows magnetization plateaus as a function of magnetic field .
Here we consider a generalized Shastry-Sutherland model, with Hamiltonian $$=J_1\underset{<i,j>}{}\stackrel{}{S}_i\stackrel{}{S}_j+J_2\underset{<i,k>}{}\stackrel{}{S}_i\stackrel{}{S}_k+J_3\underset{<i,l>}{}\stackrel{}{S}_i\stackrel{}{S}_l,$$ (1) where the bonds corresponding to interactions $`J_1`$, $`J_2`$ and $`J_3`$ are shown in Fig. 1. We assume $`J_1`$ is antiferromagnetic and can henceforth set $`J_1`$ to unity. For $`J_3=0`$ the model reduces to the original Shastry-Sutherland model. For $`J_2=J_3`$, the model has infinitely many conserved quantities. The total spin on each $`J_1`$ bond commutes with $``$ and thus each eigenstate of the Hamiltonian can be characterized by the number and position of the triplets present. These triplets then form (in general a diluted) spin-one Heisenberg model, with nearest-neighbor interactions on the square-lattice. It is easy to show that this model has three phases at $`T=0`$ with first order transitions between them. For large negative $`J_2`$ the ground state is a fully polarized ferromagnet, for large positive $`J_2`$ the ground state is equivalent to the Néel ground state of a spin-one Heisenberg model on the square-lattice, which is rigorously known to be ordered , and whose numerical properties are very well established . In between the ground state is the Dimerized Shastry-Sutherland singlet phase. At the Dimer to Néel transition the sublattice magnetization jumps from 0 to about $`80`$ percent of the classical value. Away from the $`J_2=J_3`$ line, we use symmetry arguments, Dimer series expansions together with considerations of the classical limit to discuss the ground-state phase diagram and properties of this model. The model with J<sub>2</sub> and J<sub>3</sub> interchanged can be mapped into the original one by interchanging the spins on all J<sub>1</sub> bonds. Thus the phase diagram is symmetric with respect to the $`J_2=J_3`$ line. We will concentrate our discussion on the region $`J_2J_3`$. 
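The conservation law along $`J_2=J_3`$ can be verified numerically on the smallest cluster: two $`J_1`$ dimers, with one spin of the first dimer attached by $`J_2`$ bonds and the other by $`J_3`$ bonds to the second dimer. This four-spin arrangement is a minimal caricature of the lattice, not the full geometry, but it isolates the stated mechanism: at $`J_2=J_3`$ the inter-dimer coupling depends only on the total spin of each bond.

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def op(mats):
    """Kronecker product of one single-site operator per site."""
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def spin(site):
    return [op([c if i == site else I2 for i in range(4)]) for c in (sx, sy, sz)]

S = [spin(i) for i in range(4)]

def dot(A, B):
    return sum(a @ b for a, b in zip(A, B))

def H(J1, J2, J3):
    # dimers (0,1) and (2,3); J2 bonds from spin 0, J3 bonds from spin 1
    # (a minimal 4-spin caricature of the lattice geometry)
    return (J1 * (dot(S[0], S[1]) + dot(S[2], S[3]))
            + J2 * (dot(S[0], S[2]) + dot(S[0], S[3]))
            + J3 * (dot(S[1], S[2]) + dot(S[1], S[3])))

# total-spin-squared of the first J1 dimer
T = [S[0][k] + S[1][k] for k in range(3)]
T2 = dot(T, T)

def comm_norm(J2, J3):
    Hm = H(1.0, J2, J3)
    return np.linalg.norm(Hm @ T2 - T2 @ Hm)

assert comm_norm(0.7, 0.7) < 1e-10     # J2 = J3: bond spin conserved
assert comm_norm(0.7, 0.3) > 1e-3      # J2 != J3: not conserved
```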
First, let us compare the energies of various classically ordered phases with the energy of the Dimer phase to get a first view of the phases and their boundaries. The different phases are shown in Fig. 2, with the Dimer phase denoted by $`D`$. The magnetically ordered phases are best described with respect to a rotated lattice where individual spins have four nearest neighbors (connected by $`J_2`$ bonds) . This lattice is topologically equivalent to a square lattice. On this lattice, in addition to a ferromagnetic phase (F) and an antiferromagnetic Néel phase (N), there are columnar (C<sub>N</sub> and C<sub>F</sub>) and helical (H) phases. The two columnar phases are collinear and equivalent to each other. In the $`C_N`$ phase the spins order antiferromagnetically along one of the axes and have period four in the perpendicular direction. In the $`C_F`$ phase, the spins are ferromagnetic in one direction and have period four in the other direction. Classically the four helical phases can be mapped onto each other. In the helical phase for $`J_2>J_3>0`$, the helix runs along one of the axes, with successive spins rotating by an amount $`+\theta `$ as one moves along that direction . Along the perpendicular direction the change in spin directions alternates between $`+\theta `$ and $`-\theta `$. The angle $`\theta `$ is non-unique and is one of the solutions to the equation $$\mathrm{cos}(\pi -\theta )=\frac{2(J_2-J_3)}{J_1+\sqrt{J_1^2-4J_3(J_2-J_3)}}.$$ (2) Defining $`x=J_2/J_1`$, $`y=J_3/J_1`$, the columnar-helical-Dimer triple point in the classical phase diagram is located at $`x_{tr}=(45-4\sqrt{6})/36\approx 0.9778`$, $`y_{tr}=(9+4\sqrt{6})/36\approx 0.5222`$. The Néel-helical phase boundary is given by $`x+5y=1`$, whereas the asymptotic ($`x\to \mathrm{\infty }`$) phase boundary between the helical and columnar phases is given by the equation $`y=\frac{1}{19}(8x+1+O(1/x))`$.
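The closed forms quoted for the classical triple point can be checked against the decimal values in a few lines; the variable names are ours, and the $`\sqrt{6}`$ expressions are as given in the text.

```python
from math import sqrt, isclose

# Classical columnar/helical/Dimer triple point, as quoted in the text
x_tr = (45 - 4 * sqrt(6)) / 36
y_tr = (9 + 4 * sqrt(6)) / 36

assert isclose(x_tr, 0.9778, abs_tol=5e-5)
assert isclose(y_tr, 0.5222, abs_tol=5e-5)
# The sqrt(6) pieces cancel in the sum, so x_tr + y_tr = 3/2 exactly:
assert isclose(x_tr + y_tr, 1.5, rel_tol=1e-12)
```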
Since there are no quantum fluctuations in the ferromagnetic and Dimer phases, nor on the boundary between the ferromagnetic and helical phases, these remain true phase boundaries of the model and are shown by solid lines in Fig. 2. Note that the Dimer-ferromagnetic boundary is first order, whereas the Dimer-helical boundary is second order. The other classical phase boundaries are shown by dashed lines. They leave an oval-like Dimer phase in the middle. The first-order Dimer-Néel phase boundary can be determined quite accurately along $`J_2=J_3`$ to be at $`x=0.42957(2)`$, and has been given in previous numerical studies along $`J_3=0`$ (and equivalently $`J_2=0`$) . These points are shown by solid dots and we connect them smoothly to indicate a first order Dimer-Néel phase boundary. Actually, it is not evident whether the Dimer-Néel phase boundary along the line $`J_3=0`$ is first or second order, or even whether there is an intermediate phase between the two. Albrecht and Mila have argued that there is an intermediate helical phase between the Néel and Dimer phases. Their Schwinger Boson mean-field treatment leads to the estimate that the Néel phase extends only down to $`x\sim 0.91`$, whereas the helical phase exists between $`0.61<x<0.91`$. On the other hand, using series expansions, Weihong et al. have argued that the Néel phase extends down to $`x=0.691`$, at which point there is a first order transition to the Dimer phase. The finite-size calculations also do not suggest any helical phases, though Albrecht and Mila have argued that this is because helical phases are not properly accommodated in finite geometries. The quantitative validity of Schwinger Boson calculations is difficult to judge. One generally expects quantum fluctuations to stabilize collinear phases, and this could considerably reduce the extent of the helical phases in the phase diagram.
In several spin models, where numerical calculations have been done, the sublattice magnetization of the Néel phase goes continuously to zero and it is separated from incommensurate phases by a singlet phase . On general grounds, Ferrer has argued that the Néel phase must be separated from helicoidal phases by an intermediate spin-liquid phase. Thus, it is reasonable to assume that along $`J_3=0`$ there is a direct transition between Dimer and Néel phases. Along $`J_3=0`$, Weihong et al. estimate that the Dimer to Néel transition happens at $`J_2/J_1=0.691(6)`$. Using d-log Pade approximants to analyze the gap series, we estimate that it vanishes at $`J_2/J_10.697(2)`$. Thus, within the uncertainties of the series analysis, this transition could be continuous. Around this value of the couplings, the sublattice magnetization series from the Néel side is also consistent with zero . The primary reason for believing that the transition is first order is that the energies for the Néel and Dimer phases appear to cross at a non-zero angle. However, if the transition is continuous, then very close to the transition, the Néel energy curves should change slope . Thus, given all the numerical evidence, a plausible conclusion is that the transition is very weakly first order, though it could also be second order. Given the above, it is natural to expect some continuous transitions when $`J_3<0`$. To explore this possibility, we have developed series expansions for the triplet excitations in the dimer phase to $`15`$th order for arbitrary $`J_3/J_2`$ using the flow equation method . For $`J_3=0`$, the expansion coefficients agree with those calculated by Weihong et al. The unit cell of the lattice contains two dimers, giving rise to two triplet modes. These modes are almost degenerate throughout the Brillouin zone and become exactly degenerate as $`q0`$ due to symmetry. 
We find that the gap minimum is always at $`q`$ equal to zero even as one moves from the Néel towards the ferromagnetic phase. We use d-log Pade approximants to calculate the locus of points, where the triplet gap closes. This contour is also depicted by a solid line in Fig. 2. It marks a boundary at which the Dimer phase becomes locally unstable, and hence the dimer phase cannot exist beyond that line. Without studying all eigenstates of our system, it is not possible to say if some other level crossing transition leads to a different ground state before we get to this line. It is plausible to think that at least parts of this line represent continuous phase transitions between the Dimer and magnetically ordered phases. As seen in Fig. 2, a possible consistent scenario is that very near the $`J_3=0`$ line, we have a multicritical point where a second order transition line meets a first order phase boundary. These continuous phase transitions between the Dimer phase and various magnetic phases are rather unusual. They are not in the conventional $`2+1`$-dimensional O(3) universality class as expected for the non-linear sigma models . This is evident from the fact that in the Dimer phase, the ground state remains unchanged and hence the correlation length remains of order unity. In contrast, for a generic dimerized spin system, the correlation length gradually grows and diverges as the gap goes to zero . The continuous phase transition, here, is somewhat analogous to the density driven generic phase transitions in the Bosonic Mott insulators . However, there are some important differences. Unlike the case of Bosonic Mott insulators, the spectrum appears to be linear at the transition. Along the $`J_3=0`$ line, we estimate that the gap vanishes at $`x=0.697(2)`$, with an apparent exponent $`\nu `$ of $`0.45(2)`$. Different d-log Pade approximants show remarkable consistency with each other. Fig. 3 shows the spectrum, in the reduced Brillouin zone, at the transition. 
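The d-log Padé analysis used above for the gap series can be illustrated on a toy series with a known power-law singularity, $`f(x)=(1-x/x_c)^\nu `$. The actual 15th-order series is not reproduced in this excerpt, so the values $`x_c=0.7`$ and $`\nu =0.45`$ below merely mimic the quoted estimates; the point is the mechanics of the method, which recovers them from the series coefficients alone.

```python
# Toy series with a known power-law singularity, f(x) = (1 - x/xc)**nu
xc_true, nu_true, N = 0.7, 0.45, 6

# Taylor coefficients a_n of f, via the binomial recursion
a = [1.0]
for n in range(1, N):
    a.append(a[-1] * (nu_true - n + 1) / n * (-1.0 / xc_true))

# series coefficients c_n of g = f'/f = d/dx log f, from the a_n alone
b = [(n + 1) * a[n + 1] for n in range(N - 1)]          # coefficients of f'
c = []
for n in range(N - 1):
    s = sum(a[k] * c[n - k] for k in range(1, n + 1))
    c.append((b[n] - s) / a[0])

# [1/1] Pade of g: g ~ (p0 + p1*x)/(1 + q1*x); its pole estimates xc,
# and the residue at the pole estimates the exponent nu
q1 = -c[2] / c[1]
pole = -1.0 / q1
residue = (c[0] + (c[1] + c[0] * q1) * pole) / q1

assert abs(pole - xc_true) < 1e-10
assert abs(residue - nu_true) < 1e-10
```

For a pure power law the log-derivative is exactly a simple pole, so the low-order Padé is exact; for a real series one compares approximants of several orders, as done in the text.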
Different ways of analyzing the series all point to a finite spin-wave velocity and a linear spectrum. These results suggest that this transition belongs to a new universality class . With our present calculations we cannot study the transitions from the ordered side and thus cannot establish the full nature of these phase transitions nor can we say anything about the stability of columnar and helical phases in the overall phase diagram. Quantum fluctuations can lead to additional singlet (spin-gap) phases between the Néel and the helical phases and possibly eliminate helical phases altogether from the Néel side of the phase diagram. These Néel to singlet phase transitions should be similar to those found in the $`J_1J_2`$ square-lattice Heisenberg model . When a magnetic field is applied to the Dimer ground state along the special line J<sub>2</sub>=J<sub>3</sub>, the resulting magnetization is shown in Fig. 4. The triplet excitations, aligned by the field, have no dispersion, but a nearest neighbor repulsion. Thus they form a simple Wigner crystal (or a Bosonic Mott insulator) at one half the saturation magnetization. If we add an additional weak antiferromagnetic coupling between neighboring horizontal $`J_1`$ bonds (and similarly neighboring vertical $`J_1`$ bonds) of the form $`J_4(\stackrel{}{S}_1+\stackrel{}{S}_2)(\stackrel{}{S}_3+\stackrel{}{S}_4)`$, the triplets remain dispersionless but they now have an additional second neighbor repulsion. This leads to additional Wigner crystal phases at one and three quarter fillings and hence plateaus in the magnetization at one and three quarters of the saturation value. In Fig. 4, we also show the ordering pattern of the Wigner crystal on different plateaus by showing four $`J_1`$ bonds. A line denotes a ($`S^z=1`$) triplet on the bond whereas a circle denotes a singlet. As we move away from the $`J_2=J_3`$ line, the triplet develops dispersion. 
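Because the field-aligned triplets are dispersionless along $`J_2=J_3`$, the half-magnetization plateau reduces to a classical lattice-gas problem, which can be solved by brute force on a small cluster; the $`4\times 4`$ size and periodic boundaries below are our choice for illustration.

```python
from itertools import combinations

L = 4                                   # 4x4 periodic lattice of J1 dimers
sites = [(x, y) for x in range(L) for y in range(L)]

def nn_pairs(occ):
    """Number of nearest-neighbour pairs of triplets (periodic boundaries)."""
    n = 0
    for (x, y) in occ:
        n += ((x + 1) % L, y) in occ    # right neighbour
        n += (x, (y + 1) % L) in occ    # upper neighbour
    return n

# Along J2 = J3 the triplets have no dispersion, so at fixed filling the
# ground state simply minimizes the nearest-neighbour repulsion.  At half
# filling the minimum is zero, reached by the checkerboard Wigner crystal.
best = min(nn_pairs(frozenset(c)) for c in combinations(sites, L * L // 2))
checker = frozenset(s for s in sites if (s[0] + s[1]) % 2 == 0)
assert best == 0 and nn_pairs(checker) == 0
```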
It is useful to think of the problem in terms of the Bose Hubbard model , with the ($`S^z=1`$) triplet representing hard core Bosons. These Bosons have a strong nearest-neighbor repulsion and a weak diagonal (second-neighbor) and further-range hopping. At half-filling, the strong repulsion will clearly lead to a Wigner crystal and hence the magnetization plateaus will remain. However, the transition between the magnetization plateaus may now be second order. Indications of such plateaus also exist in the finite size calculations of Miyahara and Ueda for $`J_3=0`$ . However, due to the finite size, all plateaus appear discontinuously in their study. The question of whether there will be additional magnetization plateaus at other rational fillings perhaps including valence bond states, as in one dimension , deserves further attention. We now comment on the materials: One would naively expect the SrCu<sub>2</sub>(BO<sub>3</sub>)<sub>2</sub> system to be close to $`J_3=0`$, a limit that has been considered by other authors . The ratio of J<sub>2</sub> to J<sub>1</sub> has been placed in the literature close to the Dimer to Néel transition. Even though this transition may be first order, it would be very weakly so, due to the vicinity of the multicritical point. In this sense, this material may allow us to study a special quantum critical point, not the generic Néel to singlet quantum phase transition. However, this transition may be unstable to the generic $`O(3)`$ transition, when the special eigenstate of Shastry and Sutherland is not a true eigenstate due to some small perturbations. It would be interesting to study this crossover theoretically. An interesting problem could be the instability of these special transitions due to spin-lattice couplings. Magnetization plateaus have been observed in the material at one eighth and one quarter of the saturation magnetization. 
The exactly soluble model suggests that these phases may be regarded as simple Wigner crystals of local triplets. The question of whether the models away from $`J_2=J_3`$ will have magnetization plateaus at other rational fractions, or whether other couplings such as $`J_4`$ are needed for this, deserves further attention. We thank Ian Affleck, Leon Balents and Matthew Fisher for discussions. This work is supported in part by the National Science Foundation grants PHY94-07194 and DMR96-16574 and by the Deutsche Forschungsgemeinschaft, SFB 341 and SPP 1073.
no-problem/9910/astro-ph9910467.html
ar5iv
text
# Properties of high-𝑧 galaxies as seen through lensing clusters ## 1. Introduction The basis of our large collaboration program, involving different European institutions, is to use clusters of galaxies as gravitational lenses to build up and to study an independent sample of high-z galaxies. This sample is important because it complements the large samples obtained in field surveys. The idea is to take advantage of the large amplification factor close to the critical lines in lensing clusters (typically 1 to 3 magnitudes) to study the properties of the distant background population of lensed galaxies. The signal/noise ratio in spectra of amplified sources and the detection fluxes are improved beyond the limits of conventional techniques, whatever the wavelength used for this exercise. In particular, the amplification properties have been successfully used in the ultra-deep MIR survey of A2390 (Altieri et al. 1999), and the SCUBA cluster lens-survey (Smail et al. 1998; Blain et al. 1999). This collaboration program is ongoing, and the next step is to perform the spectroscopic follow-up (mainly with the VLT) on a sample of high-z candidates selected through both photometric redshifts and lensing inversion procedures. A number of high-z lensed galaxies have been found since the first case of a lensed galaxy at $`z\stackrel{>}{}2`$, the spectacular blue arc in Cl2244-02 (Mellier et al. 1991), and these findings strongly encourage our approach. Among the recent examples of highly-magnified distant galaxies, identified either purposely or serendipitously, one can mention: the star-forming source $`\mathrm{\#}384`$ in A2218 at z=2.51 (Ebbels et al. 1996); the luminous z=2.7 arc behind the EMSS cluster MS1512+36 (Yee et al. 1996; Seitz et al. 1998); the three z $``$ 4 galaxies in Cl0939+47 (Trager et al. 1997); a z=4.92 system in Cl1358+62 (Franx et al. 1997, Soifer et al. 1998); and the two red galaxies at $`z\sim 4`$ in A2390 (Frye & Broadhurst 1998, Pelló et al. 1999).
## 2. The photometric redshift approach Photometric redshifts (hereafter $`z_{phot}`$ ) are computed using a standard SED fitting procedure originally developed by Miralles (1998). A new public version of this tool, called hyperz, is presently under development (Bolzonella, Miralles and Pelló, in preparation; see also Bolzonella & Pelló, this conference). The set of templates includes mainly spectra from the Bruzual & Charlot evolutionary code (GISSEL98, Bruzual & Charlot 1993), and also a set of empirical SEDs compiled by Coleman, Wu and Weedman (1980) to represent the local population of galaxies. The synthetic database derived from Bruzual & Charlot includes 255 spectra, distributed into 5 different star-formation regimes (all of them with solar metallicity): a burst of 0.1 Gyr, a constant star-formation rate, and 3 $`\mu `$ models (exponentially decaying SFR) with characteristic times of star formation chosen to match the present-day sequence of E, Sa and Sc galaxies. The reddening law is taken from Calzetti (1997), with $`A_V`$ ranging between 0 and 0.5 magnitudes. Flux decrements in the Lyman forest are computed according to Madau (1995). When applying hyperz to the spectroscopic samples available on the HDF, the uncertainties are typically $`\delta z/(1+z)\sim 0.1`$ (Bolzonella & Pelló, this conference). ## 3. Results and Future Developments High-z lensed sources with $`z_{phot}\stackrel{>}{}2`$ are selected close to the appropriate critical lines. In all cases, $`z_{phot}`$ are computed from broad-band photometry over a large wavelength interval, from B (U when possible) to K. This allows us to cancel the biases which drive the convergence of the fitting procedure towards or against a particular type of galaxy or redshift domain, and also to reduce the errors on $`z_{phot}`$. Furthermore, it makes it possible to optimize the instrument choice for further spectroscopic follow-up (visible vs. near-IR bands).
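The template-fitting scheme of Sec. 2 can be sketched in a few lines. Everything in the sketch below (the band set, the "templates", the mock fluxes) is invented for illustration; only the generic logic — minimize $`\chi ^2`$ over a redshift/template grid, with the flux normalization solved analytically — mirrors what a code such as hyperz does, not its actual templates or implementation.

```python
import numpy as np

z_grid = np.linspace(0.0, 5.0, 501)
bands = np.arange(5.0)                  # 5 broad bands, schematically B..K

def template_flux(z, slope):
    """Toy SED: a power-law continuum with a smooth break moving redward
    with z (a cartoon of the Lyman-break signature, not a real spectrum)."""
    f = 10.0 ** (-0.4 * slope * bands)
    return f / (1.0 + np.exp(-2.0 * (bands - z)))

def z_phot(obs, err, slopes=(0.0, 0.3, 0.6)):
    best = (np.inf, None)
    for z in z_grid:
        for s in slopes:
            t = template_flux(z, s)
            # best-fit amplitude has a closed form for fixed template
            amp = np.sum(obs * t / err**2) / np.sum(t**2 / err**2)
            chi2 = np.sum(((obs - amp * t) / err) ** 2)
            best = min(best, (chi2, z))
    return best[1]

# A mock source drawn from the z = 3.2, slope-0.3 template is recovered:
obs = 2.0 * template_flux(3.2, 0.3)
assert abs(z_phot(obs, 0.05 * np.ones(5)) - 3.2) < 0.05
```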
The first spectroscopic surveys were performed on $`4`$m telescopes: CFHT/ WHT/ ESO (3.6m, NTT) (Toulouse/Cambridge/Paris/Barcelona collaboration). They have demonstrated the efficiency of the technique (see for instance Ebbels et al. 1996, 1998, and Pelló et al. 1999). The present large VLT Program is focused on an X-ray selected sample of 12 lensing clusters. Photometry was performed on $`4`$m telescopes (NTT, 3.6m telescope ESO, …), including HST and/or other archive images when available. We intend to obtain the whole spectroscopic follow-up at the VLT (FORS, ISAAC, …). The main goals of this program are: to determine the $`z`$ distribution of a very faint subsample of high-$`z`$ lensed galaxies, invisible otherwise; to study the SEDs of $`z\sim 1`$ galaxies (especially $`z\sim 2`$) for a sample less biased in luminosity than the field (SFR history, permitted region in the age- metallicity- reddening-… space); and to explore the dynamics of $`z\sim 2`$ sources by using 2D spectroscopy of arcs, a prospective issue for future studies. Most of the photometric survey is presently completed. We have obtained the (photometric) $`z`$ distribution of arclets in several well-known clusters (A2390, A370, Cl2244-02, AC114,…). Figure 2 displays the $`z_{phot}`$ distribution of arclets in three fields, where the samples were defined according to different criteria. The typical number of high-z sources found in the inner 1’ radius region of the cluster is $`30`$ to 50 at $`1\le z\le 7`$. For a subsample of spectroscopically confirmed sources in different clusters (with $`0.4\le z\le 4`$), the $`z_{phot}`$ accuracy has been checked as a function of the relevant parameters (SFR, reddening, age and metallicity of the stellar population). We have also cross-checked the consistency between the photometric, the spectroscopic and the lensing redshifts obtained from inversion methods (Ebbels et al. 1998).
According to the present results, the agreement between the three methods is good up to at least $`z\stackrel{<}{}1.5`$. The comparison between the spectroscopic and the lensing redshift was already studied in the field of A2218 (Ebbels et al. 1998), and all the present results seem to follow this trend up to $`z\stackrel{<}{}1.5`$ at least. Taking into account that $`z_{phot}`$ and lensing inversion techniques produce independent probability distributions for amplified sources, combining both methods provides a robust way to determine the redshift distribution of distant sources. This comparison gives promising results at $`z\stackrel{>}{}1.5`$, for the most amplified sources, but an enlarged spectroscopic sample is urgently needed to reach a conclusion, in particular for the most distant candidates, which could be the most distant galaxies ever detected. The method is restricted to lensing clusters whose mass distribution is highly constrained by multiple images (revealed by HST or ground-based multicolor images), where the amplification uncertainties are typically $`\mathrm{\Delta }m_{lensing}<`$ 0.3 magnitudes. Such well-constrained mass distributions enable one to recover precisely the properties of lensed galaxies (morphology, magnification factor). It is worth noting that highly magnified sources are presently the only way to access the dynamical properties of galaxies at $`z\sim 2`$, through 2D spectroscopy, at a spatial resolution $`\sim `$ 1 kpc. The two multiple images at the same $`z\sim 4`$, observed behind A2390, are an example of these reconstruction capabilities (Pelló et al. 1999). Thanks to the lensing inversion, lensing clusters can therefore be used to calibrate photometric redshifts as well, up to the faintest limits in magnitude for a given z. They could also be used advantageously to search for primeval galaxies, in order to put strong constraints on the scenarios of galaxy formation. ### Acknowledgments. We are grateful to G. Bruzual, S. Charlot, R.S. Ellis, S.
Seitz and I. Smail for useful discussions on this particular technique. Part of this work was supported by the TMR Lensnet ERBFMRXCT97-0172 (http://www.ast.cam.ac.uk/IoA/lensnet). ## References Altieri, B., et al. 1999, A&A, 343, L65 Blain, A. W., Kneib, J.-P., Ivison, R. J., Smail, I. 1999, ApJ, 512, L87 Bruzual, G., Charlot, S. 1993, ApJ, 405, 538 Calzetti, D. 1997, AIP Conference Proceedings, v.408., p.403 (astro-ph/9706121) Coleman, D.G., Wu, C.C., Weedman, D.W. 1980, ApJS, 43, 393 Ebbels, T.M.D., et al. 1996, MNRAS, 281, L75 Ebbels, T.M.D., et al. 1998, MNRAS, 295, 75 Franx, M., et al. 1997, ApJ, 486, 75 Frye, B., Broadhurst T. 1998, ApJ, 499, 115 Madau, P. 1995, ApJ, 441, 18 Mellier, Y., et al. 1991, ApJ, 380, 334 Miralles, J. M. 1998, PhD. thesis Université Paul Sabatier Miralles, J. M., Pelló, R. 1998, ApJsubmitted, (astro-ph/9801062) Pelló, R. et al. 1999, A&A, 346, 359, (astro-ph/9810390) Seitz, S., et al. 1998, MNRAS, 298, 945 Smail, I., Ivison, R. J., Blain, A. W., Kneib, J.-P. 1998, AAS 192, 4813 Soifer, B.T., et al. 1998, ApJ, 501, 171 Trager, S. C., et al. 1997 , ApJ, 485, 92 Yee, H.K.C., et al. 1996, AJ, 111, 1783
no-problem/9910/chao-dyn9910011.html
ar5iv
text
# Signatures of quantum chaos in nodal points and streamlines in electron transport through billiards ## I Introduction Billiards play a predominant role in the study of classical and quantum chaos. Indeed, the nature of quantum chaos in a specific system is traditionally inferred from its classical counterpart. Hence one may ask whether quantum chaos is to be understood solely as a phenomenon that emerges in the classical limit, or whether there are intrinsically quantal phenomena which can contribute to irregular behavior in the quantum domain. This is a question we raise in connection with quantum transport through ideal regular and irregular electron billiards. The seminal studies by McDonald and Kauffmann of the morphology of eigenstates in a closed Bunimovich stadium have revealed characteristic patterns of disordered, undirectional and non-crossing nodal lines. Here we will first discuss what happens to patterns like these when input and output leads are attached to a billiard, regular or irregular, and an electric current is induced through the billiard by an applied voltage between the two leads. For such an open system the wave function $`\psi `$ is now a scattering state with both real and imaginary parts, each of which gives rise to a separate set of nodal lines at which either $`Re[\psi ]`$ or $`Im[\psi ]`$ vanishes. How will the patterns of nodal lines evolve as, e.g., the energy of the injected electrons is increased, i.e., as more scattering channels become open? Could they tell us something about how the perturbing leads reduce symmetry and how an initially regular billiard may eventually turn into a chaotic one as the number of open modes increases? Below we will argue that nodal points, i.e., the points at which the two sets of nodal lines intersect because $`Re[\psi ]=Im[\psi ]=0`$, carry important information in this respect. Thus we will study their spatial distributions and try to characterize chaos in terms of such distributions.
The question we wish to ask is simply whether one can find a distinct difference between the distributions for nominally regular and irregular cavities. In addition, which other signatures of quantum chaos may one find in the coherent transport in open billiards? The spatial distribution of nodal points plays a decisive role in how the flow pattern is shaped. Therefore we will also study the general behavior of streamlines derived from the probability current associated with a stationary scattering state $$\psi =\sqrt{\rho }\mathrm{exp}(iS/\hbar ).$$ The time independent Schrödinger equation can be decomposed as $$E=\frac{1}{2}mv^2+V+V_{QM},$$ $$\nabla \cdot (\rho 𝐯)=0,\qquad m\dot{𝐗}=\nabla S.$$ The separate quantum streamlines are sometimes referred to as Bohm trajectories. In this alternative interpretation of quantum mechanics it is thought that an electron is a ”real” particle that follows a continuous and causally defined trajectory (streamline) with a well defined position $`𝐗`$, with the velocity of the particle given by the expressions above. These equations imply that the electron moves under the action of a force which is not obtained entirely from the classical potential $`V`$, but also involves a ”quantum mechanical” potential $$V_{QM}=-\frac{\hbar ^2}{2m}\frac{\nabla ^2\sqrt{\rho }}{\sqrt{\rho }}.$$ This quantum potential is negatively large where the wave function is small, and becomes infinite at the nodal points of the wave function where $`\rho (x,y)=0`$. Therefore, the close vicinity of a nodal point constitutes a forbidden area for quantum streamlines contributing to the net transport from source to drain. When $`\rho `$ does not vanish, $`S`$ is single valued and continuous. However, at a nodal point where $`\psi =0`$, neither $`S`$ nor $`\nabla S`$ is well defined. The behavior of $`S`$ around these nodal points is discussed in Ref. . 
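The quantum potential above can be evaluated numerically from a sampled density. The following is a minimal one-dimensional sketch (not the billiard calculation itself), assuming units $`\hbar =m=1`$ and a Gaussian density $`\rho (x)=e^{x^2}`$, for which $`V_{QM}`$ has the closed form $`(1-x^2)/2`$; a finite-difference evaluation is checked against it:

```python
import numpy as np

# One-dimensional illustration (units hbar = m = 1): for the Gaussian density
# rho(x) = exp(-x**2), sqrt(rho) = exp(-x**2/2) and the quantum potential
# V_QM = -(1/2) * (d^2 sqrt(rho)/dx^2) / sqrt(rho) equals (1 - x**2)/2.
x = np.linspace(-2.0, 2.0, 2001)
dx = x[1] - x[0]
amp = np.exp(-x**2 / 2.0)                      # sqrt(rho)

lap = np.empty_like(amp)
lap[1:-1] = (amp[2:] - 2.0 * amp[1:-1] + amp[:-2]) / dx**2
lap[0], lap[-1] = lap[1], lap[-2]              # crude boundary handling

v_qm = -0.5 * lap / amp                        # finite-difference V_QM
v_exact = 0.5 * (1.0 - x**2)                   # closed form for this density
print(np.max(np.abs(v_qm[1:-1] - v_exact[1:-1])))  # small discretisation error
```

Near a node, where the amplitude in the denominator vanishes, the same construction diverges, which is precisely the repulsion of streamlines from nodal points discussed in the text.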
For our study the most important property of the nodal points of $`\psi `$ is that the probability current flow described by ’open’ streamlines cannot encircle a nodal point. On the contrary, such streamlines are effectively repelled from the close vicinity of the nodal points, as if these were impurities. The scattering wave functions $`\psi `$ are found by solving the Schrödinger equation in a tight-binding approximation with the Neumann boundary conditions imposed outside the billiards, at a distance over which evanescent modes effectively decay to zero. The energy of the incident electron is $`ϵ=20`$, where $`ϵ=2E_Fd^2m^{*}/\hbar ^2`$, in which $`E_F`$ is the Fermi energy, $`d`$ the width of the channel, and $`m^{*}`$ the effective mass. ## II Distributions of nodal points An inspection of the two sets of nodal lines associated with the real and imaginary parts of the scattering wave function reveals the typical pattern of undirectional, self-avoiding nodal lines found already by McDonald and Kauffmann for an isolated, irregular billiard. However, in our case of a complex scattering function the nodal lines are not uniquely defined because a multiplication of the wave function by an arbitrary constant phase factor exp$`(i\alpha )`$ would yield a different pattern. The nodal points, on the other hand, appear to be helpful in this respect. They represent a new aspect of the open system and obviously remain fixed upon a change of the phase of the wave function. Here we conjecture that the nodal points may serve as unique markers which should be useful for a quantitative characterization of scattering wave functions for open systems. To be more specific, we have considered a large number of realizations (’samples’) of nodal points associated with different kinds of billiards and present averaged normalized distributions of nearest distances between the nodal points. Fig. 1 shows the distributions for open Sinai (a), Bunimovich (b) and rectangular billiards (c, d). 
The distributions are obtained as an average over 101 different values of energy belonging to a specific energy window in which the conductance undergoes a few oscillations, as shown by the insets in Fig. 1. The cases (a, b, c) present two-channel transmission through the billiards while case (d) refers to five-channel transmission. The rectangular billiard is nominally maximal in area, with numerical size $`210\times 100`$ and with the width of the leads equal to 10. It is noteworthy that the distribution of nearest neighbors is distinctly different from the corresponding distribution for random points in the two-dimensional plane, $$g(r)=2\pi \rho r\mathrm{exp}(-\pi \rho r^2),$$ $`(1)`$ where the density $`\rho `$ of random points is related to the mean separation $`<r>`$ as $`\rho =1/(4<r>^2)`$. This distribution is shown in Fig. 1 (a) by the thin line, indicating an underlying correlation between the nodal points of the transport wave function through the Sinai billiard. In this sense quantum chaos is not randomness. With slight deviations the Bunimovich billiard gives rise to the same distributions as the Sinai billiard, as shown in Fig. 1 (a,b). Analysis of the distributions for lower energies ($`ϵ\lesssim 20`$, one-channel transmission) gives quite similar universal forms as shown in Fig. 1 (a, b), but with more pronounced fluctuations because the number of nodal points is smaller at lower energies. Moreover, averaging over wider energy domains with a finer grid, or for higher energies, gives no visible deviations from the distributions in Fig. 1 (a, b). We also considered Berry’s wave function of a chaotic billiard, which is accepted as a standard measure of quantum chaos: $$\psi (x,y)=\sum _j|a_j|\mathrm{exp}[ik(\mathrm{cos}\theta _jx+\mathrm{sin}\theta _jy)+i\varphi _j]$$ $`(2)`$ where $`\theta _j,|a_j|`$ and $`\varphi _j`$ are independent random variables. 
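The random-point benchmark of Eq. (1) can be reproduced with a short Monte Carlo sketch. The setup here is our own choice (a periodic unit box, so that edge effects do not bias the statistics): points scattered uniformly should show nearest-neighbour distances following $`g(r)`$, with mean $`<r>=1/(2\sqrt{\rho })`$:

```python
import numpy as np

# Nearest-neighbour statistics of fully random points in a periodic unit box:
# g(r) = 2*pi*rho*r*exp(-pi*rho*r**2), with mean <r> = 1/(2*sqrt(rho)).
rng = np.random.default_rng(0)
n = 1500
pts = rng.uniform(0.0, 1.0, size=(n, 2))
rho = float(n)                                   # density in the unit box

d = pts[:, None, :] - pts[None, :, :]            # all pairwise separations
d -= np.round(d)                                 # minimum-image convention
dist = np.sqrt((d**2).sum(-1))
np.fill_diagonal(dist, np.inf)                   # exclude self-distances
r_nn = dist.min(axis=1)                          # nearest-neighbour distances

print(r_nn.mean(), 1.0 / (2.0 * np.sqrt(rho)))   # the two should nearly coincide
```

A histogram of `r_nn` reproduces the thin line of Fig. 1 (a); the deviation of the nodal-point histograms from it is what signals the underlying correlations discussed in the text.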
We found that the distribution of nearest distances between the nodal points of (2) has exactly the same form as for the Sinai billiard, Fig. 1 (a). On the other hand, an analysis of the nodal points of the wave function $$\psi (x,y)=\sum _{k_x,k_y}\mathrm{exp}[i(k_xx+k_yy)]$$ $`(3)`$ with $`k_x,k_y`$ distributed randomly leads to the distribution (1) of random points. To supplement the averaging over energy we have also considered averaging over the positions of the leads. Fig. 2 (a) shows the normalized distribution of the nearest distances between nodal points for the Sinai billiard obtained as an average over 101 positions of the input lead. As seen, this distribution has the same form as for the energy-averaged Sinai billiard in Fig. 1 (a). In the same way Fig. 2 (b) shows the corresponding case of the Bunimovich billiard with an asymmetric input lead, to be compared with Fig. 1 (b). The asymmetric arrangement of leads allows a larger number of eigenstates of the Bunimovich billiard to participate in the electron transport because symmetry restrictions are relaxed. On the basis of Figs. 1 and 2 and comparison with Berry’s wave function (2) we therefore argue that there is a universal distribution that characterizes open chaotic billiards. At this stage we conclude that the form of the distributions is not sensitive to the averaging procedure, to the number of channels of electron transmission, or to the way the leads are attached. The mathematical form of the universal distribution constitutes an interesting problem that remains to be solved. So does a derivation of the random distribution associated with the wave function in Eq. (3). Let us now turn to the case of the nominally regular rectangular billiard. In Fig. 1 (c) the distribution functions are given for the case of two-channel transmission with the same energy averaging procedure as for the chaotic billiards. 
The nearest-neighbor distribution clearly displays a peak corresponding to a regular set of nodal points, in contrast to the other billiards discussed above. This feature is found even for very high energies around 250 (five-channel transmission). Therefore the rectangular dot with the two symmetrically attached leads displays considerable stability with respect to regular nodal points, in contrast to the chaotic Sinai and Bunimovich billiards. As indicated, symmetric leads impose restrictions on how states inside the billiard are selected and mixed on injection of a particle. In Fig. 2 (c) the result of averaging over the positions of the input lead is therefore shown for the rectangular billiard at a fixed energy chosen from the energy domain in Fig. 1 (c). As may be expected, the pronounced peak in the distribution function of nearest nodal points has now disappeared. Moreover, the distribution is close to the case of the Bunimovich billiard in Fig. 1 (b) and Fig. 2 (b). Evidently the asymmetric positioning of the leads disturbs the nominally regular billiard in a much more profound way, effectively lending it chaotic characteristics. To reconfirm this conclusion we have also performed calculations of the distribution of nodal points within the same energy domain and with the same number of energy steps as in Fig. 1 (c), but for asymmetric positions of the input lead. In fact, the distribution function of nearest distances in Fig. 2 (d) demonstrates a close similarity with the position average of the nodal points. Therefore the non-universal behavior of the distribution function of nodal points for the rectangular billiard shown in Fig. 1 (c, d) is the result of only a few symmetric eigenstates taking part in the transmission because of symmetry restrictions. In order to give a quantitative measure of the disorder of nodal point patterns we consider the Shannon entropy $`S`$, normalized for each specific billiard by the entropy of fully random points. 
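The text does not spell out how this entropy is constructed, so the sketch below rests on an assumption: we take the Shannon entropy of the histogram of nearest-neighbour distances, normalised by the same quantity for fully random points, so that a near-regular pattern scores well below 1 while a random pattern scores close to 1 by construction:

```python
import numpy as np

# Assumed construction (not specified in the text): Shannon entropy of the
# nearest-neighbour distance histogram, on common bins, normalised by the
# value for fully random points.
rng = np.random.default_rng(1)
edges = np.linspace(0.0, 0.1, 21)               # common bins for both patterns

def nn_distances(pts):
    d = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((d**2).sum(-1))
    np.fill_diagonal(dist, np.inf)
    return dist.min(axis=1)

def entropy(r):
    p, _ = np.histogram(r, bins=edges)
    p = p[p > 0] / p.sum()
    return -(p * np.log(p)).sum()

m = 30                                          # 30 x 30 points in a unit box
grid = np.stack(np.meshgrid(np.arange(m), np.arange(m)), -1).reshape(-1, 2) / m
grid = grid + 0.001 * rng.normal(size=grid.shape)  # slightly perturbed lattice
rand = rng.uniform(0.0, 1.0, size=(m * m, 2))      # fully random points

S = entropy(nn_distances(grid)) / entropy(nn_distances(rand))
print(S)  # well below 1 for the near-regular pattern
```

Under this reading, the 'nodal crystal' of the symmetric rectangular billiard would give a low normalised entropy, while the chaotic billiards approach the random-point value.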
Numerical values for $`S`$ are specified in Figs. 1 and 2. As may be expected, there is a clear tendency towards maximal entropy for chaotic billiards within the same energy window. A similar tendency is clearly seen for the position average (Fig. 2). The case of the rectangular billiard with entropy 0.95 in Fig. 1 (d) falls outside this rule because for five-channel transmission the number of nodal points substantially exceeds that in the other cases considered, irrespective of the type of billiard. Thus the Shannon entropy of nodal points is an important additional quantitative measure of quantum chaos for quantum transport through billiards. ## III Streamlines As mentioned above, streamlines are strongly affected by the positions of nodal points. Superficially the latter play the role of impurities. It is therefore an interesting question whether streamlines behave differently in regular and irregular situations, and for this reason we will consider a few typical examples starting with two well defined systems, the nominally regular rectangle and the irregular Sinai billiard. Fig. 3 (a) shows the flow lines in the case of the rectangular billiard. The features of the flow lines connecting input and output leads are remarkable. It is clearly seen how the flow (trajectories) effectively ’channels’ through ’a nodal crystal’, avoiding the individual nodal points. This picture is evidently very different from semi-classical physics and periodic orbit theory. In Fig. 3 only contributions to the net current are displayed. In addition there are also vortical motions centered around each nodal point. The other extreme, the completely chaotic Sinai billiard, is shown in Fig. 3 (b). Because the nodal distribution is now irregular, the streamlines also form an irregular pattern when finding their way through the rough potential landscape. Since a streamline cannot cross itself, Fig. 3 brings to mind the classical example of meandering rivers in a flat delta landscape. 
As is well known, slight changes in the topography, for example by moving only a few obstacles to new positions, may induce completely new flow patterns in sometimes dramatic ways. In the same way slight variations of, for example, the energy may affect the quantum streamlines in the Sinai billiard in endless ways, occasionally forming more collected bunches connecting the two leads in a more focused way than in Fig. 3 (b). The same type of behavior has also been obtained for a two-dimensional ring in which a tiny variation of the external magnetic flux induces drastic changes of the flow lines and, as a consequence, Aharonov-Bohm oscillations become irregular. ###### Acknowledgements. This work has been partially supported by the INTAS-RFBR Grant 95-IN-RU-657, RFFI Grant 97-02-16305 and the Swedish Natural Science Research Council. The computations were in part performed at the National Supercomputer Centre at Linköping University. Figure captions
no-problem/9910/chao-dyn9910029.html
ar5iv
text
# Different transport regimes in a spatially-extended recirculating background ## Abstract Passive scalar transport in a spatially-extended background of roll convection is considered in the time-periodic regime. The latter arises due to the even oscillatory instability of the cell lateral boundary, here accounted for by sinusoidal oscillations of frequency $`\omega `$. By varying the latter parameter, the strength of anticorrelated regions of the velocity field can be controlled and the conditions under which either an enhancement or a reduction of transport takes place can be created. These two ubiquitous regimes are triggered by a small-scale (random) velocity field superimposed on the recirculating background. The crucial role is played by the dependence of Lagrangian trajectories on the statistical properties of the small-scale velocity field, e.g. its correlation time or its energy. PACS 47.27Qb – Turbulent diffusion. Transport in turbulent flows with recirculation is a problem of great interest both in the atmosphere and in the ocean . In the atmosphere, the so-called horizontal roll vortices or, briefly, rolls are the paradigm of a recirculating pattern. They can be considered as a manifestation of Bénard–Rayleigh convection occurring when conditions of combined surface heating and strong winds take place in the atmospheric boundary layer. Their depth equals the mixing layer and the ratio of lateral to vertical dimensions for a roll pair is about 3:1 (see, e.g., Ref. ). In the following, this type of flow configuration will be the background into which particle tracers (i.e. a passive scalar) are released and their statistics investigated. An important characteristic of atmospheric rolls is their large spatial extension, which makes possible the description of the related dispersion problem through an effective diffusion equation, i.e. 
a Fick equation for the large-scale slow-varying passive scalar concentration where the molecular diffusivity is replaced by an enhanced (eddy-) diffusivity. The existence of an asymptotic diffusive regime for the large-scale concentration can be rigorously proved by using, e.g., multiscale techniques . A simple Lagrangian interpretation of the asymptotic diffusive regimes is based on “central limit” argumentations. For Lagrangian chaotic flows, velocity correlations decay rapidly and, as a consequence, the particle displacement, $`\delta x(t)`$, at the time $`t`$, is the result of a sum of almost independent advecting contributions. The result is that $`\delta x(t)`$ undergoes Brownian motion when observed on times larger than the typical velocity correlation time. In this asymptotic framework, the effect on the dispersion process of the small-scale components of the velocity field is the renormalization of the effective diffusion coefficient. Notice that, as pointed out in Ref. , an eddy-diffusivity based description should not be possible in the presence of finite-size domains with a small number of recirculations where, necessarily, the asymptotic regime might not be reached and the dynamics is governed by transient behaviors . Interesting studies on these regimes can be found, e.g., in Refs. . For the system we are going to investigate, the characteristic length of the organized array of cells is smaller than the size of the domain. As a consequence, the eddy-diffusivity tensor, defined through the following asymptotic limit, $$D_{\alpha \beta }^E=\lim _{t\to \infty }\frac{1}{2t}\langle [x_\alpha (t)-x_\alpha (0)][x_\beta (t)-x_\beta (0)]\rangle ,$$ (1) with $`x(t)`$ being the particle position at the time $`t`$, and $`\langle \cdots \rangle `$ the average over an ensemble of tracer particles, turns out to be a well-defined mathematical quantity. Atmospheric rolls show a wide range of regimes ranging from (almost) time-independent to turbulent flow. 
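The limit in Eq. (1) can be illustrated in the simplest possible setting. As a minimal sketch with purely illustrative parameter values: with no advecting field the tracer is a pure random walk, and the estimator must return the bare diffusivity $`D_0`$:

```python
import numpy as np

# Sanity check of the eddy-diffusivity definition, Eq. (1): with no advecting
# field, x(t) is a pure random walk and the estimator reduces to the bare D0.
rng = np.random.default_rng(2)
D0, dt, steps, n = 0.5, 0.01, 4000, 2000         # illustrative values

x = np.zeros(n)                                  # ensemble of n tracers
for _ in range(steps):
    x += np.sqrt(2.0 * D0 * dt) * rng.normal(size=n)

D_est = np.mean(x**2) / (2.0 * steps * dt)       # <[x(t)-x(0)]^2> / (2t)
print(D_est)  # close to D0 = 0.5
```

With an advecting field switched on, the same estimator picks up the additional (positive or negative) contribution of the Lagrangian velocity correlations discussed below.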
The related dispersion phenomena are clearly strongly influenced by these different regimes and, as a consequence, transport rates vary over a wide range. An intermediate regime attracting considerable attention theoretically , experimentally and numerically is the time-periodic regime, where the transport process is dominated by advection of tracer particles across the lateral boundary. This regime will be the main concern of the present Letter. Specifically, the main question addressed here concerns the role of the small-scale (not explicitly resolved) components of the velocity field in the large-scale transport. We shall show that the superposition of a colored (random) noise velocity (which can be thought of as being associated with small-scale turbulent motion with nonvanishing memory) on the convective (deterministic) background strongly affects the transport process: either an enhanced or a reduced (with respect to the white case) eddy-diffusion may occur, depending on the frequency of the lateral roll oscillations. The interference mechanism, recently proposed in Refs. for the simple, idealized parallel flow, is identified here as responsible for this twofold behavior. Our two-dimensional model for the roll convection follows Ref. . Specifically, the convective flow is defined by the following stream function : $$\psi (x,y,t)=\psi _0\mathrm{sin}[k_x(x+B\mathrm{sin}\omega t)]\mathrm{sin}(k_yy),$$ (2) where $`y`$ ranges from $`0`$ to $`L_y=2\pi /k_y`$. The stream function (2) describes single-mode, two-dimensional convection with rigid boundary conditions, where the even oscillatory instability is accounted for by the term $`B\mathrm{sin}\omega t`$, representing the lateral oscillation of the roll. In Ref. , a quantitative comparison of the behavior in this flow with the experimental data has shown that the basic mechanisms of convective transport are well captured by the expression (2). 
The periodicity of the cell along the $`x`$-axis is denoted by $`L`$ ($`L=2\pi /k_x`$) while its depth (along the $`y`$ direction) is $`L/3`$ (i.e. $`k_y=3k_x`$). The amplitude, $`B`$, of the roll oscillations is taken to be $`0.13L`$. The dimensionless parameter controlling the dynamics is $`ϵ\equiv \omega /\omega _R`$, $`\omega _R\equiv k_xk_y\psi _0`$ being the characteristic frequency of particle oscillations inside the cell. The two limiting regimes $`ϵ\ll 1`$ and $`ϵ\gg 1`$ have been investigated analytically in Ref. to obtain expressions for the eddy-diffusivity in the limit of zero molecular diffusivity. Here, we shall concentrate on the behavior over a wide range of $`ϵ`$ and in the presence of diffusivity. The investigation is however not accessible by analytical techniques. In order to evaluate eddy-diffusivities, we have therefore decided to perform Monte Carlo numerical simulations of the Langevin equation $$\frac{d𝒙(t)}{dt}=𝒗(𝒙(t))+𝒗^{}(t).$$ (3) The velocity field $`𝒗(𝒙(t))`$ is incompressible, and related to the stream function (2) through the usual relations $`v_x=\partial _y\psi `$, $`v_y=-\partial _x\psi `$. The noise term $`𝒗^{}(t)`$ is a Gaussian, zero-mean random process with the colored-noise correlation function: $$\langle v_\alpha ^{}(t)v_\beta ^{}(t^{})\rangle =\frac{D_0}{\tau }\delta _{\alpha \beta }e^{-|t-t^{}|/\tau },$$ (4) where $`D_0`$ can be thought of as the (isotropic) eddy-diffusivity arising from the smallest (not explicitly resolved) scales of turbulent motion and $`\tau `$ is their correlation time. Notice that the white-in-time correlation function is obtained by taking the limit $`\tau \to 0`$. From the Langevin equation (3) the expression (1) for the eddy diffusivity can be easily rewritten in terms of the Lagrangian autocorrelation $`C_{\alpha \beta }(t)=\langle v_\alpha (𝒙(t))v_\beta (𝒙(0))\rangle `$: $$D_{\alpha \beta }^E=D_0\delta _{\alpha \beta }+\int _0^{\infty }𝑑tC_{\alpha \beta }(t).$$ (5) The role played by anticorrelated regions of the velocity field (i.e. regions where $`C_{\alpha \beta }<0`$ in Eq. 
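The colored noise of Eq. (4) is an Ornstein-Uhlenbeck process, and a convenient way to generate it is the exact one-step update, which preserves the stationary variance $`D_0/\tau `$ for any time step. A short sketch with purely illustrative parameter values:

```python
import numpy as np

# Generating the coloured noise of Eq. (4) as an Ornstein-Uhlenbeck process
# with the exact one-step update; the stationary variance is D0/tau.
rng = np.random.default_rng(3)
D0, tau, dt, n_steps, n = 0.5, 0.8, 0.05, 10000, 1000   # illustrative values

decay = np.exp(-dt / tau)
kick = np.sqrt((D0 / tau) * (1.0 - decay**2))
v = np.sqrt(D0 / tau) * rng.normal(size=n)       # start in the stationary state
acc = 0.0
for _ in range(n_steps):
    v = decay * v + kick * rng.normal(size=n)    # exact OU update
    acc += np.mean(v**2)

var = acc / n_steps
print(var, D0 / tau)  # the two should nearly coincide
```

In the limit $`\tau \to 0`$ at fixed $`D_0`$ the same process tends to white noise of strength $`D_0`$, recovering the white-in-time case referred to in the text.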
(5)) on the large-scale transport has been investigated in Refs. for the class of parallel flows. Two different regimes of transport may occur depending on the extension of such regions. Specifically, when anticorrelated regions are sufficiently extended (we denote this regime with the label EAR), an increasing $`\tau `$ (for a fixed, small, $`D_0`$) causes a reduction of transport (with respect to $`\tau =0`$) while an increasing $`D_0`$ (for a fixed, small, $`\tau `$) leads to an enhancement of transport (with respect to $`D_0=0`$). The scenario is opposite for anticorrelated regions that are weak enough (hereafter regime WAR), that is, $`D_0`$ leads to transport reduction while $`\tau `$ leads to transport enhancement. We briefly recall the basic mechanisms characterizing the above two different regimes. The first mechanism works to increase the Lagrangian correlation time (and thus eddy-diffusivities): this is due to the fact that $`\tau `$ makes the particles of diffusing substance forget their past less rapidly than in the case $`\tau =0`$. Thus, the autocorrelation function in (5) decays less rapidly than in the white-noise case. This implies an increasing weight of regions where the velocity is strongly (positively) correlated and, as an immediate consequence, an increasing eddy-diffusivity. The second mechanism arises for flows with closed streamlines and is associated with the presence of anticorrelated regions of the velocity field. The correlation time, $`\tau `$, now works to increase the effects of trapping due to the anticorrelated zones where the velocity is weak. This means that regions where the velocity is anticorrelated give an enhanced (again with respect to the white-noise case) contribution to the time integral (5). The contribution of anticorrelated regions to the time integral being negative, a reduction of diffusion occurs. 
Our main aim here is to show that in the presence of roll convection the aforesaid two mechanisms are relevant and work in competition, thus governing the large-scale transport. The frequency, $`\omega `$, of the lateral roll oscillation is identified here as one of the parameters controlling the crossover between the WAR and the EAR regimes. Notice that, unlike Refs. , our control parameter, $`\omega `$, is not trivially related to the extension of anticorrelated regions. The relation is intrinsic and selected by the dynamics. Indeed, by varying $`\omega `$, it is possible to synchronize the frequency (of order $`\omega _R`$) of particles inside the cell with the frequency, $`\omega `$, of the lateral roll oscillation. Due to this synchronization mechanism, the eddy-diffusivity as a function of $`\omega `$ can have maxima (when oscillations are in phase) or minima (when oscillations are in phase opposition). Moreover, from Eq. (5) it follows that maxima (minima) of diffusion are associated with the flow configurations with the weakest (strongest) anticorrelated regions. In order to evaluate the component of the eddy-diffusivity along the direction (e.g., the $`x`$-axis) of the lateral roll oscillation as a function of $`\omega `$, numerical integration of Eq. (3) has been performed using a second-order Runge-Kutta scheme and then performing a linear fit of $`\langle [x(t)-x(0)]^2\rangle `$ vs $`t`$. Averages are made over different realizations and performed by uniformly distributing $`10^6`$ particles in the basic periodic cell. The system evolution has been computed up to times $`10^4t_R`$, where $`t_R\equiv 2\pi /\omega _R`$. The $`x`$-component, $`D^E`$, of the eddy-diffusivity vs the frequency, $`\omega `$, of the roll oscillation is shown in Fig. 1 for $`D_0/\psi _0=5\times 10^{-3}`$ and different values of $`\tau `$: $`\tau =0`$ (full line), $`\tau /t_R=0.24`$ (dotted line), $`\tau /t_R=0.48`$ (long-dashed line) and $`\tau /t_R=0.95`$ (dot-dashed line). A few comments are in order. 
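The Monte Carlo procedure just described can be sketched in a few lines. This is a scaled-down illustration, not the production run of the paper: far fewer particles and shorter times, a Heun (second-order Runge-Kutta) step for the advective part with the colored noise held constant over each step, reflecting walls in $`y`$ as our choice of boundary condition, and purely illustrative parameter values:

```python
import numpy as np

# Scaled-down sketch of Eq. (3): oscillating-roll advection (Eq. (2)) plus
# Ornstein-Uhlenbeck noise (Eq. (4)), Heun integration, and D^E from a
# linear fit of <[x(t)-x(0)]^2> vs t.  All parameters are illustrative.
rng = np.random.default_rng(4)
psi0, kx, ky = 1.0, 1.0, 3.0
B = 0.13 * 2.0 * np.pi / kx                      # B = 0.13 L
omega = 1.1 * kx * ky * psi0                     # omega = 1.1 omega_R
D0, tau, dt, steps, n = 5e-3, 0.3, 5e-3, 15000, 400
Ly = 2.0 * np.pi / ky

def vel(x, y, t):
    phase = kx * (x + B * np.sin(omega * t))
    vx = psi0 * ky * np.sin(phase) * np.cos(ky * y)    # v_x = d(psi)/dy
    vy = -psi0 * kx * np.cos(phase) * np.sin(ky * y)   # v_y = -d(psi)/dx
    return vx, vy

x = rng.uniform(0.0, 2.0 * np.pi / kx, n)
y = rng.uniform(0.0, Ly, n)
x0 = x.copy()
nx = np.sqrt(D0 / tau) * rng.normal(size=n)      # OU noise, stationary start
ny = np.sqrt(D0 / tau) * rng.normal(size=n)
decay = np.exp(-dt / tau)
kick = np.sqrt((D0 / tau) * (1.0 - decay**2))

times, msd = [], []
for i in range(steps):
    t = i * dt
    vx1, vy1 = vel(x, y, t)                      # Heun predictor
    vx2, vy2 = vel(x + dt * (vx1 + nx), y + dt * (vy1 + ny), t + dt)
    x += 0.5 * dt * (vx1 + vx2) + dt * nx        # Heun corrector + noise
    y += 0.5 * dt * (vy1 + vy2) + dt * ny
    y = np.where(y < 0.0, -y, y)                 # reflecting walls in y
    y = np.where(y > Ly, 2.0 * Ly - y, y)        # (our boundary choice)
    nx = decay * nx + kick * rng.normal(size=n)  # exact OU update
    ny = decay * ny + kick * rng.normal(size=n)
    if i % 100 == 0:
        times.append(t)
        msd.append(np.mean((x - x0) ** 2))

half = len(times) // 2
D_E = 0.5 * np.polyfit(times[half:], msd[half:], 1)[0]
print(D_E)  # positive estimate of the eddy diffusivity along x
```

Scanning `omega` (and `tau`, `D0`) in this sketch would trace out a low-resolution analogue of the eddy-diffusivity profiles of Figs. 1-3.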
Eddy-diffusivity shows maxima originating from the resonance between the lateral roll oscillation frequency and the characteristic frequencies of the particle motion. Moreover, the effect of $`\tau `$ on the shape of the peaks is twofold: firstly, the variation of $`\tau `$ causes a shift of the maxima positions and, secondly, the larger (smaller) $`\tau `$, the higher (lower) the peaks. The first feature (particularly evident for $`\omega `$ corresponding to the highest peak) suggests that the correlation time, $`\tau `$, acts to renormalize the large-scale velocity. The result is that an enhanced (with respect to the white-in-time case) convective velocity governs the transport and, in order to have resonance when increasing $`\tau `$, the roll oscillation frequency must thus follow the increasing velocity. The renormalizing effect of $`\tau `$ has been identified perturbatively in Ref. for small $`\tau `$. Our results seem to suggest a generalization to finite $`\tau `$. Concerning the second effect played by $`\tau `$, it means that peaks and valleys of the eddy-diffusivity profile are associated with regions of type WAR and EAR, respectively. In terms of the two mechanisms associated with such regions, the first is the winner for values of $`\omega `$ corresponding to the peaks in the eddy-diffusivity profile (where the contribution of anticorrelated regions is weak), while the second dominates for values of $`\omega `$ corresponding to local minima in the eddy-diffusivity profile (where the weight of anticorrelated regions is strong). This can also be easily seen from Fig. 2, where behaviors of the eddy-diffusivity as a function of $`\tau `$ are shown for two different values of $`\omega `$: $`\omega /\omega _R=0.67`$ (on the left) and $`\omega /\omega _R=1.1`$ (on the right). The former value corresponds to a local minimum in the eddy-diffusivity profile, the latter to the highest peak (see Fig. 1). 
It is remarkable that the scenario described above is reversed when fixing $`\tau `$ and varying the (small-scale) bare eddy-diffusivity $`D_0`$. This can be easily observed from Fig. 3, where behaviors of the eddy-diffusivity are now shown as a function of $`D_0`$, keeping $`\tau `$ fixed and equal to zero. As in Fig. 2, on the left we have $`\omega /\omega _R=0.67`$ while, on the right, $`\omega /\omega _R=1.1`$. The physical reason for this behavior can be easily grasped from the aforesaid mechanisms, but now recalling the fact that an increasing $`D_0`$ makes the particles of diffusing substance forget their past more rapidly (rather than less rapidly, as happens when increasing $`\tau `$). This effect causes a reduction in the transport. On the other hand, concerning the second mechanism, trapping due to anticorrelated regions is now less effective. Indeed, leaving the region of trapping is easier when $`D_0`$ is increased. This fact leads to a reduction of the weight of the negative contribution to the time integral (5) giving the eddy-diffusivity and, as a consequence, transport is enhanced. The final result is thus a complete symmetry between the following operations: increasing $`\tau `$ (for a fixed $`D_0`$) and decreasing $`D_0`$ (for a fixed $`\tau `$). Large-scale transport thus seems controlled by the single parameter $`D_0/\tau `$. Roughly speaking, this is easily understood if we observe that when increasing $`\tau `$ the particle motion along the Lagrangian trajectories becomes more and more coherent. Conversely, coherence becomes lost when, for a fixed $`\tau `$, we increase $`D_0`$. In this sense, the dependence of Lagrangian trajectories on the statistical properties of the small-scale velocity is crucial in this problem. In conclusion, two ubiquitous different regimes of transport have been identified here as relevant in the time-periodic roll circulation. 
A key role in selecting these two regimes of transport enhancement/reduction is played by the interplay between the even oscillatory instability of the cell and the statistical properties of the small-scale (random) velocity (e.g. its correlation time or its energy). Specifically, in the model considered here, the even instability is accounted for by a sinusoidal lateral boundary oscillation with frequency $`\omega `$, while the small-scale velocity activity is described by a bare diffusivity, $`D_0`$, within a Gaussian, zero-mean random process with correlation time $`\tau `$. When $`\omega `$ is varied, the eddy-diffusivity profile appears very structured, with sharp peaks separated by evident valleys. Peaks turn out to be associated with transport enhancement when increasing $`\tau `$ (for a fixed $`D_0`$) or, conversely, when reducing $`D_0`$ (for a fixed $`\tau `$). The situation is opposite for values of $`\omega `$ corresponding to minima in the eddy-diffusivity profile. The key physical role is played by synchronization mechanisms (from which the structured eddy-diffusivity profile arises) and by the strength of the anticorrelated regions of the velocity field, whose weight in the time integral giving the eddy-diffusivity can be either enhanced (by reducing $`D_0/\tau `$) or reduced (by increasing $`D_0/\tau `$), thus affecting the large-scale transport in different ways. Acknowledgements We thank C.F. Ratto, M. Vergassola and A. Vulpiani for several perceptive comments and suggestions on this work. Useful suggestions and comments during the 1999 TAO Study Center are also acknowledged (A.M.). Simulations were performed at CINECA (INFM Parallel Computing Initiative). P.C. has been supported by INFM (Progetto Ricerca avanzata-Turbo), by MURST (program no. 9702265437) and by the European grant ERB 4061 PL 97-0775.
no-problem/9910/quant-ph9910006.html
ar5iv
text
# NMR quantum computation with indirectly coupled gates ## Abstract An NMR realization of a two-qubit quantum gate which processes quantum information indirectly, via couplings to a spectator qubit, is presented in the context of the Deutsch-Jozsa algorithm. This enables a successful comprehensive NMR implementation of the Deutsch-Jozsa algorithm for functions with three argument bits and demonstrates a technique essential for multi-qubit quantum computation. Nuclear magnetic resonance (NMR) spectroscopy has emerged at the forefront of experimental quantum computation investigations . Key concepts such as fundamental gates and error correction have been demonstrated using NMR spectroscopy. However, comprehensive algorithm realizations have only been accomplished for the Deutsch-Jozsa algorithm for functions with one- and two-bit arguments and for Grover’s algorithm with a two-bit register . In these instances crucial two-qubit gates were realized “directly” via the coupling between the corresponding nuclear spins. For a quantum computer with a larger number of qubits, the associated requirement of appreciable coupling between any pair of spins raises difficulties. First, it may not be possible to find any molecule with this coupling configuration. Second, it demands increasingly complex schemes for managing the evolution of spectator spins during execution of any two-qubit gate . However, efficient “indirect” realization of any two-qubit gate via a chain of couplings through intermediate spins is possible . This relaxes the coupling requirements, and a “linear” coupling configuration, for which the pattern of couplings is $`A-B-C-D-\cdots `$, suffices for quantum computation. In this Letter we report a comprehensive three-qubit NMR realization of the Deutsch-Jozsa algorithm for functions with three-bit arguments, using indirect realizations of two-qubit gates and a linear coupling configuration for information processing. 
The method that we present is general and readily scalable to larger numbers of qubits. The Deutsch problem considers $`f:\{0,1\}^N\to \{0,1\}`$ that are constant or balanced. A balanced function returns $`0`$ as many times as $`1`$ after evaluation over its entire range. Given any $`f`$ which is either balanced or constant, the problem is to determine its type. For classical algorithms that solve the Deutsch problem with certainty the number of evaluations of $`f`$ grows exponentially with $`N`$. A quantum algorithm requires a single evaluation of $`f`$ and yet solves the problem with certainty . Previous NMR demonstrations used the Cleve version , requiring an $`N`$ qubit control register for storing function arguments, plus a $`1`$ qubit function register for function evaluation. Our recent modification to the algorithm (see Fig. 1) only needs the $`N`$ qubit control register . It was also shown that for each admissible (i.e. constant or balanced) function with $`N\le 2`$ the evolution step is a product of single-qubit operations. For an isolated spin $`\frac{1}{2}`$ nucleus any single-qubit operation amounts to a rotation of the magnetization vector; this can be implemented classically. Thus for $`N\le 2`$ the algorithm can be executed classically. Previous comprehensive NMR implementations of the algorithm fall within this classical regime . However, for $`N\ge 3`$ there exist balanced functions for which qubits in the control register become entangled; this is indisputably quantum mechanical. $`N=3`$ is then the critical point at which quantum mechanical features become essential in the Deutsch-Jozsa algorithm. Two-qubit, entangling operations only appear during the function evaluation step, $`\widehat{U}_f`$. However, for $`f`$ constant, $`\widehat{U}_f=\widehat{I}`$ and the algorithm can be executed classically. To assess quantum behaviour the corresponding gate for balanced $`f`$ must be investigated. 
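The control-register-only version of the algorithm is straightforward to simulate on a classical computer. In a minimal statevector sketch (the helper names are ours), $`\widehat{U}_f`$ multiplies each basis state $`|x\rangle `$ by $`(-1)^{f(x)}`$; after Hadamards on all qubits, one oracle call, and Hadamards again, the amplitude remaining on $`|00\mathrm{}0\rangle `$ is $`\pm 1`$ for a constant function and $`0`$ for a balanced one:

```python
import numpy as np

# Statevector sketch of the register-only Deutsch-Jozsa algorithm for N = 3.
# The oracle acts as a pure phase: U_f |x> = (-1)^f(x) |x>.
def dj_amplitude(f, n=3):
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    Hn = H
    for _ in range(n - 1):
        Hn = np.kron(Hn, H)                       # Hadamard on every qubit
    state = np.zeros(2**n)
    state[0] = 1.0                                # start in |00...0>
    state = Hn @ state                            # uniform superposition
    state *= np.array([(-1.0)**f(x) for x in range(2**n)])  # one oracle call
    state = Hn @ state
    return state[0]                               # amplitude left on |00...0>

constant = lambda x: 0
balanced = lambda x: x & 1                        # f(x) = x0, a balanced example
print(abs(dj_amplitude(constant)))                # ~1: constant
print(abs(dj_amplitude(balanced)))                # ~0: balanced
```

A single measurement of the control register thus distinguishes the two function types with certainty, which is the quantum advantage exploited in the experiment.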
Balanced functions can be characterized by the choices of $`2^{N-1}`$ arguments out of $`2^N`$ possibilities for which $`0`$ is returned. For $`N=3`$ this gives $`\left(\genfrac{}{}{0pt}{}{8}{4}\right)=70`$ balanced functions. Admissible functions may be represented via power series expansions in the argument bits, $`x_i`$ where $`i=0,1,2`$. That $`x_i^2=x_i`$ for $`x_i\in \{0,1\}`$ implies that for any admissible function, $$f(x_2,x_1,x_0)=\underset{i>j\ge 0}{\overset{2}{\bigoplus }}a_{ij}x_ix_j\oplus \underset{i=0}{\overset{2}{\bigoplus }}a_ix_i\oplus a$$ (1) where addition is modulo 2 and $`a_{ij},a_i,a\in \{0,1\}`$. The disappearance of a cubic term in Eq. (1) is a property of balanced and constant functions. This provides a preliminary decomposition: $$\widehat{U}_f=\underset{i>j\ge 0}{\overset{2}{\prod }}\left(\widehat{U}^{ij}\right)^{a_{ij}}\underset{k=0}{\overset{2}{\prod }}\left(\widehat{U}^k\right)^{a_k}$$ (2) where $`\widehat{U}^{ij}|x\rangle `$ $`:=`$ $`\left(-1\right)^{x_ix_j}|x\rangle \text{, and}`$ (3) $`\widehat{U}^i|x\rangle `$ $`:=`$ $`\left(-1\right)^{x_i}|x\rangle .`$ (4) are the quadratic term gate and linear term gate, respectively. The constant term merely provides an identity operation. Quadratic and linear term gates all commute and can be rearranged at will. In terms of fundamental gates, $`\widehat{U}^i=\widehat{R}_{\widehat{𝐳}}^i(180)`$ where superscripts index the qubit to which the $`180^{\circ }`$ single qubit rotation is applied. Similarly, $`\widehat{U}^{ij}=\widehat{R}_{\widehat{𝐲}}^j(90)\widehat{U}_{\text{CN}}^{ij}\widehat{R}_{\widehat{𝐲}}^j(-90)`$ where $`\widehat{U}_{\text{CN}}^{ij}`$ is a controlled-NOT gate with control $`i`$ and target $`j`$. Thus the algorithm can be implemented classically for functions with no quadratic terms. However, a quadratic term gate can produce entangled states from unentangled states for qubits $`i`$ and $`j`$ via its constituent controlled-NOT gate. There are no classical operations involving magnetization vectors of two spins that produce entanglement. 
Therefore, for $`N=3`$ it is in the quadratic term gates that quantum mechanical features appear. The arrangements of linear and quadratic term gates that constitute $`\widehat{U}_f`$ for any admissible $`f`$ can be classified via similarity under permutations of control register qubits. Equations (2)-(4) imply that this can be accomplished by classifying admissible functions via similarity under argument bit permutations and/or addition of a constant term. Accordingly there are ten classes of balanced functions; a representative of each is provided in Table I. Function evaluation steps for any members of a given class differ only by a permutation of the control register qubits to which they are applied. Thus a realization of the $`N=3`$ Deutsch algorithm is comprehensive if the algorithm is applied to at least one representative from each class of admissible functions. All possible quadratic term gates are required for the classes represented by $`f_9`$ and $`f_{10}`$. Therefore, for $`N=3`$ the algorithm requires two-qubit gates between all pairs of qubits and is suitable for testing quantum information processing with a linear coupling configuration. Currently the most accessible experimental technology for quantum computing is NMR spectroscopy of spin $`\frac{1}{2}`$ nuclei of appropriate molecules in solution . Any molecule containing three distinguishable, coupled spin $`\frac{1}{2}`$ nuclei in an external magnetic field provides the three qubits needed to solve the $`N=3`$ Deutsch problem. To a good approximation the Hamiltonian for a room temperature solution state sample is $`\widehat{H}=\sum _{i=0}^2\frac{\omega _i}{2}\widehat{\sigma }_z^i+\frac{\pi }{2}\sum _{i>j\ge 0}^2J_{ij}\widehat{\sigma }_z^i\widehat{\sigma }_z^j`$, where $`\omega _i`$ are the Zeeman frequencies, $`J_{ij}`$ the scalar coupling constants, and $`\widehat{\sigma }_z^i`$ Pauli operators . Superscripts label the spins and identify them with the corresponding argument bits. 
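The counting claims above — 70 balanced functions for $`N=3`$, the absence of a cubic term in Eq. (1) for every balanced function, and the ten classes under bit permutations and addition of a constant — can all be checked by brute force. A sketch (Python used here purely as illustration; it is not part of the original work):

```python
from itertools import combinations, permutations

N = 3
points = range(2 ** N)

def anf(values):
    """Algebraic normal form coefficients via the GF(2) Moebius transform."""
    a = list(values)
    for i in range(N):
        for x in points:
            if x & (1 << i):
                a[x] ^= a[x ^ (1 << i)]
    return a  # a[m] is the coefficient of the monomial indexed by bitmask m

# the balanced functions, identified with their supports of size 2^(N-1) = 4
balanced = [frozenset(s) for s in combinations(points, 2 ** (N - 1))]

# cubic coefficient a[0b111] vanishes for every balanced function
no_cubic = all(anf([int(x in f) for x in points])[7] == 0 for f in balanced)

def permute_bits(x, p):
    # relabel the argument bits of the 3-bit point x according to p
    return sum(((x >> p[i]) & 1) << i for i in range(N))

def orbit(S):
    """Equivalence class of a support under bit permutations and f -> f XOR 1."""
    images = set()
    for p in permutations(range(N)):
        img = frozenset(permute_bits(x, p) for x in S)
        images.add(img)
        images.add(frozenset(points) - img)  # adding the constant flips the support
    return frozenset(images)

classes = {orbit(S) for S in balanced}
```

With these definitions the three counts follow directly: `len(balanced)` is 70, `no_cubic` is true, and `len(classes)` is 10, in agreement with Table I.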
A literal translation of the algorithm into NMR operations would begin with initialization via a pseudo-pure state preparation scheme . The evolution stage can be implemented by building single qubit and controlled-NOT gates from standard sequences of spin selective RF pulses and periods of evolution under $`\widehat{H}`$ . Measurement can effectively be accomplished via tomography, which requires repeated execution of the initialization and evolution stages, each time varying the readout pulse before acquisition . It is, however, possible to apply the evolution stage directly to a thermal equilibrium initial state and successfully solve the Deutsch problem with an expectation value measurement . Analysis of the system state after the function evaluation step demonstrates this for the $`N=3`$ Deutsch algorithm. Using the product operator formalism, the deviation part of the thermal equilibrium density operator for a weakly coupled homonuclear NMR system is proportional to $`\widehat{\rho }_{th}=\widehat{I}_z^2+\widehat{I}_z^1+\widehat{I}_z^0`$ . The initial rotation with phase $`\varphi =\frac{\pi }{2}`$ transforms this to $`-\widehat{I}_x^2-\widehat{I}_x^1-\widehat{I}_x^0`$. Then $`\widehat{U}_f`$ produces the states listed in Table I. Signal acquisition immediately after application of $`\widehat{U}_f`$ and with no additional readout pulses provides a spectrum, the $`f`$-spectrum, whose form depends on $`f`$. A fiducial spectrum is obtained in the same fashion but with $`\widehat{U}_f`$ replaced by $`\widehat{I}`$; here the system’s pre-acquisition state is $`-\widehat{I}_x^2-\widehat{I}_x^1-\widehat{I}_x^0`$. Comparison with $`\widehat{\rho }_f`$ for admissible functions (see Table I) indicates that for each there is either a $`0`$ or a $`\pi `$ phase difference between each line of the $`f`$-spectrum and its counterpart in the fiducial spectrum. 
More precisely for applicable functions: (i) $`f`$ is constant if and only if the $`f`$-spectrum is identical to the fiducial spectrum and (ii) $`f`$ is balanced if and only if there is a $`\pi `$ phase difference between at least one line of the $`f`$-spectrum and its counterpart in the fiducial spectrum. This criterion requires that each spin is coupled to at least one other spin. If any spin is completely uncoupled then the entire $`f`$-spectra for $`f_9`$ and $`f_{10}`$ disappear; the comparison would be impossible. However, if each spin is coupled to at least one other then for $`N=3`$ at least one is coupled to the other two. This ensures that at least one of the doubly antiphase multiplets for $`f_9`$ and $`f_{10}`$ survives, giving a line in the $`f`$-spectrum with a $`\pi `$ phase difference relative to its fiducial spectrum counterpart. For $`f_7`$ and $`f_8`$ at least one of the antiphase multiplets (for spin 2 or spin 0) must survive; again this provides a line whose phase differs by $`\pi `$. For $`f_4`$, $`f_5`$ and $`f_6`$ the entire spin 0 multiplet in the $`f`$-spectrum displays a $`\pi `$ phase difference relative to its fiducial spectrum counterpart. The same is true for spin 2 in the cases $`f_1`$, $`f_2`$ and $`f_3`$. The fiducial spectrum can be phased so that its constituent lines all appear upright. Thus, the answer to the $`N=3`$ Deutsch problem may be determined by inspecting the $`f`$-spectrum for inverted lines. Each balanced function produces at least one inversion. For constant functions all lines are upright. This provides a solution state NMR scheme for conclusively answering the $`N=3`$ Deutsch problem with just one application of the evolution stage (here equivalent to the unmodified version followed by a $`\widehat{R}_{\widehat{𝐧}}(90)`$ readout) to the thermal equilibrium input state. A saturated solution of $`{}_{}{}^{13}\text{C}`$ labeled alanine in $`\text{D}_2\text{O}`$ provided the qubits. 
We label the carboxyl carbon spin $`2`$, the $`\alpha `$ carbon spin $`1`$, and the methyl carbon spin $`0`$. Protons were decoupled using a standard heteronuclear decoupling technique. Scalar couplings are $`J_{21}=56`$Hz, $`J_{10}=36`$Hz and $`J_{20}=1.3`$Hz. Relaxation times are $`T_1(2)=11.5`$s, $`T_1(1)=1.2`$s, and $`T_1(0)=0.7`$s and $`T_2(2)=1.3`$s, $`T_2(1)=0.41`$s, and $`T_2(0)=0.81`$s where the argument labels the spin. Spin selective rotations were implemented via Gaussian shaped pulses of duration $`0.7`$ms for spins 0 and 1 and $`0.5`$ms for spin 2. No hard pulses were used. Linear term gates can be implemented via spin selective phase shifts on the output spectrum by placing them after the quadratic term gates and immediately prior to acquisition. Thus certain $`f`$-spectra differ by spin selective phase shifts only. These are: (i) $`f_1`$, $`f_2`$ and $`f_3`$, (ii) $`f_4`$, $`f_5`$, and $`f_6`$, (iii) $`f_7`$ and $`f_8`$ and (iv) $`f_9`$ and $`f_{10}`$. The crux of the experiment is in the realization of the quadratic term gates. This is accomplished via the rotation and delay construction of controlled-NOT gates . Selective refocusing sequences effectively eliminate all but one coupling term in $`\widehat{H}`$, thus providing appropriate evolution during the delay. The resulting quadratic term gate simplifies to $$\widehat{U}^{ij}=\left[1/2J_{ij}\right]^{ij}\left[90\right]_z^i\left[90\right]_z^j$$ (5) where $`\left[\theta \right]_n^j`$ indicates a rotation of spin $`j`$ about the axis $`n`$ through angle $`\theta `$ and $`\left[t\right]^{ij}`$ evolution under the scalar coupling between spins $`i`$ and $`j`$ for period $`t`$. For alanine this is satisfactory for $`\widehat{U}^{21}`$ and $`\widehat{U}^{10}`$. However, for $`\widehat{U}^{20}`$ it is inadequate since $`1/2J_{20}=0.42`$s which is comparable to the smallest $`T_2`$. An alternative is to process the information via spin 1 and use only the linear coupling configuration (spin 2 – spin 1 – spin 0). 
An indirect realization is $`\widehat{U}_{\text{CN}}^{20}=\widehat{U}_{\text{SW}}^{01}\widehat{U}_{\text{CN}}^{21}\widehat{U}_{\text{SW}}^{01},`$ where $`\widehat{U}_{\text{SW}}^{01}`$ is the SWAP gate between qubits $`1`$ and $`0`$. After simplification, $`\widehat{U}^{20}`$ $`=`$ $`\left[90\right]_y^1\left[90\right]_y^0\left[1/2J_{10}\right]^{10}\left[90\right]_x^1\left[90\right]_x^0`$ (9) $`\left[1/2J_{10}\right]^{10}\left[90\right]_y^1\left[1/2J_{21}\right]^{21}\left[90\right]_x^1`$ $`\left[1/2J_{10}\right]^{10}\left[90\right]_y^1\left[90\right]_x^0\left[1/2J_{10}\right]^{10}`$ $`\left[90\right]_x^1\left[90\right]_y^0\left[90\right]_z^2\left[90\right]_z^0.`$ This gives a pulse sequence of duration $`0.071`$s (excluding $`\widehat{𝐳}`$ rotations that are equivalent to phase shifts in the output spectrum) that is significantly faster than that using $`\left[1/2J_{20}\right]^{20}`$. During $`\left[1/2J_{10}\right]^{10}`$ and $`\left[1/2J_{21}\right]^{21}`$ evolution periods the selective refocusing scheme effectively removes the scalar coupling between spins 2 and 0. Throughout this implementation of the algorithm information is processed using only the spin 2 – spin 1 – spin 0 linear coupling configuration and not the spin 2 – spin 0 coupling. However, the latter must be taken into account in the interpretation of the output spectra. The experiments were performed at room temperature using a BRUKER 500-DRX spectrometer and an inverse detection probe. For each representative function listed in Table I signal acquisition takes place immediately after implementation of $`\widehat{U}_f`$. Figure 2 provides selected experimental spectra that are phased so that $`\widehat{I}_x^j`$ product operator terms correspond to upright multiplets. The line orientations agree with those predicted from $`\widehat{\rho }_f`$ and provide correct solutions to the Deutsch problem. 
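The operator identity underlying the indirect realization — conjugating $`\widehat{U}_{\text{CN}}^{21}`$ by $`\widehat{U}_{\text{SW}}^{01}`$ to obtain $`\widehat{U}_{\text{CN}}^{20}`$ — can be verified directly with 8×8 matrices. A minimal sketch (the qubit ordering and helper names below are our conventions, not the paper's):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
P0 = np.diag([1., 0.])   # projector |0><0|
P1 = np.diag([0., 1.])   # projector |1><1|

POS = {2: 0, 1: 1, 0: 2}  # tensor-factor position of spins (2, 1, 0)

def kron3(ops):
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(control, target):
    """Controlled-NOT on three qubits labelled 2, 1, 0."""
    on0 = [I2, I2, I2]; on1 = [I2, I2, I2]
    on0[POS[control]] = P0           # control |0>: do nothing
    on1[POS[control]] = P1           # control |1>: flip the target
    on1[POS[target]] = X
    return kron3(on0) + kron3(on1)

# SWAP between qubits 0 and 1 from three CNOTs
SWAP01 = cnot(0, 1) @ cnot(1, 0) @ cnot(0, 1)

# indirect CNOT from spin 2 to spin 0 via the spectator spin 1
U_indirect = SWAP01 @ cnot(2, 1) @ SWAP01
```

`U_indirect` coincides with `cnot(2, 0)`, confirming that the two-qubit gate between the outer spins can be processed entirely through the middle spin of the linear chain.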
An estimate of errors for the most complicated case, $`f_9`$, may be obtained by applying a selective $`90^{\circ }`$ readout pulse about the $`\widehat{𝐱}`$ axis immediately after $`\widehat{U}_f`$. Ideally the readout spin multiplet should remain while the others disappear. The average amplitudes of the residual signals for the latter lie between $`14\%`$ and $`31\%`$ of the average amplitude of the corresponding lines with no readout. The ability to extract the Deutsch problem solution for $`N=3`$ via pure phase information in the output spectrum, in contrast to amplitude information, appears to have mitigated such errors. It is not yet clear how this advantage may be extended beyond $`N=3`$. The number of selective rotations required for $`f_9`$ points to imperfections within selective pulses as one likely source of error. In particular, it must be noted that possible effects of scalar coupling evolution during application of selective rotations were ignored. Indeed, for the indirectly coupled realization of $`\widehat{U}^{20}`$ the total duration of all the selective rotations is comparable to $`1/2J_{21}`$. To the best of our knowledge this issue has not been addressed satisfactorily. Further possible sources of error are inhomogeneities in the RF magnetic fields used for selective rotations. To conclude, we have provided a comprehensive NMR realization of the $`N=3`$ Deutsch-Jozsa algorithm, at the same time demonstrating quantum information processing via a linear configuration of couplings and indirect realizations of two-qubit gates. The use of appropriate SWAP gates allows for the extension of our method to quantum computation with larger numbers of qubits. This work was supported, in part, by the DARPA and the ONR. We would also like to thank Gary Sanders for useful discussion.
no-problem/9910/hep-ph9910321.html
ar5iv
text
# Partial widths for the decays 𝜂⁢(1295)→𝛾⁢𝛾 and 𝜂⁢(1440)→𝛾⁢𝛾 ## Abstract We discuss $`\gamma \gamma `$ partial widths of pseudoscalar/isoscalar mesons $`\eta (M)`$ in the mass region $`M\simeq 1000-1500`$ MeV. The transition amplitudes $`\eta (1295)\to \gamma \gamma `$ and $`\eta (1440)\to \gamma \gamma `$ are studied under the assumption that the decaying mesons are the members of the first radial excitation nonet $`2^1S_0q\overline{q}`$. The calculations show that the partial widths, being of the order of 0.1 keV, are dominantly due to the $`n\overline{n}`$ meson component, while the contribution of the $`s\overline{s}`$ component is small. The two-photon decays of the pseudoscalar mesons provided a great deal of information on the structure of the basic $`1^1S_0q\overline{q}`$ nonet. The value of the partial width $`\pi ^0\to \gamma \gamma `$ gave one of the first pieces of experimental evidence for the colour structure of quarks. The decays $`\eta \to \gamma \gamma `$ and $`\eta ^{}\to \gamma \gamma `$ provide information on the quark/gluon content of these mesons. The partial $`\gamma \gamma `$-widths of pseudoscalar mesons belonging to the basic nonet $`1^1S_0q\overline{q}`$ are relatively large: $`\mathrm{\Gamma }_{\pi ^0\to \gamma \gamma }=7.2\pm 0.5`$ eV, $`\mathrm{\Gamma }_{\eta \to \gamma \gamma }=0.46\pm 0.04`$ keV, $`\mathrm{\Gamma }_{\eta ^{}\to \gamma \gamma }=4.27\pm 0.19`$ keV, thus making it possible to measure not only partial widths but also transition form factors $`\pi ^0\to \gamma ^{}(Q^2)\gamma `$, $`\eta \to \gamma ^{}(Q^2)\gamma `$, $`\eta ^{}\to \gamma ^{}(Q^2)\gamma `$ over a broad range of photon virtualities, $`Q^2\le 20`$ GeV<sup>2</sup> . 
These data made it possible: (i) to restore the wave functions of $`\eta `$ and $`\eta ^{}`$ (for both $`n\overline{n}`$ and $`s\overline{s}`$ components), (ii) to estimate the gluonium admixture in $`\eta `$ and $`\eta ^{}`$, (iii) to restore the vertex function for the transition $`\gamma \to q\overline{q}`$ (or photon wave function) as a function of the $`q\overline{q}`$ invariant mass. The same method as used in for the analysis of the basic pseudoscalar mesons can be applied for a study of $`\gamma \gamma `$ decays of the first radial excitation mesons: in the present paper we discuss these processes. The search for exotics in the pseudoscalar/isoscalar sector ultimately requires the investigation of $`\eta `$-mesons of the $`2^1S_0q\overline{q}`$ nonet: in the framework of this investigation program, here we calculate the partial widths $`\eta (1295)\to \gamma \gamma `$ and $`\eta (1440)\to \gamma \gamma `$ under the assumption that the mesons $`\eta (1295)`$ and $`\eta (1440)`$ are members of the $`2^1S_0q\overline{q}`$ nonet. The state $`\eta (1440)`$ (old name $`E(1407)`$ ) attracts our special attention: it has long been considered a state with a possibly rich gluonic component. Transition form factors and partial widths. A partial width for the decay $`\eta (M)\to \gamma \gamma `$ is determined as $`\mathrm{\Gamma }_{\eta (M)\to \gamma \gamma }=\pi \alpha ^2M^3F_{\eta (M)\to \gamma \gamma }^2(0)/4`$, where $`M`$ is the mass of the $`\eta (M)`$-meson, $`\alpha =1/137`$, and $`F_{\eta (M)\to \gamma \gamma }(0)`$ is the form factor of the considered decay. In Ref. , the form factor $`F_{\eta \to \gamma ^{}\gamma }(Q^2)`$ was calculated for the virtual photon $`\gamma ^{}(Q^2)`$; the decay form factor is given by the limit $`Q^2\to 0`$. 
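The width formula is easy to invert: given a measured width, the decay form factor follows from $`F(0)=\sqrt{4\mathrm{\Gamma }/(\pi \alpha ^2M^3)}`$. A quick numerical sketch using the widths quoted above (the meson masses are standard inputs inserted by us, not values from this paper):

```python
import math

alpha = 1.0 / 137.0

def F_from_width(gamma_GeV, M_GeV):
    """Invert Gamma = pi * alpha^2 * M^3 * F(0)^2 / 4 for F(0), in GeV^-1."""
    return math.sqrt(4.0 * gamma_GeV / (math.pi * alpha**2 * M_GeV**3))

# widths quoted in the text; masses (GeV) are standard values (our insertion)
F_pi  = F_from_width(7.2e-9,  0.135)   # Gamma(pi0 -> gamma gamma) = 7.2 eV
F_eta = F_from_width(0.46e-6, 0.547)   # Gamma(eta -> gamma gamma) = 0.46 keV
F_etp = F_from_width(4.27e-6, 0.958)   # Gamma(eta' -> gamma gamma) = 4.27 keV
```

The extracted values come out of order 0.25-0.35 GeV<sup>-1</sup>, so form factors of this size set the scale against which the 0.1 keV widths of the radially excited states are compared below.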
The decay form factor $`F_{\eta (M)\to \gamma \gamma }(0)`$ reads : $`F_{\eta (M)\to \gamma \gamma }(0)`$ $`=`$ $`{\displaystyle \frac{1}{6\sqrt{3}\pi ^3}}{\displaystyle \int \frac{dx\,d^2k_{\perp }}{x(1-x)^2}\left[\frac{5m}{\sqrt{2}}\mathrm{cos}\varphi \mathrm{\Psi }_{n\overline{n}}(s)\mathrm{\Psi }_{\gamma \to n\overline{n}}(s)+m_s\mathrm{sin}\varphi \mathrm{\Psi }_{s\overline{s}}(s)\mathrm{\Psi }_{\gamma \to s\overline{s}}(s)\right]}.`$ (1) The two terms in the square brackets refer to the $`n\overline{n}`$ and $`s\overline{s}`$ components of the $`\eta (M)`$-meson. The flavour wave function is determined as $`\psi _{\eta (M)}=\mathrm{cos}\varphi \,n\overline{n}+\mathrm{sin}\varphi \,s\overline{s}`$, where $`\varphi `$ is the mixing angle and $`n\overline{n}=(u\overline{u}+d\overline{d})/\sqrt{2}`$; $`m`$ and $`m_s`$ are the masses of the non-strange and strange constituent quarks. The wave functions for the $`n\overline{n}`$ and $`s\overline{s}`$ components are written as $`\mathrm{\Psi }_{n\overline{n}}(s)`$ and $`\mathrm{\Psi }_{s\overline{s}}(s)`$, where $`s`$ is the $`q\overline{q}`$ invariant mass squared. In terms of the light-cone variables $`(x,\stackrel{}{k}_{\perp })`$, the $`q\overline{q}`$ invariant mass reads $`s=(m^2+k_{\perp }^2)/x(1-x)`$. The photon wave function $`\mathrm{\Psi }_{\gamma \to q\overline{q}}(s)`$ was found in Ref. : it is shown in Fig. 1a. Wave functions of $`\eta (M)`$-mesons. We approximate wave functions of the $`\eta (M)`$-mesons in the one-parameter exponential form. For the basic multiplet and the first radial excitation nonet, the wave functions are determined as follows: $$\mathrm{\Psi }_\eta ^{(0)}(s)=Ce^{-bs},\mathrm{\Psi }_\eta ^{(1)}(s)=C_1(D_1s-1)e^{-b_1s}.$$ (2) The parameters $`b`$ and $`b_1`$ are related to the radii squared of the corresponding $`\eta (M)`$-meson. 
Then the other constants ($`C`$, $`C_1`$, $`D_1`$) are fixed by the normalization and orthogonality conditions: $$\mathrm{\Psi }_\eta ^{(0)}\mathrm{\Psi }_\eta ^{(0)}=1,\mathrm{\Psi }_\eta ^{(1)}\mathrm{\Psi }_\eta ^{(1)}=1,\mathrm{\Psi }_\eta ^{(0)}\mathrm{\Psi }_\eta ^{(1)}=0.$$ (3) The convolution of the $`\eta `$-meson wave function at $`q_{\perp }\to 0`$ determines the form factor of the $`\eta `$-meson, $`f_\eta ^{(n)}(q_{\perp }^2)=\left[\mathrm{\Psi }_\eta ^{(n)}\mathrm{\Psi }_\eta ^{(n)}\right]_{q_{\perp }\to 0}`$, thus allowing us to relate the parameter $`b`$ (or $`b_1`$) at small $`q_{\perp }^2`$ to the $`\eta `$-meson radius squared: $`f_\eta (q_{\perp }^2)\simeq 1-\frac{1}{6}R_\eta ^2q_{\perp }^2`$. The $`\eta `$-meson form factor reads : $`f_\eta (q_{\perp }^2)`$ $`=`$ $`{\displaystyle \frac{1}{16\pi ^3}}{\displaystyle \int \frac{dx\,d^2k_{\perp }}{x(1-x)^2}\mathrm{\Psi }_\eta ^{(n)}(s)\mathrm{\Psi }_\eta ^{(n)}(s^{})\left[\alpha (s+s^{}-q^2)+q^2\right]},`$ (4) $`\alpha `$ $`=`$ $`{\displaystyle \frac{s+s^{}-q^2}{2(s+s^{})-\frac{(s^{}-s)^2}{q^2}-q^2}}`$ (5) where $`s^{}=(m^2+(\stackrel{}{k}_{\perp }-x\stackrel{}{q}_{\perp })^2)/x(1-x)`$. When working with a simple one-parameter wave function representation of Eqs. (2) and (3), it is instructive to compare the results with those obtained using a more precise wave function parametrization; such a comparison can be done for the basic $`1^1S_0q\overline{q}`$ nonet. The $`\eta `$ and $`\eta ^{}`$ wave functions (or those for their $`n\overline{n}`$ and $`s\overline{s}`$ components) were found in Ref. , based on the data for the transitions $`\eta \to \gamma \gamma ^{}(Q^2)`$, $`\eta ^{}\to \gamma \gamma ^{}(Q^2)`$ at $`Q^2\le 20`$ GeV<sup>2</sup>. The calculated decay form factors $`F_{n\overline{n}\to \gamma \gamma }^{(0)}(k^2)`$ and $`F_{s\overline{s}\to \gamma \gamma }^{(0)}(k^2)`$ for these wave functions are marked in Fig. 1b by rhombuses. The wave functions of Ref. 
give the following mean radii squared for the $`n\overline{n}`$ and $`s\overline{s}`$ components: $`R_{n\overline{n}}^2=13.1`$ GeV<sup>-2</sup> and $`R_{s\overline{s}}^2=11.7`$ GeV<sup>-2</sup>; in Fig. 1b we have drawn rhombuses for these values of the radii. Solid curves in Fig. 1b represent $`F_{n\overline{n}\to \gamma \gamma }^{(0)}(0)`$ and $`F_{s\overline{s}\to \gamma \gamma }^{(0)}(0)`$ calculated by using the simple exponential parametrization (2): we see that both calculations coincide with each other within reasonable accuracy. The coincidence of the results justifies the exponential approximation for the calculation of transition form factors at $`q_{\perp }^2\to 0`$. Results. Figure 2a demonstrates the calculation results for the transition form factors $`n\overline{n}\to \gamma \gamma `$ and $`s\overline{s}\to \gamma \gamma `$ when these components refer to $`\eta `$-mesons of the first radial excitation multiplet. The form factor for the $`n\overline{n}`$ component, $`F_{n\overline{n}\to \gamma \gamma }^{(1)}(0)`$, depends strongly on the mean radius squared, increasing rapidly in the region $`R_{n\overline{n}}^2\simeq 14-24`$ GeV<sup>-2</sup> (0.7-1.2 fm<sup>2</sup>). As for the $`s\overline{s}`$ component, the form factor $`F_{s\overline{s}\to \gamma \gamma }^{(1)}(0)`$ is small; it changes sign at $`R_{s\overline{s}}^2\simeq 15`$ GeV<sup>-2</sup>. Therefore, one can neglect the contribution of the $`s\overline{s}`$ component to the $`\gamma \gamma `$ decay. Then $$\mathrm{\Gamma }_{\eta (M)\to \gamma \gamma }\simeq \frac{\pi }{4}\alpha ^2M^3\mathrm{cos}^2\varphi F_{n\overline{n}\to \gamma \gamma }^{(1)\mathrm{\hspace{0.17em}2}}(0)=\mathrm{cos}^2\varphi \mathrm{\Gamma }_{n\overline{n}\to \gamma \gamma }^{(1)},$$ (6) where $`\mathrm{cos}^2\varphi `$ is the probability for the $`n\overline{n}`$ component in the $`\eta (M)`$ meson. The calculated values $`\mathrm{\Gamma }_{n\overline{n}\to \gamma \gamma }^{(1)}`$ for $`\eta (1295)`$ and $`\eta (1440)`$ are shown in Fig. 2b as functions of $`R_{n\overline{n}}^2`$. 
A significant difference of the widths $`\mathrm{\Gamma }_{n\overline{n}\to \gamma \gamma }^{(1)}`$ for $`\eta (1295)`$ and $`\eta (1440)`$ is due to the strong dependence of the partial width on the $`\eta `$-meson mass, $`\mathrm{\Gamma }_{\eta (M)\to \gamma \gamma }\propto M^3`$. Conclusion. We calculate the $`\gamma \gamma `$ partial widths for $`\eta (1295)`$ and $`\eta (1440)`$ supposing these mesons to be members of the first radial excitation nonet $`2^1S_0q\overline{q}`$. The calculation technique is based on that developed in for the transition of mesons from the basic nonet $`1^1S_0q\overline{q}`$ into $`\gamma ^{}(Q^2)\gamma `$. The $`\gamma \gamma `$ partial widths of $`\eta (1295)`$ and $`\eta (1440)`$ are mainly determined by the flavour component $`n\overline{n}=(u\overline{u}+d\overline{d})/\sqrt{2}`$, so $`\mathrm{\Gamma }_{\eta (1295)\to \gamma \gamma }+\mathrm{\Gamma }_{\eta (1440)\to \gamma \gamma }\simeq \mathrm{\Gamma }_{n\overline{n}}^{(1)}`$. The partial widths strongly depend on the meson radii squared: $`\mathrm{\Gamma }_{\eta (1295)\to \gamma \gamma }+\mathrm{\Gamma }_{\eta (1440)\to \gamma \gamma }\simeq 0.04`$ keV at $`R_{\eta (X)}^2/R_\pi ^2\simeq 1.5`$ and $`\mathrm{\Gamma }_{\eta (1295)\to \gamma \gamma }+\mathrm{\Gamma }_{\eta (1440)\to \gamma \gamma }\simeq 0.2`$ keV at $`R_{\eta (X)}^2/R_\pi ^2\simeq 2`$. The paper was partly supported by the RFBR grant 98-02-17236.
no-problem/9910/hep-th9910232.html
ar5iv
text
# Making the gravitational path integral more Lorentzian or Life beyond Liouville gravity ## 1 Motivation The ultimate aim of the work described below is to learn more about four-dimensional quantum gravity by relating non-perturbative canonical and covariant approaches, which so far have not been successful separately. By ‘covariant’ we do not mean semi-classical gravitational path integrals, but genuine “sums over all metrics”, which usually involve a discretization of space-time. A prototype of this ansatz is quantum Regge calculus. With the help of numerical simulations, one tries to find a non-trivial fixed point and an associated continuum theory of quantum gravity. A great deal of numerical expertise has been gathered in the approach of dynamical triangulations, a recent variant of the Regge method. Unfortunately, all investigations so far have concentrated on path integrals for unphysical space-time metrics of Euclidean signature. Unlike for some fixed background metrics, there is no prescription of how to “Wick-rotate” a general Euclidean metric to Lorentzian signature. On the other hand, a lot of progress has been made in the last ten years in an analytic formulation of canonical quantum gravity based on a reformulation in terms of gauge-theoretic variables, called “loop quantum gravity”. Although a priori based in the continuum, the quantum theory has a number of discrete features reminiscent of a generally covariant version of a lattice gauge field theory. However, in this approach some basic obstacles remain in defining a satisfactory quantum Hamiltonian evolution, and efficient numerical methods have not yet been developed. It is tempting to try to combine the positive aspects of both approaches, but one soon realizes that in order to relate the two, a number of technical and conceptual difficulties have to be overcome. 
To narrow this gap, we want to define a Lorentzian path integral where individual regularized space-time geometries in the sum are required to be causal, reflected in a local “light-cone structure” and the absence of closed time-like curves. It should be appreciated that it is relatively easy to write down Feynman sums of amplitudes $$\underset{\mathrm{causal}\mathrm{geometries}\{I\}}{\sum }e^{iS^{\mathrm{Einstein}}(I)},$$ (1) but that it is very hard to construct concrete models with a suitable regularization, such that the sum can be performed and leads to a non-trivial continuum theory. ## 2 An ideal testing ground: d=2 The difficulties associated with defining the sum (1) can be overcome, at least in dimension $`d=2`$. There exists already a rigorous discretized path integral for Euclidean geometries, obtained by the method of dynamical triangulations, where the path-integral sum is performed over all possible triangulations $`T`$ (i.e. gluings of equilateral triangles). The 2d gravity action for fixed space-time topology reduces to the cosmological-constant term $$S=\lambda \int d^2x\sqrt{|\det g|},$$ (2) for both Euclidean and Lorentzian metrics $`g_{\mu \nu }`$. After the discretization, this term becomes proportional to $`\lambda N(T)`$, with $`N(T)`$ counting the number of triangles contained in $`T`$. The Euclidean state sum is given by $$Z^{\mathrm{eu}}(\lambda )=\underset{N}{\sum }e^{-\lambda N}Z^{\mathrm{eu}}(N)=\underset{N}{\sum }e^{-\lambda N}\underset{T^{(N)}}{\sum }1.$$ (3) With the help of ingenious combinatorial methods the counting of all triangulations $`T^{(N)}`$ of volume $`N`$ in the sum on the right can be done explicitly. Moreover, there is good evidence that the method is diffeomorphism-invariant, since it reproduces the results of continuum Liouville gravity in the continuum limit. How can this framework be adapted to the Lorentzian situation? 
We have substituted the fundamental equilateral building blocks (with squared edge lengths $`a^2=1`$) by triangles with two time-like edges with $`a^2=-1`$ and one space-like edge with $`a^2=1`$ . To obtain allowed histories, these must be glued causally: consecutive spatial slices (consisting entirely of space-like edges) of variable length $`l`$ are connected by sets of time-like edges. For simplicity, these slices are compactified to circles $`S^1`$. A typical triangulated 2d Lorentzian geometry of $`t`$ time-steps ($`t`$ pointing up) is depicted in Fig.1. Note that the local geometric degrees of freedom (apart from the edge lengths) are encoded in the variable coordination numbers of edges meeting at vertices, giving a direct measure of curvature. It turns out that also in this discrete Lorentzian model, the combinatorics can be solved explicitly and yields the Lorentzian analogue $`Z^{\mathrm{lor}}(\lambda )`$ of (3). The partition function exhibits critical behaviour as $`\lambda \to \lambda _{\mathrm{crit}}`$, where a continuum limit can be taken. After appropriate renormalization, one obtains a new quantum gravity theory inequivalent to Liouville gravity. It is rather surprising that there is a second universality class of models describing fluctuating two-geometries! The central dynamical quantity of the theory is the continuum propagator $`G_\mathrm{\Lambda }(L_1,L_2;T)`$. It describes the transition from an initial spatial geometry of length $`L_1`$ to a final one of length $`L_2`$ in proper time $`T`$ and takes the form $`G_\mathrm{\Lambda }(L_1,L_2;T)=e^{-\mathrm{coth}(\sqrt{i\mathrm{\Lambda }}T)\sqrt{i\mathrm{\Lambda }}(L_1+L_2)}`$ $`\times {\displaystyle \frac{\sqrt{i\mathrm{\Lambda }L_1L_2}}{\mathrm{sinh}(\sqrt{i\mathrm{\Lambda }}T)}}I_1\left({\displaystyle \frac{2\sqrt{i\mathrm{\Lambda }L_1L_2}}{\mathrm{sinh}(\sqrt{i\mathrm{\Lambda }}T)}}\right),`$ (4) where $`I_1`$ denotes the modified Bessel function. 
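The continuum propagator is straightforward to evaluate numerically; the sketch below is our illustration, assuming the decaying-exponential convention $`e^{-\mathrm{coth}(\cdot )(\mathrm{})}`$ (the sign needed for damping at large lengths) and the principal branch of $`\sqrt{i\mathrm{\Lambda }}`$:

```python
import numpy as np
from scipy.special import iv  # modified Bessel I_nu, accepts complex argument

def cdt_propagator(L1, L2, T, Lam):
    """Continuum Lorentzian propagator G_Lambda(L1, L2; T) of Eq. (4)."""
    q = np.sqrt(1j * Lam)                       # sqrt(i * Lambda), principal branch
    sh = np.sinh(q * T)
    damp = np.exp(-(q / np.tanh(q * T)) * (L1 + L2))   # coth = 1/tanh
    arg = 2.0 * q * np.sqrt(L1 * L2) / sh
    return damp * (q * np.sqrt(L1 * L2) / sh) * iv(1, arg)
```

As a sanity check, the result is finite for generic arguments and manifestly symmetric under exchange of the initial and final lengths, as the formula requires.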
In order to illustrate our claim that the Lorentzian quantum gravity theory differs from Liouville gravity, let us look at the behaviour of a simple observable. A good example is the so-called Hausdorff dimension $`d_H`$, which contains information about the bulk properties of the quantum geometry in the ground state of the theory. It is measured by looking at the volume $`V\propto r^{d_H}`$ of geodesic balls (discs in dimension 2) of radius $`r`$. Liouville gravity has a fractal Hausdorff dimension $`d_H=4`$. This may be surprising at first, but has to do with the fact that the dominant contributions to the path integral are highly branched geometries, with many “baby universes”. By contrast, in the Lorentzian theory we have $`d_H=2`$, which is the “canonical” dimension expected from naïve semi-classical considerations. The difference arises because there are no baby universes in Lorentzian gravity. At a point where a baby universe branches off, the Lorentzian metric structure must inevitably go bad, thereby violating causality. This also implies that in Lorentzian gravity the topology of the spatial slices cannot change. Note that this is exactly the situation described by canonical approaches to gravity. ## 3 Coupling matter to Lorentzian gravity The discussion of the previous section suggests that the geometry of Lorentzian quantum gravity is “better” behaved than its Euclidean counterpart. This is also illustrated by Fig.1 (taken from a Monte Carlo simulation of pure Lorentzian gravity). In spite of strong fluctuations $`\mathrm{\Delta }l`$ of the length of spatial slices, the geometry is still effectively two-dimensional. The geometry of the Lorentzian model therefore lies somewhere in between the wildly fluctuating and fractal quantum geometry of the Liouville model and that of a fixed classical two-dimensional space-time. It is an interesting question how matter will behave under coupling to the Lorentzian model. 
To investigate this issue, we have considered a model of Ising spins with nearest-neighbour interaction. Coupling this to Euclidean dynamical triangulations yields an exactly soluble model of Euclidean gravity plus matter. Its matter behaviour is governed by the critical exponents $$\alpha =-1,\beta =0.5,\gamma =2,$$ (5) characterizing the singularity structure of the specific heat, the spontaneous magnetization, and the magnetic susceptibility as functions of the bare Ising coupling constant $`\beta _I`$. This should be contrasted with the Onsager values of these exponents found on fixed, flat lattices, which are given by $$\alpha =0,\beta =0.125,\gamma =1.75.$$ (6) The partition function for Lorentzian gravity coupled to Ising spins $`\sigma _i=\pm 1`$ is the sum $$Z(\lambda ,\beta _I)=\underset{N}{\sum }e^{-\lambda N}\underset{T^{(N)}}{\sum }Z_{T^{(N)}}(\beta _I),$$ (7) where the partition function $`Z_T(\beta _I)`$ of the Ising model on the Lorentzian triangulation $`T`$ is $$Z_T(\beta _I)=\underset{\{\sigma _i\}}{\sum }e^{\beta _I/2\underset{\langle ij\rangle }{\sum }\sigma _i\sigma _j}.$$ (8) We have investigated this model by means of a high-T (that is, small inverse temperature $`\beta _I`$) expansion and by Monte Carlo simulations . An exact solution has not yet been constructed. Note that eq. (7) describes the Euclidean sector of Lorentzian gravity plus matter, i.e. with real weights and therefore Euclidean values for the coupling constants. This is the form suitable for numerical simulations. What we have found is that both methods agree with good precision in their estimates of the critical matter exponents, which turn out to be the Onsager exponents. The Hausdorff dimension of the geometry is unaltered, $`d_H=2`$, and the typical Monte-Carlo-generated geometries look qualitatively similar to the ones in pure gravity. 
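For intuition, the inner sum $`Z_T(\beta _I)`$ of Eq. (8) can be evaluated exactly on a toy graph by brute-force enumeration. The sketch below places spins on the vertices of two glued triangles — an illustrative stand-in chosen by us, not one of the paper's triangulations:

```python
import math
from itertools import product

def ising_Z(edges, n_sites, beta):
    """Exact Eq. (8): sum over all spin configurations of exp(beta/2 * sum_<ij> s_i s_j)."""
    Z = 0.0
    for spins in product((-1, 1), repeat=n_sites):
        energy = sum(spins[i] * spins[j] for i, j in edges)
        Z += math.exp(0.5 * beta * energy)
    return Z

# two triangles glued along the edge (1, 2): 4 sites, 5 nearest-neighbour pairs
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

At $`\beta _I=0`$ the sum simply counts the $`2^4=16`$ spin configurations, and it grows with $`\beta _I`$ as aligned configurations become favoured; in the full model this exact sum is replaced by the Monte Carlo estimates described in the text.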
There are effects of the gravity-matter coupling at the discretized level, for example, on the distribution of coordination numbers, but we have not investigated whether this is reflected in a change of universal properties of the geometry that would survive in the continuum limit. ## 4 Coupling more matter The previous picture is changed drastically when several Ising models instead of one are coupled to Lorentzian gravity. For the case of $`n`$ Ising models, the partition function (7) is replaced by $$Z(\lambda ,\beta _I)=\underset{N}{}e^{-\lambda N}\underset{T^{(N)}}{}Z_{T^{(N)}}^n(\beta _I).$$ (9) At the critical point, this model describes a conformal field theory with central charge $`c=n/2`$ coupled to gravity. Our motivation for coupling more matter is the fact that Euclidean 2d gravity becomes inconsistent for $`n>2`$, that is, beyond the so-called $`c=1`$ barrier. In the presence of Ising spins it is energetically favourable to have short boundaries between regions of opposite spins. In a theory of fluctuating geometry the effect of the spins is to try to “squeeze off” parts of the space-time manifold. In Euclidean gravity, where the geometry is very branched to start with, this mechanism seems to be so effective that for $`n>2`$ the theory ceases to make sense. In order to get a clear picture of what goes on “well beyond the $`c=1`$ barrier” in Lorentzian gravity, we have investigated its properties at $`n=8`$ by numerical simulations . One observes a very strong interaction of gravity and matter, to the extent that the geometry is now in a different phase from before: time and space directions acquire an anomalous relative scaling and the Hausdorff dimension is changed to $`d_H=3`$! This is illustrated by Fig.2. The effect of the matter on the geometry is reflected in the presence of the long, stalk-like part of the space-time, which is effectively one-dimensional.
All interesting physics (that survives the limit as $`N\to \mathrm{\infty }`$) happens in the extended bulk phase. However, in spite of these drastic changes in the geometrical properties, we have found that the critical matter exponents retain their Onsager values! ## 5 Conclusions There are a number of lessons to be learned from this two-dimensional model of quantum gravity. The choice of Lorentzian over Euclidean, which in our case consisted in the imposition of a causality condition on individual path-integral histories, made a big difference. In two dimensions, it led us to the discovery of a new universality class of quantum gravity models, besides that of Liouville gravity. In Lorentzian gravity, the quantum geometry is much smoother, and better behaved in the sense that one can cross the infamous $`c=1`$ barrier without any problems. Conversely, the coupled model with eight Ising models illustrated that the matter behaviour is rather robust: the geometry can undergo drastic changes without the critical matter behaviour being affected. From this we also learn that Onsager exponents by no means imply that the underlying space-time is flat. The difference between the Euclidean and Lorentzian theories can be traced entirely to the presence of branchings or baby universes . Since this is a purely kinematical effect which has to do with an a priori restriction on the sum-over-geometries, it will be present in higher dimensions as well. To date, the problem with dynamically triangulated path integrals for Euclidean geometries in $`d>2`$ has been the dominance of highly degenerate geometries, including a proliferation of baby universes. Our hope is that also in these cases a causality requirement will lead to an effective “smoothing out” of the quantum geometry. An investigation of the case $`d=3`$ is under way.
# Theory of Elastic Vector Meson Production ## 1 Introduction Why are we interested in elastic vector meson production? First of all the process $`\gamma ^{}pVp`$ provides us with well distinguishable experimental signals in a wide range of the $`\gamma ^{}p`$ c.m. energy $`W`$, the virtuality of the photon $`Q^2`$, and the mass of the vector meson $`M_V`$. A substantial amount of data is already available for $`V=\rho ,\varphi `$ and $`J/\mathrm{\Psi }`$, and even for the heavy $`\mathrm{{\rm Y}}`$ first measurements were published recently.<sup>2</sup><sup>2</sup>2For the discussion of experimental results on the production of light and heavy quarkonia see AProskuryakov ; CKiesling ; LLindemann and references therein. In the future the range in $`Q^2`$ and $`W`$ and the precision of the data will increase. This enables us to study vector meson production in detail in the very interesting regime where the transition from soft to hard QCD dynamics is expected (and already seen) to take place. In addition, there is hope to make use of the high sensitivity of this process to the gluon distribution $`xg(x,\overline{Q}^2)`$ in the proton to constrain this quantity at small values of $`x`$ better than through other processes. In the following we first sketch the basic picture of elastic vector meson production. In Section 2 we briefly discuss different theoretical models which are not based on the two gluon exchange picture, which is then introduced in Section 3. There, starting from the basic leading order result known for a long time, we develop corrections which improve the leading order formula. In Section 4 recent issues in pQCD calculations such as off-diagonal parton distributions, the influence of the vector meson wave function and an alternative approach using parton-hadron duality are discussed.
We mainly concentrate on diffractive $`\rho `$ meson electroproduction, but the presented perturbative model is also successful in the case of $`J/\mathrm{\Psi }`$ and $`\mathrm{{\rm Y}}`$. Section 5 contains our conclusions and outlook. ### 1.1 The basic picture In Fig. 1 the basic picture for the process $`\gamma ^{}pVp`$ is shown: first the photon with virtuality $`Q^2=-q^2`$ fluctuates into a quark-antiquark pair. This $`q\overline{q}`$ fluctuation then interacts elastically with the proton $`p`$, where the zig-zag line represents the (for the moment unspecified) elastic interaction with the proton. The $`\gamma ^{}p`$ centre-of-mass energy is denoted by $`W`$, $$W^2=(q+p)^2,$$ (1) whereas $$t=(p-p^{})^2$$ (2) is the four-momentum transfer squared. (In the following we will mainly restrict ourselves to the case of small $`t`$.) The shaded blob at the right stands for the formation of the vector meson $`V`$, which, to leading order, has to form from the $`q\overline{q}`$ pair with invariant mass squared $`M_V^2`$. It is important to note that at high energy $`W`$ corresponding to small values of $`x`$, $$x=\frac{Q^2+M_V^2}{Q^2+W^2},$$ (3) the timescales involved in the problem are very different<sup>3</sup><sup>3</sup>3This definition for $`x`$, which is often called $`\xi `$ or $`x_{IP}`$, is common in diffractive physics and should not be confused with the ordinary Bjorken-$`x`$, $`x_{\mathrm{Bj}}=Q^2/(Q^2+W^2)`$.: the typical lifetime of the $`\gamma ^{}q\overline{q}`$ fluctuation as well as the time for the formation of the vector meson $`V`$ is much longer than the duration of the interaction with the proton, i.e. $`\tau _{\gamma ^{}q\overline{q}},\tau _{q\overline{q}V}\gg \tau _i`$. Therefore the basic amplitude factorizes, as sketched already in Fig.
1, into the $`q\overline{q}`$ fluctuation, the interaction amplitude $`A_{q\overline{q}+p}`$ and the wave function of the vector meson $`V`$, $$A(\gamma ^{}pVp)=\psi _{q\overline{q}}^\gamma \otimes A_{q\overline{q}+p}\otimes \psi _{q\overline{q}}^V,$$ (4) and the process becomes calculable within various models.<sup>4</sup><sup>4</sup>4For a more detailed discussion of the ordering of the timescales see e.g. LMRT . Formally it has been shown that for $`Q^2`$ larger than all other mass scales in the process there is factorization into a hard scattering subprocess, non-perturbative (and off-diagonal, as will be discussed later) parton distributions and the meson wave function.fac This strict proof of factorization holds for longitudinally polarized photons, whereas meson production through transversely polarized photons is shown to be suppressed by a power of $`Q`$. Let us now turn to the discussion of different models for the $`\gamma ^{}p`$ interaction. ## 2 Some non-perturbative models The following short section is far from being a review of this rich field, but is intended to give a hint at some non-perturbative models, which contrast the perturbative description of diffractive scattering, which is the main subject of this article. $``$ We will not cover approaches based on vector meson dominance (see e.g. Schildknechtetal ). $``$ For Regge-phenomenology-based models of (one or two) Pomeron exchange we refer the reader to Peter . $``$ *The model of the stochastic QCD vacuum* Dosch, Gusset, Kulzinger and Pirner DGKP have developed a model of the interaction with the proton, which is similar to the semi-classical model of Buchmüller discussed in Buchmuller in the context of inclusive diffractive DIS. This model, originally used for hadron-hadron scattering, leads to linear confinement and predicts a dependence of the high-energy scattering on the hadron size. It gives a unified description of low energy and soft high-energy scattering phenomena. Dosch et al.
approximate the slowly varying infrared modes of the gluon field of the proton by a stochastic process. Via a path integral method they average over all possible field configurations. For the splitting of the photon into the $`q\overline{q}`$ pair and for the description of the vector meson they use light cone wavefunctions. Within their model they are able to calculate the $`Q^2`$ dependence of the cross section, as well as the dependence on the momentum transfer $`t`$, $`\mathrm{d}\sigma /\mathrm{d}t`$, and the ratio of the longitudinal to the transverse cross section, $`L/T`$, where longitudinal and transverse refer to the polarization of the photon. Their results are in fair agreement with experimental data. There is no prediction for the $`W`$ dependence of the cross section. $``$ Rueter has extended the model of Dosch et al. to also describe the $`W`$ dependence of the cross section.Rueter He achieves this by using a phenomenological model based on the exchange of one soft and one hard Pomeron, each being a simple pole in the complex angular momentum plane, similar to the Donnachie-Landshoff model Peter . For the very hard components of the photon fluctuations he treats the interaction perturbatively and achieves a good description of the experimentally observed transition from the soft to the hard regime. ## 3 The two gluon exchange model To leading order in QCD the zig-zag line in Fig. 1, which stands for the elastic scattering via the exchange of a colourless object with the quantum numbers of the vacuum, can be described by two gluons. If the scale governing the (transverse) size of the photon fluctuation is large compared to the typical scale of non-perturbative strong interactions, i.e. if $$Q^2\gg \mathrm{\Lambda }_{\mathrm{QCD}}^2\mathrm{or}M_V^2\gg \mathrm{\Lambda }_{\mathrm{QCD}}^2,$$ (5) then the coupling of the two gluons to the $`q\overline{q}`$ fluctuation can be treated reliably within perturbative QCD (pQCD).
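As a quick orientation on the kinematics underlying this perturbative picture, the variable $`x`$ of Eq. (3) is easily evaluated; the numbers below are illustrative HERA-like values chosen by us, not data points:

```python
def diffractive_x(q2, m_v2, w2):
    """x of Eq. (3), often called xi or x_IP: (Q^2 + M_V^2) / (Q^2 + W^2).
    All arguments in GeV^2."""
    return (q2 + m_v2) / (q2 + w2)

# J/psi photoproduction-like kinematics: Q^2 ~ 0, M_V = 3.097 GeV, W = 90 GeV.
x = diffractive_x(0.0, 3.097 ** 2, 90.0 ** 2)
print(x)  # ~ 1.2e-3, comfortably in the small-x regime

# Note that the ordinary Bjorken x, Q^2 / (Q^2 + W^2), vanishes here,
# while the diffractive x does not, since the meson mass supplies the scale.
```

This makes explicit why even photoproduction of a heavy vector meson probes the gluon at small but nonzero $`x`$.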
Another kinematic regime, where pQCD is applicable, is high-$`t`$ diffraction. There the hard scale which is needed to ensure the validity of the perturbative treatment is given by the large value of the momentum transfer $`t`$, and one expects high-$`t`$ diffraction to be a good place to search for the perturbative Pomeron.Forshaw It has been shown some time ago that due to the factorization property of the process the coupling of the two gluons to the proton can, in the leading logarithmic approximation, be identified with the ordinary (diagonal) gluon distribution in the proton.Misha1 ; Brodskyetal ; Bartelsetal We will come back to this point later when discussing the importance of off-diagonal gluon distributions. ### 3.1 The basic formula The basic leading order formula for diffractive vector meson production is given by Misha1 ; Brodskyetal $$\frac{\mathrm{d}\sigma }{\mathrm{d}t}\left(\gamma ^{}pVp\right)|_{t=0}=\frac{\mathrm{\Gamma }_{ee}^VM_V^3\pi ^3}{48\alpha }\frac{\alpha _s(\overline{Q}^2)^2}{\overline{Q}^8}\left[xg(x,\overline{Q}^2)\right]^2\left(1+\frac{Q^2}{M_V^2}\right),$$ (6) where $`\alpha `$ is the electromagnetic coupling and the gluon distribution is sampled at the effective scale $$\overline{Q}^2=\left(Q^2+M_V^2\right)/4.$$ (7) In Eq. (6) the non-relativistic approximation for the vector meson wave function is used and the coupling of the vector meson to the photon is encoded in the electronic width $`\mathrm{\Gamma }_{ee}^V`$. Note that Eq. (6) is valid for $`t=0`$. In the approach discussed in the following there is no prediction for the $`t`$ dependence of the cross section, which is assumed to be of the exponential form $`\mathrm{exp}(-b|t|)`$ with an experimentally measured slope-parameter $`b`$, which may depend on the vector meson $`V`$ and on $`Q^2`$. On the other hand, Eq.
(6) makes predictions for both the $`Q^2`$ and the $`W`$ dependence of the cross section for longitudinally and transversely polarized photons for all sorts of vector mesons, as long as either $`Q^2`$ or $`M_V^2`$ is large enough to act as the hard scale. It is obvious that the $`W`$ dependence comes entirely from the gluon distribution $`xg(x,\overline{Q}^2)`$, which enters quadratically in the cross section. ### 3.2 Improvements beyond the leading order In the following we will discuss several improvements of the leading order formula.<sup>5</sup><sup>5</sup>5For more detailed discussions see e.g. RRML ; FKS or the recent review MW . $``$ Eq. (6) contains only the leading imaginary part of the positive-signature amplitude $$A\sim i\left(x^{-\lambda }+(-x)^{-\lambda }\right).$$ (8) The real part of the amplitude can be restored using dispersion relations: $$\mathrm{Re}A=\mathrm{tan}(\pi \lambda /2)\mathrm{Im}A,$$ (9) where $`\lambda `$ is given by the logarithmic derivative $$\lambda =\frac{\partial \mathrm{log}A}{\partial \mathrm{log}(1/x)}.$$ (10) For the case of $`\rho `$ production, the contributions from the real part are roughly 15%. For $`J/\mathrm{\Psi }`$ production in the HERA regime they amount to approximately 20% and are even bigger for $`\mathrm{{\rm Y}}`$ production MRT3 ; FMS , where larger values of $`x`$ are probed. $``$ In Fig. 2 one of four leading order diagrams<sup>6</sup><sup>6</sup>6There are three similar diagrams: one where both gluons couple to the antiquark and two where one gluon is attached to the quark, whereas the other couples to the antiquark. for the two gluon exchange model is shown with some kinematic variables which will be used below. In the general case the two gluons $`g_1`$, $`g_2`$ have different $`x`$, $`x^{}`$ and transverse momenta $`\mathrm{}_T`$, $`\mathrm{}_T^{}`$. The leading logarithmic approximation of the $`\mathrm{}_T^2`$ loop integral (indicated by the circle in Fig.
2) leads to the identification with the integrated gluon distribution $`xg(x,\overline{Q}^2)`$ at the effective scale $`\overline{Q}^2`$ defined in Eq. (7). Beyond leading logarithmic accuracy one has to perform the $`\mathrm{}_T^2`$ integral over the unintegrated gluon distribution $`f(x,\mathrm{}_T^2)`$. This can lead to numerical results which are, depending on the kinematical regime, twice as big as the result from Eq. (6).RRML ; LMRT Although here we are considering elastic production at small momentum transfer $`t`$, the timelike vector meson with mass $`M_V`$ has to be produced from the spacelike (or real) photon with virtuality $`Q^2`$. This means that even if there is no transverse momentum transfer, $`\mathrm{}_T=\mathrm{}_T^{}`$, there has to be a difference $`x-x^{}=\left(M_V^2+Q^2\right)/\left(W^2+Q^2\right)`$ in the longitudinal momentum of the two gluons $`g_1`$ and $`g_2`$. Therefore the identification with the ordinary diagonal gluon distribution $`xg(x,\overline{Q}^2)`$ is only a good approximation for very small values of $`x`$ and $`t`$, and in the general case the process $`\gamma ^{}pVp`$ depends on off-diagonal parton distributions.Alan Their importance for diffractive vector meson production will be discussed in the following. ## 4 Recent issues in pQCD calculations ### 4.1 Off-diagonal parton distributions Off-diagonal (also called “skewed” or non-forward) parton distributions<sup>7</sup><sup>7</sup>7These off-diagonal parton distributions are not parton densities in the ordinary probabilistic sense but matrix elements of parton-fields between different initial and final proton states. are much studied recently.<sup>8</sup><sup>8</sup>8See Alan and references therein.
In the case of small $`t`$ scattering the skewedness comes from the difference between $`x`$ and $`x^{}`$ of the two gluons $`g_1`$ and $`g_2`$, and the cross section can be shown to be proportional to the square of a skewed gluon distribution, $$\sigma \propto \left|x^{}g(x,x^{};\overline{Q}^2)\right|^2.$$ (11) Here $`x=\left(M_{q\overline{q}}^2+Q^2\right)/\left(W^2+Q^2\right)`$, $`x^{}=\left(M_{q\overline{q}}^2-M_V^2\right)/\left(W^2+Q^2\right)\ll x`$, and $`M_{q\overline{q}}^2`$ is the mass squared of the intermediate $`q\overline{q}`$ pair. (Taking the leading imaginary part of the amplitude corresponds to cutting the amplitude as indicated by the dashed line in Fig. 2 and putting both $`q`$ and $`\overline{q}`$ on-shell, which in turn fixes $`x`$. $`x^{}`$ has to accommodate the difference between $`M_{q\overline{q}}`$ and $`M_V`$ and is not fixed due to the integration over all possible quark (and antiquark) momenta. At leading logarithmic order $`x^{}\ll x`$ and we can put $`x^{}\simeq 0`$.) For arbitrary kinematics skewed parton distributions are not connected with the diagonal ones and are unknown non-perturbative objects. However, in the case of small $`x`$, they are determined completely by the diagonal ones.SGMR ; Alan The ratio of skewed to diagonal gluon distribution is given by $$R_g=\frac{x^{}g(x,x^{})}{xg(x)}=\frac{2^{2\lambda +3}}{\sqrt{\pi }}\frac{\mathrm{\Gamma }\left(\lambda +\frac{5}{2}\right)}{\mathrm{\Gamma }\left(\lambda +4\right)}.$$ (12) Here $`\mathrm{\Gamma }`$ is the usual Gamma-function and the effective power $`\lambda `$ can be obtained from the logarithmic derivative of the amplitude $`A`$ for the $`\gamma ^{}pVp`$ cross section, $$\lambda =\frac{\partial \mathrm{log}A}{\partial \mathrm{log}\left(1/x\right)}.$$ (13) As will be shown below, the magnitude of the resulting correction factor for the total cross section, $`R_g^2`$, can be sizeable, especially for large $`Q^2`$ or $`M_V^2`$.
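Both $`\lambda `$-dependent correction factors discussed above are one-liners to evaluate. The sketch below is ours; the real-part enhancement of the squared amplitude is written as $`1+\mathrm{tan}^2(\pi \lambda /2)`$, which follows directly from Eq. (9), and the skewing ratio is Eq. (12):

```python
from math import gamma, pi, sqrt, tan

def skewing_ratio(lam):
    """R_g of Eq. (12): skewed over diagonal gluon at small x."""
    return 2.0 ** (2 * lam + 3) / sqrt(pi) * gamma(lam + 2.5) / gamma(lam + 4)

def real_part_factor(lam):
    """Growth of |A|^2 once Re A = tan(pi lam / 2) Im A of Eq. (9)
    is added on top of the imaginary part."""
    return 1.0 + tan(pi * lam / 2.0) ** 2

# lam = 0 (x-independent amplitude): no skewing, no real part.
# Steeper amplitudes, as probed in J/psi or Upsilon production, enhance
# both corrections; the cross section picks up R_g squared.
for lam in (0.0, 0.2, 0.4):
    print(lam, skewing_ratio(lam), real_part_factor(lam))
```

The $`\lambda =0`$ limit of $`R_g`$ being exactly 1 is a useful check of Eq. (12): a flat amplitude is insensitive to the skewing.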
### 4.2 The vector meson wave function Another important issue is the treatment of the vector meson wave function. As sketched in Fig. 1 and Eq. (4) it enters the amplitude via a convolution with the scattered $`q\overline{q}`$ fluctuation. In Eq. (6) the non-relativistic approximation was adopted. This means that quark and antiquark equally share the longitudinal momentum of the photon, i.e. $`z=1-z=1/2`$, and that there is no internal (transverse) momentum $`k_T`$ in the $`q\overline{q}`$ bound state. Therefore, in this naive approximation, $$\psi _{q\overline{q}}^V(z,k_T)=\delta ^{(2)}(k_T)\delta \left(z-1/2\right),$$ (14) and $`M_V=2m_q`$. While this simplification may be suitable for heavy mesons like the $`\mathrm{{\rm Y}}`$, it is clear that the non-relativistic approximation has to break down for light quarks. Various groups have worked on improving this approximation by including the Fermi motion of the quarks in the meson by using a nontrivial wave function.Brodskyetal ; FKS ; RRML ; Nemchiketal Different models for the meson wave function were used which lead to quite different correction factors: whereas in Gaussian models there is no strong suppression RRML , the large $`k_T`$ tail typical for wave functions from non-relativistic potential models seems to lead to large corrections FKS . On the other hand, considering that within these potential models a big part of the $`𝒪(v^2)`$ corrections comes from a regime where $`k_T`$ is bigger than the quark mass itself, these large corrections may well be an artefact of the non-relativistic approximation. Another related problem is the question of which quark mass should be used in the perturbative formulae. Note that Eq. (6) is written in terms of the vector meson mass $`M_V`$. However, as discussed in RRML , the full expressions used to include higher order (relativistic) corrections contain the quark mass $`m_q`$ instead of $`M_V`$.
As the ratio $`M_V/(2m_q)`$ enters with a high power, this difference is not negligible and should be taken into account in the calculation of the $`𝒪(v^2)`$ corrections applied to Eq. (6). In addition, it is well known that there are other relativistic corrections, which in principle have to be taken into account in a consistent way. As pointed out by Hoodbhoy Hoodbhoy , gauge invariance is only preserved if higher Fock states ($`q\overline{q}g`$, $`q\overline{q}gg,\mathrm{\dots }`$) are included in the wave function. In doing so he arrives at the conclusion that the relativistic corrections to the quark propagators plus the corrections from the higher Fock states amount to only a few percent for $`J/\mathrm{\Psi }`$ production, in agreement with RRML . All in all, large relativistic corrections can probably be excluded, but, as different approaches lead to quite different results, considerable uncertainty remains and the issue stays a hot topic. ### 4.3 An alternative approach based on parton-hadron duality In this section we will discuss an alternative approach, which avoids the meson wave function and leads to results which are in surprisingly good agreement with available data. It was proposed in MRT1 for $`\rho `$ meson electroproduction, where the hard scale is provided by $`Q^2`$, not by $`M_\rho `$.<sup>9</sup><sup>9</sup>9Experimentally both the $`t`$ dependence $`\mathrm{d}\sigma /\mathrm{d}t\propto \mathrm{exp}(bt)`$ with $`b\approx 5`$–$`6`$ GeV<sup>-2</sup> for $`Q^2>10`$ GeV<sup>2</sup> and the $`W`$ behaviour of the cross section $`\sigma \propto W^{0.8}`$ indicate that $`\rho `$ meson electroproduction is not a soft, but mainly a hard process. Due to the tiny $`u`$ and $`d`$ quark masses, in the case of the $`\rho `$ non-relativistic approximations cannot be justified, and the wave function is not very well known.
The crucial problem was that all naive predictions for the ratio of the longitudinal to the transverse cross section, which are based on the perturbative formula (6), lead to $$\sigma _L/\sigma _T\sim Q^2/M_\rho ^2.$$ (15) This is much too steep and incompatible with experimental data (see below). The inclusion of effects from a light cone wave function for the $`\rho `$ does not change the picture considerably.<sup>10</sup><sup>10</sup>10One might argue that $`\sigma _T`$ receives large contributions from the small $`k_T`$ region, which is non-perturbative. But those contributions would cause the transverse cross section to fall off even faster with increasing $`Q^2`$ and therefore worsen the problem MRT1 . These observations indicate that the main effects are not coming from the $`\rho `$ wave function and lead to the proposal of a different model in MRT1 : there the cross section for $`\rho `$ production is predicted via perturbative $`u\overline{u}`$ and $`d\overline{d}`$ quark pair electroproduction together with the principle of parton-hadron duality (PHD) PHD . PHD means that the integral of the parton ($`q\overline{q}`$) production cross section over a mass interval $`\mathrm{\Delta }M`$ is approximately equal to the sum over all (corresponding) possible hadron production cross sections in the same mass interval. In the region $`M_{q\overline{q}}^2\approx M_\rho ^2`$ the production of more complicated partonic configurations (like $`q\overline{q}+g`$, $`q\overline{q}+2g`$, $`q\overline{q}+q\overline{q}`$, etc.) is heavily suppressed. On the hadronic side the $`\rho `$ resonance (plus the small admixture of the $`\omega `$) with its decay into two (three) pions completely saturates the cross section.
Therefore we can well approximate the $`\rho `$ production cross section $$\gamma ^{}p\to \rho p\to \pi \pi p$$ by $$\sigma \left(\gamma ^{}p\to \rho p\right)\approx 0.9\underset{q=u,d}{}\int _{M_a^2}^{M_b^2}\mathrm{d}M^2\frac{\mathrm{d}\sigma \left(\gamma ^{}p(q\overline{q})p\right)}{\mathrm{d}M^2}$$ (16) where $`M_a`$ and $`M_b`$ have to be chosen to embrace the $`\rho `$ resonance appropriately, i.e. $`M_b^2-M_a^2\approx 1`$ GeV<sup>2</sup>. The factor 0.9 on the right side of Eq. (16) corrects for the contributions from $`\omega `$ production. The perturbative formulae for the $`q\overline{q}`$ production cross section are derived from the amplitudes depicted in Fig. 2 and can be written in terms of the conventional spin rotation matrices $`d_{\lambda \mu }^J(\theta )`$ (see MRT1 for details): $`{\displaystyle \frac{\mathrm{d}^2\sigma _L}{\mathrm{d}M^2dt}}|_{t=0}`$ $`=`$ $`{\displaystyle \frac{4\pi ^2e_q^2\alpha }{3}}{\displaystyle \frac{Q^2}{(Q^2+M^2)^2}}{\displaystyle \frac{1}{8}}{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta \left|d_{10}^1(\theta )\right|^2\left|I_L\right|^2,`$ (17) $`{\displaystyle \frac{\mathrm{d}^2\sigma _T}{\mathrm{d}M^2dt}}|_{t=0}`$ $`=`$ $`{\displaystyle \frac{4\pi ^2e_q^2\alpha }{3}}{\displaystyle \frac{M^2}{(Q^2+M^2)^2}}{\displaystyle \frac{1}{4}}{\displaystyle \int _{-1}^1}\mathrm{d}\mathrm{cos}\theta \left(\left|d_{11}^1(\theta )\right|^2+\left|d_{1-1}^1(\theta )\right|^2\right)\left|I_T\right|^2`$ (18) where $`e_q`$ is the electric charge of the quark $`q`$, $`\alpha `$ the electromagnetic coupling and $`\theta `$ the polar angle of the quark $`q`$ in the $`q\overline{q}`$ rest frame with respect to the proton direction ($`k_T=M/2\mathrm{sin}\theta `$).
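A numerical sketch of the duality integral in Eq. (16) (entirely our own illustration, with a toy spectrum of arbitrary normalization standing in for the perturbative $`\mathrm{d}\sigma /\mathrm{d}M^2`$):

```python
def phd_cross_section(dsigma_dm2, m2_lo, m2_hi, n=1000):
    """Parton-hadron duality estimate in the spirit of Eq. (16):
    integrate dsigma/dM^2 over a mass window embracing the rho
    (M_b^2 - M_a^2 of about 1 GeV^2) and multiply by 0.9 to remove
    the omega admixture.  Midpoint rule."""
    h = (m2_hi - m2_lo) / n
    total = sum(dsigma_dm2(m2_lo + (k + 0.5) * h) for k in range(n)) * h
    return 0.9 * total

# Toy spectrum, flat in M^2 with unit height (illustration only):
# the window is 1 GeV^2 wide, so the result is 0.9 * 1.0.
result = phd_cross_section(lambda m2: 1.0, m2_lo=0.4, m2_hi=1.4)
print(result)  # ~ 0.9
```

In the actual prediction the integrand is the sum of Eqs. (17) and (18) for $`u\overline{u}`$ and $`d\overline{d}`$, and the answer is, by construction, insensitive to the detailed shape of the spectrum inside the window.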
$`I_{L,T}`$ are integrals over the gluon $`\mathrm{}_T^2`$ and given by $`I_L(K^2)`$ $`=`$ $`K^2{\displaystyle \int \frac{\mathrm{d}\mathrm{}_T^2}{\mathrm{}_T^4}\alpha _s(\mathrm{}_T^2)f(x,\mathrm{}_T^2)\left(\frac{1}{K^2}-\frac{1}{K_{\mathrm{}}^2}\right)},`$ (19) $`I_T(K^2)`$ $`=`$ $`{\displaystyle \frac{K^2}{2}}{\displaystyle \int \frac{\mathrm{d}\mathrm{}_T^2}{\mathrm{}_T^4}\alpha _s(\mathrm{}_T^2)f(x,\mathrm{}_T^2)\left(\frac{1}{K^2}-\frac{1}{2k_T^2}+\frac{K^2-2k_T^2+\mathrm{}_T^2}{2k_T^2K_{\mathrm{}}^2}\right)},`$ (20) with $`f`$ being the unintegrated gluon distribution and $$K_{\mathrm{}}^2\equiv \sqrt{(K^2+\mathrm{}_T^2)^2-4k_T^2\mathrm{}_T^2},K^2\equiv k_T^2(Q^2+M^2)/M^2.$$ In Eqs. (17) and (18) the different rotation matrices appropriately reflect the different spin states of the $`q\overline{q}`$ produced from longitudinal and transverse photons, and the integrals $`I_{L,T}`$ contain the scattering off the proton via the two gluon exchange.<sup>11</sup><sup>11</sup>11Here we assume $`s`$ channel helicity conservation (SCHC), i.e. the produced $`\rho `$ has the helicity of the virtual photon. However, there are small violations of SCHC (e.g. $`\gamma _T^{}\rho _L`$) which can be successfully described in a framework similar to the one discussed here. For a discussion of recent measurements of the 15 spin density matrix elements of $`\rho `$ production compared to different theoretical predictions see AProskuryakov . In order to pick up only those $`u\overline{u}`$, $`d\overline{d}`$ configurations which correspond to the quantum numbers of the $`\rho `$, one has to project out the $`J^{PC}=1^{--}`$ states. This can be easily done on amplitude level with the same rotation matrices $`d_{\lambda \mu }^J(\theta )`$, see MRT1 . (Even higher spin states like the $`\rho (3^{--})`$ can be projected out using the corresponding $`d`$ function MRT2 .)
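The angular weights carried by the rotation matrices in Eqs. (17) and (18) can be checked numerically with the standard spin-1 elements d^1_{10}(θ) = −sinθ/√2, d^1_{11}(θ) = (1+cosθ)/2 and d^1_{1,−1}(θ) = (1−cosθ)/2 (a sketch of ours, not code from MRT1):

```python
from math import sqrt

def d1_10(u):
    """d^1_{10}(theta) = -sin(theta)/sqrt(2), with u = cos(theta)."""
    return -sqrt(max(0.0, 1.0 - u * u)) / sqrt(2.0)

def d1_11(u):
    """d^1_{11}(theta) = (1 + cos(theta)) / 2."""
    return (1.0 + u) / 2.0

def d1_1m1(u):
    """d^1_{1,-1}(theta) = (1 - cos(theta)) / 2."""
    return (1.0 - u) / 2.0

def integrate(f, n=200000):
    """Midpoint rule for the d(cos theta) integrals of Eqs. (17)-(18)."""
    h = 2.0 / n
    return sum(f(-1.0 + (k + 0.5) * h) for k in range(n)) * h

long_weight = integrate(lambda u: d1_10(u) ** 2)                   # 2/3
tran_weight = integrate(lambda u: d1_11(u) ** 2 + d1_1m1(u) ** 2)  # 4/3
print(long_weight, tran_weight)
```

The exact values 2/3 and 4/3 are what the constant $`I_{L,T}`$ limit of the integrals would pick up; in the full expressions the $`\theta `$ dependence of $`I_{L,T}`$ through $`k_T`$ modifies these weights.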
It is important to note that through the projection on amplitude level the longitudinal and transverse cross sections $`\sigma _{L,T}`$ are less infrared sensitive than Eqs. (17) and (18), and therefore $`\sigma _T`$ becomes calculable without a large uncertainty from the treatment of the (non-perturbative) infrared region. For the complete numerical predictions one also has to include the contributions from the real part of the amplitudes and the skewed gluon distribution as discussed above. Both effects are taken into account on amplitude level. Eqs. (17) and (18) give the cross section differential in $`t`$ for $`t=0`$. To arrive at the $`t`$ integrated total cross section one assumes the exponential form $`\mathrm{exp}(-b|t|)`$. The slope $`b`$ can be taken from experiment or theoretical models and depends in general on $`M^2`$, $`W`$ and $`Q^2`$. For more details we refer the reader to MRT4 . To go beyond the leading order prediction in a completely consistent way would require in addition the full set of next-to-leading order gluonic corrections to the $`(q\overline{q})`$-$`2g`$ vertex. These corrections are not known yet<sup>12</sup><sup>12</sup>12A first step towards the calculation of the full NLO corrections is provided by FM ., but can be estimated by a $`𝒦`$ factor LMRT ; MRT1 . Similar to the Drell-Yan process, there are $`\pi ^2`$ enhanced terms, which come from the $`i\pi `$ terms in the double logarithmic Sudakov form factor. Resummation of those leading corrections results in the $`𝒦`$ factor $`𝒦=\mathrm{exp}\left(\pi C_F\alpha _s\right)`$ which leads to a considerable enhancement of the cross section. In Fig. 3 the complete numerical prediction for $`\gamma ^{}p\rho p`$ using the PHD model<sup>13</sup><sup>13</sup>13For the numerical analysis the MRST99 gluon MRST99 was used and the scale of $`\alpha _s`$ in the $`𝒦`$ factor was chosen as $`2K^2`$. For more details see MRT4 .
is shown as a function of $`Q^2`$ together with recent H1 data.H1 The continuous line includes all the effects discussed above, whereas the dashed line does not include the skewed gluon. The importance of the off-diagonal gluon for the $`Q^2`$ behaviour of the cross section is obvious and the effect seems to be required to describe the data. Of course the model prediction is not free from uncertainties like the choice of the mass interval $`M_b^2-M_a^2`$ in Eq. (16) or the scale of $`\alpha _s`$ in the $`𝒦`$ factor. These (and other) uncertainties are discussed in detail in MRT4 , but they affect mainly the normalization of the cross section and do not spoil the good agreement with the experimental data. In Fig. 4 the prediction of the PHD model for the ratio $`L/T`$ is shown as a continuous line. It agrees fairly well with the data points, which show a very modest rise with $`Q^2`$ in contrast to the naive prediction Eq. (15) (dashed line). Thus, in the PHD picture, it is not the $`\rho `$ wave function, but the dynamics of the $`q\overline{q}`$ pair creation from longitudinal and transverse photons together with the off-diagonal two gluon interaction and the projection onto the $`1^{--}`$ state, that determines the $`Q^2`$ dependence and the ratio $`L/T`$. It is important to note that the PHD model also works in the case of massive quarks and heavy mesons. Starting from formulae for diffractive heavy quark production LMRT and modifying the projection formalism appropriately, elastic $`\mathrm{{\rm Y}}`$ photoproduction was recently predicted using PHD in agreement with first measurements.MRT3 The same formalism can also be applied to diffractive $`J/\mathrm{\Psi }`$ production.MRT4 Again, as shown in Fig. 5, there is a surprisingly good agreement between the predicted cross section as a function of $`Q^2`$ and the experimental data H1two . ## 5 Summary Elastic vector meson production is a rich field, from both the experimental and the theoretical points of view.
Different theoretical models describe the data, and more precise data in an increased kinematical range will be needed to clarify the situation. We have briefly discussed some non-perturbative models, but mainly concentrated on perturbative approaches. We have shown that with recent improvements pQCD-based approaches work very well and are in agreement with data. The fairly large impact of skewed parton distributions on the predictions within these models is supported by the data. There is good hope that in the future we will be able to discriminate between the different models and to understand elastic vector meson production in more detail. By combining different observables from different processes, elastic vector meson production, with its high sensitivity to the gluon at small $`x`$, will finally help to constrain the gluon much better. For this, much effort will be needed also from the theoretical side in order to increase the precision of the calculations. ### Acknowledgements I would like to thank G. Grindhammer, B. Kniehl and G. Kramer for the good organization of this stimulating and enjoyable workshop. I also thank Genya Levin, Alan Martin and Misha Ryskin for pleasant collaborations.
no-problem/9910/math9910157.html
ar5iv
text
# Nakano positivity and the 𝐿²-metric on the direct image of an adjoint positive line bundle ## 1. Introduction A holomorphic vector bundle $`E`$ over a complex projective manifold $`X`$ is called ample if the tautological line bundle $`𝒪_{(E)}(1)`$ over the projective bundle $`(E)`$ of hyperplanes in $`E`$ is ample. This notion of ampleness was introduced by R. Hartshorne in \[Ha\]. On the other hand, P.A. Griffiths in \[Gr\] introduced an analytic notion of positivity of a vector bundle. A holomorphic vector bundle $`E`$ over $`X`$ is called Griffiths positive if it admits a Hermitian metric $`h`$ such that the curvature $`C_h(E)C^{\mathrm{}}(X,\mathrm{\Omega }_X^{1,1}\text{End}(E))`$ of the Chern connection on $`E`$ for the Hermitian structure $`h`$ has the property that for every $`xX`$ and every nonzero holomorphic tangent vector $`0vT_xX`$, the endomorphism $`\sqrt{1}C_h(E)(x)(v,\overline{v})`$ of the fiber $`E_x`$ is positive definite with respect to $`h`$. If $`h`$ is a Griffiths positive Hermitian metric on $`E`$, then the Hermitian metric on $`𝒪_{(E)}(1)`$ induced by $`h`$ has positive curvature. Therefore, $`E`$ is ample by a theorem due to Kodaira. An ample line bundle admits a Griffiths positive Hermitian metric. Also, an ample vector bundle on a Riemann surface is Griffiths positive, which was proved by H. Umemura in \[Um\]. However, the question posed by Griffiths \[Gr, page 186, (0.9)\] asking whether every ample vector bundle is Griffiths positive is yet to be settled. The notion of Griffiths positivity was strengthened by S. Nakano. A holomorphic Hermitian vector bundle $`(E,h)`$ is called Nakano positive if $`\sqrt{1}C_h(E)`$ is a positive form on $`TXE`$. A Nakano positive vector bundle is clearly Griffiths positive. In the other direction, H. Skoda and J.-P. Demailly proved that if $`E`$ is Griffiths positive, then $`EdetE`$ is Nakano positive \[DS\], $`detE`$ being the line bundle given by the top exterior power of $`E`$. 
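As a standard illustration of how the two positivity notions differ (this example is classical and is recalled here for orientation only; it is not among the results of this paper), the Fubini–Study metric makes the holomorphic tangent bundle of projective space Griffiths positive, yet for $`n2`$ this bundle cannot be Nakano positive, because Nakano positivity would force a cohomology group to vanish that is in fact nonzero:

```latex
% Sketch, using the Nakano vanishing theorem: H^{n,q}(X,E) = 0 for q >= 1
% when E is Nakano positive on a compact manifold X of dimension n.
% If T\mathbb{P}^n were Nakano positive for n >= 2, then taking q = n-1:
\[
  0 = H^{n,n-1}(\mathbb{P}^n, T\mathbb{P}^n)
    \cong H^{n-1}\!\bigl(\mathbb{P}^n,\ \Omega^{n}_{\mathbb{P}^n}
          \otimes T\mathbb{P}^n\bigr)
    \cong H^{n-1}\!\bigl(\mathbb{P}^n,\ \Omega^{n-1}_{\mathbb{P}^n}\bigr)
    \cong \mathbb{C},
\]
% a contradiction. The middle isomorphism is the contraction
% \Omega^n_X \otimes TX \cong \Omega^{n-1}_X, and the last one is the
% Hodge-theoretic fact H^{q}(\mathbb{P}^n,\Omega^{p}) = \mathbb{C} for p = q.
```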
Our aim here is to establish the following analytic property of an ample vector bundle (Theorem 3.1). Theorem A. Let $`E`$ be an ample vector bundle of rank $`r`$ over a projective manifold $`X`$. The vector bundle $`S^k(E)detE`$ is Nakano positive for every $`k0`$, where $`S^k(E)`$ denotes the $`k`$-th symmetric power of $`E`$. More generally, for every decreasing sequence $`\lambda ^r`$ of height (= number of positive elements) $`l`$, the vector bundle $`\mathrm{\Gamma }^\lambda E(detE)^l`$ is Nakano positive; $`\mathrm{\Gamma }^\lambda E`$ is the vector bundle corresponding to the irreducible representation of $`GL(r,)`$ defined by the weight $`\lambda `$. So, in particular, $`\mathrm{\Gamma }^\lambda E(detE)^l`$ is Griffiths positive. In the special case where $`X`$ is a toric variety or an abelian variety, the Griffiths positivity of such vector bundles associated to an ample bundle was proved in \[Mo\]. The above Theorem A is obtained as a consequence of a result on the Nakano positivity of the direct image of an adjoint positive line bundle. This result on the Nakano positivity of direct images will be described next. Let $`L`$ be a holomorphic line bundle, equipped with a Hermitian metric $`h`$, over a connected projective manifold $`M`$. Given any section $`tH^0(M,LK_M)`$, where $`K_M`$ is the canonical bundle, its conjugate $`\overline{t}`$ is realized as a section of $`L^{}\overline{K_M}`$ using $`h`$. Now, given another section $`sH^0(M,LK_M)`$, consider the top form on $`M`$ obtained from $`s\overline{t}`$ by contracting $`L`$ with $`L^{}`$. The $`L^2`$ inner product on $`H^0(M,LK_M)`$ is defined by taking the integral of this form over $`M`$. We prove the following theorem on the Nakano positivity of the $`L^2`$ metric on a direct image (Theorem 2.3). 
Theorem B.Let $`\psi :YX`$ be a holomorphically locally trivial fiber bundle, where $`X`$ and $`Y`$ are connected projective manifolds, and $`H^1(\psi ^1(x),)=\mathrm{\hspace{0.17em}0}`$ for some, hence every, point $`xX`$. Let $`L`$ be a holomorphic line bundle over $`Y`$ equipped with a positive Hermitian metric $`h`$. Then the $`L^2`$-metric, defined using $`h`$, on the vector bundle $`\psi _{}(K_{Y/X}L)`$ over $`X`$ is Nakano positive; $`K_{Y/X}`$ is the relative canonical bundle. We note that the condition of ampleness of $`L`$ in Theorem B ensures that the direct image $`\psi _{}(K_{Y/X}L)`$ is locally free with the fiber of the corresponding vector bundle over any point $`xX`$ being $`H^0(\psi ^1(x),(K_{Y/X}L)|_{\psi ^1(x)})`$. Theorem A is an immediate consequence of Theorem B applied to a natural line bundle over the flag bundle $`\psi :M_\lambda (E)X`$ associated to an ample vector bundle $`E`$ by a weight $`\lambda `$. In particular, setting $`Y=(E)`$ and $`L=𝒪_{(E)}(k+r)`$ in Theorem B, where $`E`$ is an ample vector bundle of rank $`r`$, the Nakano positivity of $`S^k(E)detE`$ is obtained. This particular case corresponds to the weight $`(k,0,0,\mathrm{},0)`$. Acknowledgments: Remarks 3.3 and 3.5 were communicated to us by Jean-Pierre Demailly. We are very grateful to him for this. We thank M.S. Narasimhan for some useful discussions. We would like to thank the International Centre for Theoretical Physics, Trieste, for its hospitality. The first-named author also thanks Joseph Le Potier and the Institut de Mathématiques de Jussieu for hospitality. The second-named author thanks T. Ohsawa and K. Yoshikawa for his visit to Nagoya University. ## 2. Curvature of the $`L^2`$-metric We first recall the definition of Nakano positivity. Let $`E`$ be a holomorphic vector bundle over $`X`$ equipped with a Hermitian metric $`h`$. Let $`C_h(E)`$ denote the curvature of the corresponding Chern connection on $`E`$. 
Let $`\mathrm{\Theta }_h`$ denote the unique sesquilinear form on $`TXE`$ such that for any $`v_1,v_2T_xX`$ and $`e_1,e_2E_x`$, the equality $$\mathrm{\Theta }_h(v_1e_1,v_2e_2)=\sqrt{1}C_h(E)(v_1,\overline{v_2})e_1,e_2_h$$ is valid. The metric $`h`$ is called Nakano positive if the sesquilinear form $`\mathrm{\Theta }_h`$ on $`TXE`$ is Hermitian, i.e., it is positive definite. Let $`X`$ and $`Y`$ be two connected complex projective manifolds of dimension $`m`$ and $`n`$ respectively, and let $`(2.1)`$ $$\psi :YX$$ be a holomorphic submersion defining a holomorphically locally trivial fiber bundle over $`X`$ of relative dimension $`f=nm`$. So, every point of $`X`$ has an analytic open neighborhood $`U`$ such that $`\psi ^1(U)`$ is holomorphically isomorphic to the trivial fiber bundle $`U\times F`$ over $`U`$, where $`F`$ is the typical fiber of $`\psi `$. The relative canonical line bundle $`K_Y\psi ^{}K_X^1`$ over $`Y`$ will be denoted by $`K_{Y/X}`$. For any $`xX`$, the submanifold $`\psi ^1(x)`$ of $`Y`$ will be denoted by $`Y_x`$. Let $`L`$ be an ample line bundle over $`Y`$. The direct image $`\psi _{}(K_{Y/X}L)`$ is locally free on $`X`$. Indeed, from the Kodaira vanishing theorem it follows that all the higher direct images of $`K_{Y/X}L`$ vanish. Let $`V`$ denote the vector bundle over $`X`$ given by this direct image $`\psi _{}(K_{Y/X}L)`$. Fix a positive Hermitian metric $`h`$ on $`L`$. For any point $`xX`$, using the natural conjugate linear isomorphism of $`\mathrm{\Omega }_{Y_x}^{f,0}`$ with $`\mathrm{\Omega }_{Y_x}^{0,f}`$ and the Hermitian metric $`h`$ on $`L|_{Y_x}`$, a conjugate linear isomorphism between $`\mathrm{\Omega }_{Y_x}^{f,0}L|_{Y_x}`$ and $`\mathrm{\Omega }_{Y_x}^{0,f}L^{}|_{Y_x}`$ is obtained. We denote this conjugate linear isomorphism by $`\iota `$. 
The $`L^2`$ Hermitian metric on the vector bundle $`V:=\psi _{}(K_{Y/X}L)`$ is defined by sending any pair of sections $$t_1,t_2V_x:=H^0(Y_x,(K_{Y/X}L)|_{Y_x})$$ to the integral over $`Y_x`$ of the contraction of $`(\sqrt{1})^{f^2}t_1`$ with $`\iota (t_2)`$. In other words, if $`t_i=s_id\zeta _1d\zeta _2\mathrm{}d\zeta _f`$, where $`\{\zeta _1,\mathrm{},\zeta _f\}`$ is a local holomorphic coordinate chart on the fiber $`Y_x`$ and $`s_i`$, $`i=1,2`$, is a local section of $`L`$, then the pairing $`(2.2)`$ $$t_1,t_2:=(\sqrt{1})^{f^2}_{Y_x}s_1,s_2_h𝑑\zeta _1d\zeta _2\mathrm{}d\zeta _fd\overline{\zeta _1}d\overline{\zeta _2}\mathrm{}d\overline{\zeta _f}$$ is the $`L^2`$ inner product of the two vectors $`t_1`$ and $`t_2`$ of the fiber $`V_x`$. The top form $$s_1,s_2_hd\zeta _1d\zeta _2\mathrm{}d\zeta _fd\overline{\zeta _1}d\overline{\zeta _2}\mathrm{}d\overline{\zeta _f}$$ clearly depends only on $`t_1`$ and $`t_2`$ and, in particular, it does not depend on the choice of the coordinate function $`\zeta `$; in other words, it is a globally defined top form on $`Y_x`$. We note that the definition of $`L^2`$-metric does not require a metric on $`Y_x`$. Our aim is to compute the curvature of the Chern connection on $`V`$ for the $`L^2`$ metric. Theorem 2.3.Assume that $`H^1(Y_x,)=\mathrm{\hspace{0.17em}0}`$ for some, hence every, point $`xX`$. Then the $`L^2`$-metric on the direct image $`V`$ is Nakano positive. Proof. Take a point $`\xi X`$. Let $`\{z_1,z_2,\mathrm{},z_m\}`$ be a holomorphic coordinate chart on $`X`$ around $`\xi `$ such that $`\xi =0`$. Let $`r`$ be the rank of the direct image $`V`$. Fix a normal frame $`\{t_1,t_2,\mathrm{},t_r\}`$ of $`V`$ around $`\xi `$ with respect to the $`L^2`$ metric. 
In other words, $`\{t_1,t_2,\mathrm{},t_r\}`$ is a holomorphic frame of $`V`$ around $`\xi `$ such that for the function $`t_\alpha ,t_\beta `$ around $`\xi `$ we have $`t_\alpha ,t_\beta |_{z=0}`$ $`=`$ $`\delta _{\alpha \beta }`$ $`{\displaystyle \frac{t_\alpha ,t_\beta }{z_i}}|_{z=0}`$ $`=`$ $`0`$ for all $`\alpha ,\beta [1,r]`$ and all $`i[1,m]`$. We note that the second condition is equivalent to the condition that $`dt_\alpha ,t_\beta (0)=0`$, where $`d`$ is the exterior derivation. Let $`^{L^2}`$ denote the Chern connection on $`V`$ for the $`L^2`$ metric. Its curvature, which is a $`\mathrm{End}(V)`$-valued $`(1,1)`$-form on $`X`$, will be denoted by $`C_{L^2}`$. Take any vector $$v=\underset{i=1}{\overset{m}{}}\underset{\alpha =1}{\overset{r}{}}v_{i,\alpha }\frac{}{z_i}t_\alpha T_\xi XV_\xi .$$ We wish to show that $`(2.4)`$ $$\underset{i=1}{\overset{m}{}}\underset{j=1}{\overset{m}{}}\underset{\alpha =1}{\overset{r}{}}\underset{\beta =1}{\overset{r}{}}v_{i,\alpha }\overline{v_{j,\beta }}\sqrt{1}C_{L^2}(\frac{}{z_i},\frac{}{\overline{z_j}})t_\alpha ,t_\beta _{L^2}>\mathrm{\hspace{0.17em}0}$$ with the assumption that $`v0`$. Now, we have $$\sqrt{1}C_{L^2}(\frac{}{z_i},\frac{}{\overline{z_j}})t_\alpha ,t_\beta _{L^2}(0)=\frac{1}{\sqrt{1}}\frac{^2t_\alpha ,t_\beta }{z_i\overline{z_j}}(0).$$ For any $`\alpha [1,r]`$, let $`\widehat{t}_\alpha `$ be the unique section of $`K_{Y/X}L`$, defined on an analytic open neighborhood of $`Y_\xi :=\psi ^1(\xi )`$, which is determined by the condition that for any $`xX`$ in a sufficiently small neighborhood of $`\xi `$, the restriction of $`\widehat{t}_\alpha `$ to the fiber $`\psi ^1(x)`$ represents $`t_\alpha (x)`$, the evaluation of the section $`t_\alpha `$ at $`x`$. The holomorphicity of the section $`t_\alpha `$ of $`V`$ implies that the section $`\widehat{t}_\alpha `$ of $`K_{Y/X}L`$ is also holomorphic. Fix a trivialization of the fiber bundle over a neighborhood of $`\xi `$ defined by the projection $`\psi `$. 
In other words, we fix an isomorphism of the fiber bundle $`\psi ^1(U)U`$ over some open subset $`UX`$ containing $`\xi `$ with the trivial fiber bundle $`U\times Y_\xi `$ over $`U`$. Using this isomorphism, the relative tangent bundle for the projection $`\psi `$ gets identified with a subbundle of the restriction of the tangent bundle $`TY`$ to $`\psi ^1(U)`$. Consequently, any relative differential form on $`\psi ^1(U)`$ becomes a differential form on $`\psi ^1(U)`$. If $`\theta `$ (respectively, $`\omega `$) is a $`L`$-valued $`(i_1,i_2)`$-form (respectively, $`L`$-valued $`(j_1,j_2)`$-form), i.e., a smooth section of $`\mathrm{\Omega }_Y^{i_1,i_2}L`$ (respectively, $`\mathrm{\Omega }_Y^{j_1,j_2}L`$), then define $`\{\theta ,\omega \}`$ to be the $`(i_1+j_2,i_2+j_1)`$-form on $`Y`$ obtained from the section $$\theta \overline{\omega }C^{\mathrm{}}(\mathrm{\Omega }_Y^{i_1+j_2,i_2+j_1}LL^{}),$$ where the conjugate of $`L`$ has been identified with $`L^{}`$ using the Hermitian structure $`h`$, and then contracting $`L`$ with $`L^{}`$. In the above notation we have $`(2.5)`$ $$_Xt_\alpha ,t_\beta =\psi _{}(_Y\{\widehat{t}_\alpha ,\widehat{t}_\beta \}),$$ where $`\psi _{}`$ is the integration of forms along the fiber (the Gysin map) and $`\alpha ,\beta [1,r]`$. It is easy to see directly that the right-hand side of (2.5) does not depend on the choice of the trivialization of the fibration defined by $`\psi `$. Indeed, any two choices of the local trivialization defines a homomorphism, over $`U`$, from the tangent bundle $`TU`$ to the trivial vector bundle over $`U`$ with the space of vertical vector fields $`H^0(Y_\xi ,TY_\xi )`$ as the fiber. This homomorphism is constructed by taking the difference of the two horizontal lifts, given by the two trivializations, of vector fields. 
If the one-form on the right-hand side of (2.5) for a different choice of trivialization is denoted by $`\eta `$, then the one-form defined by the difference $$\eta -\psi _{}(_Y\{\widehat{t}_\alpha ,\widehat{t}_\beta \})$$ sends any tangent vector $`vT_xX`$ to $$_{\psi ^1(x)}L_{\widehat{v}}\{\widehat{t}_\alpha ,\widehat{t}_\beta \},$$ where $`L_{\widehat{v}}`$ is the Lie derivative with respect to the vertical vector field corresponding to $`v`$ for the given pair of trivializations of the fiber bundle. Now the identity $`L_{\widehat{v}}=di_{\widehat{v}}+i_{\widehat{v}}d`$ and Stokes’ theorem together ensure that the right-hand side of (2.5) is independent of the choice of trivialization of the fibration defined by $`\psi `$. Let $``$ denote the Chern connection of the holomorphic line bundle $`L`$ equipped with the Hermitian metric $`h`$. Its curvature will be denoted by $`C_h`$. Since $`\widehat{t}_\alpha `$ is a holomorphic section for every $`\alpha [1,r]`$, the equality $$\psi _{}(_Y\{\widehat{t}_\alpha ,\widehat{t}_\beta \})=\psi _{}(\{\widehat{t}_\alpha ,\widehat{t}_\beta \})$$ is valid for all $`\alpha ,\beta [1,r]`$. For any $`i[1,m]`$, let $`\frac{\stackrel{~}{}}{z_i}`$ denote the vector field on $`\psi ^1(U)`$ given by the lift of the local vector field $`\frac{}{z_i}`$ on $`UX`$ using the chosen trivialization of the fiber bundle. The contraction of a one-form $`\theta `$ with a vector field $`\nu `$ will be denoted by $`(\theta ,\nu )`$. The above equality combined with (2.5) gives $$\frac{}{z_i}t_\alpha ,t_\beta =\left(\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})\right)|_{Y_\xi }$$ as any $`\widehat{t}_\alpha `$ is a relative top form with values in $`L`$. 
Now taking $`\overline{}_Y`$ of this equality yields $$\frac{^2}{\overline{z_j}z_i}t_\alpha ,t_\beta =(\overline{}_X\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \}),\frac{}{\overline{z}_j})=(\psi _{}(\overline{}_Y\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \}),\frac{}{\overline{z_j}}).$$ Since any $`\widehat{t}_\alpha `$ is holomorphic, the last term coincides with $$(\psi _{}(\{^{0,1}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \}),\frac{}{\overline{z_j}})+(-1)^f(\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,^{1,0}\widehat{t}_\beta \}),\frac{}{\overline{z_j}})$$ $`(2.6)`$ $$=\psi _{}(\{_{\frac{\stackrel{~}{}}{\overline{z_j}}}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})+(-1)^f\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,_{\frac{\stackrel{~}{}}{z_j}}\widehat{t}_\beta \}),$$ where $`\frac{\stackrel{~}{}}{\overline{z_j}}`$, as before, denotes the lift of $`\frac{}{\overline{z_j}}`$ using the chosen trivialization of the fibration, and $`f`$ is the relative dimension. 
The holomorphicity of any $`\widehat{t}_\alpha `$ yields the equality $$\psi _{}(\{_{\frac{\stackrel{~}{}}{\overline{z_j}}}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})=\psi _{}(C_h(\frac{\stackrel{~}{}}{z_i},\frac{\stackrel{~}{}}{\overline{z_j}})\{\widehat{t}_\alpha ,\widehat{t}_\beta \}).$$ Now, for any $`v=_{i=1}^m_{\alpha =1}^rv_{i,\alpha }\frac{}{z_i}t_\alpha T_\xi XV_\xi `$ we have $$\underset{i=1}{\overset{m}{}}\underset{j=1}{\overset{m}{}}\underset{\alpha =1}{\overset{r}{}}\underset{\beta =1}{\overset{r}{}}v_{i,\alpha }\overline{v_{j,\beta }}\psi _{}(C_h(\frac{\stackrel{~}{}}{z_i},\frac{\stackrel{~}{}}{\overline{z_j}})\{\widehat{t}_\alpha ,\widehat{t}_\beta \})=\psi _{}(\underset{i,j=1}{\overset{m}{}}C_h(\frac{\stackrel{~}{}}{z_i},\frac{\stackrel{~}{}}{\overline{z_j}})\{\theta _i,\overline{\theta _j}\}),$$ where $`\theta _i:=_{\alpha =1}^rv_{i,a}\widehat{t}_a`$ is the section of $`K_{Y/X}L`$ defined on a neighborhood of $`Y_\xi `$. Let $`e`$ be a local section of $`L`$ defined around a point $`yY`$ and $`\{\zeta _1,\zeta _2,\mathrm{},\zeta _f\}`$ be a holomorphic coordinate chart on $`Y_\xi `$ around $`y`$. Set $`\tau `$ to be the local section of $`TY`$ defined around $`y`$ that satisfies the equality $$d\zeta _1\zeta _2\mathrm{}\zeta _fe\tau =\underset{i=1}{\overset{m}{}}\frac{\stackrel{~}{}}{z_i}\theta _i$$ of local sections of $`K_{Y/X}LTY`$. Using this notation, the evaluation at $`\xi `$ of the function defined on a neighborhood of $`\xi `$ by the fiber integral $$\psi _{}(\{_{\frac{\stackrel{~}{}}{\overline{z_j}}}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})$$ in (2.6) coincides with $`(2.7)`$ $$\psi _{}\left(C_h(\tau ,\overline{\tau })e,e_hd\zeta _1d\zeta _2\mathrm{}d\zeta _fd\overline{\zeta _1}d\overline{\zeta _2}\mathrm{}d\overline{\zeta _f}\right).$$ The curvature $`C_h`$ is given to be positive. 
So using the expression (2.7) we conclude that the evaluation at $`\xi `$ of the complex-valued function around $`\xi `$ defined by $$\frac{1}{\sqrt{-1}}\underset{i=1}{\overset{m}{}}\underset{j=1}{\overset{m}{}}\underset{\alpha =1}{\overset{r}{}}\underset{\beta =1}{\overset{r}{}}v_{i,\alpha }\overline{v_{j,\beta }}\psi _{}(\{_{\frac{\stackrel{~}{}}{\overline{z_j}}}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})$$ is positive real; $`\psi _{}(\{_{\frac{\stackrel{~}{}}{\overline{z_j}}}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})`$ is a term in (2.6). Consequently, to prove the theorem it suffices to show that the last term in (2.6), namely the complex-valued function $`(2.8)`$ $$\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,_{\frac{\stackrel{~}{}}{z_j}}\widehat{t}_\beta \}),$$ which is defined around $`\xi `$, actually vanishes at $`\xi `$. Let $`\omega `$ denote the Kähler form on $`Y_\xi `$ obtained from the curvature of the Chern connection $`C_h`$ on $`L`$. The second part of the normal frame condition gives $$(_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha )|_{Y_\xi },\widehat{t}_\beta |_{Y_\xi }_\omega =\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})(\xi )=\mathrm{\hspace{0.17em}0}$$ for all $`\alpha ,\beta [1,r]`$ and $`i[1,m]`$; here $`,_\omega `$ is the inner product defined using $`\omega `$. We note that the above orthogonality condition does not depend on the choice of the Kähler form on $`Y_\xi `$ used to define it. 
The above assertion that the smooth section $$\stackrel{~}{t}_{i,\alpha }:=(_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha )|_{Y_\xi }$$ is orthogonal to $`H^0(Y_\xi ,(K_{Y/X}L)|_{Y_\xi })`$ is equivalent to the condition that $`\stackrel{~}{t}_{i,\alpha }`$ is orthogonal to the space $`H^{f,0}(Y_\xi ,L|_{Y_\xi })`$ of $`\mathrm{\Delta }_{Y_\xi }^{\prime \prime }`$-harmonic $`(f,0)`$-forms with values in $`L`$; here $`f`$, as before, is the relative dimension. This Laplacian $`\mathrm{\Delta }_{Y_\xi }^{\prime \prime }`$ corresponds to the Dolbeault operator on $`L|_{Y_\xi }`$ endowed with the Hermitian metric $`h|_{Y_\xi }`$. The above orthogonality condition implies that the equality $$\stackrel{~}{t}_{i,\alpha }=\mathrm{\Delta }_{Y_\xi }^{\prime \prime }G_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{i,\alpha })=(D_{Y_\xi }^{\prime \prime })^{}D_{Y_\xi }^{\prime \prime }G_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{i,\alpha })$$ is valid; here $`G_{Y_\xi }^{\prime \prime }`$ is the Green operator corresponding to $`\mathrm{\Delta }_{Y_\xi }^{\prime \prime }`$. Therefore, we have the following equality $$\psi _{}(\{_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,_{\frac{\stackrel{~}{}}{z_j}}\widehat{t}_\beta \})=_{Y_\xi }D_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{i,\alpha }),D_{Y_\xi }^{\prime \prime }G_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{j,\beta })_\omega =_{Y_\xi }(D_{Y_\xi }^{\prime \prime })^{}D_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{i,\alpha }),G_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{j,\beta })_\omega $$ for the function defined by (2.8). Now, to prove that the function defined in (2.8) vanishes at $`\xi `$ it suffices to establish the equality $`(2.9)`$ $$D_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{i,\alpha })=\mathrm{\hspace{0.17em}0}$$ for all $`i[1,m]`$ and $`\alpha [1,r]`$. 
To prove this we first note that $$D_{Y_\xi }^{\prime \prime }(\stackrel{~}{t}_{i,\alpha })=D_{Y_\xi }^{\prime \prime }((_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha )|_{Y_\xi })=(\underset{k=1}{\overset{f}{}}_{\frac{}{\overline{\zeta _k}}}(_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha )d\overline{\zeta _k})|_{Y_\xi }=\underset{k=1}{\overset{f}{}}C_h(\frac{\stackrel{~}{}}{z_i},\frac{}{\overline{\zeta _k}})\widehat{t}_\alpha d\overline{\zeta _k},$$ where $`\{\zeta _1,\mathrm{},\zeta _{f1},\zeta _f\}`$, as before is a local holomorphic coordinate chart on $`Y_\xi `$. The locally defined $`(0,1)`$-form $`_{k=1}^fC_h(\frac{\stackrel{~}{}}{z_i},\frac{}{\overline{\zeta _k}})d\overline{\zeta _k}`$ is easily seen to be independent of the coordinate function $`\{\zeta _1,\mathrm{},\zeta _f\}`$. Indeed, this locally defined $`(0,1)`$-form coincides with the contraction $$i_{\frac{\stackrel{~}{}}{z_i}}C_h|_{Y_\xi }$$ of the $`(1,1)`$-form $`C_h`$ with the vector field $`\frac{\stackrel{~}{}}{z_i}`$. We have $`H^{0,1}(Y_\xi )=0`$, as, by assumption, $`H^1(Y_\xi ,)=0`$ and $`Y_\xi `$ is Kähler. Consequently, there is no nonzero harmonic form of type $`(0,1)`$ on $`Y_\xi `$. Therefore, the equality (2.9) is an immediate consequence of the following lemma. Lemma 2.10.The $`(0,1)`$-form $`i_{\frac{\stackrel{~}{}}{z_i}}C_h|_{Y_\xi }`$ on $`Y_\xi `$ is harmonic. Proof of Lemma 2.10. This form $`i_{\frac{\stackrel{~}{}}{z_i}}C_h|_{Y_\xi }`$ is evidently $`\overline{}_{Y_\xi }`$-closed. 
To prove that it is also $`\overline{}_{Y_\xi }^{}`$-closed, first note that the Kähler identity gives $$\sqrt{-1}\overline{}_{Y_\xi }^{}\left(\underset{k=1}{\overset{f}{}}C_h(\frac{\stackrel{~}{}}{z_i},\frac{}{\overline{\zeta _k}})d\overline{\zeta _k}\right)=\mathrm{\Lambda }_\omega _{Y_\xi }\underset{k=1}{\overset{f}{}}C_h(\frac{\stackrel{~}{}}{z_i},\frac{}{\overline{\zeta _k}})d\overline{\zeta _k}.$$ The right-hand side coincides with $$\mathrm{\Lambda }_\omega \underset{l=1}{\overset{f}{}}\underset{k=1}{\overset{f}{}}\frac{}{\zeta _l}C_h(\frac{\stackrel{~}{}}{z_i},\frac{}{\overline{\zeta _k}})d\zeta _ld\overline{\zeta _k}=\mathrm{\Lambda }_\omega \underset{l=1}{\overset{f}{}}\underset{k=1}{\overset{f}{}}\frac{\stackrel{~}{}}{z_i}C_h(\frac{}{\zeta _l},\frac{}{\overline{\zeta _k}})d\zeta _ld\overline{\zeta _k}.$$ The last equality is obtained using the Bianchi identity. Since $`\mathrm{\Lambda }_\omega `$ and $`\frac{\stackrel{~}{}}{z_i}`$ commute, the following equality is obtained: $$\mathrm{\Lambda }_\omega \underset{l=1}{\overset{f}{}}\underset{k=1}{\overset{f}{}}\frac{\stackrel{~}{}}{z_i}C_h(\frac{}{\zeta _l},\frac{}{\overline{\zeta _k}})d\zeta _ld\overline{\zeta _k}=\frac{\stackrel{~}{}}{z_i}(\mathrm{\Lambda }_\omega \omega )=\mathrm{\hspace{0.17em}0}.$$ This completes the proof of the lemma.$`\mathrm{}`$ We already noted that the given condition $`H^1(Y_\xi ,)=0`$ and the above lemma together imply the equality (2.9). This completes the proof of the theorem.$`\mathrm{}`$ If $`H^1(Y_\xi ,)0`$, then the Picard group of $`Y_\xi `$ has a continuous part. If $`\psi `$ gives a locally trivial holomorphic fibration, then the family of line bundles $`L|_{Y_x}`$ gives an infinitesimal deformation map $`\tau :T_\xi XH^1(Y_\xi ,𝒪_{Y_\xi })`$. 
It is easy to check that the harmonic $`(0,1)`$-form $`_{k=1}^fC_h(\frac{\stackrel{~}{}}{z_i},\frac{}{\overline{\zeta _k}})d\overline{\zeta _k}`$ in Lemma 2.10 is the harmonic representative of the image of the tangent vector $`\frac{}{z_i}`$ under the homomorphism $`\tau `$. (See \[ST\], \[BS\] for a similar argument.) ## 3. Applications of the positivity of direct images Let $`E`$ be an ample vector bundle of rank $`r`$ on a projective manifold $`X`$. Take $`\lambda =(\lambda _1,\lambda _2,\mathrm{},\lambda _r)^r`$ such that $`\lambda _i\lambda _j`$ if $`ij`$. Let $`\mathrm{\Gamma }^\lambda E`$ denote the vector bundle associated to $`E`$ for the weight $`\lambda `$. If, in particular, $`\lambda _i=0`$ for $`i2`$, then $`\mathrm{\Gamma }^\lambda E`$ is the symmetric power $`S^{\lambda _1}(E)`$; if $`\lambda _i=0`$ for $`ik+1`$ and $`\lambda _i=1`$ for $`ik`$, then $`\mathrm{\Gamma }^\lambda E`$ is the exterior power $`^k(E)`$. Let $$\psi :M_\lambda (E)X$$ denote the associated flag bundle over $`X`$, and let $`L_\lambda M_\lambda (E)`$ be the corresponding line bundle. If $`\lambda _2=0`$, then $`M_\lambda (E)=(E)`$ and $`L_\lambda =𝒪_{(E)}(\lambda _1+r)`$. The direct image $`\psi _{}(L_\lambda K_{M_\lambda (E)/X})`$ coincides with $`\mathrm{\Gamma }^\lambda E(^rE)^l`$, where $`l[1,r]`$ is such that $`\lambda _l0`$ and $`\lambda _{l+1}=0`$; $`K_{M_\lambda (E)/X}`$ is the relative canonical bundle for the projection $`\psi `$. In this special setup, Theorem 2.3 reads as follows: Theorem 3.1. The vector bundle $`\mathrm{\Gamma }^\lambda E(^rE)^l`$ is Nakano positive. In particular, setting $`\lambda =(k,0,\mathrm{},0,0)`$, the vector bundle $`S^k(E)detE`$ is Nakano positive. Theorem 3.1 combined with a vanishing theorem of Nakano proved in \[Na\] immediately gives as a corollary the following result of Demailly proved in \[De\]. Corollary 3.2. Let $`E`$ be an ample vector bundle on a projective manifold $`X`$ of dimension $`n`$. 
Then $$H^{n,i}(X,\mathrm{\Gamma }^\lambda E(detE)^l)=\mathrm{\hspace{0.17em}0}$$ for $`i1`$ and $`\lambda `$ as above. We note that the special case of Corollary 3.2 where $`\mathrm{\Gamma }^\lambda E=S^k(E)`$ was proved by Griffiths \[Gr\]. Remark 3.3. The curvature terms of the $`L^2`$-metric on the vector bundle $`\mathrm{\Gamma }^\lambda E(detE)^l`$ are $$\psi _{}(\{_{\frac{\stackrel{~}{}}{\overline{z_j}}}_{\frac{\stackrel{~}{}}{z_i}}\widehat{t}_\alpha ,\widehat{t}_\beta \})$$ The curvature of the dual metric is the negative of the transpose of the initial curvature. Hence, the sesquilinear form on $`TXE^{}`$ computed with the dual metric is Nakano negative. As an immediate consequence, for an ample vector bundle $`E`$ we have $$H^{p,n}(X,\mathrm{\Gamma }^\lambda E(detE)^l)=\mathrm{\hspace{0.17em}0}$$ if $`p1`$ and $`\lambda `$ as above. Let $`X`$ be a compact complex manifold equipped with a Hermitian structure $`\omega `$. In \[DPS\], the notion of a numerically effective line bundle on $`X`$ is defined to be a holomorphic line bundle $`L`$ satisfying the condition that given any $`ϵ>0`$, there is a Hermitian metric $`h_ϵ`$ on $`L`$ such that $$\mathrm{\Theta }_{h_ϵ}≥-ϵ\omega ,$$ where $`\mathrm{\Theta }_{h_ϵ}`$ is the Chern curvature for the Hermitian metric $`h_ϵ`$ \[DPS, Definition 1.2\]. The manifold $`X`$ being compact, this condition, of course, does not depend on $`\omega `$. A vector bundle $`E`$ over $`X`$ is called numerically effective if the tautological line bundle $`𝒪_{(E)}(1)`$ over $`(E)`$ is numerically effective \[DPS, Definition 1.9\]. We have the following proposition as an easy consequence of the proof of Theorem 2.3. Proposition 3.4. Let $`X`$ be a compact complex manifold equipped with a Hermitian form $`\omega `$. 
A vector bundle $`E`$ over $`X`$ is numerically effective if and only if there is a Hermitian metric $`h_{k,\epsilon }`$ on $`S^k(E)detE`$, for all $`k1`$ and all $`\epsilon >0`$, such that $$\sqrt{-1}C_{h_{k,\epsilon }}(S^k(E)detE)>-\epsilon \omega Id_{S^k(E)detE},$$ where $`C_{h_{k,\epsilon }}(S^k(E)detE)`$ denotes the curvature of the metric $`h_{k,\epsilon }`$. The inequality is in the sense of Nakano. Proof. Let $`E`$ be a numerically effective vector bundle of rank $`r`$ over the compact complex manifold $`X`$. Let $`Y`$ denote the projective bundle $`(E)`$ over $`X`$. The natural projection of $`Y`$ to $`X`$ will be denoted by $`\psi `$. The tautological line bundle $`𝒪_{(E)}(1)`$ on $`(E)`$ is numerically effective. For any $`\epsilon >0`$, consider the Hermitian metric $`h_\epsilon `$ on $`𝒪_{(E)}(1)`$ as in Definition 1.2 (p. 299) of \[DPS\]. This Hermitian metric $`h_\epsilon `$ induces a Hermitian metric on each $`𝒪_{(E)}(k)`$, where $`k>1`$. Consequently, on each $`S^k(E)detE`$ we have a Hermitian metric $`h_{k,\epsilon }`$, namely the $`L^2`$-metric associated with the Hermitian metric on $`𝒪_{(E)}(k+r)`$ induced by $`h_\epsilon `$. Now, from the proof of Theorem 2.3 it can be deduced that the condition for numerical effectiveness of a line bundle in Definition 1.2 (p. 299) of \[DPS\], given in terms of Hermitian metrics $`h_\epsilon `$, ensures that the inequality in the proposition is valid. Conversely, let $`E`$ be a vector bundle such that the inequality condition in the statement of the proposition is valid. Now it follows immediately from the criterion of numerical effectiveness given in Theorem 1.12 (p. 306) of \[DPS\] that such a vector bundle $`E`$ must be numerically effective. This completes the proof of the proposition.$`\mathrm{}`$ Remark 3.5. Let $`E`$ be a vector bundle of rank $`r`$ and set $`L=𝒪_{(E)}(r+k)`$ in (2.6). 
The expression for the curvature of the $`L^2`$-metric on $`S^kEdetE`$ shows that if $`E`$ is ample, then for each $`k\mathrm{\hspace{0.17em}1}`$ there is a Hermitian metric on $`S^kEdetE`$, and there exists a positive number $`\epsilon >\mathrm{\hspace{0.17em}0}`$ with the property that for every positive integer $`k`$, the inequality $$\sqrt{-1}c(S^kEdetE)>(k+r)\epsilon \omega \mathrm{Id}_{S^kEdetE}$$ is valid, where $`\omega `$ is any fixed Hermitian form on $`X`$; the inequality is in the sense of Nakano. We observe that the above property is actually a characterization of ampleness. Indeed, if $`E`$ is a vector bundle which satisfies the above condition, then fix any metric on the line bundle $`detE`$. Twisting by its dual, we get a metric on $`S^kE`$ whose curvature is, for large $`k`$, Nakano positive. Hence, $`S^kE`$ must be ample. This immediately yields the ampleness of $`E`$.
no-problem/9910/cond-mat9910356.html
ar5iv
text
# Exact trimer ground states on a spin-1 chain (Work performed within the research program of the Sonderforschungsbereich 341, Köln-Aachen-Jülich) ## Abstract We construct a new spin-1 model on a chain. Its ground state is determined exactly; it is three-fold degenerate and breaks translational invariance. Thus we have trimerization. Excited states cannot be obtained exactly, but we determine a few low-lying ones by using trial states, among them solitons. Spin chains and spin ladders have been intensively studied in recent years. Besides the Bethe ansatz solvable models there is an increasing number of models where at least the ground state is known exactly and where, in some cases, also the gap to the first excited state can be calculated. Typically, in the ground state of these models the correlations are of short range, which implies a finite gap. One natural way to construct such models in one dimension is to use the matrix product method, which is based on the concept of “optimum ground states” as explained in ref. . In some cases translational symmetry is spontaneously broken, leading to twofold degenerate, dimerized ground states. To the best knowledge of the authors, no model has been found so far whose ground state is threefold degenerate, thus leading to trimerization. Here we construct such a model for the first time. There exists one model, the Uimin-Lai-Sutherland (ULS) model, which is a special case of the spin-1 bilinear-biquadratic chain, where the excitation spectrum has a tripled periodicity in the Brillouin zone, namely soft modes appear at $`q=0`$ and $`q=\pm 2\pi /3`$; however, the ground state itself does not break translational symmetry. For later convenience, the Hamiltonian of the spin-1 bilinear-biquadratic model is written in the form $$=\underset{i=1}{\overset{N}{}}\left[\left(𝐒_i𝐒_{i+1}\right)^2+\alpha \left(𝐒_i𝐒_{i+1}\right)\right].$$ (1) The ULS point corresponds to $`\alpha =1`$. 
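The multiplet energies that drive the analysis below can be checked numerically. The following sketch (our own check, not from the paper; it assumes the standard spin-1 matrices) builds the single-bond term of eq. (1) on the nine-dimensional two-site space and confirms that the total-spin-0, 1 and 2 multiplets carry bond energies 4 - 2α, 1 - α and 1 + α, so that at the AKLT point α = 3 the singlet and triplet become degenerate:

```python
import numpy as np

# Spin-1 operators in the basis (+, 0, -); these standard matrices are
# our own ingredient, not taken from the paper.
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)        # raising operator S^+
Sx = (Sp + Sp.T) / 2.0
Sy = (Sp - Sp.T) / 2.0j
spin_ops = (Sx, Sy, Sz)

# S_i . S_{i+1} on the two-site space C^3 (x) C^3; the Sy (x) Sy term is
# a product of two purely imaginary matrices, hence real.
SS = sum(np.kron(s, s) for s in spin_ops).real

def bond(alpha):
    """Single bond term (S_i.S_{i+1})^2 + alpha * (S_i.S_{i+1}) of eq. (1)."""
    return SS @ SS + alpha * SS

# Total spin 0, 1, 2 gives S_i.S_{i+1} = -2, -1, +1, hence bond
# eigenvalues 4 - 2*alpha (x1), 1 - alpha (x3), 1 + alpha (x5).
def multiplet_energies(alpha):
    ev = np.round(np.linalg.eigvalsh(bond(alpha)), 8)
    return sorted(set(ev.tolist()))

aklt = multiplet_energies(3.0)   # AKLT point: singlet and triplet coincide
uls = multiplet_energies(1.0)    # ULS point: triplet alone is lowest
```

Diagonalizing the 9x9 bond matrix rather than using the Clebsch-Gordan decomposition keeps the check independent of the group-theoretic argument it verifies.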
For $`\alpha >1`$ a gap opens up in the excitation spectrum above the unique ground state, if the chain is periodically closed. At $`\alpha =3`$ we get the AKLT model , where the existence of the gap has been proven. For $`\alpha <1`$ the gapless spectrum with three soft modes presumably survives . Since a gapless spectrum and, correspondingly, power-law-like decaying correlations should be an exception, and generically a spin chain should have a finite gap, one might expect to find a gapped trimerized state by perturbing away from the Hamiltonian given above. To construct such a model that has an exact threefold degenerate ground state with a finite gap to the excitations, we observe that the quantity $$A_{i,j}=\left(𝐒_i⋅𝐒_j\right)^2+\alpha \left(𝐒_i⋅𝐒_j\right)+\alpha -1$$ (2) has the following properties. Since the eigenstates arrange into multiplets, we denote the 2-spin singlet state at sites $`i`$ and $`j`$ by $`s[i,j]`$, and similarly by $`t[i,j]`$ and $`q[i,j]`$ the triplet and quintuplet states. Then we find $$A_{i,j}\left\{\begin{array}{c}s[i,j]\\ t[i,j]\\ q[i,j]\end{array}\right\}=\{\begin{array}{c}\hfill (3-\alpha )s[i,j],\\ \hfill 0\cdot t[i,j],\\ \hfill 2\alpha q[i,j].\end{array}$$ (3) This implies that the triplet has the lowest energy in the range $`0<\alpha <3`$. For three spins at sites $`i,j,k`$ one has one septuplet, two quintuplets, three triplets, and precisely one totally antisymmetric singlet. This trimer singlet is given by $$S[i,j,k]=\frac{1}{\sqrt{6}}\left[(+,0,-)+(0,-,+)+(-,+,0)-(-,0,+)-(0,+,-)-(+,-,0)\right]$$ (4) in terms of the three states $`(+)`$, $`(-)`$ and $`(0)`$ of a spin-1 operator. Then using the fact that in order to form a singlet with the third spin, any two spins have to form a triplet, one gets easily from eq. 
(3) $$A_{i,j}S[i,j,k]=A_{i,k}S[i,j,k]=A_{j,k}S[i,j,k]=0.$$ (5) Using this property one sees immediately that the Hamiltonian $$\mathcal{H}=\underset{i=1}{\overset{N}{\sum }}h_{i,i+1,i+2,i+3}=\underset{i=1}{\overset{N}{\sum }}A_{i,i+2}A_{i+1,i+3}$$ (6) has precisely three zero-energy ground states. For a finite periodically closed chain of $`N=3p`$ sites, where $`p`$ is an integer, these three states are: $`\mathrm{\Psi }_1`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{p}{\prod }}}S[3i-2,3i,3i+2],`$ $`\mathrm{\Psi }_2`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{p}{\prod }}}S[3i-1,3i+1,3i+3],`$ (7) $`\mathrm{\Psi }_3`$ $`=`$ $`{\displaystyle \underset{i=1}{\overset{p}{\prod }}}S[3i,3i+2,3i+4].`$ They are ground states, as $`\mathcal{H}`$ and $`h`$ of eq. (6) are positive-semidefinite in $`0<\alpha <3`$, and they are optimum ground states, as in the sense of ref. they are simultaneously ground states of all local interactions $`h`$. For a finite system these states are not strictly orthogonal, but their overlap goes to zero exponentially as $`N\to \mathrm{\infty }`$, namely as $`\gamma ^N`$ ($`0<\gamma <1`$). To visualize the states we show one of them in fig. 1 by connecting the sites belonging to the trimer singlets by valence bonds. Because the trimer singlet is antisymmetric under left-to-right reflection, i.e., $`[i,j,k]\to [k,j,i]`$, the bonds have to be directed. It is more convenient to draw the chain as a zig-zag ladder, which is shown in fig. 2. When $`N=6p`$, this is a genuine ladder, while when $`N=3(2p+1)`$, it can be thought of as part of a Moebius ribbon. In what follows we will use everywhere this zig-zag ladder representation. That these states are in fact the ground states of the Hamiltonian in the range $`0<\alpha <3`$ follows immediately from eq. (3). At $`\alpha =0`$ the quintuplet of the spins at sites $`i`$ and $`j`$ has the same (zero) energy as the triplet, so at this point the degeneracy of the ground state becomes exponentially large. 
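The pair spectrum in eq. (3) and the annihilation property in eq. (5) are easy to check numerically. The following sketch (our verification, not part of the paper) builds the spin-1 operators explicitly, forms the operator $`A_{i,j}`$ of eq. (2), and tests both statements for one value of the coupling:

```python
import numpy as np

# spin-1 matrices in the basis (+, 0, -)
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
Sm = Sp.T
I3 = np.eye(3)

def kron_chain(ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out

def A_pair(n, i, j, alpha):
    """A_{i,j} of eq. (2) on a chain of n spin-1 sites (0-indexed).
    Uses S_i.S_j = Sz Sz + (S+ S- + S- S+)/2, which keeps everything real."""
    ss = np.zeros((3**n, 3**n))
    for oi, oj, w in [(Sz, Sz, 1.0), (Sp, Sm, 0.5), (Sm, Sp, 0.5)]:
        ops = [I3] * n
        ops[i], ops[j] = oi, oj
        ss += w * kron_chain(ops)
    return ss @ ss + alpha * ss + (alpha - 1.0) * np.eye(3**n)

alpha = 1.3
# eq. (3): eigenvalues 3-alpha (singlet), 0 (3 triplet states), 2*alpha (5 quintuplet states)
ev = np.sort(np.linalg.eigvalsh(A_pair(2, 0, 1, alpha)))
assert np.allclose(ev, np.sort([3 - alpha] + [0.0] * 3 + [2 * alpha] * 5))

# eq. (4): the totally antisymmetric trimer singlet, written via the Levi-Civita tensor
eps = np.zeros((3, 3, 3))
for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]: eps[p] = 1.0
for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]: eps[p] = -1.0
S_tri = eps.reshape(27) / np.sqrt(6.0)

# eq. (5): every pair operator annihilates the trimer singlet
for (i, j) in [(0, 1), (0, 2), (1, 2)]:
    assert np.allclose(A_pair(3, i, j, alpha) @ S_tri, 0.0)
print("eqs. (3) and (5) verified for alpha =", alpha)
```

The Levi-Civita construction reproduces eq. (4) exactly, since the even permutations of $`(+,0,-)`$ enter with $`+`$ and the odd ones with $`-`$.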
For $`\alpha <0`$ the quintuplet has the lowest energy, leading to the fully polarized ferromagnetic state. The transition from the trimer phase to the ferromagnetic one is of first order. At $`\alpha =3`$ the 2-spin singlet and triplet local pairs become degenerate, producing again an exponentially large degeneracy, while for $`\alpha >3`$ we get to the massive, Haldane-like regime with a unique ground state. Next we discuss excited states of the model. They cannot be calculated exactly, but clearly there has to be a gap to the excitations, because the ground state is formed of short-range valence bonds. The simplest excited state could be a state where one trimer singlet is broken up and is replaced by a triplet (Tri), quintuplet (Quint) or septuplet (Sept) of the three spins, e.g., $$\mathrm{\Phi }_\mathrm{B}(l)=B[3l-2,3l,3l+2]\underset{i\ne l}{\prod }S[3i-2,3i,3i+2],$$ (8) where $`B`$ stands for Tri, Quint or Sept. Using these as trial states and calculating their energy as the average $`E=\langle \mathcal{H}\rangle `$, we see that two terms of the Hamiltonian give a non-zero contribution: $$A_{3l-2,3l}A_{3l-1,3l+1}+A_{3l-1,3l+1}A_{3l,3l+2}.$$ (9) As can be easily seen, the Hamiltonian has no matrix element between states where different trimers are broken, and there is no overlap between these states. Therefore these excited states have no dispersion. For the three possible triplet states we get the energies $$E_{\mathrm{Tri}}=\{\begin{array}{c}(2+\alpha )(\frac{1}{3}+\alpha ),\hfill \\ (3-a_1-\alpha )(\frac{1}{3}+\alpha ),\hfill \\ (3-a_2-\alpha )(\frac{1}{3}+\alpha ),\hfill \end{array}$$ (10) where $`a_{1,2}`$ are given by $$a_{1,2}=1-\frac{3}{2}\alpha \pm \sqrt{\left(1-\frac{3}{2}\alpha \right)^2+3-\alpha }.$$ (11) For the two quintuplet states the energies are $$E_{\mathrm{Quint}}=\{\begin{array}{c}\alpha (\frac{1}{3}+\alpha ),\hfill \\ 3\alpha (\frac{1}{3}+\alpha ),\hfill \end{array}$$ (12) while for the septuplet state $`E_{\mathrm{Sept}}=4\alpha (1/3+\alpha )`$. 
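The zero-energy property of eqs. (6)-(7) and the septuplet trial energy quoted after eq. (12) can be checked without any diagonalization, simply by applying the Hamiltonian to explicitly constructed states on the smallest ring where the three trimer coverings are genuinely distinct, $`N=9`$. A sketch (our construction; the trimer singlets of eq. (4) are written through the Levi-Civita tensor, and sparse matrices keep the $`3^9`$-dimensional problem cheap):

```python
import numpy as np
import scipy.sparse as sp

N, alpha = 9, 1.0
Sz = sp.csr_matrix(np.diag([1.0, 0.0, -1.0]))
Sp_ = sp.csr_matrix(np.sqrt(2.0) * np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]]))
Sm_ = Sp_.T.tocsr()
I3 = sp.identity(3, format='csr')

def pair_op(oi, oj, i, j):
    ops = [I3] * N
    ops[i], ops[j] = oi, oj
    out = ops[0]
    for o in ops[1:]:
        out = sp.kron(out, o, format='csr')
    return out

def A(i, j):
    # A_{i,j} of eq. (2); S_i.S_j = Sz Sz + (S+ S- + S- S+)/2 keeps it real
    ss = pair_op(Sz, Sz, i, j) + 0.5 * (pair_op(Sp_, Sm_, i, j) + pair_op(Sm_, Sp_, i, j))
    return (ss @ ss + alpha * ss + (alpha - 1.0) * sp.identity(3**N, format='csr')).tocsr()

# eq. (6) on the periodic ring
H = sum(A(i, (i + 2) % N) @ A((i + 1) % N, (i + 3) % N) for i in range(N))

eps = np.zeros((3, 3, 3))
for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]: eps[p] = 1.0
for p in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]: eps[p] = -1.0

# eq. (7): letters a..i label sites 1..9; e.g. Psi_1 = S[1,3,5] S[4,6,8] S[7,9,2] (mod 9)
psi1 = np.einsum('ace,dfh,gib->abcdefghi', eps, eps, eps).reshape(-1) / 6**1.5
psi2 = np.einsum('bdf,egi,hac->abcdefghi', eps, eps, eps).reshape(-1) / 6**1.5
psi3 = np.einsum('ceg,fha,ibd->abcdefghi', eps, eps, eps).reshape(-1) / 6**1.5
for k, psi in enumerate((psi1, psi2, psi3), 1):
    print(f"|H Psi_{k}| = {np.linalg.norm(H @ psi):.1e}")   # numerically zero

# septuplet trial state, cf. eq. (8): trimer S[1,3,5] replaced by the stretched |+,+,+>
plus = np.array([1.0, 0.0, 0.0])
phi = np.einsum('a,c,e,dfh,gib->abcdefghi', plus, plus, plus, eps, eps).reshape(-1) / 6.0
E_sept = phi @ (H @ phi)
print(E_sept, 4 * alpha * (1.0/3 + alpha))   # the two agree
```

On the ring only two Hamiltonian terms fail to annihilate the septuplet trial state, exactly the two listed in eq. (9), which is why the expectation value reproduces $`E_{\mathrm{Sept}}=4\alpha (1/3+\alpha )`$.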
One checks that the triplet gap with $`a_1`$ in eq. (10) and the first quintuplet gap in eq. (12) are the lowest ones. The lowest-energy propagating modes are probably solitonic excitations, which are moving domain walls between two different ground states. In a ring of length $`N=3p`$, where the lattice spacing is taken to be unity, a soliton cannot appear alone. In the simplest case one needs three solitons to satisfy the periodic boundary condition. However, provided the chain is long enough, solitons can be studied separately by assuming that the two ends are in different ground states. A domain wall that perturbs the system as little as possible can be obtained on a chain of $`3p+2`$ sites between state $`\mathrm{\Psi }_1`$ on the left-hand side and $`\mathrm{\Psi }_3`$ on the right-hand side by having the spins at sites $`3l-2`$ and $`3l+1`$ form a singlet, triplet or quintuplet, while all other spins are in their trimer singlet ground state. Similar domain walls appear between state $`\mathrm{\Psi }_2`$ on the left and $`\mathrm{\Psi }_1`$ on the right, or $`\mathrm{\Psi }_3`$ on the left and $`\mathrm{\Psi }_2`$ on the right. Such a state can be written as $$\mathrm{\Phi }_\mathrm{c}(l)=\underset{i=1}{\overset{l-1}{\prod }}S[3i-2,3i,3i+2]c[3l-2,3l+1]\underset{j=l}{\overset{p}{\prod }}S[3j,3j+2,3j+4],$$ (13) where $`c`$ stands for $`s`$, $`t`$ or $`q`$. It is shown in fig. 3. Only one term in the Hamiltonian gives a non-zero contribution to the energy, namely $$A_{3l-2,3l}A_{3l-1,3l+1}.$$ (14) Although the Hamiltonian has no matrix element between states with different site index $`l`$, a propagating mode is obtained since there is a finite overlap between these states. 
Now looking for the moving soliton in the form $$\mathrm{\Phi }_\mathrm{c}(q)=\underset{l=1}{\overset{p}{\sum }}e^{i3ql}\mathrm{\Phi }_\mathrm{c}(l),$$ (15) we find for the energy of the singlet and quintuplet excitations $$E_{\mathrm{s},\mathrm{q}}(q)=\frac{1}{4}\left(\frac{1}{3}+\alpha \right)^2[5-3\mathrm{cos}3q],$$ (16) and $$E_\mathrm{t}(q)=\frac{1}{4}\left(\frac{1}{3}+\alpha \right)^2[5+3\mathrm{cos}3q]$$ (17) for the triplet excitations. A somewhat more complicated domain wall is shown in fig. 4, between state $`\mathrm{\Psi }_1`$ on the left and state $`\mathrm{\Psi }_3`$ on the right. Because of its symmetric shape, the Hamiltonian has finite matrix elements between domain walls on different sites, giving direct propagation. In addition to that the overlap has also to be considered, giving finally a rather complicated dispersion relation. When the two spins form a singlet, the dispersion relation of the soliton is: $$E_\mathrm{s}(q)=\frac{2}{9}\left[\left(1+3\alpha \right)^2+\left(1-2\alpha \right)^2\mathrm{cos}3q\right]\frac{5-3\mathrm{cos}3q}{14/3-2\mathrm{cos}3q},$$ (18) while for the triplet $$E_\mathrm{t}(q)=2\left(\frac{1}{3}+\alpha \right)^2\left[1+\frac{1}{4}\mathrm{cos}3q\right]\frac{5+3\mathrm{cos}3q}{16/3+4\mathrm{cos}3q},$$ (19) and for the quintuplet $$E_\mathrm{q}(q)=\frac{2}{9}\left[\left(1+3\alpha \right)^2+\frac{1}{4}\left(1+\alpha \right)^2\mathrm{cos}3q\right]\frac{5-3\mathrm{cos}3q}{14/3-2\mathrm{cos}3q}.$$ (20) It is interesting to speculate what happens if the model presented in this paper is perturbed away from the form given in eq. (6). As mentioned earlier, both at $`\alpha =0`$ and $`\alpha =3`$ the degeneracy of the ground state gets exponentially large. Beyond these points the system becomes ferromagnetic, or a first-order phase transition to a massive Haldane phase occurs. If extra parameters are introduced by making the couplings of the two-spin and four-spin terms independent, the trimerized phase will survive because of its finite gap. 
However, at some point the two legs of the zig-zag ladder become decoupled, and other phases may appear. Finally we mention that the model can easily be generalized without changing its essential properties. Namely, for $`N=6p`$ one can have different parameters $`\alpha `$ and $`\beta `$ in the operators $`A`$ acting on the upper and the lower legs, respectively. We would like to thank Dr. A. Schadschneider for this hint. Furthermore, we mention that we have found other model Hamiltonians with trimer ground states. However, in all these cases they are degenerate with other ground states and the degeneracy is exponentially large. Clearly these are less interesting. One of the authors (J. S.) is grateful to the University of Cologne for the hospitality during his visit, where most of this work was done. The authors acknowledge the financial support of the Deutsche Forschungsgemeinschaft. The work was partially supported by the Hungarian Research Fund (OTKA) grant No. 30173.
no-problem/9910/astro-ph9910515.html
ar5iv
text
# The Possible Role of R-modes in Post-glitch Relaxation of Crab ## I Introduction Many studies of the (nonradial) oscillations of neutron stars concern the question of the stability of the various modes realised in these stars. The so-called CFS nonaxisymmetric instability, driven by the emission of gravitational radiation, leads to a very rapid spinning down of a neutron star at spin frequencies larger than a certain critical frequency. The determination of such critical frequencies for various modes and their observational implications has been addressed by many authors (eg. Lindblom 1995). Also, for the stable phase, when the star rotates slower than the associated critical frequency, possible observational manifestations of the various modes (including those which arise from MHD, superconductor, and superfluid effects) have been investigated through a determination of the pulsation frequencies and the damping times due to the different dissipation mechanisms (eg. Van Horn 1980; McDermott, Van Horn & Hansen 1988; Mendell 1991; Ipser & Lindblom 1991). A recently discovered curious feature of the so-called r-modes is their generically unstable nature (Andersson 1998); i.e. the corresponding frequency for the onset of the CFS instability equals zero for some of these modes. Nevertheless, a nonzero critical frequency $`\mathrm{\Omega }_\mathrm{c}`$ is still defined, in this case too, based on a comparison between the “growth” time of the mode (driven by gravitational radiation) and its “damping” time due to the viscosity. For $`\mathrm{\Omega }>\mathrm{\Omega }_\mathrm{c}`$, where $`\mathrm{\Omega }`$ is the angular frequency of the star, the unstable mode of oscillation grows in amplitude and a significant decrease in $`\mathrm{\Omega }`$ is predicted, since the amplitude of the emitted gravitational waves is proportional to the amplitude of the pulsation. 
However, when $`\mathrm{\Omega }<\mathrm{\Omega }_\mathrm{c}`$ the viscosity-driven damping is more efficient than the growth, which is still driven by the emission of the gravity waves. Much attention has again been paid to a determination of the relevant $`\mathrm{\Omega }_\mathrm{c}`$ and the observational implications of the unstable phase in the case of r-modes (Andersson, Kokkotas & Stergioulas 1998; Madsen 1998; Levin 1998). We are, however, concerned here with the consequences of r-mode oscillations in neutron stars during their “stable” phase, when $`\mathrm{\Omega }<\mathrm{\Omega }_\mathrm{c}`$. This is indeed expected to be the relevant phase for all presently known pulsars (Andersson, Kokkotas & Schutz 1998). In particular, we will assume that r-modes, originally absent in a pulsar, are excited instantaneously at the event of a glitch, which is observable as a sudden spin-up of the star. The excited r-modes, which will subsequently be damped effectively by the viscous effects, would nevertheless result in a certain amount of angular momentum being carried away from the star by the gravitational waves. The total loss of angular momentum until the modes are damped out is computed, for assumed initial amplitudes of the excited r-modes. The resulting decrease in the rotation frequency of the star is compared with the (unrelaxed part of the) increase caused by the glitch. The predicted effect is seen to be negligible in all pulsars, except in the case of the Crab, which has also shown a correspondingly unique observational behavior, as will be discussed. Normal modes of oscillation, including the r-modes, in a neutron star might be excited at a glitch, which is observed as a sudden rise of the spin frequency in radio pulsars (see, eg., Lyne 1995). 
The excitation could be a result of the “shaking” of the star due to a sudden transfer of angular momentum, or a sudden variation in the strength of the rotational coupling between its various components, or due to a sudden change of its moment of inertia accompanying structural changes. All these effects are commonly considered as possibilities which might occur at a glitch (eg. Blandford 1992). It is further noted that the special role attributed here to the r-modes during post-glitch relaxation has nothing to do with their distinctive unstable property indicated earlier. The other (say, p-, f-, g-) modes of oscillation in a neutron star could, in principle, have played the same role as well. The distinction lies, however, in the associated damping times. The damping time for the r-modes (in the case of the Crab pulsar) is comparable to that of the observed post-glitch relaxation. In contrast, the damping time for the other modes is either too short or too long by many orders of magnitude (eg., McDermott et al 1984, Kokkotas and Schmidt 1999). Hence the total loss of angular momentum due to gravitational radiation by the excited modes, during intervals between successive glitches in, say, the Crab pulsar, turns out to be negligible for all the other modes except for the r-modes (see Fig. 1 below). ## II The Predicted Effect In order to estimate the effect of the r-mode instability, in a “stable” neutron star, on its post-glitch relaxation, we have used the model described by Owen et al. (1998). The total angular momentum of a star is parameterized in terms of the two degrees of freedom of the system. One is the uniformly rotating equilibrium state, which is represented by its angular velocity $`\mathrm{\Omega }_{\mathrm{eq}}`$. The other is the excited r-mode, which is parameterized by its magnitude $`\alpha `$, bound to an upper limiting value of $`\alpha =1`$ in the linear approximation regime treated in the model. 
Thus, the total angular momentum $`J`$ of the star is written as a function of the two parameters $`\mathrm{\Omega }_{\mathrm{eq}}`$ and $`\alpha `$: $$J=I_{\mathrm{eq}}\mathrm{\Omega }_{\mathrm{eq}}+J_c,$$ (1) where $`I_{\mathrm{eq}}=\stackrel{~}{I}MR^2`$ is the moment of inertia of the equilibrium state, and $`J_c=-\frac{3}{2}\stackrel{~}{J}\alpha ^2\mathrm{\Omega }_{\mathrm{eq}}MR^2`$ is the canonical angular momentum of the $`l=2`$ r-mode, which is negative in the rotating frame of the equilibrium star. The dimensionless constants $`\stackrel{~}{I}`$ and $`\stackrel{~}{J}`$ depend on the detailed structure of the star, and for the adopted $`n=1`$ polytropic model have the values $`\stackrel{~}{I}=0.261`$ and $`\stackrel{~}{J}=0.01635`$. Also $`R=12.54`$ km and $`M=1.4M_{\odot }`$ are the assumed radius and mass of the star, for the same polytropic model. Eq. 1 above implies that an assumed instantaneous excitation of r-modes at a glitch would cause a sudden increase in $`\mathrm{\Omega }_{\mathrm{eq}}`$. For definiteness, we define the “real” observable rotation frequency $`\mathrm{\Omega }`$ of the star as $`\mathrm{\Omega }=\frac{J}{I}`$, where $`I`$ is the moment of inertia of the real star. The two are equal, $`\mathrm{\Omega }=\mathrm{\Omega }_{\mathrm{eq}}`$, in the absence of the r-modes, ie. before the excitation of the modes at a glitch and after the modes are damped out. If there were no loss of angular momentum (by gravitational radiation) accompanying the post-glitch damping (by viscosity) of the modes, $`\mathrm{\Omega }_{\mathrm{eq}}`$ would recover its extrapolated pre-glitch value; ie. its initial rise would be compensated exactly. However, due to the net loss of angular momentum by the star, the post-glitch decrease of $`\mathrm{\Omega }_{\mathrm{eq}}`$ overshoots its initial rise. 
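Since $`J`$ is conserved at the (assumed instantaneous) excitation, eq. (1) with the negative $`J_c`$ fixes the size of the sudden rise in $`\mathrm{\Omega }_{\mathrm{eq}}`$. A minimal sketch of this bookkeeping, using the polytropic constants quoted above ($`Q`$ is the combination defined after eqs. (2) below):

```python
# conservation of J = I_eq*Omega_eq + J_c at the instant the mode is excited:
# I*Omega_pre = I*Omega_eq*(1 - alpha0^2 Q)  =>  fractional rise ~ alpha0^2 Q
Itil, Jtil = 0.261, 0.01635        # n = 1 polytrope values quoted in the text
Q = 1.5 * Jtil / Itil              # = 0.094
alpha0 = 0.04                      # mode amplitude adopted later in the text

rise = 1.0 / (1.0 - alpha0**2 * Q) - 1.0
print(f"fractional rise in Omega_eq: {rise:.2e}")   # ~ alpha0^2 Q = 1.5e-4
```

Note that this instantaneous rise in $`\mathrm{\Omega }_{\mathrm{eq}}`$ is a bookkeeping quantity; as discussed next, whether it is observable is a separate question.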
The negative offset between the values of $`\mathrm{\Omega }_{\mathrm{eq}}`$ before the excitation of the modes and after they are damped out is the quantity of interest for our discussion. The question of whether the instantaneous rise in the value of $`\mathrm{\Omega }_{\mathrm{eq}}`$ at a glitch, due to the excitation of the r-modes, is observable or not is a separate problem, and its resolution would have no consequence for the net loss of angular momentum from the star, which is the relevant quantity here. It is noted that the distinction between $`\mathrm{\Omega }`$ and $`\mathrm{\Omega }_{\mathrm{eq}}`$, in the presence of modes, is quantitatively negligible in all cases of interest, and is usually disregarded. Also, one might dismiss an increase in $`\mathrm{\Omega }_{\mathrm{eq}}`$, as implied by Eq. 1, as being observable as a spin-up of the star, since for an inertial outside observer the r-modes rotate in the prograde direction and their excitation should result, if at all, in a spin-down of the star. Moreover, an excitation of the r-modes should not result, by itself, in any real change of the rotation frequency of the star at all, because one could not distinguish two physically separate parts of the stellar material such that the two components of angular momentum in Eq. 1 may be assigned to the two parts separately. The time evolution of the quantities $`\alpha `$ and $`\mathrm{\Omega }_{\mathrm{eq}}`$ may be determined from the coupled equations (Owen et al. 
1998): $`{\displaystyle \frac{\mathrm{d}\mathrm{\Omega }_{\mathrm{eq}}}{\mathrm{d}t}}=-{\displaystyle \frac{2\mathrm{\Omega }_{\mathrm{eq}}}{\tau _\mathrm{v}}}{\displaystyle \frac{\alpha ^2Q}{1+\alpha ^2Q}},`$ (2a) $`{\displaystyle \frac{\mathrm{d}\alpha }{\mathrm{d}t}}=-{\displaystyle \frac{\alpha }{\tau _{\mathrm{gr}}}}-{\displaystyle \frac{\alpha }{\tau _\mathrm{v}}}{\displaystyle \frac{1-\alpha ^2Q}{1+\alpha ^2Q}},`$ (2b) where $`Q=\frac{3}{2}\frac{\stackrel{~}{J}}{\stackrel{~}{I}}=0.094`$, for the adopted equilibrium model of the star. The two timescales $`\tau _\mathrm{v}(>0)`$ and $`\tau _{\mathrm{gr}}(<0)`$ are the viscous damping and gravitational growth timescales, respectively. The viscous time has two contributions, from the shear and bulk viscosities, with corresponding timescales $`\tau _{\mathrm{sv}}`$ and $`\tau _{\mathrm{bv}}`$, respectively. The overall “damping” timescale $`\tau `$ for the mode, which is a measure of the period over which the excited mode will persist, is defined as $$\frac{1}{\tau }=\frac{1}{\tau _\mathrm{v}}+\frac{1}{\tau _{\mathrm{gr}}}=\frac{1}{\tau _{\mathrm{sv}}}+\frac{1}{\tau _{\mathrm{bv}}}+\frac{1}{\tau _{\mathrm{gr}}}$$ (3) Following Owen et al. (1998) we use $`\tau _{\mathrm{sv}}=2.52\times 10^8(\mathrm{s})T_9^2`$, $`\tau _{\mathrm{bv}}=4.92\times 10^{10}(\mathrm{s})T_9^{-6}\mathrm{\Omega }_3^{-2}`$, and $`\tau _{\mathrm{gr}}=-1.15\times 10^6(\mathrm{s})\mathrm{\Omega }_3^{-6}`$, where $`T_9`$ is the temperature, $`T`$, in units of $`10^9`$ K, and $`\mathrm{\Omega }_3`$ is in units of $`10^3\mathrm{rad}\mathrm{s}^{-1}`$. The above expression for $`\tau `$, which we have used for the following calculations, does not however include the role of superfluid mutual friction in damping the oscillations. We have further taken that effect into account by also including a damping time $`\tau _{\mathrm{mf}}=4.28\times 10^8(\mathrm{s})\mathrm{\Omega }_3^{-5}`$, due to the mutual friction (Lindblom and Mendell 1999), in the calculation of $`\tau `$. 
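With these timescales, eqs. (2) can be integrated directly. The sketch below does this with a simple fixed-step RK4 for Crab-like parameters, in the sign convention of Owen et al. (1998); the interior temperature $`T_9=0.3`$ and spin $`\mathrm{\Omega }=190`$ rad/s are our assumptions, since the text does not quote its exact input values:

```python
import numpy as np

Q, alpha0 = 0.094, 0.04
T9, Om0 = 0.3, 190.0                # assumed Crab-like interior temperature and spin (rad/s)

def tau_v(T9, Om):
    # shear + bulk viscous damping times (s); bulk is negligible at these temperatures
    tau_sv = 2.52e8 * T9**2
    tau_bv = 4.92e10 * T9**-6 * (Om / 1e3)**-2
    return 1.0 / (1.0 / tau_sv + 1.0 / tau_bv)

def tau_gr(Om):
    return -1.15e6 * (Om / 1e3)**-6  # negative: gravitational-radiation growth

def rhs(y):
    Om_eq, a = y
    tv, tg = tau_v(T9, Om_eq), tau_gr(Om_eq)
    f = a * a * Q / (1.0 + a * a * Q)
    dOm = -2.0 * Om_eq / tv * f
    da = -a / tg - a / tv * (1.0 - a * a * Q) / (1.0 + a * a * Q)
    return np.array([dOm, da])

# instantaneous excitation at t = 0: J conserved, Omega_eq jumps up by ~ alpha0^2 Q
Om_pre = Om0
y = np.array([Om_pre / (1.0 - alpha0**2 * Q), alpha0])
dt, t_end = 2.0e4, 3.0e8            # integrate for ~10 damping times
for _ in range(int(t_end / dt)):    # classic RK4 step
    k1 = rhs(y); k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2); k4 = rhs(y + dt * k3)
    y += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

offset = y[0] / Om_pre - 1.0        # net change once the mode has died out
print(f"net Delta_Omega/Omega = {offset:.1e}")   # negative, ~ 1e-7 in magnitude, cf. Fig. 1a
```

The run reproduces the order of magnitude of the Crab curve in Fig. 1a: the post-glitch decrease overshoots the initial rise, leaving a net negative offset of order $`10^{-7}`$.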
The effect of the mutual friction is nevertheless seen to be negligible, and the computed curves shown below remain almost the same even when the effect due to the mutual friction is included. By integrating Eqs 2 numerically, for a given initial value of $`\alpha `$, one may therefore follow the time evolution of $`\alpha `$ and $`\mathrm{\Omega }_{\mathrm{eq}}`$, which together with Eq. 1 determine the time evolution of the total angular momentum, $`J`$, and hence the time evolution of $`\mathrm{\Omega }`$. Fig. 1 shows the computed time evolution for the absolute value of the resulting (negative) fractional change $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}`$ in the spin frequency (Fig. 1a) and also the change $`\frac{\mathrm{\Delta }\dot{\mathrm{\Omega }}}{\dot{\mathrm{\Omega }}}`$ in the spin-down rate of the star (Fig. 1b), starting at the glitch epoch, which corresponds to time $`t=0`$. The results in Fig. 1 are for a choice of an initial value of $`\alpha _0=0.04`$, and for the assumed values of $`T`$ and $`\mathrm{\Omega }`$ corresponding to the Crab and Vela pulsars, as indicated. Fig. 1a shows that for the same amplitude of the r-modes assumed to be excited at a glitch, the resulting loss of angular momentum through gravitational radiation would be much larger in the Crab than in Vela, ie. by more than 3 orders of magnitude. (Note that the curve for Vela in Fig. 1a represents the results after being multiplied by a factor of $`10^3`$.) Furthermore, for the adopted choice of parameter values, the magnitude of the corresponding decrease in $`\mathrm{\Omega }`$ for the Crab is $`|\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}|\sim 10^{-7}`$ (Fig. 1a). The observational consequence of such an effect would nevertheless be closely similar to what has already been observed during the post-glitch relaxations of, only, the Crab pulsar. 
Before proceeding further with the Crab, we note that the post-glitch effects of the excitation of r-modes would not have much observational consequence for the Vela, and even less so for the older pulsars, which are colder and rotate more slowly. There are two, not unrelated, reasons for this: in the older pulsars the r-modes a) are damped out faster (ie. have smaller values of $`\tau `$), and b) result in less gravitational radiation. The dependence of $`\tau `$ on the stellar interior temperature is shown in Fig. 2a. For the colder, i.e. older, neutron stars the r-modes are expected to die out very fast. The damping timescale for a pulsar with a period $`P\sim 1\mathrm{s}`$, being colder than $`10^8`$ K, could be as short as a few hours (Fig. 2a), and the r-modes would have died out at times longer than that after a glitch. For the hot Crab pulsar, on the other hand, the r-modes are expected to persist for 2-3 years after they are excited, say, at a glitch. The value of $`\tau `$ decreases for older pulsars due to both their longer periods as well as their lower temperatures, but the effect due to the latter dominates by many orders of magnitude, for the standard cooling curves of neutron stars (Urpin et al. 1993). The second reason, ie. the loss of angular momentum being negligible in older pulsars, was already demonstrated in Fig. 1a, by a comparison between the Crab and Vela pulsars. We have verified it also for the case of pulsars older than Vela. It may also be demonstrated analytically from Eqs 2, in the limit of $`\alpha ^2Q<<1`$. The initial increase in $`\mathrm{\Omega }_{\mathrm{eq}}`$ due to the excitation of r-modes with a given initial amplitude $`\alpha _0`$ is seen from Eq. 1 to be $`|\frac{\mathrm{\Delta }\mathrm{\Omega }_{\mathrm{eq}}}{\mathrm{\Omega }_{\mathrm{eq}}}|_0=\alpha _0^2Q`$. 
The subsequent damping of the modes results in a secular decrease in $`\mathrm{\Omega }_{\mathrm{eq}}`$, and the total decrease at large $`t\to \mathrm{\infty }`$ would be $`|\frac{\mathrm{\Delta }\mathrm{\Omega }_{\mathrm{eq}}}{\mathrm{\Omega }_{\mathrm{eq}}}|_{\mathrm{\infty }}\simeq \frac{\tau }{\tau _\mathrm{v}}\alpha _0^2Q`$, which is true for $`|\mathrm{\Delta }\mathrm{\Omega }_{\mathrm{eq}}|<<\mathrm{\Omega }_{\mathrm{eq}}`$. Note that in the absence of gravitational radiation losses (ie. $`\frac{1}{\tau _{\mathrm{gr}}}=0;\tau =\tau _\mathrm{v}`$) the total decrease would be the same as the initial increase, which is expected for the role of viscous damping alone. The difference between these two changes (total decrease minus initial increase) in $`\mathrm{\Omega }_{\mathrm{eq}}`$ would correspond to the total loss of angular momentum from the star, hence to the net decrease in its observable rotation frequency, ie. $$|\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}|_{\mathrm{\infty }}=\frac{\tau _\mathrm{v}}{|\tau _{\mathrm{gr}}|}\alpha _0^2Q$$ (4) which is valid in the limit of $`\frac{\tau _\mathrm{v}}{|\tau _{\mathrm{gr}}|}<<1`$. Fig. 2b shows the dependence of the quantity $`\frac{\tau _\mathrm{v}}{|\tau _{\mathrm{gr}}|}`$ on the stellar rotation frequency, and also on its internal temperature. While for the Crab $`\frac{\tau _\mathrm{v}}{|\tau _{\mathrm{gr}}|}\sim 10^{-3}`$, its value is much smaller for the older pulsars, due to both their lower $`\mathrm{\Omega }`$ as well as their lower $`T`$ values. The dependence on the temperature is, however, seen to be much weaker than that on the rotation frequency, in contrast to the dominant role of the temperature in determining the value of the total damping time $`\tau `$, as indicated above. As is seen in Fig. 
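Equation (4) reduces the net offset to a ratio of timescales. A quick numerical illustration for Crab-like and Vela-like parameters (the temperatures are our assumptions, since the text does not list the values used for Fig. 2b):

```python
def offset_eq4(T9, Om, alpha0=0.04, Q=0.094):
    """|Delta Omega / Omega|_inf = (tau_v / |tau_gr|) * alpha0^2 * Q, eq. (4)."""
    tau_sv = 2.52e8 * T9**2
    tau_bv = 4.92e10 * T9**-6 * (Om / 1e3)**-2
    tau_v = 1.0 / (1.0 / tau_sv + 1.0 / tau_bv)
    tau_gr = 1.15e6 * (Om / 1e3)**-6      # magnitude of the (negative) growth time
    return (tau_v / tau_gr) * alpha0**2 * Q

crab = offset_eq4(T9=0.3, Om=190.0)       # ~ 1e-7: overshoots Crab glitch jumps (~1e-8)
vela = offset_eq4(T9=0.06, Om=70.4)       # far below Vela glitch jumps (~1e-6)
print(f"Crab: {crab:.1e}   Vela: {vela:.1e}")
```

The steep $`\mathrm{\Omega }^6`$ dependence of $`1/|\tau _{\mathrm{gr}}|`$ is what makes the effect visible in the fast Crab and negligible in the slower, colder Vela.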
2b, for the Vela $`\frac{\tau _\mathrm{v}}{|\tau _{\mathrm{gr}}|}<10^{-7}`$, which means a maximum predicted value of $`|\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}|_{\mathrm{\infty }}<10^{-8}`$, even for large values of $`\alpha _0\simeq 1`$. This has to be contrasted with the glitch-induced values of $`|\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}|\sim 10^{-6}`$ in Vela, which shows the insignificance of the role of r-modes in its post-glitch behaviour. ## III The Crab The observed post-glitch relaxation of the Crab pulsar has been unique in that the rotation frequency of the pulsar is seen to decrease to values less than its pre-glitch extrapolated values (Lyne et al. 1993). So far, two mechanisms have been suggested to account for the observed excess loss of angular momentum during the post-glitch relaxations of the Crab. The first mechanism, in the context of the vortex creep theory of Alpar et al. (1984, 1996), invokes the generation, at a glitch, of a so-called “capacitor” region within the pinned superfluid in the crust of a neutron star, resulting in a permanent decoupling of that part of the superfluid. Nevertheless, this suggestion has been disqualified (Lyne et al. 1993), since the moment of inertia required to have been decoupled permanently in such regions during the past history of the pulsar is found to be much more than that permitted for all of the superfluid component in the crust of a neutron star. In another attempt, Link et al. (1992) have attributed the excess loss of angular momentum to an increase in the electromagnetic braking torque of the star, as a consequence of a sudden increase, at the glitch, in the angle between its magnetic and rotation axes. As they point out, such an explanation is left to future observational verification, since it should also accompany other observable changes in the pulsar emission, which have not been detected, so far, in any of the resolved glitches in various pulsars. 
Moreover, the suggestion may be questioned also on account of its long-term consequences for pulsars in general. Namely, the inclination angle would be expected to show a correlation with the pulsar age, being larger in the older pulsars, which have undergone more glitches. No such correlation has been deduced from the existing observational data. Also, and even more seriously, the assumption that the braking torque depends on the inclination angle is in sharp contradiction with the common understanding of the pulsar spin-down process. The currently inferred magnetic field strengths of all radio pulsars are in fact based on the opposite assumption, namely that the torque is independent of the inclination angle. The well-known theoretical justification for this, following Goldreich & Julian (1969), is that the torque is caused by the combined effects of the magnetic dipole radiation and the emission of relativistic particles, which compensate each other for the various angles of inclination (see, eg., Manchester & Taylor 1977; Srinivasan 1989). The excitation of r-modes at a glitch and the resulting emission of gravitational waves could, however, account for the “sink” of angular momentum required to explain the peculiar post-glitch relaxation behavior of the Crab pulsar. As is shown in Fig. 1, for values of $`\alpha _0\simeq 0.04`$ the predicted time evolution of $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}`$ and $`\frac{\mathrm{\Delta }\dot{\mathrm{\Omega }}}{\dot{\mathrm{\Omega }}}`$ during the 3–5 years of the inter-glitch intervals in the Crab might explain the observations. 
That is, the predicted total change in the rotation frequency of the star, $`|\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}|`$, is much larger than the corresponding jump $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}\sim 10^{-8}`$ at the glitch, which explains why the post-glitch values of $`\mathrm{\Omega }`$ should fall below those expected from an extrapolation of its pre-glitch behavior. Also, the predicted values of $`\frac{\mathrm{\Delta }\dot{\mathrm{\Omega }}}{\dot{\mathrm{\Omega }}}\sim 10^{-4}`$, after a year or so (Fig. 1b), are in good agreement with the observed persistent shift in the spin-down rate of the Crab (Lyne et al. 1995). The predicted increase in the spin-down rate would, however, diminish as the modes excited at a glitch are damped out, leaving a permanent negative offset in the spin frequency. Hence the above so-called persistent shift in the spin-down rate of the Crab may be explained in terms of the effect of r-modes, as long as it persists during the inter-glitch intervals of 2-3 years. It may be noted that a permanent persistent shift in the spin-down rate at a glitch may be caused by a sudden decrease in the moment of inertia of the star. However, this effect could not, by itself, result in the observed negative offset in the spin frequency at the same time. The same mechanism would be expected to be operative during the post-glitch relaxation in the other, colder and slower, pulsars as well. However, for similar values of $`\alpha _0`$, ie. the same initial amplitude of the excited modes, the effect is not expected to become “visible” in the older pulsars. In particular, for the Vela the initial jump in frequency at a glitch, $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}\sim 10^{-6}`$, is seen from Fig. 1a to be much larger (ie. by some four orders of magnitude) than the above effect due to the r-modes. 
In other words, while the predicted loss in the stellar angular momentum due to the excitation of r-modes results in a negative $`\frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}`$ which, in the case of the Crab, overshoots the initial positive jump at a glitch, for the Vela and older pulsars it comprises only a negligible fraction of the positive glitch-induced jump. A more detailed study should, however, take into account the added complications due to the internal relaxation of the various components of the star, which is highly model dependent. The observed initial rise in $`\mathrm{\Omega }`$ need not be totally compensated for by the losses due to r-modes which we have discussed, since part of it could be relaxed internally (by a transfer of angular momentum between the “crust” and other components, and/or temporary changes in the effective moment of inertia of the star) even in the absence of any real sink for the angular momentum of the star. Such considerations would not only leave the above conclusions valid but would also allow for even smaller values of the initial amplitude of the excited modes, compared to our presently adopted value of $`\alpha _0\sim 0.04`$. The suggested effect of the r-modes on the post-glitch relaxation of pulsars should be understood as one operating in addition to the internal relaxation which is commonly invoked. While the latter could account only for a relaxation back to (or above) the extrapolated pre-glitch values of the spin frequency, the additional new effect due to the r-modes may explain the excess spin-down to lower values, as is observed in the Crab pulsar. It is further noted that the above estimates are for an adopted value of $`Q=9.4\times 10^{-2}`$, which corresponds to the particular choice of a polytropic model star.
Differences in structure among pulsars, in particular between the Crab and Vela, which have been invoked in the past (see, e.g., Takatsuka & Tamagaki 1989), could be further invoked to find a better agreement with the data for the above effect of r-modes as well. Also, the initial amplitude of the excited modes need not be the same in all pulsars. It is reasonable to assume that in a hotter and faster rotating neutron star, as for the Crab, larger initial amplitudes, i.e. larger values of $`\alpha _0`$, are realised than in the colder, slower ones. The quantity $`\alpha _0`$ is, however, a free parameter in our calculations, for which we have chosen a value $`\alpha _0\sim 0.04`$ that results in an appreciable loss of angular momentum for the Crab pulsar. Nevertheless, for the same choice of the value of $`\alpha _0`$ the effect of r-modes would differ between the Crab and the other pulsars, in agreement with the observations. The associated energy of the excited r-modes, in the rotating frame, is $`\stackrel{~}{E}=\frac{1}{2}\alpha _0^2\stackrel{~}{J}\mathrm{\Omega }^2MR^2`$, which results in $`\frac{\stackrel{~}{E}}{E_{\mathrm{rot}}}\sim 10^{-4}`$ for a value of $`\alpha _0\sim 0.04`$, where $`E_{\mathrm{rot}}=\frac{I\mathrm{\Omega }^2}{2}`$ is the rotational energy of the star. In contrast, the energy $`\mathrm{\Delta }E\sim I\mathrm{\Omega }\mathrm{\Delta }\mathrm{\Omega }`$ released at a glitch, associated with a change $`\mathrm{\Delta }\mathrm{\Omega }`$ in the rotation frequency, gives $`\frac{\mathrm{\Delta }E}{E_{\mathrm{rot}}}\sim \frac{\mathrm{\Delta }\mathrm{\Omega }}{\mathrm{\Omega }}`$, which is much smaller than the above even for the giant glitches of the Vela pulsar. We note, however, that the excitation of the oscillation modes is a separate process taking place in the liquid core of the star, and need not be energetically comparable to the energy transfer/dissipation involved in a glitch.
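As a rough numerical check of the energy ratio quoted above, the arithmetic can be sketched as follows. This is our own illustration, not from the paper: the dimensionless structure constants for an $`n=1`$ polytrope are taken from Owen et al. (1998) and are assumptions here.

```python
# Rough check of the r-mode energy ratio quoted in the text.
# Assumed n = 1 polytrope structure constants (Owen et al. 1998):
J_tilde = 1.635e-2   # dimensionless r-mode angular-momentum integral
I_tilde = 0.261      # dimensionless moment of inertia, I = I_tilde * M * R**2

alpha_0 = 0.04       # adopted initial mode amplitude

# Mode energy (rotating frame): E_mode = 0.5 * alpha_0**2 * J_tilde * Omega**2 * M * R**2
# Rotational energy:            E_rot  = 0.5 * I_tilde * Omega**2 * M * R**2
# Omega, M and R cancel in the ratio:
ratio = (0.5 * alpha_0 ** 2 * J_tilde) / (0.5 * I_tilde)
print(f"E_mode / E_rot ~ {ratio:.1e}")  # of order 1e-4, as quoted
```

The ratio is independent of the stellar mass, radius, and spin, which is why a single number can be quoted for both the Crab and Vela.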
Furthermore, as indicated earlier, one should note that the distinction between the modes and the equilibrium star, as in Eq. 1, is only a mathematical convenience, and no real transfer of angular momentum takes place from one component to another. Hence the associated energy of the modes need not have any definite relation to that of the glitch process, which is accompanied by a real transfer of angular momentum between different components of the star. Unfortunately, the existing theory does not provide any prescription for determining the initial amplitude of the excited modes, for any assumed cause of it, say, at a glitch.

We are thankful to the referee for the valuable comments and criticisms which helped to improve the manuscript. A helpful correspondence from K. D. Kokkotas is also gratefully acknowledged. VR acknowledges the support of a Royal Society grant and thanks the Portsmouth Relativity and Cosmology Group for hospitality.

References

* Alpar M. A., Anderson P. W., Pines D., Shaham J., 1984, ApJ, 276, 325
* Alpar M. A., Chau H. F., Cheng K. S., Pines D., 1996, ApJ, 459, 706
* Andersson N., 1998, ApJ, 502, 708
* Andersson N., Kokkotas K. D., Schutz B. F., 1999, ApJ, 510, 846
* Andersson N., Kokkotas K. D., Stergioulas N., 1999, ApJ, 516, 846
* Blandford R. D., 1992, Nat., 359, 675
* Goldreich P., Julian W. H., 1969, ApJ, 157, 869
* Ipser J. R., Lindblom L., 1991, ApJ, 373, 213
* Kokkotas K. D., Schmidt B. G., 1999, e-print, gr-qc/9909058
* Levin Y., 1999, ApJ, 517, 328
* Lindblom L., 1995, ApJ, 438, 265
* Lindblom L., Mendell G., 1999, e-print, gr-qc/9909084
* Link B., Epstein R. I., Baym G., 1992, ApJ, 390, L21
* Lyne A. G., 1995, in Alpar M. A., Kiziloǧlu Ü., van Paradijs J. (eds), The Lives of the Neutron Stars. Kluwer, Dordrecht, p. 167
* Lyne A. G., Pritchard R. S., Smith F. G., 1993, MNRAS, 265, 1003
* Madsen J., 1998, Phys. Rev. Lett., 81, 3311
* Manchester R. N., Taylor J. H., 1977, “Pulsars”, Freeman, San Francisco
* McDermott P. N., Savedoff M. P., Van Horn H. M., Zweibel E. G., Hansen C. J., 1984, ApJ, 281, 746
* McDermott P. N., Van Horn H. M., Hansen C. J., 1988, ApJ, 325, 725
* McKenna J., Lyne A. G., 1990, Nat., 343, 349
* Mendell G., 1991, ApJ, 380, 515
* Owen B. J., Lindblom L., Cutler C., Schutz B. F., Vecchio A., Andersson N., 1998, Phys. Rev. D, 58, 084020
* Srinivaran G., 1989, Astronomy & Astrophysics Review, 1, 209
* Takatsuka T., Tamagaki R., 1989, Progress in Theoretical Physics, Vol. 82, No. 5, 945
* Urpin V. A., Van Riper K. A., 1993, ApJ, 411, L87
* Van Horn H. M., 1980, ApJ, 236, 899

Figure Captions:

Fig. 1a- The post-glitch time evolution of the absolute value of the fractional change in the spin frequency of a pulsar, caused by its loss of angular momentum due to gravitational waves driven by the r-modes that are assumed to be excited at the glitch epoch, $`t=0`$, with an initial amplitude of $`\alpha _0=0.04`$. The two curves correspond to assumed values of $`T_9=0.3`$ and $`\mathrm{\Omega }_3=0.19`$ for the Crab (thick line), and $`T_9=0.2`$ and $`\mathrm{\Omega }_3=0.07`$ for the Vela (thin line). Note that the curve for Vela represents the results after being multiplied by a factor of $`10^3`$.

Fig. 1b- Time evolution of the fractional change in the spin-down rate of a pulsar, caused by its loss of angular momentum due to the excitation of r-modes at $`t=0`$. A value of $`\dot{\mathrm{\Omega }}=-2.4\times 10^{-9}`$ rad s<sup>-2</sup>, with the other parameter values the same as for the Crab in Fig. 1a, has been assumed.

Fig. 2a- The dependence of the total damping timescale of r-modes on the internal temperature of a neutron star. Parameter values are the same as for the Crab in Fig. 1a.

Fig. 2b- The dependence of the quantity $`\frac{\tau _\mathrm{v}}{|\tau _{\mathrm{gr}}|}`$, which is a measure of the net post-glitch decrease in the rotation frequency, on the rotation frequency of a neutron star.
The two curves show the dependence on temperature, where $`T_9=0.3`$ (solid line) and $`T_9=0.2`$ (dotted line) have been used.
no-problem/9910/astro-ph9910177.html
ar5iv
text
# The Ly𝛼 forest in a truncated hierarchical structure formation

## 1 Introduction

The underlying paradigm in cosmology is the Cosmological Principle of isotropy and homogeneity (e.g. Peebles 1993). This principle was adopted at a time when observational data on the large scale structure in the universe were not available. At present it is well established that the distributions of galaxies and mass are clumpy on scales smaller than tens of Mpc. It is therefore important to quantify the gradual transition from clumpiness to homogeneity. This transition can be phrased in terms of the fractal dimension of the galaxy (or mass) distribution. The fractal dimension, $`D`$, is defined as $$N(<R)\propto R^D,$$ (1) where $`N`$ is the mean number of objects within a sphere of radius $`R`$ centered on a randomly selected object. On scales $`<10h^{-1}\mathrm{Mpc}`$ the fractal dimension of the galaxy distribution is $`D=1.2`$, but the fluctuations in the X-ray Background and in the Cosmic Microwave Background on scales larger than $`300h^{-1}\mathrm{Mpc}`$ are consistent with $`D=3`$ to very high precision (e.g. Peebles 1980; Wu, Lahav & Rees 1999), in agreement with the Cosmological Principle. On the other hand, there are claims (e.g. Pietronero, Montuori & Sylos-Labini 1997) that the distribution of galaxies is characterized by a fractal with $`D=2\pm 0.2`$ all the way to scales of $`500h^{-1}\mathrm{Mpc}`$. Because of the significance of this issue, it is important to investigate other independent probes of the matter distribution in the universe. Here we attempt to set constraints on the smoothness on large scales from the distribution of the normalized flux in the Ly$`\alpha `$ forest in the lines-of-sight to distant quasars. As a probe of the matter distribution the Ly$`\alpha `$ forest has a number of advantages over galaxies.
The forest reflects the neutral hydrogen (HI) distribution and is therefore likely to be a more direct tracer of the mass distribution than galaxies are, especially in low density regions. Unlike galaxy surveys, which are limited to the low redshift universe, the forest spans a large redshift interval, typically $`2.2<z<4`$, corresponding to a comoving interval of $`600h^{-1}\mathrm{Mpc}`$. Also, observations of the forest are not contaminated by complex selection effects such as those inherent in galaxy surveys. It has been suggested qualitatively by Davis (1997) that the absence of big voids in the distribution of Ly$`\alpha `$ absorbers is inconsistent with the fractal model (see also Wu et al. 1999). Furthermore, all lines-of-sight towards quasars look statistically similar. Here we predict the distribution of the flux in Ly$`\alpha `$ observations in a specific truncated fractal-like model. We find that indeed in this model there are too many voids compared with the observations and with conventional models for structure formation. The outline of the paper is as follows. In Section 2 we describe the truncated clustering hierarchy (TCH) model for the distribution of dark matter. In Section 3 we briefly review the physics of the Ly$`\alpha `$ forest and describe how to obtain the HI density field from the dark matter distribution. In Section 4 we show the results for the probability distribution function of the flux. In Section 5 we conclude with a summary and a general discussion.

## 2 The dark matter distribution

We consider a dark matter model that is based on a fractal-like distribution of points. There are many candidates for this type of model. Here we will focus on one of the most studied of these models, namely the Rayleigh-Lévy (RL) walk (Mandelbrot 1977; Peebles 1980, 1993). As we will see later, our analysis relies on rather general fractal properties, so we do not expect our conclusions to depend on the details of the fractal model.
Our main conclusion is that a fractal distribution of matter with $`D<2`$ is inevitably characterized by large voids, which is inconsistent with observations of the flux in the Ly$`\alpha `$ forest. In practice we will use the TCH model, in which the distribution of matter is made of a finite number of Rayleigh-Lévy clumps, each with a finite number of steps (points). This model obviously does not have as many big voids as the pure fractal distribution. Therefore, if we demonstrate that the TCH model already contains far too many large voids to be compatible with Ly$`\alpha `$ data, then we can safely rule out a pure fractal distribution on the same basis. In the TCH model, the dark matter distribution in a large volume is represented by a finite number of points. The points are distributed in $`n_c`$ clumps, each made of $`n_s`$ steps (points), so that the total number of points in the volume is $`n_cn_s`$. The clumps are randomly placed inside the volume, and the distribution of points in each clump is generated from a Rayleigh-Lévy random walk (e.g. Peebles 1980), in which the individual steps have random directions and random lengths drawn according to the following cumulative probability function $$P(>l)=\begin{cases}\left(\frac{l}{l_0}\right)^{-\alpha }&\text{for }l\ge l_0,\\ 1&\text{otherwise},\end{cases}$$ (2) For $`\alpha >2`$ the variance of $`l`$ is finite. Therefore, by the central limit theorem, the displacement of a point after a sufficiently large number of steps has a Gaussian distribution. On the other hand, for $`\alpha \le 2`$ the variance is infinite and the clump follows a truncated fractal structure, where a pure fractal with dimension $`D=\alpha `$ is obtained only in the limit of an infinite number of steps. In figure 1 we show a projection in the plane of a realization of a three-dimensional single RL clump generated using 160000 points and $`\alpha =D=1.2`$. The clump is viewed at three different magnifications.
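The step-length distribution above can be drawn by inverse-transform sampling: if $`u`$ is uniform on $`(0,1]`$, then $`l=l_0u^{-1/\alpha }`$ has exactly the cumulative distribution $`P(>l)=(l/l_0)^{-\alpha }`$. A minimal sketch of a single RL clump (our own illustration, not the authors' code):

```python
import numpy as np

def rayleigh_levy_walk(n_steps, alpha=1.2, l0=1e-3, seed=1):
    """Generate a 3-D Rayleigh-Levy walk: isotropic random directions and
    step lengths with cumulative distribution P(>l) = (l/l0)**(-alpha), l >= l0."""
    rng = np.random.default_rng(seed)
    u = 1.0 - rng.random(n_steps)                   # uniform on (0, 1]
    lengths = l0 * u ** (-1.0 / alpha)              # inverse-transform sampling
    v = rng.normal(size=(n_steps, 3))               # isotropic unit directions
    v /= np.linalg.norm(v, axis=1)[:, None]
    return np.cumsum(lengths[:, None] * v, axis=0)  # positions of the points

clump = rayleigh_levy_walk(20000)
```

For $`\alpha \le 2`$ a few very long steps dominate the walk, producing the clustered, void-dominated structure shown in figure 1.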
On scales smaller than the typical length of an RL clump in the volume, the TCH model shares many of the properties of a pure fractal. However, in the TCH model the mean number density of points on large scales is well defined. This allows, e.g., a statistical description of the distribution of points in terms of the two-point correlation function (Peebles 1980).

## 3 The Ly$`\alpha `$ forest and the dark matter connection

Absorption spectra are presented in terms of the normalised flux $`F=\mathrm{exp}(\tau )`$, where $`\tau `$ is the optical depth, which is related to the HI density, $`n_{_{\mathrm{HI}}}`$, along the line-of-sight by $$\tau (w)=\sigma _0\frac{c}{H(z)}\int _{-\infty }^{+\infty }n_{_{\mathrm{HI}}}(x)\,\mathcal{V}[w-x,b(x)]\,dx,$$ (3) where $`\sigma _0`$ is the effective cross section for resonant line scattering, $`H(z)`$ is the Hubble constant at redshift $`z`$, $`x`$ is the real space coordinate (in km s<sup>-1</sup>), $`\mathcal{V}`$ is the normalized Voigt profile and $`b(x)`$ is the Doppler parameter due to thermal/turbulent broadening. For simplicity we have neglected the effect of peculiar velocities. In Cold Dark Matter (CDM) models, the absorption features in the Ly$`\alpha `$ forest are mainly produced by regions of moderate densities where photoheating is the dominant heating source. Because hydrogen in the intergalactic medium (IGM) is highly ionized (Gunn & Peterson 1965), photoionization equilibrium in the expanding IGM establishes a tight correlation between the neutral and total hydrogen density, $`n__\mathrm{H}`$. This can be approximated by a power law $`n_{_{\mathrm{HI}}}\propto n__\mathrm{H}^\beta `$, where $`1.56\le \beta \le 2`$ (Hui & Gnedin 1997). Numerical simulations have shown (e.g. Zhang et al. 1995) that the total hydrogen density traces the mass fluctuations on scales larger than the Jeans length.
So, ionization equilibrium yields the following relation for the HI density, $`n_{_{\mathrm{HI}}}`$: $$n_{_{\mathrm{HI}}}=\widehat{n}_{_{\mathrm{HI}}}\left[1+\delta (𝐱)\right]^\beta ,$$ (4) where $`\delta =\rho /\overline{\rho }-1`$ is the mass density contrast, $`\overline{\rho }`$ is the mean background mass density, and $`\widehat{n}_{_{\mathrm{HI}}}`$ is the HI density at $`\delta =0`$. The gas density is obtained by smoothing the dark matter distribution with the following smoothing window in $`k`$-space (Bi 1993): $$W(k)=\frac{1}{1+\left(\frac{k}{k_J}\right)^2},$$ (5) where $`2\pi /k_J`$ is the Jeans scale length, which is $`1h^{-1}\mathrm{Mpc}`$ (comoving) at $`z\sim 3`$. We approximate the gas density field by the dark matter distribution smoothed with the window (5).

## 4 The flux probability distribution function

As can be seen in Fig. 1, a pure RL fractal realization with fractal dimension $`D<2`$ leaves most of space empty. If the neutral hydrogen faithfully traces the dark matter distribution, then most of the lines-of-sight to hypothetical quasars will experience zero optical depth and hence will have $`F=1`$. This is very different from what is observed in the real universe, where the mean flux $`F`$ is, e.g., $`\sim 0.6`$ at $`z\sim 3`$. We show below that even if we consider the TCH model and adjust it so that the observed value of the mean flux is reproduced, the shape of the flux probability distribution function (PDF) towards a single quasar significantly differs from the observed PDF. To make this comparison we use two observed high resolution spectra of the QSO 1422+231, from z=3.6 (the redshift of the QSO) to z=2.9 (the redshift of Ly$`\beta `$). This redshift range corresponds to a comoving separation of $`250h^{-1}\mathrm{Mpc}`$ in a flat $`\mathrm{\Omega }=1`$ universe. The spectra were observed by Songaila & Cowie (1996) and Rauch et al. (1997).
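The chain from dark-matter overdensity to normalized flux (eqs. 5, 4, and then 3) can be sketched in one dimension as follows. This is our own toy illustration, not the authors' code: the Voigt profile is replaced by its Gaussian core, the prefactor $`\sigma _0c/H(z)`$ is lumped into a single constant, and all grid sizes and parameter values are arbitrary assumptions.

```python
import numpy as np

def flux_from_delta(delta_dm, box_kms, k_J, beta=1.7, n_hat=1.0, b=20.0, prefac=0.05):
    """Toy 1-D pipeline: smooth delta with W(k) = 1/(1 + (k/k_J)^2)   [eq. 5],
    map to n_HI = n_hat * (1 + delta)^beta                            [eq. 4],
    then integrate a Gaussian line profile for tau and F = exp(-tau)  [eq. 3]."""
    n = delta_dm.size
    x = np.linspace(0.0, box_kms, n, endpoint=False)        # sightline coord (km/s)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=box_kms / n)
    W = 1.0 / (1.0 + (k / k_J) ** 2)                        # eq. (5)
    delta = np.fft.irfft(np.fft.rfft(delta_dm) * W, n=n)
    delta = np.clip(delta, -0.99, None)                     # keep densities positive
    n_HI = n_hat * (1.0 + delta) ** beta                    # eq. (4)
    dx = x[1] - x[0]
    prof = np.exp(-((x[:, None] - x[None, :]) / b) ** 2) / (np.sqrt(np.pi) * b)
    tau = prefac * (prof * n_HI[None, :]).sum(axis=1) * dx  # eq. (3), Gaussian core
    return np.exp(-tau)                                     # normalized flux

rng = np.random.default_rng(2)
flux = flux_from_delta(rng.normal(0.0, 0.5, 512), box_kms=5000.0, k_J=2 * np.pi / 100.0)
```

In the same spirit, the normalization $`\widehat{n}_{_{\mathrm{HI}}}`$ (here `n_hat`) can be iterated until the mean of `flux` matches the observed value, as described below.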
Since the distribution is a fractal, we can arbitrarily identify the length of the box with the redshift interval $`z=2.9`$ to $`3.6`$ towards the quasar. We have generated $`50`$ RL clumps, each containing $`160000`$ points, for $`\alpha =1.2`$ and $`1.8`$, respectively. The clumps were placed randomly in a cubic box of size unity, where $`l_0=7\times 10^{-5}`$ and $`10^{-3}`$ for $`\alpha =1.2`$ and $`1.8`$, respectively. These values of $`l_0`$ were chosen to be much smaller than the mean separation between particles, and to yield clumps of roughly equal size for the two values of $`\alpha `$. We use the clouds-in-cells (CIC) scheme to interpolate the point dark matter distribution in the TCH model onto a uniform cubic grid. Then we use an FFT to convolve the gridded density with the window (5). In the TCH model, the actual value of the smoothing scale length is not important as long as it is smaller than the typical size of the clumps and the mean separation between them. We use the relations (3) and (4) with $`\beta =1.7`$ to compute the optical depth and the normalized flux from the smoothed dark matter density field along random lines-of-sight through the box. For the comparison with the data, we adjust $`\widehat{n}_{_{\mathrm{HI}}}`$ in eq. (4) so that the mean flux $`F`$ matches the observed value of $`0.65`$ over the redshift range $`z=2.9`$ to $`3.6`$ for QSO 1422+231 (Rauch et al. 1997). In figures 2 and 3 we compare the models’ results with the two observed high resolution spectra of the QSO 1422. The error bars on the PDF from the TCH model are rough estimates of the cosmic variance. In deriving these error bars we had to fix a physical scale for the box size. We fix this scale by requiring the mass correlation function at $`z\sim 3`$ to be unity at a separation of $`1h^{-1}\mathrm{Mpc}`$, as inferred by extrapolating the local galaxy correlation function (e.g., Peacock & Dodds 1994) to $`z\sim 3`$ in a flat universe with $`\mathrm{\Omega }=1`$.
Once we have fixed a physical scale, we can estimate the cosmic variance for a portion of the quasar spectrum of length equal to the one-dimensional box size. For comparison we show the PDF corresponding to a lognormal $`\mathrm{\Omega }=1`$ standard CDM mass distribution normalised to match the abundance of rich clusters in the local universe (Eke et al. 1996). The CDM model is clearly a much better fit to the data than the TCH model.

## 5 Discussion

The Ly$`\alpha `$ forest is likely to be a better tracer of the mass density than galaxies and therefore serves as a better probe of the density fluctuations at low and high redshifts. We have shown that the RL fractal models (pure as well as truncated) fail to reproduce the flux distribution of the Ly$`\alpha `$ forest. The main reason for this discrepancy is that these models have a low fractal dimension ($`D<2`$) and are characterised by large empty regions (i.e. a high level of ‘lacunarity’). In contrast, models with fluctuations declining with increasing scale-length and approaching uniformity on the horizon scale (e.g. CDM) are consistent with the observed flux distribution. This result from the Ly$`\alpha `$ forest is in line with constraints on the smoothness of the universe on large scales from the X-ray and Microwave Background radiations, and from the distribution of radio sources. Another potential measurement of the large scale smoothness on scales of $`500h^{-1}\mathrm{Mpc}`$ from the Ly$`\alpha `$ forest, which we have not addressed here in detail, can be obtained from the fluctuations in the mean flux in multiple lines-of-sight to QSOs.

## 6 Acknowledgments

AN thanks the Astronomy Department of UC Berkeley, the Institute of Astronomy and St. Catharine’s College, Cambridge, for hospitality and support. OL acknowledges the hospitality of the Technion. We thank L. Cowie and M. Rauch for allowing the use of their data. We are grateful to M. Davis, M. Haehnelt and M. Rees for stimulating discussions.
no-problem/9910/astro-ph9910267.html
ar5iv
text
# The Amateur Sky Survey Mark III Project

## 1 Introduction

In the year 1781, a musician and amateur astronomer named William Herschel discovered the planet Uranus while sweeping the skies with a homemade telescope in the back yard of his home in Bath. At that time, wealthy gentlemen could and did possess the equipment and knowledge to make significant advances in the science. During the next century, as astronomy evolved into a profession, academic and research institutions gradually came to dominate the field: only they possessed the resources to construct ever-larger telescopes and equip them with sophisticated instruments. Observatories moved to remote mountaintops in search of dark skies and good seeing, leaving the common man far behind. By the middle of the twentieth century, there were very few opportunities for amateur astronomers to contribute to the discipline: monitoring bright variable stars and atmospheric features of the planets, certainly, but little else. In the past two decades, however, fortuitous developments in technology have given amateur astronomers a chance to rejoin the field. The combination of inexpensive CCD detectors and cheap, powerful computers permits any motivated individual to measure quantitatively the position and brightness of tens of millions of celestial sources. While the realm of the nebulae may still belong largely to the professional, the nearby universe is open to all. In this paper, we describe a project in which we constructed our own equipment and used it to conduct a survey of bright stars near the celestial equator. Because most of us are amateurs, we call the project “The Amateur Sky Survey,” or “TASS” for short. After several experiments in designing celestial cameras (codenamed Mark 0 through Mark II), we settled on the Mark III device, which we describe in Section 2. In Section 4, we characterize the software used to reduce our images into lists of stars.
We provide a few details on the observing locations (our back yards) in Section 3. We discovered that the really difficult part of this survey was keeping track of all the data (a common lesson in the current age of large-scale surveys), and so we devote Section 5 to the details of our solution. In Section 6 we discuss several results from the first three years of our work, and conclude in Section 7 by describing our future plans.

## 2 Hardware

The Mark III cameras were designed and constructed by one of us (TD) in 1995, then distributed gratis to observers across the country. We arrange our description of the subsystems to follow the order in which an incoming photon encounters them. Each camera has an Aubell 135-mm f/2.8 camera lens. We chose this lens because it was on special at a New York camera store for $19, and was available in sufficient quantity to provide identical optics to all our units. We later noticed that while the lens gives reasonably sharp images (FWHM $`23`$ microns) in $`V`$-band, it performs poorly in $`I`$-band: the PSF shows a core surrounded by a halo, and its FWHM ($`33`$ microns) is considerably larger than that of the $`V`$-band PSF. We believe this inability to focus near-IR light accounts for most of the increased scatter in $`I`$-band photometry. Our future cameras will feature better optics. Between the lens and the detector, we place filters manufactured to the Bessell (1990) prescription by Omega Optical, Inc. Each Mark III “triplet” contains three cameras mounted together (Figure 1). We usually chose two $`I`$-band filters and one $`V`$-band filter, but the Batavia triplet has one each of $`V`$, $`R`$ and $`I`$. The heart of each camera is a Kodak KAF-0400 CCD detector. The KAF-0400 has 512 rows and 768 columns of $`9`$ micron pixels, with a full-well capacity of 85,000 electrons per pixel. We operate the CCDs in drift-scan mode: each chip is mounted so that its columns run east-west, parallel to the motion of stars across the sky.
We read out the chip continuously, shifting charge along the columns at the same rate that stars drift across the detector. For a detailed description of drift-scanning, see Gehrels et al. (1986). Our 135-mm lens yields a plate scale of about $`13.8`$ arcseconds per pixel; the effective exposure time is the time required for a star to drift $`512\times 13.8\text{ arcsec}\approx 1.96`$ degrees on the equator: 471 seconds. The length of our exposures requires us to cool the CCD well below ambient temperatures. Each CCD is attached to a two-stage thermoelectric (TE) cooler: the first stage does most of the work, dropping the temperature by about $`30^{\circ }`$ Celsius. The second stage is driven by a circuit which acts to maintain the temperature at a constant value. We place a large bucket of water next to the triplet at each site to act as a heat sink at roughly constant temperature, and circulate the water through the TE coolers. Although the water may cool down during the course of a night by a large amount, the temperature regulation can hold the CCD at a temperature constant to within $`0.01^{\circ }`$ C. The specifications of the Kodak KAF-0400 show a read noise of 13 electrons. When running our cameras at a typical operating temperature of $`-15^{\circ }`$ Celsius, we measure a total noise of 25 to 30 electrons in dark images. Photon noise from the sky at our sites overwhelms the readout and thermal noise: the total noise ranges from 50 to 120 electrons per pixel in a $`V`$-band image, and from 90 to 150 electrons in an $`I`$-band image. Three cameras are bundled together in each triplet (see Figure 1). The body holds the cameras in a common plane, but points them at 15-degree intervals. We set the triplets to point near the celestial equator, with one camera looking on the meridian (due south), one looking an hour to the east, and the third looking an hour to the west. A star may drift through all three cameras during a single night.
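The plate scale and effective exposure time quoted above follow directly from the pixel size and focal length. A quick check (the sidereal drift rate of about 15.04 arcsec/s is a standard value, not stated in the text):

```python
# Worked arithmetic behind the quoted plate scale and drift-scan exposure time.
pixel_um = 9.0     # KAF-0400 pixel size (microns)
focal_mm = 135.0   # lens focal length (mm)

# Small-angle conversion: 206265 arcsec per radian
plate_scale = 206265.0 * (pixel_um / 1000.0) / focal_mm  # arcsec per pixel
# A star on the equator drifts at the sidereal rate (~15.041 arcsec/s),
# so the effective exposure is the time to cross all 512 rows:
exposure = 512 * plate_scale / 15.041                    # seconds

print(f"plate scale ~ {plate_scale:.2f} arcsec/pixel")
print(f"exposure    ~ {exposure:.0f} s")
```

Rounding the plate scale up to 13.8 arcsec/pixel reproduces the quoted 471-second figure.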
We built a single electronics board, located in the back of the triplet housing, to control all three cameras. For simplicity’s sake, we used a single clock driver to time the control signals to all three CCDs. The camera operator at each site tuned the driver by trial and error to achieve the best setting. We discovered that the single driver was a mistake: small variations in the focal lengths of the cameras made it impossible to optimize the PSF in all three simultaneously. Each camera produces a long, continuous image of the night sky, about three degrees wide from north to south and as long as the night is clear. The electronics board within the triplet housing sends the data, one row at a time, over a byte-parallel line to a nearby computer. The computer reads the information and writes it to disk in 16-bit integer FITS format, breaking the stream of pixels into images of roughly 900 rows (about 3.5 degrees in Right Ascension).

## 3 Observations

Since the Mark III cameras require power, cooling water, and a computer, as well as manual intervention to cover them from the elements, we placed them in our backyards; the camera operators have day jobs and cannot drive long distances to dark sites on a daily basis. Operation consists of uncovering the cameras at dusk and starting a computer program. The operator then leaves the unit alone, just making sure to cover up the camera before it rains or the sun shines into the lens. In the morning, the data files are processed into star lists when time is available. The data presented herein were taken at three suburban sites: near Cincinnati, Ohio, longitude 84.58 West, latitude 39.20 North, operated by MG; near Dayton, Ohio, longitude 83.87 West, latitude 39.80 North, operated by GG; and near Batavia, Illinois, longitude 88.33 West, latitude 41.83 North, operated by TD. A fourth triplet at the Applied Physics Lab of Johns Hopkins University, longitude 76.88 West, latitude 39.15 North, has also run regularly.
The weather at these sites is poor, especially in the winter months: our coverage of the equator is weakest between 6 and 12 hours Right Ascension. Even when the weather is good, we are sometimes unable to collect data due to other commitments. As a result, over the roughly two years from October 1996 to November 1998, the three sites combined to submit measurements from 175 nights. Obviously, we could increase the data rate by a factor of two to four by placing a triplet within a weather-proof housing at a site with good weather and automating its operation. We plan to put the first of our next generation of instruments in Arizona for exactly this reason. The skies of our suburban locations are much brighter than those at most observatories. On a typical night in Batavia, the sky brightness is about eighteenth magnitude per square arcsecond in both $`V`$-band and $`I`$-band. At a dark site ($`21.0`$ mag/square arcsecond in $`V`$), the noise background due to sky brightness would decrease by a factor of about four, which would increase our limiting magnitude by a bit more than one magnitude. During an eight-hour night, each camera scans roughly 360 square degrees. The images are stored on disk and processed into star lists the next day. Depending on the season and sky brightness, a triplet may record 50,000 to 200,000 measurements on a good night. Most sources are detected two or three times, at one-hour intervals. Our images provide good data for sources from seventh magnitude to thirteenth or fourteenth magnitude. We provide an example of a $`V`$-band Mark III image in Figure 2: it shows a field roughly 4 degrees wide by 3 degrees high at the intersection of Monoceros, Canis Minor, and Hydra. Figure 3 shows a closeup view of the area bounded by the dashed box in Figure 2; note the slightly diagonal shape of the PSF.

## 4 Software

We have written all the software used to record the image data, clean the images, and detect and measure stellar sources.
In each step, we follow standard procedures and employ conventional algorithms; nonetheless, we describe our pipeline in some detail so that readers may understand the limitations it places on the final results.

### 4.1 Cleaning the raw images

Since we take drift-scan images, the signal from each star passes through every row along one column of the CCD. Correcting the images requires the construction and application of one-dimensional dark and flatfield vectors. We make a dark vector for each camera on each night by taking an image of several hundred rows with the lens cap in place, then calculating the median of the pixel values in each column. We create a flatfield vector in a slightly different manner. In order to improve the signal-to-noise ratio, we start with scans taken during twilight. First, we subtract the dark vector from the scans. Now, since the brightness level changes rapidly at these times, we cannot simply take the median of several hundred rows as a fair measure of the change in sensitivity across the chip. Therefore we calculate the flatfield vector in two steps. First, we create a set of temporary flatfield vectors, one for each row in our twilight image, by dividing the pixel values in that row by the median pixel value of that row. We then create a single flatfield vector whose column values are the median value of each column from our set of temporary flat vectors. We normalize the flatfield vector and then divide each dark-subtracted raw image by it.

### 4.2 Generating a PSF

For each frame, we first calculate the sky value and the standard deviation from that value. Next, we find all sources with peaks at least 20 times the standard deviation above the sky. To make a quick estimate of the Full Width at Half-Maximum (FWHM) of the PSF, we walk down from the peak pixel along the rows and columns until the pixel value drops to half that of the peak.
We use the median value of this rough FWHM in the row and column directions as a first approximation. We make a second pass through the sources, discarding any with FWHM values which deviate from the first approximations by more than $`25\%`$. We calculate the average FWHM in each direction from the remaining peaks. We then create a bounding rectangle whose width and height are 2.55 times the average FWHM (i.e., 3 sigma for a true Gaussian) in each direction. We create a set of PSF’s by normalizing each peak within the bounding rectangle. We average all the PSF’s in the set to determine the final PSF. We therefore have a strictly empirical PSF for each image. We build an “optimized aperture” from this discrete PSF by selecting whole pixels, working outwards from the center, until the next pixel value drops to less than one-fourth of the average pixel value included so far. This aperture has the same shape as the PSF, but discards pixels in which the signal-to-noise of a star is low. ### 4.3 Finding stars Following DAOPHOT (Stetson 1987), we convolve the image with a lowered PSF to detect sources. We mark as candidates any peaks in the convolved image above a threshold (typically two or three times the standard deviation from the sky). We measure the sky-subtracted flux of the candidate in the original image through the optimized aperture, and use it to calculate the ratio $$f=\frac{\mathrm{flux}\;\mathrm{inside}\;\mathrm{aperture}}{\mathrm{peak}\;\mathrm{in}\;\mathrm{convolved}\;\mathrm{image}}$$ A true star has a value of $`f`$ close to 1.0, while cosmic rays and other spurious detections typically yield very different values. We accept any candidate with $`f`$ within a reasonable range of the expected value. ### 4.4 Measuring stellar properties To calculate the position of each detected star, we use pixels within the aperture to form marginal sums in the row (column) directions.
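The aperture-growing rule of Section 4.2 (add pixels until the next one falls below one-fourth of the running average) can be sketched as follows. Sorting by PSF value stands in for "working outwards from the center", and the dict representation of the discrete PSF is illustrative:

```python
def optimized_aperture(psf):
    """Grow an aperture from the discrete PSF: take pixels in order of
    decreasing PSF value and stop when the next pixel falls below
    one-fourth of the mean value of the pixels already included.
    `psf` maps (row, col) offsets to normalized PSF values."""
    ordered = sorted(psf.items(), key=lambda kv: kv[1], reverse=True)
    accepted, total = [], 0.0
    for pix, val in ordered:
        if accepted and val < 0.25 * (total / len(accepted)):
            break
        accepted.append(pix)
        total += val
    return accepted
```

The accepted offsets are then centered on each candidate peak when fluxes are measured.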
We add the sums for each row (column) to form a cumulative marginal sum, then interpolate to find the row (column) at which the sum reaches half its final value. Our interpolation scheme yields positions which are fractions of a pixel. We convert the (row, col) position of each star into (RA, Dec) by matching the brightest 30 stars in each image against stars in the Tycho catalog (ESA 1997); we have adapted the triangle-based method described by Groth (1986) and Valdes et al. (1995) to match the detected and catalog positions of the stars. Each detection is tagged with the time at which it passed the midpoint of the field of view, based on its position within the image and the time at which the image was read from the camera. To calculate the flux of each detected star, we do not use this interpolated position. Instead, we adopt the position of the peak in the image convolved with the PSF. At this position in the original image we place the “optimized aperture,” and add (flux - sky) contributed by each pixel within the aperture. We estimate an uncertainty in each measurement by combining contributions from readout noise, sky subtraction, and photon noise from the source itself. ### 4.5 Photometric calibration Each Mark III camera converts instrumental magnitudes to the standard Johnson-Cousins system via a two-step calibration procedure. First, in order to set a preliminary zero-point for the instrumental magnitudes, we consider a particular subset of stars from the Tycho catalog. We choose stars which * are not marked as variable in the Tycho catalog, * have $`B_T`$ and $`V_T`$ uncertainties of $`0.06`$ mag, * are fainter than $`V_T=7.5`$ so that they are not saturated, * are separated from other Tycho stars by at least 83 arcsec in RA and 50 arcsec in Dec. Since the TASS cameras observe in Johnson-Cousins $`VRI`$, we need to convert the Tycho $`B_T`$ and $`V_T`$ magnitudes to the Johnson-Cousins system. 
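The half-sum interpolation used for positions in Section 4.4 might look like this sketch, in which pixel i is treated as covering the interval [i, i+1), so a profile symmetric about pixel 2 yields 2.5:

```python
def half_sum_position(marginal):
    """Fractional position from a marginal sum: accumulate the profile
    and linearly interpolate the coordinate at which the cumulative
    sum reaches half its final value."""
    half = 0.5 * sum(marginal)
    running = 0.0
    for i, v in enumerate(marginal):
        if v > 0 and running + v >= half:
            return i + (half - running) / v  # fraction of pixel i needed
        running += v
    return float(len(marginal))  # degenerate input (all zeros)
```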
We found a set of 300 Tycho stars brighter than $`V=9`$ which matched Landolt (1983, 1992) standards. We performed linear fits to $`V,R,`$ and $`I`$ versus $`(B_T-V_T)`$, finding transformation equations: $`V`$ $`=`$ $`0.0156+V_T-0.0994(B_T-V_T)`$ (1) $`R`$ $`=`$ $`0.0160+V_T-0.5390(B_T-V_T)`$ (2) $`I`$ $`=`$ $`0.0468+V_T-0.9480(B_T-V_T)`$ (3) Note that the V equation is similar to that given in the Hipparcos (ESA 1997) catalog: $`V`$ $`=`$ $`V_T-0.090(B_T-V_T)`$ (4) but with slightly different coefficients. To ensure uniformity, we adopted the new coefficients. Note also that each of these equations is only valid over a limited color range, and the quality of the fit degrades as one moves further from the $`B_T`$ passband. We have therefore further restricted the Tycho subset to stars within the color range of $`0.2<(B_T-V_T)<2.0`$. We then compare these approximate Johnson-Cousins magnitudes to our instrumental magnitudes for the same stars in our images; we simply shift the instrumental magnitudes by a constant to match the catalog values. This procedure places our observations on the standard Johnson-Cousins system without having to make separate observations of Landolt fields (often impossible for cameras scanning a few degrees from the celestial equator), and makes use of non-photometric nights since the Tycho reference stars are contained within each image. After a year or so of operation, we discovered that cameras at several sites exhibited small, systematic errors in photometry as a function of Declination. Apparently, the one-dimensional flatfield vectors we create do not perfectly remove variations in response across the field of view. Fortunately, the errors are nearly constant over the course of a single night. We correct them by breaking the three-degree Declination range of each camera into a small number (typically 8) of zones.
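With the color terms of equations (1)-(3) subtracted, as in the Hipparcos relation (4), the transformation can be applied as in this sketch; returning None outside the quoted validity range is our choice, not the pipeline's:

```python
def tycho_to_johnson(vt, bt_vt):
    """Approximate Johnson-Cousins V, R, I from Tycho V_T and the
    (B_T - V_T) color, using the linear fits of equations (1)-(3).
    Returns None outside the quoted color-validity range."""
    if not (0.2 < bt_vt < 2.0):
        return None
    v = 0.0156 + vt - 0.0994 * bt_vt
    r = 0.0160 + vt - 0.5390 * bt_vt
    i = 0.0468 + vt - 0.9480 * bt_vt
    return v, r, i
```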
In each zone, we make a linear fit to the (observed - expected) residuals as a function of Declination, using Tycho stars from the entire night; we force the corrections to agree at each boundary between zones. We then add these corrections to the magnitude of all stars observed on that night. This Declination-dependent correction reduces the residuals of repeated measurements of bright stars in the $`V`$-band from about 55 millimagnitudes to about 35 millimagnitudes. The results of this two-step procedure are stored locally by each site and made available for further analysis. We have detected small color terms in these measurements from some of the Mark III cameras at some of the sites; see the discussion on color corrections in the tenxcat catalog in Section 6.2. ## 5 Database There have been three to four Mark III triplets active at any time over the past three years. During a typical night of operation, a single triplet measures over 100,000 stellar positions and magnitudes. Keeping track of the data generated at all sites is not a trivial task. The operator of each triplet is responsible for processing the data into “star lists”: ASCII tables of stellar positions and magnitudes, with a small amount of ancillary information. Each operator periodically sends accumulated star lists via FTP, ZIP disks or CDs, to one of our members (MR) who acts as DataBase Manager (DBM). The DBM occasionally loads the star lists into our central database. In no sense is this a real-time operation: the interval between observation and insertion into the database varies from weeks to months. We have adopted the PostgreSQL (http://www.postgresql.org) database engine for our project: it is a relational database with an SQL syntax which runs on many platforms and is distributed free of charge. One of us (CA) designed the tables required to hold the information generated by each site, and the software necessary to load the information into the database.
The central database runs on the desktop computer in the DBM’s office, where it competes for disk space and memory with many other projects. The information from roughly 175 nights of observation requires about 5 Gigabytes of disk space. Once the data has been loaded into the database, it may be analyzed in many ways. Users from any computer with an SQL client and an Internet connection can access the information freely. We have built WWW interfaces to service common queries so that novice users need only a web browser to answer simple questions. One of the main functions of the database is to identify measurements of the same star in different images, so that one can calculate mean values or look for variability. We wrote software to perform this merging of data with the following method in mind: if a new detection appears more than $`D`$ arcseconds from any existing entry in the database, consider it a new entity; but if it appears within $`D`$ arcseconds of an existing entry, mark it as an additional observation of the existing entry. We chose $`D=15`$ arcseconds, based on the precision of our measurements of position (see below) and on the large size of our pixels. We seeded the database with stars from the USNO A1.0 catalog (Monet 1996) down to sixteenth magnitude, to serve as initial, accurate positions. Our plan was to keep the USNO A1.0 position as the database position until a star was recorded some large number $`N`$ times; then we would switch to using the star’s mean observed position. In this way, random errors in the first few recorded measurements would not cause the position to wander by large amounts (several arcseconds), but, eventually, the database would contain a position based on our measurements alone. Unfortunately, we made an error in our software, setting $`N=1`$; thus, the position for each star in our database did wander by significant amounts. 
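The intended merging rule, with the seed position held fixed (avoiding the N = 1 wandering just described), might look like the following sketch; the linear scan and the entry fields are illustrative, and a real database would use an indexed query:

```python
import math

def merge_detection(catalog, ra, dec, d_arcsec=15.0):
    """Attach a detection to an existing entry if it lies within
    d_arcsec of one; otherwise create a new entry.  The RA offset is
    scaled by cos(Dec), adequate for a near-equatorial survey strip.
    The stored (seed) position is deliberately never updated."""
    for entry in catalog:
        dra = (ra - entry["ra"]) * math.cos(math.radians(dec)) * 3600.0
        ddec = (dec - entry["dec"]) * 3600.0
        if math.hypot(dra, ddec) <= d_arcsec:
            entry["n_obs"] += 1
            return entry
    entry = {"ra": ra, "dec": dec, "n_obs": 1}
    catalog.append(entry)
    return entry
```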
In some cases, the database position moved so far from the true position that a new measurement did not fall within $`D=15`$ arcseconds of the existing position; the new measurement was then recorded (incorrectly) in the database as an independent star. We estimate that perhaps 5 percent of all stars in our database suffer from this “spurious companion” problem; we provide one post facto fix in our discussion of the tenxcat catalog, below. ## 6 Results We are still exploring the wealth of information garnered during two years of operation, and several sites are still contributing new observations. The items mentioned below are simply the first projects to which we have put our energies; many avenues of study await the curious researcher. For example, we have not yet tried to perform any search for objects which move in the sky from one night to the next. In one sense, the primary result of our work cannot be presented in this paper: it is the database of measurements we have collected. This database continues to grow as we incorporate new observations from several sites. There is no restriction on access to the data; TASS members and non-members alike may sift through it on an equal basis. We encourage readers who have a use for the information to visit the WWW pages (http://www.tass-survey.org/tass/www\_scripts/make\_chart.html) which describe database access. ### 6.1 Precision of Astrometry and Photometry The Mark III cameras do not provide precise positions. The telephoto lens and CCD yield a plate scale of about $`13.8`$ arcseconds per pixel, the PSF is often asymmetric, and the $`I`$-band PSF features an extended halo. Comparing the position of a single TASS measurement for a bright star to the position listed in the Hubble Guide Star Catalog (Lasker et al 1990), we find a scatter of about 3 arcseconds; the scatter rises to about 6 arcseconds near our detection limit.
The mean of many measurements is considerably more accurate; see Section 6.2 below. The photometric measurements produced at each camera site and transmitted to the central database may suffer from systematic color terms (see discussion in tenxcat section below). We ignore that source of error here, and consider only the precision of the measurement: as a metric, we use the standard deviation from the mean of measurements of the same star by the same camera on many different nights. We find that the standard deviation from the mean has a minimum of about 0.03 mag for bright stars, increasing to more than 0.20 mag for stars near the limits of detection. If one calculates the expected uncertainty in photometric measurements, using measured values for readout noise and estimates of the sky brightness, one finds much smaller values. The Mark III measurements are evidently not limited by photon noise, except for the very faintest stars. The major sources of error are imperfect sky subtraction and variations in the PSF across our field of view. Since we measure most stars many times, the mean values of our measurements should provide information on each star more precise than that from a single observation. We therefore have constructed a catalog which contains the mean values of position and magnitude for stars observed multiple times. ### 6.2 The tenxcat Catalog After two years of operation, the total number of detections reported to the central database reached more than 10 million in $`V`$, 0.7 million in $`R`$, and 13 million in $`I`$. However, many of these detections were due to noise peaks, airplane trails, cosmic rays, and other contaminants. In order to generate a catalog of reliable celestial sources, we selected objects in the database which appeared on at least 10 different occasions, in any combination of passbands. We call this the tenxcat catalog: it contains 367,241 stars at the time of writing (September, 1999). 
Interested readers can access the catalog via the Internet at http://www.tass-survey.org/tass/www\_scripts/make\_chart.html. We subjected the objects in this subset to two additional steps of processing. * We checked each of the Mark III cameras for color terms, by comparing their measurements of equatorial standard stars against those of Landolt (1983; 1992). The $`R`$-band and $`I`$-band cameras had statistically significant residuals as a function of stellar color, which we list in the Appendix. We therefore applied linear corrections to the TASS magnitudes from those cameras to bring their measurements to the Johnson-Cousins magnitude scale. * We checked each star for “spurious companions”: neighbors within $`D=15`$ arcseconds which never appear in the same image as the star, and have the same brightness to within 0.5 mag. Whenever possible, we merged such pairs of stars into a single entry in our database, and marked the remaining entry with a flag. We are constrained by our drift-scan cameras to observe near the celestial equator: our surveyed area described a rough band from Declination $`=5^{}`$ to $`1.5^{}`$. We emphasize that this is by no means a complete catalog; as Figure 4 shows, the number of observations varies considerably throughout our survey area. Since some faint stars were not detected in all images, an area would need to be covered many more than 10 times for all its stars to be included in tenxcat. The tenxcat catalog contains stars ranging from about seventh magnitude to about fourteenth magnitude. However, the distribution of stellar magnitudes (Figure 5) shows that the number of stars per magnitude bin begins to fall at about $`V=13.3`$, $`R=13.3`$, $`I=12.5`$. The difference between the passbands is a combination of the color of a typical star and the system sensitivity as a function of wavelength.
The internal consistency of our photometric measurements can be gauged by calculating the median value of the standard deviation from the mean within magnitude bins. In Figure 6 we show the results for all 3 passbands; all three have a floor of about $`\sigma =0.04`$ mag, but the scatter increases much more quickly in $`I`$ band than in $`V`$ or $`R`$. We blame the $`I`$-band results on three factors: first, the $`I`$-band images suffered from a core-halo PSF; second, we combined $`I`$-band measurements from 5 different cameras at 3 different sites, whereas there were only 3 different cameras with $`V`$ filters and a single camera with $`R`$ band filter; third, our photometric calibration in $`I`$ depends upon extrapolation in the transformation from Tycho $`B_T,V_T`$ to Johnson-Cousins $`I`$ (see the Appendix). The positions in tenxcat are the mean values from at least 10 different measurements. We have compared our positions against those in the ACT Reference Catalog (Urban et al 1998) to judge the accuracy of the mean positions. We find the median difference between TASS and ACT positions to rise slowly with magnitude, from $`0.59`$ arcseconds at $`V=7.5`$ to $`0.89`$ arcseconds at $`V=11.5`$. The mean positions of fainter stars are less accurate, but still good to a few arcseconds. In order to facilitate the identification of sources, we include cross-references between stars in tenxcat and other catalogs. Each entry in our catalog has a field for the matching entry in the Hubble Guide Star Catalog (Lasker et al 1990) and the ACT Reference Catalog (Urban et al 1998). We have extracted from the ACT Reference Catalog information from the Henry Draper (HD) catalog (Cannon & Pickering 1918-1924; Cannon 1925), including the spectral type; unfortunately, tenxcat contains only 7792 stars with data from the HD. 
Finally, we have compared the position of each of our stars against sources in the IRAS Point Source Catalog (Beichman et al 1988); any star which falls within 20 arcseconds of an IRAS source is marked with the IRAS identifier. The catalog is available as an ASCII text file, and via a WWW form which allows the user to search through it in several ways. A detailed description of each field in the catalog, and instructions on accessing it electronically, are provided in Richmond (1999b). ### 6.3 Variable Stars One of the strengths of the Mark III survey is the number of times it has scanned some areas of the sky. Recall that two of our sites have two $`I`$-band cameras on their triplets, providing measurements of each star roughly two hours apart during a single night. In addition, we have spent many nights observing the celestial equator during the past few years. As a result, of the 367,241 stars in tenxcat, over 16,000 have at least 30 measurements in $`V`$-band and over 55,000 have at least 30 measurements in $`I`$-band. We have arranged our database so that remote users may easily retrieve all observations of one particular star, or generate a light curve on the fly. One obvious use for this wealth of data is to study variable stars. We describe here several different ways in which our members have started to mine the database. Gombert (1998a; 1998c; 1999) has searched for variable stars which do not appear in existing catalogs, finding over 60 candidates. He finds that the Mark III data is especially well-suited to finding Mira-like variables: one can isolate stars with large $`(V-I)`$ colors and examine several years of observations. Gutzwiller (1999) has found another 52 candidates. These efforts are by no means an exhaustive search of the entire Mark III dataset, but will keep us busy gathering followup observations (for an example, see Dvorak 1999).
While our positions are not as accurate as those in astrometric catalogs, they are certainly better than many listed in the General Catalog of Variable Stars (GCVS) (Kholopov et al 1992). When the identity of a variable in the GCVS is uncertain, due to a poor position, one can examine the light curve of all stars in the vicinity in our database and often find the variable easily. Gombert (1998b) provides positions accurate to a few arcseconds for 51 known variable stars. Our archives do not include images of the sky, only measurements of position and brightness for the sources we detect. The Stardial project (McCullough & Thakkar 1997), on the other hand, does save all of its images, but doesn’t perform any source detection or measurement. Since the Mark III and Stardial surveys share a very large area on the sky (about 1400 square degrees in a strip from Dec = $`0^{}`$ to $`4^{}`$), one can combine them to find objects which would not stand out in either survey alone. Hassforther and Bastian (1999) have found a Mira variable by visually searching a set of Stardial images; although the star disappears from Stardial at minimum light, it is still detected in our $`I`$-band measurements. We expect to see many other projects combine the Mark III data with information from other sources. ## 7 Future Work The Mark III survey will continue for several years. We have already gathered several seasons of data from a fourth site (the Applied Physics Lab at Johns Hopkins) to be incorporated into the database. This paper represents only a “snapshot” of a working project; we may export several updated versions of our catalog(s) before reaching the final version. TASS members have shifted most of their attention and efforts away from the Mark III survey to a new project: the Mark IV systems.
A Mark IV unit will consist of two cameras fixed side-by-side on a common mount; each camera combines a 100mm f/4 lens, a single filter, and a Lockheed/Fairchild 442A CCD with $`2048\times 2048`$ pixels to provide a field of view four degrees on a side. The mount allows motion in Hour Angle and Declination, and is designed to track accurately for several minutes. We estimate the detection limit of the system to be fainter than fifteenth magnitude. We are currently constructing seven Mark IV systems, the first of which will be deployed at Flagstaff in Fall 1999. TASS members have supported this project with their own time, energy and money; no tax dollars were spent on our work. We have been fortunate enough to receive valuable advice and contributions from many people, including (but not limited to) Paul Bartholdi, Bernie Kluga, Alain Maury, Peter McCullough, Peter Mount, Marty Pittinger, Jure Skvarc, Saral Wagner, and Ron Wickersham. We also owe a large debt to Bohdan Paczynski, whose enthusiasm and generosity guided us through the crucial early stages of the project. Jim Kern kindly suggested ways to improve this manuscript. ## Appendix A Appendix There were 23 cameras built and distributed for the Mark III survey. Of these, 16 are known to have operated and to have taken at least some data. Twelve have been operated almost every clear night at their locations. Data from 9 cameras is presently included in the database: * 3 V-band cameras, one at each site * 1 R-band camera, at Batavia * 5 I-band cameras: 2 each at Dayton and Cincinnati, 1 at Batavia We compared TASS measurements of stars against Landolt’s equatorial catalogs of UBVRI standard stars (Landolt 1983; 1992). The V-band measurements showed no significant offset in the mean, nor any significant color dependence. We therefore made no correction to the V-band measurements.
In the R and I bands, however, there were both small offsets in the mean (always in the sense that TASS measurements were slightly fainter than Landolt measurements), and small trends in residuals as a function of (V-I) color. See TASS Technical Note 54 (Richmond 1999a) for details, graphs, and tables. We determined the following color terms, based on stars brighter than mag 11 and using a color which was the mean TASS V value (based on measurements from all cameras) minus the mean TASS I value (based on measurements from all cameras). In the equations below, uppercase values represent corrected single magnitude measurements, and lowercase values represent raw single magnitude measurements. $`R\mathrm{band}`$ $`\mathrm{camera}\;\mathrm{H1}\;(\mathrm{Batavia}):R`$ $`=r+0.01209-0.073704\,(color)`$ $`I\mathrm{band}`$ $`\mathrm{camera}\;\mathrm{B0}\;(\mathrm{Dayton}):I`$ $`=i-0.0563-0.018676\,(color)`$ $`\mathrm{camera}\;\mathrm{B2}\;(\mathrm{Dayton}):I`$ $`=i-0.0683+0.040971\,(color)`$ $`\mathrm{camera}\;\mathrm{D0}\;(\mathrm{Cinn}):I`$ $`=i-0.0844+0.06116\,(color)`$ $`\mathrm{camera}\;\mathrm{D2}\;(\mathrm{Cinn}):I`$ $`=i-0.1001+0.08510\,(color)`$ $`\mathrm{camera}\;\mathrm{H2}\;(\mathrm{Batavia}):I`$ $`=i+0.0121-0.073704\,(color)`$ After making the corrections to each individual measurement, we then re-calculated the mean magnitude of each star in all passbands. In cases of spurious pairs, we made the corrections to each member of the pair separately, based on its own color, re-calculated mean magnitudes for each member separately, and finally determined a weighted mean magnitude for the merged entity. The corrected tenxcat magnitudes agree well with Landolt magnitudes, with little dependence on color.
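Reading each relation above as corrected = raw + offset + slope × (color), with the color taken as the mean TASS V minus the mean TASS I (and with the minus signs lost in extraction restored), the per-camera corrections can be applied as in this sketch; the container and function names are ours:

```python
# (offset, slope) per camera, transcribed from the relations above.
COLOR_TERMS = {
    "H1": (+0.01209, -0.073704),  # R band, Batavia
    "B0": (-0.0563, -0.018676),   # I band, Dayton
    "B2": (-0.0683, +0.040971),   # I band, Dayton
    "D0": (-0.0844, +0.06116),    # I band, Cincinnati
    "D2": (-0.1001, +0.08510),    # I band, Cincinnati
    "H2": (+0.0121, -0.073704),   # I band, Batavia
}

def correct_magnitude(camera, raw_mag, v_minus_i):
    """Apply the per-camera linear color-term correction to a single
    raw magnitude, given the star's mean TASS (V - I) color."""
    offset, slope = COLOR_TERMS[camera]
    return raw_mag + offset + slope * v_minus_i
```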
# Spin-Torsion in Chaotic Inflation (July 1999) ## Abstract The role of spin-torsion coupling to gravity is analyzed in the context of a model of chaotic inflation. The system of equations constructed from the Einstein-Cartan and inflaton field equations is studied, and it is shown that spin-torsion interactions are effective only during the very first e-folds of inflation, quickly becoming negligible and, therefore, affecting neither the standard inflationary scenario nor the density perturbation spectrum predictions. PACS number(s): 98.80 Cq Inflation, in its many different implementations, has become one of the most important cosmological paradigms today \[for reviews, see for instance, \]. The underlying idea of inflation, of a period of accelerated expansion of the scale factor, when the energy density is dominated by a vacuum energy density, is able to provide in a simple way a solution to the cosmological horizon and flatness problems and at the same time provides a model for density perturbations in the early Universe. Earlier studies by Gasperini have shown that inflation could be driven by a spin-density-dominated epoch in the early Universe, even in the absence of dominant vacuum contributions to the energy density, showing that a spin-torsion interaction acts like a source of repulsive gravity. This then raises the question of whether primordial spin-torsion interactions are able to support inflation in standard inflaton-driven inflationary scenarios, by, e.g., easing the conditions for slow-roll of the inflaton field. Previous works on spin/torsion effects in inflation that we are aware of have not detailed or elucidated the real role of spin-torsion in an inflationary epoch. Torsion plays an important role in many different physical models . In particular, torsion is natural to many models of higher-dimensional theory, as in Einstein-Kalb-Ramond models and string theory and in gauge theories of the Poincaré group .
Therefore, it is natural to expect that torsion may be particularly important in pre-inflationary models, where quantum gravity effects may be introduced, from the geometrical aspects of the space-time, by a torsion interaction term. This may be the case in chaotic inflationary scenarios, where the inflaton initial conditions are taken around the Planck era and, then, quantum gravity effects may become important to determine the initial conditions prior to inflation. Based on the above motivations, in this letter, by considering the spin-spin interactions of matter as described by the Einstein-Cartan theory (see, e.g., Ref. ), we study the role of spin-torsion in the simplest model of chaotic inflation, which is that of an inflaton with a quadratic potential. We do not expect that more general models of chaotic inflation will lead to results much different to this simple model, when regarding the effects of spin-torsion, which is introduced through a generalization of the gravity equations. In the Einstein-Cartan theory, the gravity equations are modified such that the Friedman equation (we are assuming a spatially flat Friedman-Robertson-Walker metric) and the acceleration equations read , respectively, $$H^2=\frac{8\pi G}{3}(\rho _\varphi +\rho _\mathrm{s})$$ (1) and $$\frac{\ddot{a}}{a}=\frac{4\pi G}{3}(\rho _\varphi +3p_\varphi 8\pi G\rho _\mathrm{s}),$$ (2) where $`H=\dot{a}/a`$ is the Hubble parameter and $`G=1/M_{\mathrm{pl}}^2`$, with $`M_{\mathrm{pl}}`$ the Planck mass. In the above equations we have also defined $`\rho _\mathrm{s}`$, as $`\rho _\mathrm{s}=S_{\mu \nu \alpha }S^{\mu \nu \alpha }/2`$, the average of the square of the spin density tensor $`S_{\mu \nu \alpha }`$. The spins are taken as randomly oriented (from not polarized spinning matter fields) , so the average value of $`S`$ is zero. 
The torsion tensor $`Q_{\mu \nu }^\alpha `$ is related to $`S_{\mu \nu }^\alpha `$ by the standard expression $$Q_{\mu \nu }^\alpha =8\pi G\left(S_{\mu \nu }^\alpha +\frac{1}{2}\delta _\mu ^\alpha S_{\nu \beta }^\beta -\frac{1}{2}\delta _\nu ^\alpha S_{\mu \beta }^\beta \right).$$ (3) In Eqs. (1) and (2), $`\rho _\varphi `$ and $`p_\varphi `$ are the energy density and pressure for the inflaton field $`\varphi `$, respectively, given by the usual relations: $`\rho _\varphi =\frac{1}{2}\dot{\varphi }^2+V(\varphi )`$, $`p_\varphi =\frac{1}{2}\dot{\varphi }^2-V(\varphi )`$. As the spin-torsion does not couple to the inflaton field, we have the evolution equation for $`\varphi `$, $$\ddot{\varphi }+3H\dot{\varphi }+V^{\prime }(\varphi )=0,$$ (4) where in the above equations overdots represent derivatives with respect to time and $`V^{\prime }=\frac{dV}{d\varphi }`$ is the field derivative of the inflaton potential. Using the simplest inflaton potential for chaotic inflation, $`V(\varphi )=m^2\varphi ^2/2`$, and from Eqs. (1), (2) and (4), we determine the system of equations satisfied by the scale factor $`a`$, $`\varphi `$ and $`\rho _\mathrm{s}`$, $`\ddot{a}`$ $`=`$ $`-{\displaystyle \frac{8\pi G}{3}}a\left(\dot{\varphi }^2-{\displaystyle \frac{m^2}{2}}\varphi ^2-4\pi G\rho _\mathrm{s}\right)`$ $`\ddot{\varphi }`$ $`=`$ $`-3{\displaystyle \frac{\dot{a}}{a}}\dot{\varphi }-m^2\varphi `$ (5) $`\dot{\rho _\mathrm{s}}`$ $`=`$ $`-6{\displaystyle \frac{\dot{a}}{a}}\rho _\mathrm{s}.`$ From the equation for the acceleration in the set of equations in (5), we can immediately see that a spin-torsion density works in favor of inflation, when the slow-roll conditions for the inflaton field apply.
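The last equation in (5) can be checked numerically in the constant-H limit; the following is a toy sketch in arbitrary units (the function name and midpoint integrator are ours):

```python
def evolve_spin_density(rho0, hubble, t_end, n_steps=100000):
    """Integrate d(rho_s)/dt = -6 H rho_s for constant H with a
    midpoint (RK2) scheme; the exact solution is rho0 * exp(-6 H t)."""
    dt = t_end / n_steps
    rho = rho0
    for _ in range(n_steps):
        k1 = -6.0 * hubble * rho
        k2 = -6.0 * hubble * (rho + 0.5 * dt * k1)
        rho += dt * k2
    return rho
```

With rho0 = hubble = 1 and t_end = 1 the result reproduces exp(-6) to better than one part in 10^6: the spin density is suppressed by six e-folds of the scale factor after a single Hubble time.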
However, under these circumstances, of a regime of inflation, the Universe quickly enters a de Sitter phase, with $`H\simeq \mathrm{const}.`$ and, therefore, from the last of the equations in (5), $`\rho _\mathrm{s}`$ satisfies $$\rho _\mathrm{s}\propto e^{-6Ht}\;(\mathrm{in}\;\mathrm{de}\;\mathrm{Sitter})$$ (6) decreasing with the sixth power of the inverse of the scale factor during the de Sitter phase; the spin density thus quickly vanishes as soon as the Universe enters an inflationary phase. Note that $`\rho _\mathrm{s}`$ decreases much faster than the inflaton density (and even faster than radiation energy densities), which goes with the third power of the inverse of the scale factor during the de Sitter phase. Therefore, we do not find a spin-torsion dominated inflation over the inflaton in chaotic inflation models in general, with spin-torsion interactions very quickly becoming subdominant right after the first e-folds of inflation. This is consistent with earlier findings from Kao in , who, working in the context of torsion in the ten-dimensional Kalb-Ramond theory, concludes that the torsion field vanishes at the end of the inflationary era. For the same reasons above, we do not expect any contribution of spin-torsion interactions to density perturbations in chaotic inflationary scenarios, once the spin-torsion density is depleted well before quantum fluctuations first cross the horizon. We can ask what happens when the initial value for $`\varphi `$ is much smaller than the usual value needed in the model above in the absence of spin-torsion effects, $`\varphi _i\simeq 3.4M_{\mathrm{pl}}`$, as required for sufficient inflation ($`70`$ e-folds of inflation) in the model. It seems that, from the first of the equations in (5), we could arrange a spin-torsion dominated epoch over the inflaton field, for a sufficiently large initial value for $`\rho _\mathrm{s}`$. However, this seems not to be the case, since, from Eq.
(1), it imposes a limit on the initial value for $`\rho _\mathrm{s}`$, as $`2\pi G\rho _{\mathrm{s},i}`$ cannot be larger than $`\rho _{\varphi ,i}`$. Also, very specific models, such as the one discussed by Gasperini in , show that a spin-torsion dominated inflation, with the physical requirement of a large enough number of e-foldings of inflation, can only be attained with an extreme fine-tuning of the spin density prescription used there. Acknowledgements This work was partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Brazil).
no-problem/9910/hep-ex9910043.html
# DESY–99–162 Measurement of the $`E_{T,jet}^2/Q^2`$ dependence of forward–jet production at HERA ## 1 Introduction The wide kinematic range available at the HERA $`ep`$ collider at DESY has allowed QCD to be tested in regions of phase space not available to previous experiments. Both the H1 and ZEUS collaborations have studied the forward–jet cross sections in order to search for BFKL effects . For these analyses, the two hard scales involved in jet production in deep inelastic scattering (DIS), the negative square of the four–momentum transfer at the lepton vertex, $`Q^2`$, and the squared transverse jet energy, $`E_{T,jet}^2`$, were chosen to be of the same order of magnitude. This paper extends our previous study by investigating the forward–jet cross section as a function of the ratio of these two scales, $`E_{T,jet}^2/Q^2`$, for the entire available range. Three different kinematic regions can be distinguished, depending on the dominant scale. In the first region, $`Q^2\gg E_{T,jet}^2`$, $`Q^2`$ is the standard hard scale of the deep inelastic process. Typically, leading–order (LO) Monte Carlo models approximate pQCD contributions in this regime by parton showers. In the second region, where $`E_{T,jet}^2\approx Q^2`$, all terms with $`\mathrm{log}(Q^2/E_{T,jet}^2)`$ become small and the effects of DGLAP evolution are suppressed. Therefore BFKL effects are expected to be observable in this region, which was selected for the analysis of forward–jet production , where it was discussed in detail. In the third region, where $`Q^2\ll E_{T,jet}^2`$, the NLO pQCD prediction is sensitive to the treatment of terms proportional to $`\mathrm{log}(E_{T,jet}^2/Q^2)`$, which ought to be resummed. Conventional Monte Carlo models do not include these terms.
In this letter, measurements of the forward–jet cross sections covering all three regions are presented and compared to the predictions of various LO Monte Carlo models in which the hard–scattering process is described by direct photon diagrams, namely boson–gluon fusion and QCD Compton diagrams. The models under consideration differ in the way they describe the higher–order contributions to the LO process. LEPTO and HERWIG use parton showers that evolve according to the DGLAP equations. ARIADNE employs the color–dipole model, in which gluons are emitted from the color field between quark–antiquark pairs. Since color dipoles radiate independently, the gluons are not ordered in transverse momentum, $`k_T`$. The linked–dipole–chain model, LDC , implements the structure of the CCFM equation , which is intended to reproduce both DGLAP and BFKL evolution in their respective ranges of validity. In all these models, $`Q^2`$ is normally used as the relevant scale. Finally, RAPGAP introduces a resolved photon contribution in addition to the direct photon cross section and uses $`Q^2+E_{T,jet}^2`$ as the factorization scale. The inclusion of the resolved photon contribution partially mimics the higher–order contributions to the direct photon component, namely the $`\mathrm{log}(E_{T,jet}^2/Q^2)`$ terms, which are not included in the conventional DIS LO Monte Carlo models. The scattering of partons from the proton off those contained in the resolved photon can lead to final-state partons with high transverse momentum in the forward direction. Since this process was suggested to provide an explanation for the observed excess in forward–jet production , the previously published forward cross section as a function of the Bjorken scaling variable, $`x`$, is compared to predictions of the RAPGAP model. ## 2 Measurement This study is based on data taken with the ZEUS detector in 1995, corresponding to an integrated luminosity of 6.36 pb<sup>-1</sup>.
As the analysis follows very closely that for the forward–jet cross section , details about the experimental setup, event selection, jet finding and systematic errors are not repeated here. The selected DIS events were required to have a scattered electron with a minimum energy of $`E_e^{\prime }=10`$ GeV. The fractional energy transfer by the virtual photon had to be $`y>0.1`$. The $`x`$ range was extended, with respect to , from $`4.5\times 10^{-4}<x<4.5\times 10^{-2}`$ to $`2.5\times 10^{-4}<x<8.0\times 10^{-2}`$. An additional cut, $`Q^2>10`$ GeV<sup>2</sup>, was applied in order to be well within the DIS regime. Jets were selected with a cone algorithm in the laboratory frame. The cone radius, $`R`$, was chosen to be 1.0. The transverse energy of the jets in the laboratory frame, $`E_{T,jet}`$, was required to be larger than 5 GeV and the jet pseudorapidity<sup>1</sup><sup>1</sup>1The ZEUS coordinate system is defined as right–handed with the $`Z`$–axis pointing in the proton beam direction, referred to as forward direction, and the $`X`$–axis horizontal, pointing towards the center of HERA. The pseudorapidity is defined as $`\eta =-\mathrm{ln}(\mathrm{tan}\frac{\theta }{2})`$, where the polar angle $`\theta `$ is taken with respect to the proton beam direction. range was restricted to $`\eta _{jet}<2.6`$. The scaled longitudinal jet momentum $`x_{jet}=p_{Z,jet}/p_{beam}`$, where $`p_{beam}=820`$ GeV, had to be larger than 0.036 to select forward jets . Furthermore, only jets with a positive $`Z`$–momentum in the Breit frame were considered, thus avoiding those jets originating from the scattered quark at large values of $`x`$. These cuts are given in Table 1. The jet cross sections presented here have been corrected to the hadron level for detector acceptance and smearing effects using the ARIADNE model, since it gave the best description of the data .
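The event and jet selection described above amounts to a short list of cuts; a minimal sketch in Python (the dict-based event layout and all field names are ours for illustration, not ZEUS software, and the cut values are those quoted in the text as we read them):

```python
import math

def pseudorapidity(theta):
    # eta = -ln(tan(theta/2)), with theta measured from the proton beam direction
    return -math.log(math.tan(theta / 2.0))

def passes_forward_jet_cuts(event, jet, p_beam=820.0):
    # DIS selection: scattered-electron energy, y, Q^2 and the extended x range
    dis_ok = (event["E_e_prime"] > 10.0          # GeV
              and event["y"] > 0.1
              and event["Q2"] > 10.0             # GeV^2, well within the DIS regime
              and 2.5e-4 < event["x"] < 8.0e-2)
    # forward-jet selection: cone-jet E_T, pseudorapidity, x_jet and Breit-frame p_Z
    jet_ok = (jet["E_T"] > 5.0                   # GeV, laboratory frame
              and jet["eta"] < 2.6
              and jet["p_Z"] / p_beam > 0.036    # x_jet = p_Z,jet / p_beam
              and jet["p_Z_breit"] > 0.0)        # reject jets from the scattered quark
    return dis_ok and jet_ok
```

A jet at $`\theta =90\mathrm{°}`$ has $`\eta =0`$; the forward cut $`x_{jet}>0.036`$ corresponds to $`p_{Z,jet}30`$ GeV at $`p_{beam}=820`$ GeV.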
The purity for reconstructing forward jets in the given phase space rises from 40% to 80% with increasing $`E_{T,jet}^2/Q^2`$, while the efficiency rises from 35% to 55%. For the lowest bin in $`E_{T,jet}^2/Q^2`$, both purity and efficiency are around 20%, but here the statistical errors are large. The factors required to correct the data for detector effects lie between 0.8 and 1.4 and increase as $`E_{T,jet}^2/Q^2`$ increases. ## 3 Results The forward–jet cross section is presented in Fig. 1 as a function of $`E_{T,jet}^2/Q^2`$. The numerical values are given in Table 2. The treatment of the systematic errors closely follows the published results and leads to errors of similar size. The shaded band corresponds to the uncertainty coming from the energy scale of the calorimeter. Predictions from different LO Monte Carlo models are shown in Fig. 1 and Fig. 2. Three regions are distinguished, separated by the dashed vertical lines. In the region where $`Q^2\gg E_{T,jet}^2`$, all the models describe the data reasonably well. In the regime $`Q^2\approx E_{T,jet}^2`$, only ARIADNE 4.08 and RAPGAP 2.06 reproduce the measured distributions. In RAPGAP the resolved component of the virtual photon is modeled using the SaS-2D parametrization for the parton distribution function (pdf) of the photon , which in this $`Q^2`$ range evolves as $`\mathrm{log}(E_T^2/Q^2)`$. The factorization scale has been set to $`\mu ^2=E_{T,jet}^2+Q^2`$. In Fig. 3 the $`x`$–dependence in this regime is compared with RAPGAP, using the cuts $`0.5<E_{T,jet}^2/Q^2<2.0`$ and $`4.5\times 10^{-4}<x<4.5\times 10^{-2}`$ . RAPGAP gives a good description of the cross section. The contribution of the direct photon component is indicated separately. As expected, it matches the LEPTO prediction. For $`Q^2\ll E_{T,jet}^2`$, none of these models, except RAPGAP, reproduces the data. In particular ARIADNE overshoots the data by up to an order of magnitude at the upper limit of the displayed range.
The other models, LEPTO 6.5, HERWIG 5.9 and LDC 1.0, lie far below the data. These comparisons using corrected cross sections are similar to those made previously , using the uncorrected distributions. The same data are shown in Fig. 2 together with the prediction of the RAPGAP Monte Carlo model, which describes the data well over the full range of $`E_{T,jet}^2/Q^2`$. Recently, the parton level NLO calculation JetViP has become available, to which our data can also be compared, with the proviso that the hadronization corrections are model–dependent and are of the order of up to 20%. JetViP sums contributions from the direct and resolved virtual photon and uses the SaS-1D photon pdf . For the first three bins in Fig. 2 only the direct contribution has been taken into account, since $`Q^2`$ is large enough ($`Q^2>83`$ GeV<sup>2</sup>) that the resolved component can be neglected. The renormalization and factorization scales have been set to $`E_{T,jet}^2+Q^2`$ . The agreement over the full range of $`E_{T,jet}^2/Q^2`$ is good. The $`x`$ dependence of the cross section in the range $`0.5<E_{T,jet}^2/Q^2<2.0`$ has also been calculated with JetViP and good agreement was found. The fact that only RAPGAP and JetViP describe the data implies that a resolved photon component is necessary for $`E_{T,jet}^2/Q^2>1`$. The necessity of a resolved photon component in a DIS process has also been discussed by the H1 collaboration in the context of dijet production in a $`Q^2`$ range of 5 to 100 GeV<sup>2</sup> , where the measured dijet cross section could only be described with the inclusion of the resolved component. In comparing the performance of RAPGAP and JetViP it should be noted that while they both agree with the data, their predictive power is limited. On the one hand both RAPGAP and JetViP use the SaS photon pdf, which for $`Q^2>0`$ is not very well constrained by experimental data. 
On the other hand there is a large variation of the results when the factorization scale is varied, as shown by the light shaded band in Fig. 3 for RAPGAP. A similar effect is seen for JetViP . ## 4 Summary The cross sections for forward–jet production over a wide range of $`E_{T,jet}^2/Q^2`$ have been compared to different Monte Carlo models. All leading–order Monte Carlo models tested here give a good description of the region in which $`E_{T,jet}^2\ll Q^2`$. However, only those models which include non–$`k_T`$–ordered gluon emissions, or contributions from a resolved photon, reproduce the $`E_{T,jet}^2\approx Q^2`$ region. The full range of $`E_{T,jet}^2/Q^2`$ can be described only by the RAPGAP model and the JetViP NLO QCD calculation, both of which include a resolved photon contribution. The forward–jet differential cross section, as a function of $`x`$ , is also well reproduced by RAPGAP and JetViP. However, the large dependence of their predictions on the factorization scale diminishes the significance of this agreement. ## Acknowledgements We thank the DESY directorate for their strong support and encouragement. The remarkable achievements of the HERA machine group were essential for the successful completion of this work and are gratefully acknowledged. We also thank G. Kramer for useful discussions and B. Pötter for providing the JetViP calculation.
no-problem/9910/astro-ph9910185.html
# Mid-Infrared Spectra of Be Stars ## 1 Introduction Be stars are defined as B stars which display optical emission in one or more Balmer lines of hydrogen, or which have displayed such emission at some point in the past. There are also several subdivisions of Be stars, including the “classical” Be stars and Herbig Be stars. Herbig Ae/Be stars are considered the intermediate-mass (5–20 $`M_{\odot }`$) counterparts to T Tauri stars; they are pre-main sequence stars surrounded by an optically thick, dusty disk remaining from proto-stellar collapse. From this disk arise forbidden-line emission and H$`\alpha `$ emission. Classical Be stars, on the other hand, are intermediate-mass main-sequence objects, believed to have an extended optically thin gaseous envelope. While these two types of objects share similar nomenclature, they have little in common outside of their masses. The sources presented in this paper are all classical Be stars, and henceforth the term Be star refers to classical Be stars. Classical Be stars are also divided into two categories, “non-shell”, or “normal” stars, and “shell” stars. Shell stars display broad emission wings in one or more of the hydrogen lines and narrow absorption bands for ionized metals (Slettebak 1988), features not observed in non-shell Be stars. Several suggestions about the relation between shell stars and normal Be stars have been made: this relation is complicated by the fact that several stars, including $`\gamma `$ Cas and Pleione (HR 1180), have been observed to change from “normal” spectra to “shell” spectra (Doazan & Thomas 1982). Hanuschik (1996) explains shell stars as Be stars which are observed equator-on, with the transition between normal and shell spectra explained by density variations in a concave equatorial disk viewed at a small inclination angle ($`\lesssim 15\mathrm{°}`$). Alternatively, it has been suggested that the shell phase is a common evolutionary phase of Be stars (Slettebak et al.
1992), involving the formation of a gas shell outside the stellar envelope. The formation of the gas shell can explain the transition from normal to shell spectra, while the dissipation of the gas shell causes the transition from shell to normal spectra. Optical spectroscopy has raised a number of questions about Be stars. Individual spectral lines are found to be double peaked, with a ratio of the violet peak flux to the red peak flux (the V/R ratio) which differs from unity. Further, this ratio and the line strengths of the hydrogen Balmer lines have been found to be time-variable in many Be stars (Telting et al. 1993). Near-infrared (NIR) spectroscopy has added to this puzzle. Variability of line strengths and V/R ratios of the double-peaked lines mirrors observations in the optical (Hamann & Simon 1989). In addition, a number of the hydrogen recombination lines which appear in the NIR have anomalously high line strengths, and the NIR continuum has been observed to vary with time (Dougherty & Taylor 1994). Ultraviolet observations complicate the problem further. The strong lines observed in the UV should arise from a fast, low-density wind (Smith 1995). This is at odds with the optical and NIR observations, which require a relatively dense gas to produce the observed features. A possible explanation for this discrepancy is discussed below. X-ray observations have shown several Be stars to be X-ray binaries. $`\gamma `$ Cas, the brightest of the northern hemisphere Be stars, has a compact companion, most likely a white dwarf (Haberl 1995). Other Be stars are also in binary systems; a large number of Be stars, including many of the sources presented in this sample, are spectroscopic binaries, including $`\beta `$ Lyr, $`\zeta `$ Tau, and $`\varphi `$ Per (Jared et al. 1989). Of all of the wavelength regions, the mid-infrared has been the least explored.
Photometry by Gehrz et al. (1974) from 2.3 to 19.5µm showed that the spectral energy distribution of the IR excess was consistent with free-free emission from a warm disk-shaped envelope. It has also been suggested (when discussing NIR observations) that studies of infrared line emission can be very useful in probing the stellar envelope of Be stars, as the emission is highly dependent upon the temperature and density structure of the envelope (Marlborough et al. 1997). This is one of the motivations for the study introduced here. Explaining the multitude of observations by modelling is a daunting task. The most common model used to explain the observations is the axisymmetric equatorial disk model of Poeckert & Marlborough (1978). A slow, high-density wind at the equator creates a thick envelope which produces the observed optical and NIR emission, while the UV emission arises from a fast, tenuous wind originating at the poles of the star (Lamers & Waters 1987). The presence of an equatorial disk has been supported by interferometric and polarimetric observations (Quirrenbach et al. 1997, Stee et al. 1995), but these models are incomplete, as they still cannot explain some Be star phenomena, such as the observed V/R variability. A more complete discussion of Be stars can be found in several excellent review articles, including Slettebak (1988) and Dachs (1986). In this paper, we present the first high-sensitivity, medium resolution ($`R\approx 600`$) mid-infrared (8-13.3µm) spectra of Be stars. In §2, we discuss the observations and data reduction techniques. In §3, we present the observed spectra, and briefly mention characteristics of particular sources. In §4, we discuss using free-free emission from the stellar envelope to model the MIR continuum. Finally, in §5, we briefly discuss some of the implications of these observations. ## 2 Observations and Data Reduction The observations presented here were taken using SCORE (van Cleve et al.
1998) at the Cassegrain focus of the Hale 5-m telescope.<sup>2</sup><sup>2</sup>2Observations at the Palomar Observatory were made as part of a continuing collaborative agreement between the California Institute of Technology, the Jet Propulsion Laboratory, and Cornell University. SCORE is a novel medium resolution mid-infrared spectrograph built originally as a proof of concept of the Infrared Spectrograph (IRS) short-hi module for SIRTF. It uses cross-dispersion to obtain a 7.5-15µm spectrum in a single exposure. The obtained spectrum has a resolution of $`\sim 600`$, with a $`1^{\prime \prime }\times 2^{\prime \prime }`$ slit. This short slit length is ideal for sources which are unresolved in the MIR at Palomar. SCORE also makes use of a second MIR array for slit-viewing, which both simplifies source acquisition during observing and provides information for photometric calibration (see discussion below). A listing of all of the observations is presented in Table 1. All of these observations were made using standard beam-switched chopping and nodding techniques. Typical integration times ranged from 240 to 1090 seconds, and the entire N-band was observed at a single setting for each object. The 1$`\sigma `$ sensitivity at 11.0µm in 100 seconds of integration for our instrument is roughly 90 mJy. The MIR seeing for these observations was typically $`1^{\prime \prime }`$. The individual integrations of each object were stacked, and the N-band spectra were extracted using SCOREX, extraction software designed to optimally extract the spectrum from the cross-dispersed format of SCORE (Smith et al. 1998). These spectra were then calibrated using observed spectra of standard stars (Cohen et al. 1992) to properly remove spectral features from the sky background.
The observed spectra from these standard stars were taken at nearly the same airmass as our observed objects, as even relatively small differences in airmass result in different strengths of atmospheric features, leading to inaccurate subtraction of background features from our calibrated spectra. Absolute photometry for the spectra was found by using the slit-viewer images of SCORE. Since a small chop amplitude ($`\sim 5^{\prime \prime }`$) was used for these observations, the chopped images appear in the field-of-view of the slit-viewer. We calculate the slit-throughput by comparing the flux incident on the slit-viewer focal plane with the total flux found in the observed spectrum; mathematically, $$A=\frac{F_\nu (\mathrm{Slitviewer})}{\int F_\nu (\lambda )g(\lambda )𝑑\lambda }$$ (1) where $`A`$ is the renormalization factor, $`F_\nu (\lambda )`$ is the flux as a function of wavelength on the spectrograph, $`g(\lambda )`$ is the slit viewer filter passband function, and $`F_\nu (\mathrm{Slitviewer})`$ is the total flux incident on the slit-viewer focal plane array. The renormalization factor $`A`$ should account for the slit throughput. To provide the absolute calibration, we use the same procedure for our calibrator star, deriving another normalization factor, $`A_{calib}`$. By multiplying the observed, calibrated spectrum of our source by $`\frac{A}{A_{calib}}`$ we are able to normalize the observed spectrum to have the correct amount of flux, without introducing errors from instrumental effects. The absolute normalization of the spectra is uncertain to roughly 5%, due primarily to the uncertainty in the slit throughput efficiency. ## 3 Observed Spectra of Be stars In Figure 1, we present the reduced spectra of our objects. These spectra clearly show the presence of a large number of spectral lines. Further, in most of the spectra, there are a number of lines which are weak relative to the strong MIR continuum.
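The slit-throughput renormalization of Eq. (1) above amounts to a single numerical integration; a minimal sketch, assuming tabulated spectra and passband values (all function names and inputs are ours for illustration):

```python
def trapz(y, x):
    # trapezoidal rule for tabulated data
    return sum(0.5 * (y[i] + y[i + 1]) * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def renorm_factor(flux_slitviewer, wavelengths, spectrum, passband):
    # Eq. (1): A = F_nu(slit viewer) / integral of F_nu(lambda) g(lambda) dlambda
    weighted = [f * g for f, g in zip(spectrum, passband)]
    return flux_slitviewer / trapz(weighted, wavelengths)

def apply_absolute_calibration(spectrum, a_source, a_calibrator):
    # scale the source spectrum by A / A_calib to set the absolute flux level
    scale = a_source / a_calibrator
    return [f * scale for f in spectrum]
```

Because the same ratio is formed for the calibrator star, instrument-dependent factors in $`A`$ cancel in $`A/A_{calib}`$.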
Since these weak lines appear in several different spectra, but not in calibration star spectra, we conclude that they are not artifacts. Further, the clear presence of multiple lines in each series implies the presence of other lines within the series. For instance, the H 29$`\rightarrow `$10 line is weak in the spectrum of $`\gamma `$ Cas, but the presence of other transitions (including the H 29$`\rightarrow `$9 transition in this case) supports the identification of this weak line, since they arise from a common excited state. Similar support can be found for nearly all of the weak observed lines. Finally, because all of the lines occur at wavelengths attributable to hydrogen recombination (for most of our objects, see below), there is strong evidence to support the pure-hydrogen recombination spectra. There is also another test for the presence of these weak lines. Nine of the 11 spectra display similar strong hydrogen line emission (all but $`\beta `$ Lyr and MWC 349), but a number of these spectra have low signal to noise. These nine sources will hereafter be referred to as the hydrogen spectra Be stars (or HS stars). If we suppose that all of the lines observed in the high signal to noise spectra (such as $`\gamma `$ Cas and $`\zeta `$ Tau) are present in the spectra of all HS sources, we can coadd the spectra from all nine of these objects. If these lines are, in fact, present in a majority of the sources, we expect the weak lines to be accentuated. If, on the other hand, these weak lines are present in only a few spectra, we expect them to be diluted (relative to their appearance in the individual spectra). This coadded spectrum is shown in Figure 2, with the line identifications marked at the top of the plot. Examination of this spectrum shows a large number of lines, including a number which are only observed as very faint peaks in the spectra of our bright sources. This supports the idea that all of the observed hydrogen transitions are present in the majority of the HS sources.
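The logic of the coaddition test is that averaging $`N`$ spectra suppresses uncorrelated noise by roughly $`\sqrt{N}`$, so a line present in most sources is accentuated while one present in only a few is diluted. A toy demonstration with simulated data (all numbers invented, not fit to our observations):

```python
import math
import random
import statistics

random.seed(1)
NPIX, NSPEC, LINE = 200, 9, 100   # pixels per spectrum, nine HS-like spectra, line center

def simulated_spectrum():
    # zero continuum + weak Gaussian line (peak 2.0, sigma 3 px) + unit Gaussian noise
    return [2.0 * math.exp(-0.5 * ((i - LINE) / 3.0) ** 2) + random.gauss(0.0, 1.0)
            for i in range(NPIX)]

spectra = [simulated_spectrum() for _ in range(NSPEC)]
# straight average of the nine spectra: noise drops roughly as 1/sqrt(N)
coadd = [sum(s[i] for s in spectra) / NSPEC for i in range(NPIX)]

def rms_noise(spec):
    # noise estimated from a line-free window
    return statistics.pstdev(spec[:60])

single_noise = rms_noise(spectra[0])
coadd_noise = rms_noise(coadd)
```

With nine spectra the expected noise reduction is a factor of three, turning a marginal feature into a clear detection in the coadd.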
In Table 2, we list all of the observed lines, their line widths and effective line strengths, and the hydrogen recombination transition responsible for their production, in order of decreasing wavelength. This table includes the data for all of the HS sources (which excludes $`\beta `$ Lyr and MWC 349). The data from the two peculiar objects ($`\beta `$ Lyr and MWC 349) are presented in Table 3, because of the clear differences between their spectra and the spectra of the majority of our sources. Note that in Table 2, every observed line is attributable to hydrogen recombination. We examined the spectra for the presence of line species other than hydrogen, in particular for the presence of He I or He II, since these species can produce emission lines coincident with hydrogen lines. He I is easily identified by the presence of the 10.88 µm He I 3S–3P<sub>o</sub> transition, while He II can be identified by the presence of many additional hydrogen-like transitions. In the nine HS spectra, there is no evidence for the He I 10.88µm transition or for the numerous He II transitions which would be present if these species accounted for any of the observed emission. In $`\beta `$ Lyr, however, we observed the He I 3S–3P<sub>o</sub> transition at 10.88 µm. Since helium recombinations occur at the same wavelength as hydrogen recombinations (for instance, both H 9$`\rightarrow `$7 and He I 9$`\rightarrow `$7 occur at 11.3µm), the presence of this line implies that the observed lines are not necessarily purely due to hydrogen, as is true for most of our other sources. The possible overlap of hydrogen and helium recombination lines makes exact identification of these lines problematic. We also find evidence for the \[Ne II\] 12.8µm transition, a line also observed in the spectrum of MWC 349. MWC 349 showed no evidence for He I or He II emission, and only showed a few weak hydrogen transitions (with the exception of the H 7$`\rightarrow `$6 transition, which was quite strong).
We will return to a discussion of the observed spectral features of these objects later in this section. ### 3.1 Hydrogen Spectra Be Stars While the nine HS sources presented here have similar MIR spectra, they are significantly different types of objects in several ways. Seven of the sources are classified as shell stars. Five of our sources are spectroscopic binaries, and three of the sources are X-ray binaries. Further, while most Be stars do not display radio emission, four of the sources presented here are relatively bright radio sources. The characteristics of the individual objects are tabulated in Table 4. For completeness, some of the characteristics of the two peculiar sources are also listed in this table. The large differences in the properties of the individual objects lead us to look for correlations. Are any of these individual properties related to the MIR observations presented here? We looked at correlation plots for each of the different categories listed in Table 4. An example of one of these correlation plots is shown in Figure 3. We have plotted the ratio of the H 9$`\rightarrow `$7 to H 7$`\rightarrow `$6 transition versus the ratio of the H 10$`\rightarrow `$7 to H 7$`\rightarrow `$6 transitions in this figure, with the X-ray binaries plotted as $`+`$ symbols, spectroscopic binaries as open triangles, and single stars as open diamonds. We have examined a number of similar correlation plots, looking for evidence that our MIR spectra are affected by each of these properties, but have found no such evidence. However, given the small statistics (only nine sources), this lack of detection is not surprising, and only by building a more significant set of data will such correlation tests be valuable for understanding the effects of these peculiarities on our MIR spectra. The presence of so many high-level hydrogen transitions provides valuable insight into the origin of line emission.
The line strengths are inconsistent with optically thin line emission (Hummer & Storey 1987), and the lines therefore must originate at optical depths $`\gtrsim 1`$. The optically thick emission will simply be the product of the Planck function, the line width, and the surface area of the emission region. Since the Planck function of a gas with temperatures consistent with the production of such high H recombination series decreases with wavelength and our observations show no clear trend in line width with wavelength, the surface area must be increasing with wavelength. ISO observations have shown that the line width actually decreases with wavelength (Hony et al. 1999), strengthening this statement. This implies a density gradient, in order to balance line ratios; if the density is uniform, then the low-level transitions will become optically thick much more quickly than their high-level counterparts, giving them an effective emission surface significantly larger than the high-level lines, and overcompensating for the decreasing strength with wavelength. This was previously noted from NIR observations of $`\gamma `$ Cas (Hamann & Simon 1987). These authors concluded that gradients as small as r<sup>-2</sup> were sufficient to explain their observed line ratios. ### 3.2 Peculiar Be Stars $`\beta `$ Lyr and MWC 349 display spectra which are significantly different from our HS sources, and from each other. Each of these sources is unique, and it is therefore not surprising that their MIR spectra should be peculiar as well. We briefly discuss the peculiar aspects of the MIR spectra of these objects here, and present some possible explanations for these peculiarities. $`\beta `$ Lyr is a known spectroscopic and eclipsing binary. The two members of this binary are B stars, with masses of $`M_1\approx 13M_{\odot }`$ and $`M_2\approx 3M_{\odot }`$ (Harmanec & Scholz 1993).
The two stars are in a close orbit; the less-massive star fills its Roche lobe, and the overflowing material forms an accretion disk around the more massive star (De Greve & Linnell 1994). This situation is quite different from the disk-like stellar envelopes around our HS Be stars, which occupy a comparatively small volume. The accretion process can also possibly explain the observed He I emission by invoking convective dredging in the donor star, thus bringing helium into the Roche lobe and the accretion disk. The presence of the He recombination lines in this spectrum, when compared to the HS sources, is most easily explained by either a higher He abundance in the circumstellar disk or a higher level of excitation. If we compare $`\beta `$ Lyr to $`\gamma `$ Cas, however, we see that $`\gamma `$ Cas is significantly hotter than $`\beta `$ Lyr ($`T_{\beta \mathrm{Lyr}}<20000`$ K). Since no He emission is observed in $`\gamma `$ Cas, whose disk gas is at higher excitation, we conclude that the observed He emission in $`\beta `$ Lyr is due to a greater He abundance. The biggest question raised from examination of the MIR spectrum of $`\beta `$ Lyr is the cause of \[Ne II\] emission. While a great number of emission lines have been observed in the optical and NIR spectra of $`\beta `$ Lyr, there has been no evidence for forbidden emission in any of these spectra (Johnson 1978), or for mid-infrared emission from forbidden lines in our spectra (\[Ar III\] at 9.0µm, \[S IV\] at 10.5µm). Why should \[Ne II\] emission, and no other forbidden emission, be observed? The two properties which should be examined in particular are the ratio of the observed to critical density ($`n_e/n_{crit}`$) for \[Ne II\] emission and the energy required to form Ne II. Upon examination, we find that both of these characteristics of Ne are favorable for the observed forbidden emission.
\[Ne II\] has a very high critical density ($`\sim 10^5`$ cm<sup>-3</sup>), higher than any other astronomically strong MIR forbidden transition. Further, it has a relatively low ionization potential (21.5 eV), making it possible to ionize a large enough fraction of the neon gas by the hot B stars to create the observed emission. Using the observed strength of the \[Ne II\] line, the distance to $`\beta `$ Lyr (270 pc) from the Hipparcos catalog (1997), and making a few assumptions about the emitting gas (e.g. $`n_e=n_{crit}`$, $`T_e=10^4`$ K, solar abundances), we can calculate an approximate mass of emitting gas. This calculation gives a mass of the emitting gas of $`M_{tot}\approx 10^{-5}M_{\odot }`$. This compares favorably with calculations of the mass-transfer rate from the “donor” star to the “accretor” star, if we assume that some small fraction of the transferred mass escapes into an extended gas shell. Such calculations, assuming complete mass conservation between the two stars, give values of $`\dot{M}\approx 2\times 10^{-5}M_{\odot }`$ yr<sup>-1</sup> (Hubeny et al. 1994). If we assume that the gas is fully ionized, we can estimate an emission volume for the \[Ne II\] line. We assume, for this approximation, that the emission volume is spherical, and find that the emitting region should be of order 100 AU across. For comparison, the separation of the two components of the binary is only of order 20 AU. From these estimates, we suggest that the forbidden line emission does not arise in the Roche lobe or accretion disk, but must come from a much more extended gas shell around the binary. This could also explain the difference in the observed hydrogen spectra of this source, relative to our HS sources. The extended, low density outer gas shell will produce optically thin line emission from hydrogen, while the disk produces optically thick line emission.
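The emission-volume estimate above can be checked on the back of an envelope, assuming pure ionized hydrogen at the \[Ne II\] critical density and a spherical region (constants are standard values, not from the paper); with these inputs the diameter comes out at a few hundred AU, consistent at the order-of-magnitude level with the quoted $`\sim `$100 AU scale and, in either case, much larger than the $`\sim `$20 AU binary separation:

```python
import math

# assumed standard constants (cgs): solar mass, hydrogen-atom mass, astronomical unit
M_SUN = 1.989e33   # g
M_H = 1.6726e-24   # g
AU = 1.496e13      # cm

def emitting_diameter_au(m_gas_msun=1.0e-5, n_h=1.0e5):
    # volume of pure ionized hydrogen of total mass m_gas at number density n_h,
    # then the diameter of the equivalent sphere
    volume = m_gas_msun * M_SUN / (n_h * M_H)            # cm^3
    radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 2.0 * radius / AU

d_au = emitting_diameter_au()   # a few hundred AU for the values in the text
```

The cube-root dependence on mass and density means the result is insensitive to the rough inputs, which is why such an order-of-magnitude estimate is meaningful here.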
The contribution from the optically thin emission will alter the hydrogen line ratios, explaining why the line ratios for hydrogen emission from $`\beta `$ Lyr are not $`\approx `$1 (as for our HS sources). MWC 349 has the largest MIR to optical flux ratio of all of our sources, with this ratio more than 100 times larger than our other sources. This could be due to several different phenomena, including a much cooler source or much more circumstellar mass (including, perhaps, a large amount of dust, leading to severe reddening). Independent of the cause, the red color of MWC 349, compared to our HS sources, implies that the nature of this object or its environment may be quite different from classical Be stars. Observations of MWC 349 in the submillimeter detected hydrogen $`\alpha `$ transitions which were greatly amplified; MWC 349 was the first source observed with hydrogen laser emission (Martin-Pintado et al. 1989). Further, recent ISO observations have detected more than 160 emission lines between 2.4 and 190µm, including every hydrogen $`\alpha `$ transition (H 5$`\rightarrow `$4 to H 16$`\rightarrow `$15) in this range (Thum et al. 1998), showing the presence of infrared lasers from the large amplification of the hydrogen $`\alpha `$ lines. In addition, lasing/masing has been observed from a number of the hydrogen $`\beta `$ lines. It is thought that the conditions which allow the observed masing/lasing from MWC 349 include the high temperature of (and corresponding large ultraviolet flux from) the central star, a dense, massive, neutral disk (much more massive than the disks in most Be stars), and the coincidental edge-on view we have of the disk (Strelnitski et al. 1996). A significant amount of forbidden line emission in the optical has been observed in MWC 349 (Brugel & Wallerstein 1979, Andrillat & Swings 1976, Allen & Swings 1976). We have also detected forbidden emission, in the form of the \[Ne II\] 12.81µm line.
It is unlikely that this forbidden emission arises from the same region as the hydrogen emission, however. Recent work by Thum & Greve (1997) used the observed Paschen decrement to estimate the electron density in the disk at $`n_e=10^8`$ cm<sup>-3</sup>, over one hundred times greater than the critical density for \[Ne II\] forbidden emission. The forbidden emission more likely arises in an extended gas component around the star. Such an extended component has been observed via radio measurements and has been associated with a slow (50 km s<sup>-1</sup>) gaseous outflow. Cohen et al. (1985), using observations from the VLA, found that the radio observations were well matched by a spherical $`1/r^2`$ wind model, with mass-loss rates of order 10<sup>-5</sup> $`M_{\odot }`$ yr<sup>-1</sup> and a temperature of roughly 9000 K. Using the same technique as described for $`\beta `$ Lyr, we estimate the mass of the emitting gas from the forbidden emission. We use the same assumptions as in the case of $`\beta `$ Lyr, and estimate a distance to the source of 400 pc (Thompson et al. 1977). From these, we find a mass of the emitting gas of $`M_{tot}\sim 10^{-2}M_{\odot }`$. The mass of the disk should be substantially larger than this, up to several solar masses (Thompson et al. 1977; Thum & Greve 1997). However, the forbidden emission we observe will only come from the regions around the star where the density is relatively low, and therefore will only arise from a small fraction of the disk, since the majority of the disk has a high ($`10^8`$ cm<sup>-3</sup>) density (Thum et al. 1998). ## 4 Continuum Emission The majority of photospheric radiation from hot stars is at short wavelengths, in the optical or ultraviolet (a 16000 K star has its peak flux at $`\sim `$ 1800 Å, well into the UV). In the MIR, photospheric emission is well inside the Rayleigh-Jeans tail of the blackbody distribution, producing relatively weak emission in this wavelength range. 
Classical Be stars have long been known to produce large IR excesses, and a common explanation for this is emission via free-free processes. Free-free radiation from ionized gas is a powerful source of emission at long wavelengths. Typically, such emission has been used to explain radio emission from cool astronomical sources (Panagia & Felli 1975), but it can also explain the infrared excesses observed from many sources; for Be stars, free-free emission from the extended stellar envelope provides a convenient explanation for the excess. Gehrz et al. (1974) used models of optically thin and optically thick free-free emission to match infrared photometry of Be stars from 2.3µm to 19.5µm. Waters et al. (1984) successfully modeled IRAS Be star observations with optically thick free-free emission. We have used models of free-free emission to fit our MIR continuum observations, finding values for the shell radius ($`R_{sh}`$), the electron density ($`n_e`$), and the shell temperature ($`T_{sh}`$). ### 4.1 Calculations of Free-Free Emission Three emission processes are included in our model of the MIR continuum: emission from the photosphere of the star, optically thin free-free emission, and optically thick free-free emission. Because neither of the free-free processes is very effective at short wavelengths, we can constrain the stellar parameters without having to consider the effects of the shell. In the infrared, the photospheric spectrum is essentially a blackbody produced at the photospheric temperature, so the Planck function describes this emission. Initial values for the temperatures ($`T_{\ast }`$) and radii ($`R_{\ast }`$) of our sources were approximated from their respective spectral types (Waters et al. 1987), and we then adjusted the values of both $`T_{\ast }`$ and $`R_{\ast }`$ to agree with J-band (1.25µm) and K-band (2.20µm) photometry (Gezari et al. 1993). Distances to the stars were obtained from the Hipparcos Catalogue (1997). 
All of these parameters, including the J-band and K-band fluxes, are listed in Table 5. To fit the observed MIR continuum, we require several assumptions. First, we assume that the MIR emission arises from an extended stellar envelope around the star. Second, it is assumed that this envelope is flattened into an oblate spheroidal disk, with a semi-minor axis of $`\frac{1}{10}R_{sh}`$, where $`R_{sh}`$ is the semi-major axis of the spheroid. We further assume that the stellar envelope has both uniform density and uniform temperature. These assumptions are clearly not correct, but they greatly simplify the calculations without introducing significant error. Finally, we assume that we are viewing the disk-like envelope edge-on. This last assumption is supported by two facts: the large values of $`v\mathrm{sin}i`$ for these sources, and the central line absorption of shell spectra (seven of our nine HS sources are shell stars; see Table 4). The MIR emission from the stellar photosphere is calculated using the Planck function (making use of $`R_{\ast }`$, $`T_{\ast }`$, and $`D`$), as described above, but is corrected by applying an extinction factor of the form $`e^{-\tau (\rho =R_{sh})}`$. This accounts for free-free absorption of photospheric radiation as a function of wavelength. Thus, we write $$F_\lambda ^{\ast }=37469.4\left[e^{\left(1.44\frac{1\mathrm{\mu m}}{\lambda }\frac{10^4}{T_{\ast }}\right)}-1\right]^{-1}\left(\frac{1\mathrm{\mu m}}{\lambda }\right)^5\left(\frac{R_{\ast }}{D}\right)^2e^{-\tau (\rho =R_{sh})}$$ (2) Next, we include the effect of optically thick free-free emission. The optically thick emission will be characterized by the Planck function at the shell temperature ($`T_{sh}`$). 
This emission will arise from the area of the disk defined by the surface at a depth $`\rho `$ into the disk where the optical depth reaches unity, $$\tau ^{ff}=\alpha _\nu ^{ff}\rho =1.4\times 10^{-3}\left(\frac{n_e}{10^{11}}\right)^2\left(\frac{10^4}{T_{sh}}\right)^{0.5}\left(\frac{\lambda }{\mathrm{\mu m}}\right)^3\left(1-e^{-1.44\frac{1\mathrm{\mu m}}{\lambda }\frac{10^4}{T_{sh}}}\right)\left(\frac{\rho }{10^{12}}\right)=1$$ (3) This cuts off the optically thick emission at short wavelengths, as the absorption coefficient becomes very small and the entire disk becomes optically thin. Mathematically, $$F_\lambda ^{ffthick}=37469.4\left[e^{\left(1.44\frac{1\mathrm{\mu m}}{\lambda }\frac{10^4}{T_{sh}}\right)}-1\right]^{-1}\left(\frac{1\mathrm{\mu m}}{\lambda }\right)^5\left(\frac{R_{sh}}{D}\right)^2x(\tau )$$ (4) where $`x(\tau )`$ is the ratio of the area of the optically thick emission to the maximum area of optically thick emission. Finally, we include the effect of the optically thin region of the emission disk. This can be accounted for to first order by integrating the emission function for free-free emission ($`ϵ_\nu ^{ff}`$) over the volume of the stellar envelope. We improve upon this first-order calculation by also including the extinction factor ($`e^{-\tau (\rho )}`$) in the integral over the volume. In this way, we account for the reduction in optically thin emission with increasing wavelength. This gives us, with the optical depth $`\tau (\rho )`$ defined as in Equation 3, $$F_\lambda ^{ffthin}=7.12\times 10^{-13}\left(\frac{n_e\mathrm{\mu m}\,\mathrm{pc}}{10^{11}\lambda D}\right)^2\left(\frac{10^4}{T_{sh}}\right)^{0.5}\left(\frac{R_{sh}}{10^{12}}\right)^3\left(e^{-1.44\frac{1\mathrm{\mu m}}{\lambda }\frac{10^4}{T_{sh}}}\right)\left(\frac{\int e^{-\tau (\rho )}\,dV}{V_{full}}\right)$$ (5) To fit our observations, we sum the emission from these three sources, varying the temperature ($`T_{sh}`$), radius ($`R_{sh}`$), and electron density ($`n_e`$) of the shell. This sum is then compared to our observed spectrum. 
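The three-component continuum model of Equations 2–5 can be sketched in code as follows. The constants mirror the equations above (wavelength in µm, temperatures in K, $`n_e`$ in cm<sup>-3</sup>, radii in cm, distance in pc); the area fraction $`x(\tau )`$ and the volume-averaged extinction integral are replaced by simple closed-form stand-ins of our own choosing, since their exact forms are not written out in the text.

```python
import math

PC_CM = 3.086e18  # cm per parsec

def planck_factor(lam_um, T):
    """[exp(1.44 (1um/lam)(1e4/T)) - 1]^-1, the Planck factor of Eqs. 2 and 4."""
    return 1.0 / math.expm1(1.44 * (1.0 / lam_um) * (1.0e4 / T))

def tau_ff(lam_um, n_e, T_sh, rho_cm):
    """Free-free optical depth to a depth rho into the disk (Eq. 3)."""
    stim = -math.expm1(-1.44 * (1.0 / lam_um) * (1.0e4 / T_sh))
    return (1.4e-3 * (n_e / 1e11) ** 2 * math.sqrt(1e4 / T_sh)
            * lam_um ** 3 * stim * (rho_cm / 1e12))

def model_flux(lam_um, T_star, R_star_cm, T_sh, R_sh_cm, n_e, D_pc):
    """Sum of attenuated photosphere (Eq. 2), optically thick (Eq. 4) and
    optically thin (Eq. 5) free-free emission."""
    D_cm = D_pc * PC_CM
    tau_sh = tau_ff(lam_um, n_e, T_sh, R_sh_cm)
    # Eq. 2: photosphere, attenuated by free-free absorption through the shell.
    f_star = (37469.4 * planck_factor(lam_um, T_star)
              * lam_um ** -5 * (R_star_cm / D_cm) ** 2 * math.exp(-tau_sh))
    # Eq. 4: optically thick emission; x(tau) -> 1 - exp(-tau) is our smooth
    # stand-in for the ratio of optically thick area to its maximum area.
    f_thick = (37469.4 * planck_factor(lam_um, T_sh)
               * lam_um ** -5 * (R_sh_cm / D_cm) ** 2 * (-math.expm1(-tau_sh)))
    # Eq. 5: optically thin emission; the slab average (1 - exp(-tau))/tau
    # stands in for the normalized volume integral of exp(-tau).
    vol_factor = -math.expm1(-tau_sh) / tau_sh if tau_sh > 1e-12 else 1.0
    f_thin = (7.12e-13 * (n_e / 1e11 / lam_um / D_pc) ** 2
              * math.sqrt(1e4 / T_sh) * (R_sh_cm / 1e12) ** 3
              * math.exp(-1.44 * (1.0 / lam_um) * (1.0e4 / T_sh)) * vol_factor)
    return f_star + f_thick + f_thin
```

In a fit, `model_flux` would be evaluated at the line-free continuum points while $`T_{sh}`$, $`R_{sh}`$, and $`n_e`$ are varied to minimize the reduced $`\chi ^2`$.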
We calculate a reduced $`\chi ^2`$ value based upon nine points in our observed spectra; these nine points are located in regions at least 0.05µm from any of the observed emission lines, so as not to contaminate the $`\chi ^2`$ with line emission. The flux at each point is determined by averaging the flux over $`\lambda -0.05`$ to $`\lambda +0.05`$µm, in order to reduce the effects of local noise spikes. The results of these fits are shown in Table 6. In Figure 4, we show the observed spectrum of $`\gamma `$ Cas with the overlaid fit. The observed spectrum is shown as a dotted line, the fit curve is shown as a solid line, the dashed line and dot-dash line are the optically thin and optically thick components of the emission, respectively, the diamonds are the nine points used to calculate the reduced $`\chi ^2`$, and the open triangle is the 12µm flux measured by IRAS. ### 4.2 Comparisons with Previous Work Two works in particular, Gehrz et al. (1974) and Waters et al. (1987), have calculated parameters of Be stellar envelopes. Gehrz et al. started from an estimate of $`T_{sh}`$ (constrained solely by the requirement that $`T_{sh}`$ lie between 10000 K and $`T_{\ast }`$) and calculated $`R_{sh}`$ and $`n_e`$ from their observations by finding the wavelength at which the extrapolations of optically thin and optically thick free-free emission intersect (at which point the optical depth $`\tau (\lambda _c)\sim 1`$). This provides a relation between the radius of the disk and the electron density. Further, by assuming a Rayleigh-Jeans distribution for both the optically thick shell flux and the stellar flux (at $`\lambda _c`$), they obtain an expression for the radius of the shell, which then determines the electron density of the disk. For these calculations, they assumed the disk to be flattened with an aspect ratio $`A=r_s/d_{disk}=5`$ (where $`d_{disk}`$ is the full thickness of the disk), the same value used for our modelling. 
As in our model, theirs includes the simplifying assumption that the electron density is uniform in the disk. Waters et al. assume $`T_{sh}=0.8T_{\ast }`$ and calculate $`R_{sh}`$ and the density profile ($`\rho (r)`$) of the stellar disks using the “curve of growth” method (Lamers & Waters 1984). For this method, they assume a disk with an opening angle $`\theta `$, a disk density proportional to $`r^{-n}`$, a density at the inner edge of the shell of $`\rho _o`$, and a disk radius $`r_{sh}`$ (the sharp outer cutoff of the disk). For their modelling, they assumed a pole-on view of the disk, so that the opening angle $`\theta `$ drops out of their calculations; this is contrary to both our assumption and that of Gehrz et al. that the disk is viewed edge-on. They then use a combination of optically thin and optically thick free-free radiation as a comparison to the observed monochromatic infrared excess (as found from IRAS spectra using assumed stellar parameters). By comparing the shapes of the observed excess and their models, they obtain a value of the density parameter $`n`$, and from the shift required to match the observed and model curves they obtain $`\rho _o`$. These models depend upon the assumed shell radius, so the authors run a series of models to find the best-fit combination of $`n`$ and $`r_{sh}`$, yielding the values used for comparison in this paper (notably, their values for $`r_{sh}`$ are all lower limits). We do not compare our values for $`n_e`$ with the $`\rho (r)`$ calculated by Waters et al. because of the large uncertainty in the mass densities they derived, the uncertainty in the proper ionization fraction of the envelope gas, and the dependence upon the exact shell radius. We present the results of the modelling discussed in this paper, as well as the results of both Gehrz et al. and Waters et al., in Table 7 for ease of comparison. 
We find that our results are generally consistent (within factors of a few) with those of both authors. However, in a number of cases, Waters et al. derive shell radii which are significantly larger than the radii derived here. This can be explained by the assumptions which have gone into our modelling, and by the wavelength regimes of the observations. We have assumed an isothermal disk, whereas Waters et al. did not. Since their observations utilize IRAS data, they are sensitive to the cool parts of the shell which should exist in the case of a thermal gradient. These discrepancies are particularly apparent in the cases of $`\varphi `$ Per and $`\gamma `$ Cas, bright sources with accurate 12µm, 25µm, and 60µm IRAS detections. Comparing our results to those of Gehrz et al., we find that in the case of $`\zeta `$ Tau we predict an electron density a factor of two below that of Gehrz et al., and a moderately larger disk. In the cases of $`\gamma `$ Cas and $`\varphi `$ Per, we derive larger, hotter disks than Gehrz et al., but have similar electron densities. The simplicity of the models used by Gehrz et al. is most likely responsible for this discrepancy. They extrapolated curves for both the optically thin and optically thick segments of the free-free emission, then used the intercept point of these two curves to derive numerical values for the radius and temperature of the shell. The improved modelling techniques used here should better constrain these same parameters. In Table 7, we also show the calculated total mass of hydrogen in the emitting disk. These numbers assume complete ionization of the hydrogen gas, such that $`n_e=n_H`$, and uniform density, an assumption already made in the modelling. We find that the masses of the disks are relatively small, ranging from 2.5$`\times 10^{-11}M_{\odot }`$ ($`o`$ Aqr) to 8.1$`\times 10^{-9}M_{\odot }`$ ($`\gamma `$ Cas). 
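The disk masses quoted above follow directly from the assumed geometry: fully ionized hydrogen ($`n_e=n_H`$) at uniform density filling an oblate spheroid whose semi-minor axis is one tenth of $`R_{sh}`$. A minimal sketch, with $`n_e`$ and $`R_{sh}`$ as illustrative inputs rather than the fitted values of Table 6:

```python
import math

M_H_G = 1.6726e-24   # hydrogen atom mass (g)
M_SUN_G = 1.989e33   # solar mass (g)

def disk_hydrogen_mass(n_e, R_sh_cm, aspect=10.0):
    """Hydrogen mass (in solar masses) of a uniform, fully ionized oblate
    spheroid with semi-major axis R_sh and semi-minor axis R_sh/aspect,
    taking n_H = n_e (complete ionization)."""
    volume = (4.0 / 3.0) * math.pi * R_sh_cm ** 2 * (R_sh_cm / aspect)
    return n_e * M_H_G * volume / M_SUN_G
```

With illustrative values of $`n_e=10^{11}`$ cm<sup>-3</sup> and $`R_{sh}=10^{12}`$ cm, this gives a few times $`10^{-11}M_{\odot }`$, at the low end of the range quoted above.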
## 5 Discussion These spectra present MIR features of Be stars and give some indication of the range of their emission characteristics as well. The continuum emission is explained by a combination of optically thick and optically thin free-free emission. Both the continuum and line emission are likely to arise in a warm stellar envelope. The peculiar objects ($`\beta `$ Lyr and MWC 349) display complicated line emission which is difficult to explain with simple models. In the case of $`\beta `$ Lyr, a relatively simple explanation for the presence of hydrogen and helium emission is possible, and the observed \[Ne II\] emission is readily explained as emission from a large gas cloud surrounding the binary system. In the case of MWC 349, the observed features are consistent with previous observations, and the observed forbidden emission implies a large volume of low-density ionized gas. The authors would like to thank the staff at Palomar Observatory for their assistance. We would also like to thank S. Hony for sharing ISO data and results prior to publication. SAR acknowledges L.B.F.M. Waters and M. Simon for helpful discussions. This research was partly supported by NASA Contract 960803.
# On the imbedding of a finite family of closed disks into ℝ² or 𝑆² (Institute of mathematics, National Acad. of Sci., Ukraine e-mail polulyah@imath.kiev.ua ) ## Abstract Let $`\{V_i\}_{i=1}^n`$ be a finite family of closed subsets of the plane $`\mathbb{R}^2`$ or the sphere $`S^2`$, each homeomorphic to the two-dimensional disk. In this paper we discuss the question of how the boundary of a connected component of the complement $`\mathbb{R}^2\setminus \bigcup _{i=1}^nV_i`$ (respectively, $`S^2\setminus \bigcup _{i=1}^nV_i`$) is arranged. It turns out that if the set $`\bigcup _{i=1}^nIntV_i`$ is connected, then the boundary $`\partial W`$ of every connected component $`W`$ of the set $`\mathbb{R}^2\setminus \bigcup _{i=1}^nV_i`$ (respectively, $`S^2\setminus \bigcup _{i=1}^nV_i`$) is homeomorphic to a circle. Let $`U\subset \mathbb{R}^2`$ be an open domain (a subset of the plane homeomorphic to the open two-dimensional disk). One of the classical problems of complex analysis is the question of the possibility of extending a conformal mapping defined in $`U`$ beyond this domain. The answer to this question is tightly connected with the structure of the boundary $`\partial U`$ of $`U`$ and depends on how much the closure $`ClU`$ differs from the closed two-dimensional disk. As a rule, only local information about the structure of the set $`\partial U`$ is known (accessibility of points of the boundary from the domain $`U`$, and so on). In the papers \[P1, P2\] a criterion is given for a compact subset of the plane to be homeomorphic to the closed two-dimensional disk which uses only local information about the boundary of this set (see Theorem 3 below). This criterion makes it possible to investigate problems connected with the mutual disposition of closed disks in the plane. Let $`\{V_i\}_{i=1}^n`$ be a finite family of closed subsets of the plane $`\mathbb{R}^2`$ or the sphere $`S^2`$, each homeomorphic to the two-dimensional disk. In this paper we discuss the question of how the boundary of a connected component of the complement $`\mathbb{R}^2\setminus \bigcup _{i=1}^nV_i`$ (respectively, $`S^2\setminus \bigcup _{i=1}^nV_i`$) is arranged. 
It turns out that if the set $`\bigcup _{i=1}^nIntV_i`$ is connected, then the boundary $`\partial W`$ of every connected component $`W`$ of the set $`\mathbb{R}^2\setminus \bigcup _{i=1}^nV_i`$ (respectively, $`S^2\setminus \bigcup _{i=1}^nV_i`$) is homeomorphic to a circle (see Theorems 1 and 2 below). ###### Theorem 1 Let $`V_1,\mathrm{},V_n`$ be a finite collection of closed subsets of $`\mathbb{R}^2`$, each homeomorphic to the two-dimensional disk. Suppose the set $`\bigcup _{i=1}^nIntV_i`$ is connected. Let $`W`$ be the unbounded connected component of the set $`\mathbb{R}^2\setminus \bigcup _{i=1}^nV_i`$. Then the set $`\mathbb{R}^2\setminus W`$ is homeomorphic to the closed two-dimensional disk. ###### Theorem 2 Let $`V_1,\mathrm{},V_n`$ be a finite collection of closed subsets of $`S^2`$, each homeomorphic to the two-dimensional disk. Suppose the set $`\bigcup _{i=1}^nIntV_i`$ is connected and $`S^2\setminus \bigcup _{i=1}^nV_i\ne \emptyset `$. Let $`W`$ be a connected component of the set $`S^2\setminus \bigcup _{i=1}^nV_i`$. Then the set $`ClW`$ is homeomorphic to the closed two-dimensional disk. The following definitions and statements will be useful in what follows. ###### Definition 1 \[ZVC\] Let $`D`$ be an open set. A point $`x\in \partial D`$ is called *accessible* from $`D`$ if there exists a continuous injective mapping $`\phi :I\to ClD`$ such that $`\phi (1)=x`$ and $`\phi ([0,1))\subset IntD`$ (such a map is called *a cut*). ###### Definition 2 \[ZVC\] Let $`E`$ be a subset of a topological space $`X`$ and let $`a\in X`$ be a point. The set $`E`$ is called *locally arcwise connected* at $`a`$ if every neighbourhood $`U`$ of $`a`$ contains a neighbourhood $`V`$ of $`a`$ such that any two points of $`V\cap E`$ can be connected by a path in $`U\cap E`$. ###### Proposition 1 \[ZVC\] Let $`D`$ be a domain with a nonempty interior in $`\mathbb{R}^2`$ or $`S^2`$. If $`D`$ is locally arcwise connected at a point $`a\in \partial D`$, then $`a`$ is accessible from $`D`$. ###### Theorem 3 \[P1, P2\] Let $`D`$ be a compact subset of the plane $`\mathbb{R}^2`$ with a nonempty interior. 
Then $`D`$ is homeomorphic to the closed two-dimensional disk if and only if the following conditions hold: * the set $`IntD`$ is connected; * the set $`\mathbb{R}^2\setminus D`$ is connected; * every point $`x\in \partial D`$ is accessible from $`IntD`$; * every point $`x\in \partial D`$ is accessible from $`\mathbb{R}^2\setminus D`$. ###### Theorem 4 (Schönflies) \[ZVC\] Let $`\gamma `$ be a simple closed curve in $`S^2`$ (respectively, in $`\mathbb{R}^2`$). There exists a homeomorphism $`f`$ of $`S^2`$ onto itself (respectively, of $`\mathbb{R}^2`$ onto itself) mapping the curve $`\gamma `$ onto the unit circle. ###### Proof of Theorem 1. Let us show that the compact set $`D=\mathbb{R}^2\setminus W`$ satisfies the conditions of Theorem 3. We divide our argument into several steps. 1. Since $`\partial D\subset \bigcup _{i=1}^nV_i`$, for any $`x\in \partial D`$ we can find $`i\in \{1,\mathrm{},n\}`$ such that $`x\in \partial V_i`$. Theorem 3 states that the point $`x`$ is accessible from $`IntV_i`$. Hence $`x`$ is accessible from $`IntD`$, because $`IntV_i\subset IntD`$. 2. Let us show that every point $`a\in \partial D`$ is accessible from $`W=\mathbb{R}^2\setminus D`$. Without loss of generality we can assume that the origin of coordinates lies in $`IntD`$. We fix $`a\in \partial D`$. The set of all points accessible from $`W`$ is dense in $`\partial W=\partial D`$ \[ZVC\], therefore there exists a point $`x_0\in \partial D`$ accessible from $`W`$ which does not coincide with $`a`$. Every compact subset of $`\mathbb{R}^n`$, $`n\in \mathbb{N}`$, is bounded, therefore there exists $`R>0`$ such that $$\bigcup _{i=1}^{n}V_i\subset \left\{x\in \mathbb{R}^2\,:\,\text{d}(0,x)<R\right\}.$$ We fix a point $`x^{}\in W`$ which satisfies the equality $`|x^{}|=R`$. It is known (see \[ZVC\]) that there exists a cut $$\gamma _0:I\to \mathbb{R}^2,$$ $`\gamma _0(0)=x_0`$, $`\gamma _0(1)=x^{}`$, $`\gamma _0((0,1])\subset W`$. Let $$\tau =\mathrm{min}\{t\in I\,:\,|\gamma _0(t)|=R\}.$$ By construction $`\tau >0`$. Let $`\gamma _0(\tau )=x^{\prime \prime }`$. Denote the polar angle of $`x^{\prime \prime }`$ by $`\phi `$. 
Consider the continuous injective mapping $$\gamma _1:\mathbb{R}_+\to \mathbb{R}^2,$$ $$\gamma _1(t)=\{\begin{array}{ccc}\gamma _0(t)\hfill & \hfill \text{when}& \hfill t\in [0,\tau ),\\ (\phi ,R+t-\tau )\hfill & \hfill \text{when}& \hfill t\in [\tau ,+\mathrm{\infty }).\end{array}$$ This map is an imbedding of $`\mathbb{R}_+`$ into $`\mathbb{R}^2`$; moreover $`\gamma _1(0)\in \partial W`$ and $`\gamma _1(\mathbb{R}_+\setminus \{0\})\subset W`$. 2.1. Let us show that the open set $`W\setminus \gamma _1(\mathbb{R}_+)`$ is connected. Consider the involution $$f:\mathbb{R}^2\setminus \{0\}\to \mathbb{R}^2\setminus \{0\},$$ $$f(r,\phi )=(r^{-1},\phi ).$$ This map is known to be a homeomorphism. Under the action of $`f`$ the domain $`W`$ passes to an open connected set $`\stackrel{~}{W}=f(W)`$. Note that the origin of coordinates is an isolated point of the boundary $`\partial \stackrel{~}{W}`$, because $$\{(r,\phi )\in \mathbb{R}^2\,:\,r>R\}\subset W,$$ $$\{(r,\phi )\in \mathbb{R}^2\,:\,0<r<R^{-1}\}\subset f(W).$$ Therefore $`\stackrel{~}{W}_0=\stackrel{~}{W}\cup \{0\}`$ turns out to be an open connected set, and the map $$\stackrel{~}{\gamma }:I\to \mathbb{R}^2,$$ $$\stackrel{~}{\gamma }(t)=\{\begin{array}{ccc}f\circ \gamma _1(t^{-1}-1)\hfill & \text{when}\hfill & \hfill t\in (0,1],\\ 0\hfill & \text{for}\hfill & \hfill t=0,\end{array}$$ is a cut of the set $`\stackrel{~}{W}_0`$. Moreover, $`\stackrel{~}{W}_0\setminus \stackrel{~}{\gamma }(I)=f(W\setminus \gamma _1(\mathbb{R}_+))`$. So, to prove the connectivity of the set $`W\setminus \gamma _1(\mathbb{R}_+)`$ it is sufficient to check the validity of the following statement. ###### Lemma 1 Let $`U\subset \mathbb{R}^2`$ be an open connected set, let a point $`z\in \partial U`$ be accessible from $`U`$, and let $`\alpha :I\to \mathbb{R}^2`$ be a cut of $`U`$ with its end at $`z`$ (a continuous injective mapping such that $`\alpha (0)=z`$ and $`\alpha ((0,1])\subset U`$). Then the set $`U\setminus \alpha (I)`$ is connected. Let us prove this statement. Let $`y=\alpha (t)`$ for some $`t>0`$. 
According to Propositions 6.4.6 and 6.5.1 from \[ZVC\] there exists a homeomorphism $`h`$ of $`\mathbb{R}^2`$ onto $`\mathbb{R}^2`$ such that the map $$h\circ \alpha =\stackrel{~}{\alpha }:I\to \mathbb{R}^2$$ satisfies the relation $$\stackrel{~}{\alpha }(t)=(t,0)\in \mathbb{R}_+\times \{0\}\subset \mathbb{R}^2,\quad t\in I.$$ Since $`\alpha ([t,1])\subset U`$, there exists an $`\epsilon >0`$ such that $$\stackrel{~}{U}_t=\{x\in \mathbb{R}^2\,:\,\text{d}(x,\stackrel{~}{\alpha }([t,1]))<\epsilon \}\subset h(U).$$ Obviously, $`U_t=h^{-1}(\stackrel{~}{U}_t)`$ is a neighbourhood of the point $`\alpha (t)`$ in $`U`$ and the set $`U_t\setminus \alpha (I)`$ is connected. Besides, the set $`(U_{t_1}\cap U_{t_2})\setminus \alpha (I)`$ is not empty for any $`t_1`$, $`t_2\in (0,1]`$. Therefore $$\bigcup _{t\in (0,1]}U_t$$ is a connected open neighbourhood of the set $`\alpha (I)`$ in $`U`$, hence $`U\setminus \alpha (I)`$ is a connected set. ∎ So, the set $`W\setminus \gamma _1(\mathbb{R}_+)`$ is connected. 2.2. Select a point $`x_i\in \partial V_i`$, $`x_i\ne a`$, for each $`i\in \{1,\mathrm{},n\}`$. The set $$\bigcup _{i=1}^{n}IntV_i$$ is connected by the condition of the theorem, and the point $`x_i`$ is accessible from $`IntV_i`$ for every $`i`$. 
Therefore we can find a continuous map $$\beta :[1,n+1]\to \bigcup _{i=1}^{n}V_i$$ which satisfies the following conditions: $$\beta ((i,i+1))\subset \bigcup _{i=1}^{n}IntV_i\subset IntD,\quad i=1,\mathrm{},n;$$ $$\beta (i)=x_i,\quad i=1,\mathrm{},n;\quad \beta (n+1)=x_0.$$ Consider the continuous map $$\gamma :\mathbb{R}_+\to \mathbb{R}^2,$$ $$\gamma (t)=\{\begin{array}{ccc}\beta (t+1)\hfill & \text{for}\hfill & \hfill t\in [0,n),\\ \gamma _1(t-n)\hfill & \text{for}\hfill & \hfill t\in [n,+\mathrm{\infty }).\end{array}$$ Since the relations $$\gamma (\mathbb{R}_+)\subset \left(\beta ([1,n+1])\cup \gamma _0(I)\cup \gamma _1([\tau ,+\mathrm{\infty }))\right),$$ $$\gamma _1([\tau ,+\mathrm{\infty }))\subset \{z\in \mathbb{R}^2\,:\,\text{d}(z,0)\ge R\},$$ $$a\in \partial D\subset \{z\in \mathbb{R}^2\,:\,\text{d}(z,0)<R\}$$ hold, and the compact set $`\beta ([1,n+1])\cup \gamma _0(I)`$ does not contain the point $`a`$ by construction, there exists an $`\epsilon _0>0`$ which satisfies the inequality $$\text{d}(a,z)>\epsilon _0\quad \text{for all }z\in \gamma (\mathbb{R}_+).$$ Now we are ready to prove the local arcwise connectivity of the domain $`W`$ at the point $`a\in \partial W=\partial D`$. 2.3. Let $`U`$ be an arbitrary neighbourhood of the point $`a`$. Find $`\epsilon >0`$ which satisfies the conditions $$U_\epsilon (a)=\{x\in \mathbb{R}^2\,:\,\text{d}(a,x)<\epsilon \}\subset U,\quad U_\epsilon (a)\cap \gamma (\mathbb{R}_+)=\emptyset .$$ Fix imbeddings $$f_i:S^1\to \partial V_i,\quad i=1,\mathrm{},n.$$ Here $`S^1=\{(r,\phi )\in \mathbb{R}^2\,:\,r=1\}`$. The metric on $`S^1`$ is defined as follows: $$\text{d}_s((1,\phi _1),(1,\phi _2))=\underset{k\in \mathbb{Z}}{\mathrm{min}}|\phi _1-\phi _2+2\pi k|.$$ Note that the maps $`f_i`$, $`i=1,\mathrm{},n`$, are uniformly continuous. Fix $`\delta _1>0`$ such that the inequality $`\text{d}_s(\tau _1,\tau _2)<\delta _1`$ implies $$\text{d}(f_i(\tau _1),f_i(\tau _2))<\mathrm{min}(\epsilon _0/2,\epsilon /3)$$ for any $`i=1,\mathrm{},n`$ and all $`\tau _1`$, $`\tau _2\in S^1`$. Find also $`\delta _2>0`$ such that $`\text{d}(z_1,z_2)<\delta _2`$ implies the inequality $$\text{d}_s(f_i^{-1}(z_1),f_i^{-1}(z_2))<\mathrm{min}(\delta _1/2,\pi /4)$$ for every $`i=1,\mathrm{},n`$ and all $`z_1`$, $`z_2\in \partial V_i`$. Set $`\delta =\mathrm{min}(\delta _2/2,\epsilon /3)`$. 
2.4. Let us show that for any $`a_1`$, $`a_2\in U_\delta (a)\cap W`$ there exists a continuous map $`g:I\to U_\epsilon (a)\cap W`$ such that $`g(0)=a_1`$, $`g(1)=a_2`$. The inequality $`\text{d}(z_1,z_2)<\delta _2`$ is fulfilled for all $`z_1`$, $`z_2\in U_\delta (a)`$, hence $$\mathrm{diam}\left(f_i^{-1}(\partial V_i\cap U_\delta (a))\right)<\mathrm{min}(\delta _1/2,\pi /4)$$ for every $`i\in \{1,\mathrm{},n\}`$, and in the case $`\partial V_i\cap U_\delta (a)\ne \emptyset `$ the circle $`S^1`$ can be decomposed into two non-intersecting arcs $`J_i^{}`$ and $`J_i^{\prime \prime }`$ with common endpoints in such a way that the following relations are fulfilled: $$f_i^{-1}(\partial V_i\cap U_\delta (a))\subset J_i^{},$$ $$\mathrm{diam}(J_i^{})=\underset{t_1,t_2\in J_i^{}}{\mathrm{max}}\text{d}_s(t_1,t_2)<\mathrm{min}(\delta _1,\pi /2).$$ In the case $`\partial V_i\cap U_\delta (a)=\emptyset `$ set $`J_i^{\prime \prime }=S^1`$, $`J_i^{}=\emptyset `$. Therefore, $`f_i(J_i^{\prime \prime })\cap U_\delta (a)=\emptyset `$ and $`f_i(J_i^{})\subset U_{2\epsilon /3}(a)`$. ###### Lemma 2 Let $`B`$ be a closed disk satisfying the following conditions: $$\partial B\cap \left(\bigcup _{i=1}^{n}V_i\right)\subset U_\delta (a),$$ $$\partial B\setminus U_\delta (a)\subset W\setminus \gamma (\mathbb{R}_+).$$ Then $`B\cap \partial V_i\subset f_i(J_i^{})\subset U_{2\epsilon /3}(a)`$, $`i=1,\mathrm{},n`$. Indeed, under the conditions of the lemma $`\partial B\cap f_i(J_i^{\prime \prime })=\emptyset `$ for every $`i=1,\mathrm{},n`$. Therefore, $`f_i(J_i^{\prime \prime })\subset IntB`$ or $`f_i(J_i^{\prime \prime })\subset (\mathbb{R}^2\setminus B)`$. By construction $`x_i\in f_i(J_i^{\prime \prime })`$ and $`x_i\in \gamma (\mathbb{R}_+)\subset (\mathbb{R}^2\setminus B)`$ (the connected unbounded set $`\gamma (\mathbb{R}_+)`$ does not meet $`\partial B`$), hence $`f_i(J_i^{\prime \prime })\subset (\mathbb{R}^2\setminus B)`$, $`i=1,\mathrm{},n`$. ∎ Let $`a_1`$, $`a_2\in (U_\delta (a)\cap W)`$. Since $`U_\delta (a)\cap \gamma (\mathbb{R}_+)=\emptyset `$, we have $`a_1`$, $`a_2\in (U_\delta (a)\cap (W\setminus \gamma (\mathbb{R}_+)))`$. From the connectivity of the set $`W\setminus \gamma (\mathbb{R}_+)`$ it follows that there exists an injective continuous map $$\stackrel{~}{\mu }:I\to (W\setminus \gamma (\mathbb{R}_+)),$$ satisfying the equalities $`\stackrel{~}{\mu }(0)=a_1`$, $`\stackrel{~}{\mu }(1)=a_2`$ (the concepts of connectivity and arcwise connectivity coincide for open subsets of $`\mathbb{R}^n`$). 
Find smooth imbeddings $$\eta _1:S^1\to U_\delta (a),$$ $$\eta _2:S^1\to \left(U_\epsilon (a)\setminus ClU_{2\epsilon /3}(a)\right),$$ such that the points $`a_1`$, $`a_2`$ lie inside the disks bounded by the curves $`\eta _1`$, $`\eta _2`$. It is known that an imbedding of a segment or of a circle into $`\mathbb{R}^2`$ can be approximated arbitrarily closely by a smooth imbedding. It is known as well that any two one-dimensional smooth compact submanifolds of $`\mathbb{R}^2`$ can be brought into general position by a small perturbation fixed on their boundary. Therefore, there exists a smooth imbedding $$\mu :I\to W\setminus \gamma (\mathbb{R}_+),\quad a_1=\mu (0),\quad a_2=\mu (1),$$ such that the sets $`\mu (I)\cap \eta _1(S^1)`$ and $`\mu (I)\cap \eta _2(S^1)`$ consist of a finite number of points. For every $`z\in \mu (I)\cap \eta _2(S^1)`$ there exist $`t^{}`$, $`t^{\prime \prime }\in I`$, $`t^{}<t^{\prime \prime }`$, which satisfy the following conditions: $$z\in \mu ((t^{},t^{\prime \prime })),$$ $$\mu (t^{}),\mu (t^{\prime \prime })\in \eta _1(S^1),$$ $$\mu ((t^{},t^{\prime \prime }))\cap \eta _1(S^1)=\emptyset .$$ We obtain a finite family of pairwise non-intersecting intervals $$(t_{j,1},t_{j,2})\subset I,\quad j=1,\mathrm{},k,$$ satisfying the relations $$\mu ((t_{j,1},t_{j,2}))\cap \eta _1(S^1)=\emptyset ,\quad \mu (t_{j,1}),\mu (t_{j,2})\in \eta _1(S^1),\quad j=1,\mathrm{},k,$$ $$\mu \left(I\setminus \bigcup _{j=1}^{k}(t_{j,1},t_{j,2})\right)\subset U_\epsilon (a).$$ Now for each $`j=1,\mathrm{},k`$ we fix an arc $`\mathrm{\Theta }_j:I\to \eta _1(S^1)`$ with the endpoints $`\mu (t_{j,1})`$ and $`\mu (t_{j,2})`$. 
The set $$\mathrm{\Theta }_j(I)\cup \mu ([t_{j,1},t_{j,2}])$$ is homeomorphic to a circle, therefore it bounds a closed disk $`B_j`$ such that $$\partial B_j\cap \left(\bigcup _{i=1}^{n}V_i\right)\subset U_\delta (a),$$ $$\partial B_j\setminus U_\delta (a)\subset \left(W\setminus \gamma (\mathbb{R}_+)\right).$$ By Lemma 2 these relations have as a consequence the inclusion $$B_j\cap \left(\bigcup _{i=1}^{n}\partial V_i\right)\subset U_{2\epsilon /3}(a).$$ Since $`\eta _2(S^1)\subset (U_\epsilon (a)\setminus ClU_{2\epsilon /3}(a))`$, we have $$B_j\cap \eta _2(S^1)=\bigcup _{s=1}^{m_j}\chi _s.$$ Here $`\{\chi _s\}_{s=1}^{m_j}`$ is a finite family of pairwise non-intersecting arcs of the circle $`\eta _2(S^1)`$. In addition, $`\chi _s\subset (W\setminus \gamma (\mathbb{R}_+))`$, $`s=1,\mathrm{},m_j`$. The set $$(IntB_j)\setminus \left(\bigcup _{s=1}^{m_j}\chi _s\right)$$ falls into a finite union of connected components homeomorphic to the two-dimensional disk, lying either inside or outside the closed disk bounded by the circle $`\eta _2(S^1)`$. Select the component which borders on the arc $`\mathrm{\Theta }_j`$, and denote by $`\stackrel{~}{B}_j`$ the closure of this component. Obviously, $$\stackrel{~}{B}_j\subset U_\epsilon (a),\quad (\partial \stackrel{~}{B}_j\setminus \mathrm{\Theta }_j(I))\subset (W\setminus \gamma (\mathbb{R}_+)).$$ Let $$g_j:I\to (\partial \stackrel{~}{B}_j\setminus \mathrm{\Theta }_j((0,1)))$$ be the arc of the circle $`\partial \stackrel{~}{B}_j`$ with the endpoints $`\mu (t_{j,1})`$, $`\mu (t_{j,2})`$. As we have already shown, it satisfies the relation $$g_j(I)\subset \left(U_\epsilon (a)\cap (W\setminus \gamma (\mathbb{R}_+))\right).$$ The curve $$g:I\to (W\setminus \gamma (\mathbb{R}_+)),$$ $$g(t)=\{\begin{array}{ccc}\mu (t)\hfill & \hfill \text{if}& \hfill t\in \left(I\setminus \bigcup _{j=1}^k(t_{j,1},t_{j,2})\right),\\ g_j\left(\frac{t-t_{j,1}}{t_{j,2}-t_{j,1}}\right)\hfill & \hfill \text{if}& \hfill t\in (t_{j,1},t_{j,2}),\end{array}$$ represents a continuous path in $`U_\epsilon (a)\cap (W\setminus \gamma (\mathbb{R}_+))`$ connecting the points $`a_1`$ and $`a_2`$. Therefore the domain $`W`$ (and even $`W\setminus \gamma (\mathbb{R}_+)`$) is locally arcwise connected at the point $`a`$, so by Proposition 1 the point $`a`$ is accessible from $`W\setminus \gamma (\mathbb{R}_+)`$ and, all the more, from $`W=\mathbb{R}^2\setminus D`$. 
Since the point $`a\in \partial D`$ was chosen arbitrarily, each point of $`\partial D`$ is accessible from $`\mathbb{R}^2\setminus D`$. 3. The set $`\mathbb{R}^2\setminus D=W`$ is connected, being a connected component of $`\mathbb{R}^2\setminus \bigcup _{i=1}^nV_i`$. 4. Let us show that the set $`IntD`$ is connected. The set $`\bigcup _{i=1}^nV_i`$ is connected, since every point of $`\bigcup _{i=1}^n\partial V_i`$ is accessible from the connected set $`\bigcup _{i=1}^nIntV_i`$; therefore it is sufficient to show that for any connected component $`\stackrel{~}{W}`$ of the set $`\mathbb{R}^2\setminus \left(\bigcup _{i=1}^nV_i\right)`$ different from $`W`$ the boundary $`\partial \stackrel{~}{W}`$ does not lie in the set $`\partial D`$. Assume that $`\partial \stackrel{~}{W}\subset \partial D`$. The set $`\partial \stackrel{~}{W}`$ divides $`\mathbb{R}^2`$, consequently it has dimension not less than one (see \[G-W\]). Therefore, we can find three different points $`z_1`$, $`z_2`$, $`z_3\in \partial \stackrel{~}{W}`$. Each of these points is accessible from each of the connected sets $`W`$ and $`\bigcup _{i=1}^nIntV_i`$. There exists a continuous injective mapping (see \[ZVC\]) $$\phi :I\to \mathbb{R}^2$$ which satisfies the conditions $$\phi (0)=z_1,\quad \phi (1)=z_2,\quad \phi ((0,1))\subset \bigcup _{i=1}^{n}IntV_i.$$ Let $`z=\phi (1/2)`$. There exists a continuous injective mapping $$\stackrel{~}{\phi }:I\to \mathbb{R}^2,$$ $$\stackrel{~}{\phi }(0)=z_3,\quad \stackrel{~}{\phi }(1)=z,\quad \stackrel{~}{\phi }((0,1])\subset \bigcup _{i=1}^{n}IntV_i.$$ Let $`t_1=\mathrm{min}\{t\in I\,:\,\stackrel{~}{\phi }(t)\in \phi (I)\}`$. We have $`t_1>0`$, since $`z_3=\stackrel{~}{\phi }(0)\notin \phi (I)`$. Denote $`z^{}=\stackrel{~}{\phi }(t_1)`$. Then a $`t_2\in (0,1)`$ is uniquely defined such that $`z^{}=\phi (t_2)`$. 
Consider the continuous injective mappings $$\phi _1:I^2,\phi _1(t)=\phi (t_2(1t));$$ $$\phi _2:I^2,\phi _2(t)=\phi ((1t_2)t+t_2);$$ $$\phi _3:I^2,\phi _3(t)=\stackrel{~}{\phi }(t_1(1t)),$$ which satisfy the relations $$\phi _s(0)=z_s,\phi _s(1)=z^{},\phi _s((0,1])\underset{i=1}{\overset{n}{}}IntV_i,s=1,2,3;$$ $$\phi _{s_1}([0,1))\phi _{s_2}([0,1))=\mathrm{}\text{when }s_1s_2.$$ Similarly, there exist a point $`z^{\prime \prime }W`$ and continuous injective mappings $`\psi _s:I^2`$, $`s=1,2,3`$, such that $$\psi _s(0)=z_s,\psi _s(1)=z^{\prime \prime },\psi _s((0,1])W,s=1,2,3;$$ $$\psi _{s_1}([0,1))\psi _{s_2}([0,1))=\mathrm{}\text{when }s_1s_2.$$ Since $$W\left(\underset{i=1}{\overset{n}{}}IntV_i\right)=\mathrm{},$$ the equality $$\left(\left(\underset{s=1}{\overset{3}{}}\phi _s(I)\right)\left(\underset{s=1}{\overset{3}{}}\psi _s(I)\right)\right)=\underset{s=1}{\overset{3}{}}\{z_s\}$$ holds. Therefore, each of the sets $$\phi _{s_1}(I)\phi _{s_2}(I)\psi _{s_1}(I)\psi _{s_2}(I),s_1s_2$$ is homeomorphic to a circle. The set $$^2\backslash \left(\underset{s=1}{\overset{3}{}}(\phi _s(I)\psi _s(I))\right)$$ falls into three connected components $`U_1`$, $`U_2`$, $`U_3`$, two of which are homeomorphic to the open two-dimensional disk while the third is unbounded. Since $$\left(\underset{s=1}{\overset{3}{}}(\phi _s(I)\psi _s(I))\right)\stackrel{~}{W}=\mathrm{},$$ there exists $`j\{1,2,3\}`$ such that $`\stackrel{~}{W}U_j`$. But this is impossible, because each of the sets $`ClU_s`$, $`s=1,2,3`$, contains exactly two of the points $`z_1`$, $`z_2`$, $`z_3`$. Thus we have proved that the set $`\stackrel{~}{W}D`$ consists of at most two points. Therefore, a point $`z\stackrel{~}{W}`$ and an $`\epsilon >0`$ can be found satisfying the inclusion $`U_\epsilon (z)IntD`$. 
The set $$\stackrel{~}{W}U_\epsilon (z)\left(\underset{i=1}{\overset{n}{}}IntV_i\right)IntD$$ is connected, since $`\stackrel{~}{W}_{i=1}^nV_i`$ and the sets $`\stackrel{~}{W}`$, $`U_\epsilon (z)`$, $`_{i=1}^nIntV_i`$ are connected. Since $`\stackrel{~}{W}`$ was chosen arbitrarily, the set $`IntD`$ is connected. Applying theorem 3 to $`D`$, we conclude that this set is homeomorphic to the closed two-dimensional disk. ∎ ###### Proof of theorem 2. Let $`in_i:I^2S^2`$, $`i=1,\mathrm{},n`$, be the inclusion maps, $`in_i(I^2)=V_i`$. Without loss of generality, we may assume that the north pole $`s_0`$ of $`S^2`$ lies in $`W`$. Consider the stereographic projection $$f:S^2\{s_0\}^2.$$ As is known, this map is a homeomorphism. Since $`V_iS^2\{s_0\}`$, $`i=1,\mathrm{},n`$, and the set $`S^2\{s_0\}`$ is open in $`S^2`$, the compositions $$In_i=fin_i:I^2^2,i=1,\mathrm{},n$$ are continuous and injective. The set $`I^2`$ is compact, therefore the maps $`In_i`$, $`i=1,\mathrm{},n`$, are embeddings. Write $`\widehat{V}_i=f(V_i)=In_i(I^2)`$, $`i=1,\mathrm{},n`$. Since the map $`f`$ is one-to-one, $$f(\underset{i=1}{\overset{n}{}}IntV_i)=\underset{i=1}{\overset{n}{}}f(IntV_i)=\underset{i=1}{\overset{n}{}}Int\widehat{V}_i.$$ The set $`_{i=1}^nInt\widehat{V}_i`$ is connected as the image of a connected set under a continuous map. Thus the family $`\widehat{V}_1,\mathrm{},\widehat{V}_n`$ satisfies the conditions of theorem 1. Consider the open set $`W^{}=W\{s_0\}S^2`$. It is easy to see that $`W^{}=W\{s_0\}`$ and $`s_0`$ is an isolated point of the boundary of $`W^{}`$. Denote $`\widehat{W}=f(W^{})^2`$. Obviously, $`\widehat{W}`$ is the unique unbounded connected component of the set $`^2_{i=1}^n\widehat{V}_i`$. Applying theorem 1, we conclude that the set $`^2\widehat{W}`$ is homeomorphic to the closed two-dimensional disk, and its boundary $`(^2\widehat{W})=\widehat{W}`$ is homeomorphic to the circle $`S^1`$. 
It follows immediately that the set $`W=f^1(\widehat{W})`$ of limit points of $`W`$ is homeomorphic to a circle. By theorem 4, the set $`W`$ divides $`S^2`$ into two open connected components, and the closure of each of these components is homeomorphic to the closed two-dimensional disk. Consequently, the set $`ClW`$ is homeomorphic to the closed two-dimensional disk. ∎
# Extraction of level density and 𝛾 strength function from primary 𝛾 spectra ## 1 Introduction The $`\gamma `$ transitions of excited nuclei give rich information on nuclear properties. In particular, the energy distribution of the first emitted $`\gamma `$ rays from a given excitation energy reveals information on the level density at the excitation energy to which the nucleus decays, and on the $`\gamma `$ strength function at the difference of those two energies. If the initial and final excitation energies belong to the continuum energy region, typically above 4 MeV of excitation energy for nuclei in the rare earth region, thermodynamic properties may also be investigated . Recently, the nuclear level density has become the object of new interest. There is strong theoretical progress in making calculations applicable to higher energies and heavier nuclei. In particular, the shell-model Monte Carlo technique is moving the frontiers at present, and it is now mandatory to compare these calculations with experiments. Furthermore, the level density is essential for the understanding of nucleosynthesis in stars. Level densities are input to large computer codes where thousands of cross sections are estimated . Our present knowledge of the gross properties of the $`\gamma `$ strength function is also poor. The Weisskopf estimate, which is based on single-particle transitions, see e.g. , gives a first estimate of the strengths. However, for some measured $`\gamma `$ transitions the transition rate may deviate by many orders of magnitude from these estimates. A recent compilation of average $`\gamma `$ transition strengths for M1, E1 and E2 transitions is given in Ref. . The uncertainties concern the absolute strength as well as how the strength depends on the $`\gamma `$ transition energy. 
For E1 transitions, it is usually assumed that the energy dependence follows the GDR $`(\gamma ,\gamma ^{})`$ cross section; however, this is not at all clear for low-energy $`\gamma `$ rays. In this work we describe a method to extract simultaneously the level density and the $`\gamma `$ strength function in the continuum energy region for low spin (0-6 $`\mathrm{}`$). The basic ideas and the assumptions behind the method were first presented in Ref. . An implementation using an iterative projection technique was first described in Ref. . However, due to the existence of infinitely many solutions and the unfortunate renormalization of the primary $`\gamma `$ spectrum in every iteration step, this first implementation suffered from various severe problems, including divergence of the extracted quantities . Several solutions of the convergence problem have been proposed and presented at different conferences, using approximate normalizations, but none of them yields exact reproductions of test spectra. However, data using one of those approximate methods were published in Ref. . Today, we consider the previous methods premature, and in the following we present a completely new, exact and convergent technique to extract level density and $`\gamma `$ strength function from primary $`\gamma `$ spectra. ## 2 Extracting level density and $`\gamma `$ strength function ### 2.1 Ansatz We take the experimental primary $`\gamma `$ matrix $`\mathrm{\Gamma }(E_i,E_\gamma )`$ as the starting point for this discussion. We assume that this matrix is normalized for every excitation energy bin $`E_i`$. This is done by letting the sum of $`\mathrm{\Gamma }`$ over all $`\gamma `$ energies $`E_\gamma `$, from some minimum $`\gamma `$ energy $`E_\gamma ^{\mathrm{min}}`$ to the maximum $`\gamma `$ energy $`E_i`$ at this excitation energy bin, be unity, i.e. 
$$\underset{E_\gamma =E_\gamma ^{\mathrm{min}}}{\overset{E_i}{}}\mathrm{\Gamma }(E_i,E_\gamma )=1.$$ (1) The $`\gamma `$ decay probability from the excitation energy $`E_i`$ to $`E_f`$ by a $`\gamma `$ ray with energy $`E_\gamma =E_i-E_f`$ in the continuum energy region is proportional to the level density $`\varrho (E_f)`$ and a $`\gamma `$ energy dependent factor $`F(E_\gamma )`$ . This ansatz is illustrated in Fig. 1. The experimental normalized primary $`\gamma `$ matrix $`\mathrm{\Gamma }`$ can therefore theoretically be approximated by $$\mathrm{\Gamma }_{\mathrm{th}}(E_i,E_\gamma )=\frac{F(E_\gamma )\varrho (E_i-E_\gamma )}{_{E_\gamma =E_\gamma ^{\mathrm{min}}}^{E_i}F(E_\gamma )\varrho (E_i-E_\gamma )},$$ (2) which also fulfills Eq. (1). As shown in Appendix A, one can construct all solutions of Eq. (2) by applying the transformation given by Eq. (3) to one arbitrary solution, where the generators of the transformation $`A`$, $`B`$ and $`\alpha `$ can be chosen freely. $`\stackrel{~}{\varrho }(E_i-E_\gamma )`$ $`=`$ $`\varrho (E_i-E_\gamma )A\mathrm{exp}(\alpha (E_i-E_\gamma )),`$ (3) $`\stackrel{~}{F}(E_\gamma )`$ $`=`$ $`F(E_\gamma )B\mathrm{exp}(\alpha E_\gamma ),`$ ### 2.2 Method #### 2.2.1 $`0^{\mathrm{th}}`$ order estimate Since all possible solutions of Eq. (2) can be obtained by the transformation given by Eq. (3) of one arbitrary solution, we conveniently choose $`\varrho ^{(0)}=1`$. 
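The invariance under the transformation of Eq. (3) is easy to verify numerically. The following sketch is a minimal illustration, assuming an integer energy grid with $`E_\gamma ^{\mathrm{min}}=0`$; the function and variable names are ours, not those of the paper's code. It builds $`\mathrm{\Gamma }_{\mathrm{th}}`$ according to Eq. (2) and checks that the exponential factors cancel in every normalized row:

```python
import numpy as np

def gamma_th(rho, F):
    """Theoretical normalized primary-gamma matrix of Eq. (2), on an
    integer energy grid with E_gamma^min = 0 (an assumption of this
    sketch, not of the paper)."""
    n = len(rho)
    G = np.zeros((n, n))
    for i in range(n):
        row = F[:i + 1] * rho[i::-1]        # F(E_g) * rho(E_i - E_g)
        G[i, :i + 1] = row / row.sum()      # normalization of Eq. (1)
    return G

rng = np.random.default_rng(0)
rho = rng.uniform(1.0, 2.0, 8)
F = rng.uniform(1.0, 2.0, 8)

# transform with arbitrary generators A, B, alpha, Eq. (3)
A, B, alpha = 3.0, 0.5, 0.17
E = np.arange(8)
rho_t = rho * A * np.exp(alpha * E)
F_t = F * B * np.exp(alpha * E)

# the factor A*B*exp(alpha*E_i) is constant along each row and cancels
assert np.allclose(gamma_th(rho, F), gamma_th(rho_t, F_t))
```

The key point is that the product of the transformed functions acquires only a row-constant factor $`AB\mathrm{exp}(\alpha E_i)`$, which drops out of the normalized matrix.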
With this choice, the $`0^{\mathrm{th}}`$ order estimate of $`F`$ is given by $$\mathrm{\Gamma }(E_i,E_\gamma )=\frac{F^{(0)}(E_\gamma )}{_{E_\gamma =E_\gamma ^{\mathrm{min}}}^{E_i}F^{(0)}(E_\gamma )}.$$ (4) Summing over the excitation energy interval $`E_i^{\mathrm{min}}\mathrm{}E_i^{\mathrm{max}}`$ while obeying $`E_i\ge E_\gamma `$ yields $$\underset{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_\gamma )}{\overset{E_i^{\mathrm{max}}}{}}\mathrm{\Gamma }(E_i,E_\gamma )=F^{(0)}(E_\gamma )\underset{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_\gamma )}{\overset{E_i^{\mathrm{max}}}{}}\frac{1}{_{E_\gamma =E_\gamma ^{\mathrm{min}}}^{E_i}F^{(0)}(E_\gamma )},$$ (5) where the sum on the right hand side can be set to unity, giving $$F^{(0)}(E_\gamma )=\underset{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_\gamma )}{\overset{E_i^{\mathrm{max}}}{}}\mathrm{\Gamma }(E_i,E_\gamma ).$$ (6) #### 2.2.2 Higher order estimates In order to calculate higher order estimates of the $`\varrho `$ and $`F`$ functions, we developed a least $`\chi ^2`$ method. The basic idea of this method is to minimize $$\chi ^2=\frac{1}{N_{\mathrm{free}}}\underset{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_\gamma )}{\overset{E_i^{\mathrm{max}}}{}}\underset{E_\gamma =E_\gamma ^{\mathrm{min}}}{\overset{E_i}{}}\left(\frac{\mathrm{\Gamma }_{\mathrm{th}}(E_i,E_\gamma )-\mathrm{\Gamma }(E_i,E_\gamma )}{\mathrm{\Delta }\mathrm{\Gamma }(E_i,E_\gamma )}\right)^2,$$ (7) where $`N_{\mathrm{free}}`$ is the number of degrees of freedom, and $`\mathrm{\Delta }\mathrm{\Gamma }(E_i,E_\gamma )`$ is the uncertainty in the primary $`\gamma `$ matrix. Since we assume every point of the $`\varrho `$ and $`F`$ functions to be an independent variable, we calculate $`N_{\mathrm{free}}`$ as $$N_{\mathrm{free}}=\mathrm{ch}(\mathrm{\Gamma })-\mathrm{ch}(\varrho )-\mathrm{ch}(F),$$ (8) where ch indicates the number of data points in the respective spectra. 
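The zeroth-order step of Eq. (6) can be sketched as follows, again on an integer grid; the binning and names are illustrative assumptions:

```python
import numpy as np

def f0_estimate(G, i_min, i_max):
    """Zeroth-order estimate of F, Eq. (6): sum the normalized primary
    gamma matrix over the excitation bins, respecting E_i >= E_gamma."""
    n_g = G.shape[1]
    return np.array([G[max(i_min, g):i_max + 1, g].sum() for g in range(n_g)])

# small worked example on a 3x3 normalized triangular matrix
G = np.array([[1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5]])
F0 = f0_estimate(G, 0, 2)   # column sums over the allowed bins
```

For this toy matrix the estimate is simply the column sums, `[1.7, 0.8, 0.5]`.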
We minimize the reduced $`\chi ^2`$ by letting all derivatives $$\frac{\partial }{\partial F(E_\gamma )}\chi ^2=0\mathrm{and}\frac{\partial }{\partial \varrho (E_i-E_\gamma )}\chi ^2=0$$ (9) for every argument $`E_\gamma `$ and $`E_i-E_\gamma `$, respectively. A rather tedious but straightforward calculation yields the equivalence of Eqs. (9) with $`F(E_\gamma )`$ $`=`$ $`{\displaystyle \frac{_{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_\gamma )}^{E_i^{\mathrm{max}}}\varrho (E_i-E_\gamma )\phi (E_i,E_\gamma )}{_{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_\gamma )}^{E_i^{\mathrm{max}}}\varrho ^2(E_i-E_\gamma )\psi (E_i,E_\gamma )}}`$ (10) $`\varrho (E_f)`$ $`=`$ $`{\displaystyle \frac{_{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_f+E_\gamma ^{\mathrm{min}})}^{E_i^{\mathrm{max}}}F(E_i-E_f)\phi (E_i,E_i-E_f)}{_{E_i=\mathrm{max}(E_i^{\mathrm{min}},E_f+E_\gamma ^{\mathrm{min}})}^{E_i^{\mathrm{max}}}F^2(E_i-E_f)\psi (E_i,E_i-E_f)}},`$ (11) where $`\phi (E_i,E_\gamma )`$ $`=`$ $`{\displaystyle \frac{a(E_i)}{s^3(E_i)}}-{\displaystyle \frac{b(E_i)}{s^2(E_i)}}+{\displaystyle \frac{\mathrm{\Gamma }(E_i,E_\gamma )}{s(E_i)\left(\mathrm{\Delta }\mathrm{\Gamma }(E_i,E_\gamma )\right)^2}}`$ (12) $`\psi (E_i,E_\gamma )`$ $`=`$ $`{\displaystyle \frac{1}{\left(s(E_i)\mathrm{\Delta }\mathrm{\Gamma }(E_i,E_\gamma )\right)^2}},`$ (13) and $`a(E_i)`$ $`=`$ $`{\displaystyle \underset{E_\gamma =E_\gamma ^{\mathrm{min}}}{\overset{E_i}{}}}\left({\displaystyle \frac{F(E_\gamma )\varrho (E_i-E_\gamma )}{\mathrm{\Delta }\mathrm{\Gamma }(E_i,E_\gamma )}}\right)^2`$ (14) $`b(E_i)`$ $`=`$ $`{\displaystyle \underset{E_\gamma =E_\gamma ^{\mathrm{min}}}{\overset{E_i}{}}}{\displaystyle \frac{F(E_\gamma )\varrho (E_i-E_\gamma )\mathrm{\Gamma }(E_i,E_\gamma )}{\left(\mathrm{\Delta }\mathrm{\Gamma }(E_i,E_\gamma )\right)^2}}`$ (15) $`s(E_i)`$ $`=`$ $`{\displaystyle \underset{E_\gamma =E_\gamma ^{\mathrm{min}}}{\overset{E_i}{}}}F(E_\gamma )\varrho (E_i-E_\gamma ).`$ (16) Within one iteration, we first calculate the functions $`a(E_i)`$, $`b(E_i)`$ and $`s(E_i)`$, 
using the previous order estimates for $`\varrho `$ and $`F`$. Using these three functions, we can calculate the matrices $`\phi (E_i,E_\gamma )`$ and $`\psi (E_i,E_\gamma )`$. Next, we calculate the current-order estimates of $`\varrho `$ and $`F`$ by means of Eqs. (10) and (11). Figure 2 shows where the sums in Eqs. (10) and (11) are performed. #### 2.2.3 Convergence properties The method usually converges very well. However, in some cases the $`\chi ^2`$ minimum is very shallow, and there is a chance that the iteration procedure fails. In order to enhance convergence of the method, we have restricted the maximum change of every data point in $`\varrho `$ and $`F`$ within one iteration to a certain percentage $`P`$. This means that the data point obtained in the current iteration (new) is checked to see whether it lies within the interval $$\frac{\mathrm{old}}{(1+P/100)}\le \mathrm{new}\le (1+P/100)\mathrm{old},$$ (17) determined by the data point from the previous iteration (old). If the new data point lies outside this interval, it is set to the value of the closest boundary. Applying this method to some of our data, we have observed that the smaller $`P`$ is chosen, the smaller $`\chi ^2`$ becomes in the end, when the procedure has reached its limit. The reason for this is that more and more data points in $`\varrho `$ and $`F`$ will converge, while fewer and fewer points (typically at high energies $`E_\gamma `$ and $`E_f`$ where few counts are available) oscillate between the two boundaries given by Eq. (17). Occasionally, we can choose $`P`$ so small that all data points converge and no oscillating behavior can be seen. However, in some cases oscillating data points cannot be avoided by any choice of $`P`$, which might indicate that the $`\chi ^2`$ minimum is too shallow, or does not even exist, for some data points in $`\varrho `$ and $`F`$. 
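A minimal sketch of one such iteration step, implementing Eqs. (10)–(16) on an integer grid with $`E_i^{\mathrm{min}}=E_\gamma ^{\mathrm{min}}=0`$ (the names and the toy input are our assumptions, not from the paper's code). A useful consistency check is that exact data, $`\mathrm{\Gamma }=F\varrho /s`$, form a fixed point of the update:

```python
import numpy as np

def chi2_iteration(G, dG, rho, F):
    """One least-chi^2 update of rho and F, Eqs. (10)-(16).
    G, dG: square normalized primary-gamma matrix and its errors,
    with G[i, g] = 0 for g > i (i.e. E_gamma <= E_i)."""
    n = G.shape[0]
    a = np.zeros(n); b = np.zeros(n); s = np.zeros(n)   # Eqs. (14)-(16)
    for i in range(n):
        for g in range(i + 1):
            fr = F[g] * rho[i - g]
            s[i] += fr
            a[i] += (fr / dG[i, g]) ** 2
            b[i] += fr * G[i, g] / dG[i, g] ** 2
    numF = np.zeros(n); denF = np.zeros(n)
    numR = np.zeros(n); denR = np.zeros(n)
    for i in range(n):
        for g in range(i + 1):
            psi = 1.0 / (s[i] * dG[i, g]) ** 2                      # Eq. (13)
            phi = (a[i] / s[i] ** 3 - b[i] / s[i] ** 2
                   + G[i, g] / (s[i] * dG[i, g] ** 2))              # Eq. (12)
            numF[g] += rho[i - g] * phi;  denF[g] += rho[i - g] ** 2 * psi
            numR[i - g] += F[g] * phi;    denR[i - g] += F[g] ** 2 * psi
    return numR / denR, numF / denF                                 # Eqs. (11), (10)

# exact data are a fixed point of the update: with Gamma = F*rho/s
# one has b = a/s, hence phi = F*rho*psi and the ratios reproduce rho, F
n = 6
rho_t = np.exp(0.4 * np.arange(n))
F_t = 1.0 / (1.0 + np.arange(n)) ** 3
G = np.zeros((n, n))
for i in range(n):
    row = F_t[:i + 1] * rho_t[i::-1]
    G[i, :i + 1] = row / row.sum()
dG = np.full((n, n), 0.1)
rho_1, F_1 = chi2_iteration(G, dG, rho_t, F_t)
assert np.allclose(rho_1, rho_t) and np.allclose(F_1, F_t)
```

The restriction of Eq. (17) would be applied on top of this update by clamping each returned point to within a factor $`(1+P/100)`$ of its previous value.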
A small $`P`$ would lead to an accurate result but make a large number of iterations necessary, while a large $`P`$ would shorten the execution time of the procedure but affect the accuracy of the solution. We combine the advantages and avoid the disadvantages of the two concepts by letting $`P`$ become smaller as a function of the number of iterations. In our actual computer code , we have implemented a stepwise decrease of $`P`$ as shown in Table 1. The choice of $`P`$ as a function of the number of iterations is quite arbitrary, but we have achieved very good convergence for spectra whose convergence properties without restrictions are only fair. In conclusion, we stress that the convergence properties of the method in many cases do not require any restrictions of the maximum variation of data points within one iteration. In those cases, however, where restrictions are mandatory to achieve or enhance convergence, they will only affect a small percentage of the data points at high energies, where data in the primary $`\gamma `$ matrix are sparse and mainly erratically scattered. Should the restrictions of Table 1 prove unsatisfactory for convergence, the number of iterations or the value of $`P`$ can be changed, since the validity of the method does not rely on these values. #### 2.2.4 Error calculation A huge effort has been made in order to estimate errors of the data points in $`\varrho `$ and $`F`$. Since the experimental primary $`\gamma `$ matrix has been obtained from raw data by applying an unfolding procedure and a subtraction technique , error propagation through these methods is very tedious and has never been performed. In order to perform an error estimation of $`\varrho `$ and $`F`$, we first have to estimate the error of the primary $`\gamma `$ matrix data. 
A rough estimation yields $$\mathrm{\Delta }\mathrm{\Gamma }=2\sqrt{(M_1+M_2)\mathrm{\Gamma }},$$ (18) where $`M_1`$ denotes the number of first and higher generation $`\gamma `$ rays, and $`M_2`$ the number of second and higher generation $`\gamma `$ rays at one excitation energy bin $`E_i`$. We estimate those quantities roughly by $$M_1=\mathrm{max}(1,M(E_i))\mathrm{and}M_2=\mathrm{max}(0,M(E_i)-1),$$ (19) where the multiplicity $`M(E_i)`$ is given by a fit to the experimental data in Ref. $$M(E_i)=0.42+4.67\cdot 10^{-4}E_i-1.29\cdot 10^{-8}E_i^2,$$ (20) and $`E_i`$ is given in keV. The motivation of Eq. (18) is that during the extraction of primary $`\gamma `$ spectra in Ref. the second and higher generation $`\gamma `$ ray spectrum, which has of the order of $`M_2\mathrm{\Gamma }`$ counts, is subtracted from the total unfolded $`\gamma `$ ray spectrum, which has of the order of $`M_1\mathrm{\Gamma }`$ counts. The errors of these spectra are roughly the square roots of the numbers of counts. If we assume that these errors are independent of each other, the primary $`\gamma `$ spectra have an error of roughly $`\sqrt{(M_1+M_2)\mathrm{\Gamma }}`$. The factor 2 in Eq. (18) is due to the unfolding procedure and is quite uncertain. We assume this factor to be roughly equal to the ratio of the solid angle covered by the CACTUS detector array of some 15% to its photopeak efficiency of some 7% at 1.3 MeV . We have, however, to apply a couple of minor corrections to Eq. (18). Firstly, the first generation method exhibits some methodical problems at low excitation energies. The basic assumption behind this method is that the $`\gamma `$ decay properties of an excited state are unaffected by its formation mechanism, e.g. direct population by a nuclear reaction, or population by a nuclear reaction followed by one or several $`\gamma `$ rays. 
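The error estimate of Eqs. (18)–(20) can be sketched as follows (energies in keV; the function names are ours):

```python
import numpy as np

def multiplicity(E_i):
    """Fitted gamma multiplicity, Eq. (20); E_i in keV."""
    return 0.42 + 4.67e-4 * E_i - 1.29e-8 * E_i ** 2

def delta_gamma(counts, E_i):
    """Rough error of one primary-gamma channel, Eqs. (18)-(19)."""
    M = multiplicity(E_i)
    M1 = max(1.0, M)        # first and higher generations
    M2 = max(0.0, M - 1.0)  # second and higher generations
    return 2.0 * np.sqrt((M1 + M2) * counts)
```

At 4 MeV of excitation energy, for example, the fit gives a multiplicity close to 2, so a channel with 100 counts receives an error of roughly 36 counts.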
This assumption is not completely valid at low excitation energies, where the thermalization time might compete with the half-life of the excited state and the reactions used exhibit a more direct than compound character. This and some experimental problems like ADC threshold walk and bad timing properties of low-energy $`\gamma `$ rays, all described in Ref. , oblige us to exclude $`\gamma `$ rays below 1 MeV from the primary $`\gamma `$ spectra. For low-energy $`\gamma `$ rays above 1 MeV, we increase the error bars by the following rule. For each excitation energy bin $`E_i`$, we identify the channel with the maximum number of counts ch<sup>max</sup> (this occurs typically between 2 and 3 MeV of $`\gamma `$ energy). This is also the channel with the highest error err<sup>max</sup>, following Eq. (18). We then replace the errors of the channels ch below ch<sup>max</sup> by $$\mathrm{err}=\mathrm{err}^{\mathrm{max}}\left(1+1.0\frac{\mathrm{ch}^{\mathrm{max}}-\mathrm{ch}}{\mathrm{ch}^{\mathrm{max}}}\right).$$ (21) This formula cannot be motivated by simple handwaving arguments. We feel, however, after inspecting several primary $`\gamma `$ matrices, that we estimate the systematic error of these spectra quite accurately. Secondly, the unfolding procedure exhibits some methodical problems at high $`\gamma `$ energies. Since the ratio of the photopeak efficiency to the solid angle covered by the CACTUS detector array drops for higher $`\gamma `$ energies, the counts at these energies are multiplied by significant factors in the unfolding procedure. Some channels might nevertheless turn out to contain almost zero counts, giving differences in counts of two orders of magnitude between neighboring channels. Since the errors are estimated as proportional to the square root of the number of counts, the estimated errors of these channels do not reflect their statistical significance. 
In order to obtain errors comparable to those of neighboring channels, we check the errors within one excitation energy bin from the $`\gamma `$ energy of $``$4 MeV and upwards. If the error drops by more than a factor of 2 when going from one channel to the next higher one, we set the error of the higher channel equal to 50% of the error of the previous one. This rule, too, cannot be motivated by a simple argument. It affects, however, usually only a very small percentage of channels, and an inspection of several primary $`\gamma `$ spectra gives us confidence in our error estimation. It is very tedious to perform error propagation calculations through the extraction procedure. We therefore decided to apply a simulation technique to obtain reliable errors of the $`\varrho `$ and $`F`$ functions. For this reason, we add statistical fluctuations to the primary $`\gamma `$ matrix. For every channel in the primary $`\gamma `$ matrix, we choose a random number $`r`$ between zero and one. We then calculate $`x`$ according to $$r=\frac{1}{\sqrt{2\pi }\sigma }\int _{\mathrm{}}^x\mathrm{exp}\left(-\frac{(\xi -a)^2}{2\sigma ^2}\right)d\xi ,$$ (22) where $`a`$ is the number of counts and $`\sigma `$ the error of this channel. By replacing the number of counts $`a`$ with $`x`$, we add a statistical fluctuation to this channel. This is done for all channels of the primary $`\gamma `$ matrix, and new $`\varrho ^{(s)}`$ and $`F^{(s)}`$ functions are extracted, containing statistical fluctuations. This procedure is repeated 100 times, which gives reasonable statistics. 
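Solving Eq. (22) for $`x`$ is equivalent to drawing $`x`$ from a Gaussian with mean $`a`$ and width $`\sigma `$, so the simulation can be sketched as follows (the `extract` callback stands in for the full extraction procedure; all names are illustrative):

```python
import numpy as np

def simulated_errors(G, dG, extract, n_sim=100, seed=0):
    """Errors of rho and F by statistical simulation, Eqs. (22)-(24):
    perturb every channel of the primary-gamma matrix by a Gaussian
    fluctuation and take the spread of the re-extracted functions."""
    rng = np.random.default_rng(seed)
    rho0, F0 = extract(G)
    sum_r = np.zeros_like(rho0)
    sum_f = np.zeros_like(F0)
    for _ in range(n_sim):
        rho_s, F_s = extract(rng.normal(G, dG))     # fluctuation, Eq. (22)
        sum_r += (rho_s - rho0) ** 2
        sum_f += (F_s - F0) ** 2
    return np.sqrt(sum_r / n_sim), np.sqrt(sum_f / n_sim)   # Eqs. (23)-(24)

# toy check: a linear "extraction" (row and column sums) of a flat matrix
extract = lambda M: (M.sum(axis=1), M.sum(axis=0))
G = np.ones((4, 4))
dG = np.full((4, 4), 0.1)
d_rho, d_F = simulated_errors(G, dG, extract)
```

For this linear toy extraction each output is a sum of four independent Gaussian channels of width 0.1, so the simulated errors come out close to 0.2.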
The errors in $`\varrho `$ and $`F`$ are then calculated by $`\mathrm{\Delta }\varrho (E_f)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{100}}}\sqrt{{\displaystyle \underset{i=1}{\overset{100}{}}}[\varrho _i^{(s)}(E_f)-\varrho (E_f)]^2}`$ (23) $`\mathrm{\Delta }F(E_\gamma )`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{100}}}\sqrt{{\displaystyle \underset{i=1}{\overset{100}{}}}[F_i^{(s)}(E_\gamma )-F(E_\gamma )]^2}.`$ (24) #### 2.2.5 Normalizing the level density to other experimental data As pointed out above, all solutions of Eq. (2) can be generated from one arbitrary solution by the transformation given by Eq. (3). It is of course discouraging that an infinite number of equally good solutions exists; however, by comparing to known data, we will be able to pick out the most physical one. At low excitation energies, up to typically 2 MeV for even-even nuclei, we can compare the extracted level density to the number of known levels per excitation energy bin (for a comprehensive compilation of all known levels in nuclei see e.g. Ref. ). At the neutron binding energy, we can deduce the level density for many nuclei from available neutron resonance spacing data. The starting point is Eqs. (4) and (5) of Ref. $`\varrho (U,J)`$ $`=`$ $`{\displaystyle \frac{\sqrt{\pi }}{12}}{\displaystyle \frac{\mathrm{exp}2\sqrt{aU}}{a^{1/4}U^{5/4}}}{\displaystyle \frac{(2J+1)\mathrm{exp}(-(J+1/2)^2/2\sigma ^2)}{2\sqrt{2\pi }\sigma ^3}}`$ (25) $`\varrho (U)`$ $`=`$ $`{\displaystyle \frac{\sqrt{\pi }}{12}}{\displaystyle \frac{\mathrm{exp}2\sqrt{aU}}{a^{1/4}U^{5/4}}}{\displaystyle \frac{1}{\sqrt{2\pi }\sigma }},`$ (26) where $`\varrho (U,J)`$ is the level density for both parities and for a given spin $`J`$, and $`\varrho (U)`$ is the level density for all spins and parities; $`\sigma `$ is the spin dependence parameter and $`a`$ the level density parameter. 
Assuming that $`I`$ is the spin of the target nucleus in a neutron resonance experiment, the neutron resonance spacing $`D`$ can be written as $$\frac{1}{D}=\frac{1}{2}(\varrho (U_n,J=I+1/2)+\varrho (U_n,J=I-1/2)),$$ (27) since all levels are accessible in neutron resonance experiments, and we assume that both parities contribute equally to the level density at the neutron binding energy, represented by $`U_n`$. Combining Eqs. (25), (26) and (27), one can calculate the total level density at the neutron binding energy $$\varrho (U_n)=\frac{2\sigma ^2}{D}\frac{1}{(I+1)\mathrm{exp}(-(I+1)^2/2\sigma ^2)+I\mathrm{exp}(-I^2/2\sigma ^2)},$$ (28) where $`\sigma ^2`$ is calculated by combining Eqs. (9) and (11) of Ref. , i.e. $$\sigma ^2=0.0888\sqrt{aU_n}A^{2/3},$$ (29) and $`A`$ is the mass number of the nucleus. It is assumed that $`\sigma ^2`$ has an error of $``$10% due to shell effects . One should also point out that $`U_n`$ is given by $`U_n=B_n-P`$, where $`B_n`$ is the neutron binding energy and $`P`$ the pairing energy, which can be found in Table III of Ref. for many nuclei. Unfortunately, we cannot compare the calculated level density at the neutron binding energy directly with our extracted level density, since, due to the omission of $`\gamma `$ rays below 1 MeV, the $`\varrho `$ function can only be extracted up to 1 MeV below the neutron binding energy. We will, however, extrapolate the extracted $`\varrho `$ function with a Fermi gas level density, obtained by combining Eqs. (26) and (29) $$\varrho (U)=\frac{1}{12\sqrt{0.1776}A^{1/3}}\frac{\mathrm{exp}2\sqrt{aU}}{a^{1/2}U^{3/2}}.$$ (30) This is done by adjusting the parameters $`A`$ and $`\alpha `$ of the transformation given by Eq. (3) such that the data fit the level density formula of Eq. (30) in an excitation energy interval between 3 and 1 MeV below $`B_n`$, where in most cases all parameters of Eq. (30) can be taken from Tables II and III of Ref. . 
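Eqs. (28)–(29) amount to a one-line conversion from a resonance spacing to a total level density; a sketch follows (the numerical inputs in the usage check are arbitrary illustrative values, not parameters of any particular nucleus):

```python
import numpy as np

def rho_at_Bn(D, I, a, U_n, A):
    """Total level density at the neutron binding energy, Eq. (28);
    D: neutron resonance spacing, I: target spin, a: level-density
    parameter, U_n = B_n - P, A: mass number (consistent energy units)."""
    sigma2 = 0.0888 * np.sqrt(a * U_n) * A ** (2.0 / 3.0)        # Eq. (29)
    denom = ((I + 1.0) * np.exp(-(I + 1.0) ** 2 / (2.0 * sigma2))
             + I * np.exp(-I ** 2 / (2.0 * sigma2)))
    return 2.0 * sigma2 / (D * denom)
```

Since $`D`$ enters only as $`1/D`$, halving the observed spacing doubles the deduced level density.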
This semi-experimental level density, spanning from 0 MeV up to $`B_n`$, is then again transformed according to Eq. (3) such that it fits the number of known levels up to $``$2 MeV and $``$1 MeV for even-even and odd-even nuclei, respectively, and simultaneously the level density deduced from the neutron resonance spacing at $`B_n`$. We have to point out, however, that after the fit to known data the extrapolation no longer has the functional form of Eq. (30), due to the transformation given by Eq. (3) applied to the semi-experimental level density. Therefore, if necessary, a new extrapolation of the experimental data must be performed. We have successfully implemented the new extraction method in a Fortran 77 computer code called rhosigchi . The computer code was compiled under a Solaris 2.5.1 operating system running on a Dual UltraSPARC station with 200 MHz CPU. The execution time of one extraction is on the order of 10-20 s. The computer code has $``$1200 programming lines, excluding special library input and output routines. ## 3 Applications to spectra ### 3.1 Testing the method on theoretical spectra The method has been tested on a theoretically calculated primary $`\gamma `$ matrix. The theoretical primary $`\gamma `$ matrix was obtained by simply multiplying a level density $`\varrho `$ by a $`\gamma `$ energy dependent factor $`F`$ according to Eq. (2). The level density was given by a backshifted Fermi gas formula $$\varrho (U)=CU^{-3/2}\mathrm{exp}(2\sqrt{aU})$$ (31) with $`U=E_f-P`$. Below the minimum at $`U=9/(4a)`$ a constant level density was used. The $`\gamma `$ energy dependent factor was chosen as $$F(E_\gamma )=CE_\gamma ^{4.2}.$$ (32) In addition, a “fine structure” was imposed on both functions, by scaling several $``$1 MeV broad intervals with factors around 1.5–5. Both model functions are shown in the upper half of Fig. 3. 
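The construction of such a test matrix can be sketched as follows, on an integer grid in arbitrary bin units; the parameter values and the shape of the imposed fine structure are our illustrative choices, not those of the test above:

```python
import numpy as np

def make_test_matrix(n, a=0.5, P=2.0, n_gamma=4.2, seed=1):
    """Synthetic normalized primary-gamma matrix from Eqs. (2), (31), (32).
    The E_gamma = 0 bin is offset by 1 to keep F finite (an assumption
    of this sketch; in the paper gammas below 1 MeV are excluded)."""
    rng = np.random.default_rng(seed)
    E = np.arange(n, dtype=float)
    U = np.maximum(E - P, 9.0 / (4.0 * a))      # constant below the minimum
    rho = U ** -1.5 * np.exp(2.0 * np.sqrt(a * U))      # Eq. (31)
    F = (E + 1.0) ** n_gamma                            # Eq. (32)
    for lo in range(0, n, 4):                   # impose some "fine structure"
        rho[lo:lo + 2] *= rng.uniform(1.5, 5.0)
    G = np.zeros((n, n))
    for i in range(n):
        row = F[:i + 1] * rho[i::-1]            # F(E_g) * rho(E_i - E_g)
        G[i, :i + 1] = row / row.sum()          # normalization of Eq. (1)
    return G, rho, F

G, rho, F = make_test_matrix(12)
assert np.allclose(G.sum(axis=1), 1.0)          # every row is normalized
```

Feeding such a matrix back through the extraction and comparing with `rho` and `F` (up to the transformation of Eq. (3)) reproduces the kind of test reported below.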
We extracted the $`\varrho `$ and $`F`$ functions from the theoretical primary $`\gamma `$ matrix using the excitation energy interval of 4 to 8 MeV and excluding all $`\gamma `$ rays below 1 MeV. In the lower panel of Fig. 3 we show the ratio of the extracted functions to the theoretical functions. After adjusting the extracted quantities with the transformation given by Eq. (3), we can state that the deviation from the input functions is smaller than one per thousand in the covered energy range of both functions. Tests of the old extraction method showed deviations of the order of 10% to 100% . We therefore consider the new extraction method to be much more reliable. ### 3.2 Testing the method on <sup>172</sup>Yb spectra We have tested the method on several experimental primary $`\gamma `$ spectra. We will in the following discuss a typical example, the <sup>173</sup>Yb(<sup>3</sup>He,$`\alpha `$)<sup>172</sup>Yb reaction. The experiment was carried out at the Oslo Cyclotron Laboratory (OCL) at the University of Oslo, using an MC35 cyclotron with a beam energy of 45 MeV and a beam intensity of typically 1 nA. The experiment ran for two weeks. The target was a self-supporting, isotopically enriched (92% <sup>173</sup>Yb) metal foil of 2.0 mg/cm<sup>2</sup> thickness. Particle identification and energy measurements were performed by a ring of 8 Si(Li) particle telescopes at 45° with respect to the beam axis. The $`\gamma `$ rays were detected by an array of 28 $`5^{\prime \prime }\times 5^{\prime \prime }`$ NaI(Tl) detectors (CACTUS). More experimental details can be found in . The raw data are unfolded, using measured response functions of the CACTUS detector array . After unfolding, a subtraction method is applied to the particle $`\gamma `$ matrix in order to extract the first generation $`\gamma `$ matrix . This primary $`\gamma `$ matrix is taken as the starting point for the extraction method presented here. In Fig. 
4, we show the normalized, experimental primary $`\gamma `$ spectra at ten different excitation energy bins (data points). The errors of the data points are estimated as explained above. The $`\varrho `$ and $`F`$ functions were extracted from the excitation energy interval 4-8 MeV, excluding all $`\gamma `$ energies smaller than 1 MeV. The lines are the calculated primary $`\gamma `$ spectra, obtained by multiplying the extracted level density $`\varrho `$ and the $`\gamma `$ energy dependent factor $`F`$ according to Eq. (2). One can see that the lines follow the data points very well. It can also be seen that the errors of the data points are estimated reasonably, giving a reduced $`\chi ^2`$ of $``$0.4. Figure 4 is a beautiful example of the claim that primary $`\gamma `$ spectra can be factorized according to the Axel-Brink hypothesis . Figure 5 shows how the parameters $`\alpha `$ and $`A`$ of the transformation given by Eq. (3) can be determined in the case of the <sup>173</sup>Yb(<sup>3</sup>He,$`\alpha `$)<sup>172</sup>Yb reaction. The extracted $`\varrho `$ function (data points) is compared to the number of known levels per excitation energy bin (histogram) and to the level density at the neutron binding energy, calculated from neutron resonance spacing data (data point in insert). The line in the insert is the extrapolation of the $`\varrho `$ function up to $`B_n`$ according to Section 2.2.5. In the following, the extracted $`\varrho `$ and $`F`$ functions are discussed. Both functions were already published, using the old extraction method, and some of the fine structure discussed below could already be seen in the previous publication . Figure 6 shows the level density $`\varrho `$ and the relative level density, which is the level density divided by an exponential fit to the data between the arrows. The parameters of the fit function $$\varrho _{\mathrm{fit}}=C\mathrm{exp}(E/T)$$ (33) are shown in the lower panel of the figure. 
In the relative level density one can see a small bump emerging at $``$2.7 MeV, probably due to the quenching of pairing correlations . One can also see very nicely the onset of strong pairing correlations at 1.0–1.5 MeV of excitation energy. In Fig. 7 the $`\gamma `$ energy dependent factor is shown (upper panel). In the lower panel, the same data are given, divided by a fit function of the form $$F_{\mathrm{fit}}=CE_\gamma ^n.$$ (34) This function can be used as a parameterization of $$F(E_\gamma )=E_\gamma ^{2\lambda +1}\sigma (E_\gamma ),$$ (35) where $`\sigma (E_\gamma )`$ is the $`\gamma `$ strength function and $`\lambda `$ is the multipolarity of the $`\gamma `$ transition. The fit to the data was performed between the arrows; the fit parameter $`n`$ is given in the lower panel. Since other experimental data are very sparse, we did not scale $`F`$ in order to obtain absolute units. However, the extracted fit parameter $`n`$ is in good agreement with expectations from the tail of a GDR strength function at low $`\gamma `$ energies . In the lower panel a barely significant bump at $``$3.4 MeV is visible, which we interpret as the Pigmy resonance. ## 4 Conclusions In this work we have presented for the first time a reliable and convergent method to extract consistently and simultaneously the level density $`\varrho `$ and the $`\gamma `$ energy dependent function $`F`$ from primary $`\gamma `$ spectra. The new method, based on a least-squares fit, has been carefully tested on simulated $`\gamma `$ spectra as well as on experimental data. In order to normalize the data, we count known discrete levels in the vicinity of the ground state and use the level spacing known from neutron resonances at the neutron binding energy. Compared to the previous projection method , the least-squares fit method gives the following advantages: The iteration converges mathematically. 
The reproduction of the input level densities and $`\gamma `$ strength functions in simulations is much better (almost exact). No tuning of the initial trial function is necessary to obtain a reasonably scaled level density; instead, the newly derived transformation properties of the solution enable the user to normalize the extracted quantities with known data. The reduced $`\chi ^2`$ is estimated reasonably. The errors of the extracted quantities are estimated by statistical simulations. We have used the new method to reanalyze previously published data and for the analysis of more recent data. In particular, the ability to extract absolute values of the level density $`\varrho `$ enables us to perform several new applications . ## 5 Acknowledgments The authors wish to thank A. Bjerve for interesting discussions. Financial support from the Norwegian Research Council (NFR) is gratefully acknowledged. ## Appendix A Proof of Eq. (3) The functional form of Eq. (2) admits a manifold of solutions. If one solution of Eq. (2) is found, one can generally construct all possible solutions by the following transformation $`\stackrel{~}{\varrho }(E_i-E_\gamma )`$ $`=`$ $`\varrho (E_i-E_\gamma )g(E_i-E_\gamma ),`$ (36) $`\stackrel{~}{F}(E_\gamma )`$ $`=`$ $`F(E_\gamma )f(E_\gamma ).`$ The two functions $`g`$ and $`f`$ have to fulfill certain conditions, since the set of functions $`\stackrel{~}{\varrho }`$ and $`\stackrel{~}{F}`$ is supposed to form a solution of Eq. (2), i.e. $$\mathrm{\Gamma }_{\mathrm{th}}(E_i,E_\gamma )=\frac{F(E_\gamma )\varrho (E_i-E_\gamma )}{\sum _{E_\gamma ^{\prime }=E_\gamma ^{\mathrm{min}}}^{E_i}F(E_\gamma ^{\prime })\varrho (E_i-E_\gamma ^{\prime })}=\frac{\stackrel{~}{F}(E_\gamma )\stackrel{~}{\varrho }(E_i-E_\gamma )}{\sum _{E_\gamma ^{\prime }=E_\gamma ^{\mathrm{min}}}^{E_i}\stackrel{~}{F}(E_\gamma ^{\prime })\stackrel{~}{\varrho }(E_i-E_\gamma ^{\prime })}.$$ (37) Inserting Eq. 
(36) one can easily deduce $$f(E_\gamma )g(E_i-E_\gamma )\sum _{E_\gamma ^{\prime }=E_\gamma ^{\mathrm{min}}}^{E_i}F(E_\gamma ^{\prime })\varrho (E_i-E_\gamma ^{\prime })=\sum _{E_\gamma ^{\prime }=E_\gamma ^{\mathrm{min}}}^{E_i}f(E_\gamma ^{\prime })g(E_i-E_\gamma ^{\prime })F(E_\gamma ^{\prime })\varrho (E_i-E_\gamma ^{\prime }).$$ (38) Since the right side is independent of $`E_\gamma `$, the left side must also be independent of $`E_\gamma `$; thus the product of $`f`$ and $`g`$ must be a function of $`E_i`$ only, yielding $$f(E_\gamma )g(E_i-E_\gamma )=h(E_i).$$ (39) This condition must of course hold for the case $`E_i=E_\gamma `$. Using the shorthand notation $`g(0)=A`$, one obtains $$Af(E_\gamma )=h(E_\gamma ).$$ (40) Inserting this result in Eq. (39), one gets $$f(E_\gamma )g(E_i-E_\gamma )=Af(E_i).$$ (41) Analogously, the condition must hold for the case $`E_\gamma =0`$, and with $`f(0)=B`$, one obtains $$Bg(E_i)=Af(E_i).$$ (42) Inserting this result in Eq. (41), one finally gets $$g(E_\gamma )g(E_i-E_\gamma )=Ag(E_i).$$ (43) We will now show that the only solution of Eq. (43) is an exponential function. This proof will involve the limit of Eq. (43) for small $`E_\gamma `$. However, since $`g`$ is a function of only one variable and the variable $`E_i`$ is unrestricted in the proof, the result will be valid for all arguments of $`g`$. Expanding $`g`$ in a Taylor series up to first order in $`E_\gamma `$, one obtains $$[A+g^{\prime }(0)E_\gamma ][g(E_i)-g^{\prime }(E_i)E_\gamma ]=Ag(E_i).$$ (44) Neglecting second order terms in $`E_\gamma `$ and dividing by $`E_\gamma `$, one gets $$g^{\prime }(0)g(E_i)=Ag^{\prime }(E_i).$$ (45) Defining $`\alpha =g^{\prime }(0)/A`$, this differential equation is solved by $$g(E_i)=Ae^{\alpha E_i}.$$ (46) Using Eq. (42), we can easily deduce $`f`$ to be $$f(E_i)=Be^{\alpha E_i}.$$ (47) Thus, we have proven that the transformation given by Eq. (3) is the most general way to construct all solutions of Eq. (2) from one arbitrary solution.
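The invariance proved above is easy to confirm numerically: multiplying a trial $`\varrho `$ and $`F`$ by exponentials with a common slope $`\alpha `$ leaves every normalized primary spectrum unchanged. A sketch with arbitrary toy functions (none of the shapes or parameter values below are taken from the data):

```python
import numpy as np

# Numerical check of the invariance of the normalized primary gamma spectrum,
# Eq. (2)/(37), under the transformation of Eq. (3):
#   rho -> A * exp(alpha * (E_i - E_gamma)) * rho,  F -> B * exp(alpha * E_gamma) * F.
E_bins = np.arange(1.0, 8.1, 0.5)        # gamma-energy grid (MeV), E_gamma^min = 1 MeV
rho = lambda U: np.exp(U / 0.7)          # toy level density of E_i - E_gamma
F   = lambda Eg: Eg**4.2                 # toy gamma-energy dependent factor

def gamma_th(rho_f, F_f, E_i, Eg):
    Eg = Eg[Eg <= E_i]
    w = F_f(Eg) * rho_f(E_i - Eg)
    return w / w.sum()                   # normalized primary spectrum

alpha, A, B = 1.3, 2.0, 0.5              # arbitrary transformation parameters
rho_t = lambda U: A * np.exp(alpha * U) * rho(U)
F_t   = lambda Eg: B * np.exp(alpha * Eg) * F(Eg)

for E_i in (4.0, 6.0, 8.0):
    assert np.allclose(gamma_th(rho, F, E_i, E_bins),
                       gamma_th(rho_t, F_t, E_i, E_bins))
```

The transformed weights acquire only an overall factor $`ABe^{\alpha E_i}`$ for each excitation bin, which cancels in the normalization, exactly as in Eq. (38).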
hep-ph/9910391
# The first fermi in a high energy nuclear collision ## Acknowledgments I would like to thank Dr. A. Dumitru for a useful discussion. This research was supported by DOE Nuclear Theory at BNL.
hep-ph/9910485
# Polarized parton distributions in perturbative QCD ## 1 Experimental information on deeply inelastic scattering (DIS) of polarized leptons off polarized nucleon targets has greatly improved since the first E80/E130 experiments at SLAC (see ref. for a complete bibliography). While at an early stage the attention was mainly focused on the test of the Ellis–Jaffe sum rule, more recently the interest has concentrated on the study of the general features of polarized nucleons in the deep-inelastic region in the context of the QCD-improved parton model. The analysis of polarized DIS data can at present be performed using perturbative QCD at next-to-leading order accuracy, thanks to the recent computation of the order $`\alpha _S^2`$ Altarelli–Parisi splitting functions in the polarized case. This analysis has been performed by many different authors with consistent results. I will present here the results obtained in ref. . The strategy is the same as the one adopted in the case of unpolarized DIS: the polarized parton distributions $`\mathrm{\Delta }q(x,Q^2),\mathrm{\Delta }g(x,Q^2)`$ at an initial scale $`Q_0`$ are assumed to have an arbitrarily chosen $`x`$ dependence, specified by a set of unknown parameters; with the help of the QCD-improved parton model formulas and of Altarelli–Parisi evolution, one computes the structure function $`g_1(x,Q^2)`$ at each data point in the $`(x,Q^2)`$ plane, and fits the unknown parameters, which are in turn related to interesting physical quantities. A first important point is the test of the Bjorken sum rule . The combination $$\mathrm{\Gamma }_{Bj}\equiv \int _0^1𝑑x\left[g_1^p(x,Q^2)-g_1^n(x,Q^2)\right]$$ (1) can be shown to be proportional to the axial charge $$g_A=\int _0^1𝑑x\left[\mathrm{\Delta }u(x,Q^2)-\mathrm{\Delta }d(x,Q^2)\right],$$ (2) which is $`Q^2`$-independent because of current conservation, times a Wilson coefficient $`C_{NS}(\alpha _S)`$, which is perturbatively computable, and known to order $`\alpha _S^3`$. 
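As a rough numerical illustration, the Bjorken sum can be evaluated as $`\mathrm{\Gamma }_{Bj}=(g_A/6)\,C_{NS}(\alpha _S)`$. In the sketch below, the $`O(\alpha _S^3)`$ coefficients are the commonly quoted $`n_f=3`$ values, and the value of $`\alpha _S`$ is an assumed input at a scale of a few GeV<sup>2</sup>, not a result of the fit described here:

```python
import math

# Perturbative evaluation of the Bjorken sum, Gamma_Bj = (g_A / 6) * C_NS(alpha_S).
g_A = 1.257                      # axial charge measured in neutron beta decay
alpha_S = 0.28                   # assumed alpha_S(Q^2) at a few GeV^2 (illustrative)
a = alpha_S / math.pi

# Commonly quoted three-loop non-singlet Wilson coefficient for n_f = 3
C_NS = 1.0 - a - 3.5833 * a**2 - 20.2153 * a**3
Gamma_Bj = g_A / 6.0 * C_NS      # roughly 0.18 for these inputs
```

The perturbative correction factor is of order 10–15% at such scales, which is why the extraction of $`g_A`$ from the data is sensitive to higher-order and power-suppressed terms, as discussed below.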
This is an important and accurate theoretical prediction, since corrections to it may only come from isospin violation ($`\sim 1`$%), from terms of order $`\alpha _S^4`$ or higher in the Wilson coefficient, or from non-perturbative contributions, suppressed by powers of $`\mathrm{\Lambda }_{QCD}^2/Q^2`$. A direct test of the Bjorken sum rule has become possible, since $`g_1`$ data with deuteron and neutron targets are available. This is done by using the non-singlet axial charge $`g_A`$ as one of the parameters of the fitting procedure. The result is $$g_A=1.18\pm 0.05(\mathrm{exp})\pm 0.07(\mathrm{th})=1.18\pm 0.09,$$ (3) to be compared with the value $`g_A=1.257\pm 0.003`$ measured in $`\beta `$ decay (we will discuss the uncertainties in eq. (3) in the next section). It can be concluded that polarized DIS data are consistent with the Bjorken sum rule at the level of one standard deviation, with an uncertainty of less than 10%. A second interesting question is the singlet contribution to the first moment of $`g_1`$: $$\left[\int _0^1𝑑x\,g_1(x,Q^2)\right]_{singlet}=C_S^{(1)}(\alpha _S)a_0(Q^2),$$ (4) where $`C_S^{(1)}(\alpha _S)`$ is the first moment of the singlet coefficient function, and $`a_0(Q^2)`$ the singlet axial charge; $`a_0`$ is not scale independent because of the axial current anomaly. In the QCD-improved parton model, one can choose the factorization scheme so that the first moment of the singlet combination of polarized quark densities, $`\mathrm{\Delta }\mathrm{\Sigma }(1)`$, is scale independent, and can therefore be interpreted as the total helicity carried by quarks. In this class of schemes one has $$a_0(Q^2)=\mathrm{\Delta }\mathrm{\Sigma }(1)-n_f\frac{\alpha _s(Q^2)}{2\pi }\mathrm{\Delta }g(1,Q^2),$$ (5) where $`\mathrm{\Delta }g(1,Q^2)`$ is the first moment of the polarized gluon density. The values of $`a_0`$, $`\mathrm{\Delta }\mathrm{\Sigma }(1)`$ and $`\mathrm{\Delta }g(1,Q^2)`$ can be extracted from the fitting procedure outlined above. 
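The anomaly relation of Eq. (5) can be illustrated with round numbers of the order of the fit results quoted below; $`\alpha _s`$ at the low scale is an assumed value, not an extracted one:

```python
import math

# Rough illustration of the anomaly relation, Eq. (5):
#   a_0 = Delta_Sigma(1) - n_f * alpha_s/(2*pi) * Delta_g(1, Q^2).
# All inputs are assumed round numbers for illustration only.
n_f = 3
Delta_Sigma = 0.46               # assumed total quark helicity
Delta_g = 1.6                    # assumed first moment of the polarized gluon density
alpha_s = 0.45                   # assumed alpha_s at ~1 GeV^2

a_0 = Delta_Sigma - n_f * alpha_s * Delta_g / (2.0 * math.pi)
# a_0 comes out close to 0.1: a large Delta_Sigma is compatible with a small
# singlet axial charge if the gluon polarization is positive and sizable.
```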
We find $`\mathrm{\Delta }\mathrm{\Sigma }(1)`$ $`=`$ $`0.46\pm 0.04(\mathrm{exp})\pm 0.08(\mathrm{th})=0.46\pm 0.09,`$ $`\mathrm{\Delta }g(1,1\mathrm{GeV}^2)`$ $`=`$ $`1.6\pm 0.4(\mathrm{exp})\pm 0.8(\mathrm{th})=1.6\pm 0.9,`$ (6) $`a_0(\mathrm{\infty })`$ $`=`$ $`0.10\pm 0.05(\mathrm{exp})^{+0.17}_{-0.10}(\mathrm{th})=0.10^{+0.17}_{-0.11}.`$ The results in eqs. (6) show that large values of $`\mathrm{\Delta }\mathrm{\Sigma }(1)`$ are compatible with small values of $`a_0`$, provided $`\mathrm{\Delta }g(1,Q^2)`$ is positive and large enough, as first suggested in refs. . Finally, one can attempt to use the value of $`\alpha _S`$ as one of the parameters in the fit, as customary in unpolarized data analyses. It is interesting to note that the value obtained with polarized DIS data, namely $$\alpha _S(m_Z)=0.120^{+0.004}_{-0.005}(\mathrm{exp})^{+0.009}_{-0.006}(\mathrm{th})=0.120^{+0.010}_{-0.008},$$ (7) is very close to other determinations, and that the uncertainty is reasonably small. ## 2 We come now to a discussion of the theoretical uncertainties attached to the observables mentioned above, summarized in table 1. This analysis was presented in ref. , where the interested reader can find more details. The experimental error is taken into account by the least-squares fitting procedure. We have added in quadrature systematic and statistical errors on each data point; this procedure probably results in an overestimate of the effective uncertainty on the fit parameters, since it does not account for correlations among systematics. A source of theoretical uncertainty which is often neglected is the arbitrariness in the choice of the functional form in $`x`$ for the parton densities at the initial scale. We have considered a wide range of functional forms (see ref. for details), and we have found that the choice of the initial-scale parametrization considerably affects the final results. 
In fact, different parametrizations lead to different estimates of the contribution to the first moment of $`g_1`$ from the small-$`x`$ region, where the experimental information is very poor. The corresponding spread in the determination of physical quantities must be included in the total uncertainty. This is illustrated in fig. 1, where the different curves refer to different parametrizations of the initial-scale parton densities. The curves are quite close to each other in the measured region, as expected, since they all correspond to fits of comparable quality, while they differ considerably below $`x\sim 0.01`$. Perhaps part of this uncertainty could be reduced using positivity constraints . Our analysis includes data points at $`Q^2`$ down to 1 GeV<sup>2</sup>, in order to have reasonable information at small values of $`x`$. At such low scales, one should worry about the uncertainty originating from higher orders in the perturbative expansion. These can be estimated by varying the renormalization and factorization scales $`\mu _R,\mu _F`$ independently around $`Q^2`$. Not surprisingly, this turns out to be the most important source of theoretical uncertainty. It should be pointed out that this “theoretical” uncertainty could eventually be removed if more data at small $`x`$ and higher $`Q^2`$ were available; in fact, one could then exclude from the analysis data points below, say, $`Q^2=4`$–5 GeV<sup>2</sup>, as in most unpolarized DIS analyses, thus avoiding the region where the QCD perturbative expansion is less reliable. Non-perturbative contributions are also potentially large at these low values of $`Q^2`$, since they have the form of powers of $`\mathrm{\Lambda }_{QCD}^2/Q^2`$. Unfortunately, they are very difficult to estimate. One possible strategy is that of comparing results obtained by fitting all data above $`Q^2=1`$ GeV<sup>2</sup> with those obtained by excluding data points below $`2`$ GeV<sup>2</sup>. 
This procedure indicates that the contribution of power-suppressed terms is not very large compared to other sources of uncertainty. A similar conclusion is obtained by fitting the Bjorken sum to its perturbative expression, supplemented with a twist-4 term $`a/Q^2`$, with the parameter $`a`$ taken from renormalon and sum rule estimates. Other minor sources of uncertainty, such as violations of the $`SU(3)`$ flavour symmetry or the position of heavy quark thresholds in $`Q^2`$ evolution, are also listed in table 1. I wish to thank G. Altarelli, R. Ball and S. Forte for the fruitful collaboration on the subject of this talk.
cond-mat/9910163
# Optimal Fluctuations and Tail States of non-Hermitian Operators ## Abstract We develop a general variational approach to study the statistical properties of the tail states of a wide class of non-Hermitian operators. The utility of the method, which is a refinement of the instanton approach introduced by Zittartz and Langer, is illustrated in detail by reference to the problem of a quantum particle propagating in an imaginary scalar potential. Over recent years considerable interest has been shown in the spectral properties of random Fokker-Planck operators, and their application to the dynamics of various classical systems . Of these, perhaps the best known example is the “Passive Scalar” problem, which concerns the diffusion of a classical particle subjected to a random velocity field. Here, in contrast to quantum mechanical evolution, the random classical dynamics is typically specified by a linear non-Hermitian operator. Non-Hermitian operators also appear in a number of problems in statistical physics. For example, the statistical mechanics of a repulsive polymer chain can be described in terms of the classical diffusion of a particle subject to a random imaginary scalar potential . Similarly, the statistical mechanics of flux lines in a type II superconductor pinned by a background of impurities can be described as the quantum evolution of a particle in a disordered environment subject to an imaginary vector potential . While these connections have been known for a long time, the ramifications of non-Hermiticity on the nature of the dynamics are only now being fully explored. Beginning with early work on random matrix ensembles , a number of attempts have been made to analyze spectral properties of non-Hermitian operators, and to apply the results to the description of classical systems . 
For example, it has been found that the generic localization properties of non-Hermitian systems are drastically different from those of their Hermitian counterparts, a fact that can be attributed to an implicit chiral symmetry of non-Hermitian operators . Previous studies of non-Hermitian operators have been largely based on perturbative schemes, such as the self-consistent Born approximation in the diagrammatic analysis (e.g. Ref. ), or the mean field approximation in the field-theoretic approach (e.g. Ref. ). However, there are indications that an important role can be played by those parts of the spectrum which are populated by exponentially rare, localized states. These “Lifshitz tail” states, first introduced in the context of semiconductor physics, cannot be treated perturbatively. Since the original works of Lifshitz , a number of sophisticated mathematical methods to deal with tail states have appeared , including the instanton technique in statistical field theory . The aim of this letter is to propose a new non-perturbative scheme to investigate properties of the tail states of non-Hermitian operators. Although our approach is quite general, for clarity we choose to explain the main features of the method by applying it to possibly the simplest model system. Specifically, we study the Hamiltonian of a quantum particle subject to a random imaginary scalar potential $$\widehat{H}=-\mathrm{\Delta }+iV(𝐫),$$ (1) where the potential $`V(𝐫)`$ is drawn from a Gaussian $`\delta `$-correlated impurity distribution with zero average and correlator given by $`\langle V(𝐫)V(𝐫^{\prime })\rangle =\gamma \delta (𝐫-𝐫^{\prime })`$. Previous studies have shown that, when averaged over realizations of $`V(𝐫)`$, the Feynman propagator of the Hamiltonian above (1) can be identified with the partition function of a self-repelling polymer chain with a contact interaction . 
Moreover, this model can be used to describe NMR in inhomogeneous materials, where the interplay of diffusion and local variations in the spin precession rate results in the dynamics of magnetization being specified by the operator (1). Indeed, in the case of the latter, the tails of the operator (1) can be shown to provide a significant contribution to the relaxation of the NMR signal. The Hamiltonian (1) is non-Hermitian, and its eigenvalues $`ϵ_k`$ occupy some area in the complex plane. In the self-consistent Born approximation, the density of complex eigenvalues, defined as $`\rho =\langle \sum _k\delta (x-x_k)\delta (y-y_k)\rangle `$, is given by $$\rho (x,y)=\{\begin{array}{cc}(4\pi \gamma )^{-1},\hfill & |y|<\mathrm{\Delta }(x)\text{,}\hfill \\ 0,\hfill & |y|>\mathrm{\Delta }(x)\text{,}\hfill \end{array}$$ (2) where $`x`$ and $`y`$ are the real and imaginary parts of the complex energy $`ϵ=x+iy`$, $`\mathrm{\Delta }(x)=2\pi \gamma \nu (x)`$, and $`\nu (x)`$ is the density of states of the clean ($`\gamma =0`$) system. The validity of this “mean field” result is restricted to values of $`x`$ such that $`\mathrm{\Delta }(x)\ll x`$, which corresponds to large $`x`$ in dimensions lower than four. According to Eq. (2), there are no states outside the region $`|y|<\mathrm{\Delta }(x)`$. This can be understood if one recalls that the mean field approximation is essentially a way to determine the self-consistent contribution of typical fluctuations of the random potential. On the other hand, numerical simulations suggest the existence of rare states with $`|y|>\mathrm{\Delta }(x)`$, which are generated by atypically strong fluctuations of $`V(𝐫)`$. Such fluctuations occur with an exponentially small probability $`\mathrm{exp}[-W/\gamma ]`$, where $$W=\frac{1}{2}\int V^2(𝐫)𝑑𝐫.$$ (3) For any given configuration $`V(𝐫)`$ of the random potential, we will describe $`W`$ as the energy associated with that configuration. 
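The filled strip described by Eq. (2) can be visualized by diagonalizing a lattice discretization of the Hamiltonian (1). The sketch below uses illustrative parameter values; the discretization itself is not part of the analysis in the text:

```python
import numpy as np

# 1D lattice discretization of H = -Delta + iV with Gaussian,
# delta-correlated V; all parameter values here are illustrative choices.
N, dx, gamma = 200, 1.0, 0.05
rng = np.random.default_rng(1)
V = rng.normal(0.0, np.sqrt(gamma / dx), N)   # <V(r)V(r')> = gamma * delta(r - r')

lap = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / dx**2   # -Delta
H = lap + 1j * np.diag(V)
eps = np.linalg.eigvals(H)                    # complex eigenvalues x + iy

# Consistency checks: the trace equals the sum of eigenvalues, and since the
# eigenvalues lie in the field of values of H, |Im eps| is bounded by max|V|.
assert np.isclose(eps.sum(), np.trace(H))
assert np.abs(eps.imag).max() <= np.abs(V).max() + 1e-9

# Complex conjugation of H amounts to flipping the sign of V, so the two
# spectra are conjugate to each other.
eps_flip = np.linalg.eigvals(lap - 1j * np.diag(V))
dist = np.abs(eps.conj()[:, None] - eps_flip[None, :]).min(axis=1)
assert dist.max() < 1e-6
```

Plotting `eps.real` against `eps.imag` for such a realization shows most eigenvalues confined to a narrow strip around the real axis, with rare outliers of the kind discussed next.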
This should not be confused with the “energy” $`ϵ`$ of a quantum state, which by definition is the eigenvalue of $`\widehat{H}`$ corresponding to that state. Since strong fluctuations of the disorder potential are suppressed by an exponential factor, states with $`|y|>\mathrm{\Delta }(x)`$ are dominated by those configurations of $`V(𝐫)`$ that have the highest statistical weight $`\mathrm{exp}[-W/\gamma ]`$ or, equivalently, the lowest energy $`W`$. Thus, we come to the idea of the optimal fluctuation method, whose rigorous formulation can be given as follows: Amongst all the configurations of $`V(𝐫)`$ such that $`ϵ`$ is an eigenvalue of the Hamiltonian (1), we choose the one that minimizes the energy functional $`W[V(𝐫)]`$. The density of states is then given with exponential accuracy by the statistical weight of the minimal configuration, $`\rho (ϵ)\sim \mathrm{exp}[-W_{min}(ϵ)/\gamma ]`$. This approach is similar to that employed in the seminal work by Zittartz and Langer in the treatment of the Hermitian model. There, a saddle-point technique was applied to estimate the functional integral over $`V(𝐫)`$. Although based on the same physical ideas, this technique leads to inconsistent results when formally applied to a non-Hermitian problem. Instead, we introduce below a more general variational formulation tailored to the consideration of non-Hermitian operators. The arguments outlined above are not restricted to the particular form of the Hamiltonian (1), and can be applied whenever one has to deal with tail states (i.e. localized states created by strong fluctuations of the disorder potential). Formally, the limitations of this scheme are set by the inequality $`W_{min}(x,y)\gg \gamma `$, which defines a certain area in the complex plane of eigenvalues. Below we will show that the precise form of the optimal fluctuation potential can, in general, be determined by solving a system of coupled non-linear equations. 
For the particular case of the Hamiltonian (1), we will obtain an explicit form of the solution in the limit $`|y|\ll x`$. With this preparation, we now turn to the derivation of the variational approach: Our aim is to minimize the functional $`W[V(𝐫)]`$ subject to the constraint $`\text{det}(\widehat{H}-ϵ)=0`$. Account for the latter can be made by introducing two Lagrange multipliers, $`\mu _1`$ and $`\mu _2`$, which in turn leads to the functional $$F[V(𝐫)]=W-\mu _1\text{Re}\text{det}(\widehat{H}-ϵ)-\mu _2\text{Im}\text{det}(\widehat{H}-ϵ).$$ (4) If $`ϵ`$ is an eigenvalue of $`\widehat{H}`$ with left and right eigenfunctions $`\psi ^L(𝐫)`$ and $`\psi ^R(𝐫)`$, a spectral decomposition of the Hamiltonian (1) obtains the identity $$\frac{\delta }{\delta V(𝐫)}\text{det}(\widehat{H}-ϵ)=i\psi ^L(𝐫)\psi ^R(𝐫)\underset{k,ϵ_k\ne ϵ}{\prod }(ϵ_k-ϵ),$$ (5) where $`ϵ_k`$ denote the remaining eigenvalues of $`\widehat{H}`$ (i.e. those different from $`ϵ`$). Equating the functional derivative $`\delta F/\delta V(𝐫)`$ to zero, and applying Eq. (5) we obtain $$V(𝐫)=\lambda _1\text{Re}\psi ^L(𝐫)\psi ^R(𝐫)+\lambda _2\text{Im}\psi ^L(𝐫)\psi ^R(𝐫),$$ (6) where $`\lambda _1`$ and $`\lambda _2`$ denote redefined Lagrange multipliers. When combined with the eigenvalue equations $$\widehat{H}\psi ^R(𝐫)=ϵ\psi ^R(𝐫),\widehat{H}^{\dagger }\psi ^L(𝐫)=ϵ^{\ast }\psi ^L(𝐫),$$ (7) and with the binormality condition $$\int 𝑑𝐫\,\psi ^L(𝐫)\psi ^R(𝐫)=1,$$ (8) Eq. (6) defines a closed system of non-linear equations for $`\psi ^R`$, $`\psi ^L`$, $`\lambda _1`$, $`\lambda _2`$, which has to be solved in order to extract the optimal configuration $`V(𝐫)`$ of the disorder potential. Note that Eq. (6) depends explicitly on the nature of the impurity distribution and on the structure of the operator at hand, whereas Eqs. (7) and (8) are universal. To treat other random non-Hermitian operators, one simply has to modify Eq. (6) accordingly. 
For example, in the case of the Passive Scalar problem, where the disorder has the form of a random velocity field, the analogue of Eq. (6) relates the optimal configuration of the random velocity field to spatial derivatives of the eigenfunctions $`\psi ^R`$, $`\psi ^L`$ . For the Hamiltonian (1), further progress can be made by exploiting symmetry properties. From the relation $`\widehat{H}^T=\widehat{H}`$ it follows that one can constrain the complex wavefunctions to obey the relation $`\psi ^R(𝐫)=\psi ^L(𝐫)=\psi (𝐫)`$. In doing so, leaving aside the lengthy but straightforward analysis , one can show that the following simplifications obtain: Firstly, Eq. (8) can be effectively disregarded; secondly, by performing a complex rotation of $`\psi (𝐫)`$, one finds that arbitrary non-zero values can be assigned to the Lagrange multipliers $`\lambda _1`$ and $`\lambda _2`$. For convenience, we will set $`\lambda _1=1`$, $`\lambda _2=0`$. With these simplifications, focusing initially on the one-dimensional problem, one finds $$V=\text{Re}\psi ^2,$$ (9) where the wavefunction $`\psi `$ is obtained self-consistently by solving the Schrödinger equation in the potential $`V`$, $$-\psi ^{\prime \prime }+iV\psi =ϵ\psi .$$ (10) Eqs. (9) and (10), which represent the non-Hermitian analogue of the non-linear Schrödinger equation of Zittartz and Langer, are the main result of the paper. To complete the program, one should find the localized solution $`\psi `$ (which is assumed to be unique) of Eqs. (9) and (10) for each value of $`ϵ`$, and calculate the energy $`W(ϵ)`$ of the corresponding configuration $`V=\text{Re}\psi ^2`$ of the potential. Note that, in the derivation of Eqs. (9) and (10), no approximation has been made. However, the applicability of the optimal fluctuation method itself is restricted by the condition $`W_{min}(ϵ)\gg \gamma `$. 
As mentioned above, an analogous calculation can be carried out for other model systems including the Passive Scalar operator $$\widehat{\mathcal{L}}_{\mathrm{ps}}=-\mathrm{\Delta }+𝐯(𝐫)\cdot \nabla .$$ (11) Indeed, in the case of a Gaussian distributed incompressible ($`\nabla \cdot 𝐯=0`$) flow, the tail states of $`\widehat{\mathcal{L}}_{\mathrm{ps}}`$ are found to be governed by equations which have the same form as Eqs. (9) and (10). In the analysis of Eqs. (9) and (10), it is convenient to interpret the one-dimensional spatial coordinate $`r`$ as a time $`t`$, and the complex wavefunction $`\psi (t)`$ as the position of a fictitious classical particle in the two-dimensional plane. With this interpretation, one can recast Eqs. (9) and (10) in the Lagrangian form, $`(\partial \mathcal{L}/\partial \psi _{1,2}^{\prime })^{\prime }=\partial \mathcal{L}/\partial \psi _{1,2}`$, where $$\mathcal{L}=\psi _1^{\prime }\psi _2^{\prime }-x\psi _1\psi _2-\frac{y}{2}(\psi _1^2-\psi _2^2)+\frac{1}{4}(\psi _1^2-\psi _2^2)^2,$$ (12) and $`\psi =\psi _1+i\psi _2`$ has been separated into real and imaginary parts. The Lagrangian (12) has at least one invariant of the motion — the classical energy. Although we cannot rule out the possibility of this system being integrable, we have been unable to find a second invariant of the motion. Therefore, integrability remains an open question, which deserves a separate investigation. As follows from Eqs. (9) and (10), the wavefunction of a tail state is defined by the components of the energy, $`x`$ and $`y`$, which determine the position of the state in the complex plane of eigenvalues. In order to exploit the optimal fluctuation method to its fullest potential, one should analyze these equations over the entire complex plane. However, at present, we do not have a method of finding the exact analytical solution of Eqs. (9) and (10) for arbitrary values of $`x`$ and $`y`$. Therefore, we will focus on a specific domain, in which an approximate solution can be found. 
As we will see later, the density of Lifshitz tails decays exponentially as a function of the distance from the boundary of the “mean field spectrum”, $`|y|=\mathrm{\Delta }(x)`$. Therefore, the most physically relevant states lie close to the boundary. For such states, the condition $`|y|\ll x`$ is satisfied (recall that $`\mathrm{\Delta }(x)\ll x`$), allowing the following separation of scales: Since the wavefunction of a tail state oscillates with a high frequency $`\sqrt{x}`$, while the amplitude of these oscillations varies at a much lower rate $`|y|/\sqrt{x}`$ (the ratio of the two time-scales being $`|y|/x\ll 1`$), the solution of Eqs. (9) and (10) can be parameterized in the form of a wavepacket, $$\psi (t)=\sqrt{2|y|}\left[\phi _+\left(\frac{|y|t}{2\sqrt{x}}\right)e^{i\sqrt{x}t}+\phi _{-}\left(\frac{|y|t}{2\sqrt{x}}\right)e^{-i\sqrt{x}t}\right].$$ (13) Substituting the parameterization (13) into Eqs. (9) and (10), one obtains an equation for the envelope. Following a standard procedure , we neglect the second derivatives of $`\phi _\pm `$, and perform the averaging over the intermediate scales. As a result, one obtains a system of parameter-free first-order differential equations for $`\phi _\pm (\tau )`$: $$\begin{array}{cccc}\hfill \phi _+^{\prime }& =& & -\phi _++5\phi _+^2\phi _{-}+\phi _{-}^3,\hfill \\ \hfill \phi _{-}^{\prime }& =& & \phi _{-}-5\phi _+\phi _{-}^2-\phi _+^3.\hfill \end{array}$$ (14) In general, the solution of equations of this kind, which can easily be computed numerically, yields a universal dimensionless constant in the final result for the density of states. Surprisingly, for the problem at hand, Eqs. 
(14) can be represented in the form of Hamilton’s equations, wherein $`\phi _\pm `$ play the role of the canonical variables, $`\phi _\pm ^{\prime }=\pm \partial \mathcal{H}/\partial \phi _{\mp }`$, with the Hamiltonian $$\mathcal{H}=-\phi _+\phi _{-}+\frac{5}{2}\phi _+^2\phi _{-}^2+\frac{1}{4}(\phi _+^4+\phi _{-}^4).$$ (15) Of the infinite set of solutions, defined by values of $`\mathcal{H}`$, we are interested in the localized solution, which corresponds to $`\mathcal{H}=0`$. Integrating Eqs. (14) along the line $`\mathcal{H}=0`$, we obtain $$\phi _\pm (\tau )=\frac{2e^{\pm \tau }}{\sqrt{e^{4\tau }+e^{-4\tau }+10}}.$$ (16) Now, with the help of Eq. (13), we can restore the wavefunction $`\psi (t)`$. It has a rather unusual shape, shown in Fig. 2(a). The combination of an oscillatory wavefunction modulated by a localized envelope is characteristic of the tail states of non-Hermitian operators. The optimal configuration of $`V(t)`$, which is expressed through $`\psi (t)`$ according to Eq. (9), also has the form of localized oscillations (see Fig. 2(b)), and its energy can be straightforwardly determined: $$W_{min}(ϵ)=\alpha |y|\sqrt{x},\alpha =\frac{4}{\sqrt{6}}\mathrm{log}(5+2\sqrt{6})\approx 3.744.$$ (17) Recalling that, in one dimension, $`\nu (x)=1/(2\pi \sqrt{x})`$, we arrive at the expression for the density of tail states, $$\rho (ϵ)\sim \mathrm{exp}\left[-\alpha \frac{|y|}{\mathrm{\Delta }(x)}\right],\mathrm{\Delta }(x)\ll |y|\ll x.$$ (18) We remark that, in higher dimensions, the spherical symmetry of the optimal potential reduces the problem to being effectively one-dimensional. 
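The envelope profile of Eq. (16) can be checked numerically against the first-order system and the zero-energy condition; the sketch below writes all signs out explicitly:

```python
import numpy as np

# Check that phi_pm(tau) = 2 exp(+-tau) / sqrt(exp(4 tau) + exp(-4 tau) + 10)
# solves the first-order system
#   phi_+' = -phi_+ + 5 phi_+^2 phi_- + phi_-^3,
#   phi_-' = +phi_- - 5 phi_+ phi_-^2 - phi_+^3,
# and lies on the zero-energy level set of
#   H = -phi_+ phi_- + (5/2) (phi_+ phi_-)^2 + (1/4)(phi_+^4 + phi_-^4).
tau = np.linspace(-2.0, 2.0, 2001)
D = np.sqrt(np.exp(4 * tau) + np.exp(-4 * tau) + 10.0)
pp = 2 * np.exp(tau) / D
pm = 2 * np.exp(-tau) / D

rhs_p = -pp + 5 * pp**2 * pm + pm**3
rhs_m = pm - 5 * pp * pm**2 - pp**3
dpp = np.gradient(pp, tau)          # second-order central differences
dpm = np.gradient(pm, tau)
assert np.max(np.abs(dpp[1:-1] - rhs_p[1:-1])) < 1e-4
assert np.max(np.abs(dpm[1:-1] - rhs_m[1:-1])) < 1e-4

# The conserved "classical energy" vanishes identically on this trajectory.
ham = -pp * pm + 2.5 * (pp * pm)**2 + 0.25 * (pp**4 + pm**4)
assert np.max(np.abs(ham)) < 1e-12
```

The profiles decay as $`e^{-|\tau |}`$ on both sides, consistent with a localized envelope modulating the fast oscillations of Eq. (13).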
In this case, one obtains the general result , $$\rho (ϵ)\sim \{\begin{array}{cc}\mathrm{exp}\left[-\alpha \left(\frac{x}{|y|}\right)^{d-1}\frac{|y|}{\mathrm{\Delta }(x)}\right],\hfill & \mathrm{\Delta }(x)\ll |y|\ll x\text{,}\hfill \\ \mathrm{exp}\left[-\beta \frac{|y|}{x}\frac{|y|}{\mathrm{\Delta }(x)}\right],\hfill & |y|\gg x\text{,}\hfill \end{array}$$ (19) where the values of the numerical factors $`\alpha `$ and $`\beta `$ depend on the dimensionality, and $`\mathrm{\Delta }(x)=2\pi \gamma \nu (x)\propto \gamma x^{(d-2)/2}`$. For completeness, one should add that in addition to the tail states depicted in Fig. 1 and studied here, the random imaginary scalar potential exhibits another class of exponentially rare states which inhabit the region of very small $`x\simeq 0`$ . The origin of the latter seems to be associated with large areas in real space which are almost free of disorder. Such states are inaccessible within the present framework. To summarize, in this paper we have developed a general variational approach to study spectral properties of non-Hermitian operators. Applied to the problem of a particle propagating in a random imaginary scalar potential, one finds that properties of the strongly localized (tail) states are governed by a system of coupled non-linear dynamical equations. These equations show that tail states associated with the non-Hermitian Hamiltonian exhibit oscillations on scales much shorter than the localization length. Employing a procedure of scale separation, both the density of tail states as well as the corresponding wavefunctions have been determined. We would like to thank V. Ruban, K. Samokhin, I. Smolyarenko and A. Moroz for helpful discussions.
hep-ph/9910464
RUB-TPII-13/99 hep-ph/9910464 Polarized antiquark flavor asymmetry in Drell–Yan pair production B. Dressler<sup>a,1</sup>, K. Goeke<sup>a,2</sup>, M.V. Polyakov<sup>a,b,3</sup>, P. Schweitzer<sup>a,4</sup>, M. Strikman<sup>c,5,∗</sup>, and C. Weiss<sup>a,6</sup> <sup>a</sup>Institut für Theoretische Physik II, Ruhr–Universität Bochum, D–44780 Bochum, Germany <sup>b</sup>Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg 188350, Russia <sup>c</sup>Pennsylvania State University, University Park, PA 16802, U.S.A. ## Abstract We investigate the role of the flavor asymmetry of the nucleon’s polarized antiquark distributions in Drell–Yan lepton pair production in polarized nucleon–nucleon collisions at HERA (fixed–target) and RHIC energies. It is shown that the large polarized antiquark flavor asymmetry predicted by model calculations in the large–$`N_c`$ limit (chiral quark–soliton model) has a dramatic effect on the double spin asymmetries in high mass lepton pair production, as well as on the single spin asymmetries in lepton pair production through $`W^\pm `$–bosons at $`M^2=M_W^2`$. <sup>1</sup> E-mail: birgitd@tp2.ruhr-uni-bochum.de <sup>2</sup> E-mail: goeke@tp2.ruhr-uni-bochum.de <sup>3</sup> E-mail: maximp@tp2.ruhr-uni-bochum.de <sup>4</sup> E-mail: peterw@tp2.ruhr-uni-bochum.de <sup>5</sup> E-mail: strikman@physics.psu.edu <sup>6</sup> E-mail: weiss@tp2.ruhr-uni-bochum.de Alexander–von–Humboldt–Forschungspreisträger Drell–Yan (DY) lepton pair production in $`pp`$ or $`pn`$ collisions offers one of the most direct ways to measure the antiquark distributions in the nucleon. In particular, such experiments have recently established a significant flavor asymmetry of the unpolarized antiquark distributions, $`\overline{u}(x)-\overline{d}(x)`$, see Ref. for a review. 
Since the amount of $`\overline{u}(x)-\overline{d}(x)`$ generated perturbatively is very small, this provides unambiguous evidence for an important role of nonperturbative effects in generating the sea distributions. Other evidence is the large suppression of the strange sea compared to the nonstrange one for $`Q^2`$ of the order of a few $`\mathrm{GeV}^2`$. It appears natural to invoke the chiral degrees of freedom for the explanation of these effects. Two competing mechanisms are currently being discussed. One is due to scattering off pions generated via virtual processes $`N\to N+\pi `$, $`N\to \mathrm{\Delta }+\pi `$, or $`q\to q+\pi `$ . With this mechanism one can in principle generate a significant value of $`\overline{u}(x)-\overline{d}(x)`$, although this requires one to consider virtual pion momenta up to $`1\mathrm{GeV}`$ and relies on fine-tuning of the parameters of the model; see Ref. for a discussion. Another mechanism emerges within the large–$`N_c`$ limit of QCD, where the nucleon can be described as a chiral soliton . This approach allows for a fully quantitative description of the antiquark distributions essentially without free parameters, and preserves all fundamental qualitative properties of the distribution functions, such as positivity, sum rules etc. It describes well the data for $`\overline{u}(x)-\overline{d}(x)`$ . It was pointed out in Ref. that a distinctive difference of the two mechanisms is the degree of polarization of the antiquark flavor asymmetry, $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$. In the pion cloud models polarization is absent . There have been some attempts to generate polarization by including spin–$`1`$ resonances in this picture , which, however, presents severe conceptual difficulties.<sup>1</sup><sup>1</sup>1Pions play a special role as the Goldstone bosons of spontaneously broken chiral symmetry. 
In contrast, there is nothing special about exchanges of spin–1 resonances compared to, say, tensor, $`b_1`$, $`h_1`$, $`\rho _3`$, $`a_4`$, etc. mesons. Moreover, Regge recurrences are likely to lead to strong cancellations between contributions from different resonances. Also, the quark and gluon degrees of freedom already partly account for the mesonic degrees of freedom, so one faces the problem of double counting. See Ref. for a critical discussion. In contrast to the pion cloud model, the large–$`N_c`$ approach predicts that $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ is much larger than the unpolarized $`\overline{u}(x)-\overline{d}(x)`$; in fact, it is parametrically enhanced by a factor of $`N_c`$. [The numerical results for the polarized and unpolarized antiquark flavor asymmetries obtained in this approach are shown in Fig.1 at a scale of $`\mu ^2=(5\mathrm{GeV})^2`$.] Thus, measurements of $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ would provide a decisive test of the different approaches to include the chiral degrees of freedom in the nucleon. We have recently demonstrated that the current data on hadron production in semi-inclusive deep–inelastic scattering (DIS) are not sensitive to the value of $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x)`$ . The purpose of this letter is to study whether DY pair and $`W^\pm `$ production in polarized $`pp`$ collisions, which will be possible at RHIC, allows one to distinguish between the two options. Specifically, we investigate the role of the large polarized antiquark flavor asymmetries obtained in the large–$`N_c`$ model calculation of Ref. on spin asymmetries in longitudinally polarized DY pair production. Predictions for the spin asymmetries in polarized DY pair production (see e.g. Ref.) have so far been made on the basis of present experimental information about the polarized parton distributions in the nucleon, which comes mostly from inclusive DIS . 
However, DIS probes directly only the sum of quark– and antiquark distributions, while the separation into quarks and antiquarks, as well as the gluon distribution, have to be determined indirectly through scaling violations. The flavor asymmetry of the polarized antiquark distribution is practically not constrained by the DIS data . On the other hand, the polarized antiquark flavor asymmetry contributes to DY spin asymmetries at leading order in QCD . A quantitative understanding of these effects is a prerequisite for any attempt to extract the polarized gluon distribution from NLO analyses of the data . The cross section for DY pair production is a function of the center–of–mass energy of the incoming hadrons, $`s=(p_1+p_2)^2`$, and the invariant mass of the produced lepton pair, $`M^2`$, which is equal to the virtuality of the exchanged gauge boson. At the partonic level this process is described by the annihilation of a quark and an antiquark originating from the two hadrons, carrying, respectively, longitudinal momenta $`x_1p_1`$ and $`x_2p_2`$, with $`x_1x_2=M^2/s`$. One can parametrize the momentum fractions as $`x_1=(M^2/s)^{1/2}e^y,x_2=(M^2/s)^{1/2}e^{-y}`$, where $`y`$ is the rapidity. In the case of DY pair production through a virtual photon one is interested in the double spin asymmetry of the cross section $`A_{LL}^\gamma `$ $`=`$ $`{\displaystyle \frac{\sigma _{++}^\gamma -\sigma _{+-}^\gamma }{\sigma _{++}^\gamma +\sigma _{+-}^\gamma }},`$ (1) where the subscripts $`+,-`$ denote the longitudinal polarization of nucleons $`1`$ and $`2`$. 
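The kinematics above can be made concrete with a few lines of Python. In the sketch below (an illustration added here, not part of the original analysis), the $`(\sqrt{s},M)`$ pairs correspond to the fixed–target and collider settings considered in this letter:

```python
import math

def dy_momentum_fractions(y, sqrt_s, M):
    """Parton momentum fractions x1, x2 for a Drell-Yan pair of
    invariant mass M produced at rapidity y, with c.m. energy sqrt_s."""
    tau = (M / sqrt_s) ** 2            # tau = M^2/s = x1 * x2
    x1 = math.sqrt(tau) * math.exp(+y)
    x2 = math.sqrt(tau) * math.exp(-y)
    return x1, x2

# Settings discussed in the text: HERA fixed target and RHIC at M = M_W
for sqrt_s, M in [(40.0, 5.0), (500.0, 80.3)]:
    y_max = math.log(sqrt_s / M)       # |y| limit following from x1 <= 1
    x1, x2 = dy_momentum_fractions(0.0, sqrt_s, M)
    print(f"sqrt(s) = {sqrt_s} GeV: |y| <= {y_max:.2f}, x1 = x2 = {x1:.3f} at y = 0")
```

At $`y=0`$ both fractions equal $`M/\sqrt{s}`$ (0.125 for the fixed–target setting and about 0.16 at RHIC with $`M=M_W`$), so central production probes the valence-to-sea transition region.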
In QCD in leading–log approximation this ratio is given by $`A_{LL}^\gamma (y;s,M^2)`$ $`=`$ $`{\displaystyle \frac{\sum _ae_a^2\mathrm{\Delta }q_a(x_1,M^2)\mathrm{\Delta }q_{\overline{a}}(x_2,M^2)}{\sum _ae_a^2q_a(x_1,M^2)q_{\overline{a}}(x_2,M^2)}},`$ (2) where the sum runs over all species of light quarks and antiquarks in the two nucleons, $`a=\{u,\overline{u},d,\overline{d},s,\overline{s}\}`$; we neglect the small contributions due to heavy flavors. The relevant scale here for the parton distribution functions is the virtuality of the photon, $`M^2`$. When the lepton pair is produced instead by exchange of a charged weak gauge boson, $`W^\pm `$, due to the parity–violating nature of the weak interaction the cross section already exhibits a single spin asymmetry, $`A_L^{W\pm }`$ $`=`$ $`{\displaystyle \frac{\sigma _+^{W\pm }-\sigma _{-}^{W\pm }}{\sigma _+^{W\pm }+\sigma _{-}^{W\pm }}},`$ (3) where now the subscripts $`+,-`$ denote the longitudinal polarization of nucleon $`1`$; the polarization of nucleon $`2`$ is averaged over. In QCD in leading–log approximation one has $`A_L^{W\pm }(y;s,M^2)`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Delta }u(x_1,M^2)\overline{d}(x_2,M^2)-\mathrm{\Delta }\overline{d}(x_1,M^2)u(x_2,M^2)}{u(x_1,M^2)\overline{d}(x_2,M^2)+\overline{d}(x_1,M^2)u(x_2,M^2)}},`$ (4) for $`W^{-}`$ one should exchange $`u\leftrightarrow d`$, $`\overline{u}\leftrightarrow \overline{d}`$ everywhere here. Eq.(4) includes only $`u`$– and $`d`$–quarks, even for values of $`M^2`$ of the order of the $`W`$–boson mass. Contributions from $`c`$–$`s`$ transitions are negligible because of the comparative smallness of the product of $`c`$ and $`s`$ distributions, while contributions of type $`u`$–$`s`$ and $`c`$–$`d`$ are small because of Cabibbo suppression; see Ref. for a more detailed discussion. Our aim is to study the effect of the large flavor asymmetry of the polarized antiquark distributions, obtained in the model calculations of Refs. 
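Both asymmetries are simple ratios once distributions are supplied, and Eqs. (2) and (4) can be coded directly. In the sketch below (an editorial illustration), the power-law "distributions" at the end are placeholders chosen only to exercise the formulas; they are not the parametrizations used in this letter:

```python
FLAVORS = ["u", "ubar", "d", "dbar", "s", "sbar"]
# squared electric charges e_a^2
CHARGE2 = {"u": 4/9, "ubar": 4/9, "d": 1/9, "dbar": 1/9, "s": 1/9, "sbar": 1/9}

def conj(a):
    """Antiflavor label: u <-> ubar, etc."""
    return a[:-3] if a.endswith("bar") else a + "bar"

def a_LL_gamma(x1, x2, q, dq):
    """Double spin asymmetry of Eq. (2) at leading order."""
    num = sum(CHARGE2[a] * dq[a](x1) * dq[conj(a)](x2) for a in FLAVORS)
    den = sum(CHARGE2[a] * q[a](x1) * q[conj(a)](x2) for a in FLAVORS)
    return num / den

def a_L_Wplus(x1, x2, q, dq):
    """Single spin asymmetry of Eq. (4) for W+ production."""
    num = dq["u"](x1) * q["dbar"](x2) - dq["dbar"](x1) * q["u"](x2)
    den = q["u"](x1) * q["dbar"](x2) + q["dbar"](x1) * q["u"](x2)
    return num / den

# Toy placeholder distributions (assumption, for illustration only):
q = {a: (lambda x: x ** -0.5 * (1 - x) ** 3) for a in FLAVORS}
dq = {a: (lambda x, s=(1.0 if a in ("u", "ubar") else -0.3):
          s * x ** 0.5 * (1 - x) ** 3) for a in FLAVORS}
print(a_LL_gamma(0.2, 0.2, q, dq), a_L_Wplus(0.2, 0.3, q, dq))
```

Since the toy polarized distributions never exceed the unpolarized ones in magnitude, both asymmetries stay within the physical range $`[-1,1]`$.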
based on the large–$`N_c`$ limit, on the spin asymmetries $`A_{LL}^\gamma `$ and $`A_L^{W\pm }`$, Eqs.(2) and (4). In order to make maximum use of the direct experimental information on the polarized parton distributions available from DIS we proceed as follows. The individual polarized light quark and antiquark distributions $`\mathrm{\Delta }u(x),\mathrm{\Delta }\overline{u}(x),\mathrm{\Delta }d(x),\mathrm{\Delta }\overline{d}(x),\mathrm{\Delta }s(x)`$, and $`\mathrm{\Delta }\overline{s}(x)`$, figuring in the numerators in Eqs.(2) and (4) can be expressed in terms of the six combinations $`\mathrm{\Delta }_u(x)`$ $`\equiv `$ $`\mathrm{\Delta }u(x)+\mathrm{\Delta }\overline{u}(x),\text{(analogously for }\mathrm{\Delta }_d,\mathrm{\Delta }_s\text{)},`$ (5) $`\mathrm{\Delta }_0(x)`$ $`\equiv `$ $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)+\mathrm{\Delta }\overline{s}(x),`$ (6) $`\mathrm{\Delta }_3(x)`$ $`\equiv `$ $`\mathrm{\Delta }\overline{u}(x)-\mathrm{\Delta }\overline{d}(x),`$ (7) $`\mathrm{\Delta }_8(x)`$ $`\equiv `$ $`\mathrm{\Delta }\overline{u}(x)+\mathrm{\Delta }\overline{d}(x)-2\mathrm{\Delta }\overline{s}(x).`$ (8) The combinations $`\mathrm{\Delta }_u(x),\mathrm{\Delta }_d(x)`$ and $`\mathrm{\Delta }_s(x)`$, Eq.(5), are measured directly in inclusive polarized DIS, so we evaluate them using the GRSV95 leading–order (LO) parametrization (“standard scenario”), which was obtained by fits to inclusive DIS data .<sup>2</sup><sup>2</sup>2Actually, in DIS with proton or nuclear targets one is able to measure directly only two flavor combinations of these three distributions; however, the third one can be inferred using $`SU(3)`$ symmetry arguments. 
We also take the flavor–singlet antiquark distribution, $`\mathrm{\Delta }_0(x)`$, Eq.(6), from the GRSV95 parametrization; this distribution is known only from the study of scaling violations in inclusive DIS and depends to some extent on the assumptions made about the polarized gluon distribution; however, the GRSV95 parametrization for $`\mathrm{\Delta }_0(x)`$ is in good agreement with the result of model calculations in the large–$`N_c`$ limit . For the polarized flavor asymmetries of the antiquark distribution, $`\mathrm{\Delta }_3(x)`$ and $`\mathrm{\Delta }_8(x)`$, Eqs.(7) and (8), which are not constrained by DIS data, we use the results of the model calculation in the large–$`N_c`$ limit of Refs., evolved in LO from the low normalization point of $`\mu ^2=(600\mathrm{MeV})^2`$ to the experimental scale, $`M^2`$. The result for $`\mathrm{\Delta }_3(x)`$ is shown in Fig.1 at a scale of $`(5\mathrm{GeV})^2`$. The other non-singlet combination, $`\mathrm{\Delta }_8(x)`$, is obtained from $`\mathrm{\Delta }_3(x)`$ at the low normalization point by the $`SU(3)`$ relation $`\mathrm{\Delta }_8(x)=[(3F-D)/(F+D)]\mathrm{\Delta }_3(x)`$, where we use $`F/D=5/9`$, see Ref. for details. Note that $`\mathrm{\Delta }_3(x)`$ and $`\mathrm{\Delta }_8(x)`$ do not mix with the other distributions under LO evolution. The “hybrid” polarized quark and antiquark distributions thus obtained, by construction, fit all the inclusive polarized DIS data in LO, while at the same time incorporating the polarized antiquark flavor asymmetry obtained in the model calculation in the large–$`N_c`$ limit. Finally, to evaluate the denominators in Eqs.(2) and (4) we use the GRV94 parametrization of the unpolarized parton distributions. 
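The flavor bookkeeping of Eqs. (5)–(8) and the SU(3) factor quoted above amount to elementary linear algebra, which can be checked mechanically. The following sketch (an editorial illustration, not from the original text) inverts the three antiquark combinations at a fixed $`x`$ and evaluates $`(3F-D)/(F+D)`$ for $`F/D=5/9`$:

```python
from fractions import Fraction

def antiquark_pols(d0, d3, d8):
    """Invert Eqs. (6)-(8): given Delta_0, Delta_3, Delta_8 at one x,
    return (Delta_ubar, Delta_dbar, Delta_sbar)."""
    dsbar = (d0 - d8) / 3            # from subtracting Eq. (8) from Eq. (6)
    dubar = (2 * d0 + d8) / 6 + d3 / 2
    ddbar = (2 * d0 + d8) / 6 - d3 / 2
    return dubar, ddbar, dsbar

# SU(3) factor relating Delta_8 to Delta_3 at the low normalization point:
F_over_D = Fraction(5, 9)
su3_factor = (3 * F_over_D - 1) / (F_over_D + 1)   # (3F - D)/(F + D)
print(su3_factor)  # -> 3/7
```

With $`F/D=5/9`$ the factor is exactly $`3/7`$, so the reconstructed $`\mathrm{\Delta }_8(x)`$ is a scaled-down copy of $`\mathrm{\Delta }_3(x)`$ at the low scale.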
In Fig.2 (a) and (b) we compare the double spin asymmetries, $`A_{LL}^\gamma `$, obtained with the “hybrid” distributions incorporating the antiquark flavor asymmetries, $`\mathrm{\Delta }_3(x)`$ and $`\mathrm{\Delta }_8(x)`$, calculated in the large–$`N_c`$ limit (solid lines), with what one obtains for $`\mathrm{\Delta }_3(x)=\mathrm{\Delta }_8(x)=0`$ (dashed lines). We show the results in two different kinematical regions, (a): $`s=(40\mathrm{GeV})^2`$ and $`M^2=(5\mathrm{GeV})^2`$, corresponding to a proposed fixed-target experiment using the HERA proton beam , and (b): $`s=(500\mathrm{GeV})^2`$ and $`M^2=M_W^2=(80.3\mathrm{GeV})^2`$, which can be reached in the RHIC experiment. One sees that in both cases the flavor asymmetry of the antiquark distribution has a dramatic effect on the spin asymmetry, even reversing its sign compared to the case with $`\mathrm{\Delta }_3(x)=\mathrm{\Delta }_8(x)=0`$. The results for the double spin asymmetry, $`A_{LL}^\gamma `$, also depend in principle on the assumptions made about the polarized gluon distribution in the nucleon, which mixes with the singlet quark distribution under evolution, and which is practically not constrained by the present data. In order to estimate the sensitivity of our results to the polarized gluon distribution we have repeated the above comparison using instead of GRSV95 the Gehrmann–Stirling LO “A” and “C” parametrizations for $`\mathrm{\Delta }_u,\mathrm{\Delta }_d,\mathrm{\Delta }_s`$ and $`\mathrm{\Delta }_0`$, which provide fits to the inclusive data with widely different assumptions about the shape of the input polarized gluon distributions . The resulting asymmetries $`A_{LL}^\gamma `$ obtained without polarized flavor asymmetry, $`\mathrm{\Delta }_3(x)=\mathrm{\Delta }_8(x)=0`$ (dashed lines), and including the large–$`N_c`$ model results for $`\mathrm{\Delta }_3(x)`$ and $`\mathrm{\Delta }_8(x)`$ (solid lines) are shown in Fig.2 (c) and (d). 
One sees that the changes of $`A_{LL}^\gamma `$ due to the inclusion of the flavor asymmetry (differences between corresponding solid and dashed curves) are much larger than the differences due to changes of the input gluon distribution (differences between the two dashed curves). It is not an exaggeration to say that $`A_{LL}^\gamma `$ measures the polarized flavor asymmetry of the antiquark distribution, and not the polarized gluon distribution. Our comparison of asymmetries calculated with and without inclusion of a polarized antiquark flavor asymmetry refers explicitly to the leading–logarithmic (LO) approximation, since only at this level do the flavor asymmetries $`\mathrm{\Delta }_3(x)`$ and $`\mathrm{\Delta }_8(x)`$ evolve separately and can be combined with parametrizations for $`\mathrm{\Delta }_u,\mathrm{\Delta }_d,\mathrm{\Delta }_s`$ and $`\mathrm{\Delta }_0`$ without affecting the fits to inclusive data. It is expected that the spin asymmetry $`A_{LL}^\gamma `$ is less sensitive to NLO corrections than the polarized and unpolarized DY cross sections individually, since the $`K`$–factors partially cancel between numerator and denominator in the ratio, Eq.(2) ; however, this claim has been debated in Ref. In any case, since the inclusion of the polarized antiquark flavor asymmetry has a very large effect on $`A_{LL}^\gamma `$ already at the LO level, it is unlikely that higher–order corrections will reverse this situation. At least, the differences between our LO results for $`A_{LL}^\gamma `$ obtained with and without flavor asymmetry are much larger than those between the LO and NLO results in the case of zero flavor asymmetry quoted in Ref. The single spin asymmetries in lepton pair production through $`W^\pm `$, $`A_L^{W\pm }`$, for proton–proton scattering are shown in Fig.3, for $`s=(500\mathrm{GeV})^2`$ and $`M^2=M_W^2=(80.3\mathrm{GeV})^2`$, which can be reached at RHIC. 
Figs.3 (a) and (b) show the results obtained using the GRSV95 parametrization without antiquark flavor asymmetry (dashed lines), and including the contributions from $`\mathrm{\Delta }_3(x)`$ and $`\mathrm{\Delta }_8(x)`$ obtained in the large–$`N_c`$ model estimate (solid lines). One sees that in this case, too, the inclusion of the antiquark flavor asymmetry has a qualitative effect on the spin asymmetry. Again, in the case of the Gehrmann–Stirling parametrizations, Fig.3 (c) and (d), the differences due to changes in the gluon distribution are negligible compared to the effect of the flavor asymmetry of the antiquark distribution. To summarize, we have shown that the large flavor asymmetries of the polarized antiquark distributions predicted by model calculations in the large–$`N_c`$ limit (chiral quark–soliton model) have a pronounced effect on the spin asymmetries in Drell–Yan pair production through photons or $`W^\pm `$ bosons at HERA or RHIC energies. In particular, the effect of the antiquark flavor asymmetry on the spin asymmetries is much larger than their uncertainties due to the lack of knowledge of the degree of gluon polarization in the nucleon. The expected accuracy of the RHIC measurements will certainly be sufficient to observe an effect of the magnitude predicted. We are grateful to S. Heppelmann and P.V. Pobylitsa for useful discussions. This investigation was supported in part by the Deutsche Forschungsgemeinschaft (DFG), by a joint grant of the DFG and the Russian Foundation for Basic Research, by the German Ministry of Education and Research (BMBF), and by COSY, Jülich. The work of M. Strikman was supported in part by a DOE grant, and by the Alexander–von–Humboldt Foundation.
no-problem/9910/hep-th9910078.html
ar5iv
text
# Untitled Document Fig. 1. The initial state of the field $`A`$: numerical and polynomial approximations. Fig. 2. The difference $`\delta f_x(t)`$ of the dynamical and static values of the field $`f`$ for $`x=5.0`$ and $`\kappa =0.1`$. Fig. 3. The difference $`\delta f_x(t)`$ of the dynamical and static values of the field $`f`$ for $`x=160.0`$ and $`\kappa =0.1`$. Fig. 4. The difference $`\delta h_x(t)`$ of the dynamical and static values of the field $`h`$ for $`x=5.0`$ and $`\kappa =0.1`$. Fig. 5. The difference $`\delta h_x(t)`$ of the dynamical and static values of the field $`h`$ for $`x=160.0`$ and $`\kappa =0.1`$. Note the delay time $`x`$. Fig. 6. The difference $`\delta h_x(t)`$ of the dynamical and static values of the field $`h`$ for $`x=160.0`$ and $`\kappa =0.05`$. Note the delay time $`x`$. Fig. 7. The difference $`\delta h_x(t)`$ of the dynamical and static values of the field $`h`$ for $`x=160.0`$ and $`\kappa =0.2`$. Note the delay time $`x`$. Fig. 8. Snapshot of the difference $`\delta h_x`$ for $`t=100.0`$. Fig. 9. Snapshot of the difference $`\delta h_x`$ for $`t=200.0`$. Fig. 10. Snapshot of the difference $`\delta h_x`$ for $`t=300.0`$. Fig. 11. The moduli of the Fourier transforms of the difference $`\delta f_x(t)`$ for $`x=5.0`$ and $`\kappa =0.05,0.1,0.2`$. Fig. 12. The moduli of the Fourier transforms of the difference $`\delta f_x(t)`$ for $`x=160.0`$ and $`\kappa =0.05,0.1,0.2`$. Fig. 13. The moduli of the Fourier transforms of the difference $`\delta h_x(t)`$ for $`x=5.0`$ and $`\kappa =0.1,0.2`$. Fig. 14. The moduli of the Fourier transforms of the difference $`\delta h_x(t)`$ for $`x=160.0`$ and $`\kappa =0.1,0.2`$. Fig. 15. The modulus of the Fourier transform of the difference $`\delta h_x(t)`$ for $`x=5.0`$ and $`\kappa =0.05`$. Fig. 16. The modulus of the Fourier transform of the difference $`\delta h_x(t)`$ for $`x=160.0`$ and $`\kappa =0.05`$.
no-problem/9910/cond-mat9910378.html
ar5iv
text
# Phase transitions in Ising magnetic films and superlattices ## I Introduction Considerable effort has recently been devoted to the understanding of magnetic films, layered structures and superlattices [1-7]. With the development of molecular beam epitaxy, it is now possible to grow in a very controlled way magnetic films with a few atomic layers, or even a monolayer, atop nonmagnetic substrates. A superlattice in which the atoms vary from one monolayer to another can also be envisaged. Very often one finds unexpected and interesting properties in these systems. For example, experimental studies [8-10] on the magnetic properties of surfaces of Gd, Cr and Tb have shown that a magnetically ordered surface can coexist with a magnetically disordered bulk phase. Most work has been devoted to the free surface problem, in which spins on the surface interact with each other through an exchange interaction $`J_s`$ different from that of the bulk material [11-20]. For values of $`J_s/J`$ above a certain critical value $`(J_s/J)_{crit}`$ the system orders on the surface before it orders in the bulk. Below this critical value only two phases are expected, namely the bulk ferromagnetic and paramagnetic phases. Magnetic excitations in superlattices were studied in numerous papers (see, e.g., Ref.21 for a brief review). Yet less attention has been paid to critical behavior. The phase transition temperatures of Heisenberg magnetic superlattices have been studied [22-24]. Hinchey and Mills have investigated a superlattice structure with alternating ferromagnetic and antiferromagnetic layers [25-26]. In this paper, we are concerned with phase transitions in Ising magnetic films and superlattices. We study them within the framework of mean-field theory. The transfer matrix method is used to derive nonlinear equations for magnetic Ising films and superlattices. The equations are general and hold for arbitrary exchange interaction constants. 
In section II, we outline the formalism and derive the nonlinear equation for the phase transition temperatures of magnetic films. The transition temperatures as a function of the surface exchange constants are studied. The nonlinear equation for the transition temperatures of superlattices is given in section III. The last section IV is devoted to a brief discussion. ## II Formalism and phase transitions in Ising magnetic films We start with a lattice of localized spins with spin equal to $`1/2`$. The interaction is of the nearest-neighbor ferromagnetic Ising type. The Ising Hamiltonian of the system is given by $$H=-\frac{1}{2}\sum _{(i,j)}\sum _{(r,r^{\prime })}J_{ij}S_{ir}S_{jr^{\prime }},$$ (1) where $`(i,j)`$ are plane indices, $`(r,r^{\prime })`$ are different sites of the planes, and $`S_{ir}`$ is the spin variable. $`J_{ij}`$ denote the exchange constants and are plane dependent. We will keep only nearest-neighbour terms. In the mean-field theory, $`S_{ir}`$ is replaced by its mean value $`m_i`$ associated with each plane, which is determined by a set of simultaneous equations $$m_i=\mathrm{tanh}[(z_0J_{ii}m_i+zJ_{i,i+1}m_{i+1}+zJ_{i,i-1}m_{i-1})/K_BT],$$ (2) where $`z_0,z`$ are the numbers of nearest neighbours in the plane and between the planes, respectively, $`K_B`$ is the Boltzmann constant and $`T`$ is the temperature. Near the transition temperature the order parameters $`m_i`$ are small, and Eq.(2) reduces to $$K_BTm_i=z_0J_{ii}m_i+zJ_{i,i+1}m_{i+1}+zJ_{i,i-1}m_{i-1}.$$ (3) Let us rewrite the above equation in matrix form in analogy with Ref. $$\left(\genfrac{}{}{0pt}{}{m_{i+1}}{m_i}\right)=M_{i-1}\left(\genfrac{}{}{0pt}{}{m_i}{m_{i-1}}\right)$$ (4) with $`M_{i-1}`$ as the transfer matrix defined by $$M_{i-1}=\left(\begin{array}{cc}(K_BT-z_0J_{ii})/(zJ_{i,i+1})& -J_{i,i-1}/J_{i,i+1}\\ 1& 0\end{array}\right).$$ (5) We consider a magnetic film which contains $`N`$ layers with layer indices $`i=1,2,\mathrm{\dots },N`$. 
From Eq.(4), we get $$\left(\genfrac{}{}{0pt}{}{m_N}{m_{N-1}}\right)=R\left(\genfrac{}{}{0pt}{}{m_2}{m_1}\right),$$ (6) where $`R=M_{N-2}\mathrm{\cdots }M_2M_1`$ represents successive multiplication of the transfer matrices $`M_i`$. For an ideal film system, there exists symmetry in the direction perpendicular to the surface, which allows us to write $`m_i=m_{N+1-i}`$. Then, the following nonlinear equation for the transition temperature can be obtained from equations (3) and (6): $$R_{11}[(K_BT-z_0J_{11})/(zJ_{1,2})]^2+(R_{12}-R_{21})[(K_BT-z_0J_{11})/(zJ_{1,2})]-R_{22}=0.$$ (7) The above equation is the general equation for the transition temperature of symmetric films. It is valid for arbitrary exchange interaction constants $`J_{ij}`$. All the information about the phase transition temperatures of the system is contained in this equation. For a uniform system with $`J_{ij}=J`$, Eq.(7) reduces to $$R_{11}(t-z_0/z)^2+(R_{12}-R_{21})(t-z_0/z)-R_{22}=0,$$ (8) where $`t=K_BT/(Jz)`$ is the reduced temperature and $`R=D^{N-2}`$. Here the matrix $$D=\left(\begin{array}{cc}t-z_0/z& -1\\ 1& 0\end{array}\right).$$ (9) Since $`\mathrm{det}(D)=1`$, we can linearize the matrix $`R`$ as $$R=U_{N-2}D-U_{N-3}I,$$ (10) where $`I`$ is the unit matrix, $`U_N=(\lambda _+^N-\lambda _{-}^N)/(\lambda _+-\lambda _{-})`$, and $`\lambda _\pm =(t-z_0/z\pm \sqrt{(t-z_0/z)^2-4})/2`$. Substituting Eq.(10) into Eq.(8), we reduce Eq.(8) to its simplest form $$U_{N+1}=0.$$ (11) $`U_{N+1}`$ can be rewritten as $$U_{N+1}=\mathrm{sin}[(N+1)\varphi ]/\mathrm{sin}\varphi $$ (12) for $`(t-z_0/z)^2\le 4`$. Here $`\varphi =\mathrm{arccos}[(t-z_0/z)/2]`$. For $`(t-z_0/z)^2>4`$, $`\varphi `$ becomes $`i\theta `$, and the trigonometric functions become hyperbolic functions of $`\theta `$. Eq.(11) gives $$t=2\mathrm{cos}[\pi /(N+1)]+z_0/z.$$ (13) In the limit $`N\to \mathrm{\infty }`$, the reduced bulk temperature $`t_B`$ is obtained as $$t_B=2+z_0/z.$$ (14) Fig.1 shows the reduced transition temperature vs. layer number $`L`$. 
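Equation (13) is easy to verify numerically: since $`\lambda _\pm `$ are the roots of $`\lambda ^2-(t-z_0/z)\lambda +1=0`$, the quantities $`U_n`$ obey the recursion $`U_{n+1}=(t-z_0/z)U_n-U_{n-1}`$ with $`U_0=0`$, $`U_1=1`$, and must vanish at $`n=N+1`$ when $`t`$ equals the value in Eq. (13). The following Python sketch (an illustration added here, not part of the original paper) performs this check for a simple cubic film, $`z_0=4`$, $`z=1`$:

```python
import math

def film_tc(N, z0=4, z=1):
    """Reduced transition temperature of a uniform N-layer film, Eq. (13)."""
    return 2.0 * math.cos(math.pi / (N + 1)) + z0 / z

def U(n, t, z0=4, z=1):
    """U_n from the recursion U_{n+1} = (t - z0/z) U_n - U_{n-1},
    with U_0 = 0 and U_1 = 1 (equivalent to the closed form in the text)."""
    u_prev, u = 0.0, 1.0          # (U_0, U_1)
    for _ in range(n):
        u_prev, u = u, (t - z0 / z) * u - u_prev
    return u_prev                 # after n steps u_prev holds U_n

for N in (2, 5, 10, 50):
    t = film_tc(N)
    assert abs(U(N + 1, t)) < 1e-9    # Eq. (11): U_{N+1} = 0 at the film Tc
    assert t < 2.0 + 4.0              # always below the bulk value t_B = 6
print([round(film_tc(N), 4) for N in (2, 5, 10, 50)])
```

The last assertion reflects the statement below Eq. (14): a finite film always disorders below the bulk transition temperature, approaching it as $`N`$ grows.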
It can be seen that the film transition temperature is lower than the bulk one, i.e., the film disorders at a lower temperature than the bulk. Throughout this paper, we take $`J`$ as the unit of energy. In the above discussion, we have assumed the surface exchange constants $`J_s`$ are the same as the bulk exchange constants $`J`$. Now we consider an $`(l,n,l)`$ film consisting of $`l`$ top surface layers, $`n`$ bulk layers and $`l`$ bottom surface layers, and assume that the exchange constant in a surface layer is denoted by $`J_s`$ and that in a bulk layer or between successive layers by $`J`$. In this case the total transfer matrix $`R`$ becomes $$R=P^{l-1}Q^nP^{l-1},$$ (15) where the matrices $`P`$ and $`Q`$ are $$P=\left(\begin{array}{cc}t-4J_s/J& -1\\ 1& 0\end{array}\right),Q=\left(\begin{array}{cc}t-4& -1\\ 1& 0\end{array}\right).$$ (16) Here we have assumed that the spins lie on a simple cubic lattice, i.e., $`z_0=4,z=1`$. Since $`Det(P)=Det(Q)=1`$, the matrices $`P^{l-1}`$ and $`Q^n`$ can be linearized in analogy with Eq.(10). In this case, the nonlinear Eq.(7) reduces to $$R_{11}(t-4J_s/J)^2+(R_{12}-R_{21})(t-4J_s/J)-R_{22}=0.$$ (17) The numerical results for the transition temperatures of a magnetic $`(l,n,l)`$ film as a function of $`J_s/J`$ are shown in Fig.2. It can be seen that the transition temperature increases as $`J_s/J`$ increases. For $`J_s/J\le 1`$, the transition temperatures of the film $`(10,10,10)`$ are nearly equal to the bulk temperature $`T_B`$. It is interesting that the transition temperature increases linearly with the increase of $`J_s/J`$ when $`J_s/J`$ is large enough. We can also see that the transition temperature increases as the layer number increases. ## III Phase transitions in Ising magnetic superlattices The $`(l,n)`$ superlattice structure we study is formed from two types of atoms. In each elementary unit with layer indices $`i=1,2,\mathrm{\dots },l+n`$, there are $`l`$ atomic layers of type $`A`$ and $`n`$ atomic layers of type $`B`$. 
The intralayer exchange constants are given by $`J_A`$ and $`J_B`$, whereas the exchange constant between adjacent layers is described by $`J`$. For the above model of the superlattice, the transfer matrix $`M_i`$ (Eq.(5)) reduces to two types of matrices, $$A=\left(\begin{array}{cc}X_A& -1\\ 1& 0\end{array}\right),B=\left(\begin{array}{cc}X_B& -1\\ 1& 0\end{array}\right),$$ (18) where $`X_A=t-4J_A/J`$ and $`X_B=t-4J_B/J`$. From Eq.(4), we obtain $$\left(\genfrac{}{}{0pt}{}{m_{l+n+2}}{m_{l+n+1}}\right)=R\left(\genfrac{}{}{0pt}{}{m_2}{m_1}\right),$$ (19) where $$R=AB^nA^{l-1}$$ (20) is the total transfer matrix. Due to the periodicity of the superlattice, $`m_{l+n+2}=m_2`$ and $`m_{l+n+1}=m_1`$. Then from Eq.(19), we get $$Det(R)-Tr(R)+1=0.$$ (21) It can easily be seen that $`Det(A)=Det(B)=Det(R)=1`$. Then Eq.(21) reduces to the simplest form $$Tr(R)=2.$$ (22) Actually, the above equation is a general equation for the phase transition temperature of superlattices. It is valid for arbitrary exchange constants $`J_{ij}`$. The nonlinear equation for films depends on both the diagonal and off-diagonal elements of the total transfer matrix $`R`$, while the nonlinear equation for superlattices depends only on the diagonal elements of $`R`$. For the total transfer matrix $`R=AB^nA^{l-1}`$, using the cyclic property of the trace we get $$Tr(A^lB^n)=2.$$ (23) The matrices $`A^l`$ and $`B^n`$ can be linearized as $`A^l`$ $`=`$ $`E_lA-E_{l-1}I`$ (24) $`B^n`$ $`=`$ $`F_nB-F_{n-1}I,`$ (25) where $`E_l=(\alpha _+^l-\alpha _{-}^l)/(\alpha _+-\alpha _{-})`$, $`F_n=(\beta _+^n-\beta _{-}^n)/(\beta _+-\beta _{-})`$, $`\alpha _\pm =(X_A\pm \sqrt{X_A^2-4})/2`$ and $`\beta _\pm =(X_B\pm \sqrt{X_B^2-4})/2`$. From Eq.(23), we get the equation $$(E_lX_A-E_{l-1})(F_nX_B-F_{n-1})-2E_lF_n+E_{l-1}F_{n-1}=2.$$ (26) For $`l=1`$, $`n=1`$, the above equation reduces to $$X_AX_B=4,$$ (27) which is identical with the results of Refs. 2 and 29. Next we numerically calculate the phase transition temperatures from Eq.(26). 
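The reduction from Eq. (23) to Eq. (26) can be cross-checked by brute force: build $`A`$ and $`B`$ as explicit $`2\times 2`$ matrices, take the trace of $`A^lB^n`$ directly, and compare with the closed form. The Python sketch below (an illustration added here; the recursion $`E_{p+1}=XE_p-E_{p-1}`$, $`E_0=0`$, $`E_1=1`$ is equivalent to the closed form quoted for $`E_l`$ and $`F_n`$) does this for several $`(l,n)`$:

```python
def mat(X):
    """Elementary transfer matrix of the form ((X, -1), (1, 0))."""
    return [[X, -1.0], [1.0, 0.0]]

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mpow(M, p):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(p):
        R = mul(R, M)
    return R

def cheb(p, X):
    """E_p (and F_p): E_0 = 0, E_1 = 1, E_{p+1} = X E_p - E_{p-1}."""
    e_prev, e = 0.0, 1.0
    for _ in range(p):
        e_prev, e = e, X * e - e_prev
    return e_prev

def lhs_eq26(l, n, XA, XB):
    """Left-hand side of Eq. (26); the transition condition sets it to 2."""
    El, El1 = cheb(l, XA), cheb(l - 1, XA)
    Fn, Fn1 = cheb(n, XB), cheb(n - 1, XB)
    return (El * XA - El1) * (Fn * XB - Fn1) - 2.0 * El * Fn + El1 * Fn1

def trace_direct(l, n, XA, XB):
    """Tr(A^l B^n) by explicit matrix multiplication."""
    R = mul(mpow(mat(XA), l), mpow(mat(XB), n))
    return R[0][0] + R[1][1]

XA, XB = 2.7, 1.9
for l, n in [(1, 1), (2, 2), (3, 1), (10, 5)]:
    assert abs(lhs_eq26(l, n, XA, XB) - trace_direct(l, n, XA, XB)) < 1e-8
print("Eq. (26) agrees with the direct trace of A^l B^n")
```

For $`l=n=1`$ the closed form collapses to $`X_AX_B-2`$, so the condition $`\mathrm{Tr}(R)=2`$ reproduces Eq. (27) at once.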
In figure 3, we have shown the results for the superlattices $`(3,1),(2,2),(10,5)`$ and $`(20,20)`$. The transition temperature is plotted as a function of $`J_A/J`$. For $`J_A/J<1`$, the transition temperature is smaller than the bulk transition temperature. For $`J_A/J=1`$, the transition temperature of the superlattice is independent of $`l`$ and $`n`$ and is equal to the bulk temperature $`T_B`$, as expected. On the other hand, for $`J_A/J>1`$, the transition temperature is greater than the bulk temperature $`T_B`$. The transition temperature increases with the layer number in one unit cell and approaches $`T_B`$ asymptotically as the number becomes large. The transition temperature increases nearly linearly with $`J_A/J`$ when $`J_A/J`$ is large enough. The total layer numbers of the superlattices $`(3,1)`$ and $`(2,2)`$ are the same, but the transition temperatures are different. For $`J_A/J<1`$, the transition temperatures of the superlattice $`(2,2)`$ are larger than those of the superlattice $`(3,1)`$. In contrast, the transition temperatures of the superlattice $`(2,2)`$ are smaller than those of the superlattice $`(3,1)`$ for $`J_A/J>1`$. ## IV Discussions In summary, we have studied phase transitions in Ising magnetic films and superlattices within the framework of mean-field theory. By the transfer matrix method, we have derived two general nonlinear equations for the phase transition temperatures in Ising films and superlattices, respectively. The transition temperatures as a function of the exchange interaction constants are calculated. In addition, the equations can be easily solved and the parameters involved can be adjusted at will. Figure Captions Fig.1, Transition temperatures of a uniform film as a function of layer number $`L`$. Fig.2, Transition temperatures of a magnetic film $`(l,n,l)`$ as a function of $`J_s/J`$. Fig.3, Transition temperatures of a magnetic superlattice $`(l,n)`$ as a function of $`J_A/J`$.
no-problem/9910/gr-qc9910014.html
ar5iv
text
# Introduction ## Introduction The standard Friedmann models are generally considered as the best approximation for the observed large scale distribution of galaxies, since the results predicted by these models are usually quite good approximations of observations.<sup>1</sup> However, although no observational evidence was so far found to severely contradict this widespread belief, the question remains of whether or not other cosmological models could also provide theoretical predictions in line with observations. This is obviously an important aspect in the general acceptance of the standard Friedmannian models as good approximations of the observed Universe, inasmuch as we can only have a direct response to the question of how good the Friedmann models really are, if we are able to test the data against the predictions of other non-standard cosmological models. Nevertheless, cosmography is presently dominated by observational relations derived only within the Friedmannian context,<sup>1-4</sup> and obviously those relations do not allow comparisons between standard and non-standard cosmologies. Therefore, in practice we presently have a situation where the observational test of non-standard models is quite difficult due to the absence of detailed and observationally-based relations derived for that purpose. There are exceptions, however, and the basis of a general theory for observations of cosmological sources was presented by George Ellis,<sup>5</sup> although later, in a series of papers,<sup>6-8</sup> the theory was further developed, with the presentation of detailed calculations of observational relations from where cosmological effects can be identified and separated from the brightness profile evolution of the sources. 
Although such a study was a step forward in the possibility of direct observational tests of non-standard cosmological models, this detailed theory<sup>6-8</sup> equally demands detailed observations of the sources, a task usually not feasible when dealing with large scale redshift surveys, where the total number of observed objects varies from hundreds to thousands of galaxies. Actually, often it is not even desirable to obtain such detailed observations, since what is often being sought are data for doing statistics on the distribution of galaxies. The approach of this work differs from those quoted above because here cosmological sources are considered point sources, and therefore observables like flux and colour are integrated over the whole object. This is a reasonable approximation for objects included in these surveys, since they are usually so faint that very detailed observations of their structure are still difficult with the presently available techniques. Therefore, by treating galaxies as point sources we can, at least in principle, apply the methods presented in this paper to the large and deep galaxy surveys presently available. The observational relations discussed here were derived with the aim of comparison with these redshift surveys of galaxies. As a consequence, the theory used here offers the possibility of comparing the predictions of different cosmological models while requiring much less real data than the detailed theory mentioned above demands.<sup>6-8</sup> Besides, this simpler view of the problem creates the option of a first-order test of cosmological models against observations without the need of detailed data, which in turn would demand a more complex and demanding analysis. 
However, in order to obtain observational relations that can actually be compared with observations, to a certain extent we need to depart from the basic approach<sup>5</sup> and discuss in detail some specific observations in cosmology within some specific bandwidth, since this is the way astronomers deal with their data. This paper is the first of a series in a programme for investigating whether or not other, non-standard, cosmological models could also explain the data obtained from the large-scale redshift surveys of galaxies. Here I shall review the basic theory for observational relations in a limited frequency bandwidth, and the quantities which are most used by observers. In doing so I will put together some basic results which will form the common ground from which the general approach of this proposed research programme should start. I will also extend some aspects of this theory, particularly the number-counts expression, and indicate where the connection among these observational quantities, real astronomical observations, and the spacetime geometry takes place. In short, such a connection appears when the observables are written in terms of the redshift and the cosmological distances, since both can only be explicitly written when a spacetime metric is assumed. Even when all observables are written solely in terms of the redshift, this connection will appear in the functional form between the observational quantities and the redshift, as this functional relationship is dependent on the chosen spacetime geometry. ## Basic Definitions and Equations Let us call $`F`$ the bolometric flux as measured by the observer. This is the rate at which radiation crosses unit area per unit time in all frequencies. 
Then $`F_G`$ will be the bolometric galaxy flux measured across a unit sphere located in a locally Euclidean space at rest with a galaxy or a cosmological source.<sup>5</sup> The distance definitions used here are three: i) the observer area distance $`r_0`$ is the area distance of a source as measured by the observer; ii) the galaxy area distance $`r_G`$ is defined as the area distance to the observer as measured from the distant galactic source. This quantity is unobservable, by definition; iii) the luminosity distance $`d_{\ell }`$ is the distance measured by the observer as if the space were flat and non-expanding, that is, as if the space were stationary and Euclidean. The observer area distance $`r_0`$ is also called angular diameter distance,<sup>2</sup> and corrected luminosity distance.<sup>11</sup> The galaxy area distance $`r_G`$ is also named effective distance,<sup>12</sup> angular size distance,<sup>1</sup> transverse comoving distance,<sup>13</sup> and proper motion distance.<sup>14</sup> These three definitions of distance are related to each other by Etherington’s reciprocity theorem,<sup>5,15,16</sup> $$d_{\ell }=r_0(1+z)^2=r_G(1+z),$$ (1) where $`z`$ is the redshift of the source. All these distances tend to the same Euclidean value as $`z\to 0`$, but greatly differ at large redshift.<sup>9,10</sup> Although the equation above appears in standard texts of observational cosmology, with very few exceptions<sup>5,16</sup> they all fail to acknowledge the generality of the theorem and to give due credit to Etherington’s 1933 discovery. The reciprocity relation was proven for general null geodesics, without specifying any metric, and, therefore, it is not at all restricted to standard cosmologies. Let us now call $`L`$ the bolometric source luminosity, that is, the total rate of radiating energy emitted by the source and measured through a unit sphere located in a locally Euclidean spacetime near the source. 
Then $`\nu `$ will be the observed frequency of the radiation, and $`\nu _G`$ the emitted frequency, that is, the frequency of the same radiation $`\nu `$ received by the observer, but at the rest frame of the emitting galaxy. The source spectrum function $`𝒥(\nu _G)`$ gives the proportion of radiation emitted by the source at a certain frequency $`\nu _G`$ as measured at the rest frame of the source. This quantity is a property of the source, giving the percentage of emitted radiation, and obeying the normalization condition, $`_0^{\mathrm{}}𝒥(\nu _G)𝑑\nu _G=1`$. Then $`L_{\nu _G}=L𝒥(\nu _G)`$ is the specific source luminosity, giving the rate at which radiation is emitted by the source at the frequency $`\nu _G`$ at its locally Euclidean rest frame. To summarize, we have that, $$L=_{\text{2-sphere}}F_G𝑑A=4\pi F_G=_0^{\mathrm{}}L_{\nu _G}𝑑\nu _G=_0^{\mathrm{}}L𝒥(\nu _G)𝑑\nu _G.$$ (2) The redshift $`z`$ is defined by, $$1+z=\frac{\lambda _{\text{observed}}}{\lambda _{\text{emitted}}}=\frac{\nu _G}{\nu },$$ (3) and from the expressions above it follows that $$d\nu =\frac{d\nu _G}{(1+z)},$$ (4) and $$F=\frac{F_G}{(r_0)^2(1+z)^4}=\frac{F_G}{(r_G)^2(1+z)^2}=\frac{F_G}{(d_{\ell })^2}.$$ (5) The connection of the model with the spacetime geometry appears in the expressions for the redshift and the different definitions of distance. That can be seen if we remember that in the general geometric case the redshift is given by,<sup>5</sup> $$1+z=\frac{\left(u^ak_a\right)_{\text{source}}}{\left(u^ak_a\right)_{\text{observer}}},$$ (6) where $`u^a`$ is the observer’s four-velocity, and $`k^a`$ is the tangent vector of the null geodesic connecting source and observer, that is, the past light cone. This expression allows us to calculate $`z`$ for any given spacetime geometry. 
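As a quick numerical sanity check, the reciprocity relations of equation (1) and the equivalence of the three flux expressions in equation (5) can be verified directly. The short Python sketch below uses arbitrary illustrative values for $`r_0`$, $`z`$ and $`F_G`$; it assumes no particular metric, since equations (1) and (5) hold for any spacetime geometry.

```python
# Numerical check of Etherington's reciprocity theorem (eq. 1) and of the
# equivalence of the three flux expressions in eq. (5).  The values of
# r0, z and f_gal used below are arbitrary illustrative numbers, not data.

def distances(r0, z):
    """Luminosity distance and galaxy area distance from r0 and z (eq. 1)."""
    d_l = r0 * (1.0 + z) ** 2   # luminosity distance
    r_g = r0 * (1.0 + z)        # galaxy area distance
    return d_l, r_g

def fluxes(f_gal, r0, z):
    """The bolometric flux written in the three equivalent ways of eq. (5)."""
    d_l, r_g = distances(r0, z)
    return (f_gal / (r0 ** 2 * (1.0 + z) ** 4),
            f_gal / (r_g ** 2 * (1.0 + z) ** 2),
            f_gal / d_l ** 2)

if __name__ == "__main__":
    print(distances(1.5, 1.0))   # (6.0, 3.0)
    print(fluxes(1.0, 1.5, 1.0)) # three identical values
```

Any metric-specific expression for $`r_0(z)`$ automatically fixes the other two distances through equation (1).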
If we assume that source and observer are comoving, then $`u^a=\delta _0^a`$ implies that $`u^bk_b=k^bg_{0b}`$, and the redshift may be rewritten as, $$1+z=\frac{[g_{0b}(dx^b/dy)]_{\text{source}}}{[g_{0b}(dx^b/dy)]_{\text{observer}}}.$$ (7) Here $`y`$ is the affine parameter along the null geodesics connecting source and observer, and $`g_{ab}`$ is the metric tensor. Inasmuch as $`dx^b/dy`$ and $`g_{ab}`$ can only be determined when a spacetime geometry is defined by some line element $`dS^2`$, the function $`g_{0b}(dx^b/dy)`$ and, ultimately, the redshift as well are directly dependent on the geometry of the model. Although $`z`$ is an astronomically observable quantity, its specific relationship with other internal quantities of the model will be set by the metric tensor. The observer area distance $`r_0`$ is defined by,<sup>5,16</sup> $$(r_0)^2=\frac{dA_0}{d\mathrm{\Omega }_0},$$ (8) where $`dA_0`$ is the cross-sectional area of a bundle of null geodesics measured at the source’s rest frame, and diverging from the observer at some point, and $`d\mathrm{\Omega }_0`$ is the solid angle subtended by this bundle. This quantity could in principle be measured if we had intrinsic, astrophysically-determined dimensions of the source, but it can also be obtained from the assumed spacetime geometry, especially in spherically symmetric metrics, where it can be easily calculated. For the Einstein-de Sitter cosmology, detailed calculations for many observables can be found elsewhere.<sup>10,17</sup> ## Frequency Bandwidth Observational Relations ### Flux and Magnitude The flux within some specific wavelength range can be obtained if we consider equations (2), (4) and (5). 
Then we have,<sup>5</sup> $$F=\frac{_0^{\mathrm{}}L𝒥(\nu _G)𝑑\nu _G}{4\pi (r_0)^2(1+z)^4}=\frac{L}{4\pi }\frac{_0^{\mathrm{}}𝒥\left[\nu (1+z)\right](1+z)𝑑\nu }{(r_0)^2(1+z)^4}=\frac{L}{4\pi }\frac{_0^{\mathrm{}}𝒥\left[\nu (1+z)\right]𝑑\nu }{(r_0)^2(1+z)^3}.$$ (9) Therefore, the specific flux $`F_\nu `$ measured in the frequency range $`\nu `$, $`\nu +d\nu `$ by the observer, may be written as $$F_\nu d\nu =\frac{L}{4\pi }\frac{𝒥\left[\nu (1+z)\right]d\nu }{(r_0)^2(1+z)^3}.$$ (10) The apparent magnitude in a specific observed frequency bandwidth is, $$m_W=-2.5\mathrm{log}_0^{\mathrm{}}F_\nu W(\nu )𝑑\nu +\text{constant},$$ (11) where $`W(\nu )`$ is the function which defines the spectral interval of the observed flux (the standard UBV system, for instance). This is a sensitivity function of the atmosphere, telescope and detecting device. Thus, from equations (10) and (11) the apparent magnitude in a specified spectral interval $`W`$ yields, $$m_W=-2.5\mathrm{log}\left\{\frac{L}{4\pi }\frac{1}{(r_0)^2(1+z)^3}_0^{\mathrm{}}W(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu \right\}+\text{constant}.$$ (12) Since cosmological sources do evolve, the intrinsic luminosity $`L`$ changes according to the evolutionary stage of the source, and therefore, $`L`$ is actually a function of the redshift: $`L=L(z)`$. Hence, in order to use equation (12) to obtain the apparent magnitude evolution of the source, some theory for luminosity evolution is also necessary. For galaxies, $`L(z)`$ is usually derived taking into consideration the theory of stellar evolution, from where some simple equations for luminosity evolution can be drawn.<sup>1,18</sup> Note that equation (12) also indicates that the source spectrum function $`𝒥`$ might evolve and change its functional form at different evolutionary stages of the source. 
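To make the band-integrated magnitude concrete, the sketch below evaluates the integral in equation (12) numerically for a hypothetical, unnormalised power-law spectrum $`𝒥(\nu )=\nu ^{-2}`$ and an idealised top-hat passband ($`W=1`$ on an arbitrary frequency interval), with the usual sign convention $`m=-2.5\mathrm{log}F+\text{constant}`$. The distance factor $`r_0`$ is held fixed so that only the redshift dependence is illustrated; the spectrum, passband and normalisations are illustrative assumptions, not data.

```python
import math

# Toy evaluation of the apparent magnitude of eq. (12) for a hypothetical
# power-law spectrum J(nu) = nu**-2 and an idealised top-hat passband
# W(nu) = 1 on [1, 2] (arbitrary frequency units).  L/(4*pi) and r_0 are
# set to 1, so only the redshift dependence is illustrated.

def spectrum(nu):
    return nu ** -2.0

def band_integral(z, nu_lo=1.0, nu_hi=2.0, n=20000):
    """Trapezoidal estimate of the integral of W(nu) * J(nu*(1+z)) d(nu)."""
    h = (nu_hi - nu_lo) / n
    s = 0.5 * (spectrum(nu_lo * (1.0 + z)) + spectrum(nu_hi * (1.0 + z)))
    for i in range(1, n):
        s += spectrum((nu_lo + i * h) * (1.0 + z))
    return s * h

def m_w(z, r0=1.0):
    """Eq. (12) up to the additive constant, with L/(4*pi) = 1."""
    return -2.5 * math.log10(band_integral(z) / (r0 ** 2 * (1.0 + z) ** 3))

if __name__ == "__main__":
    print(m_w(1.0) - m_w(0.0))
```

For this toy spectrum the closed form is $`m_W(z)-m_W(0)=12.5\mathrm{log}(1+z)`$ at fixed $`r_0`$, which the quadrature reproduces.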
In addition, as $`𝒥\left[\nu (1+z)\right]`$ is a property of the source at a specific redshift, this function must be known in order to calculate the apparent magnitude, unless the K-correction approach is used (see below). For magnitude limited catalogues, the luminosity distance and the observer area distance both have an upper cutoff, which is a function of the apparent magnitude, the frequency bandwidth used in the observations and the luminosity of the sources. ### K-Correction The relations above demand the knowledge of both the source spectrum and the redshift. However, when the source spectrum is not known, it is necessary to introduce a correction term in order to obtain the bolometric flux from observations. This correction is known as the K-correction, and it is a different way of allowing for the effect of the source spectrum. In deriving the K-correction,<sup>3,4,19,20</sup> I start by calculating the difference in magnitude produced by the bolometric flux $`F`$ and the flux $`F_W`$ measured by the observer at the bandwidth $`W(\nu )`$, at any redshift $`z`$. Since, $$F=_0^{\mathrm{}}F_\nu 𝑑\nu ,F_W=_0^{\mathrm{}}F_\nu W(\nu )𝑑\nu ,$$ (13) the difference in magnitude $`\mathrm{\Delta }m(z)`$ will be given by $$\mathrm{log}\frac{F(z)}{F_W(z)}=0.4\mathrm{\Delta }m(z).$$ (14) The ratio between the observed flux $`F_W(z)`$ at a given redshift and at $`z=0`$ defines the K-correction. Then, considering equation (14), we have that $$\frac{F_W(z)}{F_W(0)}=\frac{F(z)}{F(0)}10^{-0.4K_W},$$ (15) where we have defined $$K_W\equiv \mathrm{\Delta }m(z)-\mathrm{\Delta }m(0).$$ (16) Then it follows that $$K_W=m_W-m_{\text{bol}}-\mathrm{\Delta }m(0),$$ (17) which means that once we know the K-term and the observed magnitude $`m_W`$, the bolometric magnitude is known within a constant $`\mathrm{\Delta }m(0)`$. 
If we now substitute equation (10) into equation (15), and assume $`L(z)=L(0)`$, it is easy to show that $$K_W(z)=2.5\mathrm{log}\left\{\frac{_0^{\mathrm{}}W(\nu )𝒥(\nu )𝑑\nu }{_0^{\mathrm{}}W(\nu )𝒥(\nu _G)𝑑\nu _G}\right\}.$$ (18) Remembering that, by equation (4), the source spectrum can be transformed from the rest frame of the source to the rest frame of the observer by a factor of $`(1+z)`$, that is, $`𝒥\left[\nu (1+z)\right]d\nu =\left[𝒥(\nu _G)d\nu _G\right]/(1+z)`$, we may also write equation (18) as $$K_W(z)=-2.5\mathrm{log}(1+z)+2.5\mathrm{log}\left\{\frac{_0^{\mathrm{}}W(\nu )𝒥(\nu )𝑑\nu }{_0^{\mathrm{}}W(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu }\right\}.$$ (19) Note that the equations above allow us to write theoretical K-correction expressions for any given spacetime geometry, provided that the line element $`dS^2`$ is known beforehand. As a final remark, it is obvious that if the source spectrum is already known, all relevant observational relations can be calculated without the need for the K-correction. ### Colour With the expressions above we can obtain the theoretical equation for the colour of the sources for any given spacetime. Let us consider two bandwidths $`W`$ and $`W^{\prime }`$. From equation (12) we can find the difference in apparent magnitude for these two frequency bands in order to obtain an equation for the colour of the source at a specific redshift. Let us call this quantity $`C_{WW^{\prime }}`$. Thus, $$C_{WW^{\prime }}(z)\equiv m_W-m_{W^{\prime }}=2.5\mathrm{log}\left\{\frac{_0^{\mathrm{}}W^{\prime }(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu }{_0^{\mathrm{}}W(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu }\right\}.$$ (20) Considering that cosmological sources do evolve, they should emit different luminosities at different redshifts due to the different evolutionary stages of the stellar contents of the sources, and this is reflected in the equation above by the source spectrum function, which may be different at different redshifts. 
Note, however, that in the equation above the source is assumed to have the same bolometric luminosity at a specific redshift and, therefore, we can only use equation (20) to compare observations of objects of the same class and at similar evolutionary stages at a certain $`z`$, since $`L=L(z)`$. This often means galaxies of the same morphological type. In other words, equation (20) assumes that a homogeneous population of cosmological sources does exist, and hence, the evolution and structure of the members of such a group will be similar. Equation (20) also gives us a method for assessing the possible evolution of the source spectrum. For instance, by calculating $`(B-V)`$ and $`(V-R)`$ colours for E galaxies with modern determinations of the K-correction, it has been reported<sup>4</sup> that no colour evolution was found to at least $`z=0.4`$. However, for $`z\gtrsim 0.3`$ it was found that rich clusters of galaxies tend to be bluer (the Butcher-Oemler effect) than at lower redshifts.<sup>1,21</sup> Therefore, if we start from a certain metric, we can calculate the theoretical redshift range where colour evolution would be most important for the assumed geometry of the cosmological model. Then, assessing evolution could be done by means of multicolour observations. As the luminosity and area distances must be the same in all wavelengths for each given source, if the luminosity-redshift plot is not the same in two colours, this shows that these two colours have different evolution functions. Applications of this idea for searching for inhomogeneities, by means of the Lemaître-Tolman-Bondi cosmology, can be found in the literature.<sup>22</sup> Another point worth mentioning is that, from equation (20), we see that colour is directly related to the intrinsic characteristics of the source and its evolutionary stage, as given by the redshift and the assumptions concerning the real form of the source spectrum function at a certain $`z`$. 
However, this reasoning is valid for point sources whose colours are integrated and, therefore, we are not considering here structures, like galactic disks and halos, which in principle may emit differently and would then produce different colours. If we remember that cosmological sources are usually far enough away to make the identification and observation of source structures an observational problem for large scale galaxy surveys, this hypothesis seems reasonable at least as a first approximation. Finally, it is clear that in order to obtain a relationship between apparent magnitude and redshift we need some knowledge about the dependence of the intrinsic bolometric luminosity $`L`$ and the source spectrum function $`𝒥`$ on the redshift. It seems that such knowledge must come from astrophysically independent theories about the intrinsic behaviour and evolution of the sources, and not from the assumed cosmological model. ### Number Counts In any cosmological model, if we consider a small affine parameter displacement $`dy`$ at some point P on a bundle of past null geodesics subtending a solid angle $`d\mathrm{\Omega }_0`$, and if $`n`$ is the number density of radiating sources per unit proper volume at P, then the number of sources in this section of the bundle is,<sup>5</sup> $$dN=(r_0)^2d\mathrm{\Omega }_0\left[n(k^au_a)\right]_Pdy,$$ (21) where $`k^a`$ is the propagation vector of the radiation flux. Equation (21) assumes the counting of all sources at P with number density $`n`$. Consequently, if we want to consider the more realistic situation where only a fraction of the galaxies in the proper volume $`dV=(r_0)^2d\mathrm{\Omega }_0dl=(r_0)^2d\mathrm{\Omega }_0(k^au_a)dy`$ is actually detected and included in the observed number count, we have to write $`dN`$ in terms of a selection function $`\psi `$ which represents this detected fraction of galaxies. 
Then equation (21) becomes<sup>23</sup> $$dN_0=\psi dN=\psi \left[ndV\right]_P=(r_0)^2\psi d\mathrm{\Omega }_0\left[n(k^au_a)\right]_Pdy,$$ (22) where $`dN_0`$ is the fractional number of sources actually observed in the unit proper volume $`dV`$ with a total of $`dN`$ sources. In principle $`\psi `$ can be estimated from a knowledge of the galactic spectrum, the observer area distance, the redshift, and the detection limit of the sample as given by the limiting flux in a certain frequency bandwidth. The other quantities in equation (22) come from the assumed cosmological model itself. In order to determine $`\psi `$ we need to remember that in any spacetime geometry the observed flux in bandwidth $`W`$ is given by equations (10) and (13), $$F_W=\frac{L(z)}{4\pi (r_0)^2(1+z)^3}_0^{\mathrm{}}W(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu .$$ (23) Then, if a galaxy at a distance $`r_0`$ is to be seen at flux $`F_W`$, its luminosity $`L(z)`$ must be bigger than $`\{4\pi (r_0)^2(1+z)^3F_W\}/\{_0^{\mathrm{}}W(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu \}`$. Therefore, the probability that a galaxy at a distance $`r_0`$ and with redshift $`z`$ is included in a catalog with maximum flux $`F_W`$ is, $$𝒫\equiv \psi (\ell )=_{\ell }^{\mathrm{}}\varphi (w)𝑑w,$$ (24) where this integral’s lower limit is $$\ell =\frac{1}{L_{\ast }}\frac{4\pi (r_0)^2(1+z)^3F_W(z)}{_0^{\mathrm{}}W(\nu )𝒥\left[\nu (1+z)\right]𝑑\nu },$$ (25) $`L_{\ast }`$ is a parameter, and $`\varphi (w)`$ is the luminosity function.<sup>1</sup> In the Schechter<sup>24</sup> model $`L_{\ast }`$ is a characteristic luminosity at which the luminosity function exhibits a rapid change in its slope. 
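The selection probability of equations (24) and (25) can be sketched numerically. The fragment below assumes the Schechter-type form $`\varphi (w)=\varphi _{\ast }w^\alpha e^{-w}`$ for the luminosity function and evaluates the tail integral by simple quadrature; the parameter values are illustrative only, and the case $`\alpha =0`$, for which $`\psi (\ell )=\varphi _{\ast }e^{-\ell }`$ in closed form, serves as a check.

```python
import math

# Sketch of the selection function psi(l): the integral from l to infinity
# of phi(w) dw (eqs. 24-25), assuming a Schechter-type luminosity function
# phi(w) = phi_star * w**alpha * exp(-w).  phi_star, alpha and the grid
# parameters are illustrative choices, not fitted values.

def phi(w, alpha, phi_star=1.0):
    return phi_star * w ** alpha * math.exp(-w)

def psi(ell, alpha, phi_star=1.0, w_max=60.0, n=200000):
    """Trapezoidal estimate of the tail integral of phi from ell upwards.
    The exp(-w) factor makes truncation at w_max harmless."""
    h = (w_max - ell) / n
    s = 0.5 * (phi(ell, alpha, phi_star) + phi(w_max, alpha, phi_star))
    for i in range(1, n):
        s += phi(ell + i * h, alpha, phi_star)
    return s * h

if __name__ == "__main__":
    # alpha = 0 has the closed form psi(ell) = phi_star * exp(-ell).
    print(psi(0.5, 0.0), math.exp(-0.5))
```

For realistic negative slopes (e.g. $`\alpha `$ near $`-1`$) the integrand diverges at $`w\to 0`$, but the tail integral remains finite for any $`\ell >0`$, which is all the selection function requires.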
Now, if we assume spherical symmetry, then equation (22) becomes, $$dN_0=4\pi (r_0)^2\psi (\ell )\left[n(k^au_a)\right]_Pdy.$$ (26) Thus, the number of galaxies observed up to an affine parameter $`y`$ at a point P down the light cone may be written as $$N_0=4\pi _0^y(r_0)^2\psi (\ell )\left[n(k^au_a)\right]_P𝑑\overline{y},$$ (27) which generalizes Peebles’ Euclidean equation (7.40)<sup>1</sup> into a relativistic setting. Equation (27) is deceptively simple. It is in fact a highly non-linear and difficult-to-compute function, as all quantities entering the integrand are functions of the past null cone affine parameter $`y`$. Therefore, in principle, they must be explicitly calculated before they can be entered into equation (27). In some cases one may avoid this explicit determination and use instead the radial coordinate,<sup>10,17,25-28</sup> a method which turns out to be easier than finding these expressions in terms of $`y`$. Then, once $`N_0(y)`$ is obtained, it becomes possible to relate it to other observables, since they are all functions of the past null cone affine parameter. For example, if one can derive an analytic expression for the redshift in a given spacetime, say $`z=z(y)`$, and if this expression can be analytically inverted, then we can write $`N_0`$ as a function of $`z`$. It is important to mention that the local number density $`n`$ is given in units of proper density and, therefore, in order to take proper account of the curved spacetime geometry, one must relate $`n`$ to the local density as given by the right hand side of Einstein’s field equations. If, for simplicity, we suppose that all sources are galaxies with a similar rest-mass $`M_g`$, then $`n=\rho /M_g`$. The discussion above shows that the theoretical determination of $`N_0`$ depends critically on the spacetime geometry and the luminosity function $`\varphi `$. 
For the latter, the Schechter<sup>24</sup> model has the form $`\varphi (w)=\varphi _{\ast }w^\alpha e^{-w}`$, where $`\varphi _{\ast }`$ and $`\alpha `$ are constant parameters. One must not forget that this luminosity function shape was originally determined from local measurements,<sup>24</sup> and the possible change of the shape and parameters of the luminosity function due to evolution,<sup>29-32</sup> that is, as we go down the light cone, is still under assessment. As a final remark, one must note that gravitational lensing magnification can also affect the counting of point sources, because weak sources with low flux might appear brighter due to lensing magnification. Such an effect will not be discussed here, since its full treatment demands more detailed information about the sources themselves, such as considering them as extended ones, and it is considered to be most important for QSO’s.<sup>16</sup> ## Conclusion In this paper I have advanced a general proposal for testing non-standard cosmological models by means of observational relations of cosmological point sources in some specific waveband, and their use in the context of data provided by the galaxy redshift surveys. I have also shown how the relativistic number-counting equation can be expressed in terms of the selection and luminosity functions, thus generalizing the Euclidean number-counts expression into a relativistic setting. The equations for colour and K-correction were also presented. All expressions obtained here are valid for any cosmological metric, since no specific geometry was assumed in their derivation. Although these observables can be specialized for a given spacetime geometry, some quantities must come from astrophysical considerations, namely the intrinsic luminosity $`L(z)`$, the source spectrum function $`𝒥(\nu )`$, and the luminosity function $`\varphi (w)`$. 
These cannot be obtained only from geometrical considerations, which means that the determination of the spacetime structure of the universe is a task intrinsically linked to astrophysical considerations and results. Further developments and applications of the general approach discussed here are the subject of a forthcoming paper.<sup>33</sup> ## Acknowledgments I am grateful to W. R. Stoeger for reading the original manuscript and for helpful comments. Financial support from Brazil’s CAPES Foundation is also acknowledged. ## References
1. P.J.E. Peebles, Principles of Physical Cosmology (Princeton University Press), 1993.
2. S. Weinberg, Gravitation and Cosmology (Wiley, New York), 1972.
3. A. Sandage, ARA&A, 26, 561, 1988.
4. A. Sandage, in B. Binggeli & R. Buser (eds.), The Deep Universe (Saas-Fee Advanced Course 23) (Springer, Berlin), 1995, p. 1.
5. G.F.R. Ellis, in R.K. Sachs (ed.), General Relativity and Cosmology (Proc. Int. School Phys. “Enrico Fermi”) (Academic Press, New York), 1971, p. 104.
6. G.F.R. Ellis & J.J. Perry, MNRAS, 187, 357, 1979.
7. G.F.R. Ellis, J.J. Perry & A.W. Sievers, AJ, 89, 1124, 1984.
8. A.W. Sievers, J.J. Perry & G.F.R. Ellis, MNRAS, 212, 197, 1985.
9. G.C. McVittie, QJRAS, 15, 246, 1974.
10. M.B. Ribeiro, Gen. Rel. Grav., 33, 1699, 2001 (astro-ph/0104181).
11. J. Kristian & R.K. Sachs, ApJ, 143, 379, 1966.
12. M.S. Longair, in B. Binggeli & R. Buser (eds.), The Deep Universe (Saas-Fee Advanced Course 23) (Springer, Berlin), 1995, p. 317.
13. D.W. Hogg, preprint, 2000 (astro-ph/9905116).
14. D. Scott, J. Silk, E.W. Kolb & M.S. Turner, in A.N. Cox (ed.), Allen’s Astrophysical Quantities, 4th edition (Springer, Berlin), 2000, p. 643.
15. I.M.H. Etherington, Phil. Mag., 15, 761, 1933; reprinted, Gen. Rel. Grav., in press, 2002.
16. P. Schneider, J. Ehlers & E.E. Falco, Gravitational Lenses (Springer, Berlin), 1992.
17. M.B. Ribeiro, ApJ, 441, 477, 1995 (astro-ph/9910145).
18. J. Binney & S. Tremaine, Galactic Dynamics (Princeton University Press), 1987.
19. M.L. Humason, N.U. Mayall & A.R. Sandage, ApJ, 61, 97, 1956.
20. J.B. Oke & A. Sandage, ApJ, 154, 21, 1968.
21. R.G. Kron, in B. Binggeli & R. Buser (eds.), The Deep Universe (Saas-Fee Advanced Course 23) (Springer, Berlin), 1995, p. 233.
22. C. Hellaby, A&A, 372, 357, 2001 (astro-ph/0010641).
23. G.F.R. Ellis, S.D. Nel, W.R. Stoeger & A.P. Whitman, Phys. Rep., 124, 315, 1985.
24. P. Schechter, ApJ, 203, 297, 1976.
25. M.B. Ribeiro, ApJ, 388, 1, 1992.
26. M.B. Ribeiro, ApJ, 395, 29, 1992.
27. M.B. Ribeiro, ApJ, 415, 469, 1993.
28. M.B. Ribeiro, in D. Hobill, A. Burd & A. Coley (eds.), Deterministic Chaos in General Relativity (Plenum, New York), 1994, p. 269.
29. C.J. Lonsdale & A. Chokshi, AJ, 105, 1333, 1993.
30. C. Gronwall & D.C. Koo, ApJ, 440, L1, 1995.
31. R.S. Ellis et al., MNRAS, 280, 235, 1996.
32. N. Cross et al., MNRAS, in press, 2002 (astro-ph/0012165).
33. M.B. Ribeiro & W.R. Stoeger, in preparation, 2002.
no-problem/9910/cond-mat9910423.html
ar5iv
text
# EFFECTS OF CORE-HOLE SCREENING ON SPIN-POLARISED AUGER SPECTRA FROM FERROMAGNETIC Ni (April 26, 1999) ## Abstract We calculate the spin- and temperature-dependent local density of states for ferromagnetic Ni in the presence of a core hole at a distinguished site in the lattice. Correlations among the valence electrons and between valence and core electrons are described within a multi-band Hubbard model which is treated by means of second-order perturbation theory around the Hartree-Fock solution. The core-hole potential causes strong screening effects in the Ni valence band. The local magnetic moment is found to be decreased by a factor of 5-6. The consequences for the spin polarisation of CVV Auger electrons are discussed. It was pointed out by Allenspach et al. that the experimentally observed spin polarisation of the CVV Auger spectrum of Ni is substantially smaller than the band spin-polarisation. By contrast, in the case of Fe the band and Auger spin-polarisations are comparable. The authors of ref. argue that the ground-state configuration of Ni in a solid is 3d<sup>9</sup> which, as a consequence of the core-hole screening, becomes a 3d<sup>10</sup> configuration with a vanishing magnetic moment. The observed finite Auger polarisation is then attributed to a core-hole polarisation caused by resonant excitation of a core electron into the valence band. Furthermore, in the case of Ni only the minority-spin (3d) states are unoccupied (strong ferromagnet). The situation is different in the case of Fe, since there are unoccupied minority as well as majority spin states (weak ferromagnet). In the present paper we investigate the effects of core-hole screening in the initial state of the Auger process quantitatively, i. e. in a model of correlated itinerant electrons. 
Besides the local magnetic moment we are interested in the local density of states, which is relevant for the Auger line shape, as is already known from the simple self-convolution model of Lander. We consider a multi-band Hubbard-type model for the 3d, 4s and 4p electrons. The atomic basis orbitals are assumed to have a well-defined angular-momentum character $`L=\{l,m\}`$. The hopping and overlap integrals for the one-particle part of the Hamiltonian corresponding to the non-orthogonal atomic basis are taken from (paramagnetic) tight-binding band-structure calculations. The relatively broad 4s and 4p bands are assumed to be sufficiently well described by band theory. The Coulomb interaction among the 3d electrons is assumed to be strongly screened. Consequently, the interaction part of the Hamiltonian consists of on-site 3d interactions only. Exploiting atomic symmetries, the complete Coulomb matrix $`U_{L_1L_2L_4L_3}`$ can be expressed via $`3j`$-symbols in terms of three effective Slater parameters ($`F^0`$, $`F^2`$, $`F^4`$). Equivalently, the Coulomb matrix can be parametrised by the averaged direct and exchange correlation parameters $`U`$ and $`J`$ (we take the atomic ratio $`F^2/F^4`$=0.625 for the effective Slater integrals, which is a reasonable assumption for 3d-transition metals). The one-particle excitation spectrum is calculated by second-order perturbation theory around the Hartree-Fock solution (SOPT-HF). We furthermore employ the local approximation since it is known that the effects due to the (weak) $`𝐤`$-dependence of the self-energy are fairly small within SOPT-HF applied to the multi-band Hubbard model. The interaction parameters are chosen as $`U`$=2.47 eV and $`J`$=0.5 eV, which reproduces the measured value $`m`$=0.56 $`\mu _\mathrm{B}`$ for the $`T`$=0 magnetic moment. The ratio $`J/U\approx 0.2`$ is a typical value for the late 3d-transition metals. The resulting density of states for Ni (fcc) is shown on the l. h. s. of fig. 
1 for three different temperatures ($`T`$=0, $`T`$=0.9$`T_\mathrm{C}`$ and $`T`$=$`T_\mathrm{C}`$). Due to the imaginary part of the SOPT-HF self-energy the spectra are strongly damped compared to the results of band-structure calculations. In the energy region $`\pm `$2 eV around the chemical potential $`\mu `$ there are distinguishable structures. As a consequence of the non-zero slope of the real part of the self-energy at $`E`$=$`\mu `$, a considerable band-narrowing is observed. The temperature-dependent difference between $`\uparrow `$ and $`\downarrow `$ spectra is more or less given by a rigid shift which disappears for $`T\to T_\mathrm{C}`$. The corresponding magnetisation curve (circles in the inset of fig. 1) has a Brillouin-function-like form. The Curie temperature turns out to be $`T_\mathrm{C}`$=1655 K and is thereby about a factor of 2.6 larger than the measured value of 624 K. This overestimation of $`T_\mathrm{C}`$ is probably due to the mean-field character of the SOPT-HF. It should be noted that a simple LDA+$`U`$ (Hartree-Fock) calculation yields a much higher value ($`T_\mathrm{C}\approx `$2500 K). Conceptually, the Auger process can be divided into two subprocesses. The first one is the creation of a core hole at a particular lattice site $`𝐑_c`$, e. g. by absorbing an x-ray quantum. The second one is the radiationless decay of the core hole by ejecting an Auger electron. Provided that the lifetime of the core hole is large compared to typical relaxation times of the valence electrons, the two subprocesses become independent. Since the Auger process takes place locally, the Auger spectrum is influenced by the additional core-hole potential in the initial state. To describe the core-hole effects we have to extend the Hamiltonian. In the one-particle part we additionally consider a non-degenerate (s-like) and dispersionless core level with a one-particle energy well below the valence band. 
In the interaction part we add a density-density interaction between core and valence electrons. This interaction is responsible for the screening of the core-hole potential. The corresponding Coulomb-matrix elements are taken to be orbitally independent ($`U_L^c`$=$`U_c`$). We assume an infinite lifetime of the core hole in the initial state for the Auger process, as there are no decay terms in the Hamiltonian. Thus, the core-level occupation is a good quantum number. Thermodynamic averages in the presence of the core hole have to be performed in the subspace of the Hilbert space that is built up by all many-body states with a core hole at the site $`𝐑_c`$. In practice, this is done by introducing an appropriate Lagrange parameter. The extended Hamiltonian and the averaging procedure introduce two new terms in the valence-band self-energy. The Hartree-like term $`\delta _{\mathrm{𝐑𝐑}_c}`$$`U_c`$ represents the additional core-hole potential seen by the valence electrons. Thereby, the translational symmetry is broken and all occupation numbers become site dependent. Since the correlation effects among the valence electrons depend on the occupations, this introduces an extra screening term in the self-energy. These screening effects are in general extended over some shells around $`𝐑_c`$. A reasonable approximation for 3d-transition metals is to assume a complete screening of the core-hole potential already at the site $`𝐑_c`$, i. e. the total occupation at $`𝐑_c`$ is increased by one electron, and one is left with a single-site scattering problem (for a more detailed discussion see ref. ). Here, the complete screening is used as a condition to fix the core-valence interaction parameter $`U_c`$ at $`T`$=0. The local density of states at the site $`𝐑_c`$ in the presence of the core hole is shown on the r. h. s. of fig. 1. The structure of the spectrum has remarkably changed. Spectral weight is transferred to lower energies. 
Especially for minority-spin electrons a redistribution from energies above to below the chemical potential $`\mu `$ is visible. By comparing the quasi-particle weights (band-width renormalisation) with the unscreened case, we find the screened case to be less strongly correlated, since one is closer to the limit of the completely filled (3d) band. As a consequence of the fact that Ni is a strong ferromagnet, only minority spin states can be populated to screen the core hole. Indeed, this leads to a drastic reduction of the local magnetic moment (0.095 $`\mu _\mathrm{B}`$). The temperature dependence of the local magnetic moment in the presence of the core hole is shown in the inset (squares). We conclude that the comparatively small spin polarisation of Auger electrons for Ni is due to the screening of the core-hole potential in the initial state. Within the considered itinerant-electron model including 4s and 4p states, however, even a complete screening does not lead to a fully vanishing local magnetic moment. Acknowledgements: Financial support of the Deutsche Forschungsgemeinschaft within the project No. 158/5-1 is gratefully acknowledged. The numerical calculations were performed on a CrayT3E at the Konrad-Zuse-Zentrum für Informationstechnologie Berlin (ZIB).
no-problem/9910/cond-mat9910111.html
ar5iv
text
# Computer simulations of the mechanism of thickness selection in polymer crystals ## I Introduction In 1957 Andrew Keller reported that polyethylene formed chain-folded lamellar crystals from solution. This discovery was followed by the confirmation of the generality of this morphology—lamellar crystals are formed on crystallization from both solution and the melt for a wide variety of polymers—and the basic phenomenological laws describing such properties as the thickness and growth rate. In particular, the crystal thickness, $`l`$, has been found to be inversely proportional to the supercooling, which is interpreted as resulting from $`l`$ being slightly larger than $`l_{\mathrm{min}}`$, the minimum thickness for which a lamellar crystal is stable with respect to the solution or melt, i.e. $`l=l_{\mathrm{min}}+\delta l`$, where $`\delta l`$ is small. Surprisingly, however, no theoretical consensus has yet been reached as to the mechanism of this seemingly simple behaviour. In particular, two of the most well-known theories—the Lauritzen-Hoffman (LH) surface nucleation theory and the Sadler-Gilmer (SG) entropic barrier model—present very different explanations of thickness selection. Of course, in such a situation, one would like to determine which of the theories, if any, is closest to the truth. There are two aspects to such a task. Firstly, the predictions of the theories should be critically compared with experimental results. In the case of polymer crystallization both the LH and SG theories are able to reproduce the basic behaviour: the observed temperature dependence of the thickness and the growth rate. Additionally, Hoffman and coworkers have further developed the surface nucleation approach in order to explain some of the more detailed behaviour of crystallizing polymers, for example the regime transitions in the growth rate. However, this comparison does not conclusively favour one of the theories. 
This situation illustrates the fact that although consistency with experiment is an important first hurdle for any theory, it does not automatically imply the correctness of a theory. There may be a number of different ways of generating a particular experimental law. Furthermore, the number of parameters in a complex theory may give the theory sufficient plasticity to fit a wide variety of scenarios. Secondly, it is important that the assumptions of a theory, particularly those about the microscopic mechanisms, are critically examined. However, in the case of polymer crystallization this task is very difficult to achieve experimentally. By addressing this gap, computer simulations can potentially play an important role in this field. Such simulations could range from examining simple models to performing realistic atomistic simulations of the crystal growth process. The former could allow the effects of relaxing some of the theoretical assumptions to be determined and the latter could provide a detailed molecular picture of the growth process. Indeed, there has been an increasing number of computational studies pursuing these aims. In this paper I will review my efforts in this direction and hope to illustrate the positive role that computer simulations can play in helping to understand polymer crystallization. In particular, the aim of my simulations has been to critically examine the LH and SG theories. ## II Free Energy profiles In the LH theory the growth of a new layer is modelled as the deposition of a succession of stems (straight sections of the chain that traverse the growth face) along the growth face from an initial nucleus, where the length of each stem is the same as the thickness of the lamella. The inset of Figure 1 illustrates the geometry of this mechanism. To analyse the kinetics of growth, a thermodynamic description of the nucleation and growth of a new layer is first required. 
The free energy of a configuration with $`N_{\mathrm{stem}}`$ complete stems is taken to be $$A(N_{\mathrm{stem}})=2bl\sigma +2(N_{\mathrm{stem}}-1)ab\sigma _f-N_{\mathrm{stem}}abl\mathrm{\Delta }F,$$ (1) where $`a`$ and $`b`$ are the width and depth of a stem, $`l`$ is the thickness of the lamella, $`\sigma `$ is the lateral surface free energy, $`\sigma _f`$ is the fold surface free energy, and $`\mathrm{\Delta }F`$ is the free energy of crystallization. The first term corresponds to the free energy of the two lateral surfaces created on the deposition of the first stem and is proportional to $`l`$. The second term is the free energy of the new fold surface created on the deposition of subsequent stems. It is then assumed that at the barrier between configurations with different numbers of stems all the new surfaces have been created and that a fraction $`\mathrm{\Psi }`$ of the free energy of crystallization is released. This then gives the LH free energy profile that is illustrated in Figure 1. From this free energy profile, $`S(l)`$, the flux over the barrier, can be obtained. The observed crystal thickness is then taken to correspond to the average $$\overline{l}=\int _{l_{\mathrm{min}}}^{\mathrm{\infty }}lS(l)\,dl.$$ (2) This average thickness is close to the value of $`l`$ at the maximum in $`S(l)`$, which in turn is close to, but slightly above $`l_{\mathrm{min}}`$, thus reproducing the observed behaviour of $`l`$. The maximum in $`S(l)`$ is the result of two competing factors. The free energy barrier for deposition of the first stem increases with $`l`$, thus making the growth of thick crystals prohibitively slow. However, as $`l_{\mathrm{min}}`$ is approached from above, the thermodynamic driving force for crystallization goes to zero. 
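As a concrete illustration, the sawtooth profile of Equation (1) is easy to tabulate numerically. The sketch below evaluates the free energy of a partially completed layer; all parameter values are illustrative assumptions, not those of any fitted model:

```python
# Sketch of the Lauritzen-Hoffman free-energy profile of Equation (1).
# All parameter values below are illustrative assumptions.

def lh_free_energy(n_stem, l, a=1.0, b=1.0, sigma=1.0, sigma_f=2.0, dF=0.5):
    """Free energy of a configuration with n_stem complete stems of length l."""
    return 2*b*l*sigma + 2*(n_stem - 1)*a*b*sigma_f - n_stem*a*b*l*dF

l = 10.0
profile = [lh_free_energy(n, l) for n in range(1, 6)]
# The cost of the first stem, dominated by 2*b*l*sigma, grows linearly
# with l, which is what makes the growth of thick crystals slow.
first_stem_cost = lh_free_energy(1, l)
```

With these (assumed) parameters each additional stem lowers the free energy by $`abl\mathrm{\Delta }Fminus2ab\sigma _f`$'s worth, so the profile decreases once the first-stem barrier has been paid.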
It is important to note that by integrating over $`l`$, Equation (2) assumes that there are crystals with all values of $`l`$ greater than $`l_{\mathrm{min}}`$ which all grow with constant thickness and contribute to the average $`\overline{l}`$. Those crystals with a thickness close to the maximum in $`S(l)`$ dominate this ensemble and contribute more to Equation (2) because of their rapid growth. As was realized by Frank and Tosi, the results of experiments where the temperature is changed during crystallization argue against such an ensemble. The temperature jumps give rise to steps on the lamellae, showing that a crystal need not necessarily grow at constant thickness. We will come back to this issue later, but in this section we focus on the LH free energy profile. In particular, we compare this theoretical profile with ones computed from simulations of a simple polymer. In our model the polymer is represented by a self-avoiding walk on a simple cubic lattice. There is an attractive energy, $`-ϵ`$, between non-bonded polymer units on adjacent lattice sites and between polymer units and the surface, and an energetic penalty, $`ϵ_g`$, for kinks (or ‘gauche bonds’) in the chain. The parameter $`ϵ_g`$ determines the stiffness of the chains. In our simulations we have included a surface which represents the growth face of a polymer crystal. To follow the crystallization of the polymer on the surface, we need to define an order parameter which determines the degree of crystallinity. We use $`N_{\mathrm{xtal}}`$, the size of the largest fragment of the polymer with the structure of a target crystalline configuration. In our case, we examine the crystallization of a 200-unit chain into a structure with 5 stems of length 40 units. In order to compare with the theoretical profiles we have to constrain the other $`N-N_{\mathrm{xtal}}`$ units in the chain to be disordered. 
The simulations were carried out using configurational-bias Monte Carlo, and the umbrella sampling technique was used to calculate the free energy profiles. The free energy profiles that we obtained are shown in Figure 2a. They show the expected temperature dependence: at low temperature the crystal is most stable and at high temperature the disordered state is most stable. Note that the value of $`N_{\mathrm{xtal}}`$ for the disordered state is non-zero, because the disordered polymer is adsorbed on the surface. The adsorbed polymer is bound to have some short straight sections that qualify as crystalline by the definition of $`N_{\mathrm{xtal}}`$. The free energy profiles also have a sawtooth structure resembling that of the theoretical profile. The barriers occur immediately after the previous stem has been completed, and correspond to the formation of a new fold. They are followed by a monotonic decrease in energy as this new stem grows to completion. In the language of the LH theory $`\mathrm{\Psi }(N_{\mathrm{stem}}\to N_{\mathrm{stem}}+1)\approx 0`$ for $`N_{\mathrm{stem}}\geq 2`$. However, there is no feature in the simulation profiles that corresponds to the formation of the first fold. This is because the initial nucleus is not a single stem, but two stems connected by a fold that grow simultaneously. Such a possibility had previously been suggested by Point. Confirmation of a two-stem nucleus comes from a simple model calculation of the free energy profile. We can write the free energy as $$A(N_{\mathrm{xtal}})=A_{\mathrm{coil}}(N-N_{\mathrm{xtal}})-kT\mathrm{ln}\sum \mathrm{exp}\left(-E_{\mathrm{xtal}}/kT\right),$$ (3) where the sum is over all possible crystalline configurations which are $`N_{\mathrm{xtal}}`$ units long, $`E_{\mathrm{xtal}}`$ is the energy of the crystalline configuration, and $`A_{\mathrm{coil}}`$ is the free energy of an ideal two-dimensional coil. The resulting profile is very similar to the simulation profile (Figure 2b). 
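Once the crystalline configurations and their energies have been enumerated, the Boltzmann sum over them reduces the free energy calculation to one line. The sketch below assumes the energy list is supplied; generating it for the lattice model is the (omitted) hard part:

```python
import math

def crystalline_free_energy(energies, a_coil, kT=1.0):
    """Free energy of a chain whose crystalline fragment can adopt the
    given configurations: A = A_coil - kT * ln(sum of Boltzmann factors).
    The energy list is an assumed input; enumerating the configurations
    of the lattice model is omitted here."""
    z = sum(math.exp(-e / kT) for e in energies)
    return a_coil - kT * math.log(z)
```

With a single configuration of energy $`E`$ this reduces to $`A_{\mathrm{coil}}+E`$, as it should.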
In particular, there is no feature due to the formation of the first fold. However, when we force the initial nucleus to be a single stem by restricting the sum in the above equation to only those crystalline configurations with one incomplete stem, a free energy barrier associated with the formation of the first fold appears. The reason for the preference for a two-stem nucleus is simply energetic. For $`N_{\mathrm{xtal}}>4ϵ_g/ϵ+2`$ the two-stem nucleus is lower in energy because of the interaction between the two stems. Our simulations were performed on a surface that was infinite. Whether a two-stem nucleus is to be expected when the thickness of the growth face is finite, as for a lamellar crystal, depends upon how this critical size compares to the thickness of the lamella. It can clearly be seen from Figure 2b that the two-stem nucleus significantly reduces the nucleation barrier. In particular, it will no longer be proportional to $`l`$. This has significant implications for the LH theory given the key role played by this initial free energy barrier in constraining $`\overline{l}`$ to a value close to $`l_{\mathrm{min}}`$. Before we move on we should make a number of comments. First, the polymer model is very simple, and although there is no obvious reason why the thermodynamic argument behind the two-stem nucleus should not also apply to a real polymer, there may be factors that are not included in our model that come into play. Second, the profiles reflect our choice of order parameter. As we monitor crystallization unit by unit, during the growth of the first two stems the lateral surface energy is paid for at the same time as the free energy of crystallization is released. Therefore, in this size range $`\mathrm{\Psi }`$ is effectively equal to 1, albeit with the possibility of a two-stem nucleus. 
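The energetic preference for a two-stem nucleus can be made explicit with a simple contact count. In the sketch below a fold is charged two gauche bonds ($`2ϵ_g`$) and two stems of $`N_{\mathrm{xtal}}/2`$ units gain $`N_{\mathrm{xtal}}/2-1`$ nearest-neighbour contacts of $`ϵ`$ each; this bookkeeping is an assumption, chosen to be consistent with the quoted threshold $`N_{\mathrm{xtal}}>4ϵ_g/ϵ+2`$:

```python
def two_stem_energy_gain(n_xtal, eps, eps_g):
    """Energy of a two-stem nucleus minus that of a single stem of the
    same n_xtal units; negative means the two-stem nucleus is preferred.
    Assumed bookkeeping: the fold costs 2*eps_g, and the two stems of
    n_xtal/2 units share (n_xtal/2 - 1) contacts of -eps each."""
    return 2*eps_g - (n_xtal/2 - 1)*eps
```

Setting this difference to zero recovers $`N_{\mathrm{xtal}}=4ϵ_g/ϵ+2`$.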
Hoffman, however, advocates a $`\mathrm{\Psi }`$=0 version of the LH theory—it has the advantage that it avoids a ‘$`\delta l`$-catastrophe’ (a divergence of the lamellar thickness) at large supercooling—in which he postulates that prior to crystallization an aligned physisorbed state is formed that has lost its entropy but not yet gained the free energy of crystallization. Such a state cannot occur in our lattice model because there is no difference in the interaction with the surface for a disordered chain adsorbed on the surface and a crystalline layer. In an off-lattice model the interaction energy for the crystalline layer would be greater because the stems would fit into the grooves provided by the stems of the previous layer. Third, a good order parameter must pass continuously through intermediate values when the system goes between two states. However, one can imagine a number of mechanisms by which this criterion for $`N_{\mathrm{xtal}}`$ is broken. For example, in a realistic simulation of the surface crystallization of a long alkane into a once-folded configuration, the chain first formed non-adjacent crystalline stems connected by a loose fold which then came together by the propagation of a defect through one of the stems. Another possibility that has been observed in simulations is the formation of crystallites in different portions of a chain that subsequently coalesce to form a single crystallite. ## III A multi-pathway model In the previous section, in order to compare the LH free energy profile with those from simulation, we had to constrain the $`NN_{\mathrm{xtal}}`$ units not having the target structure to be disordered. If we had not done this, at temperatures where the crystal is most stable the rest of the chain would have formed a crystalline configuration with stem lengths different from the target configuration. 
This naturally raises questions about the LH assumption that the stems in a new layer must all have the same thickness as the previous layer. In this section, we examine the effects of relaxing some of the LH assumptions by studying a model in which the stems grow unit by unit and the length of a stem is unconstrained. We term it a multi-pathway model because it can take into account the many possible ways that a new crystalline layer can form. This idea is not new. Frank and Tosi, Price, and Lauritzen and Passaglia considered models where the stem length is not always constant, and Point, and DiMarzio and Guttman studied models where the stems could grow unit by unit. All these studies were performed at a time when computational resources were far more limited, so approximations and simplifications had to be made in order to render the models tractable. The natural way to solve such problems, though, is through the use of computational techniques, such as kinetic Monte Carlo. However, the only applications of computational methods to this problem were in a short note by Point and the continuation of this work in the PhD thesis of Dupire. Some of the results presented in these earlier studies are similar to those we report here. In our model we grow a single new crystalline layer by the successive growth of stems across a surface that represents the growth face of a polymer crystal. The polymer interactions are the same as used in the previous section, and we only model the crystalline portion of the polymer explicitly—the rest is assumed to behave like an ideal coil. An example configuration is illustrated in Figure 3 along with possible changes of configuration. These changes can only occur at the ends of the crystalline portion, and are selected using the kinetic Monte Carlo algorithm, in which a move is chosen with a probability proportional to the rate for that process. First, we shall examine the effect of the initial nucleus on the thickness of the layers grown. 
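The move-selection rule of the kinetic Monte Carlo algorithm — pick a move with probability proportional to its rate, and advance time by an exponentially distributed increment — can be sketched as follows. The move list here is a placeholder; in the model the moves are the configuration changes of Figure 3:

```python
import math
import random

def kmc_step(moves, rng=random):
    """One kinetic Monte Carlo step: 'moves' is a list of (label, rate)
    pairs; a move is selected with probability proportional to its rate,
    and the elapsed time is drawn from an exponential distribution with
    mean 1/(total rate)."""
    total = sum(rate for _, rate in moves)
    dt = -math.log(1.0 - rng.random()) / total
    x = rng.random() * total
    acc = 0.0
    for label, rate in moves:
        acc += rate
        if x < acc:
            return label, dt
    return moves[-1][0], dt  # guard against floating-point round-off
```

Over many steps, each move is chosen with frequency proportional to its rate, which is what makes the generated trajectory statistically consistent with the underlying master equation.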
If the stem lengths are unconstrained and the initial nucleus is a single stem, one might imagine that one way of reducing the large initial free energy barrier in Figure 1 (and achieving faster initial growth) would be for the stem length to increase gradually to its average value as crystallization progresses. For this pathway, the lateral surface free energy is paid for ‘in installments’ rather than all initially. This is exactly what we observe when we force the initial nucleus to be a single stem by only allowing growth from one end of the crystalline portion of the chain (Figure 4a). When a double-stem nucleus is allowed the initial growth is very different because there is now no longer a large initial free energy barrier to circumvent. The most important thing to note from these results is that, contrary to the LH theory, the thickness of the initial nucleus does not determine the thickness of the layer. Further confirmation of this can be obtained when we examine the growth from initial seed crystals. Whatever the thickness of the initial seed, the thickness of the growing crystal converges to the same value (Figure 4b). This implies that the thickness of a crystalline layer must be determined by factors which are operating on the deposition of each stem and not those specific to the initial stems. To determine what these factors might be, in Figure 5 we show how the thickness of a new layer depends on temperature. First, it is immediately obvious that the thickness of a new layer is not necessarily the same as that of the growth face. Second, all the curves increase as the temperature approaches $`T_m`$, the melting or dissolution temperature, because of the rise of $`l_{\mathrm{min}}`$. Third, the thickness also increases at low temperature, in this instance because it becomes increasingly difficult to scale the free energy barrier for forming a fold and so on average the stems continue to grow for longer. 
However, this rise is checked by the thickness of the growth face. It is unfavourable for the polymer to overhang the edge of the growth face because these units do not interact with the surface. Figure 5 only describes the growth of a single layer. However, as the thickness of the new layer is not generally the same as the thickness of the growth face, one needs to consider the addition of a succession of layers. If we assume that all the variations in the stem length within a layer are annealed out before a new layer begins to grow, this can be achieved using Figure 6a, in which we have plotted for a single temperature the thickness of the new layer against the thickness of the growth face. By following the dotted lines one can see what would happen for growth on a growth face that is 50 units thick: the first layer is 36 units thick, the second 28, the third 23, and so on. Thus, the thickness converges to the value $`l^{}`$ at which the curve crosses $`y=x`$, i.e. to the point where the thickness of the new layer is the same as that of the previous layer, and then the crystal continues to grow at this thickness. The mapping represented in Figure 6a is a fixed-point attractor. A similar picture emerges if we explicitly perform simulations of multi-layer growth. Figure 7 shows a cut through a typical configuration that results. Within 5–10 layers the thickness of the crystal converges to its steady-state value $`l^{}`$ and then growth continues at that thickness. The mechanism of thickness selection that occurs in our multi-pathway model is at odds with the LH theory. It shows that it is inappropriate to compare the growth rates of crystals of different thickness because the thickness has only one dynamically stable value for which growth at constant thickness occurs. The ensemble of crystals assumed by Equation (2) is fictitious. Furthermore, the growth rate of a new layer slows down as $`l^{}`$ is approached from above (inset of Figure 6a). 
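The fixed-point construction of Figure 6a can be illustrated with any contracting map from the thickness of the growth face to the thickness of the new layer. The linear map below is a made-up stand-in for the measured curve, chosen only to show the convergence to $`l^{}`$; its form and parameter values are assumptions:

```python
def next_layer_thickness(l_prev, l_min=12.0, alpha=0.5):
    """Toy map: thickness of the new layer given the thickness of the
    growth face. The linear form and the parameter values are
    illustrative assumptions; the fixed point sits at l_min + 1."""
    return l_min + 1.0 + alpha * (l_prev - l_min - 1.0)

l = 50.0  # a growth face 50 units thick, as in the example above
history = [l]
for _ in range(20):
    l = next_layer_thickness(l)
    history.append(l)
# The thickness converges to the fixed point l*, where the curve
# crosses y = x, and growth then continues at constant thickness.
```

Because $`|\alpha |<1`$, the deviation from the fixed point shrinks geometrically with each layer, mimicking the layer-by-layer convergence seen in the multi-layer simulations.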
However, we should note that in some of the multiple-pathway studies mentioned earlier, it was realized that stable growth can only occur at the one thickness for which a new layer has the same thickness as the previous one. Since then, this insight has for the most part been neglected. To analyse the reasons for the dynamical convergence of the thickness to the value $`l^{}`$ we examine how the probability distributions for the stem length depend on the thickness of the growth face (Figure 8). $`l_{\mathrm{min}}`$ places one constraint on the stem length; only a small fraction of the stems can be shorter than $`l_{\mathrm{min}}`$ if the layer is to be thermodynamically stable. The thickness of the growth face places the second constraint on the stem length; it is energetically unfavourable for the polymer to extend beyond the edges of the growth face. There is also a third, weaker kinetic constraint on the stem length. At every step there is always a certain probability that a fold will be formed. Therefore, even in the absence of the second constraint, i.e. an infinitely thick growth face, the probability distribution will decay exponentially to zero at large stem length (Figure 8a). Although this effect prevents the thickness from ever diverging in a $`\delta l`$-catastrophe, it does not stop the thickness becoming very large. When the growth face is significantly thicker than $`l_{\mathrm{min}}`$ there is a range of stem lengths between $`l_{\mathrm{min}}`$ and the thickness of the growth face that are viable, and therefore the new layer will be thinner than the previous layer. However, as the thickness of the growth face decreases, the probability distributions of the stem length become increasingly narrow and the difference in probability between the stem length being greater or less than the surface thickness diminishes. 
Finally, at $`l^{}`$, as the thickness of the growth face approaches $`l_{\mathrm{min}}`$, the probability distribution becomes symmetrical about the surface thickness and the thickness of the new layer becomes equal to the thickness of the growth face (Figure 8e). When the thickness is less than $`l^{}`$, the asymmetry of the probability distribution is reversed (Figure 8f). It is, therefore, through the combined action of the two thermodynamic constraints on the stem length that the thickness converges to a value close to $`l_{\mathrm{min}}`$. The picture is not quite this simple at all temperatures. As the supercooling decreases, it becomes increasingly unfavourable for a stem to overhang the edge of the growth face. Indeed, for sufficiently small supercoolings the probability distribution for the stem length never becomes symmetrical about the thickness of the growth face, not even when the thickness of the growth face is close to $`l_{\mathrm{min}}`$. This situation is illustrated in Figure 6b. After the growth of two layers on a 50-unit thick surface, the crystal stops growing because the outer layer is too thin for a new layer to form. For these supercoolings, as in the SG model, the rounding of the crystal profile inhibits growth. To overcome this barrier requires a cooperative mechanism whereby a new layer takes advantage of (and then locks in) dynamic fluctuations in the outer layer to larger thickness. However, unlike the SG model, the current model has no interlayer dynamics—we attempt to grow a new layer on an outer layer that is static—and so growth stops. Despite this it is clear that if this interlayer dynamics could be included, it would again lead to steady-state growth close to $`l_{\mathrm{min}}`$. We should note that this cessation of growth was also found in the model of Frank and Tosi at low supercoolings. Lauritzen and Passaglia were also aware of this effect, but they introduced an ad hoc energetic term in their rate constants to prevent it. 
However, in the restricted equilibrium model of Price this effect was absent. In this study each new layer, but not the crystal as a whole, was allowed to reach equilibrium and so the kinetic constraint on the stem length is absent. Finally, we should note that our multi-pathway model is not parameter-free, and that, like most other models of polymer crystallization (including the LH theory, the SG model and the earlier multi-pathway models), for some choices of parameters (not those used here) the lamellar thickness begins to increase at sufficiently large supercooling. This effect occurs because the large driving force for crystallization at large supercoolings reduces the effect that the thickness of the growth face has in constraining the stem lengths. ## IV The Sadler-Gilmer model In this section we re-examine the model used by Sadler and Gilmer in order to see whether the mechanism of thickness selection that we found in the previous section for our multi-pathway model also occurs in the SG model. Sadler and Gilmer interpreted this model in terms of an entropic barrier. In particular, they argued that the rounding of the crystal profile gives rise to an entropic barrier, which can only be surmounted by a fluctuation to a squarer profile before growth can continue. As this barrier increases with lamellar thickness it constrains the thickness to a value close to $`l_{\mathrm{min}}`$. However, we shall not dwell on this interpretation here, but instead direct the interested reader to a critique of this argument in Ref. . In the SG model the growth of a polymer crystal results from the attachment and detachment of polymer units at the growth face. The rules that govern the sites at which these processes can occur are designed to mimic the effects of the chain connectivity. In the original three-dimensional version of the model, under many conditions the growth face is rough and the correlations between stems in the direction parallel to the growth face are weak. 
Therefore, an even simpler two-dimensional version of the model was developed in which lateral correlations are neglected entirely, and only a slice through the polymer crystal perpendicular to the growth face is considered. The geometry of the model is shown in Figure 9. Changes in configuration can only occur at the outermost stem and stems behind the growth face are ‘pinned’ because of the chain connectivity. At each step, there are three possible changes in configuration: the outermost stem can increase in length, a new stem can be initiated and a polymer unit can be removed from the outermost stem. The model can be formulated in terms of a set of rate equations that can be easily solved by numerical integration. When we examine the dependence of the thickness of a layer on that of the previous layer, we again find a fixed-point attractor describing the convergence of the thickness to its steady-state value (Figure 10). Moreover, when we examine the probability distributions for the stem length we find evidence for the same three constraints as for the multi-pathway model (Figure 11b). The weaker nature of the kinetic constraint is particularly clear from the much more rapid exponential decay of the probability for stems that extend beyond the growth face. The role played by the two thermodynamic constraints in the mechanism of thickness selection is particularly clear from Figure 11b. As the thickness of the growth face decreases the viable range of stem lengths decreases until the thickness of the growth face meets $`l_{\mathrm{min}}`$ at the fixed point. ## V Discussion In this paper we have outlined evidence from computer simulations for a mechanism of thickness selection in lamellar polymer crystals that differs from the theories of Lauritzen and Hoffman, and Sadler and Gilmer. Instead, the mechanism has much more in common with the results of earlier multi-pathway models. 
We find a fixed-point attractor which describes the dynamical convergence of the crystal thickness to a value just larger than the minimum stable thickness, $`l_{\mathrm{min}}`$. This convergence arises from the combined effect of two constraints on the length of stems in a layer: it is unfavourable for a stem to be shorter than $`l_{\mathrm{min}}`$ and for a stem to overhang the edge of the previous layer. It is encouraging to note that we find the same mechanism of thickness selection operating in two models which make very different assumptions about the microscopic growth processes. This provides evidence of the generality of this mechanism, and so suggests that, although the models described here have a very simplified description of the microscopic dynamics, the physical principles behind the mechanism could be general enough to apply to real polymers. This mechanism of thickness selection is also consistent with experiments where the temperature is changed during crystallization. The steps that result indicate that the thickness of the lamellar crystals dynamically converges to the steady-state thickness for the new temperature by a mechanism similar to that which we observe in our simulations. Furthermore, if the step profiles could be characterized with sufficient resolution by atomic-force microscopy, it may be possible to extract the fixed-point attractor of a real polymer. However, for a temperature decrease the step profiles may also reflect the rounding of the crystal edge and for a temperature increase the roughness of the fold surface. Furthermore, any annealing mechanisms that operate could change the shape of the step profile from its as-formed state. Although the multi-pathway approach is, in some ways, an extension of the LH theory, the removal of many of the LH constraints leads to significantly different behaviour. 
In particular, our work undermines the LH assumptions that the initial nucleus determines the thickness of a layer, and shows that the approach embodied in Equation (2) (i.e. a comparison of the growth rates of the crystals in an ensemble of crystals of different thickness all of which grow at constant thickness) is inappropriate because crystals of arbitrary thickness do not necessarily continue to grow at that thickness. Although our results lead us to question the thickness selection mechanism in the LH theory, other aspects of the nucleation approach may not be affected by our critique. For example, the regime transitions are a result of the different functional dependence of the growth rate on the nucleation rate and the substrate completion rate in the different regimes. Recently, there have been a number of alternative theoretical proposals that have made recourse to metastable phases. Keller and coworkers suggested that crystallization of polyethylene could initially occur into the mobile hexagonal phase. These crystals would then thicken until a critical thickness was reached at which a phase transition to the orthorhombic phase would occur. Olmsted et al. have argued that the density fluctuations resulting from the spinodal decomposition of a polymer melt assist the nucleation of crystals. Strobl and coworkers have argued, on the basis of the thickness dependence of the crystallization and melting temperatures of syndiotactic polypropylene, and the granular texture in AFM images of the same polymer, that the polymer first crystallizes into blocks, which are subsequently stabilized when they fuse into lamellae. Our simulations can say little about these proposals since our polymer models are too simple to be able to capture such features. 
However, all these approaches are based on behaviour that has been observed in crystallization from the melt, so it is not clear how the ideas can apply to crystallization from solution, where the same basic laws for lamellar polymer crystals apply.
no-problem/9910/chao-dyn9910002.html
ar5iv
text
# Escape Probability and Mean Residence Time in Random Flows with Unsteady Drift (This work was begun during a visit to the Oberwolfach Mathematical Research Institute, Germany.) ## 1 Introduction The Lagrangian view of fluid motion is particularly important in geophysical flows since only Lagrangian data can be obtained in many situations. It is essential to understand fluid particle trajectories in many fluid problems. Stochastic dynamical systems arise as models for fluid particle motion in geophysical flows with random velocity field $`\dot{x}`$ $`=`$ $`f(x,y,t)+a(x,y)\dot{w}_1,`$ (1) $`\dot{y}`$ $`=`$ $`g(x,y,t)+b(x,y)\dot{w}_2,`$ (2) where $`w_1(t),w_2(t)`$ are two real independent Brownian motion processes, $`f,g`$ are the deterministic drift, and $`a,b`$ are the noise intensity coefficients, of the velocity field. Note that the generalized derivative of a Brownian motion process is a mathematical model for “white noise”. For general background on stochastic dynamical systems, see . Deterministic quantities, such as the escape probability (from a fluid domain) and the mean residence time (in a fluid domain), that characterize stochastic dynamics can be computed by solving backward partial differential equations of Fokker-Planck type. In a previous paper , for steady drift, i.e., when $`f,g`$ do not depend on time, we quantified fluid transport between flow regimes of different characteristic motion by escape probability and mean residence time; developed methods for computing escape probability and mean exit time; and applied these methods in the investigation of geophysical fluid dynamics. In this paper, we further consider the case of unsteady or nonautonomous drift $`f(x,y,t),g(x,y,t)`$, develop a numerical algorithm for computing escape probability and mean exit time, and demonstrate the application of this approach to a tidal flow model. 
## 2 Stochastic Dynamics <br>of Fluid Particle Motion For a planar bounded domain $`D`$, we can consider the exit problem of random solution trajectories of (1)-(2) from $`D`$. To this end, let $`\partial D`$ denote the boundary of $`D`$. The residence time of a particle initially at $`(x,y)`$ at time $`t`$ inside $`D`$ is the time until the particle first hits $`\partial D`$ (or escapes from $`D`$). The mean residence time $`\tau (x,y,t)=t+u(x,y,t)`$, where $`u(x,y,t)`$ satisfies $`u_t+{\displaystyle \frac{1}{2}}a^2u_{xx}+{\displaystyle \frac{1}{2}}b^2u_{yy}`$ $`+`$ $`f(x,y,t)u_x+g(x,y,t)u_y=-1,`$ (3) $`u(x,y,t)`$ $`=`$ $`0,(x,y,t)\in \partial D\times (0,T),`$ (4) $`u(x,y,T)`$ $`=`$ $`0,(x,y)\in D,`$ (5) where $`T`$ is big enough, i.e., $`T>\mathrm{sup}\tau (x,y,t)`$, and here the $`\mathrm{sup}`$ is taken over all $`(x,y)`$ in a compact set, namely the closure of the domain $`D`$. Note that $`u(x,y,t)=\tau (x,y,t)-t`$ is the mean residence time after the time instant $`t`$; it quantifies how much longer a particle will stay inside $`D`$ after we observe it at position $`(x,y)`$ at the time instant $`t`$. Let $`\mathrm{\Gamma }`$ be a part of the boundary $`\partial D`$. The escape probability $`p(x,y,t)`$ is the probability that the trajectory of a particle starting at position $`(x,y)`$ at instant $`t`$ in $`D`$ first hits $`\partial D`$ (or escapes from $`D`$) at some point in $`\mathrm{\Gamma }`$ (prior to escape through $`\partial D\setminus \mathrm{\Gamma }`$), and $`p(x,y,t)`$ satisfies $`p_t+{\displaystyle \frac{1}{2}}a^2p_{xx}+{\displaystyle \frac{1}{2}}b^2p_{yy}`$ $`+`$ $`f(x,y,t)p_x+g(x,y,t)p_y=0,`$ (6) $`p(x,y,t)`$ $`=`$ $`0,(x,y,t)\in (\partial D\setminus \mathrm{\Gamma })\times (0,T),`$ (7) $`p(x,y,t)`$ $`=`$ $`1,(x,y,t)\in \mathrm{\Gamma }\times (0,T),`$ (8) $`p(x,y,T)`$ $`=`$ $`1,(x,y)\in D.`$ (9) Note that $`p(x,y,t)`$ depends on $`\mathrm{\Gamma }`$, and it may be better denoted as $`p_\mathrm{\Gamma }(x,y,t)`$. Suppose that initial conditions (or initial particles) are uniformly distributed over $`D`$. 
The average escape probability $`P(t)`$ that a trajectory will leave $`D`$ along the subboundary $`\mathrm{\Gamma }`$ at time $`t`$, before leaving through the rest of the boundary, is given by $$P(t)=\frac{1}{|D|}\int _Dp(x,y,t)𝑑x𝑑y,$$ (10) where $`|D|`$ is the area of the domain $`D`$. ## 3 Numerical Approaches For the backward type of partial differential equation (3), we reverse the time: $$s=T-t.$$ Then the mean residence time $`u(x,y,s)`$ (we still use the same notation) satisfies $`u_s`$ $`=`$ $`{\displaystyle \frac{1}{2}}a^2u_{xx}+{\displaystyle \frac{1}{2}}b^2u_{yy}`$ (11) $`+`$ $`f(x,y,T-s)u_x+g(x,y,T-s)u_y+1,`$ $`u(x,y,s)`$ $`=`$ $`0,(x,y,s)\in \partial D\times (0,T),`$ (12) $`u(x,y,0)`$ $`=`$ $`0,(x,y)\in D.`$ (13) Similarly for the escape probability $`p(x,y,s)`$ (we still use the same notation), we have $`p_s`$ $`=`$ $`{\displaystyle \frac{1}{2}}a^2p_{xx}+{\displaystyle \frac{1}{2}}b^2p_{yy}+f(x,y,T-s)p_x+g(x,y,T-s)p_y,`$ (14) $`p(x,y,s)`$ $`=`$ $`0,(x,y,s)\in (\partial D\setminus \mathrm{\Gamma })\times (0,T),`$ (15) $`p(x,y,s)`$ $`=`$ $`1,(x,y,s)\in \mathrm{\Gamma }\times (0,T),`$ (16) $`p(x,y,0)`$ $`=`$ $`1,(x,y)\in D.`$ (17) A piecewise linear finite element approximation scheme was used for the numerical solution of the mean residence time $`u(x,y,s)`$ and the escape probability $`p(x,y,s)`$, described by the parabolic equations (11) and (14), respectively. By transforming back to the original time $`t=T-s`$, we get $`u(x,y,t)`$ and $`p(x,y,t)`$. We have used a few different time-discretization schemes, including the implicit backward Euler and Crank-Nicolson schemes . The code also works for a boundary defined by a collection of points lying on the boundary; piecewise cubic splines were constructed to define such boundaries. ## 4 Application to a Tidal Flow To demonstrate the above ideas and numerical algorithms, we consider a tidal flow model. This flow model is very idealistic and here we just use it as an illuminating example. Oscillatory tidal water motions dominate a large part of the coastal regions. 
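To make the time-reversed marching problem (11)-(13) concrete, here is a minimal explicit finite-difference sketch. This is an illustration only: the paper uses piecewise linear finite elements with implicit or Crank-Nicolson time stepping, while the grid size, time step, upwind differencing, and the steady cellular drift used here are all simplifying assumptions of this sketch.

```python
import math

def mean_residence_fd(eps=0.1, n=21, dt=1e-3, steps=500):
    """Explicit upwind finite-difference sketch of
    u_s = eps*(u_xx + u_yy) + f*u_x + g*u_y + 1,  u = 0 on the boundary,
    on the unit square, marching in reversed time s (u at s=0 is 0).
    For brevity the drift is the steady cellular flow; a time-dependent
    drift f(x,y,T-s) would simply be re-evaluated inside the time loop."""
    h = 1.0 / (n - 1)
    f = [[math.pi * math.sin(math.pi * i * h) * math.cos(math.pi * j * h)
          for j in range(n)] for i in range(n)]
    g = [[-math.pi * math.cos(math.pi * i * h) * math.sin(math.pi * j * h)
          for j in range(n)] for i in range(n)]
    u = [[0.0] * n for _ in range(n)]
    for _ in range(steps):
        new = [[0.0] * n for _ in range(n)]   # boundary rows/cols stay 0
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                lap = (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1]
                       - 4.0 * u[i][j]) / h**2
                # upwind differences: for a term +f*u_x, information
                # arrives from larger x when f > 0
                ux = ((u[i+1][j] - u[i][j]) / h if f[i][j] > 0
                      else (u[i][j] - u[i-1][j]) / h)
                uy = ((u[i][j+1] - u[i][j]) / h if g[i][j] > 0
                      else (u[i][j] - u[i][j-1]) / h)
                new[i][j] = u[i][j] + dt * (eps * lap + f[i][j] * ux
                                            + g[i][j] * uy + 1.0)
        u = new
    return u
```

The scheme is monotone for the chosen step sizes, so the computed mean residence time stays between 0 and the elapsed reversed time, in line with the maximum principle for (11).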
Beerens and Zimmerman considered a tidal flow model with velocity field $`u_o`$ $`=`$ $`\pi \mathrm{sin}(\pi x)\mathrm{cos}(\pi y)+\pi \lambda \mathrm{cos}(2\pi t),`$ (18) $`v_o`$ $`=`$ $`-\pi \mathrm{cos}(\pi x)\mathrm{sin}(\pi y),`$ (19) where $`\lambda `$ is a parameter measuring the intensity of the tidal wave $`\pi \mathrm{cos}(2\pi t)`$. We take $`0<\lambda <3`$ as used by Beerens and Zimmerman . As pointed out by Beerens and Zimmerman , it is essential to include more complicated temporal modes in this model in order to describe more realistic tidal flows. We include random temporal modes or white noise in this model, i.e., we consider a tidal flow model with an unsteady drift part and a random diffusive part: $`\widehat{u}_o`$ $`=`$ $`\pi \mathrm{sin}(\pi x)\mathrm{cos}(\pi y)+\pi \lambda \mathrm{cos}(2\pi t)+\sqrt{2ϵ}\dot{w}_1,`$ (20) $`\widehat{v}_o`$ $`=`$ $`-\pi \mathrm{cos}(\pi x)\mathrm{sin}(\pi y)+\sqrt{2ϵ}\dot{w}_2,`$ (21) where $`ϵ>0`$ is the constant intensity of the white noise. We assume that the random temporal modes are weaker than the time-periodic mode and so we take $`0<ϵ<0.1`$ in the following simulations. We study the transport of fluid particles in this tidal flow model. The equations of motion of fluid particles in this tidal flow are $`\dot{x}`$ $`=`$ $`\pi \mathrm{sin}(\pi x)\mathrm{cos}(\pi y)+\pi \lambda \mathrm{cos}(2\pi t)+\sqrt{2ϵ}\dot{w}_1,`$ (22) $`\dot{y}`$ $`=`$ $`-\pi \mathrm{cos}(\pi x)\mathrm{sin}(\pi y)+\sqrt{2ϵ}\dot{w}_2,`$ (23) where $`w_1(t),w_2(t)`$ are two independent Brownian motion processes. The unperturbed flow, with no temporal periodic “tidal” mode $`\mathrm{cos}(2\pi t)`$ and no temporal white noise modes $`\dot{w}_1,\dot{w}_2`$, is the so-called cellular flow: $`\dot{x}`$ $`=`$ $`\pi \mathrm{sin}(\pi x)\mathrm{cos}(\pi y),`$ (24) $`\dot{y}`$ $`=`$ $`-\pi \mathrm{cos}(\pi x)\mathrm{sin}(\pi y).`$ (25) Figure 1 shows the phase portrait of this unperturbed flow (24)-(25). 
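A direct way to see transport in this model is to integrate sample particle paths of (22)-(23); a minimal Euler-Maruyama sketch (the step size and the number of steps are illustrative choices, not taken from the paper):

```python
import math, random

def simulate_particle(x0, y0, lam=1.0, eps=0.1, dt=1e-3, steps=2000, seed=1):
    """Euler-Maruyama integration of the stochastic tidal flow (22)-(23)."""
    rng = random.Random(seed)
    x, y, t = x0, y0, 0.0
    path = [(x, y)]
    for _ in range(steps):
        fx = (math.pi * math.sin(math.pi * x) * math.cos(math.pi * y)
              + math.pi * lam * math.cos(2.0 * math.pi * t))
        fy = -math.pi * math.cos(math.pi * x) * math.sin(math.pi * y)
        # increments of the two independent Brownian motions w1, w2
        dw1 = rng.gauss(0.0, math.sqrt(dt))
        dw2 = rng.gauss(0.0, math.sqrt(dt))
        x += fx * dt + math.sqrt(2.0 * eps) * dw1
        y += fy * dt + math.sqrt(2.0 * eps) * dw2
        t += dt
        path.append((x, y))
    return path
```

Averaging the first-exit times of many such paths from a cell gives a Monte Carlo estimate of the mean residence time, which can be compared against the PDE solution.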
The partial differential equations for mean residence time $`u`$ of fluid particles in a fluid domain $`D`$, and for the escape probability $`p`$ of fluid particles cross a subboundary $`\mathrm{\Gamma }`$ of $`D`$, are the following (in reversed time $`s=Tt`$), respectively, $`u_s`$ $`=`$ $`ϵ(u_{xx}+u_{yy})`$ (26) $`+`$ $`[\pi \mathrm{sin}(\pi x)\mathrm{cos}(\pi y)+\pi \lambda \mathrm{cos}(2\pi Ts)]u_x`$ $``$ $`\pi \mathrm{cos}(\pi x)\mathrm{sin}(\pi y)u_y+1,`$ $`p_s`$ $`=`$ $`ϵ(p_{xx}+p_{yy})`$ (27) $`+`$ $`[\pi \mathrm{sin}(\pi x)\mathrm{cos}(\pi y)+\pi \lambda \mathrm{cos}(2\pi Ts)]p_x`$ $``$ $`\pi \mathrm{cos}(\pi x)\mathrm{sin}(\pi y)p_y.`$ In practical numerical simulations, the “final” time $`T`$ should be taken big enough so that the solutions $`u`$, $`p`$ do not change within a reasonable tolerance. To do so, we monitor the mean-square difference of $`u`$ (and also $`p`$) at time $`T`$ and $`T+1`$. When this difference is within a reasonable tolerance (we use $`0.001`$), we take the $`T`$ as the “final” time; otherwise, we increase the value of $`T`$ and do the simulation again, until the tolerance criterion is met. We take a fluid domain $`D`$ to be a typical cell, i.e., the unit square, in the unperturbed flow; see Figure 2. Unlike the stochastic systems with steady drift as studied by Brannan, Duan and Ervin , the mean residence time and escape probability depend on time (although the change is small in this specific example); see Figures 3, 4, 5 and 6. All these plots are for $`\lambda =1`$ and $`ϵ=0.1`$. The mean residence time and escape probability for $`0<\lambda <3`$ and $`0<ϵ<0.1`$ display similar features. In this flow model, it turns out that the average escape probability crossing the top boundary $`y=1`$ does not change much with time, with value around $`0.2639`$. We observe similar features for crossing other three side boundaries. 
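The stopping criterion for choosing the "final" time $`T`$ can be illustrated on a simpler 1D analog (a hypothetical example, not the paper's 2D solver): march $`u_s=ϵu_{xx}+1`$ with absorbing ends and stop once the mean-square difference between solutions one time unit apart falls below a tolerance.

```python
def solve_until_stationary(eps=0.1, n=21, dt=5e-3, tol=1e-3):
    """1D analog of the criterion for choosing the 'final' time T:
    march u_s = eps*u_xx + 1 with u = 0 at both ends, and stop once the
    mean-square difference between solutions one time unit apart < tol."""
    h = 1.0 / (n - 1)
    per_unit = int(round(1.0 / dt))      # time steps per unit of time
    u = [0.0] * n

    def step(u):
        v = u[:]
        for i in range(1, n - 1):
            v[i] = u[i] + dt * (eps * (u[i+1] - 2*u[i] + u[i-1]) / h**2 + 1.0)
        return v

    T = 0
    while True:
        prev = u
        for _ in range(per_unit):
            u = step(u)
        T += 1
        msd = sum((a - b) ** 2 for a, b in zip(u, prev)) / n
        if msd < tol:
            return T, u
```

For $`ϵ=0.1`$ the profile converges to the steady state $`x(1-x)/(2ϵ)`$, whose maximum is 1.25, after a few time units.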
In the tidal flow model (18)-(19) with only the time-periodic “tidal” mode $`\pi \lambda \mathrm{cos}(2\pi t)`$, Beerens and Zimmerman found that there are “islands” in the tidal flow, and fluid particles trapped in such islands will never escape, for the range $`0<\lambda <3`$ they considered . This phenomenon is common in non-dissipative, Hamiltonian planar systems. Although the dissipation in the oceans is small, “no matter how small the dissipation is, the (oceanic) fluid has substantial time to experience the action of dissipative forces” . So this “islands” phenomenon does not appear to be likely in realistic tidal flows; see more physical discussions in . In our tidal flow model (20)-(21), by contrast, with both the time-periodic “tidal” mode $`\pi \lambda \mathrm{cos}(2\pi t)`$ and temporal white noise modes, all fluid particles will eventually escape from any fluid domain in finite time after we first observe them; see Figures 3, 4. This feature is true for any $`ϵ>0`$. It appears that our stochastic tidal model is a little more realistic than Beerens and Zimmerman’s model . We remark again that we use this simple tidal flow model to demonstrate the applications of mean residence time and escape probability. In summary, in this paper we have discussed the quantification of fluid transport between flow regimes of different characteristic motion by escape probability and mean residence time, developed numerical algorithms to solve for escape probability and mean residence time, and applied these ideas and numerical algorithms to a tidal flow model. Acknowledgement. We would like to thank Ludwig Arnold for very useful suggestions. We also thank the Oberwolfach Mathematical Research Institute, Germany, for its hospitality. This work was supported by the NSF Grant DMS-9704345.
no-problem/9910/astro-ph9910443.html
ar5iv
text
# On the optimum spacing of stereoscopic imaging atmospheric Cherenkov telescopes ## 1 Introduction IACT stereoscopy – the simultaneous observation of air showers with multiple imaging atmospheric Cherenkov telescopes (IACTs) under different viewing angles – has become the technique of choice for most of the next-generation instruments for earth-bound gamma-ray astronomy in the VHE energy range, such as VERITAS or HESS. The stereoscopic observation of air showers allows improved determination of the direction of the primary and of its energy, as well as a better suppression of backgrounds, compared to single IACTs. A crucial question in the layout of stereoscopic systems of IACTs is the spacing of the identical telescopes. Obviously, the spacing of telescopes should be such that at least two telescopes fit into the Cherenkov light pool with its typical diameter of around 250 m. For higher energy showers the light pool can increase considerably, but at large distances ($`>`$ 130 m) one observes primarily light from shower particles scattered at larger angles. The coincidence rate between two telescopes will decrease with increasing spacing; on the other hand, the angle between the views and hence the quality of the stereoscopic reconstruction of the shower geometry will improve. The opinions within the IACT community concerning the optimum geometry of IACT arrays differ; the HESS project, for example, initially aimed for a spacing of 100 m between adjacent telescopes; more recently, larger distances around 120 m or more are favored . For the otherwise similar VERITAS array, intertelescope distances of initially 50 m, and later 80 m to 85 m were foreseen. Among existing systems, the HEGRA 5-telescope IACT system has a characteristic spacing of 70 m between its central telescope and the corner telescopes. The WHIPPLE-GRANITE two-telescope system had a spacing of 140 m. 
The Telescope Array uses a spacing of 120 m between the first three telescopes, and later 70 m for the full array. Optimization of IACT system geometry is heavily based on Monte-Carlo simulations. Over the last years, the quality of these simulations has improved significantly, both concerning the reliability of the shower simulation and concerning the details of the simulation of the telescopes and their readout (see, e.g., ). Simulations have been tested extensively, and key characteristics, such as the radial distribution of Cherenkov light in the light pool, have been verified experimentally . Nevertheless, a direct experimental verification of the dependence of the performance of IACT systems on the intertelescope distance would be highly desirable. The HEGRA IACT system at the Observatorio del Roque de los Muchachos on La Palma consists of five telescopes, four of them arranged roughly in the form of a square with 100 m side length, with the fifth telescope in the center. Selecting pairs of telescopes, the performance of two-telescope stereo systems can be studied for distances between 70 m (from the central telescope to the corner telescopes) and 140 m (across the diagonal of the square), covering essentially the entire range of interest. This paper reports the results of such a study, based on the large sample of gamma-rays acquired during the 1997 outburst of Mrk 501. ## 2 Telescope hardware and data set The HEGRA IACT system consists of five telescopes, each with a tessellated mirror of 8.5 m<sup>2</sup> area and 5 m focal length, and a 271-pixel camera with a $`4.3^{\circ}`$ field of view. The trigger condition requires a coincidence of two neighboring pixels above a threshold $`q_o`$ to trigger the camera, and a coincidence of triggers from two cameras to record the data. Details of the trigger system are given in . 
Events are reconstructed by parameterizing the images using the Hillas image parameters, and by geometrically determining the direction and the impact point of the shower. A simple and relatively efficient method for cosmic-ray rejection is based on the difference in the width of $`\gamma `$-ray and cosmic-ray images, respectively. Width values are normalized to the mean width expected for a gamma-ray image of a given intensity and impact distance, and a mean scaled width is calculated by averaging over telescopes. The $`\gamma `$-ray showers exhibit, by definition, a mean scaled width around 1, whereas the broader cosmic-ray showers show larger values. A cut requiring a mean scaled width below 1.2 or 1.3 keeps virtually all gamma-rays and rejects a significant fraction of cosmic rays; a cut at 1.0 to 1.1 has lower gamma-ray acceptance, but optimizes the significance for the detection of $`\gamma `$-ray sources. The data analysis and performance of the system are described in more detail in . During 1997, when the data for this study were taken, only four of the five telescopes were included in the HEGRA IACT system; the fifth telescope (one of the corner telescopes) was still equipped with an older camera and was operated in stand-alone mode. The four telescopes can be used to emulate six different two-telescope stereo systems: three combinations with intertelescope distances around 70 m (from the central telescope to the three corner telescopes), two combinations with about 90 m and 110 m (the sides of the imperfect square), and one combination with 140 m (the diagonal). Data from pairs of (triggered) telescopes are analyzed, ignoring the information provided by the other telescopes. Since only two telescopes are required to trigger the system, there is no trigger bias or other influence from those other telescopes. A slight difficulty arises since one compares combinations of different telescopes, rather than varying the distance between two given telescopes. 
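The mean scaled width cut described above can be sketched as follows (illustrative only; in practice the expected widths come from gamma-ray simulations as a function of image intensity and impact distance):

```python
def mean_scaled_width(widths, expected_widths):
    """Average of the image widths, each normalized to the width expected
    for a gamma-ray image of the same intensity and impact distance."""
    scaled = [w / e for w, e in zip(widths, expected_widths)]
    return sum(scaled) / len(scaled)

def is_gamma_like(widths, expected_widths, cut=1.2):
    """Loose shape cut: gamma-ray images cluster around a mean scaled
    width of 1, while the broader cosmic-ray images give larger values."""
    return mean_scaled_width(widths, expected_widths) < cut
```

With the loose cut at 1.2 nearly all gamma-rays survive; tightening `cut` to 1.0-1.1 trades gamma-ray acceptance for better background rejection, as described in the text.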
While the four HEGRA system telescopes used here are identical in their construction, they differ somewhat in their age and hence the degree of mirror deterioration, in the quality of the alignment of the mirror tiles, and in the properties of the PMTs in the cameras, which show systematic variations between production batches. Different mirror reflectivities and PMT quantum efficiencies result in slightly different energy thresholds for given (identical) settings of electronics thresholds. The determination of image widths and hence the background rejection are sensitive to the quality of the mirror adjustment. These effects have to be determined from the data, and compensated. The analysis is based on the data set recorded in 1997 during the outburst of Mrk 501. Only data during the high-flux periods between MJD 50595 and MJD 50613 were used. The data set was further restricted to small zenith angles, less than $`20^{\circ}`$, to approximate the situation for vertical showers. Mrk 501 was observed in the so-called wobble mode, with the source images $`0.5^{\circ}`$ away from the center of the cameras. An equivalent region imaged on the opposite side of the camera center was used as off-source region. Given the angular resolution of about $`0.1^{\circ}`$, the on-source and off-source regions are well separated. ## 3 Analysis and results As a first step in the analysis, the detection rates were studied as a function of intertelescope distance. To equalize the energy thresholds of all telescopes, pairs of telescopes were used to reconstruct showers, and events with cores located at equal distance from both telescopes were selected. By comparing the mean size of the images in the two telescopes, and in particular the mean signal amplitude in the second-highest pixel (which determines whether a telescope triggers or not), one can derive correction factors which can be used to equalize the response of the telescopes. 
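The available two-telescope baselines can be enumerated from the telescope positions. A sketch for an idealized 100 m square plus central telescope (the coordinates and telescope labels are hypothetical; the real HEGRA square is slightly irregular, which is why the paper quotes pair distances of about 70, 90, 110 and 140 m, and in 1997 only four of the five telescopes were in the system):

```python
import itertools, math

# Hypothetical idealized layout: four telescopes on a 100 m square
# plus one at the center.
positions = {
    "T1": (-50.0, -50.0), "T2": (50.0, -50.0),
    "T3": (-50.0, 50.0),  "T4": (50.0, 50.0),
    "T0": (0.0, 0.0),     # central telescope
}

def pair_distances(pos):
    """All two-telescope baselines available for stereo-pair studies."""
    out = {}
    for (a, pa), (b, pb) in itertools.combinations(pos.items(), 2):
        out[(a, b)] = math.hypot(pa[0] - pb[0], pa[1] - pb[1])
    return out
```

For this ideal layout the center-corner pairs give about 70.7 m, the sides 100 m, and the diagonals 141.4 m.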
Three of the four telescopes were found to be identical within a few %, one had a sensitivity which was lower by about 25%. In the analysis, pixel amplitudes are corrected correspondingly, and only camera images with two pixels above 20 photoelectrons are accepted. This software threshold is high enough to eliminate any influence of the hardware threshold (at about 8 to 10 photoelectrons, depending on the data set), even after the worst-case 25% correction is applied. The resulting detection rates of cosmic rays and of gamma-rays – after subtraction of the cosmic-ray background – are shown in Fig. 1. The rates measured for the three different telescope combinations around 70 m spacing agree within 5%, indicating the precision in the adjustment of telescope thresholds. Detection rates decrease with increasing distance; between 70 m and 140 m, rates drop by about 1/3, with a very similar dependence for gamma rays as compared to cosmic rays. At first glance, this seems surprising, since compared to gamma-ray showers, the distribution of Cherenkov light in proton-induced showers – and hence the probability to trigger – is more strongly peaked near the shower axis, favoring a smaller separation of telescopes. On the other hand, however, the trigger probability for gamma-ray showers near threshold - where most of the detection rate originates – cuts off sharply around 120 m to 130 m, whereas proton-induced showers occasionally trigger at larger distances . The two effects compensate each other to a certain extent. In addition, about 33% of the cosmic-ray triggers are caused by primaries heavier than protons . These particles interact higher in the atmosphere and produce a wider light pool, again enhancing the rate for telescopes with wide spacing. In the later analysis, shape cuts completely eliminate such events. 
Indeed, if only cosmic-ray events with narrow (proton- or gamma-like) images are accepted, their rates fall off more steeply with distance than the all-inclusive rate or the gamma-ray rate. As a measure of the sensitivity for weak sources, the ratio $`S/\sqrt{B}`$ of the gamma-ray rate to the square root of the cosmic-ray rate was used. The significance of background-dominated signals scales with this ratio. For optimum sensitivity, $`S/\sqrt{B}`$ is maximized by tighter cuts on the pointing of showers and on the image shapes. Fig. 2 shows the angular resolution provided by pairs of telescopes as a function of spacing. The three pairs at 70 m show differences at the level of 10%, indicating small differences in the quality of the mirror alignment and of the telescope alignment. Angular resolution improves slightly with increasing spacing; however, given the 10% systematic variations, this effect is of marginal significance. To determine the sensitivity for the different combinations of telescopes, cuts on pointing and on the mean scaled width were optimized for each combination, resulting in pointing cuts around $`0.1^{\circ}`$ and cuts on the mean scaled width around 1.05. In addition, a lower limit of 0.75 was imposed on the mean scaled width. Due to differences in the quality of the mirror alignment, the enhancement of significance due to such cuts differs slightly between telescopes. This can be studied by selecting events with telescopes $`A`$ and $`B`$, and deriving the significance of the signal by cutting only on the event shape in $`A`$, or only in $`B`$. For identical telescopes, the results should be identical. Based on such studies, sensitivity corrections for the different telescope pairs were derived; these correction factors were always below 10%. Fig. 3(a) shows the resulting significance (defined as $`S/\sqrt{B}`$) of the Mrk 501 signal. 
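The sensitivity measure used here is easy to state in code; a minimal sketch (the rates and observation time are illustrative numbers, not the paper's):

```python
import math

def significance(gamma_rate, background_rate, t_obs):
    """The paper's sensitivity measure S/sqrt(B): expected excess counts
    over the square root of the expected background counts."""
    s = gamma_rate * t_obs
    b = background_rate * t_obs
    return s / math.sqrt(b)
```

Since both $`S`$ and $`B`$ grow linearly with observation time, $`S/\sqrt{B}`$ grows as the square root of the observation time, which is why a spacing-independent $`S/\sqrt{B}`$ translates directly into a spacing-independent source-detection time.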
The errors shown are statistical, and are dominated by the low statistics in the cosmic-ray background sample after cuts (even though the background region was chosen to be larger than the signal region). One finds that the significance is almost independent of telescope spacing, over the 70 m to 140 m range covered. Concerning systematic uncertainties, we note that the results for the three combinations at 70 m agree reasonably well. Variations of parameters such as the software trigger threshold, or the exact values of the pointing cuts and angular cuts, produce stable results, with systematic variations of at most 10%. The absolute significance of a point source detected with two stereoscopic IACTs, and its dependence on the intertelescope distance, will, of course, depend on additional cuts which may be applied in the analysis. Additional requirements may, for example, concern the stereo angle, defined as the angle between the two views of the shower axis provided by the two telescopes, or, equivalently, the angle between the image axes in the two cameras. If the stereo angle is small – as is the case for events with shower impact points on or near the line connecting the two telescopes, and for very distant showers – the two views coincide and the spatial reconstruction of the shower axis is difficult. In some analyses, one will therefore add the requirement of a minimal stereo angle between the two views, in order to increase the reliability of the stereoscopic reconstruction. For a minimum stereo angle of $`20^{\circ}`$ – as used in some HEGRA work – the resulting significance remains virtually unchanged compared to the data shown in Fig. 3(a). With a large minimum stereo angle of $`45^{\circ}`$ (Fig. 3(b)), the distance dependence becomes more pronounced; for small telescope distances, significance is reduced, while for large distances it remains unchanged or is even slightly enhanced. The explanation is simple. 
For a telescope spacing $`d`$ and a stereo angle greater than $`45^{\circ}`$, shower cores are accepted at most up to a distance of $`1.2d`$ from the center of the telescope pair. As long as this distance is below the radius of the light pool, the effective detection area is reduced. One concludes that such a cut might be beneficial to reduce systematic effects in spectral measurements, for example, but should not be applied in searches for gamma-ray sources. Another quantity which influences the distance dependence is the field of view of the cameras. Cameras with a small field of view will perform worse for large distances. For example, rejecting events with image centroids beyond $`1^{\circ}`$ from the camera center reduces the sensitivity for the $`140`$ m point by a factor of about 2. ## 4 Summary The performance of two-telescope stereoscopic IACT systems was studied experimentally using the HEGRA telescopes, as a function of telescope spacing in the range between 70 m and 140 m. While detection rates decrease with increasing spacing, the significance of source signals is almost independent of distance, with a slight improvement for distances beyond 100 m. These results confirm Monte Carlo simulations (see, for example, ) which generally show that the exact choice of telescope spacing is not a very critical parameter in the design of IACT systems (provided that the field of view of the cameras is sufficiently large not to limit the range of shower impact parameters). An analysis requirement of a minimum angle between views favors larger distances. The results shown apply, strictly speaking, only to two-telescope systems. In IACT arrays with a large number of telescopes, one may place telescopes at maximum spacing in order to maximize the effective area of the array, under the condition that most individual showers are observed by two or maybe three or four telescopes. 
Alternatively one may choose to place telescopes close to each other, such that an individual shower is observed simultaneously by almost all telescopes, improving the quality of the reconstruction and lowering the threshold, at the expense of detection area at energies well above threshold. In the first case, one would probably use a spacing between adjacent telescopes between 100 m and 150 m; in the second case, one would limit the maximum spacing between any pair of telescopes to this distance. ## Acknowledgements The support of the HEGRA experiment by the German Ministry for Research and Technology BMBF and by the Spanish Research Council CYCIT is acknowledged. We are grateful to the Instituto de Astrofisica de Canarias for the use of the site and for providing excellent working conditions. We gratefully acknowledge the technical support staff of Heidelberg, Kiel, Munich, and Yerevan.
no-problem/9910/chao-dyn9910013.html
ar5iv
text
# Cascades in helical turbulence ## Abstract The existence of a second quadratic inviscid invariant, the helicity, in a turbulent flow leads to coexisting cascades of energy and helicity. An equivalent of the four-fifth law for the longitudinal third order structure function, which is derived from energy conservation, is easily derived from helicity conservation . The ratio of dissipation of helicity to dissipation of energy is proportional to the wave-number, leading to a different Kolmogorov scale for helicity than for energy. The Kolmogorov scale for helicity is always larger than the Kolmogorov scale for energy, so in the high Reynolds number limit the flow will always be helicity free in the small scales, much in the same way as the flow will be isotropic and homogeneous in the small scales. A consequence is that a pure helicity cascade is not possible. The idea is illustrated in a shell model of turbulence. Few exact results regarding fully developed turbulence have yet been derived. The most celebrated is Kolmogorov’s four-fifth law . The four-fifth law is based on the fact that energy, which is an inviscid invariant of the flow, is transferred through the inertial range from the integral scale to the dissipation scale. The four-fifth law, $`\langle \delta v_{\parallel }(l)^3\rangle =-(4/5)\overline{\epsilon }l`$, states that the third order correlator associated with the energy flux equals the mean energy dissipation. As noted recently , in the case of helical flow a similar relation exists for the transfer of helicity, leading to another scaling relation for a third order correlator associated with the flux of helicity, $`\langle \delta 𝐯_{\parallel }(l)[𝐯(r)\times 𝐯(r+l)]\rangle =(2/15)\overline{\delta }l^2`$, where $`\overline{\delta }`$ is the mean dissipation of helicity. This relation is called the ’two-fifteenth law’ due to the numerical prefactor. This establishes another non-trivial scaling relation for velocity differences in a turbulent helical flow. 
The coexistence of cascades of energy and enstrophy is prohibited for high Reynolds number flow in 2D turbulence. The reason for this is that the enstrophy dominates at small scales, such that the ratio of energy- to enstrophy dissipation vanishes for high Reynolds number flow. The Kolmogorov scale $`k_Z^{-1}`$ for enstrophy dissipation is determined from the energy spectrum $`E(k)\sim k^{-3}`$ and the kinematic viscosity $`\nu `$ by $`\overline{\zeta }=\nu \int ^{k_Z}𝑑kk^4E(k)\sim \nu k_Z^2\Rightarrow k_Z\sim \nu ^{-1/2}`$. The energy dissipation is $`\overline{\epsilon }=\nu \int ^{k_Z}𝑑kk^2E(k)\sim \nu \mathrm{log}k_Z\sim -(1/2)\nu \mathrm{log}\nu \rightarrow 0`$ for $`\nu \rightarrow 0`$. Consequently energy is cascaded upscale in 2D turbulence. The situation in 3D turbulence is different. Here coexisting cascades of energy and helicity are possible . However, the same type of dimensional argument as for the cascades of energy and enstrophy in 2D turbulence applies. The helicity density is $`h=u_i\omega _i`$, where $`\omega _i=ϵ_{ijk}\partial _ju_k`$ is the vorticity. The mean dissipation of helicity is $`D_H=\nu \langle \partial _ju_i\partial _j\omega _i\rangle `$. Disregarding signs, this can spectrally be represented as $$D_H\sim \nu \int ^{k_E}𝑑kk^3E(k)\sim \nu k_E^{7/3}\sim \nu ^{-3/4},$$ (1) where $`k_E^{-1}=\eta `$ is the Kolmogorov scale and we have used $`E(k)\sim k^{-5/3}`$ and $`k_E\sim \nu ^{-3/4}`$. This means that for high Reynolds number flow the dissipation of helicity will grow as $`Re^{3/4}`$. Since the mean dissipations of energy $`\overline{\epsilon }`$ and helicity $`\overline{\delta }`$ are determined by the integral scale forcing, the growth of helicity dissipation with Reynolds number is apparently in conflict with the assumption of a constant energy dissipation in the limit of vanishing viscosity. This is not a true problem because helicity is not positive definite, and the viscous term in the equation for the helicity, $`\nu (u_i\partial _{jj}\omega _i+\omega _i\partial _{jj}u_i)`$, can have either sign. 
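The exponent bookkeeping behind (1) can be checked mechanically with exact rational arithmetic (a sketch of the dimensional argument only):

```python
from fractions import Fraction as F

# nu-exponents implied by K41: E(k) ~ k**(-5/3), k_E ~ nu**(-3/4)
kE = F(-3, 4)
# D_H ~ nu * k_E**(7/3), since the integral of k**3 * E(k) up to k_E
# gives k_E**(7/3); collect the powers of nu:
dH = 1 + F(7, 3) * kE
assert dH == F(-3, 4)      # helicity dissipation grows as Re**(3/4)

# 2D cross-check: zeta ~ nu * k_Z**2 = const  =>  k_Z ~ nu**(-1/2)
kZ = F(-1, 2)
assert 1 + 2 * kZ == 0
```

Exact fractions avoid the floating-point noise that would otherwise obscure whether two scaling exponents are truly equal.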
So in the high Reynolds number limit there must either be a detailed balance between dissipation of positive and negative helicity, or the energy cascade is blocked . In the rather artificial case of a shell model where only one sign of helicity is dissipated by hyper-viscosity, the energy cascade is indeed prevented altogether, similar to the case of the forward cascade of energy in 2D turbulence . In a helical flow $`(\overline{\delta }\ne 0)`$ the dissipation of helicity defines a scale different from the Kolmogorov scale $`\eta `$. This we will call the Kolmogorov scale $`\xi `$ for dissipation of helicity. Following K41, the Kolmogorov scale $`\eta `$ for energy dissipation is obtained from $`\overline{\epsilon }\sim \delta u_\eta ^3/\eta \sim \nu \delta u_\eta ^2/\eta ^2\Rightarrow \eta \sim (\nu ^3/\overline{\epsilon })^{1/4}`$, where $`\delta u_l`$ is a typical variation of the velocity over a scale $`l`$. The Kolmogorov scale $`\xi `$ for dissipation of helicity is defined as the scale where the helicity dissipation is of the same order as the spectral helicity flux. With dimensional counting we have $`\overline{\delta }\sim \nu \delta u_\xi ^2/\xi ^3`$, and using $`\delta u_l\sim (l\overline{\epsilon })^{1/3}`$ we obtain $$\xi \sim (\nu ^3\overline{\epsilon }^2/\overline{\delta }^3)^{1/7}.$$ (2) Now it is clear why (1) leads to a wrong conclusion for the mean dissipation of the helicity $`\overline{\delta }`$. The integral will not be dominated by contributions from $`k_E`$ but by contributions from $`k_H=1/\xi `$, $$D_H=\overline{\delta }\sim \nu k_H^{7/3}\Rightarrow k_H\sim \nu ^{-3/7}.$$ (3) The ratio of the two Kolmogorov scales is then $`(\eta /\xi )=(k_H/k_E)\sim \nu ^{-3/7+3/4}=\nu ^{9/28}\rightarrow 0`$ for $`\nu \rightarrow 0`$. Thus for high Reynolds number helical flow the small scales will always be non-helical, and a pure helicity cascade is not possible. 
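Similarly, the scalings behind (2)-(3) and the ratio of the two Kolmogorov scales reduce to exponent arithmetic in $`\nu `$ (again only a check of the dimensional argument):

```python
from fractions import Fraction as F

# From delta_bar ~ nu * xi**(-7/3) * eps**(2/3) = const:
xi_exp = F(3, 7)             # exponent of nu in xi
assert F(7, 3) * xi_exp == 1 # nu**1 balanced by xi**(-7/3)

eta_exp = F(3, 4)            # exponent of nu in eta ~ (nu**3/eps)**(1/4)
# eta/xi ~ nu**(3/4 - 3/7) = nu**(9/28) -> 0 as nu -> 0,
# so the helicity scale xi always exceeds the energy scale eta
assert eta_exp - xi_exp == F(9, 28)
```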
On the other hand, for scales $`l<\xi `$ the ratio of dissipation of energy to helicity is proportional to $`l`$, $`D_E/D_H\propto l`$, which means that helicity dissipation dominates and the dissipation of positive and negative helicity must balance. The reason for the flow to be non-helical on small scales is different from the reason why the flow tends to be isotropic on small scales even though the integral scale is non-isotropic. The reason for the small scales to be isotropic is that the structure functions associated with the non-isotropic sectors scale with scaling exponents that are larger than those of the isotropic sector and thus become sub-leading for the flow at small scales, independent of the dissipation . The physical picture for fully developed helical turbulence is then that $`\overline{\delta }`$ and $`\overline{\epsilon }`$ are solely determined by the forcing at the integral scale. There will then be an inertial range with coexisting cascades of energy and helicity, with third order structure functions determined by the four-fifths and the two-fifteenths laws. This is followed by an inertial range between $`\xi `$ and $`\eta `$ corresponding to non-helical turbulence, where the dissipation of positive and negative helicity vortices balances and the two-fifteenths law is not applicable. In order to test these ideas in a model system we investigate the role of helicity and the structure of the helicity transfer in a shell model. Shell models are toy models of turbulence which by construction have second order inviscid invariants similar to energy and helicity in 3D turbulence. Shell models can be investigated numerically for high Reynolds numbers, in contrast to the Navier-Stokes equation, and high order statistics and anomalous scaling exponents are easily accessible. Shell models lack any spatial structure, so we stress that only certain aspects of the turbulent cascades have meaningful analogies in the shell models. 
This should especially be kept in mind when studying helicity, which is intimately linked to spatial structures, and the dissipation of helicity to reconnection of vortex tubes . So the following only concerns the spectral aspects of the helicity and energy cascades. The most well studied shell model, the GOY model , is defined by the governing equation, $$\dot{u}_n=ik_n(u_{n+2}u_{n+1}-\frac{ϵ}{\lambda }u_{n+1}u_{n-1}+\frac{ϵ-1}{\lambda ^2}u_{n-1}u_{n-2})^{*}-\nu k_n^2u_n+f_n$$ (4) with $`n=1,\mathrm{},N`$, where the $`u_n`$'s are the complex shell velocities. The wavenumbers are defined as $`k_n=\lambda ^n`$, where $`\lambda `$ is the shell spacing. The second and third terms are dissipation and forcing. The model has two inviscid invariants: energy, $`E=\sum _nE_n=\sum _n|u_n|^2`$, and 'helicity', $`H=\sum _nH_n=\sum _n(ϵ-1)^{-n}|u_n|^2`$. The model has two free parameters, $`\lambda `$ and $`ϵ`$. The 'helicity' only has the correct dimension of helicity if $`|ϵ-1|^{-n}=k_n`$, i.e. $`1/(1-ϵ)=\lambda `$. In this work we use the standard parameters $`(ϵ,\lambda )=(1/2,2)`$ for the GOY model, for which $`H_n=(-1)^nk_n|u_n|^2`$. A natural way to define the structure functions of moment $`p`$ is through the transfer rates of the inviscid invariants, $`S_p^E(k_n)=(\mathrm{\Pi }_n^E)^{p/3}k_n^{-p/3}`$ (5) $`S_p^H(k_n)=(\mathrm{\Pi }_n^H)^{p/3}k_n^{-2p/3}`$ (6) The energy flux is defined in the usual way as $`\mathrm{\Pi }_n^E=d/dt|_{n.l.}(\sum _{m=1}^nE_m)`$ where $`d/dt|_{n.l.}`$ is the time rate of change due to the non-linear term in (4). The helicity flux $`\mathrm{\Pi }_n^H`$ is defined similarly. By simple algebra we have the following expressions for the fluxes, $`\mathrm{\Pi }_n^E=(1-ϵ)\mathrm{\Delta }_n+\mathrm{\Delta }_{n+1}=\overline{\epsilon }`$ (7) $`\mathrm{\Pi }_n^H=(-1)^nk_n(\mathrm{\Delta }_{n+1}-\mathrm{\Delta }_n)=\overline{\delta }`$ (8) where $`\mathrm{\Delta }_n=k_{n-1}\mathrm{Im}u_{n-1}u_nu_{n+1}`$, and $`\overline{\epsilon }`$ and $`\overline{\delta }`$ are the mean dissipations of energy and helicity respectively. 
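For concreteness, here is a minimal numerical sketch of (4) and the fluxes (7)-(8). The indexing and the zero-padded boundary conditions ($`u_{-1}=u_0=u_{N+1}=u_{N+2}=0`$) are our own choices; with them, the cumulative nonlinear transfer over the first $`n`$ shells reproduces $`(1-ϵ)\mathrm{\Delta }_n+\mathrm{\Delta }_{n+1}`$ up to an overall convention-dependent factor, and both invariants are conserved in the inviscid, unforced case.

```python
import numpy as np

LAM, EPS = 2.0, 0.5  # standard GOY parameters (lambda, epsilon)

def goy_rhs(u, nu, f):
    """Right-hand side of (4) for shell velocities u[0..N-1] = u_1..u_N."""
    N = len(u)
    k = LAM ** np.arange(1, N + 1)
    U = np.concatenate([[0, 0], u, [0, 0]])      # zero-pad boundary shells
    up1, up2, um1, um2 = U[3:-1], U[4:], U[1:-3], U[:-4]
    nl = 1j * k * np.conj(up2 * up1 - (EPS / LAM) * up1 * um1
                          + (EPS - 1) / LAM ** 2 * um1 * um2)
    return nl - nu * k ** 2 * u + f

def fluxes(u):
    """Instantaneous Pi_n^E and Pi_n^H from (7)-(8)."""
    N = len(u)
    k = LAM ** np.arange(1, N + 1)
    U = np.concatenate([[0], u, [0]])
    # Delta_n = k_{n-1} Im(u_{n-1} u_n u_{n+1})
    Delta = LAM ** np.arange(0, N) * np.imag(U[:-2] * U[1:-1] * U[2:])
    Dnext = np.append(Delta[1:], 0.0)            # Delta_{n+1}
    Pi_E = (1 - EPS) * Delta + Dnext
    Pi_H = (-1.0) ** np.arange(1, N + 1) * k * (Dnext - Delta)
    return Pi_E, Pi_H
```

A useful unit test of any such implementation: with $`\nu =f=0`$ the right-hand side conserves both $`E=\sum |u_n|^2`$ and $`H=\sum (-1)^nk_n|u_n|^2`$ to machine precision.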
The first equalities hold without averaging as well. These equations are the shell model equivalents of the four-fifths and the two-fifteenths laws. Kadanoff et al. defined a third order structure function as, $$S_n^3=\mathrm{Im}u_{n-1}u_nu_{n+1}=\mathrm{\Delta }_n/k_{n-1}$$ (9) to avoid the spurious (specific to the GOY model) period 3 oscillation. Using this we obtain from (7) and (8) a scaling relation for $`S_n^3`$, $$S_n^3=\frac{1}{(1-ϵ/2)k_n}(\overline{\epsilon }-(-1)^n\overline{\delta }/k_n).$$ (10) The last term in the parenthesis is sub-leading, with period two oscillations. When $`\overline{\delta }=0`$ the sub-leading term disappears and the scaling from the four-fifths law is obtained, figure 1. The relation (8) is the scaling relation corresponding to the sub-leading term, which survives due to detailed cancellations between the two terms $`\mathrm{\Delta }_{n+1}`$ and $`\mathrm{\Delta }_n`$ of the leading term corresponding to (7). The case $`\overline{\epsilon }=0`$ and $`\overline{\delta }\ne 0`$ would, aside from the period two oscillation, correspond to a helicity cascade with the scaling obtained from dimensional counting, $`u_n\sim k_n^{-2/3}`$. However, this situation is, as we will show shortly, not realizable. The mean dissipations $`\overline{\epsilon }`$ and $`\overline{\delta }`$ are, by energy and helicity conservation, identical to the mean energy and helicity inputs, which from (4) are, $$\overline{\epsilon }=\sum _nf_nu_n^{*}+c.c.$$ (11) and $$\overline{\delta }=\sum _n(-1)^nk_nf_nu_n^{*}+c.c.,$$ (12) so $`\overline{\epsilon }`$ and $`\overline{\delta }`$ are not independent. The forcing can be chosen in many ways. A natural choice is $`f_n=f_n^0/u_n^{*}`$, where $`f_n^0`$ is independent of the shell velocities. Then we have $`\overline{\epsilon }=\sum _{n<n_I}f_n^0`$ and $`\overline{\delta }=\sum _{n<n_I}(-1)^nk_nf_n^0`$, where $`n_I`$ indicates the end of the integral scale. 
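A tiny sketch of this bookkeeping (the dict layout and variable names are ours; shells and amplitude follow the runs discussed in the text): forcing shells 3 and 4 with $`f_4^0=f_3^0/\lambda `$ makes the helicity input $`\sum (-1)^nk_nf_n^0`$ cancel exactly.

```python
# With forcing f_n = f_n^0 / u_n^*, the helicity input is
# delta = sum_n (-1)^n k_n f_n^0 (+ c.c.).  Choosing f_4^0 = f_3^0/lambda
# on the adjacent shell cancels it exactly (helicity free forcing).
lam = 2.0
f0 = {3: 1e-2 * (1 + 1j)}
f0[4] = f0[3] / lam                      # A = 1 in the notation of the text
delta = sum((-1) ** n * lam ** n * fn for n, fn in f0.items())
delta_input = 2 * delta.real             # adding the complex conjugate
print(abs(delta_input))  # 0.0 by construction
```

The shell-3 contribution $`-k_3f_3^0`$ is exactly compensated by the shell-4 contribution $`+k_4f_3^0/\lambda `$, while the energy input remains nonzero.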
By choosing the coefficients as stochastic or deterministic functions of time, this last sum can vanish identically, which is referred to as helicity free forcing. The simulations shown in figure 1 are performed with the forcing $`f_3^0=10^{-2}(1+i)`$ and $`f_4^0=Af_3^0/\lambda `$ with $`A=1`$ and $`A=0`$, corresponding to $`(\overline{\epsilon },\overline{\delta })=(0.01,0)`$ and $`(\overline{\epsilon },\overline{\delta })=(0.01,0.08)`$ respectively. Helicity is not positive definite and is dissipated with opposite signs for odd and even shells. If we consider the third order structure function associated with the helicity transfer as defined by (6), we see (figure 2) period two oscillations growing with $`n`$. This period two oscillation is due to the dissipation and not the non-linear transfer. The helicity flux is $$\mathrm{\Pi }_n^H=\overline{\delta }-D_n,$$ (13) where $`D_n`$ is the helicity dissipation at shells $`m\le n`$: $$D_n=\sum _{m=1}^n\nu (-1)^mk_m^3|u_m|^2.$$ (14) In the inertial range for energy transfer we have the Kolmogorov scaling $`u_n\sim k_n^{-1/3}`$, so the helicity dissipation can be estimated, $$D_n\sim \sum _{m=1}^n\nu (-1)^mk_m^{7/3}=\nu \lambda ^{7/3}\frac{(-1)^n\lambda ^{7n/3}-1}{\lambda ^{7/3}+1}\sim (-1)^n\nu k_n^{7/3}.$$ (15) Figure 3 shows $`|\mathrm{\Pi }_n^H|`$ and $`\mathrm{\Pi }_n^E`$ as functions of wave number. The scaling (15) of the helicity dissipation is the straight line; the horizontal dashed line is $`\overline{\delta }`$. The inertial range for helicity transfer is to the left of the crossing of the two lines. The crossing is the Kolmogorov scale for helicity transfer $`K_H`$, which does not coincide with the Kolmogorov scale for energy transfer, $`K_E`$. The 'pile-up' for $`k`$ larger than $`K_H`$ was earlier interpreted as a bottleneck effect . It is a balance between positive and negative helicity dissipation. The forcing $`f_n=f_n^0/u_n^{*}`$ can potentially cause numerical trouble when $`|u_n|`$ becomes small. 
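The alternating sum (14) with Kolmogorov amplitudes can be checked directly; the following snippet (prefactors ours) confirms that the partial sums are dominated by the last shell, i.e. $`|D_n|\sim \nu k_n^{7/3}`$ as in (15):

```python
import numpy as np

# D_n ~ sum_m nu (-1)^m k_m^(7/3) with k_m = lam^m: an alternating geometric
# sum whose magnitude is set by the last term, so that
# |D_n|/(nu k_n^(7/3)) -> lam^(7/3)/(lam^(7/3) + 1).
nu, lam = 1e-9, 2.0
n = np.arange(1, 26)
k = lam ** n
D = np.cumsum(nu * (-1.0) ** n * k ** (7.0 / 3.0))
ratio = np.abs(D) / (nu * k ** (7.0 / 3.0))
print(ratio[-1], lam ** (7.0 / 3.0) / (lam ** (7.0 / 3.0) + 1.0))
```

For $`\lambda =2`$ the limiting ratio is $`2^{7/3}/(2^{7/3}+1)\approx 0.83`$, so the cumulative dissipation indeed grows as $`\nu k_n^{7/3}`$ with alternating sign.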
It is easy to see that the linear equation for a (real) shell velocity $`u_n`$, neglecting the non-linear transfer, $`\dot{u}_n=f/u_n`$, will create a finite time singularity. This is not the case for the forcing suggested by Olla , applied at two shells, $`f_n=(aE_{n+1}u_n)/(E_n+E_{n+1})`$ and $`f_{n+1}=(bE_nu_{n+1})/(E_n+E_{n+1})`$, where $`a`$ and $`b`$ are constants determining the ratio of energy to helicity input. The coupled set of equations, $`(\dot{u}_n=f_n,\dot{u}_{n+1}=f_{n+1})`$, is integrable (solve for $`y=u_n/u_{n+1}`$) and has no finite time singularities. Using this forcing we performed a set of simulations with constant energy input $`\overline{\epsilon }=0.01`$ and varying helicity input $`\overline{\delta }=(0.0001,0.001,0.005,0.01,0.08)`$. In figure 4 the spectra of the absolute value of the helicity transfer normalized with $`\overline{\delta }`$ are plotted against wave number normalized with $`K_H`$. $`K_H`$ is in each case calculated from (2), and a clear data collapse is seen. In summary, the simulations with the GOY shell model suggest a new Kolmogorov scale for helicity, always smaller than the Kolmogorov scale for energy. Thus there exist two inertial ranges in helical turbulence: a range $`k<K_H`$ with coexisting cascades of energy and helicity, where both the four-fifths and the two-fifteenths laws apply, and a range between $`K_H`$ and $`K_E`$ where the flow is non-helical and only the four-fifths law applies. FIGURE CAPTIONS * The third order structure function $`S_n^3`$ as calculated from (9) in the cases $`\overline{\delta }>0`$ (crosses) and $`\overline{\delta }=0`$ (diamonds). In the case of helicity free forcing the period 2 oscillations disappear. In the two runs we have 25 shells, $`\nu =10^{-9}`$, $`f_n=0.01(1+i)(\delta _{n,2}/u_2^{*}+A\delta _{n,3}/(2u_3^{*}))`$ with $`A=0,1`$ respectively. * The helicity flux $`\mathrm{\Pi }_n^H`$ in the case $`\overline{\delta }>0`$. 
The same curve is multiplied by 1000 and over-plotted in order to show the inertial range. The period 2 oscillations in the helicity transfer come from the helicity dissipation. * The absolute values of the helicity flux $`|\mathrm{\Pi }_n^H|`$ (diamonds) show a crossover from the inertial range for helicity to the range where the helicity is dissipated. The line has a slope of $`7/3`$, indicating the helicity dissipation. The dashed lines indicate the helicity input $`\overline{\delta }`$. The crosses show the helicity flux in the case $`\overline{\delta }=0`$, where there is no inertial range and $`K_H`$ coincides with the integral scale. The triangles are the energy flux $`\mathrm{\Pi }_n^E`$. * Five simulations with constant viscosity $`\nu =10^{-9}`$, constant energy input $`\overline{\epsilon }=0.01`$ and varying helicity input $`\overline{\delta }=(0.0001,0.001,0.005,0.01,0.08)`$ are shown. The absolute values of the helicity flux $`|\mathrm{\Pi }_n^H|`$ divided by $`\overline{\delta }`$ are plotted against the wave number divided by $`K_H=(\overline{\delta }^3/(\nu ^3\overline{\epsilon }^2))^{1/7}`$, which is obtained from (2) neglecting $`O(1)`$ constants. A clear data collapse is seen.
# Final Stages of N-body Star Cluster Encounters ## 1 Introduction The study of star cluster pairs in the Magellanic Clouds has been explored extensively in the past years (Bhatia & Hatzidimitriou 1988; Bhatia & McGillivray 1988; Bica, Clariá & Dottori 1992; Rodrigues et al. 1994, hereafter RRSDB; Bica & Schmitt 1995; de Oliveira et al. 1998, hereafter Paper I; Bica et al. 1999). The evolution of physical pairs can provide fundamental insight into the past history of cluster formation in the Magellanic Clouds. In the last decade, N-body simulations of stellar system encounters have been the main tool to investigate the dynamical processes that can occur in such interactions, like mergers and tidal disruption. Many of the first simulations worked with equal mass encounters (White 1978; Lin & Tremaine 1983; Barnes 1988). Different mass encounters have been carried out by Rao, Ramamani & Alladin (1987), Barnes & Hut (1986, 1989), RRSDB and Paper I. These studies indicate that tidal disruption and merger are two important processes in the dynamical evolution of a binary stellar system. However, there remain important issues not yet well tested, such as the long term stages of such interactions, and the fraction of stars stripped from the cluster by the parent galaxy tidal field and how this stripping occurs. Some isolated clusters in the Magellanic Clouds present interesting structures with higher ellipticities than Galactic globular clusters (van den Bergh & Morbey 1984). Several explanations have been proposed, such as the presence of subclumps merging with the main cluster, which could produce the impression of high ellipticities in these clusters (Elson 1991). Another possibility is that old mergers in advanced evolutionary stages could show such ellipticities (Sugimoto & Makino 1989). 
This work is a continuation of Paper I, where we analyzed the morphology of selected cluster pairs in the Magellanic Clouds and compared them to those obtained from numerical simulations of star cluster encounters. A preliminary discussion was given in de Oliveira et al. (1999). In the present paper we study the long term behaviour of some of our numerical models (up to 1 Gyr), and compare the final stages of these simulations (isodensity maps, ellipticities, isophotal twisting) with the structure of isolated clusters. In Section 2 we describe the method and the conditions employed in the present simulations. In Section 3 we describe the Magellanic Cloud cluster images and the procedure to derive the isodensity maps and ellipticity measures. The numerical results and discussions are presented in Section 4. In Section 5 we give the main conclusions. ## 2 The Method and the Initial Conditions We performed the simulations using TREECODE (Hernquist 1987) on the CRAY YMP-2E computer of the Centro Nacional de Supercomputação of the Universidade Federal do Rio Grande do Sul (CESUP-UFRGS). For a complete description of the method see Paper I. ### 2.1 The Initial Conditions In Paper I, cluster N-body models were generated with 16384, 4096 and 512 equal mass stars, corresponding to total masses of $`10^5\mathrm{M}_{},10^4\mathrm{M}_{}`$ and $`10^3\mathrm{M}_{}`$, respectively. The cutoff radii of the Plummer clusters are 20, 15 and 8 parsecs, respectively. These values are comparable to diameters found in Magellanic Cloud clusters (Bhatia et al. 1991). We generated two different concentration degrees for the 512 particle cluster, one with half-mass radius 1/2 of the cutoff radius and another with 1/4 (Table 1). The interaction models are characterized by a pericentre distance $`p`$ (distance of closest approach) and the initial relative velocity $`V_i`$. 
These initial velocities are comparable to the observed random velocities in the LMC disk (Freeman, Illingworth & Oemler 1983). The initial conditions (t=0) used in Paper I were: (i) the large cluster (in the case of equal mass clusters, the more concentrated one) is located at coordinates (X,Y)=(0,0); (ii) the small cluster lies at a distance $`r_0=5\times r_t`$ (where $`r_t`$ is the tidal radius). Beyond this distance the tidal effects are negligible. The initial relative velocity at this distance was obtained from the two-body formulae. The positions and velocities of the particles during the encounters were computed in the centre-of-mass frame of the entire system. At various times, the essential data containing the positions and velocities of all the particles were stored for later analysis. The computation was stopped when disruption of the cluster occurred or, in the case of open orbits, when the relative separation of the clusters reached a distance $`d>2r_0`$. We point out that the softening parameter adopted for the interaction is the smallest value of the two cluster models (column 7 of Table 1). Indeed, with reference to the AB cluster interaction (which has the largest $`ϵ`$ difference), we carried out a simulation where cluster B is allowed to evolve isolated adopting the cluster A softening value for $`\mathrm{\Delta }t\sim `$200 Myr ($`\sim `$ 2 relaxation times). Analysis of the time evolution of cluster B structure shows that it is essentially the same for $`ϵ`$ = 0.30 as that obtained for the cluster with $`ϵ`$ = 0.47. Consequently in the AB cluster interaction significant structural changes are not expected from the adoption of the smaller $`ϵ`$ value. In the present paper, we continued the simulation from this point on for some models with closed orbits where disruption occurred (see Table 2). We used the same procedures as in Paper I. These simulations were run up to 990 Myr for the models presented here. We illustrate in Fig. 
1 the time evolution of an elliptic orbit encounter involving two clusters with 16384 and 4096 particles respectively (model E9AB10, see Table 2). In this figure we can see the small cluster being deformed by the massive one, with the formation of a bridge ($`\sim `$50 Myr); subsequently complete disruption occurs ($`\sim `$135 Myr); at the end of this simulation the clusters coalesce into a single one, with the stars of the disrupted cluster forming a halo around the final cluster, and some of them being ejected. ## 3 Isodensities for LMC Elliptical Clusters In the final stages of our simulations the pairs coalesced into a single cluster with a distinct structure as compared to the original ones. We noticed that in viewing angles close to the original orbital plane of the encounter, the final single cluster presented an ellipticity larger than those of the initial clusters (our models have spherical symmetry). In this paper we have selected some LMC clusters with morphologies resembling those of the present models (Sect. 4.4). We checked many clusters with significant ellipticity, as indicated in previous studies (Geisler & Hodge 1980, Zepka & Dottori 1987, Kontizas et al. 1989, 1990). In particular the selected clusters present increasing ellipticity outwards. In Table 3 we show these clusters with their V magnitudes, SWB types derived from interpreted UBV photometry (Bica et al. 1996, Girardi & Bertelli 1998) and corresponding ages. There are also ages available from colour-magnitude diagrams: (i) NGC1783 has 700-1100 Myr depending on the adopted distance modulus (Mould et al. 1989). Based on the same data, using the $`\mathrm{\Delta }`$V turnoff/clump method, Geisler et al. (1997) obtained 1300 Myr; (ii) NGC1831 has an age of 400 Myr according to Hodge (1984). CCD data provided 500-700 Myr (Mateo 1988) and 350-550 Myr (Vallenari et al. 
1992), depending on the adopted models; (iii) NGC2156 has age $`\sim `$60 Myr according to Hodge (1983); (iv) NGC1978 has an age $`\sim `$2000 Myr (Olszewski 1984, Bomans et al. 1995). Geisler et al. (1997) derived 2000 Myr by means of the $`\mathrm{\Delta }V`$ turnoff/clump method. There is good agreement between the ages derived from the CMD studies and those from the integrated colours (Table 3). NGC1783 and NGC1831 have ages comparable to the long term stages of our models, while NGC2156 falls somewhat short, and NGC1978 has an age larger by a factor of $`\sim `$2. The images of this selection were obtained from the Digitized Sky Survey (DSS). The plates are from the SERC Southern Sky Survey and include IIIa-J long (3600s), V band medium (1200s) and V band short (300s) exposures. The PDS pixel values correspond to photographic density measures from the original plates, and are not calibrated. The digitized images, similarly to those generated by the models, were treated with the IRAF<sup>1</sup><sup>1</sup>1IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation, U.S.A. package at the Instituto de Física — UFRGS, applying a 2-d Gaussian filter to smoothen out individual stars, and to create isodensity maps. The ellipticity measurement was made with the IRAF package task Ellipse, which fits elliptical isophotes to data images with the use of a Fourier expansion: $$y=\underset{n=0}{\overset{4}{\sum }}A_n\mathrm{sin}(nE)+B_n\mathrm{cos}(nE)$$ (1) where E is the position angle. The amplitudes ($`A_3,B_3,A_4,B_4`$), divided by the semi-major axis length and local intensity gradient, measure the isophotal deviations from a perfect ellipse. Note that the IRAF routine requires that one indicates a first guess for the object major axis position angle. Subsequently the routine iteratively fits the best solution for each isophote. 
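As an illustration of the role of the fourth-order cosine term, the following sketch (not the IRAF algorithm itself: we fix the position angle, measure deviations from the mean radius and skip the gradient normalization) projects the radial deviations of a sampled isophote onto $`\mathrm{cos}(4E)`$:

```python
import numpy as np

def b4_coefficient(E, r):
    """Fourth-order cosine amplitude of the deviations of an isophote r(E)
    from its mean radius; positive = disk-shaped, negative = box-shaped."""
    dev = r - r.mean()
    return 2.0 * np.mean(dev * np.cos(4.0 * E))

E = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
disky = 10.0 + 0.3 * np.cos(4.0 * E)   # pointed, disk-shaped isophote
boxy = 10.0 - 0.3 * np.cos(4.0 * E)    # box-shaped isophote
print(b4_coefficient(E, disky), b4_coefficient(E, boxy))  # ~ +0.3 and -0.3
```

The factor 2 comes from the orthogonality of the Fourier basis, so the routine recovers the amplitude of the $`\mathrm{cos}(4E)`$ distortion that was put in.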
We stored the coefficient $`B_4`$, which tells whether the ellipse is disk-shaped (a positive $`B_4`$ parameter) or box-shaped (a negative $`B_4`$ parameter) (Bender et al. 1989). We also stored the semi-major axis position angle variation. It should be noted that in general ellipticity does not present simple behaviour. Ellipticity varies with radius (Zepka & Dottori 1987, Kontizas et al. 1989, 1990) and so it is difficult to assign a single value to a cluster. We measured elliptical isophotes starting at $`R_h`$ and stopping at the last possible non diverging ellipse. In the models, this last ellipse occurred around the radius containing 90% of the cluster mass. ## 4 Discussion We considered models in which the less massive cluster (perturbed) is allowed to move in elliptic orbits around the more massive one (perturber). The perturbed cluster is assumed to move in an orbit of eccentricity $`e=0.6,0.7,0.9`$. The values of the pericentre and the eccentricity are meaningful only if the clusters are assumed to be point masses moving in Keplerian orbits. However, soft potential orbits are not conical and our use of these definitions is meaningful only to a good approximation, but not strictly (White 1978). The collision parameters of the simulations are given in Table 2. The model designation (column 1 of Table 2) contains information on the encounter conditions. E refers to an elliptic orbit encounter; the number following the letter E is related to the orbital eccentricity (column 5 of Table 2); A, B, C and D refer to the model type (Table 1); the last number is the orbit pericentre (column 6 of Table 2). From the analysis of the time evolution of each model, as illustrated in Fig. 1 for E9AB10, it is possible to observe the tendency to form bridges at the beginning of the encounters (see also Paper I). As time goes by the smaller cluster is completely disrupted by the massive one, resulting in a single cluster at the end of the simulation. ### 4.1 The Structure of the Final Stages In Fig. 
2, we illustrate the same encounter as in Fig. 1, but in a different plane projection (parallel to the orbital plane). We can observe the disruption of the smaller cluster, with its stars occupying orbits around the massive cluster. As these stars remain preferentially orbiting in the same plane as the original encounter, they introduce an elliptical shape to the final single cluster, when seen in a favourable plane projection. In Fig. 3 we show isopleths, ellipticity, position angle and the $`4^{th}`$ cosine coefficient (hereafter $`B_4`$) for the model E9AB10 in a XZ projection. These measures were made for four different evolution times, the first panel being the initial massive cluster A before interacting (t=0 Myr). We can observe a radial variation of ellipticity during the interaction, which has also been observed for all other models. A general trend of decreasing ellipticity toward the inner parts of the models can also be seen. Zepka & Dottori (1987) and Kontizas et al. (1989) have observed internal variations of ellipticity in a number of LMC clusters, finding a general tendency of increasing ellipticity towards the inner parts of these clusters. However, they reported a few exceptions ($`\sim 5\%`$ of the data) which in turn have counterparts in our models, and could be the result of an interaction. Indeed there are about 300 cluster pairs among a total population of 7847 extended objects in the SMC, inter-Cloud region and LMC (Bica & Schmitt 1995 and Bica et al. 1998). At least 50% of the pairs are expected to be interacting (Bhatia & Hatzidimitriou 1988), and we need a favourable projection (nearly edge-on orbits) to observe elliptical structures like those of the models. In Fig. 3 there is little time variation of ellipticity during the lifetime of our model. This variation occurs mainly in the outer parts of the cluster, probably due to the continuous escape of stars from the disrupted cluster. Small position angle variations occur (Fig. 
3), which indicate that the elliptical shape of the cluster stands out in the plane of the encounter. The $`B_4`$ coefficient clearly has a positive peak, which is a sign of the presence of a disc component in the resulting model. This suggests that the elliptical shape of our final clusters is due to a rotating disk formed mainly by stars of the disrupted cluster. In order to check whether mass or size could affect the behaviour of our models, we did the same analysis for models with different mass and concentration degree, but with the same orbital encounter. In Fig. 4 we present the same measures as in Fig. 3 for two other models at the final stage, and we see similar ellipticity variations with radius. In order to compare absolute ellipticity variations between models, and also between different time stages in the same model, it is required to define ellipticity at a common radial distance. As the main shape changes are due to variations in the outermost parts of these clusters, we decided to measure the ellipticity as the mean between 3$`R_h`$ and the outermost ellipse measured. In Table 4 we give the mean ellipticity between 3$`R_h`$ and the outermost ellipse, $`e_{out}`$, for all the models at different time stages. We conclude that in model E9AB10 $`e_{out}`$ shows a small tendency to increase with time; differences among the models are found mainly in the shape of the e versus radial distance curve. When comparing ellipticity between different models at the same appropriate time (Table 4), we observe a small reduction of $`e_{out}`$ from a more massive model (E9AB10) to a less massive model (E9BC10). This ellipticity versus mass relation has been suggested for the LMC clusters (van den Bergh & Morbey 1984), where high mass clusters have higher angular momentum or have more difficulty in shedding it than do low mass clusters. 
However, when comparing a more massive model (E9AB10) with a less massive one (E9BD10), where the latter has a different concentration degree for the disrupted cluster, we observe no significant variation in $`e_{out}`$. Thus, the initial concentration degree affects the final shape of the resulting cluster. This suggests that a more concentrated cluster has a deeper potential well, making it more difficult for the more massive cluster to strip its stars. On the other hand, stripping of stars from the less concentrated cluster is more efficient, resulting in a more pronounced halo expansion. This is supported by the results in Table 5. This halo expansion, when seen in an edge-on view, seems to be an important factor to establish the shape of the final merger. We also compared the possible ellipticity variation between models with equal mass and concentration degree, but different initial orbital parameters. We observed no significant differences in $`e_{out}`$ between the models. ### 4.2 Rotational Velocity Field In Fig. 5 we show the line-of-sight rotational velocity field ($`V_y`$) for the model E9AB10 along the major axis at 990 Myr, seen in a XZ projection. The merger consists approximately of a rigidly rotating core and an outer halo ($`r>20`$ pc) with a Keplerian fall-off. The rotational velocity along the major axis has a peak value of about 1 $`\mathrm{km}\mathrm{s}^{-1}`$ at a radius of $`20-30`$ pc. It is interesting to see that the rigid rotation extends exactly to the radius of the initial massive cluster before the encounter. This indicates that the main contribution to the velocity field for $`r>20`$ pc is due to stars of the small cluster. This becomes evident when we plot the two star cluster member sets for a merged final model (Fig. 6) in a XZ projection. In this figure we plot the model E9AB10 at T=990 Myr and a reference box (side of 40 pc) centred at the more massive cluster. 
It is clear that the main contribution to the final composite system for $`r>20`$ pc is due to stars of the small cluster. Velocity distributions obtained with high resolution spectra at large telescopes might reveal such rotation curves, which in turn could be a signature of evolved stages of cluster merging. ### 4.3 Mass Loss In Table 5 we give the radii containing 10%, 50% and 90% of the mass for the massive cluster in each interaction (model A or B) for $`t=0`$ and for the final cluster stage. We can observe that $`R_{50}`$ varies little between the initial and final cluster stages when comparing models with the same initial mass, suggesting that the massive cluster has its original stellar content little affected by the collision. However, model E9AB10 shows a contraction of the core, showing that the initial mass may affect the cluster's final structure. The radius containing 90% of the mass is larger for all our models, which shows that the final cluster outer halo is formed mainly by stars from the disrupted cluster. The latter result is in agreement with those presented in Paper I for the radial mass distribution. There we showed that the maximum expansion for the disrupted cluster in E models always occurs in the plane of the encounter and is a minimum in the z direction. When we compare the projected density distributions of the models before interacting and at the end of the simulation, we see that they are similar within the region $`r<10`$ pc (Fig. 7). However, the final cluster extends outwards with a dependence $`\rho \sim r^{-3}`$. Together with changes in the internal structure of our final single cluster, the present simulations suggest that a fraction of cluster stars may be ejected to the field due to the encounter. In order to quantify this fraction, we calculated the total energy per particle, identifying those that had positive total energy, which are expected to escape the system. 
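The criterion just described can be sketched as follows (an $`O(N^2)`$ direct sum in model units with G=1; the function name and array layout are ours, and the production runs used TREECODE, not this loop):

```python
import numpy as np

def escaper_mask(pos, vel, mass, G=1.0):
    """Flag stars with positive total (kinetic + potential) energy.
    pos, vel: (N,3) arrays; mass: (N,) array of particle masses."""
    N = len(mass)
    kinetic = 0.5 * mass * np.sum(vel ** 2, axis=1)
    potential = np.empty(N)
    for i in range(N):
        r = np.linalg.norm(pos - pos[i], axis=1)
        r[i] = np.inf                      # exclude self-interaction
        potential[i] = -G * mass[i] * np.sum(mass / r)
    return kinetic + potential > 0.0

# Two-body sanity check: a light star at unit distance from a unit mass
# escapes when it moves faster than sqrt(2).
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1e-6])
vel = np.array([[0.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(escaper_mask(pos, vel, mass))  # [False  True]
```

Summing the masses of the flagged particles then gives the ejected mass quoted in Table 6; a softened potential, matching the simulation's softening parameter, would be the more faithful variant.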
In Table 6 we give, for each model, the final amount of mass in stars that escape the cluster (in units of solar mass). The mass fraction that is ejected to the field is very small, considering the ensemble of the pair ($`\sim `$3%), so the contribution to the field stars by a cluster pair encounter is not very significant. It should be noted that when we study the merger of the cluster pair we are not taking into account the tidal field of the parent galaxy. This should truncate the outer halo of the final cluster, which would increase the fraction of stars ejected to the field. Adopting a tidal radius $`r_{tidal}\sim 65`$ pc (distance modulus of 18.5 for the LMC, Westerlund 1990) for observed cluster haloes (Elson et al. 1987), we can estimate how much mass is beyond this limit. For the model E9AB10, we estimate that $`\sim 50\%`$ of the mass of the disrupted cluster (4.5% of the total mass) is beyond this limit. So, it can be concluded that although the total mass loss for the pair is not very significant, the assumption of a maximum radius for the final cluster implies that half of the disrupted cluster stars feed the field. ### 4.4 Comparison of Simulations with Magellanic Cloud Clusters Isopleth maps of projected planes at a given time $`t`$ of a suitable model can be compared to the observed isodensity maps of selected Magellanic Cloud clusters to infer their dynamics. In Paper I we used this method to search for evidence of interacting pairs. In the present study, we use model isopleths together with other measured parameters in order to compare our simulation final stages with possible observational evolved products of cluster pair interaction. Some examples are given in Figs. 8, 9, 10 and 11, where we show isophotes of some isolated LMC clusters, and the radial dependence of ellipticity, position angle and the $`B_4`$ coefficient. The examples shown here have an ellipticity radial variation with a trend to increase towards the outer parts. 
NGC 1783 has an ellipticity curve compatible with our model E9BD10 when seen in an XZ projection at 990 Myr (Fig. 4, right). The cluster age (Sect. 3) is fully compatible with the model evolutionary time. This can be explained as the result of a cluster pair encounter with the subsequent merger of the pair early in the history of NGC 1783. NGC 1831 shows many resemblances to the models. The ellipticity (Fig. 9) increases outwards, but in the external isophote the value decreases again. This is observed in the model E9BC10 at 990 Myr (Fig. 4, left). The cluster presents considerable isophotal twisting, as in the model. As pointed out in Sect. 3, the age of NGC 2156 falls short of the long term stages of the models. But notice that asymmetrical early stages (Fig. 2) at $`\sim `$25 Myr, if seen in a favourable direction, might present ellipticity variations. The negative $`B_4`$ coefficient behaviour suggests that if the cluster is indeed a merger, disruption has not yet occurred to create a disc shape. NGC 1978 is an interesting intermediate age cluster due to the pronounced ellipticity (Fig. 11), which is already present in the innermost available isophote. No isophotal twisting occurs in this case. Recently Kravtsov (1999) searched for spatial variations in the colour-magnitude diagram of NGC 1978. They found evidence of some variations, which they attributed to a possible metallicity spread. Bomans et al. (1995) did not find any evidence of age variation within the cluster. A merger scenario does not require differences in age and/or metallicity. Indeed two clusters born in the same association are expected to have a close orbit encounter, which is the most favourable case for interactions (Paper I). The $`B_4`$ coefficient has different behaviours in the four clusters, and in some cases varies within a given cluster. We might be witnessing clusters with either discoidal (positive values) or boxy (negative values) shapes.
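The $`B_4`$ diagnostic used throughout this section can be sketched as the cos 4θ Fourier amplitude of an isophote's radial deviations; this toy version assumes a circular mean isophote instead of a full ellipse fit, so it only illustrates the sign convention (positive for disky, negative for boxy):

```python
import math

def b4_coefficient(points):
    """Amplitude of the cos(4 theta) term in the radial deviation of an
    isophote from its mean radius, normalised by that radius; positive
    values indicate a disky shape, negative ones a boxy shape."""
    radii = [math.hypot(x, y) for x, y in points]
    thetas = [math.atan2(y, x) for x, y in points]
    r0 = sum(radii) / len(radii)
    b4 = 2.0 * sum((r - r0) * math.cos(4.0 * t)
                   for r, t in zip(radii, thetas)) / len(radii)
    return b4 / r0

# synthetic disky isophote: r(theta) = r0 (1 + 0.05 cos 4 theta),
# sampled uniformly in theta
pts = []
for k in range(360):
    t = 2.0 * math.pi * k / 360.0
    r = 10.0 * (1.0 + 0.05 * math.cos(4.0 * t))
    pts.append((r * math.cos(t), r * math.sin(t)))
print(round(b4_coefficient(pts), 3))   # -> 0.05
```

In a proper analysis the deviations would be measured from the best-fitting ellipse rather than from a circle, as in standard isophote-shape work on elliptical galaxies.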
The average value of $`B_4`$ for NGC 1783 suggests an overall boxy shape. NGC 2156 is also boxy, but with a systematic trend to become disc-shaped in the outer parts. NGC 1831 shows a predominantly positive $`B_4`$, suggesting a disc shape. Finally, NGC 1978 is on average positive, thus indicating a predominantly disc shape. However, the cluster has two distinct $`B_4`$ regions: $`20\lesssim r\lesssim 50`$ is definitely disc-like, while $`r>50`$ should be classified as boxy. Behaviours like this have counterparts in elliptical galaxies (see e.g. the sample of Goudfrooij et al. 1994). The $`B_4`$ coefficient is a promising tool to explore the possibility of mergers in galaxies or star clusters. ## 5 Conclusions We used N-body simulations to study the final morphology and structure resulting from star cluster pair encounters. We also compared these morphologies with those of elliptical isolated LMC clusters. The main conclusions may be summarized as follows: 1. Close orbit encounters of cluster pairs can lead to a single final cluster with a structure distinct from that of the original clusters. When seen in a favourable plane projection, models show an elliptical shape comparable to that of some observed isolated LMC clusters. This suggests that tidal encounters could be a mechanism to explain the ellipticity of several clusters in the Magellanic Clouds. 2. Evolved stages appear to be stable after $`\sim `$200 Myr, suggesting that the resulting ellipticity is not transient. The simulations indicate that the initial degree of concentration and the mass affect the final shape of the resulting cluster. In the final stages all models present halo expansion, and some present core contraction. 3. The models show mass loss for the composite system. The fraction of stars ejected to the field by the encounter is not very significant ($`\sim 3\%`$ with respect to the sum of both clusters).
However, it can represent up to $`\sim 50\%`$ of the stars of the disrupted cluster, if we assume a tidal radius of $`r_{tidal}\sim 65`$ pc for the final merged cluster. 4. The velocity distributions of some final-stage models present characteristic velocity patterns in favourable plane projections, which could be used as an observational constraint to test the present scenarios. Finally, we call attention to the fact that the merging of spheroidal systems can produce disc-shaped products. Thus not only boxy-shaped systems might be related to mergers. ACKNOWLEDGMENTS We thank Dr. Hernquist for allowing us to use TREECODE and the CESUP-UFRGS for allotted time in the CRAY YMP-2E computer. We are also grateful to an anonymous referee for interesting remarks. We acknowledge support from the Brazilian Institutions CNPq, CAPES and FINEP. The images in this study are based on photographic data obtained using the UK Schmidt Telescope, which was operated by the Royal Observatory Edinburgh, with funding from the UK Science and Engineering Research Council, until 1988 June, and thereafter by the Anglo-Australian Observatory. Original plate material is copyright by the Royal Observatory Edinburgh and the Anglo-Australian Observatory. The plates were processed into the present compressed digital form with their permission. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166.
no-problem/9910/physics9910043.html
# Atomic dynamics in evaporative cooling of trapped alkali atoms in strong magnetic fields ## I Introduction Bose-Einstein condensation of alkali atoms in magnetic traps was first observed in 1995, and since then the development of related research has been very swift. Typically the hyperfine state used in the alkali experiments is the $`F=1`$ state, although condensation has been demonstrated for the <sup>87</sup>Rb $`F=2`$ case as well. The trapping of atoms is based on moderate, spatially inhomogeneous magnetic fields, which create a parabolic, spin-state dependent potential for spin-polarised atoms, as shown in Fig. 1(a). For slowly moving atoms the trapping potential depends on the strength of the magnetic field $`B`$ but not on its direction. In practice the field is dominated by a constant bias field $`\stackrel{}{B}_{\mathrm{bias}}`$, which eliminates the Majorana spin flips at the center of the trap. In evaporative cooling the hottest atoms are removed from the trap and the remaining ones thermalise by elastic collisions. This leads to a decrease in temperature of the atoms remaining in the trap. Continuous evaporative cooling requires an adjustable separation into cold and hot atoms. This is achieved by inducing spin flips with an oscillating (radiofrequency) magnetic field, which rotates preferably in the plane perpendicular to the bias field. In the limit of linear (weak) Zeeman effect the rf field couples the adjacent magnetic states $`M_F`$ resonantly at the spatial location determined by the field frequency \[Fig. 1(a)\]. Hot atoms oscillating in the trap can reach the resonance point and exit the trap after a spin flip to a nontrapping state. Using the rotating wave approximation we can eliminate the rf field oscillations, and obtain the curve crossing description of resonances \[Fig. 1(b)\].
The dynamics of atoms as they move past the resonance point can be described with a simple semiclassical model, which has been shown to agree very well with fully quantum wave packet calculations. The model, however, can be applied only if the resonances between adjacent $`M_F`$ states occur at exactly the same distance from the trap center. When the nonlinear terms dominate the Zeeman shifts, the situation changes, as shown in Fig. 1(c). The adjacent resonances become separated and one expects to treat the evaporation as a sequence of independent Landau-Zener crossings, as suggested by Desruelle et al. in connection with their recent <sup>87</sup>Rb experiment. We show that there is an intermediate region where off-resonant two-photon transitions from the $`M_F=2`$ state to the $`M_F=0`$ state, demonstrated in Fig. 2, play a relevant role. In general there is a competition between the adiabatic following of the eigenstates (solid lines in Fig. 2), which leads to evaporation, and nonadiabatic transitions which force the atoms to stay in the trapping states. In <sup>23</sup>Na the nonadiabatic transitions can lead to highly inelastic collisions. In the experiment by Desruelle et al. it was found that for a strong bias field the nonlinear Zeeman shifts remove some resonances completely, thus making it impossible to make a spin flip to a nontrapping state. Our calculations confirm this observation. We also show that although evaporation could continue via off-resonant multiphoton processes, such a process is not practical. The stopping of evaporation at some finite temperature occurs for the <sup>87</sup>Rb and <sup>23</sup>Na $`F=2`$ trapping states, but not e.g. for the <sup>85</sup>Rb $`F=2`$ trapping state. In Sec. II we write down the formalism for the Zeeman shifts and show the basic properties of the field-dependent trapping potentials. We describe the fully quantum wave packet approach and corresponding semiclassical theories in Sec.
III, present and discuss the results in Sec. IV, and summarize our work in Sec. V. ## II The Zeeman structure ### A <sup>23</sup>Na and <sup>87</sup>Rb The Zeeman shifts cannot be derived properly in the basis of the hyperfine states (labelled by $`F`$ and $`M_F`$). We need to consider the atom-field Hamiltonian in the $`(I,J)`$ basis: $$H=A\stackrel{}{I}\cdot \stackrel{}{J}+CJ_z+DI_z,$$ (1) where $`\stackrel{}{I}`$ and $`\stackrel{}{J}`$ are the operators for the nuclear and total electronic angular momentum, respectively. The first term describes the hyperfine coupling; $`E_{\mathrm{hf}}=h\nu _{\mathrm{hf}}=2A`$, where $`E_{\mathrm{hf}}`$ is the hyperfine splitting between the $`F=1`$ and $`F=2`$ states. Here $`\nu _{\mathrm{hf}}=1772`$ MHz for <sup>23</sup>Na and $`\nu _{\mathrm{hf}}=6835`$ MHz for <sup>87</sup>Rb. The magnetic field dependence arises from the two other terms, with $`C=g_J\mu _BB`$ and $`D=\alpha \mu _NB`$, where the Bohr magneton is $`\mu _B=e\hbar /2m_e`$, the nuclear magneton is $`\mu _N=e\hbar /2m_p`$, and the Lande factor is $`g_J=2`$. Here $`\alpha =2.218`$ for <sup>23</sup>Na and $`\alpha =2.751`$ for <sup>87</sup>Rb. But $`\mu _B/\mu _N\sim 1000`$, and in fact we can omit the third term in Eq. (1). For <sup>23</sup>Na and <sup>87</sup>Rb we have $`I=3/2`$ and $`J=1/2`$ (leading to $`F=1`$ or $`F=2`$ with $`\stackrel{}{F}=\stackrel{}{I}+\stackrel{}{J}`$). Our state basis is formed by the angular momentum states labelled with the magnetic quantum number pairs $`(M_I,M_J)`$. When we evaluate the matrix elements of $`H`$ \[using the relation $`\stackrel{}{I}\cdot \stackrel{}{J}=I_zJ_z+\frac{1}{2}(I_+J_{-}+I_{-}J_+)`$\], the states that correspond to the same value of $`M_F=M_I+M_J`$ form subsets of mutually coupled states. By diagonalising the Hamiltonian we obtain its eigenstates.
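The diagonalisation described above can be sketched directly; a minimal version for $`I=3/2`$, $`J=1/2`$ (dropping the small $`DI_z`$ term, as the text does), in units where $`A=1`$:

```python
import numpy as np

def hyperfine_zeeman(A, C, I=1.5, J=0.5):
    """Matrix of H = A I.J + C J_z in the |M_I, M_J> product basis,
    using I.J = Iz Jz + (I+ J- + I- J+)/2 (the D I_z term is dropped)."""
    def up(F, M):                       # matrix element <M+1| F+ |M>
        return np.sqrt(F * (F + 1) - M * (M + 1))
    basis = [(mi, mj) for mi in np.arange(-I, I + 1)
                      for mj in np.arange(-J, J + 1)]
    n = len(basis)
    H = np.zeros((n, n))
    for a, (mi, mj) in enumerate(basis):
        H[a, a] = A * mi * mj + C * mj
        for b, (ni, nj) in enumerate(basis):
            if ni == mi + 1 and nj == mj - 1:    # (1/2) I+ J- term
                H[b, a] += 0.5 * A * up(I, mi) * up(J, nj)
            if ni == mi - 1 and nj == mj + 1:    # (1/2) I- J+ term
                H[b, a] += 0.5 * A * up(I, ni) * up(J, mj)
    return H

# at B = 0 the eight levels collapse into the F=1 and F=2 manifolds,
# split by E_hf = 2A (here A = 1)
levels = sorted({float(x) for x in
                 np.round(np.linalg.eigvalsh(hyperfine_zeeman(1.0, 0.0)), 6)})
print(levels)   # -> [-1.25, 0.75]
```

The blocks of fixed $`M_F=M_I+M_J`$ are at most 2x2, which is why the closed-form eigenvalues quoted next exist.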
The states which correspond to the $`F=2`$ state in the $`B\rightarrow 0`$ limit (labelled with $`M_F`$) have the energies $`E_{\mathrm{M}_\mathrm{F}}`$: $`E_{+2}`$ $`=`$ $`{\displaystyle \frac{1}{2}}C,`$ (2) $`E_{+1}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\sqrt{4A^2+2AC+C^2}-A,`$ (3) $`E_0`$ $`=`$ $`{\displaystyle \frac{1}{2}}\sqrt{4A^2+C^2}-A,`$ (4) $`E_{-1}`$ $`=`$ $`{\displaystyle \frac{1}{2}}\sqrt{4A^2-2AC+C^2}-A,`$ (5) $`E_{-2}`$ $`=`$ $`-{\displaystyle \frac{1}{2}}C.`$ (6) These energies have been normalised to the energy of the $`F=2`$ state for $`B=0`$. In Fig. 3(a) and (b) we show the Zeeman shifts for all hyperfine ground states of <sup>23</sup>Na and <sup>87</sup>Rb, but normalised to the ground state energy in the absence of hyperfine structure. For small magnetic fields ($`B\ll 1`$ T) we get $$E_{\mathrm{M}_\mathrm{F}}\approx E_{\mathrm{hf}}[\epsilon M_F+(4-M_F^2)\epsilon ^2],$$ (7) where $`\epsilon =\mu _BB/(2E_{\mathrm{hf}})`$. In terms of $`F`$ and $`M_F`$ the linear Zeeman shift is $`E_{\mathrm{M}_\mathrm{F}}=g_F\mu _BBM_F=E_{\mathrm{hf}}\epsilon M_F`$, as the hyperfine Lande factor is $`g_F=1/2`$. The necessary condition for evaporation is that the rf field induces a resonance between the states $`M_F=2`$ and $`M_F=1`$. The location of this resonance defines the division between the hot and cold atoms. By decreasing the rf field frequency $`\nu _{\mathrm{rf}}`$ we both move the resonance point closer to the trap center and allow more atoms to escape the trap. For small $`B`$ fields all adjacent states are resonant at the same location for any $`\nu _{\mathrm{rf}}`$. But in the case of strong magnetic fields, typically larger than about 0.0002 T, the resonances separate due to the nonlinear Zeeman shifts. Furthermore, the resonances other than the $`M_F=2\leftrightarrow M_F=1`$ one in fact move towards the trap center faster, and reach it while the $`M_F=2\leftrightarrow M_F=1`$ resonance still corresponds to some finite temperature.
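The closed forms (2)–(6) and the expansion (7) can be cross-checked numerically; energies are kept in MHz (so $`E_{\mathrm{hf}}`$ becomes just $`\nu _{\mathrm{hf}}`$) and $`\mu _B/h\approx 13.996`$ GHz/T is assumed:

```python
import math

MU_B_OVER_H = 13996.0    # Bohr magneton / h in MHz per tesla (approximate)

def f2_energies(nu_hf, B):
    """Eqs. (2)-(6) in MHz: energies of the five F=2 sublevels,
    with A = E_hf/2 and C = g_J mu_B B, g_J = 2."""
    A = nu_hf / 2.0
    C = 2.0 * MU_B_OVER_H * B
    return {
        2:  0.5 * C,
        1:  0.5 * math.sqrt(4*A*A + 2*A*C + C*C) - A,
        0:  0.5 * math.sqrt(4*A*A + C*C) - A,
        -1: 0.5 * math.sqrt(4*A*A - 2*A*C + C*C) - A,
        -2: -0.5 * C,
    }

# small-field cross-check against Eq. (7):
# E ~ E_hf [eps M_F + (4 - M_F^2) eps^2], eps = mu_B B / (2 E_hf)
nu_hf, B = 1772.0, 1e-4          # 23Na at 0.1 mT
eps = MU_B_OVER_H * B / (2.0 * nu_hf)
for M, E in f2_energies(nu_hf, B).items():
    approx = nu_hf * (eps * M + (4 - M * M) * eps**2)
    assert abs(E - approx) < 1e-3 * abs(E) + 1e-6
print("Eq. (7) reproduces Eqs. (2)-(6) at 0.1 mT")
```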
When $`\nu _{\mathrm{rf}}`$ is lowered further, the other resonances begin to disappear. At strong $`B`$ fields the $`M_F=0`$ state is also a trapping state, as shown in Fig. 4, so for effective evaporation one really needs to reach the $`M_F=-1`$ state. At the critical frequency $`\nu _{\mathrm{cr}}`$ the crossing between the states $`M_F=0`$ and $`M_F=-1`$ disappears. Alternatively, for a fixed frequency $`\nu _{\mathrm{rf}}`$ we have a critical value $`B_{\mathrm{cr}}`$ for the $`B`$ field; the resonances disappear when $`B\geq B_{\mathrm{cr}}`$ (for practical reasons we have chosen to modify $`B`$ rather than $`\nu _{\mathrm{rf}}`$ in our wave packet studies). In Fig. 5(a) we show the potential configuration when $`\nu _{\mathrm{rf}}`$ is slightly below $`\nu _{\mathrm{cr}}`$. Since $`\nu _{\mathrm{cr}}`$ corresponds to the state separation at the center of the trap, it is independent of the trap parameters such as the trap frequency. For a specific trap configuration $`\nu _{\mathrm{cr}}`$ can be converted into a minimum kinetic energy required for reaching the resonance between the states $`M_F=2`$ and $`M_F=1`$. In Fig. 5(b) we show this minimum kinetic energy in units of temperature as a function of magnetic field strength for <sup>23</sup>Na and <sup>87</sup>Rb, and for the trap configuration used both in our simulations and in the experiment by Desruelle et al. In the intermediate region $`0<B<B_{\mathrm{cr}}`$, where the necessary crossings exist but are separated, the processes take place via two possible routes. We can have off-resonant multiphoton processes, which, e.g., lead to adiabatic transfer from the $`M_F=2`$ state to the $`M_F=0`$ state. This example is demonstrated in Fig. 2, where we show also the eigenstates of the system, i.e., the field-dressed potentials. When the relevant resonances are well separated, the evaporation takes place via a complicated sequence of crossings, as indicated in Fig. 1(c).
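The critical field itself follows from the closed-form energies of Sec. II A. The sketch below assumes, as in Sec. III, an rf frequency 0.25 MHz above the $`M_F=2\leftrightarrow 1`$ spacing at the trap centre, and bisects for the field at which the last crossing needed for evaporation is lost; it reproduces the $`B_{\mathrm{cr}}`$ values quoted in Sec. III:

```python
import math

MU_B_OVER_H = 13996.0    # Bohr magneton / h in MHz per tesla (approximate)

def spacings(nu_hf, B):
    """Level spacings (in MHz) at the trap centre from Eqs. (2)-(6):
    returns (E(+2) - E(+1), E(0) - E(-1)), with A = E_hf/2, C = 2 mu_B B."""
    A = nu_hf / 2.0
    C = 2.0 * MU_B_OVER_H * B
    E2  = 0.5 * C
    E1  = 0.5 * math.sqrt(4*A*A + 2*A*C + C*C) - A
    E0  = 0.5 * math.sqrt(4*A*A + C*C) - A
    Em1 = 0.5 * math.sqrt(4*A*A - 2*A*C + C*C) - A
    return E2 - E1, E0 - Em1

def critical_field(nu_hf, offset_mhz=0.25):
    """Bisect for the field at which the 0 <-> -1 spacing at the centre
    exceeds h nu_rf = [E(+2) - E(+1)] + h * offset, so that crossing is lost."""
    lo, hi = 1e-5, 0.1
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        top, low = spacings(nu_hf, mid)
        if low - top < offset_mhz:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(critical_field(6835.0))   # 87Rb: ~0.00297 T, the value quoted in Sec. III
print(critical_field(1772.0))   # 23Na: ~0.00152 T
```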
This will be demonstrated with wave packet simulations in Sec. IV. ### B <sup>85</sup>Rb For the isotope <sup>85</sup>Rb we have $`I=5/2`$ and $`J=1/2`$, so the ground state hyperfine states are $`F=2`$ and $`F=3`$, as shown in Fig. 3(c). Now the $`F=2`$ trapping state is the lower hyperfine ground state. Thus the behavior of the $`M_F`$ states is different from the <sup>87</sup>Rb and <sup>23</sup>Na case. The $`B`$ field dependence of the states related to $`F=2`$ is now $`E_{+2}`$ $`=`$ $`{\displaystyle \frac{3A}{2}}-{\displaystyle \frac{1}{2}}\sqrt{9A^2+4AC+C^2},`$ (8) $`E_{+1}`$ $`=`$ $`{\displaystyle \frac{3A}{2}}-{\displaystyle \frac{1}{2}}\sqrt{9A^2+2AC+C^2},`$ (9) $`E_0`$ $`=`$ $`{\displaystyle \frac{3A}{2}}-{\displaystyle \frac{1}{2}}\sqrt{9A^2+C^2},`$ (10) $`E_{-1}`$ $`=`$ $`{\displaystyle \frac{3A}{2}}-{\displaystyle \frac{1}{2}}\sqrt{9A^2-2AC+C^2},`$ (11) $`E_{-2}`$ $`=`$ $`{\displaystyle \frac{3A}{2}}-{\displaystyle \frac{1}{2}}\sqrt{9A^2-4AC+C^2},`$ (12) where now $`E_{\mathrm{hf}}=3A`$. For <sup>85</sup>Rb we have $`\nu _{\mathrm{hf}}`$ = 3036 MHz. Here the trapping states are now $`F=2,M_F=-2`$ and $`F=2,M_F=-1`$. If we now define $`\stackrel{~}{\epsilon }=(2/3)\epsilon =\mu _BB/(3E_{\mathrm{hf}})`$ we get approximately $$E_{\mathrm{M}_\mathrm{F}}\approx -E_{\mathrm{hf}}[\stackrel{~}{\epsilon }M_F+(9-M_F^2)\stackrel{~}{\epsilon }^2].$$ (13) As $`g_F=-1/3`$, this agrees with the linear expression $`E_{\mathrm{M}_\mathrm{F}}=g_F\mu _BBM_F=-E_{\mathrm{hf}}\stackrel{~}{\epsilon }M_F`$. The change of order in the $`M_F`$ state energy ladder means that with increasing $`B`$ field one never loses the crossing points between the adjacent states. In other words, if we use an rf field that can couple the states $`M_F=-2`$ and $`M_F=-1`$ resonantly at some location $`x_C`$, then we always couple the rest of the states resonantly as well at distances larger than $`x_C`$. In Fig.
6 we see how this leads to a sequence of crossings that allows hot atoms to leave the trap without the need for sloshing. One must, however, take into account that the kinetic energy required to leave the trap is now set by the difference between the energy of the $`M_F=-2`$ state at the center of the trap, and the energy of the $`M_F=0`$ (or $`M_F=-1`$) state at the point where the states $`M_F=0`$ and $`M_F=-1`$ are in resonance. In other words, atoms need a kinetic energy equal to or larger than the energy difference between the trap center and the second crossing in Fig. 6. In this paper we limit our discussion to the $`F=2`$ case only, but it is obvious that for the <sup>85</sup>Rb $`F=3`$ trapping states we face the same problem as in the $`F=2`$ case for <sup>87</sup>Rb and <sup>23</sup>Na. In general for the alkali atoms we can expect that the problem will arise whenever we use the upper hyperfine ground state as the trapping state at strong $`B`$ fields. ### C Trap configuration For simplicity we have assumed in our studies the same spatially inhomogeneous magnetic field as in the experiment by Desruelle et al., except that we have added a spatially homogeneous compensation field. This allows us to change the general field magnitude (which depends on the bias field) while keeping the trap shape almost unchanged (it also depends on the bias field). Thus we set $$B=B_0+\left(\frac{B^{\prime 2}}{2B_{\mathrm{bias}}}-\frac{B^{\prime \prime }}{2}\right)(x^2+y^2)+B^{\prime \prime }z^2,$$ (14) where $`B^{\prime }=9`$ T/m, $`B^{\prime \prime }/B_{\mathrm{bias}}=10^4`$ m<sup>-2</sup>, and the trap center field is defined as $`B_0=B_{\mathrm{bias}}-B_{\mathrm{comp}}`$. The actual trap is cigar-shaped, which is a typical feature in many experiments. We have selected the $`x`$ direction as the basis for our wave packet studies. We set $`B_{\mathrm{bias}}=0.0150`$ T and use $`B_{\mathrm{comp}}`$ as a parameter to change $`B_0`$. Using $`C=g_J\mu _BB`$ with Eqs.
(4) and (14) we get the spatially dependent trapping potentials. ## III Quantum and semiclassical models ### A Wave packet simulations For our wave packet studies we fix the rf field frequency to the value $`\nu _{\mathrm{rf}}=\nu _0+0.25`$ MHz, where $`\nu _0=[E_{+2}(x=0)-E_{+1}(x=0)]/h`$. With this setting the atoms typically need a kinetic energy $`E_{\mathrm{kin}}/k_B\approx 24\mu `$K in order to reach the crossing between the states $`M_F=2`$ and $`M_F=1`$. With our special definition of $`\nu _{\mathrm{rf}}`$ the differences between <sup>23</sup>Na and <sup>87</sup>Rb appear mainly in the time scale of atomic motion (Na atoms are lighter and thus move faster), and in the scaling of $`B`$. For our selected $`\nu _{\mathrm{rf}}`$ we have $`B_{\mathrm{cr}}=0.00297`$ T for <sup>87</sup>Rb and $`B_{\mathrm{cr}}=0.00152`$ T for <sup>23</sup>Na. We have used the rf field strength $`\mathrm{\Omega }=(2\pi )\times 2.0`$ kHz, where the rf field induced coupling term is $$H_{\mathrm{rf}}=\hbar \left(\begin{array}{ccccc}0& \mathrm{\Omega }& 0& 0& 0\\ \mathrm{\Omega }& 0& \sqrt{\frac{3}{2}}\mathrm{\Omega }& 0& 0\\ 0& \sqrt{\frac{3}{2}}\mathrm{\Omega }& 0& \sqrt{\frac{3}{2}}\mathrm{\Omega }& 0\\ 0& 0& \sqrt{\frac{3}{2}}\mathrm{\Omega }& 0& \mathrm{\Omega }\\ 0& 0& 0& \mathrm{\Omega }& 0\end{array}\right)\begin{array}{c}|2,2\rangle \hfill \\ |2,1\rangle \hfill \\ |2,0\rangle \hfill \\ |2,-1\rangle \hfill \\ |2,-2\rangle \hfill \end{array},$$ (15) in the $`|F,M_F\rangle `$ basis as indicated. The wave packet simulations were performed in the same manner as in the previous study. Our initial wave packet has a Gaussian shape, with a width of $`10\mu `$m. For all practical purposes this wave packet is very narrow both in position and momentum, and the spreading due to its natural dispersion is not an important factor. We identify the mean momentum of the wave packet with the atomic kinetic energy $`E_{\mathrm{kin}}`$, and set $`E_{\mathrm{kin}}/k_B=30\mu `$K. In the experiment by Desruelle et al.
one had typically $`B_0=B_{\mathrm{bias}}=0.0150`$ T, which sets the kinetic energy for reaching the resonance points (for any practical value of $`\nu _{\mathrm{rf}}`$) too high for realistic numerical simulations. Thus we have introduced the compensation field and limit $`B_0`$ to values below $`0.0050`$ T. But the main conclusions from our study apply to larger values of $`B_0`$ and $`E_{\mathrm{kin}}`$, and many of the results can be scaled to other parameter regions with the semiclassical models. Another simplification is that we consider only one spatial dimension. This is necessary simply because we have chosen to work with relatively large energies, such as 30 $`\mu `$K. Numerical wave packet calculations at the corresponding velocities require on the order of 100 000 points for both the spatial and temporal dimensions. As discussed in Ref. , however, this is not a crucial simplification. Basically, we solve the five-component Schrödinger equation $$i\hbar \frac{\partial \mathrm{\Psi }(x,t)}{\partial t}=H(x)\mathrm{\Psi }(x,t),$$ (16) where the components of the state vector $`\mathrm{\Psi }(x,t)`$ stand for the time dependent probability amplitudes for each $`M_F`$ state. The off-diagonal part of the Hamiltonian is given by Eq. (15). The diagonal terms are $$-\frac{\hbar ^2}{2m}\frac{\partial ^2}{\partial x^2}+U_{\mathrm{M}_\mathrm{F}}(x)-M_Fh\nu _{\mathrm{rf}},$$ (17) where $`m`$ is the atomic mass and $`U_{\mathrm{M}_\mathrm{F}}(x)`$ are the trap potentials as in Fig. 1(a). For the states $`M_F=-2`$ and $`M_F=-1`$ we use absorbing boundaries, and reflecting ones for the others. The numerical solution method is the split operator method, with the kinetic term evaluated by the Crank-Nicolson approach. ### B Semiclassical models For small magnetic fields the rf field induced resonances between adjacent states occur at the same position, $`x=x_C`$.
In this situation the spin-change probability for atoms which traverse the resonance is given by the multistate extension of the two-state Landau-Zener model. We have earlier shown that for the evaporation in the <sup>23</sup>Na $`F=2`$ state at $`E_{\mathrm{kin}}/k_B=5\mu `$K and small $`B`$ this model predicts the wave packet results very well. The solution for the multistate problem can be expressed with the solutions to the two-state Landau-Zener (LZ) model, so we shall begin by discussing the two-state case first. Let us consider two potentials, $`U_1`$ and $`U_2`$, which intersect at $`x=x_C`$ and are coupled by $`V`$. For strong $`B`$, when the crossings are well separated in our alkali $`F=2`$ system, $`V`$ is equal to $`\hbar \mathrm{\Omega }`$ or $`\sqrt{3/2}\hbar \mathrm{\Omega }`$, depending on which pair of adjacent states is involved \[see Eq. (15)\]. In addition to the coupling $`V`$, the relevant factors are the speed $`v_C`$ of the wave packet and the slopes of the trapping potentials $`U_{\mathrm{M}_\mathrm{F}}(x)`$ at the crossing. We define $$\alpha =\left|\frac{d(U_2-U_1)}{dx}\right|_{x=x_C}.$$ (18) The speed of the wave packet enters the problem as we describe the traversing of the crossing with a simple classical trajectory, $`x=v_C(t-t_0)+x_0`$. This allows us to enter the purely time-dependent description where the population transfer is given by the two-component Schrödinger equation $$i\hbar \frac{\partial }{\partial t}\left(\begin{array}{c}\mathrm{\Psi }_1(t)\\ \mathrm{\Psi }_2(t)\end{array}\right)=\left(\begin{array}{cc}0& V\\ V& \alpha v_Ct\end{array}\right)\left(\begin{array}{c}\mathrm{\Psi }_1(t)\\ \mathrm{\Psi }_2(t)\end{array}\right).$$ (19) This is the original Landau-Zener theory. In this form it is fully quantum and we can obtain an analytic expression for the state populations $`P_1`$ and $`P_2`$ after the crossing.
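Equation (19) can also be integrated numerically as a check on the analytic result; a dimensionless sketch ($`\hbar =1`$), assuming the standard Landau-Zener survival probability $`\mathrm{exp}(-\pi \mathrm{\Lambda })`$ for the initial diabatic state:

```python
import math

def lz_survival(V, slope, t_max=50.0, dt=0.002):
    """Integrate i dpsi/dt = [[0, V], [V, slope*t]] psi (hbar = 1) by RK4,
    from -t_max to +t_max, starting in state 1; return the population
    remaining in state 1 (the diabatic survival probability)."""
    def deriv(t, a, b):
        return -1j * V * b, -1j * (V * a + slope * t * b)
    a, b = 1.0 + 0j, 0.0 + 0j
    t = -t_max
    for _ in range(int(2 * t_max / dt)):
        k1a, k1b = deriv(t, a, b)
        k2a, k2b = deriv(t + dt/2, a + dt/2*k1a, b + dt/2*k1b)
        k3a, k3b = deriv(t + dt/2, a + dt/2*k2a, b + dt/2*k2b)
        k4a, k4b = deriv(t + dt, a + dt*k3a, b + dt*k3b)
        a += dt/6 * (k1a + 2*k2a + 2*k3a + k4a)
        b += dt/6 * (k1b + 2*k2b + 2*k3b + k4b)
        t += dt
    return abs(a)**2 / (abs(a)**2 + abs(b)**2)

V, slope = 0.5, 1.0                  # V and alpha * v_C in dimensionless units
Lam = 2 * V * V / slope              # Lambda with hbar = 1
print(lz_survival(V, slope), math.exp(-math.pi * Lam))   # the two agree to ~1e-2
```

The residual difference comes from truncating the integration at finite times, where the populations still oscillate with a slowly decaying amplitude.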
If state 1 was the initial state, then $$\begin{array}{cc}P_1\hfill & =\mathrm{exp}(-\pi \mathrm{\Lambda })\hfill \\ P_2\hfill & =1-\mathrm{exp}(-\pi \mathrm{\Lambda })\hfill \end{array},\mathrm{\Lambda }=\frac{2V^2}{\hbar \alpha v_C}.$$ (20) Obviously, the Landau-Zener model is only applicable when the total energy is higher than the bare-state energy at the resonance point. For more details about applying LZ theory to wave packet dynamics see Refs. . And now we return to the original multistate problem. According to the five-state case of the multistate model (see e.g. Ref. ) the populations $`P_{\mathrm{M}_\mathrm{F}}`$ of the five states after one traversal of the crossing are $`P_2`$ $`=`$ $`p^4,`$ (21) $`P_1`$ $`=`$ $`4(1-p)p^3,`$ (22) $`P_0`$ $`=`$ $`6(1-p)^2p^2,`$ (23) $`P_{-1}`$ $`=`$ $`4(1-p)^3p,`$ (24) $`P_{-2}`$ $`=`$ $`(1-p)^4,`$ (25) where $`p=\mathrm{exp}(-\pi \mathrm{\Lambda })`$, and $`\mathrm{\Lambda }`$ is defined by setting $`V=\hbar \mathrm{\Omega }/2`$. This assumes that we were initially on state $`M_F=2`$. We can see that the final population of the initial state, $`P_2`$, is equal to $`\mathrm{exp}[-2\pi \hbar \mathrm{\Omega }^2/(\alpha v_C)]`$ for both the two-state and the multistate model if Hamiltonian (15) is used. ## IV Results Typical examples of the atomic wave packet evolution for the three trapping states are shown in Figs. 7 and 8. They demonstrate the sloshing discussed e.g. in Refs. . The amplitudes of the components decrease as population is partly transferred to another state. Similarly new wave packet components can appear at crossings. As a wave packet component reaches a turning point it sharpens strongly. In Fig. 7 we have $`B_0=0.0018`$ T, which means that there is no crossing between the states $`M_F=0`$ and $`M_F=-1`$. Population transfer from the state $`M_F=1`$ to $`M_F=0`$ is weak. The $`M_F=0`$ wave packet component has turning points beyond the integration space.
As sloshing continues, Stückelberg oscillations could take place as split wave packet components merge again at crossings and interfere (for further discussion, see Refs. ). However, the wave packet contains several momentum components and thus such oscillations are not likely to be observed, because they are very sensitive to phase differences. In our simulations we saw no major indication of interference. In Fig. 9 we track the trap state populations and their sum as the wave packet sloshes in the trap and traverses several crossings. The magnetic field values are strong enough to ensure that the crossings are well separated. We can identify when the various crossings take place, although some of them happen simultaneously. The filled symbols indicate the corresponding Landau-Zener predictions, and we find that the agreement is excellent. Some oscillations appear for the <sup>23</sup>Na case \[Fig. 7(a)\] at times between 3.5 ms and 4.5 ms. These may arise from Stückelberg oscillations, but they do not affect the final transition probabilities, supporting our assumption that in the end such oscillations average out. Note that for <sup>23</sup>Na there is no resonance between the states $`M_F=0`$ and $`M_F=-1`$, but for <sup>87</sup>Rb there is, and it is seen as a stepwise reduction of $`P_S=P(M_F=2)+P(M_F=1)+P(M_F=0)`$. Near the critical field $`B_{\mathrm{cr}}`$ the probability to leave the trap via the states $`M_F=-2`$ and $`M_F=-1`$ varies strongly with $`B_0`$. When $`B_0<B_{\mathrm{cr}}`$ the wave packet meets two crossings between the states $`M_F=0`$ and $`M_F=-1`$ as it traverses the region around the trap center $`x=0`$ on the state $`M_F=0`$. At both crossings some population leaks into the state $`M_F=-1`$, as seen in Fig. 10 for $`B_0=0.0028`$ T. As $`B_0`$ increases, the two crossing points, on opposite sides of $`x=0`$, begin to merge, until they disappear at $`x=0`$ when $`B=B_{\mathrm{cr}}`$.
Then the transfer between the two states becomes off-resonant (or tunnelling), and its probability decreases exponentially as a function of the ratio of $`\mathrm{\Omega }`$ and the energy difference between the states $`M_F=0`$ and $`M_F=-1`$ at $`x=0`$. This situation corresponds to the parabolic level crossing model. But the main point is that the off-resonant process is unlikely to play any major role. Finally, in Fig. 11 we show how the transfer probability between the trap states at the first crossing changes as a function of $`B_0`$. The multistate process transforms smoothly into a two-state process between the states $`M_F=2`$ and $`M_F=1`$. The transition zone is rather large, though, with $`B_0`$ ranging from 0 to 0.0010 T. The transfer process in this zone is the off-resonant two-photon transfer demonstrated in Fig. 2. An analogous process can occur in atoms interacting with chirped pulses. An interesting point is that the population of the initial state is not affected by how the transferred population is distributed among the other involved states. This seems to be typical for Landau-Zener crossings. The solid lines indicate the predictions of the two-state model, and the dotted lines those of the multistate model. They change with $`B_0`$ because the location of the first crossing point, and thus the wave packet speed $`v_C`$ at this point, change slightly with $`B_0`$. ## V Conclusions Our results show that in general the semiclassical level crossing models offer a clear understanding of the single-atom dynamics during the evaporation process. Also, we have verified with wave packet calculations that the interpretations presented by Desruelle et al. for their <sup>87</sup>Rb experiment are correct. The simple picture of evaporation at near-zero magnetic fields transforms into a complex sequence of two-state crossings at field strengths above about 0.0010 T.
For all alkali systems where $`F=2`$ is the upper hyperfine ground state the evaporation will stop before condensation, as the necessary resonances disappear too soon as a function of the rf field frequency. We have shown that tunnelling does not really play a role once the resonances have been lost. Further complications arise from the fact that the $`M_F=0`$ state becomes a trapping state. In experiments, as suggested by Desruelle et al., one could avoid the problem by coupling the $`F=2,M_F=2`$ trapping state to the $`F=1,M_F=1`$ nontrapping state, or by using several rf fields of different frequencies within the $`F=2`$ hyperfine manifold. Although for <sup>87</sup>Rb one has observed a long-lasting coexistence of $`F=1`$ and $`F=2`$ condensates, theoretical studies predict this to be difficult for <sup>23</sup>Na due to destructive collisions. Thus the first approach may apply better to <sup>87</sup>Rb than to <sup>23</sup>Na. We have calculated earlier that for <sup>23</sup>Na the collisions between atoms in the $`M_F=0`$ and $`M_F=2`$ states are very destructive, with a rate coefficient on the order of $`10^{-11}`$ cm<sup>3</sup>/s. For practical bias field strengths the $`M_F=0`$ state is also a trapping state. Thus the efficiency of evaporation is reduced, and the time the atoms spend in the $`M_F=0`$ state increases, making it more likely to have a destructive, energy-releasing collision. So far condensation in the $`F=2`$ state for Na has not been achieved. Even in the weak $`B`$ field case evaporation can produce atoms in the $`M_F=0`$ state via nonadiabatic transitions. Thus the role of inelastic collisions is expected to be enhanced for the field strengths considered here. Once condensation is reached, however, the nonlinearity of the Zeeman shifts can be an asset rather than a nuisance.
For instance, one could create a new type of binary condensate by making a selective transfer of part of the condensate from the $`F=2,M_F=2`$ state to the $`F=2,M_F=1`$ state, either by using resonant or chirped rf field pulses. Alternatively, two rf pulses of different frequencies or perhaps a single chirped pulse might allow one to transfer the condensate from the $`F=2,M_F=2`$ state to the $`F=2,M_F=0`$ state and let it expand normally, without the need to switch the magnetic fields off. Of course, this would work only when $`B`$ is so small that the trapping nature of the $`M_F=0`$ state is not too strong. ###### Acknowledgements. This research has been supported by the Academy of Finland. We thank A. Aspect and S. Murdoch for valuable discussions and information.
# Electron injection break and pair content of quasar jets ## 1. Assumptions $``$ Nonthermal radiation in blazars is produced by thin shells, propagating at a constant relativistic ($`\mathrm{\Gamma }\gg 1`$) speed along the conical jet; $``$ Relativistic electrons are injected in the shells within a distance range $`\mathrm{\Delta }r=r_0`$, starting from $`r_0`$. They are injected at a constant rate and with a broken power-law energy distribution, $`Q=K\gamma ^{-p}`$ for $`\gamma >\gamma _b`$ and $`Q\propto \gamma ^{-1}`$ for $`\gamma \le \gamma _b`$; $``$ Radiative energy losses of electrons are dominated by Comptonization of the quasar broad emission lines. This process is responsible for the production of $`\gamma `$-rays. The low energy break at a few MeV results from the inefficient radiative cooling of lower energy electrons; $``$ The intensity of the magnetic field is $`B(r)=(r_0/r)B(r_0)`$. ## 2. The model equations ### 2.1. Electron Evolution The evolution of the electron energy distribution is given by the continuity equation (Moderski, Sikora & Bulik 2000) $$\frac{\partial N_\gamma }{\partial r}=-\frac{\partial }{\partial \gamma }\left(N_\gamma \frac{d\gamma }{dr}\right)+\frac{Q}{c\beta \mathrm{\Gamma }},$$ $`(1)`$ where $$\frac{d\gamma }{dr}=\frac{1}{\beta c\mathrm{\Gamma }}\left(\frac{d\gamma }{dt^{\prime }}\right)_{rad}-\frac{2}{3}\frac{\gamma }{r}.$$ $`(2)`$ The second term on the rhs of Eq. (2) represents the adiabatic losses. 
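Eq. (2) can be integrated directly for a single electron. The sketch below (our own illustration, not the code used for Figure 1) uses a simple Euler scheme in cgs units and, per the Sec. 1 assumption, keeps only the broad-emission-line term of the radiative losses, $`u_{BEL}^{\prime }=(4/3)\mathrm{\Gamma }^2L_{BEL}/(4\pi r^2c)`$; the parameter values are those quoted in Sec. 3:

```python
import math

SIGMA_T = 6.6524e-25              # Thomson cross section (cm^2)
M_E_C = 9.1094e-28 * 2.998e10     # electron mass times c (g cm/s)
C = 2.998e10                      # speed of light (cm/s)

def evolve_gamma(gamma0, r0, r_end, Gamma, L_BEL, radiative=True, n_steps=200000):
    """Euler integration of Eq. (2):
        dgamma/dr = (dgamma/dt')_rad / (beta c Gamma) - (2/3) gamma / r,
    with (dgamma/dt')_rad = -(4 sigma_T / 3 m_e c) u'_BEL gamma^2 and
    u'_BEL = (4/3) Gamma^2 L_BEL / (4 pi r^2 c)."""
    beta = math.sqrt(1.0 - 1.0 / Gamma**2)
    A = 4.0 * SIGMA_T / (3.0 * M_E_C)
    gamma, r = float(gamma0), float(r0)
    dr = (r_end - r0) / n_steps
    for _ in range(n_steps):
        u_bel = (4.0 / 3.0) * Gamma**2 * L_BEL / (4.0 * math.pi * r**2 * C) if radiative else 0.0
        dgamma_dr = -A * u_bel * gamma**2 / (beta * C * Gamma) - (2.0 / 3.0) * gamma / r
        gamma += dgamma_dr * dr
        r += dr
    return gamma

r0 = 6.0e17   # cm (Sec. 3)
g_ad = evolve_gamma(1.0e4, r0, 10.0 * r0, Gamma=15.0, L_BEL=1.4e44, radiative=False)
g_rad = evolve_gamma(1.0e4, r0, 10.0 * r0, Gamma=15.0, L_BEL=1.4e44, radiative=True)
# Adiabatic-only evolution should follow gamma ~ r^(-2/3).
print(g_ad / 1.0e4, 10.0 ** (-2.0 / 3.0), g_rad)
```

With the radiative term switched off the numerical solution reproduces the adiabatic scaling $`\gamma \propto r^{-2/3}`$ implied by the second term; switching Comptonization on cools the electron much faster at these energies.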
The rate of the radiative energy losses is: $$\left(\frac{d\gamma }{dt^{\prime }}\right)_{rad}=-\frac{4\sigma _T}{3m_ec}(u_B^{\prime }+u_S^{\prime }+u_{BEL}^{\prime }+u_{IR}^{\prime })\gamma ^2,$$ $`(3)`$ where $`u_B^{\prime }=B^2/8\pi `$ is the magnetic energy density, $`u_S^{\prime }`$ is the energy density of the synchrotron radiation field, $`u_{BEL}^{\prime }=(4/3)\mathrm{\Gamma }^2L_{BEL}/(4\pi r^2c)`$ is the energy density of the broad emission lines, $`u_{IR}^{\prime }\simeq (4/3)\mathrm{\Gamma }^2\xi _{IR}4\sigma _{SB}T^4/c`$ is the energy density of the near-IR radiation produced by hot dust in the molecular torus, and $`\xi _{IR}`$ is the fraction of the accretion disc radiation reprocessed by the dust into the near-IR band. ### 2.2. Radiation Spectra The observed spectra as a function of time are computed using the formula $$\nu L_\nu (t)\equiv 4\pi \frac{\partial (\nu L_\nu (t))}{\partial \mathrm{\Omega }_{\vec{n}_{obs}}}=\int _{\mathrm{\Omega }_j}\frac{\nu ^{\prime }L_{\nu ^{\prime }}^{\prime }[r(\theta ,t),\theta ]𝒟^4}{\mathrm{\Omega }_j}\mathrm{d}\mathrm{cos}\theta \mathrm{d}\varphi ,$$ $`(4)`$ where $`𝒟=[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]^{-1}`$ is the Doppler factor, $`\nu =𝒟\nu ^{\prime }`$, and $`r=c\beta (t-t_0)/(1-\beta \mathrm{cos}\theta )+r_0`$. 
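The relative weight of the loss channels in Eq. (3) is easy to check numerically. A sketch in cgs units with the parameter values quoted in Sec. 3 (the synchrotron density $`u_S^{\prime }`$, which requires the full radiative transfer, is omitted):

```python
import math

C = 2.998e10           # speed of light (cm/s)
SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)

def comoving_energy_densities(r, Gamma, B, L_BEL, T_dust, xi_IR):
    """Comoving-frame energy densities entering Eq. (3)."""
    u_B = B**2 / (8.0 * math.pi)
    u_BEL = (4.0 / 3.0) * Gamma**2 * L_BEL / (4.0 * math.pi * r**2 * C)
    u_IR = (4.0 / 3.0) * Gamma**2 * xi_IR * 4.0 * SIGMA_SB * T_dust**4 / C
    return u_B, u_BEL, u_IR

# Parameter values from Sec. 3, evaluated at r = r_0.
u_B, u_BEL, u_IR = comoving_energy_densities(
    r=6.0e17, Gamma=15.0, B=1.4, L_BEL=1.4e44, T_dust=1000.0, xi_IR=0.08)
print(f"u'_B = {u_B:.3f}, u'_BEL = {u_BEL:.3f}, u'_IR = {u_IR:.3f} erg cm^-3")
```

At $`r=r_0`$ the broad-emission-line field carries the largest energy density, consistent with the Sec. 1 assumption that Comptonization of the broad emission lines dominates the losses; the IR field takes over at larger distances, since $`u_{BEL}^{\prime }\propto 1/r^2`$ while $`u_{IR}^{\prime }`$ stays constant.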
The luminosity $`\nu ^{\prime }L_{\nu ^{\prime }}^{\prime }`$ is contributed by synchrotron radiation, $$\nu ^{\prime }L_{S,\nu ^{\prime }}^{\prime }\simeq \frac{1}{2}(\gamma N_\gamma )m_ec^2\left|\frac{\mathrm{d}\gamma }{\mathrm{d}t^{\prime }}\right|_S,$$ $`(5)`$ by the synchrotron-self-Compton (SSC) process, $$\nu ^{\prime }L_{SSC,\nu ^{\prime }}^{\prime }=\frac{\sqrt{3}\sigma _T}{8\mathrm{\Omega }_jr^2}\nu ^{\prime 3/2}\int N_\gamma \left[\gamma =\sqrt{\frac{3\nu ^{\prime }}{4\nu _S^{\prime }}}\right]L_{S,\nu _S^{\prime }}^{\prime }\nu _S^{\prime -3/2}d\nu _S^{\prime },$$ $`(6)`$ and by the external-radiation-Compton (ERC) process, $$\nu ^{\prime }L_{ERC,\nu ^{\prime }}^{\prime }[\theta ^{\prime }]\equiv 4\pi \frac{\partial (\nu ^{\prime }L_{ERC,\nu ^{\prime }}^{\prime })}{\partial \mathrm{\Omega }_{\vec{n}_{obs}^{\prime }}^{\prime }}\simeq \frac{1}{2}\gamma N_\gamma m_ec^2\left|\frac{\mathrm{d}\gamma }{\mathrm{d}t^{\prime }}\right|_{ERC}[\theta ^{\prime }],$$ $`(7)`$ where $$\left|\frac{\mathrm{d}\gamma }{\mathrm{d}t^{\prime }}\right|_{ERC}[\theta ^{\prime }]\simeq \frac{4\sigma _T}{3m_ec}\gamma ^2𝒟^2(u_{BEL}+u_{IR})$$ $`(8)`$ and $$\nu ^{\prime }\simeq 𝒟\gamma ^2\nu _{BEL/IR},$$ $`(9)`$ where $`\nu _{BEL}`$ and $`\nu _{IR}`$ are the average frequencies of the broad emission lines and of the infrared radiation of hot dust, respectively. Note that in the comoving frame the ERC radiation field is anisotropic, while the synchrotron and SSC radiation fields are isotropic (Dermer 1995). ## 3. Results In Figure 1 we present the time averaged blazar spectra for four different values of the energy break $`\gamma _b`$. All models are computed for the following set of parameters: $`r_0=6\times 10^{17}`$ cm; $`\mathrm{\Gamma }=15`$; $`L_{BEL}=1.4\times 10^{44}`$ ergs/s; $`B^{\prime }=1.4`$ Gauss; $`\gamma _{max}=10^4`$; $`p=2.2`$; $`K=0.7\times 10^{50}`$ s<sup>-1</sup>; $`\theta _{obs}=\theta _j=1/15`$ rad; $`T=1000`$ K; and $`\xi _{IR}=0.08`$. For the justification of this choice see Błażejowski et al. (in preparation). ## 4. Discussion ### 4.1. Low Energy Break in Electron Distribution We can see from Fig. 1 that for $`\gamma _b=1`$ the low energy tail of the C(BEL) component extends down to $`\sim 2`$ keV. 
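The $`\sim 2`$ keV figure follows from Eq. (9) and the blueshift $`\nu =𝒟\nu ^{\prime }`$: the observed low-energy break of C(BEL) sits near $`𝒟^2\gamma _b^2h\nu _{BEL}`$. A quick check; the mean broad-line photon energy of 10 eV (roughly Ly$`\alpha `$) is our own assumption, not a number from the model:

```python
import math

def doppler_factor(Gamma, theta):
    """D = [Gamma (1 - beta cos(theta))]^(-1)."""
    beta = math.sqrt(1.0 - 1.0 / Gamma**2)
    return 1.0 / (Gamma * (1.0 - beta * math.cos(theta)))

def erc_break_energy_keV(gamma_b, Gamma=15.0, theta=1.0 / 15.0, E_seed_eV=10.0):
    """Observed low-energy break of C(BEL): E ~ D^2 gamma_b^2 E_seed."""
    D = doppler_factor(Gamma, theta)
    return D**2 * gamma_b**2 * E_seed_eV / 1.0e3   # keV

for gb in (1, 3, 10):
    print(f"gamma_b = {gb:2d}: break near {erc_break_energy_keV(gb):.1f} keV")
```

For $`\theta _{obs}=1/\mathrm{\Gamma }`$ one gets $`𝒟\simeq \mathrm{\Gamma }=15`$, so $`\gamma _b=1`$ puts the break near 2 keV, while $`\gamma _b=3`$ already pushes it above 20 keV.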
Thus, any presence of thermal nonrelativistic electrons should be imprinted as a bump, peaking around 2 keV. Since blazar spectra extend down to much lower values without any bump (Comastri et al. 1997; Sambruna 1997; Lawson & McHardy 1998), we exclude the domination of C(BEL) in the soft X-ray band. In order to get soft X-ray spectra which smoothly join the middle X-ray band, one needs to assume that $`\gamma _b>3`$. Then the low energy break of C(BEL) moves above $`20`$ keV, and X-ray radiation below this value is dominated either by SSC or by C(IR). Noting that the SSC X-ray spectra are much softer than the observed ones ($`\alpha _{X,SSC}\gtrsim 1`$ vs. $`\alpha _{X,obs}\simeq 0.6`$–$`0.7`$; Kubo et al. 1998), C(IR) is a better candidate for X-ray production. This, however, can be the case only if $`\gamma _b\lesssim 10`$. For larger values of $`\gamma _b`$ the low energy break of the C(IR) component moves to energies $`>20`$ keV, and then at lower energies the C(IR) spectrum becomes too hard in comparison with observations. We conclude that interpreting the blazar X-ray observations within the framework of our model implies that $`\gamma _b`$ lies in the range $`3`$–$`10`$. ### 4.2. Pair Content For a jet dynamically dominated by the energy flux of protons, $`L_p`$, and for radiative energy losses of electrons dominated by Comptonization of broad emission lines, the pair content of the jet can be calculated from the formula (Sikora et al., in preparation) $$\frac{n_{pairs}^{\prime }}{n_p^{\prime }}\simeq \frac{K}{2(p-1)\gamma _b^{p-1}}\frac{m_pc^2}{L_p}\mathrm{\Gamma }^2.$$ $`(10)`$ This, for our model parameters and $`3<\gamma _b<10`$, gives $$6/L_{p,47}<n_{pairs}^{\prime }/n_p^{\prime }<26/L_{p,47},$$ $`(11)`$ where $`L_{p,47}`$ is the proton energy flux in units of $`10^{47}`$ ergs/s. Thus, our results suggest that the particle number in quasar jets is dominated by pairs, while the jet inertia is still dominated by protons. ## ACKNOWLEDGEMENTS This project was supported by ITP/NSF grant PHY94-07194, the Polish KBN grant 2P03D00415, and NASA grant NAG-5-6337. 
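The bounds quoted in Eq. (11) follow directly from Eq. (10) with the Sec. 3 parameters; a quick numerical check (proton rest energy in cgs):

```python
def pair_to_proton_ratio(gamma_b, K=0.7e50, p=2.2, Gamma=15.0, L_p=1.0e47):
    """Eq. (10): n'_pairs / n'_p ~ K m_p c^2 Gamma^2 / (2 (p-1) gamma_b^(p-1) L_p).
    K, p, Gamma from Sec. 3; L_p = 10^47 ergs/s, so that L_p,47 = 1."""
    m_p_c2 = 1.6726e-24 * (2.998e10) ** 2   # proton rest energy (erg)
    return K / (2.0 * (p - 1.0) * gamma_b ** (p - 1.0)) * m_p_c2 / L_p * Gamma**2

ratio_lo = pair_to_proton_ratio(10.0)   # gamma_b = 10 -> lower bound of Eq. (11)
ratio_hi = pair_to_proton_ratio(3.0)    # gamma_b = 3  -> upper bound of Eq. (11)
print(ratio_lo, ratio_hi)
```

This reproduces the quoted bounds: roughly 6 and 26 pairs per proton for $`L_{p,47}=1`$.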
## REFERENCES
Comastri, A., Fossati, G., Ghisellini, G., & Molendi, S. 1997, ApJ, 480, 534
Dermer, C.D. 1995, ApJ, 446, L63
Kubo, H., et al. 1998, ApJ, 504, 693
Lawson, A.J., & McHardy, I.M. 1998, MNRAS, 300, 1023
Moderski, R., Sikora, M., & Bulik, T. 2000, ApJ, in press
Sambruna, R.M. 1997, ApJ, 487, 536