# Upper limit on $`m_h`$ in the MSSM and M-SUGRA vs. prospective reach of LEP
## 1 Introduction
Within the MSSM the masses of the $`𝒞𝒫`$-even neutral Higgs bosons are calculable in terms of the other MSSM parameters. The mass of the lightest Higgs boson, $`m_h`$, has been of particular interest, as it is bounded to be smaller than the $`Z`$ boson mass at the tree level. The one-loop results for $`m_h`$ have been supplemented in the last years with the leading two-loop corrections, performed in the renormalization group (RG) approach , in the effective potential approach and most recently in the Feynman-diagrammatic (FD) approach . The two-loop corrections have turned out to be sizeable. They can change the one-loop results by up to 20%.
Experimental searches at LEP now exclude a light MSSM Higgs boson with a mass below $`\sim `$90 GeV. In the low $`\mathrm{tan}\beta `$ region, in which the limit is the same as for the Standard Model Higgs boson, a mass limit of even $`m_h\stackrel{>}{\sim}106\mathrm{GeV}`$ has been obtained. Combining this experimental bound with the theoretical upper limit on $`m_h`$ as a function of $`\mathrm{tan}\beta `$ within the MSSM, it is possible to derive constraints on $`\mathrm{tan}\beta `$. In this paper we investigate for which MSSM parameters the maximal $`m_h`$ values are obtained and discuss in this context the impact of the new FD two-loop result. The resulting constraints on $`\mathrm{tan}\beta `$ are analyzed on the basis of the present LEP data and of the prospective final exclusion limit of LEP.
The Minimal Supergravity (M-SUGRA) scenario provides a relatively simple and constrained version of the MSSM. In this paper we explore how the maximum possible values for $`m_h`$ change compared to the general MSSM when one restricts oneself to the M-SUGRA framework. As an additional constraint we impose that the condition of radiative electroweak symmetry breaking (REWSB) should be fulfilled.
## 2 The upper bound on $`m_h`$ in the MSSM
The most important radiative corrections to $`m_h`$ arise from the top and scalar top sector of the MSSM, with the input parameters $`m_t`$, $`M_{\mathrm{SUSY}}`$ and $`X_t`$. Here we assume the soft SUSY breaking parameters in the diagonal entries of the scalar top mixing matrix to be equal for simplicity, $`M_{\mathrm{SUSY}}=M_{\stackrel{~}{t}_L}=M_{\stackrel{~}{t}_R}`$. This has been shown to yield upper values for $`m_h`$ which also cover the case where $`M_{\stackrel{~}{t}_L}\ne M_{\stackrel{~}{t}_R}`$, if $`M_{\mathrm{SUSY}}`$ is identified with the heavier one of $`M_{\stackrel{~}{t}_L}`$, $`M_{\stackrel{~}{t}_R}`$. For the off-diagonal entry of the mixing matrix we use the convention
$$m_tX_t=m_t(A_t-\mu \mathrm{cot}\beta ).$$
(1)
Note that the sign convention used for $`\mu `$ here is the opposite of the one used in Ref. .
Since the predicted value of $`m_h`$ depends sensitively on the precise numerical value of $`m_t`$, it has become customary to discuss the constraints on $`\mathrm{tan}\beta `$ within a so-called “benchmark” scenario (see Ref. and references therein), in which $`m_t`$ is kept fixed at the value $`m_t=175\mathrm{GeV}`$ and in which furthermore a large value of $`M_{\mathrm{SUSY}}`$ is chosen, $`M_{\mathrm{SUSY}}=1\mathrm{TeV}`$, giving rise to large values of $`m_h(\mathrm{tan}\beta )`$. In Ref. it has recently been analyzed how the values chosen for the other SUSY parameters in the benchmark scenario should be modified in order to obtain the maximal values of $`m_h(\mathrm{tan}\beta )`$ for given $`m_t`$ and $`M_{\mathrm{SUSY}}`$. The corresponding scenario ($`m_h^{\mathrm{max}}`$ scenario) is defined as
$`m_t=m_t^{\mathrm{exp}}(=174.3\mathrm{GeV}),M_{\mathrm{SUSY}}=1\mathrm{TeV}`$
$`\mu =200\mathrm{GeV},M_2=200\mathrm{GeV},M_A=1\mathrm{TeV},m_{\stackrel{~}{g}}=0.8M_{\mathrm{SUSY}}(\mathrm{FD})`$
$`X_t=2M_{\mathrm{SUSY}}(\mathrm{FD})\mathrm{or}X_t=\sqrt{2}M_{\mathrm{SUSY}}(\mathrm{RG}),`$ (2)
where the parameters are chosen such that the chargino masses are beyond the reach of LEP2 and that the lightest $`𝒞𝒫`$-even Higgs boson does not dominantly decay invisibly into neutralinos. In eq. (2) $`\mu `$ is the Higgs mixing parameter, $`M_2`$ denotes the soft SUSY breaking parameter in the gaugino sector, and $`M_A`$ is the $`𝒞𝒫`$-odd Higgs boson mass. The gluino mass, $`m_{\stackrel{~}{g}}`$, can only be specified as a free parameter in the FD result (program FeynHiggs). The effect of varying $`m_{\stackrel{~}{g}}`$ on $`m_h`$ is up to $`\pm 2\mathrm{GeV}`$. Within the RG result (program subhpole) $`m_{\stackrel{~}{g}}`$ is fixed to $`m_{\stackrel{~}{g}}=M_{\mathrm{SUSY}}`$. Compared to the maximal values for $`m_h`$ (obtained for $`m_{\stackrel{~}{g}}\approx 0.8M_{\mathrm{SUSY}}`$) this leads to a reduction of the Higgs boson mass by up to $`0.5\mathrm{GeV}`$. Different values of $`X_t`$ are specified in eq. (2) for the results of the FD and the RG calculation, since within the two approaches the maximal values for $`m_h`$ are obtained for different values of $`X_t`$. This fact is partly due to the different renormalization schemes used in the two approaches.
The maximal values for $`m_h`$ as a function of $`\mathrm{tan}\beta `$ within the $`m_h^{\mathrm{max}}`$ scenario are higher by about 5 GeV than in the previous benchmark scenario. The constraints on $`\mathrm{tan}\beta `$ derived within the $`m_h^{\mathrm{max}}`$ scenario are thus more conservative than the ones based on the previous scenario.
The investigation of the constraints on $`\mathrm{tan}\beta `$ that can be obtained from the experimental search limits on $`m_h`$ has so far been based on the results for $`m_h`$ obtained within the RG approach . The recently obtained FD result differs from the RG result by a more complete treatment of the one-loop contributions and in particular by genuine non-logarithmic two-loop terms that go beyond the leading logarithmic two-loop contributions contained in the RG result . Comparing the FD result (program FeynHiggs) with the RG result (program subhpole) we find that the maximal value for $`m_h`$ as a function of $`\mathrm{tan}\beta `$ within the FD result is higher by up to 4 GeV.
In Fig. 1 we show both the effect of modifying the previous benchmark scenario to the $`m_h^{\mathrm{max}}`$ scenario and the impact of the new FD two-loop result on the prediction for $`m_h`$. The maximal value for the Higgs boson mass is plotted as a function of $`\mathrm{tan}\beta `$ for $`m_t=174.3`$ GeV and $`M_{\mathrm{SUSY}}=1`$ TeV. The dashed curve displays the benchmark scenario, used up to now by the LEP collaborations . The dotted curve shows the $`m_h^{\mathrm{max}}`$ scenario. Both curves are based on the RG result (program subhpole). The solid curve corresponds to the FD result (program FeynHiggs) in the $`m_h^{\mathrm{max}}`$ scenario. The increase in the maximal value for $`m_h`$ by about $`4`$ GeV from the new FD result and by further 5 GeV if the benchmark scenario is replaced by the $`m_h^{\mathrm{max}}`$ scenario has a significant effect on exclusion limits for $`\mathrm{tan}\beta `$ derived from the Higgs boson search. Combining both effects, which of course have a very different origin, the maximal Higgs boson masses are increased by almost $`10\mathrm{GeV}`$ compared to the previous benchmark scenario.
From the FD result we find the upper bound of $`m_h\stackrel{<}{\sim}129`$ GeV in the region of large $`\mathrm{tan}\beta `$ within the MSSM for $`m_t=174.3\mathrm{GeV}`$ and $`M_{\mathrm{SUSY}}=1`$ TeV. Higher values for $`m_h`$ are obtained if the experimental uncertainty in $`m_t`$ of currently $`\mathrm{\Delta }m_t=5.1\mathrm{GeV}`$ is taken into account and higher values are allowed for the top quark mass. As a rule of thumb, increasing $`m_t`$ by 1 GeV roughly translates into an upward shift of $`m_h`$ of 1 GeV. An increase of $`M_{\mathrm{SUSY}}`$ from 1 TeV to 2 TeV enhances $`m_h`$ by about 2 GeV in the large $`\mathrm{tan}\beta `$ region. As an extreme case, choosing $`m_t=184.5`$ GeV, i.e. two standard deviations above the current experimental central value, and using $`M_{\mathrm{SUSY}}=2`$ TeV leads to an upper bound on $`m_h`$ of $`m_h\stackrel{<}{\sim}141`$ GeV within the MSSM.
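As a rough numerical illustration of these rules of thumb (nothing more than back-of-the-envelope arithmetic, not a substitute for the full FD/RG calculations), a short Python sketch reproducing the quoted 141 GeV figure is:

```python
# Back-of-the-envelope shift of the m_h upper bound, using the rules of thumb
# quoted in the text: ~1 GeV in m_h per GeV in m_t, and about +2 GeV when
# M_SUSY is raised from 1 TeV to 2 TeV.
mh_base = 129.0                   # GeV, bound for m_t = 174.3 GeV, M_SUSY = 1 TeV
mt_central, dmt = 174.3, 5.1      # GeV, central value and 1-sigma uncertainty

mt_high = mt_central + 2.0 * dmt          # two standard deviations above
shift_mt = (mt_high - mt_central) * 1.0   # ~1 GeV of m_h per GeV of m_t
shift_msusy = 2.0                         # M_SUSY: 1 TeV -> 2 TeV

mh_extreme = mh_base + shift_mt + shift_msusy
print(f"m_t = {mt_high:.1f} GeV, M_SUSY = 2 TeV  ->  m_h below about {mh_extreme:.0f} GeV")
# prints roughly 141 GeV, in line with the bound quoted above
```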
## 3 The prospective upper $`m_h`$ reach of LEP
The four LEP experiments are very actively searching for the Higgs boson. Results presented recently by the LEP collaborations revealed no evidence of a SM Higgs boson signal in the data collected in 1999 at centre-of-mass energies of approximately 192, 196, 200 and 202 GeV. From the negative results of their searches ALEPH, DELPHI and L3 have therefore individually excluded a SM Higgs boson lighter than $`\sim `$101–106 $`\mathrm{GeV}`$ (at the 95% confidence level).
Here we will present the expected exclusion reach of LEP assuming all the data taken by the four experiments in 1999 is combined. The ultimate exclusion reach of LEP – assuming no signal were found in the data to be collected in the year 2000 – will also be estimated for several hypothetical scenarios of luminosity and centre-of-mass energy. These results are then confronted with the theoretical MSSM upper limit on $`m_h(\mathrm{tan}\beta )`$ presented in Section 2, in order to establish to what extent the LEP data can probe the low $`\mathrm{tan}\beta `$ region. We recall that models in which b-$`\tau `$ Yukawa coupling unification at the GUT scale is imposed favor low $`\mathrm{tan}\beta `$ values, $`\mathrm{tan}\beta \approx 2`$, which can be severely constrained experimentally by searches at LEP. Alternatively, such models can favor $`\mathrm{tan}\beta \approx 40`$, a region which however can only be partly covered at LEP.
All experimental exclusion limits quoted in this section are implicitly meant at the 95% confidence level (CL).
It has been proposed that the LEP-combined expected 95% CL lower bound on $`m_h`$, $`m_h^{95}`$, for a data set consisting of data accumulated at given centre-of-mass energies can be estimated by solving the equation
$$n(m_h^{95})=(\sigma _0\mathcal{L}_{eq})^\alpha ,$$
(3)
where $`n(m_h^{95})`$ is the number of signal events produced at the 95% CL limit. The equivalent luminosity, $`\mathcal{L}_{eq}`$, is the luminosity that one would have to accumulate at the highest centre-of-mass energy in the data set in order to have the same sensitivity as in the real data set, where the data is split between several different $`\sqrt{s}`$ values. For a SM Higgs boson signal, the parameters $`\sigma _0`$ and $`\alpha `$ are $`\sim `$38 pb and $`\sim `$0.4, respectively. (These parameter values are obtained from a fit to the actual LEP-combined expected limits from $`\sqrt{s}=161`$ GeV up to $`\sqrt{s}=188.6`$ GeV.) The predicted $`m_h`$ limits obtained with this method are expected to approximate the more accurate combinations done by the LEP Higgs Working Group, with an uncertainty of the order of $`\pm `$ 0.3 $`\mathrm{GeV}`$.
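A minimal sketch of how eq. (3) can be turned into a mass limit is given below. The SM Higgs-strahlung cross-section is not given in the text, so `sigma_hZ` is a purely hypothetical placeholder, and the equivalent luminosity used in the example is an arbitrary illustrative number; only the structure of the calculation is meant to be conveyed.

```python
# Sketch of a mass-limit estimate from eq. (3).  Both sigma_hZ and L_eq below are
# illustrative stand-ins, NOT the values used in the actual LEP combination.
def n_events_at_limit(L_eq_pb, sigma0_pb=38.0, alpha=0.4):
    """Right-hand side of eq. (3): signal events produced at the 95% CL limit."""
    return (sigma0_pb * L_eq_pb) ** alpha

def sigma_hZ(mh, sqrt_s):
    # hypothetical placeholder cross-section (pb), vanishing near m_h ~ sqrt(s) - m_Z
    mz = 91.19
    return max(0.0, 0.06 * (sqrt_s - mz - mh))

def mh95(L_eq_pb, sqrt_s, lo=60.0, hi=120.0):
    """Bisect for the mass where produced signal events drop to n(m_h^95)."""
    target = n_events_at_limit(L_eq_pb)
    f = lambda mh: sigma_hZ(mh, sqrt_s) * L_eq_pb - target
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(mh95(L_eq_pb=200.0, sqrt_s=202.0))   # illustrative numbers only
```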
Solving eq. (3) for the existing LEP data with 183 GeV $`\stackrel{<}{\sim}\sqrt{s}\stackrel{<}{\sim}202`$ GeV (Table 1) results in a predicted combined exclusion of $`m_h<108.2\mathrm{GeV}`$ for the SM Higgs boson (see Figure 2a).
Based on the current LEP operational experience, it is believed that in the year 2000 stable running is possible up to $`\sqrt{s}=206`$ GeV. Figure 2b demonstrates the impact of additional data collected at $`\sqrt{s}=206`$ GeV on the exclusion. For instance, if no evidence of a signal were found in the data, collecting 500 (1000) pb<sup>-1</sup> at this centre-of-mass energy would increase the $`m_h`$ limit to 113.0 (114.1) $`\mathrm{GeV}`$. Figure 2c shows the degradation in the sensitivity to a Higgs boson signal if the data in the year 2000 were accumulated at $`\sqrt{s}=205`$ GeV instead: in this case the luminosity required to exclude up to $`m_h=113\mathrm{GeV}`$ would be 840 pb<sup>-1</sup>.
In Table 2 the expected SM Higgs boson limit is shown for several possible LEP running scenarios in the year 2000. Taking into account that the experimental MSSM $`m_h`$ exclusion in the range $`0.5\stackrel{<}{\sim}\mathrm{tan}\beta \stackrel{<}{\sim}3`$ is (i) essentially independent of $`\mathrm{tan}\beta `$ and (ii) equal in value to the SM $`m_h`$ exclusion (see e.g. ), $`m_h^{95}`$ can be converted into an excluded $`\mathrm{tan}\beta `$ range in the $`m_h^{\mathrm{max}}`$ benchmark scenario described in Section 2. This is done by intersecting the experimental exclusion and the solid curve in Figure 1. Using the LEP data taken until the end of 1999 (for which $`m_h^{95}=108.2\mathrm{GeV}`$) one can already expect to exclude $`0.6\stackrel{<}{\sim}\mathrm{tan}\beta \stackrel{<}{\sim}1.9`$ within the MSSM for $`m_t=174.3`$ GeV and $`M_{\mathrm{SUSY}}=1`$ TeV. Note that in determining the excluded $`\mathrm{tan}\beta `$ regions in Table 2 the theoretical uncertainty from unknown higher-order corrections has been neglected. As can be seen from Table 2, several plausible scenarios for adding new data at higher energies can extend the exclusion to $`m_h\stackrel{<}{\sim}113\mathrm{GeV}`$ ($`0.5\stackrel{<}{\sim}\mathrm{tan}\beta \stackrel{<}{\sim}2.4`$).
## 4 The upper limit on $`m_h`$ in the M-SUGRA scenario
The M-SUGRA scenario is described by four independent parameters and a sign, namely the common scalar mass $`M_0`$, the common gaugino mass $`M_{1/2}`$, the common trilinear coupling $`A_0`$, $`\mathrm{tan}\beta `$ and the sign of $`\mu `$. The universal parameters are fixed at the GUT scale, where we assumed unification of the gauge couplings. They are then run down to the electroweak scale with the help of renormalization group equations. The condition of REWSB puts an upper bound on $`M_0`$ of about $`M_0\stackrel{<}{\sim}`$ 5 TeV (depending on the values of the other four parameters).
In order to obtain a precise prediction for $`m_h`$ within the M-SUGRA scenario, we employ the complete two-loop RG running with appropriate thresholds (both logarithmic and finite for the gauge couplings, and using the so-called $`\theta `$-function approximation for the masses), including full one-loop minimization conditions for the effective potential, in order to extract all the parameters of the M-SUGRA scenario at the EW scale. This method has been combined with the presently most precise result for $`m_h`$, based on a Feynman-diagrammatic calculation. This has been carried out by combining the codes of two programs, namely SUITY and FeynHiggs.
In order to investigate the upper limit on the Higgs boson mass in the M-SUGRA scenario, we keep $`\mathrm{tan}\beta `$ fixed at a large value, $`\mathrm{tan}\beta =30`$. Concerning the sign of the Higgs mixing parameter, $`\mu `$, we find larger $`m_h`$ values (compatible with the constraints discussed below) for negative $`\mu `$ (in the convention of eq. (1)). In the following we analyze the upper limit on $`m_h`$ as a function of the other M-SUGRA parameters, $`M_0`$, $`M_{1/2}`$ and $`A_0`$. Our results are displayed in Fig. 3 for four values of $`A_0`$: $`A_0=0,-500,-1000,-1500\mathrm{GeV}`$. We show contour lines of $`m_h`$ in the $`M_0M_{1/2}`$-plane. The numbers inside the plots indicate the lightest Higgs boson mass in the respective area within $`\pm 0.5\mathrm{GeV}`$. The upper bound on the lightest $`𝒞𝒫`$-even Higgs boson mass is found to be at most 127 GeV. This upper limit is reached for $`M_0\approx 500\mathrm{GeV}`$, $`M_{1/2}\approx 400\mathrm{GeV}`$ and $`A_0=-1500\mathrm{GeV}`$. Concerning the analysis the following should be noted:
* We have chosen the current experimental central value for the top quark mass, $`m_t=174.3`$ GeV. As mentioned above, increasing $`m_t`$ by 1 GeV results in an increase of $`m_h`$ of approximately $`1\mathrm{GeV}`$.
* The M-SUGRA parameters are taken to be real, no SUSY $`𝒞𝒫`$-violating phases are assumed.
* We have chosen negative values for the trilinear coupling, because $`m_h`$ turns out to be increased by going from positive to negative values of $`A_0`$. $`|A_0|`$ is restricted from above by the condition that no negative squares of squark masses and no charge or color breaking minima appear.
* The regions in the $`M_0M_{1/2}`$-plane that are excluded for the following reasons are also indicated:
+ REWSB: parameter sets that do not fulfill the REWSB condition.
+ CCB: regions where charge or color breaking minima occur or negative squared squark masses are obtained at the EW scale.
+ LSP: sets where the lightest neutralino is not the LSP. In most of these regions the lightest scalar tau becomes the LSP.
+ Chargino limit: parameter sets which correspond to a chargino mass that is already excluded by direct searches.
* We do not take into account the $`b\to s\gamma `$ constraint as the authors of Ref. do. This could reduce the upper limit, but the experimental and theoretical uncertainties of this constraint are still quite large.
## 5 Conclusions
We have analyzed the upper bound on $`m_h`$ within the MSSM. Using the Feynman-diagrammatic result for $`m_h`$, which contains new genuine two-loop corrections, leads to an increase of $`m_h`$ of up to $`4\mathrm{GeV}`$ compared to the previous result obtained by renormalization group methods. We have furthermore investigated the MSSM parameters for which the maximal $`m_h`$ values are obtained and have compared the $`m_h^{\mathrm{max}}`$ scenario with the previous benchmark scenario. For $`m_t=174.3`$ GeV and $`M_{\mathrm{SUSY}}=1`$ TeV we find $`m_h\stackrel{<}{\sim}129`$ GeV as an upper bound in the MSSM. If no evidence of a Higgs signal is found before the end of running in 2000, experimental searches for the Higgs boson at LEP can ultimately be reasonably expected to exclude $`m_h\stackrel{<}{\sim}113\mathrm{GeV}`$. In the context of the $`m_h^{\mathrm{max}}`$ benchmark scenario (with $`m_t=174.3`$ GeV, $`M_{\mathrm{SUSY}}=1`$ TeV) this rules out the interval $`0.5\stackrel{<}{\sim}\mathrm{tan}\beta \stackrel{<}{\sim}2.4`$ at the 95% confidence level within the MSSM. Within the M-SUGRA scenario, the upper bound on $`m_h`$ is found to be $`m_h\stackrel{<}{\sim}127\mathrm{GeV}`$ for $`m_t=174.3\mathrm{GeV}`$. This upper limit is reached for the M-SUGRA parameters $`M_0\approx 500\mathrm{GeV}`$, $`M_{1/2}\approx 400\mathrm{GeV}`$ and $`A_0=-1500\mathrm{GeV}`$. The upper bound within the M-SUGRA scenario is lower by 2 and 4 GeV than the bound obtained in the general MSSM for $`M_{\mathrm{SUSY}}=1\mathrm{TeV}`$ and $`M_{\mathrm{SUSY}}=2\mathrm{TeV}`$, respectively.
## Acknowledgements
A.D. acknowledges financial support from the Marie Curie Research Training Grant ERB-FMBI-CT98-3438. A.D. would also like to thank Ben Allanach for useful discussions. P.T.D. would like to thank Jennifer Kile for providing the Standard Model Higgs boson production cross-sections. G.W. thanks C.E.M. Wagner for useful discussions.
## References
# Broad Emission Line Regions in AGN: the Link with the Accretion Power
## 1 Introduction
Broad Emission Lines (BELs) are probably ubiquitous in AGN, being unobserved only when a good case can be made for their obscuration by dust (type 2 AGN) or their being swamped by beamed continuum (Blazars). This suggests that the presence of Broad Emission Line Clouds (BELCs) in the AGN environment is closely related to the mechanisms which are responsible for the quasar activity. Shakura & Sunyaev (1973, hereinafter SS73) already noted that a high velocity, high density wind of ionized matter may originate from the accretion disk (a suggestion greatly elaborated by Murray et al., 1995), at a radial distance where the radiation pressure becomes comparable to the gas pressure. Witt, Czerny and Zycki (1997, hereinafter WCZ97) studied a radially co-accreting disk/corona system (which stabilizes the, otherwise thermally unstable, radiation pressure dominated region of a Shakura-Sunyaev disk - SS-disk), and demonstrated that a transonic vertical outflow from the disk develops where the fraction of total energy dissipated in the corona is close to the maximum. In a revised version of this model (Czerny et al., 1999, in preparation), which includes evaporation from the disk, this solution continues to hold for accretion rates higher than a minimum value, below which evaporation inhibits the formation of the wind.
Reverberation studies of the BELs in many Seyfert 1 galaxies (e.g. Wandel, Peterson & Malkan, 1999: hereinafter WPM99) indicate that the gas of the BELCs is photoionized, and that its physical state, from one object to another, covers a rather narrow range of parameter space (in column density, electron density, ionization parameter). It is also clear (Krolik et al., 1991; Peterson et al., 1999) that within a single object the BELCs are stratified, with the highest ionization lines being also the broadest. This accords with the photoionization hypothesis and with the idea that the BELs are broadened by their orbital Keplerian motion around the central source (Peterson & Wandel, 1999).
On the other hand the BELs do not at all have the same dynamical properties in all objects. A broad distribution of line widths from $`1,000`$ km s<sup>-1</sup> (in Narrow Line Seyfert 1 galaxies, NLSy1) to $`20,000`$ km s<sup>-1</sup> (in the broadest broad line type 1 AGN) is present. A model by Wandel (1997) attempts to forge a physical link between the breadth of the permitted optical emission lines of type 1 AGN and the steepness of their X-ray continuum: a steeper X-ray spectrum has stronger ionizing power, and hence the BELR is formed at a larger distance from the central source, where the velocity dispersion is smaller, and so produces narrower emission lines. Alternatively, Laor et al. (1997) use a dynamical argument to suggest that small emission line widths are direct consequence of large L/L<sub>Edd</sub>.
Here we propose that a vertical disk wind, originating at a critical distance in the accretion disk, is the origin of the BELCs and that the widths of the BELs are the Keplerian velocities of the accretion disk at the radius where this wind arises. The disk wind forms for external accretion rates higher than a minimum value, below which a standard SS-disk (SS73) is stable and extends down to the last stable orbit. The model explains the observed range of FWHM in the BELs of AGN as a function of a single physical quantity connected with the AGN activity: the accretion rate. In §2 we present our model, and show the basic equations which support our findings. In §3 we discuss the observational consequences and compare with existing data.
## 2 The Model
The three main ingredients of our model are: (a) the transition radius $`r_{tran}`$, derived by setting equal the radiation pressure at $`r<r_{tran}`$ and the gas pressure at $`r>r_{tran}`$, in a standard SS-disk (SS73):
$$r_{tran}\simeq f^{16/21}15.2(\alpha m)^{2/21}\left(\frac{1}{\eta }\dot{m}\right)^{16/21},$$
(1)
(b) the approximate analytical relationship giving the fraction of energy dissipated in the corona, in a dynamical disk/corona configuration (WCZ97):
$$(1-\beta )\simeq 0.034\left(\alpha f\frac{1}{\eta }\dot{m}\right)^{-1/4}r^{3/8},$$
(2)
and (c) the maximum radius below which a stable co-accreting disk/corona configuration can exist, obtained by setting $`\beta =0`$ (WCZ97):
$$r_{max}\simeq f^{2/3}8,000\left(\alpha \frac{1}{\eta }\dot{m}\right)^{2/3}.$$
(3)
In the above equations we have used dimensionless quantities: $`m=M/M_{\odot }`$, $`\dot{m}=\dot{M}/\dot{M}_{EDD}`$, $`r=R/R_0`$, with $`\dot{M}_{EDD}=1.5\times 10^{17}\eta ^{-1}m\text{ }`$ g s<sup>-1</sup>, and $`R_0=6GM/c^2`$, for a non-rotating black hole; here $`\eta `$ is the maximum efficiency. Finally, $`f`$ gives the boundary conditions at the marginally stable orbit: $`f=f(r)=(1-r^{-0.5})`$.
We note that (with the adopted units) $`r_{max}`$ does not depend on the mass, while $`r_{tran}`$ depends only very weakly on it. However both these critical radii depend on the accretion rate and, interestingly, with similar powers. This results in a quasi-rigid radial shifting of the region delimited by these two distances as the accretion rate (in critical units) varies. From equations 1 and 3 we can estimate the total radial extent of this region to be of the order of $`10`$ times $`r_{tran}`$, for $`m=10^8`$ and $`\dot{m}=1`$ (the radial extent of the vertical outflow containing the BELCs, however, is smaller than this, being constrained by the weighting function $`(1-\beta )`$ of equation 2, WCZ97; see below). Equation 1 allows us to define the minimum external accretion rate needed for a thermally unstable radiation pressure dominated region to exist. From the condition $`r>1.36`$ (the limit of validity of the SS-disk solution) we have: $`\dot{m}\stackrel{>}{\sim}\dot{m}_{min}(m)\approx 0.3\eta (\alpha m)^{-1/8}`$. Throughout this paper we assume $`\eta =0.06`$, and a viscosity coefficient of $`\alpha =0.1`$, which give a minimum external accretion rate of $`\dot{m}_{min}\approx (1-4)\times 10^{-3}`$, for $`m`$ in the range $`10^6`$–$`10^9`$. At lower accretion rates a SS-disk (SS73) is stable down to the last stable orbit: we propose that all the available energy is dissipated in the disk and no radiation-pressure supported and driven wind is generated. AGN accreting at these low external rates should show no BELs in their optical spectra. For accretion rates $`\dot{m}\stackrel{>}{\sim}\dot{m}_{min}`$ a SS-disk is unstable (Lightman & Eardley, 1974) and a stabilizing, co-accreting “disk/corona + outflow” system forms (WCZ97). The fraction of energy dissipated in the corona, and powering the vertical outflow, is maximum at $`r_{max}`$ and decreases inward following equation 2 (see also Fig. 6 in WCZ97). At radii smaller than $`r_{tran}`$ the available energy is almost equally divided between the disk and the corona (WCZ97). We then adopt an averaged radius $`r_{wind}`$ for the transonic outflow (and so for the BELRs), obtained by weighting the radial distance by $`(1-\beta )`$ (equation 2) between $`r_{tran}`$ and $`r_{max}`$. We computed $`r_{wind}`$ numerically for several values of $`m`$ and $`\dot{m}`$.
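For orientation, the following Python sketch evaluates eqs. (1)–(3), the minimum accretion rate and the $`(1-\beta )`$-weighted radius $`r_{wind}`$ described above (the implicit dependence on $`f(r)`$ is handled by a simple fixed-point iteration; this is our own illustrative implementation, not the code used for the figures):

```python
import numpy as np

eta, alpha = 0.06, 0.1            # maximum efficiency and viscosity, as in the text

def f(r):                          # boundary-condition factor at the marginally stable orbit
    return 1.0 - r ** -0.5

def r_tran(m, mdot):               # eq. (1), solved by fixed-point iteration in f(r)
    r = 15.2 * (alpha * m) ** (2.0 / 21.0) * (mdot / eta) ** (16.0 / 21.0)
    for _ in range(200):
        r = f(r) ** (16.0 / 21.0) * 15.2 * (alpha * m) ** (2.0 / 21.0) * (mdot / eta) ** (16.0 / 21.0)
    return r

def r_max(m, mdot):                # eq. (3), same treatment
    r = 8000.0 * (alpha * mdot / eta) ** (2.0 / 3.0)
    for _ in range(200):
        r = f(r) ** (2.0 / 3.0) * 8000.0 * (alpha * mdot / eta) ** (2.0 / 3.0)
    return r

def corona_fraction(r, mdot):      # eq. (2): (1 - beta)
    return 0.034 * (alpha * f(r) * mdot / eta) ** -0.25 * r ** 0.375

def mdot_min(m):                   # minimum external accretion rate quoted in the text
    return 0.3 * eta * (alpha * m) ** -0.125

def r_wind(m, mdot):               # (1-beta)-weighted mean radius between r_tran and r_max
    r = np.linspace(r_tran(m, mdot), r_max(m, mdot), 2000)
    w = corona_fraction(r, mdot)
    return np.sum(r * w) / np.sum(w)

for m in (1e6, 1e7, 1e8, 1e9):
    print(f"m={m:.0e}: mdot_min={mdot_min(m):.1e}", end="  ")
    print(f"r_tran={r_tran(m, 1.0):.0f}  r_max={r_max(m, 1.0):.0f}  r_wind={r_wind(m, 1.0):.0f}")
```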
In the next section we discuss the implications of our model for linking the gas of the BELR in type 1 AGN and the accretion mechanism.
### 2.1 Dynamical Properties
We calculated the orbital velocities at $`r_{wind}`$ under the Keplerian assumption: $`\beta (r_{wind})=v/c=(6r_{wind})^{-1/2}`$. We then transformed these velocities to Full Width Half Maximum (FWHM) values of the lines emitted by these clouds of gas using the relationship: FWHM$`=2(<v^2>)^{1/2}`$ (Netzer, 1991), where $`(<v^2>)^{1/2}=v/\sqrt{2}`$ is the averaged Keplerian velocity in a cylindrical geometry. In Figure 1 we show the relationship between the accretion rate (in the range $`\dot{m}_{min}(m)`$–10) and the expected FWHM$`(r_{wind})`$ (solid, thick curves) and FWHM($`r_{max}`$) (dashed, thin curves), for $`m=10^6,10^7,10^8,10^9`$. These curves are independent of the mass of the central black hole, and so they overlap in the diagram of Figure 1. However, for each curve, the maximum FWHM reachable depends on the mass, through the limit imposed by the minimum external accretion rate $`\dot{m}_{min}(m)`$ needed for an unstable SS-disk to exist. At the bottom of the plot the four horizontal lines indicate these values of $`\dot{m}_{min}(m)`$. The parameter space delimited by the dashed and solid curves of Figure 1 gives a possible range of FWHM at a given accretion rate, so allowing, at least partly, for a stratification of the BELCs in a single object (however, according to the model for the quasar structure proposed by Elvis (1999), a more complete and satisfactory explanation is that the high and low ionization BELs are actually produced in two separate regions of the outflow: in the vertical part, at a height $`z<r`$ from the disk surface (high ionization lines), and in the radially displaced part (low ionization lines) located at $`z>r`$ (WCZ97) and shadowed by the vertically flowing gas). Finally, we marked two different regions in the diagram of Figure 2: (a) for accretion rates $`\dot{m}<0.2`$ (sub-Eddington regime) the predicted FWHM are quite broad ($`\stackrel{>}{\sim}4,000`$ km s<sup>-1</sup>), and similar to those typically observed in broad line type 1 AGN; (b) for $`\dot{m}=0.2`$–3 (Eddington to moderately super-Eddington) the corresponding FWHM span the interval $`1,000`$–$`4,000`$ km s<sup>-1</sup>, which contains the value of FWHM$`=2,000`$ km s<sup>-1</sup> used to separate the two classes of broad line type 1 AGN and NLSy1. Hence our model predicts that narrow line type 1 AGN accrete at higher accretion rates compared to broad line objects, as in the Pounds et al. (1995) suggestion. However, the mass of the central black hole in NLSy1 does not need to be smaller than that of broad line type 1 AGN, which reconciles the NLSy1 paradigm with the recent results of WPM99 and the mass estimate for the NLSy1 galaxy TON S180 (Mathur S., private communication).
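The velocity-to-width conversion used here is compact enough to spell out; a minimal sketch (with $`r_{wind}`$ in units of $`R_0=6GM/c^2`$, as above) is:

```python
import math

C_KMS = 2.998e5   # speed of light in km/s

def fwhm_kms(r_wind):
    """FWHM = 2 <v^2>^{1/2} = sqrt(2) v, with v/c = (6 r_wind)^{-1/2}."""
    v = C_KMS * (6.0 * r_wind) ** -0.5
    return math.sqrt(2.0) * v

def r_wind_from_fwhm(fwhm):
    """Inverse relation: the wind radius implied by an observed FWHM (km/s)."""
    v = fwhm / math.sqrt(2.0)
    return (C_KMS / v) ** 2 / 6.0

# the 2,000 km/s dividing line between NLSy1 and broad-line type 1 AGN
print(r_wind_from_fwhm(2000.0))   # ~7,500 in units of R_0 = 6GM/c^2
print(fwhm_kms(2000.0))           # FWHM for a wind launched at r_wind = 2,000
```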
## 3 Comparison with Observations
Figure 3 shows an analogous and complementary diagram to Figure 2, where we plot $`r_{wind}`$ in physical units on the Y-axis. The four mostly horizontal curves correspond to the four black hole masses $`m=10^6,10^7,10^8,10^9`$, with the accretion rate increasing from the bottom-right to the top-left of the diagram. The four vertical lines correspond to four values of the accretion rate $`\dot{m}=0.01,0.1,1,10`$, with black hole mass increasing from the bottom to the top. The space delimited by this grid contains all the allowed range of distances and orbiting velocities of the BELCs in this model. We also show a horizontal band which delimits the typical observed BELR sizes (WPM99). Superimposed on this diagram, we plot the recent measurements of distance and FWHM reported by WPM99 and obtained by using the reverberation-mapping technique. The points are numbered following the order of Table 2 in WPM99. For each point the grid of Figure 2 allows one to uniquely determine the predicted accretion rate and black hole mass. We calculated the ratio $`\xi `$ between the measured (WPM99) dimensionless luminosity $`\ell =(L_{ion}/L_{Edd})`$ and the predicted accretion rate. $`\xi `$ is then a measure of the relative efficiency (compared to the maximum radiative efficiency $`\eta `$: $`\xi =ϵ/\eta `$, where $`ϵ=L_{ion}/\dot{M}c^2`$) with which the accretion power is converted into ionizing luminosity. We found that $`\xi `$ is correlated with the measured mass of the central black hole: $`\xi =10^{(-0.4\pm 0.9)}\times M_8^{(1.00\pm 0.14)}`$, R = 0.68, corresponding to a probability of P($`>`$R;N=16) = 0.4 % (Figure 3; we do not consider here the three objects of the WPM99 sample for which only upper limits on the central black hole mass were available). The correlation is still significant (R = 0.62, P($`>`$R;N=15) = 1.3 %) when the object (NGC 4051) with the lowest mass and relative radiative efficiency is removed. This observational result is not an obvious consequence of our model and needs further study. This correlation may explain why we do not see very-low mass AGN ($`\stackrel{<}{\sim}10^4`$ M<sub>⊙</sub>): the efficiency in converting accretion power into luminosity may be too low for such objects.
## 4 Testing the Model
### 4.1 Observed Broad Line Widths
In the framework of our model, those AGN showing particularly broad emission lines (FWHM$`\approx 15,000`$–$`20,000`$ km s<sup>-1</sup>) in their optical spectra are objects which are accreting at a very low rate, close to but higher than $`\dot{m}_{min}(m)`$. Their expected ionizing luminosity is then given by $`L_{ion}\approx \xi (m)\dot{m}_{min}(m)L_{Edd}\approx 10^{43}M_8^{15/8}\text{ erg s}^{-1}`$, where $`M_8`$ is the mass of the central black hole in $`10^8M_{\odot }`$, and we used the correlation $`\xi \approx 0.4M_8`$. The predicted ionizing luminosities at the minimum accretion rate are then $`L_{ion}(M_8=1)\approx 10^{43}`$ erg s<sup>-1</sup> and $`L_{ion}(M_8=0.1)\approx 10^{41}`$ erg s<sup>-1</sup>. AGN with strong and exceptionally broad emission lines (e.g. Broad Line Radio Galaxies, Osterbrock, Koski & Phillips, 1975; Grandi & Phillips, 1979) should then host a massive central black hole, of $`10^8`$–$`10^9`$ solar masses. Lower mass black holes accreting at rates slightly higher than $`\dot{m}_{min}(m)`$ may also have very broad optical emission lines, but they would be hard to detect due to their low-contrast optical spectra.
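A quick numerical check of the quoted luminosities, assuming the standard Eddington luminosity $`L_{Edd}\approx 1.3\times 10^{46}M_8`$ erg s<sup>-1</sup> (a value not stated explicitly in the text):

```python
# Check of the ionizing luminosities at the minimum accretion rate.
# xi ~ 0.4*M_8 and mdot_min = 0.3*eta*(alpha*m)**(-1/8) come from the text
# (eta = 0.06, alpha = 0.1); L_Edd ~ 1.3e46*M_8 erg/s is an assumed standard value.
eta, alpha = 0.06, 0.1

def L_ion(M8):
    m = 1e8 * M8                                   # black-hole mass in solar masses
    mdot_min = 0.3 * eta * (alpha * m) ** -0.125
    xi = 0.4 * M8
    return xi * mdot_min * 1.3e46 * M8             # erg/s

print(f"L_ion(M_8=1)   ~ {L_ion(1.0):.1e} erg/s")   # ~1e43, as quoted
print(f"L_ion(M_8=0.1) ~ {L_ion(0.1):.1e} erg/s")   # ~1e41, as quoted
```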
Emission lines broader than $`\sim `$20,000 km s<sup>-1</sup> should not exist for any plausible mass of black hole. This limit seems to be obeyed. FWHM $`\sim `$20,000 km s<sup>-1</sup> are rare. Steiner (1981) lists 14 out of 147 AGN with FW-Zero-Intensity between $`20,000`$ and 25,000 km s<sup>-1</sup> (only 3 with FW-Zero-Intensity $`>23,000`$ km s<sup>-1</sup>). The number with FW-Half-Maximum $`\sim `$20,000 km s<sup>-1</sup> will be much less.
### 4.2 Low Luminosity AGN
Nearby AGN with independently measured masses (see Ho, 1999) that imply accretion at rates lower than $`\dot{m}_{min}(m)`$ should show no broad emission lines in their optical spectra.
NGC 4594 is a low luminosity ($`L_X\approx 3.5\times 10^{40}`$ erg s<sup>-1</sup>, Nicholson et al., 1998) AGN/LINER with a spectroscopically well estimated mass of $`10^9`$ solar masses for the central object (Kormendy et al., 1996). This gives a ratio between the X-ray (ASCA) and the Eddington luminosity of NGC 4594 of $`\approx 3\times 10^{-7}`$. In our model no BELR should exist at this low $`L/L_{Edd}`$. Nicholson et al. (1998) showed that no broad H$`\alpha `$ was present in the HST spectrum of NGC 4594. From the ASCA spectrum they also put an upper limit on the column density of cold absorbing gas of $`2.9\times 10^{21}`$ cm<sup>-2</sup>. This is much lower than the amount of gas which would be needed to obscure the putative BELR. Nicholson et al. (1998) conclude that there is no BELR in NGC 4594. All this is in good agreement with the predictions of our model (see §4): if the object has a mass of $`10^9`$ M<sub>⊙</sub>, then the expected luminosity for the minimum accretion rate $`\dot{m}_{min}(m)`$ is $`\approx 7\times 10^{44}`$ erg s<sup>-1</sup>. An accretion rate of $`\dot{m}\approx 5\times 10^{-5}\times \dot{m}_{min}(m)`$ would then be necessary to obtain the observed X-ray luminosity of $`L_X\approx 3.5\times 10^{40}`$ erg s<sup>-1</sup> (Nicholson et al., 1998), at which no BELR is predicted to exist.
Ho et al. (1997) searched for BELRs in a sample of 486 bright northern galaxies. They found that among the 211 objects classified as having a Seyfert or LINER nucleus, only 46 ($`\approx 20`$ %) have a detectable broad emission line component. Among these 46 the maximum H$`\alpha `$ width measured is 4,200 km/s. The $`\approx 80`$ % of the sources in the sample with no broad H$`\alpha `$ are either (a) obscured (type 2) AGN, (b) low-mass AGN with accretion rates slightly higher than $`\dot{m}_{min}(m)`$ and in which the broad H$`\alpha `$ component is too broad to be measured in low-contrast spectra, or (c) AGN with high mass (and so visible) but with an accretion rate lower than $`\dot{m}_{min}(m)`$, and so with no BELRs (like NGC 4594). The remaining $`\approx 20`$ % of the sample with broad H$`\alpha `$ would be made of very low-mass AGN ($`\approx 10^5`$ M<sub>⊙</sub>), with quite normal (Sy1-like) accretion rates ($`\dot{m}\approx 0.1`$–0.5), but (because of their low mass) with low luminosity. The BELs in these objects would then be quite normal, and more easily detectable even in low contrast spectra.
## 5 Conclusions
We presented a simple model which tightly links the existence of the BELCs in type 1 AGN to the accretion mechanism. We derived the accretion rate and mass scaling laws for a mean distance on a stable SS-disk falling within the region delimited inward by the transition radius between the radiation pressure and the gas pressure dominated regions, and, outward, by the maximum radius below which a stabilizing co-accreting corona forms (WCZ97). We identify the BELR with a vertically outflowing wind of ionized matter which forms at this radius.
Our main findings are:
* The Keplerian velocity of the BELCs around the central black hole depends critically on the accretion rate. The entire observed range of velocities (FWHM) in type 1 AGN is naturally reproduced in our model allowing the accretion rate to vary from its minimum permitted value to super-Eddington rates.
* For accretion rates close to the Eddington value, the expected FWHM are of the order of those observed in NLSy1. Lower accretion rates give instead FWHM typical of broad line type 1 AGN. For very low accretion rates the BELCs would no longer exist, giving an upper limit to the allowed FWHM of the BELs, consistent with observations. This limit could explain the absence of BELs in Low-Luminosity AGN. Existing optical data of LINERs are consistent with this prediction.
* In physical units, the distance at which the BELCs would form is a steep function of both the accretion rate and the mass of the central object. The predicted relationships agree with the observed masses, FWHM, and distances (WPM99).
* We find an empirical relationship which suggests that the radiative efficiency of higher mass black holes is greater than that for lower mass black holes.
This work has been made possible through the very useful and fruitful discussions with my colleagues at SAO and in Rome. In particular I would like to thank M. Elvis, F. Fiore and A. Siemiginowska for the very stimulating inputs that they gave to the development of the ideas presented in this paper. Particular thanks go to K. Forster, A. Fruscione, S. Mathur and B. Wilkes (at SAO), and G. Matt and G.C. Perola (at the Third University of Rome). This work has been partly supported by the NASA grants ADP NAG-5-4808, NAG-5-3039, NAG-5-2476 and LTSA NAGW-2201.
# Multiquark picture for Λ(1405) and Σ(1620)
## Abstract
We propose a new QCD sum rule analysis for the $`\mathrm{\Lambda }`$ (1405) and the $`\mathrm{\Sigma }`$ (1620). Using the I=0 and I=1 multiquark sum rules we predict their masses.
One of the interesting subjects in nuclear physics is to study the properties of excited baryon states. For example, in the case of the $`\mathrm{\Lambda }`$ (1405) its nature is not completely established [pdg98]; i.e. an ordinary three quark state or a $`\overline{K}N`$ bound state or the mixing state of the previous two possibilities. In the QCD sum rule approach [qsr] there have been several works on the $`\mathrm{\Lambda }`$ (1405) using three-quark interpolating fields [leinweber90, kl97] or five-quark operators [liu84]. In this work we focus on the decay modes of the $`\mathrm{\Lambda }`$ (1405) and the $`\mathrm{\Sigma }`$ (1620) and obtain the mass of each particle by introducing multiquark sum rules.
Let’s consider the following correlator:
$`\mathrm{\Pi }(q^2)=i{\displaystyle \int d^4xe^{iqx}\langle T(J(x)\overline{J}(0))\rangle },`$ (1)
where $`J`$ is the $`\pi \mathrm{\Sigma }`$ (I=0) multiquark interpolating field, $`J_{\pi ^+\mathrm{\Sigma }^-+\pi ^0\mathrm{\Sigma }^0+\pi ^-\mathrm{\Sigma }^+}`$.
Here, for the $`\mathrm{\Sigma }`$ we take Ioffe's choice [ioffe81]; e.g. $`\pi ^0\mathrm{\Sigma }^0`$ means $`ϵ_{abc}(\overline{u}_ei\gamma ^5u_e-\overline{d}_ei\gamma ^5d_e)([u_a^TC\gamma _\mu s_b]\gamma ^5\gamma ^\mu d_c+[d_a^TC\gamma _\mu s_b]\gamma ^5\gamma ^\mu u_c)`$, where $`u`$, $`d`$ and $`s`$ are the up, down and strange quark fields, and $`a,b,c,e`$ are color indices. $`T`$ denotes the transpose in Dirac space and $`C`$ is the charge conjugation matrix.
The OPE side has two structures:
$`\mathrm{\Pi }^{OPE}(q^2)=\mathrm{\Pi }_q^{OPE}(q^2)\not{q}+\mathrm{\Pi }_1^{OPE}(q^2)\mathrm{𝟏}.`$ (2)
In this paper, however, we only present the sum rule from the $`\mathrm{\Pi }_1`$ structure (hereafter referred to as the $`\mathrm{\Pi }_1`$ sum rule) because the $`\mathrm{\Pi }_1`$ sum rule is generally more reliable than the $`\mathrm{\Pi }_q`$ sum rule, as emphasized in Ref. [jt97]. The OPE side is given as follows.
$`\mathrm{\Pi }_1^{OPE}(q^2)=`$ $`-`$ $`{\displaystyle \frac{7m_s}{\pi ^82^{18}3^25}}q^{10}ln(-q^2)+{\displaystyle \frac{7}{\pi ^62^{15}3^2}}\langle \overline{s}s\rangle q^8ln(-q^2)`$
$`+`$ $`{\displaystyle \frac{35m_s^2}{\pi ^62^{14}3^2}}\langle \overline{s}s\rangle q^6ln(-q^2)-{\displaystyle \frac{121m_s}{\pi ^42^93^2}}\langle \overline{q}q\rangle ^2q^4ln(-q^2)`$
$`+`$ $`{\displaystyle \frac{11}{\pi ^22^6}}\langle \overline{q}q\rangle ^2\langle \overline{s}s\rangle q^2ln(-q^2)-{\displaystyle \frac{m_s^2}{\pi ^22^63}}(14\langle \overline{q}q\rangle ^3-33\langle \overline{q}q\rangle ^2\langle \overline{s}s\rangle )ln(-q^2)`$
$`-`$ $`{\displaystyle \frac{m_s}{2^43^3}}(140\langle \overline{q}q\rangle ^4+3\langle \overline{q}q\rangle ^3\langle \overline{s}s\rangle ){\displaystyle \frac{1}{q^2}},`$
where $`m_s`$ is the strange quark mass and $`\langle \overline{q}q\rangle `$, $`\langle \overline{s}s\rangle `$ are the quark condensate and the strange quark condensate, respectively. Here, we let $`m_u`$ = $`m_d`$ = 0 $`\ll `$ $`m_s`$ and $`\langle \overline{u}u\rangle `$ = $`\langle \overline{d}d\rangle `$ $`\equiv `$ $`\langle \overline{q}q\rangle `$ $`\ne `$ $`\langle \overline{s}s\rangle `$. We neglect the contribution of gluon condensates and concentrate on tree diagrams such as Fig. 1, and assume the vacuum saturation hypothesis to calculate quark condensates of higher dimensions. Note that only some typical diagrams are shown in Fig. 1.
The contribution of the “bound” diagrams is a $`1/N_c`$ correction to that of the “unbound” diagrams, where $`N_c`$ is the number of the colors. In Eq. (3) we set $`N_c`$ = 3. The “unbound” diagrams correspond to a picture that two particles are flying away without any interaction between them. In the $`N_c\mathrm{}`$ limit only the “unbound” diagrams contribute to the $`\pi \mathrm{\Sigma }`$ multiquark sum rule. Then, the $`\pi \mathrm{\Sigma }`$ multiquark mass ($`m(\pi \mathrm{\Sigma })`$) should be the sum of the pion and the $`\mathrm{\Sigma }`$ mass in this limit.
Eq. (3) has the following form:
$`\mathrm{\Pi }_1^{OPE}(q^2)`$ $`=`$ $`aq^{10}ln(-q^2)+bq^8ln(-q^2)+cq^6ln(-q^2)+dq^4ln(-q^2)`$
$`+`$ $`eq^2ln(-q^2)+fln(-q^2)+g{\displaystyle \frac{1}{q^2}},`$
where $`a,b,c,\mathrm{},g`$ are constants. Then, we parameterize the phenomenological side as
$`{\displaystyle \frac{1}{\pi }}Im\mathrm{\Pi }_1^{Phen}(s)`$ $`=`$ $`\lambda ^2m\delta (s-m^2)+[-as^5-bs^4-cs^3-ds^2-es-f]\theta (s-s_0),`$
where $`m`$ is the $`m(\pi \mathrm{\Sigma })`$ and $`s_0`$ the continuum threshold. $`\lambda `$ is the coupling strength of the interpolating field to the physical $`\mathrm{\Lambda }`$ (1405) state. The Borel-mass dependence of the $`m(\pi \mathrm{\Sigma })`$ shows that there is a plateau at large Borel mass. However, this is a trivial result from our crude model on the phenomenological side. Hence we do not take this as the $`m(\pi \mathrm{\Sigma })`$ nor as the $`\mathrm{\Lambda }`$ (1405) mass. Instead, we draw the Borel-mass dependence of the coupling strength $`\lambda ^2`$ at $`s_0`$ = 2.789 GeV<sup>2</sup> as shown in Fig. 2, where the $`s_0`$ is taken by considering the next $`\mathrm{\Lambda }`$ particle [pdg98]. There is a maximum point in the figure. It means that the $`\pi \mathrm{\Sigma }`$ multiquark state couples strongly to the physical $`\mathrm{\Lambda }`$ (1405) state at this point. Then we take the $`\mathrm{\Lambda }`$ (1405) mass as the $`m(\pi \mathrm{\Sigma })`$ at the point. However, it would be better to determine an effective threshold $`s_0`$ from the present sum rule itself.
Thus, the steps for getting the $`m(\pi \mathrm{\Sigma })`$ are as follows. First, consider “unbound” diagrams only and choose a threshold $`s_0`$ such that the average mass within the fiducial Borel interval becomes the $`m(\pi )`$ \+ $`m(\mathrm{\Sigma })`$. Second, consider the whole set of diagrams (“unbound” + “bound” diagrams) and draw the Borel-mass dependence of the coupling strength $`\lambda ^2`$ using the above $`s_0`$. Last, determine the $`m(\pi \mathrm{\Sigma })`$ where the $`\lambda ^2`$ has its maximum value, and take this as the $`\mathrm{\Lambda }`$ (1405) mass. Following the above steps we get the $`m(\pi \mathrm{\Sigma })`$ = 1.424 GeV at $`s_0`$ = 3.082 GeV<sup>2</sup>.
There is another I=0 multiquark state, namely the $`\overline{K}^0n+K^-p`$ multiquark state. Similarly, we obtain the $`m(\overline{K}N)`$ = 1.589 GeV at $`s_0`$ = 3.852 GeV<sup>2</sup>. This corresponds to the $`\mathrm{\Lambda }`$ (1600) mass. It is interesting to note that the masses from the two multiquark states are similar at the same threshold, as shown in Table 1.
Now, we can extend our previous analysis to the I=1 multiquark states and thus get the $`\mathrm{\Sigma }`$ (1620) mass. There are three decay channels for the $`\mathrm{\Sigma }`$ (1620). We can therefore construct the following multiquark interpolating fields: $`J_{\overline{K}^0n-K^-p}`$, $`J_{\pi ^+\mathrm{\Sigma }^--\pi ^-\mathrm{\Sigma }^+}`$, and $`J_{\pi ^0\mathrm{\Lambda }}`$ (or $`J_{\pi ^\pm \mathrm{\Lambda }}`$). In Table 2 we present each multiquark mass.
We have obtained the I=0 and I=1 multiquark masses, which are slightly different from the experimental values [pdg98]. One of the corrections is to include the isospin symmetry breaking effects (i.e. $`m_u-m_d\ne 0`$, $`\langle \overline{u}u\rangle \ne \langle \overline{d}d\rangle `$, and electromagnetic effects) in our sum rules. On the other hand, one can consider the contractions between the $`\overline{u}`$ and $`u`$ (or between the $`\overline{d}`$ and $`d`$) quarks in the initial state which have been excluded in our previous calculation. However, it is found that this correction is very small compared to other $`1/N_c`$ corrections, i.e. the contribution of “bound” diagrams. Another possibility is the correction from possible instanton effects [instanton] to the I=0 and I=1 states, respectively.
In this work we have neglected the contribution of gluon condensates and that of other higher dimensional operators including gluon components. Since we have considered the $`\mathrm{\Pi }_1`$ sum rule, only the odd dimensional operators can contribute to the sum rule. Thus, for example, the contribution of the gluon condensates is given by terms like $`m_s\langle \frac{\alpha _s}{\pi }G^2\rangle `$ and thus can be neglected compared to other quark condensates of the same dimension.
In summary, the $`\mathrm{\Lambda }`$ (1405) and $`\mathrm{\Sigma }`$ (1620) masses are predicted in the QCD sum rule approach using the $`\overline{K}N`$, $`\pi \mathrm{\Sigma }`$, and $`\pi \mathrm{\Lambda }`$ multiquark interpolating fields (both I=0 and I=1).
The author thanks Prof. D.-P. Min and Prof. C.-R. Ji for their effort to make NuSS’99 successful. This work was supported in part by the Korea Science and Engineering Foundation (KOSEF).
# Phase transition properties of a finite ferroelectric superlattice from the transverse Ising model
## I Introduction
Possibly because of the great difficulty of growing well characterized samples, experimental studies of ferroelectric superlattices have been published only in recent years (Iijima et al. 1992; Tsurumi et al. 1994; Wiener-Avnear 1994; Tabata, Tanaka and Kawai 1994; Kanno et al. 1996; Zhao T. et al. 1999). Some exploratory theoretical work on ferroelectric superlattices has appeared (Tilley 1988; Schwenk, Fishman and Schwabl 1988; Schwenk, Fishman and Schwabl 1990). Their starting point is the Ginzburg-Landau phenomenological theory.
On the microscopic level, the transverse Ising model (TIM) (de Gennes 1963; Binder 1987; Tilley and Zeks 1984; Cottam, Tilley and Zeks 1984) was used to study infinite ferroelectric superlattices under mean field theory (Qu, Zhong and Zhang 1994; Qu, Zhong and Zhang 1995; Zhong and Smith 1998) or effective field theory (Zhou and Yang 1997). From the experimental point of view the TIM is a valuable model because of its possible applications, for example in studies of hydrogen bonded ferroelectrics (de Gennes 1963), cooperative Jahn-Teller systems (Elliot et al. 1971) and strongly anisotropic magnetic materials in a transverse field (Wang and Cooper 1968). The reviews of Blinc and Zeks (1972) and Stinchcombe (1973) give more details about possible applications of the TIM.
In the present paper, we consider a finite ferroelectric superlattice in which the elementary unit cell is made up of $`l`$ atomic layers of type $`A`$ and $`n`$ atomic layers of type $`B.`$ The mean-field approximation is employed and the equation for the Curie temperature is obtained by use of the transfer matrix method. We study two models of the superlattice which alternate as ABAB…AB(Model I) or ABABA…BA (Model II). Numerical results are given for the dependence of the Curie temperature on the thickness and exchange constants of the superlattice.
## II The Curie temperature
We start with the TIM(de Gennes 1963; Sy 1993, Qu, Zhong and Zhang 1994; Qu, Zhong and Zhang 1995; Zhong and Smith 1998; Bouziane et al. 1999)
$$H=\frac{1}{2}\underset{(i,j)}{}\underset{(r,r^{})}{}J_{ij}S_{ir}^zS_{jr^{}}^z\underset{ir}{}\mathrm{\Omega }_iS_{ir}^x,$$
(1)
where $`S_{ir}^x,S_{ir}^z`$ are the $`x`$ and $`z`$ components of the pseudo-spin, $`(i,j)`$ are plane indices and $`(r,r^{})`$ are different sites of the planes, $`J_{ij}`$ denote the exchange constants. We assume that the transverse field $`\mathrm{\Omega }_i`$ is dependent only on layer index and consider the interaction between neighboring sites. For simplicity, we take $`\mathrm{\Omega }`$ the same in the superlattice because the main qualitative important features result from the difference of $`J_{ij}.`$
The spin average $`\stackrel{}{S}_i`$ , obtained from the mean field theory
$$\stackrel{}{S}_i=\frac{\stackrel{}{H}_i}{2|\stackrel{}{H}_i|}\mathrm{tanh}(\frac{|\stackrel{}{H}_i|}{2k_BT})$$
(2)
where $`\stackrel{}{H}_i(\mathrm{\Omega },0,_jJ_{ij}S_j^z)`$ is the mean field acting on the ith spin, $`k_B`$ is the Boltzman constant and $`T`$ is the temperature.
At a temperature close and below the Curie temperature, $`S_i^x`$ and $`S_i^z`$ are small, $`|\stackrel{}{H}_i|\mathrm{\Omega }`$, equation (2) can be approximated as
$`S_i^x`$ $`=`$ $`{\displaystyle \frac{1}{2}}\mathrm{tanh}({\displaystyle \frac{\mathrm{\Omega }}{2k_BT}})`$ (3)
$`S_i^z`$ $`=`$ $`{\displaystyle \frac{1}{2\mathrm{\Omega }}}\mathrm{tanh}({\displaystyle \frac{\mathrm{\Omega }}{2k_BT}})[z_0J_{ii}S_i^z+z(J_{i,i+1}S_{i+1}^z+J_{i,i1}S_{i1}^z)]`$ (4)
Here $`z_0`$ and $`z`$ are the numbers of nearest neighbors in a certain plane and between successive planes respectively.
Let us rewrite Eq.(4) in matrix form in analogy with the reference(Barnas 1992)
$$\left(\genfrac{}{}{0pt}{}{m_{i+1}}{m_i}\right)=M_i\left(\genfrac{}{}{0pt}{}{m_i}{m_{i1}}\right)$$
(5)
with $`M_i`$ as the transfer matrix defined by
$$M_i=\left(\begin{array}{cc}\tau -z_0J_{ii}/(zJ_{i,i+1})& -J_{i,i-1}/J_{i,i+1}\\ 1& 0\end{array}\right).$$
(6)
where $`m_i=S_i^z`$ and $`\tau =2\mathrm{\Omega }/(zJ_{i,i+1})\mathrm{coth}[\mathrm{\Omega }/(2k_BT)].`$
We consider a ferroelectric superlattice which alternates as $`ABAB\mathrm{\cdots }AB`$. In each elementary unit $`AB`$, there are $`l`$ atomic layers of type $`A`$ and $`n`$ atomic layers of type $`B`$. The intralayer exchange constants are given by $`J_A`$ and $`J_B`$, whereas the exchange constant between different layers is described by $`J_{AB}`$. We assume there are $`N`$ elementary units and the layer index runs from $`0`$ to $`N(l+n)-1`$. In this case, the transfer matrix $`M_i`$ reduces to two types:
$$M_A=\left(\begin{array}{cc}X_A& -1\\ 1& 0\end{array}\right),M_B=\left(\begin{array}{cc}X_B& -1\\ 1& 0\end{array}\right),$$
(7)
where $`X_A=\tau -j_A,X_B=\tau -j_B,`$ $`j_A=z_0J_A/(zJ_{AB}),j_B=z_0J_B/(zJ_{AB}),`$ and $`\tau =2\mathrm{\Omega }/(zJ_{AB})\mathrm{coth}[\mathrm{\Omega }/(2k_BT)].`$
From Eq.(5), we get
$$\left(\begin{array}{c}m_{N(l+n)-1}\\ m_{N(l+n)-2}\end{array}\right)=R\left(\begin{array}{c}m_1\\ m_0\end{array}\right)$$
(8)
where
$$R=M_B^{n-1}(M_A^lM_B^n)^{N-1}M_A^{l-1}$$
(9)
is the total transfer matrix.
From the above equation and the following equations
$$m_1=X_Am_0,m_{N(l+n)-2}=X_Bm_{N(l+n)-1},$$
(10)
we obtain the equation for the Curie temperature of the superlattice as
$$R_{11}X_AX_B+R_{12}X_B-R_{21}X_A-R_{22}=0.$$
(11)
Next we consider Model II, the superlattice which alternates as $`ABA\mathrm{\cdots }BA`$, and assume that the lattice has $`N(l+n)+l`$ layers. The total transfer matrix
$$S=M_A^{l-1}M_BR$$
(12)
and the equation for the Curie temperature is obtained as
$$S_{11}X_A^2+(S_{12}-S_{21})X_A-S_{22}=0.$$
(13)
For an unimodular matrix $`M`$, the n-th power of $`M`$ can be linearized as(Yariv 1992; Wang, Pan and Yang 1999)
$$M^n=U_nM-U_{n-1}I,$$
(14)
where $`I`$ is the unit matrix, $`U_n=(\lambda _+^n-\lambda _{-}^n)/(\lambda _+-\lambda _{}),`$ and $`\lambda _\pm `$ are the two eigenvalues of the matrix $`M`$.
Using Eq.(14), we obtain
$`M_A^l`$ $`=`$ $`E_lM_A-E_{l-1}I,`$ (15)
$`M_B^n`$ $`=`$ $`F_nM_B-F_{n-1}I,`$ (16)
where $`E_l=(\alpha _+^l-\alpha _{-}^l)/(\alpha _+-\alpha _{-})`$, $`F_n=(\beta _+^n-\beta _{-}^n)/(\beta _+-\beta _{-})`$, $`\alpha _\pm =(X_A\pm \sqrt{X_A^2-4})/2`$ and $`\beta _\pm =(X_B\pm \sqrt{X_B^2-4})/2.`$ Then from Eqs. (15) and (16), the matrix $`M_A^lM_B^n`$ in Eq. (9) can be written explicitly as
$`M_{AB}`$ $`=`$ $`M_A^lM_B^n=`$ (18)
$`\left(\begin{array}{cc}\left(E_lX_A-E_{l-1}\right)\left(F_nX_B-F_{n-1}\right)-E_lF_n& -\left(E_lX_A-E_{l-1}\right)F_n+E_lF_{n-1}\\ E_l\left(F_nX_B-F_{n-1}\right)-E_{l-1}F_n& -E_lF_n+E_{l-1}F_{n-1}\end{array}\right)`$
The trace of the matrix $`M_{AB}`$ is
$$tr=\left(E_lX_A-E_{l-1}\right)\left(F_nX_B-F_{n-1}\right)-2E_lF_n+E_{l-1}F_{n-1}.$$
(19)
Since $`det(M_{AB})=1,`$ the eigenvalues of the matrix $`M_{AB}`$ are $`\gamma _\pm =(tr\pm \sqrt{tr^2-4})/2.`$ Then using Eq. (14), we get
$$M_{AB}^{N-1}=G_{N-1}M_{AB}-G_{N-2}I,$$
(20)
where $`G_N=(\gamma _+^N-\gamma _{-}^N)/(\gamma _+-\gamma _{-}).`$
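The linearization (14) is straightforward to verify numerically; a small sketch (with an arbitrary illustrative value of $`X_A`$) is:

```python
import numpy as np

def U(n, lam_plus, lam_minus):
    """U_n = (lam_+^n - lam_-^n) / (lam_+ - lam_-)."""
    return (lam_plus ** n - lam_minus ** n) / (lam_plus - lam_minus)

X_A, l = 2.7, 5                               # arbitrary illustrative values
M_A = np.array([[X_A, -1.0], [1.0, 0.0]])     # det M_A = 1, as required by eq. (14)
lam = np.roots([1.0, -X_A, 1.0])              # eigenvalues: roots of t^2 - X_A t + 1
lhs = np.linalg.matrix_power(M_A, l)
rhs = U(l, *lam) * M_A - U(l - 1, *lam) * np.eye(2)
print(np.allclose(lhs, rhs))                  # True: M_A^l = E_l M_A - E_{l-1} I
```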
Using Eqs. (15)-(19), we can express the total transfer matrices $`R`$ and $`S`$ in terms of $`X_A,X_B,E_l,F_n,`$ and $`G_N`$. Explicit expressions for equations (11) and (13) for the Curie temperature are obtained by substituting the matrix elements of $`R`$ and $`S`$ into Eqs. (11) and (13), respectively. The resulting expressions are tedious, so we only give numerical results below.
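To make the procedure concrete, the sketch below evaluates eq. (11) for Model I and eq. (21) for the infinite superlattice and locates the reduced Curie temperature by a bracketing search. It assumes the reduced variables suggested by the figure captions, namely $`t=k_BT/(zJ_{AB})`$ and $`\omega =\mathrm{\Omega }/(zJ_{AB})`$, so that $`\tau =2\omega \mathrm{coth}(\omega /2t)`$; this identification is our reading of the captions rather than something stated explicitly above, and the sketch is illustrative, not the code used for the figures.

```python
import numpy as np

def tau(t, omega):
    """Reduced tau = 2*omega*coth(omega/(2t)), with t = k_B T/(z J_AB)."""
    return 2.0 * omega / np.tanh(omega / (2.0 * t))

def transfer(X):
    return np.array([[X, -1.0], [1.0, 0.0]])

def residual_finite(t, jA, jB, l, n, N, omega):
    """Left-hand side of eq. (11) for Model I (zero at the Curie temperature)."""
    XA, XB = tau(t, omega) - jA, tau(t, omega) - jB
    MA, MB = transfer(XA), transfer(XB)
    P = np.linalg.matrix_power
    R = P(MB, n - 1) @ P(P(MA, l) @ P(MB, n), N - 1) @ P(MA, l - 1)
    return R[0, 0] * XA * XB + R[0, 1] * XB - R[1, 0] * XA - R[1, 1]

def residual_infinite(t, jA, jB, l, n, omega):
    """trace(M_A^l M_B^n) - 2, eq. (21), for the infinite superlattice."""
    XA, XB = tau(t, omega) - jA, tau(t, omega) - jB
    P = np.linalg.matrix_power
    return np.trace(P(transfer(XA), l) @ P(transfer(XB), n)) - 2.0

def solve(res, ts=np.linspace(0.05, 5.0, 2000)):
    """Refine the largest bracketed root of res(t) on the grid by bisection."""
    vals = np.array([res(t) for t in ts])
    idx = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
    if len(idx) == 0:
        return None
    a, b = ts[idx[-1]], ts[idx[-1] + 1]
    for _ in range(60):
        m = 0.5 * (a + b)
        a, b = (m, b) if np.sign(res(m)) == np.sign(res(a)) else (a, m)
    return 0.5 * (a + b)

# parameters of Fig. 3: j_A = 1.2, j_B = 1, l = n = 2, omega = 0.5
jA, jB, l, n, omega = 1.2, 1.0, 2, 2, 0.5
t0 = solve(lambda t: residual_infinite(t, jA, jB, l, n, omega))
for N in (2, 4, 8, 16):
    tc = solve(lambda t, N=N: residual_finite(t, jA, jB, l, n, N, omega))
    print(f"N = {N:2d}: t_C = {tc:.4f}   (infinite superlattice: t_0 = {t0:.4f})")
```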
Fig.1 gives the dependence of the reduced Curie temperature $`t_C`$ on the reduced exchange constant $`j_A`$ in models I and II. The Curie temperature increases with increasing $`j_A`$. It is clear that the Curie temperatures in model II are larger than those in model I. The reason is that the superlattice in model II is thicker than that in model I. The fact that the Curie temperature increases with increasing $`j_A`$ can also be seen in Fig.2. Fig.2 shows the dependence of the reduced Curie temperature $`t_C`$ on the reduced exchange constant $`j_A`$ for different $`\omega `$ in model I. The transverse field causes a reduction of the Curie temperature. In other words, the Curie temperature decreases with increasing $`\omega `$.
Fig.3 shows the dependence of the Curie temperature on the number of elementary units $`N`$ in model I. $`t_0`$ in the figure is the Curie temperature of the corresponding infinite superlattice. The Curie temperature of the infinite superlattice can be determined from the following equation(Wang, Pan and Yang 1999)
$$\text{trace}(M_A^lM_B^n)=2.$$
(21)
The Curie temperature of a finite superlattice is always less than that of the corresponding infinite superlattice, and it increases with the number of elementary units $`N`$, approaching $`t_0`$ asymptotically for large values of $`N`$.
## III Conclusion
In conclusion, we have studied the phase transition properties of a finite ferroelectric superlattice in which the elementary unit cell is made up of $`l`$ atomic layers of type $`A`$ and $`n`$ atomic layers of type $`B`$. By the transfer matrix method we derived the equation for the Curie temperature of the superlattice. Numerical results are given for the dependence of the Curie temperature on the thickness and exchange constants of the superlattice. The method proposed here can be applied to finite superlattices in which each elementary unit cell is made up of several types of materials, with an arbitrary number of atomic layers of each type. The finite superlattice is more realistic in experiments than the infinite superlattice. We hope that the present work will have relevance to some future experiments.
Captions:
Fig.1, The dependence of the reduced Curie temperature $`t_C`$ against the reduced exchange constant $`j_A`$ in model I and II. The parameters $`j_B=1,l=n=N=2,`$ and $`\omega =0.5.`$
Fig.2, The dependence of the reduced Curie temperature $`t_C`$ against the reduced exchange constant $`j_A`$ for different $`\omega `$ in model I. The parameters $`j_B=1,`$and $`l=n=N=2.`$
Fig.3, The dependence of the Curie temperature on the number of elementary units $`N`$ in model I. The parameters $`j_A=1.2,j_B=1,l=n=2,`$ and $`\omega =0.5.`$
References
Barnas, J. (1992). Phys.Rev.B. 45,10427.
Binder, K.(1987). Ferroelectrics 35,99.
Blinc, R., and Zeks, B.(1972). Adv.Phys. 1, 693.
Bouziane, T., Saber, M., Belaaraj, A., and Ainane, A. (1999). J.Magn.Magn.Materials 195, 220.
Cottam, M.G., Tilley, D.R. and Zeks, B. (1994). J.Phys.C 17,1793.
de Gennes, P.G. (1963). Solid State Commun. 1,132.
Elliot, R.J., Gehring, G.A., Malogemoff, A.P.,Smith, S.R.P., Staude, N.S., and Tyte, R.N. (1971). J.Phys. C 4, L179.
Iijima, K, Terashima, T., Bando, Y., Kamigaki, K. and Terauchi, H. (1992). Jpn.J.Appl.Phys. 72, 2840.
Kanno, I., Hayashi, S., Takayama, R. and Hirao, T. (1996). Appl.Phys.Lett. 68,328.
Qu, B.D., Zhong, W.L. and Zhang, P.L. (1994). Phys.Lett.A 189,419.
Qu, B.D., Zhong, W.L. and Zhang, P.L. (1995). Jpn.J.Appl.Phys. 34,4114.
Schwenk, D., Fishman, F. and Schwabl, F. (1988). Phys.Rev. B 38,11618.
Schwenk, D., Fishman, F. and Schwabl, F., (1990). J.Phys.:Condens.Matter 2,6409.
Stinchcombe, R.B. (1973). J.Phys.C 6, 2459.
Sy, H.K. (1993). J.Phys.:Condens.Matter 5, 1213.
Tabata, H., Tanaka, H. and Kawai, T. (1994). Appl.Phys.Lett. 65,1970.
Tilley, D.R. and Zeks, B. (1984). Solid State Commun. 49,823.
Tilley, D.R.(1988). Solid State Commun. 65,657.
Tsurumi, T., Suzuki, T., Yamane, M.and Daimon, M.(1994). Jpn.J.Appl.Phys. 33,5192.
Wang, X.G., Pan, S.H. and Yang, G.Z. (1999). J.Phys.:Condens.Matter 11, 6581.
Wang, X.G., Pan, S.H. and Yang, G.Z. (1999). Solid State Commun. 113, 59.
Wang, Y.L. and Cooper, B. (1968). Phys.Rev. 173, 539.
Wiener-Avnear, E.(1994). Appl.Phys.Lett. 65 ,1784.
Yariv, A. and Yeh, P. (1992). Optical Waves in Crystals (John Wiley & Sons, New York).
Zhao, T., Chen, Z.H., Chen, F., Shi, W.S.,Lu, H.B. and Yang, G.Z. (1999). Phys.Rev. B 60,1697.
Zhong, W.L. and Smith, S.R.P. (1998). J. Korea Physical Society 32,S382.
Zhou, J.H. and Yang, C.Z. (1997). Solid State Commun. 101,639. |
no-problem/9912/quant-ph9912065.html | ar5iv | text | # About the Notion of Truth in the Decoherent Histories Approach: a Reply to Griffiths.
## Abstract
Griffiths claims that the “single family rule”, a basic postulate of the decoherent histories approach, rules out our requirement that any decoherent history has a unique truth value, independently from the decoherent family to which it may belong. Here we analyze the reasons which make our requirement indispensable and we discuss the consequences of rejecting it.
This short letter is a reply to Griffiths’ article Consistent histories, quantum truth functionals, and hidden variables , in which he has raised some objections to our paper Can the decoherent histories description of reality be considered satisfactory? . For a more detailed analysis of the arguments of , we refer the reader to .
First of all, we would like to summarize the main features of the DH approach, about which there seems not to be a disagreement between Griffiths and us:
1. Within a given decoherent family everything goes like in Classical Mechanics: the probability distribution assigned to the histories obeys the classical probability rules; it is possible to define a Boolean structure, so it is possible to speak about the conjunction, disjunction of two histories and about the negation of an history; moreover, one can define the logical implication between two histories, so that also reasonings of the type “if … then …” are possible.
2. As Griffiths admits in the above quoted paper, it is possible to assign truth–values to all histories of a given decoherent family. This move has an important physical meaning: it means that, in spite of the probabilistic structure of the theory, one can speak of the properties actually possessed by the physical system under study, and not only of the probability that such properties be possessed. In order to understand this important point, let us remember that also in Classical Statistical Mechanics one generally has only a probabilistic knowledge of the physical system; despite of this, he can claim that the system has well defined physical properties (positions and momenta of its constituents, from which all other properties can be derived), but he doesn’t know which they are simply because he is ignorant about the precise state of the system. From the logical–mathematical point of view, the legitimacy of considering properties as objectively possessed is a consequence of the fact that one can define a Boolean algebra in phase space and attach truth–values to its subsets in a consistent way.
In Standard Quantum Mechanics, on the other hand, one cannot even think that systems possess physical properties prior to measurements: mathematically, this is reflected in the peculiar properties of the Hilbert space (with dimension greater than 2): the set of projection operators cannot be endowed with a Boolean structure, and it is not possible to attach consistently truth–values to them, as implied by the theorems of Gleason, Bell and Kochen and Specker.
Thus, giving a truth value to the histories of a given decoherent family corresponds to the assertion that such histories speak of specific physical properties that the system under study possesses objectively, independently from our (in general) probabilistic knowledge of the system and of any act of measurement. This, in our opinion, is the nicest feature of the DH formalism, the one emboding all its advantages with respect to the standard quantum formalism.
3. When one deals with more that one decoherent family, things become rather problematic: if such families can be accomodated into a single decoherent family, then all what we have said previously remains valid. If this is not possible (and this is likely to happen most of the times), then any reasoning, any conclusion derived by using histories which belong to incompatible families, are devoid of any physical meaning. Griffiths felt the necessity to promote this fact , which we have indicated as the “single family rule”, to a basic rule of the DH approach: a meaningful description of a (closed) quantum mechanical system, including its time development, must employ a single framework \[i.e. decoherent family\] . This rule gives rise to some curious situations, which do not have any classical analogue, but we will not discuss these matters now. Actually, we agree that they do not lead to formal inconsistencies.
Now, let us come to our argument. The formalism of DH implies, as it is obvious and can be easily checked, that any given decoherent history belongs in general to many different decoherent families. As we have argued under 2., in any of these decoherent families, such a history has a precise truth–value. As already mentioned, also Griffiths seems to agree on this. Now the relevant question is: does the truth value of the considered history depend on the decoherent family to which it may belong? We think that the answer must be “no”, because (as we said in 2.) truth–values refer to properties objectively possessed by the physical system under study, and if the truth–value of a decoherent history would change according to the decoherent family to which it belongs, also the properties that such a history attaches to the physical system would change by changing the decoherent family. We have formalized these considerations in the following assumption (which is assumption (c) of ):
> Any given decoherent history has a unique truth value (0 or 1), which is independent from the decoherent family to which the history is considered to belong.
As mentioned in the abstract, in , Griffiths claims that such an assumption violates the “single family rule” and as such it cannot be considered as part of the DH approach. With reference to this point we want first of all to make clear that nowhere, in the original formulations of the “single family rule”, it was mentioned that a given decoherent history can (or cannot) have different truth values according to the family to which it belongs; nowhere, directly or indirectly, reference was made to our assumption (since we have been the first to put it forward). Thus, it is not correct to claim that such a rule already excluded our assumption. If Griffiths claims that the “single family rule” excludes (c), then he is proposing a new, extended, interpretation of such a rule.
Having clarified this point, we are ready to accept that Griffiths rejects our assumption: he is perfectly free to do so. But we pretend that he accepts all the consequence (which we are going to analyze) of such a move. Denying (c) simply means to assert that:
> There are decoherent histories whose truth–values depend on the decoherent family to which they (are thought or considered to) belong, i.e. in some families they are, for example, true, while in other families they are false.
This, in turn, means accepting that statements like “this table is here”, “the Earth is moving around the Sun”, “that electron has spin up along such a direction” are — in general — neither true nor false per se: each of them acquires a truth value only when it is considered a member of a precise (among the infinitely many ones which are possible) decoherent family; moreover, their truth values may change according to the decoherent family to which they are associated. In some families it may be true that “this table is here” or that “the Earth is moving around the Sun”, while in other families it may be false that “this table is here” or that “the Earth is moving around the Sun”. This state of affairs is the direct consequence of denying our assumption, and it should be evident to anyone that if one takes such a position then he is spoiling the statements of the DH approach of any physical meaning whatsoever.
We can then summarize the whole debate between Griffiths and us in the following terms. In our papers and , we have considered the following four assumptions:
1. Every family of decoherent histories can be (naturally) endowed with a Boolean structure allowing to recover classical reasoning,
2. Within every decoherent family it is possible to assign to its histories truth values which preserve the Boolean structure (i.e. they form an homomorphism),
3. Every decoherent history has a unique truth value, independently from the decoherent family to which it may be considered to belong,
4. Any decoherent family can be taken into account,
and we have shown that they lead to a Kochen-and-Specker-like contradiction. This implies that at least one of them must be rejected in order to avoid inconsistencies within the DH approach. Griffiths rejects assumption (c), while we, in accordance with the previous analysis, believe that this move is unacceptable. Accordingly, in our papers we have suggested that one should limit, resorting to precise and physically meaningful criteria, the set of decoherent histories which can be taken into account. Such a move might lead to a physically acceptable and sensible new formulation of the DH approach. |
no-problem/9912/cond-mat9912245.html | ar5iv | text | # Structure, elastic moduli and thermodynamics of sodium and potassium at ultra-high pressures
## Abstract
The equations of state at room temperature as well as the energies of crystal structures up to pressures exceeding 100 GPa are calculated for Na and K . It is shown that the allowance for generalized gradient corrections (GGA) in the density functional method provides a precision description of the equation of state for Na, which can be used for the calibration of pressure scale. It is established that the close-packed structures and BCC structure are not energetically advantageous at high enough compressions. Sharply non-monotonous pressure dependences of elastic moduli for Na and K are predicted and melting temperatures at high pressures are estimated from various melting criteria. The phase diagram of K is calculated and found to be in good agreement with experiment.
The theoretical and experimental studies of the matter properties at ultra-high pressures arouse a great interest in the connection with the possibility to obtain phases with uncommon properties as well as geophysical and astrophysical applications. As an example, the problem of metallic hydrogen can be mentioned . In the high pressure studies the alkali metals can be conveniently used as model objects. This is due, first, to their high compressibility and, second, to the variety of physical phenomena occurring in their compression and numerous structural and electron phase transitions (see, e.g.). For heavy alkali metals it is the famous $`sd`$ isostructural FCC-FCC transition (see, e.g., and references therein) as well as the transitions to uncommon distorted phases at higher pressures. Recently it was supposed, basing on the electron structure calculations, that lithium can transform at high enough pressures into “exotic” phases similar to that of hydrogen . Thus, further theoretical investigations of structural properties of alkali metals at ultra-high pressures seem to be interesting and important.
Despite a lot of considerations, this is still an open problem. The most of early attempts used computational approaches which were not accurate enough from the contemporary point of view. It is well known (see, e.g., ) that the highly accurate quantitative description of the electronic and, especially, lattice properties of metals needs the consideration of the real form of potential in the crystal and going beyond the frame of local approximation in the density functional, in particular, the allowance for generalized gradient corrections (GGA) . In the present work a consistent theoretical study of the relative stability of crystal structures of Na and K under pressure as well as a variety of related lattice properties, is performed basing on these first-principle calculations. The most interesting result obtained is that, contrary to the traditional concepts (see, e.g., ) neither structure, which is characteristic of metals under normal conditions (BCC, FCC and HCP), is stable at high enough pressures even in Na where there are no electron transitions.
The ab initio calculations of electronic structure, thermodynamical potential, equilibrium lattice parameters and elastic moduli at temperature $`T=0`$ were carried out using the FP-LMTO method with allowance for the GGA in the form proposed in . A careful optimization of the parameters of this method made it possible to carry out the calculations of the total energy with an accuracy within the limits of 0.1 mRy/atom. Parameter $`c/a`$ for the HCP lattice was determined by the minimization of the total energy for a fixed specific volume, and the elastic moduli — by numerical differentiation of the total energy with respect to tetragonal and trigonal deformations (see, e.g., ). Up to now, there are only few works devoted to the first-principle calculations of elastic moduli of metals under high pressures (see, e.g., for Mo and W). The expressions for the free energy $`F`$ (connected with Gibbs thermodynamical potential $`G`$ by the relationship $`G=F+PV`$, $`P=dF/dV`$, where $`P`$ is the pressure, $`V`$ is the volume) and elastic moduli $`C_{\alpha \beta \gamma \delta }`$ at finite temperatures can be presented in the following form:
$$F(V,T)=E_e\mathrm{}(V)+F_{ph}(V,T)$$
(1)
$`C_{\alpha \beta \gamma \delta }(V,T)`$ $`=`$ $`C_{\alpha \beta \gamma \delta }^0(V_0)+V_0{\displaystyle \frac{dC_{\alpha \beta \gamma \delta }^0}{dV_0}}{\displaystyle \frac{P_{ph}(V_0,T)}{B_0}}`$ (3)
$`+C_{\alpha \beta \gamma \delta }^{}(T)`$
where $`F_{ph}=T\underset{\xi ,\stackrel{}{q}}{}\mathrm{}n\left[2sh\frac{\mathrm{}\omega __\xi (\stackrel{}{q})}{2T}\right]`$ is the free energy of the phonon subsystem in the harmonic approximation, $`E_e\mathrm{}(V)`$ is the total energy of the electron subsystem obtained from the FP-LMTO calculations , $`P_{ph}=F_{ph}/V`$ is the phonon pressure, $`B_0`$ is the bulk modulus of the electron subsystem at $`T=0`$, $`\xi ,\stackrel{}{q},\omega `$ are the number of phonon branch, wave vector and phonon frequency, respectively. In the expression for the elastic moduli $`C_{\alpha \beta \gamma \delta }(V,T)`$ the first term corresponds to the electron contribution at $`T=0`$, second one - to the quasiharmonic contribution due to the effects of thermal expansion, and third one - to the phonon contribution obtained from the differentiation of $`F_{ph}`$ with respect to the corresponding deformation parameters. For the calculation of phonon contributions to thermodynamical functions the pseudopotential model described in , which describes with a high accuracy a wide range of lattice properties of alkali metals, was used.
Figs. 1,2 show the results of calculations of Gibbs potentials at $`T=O`$ for the BCC, FCC and HCP phases of sodium and potassium, respectively. It should be noted that in the case of Na the phonon contribution to $`\mathrm{}G`$ (contribution of zero-point vibrations, $`\mathrm{}G_{zp}`$) are $`G_{zp}^{fcc}G_{zp}^{bcc}=1.31\times 10^5`$ Ry/atom, $`G_{zp}^{hcp}G_{zp}^{bcc}=1.35\times 10^5`$ Ry/atom. This is well comparable with the electron contributions to $`\mathrm{}G`$ under the normal conditions. However, already at $`P>1`$ GPa for Na and practically at all the pressures for K the contribution of zero-point oscillations to $`\mathrm{}G`$ can be neglected. Generally speaking, energy differences of order of $`10^5`$ Ry/atom is too small to be accurately derived in our first-principle calculations; nevertheless, we have obtained correct phase diagram even for sodium at low pressures. In accordance with the results of calculations, at $`P=0`$ the BCC phase and HCP phase have the lowest energy for K and Na, respectively (actually, under these conditions Na has not HCP but 9R structure whose energy, however, is very close to HCP ). It is important to emphasize that this difference of Na from K is purely quantitative: according to the results, shown in the insert to Fig.2, K would have to transit to the hexagonal close-packed phase at the negative pressure of the order of several kilobars. As the calculations show potassium, unlike sodium, transits from the BCC to FCC structure at $`P11.6`$ GPa, which is in excellent agreement with the experimental data . In this case the relative change in the volume $`\mathrm{}=(V_{bcc}V_{fcc})/V_{bcc}0.0067`$ takes place at $`V_0/V=2.14`$. Here and below $`V_0`$ is the experimental value of the specific volume at the atmospheric pressure and temperature 10K, equal to 484.12 a.u. . Our calculations make it possible to suppose that the difference between Na and K is associated with the electronic topological transition occurring in the BCC potassium at $`V_0/V2`$ and destabilizing the BCC structure. The similar situation takes place in Li while in the BCC sodium, within the whole range of pressures, the Van-Hove singularity goes away from the Fermi level under the compression. Generally, sodium seems to be a unique metal in the Periodical Table: in the whole region of the existence of BCC structure it has no singularities of electronic structure near the Fermi level, and the Fermi surface remains approximately spherical.
The calculation results of equations of state for sodium and potassium are shown in Fig.3 along with the experimental data available. It should be pointed out that the experimental data agree with the theoretical results within their accuracy limits ($`10\%`$). This creates the prerequisites for the development of pressure scale based on sodium as a reference substance. Note also that at room temperature the role of the phonon contribution to pressure falls under the compression, and this contribution in itself is small (of the order of 0.3 GPa at full pressure 20-30 GPa).
Figs. 4,5 display the calculation results of the dependence of elastic moduli $`C_{ij}(V)`$ on the compression for sodium and potassium, respectively. A drastically non-monotonous behavior of shear moduli associated with the tetragonal ($`C^{}=(C_{11}C_{12})/2`$) and trigonal ($`C_{44}`$) deformations in both BCC and FCC structures is noticeable. It should be pointed out that at least at compressions $`V_0/V<2`$ the calculation results of $`C_{ik}(V)`$ in the pseudopotential model and in the first-principle approach are close. In this region the equations of state coincide in these two approaches with an accuracy up to several percents. This confirms a sufficiently high reliability of our use of pseudopotential model for the calculation of phonon contributions to thermodynamical values. Nevertheless, the phonon contributions to the shear moduli do not exceed 10% within the whole pressure range studied. It should be noted that softening of modulus $`C^{}`$ is a typical pre-transition phenomenon connected with the structural transitions between the BCC and close-packed structures. However, the softening of modulus $`C_{44}`$ in the FCC structure of K at high pressures (Fig.5) is rather surprising. It appears to be similar to the softening of this modulus, taking place in the FCC structure of Cs near the electron $`sd`$ transition and is due to the crawling of the Fermi level over the peak of $`d`$-state density.
Fig.6 shows an experimental phase diagram of potassium and the phase diagram built on the basis of our calculations. The dependence of the melting temperature on pressure, $`T_m(P)`$ in the BCC and FCC phases was obtained using the phonon spectra and different melting criteria. First of all, we use the Lindeman criterion
$$\overline{x^2(T_m)}/d^2=const,$$
(4)
where $`\overline{x^2(T)}=\underset{\xi \stackrel{}{q}}{}\frac{\mathrm{}\left|\stackrel{}{q}\stackrel{}{e}_{\xi \stackrel{}{q}}\right|^2}{2M\omega _{\xi \stackrel{}{q}}}\mathrm{coth}\frac{\mathrm{}\omega _{\xi \stackrel{}{q}}}{2T}`$ is the mean square of atom displacement, $`\stackrel{}{e}_{\xi \stackrel{}{q}}`$ is the polarization vector, $`M`$ is the atom mass, $`d`$ is the distance between the nearest neighbors. Although the Lindeman criterion is empirical, it may be expected that its use for finding the melting temperature at high pressures would be as successful as at low temperatures . Nevertheless one can see from Fig.6 that it is not too accurate in a broad pressure region. Varshni melting criterion which is based on the temperature softening of the shear moduli, namely
$$C_{44}\left(T_m\right)/C_{44}\left(0\right)=0.65,$$
(5)
appears to be much more accurate. Here we use the method of the calculation of the temperature dependence of elastic moduli from the phonon spectra described in . Note also that we describe with high accuracy the BCC-FCC phase boundary. We also present the results obtained in generalized Debye model when all the thermodynamical quantities are calculated in the Debye model but with the Debye temperature found from ab initio elastic moduli. One can see that this description is also rather accurate for potassium.
Fig.1 shows that the BCC phase of sodium becomes energetically unfavorable as compared with the HCP at a pressure about 80 GPa. At $`P>100`$ GPa, however, this phase demonstrates anomalies in the equilibrium value of parameter $`c/a`$ (Fig.7). A sharp decrease in the ratio $`c/a`$ to 1.2-1.3 at $`V_0/V>4.35`$, which is necessary to maintain the HCP lattice in equilibrium, is doubtful actually, and seems to be indicative of transition to some non closed packed phase with a large number of atoms per cell. These phases are observed in K, Rb and Cs at high pressures . It is usual to associate their appearance in heavy alkali metals with the $`sd`$ transition. Thus, according to our results, all the three ”typically metallic” structures - BCC, FCC, HCP do not correspond to the lowest energy in Na where no electronic transitions are observed within the pressure range considered. In order to understand qualitatively the cause of appearance of ”nonstandard” metallic phases, let us use the above mentioned pseudopotential model for the estimations. In this model the radius of ”hard” ion core is described by the pseudopotential parameter $`r_0`$ . As the estimates show the compression $`V_0/V4`$ corresponding to the instability of the close- packed phase coincides with the condition of overlapping ion cores $`2r_0d`$ for Na. Hence, the concept of well determined ion cores, being at the base of standard metallic bond description, becomes inapplicable at ultra-high pressures. As a result, as we have seen, the substance transforms into exotic non close-packed phases. These results are in qualitative agreement with the results for lithium.
In conclusion, note that it would be interesting to study the structure of sodium at ultra-high pressures, which, as follows from the results obtained, may prove to be surprising. Another result of this work, permitting a direct experimental check is non-monotonous behavior of Na and K shear moduli at pressure. At last, precision theoretical description of the equation of state of sodium would make possible to use it for the development of an accurate pressure scale up to 100 GPa. Although the contemporary first-principle calculations can provide high enough accuracy also for another substances (see, e.g., recent calculations for Si) a very high compressibility of sodium and the absence of phase transitions in a broad range of pressures make it probably the most suitable for these purposes.
The authors are grateful to D. Yu. Savrasov and S. Yu. Savrasov for the permission to use the author’s version of the code realizing the method in their work as well as to D. Yu. Savrasov and E. G. Maksimov for useful discussions of the details of this method.
Figure captions
Fig.1. Pressure dependence of the differences of Gibbs potentials between BCC and FCC as well as HCP and FCC structures for Na.
Fig.2. Pressure dependence of the differences of Gibbs potentials between BCC and FCC as well as HCP and FCC structures for K.
Fig.3. Equations of states for sodium and potassium at $`T=295K`$. Solid line corresponds to FP-LMTO calculations , dashed line - to the calculations by the pseudopotential method , Empty (solid) triangles, circles and asterisks (squares) are the experimental data for Na and K, respectively.
Fig.4. The dependence of elastic moduli $`C^{}`$ and $`C_{44}`$ on the compression $`U`$ for BCC Na; empty circles and squares show, respectively, the data from ; the solid ones - the values obtained in the present work.
Fig.5. The dependence of elastic moduli $`C^{}`$ and $`C_{44}`$ on the compression $`U`$ for BCC and FCC phases of K. Solid (empty) circles and squares show $`C^{}`$ and $`C_{44}`$ values for BCC (FCC) phases, respectively. The dashed line — the $`C^{}`$ values for the bcc phase from .
Fig.6. Phase diagram of potassium. The solid line - experimental data , dashed line- the calculations using Varshni criterion (5) dashed-dot line - the calculations using Lindeman criterion (4), dotted line - generalized Debye model (see the text). Solid circles - BCC-FCC phase boundary from our calculations.
Fig.7. Dependence of the total energy of HCP structure for Na on the ratio $`c/a`$ for various compressions: the solid line — $`U=0.75`$; dashed line — $`U=0.76`$; dashed-dot line — $`U=0.765`$. The insert shows the equilibirum values of the parameter $`c/a`$ for HCP structure depending on $`U`$ is shown. Solid (empty) circles denote the values taken in the global (local) minimum of the total energy, correspondingly. |
no-problem/9912/astro-ph9912172.html | ar5iv | text | # 1 M15 Stellar Velocity Results
|
no-problem/9912/astro-ph9912271.html | ar5iv | text | # Search for the optical counterpart of the 16ms X-ray pulsar in the LMC Based on observations collected at the European Southern Observatory, La Silla, Chile
## 1 Introduction
PSR J0537$``$6910 is a young, fast, X-ray pulsar, recently discovered at the center of the LMC supernova remnant N157B, close to the 30 Doradus star forming region. N157B belongs to the class of the so called Crab-like supernova remnants, or plerions, characterized by non-thermal spectra and a centrally-filled radio/X-ray morphology probably due to the presence of a synchrotron nebula powered by the relativistic wind from a young, energetic, pulsar. Apart from the Crab Nebula, so far only three other plerions were known to host pulsars, namely: SNR0540$``$69 (PSR B0540$``$69), also belonging to 30 Doradus complex and located 15′ from N157B, MSH$``$15-52 (PSR B1509$``$58) and G11.2$``$0.3 (PSR J1811$``$1926). Thus, with the detection of PSR J0537$``$6910, N157B represents the fifth case of a plerion/pulsar association.
Pulsed X-ray emission at 16 ms was serendipitously discovered during a RXTE/PCA observation towards 30 Doradus (Marshall et al. 1998). Soon after, the pulsation was detected in archived 1993 ASCA/GIS data and additional confirmations came from BeppoSAX (Cusumano et al. 1998). PSR J0537$``$6910 takes over the Crab (33 ms) as the fastest “classical” (i.e. not spun up by matter accretion from a companion star) pulsar.
The pulsar was identified in the ASCA/GIS with a X-ray source detected at the center of N157B and resolved in a point-like component (the pulsar and, probably, its associated synchrotron nebula) plus an elongated feature, the origin of which is still uncertain. In both RXTE and ASCA data, the pulse profile appears characterized by a sharp ($`1.7`$ ms FWHM) symmetric peak, which shows no obvious evolution during the time interval between ASCA and RXTE observations (3.5 yrs). The period derivative of the pulsar ($`\dot{P}5\times 10^{14}`$ s s<sup>-1</sup>), obtained from the comparison of multi-epoch timing (Marshall et al. 1998; Cusumano et al. 1998), gives a spindown age of $`\mathrm{5\hspace{0.17em}000}`$ yrs), similar to the age of the remnant estimated by Wang & Gotthelf (1998a), a magnetic field of $`10^{12}G`$, typical for a pulsar this young, and a rotational energy loss $`\dot{E}4.8\times 10^{38}`$ ergs s<sup>-1</sup>. Substantially the same results were obtained from the timing analysis of ROSAT/HRI data (Wang & Gotthelf 1998b).
In radio, PSR J0537$``$6910 has been observed between June and August 1998 using the 64m radio telescope in Parkes but it has not been detected down to an upper limit of $`F_{14.\mathrm{GHz}}0.04`$ mJy (Crawford et al. 1998). Although not really compelling, the present upper limit suggests that PSR J0537$``$6910 is weaker in radio than both the Crab pulsar and PSR B0540$``$69.
## 2 Optical observations
While in radio PSR J0537$``$6910 is an elusive target, in the optical domain the situation appears more promising.
Up to now, three of the five pulsars younger than $`\mathrm{10\hspace{0.17em}000}`$ yrs have been certainly identified in the optical (Mignani 1998), where they channel through magnetospheric emission $`10^510^6`$ of their rotational energy output. Since PSR J0537$``$6910 is very young ($`\mathrm{5\hspace{0.17em}000}`$ yrs) and, with the Crab, it has the highest $`\dot{E}`$, it is natural to assume that also in this case a significant amount of the rotating power be radiated in the optical. However, given the uncertain dependance of the optical luminosity vs. the pulsar parameters, it is difficult to make a prediction on the actual magnitude of PSR J0537$``$6910. A possible estimate can be obtained by a straight scaling of the Pacini’s relation (see e.g. Pacini & Salvati 1987) i.e. neglecting the dependance of the pulsar luminosity on its unknown optical duty cycle. This would yield $`V24.6`$, after correcting for the interstellar absorption $`A_\mathrm{V}1.3`$, estimated applying the relation of Fitzpatrick (1986) with an $`N_\mathrm{H}10^{22}`$ cm<sup>-2</sup>, measured by the X-ray spectral fittings (Wang & Gotthelf 1998a). However, we note that the other young ($`\mathrm{2\hspace{0.17em}000}`$ yrs) LMC pulsar, PSR B0540$``$69, with a factor 3 smaller $`\dot{E}`$, has a magnitude $`V=22.4`$ with an $`A_\mathrm{V}0.6`$ (Caraveo et al. 1992).
Although PSR J0537$``$6910 is still undetectable in radio, its detection in ROSAT/HRI data (Wang & Gotthelf 1998b) reduces significantly its position uncertainty down to $`\pm `$ 3″and prompts the search for its optical counterpart. The scientific case appears similar to the one of PSR B0540$``$69, also discovered as a pulsating X-ray source (Seward et al. 1984), also embedded in a supernova remnant (SNR0540$``$69) and tentatively identified in the optical without the aid of a reference radio position (Caraveo et al. 1992).
In the following, we describe the results of the first deep imaging of the field of PSR J0537$``$6910, performed with the ESO/NTT.
### 2.1 The data set
The field of PSR J0537$``$6910 has been observed in three different runs between September and November 1998 from the European Southern Observatory (La Silla). The observations have been performed in visitor mode with the NTT, equipped with the second generation of the SUperb Seeing Imager camera (SUSI2). The camera is a CCD with a field of view of $`5\stackrel{}{.}5\times 5\stackrel{}{.}5`$, split in two chips, and a projected $`2\times 2`$ binned pixel size of 0$`\stackrel{}{.}`$16. The two CCD chips are physically separated by a gap $``$ 100 pixels in size, corresponding to an effective sky masking of $``$ 8″.
Images were obtained in different wide-band filters ($`B,V,I`$) and in the narrow-band $`H_\alpha `$, with the available data set summarized in Table 1.
After the basic reduction steps (bias subtraction, flatfield correction, etc.), single exposures have been combined and cleaned from cosmic ray hits by frame-to-frame comparison. The frames taken through the same filter have been registered with respect to each other and combined through a median filter. The conversion from instrumental magnitudes to the Johnson standard system was obtained using a set of primary calibrators from Landolt fields observed at different airmasses during each night. The formal errors in the zero point of the calibration curves are $`0.04`$ magnitudes in $`V`$ and $`B`$, and $`0.03`$ in $`I`$.
Astrometry on the field has been computed using as a reference the coordinates and positions of a set of stars extracted from the USNO catalogue. Then the sky-to-pixel coordinate transformation has been computed using the ASTROM software (Wallace 1990), yielding a final accuracy of 0$`\stackrel{}{.}`$4 on the astrometric fit.
Fig. 1 shows a $`1200s`$ exposure $`H_\alpha `$ image of the 30 Doradus region, obtained through the SUSI2 camera. The solid square ($`20\mathrm{}\times 20\mathrm{}`$), located close to the maximum of the emission in the $`H_\alpha `$ band just at the center of the star forming region, includes the X-ray position of PSR J0537$``$6910 (Wang & Gotthelf 1998b). A zoomed I-band image of this area is shown in Fig. 2 in negative greyscale.
### 2.2 Results
Few objects are seen close or within the X-ray error circle of the pulsar (Fig. 2), including the moderately bright ($`V19`$) star #1. However, the crowding of the region, together with the irregular sky background conditions, prompted us to apply automatic object detection routines to search for additional, barely detectable, candidates.
The object search in the X-ray error circle was thus performed using the ROMAFOT package for photometry in crowded fields (Buonanno & Iannicola 1989). The ROMAFOT parameters were tuned to achieve in each filter a conservative $`5\sigma `$ object detection above the local background level. A template PSF was obtained by fitting the intensity profiles of some of the brightest, unsaturated, isolated stars in the field with a Moffat function, plus a numerical map of the residual to better take into account the contribution of the stellar wings. To allow for an automatic object matching and make the color-color analysis faster, all the images have been aligned to a common reference frame. As a reference for object detection we have used our $`I`$-band image, where the effects of the local absorption are reduced. The master list of objects thus created was then registered on the images taken in $`B`$ and $`V`$ filters and used as an input for the fitting procedure. A carefully check by eye has been performed in order to ensure that all the stellar objects found in the $`I`$ band were successfully fitted in the other images. Apart from the ones labelled in Fig.2, no other candidate optical counterpart to the pulsar has been clearly detected by our procedure. We just report the possible presence in the I-band image of an $`22.3`$ magnitude object (not recognizable in Fig.2), right below the detection threshold and located nearly at the center of the error circle. However, the very low significance of this detection as well as the lack of color information prevent us to assess the nature of this object and to speculate about a possible association with the pulsar.
The properties of objects #1-#12, i.e. their magnitudes and colors ($`BV`$ and $`VI`$) are summarized in Table 2. According to their colors and brightness, all these objects are likely identified as young massive stars. Fig. 3 shows the color–magnitude ($`I`$ vs $`VI`$) diagram computed for a sample of objects selected in a $`0\stackrel{}{.}5\times 0\stackrel{}{.}5`$ surrounding area (the region marked by the dashed box in Fig. 1), together with the Zero Age Main Sequence track estimated from a suitable chemical composition ($`Z=0.008`$, $`Y=0.23`$) for the LMC stellar population (Cassisi, private comm.). The objects labelled in Fig. 2 and listed in Table 2 have been marked by open diamonds. Although broadened by the interstellar absorption and shifted redward by the differential reddening, the color–magnitude diagram (CMD) of all the stars is indeed consistent with a young stellar population main sequence. Thus, the optical counterpart of PSR J0537$``$6910 is probably too faint to be detected against the high background induced by the supernova remnant and the absorption of the embedding HII region.
Using the template $`PSF`$s computed from each image, artificial stars tests have been run to estimate the $`V`$, $`B`$, and $`I`$ magnitude limits of our images. With the flux normalization left as a free parameter, artificial stars have simulated and added to the corresponding images at $`100`$ different positions randomly selected inside the error circle. Thus, the detection algorithm has been run in a loop for the above number of trials, with the flux normalization adjusted to allow for a $`3\sigma `$ detection in each filter. Averaging over the number of trials, we have found $`3\sigma `$ detection limits corresponding (within $`0.2`$ mag) to $`B23.2`$, $`V23.4`$ and $`I22.4`$, which we have taken as an indication of the limiting magnitudes achievable in each band.
## 3 Conclusions
We have performed deep optical observations to search for the optical counterpart of the isolated X-ray pulsar PSR J0537$``$6910. However, none of the objects detected close to/inside the X-ray error circle stands out as a convincing candidate. The marginal detection of a $`I22.3`$ object at the center of the error circle must be regarded as tentative and is in need of future confirmation. The optical counterpart of PSR J0537$``$6910 is thus unidentified down to a $`3\sigma `$ limiting magnitude of $`23.4`$ in V. Our result is in agreement with the upper limits recently derived by Gouiffes & Ögelman (1999) on the pulsed optical flux.
At the distance of 47 kpc estimated for the host remnant N157B (Gould 1995) and for the assumed interstellar absorption ($`A_\mathrm{V}1.3`$), our upper limit corresponds to an optical luminosity $`L_{\mathrm{opt}}1.3\times 10^{33}`$ erg s<sup>-1</sup>. This implies that PSR J0537$``$6910 is, at best, of luminosity comparable to the ones of the Crab and PSR B0540$``$69, in line with the predictions of Pacini’s law. Although interesting, this upper limit is not stringent enough to put strong constraints on the evolution of non-thermal optical emission of young pulsars. Together with the recent upper limit obtained for PSR B1706$``$44 (Mignani et al. 1999), the measurement of the optical luminosity of PSR J0537$``$6910 would be crucial to smoothly join the class of the very young ($`\mathrm{1\hspace{0.17em}000}`$ years) and bright objects with the class of older, Vela-like ($`\mathrm{10\hspace{0.17em}000}`$ years) ones, for which the optical output is $``$ 4 orders of magnitude lower (Mignani 1998).
As in the case of PSR B0540$``$69 (Shearer et al. 1994), time-resolved high resolution, imaging, possibly exploiting the more accurate X-ray position available from future Chandra observations, would be the best way to pinpoint and identify the optical counterpart of PSR J0537$``$6910.
###### Acknowledgements.
We acknowledge the support software provided by the Starlink Project which is funded by the UK SERC. Part of the SUSI2 observations were performed in guaranteed time as part of the agreement between ESO and the Astronomical Observatory of Rome. Last, we would like to thank the anonymous referee for his/her useful comments to the manuscript. |
no-problem/9912/cond-mat9912486.html | ar5iv | text | # DOMAIN STRUCTURES OF SMECTIC FILMS FORMED BY BENT-SHAPED MOLECULES
## I INTRODUCTION
A variety of molecules form liquid crystalline phases (see e.g. the monograph ). Many mesogen molecules have symmetries consistent with the formation of ferroelectric phases and nonzero dipole moments. Ferroelectric ordering is, however, extremely rare in positionally disordered liquids or liquid crystals, and since the discovery of ferroelectric liquid crystals it has been assumed usually that ferroelectricity is possible only in the chiral smectic -$`C^{}`$ phase (Sm$`C^{}`$), (formed by chiral molecules) that has the polar symmetry group $`C_2`$. In this case polarization can be written as $`𝐏=P[𝐧\times 𝐳]`$, where $`𝐧`$ is director and $`𝐳`$ is the smectic layer normal. The necessary conditions for the existence of nonzero polarization are a finite tilt angle ($`\theta 0`$) and a molecular dipole perpendicular to the long axis of molecules. In racemic mixtures, which contain both enantiomers (that is, molecules that are mirror images of each other) in equal amounts, the electric polarization vanishes. Obviously, the electric polarization is directly connected with molecular chirality in the Sm$`C^{}`$ ferroelectric liquid crystals.
However there is no fundamental reason that non-chiral liquid crystals should not be ferroelectrics, since there is no unambiguous correspondence between chirality of molecules and the existence of macroscopic ferroelectric properties or structures they formed. The attempts of observation of ferroelectricity in non-chiral liquid crystals are, as a rule, centred around synthesis and investigations of non-conventional liquid crystalline structures . Recently ferroelectric phases composed of achiral molecules were reported and investigated , , . In these papers it was demonstrated that tilted smectic phases of achiral molecules show ferroelectric switching, and specific chiral domain structures. In the paper the bulk macroscopic properties of the lowest possible symmetry smectic phase (triclinic) were investigated and it was shown that such a system (though formed from achiral molecules) may possess ferroelectric and piezoelectric properties as well as macroscopic chirality. Due to polarity within smectic layers such a smectic may have only integer strength of point like defects in layers.
Note that in the mentioned above papers (, , ) were investigated only thin freely suspended films and care must be taken in drawing conclusions about the bulk properties of liquid crystals from the behaviour of films, as the surface layers of the film may be in a phase with higher (or lower) order than the bulk system. The surface phases may not even exist as bulk phases. In particularly in the papers , were observed not point-like defects, predicted theoretically in for the bulk phase but domain walls, i.e. two-dimensional defects in smectic layers.
The organization of our paper is the following. In the next section (II) we formulate our model and introduce (in the frame work of the Landau theory) the basic thermodynamics necessary for our discussions. In the section III we discuss different types of domain structures which may appear in smectics under consideration, and inspected the role of external influences (electric or magnetic fields and concentration of chiral impurities). The last section (IV) is devoted to a discussion and summary of our main results.
## II THEORETICAL MODEL
According to experimental data presented in the papers and , new smectic structures (labelled in these papers as smectics $`B_2`$), are formed by polar but achiral molecules (”banana”-shaped) having the symmetry group $`C_{2v}`$, and macroscopic behaviour of these structures is characterized by three spontaneous symmetry-breaking leading to the appearance of following properties: molecular tilt, ferroelectric polarization, and chirality. The maximal point symmetry group allowing these three types of symmetry breaking is $`C_2`$, where the second order axis should be parallel to smectic planes.
The tilt order parameter in any tilted smectic phases can be characterized by the two-component order parameter $`\psi =\theta exp(i\varphi )`$, where $`\theta `$ is the polar angle (tilt) and $`\varphi `$ is the azimuthal angle of the nematic director $`𝐧`$. Instead of $`\psi `$ one can use so-called $`𝐜`$-director, which is the projection of the director $`𝐧`$ onto the layer plane. The magnitude of the tilt order parameter $`|𝐜|=sin\theta `$. The ferroelectric polarization $`𝐏`$ is also a vectorial quantity, and it is only possible along the symmetry axis $`C_2`$. From the general point of view the chirality of the system is a third order antisymmetric tensor which can be reduced for the system under study to the pseudo-scalar $`\chi `$. However in fact we have the only symmetry-breaking, namely $`C_{2v}C_2`$ and therefore all three order parameters should be interrelated, and the problem we face now is to find this relation. In fact since the bend of $`𝐜`$ removes the $`𝐜𝐳`$ mirror symmetry plane, it produces a local chiral symmetry breaking. This breaking of chiral symmetry can occur on two distinct length scales (microscopic or macroscopic). The distinction between microscopic and macroscopic chiral symmetry breaking is similar to the distinction between spontaneous and induced order parameters. From the macroscopic symmetry point of view to describe chiral, tilted, ferroelectric smectic films we have to introduce three order parameters $`(\chi ,𝐜,𝐏)`$ with a third order coupling $`(\chi \mathrm{𝐜𝐏})`$ between them. However in this paper (unlike e.g. ) we are interested in mainly microscopic causes of macroscopic symmetry breaking.
From the microscopic viewpoint the existence of a tilt in smectic phases comes from the requirement of the molecular packing (i.e. steric forces). These requirements fix for the polar molecules in our case (thin free standing films) only the module of the $`𝐜`$-director, and therefore there are two allowed values of molecular tilt $`\pm \theta `$. Thus any molecule in a smectic layer $`i`$ <sup>*</sup><sup>*</sup>* For the simplicity and according to the layer structure of smectics, we suppose that the order parameters are uniform within smectic layers. can be framed by two state system labelled by indexes $`\pm `$ according to the sign of its tilt. The same manner the dipole moment $`𝐏`$ can be oriented either parallel or anti-parallel to the second order symmetry axis and it gives two more states attached to each molecular site. Therefore each molecular site is a four state system: $`(+,+),(+,),(,+),(,)`$, where the first sign corresponds to the tilt, and the second one to the dipole moment. If among the $`N^i`$ molecules in a certain smectic layer $`i`$ the number of molecules in each state is $`N^i(+,+),N^i(+,),N^i(,+),N^i(,)`$ then evidently
$`N^i=N^i(+,+)+N^i(+,)+N^i(,+)+N^i(,)`$ (1)
Analogously it is easy to see, that the tilt angle for the layer $`i`$ can be represented as:
$`N^i\theta ^i=N^i(+,+)+N^i(+,)N^i(,+)N^i(,),`$ (2)
and the polarization is given by:
$`N^iP^i=N^i(+,+)+N^i(,+)N^i(+,)N^i(,)`$ (3)
It is important to note that for each molecular site the product of $`P\theta `$ represents the chirality of the given molecule, independently of site and of layer $`i`$. We follow here the idea and method developed recently for solid racemic solutions . However though for each individual molecular site $`\chi P\theta `$, this relation generally is not valid for the local mean values for a layer $`i`$, i.e. $`\theta ^iP^i\chi ^i`$, since analogously to (2, 3) one can write:
$`N^i\chi ^i=N^i(+,+)+N^i(,)N^i(,+)N^i(+,).`$ (4)
In the spirit of the Bragg Williams mean field approximation we can compute the entropy of the system
$`S=ln\left[{\displaystyle \frac{N!}{N(+,+)!N(+,)!N(,+)!N(,)!}}\right],`$ (5)
where $`N=_iN^i`$ the total number of molecules.
Solving the equations (1 \- 4), introducing the found expressions for $`N^i(\pm ,\pm )`$ in terms of the order parameters $`\theta ,P,\chi `$, and expanding of (5) for small values of the order parameters we get
$$S=N[\frac{1}{2}(P^2+\theta ^2+\chi ^2)+\frac{1}{2}(P^2\chi ^2+P^2\theta ^2+\chi ^2\theta ^2)+\frac{1}{12}(P^4+\chi ^4+\theta ^4)\theta \chi P]$$
It is important to notice (and this is one of the main points of the our investigation) the presence of the specific third order term $`\theta P\chi `$. The free energy of the system $`F=UTS`$ (where $`U`$ is the internal energy associated to intermolecular interactions) should have the same structure as the entropy $`S`$ but with renormalized coefficients, namely
$`F={\displaystyle \frac{a_1}{2}}P^2+{\displaystyle \frac{a_2}{2}}\theta ^2+{\displaystyle \frac{a_3}{2}}\chi ^2+{\displaystyle \frac{b_1}{2}}P^2\chi ^2+{\displaystyle \frac{b_2}{2}}\theta ^2P^2+{\displaystyle \frac{b_3}{2}}\chi ^2\theta ^2+{\displaystyle \frac{c_1}{2}}P^4+{\displaystyle \frac{c_2}{2}}\theta ^4+{\displaystyle \frac{c_3}{2}}\chi ^4+\gamma \chi \theta P`$ (6)
The fact that the third order term necessarily figures in the free energy does not change at the renormalization and it is related to the symmetry, since the product of the three representations to which $`\theta ,\chi ,`$ and $`P`$ belong includes the identical representation.
The coefficients $`a_i,b_i,c_i`$, and $`\gamma `$ can be considered as phenomenological parameters and $`a_i`$ should become small near the corresponding symmetry-breaking transitions. To say more requires further knowledge of all these coefficients. Unfortunately using only the data known from the literature we are not able to extract values of all needed parameters. Therefore we will not compare quantitatively our theory with available experimental data, since with too many unknown parameters the theory tends to become an exercise in curve fitting, which looses predictive credibility. Instead of this we will discuss in the next section qualitative features of the model.
## III Qualitative analysis of the model
Let us consider some very general consequences of the model. First note that the third order coupling found above corresponds to account of three particle interactions. If we suppose to escape a conflict between experiment and theory that all three order parameters are uniform within smectic planes, this third order coupling means that the modulations of the order parameters along the normal to smectic layers should be matched $`\chi (q_1)\theta (q_2)P(q_3)`$ to provide $`q_1+q_2+q_3=0`$. Due to smectic periodicity along this direction $`|q_i|q_0`$, where $`q_0=2\pi /d`$ is the wave vector of smectic density modulation ($`d`$ is the interlayer distance). Thus to satisfy the matching there are only two possibilities: (1) one of the three wave vectors is zero and two others are anti-parallel; (2) all three wave vectors are zero.
Second let us assume that one from the three coefficients $`a_i`$ is much smaller than two others. Therefore in the temperature region where this condition is fulfilled, we have only one soft order parameter, and one may neglect two others (hard) degrees of freedom. In this case the theory is reduced to the well known Landau theory for a scalar order parameter . However due to its importance for the present context (and for convenience) we repeat mainly known results to apply them to our concrete case (free standing films). It is just the case where it is easy and more useful to derive these results for the concrete system under consideration than to try to find the suitable references, and to modify all expressions to apply them to the case.
There are two effects, related to the existence of the surface in free standing films. The first is a pure geometrical one (finite size effects). The surfaces break the translational and rotational invariance (because the surface is a specific plane which breaks the translational invariance, and the normal to the surface is a specific direction which breaks the rotational invariance). Besides, certainly, there are physical modifications of the system due to the existence of the surface (surface effects). The surface can suppress the bulk ordering (this case is traditionally called the ordinary phase transition), the surface can enhance the bulk ordering (it is called the extraordinary phase transition), or as a third possibility the surface can experience its intrinsic critical behaviour. There is also a so - called special phase transition which is intermediate between ordinary and extraordinary transitions.
The both effects related to the existence of the surface can be taken into consideration in the frame work of the Landau expansion. In our particular case (film geometry and $`a_1a_2,a_3`$) it has the form:
$`F={\displaystyle _0^L}𝑑z\left({\displaystyle \frac{1}{2}}a_1\theta ^2+{\displaystyle \frac{1}{4}}c_1\theta ^4+{\displaystyle \frac{1}{2}}d_1(\theta )^2\right)+F_s`$ (7)
where we added to (7) the gradient term (with the coefficient $`d_1`$) to describe the tilt profile over the film thickness $`L`$, and $`F_s`$ is the surface energy which should have the same form as the bulk energy (7):
$`F_s={\displaystyle \frac{a^{}}{2}}\left(\theta ^2(0)+\theta ^2(L)\right)+{\displaystyle \frac{c^{}}{4}}\left(\theta ^4(0)+\theta ^4(L)\right)`$ (8)
Usually it is supposed that $`a^{}d\lambda ^1`$, where $`\lambda `$ is called by extrapolation length and experimental data indicate that (at least as it concerns to the tilt) we have $`\lambda <0`$ and it is called traditionally by extraordinary phase transition.
In this case the surface enhances the ordering and therefore on the surface one can expect the onset of ordering before (i.e. at higher temperatures) it occurs in the bulk. So one can expect in this case the surface transition for temperatures $`T_s>T_c`$ (by the definition the bulk transition temperature is determined from $`a_1(T_c)=0`$). But of course at $`T_c`$ due to the onset of the bulk order the surface will experience some critical behaviour as well. In the regime of $`T_c<T<T_s`$ the bulk correlation length $`\xi _b`$ is finite and the order parameter decays from its maximum value at the surface. One can easily find the transition temperature for the surface layer:
$$\frac{T_s}{T_c}1=\frac{d_1}{T_c}\lambda ^2$$
To find the profile for the order parameter we have to solve the Euler - Lagrange equation which follows from (7) supplemented by the boundary condition, which can be found from (8). Performing this rather routine procedure one can find that there are two types of configurations providing the minimum of the bulk functional (7) and simultaneously minimizing the surface energy (8). The first let say natural solution is symmetrical one (we will term this solution by synclinic structure):
$`𝐒\mathrm{𝐜𝐨𝐧𝐟𝐢𝐠𝐮𝐫𝐚𝐭𝐢𝐨𝐧}:\theta (z=0)=\theta (z=L)`$,
We imply that $`a^{}=\alpha ^{}(TT_s)`$ where $`T_s`$ is the surface transition temperature (it can be extracted from experimental data for very thin films, e.g. for two-layer films). Determining the surface transition temperature we can omit third-order term in the equation for the bulk, and the transition in the film with $`N`$-layers occurs at $`T_N`$ which can be found from the following equation
$$T_N=T_s\frac{d_1}{\alpha ^{}\xi _b(T_N)}\mathrm{tanh}\left(\frac{L}{2\xi _b(T_N)}\right)$$
The second solution (we will term it by anticlinic) is antisymmetrical one:
$`𝐀\mathrm{𝐜𝐨𝐧𝐟𝐢𝐠𝐮𝐫𝐚𝐭𝐢𝐨𝐧}:\theta (z=0)=\theta (z=L)`$ and for this case
$$T_N=T_s\frac{d_1}{\alpha ^{}\xi _b(T_N)}\mathrm{coth}\left(\frac{L}{2\xi _b(T_N)}\right)$$
The solution of both transcendental equations can be found numerically very easily and (as it should be) for small film thicknesses $`L\xi _b(T_N)`$ synclinic configuration has always the higher transition temperature while for thick films with $`L\xi _b(T_N)`$ the anticlinic solution can have the higher transition temperature. But certainly the anticlinic state can be only metastable one due to the gradient energy (or by the other words due to the energy penalty which one must pay for the domain wall appearing inevitably for the anticlinic structure). However the given above statement is valid only for the case $`a_1a_2,a_3`$, when we have deal with one scalar order parameter condensation. It is not the case when we have two or three soft degrees of freedom (condensed order parameters) due to third order coupling between them.
As we have already seen, the minimization of the third order coupling energy (when all three order parameters are condensed) leads to the following possible structures for the smectics under consideration:
(i)
$$\theta (q=0);P(q=q_0);\chi (q=q_0)$$
i.e. synclinic, antiferroelectric and racemic;
(ii)
$$\theta (q=q_0);P(q=0);\chi (q=q_0)$$
i.e. anticlinic, ferroelectric and racemic;
(iii)
$$\theta (q=q_0);P(q=q_0);\chi (q=0)$$
i.e. anticlinic, antiferroelectric and homochiral;
(iv)
$$\theta (q=0);P(q=0);\chi (q=0)$$
i.e. synclinic, ferroelectric and homochiral.
It is worth noting that all four types of predicted structures are indeed observed in experiments , . Moreover, it is clear that the application of an external electric field should favor the ferroelectric ordering of the dipoles, and therefore only the (ii) and (iv) structures will be stable in a strong enough field. This is exactly what was observed in . In the same manner, an external field conjugate to the chirality should induce the (iii) and (iv) structures; as a physical realization of this field one can have in mind the concentration of homochiral impurities. Finally, a field conjugate to the tilt angle must induce the (i) and (iv) structures only; physically such a field can be provided by the anchoring.
When only two of the order parameters condense (there are three types of such pairs) one can observe a very rich behaviour with many types of domain walls. For each type of wall, wall transformations can be observed as the parameters $`a_1,a_2,a_3`$ are varied; these can be understood as Ising - Bloch phase transitions with domain wall symmetry breaking. Unfortunately, we can find no guidance from experimental or theoretical sources for choosing all the phenomenological coefficients that appear in these expressions. Thus the primary function of this section must be to give a qualitative interpretation of our results and to demonstrate the possibility of ferroelectric ordering in basically non - chiral systems, as opposed to proving its existence exactly.
## IV Conclusion
We formulated a simple Landau-type model describing the macroscopic behaviour of the recently discovered new smectic phases composed of achiral bent-shaped molecules. Films of such smectics exhibit three types of ordering, related to dipole polarization, molecular tilt, and chirality. However, due to the specific third order coupling of the order parameters these three types of symmetry breaking are not independent, and this fact leads to the specific structures (i) - (iv) indeed observed in experiments. This inhomogeneous ordering physically means that over a large region of thicknesses free standing films can be considered as effective interfaces. It is typical for liquid crystals that the width of the interface of experimental mesogens is 40 - 100 times the length of the molecules. We have given an example of how the presence of an interface may induce a type of ordering in the inhomogeneous region (for free standing films it may be the whole thickness of the system) that does not occur in the bulk phases. Analogous phenomena are known for Langmuir monolayers, where chiral symmetry can be spontaneously broken , leading to a chiral phase composed of non - chiral molecules. In fact, for a thick free standing film the top and bottom layers are each equivalent to Langmuir monolayers.
The tilt arrangement of the $`A`$ configuration is anticlinic, i.e. the top and the bottom of the film are tilted in opposite directions. Note also recent ellipsometric studies where new phases of liquid crystals have been investigated and among these are the antiferroelectric Sm$`C_A`$ structures where the tilt direction alternates when going from layer to layer.
Physical mechanisms providing the polarization properties of non-chiral and chiral free standing films are very different. For the non-chiral systems the polar order is induced in fact by the steric packing of anisotropic (but non-chiral) molecules, whereas in the ordinary (chiral) ferroelectric liquid crystalline phases the polar order is a consequence of the molecular chirality.
###### Acknowledgements.
This work was supported in part by RFFR and INTAS grants and by the Russian State Program ”Statistical Physics”. E.K. thanks prof. M. Vallade for supporting his stay at the Lab. Spectro., Joseph - Fourier University Grenoble - 1 and for fruitful discussions.
Quantum Cryptography with Entangled Photons
## Abstract
By realizing a quantum cryptography system based on polarization entangled photon pairs we establish highly secure keys, because a single photon source is approximated and the inherent randomness of quantum measurements is exploited. We implement a novel key distribution scheme using Wigner’s inequality to test the security of the quantum channel, and, alternatively, realize a variant of the BB84 protocol. Our system has two completely independent users separated by $`360`$ m, and generates raw keys at rates of $`400`$–$`800`$ bits/second with bit error rates around $`3`$%.
The primary task of cryptography is to enable two parties (commonly called Alice and Bob) to mask confidential messages such that the transmitted data are illegible to any unauthorized third party (called Eve). Usually this is done using shared secret keys. However, in principle it is always possible to intercept classical key distribution unnoticed. The recent development of quantum key distribution can cover this major loophole of classical cryptography. It allows Alice and Bob to establish two completely secure keys by transmitting single quanta (qubits) along a quantum channel. The underlying principle of quantum key distribution is that nature prohibits gaining information on the state of a quantum system without disturbing it. Therefore, in appropriately designed schemes, no tapping of the qubits is possible without showing up to Alice and Bob. These secure keys can be used in a One-Time-Pad protocol , which makes the entire communication absolutely secure.
Two well known concepts for quantum key distribution are the BB84 scheme and the Ekert scheme. The BB84 scheme uses single photons transmitted from Alice to Bob, which are prepared at random in four partly orthogonal polarization states: $`0^{}`$, $`45^{}`$, $`90^{}`$, $`135^{}`$. If Eve tries to extract information about the polarization of the photons she will inevitably introduce errors, which Alice and Bob can detect by comparing a random subset of the generated keys.
The Ekert scheme is based on entangled pairs and uses Bell’s inequality to establish security. Both Alice and Bob receive one particle out of an entangled pair. They perform measurements along at least three different directions on each side, where measurements along parallel axes are used for key generation and oblique angles are used for testing the inequality. In , Ekert pointed out that eavesdropping inevitably affects the entanglement between the two constituents of a pair and therefore reduces the degree of violation of Bell’s inequality. While we are not aware of a general proof that the violation of a Bell inequality implies the security of the system, this has been shown for the BB84 protocol adapted to entangled pairs and the CHSH inequality .
In any real cryptography system, the raw key generated by Alice and Bob contains errors, which have to be corrected by classical error correction over a public channel. Furthermore it has been shown that whenever Alice and Bob share a sufficiently secure key, they can enhance its security by privacy amplification techniques , which allow them to distill a key of a desired security level.
A range of experiments have demonstrated the feasibility of quantum key distribution, including realizations using the polarization of photons or the phase of photons in long interferometers . These experiments have a common problem: the sources of the photons are attenuated laser pulses which have a non-vanishing probability to contain two or more photons, leaving such systems prone to the so called beam splitter attack .
Using photon pairs as produced by parametric down-conversion allows us to approximate a conditional single photon source with a very low probability for generating two pairs simultaneously and a high bit rate . Moreover, when utilizing entangled photon pairs one immediately profits from the inherent randomness of quantum mechanical observations leading to purely random keys.
Various experiments with entangled photon pairs have already demonstrated that entanglement can be preserved over distances as large as 10 km , yet none of these experiments was a full quantum cryptography system. We present in this paper a complete implementation of quantum cryptography with two users, separated and independent of each other in terms of Einstein locality and exploiting the features of entangled photon pairs for generating highly secure keys.
In the following we will describe the variants of the Ekert scheme and of the BB84 scheme which we both implemented in our experiment, based on polarization entangled photon pairs in the singlet state
$$|\mathrm{\Psi }^{-}\rangle =\frac{1}{\sqrt{2}}\left[|H\rangle _A|V\rangle _B-|V\rangle _A|H\rangle _B\right],$$
(1)
where photon $`A`$ is sent to Alice and photon $`B`$ is sent to Bob, and $`H`$ and $`V`$ denote the horizontal and vertical linear polarization respectively. This state shows perfect anticorrelation for polarization measurements along parallel but arbitrary axes. However, the actual outcome of an individual measurement on each photon is inherently random. These perfect anticorrelations can be used for generating the keys, yet the security of the quantum channel remains to be ascertained by implementing a suitable procedure.
Our first scheme utilizes Wigner’s inequality for establishing the security of the quantum channel, in analogy to the Ekert scheme which uses the CHSH inequality. Here Alice chooses between two polarization measurements along the axes $`\chi `$ and $`\psi `$, with the possible results $`+1`$ and $`-1`$, on photon $`A`$ and Bob between measurements along $`\psi `$ and $`\omega `$ on photon $`B`$. Polarization parallel to the analyzer axis corresponds to a $`+1`$ result, and polarization orthogonal to the analyzer axis corresponds to $`-1`$.
Assuming that the photons carry preassigned values determining the outcomes of the measurements $`\chi ,\psi ,\omega `$ and also assuming perfect anticorrelations for measurements along parallel axes, it follows that the probabilities for obtaining $`+1`$ on both sides, $`p_{++}`$, must obey Wigner’s inequality:
$$p_{++}(\chi ,\psi )+p_{++}(\psi ,\omega )-p_{++}(\chi ,\omega )\ge 0.$$
(2)
The quantum mechanical prediction $`p_{++}^{qm}`$ for these probabilities at arbitrary analyzer settings $`\alpha `$ (Alice) and $`\beta `$ (Bob) measuring the $`\mathrm{\Psi }^{}`$ state is
$$p_{++}^{qm}(\alpha ,\beta )=\frac{1}{2}\mathrm{sin}^2\left(\alpha -\beta \right).$$
(3)
The analyzer settings $`\chi =-30^{}`$, $`\psi =0^{}`$, and $`\omega =30^{}`$ lead to a maximum violation of Wigner’s inequality (2):
$`p_{++}^{qm}(-30^{},0^{})+p_{++}^{qm}(0^{},30^{})-p_{++}^{qm}(-30^{},30^{})=`$ (4)
$`=\frac{1}{8}+\frac{1}{8}-\frac{3}{8}=-\frac{1}{8}<0.`$ (5)
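The arithmetic of the maximal violation is easy to reproduce; the following few lines are our own check, using only the quantum prediction of Eq. (3):

```python
import numpy as np

def p_pp(alpha_deg, beta_deg):
    """Quantum prediction for the ++ coincidence probability, Eq. (3)."""
    a, b = np.radians(alpha_deg), np.radians(beta_deg)
    return 0.5 * np.sin(a - b) ** 2

chi, psi, omega = -30.0, 0.0, 30.0
lhs = p_pp(chi, psi) + p_pp(psi, omega) - p_pp(chi, omega)
print(lhs)   # -0.125 = -1/8, violating the local-realistic bound lhs >= 0
```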
As Wigner’s inequality is derived assuming perfect anticorrelations, which are only approximately realized in any practical situation, one should be cautious in applying it to test the security of a cryptography scheme. When the deviation from perfect anticorrelations is substantial, Wigner’s inequality has to be replaced by an adapted version .
In order to implement quantum key distribution, Alice and Bob each vary their analyzers randomly between two settings, Alice: $`-30^{},0^{}`$ and Bob: $`0^{},30^{}`$ (Figure 1a). Because Alice and Bob operate independently, four possible combinations of analyzer settings will occur, of which the three oblique settings allow a test of Wigner’s inequality and the remaining combination of parallel settings (Alice$`=0^{}`$ and Bob$`=0^{}`$) allows the generation of keys via the perfect anticorrelations, where either Alice or Bob has to invert all bits of the key to obtain identical keys.
If the measured probabilities violate Wigner’s inequality, then the security of the quantum channel is ascertained, and the generated keys can readily be used. This scheme is an improvement on the Ekert scheme which uses the CHSH inequality and requires three settings of Alice’s and Bob’s analyzers for testing the inequality and generating the keys. From the resulting nine combinations of settings, four are taken for testing the inequality, two are used for building the keys and three are not used at all. In our scheme, however, each user only needs two analyzer settings and the detected photons are used more efficiently, thus allowing a significantly simplified experimental implementation of the quantum key distribution.
As a second quantum cryptography scheme we implemented a variant of the BB84 protocol with entangled photons, as proposed in Reference . In this case, Alice and Bob randomly vary their analysis directions between $`0^{}`$ and $`45^{}`$ (Figure 1b). Alice and Bob observe perfect anticorrelations of their measurements whenever they happen to have parallel oriented polarizers, leading to bitwise complementary keys. Alice and Bob obtain identical keys if one of them inverts all bits of the key. Polarization entangled photon pairs offer a means to approximate a single photon situation. Whenever Alice makes a measurement on photon $`A`$, photon $`B`$ is projected into the orthogonal state which is then analyzed by Bob, or vice versa. After collecting the keys, Alice and Bob authenticate their keys by openly comparing a small subset of their keys and evaluating the bit error rate.
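A toy simulation may help to visualize the sifting step of this entangled-photon BB84 variant. The sketch below is our own illustration (not the software used in the experiment) and idealizes the source: matching bases give perfect anticorrelation, while for the $`45^{}`$ relative angle the outcomes are completely uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
alice_basis = rng.integers(0, 2, n)          # 0 -> 0 deg, 1 -> 45 deg
bob_basis   = rng.integers(0, 2, n)
alice_bit   = rng.integers(0, 2, n)          # Alice's outcome is intrinsically random

# Singlet state: parallel analyzers give perfect anticorrelation; for the
# 45 deg relative angle the outcomes are completely uncorrelated (sin^2(45 deg) = 1/2).
bob_bit = np.where(alice_basis == bob_basis,
                   1 - alice_bit,
                   rng.integers(0, 2, n))

keep = alice_basis == bob_basis               # sifting: keep events with matching bases
key_alice = alice_bit[keep]
key_bob   = 1 - bob_bit[keep]                 # Bob inverts his sifted bits
print(keep.mean(), np.mean(key_alice != key_bob))   # ~0.5 sifted fraction, 0.0 error rate
```

About half of the pairs survive the basis comparison, and in this idealized, noise-free setting the sifted keys agree exactly; in the real system the residual disagreements define the quantum bit error rate.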
The experimental realization of our quantum key distribution system is sketched in Figure 2. Type-II parametric down-conversion in $`\beta `$-barium borate (BBO), pumped with an argon-ion laser working at a wavelength of $`351`$ nm and a power of $`350`$ mW, leads to the production of polarization entangled photon pairs at a wavelength of $`702`$ nm. The photons are each coupled into $`500`$ m long optical fibers and transmitted to Alice and Bob respectively, who are separated by $`360`$ m.
Alice and Bob both have Wollaston polarizing beam splitters as polarization analyzers. We will associate a detection of parallel polarization ($`+1`$) with the key bit 1 and orthogonal detection ($`-1`$) with the key bit 0. Electro-optic modulators in front of the analyzers rapidly switch (rise time $`<15`$ ns, minimum switching interval $`100`$ ns) the axis of the analyzer between two desired orientations, controlled by quantum random signal generators . These quantum random signal generators are based on the quantum mechanical process of splitting a beam of photons and have a correlation time of less than $`100`$ ns.
The photons are detected in silicon avalanche photo diodes . Time interval analyzers on local personal computers register all detection events as time stamps together with the setting of the analyzers and the detection result. A measurement run is initiated by a pulse from a separate laser diode sent from the source to Alice and Bob via a second optical fiber. Only after a measurement run is completed, Alice and Bob compare their lists of detections to extract the coincidences. In order to record the detection events very accurately, the time bases in Alice’s and Bob’s time interval analyzers are controlled by two rubidium oscillators. The stability of each time base is better than 1 ns for one minute. The maximal duration of a measurement is limited by the amount of memory in the personal computers (typically one minute).
Overall our system has a measured total coincidence rate of $`1700\mathrm{s}^{-1}`$, and a singles rate of $`35000\mathrm{s}^{-1}`$. From this, one can estimate the overall detection efficiency of each photon path to be 5 % and the pair production rate to be $`7\times 10^5\mathrm{s}^{-1}`$. Our system is very immune against a beam splitter attack because the ratio of two-pair events is only $`3\times 10^{-3}`$, where a two-pair event is the emission of two pairs within the coincidence window of $`4`$ ns. The coincidence window in our experiment is limited by the time resolution of our detectors and electronics, but in principle it could be reduced to the coherence time of the photons, which is usually of the order of picoseconds.
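The quoted rates are mutually consistent, as a short back-of-the-envelope check shows (our own estimate, using the $`4`$ ns coincidence window from the text):

```python
coincidences = 1700.0        # measured coincidence rate, s^-1
singles      = 35000.0       # singles rate per arm, s^-1
window       = 4e-9          # coincidence window, s

efficiency = coincidences / singles      # ~0.05 detection efficiency per photon path
pair_rate  = singles / efficiency        # ~7e5 pairs per second produced at the source
two_pair   = pair_rate * window          # ~3e-3 chance of a second pair in the window
print(efficiency, pair_rate, two_pair)
```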
In realizing the quantum key distribution based on Wigner’s inequality, Alice’s analyzer switches randomly with equal frequency between $`-30^{}`$ and $`0^{}`$, and Bob’s analyzer between $`0^{}`$ and $`30^{}`$. After a measurement, Alice and Bob extract the coincidences for the combinations of settings $`(-30^{},30^{})`$, $`(-30^{},0^{})`$ and $`(0^{},30^{})`$, and calculate each probability. E.g. the probability $`p_{++}(0^{},30^{})`$ is calculated from the numbers of coincident events $`C_{++}`$, $`C_{+-}`$, $`C_{-+}`$, $`C_{--}`$ measured for this combination of settings by
$$p_{++}(0^{},30^{})=\frac{C_{++}}{C_{++}+C_{+-}+C_{-+}+C_{--}}.$$
(6)
We observed in our experiment that the left hand side of inequality (2) evaluated to $`-0.112\pm 0.014`$. This violation of (2) is in good agreement with the prediction of quantum mechanics and ensures the security of the key distribution. Hence the coincident detections obtained at the parallel settings $`(0^{},0^{})`$, which occur in a quarter of all events, can be used as keys. In the experiment Alice and Bob established a raw key of $`2162`$ bits at a rate of $`420`$ bits/second, and observed a quantum bit error rate of $`3.4`$ %.
In our realization of the BB84 scheme, Alice’s and Bob’s analyzers both switch randomly between $`0^{}`$ and $`45^{}`$. After a measurement run, Alice and Bob extract the coincidences measured with parallel analyzers, $`(0^{},0^{})`$ and $`(45^{},45^{})`$, which occur in half of the cases, and generate the raw keys. Alice and Bob collected $`80000`$ bits of key at a rate of $`850`$ bits/second, and observed a quantum bit error rate of $`2.5`$ %, which ensures the security of the quantum channel.
For correcting the remaining errors while maintaining the secrecy of the key, various classical error correction and privacy amplification schemes have been developed . We implemented a simple error reduction scheme requiring only little communication between Alice and Bob. Alice and Bob arrange their keys in blocks of $`n`$ bits and evaluate the bit parity of the blocks (a single bit indicating an odd or even number of ones in the block). The parities are compared in public, and the blocks with agreeing parities are kept after discarding one bit per block . Since parity checks only reveal odd occurrences of bit errors, a fraction of errors remains. The optimal block length $`n`$ is determined by a compromise between key losses and remaining bit errors. For a bit error rate $`p`$ the probability for $`k`$ wrong bits in a block of $`n`$ bits is given by the binomial distribution $`P_n(k)=\left(\genfrac{}{}{0pt}{}{n}{k}\right)p^k(1-p)^{n-k}`$.
Neglecting terms with three or more errors and accounting for the loss of one bit per agreeing parity, this algorithm has an efficiency $`\eta (n)=(1-P_n(1))(n-1)/n`$, defined as the ratio between the key sizes after and before the parity check. Finally, under the same approximation as above, the remaining bit error rate is $`p^{\prime }=(1-P_n(0)-P_n(1))(2/n)`$. Our key has a bit error rate $`p=2.5`$ %, for which $`\eta (n)`$ is maximized at $`n=8`$ with $`\eta (8)=0.7284`$, resulting in $`p^{\prime }=0.40`$ %. Hence, from $`80000`$ bits of raw key with a quantum bit error rate of $`2.5`$ %, Alice and Bob use $`10`$ % of the key for checking the security and the remaining $`90`$ % of the key to distill $`49984`$ bits of error corrected key with a bit error rate of $`0.4`$ %. Finally, Alice transmits a $`43200`$ bit image to Bob via the One-Time-Pad protocol, utilizing a bitwise XOR combination of message and key data (Figure 3).
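The block length optimization can be reproduced directly from the binomial formula above; the snippet below (our own check) recovers $`n=8`$, $`\eta (8)=0.7284`$ and $`p^{\prime }\approx 0.4`$ %, and also illustrates the bitwise XOR used in the One-Time-Pad step.

```python
from math import comb

def P(n, k, p):
    """Binomial probability of k errors in a block of n bits."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def eta(n, p):
    """Fraction of key retained: blocks with odd error counts are discarded, one bit lost per kept block."""
    return (1 - P(n, 1, p)) * (n - 1) / n

def p_res(n, p):
    """Residual error rate from undetected (even) error patterns, keeping terms up to two errors."""
    return (1 - P(n, 0, p) - P(n, 1, p)) * 2 / n

p = 0.025
best_n = max(range(2, 30), key=lambda n: eta(n, p))
print(best_n, eta(best_n, p), p_res(best_n, p))   # 8, ~0.728, ~0.004

# One-Time-Pad: XOR with the key encrypts, XOR with the same key again decrypts.
msg, key = 0b10110010, 0b01101100
assert (msg ^ key) ^ key == msg
```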
In this letter we presented the first full implementation of entangled state quantum cryptography. All the equipment of the source and of Alice and Bob has proven to operate outside shielded lab-environments with a very high reliability. While further practical and theoretical investigations are still necessary, we believe that this work demonstrates that entanglement based cryptography can be tomorrow’s technology.
This work was supported by the Austrian Science Foundation FWF (Projects No. S6502, S6504 and F1506), the Austrian Academy of Sciences, and the TMR program of the European Commission (Network contract No. ERBFMRXCT96-0087).
## 1 Introduction
HERA-B is designed for studies of B-physics at DESY’s HERA proton-lepton storage ring in Hamburg, Germany. The physics program calls for measuring production and decay properties of B and B<sub>s</sub> mesons with emphasis on CP violation, particularly in the B<sup>o</sup> $``$ J/$`\psi `$ K$`{}_{s}{}^{}{}_{}{}^{o}`$ decay channel.
Collisions of protons in the 920 GeV beam of HERA with a fixed target consisting of up to 8 wires surrounding the beam produce B-mesons whose decay products are measured in the HERA-B spectrometer. HERA-B will operate with a 40 MHz collision rate which, given HERA’s 10 MHz bunch-crossing rate, implies an average of 4 interactions per bunch crossing. A sophisticated multi-level triggering system is needed to reduce the overwhelming background from inelastic proton-nucleon collisions to a rate suitable for transfer of data to mass storage.
As of this writing (Dec. 99), the experimental apparatus is nearing completion and the triggering system is being commissioned. This paper will briefly describe the experiment and summarize the current status of the major detector components. A more detailed account may be found in reference and subsystem-specific papers in the same volume.
## 2 The detector
The apparatus is shown in Fig. 1. The proton beam enters from the right on the sketch and first sees the target wires then continues its journey through the spectrometer inside the 500 $`\mu `$m thin aluminum beam pipe. Immediately downstream of the target, one finds the eight super-layers of the vertex detector which occupy the first two meters of the detector. A thin vacuum window is located just before the last station. The main tracking system starts immediately afterwards as does the large aperture main spectrometer magnet. The super-layers of the tracker are divided into inner and outer regions. The inner tracker covers the first 20 cm from beam center. The outer tracker takes over from there and extends the coverage to 250 mrad horizontally and 160 mrad vertically.
Three of the tracking stations shown in the magnet are a combination of cathode pad chambers (outer region) and gas pixel chambers (inner region) which will be used in the trigger for identifying high-p<sub>t</sub> hadrons. A RICH counter occupies the region between 8.5 m and 11.5 m. This is followed by another tracking layer, a transition radiation detector which supplements electron identification in the low-angle region, a tracking layer, then an electromagnetic calorimeter. The calorimeter is followed by a muon detector which is divided into three iron/concrete filters and 4 tracking stations. The muon trackers are divided into inner and outer sections in the manner of the main tracker.
### 2.1 Vertex detector system
A horizontal section of the VDS is sketched in Fig. 2. (A vertical section would look nearly the same.) Seven super-layers (an eighth layer, not shown, is positioned just downstream of the exit window on an immovable mount) are each comprised of four “quadrants”, each of which holds two double-sided silicon wafers. The wafers are configured as 50 $`\mu `$m-pitch strip detectors with strips oriented at $`\pm 2.5^o`$ angles to the vertical or horizontal. The detectors are mounted in thin movable aluminum caps (so-called “Roman pots”) which can be moved away from the beam to allow for filling. The pots are evacuated but separated from the primary machine vacuum. When in their final positions, the outer perimeter of the 4 detectors of one super-layer describes a square centered on the beam, with a square hole to allow passage of the beam.
Status: 85% of the detector has been operating routinely since summer of this year. The remainder is being installed in the present shutdown. The detector is performing well with signal to noise ratios of 20 or better.
### 2.2 Inner tracker
The original design called for a “classic” micro-strip gas chamber (MSGC) . Early tests with x-rays showed that the chambers would withstand the anticipated dose rates for HERA-B, but when tested in a hadron beam the chambers sustained considerable anode damage due to sparking in a matter of hours. The problem was traced to heavy ions produced when charged tracks traverse the cathode/anode wafer. After some delay, a new design was found which uses a “GEM” foil (for gas electron multiplier) positioned between the drift electrode and the anode/cathode plane to provide additional gas amplification, thus lowering the needed amplification in the region of the anode. Dose-related effects are still observed; nonetheless, the solution fulfils our requirements.
Status: Production and installation are underway, with completion scheduled for February of next year. Several chambers have been installed, commissioned, and are operating on a regular basis. First analysis of data indicates that the chambers are meeting specifications. The readout electronics currently in use do not permit their foreseen use in the tracking phase of the first level trigger. The readout of stations needed for the first level trigger will be upgraded in time for the running period of 2001.
### 2.3 Outer tracker
The outer tracker is built from honeycomb drift modules with hexagonal cells of 5 mm and 10 mm. The cathodes are made from a carbon-impregnated resin, “Pokalon-C”. In situ tests of the chambers in 1997 revealed that the original design would not withstand the high radiation environment required for HERA-B operation and forced considerable additional R&D and substantial delays. Solutions to the various aging problems have since been found. The main departures from the original design are that the Pokalon-C is now gold-coated and a change of gas mixture was made, from Ar/CH<sub>4</sub>/CF<sub>4</sub> to Ar/CO<sub>2</sub>/CF<sub>4</sub>.
Status: All but one super-layer is now installed and operating routinely. Dead and noisy channel counts are of order 1%. The missing super-layer is scheduled for installation in January. The alignment and calibration is in progress.
### 2.4 RICH detector
Cherenkov light produced in the 2.5 m long C<sub>4</sub>F<sub>10</sub> radiator gas is focused by an array of spherical mirrors onto focal planes of lens systems which then focus the light to an array of multi-anode phototubes.
Status: The detector is fully commissioned and in routine operation. The average photon yield for $`\beta `$ = 1 rings is 35, as expected.
### 2.5 Electromagnetic calorimeter
The calorimeter is of Shashlik design consisting of cells of 11 cm $`\times `$ 11 cm transverse dimensions which are stacked to form a wall some 6 m in length and 4 m in height. The calorimeter is sub-divided into 3 regions. The cells of the innermost region use tungsten as a radiator and are viewed by a 5 $`\times `$ 5 array of phototubes. Both middle and outer sections have lead radiators. The cells of the middle region are viewed by 4 phototubes, and those of the outer region are serviced by a single phototube.
Status: The calorimeter is fully installed and equipped with phototubes. The inner and middle regions are also equipped with read out electronics. The read out is scheduled for completion by end January, 2000. The calibration is continually being refined and presently is about 7%.
### 2.6 Muon detector
The muon detector consists of 4 superlayers. The first 40 cm of each superlayer is covered by gas pixel chambers. The outer regions of the first two superlayers are covered by 3 double layers of tube chambers oriented at 0<sup>o</sup> and $`\pm `$20<sup>o</sup> with respect to the vertical. The third and fourth superlayers are of similar design but have, in addition, cathodes segmented into pads which are read out separately, for use in triggering.
Status: The system is fully installed and operational. Occupancies are as expected and coincidence rates between the third and fourth superlayers are also as expected.
## 3 Trigger and data acquisition
### 3.1 First level trigger
The task of the first level trigger (FLT) is to reduce the 10 MHz event rate (40 MHz interaction rate) by a factor of 200 with a maximum delay of 1.2 $`\mu `$s. It works in three phases:
* Pretrigger: Pretriggers originate from three sources: coincidences of pads and pixels in the 3rd and 4th superlayers of the muon detector, high-p<sub>t</sub> clusters in the calorimeter, coincidence patterns between the 3 pad chambers in the magnet. The three pretrigger systems produce “messages” which define a geometrical area (region-of-interest, or RoI) and an estimate of momentum. When operating at design rates, several such RoIs are expected per event.
* Tracking: Messages from the pretriggers are routed to a parallel pipelined network of custom-built processors which attempt to track them through 4 of the 6 main tracker superlayers behind the magnet (and, for muon pretriggers, through the superlayers of the muon system). The processors map the superlayers geographically: each processor takes inputs from the 3 views of a contiguous region of a single superlayer. Messages are passed from processor to processor. In each processor, a search is made for hits inside the RoI and, when found, a new message is generated with refined track parameters and sent to the next processor (or processors when an RoI spans a boundary).
* Decision: Messages arriving at the furthest upstream superlayer are “tracks” with parameters determined with a typical accuracy of a single cell width in the outer tracker and 4 strip widths in the inner tracker. These messages are collected in a single processor where they are sorted by event (at any given time, 100 events are being processed). A trigger decision is made based on the kinematics of single tracks and pairs of tracks. Events must be fully processed in less than 1.2 $`\mu `$s.
Status: The inner and middle regions of the calorimeter are equipped with pretrigger electronics. First operation began more than a year ago and now, they are in routine operation. Completion is scheduled for end-January, 2000. The muon pretriggers have been tested with 15% coverage. Preliminary analysis indicates that the system is performing up to expectations. Completion is scheduled for mid-February. The high-p<sub>t</sub> system is under construction. Installation and commissioning will begin in January.
Production of the track finding units is nearing completion and installation is underway. A slice has been installed and exercised, analysis of data taken is underway. Full system commissioning will begin in January. If all goes well, the system should be at full power by March, 2000.
The trigger decision unit has been in routine use since summer of this year. Messages from the calorimeter pretriggers are routed directly to the unit which sorts them by event and triggers when more than two messages for one event are received.
### 3.2 Second and third level triggers, data acquisition
The SLT is designed to work at an input rate of 50 kHz and supply a suppression of at least 100 and up to 1000 for trigger modes in which a detached secondary vertex is required. The 3rd level trigger is intended to provide a suppression of a factor of 10 on trigger types for which RoI-based cuts do not provide sufficient suppression – e.g. for events with no detached secondary vertex.
The SLT works on RoIs defined by the FLT, first gathering all hits within the RoI from all tracking layers behind the magnet and performing a fit. Successfully fit tracks are projected to and tracked through the vertex detector. At the end of the tracking process, a vertex fit is performed on track pairs. Also, the impact parameters of tracks relative to the target wires are estimated. Trigger decisions are made based on the outcome of the vertex fit and/or the track impact parameters.
The 2nd/3rd level trigger and data acquisition are integrated into a single system, implemented as a 240-node farm of standard Pentium processors running the Linux operating system, a high bandwidth switch, and a system of buffers (the “second level buffer”) which store event data while the second level trigger decision is being made. Both switch and buffers are built from the same DSP-based board.
Upon acceptance by the first level, an event is transfered to the 2nd level buffer and a processor node is assigned. The selected node performs the 2nd level algorithm, requesting any needed data by sending messages to the appropriate buffer module. Data of events passing 2nd level cuts are read from the 2nd level buffers into processor memory and the 3rd level trigger is performed. Events accepted at the third level are sent to the 4th level farm.
Status: The second level farm has been operating routinely for more than a year. The number of nodes in use currently stands at 80. All 240 nodes are installed and cabled to the switch. The installation will be finalized by end-January.
Portions of the 2nd level trigger algorithm have been exercised routinely in the last year. Events triggered at the 1st level as described in the previous section are transfered to the buffers. The assigned processor reads in the calorimeter data and searches for clusters with p<sub>t</sub> above 1 GeV. RoIs are generated and input into the vertex tracking code which requests hits from selected regions of the 2nd level buffers. Events for which at least one silicon track is found are transfered to mass storage.
The accumulated data were then analyzed by the offline analysis group. A plot of the mass of two silicon tracks (electron mass assumed) which match high-p<sub>t</sub> calorimeter clusters and have an electron signature in the calorimeter is shown in Fig. 3. A clear J/$`\psi `$ peak is seen.
### 3.3 4th level trigger
The 4th level trigger is primarily intended for full online event reconstruction. Like the 2nd level, it consists of a farm of Pentium processors running Linux. Unlike the 2nd level farm, it relies on standard ethernet technology for communication and data transfer. The design input rate is 50 Hz.
Status: In the last year, the event stream has been flowing through the 4th level farm on its way to mass storage. Several nodes are in use for doing partial reconstruction and data monitoring. The 200 node farm is complete and in use, in part for monitoring tasks and also for offline processing of data. The full reconstruction code exists and is being tuned in preparation for running on the completed farm in January.
## 4 Summary
HERA-B has suffered setbacks resulting primarily from unexpected aging effects in both inner and outer trackers. Solutions to these problems have been found and the spectrometer is nearing completion. In the meantime, considerable operational experience has been accrued in the running periods between monthly 3-day shutdowns for installation. After completion of the inner tracker in February, the detector will be ready for the start of the HERA-B physics program. The critical path is defined by the installation and commissioning of the first level trigger. We look forward to the start of data-taking for physics in February, 2000.
Four-dimensional gravity on a thick domain wall
## I Introduction
Recently it was discovered that four-dimensional gravity can be realized on a domain wall in five-dimensional space-time (For a review on domain walls see ). The concrete example studied in Ref. involves two regions of $`AdS_5`$ glued together over a four-dimensional Minkowski slice. The fluctuations of the gravitational field include a single normalizable zero mode, which gives rise to four-dimensional gravity on the domain wall. Since the ambient space is non-compact, there is a continuous spectrum of massive modes. Normally this results in five-dimensional gravity being restored, but if the ambient space is a suitable slice of $`AdS`$, this does not happen. The couplings of the extra modes to matter on the domain wall are sufficiently suppressed that integrating over all of them still gives a subleading contribution to the gravitational interaction between test masses on the domain wall. The original proposal was formulated in pure five-dimensional gravity. Various extensions to domain walls in gravity coupled to scalars , time dependent cosmological scenarios , higher dimensional embeddings , and models with mass gaps for the continuum modes have appeared in the literature. Several supergravity solutions related to the model of are known , but the original model does not have a supersymmetric extension .
In this note we consider a solution of five-dimensional gravity coupled to scalars, which also does not have a supersymmetric extension (we thank A. Linde and R. Kallosh for pointing this out to us). Our solution can be interpreted as a thick domain wall interpolating between two asymptotic $`AdS_5`$ spaces, which makes it a non-singular version of the setup in . It is interesting to investigate how four-dimensional gravity arises on non-singular domain walls, since they are a more realistic implementation of the scenario in Ref. .
Thick domain walls are easy to construct, but all explicit solutions we are aware of are too complicated for analytical computations. Our solution is simple enough that the equivalent quantum mechanics problem can be given in closed form. The potential in the equivalent quantum mechanics problem has the form of a shallow well separated from the asymptotic region by a thick potential barrier. A natural question is whether there are resonances (or quasi-stationary states) in such potentials. From the point of view of a four-dimensional observer, these states would give a quasi-discrete spectrum of low mass KK states with unsuppressed couplings to matter on the domain wall. This should be contrasted with the couplings of non-resonant modes which are small since they have to tunnel through the potential barrier. If such resonant states exist, they change the physics a four-dimensional observer sees. In order to address questions of this type it is convenient to have a closed form expression for the quantum mechanics problem. In this note we discuss one such example and find that there are no resonances of the type described above. Thus we conclude that the mechanism for localizing gravity on a thick domain wall is exactly the same as in the thin wall case of .
In section II we give a quick review of gravity coupled to scalars and outline a method for finding solutions following . We use it to find a particularly simple thick domain wall solution. In section III we study fluctuations of the gravitational field around our solution and describe how bulk modes interact with matter on the domain wall.
## II Gravity coupled to scalars
In order to ensure four-dimensional Poincare invariance we assume that the metric takes the form
$$ds^2=e^{2A(r)}\left(dx_0^2-\underset{i=1}{\overset{3}{\sum }}dx_i^2\right)-dr^2.$$
(1)
The action for five-dimensional gravity coupled to a single real scalar reads
$$S=\int d^4x\,dr\sqrt{g}\left(-\frac{1}{4}R+\frac{1}{2}(\partial \varphi )^2-V(\varphi )\right),$$
(2)
and the equations of motion following from this action are
$`\varphi ^{\prime \prime }+4A^{\prime }\varphi ^{\prime }`$ $`=`$ $`{\displaystyle \frac{\partial V(\varphi )}{\partial \varphi }}`$ (3)
$`A^{\prime \prime }`$ $`=`$ $`-{\displaystyle \frac{2}{3}}\varphi ^{\prime 2}`$ (4)
$`A^{\prime 2}`$ $`=`$ $`-{\displaystyle \frac{1}{3}}V(\varphi )+{\displaystyle \frac{1}{6}}\varphi ^{\prime 2}.`$ (5)
The prime denotes differentiation with respect to $`r`$, and we have assumed that both $`\varphi `$ and $`A`$ are functions of $`r`$ only. If the potential for the scalar is given by
$$V(\varphi )=\frac{1}{8}\left(\frac{\partial W(\varphi )}{\partial \varphi }\right)^2-\frac{1}{3}W(\varphi )^2,$$
(6)
setting
$$\varphi ^{\prime }=\frac{1}{2}\frac{\partial W(\varphi )}{\partial \varphi },\qquad A^{\prime }=-\frac{1}{3}W(\varphi )$$
(7)
yields a solution. This very useful first order formalism for obtaining solutions to the equations of motion appeared first in the study of supergravity domain walls and was generalized in to include non-supersymmetric domain walls in various dimensions.
To study domain wall solutions we have to choose a scalar potential with several minima. A domain wall solution is characterized by a function $`\varphi (r)`$ that asymptotes to different minima of the potential as $`r\to \pm \mathrm{\infty }`$. In general $`\varphi (r)`$ is smooth and contains a length scale that corresponds to the thickness of the wall.
It is straightforward to find such solutions to the equations of motion. Some examples are discussed in and many others can be constructed along the same lines. However most of these solutions yield equations that are too complicated for an analytical treatment in closed form. In this note we present a very specific example of a thick $`AdS`$ domain wall, where most of the calculations can be done analytically.
We choose a superpotential
$$W(\varphi )=3bc\mathrm{sin}\left(\sqrt{\frac{2}{3b}}\varphi \right),$$
(8)
which gives rise to
$`V(\varphi )`$ $`=`$ $`{\displaystyle \frac{3bc^2}{8}}\left((1-4b)+(1+4b)\mathrm{cos}\left(\sqrt{{\displaystyle \frac{8}{3b}}}\varphi \right)\right),`$ (9)
$`A(r)`$ $`=`$ $`-b\mathrm{ln}\left(2\mathrm{cosh}(cr)\right),`$ (10)
$`\varphi (r)`$ $`=`$ $`\sqrt{6b}\mathrm{arctan}\left(\mathrm{tanh}\left({\displaystyle \frac{cr}{2}}\right)\right),`$ (11)
where we have set to zero an integration constant that corresponds to a shift in $`A(r)`$. For $`r\to \pm \mathrm{\infty }`$ we have $`A(r)\simeq -bc|r|`$, so the metric, Eq. (1), reduces to $`AdS`$ far from the domain wall at $`r=0`$. Generic $`AdS`$ domain wall solutions have two free parameters, one for the asymptotic $`AdS`$ curvature and another for the width of the wall. In the solution above the $`AdS`$ curvature is given by $`bc`$ and the thickness of the wall is parametrized by $`c`$.
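As a cross-check of the signs in Eqs. (7), (10) and (11) as written above, one can verify numerically that the profiles indeed satisfy the first-order equations for the superpotential (8). The short script below is our own sanity check (the sample values $`b=1`$, $`c=2`$ are arbitrary):

```python
import sympy as sp

r, p = sp.symbols('r varphi')
b, c = sp.Integer(1), sp.Integer(2)                    # arbitrary sample values

W   = 3*b*c*sp.sin(sp.sqrt(sp.Rational(2, 3)/b)*p)     # superpotential, Eq. (8)
phi = sp.sqrt(6*b)*sp.atan(sp.tanh(c*r/2))             # Eq. (11)
A   = -b*sp.log(2*sp.cosh(c*r))                        # Eq. (10)

# Residuals of the first-order equations (7): phi' - (1/2) dW/dphi  and  A' + (1/3) W
res1 = sp.diff(phi, r) - sp.diff(W, p).subs(p, phi)/2
res2 = sp.diff(A, r) + W.subs(p, phi)/3

for r0 in (-3, -1, 0, 1, 3):
    print(sp.N(res1.subs(r, r0)), sp.N(res2.subs(r, r0)))   # both residuals ~0 at every test point
```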
## III Metric fluctuations
In gravity coupled to scalars one cannot discuss fluctuations of the metric around the background given in the previous section without including fluctuations of the scalar as well. The general treatment of these fluctuations is rather complicated, since one has to solve a complicated system of coupled differential equations. However, there is a sector of the metric fluctuations that decouples from the scalars , and these fluctuations can be treated analytically.
For the metric fluctuations we adopt a gauge such that the perturbed metric takes the form
$$ds^2=e^{2A(r)}(\eta _{ij}+h_{ij})dx^idx^j-dr^2,$$
(12)
where $`i=0,\mathrm{\dots },3`$ and $`\eta _{ij}=\mathrm{diag}(1,-1,-1,-1)`$. It is straightforward but tedious to obtain the coupled equations of motion for the scalar fluctuation, $`\stackrel{~}{\varphi }`$, and $`h_{ij}`$. In order to prove stability of the solution given in the previous section, we would have to show that there are no negative mass solutions to these equations. Unfortunately, this is a rather daunting task, and we will not attempt it here.
The transverse and traceless part of the metric fluctuation, $`\overline{h}_{ij}`$, decouples from the scalar and satisfies a much simpler equation of motion
$$\left(\partial _r^2+4A^{\prime }\partial _r-e^{-2A}\mathrm{\Box }\right)\overline{h}_{ij}=0,$$
(13)
where $`\mathrm{\Box }`$ is the four-dimensional wave operator. In it proved useful to recast this equation in a form similar to Schrödinger’s equation. To that end we change coordinates to $`z=\int e^{-A(r)}dr`$. The metric takes the form
$$ds^2=e^{2A(z)}\left(dx_0^2-\underset{i}{\sum }dx_i^2-dz^2\right).$$
(14)
and the wave equation for the transverse traceless parts of $`h_{ij}`$ reads
$$\left(\partial _z^2+3A^{\prime }(z)\partial _z-\mathrm{\Box }\right)\overline{h}_{ij}=0.$$
(15)
Making the ansatz $`\overline{h}_{ij}=e^{ikx}e^{-3A/2}\psi _{ij}(z)`$, this equation simplifies further to
$$\left(-\partial _z^2+V_{QM}(z)-k^2\right)\psi (z)=0,$$
(16)
where we have dropped the indices on $`\psi (z)`$ and introduced the potential $`V_{QM}=\frac{9}{4}A^{\prime }(z)^2+\frac{3}{2}A^{\prime \prime }(z)`$.
Using Eq. (10) and the definition of the new variable $`z`$ we find $`z=\int dr\,2^b\mathrm{cosh}^b(cr)`$. For integer $`b`$ these integrals are easy to do, but if $`b`$ is even the inversion is never possible in closed form and for odd $`b`$ inverting the expression for $`z`$ requires solving a degree $`b`$ polynomial equation. In the following we will set $`b=1`$. The $`b=3`$ case is also tractable, but the resulting equations are much more complicated and the qualitative behavior of the solution is the same as in the $`b=1`$ case.
Note that setting $`b=1`$ leaves only one free parameter, $`c`$, which controls the $`AdS`$ curvature and the thickness of the domain wall. With $`b=1`$, we cannot take the thin wall limit without sending the $`AdS`$ curvature to infinity at the same time. Since we are interested in studying thick walls, this is not a serious handicap.
For the $`b=1`$ solution we have $`A(z)=-\mathrm{ln}(\sqrt{4+c^2z^2})`$, which gives
$$V_{QM}=\frac{3c^2}{4}\frac{(5c^2z^2-8)}{(4+c^2z^2)^2}.$$
(17)
The spectrum of eigenvalues, $`k^2`$, gives the spectrum of graviton masses a four-dimensional observer at (or near) $`z=0`$ sees. In order to have a four-dimensional graviton as in , the lowest eigenfunction of Eq. (16) should have eigenvalue $`k^2=0`$. We find one normalizable eigenfunction given by $`\psi _0=N/(4+c^2z^2)^{\frac{3}{4}}`$, where $`N`$ is a normalization constant. $`\psi _0`$ is the lowest energy eigenfunction, because it has no zeros. Thus there is no instability from transverse traceless modes with $`k^2<0`$.
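It is simple to confirm that this $`\psi _0`$ is annihilated by the Schrödinger operator of Eq. (16) with the potential (17); the check below is our own (the value $`c=3/2`$ is an arbitrary sample):

```python
import sympy as sp

z = sp.symbols('z')
c = sp.Rational(3, 2)                                        # arbitrary sample wall parameter
V    = sp.Rational(3, 4)*c**2*(5*c**2*z**2 - 8)/(4 + c**2*z**2)**2   # Eq. (17)
psi0 = (4 + c**2*z**2)**sp.Rational(-3, 4)                   # candidate zero mode

residual = -sp.diff(psi0, z, 2) + V*psi0                     # should vanish for k^2 = 0
for z0 in (0, sp.Rational(1, 2), 1, 3):
    print(sp.N(residual.subs(z, z0)))                        # ~0 at every test point
```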
Since the potential vanishes for large $`z`$, this is the only bound state. The wave functions for $`k^2>0`$ become plane waves at infinity. We were not able to find exact solutions of Eq. (16) for $`k^2>0`$, but qualitatively the process of localizing gravity on the domain wall is quite clear.
As in , any $`k^2>0`$ is allowed, so the masses of the extra modes are continuous from zero. Normally this indicates that gravity is actually five-dimensional, but for the thin $`AdS`$ domain wall the couplings of these modes to matter on the domain wall are suppressed enough that they do not spoil four-dimensional gravity. We now discuss this somewhat mysterious behavior in our non-singular setup. Let $`V_{max}\sim c^2`$ be the maximum of the potential shown in Fig. 1. For modes with energies (masses) $`k^2\gg V_{max}`$, the potential is a small perturbation. These modes couple to matter on the domain wall with regular strength, but since they are heavy, they yield subleading corrections to four-dimensional gravity mediated by the zero mode.
Modes with $`k^2\lesssim V_{max}`$ could potentially exhibit a resonance structure. Such resonant modes would have a disproportionately large amplitude in the interior of the domain wall, while the amplitude of non-resonant modes should be much larger outside than in the interior. Since we do not have a solution for the continuum modes in closed form, we are forced to investigate this question numerically. This analysis is simplified greatly by having a closed expression for the potential Eq. (17).
Numerically integrating Eq. (16) for various values of $`k^2`$ we find no evidence for a resonance structure in the spectrum of the continuum modes. Fig. 2 shows a mode with moderate $`k^2`$ and one with very low $`k^2`$. Modes with intermediate values of $`k^2`$ smoothly interpolate between the two solutions shown. We have adopted the unphysical normalization, $`\psi (0)=1`$. To convert to the physical normalization, we use plane wave normalization for the wave functions at large $`z`$. With this normalization the wave function for small $`k^2`$ is strongly suppressed at the origin. Since the coupling of a mode with mass $`k^2`$ to matter located at or near $`z=0`$ is given by the amplitude of the properly normalized wave function there, the modes with small $`k^2`$ are essentially decoupled from the physics on the domain wall. Furthermore, the wave function at the origin decreases monotonically as we lower $`k^2`$. Just as in the thin wall case of , there is no evidence for a resonance structure in the continuum modes. This is true both for the $`b=1`$ and the $`b=3`$ solution, but it is not clear if this is a generic feature of thick $`AdS`$ domain walls, or a special property of our simple examples. In any event we conclude that there are thick domain walls on which the four-dimensional effective theory has a spectrum that is qualitatively very similar to the thin wall case.
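A minimal version of such a scan can be written down directly from Eqs. (16) and (17). The sketch below is our own illustration (with the wall parameter set to $`c=1`$ and an even-mode initial condition); it only reproduces the qualitative trend, namely that the properly normalized amplitude at the wall shrinks as $`k^2`$ is lowered and shows no resonant enhancement:

```python
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0
V = lambda z: 0.75 * c**2 * (5 * c**2 * z**2 - 8) / (4 + c**2 * z**2) ** 2   # Eq. (17)

def amplitude_ratio(k2, z_max=400.0):
    """Integrate Eq. (16) outward with psi(0)=1, psi'(0)=0 (even mode) and compare
    the asymptotic plane-wave envelope with the value at the wall."""
    rhs = lambda z, y: [y[1], (V(z) - k2) * y[0]]
    sol = solve_ivp(rhs, (0.0, z_max), [1.0, 0.0], max_step=0.05, rtol=1e-8)
    psi, dpsi = sol.y[0][-1], sol.y[1][-1]
    amp_far = np.sqrt(psi**2 + dpsi**2 / k2)     # envelope of the asymptotic oscillation
    return 1.0 / amp_far                         # value at z = 0 for unit asymptotic amplitude

for k2 in (1e-4, 1e-2, 1.0):
    print(k2, amplitude_ratio(k2))               # the ratio drops steadily as k^2 -> 0
```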
It is interesting to see how our solution reduces to the thin wall solution of . We have only one parameter that controls both the thickness of the wall and the $`AdS`$ curvature. The limit $`c\to \mathrm{\infty }`$ sends the width of the wall to zero and the $`AdS`$ curvature to infinity. Clearly classical gravity is not adequate to describe the physics in this limit, but as a consistency check we can still compare Eq. (16) to the corresponding equation in . We expect these equations to agree in the high curvature limit. We find for $`z\ne 0`$
$$V_{QM}(z)=\frac{15}{4z^2}.$$
(18)
This agrees with the large curvature limit of the potential given in for $`z\ne 0`$, and solving Eq. (16) with this potential yields the Bessel functions found in . We cannot check that the singular piece in the potential of comes out correctly, because that requires first taking the width of the wall to zero and then sending the curvature of $`AdS`$ to infinity. Our solution only allows us to take a special correlated limit, so we do not expect the singular part to agree. Finally, in our setup modes with masses up to $`m_{max}\sim c`$ are suppressed on the domain wall. $`m_{max}`$ becomes infinitely large in the limit $`c\to \mathrm{\infty }`$, so we recover uncorrected four-dimensional gravity. Correspondingly, we find that the corrections from bulk modes in the thin wall solution of vanish as the $`AdS`$ curvature goes to infinity.
###### Acknowledgements.
It is a pleasure to thank Josh Erlich, Yael Shadmi, Yuri Shirman, and especially Lisa Randall for helpful comments and for encouraging me to write this paper. I would also like to thank the ITP at UCSB for hospitality while this work was completed. This work was supported in part by DOE grants #DF-FC02-94ER40818 and #DE-FC-02-91ER40671 and NSF grant PHY94-07194.
Limits on Phase Separation for Two-Dimensional Strongly Correlated Electrons
## Abstract
From calculations of the high temperature series for the free energy of the two-dimensional $`t`$-$`J`$ model we construct series for ratios of the free energy per hole. The ratios can be extrapolated very accurately to low temperatures and used to investigate phase separation. Our results confirm that phase separation occurs only for $`J/t\gtrsim 1.2`$. Also, the phase transition into the phase separated state has $`T_c\simeq 0.25J`$ for large $`J/t`$.
The Hubbard and $`t`$-$`J`$ models, though widely used to investigate high temperature superconductors, remain controversial when doped away from one electron per site. The possibility that doped holes do not form a uniform phase but instead phase separate into distinct high and low density regions on the lattice is an important issue that has proved difficult to settle. Phase separation for physical choices of model parameters would imply more complicated models of 2D strongly correlated electrons are required to describe high temperature superconductors. Stability of a uniform density phase would leave open the possibility that simple models contain the relevant physics without additional terms.
While experiments have clearly observed phase separation in a few high-$`T_c`$ systems, notably oxygen overdoped La<sub>2</sub>CuO<sub>4+δ</sub> with mobile interstitial oxygen atoms, phase separation does not seem to be a universal feature of the cuprates. However, the mechanism of phase separation causes holes to feel a net attraction, a possible precursor for the formation of stripe phases or superconductivity. Finding an attractive interaction for holes in models that have predominantly strong repulsive interactions is not easy, and all known possibilities deserve thorough investigation.
To investigate the properties of phase separation we have calculated the high temperature series for the 2D $`t`$-$`J`$ model free energy to 10th order in inverse temperature. The Hamiltonian for the $`t`$-$`J`$ model is
$$H=-t\underset{\langle ij\rangle ,\sigma }{\sum }\left(c_{i\sigma }^{\dagger }c_{j\sigma }+c_{j\sigma }^{\dagger }c_{i\sigma }\right)+J\underset{\langle ij\rangle }{\sum }\left(𝐒_i𝐒_j-\frac{1}{4}n_in_j\right),$$
(1)
where the sums are over pairs of nearest neighbor sites and the Hilbert space is restricted to states with no doubly occupied sites. The series is generated for a 2D square lattice.
To determine the stability of the uniform phase we would like to investigate the ground state energy per hole given by
$$e(\delta )=\frac{E_0(\delta )-E_0^{AF}}{\delta },$$
(2)
introduced by Emery, Kivelson and Lin. Here $`E_0(\delta )`$ is the ground state energy per site of the uniform phase for hole doping $`\delta `$ and $`E_0^{AF}=-1.16944J`$ is the ground state energy per site for the Heisenberg model where $`\delta =0`$. If $`e(\delta )`$ is a monotonically increasing function of $`\delta `$ the uniform phase is stable. If $`e(\delta )`$ is constant or decreasing for a range of dopings the uniform phase is unstable for those values of $`\delta `$.
There are two main difficulties encountered in calculating $`e(\delta )`$ from numerical measurements (exact diagonalization, quantum Monte Carlo or Green’s function Monte Carlo) of $`E_0(\delta )`$. The first is that $`e(\delta )`$ requires the subtraction of two large numbers $`E_0(\delta )`$ and $`E_0^{AF}`$ to determine a small number which is then divided by $`\delta `$, another small number. Given statistical uncertainty in numerically determining $`E_0(\delta )`$ ($`E_0^{AF}`$ is essentially exact in comparison) this is a difficult task, especially for $`\delta \ll 1`$. The second difficulty is that numerical calculations are done on small clusters. Systematic errors in $`E_0(\delta )`$ are tough to estimate without knowing the finite size scaling of the data and whether the cluster sizes considered are large enough to be in the scaling limit. In addition to these difficulties, phase separation is favored on small clusters for $`\delta \ll 1`$. The reduction in ground state energy due to the kinetic energy of the holes, which disfavors phase separation, is not as large on a small cluster as it is for an infinite lattice. On a small cluster the electron system reduces its energy more through local interactions, which for the $`t`$-$`J`$ model are attractive interactions for antiparallel spins due to the $`J`$ term in the Hamiltonian.
High temperature series provide a means to avoid these difficulties. We generalize $`e(\delta )`$ to $`T>0`$ by
$$f(\delta ,T)=\frac{F(\delta ,T)-F^{AF}(T)}{\delta },$$
(3)
where we have replaced the ground state energy per site by the free energy per site and $`\mathrm{lim}_{T\to 0}f(\delta ,T)=e(\delta )`$. This replaces the difficulties mentioned above by the need to analytically continue the series to low temperatures. For $`J/t<1`$ and $`\delta \ll 1`$ we find that ratios $`f(\delta _2)/f(\delta _1)`$ for two closely spaced dopings $`\delta _1`$ and $`\delta _2`$ are the best quantities to extrapolate. Series for ratios can be calculated exactly from the series for $`F`$, avoiding the need to subtract two large approximate numbers. The series coefficients are also exact for an infinite lattice so we have no explicit finite size effects. The ratios are extrapolated using standard Padé approximants, but only after the exact series for a given ratio is calculated. The doping spacing we use is $`\mathrm{\Delta }\delta =0.025`$.
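To illustrate the kind of extrapolation involved, the snippet below builds a standard Padé approximant from the first eleven Taylor coefficients of a toy function and evaluates it far outside the radius of convergence of the direct sum. The toy series stands in for the actual $`t`$-$`J`$ coefficients, which are not reproduced here.

```python
from scipy.interpolate import pade

# Toy example: Taylor coefficients of f(x) = 1/(1 + x) about x = 0,
# a stand-in for a 10th-order high-temperature series in 1/T.
coeffs = [(-1.0) ** n for n in range(11)]
p, q = pade(coeffs, 5)                           # [5/5] Pade approximant

x = 3.0                                          # well outside the series' radius of convergence
taylor = sum(c * x**n for n, c in enumerate(coeffs))
print(taylor, p(x) / q(x), 1 / (1 + x))          # direct sum fails; the Pade value is accurate
```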
By extrapolating $`f(\delta _2)/f(\delta _1)`$ to $`T=0`$ we obtain estimates for $`e(\delta _2)/e(\delta _1)`$ in the uniform phase. Since high temperature series start at infinite temperature and only have information for the phase above a nonzero $`T_c`$, all of our results are for the uniform phase. A description of what happens if we try to extrapolate below $`T_c>0`$ is given below. Results for a range of dopings and $`J/t`$ values are shown in Fig. 1. For the parameters considered here $`e(\delta )<0`$ so that if $`\delta _2>\delta _1`$ and the system phase separates we should find $`e(\delta _2)/e(\delta _1)>1`$ if $`T_c>0`$ or $`e(\delta _2)/e(\delta _1)=1`$ if $`T_c=0`$. If the uniform phase is stable we have $`e(\delta _2)/e(\delta _1)<1`$. The 2D $`t`$-$`J`$ model phase separates into a phase with $`\delta =0`$ and a doped phase with $`\delta =\delta _0`$. For phase separation we therefore expect $`e(\delta )/e(0.01)\ge 1`$ immediately upon doping.
In Fig. 1, $`e(\delta )/e(0.01)<1`$ and falls monotonically with increasing $`\delta `$ for all $`J/t`$ shown, indicating no instability towards phase separation in the 2D $`t`$-$`J`$ model for $`J/t<1.2`$.
In Ref. a variational argument is used to support the presence of phase separation for $`J/t\ll 1`$. A variational phase separated state was constructed from two pieces occupying different parts of the lattice: a Heisenberg antiferromagnet for the $`\delta =0`$ phase and a gas of spinless fermions for the $`\delta =\delta _0`$ phase. The energy of this state is then minimized with respect to $`\delta `$, giving $`E_0(\delta )=E_0^{AF}-4t\delta (1-\sqrt{B\pi J/t})`$ for the phase separated state and $`\delta _0=\sqrt{BJ/\pi t}`$, where $`B=1.16944/2=0.58472`$. This energy was then compared to ground state energy estimates for the uniform phase found by considering a single hole in an antiferromagnet. The energy for the phase separated state was found to lie below the uniform state energy for small enough $`J/t`$, and since the variational energy lies above the true ground state energy the conclusion of Ref. was that the phase separated state is stable. Extrapolating the result for a single hole to a finite density of holes assumes the energy bands remain rigid, a feature not obvious for a strongly correlated system.
In Fig. 2 we compare our estimates for the uniform ground state energy to the phase separated variational ground state energy at $`J/t=0.01`$. We find that our energies lie below the variational energy for $`\delta <\delta _0`$. Note that from this result we cannot conclude that the uniform state is stable, but only that the variational state discussed in Ref. is not sufficient to show phase separation at $`J/t\ll 1`$.
In Fig. 3 we show the temperature dependence of $`f(0.02)/f(0.01)`$ for a range of $`J/t`$ values. Estimating the low $`T`$ behavior of this function is our only approximation. The weak temperature dependence for the ratio leads us to believe our results are reliable. The general trends of the data shown in Figs. 1 and 3 are due to the minimum in $`E_0(\delta )`$ moving to smaller $`\delta `$ as $`J/t`$ is increased, causing $`e(\delta )`$ to decrease in magnitude faster than $`e(0.01)`$, though for the parameters shown $`e(\delta )`$ and $`e(0.01)`$ remain negative. For values of $`J/t`$ larger than shown in Figs. 1 and 3 the ratio $`f(\delta _2)/f(\delta _1)`$ develops a spurious pole due to the crossing of $`F^{AF}`$ and $`F(\delta _1)`$ at $`T>0`$. This pole greatly degrades the accuracy of extrapolations of the ratios at lower temperatures. To investigate larger $`J/t`$ we need another method.
The chemical potential $`\mu =\partial F/\partial \delta `$ provides another means to investigate phase separation. We typically find $`\mu `$ is more difficult to extrapolate than $`f(\delta _2)/f(\delta _1)`$, with the error in the extrapolations for $`\mu `$ considerably larger than for the ratio. For $`J/t\gtrsim 1.2`$ we do see $`\mu (\delta )`$ becoming quite flat for $`\delta \ll 1`$, as expected for a first order phase transition into a phase separated state. As the temperature is lowered, $`\mu `$ near the critical point (critical doping $`\delta _c`$ and temperature $`T_c`$) becomes flat, giving a diverging compressibility $`\kappa `$ at the critical point. Results for $`\mu `$ are shown in Fig. 4. The flat region found in $`\mu (\delta )`$ can be used to estimate the boundary for phase separation. However, for larger $`\delta `$ distinguishing where the flat region ends is difficult, leading to errors in the position of the phase separation boundary.
Further evidence of phase separation at large $`J/t`$ can be found by directly extrapolating $`F(\delta ,T)`$ to estimate $`E_0(\delta )`$. Fig. 5 shows results for $`J/t=2.0`$. The characteristic signature of phase separation is the reversed curvature observed from $`\delta =0`$ to $`\delta \approx 0.45`$. The reversed curvature of $`E_0(\delta )`$ (giving an unphysical negative compressibility) results from extrapolating the high temperature uniform phase $`F(\delta ,T)`$ through the $`T_c>0`$ phase transition for phase separation. If $`T_c=0`$ we would find instead that $`E_0(\delta )`$ became linear in $`\delta `$ in the phase separated region.
The reversed curvature shown in Fig. 5 indicates $`T_c>0`$, but $`T_c`$ is probably quite low. An indirect estimate of $`T_c`$ can be made at large $`J/t`$, above $`J/t=3.4367`$ where the 2D $`t`$-$`J`$ model phase separates at all densities into regions with $`\delta =0`$ and $`\delta =1`$. Here we know $`E_0(\delta )`$ for all $`\delta `$, since $`E_0(\delta )`$ is the linear interpolation between $`E_0(0)=-1.16944J`$ and $`E_0(1)=0`$.
The ground state chemical potential in this parameter range is the constant slope of $`E_0(\delta )`$ with the value $`\mu /t=1.16944J/t`$. The chemical potential hits the bottom of the tight binding band at $`J/t=3.4367`$ and as $`J/t`$ is further reduced the gain in kinetic energy eventually limits the phase separated state to $`J/t\gtrsim 1.2`$. In Fig. 6 we compare $`E_0`$ to $`F(T)`$ in the limit $`J/t\to \mathrm{\infty }`$ with $`\delta =0.5`$.
Comparing the extrapolated $`F(T)`$ to $`E_0`$ we see they tend to cross at $`T\approx 0.25J`$. Since $`F(T)`$ must be less than $`E_0`$ and a monotonic function of $`T`$ this crossing cannot occur. We interpret the tendency to cross as a phase transition to phase separation with $`T_c\approx 0.25J`$.
Calculations for the 2D $`t`$-$`J`$ model currently give a wide range of minimum $`J/t`$ values for the presence of phase separation. Minimum $`J/t`$ values reported in the literature are 0, 0.5–0.6, and our result of 1.2. The latter results are in qualitative agreement in that there is a minimum $`J/t>0`$ for phase separation. The reasons for these differences are not clear at present. However, while statistical errors are well under control, systematic errors in ground state energy calculations due to small cluster sizes are much more difficult to control. Calculations investigating phase separation in the 2D Hubbard model find $`e(\delta )`$ equal to a constant for a range of dopings near half filling for the $`U=0`$ tight binding model. This spurious indication of phase separation is due to finite size effects and is reduced for larger clusters. Resolving the different reported results for phase separation will probably require significantly larger cluster sizes.
In conclusion, by using an analysis of the high temperature series for the free energy per hole $`f(\delta )`$ at different values of $`J/t`$ we find that phase separation in the $`t`$-$`J`$ model is limited to $`J/t\gtrsim 1.2`$. In addition, we find by indirect arguments that $`T_c\approx 0.25J`$ for the first order phase transition into the phase separated state. Combining this with the demonstration that phase separation can only occur at $`T=0`$ for the 2D Hubbard model on a square lattice supports the conjecture that the 2D Hubbard model does not phase separate for any positive $`U`$. Our results suggest phase separation in the 2D $`t`$-$`J`$ model is a classical phase transition similar to a lattice gas with an attractive interaction and that phase separation is not important for physical choices of the $`t`$-$`J`$ model parameters.
This work was supported in part by a faculty travel grant from the Office of International Studies at The Ohio State University (WOP), the Swiss National Science Foundation (WOP) and by EPSRC Grant No. GR/L86852 (MUL). WOP thanks the ETH-Zürich for hospitality while part of this work was being completed.
Permanent address.
Multiscaling of energy correlations in the random-bond Potts model
## 1 Introduction
The $`q`$-state random-bond Potts model is an interesting framework for examining how a phase transition is modified by quenched disorder coupling to the local energy density. For $`q>2`$ such randomness acts as a relevant perturbation , and for $`q>4`$ it even changes the nature of the transition from first to second order (see Ref. for a review). In the regime where $`(q-2)`$ is small, a score of analytical results have been obtained from the perturbative renormalization group, and the various expansions for the central charge and the multiscaling exponents for the moments of the spin-spin correlator compare convincingly to recent numerical work .
A particularly useful way of carrying out these simulations is to consider the finite-size scaling of the Lyapunov spectrum of the (random) transfer matrix, thus generalizing the method commonly applied to the eigenvalue spectrum in a pure system . A definite advantage over the more traditional technique of Monte Carlo simulations is that the transfer matrices allow for a representation in which $`q`$ can be regarded as a continuously varying parameter , and in particular one can study small non-integer values of $`(q2)`$.
The outcome of applying this method to the energetic sector of the transfer matrix, however, led to contradictory results . Most notably, the exponent $`\stackrel{~}{X}_1`$ describing the asymptotic decay of the disorder-averaged first moment of the two-point function $`\overline{\epsilon (x_1)\epsilon (x_2)}\sim |x_1-x_2|^{-2\stackrel{~}{X}_1}`$ seemed to be a rapidly decreasing function of $`q`$, in sharp disagreement with an exact bound on the correlation length exponent, $`\nu \ge 2/d`$ , which in our notation reads $`\stackrel{~}{X}_1\ge 1`$.
More recent numerical work has emphasized the importance of crossover behavior from the random fixed point to, on one side, the pure Potts model and, on the other, a percolation-like limit in which the ratio $`R=K_1/K_2`$ between strong and weak couplings tends to infinity. It became clear that while the fixed ratio $`R=2`$ employed in Ref. seems to have been adequate for studying the spin sector when $`(q-2)`$ is small, in general higher values of $`R`$ are needed to measure the true random behavior in the regime $`q>4`$ .
These findings were put on a firmer ground when it was realized that Zamolodchikov’s $`c`$-theorem can be used to explicitly trace out the critical disorder strength $`R_{\ast }(q)`$ as a function of $`q`$, by scanning for an extremum of the effective central charge. In conjunction with an improved transfer matrix algorithm in which the Potts model is treated through its representation as a loop model, this allowed the authors of Ref. to produce very accurate results for the central charge and the magnetic scaling dimension in the regime $`q\ge 4`$.
On the analytical side, the perturbative expansions for the first three moments of the energetic two-point function have been known for quite some time . It was however only very recently that Jeng and Ludwig succeeded in generalizing these computations to a general $`N`$th moment of the energy operator $`\overline{\left(\epsilon (x_1)\epsilon (x_2)\right)^N}\sim |x_1-x_2|^{-2\stackrel{~}{X}_N}`$, yielding
$$\stackrel{~}{X}_N=N\left(1-\frac{2}{9\pi ^2}(3N-4)(q-2)^2+𝒪\left((q-2)^3\right)\right).$$
(1)
In particular this makes available the experimentally relevant exponent $`\stackrel{~}{X}_0^{\prime }`$ describing the typical decay of the energy-energy correlator in a fixed sample at criticality .
In the present publication we show that by combining the methods of Refs. the exponents $`\stackrel{~}{X}_1`$ and $`\stackrel{~}{X}_0^{\prime }`$ can be quite accurately determined numerically for small $`(q-2)`$. In particular we find $`\stackrel{~}{X}_1\ge 1`$ in full agreement with the correlation length bound , and our results lend strong support to the above two-loop results of the perturbative renormalisation group.
## 2 The simulations
In order to compare our results with those of the $`(q2)`$-expansion, while on the other hand staying comfortably away from $`q=2`$ where logarithmic corrections are expected , our main series of data has $`q=2.5`$. Iterating the transfer matrix for a strip of width $`L`$ a large number $`ML`$ of times, we examine the probability distribution of the ratio between the two largest Lyapunov exponents $`\mathrm{\Lambda }_0`$, $`\mathrm{\Lambda }_1`$ in terms of the free energy gap $`\mathrm{\Delta }f(L)=\frac{1}{LM}\mathrm{ln}(\mathrm{\Lambda }_0/\mathrm{\Lambda }_1)`$. We employ the loop representation of the transfer matrix where each loop on the surrounding lattice is given a weight $`n=\sqrt{q}`$ , and bond randomness is incorporated by weighing the two possible vertex configurations by $`w_i`$ and $`1/w_i`$, where $`w_i`$ is a quenched random variable that can assume two different values $`s`$ and $`1/s`$, each one with probability $`1/2`$ . By construction, the system is then on average at its self-dual point . The strength of the disorder is measured by $`s>1`$, which is related to the ratio between strong and weak bonds by $`R=K_1/K_2=\mathrm{ln}(1+s\sqrt{q})/\mathrm{ln}(1+\sqrt{q}/s)`$. The maximum strip width employed in the study is $`L_{\mathrm{max}}=12`$.
Following Ref. , we start by locating the critical disorder strength $`s_{\ast }`$ by searching for a maximum of the effective central charge. To do so, we must be able to determine finite-size estimates $`c(L,L+2)`$ with five significant digits, which means that the free energy $`f_0(L)=\frac{1}{LM}\mathrm{ln}(\mathrm{\Lambda }_0)`$ must be known with seven digit precision. These considerations fix the necessary number of iterations to be $`M=10^8`$.
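For orientation, a minimal sketch of the two-point central charge estimate is given below. It assumes the standard conformal finite-size form $`f_0(L)=f_{\mathrm{\infty }}-\pi c/(6L^2)`$ for the strip free energy per site (the overall sign depends on the convention chosen for $`f_0`$), and uses synthetic numbers purely to illustrate the recovery of $`c`$.

```python
import numpy as np

def c_estimate(L1, f1, L2, f2):
    """Two-point estimate of the effective central charge, assuming the
    conformal finite-size form  f0(L) = f_inf - pi*c / (6 L^2)  for the strip."""
    return -6.0/np.pi * (f1 - f2) / (1.0/L1**2 - 1.0/L2**2)

# synthetic check: fabricate f0(L) with a known c and recover it from (L, L+2) pairs
f_inf, c_true = -1.80, 0.81
f0 = lambda L: f_inf - np.pi*c_true/(6.0*L**2)
for L in (4, 6, 8, 10):
    print(L, c_estimate(L, f0(L), L+2, f0(L+2)))   # -> 0.81 for every pair
```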
Our results for $`c(L,L+2)`$ as a function of $`L`$ and $`s`$ are shown in Table 1. For a sufficiently large system size $`L`$, these data exhibit a maximum as a function of $`s`$, the position of which determines a finite-size estimate $`s_{\ast }(L)`$, which converges to $`s_{\ast }`$ as $`L\to \mathrm{\infty }`$. From the data of Table 1, supplemented by improved three-point fits (not shown), we extrapolate to $`s_{\ast }(q=2.5)=2.5(1)`$.
The fluctuations in $`\mathrm{\Delta }f(L)`$ are examined by dividing the strip into $`M/m`$ samples, each one of length $`m=10^5`$, from which the first few cumulants of $`\mathrm{\Delta }f(L)`$ can be determined. As discussed in Ref. , the exponent $`\stackrel{~}{X}_0^{\prime }`$ is related to the finite-size scaling of the mean value (first cumulant) of $`\mathrm{\Delta }f(L)`$, whereas $`\stackrel{~}{X}_1`$ is similarly determined from the sum of the entire cumulant expansion. In practice, the second cumulant is roughly two orders of magnitude smaller than the first, and higher cumulants are expected to be further suppressed, even though their determination is made difficult by numerical instabilities. We can therefore with confidence truncate the sum of the cumulants after the second one.
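A sketch of how the typical-exponent estimate can be formed from the gap samples is shown below. It assumes the conformal gap–dimension relation $`\mathrm{\Delta }f(L)=2\pi X/L^2`$ for a periodic strip and treats the second cumulant only as a diagnostic; the precise cumulant combination entering $`\stackrel{~}{X}_1`$ follows Ref. and is not reproduced here. The sample values are synthetic.

```python
import numpy as np

def X0_prime_estimate(L, gap_samples):
    """Finite-size estimate of the typical exponent from samples of Delta f(L),
    assuming the conformal strip relation  Delta f(L) = 2*pi*X / L**2."""
    return L**2 * np.mean(gap_samples) / (2.0 * np.pi)

# illustration with synthetic gap samples for a width-10 strip
rng = np.random.default_rng(1)
samples = rng.normal(loc=2*np.pi*1.02/10**2, scale=0.004, size=1000)
print(X0_prime_estimate(10, samples), "second cumulant:", np.var(samples))
```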
Resulting finite-size estimates of $`\stackrel{~}{X}_0^{\prime }`$ and $`\stackrel{~}{X}_1`$ are shown in Tables 2 and 3 respectively. Unlike what seemed to be the situation in the magnetic sector, these estimates exhibit a pronounced dependence on $`s`$. Ref. worked at fixed $`R=2`$, which for $`q=2.5`$ would correspond to $`s\approx 1.7`$, and found $`\stackrel{~}{X}_0^{\prime }<1`$ for all $`q>2`$. We see here that the correct way to extract these exponents is to extrapolate the $`s=s_{\ast }`$ data to the $`L\to \mathrm{\infty }`$ limit. With the help of improved two-point estimates (not shown) we thus obtain
$$\stackrel{~}{X}_0^{\prime }=1.02(1),\stackrel{~}{X}_1=1.00(1),$$
(2)
which verifies the bound of Ref. . These exponents, as well as the result for their difference $`\stackrel{~}{X}_0^{\prime }-\stackrel{~}{X}_1=0.015(5)`$, are in very good agreement with the $`(q-2)`$-expansion; see Eq. (1).
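For comparison, the two-loop expression (1) can be evaluated directly; the short sketch below also forms the typical exponent as the $`N\to 0`$ derivative of $`\stackrel{~}{X}_N`$, which is the standard multifractal relation and an assumption on our part about the definition used above.

```python
import numpy as np

def X_N(N, q):
    """Two-loop prediction of Eq. (1) for the moment exponents X~_N."""
    return N * (1 - 2/(9*np.pi**2) * (3*N - 4) * (q - 2)**2)

q = 2.5
X1 = X_N(1, q)
X0p = 1 + 8/(9*np.pi**2) * (q - 2)**2     # dX_N/dN at N = 0, the typical-decay exponent
print(X1, X0p, X0p - X1)                   # ~ 1.006, 1.023, 0.017  vs. 1.00(1), 1.02(1), 0.015(5)
```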
We have also performed simulations for higher values of $`q`$, where the discrepancy between Refs. and was even more pronounced, since $`s_{\ast }`$ is an increasing function of $`q`$. For $`q=2.75`$ and $`q=3`$, we had to increase the number of iterations to $`M=10^9`$ in order to keep the error bars under control despite the higher disorder strength. In all cases we found good agreement with Ref. and with the $`(q-2)`$-expansion, at least in the range where the latter can be assumed to be valid. A summary of our results is given in Table 4.
## 3 Conclusion
In summary, we have shown that the apparent violation of the correlation length bound observed in Ref. can be dismissed as a crossover effect due to the lack of tuning to the critical disorder strength. In conjunction with the results on degeneracy and descendents given in Ref. we would thus claim that the transfer matrix method can, at least in principle, be used to relate the entire Lyapunov spectrum to the operator content of the (as yet unknown) underlying conformal field theory.
In particular, we have supplied convincing numerical validation of the two-loop expansion (1) for the energetic multiscaling exponents . Our results also provide further evidence in favour of the replica symmetric approach to the perturbative calculations, since the assumption of initial replica symmetry breaking leads to $`\stackrel{~}{X}_1=1+𝒪\left((q-2)^3\right)`$ , which seems to be ruled out by the results given in Table 4.
Semiexclusive Processes: A different way to probe hadron structure
(Invited talk at the 12th Nuclear Physics Summer School and Symposium: New Directions in Quantum Chromodynamics, Kyongju, South Korea, 21–25 June 1999.)
## Semi-Exclusive Processes as Probes of Hadron Structure
This talk will discuss hard semiexclusive processes, namely processes of the form $`B+A\to C+X`$, where the momentum transfer $`t=(p_B-p_C)^2`$ is large many ; peralta ; cw93 ; acw97 ; acw98 ; bdhp99 ; b\_epic ; c\_epic . Both the cases where the hadron $`C`$ is part of a jet and where it is kinematically isolated are interesting. Semiexclusive processes provide the capability of designing “effective currents” b\_epic that probe specific parton distributions, and of probing in leading order target distributions that are not probed at all in leading order in inclusive reactions.
Particle $`B`$ can be a hadron or a real or virtual photon. We will here limit ourselves to the latter. The process we will discuss is
$$\gamma +A\to M+X,$$
(1)
where $`A`$ is the target and $`M`$ is a meson, for definiteness the pion. The process is perturbative because of the high transverse momentum of the pion, not because of the high $`Q^2`$ of the photon. Soft processes are from the present viewpoint an annoyance, but one we need to discuss and we will estimate their size farther below.
Our considerations also apply to electroproduction,
$$e+A\to M+X$$
(2)
when the final electron is not seen. We use the Weizsäcker-Williams equivalent photon approximation bkt71 to relate the electron and photon cross sections,
$$d\sigma (eA\to MX)=\int dE_\gamma N(E_\gamma )d\sigma (\gamma A\to MX),$$
(3)
where the number distribution of photons accompanying the electron is a well known function.
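As a rough illustration of Eq. (3), the sketch below folds a leading-log form of the Weizsäcker–Williams photon number density with a placeholder photoproduction cross section; the virtuality cut, the beam energy, the integration range, and the cross-section shape are illustrative assumptions only.

```python
import numpy as np
from scipy.integrate import quad

alpha = 1/137.036
E_e = 50.0                      # incident electron energy in GeV (illustrative)
me = 0.511e-3                   # electron mass in GeV
Q2max = 1.0                     # illustrative upper cut on the photon virtuality (GeV^2)

def N(y):
    """Leading-log Weizsacker-Williams photon number density dN/dy, y = E_gamma/E_e."""
    Q2min = (me * y)**2 / (1.0 - y)
    return alpha/(2*np.pi) * (1 + (1 - y)**2)/y * np.log(Q2max/Q2min)

def sigma_gamma(E_gamma):
    """Placeholder photoproduction cross section; stands in for d(sigma)(gamma A -> M X)."""
    return 1.0e-3 / E_gamma     # purely illustrative shape

# Eq. (3): fold the photon flux with the photoproduction cross section
sigma_e = quad(lambda y: N(y) * sigma_gamma(y * E_e), 0.1, 0.9)[0]
print("electroproduction estimate:", sigma_e)
```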
In the following section, we will describe the subprocesses that contribute to hard pion production and show how the cross sections are dependent upon the parton densities and distribution amplitudes that we wish to probe, and in the subsequent section display some results. There will be a short summary at the end.
## The Subprocesses
#### At the Highest $`k_T`$
At the highest possible transverse momenta, observed pions are directly produced at short range via a perturbative QCD (pQCD) calculable process cw93 ; acw97 ; acw98 ; bdhp99 . Two out of four lowest order diagrams are shown Fig. 1. The pion produced this way is kinematically isolated rather than part of a jet, and may be seen either by making an isolated pion cut or by having some faith in the calculation and going to a kinematic region where this process dominates the others. Although this process is higher twist, at the highest transverse momenta its cross section falls less quickly than that of the competition, and we will show plots indicating the kinematics where it can be observed.
The subprocess cross section for direct or short-distance pion production is
$$\frac{d\widehat{\sigma }}{dt}(\gamma q\to \pi ^\pm q^{\prime })=\frac{128\pi ^2\alpha \alpha _s^2}{27(-t)\widehat{s}^2}I_\pi ^2\left(\frac{e_q}{\widehat{s}}+\frac{e_{q^{\prime }}}{\widehat{u}}\right)^2\left[\widehat{s}^2+\widehat{u}^2+\lambda h(\widehat{s}^2-\widehat{u}^2)\right],$$
(4)
where $`\widehat{s}`$, $`\widehat{t}=t`$, and $`\widehat{u}`$ are the subprocess Mandelstam variables; $`\lambda `$ and $`h`$ are the helicities of the photon and target quark, respectively; and $`I_\pi `$ is the integral
$$I_\pi =\int \frac{dy_1}{y_1}\varphi _\pi (y_1,\mu ^2).$$
(5)
In the last equation, $`\varphi _\pi `$ is the distribution amplitude of the pion, and describes the quark-antiquark part of the pion as a parallel moving pair with momentum fractions $`y_i`$. It is normalized through the rate for $`\pi ^\pm \to \mu \nu `$, and for example,
$$\varphi _\pi =\frac{f_\pi }{2\sqrt{3}}6y_1(1-y_1)$$
(6)
for the distribution amplitude called “asymptotic” and for $`f_\pi \approx 93`$ MeV. Overall,
$$\frac{d\sigma }{dxdt}(\gamma A\to \pi X)=\sum _qG_{q/A}(x,\mu ^2)\frac{d\widehat{\sigma }}{dt}(\gamma q\to \pi ^\pm q^{\prime }),$$
(7)
where $`G_{q/A}(x,\mu ^2)`$ is the number distribution for quarks of flavor $`q`$ in target $`A`$ with momentum fraction $`x`$ at renormalization scale $`\mu `$.
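A minimal numerical transcription of Eqs. (4)–(7) might look as follows. The strong coupling value, the toy valence-like u-quark density, and the chosen kinematic point are illustrative assumptions, only the $`\gamma u\to \pi ^+d`$ channel is kept, and the value of $`I_\pi `$ follows from Eqs. (5)–(6).

```python
import numpy as np

alpha, alpha_s = 1/137.036, 0.3
f_pi = 0.093                                  # GeV
I_pi = np.sqrt(3.0)/2 * f_pi                  # int dy phi_pi/y for the asymptotic amplitude (6)

def dsigma_hat(shat, that, uhat, e_q, e_qp, lam=0, h=0):
    """Subprocess cross section, Eq. (4), for gamma q -> pi q' (unpolarized if lam*h = 0)."""
    return (128*np.pi**2*alpha*alpha_s**2 / (27*(-that)*shat**2) * I_pi**2
            * (e_q/shat + e_qp/uhat)**2
            * (shat**2 + uhat**2 + lam*h*(shat**2 - uhat**2)))

def G_u_toy(x):
    """Hypothetical valence-like u-quark density, used only to illustrate Eq. (7)."""
    return 2.0 * np.sqrt(x) * (1 - x)**3 / x

def dsigma_dxdt(s, t, u):
    """Eq. (7) restricted to gamma u -> pi+ d (e_q = 2/3, e_q' = -1/3)."""
    x = -t / (s + u)
    return G_u_toy(x) * dsigma_hat(x*s, t, x*u, 2/3, -1/3)

print(dsigma_dxdt(s=100.0, t=-10.0, u=-60.0))    # GeV-based units throughout
```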
There are a number of interesting features about direct pion production.
• For photoproduction, the struck quark’s momentum fraction is fixed by experimental observables. This is like deep inelastic scattering, where the experimenter can measure $`x\equiv Q^2/2m_N\nu `$ and this $`x`$ is also the momentum fraction of the struck quark (for high $`Q`$ and $`\nu `$). For the present case, momenta are defined in Fig. 1 and the Mandelstam variables are
$$s=(p+q)^2;t=(q-k)^2;\mathrm{and}u=(p-k)^2.$$
(8)
The Mandelstam variables are all observables, and the ratio
$$x=\frac{-t}{s+u}$$
(9)
is the momentum fraction of the struck quark. We will let the reader prove this (a short sketch of the argument is given just after this list).
• The gluon involved in direct pion production is well off shell cw93 ; acw97 ; acw98 .
• Without polarization, we can measure $`I_\pi `$, given trust in the other parts of the calculation. This $`I_\pi `$ is precisely the same as the $`I_\pi `$ in both $`\gamma ^{\ast }\gamma \to \pi ^0`$ and $`e\pi ^\pm \to e\pi ^\pm `$.
• We have polarization sensitivity. For $`\pi ^+`$ production at high $`x`$,
$$A_{LL}\equiv \frac{\sigma _{R+}-\sigma _{L+}}{\sigma _{R+}+\sigma _{L+}}=\frac{s^2-u^2}{s^2+u^2}\frac{\mathrm{\Delta }u(x)}{u(x)}$$
(10)
where $`R`$ and $`L`$ refer to the polarization of the photon, and $`+`$ refers to the target, say a proton, polarization. Also, inside a $`+`$ helicity proton the quarks could have either helicity, and
$$\mathrm{\Delta }u(x)\equiv u_+(x)-u_{-}(x).$$
(11)
The large $`x`$ behavior of both $`d(x)/u(x)`$ and $`\mathrm{\Delta }d(x)/\mathrm{\Delta }u(x)`$ are of current interest. Most fits to the data have the down quarks disappearing relative to the up quarks at high $`x`$, in contrast to pQCD which has definite non-zero predictions for both of the ratios in the previous sentence. Recent improved work on extracting neutron data from deuteron targets, has tended to support the pQCD predictions wally .
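For completeness, here is the sketch promised in the first item above, under the assumption of massless partons and a struck quark carrying a fraction $`x`$ of the target momentum $`p`$: the subprocess invariants are $`\widehat{s}=(xp+q)^2\simeq 2xp\cdot q=xs`$, $`\widehat{u}=(xp-k)^2\simeq -2xp\cdot k=xu`$, and $`\widehat{t}=(q-k)^2=t`$. Massless subprocess kinematics requires $`\widehat{s}+\widehat{t}+\widehat{u}\simeq 0`$, i.e., $`xs+t+xu\simeq 0`$, which yields the momentum fraction quoted in Eq. (9).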
Experimentally, direct or short-range pion production can be seen. To show this, Fig. 2(left) plots the differential cross section for high transverse momentum $`\pi ^+`$ electroproduction for a SLAC energy. Specifically, we have 50 GeV incoming electrons, with the pion emerging at 5.5° in the lab. It shows that above about 27 GeV total pion momentum or 2.6 GeV transverse momentum, direct (short distance, isolated) pion production exceeds its competition. Also shown in Fig. 2 is a situation where there is a long region where the fragmentation process—next up for discussion—dominates. Incidentally, the 340 GeV energy for the electron beam on stationary protons was chosen to match recent very preliminary discussions of an Electron Polarized Ion Collider (EPIC) with 4 GeV electrons and 40 GeV protons, and the 1.34° angle in the target rest frame matches 90° in the lab for such a collider.
### Moderate $`k_T`$
At moderate transverse momentum, the generally dominant process is still a direct interaction in the sense that the photon interacts directly with constituents of the target, but the pion is not produced directly at short range but rather at long distances by fragmentation of some parton many ; peralta ; acw98 . Many authors refer to this as the direct process; others of us are in the habit of calling it the fragmentation process. The main subprocesses are called the Compton process and photon-gluon fusion, and one example of each is shown in Fig. 3.
Photon gluon fusion often gives 30–50% of the cross section for the fragmentation process, and the polarization asymmetry is as large as can be in magnitude,
$$\widehat{A}_{LL}(\gamma g\to q\overline{q})=-100\%.$$
(12)
Typically for the Compton process, $`\widehat{A}_{LL}(\gamma q\to gq)\sim 1/2`$. We shall show some $`A_{LL}`$ plots for the overall process after we discuss the soft processes.
We should note that the NLO calculations for the fragmentation process have been done also for the polarized case, though our plots are based on LO. For direct pion production, NLO calculations are not presently completed.
We should also remark that the photon may split into hadronic matter before interacting with the target. If it splits into a quark-antiquark pair that is close together, the splitting can be modeled perturbatively or quasi-perturbatively, and we call it a “resolved photon process.” A typical diagram is shown in the left hand part of Fig. 4. Resolved photon processes are crucial at HERA energies, but not at energies under discussion here, and we say no more about them.
### Soft Processes
This is the totally non-perturbative part of the calculation, whose size can be estimated by connecting it to hadronic cross sections. The photon may turn into hadronic matter, such as $`\gamma q\overline{q}+\mathrm{}`$ with a wide spatial separation. It can be represented as photons turning into vector mesons. See Fig. 4(right).
We want a reliable approximation to the non-perturbative cross section so we can say where perturbative contributions dominate and where they do not. To get such an approximation one can start with the cross section given as
$$d\sigma (\gamma A\to \pi X)=\sum _V\frac{\alpha }{\alpha _V}d\sigma (V+A\to \pi X)+\left(\mathrm{non}\text{-}\mathrm{VMD}\right),$$
(13)
where the sum is over vector mesons $`V`$, $`\alpha =e^2/4\pi `$, and $`\alpha _V=f_V^2/4\pi `$. We can get, for example, $`f_\rho `$ from the decay $`\rho \to e^+e^{-}`$. Then “all” one needs is a parameterization of the hadronic process, based on data. Details of our implementation of this program are given in acw99 .
We took the soft processes to be polarization insensitive. This agrees with a recent Regge analysis of Manayenkov m99 .
## Results
Since results for the unpolarized cross section have already been displayed in Fig. 2, we focus on results for $`A_{LL}`$, which is also called $`E`$ by some authors barker75 . Fig. 5 shows two plots, both for $`\pi `$ production.
The 50 GeV plot, Fig. 5(left), is dominated by direct pion production above the soft region, and is sensitive mainly to the differing polarized quark distributions of the different models. Three different parton distribution models are shown bbs95 ; grsv96 ; gs96 . Although the fragmentation process is not the crucial one here, we should mention that mostly we used our own fragmentation functions cw93 , and that the results using a better known set bkk95 are not very different. Neither set of fragmentation functions agrees well with the most recent HERMES data makins for unfavored vs. favored fragmentation functions, and the one curve labeled “newfrag” is calculated with fragmentation functions that agree better with that data.
Below about 20 GeV total pion momentum where the soft process dominates, the data is well described by supposing the soft processes are polarization independent. Above that, with asymmetry due to perturbative processes, the difference among the results for the different sets of parton distributions is quite large for the $`\pi ^{-}`$.
The data is from Anthony et al. anthony99 . Presently most of the data is in the region where the soft processes dominate. The data is already interesting. Further data at even higher pion momenta would be even more interesting. Regarding the differences among the quark distributions, recall that large momentum corresponds to $`x\to 1`$ for the struck quark, and pQCD predicts that the quarks are 100% polarized in this limit. Only the parton distributions labeled BBS bbs95 are in tune with the pQCD prediction, and they for large momentum predict even a different sign for $`A_{LL}`$ for the $`\pi ^{-}`$.
The other plot in Fig. 5 is for 340 GeV electron beam energy, an energy where there is a long region where the fragmentation process dominates. We would like to know how sensitive the possible measurements of $`A_{LL}`$ are to the different models for $`\mathrm{\Delta }g`$. To find out, Fig. 5 (right) presents calculated results for $`A_{LL}`$ for one set of quark distributions and 5 different distributions for $`\mathrm{\Delta }g`$ bbs95 ; grsv96 ; gs96 ; bfr96 . The quark distributions and unpolarized gluon distribution in each case are those of GRSV grsv96 . There are 6 curves on each figure. One of them (labeled GRSV–) is a benchmark, which was calculated with $`\mathrm{\Delta }g`$ set to zero. The other curves use the $`\mathrm{\Delta }g`$ from the indicated distribution. There is a fair spread in the results, especially for the $`\pi ^{}`$ where photon-gluon fusion gives a larger fraction of the cross section. Thus, one could adjudicate among the polarized gluon distribution models.
## Summary
Several perturbative processes contribute to hard pion photoproduction. All are calculable. They give us new ways to measure aspects of the pion wave function, and quark and gluon distributions, especially $`\mathrm{\Delta }q`$ and $`\mathrm{\Delta }g`$. The soft processes can be estimated and avoided if the transverse momentum is greater than about 2 GeV. SLAC or HERMES energies would be excellent for finding direct pion production, which is sensitive to $`\mathrm{\Delta }u`$ and $`\mathrm{\Delta }d`$, and higher energies would give a region where the fragmentation process dominates and be excellent for measuring $`\mathrm{\Delta }g`$.
## Acknowledgments
My work on this subject has been done with Andrei Afanasev, Chris Wahlquist, and A. B. Wakely and I thank them for pleasant collaborations. I have also benefited from talking to and reading the work of many authors and apologize to those I have not explicitly cited. I thank the NSF for support under grant PHY-9900657.
no-problem/9912/cond-mat9912492.html | ar5iv | text | # Unconventional electronic Raman spectra of borocarbide superconductors
## Abstract
Borocarbide superconductors, which are thought to be conventional BCS-type superconductors, are not so conventional in several electronic Raman properties. Anisotropic gap-like features and finite scattering strength below the gap were observed for the $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$ = Lu, Y) systems in our previous study. The effects of Co-doping on the 2$`\mathrm{\Delta }`$ gap-like features and the finite scattering strength below and above the gap are studied in $`R`$ = Lu (B = B<sup>11</sup>) system. In superconducting states, Co-doping strongly suppresses the 2$`\mathrm{\Delta }`$ peak in both B<sub>2g</sub> and B<sub>1g</sub> symmetries. Raman cross-section calculation which includes inelastic scattering shows a relatively good fit to the features above the 2$`\mathrm{\Delta }`$ peak, while it does not fully account for the features below the peak.
Some superconductors that are thought to be of the conventional BCS-type have unusual properties. Especially, the borocarbides with the generic formula $`R`$Ni<sub>2</sub>B<sub>2</sub>C ($`R`$ = Y, rare earths) have shown rich physics. In this article, we further address the peculiar behavior of the 2$`\mathrm{\Delta }`$ peak and the sub-gap features, reported in our earlier electronic Raman measurements, in the Lu(Ni<sub>1-x</sub>Co<sub>x</sub>)<sub>2</sub>B<sub>2</sub>C ($`x`$ = 0.0, 0.015, 0.03).
The samples measured were single crystals grown by the flux-growth method and characterized by temperature-dependence of resistivity and magnetization. Raman spectra were obtained using a custom-made subtractive triple-grating spectrometer designed for very small Raman shifts and ultra low intensities. 3 mW of 6471 Å Kr-ion laser light was focused onto a spot of $`100\times 100`$ $`\mu `$m<sup>2</sup>, in a pseudo-backscattering geometry. The temperature of the spot on the sample surface was estimated to be $``$7 K for the superconducting spectra. The spectra were corrected for the response of the spectrometer and the Bose factor. Thus they are proportional to the imaginary part of the Raman susceptibility.
The 7 K Raman spectra in both geometries show 2$`\mathrm{\Delta }`$-like peak features. The intensity of the 2$`\mathrm{\Delta }`$-like peak is stronger and sharper in B<sub>2g</sub> than in B<sub>1g</sub>. They are strongly suppressed as pure LuNi<sub>2</sub>B<sub>2</sub>C is doped by Co on the Ni sites. The Co impurities are believed to be nonmagnetic.
The $`T0`$ peak positions of the 2$`\mathrm{\Delta }`$-like feature, which were found to show BCS-type temperature dependence of the superconducting gap $`\mathrm{\Delta }`$(T), are 45 cm<sup>-1</sup> (B<sub>2g</sub>) and 48 cm<sup>-1</sup> (B<sub>1g</sub>) for the undoped LuNi<sub>2</sub>B<sub>2</sub>C, which are much less anisotropic than the values in YNi<sub>2</sub>B<sub>2</sub>C (40 and 49 cm<sup>-1</sup>, respectively). The anisotropy of the peak positions (and thus the apparent gap values) seems to be lessened as Co-doping increases. The normal state responses (bottom spectra) show curvatures at around 35-50 cm<sup>-1</sup> depending on the amount of Co-doping, which might be responsible for the apparent insensitivity of the 2$`\mathrm{\Delta }`$ peak to the Co-doping in the superconducting state.
A comparision of the data for different Co dopings to the theory for Raman scattering in disordered conventional superconductors developed in Ref. is presented in Fig. 1. The theory has been augmented to include electron-phonon and electron-paramagnon inelastic scattering. The relevant fit parameters are the magnitude of gap $`\mathrm{\Delta }`$ and the elastic scattering rate $`1/\tau _L`$ for $`L`$ = B<sub>1g</sub>, B<sub>2g</sub> channels. Other parameters entering into the self energies (such as the DOS at the Fermi level, Debye energy, electron-phonon coupling, and the Stoner factor) are taken from Ref. . We used $`\mathrm{\Delta }=20`$ cm<sup>-1</sup> for $`x`$ = 0 and 0.015 and used $`\mathrm{\Delta }=19`$ cm<sup>-1</sup> for $`x`$ = 0.03. The values used for $`1/\tau _L`$ to fit the $`B_{2g}(B_{1g})`$ data were 32, 80, 120 (60, 100, 120) cm<sup>-1</sup> for $`x`$ = 0, 0.015, and 0.03, respectively. The same parameters, except $`\mathrm{\Delta }=0`$, were used to fit the normal state spectra (bottom). These values are less than a factor of two larger than those determined by resistivity studies, although these rates are necessarily different from the transport rate.
The theory agrees rather well with the data near the gap edge and at higher frequencies, but does not reproduce the spectral weight observed for small frequency shifts. This intensity might come from additional bands which have a very small gap or are not superconducting or from nodal quasiparticles. In the former case, both channels would experience a linear in frequency term coming from normal scattering processes superimposed on the superconducting response. In the latter case, the linear rise of the spectra naturally arises from a gap with line or point nodes, provided the nodes are not coincident with the nodes of the B<sub>1g</sub> or B<sub>2g</sub> vertex.
ISY and MVK were partially supported under NSF 9705131 and 9120000. ISY was also supported by KOSEF 1999-2-114-005-5. Ames Laboratory is operated by U.S. DOE by Iowa State University under Contract No. W-7405-Eng-82.
Non-linear phenomena in electrical circuits: Simulation of non-linear relativistic field theory and possible applications
Konstantin G. Zloshchastiev
E-mail: zlosh@email.com, URL(s): http://zloshchastiev.webjump.com, http://zloshchastiev.cjb.net
## Abstract
We propose a non-accelerator, non-low-temperature simulator of quantum-field effects based on feeder circuits with a special feedback. By means of it one can study field models that contain fundamental concepts of modern field theory but do not exist in nature in a separate form. Besides, several field phenomena might find technological applications by virtue of the electrical analogy.
PACS number(s): 07.50.Ek, 11.10.Lm
Nowadays the methods of non-accelerator and non-astronomical investigations of the relativistic quantum field theory and gravity are of great interest. The most significant progress was reached by means of the simulations based on the superfluid helium. As was shown in numerous works the superfluid phases of $`{}_{}{}^{3}\text{He}`$ can simulate several phenomena in quantum field theory and gravity, namely, black holes, surface gravity, Hawking radiation, horizons, ergoregions, trapped surfaces, baryogenesis, vortexes, strings, textures, standard electroweak model, etc . Thereby, such interdisciplinary analogies might be very useful not only as a good tool for the verification of theoretical conceptions and models but also as promising source of new technologies.
In the present paper we propose another non-accelerator simulator, working at reasonably high temperature (unlike superfluid helium), for studying and verifying several phenomena of scalar field theory. It will be shown that by means of it one can study field models which form the basis of modern physics but do not exist in nature separately. This simulator is based on standard wave phenomena in electrical circuits. In principle, non-linear waves of electrical nature are widely studied and applied in optics, where the (non-linear) light wave propagates inside a light guide. Unlike that case, however, our present study addresses both the propagation of electrical-potential waves in an optically opaque medium and the wave propagation of the charge current, i.e., of electrons. These two circumstances clearly distinguish our case from the optical one, especially as regards technological applications.
So, let us consider the usual twin feeder, $`U`$ is the potential difference, $`I`$ is the current, $`C`$, $`L`$ and $`R`$ are the specific wire-to-wire capacitance, inductance and resistance respectively. Therefore, the capacitance, inductance and resistance of a small line section have to be $`\delta C=C\delta x`$, $`\delta L=L\delta x`$, $`\delta R=R\delta x`$, where $`\delta x`$ is the length of the section.
The section’s charge $`\delta Q=CU\delta x`$ varies because of both the difference of currents in the points $`x`$ and $`x+\delta x`$ and lateral leakage current through the isolation of the feeder. Hence we have
$$_t\delta Q=C\delta x_tU=I\left(x\right)-I\left(x+\delta x\right)-GU\delta x,$$
(1)
where $`G`$ is the leakage coefficient. In the limit $`\delta x\to 0`$ we therefore obtain
$$C_tU+_xI+GU=0.$$
(2)
The second circuit rule yields
$$\delta x\left(RI+Lc^{-2}_tI\right)+U\left(x+\delta x\right)-U\left(x\right)=0,$$
(3)
or in the limit $`\delta x\to 0`$:
$$_xU+Lc^{-2}_tI+IR=0.$$
(4)
For simplicity further we will assume the capacitance, inductance and leakage coefficient to be constant. Then the equality of mixed derivatives of the voltage yields the telegraph equation for the line current $`I(x,t)`$ (an analogous expression can be obtained for $`U`$):
$$\frac{1}{\stackrel{~}{c}^2}_{tt}I-_{xx}I+\frac{GL}{c^2}_tI+GIR+C_t\left(IR\right)=0,$$
(5)
where $`\stackrel{~}{c}=c/\sqrt{LC}`$ is the (effective) propagation speed of circuit oscillations. Formally one could suppose that this speed can be more than the speed of light in vacuum. However, we should not forget that the above-mentioned expressions were obtained in the quasistationary approximation, in which the period of the electromagnetic wave has to be much longer than the time of field propagation.
Further, if one assumes excellent isolation, $`G=0`$, then (5) loses the term causing the exponential attenuation of the current, and we have the following wave equation:
$$\left(1/\stackrel{~}{c}^2\right)_{tt}I-_{xx}I+C_t\left(IR\right)=0,$$
(6)
opening a wide prospect for simulation of several wave-like phenomena in physics.
For instance, one can choose a (generalized) conductor or feedback system whose resistance responds nonlinearly to the transmitted current, such that
$$R=\frac{1}{CI}\left[\int V^{\prime }\left(I\right)\text{d}t+A\left(x\right)\right],$$
(7)
where $`A(x)`$ is an arbitrary function, $`V(I(x,t))`$ is a prescribed function of the current, the prime means the derivative with respect to $`I`$, and the integral is meant as a primitive. Provided the resistance is chosen in this way, the equation (6) takes the form of the relativistic scalar field equation
$$\left(1/\stackrel{~}{c}^2\right)_{tt}I-_{xx}I+V^{\prime }\left(I\right)=0,$$
(8)
which is the equation of motion for the self-interacting scalar field model described by the Lagrangian
$$L=\frac{1}{2}\left[\frac{1}{\stackrel{~}{c}^2}\left(_tI\right)^2-\left(_xI\right)^2\right]-V\left(I\right).$$
(9)
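To connect Eq. (8) with what one would actually measure along a discretized line, a direct numerical integration is useful for comparison. The following is a minimal leapfrog sketch for the quartic self-interaction used in Eq. (10) below; the parameters, the clamped boundary values, and the kink-like initial condition are purely illustrative.

```python
import numpy as np

# minimal leapfrog integrator for (1/c^2) I_tt - I_xx + V'(I) = 0  (phi^4 case, illustrative units)
m, lam, c = 1.0, 1.0, 1.0
Lbox, N, dt, steps = 40.0, 801, 0.02, 2000
x = np.linspace(-Lbox/2, Lbox/2, N); dx = x[1] - x[0]

def Vprime(I):
    return lam * I * (I**2 - m**2/lam)

I_prev = m/np.sqrt(lam) * np.tanh(m*x/np.sqrt(2))        # static kink as initial condition
I_curr = I_prev.copy()                                    # zero initial velocity
for _ in range(steps):
    lap = np.zeros_like(I_curr)
    lap[1:-1] = (I_curr[2:] - 2*I_curr[1:-1] + I_curr[:-2]) / dx**2
    I_next = 2*I_curr - I_prev + (c*dt)**2 * (lap - Vprime(I_curr))
    I_next[0], I_next[-1] = I_curr[0], I_curr[-1]         # clamp the ends (fixed boundary values)
    I_prev, I_curr = I_curr, I_next

print("kink centre after evolution:", x[np.argmin(np.abs(I_curr))])  # stays near x = 0
```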
It should be noted that the consideration of the lateral leakage current just complicates the requirement for the line resistance:
$$\frac{Cc^2}{GL}R=\frac{\text{e}^{-\frac{G}{C}t}}{I}\left[\int \text{e}^{\frac{G}{C}t}\left(\frac{c^2V^{\prime }\left(I\right)}{GL}+\frac{GI}{C}\right)\text{d}t+A\left(x\right)\right]-1,$$
but does not change the picture in principle, except in cases where the exponential growth or damping can cause experimental problems.
Therefore, the proposed electrical circuit can indeed simulate the scalar field (9) by itself, which seems to be very important because many concepts lying at the foundations of modern field theory (such as spontaneous symmetry breaking, topologically nontrivial solutions, instanton effects, etc.) can be studied experimentally separately from other field phenomena. Besides the evident opportunities for verification and visualization of theoretical predictions, by virtue of the electrical analogy one can use certain field phenomena for practical purposes. First of all, this applies to several topologically nontrivial solutions of (8), which in this connection represent signals maintaining their initial form for an indefinitely long time, even in the presence of dissipative factors. Such stability is known to be provided by the presence of several conserved quantities, first of all the topological index (also known as the topological charge) .
Let us consider the simplest $`\phi ^4`$ theory where
$$V\left(I\right)=\frac{\lambda }{4}\left(I^2-\frac{m^2}{\lambda }\right)^2,$$
(10)
where $`\lambda `$ and $`m`$ are positive constant parameters. This theory admits the topological self-dual kink solution
$$I^{\left(k\right)}(x,t)=\frac{m}{\sqrt{\lambda }}\mathrm{tanh}\frac{m\rho }{\sqrt{2}},$$
(11)
where
$$\rho =\frac{x-vt}{\sqrt{1-\left(v/\stackrel{~}{c}\right)^2}},$$
and $`v=\text{const}`$ is the propagation velocity of the kink. This solution has the nonzero topological charge
$$Q_{\left(t\right)}=\frac{\sqrt{\lambda }}{m}\left[I(+\mathrm{\infty },t)-I(-\mathrm{\infty },t)\right]=2,$$
and can be interpreted as the relativistic (quasi) particle with the localized “energy” density
$$\epsilon (x,t)=\frac{1}{2}\left[\frac{1}{\stackrel{~}{c}^2}\left(_tI\right)^2+\left(_xI\right)^2\right]+V\left(I\right),$$
hence
$$\epsilon ^{\left(k\right)}(x,t)=\frac{m^4}{2\lambda \left[1-\left(v/\stackrel{~}{c}\right)^2\right]}\text{sech}^4\left(\frac{m\rho }{\sqrt{2}}\right),$$
(12)
and the conserved total “energy” has the form of the energy of a massive relativistic quasi-particle:
$$E=\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}\epsilon (x,t)\text{d}x=\frac{\mu \stackrel{~}{c}^2}{\sqrt{1-\left(v/\stackrel{~}{c}\right)^2}},\mu ^{\left(k\right)}=\frac{2\sqrt{2}}{3}\frac{m^3}{\lambda },$$
see and references therein.
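As a quick cross-check of Eqs. (8), (10)–(12), the following sketch verifies numerically that the kink (11) solves the field equation and that its total energy matches the quasi-particle formula; the parameter values are arbitrary illustrations and $`\stackrel{~}{c}`$ is set to 1, so the energy comparison is made in natural units.

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

x, t = sp.symbols('x t', real=True)
m, lam, v, ct = 1.3, 0.7, 0.4, 1.0        # sample parameters; c~ set to 1 for the energy check
rho = (x - v*t) / sp.sqrt(1 - (v/ct)**2)
I = m/sp.sqrt(lam) * sp.tanh(m*rho/sp.sqrt(2))         # kink solution (11)
V = lam/4 * (I**2 - m**2/lam)**2                        # phi^4 potential (10)
Vp = sp.diff(V, x) / sp.diff(I, x)                      # chain rule gives V'(I) on the solution

# residual of the field equation (8); should vanish everywhere
eom = sp.lambdify((x, t), sp.diff(I, t, 2)/ct**2 - sp.diff(I, x, 2) + Vp, 'numpy')
print(np.max(np.abs(eom(np.linspace(-5, 5, 101), 0.3))))          # ~ 1e-15

# total energy vs. the moving-kink formula E = mu / sqrt(1 - v^2)  (units with c~ = 1)
eps = sp.lambdify(x, ((sp.diff(I, t)**2/ct**2 + sp.diff(I, x)**2)/2 + V).subs(t, 0), 'numpy')
E = quad(eps, -np.inf, np.inf)[0]
mu = 2*np.sqrt(2)/3 * m**3/lam
print(E, mu/np.sqrt(1 - (v/ct)**2))
```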
It should be noted that for experimental purposes it is convenient to use solutions shifted in such a way as to remove singularities, zeros and negative values wherever this is required by the physical meaning of the electrical quantities. For example, one can consider the shifted kink solution
$$I^{\left(k\right)}(x,t)=\frac{m}{\sqrt{\lambda }}\left[a+\mathrm{tanh}\frac{m\rho }{\sqrt{2}}\right],a=\text{const}>1,$$
thereby the potential (10) is modified only inessentially. As for the whole Lagrangian, it is in any case defined only up to additive and multiplicative constants.
Another, even more interesting example for simulation is the sine-Gordon theory which, first, is nonrenormalizable on the quantum level and, second, admits several nonlinear soliton solutions, proven to be of two types (soliton and doublet), which preserve their initial shape even after interactions with each other, in spite of the fact that the superposition principle is not valid. The potential function has the form
$$V\left(I\right)=\frac{m^4}{\lambda }\left[1-\text{cos}\left(\frac{\sqrt{\lambda }I}{m}\right)\right],$$
(13)
and the scattered solitons and doublets are described respectively by the expressions
$`I^{\left(s\right)}(x,t)={\displaystyle \frac{4m}{\sqrt{\lambda }}}\text{arctan}\mathrm{exp}\left(m\rho \right),`$ (14)
$`I^{\left(d\right)}(x,t)={\displaystyle \frac{4m}{\sqrt{\lambda }}}\text{arctan}\left[{\displaystyle \frac{\text{sin}\left(\frac{umt\stackrel{~}{c}}{\sqrt{1+u^2}}\right)}{u\text{cosh}\left(\frac{mx}{\sqrt{1+u^2}}\right)}}\right],`$ (15)
where $`u`$ is the dimensionless parameter determining the period of the doublet solution (15). As was mentioned, besides the separate solutions (14) and (15), solutions describing the elastic scattering processes between $`N`$ solitons have also been found (see Refs. and references therein). From the viewpoint of the theory of signal systems this means that in the circuit there can exist an arbitrary number of such nonlinear signals, which are not distorted even when transmitting through each other and in the presence of dissipative effects.
The Flux Ratio Method for Determining the Dust Attenuation of Starburst Galaxies
## 1 Introduction
To study galaxies, it is crucial to be able to separate the effects of the dust intrinsic to the galaxy from those associated with the galaxy’s stellar age and metallicity. Currently, the accuracy of separating the stars and dust in galaxies is fairly poor and the study of galaxies has suffered. This is in contrast with studies of individual stars and their associated sightlines in the Milky Way and nearby galaxies for which the standard pair method (Massa, Savage, & Fitzpatrick, 1983) works quite well at determining the effects of dust on the star’s spectral energy distribution (SED). The standard pair method is based on comparing a reddened star’s SED with the SED of an unreddened star with the same spectral type. Application of the standard pair method to galaxies is not possible as each galaxy is the result of a unique evolutionary history and, thus, each has a unique mix of stellar populations and star/gas/dust geometry.
Nevertheless, it would be very advantageous to find a method which would allow one to determine the dust attenuation of an individual galaxy. Such a method would greatly improve the accuracy of different star formation rate measurements. For example, two widely used star formation rate measurements are based on UV and H$`\alpha `$ luminosities. Both are affected by dust and this limits their accuracy (Kennicutt, 1998; Schaerer, 1999). The importance of correcting for the effects of dust in galaxies has gained attention through recent investigations into the redshift dependence of the global star formation rate (Madau, Pozzetti, & Dickinson, 1998; Steidel et al., 1999). The uncertainty in the correction for dust currently dominates the uncertainty in the inferred star formation rate in galaxies (Pettini et al., 1998; Meurer, Heckman, & Calzetti, 1999) and conclusions about the evolution of galaxies (Calzetti & Heckman, 1999).
Initially, the effects of dust in galaxies were removed using a screen geometry. This assumption has been shown to be a dangerous oversimplification as the dust in galaxies is mixed with the stars. Radiative transfer studies have shown that mixing the emitting sources and dust and having a clumpy dust distribution produces highly unscreen-like effects (Witt, Thronson, & Capuano, 1992; Witt & Gordon, 1996; Gordon, Calzetti, & Witt, 1997; Ferrara et al., 1999; Takagi, Arimoto, & Vansevičius, 1999; Witt & Gordon, 1999). For example, the traditional reddening arrows in color-color plots turn into complex, non-linear reddening trajectories. In general, the attenuation curve of a galaxy is not directly proportional to the dust extinction curve and its shape changes as a function of dust column (e.g., Figs. 6 & 7 of Witt & Gordon (1999)).
While the various radiative transfer studies have made it abundantly clear that correcting for the effects of dust in galaxies is hard, none have come up with a method that is not highly dependent on the assumed dust grain characteristics, star/gas/dust geometry, and clumpiness of the dust distribution. This has led to a search for empirical methods. For galaxies with hydrogen emission lines, it is possible to determine the slope and, with radio observations, the strength of the galaxies’ attenuation curves at the emission line wavelengths (Calzetti, Kinney, & Storchi-Bergmann, 1994; Smith et al., 1995). Unfortunately, this method is limited to the select few wavelengths associated with hydrogen emission lines. In the pioneering study of the IUE sample of starburst galaxies (Kinney et al., 1993), Calzetti, Kinney, & Storchi-Bergmann (1994) used a variant of the standard reddened star/unreddened star method to compute the average attenuation curve for these galaxies. This work binned the sample using $`E(BV)`$ values derived from the H$`\alpha `$ and H$`\beta `$ emission lines and assigned the lowest $`E(BV)`$ bin the status of unreddened. While this work was a significant advance in the study of dust in galaxies, it is only applicable to statistical studies of similar samples of starburst galaxies, not individual galaxies (Sawicki & Yee, 1998).
More recently, Meurer, Heckman, & Calzetti (1999) derived a relationship between the slope of the UV spectrum of a starburst galaxy and the attenuation suffered at 1600 Å, $`Att(1600)`$, using the properties of the IUE sample. This slope is parameterized by $`\beta `$ where the UV spectrum is fit to a power law ($`F(\lambda )\lambda ^\beta `$) in the wavelength range between 1200 and 2600 Å (Calzetti, Kinney, & Storchi-Bergmann, 1994). The purpose of Meurer, Heckman, & Calzetti (1999) was to calculate the attenuation suffered by high redshift starburst galaxies using only their UV observations. From our radiative transfer work, we have found that this relationship is strongly dependent on the star/gas/dust geometry, dust grain properties, and dust clumpiness (Witt & Gordon, 1996, 1999) as suspected by Meurer, Heckman, & Calzetti (1999). Fig. 11 of Witt & Gordon (1999) shows the dependence of $`Att(1600)`$ on $`\mathrm{\Delta }\beta `$ ($`=\beta 2.5`$) for various geometries, dust clumpinesses, and dust types. Meurer, Heckman, & Calzetti (1999) used the observed relationship between $`F(IR)/F(1600)`$ and $`\beta `$ for starburst galaxies, combined with a semi-empirical calibration between $`F(IR)/F(1600)`$ and $`Att(1600)`$ to determine the relationship between $`Att(1600)`$ and $`\beta `$. The correlation between $`F(IR)/F(UV)`$ and $`\beta `$ was first introduced by Meurer et al. (1997) where $`F(UV)=F(2200)`$. Witt & Gordon (1999) discovered that the relationship between $`F(IR)/F(1600)`$ and $`Att(1600)`$ was almost completely independent of the star/gas/dust geometry, dust grain properties, and dust clumpiness (see Fig. 12b of Witt & Gordon (1999)). This implies that $`F(IR)/F(1600)`$ is a much better indicator of $`Att(1600)`$ than $`\beta `$.
This opened the possibility that the $`F(IR)/F(\lambda )`$ might be a good measure of $`Att(\lambda )`$ and was the motivation for this paper. Qualitatively, there is good reason to think that a measure based on the flux at a wavelength $`\lambda `$ and the total flux absorbed and re-emitted by dust, $`F(IR)`$, should be a measure of $`Att(\lambda )`$. This is basically a statement of conservation of energy. Evidence that $`F(IR)/F(UV)`$ is a rough indicator of $`Att(UV)`$ in disk galaxies is given by Wang & Heckman (1996). The details of the relationship between $`F(IR)/F(\lambda )`$ and $`Att(\lambda )`$ will be dependent on the stellar, gas, and dust properties of a galaxy. Thus, a calibration of the relationship is necessary.
In §2, we calibrate the relationship between $`F(IR)/F(\lambda )`$ and $`Att(\lambda )`$ for UV, optical, and near-IR wavelengths using a stellar evolutionary synthesis model combined with our dust radiative transfer model. This allowed us to investigate the dependence of the relationship on stellar parameters (age, star formation type, and metallicity) and dust parameters (geometry, local dust distribution, dust type, and the fraction of Lyman continuum photons absorbed by dust). We show a comparison of $`Att(H\alpha )`$, $`Att(H\beta )`$, and $`Att(H\gamma )`$ values determined with this flux ratio method and the radio method (Condon, 1992) for 10 starburst galaxies in §3. In §4, we apply the flux ratio method to construct the UV attenuation curves for 8 starburst galaxies. The implications this work are discussed in §5.
## 2 The Flux Ratio Method
### 2.1 $`F(IR)/F(\lambda )`$ Flux Ratio
In a galaxy, almost all of the photons absorbed by dust are emitted by stars and gas in the UV, optical, and near-IR. This energy heats the dust which then re-emits in the mid- and far-infrared (small and large dust grains). Thus, the ratio of the total infrared flux to the flux at a particular wavelength is
$$\frac{F(IR)}{F(\lambda )}=\frac{a_dF(LyC)+(1-a_d)F(Ly\alpha )+\int _{912\AA }^{\mathrm{\infty }}f(\lambda ^{\prime },0)\left(1-C(\lambda ^{\prime })\right)d\lambda ^{\prime }}{\lambda f(\lambda ,0)C(\lambda )}$$
(1)
where $`F(IR)`$ is the total IR flux in ergs cm<sup>-2</sup> s<sup>-1</sup>, $`F(LyC)`$ is the total unattenuated stellar flux below 912 Å in ergs cm<sup>-2</sup> s<sup>-1</sup>, $`a_d`$ is the fraction of $`F(LyC)`$ absorbed by dust internal to the H II regions (Petrosian, Silk, & Field, 1972; Mathis, 1986), $`F(Ly\alpha )`$ is the $`Ly\alpha `$ emission line flux, $`f(\lambda ,0)`$ is the unattenuated stellar/nebular flux in ergs cm<sup>-2</sup> s<sup>-1</sup> Å<sup>-1</sup>, $`C(\lambda )=10^{-0.4Att(\lambda )}`$, and $`Att(\lambda )`$ is the attenuation at $`\lambda `$ in magnitudes. For emission lines, the denominator of eq. 1 becomes $`(1-a_d)F(\lambda ,0)C(\lambda )`$ where $`F(\lambda ,0)`$ is the intrinsic integrated flux of the emission line. The $`Ly\alpha `$ line is resonantly scattered and, thus, is completely absorbed by the dust internal to the H II regions. Eq. 1 is similar to eq. 3 of Meurer, Heckman, & Calzetti (1999), but includes an additional term to account for the Lyman continuum photons absorbed by dust.
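A direct numerical transcription of Eq. (1) for a stellar-continuum band is sketched below; the power-law SED, the grey attenuation curve, and all flux values are toy inputs used only to illustrate how the ratio is assembled.

```python
import numpy as np

def flux_ratio(wave, f_unatt, att_mag, F_LyC, F_Lya, a_d, band_wave):
    """Eq. (1): F(IR)/F(lambda) for a stellar-continuum band at band_wave (Angstrom).

    wave, f_unatt : unattenuated SED f(lambda,0) sampled above 912 A  [erg/cm2/s/A]
    att_mag       : attenuation Att(lambda) in magnitudes on the same grid
    """
    C = 10.0**(-0.4 * att_mag)                       # C(lambda)
    absorbed = np.trapz(f_unatt * (1.0 - C), wave)   # stellar light absorbed by dust
    F_IR = a_d * F_LyC + (1.0 - a_d) * F_Lya + absorbed
    i = np.argmin(np.abs(wave - band_wave))
    return F_IR / (band_wave * f_unatt[i] * C[i])

# toy illustration: power-law UV-optical SED and a grey attenuation of 1 mag
wave = np.linspace(912.0, 30000.0, 5000)
f_unatt = 1e-14 * (wave / 1600.0)**-2.0
att = np.full_like(wave, 1.0)
print(flux_ratio(wave, f_unatt, att, F_LyC=2e-11, F_Lya=1e-12, a_d=0.25, band_wave=1600.0))
```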
### 2.2 Relationship between $`F(IR)/F(\lambda )`$ and $`Att(\lambda )`$
We can calculate the relationship between $`F(IR)/F(\lambda )`$ and $`Att(\lambda )`$ by using a stellar evolutionary synthesis (SES) model and a dust radiative transfer model. We use the PEGASE SES model (Fioc & Rocca-Volmerange, 1997, 1999) which gives the SEDs of stellar populations with a range of ages, type of star formation (burst/constant), and metallicity. One strength of the PEGASE model is that it computes the continuum and emission lines expected from gas emission as well as the stellar emission. We used a Salpeter IMF for the PEGASE calculations. The SES model SEDs give $`F(LyC)`$, $`f(\lambda ,0)`$, and emission line $`F(\lambda ,0)`$ values. The effects of dust were calculated using the DIRTY radiative transfer model (Witt & Gordon, 1999). The DIRTY model gives the attenuation curves, $`Att(\lambda )`$, for a range of spherical star/gas/dust global geometries (shell, dusty, or cloudy), local dust distribution (homogeneous or clumpy), Milky Way (Cardelli, Clayton, & Mathis, 1989) or Small Magellanic Cloud (Gordon & Clayton, 1998) dust grain characteristics, and dust columns ($`\tau _V=0.25`$–50). The cloudy geometry has dust extending to 0.69 of the system radius and stars extending to the model radius. The dusty geometry has both dust and stars extending to the model radius. This geometry represents a uniform mixture of stars and dust. The shell geometry has stars extending to 0.3 of the model radius and dust extending from 0.3 to 1 of the model radius. These three star/gas/dust geometries are shown pictorially in Figure 1 of Witt & Gordon (1999). Additional details of the DIRTY model calculations can be found in Witt & Gordon (1999).
In Figure 1, we plot the relationship between $`F(IR)/F(\lambda )`$ and $`Att(\lambda )`$ for the Meurer, Heckman, & Calzetti (1999) 1600 Å, HST/WFPC2 F218W, V, and K bands assuming a constant star formation, 10 Myr old, solar metallicity SED, $`a_d=0.25`$, and the full range of dust parameters (see above). The most surprising result is that this relationship is not sensitive to the type of dust (MW/SMC) or the local dust distribution (homogeneous/clumpy). This is true not just for the four bands plotted in Fig. 1, but for all the ultraviolet, optical, and near-infrared. Less surprising is that this relationship in the V and K bands is sensitive to the presence of stars outside the dust. The dusty and shell geometries follow similar curves while the cloudy geometry follows a different curve. For the cloudy geometry, as the attenuation is increased the dominance of the band flux from the stars outside the dust increases to the point where the band flux no longer depends on the attenuation (i.e. the flux from the stars attenuated by dust is much less than the flux from the unattenuated stars). This is not the case for the dusty and shell geometries where the band flux continues to decrease with increasing attenuation since all the stars are inside the dust and attenuated to some degree.
The dependence of the $`F(IR)/F(\lambda )`$ versus $`Att(\lambda )`$ relationship can be sensitive to the shape of the intrinsic SED. Example SEDs for solar metallicity stellar populations are given in Fig. 3. The dependence of $`F(IR)/F(\lambda )`$ on $`Att(\lambda )`$ is illustrated in Figure 2 which shows the dependence of the flux ratio relationship for the 1600 and V bands on age, metallicity, star formation rate, and $`a_d`$ value. In the 1600 Å band, the relationship is quite similar for most choices of the above parameters except for old burst stellar populations (Fig. 2c). Our calibration of $`F(IR)/F(1600)`$ versus $`Att(1600)`$ is indistinguishable from that presented in Meurer, Heckman, & Calzetti (1999) after correcting for the $`\sim `$30% difference between $`F_{FIR}`$ (Helou et al., 1988) and $`F(IR)`$ as $`F_{FIR}`$ does not include the hotter dust detected in the mid-IR. In the V band, the relationship is dependent, in decreasing order of dependence, on age, burst versus constant star formation, metallicity, and value of $`a_d`$. The qualitative dependence of other UV bands ($`\lambda <3000`$ Å) is similar to that seen for the 1600 Å band. The behavior of optical and near-infrared bands is similar to that of the V band with increasing dependence on the above parameters as $`\lambda `$ increases.
The behavior of emission lines is similar to that seen for the V band, but has notable differences. Figure 4 gives the relationship between $`F(IR)/F(H\alpha )`$ and $`Att(H\alpha )`$ for the same parameters plotted in Fig. 2. One obvious difference between the V band and H$`\alpha `$ emission line is that the behavior with age is reversed. In particular, the H$`\alpha `$ emission line is very sensitive to the value of $`a_d`$ since the strength of H$`\alpha `$ is directly proportional to $`(1a_d)`$.
The behavior of the flux ratio versus $`Att(\lambda )`$ relationship can be qualitatively explained fairly easily. The general shape of the curves (see Fig. 1) is seen to be non-linear versus $`F(IR)/F(\lambda )+1`$ below $`Att(\lambda )\sim 1.5`$ and nearly linear versus $`\mathrm{log}[F(IR)/F(\lambda )+1]`$ above $`Att(\lambda )\sim 1.5`$. The non-linearity of the curve is due to the changing relationship between the effective wavelength of $`F(IR)`$ energy absorption and that of $`Att(\lambda )`$. The linear portion of the curve is in the realm where $`F(IR)`$ is changing slowly (most of the galaxy’s luminosity is now being emitted in the IR), but $`F(\lambda )`$ continues to decrease due to the steady increase of $`Att(\lambda )`$. Thus, above $`Att(\lambda )\sim 1.5`$ the curves for all wavelengths have the same slope but different offsets reflecting the contributions different wavelengths make to $`F(IR)`$. Below $`Att(\lambda )\sim 1.5`$, the effective wavelength of the 1600 and F218W bands is similar to that of the $`F(IR)`$ energy absorption resulting in a nearly linear relationship. This is not the case for the V and K bands where their effective wavelengths are much larger than that of the $`F(IR)`$ energy absorption and, therefore, the V and K bands have non-linear relationships below $`Att(\lambda )\sim 1.5`$. The different behaviors of the star/gas/dust geometry relationships (see Fig. 1) are due to the presence of stars outside the dust in the cloudy geometry and the lack of external stars in the shell and dusty geometries.
The behavior of the relationship for different stellar populations (Fig. 2 & 4) can be easily explained using the same arguments used above. The invariance of the relationship for the 1600 band is a reflection of the dominance of the UV in the $`F(IR)`$ energy absorption. The only time when the 1600 relationship is not invariant is for old burst stellar populations where the lack of significant UV flux means that the optical dominates the $`F(IR)`$ energy absorption (Fig. 3b). This is confirmed by the linear behavior over the entire $`Att(\lambda )`$ range of the V band curves for old stellar populations (Fig. 2d). The separation of the curves in the V band is the result of the different contributions the V band flux makes to the $`F(IR)`$ absorbed energy for different stellar populations. The older the stellar population, the more the optical contributes to the $`F(IR)`$ and, thus, the more linear the V band relationship is below $`Att(V)\sim 1.5`$.
### 2.3 Fits to the Relationships
In order to use this method, we have fit the relationship between $`F(IR)/F(\lambda )`$ and $`Att(\lambda )`$ for combinations of stellar age, metallicity, burst or constant star formation, and values of $`a_d`$. We chose to fit the combination of the dusty/shell geometry curves. However, this does not limit the use of our fits in the UV since the cloudy geometry curves follow the dusty/shell geometry curves. This does limit the use of our fits for wavelengths longer than $`\sim `$3500 Å to cases where the dominant stellar sources are embedded in the dust such as starburst galaxies. The curvature of the relationship at $`Att(\lambda )\sim 1`$ required us to use a combination of a 3rd order polynomial for $`Att(\lambda )<1.75`$ and a 2nd order polynomial for $`Att(\lambda )>1`$. As a result the fit is:
$$Att(\lambda )=\{\begin{array}{cc}A(x)\hfill & x<x_1\hfill \\ w(x)A(x)+(1-w(x))B(x)\hfill & x_1<x<x_2\hfill \\ B(x)\hfill & x>x_2\hfill \end{array}$$
(2)
where
$`x`$ $`=`$ $`F(IR)/F(\lambda ),`$
$`x_1`$ $`=`$ $`x[Att(\lambda )=1]`$
$`x_2`$ $`=`$ $`x[Att(\lambda )=1.75]`$
$`A(x)`$ $`=`$ $`a_1+b_1x+c_1x^2+d_1x^3,`$
$`B(x)`$ $`=`$ $`a_2+b_2(\mathrm{log}x)+c_2(\mathrm{log}x)^2,\text{ and}`$
$`w(x)`$ $`=`$ $`(x_2-x)/(x_2-x_1).`$
For each curve fit with equation 2, 9 numbers result: 4 coefficients for $`A(x)`$, 3 coefficients for $`B(x)`$, and the $`F(IR)/F(\lambda )`$ values where $`Att(\lambda )=1`$ and $`1.75`$ ($`x_1`$ and $`x_2`$). Computing the $`Att(\lambda )`$ value corresponding to a particular value of $`F(IR)/F(\lambda )`$ then involves specifying the stellar age, metallicity, star formation type, and value of $`a_d`$ which specify the appropriate fit coefficients to use. The parameters of these fits are available from the lead author as well as an IDL function which implements the calibration.
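A minimal Python transcription of equation 2 is sketched below. It assumes the base-10 logarithm in $`B(x)`$ and takes the nine fit numbers as inputs; the actual coefficients must be obtained from the tabulated fits mentioned above.

```python
import numpy as np

def attenuation(x, a, b, x1, x2):
    """Att(lambda) from x = F(IR)/F(lambda) via the piecewise fit of eq. 2.

    a      : (a1, b1, c1, d1), coefficients of the cubic A(x)
    b      : (a2, b2, c2), coefficients of the quadratic in log10(x), B(x)
    x1, x2 : values of F(IR)/F(lambda) at which Att(lambda) = 1 and 1.75
    """
    A = a[0] + a[1] * x + a[2] * x**2 + a[3] * x**3
    B = b[0] + b[1] * np.log10(x) + b[2] * np.log10(x) ** 2
    if x < x1:
        return A
    if x > x2:
        return B
    w = (x2 - x) / (x2 - x1)        # weight interpolating between the two branches
    return w * A + (1.0 - w) * B
```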
## 3 Comparison with Radio Method
While the flux ratio method is relatively simple and makes sense qualitatively, to be truly convincing, we need an independent method for determining the attenuation for comparison. Fortunately, radio observations combined with measured hydrogen emission line fluxes allow just such a test. The radio method (Condon, 1992) is based on the measurement of the free-free radio flux from H II regions and the assumption of Case B recombination (Osterbrock, 1989). From the thermal flux, the number of Lyman continuum photons absorbed by the gas can be calculated and, thus, the intrinsic fluxes of the hydrogen emission lines. Comparison of the intrinsic and observed line fluxes gives the attenuation at the emission line wavelength. The major source of uncertainty in the radio method is that radio observations contain both thermal (free-free) and nonthermal (synchrotron) components. For example, approximately a quarter of the flux measured at 4.85 GHz has a thermal origin. The decomposition of the measured radio flux into thermal and nonthermal components imparts a factor of two uncertainty in the resulting thermal flux (Condon, 1992).
Unfortunately, determining attenuations using the flux ratio method is the most uncertain for hydrogen emission line fluxes. This is due to the lack of knowledge of the value of $`a_d`$, the fraction of Lyman continuum photons absorbed by dust (Fig. 4). We can take guidance from the work done by DeGioia-Eastwood (1992) on six Large Magellanic Cloud H II regions. She found that $`a_d`$ ranges from 0.21 – 0.55 using the approximation of Petrosian, Silk, & Field (1972). We will use this range of $`a_d`$ values in the calculations below.
To do this comparison, we need galaxies which have hydrogen emission line fluxes, infrared, and radio observations. In the IUE sample of starburst galaxies (Kinney et al., 1993), there are 10 galaxies with Balmer emission line (Storchi-Bergmann, Calzetti, & Kinney, 1994; Mcquade, Calzetti, & Kinney, 1995), IRAS (Calzetti et al., 1995), and 4.85 GHz observations (Gregory & Condon, 1991; Wright et al., 1994, 1995, 1996). The 10 galaxies are NGC 1313, 1569, 1614, 3256, 4194, 5236, 5253, 6052, 7552, & 7714. The emission lines were measured in a $`10\mathrm{}\times 20\mathrm{}`$ aperture which was usually large enough to include the entire starburst region but not the entire galaxy. While the IRAS and 4.85 GHz observations usually encompass the entire galaxy, the majority of the IRAS and radio flux emerges from the starburst region which should minimize the importance of the aperture mismatch (Calzetti et al., 1995).
Figure 5 shows the comparison between the attenuations suffered by the H$`\alpha `$, H$`\beta `$, and H$`\gamma `$ emission lines in the 10 galaxies as calculated from the flux ratio method and the radio method. While the measurements of each galaxy’s three Balmer emission lines are related (through Case B recombination theory), plotting all three reduces the observational uncertainty due to the emission line flux measurements and increases the range of attenuations tested. For the radio method, we calculated the intrinsic emission line strengths using eqs. 3 & 5 of Condon (1992) assuming a $`T_e=10^4K`$ and Table 4.2 of Osterbrock (1989). The attenuations were then easily calculated from the intrinsic and observed emission line fluxes.
For the flux ratio method, the $`F(IR)`$ flux was computed by integrating each galaxy’s 8 to 1000 $`\mathrm{\mu m}`$ SED after extrapolating the IRAS fluxes to longer wavelengths using a modified black body (dust emissivity $`\propto \lambda ^{-1}`$). The temperature and flux level of the modified black body were determined from the IRAS 60 and 100 $`\mathrm{\mu m}`$ fluxes. ISO observations of starburst galaxies support the use of a single temperature for the large dust grain emission (Krügel et al., 1998). The 10 galaxies’ $`F(IR)`$ fluxes were 1.6 to 2.5 times larger than their FIR fluxes (as defined by Helou et al. (1988)) due to our inclusion of the mid-infrared hot, small dust grains. We assumed the 10 galaxies were undergoing constant star formation and used their measured metallicities (Calzetti et al., 1995) for the calculation of their attenuations from their measured infrared to emission line flux ratios. The error bars in Fig. 5 for the flux ratio method reflect the range of attenuations possible, assuming the galaxy age is between 1 Myr and 10 Gyr and $`a_d`$ values between 0.21 and 0.55.
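The following sketch illustrates one way of carrying out such an integration with Python/SciPy. The helper names, the log-log interpolation through the IRAS points, the flat extrapolation of $`S_\nu `$ between 8 and 12 $`\mathrm{\mu m}`$, and the example flux densities are our own simplifying assumptions, not the exact procedure used for Fig. 5.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

h, c, k = 6.626e-27, 2.998e10, 1.381e-16          # cgs constants

def bnu(nu, T):
    """Planck function B_nu(T)."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def fir_total(s12, s25, s60, s100, beta=1.0):
    """Approximate 8-1000 micron flux [erg cm^-2 s^-1] from IRAS flux densities [Jy]."""
    lam = np.array([12.0, 25.0, 60.0, 100.0]) * 1e-4        # cm
    nu = c / lam
    s = np.array([s12, s25, s60, s100]) * 1e-23              # erg cm^-2 s^-1 Hz^-1

    # dust temperature of the nu^beta modified blackbody from the 60/100 micron color
    color = s[2] / s[3]
    T = brentq(lambda T: (nu[2] / nu[3])**beta * bnu(nu[2], T) / bnu(nu[3], T) - color,
               5.0, 100.0)

    # 8-100 micron: log-log interpolation through the IRAS points
    # (S_nu is held constant between 8 and 12 micron -- a crude assumption)
    nu_short = np.geomspace(c / 100e-4, c / 8e-4, 300)
    s_short = np.exp(np.interp(np.log(nu_short), np.log(nu[::-1]), np.log(s[::-1])))
    f_short = trapezoid(s_short, nu_short)

    # 100-1000 micron: modified-blackbody tail normalised to the 100 micron point
    nu_long = np.geomspace(c / 1000e-4, c / 100e-4, 300)
    s_long = s[3] * (nu_long / nu[3])**beta * bnu(nu_long, T) / bnu(nu[3], T)
    f_long = trapezoid(s_long, nu_long)

    return f_short + f_long

# hypothetical IRAS flux densities, for illustration only
print(fir_total(2.0, 4.5, 40.0, 60.0))
```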
The attenuations calculated for the two methods agree well within their associated uncertainties. This gives confidence that the flux ratio method for calculating attenuations is valid. Of course, this conclusion would be strengthened with a larger sample of galaxies and observations with similar apertures at optical, infrared, and radio wavelengths. Such infrared observations will become possible with the launch of SIRTF.
## 4 Application to Individual Galaxies
The application of the flux ratio method to determining the UV attenuations of individual galaxies is straightforward. Due to the insensitivity in the UV of this method to the star, gas, or dust parameters (Figs. 1a,b & 2a,c,e), the observed $`F(IR)/F(UV)`$ is directly related to $`Att(UV)`$. This is not the case for optical and near-IR wavelengths where this method is sensitive to the intrinsic SED shape (Figs. 2b,d,f) and, to a lesser extent, the geometry of the star, gas and dust (Fig. 1c,d).
In order to construct the full UV through near-IR attenuation curve for a galaxy, an iterative procedure must be followed (a schematic sketch of this loop is given after the list). The steps of the iterative procedure are:
1. Assume an intrinsic SED shape (stellar age, metallicity, star formation type, and $`a_d`$ value),
2. Construct a candidate attenuation curve using the observed UV-NIR $`F(IR)/F(\lambda )`$ and our calibration of $`Att(\lambda )`$ versus $`F(IR)/F(\lambda )`$,
3. Deredden the observed UV-NIR SED with the candidate attenuation curve,
4. Compare the dereddened SED (step 3) with the assumed SED (step 1),
5. Repeat steps 1-4 to find the attenuation curve which produces the best match between the dereddened SED and the assumed SED.
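A schematic and purely illustrative implementation of this loop might look as follows; all function and variable names are hypothetical, and the goodness-of-fit measure is only one possible choice.

```python
def fit_attenuation_curve(obs_flux, f_ir, model_grid, att_from_ratio):
    """Schematic version of the iterative procedure above (illustrative only).

    obs_flux       : dict {band: observed flux} from the UV to the near-IR
    f_ir           : total infrared flux of the galaxy
    model_grid     : iterable of candidate intrinsic SEDs, each a dict
                     {band: intrinsic flux} for an assumed age, Z, SF type and a_d
    att_from_ratio : function (band, ratio, model) -> Att(band) implementing the
                     calibration of Sect. 2.3 for that model's parameters
    """
    best = None
    for model in model_grid:                               # step 1: assume an intrinsic SED
        att = {band: att_from_ratio(band, f_ir / obs_flux[band], model)
               for band in obs_flux}                       # step 2: candidate attenuation curve
        dered = {band: obs_flux[band] * 10 ** (0.4 * att[band])
                 for band in obs_flux}                     # step 3: deredden the observed SED
        # step 4: compare dereddened and assumed SED shapes (normalised residual)
        scale = sum(model[b] for b in obs_flux) / sum(dered[b] for b in obs_flux)
        resid = sum((scale * dered[b] / model[b] - 1.0) ** 2 for b in obs_flux)
        if best is None or resid < best[0]:                # step 5: keep the best match
            best = (resid, model, att)
    return best
```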
We attempted to apply this iterative method to the 10 starburst galaxies listed in the previous section as they have UV, optical, near-infrared, infrared, and radio observations. We were unable to find fits which would simultaneously fit the UV/optical/NIR continuum and the H$`\alpha `$ emission attenuations derived from the radio observations. To do the fitting we used the measured metallicities of the galaxies and allowed the galaxy’s age and type of star formation as well as the value of $`a_d`$ to vary. The fact that we could not find fits to any of the 10 galaxies is an indication that at least two stellar populations are contributing to the observed SED. But, the correlation between the radio and flux ratio H$`\alpha `$ attenuations is strong evidence that only one of these stellar populations is ionizing the gas and is the main source for the dust heating (see Fig. 5). This stellar population is likely the starburst and the other stellar population is likely that of the underlying galaxy. The existence of two stellar populations, each with different stellar parameters and attenuation curves, complicates the fitting to the point where the number of free parameters can exceed the number of observed data points. This illustrates one of the main difficulties of applying the flux ratio method. The calibration of the flux ratio method is based on the assumption that there is a single stellar population responsible for the UV-NIR continuum and IR dust emission. When a second stellar population contributes to the continuum or IR dust emission, applying the flux ratio method will become more difficult.
While we cannot determine the UV-NIR attenuation curves for the 10 starburst galaxies, we can determine their UV attenuation curves for the following reason. The identification of the stellar population heating the dust as the same population ionizing the gas leads us to conclude that the same population also emits the majority of the galaxies’ UV continua since UV photons are the main source of dust heating for starburst galaxies (see §3). Figure 6 gives the attenuation curves for 8 of the 10 starburst galaxies used in the previous section. The other 2 galaxies were excluded as they did not have near UV data. All of the curves lack a substantial 2175 Å bump in agreement with previous work (Calzetti, Kinney, & Storchi-Bergmann, 1994; Gordon, Calzetti, & Witt, 1997). There is also a trend towards steeper attenuation curves as $`Att(2850)`$ decreases which is the behavior predicted by Witt & Gordon (1999). The curves are similar to the “Calzetti attenuation curve” (Calzetti, Kinney, & Storchi-Bergmann, 1994; Calzetti, 1997) derived for the IUE sample of galaxies. As the Calzetti attenuation curve is an average, the scatter of our individual curves is likely to be real. While the 8 galaxies in Fig. 6 were included in the Calzetti (1997) work, the method used to derive the Calzetti attenuation curve was quite different from the flux ratio method. This is further evidence that the flux ratio method can determine the attenuation curves of starburst galaxies.
## 5 Discussion
We have presented a method which uses the $`F(IR)/F(\lambda )`$ flux ratio to determine $`Att(\lambda )`$ for individual starburst galaxies. The major strengths of this method is that it is almost completely independent of the type of dust (MW/SMC) or the local distribution of dust (homogeneous/clumpy), and is only weakly dependent on the global distribution of stars and dust (presence/lack of stars outside dust). In the ultraviolet, this method is independent of the intrinsic stellar SED except for the case of very old burst populations. In the optical/near-IR, this method is dependent on the intrinsic stellar SED shape. The flux ratio method is not based on the properties of the nebular emission (as is the radio method), but on the properties of the stellar continuum and IR dust emission. As a result, it is applicable to any wavelength from the UV to near-IR and not just wavelengths with hydrogen emission lines.
A major limitation of the flux ratio method is that the majority of the observed UV through far-infrared flux must originate from a single stellar population (either burst or constant star formation). An example of a case where the flux ratio method would not be applicable would be a heavily embedded starburst in a galaxy with a second older, less embedded stellar population. At UV and IR wavelengths the starburst would dominate, but at optical and near-IR wavelengths the older population would dominate. Another possible limitation is that the measured infrared flux is assumed to be a direct measure of the flux absorbed by the dust. If the infrared radiation is not emitted symmetrically (e.g., for non-symmetrically distributed dust which is optically thick in the infrared), then the measured infrared flux will not be a direct measure of the flux absorbed by the dust. The assumption that the infrared flux is a direct measure of the flux absorbed by the dust is crucial to the accuracy of the flux ratio method. It is possible to account for these weaknesses by increasing the complexity of the modeling by adding additional stellar populations and/or complex dust geometries. Such increases in the complexity of the modeling will necessarily require more detailed spectral and spatial observations as the number of model parameters increases.
For any starburst galaxy with UV and IR observations, the UV attenuation curve can be calculated using the flux ratio method. Starburst galaxies are likely to be the best case for applying the flux ratio method as the intensity of the starburst greatly increases the probability that the UV and IR flux originate from only the starburst population. If the parameters (age, metallicity, etc) of the intrinsic SED shape can be determined and the contamination from the underlying stellar population removed, then the attenuation of the starburst galaxy can be determined not only for the UV, but also for the optical and near-IR.
Thus, the flux ratio method seems very promising for determining the dust attenuations of individual galaxies. The easiest way to ensure the basic assumptions of our calibration of the flux ratio method are met is to take high spatial resolution observations of starburst regions in nearby galaxies or integrated galaxy observations of intense starburst galaxies at any distance. This would ensure that the UV through far-infrared flux originates from the starburst and not the host galaxy. Examples of these observations would be super star clusters in nearby galaxies (Calzetti et al., 1997) and observations of high-z starburst galaxies which have been shown to be similar to local starbursts except more intense (Heckman et al., 1998). Currently, both types of UV, optical, and near-IR observations can and have been done, but the far-infrared observations needed await SIRTF. SIRTF will have the spatial resolution and sensitivity to do both types of observations.
The ability to determine the UV dust attenuation curve for individual starburst galaxies will facilitate the study of dust in different star formation environments. The traditional explanation for the differences seen in the dust extinction between the Milky Way, LMC, and SMC has been that the different metallicities of the three galaxies lead to different dust grains. Work on starburst galaxies with metallicities between 0.1 and 2 times solar which found most of these galaxies possess dust which lacks a 2175 Å bump (Calzetti, Kinney, & Storchi-Bergmann, 1994; Gordon, Calzetti, & Witt, 1997) seriously called this explanation into question. Subsequent work on the extinction curves in both the SMC (Gordon & Clayton, 1998) and LMC (Misselt, Clayton, & Gordon, 1999) found that the extinction curves toward star forming regions in both galaxies were systematically different than those toward more quiescent regions. These results imply that dust near sites of active star formation is different due to processing (Gordon, Calzetti, & Witt, 1997) of existing dust or formation of new dust (Dwek, 1998). The processing interpretation is supported by recent work in the Milky Way along low density sightlines toward the Galactic Center (Clayton, Gordon, & Wolff, 1999). This work found that sightlines which show evidence of processing (probed by N(Ca II)/N(Na I)) have weaker 2175 Å bumps and stronger far-UV extinctions than most other Milky Way sightlines (Cardelli, Clayton, & Mathis, 1989). The actual processing mechanism is not simple as the dust towards the most intense star formation in the LMC (30 Dor) has a weak 2175 Å bump, but the dust towards the most intense star formation in the SMC, which has only 10% the strength of 30 Dor, has no 2175 Å bump. In order to completely characterize the dust near starbursts, attenuation curves for a large sample of starbursts galaxies with a range of metallicity, dust content, and starburst strength are needed.
In conjunction with investigating the impact environment has on dust properties, the ability to determine individual starburst galaxy attenuation curves will simplify the study of the starburst phenomenon. By being able to remove the effects of dust accurately, the age and strength of starburst galaxies and regions in galaxies can be determined with confidence. In the realm of high redshift starburst galaxies ($`z>2.5`$), the ability to determine the dust attenuation of individual galaxies will arrive with the advent of deep SIRTF/MIPS imaging of fields with existing rest-frame UV imaging (e.g., Hubble Deep Fields). The currently large uncertainty on the global star formation history of the universe due to the effects of dust on starburst galaxies will be greatly reduced (Madau, Pozzetti, & Dickinson, 1998; Pettini et al., 1998; Steidel et al., 1999).
This work benefited from discussions with Daniela Calzetti and Gerhardt Meurer. Support for this work was provided by NASA through LTSAP grant NAG5-7933 and archival grant AR-08002.01-96A from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555.
## 1 Introduction
Cepheids and star clusters are very important objects for empirically testing many fundamental problems in astronomy. Since the period-luminosity relation for Cepheids was discovered by Leavitt (1912), they have become one of the most important sources of information about distances in the nearby Universe. Observations of Cepheids also provide empirical constraints on the theory of stellar structure, evolution, etc. On the other hand, star clusters are ideal tracers of stellar evolution and an independent source of information about the distances, ages, chemical composition, interstellar absorption, etc. of the galaxies where they reside.
Cepheids belonging to star clusters are especially worthy of detailed study. For instance, due to their well defined evolutionary phase they provide precise information on age, superior to that obtained with the standard procedure of isochrone fitting. They may also help in studies of cluster dynamics.
Unfortunately the number of Cepheids located in the regions of star clusters is still small in both the Magellanic Clouds and the Galaxy. The observations are highly inhomogeneous, obtained by many astronomers using many different instruments. Microlensing surveys make it possible to select large numbers of variable stars, in particular those from the fields of star clusters. Following the list of 127 eclipsing systems in optical coincidence with star clusters from the SMC (Pietrzyński and Udalski 1999), in this paper we present Cepheids located in the regions of star clusters in the Magellanic Clouds.
## 2 Observational Data
The photometric data used in this paper were collected during the OGLE-II microlensing survey with the 1.3 m Warsaw telescope located at the Las Campanas Observatory, Chile, which is operated by the Carnegie Institution of Washington. The telescope was equipped with a $`2048\times 2048`$ CCD camera working in driftscan mode. A detailed description of the instrumental system and the OGLE-II project was presented by Udalski, Kubiak and Szymański (1997).
Regions of about 4.5 and 2.4 square degrees in the LMC and SMC, respectively, covering most of the bars of these galaxies, have been monitored regularly since January 1997 through the standard BVI filters. Coordinates of the observed fields and the schematic maps of the LMC and SMC with contours of the observed fields can be found in Udalski et al. (1999c,d). The data reduction pipeline and data quality tests of the SMC photometry are described in Udalski et al. (1998). The quality of the LMC data is similar and it will be described with the release of stellar maps of the LMC in the near future (Udalski et al. in preparation). In particular, the accuracy of the transformation to the standard system is about 0.01–0.02 mag for all BVI-bands.
## 3 Cepheids in the Magellanic Cloud Star Clusters
Tables 1 and 2 list Cepheids located in the close neighborhood of star clusters of the LMC and SMC, respectively. The lists were constructed based on Catalogs of Star Clusters from the LMC (Pietrzyński et al. 1999) and SMC (Pietrzyński et al. 1998) and Catalogs of Cepheids from the LMC and SMC (Udalski et al. 1999c,d).
Cluster Cepheids were extracted from the Catalog of Cepheids when the distance of a given object on the sky from the center of a given cluster was smaller than 1.5 cluster radii. Besides the classical Cepheids listed in the Catalogs, double-mode and second overtone objects from the SMC (Udalski et al. 1999a,b) and LMC (Udalski et al. in preparation) were also checked.
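For illustration, this selection amounts to a simple positional cross-match; the short Python sketch below uses hypothetical argument names and assumes that Cepheid and cluster positions are given in a common coordinate system.

```python
import numpy as np

def cluster_cepheids(ceph_x, ceph_y, ceph_id, cl_x, cl_y, cl_r, cl_name, limit=1.5):
    """Select Cepheids lying within `limit` cluster radii of a cluster center.

    Positions and cluster radii are assumed to be expressed in the same
    (pixel or angular) units; illustrative sketch only.
    """
    ceph_x, ceph_y = np.asarray(ceph_x), np.asarray(ceph_y)
    matches = []
    for cx, cy, cr, cname in zip(cl_x, cl_y, cl_r, cl_name):
        d = np.hypot(ceph_x - cx, ceph_y - cy)
        for i in np.flatnonzero(d < limit * cr):
            matches.append((cname, ceph_id[i], d[i] / cr))   # distance in cluster radii
    return matches
```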
204 and 132 Cepheids in the LMC and SMC, respectively, satisfied our criterion. Basic parameters of these objects are given in Tables 1 and 2. The first column is the cluster designation according to the OGLE scheme. Cross-identification of clusters with other catalogs can be found in the Catalogs of Star Clusters. The star ID number (OGLE identification: field and number) and the distance from the cluster center, measured in units of the cluster radius, are given in columns 2, 3 and 4. Periods, zero phases corresponding to maximum brightness, VI photometry, interstellar reddening and classification taken from the Catalogs of Cepheids (Udalski et al. 1999c,d) are presented in the following columns. The FU, FO, BR and FA symbols in the last column indicate that a given object is a classical Cepheid pulsating in the fundamental mode or the first overtone mode, or that it is brighter than FO or fainter than FU, respectively. DM indicates a double mode Cepheid while SO – a second overtone object. For the sake of completeness we additionally included in the SMC list one Cepheid which is likely a NGC346 member. A large part of that cluster is located outside the OGLE-II fields and therefore it was not included in the Catalog of Star Clusters from the SMC.
## 4 Conclusions
We present lists of Cepheids located in the close neighborhood of star clusters from the 4.5 square degree field of the LMC and the 2.4 square degree area of the SMC. The presented Cepheids constitute thus far the most complete sample of such objects with homogeneous observational data and high statistical completeness. The sample is very well suited for further detailed studies. Results of the analysis of these objects will be presented in separate papers.
Photometry of Cepheids and star clusters in the LMC and SMC is available from the OGLE Internet archive: http://www.astrouw.edu.pl/~ogle
or its US mirror http://www.astro.princeton.edu/~ogle
Acknowledgements. The paper was partly supported by the KBN grants: 2P03D00814 to A. Udalski and 2P03D00617 to G. Pietrzyński. Partial support for the OGLE project was provided with the NSF grant AST-9820314 to B. Paczyński.
## REFERENCES
* Leavitt, H.S. 1912, Harvard Cir, 173.
* Pietrzyński, G., and Udalski, A. 1999, Acta Astron., 49, 149.
* Pietrzyński, G., Udalski, A., Kubiak, M., Szymański, M., Woźniak, P., and Żebruń, K. 1998, Acta Astron., 48, 175.
* Pietrzyński, G., Udalski, A., Kubiak, M., Szymański, M., Woźniak, P., and Żebruń, K. 1999, Acta Astron., 49, submitted, astro-ph/9912187.
* Udalski, A., Kubiak, M., and Szymański, M. 1997, Acta Astron., 47, 319.
* Udalski, A., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1998, Acta Astron., 48, 147.
* Udalski, A., Soszyński, I., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1999a, Acta Astron., 49, 1.
* Udalski, A., Soszyński, I., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1999b, Acta Astron., 49, 45.
* Udalski, A., Soszyński, I., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1999c, Acta Astron., 49, 223.
* Udalski, A., Soszyński, I., Szymański, M., Kubiak, M., Pietrzyński, G., Woźniak, P., and Żebruń, K. 1999d, Acta Astron., 49, submitted, astro-ph/9912096.
# 𝜅-(BEDT-TTF)2Cu[N(CN)2]Br: a Fully Gapped Strong-Coupling Superconductor
## Abstract
High-resolution specific-heat measurements of the organic superconductor $`\kappa \text{-(BEDT-TTF)}_2\text{Cu[N(CN)}_2\text{]Br}`$ in the superconducting ($`B=0`$) and normal ($`B=14`$ T) state show a clearly resolvable anomaly at $`T_c=11.5`$ K and an electronic contribution, $`C_{es}`$, which can be reasonably well described by strong-coupling BCS theory. Most importantly, $`C_{es}`$ vanishes exponentially in the superconducting state which gives evidence for a fully gapped order parameter.
Since the discovery of superconductivity in organic metals about 20 years ago, the question of the nature of this state has been one of the most intriguing problems in this class of materials. The close neighborhood of antiferromagnetically ordered states in the pressure-temperature phase diagram has spurred speculations on a Cooper-pair coupling which is mediated by antiferromagnetic fluctuations rather than by conventional electron-phonon coupling. This notion gained additional support from the growing evidence for unconventional behavior of the high-$`T_c`$ cuprates and heavy-fermion superconductors. A large number of experiments, especially on the quasi-two-dimensional (2D) organic materials, were initiated to elucidate the question of the symmetry of the order parameter, i.e. the possible existence of gap nodes in the superconducting state. The outcome is rather controversial, with an approximately equal distribution of reports which present results in line with conventional BCS-like behavior and others giving support for an unconventional state. Here, the term ‘unconventional superconductivity’ is used to denote that either a non-phononic Cooper-pair attraction is present or that, besides the gauge symmetry, additional symmetries are broken at $`T_\mathrm{c}`$.
The most studied family of the 2D organic charge-transfer salts is the $`\kappa `$-phase based on the donor molecule BEDT-TTF (bisethylenedithio-tetrathiafulvalene or ET for short). Materials of this phase reveal a unique phase diagram with $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br, the superconductor with the highest transition temperature ($`T_c=11.5`$ K) in this class, being close to an antiferromagnetic (presumably) Mott-insulating ground state. This direct neighborhood of competing ground states strongly motivated the speculations on a non-phononic pairing mechanism.
Results especially in favour of unconventional behavior were supplied by <sup>13</sup>C-NMR experiments on $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br. The NMR data were obtained with the necessarily applied field along the BEDT-TTF planes. For this field orientation it is believed that the vortex lattice is trapped in the so-called lock-in state and that one can thereby avoid additional spin-relaxation processes due to the otherwise present flux-line motion. All three experiments consistently showed a non-exponential, i.e., non-BCS-like, decrease of the spin-lattice relaxation rate $`1/T_1`$. The data could approximately be described by a $`1/T_1\propto T^3`$ dependence which was interpreted as an indication of $`d`$-wave pairing with line nodes in the energy gap. Accordingly, these line nodes should lead to a $`T^2`$ behavior of the electronic specific heat in the superconducting state, $`C_{es}`$. Recently, specific-heat data were indeed reported which seemingly showed an approximately $`T^2`$ dependence of $`C_{es}`$. In that experiment, however, the phonon specific heat of $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br was estimated by measuring a quench-cooled non-superconducting deuterated sample which is just on the insulating side of the above-mentioned phase diagram.
Specific-heat experiments are an especially powerful method to decide whether nodes of the superconducting gap are present or not. If this integral technique reveals an exponential dependence of $`C_{es}`$, nodes of the order parameter, i.e., points where the superconducting gap becomes zero, can unequivocally be ruled out. On the other hand, care has to be taken when a non-exponential behavior of $`C_{es}`$ is observed. Besides the existence of gap nodes, spurious effects like an incompletely superconducting sample or an improper subtraction of non-electronic specific-heat contributions may lead to wrong conclusions. The present experiment, i.e., the measurement of the specific heat of one single crystal of $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br both in the superconducting ($`B=0`$) and in the normal state at a magnetic field of 14 T, was initiated in order to obtain a reliable and definitive answer to the question of the possible existence of gap nodes.
Care was taken to reduce the heat capacity of the sample holder. This enabled us to measure one single crystal of 3.26 mg which contributed 50–70% to the total heat capacity. The heat capacity of the empty sample holder, which consists of a sapphire plate with a thin manganin wire (20 $`\mu `$m diameter) as heater and a RuO<sub>2</sub> resistor as thermometer, was measured in all relevant fields. The RuO<sub>2</sub> thermometer, which shows only a small field dependence in the experimental range, was calibrated in fields up to 14 T in steps of 1 T. The specific heat was measured in a <sup>4</sup>He cryostat equipped with a 14 T superconducting magnet by the quasi-adiabatic heat-pulse technique. The temperature resolution of about $`\mathrm{\Delta }T/T<1\times 10^{-5}`$ prevents any rounding effects at the transition due to the experiment.
The specific heat, $`C`$, between 1.7 and 21 K in $`B=0`$ and $`B=14`$ T is shown in Fig. 1. The upper critical field of $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br is $`B_{c2}=(10\pm 2)`$ T, which can be estimated from the field dependence of our low-temperature $`C`$ data (not shown) and which is in line with earlier estimates. Therefore, the data in $`B=14`$ T are in the normal state comprising the electronic and the phononic contribution truly relevant for the data analysis of this special sample. From our data we determine a Sommerfeld coefficient $`\gamma =(25\pm 2)`$ mJ mol<sup>-1</sup> K<sup>-2</sup> and a Debye temperature of about $`\mathrm{\Theta }_D=(200\pm 10)`$ K. These values agree within error bars with earlier literature data. The uncertainties in our values originate in the limited $`T`$ range where we observe a linear plus a cubic temperature dependence of $`C`$. Already at about 3 K we observe a deviation from the cubic Debye law, i.e., an additional phononic contribution. These low-lying optical phonon modes are well known from Raman-scattering investigations and previous specific-heat studies of other organic superconductors (see Refs. for details). At very low temperatures, the nuclear magnetic moments of the hydrogen atoms of the BEDT-TTF molecules should contribute to a Schottky anomaly due to hyperfine interactions (see for details). In 14 T, this hyperfine contribution would be about 3.5% of the total specific heat at 2 K. In our experiment, as well as in previous work, no indication of a low-temperature upturn of the $`C`$ data was observed for this field. This is most probably caused by a too long spin-lattice relaxation time compared to the thermal relaxation time of the sample to the bath.
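The extraction of $`\gamma `$ and $`\mathrm{\Theta }_D`$ from the low-temperature normal-state data amounts to a linear fit of $`C/T`$ versus $`T^2`$. The Python sketch below demonstrates the procedure on synthetic data generated from the numbers quoted above; the atom count per formula unit entering the Debye formula is our own rough estimate and should be adapted to whatever convention is actually used.

```python
import numpy as np

R = 8.314e3                     # gas constant in mJ mol^-1 K^-1
n_atoms = 59                    # assumed atoms per formula unit of kappa-(BEDT-TTF)2Cu[N(CN)2]Br
gamma_in, theta_in = 25.0, 200.0            # mJ mol^-1 K^-2 and K, as quoted in the text
beta_in = 12 * np.pi**4 * n_atoms * R / (5 * theta_in**3)

T = np.linspace(1.7, 3.0, 14)               # K, range where C = gamma*T + beta*T^3 holds
C = gamma_in * T + beta_in * T**3           # synthetic normal-state (B = 14 T) data

# C/T = gamma + beta*T^2 is linear in T^2
beta_fit, gamma_fit = np.polyfit(T**2, C / T, 1)
theta_fit = (12 * np.pi**4 * n_atoms * R / (5 * beta_fit)) ** (1.0 / 3.0)
print(f"gamma = {gamma_fit:.1f} mJ/(mol K^2),  Theta_D = {theta_fit:.0f} K")
```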
The blow-up in Fig. 1(b) shows the region close to $`T_c=11.5`$ K. On this scale one can see more clearly the broad anomaly arising from the superconducting transition. In contrast to previous reports we were able to unequivocally resolve this anomaly which contributes about 3% to the total specific heat. The broadened jump at $`T_c`$ is much larger than anticipated from weak-coupling theory. This becomes much clearer when we plot $`\mathrm{\Delta }C`$ vs $`T`$ (Fig. 2), where $`\mathrm{\Delta }C`$ is the specific-heat difference between $`C`$ in the superconducting ($`B=0`$) and in the normal state ($`B=14`$ T). The latter was approximated by a polynomial \[solid line in Fig. 1(b)\]. $`\mathrm{\Delta }C`$ expected from weak-coupling BCS theory is shown as the dashed line in Fig. 2. It is obvious that the jump at $`T_c`$, as well as the whole temperature dependence, does not follow this behavior. Instead, the experimental data can much better be described by strong-coupling behavior (solid line in Fig. 2). To this end, we assumed a BCS-like temperature dependence of the energy gap $`\mathrm{\Delta }(T)`$ scaled by one appropriate parameter, i.e., the gap ratio $`\alpha =\mathrm{\Delta }(0)/k_BT_c`$, which is $`\alpha _{BCS}=1.76`$ in the weak-coupling limit. With this simplistic assumption and $`\alpha =2.7`$ we obtain the reasonable description shown in Fig. 2. The jump height is reproduced quite well, taking into account that we neglected any fluctuations. In the intermediate temperature region the data lie somewhat above the strong-coupling line, whereas at low temperatures, where the data are most precise, perfect agreement is found. We want to note that we did not fit the model to the data but rather compared visually the BCS curves for different $`\alpha `$ with the data. Therefore, as well as due to the error bar in $`\gamma `$, the uncertainty in $`\alpha `$ is about $`\pm 0.2`$.
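A calculation of this type can be sketched in a few lines of Python. The code below evaluates $`C_{es}`$ from the quasiparticle entropy of a BCS superconductor whose gap is scaled by the ratio $`\alpha =\mathrm{\Delta }(0)/k_BT_c`$; the widely used $`\mathrm{tanh}`$ interpolation for the temperature dependence of the gap is our own simplification of the full BCS gap equation, so the curves are only a sketch of the comparison described above.

```python
import numpy as np

def gap(t, alpha):
    """Delta(T)/(k_B T_c): BCS-like gap scaled by alpha (tanh interpolation, an approximation)."""
    return 0.0 if t >= 1.0 else alpha * np.tanh(1.74 * np.sqrt(1.0 / t - 1.0))

def entropy(t, alpha, xi_max=60.0, n=6000):
    """Quasiparticle entropy S/(gamma T_c); reduces to S_n = gamma*T for alpha = 0."""
    xi = np.linspace(1e-6, xi_max, n)             # energies in units of k_B T_c
    E = np.sqrt(xi**2 + gap(t, alpha)**2)
    f = 0.5 * (1.0 - np.tanh(E / (2.0 * t)))      # Fermi function, overflow-safe
    f = np.clip(f, 1e-300, 1.0 - 1e-12)
    integrand = f * np.log(f) + (1.0 - f) * np.log1p(-f)
    dxi = xi[1] - xi[0]
    return -6.0 / np.pi**2 * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dxi

def c_es(t, alpha, dt=2e-3):
    """C_es/(gamma T_c) = T dS/dT by central differences."""
    return t * (entropy(t + dt, alpha) - entropy(t - dt, alpha)) / (2.0 * dt)

for alpha in (1.76, 2.7):                         # weak coupling vs. the value used above
    jump = c_es(0.99, alpha) - 0.99               # normal state: C_n/(gamma T_c) = T/T_c
    print(f"alpha = {alpha}: Delta C/(gamma T_c) just below T_c ~ {jump:.2f}, "
          f"C_es/(gamma T_c) at T_c/3: {c_es(1/3, alpha):.4f}")
```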
For strong-coupling superconductors only phenomenological models exist which connect the different superconducting parameters. By use of a large set of data from conventional superconductors the approximate relation between the specific-heat jump $`\mathrm{\Delta }C/\gamma T_c`$ and $`T_c/\omega _{ln}`$ is known, where $`\omega _{ln}`$ is the average phonon (or, more generally, coupling) energy. Furthermore, the value $`T_c/\omega _{ln}`$ is connected with the coupling strength $`\lambda `$ of the superconducting charge carriers by the modified McMillan equation. However, for strong coupling, i.e., $`\lambda `$ larger than about 1.5, the McMillan equation is not valid any more and it is more appropriate to use an empirical relation between $`T_c/\omega _{ln}`$ and $`\lambda `$ obtained from tunneling data and presented in Ref. . Under the assumption that the organic superconductors can be described by the same strong-coupling theory as conventional superconductors this leads to a very large $`\lambda `$ of about 2.5. This might be in line with a recent theoretical treatment where enhanced strong-coupling features in quasi-two-dimensional correlated electron systems are expected.
The $`\lambda `$ values vs $`T_c`$ for the title material as well as for four other organic superconductors are presented in Fig. 3. Here, $`\lambda `$ was extracted for all materials in the same way, with $`\alpha =1.76`$ for the weak-coupling superconductor $`\alpha `$-(BEDT-TTF)<sub>2</sub>NH<sub>4</sub>Hg(SCN)<sub>4</sub> and a crudely estimated $`\alpha =2.2`$ from the limited set of available literature data for $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu(NCS)<sub>2</sub>. A clear systematic increase of $`\lambda `$, i.e., the relative specific-heat jump $`\mathrm{\Delta }C/\gamma T_c`$, as a function of $`T_c`$ is obvious. According to Fig. 1 of Ref. this indicates that the characteristic average coupling energy $`\omega _{ln}`$ has a similar strength for all shown organic superconductors. Consequently, one can write $`\lambda \propto N(E_F)I^2`$, where $`N(E_F)`$ is the electronic density of states at the Fermi energy and $`I^2`$ is the coupling matrix element averaged over the Fermi surface. Our result indicates that mainly $`I^2`$ controls $`T_c`$, since $`N(E_F)`$ remains more or less constant as shown by the measured $`\gamma \propto N(E_F)`$ which is not correlated with $`T_c`$ for the mentioned organic superconductors. There is, however, a tendency for a slight increase with $`T_c`$ if one considers only the $`\kappa `$-phase materials, from $`\gamma =(18.9\pm 1.5)`$ mJ mol<sup>-1</sup> K<sup>-2</sup> for $`\kappa `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> to $`\gamma =(25\pm 2)`$ mJ mol<sup>-1</sup> K<sup>-2</sup> for the title material. Within a two-dimensional Fermi-liquid picture the $`\gamma `$ values lead to effective masses of about 3.6 $`m_e`$ and 4.6 $`m_e`$, respectively, where $`m_e`$ is the free-electron mass. This increase of $`\gamma `$ and the effective masses is in accordance with results from de Haas–van Alphen or Shubnikov–de Haas experiments which show an increasing effective cyclotron mass from $`m_c=3.9m_e`$ for $`\kappa `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> to $`m_c=6.6m_e`$ for $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br. These enhanced masses point to the importance of many-body effects, i.e., electron-phonon and electron-electron interactions, in the organic superconductors and are at least qualitatively in line with the estimated large coupling constants $`\lambda `$.
The main point of this paper is the proof of an exponentially vanishing electronic specific heat in the superconducting state. It is clear already from Fig. 2 that no electronic contribution to $`C`$ remains at low temperatures since otherwise the data would not follow so perfectly the strong-coupling BCS curve. This fact becomes more evident when we plot the electronic part of the specific heat in the superconducting state, $`C_{es}`$, as a function of $`T_c/T`$ (Fig. 4). For the determination of $`C_{es}`$ we subtracted the phonon part of $`C`$ which corresponds to $`C`$ measured in $`B=14`$ T minus $`\gamma T`$. The normalized plot in Fig. 4 shows unambiguously that $`C_{es}`$ vanishes towards low $`T`$. The solid line is an exponential fit to the data of the form $`C_{es}/\gamma T_c\propto \mathrm{exp}(-2.7T_c/T)`$. At $`T_c/T\approx 3`$, $`C_{es}`$ is so small that we cannot resolve it any longer, leading to the scatter of the data towards lower temperatures. From this result we can conclude that a possible remnant of $`C_{es}/T`$ is less than about 1 mJ mol<sup>-1</sup> K<sup>-2</sup>. Consequently, our data prove the absence of gap nodes but, instead, point strongly to the existence of a complete energy gap in the superconducting state. We note that our data do not allow us to make any statements on possible gap anisotropies. These may well be the reason for the observed slight discrepancy between the $`\mathrm{\Delta }C`$ data and the BCS fit in the intermediate temperature region shown in Fig. 2.
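The quoted exponent corresponds to a straight-line fit of $`\mathrm{ln}(C_{es}/\gamma T_c)`$ versus $`T_c/T`$. A minimal sketch of such a fit (on synthetic points with the quoted slope, since the measured values are not reproduced here) is:

```python
import numpy as np

# hypothetical (T_c/T, C_es/(gamma*T_c)) pairs in the linear regime of Fig. 4
tc_over_t = np.linspace(2.5, 6.0, 8)
c_norm = 0.8 * np.exp(-2.7 * tc_over_t)        # synthetic data with the quoted slope

slope, intercept = np.polyfit(tc_over_t, np.log(c_norm), 1)
print("a_Delta =", -slope)                     # recovers ~2.7
```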
Within BCS theory one can approximate $`C_{es}/\gamma T_c\propto \mathrm{exp}(-a_\mathrm{\Delta }T_c/T)`$ for $`2.5<T_c/T<6`$, where the coefficient $`a_\mathrm{\Delta }`$ (= 1.44 in the weak-coupling limit) is proportional to the energy gap $`\mathrm{\Delta }`$ at $`T=0`$. The much larger value $`a_\mathrm{\Delta }\approx 2.7`$ we extracted from our data is the behavior expected for strong coupling and is consistent with the large $`\lambda `$. The exponential vanishing of $`C_{es}`$ can equally well be proven for the organic superconductors $`\kappa `$-(ET)<sub>2</sub>I<sub>3</sub> and $`\beta `$”-(ET)<sub>2</sub>SF<sub>5</sub>CH<sub>2</sub>CF<sub>2</sub>SO<sub>3</sub>.
In Fig. 4 we included the approximate average of the estimated result for $`C_{es}`$ from Fig. 3 of Ref. (dashed line). It is evident from our result that one can definitely exclude any remnant contribution as high as proposed in that work (at $`T_c/T=3`$ our data are more than a factor of 10 smaller). Indeed, the $`C_{es}`$ estimated there at 4 K coincides approximately with the normal-state electronic $`C`$, which would mean a crossing of the $`C`$ data in the normal and superconducting state at around this temperature. Figures 1(a) and 2 show that this result must be wrong. It is therefore proven that the phonon specific heat cannot be estimated from a quench-cooled non-superconducting deuterated sample.
For superconductors with line nodes a field dependence of $`\gamma `$ proportional to $`\sqrt{B}`$ is predicted. Recently, however, a $`\sqrt{B}`$ dependence was also observed at low fields in an s-wave superconductor, showing that the mere observation of this behavior does not prove an unconventional pairing state. For the title material a $`\sqrt{B}`$ dependence of $`\gamma `$ at low fields was reported. Since our measurements were made at higher temperatures we cannot make a definitive statement. However, the field dependence of $`C`$ at fixed temperature can be described reasonably well by a linear law.
In conclusion, the results of our specific-heat measurements of the organic superconductor $`\kappa `$-(BEDT-TTF)<sub>2</sub>Cu\[N(CN)<sub>2</sub>\]Br in the superconducting and normal state can be well described by strong-coupling BCS theory. We extract a large coupling parameter $`\lambda \approx 2.5`$ which scales well with $`\lambda `$ values found for organic superconductors with lower $`T_c`$. The electronic specific heat in the superconducting state vanishes exponentially with $`T_c/T`$ which disproves the $`T^2`$ behavior claimed earlier. Our data are fully consistent with a completely gapped order parameter.
We thank H. v. Löhneysen for continuous support and fruitful discussions. This work was partially supported by the Deutsche Forschungsgemeinschaft.
# Auxiliary particle theory of threshold singularities in photoemission and X-ray absorption spectra: test of a conserving 𝑇-matrix approximation
## I Introduction
The core level spectral function $`A_d(ϵ)`$ of a localized core orbital immersed in a conduction electron sea, as observed in the photoemission of electrons after X-ray absorption, has long been known to show nonanalytic threshold behavior characterized by fractional power laws $`A_d(ϵ)\propto ϵ^{-\alpha _d}`$ in the frequency distance to the threshold $`ϵ=\omega -E_0`$. As shown by Anderson, this can be understood by considering that the sudden creation of a deep hole in the electronic core of an ion in a metal (or the filling of an empty core state) disturbs the Fermi sea of the conduction electrons so strongly that the subsequent relaxation into the new ground state follows a fractional power law in time rather than the usual exponential dependence. This is due to the fact that the ground states of the initial state and the final state are orthogonal in the limit of an infinite system (“orthogonality catastrophe”). At finite, but small $`ϵ`$ the relaxation process involves excitation of a large number of particle-hole pairs out of the Fermi sea of conduction electrons. A similar situation arises at the X-ray absorption threshold. There it has been argued that in addition to the above an excitonic effect appears, as first discussed by Mahan. A theoretical description requires the use of infinite order perturbation theory.
The problem is in some sense the simplest situation in which strong electron correlations are generated by a sudden change of electron occupations of a level coupled to a Fermi sea. The same generic problem is at the heart of the Kondo problem, or generally speaking, of quantum impurity problems, which can be understood as a succession of X-ray edge problems generated by successive flips of the impurity spin or pseudospin. In an even more general context, such problems arise in lattice models of correlated electrons, when the hopping of an electron from one site to the next changes the occupation of these sites, causing a corresponding rearrangement of the whole Fermi system. Given the existing evidence that high temperature superconductors, heavy fermion compounds and other metallic systems are governed by strong electron correlation effects, which are at present only poorly understood, there is an urgent need for generally applicable theoretical methods capable of dealing with these complex situations.
A powerful method of many-body physics, which directly addresses the consequences of a change in occupation number of a local level is the pseudoparticle representation . Within this framework one introduces pseudoparticles for each of the states of occupation of a given energy level, i.e. fermions for the singly occupied level and bosons for the empty level. It is well known that a representation of this type for the infinite $`U`$ Anderson model of a magnetic impurity in a metal can give surprisingly good results already in second-order self-consistent perturbation theory \[“non-crossing-approximation” (NCA)\] in the hybridization of local level and conduction band . However, at low temperatures and low energies the NCA fails to control the infrared singular behavior of the pseudoparticle spectral functions at threshold. Application of the NCA to the problem of the core hole spectral function gives a threshold exponent $`\alpha _d`$ independent of the occupation of the core state, in contradiction with the exactly known result.
We have recently developed an approximation scheme, which appears to overcome the difficulties of NCA . It is based on the idea of including singular behavior emerging in any of the two-particle channels. There are two relevant channels, the pseudofermion-conduction electron and the slave boson-conduction electron channel. In both channels the ladder diagrams are summed, the resulting $`T`$-matrices are self-consistently included in the self-energies, as is required within a conserving approximation scheme. The main results of this conserving $`T`$-matrix approximation (CTMA) are: ($`i`$) the (exactly known) infrared threshold exponents of the pseudoparticle spectral functions are recovered , ($`ii`$) the thermodynamic quantities spin susceptibility and specific heat show local Fermi liquid behavior in the single channel case and ($`iii`$) in the multi channel case, non-Fermi liquid behavior is found , in quantitative agreement with exact results available in certain limiting cases.
One of the most stringent tests of a many-body method is the calculation of the core hole spectral function. In this paper we report the results of an application of the CTMA to this problem.
The organization of the paper is as follows. In section II, we summarize the most important results of the exact solution of the X-ray model , notably those for the threshold exponents for the photoemission and the X-ray absorption. Then, in section III, we recall the pseudoparticle representation of a spinless Anderson impurity Hamiltonian and point out its equivalence to the X-ray model in the infrared limit. The conserving pseudoparticle approximation up to infinite order in the hybridization $`V`$ is discussed in section IV and compared with the parquet equation approach of Nozières et al. in section V. The numerical results are discussed in section VI. In appendix A we give explicitly the self-consistent equations which determine the auxiliary particle self-energies within the CTMA.
## II Physical model
The absorption of an X-ray photon by a deep level core electron and the subsequent emission of the electron leaves a core hole, which is seen by the conduction electrons as a suddenly created screened Coulomb potential. The simplest model Hamiltonian describing this situation is given by
$$H=\underset{𝐤\sigma }{\sum }\left(ϵ_𝐤-\mu \right)c_{𝐤\sigma }^{\dagger }c_{𝐤\sigma }+E_dd^{\dagger }d+V_d\underset{\sigma }{\sum }c_{0\sigma }^{\dagger }c_{0\sigma }dd^{\dagger },$$
(1)
where $`c_{𝐤\sigma }^{\dagger }`$ ($`c_{𝐤\sigma }`$) are the conduction electron creation (annihilation) operators for momentum and spin eigenstates $`|𝐤\sigma \rangle `$, with energy $`ϵ_𝐤`$ and chemical potential $`\mu `$. The energy of the deep level is $`E_d`$, and $`V_d`$ is the screened Coulomb interaction between the conduction electrons at the site of the hole ($`c_{0\sigma }^{\dagger }`$, $`c_{0\sigma }`$) and the hole (with operators $`d^{\dagger }`$, $`d`$; the spin state of the hole is irrelevant here). We assume that the hole is localized and does not have internal structure, i.e. we neglect the finite lifetime of the hole due to the Auger effect as well as a possible recoil of the hole. The Coulomb interaction between the conduction electrons is absorbed into a quasiparticle renormalization.
### Photoemission.—
The spectral function of the hole, $`A_d(ϵ)`$, which can be measured in photoemission experiments, is obtained from the one-particle core hole Green’s function $`G_d(t)=-i\langle T[d(t)d^{\dagger }(0)]\rangle `$, subjected to the initial condition that the core hole occupation number $`\langle d^{\dagger }d\rangle =0`$ for times $`t<0`$ (before the photoemission process), by taking the imaginary part of its Fourier transform, $`A_d(\omega )=(1/\pi )\text{Im}G_d(\omega -i0)`$. The initial condition is equivalent to the trace in the definition of $`G_d(t)`$ being taken only over states with hole occupation equal to zero. It is this restriction which implies the non-trivial dynamics of the X-ray problem. $`A_d(\omega )`$ is proportional to the spectral weight of processes where a photon is absorbed by the metal, subsequently emitting the deep level core electron. The energy $`\omega `$ required for this process is bounded from below by the threshold energy $`E_0=E_F-E_{\text{core}}-\mathrm{\Delta }E`$, where $`E_{\text{core}}`$ and $`E_F`$ are the core level energy and the Fermi energy, respectively, and $`\mathrm{\Delta }E`$ is a renormalization due to core hole-conduction electron interactions. In the following we will choose the zero of energy such that $`E_0=0`$ (i.e. $`ϵ=\omega -E_0`$). The spectral function $`A_d(ϵ)`$ then shows singular threshold behavior
$$A_d(ϵ)=\frac{C_d}{ϵ^{\alpha _d}}(ϵ\rightarrow 0^+).$$
(2)
In a landmark paper Nozières and De Dominicis showed that the exponent $`\alpha _d`$ depends only on the scattering phase shift $`\eta `$ of the conduction electrons off the core hole and calculated it as ($`s`$-wave-scattering)
$`\alpha _d=1-\left({\displaystyle \frac{\eta }{\pi }}\right)^2=1-n_d^2,`$ (3)
where Friedel’s sum rule $`\eta =\pi n_d`$ has been used to express $`\eta `$ in terms of the occupation number of the core level, $`n_d`$.
### X–ray absorption.—
The X-ray absorption cross section is given by the two particle Green’s function $`G_2(t)=-i\mathrm{\Theta }(t)\langle [d^{\dagger }(t)c_{0\sigma }(t),c_{0\sigma }^{\dagger }(0)d(0)]\rangle `$ as $`d\sigma /dϵ\propto \text{Im}G_2(ϵ-i0)`$. The absorption cross section is finite for $`ϵ>0`$ and again shows singular threshold behavior
$$\frac{d\sigma }{dϵ}=\frac{C_a}{ϵ^{\alpha _a}}(ϵ\rightarrow 0^+).$$
(4)
The exponent $`\alpha _a`$ has been calculated by Nozières and De Dominicis with the result
$`\alpha _a={\displaystyle \frac{2\eta }{\pi }}-\left({\displaystyle \frac{\eta }{\pi }}\right)^2=2n_d-n_d^2.`$ (5)
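For orientation, the two threshold exponents of Eqs. (3) and (5) can be tabulated directly from the core-level occupation via the Friedel sum rule. The short script below is only an illustration; the chosen values of $`n_d`$ are arbitrary.

```python
# Illustrative evaluation of Eqs. (3) and (5):
#   alpha_d = 1 - n_d**2   (photoemission),  alpha_a = 2*n_d - n_d**2   (absorption),
# using the Friedel sum rule eta = pi * n_d for the s-wave phase shift.
import numpy as np

def xray_exponents(n_d):
    """Return (alpha_d, alpha_a) for a given core-level occupation n_d."""
    eta = np.pi * n_d                      # Friedel sum rule
    alpha_d = 1.0 - (eta / np.pi) ** 2
    alpha_a = 2.0 * eta / np.pi - (eta / np.pi) ** 2
    return alpha_d, alpha_a

for n_d in (0.1, 0.5, 0.9):
    a_d, a_a = xray_exponents(n_d)
    print(f"n_d = {n_d:.1f}:  alpha_d = {a_d:.2f},  alpha_a = {a_a:.2f}")
```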
## III Pseudoparticle representation of the X-ray model
As will be seen below, it is useful to formulate the core hole problem in terms of pseudoparticles in order to impose the initial condition. We define fermion operators $`f^+`$ ($`f`$) and boson operators $`b^+`$ ($`b`$) creating (annihilating) the occupied or empty core level. The transition amplitude $`V`$ of an electron from the core level into the conduction band describes the hybridization of these two systems. The Hamiltonian of this system takes the form of an Anderson impurity Hamiltonian for spinless particles (spin degeneracy $`N=1`$):
$`H`$ $`=`$ $`{\displaystyle \sum _𝐤}\left(ϵ_𝐤-\mu \right)c_𝐤^{\dagger }c_𝐤`$ (6)
$`+`$ $`E_d\,f^{\dagger }f+V\left(f^{\dagger }bc_0+\text{h. c.}\right)+\lambda Q,`$ (7)
where $`c_0=\sum _𝐤c_𝐤`$ annihilates a conduction electron at the impurity site. The constraint $`Q=f^{\dagger }f+b^{\dagger }b=1`$ ensuring that the core level is either empty or occupied is implemented by adding the last term in (6), where $`\lambda `$ is associated with the operator constraint $`Q=1`$ and may be interpreted as the negative of a chemical potential for the pseudoparticles. As has been shown previously, the limit $`\lambda \to \mathrm{\infty }`$ imposes the constraint exactly and is equivalent to taking all expectation values of pseudoparticle operators in the Hilbert subspace with $`Q=0`$ (no core hole present). Thus, in the present context, it implements exactly the X-ray initial condition of sudden creation of the core hole. The auxiliary particle Green’s functions are expressed in terms of their self-energies as $`G_f^{-1}(i\omega _n)=\left[G_f^0(i\omega _n)\right]^{-1}-\mathrm{\Sigma }_f(i\omega _n)`$, $`G_b^{-1}(i\nu _m)=\left[G_b^0(i\nu _m)\right]^{-1}-\mathrm{\Sigma }_b(i\nu _m)`$, where $`G_f^0(i\omega _n)=1/(i\omega _n-E_d)`$ and $`G_b^0(i\nu _m)=1/i\nu _m`$ are the respective non-interacting Green’s functions and $`i\omega _n=(2n+1)\pi /\beta `$, $`i\nu _m=2m\pi /\beta `$ denote the fermionic and bosonic Matsubara frequencies.
In the model (6) one may distinguish two distinct regimes, where the impurity occupation number $`n_d`$ at infinitely long time after suddenly switching on the interaction is large ($`n_d\to 1`$, $`E_d<0`$) or small ($`n_d\to 0`$, $`E_d>0`$). Since, due to the hybridization, $`n_d`$ is equal and opposite in sign to the change of the conduction electron number (i.e. screening charge) induced by the presence of the impurity, $`n_d=-\mathrm{\Delta }n_c`$, these regimes correspond via the Friedel sum rule to large ($`\eta \to \pi `$) and small ($`\eta \to 0`$) scattering phase shifts, respectively (see detailed discussion below), and may, therefore, be termed the strong and the weak coupling regions. We now show the formal equivalence between the X-ray model Eq. (1) and the slave particle Hamiltonian Eq. (6) at low energies both in the weak and in the strong coupling regions.
In the strong coupling region, an effective low-energy model is derived from the Anderson Hamiltonian (6) by integrating out the slave boson degree of freedom (or, equivalently, by means of a Schrieffer-Wolff transformation onto the part of the Hilbert space involving only states with the core level occupied). The interaction term in the resulting effective action reads
$`S_{\text{int}}`$ $`=`$ $`V^2{\displaystyle \frac{1}{\beta ^3}}{\displaystyle \underset{i\omega _n,i\omega _n^{},i\nu _m}{}}G_b^0(i\nu _m)`$ (8)
$`\times `$ $`c_0^{\dagger }(i\omega _n^{}-i\nu _m)c_0(i\omega _n)f(i\omega _n^{})f^{\dagger }(i\omega _n+i\nu _m),`$ (9)
where, in addition, the projection onto the physical Hilbert space is imposed by taking $`\lambda \mathrm{}`$. At low
excitation energy relative to the core level, i.e. when the conduction electron energies after analytical continuation are $`|\omega |`$, $`|\omega ^{}-\nu |\ll |E_d|`$ and the pseudofermions have energies $`\omega ^{}`$, $`\omega +\nu \approx E_d`$ (see Fig. 1), the non-interacting slave boson Green’s function in Eq. (8) is taken at $`\nu \approx E_d`$ and thus reduces to $`1/E_d`$. The resulting effective Hamiltonian is thus given by Eq. (1), with electron operators $`d^{\dagger }`$, $`d`$ replaced by pseudofermions $`f^{\dagger }`$, $`f`$, interacting with the conduction electrons via the repulsive, instantaneous potential $`V_d=-V^2/E_d>0`$.
In order to derive the effective low-energy Hamiltonian in the weak coupling domain ($`n_d\to 0`$, $`E_d>0`$), it is useful to observe that the model Eq. (6) is in the physical Hilbert space invariant under the special particle-hole transformation $`f\leftrightarrow b`$, $`c\leftrightarrow c^{\dagger }`$ and $`E_d\to -E_d`$. Integrating out the high energy states, i.e. the fermionic degrees of freedom in this case, and then performing this particle-hole transformation, the resulting low-energy Hamiltonian is again given by Eq. (1), with the replacement $`d^{\dagger }`$, $`d`$ $`\to `$ $`f^{\dagger }`$, $`f`$, and the attractive interaction potential $`V_d=-V^2/E_d<0`$ between conduction electrons and local pseudofermions.
Having, thus, established the formal connection between the original X-ray model Eq. (1) and the auxiliary particle Hamiltonian (6) in the weak and in the strong coupling regions, we now turn to showing that the photoemission and X-ray absorption spectra are given by the slave boson and the pseudofermion spectral functions, respectively.
### Photoemission.—
The retarded Green’s function $`G_b^R(t)=-i\mathrm{\Theta }(t)\langle [b(t),b^{\dagger }(0)]_{-}\rangle `$ describes the propagation of the empty $`d`$-level in time. The corresponding spectral function after projection onto the physical sector $`Q=1`$, $`A_b^+(\omega )=-lim_{\lambda \to \mathrm{\infty }}\text{Im}G_b^R(\omega )/\pi `$ can be represented in terms of the exact eigenstates of the system without the $`d`$-level, $`|0,n\rangle `$, and with the $`d`$-level, $`|1,n\rangle `$, as
$`A_b(\omega )`$ $`=`$ (10)
$`{\displaystyle \frac{1}{Z_{Q=0}}}`$ $`{\displaystyle \sum _{m,n}}`$ $`|\langle 1,m|b^+|0,n\rangle |^2e^{-\beta ϵ_{0,n}}\delta (ϵ+ϵ_{0,n}-ϵ_{1,m}).`$ (11)
At zero temperature ($`\beta =1/T=\mathrm{\infty }`$), $`A_b(ϵ)`$ is zero for $`ϵ=\omega -E_0<0`$, where $`E_0=ϵ_{1,0}-ϵ_{0,0}`$ is the difference of the ground state energies for the $`Q=1`$ and $`Q=0`$ systems. Near the threshold, $`ϵ\to 0`$, $`A_b(ϵ)`$ has a power law singularity (infrared divergence), $`A_b(ϵ)\propto ϵ^{-\alpha _b}`$, for exactly the same reason as the hole spectral function $`A_d(ϵ)`$ considered above: the states $`|0,n\rangle `$ (free Fermi sea) and $`|1,n\rangle `$ (Fermi sea in presence of a potential scattering center) are orthogonal, giving rise to the orthogonality catastrophe. The exponent $`\alpha _b`$ is therefore given in terms of the phase shift $`\eta _b`$ (for $`s`$-wave scattering) as $`\alpha _b=1-\left(\eta _b/\pi \right)^2`$. Using the Friedel sum rule and the fact that in the photoemission process (boson propagator) the impurity occupation number changes from initially $`0`$ to $`n_d>0`$ in the final state, we obtain the characteristic dependence on $`n_d`$,
$$\alpha _b=1-n_d^2.$$
(12)
We may conclude that the threshold behavior of the physical hole spectral function $`A_d(ϵ)`$ and the slave boson spectral function $`A_b(ϵ)`$ is governed by the same exponent, $`\alpha _d=\alpha _b`$, provided the scattering phase shift is the same.
### X-ray absorption.—
In a similar way, the threshold behavior of the X-ray absorption cross section $`d\sigma /dϵ`$ may be obtained from the pseudofermion Green’s function. As shown in section II, $`d\sigma /dϵ`$ is proportional to the imaginary part of the two-particle Green’s function $`G_2(t)=-i\mathrm{\Theta }(t)\langle [d^{\dagger }(t)c_{0\sigma }(t),c_{0\sigma }^{\dagger }(0)d(0)]\rangle `$. The corresponding quantity here is the slave boson-conduction electron correlation function
$$G_{bc}(t)=-i\mathrm{\Theta }(t)\langle [b(t)c_0^{\dagger }(t),c_{0\sigma }(0)b^{\dagger }(0)]\rangle ,$$
(13)
which is given in terms of the pseudofermion Green’s function $`G_f(ϵ)`$ (after Fourier transformation) as
$$G_{bc}(ϵ)=\frac{1}{V^2}\left[\left(G_f^0(ϵ)\right)^{-1}G_f(ϵ)-1\right]\left(G_f^0(ϵ)\right)^{-1}.$$
(14)
It follows that the spectral functions are related by $`A_{bc}(ϵ)\propto A_f(ϵ)\propto ϵ^{-\alpha _f}`$, i.e. the X-ray absorption exponent is identical to the pseudofermion threshold exponent $`\alpha _f`$. The latter is again determined by the orthogonality catastrophe argument, considering that the initial state of the system is now the conduction electron Fermi sea plus the filled $`d`$-level. The phase shift $`\eta _f`$, again given via the Friedel sum rule as the change of the occupation number from the initial to the final state, is now different, $`\eta _f=(n_d-1)\pi `$, leading to the expression
$$\alpha _f=2n_d-n_d^2.$$
(15)
Comparison with (5) again shows that the infrared behavior of the pseudofermion spectral function is indeed identical to that of the two particle Green’s function $`G_2`$, as expected.
It should be mentioned that in the intermediate coupling or “mixed valence” domain, $`\pi N(0)V^2\approx |E_d|`$ ($`n_d\approx 1/2`$), a Schrieffer-Wolff type projection is no longer valid because of large level occupancy fluctuations. The formal derivation of the X-ray model (1) from the pseudoparticle model (6) in the “mixed valence” regime involves a retarded effective interaction, in contrast to Eq. (1). However, since the Hamiltonian Eq. (6) is a faithful representation of a non-interacting system (via the identification $`d^{\dagger }=f^{\dagger }b`$), where the constraint $`Q=f^{\dagger }f+b^{\dagger }b=1`$ merely serves to implement the X-ray initial condition of suddenly switching on the interaction between localized states and the conduction electrons (see above), the system is described by single-particle wave functions even in the valence fluctuation regime of this spinless model. The analysis of the pseudoparticle threshold exponents $`\alpha _b`$, $`\alpha _f`$ in terms of the corresponding scattering phase shifts $`\eta _b`$, $`\eta _f`$ and the Friedel sum rule, as given above, then also applies in the valence fluctuation regime. It has been verified explicitly by a numerical renormalization group calculation of the pseudoparticle threshold exponents that their $`n_d`$ dependence, given in Eqs. (12), (15), is valid over the complete range of the core level occupation number $`n_d`$.
The preceding analysis shows explicitly that in the auxiliary particle representation the threshold exponents of both the X-ray photoemission and absorption are determined by the infrared behavior of single-particle propagators, involving the physics of the orthogonality catastrophe for auxiliary bosons or pseudofermions only . There is no separation into single particle effects and excitonic effects.
## IV Conserving theory
In the previous section we reformulated the core hole problem by introducing auxiliary particles and showed on general grounds that the threshold exponents of X-ray absorption and photoemission spectra can be extracted from one particle properties, namely the auxiliary fermion and slave boson Green’s functions respectively. In this section a systematic self-consistent approximation is formulated to calculate these functions.
As a minimal requirement the constraint $`Q=1`$ has to be fulfilled in any approximate theory. The constraint is closely related to the invariance of the system under a simultaneous local (in time) gauge transformation $`f(\tau )\to e^{\mathrm{\Theta }(\tau )}f(\tau )`$, $`b(\tau )\to e^{\mathrm{\Theta }(\tau )}b(\tau )`$. The Lagrange multiplier $`\lambda `$ assumes the role of a local gauge field and transforms as $`\lambda \to \lambda +i\partial \mathrm{\Theta }/\partial \tau `$. Any approximate scheme respecting the gauge symmetry will preserve the charge $`Q`$ in time. The simultaneous transformations $`f(\tau )\to e^{\mathrm{\Theta }(\tau )}f(\tau )`$, $`c_𝐤(\tau )\to e^{\mathrm{\Theta }(\tau )}c_𝐤(\tau )`$, $`\mu (\tau )\to \mu (\tau )+i\partial \mathrm{\Theta }/\partial \tau `$ lead to the conservation of the total fermion number $`n_f+\sum _𝐤c_𝐤^{\dagger }c_𝐤=\text{const.}`$ where $`\mu `$ is the chemical potential of the conduction electrons (we choose $`\mu =0`$). Any theory which preserves these symmetries is called conserving and may be generated by functional derivation from a generating functional $`\mathrm{\Phi }`$ of closed skeleton diagrams.
### NCA. —
We are interested in the limit of weak hybridization $`V`$. So let us first consider the lowest order approximation. The conserving approximation scheme requires the self-energies to be determined self-consistently,
which amounts to an infinite resummation of perturbation theory even if only the lowest order skeleton diagram is kept (which is known as the “non-crossing-approximation” (NCA) , see Fig. 2). The NCA is known to yield good results in the absence of or sufficiently far away from a Fermi liquid fixed point . Hence the NCA is not appropriate in the X-ray problem. The reason is that no parquet diagrams (see Fig. 5) are included in the lowest order approximation. By functional derivation of $`\mathrm{\Phi }`$ one obtains for the slave particle self-energies $`\mathrm{\Sigma }_f=\delta \mathrm{\Phi }/\delta G_f`$, $`\mathrm{\Sigma }_b=\delta \mathrm{\Phi }/\delta G_b`$ which are diagrammatically given in Fig. 2 and yield the set of coupled integral equations
$`\mathrm{\Sigma }_f(ϵ)`$ $`=`$ $`V^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}G_b(ϵ+u)A_c(u)f(u)`$ (16)
$`\mathrm{\Sigma }_b(ϵ)`$ $`=`$ $`V^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}G_f(u+ϵ)A_c(u)f(u)`$ (17)
where $`A_c(ϵ)`$ is the non-interacting local conduction electron spectral density. At zero temperature $`T=0`$ the integral equations can be rewritten as ordinary differential equations (with a constant density of states for the conduction electrons and for $`ϵ0`$)
$`{\displaystyle \frac{\partial }{\partial ϵ}}{\displaystyle \frac{1}{A_f(ϵ)}}`$ $`\propto `$ $`N(0)V^2A_b(ϵ)`$ (18)
$`{\displaystyle \frac{\partial }{\partial ϵ}}{\displaystyle \frac{1}{A_b(ϵ)}}`$ $`\propto `$ $`N(0)V^2A_f(ϵ).`$ (19)
The solution displays the well-known infrared singularities $`A_{f,b}(ϵ)\propto ϵ^{-\alpha _{f,b}}`$ ($`ϵ\to 0`$) where $`\alpha _{f,b}=1/2`$. These exponents obviously differ from the exact results discussed before \[Eqs. (12) and (15)\].
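The $`1/2`$ exponents are easy to exhibit numerically. The sketch below integrates the two differential equations above with an illustrative constant $`c`$ standing for $`N(0)V^2`$ and an arbitrary starting condition near threshold, then extracts the infrared exponent of $`A_f`$ from a log-log fit; all numerical parameters here are assumptions made only for the illustration.

```python
# Schematic check of the T=0 NCA differential equations:
#   d/de [1/A_f(e)] = c * A_b(e),   d/de [1/A_b(e)] = c * A_f(e),   c ~ N(0) V^2
# whose solution behaves as A_{f,b}(e) ~ e^{-1/2} near the threshold.
import numpy as np
from scipy.integrate import solve_ivp

c = 1.0  # stands for N(0) V^2, arbitrary units

def rhs(e, y):
    inv_Af, inv_Ab = y
    return [c / inv_Ab, c / inv_Af]      # since A = 1/(1/A)

# start very close to threshold with small, asymmetric values of 1/A
sol = solve_ivp(rhs, (1e-8, 1.0), [1e-4, 2e-4],
                dense_output=True, rtol=1e-8, atol=1e-12)

e = np.logspace(-6, -1, 20)
A_f = 1.0 / sol.sol(e)[0]
slope = np.polyfit(np.log(e), np.log(A_f), 1)[0]
print(f"fitted infrared exponent of A_f: {slope:.3f}  (expected -1/2)")
```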
Hence the NCA is not even in qualitative agreement with the exact Fermi liquid properties of the model; it shows no dependence of the exponents on the filling factor $`n_d`$ of the deep level. This is due to the lack of vertex corrections which have to be included in infinite orders of perturbation theory, because it can be shown by power-counting arguments that there are no corrections to the NCA exponents in any finite order.
### CTMA. —
We have to include the major singularities in each order of self-consistent perturbation theory. These singularities emerge in the conduction electron and pseudofermion $`T`$-matrix ($`T_{fc}`$) as well as in the conduction electron and slave boson $`T`$-matrix ($`T_{bc}`$). In order to preserve gauge invariance, self-consistency has to be imposed: the self-energies are functionals of the Green’s functions which in turn are expressed in terms of self-energies, closing the set of self-consistent equations. The summation of the corresponding ladder diagrams can be performed by solving the integral equations for the $`T`$-matrices for the pseudofermions (see Fig. 3)
$`T_{fc}(i\omega _n,i\omega _n^{};i\mathrm{\Omega }_m)`$ $`=`$ $`V^2G_b(i\omega _n+i\omega _n^{}-i\mathrm{\Omega }_m)`$ (20)
$`-`$ $`{\displaystyle \frac{V^2}{\beta }}{\displaystyle \sum _{i\omega _n^{\prime \prime }}}G_b(i\omega _n+i\omega _n^{\prime \prime }-i\mathrm{\Omega }_m)G_f(i\omega _n^{\prime \prime })G_c(i\mathrm{\Omega }_m-i\omega _n^{\prime \prime })T_{fc}(i\omega _n^{\prime \prime },i\omega _n^{};i\mathrm{\Omega }_m),`$ (21)
and the slave-bosons
$`T_{bc}(i\nu _m,i\nu _m^{};i\mathrm{\Omega }_n)`$ $`=`$ $`V^2G_f(i\nu _m+i\nu _m^{}-i\mathrm{\Omega }_n)`$ (22)
$`-`$ $`{\displaystyle \frac{V^2}{\beta }}{\displaystyle \sum _{i\nu _m^{\prime \prime }}}G_f(i\nu _m+i\nu _m^{\prime \prime }-i\mathrm{\Omega }_n)G_b(i\nu _m^{\prime \prime })G_c(i\mathrm{\Omega }_n-i\nu _m^{\prime \prime })T_{bc}(i\nu _m^{\prime \prime },i\nu _m^{};i\mathrm{\Omega }_n).`$ (23)
Here $`\omega _n,\omega _n^{},\omega _n^{\prime \prime }`$ are fermionic frequencies ($`\omega _n=(2n+1)\pi /\beta `$), $`\nu _m,\nu _m^{},\nu _m^{\prime \prime }`$ are bosonic frequencies ($`\nu _m=2m\pi /\beta `$), and the center of mass frequency $`\mathrm{\Omega }_{m,n}`$ is bosonic in the case of $`T_{fc}`$ and fermionic for $`T_{bc}`$. The self-energies $`\mathrm{\Sigma }_f`$ and $`\mathrm{\Sigma }_b`$
$`\mathrm{\Sigma }_f(i\omega _n)`$ $`=`$ $`\mathrm{\Sigma }_f^{\text{NCA}}(i\omega _n)+\mathrm{\Sigma }_f^{fc}(i\omega _n)+\mathrm{\Sigma }_f^{bc}(i\omega _n)`$ (24)
$`\mathrm{\Sigma }_b(i\nu _m)`$ $`=`$ $`\mathrm{\Sigma }_b^{\text{NCA}}(i\nu _m)+\mathrm{\Sigma }_b^{fc}(i\nu _m)+\mathrm{\Sigma }_b^{bc}(i\nu _m)`$ (25)
calculated from $`T_{fc}`$ and $`T_{bc}`$, then follow from a generating functional $`\mathrm{\Phi }`$ (see Fig. 4) by functional derivation. The explicit expressions are given in appendix A.
## V Comparison with renormalized parquet equations
The CTMA is closely related to the parquet equation approach by Nozières et al. In Ref. \[\] these authors investigate the X-ray model (1) by the methods of perturbation theory. Even to the lowest order one must sum the so-called parquet diagrams, in close analogy with the Abrikosov theory of the Kondo effect. In this approximation Mahan’s prediction of the singularity in the X-ray absorption spectrum was first confirmed. In a succeeding paper the many-body approach was generalized to include self-energy and vertex renormalization in a self-consistent fashion. This self-consistent formalism describes the reaction of divergent fluctuations on themselves, and should, therefore, be useful in other more complicated problems, such as the Kondo effect.
In Ref. \[\] it is shown that the significant contributions in logarithmic accuracy to the renormalized interaction and the deep level self-energy are given by the diagrams reproduced in Fig. 5 (a). Both graphs are included in the CTMA (see Fig. 5 (b)): By collapsing the boson lines into points, i.e. by integrating out the high energy bosonic degree of freedom in the strong coupling region ($`n_d1`$) as done in section III, it is seen that the X-ray interaction kernel (Fig. 5 (a), left) can be extracted from the $`T_{bc}`$-matrix, and the deep level self-energy (Fig. 5 (a), right) is already included in the NCA. For weak coupling ($`n_d0`$) analogous results are obtained by integrating out the pseudofermionic degree of freedom and then interchanging bosons and fermions, compare section III. The self-consistent evaluation of these diagrams represents the renormalized parquet analysis for the pseudoparticles. The advantage of our formulation is that it is valid both in the weak coupling and in the strong coupling regime, with symmetrical expressions in these two regions. The symmetry between weak and strong coupling is also visible in the results for the threshold exponents (Fig. 7). Since the CTMA is not restricted to parquet diagrams (which give the right asymptotic behaviour only for $`V0`$), but goes beyond the parquet approximation, one may expect that its validity extends beyond the weak and the strong coupling limits and interpolates correctly between these regimes. This will be seen the following section.
## VI Numerical results
The self-consistent solutions are obtained by first solving the linear Bethe-Salpeter equations (13) and (15) for the $`T`$-matrices by matrix inversion on a grid of 200 frequency points. First we insert NCA Green’s functions into the $`T`$-matrix equations. From the $`T`$-matrices the auxiliary particle self-energies $`\mathrm{\Sigma }_f`$ and $`\mathrm{\Sigma }_b`$ are calculated corresponding to Eqs. (9) and (12), which give the respective Green’s functions. This process is iterated until
convergence is reached. The $`T`$-matrices show nonanalytic behavior in the infrared limit.
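The structure of this iteration can be summarized in the following schematic sketch; the function names, grid object, tolerance and iteration count are placeholders, and the actual kernels are the CTMA equations listed in appendix A.

```python
# Schematic outline of the self-consistency loop described above (not the full
# CTMA kernels): solve_t_matrices, self_energies, nca_green_functions and grid
# are assumed callables/objects standing in for the equations of appendix A.
import numpy as np

def self_consistent_ctma(solve_t_matrices, self_energies, nca_green_functions,
                         grid, tol=1e-6, max_iter=200):
    """Iterate T-matrices -> self-energies -> Green's functions to convergence."""
    G_f, G_b = nca_green_functions(grid)              # start from the NCA solution
    for it in range(max_iter):
        T_fc, T_bc = solve_t_matrices(G_f, G_b, grid)       # Bethe-Salpeter eqs. by matrix inversion
        Sigma_f, Sigma_b = self_energies(T_fc, T_bc, G_f, G_b, grid)
        G_f_new = 1.0 / (grid.omega - grid.E_d - Sigma_f)   # Dyson equations
        G_b_new = 1.0 / (grid.omega - Sigma_b)
        change = np.max(np.abs(G_f_new - G_f)) + np.max(np.abs(G_b_new - G_b))
        G_f, G_b = G_f_new, G_b_new
        if change < tol:
            break
    return G_f, G_b
```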
As can be seen from Fig. 6 the fermion and boson spectral functions display power law behavior at low frequencies. The power law behavior emerges in the infrared limit, i.e. for energies smaller than the low energy scale (which is $`E_d`$). At even smaller frequencies there is always a deviation from the power law behavior due to the finite temperature. The exponents extracted from the spectral functions at low but finite temperature for various values of the deep level filling $`n_d`$ in Fig. 7 are in good numerical agreement with the exact results in the regions $`n_d\in [0.0,0.3]`$ and $`n_d\in [0.7,1.0]`$. Note that in contrast to the $`n_d`$-dependent exponents within the CTMA the NCA spectral functions always diverge with $`n_d`$-independent exponents $`\alpha _f=\alpha _b=1/2`$. For intermediate coupling, $`n_d\in [0.3,0.7]`$, the convergence of the self-consistent scheme is very slow, and we find no stable numerical solution. It remains to be seen whether this is due to numerical instabilities or possibly due to the importance of further vertex corrections beyond the CTMA.
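The exponents quoted here are obtained from fits of the type sketched below; the fit window and the synthetic test data in the example are illustrative assumptions, not the actual CTMA output.

```python
# Extracting a threshold exponent from a computed spectral function by a
# log-log fit over the power-law window.
import numpy as np

def infrared_exponent(eps, A, fit_window=(1e-4, 1e-2)):
    """Fit A(eps) ~ eps**(-alpha) on fit_window and return alpha."""
    mask = (eps > fit_window[0]) & (eps < fit_window[1])
    slope, _ = np.polyfit(np.log(eps[mask]), np.log(A[mask]), 1)
    return -slope

# synthetic check: A(eps) = eps**(-0.75) should give alpha close to 0.75
eps = np.logspace(-5, -1, 200)
print(infrared_exponent(eps, eps ** -0.75))
```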
A comparison of the CTMA results with the weak-coupling treatment, which corresponds to $`n_d\to 0`$ in our model, shows that for finite interaction strength renormalization effects are important (see Fig. 8). The connection between $`n_d`$ and $`E_d/\mathrm{\Gamma }`$ is exactly given by Friedel’s sum rule $`n_d=1/2-\mathrm{arctan}(E_d/\mathrm{\Gamma })/\pi `$. Again we mention the $`n_d`$ dependence of the exponent $`\alpha _f`$ in contrast to the NCA result: To recover the Fermi liquid properties of the model one thus has to go far beyond the lowest order self-consistent approximation.
## VII Conclusion
In summary, we have calculated the exponents of threshold singularities in the X-ray photoemission and absorption spectra, using a standard many-body technique, where the empty and the singly occupied core level are represented by separate fields, auxiliary bosons and pseudofermions, respectively, coupled to the conduction electrons via a hybridization interaction. In this formulation, the X-ray problem is described by a spinless Anderson impurity model in pseudoparticle representation, and the initial condition of sudden creation of the impurity potential is implemented by the constraint that all expectation values of local fermion or boson fields must be calculated in the Hilbert subspace with pseudoparticle number $`Q=0`$. The latter can be fulfilled exactly. It was further shown that the X-ray photoemission cross section or core level spectral function is given by the boson spectral function, while the X-ray absorption cross section is proportional to the total fermion hybridization vertex. Therefore, the X-ray photoemission and absorption threshold exponents are identical to the infrared exponents of the auxiliary boson and pseudofermion spectral functions, respectively. It follows that both X-ray photoemission and absorption are solely governed by the orthogonality catastrophe, and there is no separation into single particle and excitonic effects.
In a more general context, the generalized SU($`N`$)$`\times `$SU($`M`$) Anderson impurity models, classified by the spin degeneracy $`N`$ of the local orbital and the number $`M`$ of degenerate conduction electron channels, may be considered as standard models to describe strong correlations induced by the restriction of no double occupancy of sites. Depending on their symmetry, these models display Fermi ($`N=M=1`$ or $`N\ge M+1`$) or non-Fermi liquid behavior ($`2\le N\le M`$) at low temperature . The present case of the spinless Anderson impurity model in slave boson representation ($`N=1`$, $`M=1`$), Eq. (6), may be considered as the most stringent test case for the development of new methods for strongly correlated systems. This is because for this case earlier approximation schemes like the non-crossing approximation (NCA) fail in the most pronounced way to even qualitatively describe the low-energy Fermi liquid behavior of this model, i.e. the $`n_d`$ dependence of the infrared threshold exponents, while in the non-Fermi liquid case the NCA gives the correct exponents at least in the Kondo limit of these models .
In the present paper we have applied a recently developed approximation scheme, the conserving $`T`$-matrix approximation (CTMA) to the $`N=1`$, $`M=1`$ Anderson impurity model to calculate the X-ray photoemission and absorption threshold exponents on a common footing. The CTMA includes the complete subclass of diagrammatic contributions which, in the limits of weak ($`n_d\to 0`$) and strong ($`n_d\to 1`$) impurity scattering potential, reduce to the renormalized parquet diagrams, which have been shown by Nozières et al. to describe the exact infrared singular behavior in the weak coupling regime of the X-ray problem. As a result, the CTMA recovers the correct X-ray photoemission and absorption exponents in a wide region around weak as well as strong coupling. In connection with earlier results on the spin $`1/2`$ Anderson impurity model ($`N=2`$, $`M=1`$), this makes the CTMA the first standard many-body technique to correctly describe the Fermi liquid regime of the Anderson impurity models in a systematic way, including the smooth crossover to the high temperature behavior.
We are grateful for discussions with J. Brinkmann, T. A. Costi and T. Kopp. T.S. acknowledges the support of the DFG-Graduiertenkolleg “Kollektive Phänomene im Festkörper”. This work was supported in part by SFB 195 of the Deutsche Forschungsgemeinschaft. Computer support was provided by the John-von-Neumann Institute for Computing, Jülich.
## A CTMA equations
In this appendix we give explicitly the self-consistent equations which determine the auxiliary particle self-energies within the CTMA. In the Matsubara representation the vertex functions $`T_{fc}`$ and $`T_{bc}`$ are given by the following Bethe-Salpeter equations:
$`T_{fc}(i\omega _n,i\omega _n^{};i\mathrm{\Omega }_m)`$ $`=`$ $`I_{fc}(i\omega _n,i\omega _n^{};i\mathrm{\Omega }_m)`$ (1)
$`+`$ $`{\displaystyle \frac{V^2}{\beta }}{\displaystyle \sum _{i\omega _n^{\prime \prime }}}G_b(i\omega _n+i\omega _n^{\prime \prime }-i\mathrm{\Omega }_m)G_f(i\omega _n^{\prime \prime })G_c(i\mathrm{\Omega }_m-i\omega _n^{\prime \prime })T_{fc}(i\omega _n^{\prime \prime },i\omega _n^{};i\mathrm{\Omega }_m)`$ (2)
with
$`I_{fc}(i\omega _n,i\omega _n^{};i\mathrm{\Omega }_m)={\displaystyle \frac{V^4}{\beta }}{\displaystyle \sum _{i\omega _n^{\prime \prime }}}G_b(i\omega _n+i\omega _n^{\prime \prime }-i\mathrm{\Omega }_m)G_f(i\omega _n^{\prime \prime })G_c(i\mathrm{\Omega }_m-i\omega _n^{\prime \prime })G_b(i\omega _n^{}+i\omega _n^{\prime \prime }-i\mathrm{\Omega }_m),`$
and $`T_{bc}`$
$`T_{bc}(i\nu _m,i\nu _m^{};i\mathrm{\Omega }_n)`$ $`=`$ $`I_{bc}(i\nu _m,i\nu _m^{};i\mathrm{\Omega }_n)`$ (3)
$`-`$ $`{\displaystyle \frac{V^2}{\beta }}{\displaystyle \sum _{i\nu _m^{\prime \prime }}}G_f(i\nu _m+i\nu _m^{\prime \prime }-i\mathrm{\Omega }_n)G_b(i\nu _m^{\prime \prime })G_c(i\nu _m^{\prime \prime }-i\mathrm{\Omega }_n)T_{bc}(i\nu _m^{\prime \prime },i\nu _m^{};i\mathrm{\Omega }_n)`$ (4)
with
$`I_{bc}(i\nu _m,i\nu _m^{};i\mathrm{\Omega }_n)={\displaystyle \frac{V^4}{\beta }}{\displaystyle \sum _{i\nu _m^{\prime \prime }}}G_f(i\nu _m+i\nu _m^{\prime \prime }-i\mathrm{\Omega }_n)G_b(i\nu _m^{\prime \prime })G_c(i\nu _m^{\prime \prime }-i\mathrm{\Omega }_n)G_f(i\nu _m^{}+i\nu _m^{\prime \prime }-i\mathrm{\Omega }_n).`$
Note that, in addition to the different sign in $`T_{fc}`$, these vertex functions differ from the $`T`$-matrices defined before in that they contain only terms with two or more rungs, since the inhomogenous parts $`I_{fc}`$ and $`I_{bc}`$ represent terms with two bosonic or fermionic rungs, respectively. The terms with a single rung correspond to the NCA diagrams and are evaluated separately.
The fermion self-energies in Fig. 9 are given by
$`\mathrm{\Sigma }_f^{fc}(i\omega _n)`$ $`=`$ $`{\displaystyle \frac{1}{\beta }}{\displaystyle \underset{i\mathrm{\Omega }_mi\omega _n}{}}G_c(i\mathrm{\Omega }_mi\omega _n)T_{fc}(i\omega _n,i\omega _n;i\mathrm{\Omega }_m)`$ (5)
$`\mathrm{\Sigma }_f^{bc}(i\omega _n)`$ $`=`$ $`{\displaystyle \frac{V^2}{\beta ^2}}{\displaystyle \underset{i\nu _m^{},i\nu _m^{\prime \prime }}{}}G_c(i\omega _ni\nu _m^{})G_b(i\nu _m^{})G_c(i\omega _ni\nu _m^{\prime \prime })G_b(i\nu _m^{\prime \prime })T_{bc}(i\nu _m^{},i\nu _m^{\prime \prime };i\nu _m^{}+i\nu _m^{\prime \prime }i\omega _n)`$ (6)
and the boson self-energies by
$`\mathrm{\Sigma }_b^{bc}(i\nu _m)`$ $`=`$ $`{\displaystyle \frac{1}{\beta }}{\displaystyle \underset{i\nu _mi\mathrm{\Omega }_n}{}}G_c(i\nu _mi\mathrm{\Omega }_n)T_{bc}(i\nu _m,i\nu _m;i\mathrm{\Omega }_n)`$ (7)
$`\mathrm{\Sigma }_b^{fc}(i\omega _n)`$ $`=`$ $`{\displaystyle \frac{V^2}{\beta ^2}}{\displaystyle \underset{i\omega _n^{},i\omega _n^{\prime \prime }}{}}G_c(i\omega _n^{}i\nu _m^{})G_f(i\omega _n^{})G_c(i\omega _n^{\prime \prime }i\nu _m)G_f(i\omega _n^{\prime \prime })T_{fc}(i\omega _n^{},i\omega _n^{\prime \prime };i\omega _n^{}+i\omega _n^{\prime \prime }i\nu _m).`$ (8)
After analytical continuation to the real frequency axis we have to solve the NCA equations (16) and the following CTMA equations
$`\mathrm{\Sigma }_f^{fc}(ϵ)`$ $`=`$ $`{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}f(u-ϵ)A_c(u-ϵ)T_{fc}(ϵ,ϵ;u)`$ (9)
$`\mathrm{\Sigma }_f^{bc}(ϵ)`$ $`=`$ $`V^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du^{}}{\pi }}f(u-ϵ)f(u^{}-ϵ)A_c(ϵ-u)G_b(u)T_{bc}(u,u^{};u+u^{}-ϵ)A_c(ϵ-u^{})G_b(u^{})`$ (10)
$`\mathrm{\Sigma }_b^{bc}(ϵ)`$ $`=`$ $`{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}f(u-ϵ)A_c(ϵ-u)T_{bc}(ϵ,ϵ;u)`$ (11)
$`\mathrm{\Sigma }_b^{fc}(ϵ)`$ $`=`$ $`V^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du^{}}{\pi }}f(u-ϵ)f(u^{}-ϵ)A_c(ϵ-u)G_f(u)T_{fc}(u,u^{};u+u^{}-ϵ)A_c(u^{}-ϵ)G_f(u^{})`$ (12)
with the fermion-conduction electron vertex function
$`T_{fc}(ϵ,ϵ^{};\mathrm{\Omega })`$ $`=`$ $`I_{fc}(ϵ,ϵ^{};\mathrm{\Omega })-V^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}f(u-\mathrm{\Omega })G_b(ϵ+u-\mathrm{\Omega })G_f(u)A_c(\mathrm{\Omega }-u)T_{fc}(u,ϵ^{};\mathrm{\Omega })`$ (13)
$`I_{fc}(ϵ,ϵ^{};\mathrm{\Omega })`$ $`=`$ $`V^4{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}f(u-\mathrm{\Omega })G_b(ϵ+u-\mathrm{\Omega })G_f(u)A_c(\mathrm{\Omega }-u)G_b(ϵ^{}+u-\mathrm{\Omega })`$ (14)
and the boson-conduction electron vertex function
$`T_{bc}(ϵ,ϵ^{};\mathrm{\Omega })`$ $`=`$ $`I_{bc}(ϵ,ϵ^{};\mathrm{\Omega })-V^2{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}f(u-\mathrm{\Omega })G_f(ϵ+u-\mathrm{\Omega })G_b(u)A_c(u-\mathrm{\Omega })T_{bc}(u,ϵ^{};\mathrm{\Omega })`$ (15)
$`I_{bc}(ϵ,ϵ^{};\mathrm{\Omega })`$ $`=`$ $`V^4{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}{\displaystyle \frac{du}{\pi }}f(u-\mathrm{\Omega })G_f(ϵ+u-\mathrm{\Omega })G_b(u)A_c(u-\mathrm{\Omega })G_f(ϵ^{}+u-\mathrm{\Omega }).`$ (16)
Note that the self-energy contributions obtained from the two rung $`T`$-matrix terms ($`I_{fc}`$ and $`I_{bc}`$) display no skeleton diagrams; they are subtracted in the end. |
no-problem/9912/cond-mat9912170.html | ar5iv | text | # Proposal for a Quantum Hall Pump
## Abstract
A device is proposed that is similar in spirit to the electron turnstile except that it operates within a quantum Hall fluid. In the integer quantum Hall regime, this device pumps an integer number of electrons per cycle. In the fractional regime, it pumps an integer number of fractionally charged quasiparticles per cycle. It is proposed that such a device can make an accurate measurement of the charge of the quantum Hall effect quasiparticles.
The basic idea of a parametric pump is that some parameters of a system are varied slowly and periodically such that after each full cycle the system returns to its initial state with the net effect being that some amount of a fluid is transferred from a source to a drain. There are many examples of such pumps in a very wide range of contexts — from the human heart to a firemans’ bucket brigade. Over the past few years there has been increasing interest in parametric pumping of charge in mesoscopic systems both theoretically and experimentally. One particularly interesting example of a parametric pump is the electron turnstile – a device that transfers a single electron per cycle from a source to a drain. Such devices seem quite promising as metrological current and capacitance standards. In this paper I propose a device very similar to the electron turnstile that operates in the quantum Hall regime. Similar to the electron turnstile, when operated adiabatically at low temperature in the integer quantum Hall regime, the number of electrons pumped in a single cycle is quantized. However, in the fractional quantum Hall regime, it is an integer number of fractionally charged quasiparticles that is pumped in each cycle. Thus, this device has the potential to make measurements of the fractional charge of quantum Hall quasiparticles.
Description of the Device: The structure of the proposed device, shown schematically in Fig. 1, is quite similar to the devices used in Refs. .
A full pumping cycle is shown in Fig. 2. Throughout the cycle, the source-drain voltage may be held at zero. The cycle can be described as the following steps:
(a) Begin in a state where the edges are far from the antidot. In this state tunneling from the antidot to either the right or left edge is forbidden. (I.e., the tunneling amplitude is very close to zero).
(b) Move the left edge state close to the antidot (by charging the left gate negatively) such that the tunneling amplitude between the left gate and the antidot becomes large (compared to the pumping frequency).
(c) Negatively charge the central gate such that the size of the antidot grows. Here, as the potential of the central gate increases, particles (or quasiparticles) that were occupying states near the edges of the antidot are shifted above the Fermi energy. As they cross through the Fermi energy, they tunnel out to the left edge (they cannot tunnel to the right edge because the right edge is insulated from the dot by a large region of quantum Hall fluid).
(d) Move the left edge state back to its original position far from the antidot (by uncharging the left gate) such that tunneling from the antidot to either the right or left edge is once again forbidden.
(e) Move the right edge state close to the antidot (by charging the right gate negatively) such that the tunneling amplitude between the right edge and the antidot becomes large.
(f) Uncharge the central gate such that antidot becomes smaller. As the potential on the central gate decreases, the quasiparticles from the right edge tunnel back to the region near the edges of the antidot, filling states that were above the Fermi energy.
(a) Move the right edge back far away from the antidot (by uncharging the right gate) to return the system to the original state.
Similar to the electron turnstile the charge pumped in this cycle is given by the difference between the charge on the antidots in steps (a) and (d). It is important to note that in stages (a) and (d), when the tunneling to both edges is turned off, the charge on the antidot is quantized either in units of the electron charge (in the integer regime) or in units of the quasiparticle charge (in the fractional regime). Thus, we expect that the charge pumped in a cycle will similarly be quantized, at least at low temperature. More rigorous arguments for this quantization will be made below.
If we then imagine that we fix the central gate voltage at stage (a) and measure the charge pumped per cycle as a function of the central gate voltage at stage (d), at zero temperature, we would obtain a step-like curve, illustrated as the solid line in Fig. 3.
Quantization of Pumping — Integer Case: A general approach to understanding quantized charge pumping is reminiscent of Laughlin’s argument for quantized Hall conductance. Consider the Corbino geometry shown in Fig. 4. In the integer quantum Hall regime, at low temperature, the ground state of the system is unique and gapped at all times in the pumping cycle. If the deformation is made adiabatically, the system simply tracks the ground state. (“Adiabatic” here is defined to mean that the system tracks the ground state). Thus, at the beginning and end of the cycle, the system is in the same state and the only net effect is that an integer number of electrons could have been transferred from the inside to the outside edge of the annulus (or vice-versa).
For the simple case of non-interacting electrons, one can write the dynamics in terms of a simple time dependent Schroedinger equation. This can be integrated explicitly (exactly, or perturbatively) to demonstrate the quantization of pumped charge as claimed above. This explicit approach is useful in that it allows us to study the effects of nonadiabaticity in detail. Such a study is a subject of current research and will be reported elsewhere.
Fractional Case: In the case of the fractional quantum Hall effect, the Laughlin argument must be modified to account for fractionalization of charge. It now becomes possible to transfer a single fractionally charged quasiparticle across the system. (As usual, increasing the charge on the antidot by a fractional amount results in the decrease of the charge on the edges of the system by the same amount being that the bulk is incompressible and the total charge of the system is conserved). The argument given in the above section — which would seem to require transfer of an integer number of electrons per cycle — fails in the fractional Hall effect case because the ground state becomes $`q`$-fold degenerate with $`q`$ a small integer related to the quasiparticle charge and the filling fraction. For example, for the simple case of $`\nu =p/(2p+1)`$, there are $`q=2p+1`$ degenerate ground states (and the quasiparticle charge is $`e/(2p+1)`$). Because of this ground state degeneracy, the system need not return to the same ground state after each pumping period, but may instead cycle through the $`q`$ ground states. As a result, it is the number of electrons transferred across the system in $`q`$ cycles that is quantized, rather than the number transferred in a single cycle. Thus, the average charge transferred in a single cycle is quantized in units of $`e/q`$, which is the quasiparticle charge. Indeed, it is known that adiabatic transfer of a quasiparticle across such a Corbino system does indeed cycle the degenerate ground states.
Other than this minor modification of the above Laughlin-like argument, we expect that the same considerations as in the above integer case will apply for all fractional quantized Hall states. We also expect that, as above, the temperature scale at which the quantization is smeared out is roughly given by the single quasiparticle addition energy. For a more detailed calculation, we expect that chiral Luttinger liquid theory can be used to calculate the pumped current explicitly. This, too, is a subject of current research, and will be reported elsewhere.
Scattering Matrix Approach: A rather elegant, more formal, argument for quantization is based on the scattering matrix approach to adiabatic parametric pumping. In this approach, one writes the charge pumped in one cycle ($`t`$ varies from $`0`$ to $`\tau `$) as
$$Q=e\int _0^\tau \frac{dt}{2\pi }\sum _\beta \sum _{\alpha \in \text{source}}\mathrm{Im}\left[S_{\alpha \beta }^{*}(t)\frac{d}{dt}S_{\alpha \beta }(t)\right]$$
(1)
where $`S_{\alpha \beta }(t)`$ is the scattering matrix at time $`t`$ from channel $`\alpha `$ to channel $`\beta `$. Here $`S(t)`$ is to be calculated as if the parameters of the system are frozen at time $`t`$, and $`\alpha `$ is summed only over channels at the source. In the quantum Hall regime, so long as there is no direct tunneling across the quantum Hall bar (i.e., as long as the antidot is not simultaneously connected to both edges), the structure of the scattering matrix is trivial — anything that comes into the left edge at the source (bottom left of each frame of Fig. 2) must follow that edge all the way to the drain (upper left). If we have a quantum Hall state with only a single edge channel ($`\nu =1`$, for example) the scattering matrix has only two nonzero elements – each with unit magnitude (one element for the edge state leaving the source on the lower left side and ending up at the upper left, and one leaving the drain at the upper right and ending up at the source at the lower right). Only one of these two nonzero elements (the one representing the state leaving the source) enters into Eq. 1. We write this relevant unit magnitude ($`U(1)`$ valued) element as $`e^{i\varphi (t)}`$, such that we have the charge pumped per cycle as $`Q=e\int _0^\tau \frac{dt}{2\pi }\frac{d\varphi (t)}{dt}.`$ In the integer quantum Hall regime the system must return to its original state after a full cycle. Thus, $`\varphi (t)`$ must return to its original value modulo $`2\pi `$. The pumped charge is then just the number of times $`\varphi `$ wraps by $`2\pi `$ per cycle. In this way we see that the pumped charge is quantized as a result of being a topological quantity!
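As an illustration of this counting, the winding of a sampled phase $`\varphi (t)`$ over one cycle can be evaluated as follows; the particular parametrization of $`\varphi (t)`$ below is an arbitrary example, not derived from any specific pump.

```python
# Pumped charge per cycle as the winding number of the U(1) phase phi(t):
#   Q/e = (1/2*pi) * total change of phi over one closed cycle.
import numpy as np

def pumped_charge(phi):
    """Winding number of a sampled phase phi(t) over one closed cycle."""
    dphi = np.diff(phi)
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # fold jumps back into (-pi, pi]
    return int(np.round(dphi.sum() / (2 * np.pi)))

t = np.linspace(0.0, 1.0, 1001)
phi = 2 * np.pi * t + 0.3 * np.sin(2 * np.pi * t)   # winds once per cycle
print(pumped_charge(phi))                            # -> 1
```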
This quantization argument can be generalized to the case of $`m`$ copropagating channels per edge. In this case, the $`m`$ edge channels can mix with each other as long as they all go directly along the edge from the source to the drain and do not cross the Hall bar. The relevant nonzero terms of the scattering matrix then form a $`U(m)=U(1)\times SU(m)`$ matrix. It can be shown that the $`U(1)`$ part is again the only important piece (representing the total charge) and the pumped charge per cycle is again quantized as described above.
This scattering matrix formalism is easily extended to finite temperature (at least for the integer case). One needs only to define scattering matrices $`S(E,t)`$ as a function of incoming energy. Eq. 1 becomes $`E`$ dependent resulting in a charge transfer $`Q(E)`$ which is then smeared by a Fermi function to give the charge transfer: $`Q=-\int 𝑑EQ(E)\frac{dn_F(E)}{dE}`$ with $`n_F`$ the Fermi function. In Fig. 3 this smearing by a Fermi function is shown as the dashed line (in the figure $`T`$ is taken to be $`10\%`$ of the antidot single particle addition energy).
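The thermal smearing can be reproduced with a short numerical sketch that weights a zero-temperature step in $`Q(E)`$ with $`-dn_F/dE`$; the step position, temperatures and energy grid below are illustrative assumptions.

```python
# Thermal smearing of a T=0 step in Q(E) by the derivative of the Fermi function,
#   Q = Int dE Q(E) * (-dn_F/dE),  with  -dn_F/dE = 1/(4T cosh^2(E/2T)).
import numpy as np

def smeared_charge(Q_of_E, T, E_grid):
    """Evaluate Q at temperature T for a given T=0 charge curve Q_of_E."""
    kernel = 1.0 / (4.0 * T * np.cosh(E_grid / (2.0 * T)) ** 2)   # centered on the chemical potential
    dE = E_grid[1] - E_grid[0]
    return np.sum(Q_of_E(E_grid) * kernel) * dE

E = np.linspace(-50.0, 50.0, 20001)              # energies in units of the addition energy
Q_step = lambda e: np.where(e > 0.3, 1.0, 0.0)   # one step of the zero-temperature staircase
for T in (0.01, 0.1, 0.5):
    print(T, smeared_charge(Q_step, T, E))       # step washes out as T grows
```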
For the noninteracting electron (integer) case and for some simple interacting cases, it is possible to solve for the scattering matrix explicitly (given the energies of eigenstates on the antidot and the tunneling matrix elements as a function of time). Indeed it can be established, as claimed above, that the charge pumped per cycle at $`T=0`$ is quantized and is equal to the difference in the charge on the antidot between steps (a) and (d).
To generalize this scattering matrix approach to the fractional quantum Hall regime, we imagine connecting a fractional Hall sample to integer Hall leads in a smooth fashion, so that one can still ask about the scattering matrix for electrons injected into the system. Here, due to the above mentioned ground state degeneracy, the system need not return to its original state after a single pumping cycle. In the case of having $`q`$ degenerate ground states, the system can cycle through the ground states returning to the original state only after $`q`$ full periods of pumping. Thus, the pumped charge $`Q`$ in Eq. 1 need only be quantized in units of the electron charge after $`q`$ cycles, so the pumped charge per cycle is quantized in units of $`e/q`$.
Experiments: This experiment can thus be used as a measurement of the charge of the fractional quantum Hall quasiparticle. Although, a number of previous works have measured the fractional charge of quantum Hall quasiparticles, it is quite possible that the currently proposed pumping experiment will be the theoretically clearest measurement yet.
The main experimental problem in carrying out this experiment appears to be that temperature must be sufficiently low that the current steps (see Fig 3) are not too smeared out. As discussed above, this temperature scale is mostly determined by the single (quasi)particle addition energy for the antidot. It is thus quite useful to note that this energy scale has in fact been measured for several similar experimental systems in both the integer and fractional regimes. Although the precise addition energy depends on the particular sample in question, the authors of Refs. were able to achieve addition energies on the order of several hundred mK for both $`\nu =1`$ and $`\nu =1/3`$. For the case of $`\nu =2/5`$, however, this energy seems to be somewhat lower, but may still be high enough to successfully perform the proposed pumping experiment.
Another experimental issue is how fast can one pump the system and expect to have the pumped charge quantized. This somewhat subtle issue is a subject of current research. However, as estimates, one can expect that the tunnelling time from the antidot to the edge should set one time scale, the single particle addition energy sets another time scale, and the dissipation time yet another time scale. It is quite safe to say that pumping at a rate slower than all of these time scales will remain quantized. The effects of pumping faster will be discussed in a forthcoming paper.
Acknowledgments: I am indebted to B. Spivak for encouraging me to think about pumping in quantum Hall systems, and to C. Marcus for encouraging me to turn these ideas into a paper. Helpful conversations with N. Zhitenev, R. de Piccioto, L. Levitov, J. K. Jain, A. Moustakas, and C. Chamon are also acknowledged. |
no-problem/9912/chao-dyn9912001.html | ar5iv | text | # A New Feature in Some Quasi-discontinuous Systems *footnote **footnote * Supported by the National Natural Science Foundation of China under grant No. 19975039, and the Foundation of Jiangsu Provincial Education Committee under the Grant No. 98kjb140006.
Many systems can display a very short, rapidly changing stage (quasi-discontinuous region) inside a relatively very long and slowly changing process. A quantitative definition for the ”quasi-discontinuity” in these systems has been introduced. We have shown by a simplified model that extra-large Feigenbaum constants can be found inside some period-doubling cascades due to the quasi-discontinuity. As an example, this phenomenon has also been observed in the Rose-Hindmarsh model describing neuron activities.
PACS: 05.45.+b
Recently, there has been considerable interest in piece-wise smooth systems (PWSSs). Such models usually describe systems displaying sudden, discontinuous changes, or jumping transitions after a long, gradually varying process. These systems may show some behaviors apparently different from those of the everywhere - differentiable systems (EDSs) \[1-5\]. In fact, the sudden changes in the above processes also need time. Therefore, such a process can be everywhere smooth if one describes it with a high enough resolution. Usually, in the largest part of the process, a quantity changes very slowly. It has a drastic changing only in one or several very small stages. We suggest to call the stage as a ”quasi-discontinuous region (QDR)” and shall define a ”quasi-discontinuity (QD)” inside it quantitatively. A system that can display QDR in its processes may be called a ”quasi-discontinuous system (QDS)”. Obviously, QDS is a much wider conception than PWSS and may serve as an intermediate between EDS and PWSS.
In order to show our basic idea and the first characteristic of QDS, we have constructed a model map as shown in Fig.1. The map reads:
$$f(x)=\{\begin{array}{cc}f_1(x)=k_1(x-x_1)+y_1\hfill & x\in [0,x_1),\hfill \\ f_2(x)=A(x-x_0)^2+y_0\hfill & x\in [x_1,x_3),\hfill \\ f_3(x)=\sqrt{r^2-(x-o_x)^2}+o_y\hfill & x\in [x_3,x_4),\hfill \\ f_4(x)=k_2(x-x_4)+y_4\hfill & x\in [x_4,x_5),\hfill \\ f_5(x)=k_3(x-1)\hfill & x\in [x_5,1].\hfill \end{array}$$
(1)
As can be seen in Fig.1, the slope of $`f_1`$ branch is a unit. It is simulating the slowly changing part of the process. Branch $`f_4`$ is a linear line with a very large negative slope $`k_2`$. Branch $`f_3`$ is a small part of a circle introduced for a smooth connection of $`f_2`$ and $`f_4`$. The center of the circle locates at ($`o_x,o_y`$), and its radius is $`r`$. Branches $`f_3`$ and $`f_4`$ can simulate the small drastic changing part. In Eqs. (1), $`A`$ is chosen as the control parameter. It is obvious that the fixed point at $`P_2`$ undergoes period-doubling bifurcation when $`A`$ changes inside a certain parameter range. $`(x_j,y_j)`$ denote the coordinates of points $`P_j`$ (j=0,…,5), respectively. They are determined by the conditions of smooth connections between neighboring branches. For certain function forms of $`f_2`$ and $`f_4`$, the circle of $`f_3`$ still may be very large or small. We define another parameter $`\alpha `$ to fix it. Therefore, $`o_x`$, $`o_y`$, $`r`$, $`k`$, and $`(x_j,y_j)`$ are all functions of $`A`$ and $`\alpha `$. Their explicit forms will not be shown in this short letter. The parameter ranges chosen for this study are $`A[8.0,9.0]`$ and $`\alpha [0.99995,0.999997]`$.
Now we will define QDR and QD in this model. According to the geometrical properties of Eqs. (1), one can obtain the following conclusions. When $`\alpha =1`$, $`r=0`$, the first order derivative of the map function is discontinuous at $`x_3=x_4`$. The second order derivative shows a singularity, that is, an infinitely large value here. When $`\alpha \in (0,1)`$, the branch $`f_3`$ has a finite length. The first order derivative of the map function is continuous at both $`x_3`$ and $`x_4`$. The second order derivative between them is finite but tends toward infinity as $`\alpha \to 1`$. In this case, the maximum value of the second order derivative of the map function between $`x_3`$ and $`x_4`$ may be used to describe the ”quasi-discontinuity (QD)”. So we shall define QD as
$$\kappa =\mathrm{max}\left|\frac{d^2f}{dx^2}\right|_{x_0},x_0\in [x_3,x_4],$$
(2)
and define QDR as
$$\mathrm{\Delta }=|x(2)-x(1)|,$$
(3)
where $`x(2)`$ and $`x(1)`$ are between $`x_3`$ and $`x_4`$, and satisfy
$$|\frac{df}{dx}|_{x(1)}=|\frac{df}{dx}|_{x(2)}=\frac{1}{\sqrt{2}}\mathrm{max}|\frac{df}{dx}|.$$
According to the definitions (2) and (3), the QDR and QD for Eqs. (1) can be expressed as
$$x(1)=o_x-\frac{k_2r}{\sqrt{2+k_2^2}},x(2)=x_4,$$
(4)
and
$$\kappa =\frac{r^2}{[r^2-(x_4-o_x)^2]^{3/2}},$$
(5)
respectively.
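For a concrete feel for these quantities, Eqs. (3)-(5) can be evaluated for an assumed geometry of the circular branch, as sketched below; the values of $`o_x`$, $`r`$ and $`k_2`$ are illustrative, and $`x_4`$ is fixed by requiring tangency with the steep linear branch $`f_4`$.

```python
# Illustrative evaluation of the QDR width (Eqs. (3)-(4)) and the QD (Eq. (5))
# for the circular branch; o_x, r and k2 are assumed values, not those of the paper.
import numpy as np

o_x, r, k2 = 0.8, 1.0e-2, -200.0                    # circle center, radius, steep slope

x4 = o_x + abs(k2) * r / np.sqrt(1.0 + k2**2)       # tangency of the circle with slope k2
x1 = o_x + abs(k2) * r / np.sqrt(2.0 + k2**2)       # |f'(x1)| = |k2|/sqrt(2), cf. Eq. (4)
qdr = abs(x4 - x1)                                  # Eq. (3)
kappa = r**2 / (r**2 - (x4 - o_x)**2) ** 1.5        # Eq. (5)
print(f"QDR width Delta = {qdr:.3e},  QD kappa = {kappa:.3e}")
```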
When $`\alpha =1`$ it is reasonable to observe one of the typical behaviors of a PWSS, that is, the interruption of a period-doubling bifurcation cascade by a type V intermittency.
When $`\alpha `$ is smaller than, but close to 1, there is a QDR between $`x_3`$ and $`x_4`$ instead of the non-differentiable point. The mapping is everywhere smooth, so the period-doubling bifurcation cascade should continue to the end. However, there is a drastic transition of the mapping function slope in a very small QDR that makes all further bifurcation points compressed into a relatively much shorter parameter distance. The Feigenbaum constants $`\delta _i`$ (i=3,4,5 or even more), influenced by the compression, should show some extraordinary values. That is exactly what we have observed. Table 1 shows the data about three cascades. In the table, $`n`$ indicates the sequence number of doubling, $`\delta _n(i)`$ $`(i=1,2,3)`$ are the Feigenbaum constants of the cascade $`i`$. The parameter values $`\alpha (i)`$ and the maximum value of the QD, $`\kappa (i)`$, for each cascade are indicated in the caption. In the table $`\delta _{n_0}`$ data are obtained from Ref. . They are listed here for a comparison with the corresponding ones obtained in a typical everywhere smooth situation. As can be seen in Table 1, when $`\kappa (i)`$ is large, a lot of Feigenbaum constants, $`\delta _3`$, $`\delta _4`$, $`\delta _5`$, $`\delta _6`$ and $`\delta _7`$ are extraordinary. The further constants may be considered as ordinary, but they converge to the universal Feigenbaum number very slowly. When $`\kappa (i)`$ is smaller, only $`\delta _3`$, $`\delta _4`$ and $`\delta _5`$ are apparently extraordinary. The further constants converge much faster. When $`\kappa (i)`$ is very small, the whole Feigenbaum constant sequence is very close to the standard $`\delta _{n_0}`$ data. That may indicate a smooth transition from a QDS to an EDS. Based on this understanding we suggest the use of the common extraordinary Feigenbaum constants $`\delta _j(j=3,4)`$ to signify this phenomenon. The relationship between $`\kappa `$, the QD, and the symbol of the phenomenon $`\delta _j`$ ($`j=3,4`$), has been computed. Figure 2 shows the resulting dependence of $`\delta _j`$ on $`\kappa `$ (Although $`\kappa `$ is dependent on both $`\alpha `$ and $`A`$, our numerical results demonstrate that $`\kappa `$ is not sensitive to the parameter $`A`$ at a given $`\alpha `$. Therefore, it is possible to choose the maximum $`\kappa `$ to represent QD of a whole diagram. For example, for the bifurcation points $`A_n(n=0,1,2,\mathrm{\dots },12)`$, indicated in the second column of Table 1, the corresponding $`\kappa `$ are $`220.9\times 10^6`$, $`208.2\times 10^6`$, $`205.6\times 10^6`$, …, $`205.0\times 10^6`$, respectively. So we choose $`220.9\times 10^6`$ as the representative $`\kappa `$ of the bifurcation diagram). One can see that $`\delta _4`$ increases, but $`\delta _3`$ decreases when $`\kappa `$ becomes larger and larger.
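The ratios quoted in Table 1 follow from successive bifurcation points through the standard definition $`\delta _n=(A_{n-1}-A_{n-2})/(A_n-A_{n-1})`$; the snippet below shows the computation on synthetic bifurcation values (not the data of Table 1) that accumulate geometrically toward the universal value 4.669….

```python
# Feigenbaum ratios delta_n = (A_{n-1} - A_{n-2}) / (A_n - A_{n-1})
# from a list of successive period-doubling parameter values A_n.
import numpy as np

def feigenbaum_ratios(A):
    A = np.asarray(A, dtype=float)
    d = np.diff(A)
    return d[:-1] / d[1:]

# synthetic cascade accumulating geometrically with ratio 4.669 (illustration only)
A_inf, c, delta = 3.57, 1.0, 4.669
A = [A_inf - c / delta**n for n in range(10)]
print(feigenbaum_ratios(A))      # -> values close to 4.669
```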
It is important to find examples of this kind of interesting phenomenon in practical systems. We have done such a study in Rose-Hindmarsh (R-H) model. The model, which describes neuronal bursting , can be expressed by
$$\{\begin{array}{c}\frac{dx}{dt}=y-ax^3+bx^2+I-z,\hfill \\ \frac{dy}{dt}=c-dx^2-y,\hfill \\ \frac{dz}{dt}=r[s(x-x^{*})-z],\hfill \end{array}$$
(6)
where $`x`$ is the electrical potential of the biological membrane, $`y`$ is the recovering variable, $`z`$ is the adjusting current, $`a,b,c,d,s`$ and $`x^{*}`$ are constants, $`r`$ and $`I`$ are chosen as the control parameters. We shall take $`a=1,b=3,c=1,d=5,x^{*}=-1.6,s=4`$ for this study. Fig.3 shows the $`\mathrm{Poincar}\stackrel{`}{\mathrm{e}}`$ map of a strange attractor observed when $`I=2.9,r=0.00433`$ (Here the $`\mathrm{Poincar}\stackrel{`}{\mathrm{e}}`$ section is defined as the coordinate value of z axis at the maximum in x direction of the trajectory. We have also tested some different definitions of $`\mathrm{Poincar}\stackrel{`}{\mathrm{e}}`$ section, the results have shown that all of them are qualitatively the same as each other). It is clear that the iterations in the region $`[z_1,z_2]`$ change very rapidly. Therefore, we call this region as a QDR.
Table 2 shows the critical bifurcation parameter values and the corresponding Feigenbaum constants for a period-doubling bifurcation cascade. One can see that $`\delta _1`$ and $`\delta _2`$ are larger than ordinary values. As our computation has confirmed, that means an interruption of the cascade by a collision of the periodic orbit with the QDR when the first time period-doubling finished.
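A sketch of how such a Poincaré section can be generated numerically is given below; the integration time, initial condition and tolerances are illustrative choices, while the model parameters are those quoted above.

```python
# Sketch: integrate the Rose-Hindmarsh equations (6) and record the value of z
# at each maximum of x, as in the Poincare section used here.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c, d, s, x_star = 1.0, 3.0, 1.0, 5.0, 4.0, -1.6
I, r = 2.9, 0.00433

def rh(t, u):
    x, y, z = u
    return [y - a*x**3 + b*x**2 + I - z,
            c - d*x**2 - y,
            r*(s*(x - x_star) - z)]

def xdot(t, u):                      # event: dx/dt = 0 crossed from above (maximum of x)
    x, y, z = u
    return y - a*x**3 + b*x**2 + I - z
xdot.direction = -1

sol = solve_ivp(rh, (0.0, 5000.0), [0.1, 0.0, 0.0], events=xdot,
                max_step=0.2, rtol=1e-8, atol=1e-10)
z_section = sol.y_events[0][100:, 2]     # drop an initial transient, keep z at maxima of x
print(len(z_section), z_section[:5])
```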
For a comparison with the function $`\kappa \delta _i`$ shown in Fig. 2, we have computed the bifurcation diagrams with $`I=2.8,2.9,3.0,3.1,3.2,3.3,3.4`$, and $`r[0.1\times 10^2,4\times 10^2]`$. The results are shown in Fig. 4 (Here, we also choose the maxmum $`\kappa `$ to represent the quasi-discontinuouty of one bifurcation diagram). They are in a qualitative agreement with those in Fig. 2.
In conclusion, we have found some extraordinary Feigenbaum constants in some period-doubling bifurcation cascades in a constructive and a practical system. The mechanism of the phenomenon is that a periodic orbit near a critical point of bifurcation crosses a QDR in the system. This understanding may be important for the experimental scientists because very often they can measure only the first several Feigenbaum constants in a real experiment. After observing strange Feigenbaum constants, they can verify if their system is a QDS with the knowledge in this discussion. Moreover, our results also demonstrate that between typical PWSSs and EDSs there can be a type of transitive systems. |
no-problem/9912/math-ph9912020.html | ar5iv | text | # One Dimensional Regularizations of the Coulomb Potential with Application to Atoms in Strong Magnetic Fields
## 1. Introduction
It is well-known that systems in strong magnetic fields behave like systems in one-dimension, i.e., a strong magnetic field confines the particles to Landau orbits orthogonal to the field, leaving only their behavior in the direction of the field subject to significant influence by a static potential. Motivated by this general principle and the work of Lieb, Solovej and Yngvason \[LSY\] on atoms in extremely strong magnetic fields, Brummelhuis and Ruskai \[BR1\] initiated a study of models of atoms in homogeneous strong magnetic fields in which the 3-dimensional wave-function has the form
(1.1)
$$\mathrm{\Psi }(𝐫_1,𝐫_2\mathrm{}𝐫_n)=\psi (x_1\mathrm{}x_n)\mathrm{{\rm Y}}(y_1,z_1,y_2,z_2,\mathrm{}y_n,z_n)$$
where $`\mathrm{{\rm Y}}`$ lies in the projection onto the lowest Landau band for an N-electron system. We follow the somewhat non-standard convention of choosing the magnetic field in the x-direction, i.e., $`𝐁=(B,0,0)`$ where $`B`$ is a constant denoting the field strength, in order to avoid notational confusion with the nuclear charge $`Z`$.
Such models lead naturally to one dimensional regularizations of the Coulomb potentials of the form
(1.2) $`V_m(x)`$ $`=`$ $`{\displaystyle \int _0^{2\pi }}{\displaystyle \int _0^{\infty }}{\displaystyle \frac{|\gamma _m(r,\theta )|^2}{\sqrt{x^2+r^2}}}r𝑑r𝑑\theta `$
(1.3) $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Gamma }(m+1)}}{\displaystyle \int _0^{\infty }}{\displaystyle \frac{u^me^{-u}}{\sqrt{x^2+u}}}𝑑u`$
(1.4) $`=`$ $`{\displaystyle \frac{2e^{x^2}}{\mathrm{\Gamma }(m+1)}}{\displaystyle \int _{|x|}^{\infty }}(t^2-x^2)^me^{-t^2}𝑑t`$
where $`\gamma _m(r,\theta )=\frac{1}{\sqrt{\pi m!}}e^{im\theta }r^me^{-r^2/2}`$. Recognition that such potentials are important goes back at least to Schiff and Snyder \[SS\] in 1939 and played an important role in the Avron, Herbst and Simon study \[AHS\] of hydrogen. Recently, Ruskai and Werner \[RW\] undertook a detailed study of these potentials, proving the important property of convexity of $`1/V_m`$ as well as a number of other useful properties. The primary purpose of this note is to give a summary of these results in Sections 4 and 5. Before doing that, we briefly discuss one-dimensional models of atoms in strong magnetic fields in Section 2 and their implications for the maximum negative ionization problem in Section 3.
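The representation (1.3) is also convenient for numerical work. As a simple illustration (our own sketch, not code from \[RW\]), one can evaluate $`V_m(x)`$ by quadrature and check it against the elementary bounds of property (a) in Section 4.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def V(m, x):
    """V_m(x) from the integral representation (1.3)."""
    integrand = lambda u: u**m * np.exp(-u) / np.sqrt(x**2 + u)
    val, _ = quad(integrand, 0.0, np.inf)
    return val * np.exp(-gammaln(m + 1.0))

# Property (a): 1/sqrt(x^2+m+1) < V_m(x) < 1/sqrt(x^2+m); for m = 0 the right-hand
# side reduces to the weaker statement V_0(x) < 1/x of property (b).
for m in (0, 1, 5):
    for x in (0.5, 2.0, 10.0):
        v = V(m, x)
        assert 1.0 / np.sqrt(x**2 + m + 1.0) < v < 1.0 / np.sqrt(x**2 + m)
print("bounds verified at the sampled points")
```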
## 2. Atoms in Strong Magnetic Fields
The Hamiltonian for an $`N`$ electron atom in a magnetic field $`𝐁`$ is
(2.1)
$$H(N,Z,B)=\underset{j=1}{\overset{N}{\sum }}\left[|𝐏_j+𝐀|^2-\frac{Z}{|𝐫_j|}\right]+\underset{j<k}{\sum }\frac{1}{|𝐫_j-𝐫_k|}$$
where $`𝐀`$ is a vector potential such that $`\times 𝐀=𝐁`$. The ground-state energy of $`H(N,Z,B)`$ is given by
(2.2)
$$E_0(N,Z,B)=\underset{\|\mathrm{\Psi }\|=1}{\mathrm{inf}}\langle H(N,Z,B)\mathrm{\Psi },\mathrm{\Psi }\rangle $$
Let $`E_0^{\mathrm{conf}}(N,Z,B)`$ denote the corresponding minimum restricted to functions of the form (1.1). For extremely strong fields, it was shown in \[LSY\] that $`E_0/E_0^{\mathrm{conf}}\to 1`$ as $`B/Z^{4/3}\to \infty `$ with $`N/Z`$ fixed.
In \[BR1\] and \[BR2\] we consider two special cases of (1.1). We write the Landau state with angular momentum $`m`$ in the x-direction in the form
(2.3)
$$\gamma _m^B(y,z)=\frac{B^{(m+1)/2}}{\sqrt{\pi m!}}\overline{\zeta }^me^{-B|\zeta |^2/2}$$
where $`\zeta =y+iz`$. Then our two special cases can be described as follows:
The zero model. In this case, we make the extremely simple assumption that $`\mathrm{{\rm Y}}`$ is simply a product of Landau states with $`m=0`$, i.e., $`\mathrm{{\rm Y}}=\prod _{k=1}^N\gamma _0(y_k,z_k)`$.
The Slater model. In this case we assume that $`\mathrm{{\rm Y}}`$ is an antisymmetrized product of Landau states with $`m=0,1,\dots ,N-1`$, i.e., $`\mathrm{{\rm Y}}=\frac{1}{\sqrt{N!}}[\gamma _0\wedge \gamma _1\wedge \dots \wedge \gamma _{N-1}]`$.
Although the first model is somewhat unrealistic, its simplicity makes it amenable to detailed analysis which yields insight into the general situation. The second model corresponds to the physically reasonable assumption that $`\mathrm{{\rm Y}}`$ is a Slater determinant. In this case, the required antisymmetry of the wave function is inherent in our assumptions on $`\mathrm{{\rm Y}}`$ and the one dimensional function $`\psi `$ is symmetric, i.e., the electrons behave like bosons in one dimension.
It is straightforward to show that
(2.4)
$$E_0^{\mathrm{conf}}(N,Z,B)=\sqrt{B}\underset{\|\psi \|=1}{\mathrm{inf}}\langle h(N,Z,M)\psi ,\psi \rangle +NB$$
where we have scaled out the field strength $`B`$ so that
(2.5)
$$h(N,Z,M)=\underset{j=1}{\overset{N}{\sum }}\left[-\frac{1}{M}\frac{d^2}{dx_j^2}-Z\stackrel{~}{V}(x_j)\right]+\underset{j<k}{\sum }\stackrel{~}{W}(x_j-x_k)$$
and the only remnant of the magnetic field is in the “mass” $`M=B^{-1/2}`$; the effective one-dimensional potentials $`\stackrel{~}{V}`$ and $`\stackrel{~}{W}`$ will be defined below for each model.
For the zero model, one easily finds \[BR1\] that
(2.6)
$$\stackrel{~}{V}(x)=V_0(x)\qquad \text{and}\qquad \stackrel{~}{W}(u-v)=\frac{1}{\sqrt{2}}V_0\left(\frac{|u-v|}{\sqrt{2}}\right).$$
Note $`V_0(x)\approx \frac{1}{|x|}`$ for large $`|x|`$. Thus, for large separations,
(2.7)
$$\stackrel{~}{W}(u-v)\approx \frac{1}{\sqrt{2}}\frac{\sqrt{2}}{|u-v|}=\frac{1}{|u-v|}.$$
However, not only is the singularity at $`u=v`$ removed, but $`\stackrel{~}{W}(0)`$ is also smaller than $`\stackrel{~}{V}(0)`$ by a factor of $`\frac{1}{\sqrt{2}}`$. This means that if two electrons are simultaneously near the nucleus, the price paid in electron-electron repulsion is smaller than the gain from the electron-nuclear attraction. Although this effect seems to play an important role in binding additional electrons, it may be partially offset by the price paid in kinetic energy if one attempts to constrain both electrons near the nucleus. See \[BR1\] for further discussion.
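A small numerical illustration of these two features of the zero model — $`\stackrel{~}{W}(0)=\stackrel{~}{V}(0)/\sqrt{2}`$ and $`\stackrel{~}{W}(u-v)\approx 1/|u-v|`$ at large separations — is sketched below (our own example, with $`V_0`$ evaluated by quadrature of (1.3)).

```python
import numpy as np
from scipy.integrate import quad

def V0(x):
    """V_0(x) from Eq. (1.3) with m = 0."""
    return quad(lambda u: np.exp(-u) / np.sqrt(x**2 + u), 0.0, np.inf)[0]

W = lambda sep: V0(sep / np.sqrt(2.0)) / np.sqrt(2.0)   # zero-model repulsion

print(W(0.0), V0(0.0))        # ~1.25 vs ~1.77: repulsion is reduced at contact
for sep in (5.0, 20.0, 100.0):
    print(sep, sep * W(sep))  # -> 1, i.e. W(u-v) ~ 1/|u-v| at large separations
```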
For the Slater model it can be shown \[BR2\] that
(2.8)
$$\stackrel{~}{V}(x)\equiv V_{\mathrm{av}}^N(x)=\frac{1}{N}\underset{m=0}{\overset{N-1}{\sum }}V_m(x)\qquad \text{and}$$
(2.9)
$$\stackrel{~}{W}(u-v)=\frac{1}{\sqrt{2}}\underset{j=0}{\overset{N-1}{\sum }}c_jV_{2j+1}\left(\frac{|u-v|}{\sqrt{2}}\right)$$
where $`c_j>0`$ for all $`j`$ and $`\sum _jc_j=1`$, so that the effective interaction is a convex combination of $`V_m`$ with odd $`m=1,3,\dots ,2N-1`$, albeit with the same $`\frac{1}{\sqrt{2}}`$ scaling as in (2.7). Note that the convex sum in (2.9) above includes contributions from $`V_m`$ with $`m>N`$. Properties (b) and (d) of Section 4 imply that $`V_m(0)`$ is decreasing in $`m`$. Therefore, one expects a decrease in the electron-electron repulsion $`\stackrel{~}{W}`$ in addition to that from the factor of $`\frac{1}{\sqrt{2}}`$. However, delicate combinatorics would be needed to verify this exactly.
Some obvious variations on these models are possible and are discussed briefly in \[BR1, BR2\]. It is interesting to note that if $`\mathrm{{\rm Y}}=\prod _{k=1}^N\gamma _m(y_k,z_k)`$ with $`m`$ odd, then the convex sum analogous to (2.9) contains only terms $`V_{2j}`$ with even subscript.
It is also worth noting that
(2.10)
$$\underset{\beta \to \infty }{\mathrm{lim}}\frac{\beta }{\mathrm{log}\beta }V_m\left(\beta x\right)=\delta (x)$$
in the sense of tempered distributions. This implies that the potentials $`\stackrel{~}{V}`$ and $`\stackrel{~}{W}`$ which occur in our models have an analogous delta potential behavior as $`\beta \to \infty `$. The proof \[BD\] of (2.10) uses the Fourier transform (property (k) in Section 4) of the potentials $`V_m(x)`$, particularly the observation that $`\widehat{V}_m(\xi )`$ has a logarithmic singularity at $`\xi =0`$. Similar limiting behavior (with $`\beta =\sqrt{B}`$) was observed by \[LSY\] for potentials in the three-dimensional Hamiltonian (2.1). It can also be shown \[BD\] that if (2.5) is appropriately rescaled and the potentials replaced by the corresponding delta potentials, the result is a one-dimensional Hamiltonian whose semi-classical limit (on bosonic wave functions) as $`Z\to \infty `$ is precisely that given by the (fermionic) hyperstrong functional in \[LSY\]. Since their functional was shown to describe the $`Z,B/Z^3\to \infty `$ limit of the three-dimensional Hamiltonian (2.1), this provides additional justification for our models. Even the simple-minded zero model has the correct asymptotic behavior.
## 3. Maximum Negative Ionization
In the absence of a magnetic field, one expects that the maximum number of electrons a nucleus with charge $`Z`$ can bind is $`N_{\mathrm{max}}(Z)=Z+1`$ or $`Z+2`$. However, only the somewhat weaker result of asymptotic neutrality has been proved rigorously \[LSST\]. If electrons behave like bosons, asymptotic neutrality does not hold and $`N_{\mathrm{max}}`$ behaves asymptotically roughly like $`1.21Z`$. (See \[Sol\] for details and references to earlier work on bosonic atoms.) In \[Lb\] Lieb gave a simple argument which showed that $`N_{\mathrm{max}}<2Z+1`$, independent of particle statistics. Thus, it may seem somewhat surprising that \[LSY\] showed that for atoms in extremely strong magnetic fields
(3.1)
$$\underset{Z,B/Z^3\to \infty }{\mathrm{lim}\mathrm{inf}}\frac{N_{\mathrm{max}}(Z)}{Z}\ge 2.$$
The study of one-dimensional models in \[BR1\] was initiated, in part, by the hope of proving an asymptotic upper bound of the form $`N_{\mathrm{max}}\lesssim 2Z`$ as $`B,Z\to \infty `$. Although we did not succeed in proving such a bound, even for our simplified one-dimensional models, we believe that they offer considerable insight into both the reasons for binding an “extra” $`Z`$ electrons and the reasons why the localization techniques developed to bound $`N_{\mathrm{max}}`$ fail in the strong field case.
It is generally believed that enhanced binding occurs in strong magnetic fields because the field confines the electrons in two dimensions and effectively reduces the atom to a one-dimensional system. Although there is some truth to this, it was shown in \[LSY\] that atoms do not become truly one-dimensional unless $`B>Z^3`$ and the field strength is greater than anything seen on earth. (Sufficiently strong magnetic fields do exist on the surface of neutron stars, making this analysis of some interest in astrophysics.) Moreover, the binding enhancement achieved by making the system effectively one-dimensional can only account for small effects, such as the fact \[AHS\] that singly negative ions always have infinitely many bound states in a magnetic field. It cannot account for the binding of an additional $`Z`$ electrons.
The results in \[BR1\] suggest that the primary mechanism for binding additional electrons in strong fields is the fact that the effective reduction in the strength of the electron-electron repulsion permits two electrons to be near the nucleus simultaneously. However, the one-dimensional confinement also delocalizes the electrons. This effect is seen in the Hamiltonian $`h(N,Z,B^{-1/2})`$ given by (2.5), where the effective mass is $`M=B^{-1/2}`$, so that in strong fields the electrons behave like extremely light particles. The uncertainty principle then implies that trial wave functions which localize the electrons cannot yield bound states.
Since Lieb’s strategy \[Lb\] for finding an upper bound on $`N_{\mathrm{max}}(Z)`$ does not require an explicit localization, it might seem well-suited to atoms in strong magnetic fields. However, Lieb’s method actually has an implicit localization (which is based on an idea of Benguria \[Ben\] for spherically symmetric atoms) for which the localization error is zero in three dimensions. However, as explained in \[BR1\], the localization error is necessarily non-zero in one-dimension. (This is a consequence of the fact that non-positive potentials always have at least one bound state in one dimension. Thus, the phenomenon of enhanced binding in one dimension actually contributes to the delocalization of the electrons.) Using Lieb’s method for the zero model, we were only able to show in \[BR1\] that $`N_{\mathrm{max}}(Z,B)<2Z+1+c\sqrt{B}`$ for an explicit constant $`c`$. In the interesting case $`B=O(Z^3)`$, this yields a bound of the form $`N_{\mathrm{max}}<2Z+cZ^{3/2}`$, rather than a linear one.
Surprisingly, one can get a better bound using the Ruskai-Sigal localization method. (See \[Rusk\] for a summary.) For both the zero model and the Slater model, we can prove the following result.
###### Theorem 3.1.
Let $`N_{\mathrm{max}}(Z,B)`$ be the maximum number of electrons for which the Hamiltonian (2.5) has a bound state, and assume that the potentials $`\stackrel{~}{V}`$ and $`\stackrel{~}{W}`$ have either the form (2.6) corresponding to the zero model or the form (2.8) and (2.9) corresponding to the Slater model. Then for every $`\alpha >0`$ and $`\beta >0`$ there is a constant $`C_{\alpha \beta }`$ such that
(3.2)
$$N_{\mathrm{max}}(Z,B)<C_{\alpha \beta }Z^{1+\alpha }B^\beta $$
where $`\alpha ,\beta `$ can be arbitrarily small and, in the case of the Slater model, $`B\le Z^{3+\gamma }`$ for some $`\gamma >0`$.
This result can be improved slightly to
(3.3)
$$N_{\mathrm{max}}(Z,B)<C_\omega \left[Z(\mathrm{log}Z)^2+Z\mathrm{log}Z(\mathrm{log}B)^{1+\omega }\right]$$
where, as above, $`\omega >0`$ can be arbitrarily small and, in the case of the Slater model, $`B\le Z^{3+\gamma }`$. Because the electrons in the one-dimensional model are essentially bosonic, this is the best that one can hope to achieve with the Ruskai-Sigal method.
In the case of the Slater model, the Landau level portion of the wave function $`\mathrm{{\rm Y}}`$ is antisymmetric. Hence, the one-dimensional part of the wave function $`\psi `$ must be symmetric. Even for the zero model, it is physically reasonable to treat the electrons in the one-dimensional model as essentially bosonic. In the Ruskai-Sigal method, the system is divided into a small “inner” ball in which binding is precluded because the electrons are confined to a small region, and an “outer” ball in which the localization error becomes negligible as $`Z\mathrm{}`$. For bosonic systems, one can always squeeze the electrons closer together, yielding a smaller cut-off $`\rho `$ than for fermions. This feature is the only factor which precludes extending the proof of asymptotic neutrality in \[LSST\] to bosonic atoms. This demonstrates that the localization error is not simply a technical artifact, but a reflection of a real physical effect.
For atoms in strong magnetic fields, the cut-off radius $`\rho `$ is not small. Instead $`\rho N\sqrt{B}Z^2(\mathrm{log}\frac{Z^2}{B})^2`$ which grows with $`B`$. For $`B=O(Z^3)`$ and $`N=O(Z)`$, roughly speaking (i.e., ignoring the log term) $`\rho Z^{1/2}B^{1/6}`$. Thus, the localization method can be used to obtain a (non-optimal) upper bound on $`N_{\mathrm{max}}`$ despite the fact that the electrons are highly delocalized and the size of the “inner” region becomes arbitrarily large as $`B\mathrm{}`$. However, the non-optimal bounds above are probably the best one can expect from configuration space localization. It seems likely that a proof of better upper bounds will require the use of phase space localization techniques.
## 4. Properties of $`𝐕_m(x)`$
The functions $`V_m(x)`$ are even and well-defined for $`x>0`$. Although the primary interest in physical applications is for integer $`m\ge 0`$, it is easy to see from the form (1.3) that they can be extended to complex $`m`$ with $`\mathrm{Re}(m)>-1`$. For $`\mathrm{Re}(m)>-\frac{1}{2}`$ they are also well-defined for $`x=0`$. In this note we restrict ourselves to non-negative $`x`$ and real $`m>-1`$.
It is convenient to define $`V_{-1}(x)=\frac{1}{|x|}`$ and note that this is justified in the sense that $`\underset{m\to -1^+}{\mathrm{lim}}xV_m(x)=1`$ for all $`x>0`$.
We now summarize the properties of $`V_m(x)`$ for $`x\in (0,\infty )`$. Unless otherwise stated, $`m`$ is real and $`m>-1`$. For proofs and further discussion, see \[RW\].
Summary of Properties of $`V_m(x)`$:
(a) $`V_m(x)`$ satisfies the inequality $`{\displaystyle \frac{1}{\sqrt{x^2+m}}}>V_m(x)>{\displaystyle \frac{1}{\sqrt{x^2+m+1}}}`$ where the upper bound holds for $`m>0`$ and the lower for $`m>-1.`$
(b) $`V_m(x)`$ is decreasing in $`m`$. In particular, $`V_{m+1}(x)<V_m(x)<{\displaystyle \frac{1}{x}}`$.
(c) The expression $`mV_m(x)`$ is increasing in $`m>-1`$.
(d) For $`m>-1/2`$, the definition of $`V_m(x)`$ can be extended to $`x=0`$ and
$`V_m(0)={\displaystyle \frac{\mathrm{\Gamma }(m+\frac{1}{2})}{\mathrm{\Gamma }(m+1)}}.`$
For integer $`m`$, this becomes
$`V_m(0)={\displaystyle \frac{(2m)!}{2^{2m}(m!)^2}}\sqrt{\pi }={\displaystyle \frac{1\cdot 3\cdot 5\cdots (2m-1)}{2\cdot 4\cdot 6\cdots (2m)}}\sqrt{\pi }`$
while for large $`m`$ Stirling’s formula implies
$`V_m(0)\approx \left({\displaystyle \frac{m-\frac{1}{2}}{m}}\right)^m\left({\displaystyle \frac{e}{m}}\right)^{1/2}\approx {\displaystyle \frac{1}{\sqrt{m}}}`$
which is consistent with property (a).
(e) For all $`m\ge 0`$, $`V_m`$ satisfies the differential equation
$`V_m^{\prime }(x)=2x\left(V_m-V_{m-1}\right).`$
(f) For each fixed $`m\ge 0`$, $`V_m(x)`$ is decreasing in $`x`$.
(g) For $`a>0`$, the expression $`aV_m(ax)`$ increases with $`a`$. Hence $`aV_m(ax)>V_m(x)`$ when $`a>1`$ and $`aV_m(ax)<V_m(x)`$ when $`a<1`$.
(h) $`V_0(x)`$ is convex in $`x>0`$; however, $`V_m(x)`$ is not convex when $`m>\frac{1}{2}`$.
(i) For integer $`m`$, $`1/V_m(x)`$ is convex in $`x>0.`$
(j) For integer $`m`$, the ratio $`V_{m+1}(x)/V_m(x)`$ is increasing in $`x>0.`$
(k) The Fourier transform is given by
$`\widehat{V}_m(\xi )\equiv {\displaystyle \frac{1}{\sqrt{2\pi }}}{\displaystyle \int _{-\infty }^{\infty }}V_m(x)e^{-ix\xi }dx={\displaystyle \frac{4^{m+1}}{\sqrt{2\pi }}}{\displaystyle \int _0^{\infty }}{\displaystyle \frac{s^me^{-s}}{(|\xi |^2+4s)^{m+1}}}ds`$
(l) For large $`x`$, it follows from property (a) that
$`{\displaystyle \frac{m}{2(x^2+m)^{3/2}}}<{\displaystyle \frac{1}{x}}-V_m(x)<{\displaystyle \frac{m+1}{2x^3}}`$
while (1.3) yields the asymptotic expansion
$`V_m(x)={\displaystyle \frac{1}{x}}-{\displaystyle \frac{m+1}{2x^3}}+{\displaystyle \frac{3(m+2)(m+1)}{8x^5}}+O\left({\displaystyle \frac{1}{x^7}}\right).`$
The lower bound in (a) was proved earlier (at least for integer $`m`$) by Avron, Herbst and Simon \[AHS\]. Properties (b) and (c) imply that $`V_m(x)`$ is decreasing in $`m`$, while $`mV_m(x)`$ is increasing; this gives an indication of the delicate behavior of $`V_m`$. The differential equation (e) can be verified using integration by parts in (1.3). Property (f) follows directly from (b) and (e). Property (g) follows from (1.3) and the observation that $`{\displaystyle \frac{a}{\sqrt{a^2x^2+u}}}`$ is increasing in $`a`$. It is useful in analyzing $`\stackrel{~}{W}(u-v)`$ since it implies $`\frac{1}{\sqrt{2}}V_m\left(\frac{|u-v|}{\sqrt{2}}\right)<V_m(|u-v|).`$ Property (h) follows from a straightforward analysis of the differential equation (e), which implies that $`V_m^{\prime }(0)=0`$ for $`m>\frac{1}{2}`$. In the next section, we will see that the cusp at $`x=0`$ and the convexity of $`V_m(x)`$ in $`x>0`$ return when $`V_m(x)`$ is replaced by $`V_{\mathrm{av}}^N(x)`$ as in the Slater model.
The convexity of $`1/V_m(x)`$ can be rewritten as
$$\frac{1}{\frac{1}{2}V_m\left(\frac{x+y}{2}\right)}\le \frac{1}{V_m(x)}+\frac{1}{V_m(y)}$$
Using property (g) with $`a=\frac{1}{2}`$, one easily finds that this implies
$`{\displaystyle \frac{1}{V_m(x+y)}}\le {\displaystyle \frac{1}{V_m(x)}}+{\displaystyle \frac{1}{V_m(y)}}.`$
Since $`1/V_m(x)\approx |x|`$ for large $`|x|`$, this subadditivity inequality plays the role of the triangle inequality in applications. The proof of (i) is extremely delicate. Because $`1/V_m(x)\approx x`$ for large $`x`$, we need to prove the convexity of a function that is nearly linear, so that its second derivative is extremely close to zero. Proving that this derivative is positive is equivalent to proving some rather sharp inequalities on the ratio $`V_m(x)/V_{m-1}(x)`$.
In the special case $`m=0`$ these inequalities (which are also discussed in \[BR1\] and \[SW\]) are equivalent to
(4.1) $`g_\pi (x)\le V_0(x)<g_4(x)`$
for $`x>0`$, where
(4.2) $`g_k(x)={\displaystyle \frac{k}{(k-1)x+\sqrt{x^2+k}}}.`$
Multiplying (4.1) by $`x={\displaystyle \frac{1}{V_{-1}(x)}}`$ converts this to a bound on the ratio $`{\displaystyle \frac{V_0(x)}{V_{-1}(x)}}`$. To obtain general ratio bounds, define
(4.3) $`G_k^m(y)={\displaystyle \frac{ky}{(k-1)y-m+\sqrt{(y+m)^2+ky}}}.`$
Then it is shown in \[RW\] that
(4.4) $`G_8^{m-1}(x^2)<{\displaystyle \frac{V_m(x)}{V_{m-1}(x)}}<G_4^m(x^2)`$
for all integer $`m\ge 0`$ and $`x>0`$. The sense in which these bounds are optimal is discussed in \[RW\]. Our proof of these inequalities relies on an inductive argument and, hence, is valid only for integer $`m`$. A proof extending them to general real $`m>-1`$ would immediately imply that properties (i) and (j) also hold for general real $`m>-1`$.
Another interesting open question is whether or not $`V_m(x)`$ is convex in $`m`$. In particular, is $`2V_m(x)\le V_{m+1}(x)+V_{m-1}(x)`$? It follows from property (e) that this is equivalent to asking whether $`V_m^{\prime }(x)`$ is increasing in $`m`$.
## 5. Recursion and Averaged Potentials
Using integration by parts on (1.3) one easily finds that $`V_m`$ satisfies the recursion relation
(5.1)
$$V_m(x)=\frac{1}{m}\left[(m-\frac{1}{2}-x^2)V_{m-1}(x)+x^2V_{m-2}(x)\right].$$
for all $`m\in 𝐑`$, $`m\ge 1`$. Iterating this, one finds that when $`m`$ is a positive integer
(5.2) $`V_m(x)={\displaystyle \frac{1}{2m}}\left[(1-2x^2)V_{m-1}(x)+{\displaystyle \underset{k=0}{\overset{m-2}{\sum }}}V_k(x)+2|x|\right].`$
These relations are useful for studying $`V_{\mathrm{av}}^N(x)`$. For example, it follows immediately from (5.2) that
(5.3)
$$V_{\mathrm{av}}^N(x)=2V_N(x)-\frac{2x^2}{N}\left[V_{-1}(x)-V_{N-1}(x)\right].$$
This can then be used to show that $`V_{\mathrm{av}}^N(x)`$ is convex for all $`x>0`$. Furthermore $`\underset{x\to 0^+}{\mathrm{lim}}{\displaystyle \frac{d}{dx}}V_{\mathrm{av}}^N(x)=-{\displaystyle \frac{2}{N}}`$, verifying that $`V_{\mathrm{av}}^N(x)`$ has a cusp at $`x=0`$.
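Both the recursion (5.1) and the cusp slope $`-2/N`$ can be checked directly by quadrature. The sketch below is our own illustration (the sample values of $`m`$, $`x`$ and $`N`$ are arbitrary).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

def V(m, x):
    """V_m(x) for m > -1 via Eq. (1.3); V_{-1}(x) = 1/|x| by the convention above."""
    if m == -1:
        return 1.0 / abs(x)
    f = lambda u: u**m * np.exp(-u) / np.sqrt(x**2 + u)
    return quad(f, 0.0, np.inf)[0] * np.exp(-gammaln(m + 1.0))

# Recursion (5.1): m V_m = (m - 1/2 - x^2) V_{m-1} + x^2 V_{m-2}
for m in (1, 2, 5):
    for x in (0.3, 1.0, 3.0):
        lhs = m * V(m, x)
        rhs = (m - 0.5 - x**2) * V(m - 1, x) + x**2 * V(m - 2, x)
        assert abs(lhs - rhs) < 1e-6 * abs(lhs)

# Cusp of the averaged potential: d/dx V_av^N -> -2/N as x -> 0+
N, h = 4, 1e-4
Vav = lambda x: sum(V(m, x) for m in range(N)) / N
print((Vav(2 * h) - Vav(h)) / h, -2.0 / N)   # finite-difference slope vs -2/N
```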
It is interesting to note that (5.2) also implies that there are polynomials $`P_m(y)`$ and $`Q_m(y)`$ of degree $`m`$ such that for integer $`m\ge 1`$
(5.4)
$$V_m(x)=P_m(x^2)V_0(x)+xQ_{m-1}(x^2).$$
These polynomials have many interesting properties. In \[RW\] it is shown that
(5.5)
$$P_m(y)=\frac{1}{mB(m,\frac{1}{2})}e^{-y}{}_{1}F_{1}(\frac{1}{2},\frac{1}{2}-m,y)$$
where $`B(m,n)`$ denotes the beta function and $`{}_{1}F_{1}(\alpha ,\gamma ,y)`$ denotes the indicated confluent hypergeometric function.
no-problem/9912/astro-ph9912257.html | ar5iv | text | # On effects of resolution in dissipationless cosmological simulations
## 1 Introduction
Dissipationless cosmological $`N`$-body simulations are currently the tool of choice for following the evolution of cold dark matter (CDM) into the highly nonlinear regime. For the widest range of plausible dark matter (DM) candidates (from axions of mass $`10^{-6}`$–$`10^{-3}\mathrm{eV}`$ to $`10^{21}`$–$`10^{25}\mathrm{eV}`$ wimpzillas; see, e.g., Kolb, Chung & Riotto 1998; Roszkowski 1999), their expected number density is $`n_{\mathrm{DM}}\sim 10^{52}`$–$`10^{83}\mathrm{\Omega }_0h^2\mathrm{Mpc}^{-3}.`$ $`N`$-body simulations numerically solve the $`N`$-body problem: given initial positions and velocities for $`N`$ pointlike massive objects, the simulations predict the particle positions and velocities at any subsequent time. Current $`N`$-body simulations are capable of following the evolution of $`\lesssim 10^9`$ particles, far short of the expected number of DM particles. Therefore, the correct approach to modelling dark matter evolution in a cosmologically representative volume is to use the Vlasov equation (collisionless Boltzmann equation) coupled with the Poisson equation and complemented by appropriate boundary conditions. However, a full-scale modelling of 6D distribution functions with reasonable spatial resolution is extremely challenging computationally.
While this approach seems logical and reasonable and is expected to provide approximate solution to a complicated problem, questions of its limitations may be raised. For example, while particle shape is usually considered to be rigid (fixed by a specific form of the Green function or the shape of the interparticle force), in eulerian space the shape of the initially cubic phase space volume element can be expected to be stretched as the element moves towards higher density regions. Its volume can also change so as to preserve the phase space density. Furthermore, under certain conditions the $`N`$-body systems may exhibit scattering, which is undesirable when one models a purely collisionless system. This may occur in cosmological simulations if the “size” of particles is much smaller than the eulerian spatial size of the phase space element they are supposed to represent.
These effects may influence the accuracy of the simulations and lead to spurious results. Nevertheless, surprisingly little attention has been given to studies of such limitations in the cosmological simulations. As the resolution of simulations improves and the range of their applications broaden, it becomes increasingly important to address these issues. Indeed, during the past decade the force resolution of the simulations has improved by a factor of $`1001000`$, while (with rare exceptions) the mass resolution has improved only by a factor of $`10`$. Also, modern high-resolution codes follow evolution of cosmological systems for many dynamical time-scales. In this regime the accuracy of the force estimates may be less important than the stability of the overall solution. These issues are usually not addressed when tests of codes are presented.
Recently, in a series of papers Melott and collaborators (Kuhlman, Melott & Shandarin 1996; Melott et al. 1997; Splinter et al. 1998) raised the issue of the balance between the force and the mass resolution. While we disagree with their main conclusion that the force resolution should be larger than the mean interparticle separation (see § 3 and § 5), we agree that the issue is important.
There is also a common misconception related to the adaptive mesh refinement approach in cosmological simulations and other algorithms that integrate equations of particle motion in comoving coordinates. The common criticism (e.g., Splinter et al. 1998 and references therein) is that these algorithms attempt to resolve scales unresolved in the initial conditions (the scales below approximately half of the Nyquist frequency). However, the goal of increasing the resolution in comoving coordinates is not to resolve the waves not present in the initial conditions but rather to properly follow all of the waves initially present.
When followed in comoving coordinates, gravitational instability leads to the separation of structures from the Hubble flow and collapse, resulting in the transfer of power to higher wavenumbers. If the force resolution is fixed in comoving coordinates at the Nyquist frequency of the initial conditions, this transfer cannot be modelled properly for all waves. Moreover, the size of structures that have collapsed and virialized stays fixed in the proper coordinates and decreases in comoving coordinates. When the comoving size of such objects becomes smaller than the resolution of a fixed grid simulation, their subsequent evolution and internal properties will be modelled incorrectly. Adaptive mesh refinement algorithms address this problem by increasing the force resolution locally to follow the evolution of collapsing and virialized density peaks as their size becomes less than the resolution of the original grid. Other algorithms can achieve the same result by varying the comoving force softening with time.
Our motivation for the present study is twofold. Our first goal is to elaborate on the issue of spurious numerical effects. Namely, we study the effects of the balance of force and mass resolutions and of time integration details on statistics commonly used in analyses of cosmological simulations. The balance of force and mass resolutions should be studied by varying both of the resolutions. In this study, however, we will keep the mass resolution fixed and vary the force resolution instead. While this may not reveal all of the artificial effects<sup>1</sup><sup>1</sup>1For example, fixed mass resolution does not allow us to study the effects of the rigid particle shape mentioned above., this allows to study the spurious two-body scattering present when the smoothing scale is set to be too small. It also allows us to study how the limited force resolution affects various statistical properties of the dark matter distribution. Our second goal is to compare results produced using two of the currently employed high-resolution $`N`$-body codes: AP<sup>3</sup>M (Couchman 1991) and ART (Kravtsov, Klypin & Khokhlov 1997). The comparison is of interest due to a relative novelty of the latter technique and some disagreement between the results concerning the details of the central density distribution in DM halos obtained using different codes (e.g., Moore 1994; Navarro, Frenk & White 1997; Kravtsov et al. 1998; Moore et al. 1999). This study can also be interesting as an independent test of the widely used AP<sup>3</sup>M code, for which virtually no systematic tests have been published to date.
The paper is organized as follows. In §2 we describe the $`N`$-body algorithms used in our study and the numerical simulations performed. In §3 we compare the dark matter distributions simulated using different codes with different force resolutions and time steps. In §4 we use these simulations to compare the properties and distribution of dark matter halos (dense virialized systems). In §5 we summarize the results of the code comparisons, discuss the effects of resolution and time step on the commonly used statistics, and present our conclusions.
## 2 Cosmological $`N`$-Body Simulations
In this study we will use and compare three different $`N`$-body algorithms: Particle-Mesh (PM) algorithm (Hockney & Eastwood 1981), adaptive Particle-Particle Particle-Mesh algorithm (AP<sup>3</sup>M; Couchman 1991), and Adaptive Refinement Tree algorithm (ART; Kravtsov et al. 1997). The PM algorithm was first used for cosmological simulations by Doroshkevich et al. (1980), Efstathiou & Eastwood (1981), and Klypin & Shandarin (1983). The algorithm makes use of the fast Fourier transforms to solve the Poisson equation on a uniform grid and uses interpolation and numerical differentiation to obtain the force that acts on each particle.
The solution is limited by the number of particles (mass resolution) and by the size of the grid cell, which defines the force resolution. The exact shape of the resulting force depends on the specific form of the Green function and on the interpolation used to obtain the force. The technique is attractive due to its simplicity and the fact that it is numerically very robust. Highly efficient implementations have been developed and used during the past decade. The technique is described in detail by Hockney & Eastwood (1981) and we refer the reader to this book for further details. The specific implementations used in our study are those of the AP<sup>3</sup>M code and the ART code. The PM simulations presented here have been run using the publicly available AP<sup>3</sup>M code with the particle-particle part switched off and by the ART code with the mesh refinement block switched off. In the remainder of this section we will describe the AP<sup>3</sup>M and ART algorithms and the specifics of our test simulations.
### 2.1 AP<sup>3</sup>M Code
Particle-Particle-Particle-Mesh (P<sup>3</sup>M) codes (Hockney et al. 1973; Hockney & Eastwood 1981) express the inter-particle force as a sum of a short-range force (computed by direct particle-particle pair force summation) and a smoothly varying part (approximated by the particle-mesh force calculation). One of the major problems for these codes is the correct splitting of the force into a short-range and a long-range part. The grid method (PM) is only able to produce reliable inter-particle forces down to a minimum of about two grid cells. For smaller separations the force can no longer be represented on the grid, and therefore one must introduce a cut-off radius $`r_e`$ (larger than two grid cells!) such that for $`r<r_e`$ the mesh force smoothly goes to zero. The parameter $`r_e`$ defines the chaining-mesh, and for distances smaller than this cutoff radius $`r_e`$ a contribution from the direct particle-particle (PP) summation needs to be added to the total force acting on each particle. Again, this PP force should smoothly go to zero for very small distances in order to avoid unphysical particle-particle scattering. This cutoff of the PP force determines the overall force resolution of a P<sup>3</sup>M code.
The most widely used version of this algorithm is currently the adaptive P<sup>3</sup>M (AP<sup>3</sup>M) code of Couchman (1991). The smoothing of the force in this code is connected to a $`S_2`$ sphere, as described in Hockney & Eastwood (1981). The particles are treated as density spheres with a profile
$$S_2:\rho (r)=\{\begin{array}{cc}\frac{48}{\pi ϵ^4}\left(\frac{ϵ}{2}r\right),\hfill & \text{ for }r<ϵ/2\hfill \\ & \\ \text{ }0,\hfill & \text{ otherwise }\hfill \end{array}$$
(1)
where $`ϵ`$ is the softening parameter. For distances greater than $`ϵ`$ the particles are treated as point masses interacting according to the newtonian $`1/r^2`$ law, whereas for smaller separations the effective shape of the $`S_2`$ sphere influences and modifies the force law such that the interaction drops down to zero as $`r0`$.
When splitting the force into short- and long-range components, one has to use two softening parameters: one which is directly connected to the cut-off radius $`r_e`$ for the PM force and therefore tells us where to match the PM and PP parts, and another which determines the overall force resolution (the softening scale of the PP force). The PP force is truncated both at very small separations and at $`r\approx r_e`$, where the force can be calculated using the mesh-based PM method. The AP<sup>3</sup>M code uses a cut-off radius $`r_e`$ for the long-range force of approximately 2.4 PM mesh cells, and this leads to a softening parameter of $`ϵ_{\mathrm{PM}}=1.3r_e\approx 3.1`$ (cf. Hockney & Eastwood 1981). The softening $`ϵ`$ of the PP force determines the overall force resolution; for the AP<sup>3</sup>M simulations presented in this paper the softening scales are given in Table 1. When the overall softening parameter is set to a value greater than 3.5, the code runs as a pure PM mesh-based code, because the softening of the PP force is then greater than that of the PM part.
Unfortunately, the particle-particle summation which allows one to achive sub-grid resolutions and thereby makes the P<sup>3</sup>M algorithm attractive, is also the method’s main drawback. The PP calculation must search for neighbors out to roughly two mesh spacings to properly augment the PM force. This becomes increasingly expensive as clustering develops and particles start to clump together (within the chaining-mesh cells). The adaptive P<sup>3</sup>M algorithm remedies this by covering the high-density, most computationally expensive regions with refinement grids. Within the refinements the direct sum is replaced by a further local P<sup>3</sup>M calculation with isolated boundary conditions performed on a finer refinement grid (PM mesh and chaining-mesh are refined). The number of particles per refinement grid cell is smaller and so is the PP associated computations. The number of grid cells per refinement depends on the total number of particles within that region, but is always a power of two. The criterion for placing a refinement depends only on the total number of particles inside a chaining-mesh cell. If this value exceeds a preselected threshold in a given region, the region is refined; for our runs we used a value of 50 particles. It is convenient to isolate patches which cover an exact cubic block of chaining-mesh cells. Recursively placed refinements are allowed, and in the simulations presented in this paper a maximum level of 3 was reached.
The AP<sup>3</sup>M code integrates the equations of particle motion using $`p=a^{3/2}`$ as the time variable (here $`a`$ is the expansion factor; Efstathiou et al. 1985). A time-centered leapfrog integration with constant time step $`\mathrm{\Delta }p`$ is used. This scheme leads to “large” steps in $`a`$ at the beginning of the simulation, which then get progressively smaller during the course of the simulation.
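As a small illustration of this behaviour (our own sketch, with an arbitrary number of steps and an initial expansion factor corresponding to $`z_i=87`$), a constant step in $`p=a^{3/2}`$ translates into steps in $`a`$ that shrink as the simulation proceeds:

```python
import numpy as np

a_init, a_final, nsteps = 1.0 / 88.0, 1.0, 500          # illustrative values only
p = np.linspace(a_init**1.5, a_final**1.5, nsteps + 1)  # constant step in p = a^(3/2)
a = p**(2.0 / 3.0)
da = np.diff(a)
print(da[0], da[-1])   # the first step in a is roughly an order of magnitude larger
```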
### 2.2 ART Code
The Adaptive Refinement Tree code (ART; Kravtsov et al. 1997) reaches high force resolution by refining all high-density regions with an automated refinement algorithm. The refinements are recursive: the refined regions can also be refined, each subsequent refinement having half of the previous level’s cell size. This creates an hierarchy of refinement meshes of different resolutions covering regions of interest.
The refinement is done cell-by-cell (individual cells can be refined or de-refined) and meshes are not constrained to have a rectangular (or any other) shape. This allows the code to refine the required regions in an efficient manner. The criterion for refinement is the local overdensity of particles: in the simulations presented in this paper the code refined an individual cell only if the density of particles (smoothed with the cloud-in-cell scheme; Hockney & Eastwood 1981) was higher than $`n_{th}=5`$ particles. Therefore, all regions with overdensity higher than $`\delta =n_{th}2^{3L}/\overline{n}`$, where $`\overline{n}`$ is the average number density of particles in the cube, were refined to the refinement level $`L`$. For the two ART simulations presented here, $`\overline{n}=1/8`$. The Poisson equation on the hierarchy of meshes is solved first on the base grid and then on the subsequent refinement levels. On each refinement level the code obtains the potential by solving the Dirichlet boundary problem with boundary conditions provided by the already existing solution at the previous level. There is no particle-particle summation in the ART code and the actual force resolution is equal to $`2`$ cells of the finest refinement mesh covering a particular region. A detailed description of the code, its tests, and discussion of the force shape is given in Kravtsov et al. (1997). Note, however, that the present version of the code uses multiple time steps on different refinement levels, as opposed to the constant time stepping in the original version of the code. The multiple time stepping scheme is described in some detail in Kravtsov et al. (1998; also see below).
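The refinement criterion is thus a simple threshold on the (smoothed) particle count per cell; a schematic version, together with the overdensity it implies on each level for the values used here ($`\overline{n}=1/8`$, $`n_{th}=5`$), is sketched below (our own illustration, not the actual ART source).

```python
import numpy as np

n_bar, n_th = 1.0 / 8.0, 5.0    # mean particle count per base-grid cell, threshold

def overdensity_threshold(level):
    """Overdensity above which a region ends up refined to the given level."""
    return n_th * 2.0**(3 * level) / n_bar

for L in range(4):
    print(L, overdensity_threshold(L))      # 40, 320, 2560, 20480

# A cell (on any level) is flagged for splitting when its CIC-smoothed particle
# count exceeds n_th; `counts` is a toy array of such counts for a few cells.
counts = np.array([0.3, 2.0, 5.5, 17.0])
print(counts > n_th)                        # [False False  True  True]
```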
The refinement of the time integration mimics spatial refinement and the time step for each subsequent refinement level is two times smaller than the step on the previous level. Note, however, that particles on the same refinement level move with the same step. When a particle moves from one level to another, the time step changes and its position and velocity are interpolated to appropriate time moments. This interpolation is first-order accurate in time, whereas the rest of the integration is done with the second-order accurate time centered leap-frog scheme. All equations are integrated with the expansion factor $`a`$ as a time variable and the global time step hierarchy is thus set by the step $`\mathrm{\Delta }a_0`$ at the zeroth level (uniform base grid). The step on level $`L`$ is then $`\mathrm{\Delta }a_L=\mathrm{\Delta }a_0/2^L`$.
The choice of an appropriate time step for a simulation is dictated by the peak force resolution. The number of time steps in our simulations is such that the rms displacement of particles during a single time step is always less than 1/4 of a cell. No particle moves further than $`0.5`$ cells in a single time step, where the cell size and time step for particles located on the refinement level $`L`$ are $`\mathrm{\Delta }x_0/2^L`$ and $`\mathrm{\Delta }a_0/2^L`$, respectively. The value of $`\mathrm{\Delta }a_0=0.0015`$ used in run ART1 (see Table 1) was determined in a convergence study using a set of $`64^3`$ particle simulations described in Kravtsov et al. (1998). To study the effects of the time step, we have also run a simulation with $`\mathrm{\Delta }a_0=0.003`$.
The ART code integrates the equations of motion in comoving coordinates. Therefore, if a fixed grid is used to calculate the forces, the force resolution of the simulation degrades as $`a=(1+z)^{-1}`$ (see § 1). In order to prevent this and to preserve the initial resolution in physical coordinates in the simulations presented in this paper, the dynamic range between the start ($`z_i=87`$) and the end ($`z=0`$) of the simulation should increase by a factor of $`(1+z_i)`$: i.e., for our simulations it should reach $`128\times (1+z_i)=11,136`$.
In the simulations presented in this paper, the peak resolution is reached by creating a refinement hierarchy of five levels of refinement in addition to the base $`128^3`$ uniform grid. However, the small number of particles in these simulations does not allow the code to reach the required target dynamic range of $`11,136`$, estimated above.
### 2.3 Simulations
Although we are not directly interested here in cosmological applications, we decided to use a definite cosmological model: the cluster-normalized, $`\sigma _8=0.67`$, standard cold dark matter (SCDM) model. All simulations were run on a $`128^3`$ mesh with $`64^3`$ particles and started with the same random realization at $`z=87`$. The adopted box size is $`15h^{-1}\mathrm{Mpc}`$, which gives a mass resolution of $`3.55\times 10^9h^{-1}\mathrm{M}_{\odot }`$. The AP<sup>3</sup>M simulations were carried out varying both the force resolution and the time step. The two runs of the ART code differ only in the number of integration steps on the lowest-resolution, uniform grid.
For comparison we also ran PM simulations. Both the AP<sup>3</sup>M and the ART codes have an internal PM block, and we ran two pairs of PM simulations using these different PM implementations. This is done to compare the implementations and to explore the effects of the time integration scheme on the results. In the case of AP<sup>3</sup>M (PM1 and PM2) both the adaptive and the PP parts of the P<sup>3</sup>M code have been switched off, while in the ART PM runs (PM3 and PM4) we have simply switched off the mesh refinement. The parameters of the simulations are summarized in Table 1. The force softening is given in grid units and in $`h^{-1}\mathrm{kpc}`$ (in brackets), and the number of steps of the ART simulations is quoted for the lowest resolution level (the effective number on the highest resolution level is 32 times larger).
Using this set of runs we can compare simulations with the same force resolution but different integration steps (e.g., AP<sup>3</sup>M1 with AP<sup>3</sup>M4, ART1 and ART2) amongst each other and simulations with the same number of integration steps but with varying force resolution (e.g., AP<sup>3</sup>M1 with AP<sup>3</sup>M5). Additionally we compare the different $`N`$-body codes in order to quantify the deviations due to different (grid-based) methods to solve the equations of motion.
It is important to keep in mind that the shape of the small-scale force is somewhat different in the codes used. Therefore, equal dynamic range does not correspond to the same physical resolution. The peak resolution of the ART code is $`2`$ cells of the highest level refinement, and so the actual force resolution is twice worse than the “formal” resolution given by the dynamic range. To make cross-code comparison, we have performed the simulation AP<sup>3</sup>M5, which has approximately half the dynamic range of the ART runs and similar force resolution.
## 3 Properties of the particle distribution
### 3.1 Visual comparison
A first inspection of the global distribution of particles in a $`3h^{-1}\mathrm{Mpc}`$ thick slice (Fig. 1, top row) shows that the distributions are very similar. Even the much lower resolution PM1 simulation shows virtually the same global particle distribution. In the bottom row of the figure we zoom into the region marked by rectangles in the upper panel. Here one can clearly see that the two high resolution runs produce many small-size dense halos, which are, however, slightly shifted in their positions. We attribute this shift to the cumulative phase errors due to the differences in the time integration schemes, of the type observed and discussed in a recent code comparison by Frenk et al. (1999; Santa-Barbara cluster comparison project). Most of these small clumps are absent in the PM run due to its poor resolution. The small-scale density peaks do not collapse on scales smaller than 2 PM grid cells due to the sub-newtonian character of self-gravity at these scales (see § 3.3). From Figure 1 it can be seen that only halos of size larger than approximately one grid cell collapse in the PM simulation (see, e.g., Klypin 1996).
Our comparison can be contrasted with the comparison by Suisalu & Saar (1996) (Fig. 1 and Fig. 2 in their paper). On large scales, the particle distributions in the different runs agree much better in our case than do the simulations compared in Suisalu & Saar (1996). This indicates that the differences they observed are due to the differences in their multigrid algorithm rather than to the two-body scattering, as argued in their paper.
### 3.2 The minimal spanning tree
To quantify the differences between the simulations we have calculated the minimal spanning tree (MST) of the particle distribution. The minimal spanning tree of any point distribution is a unique, well defined quantity which describes the clustering properties of the point process completely (e.g., Bhavsar & Splinter 1996, and references therein). The minimal spanning tree of $`N`$ points contains $`N-1`$ connections. For the ART1, AP<sup>3</sup>M1, AP<sup>3</sup>M4, AP<sup>3</sup>M5 and PM1 simulations we show in Fig. 2 the number of connections $`N_{con}`$ of the tree per bin of the connection length (equal to $`0.005\overline{l}`$) as a function of the length of the connection, $`l_{con}`$. Here, $`\overline{l}=(V_{box}/N)^{1/3}`$ denotes the mean inter-particle separation. Since the length of a connection is proportional to $`\rho ^{-1/3}`$, the probability distribution of connections ($`N_{con}/N`$) is equivalent to the density probability distribution in the simulation.
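A minimal sketch of how such a distribution can be obtained — using SciPy's minimum-spanning-tree routine on a brute-force distance matrix, which is feasible only for modest particle numbers — is given below; it is our own illustration, not the code used for Fig. 2.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
pos = rng.random((2000, 3))                  # toy particle positions in a unit box
lbar = (1.0 / len(pos))**(1.0 / 3.0)         # mean inter-particle separation

dist = squareform(pdist(pos))                # full N x N distance matrix (small N only)
tree = minimum_spanning_tree(dist)           # sparse matrix holding the N-1 edges
lengths = tree.data / lbar                   # connection lengths in units of lbar

hist, bin_edges = np.histogram(lengths, bins=np.arange(0.0, 2.0, 0.005))
print(lengths.size, bin_edges[hist.argmax()])  # N-1 connections; location of the peak
```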
In Fig. 2, the connection length distributions for the ART and AP<sup>3</sup>M simulations are peaked at the same relative connection length ($`0.015`$–$`0.02\overline{l}`$), whereas the PM simulation is peaked at a higher value ($`0.03`$–$`0.04\overline{l}`$). This indicates the ability of the ART and the AP<sup>3</sup>M codes to resolve shorter scales and, therefore, to reach higher overdensities. The position of the maximum depends only slightly on the resolution. In fact, increasing the resolution from PM to ART by a factor of 32 shifts the maximum by about a factor of 2. This is probably because the differences affect only a very small fraction of the volume and dark matter particles. Correspondingly, differences in the resolutions of AP<sup>3</sup>M1 and AP<sup>3</sup>M5 are too subtle to have a visible effect on the distribution.
The time integration has a much more noticeable effect. An increase of the integration step in the run AP<sup>3</sup>M4 compared to AP<sup>3</sup>M1 and AP<sup>3</sup>M5 leads to a shift of the maximum (and of the whole distribution) to higher values of $`l_{con}`$. This is caused by inaccuracies in the integration of particle trajectories in high-density regions, as is evidenced, for example, by the halo density profiles in the AP<sup>3</sup>M4 run (see also results below).
The maxima of the AP<sup>3</sup>M simulations are slightly higher in amplitude than the maximum of the ART simulation. This reflects the fact that the AP<sup>3</sup>M code resolves forces uniformly (i.e., equally well in both low- and high-density regions). The ART code by design reaches high resolution only in the high-density regions. Therefore, there are groups of a few particles located in low-density environments that are not resolved by the ART code but are resolved by the AP<sup>3</sup>M code. For example, we have found 3618 doublets and 558 triplets linked by $`ll=0.015`$ (the maximum of the distribution) in the AP<sup>3</sup>M5 run, whereas only 2753 doublets and 466 triplets are found in the ART1 simulation at this linking length.
Figure 3 demonstrates that many of the missing doublets and triplets are indeed located in the low-density regions. The right column of this figure shows the projection of the distribution of all doublets and triplets found in the AP<sup>3</sup>M5 (top) and ART1 (bottom) runs by the friends-of-friends algorithm with a linking length of $`ll=0.015`$. For comparison, we show in the left column of Fig. 3 a projection of a randomly selected 5% of all particles. While some doublets in the low-density regions are found in the ART run, all of them are gravitationally unbound (chance superpositions). We find that in the AP<sup>3</sup>M runs most of such doublets and triplets are bound and are part of binary, triple, or higher multiplicity clusters.
According to the sampling theorem, one needs at least $`20`$–$`30`$ particles to resolve three-dimensional waves in the initial power spectrum. The presence of gravitationally bound clusters consisting of just a few particles is therefore artificial.
### 3.3 Density Cross–Correlation Coefficient
The density cross–correlation coefficient,
$$K=\frac{\langle \delta _1\delta _2\rangle }{\sigma _1\sigma _2},$$
(2)
was introduced by Coles et al. (1993) in order to quantify similarities and differences between simulations of different cosmological models. Recently, Splinter et al. (1998) have adopted this statistic to compare simulations. Here, we follow the same approach and use this measure to quantify differences between simulations which have been carried out by different numerical algorithms or by the same algorithm but with different parameters of the simulation.
To compute $`K`$, we have calculated the densities on a regular mesh using the triangular-shaped cloud (TSC; Hockney & Eastwood 1981) density assignment scheme and then used the resulting density field to compute $`\langle \delta _1\delta _2\rangle `$ and the variances. We have varied the size of the grid in order to show the dependence of the cross-correlation on the smoothing scale of the density field.
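Once the two density-contrast fields are defined on the same grid, the coefficient itself is straightforward to evaluate; the sketch below is our own illustration (the TSC assignment step is assumed to have been done already, and the toy fields are random).

```python
import numpy as np

def cross_correlation(delta1, delta2):
    """Density cross-correlation coefficient K = <d1 d2> / (sigma1 sigma2)."""
    d1 = delta1 - delta1.mean()
    d2 = delta2 - delta2.mean()
    return (d1 * d2).mean() / (d1.std() * d2.std())

rng = np.random.default_rng(1)
base = rng.normal(size=(64, 64, 64))              # toy density-contrast field
noisy = base + 0.5 * rng.normal(size=base.shape)  # same field with small-scale noise
print(cross_correlation(base, base))              # = 1 by construction
print(cross_correlation(base, noisy))             # < 1: small-scale differences lower K
```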
We summarize our results in Table 2: the first four rows present the cross-correlation coefficients between the runs with the same time integration scheme but different time steps. In the following two rows we present the cross-correlation coefficients for the AP<sup>3</sup>M runs with different force resolutions but the same integration step. The rest of the rows in the table present the cross-correlation coefficients for the runs with different time integration schemes as well as the cross-correlation coefficients between the AP<sup>3</sup>M1 and PM1 runs which have been simulated with the same time itegration scheme but with vastly different force resolutions.
In all cases, it is obvious that the cross-correlation worsens for smaller smoothing scales (the larger density grid size). With smaller smoothing, smaller structures in the density field are resolved. The degraded cross-correlation therefore indicates that there are differences in locations and/or densities of these small-scale structures. It is also clear from the definition of $`K`$ that this measure is particularly sensitive to the differences in the highest density regions. If we restrict the correlation analysis to a coarse grid, we smooth the particle distribution with a fairly large smoothing length and smear out the details and differences of the small-scale structure.
The differences revealed by the cross-correlation coefficient can arise either because the internal density distribution of the structures is different in different runs or because the spatial locations of these structures are somewhat different. Our analysis of halo profiles (see § 4.5) shows that differences in density in the same halos are small (except, of course, for the PM runs) and have no significant effect on the cross-correlation. The degrading cross-correlation in high-resolution runs is thus due to the differences in the locations of small-scale structure rather than to the differences in density. Indeed, one can readily see in the bottom row of Fig. 1 that positions of small clumps in the ART and AP<sup>3</sup>M simulations often differ by $`100`$–$`300h^{-1}\mathrm{kpc}`$, the scale at which a significant decrease in $`K`$ is observed. With such shifts in halo locations, the same halo may occupy different cells of the density grid, which systematically reduces $`\langle \delta _1\delta _2\rangle `$ (calculated cell-by-cell) and results in a lower $`K`$.
What causes the differences in halo positions? The rows 1, 2, 5, and 6 of Table 2 indicate that both force resolution and time step have the same effect on $`K`$, both causing some phase errors. However, rows 7-9 show that the time integration scheme causes much bigger phase differences than either force resolution or time step. This was indeed observed in the recent “Santa Barbara Cluster” code comparison project (Frenk et al. 1999). Different time integration schemes lead to a different accumulation of the phase errors. The manifestation of these differences is certain “asynchronicity” between simulations: the same phase error is accumulated at slightly different time moments. This results in shifts of halo positions when simulations are compared at the same time moment.
We have indeed observed such asynchronicity in our simulations. Thus, for example, cross-correlation coefficient $`K`$ between ART1 and AP<sup>3</sup>M5 on a $`128^3`$ grid reaches a maximum of 0.84 at $`a=1.04`$ when AP<sup>3</sup>M5 is evolved further in time (this can be compared to 0.71 at $`a=1.0`$ in Table 2) while keeping ART1 fixed at a=1.0. Similar effect is observed at $`z=2`$: the maximum $`K`$ is achieved when AP<sup>3</sup>M5 is advanced forward in time by a factor of 1.02 in expansion factor. Partly, this asynchronicity may be caused by the initial phase error introduced at the start as the AP<sup>3</sup>M simulations particles were advanced half a step (or by a factor of 1.03 in expansion factor) forward. The additional error accumulates during the time integration.
While most halo properties and properties of the matter distribution appear to be similar in the ART and the AP<sup>3</sup>M runs (see results below), the differences between positions of small-scale structures are much larger between these runs than any differences between runs simulated using the same code. This is clearly seen in the case of PM runs. All 4 runs cross-correlate perfectly within the code type (PM1 and PM2 were run using AP<sup>3</sup>M, while PM3 and PM4 were run using the ART code), but cross-correlate rather poorly when different code simulations are compared. In the latter case we observed a decrease of $`K`$ as we go to smaller smoothing scales similar to that observed in other cross-code cross-correlation coefficients.
One should of course bear in mind that the shape and accuracy of the PM force in the AP<sup>3</sup>M and ART codes are somewhat different at small (1–4 grid cells) scales. In the AP<sup>3</sup>M, the PM force is shaped using a modification of the Green functions (Couchman 1991), which is controlled by a special softening parameter $`ϵ_{\mathrm{PM}}`$. This procedure considerably reduces the errors in the force (down to $`\lesssim 5\%`$ in the case of $`ϵ_{\mathrm{PM}}\approx 3.5`$ used in our PM1 and PM2 simulations), at the expense of making the force somewhat “softer”. Thus, for example, in the PM1 and PM2 runs, the force becomes systematically smaller than the Newtonian value at separations $`\lesssim 3`$ grid cells (15% smaller at $`r=2`$ cells and 70% smaller at $`r=1`$ cell). The Green functions in the PM solver of the ART code are not modified to reduce the errors. This results in a force which is Newtonian on average down to the scale of one grid cell (the force then falls off sharply at smaller separations). At the same time, the errors at small separations are considerably higher (see Gelb 1992; Kravtsov et al. 1997). At a separation of 1 grid cell, the errors may reach $`50\%`$ (albeit for a small number of particle configurations). About $`10`$–$`20\%`$ of that error is due to the cubical shape of the particles assumed in the PM algorithm, while the remaining error arises from the numerical differentiation of the potential. These errors, however, are not systematically positive or negative but are scattered more or less evenly around zero. This means that particle trajectories can be integrated stably down to separations of 1–2 grid cells, as was demonstrated in Kravtsov et al. (1997).
Although it is important to keep the differences in force shape and accuracy in mind, we think that they are not the main cause of poor cross-correlations. The halo density profiles in different PM runs are in good agreement at all resolved scales and thus differences in internal density distributions cannot explain low cross-code $`K`$. At the same time, visual comparisons of halo positions show small shifts that are most naturally explained by the different phase errors accumulated in different codes.
This calls into question the usefulness of cross-correlation coefficient for studies of resolution effects (Splinter et al. 1998), unless the study is done within the same numerical code.
This conclusion can, in fact, be drawn from the results of Splinter et al. (1998): when the simulation evolves into the highly nonlinear stage, the cross-correlation within the same code is much better than the cross-correlation between runs of similar resolution simulated using different codes. For example, their Table 4 shows that on a $`128^3`$ grid the coefficient $`K`$ for the TREE-code runs with force resolutions of $`ϵ=0.0625`$ and $`ϵ=0.25`$ (in units of the mean interparticle separation) is 0.87, while the cross-correlation coefficient between the TREE $`ϵ=0.0625`$ and AP<sup>3</sup>M $`ϵ=0.25`$ runs is only $`0.67`$. This is similar to the value of $`0.71`$ which we obtained for $`K`$ on the same grid size for the ART1 and AP<sup>3</sup>M runs. Figure 9 in Splinter et al. (1998) also agrees with our conclusion. The phase errors demonstrated in this figure increase towards smaller scales, which explains the decrease of the cross-correlation coefficient for finer grids. Moreover, the figure shows that the cross-correlation within a single code type is always good regardless of the mass or force resolution. The largest phase differences are observed between runs simulated using different codes.
The differences in time integration schemes are not the only possible source of small-scale differences in the density fields. For example, the last two rows of Table 2 show that the cross-correlation between the high-resolution ART and AP<sup>3</sup>M runs and the low-resolution PM run is poor regardless of whether the integration scheme was the same (AP<sup>3</sup>M1 $``$ PM1) or different (ART1 $``$ PM1). In this case, the differences in the small-scale details of the density fields are due to the vastly different force resolutions of the simulations. As was noted above, the low resolution of the PM simulation ($`234h^1\mathrm{kpc}`$) precludes the collapse of any halos smaller than 1–2 grid cells (see Fig. 1). In the locations of small halos $`\delta `$ is very high in the ART and AP<sup>3</sup>M runs but is much lower (because there are no halos) in the PM run; hence the considerably lower cross-correlation coefficient.
Given the above considerations, our interpretation of our results and of the results of Splinter et al. (1998) is markedly different from the interpretation of the authors of the latter study. These authors interpret the differences between the low- and high-resolution runs as an erroneous evolution in the latter, whereas our interpretation (obvious from Fig. 1) is that these differences arise because high-density small-scale structures such as halos do not collapse in low-resolution runs. The differences between high-resolution runs are interpreted as phase errors leading to small shifts in the locations of small-scale structures, as discussed above and as observed in other studies (Frenk et al. 1999).
The origin of these phase errors is the dynamical instability of particle trajectories in the high-density regions. As is well known, the trajectories in the virialized systems tend to be chaotic and any small differences existing at any time moment will tend to grow very fast with time. The divergence can thus be expected to be more important in nonlinear regions and this explains the larger phase errors at smaller scales. The differences may be caused by the difference in the force calculation, errors introduced by numerical time integration, or simply by different roundoff errors. The resulting phase errors are cumulative and thus will grow with time.
Unfortunately, it is impossible to tell which code has the smallest phase errors, because we do not know how the phases should evolve in the high-density regions. However, this is probably not the point. Phase errors of this kind will be very difficult to get rid of, because even with infinitely good mass, force, and time resolution there will always be roundoff errors, which behave differently in different codes and thus tend to amplify differences in phases. Fortunately, almost all popular statistics used in cosmological analyses are not sensitive to phases and therefore the results are not affected by this problem. Moreover, it is clear that at scales $`\genfrac{}{}{0pt}{}{_>}{^{}}1h^1\mathrm{Mpc}`$ the errors in phases become negligible (different runs cross-correlate perfectly) and therefore even phase-sensitive analyses should not be affected if restricted to large scales. Nevertheless, the existence of such errors should be kept in mind when analyzing or comparing cosmological simulations.
### 3.4 Particle trajectories
As we have discussed in the previous section, small numerical errors tend to grow and lead to deviations of the particle trajectories in nonlinear regions. Nevertheless, it is clear that the maximum deviations should be approximately equal to the size of a typical halo. Although particle trajectories can deviate, they are expected to stay bound to the parent halo. There is an additional deviation of the order of 100–300 $`h^1\mathrm{kpc}`$ in the positions of the halos themselves (due to the phase errors), but this is also of the order of the halo size. All in all, we can expect a scatter in the positions of the same particles in different simulations no larger than the size of the largest systems formed: 1–2 $`h^1\mathrm{Mpc}`$.
To compare the particle trajectories in our simulations, we have calculated the deviations of the coordinates $`\mathrm{\Delta }r=|\vec{r}_k^{(1)}-\vec{r}_k^{(2)}|`$, where $`\vec{r}_k^{(i)}`$ is the position of the $`k`$-th particle in the $`i`$-th simulation.
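A minimal sketch of this measurement is given below; it assumes that the particles carry identical indices (or IDs) in the two runs and that the box is periodic, and the array names are illustrative.

```python
import numpy as np

def particle_deviations(r1, r2, box):
    """|Delta r| for every particle, given (N, 3) position arrays of the same
    particles in two runs and a periodic box of side `box`."""
    d = r1 - r2
    d -= box * np.round(d / box)           # minimum-image convention
    return np.sqrt((d ** 2).sum(axis=1))
```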
In Fig. 4 we have plotted $`\mathrm{\Delta }r`$ as a function of the local overdensity at the position $`\vec{r}^{(1)}`$ of the particle, for 10% of the particles randomly selected from the total number of particles. In the upper panels of Fig. 4 we compare runs simulated using the same code but different integration steps (left) and different force resolution (right). In the lower panels we compare runs simulated using different codes.
A quick look at Fig. 4 shows that the scatter in particle positions is substantial but in most runs it is contained within $`\genfrac{}{}{0pt}{}{_<}{^{}}2h^1\mathrm{Mpc}`$, the approximate size of the largest system seen in Fig. 1. The scatter for most particles in cross-code differences shown in the bottom row is somewhat larger due to the larger differences in halo positions. As mentioned above, this difference adds to the difference in particle positions within halos. However, the scatter is even larger when AP<sup>3</sup>M1 and AP<sup>3</sup>M3 are compared. In this case, about 1.4% of particles are separated by more than $`2h^1\mathrm{Mpc}`$. We did not find such outliers when comparing ART runs or AP<sup>3</sup>M runs with lower resolution amongst themselves. The comparison of AP<sup>3</sup>M5 and ART1 shows that the scatter is much smaller.
Figure 5 shows all the particles with separations $`|\mathrm{\Delta }r|>2h^1\mathrm{Mpc}`$ in the AP<sup>3</sup>M1 and AP<sup>3</sup>M3 runs. The overdensity in the upper panel is estimated at the location of the particle in the AP<sup>3</sup>M1 run, while in the bottom panel it is estimated for the corresponding particle in the AP<sup>3</sup>M3 run. It is clear that counterpart particles in the AP<sup>3</sup>M3 run tend to be located in low-density regions, whereas the corresponding particles in the AP<sup>3</sup>M1 run are located in a wide range of environments.
The fact that these large deviations occur preferentially in the highest resolution runs immediately raises the suspicion that they are due to two-body scattering. Indeed, when we analyzed the trajectories of the deviant pairs, we found obvious scattering events. In Fig. 6 we present an example of such two-body scattering. In this figure the force resolution increases (the softening decreases) from right to left. For clarity, only every second integration step is shown.
The region shown in Fig. 6 is smaller than the grid cell size, so there is no interaction at all in the PM simulation. The two particles move in the mean potential of the other particles. The same happens in the ART simulations. On the contrary, due to the high force resolution these two particles interact and approach each other in the AP<sup>3</sup>M 1-4 runs. In the case of the highest force resolution this leads to an interaction which, due to the insufficiently small time step, results in a violation of energy conservation. The particles approach very closely, feel a large force, undergo a huge acceleration at this moment and move away in opposite directions with high velocities. At the next integration step the particles are already too far apart to feel a substantial two-body interaction. Therefore, they move with almost constant high velocity in opposite directions. The velocity is about a factor of 10 higher than the initial velocity, i.e. the total kinetic energy of the system increased during the interaction by a factor of 100. Such pairs of particles are located at large $`\mathrm{\Delta }r`$ in the scatter plots shown in Figs. 4 & 5. We have run an additional simulation with parameters identical to those of the AP<sup>3</sup>M3 run but with a much larger number of time steps (48,000 steps in total). The outlying particles disappear and the plots look similar to the AP<sup>3</sup>M5-ART1 plot. This means that although the scattering is still present, there is no violation of energy conservation in the smaller-step run and therefore there are no high-velocity streaming particles.
Fig. 5 indicates that scattered particles attain large velocities and move out of high-density regions. Most of the simulation volume is low-density, so it is not surprising that we find these streaming particles preferentially in the low-density regions. Their counterparts in the AP<sup>3</sup>M1 run, on the other hand, are located in a wide range of environments. This implies either that the scattered particles were ejected from high-density regions, or that these particles attained their excess energy in two-body encounters in low-density regions and simply did not participate in the gravitational collapse due to their high velocities.
Fig. 7 shows the acceleration of the particle denoted by filled circles in the three AP<sup>3</sup>M plots in Fig. 6. This figure illustrates the spike in the particle acceleration during the scattering event. It also shows that the higher the force resolution, the larger the acceleration.
Such collisions are possible if the force resolution is independent of the local particle density. This problem does not arise in the ART code because the resolution is only increased in regions of high local particle density. We have checked all particles with separations $`\mathrm{\Delta }r>2h^1`$Mpc in both ART runs and could not find any event comparable to the collision in the AP<sup>3</sup>M runs.
The conditions for two-body scattering can be estimated by noting that strong scattering occurs when the potential energy of two-body interaction is equal to kinetic energy of the interacting particles. This gives the scale $`s=2Gm/v^2`$ or
$$s=8.61\times 10^{-2}h^{-1}\mathrm{kpc}\left(\frac{m_p}{10^8h^{-1}\mathrm{M}_{\odot }}\right)\left(\frac{v}{100\mathrm{km}/\mathrm{s}}\right)^{-2}.$$
(3)
For the simulations presented here ($`m_p=3.55\times 10^9h^1\mathrm{M}_{}`$) this scale is
$$s=3.05\times v_{100}^{-2}h^{-1}\mathrm{kpc},$$
(4)
where $`v_{100}`$ is velocity in units of $`100\mathrm{km}/\mathrm{s}`$. In terms of the mean interparticle separation, $`d=\overline{n}^{1/3}`$, this gives
$$\stackrel{~}{s}\equiv s/d=0.013\times v_{100}^{-2}.$$
(5)
Two-body scattering occurs if the force resolution, $`ϵ`$, is smaller than the scale $`s`$ and if $`s`$ is much smaller than the local interparticle separation $`d_{loc}`$. The latter for these simulations is $`d_{loc}=234.38(1+\delta )^{-1/3}h^1\mathrm{kpc}`$.
The above equations show that scattering is possible in the AP<sup>3</sup>M runs 1 through 4. This is in agreement with the results presented in Figures 4 and 7.
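The sketch below evaluates these conditions numerically; it is only an illustration of eqs. (3)-(5), and the "much smaller" criterion is implemented as an assumed factor of ten.

```python
import numpy as np

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def scattering_scale_kpc(m_p_msun, v_kms):
    """Strong two-body scattering scale s = 2 G m_p / v^2 (eq. 3); for particle
    masses quoted in h^-1 Msun the result is in h^-1 kpc."""
    return 2.0 * G * m_p_msun / v_kms ** 2

def scattering_expected(m_p_msun, v_kms, softening_kpc, overdensity, mean_sep_kpc):
    """True if the softening resolves s while s stays well below the local
    interparticle separation d_loc = mean_sep * (1 + delta)^(-1/3)."""
    s = scattering_scale_kpc(m_p_msun, v_kms)
    d_loc = mean_sep_kpc * (1.0 + overdensity) ** (-1.0 / 3.0)
    return softening_kpc < s and s < 0.1 * d_loc  # factor of 10 is an assumption

# scattering_scale_kpc(3.55e9, 100.0) ~ 3.05, reproducing eq. (4).
```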
### 3.5 2-point correlation function
In Fig. 8 we show the correlation function of the dark matter distribution down to the scale of $`5h^1\mathrm{kpc}`$, which is close to the force resolution of all our high-resolution simulations. The correlation functions of runs AP<sup>3</sup>M1 and ART2 are similar to those of AP<sup>3</sup>M5 and ART1, respectively, and are not shown for clarity. We can see that the AP<sup>3</sup>M5 and the ART1 runs agree to $`\genfrac{}{}{0pt}{}{_<}{^{}}10\%`$ over the whole range of scales. The correlation amplitudes of runs AP<sup>3</sup>M 2-4, however, are systematically lower at $`r\genfrac{}{}{0pt}{}{_<}{^{}}50`$–$`60h^1\mathrm{kpc}`$ (i.e., the scale corresponding to 15–20 force resolutions), with the AP<sup>3</sup>M3 run exhibiting the lowest amplitude. At scales $`\genfrac{}{}{0pt}{}{_<}{^{}}30h^1\mathrm{kpc}`$ the deviations from the ART1 and the AP<sup>3</sup>M5 runs are 100–200%. We attribute these deviations to the numerical effects discussed in § 5. The fact that the AP<sup>3</sup>M2 correlation amplitude deviates less than that of the AP<sup>3</sup>M3 run indicates that the effect is very sensitive to the force resolution.
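For reference, a simple pair-count estimator of the dark matter correlation function in a periodic box is sketched below. It is the minimal "natural" estimator (data pairs over the random expectation) and assumes all bin radii are below half the box size; it is not necessarily the estimator used for Fig. 8.

```python
import numpy as np
from scipy.spatial import cKDTree

def xi_periodic(pos, box, r_edges):
    """Two-point correlation function xi(r) from pair counts in a periodic box.
    pos: (N, 3) positions in [0, box); r_edges: increasing bin edges, all < box / 2."""
    r_edges = np.asarray(r_edges, dtype=float)
    n = len(pos)
    tree = cKDTree(pos, boxsize=box)
    # cumulative ordered pair counts (including self-pairs) within each radius
    cum = np.array([tree.count_neighbors(tree, r) for r in r_edges], dtype=float)
    dd = np.diff((cum - n) / 2.0)                        # distinct pairs per bin
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(r_edges ** 3)
    expected = 0.5 * n * (n - 1) * shell_vol / box ** 3  # random expectation
    return dd / expected - 1.0
```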
The correlation function of the PM simulation deviates strongly on small scales. However, the bend coincides with the force resolution ($`234h^1\mathrm{kpc}`$) of this run. At scales smaller than the resolution we can expect an incorrect correlation amplitude, because waves with wavelengths smaller than the resolution do not grow at the correct rate in these runs.
This result agrees with the correlation function comparison done by Colín et al. (1999), where agreement of $`\genfrac{}{}{0pt}{}{_<}{^{}}10\%`$ was found between the correlation functions from the larger $`256^3`$-particle ART, AP<sup>3</sup>M, and PM simulations at all resolved scales. There is no evidence, therefore, that high-resolution simulations, provided they are run with a sufficiently small time step, produce an incorrect 2-point correlation function at scales smaller than the mean interparticle separation. High-resolution and PM simulations of the same mass resolution always agree at the scales resolved in the PM runs.
## 4 Properties of Halos
### 4.1 Identification of Halos
To compare the properties of the halos and their distribution in different simulations, we identify DM halos using the friends-of-friends (FOF) algorithm. The algorithm identifies clumps in which every particle has a neighbor closer than $`ll`$ times the mean interparticle separation, $`r_{ll}=ll\times \overline{l}`$. This halo finding algorithm does not assume a special geometry for the identified objects. A drawback is that it only uses particle positions and therefore can identify spurious unbound clumps at low halo masses.
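A minimal, non-periodic sketch of such a friends-of-friends grouping is given below; it is only illustrative (the production halo finder handles the periodic box and is far more efficient), and the parameter names are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def fof_groups(pos, mean_sep, ll=0.2, min_np=25):
    """Group particles whose neighbors lie within ll * mean interparticle
    separation (union-find over linked pairs); keep groups with >= min_np members."""
    link = ll * mean_sep
    pairs = cKDTree(pos).query_pairs(r=link)

    parent = np.arange(len(pos))                 # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]        # path halving
            i = parent[i]
        return i

    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    groups = {}
    for k in range(len(pos)):
        groups.setdefault(find(k), []).append(k)
    return [members for members in groups.values() if len(members) >= min_np]
```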
The mean overdensity of a particle cluster is related to the linking length used to identify it. In an Einstein-de Sitter universe (simulated in our runs) virialized halos have an overdensity of $`\delta _{\mathrm{vir}}178`$, which corresponds approximately to a linking length of $`ll=0.2`$. A reduction of the linking length by a factor of 2 roughly corresponds to an increase of the overdensity by a factor of 8. Using smaller linking lengths we can therefore study the substructure of the DM halos. Table 3 lists the number of halos identified using different linking lengths in different runs for two lower limits on the number of particles in a halo: 25 (columns 2–4) and 50 (columns 5–7).
The table shows that there are differences of 15–50% between the high-resolution runs. The differences are present not only at small linking lengths, but even at the “virial” linking length, $`ll=0.2`$. These differences are partly due to the nature of the FOF algorithm: small differences in the particle configurations may lead to the identification of a single halo in one simulation and of two or more halos in another simulation. Nevertheless, some systematic differences are also apparent. The PM simulations have almost half as many halos due to the absence of small-mass halos (see Fig. 1) that do not collapse (or do not survive) because of the poor force resolution of these runs.
Also, while AP<sup>3</sup>M1, AP<sup>3</sup>M5, ART1, and ART2 runs agree reasonably among themselves, the number of halos in runs AP<sup>3</sup>M2, AP<sup>3</sup>M3, and AP<sup>3</sup>M4 is systematically lower. In this case the differences seem to be counter-intuitive: the number of halos is smaller in higher force resolution runs. These differences persist at all linking lengths indicating that there are differences in substructure as well as in the number of isolated halos. Moreover, the differences persist even for a larger cut in the number of particles. It has been noted in previous studies (e.g., van Kampen 1995; Moore, Katz & Lake 1996) that particle evaporation due to two-body scattering (see, for example, Binney & Tremaine 1987) can be important for halos of $`\genfrac{}{}{0pt}{}{_<}{^{}}30`$ particles. For such halos the evaporation time-scale, especially in the presence of strong tidal fields, can be comparable to or less than the Hubble time. However, for halos containing $`\genfrac{}{}{0pt}{}{_>}{^{}}50`$ particles, evaporation should be negligible. Nevertheless, the trend with resolution seen in Table 3 does suggest that two-body evaporation is the process responsible for the differences.
The most likely explanation of these results is, in our opinion, the accuracy of the time integration. Estimates of the evaporation time-scale assume no errors in the energy exchange between particles in a scattering event. Our analysis of such events in our simulations, on the other hand, shows that with the time step of the AP<sup>3</sup>M 2-4 runs there is severe violation of energy conservation during scatterings. The particles attain much larger velocities than they would if the integration were exact. For example, in the scattering event shown in Fig. 6 the final kinetic energy of the two particles is 100 times larger than their initial kinetic energy. Therefore, a time step that is not small enough to properly handle two-body scattering may exacerbate the process of evaporation and result in much shorter than predicted evaporation time-scales, even for halos containing relatively large numbers of particles. The differences between the AP<sup>3</sup>M1 run and runs AP<sup>3</sup>M2 and AP<sup>3</sup>M4 indicate that this effect is very sensitive to both the force resolution and the time step of the simulation.
Another possible explanation is that halos do not evaporate but the particles are heated due to non-conservation of energy which makes halos “puffier”. Such halos would be more prone to destruction by tides in high-density regions.
### 4.2 Mass Function
In Fig. 9 we show the mass functions of all halos identified with various linking lengths at $`z=0`$.
As can be seen in the left panel of Fig. 9, the halo mass function in the PM run is biased towards large masses: the number of objects of mass $`\genfrac{}{}{0pt}{}{_>}{^{}}2\times 10^{12}h^1\mathrm{M}_{}`$ in the PM run agrees well with the corresponding number in the high-resolution runs, while at lower masses the number of halos is strongly underestimated. Mass $`2\times 10^{12}h^1\mathrm{M}_{}`$ corresponds to the virial radius (at $`z=0`$, $`\delta _{\mathrm{vir}}=178`$) of $`R_{\mathrm{vir}}213h^1\mathrm{kpc}`$, i.e. very close to the force resolution of the PM runs ($`234h^1\mathrm{kpc}`$). The conclusion is that the PM runs fail to reproduce the correct abundances of halos with the virial radius less than about two grid cell sizes.
At the smaller linking length, $`ll=0.05`$ (right panel of Fig. 9), the PM run severely underestimates the halo abundances. We attribute this also to the poor force resolution of the run. The poor resolution prevents the formation of dense cores in the inner regions of collapsed halos, and halos that do not reach an overdensity of $`11,000`$ (the overdensity of objects identified with $`ll=0.05`$) will be missed. Even if the central density of some halos reaches this value, the halos are still considerably less dense than their counterparts in the high-resolution runs and are therefore more susceptible to destruction by the tidal fields in high-density regions (Moore et al. 1996; Klypin et al. 1999).
The mass functions of the high-resolution runs agree to $`30\%`$ for isolated halos of overdensity $`\delta =178`$ ($`ll=0.2`$). The AP<sup>3</sup>M runs have a larger number of identified halos at masses $`\genfrac{}{}{0pt}{}{_<}{^{}}5\times 10^{10}h^1\mathrm{M}_{}`$ (i.e., halos containing $`\genfrac{}{}{0pt}{}{_<}{^{}}15`$ particles) than the ART runs. This difference is due to the smaller number of small halos in low-density regions in the ART runs discussed in § 3.2. The mass functions of the AP<sup>3</sup>M1, AP<sup>3</sup>M5, and the ART runs agree well at masses $`\genfrac{}{}{0pt}{}{_>}{^{}}10^{11}h^1\mathrm{M}_{}`$ for both $`ll=0.2`$ and $`ll=0.05`$ (overdensities of 178 and 13000, respectively), indicating that these runs produced similar populations of halos with similar central densities. The mass functions of the AP<sup>3</sup>M 2-4 runs are similar for $`ll=0.2`$, but show differences for $`ll=0.05`$. Thus, for example, the abundance of halos of mass $`10^{11}`$–$`10^{12}h^1\mathrm{M}_{}`$ (30–300 particles) in the AP<sup>3</sup>M3 run is underestimated by a factor of 1.5–2. The mass functions of the AP<sup>3</sup>M2 and AP<sup>3</sup>M4 runs lie in between those of the AP<sup>3</sup>M3 and AP<sup>3</sup>M5 runs.
The fact that differences are present at the small linking length indicates differences in the high-density regions. This may be due to the generally lower inner densities of halos and/or to the destruction of “heated” satellites discussed above.
### 4.3 Halo correlation function
Figure 11 shows the 2-point correlation function of the identified DM halos. There is good agreement between the correlation functions of isolated virialized ($`ll=0.2`$) halos in the high-resolution runs. As for the dark matter correlation function, the agreement is better than 10%. The agreement between the AP<sup>3</sup>M1, AP<sup>3</sup>M5, and the two ART runs does not break down even at higher overdensities ($`ll=0.05`$), which indicates that these runs produced similar small-scale substructure in the high-density regions within isolated halos. We do not find any differences of the type seen in the DM correlation function (§ 3.5) between the AP<sup>3</sup>M runs at $`ll=0.2`$. For halos identified using $`ll=0.05`$ some differences are observed, but these are comparable to the Poisson errors and are therefore inconclusive.
On the other hand, halos in the PM simulation exhibit higher correlations than halos in the high-resolution runs. As discussed in the previous section, the halo mass function in the PM run is biased toward high masses. The higher amplitude of the correlation function can thus be explained by the mass-dependent bias: higher mass halos are clustered more strongly.
### 4.4 Velocity Dispersion vs. Mass
Figure 10 shows the correlation of velocity dispersion and mass for one of the ART, AP<sup>3</sup>M, and PM simulations. A correlation $`\sigma _\mathrm{v}\propto M^{1/3}`$ is expected for virialized halos. For the ART1 and AP<sup>3</sup>M5 simulations, the best-fit slope of the correlation for the 50 most massive halos is 0.33 (the correlation for the rest of the AP<sup>3</sup>M runs is similar). The velocity dispersions of low-mass halos scatter around this relation. For comparison, a line of slope 1/3 is included in all three panels. It should be mentioned that at the low-mass end the FOF groups contain only a few particles, so that $`\sigma _\mathrm{v}`$ is not well determined: the error due to unbound particles accidentally linked by the FOF algorithm may be very large.
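The slope quoted above can be obtained with a straightforward log-log least-squares fit; the sketch below assumes halo masses and velocity dispersions are available as arrays from the FOF catalogue.

```python
import numpy as np

def sigma_mass_slope(masses, sigmas, n_most_massive=50):
    """Least-squares slope of log10(sigma_v) versus log10(M) for the most
    massive halos; virial scaling predicts a slope of 1/3."""
    order = np.argsort(masses)[::-1][:n_most_massive]
    slope, _ = np.polyfit(np.log10(masses[order]), np.log10(sigmas[order]), 1)
    return slope
```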
The halos in the PM run deviate from the expected slope at masses $`\genfrac{}{}{0pt}{}{_<}{^{}}10^{13}h^1\mathrm{M}_{}`$. The virial radii of the halos of these masses are $`\genfrac{}{}{0pt}{}{_<}{^{}}364h^1\mathrm{kpc}`$. Their size is thus $`\genfrac{}{}{0pt}{}{_<}{^{}}3`$ force resolutions across. Therefore, the potential and the internal dynamics of the particles in these halos are underestimated by the PM code leading to a steeper slope of the $`\sigma _\mathrm{v}M`$ relation.
### 4.5 Density Profiles
In this and the next two sections we present a halo-to-halo comparison of individual halos in different simulations. In this section we compare the density profiles of DM halos. The density distribution of hierarchically formed halos is currently a subject of debate (see, e.g., Navarro, Frenk & White 1997; Kravtsov et al. 1998; Moore et al. 1999), and studies of resolution effects and cross-code comparisons are therefore very important. In Figure 12 we present the density profiles of four of the most massive halos in our simulations. We have not shown the profile of the most massive halo because it appears to have undergone a recent major merger and is not very relaxed. In this figure we present only profiles of halos in the high-resolution runs. Not surprisingly, the inner density of the PM halos is much smaller than in the high-resolution runs and their profiles deviate strongly from the profiles of the high-resolution halos at the scales shown in Fig. 12. We do not show the PM profiles for clarity.
A glance at Fig. 12 shows that all profiles agree well at $`r\genfrac{}{}{0pt}{}{_>}{^{}}30h^1\mathrm{kpc}`$. This scale is about eight times smaller than the mean interparticle separation. Thus, despite the very different resolutions, time steps, and numerical techniques used for the simulations, convergence is observed at a scale much smaller than the mean interparticle separation, argued by Splinter et al. (1998) to be the smallest trustworthy scale. At smaller scales the profiles become noisier due to poorer particle statistics (see Fig. 13).
Nevertheless, there are systematic differences between the runs. The profiles in the two ART runs are identical within the errors, indicating convergence (we have run an additional simulation with time steps half as large as those in ART1 and found no difference in the density profiles). Among the AP<sup>3</sup>M runs, the profiles of AP<sup>3</sup>M1 and AP<sup>3</sup>M5 are closer to the density profiles of the ART halos than the rest. The AP<sup>3</sup>M2, AP<sup>3</sup>M3, and AP<sup>3</sup>M4 runs, despite their higher force resolution, exhibit lower densities in the halo cores, with AP<sup>3</sup>M3 and AP<sup>3</sup>M4 being the most deviant. These differences can be seen more clearly in Fig. 13, where we plot the cumulative number of particles (i.e., mass) within radius $`r`$ for the halos shown in Fig. 12. The differences between AP<sup>3</sup>M3 and AP<sup>3</sup>M4 and the rest of the runs are apparent out to radii containing $`1000`$ particles.
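Profiles and cumulative mass curves of this kind can be built with a few lines of code; the sketch below assumes a known halo center and ignores periodic wrapping, so it is only a schematic version of what is plotted in Figs. 12 and 13.

```python
import numpy as np

def radial_profile(pos, center, r_min, r_max, nbins=20):
    """Spherically averaged number density and cumulative particle count
    in logarithmic radial bins around `center`; pos is an (N, 3) array."""
    r = np.sqrt(((pos - center) ** 2).sum(axis=1))
    edges = np.logspace(np.log10(r_min), np.log10(r_max), nbins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_vol = 4.0 / 3.0 * np.pi * np.diff(edges ** 3)
    density = counts / shell_vol            # multiply by m_p for a mass density
    cumulative = np.cumsum(counts)          # N(<r) at the outer edge of each bin
    r_mid = np.sqrt(edges[:-1] * edges[1:]) # geometric bin centers
    return r_mid, density, cumulative
```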
These results can be interpreted by examining the trend of the central density with the ratio of the number of time steps to the dynamic range of the simulations (see Table 1), shown in Table 4. The ratio is smaller when either the number of steps is smaller or the force resolution is higher. Table 4 shows that agreement in the density profiles is observed when this ratio is $`\genfrac{}{}{0pt}{}{_>}{^{}}2`$. This suggests that for a fixed number of time steps there is a limit on the usable force resolution; conversely, for a given force resolution there is a lower limit on the required number of time steps. The exact requirements probably depend on the code type and the integration scheme. For the AP<sup>3</sup>M code our results suggest that the ratio of the number of time steps to the dynamic range should be no less than one. It is interesting that the deviations in the density profiles are similar to, and are observed at the same scales as, the deviations in the DM correlation function (Fig. 8), suggesting that the correlation function is sensitive to the central density distribution of dark matter halos.
These results are indicative of the sensitivity of the central density distribution in halos to parameters of the numerical simulation. However, due to the limited mass resolution of our test runs, they do not shed light on the density profile debates. The profiles of the ART halos agree well with those of the AP<sup>3</sup>M halos, if the latter are simulated with a sufficiently large number of time steps. But debated differences are at scales of $`\genfrac{}{}{0pt}{}{_<}{^{}}0.01R_{\mathrm{vir}}`$, which are not resolved in these simulations. We are currently carrying out a more detailed, higher-resolution study to clarify the issue.
### 4.6 Shared Particles
We have argued above that particle trajectories may diverge in high-density regions due to their instability to small integration errors. However, despite the instability of its trajectory, we can expect a bound particle to stay bound to its parent halo. To test this, we compare the particle content of individual halos in different runs. We identify the counterpart halos in different runs as the systems that have the largest number of common (or shared) particles. This method is superior to coordinate-based methods because, due to the phase errors (see § 3.3), one does not expect to find the halos at exactly the same position in different runs (this is especially true for small groups of particles).
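A minimal sketch of such a matching is shown below; it is a brute-force illustration (a production version would build an inverse particle-to-halo index), and it assumes each halo is available as a set of particle IDs.

```python
def match_halos(halos_a, halos_b):
    """Match each halo in run A to the halo in run B sharing the most particle IDs.
    halos_a, halos_b: dicts mapping halo id -> set of particle IDs.
    Returns halo_id_a -> (halo_id_b, shared fraction of A's particles)."""
    matches = {}
    for ha, members_a in halos_a.items():
        best, best_shared = None, 0
        for hb, members_b in halos_b.items():
            shared = len(members_a & members_b)
            if shared > best_shared:
                best, best_shared = hb, shared
        if best is not None:
            matches[ha] = (best, best_shared / len(members_a))
    return matches
```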
Fig. 14 shows the percentage of mass found in particle groups that share a given fraction of particles. The figure shows that in all runs most halos have more than $`80\%`$ of their particles in common. The distributions peak at around 85-90% for all of the comparisons, except ART1 vs. ART2, which peaks at 90–95%. Even though a direct comparison of particle positions within halos is not a very useful way of comparing different runs, this result shows that the particle content of halos is rather similar.
### 4.7 Correlation of Velocity Dispersions
In Fig. 15 we compare the velocity dispersions of halos identified in different simulations by the FOF algorithm assuming a minimum particle number of 50 per halo. The figure shows that the velocity dispersions agree reasonably well with the overall scatter of $`50\mathrm{km}\mathrm{s}^1`$. The very few outliers are halos of very different masses identified as the same halo by the FOF algorithm. Small differences in particle distribution may result in the identification of a binary system in one simulation and only a single halo in another simulation, leading to the large differences in velocity dispersions.
The differences between velocity dispersions appear to be independent of halo mass, and although they reach $`50\%`$ for the lowest mass halos, there are no obvious systematic differences between different simulations.
## 5 Discussion and conclusions
We have presented results of a study of resolution effects in dissipationless simulations. As we noted in the introduction, an additional goal of this study was to compare simulations done using two different high-resolution $`N`$-body codes: the AP<sup>3</sup>M and the ART. Our results indicate that both codes produce very similar results at all scales resolved in the presented simulations, provided the force resolution and time step are such that convergence is reached within the code type. Our results also indicate that numerical effects may be complicated, arising from a combination of mass and force resolution and inaccuracies of the time integration scheme. The precise magnitude of the effects depends on the numerical parameters used and on the statistic or property of the particle distribution considered.
Particles in dissipationless cosmological simulations are supposed to represent elements of the dark matter distribution in phase space (unless particles are supposed to represent individual galaxies, which is virtually never the case in modern simulations). The smaller the particle number, the larger the volume associated with each of the particles. During the course of the evolution, according to Liouville’s theorem, the phase-space volume of each element should be preserved. Its shape, however, will be changing. Correspondingly, the Eulerian space volume of these elements may shrink (for particles in an overdense collapsing region of space) or expand (for particles in underdense regions). Regardless of its initial shape, each element can be stretched due to the anisotropic nature of the gravitational collapse in cosmological models.
Usually, none of these effects is modelled in cosmological $`N`$-body simulations. The gravitational field of each particle is assumed to be roughly isotropic and therefore the effects of the volume stretch cannot be modelled. These effects are not addressed in the present study, because they can only be studied by widely varying the number of particles at fixed force resolution and fixed time step (due to the phase errors discussed below, such a study should also be carried out using the same numerical code), while we have varied the force resolution and time step keeping the number of particles fixed. A larger number of particles corresponds to a smaller phase-space volume associated with each particle, and the symmetric particle approximation is then more accurate. Convergence studies of the halo density profiles indicate that these effects are small (at least at radii $`\genfrac{}{}{0pt}{}{_>}{^{}}`$0.02–0.05$`R_{\mathrm{vir}}`$). However, this does not mean that they cannot be important for other statistics.
Nevertheless, we can study other kinds of resolution effects. Normally, the softening of the interparticle gravitational force should be approximately equal to the spatial size of the phase-space volume element associated with each particle. If the softening is much smaller than this size, the volume elements will behave like particles and two-body scattering becomes possible. This was indeed observed in some of our high-resolution simulations. Such scattering contradicts the collisionless nature of the modelled dark matter and is thus undesirable.
For the mass resolution of our simulations ($`3.55\times 10^9h^1\mathrm{M}_{}`$), scattering was observed in simulations with uniform force resolution of $`\genfrac{}{}{0pt}{}{_<}{^{}}3h^1\mathrm{kpc}`$ and disappears for larger values of the resolution. While we find strong scattering for only $`1.4\%`$ of the particles, many more may have suffered weaker scattering events during the course of the simulation. Indeed, we find indirect evidence that scattering has a substantial effect on the particle distribution. In particular, we find that it may noticeably affect the small-scale amplitude of the 2-point correlation function of dark matter, the abundance and mass function of dark matter halos, and the central density distribution of halos. The effect is amplified strongly in simulations with larger time steps. This is because for larger time steps the scattering events severely violate energy conservation. The magnitude of the violation is very sensitive to the time step: the momentum gained by a particle during a single step of a scattering event is $`g\mathrm{\Delta }t`$, where $`g`$ is the acceleration. Therefore, for the same force resolution (which means that the same $`g`$ can be reached), the momentum gain is proportional to $`\mathrm{\Delta }t`$.
The 2-point correlation function of matter and the central density distribution in halos in our runs are affected by these effects at scales as large as 15–20 force softenings. The abundance of satellite halos in high-density regions appears to be affected as well. We think that these effects may be due to the following two phenomena. First, if the time step of the simulation is too large for a given force resolution, particle trajectories are not integrated accurately in the highest density regions, where the gradient of the potential is largest. In some sense, particles scatter off the central density peak, and may gain energy when integrated with a large time step (see the arguments above). The number of simulation time steps per deflection on a scale $`R`$ can be estimated as
$$N_{\mathrm{step}}=\frac{R}{v\mathrm{\Delta }t}\approx 7.51\times 10^{-4}RN_{\mathrm{step}}^{\mathrm{tot}}v_{100}^{-1},$$
(6)
where $`R`$ is in kpc, $`\mathrm{\Delta }t`$ and $`N_{\mathrm{step}}^{\mathrm{tot}}`$ are the time step and the total number of time steps of the simulation, and $`v_{100}`$ is the particle velocity in units of $`100\mathrm{km}\mathrm{s}^1`$. We have assumed an Einstein-de Sitter universe with a Hubble constant of $`H_0=50\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$. For high-velocity particles streaming through the centers of massive halos, there may be just a few time steps to integrate the part of the trajectory where strong changes in acceleration occur. As illustrated in Fig. 6, this may not be sufficient to ensure energy conservation and may result in an energy gain by the particles. This leads to artificial heating and lowers the central density because, having acquired energy, particles are less likely to enter the central region of the halo.
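As a rough numerical check of this estimate, the sketch below counts the constant time steps available during one crossing of a region of size $`R`$, under the assumption that $`\mathrm{\Delta }t`$ is the age of the universe divided by the total number of steps; the helper name and the rounded age are our own choices.

```python
def steps_per_deflection(R_kpc, v_kms, n_steps_total, t0_gyr=13.0):
    """Number of constant time steps available while a particle crosses a region
    of size R at speed v, with dt ~ t0 / n_steps_total (EdS age t0 for H0 = 50)."""
    kpc_per_gyr = 1.0227 * v_kms          # 1 km/s corresponds to ~1.02 kpc/Gyr
    crossing_time_gyr = R_kpc / kpc_per_gyr
    return crossing_time_gyr * n_steps_total / t0_gyr

# e.g. steps_per_deflection(1.0, 100.0, 8000) ~ 6 steps across a 1 kpc region.
```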
The second phenomenon is due to the “graininess” of the potential. Particles in the high-density regions may feel the discreteness of the density field and suffer scattering. We do find evidence for scattering in the high-density regions in our simulations (see § 3.4). Here, again, the effect may be amplified strongly by the incorrect integration of such scattering. Indeed, run AP<sup>3</sup>M1 ($`N_{\mathrm{step}}^{\mathrm{tot}}=8000`$) performs much better than the run AP<sup>3</sup>M4 ($`N_{\mathrm{step}}^{\mathrm{tot}}=2000`$), although both runs have the same mass and force resolution.
The two effects described above may operate in combination, although the first effect does not depend on the mass resolution, while the second should be eliminated at higher mass resolution. Both lead to artificial heating of particles, thereby lowering the central density of halos and possibly ejecting particles altogether in some cases. Visual comparisons of halos in the AP<sup>3</sup>M3 run (the run which performed the worst) and the AP<sup>3</sup>M5 run show that AP<sup>3</sup>M3 halos appear “puffier” and more extended than the same halos in the AP<sup>3</sup>M5 or ART runs. Puffier halos may be destroyed more easily by tides in high-density regions, which may explain some of the differences seen between the mass functions of halos in different runs.
Our results show that in constant time step high-resolution simulations the total number of time steps must be rather high to ensure good energy conservation. This requirement can become computationally prohibitive in simulations that follow large numbers of particles. In the case of the AP<sup>3</sup>M code, it would probably be preferable to use the version in the publicly available code “Hydra” (Couchman et al. 1995) that uses an adaptively varied time step.
The conditions for scattering, discussed in § 3.4, occur if the force softening is smaller than the scale $`s`$, which in units of mean interparticle separation is
$$\stackrel{~}{s}\simeq 1.209\times 10^{-3}\mathrm{\Omega }_0^{1/3}\left(\frac{v}{100\mathrm{km}\mathrm{s}^{-1}}\right)^{-2}\left(\frac{m_p}{10^8h^{-1}\mathrm{M}_{\odot }}\right)^{2/3},$$
(7)
and is considerably smaller than the local interparticle separation: $`d_{\mathrm{loc}}=(1+\delta )^{-1/3}`$ (in units of the mean interparticle separation; $`\delta `$ is the local particle overdensity). For our simulations $`s3v_{100}^{-2}h^1\mathrm{kpc}`$, where $`v_{100}`$ is the particle velocity in units of $`100\mathrm{km}\mathrm{s}^1`$. This means that the condition $`\stackrel{~}{s}\ll d_{\mathrm{loc}}`$ is satisfied everywhere but in the highest density regions: $`\delta \genfrac{}{}{0pt}{}{_>}{^{}}10^4`$. The conditions for strong scattering occur for slow moving ($`\genfrac{}{}{0pt}{}{_<}{^{}}100\mathrm{km}\mathrm{s}^1`$) particles in the AP<sup>3</sup>M runs 1–4. Such slow moving particles are likely to be present in the low-density regions and in small-mass halos of velocity dispersion $`\sigma _\mathrm{v}\genfrac{}{}{0pt}{}{_<}{^{}}`$100–200$`\mathrm{km}\mathrm{s}^1`$. This may explain our result that halos of mass $`\genfrac{}{}{0pt}{}{_<}{^{}}`$(0.5–1)$`\times 10^{12}h^1\mathrm{M}_{}`$ ($`\sigma _\mathrm{v}\genfrac{}{}{0pt}{}{_<}{^{}}`$100–150$`\mathrm{km}\mathrm{s}^1`$, see Fig. 10) appear to be affected by scattering.
One may question the relevance of these results, given the small size of the simulations and the extremely high force resolution. Note, however, that our results would be applicable (save for the presence of very massive clusters) to any $`256^3`$-particle simulation in which a force resolution considerably smaller than the scale $`\stackrel{~}{s}`$ is adopted. Simulations with particle mass and dynamic range not very far from ours have already been done. For example, all $`256^3`$-particle simulations of the $`239.5h^1\mathrm{Mpc}`$ box presented by Jenkins et al. (1998) satisfy the condition $`ϵ<\stackrel{~}{s}`$ (where $`\stackrel{~}{s}`$ is estimated using the above equation for $`v_{100}\genfrac{}{}{0pt}{}{_<}{^{}}1.5`$) and have been run using $`<1600`$ time steps. The parameters of the recent “Hubble volume” simulation (Colberg et al. 1998) also satisfy the conditions for strong scattering: $`s1.9h^1\mathrm{Mpc}`$, while the force resolution of the simulation is $`ϵ=100h^1\mathrm{kpc}`$. The particle mass of that simulation is $`2\times 10^{12}h^1\mathrm{M}_{}`$, which means that individual particles represent galaxies rather than phase-space elements. Galaxies form a collisional system, so the presence of scattering may be considered a correct model of the evolution of galaxy clustering. Moreover, the mean interparticle separation in these simulations is 2–3$`h^1\mathrm{Mpc}`$ and thus the conditions for scattering may only occur in the underdense regions.
A high-resolution $`256^3`$-particle run was also done recently using the ART code (see, for example, Colín et al. 1999). However, for this simulation ($`m_p=1.1\times 10^9h^1\mathrm{M}_{}`$; $`\mathrm{\Omega }_0=0.3`$) the scale of strong scattering is $`s0.94v_{100}^{-2}h^1\mathrm{kpc}`$, while the peak resolution is $`4h^1\mathrm{kpc}`$. The time step for particles at the refinement level of this resolution corresponds to an effective number of time steps of $`41,000`$. Therefore, for this simulation the strong scattering condition is not satisfied. Moreover, a refinement level $`L`$ is introduced in these simulations only if the local overdensity is higher than $`\delta =5\times 2^{3(L+1)}`$, or for the highest resolution level $`L=6`$: $`\delta 10^7`$. For these overdensities, the local interparticle separation is $`1h^1\mathrm{kpc}`$, and two-body interactions are thus unlikely.
To summarize, scattering can be precluded if the choice of force resolution is guided by the scale $`s`$, which, in turn, depends on the mass resolution (particle mass). This conclusion may seem similar to that of Splinter et al. (1998), who concluded that the force resolution should not be smaller than the mean interparticle separation. It is, however, quite different in practice: eq. (7) shows that for our box size $`\stackrel{~}{s}1`$ in units of the mean interparticle separation only for $`m_p3\times 10^{12}h^1\mathrm{M}_{}`$. For our mass resolution, a force softening as small as 5–10$`h^1\mathrm{kpc}`$ is justified. This is 25–50 times below the mean interparticle separation.
We think that the conclusion of Splinter et al. is (at least in part) due to the interpretation of poor cross-correlation between different simulations on small scales as erroneous evolution in high-resolution runs. Our analysis, presented in § 3, shows that the poor cross-correlation is due to phase errors whose major source is the cumulative error arising from inaccuracies of the time integration. The trajectories of particles become chaotic in high-density regions and small differences in time integration errors tend to grow quickly.
For this reason it may prove to be very difficult to get rid of this effect by improving the time integration. Therefore, one should keep these errors in mind if a phase sensitive statistic is analyzed. Luckily, most of the commonly used statistics are phase-insensitive and are not affected by such errors. Moreover, the errors are confined to the small-scale high-density regions, and no significant phase errors are present in our simulations if the density field is smoothed on a scale $`\genfrac{}{}{0pt}{}{_>}{^{}}1h^1\mathrm{Mpc}`$.
While this is clearly still an error, it has nothing to do with the mass or force resolution and would be present even if both were perfect. This point is clearly demonstrated by the fact that simulations run using two different implementations of the PM code correlate perfectly within the code type but cross-correlate rather poorly when cross-code comparisons are made (see § 3.3). Note that in all of these PM runs, the force resolution is approximately equal to the mean interparticle separation.
The main conclusion of our study is that care must be taken in the choice of force resolution for simulations. If a code with spatially uniform force resolution is used, conditions for strong two-body scattering may exist if the force resolution is smaller than the scale $`s`$ discussed above. The presence of scattering itself may not be important (albeit undesirable); the relaxation time for systems, for example, may be much longer than the Hubble time (e.g., Hernquist & Barnes 1990; Huang, Dubinski & Carlberg 1993). Its effects, however, may be greatly amplified if the time step of the simulation is not sufficiently small. In this case, severe violation of energy conservation occurs during each scattering which may lead to artificial injection of energy into the system.
## Acknowledgements
AVK and AAK are grateful to the Astrophysikalisches Institut Potsdam (AIP), where this project was initiated, for the hospitality during their visit. We thank the referee for useful comments. This work was funded by NSF and NASA grants to NMSU. SG acknowledges support from the Deutsche Akademie der Naturforscher Leopoldina with means of the Bundesministerium für Bildung und Forschung grant LPD 1996. Our collaboration has been supported by the NATO grant CRG 972148.
## 1 Introduction
The field of chemical evolution modeling of the Galaxy has experienced in recent years a phase of high activity and important achievements. There are, however, several open questions which still need to be answered. In this review I will try to summarize what have been the most important achievements and what are some of the most urgent questions to be answered.
The reason for the recent increase of activity and success of chemical evolution models is probably twofold. First of all, on the observational side, the last decade has witnessed a tremendous improvement in the quality and in the amount of data on the major Galactic features, like the chemical abundances and abundance ratios in stellar and gaseous objects of various types, the density distributions of gas and stars in different Galactic regions, etc.: fundamental data which provide stringent constraints on evolution models. In addition, on the theoretical side there has also been a recent blooming of new studies, with several new groups working on stellar nucleosynthesis to derive reasonable yields for stars of all masses and of several initial metallicities, taking into account as much as possible the large uncertainties affecting the latest evolutionary phases. If we consider that for almost two decades the only usable set of yields for low and intermediate mass stars was that provided by Renzini & Voli (1981), while now we can choose among those by Forestini & Charbonnel (1997), van den Hoek & Groenewegen (1997), Boothroyd & Sackman (1998) and Marigo (1998 and this volume), all published in the last two years, it is apparent that we have entered an era of great interest in stellar nucleosynthesis studies.
These circumstances have favoured the appearance in the literature of an increasing number of good chemical evolution models computed by an increasing number of people. Nowadays there are several models able to satisfactorily reproduce all the major observational constraints, not only in the solar neighbourhood but also in the whole Galaxy. Only in the last few months one could count at least four different groups who have presented models in fairly good agreement with the data: Boissier & Prantzos (1999, hereinafter BP), Chang et al. (1999), Chiappini et al. (1999, CMP) and Portinari & Chiosi (1999, PC).
## 2 Major Results
Before analysing the various results, it is important to recall that standard chemical evolution models follow the large-scale, long-term phenomena and can therefore reproduce only the average trends, not the cloud-to-cloud, star-to-star fluctuations. To put it in Steve Shore’s words: They are a way to study the climate, not the weather, in galaxies. This can be considered a limitation of the models, but is the obvious price to pay to avoid introducing too many free parameters that would make it much more difficult to infer the overall evolutionary scenario with sufficient reliability. As well known, we have not yet been able to find a unique scenario for the most probable evolution of the Milky Way (see e.g. Tosi 1988a), but we are converging toward a fairly limited range of possibilities for the involved parameters (initial mass function, IMF, star formation rate, SFR, gas flows in and out of the Galaxy).
Thanks to the improvements both on the observational and on the theoretical sides, good chemical evolution models of the Milky Way nowadays can reproduce the following list of observed features:
* Current distribution with Galactocentric distance of the SFR (e.g. as compiled by Lacey & Fall 1985);
* current distribution with Galactocentric distance of the gas density (see e.g. Tosi, 1996, BP and references therein);
* current distribution with Galactocentric distance of the star density (see e.g. Tosi, 1996, BP and references therein);
* current distribution with Galactocentric distance of element abundances as derived from HII regions and from B-stars (e.g. Shaver et al. 1983, Smartt & Rolleston 1997);
* distribution with Galactocentric distance of element abundances at slightly older epochs, as derived from PNe II (e.g. Pasquali & Perinotto 1993, Maciel & Chiappini 1994, Maciel & Köppen 1994);
* age-metallicity relation not only in the solar neighbourhood but also at other distances from the center (e.g. Edvardsson et al. 1993);
* metallicity distribution of G-dwarfs in the solar neighbourhood (e.g. Rocha-Pinto & Maciel 1996);
* local Present-Day-Mass-Function (PDMF, e.g. Scalo 1986, Kroupa et al. 1993);
* relative abundance ratios (e.g. \[O/Fe\] vs \[Fe/H\]) in disk and halo stars (e.g. Barbuy 1988, Edvardsson et al. 1993, Israelian et al. this volume).
As mentioned above, the most recent examples of how good models can fit the above list of observed Galactic features are given by BP, Chang et al. (1999), CMP and PC (see also in this book the contributions by Chiappini, by Portinari and by Prantzos).
If one bears in mind that the free parameters involved in the computation of standard chemical evolution models are essentially the IMF, the law for the SFR, and those for gas flows in and out of the Galaxy, it is clear that the number of observational constraints is finally sufficient to put significant limits on the parameters. In fact, if we compare the results of all the models in better agreement with the largest set of empirical data, we see that they roughly agree on the selection of the values for the major parameters. The conclusions that can be drawn from such comparison are:
$``$ IMF: after several sophisticated attempts (e.g. CMP) to test if a variable IMF could better fit the data, it is found, instead, that a roughly constant IMF is most likely, even if the exact slopes and mass ends are still subject of debate.
$``$ SFR: it cannot be simply and linearly dependent only on the gas density; a dependence on the Galactocentric distance is necessary, either implicit (e.g. through the total mass density as in Tosi 1988a or in Matteucci & François 1989) or explicit (e.g. as in BP). We do not know, however, what its actual behaviour is (see e.g. Portinari, this volume), or even whether it should be considered fairly continuous or significantly intermittent, as recently suggested by Rocha-Pinto et al. (1999).
$``$ gas flows: all the models in better agreement with the data invoke no or negligible galactic winds and a substantial amount of infall of metal poor gas (not necessarily primordial, e.g. Tosi 1988b, Matteucci & François 1989), and there is increasing observational evidence for this phenomenon (see also Burton, this volume). We have no empirical information, however, on the spatial and temporal distribution of the accretion process: uniform or not ? continuous or occurring in one, two or several episodes ? (e.g. Beers & Sommer-Larsen 1995, Chiappini et al. 1997, Chang et al. 1999).
## 3 Open Questions
It is apparent from the summary presented above that, in spite of the wealth of good data and models described in the previous sections, the scenario of the Milky Way evolution is not completely clear. There are still several issues we don’t understand, including some of conspicuous importance. Among these, I consider of special interest the evolution of the abundance gradients and that of CNO isotopes.
### 3.1 Abundance Gradients
Thanks to the recent results by Smartt & Rolleston (1997) we finally know that young objects (HII regions and B-stars) all show the same metallicity distribution with Galactocentric distance and a fairly steep negative gradient. All the models in better agreement with the Galaxy constraints are able to reproduce this distribution (see Tosi 1996, and Chiappini, Portinari and Prantzos in this volume).
Slightly older objects, such as PNe of type II, whose progenitors are on average 2 Gyr old, show similar abundances and possibly flatter gradients (e.g. Maciel & Köppen 1994). Good models of Galaxy evolution reproduce well not only the present abundance distribution, but also the distributions derived from PNeII observations. For instance, Fig.1 shows the predictions of the best of the type 1 models in Tosi’s (1988a) set for the He, N and O abundance distributions with Galactic radius 2 Gyr ago. The adopted stellar yields are those of Marigo (1998 and this volume) for low and intermediate mass stars and of Limongi et al. (this volume) for massive stars. The data points correspond to the PNeII measures by Pasquali & Perinotto (1993) and the open boxes sketch the distribution of the values derived by Maciel & Chiappini (1994) and Maciel & Köppen (1994). The data sets are in perfect agreement with each other and the model predictions fit well their average distributions.
When we consider earlier epochs, the predictions from different models diverge, despite the common assumption that the Galaxy is initially formed of primordial gas. For instance, the three models which are presented in this volume by Chiappini, Portinari and Prantzos, and that are in fairly good agreement with all the observational constraints, predict the gradient evolutions schematically described in Fig.2 (see BP, CMP and PC for more details). The initial distribution of oxygen with galactic radius in the left panel is totally flat, becomes initially slightly positive, then turns to negative and steepens with time, reaching at the present epoch the observed slope of -0.08 dex/kpc; vice versa, the gradient at 1 Gyr in the central panel is negative and quite steep and then slowly flattens with time, particularly in the inner galactic regions, reaching finally the observed slope at the present time; the same trend occurs in the right panel, but with different absolute abundances. If one compares (e.g. Tosi 1996) all the models able to reproduce the observed Galactic features, it is easy to understand that they present all the possible varieties of gradient evolution: from slopes initially positive becoming first flat and then increasingly negative, to slopes initially flat and then becoming increasingly negative, to slopes initially negative and then becoming increasingly flat.
The reason for such a variety of gradient evolutions is the strong dependence of the radial slope on the radial variations of the ratio between ISM enrichment from stars (i.e. SFR) and ISM dilution from metal poor gas (i.e. initial conditions and/or infall of metal poor gas). Regions with higher SFR have larger enrichment, but can remain relatively metal poor if they contain or accrete large amounts of metal poor gas. It is then sufficient to have different initial conditions or different assumptions on the temporal behaviours of the SFR and of the infall rate to obtain quite different abundance gradients at the various epochs.
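This sensitivity can be made concrete with a deliberately oversimplified sketch. The Python snippet below evolves a one-zone model with instantaneous recycling, a linear Schmidt law and exponential infall of metal-free gas for an "inner" and an "outer" zone; all the efficiencies, time-scales, yields and initial gas masses are arbitrary illustrative numbers and do not correspond to any of the published models discussed here. The point is only that the sign and size of the abundance difference between the two zones at a given epoch depend entirely on these assumptions.

```python
import numpy as np

# Toy one-zone chemical evolution with instantaneous recycling:
#   dM_g/dt = -(1 - R) * psi + f(t)
#   dZ/dt   = [ y * (1 - R) * psi + (Z_f - Z) * f ] / M_g
# psi = nu * M_g (Schmidt law), f(t) = A * exp(-t / tau) (infall of gas
# with metallicity Z_f).  All parameter values are illustrative only.
def evolve(nu, tau, A=1.0, t_max=13.0, dt=1e-3, R=0.3, y=0.02, Z_f=0.0):
    n = int(t_max / dt)
    M_g, Z, t = 0.05, 0.0, 0.0        # small initial gas mass, primordial gas
    out_t, out_Z = [], []
    for _ in range(n):
        psi = nu * M_g
        f = A * np.exp(-t / tau)
        dM_g = (-(1.0 - R) * psi + f) * dt
        dZ = ((y * (1.0 - R) * psi + (Z_f - Z) * f) / M_g) * dt
        M_g += dM_g
        Z += dZ
        t += dt
        out_t.append(t)
        out_Z.append(Z)
    return np.array(out_t), np.array(out_Z)

# "Inner" and "outer" zones: two choices of star-formation efficiency and
# infall time-scale bracketing the scenarios listed in the text below.
t, Z_in = evolve(nu=1.0, tau=1.0)      # fast infall, efficient star formation
_, Z_out = evolve(nu=0.3, tau=8.0)     # slow infall, inefficient star formation

for age in (1.0, 5.0, 13.0):
    i = min(np.searchsorted(t, age), len(t) - 1)
    print(f"t = {age:4.1f} Gyr   log(Z_in/Z_out) = "
          f"{np.log10(Z_in[i] / Z_out[i]):+.2f}")
```

Swapping the efficiencies or the infall time-scales between the two zones reverses or flattens the difference, which is exactly the ambiguity described above.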
The following few examples of possible scenarios give an idea of the sensitivity of the gradient evolution to the boundary conditions:
* If the efficiency in the chemical enrichment of the inner Galactic regions at early epochs is low (for instance because the SFR is low and/or there is a high amount of primordial gas), then the early radial distribution of the heavy elements is flat. And to reach the observed present slope it has to become negative and steepen with time.
* If, instead, the enrichment efficiency in the inner regions at early epochs is high (for high SFR or low gas mass), then the early gradient is negative and steep. And to reach the present slope it has to flatten with time.
* If at late epochs the accretion (infall) of metal poor gas is stronger in the outer than in the inner regions, then the gradient tends to steepen with time because of the increasing dilution for increasing galactocentric distance.
* If at late epochs the inner regions exhaust their gas, then the metallicity saturates there and the inner gradient becomes increasingly flat with time.
All these scenarios are plausible: how can we understand which are the right ones ? If we knew the right history of the abundance gradients we would also know what is the most likely evolution of the Galactic disk. Unfortunately, despite their accuracy, the observational data already available on open clusters and on field stars are not yet sufficient to clearly distinguish whether the abundance gradients were steeper or flatter at early epochs. Open clusters are probably the best candidates to provide such information, thanks to their visibility at large distances and to the relative ease to derive their age and metallicity, but as described by Bragaglia (this volume, and references therein) the number of clusters treated homogeneously is still too small.
### 3.2 Evolution of CNO isotopes
The CNO isotopes are important because they are stable, widespread and widely studied, and because they provide the seeds for the production of heavier elements. In particular, the stellar nucleosynthesis of the carbon and oxygen isotopes is examined in detail in most of the most recent studies. Nonetheless, it is not completely clear yet how they should behave during the Galaxy evolution. The problem was already pointed out twenty years ago by Penzias (1980), who noticed that the observed decrease of the local <sup>18</sup>O/<sup>17</sup>O from the solar to the local ISM value and the corresponding increase of <sup>16</sup>O/<sup>18</sup>O were difficult to interpret. In fact, chemical evolution models predicted (Tosi 1982) <sup>18</sup>O/<sup>17</sup>O to remain roughly constant in the last 4.5 Gyr and <sup>16</sup>O/<sup>18</sup>O to steadily decrease. Those predictions were based on simple arguments about the relative enrichment of primary and secondary elements produced by stars of different masses, and have been confirmed by subsequent works based on nucleosynthesis studies of solar metallicity stars (e.g. Prantzos et al. 1996).
These results for the carbon and oxygen isotopic ratios are represented by the solid line in Fig.3. The left hand panels show the time behaviour of the isotopic ratio in the solar neighbourhood as predicted by models and as observed in the sun and in the local ISM, which are assumed to be representative of the average local ratios 4.5 Gyr ago and now, respectively. The right hand panels show the present distribution with Galactocentric distance as predicted by the same models and as derived from radio observations of molecular clouds. The solid line corresponds to the same model presented in Fig.1 (Tosi-1), assuming the yields for solar initial metallicity computed by Boothroyd & Sackman (1998), by Forestini & Charbonnel (1997) and by Woosley & Weaver (1995) for low, intermediate and high mass stars, respectively. Qualitatively similar results were obtained by Prantzos et al. (1996) adopting the solar yields by Marigo et al. (1996), Renzini & Voli (1981) and Woosley & Weaver (1995). It is apparent that while the predictions for <sup>12</sup>C/<sup>13</sup>C and <sup>16</sup>O/<sup>17</sup>O are in fair agreement with the data, the time behaviour of the oxygen isotopic ratios involving <sup>18</sup>O is inconsistent with them. There have been several speculations on how this impasse could be overcome, with suggestions that either the theory or the data or both might be wrong or misinterpreted (see e.g. Prantzos et al. 1996, Tosi 1996, Wielen & Wilson 1998), but no solution has been found yet.
One possibility is that it is not correct to adopt solar yields also for the earlier epochs, when stars were certainly metal poorer. Now that stellar yields are available also for lower metallicities, we expect to find an improvement in the comparison between model predictions and observed ratios. Unfortunately, this is definitely not the case, as clearly shown by the dashed and dash-dotted lines in Fig.3. The dash-dotted curve represents the same model as the solid curve, with the same sources for the yields, but adopting the low metallicity yields at earlier epochs and the solar ones only when the ISM reaches Z=0.02. It is apparent that, rather than improving the agreement with the data, this curve worsens the fit, both for the local evolution and for the current distribution with Galactocentric distance. This result is strongly dependent on the adopted yields and we may hope that different nucleosynthesis studies would provide more consistent predictions, but so far no set of stellar yields is able to reproduce all the observed distributions shown. Some of the available yields do improve the results on one isotopic ratio, but worsen the results on other ratios, as exemplified by the dashed lines, showing the predictions of the same model when the metallicity-dependent yields of Marigo (this volume) are adopted for low and intermediate mass stars and those of Limongi et al. (this volume) for massive stars: the data on the carbon isotopic ratio are now well reproduced, but the predicted oxygen ratios are definitely inconsistent with the data.
I will then conclude this short description of the state of the art in Galactic chemical evolution models by emphasizing that, despite the great work that has been done by observers and theoreticians to improve the number and the quality of the observational and theoretical constraints, further efforts on both sides are needed to shed light on several unclear issues. In particular, it would be important to derive accurate chemical abundances in stars and clusters of different ages and Galactic locations and to study in better detail the stellar nucleosynthesis in stars of all masses and initial metallicities.
I warmly thank S.Chieffi, M.Limongi, P.Marigo and O.Straniero for providing their yields in advance of publication and S.Sandrelli for help. Conversations with them and with C.Chiappini, F.Matteucci and N.Prantzos were very fruitful. |
no-problem/9912/quant-ph9912119.html | ar5iv | text | # Nuclear Teleportation
## Introduction
It was in the middle of the twenties that an analysis of the transportation of soya beans on the Chinese Eastern Railway was carried out. It appeared that counter transportation constituted a large share of the total cargo traffic. An original procedure for processing the cargoes was then invented: in a number of cases it was possible to deliver bean lots to recipients from the nearest stations, where at that time there was a sufficient amount of beans of the corresponding category, intended, though, to be sent to other, more remote points. The economy of rolling stock and other advantages for the railway were obvious. History fails to mention how this innovation ended. Probably the complicated events on the CER at the beginning of the thirties put an end to the promising experiment. Nevertheless, this was perhaps the first attempt to realize the supertransportation of dry substances, or particulate solids.
The process of teleportation (the commonly accepted term for supertransportation) is, according to the usual understanding, reduced to moving through space in such a way that the object to be transported disappears at one point of space and reappears at exactly the same time at some other point. It is well understood that it is not necessary to move through space the matter the object is composed of. It is enough to extract exact information about the inner properties of the object, then transmit this information to a predetermined place, and use it afterwards to reconstruct the initial object from whatever material comes to hand at the point of destination. Thus the teleportation results in the disappearance of the object with its initial properties at the initial place and the reappearance of an identical object in another place. Without the disappearance it would not be teleportation, but merely a reproduction, i. e. the creation of a new identical specimen, or a copy of the object. Let us look at how physicists cope with this problem.
## Action-at-a-distance (teleporting information?)
In 1935 Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen (EPR) devised a gedanken experiment to expose what they thought was a defect in quantum mechanics (QM) . This experiment has become known as the EPR paradox, and the essence of the paradox is as follows. There are two particles that interacted with each other for some time and constituted a single system. Within the framework of QM that system is described by a certain wave function. When the interaction of the particles is finished and they have flown far away from each other, the two particles are still described by the same wave function as before. However, the individual states of each separate particle are completely unknown; moreover, definite individual properties do not exist in principle, as the postulates of quantum mechanics dictate. It is only after one of the particles is registered by a particle-detection system that the states come into existence for both particles. Furthermore, these states are generated instantly and simultaneously regardless of the distance between the particles at that moment. This scheme is sometimes regarded as teleportation of information at a speed higher than that of light. The real (not only ”gedanken”) experiments on teleportation of information, in the sense of the EPR effect, or ”spooky action at a distance”, as A. Einstein called it, were carried out only 30-35 years later, in the seventies and eighties . Experimenters, however, managed to achieve full and definite success only with photons (quanta of visible light), though experiments with atoms and protons (nuclei of hydrogen) were also performed . For the case of photons, the experiments were carried out for various distances between the members of the EPR pairs at the moment of registration. The EPR correlation between the complementary photons was shown to survive up to distances of more than ten kilometers between the photons . In the case of protons, the experiment was carried out only for much smaller distances (of about a few centimeters) and the condition of so-called causal separation, $`\mathrm{\Delta }x>c\mathrm{\Delta }t`$, was not met. Thus, it was not fully convincing, as has been recognized by the authors of the work themselves.
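The correlations at the heart of the EPR argument are easy to make concrete for the simplest case of two spin-1/2 particles in a singlet state. The short Python sketch below is a textbook two-qubit illustration, not a simulation of any of the photon or proton experiments mentioned above; it only shows that each individual measurement outcome is completely random, while the outcomes for the two particles are always anticorrelated when measured along the same axis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two-particle spin-singlet state |psi> = (|01> - |10>)/sqrt(2) in the
# basis {|00>, |01>, |10>, |11>} (0 = spin up, 1 = spin down).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)

def measure_pair(state):
    """Sample a joint z-basis measurement of both spins, returning (s1, s2)."""
    probs = np.abs(state) ** 2
    outcome = rng.choice(4, p=probs)    # index of the basis state |s1 s2>
    s1, s2 = divmod(outcome, 2)
    return s1, s2

results = [measure_pair(psi) for _ in range(10000)]
s1 = np.array([r[0] for r in results])
s2 = np.array([r[1] for r in results])

print("fraction of spin-up results for particle 1:", s1.mean())
print("fraction of anticorrelated pairs          :", (s1 != s2).mean())
```

The first number fluctuates around 0.5 (no information can be read off locally), while the second is exactly 1: once particle 1 is registered, the state of particle 2 is fixed.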
## Teleporting photon-quantum state (or the light quantum itself?)
The next step that suggested itself was not merely ”action-at-a-distance”, but the teleportation of at least a quantum state from one quantum object to another. In spite of the successful experiments with the pure EPR effect, it was thought until recently that even this kind of teleportation is at best a long way in the future, if possible at all. At first sight it seems that the Heisenberg uncertainty principle forbids the first necessary step of the teleportation procedure: the extraction of complete information about the inner properties of the quantum object. This is because of the impossibility of obtaining simultaneously the exact values of so-called complementary variables of a quantum microscopic object (e. g., spatial coordinates and momenta). Nevertheless, in 1993, a group of physicists (C. Bennett and his colleagues) managed to get round this difficulty . They showed that full quantum information is not necessary for the process of transferring quantum states from one object to another at an arbitrarily large distance from each other. Besides, they proposed that a so-called EPR channel of communication has to be created on the basis of an EPR pair of two quantum objects (let it be photons B and C, shown in FIG. 1).
After they have interacted so as to form a single system that decays afterwards, the photon B is directed to a ”point of departure”, where it meets A in a device (a registration system) arranged so as to ”catch” only those events in which B appears in a state that leaves no choice to its ”EPR mate” but to take the state A had initially – before the interaction with B in the detector at the ”point of departure”. This experimental technique is very delicate but well known to those skilled in the EPR art. The conservation laws of general physics are the basis of the procedure realizing a system with such a selective sensitivity. The result of all these manipulations is that particle C gets something from A. It is only the quantum state. Unfortunately, not a soya bean, but all the same it is something. What is important from the point of view of QM is the disappearance of A at the place denoted in FIG. 1 as the ”Zone of scanning” (ZS). That is, the interaction of the B and A photons destroys the photon A, in the sense that neither of the two photons outgoing from the ZS has the definite properties of A. They constitute a new EPR pair of photons, which only as a whole has a definite quantum state; the individual components of the pair are deprived of these properties. Thus, the photon A disappears at the ZS. At exactly the same moment the photon C obtains the properties A had in the beginning. Once this has happened, in view of the principle of identity of elementary particles, we can say that A, disappearing at the ZS, reappears at another place, i. e., the teleportation is accomplished. This process has several paradoxical features. In spite of the absence of contact between the objects (particles, photons) A and C, A manages to pass its properties to C. It may be arranged in such a way that the distance from A to C is large enough to prevent any exchange of signals between A and C. And last, but not least of interest, in contrast to the transportation of ordinary material cargo, where a delivery vehicle first visits the sender to collect the cargo, in the case of cargo as subtle as quantum properties it is delivered in a backward fashion. Here the photon B plays the role of the delivery vehicle, and we can see that B first visits (interacts with) the recipient (photon C) and only after that does it travel to the sender (A) for the cargo.
Finally, to reconstruct the initial object completely, it is necessary to fix the moment in time when the interaction of A and B occurred (the moment of the arrival of the ”vehicle” at the departure ”station” after it has visited the recipient), and to carry out the required experimental data processing in due manner. The task of recording the moment of the (A-B) interaction and using it in the data analysis together with the information transmitted by the quantum EPR channel requires one more channel of communication, an ordinary or classical transmission line. Having received the information that A and B have formed a new EPR pair (via a classical telecommunication line), an observer at the point of destination may be sure that the properties of C are identical to those of A before the teleportation.
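The logic of the original Bennett et al. scheme (a Bell-state measurement on the pair A, B, a short classical message, and a conditional correction on C) can be summarized in a few lines of code. In the experiments described above only one Bell-state outcome is selected, so that no correction is needed; the general single-qubit protocol with all four outcomes looks schematically as follows, with arbitrary illustrative amplitudes a and b and ideal states and measurements.

```python
import numpy as np

# State to be teleported (arbitrary normalised amplitudes).
a, b = 0.6, 0.8j
psi_A = np.array([a, b])

# EPR pair (B, C) in the Bell state (|00> + |11>)/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)

# Three-qubit state with ordering (A, B, C).
state = np.kron(psi_A, bell).reshape(2, 2, 2)

# The four Bell states of (A, B) and the Pauli correction the receiver
# applies to C for each measurement outcome.
s = 1.0 / np.sqrt(2.0)
bell_states = {
    "Phi+": np.array([[s, 0.0], [0.0, s]]),
    "Phi-": np.array([[s, 0.0], [0.0, -s]]),
    "Psi+": np.array([[0.0, s], [s, 0.0]]),
    "Psi-": np.array([[0.0, s], [-s, 0.0]]),
}
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
corrections = {"Phi+": I, "Phi-": Z, "Psi+": X, "Psi-": Z @ X}

for name, bs in bell_states.items():
    # Project qubits (A, B) onto this Bell state, leaving the state of C.
    c_state = np.einsum("ab,abc->c", bs.conj(), state)
    prob = np.vdot(c_state, c_state).real
    c_state = corrections[name] @ (c_state / np.sqrt(prob))
    fidelity = abs(np.vdot(psi_A, c_state)) ** 2
    print(f"{name}: probability {prob:.2f}, fidelity after correction {fidelity:.3f}")
```

Each of the four outcomes occurs with probability 1/4, and after the corresponding correction the state of C reproduces the original state of A with unit fidelity.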
The new idea was immediately recognized as extremely important and a few groups of experimenters set forth concurrently to implement it. Nevertheless, it took more than four years to overcome all technology obstacles in the way to realize the project . This is because every experiment in this field, being a record by itself, is always one step farther beyond the limits of experimental state of the art achieved before.
## Start with protons
An analysis of the problem carried out by the authors of the present experimental project, which is now at the preparation stage, leads them to the conclusion that the experimental setups and instruments developed for usual, though most modern, nuclear-physics studies (high-current accelerators of protons and heavier nuclei, liquid and polarized hydrogen targets, and the multi-parameter, near $`2\pi `$-geometry – i. e. semi-spherical aperture – particle-detection facility named ”Fobos” at the Flerov Laboratory of Nuclear Reactions of the Joint Institute for Nuclear Research) allow one to design a new way to perform the teleportation of ”heavy” matter (i. e., matter with non-zero rest mass), with prospects of realizing the project in a short time. Thus, the teleportation of protons (nuclei of hydrogen atoms) could be achieved in about a year, and it would take about two years to prepare the teleportation of heavier nuclei, e.g., <sup>3</sup>He. The concept of the measurements consists in recording signals in two independent but strictly synchronized memory devices, with the aim of selecting afterwards only those events that are certain to be causally separate, since even the most rapid signal (light) could not connect them.
FIG. 2 shows the layout of the experiment on teleportation of spin states of protons from a polarized target PH<sub>2</sub> into the point of destination (target C). A proton beam p<sub>0</sub> of a suitable energy within the 20-50 MeV range bombards the liquid-hydrogen target LH<sub>2</sub>. According to the known experimental data, the scattering in the target LH<sub>2</sub> in the direction of a second target (i. e., at the c.m. angle $`\theta \simeq 90^{\circ }`$) occurs, to within a few percent, through a so-called singlet intermediate state, characterized by a zero total spin of the two-proton system . Thus, the outgoing p<sub>2</sub> and p<sub>3</sub> protons constitute a two-proton entangled system and are fully analogous to the EPR-correlated photons used for transmitting information via the quantum communication channel in the experiments on the teleportation of ”massless” matter (light photons), as discussed in the preceding section. One of the scattered protons, p<sub>2</sub>, then travels to the point of destination (target-analyzer C), while the other, p<sub>3</sub>, comes to a point where the teleportation is expected to be started, i. e., to the PH<sub>2</sub>-target. The latter is used as a source of the particles we are going to teleport. In this sense, protons within this target play the same role as photons A in the above section. There are two features differentiating the case of protons from that of photons. First, protons p<sub>1</sub> are within the motionless target (and, thus, they are motionless themselves) where their density is greater; besides, the protons within the PH<sub>2</sub>-target have quite a definite quantum state, determined by the direction of polarization. The latter circumstance allows one to perform the experiment under controllable conditions, i. e., it gives the possibility of checking the expected result of the teleportation action. In the case when the scattering in the polarized target PH<sub>2</sub> occurs under the same kinematic conditions as in the target LH<sub>2</sub> (i. e., at the c.m. angle $`\theta \simeq 90^{\circ }`$), the total spin of the particles p<sub>1</sub> and p<sub>3</sub> must also be equal to zero after the collision. To detect these events, a removable circular module F-1 of the facility ”Fobos” is supposed to be used; the detection efficiency is thus expected to be much enhanced. According to QM, if all the above conditions are provided, the protons reaching the point K suddenly receive the same spin projections as the protons in the polarized target PH<sub>2</sub> have. Therefore, the teleportation of the spin states from the PH<sub>2</sub>-target to the recipient p<sub>2</sub> really takes place at the point K. Thus, if the coincidence mode of the detection is provided via any classical channel, then a strong correlation has to take place between the polarization direction in the target PH<sub>2</sub> and the direction of the deflection of p<sub>2</sub>-protons scattered in the carbon target C. C plays the role of the polarization analyzer: the protons are deflected to the left or to the right depending on the sign of their polarization, i. e., the orientation of the proton spin, which can have only two alternatives (along or opposite to a given direction). The second module of ”Fobos”, designated F-2 in FIG. 2, crowns the procedure of teleportation, as it indicates the proton scattering direction in the carbon target C, and hence, its polarization.
If we succeeded in making the distance between the detectors F-1 and F-2 sufficiently large, then it would be possible to meet the important criterion of a space-like interval (causal independence) between the events of the ”departure” of the quantum state from the PH<sub>2</sub>-target and the ”arrival” of this ”cargo” at the recipient (p<sub>2</sub>-proton) at the point K. To prevent any exchange of signals between the points PH<sub>2</sub> and K, it is essential to choose appropriate proportions of certain time and space segments, indicated in FIG. 2. Namely, we have to obtain $`S>ct_{12}`$, where $`t_{12}=|t_{F1}-t_{F2}|`$. Here $`t_{F1}`$ and $`t_{F2}`$ are the moments of registration of the signals from the corresponding detectors F-1 and F-2 (their arrival at the data collection and processing center). For simplicity, we neglect the time of flight of the protons from K to C, and from the PH<sub>2</sub>- and C-targets to the detectors F-1 and F-2, respectively.
## Conclusion
Finally, referring to the principle of identity of elementary particles of the same kind with the same quantum characteristics, i. e. protons in our case, we can say that protons from the polarized target PH<sub>2</sub> are transmitted to the destination point C (through the point K). Thus, in the near future, the teleportation of protons may come from the domain of dreams and fiction to reality in physicists’ laboratories.
Remembering that the above soya beans contain not only protons but also proteins, some may feel disillusioned. However, we should not be stingy; something should be left for the physics of the third millennium.
The work was supported in part by the Russian Foundation for Basic Research, projects nr. 99-01-01101. |
no-problem/9912/cond-mat9912195.html | ar5iv | text | # The local spectrum of a superconductor as a probe of interactions between magnetic impurities
## Abstract
Qualitative differences in the spectrum of a superconductor near magnetic impurity pairs with moments aligned parallel and antiparallel are derived. A proposal is made for a new, nonmagnetic scanning tunneling spectroscopy of magnetic impurity interactions based on these differences. Near parallel impurity pairs the mid-gap localized spin-polarized states associated with each impurity hybridize and form bonding and anti-bonding molecular states with different energies. For antiparallel impurity moments the states do not hybridize; they are degenerate.
preprint: submitted to Physical Review Letters
The relative orientation of the moments of two magnetic impurities embedded nearby in a metallic nonmagnetic host will depend on the significance of several electronic correlation effects, such as direct exchange, double exchange, superexchange, and RKKY. Each of these effects produces characteristic moment orientation; the RKKY interactions can align moments either parallel or antiparallel depending on the impurity separation. Reliable experimental measurements of the moment orientation as a function of impurity separation could identify the origin of magnetism in alloys of technological significance, such as the metallic ferromagnetic semiconductor GaMnAs which may eventually play a crucial role in semiconductor-based magnetoelectronics. Such measurements should also clarify the interplay between metallic and magnetic behavior in layered oxides, such as the high-temperature superconductors. In this work we propose, based on theoretical calculations, a robust experimental technique for the systematic and unambiguous experimental determination of moment alignment as a function of impurity separation.
We demonstrate that in an electronic system with a gap there is a fundamental difference between the electronic states localized around parallel and antiparallel impurity moments. Around parallel impurity moments there are mid-gap molecular states (similar to bonding and antibonding states in a diatomic molecule). Around antiparallel impurity moments the states remain more atomic-like and are degenerate. This qualitative difference in the spectrum of an impurity pair provides a robust technique of determining the impurity-impurity interaction via nonmagnetic scanning tunneling spectroscopy (STS). The essential condition for practical application of this technique will be whether the splitting of the states around parallel impurity moments is large enough to be observed spectroscopically.
The gapped system we consider in detail is the superconductor NbSe<sub>2</sub>, which is chosen for its extremely favorable surface properties for STS and for its quasi-two-dimensional electronic structure. STS has already been used to examine the localized states which form near isolated magnetic impurities on the surface of superconducting niobium. We have calculated the energies and spatial structure of the electronic states near impurity pairs in NbSe<sub>2</sub> essentially exactly within mean-field theory. These calculations indicate that the size of the splitting of states around parallel impurity moments in NbSe<sub>2</sub> is measurable — they are split by a sizable percentage of the energy gap even for impurity moment separations of order 30Å.
A nonmagnetic spectroscopy of magnetic impurity interactions is also plausible in a much wider range of materials. The localized spin-polarized states upon which the technique is based occur near magnetic impurities in most systems where there is a gap in the single-particle density of states at the chemical potential, whether or not the gap originates from superconductivity. Even when there is no true gap, if the density of states is substantially reduced at the chemical potential sharp resonances similar to the localized states will form (this has been predicted and recently observed for $`d`$-wave superconductors). Resonances around parallel and antiparallel impurity pairs show similar qualitative features to localized states.
If the energy scales of moment formation and interaction are much greater than those responsible for creating the gap it is also possible to infer the impurity interaction within a material in its high-temperature metallic phase from spectroscopic measurements on the same material in a low-temperature superconducting phase. In this the STS procedure is similar to traditional “superconducting spectroscopy”, where the dependence on impurity concentration of the superconducting transition temperature $`T_c`$ or the specific heat discontinuity at $`T_c`$ is used to determine the presence and rough magnitude of a single impurity moment. However, whereas single-impurity information can often be extracted from such measurements in the dilute limit, pairwise impurity interactions are much more difficult to infer from macroscopic properties like $`T_c`$ which depend on an ensemble of local configurations.
We note that the technique described here is remarkably non-invasive compared to alternate methods. The use of a magnetic tip to probe the magnetic properties of a sample may distort the natural surface orientation of moments. An alternative nonmagnetic STS technique that has been proposed, which involves a superconducting tip in a Tedrow-Meservey geometry, requires either an external or surface-induced magnetic field to spin-split the superconducting DOS of the tip. Finally, the use of spin-polarized tunneling from a GaAs tip relies on a fixed orientation of the magnetic structure on the surface relative to that of the optically generated spin-polarized population in the tip.
To understand the origin of the non-degeneracy of states around parallel moments and the degeneracy of states around antiparallel moments consider a heuristic picture of the two-impurity system in an isotropic-gap superconductor. For parallel alignment of the impurity moments only quasiparticles of one spin direction (assumed to be spin up) will be attracted to the impurity pair. Any localized state will thus be spin up. If the two impurities are close their two spin-up atomic-like states will hybridize and split into molecular states just as atomic levels are split into bonding and antibonding states in a diatomic molecule. Thus there will be two non-degenerate states apparent in the spectrum. This is shown schematically in the top section of Fig. 1, where the potential for spin up quasiparticles is shown on the left (Fig. 1A) and for spin down quasiparticles is shown on the right (Fig. 1B). The potential for spin-down quasiparticles is everywhere repulsive, so no spin-down localized states will form.
The situation for antiparallel aligned spins, shown on the bottom of Fig. 1, is quite different. The effect of the second impurity on the state around the first is repulsive and so does not change the state energy much unless the impurities are very close. Furthermore the Hamiltonian has a new symmetry in this case: it is unchanged under the operation which both flips the quasiparticle spin and inverts space through the point midway between the two impurities. This operation changes the potential of Fig. 1C into that of Fig. 1D. Thus instead of split states we find two degenerate atomic-like states of opposite spin, localized around each of the two impurities.
Detailed results for NbSe<sub>2</sub> are obtained by solving the following lattice-site mean-field Hamiltonian self-consistently:
$$H=\sum_{ij,\sigma }t_{ij}c_{i\sigma }^{\dagger }c_{j\sigma }+\sum_{i}\left[\mathrm{\Delta }_ic_{i\uparrow }^{\dagger }c_{i\downarrow }^{\dagger }+\mathrm{\Delta }_i^{*}c_{i\downarrow }c_{i\uparrow }\right]+V_{S1}(c_{1\uparrow }^{\dagger }c_{1\uparrow }-c_{1\downarrow }^{\dagger }c_{1\downarrow })+V_{S2}(c_{2\uparrow }^{\dagger }c_{2\uparrow }-c_{2\downarrow }^{\dagger }c_{2\downarrow }),$$
(1)
where $`c_{i\sigma }^{\dagger }`$ and $`c_{i\sigma }`$ create and annihilate an electron at lattice site $`i`$ with spin $`\sigma `$. The impurities reside at lattice sites $`1`$ and $`2`$, the $`t_{ij}`$ are the hopping matrix elements and the $`\mathrm{\Delta }_i`$ are the values of the superconducting order parameter. NbSe<sub>2</sub> has a triangular lattice, and the normal-state band structure can be modeled with an on-site energy of $`0.1`$ eV and with nearest-neighbor and next-nearest-neighbor hopping matrix elements of $`0.125`$ eV. These are determined from a tight-binding fit to ab initio calculations of the electronic structure. The superconducting pairing interaction is modeled with an on-site attractive potential which yields the experimental order parameter $`\mathrm{\Delta }=1`$ meV. The inhomogeneous order parameter $`\mathrm{\Delta }_i`$ is determined self-consistently from the distorted electronic structure in the vicinity of the impurities. We consider equivalent parallel ($`V_{S1}=V_{S2}`$) or antiparallel ($`V_{S1}=-V_{S2}`$) impurity moments.
This model assumes the impurity spins behave as classical spins (see Refs. ). Classical spin behavior has been seen, for example, for Mn and Gd impurities on the surface of niobium. The electronic structure in this model, including quasiparticle state energies and spatial structure, can be found rapidly and accurately by inverting the Gor’kov equation in a restricted real-space region including the two impurities, as described in Ref. . Measurements of the spatial structure of these states and of the values of the splitting between states can serve as a sensitive test of the model of the electronic structure of this material and of the impurity potential for a given atom.
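The qualitative contrast between parallel and antiparallel moments can also be reproduced with a much cruder calculation than the one just described. The Python sketch below diagonalizes a non-self-consistent Bogoliubov-de Gennes Hamiltonian for two classical moments on a small periodic triangular lattice with nearest-neighbor hopping only; the lattice size, hopping, gap and exchange strength are illustrative numbers chosen so that the sub-gap states are resolved on such a small cluster (the gap is hugely exaggerated relative to NbSe<sub>2</sub>), and the order parameter is held uniform rather than determined self-consistently.

```python
import numpy as np

# Non-self-consistent BdG sketch: two classical magnetic impurities in an
# s-wave superconductor on a periodic triangular lattice.  All parameter
# values are illustrative, not the NbSe2 values quoted in the text.
L = 16                        # L x L lattice with periodic boundaries
t, delta, V_S = 1.0, 0.4, 2.0
sep = 4                       # impurity separation along a lattice vector

def site(ix, iy):
    return (ix % L) * L + (iy % L)

N = L * L
h0 = np.zeros((N, N))
for ix in range(L):           # six nearest neighbours of the triangular lattice
    for iy in range(L):
        i = site(ix, iy)
        for dx, dy in [(1, 0), (0, 1), (-1, 1)]:
            j = site(ix + dx, iy + dy)
            h0[i, j] = h0[j, i] = -t

imp1, imp2 = site(L // 2, L // 2), site(L // 2 + sep, L // 2)

def subgap_energies(s1, s2):
    """Sub-gap BdG eigenvalues for impurity moments s1, s2 = +1 or -1."""
    m = np.zeros(N)
    m[imp1], m[imp2] = s1 * V_S, s2 * V_S
    energies = []
    for sign in (+1, -1):     # the two decoupled collinear spin sectors
        H = np.block([[h0 + sign * np.diag(m), delta * np.eye(N)],
                      [delta * np.eye(N), -(h0 - sign * np.diag(m))]])
        e = np.linalg.eigvalsh(H)
        energies.extend(e[np.abs(e) < delta])
    return np.sort(np.array(energies))

print("parallel moments     :", np.round(subgap_energies(+1, +1), 3))
print("antiparallel moments :", np.round(subgap_energies(+1, -1), 3))
```

The printed energies themselves depend on the arbitrary parameters; the point of the exercise is only the pattern, two distinct split pairs of levels for parallel moments versus doubly degenerate levels (enforced by the spin-flip plus inversion symmetry) for antiparallel moments.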
Figure 2A shows the energies of the localized states in NbSe<sub>2</sub> for parallel spins (red) and antiparallel spins (black) for a sequence of impurity spacings which are multiples of the in-plane nearest-neighbor vector of the NbSe<sub>2</sub> lattice. The splitting of the bonding and antibonding states oscillates over a distance scale comparable to the Fermi wavelength of NbSe<sub>2</sub> along this direction. The splitting is proportional to the probability of a quasiparticle at one impurity propagating to the other, which is a measure of the coupling of the two atomic-like states. At large distances state energies for parallel and antiparallel moments approach the single impurity state energy, indicated on the right side of Fig. 2A. Figure 2BC shows the spatially integrated change in density of states due to the impurity pair for these impurity separations. The density of states (DOS) of a quasiparticle of energy $`E`$ in a superconductor has an electron component at energy $`E`$ and a hole component at energy $`-E`$, so a single state will produce two peaks in the DOS unless it is closer to $`E=0`$ than the linewidth. That linewidth is determined by thermal broadening in the metallic probe tip, which for these plots is assumed to be $`0.05`$ meV$`=0.6`$K. The gap in the homogeneous DOS extends from $`-1`$ meV to $`1`$ meV in NbSe<sub>2</sub>, so the variation in state energies is a substantial fraction of this gap. The clear distinction between parallel and antiparallel impurity moments in the DOS is only limited by the linewidth of the states.
A tunneling measurement of the DOS using a broad-area contact would yield the spectrum of an ensemble of impurity separations, hence STS (which measures the local DOS, or LDOS) is the ideal method for examining a single configuration of impurities. Before describing the distinct spatial differences in LDOS measurements between parallel and antiparallel alignments of impurity pairs we show the single impurity result in Fig. 3. The spatial structure of the electron and hole components of the LDOS are independently measurable by STS and can be quite different in detail. In this work we will show only the spatial structure of the hole component — similar gross structure is seen in the electron-like LDOS. Figure 3 shows the six-fold symmetric LDOS for NbSe<sub>2</sub> for $`V_S=200`$ meV at an energy of $`0.19`$ meV. The units are Angstroms and the nearest-neighbor spacing is 3.47Å.
The details of the spatial structure can be traced directly to the normal-state electronic structure of NbSe<sub>2</sub>. We note that the local hopping matrix elements and the local nonmagnetic potential will differ near the impurity atoms. We find that moderate changes in these quantities do not significantly change the magnitude of the splitting of the even and odd parity states. This relative insensitivity occurs because the splitting is largely dependent on the amplitude for a quasiparticle to propagate from one impurity site to the other. Careful comparison of a measured LDOS and Fig. 3 would allow the determination of the changes in the local hopping and the nonmagnetic potential for this case.
Plots of the LDOS for two impurities in NbSe<sub>2</sub> separated by four lattice spacings (13.88Å) are shown in Fig. 4A-D. They demonstrate via their spatial structure the qualitative differences among different types of molecular states possible around an impurity pair. Figure 4A is the bonding state (energy $`0.10`$ meV) and Fig. 4B shows the antibonding state ($`0.26`$ meV). The impurities are at the same sites in each of Fig. 4A-D, labeled $`1`$ and $`2`$ in Fig. 4B. As expected from the symmetry of these states, the antibonding state has a nodal line along the mirror plane (indicated in red) between the two impurities. No such nodal line occurs in Fig. 4A — in contrast the state is enhanced along the nodes.
The nonmagnetic STS probe cannot resolve the spin direction of the electronic states around the impurities, so around antiparallel impurity moments it detects both states. The sum of the LDOS for the two atomic-like states is symmetric around the mirror plane. Figure 4C is the LDOS at the energy for the two degenerate states around antiparallel impurity spins ($`0.28`$ meV). The states are much more diffuse than the bonding state in Fig. 4A due to the repulsive nature of one impurity. Figure 4D shows the experimentally inaccessible spin-resolved LDOS, showing the LDOS of holes with the spin direction attracted to the impurity on the left. The spin-resolved LDOS at the impurity on the left is two orders of magnitude greater than at the impurity on the right. Thus the individual localized states are quite atomic-like.
We have assumed throughout that the impurity moments are locked either parallel or antiparallel. If the alignment is intermediate between the two cases then the spectrum shows non-degenerate states split less than in the parallel case. If there is some flipping of moments between parallel and antiparallel alignment on a timescale longer than the time required for the quasiparticle states to realign with the moments then the spectrum would be a linear superposition of the antiparallel and parallel spectra. If this is an activated process, this energy of activation of moment flipping could be easily distinguished by examining the temperature dependence of the spectrum.
This work describes a robust technique for determining the alignment of two impurity moments in a gapped system. The details of the expected results around magnetic impurities in the quasi-two-dimensional superconductor NbSe<sub>2</sub> have been calculated. Energies and spatial structure of bonding and antibonding states around parallel moments, and of localized atomic-like states around antiparallel moments, indicate the two cases should be distinguishable with nonmagnetic scanning tunneling spectroscopy. This technique should be broadly applicable to a wide range of correlated electronic systems.
We would like to acknowledge the Office of Naval Research’s Grants Nos. N00014-96-1-1012 and N00014-99-1-0313. This research was supported in part by the National Science Foundation under Grant No. PHY94-07194. |
no-problem/9912/chao-dyn9912026.html | ar5iv | text | # Transition to Chaotic Phase Synchronization through Random Phase Jumps
## I Introduction
Phase synchronization phenomena in coupled chaotic systems have been extensively studied during the last few years in the context of non-identical chaotic systems (Rosenblum et al., 1996; Osipov et al., 1997), ecological systems (Earn et al., 1998; Blasius et al., 1999), physiological systems (Schäfer et al., 1998), chaotic systems forced by an external periodic or noisy signal (Pikovsky et al., 1997a, 1997b), an ensemble of coupled chaotic oscillators (Pikovsky et al., 1997c; Osipov et al., 1997), and with an electronic model of two Rössler oscillators (Parlitz et al., 1996). This effect owes its name to the classical definition of synchronization of periodic oscillators, which is described in terms of locking or entrainment of the phases, while the amplitudes can be quite different. Hence, synchronization of chaotic oscillators can be defined, in the most general case, as the locking between the phases of two coupled systems, while the amplitudes remain chaotically varying in time (Rosenblum et al., 1996).
For chaotic oscillators, there is no unique definition of phase. An approach to determine the amplitude $`A`$ and phase $`\varphi `$ of a narrow-band signal $`s(t)`$ is based on the analytic signal concept that considers an analytical signal $`\psi (t)`$ as a complex function of time, $`\psi (t)=s(t)+ı\stackrel{~}{s}(t)=A(t)e^{ı\varphi (t)}`$ and $`\stackrel{~}{s}(t)`$ is the Hilbert transform of $`s(t)`$ (Rosenblum et al., 1996). However, in other cases phase and amplitude can be defined as a function of the natural variables of the oscillator. For example, for the Rössler attractor $`\varphi =\mathrm{arctan}(y/x)`$ (Pikovsky et al., 1997d) or $`\varphi =\mathrm{arctan}(y/\sqrt{x^2+y^2})`$ (Rosa et al., 1998), and for the Lorenz model $`\varphi =\mathrm{arctan}\left[(\sqrt{x^2+y^2}-u_0)/(z-z_0)\right]`$ (Pikovsky et al., 1997b), where $`u_0`$ and $`z_0`$ are constants.
In this paper, we focus our interest in the phenomenon of phase synchronization between chaotic Lorenz systems coupled unidirectionally through driving in a ring geometry. It has been shown, that for an appropriate set of parameters, a ring of $`N`$ coupled Lorenz systems shows a Periodic Rotating Wave (PRW) where neighboring oscillators exhibit a phase difference of $`2\pi /N`$ and the amplitude varies with time sinusoidally (Mariño et al., 1998). This system, with a different set of parameters also exhibits Chaotic Rotating Waves (CRW) defined as well by a phase difference between neighboring cells of $`2\pi /N`$ but the amplitude remains chaotic (Sánchez and Matías, 1999). In this structure there exists a superposition of Fourier modes $`k=0`$ and $`k=1`$. Here we will show a transition from a PRW with a phase difference of $`2\pi /N`$ to a CRW with a phase difference of $`4\pi /N`$, where opposite cells are phase synchronized ($`N`$ even). Depending on the unidirectional coupling strength, random/brownian $`2\pi `$-phase slips develop during the mentioned transition.
## II Model
We shall consider rings of Lorenz attractors coupled in such a way that the dynamical behavior (Güémez and Matías, 1995) is defined by,
$`\dot{x_j}`$ $`=`$ $`\sigma (y_j-x_j)`$ (1)
$`\dot{y_j}`$ $`=`$ $`R\left(\beta x_{j-1}+(1-\beta )x_j\right)-y_j-x_jz_j`$ (2)
$`\dot{z_j}`$ $`=`$ $`x_jy_j-bz_j`$ (3)
with $`\sigma `$, $`R`$ and $`b`$ positive parameters. Usual parameter values are $`\sigma =10`$, $`b=\frac{8}{3}`$ and $`R=28`$. In Eq. (2), $`\beta `$ accounts for the coupling strength, $`j`$ runs from 1 to $`N`$ (the number of cells in the array), and for $`j=1`$, $`x_0=x_N`$.
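A direct numerical integration of Eqs. (1)-(3) is straightforward. The sketch below uses the parameter values just quoted with $`N=6`$; the coupling strength, integration span, step-size control and initial condition are arbitrary illustrative choices, and the stored $`x_j(t)`$ series are what the phase analysis described below operates on.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Ring of N unidirectionally coupled Lorenz cells, Eqs. (1)-(3).
# beta and the initial condition are illustrative choices.
N, sigma, R, b, beta = 6, 10.0, 28.0, 8.0 / 3.0, 0.85

def rhs(t, u):
    x, y, z = u[0::3], u[1::3], u[2::3]
    x_prev = np.roll(x, 1)                       # x_{j-1}, with x_0 = x_N
    dx = sigma * (y - x)
    dy = R * (beta * x_prev + (1.0 - beta) * x) - y - x * z
    dz = x * y - b * z
    return np.ravel(np.column_stack((dx, dy, dz)))

rng = np.random.default_rng(0)
u0 = rng.uniform(-1.0, 1.0, 3 * N)
sol = solve_ivp(rhs, (0.0, 200.0), u0, max_step=0.01)

keep = sol.t > 50.0                              # discard the transient
t_series = sol.t[keep]
x_series = sol.y[0::3, keep]                     # x_j(t), shape (N, n_times)
print(x_series.shape)
```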
For $`\beta =1`$, it was observed (Matías et al., 1997; Mariño et al., 1998) that the synchronized chaotic state is stable if the size of the ring is small enough, $`N=2`$, while for a certain critical number, $`N_c=3`$ in the case of the Lorenz model, an instability associated with the first Fourier mode $`k=1`$ destroys the uniform chaotic state, leading to a PRW. As the size of the ring is increased, new Fourier modes become unstable and for $`N=6`$ a second instability ($`k=2`$) develops that could lead to a Chaotic Rotating Wave (CRW) where neighboring oscillators exhibit a phase difference of $`4\pi /N`$, as shown in Fig. 1(a); that is, Fourier modes $`k=1`$ and $`k=2`$ compete in a nonlinear way. Thus, opposite cells are phase synchronized while amplitudes remain chaotic and are, in general, uncorrelated. Figure 1(b) shows the uncorrelated values of the amplitudes of $`x_{j+N/2}(t)`$ as a function of $`x_j(t)`$. Since the second Fourier mode plays an important role for phase synchronization, we will focus our study on a ring consisting of $`N=6`$ Lorenz cells described by Eq. (2) as $`\beta `$ is varied. This phase synchronization describes the onset of long-range correlations in chaotic oscillations (suppression of phase diffusion), and thus also corresponds to the appearance of a certain order inside chaos that here is shown as a CRW with certain similarities to a quasiperiodic motion.
To study phase synchronization of coupled chaotic systems, we calculate the phases of the oscillators and then check whether the weak locking condition $`\mathrm{\Delta }\varphi =|n\varphi _j-m\varphi _{j+N/2}|<`$const is satisfied. In this paper, we restrict ourselves to the case of $`m=n=1`$. The definition of the phase for a given oscillator may be problematic when there is no center of rotation. Fig. 2 shows the $`(x,y)`$ projection of an oscillator phase space for two $`\beta `$ values, $`\beta =1.0`$ and $`\beta =0.85`$. At $`\beta =1.0`$ a center of rotation can be clearly distinguished at $`(x,y)=(0,0)`$. Then a Poincaré surface of section $`y=x`$, $`x>0`$ allows us to define the phase as (Pikovsky et al., 1997b)
$$\varphi (t)=n+\frac{t-t_n}{t_{n+1}-t_n}$$
(4)
where $`t_n`$ is the nth crossing of the surface. Note that the phase has been normalized by a factor $`2\pi `$. We see that with the surface chosen in Fig. 2(a) the crossings coincide with the maxima of the variable $`x(t)`$. Therefore, we can know at what times the phase is an integer just by looking at the time evolution of the variable $`x`$; this criterion has been used before (Blasius et al., 1999). As $`\beta `$ decreases, improper rotations become more frequent (see Fig. 2(b)). However, the existence of a ”rotation axis” is clear, unlike the funnel Rössler attractor case (see e.g. Pikovsky et al., 1997b) where an independent center of rotation emerges at one side of the attractor. Consequently, the phase is increased by one unit (i.e. $`2\pi `$ radians) when an improper rotation occurs. As will be discussed later, the time difference between two consecutive maxima does not depend on the nature of each rotation, so it seems that our definition provides a ”good” period. The instantaneous phase $`\varphi (t)`$ will be determined through linear interpolation after calculating the instants of time at which maxima appear in the $`x(t)`$ series.
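A minimal implementation of this phase definition is shown below: the function locates the maxima of a sampled series and interpolates linearly between them as in Eq. (4) (values before the first and after the last maximum are simply clamped). The demonstration signals are synthetic and only illustrate the bookkeeping; for the ring one would pass the $`x_j(t)`$ series of two opposite cells, for instance those produced by the integration sketch above.

```python
import numpy as np

def phase_from_maxima(t, x):
    """Phase of Eq. (4): phi increases by 1 between consecutive maxima of x,
    with linear interpolation in between (normalised by 2*pi)."""
    i_max = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    t_peaks = t[i_max]
    return np.interp(t, t_peaks, np.arange(len(t_peaks)).astype(float))

# Stand-alone demonstration: two signals with slightly different mean
# periods, so the second completes about two extra turns over the run.
t = np.linspace(0.0, 100.0, 20001)
x1 = np.cos(2.0 * np.pi * t) * (1.0 + 0.3 * np.sin(0.31 * t))
x2 = np.cos(2.0 * np.pi * 1.02 * t)

dphi = phase_from_maxima(t, x1) - phase_from_maxima(t, x2)
print("phase difference at the end of the run:", dphi[-1])
```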
## III Results
The main effect of varying $`\beta `$ is shown in Fig. 3. As shown above, for $`\beta =1`$ opposite oscillators within the ring are phase synchronized. As $`\beta `$ decreases, opposite cells still remain phase synchronized, except for some phase jumps. These events are defined as the non-occurrence of a maximum at its due time in one of the $`x(t)`$ signals corresponding to cells $`j`$ or $`j+N/2`$. In other words, we will assume that a phase jump occurs if $`\mathrm{\Delta }\varphi (t)=\varphi _j(t)-\varphi _{j+N/2}(t)`$ changes by approximately $`\pm 1`$ between two consecutive maxima of $`x(t)`$. Note, for example, that for $`\beta =0.76`$ and $`\beta =0.85`$ two jumps are encircled in Fig. 3. Improper rotations are also marked; it is clear that they do not produce phase slips, although both phenomena appear when the signal becomes more chaotic. Further decreasing the coupling strength $`\beta `$ finally leads to the formation of a PRW with a phase difference of $`1/6`$ (mod 1) between neighboring cells. The transition between PRWs and phase synchronization with jumps occurs for a critical value of $`\beta _c\simeq 0.75`$.
The distribution of phase jumps is shown in the sequence of figures at the right side of Fig. 3, where the periods of time $`T`$ between consecutive maxima of $`x(t)`$ are shown for two opposite cells within the ring. As $`\beta `$ is decreased, the map of $`T_{j+N/2}`$ as a function of $`T_j`$ shows a greater dispersion from the mean value (located in the center of the figures) until the critical value $`\beta _c`$ is reached. For high values of $`\beta `$, a small deviation of the periods around the mean value appears, in accordance with the way the phase synchronization has been defined ($`|\varphi _1-\varphi _4|<`$ const $`<1/2`$). As the value of $`\beta `$ decreases, the dispersion around the mean value increases and, at the same time, two independent accumulation regions responsible for the phase slips appear (see circles at the right side of Fig. 3). Notice that these phase slips are not related to the improper rotations, which are represented by maxima (minima) peaks of the temporal series that do not take a positive (negative) value (see arrows in the left side of Fig. 3). The process of phase-jump formation is as follows: consecutive $`x(t)`$ maxima of one cell remain phase synchronized with those of the opposite cell within the ring, until a phase jump occurs spontaneously, which corresponds to jumps from the arms numbered (2) and (3) in Fig. 3 to the encircled zones. That is, phase slips are characterized by the sequences $`2\rightarrow 1\rightarrow 2`$ and $`3\rightarrow 4\rightarrow 3`$. At the same time, fluctuations in $`\mathrm{\Delta }\varphi `$ (i.e. no perfect synchronization between maxima) lead to jumps between zones (2) and (3). Besides, it must be pointed out that as $`\beta `$ is decreased the concept of phase synchronization defined above and used here becomes less restrictive, as the dispersion around the mean value increases (const $`\rightarrow 1/2`$).
The number of phase slips occurring at a given interval of time decreases as $`\beta `$ is increased. Then, when opposite cells within the ring are phase synchronized the phase difference $`|\mathrm{\Delta }\varphi (t)|`$ is, on average, constant in time. But, if phase slips occur for $`\beta _c<\beta <1`$, then one would expect that for $`M`$ different initial conditions, the averaged square phase difference dynamics will be generally diffusive, so for large $`t`$,
$$\langle |\mathrm{\Delta }\varphi (t)|^2\rangle =2Dt$$
(5)
where $`D`$ is the diffusion constant. Figure 4 shows the linear dependence found for the root mean square of the phase difference $`\langle |\mathrm{\Delta }\varphi (t)|^2\rangle ^{1/2}`$ as a function of time, in a log-log plot for $`M=14`$ different random initial conditions for the Lorenz cells within the ring and for four different values of $`\beta `$. The four graphs fit to a straight line with slope $`S\simeq 1/2`$ as expected from Eq. (5) (see Table I for the fitted values).
The distribution of temporal periods $`\tau `$ between two consecutive phase jumps is shown in Fig. 5 for two different values of $`\beta `$. Note the occurrence of longer periods of time $`\tau `$ for higher values of $`\beta `$. These distributions show an exponential decay with $`\tau `$ as a consequence of the intrinsic random/brownian nature of the dynamical process underlying the formation of phase slips. Moreover, neither the phase jumps occur simultaneously for all couples of opposite cells within the array, nor the phase jumps are correlated in space, which is in agreement with the random dynamics of phase slip formation.
## IV Discussion
From Fig. 5 a mean value of the period $`\tau `$ for each value of $`\beta `$ can be defined. Now, by using a simple model of stochastic diffusion of a particle in a one-dimensional medium (random discrete walk), the averaged quadratic dispersion from the phase synchronized state ($`\mathrm{\Delta }\varphi \approx 0`$) is given by the following equation,
$$\langle |\mathrm{\Delta }\varphi (t)|^2\rangle =\frac{t}{\tau }$$
(6)
where $`t/\tau `$ is the number of phase jumps that have appeared for $`t\gg \tau `$. Consequently, comparing Eqs. (5) and (6), it is possible to calculate a theoretical value for the diffusion coefficient, $`D_{th}=(2\tau )^{-1}`$. A comparison between the diffusion coefficient $`D_{exp}`$ obtained after fitting the log-log plots given in Fig. 4 and $`D_{th}`$ is shown in Table I. Note the good agreement between both coefficients for large values of $`\beta `$, as expected for a typical Brownian dynamics. It must be noted that $`\tau `$ increases dramatically with $`\beta `$ (see Fig. 5 and the values of $`D_{th}`$ in Table I), in such a way that it is not possible to assure the existence of an upper limit of $`\beta `$ above which no jumps appear. For $`\beta \approx \beta _c`$ we have found small values of $`\tau `$, of the order of the mean period between two consecutive maxima of $`x(t)`$. Thus, jumps occur frequently in time and a random, uncorrelated-in-time sequence cannot be assured (the system shows a tendency to display $`+1,-1,+1,\mathrm{\dots }`$ slip series). Then, the obtained values of the diffusion coefficient $`D_{exp}`$ are smaller than the values $`D_{th}`$ predicted using $`\tau `$.
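The random-walk estimate behind Eq. (6) is easy to check numerically. The sketch below draws slip times with exponentially distributed waiting times of mean $`\tau `$ and random $`\pm 1`$ slips (an assumption consistent with the exponential distributions of Fig. 5, but with an arbitrary illustrative value of $`\tau `$), and verifies that the mean squared phase difference grows linearly in time with a diffusion coefficient close to $`(2\tau )^{-1}`$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-walk picture behind Eq. (6): +/-1 phase slips at random times with
# mean waiting time tau.  tau, t_max and M are illustrative values only.
tau, t_max, M = 40.0, 4000.0, 500

sample_t = np.linspace(100.0, t_max, 40)
msd = np.zeros_like(sample_t)
for _ in range(M):
    waits = rng.exponential(tau, size=int(3 * t_max / tau))
    slip_times = np.cumsum(waits)                  # instants of the slips
    slips = rng.choice([-1.0, 1.0], size=slip_times.size)
    dphi = np.cumsum(slips)                        # phase difference after each slip
    idx = np.searchsorted(slip_times, sample_t)    # slips before each sample time
    phi_t = np.where(idx > 0, dphi[np.maximum(idx - 1, 0)], 0.0)
    msd += phi_t ** 2
msd /= M

slope = np.polyfit(np.log(sample_t), np.log(msd), 1)[0]
print("log-log slope of <|dphi|^2> vs t:", round(slope, 2), "(expected 1)")
print("estimated D = <|dphi|^2>/(2 t)  :",
      round(float(np.mean(msd / (2.0 * sample_t))), 4),
      "   1/(2 tau) =", round(1.0 / (2.0 * tau), 4))
```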
The transition between periodic rotating waves and phase synchronized chaotic rotating waves has been shown to occur as the coupling strength $`\beta `$ is increased. For values of $`\beta >\beta _c`$, phase slips develop randomly in time following a diffusive process given by Eq. (5). Note that the dynamics of the phase defined for a single chaotic oscillator is generally diffusive as well (Pikovsky et al., 1997b) and in this case, $`D`$ determines the phase coherence of the chaotic oscillations which is inversely proportional to the width of the spectral peak of the chaotic attractor. On the other hand, for coupled unsynchronized nonidentical chaotic oscillators the average phase difference grows linearly with time (Blasius et al., 1999). Nevertheless, we have shown a different behavior where the root mean square of the phase difference grows with $`t^{1/2}`$ as a consequence of phase slips random formation.
## Acknowledgements
We want to thank I. Sendiña-Nadal for fruitful discussions and comments on this work. The support by DGES and Xunta de Galicia under Research Grants PB97–0540 and XUGA–20602B97, respectively, is gratefully acknowledged.
## References
Blasius, B., Huppert, A. and Stone, L. \[1999\] ”Complex dynamics and phase synchronization in spatially extended ecological systems”. Nature 399, 354-359.
Earn, D.J.D., Rohani, P. and Grenfell, B. \[1998\] ”Persistence, chaos, and synchrony in ecology and epidemiology”. Proc. R. Soc. Lond. B 265, 7-10.
Güémez, J. and Matías, M.A. \[1995\] ”Modified method for synchronizing and cascading chaotic systems”. Phys. Rev. E 52, 2145-2148.
Mariño, I.P., Pérez-Muñuzuri, V. and Matías, M.A. \[1998\] ”Desynchronization transitions in rings of coupled chaotic oscillators”. Int. J. of Bif. and Chaos 8, 1733-1738.
Matías, M.A., Pérez-Muñuzuri, V., Lorenzo, M.N., Mariño, I.P. and Pérez-Villar, V. \[1997\] ”Observation of a fast rotating wave in rings of coupled chaotic oscillators”. Phys. Rev. Lett. 78, 219-222.
Osipov, G.V., Pikovsky, A.S., Rosenblum, M.G. and Kurths, J. \[1997\] ”Phase synchronization effects in a lattice of nonidentical Rössler oscillators”. Phys. Rev E 55, 2353-2361.
Parlitz, U., Junge, L., Lauterborn, W. and Kocarev, L. \[1996\] ”Experimental observation of phase synchronization”. Phys. Rev. E 54, 2115-2117.
Pikovsky, A., Osipov, G., Rosenblum, M., Zaks, M. and Kurths, J. \[1997a\] ”Attractor-repeller collision and eyelet intermittency at the transition to phase synchronization”. Phys. Rev. Lett. 79, 47-50.
Pikovsky, A., Rosenblum, M., Osipov, G., and Kurths, J. \[1997b\] ”Phase synchronization of chaotic oscillators by external driving”. Physica D 104, 219-238.
Pikovsky, A., Rosenblum, M.G. and Kurths, J. \[1997c\] ”Synchronization in a population of globally coupled chaotic oscillators”. Europhys. Lett. 34, 165-170.
Pikovsky, A., Zaks, M., Rosenblum, M., Osipov, G. and Kurths, J. \[1997d\] ”Phase synchronization of chaotic oscillations in terms of periodic orbits”. Chaos 7, 680-687.
Rosa, E., Ott, E. and Hess, M.H. \[1998\] ”Transition to phase synchronization of chaos”. Phys. Rev. Lett. 80, 1642-1645.
Rosenblum, M.G., Pikovsky, A.S. and Kurths, J. \[1996\] ”Phase synchronization of chaotic oscillators”. Phys. Rev. Lett. 76, 1804-1807.
Sánchez, E. and Matías, M.A. \[1999\] ”Transition to rotating chaotic waves in arrays of coupled Lorenz oscillators”. Int. J. of Bif. and Chaos 9 (in press).
Schäfer, C., Rosenblum, G.R., Kurths, J. and Abel, H.H. \[1998\] ”Heartbeat synchronized with ventilation”. Nature 392, 239-240.
| $`\beta `$ | $`S`$ | $`D_{exp}\times 10^3`$ (t.u.<sup>-1</sup>) | $`D_{th}\times 10^3`$ (t.u.<sup>-1</sup>) |
| --- | --- | --- | --- |
| 0.85 | $`0.51\pm 0.02`$ | $`3.53\pm 2.35`$ | $`4.41\pm 0.05`$ |
| 0.86 | $`0.47\pm 0.01`$ | $`2.98\pm 1.09`$ | $`2.32\pm 0.03`$ |
| 0.87 | $`0.48\pm 0.02`$ | $`1.55\pm 1.02`$ | $`0.73\pm 0.02`$ |
| 0.88 | $`0.52\pm 0.01`$ | $`0.050\pm 0.018`$ | $`0.055\pm 0.001`$ |
| 0.89 | $`0.48\pm 0.01`$ | $`0.006\pm 0.002`$ | $`0.004\pm 0.001`$ |
TABLE I: Values of the slope $`S`$, and $`D_{exp}`$ obtained from the linear fitting of Fig. 4 and Eq. (5). $`D_{th}=(2\tau )^1`$ is calculated from the mean values of $`\tau `$. |
no-problem/9912/hep-ex9912053.html | ar5iv | text | # A Comparison of Deep Inelastic Scattering Monte Carlo Event Generators to HERA Data
## 1 Introduction
Monte Carlo generators are an essential tool in modern day experimental High Energy Physics. They play a crucial rôle in the analysis of the data, often in assessing the systematic errors of a measurement. For that reason it is of great importance that the Monte Carlo programs give results that agree closely with the experimental data. This paper aims to describe the agreement, deficiencies and tuning of the Monte Carlo models with the neutral current deep inelastic scattering (DIS) data at HERA. Extensive use is made of the utility package HzTool , which is a FORTRAN library containing a collection of experimental results from the H1 and ZEUS collaborations.
The work described here is part of an ongoing program. During the workshop a forum was established between the H1 and ZEUS collaborations for a joint coordinated investigation of the generators working closely with the programs’ authors.
## 2 Monte Carlo Models
The ARIADNE , HERWIG and LEPTO Monte Carlo generators for DIS data have been investigated during the course of the workshop. Other programs such as RAPGAP and those developed over the duration of the workshop will be examined as part of the ongoing program of work. In the following sections a brief introduction to each of the three generators studied is given.
### 2.1 ARIADNE
In ARIADNE the QCD cascade is modelled by emitting gluons from a chain of independently radiating dipoles spanning colour connected partons , correcting the first emission to reproduce the first order matrix elements . The hadronisation of the partons into final state particles is performed by the Lund string model as incorporated in JETSET . Since the proton remnant at one endpoint of the parton chain is treated as an extended object, the coherence condition allows only a fraction of this source to be involved in gluon radiation. Since the photon probing the proton only resolves the struck quark to a distance $`\lambda \sim 1/Q`$, the struck quark is also treated as an extended object. As a consequence gluon emissions in the proton and photon directions are suppressed. This phase space restriction is governed by $`a=(\mu /k_T)^\alpha `$ where $`k_T`$ is the transverse momentum of the emission, $`a`$ is the fraction of the colour antenna involved in the radiation, $`\mu `$ is a parameter related to a typical inverse size of a hadron and $`\alpha `$ governs the distribution of the energy along the dipole.
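As a rough numerical illustration of this suppression (not the ARIADNE implementation itself; the values of $`\mu `$ and $`\alpha `$ below are placeholders rather than the program defaults), the allowed fraction of the dipole falls steeply once the emission $`k_T`$ exceeds the hadronic scale:

```python
# Illustrative sketch of the soft-suppression factor a = (mu/k_T)^alpha,
# capped at 1 for emissions below the hadronic scale mu.
# mu and alpha are assumed placeholder values, NOT the ARIADNE defaults.
mu = 0.6      # GeV, typical inverse hadron size (assumed)
alpha = 1.0   # power governing the energy distribution along the dipole (assumed)

def allowed_fraction(k_t):
    """Fraction of the colour antenna allowed to radiate at transverse momentum k_t (GeV)."""
    return min(1.0, (mu / k_t) ** alpha)

for k_t in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"k_T = {k_t:5.1f} GeV  ->  a = {allowed_fraction(k_t):.3f}")
```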
In the default version of ARIADNE, the mechanism for soft suppression of radiation due to the extended source of the proton remnant results in a suppression of radiation in the current region of the Breit frame at high $`Q^2.`$ In the course of the workshop a high $`Q^2`$ modification was developed where this suppression in the current region was removed.
### 2.2 HERWIG
HERWIG relies on a coherent parton branching algorithm with additional first order matrix element corrections to populate the extremities of phase space which the partons from the conventional QCD cascade fail to occupy. The partons are transformed into hadrons using the cluster fragmentation model , whereby the primary hadrons are produced from an isotropic two body decay of colour-singlet clusters formed from partonic constituents.
Since the Monte Carlo tuning to HERA data at the ‘Future Physics at HERA workshop’ , a new version of HERWIG (version 5.9) has become available. This version includes the modified remnant treatment of version 5.8d whereby the fragmentation of the cluster containing the hadronic remnant is treated differently to that containing the perturbative parton from the incident hadron. In addition, the particle decay tables have been updated and now contain a large amount of information on additional resonance decays.
The default version of HERWIG implements the next-to-leading order (NLO) running of the QCD coupling constant $`\alpha _s`$. The HERWIG philosophy is to incorporate as much perturbative QCD behaviour as possible, so even though the generator only uses a leading order (LO) parton shower cascade, a NLO $`\alpha _s`$ behaviour is implemented. This can be justified, to some degree, because of the HERWIG implementation of angular ordering in the QCD cascades. The H1 collaboration have modified HERWIG to allow a LO behaviour of $`\alpha _s`$ .
### 2.3 LEPTO
In LEPTO the hard parton processes are described by a leading order matrix element (ME). The soft and collinear divergences are regulated with a lower and upper cut in $`z_p`$, where $`z_p=p\cdot j_1/p\cdot q`$, with $`p`$ ($`q`$) the proton (photon) four-vector and $`j_1`$ the four-vector of one of the partons produced in the hard subprocess. In addition, the invariant mass squared of the two hard partons is required to exceed a minimal value, $`\widehat{s}_{min}`$. Below the ME cut-offs, parton emissions are treated by parton showers based on the DGLAP evolution equations . The amount of parton radiation depends on the virtuality chosen between a lower cut-off ($`Q_0^2`$) and a maximum given by the scale of the hard process or the ME cut-off. LEPTO uses JETSET for the hadronization of the partons. In addition to this non-perturbative phase, LEPTO introduces another non-perturbative mechanism. This is a soft (i.e. at a scale below $`Q_0^2`$) colour interaction which assumes that the colour configuration of the partonic system can be changed whilst traversing the colour field of the proton remnant. This was introduced in order to reproduce the rapidity gap events observed at HERA.
During the course of the workshop a new version of LEPTO was released (version 6.5.2$`\beta `$). This version introduced a new scheme for dealing with SCI events, in which the probability of soft colour interactions is suppressed depending on the difference in area spanned by the possible string configurations (after or before a soft colour interaction) . This means at high $`Q^2`$ there are effectively no soft colour interactions.
## 3 Model comparison with the data
### 3.1 ARIADNE
These studies closely followed those of the previous tuning exercise performed at the ‘Future Physics at HERA’ workshop. However, they have been extended: jet data were now available for inclusion in the comparisons; and, the behaviour of the parameter PARA(25) was considered for the first time. ARIADNE version 4.10 has been investigated including the modified treatment of high $`Q^2`$ DIS events (MHAR(151) = 2.)
The four model parameters considered are listed in Table 1, which includes a short description of their influence. The two parameters PARA(10) and PARA(15) govern the slope of the suppression line (in the phase space available for gluon emission) for the proton and the struck quark, respectively. PARA(25) governs the probability of emissions outside the soft suppression cut-off, while PARA(27) corresponds to the square root of the primordial $`k_T^2`$ in the proton.
#### 3.1.1 Approach 1
This approach concentrated on HERA data at $`Q^2>80\mathrm{GeV}^2.`$ The motivation for this was to minimize the theoretical uncertainties in the generator associated with parton evolution in the low $`(x,Q^2)`$ region. The distributions that were most sensitive to the parameters under investigation were first ascertained. Next a combined $`\chi ^2`$ was calculated for each parameter setting according to
$$\chi _{Comb}^2=\frac{1}{nsets}\underset{i=1}{\overset{nsets}{\sum }}\chi _i^2$$
(1)
where $`\chi _i^2`$ represents the total (average) $`\chi ^2`$ per degree of freedom (d.o.f) of data set $`i`$. The parameter combination that yields the minimum of the overall $`\chi _{Comb}^2`$ corresponds to the tuned result.
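A minimal sketch of such a grid-scan tuning is given below. It is purely schematic: the parameter grid, the data-set names and the per-dataset $`\chi ^2`$ values are invented stand-ins for the actual ARIADNE runs and HzTool comparisons.

```python
import itertools

# Toy stand-in for the real comparison: in the actual analysis each value comes
# from generating ARIADNE events and filling HzTool histograms for one data set.
# Here an invented quadratic "bowl" is used so that the script runs end to end.
TOY_OPTIMUM = {"PARA(10)": 1.5, "PARA(15)": 0.5, "PARA(25)": 5.0, "PARA(27)": 0.6}

def chi2_per_dof(params, dataset):
    return 1.0 + sum((params[k] - TOY_OPTIMUM[k]) ** 2 for k in params)

# Assumed illustrative grid values; they are not the ranges actually scanned.
grid = {
    "PARA(10)": [1.0, 1.5, 2.0],
    "PARA(15)": [0.5, 1.0],
    "PARA(25)": [5.0, 9.0],
    "PARA(27)": [0.6, 0.8],
}
datasets = ["xp_breit", "et_flow", "event_shapes", "frag_funcs", "jet_shapes", "jet_rate"]

best = None
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    # Eq. (1): average the per-data-set chi^2/d.o.f. over the n data sets
    chi2_comb = sum(chi2_per_dof(params, d) for d in datasets) / len(datasets)
    if best is None or chi2_comb < best[0]:
        best = (chi2_comb, params)

print("tuned parameters:", best[1], "combined chi2/dof:", round(best[0], 3))
```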
The following distributions for $`Q^2>80\mathrm{GeV}^2`$ were investigated:
* scaled momentum $`x_p`$ distributions in the current region of the Breit frame ;
* flows of transverse energy in the hadronic centre of mass system ;
* differential distributions and evolution of mean of event shape variables thrust $`T_c`$ and $`T_z`$, jet broadening $`B_c`$ and jet mass $`\rho _c`$ in the current region of the Breit frame ;
* fragmentation functions and charged particle multiplicities in the current region of the Breit frame ;
* differential and integrated jet shapes as a function of pseudorapidity $`\eta `$ and transverse energy $`E_T`$ ;
* (2+1) jet event rate as a function of the transferred momentum squared, $`Q^2`$ .
Table 2 summarises the total $`\chi ^2`$ for each of the six sets of data, together with the combined $`\chi _{Comb}^2`$ given by equation 1. The results of tuning the parameters, set 1 Table 1, agree very well with those previously obtained . Both transverse energy flows and the jet data strongly favour high values for PARA(10), in contrast to the lower value favoured by the charged particle momentum distributions . Again the value PARA(15)=0.5 yields a better overall description of data, as compared to its default value of 1.0. The jet variables are particularly sensitive to PARA(15). The behaviour of PARA(25) was studied for the first time. Although its influence is not large in general, it clearly has a significant effect on the predictions related to the (2+1) jet event rate and on the transverse energy flows. The results suggest a lower value compared to the default one. Parameter PARA(27) is a relatively insensitive parameter, but the data disfavour high values, such as 0.8-1.0 GeV, when describing transverse energy flows and jet shapes. The tuning has been performed using the GRV94 parton density function . The use of the parton density function CTEQ4M results in a slightly worse value for $`\chi _{Comb}^2`$.
The improvement achieved with the tuned parameters is mostly due to a better description of jet shapes and (2+1) jet event rates (see Figs. 1 and 2). This improved agreement with the jet data leads to a slightly worse description of other distributions such as fragmentation functions and transverse energy flows (see Figs. 3 and 4). Conversely, the new treatment at high $`Q^2`$ of ARIADNE describes much better than before transverse energy flows and event shape variables, but lessens the agreement with the data on jet shapes. The current study seems to suggest that a simultaneous description of jet and charged particle distributions is difficult.
#### 3.1.2 Approach 2
The 2nd approach has additionally investigated the behaviour of ARIADNE for $`Q^2<80\mathrm{GeV}^2`$ and has also concentrated on different data sets than those used in approach 1. In particular, new preliminary data from H1 on dijet production have been used along with charged particle distributions in the $`\gamma ^{*}P`$ centre-of-mass frame . In addition, the $`E_T`$ flows and the charged particle distributions in the Breit frame have been considered but, again, at lower $`Q^2`$ values than approach 1. ARIADNE 4.10, with the high $`Q^2`$ modifications, has been studied using CTEQ4L for the parton density parametrisation.
Investigations showed that parameter PARA(10) was very sensitive to the dijet cross section, especially at low $`Q^2,`$ and was also sensitive, to a lesser degree, to the $`E_T`$ flows. Parameter PARA(25) also influenced the agreement with the dijet measurement and to the rapidity distribution of hard $`p_T`$ charged particles but otherwise displayed little sensitivity to the data. The other two parameters, PARA(15) and PARA(27), displayed a lesser sensitivity to the data, though the hard $`p_T`$ particles and the dijet distributions proved the most affected to changes in these parameters.
Figure 5 shows the sensitivity of the dijet cross section to PARA(10). The default ARIADNE produces $`E_T`$ spectra for the dijets that are too hard, with the discrepancy predominantly occurring in the forward region, $`\eta _{fwd,lab}>1.0.`$ At low $`Q^2`$ there is a large variation in the $`E_T`$ spectrum but PARA(10) affects the distribution at both low and high $`E_T`$, which results in this parameter alone not being able to describe the complete $`E_T`$ spectra. This problem can be circumvented by varying PARA(25) in conjunction with PARA(10). Variation in PARA(25) alone gives larger fractional changes in the cross section at large $`E_T`$ than at smaller values of $`E_T`$, see Fig. 6.
The influence of PARA(10) on the $`E_T`$ flows can be seen in Fig. 7. The increase of this parameter suppresses $`E_T`$ production across the whole $`\eta `$ range. A similar effect is seen in the charged particle rapidity distribution, particularly for particles with $`p_T>1\mathrm{GeV}.`$ As can be seen from the $`x_p`$ spectra, Fig. 7, the current region of the Breit frame seems relatively insensitive to this parameter. The $`E_T`$ flows are less sensitive to PARA(25) than PARA(10), see Fig. 8. However the data seem to prefer values of PARA(25) smaller than the default. This preference is also true for the charged particle rapidity distributions regardless of any $`p_T`$ selection, for the default value of PARA(10).
The average $`\chi ^2`$ for the low and high $`Q^2`$ region, as well as the combined $`Q^2`$ regions, are shown in Table 3 at various settings of PARA(10) and PARA(25). An improved fit to the data was found for all distributions for the parameters listed as set 2 in table 1. It should be noted though that a comparison with the ZEUS jet shapes was not included in the data sets investigated in this approach.
### 3.2 HERWIG
HERWIG overall has fewer tunable parameters than the Lund family of generators . In particular the cluster fragmentation model has far fewer tunable parameters than the Lund string model. Many of the parameters are well constrained by $`e^+e^{-}`$ annihilation data. Consequently, those involved with the hard subprocess and the perturbative QCD evolution of the final state parton shower were not varied for this study. It was found previously , that of the remaining parameters only a small number were seen to have any sensitivity to the distributions under study in DIS. Therefore it was decided to limit this study primarily to the effects of the CLMAX and PSPLT parameters, where CLMAX relates to the maximum allowed cluster mass and PSPLT is the exponent in generating the mass distribution of split clusters.
#### 3.2.1 Approach 1
The data studied in this approach corresponds to the same data sets considered in Approach 1 for ARIADNE but also extended to lower $`Q^2.`$ In addition to the parameters CLMAX and PSPLT, this study investigated the parameter DECWT, which provides the relative weight between decuplet and octet baryon production relevant to the new decay tables. The dependence on the parton density parameterisation (pdf) of the proton has also been investigated by studying CTEQ4L and MRSD- pdfs. Even though the MRSD- has in principle been retracted by the authors and is known to be too high at low $`x`$ it was used here to provide a more significant variation of the underlying distribution.
The three parameters were studied over the following ranges:
$`2.0<`$ $`\mathrm{CLMAX}`$ $`<5.0`$
$`0.6<`$ $`\mathrm{PSPLT}`$ $`<0.9`$
$`0.6<`$ $`\mathrm{DECWT}`$ $`<0.8.`$
The effect of increasing CLMAX is to increase the $`E_T`$ flow as does increasing PSPLT. Increasing the $`E_T`$ flow with CLMAX has the effect of broadening the jet shapes and producing harder momenta spectra for the charged particles. This is thought to be due to the fact that the clusters are allowed to have more energy before they are forced to split. Reducing DECWT increases the $`E_T`$ flow predictions at low values of PSPLT with a smaller reduction or slight increase for larger values of PSPLT.
An attempt was made to tune the standard HERWIG (using MRSD- parton density functions) and compare with tuned values from LEP data from L3. The best values of the parameters achieved for the HERA data are listed in Table 4. Neither the ‘tuned’ set 1 parameters nor the L3 parameters can describe the transverse energy flows at low $`x`$ and $`Q^2`$, see Fig. 9, whilst at higher $`Q^2`$ and $`x`$ the ‘tuned’ values give a better description of the HERA data than the L3 values. The jet shape distributions also prefer the ‘tuned’ values. With the parameters chosen for investigation it was not possible to achieve a consistent description of the data at both low and high $`(x,Q^2).`$
In an attempt to overcome the difficulty in obtaining sufficient $`E_T`$ at higher $`Q^2`$ without using very high values of CLMAX, an investigation of HERWIG with LO running $`\alpha _s`$ was made. Two sets of parameter settings are shown in Table 4 for this modified HERWIG, in conjunction with MRSD-. Set 2 gives the best description of the $`E_T`$ flows, whilst conversely set 3 gives a better description of the jet shape data. Figure 10 compares the HERWIG model predictions for the $`E_T`$ flows with the data. Set 2 describes these distributions well over the whole $`x`$ and $`Q^2`$ range. Set 3 also improves the description of this data in the highest $`Q^2`$ bins, though it underestimates the data in the lowest bins. Figure 11 compares the HERWIG predictions, with the parameter sets, to the jet shape data. Set 3 gives a better description of this data than using the set 2 parameters. Set 2 predicts jets broader than that observed in the data and is in poor agreement with the data.
Investigation of the sensitivity of the data (and the subsequent parameter settings) to the choice of parton density functions was made in the modified HERWIG for the MRSD- and CTEQ4L parametrisations. In particular the $`E_T`$ flows and the jet shapes were sensitive to the choice of parton densities. A 4th parameter set was found using the CTEQ4L parton densities. Again a consistent description of both the $`E_T`$ flows and the jet shapes was not possible. The parameter set listed in table 5 gave a better description of the $`E_T`$ flows than the jet shapes. The $`\chi ^2`$ achieved for the various parameter settings of HERWIG using both MRSD- and CTEQ4L are given in Table 5 for the $`x_p`$ distribution in the current region and the $`E_T`$ flows.
#### 3.2.2 Approach 2
This study considered the same data samples as approach 2 for ARIADNE. Only the H1 modified HERWIG, with the running of $`\alpha _s`$ at leading order, has been considered. The parton densities used in this approach correspond to CTEQ4L.
At high $`Q^2`$ a reasonable description of the dijet data by HERWIG could be obtained only if a larger (than default) value of $`\alpha _s`$ ($`\mathrm{\Lambda }=250\mathrm{M}\mathrm{e}\mathrm{V}`$) was used, Figure 12. At low $`Q^2`$ HERWIG was unable to achieve a good description of the dijet data. The dijet cross sections were relatively insensitive to changes in the hadronisation parameters.
In Figure 12 the DISENT $`𝒪(\alpha _s)`$ predictions (using $`Q^2`$ as the renormalisation scale) are compared to HERWIG. (The DISENT program incorporates a NLO calculation for DIS at the parton level; it can also be used to obtain partonic LO predictions.) The HERWIG predictions are in agreement with the DISENT LO calculation. In the same figure, it is also shown that the NLO corrections (K-factors), in particular at low $`Q^2`$ and forward pseudorapidities $`\eta _{\mathrm{fwd},\mathrm{lab}}\sim 2`$, are large. The parton showers used to emulate higher orders in HERWIG are insufficient to account for these large NLO corrections.
In contrast to the dijet data, the $`E_T`$ flows and the rapidity distributions of charged particles exhibit a strong dependence on the fragmentation parameters. As in approach 1, the current region of the Breit frame and the high $`Q^2`$ data prefer different settings of PSPLT and CLMAX parameters than does the low $`Q^2`$ data. The results are summarised in Table 6 and the HERWIG predictions are compared to the data in figure 13. The high $`Q^2`$ and the Breit frame current region data prefers settings of $`\mathrm{CLMAX}=3.0`$ (the default) and $`\mathrm{PSPLT}=1.2`$ whilst the low $`Q^2`$ data favour a higher value of $`\mathrm{CLMAX}=5.0`$ with a slightly lower value of $`\mathrm{PSPLT}=1.0.`$
Although variation of the fragmentation parameters leads to large changes in the prediction of the HERWIG model, the underlying parton dynamics in HERWIG are not sufficient to describe the HERA DIS data.
### 3.3 LEPTO
The new version (6.5.2$`\beta `$) of LEPTO was confronted with preliminary high statistics $`(2+1)`$ jet data from the H1 collaboration (statistical errors only on the data). This data set consists of DIS events which are all forced to be of a $`(2+1)`$ jet configuration using the modified Durham algorithm. The distributions studied were $`y_2`$, the cut-off in the algorithm where an event is first defined as $`(2+1)`$, and the angles in the laboratory frame of the forward and backward going jet ($`\theta _{\mathrm{fwd}}`$ and $`\theta _{\mathrm{bwd}}`$). In addition the jet variables $`x_{\mathrm{jet}}`$, defined as $`Q^2/(Q^2+\widehat{s})`$ where $`\widehat{s}`$ is the invariant mass squared of the jet (parton) pair, and $`z_p`$, defined as $`1/2(1-\mathrm{cos}\theta ^{*})`$ where $`\theta ^{*}`$ is the polar angle of the jet in the photon-parton centre of mass system, were investigated.
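Both jet variables are simple functions of the two jet four-momenta and the event $`Q^2`$; the following short sketch (with invented four-vectors, purely for illustration) shows how they are evaluated.

```python
import math

def invariant_mass_sq(p1, p2):
    """s_hat of the jet pair; four-vectors given as (E, px, py, pz) in GeV."""
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return e * e - (px * px + py * py + pz * pz)

def x_jet(q2, s_hat):
    return q2 / (q2 + s_hat)

def z_p(theta_star):
    """theta_star: polar angle of a jet in the photon-parton cms (radians)."""
    return 0.5 * (1.0 - math.cos(theta_star))

# Invented numbers, purely illustrative: two jets and Q^2 = 40 GeV^2
jet1 = (20.0, 5.0, 3.0, 19.1)
jet2 = (15.0, -4.0, -2.5, 14.2)
s_hat = invariant_mass_sq(jet1, jet2)
print("s_hat =", round(s_hat, 1), "GeV^2")
print("x_jet =", round(x_jet(40.0, s_hat), 3))
print("z_p   =", round(z_p(math.radians(60.0)), 3))
```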
The following parameters, that control the cut off in the $`𝒪(\alpha _s)`$ matrix element in the generator, were found to have significant impact on the description of the data and have been studied:
* PARL(8) $`z_p^{min}`$ cut off, and
* PARL(9) $`\widehat{s}^{min}`$ cut off.
The new SCI scheme implemented in LEPTO version 6.5.2$`\beta `$ leads to a dramatic improvement in the description of the data compared with version 6.5. The $`\chi ^2`$ is typically reduced by a factor of 5–6, see Table 7. The predictions of LEPTO compared to the data are shown in Figures 14 and 15. Further significant improvement in the description of the data with LEPTO 6.5.2$`\beta `$ was achieved by optimizing the parameters PARL(8) and PARL(9). The results of this optimization are shown in Table 8 (set 1) and the improvement can clearly be seen in the comparison with the data in Figure 15; the corresponding $`\chi ^2`$ values are given in table 7. It should also be noted that LEPTO describes the $`Q^2`$ dependence of the jet distribution well.
A complementary way to optimize LEPTO for jet distributions, instead of applying hard cuts on divergences of the matrix element (set 1), is to loosen these cuts so that LEPTO is forced to find appropriate divergency cuts on an event–by–event basis. The preferred values of PARL(8) and PARL(9) using this approach are listed in Table 8 (set 2) and the corresponding $`\chi ^2`$ values in Table 7.
The variation of the intrinsic $`k_T`$, PARL(3), and the cut–off value of the initial–state parton shower, PYPAR(22), had no effect on the quality of the description of the jet data. Also the jet data were insensitive to the choice of the parton density functions.
Although both approaches to describing the data with LEPTO, via PARL(8) and PARL(9), result in significant improvements, no satisfactory description of the measured 2–jet distributions could be achieved. The parameter sets 1 and 2 were then cross checked against the data samples used in approach 1 for ARIADNE but over the whole $`Q^2`$ range, see Table 9. Besides the $`(2+1)`$ jet rate and the charged particle $`x_p`$ distribution in the current fragmentation region, the default version of LEPTO 6.5.2$`\beta `$ gave a better description of the data.
## 4 Summary
During the course of the workshop new versions of the LEPTO and ARIADNE Monte Carlo generators were made available. These modified versions of the generators were in far better agreement with data.
An attempt was made to find sets of parameters for the ARIADNE, LEPTO and HERWIG generators that would describe the DIS HERA data. It proved difficult to find such a parameter set that would describe the whole range of distributions at both low and high $`Q^2.`$ A number of parameter sets are given for each generator that are optimised for a particular region of phase space.
This paper attempts to summarise a ‘snapshot’ of an ongoing program of work between experimentalists of both the H1 and the ZEUS collaborations and the authors of the event generators. The ultimate aim is to have event generators that are able to describe the complex structure of DIS events at HERA as impressively as they do the LEP data.
# hep-ph/9912539
## 1 Introduction
The hadronic final state in inclusive and diffractive deep inelastic scattering (DIS) can give a better understanding of the interplay between soft and hard processes in QCD. Whereas hard interactions are well described by perturbative QCD, soft interactions are not calculable within perturbation theory. Instead more phenomenological models are used to transform the perturbative partonic final state into an observable hadronic final state. It is normally assumed that the colour topology of an event is given by the planar approximation in perturbation theory, so that terms of order $`1/N_C^2`$ are neglected, and that this topology is not altered by soft interactions.
The models for soft colour interactions (SCI) and the generalised area law (GAL) for colour string re-interactions try to model additional soft colour exchanges which neither belong to the perturbative treatment nor the conventional hadronisation models. These soft colour exchanges can alter the colour topology and thereby produce a different final state, including such phenomena as large rapidity gaps and diffraction, as illustrated in Fig. 1.
In these models there is no sharp distinction between inclusive and diffractive events, which is the case in Regge-inspired models. Instead, there is a continuous transition between the different final states. The common assumption for the two models is that the soft colour exchanges factorises from the hard interactions which can therefore be described by standard perturbative methods, i.e. with matrix elements and parton showers. It is also assumed that compared to the perturbative interactions the momenta in the soft colour exchanges can be neglected and that their effect will be washed out by the hadronisation.
In this note we investigate the hadronic final states in inclusive and diffractive DIS resulting from the SCI and GAL models as implemented in the Monte Carlo program Lepto . In section 2 we give a short review of the two models. In section 3 we show how the diffractive structure function can be used to fix the amount of soft colour exchanges in the two models and compare with data on the hadronic final state in diffractive events (the $`X`$-system). Section 4 then compares the two models with data on inclusive hadronic final states. Finally, in section 5 we summarise and conclude.
## 2 Models for soft colour exchanges
The basic assumption of the soft colour interaction (SCI) model is that the partons produced in the hard interaction can have soft colour exchanges with the background colour field of the incoming hadron or hadrons. These exchanges can change the colour topology of the event as illustrated in Fig. 1. The probability for a soft colour exchange depends on non-perturbative dynamics and is thus not calculable at present and for simplicity it is therefore assumed to be a constant in the SCI model. Its value, $`R=0.5`$, is obtained by comparing the model with the diffractive structure function in DIS. As long as the SCI model represents interactions with a colour background field, it should only be applied to reactions with initial state hadrons.
Apart from being applicable in DIS the SCI model has also been successfully used to describe the surprisingly large quarkonium cross sections observed at the Fermilab Tevatron . A first comparison with quarkonium photoproduction at HERA is presented in . In addition the model describes diffractive $`W`$ and jet production at the Tevatron .
The generalised area law (GAL) model for colour string re-interactions is similar in spirit to the SCI model in that it is a model for soft colour exchanges. The main difference is that the GAL model is formulated in terms of interactions between the strings connecting the partons produced in an event. Thus the GAL model is also applicable to hadronic final states in $`e^+e^{-}`$, since it treats string re-interactions and should apply to all interactions producing strings.
Another important feature of the GAL model is that the probability for an interaction is not constant as in the SCI model. Instead there is a dynamical suppression factor giving the probability $`R=R_0\mathrm{exp}(-b\mathrm{\Delta }A)`$ for a string reconnection, where $`\mathrm{\Delta }A`$ is the difference between the areas in momentum space spanned by the strings in the two alternative string configurations and $`b`$ is one of the hadronisation parameters in the Lund model .
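A minimal sketch of how such an area-law suppressed reconnection decision could be coded is shown below; the numerical values of $`R_0`$ and $`b`$ are those quoted in the following paragraph, while the $`\mathrm{\Delta }A`$ values are invented for illustration.

```python
import math
import random

def reconnection_probability(delta_area, r0=0.1, b=0.45):
    """GAL-type suppression R = R0 * exp(-b * dA); b in GeV^-2, dA in GeV^2."""
    return r0 * math.exp(-b * delta_area)

def try_reconnect(delta_area, rng=random):
    """Accept or reject a single string reconnection."""
    return rng.random() < reconnection_probability(delta_area)

for d_a in (0.0, 1.0, 5.0, 10.0):   # invented area differences
    print(f"dA = {d_a:4.1f} GeV^2  ->  R = {reconnection_probability(d_a):.4f}")
```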
The parameters of the GAL model were obtained by making a simultaneous tuning to the diffractive structure function in DIS and the charged particle multiplicity distribution and momentum distribution for $`\pi ^\pm `$ in $`e^+e^{-}`$ annihilation at the $`Z`$-resonance. This resulted in $`R_0=0.1`$, $`b=0.45`$ GeV$`^{-2}`$ and $`Q_0=2`$ GeV, where $`Q_0`$ is the cut-off for initial and final state parton showers. It is not possible to have the Jetset default cut-off $`Q_0=1`$ GeV in the parton showers and simultaneously reproduce the multiplicity distribution. One might worry that the obtained cut-off is relatively large compared to the default value. However, it is not obvious that perturbation theory should be valid at such small scales when more exclusive final states are considered. Therefore, $`Q_0`$ can be considered as a free parameter describing the boundary below which it is more fruitful to describe the fragmentation process in terms of strings instead of perturbative partons.
Both the SCI and GAL models have been implemented in the LSCI routine in the Monte Carlo program Lepto . For the GAL model one also needs a new version of subroutine LEPTO, see the GAL homepage http://www3.tsl.uu.se/thep/rathsman/gal for details.
## 3 Hadronic final states in diffractive DIS
The diffractive structure function in DIS was obtained from the SCI and GAL models using a subroutine from the HzTool package and the CTEQ4 leading order parton distributions . The results are compared with H1 data in Fig. 2. The normalization parameters in the models, $`R`$ and $`R_0`$ respectively, were determined from this data. The default version of Lepto was used, except for the GAL model having the modified values of the cut-off in the parton showers and the hadronisation parameter $`b`$. In addition, version 2 of the sea-quark treatment (see ) was used for the GAL model with the width of the mean virtuality set to 0.44 GeV. However, the result is not sensitive to this choice.
The agreement between the resulting diffractive structure function calculated from the two models and H1 data is quite good as is shown in Fig. 2, especially if one takes into account that there is only one free parameter in the models. The variables $`x_{IP}\simeq \frac{Q^2+M_X^2}{Q^2+W^2}`$ and $`\beta \simeq \frac{Q^2}{Q^2+M_X^2}`$ are defined in terms of observable invariants that do not require interpretation within a particular model. As usual, $`Q^2`$ is the photon virtuality and $`W`$ the mass of the complete hadronic system. $`M_X^2=Q^2\frac{1-\beta }{\beta }`$ is the squared mass of the diffractive system $`X`$.
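These relations are straightforward to evaluate; a small sketch (with arbitrary example values, not data) also makes explicit that $`x=\beta x_{IP}`$ up to the approximations above.

```python
def diffractive_kinematics(q2, w, m_x):
    """x_IP and beta from Q^2, the hadronic mass W and the diffractive mass M_X (GeV units)."""
    x_pom = (q2 + m_x**2) / (q2 + w**2)
    beta = q2 / (q2 + m_x**2)
    return x_pom, beta

q2, w, m_x = 10.0, 120.0, 8.0          # arbitrary example values, not data
x_pom, beta = diffractive_kinematics(q2, w, m_x)
x_bj = q2 / (q2 + w**2)                # Bjorken x in the same approximation
print(f"x_IP = {x_pom:.4f}  beta = {beta:.3f}  beta*x_IP = {beta * x_pom:.6f}  x = {x_bj:.6f}")
```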
The Regge framework requires pomeron exchange at small $`x_{IP}`$ and other Regge exchanges in the transition region $`0.01<x_{IP}<0.1`$, whereas the SCI and GAL models describe the whole region in a more economic way. The GAL model fails only for small $`M_X`$, which are not included in the model because of the cut-off $`M_X^2>4`$ GeV$`^2`$ in the matrix-element. The SCI model also gives a good description of the data except for small $`Q^2`$ and small $`M_X^2`$. The reason for the SCI model overshooting the data at small $`Q^2`$ is probably related to the typically small number of perturbative partons produced at small $`Q^2`$. This in turn means that effectively the probability for a rapidity gap becomes larger. In the extreme case of only four partons in the final state the probability for a rapidity gap in the SCI model is $`R=0.5`$ since there are only two possible string configurations.
One may ask whether this kind of soft colour exchange models are essentially models for the pomeron. This is not the case as long as no pomeron or Regge dynamics is introduced. The behaviour of the data on $`F_2^D(\beta ,Q^2)`$, usually called the pomeron structure function, is in the SCI/GAL models understood as normal perturbative QCD evolution in the proton. The rise with $`lnQ^2`$ also at larger $`\beta `$ is simply the normal behaviour at the small momentum fraction $`x=\beta x_{IP}`$ of the parton in the proton. Here, $`x_{IP}`$ is only an extra variable related to the gap size or $`M_X`$ which does not require a pomeron interpretation. The flat $`\beta `$-dependence of $`x_{IP}F_2^D=\frac{x}{\beta }F_2^D`$ is due to the factor $`x`$ compensating the well-known increase at small-$`x`$ of the proton structure function $`F_2`$. For details of this and a general review of diffractive hard scattering see .
With the free parameters of the two models fixed from the diffractive structure function the models can be tested by comparing with the hadronic final state in diffractive events. The energy flow in Fig. 3a demonstrates that both models give a reasonable description of the data, with the SCI model doing slightly better. The ‘seagull’ plot in Fig. 3b also shows that the SCI model is very close to data and that the GAL model gives a reasonable description although the transverse activity is on the high side.
There are many other observables in diffractive events to which the models could be compared; in particular those related to the proton remnant system, such as $`t`$-dependence, momentum distribution for leading protons and neutrons etc. However, these observables are not directly related to the hadronic final state in the $`X`$-system and depend on a different part of the model contained in Lepto. Therefore we do not study such observables here. They deserve a dedicated investigation as initiated in .
## 4 Inclusive hadronic final states
With both models giving a good description of the hadronic final states in diffractive events it is imperative to check that they also can describe the inclusive hadronic final states in DIS. Energy flows in the hadronic cms is an important observable which we have investigated earlier and H1 has recently made a comprehensive comparison of their data with several models . However, a more detailed test is obtained by looking at the $`p_{}`$-spectrum for charged particles which is sensitive to the distribution of transverse energy and not only the average. We therefore consider this and other observables in the following.
A good starting point for such an investigation is the momentum distribution of particles in the current region of the Breit frame. This part of phase-space is expected to be well described by the models since it should not be affected by the proton remnant and therefore be similar to $`e^+e^{-}`$-annihilation. The distribution of scaled momentum $`x_p=2|\overline{p}|/Q`$ in this system is shown in Fig. 4. Although the overall agreement between the ZEUS data and the models is reasonable, it is clear that the SCI model gives too many soft particles (low $`x_p`$) and too few hard (high $`x_p`$) ones. The GAL model, and also Lepto without string topology rearrangements, describe the details of the data quite well.
The pseudo-rapidity distribution of charged particles in the detectable regions of the hadronic cms is shown in Fig. 5. Again the SCI model gives too many soft particles, whereas the GAL model is much closer to data and even better than Lepto without reconnections.
Looking at the pseudo-rapidity distribution of charged particles with $`p_{}`$ larger than 1 GeV changes the picture as shown in Fig. 6. Now both models as well as Lepto without string reconnections give too few particles in the central region. Thus one should not expect either version of Lepto to give the correct average transverse energy flow unless the lack of high-$`p_{}`$ particles is compensated by too many soft ones. From this one might be tempted to draw the conclusion that the cascade in Lepto gives the wrong $`p_{}`$ distribution. However, this need not be the case. The $`p_{}`$ distribution in Fig. 7 for events with large energy in the central region is well described by the GAL model and essentially also by Lepto without reconnections. Thus the $`p_{}`$ distribution is well reproduced by the cascade but there are too few events with large energy in the forward region. For the SCI model, on the other hand, more forward energy is made up of soft particles from ‘zig-zag’ shaped strings resulting in a too soft $`p_{}`$ distribution.
Another instructive observable is the energy-energy correlation which in $`e^+e^{-}`$ annihilation has been useful to study the internal structure of jets. In DIS one defines the transverse energy-energy correlation $`\mathrm{\Omega }(\omega )=1/N_{event}\sum _{events}\sum _{ij}E_iE_j/Q^2(1-y)`$ between pairs ($`ij`$) of hadrons separated a distance $`\omega _{ij}=\sqrt{(\eta _i-\eta _j)^2+(\varphi _i-\varphi _j)^2}`$. Fig. 8 shows this correlation in the two models and without reconnections compared to data from H1 . The SCI model has the wrong shape since the correlation is smeared out due to the formation of ‘zig-zag’ shaped strings. The suppression of such ‘long’ strings in GAL avoids this and produces a reasonably good description of the data.
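A schematic implementation of this correlation might look as follows; the event record format, the binning and the toy input are invented for illustration, and the azimuthal difference is folded into $`[0,\pi ]`$ as is customary.

```python
import math

def transverse_eec(events, n_bins=20, omega_max=6.0):
    """Transverse energy-energy correlation Omega(omega) following the definition above.

    events: list of dicts with keys 'Q2', 'y' and 'particles' = [(E_T, eta, phi), ...].
    """
    hist = [0.0] * n_bins
    width = omega_max / n_bins
    for ev in events:
        norm = ev["Q2"] * (1.0 - ev["y"])
        parts = ev["particles"]
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                (e1, eta1, phi1), (e2, eta2, phi2) = parts[i], parts[j]
                dphi = abs(phi1 - phi2)
                if dphi > math.pi:                      # fold azimuth into [0, pi]
                    dphi = 2.0 * math.pi - dphi
                omega = math.hypot(eta1 - eta2, dphi)
                k = int(omega / width)
                if k < n_bins:
                    hist[k] += e1 * e2 / norm
    centres = [(k + 0.5) * width for k in range(n_bins)]
    return centres, [h / len(events) for h in hist]

# toy input: a single event with three particles (all values invented)
toy = [{"Q2": 35.0, "y": 0.3,
        "particles": [(2.0, 1.0, 0.2), (1.5, -0.5, 2.9), (0.8, 0.1, 1.0)]}]
centres, omega = transverse_eec(toy)
print([round(o, 4) for o in omega])
```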
## 5 Summary and conclusions
We have shown that both the SCI and GAL models give satisfactory descriptions of the diffractive structure function and of more detailed hadronic properties of the $`X`$-system such as the energy flow and the seagull plot. However, when comparing with detailed properties of inclusive DIS final states it is clear that the SCI model fails in some respects, whereas the GAL model gives a description which is as good as or better than Lepto without string reconnections. Specifically, the SCI model gives too many soft particles both in current and target regions in the Breit frame whereas the GAL model gives a good description of soft particles but has too few particles with large $`p_{}`$, just as when having no reconnections, which results in the average transverse energy flow being too low compared to data . At the same time the GAL model gives a reasonable description of the $`p_{}`$-distribution in events with large energy in the central region. Thus it is too few events with high-$`p_{}`$ emissions that is the problem and not the modelling of the fragmentation process. In other words it is the cross-section for hard emissions that is too small in the model. This may be partly cured by adding resolved photon contributions as in Rapgap . From the energy-energy correlations it is also clear that the SCI model smears out the energy-energy correlations by making the string go ‘zig-zag’, whereas GAL only has minor effects on the energy-energy correlation.
One may consider whether the shortcomings of the SCI model are genuine or can be tuned away. The problem of giving too many soft particles is related to events where the string after SCI goes back-and-forth producing a zig-zag shape, i.e. a longer string. Hadronisation will then produce more, but softer hadrons. This helps to reproduce the inclusive transverse energy flow , but makes the agreement with some of the above observables worse. In principle one may be able to tune the hadronisation parameters to recover a good description of the data. We have chosen not to attempt this, since that would be against the principle of having a universal hadronisation model, with the same parameter values in DIS and $`e^+e^{-}`$. A possible way out for the SCI model could be to think of it not as interactions with a background field, but taking place generally between all partons in any type of event. Then it should also apply to $`e^+e^{-}`$ annihilation and the modified string topologies would require a retuning of the hadronisation parameters in Jetset in order to fit data. Although this might improve the ability of the SCI model to describe DIS data, we have not embarked on such a road because it has no substantial theoretical justification. Another possibility would be to extend the SCI model with some dynamics that suppresses the probability to get longer strings, similarly to the GAL model.
The problem of too many soft hadrons is solved in the GAL model by suppressing the probability for long and thereby ‘zig-zag’ strings. At the same time the problem with too few particles with $`p_{}>1`$ GeV remains and thus the average transverse energy flow is below the data . However, as already mentioned, the source of this problem is to be found in the matrix elements and parton showers describing the hard interactions and not in the soft hadronisation model.
In conclusion, it is far from easy to construct a single Monte Carlo model, based on reasonable physics input and few parameters, that can well describe all kinds of hadronic final states in all interactions. Nevertheless, this should be the goal.
# Detection of Pionium with DIRAC
## INTRODUCTION
The low-energy dynamics of strongly interacting hadrons is under the domain of non-perturbative QCD, or QCD in the confinement region. At present, low energy pion-pion scattering is still an unresolved problem in the context of QCD. However, the approach based on effective chiral Lagrangian has been able to provide accurate predictions on the dynamics of light hadron interactions . In particular, Chiral Perturbation Theory (CHPT) allows to predict the S-wave $`\pi \pi `$ scattering lengths at the level of few percent . Available experimental results, on their side, are much less accurate than theoretical predictions, both because of large experimental uncertainty and, in some cases, unresolved model dependency .
The DIRAC experiment aims at a model independent measurement of the difference $`\mathrm{\Delta }`$ between the isoscalar $`a_0`$ and isotensor $`a_2`$ S-wave $`\pi \pi `$ scattering lengths with $`5\%`$ precision, by measuring the lifetime of the pionium ground state with $`10\%`$ precision.
## PIONIUM
Pionium ($`A_{2\pi }`$) is a Coulomb weakly-bound system of a $`\pi ^+`$ and a $`\pi ^{-}`$, whose lifetime is dominated by the charge-exchange process to two neutral pions. The Bohr radius is 387 fm, the Bohr momentum 0.5 MeV/c, and the binding energy 1.86 keV. The decay probability is proportional to the atom wave function squared at zero pion separation and to the square of $`\mathrm{\Delta }=a_0-a_2`$. Using the values of scattering lengths predicted by CHPT, the lifetime of the $`\pi ^+\pi ^{-}`$ atom in the ground state is predicted to be $`3.25\times 10^{-15}`$ s .
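The bound-state numbers quoted above follow directly from hydrogen-like formulas with the reduced mass $`m_\pi /2`$; the short check below reproduces them (standard constants only, no DIRAC-specific input).

```python
# Hydrogen-like estimates for the pionium ground state, using hbar*c = 197.327 MeV fm
alpha = 1.0 / 137.036        # fine-structure constant
m_pi = 139.57                # charged pion mass (MeV)
hbarc = 197.327              # MeV fm
mu = m_pi / 2.0              # reduced mass of the pi+ pi- system

bohr_radius = hbarc / (alpha * mu)            # fm
bohr_momentum = alpha * mu                    # MeV/c
binding_energy = 0.5 * alpha**2 * mu * 1e3    # keV (n = 1)

print(f"Bohr radius    : {bohr_radius:6.1f} fm")      # ~387 fm
print(f"Bohr momentum  : {bohr_momentum:6.2f} MeV/c") # ~0.5 MeV/c
print(f"Binding energy : {binding_energy:6.2f} keV")  # ~1.86 keV
```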
### Production of $`A_{2\pi }`$
In DIRAC, $`\pi ^+\pi ^{-}`$ atoms are formed by the interaction of 24 GeV/c protons with nuclei in thin targets . If two final state pions have a small relative momentum in their system ($`q\sim 1`$ MeV/c), and are much closer than the Bohr radius, then the $`A_{2\pi }`$ production probability, due to the high overlap, is large. Such pions originate from short-lived sources (like $`\rho `$ and $`\omega `$), but not from long-lived ($`\eta `$, $`K_s^0`$), because in the latter case the two-pion separation is larger than the Bohr radius. The production probability for $`A_{2\pi }`$ can then be calculated using the double inclusive production cross section for $`\pi ^+\pi ^{-}`$ pairs from short-lived sources, excluding Coulomb interaction in the final state . Evidence for $`A_{2\pi }`$ production was reported in a previous experiment .
### Fate of $`A_{2\pi }`$
Pionium travelling in matter can dissociate or break up into a pair of oppositely charged pions with small relative momentum ($`q<3`$ MeV/c) and hence with very small angular divergence ($`\theta <0.3`$ mrad). This process competes with the charge-exchange reaction or decay, if the target material is dense so that the atomic interaction length is similar to the typical decay length of a few GeV/c dimeson atom (a few tens of microns). In a 100$`\mu `$m Ni foil, for example, the $`A_{2\pi }`$ breakup probability ($`47\%`$) becomes larger than the annihilation probability ($`38\%`$). This breakup probability depends on the target nucleus charge Z, the target thickness, the $`A_{2\pi }`$ momentum, and on the $`A_{2\pi }`$ lifetime .
### Measurement of the $`A_{2\pi }`$ lifetime
For a target material of a given thickness, the breakup probability for pionium can be experimentally determined from the measured ratio of the number of dissociated atoms ($`n_A`$) to the calculated number of produced $`A_{2\pi }`$ ($`N_A`$). Thus, by comparison with the theoretical value, known at the 1$`\%`$ level, the $`A_{2\pi }`$ lifetime can be determined.
The number $`n_A`$ of detected “atomic pairs” is obtained from the experimental distribution of relative momenta $`q`$ for pairs of oppositely charged pions. It is however necessary to subtract a background contribution, arising mainly from Coulomb-correlated pion pairs in the $`q`$ region, where the $`A_{2\pi }`$ signal is prominent ($`q<2`$ MeV/c). The low-$`q`$ background contribution is obtained with an extrapolation procedure using the shape of the accidental pair $`q`$-distribution recorded in the region $`q>3`$ MeV/c, taking into account e.m. and strong $`\pi ^+\pi ^{-}`$ final state interactions .
From the measured ratio $`n_A/N_A`$ a value for the $`A_{2\pi }`$ ground state lifetime can be extracted and, hence, a value for $`\mathrm{\Delta }=|a_0-a_2|`$.
## THE EXPERIMENTAL APPARATUS
The DIRAC experimental apparatus (Fig. 1) , devoted to the detection of charged pion pairs, was installed and commissioned in 1998 at the ZT8 beam area of the PS East Hall at CERN. After a calibration run at the end of 1998, DIRAC has been collecting data since the summer of 1999.
The primary PS proton beam of 24 GeV/c nominal momentum struck the DIRAC target. The non-interacting beam travels below the secondary particle channel (tilted upwards at 5.7<sup>o</sup> with respect to the proton beam), until it is absorbed by a catcher downstream of the setup. Downstream the experimental target, secondary particles travel across the following detectors: three planes of Micro-Strip Gas Chambers (MSGC) and two orthogonal stacks of scintillating fibers (Scintillating Fiber Detector SFD) to provide tracking information upstream of the spectrometer magnet; two planes of vertical scintillator slabs (Ionization Hodoscope IH) to detect the particle energy loss. Downstream the IH, the secondary beam enters a vacuum channel extending through the poles of the spectrometer magnet of 2.3 Tm bending power in the tilted horizontal plane. Downstream the analyzing magnet, the setup splits into two arms (inclined by 5.7<sup>o</sup> in the vertical plane, and open by $`\pm 19^o`$ in the horizontal plane) equipped with a set of identical detectors: 14 drift chamber (DC) planes, one plane of vertical scintillating strips (Vertical Hodoscope VH) and one of horizontal strips (Horizontal Hodoscope HH) for tracking purposes downstream of the magnet; furthermore, a N<sub>2</sub> gas-threshold Cherenkov counter (CH), a Pre-Shower Detector (PS), consisting of Pb converter plates and of vertical scintillator slabs, and a Muon counter (MU), consisting of an array of vertical scintillator elements placed behind a block of iron absorber, with the aim of performing particle identification at the trigger or offline levels.
Fig. 1. The DIRAC experimental apparatus.
A multi-level trigger was designed to reduce the secondary particles rate to a level manageable by the data acquisition system, and to yield the most favorable signal-to-noise ratio, by selecting pion pairs with low relative momentum in the pair system (or small opening angle and equal energies in the lab system) and by recording a sufficiently large number of accidental pairs for the offline analysis. An incoming flux of $``$10<sup>11</sup> protons/s would produce a rate of secondaries of about 3$`\times `$10<sup>6</sup>/s and 1.5$`\times `$10<sup>6</sup>/s in the upstream and downstream detectors, respectively. At the trigger level this rate is reduced to about 2$`\times `$10<sup>3</sup>/s, with an average event size of about 0.75 Kbytes. With the 95$`\mu `$m thin Ni target, the expected average $`A_{2\pi }`$ yield in the geometrical and momentum setup acceptance is $`0.7\times 10^3`$/s, equivalent to a total number of $`10^{13}`$ protons on target to produce one dimeson atom.
## RESULTS FROM FIRST DATA TAKING
A preliminary analysis was performed on a sample of data (Ni target) collected during this summer. The sample consisted of about $`10^7`$ events, corresponding to $`\sim `$1/3 of the statistics, accumulated in a 3-week run period. The data analysis was mostly dedicated to the calibration of individual detectors and to the tuning of reconstruction algorithms. However, some general features of the apparatus response were investigated, and some results will be presented hereafter.
Figure 2 shows the time difference between hit slabs in the left and right vertical hodoscopes for events with one single track reconstructed in each detector arm. Within a trigger window of 45 ns, one observes the peak of “on-time” hits associated to correlated particles, over the background from accidental hits. The width of the correlated-pair events yields the time resolution of the hodoscope ($`\sigma \simeq 420`$ ps at the time of measurement, recently improved to $`\sim 250`$ ps). The asymmetry on the right of the coincidence peak is due to admixture of protons in the “$`\pi ^+`$” sample, thus corresponding to events of the type $`\pi ^{-}p`$.
Fig. 2. Time difference between left and right VH scintillator slabs, hit by particles.
Such a contamination sample can be isolated by the time-of-flight measurement along the path from the target to the hodoscope. The discrimination between $`\pi ^{-}\pi ^+`$ and $`\pi ^{-}p`$ pairs is effective for momenta of positively charged particles below 4.5 GeV/c. This is shown in Fig. 3, where the laboratory momentum of the positive particle in the pair is shown as a function of the arrival-time difference in the vertical hodoscope. The spectrometer single particle momentum acceptance is within the range 1.3 to 7.0 GeV/c.
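The momentum dependence of this separation follows from elementary kinematics; the sketch below estimates the proton-pion time-of-flight difference over a fixed flight path. The path length is an assumed round number, not the actual DIRAC target-to-hodoscope distance.

```python
import math

C = 0.299792458                     # m/ns
M_PI, M_P = 0.13957, 0.93827        # GeV

def tof(p, m, path):
    """Time of flight (ns) for momentum p (GeV/c), mass m (GeV), path length (m)."""
    beta = p / math.hypot(p, m)
    return path / (beta * C)

PATH = 10.0   # m, assumed round number for illustration, NOT the DIRAC geometry
for p in (1.5, 3.0, 4.5, 6.0):
    dt_ps = (tof(p, M_P, PATH) - tof(p, M_PI, PATH)) * 1e3
    print(f"p = {p:3.1f} GeV/c : proton-pion time difference = {dt_ps:6.0f} ps")
```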
In Fig. 4, the distribution of the longitudinal component ($`q_L`$) of the relative momentum in the pair system is shown for two samples of events: those (Fig. 4a) occurring with time differences close to zero (real coincidence plus admixture of accidental pairs), associated to free pairs with and without final state interaction; and those (Fig. 4b) occurring at time differences far from the peak of correlated pairs (only accidental pairs).
Fig. 3. Momentum of positive particle as a function of time difference between left and right hit slabs of the vertical hodoscopes.
Finally (Fig. 4c), the $`q_L`$ distribution of correlated pion pairs is obtained from the difference between the distributions of Fig. 4a and 4b, taking into account the relative normalization factor. The distributions in Fig. 4 were obtained from a sample of two-track events, preselected with momentum of the positive particle less than 4.5 GeV/c, to reject unresolved $`\pi ^{}p`$ pairs, and with transverse component ($`q_T`$) of the relative momentum below 4 MeV/c, to increase the fraction of low relative momentum pairs. For values of $`q_L`$ corresponding to correlated pairs ($`|q_L|<10`$ MeV/c) the production cross section of Coulomb pairs is enhanced with respect to the cross section of non-Coulomb pairs: Coulomb attraction in the final state is responsible for the peak in the $`q_L`$ distribution (Fig. 4a and 4c) at small $`q_L`$.
A preliminary estimate of the number of pairs associated to $`A_{2\pi }`$ breakup results in a contribution of about 100 “atomic pairs” in the region $`|q_L|<2`$ MeV/c.
Fig. 4. Distribution of the longitudinal component of the relative momentum for: (a) time-correlated pairs; (b) accidental pairs; (c) spectrum of time-correlated minus accidental pairs.
The reconstruction of Coulomb-correlated $`\pi ^+\pi ^{}`$ pairs is sensitive to the precision of the setup alignment. Any misalignment of the tracking system in one arm relative to the other arm would generate asymmetrical errors on the reconstructed momenta. This would lead to a systematic shift and additional spread of the Coulomb enhanced peak in the $`q_L`$ distribution. The mean value of the Coulomb peak is 0.1 MeV/c, well within the accepted tolerances.
When reconstructed momenta of oppositely charged particles are symmetrically overestimated or underestimated, then a calibration using detected resonances is adequate. This is done by reconstructing the effective mass of $`\pi ^{-}p`$ pairs, also detected in the spectrometer. Figure 5 shows the invariant mass distribution of correlated $`\pi ^{-}p`$ pairs with proton momentum $`>`$3 GeV/c.
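The $`\mathrm{\Lambda }`$ reconstruction amounts to forming the invariant mass of the measured pion and proton momenta; a minimal sketch is given below, with invented track momenta chosen only to give a $`\mathrm{\Lambda }`$-like value.

```python
import math

M_PI, M_P = 0.13957, 0.93827        # GeV

def invariant_mass(p1, m1, p2, m2):
    """Invariant mass (GeV) of two particles from momentum 3-vectors (GeV/c) and masses."""
    e1 = math.sqrt(sum(c * c for c in p1) + m1 * m1)
    e2 = math.sqrt(sum(c * c for c in p2) + m2 * m2)
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - (px * px + py * py + pz * pz))

# invented momenta of a slow pion and a fast, nearly collinear proton (GeV/c),
# chosen only so that the pair mass comes out close to the Lambda mass
p_pion = (0.12, 0.0, 0.70)
p_proton = (0.0, 0.0, 3.90)
print(f"m(pi p) = {invariant_mass(p_pion, M_PI, p_proton, M_P):.4f} GeV")   # ~1.116 GeV
```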
Fig. 5. Invariant mass of reconstructed $`\pi ^{}p`$ pairs.
A clear signal at the nominal $`\mathrm{\Lambda }`$ mass is observed. Such events originate from a few GeV/c $`\mathrm{\Lambda }`$, with the decay proton emitted backward and the pion emitted forward in the $`\mathrm{\Lambda }`$ system, and both decay particles characterized by small transverse momenta. The experimental mean value and standard deviation of the mass peak are 1115.60 and 0.92 MeV/c$`^2`$, respectively. These mass parameter values suggest an excellent calibration of the momentum scale, with accuracy in momentum reconstruction better than 0.5$`\%`$ in the kinematic range of detected $`\mathrm{\Lambda }`$ decays, and the absence of errors in the telescope alignment, which otherwise would cause a displacement of the $`\mathrm{\Lambda }`$ mass peak value.
## CONCLUSION
The DIRAC experiment has begun to collect data this year. A preliminary investigation of the apparatus performances demonstrates its full capability to pursue the foreseen experimental program. Improvements to the hardware as well as software tools have already been implemented in the second run period, currently in progress. These will certainly result in better quality of the data and will contribute to reaching the aimed-for measurement precision of the pionium lifetime.
# Galaxy & Cluster Biasing from Local Group Dynamics
## 1 Introduction
Different classes of extragalactic objects trace the underlying matter distribution differently. The realization of such a behaviour arose from the fact that the amplitude of the 2-point correlation function of clusters of galaxies is significantly higher than that of galaxies (cf. Bahcall & Soneira 1983). This was suggested by Kaiser (1984) as a result of the clustering characteristics of different height peaks in an underlying random Gaussian field. A first order description of the effect is provided by linear biasing in which the extragalactic mass tracer fluctuation field is related to that of the underlying mass by:
$$\left(\delta \rho /\rho \right)_{\mathrm{tracer}}=b\left(\delta \rho /\rho \right)_{\mathrm{mass}}$$
(1)
with $`b`$ the linear bias factor. Even in this simplistic model the bias cannot be measured directly and only theoretical considerations or numerical simulations can provide some clues regarding its value. However, the relative bias between different tracers can be measured and such attempts, using their clustering properties, have provided interesting, although somewhat conflicting, results. Lahav, Nemiroff & Piran (1990) comparing the angular correlation function of different subsamples of the UGC, ESO and IRAS catalogues find an optical to IR galaxy bias factor, $`b_{O,I}`$, ranging from 1 to 2 with preferred value $`b_{O,I}\simeq 1.7`$. Babul & Postman (1990) using the spatial correlation function of the CfA and IRAS galaxies find $`b_{O,I}\simeq 1.2`$, while comparing the QDOT correlation function (Saunders, Rowan-Robinson & Lawrence 1992) with that of APM galaxies (Maddox et al. 1990) one finds $`b_{O,I}\simeq 1.4`$. Similarly, Oliver et al. (1996) comparing the clustering properties of the APM-Stromolo survey of optical galaxies and an extended IRAS redshift survey found $`b_{O,I}\simeq 1.2\pm 0.05`$. Strauss et al. (1992a) using the 1.936 Jy IRAS sample find that the overdensity ratio between CfA and IRAS galaxies within a sphere centered on Virgo with the Local Group on the periphery gives $`b_{O,I}\simeq 1.2`$ while their correlation function analysis provides discrepant results when comparing IRAS to CfA or SSRS optical galaxies (with $`b_{O,I}\simeq 2`$ and 1, respectively). Recently, Willmer, daCosta & Pellegrini (1998) using the SSRS2 sample of optical galaxies and comparing with the clustering properties of the 1.2 Jy IRAS survey find $`b_{O,I}\simeq 1.2`$ and $`1.4`$ ($`\pm 0.07`$) in redshift and real space respectively, while Seaborne et al (1999) performing a similar analysis between the PSC and Stromolo-APM redshift surveys find $`b_{O,I}\simeq 1.3\pm 0.1`$.
A different approach using the dynamics of the local group of galaxies, was proposed in Plionis (1995) and Kolokotronis et al. (1996). Traditionally, dynamical studies have been used in an attempt to constrain the value of the cosmic density parameter, $`\mathrm{\Omega }_{}`$, by assuming linear theory and comparing observed galaxy or cluster peculiar velocities with estimated accelerations (cf. Strauss & Willick 1995). However due to biasing only the combination $`\mathrm{\Omega }_{}^{0.6}/b`$ can be estimated. Such an analysis has been extensively applied to the Local Group of galaxies, since its peculiar velocity is accurately determined from the CMB temperature dipole (Kogut et al. 1996) and its gravitational acceleration can be measured from the dipole moment of the surrounding spatial distribution of different mass tracers. Within linear theory acceleration and peculiar velocity should be aligned and this indeed has been found to be the case using optical, IR galaxies, X-ray or optical cluster surveys and AGN’s (cf. reviews of Strauss & Willick 1995, Dekel 1997 and references therein). In the linear biasing framework the different mass tracers should therefore exibit similar dipole profiles differing only in their amplitudes, the ratio of which is a measure of their relative bias. Therefore, one can estimate the relative bias factor between different mass tracers, because in the intercomparison of their velocity-acceleration relations the $`\mathrm{\Omega }_{}`$ parameter as well as the velocity cancels out.
In this study we use the recently completed PSCz IRAS galaxy survey (Saunders et al. 1999), the SSRS2 optical galaxy catalogue (DaCosta et al. 1998) and a subsample of the Abell/ACO cluster catalogue (as defined in Branchini & Plionis 1996) to estimate their relative bias factors by comparing their dipole moments.
## 2 Method
Using linear perturbation theory one can relate the gravitational acceleration of an observer, induced by the surrounding mass distribution, to her/his peculiar velocity:
$$𝐯(𝐫)=\frac{\mathrm{\Omega }_{}^{0.6}}{b}\frac{1}{4\pi }\int \delta (𝐫)\frac{𝐫}{|𝐫|^3}d^3r=\frac{\mathrm{\Omega }_{}^{0.6}}{b}𝐃(r)$$
(2)
The dipole moment, $`𝐃`$, is estimated by weighting the unit directional vector pointing to the position of each tracer, with its gravitational weight and summing over the tracer distribution;
$$𝐃=\frac{1}{4\pi n}\sum _i\frac{1}{\varphi (r_i)r_i^2}\widehat{𝐫}_i$$
(3)
with
$$\varphi (r)=\frac{1}{n}\int _{L_{\mathrm{min}}(r)}^{L_{\mathrm{max}}}\mathrm{\Phi }(L)dL$$
(4)
where $`\mathrm{\Phi }(L)`$ is the luminosity function of the objects under study, $`L_{\mathrm{min}}(r)=4\pi r^2S_{\mathrm{lim}}`$, with $`S_{\mathrm{lim}}`$ the flux limit of the sample and $`n`$ is the mean tracer number density, given by integrating the luminosity function over the whole luminosity range.
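To make the estimator concrete, a minimal Python sketch of eqs. (3) and (4) is given below; the Schechter parameters, the flux limit and the toy catalogue are placeholders for illustration only, not the values or data used in this analysis.

```python
import numpy as np
from scipy.integrate import quad

def schechter(L, phi_star=1.0, L_star=1.0, alpha=-1.1):
    """Illustrative Schechter luminosity function Phi(L) (arbitrary units)."""
    x = L / L_star
    return (phi_star / L_star) * x**alpha * np.exp(-x)

L_max = 1e3                                   # upper luminosity cut (assumed)
n_mean = quad(schechter, 1e-3, L_max)[0]      # mean density from the full LF

def selection(r, S_lim=1e-2):
    """phi(r) = (1/n) * integral_{L_min(r)}^{L_max} Phi(L) dL, L_min = 4 pi r^2 S_lim."""
    L_min = max(4.0 * np.pi * r**2 * S_lim, 1e-3)
    return quad(schechter, L_min, L_max)[0] / n_mean

def dipole(positions):
    """D = (1/(4 pi n)) * sum_i rhat_i / (phi(r_i) r_i^2), as in eq. (3)."""
    D = np.zeros(3)
    for x in positions:
        r = np.linalg.norm(x)
        D += (x / r) / (selection(r) * r**2)
    return D / (4.0 * np.pi * n_mean)

# usage: cumulative dipole amplitude within increasing radii of a toy catalogue
galaxies = np.random.default_rng(0).normal(scale=50.0, size=(500, 3))
for R in (30.0, 60.0, 90.0):
    shell = [g for g in galaxies if np.linalg.norm(g) < R]
    print(R, np.linalg.norm(dipole(shell)))
```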
Using two different tracers, $`i`$ and $`j`$, of the underlying matter density field to determine the Local Group acceleration one can write: $`𝐯(𝐫)=\mathrm{\Omega }_{}^{0.6}𝐃_i(r)/b_i=\mathrm{\Omega }_{}^{0.6}𝐃_j(r)/b_j`$ and therefore we can obtain an estimate of their relative bias factor from:
$$b_{ij}(r)=\frac{b_i}{b_j}(r)=\frac{𝐃_i}{𝐃_j}(r)$$
(5)
Since the dipole is a cumulative quantity and at each distance it depends on all previous shells, we cannot define an unbiased $`\chi ^2`$ statistic to fit eq.5. Rather, we can obtain a crude estimate of the reliability of the resulting bias factor by estimating Pearson’s correlation coefficient, $`R_{i,j}`$, between the two dipole profiles (see Kolokotronis et al. 1996); a value $`R_{i,j}\approx 1`$ would indicate a perfect match of the two dipole profiles and thus a very reliable estimate of their relative linear bias factor.
A statistically more reliable approach is to assume that the differential dipoles, estimated in equal volume shells, are independent of each other and then fit $`b_{ij}`$ according to:
$$\chi ^2=\underset{k=1}{\overset{N_{bins}}{\sum }}\frac{(𝐃_{i,k}-b_{ij}𝐃_{j,k}-𝒞)^2}{\sigma _{i,k}^2+b_{ij}^2\sigma _{j,k}^2}$$
(6)
where $`𝒞`$ is the zero-point offset of the relation and $`\sigma `$ denotes the corresponding shot-noise errors, estimated by using either of two methods: a Monte-Carlo approach in which the angular coordinates of all tracers are randomized while keeping their distance, and thus their selection function, unchanged, or the analytic estimate of Strauss et al. (1992b), $`\sigma ^2\propto \varphi ^{-1}r^{-4}(\varphi ^{-1}+1)`$.
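A schematic implementation of this fit might look as follows; the generic minimiser used here stands in for the FITEXY routine quoted in Section 4, and the shell dipole amplitudes and errors are assumed to be tabulated beforehand.

```python
import numpy as np
from scipy.optimize import minimize

def chi2(params, Di, Dj, si, sj):
    """chi^2 of eq. (6): sum_k (D_i,k - b*D_j,k - C)^2 / (s_i,k^2 + b^2 s_j,k^2)."""
    b, C = params
    return np.sum((Di - b * Dj - C)**2 / (si**2 + b**2 * sj**2))

def fit_relative_bias(Di, Dj, si, sj):
    """Return the best-fit relative bias b_ij and zero-point C."""
    res = minimize(chi2, x0=[1.0, 0.0], args=(Di, Dj, si, sj), method="Nelder-Mead")
    return res.x

def shot_noise_mc(r, weights, n_real=100, rng=np.random.default_rng(0)):
    """Monte-Carlo shot noise: keep each object's distance (and weight) but
    scatter it to a random direction, then measure the spread of the dipole."""
    amps = []
    for _ in range(n_real):
        u = rng.normal(size=(len(r), 3))
        u /= np.linalg.norm(u, axis=1)[:, None]
        amps.append(np.linalg.norm(np.sum(weights[:, None] * u, axis=0)))
    return np.std(amps)

# usage: Di, Dj hold the differential dipole amplitudes in equal-volume shells,
# si, sj the corresponding shot-noise errors; then
#   b_ij, C = fit_relative_bias(Di, Dj, si, sj)
```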
## 3 Data
We use in our analysis three different catalogues of mass tracers;
* The recently completed IRAS flux-limited 60-$`\mu `$m redshift survey (PSCz) which is described in Saunders et al. (1999). It is based on the IRAS Point Source Catalogue and contains $`\sim 15000`$ galaxies with flux $`>0.6`$ Jy. The subsample we use, defined by $`|b|\ge 10^{\circ }`$ and limiting galaxy distance of 180 $`h^{-1}`$ Mpc, contains $`10097`$ galaxies and covers $`82\%`$ of the sky.
* The SSRS2 catalogue of optical galaxies (DaCosta et al. 1998) which is magnitude limited to $`m_B=15.5`$ and contains 3573 galaxies in the South ($`-40^{\circ }\le \delta \le -2.5^{\circ }`$, $`b\le -40^{\circ }`$) and 1939 galaxies in the North ($`\delta \le 0^{\circ }`$, $`b\ge 35^{\circ }`$), covering in total 13.5% of the sky.
* A volume limited subsample of the Abell/ACO cluster catalogue, with $`|b|\ge 10^{\circ }`$ and limited within 180 $`h^{-1}`$ Mpc (see Branchini & Plionis 1996). Our sample contains 197 clusters.
### 3.1 Determining distances from redshifts
All heliocentric redshifts are first transformed to the Local Group frame using $`cz=cz_{\odot }+300\mathrm{sin}(l)\mathrm{cos}(b)`$. We then derive the distance of each tracer by using:
$$r=\frac{2c}{H_{}}\left(1-(1+z-\delta z)^{-1/2}\right)(1+z-\delta z)^{-3/2}$$
(7)
where $`H_{}=100h`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> and $`\delta z`$ is a non-linear term to correct the redshifts for the tracer peculiar velocities:
$$\delta z=\frac{1}{c}(𝐮(r)-𝐮(0))\cdot \widehat{r}$$
(8)
with $`𝐮(0)`$ the peculiar velocity of the Local Group and $`𝐮(r)`$ the peculiar velocity of a galaxy or cluster at position $`𝐫`$. Instead of using elaborate 3D reconstruction schemes (cf. Schmoldt et al. 1999; Branchini & Plionis 1996; Branchini et al. 1999; Rowan-Robinson et al. 1999) to estimate this term, we decided to use a rather simplistic velocity field model (see Basilakos & Plionis 1998) to treat consistently all three data sets (a self-consistent 3D reconstruction of the SSRS2 density field is in any case not possible due to the small area covered by the survey). Our simplistic velocity field model was found in Basilakos & Plionis (1998) to be sufficient in order to recover the IRAS 1.2Jy and QDOT 3-D dipole. We remind the reader of the main assumptions of this model:
(a) The tracer peculiar velocities can be split into two vector components: that of a bulk flow and that of a local non-linear term:
$$𝐮(r)=𝐕_{bulk}(r)+𝐮_{\mathrm{nl}}(r)$$
(9)
(b) The first component dominates, so that
$$𝐮(r)\cdot \widehat{𝐫}\approx 𝐕_{bulk}(r)\cdot \widehat{𝐫}$$
(10)
We then use the observed bulk flow direction and profile, as a function of distance, given by Dekel (1997) and combined with that of Branchini, Plionis & Sciama (1996). The zero-point, $`V_{bulk}(0)`$, and the direction of the bulk flow are estimated by applying eq.(9) at $`r=0`$ and assuming, due to the “coldness” of the local velocity field (cf. Peebles 1988), that $`𝐮_{\mathrm{nl}}(0)\approx 𝐮_{\mathrm{inf}}=200`$ km/sec (where $`u_{\mathrm{inf}}`$ is the LG infall velocity to the Virgo Supercluster).
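The distance assignment of eqs. (7)-(10) can be sketched as follows; the bulk-flow profile and apex below are placeholder functions, not the Dekel (1997) and Branchini, Plionis & Sciama (1996) profiles actually adopted.

```python
import numpy as np

C_LIGHT = 2.998e5      # km/s
H0      = 100.0        # h km/s/Mpc

def v_bulk(r):
    """Placeholder bulk-flow amplitude profile in km/s (decaying with distance)."""
    return 350.0 * np.exp(-r / 60.0)

def bulk_apex():
    """Placeholder unit vector for the bulk-flow apex."""
    d = np.array([0.5, -0.7, 0.5])
    return d / np.linalg.norm(d)

def distance_from_redshift(cz_lg, rhat, u_lg, n_iter=5):
    """Iterate eqs. (7)-(8): r depends on delta_z, which depends on u(r).rhat."""
    z = cz_lg / C_LIGHT
    r = cz_lg / H0                                         # starting guess: Hubble law
    for _ in range(n_iter):
        u_r  = v_bulk(r) * np.dot(bulk_apex(), rhat)       # eq. (10)
        dz   = (u_r - np.dot(u_lg, rhat)) / C_LIGHT        # eq. (8)
        zeff = z - dz
        r = (2.0 * C_LIGHT / H0) * (1.0 - (1.0 + zeff)**-0.5) * (1.0 + zeff)**-1.5  # eq. (7)
    return r

# usage: a tracer at cz = 6000 km/s along rhat, with an assumed LG peculiar velocity
rhat = np.array([1.0, 0.0, 0.0])
u_lg = 600.0 * bulk_apex()
print(distance_from_redshift(6000.0, rhat, u_lg))
```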
### 3.2 Galaxy densities
To estimate the local acceleration field it is necessary to recover the true galaxy density field from the observed flux-limited samples. This is done by weighting each galaxy by $`\varphi ^{-1}(r)`$, where $`\varphi (r)`$ is defined in eq.4. For the PSCz sample we use the Saunders et al. (1990) luminosity function derived from the QDOT catalogue, with $`L_{min}=7.5\times 10^7h^{-2}L_{\odot }`$ since lower luminosity galaxies are not represented well in the available samples (cf. Rowan-Robinson et al. 1990), and $`L_{max}=10^{13}h^{-2}L_{\odot }`$. For the SSRS2 sample we use the Schechter luminosity function of Marzke et al. (1998) with $`M_{max}=-22`$ and $`M_{min}=-13.8`$.
In figure 1 we present the mean density and its Poissonian uncertainty of PSCz and SSRS2 galaxies in their common area (that of the SSRS2 sample) and in equal volume shells (with $`\delta V\approx 4.5\times 10^6`$ $`h^{-3}`$ Mpc<sup>3</sup>). Their densities are extremely comparable, differing only by a constant factor ($`\rho _O/\rho _I=2.03\pm 0.16`$).
## 4 Results
### 4.1 Optical to IR galaxy bias
We first present the results of the intercomparison of the SSRS2 and PSCz samples in their common sky area and for $`r\ge 15`$ $`h^{-1}`$ Mpc. In figure 2a we show the amplitudes of the two dipoles as a function of distance from the LG. The monotonic dipole increase reflects the fact that we are measuring only the component of the whole sky dipole which is due to the particular area covered by the sky restricted SSRS2 sample. It is apparent that the shapes of the two dipole amplitudes are extremely similar, giving correlation coefficient $`R\approx 0.97`$.
In figure 2b we present the direct dipole ratio (eq.5) in the LG frame (open symbols), while the filled squares show the results of the fit of eq.6, as a function of the maximum distance used. No significant differences are found when correcting distances for peculiar velocities. It is evident that the different estimates are consistent with each other, especially for $`r>50`$ $`h^{-1}`$ Mpc where the direct dipole ratio becomes flat. It is essential, however, to verify whether such a good dipole-profile correlation could result solely from the small solid angle used, i.e., to investigate whether the survey geometry, coupled with the galaxy selection function, dominates the dipole signal of both samples. To this end we have generated 100 mock SSRS2 samples by reshuffling the galaxy angular coordinates while leaving their distances, and thus selection function, unchanged. If the reshuffled dipole profile resembles that of the original SSRS2 one, then this would indicate the existence of the previously suggested bias. In figure 3 we present both (a) the dipole ratio between the original SSRS2 dipole and the reshuffled one together with its scatter and (b) the SSRS2 and PSCz dipole ratio with the latter rescaled by $`b_{O,I}`$. It is evident that the former ratio deviates significantly from one, an indication that our comparison is not dominated by the suspected bias.
Using the differential equal volume dipoles to fit eq.6 for $`10\le r\le 185`$ $`h^{-1}`$ Mpc, we find $`b_{O,I}\approx 1.24\pm 0.04`$, with zero-point $`𝒞\approx 105\pm 50`$ km/sec and $`\chi ^2\approx 6.3`$ for 6 degrees of freedom. The fit is performed using the FITEXY routine of Press et al. (1992) and the uncertainties of the fitted parameters correspond to the $`\mathrm{\Delta }\chi ^2=1`$ confidence region boundary. The small zero-point offset could be due to uncertainties in tracing the very local contributions to the LG dipole. Indeed, integrating the SSRS2 and PSCz dipoles for $`r>15`$ $`h^{-1}`$ Mpc (presented in figure 2) we find no zero-point offset, $`𝒞\approx 40\pm 45`$ km/sec, but a slightly smaller bias factor $`b_{O,I}\approx 1.17\pm 0.04`$, with $`\chi ^2\approx 7.5`$ for 6 degrees of freedom. We derive a mean estimate of the bias factor and its uncertainty by varying both the inner and outer dipole integration limits in eq.5 and by taking into account the slight differences of the results based on the differential dipole (eq.5). The resulting optical to IR galaxy bias factor is:
$$b_{O,I}\approx 1.21\pm 0.06.$$
Our result is in quite good agreement (within $`1\sigma `$) with that of Seaborne et al (1999), which is based on the PSCz and APM galaxy clustering properties on relatively smaller scales than those probed by our dipole analysis.
### 4.2 Rich cluster to IR galaxy bias
In the case of the cluster and PSCz samples we have a nearly full sky coverage, except at low-galactic latitudes. In order to recover the whole sky dipole we use a spherical harmonic approach to ”fill” the unobserved part of the sky (cf. Lahav 1987; Plionis & Valdarnini 1991). This approach has been found to provide compatible results with the cloning and interpolating method (see Branchini & Plionis 1996 for such a comparison in the context of the cluster dipole).
The main drawback of comparing the cluster and galaxy dipoles arises from the fact that the Abell/ACO cluster distribution is incomplete in the local universe, since it does not include the Virgo cluster, an important contributor of the local velocity field (cf. Tully & Shaya 1984). Therefore the direct comparison of the dipole amplitudes is hampered by this zero-point uncertainty. We can attempt to correct for this problem by:
* including the local Virgo contribution to the cluster dipole by assigning an appropriate Abell number count ($`N_A`$) weight to the Virgo cluster,
* excluding from the PSCz dipole the very near contributions ($`\lesssim 8h^{-1}`$ Mpc).
A first attempt to derive the cluster to IR galaxy bias, using such a procedure and the Abell/ACO and QDOT catalogues, was presented in Plionis (1995). If clusters and galaxies do trace the same underlying field, as indeed appears to be the case (cf. Branchini et al. 1999; Branchini, Zehavi, Plionis & Dekel 1999), then we should be able to fit the two profiles, using eq.6, varying the Virgo cluster weight. The appropriate value of $`N_A`$ is that for which the zero-point, $`𝒞`$, of the fit vanishes and thus we will consider as our preferred bias parameter the corresponding value of $`b_{C,I}`$. Statistically, this procedure does not provide a rigorous significance indication, due to the fact that the dipoles are cumulative quantities, but it provides a means of comparing quantitatively the two dipole profiles. In order to test the robustness of the resulting bias parameter we fit the two dipole profiles as a function of distance.
In figure 4a we present the resulting bias parameter versus the zero point, $`𝒞`$, for the Virgo cluster weights that provide a fit with $`𝒞\approx 0`$. The different connected point arrays correspond to results based on the different $`N_A`$ weights while different points in each array correspond to different upper distance limits used for the fit, which increase as indicated by the arrow. Taking into account that the zero-point uncertainty is $`\delta 𝒞\approx 130`$ km/sec we conclude that the Virgo cluster weight for which $`𝒞\approx 0`$ is $`N_A=24\pm 4`$, confirming the notion that Virgo corresponds to a richness class $`R=0`$ Abell cluster. A consistency check, that we pass with success, is that the Virgocentric infall velocity that corresponds to this weight is $`u_{\mathrm{inf}}\approx 300\pm 40`$ km/sec, a value in agreement with most available determinations.
In figure 4b we present the fitted bias parameter as a function of upper distance limit (points) and the direct dipole ratio (eq.5) as broken lines for the $`N_A=24`$ case. Both seem consistent within their uncertainties, especially for distances of $`140`$–$`150h^{-1}`$ Mpc, which roughly corresponds to the apparent cluster dipole convergence depth.
The main result regarding the bias parameter is:
$$b_{C,I}\approx 4.3\pm 0.8,$$
which interestingly is mostly independent of the Virgo cluster weights (as can be seen in figure 4a), since such differences are absorbed in the value of $`𝒞`$. The uncertainty in $`b_{C,I}`$ reflects (a) variations due to different Virgo weights, (b) the variation between the eq.5 and eq.6 solutions and (c) the scatter around these solutions (see figure 4b). It does not include, however, the cosmic variance which results from the use of only one observer. Our results are in excellent agreement with those of Peacock & Dodds (1994) and Branchini et al. (1999b) based on completely different approaches.
In figure 5 we present a direct comparison of the PSCz and Abell/ACO cluster dipoles, out to 150 $`h^{-1}`$ Mpc, after having scaled down the latter by $`b_{C,I}=4.3`$. The errorbars in the cluster dipole reflect mainly the uncertainty of the cluster density between the Abell and ACO parts of the sample (see Plionis & Kolokotronis 1998 and references therein). The two profiles are in excellent agreement, at least, up to $`150h^{-1}`$ Mpc with correlation coefficient $`R=0.86`$. This is a further indication that the two density fields are consistent with each other out to these distances and supports the existence of dipole contributions from large depths (see also Schmoldt et al. 1999; Basilakos & Plionis 1998), suggestions which were first put forward by Plionis (1988), on the basis of the Lick counts, by Rowan-Robinson et al. (1990) on the basis of the QDOT survey and by Scaramella et al. (1991) and Plionis & Valdarnini (1991) on the basis of Abell/ACO clusters. A thorough investigation of the deep PSCz dipole (distances $`>150h^{-1}`$ Mpc) will be presented in Rowan-Robinson et al. (2000).
## 5 Conclusions
We have used a novel approach, based on the Local Group dipole properties, to estimate the relative bias parameter of different mass tracers. We find that the optical to IR galaxy bias parameter is $`b_{O,I}\approx 1.21\pm 0.06`$, while the rich cluster to IR galaxy bias is $`b_{C,I}\approx 4.3\pm 0.8`$. Our results are in good agreement with others based on different approaches. We find that the IR galaxy and rich cluster dipole profiles are extremely compatible once the latter is rescaled by $`b_{C,I}`$ out to at least $`150h^{-1}`$ Mpc.
## Acknowledgements
S.B. thanks the Greek State Fellowship Foundation for financial support (Contract No 2669). We thank V. Kolokotronis for useful comments.
# General Relativity + Quantum Mechanics $`\Rightarrow `$ Discretized Momentum
H. Gopalkrishna Gadiyar
E-mail: padma@imsc.ernet.in
Abstract
The analogy between General Relativity and monopole physics is pointed out and the presence of a 3-cocycle which corresponds to a source leads to discretization of field momentum. This is analogous to the same phenomena in monopole physics.
Recently many attempts have been made to discretize spacetime, cut off momenta and hence improve quantum field theory. In this telegraphic note we wish to draw an analogy between General Relativity and monopole physics which leads to the same effect.
Recall that in monopole physics the triple commutator of velocity
$`v=p-eA`$, where $`B=\nabla \times A`$, is given by
$$[v^1,[v^2,v^3]]+[v^2,[v^3,v^1]]+[v^3,[v^1,v^2]]=e\nabla \cdot B.$$
In the presence of a monopole of strength $`g`$ located at the origin
$$e\nabla \cdot B=4\pi ge\delta (r).$$
The failure of the Jacobi identity signals the occurrence of a 3-cocycle which leads to the quantization of $`eg`$.
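For completeness, the algebra behind the triple commutator can be sketched as follows (natural units with $`\mathrm{}=1`$ and unit mass are assumed in this aside):

```latex
% With v_i = p_i - e A_i one has
\begin{aligned}
  [v_i, v_j] &= i e\,\epsilon_{ijk} B_k, \qquad [v_i, f(\mathbf{x})] = -i\,\partial_i f,\\
  [v^1,[v^2,v^3]] &= [v^1,\, i e B_1] = e\,\partial_1 B_1,\\
  \sum_{\mathrm{cyclic}} [v^i,[v^j,v^k]]
    &= e\,(\partial_1 B_1+\partial_2 B_2+\partial_3 B_3) = e\,\nabla\!\cdot\!\mathbf{B}.
\end{aligned}
```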
This can also be viewed in the language of differential forms as the failure of the Bianchi identity for the curvature $`F`$. In the presence of the monopole it is easily seen that the quantization at $`eg`$ follows.
The analogy with General Relativity is based on the simple observation that the equation relating curvature and the energy momentum tensor of matter is
$${}^{*}G={}^{*}T.$$
As $`d^{*}G=0`$ is a consequence of the contracted Bianchi identity it follows that
$$d^{*}T=0$$
which is the conservation of momentum.
Let us for a moment assume that $`d^{*}T\ne 0`$. This would mathematically signal the appearance of the 3-cocycle in analogy with the monopole case. In combination with the laws of quantum physics this would lead to discretization of the corresponding ‘charge’ which is $`P^\mu =\int T^{\mu \nu }𝑑\sigma ^\nu `$. Hence the field momentum takes discretized values. In analogy with Dirac, a source for energy momentum would thus lead to discretization of field momentum just as the presence of a single monopole would lead to charge quantization.
The author wishes to thank Professor H.S. Sharatchandra and Professor G. Rajasekaran for discussions. He is grateful to Professor N.D. Haridass who taught him modern differential geometry.
References
There are many references to monopoles and cocycles. See for example,
R. Jackiw, Chern-Simons terms and cocycles in physics and mathematics, Quantum field theory and quantum statistics, Essays in honor of the sixtieth birthday of E.S. Fradkin, Editors: I.A. Batalin, C.J. Isham and G.A. Vilkovisky, Adam Hilger, 349-378.
We follow the standard notation. See for example,
Charles W. Misner, Kip S. Thorne and John A. Wheeler, Gravitation, W.H. Freeman & Company, 1970.
# Economic Fluctuations and Diffusion
## Abstract
Stock price changes occur through transactions, just as diffusion in physical systems occurs through molecular collisions. We systematically explore this analogy and quantify the relation between trading activity — measured by the number of transactions $`N_{\mathrm{\Delta }t}`$ — and the price change $`G_{\mathrm{\Delta }t}`$ for a given stock, over a time interval $`[t,t+\mathrm{\Delta }t]`$. To this end, we analyze a database documenting every transaction for 1000 US stocks over the two-year period 1994–1995 . We find that price movements are equivalent to a complex variant of diffusion, where the diffusion coefficient fluctuates drastically in time. We relate the analog of the diffusion coefficient to two microscopic quantities: (i) the number of transactions $`N_{\mathrm{\Delta }t}`$ in $`\mathrm{\Delta }t`$, which is the analog of the number of collisions and (ii) the local variance $`w_{\mathrm{\Delta }t}^2`$ of the price changes for all transactions in $`\mathrm{\Delta }t`$, which is the analog of the local mean square displacement between collisions. We study the distributions of both $`N_{\mathrm{\Delta }t}`$ and $`w_{\mathrm{\Delta }t}`$, and find that they display power-law tails. Further, we find that $`N_{\mathrm{\Delta }t}`$ displays long-range power-law correlations in time, whereas $`w_{\mathrm{\Delta }t}`$ does not. Our results are consistent with the interpretation that the pronounced tails of the distribution of $`G_{\mathrm{\Delta }t}`$ are due to $`w_{\mathrm{\Delta }t}`$, and that the long-range correlations previously found for $`|G_{\mathrm{\Delta }t}|`$ are due to $`N_{\mathrm{\Delta }t}`$.
Consider the diffusion of an ink particle in water. Starting out from a point, the ink particle undergoes a random walk due to collisions with the water molecules. The distance covered by the particle after a time $`\mathrm{\Delta }t`$ is
$$x_{\mathrm{\Delta }t}=\underset{i=1}{\overset{N_{\mathrm{\Delta }t}}{\sum }}\delta x_i,$$
(2)
where $`\delta x_i`$ are the distances that the particle moves in between collisions, and $`N_{\mathrm{\Delta }t}`$ denotes the number of collisions during the interval $`\mathrm{\Delta }t`$. The distribution $`P(x_{\mathrm{\Delta }t})`$ is Gaussian with a variance $`x_{\mathrm{\Delta }t}^2=N_{\mathrm{\Delta }t}w_{\mathrm{\Delta }t}^2`$, where the local mean square displacement $`w_{\mathrm{\Delta }t}^2(\delta x_i)^2`$ is the variance of the individual steps $`\delta x_i`$ in the interval $`[t,t+\mathrm{\Delta }t]`$.
For the classic diffusion problem considered above: (i) the probability distribution $`P(N_{\mathrm{\Delta }t})`$ is a “narrow” Gaussian, i.e., has a standard deviation much smaller than the mean $`N_{\mathrm{\Delta }t}`$, (ii) the times between collisions of an ink particle are not strongly correlated, so $`N_{\mathrm{\Delta }t}`$ at any future time $`t+\tau `$ depends at most weakly on $`N_{\mathrm{\Delta }t}`$ at time $`t`$—i.e., the correlation function $`\langle N_{\mathrm{\Delta }t}(t)N_{\mathrm{\Delta }t}(t+\tau )\rangle `$ has a short-range exponential decay, (iii) the distribution $`P(w_{\mathrm{\Delta }t})`$ is also a narrow Gaussian, (iv) the correlation function $`\langle w_{\mathrm{\Delta }t}(t)w_{\mathrm{\Delta }t}(t+\tau )\rangle `$ has a short-range exponential decay and (v) the variable $`ϵ\equiv x_{\mathrm{\Delta }t}/(w_{\mathrm{\Delta }t}\sqrt{N_{\mathrm{\Delta }t}})`$ is uncorrelated and Gaussian-distributed. These conditions result in $`x_{\mathrm{\Delta }t}`$ being Gaussian distributed and weakly correlated.
An ink particle diffusing under more general conditions would result in a quite different distribution of $`x_{\mathrm{\Delta }t}`$, such as in a bubbling hot spring, where the characteristics of bubbling depend on a wide range of time and length scales. In the following, we will present empirical evidence that the movement of stock prices is equivalent to a complex variant of classic diffusion, specified by the following conditions: (i) $`P(N_{\mathrm{\Delta }t})`$ is not a Gaussian, but has a power-law tail, (ii) $`N_{\mathrm{\Delta }t}`$ has long-range power-law correlations, (iii) $`P(w_{\mathrm{\Delta }t})`$ is not a Gaussian, but has a power-law tail, (iv) the correlation function $`\langle w_{\mathrm{\Delta }t}(t)w_{\mathrm{\Delta }t}(t+\tau )\rangle `$ is short ranged, and (v) the variable $`ϵ\equiv x_{\mathrm{\Delta }t}/(w_{\mathrm{\Delta }t}\sqrt{N_{\mathrm{\Delta }t}})`$ is Gaussian distributed and short-range correlated. Under these conditions, the statistical properties of $`x_{\mathrm{\Delta }t}`$ will depend on the exponents characterizing these power laws.
Just as the displacement $`x_{\mathrm{\Delta }t}`$ of a diffusing ink particle is the sum of $`N_{\mathrm{\Delta }t}`$ individual displacements $`\delta x_i`$, so also the stock price change $`G_{\mathrm{\Delta }t}`$ is the sum of the price changes $`\delta p_i`$ of the $`N_{\mathrm{\Delta }t}`$ transactions in the interval $`[t,t+\mathrm{\Delta }t]`$,
$$G_{\mathrm{\Delta }t}=\underset{i=1}{\overset{N_{\mathrm{\Delta }t}}{\sum }}\delta p_i.$$
(3)
Figure 1a shows $`N_{\mathrm{\Delta }t}`$ for classic diffusion and for one stock (Exxon Corporation). The number of trades for Exxon displays several events the size of tens of standard deviations and hence is inconsistent with a Gaussian process .
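To make the quantities $`N_{\mathrm{\Delta }t}`$, $`w_{\mathrm{\Delta }t}`$ and $`G_{\mathrm{\Delta }t}`$ concrete, a minimal sketch of how they can be computed from a list of transaction times and prices is given below; it is only an illustration on synthetic ticks, not the code applied to the TAQ database analysed here.

```python
import numpy as np

def packet_quantities(times, prices, t0, dt):
    """For the window [t0, t0+dt] return N, w, G and eps = G/(w*sqrt(N))."""
    times, prices = np.asarray(times), np.asarray(prices)
    idx = np.where((times >= t0) & (times < t0 + dt))[0]
    if len(idx) < 2:
        return 0, np.nan, 0.0, np.nan
    dp = np.diff(prices[idx])        # transaction-to-transaction changes delta_p_i
    N = len(dp)                      # number of transactions N_dt
    w = dp.std()                     # local standard deviation w_dt
    G = dp.sum()                     # price change over the window, G_dt
    eps = G / (w * np.sqrt(N)) if w > 0 else np.nan
    return N, w, G, eps

# usage on synthetic ticks: Poissonian trade times and i.i.d. price increments
rng = np.random.default_rng(1)
t = np.cumsum(rng.exponential(1.0, size=20000))              # seconds between trades
p = 100.0 + np.cumsum(rng.normal(0.0, 0.01, size=20000))
print(packet_quantities(t, p, t0=0.0, dt=300.0))             # a 5-minute window
```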
(i) We first analyze the distribution of $`N_{\mathrm{\Delta }t}`$. Figure 1c shows that the cumulative distribution of $`N_{\mathrm{\Delta }t}`$ displays a power-law behavior $`P\{N>x\}\sim x^{-\beta }`$. For the 1000 stocks analyzed, we obtain a mean value $`\beta =3.40\pm 0.05`$ (Fig. 1d). Note that $`\beta >2`$ is outside the Lévy stable domain $`0<\beta <2`$.
(ii) We next determine the correlations in $`N_{\mathrm{\Delta }t}`$. We find that the correlation function $`\langle N_{\mathrm{\Delta }t}(t)N_{\mathrm{\Delta }t}(t+\tau )\rangle `$ is not exponentially decaying as in the case of classic diffusion, but rather displays a power-law decay (Fig. 1e,f). This result quantifies the qualitative fact that if the trading activity ($`N_{\mathrm{\Delta }t}`$) is large at any time, it is likely to remain so for a considerable time thereafter.
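The detrended-fluctuation measure used for Fig. 1e,f (described in the figure caption reproduced at the end of this section) can be sketched as follows; the box sizes and the linear detrending order are assumptions of this illustration.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: returns F(tau) for the given box sizes.
    Uncorrelated data give F ~ tau^0.5; persistent long-range correlations
    give a larger exponent."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for s in scales:
        resid = []
        for k in range(len(y) // s):
            seg = y[k * s:(k + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)       # local linear trend
            resid.append(np.mean((seg - np.polyval(coef, t))**2))
        F.append(np.sqrt(np.mean(resid)))
    return np.array(F)

# usage: exponent nu from a log-log fit of F(tau)
rng = np.random.default_rng(2)
N_t = rng.poisson(50, size=20000).astype(float)    # toy trading-activity series
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(N_t, scales)
nu = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(nu)    # ~0.5 for this uncorrelated toy series
```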
(iii) We then compute the variance $`w_{\mathrm{\Delta }t}^2\equiv \langle (\delta p_i)^2\rangle `$ of the individual changes $`\delta p_i`$ due to the $`N_{\mathrm{\Delta }t}`$ transactions in the interval $`[t,t+\mathrm{\Delta }t]`$ (Fig. 2a). We find that the distribution $`P(w_{\mathrm{\Delta }t})`$ displays a power-law decay $`P\{w_{\mathrm{\Delta }t}>x\}\sim x^{-\gamma }`$ (Fig. 2b). For the 1000 stocks analyzed, we obtain a mean value of the exponent $`\gamma =2.9\pm 0.1`$ (Fig. 2c).
(iv) Next, we quantify correlations in $`w_{\mathrm{\Delta }t}`$. We find that the correlation function $`\langle w_{\mathrm{\Delta }t}(t)w_{\mathrm{\Delta }t}(t+\tau )\rangle `$ shows only weak correlations (Fig. 2d,e). This means that $`w_{\mathrm{\Delta }t}`$ at any future time $`t+\tau `$ depends at most weakly on $`w_{\mathrm{\Delta }t}`$ at time $`t`$.
(v) Consider now $`\delta p_i`$ chosen only from the interval $`[t,t+\mathrm{\Delta }t]`$, and let us hypothesize that these $`\delta p_i`$ are mutually independent and with a common distribution $`P(\delta p_i|t[t,t+\mathrm{\Delta }t])`$ having a finite variance $`w_{\mathrm{\Delta }t}^2`$. Under this hypothesis, the central limit theorem implies that the ratio
$$ϵ\equiv \frac{G_{\mathrm{\Delta }t}}{w_{\mathrm{\Delta }t}\sqrt{N_{\mathrm{\Delta }t}}}$$
(4)
must be a Gaussian-distributed random variable with zero mean and unit variance. Indeed, for classic diffusion, $`x_{\mathrm{\Delta }t}/(w_{\mathrm{\Delta }t}\sqrt{N_{\mathrm{\Delta }t}})`$ is Gaussian-distributed and uncorrelated (Fig. 3a). We confirm this hypothesis by analyzing (a) the distribution $`P(ϵ)`$, which we find to be consistent with Gaussian behavior (Fig. 3b), and (b) the correlation function $`\langle ϵ(t)ϵ(t+\tau )\rangle `$, for which we find only short-range correlations (Fig. 3c,d).
Thus far, we have seen that the data for stock price movements support the following results: (i) the distribution of $`N_{\mathrm{\Delta }t}`$ decays as a power-law, (ii) $`N_{\mathrm{\Delta }t}`$ has long-range correlations, (iii) the distribution of $`w_{\mathrm{\Delta }t}`$ decays as a power-law, (iv) $`w_{\mathrm{\Delta }t}`$ displays only weak correlations, and (v) the price change $`G_{\mathrm{\Delta }t}`$ at any time is consistent with a Gaussian-distributed random variable with a time-dependent variance $`N_{\mathrm{\Delta }t}w_{\mathrm{\Delta }t}^2`$, that is, the variable $`ϵ\equiv G_{\mathrm{\Delta }t}/(w_{\mathrm{\Delta }t}\sqrt{N_{\mathrm{\Delta }t}})`$ is Gaussian-distributed and uncorrelated.
Next, we explore the implications of our empirical findings. Namely, we show how the statistical properties of price changes $`G_{\mathrm{\Delta }t}`$ can be understood in terms of the properties of $`N_{\mathrm{\Delta }t}`$ and $`w_{\mathrm{\Delta }t}`$. We will argue that the pronounced tails of the distribution of price changes are largely due to $`w_{\mathrm{\Delta }t}`$ and the long-range correlations previously found for $`|G_{\mathrm{\Delta }t}|`$ are largely due to the long-range correlations in $`N_{\mathrm{\Delta }t}`$. By contrast, in classic diffusion $`N_{\mathrm{\Delta }t}`$ and $`w_{\mathrm{\Delta }t}`$ do not change the Gaussian behavior of $`x_{\mathrm{\Delta }t}`$ because they have only uncorrelated Gaussian-fluctuations .
Consider first the distribution of price changes $`G_{\mathrm{\Delta }t}`$, which decays as a power-law $`P\{G_{\mathrm{\Delta }t}>x\}\sim x^{-\alpha }`$ with an exponent $`\alpha \approx 3`$. Above, we reported that the distribution $`P\{N_{\mathrm{\Delta }t}>x\}\sim x^{-\beta }`$ with $`\beta \approx 3.4`$ (Fig. 1c,d). Therefore, $`P\{\sqrt{N_{\mathrm{\Delta }t}}>x\}\sim x^{-2\beta }`$ with $`2\beta \approx 6.8`$. Equation (4) then implies that $`N_{\mathrm{\Delta }t}`$ alone cannot explain the value $`\alpha \approx 3`$. Instead, $`\alpha \approx 3`$ must arise from the distribution of $`w_{\mathrm{\Delta }t}`$, which indeed decays with approximately the same exponent $`\gamma \approx \alpha \approx 3`$ (Fig. 2b,c). Thus the power-law tails in $`P(G_{\mathrm{\Delta }t})`$ appear to originate from the power-law tail in $`P(w_{\mathrm{\Delta }t})`$.
Next, consider the long-range correlations found for $`|G_{\mathrm{\Delta }t}|`$ . Above, we reported that $`N_{\mathrm{\Delta }t}`$ displays long-range correlations, whereas $`w_{\mathrm{\Delta }t}`$ does not (Figs. 1–2). Therefore, the long range correlations in $`|G_{\mathrm{\Delta }t}|`$ should arise from those found in $`N_{\mathrm{\Delta }t}`$. Hence, while the power-law tails in $`P(G_{\mathrm{\Delta }t})`$ are due to the power-law tails in $`P(w_{\mathrm{\Delta }t})`$, the long-range correlations of $`|G_{\mathrm{\Delta }t}|`$ are due to those of $`N_{\mathrm{\Delta }t}`$.
In sum, we have shown that stock price movements are analogous to a complex variant of classic diffusion. Further, we have empirically demonstrated the relation between stock price changes and market activity, i.e., the price change at any time $`G_{\mathrm{\Delta }t}(t)`$ is consistent with a Gaussian-distributed random variable with a local variance $`N_{\mathrm{\Delta }t}w_{\mathrm{\Delta }t}^2`$. What could be the interpretations of our results for the number of transactions $`N_{\mathrm{\Delta }t}`$ and the local standard deviation $`w_{\mathrm{\Delta }t}`$? Since $`N_{\mathrm{\Delta }t}`$ measures the trading activity for a given stock, it is possible that its power-law distribution and long-range correlations may be related to “avalanches” . The fluctuations in $`w_{\mathrm{\Delta }t}`$ reflect several factors: (i) the level of liquidity of the market, (ii) the risk-aversion of the market participants and (iii) the uncertainty about the fundamental value of the asset.
e) In order to accurately quantify time correlations in $`N_{\mathrm{\Delta }t}`$, we use the method of detrended fluctuations often used to obtain accurate estimates of power-law correlations. We plot the detrended fluctuations $`F(\tau )`$ as a function of the time scale $`\tau `$ on a log-log scale for each of the 6 groups. Absence of long-range correlations would imply $`F(\tau )\sim \tau ^{0.5}`$, whereas $`F(\tau )\sim \tau ^\nu `$ with $`0.5<\nu \le 1`$ indicates power-law correlations with long-range persistence. For each group, we plot $`F(\tau )`$ averaged over all stocks in that group. In order to detect genuine long-range correlations, the U-shaped intraday pattern for $`N_{\mathrm{\Delta }t}`$ has been removed by dividing each $`N_{\mathrm{\Delta }t}`$ by the intraday pattern. For $`0.5<\nu <1.0`$, the correlation function exponent $`\nu _{cf}`$ and $`\nu `$ are related through $`\nu _{cf}=2-2\nu `$. f) The histogram of the exponents $`\nu `$ obtained by fits to $`F(\tau )`$ for each of the 1000 stocks shows a relatively narrow spread of $`\nu `$ around the mean value $`\nu =0.85\pm 0.01`$.
# Small-scale structure of cold dark matter
## 1 Introduction
Dark matter with an equation of state that has been non-relativistic ($`p\ll \rho `$) since structure formation started (around matter-radiation equality) is called cold dark matter. CDM gives rise to the hierarchical formation of large structures ($`>1`$ Mpc) in the Universe, i.e. small structures form first and grow to larger structures later. The growth of CDM density fluctuations is suppressed during the radiation dominated epoch. Thus the rms CDM density fluctuations go like $`k^3`$ at large scales, but increase only logarithmically with wavenumber at scales well below the Hubble scale at matter-radiation equality.
At very small scales ($`\ll 1`$ Mpc), the power spectrum and the evolution of CDM density fluctuations have not been discussed in detail so far, although the understanding of the small-scale behavior of CDM density fluctuations is essential for a realistic estimate of expected rates in CDM searches. The reason is that analytic calculations in this deeply non-linear regime are very challenging and numerical simulations do not have the dynamical range to resolve scales as small as the solar system. A priori one can say that there has to be a cut-off in the CDM power spectrum at some very small scale, otherwise the energy density in the fluctuations itself would be infinite.
The nature of CDM is unknown. Here we mention three popular candidates with very different properties: The first one is the lightest neutralino, which probably is the lightest supersymmetric particle . Its mass is expected to be in the range $`40`$ GeV – $`600`$ GeV . The neutralino is a mixture of the neutral gauginos and higgsinos, thus it interacts through weak interactions only. Below we assume that the lightest neutralino is the bino, because in the constrained minimal supersymmetric standard model the dominant contribution in the mix comes from the bino .
A second CDM candidate is the axion . The axion mass has been restricted to be $`10^{-6}`$ eV – $`10^{-2}`$ eV and the axion contributes to the dark matter if its mass is small. Axions interact much more weakly than weakly interacting particles, the interactions being suppressed by the Peccei-Quinn scale, which is $`10^{12}`$ GeV. Thus axions are never in thermal equilibrium with the radiation fluid. An example of small-scale structure in CDM has been found from the initial misalignment mechanism of axions by Hogan and Rees . It turns out that, if the Peccei-Quinn scale is below the reheating temperature after inflation, large isocurvature perturbations in the axion density are created once the axion mass is switched on during the QCD transition. It has been shown that axion mini-clusters with $`10^{-12}M_{\odot }`$ and radii of $`0.1R_{\odot }`$ might emerge, which may be observed by means of pico- and femtolensing .
Let us mention a third CDM candidate: primordial black holes . Their mass should be $`>10^{-16}M_{\odot }`$ in order to survive until today . Primordial black holes interact with the rest of the Universe via gravity only. They may be found or excluded by gravitational lensing .
## 2 Damping scales for CDM
For any thermal CDM species there are two mechanisms that contribute to the damping of small-scale fluctuations: During the kinetic decoupling of CDM from the radiation fluid the mean free path is finite and thus collisional damping occurs. After the kinetic decoupling has been completed free streaming can further wash out the remaining fluctuations.
For neutralinos, chemical freeze out happens at $`m/20>2`$ GeV. In contrast, kinetic decoupling happens at much smaller temperatures, because elastic interactions with the radiation fluid are possible at temperatures as low as $`1`$ MeV. However, due to the large momentum of the neutralinos, many collisions are needed for a significant change of the momentum. It turns out that $`N\approx m/T`$ collisions can keep the neutralinos in kinetic equilibrium and thus the relaxation time can be estimated as $`\tau \approx N\tau _{\mathrm{coll}}`$, where the collision time for a bino is given by
$$\tau _{\mathrm{coll}}\approx \left[5.5\left(\frac{G_\mathrm{F}M_W^2}{M^2-m^2}\right)^2T^5\right]^{-1},$$
(1)
with $`M`$ being the slepton mass and $`m`$ the mass of the bino. With a slepton mass $`M=200`$ GeV and a bino mass of $`m=100`$ GeV, the relaxation time is given by $`\tau \approx (10\text{ MeV}/T)^4t_\mathrm{H}`$, where $`t_\mathrm{H}`$ is the Hubble time. Kinetic decoupling of neutralinos happens at $`T\approx 10`$ MeV.
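A rough numerical check of this estimate is sketched below; the number of relativistic degrees of freedom and the identification of the Hubble time with $`1/H`$ are assumptions of this illustration, so the output should be read only as an order-of-magnitude cross-check of the quoted decoupling temperature.

```python
import numpy as np
from scipy.optimize import brentq

GF, MW, MPL = 1.166e-5, 80.4, 1.22e19     # GeV^-2, GeV, GeV
g_star = 10.75                            # assumed relativistic d.o.f. near 10 MeV

def tau_coll(T, M=200.0, m=100.0):
    """Eq. (1): collision time of a bino on the thermal background (GeV^-1)."""
    return 1.0 / (5.5 * (GF * MW**2 / (M**2 - m**2))**2 * T**5)

def tau_relax(T, M=200.0, m=100.0):
    """Relaxation time tau ~ N tau_coll with N ~ m/T collisions needed."""
    return (m / T) * tau_coll(T, M, m)

def t_hubble(T):
    """t_H ~ 1/H with H = 1.66 sqrt(g*) T^2 / M_Pl (radiation domination)."""
    return MPL / (1.66 * np.sqrt(g_star) * T**2)

# kinetic decoupling: the relaxation time equals the Hubble time
T_kd = brentq(lambda T: tau_relax(T) - t_hubble(T), 1e-4, 1.0)   # GeV
print(T_kd * 1e3, "MeV")   # of order tens of MeV for these parameter choices
```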
We incorporate dissipative phenomena by describing the CDM as an imperfect fluid . The coefficients of heat conduction, shear and bulk viscosity are estimated to be $`m\chi \approx \eta \approx \zeta \approx nT\tau `$, where $`n`$ is the number density of CDM particles. We find that the damping of density perturbations goes as
$$\delta \propto \mathrm{exp}\left[-\left(\frac{M_\mathrm{D}}{M}\right)^{0.3}\right],$$
(2)
where the damping scale depends on the mass of the neutralino and the slepton masses and is typically $`M_\mathrm{D}=10^{-13}M_{\odot }`$–$`10^{-10}M_{\odot }`$. For comparison, the CDM mass within a Hubble patch is $`10^{-4}M_{\odot }`$ at $`T\approx 10`$ MeV.
Free streaming leads to additional damping. The velocity of neutralinos right after kinetic decoupling is $`v\approx (T/m)^{1/2}\approx 10^{-2}`$. Free streaming also gives rise to exponential damping, due to the velocity dispersion. The typical free streaming scale is $`10^{-12}M_{\odot }`$–$`10^{-10}M_{\odot }`$. Thus both damping mechanisms operate approximately at the same scale. The power spectrum of neutralino CDM is cut off at $`M<10^{-12}M_{\odot }`$.
The mechanism of collisional damping also works for CDM in the form of a heavy neutrino ($`1`$ TeV), but it does not work for wimpzillas , because these are too heavy to ever be in thermal equilibrium. Free streaming induces a cut-off in the power spectrum for all mentioned CDM candidates. The scale and the strength of the damping depend on the masses and the primordial velocity distributions.
## 3 QCD induced CDM clumps
Besides damping mechanisms there are also processes that might enhance the primordial CDM spectrum at small scales. One is the formation of axion mini-clusters that we mentioned already in the introduction.
Together with Schmid and Widerin one of the present authors has found that large amplifications of density fluctuations might be induced by the QCD transition at scales $`10^{-20}M_{\odot }<M<10^{-10}M_{\odot }`$ . The mechanism is the following: During a first-order QCD transition the sound speed vanishes. Thus the density perturbations of the dominant radiation fluid go into free fall and create large peaks and dips in the spectrum, which grow at most linearly with the wavenumber. These peaks and dips produce huge gravitational potentials. CDM falls into these gravitational wells. It is important to note that this amplification mechanism works for matter that is kinetically decoupled at the QCD transition around $`150`$ MeV. For neutralinos this is not the case. Large inhomogeneities in the neutralinos would be washed out by collisional damping later on. A structure similar to the acoustic peaks in the photon-baryon fluid might survive. The large inhomogeneities in the radiation fluid are completely washed out during neutrino decoupling at $`1`$ MeV.
## 4 The first CDM objects
The smallest scales that survive damping are the first scales that go non-linear, thus these scales form the first gravitationally bound objects (apart from primordial black holes, if they exist) in the Universe. Let us estimate their size if the CDM is the neutralino. The scale is given by the cut-off at $`M\approx 10^{-12}M_{\odot }`$. With a COBE normalized CDM spectrum we find the rms density fluctuations at equality to be
$$\frac{\delta \rho }{\rho }\approx 2\times 10^{-4}\left[\mathrm{ln}\left(\frac{k_\mathrm{D}}{k_{\mathrm{eq}}}\right)\right]^{3/2}\left(\frac{k_\mathrm{D}}{k_{\mathrm{eq}}}\right)^{(n-1)/2},$$
(3)
which is $`2\times 10^{-2}(10^{-1})`$ for the spectral index $`n=1(1.2)`$. Thus these objects go nonlinear at or shortly after equality, at a redshift of $`10^2(10^3)`$. If we assume that these clouds are spherical they would collapse to a radius of $`10^3(10^2)R_{\odot }`$ today, which is by chance a very interesting scale for observations. Although $`10^{-12}M_{\odot }`$ in a volume of $`(10^2R_{\odot })^3`$ seems to be an extremely diluted cloud, the overdensity of such a cloud today would be $`10^{11}`$. In more optimistic scenarios (larger tilt and/or additional peaks in the spectrum from the QCD transition) it is even possible that these clouds are so compact that pico- and femtolensing can be used to search for them.
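These numbers can be reproduced schematically as follows; the CDM mass inside the horizon at equality, the redshift of equality, the virial overdensity factor and the present matter density are assumed values of this illustration, not quantities quoted in the text.

```python
import numpy as np

def delta_eq(k_ratio, n):
    """Eq. (3): rms CDM fluctuation at equality for k_D/k_eq = k_ratio, tilt n."""
    return 2e-4 * np.log(k_ratio)**1.5 * k_ratio**((n - 1.0) / 2.0)

# mass scales as k^-3, so k_D/k_eq follows from M_D and the CDM mass at equality
M_eq, M_D = 3e16, 1e-12                       # solar masses (M_eq is an assumption)
k_ratio = (M_eq / M_D)**(1.0 / 3.0)

rho_m0 = 178.0 * 0.3 * 2.8e11                 # virial factor x Omega_m x rho_crit (M_sun/Mpc^3)
MPC_IN_RSUN = 3.086e24 / 6.96e10              # conversion from Mpc to solar radii

for n in (1.0, 1.2):
    d = delta_eq(k_ratio, n)
    z_coll = d * 3200.0                       # linear growth ~ (1+z_eq)/(1+z), 1+z_eq ~ 3200
    rho_clump = rho_m0 * (1.0 + z_coll)**3    # virialised density at collapse
    R = (3.0 * M_D / (4.0 * np.pi * rho_clump))**(1.0 / 3.0)   # Mpc
    print(n, d, z_coll, R * MPC_IN_RSUN)      # ~1e3 (1e2) solar radii for n = 1 (1.2)
```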
It is unclear whether some of these first objects can survive up to today; this will be the subject of further studies. We think that understanding and revealing the small-scale structure of CDM might help us to learn more about the nature of CDM.
## 1 Introduction
ITEP-TH-31/99
LU-ITP 1999/022
15 December, 1999
Although the standard model does not possess topologically stable monopole– and vortex–like defects, one can define so-called embedded topological defects : Nambu monopoles and $`Z`$–vortex strings . In our numerical simulations of the electroweak theory we have found that the vortices undergo a percolation transition which, when there exists a discontinuous phase transition at small Higgs masses, accompanies the latter. The percolation transition persists at realistic (large) Higgs mass when the electroweak theory, instead of a transition, possesses a smooth crossover around some “crossover temperature” (see Refs. ).
We worked in the $`3D`$ formulation of the $`SU(2)`$ Higgs model. This report is restricted to results obtained in the crossover regime (assuming a Higgs boson mass $`\approx 100`$ GeV). Details of the lattice model can be found in . The defect operators on the lattice have been defined in . A nonvanishing integer value of the vortex operator $`\sigma _P`$ on some plaquette $`P`$ signals the presence of a vortex. The lattice gauge coupling $`\beta _G`$ is related to the $`3D`$ continuum gauge coupling $`g_3^2`$ and controls the continuum limit $`\beta _G=4/(ag_3^2)`$ ($`g_3^2\approx g_4^2T`$). The hopping parameter $`\beta _H`$ is related to the temperature $`T`$ (with the higher temperature, symmetric side at $`\beta _H<\beta _H^{\mathrm{cross}}`$).
## 2 Vortex profile
Our vortex defect operator $`\sigma _P`$ is constructed to localize a line-like object (in $`3D`$ space–time) with non-zero vorticity on the dual lattice. Within a given gauge field–Higgs configuration, a profile around that vortex “soul” would be hidden among quantum fluctuations. However, an average over all vortices in a quantum ensemble clearly reveals a structure that can be compared with a classical vortex . We have studied correlators of $`\sigma _P`$ with various operators constructed on the lattice (“quantum vortex profiles”).
Classically, in the center of a vortex the Higgs field modulus turns to zero and the energy density becomes maximal . What can be expected in a thermal ensemble is that along the vortex soul the (squared) modulus of the Higgs field and the gauge field energy density, $`E_P^g=1-\frac{1}{2}\mathrm{Tr}U_P`$, substantially differ from the bulk averages characterizing the corresponding homogeneous phase.<sup>1</sup><sup>1</sup>1Just on the “broken” side of the crossover, for instance, one would expect to find a core of “symmetric” matter inside the vortex. Indeed, in our lattice study they were found lower (or higher, respectively), with the difference growing entering deeper into the “broken phase” side of the crossover (see Figure 1).
To proceed we have studied, among others, the vortex–gluon energy correlator for plaquettes $`P_0`$ and $`P_R`$ located in the same plane (perpendicular to a segment of the vortex path)
$`C_E(R)=\langle \sigma _{P_0}^2E_{P_R}^g\rangle ,`$ (1)
as a function of the distance $`R`$ between the plaquettes.<sup>2</sup><sup>2</sup>2A similar method has been used to study the physical properties of Abelian monopoles in $`SU(2)`$ gluodynamics, Ref. .
To parametrize the vortex shape we fit the correlator data (1) by an ansatz $`C_E^{\mathrm{fit}}(R)=C_E+B_EG(R;m_E)`$ with constants $`C_E`$ and $`B_E`$ and an inverse penetration depth (effective mass $`m_E`$). The function $`G(R;m)`$ is the $`3D`$ scalar lattice propagator with mass $`2\mathrm{sinh}(m/2)`$ which, instead of a pure exponential, has been proposed to fit point–point correlators in Ref. .
If the quantum vortex profile should interpolate between the interior of the vortex and the asymptotic approach to the vacuum, we can only expect to describe the profile by such an ansatz for distances $`R>R_{\mathrm{min}}`$. The distance $`R_{\mathrm{min}}`$ (core size) should be fixed in physical units. Therefore we choose (in lattice units) $`R_{\mathrm{min}}(\beta _G)`$=$`\beta _G/8`$ for $`\beta _G`$=8,16,24 which corresponds to $`R^{\mathrm{core}}`$=$`aR_{\mathrm{min}}`$=$`(2g_3^2)^{-1}`$. How successful this is to define the vortex core can be assessed studying $`\chi ^2/d.o.f.`$ vs. $`R_{\mathrm{min}}`$ (to be reported elsewhere).
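A schematic version of such a fit is sketched below; the lattice size, the evaluation of the free propagator by a momentum sum, and the synthetic data are illustrative choices only, not the actual correlator data of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

L = 32  # lattice extent used to evaluate the free propagator

def lattice_propagator(m):
    """3D scalar lattice propagator G(R; m) with mass 2*sinh(m/2), via momentum sum."""
    p = 2.0 * np.pi * np.fft.fftfreq(L)
    px, py, pz = np.meshgrid(p, p, p, indexing="ij")
    phat2 = 4.0 * (np.sin(px / 2)**2 + np.sin(py / 2)**2 + np.sin(pz / 2)**2)
    M2 = (2.0 * np.sinh(m / 2.0))**2
    G = np.fft.ifftn(1.0 / (phat2 + M2)).real
    return G[:, 0, 0]                       # profile along one axis, R = 0..L-1

def ansatz(R, C, B, m):
    """C_E^fit(R) = C_E + B_E * G(R; m_E)."""
    return C + B * lattice_propagator(m)[R.astype(int)]

# usage on synthetic 'data' for R >= R_min
R = np.arange(2, 12)
data = ansatz(R, 0.48, 0.9, 0.35) + 1e-4 * np.random.default_rng(3).normal(size=len(R))
popt, pcov = curve_fit(ansatz, R, data, p0=[0.5, 1.0, 0.5],
                       bounds=([-1.0, 0.0, 0.01], [1.0, 10.0, 2.0]))
print(popt)   # recovered (C_E, B_E, m_E)
```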
An example of the behaviour of the effective mass $`m_E`$ is shown in Figure 2(a). The mass reaches its minimum at the crossover point $`\beta _H^{\mathrm{cross}}`$. Deeper on the symmetric side the quantum vortex profiles are squeezed compared to the classical ones due to Debye screening leading to a smaller coherence length. Approaching the crossover from this side the density of the vortices decreases thereby diminishing this effect. The extrapolation of the mass $`m_E`$ (as defined at the crossover temperature) towards the continuum limit is shown in Figure 2(b).
## 3 Inter–vortex interactions and the type of the vortex medium
In the case of a superconductor, the inter–vortex interactions define the type of superconductivity. If two parallel static vortices with the same sense of vorticity attract (repel) each other, the substance is said to be a type–I (type–II) superconductor. To investigate the vortex–vortex interactions we have measured two–point functions of the vortex currents:
$`\langle |\sigma _{P_0}||\sigma _{P_R}|\rangle =2(g_{++}+g_{+-}),\langle \sigma _{P_0}\sigma _{P_R}\rangle =2(g_{++}-g_{+-}),`$ (2)
where $`g_{+\pm }(R)`$ stands for contributions to the correlation functions from parallel/anti–parallel vortices piercing a plane in plaquettes $`P_0`$ and $`P_R`$. Properly normalized, the correlators $`g_{+\pm }(R)`$ can be interpreted as the average density of vortices (anti-vortices), relative to the bulk density, at distance $`R`$ from a given vortex.
Hence the long range tail of the function $`g_{++}`$ is crucial for the type of the vortex medium: in the case of attraction (repulsion) between same sign vortices $`g_{++}`$ exponentially approaches unity from above (below) while $`g_{+-}`$ is always attractive, independently of the type of superconductivity.
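Schematically, the two contributions can be accumulated from the plaquette vorticities in a plane as follows; the binning and the toy configuration are illustrative choices, and the raw histograms still have to be normalized by the bulk pair density to yield $`g_{++}`$ and $`g_{+-}`$.

```python
import numpy as np

def vortex_pair_histograms(sigma, r_max):
    """Same-sign and opposite-sign vortex pair counts versus distance, from a 2D
    array of plaquette vorticities sigma (+1, -1 or 0)."""
    pts = np.argwhere(sigma != 0)
    vals = sigma[tuple(pts.T)]
    same, opp = np.zeros(r_max), np.zeros(r_max)
    for a in range(len(pts)):
        for b in range(a + 1, len(pts)):
            R = int(round(np.linalg.norm(pts[a] - pts[b])))
            if 0 < R < r_max:
                if vals[a] == vals[b]:
                    same[R] += 1
                else:
                    opp[R] += 1
    return same, opp

# usage on a toy configuration of randomly placed (anti-)vortices
rng = np.random.default_rng(4)
sigma = rng.choice([0, 0, 0, 1, -1], size=(48, 48))
same, opp = vortex_pair_histograms(sigma, r_max=20)
print(same[:10], opp[:10])
```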
We have seen in our calculations that the tail of $`g_{++}`$ belongs to the attraction case (with minimal slope at the crossover). Therefore, electroweak matter in the crossover regime belongs to the type–I vortex vacuum class.
## Acknowledgments
The authors are grateful to P. van Baal, H. Markum, V. Mitrjushkin, S. Olejnik and M. I. Polikarpov for useful discussions. M. Ch. feels much obliged for the kind hospitality extended to him at the Max-Planck-Institute for Physics in Munich. M. Ch. was supported by the grants INTAS-96-370, RFBR-99-01-01230 and ICFPM fellowship (INTAS-96-0457).
# Spin diffusion in doped semiconductors
## Abstract
The behavior of spin diffusion in doped semiconductors is shown to be qualitatively different than in undoped (intrinsic) ones. Whereas a spin packet in an intrinsic semiconductor must be a multiple-band disturbance, involving inhomogeneous distributions of both electrons and holes, in a doped semiconductor a single-band disturbance is possible. For $`n`$-doped nonmagnetic semiconductors the enhancement of diffusion due to a degenerate electron sea in the conduction band is much larger for these single-band spin packets than for charge packets, and can exceed an order of magnitude at low temperatures even for equilibrium dopings as small as 10<sup>16</sup> cm<sup>-3</sup>. In $`n`$-doped ferromagnetic and semimagnetic semiconductors the motion of spin packets polarized antiparallel to the equilibrium carrier spin polarization is predicted to be an order of magnitude faster than for parallel polarized spin packets. These results are reversed for $`p`$-doped semiconductors.
preprint: submitted to Physical Review Letters
The motion and persistence of inhomogeneous electronic distributions are central to the electronic technologies based on semiconductors. Recently a broader category of possible disturbances, namely those involving inhomogeneous spin distributions in doped semiconductors, have been shown to exhibit long lifetimes and anomalously high diffusion rates. This behavior indicates the potential of a new electronic technology relying on spin. A crucial requirement of this new technology, however, is the clarification of the transport properties of inhomogeneous spin distributions. A full understanding is also desirable of the relationship between the physical effects driving semiconductor spin electronics and those driving the mature area of metallic spin electronics, which has produced advances in magnetic read heads and non-volatile memory.
These spin distributions are also of fundamental interest, for they are phase-coherent states which can be very long lived ($`>100`$ ns) and very extended ($`>100\mu `$m). In contrast to phase-coherent ground states, such as the BCS ground state of a superconductor or the Laughlin state of the fractional quantum Hall system, these spin distributions are nonequilibrium phase-coherent states. Their long lifetime and large spatial size allow unprecedented probes of phase-coherent behavior — of which Refs. are initial examples.
We consider the properties of doped and undoped semiconductors which are unpolarized in equilibrium but have a localized perturbation of the carriers. In the highly-doped limit this system should behave like a paramagnetic metal (such as the copper used in Co/Cu multilayer giant magnetoresistive devices). Qualitative differences in diffusion are found between the doped and undoped systems, at doping densities of 10<sup>16</sup> cm<sup>-3</sup> at low temperature and 10<sup>18</sup> cm<sup>-3</sup> at room temperature. Quantitative agreement is found with recent experimental results on rapid spin diffusion at low temperature. We also describe spin diffusion in spin polarized semiconductors. This work may assist in understanding spin transport within metallic ferromagnetic semiconductors, such as GaMnAs, which has been used in spin-dependent resonant tunneling devices, and semimagnetic semiconductors, such as BeMnZnSe, which has been used in a spin-polarized light-emitting diode.
The origins of the differences in spin diffusion between semiconductors and metals are (1) the much greater spin relaxation lifetime in semiconductors, (2) the relative ineffectiveness of screening in semiconductors relative to metals, and (3) the possibility of controlling whether carriers in a band are degenerate or not by small perturbations (e.g. electric fields or doping). The first of these differences was explored in Ref. . In this letter we examine the implications of the second and third aspects. We show that careful consideration of the consequences of (2) and (3) leads to a direct explanation of the anomalously high diffusion rates of spin packets observed in Ref. . The effect of the metal-insulator transition on spin diffusion, which can be substantial in semiconductors, is judged in this circumstance to be small.
Ineffective screening in semiconductors requires that local variations in the conduction electron density ($`\mathrm{\Delta }n(𝐱)`$) be, under normal circumstances, balanced by a local change in the valence hole density ($`\mathrm{\Delta }p(𝐱)`$). Even small local variations of charge in a semiconductor produce large space-charge fields which force the system to approximate local neutrality. In metals, by contrast, local charge density variations are screened out on length scales of Angstroms. The $`\mathrm{\Delta }n(𝐱)\approx \mathrm{\Delta }p(𝐱)`$ constraint in semiconductors has key implications for the motion of packets of increased carrier density. If such a packet moves, both the conduction electrons and valence holes which comprise it must move together. The motion of holes in semiconductors tends to be much slower (due to their lower mobility) than that of electrons, so hole mobility and diffusion tend to dominate the properties of a packet consisting of both electron and hole density variations.
Spin packets in semiconductors are also subject to these constraints. Consider a spin packet which involves an increase in the density of spin-up electrons, or $`\mathrm{\Delta }n_{\uparrow }(𝐱)>0`$. In undoped semiconductors it is not possible for the population of the other spin species to be substantially decreased, for the thermally generated background of conduction electrons is quite small. Hence an increase in the population of one spin species of carrier implies an increase in the total population of that carrier, so $`\mathrm{\Delta }n_{\uparrow }(𝐱)>0`$ implies $`\mathrm{\Delta }n(𝐱)>0`$. The increase in the total electron density then implies a local increase in the hole density to maintain $`\mathrm{\Delta }n(𝐱)\approx \mathrm{\Delta }p(𝐱)`$. Even if the holes in the packet are not spin polarized themselves, their presence affects the motion of the spin-polarized electrons.
In a doped semiconductor, however, there is a substantial background of conduction electrons, so $`\mathrm{\Delta }n_{\downarrow }(𝐱)`$ can be significantly less than zero. Thus one can create a spin packet through a spin imbalance in the conduction band ($`\mathrm{\Delta }n_{\uparrow }(𝐱)=-\mathrm{\Delta }n_{\downarrow }(𝐱)`$), without excess electrons or holes ($`\mathrm{\Delta }n(𝐱)=0=\mathrm{\Delta }p(𝐱)`$). This spin packet does not drag a local inhomogeneous hole density with it, and thus its mobility and diffusion properties are very different from those of a spin packet in the undoped semiconductor.
The two situations are distinguished in Fig. 1 for a nonmagnetic electron-doped material. Figure 1(a) shows an inhomogeneous electron-hole density in the form of a spatially localized packet. This disturbance, which we will refer to as a charge polarization packet (or charge packet), could be created optically, in which case the excitation process guarantees $`\mathrm{\Delta }n(𝐱)=\mathrm{\Delta }p(𝐱)`$, or by electrical injection, in which case space charge fields force $`\mathrm{\Delta }n(𝐱)\approx \mathrm{\Delta }p(𝐱)`$. This type of disturbance is thus fundamentally multiple-band. Figure 1(b), however, shows a spin disturbance within the conduction band, which is an enhancement of the density of spin-up electrons and a corresponding reduction of the density of spin-down electrons ($`\mathrm{\Delta }n_{\uparrow }(𝐱)=-\mathrm{\Delta }n_{\downarrow }(𝐱)`$). There is no corresponding inhomogeneity in the hole density ($`\mathrm{\Delta }p_{\uparrow }(𝐱)=\mathrm{\Delta }p_{\downarrow }(𝐱)=0`$), so the disturbance in essence only involves a single band. This type of disturbance will be referred to as a spin polarization packet, or a spin packet.
As described and demonstrated in Ref. , generation of this spin packet can be performed optically with circularly polarized light in a system where the spin relaxation time for holes is short, and for electrons is long, relative to the recombination time. Shortly after the excitation process creates spin-polarized electrons and holes, the holes lose their spin polarization. During the recombination process the unpolarized holes annihilate an equal number of spin up and spin down electrons, leaving behind excess spin polarization in the conduction band.
We now describe the implications for mobility and diffusion of these two types of packets. The motion of a charge packet (Fig. 1(a)) involves dragging both a conduction and valence disturbance, and is described by an ambipolar mobility and diffusion constant,
$`\mu _a`$ $`=`$ $`{\displaystyle \frac{(n-p)\mu _e\mu _h}{n\mu _e+p\mu _h}},`$ (1)
$`D_a`$ $`=`$ $`{\displaystyle \frac{n\mu _eD_h+p\mu _hD_e}{n\mu _e+p\mu _h}},`$ (2)
where $`D_e`$, $`\mu _e`$ and $`D_h`$, $`\mu _h`$ are the diffusion constants and mobilities for electrons and holes respectively. For $`n`$-doping ($`n\gg p`$), $`D_a\approx D_h`$ and $`\mu _a\approx \mu _h`$, so diffusion and mobility of the charge packet is dominated by the holes. The mobility and diffusion constants of the spin packet of Fig. 1(b), however, are
$`\mu _s`$ $`=`$ $`{\displaystyle \frac{(n_{\uparrow }+n_{\downarrow })\mu _{e\uparrow }\mu _{e\downarrow }}{n_{\uparrow }\mu _{e\uparrow }+n_{\downarrow }\mu _{e\downarrow }}},`$ (3)
$`D_s`$ $`=`$ $`{\displaystyle \frac{n_{\uparrow }\mu _{e\uparrow }D_{e\downarrow }+n_{\downarrow }\mu _{e\downarrow }D_{e\uparrow }}{n_{\uparrow }\mu _{e\uparrow }+n_{\downarrow }\mu _{e\downarrow }}},`$ (4)
where we now allow the different spin directions to have different mobilities and diffusion constants.
For the nonmagnetic semiconductor of Ref. , with $`n_{\uparrow }=n_{\downarrow }`$, $`\mu _{e\uparrow }=\mu _{e\downarrow }`$, and $`D_{e\uparrow }=D_{e\downarrow }`$, the mobility and diffusion constants of the spin packet are merely the electron mobility $`\mu _e`$ and diffusion constant $`D_e`$. Thus the mobility of the spin packet is predicted to be the same as that measured in transport. The importance of the metal-insulator transition can be estimated by considering $`\sigma (L)`$, the dependence of conductivity on the physical length scale probed. In Ref. the mobility measured optically over a distance of microns was seen to be comparable to the mobility from transport measurements through the entire sample, suggesting that the material was not sufficiently close to the metal-insulator transition to exhibit significant effects on the conductivity on these length scales.
Because the diffusion and mobility of spin and charge packets in doped semiconductors are determined by the properties of a single carrier species, we can relate the mobility $`\mu `$ of a packet to the diffusion constant $`D`$ describing the spread of the packet with an expression derived for a single species,
$$qD=\mu \frac{\int _0^{\infty }N(E)f_o(E)𝑑E}{\int _0^{\infty }N(E)\left(-\partial f_o(E)/\partial E\right)𝑑E}.$$
(5)
Here $`N(E)`$ is the density of states of the band with the zero of energy chosen so that the band edge is $`E=0`$, $`f_o(E)`$ is the Fermi function, and $`q`$ is the charge of the species. In the low density limit $`-\partial f_o(E)/\partial E=f_o(E)/kT`$, where $`T`$ is the temperature and $`k`$ is Boltzmann’s constant, and so $`eD=\mu kT`$, which is Einstein’s relation.
Figure 2 shows $`eD/kT\mu `$ for a spin packet (solid line) and a charge packet (dashed line) in $`n`$-doped GaAs at $`T=1.6`$K. The relevant mobility and diffusion constant for the spin packet are those of the conduction electrons, while those for the charge packet are those of the valence holes. This enhancement over the Einstein relation is directly attributable to Fermi pressure, that is, the faster increase of the chemical potential with density for a degenerate Fermi gas relative to a non-degenerate Fermi gas. For a given gradient in the density of the degenerate Fermi gas, a larger gradient in the chemical potential results, yielding faster diffusion.
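The density dependence described here can be sketched numerically. The following illustrative Python snippet evaluates the ratio in Eq. (5) for a three-dimensional parabolic band on an energy grid; units and parameter values are assumed and are not those of Fig. 2.

```python
import numpy as np

# Illustrative sketch: generalized Einstein ratio eD/(kT mu) for a 3D
# parabolic band, Eq. (5), evaluated numerically (energies in units of kT).
E = np.linspace(0.0, 400.0, 40001)
N = np.sqrt(E)                           # density of states up to a prefactor

def ratio(eta):                          # eta = chemical potential / kT
    x = np.clip(E - eta, -60.0, 60.0)
    f = 1.0 / (np.exp(x) + 1.0)          # Fermi function
    dfdE = f * (1.0 - f)                 # = -df/dE in units of 1/kT
    return np.trapz(N * f, E) / np.trapz(N * dfdE, E)

for eta in [-5.0, 0.0, 5.0, 20.0, 50.0]:
    # -> 1 in the non-degenerate limit; grows roughly as (2/3)*eta when degenerate
    print(eta, ratio(eta))
```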
Fermi pressure is substantially more important for spin packets than for charge packets, which exhibit the effect at densities closer to 10<sup>18</sup> cm<sup>-3</sup> at low temperature, and may require densities as high as 10<sup>20</sup> cm<sup>-3</sup> at room temperature. At higher temperatures it also requires considerably higher densities for Fermi pressure to play a significant role in spin packet diffusion, but the densities are still achievable, corresponding to 10<sup>18</sup> cm<sup>-3</sup>. The quantitative difference in the significance of Fermi pressure for spin packets, which are dominated by conduction electron properties, and for charge packets, which are dominated by valence hole properties, occurs because the conduction band has a density of states a factor $`(m_e/m_h)^{3/2}\approx 0.045`$ smaller than the valence band in GaAs and therefore becomes degenerate at lower density. At 10<sup>16</sup> cm<sup>-3</sup> the enhancement over the Einstein relation is 12, which is in good agreement with the “more than one order of magnitude” enhancement seen in Ref. .
In order to generate spin packets in a $`p`$-doped semiconductor the time scales of spin decoherence in the conduction and valence band would need to be different, but perhaps a semiconductor will be found where this is possible. In this case it is the charge packet which is dominated by the diffusion and mobility properties of the conduction electrons, whereas the spin packet is dominated by the properties of the valence holes. Thus the charge packet is over an order of magnitude more mobile than the spin packet, precisely the opposite case as for an $`n`$-doped semiconductor.
We now turn to the behavior of spin and charge packets in a spin-polarized semiconductor, where equilibrium densities, mobilities and diffusion constants can differ for the two spin densities. Our first specific example will be an $`n`$-doped spin-polarized semiconductor (such as BeMnZnSe) which we assume is 100% spin polarized at the chemical potential. We note that the spin splitting required to achieve this polarization in a semiconductor, where typical Fermi energies are 10–100 meV, is much less than that required in a metallic system. For this semiconductor in equilibrium $`n_{\uparrow }>0`$, but $`n_{\downarrow }`$, $`p_{\uparrow }`$, and $`p_{\downarrow }`$ are all approximately zero. As shown in Fig. 3 a single-band spin polarization packet is only possible for a spin packet polarized antiparallel to the equilibrium carrier spin polarization. This restriction occurs because $`\mathrm{\Delta }n_{\uparrow }(𝐱)<0`$ is possible, but not $`\mathrm{\Delta }n_{\downarrow }(𝐱)<0`$. Thus a packet with spin polarized parallel to the equilibrium spin (Fig. 3(a)) must consist of both electron and hole perturbations ($`\mathrm{\Delta }n_{\uparrow }(𝐱)>0`$ and $`\mathrm{\Delta }p(𝐱)>0`$) and would have diffusion and mobility properties dominated by the minority holes. The antiparallel spin packet (Fig. 3(b)), however, can be a single band disturbance with $`\mathrm{\Delta }n_{\uparrow }(𝐱)<0`$ and $`\mathrm{\Delta }n_{\downarrow }(𝐱)>0`$. Such a spin packet would have diffusion and mobility properties entirely determined by those of the majority electrons, and thus over an order of magnitude faster. We show in Fig. 4 the different ratios of diffusion constant to mobility for spin packets polarized parallel and antiparallel to the equilibrium carrier spin polarization.
The behavior of spin packets in a $`p`$-doped spin-polarized semiconductor, such as GaMnAs, is completely the opposite. Here a spin packet polarized parallel to the equilibrium carrier spin polarization would require a conduction electron component. The minority carriers (the electrons) would determine the mobility and diffusion constant of such a packet. A spin packet polarized antiparallel to the equilibrium carrier spin polarization could consist entirely of holes, however, and would have a much smaller mobility and diffusion constant. This qualitative difference in the diffusion and mobility of spin polarization packets in the $`n`$ and $`p`$-doped semiconducting systems should have technological implications for spin electronic devices.
We conclude with a brief comment on the behavior of spin distributions in inhomogeneous semiconductors compared to those in metallic ferromagnets. As pointed out in Ref., in metallic ferromagnets the short-distance physics of screening can be entirely separated from the physics of spin populations by writing a drift-diffusion equation for the chemical potential rather than the density. This separation depends on the linear dependence of the density on the chemical potential in these systems. This relationship does not hold in semiconductors and the separation of the screening length scale from the spin distribution length scale is no longer possible. Thus the exploration of spin transport in inhomogeneously doped spin-polarized semiconductor materials should yield a rich range of behavior distinct from metallic systems.
One of us (M.E.F.) would like to acknowledge the support of the Office of Naval Research through Grant No. N00014-99-1-0379. |
no-problem/9912/cond-mat9912277.html | ar5iv | text | # Reply to the Comment on: Quantum Monte Carlo study of the dipole moment of CO [J. Chem. Phys. 110, 11700 (1999)].
Max-Planck-Institut für Physik komplexer Systeme,
Nöthnitzer Str. 38, D-01187 Dresden, Germany
present address: Max-Planck-Institut für Mathematik in den Naturwissenschaften,
Inselstr. 22-26, D-04103 Leipzig, Germany
e-mail:flad@mis.mpg.de
FAX: ++49-(0)-341-9959-999
In Ref. 1 we have mistakenly claimed that the applicability of the Hellmann-Feynman theorem in fixed-node quantum Monte Carlo calculations is not subject to the manner how the nodal boundary depends on an external parameter $`\lambda `$. As it has been pointed out by Huang et al. in their comment on Ref. 1, this statement is not correct in general, except where the Hellmann-Feynman force is calculated for a nodal boundary which coincides with that of the unconstrained exact eigenfunction. We want to point out the error in our arguments and present an explicit expression for the correction term which supplements the Hellmann-Feynman force.
In our approach the fixed-node approximation is treated as a Dirichlet type of boundary value problem on a nodal region $`\mathrm{\Omega }`$. Properly stated $`\mathrm{\Omega }`$ has to be taken as an open subset of $`R^{3N}`$ in which the fixed-node wavefunction $`\mathrm{\Psi }(\lambda ,\mathrm{\Omega })`$ satisfies the Schrödinger equation
$$\widehat{H}(\lambda )\mathrm{\Psi }(\lambda ,\mathrm{\Omega })=E[\lambda ,\mathrm{\Omega }]\mathrm{\Psi }(\lambda ,\mathrm{\Omega }).$$
(1)
The fixed-node wavefunction $`\mathrm{\Psi }(\lambda ,\mathrm{\Omega })`$ has to vanish on the boundary $`\partial \mathrm{\Omega }`$ of $`\mathrm{\Omega }`$ and must be continuous on the entire $`\mathrm{\Omega }\cup \partial \mathrm{\Omega }`$. In order to define the derivatives of the wavefunction on the boundary $`\partial \mathrm{\Omega }`$ one has to take the limit of derivatives of interior points when approaching the boundary . Within such a framework no $`\delta `$ function term appears for the second derivatives on $`\partial \mathrm{\Omega }`$ since we do not extend our wavefunction beyond the boundary. Neglecting spurious singularities in the potential $`\widehat{V}(\lambda )`$ on $`\partial \mathrm{\Omega }`$ we can actually conclude from
$$\mathrm{\Delta }\mathrm{\Psi }(\lambda ,\mathrm{\Omega })=2\left(\widehat{V}(\lambda )-E[\lambda ,\mathrm{\Omega }]\right)\mathrm{\Psi }(\lambda ,\mathrm{\Omega })$$
(2)
that the limits of the second derivatives vanish almost everywhere on the boundary. We want to stress however that this is not in contradiction to the arguments given in Ref. 2, where the wavefunction has been extended over the whole space. In this case one actually encounters discontinuous first derivatives when crossing the nodes.
Revisiting Eq. (8) of Ref. 1, we can identify the missing boundary term mentioned in Ref. 2 by inspection of the second line. After differentiation with respect to $`\lambda `$ we obtain
$$\int _{\mathrm{\Omega }(0)}\left[\partial _\lambda \mathrm{\Psi }(0)\widehat{H}_0\mathrm{\Psi }(0)+\mathrm{\Psi }(0)\partial _\lambda \widehat{H}_\lambda |_{\lambda =0}\mathrm{\Psi }(0)+\mathrm{\Psi }(0)\widehat{H}_0\partial _\lambda \mathrm{\Psi }(0)\right]𝑑\mathrm{\Omega }$$
(3)
The first term in the integral vanishes due to Eq. ( 1) and the normalization constraint Eq. (6) in Ref. 1 from which follows <sup>*</sup><sup>*</sup>*To see that this is valid for a parameter dependent boundary we refer to Ref. 1 for a discussion of the boundary terms.
$$\int _{\mathrm{\Omega }(0)}\mathrm{\Psi }(0)\partial _\lambda \mathrm{\Psi }(0)d\mathrm{\Omega }=0,$$
(4)
whereas the second term yields the standard Hellmann-Feynman force; however, the third term does not vanish in general as we have erroneously claimed in Ref. 1. Applying Green’s second formula we can rewrite the third term
$$\int _{\mathrm{\Omega }(0)}\mathrm{\Psi }(0)\widehat{H}_0\partial _\lambda \mathrm{\Psi }(0)d\mathrm{\Omega }=-\frac{1}{2}\int _{\partial \mathrm{\Omega }(0)}\nabla \mathrm{\Psi }(0)\partial _\lambda \mathrm{\Psi }(0)d\partial \mathrm{\Omega }$$
(5)
where the sign of the boundary term corresponds to $`\mathrm{\Psi }>0`$ inside the nodal region. For this step we presume that the first and second derivatives of $`\mathrm{\Psi }(0)`$ and $`_\lambda \mathrm{\Psi }(0)`$ can be continuously extended to the boundary in the sense discussed above. In order to get a better understanding of the physical character of this term we have to generalize our considerations by allowing the external parameter $`\lambda `$ and the nodal domain $`\mathrm{\Omega }`$ to vary independently. Doing so we can rewrite the derivative $`_\lambda \mathrm{\Psi }(0)`$ like a total differential
$$\partial _\lambda \mathrm{\Psi }(0)=\left[\partial _\lambda \mathrm{\Psi }(\lambda ,\mathrm{\Omega }(0))+\partial _\lambda \mathrm{\Psi }(0,\mathrm{\Omega }(\lambda ))\right]|_{\lambda =0}$$
(6)
The first term $`\partial _\lambda \mathrm{\Psi }(\lambda ,\mathrm{\Omega }(0))`$ corresponds to the change of the wave function with respect to the external parameter $`\lambda `$ under the constraint that the nodes are kept fixed. It vanishes on the boundary and therefore does not contribute to Eq. (5). The second term represents the change of the wave function $`\partial _\lambda \mathrm{\Psi }(0,\mathrm{\Omega }(\lambda ))`$ under a variation of the nodes only. Since $`\mathrm{\Psi }(0,\mathrm{\Omega }(\lambda ))`$ is an eigenfunction of $`\widehat{H}_0`$ on the nodal domain $`\mathrm{\Omega }(\lambda )`$ we obtain
$$\widehat{H}_0\partial _\lambda \mathrm{\Psi }(0,\mathrm{\Omega }(\lambda ))|_{\lambda =0}=\partial _\lambda E[0,\mathrm{\Omega }(\lambda )]|_{\lambda =0}\mathrm{\Psi }(0,\mathrm{\Omega }(0))+E[0,\mathrm{\Omega }(0)]\partial _\lambda \mathrm{\Psi }(0,\mathrm{\Omega }(\lambda ))|_{\lambda =0}$$
(7)
where the contribution of the second term to the left side of Eq. (5) vanishes due to the normalization constraint (4). The modified Hellmann-Feynman theorem for fixed-node quantum Monte Carlo calculations with parameter dependent nodal boundary is therefore given by
$`\partial _\lambda E[\lambda ,\mathrm{\Omega }(\lambda )]|_{\lambda =0}`$ $`=`$ $`{\displaystyle \int _{\mathrm{\Omega }(0)}}\mathrm{\Psi }(0)\partial _\lambda \widehat{H}_\lambda |_{\lambda =0}\mathrm{\Psi }(0)d\mathrm{\Omega }`$ (8)
$`-{\displaystyle \frac{1}{2}}{\displaystyle \int _{\partial \mathrm{\Omega }(0)}}\nabla \mathrm{\Psi }(0)\partial _\lambda \mathrm{\Psi }(0,\mathrm{\Omega }(\lambda ))|_{\lambda =0}d\partial \mathrm{\Omega }`$ (9)
$`=`$ $`{\displaystyle \int _{\mathrm{\Omega }(0)}}\mathrm{\Psi }(0)\partial _\lambda \widehat{H}_\lambda |_{\lambda =0}\mathrm{\Psi }(0)d\mathrm{\Omega }+\partial _\lambda E[0,\mathrm{\Omega }(\lambda )]|_{\lambda =0}`$ (10)
with an additional term which can be interpreted as the linear response of the energy with respect to the variations of the nodal region. This term vanishes provided that the nodal region $`\mathrm{\Omega }(0)`$ coincides with a nodal region of the unconstrained solution of the Schrödinger equation.
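A simple one-dimensional check of this boundary correction is a particle in a box with a parameter-dependent wall, Ω(λ) = (0, L+λ). The sketch below is ours, not from the Reply, and writes the boundary term with the outward normal derivative at the moving wall, which is equivalent to Eq. (5) up to the orientation convention.

```python
import sympy as sp

# Illustrative 1D check (assumed example): verify that the boundary term
# reproduces d E[0, Omega(lambda)] / d lambda at lambda = 0 for a box (0, L+lambda).
x, L, lam = sp.symbols('x L lambda', positive=True)
psi = sp.sqrt(2 / (L + lam)) * sp.sin(sp.pi * x / (L + lam))   # normalized ground state
E = sp.pi**2 / (2 * (L + lam)**2)                              # hbar = m = 1

dE_direct = sp.diff(E, lam).subs(lam, 0)

# Only the moving wall at x = L contributes (at x = 0 the lambda-derivative of psi vanishes).
dpsi_dlam = sp.diff(psi, lam).subs(lam, 0)
grad_psi = sp.diff(psi, x).subs(lam, 0)      # outward normal derivative at x = L
boundary_term = sp.Rational(1, 2) * (dpsi_dlam * grad_psi).subs(x, L)

print(sp.simplify(dE_direct - boundary_term))   # -> 0
```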
Finally we want to mention that the main part of our paper concerning the dipole moment of CO is not affected by this correction. In our actual calculations we have only used parameter independent nodal boundaries for which the unmodified Hellmann-Feynman theorem remains applicable. However, further studies of the implications of the additional term seem to be necessary. |
no-problem/9912/astro-ph9912344.html | ar5iv | text | # 1 Modeling the X–ray absorption in Compton–thick source
## 1 Modeling the X–ray absorption in Compton–thick source
BeppoSAX observations have shown that a large fraction (at least 50%) of nearby Seyfert 2 galaxies are Compton–thick, i.e. the nucleus is obscured by matter with $`N_H\geq \sigma _T^{-1}=1.5\times 10^{24}`$ cm<sup>-2</sup> (, ). Because Seyfert 2s outnumber Seyfert 1s by a large factor, this means that Compton–thick Seyfert 2s are the most common type of AGN in the local Universe. It is possible that heavily obscured sources were even more common in the past (), and then Compton–thick sources should be an important ingredient in XRB synthesis models, despite their low flux.
It is therefore important to model in detail the X–ray spectrum emerging from such a thick absorber. We have calculated transmitted spectra by means of Monte Carlo simulations assuming a spherical geometry, with the X–ray source in the centre. All relevant physical processes: photoelectric absorption, Compton scattering (with fully relativistic treatment), and fluorescence (for iron atoms only), have been included in the code. More details can be found in .
To illustrate the importance of a proper treatment of the transmission spectrum, in Figure 1 the case for $`N_H=3\times 10^{24}`$ cm<sup>-2</sup> is shown. For comparison, we also plot the spectrum obtained with only photoelectric absorption (an unphysical situation) and photoelectric plus Compton absorption (neglecting scattering, which corresponds to obscuring matter with small covering factor, an unlikely situation given the large fraction of Compton–thick sources). The differences between the three curves are large, and fitting real data with the inappropriate model can make a big difference in the derived parameters. Let us discuss as an example the case of the Circinus Galaxy. The BeppoSAX observation revealed the nuclear radiation transmitted through a Compton–thick absorber (). When fitted with the “small cloud” absorber, the best fit value for the column density is 6.9$`\times `$10<sup>24</sup> cm<sup>-2</sup>, and the 2–10 keV extrapolated luminosity is 1.5$`\times `$10<sup>44</sup> erg s<sup>-1</sup>, a surprisingly large value when compared to the IR luminosity. If the spectrum is instead fitted with the spherical model (then including Compton scattering), the column density is 4.3$`\times `$10<sup>24</sup> cm<sup>-2</sup>, and the 2–10 keV luminosity reduces to a much more reasonable value of 10<sup>42</sup> erg s<sup>-1</sup>.
## 2 The XRB synthesis model and the evolution of AGN
We developed a synthesis model for the XRB based on the standard assumption that the XRB is mostly made by a combination of type 1 and 2 AGN (, and references therein). Below we schematically summarize the main assumptions of the model. Further details can be found in .
1. AGN spectra
1. type 1 (AGN1) spectrum:
* power law ($`\alpha =0.9`$) + exponential cut-off ($`E_c=400`$ keV);
* Compton reflection component (accretion disk, $`\theta _{obs}60^{}`$);
2. type 2 (AGN2) spectrum ():
* primary AGN1 spectrum obscured by cold matter:
$`10^{21}\leq N_H\leq 10^{25}cm^{-2}`$, $`\frac{dN\left(logN_H\right)}{d\left(logN_H\right)}\propto logN_H`$;
* Compton scattering within the absorbing matter fully included.
2. Cosmological parameters
1. PLE ($`\mathrm{\Phi }^{}\left(z=0\right)=1.45\times 10^{-6}Mpc^{-3}\left(10^{44}ergs^{-1}\right)^{-1}`$);
2. power law evolution for the break-luminosity:
$`L^{}\left(z\right)\propto \left(1+z\right)^k`$ up to $`z_{max}=1.73`$, with
$`L^{}\left(z=0\right)=3.9\times 10^{43}ergs^{-1}`$ and $`k=2.9`$ (model H of );
3. the redshift integration is performed up to $`z_d=4.5`$.
### 2.1 The $`R(z)`$ model
The best-fit to the high energy (3-50 keV) XRB HEAO-1 data (), but with a $``$30 % higher normalization (according to the BeppoSAX/MECS results below 10 keV, ) is performed by a $`\chi ^2`$-minimization procedure. The inclusion of a redshift–dependent term in the number ratio of the two types of sources (i.e. $`R\left(z\right)=N(type2,z)/N(type1,z)`$) results in an improvement of the fit at the 99% confidence level. Our best solution is shown in Figure 2 and is described by the following analytical form:
$$R\left(z\right)=R_0\times \left(1+z\right)^{k_1}e^{-k_2z}$$
with $`R_0=4`$ (according to ). The best-fit parameters are $`k_1=2.8\pm 0.2`$ and $`k_2=1.5\pm 0.1`$. It is worth noticing that this result does not necessarily imply that the actual number of Seyfert 2s diminishes with $`z`$, but rather that their contribution to the XRB diminishes. This may well be due to a larger fraction of obscured objects being Compton–thick in the past, as proposed by . Hopefully, Chandra and XMM surveys will be able to check this intriguing possibility.
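For orientation, the following short Python sketch evaluates the fitted $`R(z)`$ at a few redshifts; the decaying exponential is as reconstructed above, and the numbers are illustrative only.

```python
import numpy as np

# Illustrative sketch (assumed decaying exponential form): evolution of the
# effective type-2 / type-1 ratio entering the XRB fit.
R0, k1, k2 = 4.0, 2.8, 1.5
z = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.5])
R = R0 * (1.0 + z)**k1 * np.exp(-k2 * z)
print(dict(zip(z.tolist(), R.round(2).tolist())))
# rises at low z, then the effective ratio declines towards high z
```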
### 2.2 The source counts
While there is not much difference in the goodness of the fit to both the XRB spectrum and the soft X–ray source counts if the HEAO–1 normalization is or is not readjusted to match the BeppoSAX results, it makes a large difference in the fitting of the hard X–ray source counts. In particular, the BeppoSAX ( and Comastri, this conference) 5–10 keV source counts can be simultaneously fitted by our model only if the higher normalization is used.
Acknowledgements. We thank the HELLAS group for many useful and stimulating discussions. We acknowledge financial support from ASI and from MURST (grant cofin98–02–32.) |
no-problem/9912/quant-ph9912077.html | ar5iv | text | # The Zeno and anti-Zeno effects on decay in dissipative quantum systems*footnote **footnote * Published in acta physica slovaca 49, 541-548 (1999), under the title “Decay control in dissipative quantum systems”
## I Introduction
The ”watchdog” or quantum Zeno effect (QZE) is a basic manifestation of the influence of measurements on the evolution of a quantum system. The original QZE prediction has been that irreversible decay of an excited state into an open-space reservoir can be inhibited , by repeated interruption of the system-reservoir coupling, which is associated with measurements (e.g., the interaction of an unstable particle with its environment on its flight through a bubble chamber) . However, this prediction has not been experimentally verified as yet! Instead, the interruption of Rabi oscillations and analogous forms of nearly-reversible evolution has been at the focus of interest . Tacit assumptions have been made that the QZE is in principle attainable in open space, but is technically difficult.
We have recently demonstrated that the inhibition of nearly-exponential excited-state decay by the QZE in two-level atoms, in the spirit of the original suggestion , is amenable to experimental verification in resonators. Although this task has been widely believed to be very difficult, we have shown, by means of our unified theory of spontaneous emission into arbitrary reservoirs , that two-level emitters in cavities or in waveguides are in fact adequate for radiative decay control by the QZE. Condensed media or multi-ion traps are their analogs for vibrational decay control (phonon emission) by the QZE. We have now developed a more comprehensive view of the possibilities of excited-state decay control by the QZE. Here we wish to demonstrate that the QZE is indeed achievable by repeated or continuous measurements of the excited state, but only in reservoirs whose spectral response rises up to a frequency which does not exceed the resonance (transition) frequency. By contrast, in open-space decay, where the reservoir response has a much higher cutoff, non-destructive frequent measurements are much more likely to accelerate decay, causing the anti-Zeno effect.
## II Measurement schemes
### A Impulsive measurements (Cook’s scheme)
Consider an initially excited two-level atom coupled to an arbitrary density-of-modes (DOM) spectrum $`\rho (\omega )`$ of the electromagnetic field in the vacuum state. At time $`\tau `$ its evolution is interrupted by a short optical pulse, which serves as an impulsive quantum measurement . Its role is to break the evolution coherence, by transferring the populations of the excited state $`|e\rangle `$ to an auxiliary state $`|u\rangle `$ which then decays back to $`|e\rangle `$ incoherently.
The spectral response, i.e., the emission rate into this reservoir at frequency $`\omega `$, is
$$G(\omega )=|g(\omega )|^2\rho (\omega ),$$
(1)
$`\mathrm{\hbar }g(\omega )`$ being the field-atom coupling energy.
We cast the excited-state amplitude in the form $`\alpha _e(\tau )e^{-i\omega _a\tau }`$, where $`\omega _a`$ is the atomic resonance frequency. Restricting ourselves to sufficiently short interruption intervals $`\tau `$ such that $`\alpha _e(\tau )\approx 1`$, yet long enough to allow the rotating wave approximation, we obtain
$`\alpha _e(\tau )`$ $`\simeq `$ $`1-{\displaystyle \int _0^\tau }𝑑t(\tau -t)\mathrm{\Phi }(t)e^{i\mathrm{\Delta }t},`$ (2)
where
$$\mathrm{\Phi }(t)=\int _0^{\infty }𝑑\omega G(\omega )e^{-i(\omega -\omega _s)t}.$$
(3)
$`\mathrm{\Delta }=\omega _a-\omega _s`$ is the detuning of the atomic resonance from the peak (or cutoff) $`\omega _s`$ of $`G(\omega )`$.
To first order in the atom-field interaction, the excited state probability after $`n`$ interruptions (measurements), $`W(t=n\tau )=|\alpha _e(\tau )|^{2n}`$, can be written as
$$W(t=n\tau )\approx [2\text{Re}\alpha _e(\tau )-1]^n\approx e^{-\kappa t},$$
(4)
where
$$\kappa =\frac{2}{\tau }\text{Re}[1-\alpha _e(\tau )]=\frac{2}{\tau }\text{Re}\int _0^\tau 𝑑t(\tau -t)\mathrm{\Phi }(t)e^{i\mathrm{\Delta }t}.$$
(5)
The QZE obtains if $`\kappa `$ decreases with $`\tau `$ for sufficiently short $`\tau `$. This essentially means that the correlation (or memory) time of the field reservoir is longer (or, equivalently, $`\mathrm{\Phi }(t)`$ falls off slower) than the chosen interruption interval $`\tau `$.
Equation (5) can be rewritten as
$$\kappa =2\pi \int G(\omega )\left\{\frac{\tau }{2\pi }\text{sinc}^2\left[\frac{(\omega -\omega _a)\tau }{2}\right]\right\}𝑑\omega ,$$
(6)
where the interruptions are seen to cause dephasing whose spectral width is $`1/\tau `$.
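A minimal numerical sketch of Eq. (6) is given below for a Lorentzian spectral response on resonance; the explicit parameter values are assumed for illustration. It shows κ shrinking linearly with τ for short interruption intervals and approaching the Golden-Rule value 2πG(ω_a) for long ones.

```python
import numpy as np

# Illustrative sketch of Eq. (6) (assumed parameters, arbitrary units).
g_s, Gamma_s = 1.0, 10.0          # coupling and linewidth
omega_a = omega_s = 0.0           # on resonance

def G(w):
    return g_s**2 * Gamma_s / (np.pi * (Gamma_s**2 + (w - omega_s)**2))

def kappa(tau, wmax=2000.0, npts=200001):
    w = np.linspace(-wmax, wmax, npts)
    sinc2 = np.sinc((w - omega_a) * tau / (2.0 * np.pi))**2   # sin(x)/x squared
    return 2.0 * np.pi * np.trapz(G(w) * tau / (2.0 * np.pi) * sinc2, w)

for tau in [0.001, 0.01, 0.1, 1.0, 10.0]:
    print(tau, kappa(tau))   # ~ g_s^2 * tau for Gamma_s*tau << 1,
                             # -> 2*pi*G(omega_a) (Golden Rule) for long tau
```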
### B Noisy-field dephasing: Random Stark shifts
Instead of disrupting the coherence of the evolution by a sequence of ”impulsive” measurements, as above, we can achieve this goal by noisy-field dephasing of $`\alpha _e(t)`$: Random ac-Stark shifts by an off-resonant intensity-fluctuating field result in the replacement of Eq. (6) by (Fig. 1)
$$\kappa =\int G(\mathrm{\Delta }+\omega _a)\mathcal{F}(\mathrm{\Delta })𝑑\mathrm{\Delta },$$
(7)
Here the spectral response $`G(\mathrm{\Delta }+\omega _a)`$ is the same as in Eq. (1), whereas $`\mathcal{F}(\mathrm{\Delta })`$ is the Lorentzian-shaped relaxation function of the coherence element $`\rho _{eg}(t)`$, which for the common dephasing model decays exponentially. This Lorentzian relaxation spectrum has a HWHM width $`\nu =\langle \mathrm{\Delta }\omega ^2\rangle \tau _c`$, the product of the mean-square Stark shift and the noisy-field correlation time. The QZE condition is that this width be larger than the width of $`G_s(\omega )`$ (Fig. 1). The advantage of this realization is that it does not depend on $`\gamma _u`$, and is realizable for any atomic transition. Its importance for molecules is even greater: if we start with a single vibrational level of $`|e\rangle `$, no additional levels will be populated by this process.
### C CW dephasing
The random ac-Stark shifts described above cause both shifting and broadening of the spectral transition. If we wish to avoid the shifting altogether, we may employ a CW driving field that is nearly resonant with the $`|e\rangle \rightarrow |u\rangle `$ transition. If the decay rate of this transition, $`\gamma _u`$, is larger than the Rabi frequency $`\mathrm{\Omega }`$ of the driving field, then one can show that $`\kappa `$ is given again by Eq. (7), where the Lorentzian (dephasing) width is
$$\nu =\frac{2\mathrm{\Omega }^2}{\gamma _u}.$$
(8)
### D Universal formula
All of the above schemes are seen to yield the same universal formula for the decay rate
$$\kappa =2\pi \int G(\omega )F(\omega -\omega _a)𝑑\omega ,$$
(9)
where $`F(\omega )`$ expresses the relevant measurement-induced dephasing (sinc- or a Lorentzian-shaped): its width relative to that of $`G(\omega )`$ determines the QZE behavior.
## III Applications to various reservoirs
### A Finite reservoirs: A Lorentzian line
The simplest application of the above analysis is to the case of a two-level atom coupled to a near-resonant Lorentzian line centered at $`\omega _s`$, characterizing a high-$`Q`$ cavity mode . In this case,
$$G_s(\omega )=\frac{g_s^2\mathrm{\Gamma }_s}{\pi [\mathrm{\Gamma }_s^2+(\omega -\omega _s)^2]},$$
(10)
where $`g_s`$ is the resonant coupling strength and $`\mathrm{\Gamma }_s`$ is the linewidth (Fig. 2). Here $`G_s(\omega )`$ stands for the sharply-varying (nearly-singular) part of the DOM distribution, associated with narrow cavity-mode lines or with the frequency cutoff in waveguides or photonic band edges. The broad portion of the DOM distribution $`G_b(\omega )`$ (the “background” modes), always coincides with the free-space DOM $`\rho (\omega )\propto \omega ^2`$ at frequencies well above the sharp spectral features. In an open cavity, $`G_b(\omega )`$ represents the atom coupling to the unconfined free-space modes. This gives rise to an exponential decay factor in the excited state probability, regardless of how short $`\tau `$ is, i.e.,
$$\kappa =\kappa _s+\gamma _b,$$
(11)
where $`\kappa _s`$ is the contribution to $`\kappa `$ from the sharply-varying modes and $`\gamma _b=2\pi G_b(\omega _a)`$ is the effective rate of spontaneous emission into the background modes. In most structures $`\gamma _b`$ is comparable to the free-space decay rate $`\gamma _f`$.
In the short-time approximation, taking into account that the Fourier transform of the Lorentzian $`G_s(\omega )`$ is $`\mathrm{\Phi }_s(t)=g_s^2e^{-\mathrm{\Gamma }_st}`$, Eq. (2) yields (without the background-modes contribution)
$$\alpha _e(\tau )\simeq 1-\frac{g_s^2}{\mathrm{\Gamma }_s-i\mathrm{\Delta }}\left[\tau +\frac{e^{(i\mathrm{\Delta }-\mathrm{\Gamma }_s)\tau }-1}{\mathrm{\Gamma }_s-i\mathrm{\Delta }}\right].$$
(12)
The QZE condition is then
$$\tau \ll (\mathrm{\Gamma }_s+|\mathrm{\Delta }|)^{-1},g_s^{-1}.$$
(13)
On resonance, when $`\mathrm{\Delta }=0`$, Eqs. (5) and (12) yield
$$\kappa _s=g_s^2\tau .$$
(14)
Thus the background-DOM effect cannot be modified by QZE. Only the sharply-varying DOM contribution $`\kappa _s`$ may allow for QZE. Only the $`\kappa _s`$ term decreases with $`\tau `$, indicating the QZE inhibition of the nearly-exponential decay into the Lorentzian field reservoir as $`\tau \rightarrow 0`$. Since $`\mathrm{\Gamma }_s`$ has dropped out of Eq. (14), the decay rate $`\kappa `$ is the same for both strong-coupling ($`g_s>\mathrm{\Gamma }_s`$) and weak-coupling ($`g_s\ll \mathrm{\Gamma }_s`$) regimes. Physically, this comes about since for $`\tau \ll g_s^{-1}`$ the energy uncertainty of the emitted photon is too large to distinguish between reversible and irreversible evolutions.
The evolution inhibition, however, has rather different meaning for the two regimes. In the weak-coupling regime, where, in the absence of the external control, the excited-state population decays nearly exponentially at the rate $`g_s^2/\mathrm{\Gamma }_s+\gamma _b`$ (at $`\mathrm{\Delta }=0`$), one can speak about the inhibition of irreversible decay, in the spirit of the original QZE prediction . By contrast, in the strong-coupling regime in the absence of interruptions (measurements), the excited-state population undergoes damped Rabi oscillations at the frequency $`2g_s`$. In this case, the QZE slows down the evolution during the first Rabi half-cycle ($`0\lesssim t\lesssim (\pi /2)g_s^{-1}`$), the evolution on the whole becoming irreversible.
A possible realization of this scheme is as follows. Within an open cavity the atoms repeatedly interact with a pump laser, which is resonant with the $`|e\rangle \rightarrow |u\rangle `$ transition frequency. The resulting $`|e\rangle \rightarrow |g\rangle `$ fluorescence rate is collected and monitored as a function of the pulse repetition rate $`1/\tau `$. Each short, intense pump pulse of duration $`t_p`$ and Rabi frequency $`\mathrm{\Omega }_p`$ is followed by spontaneous decay from $`|u\rangle `$ back to $`|e\rangle `$, at a rate $`\gamma _u`$, so as to destroy the coherence of the system evolution, on the one hand, and reshuffle the entire population from $`|e\rangle `$ to $`|u\rangle `$ and back, on the other hand (Fig. 3). The demand that the interval between measurements significantly exceed the measurement time, yields the inequality $`\tau \gg t_p`$. The above inequality can be reduced to the requirement $`\tau \gg \gamma _u^{-1}`$ if the “measurements” are performed with $`\pi `$ pulses: $`\mathrm{\Omega }_pt_p=\pi ,t_p\ll \gamma _u^{-1}`$. This calls for choosing a $`|u\rangle \rightarrow |e\rangle `$ transition with a much shorter radiative lifetime than that of $`|e\rangle \rightarrow |g\rangle `$.
Figure 4, describing the QZE for a Lorentz line on resonance ($`\mathrm{\Delta }=0`$), has been programmed for feasible cavity parameters: $`\mathrm{\Gamma }_s=(1-R)c/L,g_s=\sqrt{cf\gamma _f/(2L)},\gamma _b=(1-f)\gamma _f`$, where $`R`$ is the geometric-mean reflectivity of the two mirrors, $`f`$ is the fractional solid angle (normalized to $`4\pi `$) subtended by the confocal cavity, and $`L`$ is the cavity length. It shows that the population of $`|e\rangle `$ decays nearly-exponentially well within interruption intervals $`\tau `$, but when those intervals become too short, there is significant inhibition of the decay. Figure 5 shows the effect of the detuning $`\mathrm{\Delta }=\omega _a-\omega _s`$ on the decay: the decay now becomes oscillatory. The interruptions now enhance the decay, and the degree of enhancement depends on the phase between interruptions.
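The following short sketch evaluates this cavity parametrization for one assumed set of values of $`R`$, $`L`$, $`f`$ and $`\gamma _f`$ (they are not the values used for Figs. 4-5), together with the corresponding QZE-reduced rate of Eq. (14).

```python
# Illustrative numbers for the cavity parametrization quoted above
# (all input values are assumed for this sketch).
c = 3.0e8                 # m/s
R_mirror = 0.999          # geometric-mean mirror reflectivity (assumed)
L_cav = 0.01              # cavity length in m (assumed)
f_solid = 0.02            # fractional solid angle (assumed)
gamma_f = 3.0e7           # free-space decay rate in s^-1 (assumed)

Gamma_s = (1.0 - R_mirror) * c / L_cav
g_s = (c * f_solid * gamma_f / (2.0 * L_cav))**0.5
gamma_b = (1.0 - f_solid) * gamma_f
print(Gamma_s, g_s, gamma_b)                       # linewidth, coupling, background rate
print("kappa_s at tau = 1e-9 s:", g_s**2 * 1e-9)   # Eq. (14): QZE-reduced rate
```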
### B Open-space reservoirs
The spectral response for hydrogenic-atom radiative decay via the $`\vec{p}\cdot \vec{A}`$ free-space interaction is given by
$$G(\omega )=\frac{\alpha \omega }{[1+(\omega /\omega _c)^2]^4},$$
(15)
where $`\alpha `$ is the effective field-atom coupling constant and the cutoff frequency is
$$\omega _\mathrm{c}\simeq 10^{19}\text{s}^{-1}\simeq \frac{c}{a_\mathrm{B}}.$$
(16)
Using measurement control that produces Lorentzian broadening \[Eq. (7)\] we then obtain
$$\kappa =\frac{\alpha \omega _\mathrm{c}}{3}\text{Re}\left[\frac{f(2f^4-7f^2+11)}{2(f^2-1)^3}-\frac{6f\mathrm{ln}f}{(f^2-1)^4}-\frac{3i\pi (f^2+4f+5)}{16(f+1)^4}\right],$$
(17)
where
$$f=\frac{\nu -i\omega _a}{\omega _\mathrm{c}}.$$
(18)
In the range
$$\nu \ll \omega _\mathrm{c}$$
(19)
we obtain from Eq. (17) the anti-Zeno effect of accelerated decay. This comes about due to the rising of the spectral response $`G(\omega )\approx \alpha \omega `$ as a function of frequency (for $`\omega \ll \omega _\mathrm{c}`$). The Zeno effect can hypothetically occur only for $`\nu \gtrsim \omega _\mathrm{c}\approx 10^{19}`$ s<sup>-1</sup>. But this range is well beyond the limit of validity of the present analysis, since $`\mathrm{\Delta }E\sim \mathrm{\hbar }\nu \gtrsim \mathrm{\hbar }\omega _\mathrm{c}`$ may then induce other decay channels (“destruction”) of $`|e\rangle `$, in addition to spontaneous transitions to $`|g\rangle `$.
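The acceleration trend can be sketched directly from the universal formula with the hydrogenic response of Eq. (15), without using the closed form of Eq. (17); the normalization of the Lorentzian dephasing profile used below is an assumption of the sketch.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative anti-Zeno sketch: kappa(nu) from the universal formula with the
# hydrogenic G(omega) of Eq. (15), in units where omega_c = 1 and alpha = 1.
omega_c, alpha, omega_a = 1.0, 1.0, 1e-3

def G(w):
    return alpha * w / (1.0 + (w / omega_c)**2)**4

def kappa(nu):
    lor = lambda w: (nu / np.pi) / (nu**2 + (w - omega_a)**2)   # assumed normalization
    val, _ = quad(lambda w: 2.0 * np.pi * G(w) * lor(w),
                  0.0, 20.0 * omega_c, points=[omega_a], limit=400)
    return val

k0 = 2.0 * np.pi * G(omega_a)        # no-dephasing (Golden Rule) reference
for nu in [1e-3, 1e-2, 1e-1, 0.3]:
    print(nu, kappa(nu) / k0)        # grows with nu: the decay is accelerated
```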
## IV Conclusions
Our unified analysis of two-level system coupling to field reservoirs has revealed the general optimal conditions for observing the QZE in various structures (cavities, waveguides, phonon reservoirs, and photonic band structures) as opposed to open space. We note that the wavefunction collapse notion is not involved here, since the measurement is explicitly described as an act of dephasing (coherence-breaking). This analysis also clarifies that QZE cannot combat the open-space decay. Rather, impulsive or continuous dephasing are much more likely to accelerate decay by the inverse (anti-) Zeno effect. |
no-problem/9912/hep-ph9912537.html | ar5iv | text | # Soft Colour Interactions in Non-perturbative QCDContribution to PANIC 99 conference proceedings
## Abstract
Improved understanding of non-perturbative QCD dynamics can be obtained in terms of soft colour exchange models. Their essence is the variation of colour string-field topologies giving a unified description of final states in high energy interactions. In particular, both events with and without large rapidity gaps are obtained in agreement with data from $`ep`$ at HERA and $`p\overline{p}`$ at the Tevatron, where also the surprisingly large production rate of high-$`p_{}`$ charmonium and bottomonium is reproduced.
TSL/ISV-99-0222
October 1999
Strong interaction processes at small (‘soft’) momentum transfers belong to the realm of non-perturbative QCD, which is a major unsolved problem in particle and nuclear physics. High energy particle collisions involving a ‘hard’ scale, i.e. a large momentum transfer, has the advantage of providing a well defined parton level process which is calculable in perturbative QCD (pQCD). The soft effects (e.g. hadronisation) in such hard scattering events can therefore be investigated based on an understood parton level process.
This hard-soft interplay is the basis for the topical research field of diffractive hard scattering . Diffractive events are characterised by having a rapidity gap, i.e. a large region of rapidity (or polar angle) without any particles. The rapidity gap connects to the soft part of the event and therefore non-perturbative effects on a long space-time scale are important.
In order to better understand non-perturbative dynamics and provide a unified description of all final states, we have developed new models. These models are added to Monte Carlo generators (Lepto for $`ep`$ and Pythia for $`p\overline{p}`$), such that an experimental approach can be taken to classify events depending on the characteristics of the final state: e.g. gaps or no-gaps, leading protons or neutrons etc.
The basic assumption of the models is that variations in the topology of the confining colour force fields (strings ) lead to different hadronic final states after hadronisation, as illustrated in Figs. 1 and 2. The pQCD interaction gives a set of partons with a specific colour order. However, this order may change due to soft, non-perturbative interactions.
In the soft colour interaction (SCI) model it is assumed that colour-anticolour, corresponding to non-perturbative gluons, can be exchanged between partons and remnants emerging from a hard scattering. This can be viewed as the partons interacting softly with the colour medium of the proton as they propagate through it, which should be a natural part of the process in which ‘bare’ perturbative partons are ‘dressed’ into non-perturbative ones and the confining colour flux tube between them is formed. The hard parton level interactions are given by standard perturbative matrix elements and parton showers, which are not altered by softer non-perturbative effects. The unknown probability to exchange a soft gluon between parton pairs is given by a phenomenological parameter $`R`$, which is the only free parameter of the model. With $`R=0.5`$ one obtains the correct rate of rapidity gap events observed at HERA and a quite decent description of the measured diffractive structure function (Fig. 1).
Leading neutrons are also obtained in agreement with experimental measurements . In the Regge approach pomeron exchange would be used for diffraction, pion exchange added to get leading neutrons and still other exchanges should be added for completeness. The SCI model provides a simpler description.
Applying the same SCI model to hard $`p\overline{p}`$ collisions one obtains production of $`W`$ and jets in association with rapidity gaps. As shown in Fig. 2, the model reproduces the rates observed at the Tevatron using the same $`R`$-value as obtained from gaps at HERA. This is in contrast to the Pomeron model which, when tuned to HERA gap events, gives a factor $`6`$ too large rate at the Tevatron .
SCI does not only lead to rapidity gaps, but also to other striking effects. It reproduces (Fig. 3) the observed rate of high-$`p_{}`$ charmonium and bottomonium at the Tevatron, which are factors of 10 larger than predictions based on conventional pQCD. This is accomplished by the change of the colour charge of a $`Q\overline{Q}`$ pair (e.g. from a gluon) from octet to singlet. A quarkonium state can then be formed using a simple model for the division of the cross-section below the threshold for open heavy flavour production onto different quarkonium states .
An alternative to SCI is the newly developed generalised area law (GAL) model which, based on a generalisation of the area law suppression $`e^{bA}`$ with $`A`$ the area swept out by the string in energy-momentum space, gives modified colour string topologies through string reinteractions. The probability $`P=R_0[1exp(b\mathrm{\Delta }A)]`$ for two strings pieces to interact depends on the area difference $`\mathrm{\Delta }A`$ which is gained by the string rearrangement. This favours making ‘shorter’ strings, e.g. with gaps, whereas making ‘longer’, ‘zig-zag’ shaped strings is suppressed. The fixed probability $`R`$ in SCI is thus replaced by a dynamical one, where the parameter $`R_0=0.1`$ is chosen to reproduce the HERA gap event rate in a simultaneous fit to data from $`e^+e^{}`$ annihilation at the $`Z^0`$-peak. The resulting diffractive structure function compares very well with HERA data (Fig. 1). The GAL model also improves the description of non-diffractive HERA data .
The GAL model can also be applied to $`p\overline{p}`$ to obtain diffractive $`W`$ and jet production through string rearrangements like in Fig. 2. The observed rates are reproduced quite well (Fig. 2). However, the treatment of the ‘underlying event’, which is a notorious problem in hadron-hadron scattering, introduces a larger uncertainty than for the SCI model .
In conclusion, our models for non-perturbative QCD dynamics in terms of varying colour string topologies give a satisfactory unified description of several phenomena in different hadronic final states. This should contribute to a better understanding of non-perturbative QCD interactions. |
no-problem/9912/quant-ph9912011.html | ar5iv | text | # Will Quantum Cryptography ever become a successful technology in the marketplace?
## I Introduction
In this quantum cryptography workshop<sup>*</sup><sup>*</sup>*This paper is an extended version of a talk to be presented in NEC Princeton workshop on quantum cryptography, Dec. 13-15. we have heard many interesting talks. From an academic point of view, it is quite clear that quantum cryptography is a very active research area. Now, from the technological point of view, it is natural to ask about the potential of quantum cryptography as a technology. In other words, will quantum cryptography ever be widely used in future?
Given the immense progress in both the theoretical and experimental sides in the last few years, some of us in the audience may be tempted to say ‘yes’. However, ultimately the answer to this question does not depend on the subjective opinions of research scientists, but on the complex social and economic forces behind it as well as future advancements in the quantum technology.
On the question of whether quantum cryptography will ever be widely used in future, I certainly do not claim to have a full answer. In this talk, I will simply share with you some of my thoughts on the subject. None of the viewpoints expressed here are original. Nor are they sophisticated. They are just my personal simplifications and understandings/misunderstandings of what is well known to people in other walks of life. However, those viewpoints may not be commonly known to quantum cryptographers. Being an industrial researcher with a non-negligible amount of experience in research and development of real-life conventional security systems, I find it of value to introduce these viewpoints to other quantum cryptographers. My hope is to stimulate further discussions on the subject. Your comments, corrections and criticisms will be welcome.
## II Academia vs Real World
### A Technology focussed vs solution focussed
Let me begin by saying that the academic world and the real world have very different perspectives. In academia, we often deal with curiosity driven research. Even in quantum technology, our main focus is technology. That is to say the technological aspects of a subject. For instance, in this workshop many talks deal with the fundamental and technical issues of the security of quantum cryptographic systems. Important as they are, those subjects are so esoteric that they are quite beyond the understanding of even the most sophisticated developers and customers of conventional cryptography. More importantly, those subjects do not necessarily address their real world concerns.
In the real world, customers (users of cryptography) generally have problems and they look for solutions, not technology. It does not matter whether it is high-tech or low-tech, so long as it can solve their problem, they will take it. For instance, putting an eraser on top of a pencil is a trivial idea from the technological point of view. From the users’ point of view, this can be regarded as a major invention that offers convenience and added value to the individual eraser and the pencil. As another example, nuclear power generators may be a high-tech solution. But, it must compete with low-tech alternatives like oil and coal in a competitive commodity market—electricity generation.
Clearly, customer acceptance is very important to the success of a technology in the marketplace. Beside customers, there are many other players in the development/adoption of new technologies. Different players have different interests and concerns, some of which may be regarded as irrational by outsiders. Whether we like it or not, the only way to assure that a technology is adopted is to better understand the diverse interests and concerns of different players in the field.
### B Players in the Quantum Game
Let me introduce the interested parties in the development/adoption of cryptographic/security systems one by one and describe their main concerns.
1. Academia
(A) Quantum Cryptographers Type I (Particularly Theorists): The main interest of theoreticians in quantum cryptography is to design cryptographic protocols with perfect security .
(B) Quantum Cryptographers Type II (Particularly Experimentalists): The main interest of experimental quantum cryptographers is to design and implement quantum cryptographic schemes that are feasible with current (or near future) technology and secure against realistic attacks .
(C) Conventional Cryptographers? Some people may argue that many conventional cryptographers live in the same academic world as quantum cryptographers. I have no comment on this argument.
2. Real World
(A) Users Type 1 (individuals): The main interest of individual customers in using cryptographic/security product is often the peace of mind. If the users voluntarily use the product, this peace of mind may be due to the preceived security offered by the product. If the users are forced to use the products by others, the peace of mind may arise because they make their employers happy.
Cost and transparency are two other major concerns of the individual customer. Someone working on information security once told me that the general feeling in the community is that security does not sell. (i.e., While customers worry about security, they are unwilling to pay for a higher-price product for the sole reason of its being more secure.) The acceptable additional cost of a more secure product is essentially zero. People are interested in a solution (say a payment scheme) that is offered as a complete package: versatility, convenience of use, reliablility, cost and security. ”Security” is just a small term in the whole equation. Here, I have put security in a quotation mark because it is preceived security that counts. A layman generally does not understand real security. Besides, it appears to me that there is no logical consistence in users behaviors when many of them seem perfectly happy in giving out their credit card numbers over the phone, but not over the Internet.
Transparency of the operation of encryption is also a plus. While a layman can intuitively appreciate the security offered by an encrypted file which looks garbled even to the unsophisticated eye, the same cannot be said for quantum cryptography.
(B) Users Type II (businesses): A notable motivation for many businesses such as the banking industry to employ cryptographic products for its customers is to limit its financial and legal liability. Businesses generally accept a certain degree of financial losses due to insecure products as parts of their normal operating cost in doing businesses. Therefore, non-perfect security of conventional cryptographic systems is not a bad thing, but a fact of life. The important things are to have risk management and to have risk factors that are well understood. Employing industrial standards is very useful in reducing businesses’ financial and legal liability. Employing a non-standard disruptive technology like quantum cryptography is much more risky.
Securing long distance communication is an important concern in businesses. As international companies are now getting more and more global and tremendous amount of data are passed between different offices of the same company or different companies, there is an increasing need in securing those massive transcontinental communications. Besides, there is an increasing need for post-Cold-War type of applications like authentication and signatures.
(C) Vendors of Crypto Products: Like any other businesses, the main concern of vendors of crypto products is to make money in the long run. Besides, vendors have vested interests in deciding which technology to employ. For instance, a vendor with a large number of patents and products in the elliptic curve crypto-systems might be tempted to emphasize the strengths of elliptic curve crypto products compared to products based on other principles.
(D) Conventional Cryptographers and Security Experts: Because of their own background and experience, conventional cryptographers and security experts are keen to use something that they can understand and trust such as the one-way function hypothesis. If you ask them whether they believe in quantum mechanics or one-way hypothesis more, their answer is clear.
(E) Governments: Different departments in a government have different interests. For instance, the military and the foreign office are certainly interested in having perfect security for their communications. On the other hand, for agencies such as the FBI, the ability to wiretap communications of the criminals is very important. From this point of view, perfect security might threaten national security and should be discouraged or controlled by laws. I am not up to date with the current US regulations. However, until recently, cryptography has been regarded as ammunition in the US laws, subject to the strictest control in its usage and export.
## III Roadblocks
Having introduced the different players and their interests in cryptography, it is the time to discuss the major roadblocks to the future deployment of quantum cryptographic systems. For ease of discussion, I will divide those roadblocks into different classes. However, my division is somewhat subjective.
### A Fundamental roadblocks
Quantum cryptography is a fundamentally limited technology.
(A) Impossibility of unconditional security for many applications
First, it has a limited range of applications. The fundamental appeal of quantum cryptography has been perfect or unconditional security (i.e., security guaranteed by the laws of quantum mechanics only and without making any computational assumptions). However, the unconditional security of a number of important basic protocols such as bit commitment , one-out-of-two oblivious transfer and one-way identification (and more generally, one-sided two-party secure computations) have been shown to be impossible in a series of no-go theorems. What it means that all such protocols must require quantum computational assumptions.
(B) Lack of public key based quantum cryptographic schemes
Second, quantum cryptography has made no significant contribution to public key cryptography. Many real life cryptographic applications such as signature and authentication schemes in the Internet age involve public key cryptography. However, very little (if anything) has been done on quantum cryptographic signature and authentication schemes that are public-key based.
### B Technological roadblocks
(C) Limited distance in current quantum key distribution experiments
Experimental quantum key distribution has been performed over tens of kilometers. However, a major market for secure communication is, in fact, transcontinental communications. Until the distance achieved in experimental quantum key distribution increases by two order of magnitude, quantum key distribution is not a feasible technology for this major market sector.
(D) Limited data rate
The current data rate for experimental quantum key distribution is of the order kbits for second. (Worse still, the post-processing including error correction and privacy amplification is quite massive.) In contrast, the current world record for a single mode optical fiber communication is 160 Gbits for second . “Multiplying 160 gigabits over additional wavelengths, we expect to be able to scale up to many trillions of bits a second in the foreseeable future.” says Alastair Glass, director of Bell Labs Photonics Research Labs. If quantum key distribution is ever going to be widely used for one-time pad application for the massive data being transmitted in commercial optical fibers, there is probably a ten order of magnitude gap in data rate to be closed in the foreseeable future.
### C Commercial roadblocks
(E) Equipment size is too big.
Ideally, cryptographic applications should be done either by a software or a very small hardware component such as a smart card or a CD. Unfortunately, current quantum cryptographic systems are quite big. Shrinking a quantum cryptographic system to the size of a briefcase is already a big challenge. Shrinking it to the size of a smart card requires much ingenuity and development.
(F) Cost is too high.
The acceptable additional cost of a more secure cryptographic product for individual consumers is essentially zero while the components of existing quantum cryptographic system cost hundreds or even thousands of dollars.
(G) Integration with existing infrastructure in information technology requires further developments.
Except for niche markets, we cannot expect an optical fiber to be solely dedicated to quantum communications for any substantial period of time. The integration of quantum technology with conventional and existing infrastructure in information technology requires much further work.
### D Security roadblocks
(H) Known loopholes in current implementations
While quantum cryptography claims to offer perfect security in theory, in practice current experimental implementations contain quite a number of security loopholes. It has been argued that essentially none of the existing implementations is actually secure . Plugging those known loopholes is a highly non-trivial experimental and theoretical design problem.
(I) Hidden loopholes in implementations
All security analyses of quantum cryptographic systems involve idealizations. It is highly probably that many other fatal security loopholes in the implementations of quantum cryptography remain to be discovered. Given the slippery nature of the subject, quantum cryptography hardly inspires the confidence of potential users.
The best way to construct a secure cryptographic system is to try hard to break it. Unfortunately, until recently very few people worked on breaking quantum cryptographic systems. Without an army of people trying to break them, the security of quantum cryptographic systems are largely untested.
### E Psychological/ Vested Interest roadblocks
(J) Conventional cryptographers have no confidence in quantum mechanics.
Most conventional cryptographers do not understand quantum mechanics. Nor are they familiar with its many applications. In any case, the burden of proof of the usefulness of a new technology lies on its own practitioners, not conventional cryptographers. In contrast, from their point of view, things like the one-way function hypothesis, the hardness of factoring are well-tested principles and something that they understand well. It is wishful thinking to ask them to take a leap of faith by abandoning their well cherished philosophy and taking up a black box philosophy for no apparent good reason.
Moreover, Neal Koblitz remarked in Crypto’ 97 (the most important international conference in fundamental research in cryptography) that many cryptographers hate quantum computation because if it flies, it will put many of them out of business. Indeed, if a quantum computer is ever built, many public key cryptographic schemes that are widely used today will be totally unsafe. This could potentially kill public key cryptography and throw cryptography back to the “dark age” —a nightmare scenario for electronic commerce and data security. \[See, however, for a discussion of the possibility that public key cryptography may actually survive quantum attacks.\] Since quantum cryptography is a part of quantum information processing, it is only natural that conventional cryptographers may not like it neither.
(K) Vendors of conventional cryptographic products have vested interests in promoting and preserving conventional technologies.
If this is what conventional cryptographers might think as individual researchers, you can imagine what existing crypto-system vendors might think about quantum cryptography: Quantum cryptography is far more likely to be seen as an unwelcome threat rather than a potential opportunity.
In the history of technological developments, disruptive technologies are often made possible by new firms rather than existing firms that have large stakes in the dominant existing technology.
### F Political/Legal roadblocks
(L) Quantum cryptography may be limited by governmental crypto control policies.
As mentioned earlier, in the US, usage/export of strong cryptography is subject to stringent governmental control. Any future usage/export of quantum cryptographic systems will be subject to the same stringent set of regulations. How to reconcile the main selling point of quantum cryptography (strong security) and cryptography control (limitation on the employment of strong cryptography) is a subject that deserves future investigations.
## IV Future Directions
Having discussed the various roadblocks to the future widespread applications of quantum cryptography, I hope that you will agree that the issue of commercial feasibility is much more complicated than a research scientist may naively think. Certainly, my own grasp of the problem is limited. If there is a lesson in this talk, it is the following: To better understand the issue of commercial feasibility, it is best for quantum cryptographers to engage more in constructive conversations with people in the real world (users, vendors, conventional cryptographers, government officials, etc). While we do not have to agree with what they say, it is important for us to understand their views clearly. The future adoption of quantum cryptography relies on their acceptance.
On the more technical side, I offer the following subjective list of future directions.
1. Develop new applications for quantum cryptography:
In my opinion, it is important to develop new applications of quantum cryptography such as signature and authentication schemes, quantum voting, etc. Since various no-go theorems have ruled out the possibility of a number of cryptographic primitives, future quantum cryptographic systems may well be based on quantum computational assumptions . Therefore, it would be of practical interest to invent a “quantum one-way trapdoor function” and public-key-based quantum cryptographic systems.
One possible viewpoint to take is to regard quantum cryptography as a natural extension (rather than a replacement) of conventional cryptography and put its foundation on computational assumptions from both conventional cryptography and quantum mechanics. How to combine the advantages offered by quantum mechanics and public key infrastructure is a big issue. It would be of particular interest to construct a public-key based quantum encryption scheme and show rigorously that breaking it will require the simultaneous breaking of widely accepted assumptions in both conventional cryptography (such as cracking the Diffie-Hellman key exchange scheme) and quantum computation (such as the ability to achieve quantum computation/measurement involving more than $`N`$ qubits). Such an encryption scheme will convince people in both the conventional and quantum cryptographic communities that it is secure against any realistic attacks.
\[Brassard has emphasized the possibility that conventional public key cryptography may actually survive quantum attacks. According to , it has been argued in that quantum-resistant one-way functions, which can be computed efficiently with classical computers but cannot be inverted efficiently even with a quantum computer, may well exist. That would be bad news for quantum cryptographers, though.\]
2. Use teleportation to plug security loopholes :
A major criticism of quantum cryptography is that it may contain many hidden security loopholes. For instance, while it is often assumed that a photon source emits single photons, in real life perfect single photon sources are notoriously hard to make. Besides, experimental systems generally contain higher energy levels whose occupancy is totally ignored in most security analyses. Indeed, as emphasized by, for example, John Smolin , it is even conceivable in principle that an eavesdropper can hide a quantum robot in the quantum signals received by the two users. Owing to this quantum Trojan Horse problem, quantum cryptographic systems seem inherently unsafe.
Nonetheless, one can argue that by using teleportation, quantum cryptographic systems can be made no more unsafe than conventional ones . One can reduce the quantum Trojan Horse problem to a conventional Trojan Horse problem. This is done by the following method. Instead of receiving any untrusted quantum signals from a quantum channel, each user insists that any signal should be teleported to him/her. For instance, Bob prepares an EPR pair locally and sends one member of the pair to a laboratory outside his door. Any incoming quantum signal will be teleported to him by his doorman outside his door. What he receives are just classical messages. Note that teleportation provides an exact counting of the number of dimensions of the Hilbert space of the reconstructed state. This is so even if the original EPR pair that Bob prepares is imperfect and contains hidden dimensions.
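As a toy illustration of this dimension-counting point, here is a minimal sketch (Python/NumPy, ideal noiseless gates assumed, purely for illustration) of teleporting an arbitrary qubit through an EPR pair that Bob prepared himself: whatever he reconstructs is, by construction, confined to the two-dimensional space of his own half of the pair, and only two classical bits ever cross his door.

```python
# Minimal teleportation sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary unknown qubit |psi> = a|0> + b|1> arriving from outside.
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = np.array([a, b]); psi /= np.linalg.norm(psi)

# Bob's locally prepared EPR pair (|00> + |11>)/sqrt(2); one half sits outside.
epr = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, epr)               # joint state |q1 q2 q3>

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# Doorman's Bell measurement on (q1, q2): CNOT, Hadamard, then measure.
state = np.kron(CNOT, I2) @ state
state = np.kron(np.kron(H, I2), I2) @ state
state = state.reshape(2, 2, 2)          # indices (q1, q2, q3)

# All four outcomes are equally likely; pick one deterministically for the demo.
m1, m2 = 0, 0
bob = state[m1, m2, :]

# Bob's Pauli correction Z^m1 X^m2, conditioned on the two classical bits.
bob = np.linalg.matrix_power(Z, m1) @ np.linalg.matrix_power(X, m2) @ bob
bob /= np.linalg.norm(bob)

print("fidelity |<psi|bob>| =", round(abs(np.vdot(psi, bob)), 10))  # -> 1.0
```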
Of course, the problem of classical Trojan Horse attack remains. But, this is inevitable. Since Bob’s goal is to receive classical communications from Alice through an untrusted channel, if receiving untrusted classical messages is a problem, the whole enterprise of secure communication is simply hopeless.
3. Use quantum repeaters to extend the range of secure quantum key distribution.
This is crucial if quantum key distribution is ever to make any impact on intercontinental communication.
4. Increase data rate for quantum key distribution.
Existing schemes for quantum key distribution such as BB84 and Ekert’s scheme are based on two-level quantum systems and as such their data rates are limited. If quantum cryptography is ever widely used as one-time pad for encrypting massive data in communications, higher level systems and particularly continuous variable quantum cryptography are a way to go forward. This would mean that many of the current investigations may become obsolete in the near future.
5. Miniaturization.
The ultimate goal is to reduce the size of quantum cryptographic systems to that of a smart card or a compact disc.
6. Integration with existing infrastructure in information technology.
It may be hard to justify the cost of construction of an entirely new infrastructure dedicated to the long-distance transmission of quantum signals. Integration of quantum technology with existing infrastructure (including optical fibers) in information technology is, therefore, an important subject.
7. Towards an international standard for quantum cryptography.
Ultimately, some form of international standards will be needed for the widespread deployment of quantum cryptography.
8. We need quantum hackers.
We have seen encouraging signs that researchers are finally taking a critical look at the security of current experimental implementations of quantum cryptographic systems . In order to better understand the real risk of employing quantum cryptography, much more should be done on the subject.
An attacker should attack a Chinese Wall at its weakest point. The weakest point of a cryptographic system often lies in the blindspot of its designers. A cryptographer may regard a cryptographic system as a mathematical black box function which provides an output for each input. However, in real life the box is never black to begin with. (Private keys embedded in a smart card circuitry may be read out by illuminating the smart card with various wavelengths of electromagnetic radiation.) The black box also gives out timing information, power consumption information, etc. The inputs to the black box include also its power supply, something that is subject to manipulation by malicious parties. The designer of the black box may try to cheat by designing a black box that leaks information in a subtle encrypted way that can be read only by the designer.
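To make the timing remark concrete with a purely classical toy example (hypothetical code, not a description of any actual product): a byte-by-byte comparison that exits early leaks, through its running time alone, how many leading bytes of a guess are correct, and the secret can then be recovered one byte at a time.

```python
# Toy timing side channel (hypothetical example): an early-exit comparison
# leaks how many leading bytes of a guess match the secret.
import time

SECRET = b"q7f3k9"

def insecure_equal(guess, secret=SECRET):
    if len(guess) != len(secret):
        return False
    for g, s in zip(guess, secret):
        if g != s:
            return False        # exits sooner the earlier a byte mismatches
        time.sleep(1e-4)        # stand-in for per-byte work, exaggerated
    return True

def best_time(guess, reps=5):
    t = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        insecure_equal(guess)
        t = min(t, time.perf_counter() - t0)
    return t

recovered = b""
for _ in range(len(SECRET)):
    pad = b"\x00" * (len(SECRET) - len(recovered) - 1)
    scores = {c: best_time(recovered + bytes([c]) + pad) for c in range(33, 127)}
    recovered += bytes([max(scores, key=scores.get)])

print(recovered)                # -> b'q7f3k9' on a quiet machine
```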
Indeed, in conventional cryptography, it is often the case that the most powerful attacks against a system have little to do with the fundamental design or mathematical equations underlying the design. The devil is in the actual implementation, rather than the fundamental design. If we are really interested in the future of quantum technology, we must face up to those subtle loopholes in implementations. A way to do so is to become a quantum hacker and devise innovative methods of cracking experimental quantum cryptographic systems.
9. Crypto control of quantum cryptography?
The issue of cryptography control of quantum cryptography remains to be addressed. I have no particular suggestion.
## V Acknowledgment
I have greatly benefitted from helpful discussions with many colleagues, collaborators, and experts in both conventional cryptography/security systems and quantum cryptography. I would like to thank them all and apologize for any misrepresentations of their ideas/observations in this talk.
## 1 Introduction
The distribution of s-process abundances in stellar populations is one of the more mysterious features of Galactic chemical evolution (GCE). The problems are well illustrated in the review by McWilliam (1997) where his Figs 9 and 10 show typical s-element (Sr and Ba) to iron ratios and Ba and La to Eu ratios as functions of metallicity \[Fe/H\], Eu being representative of a nearly pure r-process. For \[Fe/H\] $`\lesssim -2.5`$, there is a large scatter above a lower limit \[Ba/Eu\] $`\simeq -0.8`$ representing a pure r-process, with higher values presumably due to internal mixing or contamination by a companion; but between \[Fe/H\] $`=-2`$ and about $`-1`$ there is a constant plateau with \[Ba/Eu\] $`\simeq -0.3`$, which Pagel & Tautvaišienė (1997) attributed to a general contribution to GCE of a primary s-process not readily understandable in terms of the expected age and metallicity dependence. Figs. 1 and 2 show element-to-iron ratios resulting from our ad hoc model, in which the s-process was treated as primary with a superposition of different time delays, noting that any secondary or other dependence of the yields on chemical composition (e.g. Travaglio et al. 1999) could be obscured by scatter in the metallicities at any given time.
In our model we supposed that the first batch of s-process synthesis came from rather massive progenitors with a typical time delay of 40 Myr corresponding to about 8.5$`M_{\odot }`$ to get the plateau in Ba/Eu for $`-2\lesssim `$ \[Fe/H\] $`<-1`$, long before the onset of the bulk of SNIa, and a second batch more like the conventional model for the s-process with a time delay of the order of 3 Gyr corresponding to 1.5$`M_{\odot }`$ and longer than for typical SNIa, leading to the decline followed by a rise in Ba/Fe that appears near solar metallicity in Fig 1. The overall fit to the data in Figs 1 and 2 is quite good, although at the lowest metallicities one should take into account the scatter in Eu/Fe that has been discussed by Tsujimoto, Shigeyama & Yoshii (1999).
In the last few years there have been substantial developments, of which we should like to mention two here:
* The work of Nissen & Schuster (1997) who investigated disk and halo stars with overlapping metallicity and found ‘anomalous’ halo stars which have too much iron for their content in O, $`\alpha `$\- and s-process elements represented by Y and Ba, and which might represent a slower chemical evolution such as may have occurred in the Magellanic Clouds (Pagel & Tautvaišienė 1998).
* A very interesting paper by Jehin et al. (1999), where they select a group of stars in the restricted metallicity range $`-1.2\lesssim `$ \[Fe/H\] $`\lesssim -0.6`$, which is again the region of metallicity overlap between the halo and thick disk and also just the range where SNIa are believed to kick in.
## 2 Results of Jehin et al.
Jehin et al. determined very precise abundances for a number of metals: Fe, Mg, Ca, Ti, Y, Sr, Ba and Eu, among others. However, in presenting their results they ignore metallicity as such (except in the case of Ti/Fe itself) and plot correlation diagrams \[X/Fe\] vs \[Ti/Fe\], the latter being the most accurate representative of \[$`\alpha `$/Fe\]. Thus $`\alpha `$-elements and Eu are found to track \[Ti/Fe\] quite precisely, except in the case of the two ‘anomalous’ halo stars (in the sense of Nissen & Schuster) in their sample, which have excess europium and other r-process elements – an intriguing result not yet explained, although it may indicate the role of an r-process with a significant time delay. However, the behaviour of s-process elements was found to be different: instead of running more or less parallel to \[Ti/Fe\], there appear to be two sequences, of which one (which they call Pop IIa and includes the ‘anomalous’ stars) does run more or less parallel, while the other (which they call Pop IIb) starts at the end of the previous sequence and then runs up vertically at \[Ti/Fe\] = 0.24. This inspired the authors to put forward what they call the EAS scenario — Evaporation, Accretion, Self-enrichment — in which all halo and thick-disk stars are assumed to form in globular clusters or proto-clusters undergoing chemical evolution. Some clusters were disrupted at an early stage leading to Pop IIa with pure SNII ejecta, whereas others lasted longer enabling dwarf stars to accrete s-process material from nearby AGB stars before the clusters evaporated, leading to Pop IIb.
## 3 Relation with metallicity
While the hypothesis of Jehin et al. is interesting and possibly even right, we do not think their results can be understood without looking at the data as a function of metallicity; this is done in Fig 3 which distinguishes stellar population types and also presents the model by Pagel & Tautvaišienė (1997). The upper 7 or 8 stars in each panel are Pop IIb and the ‘anomalous’ stars appear near bottom left.
According to our model, there is an overall downward trend due to the impact of Type Ia supernovae contributing extra iron in just this range of metallicity, and that is nicely confirmed by the mean trend of the new data, although the absolute fit is better for some elements than for others. The results of Jehin et al. appear as a scatter around this trend, possibly related to their scenario, whereas Ti/Fe reaches a sharply defined plateau representing pure SNII production on the low-metallicity side, where the s-process points spread out forming a wedge-shaped distribution.
How much of this represents really significant deviations from conventional GCE? To investigate this point, we have plotted in Fig 4 what seems to be the best determined s-process result, \[Y/Ti\], against \[Ti/H\], which is chosen as the best available ‘clock’ (cf. Wheeler, Sneden & Truran 1989). Fig 5 shows corresponding data from the paper by Nissen & Schuster, where scatter in the determinations is somewhat greater, but one star appears as a still more extreme case of Jehin et al.’s Pop IIb. The dotted line in each case shows the prediction of our model. We feel that the departure of any thick-disk or anomalous-halo stars in these samples from our model predictions in this plane is at most marginal, but the effect discovered by Jehin et al. is certainly there among at least some of the non-anomalous halo stars. Clearly more and better statistics would be useful.
Generalized Chen-Wu type cosmological model
Moncy V. John and K. Babu Joseph
Department of Physics, Cochin University of Science and Technology,
Kochi 682022, India.
Abstract
Recent measurements require modifications in conventional cosmology by way of introducing components other than ordinary matter into the total energy density in the universe. On the basis of some dimensional considerations in line with quantum cosmology, Chen and Wu \[W. Chen and Y. Wu, Phys. Rev. D 41, 695 (1990)\] have argued that an additional component, which corresponds to an effective cosmological constant $`\mathrm{\Lambda }`$, must vary as $`a^{-2}`$ in the classical era. Their decaying-$`\mathrm{\Lambda }`$ model assumes inflation and yields a value for $`q_0`$ which is not compatible with observations. We generalize this model by arguing that the Chen-Wu ansatz is applicable to the total energy density of the universe and not to $`\mathrm{\Lambda }`$ alone. The resulting model, which has a coasting evolution (i.e., $`a\propto t`$), is devoid of the problems of horizon, flatness, monopole, cosmological constant, size, age and generation of density perturbations. However, to avoid serious contradictions with big bang nucleosynthesis, the model has to make the predictions $`\mathrm{\Omega }_m=4/3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=2/3`$, which in turn are at variance with current observational values.
PACS No(s): 98.80.-k
Permanent address: Department of Physics, St. Thomas College, Kozhencherri 689641, Kerala, India. e-mail: moncy@stthom.ernet.in
Recent measurements of the cosmic deceleration parameter, which point to the need of having some new energy density in the present universe, in addition to the usual relativistic/nonrelativistic matter density, have caused some sensation . Several other measurements, like that of the combination of the Hubble parameter $`H_0`$ and the age $`t_0`$ of the present universe, gravitational lensing, etc., also indicate such a possibility. Candidates for such an additional component include vacuum energy with density $`\rho _\mathrm{\Lambda }`$ (identical to that due to a cosmological constant $`\mathrm{\Lambda }`$, with equation of state $`p_\mathrm{\Lambda }=-\rho _\mathrm{\Lambda }`$) and “quintessence” with density $`\rho _q`$ (with a general equation of state $`p_q=w\rho _q`$; $`-1<w<0`$ \- examples are fundamental fields and macroscopic objects such as light, tangled cosmic strings), the former being considered often in the literature. The above observations specifically show that if the new component is $`\rho _\mathrm{\Lambda }`$, then its magnitude should be comparable to that of matter density $`\rho _m`$. Decaying vacuum cosmologies \[4-8\] (and references therein) are phenomenological models, which conceive a time-varying $`\mathrm{\Lambda }`$ as an attempt to describe how $`\rho _\mathrm{\Lambda }`$ attains such small values in the present universe. In this report, we study one of the pioneering decaying vacuum models and suggest an alternative scenario which is conceptually more sound. Though the resulting model faces some serious problems when concrete theoretical predictions, either on nucleosynthesis or on the density parameters $`\mathrm{\Omega }_m`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, are compared with observations, it has several positive features and raises certain fundamental issues which invite serious consideration.
First we recall that Chen and Wu , while introducing their widely discussed model mentioned above, have made an interesting argument in favor of an $`a^{-2}`$ variation of the effective cosmological constant on the basis of some dimensional considerations in line with quantum cosmology. Their reasoning is as follows: Since there is no other fundamental energy scale available, one can always write $`\rho _\mathrm{\Lambda }`$, the energy density corresponding to the effective cosmological constant, as the Planck density ($`\rho _{pl}=c^5/\hbar G^2=5.158\times 10^{93}\text{ g cm}^{-3}`$) times a dimensionless product of quantities. Assuming that $`\rho _\mathrm{\Lambda }`$ varies as a power of the scale factor $`a`$, the natural ansatz is
$$\rho _\mathrm{\Lambda }\propto \frac{c^5}{\hbar G^2}\left[\frac{l_{pl}}{a}\right]^n,$$
(1)
where $`l_{pl}=(\hbar G/c^3)^{1/2}=1.616\times 10^{-33}`$ cm is the Planck length. The authors argue that $`n=2`$ is a preferred choice. It is easy to verify that $`n<2`$ (or $`n>2`$) will lead to a negative (positive) power of $`\hbar `$ appearing explicitly on the right hand side of the above equation. Such an $`\hbar `$-dependent $`\rho _\mathrm{\Lambda }`$ would be quite unnatural in the classical Einstein equation for cosmology, much later than the Planck time. However, it shall be noted that $`n=2`$ is just right to survive the semiclassical limit $`\hbar \to 0`$. This choice is further substantiated by noting that $`n\le 1`$ or $`n\ge 3`$ would lead to a value of $`\rho _\mathrm{\Lambda }`$ which violates all observational bounds. Thus the Chen-Wu ansatz is
$$\rho _\mathrm{\Lambda }=\frac{\gamma }{8\pi Ga^2},$$
(2)
where $`\gamma `$ is a phenomenological constant parameter. (Here onwards we set $`\hbar =c=k_B=1`$, except when stating explicit results.) Assuming that only the total energy-momentum is conserved, they obtain, for the relativistic era,
$$\rho _r=\frac{A_1}{a^4}+\frac{\gamma }{8\pi Ga^2}\equiv \rho _r^{cons.}+\rho _r^{noncons.}$$
(3)
and for the nonrelativistic era,
$$\rho _{nr}=\frac{A_2}{a^3}+\frac{2\gamma }{8\pi Ga^2}\equiv \rho _{nr}^{cons.}+\rho _{nr}^{noncons.},$$
(4)
where $`A_1`$ and $`A_2`$ are to be positive. The Chen-Wu model thus differs from the standard model in that it has a decaying cosmological constant and that the matter density has conserving and nonconserving parts \[given by the first and second terms respectively in the right hand sides of Eqs. (3) and (4)\]. By choosing $`\gamma `$ appropriately, they hope to arrange $`\rho _\mathrm{\Lambda }`$ and the nonconserving parts in $`\rho _r`$ and $`\rho _{nr}`$ to be insignificant in the early universe so that the standard model results like nucleosynthesis are undisturbed. But for the late universe, it can have many positive features like providing the missing energy density in the flat and inflationary models, etc.. The model predicts creation of matter, but the authors argue that the creation rate is small enough so that it is inaccessible to observations.
The important criticisms one can raise in this regard are the following. Since the conserving part of the matter density is required to dominate the early universe (for the standard model results to remain undisturbed), one can deduce that in their model the standard model results are applicable only to that part of the matter density. The nonconserving parts are, in fact, created almost entirely in the late universe. But the abundances of light nuclei etc. are verified for the present universe and this implies that the conserving part is still substantial. This in turn will create some problem with observations. For example, let us assume that the present era is nonrelativistic and $`\rho _{nr}^{cons.}`$ is at least equal to $`\rho _{nr}^{noncons.}`$. Since the vacuum density is only one-half the latter quantity \[see Eqs. (2) and (4)\], for a $`k=0`$ universe in which $`\mathrm{\Omega }_m+\mathrm{\Omega }_\mathrm{\Lambda }=1`$, the deceleration parameter at present will be $`q_0=(\mathrm{\Omega }_m/2)-\mathrm{\Omega }_\mathrm{\Lambda }=0.2`$. This is not compatible with the observations mentioned earlier .
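Explicitly, in the borderline case $`\rho _{nr}^{cons.}=\rho _{nr}^{noncons.}\equiv \rho _1`$, Eqs. (2) and (4) give $`\rho _\mathrm{\Lambda }=\rho _1/2`$, so that

$$\mathrm{\Omega }_m=\frac{2\rho _1}{2\rho _1+\rho _1/2}=0.8,\qquad \mathrm{\Omega }_\mathrm{\Lambda }=0.2,\qquad q_0=\frac{\mathrm{\Omega }_m}{2}-\mathrm{\Omega }_\mathrm{\Lambda }=0.2,$$

and any larger conserved fraction only pushes $`q_0`$ further above this value.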
Also, since it is conceived that their model is not different from the standard model in the early universe, to avoid the cosmological problems, they have to assume the occurrence of inflation, which in turn is driven by the vacuum energy. But they apply their ansatz only to the late-time vacuum energy density (which corresponds to the cosmological constant) and not to that during inflation. The stress energy associated with the vacuum energy is identical to that of a cosmological constant and it is not clear how they distinguish them while applying the ansatz.
Lastly, it can genuinely be asked whether $`\rho _\mathrm{\Lambda }`$ is the only quantity to which the Chen-Wu ansatz be applied. An equation analogous to (1) can be written for any kind of energy density by using a similar reasoning and it can be argued that $`n=2`$ is a preferred choice for each one of them in the late universe. Certainly, this will bring in some fundamental issues which need serious consideration, but there is a priori no reason to forbid such an investigation.
In this report, we present a cosmological model by applying the Chen-Wu ansatz to the total energy density $`\stackrel{~}{\rho }`$ of the universe, in place of the vacuum density alone. If the Chen-Wu argument is valid for $`\rho _\mathrm{\Lambda }`$, then it should be valid for $`\stackrel{~}{\rho }`$ too. In fact, this ansatz is better suited to $`\stackrel{~}{\rho }`$ than to $`\rho _\mathrm{\Lambda }`$, since the Planck era is characterized by the Planck density for the universe, above which quantum gravity effects become important. Hence we modify the ansatz to write
$$\stackrel{~}{\rho }=A\frac{c^5}{\hbar G^2}\left[\frac{l_{pl}}{a}\right]^n,$$
(5)
where $`A`$ is a positive dimensionless constant. As indicated above, when $`\stackrel{~}{\rho }`$ is the sum of various components and each component is assumed to vary as a power of the scale factor $`a`$, then the Chen-Wu argument can be applied to conclude that $`n=2`$ is a preferred choice for each component. Violating this will force the inclusion of $`\hbar `$-dependent terms in $`\stackrel{~}{\rho }`$, which would look unnatural in a classical theory. Not only in the Chen-Wu model but in all of FRW cosmology, this argument may be used to forbid the inclusion of substantial energy densities which do not vary as $`a^{-2}`$ in the classical epoch.
At first sight, this may appear as a grave negative result. But let us face it squarely and proceed to the next logical step of investigating the implications of an $`a^{-2}`$ variation of $`\stackrel{~}{\rho }`$. If the total pressure in the universe is denoted as $`\stackrel{~}{p}`$, then the above result that the conserved quantity $`\stackrel{~}{\rho }`$ in the FRW model varies as $`a^{-2}`$ implies $`\stackrel{~}{\rho }+3\stackrel{~}{p}=0`$. This will lead to a coasting cosmology (i.e., $`a\propto t`$). Components with such an equation of state are known to be strings or textures . Though such models are considered in the literature, it would be unrealistic to consider the present universe as string-dominated. A crucial observation which makes our model with $`\stackrel{~}{\rho }`$ varying as $`a^{-2}`$ realistic is that this variation leads to string-domination only if we assume $`\stackrel{~}{\rho }`$ to be unicomponent. Instead, if we assume, as done in inflationary, Chen and Wu and many other models (Friedmann-Lemaître-Robertson-Walker cosmologies), that $`\stackrel{~}{\rho }`$ consists of parts corresponding to relativistic/nonrelativistic matter (with equation of state $`p_m=w\rho _m`$, where $`w=1/3`$ for relativistic and $`w=0`$ for nonrelativistic cases) and also to a time-varying cosmological constant (with equation of state $`p_\mathrm{\Lambda }=-\rho _\mathrm{\Lambda }`$), i.e., if we assume,
$$\stackrel{~}{\rho }=\rho _m+\rho _\mathrm{\Lambda },\stackrel{~}{p}=p_m+p_\mathrm{\Lambda },$$
(6)
then the condition $`\stackrel{~}{\rho }+3\stackrel{~}{p}=0`$ will give
$$\frac{\rho _m}{\rho _\mathrm{\Lambda }}=\frac{2}{1+3w}.$$
(7)
In other words, the modified Chen-Wu ansatz leads to the conclusion that if the universe contains matter and vacuum energies, then vacuum energy density should be comparable to matter density. This, of course, will again lead to a coasting cosmology, but this time a realistic one. (The Ozer-Taha model in its relativistic era and the models in are approximately some such models, but they start from different sets of assumptions.)
$`\rho _m`$ or $`\rho _\mathrm{\Lambda }`$, which varies as $`a^{-2}`$, may sometimes be mistaken for strings but it should be noted that the equations of state we assumed for these quantities are different from that for strings and are what they ought to be to correspond to matter density and vacuum energy density respectively. It is true that components with equations of state $`p=w\rho `$ should obey $`\rho \propto a^{-3(1+w)}`$, but this is valid when those components are separately conserved. In our case, we have only assumed that the total energy density is conserved and not the parts corresponding to $`\rho _m`$ and $`\rho _\mathrm{\Lambda }`$ separately. Hence, as in the Chen-Wu model, there can be creation of matter from vacuum, but we shall show later in this report that again the present creation rate is too small to have any observable consequences.
The solution to the Einstein equations in an FRW model with $`\stackrel{~}{\rho }+3\stackrel{~}{p}=0`$, for all the three cases $`k=0,\pm 1`$, is the coasting evolution
$$a(t)=mt,$$
(8)
where $`m`$ is some proportionality constant. The total energy density is then
$$\stackrel{~}{\rho }=\frac{3}{8\pi G}\frac{(m^2+k)}{a^2}.$$
(9)
Comparing this with (5) (with $`n=2`$), we get $`m^2+k=8\pi A/3`$. We shall now show that this simple picture of the universe is devoid of many of the cosmological problems encountered in the standard model.
First let us consider the horizon problem. A necessary condition for the solution of this problem is $`a(t_s)\int _{t_{pl}}^{t_s}dt/a(t)>[a(t_s)/a(t_0)]H_0^{-1}`$, where $`t_s`$ is the time by which the horizon problem is solved. Using our expression (8) for $`a(t)`$, this condition gives $`t_s\gtrsim et_{pl}`$. Thus shortly after the Planck era, the horizon problem is solved in this model. Since causality is established at such early times, the monopole problem will also disappear.
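Explicitly, with $`a(t)=mt`$ and $`H_0=1/t_0`$, the horizon condition above reads

$$a(t_s)\int _{t_{pl}}^{t_s}\frac{dt}{a(t)}=t_s\mathrm{ln}\frac{t_s}{t_{pl}}>\left[\frac{a(t_s)}{a(t_0)}\right]H_0^{-1}=t_s,$$

i.e., $`\mathrm{ln}(t_s/t_{pl})>1`$, which is satisfied as soon as $`t_s>et_{pl}`$.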
The predictions regarding the age of the universe in the model are obvious from Eq. (8). Irrespective of the value of $`m`$, we get the combination $`H_0t_0`$ as equal to unity, which is well within the bounds. Thus there is no age problem in this model. We can legitimately define the critical density as $`\rho _c\equiv (3/8\pi G)(\dot{a}^2/a^2)`$, so that Eq. (9) gives
$$\stackrel{~}{\mathrm{\Omega }}\equiv \frac{\stackrel{~}{\rho }}{\rho _c}=\left[1-\frac{3k}{8\pi A}\right]^{-1}.$$
(10)
As in the standard model, we have $`\stackrel{~}{\mathrm{\Omega }}=1`$ for $`k=0`$ and $`\stackrel{~}{\mathrm{\Omega }}>1`$ ($`\stackrel{~}{\mathrm{\Omega }}<1`$) for $`k=+1`$ ($`k=-1`$). But unlike the standard model, $`\stackrel{~}{\mathrm{\Omega }}`$ is a constant in time. This is not surprising; in an FRW model with total energy density $`\stackrel{~}{\rho }`$, one can always write the time-time component of the Einstein equation in the form
$$\stackrel{~}{\mathrm{\Omega }}-1=\left[\frac{8\pi G}{3}\frac{\stackrel{~}{\rho }a^2}{k}-1\right]^{-1}.$$
(11)
When $`\stackrel{~}{\rho }`$ varies as $`a^{-3}`$ or $`a^{-4}`$, the flatness problem appears and the reason can be understood from this equation. But in the present case, since $`\stackrel{~}{\rho }`$ varies as $`a^{-2}`$, $`\stackrel{~}{\mathrm{\Omega }}`$ will remain a constant. Using Eqs. (6) and (7), we get
$$\mathrm{\Omega }_m\equiv \frac{\rho _m}{\rho _c}=\frac{2\stackrel{~}{\mathrm{\Omega }}}{3(1+w)},\qquad \mathrm{\Omega }_\mathrm{\Lambda }\equiv \frac{\rho _\mathrm{\Lambda }}{\rho _c}=\frac{(1+3w)\stackrel{~}{\mathrm{\Omega }}}{3(1+w)}.$$
(12)
For the matter dominated era, the predictions are $`\mathrm{\Omega }_m=2\stackrel{~}{\mathrm{\Omega }}/3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=\stackrel{~}{\mathrm{\Omega }}/3`$. Note that also the density parameter $`\mathrm{\Omega }_m`$ is time-independent and hence there is no flatness problem in this model. As mentioned above, the model predicts that the energy density corresponding to the cosmological constant is comparable with matter density and this solves the cosmological constant problem too. It can also be seen that according to the model, the observed universe, characterised by the present Hubble radius has a size equal to the Planck length at the end of Planck epoch and this indicates that the problem with the size of the universe does not appear here. For the investigation of other problems, we have to study the thermal evolution of the universe as envisaged in the model.
In the early relativistic era, temperature $`T`$ is associated with the relativistic matter density $`\rho _r`$ as $`\rho _r=(\pi ^2/30)N(T)T^4`$, where $`N(T)`$ is the effective number of spin degrees of freedom at temperature T. In the present model,
$$\rho _r=\frac{3\stackrel{~}{\mathrm{\Omega }}}{8\pi G}\frac{1}{(\sqrt{2}t)^2}.$$
(13)
This gives
$$T=\left[\frac{3}{8\pi G}\frac{30\stackrel{~}{\mathrm{\Omega }}}{\pi ^2N}\right]^{1/4}\frac{1}{(\sqrt{2}t)^{1/2}}.$$
(14)
These expressions may be compared with the corresponding expressions in the standard model:
$$\rho _{s.m.}=\frac{3}{8\pi G}\frac{1}{(2t)^2},$$
(15)
$$T_{s.m.}=\left[\frac{3}{8\pi G}\frac{30}{\pi ^2N}\right]^{1/4}\frac{1}{(2t)^{1/2}}.$$
(16)
Considering the fact that according to observation $`\stackrel{~}{\mathrm{\Omega }}^{1/4}`$ is close to unity, it can be seen that the values of $`\rho _r`$ and $`T`$ attained at time $`t`$ in the standard model are attained at time $`\sqrt{2}t`$ in the present model. Thus the thermal history in the present model can be expected to be nearly the same as that in the standard model. But the time-dependence of the scale factor is different in our model and this helps to solve the cosmological problems.
So far we have considered $`\stackrel{~}{\mathrm{\Omega }}`$ to be a free parameter, related by Eq. (10) to the constant $`A`$, which in turn is to be understood to come from some deep quantum cosmological theory. An interesting way to estimate the constant $`\stackrel{~}{\mathrm{\Omega }}`$ is to consider the implications of the model for nucleosynthesis . From (13) and (15), one can deduce that the Hubble parameter in the present model is related to that in the standard model according to $`H=\sqrt{2/\stackrel{~}{\mathrm{\Omega }}}H_{s.m.}`$. This modifies the ratio of interaction rate to Hubble parameter as $`\mathrm{\Gamma }/H=\sqrt{\stackrel{~}{\mathrm{\Omega }}/2}\mathrm{\Gamma }/H_{s.m.}`$. To avoid any variation of the freezing temperature with respect to that in the successful standard model, one has to accept a value $`\stackrel{~}{\mathrm{\Omega }}\approx 2`$. This leads us to the predictions $`\mathrm{\Omega }_m\approx 4/3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }\approx 2/3`$, which are in contradiction with the recent measurements since the corresponding point is outside the error ellipses in the $`\mathrm{\Omega }_m`$ vs. $`\mathrm{\Omega }_\mathrm{\Lambda }`$ plot. This discrepancy with observation is a serious problem which requires detailed analysis and refinement in the model.
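Indeed, for $`\stackrel{~}{\mathrm{\Omega }}=2`$ the factor $`\sqrt{2/\stackrel{~}{\mathrm{\Omega }}}`$ is exactly unity, so the expansion rate at a given radiation temperature, and hence the weak-interaction freeze-out, is identical to that of the standard model, while Eq. (12) with $`w=0`$ gives

$$\mathrm{\Omega }_m=\frac{2\stackrel{~}{\mathrm{\Omega }}}{3}=\frac{4}{3},\qquad \mathrm{\Omega }_\mathrm{\Lambda }=\frac{\stackrel{~}{\mathrm{\Omega }}}{3}=\frac{2}{3}.$$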
The possibility of the generation of density perturbations on scales well above the present Hubble radius, in the interval between the Planck time $`t_{pl}`$ and the time of decoupling $`t_{dec}`$, can be studied by evaluating the communication distance light can travel between these two times . In the present model, $`d_{comm}(t_{pl},t_{dec})=a_0\int _{t_{pl}}^{t_{dec}}dt/a(t)=0.627\times 10^6\text{Mpc}`$, where we have used $`t_{dec}\approx 10^{13}`$ s, the same as that in the standard model. Thus the coasting evolution in this case has the communication distance between $`t_{pl}`$ and $`t_{dec}`$ much larger than the present Hubble radius ($`\approx 4000`$ Mpc) and hence it can generate density perturbations on scales of that order. It is interesting to note that Liddle has precluded coasting evolution as a viable means to produce such perturbations and argued that only inflation ($`\ddot{a}>0`$) can perform this task, thus “closing the loopholes” in the arguments of Hu et al. . But it is worthwhile to point out that his observations are true only for a model which coasts from $`t_{pl}`$ to $`t_{nuc}`$ (where $`t_{nuc}\approx 1`$ s is the time of nucleosynthesis) and thereafter evolves according to the standard model. In our case, the evolution is coasting throughout the history of the universe and hence his objection is not valid.
A bonus point of the present approach, when compared to all the other aforementioned models, may now be noted. In those models, the communication distance between $`t_{nuc}`$ and $`t_{dec}`$, or for that matter the communication distance from any time after the production of particles (assuming this to occur at the end of inflation) to the time $`t_{dec}`$, will be only around $`200h^{-1}`$ Mpc, $`0.6<h<0.8`$ . Thus density perturbations on scales above the present Hubble radius cannot be generated in them in the period when matter is present. This is because inflation cannot enhance the communication distance after it. The only means to generate the observed density perturbations is then to resort to quantum fluctuations of the inflaton field. The present model is at a more advantageous position than the inflationary models in this regard since the communication distance between $`t_{nuc}`$ and $`t_{dec}`$ in this case is $`d_{comm}(t_{nuc},t_{dec})=a_p\int _{t_{nuc}}^{t_{dec}}dt/a(t)=1.45\times 10^5\text{Mpc}`$, which is much greater than the present Hubble radius. So we can consider the generation of the observed density perturbations as a late-time classical behavior too.
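Both quoted distances are easy to reproduce for the coasting solution, since $`d_{comm}(t_1,t_2)=(c/H_0)\mathrm{ln}(t_2/t_1)`$; the short sketch below does the arithmetic, taking as an illustrative assumption $`h\approx 0.62`$ (the text does not specify its value of $`H_0`$).

```python
# Communication distances for a(t) = m t, i.e. d_comm = (c/H_0) ln(t2/t1).
# The Hubble constant (h ~ 0.62) and the fiducial epochs are assumed values.
import math

c_over_H0 = 2.998e5 / 62.0                  # Hubble radius in Mpc for H_0 = 62 km/s/Mpc
t_pl, t_nuc, t_dec = 5.4e-44, 1.0, 1.0e13   # seconds

def d_comm(t1, t2):
    return c_over_H0 * math.log(t2 / t1)    # in Mpc

print("d_comm(t_pl, t_dec)  ~ %.2e Mpc" % d_comm(t_pl, t_dec))    # ~ 6.3e5 Mpc
print("d_comm(t_nuc, t_dec) ~ %.2e Mpc" % d_comm(t_nuc, t_dec))   # ~ 1.4e5 Mpc
print("Hubble radius c/H_0  ~ %.1e Mpc" % c_over_H0)              # ~ 4.8e3 Mpc
```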
Lastly we check the rate of matter creation in the model. Assuming the present universe to be dominated by nonrelativistic matter, we can calculate the rate of creation per unit volume as $`a^{-3}[d(\rho _ma^3)/dt]_p=\rho _{m0}H_0`$. This creation rate is only one-third of that in the steady state model. Creation of matter or radiation with an average rate given above will be inaccessible to test and does not pose a serious objection to the model.
It was recently argued that a smooth time-varying $`\mathrm{\Lambda }`$ is ill defined and unstable and that the only valid way of introducing an additional energy component is to replace $`\mathrm{\Lambda }`$ with a fluctuating, inhomogeneous component. (Such an energy component is the quintessence, mentioned in the introduction.) Notwithstanding this and other serious problems with observations (either the big bang nucleosynthesis or the prediction of density parameters), it is worth noting that if we take quantum cosmology seriously, generalizing the Chen-Wu ansatz is a logical conclusion and that it leads to a realistic cosmological scenario, which does not have many of the problems in the standard model, including that of the generation of density perturbations in the late classical epoch itself.
We acknowledge the valuable comments by the unknown referee, with thanks. MVJ is grateful to IUCAA, Pune for its hospitality, where part of this work was done.
Chaotic inflation on the brane
## I Introduction
There is considerable interest in higher dimensional cosmological models motivated by superstring theory solutions where matter fields (related to open string modes) live on a lower dimensional brane while gravity (closed string modes) can propagate in the bulk . In such a scenario the extra dimension need not be small , and may even be infinite if non-trivial geometry can lead gravity to be bound to the three-dimensional subspace on which we live at low energies . One possibility of great importance arising from these ideas is the notion that the fundamental Planck scale $`M_{4+d}`$ in $`4+d`$ dimensions can be considerably smaller than the effective Planck scale, $`M_4=1.2\times 10^{19}`$ GeV, in our four-dimensional spacetime, which would have profound consequences for models of the very early universe.
In this paper we investigate the impact of such a scenario when $`d=1`$ for simple chaotic inflation models. Specific models of inflation have previously been discussed with finite compactified dimensions, scalar fields in the bulk and/or multiple branes (see, e.g., ). Our aim is to quantify the minimal modification of slow-roll inflation in the brane scenario for arbitrary inflaton potentials on the brane, independent of the dynamics of the bulk, while assuming stability of the brane. If Einstein’s equations hold in the five-dimensional bulk, with a cosmological constant as source, and the matter fields are confined to the 3-brane, then Shiromizu et al. have shown that the four-dimensional Einstein equations induced on the brane can be written as
$$G_{\mu \nu }=-\mathrm{\Lambda }_4g_{\mu \nu }+\left(\frac{8\pi }{M_4^2}\right)T_{\mu \nu }+\left(\frac{8\pi }{M_5^3}\right)^2\pi _{\mu \nu }-E_{\mu \nu },$$
(1)
where $`T_{\mu \nu }`$ is the energy-momentum tensor of matter on the brane, $`\pi _{\mu \nu }`$ is a tensor quadratic in $`T_{\mu \nu }`$, and $`E_{\mu \nu }`$ is a projection of the five-dimensional Weyl tensor, describing the effect of bulk graviton degrees of freedom on brane dynamics. The effective cosmological constant $`\mathrm{\Lambda }_4`$ on the brane is determined by the five-dimensional bulk cosmological constant $`\mathrm{\Lambda }`$ and the 3-brane tension $`\lambda `$ as
$$\mathrm{\Lambda }_4=\frac{4\pi }{M_5^3}\left(\mathrm{\Lambda }+\frac{4\pi }{3M_5^3}\lambda ^2\right),$$
(2)
and the four-dimensional Planck scale is given by
$$M_4=\sqrt{\frac{3}{4\pi }}\left(\frac{M_5^2}{\sqrt{\lambda }}\right)M_5.$$
(3)
In a cosmological scenario in which the metric projected onto the brane is a spatially flat Friedmann-Robertson-Walker model, with scale factor $`a(t)`$, the Friedmann equation on the brane has the generalized form
$$H^2=\frac{\mathrm{\Lambda }_4}{3}+\left(\frac{8\pi }{3M_4^2}\right)\rho +\left(\frac{4\pi }{3M_5^3}\right)^2\rho ^2+\frac{\mathcal{E}}{a^4},$$
(4)
where $`\mathcal{E}`$ is an integration constant arising from $`E_{\mu \nu }`$, and thus transmitting bulk graviton influence onto the brane. This term appears as a form of “dark radiation” affecting primordial nucleosynthesis and the heights of the acoustic peaks in the cosmic microwave background radiation, because it is decoupled from matter on the brane and behaves like an additional collisionless (and isotropic) massless component. Thus observations can be used to place limits on $`|\mathcal{E}|`$. However, during inflation this term will be rapidly diluted, and we can neglect it. We will also assume that the bulk cosmological constant $`\mathrm{\Lambda }\approx -4\pi \lambda ^2/3M_5^3`$ so that $`\mathrm{\Lambda }_4`$ is negligible, at least in the early universe. This fine-tuning is the restatement in the brane-world scenario of the cosmological constant problem and we do not attempt to solve it here.
The crucial correction in what follows is the term quadratic in the density, which modifies the expansion dynamics at densities $`\rho \gtrsim \lambda `$. This can be seen on rewriting Eq. (4) using Eq. (3), when $`\mathrm{\Lambda }_4=0`$ and $`\mathcal{E}=0`$, to give
$$H^2=\frac{8\pi }{3M_4^2}\rho \left[1+\frac{\rho }{2\lambda }\right].$$
(5)
Note that in the limit $`\lambda \to \infty `$ we recover standard four-dimensional general relativistic results (neglecting $`\mathcal{E}`$). The quadratic modification will dominate at high energies for moderate $`\lambda `$, but must be sub-dominant at nucleosynthesis. Since it decays as $`a^{-8}`$ during the radiation era, it will rapidly become negligible thereafter. The nucleosynthesis limit implies that $`\lambda \gtrsim (1\text{ MeV})^4`$, and by Eq. (3) this gives
$$M_5\gtrsim \left(\frac{1\text{ MeV}}{M_4}\right)^{2/3}M_4\sim 10\text{ TeV}.$$
(6)
A more stringent constraint may be obtained if the fifth dimension is infinite, by requiring that relative corrections to the Newtonian law of gravity, which are of order $`M_5^6\lambda ^{-2}r^{-2}`$ (see, e.g., ), should be small on scales $`r\gtrsim 1`$ mm. Using Eq. (3), this gives $`M_5>10^5`$ TeV.
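Both bounds follow from Eq. (3) by elementary arithmetic; the short numerical check below is a sketch only, with $`M_4=1.2\times 10^{19}`$ GeV and the conversion 1 mm $`\approx 5.1\times 10^{12}\text{ GeV}^{-1}`$ taken as assumed inputs.

```python
# Rough check of the two lower bounds on M_5 (natural units, hbar = c = 1).
# Eq. (3) gives M_4^2 = (3/4pi) M_5^6 / lambda, i.e. M_5^6 = (4pi/3) M_4^2 lambda.
import math

M4 = 1.2e19                        # GeV
mm = 0.1 / 1.973e-14               # 1 mm in GeV^-1  (1 GeV^-1 = 1.973e-14 cm)

# (i) Nucleosynthesis: lambda >~ (1 MeV)^4.
M5_bbn = ((4 * math.pi / 3) * M4**2 * (1.0e-3) ** 4) ** (1.0 / 6.0)
print("nucleosynthesis: M_5 >~ %.0e GeV" % M5_bbn)      # ~ 3e4 GeV, of order 10 TeV

# (ii) Newton's law: corrections ~ M_5^6 / (lambda^2 r^2) small at r ~ 1 mm,
#      i.e. (4pi/3)^2 M_4^4 / (M_5^6 r^2) <~ 1.
M5_mm = ((4 * math.pi / 3) ** 2 * M4**4 / mm**2) ** (1.0 / 6.0)
print("table-top gravity: M_5 >~ %.0e GeV" % M5_mm)     # ~ 5e8 GeV, i.e. ~ 10^5 TeV
```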
## II Slow-roll inflation on the brane
We will consider the case where the energy-momentum tensor $`T_{\mu \nu }`$ on the brane is dominated by a scalar field $`\varphi `$ (confined to the brane) with self-interaction potential $`V(\varphi )`$. The field satisfies the Klein-Gordon equation
$$\ddot{\varphi }+3H\dot{\varphi }+V^{\prime }(\varphi )=0,$$
(7)
since $`\nabla ^\nu T_{\mu \nu }=0`$ on the brane. In four-dimensional general relativity, the condition for inflation is $`\dot{\varphi }^2<V(\varphi )`$, i.e., $`p<-\frac{1}{3}\rho `$, where $`\rho =\frac{1}{2}\dot{\varphi }^2+V`$ and $`p=\frac{1}{2}\dot{\varphi }^2-V`$. This guarantees $`\ddot{a}>0`$. The modified Friedmann equation leads to a stronger condition for inflation: using Eqs. (5) and (7), we find that
$$\ddot{a}>0\quad \mathrm{\Leftrightarrow }\quad p<-\left[\frac{\lambda +2\rho }{\lambda +\rho }\right]\frac{\rho }{3}.$$
(8)
As $`\lambda \to \infty `$, this reduces to the violation of the strong energy condition, but for $`\rho >\lambda `$, a more stringent condition on $`p`$ is required for accelerating expansion. In the limit $`\rho /\lambda \to \infty `$, we have $`p<-\frac{2}{3}\rho `$. When the only matter in the universe is a self-interacting scalar field, the condition for inflation becomes
$$\dot{\varphi }^2-V+\frac{\dot{\varphi }^2+2V}{8\lambda }(5\dot{\varphi }^2-2V)<0,$$
(9)
which reduces to $`\dot{\varphi }^2<V(\varphi )`$ when $`(\dot{\varphi }^2+2V)\ll \lambda `$.
Assuming that the “brane energy condition” in Eq. (8) is satisfied, we now discuss the dynamics of the last 50 or so e-foldings of inflation. Within the slow-roll approximation, we assume that the energy density is dominated by the self-interaction energy of the scalar field and that the scalar field evolution is strongly damped, which implies
$`H^2`$ $`\simeq `$ $`\left({\displaystyle \frac{8\pi }{3M_4^2}}\right)V\left[1+{\displaystyle \frac{V}{2\lambda }}\right],`$ (10)
$`\dot{\varphi }`$ $`\simeq `$ $`-{\displaystyle \frac{V^{\prime }}{3H}},`$ (11)
where we use ‘$`\simeq `$’ to denote equality within the slow-roll approximation. The term in square brackets is the brane-modification to the standard slow-roll expression for the Hubble rate. For $`V\gg \lambda `$, Eqs. (3) and (10) give $`H\approx (4\pi /3)V/M_5^3`$, consistent with the “non-linear” regime discussed in Ref. .
Requiring the slow-roll approximation to remain consistent with the full evolution equations places constraints on the slope and curvature of the potential. We can define two slow-roll parameters
$`ϵ`$ $`\equiv `$ $`{\displaystyle \frac{M_4^2}{16\pi }}\left({\displaystyle \frac{V^{\prime }}{V}}\right)^2\left[{\displaystyle \frac{2\lambda (2\lambda +2V)}{(2\lambda +V)^2}}\right],`$ (12)
$`\eta `$ $`\equiv `$ $`{\displaystyle \frac{M_4^2}{8\pi }}\left({\displaystyle \frac{V^{\prime \prime }}{V}}\right)\left[{\displaystyle \frac{2\lambda }{2\lambda +V}}\right].`$ (13)
Self-consistency of the slow-roll approximation then requires $`\mathrm{max}\{ϵ,|\eta |\}\ll 1`$. At low energies, $`V\ll \lambda `$, the slow-roll parameters reduce to the standard form (see, e.g., Refs.). However at high energies, $`V\gg \lambda `$, the extra contribution to the Hubble expansion helps damp the rolling of the scalar field and the new factors in square brackets become $`\sim \lambda /V`$. Thus brane effects ease the condition for slow-roll inflation for a given potential.
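Explicitly, for $`V\gg \lambda `$ the bracketed factors tend to $`4\lambda /V`$ and $`2\lambda /V`$ respectively, so that

$$ϵ\simeq \frac{M_4^2}{4\pi }\left(\frac{V^{\prime }}{V}\right)^2\frac{\lambda }{V},\qquad \eta \simeq \frac{M_4^2}{4\pi }\frac{V^{\prime \prime }}{V}\frac{\lambda }{V},$$

both suppressed by the additional factor $`\lambda /V`$ relative to the standard expressions.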
The number of e-folds during inflation is given by $`N=\int _{t_\mathrm{i}}^{t_\mathrm{f}}Hdt`$, which in the slow-roll approximation becomes
$$N\simeq -\frac{8\pi }{M_4^2}\int _{\varphi _\mathrm{i}}^{\varphi _\mathrm{f}}\frac{V}{V^{\prime }}\left[1+\frac{V}{2\lambda }\right]d\varphi .$$
(14)
The effect of the modified Friedmann equation at high energies is to increase the rate of expansion by a factor $`[V/2\lambda ]`$, yielding more inflation between any two values of $`\varphi `$ for a given potential. Thus we can obtain a given number of e-folds for a smaller initial inflaton value $`\varphi _\mathrm{i}`$. For $`V\gg \lambda `$, Eq. (14) becomes $`N\simeq -(16\pi ^2/3M_5^6)\int _\mathrm{i}^\mathrm{f}(V^2/V^{\prime })d\varphi `$.
## III Perturbations on the brane
The key test of any inflation model, or any modified gravity theory during inflation, will be the spectrum of perturbations produced due to quantum fluctuations of the fields about their homogeneous background values. To date there has been no study of linear perturbations about a four-dimensional Friedmann-Robertson-Walker universe on the brane for the modified four-dimensional Einstein equations given in Eq. (1). The key uncertainty here comes from the tensor $`E_{\mu \nu }`$, which describes the effect of tidal forces and gravitational waves in the vacuum five-dimensional bulk and whose evolution is not completely determined by the four-dimensional effective theory alone. In what follows we set $`E_{\mu \nu }=0`$, effectively neglecting back-reaction due to metric perturbations in the fifth dimension. This is consistent with a homogeneous density of matter on the brane and thus is valid even in the presence of scalar field perturbations in the slow-roll limit (where $`V^{}0`$), but we note that a full investigation is required to discover when back-reaction will have a significant effect.
To quantify the amplitude of scalar (density) perturbations we evaluate the gauge-invariant quantity
$$\zeta \equiv \psi -\frac{H}{\dot{\rho }}\delta \rho ,$$
(15)
which reduces to the curvature perturbation, $`\psi `$, on uniform density hypersurfaces where $`\delta \rho =0`$. The four-dimensional energy-conservation equation, $`\nabla ^\nu T_{\mu \nu }=0`$, for linear perturbations (in an arbitrary gauge) on large scales, requires that
$$\delta \dot{\rho }+3H(\delta \rho +\delta p)+3(\rho +p)\dot{\psi }=0,$$
(16)
where we have neglected spatial gradients. We can apply Eq. (16) on uniform density hypersurfaces, where $`\delta \rho =0`$ and $`\psi =\zeta `$, \[or, equivalently, use the gauge-invariant definition of $`\zeta `$ given in Eq.(15)\] to obtain
$$\dot{\zeta }=-H\frac{\delta p_{\mathrm{nad}}}{\rho +p}.$$
(17)
Hence $`\zeta `$ is conserved on large scales for purely adiabatic perturbations, for which the non-adiabatic pressure perturbation, $`\delta p_{\mathrm{nad}}\equiv \delta p-(\dot{p}/\dot{\rho })\delta \rho `$, vanishes. This gauge-invariant result is a consequence of the local conservation of energy-momentum in four dimensions, and is independent of the form of the gravitational field equations .
The curvature perturbation on uniform density hypersurfaces is given in terms of the scalar field fluctuations on spatially flat hypersurfaces, $`\delta \varphi `$, by
$$\zeta =\frac{H\delta \varphi }{\dot{\varphi }}.$$
(18)
The field fluctuations at Hubble crossing ($`k=aH`$) in the slow-roll limit are given by $`\langle \delta \varphi ^2\rangle \simeq \left(H/2\pi \right)^2`$. Note that this result for a massless field in de Sitter space is also independent of the gravity theory . For a single scalar field the perturbations are adiabatic and hence the curvature perturbation $`\zeta `$ can be related to the density perturbations when modes re-enter the Hubble scale during the matter dominated era, which is given (using the notation of Ref. ) by $`A_\mathrm{s}^2=4\langle \zeta ^2\rangle /25`$. Using the slow-roll equations and Eq. (18), this gives
$$A_\mathrm{s}^2\simeq \left(\frac{512\pi }{75M_4^6}\right)\frac{V^3}{V^{\prime 2}}\left[\frac{2\lambda +V}{2\lambda }\right]^3|_{k=aH}.$$
(19)
Thus the amplitude of scalar perturbations is increased relative to the standard result at a fixed value of $`\varphi `$ for a given potential.
The scale-dependence of the perturbations is described by the spectral tilt
$$n_\mathrm{s}-1\equiv \frac{d\mathrm{ln}A_\mathrm{s}^2}{d\mathrm{ln}k}\simeq -6ϵ+2\eta ,$$
(20)
where the slow-roll parameters are given in Eqs. (12) and (13). Because these slow-roll parameters are both suppressed by an extra factor $`\lambda /V`$ at high energies, we see that the spectral index is driven towards the Harrison-Zel’dovich spectrum, $`n_\mathrm{s}\to 1`$, as $`V/\lambda \to \infty `$.
The tensor (gravitational wave) perturbations are bound to the brane at long-wavelengths and decoupled from the matter perturbations to first-order, so that the amplitude on large scales is simply determined by the Hubble rate when each mode leaves the Hubble scale during inflation. The amplitude of tensor perturbations at Hubble crossing is given by
$$A_\mathrm{t}^2=\frac{4}{25\pi }\left(\frac{H}{M_4}\right)^2|_{k=aH}.$$
(21)
In the slow-roll approximation this yields
$$A_\mathrm{t}^2\simeq \frac{32}{75M_4^4}V\left[\frac{2\lambda +V}{2\lambda }\right]|_{k=aH}.$$
(22)
Again, the tensor amplitude is increased by brane effects, but by a smaller factor than the scalar perturbations. The tensor spectral tilt is
$$n_\mathrm{t}\equiv \frac{d\mathrm{ln}A_\mathrm{t}^2}{d\mathrm{ln}k}\simeq -2ϵ,$$
(23)
so that the ratio between the amplitude of tensor and scalar perturbations is given by
$$\frac{A_\mathrm{t}^2}{A_\mathrm{s}^2}\simeq ϵ\left[\frac{\lambda }{\lambda +V}\right]|_{k=aH}.$$
(24)
Thus the standard observational test for consistency condition between this ratio and the tilt of the gravitational wave spectrum is modified by the pre-factor $`\lambda /(\lambda +V)`$, which becomes small at high energies. Although the amplitude of both tensor and scalar perturbations is enhanced due to the increased Hubble rate, the overall effect is to suppress the contribution of tensor perturbations relative to the scalar modes for a given potential $`V`$.
## IV A simple model
As an example we investigate the simplest chaotic inflation model driven by a scalar field with potential $`V=\frac{1}{2}m^2\varphi ^2`$. Equation (14) gives the integrated expansion from $`\varphi _\mathrm{i}`$ to $`\varphi _\mathrm{f}`$ as
$$N\simeq \frac{2\pi }{M_4^2}\left(\varphi _\mathrm{i}^2-\varphi _\mathrm{f}^2\right)+\frac{\pi ^2m^2}{3M_5^6}\left(\varphi _\mathrm{i}^4-\varphi _\mathrm{f}^4\right).$$
(25)
The new term on the right, arising from the modified Friedmann equation on the brane, means that we always get more inflation for a given initial inflaton value $`\varphi _\mathrm{i}`$.
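As a cross-check, Eq. (25) follows from Eq. (14) by elementary integration; the short symbolic verification below is a sketch (using SymPy), in which $`\lambda `$ is eliminated via Eq. (3), i.e. $`\lambda =3M_5^6/4\pi M_4^2`$.

```python
# Symbolic check that Eq. (14) with V = m^2 phi^2 / 2 reproduces Eq. (25).
import sympy as sp

phi, phi_i, phi_f, m, M4, M5, lam = sp.symbols(
    'phi phi_i phi_f m M_4 M_5 lam', positive=True)

V = m**2 * phi**2 / 2
N = -8 * sp.pi / M4**2 * sp.integrate(
    V / sp.diff(V, phi) * (1 + V / (2 * lam)), (phi, phi_i, phi_f))
N = N.subs(lam, 3 * M5**6 / (4 * sp.pi * M4**2))        # Eq. (3)

expected = (2 * sp.pi / M4**2 * (phi_i**2 - phi_f**2)
            + sp.pi**2 * m**2 / (3 * M5**6) * (phi_i**4 - phi_f**4))

print(sp.simplify(N - expected))                        # -> 0
```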
In the usual chaotic inflation scenario based on Einstein gravity in four dimensions, the value of the inflaton mass $`m`$ is required to be $`\sim 10^{13}`$ GeV in order to obtain the observed level of anisotropies in the cosmic microwave background (see below). This corresponds to an energy scale $`\sim 10^{16}`$ GeV when the relevant scales left the Hubble scale during inflation, but crucially also an inflaton field value of order $`3M_4`$. Chaotic inflation has been criticised for requiring super-Planckian field values to solve both the problems of the standard background cosmology and lace the microwave background with anisotropies of the observed magnitude. The problem with super-Planckian field values is that one generically expects non-renormalizable quantum corrections $`(\varphi /M_4)^n`$, $`n>4`$ to completely dominate the potential, depriving one of control over the potential and typically destroying the flatness of the potential required for inflation (the $`\eta `$-problem ).
If the brane tension $`\lambda `$ is much below $`(10^{16}\text{ GeV})^4`$, corresponding to $`M_5<10^{17}`$ GeV, then the terms quadratic in the energy density dominate the modified Friedmann equation. In particular the condition for the end of inflation given in Eq. (9) becomes $`\dot{\varphi }^2<\frac{2}{5}V`$. In the slow-roll approximation \[using Eqs. (10) and (11)\] $`\dot{\varphi }\simeq -M_5^3/2\pi \varphi `$ and this yields
$$\varphi _{\mathrm{end}}^4\simeq \frac{5}{4\pi ^2}\left(\frac{M_5}{m}\right)^2M_5^4.$$
(26)
In order to estimate the value of $`\varphi `$ when scales corresponding to large-angle anisotropies on the microwave background sky left the Hubble scale during inflation, we take $`N_{\mathrm{cobe}}\simeq 55`$ in Eq. (25) and $`\varphi _\mathrm{f}=\varphi _{\mathrm{end}}`$. (The precise value is dependent upon the actual energy scale during inflation and the reheat temperature ; our results are only very weakly dependent upon the value of $`N`$ chosen.) The second term on the right of Eq. (25) dominates, and we obtain
$$\varphi _{\mathrm{cobe}}^4\simeq \frac{165}{\pi ^2}\left(\frac{M_5}{m}\right)^2M_5^4.$$
(27)
Imposing the COBE normalization on the curvature perturbations given by Eq. (19) requires
$$A_\mathrm{s}\simeq \left(\frac{8\pi ^2}{45}\right)\frac{m^4\varphi _{\mathrm{cobe}}^5}{M_5^9}\approx 2\times 10^{-5}.$$
(28)
Substituting in the value of $`\varphi _{\mathrm{cobe}}`$ given by Eq. (27) shows that in the limit of strong brane corrections, observations require
$$m\simeq 5\times 10^{-5}M_5,\qquad \varphi _{\mathrm{cobe}}\simeq 3\times 10^2M_5.$$
(29)
Thus for $`M_5<10^{17}`$ GeV, chaotic inflation can occur for field values below the four-dimensional Planck scale, $`\varphi _{\mathrm{cobe}}<M_4`$, although still above the five-dimensional scale $`M_5`$. The relation determined by COBE constraints for arbitrary brane tension is shown in Fig. 1, together with the high-energy approximation used above, which provides an excellent fit at low brane tension relative to $`M_4`$.
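The numbers in Eq. (29) can be reproduced directly from Eqs. (27) and (28); the sketch below assumes only $`N_{\mathrm{cobe}}=55`$, $`A_\mathrm{s}=2\times 10^{-5}`$ and the dimensionally consistent form of Eq. (28), with everything expressed in units of $`M_5`$.

```python
# Numerical check of Eq. (29) in the strong-brane-correction limit.
import math

N_cobe, A_s = 55.0, 2.0e-5

# Eq. (27): phi_cobe^4 = (3 N_cobe / pi^2) (M_5/m)^2 M_5^4
# Eq. (28): A_s = (8 pi^2 / 45) m^4 phi_cobe^5 / M_5^9
# Eliminating phi_cobe: A_s = (8 pi^2/45) (3 N_cobe/pi^2)^(5/4) (m/M_5)^(3/2).
coeff = (8 * math.pi**2 / 45) * (3 * N_cobe / math.pi**2) ** 1.25
m_over_M5 = (A_s / coeff) ** (2.0 / 3.0)
phi_over_M5 = (3 * N_cobe / math.pi**2) ** 0.25 / math.sqrt(m_over_M5)

print("m / M_5        ~ %.1e" % m_over_M5)        # ~ 5e-5
print("phi_cobe / M_5 ~ %.0f" % phi_over_M5)      # ~ 3e2
```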
## V Conclusion
In summary, we have found that slow-roll inflation is enhanced by the modifications to the Friedmann equation in a cosmological scenario where matter, including the inflaton field, is confined to a three-dimensional brane, in five-dimensional Einstein gravity. This enables the simplest chaotic inflation models, where the inflaton potential is a polynomial in $`\varphi `$, to inflate at field values below the four-dimensional Planck scale.
We have calculated the expected amplitude of density perturbations using the curvature perturbation $`\zeta `$ on uniform density hypersurfaces, which we have argued will remain constant on very large scales even in the presence of modifications to the Einstein equations at high energies, so long as the perturbations are adiabatic. Our calculations neglect the effect of gravitons in the five-dimensional bulk which is always a consistent solution for homogeneous matter fields . However we note that a full calculation should include the effect of back-reaction from gravitational radiation in the bulk which might play an important role for the high momentum wavemodes, possibly modifying the amplitude of field fluctuations expected at Hubble-crossing.
Our results show that the additional friction term due to the enhanced expansion at high energies drives the expected tilt of the spectrum of density perturbations to zero, leading to the canonical scale-invariant Harrison-Zel’dovich spectrum. The modified dynamics alters the usual consistency relation between the tilt of the gravitational wave spectrum and the ratio of tensor to scalar perturbations expected in single-field slow-roll inflation. At the same time the amplitude of tensor perturbations is suppressed making an observational test of this prediction more difficult. Conversely, the detection of a tensor signal would be evidence against this scenario.
## Acknowledgements
The authors are grateful to David Lyth, Carlo Ungarelli and Kostya Zloshchastiev for helpful discussions. DW is supported by the Royal Society and IH is supported by the EPSRC.
Numerical Relativity in 3+1 Dimensions
## 1 Introduction
In this short review I describe work concerned with one of the central issues of numerical relativity, the solution of the two body evolution problem of general relativity. After a short introduction to (3+1)-dimensional numerical relativity, I briefly discuss recent progress on binary black hole mergers, the evolution of strong gravitational waves, and shift conditions for neutron star binaries.
As opposed to Newtonian theory, where the Kepler ellipses provide an astrophysically relevant example for the analytic solution of the two body problem, in Einsteinian gravity there are no corresponding exact solutions. The failure of Einstein’s theory to lead to stable orbits is due to the fact that in general two orbiting bodies will emit gravitational waves that carry away energy and momentum from the system, leading to an inspiral. But, of course, this “leak” is not considered to be detrimental. Gravitational waves are one of the most interesting new phenomena introduced by general relativity that will open a new window into the universe through gravitational wave astronomy, e.g. .
The evolution of a two body gravitational system, for example a binary black hole system (which can be constructed as a vacuum system and avoids additional complication due to matter sources), can be divided into at least three phases. For sufficiently large separation of the two black holes there is a slow inspiral phase with many orbits, followed by a very brief violent merger phase that leads to a single, distorted black hole that after a short ring-down phase settles down to a final stationary black hole. For the initial and final phases, rather well understood approximation schemes are available, i.e. post-newtonian calculations for the slow inspiral of two point masses (e.g. ) and the close limit approximation for the ring-down of a single distorted black hole (e.g. ). For a full treatment of the strongly non-linear, fully general relativistic phase one has to turn to computer simulations to obtain (again approximate) numerical answers.
Each phase leads to a characteristic gravitational wave signal. At this time several gravitational wave detectors are being built world-wide that should for the first time make the direct measurement of gravitational waves possible. The prediction and analysis of future signals is the main motivation for studies of binary systems in numerical relativity. While this is certainly the primary motivation, let me add that even if it had no directly measurable observational consequences in the near future, we should still solve a basic problem like the two body problem of general relativity.
The article is organized as follows. In Sec. 2, a brief history of black hole evolutions in numerical relativity in 2+1 (axisymmetry) and 3+1 dimensions is given. In Sec. 3, the evolution problem of numerical relativity is introduced in its 3+1 form, leading to three main issues: initial data, evolution, analysis. Initial data is computed on a three-dimensional hypersurface, which is evolved in time, and at various times analysis like identifying the black hole horizons and gravitational wave extraction is carried out. The coordinate problem of numerical relativity is emphasized with the choice of slicing function, the lapse, as an example. The skeleton of a typical black hole evolution is discussed. In Sec. 4, the “Cactus” code, a computational infrastructure for numerical relativity and relativistic astrophysics, is described.
After this general introduction, we discuss several examples for recent progress in (3+1)-dimensional numerical relativity. In Sec. 5, current (summer of 1999) binary black hole simulations are presented. The holes start out close to each other and evolve through a plunge rather than an orbit (a grazing collision). Achievable evolution time is now about $`30M`$ ($`M`$ the mass of the final black hole), which for the first time allows the extraction of wave forms. In Sec. 6, a discussion of strong wave evolutions is included because strong waves play a role in black hole mergers and these studies provided the proving ground for a new evolution scheme discussed in Sec. 3.1. In Sec. 7, the minimal distortion shift condition is described. Lapse and shift specify the coordinate gauge, and in all simulations mentioned so far the shift has been zero, but for systems with rotation a shift condition will be essential. The example presented is minimal distortion shift for a binary neutron star system, which for this purpose is simpler than black holes because there are no special inner boundaries.
Sec. 8 concludes this brief review, pointing out again those issues and techniques that will be important for the numerical simulation of binary black holes for several orbits lasting for $`100`$$`1000M`$ with results that are relevant for gravitational wave astronomy.
## 2 History of black hole simulations in numerical relativity
In this section I endeavor to give a necessarily very short but in its highlights complete exposition of the literature on numerical black hole evolutions, concentrating mostly on work that implements the complete black hole evolution problem (data, evolution, analysis) for the full Einstein equations in vacuum. Matter occurs in a few places but only as a means to form black holes. Clearly, there is a large and important body of work concerned with all the different, separate aspects of and methods for black hole evolutions as outlined in Sec. 3. Still, this allows us to sketch the history of the field.
### 2.1 2+1 dimensions
After some early attempts , it was the work by Smarr and Eppley on the head-on collision of two equal mass Misner black holes which basically founded the field of numerical relativity as a subject of computational physics. Axisymmetric head-on collisions allow significant savings in computational cost when formulated with two spatial and one time coordinate (2+1 dimensions), although this excludes the possibility of orbiting black holes and radiation of angular momentum. Many of the key techniques that are still in use today stem from that period of the sixties and seventies (see for the definitive review).
The beginning of the nineties saw a surge of activity when more powerful computers, improved codes and methods allowed significant advances. The axisymmetric collision of black holes, either formed by particles or implemented as Misner data , was repeated complete with horizon finding and wave extraction. It is remarkable how the crude results for the wave emission of were confirmed in . Another highlight is certainly the numerical computation of the "pair of pants" picture for a black hole merger , which was a result of the US Binary Black Hole Grand Challenge Alliance. Head-on collisions in axisymmetry continue to improve, see the recent work on unequal mass configurations .
Rotating black holes are another interesting system in axisymmetry. In , particles collapse to form a black hole with rotation and a toroidal event horizon. In , a Kerr black hole distorted by a gravitational wave is evolved. Matter plus rotating black hole systems are also studied in .
A traditional topic in numerical black hole studies is that of black hole formation, of which I want to mention only the following recent references that are of relevance to this article. The formation of naked singularities was examined in . Furthermore, in the collapse of gravitational waves to a black hole is demonstrated. A surprise was that even (1+1)-dimensional, spherically symmetric black hole systems are far from being trivial, as the rich set of critical phenomena discovered by Choptuik showed (see e.g. for a review). In 2+1 dimensions, the only critical collapse studies so far are those of .
### 2.2 3+1 dimensions
Numerical relativity of black holes in 3+1 dimensions was initiated in 1995 with evolutions of a Schwarzschild black hole with singularity avoiding slicing on a Cartesian grid . Achieved run time is about $`30M`$. At the same time the first (3+1)-dimensional wave simulations were carried out . In Sec. 6, I comment on the collapse of non-axisymmetric waves to a black hole .
Returning to our main topic, the evolution of a Schwarzschild black hole with a non-vanishing shift vector was studied in , compare Sec. 7. In , adaptive mesh refinement techniques, made famous in numerical relativity by , were applied for the first time to 3+1 relativity, also for a Schwarzschild black hole. By now, evolutions for the Schwarzschild spacetime are standard code tests, e.g. . Further studies of single black holes include the distorted black holes in , which provided the first detailed tests of wave extraction in 3+1 dimensions. The Black Hole Grand Challenge Alliance performed the longest stable evolution of a single black hole so far, reaching about $`60M`$ for a standard Cauchy evolution with black hole excision and a boosted black hole , and essentially achieved complete stability ($`>60000M`$) with a characteristic evolution code, which is tailored to the one black hole problem but can also treat small distortions, and for the first time a black hole that moves across the grid .
Binary black hole evolutions are pushing the limits of what is currently possible. Some results for the evolution of the axisymmetric Misner data set with the 3+1 code of with singularity avoiding slicing are reported in . The first true (3+1)-dimensional binary black hole evolution, the grazing collision of nearby spinning and moving black holes, was performed in . This sets the stage for the recent binary black hole simulations of Sec. 5, but first we want to discuss some of the basic issues in numerical relativity.
## 3 Anatomy of a numerical relativity simulation
### 3.1 3+1 formulation
The Arnowitt-Deser-Misner (ADM) equations are one of the possibilities to rewrite the Einstein equations as an initial value problem for spatial hypersurfaces. The dynamical fields of the ADM formulation are a 3-metric $`g_{ab}`$ and its extrinsic curvature $`K_{ab}`$ on a 3-manifold $`\mathrm{\Sigma }`$, both depending on space (points in $`\mathrm{\Sigma }`$) and a time parameter, $`t`$. The foliation of the 4-dimensional spacetime into hypersurfaces $`\mathrm{\Sigma }`$ is characterized in the usual way by a lapse function $`\alpha `$ and a shift vector $`\beta ^a`$. The Einstein equations for vacuum become
$`(\partial _t-\mathcal{L}_\beta )g_{ab}`$ $`=`$ $`-2\alpha K_{ab}`$ (1)
$`(\partial _t-\mathcal{L}_\beta )K_{ab}`$ $`=`$ $`-D_aD_b\alpha +\alpha (R_{ab}-2K_{ac}K^c{}_{b}+K_{ab}K)`$ (2)
$`0`$ $`=`$ $`D^b(K_{ab}-g_{ab}K)\equiv 𝒟_a,`$ (3)
$`0`$ $`=`$ $`R-K_{ab}K^{ab}+K^2,`$ (4)
where $`R_{ab}`$ is the 3-Ricci tensor, $`R`$ the Ricci scalar, $`K`$ the trace of the extrinsic curvature, $`\mathcal{L}_\beta `$ the Lie derivative along $`\beta `$, and $`D_a`$ the covariant derivative compatible with the 3-metric. One obtains evolution equations for the metric variables, (1) and (2), and constraint equations that do not contain time derivatives of $`g_{ab}`$ or $`K_{ab}`$, the momentum constraint (3) and the Hamiltonian constraint (4).
These equations are well known, but displaying them explicitly allows me to make a number of basic observations. First of all, these are comparatively simple equations. Even though writing out all the terms in the index contractions, in the definition of $`R_{ab}`$ and the covariant derivative leads to on the order of 1000 floating point operations per point for a typical finite difference representation of (1) and (2), this can be easily dealt with computationally and is not one of the fundamental problems of black hole evolutions. Still, a numerical implementation requires some thought and hard work, see Sec. 4.
Notice that lapse and shift appear in the evolution equations (as of course they have to) and have to be specified as part of the evolution problem. Choosing lapse and shift fixes the coordinate gauge for the evolutions, and is one of the key problems for numerical evolutions, see Sec. 3.2.
The constraint equations imply that specifying initial data $`g_{ab}`$ and $`K_{ab}`$ on a hypersurface $`\mathrm{\Sigma }`$ involves in general solving the constraints numerically. If the constraints are satisfied initially, they will remain satisfied for a well-posed evolution system, but this is an analytic statement that is only approximately true numerically.
Finally, the ADM equations do not define a hyperbolic evolution system (e.g. ), and it is not clear to what extent the original ADM equations can lead to a numerically stable evolution system. The issue of stability has to be addressed on two levels. First, a well-posed evolution system is one for which existence, uniqueness and stability of a solution for at least finite time intervals can be shown, which is true for example for hyperbolic systems. However, in general stability does not rule out exponentially growing modes (this may be the solution one is looking for). Second, the numerical implementation of an analytically stable system does not trivially lead to stable numerical evolutions (e.g. the finite-differenced equations may have exponentially growing solutions which are not present analytically). Also note that important stability issues arise at the boundaries of the computational domain.
Finding stable evolution systems is perhaps the other key issue in numerical relativity besides the problem of choosing coordinates. For an excellent review of first order hyperbolic systems for relativity see , but other systems are of interest, too. One of the important developments of the last year was the demonstration by Baumgarte and Shapiro , that a conformal, trace-split version of the ADM system very much like the system used by Shibata and Nakamura in , is significantly more stable (numerically) than the ADM equations for weak fields and some algebraic slicings. This BSSN system can also be understood as a second order, conformal version of the Bona-Massó system . First order hyperbolic versions were given in , although the BSSN system as it stands is not hyperbolic. The BSSN variables are
$`\varphi `$ $`=`$ $`\mathrm{ln}(\text{det}g)/12,`$ (5)
$`K`$ $`=`$ $`g^{ab}K_{ab},`$ (6)
$`\stackrel{~}{g}_{ab}`$ $`=`$ $`e^{-4\varphi }g_{ab},`$ (7)
$`\stackrel{~}{A}_{ab}`$ $`=`$ $`e^{-4\varphi }(K_{ab}-g_{ab}K/3),`$ (8)
$`\stackrel{~}{\mathrm{\Gamma }}^c`$ $`=`$ $`\stackrel{~}{\mathrm{\Gamma }}_{ab}^c\stackrel{~}{g}^{ab},`$ (9)
so that $`\text{det}\stackrel{~}{g}=1`$ and $`\text{tr}\stackrel{~}{A}_{ab}=0`$. Furthermore, introducing $`\stackrel{~}{\mathrm{\Gamma }}^c`$ leads on the right-hand-side of equation (2) to an elliptic expression in derivatives of $`\stackrel{~}{g}_{ab}`$, i.e. the corresponding BSSN equation has the character of a wave equation. However, including the evolution equation for $`\stackrel{~}{\mathrm{\Gamma }}^c`$ appears to spoil hyperbolicity. Nevertheless, the BSSN system has very nice stability properties, and some suggestions about why this may be the case are made in . Several BSSN evolutions have now been reported, for strong waves and maximal slicing (Sec. 6) and also for matter evolutions .
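As a concrete illustration of the change of variables (5)–(9), the sketch below (schematic Python/NumPy, not taken from any of the codes discussed here) builds the conformal quantities from given ADM data on a uniform Cartesian grid; it uses the identity $`\stackrel{~}{\mathrm{\Gamma }}^c=-\partial _b\stackrel{~}{g}^{cb}`$, valid when $`\text{det}\stackrel{~}{g}=1`$.

```python
import numpy as np

def adm_to_bssn(g, K, dx):
    """Schematic construction of the BSSN variables (5)-(9) from ADM data.

    g, K : arrays of shape (3, 3, Nx, Ny, Nz) holding g_ab and K_ab on a
           uniform Cartesian grid with spacing dx.  Sketch only, not taken
           from any production code.
    """
    # Eq. (5): phi = ln(det g)/12
    detg = np.linalg.det(np.moveaxis(g, (0, 1), (-2, -1)))
    phi = np.log(detg) / 12.0

    # Eq. (6): K = g^{ab} K_ab
    ginv = np.moveaxis(np.linalg.inv(np.moveaxis(g, (0, 1), (-2, -1))), (-2, -1), (0, 1))
    trK = np.einsum('ab...,ab...->...', ginv, K)

    # Eqs. (7)-(8): conformal metric and trace-free extrinsic curvature
    e_m4phi = np.exp(-4.0 * phi)
    gt = e_m4phi * g
    At = e_m4phi * (K - g * trK / 3.0)

    # Eq. (9): with det(gt) = 1 one has Gamma^c = -d_b gt^{cb}
    gtinv = np.moveaxis(np.linalg.inv(np.moveaxis(gt, (0, 1), (-2, -1))), (-2, -1), (0, 1))
    Gammat = -sum(np.gradient(gtinv[:, b], dx, axis=b + 1) for b in range(3))

    return phi, trK, gt, At, Gammat
```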
### 3.2 Schwarzschild as an example for a typical black hole evolution problem
Moving on to the prototypical example for a black hole evolution, consider Fig. 1, which shows the Schwarzschild spacetime for a static, spherically symmetric black hole in Novikov coordinates . The coordinates are chosen such that freely falling observers that start at rest at time $`\tau =0`$ follow constant $`R^{*}`$ lines. The Schwarzschild radius $`r`$ is related to $`R^{*}`$ at $`\tau =0`$ through $`R^{*}=(r/(2M)-1)^{1/2}`$. Several constant $`r`$ lines are shown: the physical singularity at $`r=0`$, the event horizon at $`r=2M`$, and note how the lines $`r=4M`$ and $`r=6M`$ curve outwards, which corresponds to the radial infall of the observers with constant $`R^{*}`$.
To set up an evolution problem we can choose the slice $`\tau =0`$ as initial hypersurface with $`g_{ab}`$ and $`K_{ab}`$ derived from the Schwarzschild four-metric ($`g_{ab}`$ and $`K_{ab}`$ therefore solve the constraints). Note that the physical singularity is to the future of this slice and does not show in $`g_{ab}`$ and $`K_{ab}`$. To perform an evolution, we have to specify lapse and shift. The Novikov coordinates correspond to geodesic slicing, $`\alpha =1`$, and vanishing shift, $`\beta ^a=0`$. Concretely, consider a numerical grid at $`\tau =0`$ extending from $`R^{*}=0`$ to $`R^{*}=2^{1/2}`$. In the first quadrant of the figure we have shown how this initial slice moves through the spacetime for geodesic slicing with vanishing shift. Without precaution a numerical code will crash at $`\tau =\pi M`$ when the point at $`R^{*}=0`$ reaches the singularity (this "crash test" has in fact been used as a first crude code test ).
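For completeness, the quoted crash time is simply the proper time of radial free fall from rest in the Schwarzschild geometry (a standard textbook result, stated here independently of the figure): a particle released from rest at $`r=r_0`$ obeys $`(dr/d\tau )^2=2M/r-2M/r_0`$, so that

$$\tau (r_0)=\int _0^{r_0}\frac{dr}{\sqrt{2M/r-2M/r_0}}=\frac{\pi }{2}\frac{r_0^{3/2}}{\sqrt{2M}},$$

and the innermost grid point, which starts from rest at $`r=2M`$, reaches the singularity after $`\tau =\pi M`$.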
As shown in the figure, one can imagine evolving beyond $`\tau =\pi M`$ by cutting out from the slice what is inside the event horizon of the black hole, which will not affect the outside of the black hole anyway. “Black hole excision” techniques are a very promising approach to black hole evolutions, although in 3+1 dimensions there remain certain stability problems to be resolved for binary black holes. Black hole excision usually involves a non-trivial choice of lapse and shift.
With black hole excision not quite ready yet, it is so-called singularity avoiding slicings that have been most widely used. Assume $`\beta ^a=0`$. Primary examples are maximal slicing ($`K=0`$ initially and $`\mathrm{\Delta }\alpha =\alpha K_{ab}K^{ab}`$ so that $`\partial _tK=0`$), and so-called "1+log" slicings (e.g. $`\partial _t\alpha =-\alpha K/2`$). In the second quadrant of Fig. 1, we show a typical example (hand-drawn, while the rest of the figure was computed). At the center, evolution slows down, while for large radii the evolution marches on with $`\alpha =1`$ at infinity. Obviously, a numerical problem will occur in between, which is referred to as grid-stretching, and which is reflected in growing sharp peaks in the radial-radial component of the metric. Singularity avoiding slicings have this fatal problem built in, but they allow us to compute evolutions up to $`30M`$–$`100M`$, which is barely sufficient for certain black hole collision and ring-down wave forms.
For completeness let me also mention the possibility of using hyperboloidal slices, or null-slices, or null-slices matched to the spatial slices, which when applicable cover more of the interesting space time in the wave region, e.g. . Characteristic matching is also useful near an excision boundary.
The key point to note is that the choice of coordinates in relativity is a more fundamental problem than, say, the choice of spherical over Cartesian coordinates for computational convenience. There is no simple canonical choice like $`\alpha =1`$ and $`\beta ^a=0`$ that works in reasonably general situations. Even if there are no black holes, geodesic slicing fails due to geodesic focusing. It appears to be the case that one has to determine lapse and shift dynamically during the evolution by some geometric principle as functions of the metric and extrinsic curvature.
### 3.3 Anatomy of a black hole simulation
After discussing 3+1 formulations in general and a specific black hole example, let us list the components of a numerical relativity evolution, with the binary black hole problem as example.
#### 3.3.1 Initial Data
* Choice of hypersurface
The simplest choice is $`R^3`$ for non-black hole data. Black hole data can be, for example, of Misner type with an isometry boundary condition at spheres representing the throats of the holes, $`R^3\setminus \{\mathrm{spheres}\}`$ , or Brill-Lindquist type data based on a punctured $`R^3`$, $`R^3\setminus \{\mathrm{points}\}`$ .
* Solution to constraints
There are four constraint equations that restrict the choice of 12 components in $`g_{ab}`$ and $`K_{ab}`$. The most common approach is the conformal method .
#### 3.3.2 Evolution
* Variables and evolution system
There are many different choices that can be roughly divided into ADM like systems that are of second order , and first order systems that are constructed to obtain hyperbolicity, e.g. .
* Choice of coordinates (gauge choice)
Typically a vanishing shift is used, but see Sec. 7. For the lapse, as explained above, algebraic and elliptic conditions are in use.
* Physical singularities
Smooth, regular initial data may develop physical singularities which are features of black hole spacetimes. Physical singularities can be avoided by choice of slicing, or removed from the grid through black hole excision.
* Coordinate singularities
Dynamical determination of lapse and shift may lead to coordinate pathologies, in particular for algebraic slicings (e.g. ). Elliptic conditions are sometimes preferable, although they are computationally much more expensive.
* Outer boundary condition
Asymptotic flatness can be assumed, which implies fall-off conditions for the fields. For run times on the order of $`100M`$ for a typical singularity avoiding Cauchy evolution, a radiative boundary condition is sufficiently accurate and stable, e.g. . For a more sophisticated scheme see . Also there are two well developed approaches in which the numerical grid does not end at a finite radius but extends to future null infinity, either by matching to a characteristic code at finite radius , or smoothly without matching via a conformal transformation . (Note that in 3+1 dimensions it is no longer straightforward to use a logarithmic radial coordinate as is conventionally done in axisymmetry.)
* Inner boundary conditions
As discussed above, black hole excision leads to a particular inner boundary. As for initial data construction, the inner boundary for slices in a black hole spacetime may be spherical or point-like. For short term evolutions, the numerical slice can cover the inner asymptotically flat regions of the black holes if the resulting coordinate singularities are treated with the puncture method for evolution .
#### 3.3.3 Analysis
* Tensor components
The raw output of a computer code will be the components of its basic variables, e.g. $`g_{ab}`$, $`K_{ab}`$, $`\alpha `$, and $`\beta ^a`$, all other information is computed from these. Interesting local quantities include Riemann curvature invariants $`I`$ and $`J`$ and the Newman-Penrose invariants $`\psi _0`$ through $`\psi _4`$.
* Black hole horizons
The event horizon of black holes is a spacetime concept and can be found approximately if a sufficiently large spacetime slab has been computed, e.g. . The apparent horizon is a notion intrinsic to the hypersurface (and is therefore slicing dependent). It is defined as the union of outermost marginally trapped surfaces, i.e. surfaces for which the expansion of outgoing null rays vanishes. Trapped surfaces are linked to the existence of black holes through the singularity theorems. See e.g. and references therein for numerical issues.
* Wave extraction
Wave forms can be computed reliably at finite but large radius using the first order gauge invariant approach of , as recently demonstrated for 3+1 dimensions in . In approaches that make future null infinity part of the numerical grid, see above, wave extraction is much more direct.
## 4 Implementation of a numerical relativity simulation
As should be evident from the previous section, numerical relativity poses a complex scientific problem that translates into a challenging software engineering problem. Here I want to discuss “Cactus”, a code that is developed and used at the Albert-Einstein-Institut (AEI, the Max-Planck-Institut für Gravitationsphysik), and several other institutions .
Referring to , the cactus code is a freely available modular portable and manageable environment for collaboratively developing high-performance multidimensional numerical simulations. Cactus provides a powerful application programming interface based on user modules (thorns) that plug into a compact core (flesh). Cactus is composed of modules that are independent of relativity, and of modules designed for relativity. The Cactus Computational Tool Kit supports a variety of supercomputing architectures and clusters, implements MPI-based parallelism for finite difference grids, several input/output layers, elliptic solvers, metacomputing, distributed computing, and visualization tools. Fixed and adaptive mesh refinement is under development. Cactus significantly enhances collaborative development by providing code sharing via CVS and defining appropriate interfaces for code combination. A large number of physics modules or thorns are available for numerical relativity and astrophysical applications, e.g. there are thorns for initial data, evolution routines, and data analysis. The first version of Cactus was created by J. Massó and P. Walker, and has been available for testing since April, 1997 . The Cactus Computational Tool Kit (Cactus 4.0) saw its first public release as a community code in July, 1999. It is actively supported by a cactus maintenance team, and there is good documentation. Cactus is a “third generation” code, going back to the “G” and “H” codes. The key step taken forward is the massive investment in the collaborative infrastructure, which is now beginning to pay off. For many more details, see .
So what does all of this mean? Suppose you want to run a black hole simulation. Cactus is not a high-level science tool where you get an executable with graphical user interface to input, say, the black hole masses and off it goes. Numerical relativity is still too experimental for that. At its heart, Cactus is a large collection of source files together with a sophisticated make system. The user decides what sources to include, then compiles the code. Runs are controlled by a text file containing parameters, e.g. for the grid size and the black hole masses. The sophistication lies in the ease with which code can be changed or added by single users without affecting functionality provided by others. Suppose a users wants to add a routine that computes the determinant of the metric. A new thorn is created with the source code, e.g. 20 lines of C or Fortran, and with files that inform Cactus about new parameters, new grid functions (say an array of reals with the name “detg”), and tell cactus when to call the new routine (in this case whenever analysis is done). This is work to be done by the thorn writer, but he or she gets the rest for free: set-up of a numerical grid, storage for $`g_{ab}`$ and its determinant, evolution of $`g_{ab}`$ according to, say, the ADM equations, parallel execution, input, output, adaptive mesh refinement, etc. When submitted to the Cactus code repository, any Cactus user can now make use of “detg”.
Taking the viewpoint of a physicist, the Cactus infrastructure takes care of many computer tasks that often distract from science. To say that a simulation was carried out with Cactus can refer to Cactus, the Computational Tool Kit, in the same way that credit is given to MPI for parallelism, or Mathematica or Maple for symbolic computation. Cactus is successful if the science outweighs the infrastructure. In order that Cactus does not remain a faceless collection of source code, I would like to give several science examples and also to mention at least a few names in connection with science projects. Work on hyperbolic methods in numerical relativity was done by J. Massó, P. Walker, and others. A project on black hole excision techniques has been implemented as “Agave” by S. Brandt, M. Huq, P. Laguna, and others at Penn State University, which uses Cactus mainly for parallelism. Furthermore, the NASA Neutron Star Grand Challenge Project of W.-M. Suen, E. Seidel, and others, develops the so-called GR3D code , which is a version of Cactus for coupled spacetime and relativistic hydrodynamics evolution based on Riemann solvers. M. Miller implemented the key science module (MAHC) for GR3D, with further contributions from other members of the Grand Challenge , see Sec. 7 for an application. Finally, Cactus is of course our platform for the binary black hole collisions reported on in Sec. 5 and the strong gravitational wave evolutions of Sec. 6. In this case, the code “BAM” originally developed in contributed the multigrid elliptic solver for the initial data and for maximal slicing, and the Mathematica scripts of BAM were used to generate C code for the BSSN evolution. For analysis, an apparent horizon finder implemented by M. Alcubierre was used (there is also one available by C. Gundlach, ), and the wave extraction routines by G. Allen . A much larger number of individuals than is apparent from the above citations has contributed to “the” Cactus code, see . At this moment the Cactus 3.2 CVS repository lists 88 thorns, which range from private and under development to stable and public.
## 5 Grazing collision of black holes
The first crude but truly (3+1)-dimensional binary black hole simulation can be summarized as follows . The approach taken was to address each of the items listed in the skeleton for evolution problems of black holes of Sec. 3.3 in the simplest possible manner that still allowed us to combine all the ingredients to a complete implementation. Initial data for two black holes, each with linear momentum and spin, is constructed using the puncture method (see also ), in which the internal asymptotically flat regions of the holes are compactified so that the numerical domain becomes $`R^3`$. By construction the initial data is conformally flat. The evolution is performed with the original ADM equations and a leapfrog finite difference scheme. Maximal slicing and vanishing shift is chosen, i.e. physical singularities are avoided, while coordinate singularities typically do not occur for this elliptic slicing condition. At the outer boundary, the ADM variables are held constant, which works well for the achievable run times because a fixed mesh refinement of nested boxes (with finer resolution at the center) is used, and for large radii the lapse can approximate the Schwarzschild lapse for which Schwarzschild data would remain static. An important insight is that the puncture method, which can be made rigorous for initial data, can be numerically extended to the evolution equations so that no special inner boundary is present . Analysis is restricted to apparent horizon finding with a curvature flow method. These methods allow one to evolve for about $`7M`$, which is sufficient to observe the merger of the apparent horizons, but too short for wave extraction.
Currently, binary black hole mergers are simulated by our AEI/NCSA/WashU/Palma collaboration, and in this section I summarize some of the preliminary results. These simulations build on and introduce various improvements. On the technical side, for high-performance collaborative computing the code is implemented with Cactus 3.2. An improved apparent horizon finder is now available . The comparatively slow maximal slicing can in many situations be replaced by “1+log” slicing (cmp. Sec. 3.2). No mesh refinement is used, but the outer boundary is treated with a radiative (Sommerfeld) boundary condition. The above plus the BSSN evolution system as given in with a 3-step iterative Crank-Nicholson (ICN) scheme, allow run times of up to $`30M`$ for grazing collisions, compared to $`7M`$ for previous runs, and up to $`50M`$ for simpler data sets.
The important new result is that now for the first time the extraction of wave forms becomes possible with the methods tested in . Let us discuss a concrete example. For initial data we choose the punctures of each hole on the $`y`$-axis at $`\pm 1.5`$, masses $`m_1=1.5`$ and $`m_2=1`$, linear momenta $`P_{1,2}=(\pm 2,0,0)`$, and spins $`S_1=(1/2,1/2,0)`$ and $`S_2=(0,1,1)`$ (all units normalized by $`m_2=1`$). The numerical grid has $`385^3`$ points with grid spacing $`0.2`$, which puts the outer boundary for a centered cube at a coordinate value of about $`38`$. The initial ADM mass is $`M=3.11`$, so the outer boundary is at about $`12M`$ (solving the constraints for the “bare” parameters increases the mass over the Brill-Lindquist vanishing spins and momenta value of $`m_1+m_2`$). The total angular momentum is $`J=6.7`$, which corresponds to an angular momentum parameter of $`a/M=J/M^2=0.70`$.
The black holes start out with separate marginally trapped surfaces forming the apparent horizon (although it may well be that they have a common event horizon). Fig. 2 shows the formation of a single marginally trapped surface surrounding the initial inner marginally trapped surfaces. The apparent horizon is defined by a type of minimal surface equation (e.g. ), and does not evolve continuously, rather a new “minimal” surface appears in a new location. The shading of the surfaces indicates the Gauss curvature on the surfaces. The area of the apparent horizon increases in these coordinates because grid points are falling into the black hole. In Fig. 3, two frames near the merger are shown together with isosurfaces of Re$`\psi _4`$ as a wave indicator.
From the evolution, we can obtain an energy balance by comparing the energy carried by the various modes of the gravitational waves with the difference in mass between the initial slice and the final black hole. For the latter, the apparent horizon mass is $`M_{AH}=[(M_{ir})^2+J^2/(2M_{ir})^2]^{1/2}`$ with $`(M_{ir})^2=A_{AH}/(16\pi )`$ and $`A_{AH}`$ the numerically determined area of the apparent horizon of the final black hole. During the evolution, $`A_{AH}`$ reaches a plateau, but then starts drifting upwards as the grid stretching becomes more severe. With the plateau value for $`A_{AH}`$ and $`M_{ADM}M=3.11`$ we obtain
$$M_{ADM}-M_{AH}=0.03\approx 0.01M_{ADM}.$$
(10)
A rough estimate for the radiated energy in all modes until $`t=30M`$, with the extraction radius rather close to the system at $`8M`$, is
$$M_{RAD}=\text{0.007 – 0.008}M_{ADM}.$$
(11)
Even with all the current restrictions on accuracy coming from resolution, grid size, boundary treatment, and grid stretching, this energy balance can be considered to be a first physics result for such a grazing collision. We learn that for this data set roughly 1% of the total energy is emitted in gravitational waves. Clearly, a thorough parameter space study of such configurations is of interest. To make contact with astrophysical situations, more realistic initial data is probably needed (which ideally would be derived from the slow inspiral of the two black holes). A detailed report is in preparation.
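The internal consistency of the quoted numbers is easy to check; the sketch below (Python, using only the values given in the text) implements the horizon-mass formula quoted above and the two ratios used in the balance.

```python
import numpy as np

def horizon_mass(A_AH, J):
    """Apparent-horizon (Christodoulou) mass, M_AH = sqrt(M_ir^2 + J^2/(4 M_ir^2)),
    with M_ir^2 = A_AH/(16 pi), as used in the energy balance above."""
    M_ir2 = A_AH / (16.0 * np.pi)
    return np.sqrt(M_ir2 + J**2 / (4.0 * M_ir2))

M_ADM = 3.11     # initial ADM mass quoted in the text
J = 6.7          # total angular momentum quoted in the text

print("a/M                  =", J / M_ADM**2)   # ~0.69, cf. the quoted 0.70
print("fractional mass loss =", 0.03 / M_ADM)   # Eq. (10): ~0.01, i.e. about 1%

# horizon_mass() would be applied to the measured plateau area of the final hole;
# as a trivial check, for J = 0 it reduces to M_AH = sqrt(A_AH/(16 pi)).
```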
## 6 Gravitational collapse of gravitational waves
One way to probe general relativity in the highly non-linear regime, which should also share some of the strong wave features of the grazing collision, is certainly through the gravitational collapse of gravitational waves to a black hole. As briefly mentioned in Sec. 2.1, one scenario is that of critical collapse . One can construct a one-parameter family of initial data, and examine the region near the “critical” value for that parameter at which a black hole does or does not form. Not much is known in 3+1 dimensions , and the only study in axisymmetry is that of Abrahams and Evans for gravitational waves.
In this section I want to briefly discuss first results for non-axisymmetric collapse, cmp. . We take as initial data a pure Brill type gravitational wave , later studied by Eppley and others . The metric takes the form
$$ds^2=\mathrm{\Psi }^4\left[e^{2q}\left(d\rho ^2+dz^2\right)+\rho ^2d\varphi ^2\right]=\mathrm{\Psi }^4\widehat{ds}^2,$$
(12)
where $`q`$ is a free function subject to certain boundary conditions. Following , we choose $`q`$ of the form
$$q=a\rho ^2e^{-r^2}\left[1+c\frac{\rho ^2}{(1+\rho ^2)}\mathrm{cos}^2\left(n\varphi \right)\right],$$
(13)
where $`a,c`$ are constants ($`a`$ different from Sec. 5), $`r^2=\rho ^2+z^2`$ and $`n`$ is an integer. For $`c=0`$, these data sets reduce to the Holz axisymmetric form, recently studied in three-dimensional Cartesian coordinates in preparation for the present work . Taking this form for $`q`$, we impose the condition of time-symmetry, and solve the Hamiltonian constraint numerically in Cartesian coordinates. An initial data set is thus characterized only by the parameters $`(a,c,n)`$. For the case $`(a,0,0)`$, we found in that no apparent horizon exists in initial data for $`a<11.8`$, and we also studied the appearance of an apparent horizon for other values of $`c`$ and $`n`$.
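Purely as an illustration of how the free data enter a Cartesian code, the following sketch evaluates $`q`$ of Eq. (13) on a Cartesian grid (grid size and spacing are placeholders; the subsequent elliptic solve of the Hamiltonian constraint for $`\mathrm{\Psi }`$ is not shown):

```python
import numpy as np

def brill_q(x, y, z, a, c, n):
    """Free function q of Eq. (13), evaluated directly from Cartesian coordinates
    (assumes the Gaussian profile exp(-r^2) as written above)."""
    rho2 = x**2 + y**2
    r2 = rho2 + z**2
    phi = np.arctan2(y, x)
    return a * rho2 * np.exp(-r2) * (1.0 + c * rho2 / (1.0 + rho2) * np.cos(n * phi)**2)

# Example: the (a, c, n) = (6, 0.2, 1) data set discussed below, on a small test grid.
L, N = 8.0, 65                                   # placeholder box size and resolution
x, y, z = np.meshgrid(*(np.linspace(-L, L, N),) * 3, indexing='ij')
q = brill_q(x, y, z, a=6.0, c=0.2, n=1)
# The conformal factor Psi would next be obtained by solving the Hamiltonian
# constraint on this grid; that step is omitted here.
```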
For evolutions, we found that the BSSN system as given in with maximal slicing, a 3-step ICN scheme, and a radiative boundary condition is sufficiently reliable even for the strong waves considered here. The key new extensions to previous BSSN results are that the stability can be extended to (i) strong, dynamical fields and (ii) maximal slicing, where the latter requires some care. Maximal slicing is defined by vanishing of the mean extrinsic curvature, $`K`$=0, and the BSSN formulation allowed us to cleanly implement this feature numerically, in contrast with the standard ADM equations.
As discussed in , axisymmetric data with $`a=4`$ is subcritical, that is the imploding part of the wave disperses again, leaving flat space in a non-trivially distorted coordinate system. An amplitude of $`a=6`$ gives a supercritical evolution as indicated by the formation of an apparent horizon. The “cartoon” method to perform axisymmetric calculations in Cactus using three-dimensional Cartesian stencils on a two-dimensional slab allowed us to close in on the critical region near $`a=4.6`$, but work on detection of critical phenomena is still in progress.
Fig. 4 shows the development of the data set ($`a`$=6, $`c`$=0.2, $`n`$=1), which has reflection symmetry across coordinate planes. The initial ADM mass of this data set turns out to be $`M_{ADM}=1.12`$. Fig. 4a shows a comparison of the apparent horizons of this three-dimensional and the previous axisymmetric cases at $`t`$=10 on the $`x`$-$`z`$ plane. The mass of the three-dimensional apparent horizon case is larger, weighing in at $`M_{AH}`$=0.99 (compared to $`M_{AH}(2D)=0.87`$).
In Fig. 4b we show the {$`l`$=2,$`m`$=0} wave form of this three-dimensional case, compared to the previous axisymmetric case. The $`c=0.2`$ wave form has a longer wave length at late times, consistent with the fact that a larger mass black hole is formed in the three-dimensional case. Figs. 4c and 4d show the same comparison for the {$`l`$=4,$`m`$=0} and {$`l`$=2,$`m`$=2} modes respectively. Notice that while the first two modes are of similar amplitude for both runs, the three-dimensional {$`l`$=2,$`m`$=2} mode is completely different; as a non-axisymmetric contribution, it is absent in the axisymmetric run (in fact, it does not quite vanish due to numerical error, but it remains of order $`10^{-6}`$). We also show a fit to the corresponding quasi normal modes of a black hole of mass 1.0. The fit was performed in the time interval $`(10,36)`$, and is noticeably worse if the fit is extended to earlier times, showing that the lowest quasi normal modes dominate from around $`t=10`$ onwards. The early parts of the wave forms $`t<10`$ reflect the details of the initial data and BH formation process. This is especially clear in the {$`l`$=2,$`m`$=2} mode, which seems to provide the most information about the initial data and the three-dimensional black hole formation process.
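The quasi-normal-mode fit mentioned above amounts to a least-squares fit of a damped sinusoid over the chosen time window. A sketch of such a fit is given below; the frequency and damping of the fundamental $`l=2`$ mode of a Schwarzschild black hole, $`M\omega \approx 0.374-0.089i`$, are standard literature values, and the wave form array is only a synthetic stand-in for the extracted signal.

```python
import numpy as np
from scipy.optimize import curve_fit

M_BH = 1.0                                       # black hole mass used in the fit
omega_r, omega_i = 0.374 / M_BH, 0.089 / M_BH    # fundamental l=2 QNM (literature values)

def ringdown(t, A, phi0):
    """Lowest quasi-normal mode: exponentially damped oscillation."""
    return A * np.exp(-omega_i * t) * np.cos(omega_r * t + phi0)

# t, psi20 would be the extracted {l=2, m=0} wave form; synthetic stand-ins here.
t = np.linspace(0.0, 40.0, 400)
psi20 = ringdown(t, 1.0, 0.3) + 1e-3 * np.random.randn(t.size)

window = (t > 10.0) & (t < 36.0)                 # fit interval used in the text
(A_fit, phi_fit), _ = curve_fit(ringdown, t[window], psi20[window], p0=[1.0, 0.0])
print("fitted amplitude and phase:", A_fit, phi_fit)
```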
## 7 Minimal distortion shift
As a final example for recent advances in numerical relativity simulations, let me mention shift conditions in (3+1)-dimensional relativity. The first preliminary test of a dynamically computed minimal distortion shift can be found in for a Schwarzschild black hole on a 3+1 Cartesian grid, which is still the only example with black hole excision. Computational domains with holes pose a technical problem for the elliptic solver, which certainly will be solved (see for example ) once excision runs demand dynamic shifts.
A non-vanishing shift plays an important role in calculations that involve orbiting black holes or neutron stars, e.g. in post-newtonian calculations or Newtonian hydrodynamics for neutron stars. The freedom in the shift vector can in principle be used to obtain corotating coordinates or partially corotating coordinates (to counter frame dragging). A variational principle to minimize coordinate shear leads to the minimal distortion family of shift conditions, see . Introducing again a conformal factor such that the conformal metric $`\stackrel{~}{g}_{ab}`$ has unit determinant, one can minimize
$$S[\beta ]=\int |\partial _t\stackrel{~}{g}|^2𝑑V=\int \stackrel{~}{g}^{ac}\stackrel{~}{g}^{bd}\partial _t\stackrel{~}{g}_{ab}\partial _t\stackrel{~}{g}_{cd}\sqrt{\text{det}g}d^3x,$$
(14)
which gives a vector elliptic equation for $`\beta ^a`$,
$`(\mathrm{\Delta }_l\beta )^a`$ $`=`$ $`2D_b(\alpha (K^{ab}-g^{ab}K/3)),`$ (15)
$`(\mathrm{\Delta }_l\beta )^a`$ $`\equiv `$ $`D_bD^b\beta ^a+D_bD^a\beta ^b-{\displaystyle \frac{2}{3}}D^aD_b\beta ^b.`$ (16)
Note that if there exists a rotational Killing vector, minimal distortion can be trivially obtained , hence such shift conditions begin playing a non-trivial role only when one moves beyond axisymmetric simulations (see also , and in particular for spacetimes with approximate Killing vector fields).
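In flat space and Cartesian coordinates the operator (16) reduces to $`\partial _b\partial ^b\beta ^a+\frac{1}{3}\partial ^a(\partial _b\beta ^b)`$. A minimal finite-difference sketch of this flat-space operator (e.g. as the starting point of an iterative elliptic solver; uniform grid spacing assumed, boundary conditions and the full covariant terms omitted) is:

```python
import numpy as np

def flat_vector_laplacian(beta, dx):
    """Flat-space, Cartesian version of the operator (16):
    (Delta_l beta)^a = lap(beta^a) + (1/3) d^a (d_b beta^b).

    beta : array of shape (3, Nx, Ny, Nz); dx : uniform grid spacing.
    Sketch only; a production solver would use the covariant operator
    and impose boundary conditions.
    """
    div = sum(np.gradient(beta[b], dx, axis=b) for b in range(3))
    out = np.empty_like(beta)
    for a in range(3):
        lap = sum(np.gradient(np.gradient(beta[a], dx, axis=b), dx, axis=b)
                  for b in range(3))
        out[a] = lap + np.gradient(div, dx, axis=a) / 3.0
    return out
```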
There are now three examples of the application of dynamical shift conditions to binary neutron star simulations, which share the feature that with vanishing shift the code fails after far less than an orbit, while with minimal distortion shift for the first time fully relativistic simulations of one or more orbits become possible. In , minimal distortion is approximated in a way that decouples the three equations but maintains key features. Preliminary experiments have also been performed for the NASA Neutron Star Grand Challenge using Cactus, the hydrodynamics module or "thorn" MAHC , and the author's implementation of the vector Laplace operator (16) in BAM. The full minimal distortion equations are solved. One choice of initial data is that of irrotational neutron star binaries provided by the Meudon group (, polytropic equation of state with $`\gamma =2`$, $`\kappa =0.03c^2/\rho _{nuc}`$, $`M_1=M_2=1.6M_{sol}`$, $`M/R=0.14`$, $`d=41km`$). Fig. 5 shows four frames of an evolution project that was implemented and carried out this summer by M. Miller, N. Stergioulas, and M. Tobias. Without shift, the simulation crashes after less than 1/10th of an orbit is completed; with shift, one observes about 3/4 of an orbit before the code fails when the two neutron stars merge. These first results can probably be improved significantly, but they already serve as a proof that non-vanishing shift is beneficial.
## 8 Conclusion
It is perhaps surprising how little has been achieved to date by numerical simulations of the Einstein equations for the two body problem. After all, the Einstein equations have been extensively studied for more than 80 years, and nowadays modern computational physics has successfully treated the partial differential equations of a large number of evolution problems. Why is it not possible to “simply solve” the problem with standard numerical methods on a big computer? To recall some of the issues raised above, (i) the Einstein equations do not lead to a unique or preferred set of 3+1 evolution equations, with an automatically stable numerical implementation, (ii) choosing lapse and shift is intricately coupled to the evolution, (iii) black holes pose a special challenge due to their singularities.
As a result, black hole simulations in numerical relativity still have to be called rather limited. Either special simplifications are introduced (axisymmetry, null coordinates adapted to single black holes), or the achieved numerical runtime is a limiting factor compared to the lowest quasi-normal ringing period of about $`17M`$. (3+1)-dimensional black hole evolutions with singularity avoiding slicing last to about $`30M`$ for simple data sets starting from time symmetry (vanishing extrinsic curvature) . The first evolution of truly three-dimensional binary black hole data (two black holes with spin and linear momentum) was performed in 1997 , crashing at $`7M`$, which allowed tracking the merging of apparent horizons but not wave extraction. Considering that the first 3+1 simulations of Schwarzschild were reported in 1995, one can certainly call the recent simulations of Sec. 5 with wave extraction and a run time of about $`30M`$ a significant step forward.
Several methods are under intense investigation that should allow us to evolve for hundreds of $`M`$ or even longer. Here we mentioned black hole excision, improved evolution schemes, and shift conditions. Especially excision is expected to be essential. For the purpose of wave extraction, the schemes involving future null infinity are of particular interest. Furthermore, astrophysically more realistic initial data is needed as input for the above methods before we can make contact with gravitational wave astronomy.
How close is numerical relativity to the accurate prediction of gravitational wave forms for binary events ? The post-newtonian and the close-limit approximations are probably in good shape, but full numerical relativity will require two or more years to get ready. An introductory statement often heard during the last two decades is that one essential task of numerical relativity is to provide a catalog of wave forms which is essential for gravitational wave detection. This has changed. Numerical relativity will be essential in wave analysis, producing models for astrophysical scenarios that relate the wave forms to configuration parameters. For the detection as such, however, the task of producing a complete catalog appears to be too hard, and in particular, not a very sensible one. Note that matched filtering gives roughly a factor 5 in signal to noise for wave detection . Recently, the advantage of the perfect catalog over the best “blind” numerical methods has been reduced to a factor of 2, e.g. . This still corresponds to a factor of 100 in observable event rate, but on the other hand optimal matched filtering is assumed in this estimate. The emphasis in numerical relativity should therefore be shifted more towards producing reliable statements about global features of mergers as opposed to detailed wave forms. Predicting the duration of mergers, total energy emission, frequency range and frequency distribution of the signal will be more useful to methods as described in and also more attainable in the near future. The black hole runs of Sec. 5 are being performed with this goal in mind.
It is a pleasure to thank E. Seidel and all the members of the numerical relativity group at the AEI, and W.-M. Suen, M. Miller, and M. Tobias at WashU, St. Louis. Many colleagues, without whom the recent work reported here would not have been possible, have contributed. In particular, I would like to thank the Cactus support and development team, G. Allen, T. Goodale, G. Lanfermann, J. Massó, M. Miller, and P. Walker, and in addition M. Alcubierre, S. Brandt, L. Nerger, E. Seidel, and R. Takahashi, with whom I have collaborated on the black hole runs reported in Sec. 5. Figs. 2, 3, and 5 were prepared by W. Benger with the Amira software of ZIB, see . This work has been supported by the AEI, NCSA, NSF PHY 9600507, NSF MCA93S025 and NASA NCCS5-153. Calculations were performed at AEI, NCSA, RZG in Garching, and ZIB in Berlin.
# Wolf-Rayet Stars in Starburst Galaxies
## 1 Introduction
In the last 20 years, Wolf-Rayet stars have been detected in several extragalactic objects. Allen et al. (1976) identified for the first time the characteristic He II $`\lambda `$4686 broad atmospheric emission line in He 2-10. Conti (1991) already listed 37 objects showing WR features, a number which was increased to more than 130 in Schaerer and Vacca (1999) , and which is continuously increasing. The WR features are broad, but generally weak, so that they can be detected only in spectra with high signal to noise in the continuum. This explains why they were not identified in the first years of emission-line galaxy spectroscopy. The detectors' narrow dynamical range prevented having good signal to noise simultaneously on the bright emission lines and in the weak continuum of these galaxies. More careful searches in recent years have also allowed the identification of the broad feature around CIV $`\lambda `$5808 attributed to WC stars, a subtype of WR stars characterized by strong and broad C emission lines. We show in Fig. 1 the optical spectrum of IZw 18, with the identification of some typical WR lines.
Wolf-Rayet stars have been found in very different extragalactic environments: Giant HII regions, Blue Compact galaxies, generic emission line galaxies, IRAS galaxies, Seyfert galaxies, and so on; in general, always in regions experiencing a strong episode of massive star formation. This fact provided definitive support in the 1980s for the so-called "Conti scenario", according to which WR stars were the descendants of massive stars, experiencing this short evolutionary phase (around 500.000 years) just before collapsing into a supernova explosion. Conti (1991) successfully proposed the term "Wolf-Rayet galaxies" to group all the galaxies hosting WR stars. Given the very different kinds of objects in which WRs have been identified and the fact that their detection depends in most cases just on the observational strategy, this term has to be taken with care. IZw 18 could be a prototype for that. Being the most metal deficient galaxy known, it had been observed for years with the aim of measuring in detail the intensities of the different emission lines, with no WR feature being detected at all. However, long integrations on big telescopes allowed two independent groups to identify in 1997 the WR features around HeII and CIV, adding this object to the WR galaxy list (Izotov et al. 1997 , Legrand et al. 1997 ).
In this contribution we will review the parameters that control the presence of Wolf-Rayet stars in star-forming environments, as well as their effects on the surrounding medium. Our goal will be to summarize what the detection of Wolf-Rayet stars can tell us about the properties of massive star-formation episodes in different environments.
## 2 What can we learn from the presence of WR’s?
The detection and quantification of the number (and type!) of Wolf-Rayet stars, and their ratio to OB stars, provide a wealth of information about the intrinsic properties of the different star formation episodes, such as the Initial Mass Function slope and limits, the star formation regime, and so on. Let us first summarize which factors drive the formation of WR's in star-forming environments.
### 2.1 What controls the formation of WR's?
As predicted by present stellar evolutionary tracks, there are mainly three parameters controlling the formation of WR’s:
* Metallicity.
* Initial Mass Function (IMF) limits.
* Properties of binary systems.
The Wolf-Rayet phase is characterized by the ejection via strong stellar winds of the outer layers of evolved massive stars. The efficiency in powering these winds is clearly a function of the metallicity, so that the lower the metallicity, the higher the initial mass required for a star to become a WR. The precise value of this mass limit depends also on the mass loss rate prescriptions and the rotation properties of a given star. Following a conservative mass loss rate scenario, Mas-Hesse and Kunth 1991 and Cerviño and Mas-Hesse 1994 estimated the lower mass limit for WR formation at solar metallicity to be 32 M_⊙. Lower and more realistic mass limits are reached if the mass loss rates are somewhat enhanced, as discussed in (Schaerer and Vacca 1998 ). In general we can say that a star will become a WR if its initial mass is above 20 M_⊙ for solar metallicity, and above 80 M_⊙ at Z = Z_⊙/10. Therefore, the detection of a significant number of WR stars in low metallicity environments, as in IZw 18, directly implies that the upper mass limit of the IMF has to be close to 100 M_⊙.
The evolution of massive stars in binary systems can also lead to the formation of WR's. Around 50% of massive stars are believed to form in binary systems, out of which around 5% are expected to evolve as massive close binaries. Such close binaries experience different processes of mass transfer during their evolution, which can lead to the formation of WR stars at ages where no WRs would exist according to the evolution of single stars (Cerviño 1998 , Vanbeveren 1998 and references therein, Cerviño et al. 1999 ). First, a star can completely lose its outer envelope at the end of the H burning phase, with a naked core emerging which could have very similar properties to single WR stars. Second, accretion of mass would allow a star of medium/low initial mass to evolve as an initially massive star, becoming a WR at late evolutionary stages of the starburst. Summarizing, while the standard Conti scenario predicts the presence of WR stars only between 2 and 6 Myr after the onset of the burst (only 3 to 4 Myr at low metallicities!), the binary channel predicts a rather constant number of WR stars between 5 and around 20-30 Myr, as shown in Figs. 2 and 3.
### 2.2 WR features detectability
In the previous section we have summarized the parameters affecting the presence of WR stars at a given time during a star formation episode. But even if they are present, their detection and quantification is furthermore affected by some additional questions:
* Star formation regime.
* Underlying stellar population.
* Differential reddening.
It has been well established that the present star formation rates shown by several starbursting galaxies cannot have been maintained over long periods of time without exhausting the estimated original amounts of gas. It seems that massive star formation proceeds in these objects as (maybe repeated) short-lived, very intense episodes. The question now is how short these episodes really are: almost instantaneous, or extended over tens of millions of years? Several arguments point towards almost coeval star formation, i.e., all stars (at least, all massive stars) would have been formed almost simultaneously, or in any case within a few million years. The ignition of hundreds or thousands of massive stars within relatively small volumes and within relatively short times would probably inhibit the further formation of stars, at least for several million years, until the most massive stars start to fade out.
The detection of Wolf-Rayet stars provides important constraints on this issue. In extended star formation scenarios massive stars would be continuously formed during tens of millions of years. Since the WR phase lasts for only around 500.000 years, the net effect is that the expected $`L(WR)/L(H\beta )`$ ratio would be significantly smaller than for coeval starbursts. We show in Fig. 4 the predictions for this ratio at different metallicities and assuming different IMF slopes, both for an instantaneous burst and for an extended star formation episode. We have plotted on the figures the mean values compiled by Mas-Hesse and Kunth (1999) . It can be seen that, first, the distribution of observed values falls rather well within the predictions of coeval models, while, second, the observations are barely consistent, at most, with the predictions of extended star formation episodes. We can conclude, therefore, that the formation of massive stars in Wolf-Rayet galaxies proceeds almost coevally, in any case within a few million years.
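The sense of this comparison can be captured with a toy scaling estimate (the numbers below are rough, illustrative assumptions, not the evolutionary synthesis models behind Fig. 4): under constant star formation, the WR/O number ratio settles at roughly the ratio of birthrates above the respective mass limits weighted by the time spent in each phase, which stays at the level of a few per cent, whereas a coeval burst caught at 3–5 Myr can reach much larger values of $`L(WR)/L(H\beta )`$ because the ionizing flux, and hence H$`\beta `$, has already started to fade.

```python
# Toy scaling estimate (illustrative assumptions only, not the models behind Fig. 4).

alpha = 2.35               # Salpeter IMF slope
M_O, M_WR = 20.0, 32.0     # assumed lower initial masses of O stars and WR progenitors (solar Z)
t_WR, t_O = 0.5e6, 5.0e6   # WR phase duration (from the text) and an assumed mean O-star lifetime [yr]

def n_above(M):
    """Relative number of stars formed above mass M for a power-law IMF."""
    return M ** (1.0 - alpha)

# Constant star formation in equilibrium: WR/O number ratio ~ ratio of birthrates
# above the respective mass limits, weighted by the time spent in each phase.
ratio_constant_sf = (n_above(M_WR) / n_above(M_O)) * (t_WR / t_O)
print(f"N_WR/N_O (constant SF) ~ {ratio_constant_sf:.3f}")    # ~0.05

# A coeval burst observed at 3-5 Myr reaches much larger L(WR)/L(H-beta) because the
# surviving massive stars pass through the WR phase together while the ionizing flux
# (and hence H-beta) is already declining -- this is the difference probed by Fig. 4.
```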
Another factor strongly affecting the detectability of the WR features in the stellar continuum spectra is the presence of an underlying, older stellar population. Up to now, WR stars have generally been detected in galaxies whose optical continuum is mostly dominated by the newly formed, massive stars, with older stars contributing less than 50% to the total continuum at around 5000 Å (see the examples in Mas-Hesse and Kunth 1999 ). But if a starburst takes place in a galaxy with an important older stellar population, the WR features would be diluted within the optical continuum, and would be harder to detect.
Finally, two additional factors can lead to significant errors in the quantification of the relative WR vs. OB star populations as derived from the observed $`L(WR)/L(H\beta )`$ ratio. First, it has to be taken into account that the $`L(H\beta )`$ emission is spread over a relatively large area ionized by the cluster of young, massive stars. On the other hand, the WR features are associated with the stellar population, and are therefore spatially restricted to a much smaller region. Therefore, if $`L(WR)/L(H\beta )`$ is derived from single narrow slit observations, the ratio can be severely overestimated, since the slit would collect the emission from most WR stars in the region, but only a fraction of the total H$`\beta `$ flux. Mas-Hesse and Kunth (1999) have estimated that this problem can distort the derived ratios by even an order of magnitude, making them useless for comparison with theoretical predictions. And second, it has been established in the last years that the extinction affecting the stellar continuum (and therefore the WR features) might be in some cases significantly smaller than the extinction affecting the Balmer emission lines (Schaerer and Vacca 1998 ). Maíz-Apellániz et al. (1999) showed the spatial decoupling of stars, gas and dust in the star-forming regions of NGC 4214, which are rich in WR stars. It seems that the stellar winds can be very efficient in some cases in blowing away both the nebular gas and the dust grains, leaving the massive stellar cluster within relatively dust-free volumes. On the other hand, dust particles were detected mixed with the nebular gas, yielding relatively large extinctions on the Balmer emission lines. Maíz-Apellániz et al. (1999) estimated that this effect could lead to an overestimation of the observed $`L(WR)/L(H\beta )`$ ratio by a factor between 2 and 5.
We conclude, therefore, that the relative number of WR to OB stars in star-forming regions can be severely overestimated by several different effects. Reliable constraints on the properties of the star formation episodes can thus only be derived when different observational parameters are analyzed simultaneously, including $`L(WR)/L(H\beta )`$, $`W(H\beta )`$, $`EW(WR)`$,… as discussed in more detail by Mas-Hesse and Kunth (1999) .
### 2.3 Effects of Wolf-Rayet stars on the surrounding medium
As we have commented above, WR stars appear as the effect of strong stellar winds blowing out the outer atmospheric layers of evolved massive stars. The detection of WR’s therefore traces the presence of clusters rich in very massive stars, which significantly affect their surrounding interstellar medium in many ways:
* Large amounts of mechanical energy are being injected into the medium, even before the production of the first supernova explosions after the onset of the burst. Leitherer et al. (1995) and more recently Cerviño et al. (1999) have evaluated the amount of mechanical energy released by these powerful winds. It would be enough to blow out the surrounding nebular gas, leading to an empty cavity free of gas and dust. Kunth et al. (1998) detected outflowing gas apparently powered by the central starburst in a number of galaxies. Fig. 5 shows the profile of the Ly$`\alpha `$ emission line, clearly absorbed at the blue wing by neutral gas moving at several hundreds of km/s. The mechanical energy released would imply that the chemical enrichment associated with a new generation of stars would not become evident immediately, since the enriched gas could be thrown away to relatively large distances by these gas outflows, as proposed by different authors in the last years (see the contribution from G. Tenorio-Tagle in this volume).
* When a star enters the Wolf-Rayet phase, its naked He core at a very high effective temperature (around 100,000 K) can become visible, thus producing a source of rather hard ionizing radiation, much harder than the ionizing flux associated with Main Sequence OB stars (below 50,000 K in any case). This hard ionizing flux can produce a number of emission lines not usually found in HII regions. Schaerer (1996) proposed that some kinds of WR stars could provide enough hard ionizing photons to explain the narrow HeII $`\lambda `$4686 emission detected in some, but not in all, starburst galaxies.
## 3 Summary and conclusions
The identification of Wolf-Rayet stars in starburst environments over the last 20 years has helped to place strong constraints on the properties of these massive star formation episodes. We know presently that these starbursts are apparently short-lived (all massive stars are essentially coeval), and that the Initial Mass Function in these regions is almost always close to Salpeter’s (with slope $`\alpha =2.35`$), with stars of initial masses of at least around 100 M. Most of these starbursts formed their massive stars less than around 6 Myr ago, but this is probably a selection effect, since after this age the ionizing flux fades rapidly and the objects do not look like “emission line galaxies” any longer. There are nevertheless a number of questions still open:
* The starbursts in which WR stars have been detected seem to have been generally very short-lived. But, can we extrapolate this conclusion to all starbursts, including those in which no WR stars have been (yet) detected?
* What are the constraints that can be derived from the WN/WC ratios observed in different galaxies?
* How does rotation affect the predictions of the synthesis models used up to now? A. Maeder provides in this volume a summary of the state of the art evolutionary tracks including stellar rotation.
* Are there really Wolf-Rayet stars at evolved stages of the cluster, when the emission line strengths are very small, as predicted by the models including the evolution of binary systems? Would these WR’s show the same features as WR stars formed along the Conti scenario?
Let’s continue searching for Wolf-Rayet stars in different environments in order to help solve these open questions in the near future. |
RELATIVISTIC CORRECTIONS TO THE SUNYAEV-ZEL’DOVICH EFFECT FOR CLUSTERS OF GALAXIES. IV. ANALYTIC FITTING FORMULA FOR THE NUMERICAL RESULTS
## 1 INTRODUCTION
Compton scattering of the cosmic microwave background (CMB) radiation by hot intracluster gas — the Sunyaev-Zel’dovich effect (Zel’dovich & Sunyaev 1969; Sunyaev & Zel’dovich 1972, 1980a, 1980b, 1981) — provides a useful method to measure the Hubble constant $`H_0`$ (Gunn 1978; Silk & White 1978; Birkinshaw 1979; Cavaliere, Danese, & De Zotti 1979; Birkinshaw, Hughes, & Arnaud 1991; Birkinshaw & Hughes 1994; Myers et al. 1995; Herbig et al. 1995; Jones 1995; Markevitch et al. 1996; Holzapfel et al. 1997; Furuzawa et al. 1998). The original Sunyaev-Zel’dovich formula has been derived from a kinetic equation for the photon distribution function taking into account the Compton scattering by electrons: the Kompaneets equation (Kompaneets 1957; Weymann 1965). The original Kompaneets equation has been derived with a nonrelativistic approximation for the electron. However, recent X-ray observations have revealed the existence of many high-temperature galaxy clusters (David et al. 1993; Arnaud et al. 1994; Markevitch et al. 1994; Markevitch et al. 1996; Holzapfel et al. 1997; Mushotzky & Scharf 1997; Markevitch 1998). In particular, Tucker et al. (1998) reported the discovery of a galaxy cluster with the electron temperature $`k_BT_e=17.4\pm 2.5`$ keV. Rephaeli and his collaborator (Rephaeli 1995; Rephaeli & Yankovitch 1997) have emphasized the need to take into account the relativistic corrections to the Sunyaev-Zel’dovich effect for clusters of galaxies.
In recent years remarkable progress has been achieved in the theoretical studies of the relativistic corrections to the Sunyaev-Zel’dovich effects for clusters of galaxies. Stebbins (1997) generalized the Kompaneets equation. Itoh, Kohyama, & Nozawa (1998) have adopted a relativistically covariant formalism to describe the Compton scattering process (Berestetskii, Lifshitz, & Pitaevskii 1982; Buchler & Yueh 1976), thereby obtaining higher-order relativistic corrections to the thermal Sunyaev-Zel’dovich effect in the form of the Fokker-Planck expansion. In their derivation, the scheme to conserve the photon number at every stage of the expansion which has been proposed by Challinor & Lasenby (1998) played an essential role. The results of Challinor & Lasenby (1998) are in agreement with those of Itoh, Kohyama, & Nozawa (1998). The latter results include higher-order expansions. Itoh, Kohyama, & Nozawa (1998) have also calculated the collision integral of the Boltzmann equation numerically and have compared the results with those obtained by the Fokker-Planck expansion method. They have confirmed that the Fokker-Planck expansion method gives an excellent result for $`k_BT_e\stackrel{<}{}15`$ keV, where $`T_e`$ is the electron temperature. For $`k_BT_e\stackrel{>}{}15`$ keV, however, the Fokker-Planck expansion results show nonnegligible deviations from the results obtained by the numerical integration of the collision term of the Boltzmann equation. Here it should be pointed out that the generalized Kompaneets equation is equivalent to a single-scattering approximation. Thus for high-temperature clusters ($`k_BT_e\stackrel{>}{}15`$ keV) the relativistic corrections may underestimate the Sunyaev-Zel’dovich effect at high frequencies.
Nozawa, Itoh, & Kohyama (1998b) have extended their method to the case where the galaxy cluster is moving with a peculiar velocity with respect to CMB. They have thereby obtained the relativistic corrections to the kinematical Sunyaev-Zel’dovich effect. Challinor & Lasenby (1999) have confirmed the correctness of the result obtained by Nozawa, Itoh, & Kohyama (1998b). Sazonov & Sunyaev (1998a, b) have calculated the kinematical Sunyaev-Zel’dovich effect by a different method. Their results are in agreement with those of Nozawa, Itoh, & Kohyama (1998b). The latter authors have given the results of the higher-order expansions.
Itoh, Nozawa, & Kohyama (2000) have also applied their method to the calculation of the relativistic corrections to the polarization Sunyaev-Zel’dovich effect (Sunyaev & Zel’dovich 1980b, 1981). They have thereby confirmed the result of Challinor, Ford, & Lasenby (1999) which has been obtained with a completely different method. Recent works on the polarization Sunyaev-Zel’dovich effect include Audit & Simons (1998), Hansen & Lilje (1999), and Sazonov & Sunyaev (1999).
In the present paper we address ourselves to the numerical calculation of the relativistic corrections to the thermal Sunyaev-Zel’dovich effect. As stated above, Itoh, Kohyama, & Nozawa (1998) have carried out the numerical integration of the collision term of the Boltzmann equation. This method produces the exact results without the power series expansion approximation. In view of the recent discovery of an extremely high temperature galaxy cluster with $`k_BT_e=17.4\pm 2.5`$keV (Tucker et al. 1998), it would be extremely useful to present the results of the numerical integration of the collision term of the Boltzmann equation in the form of an accurate analytic fitting formula.
Sazonov & Sunyaev (1998a, b) have reported the results of the Monte Carlo calculations on the relativistic corrections to the Sunyaev-Zel’dovich effect. In Sazonov & Sunyaev (1998b), a numerical table which summarizes the results of the Monte Carlo calculations has been presented. This table is of great value when one wishes to calculate the relativistic corrections to the Sunyaev-Zel’dovich effect for galaxy clusters of extremely high temperatures. Accurate analytic fitting formulae would be still more convenient to use for the observers who wish to analyze the galaxy clusters with extremely high temperatures. This is the motivation of the present paper. For the analyses of the galaxy clusters with extremely high temperatures, the results of the calculation of the relativistic thermal bremsstrahlung Gaunt factor (Nozawa, Itoh, & Kohyama 1998a) and their accurate analytic fitting formulae (Itoh et al. 2000) will be useful.
The present paper is organized as follows. In $`\mathrm{\S }`$ 2 we give the method of the calculation. In $`\mathrm{\S }`$ 3 we give the analytic fitting formula. Concluding remarks will be given in $`\mathrm{\S }`$ 4.
## 2 BOLTZMANN EQUATION
We will formulate the kinetic equation for the photon distribution function using a relativistically covariant formalism (Berestetskii, Lifshitz, & Pitaevskii 1982; Buchler & Yueh 1976). As a reference system, we choose the system which is fixed to the center of mass of the cluster of galaxies. This choice of the reference system affords us to carry out all the calculations in the most straightforward way. We will use the invariant amplitude for the Compton scattering as given by Berestetskii, Lifshitz, & Pitaevskii (1982) and by Buchler & Yueh (1976).
The time evolution of the photon distribution function $`n(\omega )`$ is written as
$`{\displaystyle \frac{\partial n(\omega )}{\partial t}}`$ $`=`$ $`-2{\displaystyle \int \frac{d^3p}{(2\pi )^3}d^3p^{}d^3k^{}W\left\{n(\omega )[1+n(\omega ^{})]f(E)-n(\omega ^{})[1+n(\omega )]f(E^{})\right\}},`$ (2.1)
$`W`$ $`=`$ $`{\displaystyle \frac{(e^2/4\pi )^2\overline{X}\delta ^4(p+k-p^{}-k^{})}{2\omega \omega ^{}EE^{}}},`$ (2.2)
$`\overline{X}`$ $`=`$ $`-\left({\displaystyle \frac{\kappa }{\kappa ^{}}}+{\displaystyle \frac{\kappa ^{}}{\kappa }}\right)+4m^4\left({\displaystyle \frac{1}{\kappa }}+{\displaystyle \frac{1}{\kappa ^{}}}\right)^2-4m^2\left({\displaystyle \frac{1}{\kappa }}+{\displaystyle \frac{1}{\kappa ^{}}}\right),`$ (2.3)
$`\kappa `$ $`=`$ $`-2(pk)=-2\omega E\left(1-{\displaystyle \frac{\stackrel{}{p}}{E}}\mathrm{cos}\alpha \right),`$ (2.4)
$`\kappa ^{}`$ $`=`$ $`2(pk^{})=2\omega ^{}E\left(1-{\displaystyle \frac{\stackrel{}{p}}{E}}\mathrm{cos}\alpha ^{}\right).`$ (2.5)
In the above $`W`$ is the transition probability corresponding to the Compton scattering. The four-momenta of the initial electron and photon are $`p=(E,\stackrel{}{p})`$ and $`k=(\omega ,\stackrel{}{k})`$, respectively. The four-momenta of the final electron and photon are $`p^{}=(E^{},\stackrel{}{p}^{})`$ and $`k^{}=(\omega ^{},\stackrel{}{k}^{})`$, respectively. The angles $`\alpha `$ and $`\alpha ^{}`$ are the angles between $`\stackrel{}{p}`$ and $`\stackrel{}{k}`$, and between $`\stackrel{}{p}`$ and $`\stackrel{}{k}^{}`$, respectively. Throughout this paper, we use natural units $`\hbar =c=1`$, unless otherwise stated explicitly.
By ignoring the degeneracy effects, we have the relativistic Maxwellian distribution for electrons with temperature $`T_e`$ as follows
$`f(E)`$ $`=`$ $`\left[e^{\left\{(E-m)-(\mu -m)\right\}/k_BT_e}+\mathrm{\hspace{0.17em}1}\right]^{-1}`$ (2.6)
$`\simeq `$ $`e^{-\left\{K-(\mu -m)\right\}/k_BT_e},`$
where $`K\equiv (E-m)`$ is the kinetic energy of the initial electron, and $`(\mu -m)`$ is the non-relativistic chemical potential of the electron. We now introduce the quantities
$`x`$ $`\equiv `$ $`{\displaystyle \frac{\omega }{k_BT_e}},`$ (2.7)
$`\mathrm{\Delta }x`$ $`\equiv `$ $`{\displaystyle \frac{\omega ^{}-\omega }{k_BT_e}}.`$ (2.8)
Substituting equations (2.6) – (2.8) into equation (2.1), we obtain
$$\frac{\partial n(\omega )}{\partial t}=-2\int \frac{d^3p}{(2\pi )^3}d^3p^{}d^3k^{}Wf(E)\left\{[1+n(\omega ^{})]n(\omega )-[1+n(\omega )]n(\omega ^{})e^{\mathrm{\Delta }x}\right\}.$$
(2.9)
Equation (2.9) is our basic equation. We will denote the Thomson scattering cross section by $`\sigma _T`$, and the electron number density by $`N_e`$. We will define
$`\theta _e`$ $`\equiv `$ $`{\displaystyle \frac{k_BT_e}{m_ec^2}},`$ (2.10)
$`y`$ $`\equiv `$ $`\sigma _T{\displaystyle \int 𝑑\mathrm{\ell }N_e},`$ (2.11)
where $`T_e`$ is the electron temperature, and the integral in equation (2.11) is over the path length of the galaxy cluster. By introducing the initial photon distribution of the CMB radiation which is assumed to be Planckian with temperature $`T_0`$
$`n_0(X)`$ $`=`$ $`{\displaystyle \frac{1}{e^X-1}},`$ (2.12)
$`X`$ $`\equiv `$ $`{\displaystyle \frac{\omega }{k_BT_0}},`$ (2.13)
we rewrite equation (2.9) as
$$\frac{\mathrm{\Delta }n(X)}{n_0(X)}=yF(\theta _e,X).$$
(2.14)
We obtain the function $`F(\theta _e,X)`$ by numerical integration of the collision term of the Boltzmann equation (2.9). The accuracy of the numerical integration is about $`10^{-5}`$. We confirm that the condition of the photon number conservation
$$\int 𝑑XX^2\mathrm{\Delta }n(X)=\mathrm{\hspace{0.17em}0}$$
(2.15)
is satisfied with the accuracy better than $`10^{-9}`$.
We define the distortion of the spectral intensity as
$`\mathrm{\Delta }I`$ $`\equiv `$ $`{\displaystyle \frac{X^3}{e^X-1}}{\displaystyle \frac{\mathrm{\Delta }n(X)}{n_0(X)}}`$ (2.16)
$`=`$ $`y{\displaystyle \frac{X^3}{e^X-1}}F(\theta _e,X).`$ (2.17)
The graph of $`F(\theta _e,X)`$ is shown in Figure 1. The graph of $`\mathrm{\Delta }I/y`$ is shown in Figure 2.
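As an illustration of how eqs. (2.14)-(2.17) are used in practice, the short sketch below converts a tabulated $`F(\theta _e,X)`$ into the spectral distortion $`\mathrm{\Delta }I`$ and checks the photon-number conservation condition (2.15) by simple quadrature. It is only a sketch: the array `F_tab` is a placeholder that must be filled with the numerically integrated values of $`F(\theta _e,X)`$, and the Compton $`y`$-parameter is an assumed example value.

```python
import numpy as np

# Grid in X = (h nu)/(k_B T_0) and a placeholder for the tabulated F(theta_e, X);
# the actual values must come from the numerical integration of eq. (2.9).
X = np.linspace(0.1, 20.0, 400)
F_tab = np.zeros_like(X)      # placeholder for F(theta_e, X)
y = 1.0e-4                    # assumed Compton y-parameter of the cluster

n0 = 1.0 / np.expm1(X)                   # Planck distribution, eq. (2.12)
dn = y * F_tab * n0                      # Delta n(X), from eq. (2.14)
dI = y * X**3 / np.expm1(X) * F_tab      # spectral distortion, eq. (2.17)

# Photon-number conservation, eq. (2.15): this integral should vanish
# to the numerical accuracy of the tabulated F.
print(np.trapz(X**2 * dn, X))
```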
## 3 ANALYTIC FITTING FORMULA
We give an accurate analytic fitting formula for the function $`F(\theta _e,X)`$ in equation (2.14) which has been obtained by numerical integration of the collision term of the Boltzmann equation. We will give an analytic fitting formula for the ranges $`0.02\leq \theta _e\leq 0.05`$, $`0\leq X\leq 20`$, which will be sufficient for the analyses of the galaxy clusters. For $`\theta _e<0.02`$, the results of Itoh, Kohyama, and Nozawa (1998) give sufficiently accurate results (the accuracy is generally better than 1%).
We express the fitting formula for $`0.02\leq \theta _e\leq 0.05`$ as follows:
$`{\displaystyle \frac{\mathrm{\Delta }n(X)}{n_0(X)}}`$ $`=`$ $`yF(\theta _e,X)`$ (3.1)
$`=`$ $`y\left\{{\displaystyle \frac{\theta _eXe^X}{e^X-1}}\left(Y_0+\theta _eY_1+\theta _e^2Y_2+\theta _e^3Y_3+\theta _e^4Y_4\right)+R\right\}.`$
The functions $`Y_0`$, $`Y_1`$, $`Y_2`$, $`Y_3`$, and $`Y_4`$ have been obtained by Itoh, Kohyama, and Nozawa (1998) with the Fokker-Planck expansion method, and their explicit expressions have been given.
We define the residual function $`R`$ in equation (3.1) as follows:
$`R`$ $`=`$ $`\{\begin{array}{cc}0,\hfill & \text{for }0\leq X<2.5\hfill \\ {\displaystyle \underset{i,j=0}{\overset{10}{\sum }}}a_{ij}\mathrm{\Theta }_e^iZ^j,\hfill & \text{for }2.5\leq X\leq 20.0\text{ , }\hfill \end{array}`$ (3.4)
where
$`\mathrm{\Theta }_e`$ $`\equiv `$ $`25\left(\theta _e-0.01\right),\mathrm{\hspace{0.17em}\hspace{0.17em}}0.02\leq \theta _e\leq 0.05,`$ (3.5)
$`Z`$ $`\equiv `$ $`{\displaystyle \frac{1}{17.6}}\left(X-2.4\right),\mathrm{\hspace{0.17em}\hspace{0.17em}}2.5\leq X\leq 20.0.`$ (3.6)
The coefficients $`a_{ij}`$ are presented in TABLE 1. The accuracy of the fitting formula for equation (3.1) is generally better than 0.1% except for a region of $`\theta _e=0.05`$, $`X>17`$, where the error exceeds 1%.
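A minimal sketch of how the fitting formula might be evaluated numerically is given below. The coefficient table $`a_{ij}`$ of TABLE 1 is not reproduced here (the zero array is only a placeholder), and the functions $`Y_0,\mathrm{},Y_4`$ are assumed to be supplied by the caller from the expressions of Itoh, Kohyama, & Nozawa (1998).

```python
import numpy as np

a_ij = np.zeros((11, 11))   # placeholder for the coefficients of TABLE 1

def residual_R(theta_e, X):
    """Residual function of eq. (3.4) with the variables of eqs. (3.5)-(3.6)."""
    if X < 2.5:
        return 0.0
    Theta_e = 25.0 * (theta_e - 0.01)
    Z = (X - 2.4) / 17.6
    # sum_{i,j=0}^{10} a_ij * Theta_e^i * Z^j
    return Theta_e ** np.arange(11) @ a_ij @ Z ** np.arange(11)

def delta_n_over_n0(theta_e, X, y, Y):
    """Eq. (3.1); Y = (Y0, ..., Y4) evaluated at (theta_e, X) by the caller."""
    series = sum(Y[k] * theta_e**k for k in range(5))
    return y * (theta_e * X * np.exp(X) / np.expm1(X) * series
                + residual_R(theta_e, X))
```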
## 4 CONCLUDING REMARKS
We have calculated the relativistic corrections to the thermal Sunyaev-Zel’dovich effect for clusters of galaxies by numerical integration of the collision term of the Boltzmann equation. We have presented an accurate analytic fitting formula for the thermal Sunyaev-Zel’dovich effect. The fitting formula covers all the ranges of the observation of galaxy clusters in the foreseeable future. The accuracy of the fitting is generally better than 0.1%. The present results will be useful for the analyses of the galaxy clusters with extremely high temperatures. For galaxy clusters with relatively low temperatures $`\theta _e<0.02`$, the Fokker-Planck expansion results of Itoh, Kohyama, & Nozawa (1998) will be sufficiently accurate (the accuracy is generally better than 1%).
We thank Professor Y. Oyanagi for allowing us to use the least square fitting program SALS. We also thank our anonymous referee for many valuable comments which helped us tremendously in revising the manuscript. This work is financially supported in part by the Grant-in-Aid of Japanese Ministry of Education, Science, Sports, and Culture under the contract #10640289. |
Mutual information and self-control of a fully-connected low-activity neural network
## I Introduction
It is well-known by now that low-activity neural network models have a larger storage capacity than the corresponding models with a mean 50% activity (see, e.g., ). However, this improvement is not always apparent in the basins of attraction. Furthermore, for low activities the information content in a single pattern is reduced. For these reasons it is argued that a neural activity control system is needed in the dynamics of the network in order to keep its activity the same as the one for the memorized patterns during the whole retrieval process . Recently, new suggestions have been put forward for the choice of threshold functions in network models in order to get an enhanced retrieval quality– overlap, basin of attraction, critical capacity, information content (see and references therein). Diluted models , layered models and models for sequential patterns have been considered. In all cases it has been found that appropriate thresholds lead to considerable improvements of the retrieval quality.
The models mentioned above have a common property. For the diluted and layered models there is no feedback in the dynamics. For the model with sequential patterns no feedback correlations are taken into account. The absence of feedback considerably simplifies the dynamics. Hence, it is interesting to look at a model with feedback correlations and to see whether the introduction of a threshold in the sense described above still enhances the retrieval properties in this much more complex situation.
With these ideas in mind we consider in the sequel low activity (or in other words sparsely coded) fully connected neural networks. In particular, we study the application of a self-control mechanism proposed recently for a diluted network of binary patterns . Self-control has been introduced in order to avoid imposing some external constraints on the network with the purpose of improving its retrieval properties. Such external constraints destroy the autonomous functioning of the network.
The model we look at is a fully-connected attractor neural network with neurons and patterns taking the values $`\{-1,0,+1\}`$ and pattern activity $`a`$. A low-activity neural network corresponds then to the case where the pattern distribution is far from uniform, i.e., $`a<2/3`$. This network has the advantage that it can be generated keeping a symmetric distribution of the states since both the $`\pm 1`$ states are considered the active ones, while the $`0`$ state is the inactive one.
The rest of this paper is organised as follows. The three-state network model and its order parameters are described in section II. In order to study the retrieval quality of the model, especially in the limit of low activity the mutual information content is analysed in Section III. Section IV discusses the dynamics of this network in the presence of the self-control mechanism realised through the introduction of a time-dependent threshold. Evolution equations for the order parameters are written down. Using these equations the influence of self-control on the retrieval quality of the network – information content, critical capacity, basins of attraction – is studied in section V. Furthermore, these theoretical findings are compared with results from numerical simulations of a fully connected network of $`10^4`$ neurons. Finally, section VI presents some concluding remarks.
## II The model
Consider a neural network model of $`N`$ three-state neurons. At a discrete time step $`t`$ the neurons $`\sigma _i\in \{0,\pm 1\},i=1,\mathrm{\dots },N`$ are updated according to the parallel deterministic dynamics
$$\sigma _{i,t+1}=F_{\theta _t}(h_{i,t}),h_{i,t}=\underset{j(\ne i)}{\overset{N}{\sum }}J_{ij}\sigma _{j,t}$$
(1)
where $`h_{i,t}`$ is the local field of neuron $`i`$ at time $`t`$ and $`\theta _t`$ a time-dependent threshold parameter. As usual, the transfer function $`F_{\theta _t}`$ is given by
$$F_{\theta _t}(x)\equiv sgn(x)\mathrm{\Theta }(|x|-\theta _t)$$
(2)
with $`\mathrm{\Theta }`$ the standard Heaviside function.
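For illustration, a minimal sketch of one zero-temperature parallel step, eqs. (1)-(2), could read as follows (the strict inequality used for $`|h_i|>\theta _t`$ is a choice of convention for the marginal case):

```python
import numpy as np

def parallel_update(sigma, J, theta):
    """One parallel step of eqs. (1)-(2) for a three-state network:
    sigma_i <- sgn(h_i) if |h_i| > theta, and 0 otherwise."""
    h = J @ sigma                        # local fields (J assumed with zero diagonal)
    return np.sign(h) * (np.abs(h) > theta)
```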
The couplings $`J_{ij}`$ are determined as a function of the memorized patterns $`\xi _i^\mu `$ by the Hebbian learning algorithm
$$J_{ij}=\frac{1}{Na}\underset{\mu =1}{\overset{p=\alpha N}{\sum }}\xi _i^\mu \xi _j^\mu $$
(3)
with $`\alpha `$ the loading capacity. The patterns are taken to be independent identically distributed random variables (IIDRV) $`\xi _i^\mu \{0,\pm 1\},i=1,\mathrm{},N,\mu =1,\mathrm{},p`$, chosen according to the probability distribution
$$p(\xi _i^\mu )=a\delta (|\xi _i^\mu |^2-1)+(1-a)\delta (\xi _i^\mu )$$
(4)
with $`a=\langle |\xi _i^\mu |^2\rangle `$ the activity of the patterns. Moreover, we assume that there is no bias, i.e., $`\langle \xi _i^\mu \rangle =0`$ and that there exists no correlation between patterns such that $`\langle \xi _i^\mu \xi _i^\nu \rangle =0`$.
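A sketch of how such patterns and the corresponding Hebbian couplings of eq. (3) could be generated numerically is given below; the sizes and the random seed are arbitrary illustration values.

```python
import numpy as np

def make_patterns(p, N, a, rng):
    """p three-state patterns drawn from eq. (4): +1 or -1 with probability a/2
    each, and 0 with probability 1 - a."""
    return rng.choice([-1.0, 0.0, 1.0], size=(p, N), p=[a / 2, 1 - a, a / 2])

def hebb_couplings(xi, a):
    """Hebbian couplings of eq. (3), with the self-coupling removed."""
    p, N = xi.shape
    J = xi.T @ xi / (N * a)
    np.fill_diagonal(J, 0.0)
    return J

rng = np.random.default_rng(1)
xi = make_patterns(p=50, N=1000, a=0.1, rng=rng)
J = hebb_couplings(xi, a=0.1)
```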
At this point we remark that the long-time behavior of this network model is governed by the spin-1 Hamiltonian
$$H=-\underset{i,j}{\sum }J_{ij}\sigma _i\sigma _j+\theta _t\underset{i}{\sum }\sigma _i^2.$$
(5)
Furthermore, the Hopfield model can be recovered by taking the activity $`a=1`$ and the threshold $`\theta _t=0`$.
The standard order parameters of this type of models are the retrieval overlap between the $`\mu `$th-pattern and the microscopic state of the network
$$m_{N,t}^\mu \equiv \frac{1}{aN}\underset{i}{\sum }\xi _i^\mu \sigma _{i,t},$$
(6)
and the neural activity of the neurons
$$q_{N,t}\equiv \frac{1}{N}\underset{i}{\sum }|\sigma _{i,t}|^2.$$
(7)
In the next Section we use these order parameters in order to study the retrieval quality of the network.
## III Mutual information
It is known that the Hamming distance between the state of the network and the pattern $`\{\xi _i^\mu \}`$, viz.
$`d_t^\mu \equiv {\displaystyle \frac{1}{N}}{\displaystyle \underset{i}{\sum }}|\xi _i^\mu -\sigma _{i,t}|^2=a-2am_{N,t}^\mu +q_{N,t}`$ (8)
is a good measure for the retrieval quality of a network when the patterns are uniformly distributed, i.e., when the neural activity $`a=2/3`$. But for low-activity networks it cannot distinguish between a situation where most of the wrong neurons $`(\sigma _i\xi _i^\mu )`$ are turned off and a situation where these wrong neurons are turned on. This distinction is critical in the low-activity three-state network because the inactive neurons carry less information than the active ones . Therefore the mutual information function $`I(\sigma _{i,t};\xi _{i,t}^\mu )`$ has been introduced
$$I(\sigma _{i,t};\xi _{i,t}^\mu )=S(\sigma _{i,t})-\langle S(\sigma _{i,t}|\xi _{i,t}^\mu )\rangle _{\xi _t^\mu }$$
(9)
where $`\xi _{i,t}^\mu `$ is considered as the input and $`\sigma _{i,t}`$ as the output with $`S(\sigma _{i,t})`$ its entropy and $`S(\sigma _{i,t}|\xi _{i,t}^\mu )`$ its conditional entropy, viz.
$`S(\sigma _{i,t})`$ $`=`$ $`-{\displaystyle \underset{\sigma }{\sum }}p(\sigma _{i,t})\mathrm{ln}[p(\sigma _{i,t})]`$ (10)
$`S(\sigma _{i,t}|\xi _{i,t}^\mu )`$ $`=`$ $`-{\displaystyle \underset{\sigma }{\sum }}p(\sigma _{i,t}|\xi _{i,t}^\mu )\mathrm{ln}[p(\sigma _{i,t}|\xi _{i,t}^\mu )].`$ (11)
Here $`p(\sigma _{i,t})`$ denotes the probability distribution for the neurons at time $`t`$ and $`p(\sigma _{i,t}|\xi _{i,t}^\mu )`$ indicates the conditional probability that the $`i`$-th neuron is in a state $`\sigma _{i,t}`$ at time $`t`$ given that the $`i`$-th site of the stored pattern to be retrieved is $`\xi _{i,t}^\mu `$.
The calculation of the different terms of this mutual information for the model at hand proceeds as follows. As a consequence of the mean-field theory character of our model it is enough to consider the distribution of a single typical neuron so we forget about the index $`i`$ in the sequel. We also do not write the time index $`t`$ and the pattern index $`\mu `$.
The conditional probability that the $`ith`$ neuron is in a state $`\sigma _i`$ at time $`t`$, given that the $`ith`$ site of the pattern being retrieved is $`\xi _i`$, can be obtained as follows. Formally writing $`\langle O\rangle =\langle \langle O\rangle _{\sigma |\xi }\rangle _\xi =\underset{\xi }{\sum }p(\xi )\underset{\sigma }{\sum }p(\sigma |\xi )O`$ for an arbitrary quantity $`O`$ and using the complete knowledge about the system $`\langle \xi \rangle =0,\langle \sigma \rangle =0,\langle \sigma \xi \rangle =am,\langle \xi ^2\rangle =a,\langle \sigma ^2\rangle =q,\langle \sigma ^2\xi \rangle =0,\langle \sigma \xi ^2\rangle =0,\langle \sigma ^2\xi ^2\rangle =an,\langle 1\rangle =1`$ we arrive at
$`p(\sigma |\xi )`$ $`=(s_\xi +m\xi \sigma )\delta (\sigma ^2-1)+(1-s_\xi )\delta (\sigma ),`$ (12)
$`s_\xi `$ $`\equiv s-{\displaystyle \frac{q-n}{1-a}}\xi ^2,s\equiv {\displaystyle \frac{q-an}{1-a}}.`$ (13)
At this point we see from (13) that besides $`m`$ and $`q`$ the following parameter
$$n_{N,t}^\mu \equiv \frac{1}{aN}\underset{i}{\overset{N}{\sum }}|\sigma _{i,t}|^2|\xi _i^\mu |^2$$
(14)
will play an independent role in the mutual information function. This quantity is called the activity-overlap since it determines the overlap between the active neurons, $`|\sigma _{it}|=1`$, and the active parts of the memorized patterns, $`|\xi _i^\mu |=1`$. We remark that it also shows up in the alternative expression of the retrieval quality through the performance $`P_t^\mu =\frac{1}{N}\underset{i}{\sum }\delta _{\xi _{i,t}^\mu ,\sigma _{i,t}}`$ (see ). It does not play any independent role in the time evolution of the network, independent of the architecture considered – diluted, layered or fully-connected.
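In a simulation these three quantities are obtained directly from the microscopic state of the network; a minimal sketch reads:

```python
import numpy as np

def order_parameters(sigma, xi_mu, a):
    """Retrieval overlap m (eq. 6), neural activity q (eq. 7) and
    activity-overlap n (eq. 14) with respect to the stored pattern xi_mu."""
    N = sigma.size
    m = np.sum(xi_mu * sigma) / (a * N)
    q = np.sum(sigma ** 2) / N
    n = np.sum(sigma ** 2 * xi_mu ** 2) / (a * N)
    return m, q, n
```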
Next, one can verify that the probability $`p(\sigma |\xi )`$ is consistent with the averages
$`m`$ $`={\displaystyle \frac{1}{a}}\sigma _{\sigma |\xi }\xi _\xi ,`$ (15)
$`q`$ $`=\sigma ^2_{\sigma |\xi }_\xi ,`$ (16)
$`n`$ $`={\displaystyle \frac{1}{a}}\sigma ^2_{\sigma |\xi }\xi ^2_\xi .`$ (17)
These averages are precisely equal in the limit $`N\rightarrow \mathrm{\infty }`$ to the order parameters $`m`$ and $`q`$ in eq. (6)-(7) and to the activity-overlap defined in eq. (14). (The fluctuations around their mean values can be neglected according to the LLN, hence the average over a particular $`i`$-site distribution equals the infinite sum over $`i`$).
Using the probability distribution of the memorized patterns (4) we furthermore obtain
$$p(\sigma )\equiv \underset{\xi }{\sum }p(\xi )p(\sigma |\xi )=q\delta (\sigma ^2-1)+(1-q)\delta (\sigma ).$$
(18)
The expressions for the entropies defined above then become
$`S(\sigma )=-q\mathrm{ln}{\displaystyle \frac{q}{2}}-(1-q)\mathrm{ln}(1-q),`$ (19)
$`\langle S(\sigma |\xi )\rangle _\xi =aS_a+(1-a)S_{1-a},`$ (20)
$`S_a=-{\displaystyle \frac{n+m}{2}}\mathrm{ln}{\displaystyle \frac{n+m}{2}}-{\displaystyle \frac{n-m}{2}}\mathrm{ln}{\displaystyle \frac{n-m}{2}}`$ (21)
$`-(1-n)\mathrm{ln}(1-n),`$ (22)
$`S_{1-a}=-s\mathrm{ln}{\displaystyle \frac{s}{2}}-(1-s)\mathrm{ln}(1-s)`$ (23)
and the mutual information is then given by eq. (9). We recall that $`m_t`$, $`q_t`$ as well as $`n_t`$ are needed in order to completely know the mutual information content of the network at time $`t`$.
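Gathering eqs. (9) and (19)-(23), the mutual information per neuron can be evaluated directly from the order parameters; a minimal sketch (which leaves the degenerate limits, where an argument of a logarithm vanishes, to the convention $`0\mathrm{ln}0=0`$) is:

```python
import numpy as np

def mutual_information(m, q, n, a):
    """Mutual information of eq. (9) from the entropies of eqs. (19)-(23).
    Degenerate cases (vanishing arguments of the logarithms) are not handled."""
    s = (q - a * n) / (1.0 - a)                                       # eq. (13)
    S_sigma = -q * np.log(q / 2) - (1 - q) * np.log(1 - q)            # eq. (19)
    S_a = (-(n + m) / 2 * np.log((n + m) / 2)
           - (n - m) / 2 * np.log((n - m) / 2)
           - (1 - n) * np.log(1 - n))                                 # eqs. (21)-(22)
    S_1a = -s * np.log(s / 2) - (1 - s) * np.log(1 - s)               # eq. (23)
    return S_sigma - (a * S_a + (1 - a) * S_1a)                       # eqs. (9), (20)
```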
## IV Threshold Dynamics
It is known that the parallel dynamics of fully connected networks is difficult to solve, even at zero temperature, because of the strong feedback correlations . Recently, a recursive dynamical scheme has been developed which calculates the distribution of the local field at a general time step using signal-to-noise analysis techniques . Recursion relations are obtained determining the full time evolution of the order parameters. We shortly review these results.
Suppose that the initial configuration of the network $`\{\sigma _{i,0}\}`$ is a collection of IIDRV with mean $`\langle \sigma _{i,0}\rangle =0`$, variance $`\langle (\sigma _{i,0})^2\rangle =q_{in}`$, and correlated with only one stored pattern, say the first one $`\{\xi _i^1\}`$:
$$\frac{1}{N}\underset{i}{\sum }\xi _i^\mu \sigma _{i,0}=\delta _{\mu ,1}m_{in}^1a,m_{in}^1>0.$$
(24)
This implies that by the law of large numbers (LLN) one gets for the retrieval overlap and the activity at $`t=0`$
$`m_0^1`$ $`\equiv `$ $`\underset{N\rightarrow \mathrm{\infty }}{lim}m_{N,0}^1={\displaystyle \frac{1}{a}}\langle \xi _i^1\sigma _{i,0}\rangle =m_{in}^1`$ (25)
$`q_0`$ $`\equiv `$ $`\underset{N\rightarrow \mathrm{\infty }}{lim}q_{N,0}=\langle \sigma _{i,0}^2\rangle =q_{in}.`$ (26)
At a given time step of the dynamics, the state of a neuron, $`\sigma _{i,t+1}`$, is determined by its local field at the previous time step. In general, in the limit $`N\rightarrow \mathrm{\infty }`$ the distribution of the local field at time $`t+1`$ consists of a discrete part and a normally distributed part
$`h_{i,t}=\xi _i^1m_t^1+\sqrt{\alpha aD_t}𝒩(0,1)+B_{i,t}`$ (27)
$`D_t=\text{Var}\left[r_t^\nu \right]=\text{Var}\left[\underset{N\rightarrow \mathrm{\infty }}{lim}{\displaystyle \frac{1}{a\sqrt{N}}}{\displaystyle \underset{i}{\sum }}\xi _i^\nu \sigma _{i,t}\right],\nu >1`$ (28)
$`B_{i,t}={\displaystyle \underset{t^{}=0}{\overset{t-1}{\sum }}}\alpha \left[{\displaystyle \underset{s=t^{}}{\overset{t-1}{\prod }}}\chi _s\right]\sigma _{i,t^{}}`$ (29)
with $`𝒩(0,1)`$ a Gaussian random variable with mean zero and variance $`1`$ and $`\chi `$ the susceptibility
$$\chi _t=\frac{1}{\sqrt{\alpha aD_t}}\left\langle \int 𝒟zzF_{\theta _t}\left(\xi ^1m_t^1+\sqrt{\alpha aD_t}z\right)\right\rangle $$
(30)
where $`𝒟`$ is the Gaussian measure. In the above $``$ denotes the average both over the distribution of the embedded patterns $`\{\xi _i^\mu \}`$ and the initial configurations $`\{\sigma _{i,0}\}`$. The average over the initial configurations is hidden in an average over the local field through the updating rule (1).
The first term on the r.h.s. of (27) is the signal term produced by the pattern that is being retrieved, the rest represents the noise induced by the $`(p-1)`$ non-condensed patterns. In particular, the second term is Gaussian noise and the last term $`B_{i,t}`$ contains discrete noise coming from the feedback correlations. The quantity $`D_t`$ satisfies the recursion relation
$$D_{t+1}=\frac{q_{t+1}}{a}+\chi _t^2D_t+2\chi _t\text{Cov}[𝒩(0,q_{t+1}/a),r_t^\mu ]$$
(31)
For more details we refer to . Using the above scheme the order parameters at a general time step can then be obtained in the limit $`N\mathrm{}`$ from Eqs. (6)-(7) and (1)
$`m_{t+1}^1`$ $`=`$ $`{\displaystyle \frac{1}{a}}\xi _i^1F_{\theta _t}(h_{i,t})`$ (32)
$`q_{t+1}`$ $`=`$ $`F_{\theta _t}^2(h_{i,t}).`$ (33)
The activity overlap needed in order to find the mutual information can also be written as
$$n_{t+1}^1=\frac{1}{a}\langle (\xi _i^1)^2F_{\theta _t}^2(h_{i,t})\rangle .$$
(34)
Of course, we then also need to specify its initial value
$$n_0^1\equiv \underset{N\rightarrow \mathrm{\infty }}{lim}n_{N,0}^1=\frac{1}{a}\langle (\xi _i^1)^2(\sigma _{i,0})^2\rangle .$$
(35)
The idea of the self-control threshold dynamics introduced in the diluted model and studied for some other models without feedback correlations has been precisely to let the network counter the noise term in the local field at each step of the dynamics by introducing the following form
$$\theta _t=c(a)\sqrt{\alpha aD_t},$$
(36)
where the function $`c(a)`$ is a function of the pattern activity. We remark that in these cases without feedback there is no discrete noise in the local field (the term $`B_{i,t}`$ in eq. (27) is absent). Furthermore, also the covariance term in eq. (31) is absent. Moreover, for the diluted model $`D_t=q_t/a`$, i.e., only the first term in eq. (31) is present. So this dynamical threshold has two important characteristics. First, it is a macroscopic parameter having the same value for every neuron, thus no average must be taken over the microscopic random variables at each time step. Secondly, it changes each time step but no statistical history intervenes in this process.
We see that the choice (36) is in fact related to the variance of the local field, taken for a fixed realization of the pattern which is being retrieved. It is the width of the noise produced by the non-condensed patterns. It is obvious that it cannot be taken to be a function of the overlap with the pattern being retrieved. As a consequence, for the fully connected network we cannot work with the exact form for $`D_t`$ as given in eq. (31) because of the presence of the covariance. So, if we want to take into account some effects of feedback correlations and if we want the threshold to have the characteristic properties mentioned above, we need to approximate the covariance term in eq. (31) such that only the previous time step is involved. This is realised by approximating this term by $`2\chi _t\{\text{Var}[𝒩(0,q_{t+1}/a)]\text{Var}[r_t^\mu ]\}^{1/2}=2\chi _t[q_{t+1}/a]^{1/2}[D_t]^{1/2}`$. We then easily get
$`\alpha aD_{t+1}`$ $`=`$ $`\left[G_t+\sqrt{\alpha q_{t+1}}\right]^2`$ (37)
$`G_t`$ $`=`$ $`\left\langle {\displaystyle \int 𝒟zzF_{\theta _t}\left(\xi ^1m_t^1+\sqrt{\alpha aD_t}z\right)}\right\rangle `$ (38)
Furthermore, we take both contributions at equal times and call $`\sqrt{\alpha aD_t}\equiv \mathrm{\Delta }_t=G_t+\sqrt{\alpha q_t}`$. For more details on this approximation of the feedback correlations we refer to and references therein. Finally, since $`G_t`$ is a function of the overlap $`m_t^1`$, a quantity which is not available to the network we replace it by $`G_0=\sqrt{2/\pi }a`$.
What is left then is to find a form for $`c(a)`$. For the low-activity networks considered up to now the storage capacity could be considerably improved by taking $`c(a)=[-2\mathrm{ln}(a)]^{1/2}`$ such that for the diluted model $`\theta _t=[-2\mathrm{ln}(a)\alpha q_t]^{1/2}`$. The same form has been shown to work for the layered model . For the fully connected model considered here we again propose, a priori, this form. So, combining these results we take as self-control threshold
$`\theta _t`$ $`=`$ $`\sqrt{-2\mathrm{ln}(a)}\mathrm{\Delta }_t^0`$ (39)
$`\mathrm{\Delta }_t^0`$ $`=`$ $`\sqrt{2/\pi }a+\sqrt{\alpha q_t}.`$ (40)
Finally, we make one more assumption on the dynamics. In the local field distribution (27) we forget about the discrete noise $`B_{i,t}`$ and suppose that the noise produced by the non-condensed patterns is Gaussian distributed. Computer simulations have shown that this assumption is approximately valid as long as the retrieval is successful . As a consequence we can write down recursion relations for the order parameters
$`m_{t+1}`$ $`=`$ $`{\displaystyle \int 𝒟zF_{\theta _t}(m_t^1+z\mathrm{\Delta }_t)}`$ (41)
$`q_{t+1}`$ $`=`$ $`a{\displaystyle \int 𝒟z[F_{\theta _t}(m_t^1+z\mathrm{\Delta }_t)]^2}`$ (43)
$`+(1-a){\displaystyle \int 𝒟z[F_{\theta _t}(z\mathrm{\Delta }_t)]^2}`$
with
$`\mathrm{\Delta }_t`$ $`=`$ $`\sqrt{\alpha q_t}+a{\displaystyle \int 𝒟zzF_{\theta _t}(m_t^1+z\mathrm{\Delta }_t)}`$ (45)
$`+(1-a){\displaystyle \int 𝒟zzF_{\theta _t}(z\mathrm{\Delta }_t)}`$
where we have already averaged over $`\xi `$. We remark that the form of the last equation is different from the corresponding equation for the diluted and the layered versions of this model because of the feedback.
The expressions for the overlap $`m_{t+1}`$, the neural activity $`q_{t+1}`$ and the noise $`\mathrm{\Delta }_t`$ due to the non-condensed patterns describe the (approximate) macro-dynamics of the fully-connected neural network. Besides the self-control model with the threshold given by eq. (39)-(40) we also consider the model with the threshold fixed at its zero time value, i.e., $`\theta _t=\theta _0`$.
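A sketch of how these flow equations might be iterated numerically is given below. The Gaussian integrals $`\int 𝒟z\mathrm{}`$ are approximated by Gauss-Hermite quadrature, and the implicit appearance of $`\mathrm{\Delta }_t`$ inside the gain function in eq. (45) is handled by inserting its value from the previous time step; this is one possible choice of implementation, not the only one.

```python
import numpy as np

_x, _w = np.polynomial.hermite.hermgauss(80)      # nodes/weights for int e^{-x^2} f(x) dx
_z, _w = np.sqrt(2.0) * _x, _w / np.sqrt(np.pi)   # rescaled to the Gaussian measure Dz

def gain(x, theta):
    return np.sign(x) * (np.abs(x) > theta)       # eq. (2)

def flow_step(m, q, Delta_prev, a, alpha):
    """One step of eqs. (41), (43), (45) with the self-control threshold (39)-(40)."""
    theta = np.sqrt(-2.0 * np.log(a)) * (np.sqrt(2.0 / np.pi) * a
                                         + np.sqrt(alpha * q))
    # noise width, eq. (45), using the previous Delta inside the gain function
    Fc, F0 = gain(m + _z * Delta_prev, theta), gain(_z * Delta_prev, theta)
    Delta = (np.sqrt(alpha * q) + a * np.dot(_w, _z * Fc)
             + (1 - a) * np.dot(_w, _z * F0))
    # order parameters at the next time step, eqs. (41) and (43)
    Fc, F0 = gain(m + _z * Delta, theta), gain(_z * Delta, theta)
    m_new = np.dot(_w, Fc)
    q_new = a * np.dot(_w, Fc ** 2) + (1 - a) * np.dot(_w, F0 ** 2)
    return m_new, q_new, Delta
```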
At this point we remark that when studying the mutual information, we want to introduce explicitly the activity-overlap parameter, $`n_t`$ (recall eq. (14)) in the dynamics leading to the following expression for $`q_{t+1}`$
$$q_{t+1}=an_{t+1}+(1-a)s_{t+1}$$
(46)
where $`s_{t+1}`$ is then, obviously, defined by the integral of $`[F_{\theta _t}(z\mathrm{\Delta }_t)]^2`$. This parameter $`s`$ is precisely that introduced in Eq. (13) and measures the number of active neurons in inactive condensed pattern sites.
In the following section we compare the retrieval properties of a fully connected network governed by this approximate dynamics with and without self-control with numerical simulations. The main aim is to show that self-control also works in the case of fully connected models.
## V Results
### A Numerical Results
We have solved the time evolution of our threshold dynamics with a time-dependent self-control threshold given by eqs. (39)-(40) and with a time-independent threshold where the neural activity is fixed at $`q_t=q_0`$.
We have studied the behavior of these networks in the range of pattern activities $`0.01\leq a\leq 0.67`$, i.e., from low activities to a uniform distribution of patterns.
For both thresholds it turned out that the best results were obtained by taking $`c(a)=\sqrt{-2\mathrm{ln}(a)}+K`$, with $`K=0`$ for $`a\geq 0.1`$ and $`K=0.5`$ for $`a<0.1`$. At this point we remark, however, that the pure log form for $`c(a)`$ is derived in the theoretical limit $`a\rightarrow 0`$. So, it may be that we did not reach small enough values in our numerical analysis (which is due to numerical complexity). We recall that one of the main aims of this work is to show that self-control also works for fully connected models.
The important features of self-control are illustrated in Figs. 1-5. In Fig. 1 we compare the time evolution of the retrieval overlap, $`m_t`$, starting from several initial values, $`m_0`$, for the model with self-control, $`\theta _{sc}=\theta _t`$ (recall eq. (39)-(40)) with the model with fixed threshold $`\theta _0`$. An initial neural activity $`q_0=a=0.01`$ and a loading $`\alpha =2`$ have been taken. We observe that the self-control forces more of the overlap trajectories to go to the retrieval attractor $`m=1`$. Only an initial overlap $`m_0\stackrel{>}{}0.4`$ for the self-control model versus $`m_0\stackrel{>}{}0.6`$ for the fixed threshold model is needed. We remark that for $`m_0\stackrel{<}{}0.6`$ the overlap decreases in the first time step for both models. This is an expression of the fact that correlations are especially important in the first time steps leading to a decreasing neural activity $`q_t`$, but the self-control threshold is able to counter these effects. Near the attractor correlations seem to become less important and the Gaussian character of the local field distribution dominates.
Since the initial overlap needed to retrieve a pattern is smaller for the self-control model, the basins of attraction of the patterns are substantially larger. This is further illustrated in Figs. 2-4 where the basin of attraction for the whole retrieval phase $`R`$ is shown for both models with an initial value $`q_0=a=0.01`$. We have calculated the fixed-point $`m_{\mathrm{\infty }}`$ of the dynamics (41), (43) and (45) and we have determined the initial conditions of the relevant parameters such that the network is able to retrieve, i.e., such that $`m_{\mathrm{\infty }}\simeq 1`$. It is interesting to also give $`n_0`$ and/or $`s_0`$ separately in order to see how the activity $`q_t`$ is built up.
In Fig. 2 we have used $`q_0=a,n_0=1`$. The basin of attraction for the self-control model is larger, even near the border of critical storage. Hence the storage capacity itself is also bigger.
Furthermore, a smaller initial activity-overlap $`n_0`$ suffices to have retrieval as is seen in Fig. 3. There we start with initial conditions $`m_0=0.4`$, i.e., the smallest initial overlap possible for $`\alpha =2`$ as we recall from Fig. 1, and $`q_0=an_0`$ or, equivalently $`s_0=0`$. So we consider small $`q_0`$ running from $`0.004`$ to $`0.01`$. We observe the peculiar behavior that for the fixed-threshold network an initial $`n_0>m_0`$ is needed, but even then still no retrieval is possible for low storage $`\alpha <0.1`$. For the self-control model a much broader region of retrieval exists. Finally, the specific role of the parameter $`s_t`$ is displayed in Fig. 4. We start from a maximal initial overlap $`m_0=1`$ and take $`n_0=1`$ meaning that for $`s_0=0`$ to $`s_0=1`$, $`q_t`$ runs from $`0.01`$ to $`1`$. It can be seen that especially when $`s_0`$ is getting large the storage capacity of both models decreases quite drastically but again much less for the self-control model.
We conclude with the observation that self-control works in a large range of pattern activities $`a`$, as shown in Fig. 5. There the mutual information content $`i=_i_\mu I/(\mathrm{\#}J_{ij})=\alpha I`$ is plotted as a function of the loading $`\alpha `$ on a logaritmic scale. We observe the slow increase of $`i`$ as the activity $`a`$ decreases, saturating at a value close to $`i0.3`$. This behavior is typical for low activity networks .
### B Simulations
Simulations have been carried out for systems with $`N=10^4`$ neurons. For every new stored pattern $`\mu `$, we start our dynamics with a state $`\sigma _i=\xi _i^\mu `$, and calculate the order parameters $`m_t`$, $`q_t`$ and the activity overlap $`n_t`$, using the definitions (6), (7) and (14). To avoid a very large computation time we have stopped the dynamics after at most 5 time steps when no convergence was reached before. Then we have averaged over windows in the $`p`$-axis in order to obtain the mutual information $`i`$. The window size runs from $`\delta p=50`$ for $`a=0.67`$ (where we have stored $`p=10^3`$ patterns) up to $`\delta p=2\times 10^3`$ for $`a=0.01`$ (where we have stored $`p=5\times 10^4`$ patterns).
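A compact, self-contained sketch of this protocol (with a much smaller network than the $`N=10^4`$ used here, without the window averaging over the $`p`$-axis, and with all $`p`$ patterns stored at once rather than added incrementally) could look as follows:

```python
import numpy as np

def retrieval_runs(N=2000, p=200, a=0.1, tmax=5, seed=0):
    """Store p patterns, start from sigma = xi^mu for every mu, iterate at most
    tmax parallel steps with the self-control threshold, and return (m, q, n)."""
    rng = np.random.default_rng(seed)
    xi = rng.choice([-1.0, 0.0, 1.0], size=(p, N), p=[a / 2, 1 - a, a / 2])
    J = xi.T @ xi / (N * a)
    np.fill_diagonal(J, 0.0)
    alpha = p / N
    out = []
    for mu in range(p):
        sigma = xi[mu].copy()
        for _ in range(tmax):
            q = np.mean(sigma ** 2)
            theta = np.sqrt(-2 * np.log(a)) * (np.sqrt(2 / np.pi) * a
                                               + np.sqrt(alpha * q))
            h = J @ sigma
            sigma = np.sign(h) * (np.abs(h) > theta)
        m = np.sum(xi[mu] * sigma) / (a * N)
        q = np.mean(sigma ** 2)
        n = np.sum(xi[mu] ** 2 * sigma ** 2) / (a * N)
        out.append((m, q, n))
    return np.array(out)
```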
The conditions on the LLN mentioned in Section IVA are approximately fulfilled for such large networks, since the fluctuations (neglected in Eqs.(15)-(17)) are of order $`1/\sqrt{aN}`$. However, for smaller activities $`a`$, this quantity may not be so small. It becomes crucial in the case $`a=0.01`$, where this quantity is $`0.1`$, such that the finite size effects become relevant. This implies a kind of cut-off in the information for the self-control model as seen in Fig. 6. However, the agreement with the analytic results of Fig. 5 is quite good down to $`a=0.01`$.
In order to further understand the details of the retrieval quality we plot in Fig. 7 all the parameters $`m,n,q`$ for the model with and without self-control in two cases: $`a=0.01`$, a low-activity case, and $`a=0.67`$, implying a uniform distribution of patterns. For the uniform case we do not see a big difference between the two models (self-control and fixed threshold). Only for larger values of $`\alpha `$, self-control shows a little improvement. For the low-activity case, however, the main role of self-control on the neural activity is clearly noticed since $`q\simeq a`$ in that case, while in the fixed-threshold model it is impossible to control $`q`$ such that it stays in the neighborhood of $`a`$. As a consequence the mutual information, e.g., is only about half of that for the model with self-control.
Finally, in Fig. 8 we compare the simulations with the results from the fixed-points of the dynamics (41), (43) and (45) for $`a=0.03`$. Up to $`t=5`$ time steps are considered for both the model with and without self-control and we have averaged over a window in the $`p`$-axis of size $`\delta p=10^3`$. For the self-control model the small underestimation of the theoretical results can, of course, be attributed to the approximations of the noise term (recall Eqs (37) and (40)).
## VI Concluding Remarks
In this paper we have introduced a self-control threshold in the dynamics of fully connected networks with three-state neurons. This leads to a large improvement of the quality of retrieval of the network. The relevant quantity in order to study this, especially in the limit of low activity, is the mutual information function. The mutual information content of the network as well as the critical capacity and the basins of attraction of the retrieval solutions for three-state patterns are shown to be larger because of the self-control mechanism. Furthermore, since the mutual information saturates, the critical capacity of the low-activity network behaves as $`\alpha _c=O(|a\mathrm{ln}(a)|^{-1})`$. Numerical simulations confirm these results.
This idea of self-control might be relevant for various dynamical systems, e.g., when trying to enlarge the basins of attraction and convergence times. Indeed, it has been shown to work also for both diluted and layered networks. Binary as well as ternary neurons and patterns have been treated. In all cases, it turns out that in the low-activity regime the self-control threshold can be taken to be proportional to the square root of the neural activity of the network.
## Acknowledgments
We would like to thank G. Jongen for useful discussions. This work has been supported by the Research Fund of the K.U.Leuven (grant OT/94/9). One of us (D.B.) is indebted to the Fund for Scientific Research - Flanders (Belgium) for financial support. |
The joys and pitfalls of Fermi surface mapping in Bi2Sr2CaCu2O8-δ using angle resolved photoemission.
## Abstract
On the basis of angle-scanned photoemission data recorded using unpolarised radiation, with high (E,k) resolution, and an extremely dense sampling of k-space, we resolve the current controversy regarding the normal state Fermi surface (FS) in Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8-δ</sub> (Bi2212). The true picture is simple, self-consistent and robust: the FS is hole-like, with the form of rounded tubes centred on the corners of the Brillouin zone. Two further types of features are also clearly observed: shadow FSs, which are most likely to be due to short range antiferromagnetic spin correlations, and diffraction replicas of the main FS caused by passage of the photoelectrons through the modulated Bi-O planes.
The topology and character of the normal state Fermi surfaces of the high temperature superconductors have been the object of both intensive study and equally lively debate for almost a decade. Angle-resolved photoemission spectroscopy (ARPES) has played a defining role in this discussion. In particular, the pioneering work of Aebi et al. illustrated that angle-scanned photoemission using unpolarised radiation can deliver a direct, unbiased image of the complete FS of Bi2212 , confirming the large FS centered at the corners of the Brillouin zone predicted by band structure calculations . Furthermore, the use of the mapping method enabled the indentification of weak additional features (dubbed the shadow Fermi surface or SFS) which were attributed to the effects of short-range antiferromagnetic spin correlations, the existence of which had already been proposed theoretically . Subsequently, conventional ARPES investigations (involving the analysis of series of energy distribution curves (EDCs) along a particular line in k-space) clearly identified a further set of dispersive photoemission structures which are extrinsic and result from a diffraction of the outgoing photoelectrons as they pass through the structurally modulated Bi-O layer, which forms the cleavage surface in these systems .
Recently, this whole picture of the normal state FS of Bi2212 has been called into question. ARPES data recorded using particular photon energies (32-33 eV) have been interpreted in terms of either: a FS with missing segments , an extra set of one dimensional states , or an electron-like FS centred around the $`\mathrm{\Gamma }`$ point . A further study suggests that either electron- or hole-like FS pieces can be observed, merely depending on the photon energy used in the ARPES experiment .
These points illustrate that the situation as regards the true topology and character of the normal state FS of Bi2212 as seen by photoemission spectroscopy is, in fact, far from clear. Thus, considering both the fundamental significance of the FS question in general, and the importance of photoemission in this debate, it is essential that an unambiguous framework is arrived at for the interpretation of the ARPES data.
In this Letter we present angle-scanned photoemission data from pure and Pb-doped Bi2212. As our data-sets contain more than 1000 high (E,k)-resolved EDCs per Brillouin zone quadrant, we combine the advantages of both the mapping and EDC methods, making data manipulation such as interpolation superfluous. We show that the origin of the recent controversy stems from the simultaneous presence of three different types of photoemission features around the $`\overline{\text{M}}`$ point : the main FS, diffraction replicas (DRs) and the SFS.
The ARPES experiments were performed using monochromated, unpolarised He I radiation and a SCIENTA SES200 analyser enabling simultaneous analysis of both the E and k-distribution of the photoelectrons. The overall energy resolution was set to 30 meV and the angular resolution to $`\pm `$0.38°, which gives $`\mathrm{\Delta }`$k $`\simeq `$ 0.028 Å<sup>-1</sup> (i.e. 2.4 $`\%`$ of $`\mathrm{\Gamma }`$X). High quality single crystals of pristine and Pb-doped Bi2212, the latter grown from the flux in the standard manner, were cleaved in-situ to give mirror-like surfaces and all data were measured at either 120 or 300K within 6 hours of cleavage.
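As a rough cross-check of the quoted momentum resolution, one can use the free-electron final-state relation k<sub>∥</sub> ≈ 0.51 √(E<sub>kin</sub>/eV) sinθ Å<sup>-1</sup>; assuming He I excitation and a work function of roughly 4.4 eV (an assumed value, not one quoted in the text), the ±0.38° angular window indeed translates into Δk ≈ 0.028 Å<sup>-1</sup>:

```python
import numpy as np

h_nu = 21.2                 # He I photon energy (eV)
work_function = 4.4         # assumed work function (eV)
E_kin = h_nu - work_function            # kinetic energy of electrons emitted from E_F
full_window = np.radians(2 * 0.38)      # full acceptance for +/- 0.38 degrees
dk = 0.5123 * np.sqrt(E_kin) * full_window   # small-angle approximation, 1/Angstrom
print(f"dk = {dk:.3f} 1/Angstrom")           # ~0.028, as quoted above
```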
Figures 1a and 1b show series of photoemission data (T $`\simeq `$ 300 K) of Bi2212 taken along the two high symmetry directions $`\mathrm{\Gamma }`$X and $`\mathrm{\Gamma }`$Y in k-space, respectively. The EDCs (right panels) are shown together with 2D (E, k) representations where the photoemission intensity forms the grey scale (left panels).
We deal first with $`\mathrm{\Gamma }`$X. Starting from the $`\mathrm{\Gamma }`$ point, we clearly observe the main CuO<sub>2</sub> derived band crossing the Fermi level, $`E_\text{F}`$, at $`\simeq `$ 0.4($`\pi `$,-$`\pi `$). Also evident are two weaker features straddling the X point, which cross $`E_\text{F}`$
at wavevectors equivalent to the main band crossings but shifted by the vector ($`\pi `$, -$`\pi `$). These are the shadow bands first observed in Ref. . A strikingly different picture occurs for $`\mathrm{\Gamma }`$Y (parallel to the crystallographic b-axis, Fig. 1b). Once again, the strongest feature is the main band, but now additional, extrinsic DRs of the main band are clearly visible, shifted in k from the main band by nq, with q=(0.21$`\pi `$,0.21$`\pi `$), whereby n is the order of the DR . Thus, starting from the bottom of Fig. 1(b) we see firstly a first-order DR of the main band (the latter crosses $`E_\text{F}`$ at $`\simeq `$0.4(-$`\pi `$,-$`\pi `$)). Then, around the $`\mathrm{\Gamma }`$ point a feature resulting from the overlap of very weak second order DRs is observed. There then follows a 1st order DR of the parent main band, which is then itself seen crossing $`E_\text{F}`$ at $`\simeq `$0.4($`\pi `$,$`\pi `$). Subsequently, a further 1st order DR of the same main band is seen, followed by indications for the shadow band and a 2nd order DR. It should be stressed that the strongly dispersive nature of the states along $`\mathrm{\Gamma }`$X,Y eases the task of interpreting the photoemission data.
Thus, the data shown in Fig. 1 indicate the presence of three different types of features for the $`\mathrm{\Gamma }`$X,Y photoemission data from Bi2212 related to the main bands, the shadow bands and diffraction replicas of the main bands. In this point there is a broad consensus .
The picture for the region around the $`\overline{\text{M}}`$-point, however, is currently the subject of considerable controversy. In fact, the interpretation of the photoemission data from this region of k-space is the key to resolving this debate and settling, once and for all, the true FS nature and topology in Bi2212.
Therefore, in Fig. 2 we present a detailed momentum map of the normal state (T=120K) Fermi surface of Bi2212 around the $`\overline{\text{M}}`$-point. The image is based upon 1300 EDCs, each spanning the energy range 500 $`\geq `$ $`E_\text{B}`$ $`\geq `$ -100 meV, and thus combines the unbiased view given by a map with the security of being able to examine a full EDC at each particular k-point. Following the discussion of Fig. 1 we have added guidelines indicating the location of the main FS (thick black solid line), the DRs (1st order: thin black solid line (on top of the map); 2nd order: dashed black line) and the shadow features (thick red solid line).
The main FS appears in Fig. 2 as arcs of high intensity centred around the X and Y points. There is absolutely no indication of a ’closure’ of the two main FS arcs shown in Fig. 2 at (0.8$`\pi `$,0) as reported in Refs. . Thus there is no $`\mathrm{\Gamma }`$-centered (electron-like) FS. The diffraction replicas of the main FS are also clearly present, both individually (e.g. the sole (red) feature on the dark blue background along the $`\mathrm{\Gamma }`$-$`\overline{\text{M}}`$-Z direction at (1.5$`\pi `$,0) is a 1st order DR) and collectively (in principle up to infinite order), leading to a bundling of intensity along a ribbon centered on the (0,-$`\pi `$)-($`\pi `$,0) line - indicated in Fig. 2 by grey shading.
The intensity distribution in the ribbon is, however, anisotropic. The edges of the ribbon are more intense than the centre, reflecting the intensity distribution around the main FS (the DR’s will necessarily have the same intensity distribution as the main FS). Finally, we point out the SFS, which follows the red lines underlying the map and is seen in these data more clearly than ever before.
The main point that is clear from Fig. 2 is the richness of structure in the ARPES data around $`\overline{\text{M}}`$. This arises from the complex interplay between main FS, DRs and the SFS features. As an example of this, one can see the overlap of the SFS and 1st order DR features at ca. 0.6($`\pi `$,$`\pi `$) as a bright spot on the map.
We emphasize that only an analysis of uninterpolated data recorded with high (E,k)-resolution on an extremely fine k-mesh can enable the discrimination between the numerous features concentrated within this small region of the Brillouin zone.
As a test of the robustness of the picture developed above, we show in Fig. 3 a large Fermi surface map of Pb-doped Bi2212. This time 3760 EDCs form the basis of the image. We chose the Pb-doped material as it does not possess the strong structural modulation along the crystallographic b-direction which is characteristic of Bi2212. This should result, then, in the disappearance of DR-related features in the map.
As can be seen from Fig. 3, this is quite evidently the case as there is no intensity ribbon centred along the (0,-$`\pi `$)-($`\pi `$,0) line and intensity profile across the map is practically symmetrical about the $`\mathrm{\Gamma }`$-$`\overline{\text{M}}`$-Z line. In this way we can put the assignment of the DR features in the Bi2212 data beyond any doubt.
The main FS, which is presented here with unprecedented clarity, has the form of tubes centered around the X,Y points. There are absolutely no indications of a $`\mathrm{\Gamma }`$-centred, electron-like FS. The detailed topology of the main FS differs from that in the recent literature in a number of important points, which are related to the favourable experimental conditions used in this study.
Firstly, the use of unpolarised radiation reduces the distorting effects of the photoemission matrix elements to a minimum - illustrated by the roughly equal intensity of the tube sections in both the $`\mathrm{\Gamma }`$X and $`\mathrm{\Gamma }`$Y directions. All of the published FS investigations other than those of Aebi and co-workers have used highly polarised synchrotron radiation and are therefore susceptible to extreme suppression of intensity in particular k-space regions due purely to symmetry-related matrix element effects. In this way, mapping measurements using polarised radiation are inherently at a disadvantage when one wishes to determine the detailed FS topology without having to know, a priori, what it is. For example, Fermi surface maps of Bi2212 showed a strong dependence on the experimental geometry (and thus polarisation conditions) . Having made this point, it is obvious that maps recorded using polarised radiation cannot be used to support contentions of an electron-like FS in Bi2212.
The second point concerns the exclusive use of real experimental data. If a coarse k-mesh is used to construct a map via interpolation, artefacts can result whereby the topology of the Fermi surface is extremely sensitive to the procedure used to define $`𝐤_\text{F}`$. As an illustration of this, we comment on the recent FS map of Feng et al. , derived from data recorded using polarised radiation at 51 k-points in the (0,0)-($`\pi `$,$`\pi `$)-(0,$`\pi `$) octant of the Brillouin zone, which shows nested FS segments only when using the $`\nabla n(k)`$ method of determining $`𝐤_\text{F}`$ . The FS data presented here for both pristine and Pb-doped Bi2212 show a more curved FS arc in this region of k-space, although a degree of parallelity across the $`\overline{\text{M}}`$-point does exist. The important point here is that the FS topology in ARPES data-sets of sufficient quality is extremely robust - the maps shown in Figs. 2 and 3 can be redrawn using either the $`I_{\text{max}}`$ or $`\nabla n(k)`$ methods of determining $`𝐤_\text{F}`$, without altering the form of the main FS.
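To make the two criteria concrete, the following sketch (an assumed illustration in Python, not the analysis code actually used for Figs. 2 and 3) shows how $`𝐤_\text{F}`$ can be estimated along a single cut from a stack of EDCs, either from the maximum intensity in a narrow window at E<sub>F</sub> ($`I_{\text{max}}`$) or from the steepest gradient of the integrated intensity n(k):

```python
import numpy as np

# Illustrative sketch (assumed, not the analysis code used here): estimate k_F
# along one cut from a stack of EDCs I[k, E], using either the "I_max" or the
# "grad n(k)" criterion discussed in the text.
def kf_imax(I, E, E_F, window=0.01):
    """Index of the k point where the intensity in a narrow window at E_F peaks."""
    at_EF = np.abs(E - E_F) < window
    return int(np.argmax(I[:, at_EF].sum(axis=1)))

def kf_grad_n(I, E, E_F):
    """Index of the k point where n(k), the EDC intensity integrated over the
    occupied states, drops most steeply."""
    n_k = I[:, E <= E_F].sum(axis=1)
    return int(np.argmax(np.abs(np.gradient(n_k))))
```

For a densely sampled, high-resolution cut the two estimators agree to within the k-mesh spacing, which is the robustness argument made above.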
The third point in the comparative discussion of our data with that in the literature concerns the question of photon energy. We see no indication for an electron-like FS in either of the 2212 systems studied, neither from the He I data presented here nor from full FS maps of Pb-Bi2212 (not shown) recorded using unpolarised He II radiation (h$`\nu `$=40.8eV). We reach the same conclusion from the analysis of our synchrotron ARPES data from Bi2212 , in which no sign of a main band crossing is observed along $`\mathrm{\Gamma }`$$`\overline{\text{M}}`$ for photon energies of either 40 or 50 eV (data not shown). If we use h$`\nu `$=32eV, we observe a strong suppression of the intensity of the saddle-point singularity states near $`\overline{\text{M}}`$. As the Fermi surface of a material cannot depend on the photon energy used to measure it, it is clear that photon energies between 32-33 eV (as used in Refs. ) are not suited to giving an undistorted picture of the true FS of Bi2212.
Indeed, our data offer a natural explanation for the observations made with h$`\nu `$=32-33eV. The matrix-element mediated suppression of the saddle point intensity for these photon energies explains the suppression of spectral weight directly along the (0,-$`\pi `$)-($`\pi `$,0) line observed in , and also means that the edges of the ribbon mentioned earlier will become relatively more intense. Thus, traversal of the two edges of the ribbon feature could be mistakenly interpreted as a ’main’ band crossing along the $`\mathrm{\Gamma }`$-$`\overline{\text{M}}`$ line.
Finally, we turn to the shadow FS features. The most mundane explanation would be an extrinsic diffraction of the CuO<sub>2</sub>-plane photoelectrons on their way to or through the surface of the crystal, giving rise to full, tube-like copies of the main FS - i.e. the SFSs are mere DRs. Within this picture, as there is no evidence for reconstruction of the Sr-O or Bi-O layers of Bi2212 other than the well-known Bi-O modulation, the SFS features would then have to be fifth-order DRs (assuming q is exactly 0.2($`\pi `$,$`\pi `$), which is not the case). For Bi2212 this scenario is clearly highly unrealistic, considering the relative intensities of the first and second-order DR features seen in Figs. 1 and 2. In addition, if the SFS were a DR, it should show the same intensity distribution as the corresponding sections of the main FS (as is seen for the genuine Bi2212 DRs observed due to the Bi-O-modulation). As this is not the case for the SFSs in either pristine or Pb-doped Bi2212, it would appear that they are intrinsic to the CuO<sub>2</sub> planes and thus due to the presence of a Brillouin zone of half the size of the original. Such a reduction of the Brillouin zone could be due to structural effects, as has been illustrated for the case of Bi2201 . However, there exists no evidence of this in the case of Bi2212.
Thus, at least for pristine Bi2212, all indicators point to a spin-related origin of the shadow FS in the photoemission data, as originally proposed . As there is electron diffraction evidence of a c(2x2)-like reconstruction in Pb-doped Bi2212 , no definitive conclusion is possible at this stage regarding the origin of the SFSs in this compound. Nevertheless, the strong similarities as regards the topology and relative intensity of the shadow FSs in both the pristine and Pb-doped systems would seem to point to a common origin for these features.
Taking the SFSs to be intrinsic to the CuO<sub>2</sub> planes, regardless of their detailed origin, the overall FS topology cannot be that of the main FS tubes with additional SFS-tubes shifted by ($`\pi `$,$`\pi `$), as this would lead to a crossing of the two FS pieces (which is forbidden), as well as the necessity of accepting the simultaneous presence of hole-like and electron-like FSs of the same size originating from the same CuO<sub>2</sub> planes. Thus, the two FS pieces will be separated by a gap, whose magnitude depends on the interaction responsible for the Brillouin zone reduction. This gap could, however, be small enough to remain undetected in a photoemission experiment.
In conclusion, we have shown that high (E,k) resolution, high k-density angle-scanned photoemission data-sets combining the advantages of both the mapping and EDC approaches give a self-consistent and robust picture of the nature and topology of the FS in the 2212-based materials. From a comparison of pristine and Pb-doped Bi2212 we have shown there to be three different features in the ARPES data of Bi2212:
- the main FS is hole-like, with the topology of a curved tube centred around the X,Y points;
- the Bi-O modulation gives rise to extrinsic diffraction replicas of the main FS which lead to high intensity ribbons centered on the (0,-$`\pi `$)-($`\pi `$,0) line;
- a shadow FS is also clearly present, which, at least in the case of Bi2212, is likely to be of spin-related origin.
Note added: while completing this paper, we became aware of a preprint containing high k-density ARPES data of Bi2212 recorded using polarised radiation. These data, although measured deep in the superconducting state, are used to confirm the hole-like nature of the main normal-state FS, and the distorting effect of matrix elements on the photoemission spectra when using h$`\nu `$=33eV.
We are grateful to the BMBF (05 SB8BDA 6), the DFG (Graduiertenkolleg ’Struktur- und Korrelationseffekte in Festkörpern’ der TU-Dresden) and the SMWK (4-7531.50-040-823-99/6) for financial support, and to U. Jännicke-Rössler and K. Nenkov for the characterisation of the crystals.
I. In a recent Letter Yang and Bodek presented results of a new analysis of proton and deuteron structure functions in which the free neutron structure function, $`F_2^n`$, was extracted at large $`x`$. Knowledge of $`F_2^n`$ is crucial for determining the neutron/proton structure function ratio, whose $`x\rightarrow 1`$ limit is sensitive to mechanisms of SU(6) spin-flavor symmetry breaking, and provides one of the fundamental tests of the $`x`$ dependence of parton distributions in perturbative QCD.
Relating nuclear structure functions to those of free nucleons is, however, not straightforward because at large $`x`$ nuclear effects become quite sizable. In particular, omitting nuclear binding or off-shell corrections can introduce errors of up to 50% in $`F_2^n/F_2^p`$ already at $`x\simeq 0.75`$. Rather than follow the conventional procedure of subtracting Fermi motion and binding effects in the deuteron via standard two-body wave functions , Yang and Bodek instead extract $`F_2^n`$ using “a model proposed by Frankfurt and Strikman , in which all binding effects in the deuteron and heavy nuclear targets are assumed to scale with the nuclear density” . Here we point out why this approach is ill-defined for light nuclei, and introduces a large theoretical bias into the extraction of $`F_2^n`$ at large $`x`$.
For heavy nuclei the nuclear EMC effect is observed to scale with the nuclear density, $`\rho _A`$
$`{\displaystyle \frac{R_{A_1}-1}{R_{A_2}-1}}`$ $`=`$ $`{\displaystyle \frac{\rho _{A_1}}{\rho _{A_2}}},`$ (1)
where $`R_A=F_2^A/F_2^d`$ and $`\rho _A=3A/(4\pi R_e^3)`$, with $`R_e^2=(5/3)\langle r^2\rangle `$ and $`\langle r^2\rangle ^{1/2}`$ is the nuclear r.m.s. radius. Assuming that an analog of Eq.(1) holds also for $`F_2^A/F_2^N`$ ($`F_2^N=F_2^p+F_2^n`$) one finds
$`{\displaystyle \frac{F_2^d}{F_2^N}}`$ $`=`$ $`1+{\displaystyle \frac{\rho _d}{(\rho _A-\rho _dR_A)}}(R_A-1).`$ (2)
This expression was derived by Frankfurt and Strikman in Ref., where the denominator was further approximated by $`\rho _A-\rho _dR_A\approx \rho _A-\rho _d`$. It was used in the analysis of the SLAC data , and also referred to by Yang and Bodek , although the explicit formulas used in that analysis are not given. We have checked the numerical values for $`F_2^d/F_2^N`$ in , and they agree with the result one would obtain from Eq.(2) if the densities quoted in are calculated in terms of charge radii .
Frankfurt and Strikman point out that from the above expression for $`F_2^d/F_2^N`$ one can extract the free neutron structure function from empirical EMC ratios and the nuclear densities. With the numerical values for $`\rho _A`$ quoted in , one finds then that the EMC effect in $`d`$ is about 25% as large as in <sup>56</sup>Fe , and has the same $`x`$ dependence.
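As an illustration of how Eq.(2) is used in such an extraction, the short sketch below (Python) evaluates the correction factor; the densities are example numbers computed from typical charge radii and are only assumptions for the purpose of illustration, not the values actually used in the fits. With these inputs the prefactor $`\rho _d/(\rho _{Fe}-\rho _dR_{Fe})`$ comes out close to 0.25, i.e. a deuteron EMC effect about 25% of that in <sup>56</sup>Fe, as stated above.

```python
# Illustrative sketch of the density-extrapolation ansatz, Eq. (2).  The
# densities below are assumed example values obtained from typical charge
# radii via rho_A = 3A/(4 pi R_e^3); they are not the numbers used in Refs.
def f2d_over_f2N(R_A, rho_A, rho_d):
    """F2^d/F2^N inferred from the EMC ratio R_A = F2^A/F2^d of a heavy nucleus."""
    return 1.0 + rho_d / (rho_A - rho_d * R_A) * (R_A - 1.0)

rho_d, rho_Fe = 0.024, 0.117   # fm^-3 (assumed illustrative values)
R_Fe = 0.92                    # assumed EMC ratio of 56Fe at large x

ratio = f2d_over_f2N(R_Fe, rho_Fe, rho_d)
# The free neutron structure function would then follow from deuteron and
# proton data as F2n = F2d/ratio - F2p, which is where the model dependence
# criticised in this Comment enters.
```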
While the correlation of EMC ratios with nuclear densities is empirical for heavy nuclei, application of Eq.(1) to light nuclei, $`A<4`$, for which EMC effect has not yet been determined, is fraught with ambiguities in defining physically meaningful nuclear densities for few-body nuclei. Firstly, the relevant density in Eq.(1) is the nuclear matter density, while in practice $`\rho _A`$ is calculated from the charge radius — for heavy nuclei the difference is negligible, but for light nuclei it can be significant. Secondly, treating the deuteron as a system with radius $`\langle r^2\rangle ^{1/2}\approx 2`$ fm means that one includes both nucleons in the average density felt by one of them, even though one nucleon obviously cannot influence its own structure. Therefore what one should consider is the probability of one nucleon overlapping with the other, which is simply the deuteron wave function at the origin. This has zero weight, however, so the only sensible definition of mean density for the deuteron is zero. Strictly speaking, the nuclear density extrapolation then predicts no nuclear EMC effect in the deuteron.
In Ref. Frankfurt and Strikman argue that for heavy nuclei the average potential energy is proportional to the average nuclear density, and hence for $`x`$ below 0.5–0.6 (where nuclear Fermi motion is not overwhelming) the nuclear EMC effect should scale with average nuclear density. If one applies the idea from heavy nuclei (where the assumption is known empirically to be reasonable) to the deuteron, one finds that the EMC effect in $`d`$ is $`(F_2^d/F_2^N-1)=0.25(F_2^{Fe}/F_2^d-1)`$. For light nuclei ($`A=2,3`$) no justification for this assumption is provided, however, and for $`x\gtrsim 0.6`$, where nuclear Fermi motion effects become large, Frankfurt and Strikman caution that this estimate is only a qualitative one .
In a reply to our original Comment, Yang and Bodek state that “although the notion of nuclear density for the deuteron may not be very well defined, the value of the nuclear density for deuterium that was used in the SLAC fit yields a similar correction for nuclear binding in the deuteron as the estimate by Frankfurt and Strikman” . As explained above, not only is the notion of nuclear density for the deuteron not very well defined, it is not defined at all.
Moreover, agreement with other calculations for the magnitude of the deuteron EMC effect does not provide a posteriori justification for using an ill-defined quantity like average nuclear density for the deuteron. One would never think of using a density extrapolation to extract the neutron’s electromagnetic form factors from quasi-elastic scattering on the deuteron or <sup>3</sup>He, for example, and there is no reason to believe this method is any more reasonable for structure functions.
II. The size of the EMC effect in the deuteron cannot be tested directly in any inclusive deep-inelastic scattering experiment on the deuteron, as it requires knowledge of $`F_2^n`$, which itself must be extracted from deuteron data. If, on the other hand, the EMC effect scales with nuclear density even for the deuteron, as assumed in , it must also scale with $`\rho _A`$ for all $`A>2`$. In particular, it must predict the size of the EMC effect in 3-body nuclei. In fact, for $`A=3`$ the nuclear density extrapolation makes quite a dramatic prediction: since the 3-body nuclear densities calculated from the charge radii are $`\rho _{{}_{}{}^{3}He}=0.049`$ fm<sup>-3</sup> and $`\rho _{{}_{}{}^{3}H}=0.068`$ fm<sup>-3</sup>, the EMC effect in <sup>3</sup>H is 40% larger than that in <sup>3</sup>He. This is to be compared with standard many-body calculations in terms of Faddeev wave functions which predict a $`\sim 10\%`$ difference between the EMC effects in $`A=3`$ mirror nuclei, see Fig.1. The $`A=3`$ system presents therefore an ideal case for testing the scaling of the nuclear EMC effect to small $`A`$.
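To make the arithmetic behind this prediction explicit: within the density-scaling ansatz of Eq.(1), $`R_A-1\propto \rho _A`$, so that

$$\frac{R_{{}^{3}\text{H}}-1}{R_{{}^{3}\text{He}}-1}=\frac{\rho _{{}^{3}\text{H}}}{\rho _{{}^{3}\text{He}}}=\frac{0.068}{0.049}\simeq 1.4,$$

i.e. an EMC effect roughly 40% larger in <sup>3</sup>H than in <sup>3</sup>He, to be contrasted with the $`\sim 10\%`$ difference obtained from the Faddeev calculations.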
III. In Ref., Yang and Bodek compare their $`F_2^d/F_2^N`$ ratio with the relativistic quark model calculation of Ref. (which we denote by “MST”, and which is referred to in as the “Melnitchouk–Thomas theory”), while attributing to this the empirical extraction of $`F_2^n`$ which was actually carried out in Ref. (which we denote by “MT”). They fail to appreciate the distinction between the MST investigation of relativistic and off-shell effects within a model and the MT analysis of data, so a brief explanation is needed to clarify the confusion.
In the MST model, the objective was to investigate the extent to which relativity and nucleon off-mass-shell effects ($`p^2\ne M^2`$) violate the familiar convolution approximation for nuclear structure functions. To test the significance of these effects, a microscopic model of nucleon structure functions had to be constructed (in the absence of data on off-shell nucleon structure functions!), in order to consistently combine this with a relativistic deuteron wave function in a covariant calculation. Both the proton and neutron structure functions were parameterized in terms of a simple model for the nucleon-quark-diquark vertex functions, and, in the context of this model, the theoretical $`F_2^d/F_2^N`$ ratio was constructed.
No attempt to extract $`F_2^n`$ from the calculated ratio was made in the MST model. Indeed, if any input neutron structure function is used to construct the theoretical $`F_2^d/F_2^N`$ ratio, clearly the calculated ratio cannot then be applied to draw conclusions about the extracted $`F_2^n`$ — different neutron input would result in a different EMC ratio, and the argument would be cyclic. Therefore comparing the ratio calculated in the MST model with a ratio used in an empirical extraction of $`F_2^n`$ such as in Ref. is rather meaningless.
The least ambiguous procedure is to deconvolute the smeared neutron structure function by iterating the inversion procedure until a convergent, self-consistent solution is obtained. This was done in the MT analysis, in which the only theoretical input was the deuteron wave function.
In Ref., Yang and Bodek criticize the calculated $`F_2^d/F_2^N`$ ratio in the MST model<sup>*</sup><sup>*</sup>* Note that this is incorrectly attributed here to , highlighting the apparent confusion in between the MST model and the MT analysis. The curve labeled “Melnitchouk-Thomas model” in Fig.1 of is also incorrectly referenced to rather than to the MST model . on the grounds that its small-$`x`$ behavior is opposite to that in the EMC ratio for iron. They also assert that the small-$`x`$ behavior reflects some sort of violation of the energy–momentum sum rule in the MST model. The history of this discussion is rather long and illustrious, and for the background we refer the reader to any standard review of the EMC effect (see e.g. Refs. and references therein). Both of these points are actually diversions and irrelevant to our main comment, however, since they were raised as apparent justifications for using the density extrapolation model, we feel a need to address them.
Firstly, the behavior of structure functions at $`x0.2`$ is clearly outside the region of large $`x`$ where the nuclear effects which we are discussing are relevant. The focus of the MST investigation was specifically nuclear effects in the valence quark region at large $`x`$. A realistic description of the sea requires much more sophistication than can be reasonably demanded of any simple valence quark model. Secondly, in any realistic model of nuclear structure which properly accounts for nuclear binding, nucleons alone obviously cannot carry all of the momentum of the nucleus. An analog of demanding that nucleons saturate the momentum sum rule would be to demand that valence quarks alone carry all of the nucleon’s momentum and gluons carry none — in contradiction with experiment. This is the content of the assertion that the energy–momentum sum rule is not satisfied in the MST model.
The point is that we have never advocated using the MST model to extract the neutron structure function, contrary to the claim made in . As explained above, the least ambiguous $`F_2^n`$ extraction requires employing the iterative procedure for unsmearing the effects of the deuteron wave function, as outlined in , using deuteron and proton data as input. However, even if these points were valid, they would still not be relevant to the issue of $`F_2^n`$ extraction as they refer specifically to the MST model and not to the $`F_2^n`$ extraction in the MT data analysis.
Yang and Bodek in addition claim that the MST model has not been applied to $`A\ge 4`$ nuclei. Since the MST model was relativistic (namely, it included terms beyond order $`v/c`$ in the deuteron wave function), applying it to other nuclei is straightforward once relativistic wave functions are known. To date only non-relativistic wave functions have been calculated for nuclei with $`A\ge 4`$. However, since the MST model has a well-defined non-relativistic limit (namely, omitting deuteron $`P`$-waves, and dropping terms of order $`v^2/c^2`$), it smoothly matches onto previous non-relativistic calculations of the nuclear EMC effect for $`A\ge 4`$ . So again, contrary to the assertion by Yang and Bodek , the MST model has indeed been tested for all nuclei and for all cases where wave functions exist .
IV. To summarize, we have demonstrated that the nuclear density model of the EMC effect, extrapolated well beyond its region of validity to the case of the deuteron, is a completely unreliable method of extracting the neutron structure function at large $`x`$, and introduces a large theoretical bias into the extraction procedure. Although a nuclear density fit may be quite useful for heavy nuclei where data exist , its extrapolation to $`A\le 3`$ where there are no data is highly speculative, as illustrated by the difficulty in defining physically meaningful densities for few-body systems.
We believe this issue can only be resolved by measuring the nuclear EMC effect in the deuteron and in $`A=3`$ nuclei. One proposal would be to simultaneously measure the structure functions of <sup>3</sup>He and <sup>3</sup>H, extracting the $`F_2^n/F_2^p`$ ratio through the cancellation of nuclear effects, which would indirectly determine the EMC effect in the deuteron. Other solutions would be to reconstruct $`F_2^n`$ from the $`d/u`$ ratio extracted from parity-violating $`\stackrel{}{e}p`$ scattering , completely free of nuclear effects, semi-inclusive $`\pi ^\pm `$ production from an <sup>1</sup>H target , or from various charged-current reactions . The above discussion presents a strong case for performing these experiments as soon as possible.
We would like to acknowledge stimulating discussions with G. Petratos concerning the nuclear corrections in $`A=3`$ nuclei. This work was supported by the Australian Research Council.
## I Introduction
The dense Kondo system CeB<sub>6</sub> (T<sub>K</sub> ∼ 1 K) exhibits a three part phase diagram (see Figure 1). This paper reports new high field measurements of the phase I to phase II transition temperature in the H-T plane, T<sub>Q</sub>(H). Cerium hexaboride is one of several rare earth hexaborides that crystallize in the primitive cubic structure with the rare earth ions at the cube center and boron octahedra at the cube corners. In the past decade there have been many studies of the electronic, thermal and magnetic properties of CeB<sub>6</sub> because of interest in the low temperature heavy fermion (HF) ground state. All of the magnetic properties arise from the single $`4f`$ electron on the Ce atom that hybridizes with the conduction electrons to give rise to the HF behavior.
The largest factor influencing the energy levels of the $`4f`$ electron on the Ce atom in CeB<sub>6</sub> is the spin-orbit interaction. This interaction splits the 14-fold degenerate $`4f`$ level into a 6-fold degenerate, $`{}_{}{}^{2}F_{5/2}^{}`$, and an 8-fold degenerate, $`{}_{}{}^{2}F_{7/2}^{}`$, level. The $`{}_{}{}^{2}F_{5/2}^{}`$ level lies lowest in energy and is separated from the $`{}_{}{}^{2}F_{7/2}^{}`$ level by an energy much greater than 500 K. Thus only the J = 5/2 state is populated at room temperature and below. In the absence of any other effects the magnetic sublevels would correspond to J = $`\pm `$ 1/2, $`\pm `$ 3/2 and $`\pm `$ 5/2 with a Lande g-factor for this level of 6/7.
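For completeness, the quoted value follows from the standard Landé formula applied to the $`{}_{}{}^{2}F_{5/2}^{}`$ term, with $`L=3`$, $`S=1/2`$ and $`J=5/2`$:

$$g_J=\frac{3}{2}+\frac{S(S+1)-L(L+1)}{2J(J+1)}=\frac{3}{2}+\frac{3/4-12}{35/2}=\frac{6}{7}.$$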
Point ion crystal field theory predicts that the cubic crystal field due to the six borons in CeB<sub>6</sub> further splits the Ce 6-fold degenerate $`{}_{}{}^{2}F_{5/2}^{}`$ level into a 2-fold degenerate $`\mathrm{\Gamma }_7`$ and a 4-fold $`\mathrm{\Gamma }_8`$ level. There have been different interpretations of data with differing conclusions about the energy ordering of these two levels, but it is now generally perceived that in CeB<sub>6</sub> the $`\mathrm{\Gamma }_8`$ is the lowest energy state, and the splitting between the $`\mathrm{\Gamma }_7`$ and the $`\mathrm{\Gamma }_8`$ levels is on the order of 530 K. The $`\mathrm{\Gamma }_8`$ symmetry of the $`f`$ electron on Ce allows not only a magnetic dipole moment, but in addition, an orbital electric and magnetic quadrupole moment. In zero applied magnetic field several different orderings of these moments have been proposed to occur.
The overall results of the previously published magnetic field - temperature phase diagram of CeB<sub>6</sub> is shown in Figure 1. At high temperatures the material is paramagnetic (Phase I) with 2.34 $`\mu _B`$ per Ce atom. In zero applied field, as the temperature is decreased, there is a transformation into the first ordered state at 3.5 K (Phase II), then at 2.2 K the Ce dipole moments align antiferromagnetically (Phase III). There are several substructures within Phase III, but we will not be concerned with the structure of Phase III other than to point out that at all applied magnetic fields above about 2.2 T it does not exist.
The ordering in Phase II was studied by neutron diffraction and proposed to be an ordering of quadrupole moments. Antiferro quadrupolar ordering has been observed in other materials, for example, TmTe. In TmTe this AFQ ordering is destroyed by applied magnetic fields of higher than 6 T. As can be seen from the published phase diagram for CeB<sub>6</sub>, the state is not destroyed by the application of magnetic fields up to 15 T. In this AFQ model it is the coupling between the orbital quadrupole and spin dipole moments that allows the phase transition to be observed with magnetic torque measurements in uniform fields.
## II Measurements
The magnetic measurements were carried out with a metal film cantilever magnetometer, composed of two metal plates (one fixed and the other flexible) that senses forces and torques capacitively. A single crystal of CeB<sub>6</sub> is attached to the flexible plate with Apiezon N grease. When the sample/cantilever is positioned at field center, the sample experiences a torque proportional to its magnetization. Most of the measurements reported here were made at field center. However, three data points were taken with the sample in a field gradient (0.2 T/cm), where the sample experiences a force proportional to its magnetization. The data is summarized in Figure 2.
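For reference, the two operating modes correspond to the standard expressions for a sample of moment $`𝐦`$ in a field $`𝐁`$ (a textbook reminder, not a detail specific to this instrument):

$$\tau =𝐦\times 𝐁,F_z=m_z\frac{dB_z}{dz},$$

so that at field center only the torque contributes, while in the 0.2 T/cm gradient the force term dominates; both signals scale with the sample magnetization.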
The sample’s magnetization was measured at fixed fields as a function of temperature (as shown in Figure 3). To ensure proper determination of temperature a Lake Shore Cernox<sup>TM</sup> CX 1030 series resistive thermometer was thermally anchored to the flexible plate of the cantilever with Cry-Con grease, and corrections were made for the magnetic field dependence of the Cernox<sup>TM</sup> thermometer. Details of how such corrections should be made can be found in a paper by Brandt et al.
## III Discussion
Because of the antiferromagnetic ordering with wave vector k<sub>0</sub> = \[1/2, 1/2, 1/2\] observed in neutron diffraction the ordering in Phase II was proposed to be that of quadrupole moments, requiring a splitting of the four-fold degenerate $`\mathrm{\Gamma }_8`$ ground state into two doublets. Several models have been given for this splitting. Either a dynamic Jahn-Teller effect involving acoustic phonons, or a hybridization-mediated anisotropic coupling of the $`4f`$ wave functions to the $`p`$-like boron or $`5d`$-type cerium wave functions were suggested as possibilities in Ref. . An alternative interpretation of these neutron scattering results has been given by Uimin in Ref. . Uimin interprets the low temperature frequency shift of the $`\mathrm{\Gamma }_7`$ \- $`\mathrm{\Gamma }_8`$ as arising from collective modes of spin fluctuations caused by the orbital degrees of freedom.
In an early paper Ohkawa proposed that indirect exchange interactions between pairs of Ce atoms would produce a splitting of the four-fold degenerate level into (4 $`\times `$ 4) sixteen levels split into a group of two triplets and a group consisting of a singlet plus a nine-fold degenerate level with Phase II representing an ordering of the orbital moments. Calculations in Ref. based on this model predict that the critical field that destroys Phase II will be in excess of 30 - 50 T.
Building on Ohkawa’s work, Shiina et al. have constructed a mean field theory for Ohkawa’s RKKY model and calculated the phase diagram. They argue that the increase of T<sub>Q</sub>(H) at low fields is due mainly to field-induced dipolar and octupolar moments. Also, they suggest an improvement to the model by introducing asymmetry into the interaction between dipolar and octupolar moments which leads to induced staggered dipolar moments and accounts for the distinction of Phase II into a low field phase and a high field phase suggested by Nakamura et al.. However, detailed measurements on the symmetry of the order parameter are required to see what applicability Shiina et al.’s model has to CeB<sub>6</sub>.
Uimin described the shape T<sub>Q</sub>(H) as arising from competing AFQ patterns near the ordering temperature. These fluctuations are suppressed by an applied magnetic field. Uimin’s model predicts three important characteristics of the AFQ-Paramagnetic phase diagram: (1) that T<sub>Q</sub>(H) increases linearly at low applied fields, (2) that the AFQ-Paramagnetic phase line is anisotropic in the H-T plane, and (3) that T<sub>Q</sub>(H) decreases and goes to zero at sufficiently high fields. Based on data available at the time Uimin estimated the lower limit field for the re-entrance of T<sub>Q</sub>(H) as approximately 25 - 30 T yielding an H(T<sub>Q</sub> = 0) approaching 80 T. The measurements reported here do not show re-entrance up to 30 T. Uimin points out that his estimate of H(T<sub>Q</sub> = 0) does not take into account the Kondo effect; however, the measurements are carried out at higher energies than the Kondo energy (on the order of 2 K).
Uimin’s theoretical treatment also found a significant dependence of T<sub>Q</sub>(H) on the orientation of the applied field. In the T<sub>Q</sub>(H) does not decrease for arbitrarily high fields. Our measurements were made with the sample in the . However, no experiment has shown any significant orientation dependence in the T<sub>Q</sub>(H) phase line, which Uimin attributes to the unusual anisotropy of the Zeeman energy.
More recently, Kasuya has considered a paired dynamic Jahn-Teller distortion with no quadrupolar ordering causing an increased Ce - Ce antiferromagnetic (AFM) coupling that is enhanced by increasing applied magnetic field. In Ref. the critical field at which this enhanced AFM ordering is destroyed also is predicted to be greater than 30 T.
It should be noted that muon spin rotation measurements in zero applied magnetic field yield a different magnetic structure for CeB<sub>6</sub> for both Phase II and III. Detailed measurement of the variation of magnetic order parameter as a function of temperature are needed.
## IV Conclusions
Our measurements of the phase boundary between Phase I and Phase II, along with previously published points are shown in Fig. 2. As can be seen, the present measurements below 15 T are in good agreement with published values and double the measured field range. The slope of the phase boundary continues to increase with applied field and becomes nearly independent of temperature above 25 T. There is no indication that the phase is being destroyed with field up to 30 T. In addition to measurements in uniform fields, we have included several points that were taken in the presence of a strong magnetic field gradient dH/dz, where both H and z are along the axis of the sample. If Phase II includes antiferro ordering of magnetic quadrupole moments, then the application of a field gradient should exert a force on the moments causing them to align and destroy the phase. As can be seen the magnetic field gradient has no effect on the transition temperature (see Fig. 2).
In conclusion, it is seen that any theory that predicts the destruction of Phase II below 30 T does not include either all of the effects, or includes incorrect mechanisms. Two theories presented to date, both of which are predicated on indirect exchange, predict destruction of the phase at fields $`>`$ 30 T, and cannot be ruled out. Additional measurements would aid in distinguishing between competing theories of the magnetically ordered phase. Clearly, the phase diagram T<sub>Q</sub>(H) needs to be measured to high fields. Also, measurements on the alloy series Ce<sub>x</sub>La<sub>1-x</sub>B<sub>6</sub> will assist in understanding the splitting of the $`\mathrm{\Gamma }_8`$ level as the Ce concentration increases.
This work was supported in part by the National Science Foundation under Grant No. DMR-9971348 (Z. F.). A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by NSF Cooperative Agreement No. DMR-9527035 and by the State of Florida.
## INTRODUCTION
The denomination “sigma term” stands, in a generic way, for the contribution of the quark masses $`m_q`$ to the mass $`M_h`$ of a hadronic state $`|h(p)>`$. According to the Feynman-Hellmann theorem , one has the exact result (the notation does not explicitly take into account the spin degrees of freedom)
$$\frac{\partial M_h^2}{\partial m_q}=<h(p)|(\overline{q}q)(0)|h(p)>.$$
(1)
In practice, and in the case of the light quark flavours $`q=u,d,s`$, one tries to perform a chiral expansion of the matrix element of the scalar density appearing on the right-hand side of this formula. In the case of the pion, for instance, one may use soft-pion techniques to obtain the well-known result (here and in what follows, $`𝒪(M^n)`$ stands for corrections of order $`M^n`$ modulo powers of $`\mathrm{ln}M`$)
$$\frac{\partial M_\pi ^2}{\partial m_q}=-\frac{<\overline{q}q>_0}{F_0^2}+𝒪(m_u,m_d,m_s),q=u,d,\text{and}\frac{\partial M_\pi ^2}{\partial m_s}=0+𝒪(m_u,m_d,m_s),$$
(2)
where $`<\overline{q}q>_0`$ denotes the single flavour light-quark condensate in the $`SU(3)_L\times SU(3)_R`$ chiral limit, while $`F_0`$ stands for the corresponding value of the pion decay constant $`F_\pi =92.4`$ MeV.
In the case of the nucleon, the sigma term is defined in an analogous way, as the value at zero momentum transfer $`\sigma \equiv \sigma (t=0)`$ of the scalar form factor of the nucleon ($`t=(p^{\prime }-p)^2`$, $`\widehat{m}\equiv (m_u+m_d)/2`$),
$$\overline{\text{u}}_N(p^{\prime })\text{u}_N(p)\sigma (t)=\frac{1}{2M_N}<N(p^{\prime })|\widehat{m}(\overline{u}u+\overline{d}d)(0)|N(p)>,$$
(3)
and contains, in principle, information on the quark mass dependence of the nucleon mass $`M_N`$. Most theoretical evaluations of the nucleon sigma term consider the isospin symmetric limit $`m_u=m_d`$, but this is not required by the definition (3).
Another quantity of particular interest in this context is the relative amount of the nucleon mass contributed by the strange quarks of the sea,
$$y\equiv 2\frac{<N(p)|(\overline{s}s)(0)|N(p)>}{<N(p)|(\overline{u}u+\overline{d}d)(0)|N(p)>}.$$
(4)
Large-$`N_c`$ considerations (Zweig rule) would lead one to expect that $`y`$ is small, not exceeding $`30\%`$. The ratio $`y`$ can be related, via the sigma term and the strange to non-strange quark mass ratio, to the nucleon matrix element of the $`SU(3)_V`$ breaking part of the strong hamiltonian,
$$\sigma (1-y)\left(\frac{m_s}{\widehat{m}}-1\right)=\frac{1}{2M_N}<N(p^{\prime })|(m_s-\widehat{m})(\overline{u}u+\overline{d}d-2\overline{s}s)(0)|N(p)>.$$
(5)
For the standard scenario of a strong $`<\overline{q}q>_0`$ condensate, $`m_s/\widehat{m}\simeq 25`$, the evaluation of the product $`\sigma (1-y)`$ in the chiral expansion gives $`26`$ MeV at order $`𝒪(m_q)`$ , $`35\pm 5`$ MeV at order $`𝒪(m_q^{3/2})`$ , and $`36\pm 7`$ MeV at order $`𝒪(m_q^2)`$ .
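Writing $`\widehat{\sigma }\equiv \sigma (1-y)`$ for this product (the shorthand $`\widehat{\sigma }`$ is introduced here only for illustration), the relation used repeatedly below is simply

$$y=1-\frac{\widehat{\sigma }}{\sigma },$$

so that, for instance, $`\widehat{\sigma }\simeq 36`$ MeV combined with $`\sigma \simeq 45`$ MeV corresponds to $`y\simeq 0.2`$, whereas $`\sigma \simeq 200`$ MeV would require $`y\simeq 0.8`$.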
## THE NUCLEON SIGMA TERM AND $`\pi N`$ SCATTERING
Although the nucleon sigma term is a well-defined QCD observable, there is, unfortunately, no direct experimental access to it. A link with the $`\pi N`$ scattering amplitude (for the notation, we refer the reader to Refs. ) at the unphysical Cheng-Dashen point, $`\mathrm{\Sigma }\equiv F_\pi ^2\overline{D}^+(\nu =0,t=2M_\pi ^2)`$, is furnished by a very old low-energy theorem ,
$$\mathrm{\Sigma }=\sigma \left(1+𝒪(m_q^{1/2})\right).$$
(6)
A more refined version of this statement relates $`\mathrm{\Sigma }`$ and the form factor $`\sigma (t)`$ at $`t=2M_\pi ^2`$,
$$\mathrm{\Sigma }=\sigma (2M_\pi ^2)+\mathrm{\Delta }_R,$$
(7)
where $`\mathrm{\Delta }_R=𝒪(m_q^2)`$. The size of the correction $`\mathrm{\Delta }_R`$, as estimated within the framework of Heavy Baryon Chiral Perturbation Theory (HBChPT), is small , $`\mathrm{\Delta }_R<2`$ MeV (an earlier calculation to one-loop in the relativistic approach gave $`\mathrm{\Delta }_R=0.35`$ MeV).
In order to obtain information on $`\sigma `$ itself, one thus needs to pin down the difference $`\mathrm{\Delta }_\sigma \equiv \sigma (2M_\pi ^2)-\sigma (0)`$, and to perform an extrapolation of the $`\pi N`$ scattering data from the physical region $`t\le 0`$ to the Cheng-Dashen point, using the existing experimental information and dispersion relations. The analysis of Refs. , using a dispersive representation of the scalar form factor of the pion, gives the result $`\mathrm{\Delta }_\sigma =15.2\pm 0.4`$ MeV. On the other hand, from the subthreshold expansion
$$\overline{D}^+(\nu =0,t)=d_{00}^++td_{01}^++\mathrm{}$$
(8)
one obtains $`\mathrm{\Sigma }=\mathrm{\Sigma }_d+\mathrm{\Delta }_D`$, with $`\mathrm{\Sigma }_d=F_\pi ^2(d_{00}^++2M_\pi ^2d_{01}^+)`$, and $`\mathrm{\Delta }_D`$ is the remainder, which contains the contributions from the higher order terms in the expansion (8). In Ref. , the value $`\mathrm{\Delta }_D=11.9\pm 0.6`$ MeV was obtained, so that the determination of $`\sigma `$ boils down to the evaluation of the subthreshold parameters $`d_{00}^+`$ and $`d_{01}^+`$. Their values can in principle be obtained from experimental data on $`\pi N`$ scattering, using forward dispersion relations
$$d_{00}^+=\overline{D}^+(0,0)=\overline{D}^+(M_\pi ,0)+𝒥_D(0),d_{01}^+=\overline{E}^+(0,0)=\overline{E}^+(M_\pi ,0)+𝒥_E(0),$$
(9)
where $`𝒥_D(0)`$ and $`𝒥_E(0)`$ stand for the corresponding forward dispersive integrals, while the subtraction constants are expressed in terms of the $`\pi N`$ coupling constant $`g_{\pi N}`$ and of the S- and P-wave scattering lengths as follows:
$$\overline{D}^+(M_\pi ,0)=4\pi (1+x)a_{0+}^++\frac{g_{\pi N}^2x^3}{M_\pi (4-x^2)},\overline{E}^+(M_\pi ,0)=6\pi (1+x)a_{1+}^+-\frac{g_{\pi N}^2x^2}{M_\pi (2-x)^2}.$$
(10)
Here $`x=M_\pi /M_N`$. The dispersive integrals $`𝒥_D(0)`$ and $`𝒥_E(0)`$ are evaluated using $`\pi N`$ scattering data, which exist only above a certain energy, and their extrapolation to the low-energy region using dispersive methods. In the analysis of Ref. , the two scattering lengths $`a_{0+}^+`$ and $`a_{1+}^+`$ are kept as free parameters of the extrapolation procedure. In the Karlsruhe analysis, their values were obtained from the iterative extrapolation procedure itself . Using the partial waves of , the authors of Ref. obtain the following simple representation of $`d_{00}^+`$ and $`d_{01}^+`$ (with $`a_{l+}^+`$, $`l=0,1`$, in units of $`M_\pi ^{-1-2l}`$),
$`d_{00}^+`$ $`=`$ $`-1.492+14.6(a_{0+}^++0.010)-0.4(a_{1+}^+-0.133),`$
$`d_{01}^+`$ $`=`$ $`1.138+0.003(a_{0+}^++0.010)+20.8(a_{1+}^+-0.133).`$ (11)
This leads then to a value $`\sigma \simeq 45`$ MeV, corresponding to $`y\simeq 0.2`$ . Further details of this analysis can be found in Refs. .
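For orientation, the bookkeeping behind this number can be reproduced with the following minimal sketch (Python), which is not the dispersive analysis itself: the scattering lengths are simply set to the expansion points of Eq. (11) as illustrative stand-ins for the Karlsruhe values, and $`\mathrm{\Delta }_R`$ is neglected.

```python
# Minimal numerical sketch, assuming the expansion-point scattering lengths of
# Eq. (11) as illustrative inputs; not the full dispersive analysis of Refs.
M_pi, F_pi = 139.57, 92.4          # MeV, values quoted in the text
a0, a1 = -0.010, 0.133             # a_{0+}^+ [1/M_pi], a_{1+}^+ [1/M_pi^3] (assumed)

d00 = -1.492 + 14.6*(a0 + 0.010) - 0.4*(a1 - 0.133)       # Eq. (11), units 1/M_pi
d01 = 1.138 + 0.003*(a0 + 0.010) + 20.8*(a1 - 0.133)      # Eq. (11), units 1/M_pi^3

Sigma_d = F_pi**2 * (d00/M_pi + 2*M_pi**2 * d01/M_pi**3)  # ~48 MeV
Sigma   = Sigma_d + 11.9                                  # add Delta_D
sigma   = Sigma - 15.2                                    # subtract Delta_sigma -> ~45 MeV
```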
## THEORETICAL ASPECTS
In the framework of chiral perturbation theory, the sigma term has an expansion of the form
$$\sigma =\sum _{n\ge 1}\sigma _nM_\pi ^{n+1}.$$
(12)
The first two terms of this expansion were computed in the framework of the non-relativistic HBChPT in Ref. ,
$$\sigma _1=-4c_1,\sigma _2=-\frac{9g_A^2}{64\pi F_\pi ^2}.$$
(13)
The determination of the low-energy constant $`c_1`$, which appears also in the chiral expansion of the $`\pi N`$ scattering amplitude, is crucial for the evaluation of $`\sigma `$. Earlier attempts, which extracted the value of $`c_1`$ from fits to the $`\pi N`$ amplitude extrapolated to the threshold region using the phase-shifts of Refs. , obtained rather large values, $`\sigma \simeq 59`$ MeV ($`c_1=-0.94\pm 0.06`$ GeV<sup>-1</sup>), or even $`\sigma \simeq 70`$ MeV ($`c_1=-1.23\pm 0.16`$ GeV<sup>-1</sup>), as compared to the result of Ref. .
The threshold region in the case of elastic $`\pi N`$ scattering might however correspond to energies which are already too high for these determinations of $`c_1`$ to be stable against higher order chiral corrections. A new determination of $`c_1`$, obtained by matching the $`𝒪(q^3)`$ HBChPT expansion of the $`\pi N`$ amplitude inside the Mandelstam triangle with the dispersive extrapolation of the data, leads to a smaller value , $`c_1=-0.81\pm 0.15`$ GeV<sup>-1</sup>, corresponding to $`\sigma \simeq 40`$ MeV. It remains however to be checked that higher order corrections do not substantially modify this result. Let us mention in this respect that the higher order contribution $`\sigma _3`$ (which contains a non-analytic $`𝒪(M_\pi ^4\mathrm{ln}M_\pi /M_N)`$ piece) in the expansion (12) has been computed in the context of the manifestly Lorentz-invariant baryon chiral perturbation theory in Ref. (see also ). Once the expression of the $`\pi N`$ amplitude is also known with the same accuracy , a much better control over the chiral perturbation evaluation of $`\sigma `$ should be reached.
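As a simple numerical check of Eq. (13) (taking $`g_A\simeq 1.26`$, an assumed standard value, together with $`M_\pi `$ and $`F_\pi `$ as quoted in the next section), the first two terms give

$$\sigma \simeq -4c_1M_\pi ^2-\frac{9g_A^2}{64\pi F_\pi ^2}M_\pi ^3\simeq 63\text{ MeV}-23\text{ MeV}\simeq 40\text{ MeV}$$

for $`c_1=-0.81`$ GeV<sup>-1</sup>, consistent with the value quoted above.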
Finally, let us also mention that the results quoted above were based on the $`\pi N`$ phase-shifts obtained by the Karlsruhe group . Using instead the SP99 phase-shifts of the VPI/GW group, the authors of Ref. obtain a very different result, $`c_1\simeq -3`$ GeV<sup>-1</sup>, which leads to $`\sigma \simeq 200`$ MeV. Needless to say, the consequences of this last result ($`y\simeq 0.8`$) would be rather difficult to accept.
## EXPERIMENTAL DEVELOPMENTS
We next turn to the discussion of several new experimental results which have some bearing on the value of the nucleon sigma term. All numerical values quoted below use $`M_\pi =139.57`$ MeV and $`F_\pi =92.4`$ MeV.
Let us start with the influence of the scattering length $`a_{0+}^+`$ on the value of the subthreshold parameter $`d_{00}^+`$, using Eq. (11) and $`a_{1+}^+=0.133M_\pi ^{-3}`$. The first line of Table 1 gives the result obtained from the value of the phase-shift analysis of Ref. . In the second line of Table 1, we show the value reported at this conference and obtained from the data on pionic hydrogen, $`10^3M_\pi \times a_{0+}^+=1.6\pm 1.3`$. The analysis of Loiseau et al. consists in extracting the combinations of scattering lengths $`a_{\pi ^{-}p}\pm a_{\pi ^{-}n}`$ from the value of the pion-deuteron scattering length $`a_{\pi ^{-}d}`$ obtained from the measurement of the strong interaction width and lifetime of the 1S level of the pionic deuterium atom . Assuming charge exchange symmetry ($`a_{\pi ^+p}=a_{\pi ^{-}n}`$), they find $`10^3M_\pi \times a_{0+}^+=2\pm 1`$ (third line of Table 1). Another determination of $`a_{0+}^+`$ is also possible using the GMO sum rule (we use here the form presented in , with the value of the total cross section dispersive integral $`J^{-}=-1.083(25)`$, expressed in mb and $`a_{\pi ^{-}p}`$, $`a_{0+}^+`$ expressed in units of $`M_\pi ^{-1}`$)
$$g_{\pi N}^2/4\pi =-4.50J^{-}+103.3a_{\pi ^{-}p}-103.3a_{0+}^+.$$
(14)
Using the value $`a_{\pi ^{-}p}=0.0883\pm 0.0008`$ obtained by and the determination $`g_{\pi N}=13.51\pm 0.12`$ from the Uppsala charge exchange $`np`$ scattering data , one obtains $`a_{0+}^+=-0.005\pm 0.003`$. The resulting effect on $`\mathrm{\Sigma }_d`$ is shown on the fourth line of Table 1.
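These numbers are easily checked by rearranging the sum rule (14): with $`J^{-}=-1.083`$ mb, $`a_{\pi ^{-}p}=0.0883M_\pi ^{-1}`$ and $`g_{\pi N}^2/4\pi =(13.51)^2/4\pi \simeq 14.5`$,

$$a_{0+}^+=\frac{-4.50J^{-}+103.3a_{\pi ^{-}p}-g_{\pi N}^2/4\pi }{103.3}\simeq \frac{4.87+9.12-14.5}{103.3}\simeq -0.005M_\pi ^{-1},$$

in agreement with the value given above.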
Several new determinations of the $`\pi N`$ coupling constant $`g_{\pi N}`$ have also been reported at this meeting, with values which differ from the “canonical” value obtained long ago . Since most of these recent determinations do not result from a complete partial-wave analysis of $`\pi N`$ scattering data, we can only compare the effect of variations in the value of $`g_{\pi N}`$ on the subtraction terms (10). The results are shown in Tables 2 and 3, respectively. Again, we take the value of as reference point, and show the resulting changes for the value $`g_{\pi N}=13.73\pm 0.07`$ from the latest VPI/GW analysis . For comparison, we have also included the determination of , using the published data on the $`\pi ^{}d`$ atom combined with the GMO sum rule (14), as well as the value determined from the Uppsala charge exchange $`np`$ scattering data . The repercussion on $`\overline{D}^+(M_\pi ,0)`$ is negligible in all cases shown in Table 2, whereas in the case of $`\overline{E}^+(M_\pi ,0)`$, the largest effect comes from the rather low value of $`g_{\pi N}`$ obtained by the VPI/GW analysis.
Finally, we have summarized the various results in Table 4, where now the complete results for the determination of the dispersive integrals $`𝒥_D`$ and $`𝒥_E`$ have been included where possible, i.e. in the case of the KH and of the VPI/GW analyses (see also Table 1 in ). The corresponding values of $`\mathrm{\Sigma }_d`$ are given in the last column of Table 4. The analysis of the VPI/GW group increases the value of the sigma term by more than 25%, as compared to the value extracted from the KH phase-shift analysis. This would lead to a value of $`y\simeq 0.5`$, which is rather difficult to understand theoretically. It should also be noticed that this large difference is due for a large part to the value $`d_{01}^+=(1.27\pm 0.03)M_\pi ^{-3}`$ (including a shift in the value of the scattering length $`a_{1+}^+`$, which by itself accounts for half of the difference between KH and VPI/GW in the $`d_{01}^+`$ contribution in Table 4) as quoted by the VPI/GW group and obtained from fixed-$`t`$ dispersion relations. A similar analysis, but based on so-called interior dispersion relations (see for instance and references therein), yields a much smaller value, $`d_{01}^+=1.18M_\pi ^{-3}`$ , which lowers the VPI/GW value of $`\mathrm{\Sigma }_d`$ in Table 4 by 10 MeV. It remains therefore difficult to assess the size of the error bars that should be assigned to the numbers given above. Also, the VPI/GW phase-shifts have sometimes been criticized as far as the implementation of theoretical constraints (analyticity properties) is concerned (see for instance ). Furthermore, the issue of having a coherent $`\pi N`$ data base remains a crucial aspect of the problem. The VPI/GW partial wave analyses include data posterior to the analyses of the Karlsruhe group, but which are not always mutually consistent (see e.g. and references therein). Hopefully, new experiments (see ) will help in solving the existing discrepancies.
Finally, it should be stressed that the above discussion is by no means a substitute for a more elaborate analysis, along the lines of Ref. , for instance (see also and ). Such a task would have been far beyond the competences of the present author, at least within a reasonable amount of time and of work. Nevertheless, very useful discussions with G. Höhler, M. Pavan, M. Sainio and J. Stahov greatly improved the author’s understanding of this delicate subject. The author also thanks R. Badertscher and the organizing committee for this very pleasant and lively meeting in Zuoz.
## 1 Introduction
The continuum emission of various astrophysical objects in the millimetre domain has long been proposed as one important clue to many physical processes in the Universe: such emission includes dust, free–free, synchrotron emissions, but also fluctuations of the Cosmic Microwave Background (CMB), either primordial (Smoot et al. 1992) or due to intervening matter (Sunyaev & Zel’dovich 1972). In the past 15 years, the field of millimetre and far infrared measurements has tremendously grown. The advances in instrument technology have allowed many discoveries, with ground-based observations of our Galaxy and of extragalactic sources, with the many successful ground-based and balloon-borne CMB anisotropy experiments, and with the instruments onboard the COBE satellite. Following the experience acquired with the submillimetre balloon–borne PRONAOS–SPM experiment (Lamarre et al. 1994), we have devised a millimetre photometer called Diabolo, with two channels matching the relatively transparent atmospheric spectral windows around 1.2 and 2.1 $`\mathrm{mm}`$. This instrument is designed to be used for ground–based observations, taking advantage of the large area provided by millimetre antennas such as the 30 m telescope of IRAM, and of long integration times that can be obtained on a small dedicated telescope. Such observations are complementary to those that can be made with highly-performing but costly and resolution–limited space–borne instruments or short duration balloon–borne experiments.
There are two main disadvantages to ground–based measurements, which are:
* a larger background, which not only produces a larger photon noise but also limits the sensitivity of bolometers because of their power load, especially when one tries to obtain broad–band measurements with a throughput ($`A\mathrm{\Omega }`$) much larger than the diffraction limit
* additional sky noise, mainly due to the fluctuating water vapour content in the atmosphere, and which is usually the main limitation of ground–based instruments unless properly subtracted (see e.g. Matthews 1980, Church 1995, Melchiorri et al. 1996).
There are two usual methods for the subtraction of sky noise, either spatial or spectral ones. The spatial subtraction method uses several detectors in the focal plane of the instrument and takes advantage of the spatial correlation of the atmospheric noise. For this technique to work, the source size must be smaller than the array size. It is especially suited for big telescopes for which the beams from the different detectors have not diverged much when crossing the 2–3 kilometre high water vapour layer. Kreysa et al. (1990), Wilbanks et al. (1990), and Gear et al. (1995) have used this technique with bolometer arrays (MPIfR bolometer arrays, SuZie photometer and SCUBA arrays respectively).
The spectral subtraction method takes advantage of the correlation of the atmospheric signal at different wavelengths. If the source signal has a continuum spectrum different from the water vapour emission, one can form a linear combination of the source fluxes at different wavelengths which should be quite insensitive to sky noise. This technique has been used in various photometers. For extended sources like clusters of galaxies (see below), it has been used by Meyer et al. (1983), Chase et al. (1987) and Andreani et al. (1996, see also Pizzo et al. 1995). In particular the spectra of the 3K CMB distortions (either primordial or secondary) are quite different from the water vapour emission as can be seen in Fig 1. This technique implies that the smaller wavelength channels do not work at the diffraction limit, so that the beams at the different wavelengths are co-extensive. Hence, for broad continuum measurements, the detectors can only be large-throughput bolometers.
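A minimal sketch of such a spectral decorrelation (in Python; an assumed illustration, not the actual Diabolo reduction pipeline) simply removes from the short-wavelength time-line the component correlated with the long-wavelength one:

```python
import numpy as np

# Minimal sketch (assumed) of the spectral sky-noise subtraction: the two
# channels see nearly the same water-vapour emission, so the combination
# s1 - alpha*s2, with alpha fitted on source-free data, suppresses sky noise
# while a source with a different continuum spectrum (e.g. a Sunyaev-Zel'dovich
# decrement) survives with a known effective efficiency.
def decorrelate(s1, s2):
    alpha = np.dot(s1, s2) / np.dot(s2, s2)   # least-squares correlation slope
    return s1 - alpha * s2, alpha
```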
Once and if the sky noise can be subtracted, the need for sensitive large-throughput bolometers implies the lowest possible working temperature (see Sect. 3 & Subsect. 4.3). Diabolo has been built following this line of thought. It is a simple dual-channel photometer, with two bolometers cooled to 0.1 Kelvin for atmospheric noise subtraction using the spectral subtraction method adapted to small telescopes. Its design and performance are described in the rest of this paper, which is organised as follows. Section 2 describes the optical layout of the photometers and the filters we use for the proper selection of wavelengths. Section 3 describes the dilution cryostat that is used to cool the bolometers. Section 4 deals with the design and testing of the 2 bolometers. Section 5 gives details on the new bolometer AC readout electronic circuit which is used for the measurements. Section 6 gives the characterisation of the instrument that was possible with the first observations at the new 2.6 metre telescope at Testa Grigia (Italy). Finally, we discuss in Section 7 the recent improvements that have been made over the original design.
## 2 Optical and Filtering systems
### 2.1 Optical system
In order that future versions of the photometer can accommodate small arrays of bolometers on each channel, imaging cold optics have been designed for Diabolo. Using lenses rather than mirrors, the system is compact enough that two (and possibly three in a next version) large throughput channels fit into a small portable dewar. The sky is imaged through a cold pupil lens onto a cold focal plane lens. For each channel, the light is then fed by another lens onto the bolometer and its associated Winston cone. The lenses are made of quartz (of index of refraction 2.14) with anti–reflection coatings adapted to each wavelength. As in the PRONAOS–SPM photometer (Lamarre et al. 1994), the optical plate is sustained below the cryostat by three pillars and contains the optical and filtering systems (Fig. 2). It is shielded by a 1.8 K screen covered with eccosorb. The cryogenic plate (Fig. 3), which is in direct contact with the lHe cryostat (pumped to 1.8 K), receives the dilution fridge (Section 3) which provides cooling of the two bolometers (Section 4). Ray–tracing was done including considerations on diffraction in order to optimise the parameters of the lenses and cones (with limited use of ASAP software). Care was taken to underilluminate the secondary and primary mirrors to reduce sidelobe levels (the photometer effectively uses 2 metres out of the 2.6 m of the primary mirror of the Testa Grigia telescope).
In inverse propagation mode, the beam exiting the photometer has a 5.6 f ratio and the useful diameter of the exit (plane parallel high–density polyethylene) window is 27.5 mm. This matches the bolometer throughput of $`15\mathrm{mm}^2\mathrm{sr}`$,
well above the diffraction limit for both channels which is 2.3 and $`6.4\mathrm{mm}^2\mathrm{sr}`$ in channel 1 and 2.
### 2.2 Filtering system
We have devised a filtering system in order to select the appropriate wavelengths while avoiding submillimetre radiation that would load the bolometers. This system does not rely on the atmosphere to cut unwanted radiation. Figures 4 & 5 summarise the different filters, which are all at 1.8 K temperature except for the first infrared cutoff filter (77 K). Measurements were done on each element separately at room temperature only and at normal incidence. In the submillimetre up to 1.8 mm, this was accomplished with a Fourier Transform Spectrometer, with a 0.3 K bolometer as the detecting device at the Institut d’Astrophysique Spatiale (IAS) facility. Several measurements around 2 mm were done with a heterodyne receiver and a carcinotron emitter at the Meudon Observatory facility (DEMIRM). Once all the measured transmissions are multiplied together we find an overall expected photometer transmission which is a factor 2.5 larger than the transmission deduced from point-source measurements. A large fraction of the discrepancy can be attributed to the optical elements that were not included in the calculation: the cryostat entrance window and the lenses, as well as to some diffractive optical losses.
## 3 The dilution cryostat
In order to have a system noise as close as possible to the photon noise, we decided to cool the detectors to 0.1 K (see Subsect. 4.3). The development of a 0.1 K cooling system fully compatible with balloon-borne and satellite environments has been pursued at the Centre de Recherches sur les Très Basses Températures (CRTBT) in Grenoble (Benoit et al. 1994a, Benoit & Pujol 1994). The compactness and ease of use of this system render it very attractive even for ground-based photometers. Conversely, the Diabolo photometer provides a good testbed for this refrigerator before it is used on space missions. Figure 6 shows the layout of the dilution cryostat. This new refrigerator (Benoit et al. 1994b, Sirbi et al. 1996) is the first prototype of a concept that has become the baseline for the ESA Planck mission (formerly COBRAS/SAMBA). Its principle is based on the cooling power provided at low temperature by the dilution of <sup>3</sup>He into <sup>4</sup>He. The system does not use gravity. Instead, the fluids are forced into room temperature capillaries which, after going through a liquid nitrogen trap, are thermalised by the various shields in the cryostat down to the plate at (pumped lHe) 1.8 K. The two Helium isotopes come from high pressure storage vessels (see Fig 6) through flow controllers. Typical flow rates are 3 $`\mu `$moles of <sup>3</sup>He per second and 16 $`\mu `$moles of <sup>4</sup>He per second. The cooling at the low temperature plate is produced by mixing the two isotopes. The available power is small (only few hundred nanoWatts). Therefore the cold plate is mechanically supported by Kevlar cords and shielded electrical wires (for the bolometers) are thermalised on the heat exchanger (capillaries of 200 and 40 $`\mu \mathrm{m}`$diameter). The output mixture flows back through the heat exchanger in a third capillary which is thermally tied to the two input capillaries. The output gas is stored in a low pressure container for later recycling through purification (it will be thrown away in space in case of a satellite version). The dilution fridge was continuously running during the campaign (i.e. for three and a half weeks), keeping the bolometers at the useful temperature of about 0.1 K, except during the main cryostat helium refilling, which required heating-up the cold plate temperature to 4 K. The absolute temperature of the 0.1 K stage is measured with a Matsushita carbon resistance ($`1000\mathrm{\Omega }`$ at 0.1 K) in a AC low power bridge.
## 4 Design and calibration of the bolometers
### 4.1 Design
The bolometers have been developed at IAS, and benefitted from studies in the 40 mK - 150 mK range done for thermal detection of single events due to X–ray or $`\beta `$ sources (Zhou et al. 1993) or to recoil of dark matter particles (de Bellefon et al. 1996). The design (see Fig 7) is that of a classical composite bolometer with a monolithic sensor as devised by Leblanc et al. (1978). The absorber is made of a diamond window (3.5 mm diameter and 40 microns thickness) with a bismuth resistive coating ($`R=100\mathrm{\Omega }`$) to match optical vacuum impedance. The sensors were cut in a selected crystal of NTD Ge to obtain an impedance around $`10\mathrm{M}\mathrm{\Omega }`$ at 150 mK (the effective temperature of the bolometers during these observations, because the thermal and atmospheric backgrounds load the bolometers above the 0.1 K cryostat temperature). The whole sensitive system is integrated in an integration sphere coupled to the light cone. Moreover, by using an inclined absorber with a larger diameter than the 2.5 mm diameter of the output of the light cone we finally increase the optical absorption efficiency $`\eta `$ from 40 to 80%, before residual rays go out of the integration sphere (see Eq. 5). Indeed, the optical efficiency can be estimated with the well–known Gouffe’s formula (Gouffe 1945), by considering that the effective cavity surface $`S`$ is twice the surface of the resistive bismuth coating (which has an emissivity larger than 0.4: Carli et al. 1981). The entrance surface $`s`$ is the 2.5 mm diameter output of the cone. With $`S/s=2(3.5/2.5)^2=3.9`$, the final cavity emissivity is larger than 0.8. An additional internal calibration device (Fig 7 and 3: a near infrared light fed by a diode on the back of the bolometer via an optical fibre) was successfully tested but not subsequently used, once the optics was available.
### 4.2 Calibration
The theory of the responsivity and noise of a bolometer has been given by Mather (1984) and Coron (1976). At equilibrium, the Joule power dissipated in the bolometer, $`P_J`$, and the absorbed radiation power, $`P_R`$, are balanced by the cooling power $`P_c`$ due to the small thermal link to the base temperature:
$$P_c(T_1,T_0)=P_J+P_R,$$
(1)
with
$$P_c=\frac{A}{L}\int _{T_0}^{T_1}\kappa (T)\,dT=g\left(\left(\frac{T_1}{T_g}\right)^\alpha -\left(\frac{T_0}{T_g}\right)^\alpha \right),$$
(2)
where $`A`$, $`L`$, and $`\kappa `$ are respectively the cross section, length and thermal conductivity of the material which makes the thermal link. We have approximated $`\kappa (T)`$ with a power law,
$`\kappa (T)\propto T^{\alpha -1}`$. From the I-V curves, we find that for the reference temperature of $`T_g=0.1\mathrm{K}`$ the values of $`g`$ and $`\alpha `$ are typically $`g=140`$ picowatts and $`\alpha =4.5`$ for both bolometers. The impedance can be approximated by:
$$R(T)=R_{\infty }\mathrm{exp}\left((T_r/T)^\beta \right),$$
(3)
where $`T_r`$, $`R_{\infty }`$ and $`\beta `$ are respectively 200 K, 0.80 $`\mathrm{\Omega }`$, and 0.38 for channel 1 and 20 K, 52 $`\mathrm{\Omega }`$, and 0.51 for channel 2. The electrical responsivity at zero frequency was deduced from the I-V curves using
$$S_{\mathrm{el}}(0)=\frac{Z-R}{2RI},$$
(4)
where $`Z=\mathrm{d}V/\mathrm{d}I`$ is the dynamic impedance calculated at the bias point on the I-V curve. We find electrical responsivities of the order of $`3\times 10^7`$ and $`20\times 10^7\mathrm{V}/\mathrm{W}`$ respectively under the sky background conditions (a load of one to a few hundred picowatts). With a noise equivalent voltage of typically $`30\mathrm{nV}\mathrm{Hz}^{-1/2}`$ above 2 Hz, the electrical NEP is approximately $`10\times 10^{-16}`$ and $`2\times 10^{-16}\mathrm{WHz}^{-1/2}`$ for channel 1 and 2 respectively.
$$S_{\mathrm{opt}}=\eta S_{\mathrm{el}},$$
(5)
where $`\eta `$ is the optical efficiency. Thus, assuming $`\eta \simeq 0.8`$, the optical NEP (although not measured) at zero frequency could be reliably estimated to be better than $`15\times 10^{-16}`$ and $`3\times 10^{-16}\mathrm{WHz}^{-1/2}`$ for channel 1 and 2 respectively.
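As a quick numerical cross-check of Eqs. (2)-(5), the Python sketch below evaluates the thermal-link power at the 150 mK operating point, the impedances of Eq. (3), and the NEPs implied by the quoted responsivities and the 30 nV voltage noise. It uses only the round numbers quoted in the text and is an order-of-magnitude illustration, not part of the original calibration pipeline.

```python
# Order-of-magnitude cross-check of Eqs. (2)-(5), using only the round
# numbers quoted in the text (not the original calibration pipeline).
import math

g, alpha, Tg = 140e-12, 4.5, 0.1            # W, -, K : thermal link, Eq. (2)
T1, T0 = 0.15, 0.10                          # bolometer and base temperatures (K)
P_c = g * ((T1 / Tg) ** alpha - (T0 / Tg) ** alpha)
print("total loading P_J + P_R = %.1e W" % P_c)          # ~ 7e-10 W

# Impedance, Eq. (3): (R_inf [ohm], T_r [K], beta) for the two channels
for name, (R_inf, Tr, beta) in {"channel 1": (0.80, 200.0, 0.38),
                                "channel 2": (52.0, 20.0, 0.51)}.items():
    R = R_inf * math.exp((Tr / T1) ** beta)
    print(name, ": R(150 mK) = %.1f Mohm" % (R / 1e6))   # of order 10 Mohm

# NEPs from the quoted responsivities and the 30 nV/sqrt(Hz) voltage noise
e_n, eta = 30e-9, 0.8
for name, S_el in {"channel 1": 3e7, "channel 2": 20e7}.items():
    nep_el = e_n / S_el                       # electrical NEP
    nep_opt = nep_el / eta                    # optical NEP via Eq. (5)
    print(name, ": NEP_el = %.1e, NEP_opt = %.1e W/sqrt(Hz)" % (nep_el, nep_opt))
```

The printed NEPs reproduce the values quoted above to better than a factor of two, which is the level of precision intended here.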
The bolometer also responds to the base plate temperature fluctuations with a responsivity that can be deduced from the previous formalism:
$$\frac{dV}{dT_0}=S_{\mathrm{el}}\frac{dP_c}{dT_0}=S_{\mathrm{el}}\frac{A}{L}\kappa (T_0)=\frac{S_{\mathrm{el}}g\alpha }{T_0}\left(\frac{T_0}{T_g}\right)^\alpha $$
(6)
The two bolometers that we use have typical sensitivities to the base plate temperature of 0.2 and 1.2 $`\mu \mathrm{V}/\mu \mathrm{K}`$ respectively. We see from Equation 6 that the larger the conduction to the base plate and the larger the sensitivity of the bolometer to an external signal, the more sensitive the bolometer will be to fluctuations of the base plate temperature. As the base plate temperature $`T_0`$ fluctuates by typically $`10\mu \mathrm{K}`$ over time scales of a few seconds, a regulation of this temperature should be implemented in the near future to minimise these fluctuations.
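For illustration, Eq. (6) can be evaluated directly with the thermal-link parameters and responsivities quoted above; this is a minimal sketch dealing with magnitudes only, not the measured calibration.

```python
# Base-plate responsivity of Eq. (6) at T0 = Tg = 0.1 K, using the quoted
# g, alpha and electrical responsivities (magnitudes only).
g, alpha, Tg, T0 = 140e-12, 4.5, 0.1, 0.1
for name, S_el in {"channel 1": 3e7, "channel 2": 20e7}.items():
    dV_dT0 = S_el * g * alpha / T0 * (T0 / Tg) ** alpha   # V/K, numerically = uV/uK
    drift = dV_dT0 * 10e-6                                 # response to a 10 uK drift
    print(name, ": dV/dT0 = %.2f uV/uK, 10 uK drift -> %.1f uV"
          % (dV_dT0, drift * 1e6))
```

The resulting 0.19 and 1.26 uV/uK match the quoted sensitivities, and the microvolt-level response to a 10 uK drift, far above the voltage noise, is what motivates the temperature regulation mentioned above.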
The time constant is less than 10 milliseconds for both bolometers, as measured from the response to individual particles from a small radioactive source absorbed by the bolometers.
### 4.3 The need for 0.1 K temperature in ground-based experiments
The background is relatively large in the case of ground-based experiments. There is a general prejudice that very low temperatures are thus not needed. Actually, the temperature required for optimised bolometers depends only on the wavelength, because the photon and bolometer noises both increase as the square root of the incoming background. The general formula is (Mather 1984, Griffin 1995, Benoit 1996):
$$T_{\mathrm{max}}=\frac{hc}{k}\frac{p}{\lambda },$$
(7)
where $`hc/k=14.4\mathrm{K}.\mathrm{mm}`$ and $`p`$ is a dimensionless constant. It turns out that for classical bolometers with a resistive thermometer, one has typically $`p\simeq 0.025`$, so that the maximum temperature for millimetre continuum astronomy is 0.4 K. Allowing for non-ideal effects and bolometers whose noise is 0.7 times the background noise, a temperature of 0.1 K is required in the 2 mm cosmological atmospheric window. The ultimate noise equivalent power for a given background $`P`$ and temperature $`T`$ is then
$$\mathrm{NEP}/(\mathrm{WHz}^{-1/2})\simeq 10^{-17}(PT/(10^{-13}\mathrm{WK})).$$
(8)
The present bolometers are within a factor of 3 of this limit, which is also the photon noise limit, thus leaving some margin for improvement.
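The two limits above are easy to evaluate numerically; in the sketch below the 100 pW load is an assumed, typical ground-based background rather than a measured value.

```python
# Eq. (7): maximum useful bolometer temperature at the Diabolo wavelengths,
# and Eq. (8): ultimate NEP for an assumed background load of 100 pW.
hc_over_k, p = 14.4, 0.025          # K mm, dimensionless
for lam in (1.2, 2.1):               # wavelengths in mm
    print("T_max(%.1f mm) = %.2f K" % (lam, hc_over_k * p / lam))
P, T = 100e-12, 0.1                  # assumed background (W) and bath temperature (K)
print("ultimate NEP ~ %.1e W/sqrt(Hz)" % (1e-17 * P * T / 1e-13))
```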
## 5 Readout electronics
The special development made for the readout electronics is described by Gaertner et al. (1997) in detail. Here we give the basic characteristics of the electronics that were specially devised for this instrument. The bolometer is biased with a square AC modulation at typically 61 Hz. The current is injected through a capacitance (in place of the classical load resistance) and an opposition voltage is applied to ensure a near–equilibrium of the bridge (Fig 8).
Hence a small AC modulated out-of-equilibrium signal can be analysed, which is less than $`10^{-3}`$ of the input voltage. The major advantages of this system are
* a constant power dissipation in the bolometer, which keeps its dynamical impedance constant (the square-wave signal does not perturb the thermal behaviour of the bolometer because it works at a constant input power),
* no additional Johnson noise due to the load (which is capacitive rather than resistive),
* a reduced low-frequency noise from the electronics, due to the modulation with a square function at frequencies above a few tens of Hertz.
A cold FET amplifier (JFET NJ132 at 100 K) is used to have an amplifier noise smaller than the bolometer noise. Shielded wire is used all the way in order to avoid electronic interferences and the cable is soldered throughout to avoid microphonics.
Version 1 of this electronics uses an analog lock-in amplifier with a slow feedback on the bias of the bolometer to force the signal to be zero (with a time constant of a few seconds). In this way, we measure
* at intermediate frequency (1-10 Hz), the voltage variation of the bolometer at constant current.
* at low frequency (DC below 1 Hz), the absolute power received by the bolometer. As the impedance of the bolometer is fixed by the bridge balance, the bolometer works at constant temperature and the bias power gives us directly the DC radiation input power.
## 6 First observations
We describe now the observations done in March 1995 during a three and a half week campaign at the Millimetre and Infrared Testa Grigia Observatory (MITO), which gives an 8 arcminute beam with the Diabolo photometer. Here we present some results that were acquired with a sawtooth modulation of the secondary at 1.9 Hz (which provided a constant elevation scan across a source, of typically 26.4 arcmin width), combined with a slow drift of the elevation offset relative to the source (by a total of 40 arcmin, with steps of 4 arcmin, i.e. half a beam width, every 10 seconds). The acquisition frequency of 61 Hz is twice the AC modulation readout frequency. It is synchronous with the wobbling secondary frequency of 1.9 Hz, giving 32 measurement points per period.
To our knowledge, these data are the first ever to be acquired on the sky in a total power mode using unpaired bolometers. This anticipates and proves the feasibility of the total power readout mode that is planned for the next submillimetre ESA missions (Planck and FIRST).
### 6.1 The MITO telescope
The MITO telescope has been specifically designed for submillimetre continuum observations at the arcminute scale up to the degree scale, and as such is a unique facility in Europe. The telescope (De Petris et al. 1996), which was designed in parallel with that of OLIMPO (Osservatorio nel Lontano Infrarosso Montato su Pallone Orientabile), formerly TIR (Telescopio InfraRosso), is a classical Cassegrain-type 2.6 m dish with a wobbling secondary mirror designed with very low levels of vibration (Mainella et al. 1996). The MITO facility is situated on a dry cold site at an altitude of 3500 m close to Cervinia-Breuil in Italy, very near the Swiss border and the Gornergrat infrared and millimetre observatory TIRGO (Telescopio InfraRosso del GOrnergrat). During our observations, we routinely had outside temperatures of $`-20`$ Celsius (most of the nights) and good weather for about one third of the time, making this site excellent for (sub)millimetre high angular resolution astronomy (the opacity is less than a tenth at zenith in the whole millimetre range).
### 6.2 Data reduction
The data were acquired with two independent acquisition systems, the first one based on the system developed for PRONAOS and the second one custom made to allow for the new readout technique. The following data analysis is based on the latter system.
After deglitching, a raw map is made with the values of the signal for given azimuth and elevation offsets. The azimuth offset is deduced from the position in a given period of the secondary while the elevation offset is (or should be) a sawtooth function of time. The registration of the instrument data with the telescope pointing information is done with an absolute time line which happened to be inaccurate after ten minutes of observations. Therefore, we can only show here the data which are post-synchronised with the help of the occurrence of a strong source detected in the raw data. The data present a strong systematic effect which is quite reproducible and is a function of the azimuth offset angle. This is easily removed from the maps by computing the mean effect (over elevation offset angles) after the source has been masked. This effect is most likely due to the instrument “seeing” the asymmetrical back of the secondary during its sawtooth motion. This can and will be reduced by adding a secondary mirror baffle as described in Gervasi et al. (1998).
Another phenomenon is the slow drift of the detectors with time, which is removed with a running constant elevation average after the source is masked. Each map is then rotated by the parallactic angle and coadded to the others to make a final map in astronomical coordinates.
The maps of the planet Mars, as obtained in the two Diabolo channels, are shown in Figs 9 and 10. They correspond to the average of 9 individual maps of $`28\times 40`$ arcminutes, and a total integration time of 1050 seconds. The beams are quite similar at both wavelengths and coaligned within a precision of a tenth of a beam. The beam FWHM is 7.5 arcminutes. The integrated beam efficiency is the same as that of an 8 arcminute FWHM Gaussian beam. The signal expected from the planet after dilution in the beam is equivalent to a 110 mK blackbody.
The Orion BN-KL nebula is detected in the raw data and the final maps are given in Figs 11 and 12. It is calibrated with the Mars signal, but no correction for differential extinction was applied. As Mars was at a larger elevation at the time of the observations, the fluxes of Orion, which are found to be $`860\pm 48\mathrm{Jy}`$ and $`330\pm 40\mathrm{Jy}`$ at 1.2 and 2.1 mm, should really be considered as lower limits (especially at 1.2 mm). The Orion spectrum, which is dominated by dust emission in the infrared and submillimetre domains, clearly behaves differently at the 2.1 millimetre wavelength, because the flux scales as the frequency to the power 2 between 1.2 and 2.1 mm rather than 3 to 4 as expected for dust submillimetre emission. Free-free emission from the compact central HII region is most likely at the origin of the 2.1 mm excess.
The atmospheric noise is evident in all the data that were taken. Sensitivities were deduced from blank sky maps as 5, 8 and 7 $`\mathrm{mK}_{\mathrm{RJ}}\mathrm{s}^{1/2}`$ at 1.2 mm, at 2.1 mm, and at 2.1 mm after atmospheric noise decorrelation, respectively.
## 7 Recent improvements
A number of improvements to the Diabolo instrument have been made since the original design. The changes made to the Diabolo setup are listed below:
* All quartz lenses have been replaced by polyethylene lenses, because the anti–reflection coatings had a tendency to fall off due to the stresses induced by temperature cycles.
* A new 0.1 K cryostat has been designed with a Joule–Thomson cycle on the mixed helium output, which produces the 1.8 K stage; hence the main lHe vessel is now at 4 K. The major advantage is that refilling the cryostat with lHe is now faster, because the 0.1 K and 1.8 K stages stay at the same temperature and no lHe pumping is needed any more. The cryogenic duty cycle of the instrument is now a half-hour refill every three days.
* One bandpass filter has been removed in channel 2, in order to increase the sensitivity by broadening the band. We have checked that the small leaks that appear at high frequencies have no effect on the detection of the SZ effect.
* New electronics, now fully digitally controlled with a computer interface, have been designed and used for subsequent observations. The new system is described in detail by Gaertner et al. (1997).
* The regulation of the temperature of the thermal bath has been improved. Thermometers attached to the 0.1 K stage provide temperature information, whereas resistors allow the 0.1 K stage to be heated by a feedback system that stabilises the temperature in a closed loop.
* Shock absorbers have been attached to the mount in order to minimize the microphonics induced by telescope motion (as seen by a general increase of the noise at all frequencies).
* The single–bolometer detectors have been replaced with arrays of three bolometers in each channel.
Subsequent, upgraded versions have been used at the IRAM 30 m antenna in Spain, and at the POM2 2.5 m telescope (without wobbling secondary mirror) in the French Alps, in the winters 1995 through 1999, yielding, in particular, significant detections of the Sunyaev Zel’dovich effect towards several clusters of galaxies (Désert et al. 1998, Pointecouteau et al. 1999). The instrument has been open to the IRAM community since 1998.
###### Acknowledgements.
We thank Louis d’Hendecourt and M. Gheudin for their help in measuring the transmission of the Diabolo filters at the IAS and DEMIRM. We thank the Programme National de Cosmologie (ex GdR), the INSU and the participating laboratories for their continued support of this experiment. We also thank Pierre Encrenaz and Claudine Laurent for their early support of the project. Some of us (M. de Petris, P. de Bernardis, S. Masi, G. Mainella) have been supported by the Italian ASI and MURST. We thank the Istituto di CosmoGeofisica (CNR) in Turin for logistic support. Finally, we wish to thank the referee, C.R. Cunningham, for having suggested several significant improvements to the manuscript.
no-problem/9912/cond-mat9912143.html | ar5iv | text | # Effective Sublattice Magnetization and Néel Temperature in Quantum Antiferromagnets
## Abstract
We present an analytic expression for the finite temperature effective sublattice magnetization which would be detected by inelastic neutron scattering experiments performed on a two-dimensional square-lattice quantum Heisenberg antiferromagnet with short range Néel order. Our expression, which has no adjustable parameters, is able to reproduce both the qualitative behaviour of the phase diagram $`M(T)\times T`$ and the experimental values of the Néel temperature $`T_N`$ for both doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> and stoichiometric La<sub>2</sub>CuO<sub>4</sub> compounds. Finally, we remark that incorporating frustration and $`3D`$ effects as perturbations is sufficient to explain the deviation of the experimental data from our theoretical curves.
Two dimensional quantum antiferromagnetism has been a matter of great interest and subject to intense investigation, due to its possible relation to the normal state properties of high-temperature superconductors. There is by now clear experimental evidence that the pure high-$`T_c`$ superconducting cuprate compounds are well described by a quasi two-dimensional $`S=1/2`$ Heisenberg antiferromagnet on a quasi-square lattice, whose sites are occupied by $`Cu^{++}`$ magnetic ions. The dynamical structure factor of the $`2D`$ Heisenberg antiferromagnet, calculated via the mapping of the Heisenberg Hamiltonian onto the $`O(3)`$ nonlinear sigma model , was successfully confirmed by inelastic neutron scattering experiments on La<sub>2</sub>CuO<sub>4</sub> . Several other microscopic techniques like light scattering , muon spin relaxation and thermal neutron scattering , have also been used to probe the magnetic correlations in these materials and confirmed the quasi $`2D`$ Heisenberg antiferromagnet hypothesis.
A common feature among almost all superconducting cuprate compounds is the existence of a Néel ordered moment in the low temperature, underdoped regime. As the temperature is increased, or the sample doped, antiferromagnetic order is destroyed, leading to new forms of spin order . According to spin-wave theory, for a $`\mathrm{d}`$-dimensional hypercubic lattice, Néel order is possible at $`T=0`$ for $`\mathrm{d}\ge 2`$. However, despite the widespread success of spin-wave theory, there remain a number of issues that defy the description of the superconducting cuprate compounds within this approach. For example, the development of a sublattice magnetization is known to be suppressed in the two-dimensional Heisenberg antiferromagnet for any nonzero temperature. True long range order, as a genuine three-dimensional phenomenon, would only be achieved by considering the interlayer coupling $`J_{\perp }\simeq 10^{-5}J_{\parallel }`$ not only as a perturbation.
It is the purpose of this work to show that the experimental data for the sublattice magnetization of La<sub>2</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> can in fact be described still in the context of a two-dimensional square-lattice quantum Heisenberg antiferromagnet at finite temperatures, as far as inelastic neutron scattering experiments are concerned. Our starting point is the observation that the nature of the spin correlations in the renormalized classical regime is consistent with one of the three possibilities of fig. 1, according to the observation wave vector $`|k|`$, or frequency $`\omega `$ . In this sense, any possible neutron scattering experiment, with high enough energy transfers, performed on a true two-dimensional system, would actually measure a nonvanishing effective sublattice magnetization, since one would be probing the dynamics of spin correlations in the intermediate Goldstone region. Inelastic neutron scattering experiments probe a microscopic, short wavelength physics to which we can associate an effective Néel moment. As will become clear, the behaviour of this effective moment can be described by an effective field theory for the low frequency, long wavelength fluctuations of the spin fields about a state with short range Néel order. We will then be able to speak of a finite temperature phase transition in the $`2D`$ system, associated with the collapse of the Goldstone region in fig. 1.
The two-dimensional square-lattice quantum Heisenberg antiferromagnet has a well known continuum limit given in terms of the $`2+1`$ dimensional $`O(3)`$ quantum nonlinear sigma model . The latter, on the other hand, is defined by the partition function
$$𝒵(\beta )=\int 𝒟n_l\,\delta (n_l^2-1)\,\mathrm{exp}\left(-𝒮(n_l)\right),$$
(1)
where the action
$$𝒮(n_l)=\frac{\rho _0}{2\hbar }\int _0^{\beta \hbar }\mathrm{d}\tau \int \mathrm{d}^2𝐱\left[(\nabla n_l)^2+\frac{1}{c_0^2}(\partial _\tau n_l)^2\right]$$
(2)
describes the long-wavelength fluctuations of the staggered components of the spin-field $`n_l=(\sigma ,\stackrel{}{\pi })`$, $`l=1,\mathrm{},N=3`$. The fixed length constraint is understood. In the above expression, $`\rho _0`$ is the spin stiffness, $`c_0`$ is the spin-wave velocity, $`\beta =(k_BT)^1`$ and all quantities with a $`0`$ subscript represent bare quantities.
We shall work in the natural units $`k_B=\hbar =c=1`$, with $`c`$ being the renormalized spin wave velocity. <sup>*</sup><sup>*</sup>*For large $`N`$ the spin wave velocity does not renormalize and $`c_0=c`$. Also, further analysis will be simply expressed in terms of the coupling constant $`g_0=N/\rho _0`$, which has the units of inverse length. With this notation and choosing the staggered magnetization to be along the $`\sigma `$ field direction, we can integrate over the remaining $`N-1`$ spin-wave degrees of freedom $`\stackrel{}{\pi }`$ and study the behavior of the partition function (1) in the large $`N`$ limit. As usual, $`N`$ is taken to be large enough while $`g_0`$ is kept fixed. This means that we have to choose $`\rho _0\propto N`$.
For large $`N`$, the partition function (1) is dominated by the stationary configurations of the magnetization, $`\sigma `$, and of the Lagrange multiplier field, $`\mathrm{i}\lambda =m^2`$, introduced in order to ensure the averaged fixed length constraint. These, on the other hand, can be determined from the stationarity conditions
$`m^2\sigma `$ $`=`$ $`0,`$ (3)
$`\sigma ^2`$ $`=`$ $`{\displaystyle \frac{1}{g_0}}-{\displaystyle \frac{1}{\beta }}{\displaystyle \underset{n=-\infty }{\overset{+\infty }{\sum }}}{\displaystyle \int _0^\mathrm{\Lambda }}{\displaystyle \frac{\mathrm{d}^2𝐤}{(2\pi )^2}}{\displaystyle \frac{1}{𝐤^2+\omega _n^2+m^2}},`$ (4)
where a cutoff $`\mathrm{\Lambda }`$ was introduced to make the momentum integral ultraviolet finite.
From the above set of equations we see that the antiferromagnetic system could in principle be found in two distinct phases. If $`\sigma \ne 0`$ then $`m=0`$ and the system would be in the Goldstone phase with a nonvanishing net sublattice magnetization. In this case the ground state would exhibit true long range Néel order. If $`m\ne 0`$ on the other hand, then $`\sigma =0`$ and Néel order is absent. There are no gapless excitations in the spectrum of the finite temperature system. It is a well known fact that for the $`2+1`$ dimensional $`O(N)`$ invariant nonlinear $`\sigma `$ model at $`T>0`$, the only possible physical situation is the second one, due to severe infrared divergencies in the second saddle-point equation (4). As a consequence, the value of $`m`$ is pushed from zero to a finite value making the sublattice magnetization $`\sigma `$ vanish, in agreement with the Coleman-Mermin-Wagner theorem.
We can compute the value of the $`O(N)`$ invariant mass $`m`$ in a closed form by subtracting the linear divergence in the second saddle-point equation (4) as
$$\frac{1}{g_0}=\frac{1}{g_c}+\frac{\rho _s}{4\pi N},$$
(5)
where $`g_c=4\pi /\mathrm{\Lambda }`$ is the bulk critical coupling and
$$\rho _s=\frac{\sqrt{S(S+1)}}{2\sqrt{2}}\frac{\hbar c}{a},$$
(6)
with $`S=1/2`$ and $`a`$ being the lattice spacing . Now, after momentum integration and frequency sum we arrive at
$$\xi ^{-1}=m(\beta )=\frac{2}{\beta }\mathrm{arcsinh}\left(\frac{e^{-\beta \rho _s/(2N)}}{2}\right),$$
(7)
which is nonvanishing for $`T>0`$, thus indicating that a Néel phase can only occur at $`T=0`$. The conclusion is that even at the smallest temperature there is a gap in the spin-wave spectrum and a finite correlation length which measures the size of clusters in which there is short range Néel order. The above expression for $`\xi `$ has been obtained by Chakravarty et al. and its zero temperature limit successfully confirmed by quasi-elastic neutron scattering experiments on La<sub>2</sub>CuO<sub>4</sub> , albeit for temperatures approaching $`T_N`$ from above, $`T\to T_N^{+}`$, where this compound is known to exhibit a true two-dimensional behaviour.
Let us now consider inelastic neutron scattering experiments, performed on a true $`2D`$ system, with energy transfers $`\mathrm{\Delta }E=\hbar \omega `$ such that the corresponding wavelength, $`\lambda =1/\hbar \omega `$, satisfies $`\xi _J\ll \lambda \ll \xi `$. Typical time scales in such experiments are $`\tau _\lambda =\lambda /c`$, consequently much smaller than the relaxation time $`\tau =\xi /c`$ at which the $`2D`$ system disorders. For such experiments, spins would look as if they were frozen and a nonvanishing effective sublattice magnetization would be measured. Since at low temperatures $`\xi `$ is much larger than $`\xi _J`$, the three regions of fig. 1 are well separated. In the large intermediate region, probed by our experiment, the system behaves as if it had true long range antiferromagnetic order and the dynamic scaling hypothesis is justified . We are then allowed to apply a hydrodynamic picture for the low frequency, long-wavelength fluctuations of the spin-fields $`n_l`$, in which its short-wavelength fluctuations follow adiabatically the fluctuations of the disordered background whose typical wavelength is the scale of disorder, the correlation length. The effective field theory to describe the spin correlations in this intermediate Goldstone region is obtained by functionally integrating the Fourier components of the fields in (1) with frequency inside momentum shells $`\kappa \le |\stackrel{}{k}|\le \mathrm{\Lambda }`$. The resulting partition function is such that, for large $`N`$, the leading contribution now comes from the scale dependent stationary configurations $`\sigma _\kappa `$ and $`\mathrm{i}\lambda _\kappa =m_\kappa ^2`$, solutions of the new set of saddle-point equations
$`m_\kappa ^2\sigma _\kappa `$ $`=`$ $`0,`$ (8)
$`\sigma _\kappa ^2`$ $`=`$ $`{\displaystyle \frac{1}{g_0}}-{\displaystyle \frac{1}{\beta }}{\displaystyle \underset{n=-\infty }{\overset{+\infty }{\sum }}}{\displaystyle \int _\kappa ^\mathrm{\Lambda }}{\displaystyle \frac{\mathrm{d}^2𝐤}{(2\pi )^2}}{\displaystyle \frac{1}{𝐤^2+\omega _n^2+m_\kappa ^2}}.`$ (9)
In contrast to the previous case, we can now in fact find the system in two different phases (regimes): ordered (asymptotically free) or disordered (strongly coupled), depending on the size of $`\xi _\kappa =1/\kappa `$ relative to $`\xi `$: smaller (high energies) or larger (low energies). In the ordered phase, $`\xi _\kappa \ll \xi `$, $`m_\kappa =0`$ is the solution that minimizes the free energy and the $`2D`$ system is then characterized by a nonvanishing effective sublattice magnetization $`\sigma _\kappa \ne 0`$, a divergent effective correlation length $`\xi _{eff}=1/m_\kappa =\infty `$ and gapless excitations in the spectrum.
The effective sublattice magnetization can be exactly computed from the second saddle-point equation (9). Using the renormalization scheme defined by (5), we obtain, after momentum integration and frequency sum, the expression
$$\frac{\rho _s(\kappa ,\beta )}{2\pi N}\equiv \sigma _\kappa ^2=\frac{\rho _s}{4\pi N}+\frac{1}{2\pi \beta }\mathrm{ln}(2\mathrm{sinh}(\beta \kappa /2)),$$
(10)
which depends on the energy scale $`\kappa `$ and on the temperature.
Some comments are in order. The running spin stiffness (10) decreases ($`g(\kappa ,\beta )=N/\rho _s(\kappa ,\beta )`$ increases) as $`\xi _\kappa \to \xi `$, for a given temperature. This is a consequence of the fact that we are coarse graining over degrees of freedom which actually feel the finite size of the clusters with short range Néel order. Furthermore, for $`\xi _\kappa >\xi `$ we would be coarse graining over degrees of freedom outside these clusters, leading to the disordered (strongly coupled) phase. Lowering the scale $`\kappa `$ is also equivalent to waiting longer for a response, and for $`\tau _\kappa >\tau `$ we would be waiting long enough for the system to disorder. Here, conversely, in order to obtain a finite temperature phase transition in the $`2D`$ system, we will rather fix the scale $`\kappa `$ and study the behaviour of the effective spin stiffness (10) with the running parameter being the temperature. We must fine tune, and hold fixed, the energy transfers in our experiment so that our $`2D`$ system is able to reproduce the observed $`3D`$ behaviour in real materials. For this it suffices to impose the boundary condition
$$\rho _s(\kappa ,T=0)=\rho _s,$$
(11)
with $`\rho _s`$ being the bulk spin stiffness of the real system. From (11) we conclude that $`\kappa =\rho _s/N`$, which is exactly the inverse Josephson correlation length, $`\kappa =\xi _J^{-1}`$. This should not be surprising since the spin stiffness is itself a microscopic, short wavelength quantity defined at the Josephson scale. Notice also that $`\kappa =23`$ meV for the case of La<sub>2</sub>CuO<sub>4</sub> , which is actually consistent with the energy transfers commonly used in this kind of experiment . Now, inserting (11) in (10), the expression for the finite temperature effective sublattice magnetization, $`M(T)\equiv \rho _s(\kappa =\rho _s/N,T)`$, becomes
$$M(T)=\frac{M_0}{2}+NT\mathrm{ln}\left(2\mathrm{sinh}\left(\frac{M_0}{2NT}\right)\right),$$
(12)
with $`M_0=\rho _s`$. The sublattice magnetization $`M(T)`$ vanishes at a Néel temperature $`T_N`$ given by
$$T_N=\frac{M_0}{N\mathrm{ln}2}.$$
(13)
For temperatures above $`T_N`$, we would be coarse graining over degrees of freedom outside the shrunken clusters of size $`\xi `$, leading again to the disordered (strongly coupled) phase.
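The content of Eqs. (12) and (13) is easy to visualise numerically. The minimal sketch below works in units of $`M_0`$ with $`N=3`$ and simply checks that $`M(T)`$ interpolates from $`M_0`$ at low temperature to zero at $`T_N=M_0/(N\mathrm{ln}2)`$; it involves no fit to data.

```python
# Effective sublattice magnetization of Eq. (12), in units of M0, with N = 3.
import math

N = 3.0
def M_over_M0(t):                      # t = T/M0
    return 0.5 + N * t * math.log(2.0 * math.sinh(1.0 / (2.0 * N * t)))

t_N = 1.0 / (N * math.log(2.0))        # Eq. (13): T_N/M0 ~ 0.48
for t in (0.05, 0.2, 0.4, t_N):
    print("T/M0 = %.3f   M/M0 = %.4f" % (t, M_over_M0(t)))
```

The last line returns zero to numerical precision, reproducing Eq. (13).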
Let us now show that the above analysis can in fact be used to describe the experimental data for different cuprate compounds. Take for example the data obtained for La<sub>2</sub>CuO<sub>4</sub> and YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> . For these compounds, we find agreement between our predictions and the observed Néel temperatures to within about $`10\%`$, already at leading order, as can be seen from table I. More importantly, we have obtained a good qualitative agreement between the phase diagram $`M(T)\times T`$ and experiment, for the whole range of temperatures from $`0`$ to $`T_N`$ (see dotted lines in figs. 2 and 3).
To get a flavor of how our results can be improved, let us mention that already at the next-to-leading order in the $`1/N`$ expansion we will have a nontrivial renormalization of the spin-wave velocity due to the self interaction between spin-waves . This should lower the value of $`c`$ and, if we take for example a lowering of about $`10\%`$, we obtain the behaviour described by the solid curves in figs. 2 and 3. The spin wave velocity will also be renormalized by dynamic scaling, but we assume that at the shortest distances, that is $`\hbar \omega \gg \xi ^{-1}`$, this effect can be neglected when compared to the effects of the self interactions. For this reason, it should not lead to a further damping of the spin waves.
Notice now that, with respect to the solid curves, the experimental points can be separated into two different sets. For $`T<T_N/2`$, we find all points below the solid curves while, for $`T>T_N/2`$, we find, instead, the points all above our theoretical prediction. This is consistent with a picture in which strong frustration-induced quantum fluctuations, due for example to a nonzero next-nearest-neighbour coupling, are dominant at low temperatures and suppressed at higher $`T`$, where the effects due to a sizable $`J_{\perp }`$ begin to be felt. Notice also that in the case of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> the points deviate even more from the solid curve, for $`T>T_N/2`$, than in the case of La<sub>2</sub>CuO<sub>4</sub> . We attribute this to the bilayer structure of YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6.15</sub> , which causes a further increase in the sublattice magnetization. As we approach $`T_N`$ from below, both systems behave effectively as true $`2D`$ Heisenberg antiferromagnets with nearest-neighbour coupling, as shown by the experimental data. From the above discussion we conclude that incorporating frustration and $`3D`$ effects as perturbations, with properly temperature-renormalized coefficients, might be sufficient to account for the deviation of the data from our theoretical predictions. We are presently investigating this possibility.
As a final remark, let us show that our treatment is consistent with experiment also in the disordered phase. Above $`T_N`$, where the effective sublattice magnetization vanishes, we must consider the second possible solution for the set of saddle-point equations (9), namely $`\sigma _\kappa =0`$ and $`m_\kappa 0`$. If we then solve the self-consistent equation for the gap we end up with
$$m_\kappa ^2=\frac{4}{\beta ^2}\mathrm{arcsinh}^2\left(\frac{e^{-\beta \rho _s/(2N)}}{2}\right)-\left(\frac{\rho _s}{N}\right)^2,$$
(14)
for $`\kappa =\rho _s/N`$. It is straightforward to see that the effective correlation length $`\xi _{eff}=1/m_\kappa `$ diverges as $`T\to \rho _s/(N\mathrm{ln}2)`$, or in other words, as we approach $`T_N`$ from above. This is consistent with the data for La<sub>2</sub>CuO<sub>4</sub> of Ref. , in the correct temperature limit.
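A quick numerical check of Eq. (14), in the same units and with the same $`N=3`$ as above (again a sketch, not a fit to data), shows the effective correlation length growing without bound as $`TT_N^{+}`$:

```python
# Spin gap above T_N from Eq. (14); xi_eff = 1/m_kappa diverges as T -> T_N+.
import math

N = 3.0
t_N = 1.0 / (N * math.log(2.0))                 # T_N in units of rho_s
def m_kappa(t):                                  # t = T/rho_s
    a = 2.0 * t * math.asinh(0.5 * math.exp(-1.0 / (2.0 * N * t)))
    return math.sqrt(a * a - (1.0 / N) ** 2)

for f in (1.2, 1.05, 1.01):
    t = f * t_N
    print("T/T_N = %.2f   xi_eff = %5.1f  (units of 1/rho_s)" % (f, 1.0 / m_kappa(t)))
```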
The authors have benefited from several fruitful discussions with C. Farina, A. Katanin, B. Keimer, E. Miranda and F. Nogueira. E.C.M. is partially supported by CNPq and FAPERJ. M.B.S.N is supported by FAPERJ. |
no-problem/9912/astro-ph9912520.html | ar5iv | text | # Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar
## 1 Introduction
The Crab pulsar/Nebula system is one of the most intensely studied astrophysical sources with measurements throughout the electromagnetic spectrum from the radio to the TeV energy band. In most regions of the spectrum, the characteristic 33 ms pulsations of the pulsar are clearly visible. The pulse profile is unique amongst known pulsars in that it is aligned from radio to gamma-ray energies. The study of the pulsed emission in different energy ranges is of considerable importance to understanding the underlying emission mechanisms (e.g., Eikenberry & Fazio (1997)). The EGRET instrument on the Compton Gamma-Ray Observatory (CGRO) has shown that there is pulsed gamma-ray emission from the pulsar up to at least 10 GeV (Ramanamurthy et al. (1995)). Current imaging atmospheric Cherenkov telescopes have firmly established the Crab Nebula as a steady source of gamma rays from 300 GeV to 50 TeV (Hillas et al. (1998); Tanimori et al. (1998)). However, these observations have not detected any significant modulation of this TeV signal at the period of the pulsar. In contrast to these reports, other groups have reported TeV emission modulated at the 33 ms period of the Crab pulsar. Some of these reports have been associated with episodic activity (Gibson et al. (1982); Bhat et al. (1986); Acharya et al. (1992)). A persistent pulsed signal from the Crab pulsar was reported by the Durham group (Dowthwaite et al. (1984)). However this has not been confirmed by more sensitive observations which show that less than 5% of the total very high energy (VHE) flux is pulsed (Weekes et al. (1989); Reynolds et al. (1993); Goret et al. (1993)). At ultra-high energies, the CASA-MIA experiment does not find any statistically significant evidence for pulsed gamma-ray emission at the Crab pulsar period, on an interval of one day or longer, based on the analysis of data recorded during the interval 1990 March to 1995 October (Borione et al. (1997)).
Pulsed emission from the Crab pulsar at IR energies and above is generally believed to originate in the magnetosphere of the system far from the stellar surface. In each of the two models which address the pulsed gamma-ray emission in detail, the outer gap model (Cheng, Ho & Ruderman (1986); Romani (1996)) and the polar cap model (Daugherty & Harding (1982)), the high energy flux arises from curvature radiation of pairs as they propagate along the open field lines of the magnetosphere. The specific details of the pulse shapes in different pulsars are explained by the line of sight geometry of the observer relative to the spin and magnetic axes of the rotating neutron star in these models. The energy at which the pulsed flux begins to cut-off and the detailed spectral shape of the cut-off can help to distinguish between the two models. Given the detection of pulsations out to 10 GeV by EGRET (Ramanamurthy et al. (1995)) and the restrictive upper limits above 300 GeV (Weekes et al. (1989); Reynolds et al. (1993); Goret et al. (1993)), the cut-off necessarily resides in the $`\sim `$100 GeV energy range. This is our primary motivation for this deep search for pulsations from the Crab in the 100 GeV range.
The outer gap model by Romani 1996 also includes TeV emission via the synchrotron-self-Compton mechanism which produces a peak spectral energy density above 1 TeV. Such a mechanism could in principle explain the detection of pulsed emission by the Durham group, which operates at an energy threshold of 1 TeV, and still be consistent with the upper limits reported at lower energies. For this reason we have applied spectral analysis techniques to search for a gamma-ray Crab pulsar signal over the energy band 250 GeV to 4 TeV.
## 2 Observation and Analysis Techniques
The VHE observations reported in this paper utilize the atmospheric Cherenkov technique (Cawley & Weekes (1995)) and the 10 m optical reflector located at the Whipple Observatory on Mt. Hopkins in southern Arizona (elevation 2.3 km) (Cawley et al. (1990)). A camera, consisting of photomultiplier tubes (PMTs) mounted in the focal plane of the reflector, detects the Cherenkov radiation produced by gamma-ray and cosmic-ray air showers from which an image of the Cherenkov light can be reconstructed. For most of the observations reported here, the camera consisted of 109 PMTs (each viewing a circular field of 0$`\stackrel{}{\mathrm{.}}`$259 radius) with a total field of view of $`3^{}`$ in diameter. In 1996 December, 42 additional PMTs were added to the camera, increasing the field of view to 3$`\stackrel{}{\mathrm{.}}`$3.
We characterize each Cherenkov image using a moment analysis (Reynolds et al. (1993)). The roughly elliptical shape of the image is described by the length and width parameters and its location and orientation within the field of view are given by the distance and $`\alpha `$ parameters, respectively. We also determine the two highest signals recorded by the PMTs (max1, max2) and the amount of light in the image (size). These parameters are defined in Table 1 and are depicted in Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar. Gamma-ray events give rise to more compact shower images than background hadronic showers and are preferentially oriented towards the putative source position in the image plane. By making use of these differences, a gamma-ray signal can be extracted from the large background of hadronic showers.
### 2.1 Selection Methods
The standard gamma-ray selection method utilized by the Whipple Collaboration is the Supercuts criteria (see Table 2; cf., Reynolds et al. (1993); Catanese et al. (1996)). These criteria were optimized on contemporaneous Crab Nebula data to give the best sensitivity to point sources. In an effort to remove the background of events triggered by single muons and night sky fluctuations, Supercuts incorporates pre-selection cuts on the size and on max1 and max2. While the introduction of a pre-selection is desirable from the point of view of optimizing overall sensitivity, it automatically rejects many showers below $`400`$ GeV. In the context of a search for pulsed emission from the Crab pulsar, which must have a low energy cut-off to accommodate existing upper limits, this is clearly undesirable. Accordingly, a modified set of cuts (Table 3; cf., Moriarty et al. (1997)), developed to provide optimal sensitivity in the $`200`$ GeV to $`400`$ GeV region and referred to hereafter as Smallcuts, was used for the events which failed the Supercuts pre-selection criteria. The most notable difference between Smallcuts and Supercuts is the introduction of a cut on the length/size of an image. Such a cut is effective at discriminating partial arcs of Cherenkov light rings arising from single muons, which become the predominant background at lower energies. These images tend to be long compared to their intensity and so may be rejected on the basis of the length/size ratio. When a combination of Supercuts and Smallcuts is used, Monte Carlo simulations indicate that this analysis results in an energy threshold of $`250`$ GeV. This threshold is the energy at which the differential rate from a source with a spectral index equal to that of the steady Crab Nebula reaches its peak. The collection area as a function of gamma-ray energy is depicted in Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar and results in an effective collection area of $`2.7\times 10^8\mathrm{cm}^2`$. Details of the methods used to estimate the energy threshold and effective area are given elsewhere (Mohanty et al. (1998)).
The data from 1997 were analyzed with slightly modified cuts (see Tables 2,3) which were re-optimized after an upgrade to the Whipple camera which increased the field of view. The greatest effect of the larger field of view was that images appeared longer and at a greater distance from the center of the field of view due to less image truncation than caused by the smaller camera.
Supercuts was optimized to give the best point source sensitivity but in doing so it rejects many of the larger gamma-ray events. Another selection process, known as Extended Supercuts (Table 4; cf., Mohanty et al. (1998)), was utilized to facilitate a search for pulsed emission over the energy band 250 GeV to 4 TeV. This method is quite similar to Supercuts but scales the various cuts with the shower size and retains approximately 95% of gamma-ray events compared to approximately 50% of gamma-ray events passed by the Supercuts criteria. By applying a lower bound on the size of an image, the energy threshold of the analysis increases. Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar depicts the collection area as a function of gamma-ray energy as derived by Monte Carlo simulations for a lower bound on the size of an image of 500, 1000, 2000 and 5000 digital counts. These cuts impose energy thresholds of 0.6, 1.0, 2.0 and 4.0 TeV respectively.
### 2.2 Periodic Analysis
The arrival times of the Cherenkov events were registered by a GPS clock with an absolute resolution of 250 $`\mu `$s. An oscillator, calibrated by GPS second marks (relative resolution of 100 ns), was used to interpolate to a resolution of 0.1 $`\mu `$s. After an oscillator calibration was applied, all arrival times were transformed to the solar system barycenter by utilizing the JPL DE200 ephemeris as described by Standish (1982). As the acceleration of the pulsar relative to the solar system barycenter is negligible, the only additional correction factor is due to the gravitational redshift. The conversion of the coordinated universal time (UTC) as measured at the telescope, to the solar system barycenter arrival time (TDB), is given by
$$t_{TDB}=t_{UTC}+\mathrm{\Delta }_{TAI-UTC}+\mathrm{\Delta }_{TDT-TAI}+\mathrm{\Delta }_{TDB-TDT}+\mathrm{\Delta }_{REL}.$$
(1)
The international atomic time (TAI) differs from UTC time by an integral number of leap seconds. The terrestrial dynamical time (TDT) is used as a timescale of ephemerides for observations from the Earth’s surface and differs from TAI by 32.184 s. The correction to the Earth’s surface requires the telescope’s geocentric coordinates and a model of the Earth’s motion. The final correction applied, $`\mathrm{\Delta }_{REL}`$, accounts for the variation of the gravitational potential around the Earth’s orbit.
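For readers who want to reproduce this correction chain, the sketch below uses astropy rather than the original JPL DE200-based pipeline; the Whipple site coordinates and the epoch are approximate, illustrative values.

```python
# UTC -> barycentric TDB arrival time, Eq. (1), sketched with astropy
# (not the original pipeline; site coordinates below are approximate).
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation
import astropy.units as u

crab = SkyCoord(ra="05h34m31.949s", dec="+22d00m52.057s", frame="icrs")
whipple = EarthLocation.from_geodetic(lon=-110.88 * u.deg, lat=31.68 * u.deg,
                                      height=2300.0 * u.m)
t_utc = Time("1996-12-02 08:00:00", scale="utc", location=whipple)
ltt = t_utc.light_travel_time(crab, kind="barycentric")   # Roemer delay
t_tdb = t_utc.tdb + ltt   # leap seconds and the TDT/TDB offsets handled internally
print(t_tdb.isot)
```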
The corrected times were folded to produce the phases, $`\varphi _j`$, of the events modulo the pulse period according to
$$\varphi _j=\varphi _0+\nu (t_j-t_0)+\frac{1}{2}\dot{\nu }(t_j-t_0)^2,$$
(2)
where $`\nu ,\dot{\nu }`$ are the frequency and first frequency derivative at the epoch of observation $`t_0`$. For each source run the valid frequency parameters were derived from the J2000 ephemeris obtained from Jodrell Bank where the Crab pulsar is monitored on a monthly basis.
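Once the barycentred times are in hand, the folding of Eq. (2) is a one-line operation. In the sketch below the frequency values are placeholders of roughly the right magnitude for the Crab; in practice they must be taken from the contemporaneous radio ephemeris.

```python
# Fold barycentred event times into pulsar phase with Eq. (2).
import numpy as np

nu, nudot = 29.95, -3.77e-10      # Hz, Hz/s : illustrative Crab values only
t0, phi0 = 0.0, 0.0               # ephemeris epoch and reference phase

def phases(t_tdb):
    dt = t_tdb - t0
    return (phi0 + nu * dt + 0.5 * nudot * dt ** 2) % 1.0

events = np.sort(np.random.uniform(0.0, 1680.0, size=5))   # one 28-minute run
print(phases(events))
```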
To check the Whipple Observatory timing systems an optical observation of the Crab pulsar was undertaken on the nights of 1996 December 2 (UT), 1996 December 18 (UT) and 1997 March 11 (UT), using the 10 m reflector with a photometer at its focus (Srinivasan et al. (1997)). The signal from the photometer was recorded by the data acquisition electronics and timing system of the telescope thereby providing a direct test of the instrument’s timing characteristics. The phase analysis of the event arrival times, depicted in Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar, yielded a clear detection of the optical signal from the Crab pulsar in phase with the radio pulse. This demonstrates the validity of the timing, data acquisition and barycentering software in the presence of a pulsed signal.
## 3 Observations and Results
The position of the Crab pulsar was observed between 1995 January and 1997 March. The traditional mode of observing potential periodic sources with the Whipple Observatory gamma-ray telescope is to track the putative source location continuously for runs of 28 minute duration. After filtering runs for bad weather and instrumental problems, the data set consists of 159 runs for a total source observing time of 73.4 hrs. The radio position (J2000) of the Crab pulsar ($`\alpha `$ = 05<sup>h</sup> 34<sup>m</sup> 31.949<sup>s</sup>, $`\delta `$ = +22° 00′ 52.057<sup>′′</sup>) was assumed for the subsequent timing analysis.
The numbers of events passing the selection criteria described above are given in Table 5. The phases of these events, shown in Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar, are used for periodic analysis. We find no evidence of pulsed emission at the radio period. To calculate upper limits for pulsed emission we have used the pulse profile seen at lower energies by EGRET. That is, we assume emission occurs within the phase ranges of both the main pulse, phase 0.94-0.04, and the intrapulse, phase 0.32-0.43 (Fierro et al. (1998)). The number of events with phases within these intervals constitutes the number of candidate pulsed events, $`N_{on}`$. $`N_{off}`$, an estimate of the numbers of background events, is obtained by multiplying the number of events with phases outside these pulse intervals by the ratio of ranges spanned by the pulse and non-pulse regions. The results are given in Table 5. The statistical significance of the excess is calculated using the maximum likelihood method of Li & Ma (1983). The 99.9% confidence level upper limits calculated using the method of Helene (1983) are given in Table 6.
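The on-phase/off-phase counting test can be reproduced with the Li & Ma (1983) formula; the counts in the sketch below are invented numbers for illustration only, and the Helene upper-limit construction is not sketched here.

```python
# Significance of an on-pulse excess, Eq. (17) of Li & Ma (1983).
import math

def li_ma_significance(n_on, n_off, alpha):
    """Valid for n_on >= alpha * n_off (an excess)."""
    t1 = n_on * math.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    t2 = n_off * math.log((1.0 + alpha) * n_off / (n_on + n_off))
    return math.sqrt(2.0 * (t1 + t2))

# on-pulse windows: (0.94-1.0)+(0.0-0.04) and (0.32-0.43) -> 0.21 of the period
alpha = 0.21 / 0.79
print("S = %.2f sigma" % li_ma_significance(2150, 7900, alpha))   # made-up counts
```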
Several reports of pulsed emission from the Crab pulsar at very high energies claim to have seen evidence of episodic emission on time scales of several minutes. For this reason we have performed a run-by-run search for periodic emission from the Crab pulsar based on the above pulse profile. The statistical significance of excess events for each observation and the corresponding distribution of significance for the lowest and middle energy ranges are given in Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar. In each energy band the distribution of significance is consistent with the statistical expectation for zero excess.
## 4 Discussion
Data taken with the Whipple Observatory’s 10 m gamma-ray telescope have been used to search for pulsations from the Crab pulsar above 250 GeV. We find no evidence of pulsed emission at the radio period and upper limits on the integral flux have been given.
To model the pulsed gamma-ray spectrum, a function of the form
$$dN/dE=KE^{-\gamma }e^{-E/E_o}$$
(3)
was used, where $`E`$ is the photon energy, $`\gamma `$ is the photon spectral index and $`E_o`$ is the cut-off energy. The source spectrum in the EGRET energy range is well fitted by a power law with a photon spectral index of $`2.15\pm 0.04`$ (Nolan et al. (1993)). The pulsed upper limit above 250 GeV reported here is $`\sim 3`$ orders of magnitude below the flux predicted by the EGRET power law. Equation 3 was used to extrapolate the EGRET spectrum to higher energies constrained by the TeV upper limit reported here and indicates a cut-off energy $`E_o\lesssim 60`$ GeV for pulsed emission (see Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar).
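To illustrate how strongly such an exponential cut-off suppresses the flux above the 250 GeV threshold, one can integrate Eq. (3) for a few trial values of $`E_o`$. This is a rough sketch only: the published bound also folds in the absolute flux normalisation and the collection area.

```python
# Suppression of the integral flux above 250 GeV when the EGRET power law
# (photon index 2.15) is multiplied by exp(-E/Eo), Eq. (3).
import math
from scipy.integrate import quad

gamma, E_th, E_max = 2.15, 250.0, 5.0e4          # GeV
plain = quad(lambda E: E ** -gamma, E_th, E_max)[0]
for Eo in (30.0, 60.0, 120.0):
    cut = quad(lambda E: E ** -gamma * math.exp(-E / Eo), E_th, E_max)[0]
    print("Eo = %5.0f GeV : flux(>250 GeV) suppressed by %.1e" % (Eo, cut / plain))
```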
As indicated in § 2.1, the energy threshold of the technique is derived assuming a source with a spectral index equal to that of the steady Crab Nebula. With the above model, this assumption is invalid. If we assume a source spectrum as given by Equation 3 and define energy threshold and effective collection area as stated in § 2.1 we simultaneously solve for an energy threshold of 180 GeV and energy cut-off of 60 GeV. The derived cut-off energy is the same as that obtained assuming a Crab Nebula spectrum, and indicates the robustness of defining the energy threshold of the technique in this way.
The sharpness of the spectral cut-off of the emission models depicted in Figure Search for Pulsed TeV Gamma-ray Emission from the Crab Pulsar provides a good discriminant. The status of current observations and the derived cut-off given above indicates that the cut-off must lie in the 10-60 GeV range. However, the upper limits reported here are well above the flux predicted by the polar cap and outer gap models and offer no discrimination between them. In contrast, the outer gap model of Romani (1996) predicts TeV emission via the synchrotron-self-Compton mechanism. The flux produced via this mechanism is dependent on the density and spectrum of primary electrons and positrons in the gap, as well as the density of local soft photon fields. The predicted pulsed TeV flux for a young gamma-ray pulsar is somewhat less than 1% of the pulsed GeV flux. The results reported here derive an upper limit to this fraction of $`<0.07`$%.
We acknowledge the technical assistance of K. Harris, T. Lappin, and E. Roache. We thank A. Lyne and R. Pritchard for providing the radio ephemeris of the Crab pulsar. This research is supported by grants from the U.S. Department of Energy, NASA, the Irish National Research Support Fund Board and by PPARC in the United Kingdom. |
no-problem/9912/hep-ph9912213.html | ar5iv | text | # McGill/99-37NUC-MINN-99/16-T Coherence Time Effects on 𝐽/𝜓 Production and Suppression in Relativistic Heavy Ion Collisions
## Abstract
Using a coherence time extracted from high precision proton-nucleus Drell-Yan measurements and a nuclear absorption cross section extracted from $`pA`$ charmonium production experiments, we study $`J/\psi `$ production and absorption in nucleus-nucleus collisions. We find that coherence time effects are large enough to affect the measured $`J/\psi `$-to-Drell-Yan ratio. The S+U data at 200A GeV/c measured by NA38 are reproduced quantitatively without the introduction of any new parameters. However, when compared with recent NA50 measurements for Pb+Pb at 158A GeV/c, the data is not reproduced in trend or in magnitude.
PACS numbers: 25.75.-q, 24.85.+p, 11.80.La
Ultrarelativistic heavy ion collisions offer the tantalizing possibility of forming and studying a new form of matter predicted by QCD: the quark-gluon plasma. A vigorous experimental program has existed at the CERN SPS for more than ten years. RHIC now signals the dawn of a new era in heavy ion physics at Brookhaven National Laboratory. Several experimental signals have been put forward as candidates for QCD plasma signatures . Of those, the most famous is probably that of $`J/\psi `$ suppression in nucleus-nucleus collisions. The theoretical and experimental activity that have followed this seminal suggestion have been considerable as the disappearance of the $`J/\psi `$ can directly be linked to deconfinement and Debye screening in the plasma . The interested reader will find a recent snapshot of the state of this field in Ref. .
Before an experimentally observed $`J/\psi `$ suppression pattern is interpreted as an unambiguous signal of the existence of a quark-gluon plasma, it is imperative to rule out all competing explanations of purely hadronic origin. Moreover, the hadronic scenarios considered should incorporate elements of physics that are known to be relevant at the energy scale under consideration. It is one such line of thought that we follow in this paper. We study charmonium production in relativistic heavy ion collisions along with the appropriate background, Drell-Yan pair production. Our paper is organized as follows: First we recall the main features of a model that is successful in explaining high precision Drell-Yan data measured in proton-nucleus collisions. Those data enable one to extract a formation time characteristic of the emission of soft hadrons, essentially pions. Next we recall the application of this model to the production of $`J/\psi `$ in $`pA`$ collisions. From those measurements we have extracted a cross section for $`J/\psi `$ absorption on the nucleon. With this formulation we make parameter-free calculations for nucleus-nucleus collisions and compare them with experimental data.
In almost all considerations involving heavy ion collisions at any energy, the issues of dynamics and elementary processes remain intimately connected and inseparable. In view of this, a successful modeling of nuclear collisions is a necessary prerequisite to a deeper exploration of the fine points of the nuclear dynamics. To simulate the heavy ion collision we prefer to work with hadronic variables rather than partonic ones, and make a straightforward linear extrapolation from proton-proton scattering. This extrapolation, referred to as LEXUS, was detailed and applied to nucleus-nucleus collisions at beam energies of several hundred GeV per nucleon in Ref. . Briefly, the inclusive distribution in rapidity $`y`$ of the beam proton in an elementary proton-nucleon collision is parameterized rather well by
$$W_1(y)=\lambda \frac{\mathrm{cosh}y}{\mathrm{sinh}y_0}+(1-\lambda )\delta (y_0-y),$$
(1)
where $`y_0`$ is the beam rapidity in the lab frame. The parameter $`\lambda `$ has the value 0.6 independent of beam energy, at least in the range in which it has been measured, which is $`12-400`$ GeV . It may be interpreted as the fraction of all collisions which are neither diffractive nor elastic. As a nucleon cascades through the nucleus its energy is degraded. An underlying assumption in this model is that of straight line trajectories. In the case of a nucleus-nucleus collision, one obtains the single-particle rapidity distribution of the $`m`$’th projectile nucleon after a collision with the $`n`$’th target nucleon through the solution of an evolution equation :
$`W_{m,n}^P(y)={\displaystyle \int 𝑑y_P𝑑y_TW_{m,n-1}^P(y_P)W_{m-1,n}^P(y_0-y_T)Q(y-y_T,y_P-y_T,y-y_P)}.`$ (2)
In the above, the kernel is
$`Q(s,t,u)=\lambda {\displaystyle \frac{\mathrm{cosh}s}{\mathrm{sinh}t}}+(1-\lambda )\delta (u),`$ (3)
originating from Eq. (1). Equation (2) is a Boltzmann-like equation that is solved numerically. This rapidity distribution then gets folded with impact parameter over the density distributions of the projectile and target nuclei, using a method described in detail in Ref. .
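The Monte-Carlo sketch below illustrates the content of Eqs. (1)-(3) in the simplest proton-nucleus limit, in which struck target nucleons are simply ignored and the cosh term is taken to have support between the target and projectile rapidities (as implied by the normalization of Eq. (1)). It illustrates the energy degradation of the projectile only; it is not the full LEXUS solution.

```python
# Simplified pA limit of the LEXUS cascade, Eqs. (1)-(3): in each collision
# the projectile keeps its rapidity with probability (1 - lambda) or is
# redistributed as cosh(y)/sinh(y') on [0, y'].  Struck target nucleons are
# ignored, so this only illustrates the energy degradation of the projectile.
import numpy as np

rng = np.random.default_rng(1)
lam, y0, m_N = 0.6, 7.4, 0.938       # y0 ~ beam rapidity for an 800 GeV/c proton

def rapidity_after(n, samples=20000):
    y = np.full(samples, y0)
    for _ in range(n):
        hit = rng.random(samples) < lam
        # inverse-CDF sampling of cosh(y)/sinh(y') on [0, y']
        y[hit] = np.arcsinh(rng.random(hit.sum()) * np.sinh(y[hit]))
    return y

for n in (0, 2, 5):
    y = rapidity_after(n)
    print("n = %d collisions : <E> = %7.1f GeV" % (n, np.mean(m_N * np.cosh(y))))
```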
Recently we have extracted the quantum coherence time needed to reproduce Drell-Yan pair production data in $`pA`$ collisions. This can also be formulated in terms of the Landau-Pomeranchuk-Migdal effect . We briefly recall the procedure followed. We began by computing the Drell-Yan yield at leading-order (LO) with the GRV structure functions with a K factor. Those structure functions reflect a flavor-asymmetric Dirac sea. Adopting a fixed K factor of 2.1, we compared the results to pp Drell-Yan data at 800 GeV/c and found the agreement to be excellent for all measured values of $`x_F`$.
Turning then to the case of proton-nucleus collisions as measured by the E772 collaboration , we deduced that the formation (or coherence) time needed to fit the measured $`\sigma _{pA}^{DY}/\sigma _{pD}^{DY}`$ ratios at different values of $`x_F`$ is $`\tau _0`$ = 0.4 $`\pm `$ 0.1 fm/c, in the frame of the colliding nucleons. Practically, this coherence time can be related to an initial state energy loss for some of the Drell-Yan producing collisions as follows. In LEXUS, we assumed that the energy available to produce a Drell-Yan pair was that which the proton had after $`n`$ previous collisions. In order to reproduce the 800 GeV/c E772 data, we needed $`n=5\pm 1`$. The $`n`$ collisions correspond to a path length of $`n/(\sigma _{NN}^{tot}\rho )`$ in the target nucleus rest frame, where $`\sigma _{NN}^{tot}`$ is a total cross section of 40 mb, and $`\rho `$ is a nuclear matter density of 0.155 nucleons/fm<sup>3</sup>. Lorentz-transforming to the nucleon-nucleon center of mass, one obtains the value of the proper coherence time quoted above. In this language, a traditional Glauber-type model (with no energy loss) would have $`n=\infty `$. Fixing $`\tau _0`$, the range in $`n`$ appropriate for the energies being considered in this work (nucleon momenta of 158 GeV/c and 200 GeV/c) is found to be $`2\le n\le 3`$.
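The arithmetic behind these numbers is compact enough to write out. The sketch below follows the steps described in the text (path length in the target rest frame, then a boost to the nucleon-nucleon centre of mass) and recovers both $`\tau _0`$ of about 0.4 fm/c at 800 GeV/c and $`2\le n\le 3`$ at the SPS momenta; the precise Lorentz bookkeeping is our reading of the description above.

```python
# Coherence time <-> effective number of collisions, as described in the text.
import math

sigma, rho, m_N = 4.0, 0.155, 0.938      # fm^2 (40 mb), fm^-3, GeV

def gamma_cm(p_lab):
    e_lab = math.sqrt(p_lab ** 2 + m_N ** 2)
    sqrt_s = math.sqrt(2.0 * m_N * e_lab + 2.0 * m_N ** 2)
    return sqrt_s / (2.0 * m_N)          # boost of each nucleon in the NN cm frame

def tau0(n, p_lab):                       # proper coherence time in fm/c
    return n / (sigma * rho) / gamma_cm(p_lab)

def n_eff(tau, p_lab):                    # collisions spanned by a given tau
    return tau * sigma * rho * gamma_cm(p_lab)

print("800 GeV/c, n = 5      -> tau0 = %.2f fm/c" % tau0(5, 800.0))
for p in (158.0, 200.0):
    print("%5.0f GeV/c, tau0 = 0.4 -> n = %.1f" % (p, n_eff(0.4, p)))
```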
We then investigated $`J/\psi `$ production in $`pA`$ collisions . The additional input needed there was the cross section for producing $`J/\psi `$ in elementary nucleon-nucleon interactions. We used a parametrization that follows from a tabulation of data by Lourenço :
$`B\sigma _{NNJ/\psi }(x_F>0)=37(1m_{J/\psi }/\sqrt{s})^{12}\mathrm{nb}.`$ (4)
Here, $`\sqrt{s}`$ is the center-of-mass energy of the nucleon pair and $`B`$ is the branching ratio into a muon pair. Using the functional $`x_F`$ dependence of the differential cross section as measured by E789 , one can write a normalized differential cross section to use as an input in LEXUS:
$`{\displaystyle \frac{d\sigma _{NNJ/\psi }}{dx_F}}=6\sigma _{NNJ/\psi }(x_F>0)(1|x_F|)^5.`$ (5)
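As a cross-check of Eqs. (4) and (5), the short script below evaluates the parametrization at a few beam momenta and verifies that the differential form integrates back to the $`x_F>0`$ cross section. The $`J/\psi `$ and nucleon masses are standard values, and the chosen momenta are merely illustrative.

```python
# Quick evaluation of the elementary J/psi production parametrizations,
# Eqs. (4) and (5).  Masses are standard values; beam momenta are illustrative.
import numpy as np

M_JPSI = 3.097   # GeV
M_N = 0.938      # GeV

def b_sigma_jpsi_nb(sqrt_s):
    """B*sigma(NN -> J/psi, x_F > 0) in nb, Eq. (4)."""
    return 37.0 * (1.0 - M_JPSI / sqrt_s) ** 12

def dsigma_dxf(x_f, sqrt_s):
    """Differential cross section of Eq. (5), in nb per unit x_F."""
    return 6.0 * b_sigma_jpsi_nb(sqrt_s) * (1.0 - np.abs(x_f)) ** 5

def sqrt_s_fixed_target(p_lab):
    """NN centre-of-mass energy for beam momentum p_lab [GeV/c] on a nucleon at rest."""
    e_lab = np.sqrt(p_lab ** 2 + M_N ** 2)
    return np.sqrt(2.0 * M_N ** 2 + 2.0 * M_N * e_lab)

if __name__ == "__main__":
    for p_lab in (158.0, 200.0, 800.0):
        s12 = sqrt_s_fixed_target(p_lab)
        print(f"p_lab = {p_lab:5.0f} GeV/c, sqrt(s) = {s12:5.1f} GeV, "
              f"B*sigma(x_F>0) = {b_sigma_jpsi_nb(s12):6.2f} nb")
    # sanity check: Eq. (5) integrates back to Eq. (4) over 0 <= x_F <= 1
    x = np.linspace(0.0, 1.0, 5001)
    mid = x[:-1] + 0.5 * (x[1] - x[0])
    riemann = np.sum(dsigma_dxf(mid, sqrt_s_fixed_target(200.0))) * (x[1] - x[0])
    print(f"integral of Eq. (5) over x_F > 0: {riemann:.2f} nb")
```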
From our analysis , we have extracted a $`J/\psi `$ absorption cross section in nuclear matter of 3.6 mb. It is worthwhile to note that this value is in numerical agreement with the same quantity deduced from experiments of $`J/\psi `$ photoproduction on nuclei . It is smaller than that used in other phenomenological heavy ion applications . The parameters in this model are thus completely determined by proton-nucleus data.
We now turn to recent experiments on the production of the $`J/\psi `$ in S+U collisions at 200A GeV/c and in Pb+Pb collisions at 158A GeV/c, at the CERN SPS. Because the $`J/\psi `$ is measured through its decay into dimuons, the production cross section has traditionally been divided by the natural background in the appropriate invariant mass region: that of Drell-Yan pairs. However, since the absolute cross section measurements are now available, we will first verify the predictions of our model there. Including the respective detector acceptances, the results for absolute cross sections are shown in Table 1.
Consider the system S+U at 200A GeV/c. The coherence time arguments made earlier in this paper suggest that the values $`n=2`$ and $`n=3`$ should bracket the experimental data. We observe that indeed this is so, both for the measured Drell-Yan and $`J/\psi `$ absolute cross sections. One can go further and plot $`B\sigma ^{J/\psi }/\sigma ^{DY}`$ against collision centrality. One needs a model to map the impact parameter bins that enter as input in our dynamical model into bins of measured transverse energy. The experimental collaboration has in fact provided the impact parameter range that corresponds to a measured $`J/\psi `$-Drell-Yan ratio . Comparison to the experimental data is shown in Fig. 1. One can see that our results are consistent with the data within experimental uncertainties. Again, we emphasize that no new parameters were introduced. It is also worthwhile to note that the numerator and denominator of the plotted ratio have been calculated from “first principles”, the meaning of which is clear in the context of this paper: LO Drell-Yan calculations and a parametrization of the differential $`J/\psi `$ production in nucleon-nucleon collisions. The absolute cross sections calculated with no energy loss (or infinite coherence time) fail to reproduce the experimental results. This is also the case for their ratio.
We now turn to experimental results obtained by the NA50 collaboration with Pb projectiles and targets. From Table 1 we see that the measured Drell-Yan cross section exceeds our larger ($`n`$ = 3) value by about one standard deviation. The experimental $`J/\psi `$ value falls within the predicted range. Plotting the $`J/\psi `$-to-Drell-Yan ratio against the impact parameters determined by the experimental collaboration one obtains Fig. 2. Application of this model with its parameters determined solely from $`pA`$ physics does not yield a satisfactory representation of these experimental data: they are reproduced neither in trend nor in magnitude. Note, however, that the poor quality of this fit is entirely comparable with those obtained with other hadronic approaches . Also shown in this figure is the effect of the coherence time on this ratio. This effect is less spectacular here, since it partially cancels between the numerator and the denominator. The flattening and slight increase of the ratio, as one goes to smaller impact parameters, can be attributed to the $`J/\psi `$ cross section growing slightly faster than the Drell-Yan. Here also, the calculated absolute cross sections with no energy loss far exceed the experimental values.
We also considered possible nuclear structure effects on the ratio shown in Fig. 2. It is known that parton distribution functions that have a flavor-asymmetric Dirac sea, like the one we use in this work, will yield different Drell-Yan cross sections depending on whether one has p + p, p + n, n + n or n + p collisions. In our treatment the isospin content of the nucleus is assumed to be uniformly distributed according to the overall charge of the colliding partners. Experimentally, Pb is known to have a neutron skin of 0.19 $`\pm `$ 0.09 fm. This value is in fact too small to have an effect on the calculations shown here. Finally, it seems useful to point out that in Fig. 2 the Pb data does not seem to converge to the vacuum ratio as one moves towards more peripheral collisions, unlike the measurements of S-induced reactions.
We have investigated nucleus-nucleus collisions with a model that incorporates the coherence time associated with the emission of soft quanta in hadronic interactions. This approach translates into lost energy for the formation of hard radiation, such as high-mass Drell-Yan pairs and $`J/\psi `$. We have obtained results in quantitative agreement with experimental data for the reaction S on U at 200A GeV/c. The ratio of $`J/\psi `$ to Drell-Yan cross sections as a function of collision centrality, as well as the total absolute cross sections are reproduced by our model. Therefore, we can understand Drell-Yan and $`J/\psi `$ formation in $`pA`$ and S+U collisions in terms of the same physics. This model fails to reproduce measurements done in connection with the heavier Pb+Pb system.
Several points still need to be clarified. It will be very instructive to repeat this analysis in partonic variables including nuclear shadowing . A systematic exploration of the freedom allowed by the most recent high-precision pA measurements is called for and is underway. Nevertheless, if the Pb+Pb data stand the test of time, it does not seem possible to escape the conclusion that $`J/\psi `$ suppression is caused by high energy density. Whether it is due to absorption on hadronic co-movers or quark-gluon plasma remains an open and exciting question.
ACKNOWLEDGEMENTS
This work was supported in part by the Natural Sciences and Engineering Research Council of Canada, in part by the Fonds FCAR of the Quebec Government, in part by the Director, Office of Energy Research, Office of High Energy and Nuclear Physics, Division of Nuclear Physics, and by the Office of Basic Energy Sciences, Division of Nuclear Sciences of the U.S. Department of Energy under contract DE-AC03-76SF00098, and grant DE-FG02-87ER40328.
# Phase Resolved Spectroscopy of Burst Oscillations: Searching for Rotational Doppler Shifts
## Introduction
A rotating hot spot (or spots) seems to be the simplest consistent scenario proposed to date to explain the presence of large amplitude modulations of the X-ray brightness during thermonuclear bursts. The presence of large amplitudes at burst onset combined with spectral evidence for localized X-ray emission supports this hypothesis szs . Additional evidence for spin modulation is provided by the fact that the oscillation frequencies are stable over year timescales and within bursts the oscillations are highly coherent s98b ; sm .
Detailed studies of the burst oscillations hold great promise for providing new insights into a variety of physics issues related to the structure and evolution of neutron stars. For example, within the context of the rotating hot spot model it is possible to determine constraints on the mass and radius of the neutron star from measurements of the maximum observed modulation amplitudes during X-ray bursts as well as the harmonic content of the pulses ml ; s98a . Phase resolved X-ray spectroscopy of the burst oscillations also holds great promise, and could yield methods to constrain the radii of neutron stars, a quantity which is extremely difficult to infer on its own. For example, a 10 km radius neutron star spinning at 400 Hz has a surface velocity of $`v_{spin}/c\simeq 2\pi \nu _{spin}R/c\simeq 0.084`$ at the rotational equator. This motion of the hot spot produces a Doppler shift of magnitude $`\mathrm{\Delta }E/E\simeq v_{spin}/c`$, thus the observed spectrum is a function of pulse phase cs . Measurement of this phase dependent Doppler shift would provide further compelling evidence supporting the spin modulation model and also a means of constraining the neutron star radius, since for a known spin frequency the velocity, and thus magnitude of the Doppler shift, is proportional to the stellar radius. In addition, both the magnitude of the Doppler shift and the amplitude of spin modulation decrease as the latitude of the hot spot increases (both approaching zero when the spot is located at the rotational pole). Detection of this correlation in a large sample of bursts would provide definitive proof of the spin modulation hypothesis.
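The size of the expected effect is easy to tabulate. The short sketch below evaluates $`v_{spin}/c=2\pi \nu _{spin}R/c`$ for a few spin frequencies and an assumed 10 km radius; the particular frequencies and radius are illustrative choices.

```python
# Order-of-magnitude sketch of the rotational Doppler shift expected from a
# hot spot at the rotational equator (radius and spin frequencies illustrative).
import numpy as np

C_KM_S = 2.9979e5    # speed of light [km/s]

def surface_velocity(nu_spin_hz, radius_km):
    """Equatorial surface velocity in units of c: v/c = 2*pi*nu*R/c."""
    return 2.0 * np.pi * nu_spin_hz * radius_km / C_KM_S

if __name__ == "__main__":
    for nu, r in [(330.0, 10.0), (400.0, 10.0), (580.0, 10.0)]:
        beta = surface_velocity(nu, r)
        print(f"nu = {nu:4.0f} Hz, R = {r:3.0f} km: v/c = {beta:.3f}, "
              f"Delta E/E ~ {100 * beta:.1f}%")
```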
## Phase Resolved Spectroscopy
Detailed searches for a Doppler shift signature are just beginning to be carried out. Studies using the oscillations in single bursts have shown that 4-5 % modulations of the fitted black body temperature are easily detected in the tails of bursts ssz , and are consistent with the idea that a temperature gradient is present on the stellar surface, which when rotated produces the flux modulations. Phase lag studies in a burst from Aql X-1 indicates that softer photons lag higher energy photons in a manner which is qualitatively similar to that expected from a rotating hot spot ford .
There are several ways to address the issue of how best to search for Doppler shifts. Near burst onset the oscillation amplitude is high and the hot spot is well localized. If it were possible to examine the oscillations with exquisite detail nearer and nearer to the burst onset then the situation should approach that of a point-like hot spot on the star. A point spot located on the rotational equator produces the largest modulation amplitude as well as a maximal Doppler shift. Thus, in principle this would be an ideal interval to examine; however, the difficulty is that the burst flux is still weak near onset and the interval during which the spot is small (to accumulate a spectrum) is short. This means that the desired signal to examine is very weak. An alternative is to examine pulse trains in the decaying tails of bursts sm . In this case the oscillation interval is much longer, the burst flux is still substantial and there are many more photons available. However, the oscillation amplitude is lower and at late times in bursts we expect that a well localized spot is probably not present. More likely the modulations are caused by a broad temperature anisotropy over the stellar surface (see Figure 1). This means that the observed Doppler shift represents an integral over the rotating surface of the star, much of which is moving with a lower line of sight velocity than the maximum at the rotational equator. Thus the Doppler shift will be weaker than what we might expect near burst onset. To some extent the lack of signal in a single burst can be offset by summing data from several bursts coherently sm ; m .
## Results
To search for a Doppler shift I have examined the phase resolved spectrum of 4 bursts with 330 Hz oscillations from 4U 1702-429 mss . Intervals in the tails of these bursts were analysed first since these have much more signal in the pulsations than intervals near onset. For each burst I fit an exponential model of the frequency drift, as in sm , and then used the phase information for each X-ray event, based on the best model, to accumulate spectra in 12 phase bins. The data were recorded in an event mode which provides the arrival time and energy channel of each X-ray. The data mode has 64 energy channels in the $`\sim `$ 2–100 keV PCA band. As a measure of spectral hardness versus pulse phase I used the mean PCA energy bin for each phase interval. In principle one could also fit the accumulated spectra with a black body function and use the color temperature as a hardness measure; however, the spectra are accumulated over an interval long compared to the burst cooling time so that the spectra are not well fit by black body functions with a constant temperature. In each burst there is a strong modulation of the mean PCA energy channel with pulse phase, with the peak of the phase folded lightcurve having the highest mean channel. This result is similar to that found for oscillations at 580 Hz in a burst from 4U 1636-53 ssz , that is, the peaks of the modulations are harder than the minima.
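For concreteness, the following sketch illustrates the folding step: events are assigned phases from a smoothly drifting frequency model and the mean PCA channel is accumulated in 12 phase bins. The exponential drift law shown here and all numbers in the example (frequency, drift amplitude, count rates, the injected 4% hardness modulation) are illustrative stand-ins, not the fitted values from these bursts.

```python
# Sketch of phase folding on a drifting frequency model and computing the mean
# PCA channel in 12 phase bins.  All parameters and the mock data are illustrative.
import numpy as np

def phase_model(t, nu0, d_nu, tau):
    """Pulse phase for nu(t) = nu0*(1 - d_nu*exp(-t/tau)), i.e. the integral of nu(t)."""
    return nu0 * (t + d_nu * tau * (np.exp(-t / tau) - 1.0))

def folded_mean_channel(times, channels, nu0, d_nu, tau, n_bins=12):
    """Mean PCA channel versus pulse phase."""
    phase = phase_model(times, nu0, d_nu, tau) % 1.0
    bins = np.floor(phase * n_bins).astype(int)
    return np.array([channels[bins == b].mean() for b in range(n_bins)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    nu0, d_nu, tau = 330.0, 1e-3, 3.0            # Hz, fractional drift, seconds
    t = np.sort(rng.uniform(0.0, 8.0, 50_000))   # fake event times in a burst tail
    # fake spectral modulation: harder (higher mean channel) at pulse maximum
    ph = phase_model(t, nu0, d_nu, tau) % 1.0
    chan = rng.poisson(20.0 * (1.0 + 0.04 * np.cos(2 * np.pi * ph)))
    print(folded_mean_channel(t, chan, nu0, d_nu, tau))
```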
The single burst data show obvious modulations of the mean PCA channel with pulse phase; however, they do not show evidence for any asymmetry such as could be introduced by the rotational Doppler shift. This is mainly due to the lack of sufficient signal in a single burst. In an attempt to overcome these limitations I co-added the data for all 4 bursts in phase sm . This amounts to finding a phase offset, relative to one burst, which maximizes the pulsation signal of the sum. I then computed the mean PCA channel for the summed spectra in the same way as for the individual bursts. The results are given in Figure 2. There is a strong 4 % modulation of the mean PCA channel in the combined data. As a simple test for asymmetry I fit a sine function to the mean PCA channel data and find that the data are adequately described by a simple sine wave, so we do not find any strong evidence for a Doppler shift in the combined data. It still remains to conduct more sophisticated tests for asymmetry on these data, and more bursts will be added as new data becomes available.
# The RASSCALS: An X-ray and Optical Study of 260 Galaxy Groups
## 1. Introduction
The *ROSAT* and *ASCA* missions have shown that even low mass systems of galaxies contain a hot intergalactic plasma. Many of the Hickson (1982) compact groups (HCGs) are embedded in diffuse X-ray emission detectable by the *ROSAT* (Ebeling, Voges, & Böhringer 1994; Pildis, Bregman, & Evrard 1995; Ponman et al. 1996). Other *ROSAT* studies of smaller group samples (e.g. Henry et al. 1995; Burns et al. 1996; Mulchaey et al. 1996; Mahdavi et al. 1997; Mulchaey & Zabludoff 1998) confirm the existence of an intergalactic plasma with an average temperature $`kT1`$ keV in many loose groups.
Although heterogeneous studies abound, an objective survey of the nearby universe for X-ray emitting groups is lacking. A search based on a large optical catalog, drawn objectively from a three dimensional map of the large scale structure, is essential for understanding the physical properties of galaxy groups. Here we construct the first such catalog. Our goals are (1) to investigate similarity breaking in the fundamental scaling laws of systems of galaxies, (2) to make the first calculation of the X-ray selection function of galaxy groups, and (3) to place firm limits on the fraction of optically selected groups that are bound.
One important example of similarity breaking is the relationship between the X-ray luminosity $`L_X`$ and the average plasma temperature $`T`$. Ponman et al. (1996) show that the $`L_XT`$ relation is quite steep for compact groups, with $`L_X\propto T^5`$, whereas for rich clusters $`L_X\propto T^3`$. This result is consistent with a “preheating” scenario where winds from supernovae in galaxies undergoing the starburst phase leave their mark on the poorest systems. Such winds would deplete the intragroup plasma (Davis, Mulchaey, & Mushotzky 1999; Hwang et al. 1999), raise the gas entropy relative to the gravitational collapse value (Ponman, Cannon, & Navarro 1999), and preferentially dim the systems with the lowest temperatures (Cavaliere, Menci, & Tozzi 1997).
Systems in hydrostatic equilibrium should have $`T\propto \sigma _p^2`$, where $`\sigma _p`$ is the velocity dispersion of the dark matter halo in which the galaxies are embedded. Thus one might expect that the $`L_X\sigma _p`$ and the $`L_XT`$ relations for groups of galaxies steepen in a similar manner. Here we show that quite the opposite is true. The groups with the smallest velocity dispersions are in fact overluminous compared to the $`L_X\propto \sigma ^4`$ law valid for higher velocity dispersion systems. Thus the similarity breaking in the $`L_XT`$ law is apparently incommensurate with the break in the $`L_X\sigma _p`$ relation. We discuss several plausible explanations for this lack of concordance.
The paper is organized as follows. After constructing the catalog (§2), we examine the $`L_X\sigma _p`$ relation (§3), calculate the selection function (§4), discuss the $`L_X\sigma _p`$ flattening (§5), and summarize our findings (§6). We call our groups the *ROSAT* All-Sky Survey—Center for Astrophysics Loose Systems, or RASSCALS.
## 2. Data
### 2.1. Optical Group Selection
We extract the optical group catalog for the RASSCALS study from two complete redshift surveys. Our catalog includes a wide variety of systems, from groups with only $`5`$ members to the Coma cluster. (In a previous work, Mahdavi et al. 1999, we referred to the Center for Astrophysics–SSRS2 Optical Catalog, or CSOC, as a distinct entity from the RASSCALS, which was to be the X-ray catalog. We no longer make that distinction, and refer to the combined X-ray/optical catalog simply as the RASSCALS.)
The Center For Astrophysics Redshift Survey (Geller & Huchra 1989; Huchra et al. 1990; Huchra, Geller, & Corwin 1995; CfA) and the Southern Sky Redshift Survey (Da Costa et al. 1994; Da Costa et al. 1998), both complete to a limiting Zwicky magnitude $`m_z15.5`$, serve as sources for the RASSCALS. The portion of the surveys we use covers one fourth of the sky in separate sections described in Table 1. We transform the redshifts to the Local Group frame ($`\mathrm{\Delta }cz=300\mathrm{sin}l\mathrm{cos}b`$), and correct them for infall toward the center of the Virgo cluster (300 km s<sup>-1</sup> towards $`\alpha _{2000}=12^\mathrm{h}31.2\mathrm{m}`$, $`\delta _{2000}=12`$°2.54).
We use the two-parameter friends-of-friends algorithm (FOFA) to construct the optical catalog. Huchra & Geller (1982) first described the FOFA for use with redshift surveys, and Ramella, Pisani, & Geller (1997) applied it to the NRG data. The FOFA is a three-dimensional algorithm which identifies regions with a galaxy overdensity $`\delta \rho /\rho `$ greater than some specified threshold. A second fiducial parameter, $`V_0`$, rejects galaxies in the overdense region which are too far removed in velocity space from their nearest neighbor. The N-body simulations of Frederic (1995) and Diaferio (1999) show that the Huchra & Geller (1982) detection method misses few real systems, at the cost of including some spurious ones. We apply the FOFA to the combined NRG, SRG, and SS2 redshift surveys with $`\delta \rho /\rho =80.`$
The RASSCALS optical catalog contains 260 systems with $`n\geq 5`$ members and 3000 km s<sup>-1</sup> $`\leq cz\leq `$ 12000 km s<sup>-1</sup>. The low velocity cutoff rejects systems that cover a large area on the sky and thus may be affected by the Local Supercluster. The median recession velocity for the systems is 7000 km s<sup>-1</sup>; the effects of cosmology and evolution are negligible throughout the sample. Table 2 lists the individual groups and their properties. Figures 1–2 show the sky positions of the member galaxies for the systems with statistically significant extended X-ray emission in the RASS.
To compare the membership of groups which have different redshifts we also compute $`n_{17}`$, the number of group members brighter than an absolute magnitude $`M_z=-17`$, corresponding to $`m_z=15.5`$ for a group at $`cz=3200`$ km s<sup>-1</sup>. To calculate $`n_{17}`$, we assume that the galaxies in groups have the same luminosity function as the Center for Astrophysics Redshift Survey (Marzke et al. 1994), reconvolved with the magnitude errors. The resulting distribution is well-represented by a Schechter (1976) function with a characteristic absolute magnitude $`M_{*}=-19.1`$ and a faint-end slope $`\alpha =-1`$. Table 2 lists $`n_{17}`$, which has a median value of 44.
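The rescaling can be written compactly. The sketch below computes the correction factor as a ratio of Schechter-function counts, assuming $`\alpha =-1`$ (so that the integral reduces to the exponential integral $`E_1`$), $`M_{*}=-19.1`$, $`m_z=15.5`$, and $`H_0=100`$ km s<sup>-1</sup> Mpc<sup>-1</sup>; the example group is made up.

```python
# Sketch of the n_17 distance correction: scale the observed membership by the
# ratio of Schechter counts brighter than M = -17 to counts brighter than the
# survey limit at the group redshift (alpha = -1, M* = -19.1; example group invented).
import numpy as np
from scipy.special import exp1   # E1(x) = Gamma(0, x): the alpha = -1 case

M_STAR = -19.1
M_LIM_APP = 15.5     # limiting apparent Zwicky magnitude
H0 = 100.0           # km/s/Mpc

def counts_brighter(m_abs):
    """Relative number of galaxies brighter than m_abs for a Schechter LF with alpha = -1."""
    x = 10.0 ** (-0.4 * (m_abs - M_STAR))   # L/L*
    return exp1(x)

def n17(n_members, cz):
    """Membership rescaled to the number of members brighter than M = -17."""
    d_mpc = cz / H0
    m_abs_lim = M_LIM_APP - 25.0 - 5.0 * np.log10(d_mpc)   # absolute-magnitude limit
    return n_members * counts_brighter(-17.0) / counts_brighter(m_abs_lim)

if __name__ == "__main__":
    for cz in (3200.0, 7000.0, 12000.0):
        print(f"cz = {cz:6.0f} km/s: a 10-member group has n_17 = {n17(10, cz):.0f}")
```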
### 2.2. X-Ray Field Selection
For every system in the RASSCALS optical catalog, we obtain X-ray data from a newly processed version of the *ROSAT* All-Sky Survey (Voges et al. 1999), which corrects effects leading to a low detection rate in the original reduction.
We first assign each system a seven-character name, beginning with “NRG,” “SRG,” or “SS2,” followed by “b” or “s” (specifying the angular size of the system as “big,” with $`cz<8500`$ km s<sup>-1</sup> or “small,” with $`cz>8500`$ km s<sup>-1</sup>, respectively), followed by a three-digit number.
For each “small” system we extract a square field measuring $`2^{\circ }\times 2^{\circ }`$ from the RASS; for the “big” systems we extract a $`3.5^{\circ }\times 3.5^{\circ }`$ square. The fields are centered at the mean RA and DEC of the galaxies; every field is at least large enough to include a circle with a projected radius of $`1h_{100}^{-1}`$ Mpc around the optical center of the system it contains. We use photons in the 0.5–2.0 keV hard energy band of the Position-Sensitive Proportional Counter (PSPC channels 52-201).
### 2.3. Detection Algorithm
The X-ray detection algorithm consists of four steps: measurement of the background, source identification, decontamination, and measurement of the source flux.
We determine the mean background by temporarily rebinning the exposure-weighted *ROSAT* field into an image with $`15\mathrm{}`$ pixels. We clean the image of all fluctuations with an iterative, $`2.5\sigma `$ clipping algorithm. The adopted background is then the average of the remaining pixel values.
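A minimal implementation of this clipping step is shown below; the convergence criterion and the synthetic image (a flat Poisson background plus one bright contaminating source) are our own illustrative choices.

```python
# Minimal sketch of iterative 2.5-sigma clipping to estimate the mean background
# of a rebinned count image.  The synthetic image is illustrative only.
import numpy as np

def clipped_mean(image, n_sigma=2.5, max_iter=50):
    """Mean of the pixel distribution after iteratively rejecting outliers."""
    pixels = np.asarray(image, dtype=float).ravel()
    for _ in range(max_iter):
        mean, std = pixels.mean(), pixels.std()
        keep = np.abs(pixels - mean) <= n_sigma * std
        if keep.all():
            break
        pixels = pixels[keep]
    return pixels.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    img = rng.poisson(3.0, size=(80, 80)).astype(float)   # flat background
    img[35:45, 35:45] += 30.0                              # bright contaminating source
    print(f"raw mean = {img.mean():.2f}, clipped mean = {clipped_mean(img):.2f}")
```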
To estimate the probability that a given group is an X-ray source, we use an optical galaxy position template (GPT). Mahdavi et al. (1997), who search the RASS for X-ray emission from a small subset of our sample, describe this method in greater detail. The GPT is defined as the union of all projected $`d=0.2h_{100}^{-1}`$ Mpc regions around the group members, excluding any galaxies isolated by more than $`d`$ from the rest of the group. We count the X-ray photons within the GPT and evaluate the probability that they are drawn from the background distribution. All groups that have a detection significance greater than $`2.5\sigma `$ progress to the next step.
We identify the emission peak which coincides most closely with the optical center of the group as its X-ray counterpart, and calculate the X-ray position of the group with the intensity-weighted first moment of the pixel values. Using standard maximum likelihood techniques, we identify contaminating X-ray point sources over the entire field. We remove these sources by excising a ring of radius 3$`\mathrm{}`$, roughly three times the full width at half maximum of the *ROSAT* PSPC point spread function (PSF). Unrelated extended sources often contaminate the group emission; we use a suitably larger aperture to remove them. We have examined publicly available ROSAT High-Resolution Imager (HRI) observations of a few groups, and find that our RASS decontamination procedure is satisfactory.
To reject groups with entirely pointlike X-ray emission, we calculate $`N(R)`$, the cumulative distribution of the ROSAT counts. We use the Kolmogorov-Smirnov (KS) test to compare the shape of the emission peak with that of the PSPC PSF combined with the background. We take sources with $`P_{\mathrm{KS}}\leq 0.05`$ as inconsistent with the PSF.
Finally, we convert the PSPC count rate into $`L_X(R)`$, the 0.1–2.4 keV X-ray luminosity contained within a ring of projected radius $`R`$. The Appendix describes the flux conversion procedure in detail.
### 2.4. Core Radius Estimation
Here we describe a procedure for identifying a physical scale for the X-ray emission. There is a great deal of evidence that the emissivity profiles of clusters of galaxies exhibit a characteristic scale, or core radius, $`r_c`$ (e.g. Jones & Forman 1984; Mohr, Mathiesen, & Evrard 1999). For example, the “$`\beta `$-model” emissivity profile frequently used to fit observations of dynamically relaxed systems,
$$ϵ(r)\left(1+\frac{r^2}{r_c^2}\right)^{3\beta },$$
(1)
is nearly constant for physical radii $`rr_c`$, and scales as $`r^{6\beta }`$ for $`rr_c`$. There is also evidence for cuspy profiles ($`ϵr^1`$ for $`rr_c`$) in clusters with cooling flows (Thomas 1998).
The usual method for measuring $`r_c`$ from X-ray observations consists of projecting $`ϵ(r)`$ along one dimension, and fitting the resulting surface brightness profile to the data. This approach has the disadvantage that the resulting estimate of $`r_c`$ is model-dependent, and is often strongly correlated with the slope parameter $`\beta `$, even with very high quality data (e.g. Jones & Forman 1984; Neumann & Arnaud 1999). Furthermore, its application to the RASSCALS is limited, because the small number of counts and the relatively large uncertainty in the background make it difficult to reconstruct accurate surface brightness profiles for all but the brightest systems.
We therefore use the Nonparametric Core Radius Estimator (NOCORE; Mahdavi 2000) to avoid the core fitting procedure and its associated uncertainties. NOCORE is model-independent; it does not require an estimation of the background, and it relies on the properties of the integrated emission profile, rather than the differential profile, to estimate the core radius. Its only assumption is the constancy of the background level at the position of the object of interest.
Now we outline the procedure. Consider the measured count rate within an annulus $`R`$ from the X-ray center of the group: it consists of the emission of the group itself, $`S(R)`$, plus the constant background count rate per unit area, $`B`$:
$$N(R)=S(R)+\pi R^2B.$$
(2)
The fundamental basis of NOCORE is the observation that the quantity $`N(R)k^2N(R/k)`$, where $`k`$ is a number greater than 1, is completely independent of the constant background. Formally, we define the NOCORE radius as the radius where the function
$$\xi (R)\frac{N(R)4N(R/2)}{R}$$
(3)
has a global minimum. The division by $`R`$ is necessary to obtain a detectable minimum.
Figure 2.3a shows $`\xi (R)`$ for theoretical $`\beta `$-models with $`\beta =0.65`$ and $`\beta =1`$. The minimum, $`R_\xi `$, is well-defined in both cases. The location of $`R_\xi `$ as a function of $`\beta `$ appears in Figure 2.3b. We have carried out numerical tests of the method, adding Poisson noise and a background to a variety of $`\beta `$-models, to verify that we recover the appropriate core radius without bias. When applying the method to the observations, we use bootstrap resampling to determine 68% confidence intervals on $`R_\xi `$.
As long as there is a characteristic scale in the emissivity profile of a system, NOCORE will find it. The function $`\xi (R)`$ has a well-defined minimum even in cases when the $`\beta `$-model is not a good description of the emissivity, for example systems with a cooling flow. If there is more than one characteristic scale in the profile—if the emissivity has features at several radii, because of substructure in the cluster, for example—then $`\xi (R)`$ has more than one minimum. If the profile is a pure power law, NOCORE shows no core; $`\xi (R)`$ is then a monotonically decreasing or increasing function of $`R`$.
We use the radius at which $`\xi (R)`$ is minimum as a measure of the physical scale of the X-ray emission.
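The statistic is straightforward to evaluate directly from a photon list. The sketch below builds the cumulative profile $`N(R)`$, forms $`\xi (R)`$ as in Eq. (3), and locates its minimum for a mock $`\beta `$-model plus flat background; all mock parameters are illustrative, and the recovered $`R_\xi `$ is a $`\beta `$-dependent multiple of the input core radius rather than the core radius itself.

```python
# Sketch of the NOCORE statistic of Eq. (3): cumulative counts N(R) from photon
# radii, xi(R) = [N(R) - 4 N(R/2)]/R, and its global minimum.  Mock beta-model data.
import numpy as np

def xi_profile(r_photons, r_grid):
    """xi(R) evaluated on r_grid from projected photon radii."""
    r_sorted = np.sort(r_photons)
    n_r = np.searchsorted(r_sorted, r_grid, side="right").astype(float)
    n_half = np.searchsorted(r_sorted, r_grid / 2.0, side="right").astype(float)
    return (n_r - 4.0 * n_half) / r_grid

def nocore_radius(r_photons, r_grid):
    xi = xi_profile(r_photons, r_grid)
    return r_grid[np.argmin(xi)], xi

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    r_c, beta, n_src, n_bkg, r_max = 1.0, 0.65, 4000, 4000, 10.0
    # rejection-sample source radii from p(r) ~ r * (1 + r^2/r_c^2)^(0.5 - 3*beta)
    r_try = rng.uniform(0.0, r_max, 200_000)
    w = r_try * (1.0 + (r_try / r_c) ** 2) ** (0.5 - 3.0 * beta)
    keep = rng.uniform(0.0, 1.0, r_try.size) < w / w.max()
    r_src = r_try[keep][:n_src]
    r_bkg = r_max * np.sqrt(rng.uniform(0.0, 1.0, n_bkg))   # uniform (flat) background
    grid = np.linspace(0.2, r_max, 300)
    r_xi, _ = nocore_radius(np.concatenate([r_src, r_bkg]), grid)
    print(f"R_xi = {r_xi:.2f} for a beta-model with r_c = {r_c} "
          f"(R_xi/r_c depends on beta)")
```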
### 2.5. General Properties of the Final Catalog
Table 3 lists the projected velocity dispersion, $`\sigma _p`$, the 0.1–2.4 keV X-ray luminosity, and the NOCORE radius $`R_\xi `$ of the detected groups. There are 61 detections, of which two (NRGs372 and NRGs392) are bright X-ray clusters (Abell 2147 and 2199) that the friends-of-friends algorithm has broken up into pieces. We count these clusters as detections but we do not calculate luminosities for them. Figures 1–2 show galaxy positions and X-ray emission contours for the detected groups.
Because the diffuse X-ray emission is generally a marker of gas held in a gravitational potential, these systems are probably bound configurations. There is, however, a chance that the X-ray emission might be due to projection along an unbound filament in the large scale structure of the universe (Hernquist, Katz, & Weinberg 1995). The galaxies projected along this line of sight might also have similar redshifts without being bound. However, deeper redshift surveys in the fields of X-ray emitting RASSCALS show that a number of these systems have velocity dispersion profiles $`\sigma _p(R)`$ that decline as a function of projected distance from the group center; these profiles are consistent with the expectation for a relaxed dynamical system (Mahdavi et al. 1999).
In Figures 12 we also show the cumulative luminosity profile and the NOCORE estimator $`\xi (R)`$. In several cases $`\xi (R)`$ appears to have several local minima in addition to the global minimum. The local minima are almost always due to Poisson noise and deviations from spherical symmetry in the structure of the gas. The confidence intervals on $`R_\xi `$ take these fluctuations into account: when the fluctuations in $`\xi (R)`$ dominate its shape, the error in $`R_\xi `$ is large. But when $`\xi (R)`$ has a well-defined global minimum and the fluctuations are small, $`R_\xi `$ is relatively well determined.
## 3. $`L_X\sigma _p`$ Relation
Here we examine the relationship between the X-ray luminosity and the projected velocity dispersion. First we comment on the link between the $`L_X\sigma _p`$ scaling law, which relates X-ray and optical data, and the $`L_XT`$ scaling law, which is internal to X-ray data. Then we describe the actual relation.
### 3.1. Background on the Scaling Laws
If the member galaxies trace the total mass distribution in a cluster, a simple theoretical calculation (Quintana & Melnick 1982) predicts that a spherically symmetric ball of gas should have $`L_X\propto f^2\sigma _p^3T^{1/2}`$, where $`f`$ is the ratio of the gas mass to the total mass. A further, common assumption is that $`T\propto \sigma _p^2`$, i.e., that the emission-weighted gas temperature is proportional to the depth of the gravitational potential. These assumptions yield $`L_X\propto f^2T^2\propto f^2\sigma ^4`$.
The observed $`L_X\sigma _p`$ relation for rich clusters is in good agreement with the theoretical prediction; Quintana & Melnick (1982) and Mulchaey & Zabludoff (1998), for example, find slopes consistent with $`L_X\propto \sigma ^4`$. The empirical $`L_XT`$ relation, however, is somewhat steeper than expected, with most finding $`L_X\propto T^{2.75}`$, even after removing the central cooling flow region (e.g., Markevitch 1998 and references therein).
If the discrepancy between the simple theoretical prediction, $`L_X\propto T^2`$, and the observations is real, several effects might explain it. It could be that $`f`$ increases slightly with $`T`$ (David, Jones, & Forman 1995), or that $`T\propto \sigma ^{1.5}`$, consistent with a nonisothermal, polytropic gas distribution (Wu, Fan, & Xu 1998). Finally, preheating of gas in the $`kT<4`$ keV systems may preferentially dim them, leading to a steeper relation. Ponman, Cannon, & Navarro (1999) show that this latter possibility is particularly attractive because it also accounts for differences in the shapes of X-ray surface brightness profiles among $`kT<4`$ keV and $`kT>4`$ keV clusters. Cavaliere et al. (1997) work out the $`L_XT`$ relation for this scenario, and find that it steepens gradually as $`T`$ declines, with $`L_X\propto T^5`$ for poor groups, $`L_X\propto T^3`$ for 2 keV $`<kT<`$ 7 keV systems, and $`L_X\propto T^2`$ for the hottest clusters. This $`L_XT`$ relation fits temperatures and luminosities for a range of systems from poor groups to clusters.
Now, if a single power law describes the scaling of the velocity dispersion $`\sigma _p`$ with the temperature $`T`$, and the Cavaliere et al. (1997) preheating model is correct, one should observe a similarly steep $`L_X\sigma _p`$ relation for poor groups. Three different works have attempted a measurement of the faint end of this relation, with three different results.
1. Ponman et al. (1996) analyze a mixture of pointed and RASS observations of a sample of Hickson (1982) Compact Groups (HCGs hereafter). They obtain $`L_X\sigma _p^{4.9\pm 2.1}`$ for the groups with pointed observations. While this result favors a steeper slope, the 68% confidence interval is quite large: the HCGs contain as few as three member galaxies, and hence it is very difficult to estimate the correct velocity dispersion. Also, HCGs are often embedded in much richer systems (Ramella et al. 1994), and this embedding may further bias the value of the velocity dispersions.
2. Mulchaey & Zabludoff (1998; MZ98 hereafter) carry out deep optical spectroscopy for a more limited sample of poor groups with pointed *ROSAT* observations. Because they obtain $`30`$ members per group, their derived velocity dispersions should be more reliable than those of Ponman et al. (1996). They obtain $`L_X\sigma ^{4.3\pm 0.4}`$ for a combined sample of groups and clusters.
3. Mahdavi et al. (1997) use our method to examine *ROSAT* data for a small but statistically complete subset of the RASSCALS. They do not, however, excise emission from individual galaxies; furthermore, they assume a constant plasma temperature $`kT=1`$ keV, rather than leaving $`T`$ free to vary as we do here. They obtain $`L_X\sigma _p^{1.56\pm 0.25}`$, much shallower than either Ponman et al. (1996) or MZ98.
In summary, Ponman et al. (1996) find an $`L_X\sigma _p`$ relation consistent with the simplest predictions of preheating models; MZ98 derive a relation that is consistent with the standard picture with no preheating; and Mahdavi et al. (1997) find that the faint-end slope is much shallower than the prediction of either of the two scenarios.
We now consider the $`L_X\sigma _p`$ relation for the complete set of RASSCALS. Our procedure differs from that of Mahdavi et al. (1997), because we remove contaminating sources whenever they are detectable, model the plasma temperature, and use an updated version of the RASS.
To compare the RASSCALS $`L_X\sigma _p`$ relation with that of richer systems, we take cluster X-ray luminosities from the paper by Markevitch (1998), where cooling flows are removed from the analysis. We use only clusters which have velocity dispersions listed in Fadda et al. (1996), who consider systems with at least $`30`$ measured redshifts. Table 4 lists these data. Figure 3 shows the combined cluster-RASSCALS data. The $`L_X\sigma _p`$ relation seems to flatten as the luminosity decreases.
### 3.2. Details of the Fitting Procedure
To place a quantitative constraint on the degree of flattening, we fit a broken power law of the form
$`\mathrm{log}{\displaystyle \frac{L_X}{L_k}}`$ $`=`$ $`s(\sigma _p,\sigma _k)\mathrm{log}{\displaystyle \frac{\sigma _p}{\sigma _k}};`$ (4)
$`s(\sigma _p,\sigma _k)`$ $`=`$ $`\{\begin{array}{cc}s_1& \mathrm{if}\sigma _p<\sigma _k\hfill \\ s_2& \mathrm{if}\sigma _p>\sigma _k\hfill \end{array}`$ (7)
Here $`s_1`$ and $`s_2`$ are the faint-end and bright-end slopes, respectively, and $`(\sigma _k,L_k)`$ is the position of the knee of the power law. We then minimize a merit function appropriate for data with error in two coordinates (Press et al. 1995, §15.3),
$$\chi ^2=\underset{i=1}{\overset{n}{}}\frac{\left[\mathrm{log}(L_i/L_k)s(\sigma _i,\sigma _k)\mathrm{log}(\sigma _i/\sigma _k)\right]^2}{\left(\mathrm{\Delta }\mathrm{log}L_i\right)^2+s(\sigma _i,\sigma _k)^2\left(\mathrm{\Delta }\mathrm{log}\sigma _i\right)^2},$$
(8)
where $`(\sigma _i,L_i)`$ are the measurements, with errors $`(\mathrm{\Delta }\sigma _i,\mathrm{\Delta }L_i)`$. To minimize the $`\chi ^2`$ we apply the following procedure.
1. First, we fit a single power law by forcing $`s_1=s_2`$ and $`\mathrm{log}\sigma _k=0`$, and applying the Press et al. (1995, §15.3) package. The result appears as the dashed line in Figure 3. We call this best-fit power law slope $`s_0`$.
2. Next, we vary the position of the knee of the power law over a 50 $`\times `$ 50 grid with bounds $`\mathrm{log}\sigma _k=[2,3]`$ and $`\mathrm{log}L_k=[42,44]`$. At each point in the grid, we minimize the $`\chi ^2`$ over $`s_1`$ and $`s_2`$, using the Fletcher-Reeves-Polak-Ribiere algorithm, which makes use of gradient information (Press et al. 1995, §10.6). We start the minimization algorithm with $`s_1=s_2=s_0`$, and require $`0<s_1<10`$ and $`0<s_2<10`$ as priors. This procedure yields a function $`\chi _{\mathrm{min}}^2(\sigma _k,L_k)`$.
3. Finally, we minimize $`\chi _{\mathrm{min}}^2(\sigma _k,L_k)`$ to obtain the position of the knee of the power law and the best-fit slopes associated with it. The results of the fit appear in Figure 3.
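A simplified version of this fit is sketched below on mock data: a coarse grid over the knee position with one-dimensional minimizations for the two slopes, which is a cruder stand-in for the grid-plus-conjugate-gradient procedure just described. The mock sample, grid spacing, and error bars are illustrative, not the actual group and cluster data.

```python
# Simplified sketch of the broken power-law fit of Eqs. (4)-(8) on mock data:
# grid over (log sigma_k, log L_k), 1-D slope minimizations at each grid point.
import numpy as np
from scipy.optimize import minimize_scalar

def chi2_one_side(slope, dlogs, dlogl, dx, dy):
    """Eq. (8) restricted to the points on one side of the knee."""
    return np.sum((dy - slope * dx) ** 2 / (dlogl ** 2 + slope ** 2 * dlogs ** 2))

def fit_knee(log_s, log_l, dlog_s, dlog_l, sk_grid, lk_grid):
    best = (np.inf, None)
    for log_sk in sk_grid:
        lo = log_s < log_sk
        for log_lk in lk_grid:
            dx, dy = log_s - log_sk, log_l - log_lk
            s1 = minimize_scalar(chi2_one_side, bounds=(0, 10), method="bounded",
                                 args=(dlog_s[lo], dlog_l[lo], dx[lo], dy[lo])).x
            s2 = minimize_scalar(chi2_one_side, bounds=(0, 10), method="bounded",
                                 args=(dlog_s[~lo], dlog_l[~lo], dx[~lo], dy[~lo])).x
            total = (chi2_one_side(s1, dlog_s[lo], dlog_l[lo], dx[lo], dy[lo]) +
                     chi2_one_side(s2, dlog_s[~lo], dlog_l[~lo], dx[~lo], dy[~lo]))
            if total < best[0]:
                best = (total, (s1, s2, log_sk, log_lk))
    return best[1]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    log_s = rng.uniform(1.8, 3.1, 80)                    # mock log velocity dispersions
    s1_t, s2_t, sk_t, lk_t = 0.4, 4.0, 2.53, 42.7        # "true" broken power law
    log_l = lk_t + np.where(log_s < sk_t, s1_t, s2_t) * (log_s - sk_t)
    dlog_s, dlog_l = np.full(80, 0.05), np.full(80, 0.15)
    log_s, log_l = log_s + rng.normal(0, dlog_s), log_l + rng.normal(0, dlog_l)
    fit = fit_knee(log_s, log_l, dlog_s, dlog_l,
                   np.linspace(2.0, 3.0, 26), np.linspace(42.0, 44.0, 26))
    print("best fit (s1, s2, log sigma_k, log L_k):", np.round(fit, 2))
```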
We also try more robust estimators, such as the BCES bisector (Akritas & Bershady 1996). In general, these estimators are in good agreement with the results of the $`\chi ^2`$ fits; however, Akritas & Bershady (1996) do not provide a mechanism for assessing the quality of the fit, and their package does not allow for the calculation of joint two-dimensional confidence intervals. We therefore focus on the $`\chi ^2`$ statistic. Below we also consider how the fit changes with the inclusion of the 199 upper limits.
### 3.3. A Broken Power Law is the Best Fit
Our data unambiguously favor a broken power law over a single power law for the $`L_X\sigma _p`$ relation. The confidence contours in Figure 3 show that the faint-end slope and the bright-end slope are different at better than the 99.7% confidence interval. Furthermore, the scatter in the $`L_X\sigma _p`$ relation is actually reduced by fitting a broken power law instead of a single power law.
The faint-end slope, $`s_1=0.37\pm 0.3`$, is even shallower than the earlier finding of Mahdavi et al. (1997), $`s_1=1.56\pm 0.25`$, for their fit to 9 low-luminosity RASSCALS. The shallowness of our faint-end slope is remarkable because, unlike Mahdavi et al. (1997), we remove sources of individual emission whenever possible, and model the plasma temperature without fixing it at a particular value. The bright-end slope, $`s_2=4.02\pm 0.1`$, on the other hand, is consistent with MZ98, whose $`L_X\sigma _p`$ depends mainly on rich clusters; their single power law fit has a slope $`4.29\pm 0.37`$. A slight discrepancy is to be expected, because MZ98 use bolometric X-ray luminosities, and we measure the luminosities in the 0.1–2.4 keV spectral range. However, this discrepancy should cause only an $`\mathrm{\Delta }s_20.4`$ offset in the slope for clusters with $`L_X>10^{43}`$ ergs s<sup>-1</sup>. Systems with $`L_X<10^{43}`$ ergs s<sup>-1</sup> should have bolometric luminosities comparable to their 0.1–2.4 keV luminosities.
We stress that our fitting procedure in no way favors the shallower slope: we begin the $`\chi ^2`$ minimization by setting both slopes equal to the best-fit single power law. Also, a broken power law is the best fit even if we exclude the lowest velocity dispersion group, SRGb075, from the fit. Doing so, we would obtain $`s_1=1.39\pm 0.5`$ and $`s_2=3.99\pm 0.3`$.
Finally, we consider whether including the upper limits in the fit changes the derived slopes. For this task we obtain the Astronomy Survival Analysis Package (ASURV; Lavalley, Isobe, & Feigelson 1992) from http://www.astro.psu.edu/statcodes. ASURV implements the methods described in Isobe, Feigelson, & Nelson (1986) for regression of data which includes both detections and upper limits. Furthermore, ASURV allows for an intrinsic scatter in the relation.
For the 155 objects with $`\sigma _p<340`$ km s<sup>-1</sup>, we find that the best-fit slope is $`1.38\pm 0.4`$; for the 128 objects with $`\sigma _p>340`$ km s<sup>-1</sup>, it is $`5.37\pm 0.5`$. Thus the inclusion of upper limits does not bring the two slopes closer to each other; if anything, it strengthens our claim that the $`L_X\sigma _p`$ relation is best described by a broken power law.
## 4. Detection Statistics
We now examine the statistical properties of the catalog in greater detail. We seek a deeper understanding of the X-ray selection function of the RASSCALS. A useful tool for this purpose is the number distribution of a set of measurements $`x`$, which we label $`N(x)`$. For example, we call the number distribution of the group velocity dispersion $`N(\sigma _p)`$; the number distribution of the distance-corrected group membership is N($`n_{17}`$).
Although the traditional estimator of the number distribution is the histogram, we compute $`N(x)`$ using the DEDICA algorithm (Pisani 1993). DEDICA makes use of Gaussian kernels to arrive at a maximum-likelihood estimate of the number distribution. The resulting smooth function, $`N(x)`$ is more useful than a histogram, because $`N(x)`$ is nonparametric, and any structure within $`N(x)`$ is statistically significant. We normalize $`N(x)`$ so that the number of groups with $`x_1xx_2`$ is given by,
$$_{x_1}^{x_2}N(x)𝑑x.$$
(9)
### 4.1. An Abundance of Low $`\sigma _p`$ Systems
Figure 4a shows $`N(\sigma _p)`$ and $`N(n_{17})`$ separately for all RASSCALS and for those with significant X-ray emission. Interestingly, $`N(\sigma _p)`$ for all 260 groups is double peaked; there are 102 RASSCALS (39%) with $`\sigma _p<250`$ km s<sup>-1</sup>.
The abundance of these low $`\sigma _p`$ systems is puzzling considering that many of them probably contain unrelated galaxies (“interlopers”) (Frederic 1995), and that these interlopers typically lead to an overestimate, not an underestimate, of the velocity dispersion. Several plausible explanations for their frequent occurrence exist.
1. It may be that the groups with low $`\sigma _p`$ are unbound, chance projections along the line of sight. However, this situation is highly unlikely for systems drawn from a complete redshift survey. Chance superpositions in such a survey in fact have a larger mean velocity dispersion than the bound groups do (Ramella et al. 1997).
2. The groups might be pieces of sheets or filaments in the large scale distribution of matter. A collapsing sheet of galaxies which is removed from the Hubble flow, and which is perpendicular to the line of sight, might look like a group to the friends-of-friends algorithm.
3. The velocity dispersion $`\sigma _p`$ might not be related to the mass distribution in a straightforward manner. For example, Mahdavi et al. (1999) find that galaxy orbits in a subsample of the RASSCALS have a significant mean radial anisotropy; and Diaferio (2000) uses N-body simulations to show that systems with $`\sigma _p<300`$ km s<sup>-1</sup> have galaxy velocity dispersions that are uncorrelated with the total group mass.
### 4.2. Detection Efficiency
Here we use the groups we have detected as a basis for estimating (1) the number of RASSCALS with X-ray emission too faint to be observable by *ROSAT*, and (2) the number of RASSCALS with no X-ray emission, some of which might be unbound superpositions.
We begin by assuming that all the RASSCALS emit X-rays according to an empirical relationship between $`L_X`$ and $`\sigma _p`$. We compute the number of groups we expect to detect as a function of $`\sigma _p`$, and compare the theoretical detection probability with the true detection efficiency.
Suppose that all the RASSCALS emit X-rays according to a power law relationship between $`L_X`$ and $`\sigma _p`$,
$$\mathrm{log}L_X=s\mathrm{log}\sigma _p+b.$$
(10)
If the local flux detection threshold for a group at redshift $`z`$ is $`F_0`$, it will be detectable if $`\sigma _p>\sigma _0`$, where
$$\mathrm{log}\sigma _0=\frac{\mathrm{log}\left[4\pi F_0(1+z)^2c^2z^2\right]b}{s}$$
(11)
The theoretical probability that the group will be detected is then
$$P_{\mathrm{th}}(\sigma _p)=_{\mathrm{log}\sigma _0}^{\mathrm{}}p(\mathrm{log}\sigma _p)d\mathrm{log}\sigma _p,$$
(12)
where $`p(\mathrm{log}\sigma _p)`$ is the probability distribution function of $`\mathrm{log}\sigma _p`$. We calculate $`P_{\mathrm{th}}(\sigma _p)`$ for all 260 RASSCALS, taking $`s=4.02`$ and $`b=32.19`$ from the single power law determined in §3. We approximate $`\mathrm{log}\sigma _p`$ as a Gaussian with a standard deviation equal to 1.3 times the uncertainty given in Table 2 for each group. Multiplying the error in $`\sigma _p`$ by 1.3 is a way of spreading the uncertainty in the $`L_X\sigma _p`$ relation directly into $`P_{\mathrm{th}}(\sigma _p)`$. The resulting average theoretical probability of detecting a group with velocity dispersion $`\sigma _p`$ is well approximated by
$$P_{\mathrm{th}}(\sigma _p)=\frac{1}{2}+\frac{1}{2}\mathrm{erf}\left[4\left(\mathrm{log}\frac{\sigma _p}{250\mathrm{km}\mathrm{s}^1}\right)\right],$$
(13)
where $`\mathrm{erf}(x)`$ is the error function.
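Both Eq. (11) and the fitted form of Eq. (13) are trivial to evaluate numerically. In the snippet below the flux limit and redshift are illustrative assumptions on our part, and Eq. (11) is implemented with the same distance convention as Eq. (A3).

```python
# Evaluation of the detection probability of Eq. (13) and the velocity-dispersion
# threshold of Eq. (11).  The flux limit and redshift below are placeholders.
import numpy as np
from scipy.special import erf

def p_detect(sigma_p):
    """Average theoretical detection probability, Eq. (13)."""
    return 0.5 + 0.5 * erf(4.0 * np.log10(sigma_p / 250.0))

def sigma_threshold(flux_limit, z, s=4.02, b=32.19, c_km_s=2.9979e5, h0=100.0):
    """Minimum detectable sigma_p from Eq. (11), with the distance of Eq. (A3)."""
    d_cm = (c_km_s * z / h0) * 3.0857e24          # cz/H0 in Mpc, converted to cm
    log_lmin = np.log10(4.0 * np.pi * flux_limit * (1.0 + z) ** 2) + 2.0 * np.log10(d_cm)
    return 10.0 ** ((log_lmin - b) / s)

if __name__ == "__main__":
    for sig in (150.0, 250.0, 400.0, 600.0):
        print(f"sigma_p = {sig:4.0f} km/s: P_th = {p_detect(sig):.2f}")
    print(f"example threshold: sigma_0 = {sigma_threshold(1e-12, 0.023):.0f} km/s "
          f"for an assumed flux limit of 1e-12 erg/s/cm^2 at z = 0.023")
```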
Figure 4b shows the observed and the theoretical detection probabilities. The solid line represents the fraction of the RASSCALS we actually detect, $`P_{\mathrm{obs}}(\sigma _p)`$, and the short dashed line shows $`P_{\mathrm{th}}(\sigma _p)`$, the fraction of the RASSCALS we should detect given $`L_X\sigma ^4`$. The quotient,
$$f_X(\sigma _p)\frac{P_{\mathrm{obs}}(\sigma _p)}{P_{\mathrm{th}}(\sigma _p)},$$
(14)
appears as the long dashed line. $`f_X(\sigma _p)`$ represents the fraction of groups that should have extended X-ray emission in order that we detect our set of 61 RASSCALS. Remarkably, $`f_X`$ is a nearly constant $`40\%`$ for $`\sigma _p>150`$ km s<sup>-1</sup>, and rises steeply for $`\sigma _p<150`$ km s<sup>-1</sup>. The scatter around the theoretical probability $`P_{\mathrm{th}}(\sigma _p)`$ introduces a $`30\%`$ uncertainty in the breaking point $`\sigma _p=150`$ km s<sup>-1</sup>, but does not affect the result that $`f_X40\%\pm 8\%`$ above the breaking point.
Thus Figure 4b shows that we detect many fewer systems overall than expected from the raw $`L_X\sigma _p^4`$ relation. To match our observed detection efficiency, only $`40\%`$ of groups with $`\sigma _p>150`$ km s<sup>-1</sup> must have extended X-ray emission. The probability $`f_X`$ that a group contains X-ray emitting gas does not seem to increase with the group velocity dispersion.
On the other hand, the detection of the $`\sigma _p<150`$ km s<sup>-1</sup> groups exceeds the expectation from $`L_X\sigma _p^4`$. The theoretical probability of detecting any of these low-$`\sigma _p`$ groups is near 0, and yet we detect 6% of them. A flattening of the true $`L_X\sigma _p`$ relation for low velocity dispersion systems, of the kind we discuss in §3, resolves the discrepancy.
The result that only 40% of the $`\sigma _p>150`$ km s<sup>-1</sup> RASSCALS should emit X-rays has an interesting interpretation when combined with the predictions of N-body (Frederic 1995, Diaferio 1999) and geometric (Ramella et al. 1997) simulations of the local large-scale structure. These simulations suggest that $`>80\%`$ of groups with $`n5`$ members drawn from a complete redshift survey should be real, bound systems. If indeed $`>80\%`$ of the RASSCALS are bound, and our simulations are correct, then at least half the bound groups must possess a negligible amount of extended X-ray emission.
The X-ray data impose a lower limit of 40%, and the simulations impose an upper limit of 80%, on the fraction of RASSCALS that are real, bound systems of galaxies.
## 5. Discussion
The flattening of the $`L_X\sigma _p`$ relation for systems with $`\sigma _p<340`$ km s<sup>-1</sup> is now well established, not just by our study, but by pointed *ROSAT* observations of an independent sample of 24 groups (Helsdon & Ponman 2000). This similarity breaking is particularly striking, because it is in conflict with the $`L_XT`$ relation for systems of galaxies, which actually steepens as the temperature drops (Metzler & Evrard 1994; Ponman et al. 1996).
It is difficult to dismiss the $`L_X\sigma _p`$ flattening by claiming that the velocity dispersions of the discrepant groups are biased towards lower values. Most groups drawn from redshift surveys contain unrelated galaxies which tend to bias $`\sigma _p`$ towards larger values (Frederic 1995; Diaferio 1999). It is also no longer possible to argue (e.g. Mulchaey & Zabludoff 1998) that the flattening is due to a failure to remove detectable contamination. We, as well as Helsdon & Ponman (2000), remove such contamination to the extent allowed by the data.
The shallow $`L_X\sigma _p`$ slope may be explainable through the “mixed-emission” scenario proposed by Dell’Antonio et al. (1994). It is possible that a number of galaxies with faint, X-ray emitting ISMs are embedded within the intragroup medium. These individually emitting galaxies could contribute significantly to the total luminosity and place it above the virial value. A large fraction of such emission would be neither directly detectable nor removable, appearing instead as fluctuations in excess of Poisson and instrumental noise superposed on the central emission peak (Soltan & Fabricant 1990). Further verification of the mixed emission scenario thus depends on higher quality observations of the lowest velocity dispersion systems with the *Chandra* or *XMM* missions.
However, we can investigate whether the break in the $`L_X\sigma _p`$ relation is linked to other physical properties of the RASSCALS. One possibility is that the excess emission is characteristic of the dynamically youngest groups, those perhaps still in the process of formation. An indicator of such a dynamical state might be the fraction of spiral member galaxies, $`f_{\mathrm{sp}}`$. If the dominant process for the formation of elliptical galaxies in groups is galaxy-galaxy interaction, then one might expect a system with $`f_{\mathrm{sp}}0`$ to be much more evolved than a group composed mainly of spiral galaxies.
Figure 5a shows a weak correlation between the X-ray luminosity and the spiral fraction. The correlation between the two quantities is barely significant (Kendall’s $`\tau =0.109`$, with $`P=0.22`$, a 1-$`\sigma `$ result). It is noteworthy that no system with $`f_{\mathrm{sp}}\gtrsim 0.5`$ is more luminous than $`5h_{100}^{-2}\times 10^{42}`$ ergs s<sup>-1</sup>: groups that are spiral-dominated tend to have below average X-ray luminosities.
However, closer inspection reveals that the spiral fraction is not related to the $`L_X\sigma _p`$ flattening. SRGb075, the X-ray emitting group with the smallest velocity dispersion ($`\sigma _p=60`$ km s<sup>-1</sup>) has $`f_{\mathrm{spi}}=0.2`$. The group with the next smallest $`\sigma _p`$, SS2b293, has $`f_{\mathrm{sp}}=0.33`$, and the following group, NRGb045, has $`f_{\mathrm{sp}}=0.2`$. Although a trend relating $`f_{\mathrm{sp}}`$ and $`L_X`$ probably exists, the RASSCALS that are responsible for the flattening of the $`L_X\sigma _p`$ relation have spiral fractions comparable to those of higher velocity dispersion groups.
Another possible indicator of the dynamical age of a system of galaxies is its crossing time. Groups where galaxies have completed many orbits might be closer to dynamical equilibrium than those where the galaxies have made only a few crossings. We note, however, that if the accretion of external galaxies plays a significant role in the evolution of a group, it may not reach dynamical equilibrium even after many crossing times have passed (Diaferio et al. 1993).
The crossing time of a system in units of the Hubble time is roughly $`t_c=RH_0/\sigma _p`$, where $`R`$ is the characteristic size. Because many of the groups in our sample have fewer than 9 members, computing $`R`$ from the optical data is likely to lead to large errors in $`t_c`$. Instead, we use the NOCORE radius (§2.4) to estimate the crossing time, $`t_c=R_\xi H_0/\sigma _p`$. The NOCORE radius provides a characteristic scale for the X-ray emission, and hence for the gravitational potential of each group.
There is a significant correlation between $`L_X`$ and $`t_c`$ in Figure 5b (Kendall’s $`\tau =0.228`$, $`P=0.01`$, a nearly 3-$`\sigma `$ result). Of course, this effect follows directly from the relationship between $`L_X`$ and $`\sigma _p`$ (which exhibit a 10-$`\sigma `$ correlation). Because $`R_\xi `$ is uncorrelated with $`\sigma _p`$, the inclusion of $`R_\xi `$ increases the scatter.
However, the $`L_Xt_c`$ comparison does reveal an interesting property of the groups which contribute to the flattening of the $`L_X\sigma _p`$ correlation. These groups have $`t_c>0.3H_0^{-1}`$; they have longer crossing times than the groups in the steeper, $`L_X\propto \sigma ^4`$ portion of the relation. Thus we have an indication that the X-ray overluminous groups are also the ones where the crossing time is a large fraction of the Hubble time. An explanation of this result in terms of the dynamical histories of the low-$`\sigma _p`$ groups awaits a much deeper optical and X-ray probe of their structure.
## 6. Conclusion
The RASSCALS are the largest extant combined X-ray and optical catalog of galaxy groups. We draw the systems from two redshift surveys that have a limiting magnitude of $`m_z=15.5`$ and cover $`\pi `$ ster of the sky. There are 260 systems, of which 23% have statistically significant X-ray emission in the *ROSAT* All-Sky Survey after we remove contamination from unrelated sources. We include a catalog of the systems.
We calculate the X-ray selection function for our sample. The behavior of the function implies that only 40% of the RASSCALS are intrinsically X-ray luminous. The remaining $``$ 60% of the RASSCALS are either chance superpositions, or bound systems devoid of hot gas.
We examine the relationship between the X-ray luminosity $`L_X`$ and the velocity dispersion $`\sigma _p`$ for the 59 high-quality RASSCALS and a representative sample of 25 rich clusters not internal to our data. The best fit relation is a broken power law with $`L_X\propto \sigma _p^{0.37\pm 0.3}`$ for $`\sigma _p<340`$ km s<sup>-1</sup>, and $`L_X\propto \sigma _p^{3.9\pm 0.1}`$ for $`\sigma _p>340`$ km s<sup>-1</sup>. Whether we include the upper limits in our analysis, or assume a dominant intrinsic scatter in the relation, a broken power law with a shallow faint-end slope is still a better fit than a single power law.
Stressing that we have been careful to remove contamination from individual galaxies and unrelated sources, we conclude that the flattening in the $`L_X\sigma _p`$ relation for groups of galaxies is a physical effect. A potential mechanism for the excess luminosity of the faintest systems is the “mixed emission” scenario (Dell’Antonio et al. 1994): the emission from the intragroup plasma may be irrecoverably contaminated by a superposition of diffuse X-ray sources corresponding to the hot interstellar medium of the member galaxies. A final explanation of the flattening of the $`L_X\sigma `$ relation must focus on the detailed X-ray and optical structure of the groups with small velocity dispersions ($`\sigma _p<150`$ km s<sup>-1</sup>).
We plan to calculate the X-ray luminosity function of the RASSCALS soon. Deep optical spectroscopy of these systems is already underway, and the first results appear in Mahdavi et al. (1999).
We are grateful to the anonymous referee and the editor, Gregory Bothun, for comments which led to significant improvement of the paper. We thank Saurabh Jha and Trevor Ponman for useful discussions. This research was supported by the National Science Foundation (A. M.), the Smithsonian Institution (A. M., M. J. G.), and the Italian Space Agency (M. R.).
## Appendix A Luminosity, Flux and Temperature Calibration
Here we describe the procedure we use to convert the *ROSAT* PSPC count rate into a 0.1–2.4 keV X-ray luminosity. Our procedure does not require fixing or guessing the plasma temperature. Instead, we fold the uncertainty in the temperature directly into the derived luminosities.
The decontaminated source count rate within a ring of projected radius $`R`$ is
$$S=\frac{\pi R^2}{\pi R^2A_{\mathrm{clean}}}\left(\underset{i}{}\frac{N_i}{E_i}\right)\pi R^2B,$$
(A1)
where $`A_{\mathrm{clean}}`$ is the area of the portion of the ring removed during the decontamination process, $`N_i`$ is the total count rate within pixel $`i`$, $`E_i`$ is the exposure time within pixel $`i`$, $`B`$ is the average background count rate per unit area on the sky, and the summation is over all pixels within the ring. Pixels which fall partially inside the ring are appropriately subdivided. The error in the source count rate, $`\sigma _S`$, is given by
$$\sigma _S^2=\left(\frac{\pi R^2}{\pi R^2-A_{\mathrm{clean}}}\right)^2\left(\sum _i\frac{N_i}{E_i^2}\right)+\left(\pi R^2\sigma _B\right)^2,$$
(A2)
where $`\sigma _B`$ is the uncertainty in the background.
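For concreteness, Eqs. (A1)–(A2) translate directly into code. In the sketch below the pixel arrays, aperture geometry, and background level are invented placeholders; only the algebra follows the equations above.

```python
import numpy as np

def decontaminated_rate(counts, exposure, R, A_clean, B, sigma_B):
    """Source count rate and its error in a ring of radius R (Eqs. A1-A2).

    counts, exposure : per-pixel counts N_i and exposure times E_i for the
                       pixels kept after decontamination
    A_clean          : area removed from the ring during decontamination
    B, sigma_B       : background count rate per unit area and its uncertainty
    """
    area = np.pi * R**2
    correction = area / (area - A_clean)          # rescale for the excised area
    S = correction * np.sum(counts / exposure) - area * B
    var_S = correction**2 * np.sum(counts / exposure**2) + (area * sigma_B)**2
    return S, np.sqrt(var_S)

# placeholder example: 500 pixels, uniform exposure, 10% of the ring excised
rng = np.random.default_rng(0)
N_i = rng.poisson(3.0, 500).astype(float)
E_i = np.full(500, 400.0)                         # seconds
S, sig = decontaminated_rate(N_i, E_i, R=10.0, A_clean=0.1 * np.pi * 10.0**2,
                             B=5e-4, sigma_B=5e-5)
print(S, sig)
```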
Because all our systems have redshift $`z<0.04`$, the 0.1–2.4 keV X-ray luminosity, $`L_X`$, is
$$L_X=4\pi F(1+z)^2\left(\frac{cz}{H_0}\right)^2.$$
(A3)
The 0.1–2.4 keV flux, $`F`$ from the GPT is then
$$F=C(N_H,T)S.$$
(A4)
Here $`C(N_H,T)`$ is a function suited to the *ROSAT* PSPC Survey Mode instrumental setup which converts the 0.5–2.0 keV count rate to the appropriate 0.1–2.4 keV flux from a Raymond & Smith (1977) spectrum with the abundance fixed at $`30`$% of the solar value. $`C(N_H,T)`$ depends on $`N_H`$, the total hydrogen column density along the line of sight, which we compute using the results of Dickey & Lockman (1990), and the emission-weighted plasma temperature, $`T`$.
We cannot accurately determine $`T`$ independently of $`F`$ from the RASS data. However, once $`N_H`$ is fixed, $`C(N_H,T)`$ varies by only 15%–20% for $`kT`$ between 0.3 and 10 keV. We therefore fold this uncertainty in $`T`$ into our calculation of the flux.
If $`p_C(C)`$ is the probability distribution function (PDF) of $`C(N_H,T)`$, and $`p_S(S)`$ is the PDF of the source count rate $`S`$, then the PDF of the flux is (Lupton 1993, pp. 9–10)
$$p_F(F)=\int _0^{\infty }p_C(C)\,p_S(F/C)\,\frac{dC}{C}$$
(A5)
If the PDF of the group’s emission-weighted temperature is $`p_T(T)`$, then, by the law of transformation of probabilities,
$$p_C(C)=p_T(T)\left|\frac{dT}{dC}\right|.$$
(A6)
Approximating $`p_S(S)`$ as a Gaussian distribution with mean $`S`$ and standard deviation $`\sigma _S(S)`$, we obtain
$$p_F(F)=\frac{1}{\sqrt{2\pi }\sigma _S}\int _0^{\infty }\frac{p_T(T)}{C(N_H,T)}\mathrm{exp}\left[-\frac{1}{2}\left(\frac{F/C(N_H,T)-S}{\sigma _S}\right)^2\right]dT.$$
(A7)
We take $`p_T(T)`$ to be $`(9.7\mathrm{keV})^{-1}`$ over the range 0.3–10 keV, and zero everywhere else. We have also tried a more sophisticated approach, with $`p(T)`$ proportional to the observed temperature function of systems of galaxies (Markevitch 1998). The difference between the resulting PDF and the constant $`p_T(T)`$ PDF is negligible compared with the error introduced by the uncertainty in the temperature function itself.
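A minimal numerical sketch of Eqs. (A4)–(A7), together with the luminosity of Eq. (A3): the flux PDF is obtained by integrating the Gaussian count-rate PDF over the flat temperature prior. The conversion function conv(T), the count rate, and the redshift below are placeholders, not the actual ROSAT PSPC calibration or survey values.

```python
import numpy as np

def flux_pdf(F, S, sigma_S, conv, T_grid):
    """Eq. (A7): PDF of the flux for a flat temperature prior on T_grid."""
    p_T = 1.0 / (T_grid[-1] - T_grid[0])                 # e.g. (9.7 keV)^-1
    C = conv(T_grid)
    integrand = (p_T / C) * np.exp(-0.5 * ((F / C - S) / sigma_S) ** 2)
    dT = T_grid[1] - T_grid[0]
    return integrand.sum() * dT / (np.sqrt(2.0 * np.pi) * sigma_S)

def conv(T):
    # placeholder conversion factor: smooth, varies by roughly 20% over 0.3-10 keV
    return 1.1e-11 * (1.0 + 0.05 * np.log(T / 2.0))      # erg cm^-2 per count (assumed)

T_grid = np.linspace(0.3, 10.0, 971)                     # keV
S, sigma_S = 0.05, 0.01                                  # count rate and error (made up)
F_grid = np.linspace(1e-13, 2e-12, 300)                  # erg cm^-2 s^-1
pF = np.array([flux_pdf(F, S, sigma_S, conv, T_grid) for F in F_grid])

# Eq. (A3): 0.1-2.4 keV luminosity at z = 0.02 for H0 = 100 km/s/Mpc
z, H0, c_km_s = 0.02, 100.0, 2.998e5
d_cm = (c_km_s * z / H0) * 3.086e24                      # cz/H0 in cm
L_X = 4.0 * np.pi * F_grid * (1.0 + z) ** 2 * d_cm ** 2
print(F_grid[np.argmax(pF)], L_X[np.argmax(pF)])
```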
no-problem/9912/astro-ph9912270.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Double-line eclipsing binaries are now often considered to be among the most promising distance indicators (e.g., Paczyński 1997). The method is largely geometrical, with only a single relation that needs to be calibrated: the relation between the stellar surface brightness and whatever observable can be used to judge the stellar temperature.
The method itself is fairly obvious once the spectroscopic and photometric orbits of the binary system are at hand. Yet its origins are not commonly known, so it happens time and again that somebody rediscovers it and claims to have found an original method. That has motivated us to write Section 2, where we describe how this nearly hundred-year-old method was developed.
Section 3 is devoted to explaining why it is important to derive the calibration of the surface brightness – color relation exclusively from observations of eclipsing binaries with known distances.
Finally in Section 4 we present the list with a selection of nearby Hipparcos eclipsing binaries.
## 2 Historical Outlook
The photometric orbit of an eclipsing binary gives us the relative radii of both stellar components, expressed in terms of their separation, and in addition the orbital inclination. The double-line spectroscopic orbit, when combined with the orbital inclination that comes out of the photometric orbit, yields the masses of both components and the absolute (metric) dimensions of the system.
When the distance to the binary system is known, the angular sizes of both components are also known, and together with the apparent magnitudes of both components, which come out of the photometric solution, this makes it possible to calculate the surface brightness of each component. Thus, a known parallax of an eclipsing binary can be used to obtain a direct measurement of the surface brightness.
On the other hand, if one knows the surface brightness of an eclipsing binary component, and if the photometric and double-line spectroscopic orbits for that eclipsing binary are also available, then it is possible to calculate the distance to the binary system.
First applications of this two-way inference were made in the ”distance to surface brightness” direction. The necessary observational data started to be available about hundred years ago. Vogel (1890) was first to determine radial velocity orbital variations for Algol and thus he obtained the first single-line spectroscopic orbit for any eclipsing variable. He is also credited with the first determination of the stellar radius, expressed in that case in miles. Stebbins (1910), starting observations with his newly developed selenium cell photometer, obtained the first accurate photometric light curve of Algol. Combining the photometric orbit based on his observations and the single-line spectroscopic orbit of Schlesinger and Curtiss (1908) and using average of three then existing determinations of trigonometric parallax (70 mas as compared with the Hipparcos value of 35 mas) he was able to estimate values of surface brightness for both components expressed in units of the solar surface brightness. With a single-line spectroscopic orbit these estimates depended on an assumed mass ratio of Algol components. For two plausible assumptions about mass ratio the resulting surface brightness differed by a factor of three.
$`\beta `$ Aur was the first eclipsing binary with a double-line spectroscopic orbit (Baker 1910) and a good photometric light curve (Stebbins 1911). The only thing that marred this otherwise excellent situation was the low accuracy of the available data on the trigonometric parallax. Stebbins (1911) analyzed this set of data under the assumption that the parallax is smaller than 30 mas, and therefore he was able to determine only lower limits for the surface brightness of both components. This limitation was overcome by Russell, Dugan and Stewart (1927), who used a parallax equal to 34 mas and obtained the surface brightness of both components expressed in units of an equivalent effective temperature. It is worth mentioning here that the Hipparcos parallax for $`\beta `$ Aur is equal to 40 mas.
Gaposchkin (1933) made an attempt to determine effective temperatures for 30 eclipsing binaries with measured parallaxes, even though in most cases these parallaxes were smaller than the corresponding measurement errors. This work was criticized (Woolley 1934, Pilowski 1936) for the obvious reason that it used nonuniform and largely unreliable data. Kopal (1939) repeated the work of Gaposchkin using data on radial velocities and proper motions available for 39 systems and resorting to the statistical parallax method after he had divided his data into three groups depending on spectral type. He also used two binary systems with trigonometric parallaxes and two with group parallaxes. Thus before the year 1940 there already existed a crude independent calibration of the surface brightness, expressed in terms of temperature as a function of spectral type, based exclusively on eclipsing binaries.
Up to now we presented the ”distance to surface brightness” inference. It is difficult to imagine that all the involved individuals were not aware of the possibilities and potentials of the reverse inference ”surface brightness to distance”. In any case, we have not encountered any reference to that possibility prior to the papers by Gaposchkin (1938, 1940) but even in that case the problem of the distance determination was not stated openly. The luminosities of eclipsing binary components were calculated with the help of system dimensions and with temperatures judged from the spectral types. A trivial step of calculating distances by comparing luminosities with apparent magnitudes was not done – as it was not done in much newer and much more accurate analysis by Andersen (1991). Gaposchkin stressed the fact that the calibration he had applied was based exclusively on the eclipsing binaries data.
For nearby Galactic stars the quantities of interest are masses, sizes, luminosities and temperatures. Once we know these quantities, it is not really relevant whether a star is 100 or 200 pc away. The situation is quite different when we deal with eclipsing variables in external galaxies. In that case they provide an opportunity to determine the distance of the host galaxy. Gaposchkin (1962) determined the distance to an eclipsing variable in the M31 nebula. He did it using a very crude form of the method, but undoubtedly the distance determination was the main aim of that paper, and certainly that was the ”surface brightness to distance” inference. Several papers with determinations of distances of eclipsing variables in M31, the LMC, and the SMC followed (Gaposchkin 1968, 1970, Dworak 1974, de Vaucouleurs 1978). Attention was also directed to Galactic eclipsing binary systems. Dworak (1975) and Brancewicz and Dworak (1980) prepared a catalog of more than 1000 eclipsing variables for which they made crude determinations of parallaxes.
It was a common property of both the Gaposchkin and the Dworak distance determinations that they did not stick to the clean case of a good photometric orbit and a good double-line spectroscopic orbit supplemented with information about the temperatures of the components. Eclipsing binaries offer plenty of opportunities for estimating the mass ratio of the components in the case of a single-line spectrum, and even for estimating masses without any spectroscopic data. This kind of mixed-accuracy data could be useful, e.g., for selecting candidates for parallax observations by Hipparcos (Dworak and Oblak 1987, 1989), but it has not helped the method’s reputation.
Originally it was the stellar spectral type that was used to estimate the temperature and consequently the surface brightness, which also required knowledge of the bolometric correction. Barnes and Evans (1976) found that the $`V-R`$ color can serve as an excellent tool in that context, without any need to know the spectral types, effective temperatures or bolometric corrections. All the relevant information is compressed into the so-called surface brightness parameter $`F_V`$, which can be directly determined from observations. In particular, for stars later than spectral type A0 the plot of the surface brightness parameter $`F_V`$ vs. the $`V-R`$ color index is parallel to the reddening line, which obviates the need for a precise reddening determination. The Barnes–Evans finding was soon applied by Lacy (1977, 1979) to the eclipsing binary distance determination.
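To make the ”surface brightness to distance” inference explicit, here is a minimal sketch in code. It uses the standard definition of the visual surface-brightness parameter, $`F_V=4.2207-0.1V_0-0.5\mathrm{log}\varphi `$ with the angular diameter $`\varphi `$ in milliarcseconds, but the linear calibration coefficients and the input values are invented placeholders rather than an actual Barnes–Evans or Popper fit.

```python
import numpy as np

# placeholder linear calibration F_V = c0 + c1*(V-R); real fits are tabulated by
# Barnes, Evans & Moffett (1978), Popper (1980), Di Benedetto (1998), ...
c0, c1 = 3.96, -0.33

def distance_pc(V0, VR, radius_Rsun):
    """Distance from the surface-brightness parameter of one eclipsing component.

    V0          : unreddened apparent V magnitude of the component
    VR          : its (V-R) color index
    radius_Rsun : linear radius from the photometric + spectroscopic orbits
    """
    F_V = c0 + c1 * VR                                # calibrated surface brightness
    log_phi = 2.0 * (4.2207 - 0.1 * V0 - F_V)         # phi = angular diameter in mas
    phi_mas = 10.0 ** log_phi
    diameter_au = 2.0 * radius_Rsun * 6.957e8 / 1.496e11   # 2R in AU
    return 1000.0 * diameter_au / phi_mas             # d[pc] = 2R[AU] / phi[arcsec]

# hypothetical component: V0 = 6.0, (V-R) = 0.3, R = 2.5 Rsun
print(distance_pc(6.0, 0.3, 2.5))
```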
In an early calibration Barnes, Evans and Moffett (1978) could only use three eclipsing binaries as calibrators, namely $`\beta `$ Aur, YY Gem and CM Dra, so that the calibration was based mainly on stars with interferometrically determined angular sizes, supplemented with data from lunar occultations. Popper (1980) modified slightly the calibration of Barnes, Evans and Moffett. He allowed deviations from linearity in the relation between the surface brightness parameter and the color index, and besides the $`V-R`$ index recommended by Barnes and Evans he also calibrated the $`B-V`$ and Strömgren $`b-y`$ indices. Separate calibrations for dwarfs and giants were also given. Recent calibrations of the Barnes–Evans relation concerned late-type stars (Fouqué and Gieren 1997, Beuermann et al. 1999) or stars later than A0 (Di Benedetto 1998).
The new Hipparcos data were used by Popper (1998) for comparison with his old (Popper 1980) calibration of the relation between the surface brightness parameter and the $`B-V`$ color index. He selected 14 detached eclipsing binaries closer than 125 pc, with mean errors of the Hipparcos parallax of 10% or less and with good photometric and spectroscopic data. The outcome of the comparison is that the majority of objects lie on or slightly above the calibration curve, but 5 binary systems are situated clearly below it. Popper suggested that these 5 outliers may have depressed surface brightness due to the spotted character of their surfaces. Ribas et al. (1998) made another selection of eclipsing binaries with Hipparcos parallaxes. As compared to the Popper selection they relaxed the distance accuracy requirement (relative errors in the trigonometric parallax smaller than 20%) but stuck to the high accuracy of the determination of the system dimensions. The resulting sample of 20 stars contains only 5 objects in common with the Popper sample. Ribas et al. stopped at the calculation of the effective temperatures for all components of these 20 binaries and did not proceed with collecting the color indices and constructing the surface brightness parameter vs. color index diagram. These two papers give an idea of what kind of photometric and spectroscopic data is available right now. A roughly ten times larger number of eclipsing variables have trigonometric parallaxes measured by Hipparcos with an accuracy better than 20%, many of them discovered as eclipsing variables by Hipparcos as well, but the majority of them lack sufficiently good photometric and spectroscopic data.
## 3 Motivation for Using More Eclipsing Binaries as Calibrators
When one aims to determine accurate distances with the help of eclipsing binaries then the calibration of the surface brightness parameter vs. color index should be as good as possible and free as much as possible from any systematic errors.
Angular sizes determined with the help of interferometry (Hanbury Brown et al. 1974, Davis 1997), lunar occultations (Ridgway et al. 1980, Richichi 1997) or the infrared flux method (Blackwell and Shallis 1977, Blackwell and Lynas-Gray 1994) are plagued by the presence of limb darkening. They are effective sizes corresponding to some effective surface brightness. One can correct such effective sizes given some idea about the degree of limb darkening, obtained either from theoretical models of stellar atmospheres or from observations of limb darkening in eclipsing binaries. In any case the need to correct for the limb darkening makes the calibration less direct. When analyzing the light curve of an eclipsing binary one can also determine the limb darkening of the components, so that the component sizes should be free from the limb-darkening uncertainty. Recent progress in interferometric techniques opens the possibility of determining limb-darkened angular diameters of stars (Benson et al. 1997, Hummel et al. 1998, Pauls et al. 1998, Armstrong et al. 1998, Hajian et al. 1998) also by means of interferometry. A comparison of the limb darkening resulting from these two techniques can be seen as an additional cross-check of the calibration.
Surface brightness dependence on gravity and metallicity is not particularly strong but striving for the best accuracy the corresponding corrections should be calibrated and applied. As these corrections are not expected to be large it should be enough to determine the shape of the functional dependence of corrections on gravity and metallicity with the help of atmospheric models but the zero point, or more precisely the dependence of the surface brightness parameter on color for solar metallicity main sequence stars, should be determined by comparison with the calibrating data. One of advantages of the eclipsing binary data is that surface gravity is also accurately known in that case.
We think that the optimal case is when the eclipsing binary method of distance determination is calibrated exclusively with the use of eclipsing binaries with geometrically determined distances. This has been made feasible by publication of the Hipparcos trigonometric parallaxes for many nearby eclipsing binaries.
Beside the use for distance determination such calibration can serve as an independent check for the data on fundamental stellar parameters resulting from other methods of stellar angular size determination including model calculations.
In the following Section we present the list of nearby Hipparcos eclipsing binaries. For most of them only scanty observational data are available. Some of them are not suitable as reliable calibrators because of their light curve characteristics, RS CVn type variability, small depths of eclipses, or because they are semi-detached systems, but rejections based on such arguments can be made only for well observed systems. For the sake of using clearly defined selection criteria we have left in the Table all the objects that fulfill our primary criteria.
## 4 Table Description
Table 1 contains all Hipparcos eclipsing binaries that have their variability types denoted as EA or EA: and that fulfill the following distance condition: the binary must either be nearer than 200 pc, or the standard error of its parallax must be at least five times smaller than the parallax value. In the Hipparcos Catalogue we have found 198 eclipsing binaries that fulfill these conditions. 156 of these stars are included in the Section ”Periodic Variables” of the Hipparcos Variability Annex (ESA 1997, Vol. 11) and 42 in the Section ”Unsolved Variables” of this Annex. The latter Section contains the stars with generally unknown periods. All of them are listed in Table 1 in order of increasing Hipparcos number. The columns of Table 1 are generally self-explanatory; only some of them require comments. The asterisk between the Hipparcos number and the name of the star indicates that the object has been newly classified in the Hipparcos Catalogue on the basis of the Hipparcos observations and the preliminary variability analysis. The asterisk preceding the variability type in column 3 denotes that this type was newly classified by Hipparcos. The maximum and minimum magnitudes in columns 5 and 6 of the Table are taken as determined by Hipparcos. Columns 9 and 10 give the parallax value and its standard error in milliarcseconds (mas).
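The distance criterion of Table 1 is easy to apply in code; the sketch below uses a few invented entries with parallaxes and standard errors in mas.

```python
def selected(parallax_mas, sigma_mas):
    """Return True if the binary passes the Table 1 distance criterion."""
    if parallax_mas <= 0.0:
        return False
    distance_pc = 1000.0 / parallax_mas          # d[pc] = 1/parallax[arcsec]
    return distance_pc < 200.0 or parallax_mas > 5.0 * sigma_mas

# hypothetical entries: (HIP number, parallax [mas], standard error [mas])
entries = [(12345, 8.2, 1.1), (23456, 3.0, 1.5), (34567, 4.1, 0.6)]
table1 = [hip for hip, plx, err in entries if selected(plx, err)]
print(table1)   # -> [12345, 34567]
```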
We have also selected a set of poorly observed stars that have neither spectroscopic nor photometric orbit solutions, which has been verified by a search of the SIMBAD database. Such objects have been marked by exclamation marks between columns 8 and 9.
Acknowledgements. We are greatly indebted to Professor Bohdan Paczyński for his continuous interest in this work and stimulating advice. Mr. Wojciech Pych is gratefully acknowledged for helping us read the Hipparcos data. This work would not have been possible without the use of the Hipparcos Catalogue. The SIMBAD database, operated by the Strasbourg University, was also very useful. We acknowledge partial support from the KBN grant BST to the Warsaw University Observatory.
## REFERENCES
* Andersen, J. 1991, Astron. Astrophys. Rev., 3, 91.
* Armstrong, J.T., Mozurkewich, D., Pauls, T.A., and Hajian, A.R. 1998, Proc. SPIE, 3350, 461.
* Baker, R.H. 1910, Publ. Allegheny Obs., 1, 163.
* Barnes, T.G., and Evans, D.S. 1976, MNRAS, 174, 489.
* Barnes, T.G., Evans, D.S. and Moffett,T.J. 1978, MNRAS, 183, 285.
* Benson, J.A., et al. 1997, Astron. J., 114, 1221.
* Beuermann, K., Baraffe, I., and Hauschildt, P. 1999, Astron. Astrophys., 348, 524.
* Blackwell, D.E., and Shallis, M.J. 1977, MNRAS, 180, 177.
* Blackwell, D.E., and Lynas-Gray, A.E. 1994, Astron. Astrophys., 282, 899.
* Brancewicz, H.K., and Dworak, T.Z. 1980, Acta Astron., 30, 501.
* Davis, J. 1997, ”Fundamental Stellar Properties: The Interaction Between Observation and Theory”, IAU Symposium No. 189, Ed. T.R. Bedding, A.J. Booth and J. Davis, (Kluwer Academic Publishers), 31.
* de Vaucouleurs, G. 1978, Astrophys. J., 223, 730.
* Di Benedetto, G.P. 1998, Astron. Astrophys., 339, 858.
* Dworak, T.Z. 1974, Acta Cosmologica, 2, 13.
* Dworak, T.Z. 1975, Acta Astron., 25, 383.
* Dworak, T.Z., and Oblak, E. 1987, IBVS, No. 2991.
* Dworak, T.Z., and Oblak, E. 1989, IBVS, No. 3399.
* ESA 1997, The Hipparcos and Tycho Catalogues, ESA-SP1200.
* Fouqué, P., and Gieren, W.P. 1997, Astron. Astrophys., 320, 799.
* Gaposchkin, S.I. 1933, Astron. Nachr., 248, 213.
* Gaposchkin, S.I. 1938, Harvard Reprint, No. 151.
* Gaposchkin, S.I. 1940, Harvard Reprint, No. 201.
* Gaposchkin, S.I. 1962, Astron. J., 67, 358.
* Gaposchkin, S.I. 1968, P.A.S.P., 80, 558.
* Gaposchkin, S.I. 1970, IBVS, No. 496.
* Hajian, A.R., et al. 1998, Astrophys. J., 496, 484.
* Hummel, C.A., Mozurkewich, D., Armstrong, T.J., Hajian, A.R., and Elias II, N.M., and Hutter, D.J. 1998, Astron. J., 116, 2536.
* Hanbury Brown, R., Davis, J., and Allen, L.R. 1974, MNRAS, 167, 121.
* Kopal, Z. 1939, Astrophys. J., 90, 281.
* Lacy, C.H. 1977, Astrophys. J., 213, 458.
* Lacy, C.H. 1979, Astrophys. J., 228, 817.
* Paczyński, B. 1997, ”The Extragalactic Distance Scale”, STScI Symp. Ser. 10, Ed. M. Livio, M. Donahue and N. Panagia (Cambridge University Press), 273.
* Pauls, T.A., Mozurkewich, D., Armstrong, J.T., Hummel, C.A., Benson, J.A., and Hajian, A.R., 1998, Proc. SPIE, 3350, 467.
* Pilowski, K. 1936, Zeitschr. Astrophys., 11, 267.
* Popper, D.M. 1980, Ann. Rev. Astron. Astrophys., 18, 115.
* Popper, D.M. 1998, P.A.S.P., 110, 919.
* Ribas, I., Giménez, A., Torra, J., Jordi, C., and Oblak, E. 1998, Astron. Astrophys., 330, 600.
* Richichi, A. 1997, ”Fundamental Stellar Properties: The Interaction Between Observation and Theory”, IAU Symposium No. 189, Ed. T.R. Bedding, A.J. Booth and J. Davis, (Kluwer Academic Publishers), 45.
* Ridgway, S.T., Joyce, R.R., White, N.M., and Wing, R.F. 1980, Astrophys. J., 235, 126.
* Russell, H.N., Dugan, R.S., and Stewart, J.Q. 1927, ”Astronomy” II, (Ginn and Company), 750.
* Schlesinger, F. and Curtiss, R.H. 1908, Publ. Allegheny Obs., 1, 25.
* Stebbins, J. 1910, Astrophys. J., 32, 185.
* Stebbins, J. 1911, Astrophys. J., 34, 112.
* Vogel, H.C. 1890, Astron. Nachr., 123, 289.
* Woolley, R.v.d.R. 1934, MNRAS, 94, 713.
no-problem/9912/cond-mat9912015.html | ar5iv | text | # Vortex formation in a stirred Bose-Einstein condensate
## Abstract
Using a focused laser beam we stir a Bose-Einstein condensate of <sup>87</sup>Rb confined in a magnetic trap and observe the formation of a vortex for a stirring frequency exceeding a critical value. At larger rotation frequencies we produce states of the condensate for which up to four vortices are simultaneously present. We have also measured the lifetime of the single vortex state after turning off the stirring laser beam.
03.75.Fi, 67.40.Db, 32.80.Lg
Rotations in quantum physics constitute a source of counterintuitive predictions and results as illustrated by the famous “rotating bucket” experiment with liquid helium. When an ordinary fluid is placed in a rotating container, the steady state corresponds to a rotation of the fluid as a whole together with the vessel. Superfluidity, first observed in liquid HeII, changes dramatically this behavior . For a small enough rotation frequency, no motion of the superfluid is observed; while above a critical frequency, lines of singularity appear in its velocity field. These singularities, referred to as vortex filaments, correspond to a quantized circulation of the velocity ($`nh/m`$ where $`n`$ is an integer, and $`m`$ the mass of a particle of the fluid) along a closed contour around the vortex. In this letter we report the observation of such vortices in a stirred gaseous condensate of atomic rubidium. We determine the critical frequency for their formation, and we analyze their metastability when the rotation of the confining “container” is stopped.
The interest in vortices for gaseous condensates is that, due to the very low density, the theory is tractable in these systems and the diameter of the vortex core, which is on the order of the healing length, is typically three orders of magnitude larger than in HeII. At this scale, further improved by a ballistic expansion, the vortex filament is large enough to be observed optically. The generation of quantized vortices in gaseous samples has been the subject of numerous theoretical studies since the first observations of Bose-Einstein condensation in atomic gases . Two schemes have been considered. The first one uses laser beams to engineer the phase of the condensate wave function and produce the desired velocity field . Recently this scheme has been successfully applied to a binary mixture of condensates, resulting in a quantized rotation of one of the two components around the second one . Phase imprinting has also been used for the generation of solitons inside a condensate .
The second scheme, which is explored in the present work, is directly analogous to the rotating bucket experiment . The atoms are confined in a static, cylindrically-symmetric Ioffe-Pritchard magnetic trap upon which we superimpose a non-axisymmetric, attractive dipole potential created by a stirring laser beam. The combined potential leads to a cigar-shaped harmonic trap with a slightly anisotropic transverse profile. The transverse anisotropy is rotated at angular frequency $`\mathrm{\Omega }`$ as the gas is evaporatively cooled to Bose-Einstein condensation, and it plays the role of the bucket wall roughness.
In this scheme, the formation of vortices is –in principle– a consequence of thermal equilibrium. In the frame rotating at the same frequency as the anisotropy, the Hamiltonian is time-independent and one can use a standard thermodynamics approach to determine the steady-state of the system. In this frame, the Hamiltonian can be written $`\stackrel{~}{H}=H-\mathrm{\Omega }L_z`$, where $`H`$ is the Hamiltonian in the absence of rotation, and $`L_z`$ is the total orbital angular momentum along the rotation axis. Above a critical rotation frequency, $`\mathrm{\Omega }_\mathrm{c}`$, the term $`-\mathrm{\Omega }L_z`$ can favor the creation of a state where the condensate wave function has an angular momentum $`\hbar `$ along the $`z`$ axis and therefore contains a vortex filament. The density of the condensate at the center of the vortex is zero, and the radius of the vortex core is of the order of the healing length $`\xi =(8\pi a\rho )^{-1/2}`$, where $`a`$ is the scattering length characterizing the 2-body interaction, and $`\rho `$ the density of the condensate.
The study of a vortex generated by this second route allows for the investigation of several debated questions such as the fate of the system when the rotating velocity increases above $`\mathrm{\Omega }_\mathrm{c}`$. This could in principle lead to the formation of a single vortex with $`n>1`$ at the center of the trap; however, this state has been shown to be either dynamically or thermodynamically unstable . The predicted alternative for large rotation frequencies consists of a lattice of $`n=1`$ vortices. Another important issue is the stability of the current associated with the vortex once the rotating anisotropy is removed.
Our experimental set-up has been described in detail previously. We start with $`10^9`$ <sup>87</sup>Rb atoms in a magneto-optical trap which are precooled and then transferred into an Ioffe-Pritchard magnetic trap. The evaporation radio frequency starts at $`\nu _{\mathrm{rf}}=15`$ MHz and decreases exponentially to $`\nu _{\mathrm{rf}}^{(\mathrm{final})}`$ in 25 s with a time constant of 5.9 s. Condensation occurs at $`\mathrm{\Delta }\nu _{\mathrm{rf}}=\nu _{\mathrm{rf}}^{(\mathrm{final})}-\nu _{\mathrm{rf}}^{(\mathrm{min})}\simeq 50`$ kHz, with $`2.5\times 10^6`$ atoms and a temperature $`\sim 500`$ nK. Here $`\nu _{\mathrm{rf}}^{(\mathrm{min})}=430(\pm 1)`$ kHz is the radio frequency which completely empties the trap. The slow oscillation frequency of the elongated magnetic trap is $`\omega _z/(2\pi )=11.7`$ Hz ($`z`$ is horizontal in our setup), while the transverse oscillation frequency is $`\omega _{\perp }/(2\pi )=219`$ Hz. For a quasi-pure condensate with $`10^5`$ atoms, using the Thomas-Fermi approximation, we find for the radial and longitudinal sizes of the condensate $`\mathrm{\Delta }_{\perp }=2.6\mu `$m and $`\mathrm{\Delta }_z=49\mu `$m, respectively.
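These numbers can be checked with a few lines of code using standard Thomas-Fermi expressions; the <sup>87</sup>Rb scattering length assumed below ($`a\simeq 5.3`$ nm) is a literature value not quoted in this paper. With it, the computed radii and vortex core diameter reproduce the quoted 2.6 $`\mu `$m, 49 $`\mu `$m and 0.4 $`\mu `$m.

```python
import numpy as np

hbar = 1.055e-34          # J s
m = 87 * 1.66e-27         # kg, 87Rb
a = 5.3e-9                # m, assumed 87Rb scattering length
N = 1e5
w_perp = 2 * np.pi * 219.0
w_z = 2 * np.pi * 11.7

# Thomas-Fermi chemical potential and radii
w_bar = (w_perp**2 * w_z) ** (1.0 / 3.0)
a_ho = np.sqrt(hbar / (m * w_bar))
mu = 0.5 * hbar * w_bar * (15.0 * N * a / a_ho) ** 0.4
R_perp = np.sqrt(2.0 * mu / (m * w_perp**2))
R_z = np.sqrt(2.0 * mu / (m * w_z**2))

# healing length at the peak density rho0 = mu / g
g = 4.0 * np.pi * hbar**2 * a / m
rho0 = mu / g
xi = 1.0 / np.sqrt(8.0 * np.pi * a * rho0)

print(R_perp * 1e6, R_z * 1e6, 2 * xi * 1e6)   # ~2.6 um, ~49 um, ~0.4 um
```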
When the evaporation radio frequency $`\nu _{\mathrm{rf}}`$ reaches the value $`\nu _{\mathrm{rf}}^{(\mathrm{min})}+80`$ kHz, we switch on the stirring laser beam which propagates along the slow axis of the magnetic trap. The beam waist is $`w_s=20.0(\pm \mathrm{\hspace{0.17em}1})`$ $`\mu `$m and the laser power $`P`$ is 0.4 mW. The recoil heating induced by this far-detuned beam (wavelength $`852`$ nm) is negligible. Two crossed acousto-optic modulators, combined with a proper imaging system, then allow for an arbitrary translation of the laser beam axis with respect to the symmetry axis of the condensate.
The motion of the stirring beam consists in the superposition of a fast and a slow component. The optical spoon’s axis is toggled at a high frequency (100 kHz) between two symmetric positions about the trap axis $`z`$. The intersections of the stirring beam axis and the $`z=0`$ plane are $`\pm a(\mathrm{cos}\theta 𝐮_x+\mathrm{sin}\theta 𝐮_y)`$, where the distance $`a`$ is $`8\mu `$m. The fast toggle frequency is chosen to be much larger than the magnetic trap frequencies so that the atoms experience an effective two-beam, time averaged potential. The slow component of the motion is a uniform rotation of the angle $`\theta =\mathrm{\Omega }t`$. The value of the angular frequency $`\mathrm{\Omega }`$ is maintained fixed during the evaporation at a value chosen between 0 and 250 rad $`\mathrm{s}^1`$.
Since $`w_s\gg \mathrm{\Delta }_{\perp }`$, the dipole potential, proportional to the power of the stirring beam, is well approximated by $`m\omega _{\perp }^2(ϵ_XX^2+ϵ_YY^2)/2`$. The $`X,Y`$ basis is rotated with respect to the fixed axes ($`x,y`$) by the angle $`\theta (t)`$, and $`ϵ_X=0.03`$ and $`ϵ_Y=0.09`$ for the parameters given above. The action of this beam is essentially a slight modification of the transverse frequencies of the magnetic trap while the longitudinal frequency is nearly unchanged. The overall stability of the stirring beam on the condensate appears to be a crucial element for the success of the experiment, and we estimate that our stirring beam axis is fixed to and stable on the condensate axis to within 2 $`\mu `$m. We checked that for $`\mathrm{\Omega }<\mathrm{\Omega }_\mathrm{c}`$ the stirring beam does not affect the evaporation.
For the data presented here, the final frequency of the evaporation ramp was chosen just above $`\nu _{\mathrm{rf}}^{(\mathrm{min})}`$ ($`\mathrm{\Delta }\nu _{\mathrm{rf}}[3,6]`$ kHz). After the end of the evaporation ramp, we let the system reach thermal equilibrium in this “rotating bucket” for a duration $`t_\mathrm{r}=500`$ ms in the presence of an rf shield 30 kHz above $`\nu _{\mathrm{rf}}^{(\mathrm{final})}`$. The vortices induced in the condensate by the optical spoon are then studied using a time-of-flight analysis. We ramp down the stirring beam slowly (in 8 ms) to avoid inducing additional excitations in the condensate, and we then switch off the magnetic field and allow the droplet to fall for $`\tau =27`$ ms. Due to the atomic mean field energy, the initial cigar shape of the atomic cloud transforms into a pancake shape during the free fall. The transverse $`xy`$ and $`z`$ sizes grow by a factor of $`40`$ and $`1.2`$ respectively . In addition, the core size of the vortex should expand at least as fast as the transverse size of the condensate . Therefore a vortex with an initial diameter $`2\xi =0.4\mu `$m for our experimental parameters is expected to grow to a size of 16 $`\mu `$m.
At the end of the time-of-flight period, we illuminate the atomic sample with a resonant probe laser for 20 $`\mu `$s. The shadow of the atomic cloud in the probe beam is imaged onto a CCD camera with an optical resolution $`7\mu `$m. The probe laser propagates along the $`z`$-axis so that the image reveals the column density of the cloud after expansion along the stirring axis. The analysis of the images, which proceeds along the same lines as in , gives access to the number of condensed $`N_0`$ and uncondensed $`N^{}`$ atoms and to the temperature $`T`$. Actually, for the present data, the uncondensed part of the atomic cloud is nearly undetectable, and we can only give an upper bound for the temperature $`T<80`$ nK.
Figure 1 shows a series of five pictures taken at various rotation frequencies $`\mathrm{\Omega }`$. They clearly show that for fast enough rotation frequencies, we can generate one or several (up to 4) “holes” in the transverse density distribution corresponding to vortices. We show for the 0- and 1-vortex cases a cross-section of the column density of the cloud along a transverse axis. The 1-vortex state exhibits a spectacular dip at the center (up to 50 % of the maximal column density) which constitutes an unambiguous signature of the presence of a vortex filament. The diameter of the vortex core following the expansion is measured at the half max of the dip to be $`20\mu `$m.
For a systematic study of the vortex stability domain, we have varied in steps of 1 Hz the rotation frequency for a given atom number and temperature. For each frequency, we infer from the absorption image the number of vortices present, and the results are shown in Fig. 2. Below a certain frequency, we always obtain a condensate with no vortices. Then, in a zone with a 2 Hz width, we obtain condensates showing randomly 0 or 1 vortex. Increasing $`\mathrm{\Omega }`$, we arrive at a relatively large frequency interval (width 10 Hz) where we systematically observe a condensate with a single vortex present. Our value for the critical frequency is notably larger than the predicted value of 91 Hz (see also ). This deviation may be due to the marginality of the Thomas-Fermi approximation for our relatively low condensate number. If $`\mathrm{\Omega }`$ is increased past the upper edge of the 1-vortex zone, multiple vortices, as shown in Fig. 1, are observed . The range of stability of the multiple vortex zones appears to be much smaller than that for the 1-vortex zone. The 3-vortex zone, for instance, seems to be stable over only 3-4 Hz and is complicated by the occasional appearance of a 2-vortex or a 4-vortex condensate. At this stage of the experiment, it is difficult to determine whether these shot-to-shot fluctuations are due to a lack of experimental reproducibility or to the fact that these various states all have comparable energies and therefore all have a reasonable probability to occur for the range of parameters in question. Finally, when $`\mathrm{\Omega }`$ is increased past the range of stability for the multiple vortex configuration, the density profile of the condensate takes on a turbulent structure, and the condensate completely disappears for $`\mathrm{\Omega }`$ larger than 210 Hz, which should be compared with the average transverse frequency of the magnetic + laser dipole potential (226 Hz).
It is remarkable that the multiple vortex configurations most often occur in a symmetric arrangement of the vortex cores: an equilateral triangle and a square for the 3-vortex and the 4-vortex cases respectively. This finding supports the theoretical analysis of which shows that vortices rotating in the same direction experience an effective repulsive interaction, which in turn favors these stable configurations (see also ).
The final question addressed in this letter concerns the lifetime of a vortex state in an axisymmetric trap. Without a rotating anisotropy, the vortex state is no longer the lowest energy state of the system, and after the anisotropy is removed, one expects that the gas will eventually relax to a condensate with no vortex plus a slightly larger thermal component, bolstered by the energy contained in the vortex state. Figure 3 presents the experimental study of the single-vortex state lifetime at two different condensate parameters. We choose a rotation frequency $`\mathrm{\Omega }`$ in the middle of the 1-vortex range of stability, and we let the vortex form as before in the presence of the stirring beam. Then we switch off the stirring beam, and we allow the gas to evolve in the pure magnetic trap for an adjustable time. Finally, we perform a time-of-flight analysis to determine whether the vortex is present or not. Each point in the two lifetime curves represents the average of 10 shots, where we have plotted the fraction of pictures showing unambiguously a vortex as a function of time . We deduce from this curve a characteristic lifetime of the vortex state in the range 400 to 1000 ms with a clear non-exponential decay behavior. In addition, we observed that at long times the vortex rarely appears well centered as it does immediately after formation.
To summarize, we have reported the formation of vortices in a gaseous Bose-Einstein condensate when it is stirred by a laser beam which produces a slight rotating anisotropy. A natural extension of this work is to study the superfluid aspects of this system by investigating the dynamics for vortex nucleation and decay. An important question is the role of the thermal component. For instance, nucleation can occur either by transfer of angular momentum from this component to the condensate or directly from a dynamical instability of the non-vortex state . Also the decay of the single vortex state may be due to the coupling with a non-rotating thermal component or due to the instability induced by the residual, fixed anisotropy of our magnetic trap (measured to be $`\omega _x/\omega _y=1.012\pm .002`$) . Finally, this type of experiment gives access, in principle, to the elementary excitations of the vortex filament , the study of which might reveal new aspects of the superfluid properties of these systems.
###### Acknowledgements.
We thank Y. Castin, C. Cohen-Tannoudji, C. Deroulers, D. Guéry-Odelin, C. Salomon, G. Shlyapnikov, S. Stringari, and the ENS Laser cooling group for several helpful discussions and comments. This work was partially supported by CNRS, Collège de France, DRET, DRED and EC (TMR network ERB FMRX-CT96-0002). This material is based upon work supported by the North Atlantic Treaty Organization under an NSF-NATO grant awarded to K.M. in 1999. permanent address: Max Planck Institute für KernPhysik, Heidelberg, Germany. Unité de Recherche de l’Ecole normale supérieure et de l’Université Pierre et Marie Curie, associée au CNRS. |
no-problem/9912/chao-dyn9912032.html | ar5iv | text | # Chaotic Transport and Current Reversal in Deterministic Ratchets
## Abstract
We address the problem of the classical deterministic dynamics of a particle in a periodic asymmetric potential of the ratchet type. We take into account the inertial term in order to understand the role of the chaotic dynamics in the transport properties. By a comparison between the bifurcation diagram and the current, we identify the origin of the current reversal as a bifurcation from a chaotic to a periodic regime. Close to this bifurcation, we observed trajectories revealing intermittent chaos and anomalous deterministic diffusion.
In recent years there has been an increasing interest in the study of the transport properties of nonlinear systems that can extract usable work from unbiased nonequilibrium fluctuations. These, so called ratchet systems, can be modeled, for instance, by considering a Brownian particle in a periodic asymmetric potential and acted upon by an external time-dependent force of zero average . This recent burst of work is motivated in part by the challenge to explain the unidirectional transport of molecular motors in the biological realm . Another source of motivation arises from the potential for new methods of separation or segregation of particles , and more recently in the recognition of the “ratchet effect” in the quantum domain . The latter research includes: a quantum ratchet based on an asymmetric (triangular) quantum dot ; an asymmetric antidot array ; the ratchet effect in surface electromigration ; a ratchet potential for fluxons in Josephson-junctions arrays ; ratchet effect in cold atoms using an asymmetric optical lattice ; and the reducing of vortex density in superconductors using the ratchet effect .
In order to understand the generation of unidirectional motion from nonequilibrium fluctuations, several models have been used. In Ref. , there is a classification of different types of ratchet systems; among them we can mention the “Rocking Ratchets”, in which the particles move in an asymmetric periodic potential subject to spatially uniform, time-periodic deterministic forces of zero average. Most of the models, so far, deal with the overdamped case in which the inertial term due to the finite mass of the particle is neglected. However, in recent studies, this oversimplification was overcome by treating properly the effect of finite mass .
In particular, in a recent paper , Jung, Kissner and Hänggi study the effect of finite inertia in a deterministically rocked, periodic ratchet potential. They consider the deterministic case in which noise is absent . The inertial term allows the possibility of having both regular and chaotic dynamics, and this deterministically induced chaos can mimic the role of noise. They showed that the system can exhibit a current flow in either direction, presenting multiple current reversals as the amplitude of the external force is varied.
In this paper, the problem of transport in periodic asymmetric potentials of the ratchet type is addressed. We elaborate on the model analyzed by Jung et al., in which they find multiple current reversals in the dynamics. In fact, the study of current-reversal phenomena has given rise to a research activity of its own.
The goal of this paper is to reveal the origin of the current reversal, by analyzing in detail the dynamics for values of the parameters just before and after the critical values at which the current reversal takes place.
Let us consider the one-dimensional problem of a particle driven by a periodic time-dependent external force, under the influence of an asymmetric periodic potential of the ratchet type. The time average of the external force is zero. Here, we do not take into account any kind of noise, and thus the dynamics is deterministic. The equation of motion is given by
$$m\ddot{x}+\gamma \dot{x}+\frac{dV(x)}{dx}=F_0\mathrm{cos}(\omega _Dt),$$
(1)
where $`m`$ is the mass of the particle, $`\gamma `$ is the friction coefficient, $`V(x)`$ is the external asymmetric periodic potential, $`F_0`$ is the amplitude of the external force and $`\omega _D`$ is the frequency of the external driving force. The ratchet potential is given by
$$V(x)=V_1-V_0\mathrm{sin}\frac{2\pi (x-x_0)}{L}-\frac{V_0}{4}\mathrm{sin}\frac{4\pi (x-x_0)}{L},$$
(2)
where $`L`$ is the periodicity of the potential, $`V_0`$ is the amplitude, and $`V_1`$ is an arbitrary constant. The potential is shifted by an amount $`x_0`$ in order that the minimum of the potential is located at the origin.
Let us define the following dimensionless units: $`x^{\prime }=x/L`$, $`x_0^{\prime }=x_0/L`$, $`t^{\prime }=\omega _0t`$, $`w=\omega _D/\omega _0`$, $`b=\gamma /m\omega _0`$ and $`a=F_0/mL\omega _0^2`$. Here, the frequency $`\omega _0`$ is given by $`\omega _0^2=4\pi ^2V_0\delta /mL^2`$ and $`\delta `$ is defined by $`\delta =\mathrm{sin}(2\pi |x_0^{\prime }|)+\mathrm{sin}(4\pi |x_0^{\prime }|)`$.
The frequency $`\omega _0`$ is the frequency of the linearized motion around the minima of the potential; thus we are scaling the time with the natural period of motion $`\tau _0=2\pi /\omega _0`$. The dimensionless equation of motion, after renaming the variables again without the primes, becomes
$$\ddot{x}+b\dot{x}+\frac{dV(x)}{dx}=a\mathrm{cos}(wt),$$
(3)
where the dimensionless potential is given by $`V(x)=C-(\mathrm{sin}2\pi (x-x_0)+0.25\mathrm{sin}4\pi (x-x_0))/4\pi ^2\delta `$ and is depicted in Fig. 1.
In the equation of motion Eq. (3) there are three dimensionless parameters: $`a`$, $`b`$ and $`w`$, defined above in terms of physical quantities. We vary the parameter $`a`$ and fix $`b=0.1`$ and $`w=0.67`$ throughout this paper.
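For reference, the map from the physical quantities of Eqs. (1)–(2) to the dimensionless parameters of Eq. (3) can be written as a small helper function; the physical values in the example call are arbitrary placeholders chosen only to illustrate the conversion.

```python
import numpy as np

def dimensionless_params(m, gamma, V0, L, x0_over_L, F0, omega_D):
    """Return (a, b, w) of Eq. (3) from the physical quantities of Eqs. (1)-(2)."""
    delta = np.sin(2 * np.pi * abs(x0_over_L)) + np.sin(4 * np.pi * abs(x0_over_L))
    omega0 = np.sqrt(4 * np.pi**2 * V0 * delta / (m * L**2))
    a = F0 / (m * L * omega0**2)
    b = gamma / (m * omega0)
    w = omega_D / omega0
    return a, b, w

# placeholder SI values for a micron-scale ratchet
print(dimensionless_params(m=1e-15, gamma=1e-11, V0=1e-19, L=1e-6,
                           x0_over_L=-0.19, F0=1e-14, omega_D=1e4))
```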
The extended phase space in which the dynamics is taking place is three-dimensional, since we are dealing with an inhomogeneous differential equation with an explicit time dependence. This equation can be written as a three-dimensional dynamical system, that we solve numerically, using the fourth-order Runge-Kutta algorithm. The equation of motion Eq. (3) is nonlinear and thus allows the possibility of chaotic orbits. If the inertial term associated with the second derivative $`\ddot{x}`$ were absent, then the dynamical system could not be chaotic.
The main motivation behind this work is to study in detail the origin of the current reversal in a chaotic, deterministically rocked ratchet. In order to do so, we first have to study the current $`J`$ itself, which we define as the time average of the average velocity over an ensemble of initial conditions. Therefore, the current involves two different averages: the first average is over $`M`$ initial conditions, which we take equally distributed in space, centered around the origin and with an initial velocity equal to zero. For a fixed time, say $`t_j`$, we obtain an average velocity, denoted $`v_j`$, which is given by $`v_j=\frac{1}{M}\sum _{i=1}^{M}\dot{x}_i(t_j)`$. The second average is a time average; since we take a discrete time for the numerical solution of the equation of motion, we have a discrete finite set of $`N`$ different times $`t_j`$; then the current can be defined as $`J=\frac{1}{N}\sum _{j=1}^{N}v_j`$. This quantity is a single number for a fixed set of parameters $`a,b,w`$, but it varies with the parameter $`a`$, fixing $`b`$ and $`w`$.
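The procedure can be sketched as follows: a fourth-order Runge-Kutta integration of Eq. (3) for an ensemble of initial conditions at rest, followed by the double average defining $`J`$. The ensemble size, time step, integration time and the value of $`x_0`$ are illustrative choices, not necessarily those used for the figures.

```python
import numpy as np

b, w = 0.1, 0.67
x0 = -0.19                  # shift placing the potential minimum at the origin (approximate)
delta = np.sin(2*np.pi*abs(x0)) + np.sin(4*np.pi*abs(x0))

def force(x):
    """-dV/dx for the dimensionless ratchet potential."""
    return (2*np.pi*np.cos(2*np.pi*(x - x0)) + np.pi*np.cos(4*np.pi*(x - x0))) / (4*np.pi**2*delta)

def deriv(state, t, a):
    x, v = state
    return np.array([v, -b*v + force(x) + a*np.cos(w*t)])

def rk4_step(state, t, dt, a):
    k1 = deriv(state, t, a)
    k2 = deriv(state + 0.5*dt*k1, t + 0.5*dt, a)
    k3 = deriv(state + 0.5*dt*k2, t + 0.5*dt, a)
    k4 = deriv(state + dt*k3, t + dt, a)
    return state + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0

def current(a, M=50, dt=0.01, n_steps=200_000):
    """J: time average of the ensemble-averaged velocity."""
    x = np.linspace(-0.5, 0.5, M)            # M initial positions around the origin
    v = np.zeros(M)                          # initial velocities equal to zero
    state = np.stack([x, v])
    vbar = []
    for n in range(n_steps):
        state = rk4_step(state, n*dt, dt, a)
        vbar.append(state[1].mean())
    return np.mean(vbar)

# compare signs just below and above a_c ~ 0.0809 (cf. Fig. 2b)
print(current(0.074), current(0.081))
```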
Besides the continuum orbits in the extended phase space, we can obtain the Poincaré section, using as a stroboscopic time the period of oscillation of the external force. With the aid of Poincaré sections we can distinguish between periodic and chaotic orbits, and we can obtain a bifurcation diagram as a function of the parameter $`a`$.
The bifurcation diagram for $`b=0.1`$ and $`w=0.67`$ is shown in Fig. 2a in a limited range of the parameter $`a`$. We can observe a period-doubling route to chaos and, after a chaotic region, there is a bifurcation taking place at a critical value $`a_c\simeq 0.08092844`$. It is precisely at this bifurcation point that the current reversal occurs. After this bifurcation, a periodic window emerges, with an orbit of period four. In Fig. 2b, we show the current as a function of the parameter $`a`$, in exactly the same range as the bifurcation diagram above. We notice the abrupt transition at the bifurcation point that leads to the first current reversal. In Figs. 2a,b we are analyzing only a short range of values of $`a`$, where the first current reversal takes place. If we vary $`a`$ further, we can obtain multiple current reversals.
In order to understand in more detail the nature of the current reversal, let us look at the orbits just before and after the transition. The reversal occurs at the critical value $`a_c\simeq 0.08092844`$. If $`a`$ is below this critical value $`a_c`$, say $`a=0.074`$, then the orbit is periodic, with period two. For this case we depict, in Fig. 3a, the position of the particle as a function of time. We notice a period-two orbit, as can be distinguished in the bifurcation diagram for $`a=0.074`$. In Fig. 3b we show again the position as a function of time for $`a=0.081`$, which is just above the critical value $`a_c`$. In this case, we observe a period-four orbit, which corresponds to the periodic window in the bifurcation diagram in Fig. 2a. This orbit is such that the particle is “climbing” in the negative direction, that is, in the direction in which the slope of the potential is higher. We notice that there is a qualitative difference between the periodic orbit that transports particles in the positive direction and the periodic orbit that transports particles in the negative direction: in the latter case, the particle requires twice the time of the former case to advance by one well in the ratchet potential. A closer look at the trajectory in Fig. 3b reveals the “trick” that the particle uses to navigate in the negative direction: in order to advance one step to the left, it first moves one step to the right and then two steps to the left. The net result is a negative current.
In Fig. 4, we show a typical trajectory for $`a`$ just below $`a_c`$. The trajectory is chaotic and the corresponding chaotic attractor is depicted in Fig. 5. In this case, the particle starts at the origin with no velocity; it jumps from one well in the ratchet potential to another well to the right or to the left in a chaotic way. The particle gets trapped oscillating for a while in a minimum (sticking mode), as is indicated by the integer values of $`x`$ in the ordinate, and suddenly starts a running mode with average constant velocity in the negative direction. In terms of the velocity, these running modes, as the one depicted in Fig. 3b, correspond to periodic motion. The phenomenology can be described as follows. For values of $`a`$ above $`a_c`$, as in Fig. 3b, the attractor is a periodic orbit. For $`a`$ slightly less than $`a_c`$ there are long stretches of time (running or laminar modes) during which the orbit appears to be periodic and closely resembles the orbit for $`a>a_c`$, but this regular (approximately periodic) behavior is intermittently interrupted by finite duration “bursts” in which the orbit behaves in a chaotic manner. The net result in the velocity is a set of periodic stretches of time interrupted by burst of chaotic motion, signaling precisely the phenomenon of intermittency . As $`a`$ approach $`a_c`$ from below, the duration of the running modes in the negative direction increases, until the duration diverges at $`a=a_c`$, where the trajectory becomes truly periodic.
To complete this picture, in Fig. 5, we show two attractors: 1) the chaotic attractor for $`a=0.08092`$, just below $`a_c`$, corresponding to the trajectory in Fig. 4, and 2) the period-4 attractor for $`a=0.08093`$, corresponding to the trajectory in Fig. 3b. This periodic attractor consists of four points in phase space, which are located at the center of the open circles. We obtain these attractors by confining the dynamics in $`x`$ between $`-0.5`$ and $`0.5`$. As $`a`$ approaches $`a_c`$ from below, the dynamics in the attractor becomes intermittent, spending most of the time in the vicinity of the period-4 attractor, suddenly “jumping” in a chaotic way for some time, then returning close to the period-4 attractor again, and so on. In terms of the velocity, the result is an intermittent time series as discussed above.
In order to characterize the deterministic diffusion in this regime, we calculate the mean square displacement $`\langle x^2\rangle `$ as a function of time. We obtain numerically that $`\langle x^2\rangle \sim t^\alpha `$, where the exponent $`\alpha \simeq 3/2`$. This is a signature of anomalous deterministic diffusion, in which $`\langle x^2\rangle `$ grows faster than linearly, that is, $`\alpha >1`$ (superdiffusion). Normal deterministic diffusion corresponds to $`\alpha =1`$. In contrast, the trajectories in Figs. 3a and 3b transport particles in a ballistic way, with $`\alpha =2`$. The relationship between anomalous deterministic diffusion and intermittent chaos has been explored recently, together with the connection with Lévy flights. The character of trajectories like the one in Fig. 4 remains to be analyzed more carefully in order to determine whether they correspond to Lévy flights.
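The exponent $`\alpha `$ can be extracted from simulated trajectories by a least-squares fit of $`\mathrm{log}\langle x^2\rangle `$ against $`\mathrm{log}t`$; the sketch below assumes an ensemble of trajectories has already been generated (for instance with the integrator sketched earlier) and demonstrates the fit on synthetic data with a known exponent.

```python
import numpy as np

def diffusion_exponent(times, positions, t_min=100.0):
    """Fit <x^2> ~ t^alpha over t > t_min.

    times     : 1-D array of sample times
    positions : array of shape (n_trajectories, len(times))
    """
    msd = np.mean((positions - positions[:, :1]) ** 2, axis=0)   # <x^2(t)>
    mask = (times > t_min) & (msd > 0)
    alpha, _ = np.polyfit(np.log(times[mask]), np.log(msd[mask]), 1)
    return alpha

# usage on placeholder data built with a known exponent of 1.5
t = np.linspace(1.0, 1e4, 2000)
fake = np.sqrt(t ** 1.5) * np.random.default_rng(1).normal(size=(200, t.size))
print(diffusion_exponent(t, fake))    # ~1.5
```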
In summary, we have identified the mechanism by which the current reversal in deterministic ratchets arises: it corresponds to a bifurcation from a chaotic to a periodic regime. Near this bifurcation, the chaotic trajectories exhibit intermittent dynamics and the transport arises through deterministic anomalous diffusion with an exponent greater than one (superdiffusion). As the control parameter $`a`$ approaches the critical value $`a_c`$ at the bifurcation from below, the duration of the running modes in the negative direction increases. Finally, the duration diverges at the critical value, leading to a truly periodic orbit in the negative direction. This is precisely the mechanism by which the current reversal takes place.
The author acknowledges helpful discussions with P. Hänggi, P. Jung, G. Cocho, C. García, H. Larralde, G. Martínez-Mekler, V. Romero-Rochín and F. Leyvraz. |
no-problem/9912/cond-mat9912238.html | ar5iv | text | # HD–TVP–99–11, accepted for publication in Phys. Rev. Lett. Noise induced stability in fluctuating, bistable potentials.
## Abstract
The over-damped motion of a Brownian particle in an asymmetric, bistable, fluctuating potential shows noise-induced stability: for intermediate fluctuation rates the mean occupancy of minima with an energy above the absolute minimum is enhanced. The model works as a detector for potential fluctuations that are neither too fast nor too slow. This effect occurs due to the different time scales in the problem. We present a detailed analysis of this effect using the exact solution of the Fokker-Planck equation for a simple model. Further, we show that for not too fast fluctuations the system can be well described by effective rate equations. The results of the rate equations agree quantitatively with the exact results.
PACS-numbers: 05.40.-a, 02.50.Ey, 82.20.Fd, 87.16Ac
Models of over-damped Brownian particles in potentials with one or more minima and barriers serve as paradigms for many relaxation processes in physical, chemical, and biological systems. The minima represent the stable or metastable states of the system. Transitions from one state to another are induced by the interaction with the environment. This interaction is typically described by a thermal white noise. The dynamics of the system is dominated by characteristic time scales which are given by the mean first passage times for the escape out of the minima of the potential. The most simple system in this class of models is the problem of diffusion over a single potential barrier, pioneered by Kramers .
In many situations, the potential fluctuates due to some external fluctuations, chemical reactions, or oscillations. The most prominent model of this kind is a Brownian particle in a symmetric bistable potential, subject to a harmonic force. It serves as the standard model for stochastic resonance . In many other applications one has to consider stochastic, correlated fluctuations of the potential. Doering and Gadoua investigated the situation of a symmetric, bistable fluctuating potential. They found a local minimum in the mean first passage time as a function of the barrier fluctuation rate. This effect has been called resonant activation and has been studied in detail by various people -. In most of these papers either a symmetric bistable potential or the escape over a single barrier has been studied. Escape rates for general potentials and dichotomous as well as Gaussian fluctuations of the potential have been calculated by Pechukas and Hänggi . Their results support a simple, physical picture of activated processes with fluctuating barriers: If the potential fluctuates fast, the rate for transitions over the barrier is determined by the average barrier. If the potential fluctuations are slow (static limit), the slowest process determines the rate. In an intermediate regime the rate is given by the average rate, which is greater than the rate for fast or slow fluctuations. This picture has already been suggested by Bier and Astumian on the basis of a simple model with a dichotomously fluctuating linear ramp.
In many applications, one does not have a single barrier or a symmetric, bistable potential. In a more general situation the potential will have several minima of different depth. In equilibrium, the system rests most of the time in the absolute minimum of the potential. But due to potential fluctuations, the position of the absolute minimum may fluctuate. Typical, biological examples of such a situation are membrane proteins like a cell surface receptor or an ion channel. When a ligand binds to the receptor, it changes the potential energy of the receptor and induces a conformational change of the receptor molecule. If the transition from one to the other conformation and back is always more or less the same, a description of this transition by a single coordinate may be sufficient. Then it is possible to model the conformational changes by the motion of a particle in a fluctuating potential.
The effect of a periodic electric field on membrane proteins has been investigated theoretically and experimentally . Astumian and Robertson described such a system by a two state model with periodically modulated rates. The effect of a periodic modulation can be related to a stochastic, dichotomous modulation . This clearly demonstrates the relevance of our results to such biologically motivated models. We will come back to this point at the end.
The motion of the over-damped particle in a fluctuating potential can be described by a Langevin equation
$$\dot{x}=f(x,t)+\xi (t)$$
(1)
where $`f=-\partial V/\partial x`$. We are using units where the friction constant and $`k_B`$ are unity. $`f`$ (and $`V`$) depend on $`t`$ since the potential fluctuates. $`\xi (t)`$ is a thermal (white) noise; it satisfies $`\langle \xi (t)\rangle =0`$, $`\langle \xi (t)\xi (t^{})\rangle =2T\delta (t-t^{})`$. In this letter, we restrict ourselves to the discussion of potentials with two minima, separated by a barrier. The positions of the minima are $`\pm x_m`$ and do not depend on $`t`$. The maximum of $`V(x)`$ is located at $`x=0`$. The fluctuation of the potential is mainly a fluctuation of the depth of the two minima. Let us first consider the simplest non-trivial version of such a model. Let us assume that the potential fluctuation is a dichotomous process and that $`V(x,t)`$ takes the two different values $`V_+(x)`$ and $`V_{-}(x)`$. Further we assume that the absolute minimum of $`V_+(x)`$ ($`V_{-}(x)`$) is the right (left) minimum. Such a model contains various time scales: four mean first passage times for the two minima of $`V_+(x)`$ and $`V_{-}(x)`$, the intra-well relaxation times for $`V_+(x)`$ and $`V_{-}(x)`$, and the characteristic time scales for the fluctuation of the potential. The mean first passage times and the intra-well relaxation times are fixed by the form of the potential; the fluctuation of the potential is an external parameter that can be varied. In a biological model for a cell-surface receptor, as mentioned above, it is determined e.g. by the concentration of the signaling molecule. Let $`V(x)=\frac{1}{2}(V_+(x)+V_{-}(x))`$ and $`\mathrm{\Delta }V(x)=\frac{1}{2}(V_+(x)-V_{-}(x))`$. Then $`V(x,t)=V(x)+z(t)\mathrm{\Delta }V(x)`$ where $`z(t)`$ is a random process that takes the two values $`\pm 1`$. Its static distribution is $`q_0(z)=p_+\delta (z-1)+p_{-}\delta (z+1)`$. Let $`\tau `$ be the correlation time of this process, so that $`\langle z(t)z(t^{})\rangle =\langle z\rangle ^2+(1-\langle z\rangle ^2)\mathrm{exp}(-|t-t^{}|/\tau )`$. Without loss of generality we restrict ourselves to $`p_{-}\le 1/2`$. What does one expect for such a model? Let us first suppose that the temperature is such that the typical barrier heights of the system are a few $`T`$. If $`\tau `$ is small compared to the intra-well relaxation times of the potential, the system can be described by the effective static potential $`\langle V(x)\rangle =V(x)+\langle z\rangle \mathrm{\Delta }V(x)`$. The stationary distribution is then $`p_0(x)=C\mathrm{exp}(-\langle V(x)\rangle /T)`$. In the static limit, the stationary distribution is $`p_0(x)=p_+\mathrm{exp}(-V_+(x)/T)+p_{-}\mathrm{exp}(-V_{-}(x)/T)`$. Suppose that $`p_+`$ is close to unity. Then the average potential is approximately given by $`V_+(x)`$ and $`p_0(x)`$ is approximately the same for small or large $`\tau `$. In the following we will show that between these two extreme situations an interesting effect occurs: the mean occupancy of the minimum at $`-x_m`$, which is the minimum that has the higher energy most of the time, may become very large. We will show that this effect is related to resonant activation.
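A minimal numerical sketch of this model may help fix ideas. The following Python fragment (added here for illustration only; the piecewise-linear potential, the parameter values, and all variable names are assumptions, not the choices used to produce the figures) integrates Eq. (1) with an Euler–Maruyama step while the potential switches dichotomously:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the values used for the figures)
x_m, T = 1.0, 0.15        # positions of the minima and temperature
B, a = 1.0, 0.4           # barrier height and fluctuating asymmetry of the well bottoms
p_plus, tau = 0.9, 10.0   # stationary probability of z=+1 and correlation time of z(t)
dt, n_steps = 1e-3, 1_000_000

def force(x, z):
    """Force -dV/dx for a piecewise-linear double well:
    V(-x_m,z) = +z*a, V(0,z) = B, V(+x_m,z) = -z*a, linear in between,
    with steep walls outside [-x_m, x_m]."""
    if x < -x_m:
        return 20.0 * (-x_m - x)      # left wall pushes the particle back to the right
    if x > x_m:
        return -20.0 * (x - x_m)      # right wall pushes it back to the left
    if x < 0.0:
        return -(B - z * a) / x_m     # restoring force towards the left minimum
    return (B + z * a) / x_m          # restoring force towards the right minimum

p_minus = 1.0 - p_plus
leave_rate = {+1: p_minus / tau, -1: p_plus / tau}  # rate of leaving the current z state

x, z = x_m, +1                        # start in the right well with z=+1
time_left = 0

for _ in range(n_steps):
    if rng.random() < leave_rate[z] * dt:
        z = -z
    x += force(x, z) * dt + np.sqrt(2.0 * T * dt) * rng.normal()
    if x < 0.0:
        time_left += 1

print("estimated occupancy of the left minimum:", time_left / n_steps)
```

Scanning the correlation time in such a run reproduces qualitatively the non-monotonic behavior of the left-well occupancy discussed below.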
To calculate $`p_0(x)`$ or dynamic quantities of the system one has to solve the Fokker-Planck equation
$$\frac{\partial \rho (x,z,t)}{\partial t}=-\frac{\partial }{\partial x}\left(f(x,z)-T\frac{\partial }{\partial x}\right)\rho (x,z,t)+M_z\rho (x,z,t).$$
(2)
for this model. Here we assumed that the potential fluctuations can be parameterized by a single stochastic variable $`z(t)`$. $`\rho (x,z,t)`$ is the joint probability density for the stochastic variables $`x`$ and $`z`$, and $`M_z`$ is the generator of the stochastic process $`z(t)`$. To obtain the stationary distribution $`p_0(x)=\int dz\,\rho (x,z)`$, it is sufficient to analyze the stationary Fokker-Planck equation. A standard way to solve this equation is to expand $`\rho (x,z)`$ in the right eigenbasis of $`M_z`$. If the potential is piecewise linear, one obtains a set of coupled differential equations with constant coefficients, which can be solved analytically. The remaining task is to satisfy the continuity conditions for $`\rho (x,z)`$, which is a simple linear algebraic problem.
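To make the solution procedure concrete, the stationary version of Eq. (2) can be written out for the dichotomous case $`z=\pm 1`$. Writing $`\rho _\pm (x)=\rho (x,z=\pm 1)`$ and denoting the switching rates out of the states $`z=+1`$ and $`z=-1`$ by $`k_+`$ and $`k_{-}`$ (related to the parameters used below by $`k_+=p_{-}/\tau `$ and $`k_{-}=p_+/\tau `$; this explicit form is added here only for orientation), the generator $`M_z`$ reduces to a $`2\times 2`$ matrix and the stationary equations read

$$0=-\frac{d}{dx}\left[f_+(x)\rho _+(x)-T\frac{d\rho _+}{dx}\right]-k_+\rho _+(x)+k_{-}\rho _{-}(x),$$

$$0=-\frac{d}{dx}\left[f_{-}(x)\rho _{-}(x)-T\frac{d\rho _{-}}{dx}\right]+k_+\rho _+(x)-k_{-}\rho _{-}(x),$$

with $`f_\pm (x)=f(x)\pm \mathrm{\Delta }f(x)`$ and $`p_0(x)=\rho _+(x)+\rho _{-}(x)`$. On every interval where the potential is linear, $`f_\pm `$ are constants, so these are linear equations with constant coefficients whose exponential solutions are matched by the continuity conditions mentioned above.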
For the case of a dichotomous process, the force is $`f(x,z)=f(x)+z\mathrm{\Delta }f(x)`$ as discussed above. In Fig. 1 we show results for the probability $`\overline{n}=\int _{-\infty }^0p_0(x)\,dx`$ of the particle to sit in the left minimum of the fluctuating potential as a function of the correlation time $`\tau `$, and for various values of $`p_+`$. The choice of the potential is arbitrary; similar results can be obtained for other potentials as well. The results for $`\overline{n}`$ show that the qualitative discussion for small and large $`\tau `$ given above is valid. But for intermediate $`\tau `$, $`\overline{n}`$ is much larger than expected. The system is able to detect fluctuations that are neither too fast nor too slow. Such fluctuations enhance the occupancy of the left minimum, although it is most of the time not the absolute minimum of the potential. We thus observe a noise induced stability for the state which most of the time has the higher energy, at least when $`p_+>1/2`$. The results in Fig. 1 show that for large values of $`p_+`$ this effect is even stronger.
What is the reason for this effect? How can it be described quantitatively and how does it depend on the potential? To answer these questions, let us go back to the general case (2). If $`\tau `$ is large compared to the intra-well relaxation times for the two minima of $`V(x,z)`$, the dynamics of the system can be described by an effective rate equation for the probability of the particle to sit in the left minimum, $`n`$, or in the right minimum, $`1-n`$. The rate equation is given by
$$\frac{dn}{dt}=-r_1(z)n+r_2(z)(1-n)$$
(3)
where $`r_{1,2}(z)`$ is the escape rate for the left or right minimum of $`V(x,z)`$. If $`\mathrm{\Delta }V_{1,2}(z)=V(0,z)-V(\mp x_m,z)`$ is the depth of the corresponding minimum, then $`r_{1,2}(z)\propto \mathrm{exp}(-\mathrm{\Delta }V_{1,2}(z)/T)`$. For (3) we can again discuss the two limiting cases of large or small $`\tau `$. For large $`\tau `$, the average occupancy is $`\overline{n}(\tau \to \infty )=n_{\infty }=\int dz\,q_0(z)n(z)`$ where $`n(z)=r_2(z)/(r_1(z)+r_2(z))`$ is the stationary occupancy at fixed $`z`$. For small $`\tau `$, the particle feels average rates and the mean occupancy is $`\overline{n}(\tau =0)=n_0=\overline{r_2}/(\overline{r_1}+\overline{r_2})`$ where $`\overline{r_i}=\int dz\,r_i(z)q_0(z)`$. For the results presented in Fig. 1 the potential has been chosen such that $`n_0`$ is larger than $`n_{\infty }`$ and also larger than the occupancy determined by the average potential $`V(x)`$. This explains qualitatively the $`\tau `$-dependence of $`\overline{n}`$ in Fig. 1. Let us now compare the results of the rate equation (3) with the results of the Fokker-Planck equation quantitatively. To calculate the stationary probability $`\overline{n}`$ as a function of $`\tau `$ from (3), we use the Fokker-Planck equation for the density $`p(n,z,t)`$,
$$\frac{\partial p(n,z,t)}{\partial t}=\frac{\partial }{\partial n}\left((r_1(z)+r_2(z))n-r_2(z)\right)p(n,z,t)+M_zp(n,z,t).$$
(4)
The stationary probability $`\overline{n}=\int _0^1dn\,n\,p_0(n)`$ can be obtained from the stationary distribution $`p(n,z)`$. It is possible to calculate the stationary distribution $`p_0(n)=\int dz\,p(n,z)`$ for a dichotomous process ,
$$p_0(n)=C(n-\stackrel{~}{n})(n-n_{-})^{\alpha _{-}-1}(n_+-n)^{\alpha _+-1}$$
(5)
where
$$\alpha _\pm =\pm \frac{(p_+r_++p_{-}r_{-})(n_\pm -n_0)}{\tau r_+r_{-}(n_+-n_{-})},$$
(6)
$$\stackrel{~}{n}=\frac{r_{2+}-r_{2-}}{r_+-r_{-}}.$$
(7)
$`C`$ is a normalization constant. We introduced $`r_\pm =r_{1\pm }+r_{2\pm }`$. $`r_{i\pm }`$ are the two values for the fluctuating rates $`r_i(z)`$, and $`n_\pm =r_{2\pm }/r_\pm `$. The rates are given by (see , where $`T`$ and $`L`$ are set to unity. $`L_{\mathrm{eff}}`$ occurs instead of $`L`$, since we do not have a linear ramp.)
$$r_{i\pm }=\frac{\mathrm{\Delta }V_{i\pm }^2L_{\mathrm{eff}}^{-2}}{T}\left(\mathrm{exp}(\mathrm{\Delta }V_{i\pm }/T)-\mathrm{\Delta }V_{i\pm }/T-1\right)^{-1}.$$
(8)
$`p_0(n)`$ vanishes outside the interval between $`n_{-}`$ and $`n_+`$, as should be expected. $`\stackrel{~}{n}`$ does not lie in this interval. For $`\overline{n}`$ one obtains
$$\overline{n}=n_0+(n_{\infty }-n_0)\frac{\tau }{\overline{\tau }+\tau }$$
(9)
where $`\overline{\tau }=\frac{p_+}{r_{-}}+\frac{p_{-}}{r_+}`$. This shows that for a dichotomous process one always has a monotonic behavior of $`\overline{n}`$ as a function of $`\tau `$, and the characteristic time scale for the transition from $`n_0`$ to $`n_{\infty }`$ is given by $`\overline{\tau }`$. For more general noise processes it is possible to calculate $`\overline{n}`$ as well. The calculation is much more involved, but the typical behavior of $`\overline{n}`$ is the same as for the dichotomous case . In Figs. 1 and 2
we compare results of the effective rate equations with those obtained for the full fluctuating potential. The agreement is indeed excellent for sufficiently large $`\tau `$. The value of $`\tau `$ where the transition occurs and the value of the maximum of $`\overline{n}`$ agree well with the exact results. The agreement between the rate theory and the exact results becomes better for smaller temperatures (see Fig. 2), which is clear since the rate equations are only valid for low temperatures. The motion in the average potential, i.e. the behavior of the system for small $`\tau `$, cannot be described this way, since the rates in the average potential differ from the averaged rates. The validity of the two-state model breaks down when $`\tau `$ becomes smaller than the intra-well relaxation times. Nevertheless, we are able to understand why the occupancy $`\overline{n}`$ of the minimum that usually has the higher energy has the features shown in Fig. 1. Using the average potential, the average rate, and the average occupancy we are able to calculate analytically the three values of $`\overline{n}`$. The transitions between these values occur at $`\tau `$ scales given by the intra-well relaxation time and by $`\overline{\tau }`$. This also explains the results in Fig. 2, which shows how the effect depends on the temperature. For lower values of the temperature, $`\overline{\tau }`$ becomes larger, and the values of $`n`$ change due to the dependence of the rates on $`T`$.
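For readers who want to evaluate the rate-theory prediction directly, the following fragment (purely illustrative: the barrier heights, temperature, and $`L_{\mathrm{eff}}`$ are made-up numbers, not those behind Figs. 1 and 2) combines the rate expression (8) with the limiting occupancies and Eq. (9):

```python
import numpy as np

T, L_eff = 0.15, 1.0                 # temperature and effective length (illustrative)
p_plus = 0.9
p_minus = 1.0 - p_plus

def escape_rate(dV):
    """Escape rate over a barrier of height dV, following Eq. (8)."""
    s = dV / T
    return (dV**2 / (L_eff**2 * T)) / (np.exp(s) - s - 1.0)

# Barrier heights dV_{i,z} seen from the left (i=1) and right (i=2) minima
# in the two potential configurations z=+1, z=-1 (illustrative numbers;
# the left well is shallow when z=+1 because its bottom is then the higher one).
dV = {(1, +1): 0.6, (1, -1): 1.4,
      (2, +1): 1.4, (2, -1): 0.6}

r = {key: escape_rate(val) for key, val in dV.items()}
r_plus  = r[(1, +1)] + r[(2, +1)]
r_minus = r[(1, -1)] + r[(2, -1)]

n_plus, n_minus = r[(2, +1)] / r_plus, r[(2, -1)] / r_minus
n_inf = p_plus * n_plus + p_minus * n_minus              # slow-switching limit
r1_bar = p_plus * r[(1, +1)] + p_minus * r[(1, -1)]
r2_bar = p_plus * r[(2, +1)] + p_minus * r[(2, -1)]
n_0 = r2_bar / (r1_bar + r2_bar)                         # fast-switching limit
tau_bar = p_plus / r_minus + p_minus / r_plus            # crossover time of Eq. (9)

for tau in [0.1, 1.0, 10.0, 100.0, 1000.0]:
    n_bar = n_0 + (n_inf - n_0) * tau / (tau_bar + tau)  # Eq. (9)
    print(f"tau = {tau:7.1f}   n_bar = {n_bar:.3f}")
```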
Comparing the above results with calculations for the mean first passage time shows that the noise induced stability is related to resonant activation. To calculate the mean first passage time, one has to introduce an absorbing boundary at the maximum of the potential and has to solve the Fokker-Planck equation with this boundary. The mean first passage time depends on the initial condition $`\rho (x,z,0)`$, but usually the relaxation within the potential well is fast compared to the mean first passage time and the dependence on the initial condition is weak. As initial condition we choose $`\rho (x,z,0)=\delta (x-x_i)\delta (z-z_i)`$ with $`x_i=\pm x_m`$. Let $`x=0`$ be the absorbing boundary. The solution of the Fokker-Planck equation is denoted by $`\rho (x,z,t|x_i,z_i,0)`$. The mean first passage time is then given by
$$\tau _{\text{MFPT}}=\int dz\int _{-\infty }^0dx\,\rho _1(x,z|x_i)$$
(10)
where
$$\rho _1(x,z|x_i)=-\int _0^{\infty }dt\,t\,\frac{\partial }{\partial t}\left\langle \rho (x,z,t|x_i,z_i,0)\right\rangle _{z_i}.$$
(11)
The average is taken with respect to the stationary distribution of $`z_i`$. For a piecewise linear potential $`\rho _1(x,z|x_i)`$ can be calculated using the same methods as for $`\rho (x,z)`$ described above. Bier and Astumian calculated the mean first passage time for a linear ramp, which is similar to our situation. They showed that for not too small $`\tau `$ the system is well described by simple rate equations, as in our case as well. Using the rate equations one obtains for the mean first passage times (, eq. (15))
$$\tau _{\text{MFPT}}(\tau )=\tau _{0i}+(\tau _{\infty i}-\tau _{0i})\frac{\tau }{\tau _{\infty i}+\tau }$$
(12)
where $`\tau _{0i}=\overline{r_i}^{-1}`$ and $`\tau _{\infty i}=p_+/r_{i-}+p_{-}/r_{i+}`$. The $`\tau `$-dependence of the mean first passage time is similar to the $`\tau `$-dependence of $`\overline{n}`$ in (9). The characteristic time $`\overline{\tau }`$ has the same form as $`\tau _{\infty i}`$.
As already pointed out, it is possible to extend our calculations to various noise processes. The main qualitative features of the system are the same. Our results show that an asymmetric, fluctuating, bistable system can detect fluctuations that are not too slow and not too fast. For such fluctuations the occupancy of the state that has usually the higher energy is enhanced. The results show that this effect may be very large, depending on the parameters of the system. For the largest value of $`p_+`$ in Fig. 1, the mean occupancy in the left minimum is very small for slow and fast fluctuations, but reaches a large value for intermediate values of $`\tau `$. If one lowers the temperature or modifies the potential it is possible to obtain an even larger effect, as shown in Fig. 2.
To some extent, noise induced stability can be compared to the noise enhanced stability first found numerically by Dayan et al. and observed experimentally by Mantegna and Spagnolo , but there are several differences. The effect called noise enhanced stability in is observed in a periodically driven system with a single, metastable minimum. The system remains in the metastable minimum for some time given by the mean first passage time for the barrier, and the mean first passage time has a maximum at some noise intensity. This effect is related to stochastic resonance. In our case the potential fluctuates stochastically with some correlation time $`\tau `$ and has two minima. The less stable minimum is the absolute minimum in some configurations of the potential, but most of the time this minimum is metastable. Nevertheless it can be highly occupied.
As mentioned above, Astumian and Robertson investigated a two-state model with periodically modulated rates to describe the effect of an oscillating electric field on membrane proteins. Their results are in qualitative agreement with our results for the model with dichotomously fluctuating rates. One should expect that our results for the motion of a particle in a fluctuating potential, described by a Fokker-Planck equation, are relevant for such biologically motivated models. This is important, because the description by a Fokker-Planck equation is much more general. Furthermore, for large frequencies or small correlation times the system feels an average potential that cannot be described by fluctuating rates. |
no-problem/9912/astro-ph9912209.html | ar5iv | text | # Morphological and Star Formation Evolution to z=1
## 1. Introduction
The evolution of luminosity densities has been examined by Lilly et al (1996), from 600 I$`<`$22 galaxies of the Canada France Redshift Survey (CFRS). They found a large decrease, by a factor of 10, of the rest-frame UV luminosity density from z=1 to z=0. This factor probably has to be lowered to $`\sim `$6, since recent estimates (Treyer et al, 1998) of the local UV luminosity density are 1.5 times larger than previous estimates based on the H$`\alpha `$ luminosity density (Gallego et al, 1995).
At 15$`\mu `$m deep counts show a steep slope below 400$`\mu `$Jy (see Elbaz et al, 1999). Associated with the flattening of the deep radio count slope (Fomalont et al, 1991), this suggests the presence of an evolving population at infra-red wavelengths. On the basis of a sample of $``$ 30 15$`\mu `$m and radio sources, Flores et al (1999) have shown that the rest-frame IR luminosity density evolves as rapidly as the rest-frame UV luminosity density.
It is important to notice that all these works are based on observations of relatively luminous galaxies, in the optical ($`M_B<`$ -20) as well as in the IR ($`L_{bol}>`$ 2 $`\times 10^{11}`$ $`L_{\odot }`$), and that the corresponding evaluations of the luminosity density evolution are, strictly speaking, only valid for luminous galaxies. Assuming an unevolved shape of the luminosity function in both UV and IR would provide an equipartition of the energy output (or star formation rate density) between UV and IR light from z=1 ($`\sim `$ 9 Gyr ago) to the present day (Hammer, 1999), in accordance with bolometric measurements of the background (see Pozzetti et al, 1998).
## 2. Galaxy morphologies and their global evolution
### 2.1. Observations of the Hubble Space Telescope and their limitations
Studies of distant galaxies are limited by the spatial resolution, since 1 pixel of HST/WFPC2 corresponds to 1$`h_{50}^{-1}`$kpc at z$`\sim `$0.75. This provides the most severe limitation to their morphological studies. For example, at z=0.75, a 5 kpc half-light radius would correspond to only 5 WFPC2 pixels, with an HWHM of only two pixels. This limits the accuracy of bulge/disk deconvolution for a non-negligible fraction of the distant galaxies.
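This scale can be checked with standard angular-diameter-distance arithmetic. Assuming, purely for illustration, an Einstein-de Sitter cosmology with $`H_0=50h_{50}`$ km/s/Mpc (the cosmology is not specified explicitly here),

$$d_A(z)=\frac{2c}{H_0}\frac{1-(1+z)^{-1/2}}{1+z}\simeq 1.7h_{50}^{-1}\mathrm{Gpc}\text{at}z=0.75,$$

so that $`1^{\prime \prime }`$ ($`4.85\times 10^{-6}`$ rad) corresponds to $`\simeq 8h_{50}^{-1}`$ kpc, and the $`0.1^{\prime \prime }`$ WFPC2 (WF) pixel to roughly $`0.8h_{50}^{-1}`$ kpc, consistent with the $`\sim 1h_{50}^{-1}`$ kpc per pixel quoted above.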
Another limitation is related to redshift dependent effects: for example at z=0.9 the I (F814W) filter samples the rest-frame B band, a color which is more sensitive to star forming regions. 24% of the spirals observed at z$`\sim `$ 0.9 could be mis-identified as irregulars (Brinchman et al., 1998) when compared to lower z systems.
Other effects (biases against low surface brightness objects or extincted disks) are caused by the limited photometric depth reachable in a reasonable exposure time (a few hours), and all the above limitations emphasize the need for an optical camera optimized at the diffraction limit on an 8 meter space telescope.
### 2.2. Evolution of averaged properties
Brinchman et al (1998) have presented the HST imagery of $`\sim `$ 340 I$`\le `$22 galaxies, spanning a redshift range from z=0.1 to z=1. Galaxy morphologies have been classified by eye as well as through bulge/disk deconvolution. Brinchman et al (1998) quoted that 9% of galaxies at 0.2$`<`$ z $`<`$0.5 are irregulars, a fraction which reaches 32% at 0.75$`<`$ z $`<`$1. The luminous galaxies in the highest redshift bins were much bluer and of later type than an Sbc, in contrast to present-day galaxies (Figure 1). The present-day stellar population has an average color ($`(B-K)_{AB}`$=2.5) typical of an Sab (see Hammer, 1999).
This trend is accompanied by a large increase with redshift of the fraction of emission line galaxies (those with $`W_0(OII)>`$ 15 Å), from 13% locally to more than 50% at z$`>`$0.5 (Hammer et al, 1997). These properties, taken together, are consistent with the observed rest-frame UV luminosity density, and confirm a declining star formation history over the last 9 Gyr.
## 3. Evolution of galaxies selected by morphology
### 3.1. Ellipticals
The number density evolution of luminous ellipticals is still controversial (see Kauffmann et al, 1998). It is an important debate, because the monolithic collapse scenario (see e.g. Bower et al., 1992) predicts their formation at high redshift, in contrast to hierarchical models (see White and Rees, 1978) in which massive ellipticals are formed at later times from the collapse of smaller units. The two scenarios thus predict different star formation histories, since a large fraction of the metals is bound in bulges (see Fukugita et al, 1998).
By selecting elliptical galaxies on the basis of their luminosity profiles, Schade et al. (1999) have shown that a color criterion is rather inefficient: the latter likely selects as many disks (possibly with small amounts of dust) as ellipticals. Schade et al. also find no evidence for a decline in the space density of ellipticals since z=1, although this conclusion is limited by the small sample of objects under consideration (46 galaxies). More interesting is the fact that a third of the selected ellipticals show significant emission lines, which they interpret as related to small events of star formation at z$`<`$1, representing the formation of only a few percent of the stellar mass.
It is premature to conclude on their number density evolution before a larger sample is gathered. Several biases can also affect the apparent density of ellipticals at high z, including a possible mis-classification of some S0s with faint disks. Even the detection of small amounts of star formation in z$`<`$1 ellipticals should be taken with caution, because the presence of emission lines seems not to affect their (U-V) colors (Figure 2). Extinction of hot stars might be at work in these objects, but it would then be difficult to explain the presence of the \[OII\]3727 emission line. Alternatively, these emission lines could as well be related to the presence of an AGN, which Hammer et al (1995) suggested to be present in most of the massive ellipticals, on the basis of their inverted radio spectra. There is however evidence that elliptical galaxies are not contributing to the observed evolution of the rest-frame UV luminosity density.
### 3.2. Large disks
The density of large disks with $`r_{disk}\ge `$ 3.2$`h_{50}^{-1}`$kpc is found to be the same at z=0.75 as locally (Lilly et al, 1998). Only a density decrease of less than 30% at z=1 is consistent with the data. Lilly et al (1998) also find that the UV luminosity density produced by large disks shows only a modest increase with redshift. There is however a general shift towards later types for disks in the highest redshift bin.
From long-slit spectroscopy studies, Vogt et al (1997, see also Koo, 1999) show an unevolved Tully-Fisher relation for disks at z $`\sim `$ 1. However, the disk velocity could be affected by the presence of companions at high z as well as by the geometry and alignment of the slit with the disk major axis (see Amram et al, 1996). Higher resolution spectroscopy with integral field units will definitively establish the Tully-Fisher relation at high redshift.
An important question is whether the present-day population of galaxies similar to the Milky Way was already in place 7 to 9 Gyr ago. Studies of the Milky Way (see Boissier and Prantzos, 1999) as well as the Schmidt law for disks (see Kennicutt, 1998) argue in favor of a rather long duration (3-7 Gyr) for the formation of the bulk of their stars. The number density evolution, the Tully-Fisher relation and the present-day properties of disks all suggest a relatively passive evolution of large disks. Changes in large disks with redshift appear not to be the main contributor to the evolution of the rest-frame UV luminosity density detected in that redshift range. However, it is still unclear if all the disks observed at z=1 are progenitors of present-day disks. Star formation estimates in disk galaxies might be severely affected by extinction, as shown by Gruel et al (1999, in preparation).
## 4. The major contributors to the observed evolution
### 4.1. Compact galaxies
The most rapidly evolving population of galaxies detected in the visible is made of small and compact galaxies with half-light radii smaller than 5$`h_{50}^{-1}`$kpc (Guzman et al, 1997; Lilly et al, 1998). Their UV luminosity density was 10 times higher at z=0.875 than at z=0.375, and they correspond to $`\sim `$ 40% of the rest-frame UV luminosity density in the higher redshift bin (Hammer and Flores, 1998). These objects are somewhat enigmatic: their sizes ($`r_{disk}\le `$ 2.5$`h_{50}^{-1}`$kpc) and their velocity widths (35 to 150 km/s; Phillips et al, 1997) are apparently similar to those of local dwarfs, while they are 10 to 100 times more luminous than an $`M_B`$=-17.5 dwarf.
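The quoted luminosity ratios follow from the usual magnitude arithmetic (shown here only as a reminder):

$$\frac{L_1}{L_2}=10^{-0.4(M_1-M_2)},$$

so a compact galaxy with $`M_B=-20`$ is $`10^{0.4\times 2.5}=10`$ times, and one with $`M_B=-22.5`$ is $`100`$ times, more luminous than an $`M_B=-17.5`$ dwarf.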
Guzman (1999) has argued that compact galaxies are the result of bursts in low-mass systems (a few $`10^9`$ $`M_{\odot }`$), which would generate the present-day population of spheroidal dwarf galaxies. Kobulnicky and Zaritsky (1999) have estimated a range of Z=0.3$`Z_{\odot }`$ to $`Z_{\odot }`$ for the metal abundance of a few z$`<`$0.5 compact galaxies. These values, as well as their luminosities, are rather consistent with those of local spiral galaxies or of the most massive irregular galaxies. An important question is whether the very narrow emission lines are indeed sampling the gravitational potential, or whether they are only located in a small area of the galaxies, or are affected by dust or inclination effects. An important fraction (if not all) of the luminous compact galaxies at z$`>`$0.5 show evidence for low surface brightness extensions (Figure 3), and a noticeable fraction of them have companions. Further studies with long exposure times at 8 meter telescopes are required to study their continuum properties (absorption lines).
### 4.2. Interacting and starbursting galaxies detected by ISO
During an ISOCAM follow-up study of the CFRS, Flores et al (1999) have detected galaxies with strong emission at both radio and mid-IR wavelengths. They interpret them as strong star forming galaxies with SFRs from 40 to 250 $`M_{\odot }`$ $`yr^{-1}`$, most of their UV light being reprocessed by dust to IR wavelengths. These galaxies represent 4% of the luminous ($`M_B<`$-20) galaxy population, while they produce as many stars as the rest of the galaxy population.
Most of these star-forming galaxies at 0.5$`\le `$z$`\le `$ 1 appear to be strong mergers, or at least they show signs of interactions (Figure 4). It is important to notice that the individual galaxies involved in these systems have sizes larger than those of the normal galaxy population (Figure 5). This argues in favor of the formation at z$`<`$1 of large systems, including massive ellipticals formed by the merging of two large disks. Several large disks are also strong IR emitters, implying that the UV luminosity samples only a small fraction of their star formation.
## 5. Conclusion
UV and IR luminosity density both present a surprisingly similar evolution since the last 9 Gyr. The former is dominated by a numerous population of blue galaxies, the latter is concentrated in a small fraction of the galaxy population, mostly interacting and dusty galaxies.
When looking at the morphological properties of the population responsible for the luminosity density evolution, UV and IR selected galaxies draw strikingly different pictures:
-Large galaxies (ellipticals and large disks) have blue or UV properties almost unchanged since z=1, and most of the reported evolution in the UV is related to irregular and compact galaxies.
-Conversely, most of the galaxies responsible for the IR luminosity density evolution are large galaxies (from S0 to Sbc), generally found in interacting systems; they include some good candidates for the formation of a massive elliptical at z$`<`$1 resulting from the merging of two disk galaxies.
It is still premature to test which scenario (monolithic collapse or hierarchical formation) dominates galaxy formation. But there is good evidence that, at least, galaxy interactions were still at work during the last 9 Gyr to form massive galaxies. Larger samples and better spectroscopic resolution are required to quantify the above observational facts.
#### Acknowledgments.
I would like to thank the organizing and scientific committees for their kind invitation.
## References
Boissier, S., Prantzos, N., 1999, MNRAS, in press (astro-ph/9902148)
Bower, R., Lucey, J., Ellis, R. 1992, MNRAS, 254, 613
Brinchman, J., Abraham, R., Schade, D., Tresse, L. et al, 1998, ApJ, 499, 112
Elbaz, D. Aussel, H., Cesarsky, C. et al, 1999 (astro-ph/9902229)
Flores, H., Hammer, F. Thuan, T.X., Cesarsky, C. et al., 1999, ApJ, 517, 148
Fomalont, E., Windhorst, R., Kristian, J., Kellerman, K., 1991, AJ, 102, 1258
Fukugita, M., Hogan, C., Peebles, P., 1998, ApJ, 503, 518
Gallego, J., Zamorano, J., Aragon-Salamanca, A., Rego, 1995, ApJ, 455, L1
Guzman, R., Gallego, J., Koo, D.C., Phillips, A.C. et al, 1997, ApJ, 489, 559
Guzman, R., 1999, in Proceedings of the XIXth Moriond Conference on ”Building Galaxies: from the primordial Universe to the present”, eds Hammer et al, Ed. Frontières
Hammer, F., Crampton, D., Lilly, S. et al, 1995, MNRAS, 276, 1085
Hammer F., Flores H., Lilly S., Crampton D. et al, 1997, ApJ, 480, 59.
Hammer F., Flores H., 1998, in Proceedings of the XVIIIth Moriond Conference on ”Dwarf Galaxies and Cosmology”, eds Thuan et al, Ed. Frontières (astro-ph/9806184)
Hammer F., 1999, in Proceedings of the XIXth Moriond Conference on ”Building Galaxies: from the primordial Universe to the present”, eds Hammer et al, Ed. Frontières
Kauffmann, G., Charlot, S., White, S., 1998, MNRAS, 283, 117
Kennicutt, R., 1998 ApJ, 498, 541
Kobulnicky, H., Zaritsky, D., 1999, ApJ, 511, 118
Koo, D., 1999, in Proceedings of the XIXth Moriond Conference on ”Building Galaxies: from the primordial Universe to the present”, eds Hammer et al, Ed. Frontières
Lilly S., Le Fèvre O., Hammer F., Crampton, D., 1996 , ApJ, 460, L1
Lilly, S.J., Schade, D., Ellis, R.S. et al, 1998, ApJ, 500, 75
Phillips, A., Guzman, R., Gallego, J., Koo, D. et al, 1997, ApJ, 489, 543
Pozzetti, L., Madau, P., Zamorani, G., Ferguson, H.C., Bruzual, G., 1998, MNRAS, 298, 1133
Schade, D., Lilly, S., Crampton, D. et al, 1999, ApJ, in press (astro-ph/9906171)
Treyer, M., Ellis, R., Milliard, B. et al 1998 MNRAS, 300, 303
Vogt, N., Phillips, A., Faber, S., et al, 1997, ApJ, 479, 121
White, S., Rees, M., 1978, MNRAS, 183, 341 |
no-problem/9912/cond-mat9912185.html | ar5iv | text | # Reversible Random Sequential Adsorption of Dimers on a Triangular Lattice
## I Introduction
A large number of nonequilibrium systems can be qualitatively described as a flux of particles impinging on a surface or line. Two heavily studied models of such systems treat the particles as either fixed in place upon impact (random sequential adsorption) or as free to diffuse along the surface or line (random cooperative adsorption) . One can also consider the deposition of particles that are free to desorb . Some examples of the wide range of applicability of these models include coating problems, chemisorption, physisorption, the reaction of molecular species on surfaces and at interfaces, and the binding of ligands on polymer chains. Jamming is one of the common occurrences in these systems that random sequential adsorption models effectively describe. Loosely speaking, a jammed system is one that is locked into a state of partial coverage because of adsorbate size or shape. In addition to the various adsorption processes, jamming occurs in a wide range of nonequilibrium situations, including glasses, granular materials, and traffic flow . In spite of significant progress, no general framework exists for the description of jamming phenomena.
A particular realization of random sequential adsorption is the parking lot model . In the irreversible version of this model, identical particles (cars) adsorb on a line (curb) at a rate $`K^+`$. In this model, the phenomenon of jamming has been known for some time . A certain number of the parked cars leave a space that is too small to fit another car. These are referred to as bad parkers. The result is a density of cars along the curb that is less than one. The density of cars reached in the irreversible model is the jamming limit.
In the reversible version, identical particles (cars) adsorb on a line (curb) at a rate $`K^+`$ and leave the line (curb) at a rate $`K^{-}`$. The removal of cars allows for adjustments in the bad parkers that relieve the jamming. Recently, there has been renewed interest in the reversible case because of its successful application to compaction in granular materials when generalized to three dimensions . In this version, the “parking spots” are voids in the material that can be filled with particles. The dynamics of the reversible parking lot model for large values of $`K=K^+/K^{-}`$ has a number of interesting features. Perhaps the most dramatic feature is the existence of two very different time scales for the evolution of the coverage fraction of particles . First, there is a rapid approach to a coverage fraction that is equal to the jamming limit. This is followed by a slow relaxation to a larger steady state value. The slow relaxation is understood in terms of collective parking/leaving events involving multiple cars .
In this paper, we present the results for simulations of the reversible adsorption of dimers on: (1) a one-dimensional lattice, (2) a two-dimensional triangular lattice, and (3) a two-dimensional triangular lattice with the nearest neighbors excluded. The one-dimensional lattice model was chosen as a test case, and the results are in good agreement with existing data. In particular, our simulations confirm the importance of collective parking events in controlling the slow dynamics, as seen in Ref. . The two triangular lattice models exhibit differences in their time evolution that can be attributed to effects of bond orientation and packing on the collective events. The case without nearest-neighbor exclusion corresponds to attempting to cover the plane with a shape formed by two regular hexagons sharing a side. The nearest-neighbor excluded case corresponds to a tiling of distorted hexagons that cover multiple sites.
The rest of the paper is organized as follows. Section II describes the details of the simulations. Section III presents the results for the one-dimensional model. Section IV presents the results for the two triangular lattices. The simulations were motivated in part by experimental measurements of the viscosity of Langmuir monolayers. A brief description of the experimental system and its relationship to the simulations presented here is given in Section V. The results are discussed and summarized in Section VI.
## II Simulation Details
For the one-dimensional simulations, a line of 32000 sites was used. Both of the triangular lattices consisted of a grid of 1000 x 1000 sites. To distinguish between the two-dimensional models, we introduce the following nomenclature. Model A will refer to the triangular lattice without nearest-neighbor exclusion. Model B will refer to the triangular lattice with nearest neighbors excluded from binding. Particles are taken to bind to two neighboring sites on the lattice, forming a dimer. The binding occurs at a rate $`K^+`$, and particles leave the surface at a rate of $`K^{-}`$.
At each step in the simulation, a site was chosen at random. Then, a random number between 0 and 1 was compared with the ratio $`K^+/(K^++K^{-})`$ to determine whether a binding or unbinding event was attempted. For unbinding events, if the chosen site was part of a bond, the bond was broken; otherwise, no action took place. For binding events, a nearest neighbor was randomly selected. A binding event occurred only if both sites were allowed binding sites. The definition of allowed binding site depends on the model. For the one-dimensional and Model A cases, an allowed site is any site that is not part of a bond. For Model B, if either site is part of a bond or the nearest neighbor of a bound site, binding is not allowed. It is important to note that the number of new bonds created is directly proportional to the number of allowed sites, which is not the same as the number of open sites. The number of desorption events is still directly proportional to the coverage fraction. The coverage fraction, $`\rho `$, is defined as the ratio of sites that are part of a bond to the total number of sites.
A schematic of each of the model systems with examples of bound sites is shown in Fig. 1. It is important to notice the different spatial structures in Model A and Model B. In Model A, complete coverage corresponds to all sites being part of a bond. In Model B, perfect coverage of the system corresponds to a tile of distorted hexagons that are composed of both empty and bound sites. This results in a maximal coverage fraction $`\rho _{max}=0.4`$. For both the one-dimensional case and Model A, $`\rho _{max}=1.0`$. In this paper, $`\rho (\infty )`$ will designate the steady state value of the fractional coverage, and $`\rho _{jam}`$ will refer to $`\rho (\infty )`$ in the case $`K^{-}=0`$, i.e. the jamming limit.
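To make the update rule concrete, here is a deliberately stripped-down sketch of the one-dimensional algorithm in Python (the lattice size, the value of $`K`$, the periodic boundary conditions, and all names are illustrative choices, smaller and simpler than the production runs described above):

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000                        # lattice sites (the runs above use 32000)
K_plus, K_minus = 100.0, 1.0    # adsorption and desorption rates, so K = 100
p_bind = K_plus / (K_plus + K_minus)

# bond[i] = index of the site that i is bonded to, or -1 if site i is empty
bond = np.full(N, -1, dtype=int)

def step():
    i = int(rng.integers(N))
    if rng.random() < p_bind:                       # attempt to adsorb a dimer
        j = (i + 1) % N if rng.random() < 0.5 else (i - 1) % N
        if bond[i] == -1 and bond[j] == -1:         # both sites must be allowed (empty)
            bond[i], bond[j] = j, i
    else:                                           # attempt to desorb
        if bond[i] != -1:
            j = bond[i]
            bond[i] = -1
            bond[j] = -1

for t in range(1, 2_000_001):
    step()
    if t % 400_000 == 0:
        rho = np.count_nonzero(bond != -1) / N      # coverage fraction
        print(t, rho)
```

The Model A and Model B runs differ only in the neighbor list of the triangular lattice and, for Model B, in an additional nearest-neighbor exclusion test before a bond is created.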
## III One-dimensional Simulation
Figure 2 shows $`\rho `$ as a function of iteration step for selected values of $`K=K^+/K^{-}`$. We include the case $`K^{-}=0`$, which gives $`\rho _{jam}=0.86474`$. For comparison, analytic calculations give $`\rho _{jam}=0.86466`$ . The original work on the reversible parking lot model proposed a mean-field description of the dynamics that can be expressed in terms of the average density, or fractional surface coverage, $`\rho `$. Both the continuous and lattice versions of the parking lot model were considered. Figure 3 shows the steady state value of $`\rho `$ for the values of $`K`$ plotted in Fig. 2. The solid curve in Fig. 3 is the value for $`\rho (\infty )`$ for dimers binding to a one-dimensional lattice, as determined by the following equation from Ref. :
$$\rho (\infty )=1-(K^{-}/K^+)^{1/2}/2.$$
(1)
The agreement between our simulations and Eq. 1 confirms the mean field prediction for the equilibrium values of $`\rho `$. However, as with the continuous parking lot model , the mean-field description is unable to accurately predict the time evolution of $`\rho `$. This can be seen in Fig. 2 where the two time scales controlling the evolution of $`\rho `$ are evident for $`K>10`$. The system rapidly reaches $`\rho _{jam}`$, and then slowly approaches its equilibrium value. As K goes to infinity, $`\rho (\infty )`$ approaches one, but the time to reach equilibrium approaches infinity. This is in agreement with results for the continuous parking lot model reported in Ref. .
We have found that the explanation of the two distinct time scales reported in Ref. applies to the discrete case as well. Essentially, collective events are responsible for the evolution of $`\rho `$ for $`\rho >\rho _{jam}`$. In Ref. , the authors calculated the transition rates for two good particles to one bad particle and one bad particle to two good particles and found that these rates account for the additional slow time scales. In contrast, we directly monitor the transitions as part of the simulation. The reason such transitions result in an additional slow time scale can be understood in terms of the following argument.
As discussed in the introduction, when $`K^{-}=0`$, jamming occurs because of “bad parkers” that leave empty space. For the one-dimensional lattice, empty space refers to a single site that is unable to bond. An example is shown in Fig. 4a. For small values of $`K^{-}`$, bad parkers initially occur at essentially the same rate as for $`K^{-}=0`$ because very few particles desorb. Therefore, the coverage fraction for the system quickly approaches a value of $`\rho _{jam}`$. Even when a value of $`\rho _{jam}`$ is reached, the rare desorption event is generally followed immediately by a readsorption because $`K^+`$ is so large. The total number of particles is not changed by these events. However, when one bad parker desorbs and two particles adsorb in the opened good locations, then the number of particles is increased by one. Likewise, if two good parkers unbind and one bad parker binds, the number of particles is decreased by one. Because these events involve multiple particle transitions, they occur on a longer time scale than simple adsorption/desorption events.
For the one-dimensional discrete case, one can identify the relevant good to bad and bad to good transitions that involve only two good parkers. These are illustrated in Figs. 4b and 4c. As these events are expected to dominate the dynamics, one can write the following equation for the evolution of $`\rho `$ once the jamming limit has been reached:
$$d\rho /dt=R_{bg}-R_{gb}+h.o.t.$$
(2)
Here $`R_{bg}`$ and $`R_{gb}`$ are the rates of bad to good and good to bad transitions respectively, and $`h.o.t.`$ denotes higher-order terms: collective transitions that involve a larger number of particles and hence occur at a slower rate.
We were able to track the bad to good and good to bad transitions during the simulation. This was accomplished by converting the particle sites to an array of bond locations. Each location between two sites was assigned a value of 1 if a bond was present and 0 if there was no bond. For example, the solid lines in Fig. 4b would be represented by the string $`1010101`$. Notice, by definition, between any two bonds there is an open space, so the completely filled system is represented by $`1010101010\mathrm{\dots }`$. The string of bond locations was saved at steps $`i`$ and $`i+\mathrm{\Delta }`$. Each bond location was taken as the initial digit in a seven digit string, and these strings were compared for steps $`i`$ and $`i+\mathrm{\Delta }`$. We counted the following transitions:
$`1010101`$ $`\leftrightarrow `$ $`1001001.`$
These transitions correspond to two good to one bad and one bad to two good, as discussed in Fig. 4c and Fig. 4b respectively. The choice of $`\mathrm{\Delta }`$ is important. If $`\mathrm{\Delta }`$ is too small, the transitions do not have enough time to complete. For example, in the extreme limit of choosing $`\mathrm{\Delta }`$ to be a single time step, it is not possible to have multiparticle events, but the total number of bound sites can change by one. Essentially, $`\mathrm{\Delta }`$ must be large enough for the multiparticle transitions to have time to complete. For $`\mathrm{\Delta }`$ large enough, the recorded number of transitions is essentially independent of $`\mathrm{\Delta }`$. For the data reported here, we used $`\mathrm{\Delta }=2\times 10^6`$.
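The bookkeeping just described amounts to a window comparison between the two saved bond strings; a compact sketch (function and variable names are hypothetical) is:

```python
GOOD = (1, 0, 1, 0, 1, 0, 1)   # two good parkers
BAD  = (1, 0, 0, 1, 0, 0, 1)   # one bad parker

def count_two_particle_events(bonds_old, bonds_new):
    """Tally 1010101 <-> 1001001 events between the bond arrays saved
    at steps i and i+Delta (0/1 entries, one per bond location)."""
    bad_to_good = good_to_bad = 0
    for k in range(len(bonds_old) - 6):
        old = tuple(bonds_old[k:k + 7])
        new = tuple(bonds_new[k:k + 7])
        if old == BAD and new == GOOD:
            bad_to_good += 1
        elif old == GOOD and new == BAD:
            good_to_bad += 1
    return bad_to_good, good_to_bad
```

The three-particle channel is counted in exactly the same way using the nine-digit patterns discussed below.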
In addition to counting multiparticle transitions, we also recorded the total change in $`\rho `$. Figure 5 compares the actual value of $`\rho `$ as a function of the number of iterations with the value obtained using Eq. 2 and the computed number of bad to good and good to bad transitions. Once the jamming limit is reached, the bad to good and good to bad transitions account for 94.3% of the change in $`\rho `$, confirming the general idea behind Eq. 2. An additional 3.2% of the change in $`\rho `$ is accounted for by considering a single class of three particle transitions where three good parkers were replaced by two parkers, and the reverse process. These were counted by considering 9 digit strings and looking for the transition:
$`101010101`$ $`\leftrightarrow `$ $`100101001.`$
This curve is also plotted in Fig. 5. The spatial arrangement corresponding to this transition is shown in Fig. 4d. It is important to note that when bonds exist at sites D and E, the only way to increase the number of bound sites in this region is for two particles to desorb and three particles to adsorb at sites A, B and C.
## IV Two-dimensional Simulations
The results of the simulation for $`\rho `$ as a function of iteration step for the adsorption of dimers on a two-dimensional triangular lattice (Model A) and on a two-dimensional triangular lattice with nearest-neighbor exclusion (Model B) are presented in Figs. 6 and 7, respectively. For Model A, previous simulations have found a jamming limit of 0.9243 . Our simulations give a value of 0.9120. For Model B, we find a jamming limit of 0.275. Recall that complete coverage in this case corresponds to $`\rho =0.4`$. We are not aware of any previous work on a Model B type simulation. However, by appropriately including the empty nearest-neighbor sites in the definition of $`\rho `$, we can compare to simulations involving n-mers of length 6 that cover a hexagonal patch. These simulations find a jamming limit of 0.6847, and our converted value is 0.6875 .
The results for the two-dimensional cases are qualitatively similar to the one-dimensional case. One observes multiple time scales: a rapid approach to the jamming limit and a slow relaxation to the steady-state value. This suggests that the same picture of multiparticle transitions will apply to the two-dimensional system. However, in contrast to the one-dimensional case, the identification of collective transitions is significantly more complex for Model A and B because of the larger number of arrangements, arising from the differing orientations of the bonds, that can produce bad parkers. Nevertheless, we did carry out a limited analysis for the case of Model A.
The method used to track multiparticle events in Model A was similar in concept to the one dimensional case. However, because the bonds have orientation, we compared the actual sites instead of the bonds for three classes of transitions:
$`0110`$ $`\leftrightarrow `$ $`1111`$
$`\genfrac{}{}{0pt}{}{0\;\;1}{\;\;1\;\;0}`$ $`\leftrightarrow `$ $`\genfrac{}{}{0pt}{}{1\;\;1}{\;\;1\;\;1}`$
$`\genfrac{}{}{0pt}{}{\;\;1\;\;0}{0\;\;1}`$ $`\leftrightarrow `$ $`\genfrac{}{}{0pt}{}{\;\;1\;\;1}{1\;\;1}`$
In this case, occupied sites are represented by 1 and unoccupied sites are represented by 0. Using sites instead of bonds results in some differences between the methods used in the two-dimensional and one-dimensional cases. First, the transitions counted in this manner correspond to classes of transitions in the following sense. Because we track sites and not bonds, two nearest neighbor sites can be occupied either because they share a bond or because of two neighboring bonds that are at an angle to the line being considered. So, the first class of transitions includes the transitions that are exactly analogous to the one-dimensional good to bad transitions. But, it also includes multiparticle transitions that involve bonds at an angle to the horizontal and that successfully fill the empty sites along the horizontal. Second, the offset of the 1’s and 0’s in the second two classes of transitions is important and reflects the underlying hexagonal lattice. Note that because only nearest-neighbor bonding is allowed, the diagonal connecting the two zeros in each case is not an allowed binding site. Finally, for the two-dimensional case, we exploited the hexagonal symmetry of the problem, and multiplied the rate for the first type of transition by three. The rates of the second two transitions are multiplied by 3/2 to account for double counting. The value of $`\mathrm{\Delta }`$ was chosen in a similar fashion to the one-dimensional case and corresponded to a constant interval of $`\mathrm{\Delta }\rho =0.005`$.
The results for $`\rho `$ as a function of the number of iterations and the value of $`\rho `$ computed from Eq. 2 using just the three classes of events defined above are plotted in Fig. 8. One striking feature of Fig. 8 is the fact that the two-particle events we identified account for nearly 100% of the dynamics until the number of steps reaches approximately $`2\times 10^9`$. At this point, the coverage continues to grow, but there is essentially no change due to the identified two-particle events. This strongly suggests that other two-particle events or higher-order events involving more than two particles are becoming important.
## V Possible Application to Langmuir Monolayers
An obvious question is: do the triangular lattices considered here apply to any experimental systems? There is indirect evidence that the models discussed here are relevant to the binding of Ca<sup>++</sup> ions to a Langmuir monolayer. Langmuir monolayers are composed of insoluble, amphiphilic molecules that are confined to the air-water interface . They exhibit the usual gas, liquid, and solid phases, as well as a large number of two-dimensional analogs of smectic phases . Many of these phases are hexatic, with the molecules locally arranged on a distorted hexagonal lattice. When Ca<sup>++</sup> is present in the water, it can bind two fatty-acid molecules together. This substantially alters a number of the physical properties of the monolayer, such as the lattice spacing and the viscosity . Existing measurements and models of Ca<sup>++</sup> binding have focused on the equilibrium coverage fraction. However, the measurements have focused on time scales of one hour or less. The coverage fraction depends strongly on pH, which is understandable in terms of the degree of ionization of the fatty acid headgroup. At low values of the pH, essentially all of the fatty acid molecules are neutral, and the Ca<sup>++</sup> ions do not bind. As the pH is increased, an increasing number of fatty acid molecules become charged, and the Ca<sup>++</sup> ions are free to bind to the monolayer.
The possible relevance of Model A and B to the fatty acid monolayers is based on viscosity measurements as a function of time in the presence of Ca<sup>++</sup> for the hexatic phase of a particular fatty acid . Figure 9 reproduces one set of data from Fig. 2 of Ref. , illustrating a typical time evolution of the viscosity. The viscosity increases 2 orders of magnitude over 15 hours. The time evolution can be divided into three distinct regions: an initial rapid rise in viscosity within the first hour, a slower rise in viscosity covering 5 - 6 hours, and a final even slower rise in viscosity. For comparison, the computed fractional coverage of $`\rho `$ is shown in Fig. 9 versus the number of iterations. In this case, we have used a linear scale for the number of iterations. The previous plots all used a logarithmic scale. The time evolution of the Ca<sup>++</sup> binding exhibits the three general regions present in the viscosity data, and as such, provides a natural explanation for the effect.
There are a number of points with regard to the connections between the model and the monolayer experiments. The simulations are consistent with the fact that previous measurements of Ca<sup>++</sup> binding do not observe multiple time scales. In the simulations, the interesting change in coverage fraction occurs at late times, while in the experiments, only relatively early times are considered . Also, the fact that the experiments agree reasonably well with equilibrium calculations is not surprising because the late-time changes in $`\rho `$ are relatively small in the simulations. Therefore, longer experiments with more precise measurements of $`\rho `$ are required to directly observe the effects predicted by our simulations. This discussion naturally leads to the second point: how do small changes in coverage fraction produce large changes in viscosity? An ad hoc model that is capable of explaining the large viscosity rise assumes that the viscosity is proportional to 1/(A - $`\rho `$), where A is a constant determined by the equilibrium coverage fraction. This model is based on the idea that the fluidity (the inverse of the viscosity) is proportional to the number of unbound sites. Clearly, both more careful direct measurements of the coverage fraction versus time and a better theoretical understanding of the connection between viscosity and coverage fraction are needed.
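To see how this form converts a small change in coverage into a large change in viscosity, write $`\eta `$ for the viscosity and take illustrative numbers (not fitted to the data): if $`A-\rho _{jam}=0.02`$ and the slow relaxation raises $`\rho `$ by $`0.018`$, then

$$\frac{\eta (\rho )}{\eta (\rho _{jam})}=\frac{A-\rho _{jam}}{A-\rho }=\frac{0.02}{0.002}=10,$$

i.e. a change in coverage of less than two percent produces an order-of-magnitude rise in viscosity.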
The final two comments concern possible refinements of the adsorption model when applying it to the monolayer system. In this paper, we considered the two cases of binding to any open pair of sites (Model A) and binding with nearest-neighbor exclusions (Model B) because they are simple cases with different geometric arrangements. The correct detailed description of the Langmuir monolayer system is certainly more complicated than either of these. However, as mentioned, the degree of ionization of the monolayer is pH dependent. To zeroth order, Model B is a reasonable description of a monolayer that is only partially ionized for two reasons. First, for a partially ionized monolayer, if a particular site is available for binding, it is highly unlikely that any of the neighbors will be available as well. Second, the steady-state values of $`\rho `$ found in Model B are in reasonable agreement with measurements of the values of $`\rho `$ reported for monolayers for pH between 5 and 6 .
The second refinement concerns lateral diffusion of Ca<sup>++</sup> ions once they have bound to the monolayer. Inclusion of diffusion should not substantially alter the qualitative results presented here, but it would affect the quantitative interpretation of the rate constants $`K^+`$ and $`K^{-}`$. One can model lateral diffusion of Ca<sup>++</sup> as the unbinding of a Ca<sup>++</sup> from one of the monolayer molecules followed by a rotation around its remaining bond and subsequent binding to another available site. However, this process could also be viewed as a complete unbinding and rebinding at a neighboring site with a renormalized rate constant. In addition, for $`\rho `$ to evolve in time, diffusion would need to be coupled with additional binding. This would result in rearrangements that are completely analogous to transitions from one bad parker to two good parkers. Therefore, even with diffusion in the plane of the monolayer, the basic physics remains the same. Jamming will still occur, and the slow relaxation of the bad parkers due to cooperative behavior will result in the slow time scales.
## VI Discussion
Equation 2 provides a means of expanding the dynamics in terms of collective events that occur on slower and slower time scales. We were able to directly confirm this in the simple situation of the one-dimensional model and for the two-particle transitions in Model A. For the one-dimensional case, two-particle events were sufficient to describe the dynamics of the system, as was found in the continuous model. This results in two plateaus in the time evolution of the coverage fraction. Single particle events, dominated by adsorption, rapidly drive the system to the jamming limit. Processes involving two particles are sufficiently slow that $`\rho `$ plateaus for some time. The length of this plateau is controlled by $`K`$, as $`K`$ ultimately determines the rates of multiparticle transitions. The larger the value of $`K`$, the longer the system remains at the jamming limit. After enough time, the two particle processes have a sufficiently large contribution to the dynamics that $`\rho `$ increases at a noticeable rate until the true steady-state value is reached, and the coverage plateaus again.
In contrast, one can imagine more complicated dynamics, such as multiple plateaus in the time evolution, occurring when collective events involving three or more particles are important. For example, Fig. 4d illustrates the existence of spatial arrangements of unbound sites that cannot be corrected by two-particle events. In Model A, Fig. 8 shows that the two particle events are not capable of bringing the system to its steady-state value, as they are no longer contributing to the dynamics at late enough times. This suggests that the remaining unbound sites occur in spatial arrangements that are analogous to those in Fig. 4d. Multiple plateaus would arise in the extreme case where the transition rates for two-particle and three-particle events are sufficiently different. This would occur as follows. The two-particle transitions would drive the system to some value $`\rho _2`$ in a given time $`t`$. If $`t`$ were short compared to the time scale for three-particle transitions, the system would stay at $`\rho _2`$ until the three-particle events contributed to the dynamics.
Identifying the existence of multiple plateaus is extremely challenging. First, the steady state value of $`\rho `$ must be sufficiently large that at late times the unbound sites are arranged in such a way that two-particle events are ineffective. This implies a sufficiently high value of $`K`$. However, this in turn both decreases significantly the rate of collective events and increases the time to reach steady state. For Model A and B, we have indirect evidence of multiple plateaus. In both cases, the coverage fraction for $`K=10000`$ appears to be leveling off at a value that is lower than the apparent steady-state values for $`K=500`$, in the case of Model A, and $`K=200`$, in the case of Model B. In principle, $`\rho (\infty )`$ should approach one (or 0.4 for Model B) as $`K`$ approaches infinity. Therefore, the behavior for $`K=10000`$ suggests the beginning of a secondary plateau. Unfortunately, as discussed, the time required to achieve steady-state increases with $`K`$, and we do not have sufficient computing power to determine if this is a true intermediate plateau for $`K=10000`$ or if this is actually the steady-state value.
It is clear that both analytic and further numerical work are needed to fully explore the effects of higher-order transitions. Identification of the higher order terms in Eq. 2 is an important step in this process. An exhaustive identification of all possible transitions is beyond the scope of this paper; however, Fig. 10 identifies a small subset of transitions that illustrates why one would expect differences between Model A and B for large enough times or large enough values of $`K`$.
Figure 10a shows a set of transitions for Model A, and Fig. 10b illustrates the equivalent ones for Model B. In both cases, there exist at least two different classes of transitions that turn two good parkers (labeled A, B, and C in Fig. 10) into one bad parker. For Model A, if A and C desorb, then there are two possible sites that result in a bad parker, and two possible sites that result in the re-establishment of a pair of good parkers. But, if A and B desorb, then the situation reverts to the one-dimensional case. (In one dimension, after two good parkers desorb, bonding to one out of the three open sites corresponds to the creation of a bad parker (see Fig. 4c).) For Model B, if A and C desorb, then there are six possible sites that result in a bad parker, and six possible sites that result in the re-establishment of a pair of good parkers. This results in the same probabilities as in Model A. However, in the case where A and B desorb, two sites are available for bad parkers, and two sites are available for good parkers. Therefore, the chance of two good switching to one bad is increased. Because differences in transition rates may affect the length of any additional plateaus, detailed calculations of these rates are needed for a fuller understanding of the possible dynamics.
In conclusion, we present results of simulations of the reversible parking lot model for three different lattices. We have directly confirmed the importance of multiparticle transitions for governing the late time behavior in two of the models. The behavior of the third model is consistent with the other two. We discussed the implications of a description of the dynamics in terms of collective events. For the right ratios of transition rates, one would expect to observe multiple plateaus. There is a suggestion of intermediate plateaus in our system, but computational limits prevented any conclusive evidence. One alternative method for finding multiple plateaus would be to consider different particle shapes as a means of adjusting the relative rates of multiparticle transitions. Finally, we presented the possible relevance of the model to the binding of Ca<sup>++</sup> to Langmuir monolayers. We showed that the jamming and subsequent slow relaxation of the binding of Ca<sup>++</sup> ions is a strong candidate for the source of the long-time scales observed in the viscosity measurements. There are experimental and theoretical details that require further exploration, including direct measurements of the Ca<sup>++</sup> coverage fraction, modeling of the dependence of viscosity on Ca<sup>++</sup> coverage fraction, better modeling of pH effects, and both measurements and modeling of lateral diffusion. However, given how well the model presented here captures the time scales present in the viscosity data, such future studies should prove extremely fruitful.
###### Acknowledgements.
We thank Amy Kolan for bringing the parking lot model to our attention, and Chuck Knobler and Robijn Bruinsma for fruitful discussions. This work was supported in part by NSF grant CTS-9874701. Acknowledgment by M. Dennin is made to the donors of The Petroleum Research Fund, administered by the ACS, for partial support of this research.
# Convergences in the Measurement Problem in Quantum Mechanics
## I Introduction
The existence of quantum physics imposes on physicists an unavoidable ambiguity when describing atomic and subatomic systems. On his/her daily activity in the laboratory, the physicist describes and calculates classical trajectories of particles in order not only to interpret the data, but also to design the apparatus producing that data. Witness, for example, how a high energy physicist reconstructs events involving a maze of particles produced in a complex subatomic collision. He/She rarely uses any quantum mechanics at all, determining trajectories, lifetimes, vertex positions, trigger algorithms, all with classical relativistic mechanics. Bohr was perfectly aware of this necessity of using classical language for describing the results of experiments and constructed his interpretation of quantum mechanics based on this (Bohr 1939). The need for a classical language is imposed, according to him, by the classical nature of observers and experimental apparatuses. It is fair to say that for Bohr there was no measurement problem, as classical apparatuses are not described by wave functions, avoiding superpositions of macroscopic states. Wave functions pertain only to the microscopic world. The problem as it is recognized today can be traced back to von Neumann.
We argue in this paper that von Neumann's interpretation of quantum mechanics (von Neumann 1955) originated in an attempt to remove the somewhat arbitrary division between the classical and the quantum world introduced by Bohr. In so doing, von Neumann shifted the cut by introducing an observer, who was not required by Bohr.
Modern attempts to solve the measurement problem by introducing the environment to dissipate macroscopic coherence do not explain the collapse of the wave function. We argue in the text that decoherence models (Zurek 1998) are true descendants of von Neumann and therefore will ultimately bring the observer to the forefront. Consequently, the decoherence approaches are not a solution of the measurement problem if one's standpoint is that the observer should not play a role in the interpretation.
We further argue that the causal approach, introduced by de Broglie and developed by Bohm and collaborators (Bohm 1952 and 1995), also originated in an effort to better deal with the division of the world. The causal approach removed it by combining classical and quantum concepts in a single description of nature: from the classical world it takes the position of the particles (be they part of the system or of the measuring device), while keeping from the quantum world the wave function and its Schrödinger evolution.
We also bring forward our view that von Neumann's interpretation (and its modern-day version, decoherence) and the de Broglie-Bohm interpretation, though corresponding to two different branches emerging from Bohr's elaborate world view, dealt in their own specific way with the vague concept of information and the quite obscure notion of its disappearance.
## II Bohr
With a long historical hindsight, we can now see Bohr's position as one that intended to provide an interpretation whose main purpose was to protect the successful formalism of quantum mechanics. We might say that Bohr anticipated many of the problems that would be faced by those who would later try to analyze in detail the measurement process. As we will show below, the attempts presented here to solve the measurement problem have to answer questions that do not pertain to the daily activity of an experimenter in the laboratory. Bohr somehow foresaw these inextricable difficulties and cut them short by declaring (Bohr 1939):
'In the system to which the quantum mechanical formalism is applied, it is of course possible to include any intermediate auxiliary agency employed in the measuring process \[but\] some ultimate measuring instruments must always be described entirely on classical lines, and consequently kept outside the system subject to quantum mechanical treatment.'
Although Bohr's position was a strong and deeply intricate one, it was challenged by one simple criticism: where is the demarcation between system and apparatus, quantum and classical?
'The 'Problem' then is this: how exactly is the world to be divided into speakable apparatus … that we can talk about … and unspeakable quantum system that we can not talk about? How many electrons, or atoms, or molecules, make an 'apparatus'? The mathematics of the ordinary theory requires such a division, but says nothing about how it is to be made. In practice the question is resolved by pragmatic recipes which have stood the test of time, applied with discretion and good taste born of experience. But should not fundamental theory permit exact mathematical formulation?' (Bell 1987, 171)
Though simple, this question has a devastating effect on Bohr's interpretation. This quotation from Bell summarizes the challenge and motivation for those who felt the urge to explain the measurement process despite the best advice against it by Bohr.
## III Von Neumann
Von Neumann was probably the first to attempt a unified quantum description of system and apparatus. Contrary to Bohr, who avoided the danger of such a description by constructing a philosophical fortress around quantum mechanics, von Neumann made a formal analysis of the measurement process and ended up arriving at an altogether new interpretation of quantum mechanics, which, not surprisingly, is frequently misidentified with Bohr's philosophical constructs. As stressed by Feyerabend (Feyerabend 1962, 237):
'when dealing with von Neumann's investigation, we are not dealing with a refinement of Bohr - we are dealing with a completely different approach.'
Like Bohr, von Neumann attributed importance to the apparatus as part of the measurement process, but he examined the evolution of the joint system (system + apparatus) with a single wave function governed by Schrödinger's evolution, which establishes a correlation between them in such a way that any result pertaining to the system is inferred from the reading of the apparatus.
In doing so, the states of the apparatus are also subjected to the superposition principle. Clearly von Neumann managed to move the classical/quantum cut from the system/apparatus boundary, but at the price of leaving the apparatus in a coherent superposition of states which is not observed. No matter how many apparatuses are included, the superpositions will remain. At this stage von Neumann distinguished two types of processes in quantum mechanics: the one described above, leading to undesirable macroscopic superpositions as a consequence of the reversible unitary evolution of Schrödinger’s equation and the other one, corresponding to our knowledge of the result of the measurement, which is irreversible. Following Bohr,
'it is also essential to remember that all unambiguous information concerning atomic object is derived from the permanent marks… left on the bodies which define the experimental conditions. Far from involving any special intricacy, the irreversible amplification effects on which the recording of the presence of atomic objects rests rather remind us of the essential irreversibility inherent in the very concept of observation.' (Bohr 1964, 3)
Von Neumann formalized the irreversibility in quantum mechanics by postulating the collapse of the wave function. Notice, though, that he deals with ensembles and therefore uses density matrices in his formalism. To avoid imposing the postulate without any physical justification, the observer is introduced and his/her subjective perception becomes essential. This interpretation is thereby weakened and open to severe criticisms. The cut is still present, but has now moved to a position between joint system/observer.
In addition to interpretative and epistemological problems, this interpretation also has problems in its formalism. The need to consider instantaneous interactions in the measurement process, so that the unitary evolution does not move the state vector away from the position of measurement, implies that the Hamiltonian for the joint system commutes with the observable which is being measured $`[H,O]=0`$. This could be a demanding condition on the Hamiltonian, but not an excessive one, as we will make clear when discussing decoherence.
## IV Decoherence
Von Neumann's approach is taken one step further in the decoherence models. These models invoke the inevitable interaction between joint system and environment to help solve, it is claimed, the measurement problem. Following von Neumann's tradition, system, apparatus and environment are treated quantum mechanically and, as for von Neumann, the unavoidable superposition of macroscopically different states will still be present. As the environment has a large number of degrees of freedom, the observer has no access to them and therefore, they must be traced over, ignored. Notice that the observer still plays a crucial role in this approach, for the trace must be done by someone and the cut is maintained as the boundary between the degrees of freedom which are traced over and those which are not. The inevitability of this division of the world is acknowledged by the proponents of this approach, as illustrated by Zurek in a recent paper (Zurek 1998, 1794):
'We can mention two such open issues right away: both the formulation of the measurement problem and its resolution through the appeal of decoherence require a universe split into systems.'
The trick of tracing over the inaccessible degrees of freedom brings the density matrix of the total system to a diagonal form, removing the undesirable macroscopic superpositions; this is von Neumann's postulate presented in a more elaborate, dynamical way. However, there is a subjective element in the whole procedure: how far does the environment reach?
Von Neumann's condition of commutativity of the Hamiltonian and the observable to be measured now acquires a complex meaning: $`[H_{int},A]=0`$, where now $`A`$ is the pointer basis of the apparatus and the Hamiltonian refers to the interaction between joint system and environment, for which there is even less control by an experimental physicist. It is as if a measuring instrument should bring in its instruction manual recommendations on the appropriate environment where to operate. Von Neumann's condition only indicated the adequate apparatuses for measuring a certain observable. Demands put on the experimenter do not stop here, however: he/she - according to the decoherence procedure known as the predictability sieve (Zurek 1993) - should refer to a list to decide which observable can be measured; those on the top of the list are more classical in appearance and thus preferable for measurement. The existence of such a list brings back the subjectivity in the choice of where to put the quantum/classical cut, which is 'to be decided by circumstances' (Zurek 1993). Besides, how is it possible to know the environment well enough to decide which observable can be measured, but at the same time to be so ignorant about it that one is obliged to trace it over?
One of the worst aspects of the decoherence approach, taken to its ultimate consequence, is thus to introduce a set of procedures that should be obeyed when measuring a quantity, procedures which no experimentalist in his/her right mind would recognize as what goes on in the laboratory. The appeal to notions far remote from the reality of the laboratory experiments is well illustrated in the following passages in a recent paper on decoherence (Zurek 1998, 1796 and 1799).
'Correlations \[between states of the joint system and environment\] are both the cause of decoherence and the criterion used to evaluate the stability of the states…Moreover, stability of the correlations between the states of the system monitored by their environment and of some other 'recording' system (i.e. an apparatus or a memory of an observer) is a criterion of the 'reality' of these states.'
or still,
'the observer can know beforehand what (limited) set of observables can be measured with impunity. He will be able to select measurement observables that are already monitored by the environment.'
The above passages clearly show some subjective elements of this approach, invoking the memory of the observer or a priori knowledge of the interaction between the environment and the joint system.
One last criticism is that decoherence, as well as von Neumann, deals with the density matrix, which is necessarily in the realm of the ensemble interpretation, and as Bell says:
'If one were not actually on the look-out for probabilities, I think the obvious interpretation of even \[the butchered density matrix\] would be that the system is in a state in which the various \[wave functions\] somehow co-exist…This is not at all a probability interpretation, in which the different terms are seen not as co-existing, but as alternatives.' (Bell 1990, 36 \[ref.2\] and Whitaker 1996, 289)
## V de Broglie - Bohm
A description of individual events was proposed by de Broglie-Bohm (Bohm 1952). In it, an individual system is described by a wave function and a particle. The particle is guided by the wave function, which works, to a certain extent, like a field. One could say that this theory corresponds to a refinement of Bohr's duality - the use of quantum concepts on one scale and classical concepts on another - taken to an extreme. The wave function and the particle position are now used at the same time in all scales. It thus eliminates the division quantum/classical and apparently has no measurement problem, as the particle always has a definite position. Moreover, it claims to be free from problems connected with the act of observation, contrary to what we will suggest below.
It is arguable that this theory is subject to some serious criticisms (Holland 1993), but the only one we want to emphasize here is related to the infamous empty wave and, more specifically, to the information carried by it. Whenever the wave splits up into parts which do not have spatial overlap, such as in the trajectories of the double-slit experiment, one part will be with the particle and the other one will be empty, though it can still influence the particle motion. The empty wave carries information on the superpositions of states, but as soon as a measurement is realized, the empty wave loses any overlap it had before with the branch that carries the particle.
'Perhaps we shouldn't talk about it actually disappearing from the universe. Rather the information in the 'empty' wave packet no longer has any effect, because during the act of measurement the irreversible process introduces a stochastic or random disturbance which destroys the information of quantum potential of the wave packet.' (Hiley 1986, 146)
Suddenly the superpositions are destroyed; this only happens thanks to the measurement, which identifies which branch corresponds to the empty wave. How this happens, where the empty wave information is taken to, what are the effects of its disappearance on the surroundings are left unspecified. This sends us back to the similar problem encountered in the decoherence models, where information - as before, information on superpositions of states - was dissipated into an environment with no observable effects on it, but only on the system.
At this point the convergence of these two apparently different interpretations becomes clear. If one accepts the concept of information as they do, the act of measurement implies its loss, be it dissipated in the environment or in the arbitrary sterilization of the empty wave, whose information is now declared passive:
'we've tried to introduce a distinction between active information and inactive information. That is, when an apparatus has undergone this irreversible change, one wave packet becomes inactive.' (Hiley 1986, 146)
This vague notion of information is central to both causal and decoherence interpretations. Its vagueness has been nicely expressed by Bell (Bell 1990, 8 \[ref.3\]):
'I don't have a concept of disembodied information - it must be located and represented in the material world, and I don't know how to formulate the concept of how much information there is in an arbitrary space region - I think the concept of information is again a very useful one in practice but not in principle…'
This concludes our arguments. It is certainly frustrating that the measurement problem in quantum mechanics, after six decades of being delineated, remains open. Perhaps this indicates a fundamentally new epistemological obstacle to be overcome jointly by physicists and philosophers, an obstacle that was not present in classical physics.
## VI Conclusion
We argued in this paper that von Neumann's approach (and its modern version: decoherence) and the causal interpretation have many points in common, despite being so different in formalism and in language. Remarkably, the common points are the problematic ones springing from Bohr's deep analysis of the quantum/classical interface. These elaborate attempts to define quantitatively what happens at this boundary have exposed insurmountable open problems which Bohr carefully avoided by his radical separation of classical and quantum.
NOTES
\*The authors acknowledge the support of the Brazilian Research Council, CNPq, and would like to thank Osvaldo Pessoa Jr. for many discussions on the subject.
# Viewing the Shadow of the Black Hole at the Galactic Center
## 1. Introduction
High resolution spectroscopy (especially with the Hubble Space Telescope) of galactic nuclei has produced an abundance of evidence for compact dark mass concentrations of up to $`10^8M_{\odot }/\mathrm{pc}^3`$, whose nature is strongly suspected to be indicative of supermassive black holes (Kormendy & Richstone (1995)). Even better evidence exists for the galaxy NGC 4258 and the Milky Way, for which spectroscopic and proper motion studies have provided an unprecedented three-dimensional view of the kinematics of gas and stars around a central point mass, pointing to dark mass concentrations of $`>10^{12}M_{\odot }/\mathrm{pc}^3`$ with very high significance (Miyoshi et al. (1995); Eckart & Genzel (1996); Lo et al. (1998)).
Complementary observations of galactic nuclei with very long baseline interferometry (VLBI) reveal the presence of compact radio cores (Zensus (1997)) which appear to be coincident with the central black hole candidates. An intriguing case is that of the Galactic Center where the bright, compact radio source Sgr A\* lies at the dynamical origin (Menten et al. (1997); Ghez et al. (1998)). The nature of Sgr A\* is still unclear, since its structure is completely washed out by strong interstellar scattering at cm-wavelengths (Lo et al. (1998)). It is only at millimeter-wavelengths that we may begin to see some internal structure (Bower & Backer (1998); Krichbaum et al. (1998); Lo et al. (1998)). Though the dark mass concentration could in principle be distributed in the form of exotic objects on a scale slightly larger than the size of Sgr A\*—but with difficulties accounting for its radiation characteristics (Melia & Coker (1999))—it is expected to be associated with Sgr A\* itself, since the latter, unlike the surrounding stars, has a tightly restricted proper motion indicating that it is very heavy (Reid et al. (1999); Genzel et al. (1997)).
The key spectral features of Sgr A\* are a slightly inverted cm-wavelength spectrum, an apparent excess (or bump) at sub-millimeter (sub-mm) wavelengths, and a steep cut-off towards the infrared (Falcke et al. (1998); Serabyn et al. (1997)). The radio emission is circularly polarized but undetected in linear polarization (Bower et al. 1999a ; Bower et al. 1999b ). Proposed models for the radio emission range from quasi-spherical inflows (Melia (1992, 1994); Narayan, Yi, & Mahadevan (1995)) to a jet-like outflow (Falcke, Mannheim, & Biermann (1993); Falcke & Biermann (1999)).
The sub-mm bump is particularly interesting since this should be the signature of a very compact synchrotron emitting region with a size of a few Schwarzschild radii (Falcke (1996); Falcke et al. (1998)). The presence of compact radio emission in Sgr A\* at a wavelength as short as 1.4 mm has been confirmed recently by a first VLBI detection at this wavelength (Krichbaum et al. (1998)). This detection is exciting for several reasons. First, it lies in a region of the spectrum where the intrinsic source size should become apparent over scatter-broadening by the intervening screen (Melia, Jokipii, & Narayanan (1992)). Second, this component is sufficiently bright to be detected with VLBI techniques at even shorter wavelengths, and third, Sgr A\* is sufficiently close that the size scale where general relativistic effects are significant could be resolved with VLBI at sub-mm wavelengths. In addition, at sub-mm wavelengths, the various models predict that the synchrotron emission is not self-absorbed, allowing a view into the region near the horizon. The horizon has a size of $`(1+\sqrt{1-a_{*}^2})R_g`$, where $`R_g\equiv GM/c^2`$, $`M`$ is the mass of the black hole, $`G`$ is Newton's constant, $`c`$ the speed of light, $`a_{*}\equiv Jc/(GM^2)`$ is the dimensionless spin of the black hole in the range 0 to 1, and $`J`$ is the angular momentum of the black hole.
Bardeen (1973) described the idealized appearance of a black hole in front of a planar emitting source, showing that it literally would appear as a 'black hole'. At that time such a calculation was of mere theoretical interest and limited to just calculating the envelope of the apparent black hole. To test whether there is a realistic chance of seeing this 'black hole' in Sgr A\* (Falcke et al. (1998)), we here report the first calculations obtained with our general relativistic (GR) ray-tracing code that allows us to simulate observed images of Sgr A\* for various combinations of black hole spin, inclination angle, and morphology of the emission region directly surrounding the black hole and not just for a background source. A more detailed description of our calculations is in preparation (Agol, Falcke, & Melia (1999)).
## 2. The appearance of a black hole
We determine the appearance of the emitting region around a black hole under the condition that it is optically thin. For Sgr A\* this might be the case for the sub-mm bump (Falcke et al. (1998)) indicated by the turnover in the spectrum, and can always be achieved by going to a suitably high frequency. Here we simply assume that the overall specific intensity, $`I_\nu `$, observed at infinity is an integration of the emissivity, $`j_\nu `$, times the differential path length along geodesics (Jaroszynski & Kurpiewski (1997)). In line with the qualitative discussion of this paper, we assume that $`j_\nu `$ is independent of frequency, and that it is either spatially uniform, or scales as $`r^{-2}`$. These two cases cover a large range of conditions expected under several reasonable scenarios, be it a quasi-spherical infall, a rotating thick disk, or the base of an outflow.
The calculation of the photon trajectories and the intensity integrated along the line-of-sight is based on the standard formalism (Thorne (1981); Viergutz (1993); Rauch & Blandford (1994); Jaroszynski & Kurpiewski (1997)). Our calculations take into account all the well-known relativistic effects, e.g., frame dragging, gravitational redshift, light bending, and Doppler boosting. The code is valid for all possible spins of the black hole and for any arbitrary velocity field of the emission region.
For a planar emitting source behind a black hole, a closed curve on the sky plane divides a region where geodesics intersect the horizon from a region whose geodesics miss the horizon (Bardeen (1973)). This curve, which we refer to as the "apparent boundary" of the black hole, is a circle of radius $`\sqrt{27}R_g`$ in the Schwarzschild case ($`a_{*}=0`$), but has a more flattened shape of similar size for a Kerr black hole, slightly dependent on inclination. The size of the apparent boundary is much larger than the event horizon due to strong bending of light by the black hole. When the emission occurs in an optically thin region surrounding the black hole, the case of interest here, the apparent boundary has the same exact shape since the properties of the geodesics are independent of where the sources are located. However, photons on geodesics located within the apparent boundary that can still escape to the observer experience strong gravitational redshift and a shorter total path length, leading to a smaller integrated emissivity, while photons just outside the apparent boundary can orbit the black hole near the circular photon radius several times, adding to the observed intensity (Jaroszynski & Kurpiewski (1997)). This produces a marked deficit of the observed intensity inside the apparent boundary, which we refer to as the "shadow" of the black hole.
We here consider a compact, optically-thin emitting region surrounding a black hole with spin parameter $`a_{*}=0`$ (i.e., a Schwarzschild black hole) and a maximally spinning Kerr hole with $`a_{*}=0.998`$. In the set of simulations shown here, we take the viewing angle $`i`$ to be $`45^{\circ }`$ with respect to the spin axis (when it is present), and we consider two distributions of gas velocity $`v`$. The first has the plasma in free-fall, i.e., $`v^r=\sqrt{2r(a^2+r^2)}\mathrm{\Delta }/A`$ and $`\mathrm{\Omega }=2ar/A`$, where $`v^r`$ is the Boyer-Lindquist radial velocity, $`\mathrm{\Omega }`$ is the orbital frequency, $`\mathrm{\Delta }\equiv r^2-2r+a^2`$, and $`A\equiv (r^2+a^2)^2-a^2\mathrm{\Delta }\mathrm{sin}^2\theta `$. (We have set $`G=M=c=1`$ in this paragraph.) The second has the plasma orbiting in rigidly rotating shells with the equatorial Keplerian frequency $`\mathrm{\Omega }=1/(r^{3/2}+a)`$ for $`r>r_{ms}`$ with $`v^r=0`$, and infalling with constant angular momentum inside $`r<r_{ms}`$ (Cunningham (1975)), with $`v^\theta =0`$ for all $`r`$.
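The two velocity fields quoted above translate directly into code. The following sketch (not the ray-tracing code itself, and written with $`G=M=c=1`$ as in the text) merely evaluates those formulas; the overall sign convention for the infall direction is an assumption flagged in the comments.

```python
import numpy as np

def freefall_field(r, theta, a):
    """Free-fall field of the first model (G = M = c = 1, Boyer-Lindquist coordinates)."""
    Delta = r**2 - 2.0 * r + a**2
    A = (r**2 + a**2)**2 - a**2 * Delta * np.sin(theta)**2
    v_r = np.sqrt(2.0 * r * (a**2 + r**2)) * Delta / A  # magnitude; flow directed inward (sign convention assumed)
    Omega = 2.0 * a * r / A                              # orbital frequency induced by frame dragging
    return v_r, Omega

def keplerian_omega(r, a):
    """Equatorial Keplerian frequency used for the rigidly rotating shells (r > r_ms)."""
    return 1.0 / (r**1.5 + a)

print(freefall_field(6.0, np.pi / 2.0, 0.998))
print(keplerian_omega(8.0, 0.0))
```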
In order to display concrete examples of how realistic our proposed measurements of these effects with VLBI will be, we have simulated the expected images for the massive black hole candidate Sgr A\* at the Galactic Center. For its measured mass (Eckart & Genzel (1996); Ghez et al. (1998)) $`M=2.6\times 10^6M_{\odot }`$, the scale size for this object is the gravitational radius $`R_g=3.9\times 10^{11}`$ cm, which is half of the Schwarzschild radius $`R_s\equiv 2GM/c^2`$.
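As a rough consistency check, the quoted mass converts directly into an angular scale; the Galactic Center distance of 8 kpc used below is an assumed value, not one quoted in the text.

```python
# Back-of-the-envelope angular size of the ~10 R_g shadow of Sgr A*.
# Assumption: distance D ~ 8 kpc (not quoted in the text above).
G, c, M_sun = 6.674e-8, 2.998e10, 1.989e33          # cgs units
kpc = 3.086e21                                      # cm
rad_to_muas = 180.0 / 3.141592653589793 * 3600.0 * 1e6

M = 2.6e6 * M_sun
R_g = G * M / c**2                                  # ~3.9e11 cm, as quoted
theta = 10.0 * R_g / (8.0 * kpc) * rad_to_muas      # shadow diameter ~10 R_g (see Sec. 2)
print(f"R_g = {R_g:.2e} cm, shadow diameter ~ {theta:.0f} micro-arcseconds")
# -> roughly 30 micro-arcseconds, consistent with the 30 +/- 7 quoted in Sec. 3.
```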
To simulate an observed image we have to take two additional effects into account: interstellar scattering and the finite telescope resolution achievable from the ground. Scatter-broadening at the Galactic Center is incorporated by smoothing the image with an elliptical Gaussian with a FWHM of 24.2 $`\mu `$arcsecond$`\times (\lambda /1.3\text{mm})^2`$ along the major axis and 12.8 $`\mu `$arcsecond$`\times (\lambda /1.3\text{mm})^2`$ along the minor axis (Lo et al. (1998)). The position angle of this ellipse is arbitrary since we do not yet know the spin axis of the black hole on the sky, and we have assumed PA=$`90^{\circ }`$ for the major axis. The telescope resolution—in an idealized form—is then added by convolving the smoothed image with a spherical Gaussian point-spread function of FWHM 33.5 $`\mu `$arcsecond$`\times (\lambda /1.3\text{mm})^{+1}(l/8000\text{km})^{-1}`$—the possible resolution of a global interferometer with 8000 km baselines (Krichbaum (1996)). In reality the exact point-spread function will of course depend on the number and placement of the participating telescopes.
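The two smoothing steps described above can be mimicked with standard Gaussian filtering. The helper below is only a rough stand-in for the actual imaging procedure: the function name, pixel scale, and the use of `scipy.ndimage.gaussian_filter` are choices of this sketch, while the FWHM normalizations and wavelength scalings are the ones quoted in the text.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # convert a FWHM to a Gaussian sigma

def observe(image, pixel_muas, lam_mm, baseline_km=8000.0):
    """Apply interstellar scattering and an idealized beam to a model image (sketch only)."""
    # Scattering FWHM (micro-arcseconds) scales as lambda^2; 1.3 mm values from the text.
    scat = np.array([12.8, 24.2]) * (lam_mm / 1.3) ** 2
    # Idealized beam FWHM scales as lambda and inversely with the baseline length.
    beam = 33.5 * (lam_mm / 1.3) * (8000.0 / baseline_km)
    smoothed = gaussian_filter(image, sigma=scat * FWHM_TO_SIGMA / pixel_muas)
    return gaussian_filter(smoothed, sigma=beam * FWHM_TO_SIGMA / pixel_muas)

model = np.zeros((256, 256)); model[128, 128] = 1.0
blurred = observe(model, pixel_muas=2.0, lam_mm=0.6)
```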
In Figure 1, we show the resulting image of Sgr A\* for a maximally rotating black hole viewed at an angle of $`i=45^{\circ }`$, for a compact region in free fall, with an emissivity $`j_\nu =\nu ^0r^{-2}`$. We first show the original, unsmoothed image of the emission region as calculated with the GR code in panel (a), and then present the simulated 'observed' images at 0.6 and 1.3 mm wavelengths in panels (b) and (c), respectively. The two distinct features that are evident in Figure 1a are (1) the clear depression in $`I_\nu `$—the shadow—produced near the black hole, which in this particular example represents a modulation of up to 90% in intensity from peak to trough, and (2) the size of the shadow, which here is $`9.2R_\mathrm{g}`$ in diameter. This represents a projected size of 27 $`\mu `$arcseconds, which is already within a factor of two of the current VLBI resolution (Krichbaum et al. (1995)). The shadow is a generic feature of various other models we have looked at, including those with outflows, cylindrical emissivity, and various inclinations or spins.
To illustrate the expected image for another extreme case, we show in Figure 1d the analogue to Figure 1a for the case with $`a_{*}=0`$ (i.e., no rotation), an emitting plasma orbiting in Keplerian shells (as described above), and a uniform $`j_\nu `$ for $`r<25R_g`$. Even though these conditions are distinctly different compared to those of Figure 1a, the black hole shadow is still clearly evident, here representing a modulation in $`I_\nu `$ in the range of 50-75% from peak to trough (Fig. 1d), and with a diameter of roughly $`10.4R_g`$. In this case, the emission is asymmetric due to the strong Doppler shifts associated with the emission by a rapidly moving plasma along the line-of-sight (with velocity $`v_\varphi `$).
The important conclusion is that the diameter of the shadow—in marked contrast to the event horizon—is fairly independent of the black hole spin and is always of order 10$`R_\mathrm{g}`$. Indeed, this is consistent with the observed 0.8 mm size limit $`>4R_g`$ of Sgr A\* from a lack of scintillation (Gwinn et al. (1991)). The presence of a rotating hole viewed edge-on will lead to a shifting of the apparent boundary (by as much as 2.5 $`R_g`$, or 8 $`\mu `$arcseconds) with respect to the center of mass, or the centroid of the outer emission region.
Interestingly, the scattering size of Sgr A\* and the resolution of global VLBI arrays become comparable to the size of the shadow at a wavelength of about 1.3 mm. As one can see from Figures 1c&f the shadow is still almost completely washed out for VLBI observations at 1.3 mm, while it is very apparent at a factor two shorter wavelength (Figures 1b&e). In fact, already at 0.8 mm (not shown here) the shadow can be easily seen. Under certain conditions, i.e., a very homogeneous emission region, the shadow would be visible even at 1.3 mm (Fig. 1f).
## 3. How realistic is such an experiment?
The arguments for the feasibility of such an experiment are rather compelling. First of all, the mass of Sgr A\* is very well known within 20%, the main uncertainty being the exact distance to the Galactic Center. Since, as we have shown, the unknown spin of the suspected black hole contributes only another 10% uncertainty, we can conservatively predict the angular diameter of the shadow in Sgr A\* from the GR calculations alone to be $`30\pm 7\mu `$arcseconds independent of wavelength. As seen in Fig. 1, the finite telescope resolution and the scatter broadening will make the detectability of the shadow a function of wavelength and emissivity; however, the size of the shadow will remain of similar order and under no circumstances can become smaller.
The technical methods to achieve such a resolution at wavelengths shortwards of 1.3 mm are currently being developed and a first detection of Sgr A\* at 1.4 mm with VLBI has already been reported. The challenge will be to push this technology even further towards 0.8 or even 0.6 mm VLBI. Over the next decade many more telescopes are expected to operate at these wavelengths. Depending on how short a wavelength is required, the projected time scale for developing the necessary VLBI techniques may be about ten years. A fundamental problem preventing such an experiment is not now apparent, but in light of our results, planning of the new sub-mm-telescopes should include sufficient provisions for VLBI experiments.
A potential problem with our model may occur if $`j_\nu `$ has an inner cutoff which is larger than that of the horizon, making the shadow larger than predicted due to a decrease in emissivity rather than to GR effects. However, first of all, the truncation of accretion disk emission at the marginally stable orbit $`r_{\mathrm{ms}}`$ is somewhat arbitrary (Cunningham (1975)) and, secondly, if it exists, such a cutoff would likely be frequency dependent, while there will be a frequency-independent minimum radius due to the general relativistic effects we have described. Another problem could be the unknown morphology of the emission region. Anisotropy, strong velocity fields, and density inhomogeneities would make an identification of the shadow in an observed image more difficult. However, inhomogeneities are unlikely to be a major issue, since the time scale for rotation around the black hole in the Galactic Center is only a few hundred seconds and hence much less than the typical duration of a VLBI observation. The strong shear near the black hole would tend to smooth out any inhomogeneities very quickly. Indeed, sub-mm variability studies on such short time scales (Gwinn et al. (1991)) have yielded negative results. The same argument applies to emission models which are offset from the black hole, e.g., are one-sided. Since the shadow of the black hole has a very well defined shape, it would under any conditions appear as a distinct feature, given that the dynamic range of the map is large enough (i.e., $`\sim `$100:1, considering a range of emission models, Agol, Falcke, & Melia (1999)).
Finally, synchrotron self-absorption could pose a problem. So far the available sub-mm spectra show a flattening of the spectrum around 1.3-0.6 mm indicating a turnover towards an optically thin spectrum. Given the current observational uncertainties one could in principle construct simple models where the flow does not become optically thin until 0.2 mm. Improved simultaneous measurements at sub-mm wavelengths are therefore highly desirable to exactly measure the spectral turnover since the experiment we propose here will only work for an optically thin flow. At hundreds of microns the atmosphere becomes optically thick, making much more expensive space-based observations necessary. At X-ray wavelengths, the accretion flow will be optically thin to electron scattering, so there may be a better chance of detecting the shadow with future space-based X-ray interferometry as proposed in the MAXIM experiment.
## 4. Summary
The importance of the proposed imaging of Sgr A\* at sub-mm wavelengths with VLBI cannot be overemphasized. The bump in the spectrum of Sgr A\* strongly suggests the presence of a compact component whose proximity to the event horizon is predicted to result in a shadow of measurable dimensions in the intensity map. To our knowledge, such a feature is unique and Sgr A\* seems to have all the right parameters to make it observable. The observation of this shadow would confirm the widely held belief that most of the dark mass concentration in the nuclei of galaxies such as ours is contained within a black hole, and it would be the first direct evidence of the existence of an event horizon. A non-detection with sufficiently developed techniques, on the other hand, might pose a major problem for the standard black hole paradigm. Because of this fundamental importance, the experiment we propose here should be a major motivation for intensifying the current development of sub-mm astronomy in general and mm- and sub-mm VLBI in particular.
Acknowledgments. We thank P.L. Biermann, T. Krichbaum, A. Zensus, O. Blaes, R. Antonucci, and M. Reid for useful discussions. This work was supported in part by a Sir Thomas Lyle Fellowship (FM), NASA grant NAG58239 (FM), DFG grants Fa 358/1-1&2 (HF), and NSF grant AST-9616922 (EA). EA would like to thank the ITP at the University of California at Santa Barbara for their hospitality.
# The Pinch Technique at Two Loops
## Abstract
It is shown that the fundamental properties of gauge-independence, gauge-invariance, unitarity, and analyticity of the $`S`$-matrix lead to the unambiguous generalization of the pinch technique algorithm to two loops.
PACS numbers: 11.15.-q, 12.38.Bx, 14.70.Dj, 11.55.Fv FTUV-99-12-14
e-mail: Joannis.Papavassiliou@cern.ch;@uv.es
A variety of important physical problems cannot be addressed within the framework of fixed-order perturbation theory, the most widely used calculational scheme in the continuum. This is often the case within Quantum Chromodynamics (QCD), when large disparities in the physical scales involved result in a complicated interplay between perturbative and non-perturbative effects. Similar limitations appear when physical kinematic singularities, such as resonances, render the perturbative expansion divergent at any finite order, or when perturbatively exact symmetries prohibit the appearance of certain phenomena, such as chiral symmetry breaking or gluon mass generation. In such cases one often resorts to various reorganizations of the perturbative expansion inspired from scalar field theories or Quantum Electrodynamics (QED), supplemented by a number of auxiliary physical principles. When studying the interface between perturbative and non-perturbative QCD for example, one finds it advantageous to use concepts familiar from QED, such as the effective charge, in conjunction with dispersive techniques and analyticity properties of the $`S`$-matrix . In addition, in the field of renormalon calculus, one studies the onset of non-perturbative effects from the behaviour near the QCD mass-scale of judiciously selected infinite sub-sets of the perturbative series . Similarly, the extension of the Breit-Wigner formalism to the electro-weak sector of the Standard Model necessitates a non-trivial rearrangement of the perturbative expansion ; an analogous task must be undertaken when studying various aspects of finite temperature QCD , as well as mass generation, both in 3-dimensional field-theories and in QCD , as a prelude to a systematic truncation scheme for the Schwinger-Dyson series.
One of the main difficulties encountered when dealing with the problems mentioned above is the fact that several physical properties, which are automatically preserved in fixed-order perturbative calculations by virtue of powerful field-theoretical principles, may be easily compromised when rearrangements of the perturbative series, such as resummations, are carried out. These complications may in turn be traced down to the fact that in non-Abelian gauge theories individual off-shell Green’s functions ($`n`$-point functions) are in general unphysical.
It turns out that this last problem can be circumvented by resorting to the method known as the pinch technique (PT). The PT reorganises systematically a given physical amplitude into sub-amplitudes, which have the same kinematic properties as conventional $`n`$-point functions (propagators, vertices, boxes), but, in addition, are endowed with desirable physical properties. Most importantly, at one-loop order they (i) are independent of the gauge-fixing parameter; (ii) satisfy naive (ghost-free) tree-level Ward identities, instead of the usual Slavnov-Taylor identities; (iii) contain only physical thresholds and satisfy very special unitarity relations; and (iv) coincide with the conventional $`n`$-point functions when the latter are computed in the background field method Feynman gauge (BFMFG). These properties are realized diagrammatically by exploiting the elementary Ward identities of the theory in order to enforce crucial cancellations, and make manifest intrinsic properties of the $`S`$-matrix, which are usually concealed by the quantization procedure.
The important question which arises is whether the PT algorithm may be extended beyond one-loop, leading to the systematic replication of the aforementioned special properties of the PT effective $`n`$-point functions to higher orders . In this Letter we will show that the PT can be generalized to two loops by resorting exactly to the same physical and field-theoretical principles as at one-loop.
We start by briefly reviewing the one-loop case. Consider the $`S`$-matrix element for the quark ($`u`$)-antiquark ($`\overline{u}`$) scattering process $`u(P)\overline{u}(P^{\prime })\to u(Q)\overline{u}(Q^{\prime })`$ in QCD; we set $`q=P^{\prime }-P=Q^{\prime }-Q`$, and $`s=q^2`$ is the square of the momentum transfer. It is convenient to work in the renormalizable Feynman gauge (RFG); this constitutes no loss of generality, since the full $`S`$-matrix is independent of the gauge-fixing parameter and gauge-fixing scheme. One first decomposes the elementary tree-level three-gluon vertex $`\mathrm{\Gamma }_{\alpha \mu \nu }^{(0)}(q,p_1,p_2)`$ as follows:
$`\mathrm{\Gamma }_{\alpha \mu \nu }^{(0)}(q,p_1,p_2)`$ $`=`$ $`[(p_1-p_2)_\alpha g_{\mu \nu }+2q_\nu g_{\alpha \mu }-2q_\mu g_{\alpha \nu }]+[p_{2\nu }g_{\alpha \mu }-p_{1\mu }g_{\alpha \nu }]`$ (1)
$`=`$ $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}(q,p_1,p_2)+\mathrm{\Gamma }_{P\alpha \mu \nu }^{(0)}(q,p_1,p_2).`$ (2)
This decomposition assigns a special role to the $`q`$-leg, and allows $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}`$ to satisfy the Ward identity
$$q^\alpha \mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}(q,p_1,p_2)=(p_2^2-p_1^2)g_{\mu \nu }$$
(3)
where the right-hand side is the difference of two inverse propagators in the Feynman gauge, and vanishes on shell, i.e. $`p_1^2=p_2^2=0`$. Notice that the first term in $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}`$ is a convective vertex, whereas the other two terms originate from gluon spin or magnetic moment. $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}(q,p_1,p_2)`$ coincides with the BFMFG bare vertex involving one background ($`q`$) and two quantum ($`p_1`$,$`p_2`$) gluons.
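As a quick check (assuming all momenta flow into the vertex, so that $`q+p_1+p_2=0`$; this convention is an assumption, but it is the one under which the identities quoted here close), contracting with $`q^\alpha `$ gives

$$q^\alpha \mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}=\left[q\cdot (p_1-p_2)\right]g_{\mu \nu }+2q_\nu q_\mu -2q_\mu q_\nu =-\left[(p_1+p_2)\cdot (p_1-p_2)\right]g_{\mu \nu }=(p_2^2-p_1^2)g_{\mu \nu },$$

so the spin pieces cancel identically, only the convective term survives, and the result vanishes when both gluons are on shell.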
We then carry out the above decomposition on the three-gluon vertex appearing inside the non-Abelian graph contributing to the one-loop quark-gluon vertex. The result of this is two-fold: First, the action of the longitudinal momenta $`p_1^\mu =k^\mu `$, $`p_2^\nu =(k-q)^\nu `$ on the bare quark-gluon vertices $`\mathrm{\Gamma }_\mu ^{(0)}`$ and $`\mathrm{\Gamma }_\nu ^{(0)}`$, respectively, triggers the elementary Ward identity of the form $`\overline{)}k=(\overline{)}k+\overline{)}Q-m)-(\overline{)}Q-m)`$. The first term gives rise to the pinch contribution $`V_{P\alpha \sigma }^{(1)}(q)`$ given by $`V_{P\alpha \sigma }^{(1)}(q)=2g^2C_A\int [dk][k^2(k+q)^2]^{-1}g_{\alpha \sigma }`$, where $`g`$ is the gauge coupling, $`C_A`$ is the Casimir eigenvalue of the adjoint representation, and $`[dk]=\mu ^{2ϵ}\frac{d^dk}{(2\pi )^d}`$, with $`\mu `$ the 't Hooft mass; the second term vanishes on-shell. Second, the part of the graph containing $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}`$ together with its Abelian-like counterpart defines the PT one-loop quark-gluon vertex $`\widehat{\mathrm{\Gamma }}_\alpha ^{(1)}(Q,Q^{\prime })`$, which satisfies the QED-like Ward identity $`q^\alpha \widehat{\mathrm{\Gamma }}_\alpha ^{(1)}(Q,Q^{\prime })=\widehat{\mathrm{\Sigma }}^{(1)}(Q)-\widehat{\mathrm{\Sigma }}^{(1)}(Q^{\prime })`$, where $`\widehat{\mathrm{\Sigma }}^{(1)}`$ is the PT one-loop quark self-energy. The propagator-like parts extracted from the vertex are cast into the form of a genuine self-energy by setting $`\mathrm{\Pi }_{P\alpha \beta }^{(1)}(q)=V_P^{(1)}(q)_{\alpha \sigma }t_\beta ^\sigma (q)`$, where $`t_{\mu \nu }(q)=q^2g_{\mu \nu }-q_\mu q_\nu `$; thus, the resulting one-loop PT self-energy reads $`\widehat{\mathrm{\Pi }}_{\alpha \beta }^{(1)}(q)=\mathrm{\Pi }_{\alpha \beta }^{(1)}(q)+\mathrm{\Pi }_{P\alpha \beta }^{(1)}(q)`$. Carrying out the one-loop integrations one finds that the prefactor in front of the logarithm of $`\widehat{\mathrm{\Pi }}_{\alpha \beta }^{(1)}(q)`$ is $`(11/3)C_A`$, i.e. the coefficient of the one-loop $`\beta `$ function for quark-less QCD.
For the two-loop case, one considers the two-loop $`S`$-matrix element for the aforementioned process $`u\overline{u}\to u\overline{u}`$ in the RFG, and focusses on the two-loop quark-gluon vertex $`\mathrm{\Gamma }_\alpha ^{(2)}(Q,Q^{\prime })`$. The Feynman graphs contributing to $`\mathrm{\Gamma }_\alpha ^{(2)}(Q,Q^{\prime })`$ can be classified into two sets: (a) those containing an "external" three-gluon vertex, i.e. a three-gluon vertex where the momentum $`q`$ is incoming (Fig.1), and (b) those which do not have an "external" three-gluon vertex. This latter set contains either graphs with no three-gluon vertices (abelian-like), or graphs with three-gluon vertices all three legs of which are irrigated by virtual momenta, i.e. $`q`$ never enters alone into any of the legs. Carrying out the decomposition of Eq. (2) to the external three-gluon vertex of all graphs belonging to set (a), leaving all their other vertices unchanged, the following situation emerges:
$$\mathrm{\Gamma }_\alpha ^{(2)}(Q,Q^{\prime })=\widehat{\mathrm{\Gamma }}_\alpha ^{(2)}(Q,Q^{\prime })+\frac{1}{2}V_{P\alpha }^{(2)\sigma }(q)\mathrm{\Gamma }_\sigma ^{(0)}+\frac{1}{2}\mathrm{\Pi }_{P\alpha }^{(1)\beta }(q)(\frac{i}{q^2})\widehat{\mathrm{\Gamma }}_\beta ^{(1)}(Q,Q^{\prime }),$$
(4)
with
$`V_{P\alpha \sigma }^{(2)}(q)`$ $`=`$ $`I_1\left[k_\sigma g_{\alpha \rho }+\mathrm{\Gamma }_{\rho \sigma \alpha }^{(0)}(k,\mathrm{},k+\mathrm{})\right](\mathrm{}q)^\rho +(2I_2+I_3)g_{\alpha \sigma }`$ (6)
$`I_4[\mathrm{\Gamma }_{\alpha \lambda \rho }^{(0)}(\mathrm{},k,k\mathrm{})\mathrm{\Gamma }_\sigma ^{(0)\lambda \rho }(\mathrm{},k,k\mathrm{})2k_\alpha (k+\mathrm{})_\sigma ],`$
where $`I_1=I_0(k+\mathrm{})^2(k+\mathrm{}q)^2`$, $`I_2=I_0(k+q)^2`$, $`I_3=I_0(k+\mathrm{})^2`$, $`I_4=I_0\mathrm{}^2(k+\mathrm{})^2`$, with $`iI_0=g^4C_A^2[\mathrm{}^2(\mathrm{}q)^2k^2]^1`$, and the two-loop integration prefactor $`(\mu ^{2ϵ})^2\frac{d^dk}{(2\pi )^d}\frac{d^d\mathrm{}}{(2\pi )^d}`$ has been suppressed. $`\widehat{\mathrm{\Gamma }}_\alpha ^{(2)}(Q,Q^{})`$ is the two-loop BFMFG quark-gluon vertex, $`V_{P\alpha \sigma }^{(2)}(q)`$ the propagator-like part, and the third term on the right-hand side is the necessary contribution for converting the one-particle reducible part of the two-loop $`S`$-matrix element $`\mathrm{\Gamma }_\alpha ^{(0)}(\frac{i}{q^2})\mathrm{\Pi }_{\alpha \beta }^{(1)}(q)(\frac{i}{q^2})\mathrm{\Gamma }_\beta ^{(1)}(Q,Q^{})`$ into $`\mathrm{\Gamma }_\alpha ^{(0)}(\frac{i}{q^2})\widehat{\mathrm{\Pi }}_{\alpha \beta }^{(1)}(q)(\frac{i}{q^2})\widehat{\mathrm{\Gamma }}_\beta ^{(1)}(Q,Q^{})`$. Eq.(4) is a non-trivial result, since there is no a-priori reason why the implementation of the decomposition of Eq. (2) should only give rise to terms which can be interpreted in the way described above. In fact, individual diagrams, or even natural sub-sets of diagrams such as the one-loop three-gluon vertex nested inside the two-loop quark-gluon vertex, give in general rise to contributions which do not belong to any of the terms on the right-hand side of Eq. (4) . It is only after all terms have been considered that the aforementioned crucial cancellations become possible. Finally, the counterterms of $`\mathrm{\Gamma }_\alpha ^{(2)}(Q,Q^{})`$ must be correctly accounted for . $`\widehat{\mathrm{\Gamma }}_\alpha ^{(2)}(Q,Q^{})`$ satisfies the QED-like Ward identity $`q^\alpha \widehat{\mathrm{\Gamma }}_\alpha ^{(2)}(Q,Q^{})=\widehat{\mathrm{\Sigma }}^{(2)}(Q)\widehat{\mathrm{\Sigma }}^{(2)}(Q^{})`$, where $`\widehat{\mathrm{\Sigma }}^{(2)}`$ is the two-loop PT quark-self-energy. $`\widehat{\mathrm{\Sigma }}^{(2)}`$ is identical to the conventional $`\mathrm{\Sigma }^{(2)}`$ in the RFG (and the BFMFG), exactly as happens at one-loop.
To construct the two-loop PT gluon self-energy $`\widehat{\mathrm{\Pi }}_{\alpha \beta }^{(2)}(q)`$, one must append to the conventional two-loop self-energy $`\mathrm{\Pi }_{\alpha \beta }^{(2)}(q)`$ the term $`\mathrm{\Pi }_{P\alpha \beta }^{(2)}(q)=V_{P\alpha \sigma }^{(2)}(q)t_\beta ^\sigma (q)`$ together with the term $`iR_{P\alpha \beta }^{(2)}(q)=\mathrm{\Pi }_{\alpha \beta }^{(1)}(q)V_P^{(1)}(q)+\frac{3}{4}V_{P\alpha }^{(1)\sigma }(q)\mathrm{\Pi }_{P\sigma \beta }^{(1)}(q)`$ originating from converting a string of two conventional one-loop self-energies into a string of two one-loop PT self-energies . One can show by means of a diagram-by-diagram mapping that the resulting $`\widehat{\mathrm{\Pi }}_{\alpha \beta }^{(2)}(q)`$ is exactly identical to the corresponding two-loop self-energy of the BFMFG, and that this correspondence persists after renormalization . Notice that the presence of the term $`R_{P\alpha \beta }^{(2)}(q)`$ is crucial for the entire construction, and constitutes a non-trivial consistency check of the resummation mechanism first proposed in . An immediate consequence of the above correspondence is that the coefficient in front of the leading logarithm of $`\widehat{\mathrm{\Pi }}_{\alpha \beta }^{(1)}(q)`$ is precisely the coefficient of the two-loop quark-less QCD $`\beta `$ function , namely $`(34/3)C_A^2`$ . As a result, one may extend to two-loops the one-loop construction of a renormalization-group-invariant effective charge presented in , leading to the unambiguous identification of the conformally-(in)variant subsets of QCD graphs. Finally we note that, exactly as happens at one-loop, the two-loop PT box-graphs are simply the conventional ones in the RFG (and are equal to the ones in the BFMFG).
As has been explained in detail in , the one-loop PT $`n`$-point functions satisfy the optical theorem individually. To verify that one starts with the tree-level process $`u(P)\overline{u}(P^{})g(p_1)+g(p_2)`$, whose $`S`$-matrix element we denote by $`𝒯_{\mu \nu }`$; then, one considers the quantity $`𝒯_{\mu \nu }P^{\mu \mu ^{}}(p_1)P^{\nu \nu ^{}}(p_2)𝒯_{\mu ^{}\nu ^{}}`$, where $`P_{\mu \nu }(p,\eta )=g_{\mu \nu }+(\eta _\mu p_\nu +\eta _\nu p_\mu )/\eta p+\eta ^2p_\mu p_\nu /(\eta p)^2`$, with $`\eta `$ an arbitrary four-vector. One proceeds by first eliminating the $`\mathrm{\Gamma }_{P\alpha \mu \nu }^{(0)}(q,p_1,p_2)`$ part of $`\mathrm{\Gamma }_{\alpha \mu \nu }^{(0)}(q,p_1,p_2)`$, which vanishes when contracted with the term $`P^{\mu \mu ^{}}(p_1)P^{\nu \nu ^{}}(p_2)`$. Then, the longitudinal parts of the $`P_{\mu \mu ^{}}(p_1)`$ and $`P_{\nu \nu ^{}}(p_2)`$ trigger a fundamental cancellation involving the $`s`$\- and $`t`$\- channel graphs, which is a consequence of the underlying BRS symmetry . Specifically, the action of $`p_{1\mu }`$ on the $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}`$ gives
$$p_1^\mu \mathrm{\Gamma }_{F\alpha \mu \nu }^{(0)}(q,p_1,p_2)=t_{\alpha \nu }(q)+(p_1^2-p_2^2)g_{\alpha \nu }+(p_2-p_1)_\alpha p_{2\nu }$$
(7)
the first term on the right-hand side cancels against an analogous contribution from the $`t`$-channel graph, whereas the second term vanishes for on-shell gluons. Finally, the term proportional to $`p_{2\nu }`$ is such that (i) all dependence on $`\eta `$ vanishes, and (ii) a residual contribution emerges, which must be added to the parts stemming from the $`g_{\mu \mu ^{}}g_{\nu \nu ^{}}`$ part of the calculation. Then one simply defines self-energy/vertex/box-like sub-amplitudes according to the dependence on $`s=(p_1+p_2)^2`$ and $`t=(Pp_1)^2`$, as in a scalar theory, or QED. The emerging structures correspond to the imaginary parts of the one-loop PT effective Green’s functions, as one can readily verify by employing the Cutkosky rules; in fact the residual pieces mentioned at step (ii) above correspond precisely to the Cutkosky cuts of the one-loop ghost diagrams. The one-loop PT structures may be reconstructed directly from this tree-level calculation, without resorting to an intermediate diagrammatic interpretation, by means of appropriately subtracted dispersion relations.
The same procedure must be followed at two-loops; the only difference is that one must now combine contributions from both the one-loop $`S`$-matrix element for the process $`u(P)\overline{u}(P^{})g(p_1)+g(p_2)`$ and the tree-level $`S`$-matrix element for the process $`u(P)\overline{u}(P^{})g(p_1)+g(p_2)+g(p_3)`$ . The non-trivial point is that the one-loop $`S`$-matrix element must be cast into its PT form (as shown in Fig 2a.) before any further manipulations take place. Notice that the same procedure which leads to the appearance of $`\widehat{\mathrm{\Pi }}(q)`$ leads also to the conversion of the conventional one-loop three-gluon vertex $`\mathrm{\Gamma }_{\alpha \mu \nu }^{(1)}(q,p_1,p_2)`$ into $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(1)}(q,p_1,p_2)`$, which is the BFMFG one-loop three-gluon vertex with one background ($`q`$) and two quantum ($`p_1`$, $`p_2`$) . It is straightforward to show that $`\mathrm{\Gamma }_{F\alpha \mu \nu }^{(1)}(q,p_1,p_2)`$ satisfies the following Ward identity
$$q^\alpha \mathrm{\Gamma }_{F\alpha \mu \nu }^{(1)}(q,p_1,p_2)=\mathrm{\Pi }_{\mu \nu }^{(1)}(p_1)-\mathrm{\Pi }_{\mu \nu }^{(1)}(p_2),$$
(8)
which is the exact one-loop analogue of the tree-level Ward identity of Eq (3); indeed the right-hand side is the difference of two one-loop self-energies computed in the RFG. In order to extend to the next order the dispersive construction outlined above, one needs the following Ward identity
$$p_1^\mu \mathrm{\Gamma }_{F\alpha \mu \nu }^{(1)}=i\widehat{\mathrm{\Pi }}_{\alpha \nu }^{(1)}(q)-i\mathrm{\Pi }_{\alpha \nu }^{(1)}(p_2)+\lambda _{\nu \sigma }^{(1)}t_\alpha ^\sigma (q)+s_\alpha ^{(1)}p_{2\nu }$$
(9)
with
$`\lambda _{\nu \sigma }^{(1)}`$ $`=`$ $`J_3\left[(kp_1)^\rho \mathrm{\Gamma }_{\nu \rho \sigma }^{(0)}(p_2,k,kp_2)(k+p_2)_\nu k_\sigma \right]i\left[2B(q)+B(p_1)\right]g_{\nu \sigma }`$ (10)
$`s_\alpha ^{(1)}`$ $`=`$ $`J_3\left[p_2^\sigma k^\rho \mathrm{\Gamma }_{F\alpha \sigma \rho }^{(0)}(q,k+p_2,k+p_1)p_2(kp_1)(2k+p_2p_1)_\alpha \right]+\left({\displaystyle \frac{1}{8}}\right)\left[B(p_1)+B(p_2)\right]q_\alpha ,`$ (11)
$`J\frac{1}{2}ig^2C_A[k^2(kp_1)^2(k+p_2)^2]^1`$ and $`B(p)g^2C_A[dk][k^2(k+q)^2]^1`$. Eq. (9) is the one-loop analogue of Eq. (7) . The one-loop version of the fundamental BRS-driven cancellation will then be implemented; for instance, the first term on the right-hand side of Eq. (9) will cancel against analogous contributions from the graph of Fig. 2$`a_2`$, whereas all remaining terms proportional to $`t_{\sigma \alpha }(q)`$ will cancel against contributions from the $`t`$-channel graphs of Fig. 2$`a_3`$
The same construction must then be repeated for the tree-level process $`u\overline{u}ggg`$, whose tree-level $`S`$-matrix element we denote by $`𝒯_{\mu \nu \rho }`$ ; again, the $`s`$-channel graphs (Fig. 2b) must be rewritten in such a way that when contracted with $`q`$ only terms proportional to $`p_i^2`$ emerge, but no transverse pieces, exactly as in Eq.(3). This is accomplished by simply carrying out the decomposition of Eq.(2) only to the vertices where $`q`$ is entering; then the contributions originating from the $`\mathrm{\Gamma }_{P\alpha \mu \nu }^{(0)}`$ parts eventually vanish when contracted with the polarization tensors $`P^{\mu \mu ^{}}(p_1)P^{\nu \nu ^{}}(p_2)P^{\rho \rho ^{}}(p_3)`$. Acting with the longitudinal parts of the polarization tensors on the $`𝒯_{\mu \nu \rho }`$ one must first carry out the corresponding BRS $`st`$ channel cancellation, and pick up automatically the correct ghost parts. Notice in particular that this procedure gives rise to the ghost structure given in Fig.3c of , which has only three-particle Cutkosky cuts, and does not exist in the conventional formulation.
Adding the $`s`$-channel terms together, the total propagator-like part emerges; it is proportional to $`(34/3)C_A^2q^2`$, as it should be. Notice that the result is infrared finite, by virtue of crucial cancellations between the one-loop $`u\overline{u}\to gg`$ and the tree-level $`u\overline{u}\to ggg`$ cross-sections. The most direct way to verify this is to exploit the one-to-one correspondence between the terms thus generated and the Cutkosky cuts of the BFMFG two-loop self-energy; the latter are infrared finite since they effectively originate from a single logarithm.
In conclusion, we have shown that the same physical principles, and, evidently, the same procedure used at one loop, lead to the generalization of the PT to two loops. In particular, the known correspondence between PT and BFMFG persists. It would be interesting to explore its origin further, and establish a formal, non-diagrammatic understanding of the PT.
# Experimental Cosmic Statistics II: Distribution
## 1 Introduction
Precision higher order statistics will become a reality when the new wide field surveys, such as the SDSS and the 2dF, become available in the near future. These prospective measurements contain information relating to the regime of structure formation, to the nature of initial conditions, and to the physics of galaxy formation. The ability of such measurements to constrain models, in a broad sense, is inversely proportional to the overlap between the distribution of statistics predicted by different theories for a finite galaxy survey. More precisely, maximum likelihood methods give the probability of the particular measurements for each theory, or after inversion, the likelihood of the theories themselves. This is an especially natural and fruitful procedure for a Gaussian distribution, where the first two moments are sufficient for a full statistical description. This simple case is assumed for most analyses in the literature, and it motivates the special attention given to the investigation of the errors, or standard deviations. In general, however, the underlying distribution of measurements can be strongly non-Gaussian, in which case the correct shape for the distribution has to be employed for a maximum likelihood analysis. As a consequence, terms such as “$`1`$-$`\sigma `$ measurement” lose their usual meaning: a few $`\sigma `$ deviation from the average can be quite likely for a non-Gaussian distribution with a long tail. Therefore it is of utmost importance to ask two questions:
1. In what regime is the Gaussian approximation valid for the distribution of the measured statistical quantities?
2. If the Gaussian limit is violated, is there any reasonably simple, practical assumption which would enable a maximum likelihood analysis?
This paper attempts to answer these questions by studying numerically the underlying distribution function of measurements for estimators of higher order statistics based on counts-in-cells. This complements the thorough numerical investigation of the errors undertaken by Colombi et al. (1999, hereafter paper I), and the theoretical investigation of the errors exposed in a suite of papers by Szapudi & Colombi (1996, hereafter SC), Colombi, Szapudi, & Szalay (1998, hereafter CSS), and Szapudi, Colombi, & Bernardeau (1999, hereafter SCB).
For a particular statistic $`A`$, $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ denotes the probability density of measuring a value $`\stackrel{~}{A}`$ in a finite galaxy catalog. We consider the following counts-in-cells statistics: factorial moments $`F_k`$, cumulants $`\overline{\xi }`$ and $`S_N`$, void probability $`P_0`$ and its corresponding scaling function $`\sigma \equiv -\mathrm{ln}(P_0)/F_1`$, as well as the counts-in-cells distribution itself, $`P_N`$. A large $`\tau `$CDM $`N`$-body experiment, $``$, generated by the VIRGO consortium (e.g., Evrard et al. 1999) was divided into $`C_{}=4096`$ cubic subsamples, $`_i`$, $`i=1,\mathrm{\dots },C_{}`$, for estimating numerically the cosmic distribution function, $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$. This was rendered possible by the fact that this "Hubble Volume" simulation involves $`10^9`$ particles in a cubic box of size $`2000h^{-1}`$ Mpc. A detailed description of the simulation and the method we used to extract counts-in-cells statistics in the full box $``$ and in each of its subsamples $`_i`$ can be found in paper I.
Paper I concentrated entirely on the first two moments of $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$, the average
$$\langle \stackrel{~}{A}\rangle =\int \stackrel{~}{A}\,\mathrm{{\rm Y}}(\stackrel{~}{A})\,d\stackrel{~}{A},$$
(1)
and the cosmic error
$$(\mathrm{\Delta }A)^2\equiv \langle (\stackrel{~}{A}-\langle \stackrel{~}{A}\rangle )^2\rangle =\int (\stackrel{~}{A}-\langle \stackrel{~}{A}\rangle )^2\,\mathrm{{\rm Y}}(\stackrel{~}{A})\,d\stackrel{~}{A}.$$
(2)
In the equations above, the mean $`\langle \stackrel{~}{A}\rangle `$ can differ from the true value $`A`$. The cosmic bias is defined as
$$b_A\equiv \frac{\langle \stackrel{~}{A}\rangle }{A}-1.$$
(3)
It is always present when indicators are constructed from unbiased estimators in a nonlinear fashion, such as cumulants (e.g., SCB; Hui & Gaztañaga 1998, hereafter HG).
The most relevant results of paper I are summarized next:
1. The measured statistics are in excellent agreement with perturbation theory, one-loop perturbation theory and extended perturbation theory (EPT) in their respective range of applicability. These tests demonstrate the quality of our numerical experiment.
2. The measured cosmic errors are in accord with the theoretical predictions of SC and SCB in their respective domain of validity. A few percent accuracy is achieved in the weakly non-linear regime for the factorial moments. On small scales the theory tends to overestimate the errors, perhaps by a factor of two in the worst case, due to the approximate nature of the hierarchical models representing the joint moments (SCB).
3. The cosmic bias is negligible compared to the errors in the full dynamic range, as predicted by theory (SCB, see also HG for an opposing view).
4. The measured cosmic cross-correlations are in general agreement with theory considering the preliminary nature of the measurements. The precision of the predictions, however, decreases with increasing difference of orders, $`|k-l|`$. This suggests that the local Poisson model (SC) loses accuracy, as expected.
The theory of the errors confirmed by paper I provides an excellent basis for future maximum likelihood analyses of data whenever $`\mathrm{{\rm Y}}`$ is Gaussian. While this was tacitly assumed by most previous works, this article examines for the first time the range of validity of this assumption. To this end the cosmic distribution function $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ is examined numerically. In particular, one of the parameters determining its shape, the cosmic skewness
$$S\equiv \langle (\stackrel{~}{A}-\langle \stackrel{~}{A}\rangle )^3\rangle /(\mathrm{\Delta }A)^3,$$
(4)
is calculated as well. When Gaussianity is no longer a good approximation, new Ansätze are proposed for characterizing $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$. In addition we perform a preliminary analysis of the bivariate cosmic distributions $`\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})`$.
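For illustration only, the estimators corresponding to Eqs. (1)–(4) can be written down directly. The following minimal sketch (in Python with NumPy; the function and array names are ours and not part of the actual analysis pipeline of paper I) computes the mean, the cosmic error, the cosmic skewness and, when the full-volume value is available, the cosmic bias from a list of per-subsample measurements.

```python
import numpy as np

def cosmic_moments(A_sub, A_true=None):
    """Estimate the mean (Eq. 1), the cosmic error (Eq. 2), the cosmic
    skewness (Eq. 4) and, if the full-volume value A_true is supplied,
    the cosmic bias (Eq. 3) from per-subsample measurements of a statistic."""
    A_sub = np.asarray(A_sub, dtype=float)
    mean = A_sub.mean()                                  # <A~>
    dA = A_sub.std()                                     # Delta A
    skew = np.mean((A_sub - mean) ** 3) / dA ** 3        # S
    bias = mean / A_true - 1.0 if A_true is not None else None  # b_A
    # the finite number of subsamples also induces a measurement error on
    # these moments, roughly a 1/sqrt(number of subsamples) effect
    return mean, dA, skew, bias

# illustrative use: S3_sub would hold the 4096 per-subsample values of S_3
# mean_S3, dS3, skew_S3, bias_S3 = cosmic_moments(S3_sub, A_true=S3_full_box)
```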
The next section presents the estimates of $`\mathrm{{\rm Y}}`$ for the factorial moments, the cumulants (including the variance of the counts), the void probability distribution and its scaling function, and the counts-in-cells themselves. A universal shape is found for $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ which is well described in all regimes by a generalized version of the lognormal distribution. In addition to the mean (1) and variance (2), this depends on a third parameter, the cosmic skewness (4). This is also investigated along with the resulting effective cosmic bias. Section 3 presents the measured bivariate distributions, with explicit comparison to theoretical predictions of SCB. Finally, section 4 discusses the results in the context of maximum likelihood analysis of future surveys. Readers unfamiliar with counts-in-cells statistics can consult Appendix A in paper I for a concise summary of definitions and notation.
## 2 The Cosmic Distribution Function
The main results of this section are displayed in figures 1–6. For simplicity figures 1, 3, and 5 will be referred to as type D, displaying distributions, while figures 2, 4, and 6 as type S, showing skewness. A general description of each type is followed by results obtained for the cosmic distribution of the factorial moments (§ 2.1), cumulants (§ 2.2), counts-in-cells (§ 2.3), and void probability with its scaling function $`\sigma `$ (§ 2.4). The cosmic skewness and the resulting effective cosmic bias are discussed in § 2.5.
In all figures of type D, the results are displayed in a convenient system of coordinates. For any statistic $`\stackrel{~}{A}`$ the normalized quantity
$$\stackrel{~}{x}_A\equiv \frac{\delta \stackrel{~}{A}}{\mathrm{\Delta }A}=\frac{\stackrel{~}{A}-A}{\mathrm{\Delta }A}$$
(5)
is considered, where $`A\equiv \langle \stackrel{~}{A}\rangle `$ to simplify notations. The average of $`\stackrel{~}{x}_A`$ is zero and its variance is unity by definition which facilitates the comparison of the plots. The disadvantage of this coordinate system is that the cosmic error $`\mathrm{\Delta }A/A`$ is not directly shown.
For reference, each figure of type D displays a Gaussian (solid curve), and lognormal distribution with the same variance and average (dots, e.g. Coles & Jones 1991):
$$\mathrm{{\rm Y}}(\stackrel{~}{A})=\frac{1}{\stackrel{~}{A}\sqrt{2\pi \kappa }}\mathrm{exp}\left\{-\frac{[\mathrm{ln}(\stackrel{~}{A}/A)+\kappa /2]^2}{2\kappa }\right\},$$
(6)
with
$$\kappa =\mathrm{ln}[1+(\mathrm{\Delta }A/A)^2].$$
(7)
The skewness of this distribution is given by
$$S=(\mathrm{\Delta }A/A)^3+3\mathrm{\Delta }A/A.$$
(8)
For comparison, the skewness of the lognormal assumption is plotted with dotted lines on figures of type S. The amount of skewness of the lognormal is a function of the cosmic error, i.e. more skewness on the figures indicates a larger cosmic error which is hidden by the choice of the coordinate system.
In addition, a “generalized lognormal distribution” is introduced (dashes on figures of type D):
$`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ $`=`$ $`{\displaystyle \frac{s}{\mathrm{\Delta }A[s(\stackrel{~}{A}-A)/\mathrm{\Delta }A+1]\sqrt{2\pi \eta }}}`$ (9)
$`\times \mathrm{exp}\left(-{\displaystyle \frac{\{\mathrm{ln}[s(\stackrel{~}{A}-A)/\mathrm{\Delta }A+1]+\eta /2\}^2}{2\eta }}\right),`$
$$\eta =\mathrm{ln}(1+s^2),$$
(10)
where $`s`$ is an adjustable parameter. It is fixed by the requirement that the analytical function (9) have identical average, variance, and skewness, $`S=s^3+3s`$, with the measured $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$. Having one more free parameter, form (9) characterizes the shape of function $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$ better than the other two functions, especially for the large $`\delta \stackrel{~}{A}`$ tail. As will be shown next, it is an excellent approximation for the underlying probability distribution in all regimes for all statistics. This robust universality is the most striking result of this article.
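As an illustration of Eqs. (6)–(10), a minimal sketch of both reference distributions is given below; it assumes a positive measured skewness and simply takes the unique real root of $`S=s^3+3s`$ to fix the extra parameter. The implementation details are ours and are not those used to produce the figures.

```python
import numpy as np

def lognormal_pdf(A, A_mean, dA):
    """Lognormal form of Eqs. (6)-(7), parametrized by the mean and cosmic error."""
    kappa = np.log(1.0 + (dA / A_mean) ** 2)
    return np.exp(-(np.log(A / A_mean) + 0.5 * kappa) ** 2 / (2.0 * kappa)) / (
        A * np.sqrt(2.0 * np.pi * kappa))

def generalized_lognormal_pdf(A, A_mean, dA, S):
    """Generalized lognormal of Eqs. (9)-(10); s is fixed by matching the
    measured cosmic skewness through S = s**3 + 3*s (a monotonic cubic,
    hence a single real root; S > 0 is assumed)."""
    roots = np.roots([1.0, 0.0, 3.0, -S])
    s = roots[np.abs(roots.imag) < 1e-8].real[0]
    eta = np.log(1.0 + s ** 2)
    u = s * (A - A_mean) / dA + 1.0          # the form is defined for u > 0
    return s * np.exp(-(np.log(u) + 0.5 * eta) ** 2 / (2.0 * eta)) / (
        dA * u * np.sqrt(2.0 * np.pi * eta))
```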
The cosmic distribution function, as with any measurement from finite data, is subject to both measurement and cosmic errors (the “error on the error problem”, cf. SC). The measurement error on $`\mathrm{{\rm Y}}`$, due to the finite number of subsamples extracted from the whole simulation, can be calculated via straightforward error propagation. It essentially corresponds to the usual $`1/\sqrt{C_{}}`$ factor, where $`C_{}`$ is the number of subsamples. This is plotted on all figures of type D as errorbars. On figures of type S no errorbars are shown, since this would require an accurate estimate up to the 6th moment of the cosmic distribution $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$. The excellent agreement between cosmic error measurements and theory (paper I) indicates that the number of subsamples is sufficient and thus the resulting errorbars should be fairly small. Similar arguments suggest that the simulation volume was sufficiently large to render the cosmic error on the cosmic distribution negligible.
### 2.1 Factorial Moments
Figure 1 displays $`\mathrm{{\rm Y}}(\stackrel{~}{F}_k)`$ for $`1\le k\le 4`$
and various scales $`\ell =1,7.8`$, and $`62.5h^{-1}`$ Mpc.
The agreement with the generalized lognormal distribution is excellent, but even the lognormal gives an adequate description. The deviation from a Gaussian is pronounced whenever the relative cosmic error $`\mathrm{\Delta }F_k/F_k`$ is significantly larger than unity. While the figures do not show the cosmic error directly, the skewness of $`\mathrm{{\rm Y}}(\stackrel{~}{F}_k)`$ is a reliable indication. It increases with the order $`k`$ since $`\mathrm{\Delta }F_k/F_k`$ also increases with $`k`$. Figure 2 shows directly the quantity $`S`$ measured for
$`\mathrm{{\rm Y}}(F_k)`$ along with the lognormal value (8). The agreement shows that the lognormal model yields an excellent approximation.
Fig. 1 in conjunction with the measurements of the cosmic error in Paper I suggests that
$$\mathrm{\Delta }A/A\lesssim \mathrm{\Delta }_{\mathrm{crit}},\qquad \mathrm{\Delta }_{\mathrm{crit}}=0.2,$$
(11)
is a practical criterion for the validity of the Gaussian approximation.
### 2.2 Cumulants
Figure 3 is analogous to Fig. 1,
showing functions $`\mathrm{{\rm Y}}(\stackrel{~}{\overline{\xi }})`$, $`\mathrm{{\rm Y}}(\stackrel{~}{S}_3)`$ and $`\mathrm{{\rm Y}}(\stackrel{~}{S}_4)`$ for the biased estimators. As was shown in paper I, the bias is negligible compared to the cosmic errors, thus correction is not necessary. The agreement with the lognormal is more approximate than for $`\mathrm{{\rm Y}}(\stackrel{~}{F}_k)`$, except for the variance $`\overline{\xi }`$. Indeed, the skewness of $`\mathrm{{\rm Y}}(\stackrel{~}{S}_N)`$ is in general different from the lognormal prediction, as illustrated by Fig. 4. On small scales it is larger than predicted by equation (8) while on large scales where edge effects dominate it is much smaller. The generalized lognormal (9) can still account for the shape of $`\mathrm{{\rm Y}}(\stackrel{~}{S}_N)`$ quite well, especially for the large $`\stackrel{~}{S}_N`$ tail.
The cosmic skewness of $`\mathrm{{\rm Y}}(\stackrel{~}{S}_k)`$ is fairly small on large scales. This is a natural consequence of the fact that cumulants are not subject to the positivity constraint $`\stackrel{~}{S}_k\ge 0`$, as is the case for factorial moments. On large scales, the measured $`\stackrel{~}{S}_k`$ may well be positive or negative, similarly with $`\overline{\xi }`$ on extremely large scales. As a result, the left-hand tail of the distribution is more pronounced in both lower right panels of Fig. 3 than in the corresponding figure for factorial moments, and $`\mathrm{{\rm Y}}(S_3)`$ is almost Gaussian in the middle right panel.
Rule (11) for the Gaussian limit still applies, at least for $`\overline{\xi }`$, and perhaps a slightly more stringent condition should be chosen for cumulants of higher order. $`\mathrm{{\rm Y}}(\stackrel{~}{S}_3)`$ is fairly skewed even though the measured cosmic error is slightly below the threshold value for $`\ell =1h^{-1}`$ Mpc and $`\ell =7h^{-1}`$ Mpc (see paper I).
### 2.3 Counts-in-cells
Figure 5 shows the function $`\mathrm{{\rm Y}}(\stackrel{~}{P}_N)`$ in various cases. The upper panels focus on a small scale $`\ell \simeq 1h^{-1}`$ Mpc. In this regime, the CPDF is a decreasing and $`\mathrm{\Delta }P_N/P_N`$ an increasing function of $`N`$, as demonstrated in paper I. Once again, the validity of the Gaussian approximation depends on the size of the cosmic error. As a result, $`\mathrm{{\rm Y}}(\stackrel{~}{P}_N)`$ is nearly Gaussian for $`N=1`$ and becomes more and more skewed as $`N`$ increases. The lognormal approximation appears to be adequate within the errors, although it is slightly too skewed as illustrated by Fig. 6.
The middle panels show an intermediate scale $`\ell \simeq 7.8h^{-1}`$ Mpc. On these scales (cf. paper I) both the CPDF and the cosmic error have a unimodal behaviour with an extremum (maximum for the CPDF and correspondingly minimum for the errors) for $`N\simeq N_{\mathrm{max}}=26`$. This explains why for the chosen values of $`N=5`$, $`50`$, and $`500`$, function $`\mathrm{{\rm Y}}(\stackrel{~}{P}_N)`$ is skewed, approximately Gaussian, and skewed again, respectively. For $`N=5`$ the lognormal is an excellent approximation, while the skewness for $`N=500`$ is somewhat less than that of a lognormal.
Finally, the lower panels display the largest available scale $`\ell =62.5h^{-1}`$ Mpc. The behaviour of $`P_N`$ and $`\mathrm{\Delta }P_N/P_N`$ is similar to the previous case, with the extremum shifted to $`N\simeq N_{\mathrm{max}}\simeq \mathrm{30\hspace{0.17em}000}`$. In this case, the cosmic error is always large, at least of order fifty percent (cf. paper I). All the curves are thus significantly skewed for the chosen values of $`N=\mathrm{25\hspace{0.17em}000}`$, $`\mathrm{30\hspace{0.17em}000}`$ and $`\mathrm{40\hspace{0.17em}000}`$. The agreement with the lognormal assumption is somewhat inaccurate, although the generalized lognormal improves the fit, especially for the left-hand panel. Note that the apparently abrupt limit for small values of $`\delta \stackrel{~}{P}_N/\mathrm{\Delta }P_N`$ is due to the positivity constraint $`\stackrel{~}{P}_N\ge 0`$. This constraint becomes quite severe when the average value is much smaller than the errors. While there is still plenty of dynamic range for upscattering, there is a hard restriction for down scattering. This is only partly taken into account in our generalized lognormal model, and any modifications in this respect are left for future work. Finally, the practical criterion (11) is again valid for determining the validity of the Gaussian approximation.
Note that, due to the finite number $`C=512^3`$ of sampling cells (see paper I), the CPDF is necessarily a multiple of $`1/C`$. This quantization could cause contamination of $`\mathrm{{\rm Y}}(\stackrel{~}{P}_N)`$ unless $`P_N\gg 1/C\simeq 10^{-8.13}`$. The condition $`P_N\ge 10^{-6}`$ adopted corresponds to at least $`100`$ cells per subsample on average with $`N`$ particles. Despite that, a small amount of contamination might still persist for $`\delta \stackrel{~}{P}_N\simeq -P_N`$, i.e. at the left side of the plots on figure 5. The same effect might also alter the tail of the counts-in-cells measurements presented in paper I, although not significantly.
### 2.4 Void Probability and Scaling Function
According to the investigations in paper I, the cosmic error on $`P_0`$ and $`\sigma `$ increases steadily with scale up to a sudden transition on scales $`\ell \simeq 5h^{-1}`$ Mpc where it becomes large or infinite. This behavior was studied extensively by CBS where more of the details can be found. The most relevant consequence here is that in the available dynamic range the cosmic error is small, and $`\mathrm{{\rm Y}}(\stackrel{~}{P}_0)`$ and $`\mathrm{{\rm Y}}(\stackrel{~}{\sigma })`$ are nearly Gaussian. For this reason it would be superfluous to print the corresponding figures.
### 2.5 Cosmic Skewness and Cosmic Bias
According to Figs. 1–6, the degree of skewness of the cosmic distribution function increases with the order $`k`$ and with $`|N-N_{\mathrm{max}}|`$, where $`N_{\mathrm{max}}`$ is the value for which $`P_N`$ reaches its maximum. The cosmic skewness is already significant for third order statistics, $`F_3`$ and $`S_3`$. An important consequence of the large cosmic skewness is that the maximum of $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$, i.e. the most likely measurement, is shifted to the left from the ensemble average on Figs. 1, 3 and 5. Maximizing the Ansatz (9), which is always a good fit to the cosmic distribution function, yields
$$b_A^{\mathrm{eff}}=A_{\mathrm{max}}/A-1=\frac{\mathrm{\Delta }A}{As}\left(\frac{1}{(1+s^2)^{3/2}}-1\right),$$
(12)
where $`b_A^{\mathrm{eff}}`$ is the effective cosmic bias. Since $`s>0`$, it is negative, and its absolute value is smaller than the cosmic error,
$$|b_A^{\mathrm{eff}}|\lesssim 0.66\frac{\mathrm{\Delta }A}{A}.$$
(13)
For a lognormal distribution, $`s=\mathrm{\Delta }A/A`$,
$$|b_A^{\mathrm{eff}}|=1-\left[1+(\mathrm{\Delta }A/A)^2\right]^{-3/2}\le 1.$$
(14)
The effective cosmic bias becomes increasingly significant when the cosmic error is large. Similarly to the cosmic bias (SCB), $`b_A^{\mathrm{eff}}\simeq -(3/2)(\mathrm{\Delta }A/A)^2`$ follows from expanding eq. (14) in the small error regime.
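A short numerical illustration of Eqs. (12)–(14) is given below (a sketch under the stated assumptions, not part of the original analysis); it evaluates the effective bias in the lognormal case for a few values of the relative cosmic error and compares it with the small-error expansion.

```python
def effective_bias(dA_over_A, s):
    """Effective cosmic bias of Eq. (12): relative shift of the most likely
    measurement A_max with respect to the ensemble average."""
    return (dA_over_A / s) * ((1.0 + s ** 2) ** (-1.5) - 1.0)

# lognormal case, s = Delta A / A (Eq. 14), compared with the small-error
# expansion -(3/2)(Delta A / A)^2 quoted in the text
for r in (0.1, 0.2, 0.5, 1.0):
    print(r, effective_bias(r, r), -1.5 * r ** 2)
```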
The phenomenon of effective bias was already pointed out by SC (and preliminarily investigated by Colombi, Bouchet & Schaeffer, 1994). Since $`A_{\mathrm{max}}`$ is the most likely value of $`\stackrel{~}{A}`$, the single measurement available in a catalog of the neighbouring Universe is likely to yield a lower than average value. This is true even for an unbiased indicator such as $`\stackrel{~}{F}_k`$ or $`\stackrel{~}{P}_N`$. Unfortunately, this effect cannot be corrected for, but it can be taken into account in the framework of the maximum likelihood approach using the above results on the shape of $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$.
## 3 Bivariate Cosmic Distribution Function: a Preliminary Analysis
Figures 7 and 8 display contours of the joint cosmic distribution $`\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})`$ (solid lines) for factorial moments and cumulants, respectively. For comparison the Gaussian limit is shown,
$$\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})=\frac{1}{2\pi \mathrm{\Delta }A\mathrm{\Delta }B\sqrt{1-\rho ^2}}\mathrm{exp}\left[-\frac{1}{2}𝒬(\stackrel{~}{A},\stackrel{~}{B})\right],$$
(15)
$$𝒬(\stackrel{~}{A},\stackrel{~}{B})=\frac{1}{1-\rho ^2}\left[\stackrel{~}{x}_A^2-2\rho \stackrel{~}{x}_A\stackrel{~}{x}_B+\stackrel{~}{x}_B^2\right],$$
(16)
where $`\rho \equiv \langle \delta \stackrel{~}{x}_A\delta \stackrel{~}{x}_B\rangle `$ is the cross-correlation coefficient. Dot-dashes display the above function with the measured $`\rho `$, $`\mathrm{\Delta }A/A`$ and $`\mathrm{\Delta }B/B`$, while long dashes represent the same function but with the parameters inferred from the theory of SCB with the E$`^2`$PT model (see paper I for details). The contours, corresponding in the Gaussian limit to the 1$`\sigma `$ (thin curves) level, $`𝒬(\stackrel{~}{A},\stackrel{~}{B})=1`$, and the 2$`\sigma `$ (thick curves) level, $`𝒬(\stackrel{~}{A},\stackrel{~}{B})=4`$, are displayed in the coordinate system of the measured $`\stackrel{~}{x}_A`$ and $`\stackrel{~}{x}_B`$.
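For illustration, the quadratic form of Eq. (16) and the corresponding 1$`\sigma `$ and 2$`\sigma `$ reference ellipses can be evaluated as in the following sketch; the parametrization of the ellipse is ours and is only meant to reproduce the Gaussian reference contours, not the measured ones.

```python
import numpy as np

def bivariate_Q(xA, xB, rho):
    """Quadratic form of Eq. (16) in the normalized variables x_A, x_B (|rho| < 1)."""
    return (xA ** 2 - 2.0 * rho * xA * xB + xB ** 2) / (1.0 - rho ** 2)

def gaussian_contour(rho, level=1.0, n=200):
    """Ellipse Q(x_A, x_B) = level; level = 1 and 4 give the 1- and 2-sigma
    reference contours of the Gaussian limit."""
    theta = np.linspace(0.0, 2.0 * np.pi, n)
    # principal axes along u = (xA+xB)/sqrt(2) and v = (xB-xA)/sqrt(2)
    u = np.sqrt(level * (1.0 + rho)) * np.cos(theta)
    v = np.sqrt(level * (1.0 - rho)) * np.sin(theta)
    xA = (u - v) / np.sqrt(2.0)
    xB = (u + v) / np.sqrt(2.0)
    return xA, xB
```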
On $`\ell =7.1h^{-1}`$ Mpc scales the theoretical predictions are expected to match the second order moments of $`\mathrm{{\rm Y}}`$ for factorial moments, and even the cross-correlations (see Paper I). This is illustrated by Fig. 7, where the long-dashed ellipses superpose well on the dot-dashed ones. For the cumulants the theory overestimates the errors slightly, which is reflected in the contours of Fig. 8, although cross-correlations are still reasonable, as indicated by the orientation of the ellipses.
The departure from the Gaussian limit is significant, except for the upper left panel on Figs. 7 and 8, and increases with order, in accord with the findings of the previous section. The contrast with Gaussianity increases with the cosmic error, and thus with the order considered. With the exception of $`\overline{N}`$, $`F_2`$, $`\overline{\xi }`$ and $`S_3`$, the measured cosmic error violates (11) at $`\ell =7.1h^{-1}`$ Mpc (see paper I). Moreover, as shown previously, criterion (11) should be strengthened for cumulants $`S_k`$, $`k\ge 3`$. In conclusion, condition (11) distinguishes the Gaussian limit for $`\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})`$ adequately when applied to both statistics $`\stackrel{~}{A}`$ and $`\stackrel{~}{B}`$.
Similarly to the monovariate distribution (§ 2), function $`\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})`$ develops skewness and a significant tail for large values of $`\stackrel{~}{x}=(\stackrel{~}{x}_A,\stackrel{~}{x}_B)`$ when rule (11) is broken. There are three notable consequences:
1. The effective cosmic bias (§ 2.5) is present again, i.e. the maximum of $`\mathrm{{\rm Y}}`$ is shifted from the average towards the lower left corner of the panels.
2. The contours tend to cover a smaller area than for the Gaussian limit.
3. As a result of the positivity constraint, there is a well defined lower vertical/horizontal bound in some panels, e.g., for $`\stackrel{~}{x}_{F_4}`$, since $`F_4\ge 0`$.
## 4 Summary and Discussion
This paper has presented an experimental study of the cosmic distribution function of measurements $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$, where $`\stackrel{~}{A}`$ is an indicator of a statistic related to counts-in-cells. The cosmic distribution was considered for the factorial moments $`F_k`$, cumulants $`\overline{\xi }`$ and $`S_N`$, the void probability $`P_0`$ with its scaling function, $`\sigma \equiv -\mathrm{ln}(P_0)/F_1`$, and finally the counts-in-cells $`P_N`$ themselves. To analyse properties of the function $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$, we used a state-of-the-art $`\tau `$CDM simulation divided into 4096 sub-cubes, each large enough to represent a full galaxy catalog. The statistics mentioned above were extracted from each subsample, and the resulting distribution of measurements was used to estimate $`\mathrm{{\rm Y}}(\stackrel{~}{A})`$.
While paper I concentrated on the first two moments of the cosmic distribution, the average and the errors, here the focus was shifted towards the general shape of function $`\mathrm{{\rm Y}}`$ itself, including its skewness, the cosmic skewness. The main results of this analysis are the following:
1. In contrast with popular belief, the cosmic distribution is not Gaussian in general. The most reassuring result is, however, that the Gaussian approximation appears to be valid whenever the cosmic errors are small, typically $`\mathrm{\Delta }A/A\lesssim 0.2`$. This result is quite robust and it is insensitive to the particular statistic considered (except that a slightly more stringent condition might be chosen for cumulants $`S_k`$, $`k\ge 3`$). This means that for any quantity which can be reliably measured from a survey, a Gaussian error analysis should be valid.
When the relative cosmic error $`\mathrm{\Delta }A/A`$ becomes significant, $`\mathrm{{\rm Y}}`$ becomes increasingly skewed. Since $`\mathrm{\Delta }F_k/F_k`$ and $`\mathrm{\Delta }S_k/S_k`$ increase with $`k`$ (SC, paper I), and $`\mathrm{\Delta }P_N/P_N`$ with $`|N-N_{\mathrm{max}}|`$, where $`N_{\mathrm{max}}`$ is the position of the maximum of the CPDF, so does the cosmic skewness, which eventually results in the breakdown of the Gaussian approximation. Functions $`\mathrm{{\rm Y}}(\stackrel{~}{F}_k)`$ and $`\mathrm{{\rm Y}}(\stackrel{~}{\overline{\xi }})`$ are well approximated by a lognormal law. Otherwise, a third order parametrisation matching the average, the variance and the skewness of the observed distribution is necessary, and in general sufficient. Such a generalization of the lognormal distribution is proposed and found to be in agreement with the measurements in all regimes investigated. Note that there are other alternatives such as the Edgeworth expansion (e.g., Juszkiewicz et al. 1995) or the skewed lognormal approximation of Colombi (1994). This latter consists of applying the Edgeworth expansion to $`\mathrm{log}(\stackrel{~}{A})`$. This method, when applicable, improves significantly the domain of validity of the Edgeworth expansion, normally only useful in the weakly non-Gaussian limit $`\mathrm{\Delta }A/A\lesssim 0.5`$.
2. While paper I examined the cosmic bias resulting from the non-linear construction of certain estimators, here a new phenomenon was pointed out, which is similar in effect, but different in nature: the effective cosmic bias. It affects all estimators, including unbiased ones, and is a result of the cosmic skewness. Whenever the cosmic errors are large, the cosmic distribution function develops a skewness corresponding to a long tail. As a result, the most likely measurement will be smaller than the average. Such a phenomenon was pointed out earlier in SC, and here it has been found to be universal. As SCB and paper I found that the cosmic bias is usually insignificant compared to the cosmic errors, it is likely that the effective cosmic bias is responsible for some of the conspicuously low measurements from small galaxy catalogs. This is in contrast with the conjecture of Hui & Gaztañaga (1998, hereafter HG), who assumed that the cosmic bias resulting from the use of biased estimators could explain this phenomenon. The effective cosmic bias renders correction for the cosmic bias useless, in contrast with the proposition of HG. The effective cosmic bias (and the less significant cosmic bias if any) can be taken into account in the framework of a full maximum likelihood analysis, which relies on the shape of the cosmic distribution function approximated with sufficient accuracy.
3. A preliminary investigation of the joint distribution $`\mathrm{{\rm Y}}(\stackrel{~}{A},\stackrel{~}{B})`$ was performed for factorial moments and cumulants. It confirms the validity of the above points (i) and (ii) for the cosmic bivariate distribution. In particular, a practical criterion for the validity of the Gaussian limit is that the cosmic error for both estimators be small enough, typically $`\mathrm{\Delta }A/A\lesssim 0.2`$ and $`\mathrm{\Delta }B/B\lesssim 0.2`$. This result can be safely generalized to $`N`$-variate distribution functions, thus providing the basis of full multivariate maximum likelihood analysis of data in the Gaussian limit.
We have not attempted to develop a more accurate multivariate approximation than (multivariate) Gaussian as this would go beyond the scope of this paper. However, we conjecture that an extension of our generalized lognormal distribution would be feasible (see the point of view of Sheth, 1995). An alternate approach, proposed by Amendola (1996), would employ a multivariate Edgeworth expansion. However, similarly to point (i) above for monovariate distributions, this approximation is only valid when the errors are small; but this is precisely the criterion for the Gaussian limit as we have shown previously. A generalization of the lognormal distribution expanding the logarithm of the statistics via the multivariate Edgeworth technique provides a potential improvement of this method.
It is worth noting that the behaviour of the cosmic distribution function is expected to be extremely robust with respect to the particular model studied in this paper, $`\tau `$CDM. For example, SC, in their preliminary investigations, found essentially the same universal behaviour in Rayleigh-Levy fractals. Moreover, as discussed more extensively in Paper I, the results are sufficiently stable that the usual worries of galaxy biasing (not to be confused with cosmic and effective cosmic bias) and redshift distortions are unlikely to change them qualitatively. Indeed the shape of the cosmic distribution function is almost entirely determined by the magnitude of the cosmic error, and it is insensitive to which statistic is considered. The powerful universality found among entirely different statistics is likely to carry over when the two effects mentioned above, which are subtleties in comparison with the range of statistics investigated, are taken into account.
The results found in the present work and in paper I are encouraging for investigations in future large galaxy catalogs and for problems related to data compression (e.g. Bond 1995; Vogeley & Szalay 1996; Tegmark, Taylor & Heavens 1996; Bond, Jaffe & Knox 1998; Seljak 1998). For example, the cosmic error on factorial moments is expected to be small over a large dynamic range in the SDSS (see, e.g. CSS), implying, according to the above findings, that the cosmic distribution function should be nearly Gaussian in this regime. In that case, the theory of the cosmic errors and cross-correlations, outlined in SC, CSS and SCB and thoroughly tested in paper I, will be sufficient for full multivariate maximum likelihood analyses. Preliminary investigations on current surveys are being undertaken by Szapudi, Colombi & Bernardeau (1999b) and Bouchet, Colombi & Szapudi (1999). Similarly the theoretical background is currently being developed for future weak lensing surveys (Bernardeau, Colombi, & Szapudi 1999), where statistical analyses will be conducted with indicators very close to counts-in-cells (see, e.g. Bernardeau, Van Waerbeke & Mellier 1997; Mellier 1998; Jain, Seljak & White 1999).
## Acknowledgments
We thank F. Bernardeau, P. Fosalba, C. Frenk, R. Scoccimarro, A. Szalay and S. White for useful discussions. It is a pleasure to acknowledge support for visits by IS and SC to the MPA, Garching and by SC to the Dept. of Physics, Durham, during which part of this work was completed. IS and AJ were supported by the PPARC rolling grant for Extragalactic Astronomy and Cosmology at Durham.
The Hubble volume simulation data was made available by the Virgo Supercomputing Consortium (http://star-www.dur.ac.uk/frazerp/virgo/virgo.html). The simulation was performed on the T3E at the Computing Centre of the Max-Planck Society in Garching. We would like to give our thanks to the many staff at the Rechenzentrum who have helped us to bring this project to fruition. |
# Theory of thermal and ionization effects in colliding winds of WR+O binaries
## 1. Why bother? And how?
Theory and observations both strongly suggest that colliding winds do exist in WR+O binaries. This means that for a more complete understanding of the physics of such binaries the collision zone must be taken into account. Also, the presence of a collision zone in wide binaries may allow us to learn more about shock physics by comparing theory with direct observations of the collision zone. Finally, the presence of the collision zone is likely to contaminate many observational quantities. Subsequently derived system parameters may be contaminated as well. Seen in a more positive light, such contamination could perhaps be used to detect previously unknown binaries.
There exist various theoretical predictions for the influence of the collision zone on observations, including enhanced X-ray emission (Stevens 1992; Myasnikov & Zhekov 1993; Pittard & Stevens 1997; Walder, Folini, & Motamen 1999) and thermal radio emission (Stevens 1995), variability of line profiles in the UV (Shore & Brown 1988; Stevens 1993; Luehrs 1997), optical (Rauw, Vreux, & Bohannan 1999), and IR (Stevens 1999), the heating of the O-star photosphere (Gies, Bagnuolo Jr., & Penny 1997), and dust formation (Usov 1991).
The different models on which these predictions are based may be divided into two groups. Starting from basic physics (e.g. Euler equations) the interaction zone is modeled and some more or less general predictions are made. The emphasis here lies on the development and understanding of a consistent physical picture of the interaction zone. In the following, we shall call them ‘type 1’ models. Instead, one may start from an observational feature and make some assumptions about the interaction zone (e.g. geometrical shape, ionization state, density) which are then tuned until the modeled emission matches the observations. Here the point is to derive the physical parameters of the interaction zone in accordance with observations but basically without regard for what physics may be responsible for the value of these parameters. Subsequently, we shall call them ‘type 2’ models. Both approaches have their merits and drawbacks, and increased mutual exchange between them would seem advantageous for both sides. This review, however, will mostly concentrate on ‘type 1’ models.
## 2. Physical mechanisms at work: a visit to the zoo
A variety of physical processes are of importance in colliding wind WR+O binaries. In order to achieve a complete picture of the situation, ‘type 1’ models should include them all, an aim we are still far away from today. In the following, some of the most relevant processes are listed, along with a brief outline of their physical implications and the state of the art with respect to their modeling.
### 2.1. Geometry, orbital motion
Obviously, reality takes place in three space dimensions. And some observational features seem to require 3D models for their explanation, e.g. the asymmetric X-ray light curve of $`\gamma `$ Velorum (Willis, Schild, & Stevens 1995) or the dust spiral in WR 104 observed by Tuthill, Monnier, & Danchi (1999) which apparently even rotates according to their observations. On the modeling side, there are basically two approaches to 3D: by analytical means or through numerical simulations.
Analytical approaches to 3D are often limited to approximate geometrical descriptions of the location of the interaction zone. Tuthill et al. (1999) point out, for example, that the spiral observed in WR 104 follows approximately a rather simple geometrical path, an Archimedian spiral. More elaborate analytical models including orbital motion to some degree and describing the shape, surface density, and velocity of the colliding winds interaction zone have been provided by Cantó, Raga, & Wilkin (1996) and Chen, Bandiera, & Wang (1996). The analytical model of Usov (1995) goes even further and allows, for example, predictions for X-ray emission and particle acceleration.
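As a rough illustration of such geometrical descriptions, the sketch below evaluates the standard ram-pressure (wind-momentum) balance for the stagnation point between two spherically symmetric winds, together with an Archimedean spiral traced by material leaving the interaction region while the system rotates. It is a simplified sketch, not a reproduction of the cited models, and all parameter names and values are meant as placeholders.

```python
import numpy as np

def stagnation_point(D, mdot_wr, v_wr, mdot_o, v_o):
    """Distance of the ram-pressure balance point from the O star for two
    spherically symmetric winds colliding along the line of centres,
    with eta = (Mdot_O v_O) / (Mdot_WR v_WR)."""
    eta = (mdot_o * v_o) / (mdot_wr * v_wr)
    return D * np.sqrt(eta) / (1.0 + np.sqrt(eta))

def archimedean_spiral(v_flow, p_orb, n_turns=2, n=500):
    """Locus swept by material leaving the interaction region radially at
    speed v_flow while the system rotates with period p_orb: r = v_flow*phi/omega."""
    omega = 2.0 * np.pi / p_orb
    phi = np.linspace(0.0, 2.0 * np.pi * n_turns, n)
    r = v_flow * phi / omega
    return r * np.cos(phi), r * np.sin(phi)
```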
The strength of such analytical models is that they yield precise results within the frame of their assumptions, there are no artifacts, and that they can be quickly evaluated for a particular set of parameters. Their drawback is that they can take into account only a few physical processes at a time and that they usually have to neglect time dependence. While on geometrical grounds, for example, they support the idea that the dust observed in WR 104 is related to the interaction zone, they give no clue to why there should be dust at all.
It is this latter kind of question where numerical models are needed and where for some issues nothing less than 3D hydrodynamical simulations will do. However, while obviously being closer to reality, 3D models are expensive in terms of CPU and memory requirements and complicated with regard to data management, visualization, and analysis of the simulations. Consequently, they should be invoked only where required. This point is also reflected in the fact that so far only a handful of 3D hydrodynamical models exist for WR+O binaries. First 3D simulations for three different WR+O binaries, revealing the spiral shape of the interaction zone, were presented by Walder (1995). Based on 3D simulations of $`\gamma `$ Velorum Walder, Folini, & Motamen (1999) were able to obtain an asymmetric X-ray light curve similar to the observed one. Pittard (1999) presented the first 3D simulations including radiative forces acting on the winds in the frame of CAK model. While much physics is still missing, these simulations are on the edge of what is feasible today and they provide a wealth of new insight.
### 2.2. Radiative forces
In close WR+O and O+O binaries the stellar radiation fields are strong enough to affect the dynamics of the colliding winds. For comparable stellar radii of the components, as in O+O binaries, radiative inhibition can occur, as described by Stevens & Pollock (1994). Here the radiation field of one star can inhibit the acceleration of the wind from the companion star. If the two stellar components have largely different radii, as is probably the case in WR+O binaries, Owocki & Gayley (1995) demonstrated that radiative inhibition of the WR-wind by the O-star radiation field is not efficient. Instead, radiative braking of the fully accelerated WR-wind occurs as it approaches the O-star. Whether this braking of a highly supersonic flow can be achieved without the generation of shocks is not yet clear.
In both cases, the stellar winds finally do not collide at the terminal velocities of single star winds, but at lower velocities. Consequently, the X-ray emission is softer and the total X-ray flux is probably diminished as well. Owocki & Gayley (1995) also make the point that the opening angle of the collision zone can be considerably increased due to radiative braking and that radiative braking can prevent photospheric collision.
In both works CAK theory is applied to compute the radiative forces, and within this frame both mechanisms suffer from the same main difficulty: as the ionization state, temperature, and composition of the matter is not exactly known, its response to the stellar radiation fields and, therefore, the CAK coefficients are not well known either. Despite these uncertainties Gayley, Owocki, & Cranmer (1997) have estimated that radiative braking should probably be taken into account in a variety of close WR+O systems.
### 2.3. Thermal conduction
So far, thermal conduction by electrons and ions has mostly been neglected when modeling WR+O colliding wind binaries. However, because of its strong temperature dependence ($`\partial _tT\propto \nabla (T^{5/2}\nabla T)`$) and the high post shock temperatures reached in such systems (up to $`10^8`$ K if terminal wind velocities are reached), thermal conduction is likely to play an important role in the physical description of the collision zone.
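To illustrate the temperatures involved, the following sketch evaluates the standard strong-shock (Rankine–Hugoniot) post-shock temperature for gas hitting the shock at a given speed; the numbers are illustrative only, and the adopted mean molecular weights are assumptions.

```python
K_B = 1.380649e-16   # Boltzmann constant, erg/K
M_H = 1.6726e-24     # hydrogen mass, g

def post_shock_temperature(v_kms, mu=0.6):
    """Strong-shock (Rankine-Hugoniot) post-shock temperature,
    T = 3 mu m_H v^2 / (16 k_B), for gas hitting the shock at speed v."""
    v = v_kms * 1.0e5                     # convert km/s to cm/s
    return 3.0 * mu * M_H * v ** 2 / (16.0 * K_B)

# illustrative only: a WR wind at ~2500 km/s and mu ~ 1 gives T ~ 1e8 K,
# an O-star wind at ~2000 km/s and mu ~ 0.6 gives a few 1e7 K
print(post_shock_temperature(2500.0, mu=1.0), post_shock_temperature(2000.0, mu=0.6))
```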
Quantitatively, not much is known about the influence of heat conduction in WR+O binaries. Myasnikov & Zhekov (1998) have performed 2D numerical simulations using a one temperature model. They neglect radiative cooling and saturation effects, which in particular also means that the entire interaction zone stays hot in their simulations. Their results confirm most expectations: Pre-heating zones, also known as thermal precursors, form upstream of each shock whose temperatures and extensions depend significantly on the flow parameters and the efficiency of thermal conduction. Meanwhile, the temperature of the interaction zone decreases by up to an order of magnitude compared to its adiabatic value. To preserve pressure balance, its density increases by the same amount. The shocks become isothermal. Myasnikov & Zhekov (1998) also note that in the frame of their model the growth of KH instabilities is reduced by strong heat conduction.
The 1D simulations of Motamen, Walder, & Folini (1999) show a drastic change of the picture if radiative cooling is included. A cold, high density region can now form in the interaction zone. The combination of reduced post shock temperature and higher post shock density, both due to thermal conduction, results in enhanced radiative cooling and narrower cooling layers. For WR+O binaries this means that efficient radiative cooling may already set in close to the center of the system. Previous adiabatic cooling of the shocked matter to reach temperatures where strong radiative cooling finally is possible may become obsolete. In summary, the system is likely to become more radiative under the combined influence of thermal conduction and radiative cooling than if thermal conduction were absent. For the case of wind-blown bubbles, 1D simulations by Zhekov & Myasnikov (1998) suggest that the combined influence of radiative cooling and thermal conduction can also cause the formation of additional multiple shocks.
So far, there exist no 2D simulations for WR+O binaries including both radiative cooling and thermal conduction. However, for other situations such simulations have been performed, for example by Comerón & Kaper (1998) for runaway OB stars.
In reality, however, thermal conduction is likely to be even more complicated. Only two papers shall be mentioned here which illustrate this. One is by Balbus (1986), where the emphasis lies on the effect of magnetic fields and, in particular, on the time scales associated with magnetized conduction fronts. The second is by Borkowski, Shull, & McKee (1989) who show that in many cases one temperature models will not do. Also, they emphasize that the chemical composition and the ionization state can affect the conductive heat flux and, in particular, its saturation. They show that under certain circumstances thermal conduction may be able to reduce the peak temperature only by a factor of three, while for other conditions a reduction by a factor of ten is possible.
With regard to observations, thermal conduction will certainly lead to softer X-ray emission. One may speculate that the X-ray emission could be enhanced as well. Due to higher compression and lower temperatures, the shocked matter may cool radiatively before significantly cooling adiabatically when moving out of the center of the system.
### 2.4. Instabilities and turbulence
A wealth of analytical estimates and numerical simulations suggest that the interaction zone in colliding wind WR+O binaries is unstable, especially when strong radiative cooling occurs. The interaction zone as a whole gets bent and is possibly torn apart as becomes apparent, for example, in the work of Stevens, Blondin, & Pollock (1992). More recent numerical simulations also suggest the cold interior of the interaction zone to be in supersonically turbulent motion.
A variety of papers are concerned with the physical nature of different kinds of instabilities. The scope covered by these works ranges from the classical Rayleigh-Taylor, Kelvin-Helmholtz, and Richtmyer-Meshkov instabilities, which act on interfaces separating two different physical states (for example the two winds), over various thin shell instabilities, which act on the entire thin layer of high density, cold matter (e.g. Dgani, Walder, & Nussbaumer 1993; Vishniac 1994), to the thermal instability related to radiative cooling (e.g. Walder & Folini 1995). A recent review can be found in Walder & Folini (1998).
Which of the suggested instabilities is important for a certain astrophysical object is often not clear. Also, instabilities usually will not occur isolated but will interact with each other, making it often pointless to speak of one particular instability. Finally, in WR+O binaries advection out of the system center will be superimposed on all instabilities (Belov, & Myasnikov 1999; Ruderman 2000).
The resulting bending of the thin, cold, high density interaction zone probably also affects the interior dynamics of this thin sheet. In a planar, high resolution study Folini & Walder (2000) have focused on the interior structure of the cold part of a colliding winds interaction zone. They find the cold part of the interaction zone to be subject to driven, supersonic turbulence. The matter distribution within the turbulent interaction zone consists of overcompressed, high density knots and filaments, separated by large voids. The mean density of the interaction zone is considerably reduced compared to the density required to balance the ram pressure of the incoming flows by thermal pressure alone. Its surface becomes billowy due to the turbulent motion inside.
The unstable behaviour of the cold part of the interaction zone should have observable consequences. Spectral lines originating from it should show clearly stronger than thermal line broadening because of turbulent motion. The total extent of the hot post shock zones will be affected by the combined influence of bending and the thermal instability. This may lead to some X-ray variability, as suggested by Pittard & Stevens (1997) for the case of O+O binaries.
Neglecting for a moment the review type character of this article, some speculations may be added. On the basis of observed line profile variations, Luehrs (1997) derived the angular extension of the collision zone in HD 152270. The value he found seems extraordinarily large for a cold, high density interaction zone. Could strong global bending and twisting of a slim interaction zone make it appear to be much more extended? The second speculation is related to the overcompressed knots and filaments observed in numerical simulations of radiatively cooling, unstable collision zones. Could dust formation in close WR+O binaries be linked to such knots? (See also Section 2.6.)
Having argued in favor of an unstable interaction zone in colliding wind WR+O binaries a word of caution seems advisable here as well. Analytical results generally are not directly applicable because their rather restrictive assumptions are usually not fulfilled. Numerical simulations, on the other hand, are more flexible in that respect but it may be difficult to rule out numerical artifacts. Also, instabilities so far have mostly been studied in the frame of rather simple physical models. How additional physical processes will influence the stability properties of the interaction zone has yet to be investigated.
With regard to possible numerical artifacts the recent publication of Myasnikov, Zhekov, & Belov (1998) must be mentioned. They argue that if not purely numerical in origin anyway, the instabilities observed in numerical simulations of colliding wind binaries at least greatly depend on the applied cooling limit. Their findings certainly require further attention.
Despite these objections, we are firmly convinced that the colliding winds interaction zone in WR+O binaries is unstable. If efficient radiative cooling takes place the interaction zone is most likely subject to strong bending and is possibly torn apart in some locations. We are, however, not sure how violent this instability is and what its exact physical cause is. The cold interior of the interaction zone is probably subject to supersonic turbulence. How exactly this turbulence is driven is not yet clear. Also, the statistical properties of this part of the interaction zone, for example its mean density, have barely been investigated and then only in 2D. This unstable behaviour must cause observable traces.
### 2.5. Ionizing radiation
There are three sources of ionizing radiation in WR+O binaries: the two stars themselves and the shock heated interaction zone. For the temperature and ionization state of the cold matter within and around the colliding winds interaction zone this radiation is crucial. So far, only a few studies exist, each of which deals with a separate aspect of the problem. Gies, Bagnuolo Jr., & Penny (1997) find that the X-ray radiation emitted by the collision zone in close binaries is capable of significantly heating the O-star photosphere, thereby changing observational quantities used to derive the stellar parameters. Aleksandrova & Bychkov (1998) considered the same radiation source but investigated its influence on the pre-shock material. Investigating wind velocities in the range between 4000 km/s and 15000 km/s they found that the X-rays from the collision zone are capable of ionizing iron nearly completely before it gets shocked. While these velocities are clearly above those encountered in WR+O binaries the pre-ionizing effect as such is likely to be present also in these systems. Consequently, emission from highly ionized ions may not solely originate from the shock heated zones themselves, which should be taken into account when using highly ionized elements for diagnostics of the collision zone. Also the temperature of the pre-shock matter is likely to be affected. First attempts to estimate the effect of the stellar radiation field on the cold part of the interaction zone have been made by Rauw et al. (1999) and by Folini & Walder (1999), the latter using 3D optically thick NLTE radiative transfer. Although their results are preliminary, the latter authors find that for their toy example of $`\gamma `$ Vel optically thick effects become important within the cold, high density part of the collision zone.
### 2.6. Dust formation
Observations clearly prove the permanent or episodical dust formation in certain WC+O binaries. The most spectacular example is probably the dust spiral of WR 104 observed by Tuthill et al. (1999). Recent observational summaries can be found in Williams (1997) and Williams (1999). The observation of dust in such systems is puzzling as conditions there (high temperature, strong UV radiation) seem not especially suited for dust formation.
Theoretically, dust formation in WR+O binaries is not understood so far. Usov (1991) has published density estimates for a homogeneous collision zone. The 2D simulations of V444 first presented by Walder & Folini (1995) have been carried on in the meantime, showing overcompressions of up to a factor of ten in a supersonically turbulent interaction zone, compared to a homogeneous one. In this particular simulation densities of up to about $`10^{13}`$ cm$`^{-3}`$ are observed out to a distance comparable to the separation of the two stars. Considerably more publications exist on dust nucleation and grain growth under laboratory conditions and in WC winds. Cherchneff & Tielens (1995) and Cherchneff (1997), for example, focused on dust nucleation. In particular, they found that high densities, possibly up to $`10^{12}`$ cm$`^{-3}`$, are required for dust nucleation to take place, while the nucleation process is nearly independent of temperature between 1000 K and 4000 K. Although these temperatures seem small for the colliding winds interaction zone in WR+O binaries they might not be out of reach if the densities are high enough. (See also Section 3.) Leaving the question of nucleation aside, Zubko (1992), Zubko, Marchenko, & Nugis (1992), and Zubko (1998) carried out theoretical studies of grain growth via collisions of charged grains with carbon ions in WC winds. A main conclusion from their work is that dust grains may grow even in a highly ionized standard WC atmosphere, provided the condensation nuclei are created somehow.
### 2.7. Clumped winds, magnetic fields, particle acceleration
The influence of several other physical processes on the collision zone is even less investigated than of those outlined in the previous sections. Three of them, clumped winds, magnetic fields, and particle acceleration, shall be briefly touched in the following.
Evidence is growing that the winds of both WR- and O-stars are indeed clumped rather than smooth. However, the size, compactness and distribution of the clumps are still under debate. Is a clumped wind more like a few massive clumps in a homogeneous flow? Or is it more appropriate to talk about the flow in terms of an ensemble of different blobs? As pointed out for example by Lépine (1995), the effect of clumped winds on the interaction zone will depend on which of the two scenarios applies. A fast, compact, high density clump may pass through the entire interaction zone with basically no interaction at all (see e.g. Cherepashchuk 1990). A less dense, not too fast clump, on the other hand, may finally get dissolved in the interaction zone, thereby possibly affecting its stability and emission. But also the theoretical treatment may be different, depending on the nature of the clumped winds. Can they be treated statistically using some mean properties or is it important to treat clumps as ’individuals’?
Concerning magnetic fields, the observation of non-thermal emission suggests the presence of magnetic fields in at least some WR+O binaries. However, their strength and orientation have yet to be determined. On the theoretical side, only a few papers exist on magnetic fields in colliding wind WR+O binaries. Eichler & Usov (1993) and Jardine, Allen, & Pollock (1996) present studies on particle acceleration and related synchrotron emission in the interaction zone in WR+O binaries. Zhekov, Myasnikov, & Barsky (1999) focus on the magnetic field distribution, assuming for the stellar wind magnetic field a simplified Parker model. Depending on their strength, magnetic fields are likely to affect several other physical processes directly or indirectly as well. (See also Section 3.)
## 3. Open the fences: mutual interactions
The different physical processes addressed in the previous section obviously do not occur as isolated processes. Some processes influence others and vice versa. A few examples of this we have already briefly encountered above. However, there exist hardly any models including more than one or two different physical processes at the same time. It just would be too costly up to now to include more physics in the numerical models. Also, it may be wiser anyway to first improve our understanding of simpler situations before turning to more complicated ones. Nevertheless, one should bear in mind that most likely there will be considerable interaction amongst different physical processes. The remainder of this section is devoted to speculations, rather than results, on a few such interactions. This section may, therefore, be considered to go beyond the frame of a review. However, we deem it necessary to address these questions as they are crucial for the physical understanding of the interaction zone.
Consider first thermal conduction and radiative braking. Especially for close binaries, they are both crucial for the post shock temperature as well as for the location of the interaction zone and its opening angle. The interaction of the pre-shock matter with the radiation field, crucial for radiative braking, will certainly be affected by the increase in temperature due to thermal conduction. It seems plausible to assume that this interaction will be reduced if the matter temperature deviates strongly from the radiation temperature. Let us first start from a radiative braking model. If we now add thermal conduction the pre-shock wind will be heated. According to our assumption this means less interaction between the wind and the radiation field and therefore less radiative braking. The post shock temperature will rise and the pre-heating zones will become even larger. ’Positive back coupling’ occurs. Now let us start with a thermal conduction model. Adding radiative braking will slow down the pre-shock wind, the post shock temperature will drop, and the pre-heating zones will become smaller and cooler. Again according to our assumption, radiative braking will become more efficient and the post shock temperature will drop. ’Positive back coupling’ again. Both scenarios are speculations. But as each of the two processes alone already has powerful impact on the physics of the interaction zone a common investigation of the two would be highly desirable.
Here it may be added that a radiative precursor instead of a thermal one could have similar effects. A radiative precursor is likely to exist around the wind collision zone in WR+O binaries because of the X-ray and UV emission of the high temperature post shock zones. Its effect would be to heat and ionize the pre-shock matter. However, at least for the case of colliding winds in WR+O binaries radiative precursors on their own are even less investigated up to now than thermal ones.
Another issue is the interplay of thermal conduction, radiative cooling, and ionizing radiation. Together these processes essentially determine how cold the matter can get in the cold, high density part of the interaction zone. While attempts have been made to bring the first two together the isolated problem of the influence of ionizing radiation on the interaction zone has hardly been investigated so far. Its influence is usually taken into account only in the form of heating due to photoionization and then only in the form of a more or less arbitrarily chosen cooling limit of the radiative loss function. Just for arguments sake consider a compact, high density clump in the interaction zone that is opaque to UV radiation. Its outside would be bombarded by UV and X-ray photons, the surface of the clump would be heated. Now, thermal conduction would tend to distribute this energy, received on the surface, over the entire volume of the clump. If the clump then were able to radiate this energy at longer wavelengths, for which the clump were still transparent, the clump may manage to remain cold. Such a mechanism could possibly preserve the cold environment necessary for dust formation.
So far, we have again neglected magnetic fields in this section. Their presence, however, will have a direct influence on thermal conduction or the stability and density of the cold part of the interaction zone. The altered thermal conduction then in turn may affect, for example, radiative braking or dust formation. So indirectly magnetic fields may influence physical processes or quantities, like radiative braking, which at first glance one may believe to be unaffected.
## 4. End of the visit: collecting the pieces
Theoretical predictions for the thermal and ionization state of the colliding winds interaction zone in WR+O star binaries require the inclusion of a variety of physical processes. Which processes are indeed important for which system is, however, often a difficult question. A brief summary of where we stand today is attempted in the following.
How hot can it get? Being one of the key questions it has been examined quite carefully. The only process leading to a temperature increase in the high temperature part of the interaction zone is shock heating. The temperature reached depends on the relative velocity of the colliding flows which, in turn, can be affected by radiative braking. Thermal conduction, on the other hand, causes a decrease of the peak temperature. This peak temperature may be higher than the temperature we observe, depending on when radiative cooling becomes important. If the density is too low or the peak temperature too high the matter will undergo significant adiabatic cooling as it moves out of the center of the system, before it starts to cool radiatively and thus becomes observable. The peak temperature may also affect the maximum densities which can be reached within the cold part of the collision zone. A lower peak temperature, due to thermal conduction for example, allows for faster cooling, closer to the center of the system, and consequently for higher densities of the cooled matter. Presently, studies exist on the influence of each single processes and on the combined influence of thermal conduction and radiative cooling in 1D. The combined influence of all these processes has, however, yet to be investigated, finally in 2D or 3D. The question is certainly important with regard to X-ray emission. Comparison with observation shows that for close binaries the predicted X-ray emission is still too high and too hard, whereas for wider systems theory and observation agree much better. The combined influence of the above processes probably will help to bring theory and observation closer together in this point.
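As a rough orientation only (this simple estimate is not taken from the studies discussed above): for a strong adiabatic shock with adiabatic index 5/3, the Rankine-Hugoniot conditions give an immediate post-shock temperature of about T = (3/16) mu m_H v^2 / k_B, or roughly 2 x 10^7 K times mu times (v / 1000 km/s)^2, where mu is the mean molecular weight and v the pre-shock velocity component perpendicular to the shock. For typical WR and O star wind speeds of 1000-3000 km/s this corresponds to peak temperatures of order 10^7 to a few times 10^8 K, which is why the processes discussed here matter most for the X-ray emitting part of the interaction zone.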
How cold can it get? Another key question which, despite its importance with regard to ionization states, compression, and dust formation, especially for close binaries, has barely been attacked so far. Basically, thermal conduction and photoionization tend to heat the cold part of the interaction zone, whereas radiative cooling reduces its temperature. From these processes, heating of the interaction zone by photoionization is by far the least investigated. Here the stellar radiation fields as well as the radiation from the shock heated zones must be taken into account and their interaction with the matter has to be determined. At least for some systems, this probably requires detailed multidimensional radiative transfer computations. The difficulty with radiative cooling, crucial in this context, is that it strongly depends on the temperature and ionization state of the matter. The cooling history, and therefore time dependence, may also be important. Finally, the properties of the hot post shock zones, the source of thermal electrons and ionizing photons, are not well known either. In the future, the question should be clarified whether detailed radiative transfer is indeed needed. Then, the heating of the cold part of the interaction zone by photoionization should be investigated more quantitatively.
Towards observations. Traces of the interaction zone seem to be present in all spectral ranges. Observational predictions from what we called ‘type 1’ models in this review exist, to our knowledge, for X-rays (many) and radio (one). As far as we know, there are no predictions from ‘type 1’ models for UV, optical, and IR. The reason is that predictions for these latter spectral ranges depend essentially on the cold part of the interaction zone which, as mentioned before, is not yet well understood. All models reproducing line profile variations in the UV, optical, or IR are of what we called ‘type 2’. Starting from a more or less simple geometrical description of the interaction zone, these models then assume a certain ionization state and level population. They are valuable as they show us what certain parameters of the interaction zone should be like in order to reproduce observations. However, these models themselves give no physical explanation of why the parameters should have their particular values.
The ’grand unified model’. Looking at ‘type 1’ models, some physical processes are included in one model but not in the other (for example thermal conduction or radiative braking) and some physical ingredients are barely considered at all so far (for example clumped winds or magnetic fields). ‘Type 2’ models may be improved by considering other than just the most simple, analytical matter distributions. It may also be worthwhile trying to find out how unique a certain observed signature is. Is there only one way to reproduce it or allow several different, equally plausible models for the same observational feature? Enhanced combination of ‘type 1’ and ‘type 2’ approaches could help to decide such questions. Although a ’grand unified model’ for WR+O colliding wind binaries will remain out of reach for several years to come, considerable progress has been made with regard to the modeling and understanding of single physical processes such a model must comprise. This should also help us to decide which physical processes are indeed needed in order to explain a particular system. Also in future modeling will be expensive and models, therefore, should comprise only the essential physics.
## References
Aleksandrova, O. V. & Bychkov, K. V. 1998, Astron. Rep., 42, 160
Balbus, S. A. 1986, ApJ, 304, 787
Belov, N. A. & Myasnikov, A. V. 1999, Fluid Dynamics, 3, 96
Borkowski, K. J., Shull, J. M., & McKee, C. F. 1989, ApJ, 336, 979
Cantó, J., Raga, A. C., & Wilkin, F. P. 1996, ApJ, 469, 729
Chen, Y., Bandiera, R., & Wang, Z. 1996, ApJ, 469, 715
Cherchneff, I. & Tielens, A. G. G. M. 1995, in IAU Symp. 163, WR Stars: Binaries, Colliding Winds, Evolution, ed. K. A. van der Hucht & P. M. Williams (Dordrecht: Kluwer), 346
Cherchneff, I. 1997, Ap&SS, 251, 333
Cherepashchuk, A. M. 1990, Soviet Ast., 34, 481
Comerón, F. & Kaper, L. 1998, A&A, 338, 273
Dgani, R., Walder, R., & Nussbaumer, H., 1993, A&A, 267, 155
Folini, D., & Walder, R. 1999, in IAU Symp. 193, Wolf-Rayet Phenomena in Massive Stars and Starburst Galaxies ed. K. A. van der Hucht, Koenigsberger, G., & Eenens, P. R. J. (San Francisco: ASP), 352
Folini, D., & Walder, R. 2000, in preparation
Gayley, K. G., Owocki, S. P., & Cranmer, S. R. 1997, ApJ, 475, 786
Gies, D. R., Bagnuolo jr., W. G., & Penny, L. R. 1997, ApJ, 479, 408
Jardine, M., Allen, H. R., & Pollock, A. M. T. 1996, A&A, 314, 594
Lépin, S., Eversberg, T., & Moffat, A. F. J. 1999, AJ, 117, 1441
Lépin, S. 1995, in IAU Symp. 163, WR Stars: Binaries, Colliding Winds, Evolution, ed. K. A. van der Hucht & P. M. Williams (Dordrecht: Kluwer), 411
Luehrs, S. 1997, PASP, 109, 504
Marchenko, S. V., Moffat, A. F. J., Eenens, P. R. J., Cardona, O., Echevarria, J., & Hervieux, Y. 1997, ApJ, 485, 826
Motamen, S., Walder, R., & Folini, D. 1999, in IAU Symp. 193, Wolf-Rayet Phenomena in Massive Stars and Starburst Galaxies ed. K. A. van der Hucht, Koenigsberger, G., & Eenens, P. R. J. (San Francisco: ASP), 378
Myasnikov, A. V. & Zhekov, S. A. 1993, MNRAS, 260, 221
Myasnikov, A. V. & Zhekov, S. A. 1998, MNRAS, 300, 686
Myasnikov, A. V., Zhekov, S. A., & Belov, N. A. 1998, MNRAS, 298, 1021
Owocki, S. P. & Gayley, K. G. 1995, ApJ, 454, L145
Patzer, A. B. C., Gauger, A., & Sedlmayr, E. 1998, A&A, 337, 847
Pittard, J. M. & Stevens, I. R. 1997, MNRAS, 292, 298
Pittard, J. M. 1999, in IAU Symp. 193, Wolf-Rayet Phenomena in Massive Stars and Starburst Galaxies ed. K. A. van der Hucht, Koenigsberger, G., & Eenens, P. R. J. (San Francisco: ASP), 386
Rauw, G., Vreux, J. M., & Bohannan, B. 1999, ApJ, 517, 416
Ruderman, M. S. 2000, Ap&SS, Conf. Proc. Progress in Cosmic Gas Dynamics
Shore, S. N. & Brown, D. N. 1988, ApJ, 334, 1021
Stevens, I. R., Blondin, J. M., & Pollock, A. M. T. 1992, ApJ, 386, 265
Stevens, I. R. & Pollock, A. M. T. 1994, MNRAS, 269, 226
Stevens, I. R. 1995, MNRAS, 277, 163
Stevens, I. R. & Howarth, I. D. 1999, MNRAS, 302, 549
Tuthill, P. G., Monnier, J. D., & Danchi, W. C. 1999, Nature, 398, 487
Usov, V. V. 1991, MNRAS, 252, 49
Usov, V. V. 1995, in IAU Symp. 163, WR Stars: Binaries, Colliding Winds, Evolution, ed. K. A. van der Hucht & P. M. Williams (Dordrecht: Kluwer), 495
Vishniac, E. T. 1994, ApJ, 428, 186
Walder, R. 1995, in IAU Symp. 163, WR Stars: Binaries, Colliding Winds, Evolution, ed. K. A. van der Hucht & P. M. Williams (Dordrecht: Kluwer), 420
Walder, R. & Folini, D. 1995, in IAU Symp. 163, WR Stars: Binaries, Colliding Winds, Evolution, ed. K. A. van der Hucht & P. M. Williams (Dordrecht: Kluwer), 525
Walder, R. & Folini, D. 1998, Ap&SS, 260, 215
Walder, R. 1998, Ap&SS, 260, 243
Walder, R., Folini, D., & Motamen, S. 1999, in IAU Symp. 193, Wolf-Rayet Phenomena in Massive Stars and Starburst Galaxies ed. K. A. van der Hucht, Koenigsberger, G., & Eenens, P. R. J. (San Francisco: ASP), 298
Williams, P. M. 1997, Ap&SS, 251, 321
Williams, P. M. 1999, in IAU Symp. 193, Wolf-Rayet Phenomena in Massive Stars and Starburst Galaxies ed. K. A. van der Hucht, Koenigsberger, G., & Eenens, P. R. J. (San Francisco: ASP), 267
Willis, A. J., Schild, H., & Stevens, I. R. 1995, A&A, 298, 549
Zhekov, S. A. & Myasnikov, A. V. 1998, New Astronomy, 3, 57
Zhekov, S. A., Myasnikov, A. V., & Barsky, E. V. 1999, in IAU Symp. 193, Wolf-Rayet Phenomena in Massive Stars and Starburst Galaxies ed. K. A. van der Hucht, Koenigsberger, G., & Eenens, P. R. J. (San Francisco: ASP), 400
Zubko, V. G. 1992, Astron. Astrophys. Trans., 3, 141
Zubko, V. G., Marchenko, S. V., & Nugis, T. 1992, Astron. Astrophys. Trans., 3, 131
Zubko, V. G. 1998, MNRAS, 295, 109
## Lecture notes on quantum cohomology of the flag manifold
## 1. Classical theory
Let us briefly review the standard facts from the Schubert calculus of the flag manifold; see for details. Let $`Fl_n`$ be the variety of complete flags in $`^n`$. The cohomology ring $`\mathrm{H}^{}(Fl_n,)`$ can be described in two different ways. The first description, due to Borel , represents it as a quotient of a polynomial ring:
(1)
$$\mathrm{H}^{}(Fl_n,)[x_1,\mathrm{},x_n]/I_n,$$
where $`x_1,\mathrm{},x_n\mathrm{H}^2(Fl_n,)`$ are the first Chern classes of $`n`$ standard line bundles on $`Fl_n`$, and $`I_n`$ is the ideal generated by symmetric polynomials in $`x_1,\mathrm{},x_n`$ without constant term. (This result, as well as several others below, extends to the more general setup of the homogeneous space $`G/B`$ for a complex semisimple Lie group $`G`$. In these notes, we only treat the type $`A`$ case, with $`G=SL_n`$.)
The second description is based on the decomposition of $`Fl_n`$ into Schubert cells, indexed by the elements of the symmetric group $`S_n`$. The corresponding cohomology classes $`\sigma _w`$, $`wS_n`$ (the Schubert classes) form an additive basis in $`\mathrm{H}^{}(Fl_n,)`$.
The elements of the quotient ring $`[x_1,\mathrm{},x_n]/I_n`$ which correspond to the Schubert classes under the isomorphism (1) were identified by Bernstein, Gelfand, and Gelfand and Demazure . Then Lascoux and Schützenberger introduced remarkable polynomial representatives of the Schubert classes called Schubert polynomials. These polynomials $`𝔖_w`$, $`wS_n`$, are defined as follows.
Let $`s_i`$ denote the adjacent transposition $`(i,i+1)`$. For $`wS_n`$, an expression $`w=s_{i_1}s_{i_2}\mathrm{}s_{i_l}`$ of minimal possible length is called a *reduced decomposition*. The number $`l=\mathrm{}(w)`$ is the *length* of $`w`$. The symmetric group $`S_n`$ acts on $`[x_1,\mathrm{},x_n]`$ by $`wf=f(x_{w^{-1}(1)},\mathrm{},x_{w^{-1}(n)})`$. The *divided difference operator* $`\partial _i`$ is defined by $`\partial _if=(x_i-x_{i+1})^{-1}(1-s_i)f`$. For any permutation $`w`$, the operator $`\partial _w`$ is defined by $`\partial _w=\partial _{i_1}\partial _{i_2}\mathrm{}\partial _{i_l}`$, where $`s_{i_1}s_{i_2}\mathrm{}s_{i_l}`$ is a reduced decomposition for $`w`$.
Let $`\delta =\delta _n=(n-1,n-2,\mathrm{},1,0)`$ and $`x^\delta =x_1^{n-1}x_2^{n-2}\mathrm{}x_{n-1}`$. For $`wS_n`$, the *Schubert polynomial* $`𝔖_w`$ is defined by $`𝔖_w=\partial _{w^{-1}w_\mathrm{o}}x^\delta `$, where $`w_\mathrm{o}`$ is the longest element in $`S_n`$. Equivalently, $`𝔖_{w_\mathrm{o}}=x^\delta `$, and $`𝔖_{ws_i}=\partial _i𝔖_w`$ whenever $`\mathrm{}(ws_i)=\mathrm{}(w)-1`$. The following result is immediate from .
###### Theorem 1.
The Schubert polynomials represent Schubert classes under Borel’s isomorphism (1).
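The divided difference recursion is easy to experiment with. The following small sketch (not part of the original notes; it uses Python with sympy purely for illustration) computes divided differences and, starting from the staircase monomial, a few Schubert polynomials for $`S_3`$.

```python
# Illustrative sketch: divided differences and Schubert polynomials for S_3.
from sympy import symbols, cancel, prod

n = 3
x = symbols(f'x1:{n+1}')                      # x1, x2, x3

def s(f, i):
    """Apply the adjacent transposition s_i, exchanging x_i and x_{i+1}."""
    return f.subs({x[i-1]: x[i], x[i]: x[i-1]}, simultaneous=True)

def ddiff(f, i):
    """Divided difference: (f - s_i f)/(x_i - x_{i+1})."""
    return cancel((f - s(f, i)) / (x[i-1] - x[i]))

f = prod(x[k]**(n-1-k) for k in range(n))     # staircase monomial x1**2*x2, representing w_o
print(f)                                      # x1**2*x2
print(ddiff(f, 1))                            # x1*x2   (Schubert polynomial of w_o s_1)
print(ddiff(ddiff(f, 2), 1))                  # x1 + x2 (Schubert polynomial of s_2)
```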
## 2. Quantum cohomology
The (small) *quantum cohomology ring* $`\mathrm{QH}^{}(X,)`$ of a smooth algebraic variety $`X`$ is a certain deformation of the classical cohomology; see, e.g., for references and definitions. The additive structure of this ring is usually rather simple. For example, $`\mathrm{QH}^{}(Fl_n,)`$ is canonically isomorphic, as an abelian group, to the tensor product $`\mathrm{H}^{}(Fl_n,)[q_1,\mathrm{},q_{n1}]`$, where the $`q_i`$ are formal variables (deformation parameters). The multiplicative structure of the quantum cohomology is however deformed comparing to $`\mathrm{H}^{}(Fl_n,)`$, and specializes to it in the classical limit $`q_1=\mathrm{}=q_{n1}=0`$. The multiplication in $`\mathrm{QH}^{}(Fl_n,)`$ is given by
(2)
$$\sigma _u\ast \sigma _v=\sum _w\sum _{d=(d_1,\mathrm{},d_{n-1})}q^d\langle \sigma _u,\sigma _v,\sigma _w\rangle _d\sigma _{w_\mathrm{o}w},$$
where the $`\langle \sigma _u,\sigma _v,\sigma _w\rangle _d`$ are the (3-point, genus $`0`$) *Gromov-Witten invariants* of the flag manifold, and $`q^d=q_1^{d_1}\mathrm{}q_{n-1}^{d_{n-1}}`$. Informally, these invariants count equivalence classes of rational curves in $`Fl_n`$ which have multidegree $`d=(d_1,\mathrm{},d_{n-1})`$ and pass through given Schubert varieties. In order for an invariant to be nonzero, the condition $`\mathrm{}(u)+\mathrm{}(v)+\mathrm{}(w)=\left(\genfrac{}{}{0pt}{}{n}{2}\right)+2\sum _{i=1}^{n-1}d_i`$ has to be satisfied. The operation $`\ast `$ defined by (2) is associative, and obviously commutative.
The quantum analog of Borel’s theorem was obtained by Givental and Kim and Ciocan-Fontanine who showed that
(3)
$$\mathrm{QH}^{}(Fl_n,)P_n/I_n^q,$$
where $`P_n=[q_1,\mathrm{},q_{n1}][x_1,\mathrm{},x_n]`$, the $`x_i`$ are the same as before, and $`I_n^q`$ is the ideal generated by the coefficients $`E_1^n,\mathrm{},E_n^n`$ of the characteristic polynomial
(4)
$$det(1+\lambda G_n)=\sum _{i=0}^{n}E_i^n\lambda ^i$$
of the matrix
(5)
$$G_n=\left(\begin{array}{ccccc}x_1& q_1& 0& \mathrm{}& 0\\ -1& x_2& q_2& \mathrm{}& 0\\ 0& -1& x_3& \mathrm{}& 0\\ \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}& \mathrm{}\\ 0& 0& 0& \mathrm{}& x_n\end{array}\right).$$
(These coefficients are called *quantum elementary symmetric functions*.) More precisely, let us identify the polynomial $`x_1+\mathrm{}+x_i`$ with the Schubert class $`\sigma _{s_i}`$. The quantum cohomology ring is then generated by the elements $`x_i`$, subject to the relations in the ideal $`I_n^q`$.
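As a concrete check of the relations generating $`I_n^q`$, one can extract the quantum elementary symmetric functions directly from the characteristic polynomial (4). The following hedged sketch (Python/sympy, not from the original notes) does this for small $`n`$; it assumes the matrix $`G_n`$ of (5) with $`-1`$ on the subdiagonal.

```python
# Sketch: quantum elementary symmetric functions E_i^n from det(1 + lam*G_n).
from sympy import symbols, eye, Matrix, expand

def quantum_elementary(n):
    lam = symbols('lam')
    x = symbols(f'x1:{n+1}')
    q = symbols(f'q1:{n}')
    G = Matrix(n, n, lambda i, j: x[i] if i == j
               else (q[i] if j == i + 1 else (-1 if j == i - 1 else 0)))
    cp = expand((eye(n) + lam * G).det())
    return [cp.coeff(lam, i) for i in range(n + 1)]

print(quantum_elementary(2))     # [1, x1 + x2, q1 + x1*x2]
print(quantum_elementary(3)[3])  # q1*x3 + q2*x1 + x1*x2*x3
```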
## 3. Quantum Schubert polynomials
The above description of $`\mathrm{QH}^{}(Fl_n,)`$ does not tell which elements on the right-hand side of (3) correspond to the Schubert classes. The main goal of was to give the quantum analogues of the Bernstein–Gelfand–Gelfand theorem and the Schubert polynomials construction of Lascoux and Schützenberger. This allowed us to design algorithms for computing the Gromov-Witten invariants for the flag manifold. Our approach relied on some of the most basic properties of the quantum cohomology, which can be expressed in elementary terms (see below).
Let $`A_n`$ denote the vector space spanned by the classical Schubert polynomials. Another basis of $`A_n`$ is formed by the monomials $`x_1^{a_1}x_2^{a_2}\mathrm{}x_{n1}^{a_{n1}}`$ dividing the staircase monomial $`x^\delta `$. The space $`A_n`$ is complementary to the ideal $`I_n`$, and also to the quantized ideal $`I_n^q`$.
The *quantum Schubert polynomial* $`𝔖_w^q`$ is defined as the unique polynomial in $`A_n`$ that belongs to the coset modulo $`I_n^q`$ representing the Schubert class $`\sigma _w`$ under the canonical isomorphism (3). The primary goal of was to algebraically identify these polynomials.
## 4. Axiomatic characterization
The following properties of the quantum Schubert polynomials are directly implied by their definition.
###### Property 1.
$`𝔖_w^q`$ is homogeneous of degree $`\mathrm{}(w)`$, assuming $`\mathrm{deg}(x_i)=1`$, $`\mathrm{deg}(q_j)=2`$.
###### Property 2.
Specializing $`q_1=\mathrm{}=q_{n1}=0`$ yields $`𝔖_w^q=𝔖_w`$.
###### Property 3.
$`𝔖_w^q`$ belongs to the span $`A_n`$ of the classical Schubert polynomials.
It follows that the $`𝔖_w^q`$ form a linear basis in $`A_n`$, and that the transition matrices between the bases $`\{𝔖_w^q\}`$ and $`\{𝔖_w\}`$ are unipotent triangular, with respect to any linear ordering consistent with $`\mathrm{}(w)`$.
The next property reflects the fact that the Gromov-Witten invariants of the flag manifold are nonnegative integers.
###### Property 4.
Consider any product of polynomials $`𝔖_w^q`$. Expand it (modulo $`I_n^q`$) in the linear basis $`\{𝔖_w^q\}`$. Then all coefficients in this expansion are polynomials in the $`q_j`$ with nonnegative integer coefficients.
The following result is a restatement of formula (3) in .
###### Property 5.
For a cycle $`w=s_{k-i+1}\mathrm{}s_k`$, we have $`𝔖_w^q=E_i^k`$.
###### Theorem 2.
The polynomials $`𝔖_w^q`$ are uniquely determined by Properties 15.
We conjecture in that Property 5, which is the only property stated above that does not trivially follow from the quantum-cohomology definition of the $`𝔖_w^q`$, is not actually needed to uniquely determine the quantum Schubert polynomials.
The next two sections provide constructive descriptions of these polynomials.
## 5. Quantum polynomial ring
For $`k=1,2,\mathrm{}`$, define the operator $`X_k`$ acting in the polynomial ring by
(6)
$$X_k=x_k-\sum _{i<k}q_{ik}\partial _{(ik)}+\sum _{j>k}q_{kj}\partial _{(kj)},$$
where $`\partial _{(ij)}`$ is the divided difference operator which corresponds to the transposition $`t_{ij}`$, and $`q_{ij}=q_iq_{i+1}\mathrm{}q_{j-1}`$. (We will always assume $`i<j`$.)
###### Theorem 3.
The operators $`X_i`$ commute pairwise, and generate a free commutative ring. For any polynomial $`gP_n`$, there exists a unique operator $`G[q_1,\mathrm{},q_{n1}][X_1,\mathrm{},X_n]`$ satisfying $`g=G(1)`$.
(Here $`G(1)`$ denotes the result of applying $`G`$ to the polynomial $`1`$.)
For a polynomial $`gP_n`$, the polynomial $`G`$ given by $`g=G(1)`$ is called the *quantization* of $`g`$. The bijective correspondence $`gG`$ between $`P_n`$ and $`[q_1,\mathrm{},q_{n1}][X_1,\mathrm{},X_n]`$ is by no means a ring homomorphism. Identifying the two spaces via this bijection, we obtain an alternative ring structure on $`P_n`$. The multiplication thus defined is called *quantum multiplication* and denoted by $``$; it coincides with the usual multiplication in the classical limit.
Recall that $`I_nP_n`$ is the ideal generated by the elementary symmetric functions $`e_i=e_i(x_1,\mathrm{},x_n)`$, $`i=1,\mathrm{},n`$. It can be checked that $`I_n`$ is also an ideal with respect to the quantum multiplication (i.e., $`I_n`$ is an invariant space for the operators $`X_1,\mathrm{},X_n`$ acting in $`P_n`$).
We are now going to relate our quantum multiplication to the quantum cohomology of the flag manifold. First we verify that for $`i=1,\mathrm{},n`$, the quantization of the elementary symmetric function $`e_i(x_1,\mathrm{},x_n)`$, is the quantum elementary symmetric function $`E_i^n`$ defined by (4). As a corollary, the quantization map bijectively maps the ideal $`I_n`$ onto the Givental-Kim ideal $`I_n^q`$. Thus the quotient $`P_n/I_n`$, with the quantum multiplication $``$ defined above, is canonically isomorphic to the quotient ring $`P_n/I_n^q`$ (hence to $`\mathrm{QH}^{}(Fl_n,)`$). In fact, more is true.
###### Theorem 4.
The canonical isomorphism between the quotient space $`P_n/I_n`$ and the classical cohomology of the flag manifold is also a ring isomorphism between $`P_n/I_n`$, endowed with quantum multiplication defined above in this section, and the quantum cohomology ring of the flag manifold.
In other words, the identification of the (classical) Schubert polynomials with the corresponding Schubert classes translates the quantum multiplication defined in this section into the multiplication in the quantum cohomology ring.
The quantum Schubert polynomial $`𝔖_w^q`$ is the quantization of the ordinary Schubert polynomial $`𝔖_w`$, in the sense of the above construction. In other words, $`𝔖_w^q`$ is uniquely determined by $`𝔖_w^q(X_1,X_2,\mathrm{})(1)=𝔖_w(x_1,x_2,\mathrm{})`$. It follows that the quantum multiplication of ordinary Schubert polynomials translates into the ordinary multiplication of the corresponding quantum Schubert polynomials.
## 6. Standard monomials
Let $`e_i^k`$ denote the elementary symmetric function of degree $`i`$ in the variables $`x_1,\mathrm{},x_k`$. The *standard elementary monomials* are defined by the formula
(7)
$$e_{i_1\mathrm{}i_{n1}}=e_{i_1}^1\mathrm{}e_{i_{n1}}^{n1},$$
where we assume $`0i_kk`$ for all $`k`$. It is well known (and easy to prove) that the polynomials (7), for a fixed $`n`$, form a linear basis in the space $`A_n`$ spanned by the Schubert polynomials for $`Fl_n`$. Each Schubert polynomial $`𝔖_w`$ is thus uniquely expressed as a linear combination of such monomials.
Let $`G_k`$ denote the $`k`$th leading principal minor of the matrix $`G_n`$ given by (5). The *quantum standard elementary monomial* is defined by
(8)
$$E_{i_1\mathrm{}i_{n1}}=E_{i_1}^1\mathrm{}E_{i_{n1}}^{n1},$$
where $`E_i^k=E_i(X_1,\mathrm{},X_k)`$ denotes the coefficient of $`\lambda ^i`$ in the characteristic polynomial $`\chi (\lambda )=det(1+\lambda G_k)`$ of $`G_k`$.
###### Theorem 5.
The quantum Schubert polynomial $`𝔖_w^q`$ is obtained by replacing each standard monomial (7) in the expansion of $`𝔖_w`$ by its quantum analogue (8).
The expansions of Schubert polynomials in terms of the standard monomials can be computed recursively top-down in the weak order of $`S_n`$, starting from $`𝔖_{w_\mathrm{o}}=e_{12\mathrm{}n1}`$. Namely, use the basic divided difference recurrence for the $`𝔖_w`$ together with the rule for computing a divided difference of an elementary symmetric function, the Leibnitz formula for the $`_i`$, and the corresponding straightening procedure. Having obtained such an expansion for $`𝔖_w`$, “quantize” each term in it to obtain $`𝔖_w^q`$. In the special case $`n=3`$, this produces results shown in Figure 1.
## 7. Computation of the Gromov-Witten invariants
The space $`A_n`$ spanned by the Schubert polynomials for $`S_n`$ can be described as the set of normal forms for the ideal $`I_n^q`$, with respect to certain term order. This allows one to use Gröbner basis techniques (see, e.g., ) to construct efficient algorithms for computing the Gromov-Witten invariants of the flag manifold.
###### Definition 6.
Let us choose the *total degree – inverse lexicographic term order* on the monomials $`x_1^{a_1}\mathrm{}x_n^{a_n}`$. In other words, we first order all monomials by the total degree $`_ia_i`$, and then break the ties by using the inverse lexicographic order $`x_1<x_2<x_3<\mathrm{}`$. This allows us to introduce the *normal*, or fully reduced *form* of any polynomial with respect to the ideal $`I_n^q`$ and the term order specified above. This normal form can be found, e.g., via the Buchberger algorithm employing the corresponding Gröbner basis of $`I_n^q`$.
###### Theorem 7.
Choose a term order as in Definition 6. Then the reduced minimal Gröbner basis for the ideal $`I_n^q`$ consists of the polynomials $`det\left(E_{j-i+1}^{n-i+1}\right)_{i,j=1}^k`$, for $`k=1,\mathrm{},n`$. The normal form of any polynomial $`FP_n`$ lies in the space $`A_n`$.
For a polynomial $`FP_n`$, let
$$\langle F\rangle =\text{coefficient of }x^\delta \text{ in the normal form of }F.$$
Equivalently, $`\langle F\rangle `$ is the coefficient of $`𝔖_{w_\mathrm{o}}^q`$ in the expansion of $`F`$ (modulo $`I_n^q`$) in the basis of quantum Schubert polynomials, since $`𝔖_{w_\mathrm{o}}^q`$ is the only basis element that involves the staircase monomial $`x^\delta `$. The definition (2) implies that
$$\langle 𝔖_{w_1}^q\mathrm{}𝔖_{w_k}^q\rangle =\sum _dq^d\langle \sigma _{w_1},\mathrm{},\sigma _{w_k}\rangle _d,$$
the generating function for the Gromov-Witten invariants. We thus arrived at the following result.
###### Theorem 8.
A Gromov-Witten invariant $`\sigma _{w_1},\mathrm{},\sigma _{w_k}_d`$ of the flag manifold is the coefficient of the monomial $`q^dx^\delta `$ in the normal form (in the sense of Definition 6) of the product of quantum Schubert polynomials $`𝔖_{w_1}^q\mathrm{}𝔖_{w_k}^q`$.
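To illustrate Theorem 8 in the smallest case, one can let a computer algebra system do the normal-form computation. The sketch below (Python/sympy; not from the original notes) uses sympy's 'grevlex' order as a stand-in for the term order of Definition 6 (whether this reproduces that order exactly is an assumption that would need to be checked) and reads off the coefficient of $`x^\delta =x_1^2x_2`$.

```python
# Hedged sketch: normal form modulo I_3^q and extraction of a Gromov-Witten invariant.
from sympy import symbols, groebner, expand

x1, x2, x3, q1, q2 = symbols('x1 x2 x3 q1 q2')

# quantum elementary symmetric functions for n = 3 (coefficients of det(1 + lam*G_3))
E1 = x1 + x2 + x3
E2 = x1*x2 + x1*x3 + x2*x3 + q1 + q2
E3 = x1*x2*x3 + q1*x3 + q2*x1

G = groebner([E1, E2, E3], x1, x2, x3, order='grevlex')
# product of quantum Schubert polynomials; in degree one they coincide with x1 and x1+x2
F = expand(x1 * x1 * (x1 + x2))
_, nf = G.reduce(F)
print(expand(nf))   # read off the coefficient of q^d * x1**2*x2 for each d
```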
## 8. Quadratic algebras
Another approach to the study of the cohomology ring—ordinary or quantum—of the flag manifold was suggested in , and further developed in .
Let $`_n`$ be the associative algebra generated by the symbols $`[ij]`$, for all $`i,j\{1,\mathrm{},n\}`$, $`ij`$, subject to the convention $`[ij]+[ji]=0`$ and the relations
(12) $`\begin{array}{c}[ij]^2=0,\hfill \\ [ij][jk]+[jk][ki]+[ki][ij]=0,\quad i,j,k\text{ distinct,}\hfill \\ [ij][kl]-[kl][ij]=0,\quad i,j,k,l\text{ distinct.}\hfill \end{array}`$
The algebras $`_n`$ are naturally graded; the formulas for their Hilbert polynomials, for $`n\le 5`$, can be found in . The algebras $`_n`$ are not Koszul for $`n\ge 3`$ (proved by Roos ). It is unknown whether $`_n`$ is generally finite-dimensional; it was proved in that the Hilbert series of $`_n`$ divides that of $`_{n+1}`$.
The “Dunkl elements” $`\theta _1,\mathrm{},\theta _n_n`$ are defined by
(13)
$$\theta _j=\sum _{i<j}[ij]+\sum _{j<k}[jk].$$
###### Theorem 9.
The complete list of relations satisfied by the Dunkl elements $`\theta _1,\mathrm{},\theta _n_n`$ is given by $`\theta _i\theta _j=\theta _j\theta _i`$ (for any $`i`$ and $`j`$) and $`e_i(\theta _1,\mathrm{},\theta _n)=0`$ (for $`i=1,\mathrm{},n`$). Thus these elements generate a commutative subring canonically isomorphic to $`P_n/I_n`$, and to the cohomology ring of the flag manifold.
Let $`s_{ij}S_n`$ denote the transposition of elements $`i`$ and $`j`$. Consider the “Bruhat operators” $`[ij]`$ acting in the group algebra of $`S_n`$ by
(14)
$$[ij]w=\{\begin{array}{cc}ws_{ij}\hfill & \text{if }\mathrm{}(ws_{ij})=\mathrm{}(w)+1\text{ ;}\hfill \\ 0\hfill & \text{otherwise .}\hfill \end{array}$$
One easily checks that these operators satisfy the relations (12). We thus obtain an (unfaithful) representation of the algebra $`_n`$, called the *Bruhat representation*. This representation has an equivalent description in the language of Schubert polynomials. Let us identify each element $`wS_n`$ with the corresponding Schubert polynomial $`𝔖_w`$. Then the generators of $`_n`$ act in $`[x_1,\mathrm{},x_n]/I_n`$ by
(15)
$$[ij]𝔖_w=\{\begin{array}{cc}𝔖_{ws_{ij}}\hfill & \text{if }\mathrm{}(ws_{ij})=\mathrm{}(w)+1\text{ ;}\hfill \\ 0\hfill & \text{otherwise .}\hfill \end{array}$$
The following result is a restatement of the classical Monk’s rule .
###### Theorem 10.
In the representation (15) of the quadratic algebra $`_n`$ in the quotient ring $`[x_1,\mathrm{},x_n]/I_n`$, a Dunkl element $`\theta _j`$ acts as multiplication by $`x_j`$, for $`j=1,\mathrm{},n`$. In other words, $`x_jf=\theta _jf`$, for any coset $`f[x_1,\mathrm{},x_n]/I_n`$.
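The Bruhat operators (14) are straightforward to implement on permutations in one-line notation; a small sketch (Python, not from the original notes, with the length computed as the number of inversions) is given below.

```python
# Sketch: the Bruhat operator [ij] of (14) acting on permutations.
def length(w):
    return sum(1 for a in range(len(w)) for b in range(a + 1, len(w)) if w[a] > w[b])

def bruhat_op(i, j, w):
    """Return w*s_ij if the length increases by exactly 1, otherwise None (i.e. 0)."""
    v = list(w)
    v[i - 1], v[j - 1] = v[j - 1], v[i - 1]   # right multiplication by the transposition
    v = tuple(v)
    return v if length(v) == length(w) + 1 else None

print(bruhat_op(1, 2, (1, 2, 3)))   # (2, 1, 3)
print(bruhat_op(1, 3, (3, 2, 1)))   # None: the longest element cannot move up
```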
## 9. Structure constants and nonnegativity conjecture
Let $`c_{uv}^w`$ denote the coefficient of $`𝔖_w`$ in the product $`𝔖_u𝔖_v`$. Equivalently, $`c_{uv}^w`$ is the number of points in the intersection of the general translates of three (dual) Schubert cells labelled by $`u`$, $`v`$, and $`w_\mathrm{o}w`$, respectively. Thus all the $`c_{uv}^w`$ are nonnegative integers. The problem of finding a combinatorial interpretation for $`c_{uv}^w`$ is one of the central open problems in Schubert calculus. In fact, no elementary proof of the fact that $`c_{uv}^w0`$ is known. Much less is known about the more general Gromov-Witten invariants of the flag manifold.
Let $`_n^+_n`$ be the cone of all elements that can be written as nonnegative integer combinations of noncommutative monomials in the generators $`[ij]`$, for $`i<j`$.
###### Conjecture 11.
*(Nonnegativity conjecture)* For any $`wS_n`$, the Schubert polynomial $`𝔖_w`$ evaluated at the Dunkl elements belongs to the positive cone $`_n^+`$:
(16)
$$𝔖_w(\theta )=𝔖_w(\theta _1,\mathrm{},\theta _{n1})_n^+.$$
Let us now explain why Conjecture 11 implies nonnegativity of the structure constants $`c_{uv}^w`$, and why furthermore a combinatorial description for the evaluations $`𝔖_w(\theta )`$ would provide a combinatorial rule describing the $`c_{uv}^w`$.
The action (15) of $`_n`$ on the quotient ring $`[x_1,\mathrm{},x_n]/I_n`$ is defined in such a way that every noncommutative monomial in the generators $`[ij]`$, $`i<j`$, when applied to a Schubert polynomial $`𝔖_v`$, gives either another Schubert polynomial or zero. It follows that, for any $`z_n^+`$, the polynomial $`z𝔖_v`$ is *Schubert-positive*, i.e., is a nonnegative linear combination of Schubert polynomials. In particular, if Conjecture 11 holds, then the polynomial $`𝔖_u(\theta )𝔖_v(x)`$ is Schubert-positive (here $`x`$ stands for $`x_1,\mathrm{},x_n`$). Since, according to Theorem 10,
(17)
$$𝔖_u(\theta )𝔖_v(x)=𝔖_u(x)𝔖_v(x),$$
we conclude that $`𝔖_u𝔖_v`$ is Schubert-positive, i.e., the structure constants $`c_{uv}^w`$ are nonnegative. Now suppose we have a combinatorial description for $`𝔖_u(\theta )`$. By (17),
(18)
$$c_{uv}^w=\text{coefficient of }w\text{ in }𝔖_u(\theta )v,$$
where the action of $`𝔖_u(\theta )`$ on $`vS_n`$ is the Bruhat representation action (14). Thus (18) would provide a combinatorial rule for $`c_{uv}^w`$.
The following conjecture, if proved, would provide an alternative description of the basis of Schubert cycles.
###### Conjecture 12.
The evaluations $`𝔖_w(\theta )`$ are the additive generators of the intersection of the cone $`_n^+`$ with the commutative subalgebra generated by the Dunkl elements.
## 10. Quantum deformation of the quadratic algebra
The *quantum deformation* $`_n^q`$ of the quadratic algebra $`_n`$ is defined by replacing the relation $`[ij]^2=0`$ in (12) by
(19)
$$[ij]^2=\{\begin{array}{cc}q_i\hfill & \text{if }j=i+1\text{ ;}\hfill \\ 0\hfill & \text{otherwise .}\hfill \end{array}$$
The “quantum Bruhat operators” $`[ij]`$, acting in the $`[q_1,\mathrm{},q_{n1}]`$-span of the symmetric group $`S_n`$ by
(20)
$$[ij]w=\{\begin{array}{cc}ws_{ij}\hfill & \text{if }\mathrm{}(ws_{ij})=\mathrm{}(w)+1\text{ ;}\hfill \\ q_{ij}ws_{ij}\hfill & \text{if }\mathrm{}(ws_{ij})=\mathrm{}(w)-\mathrm{}(s_{ij})\text{ ;}\hfill \\ 0\hfill & \text{otherwise ,}\hfill \end{array}$$
provide a representation of $`_n^q`$, which degenerates into the ordinary Bruhat representation in the classical limit. The operators (20) can be viewed as acting in the quotient space $`[q_1,\mathrm{},q_{n1}][x_1,\mathrm{},x_n]/I_n^q`$ by
(21)
$$[ij]𝔖_w^q=\{\begin{array}{cc}𝔖_{ws_{ij}}^q\hfill & \text{if }\mathrm{}(ws_{ij})=\mathrm{}(w)+1\text{ ;}\hfill \\ q_{ij}𝔖_{ws_{ij}}^q\hfill & \text{if }\mathrm{}(ws_{ij})=\mathrm{}(w)-\mathrm{}(s_{ij})\text{ ;}\hfill \\ 0\hfill & \text{otherwise .}\hfill \end{array}$$
The Dunkl elements $`\theta _j_n^q`$ are defined by the same formula (13) as before.
###### Theorem 13.
(Quantum Monk’s formula) In the representation (21) of $`_n^q`$, a Dunkl element $`\theta _j`$ acts as multiplication by $`x_j`$, for $`j=1,\mathrm{},n`$.
The following result is a corollary of Theorem 13.
###### Corollary 14.
As an element of the quotient ring $`P_n/I_n^q`$, a quantum Schubert polynomial $`𝔖_w^q`$ is uniquely defined by the condition that, in the quantum Bruhat representation (20), it acts on the identity permutation $`1`$ by $`w=𝔖_w^q(\theta _1,\mathrm{},\theta _n)(1)`$.
The quantum analogue of Theorem 9 stated below was conjectured in and proved by A. Postnikov in .
###### Theorem 15.
The commutative subring generated by the Dunkl elements in the quadratic algebra $`_n^q`$ is canonically isomorphic to the quantum cohomology ring of the flag manifold. The isomorphism is defined by $`\theta _1+\mathrm{}+\theta _j\sigma _{s_j}`$.
The following statement strengthens and refines Conjecture 11.
###### Conjecture 16.
For any $`wS_n`$, the evaluation $`𝔖_w^q(\theta _1,\mathrm{},\theta _n)`$ can be written as a linear combination of monomials in the generators $`[ij]`$, with nonnegative integer coefficients.
It is not even clear a priori that the evaluations $`𝔖_w^q(\theta )`$ can be expressed as linear combinations of monomials with coefficients not depending on the quantum parameters $`q_1,\mathrm{},q_{n1}`$.
A reformulation of (2) in the language of quantum Schubert polynomials gives
(22)
$$𝔖_u^q𝔖_v^q=\sum _{wS_n}\sum _dq^d\langle \sigma _u,\sigma _v,\sigma _w\rangle _d𝔖_{w_\mathrm{o}w}^q.$$
In view of Theorem 13, one obtains the following analogue of (18).
###### Corollary 17.
For $`u,v,wS_n`$ and $`d=(d_1,\mathrm{},d_{n1})_+^{n1}`$, we have
(23)
$$\langle \sigma _u,\sigma _v,\sigma _w\rangle _d=\text{coefficient of }q^dw_\mathrm{o}w\text{ in }𝔖_u^q(\theta )v,$$
where $`𝔖_u^q(\theta )`$ acts on $`v`$ according to the quantum Bruhat representation (20).
Assuming Conjecture 16 holds, one would like to have a combinatorial rule for a nonnegative expansion of $`𝔖_w^q(\theta )`$. Such a rule would immediately lead to a direct combinatorial description of the Gromov-Witten invariants $`\langle \sigma _u,\sigma _v,\sigma _w\rangle _d`$ of the flag manifold, given by (23).
## Supercooling of the high field vortex phase in single crystalline BSCCO
## Abstract
Time resolved magneto-optical images show hysteresis associated with the transition at the so-called “second magnetization peak” at $`B_{sp}`$ in single-crystalline Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8+δ</sub>. By rapid quenching of the high–field phase, it can be made to persist metastably in the sample down to fields that are nearly half $`B_{sp}`$.
Field-tuned pinning-induced order-disorder transitions in the vortex lattice in type II superconductors have received much attention, for the insight they bring to the statistical mechanics of elastic manifolds in a random potential, but also because the plastically deformed disordered state is that which is most frequently encountered in practice, and which determines the high critical currents in technological superconductors. The transition to the disordered vortex state is particularly pronounced in layered materials such as $`\mathrm{Bi}_2\mathrm{Sr}_2\mathrm{CaCu}_2\mathrm{O}_{8+\delta }`$ (BSCCO), in which the very small vortex line tension allows an optimal adaptation of the disordered vortex lattice to the local “pinscape”; hence, the transition is accompanied by a sharp and spectacular increase of the screening current $`j`$ related to pinning, leading to the so-called “second magnetization peak” (SMP). This very marked feature has led to speculations that the SMP represents a transition that is peculiar to layered superconductors. Here, we show that the transition at the SMP in BSCCO is accompanied by hysteresis and the persistence of the metastable high-field (disordered) vortex state at inductions $`B`$ much smaller than $`B_{sp}`$, at which the transition occurs when the system is near thermodynamic equilibrium.
The presence of the low–field (ordered) and high–field (disordered) vortex states in the sample is detected by direct imaging of the flux density distribution on the sample surface using the magneto-optical technique. The different vortex states can then be identified from the different critical current density which is related to the local induction gradient $`B/x`$. For the experiments, we choose an optimally doped BSCCO single crystal of size $`630\times 250\times 35`$ $`\mu `$m<sup>3</sup>, grown by the travelling solvent floating zone technique. The sample was covered by a magnetic garnet indicator film with in-plane anisotropy and cooled using a continuous flow cryostat. Magnetic fields $`H_a`$ of up to 600 G could be applied using a split-coil electromagnet surrounding the cryostat. The magnet power supply and simultaneous data acquisition were controlled using a two-channel synthesized wave-generator.
Several types of experiment were performed: (i) rapid field sweeps (with different periods $`<10`$ s) with synchronized acquisition of magneto-optical images at fixed phase (ii) relaxation of the flux distribution after a rapid drop of $`H_a`$ from a value above or close to $`B_{sp}`$. An example of the latter is depicted in Fig. 1, which shows the flux distribution 0.24 s after the target field of 56 G was reached following a quench from $`H_a=600`$ G (the larger intensity corresponds to the greater flux density). Figure 1 shows a bright area of nearly constant $`B`$ in the crystal center, separated from the peripheral low $`B`$, low $`B/x`$ area by a belt of high flux density gradient. Analysis of the image shows that this high gradient is equal in magnitude to that measured during a slow field ramp for $`B>B_{sp}`$, even though here the local induction during the relaxation is (up to 160 G) smaller than $`B_{sp}`$. The same is found when the field is continuously swept at a sufficiently large rate. Fig. 2 shows profiles measured during the first and second quarter cycles after the application of a triangular waveform AC field of amplitude 600 G and frequency 0.1 Hz. The profiles
are characterized by a jump in $`B`$ at the crystal edge due to the presence of the edge barrier current , followed by the gradual decrease of $`B`$ in the interior, resulting from the bulk pinning current. From the break in the profiles and the appearance of a second Bean–like flux front above $`B_{sp}`$ in panel (a), we determine $`B_{sp}=360`$ G for $`T=26`$ K. Panel (b) shows that when $`H_a`$ is rapidly decreased, the relatively large gradient present at $`B>B_{sp}`$ is maintained around a gradually shrinking region in the crystal center, while, again, the local induction is smaller than $`B_{sp}`$.
We interpret these observations as being the result of the persistence of the metastable high-field vortex state at fields below the equilibrium phase boundary. While the appearance of the higher $`\partial B/\partial x`$ at a constant induction $`B=B_{sp}`$ during upward field ramps (Fig. 2(a)), independent of sweep rate, indicates that the order-disorder transition at the SMP is a thermodynamic phase transition, the observation of a persistent metastable state suggests that it is first order. The consequences are multiple. First, the phenomenology of the SMP in BSCCO is entirely equivalent to that in moderately anisotropic or even isotropic superconductors, i.e., one is dealing with a similar transition in each case. Second, a first order transition at the SMP implies that there is no critical point in the phase diagram, as previously suggested. Finally, the presence of the quenched disordered vortex state at $`B<B_{sp}`$ will critically affect the outcome of dynamical creep and transport measurements at fields near the transition.
## Energy spectra, wavefunctions and quantum diffusion for quasiperiodic systems
## I Introduction
Since the pioneering discoveries of quasicrystals with icosahedral, dodecagonal, decagonal, and octagonal symmetry, electronic transport phenomena arguably belong to the most celebrated and intriguing physical properties of these intermetallic alloys. For instance, the electric conductivity of icosahedral quasicrystals decreases strongly with decreasing temperature and with improving structural quality of the sample, and anomalous transport behaviour is also observed in other quantities such as thermopower or magnetoconductance. Stimulated by the experimental results, a lot of effort has been spent towards a better theoretical understanding of the transport phenomena in quasicrystalline materials. This is also of interest from the theoretical or mathematical point of view, because quasicrystals as ordered aperiodic structures are intermediate between periodically ordered crystals and short-range ordered amorphous solids. In particular, the anomalous diffusion of wave packets in quasiperiodic systems has attracted wide interest.
Multifractal eigenstates — neither extended over the system, nor exponentially localized — exist at the metal-insulator transition of the Anderson model of localization. In tight-binding models of quasicrystals, this kind of eigenstates has also been revealed. Generally, the energy spectra of one-dimensional (1D) quasicrystals are singular continuous. However, in higher-dimensional cases, the energy spectra can be band-like with finite measure, fractal-like with zero band width or a mixture of partly band-like and partly fractal-like character.
The diffusion properties of quasicrystals are associated with the complex eigenstates and energy spectra stated above. To describe the diffusion of a wave packet initially localized at some site $`n_0`$, one usually discusses the temporal autocorrelation function
$$C(t)=\frac{1}{t}\underset{0}{\overset{t}{}}|\mathrm{\Psi }_{n_0}(t^{})|^2𝑑t^{}$$
(1)
or the mean square displacement
$$d(t)=\left(\underset{n}{}|𝐫_n𝐫_{n_0}|^2|\mathrm{\Psi }_n(t)|^2\right)^{1/2}$$
(2)
where $`\mathrm{\Psi }_n(t)`$ is the amplitude of the wavefunction at time $`t`$ at the $`n`$th site which is located at the position $`𝐫_n`$ in space. Apparently, $`C(t)`$ is the time-averaged probability of a wave packet staying at the initial site at time $`t`$, and $`d(t)`$ determines the spreading of the width of a wave packet.
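For a finite system, both quantities can be evaluated directly from the eigendecomposition of the Hamiltonian. The following sketch (Python/numpy; not from the paper, and only meant as an illustration of the definitions (1) and (2)) returns the return probability $`|\mathrm{\Psi }_{n_0}(t)|^2`$, whose running time average gives $`C(t)`$, together with $`d(t)`$.

```python
# Sketch: return probability and spreading width from exact diagonalization.
import numpy as np

def wavepacket_dynamics(H, positions, n0, times):
    E, V = np.linalg.eigh(H)                           # H = V diag(E) V^T
    a = V[n0, :]                                       # overlaps of the initial site with the eigenstates
    psi_t = V @ (np.exp(-1j * np.outer(E, times)) * a[:, None])
    prob = np.abs(psi_t) ** 2                          # |Psi_n(t)|^2
    p0 = prob[n0, :]                                   # time-average this to obtain C(t)
    d = np.sqrt(((positions - positions[n0]) ** 2) @ prob)
    return p0, d
```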
Generally, one finds $`C(t)\sim t^{-\delta },d(t)\sim t^\beta `$ with $`0<\delta <1`$ and $`0<\beta <1`$ for 1D quasiperiodic systems. For higher-dimensional cases, no general results are available. Zhong et al. observed a transition of $`C(t)`$ with the increase of the quasiperiodic modulation strength in simple higher-dimensional Fibonacci lattices. For small patches of the octagonal tiling, Passaro et al. found $`d(t)\sim t^\beta `$ with $`0<\beta <1`$ even for the case of a band-like spectrum. However, one of us obtained $`C(t)\sim t^{-1}`$ for this case after analyzing the long-time behaviour. In fact, it is quite difficult to derive the exact long-time behaviour of $`C(t)`$ and $`d(t)`$ from the investigation of rather small systems. Therefore, a study of a large higher-dimensional quasiperiodic system will be significant. In this paper, we will mainly discuss the diffusion properties on a 2D quasiperiodic tiling related to the octagonal quasicrystals. The tiling is based on the octonacci chain and thus permits us to study large systems.
Recent investigations show that the diffusion properties are connected with the multifractality of eigenstates and energy spectra. It can be rigorously proven that the exponent $`\delta `$ ruling the decay of the autocorrelation function $`C(t)`$ equals the correlation dimension $`D_2`$ of the local spectral measure associated with the initial site. In 1D quasiperiodic systems, Guarneri analytically deduced that $`\beta \ge D_1`$, where $`D_1`$ is the information dimension of the spectral measure. More recently, Ketzmerick et al. argued that $`\beta `$ is also related to the multifractal properties of the eigenstates. We shall address the question whether these relations exist in different quasiperiodic systems, especially in higher-dimensional cases.
This paper is organized as follows. In the next section, we describe the construction of the labyrinth tiling and its properties that are relevant for our analysis. Afterwards, in Sec. III, we consider a tight-binding model on the labyrinth tiling and express the eigenstates and eigenvalues in terms of eigenstates and eigenvalues of a tight-binding Hamiltonian on the octonacci chain. In Sec. IV we show the energy spectra and multifractal eigenstates for both these systems. Sec. V describes the diffusion properties of the octonacci chain. The diffusion properties of the labyrinth tiling will be emphasized in Sec. VI. In Sec. VII we discuss the fractal dimensions of eigenstates and eigenspectra and their relation to the diffusion properties. Finally, we conclude in Sec. VIII.
## II The labyrinth tiling
The labyrinth tiling can be considered as a subset of the octagonal quasiperiodic tiling, and vice versa. One can build it directly from the octonacci chain. In order to construct the labyrinth tiling, we introduce the octonacci sequence which can be produced by iterating the inflation rule
$$\varrho :\begin{array}{ccc}S\hfill & & L\hfill \\ L\hfill & & LSL\hfill \end{array}$$
(3)
on the initial word $`w_0=S`$. The number of letters $`g_m`$ in the $`m`$th iterate $`w_m=\varrho ^m(w_0)`$ satisfies the recursion
$$g_m=2g_{m1}+g_{m2},g_0=g_1=1.$$
(4)
The numbers of letters $`L`$ and $`S`$ in $`w_m`$ are given by $`f_m`$ and $`f_{m1}`$, respectively, which fulfill the same recursion relation with a different initial condition
$$f_m=2f_{m1}+f_{m2},f_0=0,f_1=1,$$
(5)
such that $`g_m=f_m+f_{m1}`$. Their ratio in the limit sequence $`w_{\mathrm{}}`$
$$\underset{m\mathrm{}}{lim}\frac{f_m}{f_{m1}}=\underset{m\mathrm{}}{lim}\frac{g_m}{g_{m1}}=\lambda $$
(6)
is given by the silver mean $`\lambda =1+\sqrt{2}`$, which is a root of the quadratic equation $`\lambda ^2=2\lambda +1`$. As can be seen from Eq. (4), $`g_m`$ is odd for all $`m`$.
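For illustration, the inflation rule and the recursion (4) can be checked with a few lines of code (a sketch, not from the paper):

```python
# Sketch: octonacci words from the inflation S -> L, L -> LSL.
def octonacci_word(m):
    w = 'S'
    for _ in range(m):
        w = ''.join('LSL' if c == 'L' else 'L' for c in w)
    return w

g = [len(octonacci_word(m)) for m in range(8)]
print(g)                                                           # [1, 1, 3, 7, 17, 41, 99, 239]
print(all(g[m] == 2 * g[m - 1] + g[m - 2] for m in range(2, 8)))   # True
print(g[7] / g[6])                                                 # approaches the silver mean 2.414...
```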
Associating with the letters $`S`$ and $`L`$ intervals of length $`1`$ and $`\lambda `$, respectively, one obtains a linear chain $`𝒞_m`$ of $`N_m=g_m+1`$ sites, which is known as octonacci or silver mean chain. We note that all words $`w_m`$ obtained from the substitution rule (3) are palindromes, thus the resulting chains are symmetric under reflection.
The labyrinth tiling can be constructed from the Euclidean product $`𝒞_m\times 𝒞_m`$ of two such chains. This product is a rectangular grid, thus its vertices can be classified into even and odd vertices if they can be reached from the origin by an even or odd number of steps along the bonds, respectively. This is completely analogous to the even and the odd sublattice of the square lattice. Connecting the even vertices by diagonal bonds, we obtain a finite approximant $`_m`$ of the labyrinth tiling $``$. The odd vertices, when connected by diagonal bonds, form another labyrinth tiling $`_m^{}`$ that is dual to $`_m`$. We note that, due to the palindromicity of $`w_m`$, $`_m`$ and $`_m^{}`$ just differ by a $`90`$ degree rotation. The finite labyrinth tiling $`_m`$ consists of $`N_m^2/2`$ atoms. An example is shown in Fig. 1. By construction, the labyrinth tiling is symmetric with respect to reflection at the main diagonal. Taking this diagonal as the $`x`$ axis and the direction orthogonal to it as the $`y`$ axis, the coordinates of the vertices of the labyrinth tiling, labeled by $`k,l`$, can be written as
$`x_{k,l}`$ $`=`$ $`u_k+u_l`$ (8)
$`y_{k,l}`$ $`=`$ $`u_k-u_l`$ (9)
where the coordinates with even values of $`k+l`$ belong to $``$, those with odd values of $`k+l`$ to $`^{}`$. Here,
$$u_k=k/\sqrt{2}+\left[k/\sqrt{2}\right]$$
(10)
where $`\left[x\right]`$ denotes the integer closest to $`x`$. It is easy to see that the sequence of long and short lengths given by $`u_k`$ again follows the octonacci sequence, but now the two intervals have lengths $`(\lambda \pm 1)/2`$ which again have ratio $`(\lambda +1)/(\lambda -1)=\lambda `$. Thus, the diagonal of the labyrinth $`x_{k,k}=2u_k`$ is just a $`\sqrt{2}`$-scaled version of the original octonacci sequence.
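A sketch of how the vertex coordinates may be generated from Eqs. (8)-(10) is given below (Python/numpy, not from the paper; it assumes the indices $`k,l`$ run from $`0`$ to $`N_m-1`$).

```python
# Sketch: vertices of a finite labyrinth tiling from Eqs. (8)-(10).
import numpy as np

def u(k):
    return k / np.sqrt(2.0) + np.rint(k / np.sqrt(2.0))   # [x] = nearest integer

def labyrinth_vertices(N):
    """Vertices (x, y) with k + l even, 0 <= k, l < N."""
    return np.array([(u(k) + u(l), u(k) - u(l))
                     for k in range(N) for l in range(N) if (k + l) % 2 == 0])

print(labyrinth_vertices(8).shape)   # (32, 2), i.e. N**2/2 vertices
```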
## III Tight-binding model
The energy spectra for tight-binding Hamiltonians on the labyrinth tiling were investigated by Sire. For properly chosen Hamiltonians, the analysis reduces to the one-dimensional case, and the energy spectrum can be derived directly from those of the corresponding Hamiltonian on the octonacci chain. However, the properties of eigenstates, which also factorize into the product of two eigenstates of the octonacci chain, were not discussed in Ref. . Here, we follow the same route to study the eigenvalues and eigenstates.
Consider two identical octonacci chains in the framework of a tight-binding model with zero on-site potentials
$`H^{(1)}\psi _k^{(1,i)}`$ $`=`$ $`t_k\psi _{k-1}^{(1,i)}+t_{k+1}\psi _{k+1}^{(1,i)}=E^{(1,i)}\psi _k^{(1,i)}`$ (12)
$`H^{(2)}\psi _l^{(2,j)}`$ $`=`$ $`t_l\psi _{l-1}^{(2,j)}+t_{l+1}\psi _{l+1}^{(2,j)}=E^{(2,j)}\psi _l^{(2,j)}`$ (13)
where superscripts $`(1)`$ and $`(2)`$ label the two chains and the indices $`i`$ and $`j`$ enumerate the eigenfunctions $`\psi `$ and eigenvalues $`E`$ of the two octonacci chains. Throughout the paper, we employ free boundary conditions, which formally corresponds to setting $`\psi _0=\psi _{N_m+1}=0`$. The hopping parameters $`t_k`$ and $`t_l`$ take values according to the octonacci sequence. We associate $`t_{k,l}=1`$ to a long bond of length $`\lambda `$ and $`t_{k,l}=v`$ to a short bond of length $`1`$, respectively.
The eigenvalues of the octonacci chain are symmetric with respect to $`E=0`$; if $`\psi `$ is an eigenstate of $`H`$ with eigenvalue $`E`$, then the state $`\stackrel{~}{\psi }`$ with amplitudes
$$\stackrel{~}{\psi }_k=(1)^k\psi _k$$
(14)
is again an eigenstate, but has an eigenvalue $`-E`$. For $`E=0`$, the eigenvalue equation reduces to the recursion
$$\psi _{k+1}=\frac{t_k}{t_{k+1}}\psi _{k1}$$
(15)
which always yields precisely two linearly independent solutions $`\psi ^\pm `$ which can be chosen to vanish on either even or odd sites. These have the form
$$\psi _{2r1}^{}=(1)^{r1}\psi _1^{}\underset{s=2}{\overset{r}{}}\frac{t_{2s2}}{t_{2s1}},\psi _{2r}^{}=0,$$
(16)
and
$$\psi _{2r1}^+=0,\psi _{2r}^+=(1)^{r1}\psi _2^+\underset{s=2}{\overset{r}{}}\frac{t_{2s1}}{t_{2s}},$$
(17)
where $`\psi _1^{-}\ne 0`$ and $`\psi _2^+\ne 0`$ are determined, up to phases, by normalization. We note that one has to be careful if one employs periodic boundary conditions because these, for an odd length of the chain, couple the even and odd sublattices of the chain. Thus, while there are again two states (16) and (17) for a periodic chain of even length, only a single state at $`E=0`$ exists for an odd length of the chain.
Multiplying the two Eqs. (12) and (13), we obtain
$`H^{(1,2)}\mathrm{\Phi }_{k,l}^{(i,j)}`$ $`=`$ $`t_kt_l\mathrm{\Phi }_{k-1,l-1}^{(i,j)}+t_kt_{l+1}\mathrm{\Phi }_{k-1,l+1}^{(i,j)}+t_{k+1}t_l\mathrm{\Phi }_{k+1,l-1}^{(i,j)}+t_{k+1}t_{l+1}\mathrm{\Phi }_{k+1,l+1}^{(i,j)}`$ (18)
$`=`$ $`E^{(1,i)}E^{(2,j)}\mathrm{\Phi }_{k,l}^{(i,j)}`$ (19)
where we defined
$$\mathrm{\Phi }_{k,l}^{(i,j)}=\psi _k^{(1,i)}\psi _l^{(2,j)}$$
(20)
as an eigenfunction on the product of the two chains with eigenvalue $`E^{(1,i)}E^{(2,j)}`$. In Eq. (19), only wave function amplitudes at positions $`(k\pm 1,l\pm 1)`$ contribute, thus the Hamiltonian $`H^{(1,2)}`$ corresponds to hopping along the diagonals of the product grid $`𝒞_m\times 𝒞_m`$. The corresponding hopping parameters are the product of two hopping parameters in the octonacci chain and thus take values $`1`$, $`v`$, and $`v^2`$ for diagonals of length $`\lambda +1`$, $`\sqrt{2\lambda +2}`$, and $`\lambda 1`$, respectively.
Thus the system in Eq. (19) naturally separates into two independent sets of equations with $`k+l`$ even or $`k+l`$ odd, respectively. In this paper, we restrict our investigation to the case with $`k+l`$ even as the other case is completely analogous. Thus, $`H^{(1,2)}`$ can be interpreted as a tight-binding Hamiltonian with zero on-site potential defined on the labyrinth tiling $``$. Clearly, the eigenvalues for the labyrinth are just products of the eigenvalues for the octonacci chain, and all such products appear as eigenvalues because the spectra of the two dual labyrinth tilings $`_m`$ and $`_m^{}`$ are identical. For the corresponding eigenfunctions on $``$, we have to construct linear combinations of the product eigenfunctions $`\mathrm{\Phi }_{i,j}`$ (20) which vanish on the vertices of the dual tiling $`_m^{}`$. This can be done as follows.
Suppose $`\psi ^{(1,i)}`$ and $`\psi ^{(2,j)}`$ are normalized eigenfunctions of the octonacci chain with eigenvalues $`E^{(1,i)}`$ and $`E^{(2,j)}`$, respectively. Then both $`\mathrm{\Phi }^{(i,j)}=\psi ^{(1,i)}\psi ^{(2,j)}`$ and $`\stackrel{~}{\mathrm{\Phi }}^{(i,j)}=\stackrel{~}{\psi }^{(1,i)}\stackrel{~}{\psi }^{(2,j)}`$ (14) are eigenfunctions of $`H^{(1,2)}`$ with the same eigenvalue $`E^{(1,i)}E^{(2,j)}`$, where we assume $`E^{(1,i)}0`$ and $`E^{(2,j)}0`$. But from Eq. (14) we have
$$\stackrel{~}{\mathrm{\Phi }}_{k,l}^{(i,j)}=(1)^{k+l}\mathrm{\Phi }_{k,l}^{(i,j)}$$
(21)
and thus the linear combinations
$$\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{\pm }=\frac{1}{\sqrt{2}}\left(\mathrm{\Phi }^{(i,j)}\pm \stackrel{~}{\mathrm{\Phi }}^{(i,j)}\right)$$
(22)
are normalized eigenfunctions that vanish for odd or for even values of $`k+l`$, and thus on $`^{}`$ or $``$, respectively. If one or both eigenvalues $`E^{(1,i)}`$ and $`E^{(2,j)}`$ are zero, we can make use of the previously discussed eigenfunctions $`\psi ^+`$ (17) and $`\psi ^-`$ (16) to construct the desired wavefunctions. If one of the eigenvalues vanishes, say, without loss of generality, $`E^{(1,i)}=0`$ and $`E^{(2,j)}\ne 0`$, the appropriate four linear combinations are
$$\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{\pm \pm }=\frac{1}{\sqrt{2}}\psi _{}^{(1)}{}_{}{}^{\pm }\left(\psi ^{(2,j)}\pm \stackrel{~}{\psi }^{(2,j)}\right)$$
(23)
where $`\psi _{}^{(1)}{}_{}{}^{\pm }`$ is the wavefunction of Eqs. (17) and (16) on the first chain, and we also used the state $`\stackrel{~}{\psi }^{(2,j)}`$ which has an energy $`-E^{(2,j)}`$.
$$\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{\pm \pm }=\psi _{}^{(1)}{}_{}{}^{\pm }\psi _{}^{(2)}{}_{}{}^{\pm }$$
(24)
where again $`\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{++}`$ and $`\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{--}`$ are supported on $``$, and the remaining two product states $`\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{+-}`$ and $`\mathrm{\Psi }_{}^{(i,j)}{}_{}{}^{-+}`$ on $`^{}`$. In particular, this argument proves that $`E=0`$ is a $`(2N_m-2)`$-fold degenerate eigenvalue for the labyrinth $`_m`$. Thus we find, as for simple tight-binding Hamiltonians on the Penrose or the octagonal Ammann-Beenker tiling, a large degeneracy of states in the “band” center at $`E=0`$. However, in contrast to these well-known examples where the degeneracy stems from certain “confined” states that occur as a consequence of the local topology of the tilings, the spectral measure carried by the states at $`E=0`$ vanishes for the labyrinth as $`N_m\to \mathrm{\infty }`$, thus it is not a finite fraction of the eigenstates that contributes to $`E=0`$ in this case.
In practice, having the complete knowledge of the eigenstates for the labyrinth tiling at our disposal, we do not need to care too much about the precise linear combinations of states derived above. Since the eigenvalues $`E_i`$, $`i=1,\mathrm{\ldots },N`$ of the octonacci chain are symmetric about zero, one can obtain the set of eigenvalues of the labyrinth tiling simply as
$$\{E_iE_j\mid 1\le i\le \frac{N}{2},\;j\le i\}\cup \{E_iE_j\mid \frac{N}{2}<i\le N,\;j\le i-1\}$$
(25)
where we assume that the eigenvalues of the octonacci chain are ordered as $`E_i\ge E_j`$ for $`i>j`$. The corresponding eigenvectors are most easily constructed by just restricting the products of eigenvectors to the sites of the labyrinth $`\mathcal{L}`$, and re-normalizing the resulting eigenstate. Eq. (14) guarantees that this procedure yields the correct results, because the states $`\psi `$ and $`\stackrel{~}{\psi }`$ just differ by an alternating sign.
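The restriction-and-renormalization recipe is easy to verify numerically for a small system. The sketch below (in Python) builds a short octonacci chain from the substitution rule, forms the product Hamiltonian as a Kronecker product, which is how we read the product structure of $`H^{(1,2)}`$ quoted above, and checks that a product eigenstate restricted to the sites with $`k+l`$ even is still an eigenstate with eigenvalue $`E_iE_j`$. The assignment of the hopping strengths $`1`$ and $`v`$ to the two letters, the free boundary conditions and the helper names are illustrative assumptions, not the conventions fixed earlier in the paper.

```python
import numpy as np

def octonacci_word(min_len):
    # Substitution S -> L, L -> LSL (lengths 1, 3, 7, 17, 41, ...); which letter
    # carries the weak hopping v is an assumption about the convention.
    w = "L"
    while len(w) < min_len:
        w = "".join("LSL" if c == "L" else "L" for c in w)
    return w

def octonacci_hamiltonian(n_sites, v):
    # Tight-binding chain with zero on-site potential and free ends.
    word = octonacci_word(n_sites - 1)[: n_sites - 1]
    t = np.array([1.0 if c == "L" else v for c in word])
    return np.diag(t, 1) + np.diag(t, -1)

n, v = 17, 0.5
H = octonacci_hamiltonian(n, v)
E, psi = np.linalg.eigh(H)                     # chain eigenvalues and eigenvectors
H12 = np.kron(H, H)                            # product Hamiltonian on the square grid
k = np.arange(n)
on_labyrinth = ((k[:, None] + k[None, :]) % 2 == 0).ravel()   # sites with k+l even

i, j = 3, 11                                   # any pair with a nonzero restriction
chi = np.kron(psi[:, i], psi[:, j]) * on_labyrinth   # restrict the product state
assert np.linalg.norm(chi) > 1e-8
chi /= np.linalg.norm(chi)                     # re-normalize, as described above
print(np.allclose(H12 @ chi, E[i] * E[j] * chi))     # -> True
```

Because the restricted state equals $`(\mathrm{\Phi }^{(i,j)}+\stackrel{~}{\mathrm{\Phi }}^{(i,j)})/2`$ up to normalization, the check succeeds for any pair $`(i,j)`$ whose restriction does not vanish.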
## IV Energy spectra and wavefunctions
Following the results of Sec. III, one can easily calculate the density of states (DOS) and the integrated density of states (IDOS). For comparison, we show the DOS and the IDOS for the octonacci chain and the labyrinth tiling in Fig. 2. For the octonacci chain, the IDOS is a devil’s staircase even for $`v`$ close to $`1`$ and the DOS is singular continuous with zero Lebesgue measure. By more detailed analysis, one finds a self-similar energy spectrum for the octonacci chain with a hierarchical gap structure as described by the gap labelling theorem. In contrast, we observe a smooth IDOS without visible gaps as $`v`$ approaches $`1`$ in the labyrinth tiling. A more detailed analysis of the IDOS and the energy spectra shows that in the regime $`0.6<v<1.0`$ the energy spectrum contains no gaps or only a finite number of gaps; for $`v<0.6`$ the spectrum is fractal-like and the IDOS is similar to a devil’s staircase. Sire found that the spectrum is singular continuous with finite Lebesgue measure for $`v\ge 0.4`$, which may indicate that the spectrum is a mixture of band-like and fractal-like parts in the regime $`0.4\le v<0.6`$. In Fig. 2(b) one can see a peak at the center of the spectrum which is due to the degenerate states at $`E=0`$. But it differs from the localized states observed in the Penrose tiling in the sense that no jump at $`E=0`$ is seen in the IDOS, in agreement with the results of the previous section. For varying parameter $`v`$, the DOS of the labyrinth tiling shows three characteristic features: a maximum around the center, distinct shoulders located between the spectral center and edge, and a tail at the band edge, which is similar to the behaviour observed for a tight-binding model on the icosahedral Ammann-Kramer tiling.
In order to characterize the eigenstates, we employ a multifractal analysis, which is based on the standard box-counting procedure. In our numerical calculations, we determine the singularity strength $`\alpha (q)`$ and the corresponding fractal dimension $`f(q)`$ by a linear regression procedure, but prior to this we need to check the linearity of $`\sum _i\mu _i\mathrm{ln}\mu _i`$ versus $`\mathrm{ln}\epsilon `$, where $`\mu _i(q,\epsilon )`$ denotes the normalized $`q`$th moment of the box probability for boxes of linear size $`\epsilon L`$. A homogeneously extended wave function corresponds to $`\alpha (q)=f(q)=d`$, where $`d`$ denotes the spatial dimension. For critical eigenstates, the fractal dimension $`f`$ is a smooth convex function of $`\alpha `$, and $`\alpha `$ is restricted to a finite interval. Moreover, the generalized dimensions of the eigenstate $`\psi `$ are given by $`D_q^\psi =\left[f(q)-q\alpha (q)\right]/(1-q)`$ for $`q\ne 1`$ and $`D_1^\psi =f(1)=\alpha (1)`$.
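A minimal version of this box-counting analysis is sketched below; it follows the moment method described in the text, with the box probabilities taken from $`|\psi |^2`$ on a 1D chain. The function and variable names are ours, and the self-check uses a homogeneously extended state, for which $`\alpha (q)=f(q)=1`$ is known exactly.

```python
import numpy as np

def singularity_spectrum(prob, box_sizes, q_values):
    """Box-counting estimate of alpha(q) and f(q) for a normalized measure prob
    (here |psi|^2 on a chain); a minimal sketch, not the production analysis."""
    eps, A, F = [], [], []
    n = len(prob)
    for b in box_sizes:
        P = np.add.reduceat(prob, np.arange(0, n, b))   # box probabilities
        P = P[P > 0]
        eps.append(np.log(b / n))                       # log of relative box size
        mu = P[None, :] ** q_values[:, None]
        mu /= mu.sum(axis=1, keepdims=True)             # normalized q-th moments
        A.append((mu * np.log(P[None, :])).sum(axis=1)) # slopes give alpha(q)
        F.append((mu * np.log(mu)).sum(axis=1))         # slopes give f(q)
    eps, A, F = np.array(eps), np.array(A), np.array(F)
    alpha = np.polyfit(eps, A, 1)[0]                    # linear regression in ln(eps)
    f = np.polyfit(eps, F, 1)[0]
    return alpha, f

# Self-check on a homogeneously extended state: alpha(q) = f(q) = 1 for all q.
N = 4096
prob = np.full(N, 1.0 / N)
q = np.array([-2.0, 0.0, 1.0, 2.0, 4.0])
print(singularity_spectrum(prob, box_sizes=[4, 8, 16, 32, 64], q_values=q))
```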
The singularity spectra $`f(\alpha )`$ of eigenstates for both the octonacci chain and the labyrinth tiling show the typical shape expected for multifractal states, thus we refrain from showing these here. For the octonacci chain, the eigenstates in the “band” center are more extended than those at the “band” edge. In this case, the curves $`f(\alpha )`$ become fairly narrow as $`v`$ approaches $`1`$. Generally, the eigenstates show stronger multifractal characteristics with decreasing parameter $`v`$. In contrast to the behaviour observed for the Penrose tiling, for the labyrinth tiling we do not find that the multifractal behaviour of eigenstates becomes significantly stronger when moving from energies at the edge towards the center of the “band”. We also calculated the scaling behaviour of the inverse participation number
$$P^{-1}(E,V)=\underset{𝐫}{\sum }|\psi (𝐫)|^4$$
(26)
with respect to the size $`V=L^d`$ of the system, i.e.,
$$P^{-1}(E,V)\sim V^{-\gamma (E)}$$
(27)
for large $`V`$. A fractal eigenstate is characterized by $`0<\gamma <1`$, whereas $`\gamma =0`$ corresponds to a localized state, and $`\gamma =1`$ to an extended state. In general, the scaling exponent $`\gamma (E)`$ depends on the energy. Numerically, one analyzes the scaling behaviour of $`P^1(E,V)`$ at an energy $`E`$ by averaging over the eigenstates within a small energy interval $`E\pm \mathrm{\Delta }E/2`$. The result for eigenvectors from the center and at the lower edge of the spectrum is shown in Fig. 3 which corroborates the multifractal nature of the eigenstates in both systems. The exponent $`\gamma `$, given by the slope, decreases, presumably continuously, from $`\gamma =1`$ for the periodic case $`v=1`$ to $`\gamma =0`$ for $`v=0`$.
## V Quantum diffusion for the octonacci chain
In this short section, we briefly present our numerical results of the autocorrelation function $`C(t)`$ and the mean square displacement $`d(t)`$ for the octonacci chain. Further discussion and comparison with the results for the labyrinth will be given below.
Fig. 4 shows the autocorrelation function $`C(t)`$ of the octonacci chain. The initial site is located at the center of the system. The long-time behaviour of $`C(t)`$ follows $`C(t)\sim t^{-\delta }`$ with $`0<\delta <1`$ for different $`v`$. For small $`v`$, $`C(t)`$ displays strong oscillatory behaviour, which may result from level fluctuations. The result for $`d(t)`$ is displayed in Fig. 5. Evidently, $`d(t)\sim t^\beta `$ and $`\beta `$ increases with increasing $`v`$, limited by $`\beta <1`$. For a given modulation parameter $`v`$, we observe the relation $`\beta >\delta `$ between the two exponents. Similar results have been obtained for 1D Fibonacci chains and at the mobility edge of the Harper model. Therefore, in accordance with the singular continuous energy spectra and the multifractal eigenstates, the diffusion is usually anomalous in 1D quasiperiodic systems.
## VI Quantum diffusion for the labyrinth tiling
We now switch to the more interesting case of the labyrinth tiling. In Fig. 6, we show the behaviour of $`C(t)`$ for the labyrinth tiling. The number of sites in our system is $`N^2/2=\mathrm{19\hspace{0.17em}602}^2/2=\mathrm{192\hspace{0.17em}119\hspace{0.17em}202}`$, which is much larger than other 2D quasiperiodic systems discussed previously such as, for instance, Fibonacci lattices and the octagonal tiling. Therefore, we can utilize this system to study the long-time behaviour of $`C(t)`$ more accurately than before. Apparently, Fig. 6 again exhibits a power law behaviour $`C(t)\sim t^{-\delta }`$. By a more detailed analysis, we surprisingly find a transition point at $`v_c\approx 0.6`$. For $`v<v_c`$ the slope of the curves decreases with decreasing $`v`$. In the regime $`v>v_c`$, the behaviour of $`C(t)`$ is the same as for a periodic system, i.e., $`C(t)\sim t^{-1}`$. When compared to the results of Sec. IV, we see that this regime corresponds to the region where one finds band-like energy spectra. Since $`\delta `$ equals the correlation dimension $`D_2`$ of the energy spectral measure, $`\delta =1`$ is reasonable for the case of band-like spectra. Similar to the 1D case, one still has $`0<\delta <1`$ in the regime $`v<v_c`$ with fractal-like or mixed spectra for the labyrinth tiling. We expect that this is a general result for higher-dimensional quasiperiodic systems. Furthermore, we find that the behaviour of $`C(t)`$ is independent of the initial site, which can be observed from the example shown in Fig. 6. Of course, as our analysis is based on numerical data for a finite system, we cannot possibly prove the existence of a true transition point $`v_c`$, because we cannot rule out a rapid, but continuous change in $`\delta `$ around $`v\approx 0.6`$.
The calculation of the mean square displacement $`d(t)`$ is numerically more expensive, thus we restrict ourselves to a smaller system of $`N^2/2=578^2/2=\mathrm{167\hspace{0.17em}042}`$ sites. Nevertheless, this is still larger than the 2D octagonal quasicrystals studied previously. In Fig. 7, we show that the long-time behaviour is described by a power law $`d(t)\sim t^\beta `$. In contrast to $`C(t)`$, we do not find a transition point for $`d(t)`$ as the parameter $`v`$ is varied. As for the octagonal tiling and for the octonacci chain, $`0<\beta <1`$ for the labyrinth tiling. Therefore, a band spectrum does not imply ballistic diffusion in quasicrystals. It can be argued that the exponent $`\beta `$ is associated with the correlation dimension $`D_2^\psi `$ of the eigenstates. In 1D quasiperiodic systems, or at the metal-insulator transition in the Anderson model of localization, the eigenstates are multifractal and $`0<\beta <1`$. In accordance, the multifractal eigenstates in 2D quasicrystals may be expected to lead to anomalous diffusion with $`0<\beta <1`$. Possibly, ballistic diffusion can occur in 3D quasicrystals because their wavefunctions are more extended.
So far, we assumed that the initial wave packet is a $`\delta `$-function, thus we start with an electron that is localized at a particular site $`n_0=(k_0,l_0)`$ and follow the spreading of its wave function $`\mathrm{\Psi }^{\{n_0\}}`$ with time. This means that, in general, all eigenstates contribute to the time evolution because the expansion in terms of the orthonormal basis of eigenstates $`\mathrm{\Psi }^{(i,j)}`$ is
$$\mathrm{\Psi }_{k,l}^{\{n_0\}}=\delta _{k,k_0}\delta _{l,l_0}=\underset{i,j}{\sum }\mathrm{\Psi }_{k_0,l_0}^{(i,j)}\mathrm{\Psi }_{k,l}^{(i,j)}$$
(28)
and thus the entire energy spectrum is probed. For convenience, we dropped the superscripts $`\pm `$ on the wavefunctions of Eqs. (22)–(24), assuming that the proper linear combinations are used that are supported on the labyrinth $`\mathcal{L}_m`$. Note that we do not need a complex conjugation in Eq. (28) because the Hamiltonian is a real symmetric matrix and we therefore can choose eigenvectors that form a real orthogonal matrix. In order to check for an energy dependence of the diffusion, we now consider different initial wave packets $`\mathrm{\Psi }^{\{n_0,[E-\frac{\mathrm{\Delta }E}{2},E+\frac{\mathrm{\Delta }E}{2}]\}}`$ which have a finite width and are constructed as linear combinations of eigenstates from a certain energy window $`[E-\mathrm{\Delta }E/2,E+\mathrm{\Delta }E/2]`$. The new normalized states can be written as
$$\mathrm{\Psi }_{k,l}^{\{n_0,[E-\frac{\mathrm{\Delta }E}{2},E+\frac{\mathrm{\Delta }E}{2}]\}}=\frac{{\sum _{i,j}}^{\prime }\mathrm{\Psi }_{k_0,l_0}^{(i,j)}\mathrm{\Psi }_{k,l}^{(i,j)}}{\sqrt{{\sum _{i,j}}^{\prime }|\mathrm{\Psi }_{k_0,l_0}^{(i,j)}|^2}}$$
(29)
where the primed sum $`{\sum }^{\prime }`$ is restricted to the eigenstates $`\mathrm{\Psi }^{(i,j)}`$ with eigenvalues $`E^{(1,i)}E^{(2,j)}\in [E-\mathrm{\Delta }E/2,E+\mathrm{\Delta }E/2]`$. Clearly, Eq. (29) becomes Eq. (28) if the energy interval contains the complete spectrum; this is nothing but the usual completeness condition of the basis of eigenvectors.
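For completeness, the following sketch shows how such an energy-filtered packet and its spreading can be computed from the eigenpairs (with $`\mathrm{}=1`$). As a stand-in Hamiltonian we use a uniform open chain, i.e. essentially the periodic limit $`v=1`$; for the labyrinth one would substitute the product eigenpairs of Sec. III. The definition of $`d(t)`$ used here is the standard root-mean-square displacement from the initial site, which we assume agrees with the one introduced earlier in the paper.

```python
import numpy as np

def windowed_packet(E, U, site0, window):
    """Initial state of Eq. (29): a delta function at site0 filtered through
    the eigenstates whose energies lie in window = (Emin, Emax)."""
    Emin, Emax = window
    sel = (E >= Emin) & (E <=Emax)
    c = U[site0, sel]                       # overlaps with the selected eigenstates
    c = c / np.linalg.norm(c)               # normalization of Eq. (29)
    return U[:, sel] @ c, c, sel

def msd(E, U, site0, window, times):
    """Root-mean-square displacement d(t) of the filtered packet (hbar = 1)."""
    _, c, sel = windowed_packet(E, U, site0, window)
    x = np.arange(U.shape[0])
    out = []
    for t in times:
        amp = U[:, sel] @ (np.exp(-1j * E[sel] * t) * c)   # |Psi(t)> in site basis
        out.append(np.sqrt(np.sum((x - site0) ** 2 * np.abs(amp) ** 2)))
    return np.array(out)

# Stand-in Hamiltonian: a uniform open chain (the periodic limit v = 1).
N = 200
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
E, U = np.linalg.eigh(H)
print(msd(E, U, site0=N // 2, window=(-1.0, 1.0), times=np.linspace(1.0, 20.0, 5)))
```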
We numerically checked different energy windows for the octonacci chain and the labyrinth tiling. Due to the high DOS around $`E=0`$, we choose smaller intervals in the band center. The results in Fig. 8 and Fig. 9 show that the long-time behaviour of $`C(t)`$ and $`d(t)`$ hardly depends on the selection of the energy window. However, it is more complex at small times due to the different shapes and widths of the initial wave packets. The various values of $`d(t)`$ at the initial time reflect the width of the initial wave packet. The smaller the energy interval under consideration, the wider is the initial wavepacket. In practice, in order to avoid that the wave packet reaches the boundary too early, the energy interval may not be chosen too small.
## VII Dynamical scaling and fractal dimensions
In 1D quasiperiodic systems, it is known that the inequality $`\beta \ge D_1`$ relates the diffusion behaviour and the fractal properties of the energy spectrum. In $`d`$ dimensions, this generalizes to the inequality $`\beta \ge D_1/d`$, thus it implies a superdiffusive behaviour $`\beta \ge 1/2`$ in two dimensions if $`D_1\approx 1`$. In Fig. 10, the values of the exponent $`\beta `$ for the octonacci chain and the labyrinth tiling are shown for various values of the parameter $`v`$. In all cases, we find that this inequality holds. Apparently, the diffusion exponents $`\beta `$ for the octonacci chain and the labyrinth tiling are very close, which might be due to the product structure of the labyrinth and its wavefunctions.
According to a conjecture by Piéchon, $`\beta =D_1`$, the information dimension of the global spectral measure, for one-dimensional quasiperiodic models with multifractal global spectral measure. In order to check this relation, we calculated $`D_1`$, but it turns out that it is rather difficult to extract accurate values by a linear fit due to strong oscillations in the data. However, it appears that the relation does not hold for general parameter values $`v`$ in the octonacci chain, and it certainly cannot be valid for the two-dimensional system as it only involves a dimension that characterizes the spectral measure.
Ketzmerick et al. suggested an improved inequality $`\beta \ge D_2/D_2^\psi `$ which is numerically obeyed by 1D quasiperiodic models. As can be seen from Fig. 10, this relation applies for the octonacci chain as well as for the labyrinth tiling. However, in the two-dimensional case the inequality is less sharp as $`\beta `$ is much larger than the ratio $`D_2/D_2^\psi `$, in particular for values of the parameter $`v\gtrsim 0.6`$ where the energy spectrum is smooth and $`D_2\approx 1`$.
For multifractal wavefunctions at the Anderson transition or at quantum Hall transitions, one finds $`D_2^\psi =dD_2`$ for a $`d`$-dimensional system. Above, it has been demonstrated that $`D_2=1`$ for the band-like spectra in 2D quasiperiodic systems, but the corresponding eigenstates are multifractal with generalized dimension $`D_2^\psi <2`$. Although the eigenstates of 2D quasiperiodic tight-binding models are similar to the critical states at the Anderson transition, the equality $`D_2^\psi =dD_2`$ apparently does not apply to 2D quasicrystals.
Recently, Zhong et al. argued that one might interpret the superdiffusive behaviour in aperiodic systems as a ballistic behaviour in a space of effective dimension $`D_2^\psi `$, or that this should at least give an upper bound on the possible values of $`\beta `$. In Fig. 10, we compare the ratio $`D_2^\psi /d`$ to $`\beta `$. It turns out that the values of $`\beta `$ and $`D_2^\psi /d`$ are rather close, but that there seems to be a systematic deviation with $`\beta <D_2^\psi /d`$ for small values of $`v`$ and $`\beta >D_2^\psi /d`$ for $`v`$ close to $`1`$. Therefore, at least for large values of the parameter $`v`$, it appears that this bound does not hold.
Finally, we also included the values of $`D_1^\psi /d`$ in Fig. 10, which apparently does give an upper bound on the values of $`\beta `$ for the models under consideration. So far, this is just an observation; we cannot present an argument that this should hold in general.
## VIII Conclusion
In this paper, the energy spectra, wavefunctions and quantum diffusion for the octonacci chain and the labyrinth tiling are studied. The labyrinth tiling is based on the octonacci chain, which allows us to deal with very large systems. For the octonacci chain, the energy spectra are singular continuous and the eigenstates are critical. The energy spectra of the labyrinth tiling presumably are also singular continuous, but they can be band-like (i.e., of finite Lebesgue measure) with zero or finite gaps, a mixture of band-like and fractal parts, or fractal-like upon increasing the modulation strength. However, the eigenstates are multifractal irrespective of the value of the modulation parameter.
The propagation of an initial wave packet is discussed in terms of the autocorrelation function $`C(t)`$ and the mean square displacement $`d(t)`$. Numerical results show that $`C(t)\sim t^{-\delta }`$ and $`d(t)\sim t^\beta `$ for the octonacci chain and the labyrinth tiling. Corresponding to the multifractal eigenstates, we observe $`0<\beta <1`$ for both systems. In the case of fractal-like or mixed energy spectra and multifractal eigenstates, we find $`0<\delta <1`$. However, for a band-like spectrum, $`C(t)\sim t^{-1}`$ as in a periodic system, which causes a qualitative change of behaviour in $`C(t)`$ for the labyrinth tiling at a parameter value $`v_c\approx 0.6`$. Similar effects have also been observed for Fibonacci lattices and for the octagonal tiling.
We believe that the anomalous diffusion shown in $`d(t)`$ and the crossover of the autocorrelation $`C(t)`$ will be a common phenomenon in 2D quasiperiodic systems. Of course, to observe the crossover in $`C(t)`$ one needs a parameter that allows one to continuously move away from the periodic case, which is not easily at hand for the most commonly investigated quasiperiodic model systems such as the Penrose or the octagonal tiling. Finally, we also studied the influence of different initial wave packets by choosing the eigenstates from various energy windows. The results show that the behaviour of $`C(t)`$ and $`d(t)`$ does not depend significantly on the shape and the location of the initial wave packet.
Comparing the values of $`\beta `$ with several expressions involving the fractal dimensions of energy spectra and eigenstates that were proposed in the literature, we find that the inequality $`\beta \ge D_2/D_2^\psi `$ of Ref. holds true. However, it seems that the bound $`\beta \le D_2^\psi /d`$ proposed recently by Zhong et al. may be violated for parameter values $`v`$ close to one, i.e., close to the periodic case. On the other hand, we find that the weaker condition $`\beta \le D_1^\psi /d`$ is always satisfied.
Our present work corroborates that there are strong relations between fractal properties of energy spectra and wavefunctions on the one hand and the exponents describing the quantum diffusion on the other hand. However, it appears to be difficult to find relations that give quantitative agreement for one- and two-dimensional aperiodic systems. Here, a deeper understanding of the underlying physics is desirable. Higher-dimensional systems constructed as products of one-dimensional systems, such as the labyrinth tiling, provide useful toy examples for further investigations which can, at least, be treated numerically in an efficient way.
###### Acknowledgements.
The authors thank J. X. Zhong for fruitful discussions. HQY is grateful for the kind hospitality in Chemnitz. Financial support from DFG (UG) and the NSF of China (HQY) is gratefully acknowledged. |
# Numerical renormalization group study of random transverse Ising models in one and two space dimensions
## 1 Introduction
The effect of quenched randomness on disordered quantum magnets close to a quantum phase transition is much stronger than on classical systems at temperature driven phase transitions. As first observed by McCoy in a somewhat disguised version of a random transverse Ising chain, non-conventional scaling and off-critical singularities that lead to divergent susceptibilities even away from the critical point now appear to be a generic scenario in any dimension, at least in disordered quantum magnets with an Ising symmetry. The reason for this, as pointed out by Fisher only recently, is a novel fixed point behavior of these systems under renormalization, namely one which is totally determined by the randomness and its geometric properties: the so called infinite randomness fixed point .
Within this scenario the quantum critical behavior of disordered transverse Ising models is essentially determined by strongly coupled clusters and their geometric properties. Let $`L`$ be the linear size of such a cluster. Then it contributes to the low energy spectrum with an exponentially small excitation gap of size $`\mathrm{ln}\mathrm{\Delta }E\sim -L^\psi `$, defining the exponent $`\psi `$. Moreover, at the critical point, it has a total magnetization of size $`\mu \sim L^{\varphi \psi }`$ defining the exponent $`\varphi `$. Finally the linear length scale of strongly coupled clusters occurring at a distance $`\delta `$ away from the critical point is $`\xi \sim |\delta |^{-\nu }`$ giving rise to a third scaling exponent $`\nu `$. All bulk exponents can now be expressed via $`\psi ,\varphi `$ and $`\nu `$, c.f. $`\beta _b/\nu =x_b=d-\varphi \psi `$, $`\nu _{\mathrm{typ}}=\nu (1-\psi )`$ and in the Griffiths phase $`z^{\prime }\sim \delta ^{-\nu \psi }`$. For the 1d case, as treated above, it is $`\psi =1/2`$, $`\varphi =(\sqrt{5}+1)/2`$ and $`\nu =2`$ for uncorrelated disorder.
The basic geometric objects, the strongly coupled clusters, still have to be defined and this will be done within a renormalization group scheme. However, for site or bond dilution it is immediately obvious what these clusters are: simply the connected clusters. Hence the critical exponents defined above are directly related to the classical percolation exponents: Let $`\delta =p-p_c`$ be the distance from the percolation threshold, $`\nu _{\mathrm{perc}}`$ the exponent for the typical cluster size, $`D_{\mathrm{perc}}`$ the fractal dimension of the percolating cluster, $`\beta _{\mathrm{perc}}`$ the exponent for the probability to belong to the percolating cluster. Then one has for the critical exponents defined above
$$\nu =\nu _{\mathrm{perc}},\psi =D_{\mathrm{perc}},\varphi =(d-\beta _{\mathrm{perc}}/\nu _{\mathrm{perc}})/D_{\mathrm{perc}}$$
(1)
Next we consider the question of what happens for generic disorder (i.e. no dilution, but random bonds and/or fields), and we study the model defined by the Hamiltonian
$$H=-\underset{\langle i,j\rangle }{\sum }J_{ij}\sigma _i^z\sigma _j^z-\underset{i}{\sum }h_i\sigma _i^x.$$
(2)
Here the $`\{\sigma _i^\alpha \}`$ are Pauli spin matrices, and the nearest neighbor interactions $`J_{ij}`$ and transverse fields $`h_i`$ are both independent random variables distributed uniformly:
$`\pi (J_{ij})`$ $`=`$ $`\{\begin{array}{cc}1,\hfill & \text{for }0<J_{ij}<1\hfill \\ 0,\hfill & \text{otherwise}\hfill \end{array}`$
$`\rho (h_i)`$ $`=`$ $`\{\begin{array}{cc}h_0^{-1},\hfill & \text{for }0<h_i<h_0\hfill \\ 0,\hfill & \text{otherwise}\hfill \end{array}.`$
For this case the distance $`\delta `$ from the critical point is conveniently given by $`\delta =\frac{1}{2}\mathrm{ln}h_0`$. In one space dimension this model has been investigated intensively over the recent years , and many analytical as well as numerical tools are at hand to analyze it. Beyond the simple one-dimensional geometry one has to rely on numerical techniques like quantum Monte-Carlo simulations (as in the two-dimensional case ) or the numerical implementation of the renormalization group scheme, which we outline in the next section.
## 2 The renormalization-group scheme
The strategy of the renormalization-group à la Ma, Dasgupta and Hu is to decrease the number of degrees of freedom and reduce the energy scale by performing successive decimation transformations in which the largest element of the set of random variables $`\{h_i,J_{ij}\}`$ at each energy scale is eliminated and weaker effective couplings are generated by perturbation theory.
The renormalization-group procedure is as follows: Find the strongest coupling
$$\mathrm{\Omega }\equiv \mathrm{max}\{J_{ij},h_i\}$$
in the system. If $`\mathrm{\Omega }=J_{ij}`$, then the neighboring transverse fields $`h_i`$ and $`h_j`$ can be treated as a perturbation to the term $`J_{ij}\sigma _i^z\sigma _j^z`$ in the Hamiltonian (2); The two spins involved are joined together into a spin cluster with an effective transverse field
$$\stackrel{~}{h}_{(ij)}\approx \frac{h_ih_j}{J_{ij}}$$
and an effective magnetic moment
$$\stackrel{~}{\mu }_{(ij)}=\mu _i+\mu _j.$$
The bonds of the new cluster $`\stackrel{~}{\sigma }_{(ij)}`$ with other clusters $`\sigma _k`$ are
$$\stackrel{~}{J}_{(ij)k}=\mathrm{max}(J_{ik},J_{jk}).$$
If instead $`\mathrm{\Omega }=h_j`$, then the associated spin $`\sigma _j`$ is eliminated and effective bonds between each pair of its neighboring spins are generated by second-order perturbation theory. The strength of the effective bonds for each pair $`(i,k)`$ is
$$\stackrel{~}{J}_{ik}\approx \mathrm{max}(J_{ik},\frac{J_{ij}J_{jk}}{h_j})$$
where the $`J_{ik}`$ are the bonds that may have already been present. This procedure is sketched for the 1d case in Fig. 1. We continue the procedure until there is only one remaining spin cluster.
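A compact numerical implementation of these rules for a single chain is sketched below (open boundaries for simplicity, whereas our actual computation uses periodic chains); in a chain no bond between the two neighbours of a decimated site exists beforehand, so the maximum rule reduces to the simple product formula. The uniform initial distributions follow Eq. (2) and the text; the routine returns the effective field and moment of the last remaining cluster, and the helper names are ours.

```python
import numpy as np

def sdrg_chain(J, h):
    """Ma-Dasgupta-Hu decimation for an open random transverse-field Ising chain:
    J[i] couples sites i and i+1, h[i] is the transverse field on site i."""
    J, h = list(J), list(h)
    mu = [1] * len(h)                          # magnetic moments of the clusters
    while len(h) > 1:
        jmax, hmax = max(J), max(h)
        if jmax >= hmax:                       # decimate the strongest bond
            b = J.index(jmax)                  # joins sites b and b+1 into one cluster
            h[b] = h[b] * h[b + 1] / jmax      # effective transverse field
            mu[b] += mu[b + 1]                 # effective magnetic moment
            del h[b + 1], mu[b + 1], J[b]
        else:                                  # decimate the strongest field
            s = h.index(hmax)
            if 0 < s < len(h) - 1:             # bulk site: effective bond J_l J_r / h
                J[s - 1] = J[s - 1] * J[s] / hmax
                del J[s]
            else:                              # edge site: no new bond is generated
                del J[0 if s == 0 else -1]
            del h[s], mu[s]
    return h[0], mu[0]

rng = np.random.default_rng(1)
L, h0 = 512, 1.0                               # h0 = 1 is the critical point
last_field, last_moment = sdrg_chain(rng.uniform(0, 1, L - 1), rng.uniform(0, h0, L))
print(np.log(last_field), last_moment)
```

Collecting the last fields and moments over many disorder realizations and system sizes gives the distributions analyzed in the following sections.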
At each stage of the RG, an effective field (bond) is a ratio of a product of some number $`f`$ of original fields (bonds) to a product of $`f-1`$ original bonds (fields). The number $`f`$ grows under renormalization at criticality. As a result, the log-field and log-bond distributions $`R_\mathrm{\Omega }(\mathrm{ln}\stackrel{~}{h})`$ and $`P_\mathrm{\Omega }(\mathrm{ln}\stackrel{~}{J})`$ become broader and broader under renormalization as the critical point is approached. This increasing width of the field and bond distributions reduces the errors made by the second-order perturbation approximation. The RG becomes thereby asymptotically exact.
## 3 The one-dimensional case
The RG can be carried out analytically in one space dimension, therefore we can use the $`1d`$ case with periodic boundary conditions as a simple check for our numerical implementation. In Fig. 2 we show the probability distribution of the logarithm of the last remaining cluster field at the critical point $`h_0=1`$, which scales, according to Fig. 2, like $`L^{1/2}`$, where $`L`$ is the system size. From this one concludes that the exponent $`\psi `$, defined in the introduction, is given by $`\psi =1/2`$. Inspecting the number of active spins in the last remaining cluster at the critical point we obtain the size dependence $`\mu \sim L^{0.81}`$ from Fig. 3, and thus $`\varphi \approx 1.62`$.
In the Griffiths phase $`h_0\ne 1`$, the probability distribution of the energy gap $`\mathrm{\Omega }`$ still has an algebraic singularity at $`\mathrm{\Omega }=0`$, and its finite size scaling behavior is
$$\mathrm{\Omega }P_L(\mathrm{\Omega })\sim L^d\mathrm{\Omega }^{d/z^{\prime }(\delta )}=(L^{z^{\prime }(\delta )}\mathrm{\Omega })^{d/z^{\prime }(\delta )}$$
(3)
where $`d`$ is the space dimension (in this section $`d=1`$) and $`z^{\prime }(\delta )`$ a generalized dynamical exponent that varies continuously with the distance $`\delta `$ from the critical point. This exponent parameterizes the strength of all singularities in the off-critical region $`\delta \ne 0`$, for instance in the disordered phase $`\delta >0`$ one has for the imaginary time autocorrelations $`G_{\mathrm{loc}}(\tau )=[\langle \sigma _i^x(\tau )\sigma _i^x(0)\rangle _{T=0}]_{\mathrm{av}}\sim \tau ^{-1/z^{\prime }(\delta )}`$, for the local susceptibility $`\chi _{\mathrm{loc}}\sim T^{1/z^{\prime }(\delta )-1}`$, for the specific heat $`C\sim T^{1/z^{\prime }(\delta )}`$ and for the magnetization in a longitudinal field $`M\sim H^{1/z^{\prime }(\delta )}`$. The most convenient way to determine this exponent is, however, via the distribution $`P_L(\mathrm{\Omega })`$. At the critical point this distribution has to merge with the critical distribution discussed above, and therefore $`\mathrm{lim}_{\delta \to 0}z^{\prime }(\delta )=\mathrm{\infty }`$. Using this finite-size scaling form for the distribution of the last bonds/fields in the RG procedure we can extract the dynamical exponent as is done in Fig. 5.
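The extraction can be automated with a rough tail fit, as sketched below. The estimator, the tail fraction and the synthetic self-check (gaps drawn with a known $`z^{\prime }`$) are our illustrative choices; in practice the input gaps are the last effective fields from many runs of the RG at fixed $`h_0`$, whose cumulative distribution behaves as $`F(\mathrm{\Omega })\sim \mathrm{\Omega }^{d/z^{\prime }}`$ according to Eq. (3).

```python
import numpy as np

def dynamical_exponent(gaps, d=1, tail_fraction=0.2):
    """Rough estimate of z' from the low-energy tail of the gap distribution,
    fitting the empirical cumulative distribution F(Omega) ~ Omega^{d/z'}."""
    x = np.sort(np.asarray(gaps))
    n = len(x)
    m = max(int(tail_fraction * n), 10)
    ranks = np.arange(1, m + 1) / n                          # empirical cumulative F
    slope = np.polyfit(np.log(x[:m]), np.log(ranks), 1)[0]   # slope = d / z'
    return d / slope

# Self-check: synthetic gaps with F(Omega) = Omega^{1/z'} for z' = 4.
rng = np.random.default_rng(0)
gaps = rng.uniform(size=20000) ** 4.0
print(dynamical_exponent(gaps))       # -> approximately 4
```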
In the ordered phase $`h_0<1`$ the distribution of fields and bonds are related to the distribution in the disordered phase $`h_0>1`$ via duality, see Fig. 7.
## 4 The double chain
The RG scheme for double chains with some new elements (compared to the 1d case treated above) is depicted in Fig. 8.
As in $`1d`$, we observe that the log-field and log-bond distributions get broader with increasing system size at criticality. To estimate the critical point we compute the field distribution at the last stage of the RG varying the initial transverse field $`h_0`$. We estimate the critical point to be at $`h_0=1.9`$, beyond which the broadening of the log-field distribution appears to be saturating, as for 1d in the Griffiths phase. Moreover at $`h_c=1.9`$ the log-field and the log-bond distributions become asymptotically identical except for a constant multiplicative factor that reflects the short-ranged non-universal physics (See Fig. 9). This is obvious in the single chain, where it follows from the self-duality of the simple chain at the critical point. However, the double chain is not self-dual, nevertheless the scaling forms of the two distributions become identical at the critical point. We speculate that this remains true also in the two-dimensional case to be discussed below.
The scaling of the critical distributions depicted in Fig. 9 yields the critical exponent $`\psi =0.5`$, as shown in Fig. 10. This is the same as for the simple chain. In addition, for the average magnetic moment of the last remaining cluster at $`h_c=1.9`$, we find the same system size dependence for the double chain as for the 1d case, i.e. the same critical exponent $`\varphi `$, see Fig. 11. This implies that the double chain and the simple chain belong to the same universality class.
In the Griffiths phase $`h_0>h_c=1.9`$ we extracted the generalized exponent $`z^{\prime }(h_0)`$, which is depicted in Fig. 12. Close to the critical point $`h_c`$ we observe the same linear dependence of $`1/z^{\prime }(\delta )`$ on the distance $`\delta =h_0-h_c`$ from the critical point as in 1d. Since for $`\delta \ll 1`$ one expects $`z^{\prime }(h_0)\sim \delta ^{-\psi \nu }`$, this implies that $`\nu =2`$, the same as for the simple chain.
## 5 The square lattice (2d)
Next we present our preliminary results for the two-dimensional (2d) case with periodic boundary conditions, where we, in contrast to the treatment in Motrunich et al., keep all bonds generated during renormalization. The RG scheme for the 2d case is very similar to the one for the double chain and is depicted in Fig. 13.
In comparison to the 1d and the double chain models, the location of the critical point cannot be fixed precisely in two dimensions according to our numerical observations so far. We obtain a critical field approximately at $`h_0=5.3`$ by applying the criterion that the field and bond distributions should have similar scaling forms (as for the 1d case and the double chain). The scaling of the last log-field distribution yields $`\psi \approx 0.5`$ and the scaling plot of the number of the active spins in the last remaining cluster yields $`\varphi \approx 2.0`$ and $`\mu \sim L^{1.06}`$.
Our preliminary results for the two-dimensional case agree with those obtained recently by Motrunich et al. and with those obtained by us via quantum Monte-Carlo simulations.
## Acknowledgments
H.R. is grateful to the German Research Foundation (DFG) for financial support within a JSPS-DFG binational (Japan-Germany) cooperation. N.K.’s work is supported by a Grant-in-Aid for Scientific Research Program (No.11740232) from Mombusho, Japan.
# Nucleosynthesis in Accretion Flows Around Black Holes
## 1 Introduction
In Chakrabarti & Mukhopadhyay (1999, hereafter referred to as Paper 1) we studied the result of nucleosynthesis in hot, highly viscous accretion flows with small accretion rates and showed that neutron tori can form around a black hole. In the present paper, we study nucleosynthesis in disks in other parameter space, where the photo-dissociation may not be complete and other reactions may be important, and show that depending on the accretion parameters, abundances of new isotopes may become abnormal around a black hole. Thus, observation of these isotopes may give a possible indication of black holes at the galactic center or in a binary system.
Earlier, Chakrabarti (1986) and Chakrabarti et al. (1987, hereinafter CJA) initiated discussions of nucleosynthesis in sub-Keplerian disks around black holes and concluded that for very low viscosity ($`\alpha `$ parameter less than around $`10^{-4}`$) and high accretion rates (typically, ten times the Eddington rate) there could be significant nucleosynthesis in thick disks. Radiation-pressure-supported thick accretion flows are cooler and significant nucleosynthesis was not possible unless the residence time of matter inside the accretion disk was made sufficiently high by reducing viscosity. The conclusions of this work were later verified by Arai & Hashimoto (1992) and Hashimoto et al. (1993).
However, the theory of accretion flows which contain a centrifugal-pressure-supported hotter and denser region in the inner part of the accretion disk has been developed more recently (Chakrabarti 1990, hereafter C90 and Chakrabarti 1996, hereafter C96). The improvement in the theoretical understanding can be appreciated by comparing the numerical simulation results done in the eighties (e.g. Hawley et al. 1984, 1985) and in the nineties (e.g. Molteni et al. 1994; Molteni et al. 1996; Ryu et al. 1997). Whereas in the eighties the matching of theory and numerical simulations was poor, the matching of the results obtained recently is close to perfect. It is realized that in a large region of the parameter space, especially for lower accretion rates, the deviated flow would be hot and a significant nuclear reaction is possible without taking resort to very low viscosity.
We arrive at a number of important conclusions: (a) Significant nucleosynthesis is possible in the accretion flows. Whereas most of the matter of modified composition falls into the black hole, a fraction may go out through the winds and will contaminate the surroundings in due course. The metallicity of the galaxies may also be influenced. (b) Generation or absorption of energy due to exothermic and endothermic nuclear reactions could seriously affect the stability of a disk. (c) Hot matter is unable to produce Lithium ($`{}_{}{}^{7}Li`$) or Deuterium (D) since when the flow is hot, photo-dissociation (photons partially locally generated and the rest supplied by the nearby Keplerian disk (Shakura & Sunyaev 1973) when the region is optically thin) is enough to dissociate all the elements completely into protons and neutrons. Even when photo-dissociation is turned off (low opacity cases or when the system is fundamentally photon-starved) $`Li`$ was not found to be produced very much. (d) Most significantly, we show that one does not require a very low viscosity for nucleosynthesis, contrary to the conclusions of the earlier works on thick accretion disks (e.g., CJA).
In Paper 1, we already presented the basic equations which govern accretion flows around a compact object, so we do not present them here. The plan of the present paper is the following: we present a set of solutions of these equations in the next section which would be used for nucleosynthesis work. When nucleosynthesis is insignificant, we compute thermodynamic quantities ignoring nuclear energy generation, otherwise we include it. The detailed method is presented here. We divide all the disks into three categories: ultra-hot, moderately hot, and cold. In Sect. 3, we present the results of nucleosynthesis for these cases. We find that in ultra-hot cases, the matter is completely photo-dissociated. In moderately hot cases, proton-capture processes along with dissociation of deuterium and $`{}_{}{}^{3}He`$ are the major processes. In the cold cases, no significant nuclear reactions go on. In Sect. 4, we discuss the stability properties of the accretion disks in presence of nucleosynthesis and conclude that only the very inner edge of the flow is affected. Nucleosynthesis may affect the metallicities of the galaxies as well as $`Li`$ abundance in companions in black hole binaries. In Sect. 5, we discuss these issues and draw our conclusions.
## 2 Typical Solutions of Accretion Flows
In our work below, we choose a Schwarzschild black hole and use the Schwarzschild radius $`2GM/c^2`$ to be the unit of the length scale where $`G`$ and $`c`$ are the gravitational constant and the velocity of light respectively. We choose $`c`$ to be the unit of velocity. We also choose the cgs unit when we find it convenient to do so. The nucleosynthesis work is done using cgs units and the energy release rates are in that unit as well.
A black hole accretion disk must, by definition, have radial motion, and it must also be transonic, i.e., matter must be supersonic (C90) while entering through the horizon. The supersonic flow must be sub-Keplerian and therefore deviate from the Keplerian disk away from the black hole. The location where the flow may deviate will depend on the cooling and heating processes (which depend on viscosity). Several solutions of the governing equations (see Eq. 2(a-d) of Paper 1) are given in C96. By and large, we follow this paper to compute thermodynamical parameters along a flow. However, we have considered Comptonization as in Chakrabarti & Titarchuk (1995, hereafter CT95) and Chakrabarti (1997, hereafter C97). Due to computational constraints, we include energy generation due to nuclear reactions ($`Q_{\mathrm{nuc}}`$) only when it is necessary (namely, when $`|Q_{\mathrm{nuc}}|`$ is comparable to energy generation due to viscous effects), and we do not consider energy generation due to magnetic dissipation (due to reconnection effects, for instance). In Fig. 1, we show a series of solutions which we employ to study nucleosynthesis processes. We plot the ratio $`\lambda /\lambda _K`$ (Here, $`\lambda `$ and $`\lambda _K`$ are the specific angular momentum of the disk and the Keplerian angular momentum respectively.) as a function of the logarithmic radial distance. The viscosity parameter is marked on each curve. The other parameters of the solution are in Table 1. These solutions are obtained with constant $`f=1-Q^{-}/Q^+`$ where $`Q^+`$ includes only the viscous heating. In the presence of significant nucleosynthesis, the solutions are obtained by choosing $`f=1-Q^{-}/(Q^++Q_{\mathrm{nuc}})`$, where $`Q_{\mathrm{nuc}}`$ is the net energy generation or absorption due to exothermic and endothermic reactions. The motivation for choosing the particular cases is mentioned in the next section. At $`x=x_K`$, the ratio $`\lambda /\lambda _K=1`$ and therefore $`x_K`$ represents the transition region where the flow deviates from a Keplerian disk. First, note that when other parameters (basically, specific angular momentum and the location of the inner sonic point) remain roughly the same, $`x_K`$ changes inversely with the viscosity parameter $`\alpha _\mathrm{\Pi }`$ (C96). (The only exception is the curve marked with $`0.01`$. This is because it is drawn for $`\gamma =5/3`$; all other curves are for $`\gamma =4/3`$.) If one assumes, as Chakrabarti & Titarchuk (1995) and Chakrabarti (1997) did, that the alpha viscosity parameter decreases with vertical height, then it is clear from the general behaviour of Fig. 1 that $`x_K`$ would go up with height. The disk will then look like a sandwich with higher viscosity Keplerian matter flowing along the equatorial plane. As the viscosity changes, the sub-Keplerian and Keplerian flows redistribute (Chakrabarti & Molteni 1995) and the inner edge of the Keplerian component also recedes or advances. This fact that the inner edge of the disk should move in and out when the black hole goes into soft or hard state (as observed by, e.g., Gilfanov et al. 1997; Zhang et al. 1997) is thus naturally established from this disk solution.
In C90 and C96, it was pointed out that in a large region of the parameter space, especially for intermediate viscosities, centrifugal-pressure-supported shocks would be present in hot accretion flows. In these cases a shock-free solution passing through the outer sonic point was present. However, this branch is not selected by the flow and the flow passes through the higher entropy solution through shocks and the inner sonic points instead. This assertion has been repeatedly verified independently by both theoretical studies (Yang & Kafatos 1995, Nobuta & Hanawa 1994; Lu & Yuan 1997; Lu et al. 1997) and numerical simulations (with independent codes, Chakrabarti & Molteni 1993; Sponholz & Molteni 1994; Ryu et al. 1995, Molteni et al. 1996 and references therein). When the shock forms, the temperature of the flow suddenly rises and the flow slows down considerably, raising the residence time of matter significantly. This effect of shock-induced nucleosynthesis is also studied in the next section and, for comparison, the changes in composition in the shock-free branch were also computed, although it is understood that the shock-free branch is unstable. Our emphasis is not on shocks per se, but on the centrifugal-pressure-dominated region where the accreting matter slows down. When the shock does not form, the rise in temperature is more gradual. We generally follow the results of CT95 and C97 to compute the temperature of the Comptonized flow in the sub-Keplerian region which may or may not have shocks. Basically we borrow the mean factor $`F_{\mathrm{Compt}}<1`$ by which the temperature of the flow at a given radius $`x`$ ($`<x_K`$) is reduced due to the Comptonization process from the value dictated by the single-temperature hydrodynamic equations. This factor is typically $`1/30\sim 0.03`$ for a very low ($`<0.1`$) mass accretion rate of the Keplerian component (which supplies the soft photons for the Comptonization) and around $`1/100\sim 0.01`$ or less for higher Keplerian accretion rates. In the presence of magnetic fields, some dissipation is present due to reconnections. Its expression is $`Q_{\mathrm{mag}}=\frac{3B^2}{16\pi x\rho }v`$ (Shvartsman 1971; Shapiro 1973). We do not include this heating in this paper.
The list of major nuclear reactions such as the PP chain, CNO cycle, rapid proton capture and alpha ($`\alpha `$) processes, photo-dissociation etc. which may take place inside a disk is already given in CJA, and we do not repeat it here. Suffice it to say that due to the hotter nature of the sub-Keplerian disks, especially when the accretion rate is low and Compton cooling is negligible, the major process of hydrogen burning is some rapid proton capture process (which operates at $`T>0.5\times 10^9`$K) and mostly ($`p,\alpha `$) reactions, as opposed to the PP chain (which operates at much lower temperatures $`T\sim 0.01`$–$`0.2\times 10^9`$K) and the CNO cycle (which operates at $`T\sim 0.02`$–$`0.5\times 10^9`$K) as in CJA.
Typically, accretion onto a stellar-mass black hole takes place from a binary companion which could be a main sequence star. In a supermassive black hole at a galactic center, matter is presumably supplied by a number of nearby stars. Because it is difficult to establish the initial composition of the inflow, we generally take the solar abundance as the abundance of the Keplerian disk. Furthermore, the Keplerian disk being cooler, and the residence time inside it being insignificant compared to the hydrogen burning time scale, we assume that for $`x>x_K`$, the composition of the gas remains the same as that of the companion star, namely, solar. Thus our computation starts only from the time when matter is launched from the Keplerian disk. Occasionally, for comparison, we run the models with an initial abundance same as the output of big-bang nucleosynthesis (hereafter referred to as ‘big-bang abundance’). These cases are particularly relevant for nucleosynthesis around proto-galactic cores and the early phase of star formations. We have also tested our code with an initial abundance same as the composition of late-type stars since in certain cases they are believed to be companions of galactic black hole candidates (Martin et al. 1992, 1994; Filippenko et al. 1995; Harlaftis et al. 1996).
### 2.1 Selection of Models
In selecting models for which the nucleosynthesis should be studied, the following considerations were made. According to CT95 and C97, there are two essential components of a disk. One is Keplerian (of rate $`\dot{m}_d`$) and the other is a sub-Keplerian halo (of rate $`\dot{m}_h`$). For $`\dot{m}_d<0.1`$ and $`\dot{m}_h<1`$, the black hole remains in hard states. A lower Keplerian accretion rate generally implies a lower viscosity and a larger $`x_K`$ ($`x_K\sim 30`$–$`1000`$; see C96 and C97). In this parameter range the protons remain hot, typically, $`T_p\sim 1`$–$`10\times 10^9`$ degrees or so. This is because the efficiency of emission is lower ($`f=1-Q^{-}/Q^+\sim 0.1`$, where $`Q^+`$ and $`Q^{-}`$ are the height-integrated heat generation and heat loss rates \[ergs cm<sup>-2</sup> sec<sup>-1</sup>\] respectively. Also, see Rees (1984), where it is argued that $`\dot{m}/\alpha ^2`$ is a good indication of the cooling efficiency of the hot flow.). Thus, we study a group of cases (Group A) where the net accretion rate $`\dot{m}\sim 1.0`$ and the viscosity parameter $`\alpha \sim 0.001`$–$`0.1`$. The Comptonization factor $`F_{\mathrm{Compt}}\sim 0.03`$, i.e., the cooling due to Comptonization reduces the mean temperature roughly by a factor of around $`30`$, which is quite reasonable. Here, although the density of the gas is low, the temperature is high enough to cause significant nuclear reactions in the disk.
When the net accretion rate is very low ($`\dot{m}<0.01`$) such as in the quiescent state of an X-ray nova, the dearth of soft photons keeps the temperature of the sub-Keplerian flow at a very high value and a high Comptonization factor $`F_{\mathrm{Compt}}\sim 0.1`$ could be used (Group B). Here significant nuclear reactions take place, even though the density of matter is very low. Basically, the entire amount of matter is photo-dissociated into protons and neutrons in this case even when the opacity is very low.
In the event the inflow consists of both the Keplerian (accretion rate $`\dot{m}_d`$) and sub-Keplerian (accretion rate $`\dot{m}_h`$) matter as the modern theory predicts, there would be situations where the net accretion rate is high, say $`\dot{m}=\dot{m}_d+\dot{m}_h\sim 1`$–$`5`$, and yet the gas temperature is very high ($`T>10^9`$ K). This happens when the viscosity is too low to convert the sub-Keplerian inflow into a Keplerian disk. Here, most of the inflow is in the sub-Keplerian component and very little ($`\dot{m}_d\sim 0.01`$) matter is in the Keplerian flow. The dearth of soft photons keeps the disk hot, while the density of reactants is still high enough to have profuse nuclear reactions. The simple criterion for the cooling efficiency (that $`\dot{m}/\alpha ^2>1`$ would cool the disk, see Rees 1984) will not hold since the radiation source (Keplerian disk) is different from the cooling body (sub-Keplerian disk).
One could envisage yet another set of cases (Group C), where the accretion rate is very high ($`\dot{m}\sim 10`$–$`100`$), and the soft photons are so profuse that the sub-Keplerian region of the disk becomes very cold. In this case, typically, the viscosity is very high, $`\sim 0.2`$, and $`x_K`$ becomes low ($`x_K\sim 3`$–$`10`$). The efficiency of cooling is very high ($`Q^+\approx Q^{-}`$, i.e., $`f\approx 0`$). The Comptonization factor is low, $`F_{\mathrm{Compt}}<0.01`$. The black hole is in a soft state. There is no significant nuclear reaction in these cases. In the proto-galactic phase when the supply of matter is very high, while the viscosity may be so low (say, $`10^{-4}`$) that the entire amount is not accreted, one can have an ultra-cold accretion flow with $`F_{\mathrm{Compt}}\sim 10^{-3}`$. In this case also not much nuclear reaction goes on.
The above simulations have been carried out with polytropic index $`\gamma =4/3`$. In reality, the polytropic index could be in between $`4/3`$ and $`5/3`$. If $`\gamma <1.5`$ then shocks would form as in some of the above cases. However, for $`\gamma >1.5`$, standing shocks would not form (C96). We have included one illustrative example of a shock-free case with $`\gamma =5/3`$ which is very hot and we have presented the result in Group B. In this case the Keplerian component is far away and the intercepted soft photons are very few.
### 2.2 Selection of the Reaction Network
In selecting the reaction network we kept in mind the fact that hotter flows may produce heavier elements through triple-$`\alpha `$ and proton and $`\alpha `$ capture processes. Similarly, due to photo-dissociation, significant numbers of neutrons may be produced. Thus, we consider a sufficient number of isotopes on either side of the stability line. The network thus contains protons, neutrons, and isotopes up to $`{}_{}{}^{72}Ge`$ – altogether 255 nuclear species. The network of coupled non-linear differential equations is linearized and evolved in time along the solution of C96 obtained from a given set of initial parameters of the flow. This well proven method is widely used in the literature (see Arnett & Truran 1969; Woosley et al. 1973).
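To make the linearization step concrete, the sketch below performs one implicit (backward-Euler) update for a toy three-species chain A → B → C with constant rates; the real network replaces this right-hand side and Jacobian by the 255-species system built from the tabulated, temperature- and density-dependent rates, and the time step follows the residence time of the matter in the flow. The species, rates and step size here are purely illustrative.

```python
import numpy as np

def backward_euler_step(Y, dt, k_ab, k_bc):
    """One linearized implicit step for the toy network A -> B -> C."""
    def rhs(Y):
        A, B, C = Y
        return np.array([-k_ab * A, k_ab * A - k_bc * B, k_bc * B])
    jac = np.array([[-k_ab,   0.0, 0.0],
                    [ k_ab, -k_bc, 0.0],
                    [  0.0,  k_bc, 0.0]])
    # Solve (I - dt*J) dY = dt*f(Y); exact here because the toy network is linear.
    dY = np.linalg.solve(np.eye(3) - dt * jac, dt * rhs(Y))
    return Y + dY

Y = np.array([1.0, 0.0, 0.0])                  # start with pure "A"
for _ in range(50):
    Y = backward_euler_step(Y, dt=0.1, k_ab=2.0, k_bc=1.0)
print(Y, Y.sum())                              # abundances stay positive, sum is conserved
```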
The reaction rates were taken from Fowler et al. (1975) including updates by Harris et al. (1983). Other relevant references from where rates have been updated are: Thielemann (1980); Wallace & Woosley (1981); Wagoner et al.(1967); Fuller et al.(1980, 1982). For details of the procedure of adopting reaction rates, see, CJA and Jin et al.(1989, hereinafter JAC). The solar abundance which was used as the initial composition of the inflow was taken from Anders & Ebihara (1982).
## 3 Results
In this section, we present a few major results of our simulations using different parameter groups as described above. For a complete solution of the sub-Keplerian disks (C96) we need to provide (a) the mass of the black hole $`M`$, (b) the viscosity parameter $`\alpha _\mathrm{\Pi }`$, (c) the cooling efficiency factor $`f`$, (d) the Comptonization factor $`F_{\mathrm{Compt}}`$, (e) the net accretion rate of the flow $`\dot{m}`$, (f) the inner sonic point location $`x_{in}`$ through which the flow must pass and finally, (g) the specific angular momentum $`\lambda _{\mathrm{in}}`$ at the inner sonic point.
The following table gives the cases we discuss in this paper. The $`\mathrm{\Pi }`$-stress viscosity parameter $`\alpha _\mathrm{\Pi }`$, the location of the inner sonic point $`x_{\mathrm{in}}`$ and the value of the specific angular momentum at that point $`\lambda _{\mathrm{in}}`$ are free parameters. The net accretion rate $`\dot{m}`$, the Comptonization factor $`F_{\mathrm{Compt}}`$ and the cooling efficiency $`f`$ are related quantities (CT95, C97). For extremely inefficient cooling, $`f\sim 1.0`$, and for extremely efficient cooling $`f=0`$ or even negative. The derived quantities, such as the value of the maximum temperature $`T_9^{\mathrm{max}}`$ of the flow (in units of $`10^9`$K), the density of matter (in cgs units) at $`T_9^{\mathrm{max}}`$, and $`x_K`$, the location from where the Keplerian disk on the equatorial plane becomes sub-Keplerian, are also provided in the table. In the rightmost column, we present whether the inner edge of the disk is stable (S) or unstable (U) in the presence of the accretion flow. Three groups are separated as the parameters are clearly from three distinct regimes.
TABLE 1
| Model | $`M/M_{}`$ | $`\gamma `$ | $`x_{\mathrm{in}}`$ | $`\lambda _{\mathrm{in}}`$ | $`\alpha _\mathrm{\Pi }`$ | $`\dot{m}`$ | f | $`F_{\mathrm{Compt}}`$ | $`x_K`$ | $`T_9^{\mathrm{max}}`$ | $`\rho _{\mathrm{max}}`$ | S/U |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A.1 | 10 | 4/3 | 2.7945 | 1.65 | 0.001 | 1 | 0.1 | 0.03 | 1655.7 | 5.7 | 6.2$`\times 10^{-7}`$ | S |
| A.2 | 10 | 4/3 | 2.9115 | 1.6 | 0.07 | 1 | 0.1 | 0.03 | 401.0 | 4.7 | 4.9$`\times 10^{-7}`$ | S |
| A.3 | $`10^6`$ | 4/3 | 2.9115 | 1.6 | 0.07 | 1 | 0.1 | 0.03 | 401.0 | 4.7 | 4.9$`\times 10^{-12}`$ | U |
| B.1 | 10 | 4/3 | 2.8695 | 1.6 | 0.05 | 0.01 | 0.5 | 0.1 | 481.4 | 16.5 | 3.9$`\times 10^{-9}`$ | S |
| B.2 | 10 | 4/3 | 2.8695 | 1.6 | 0.05 | 4 | 0.5 | 0.1 | 481.4 | 16.5 | 1.6$`\times 10^{-8}`$ | U |
| B.3 | 10 | 5/3 | 2.4 | 1.5 | 0.01 | 0.001 | 0.5 | 0.1 | 84.4 | 47 | 3.3$`\times 10^{-10}`$ | S |
| B.4 | 10 | 4/3 | 2.795 | 1.65 | 0.2 | 0.01 | 0.2 | 0.1 | 8.4 | 13 | 1.1$`\times 10^{-8}`$ | S |
| C.1 | 10 | 4/3 | 2.795 | 1.65 | 0.2 | 100 | 0.0 | 0.01 | 4.8 | 0.8 | 1.1$`\times 10^{-4}`$ | S |
| C.2 | $`10^6`$ | 4/3 | 2.795 | 1.65 | $`10^{-4}`$ | $`100`$ | 0.0 | 0.001 | 3657.9 | 0.2 | 6.2$`\times 10^{-10}`$ | S |
The basis of our three groupings is clear from the Table. The very low $`\dot{m}/\alpha _\mathrm{\Pi }^2`$ in Group B makes the cooling efficiency very small. Thus we choose a relatively large $`f\sim 0.2`$–$`0.5`$. It also makes the cooling due to Comptonization very weak ($`F_{\mathrm{Compt}}\sim 0.1`$). Thus the disks could be ultra-hot. The intermediate $`\dot{m}/\alpha _\mathrm{\Pi }^2`$ in Group A means that the efficiency of cooling is intermediate, $`f\sim 0.1`$, and the Compton cooling of the sub-Keplerian region is average: $`F_{\mathrm{Compt}}\sim 0.03`$. The sub-Keplerian disk in this case is neither too hot nor too cold. The extremely high $`\dot{m}/\alpha _\mathrm{\Pi }^2`$ causes a strong cooling in Group C. Thus, we choose $`f=0`$, and a very efficient Compton cooling, $`F_{\mathrm{Compt}}\sim 0.01`$–$`0.001`$. As a result, the disk is also very cold. Now, we present our numerical results in these cases.
### 3.1 Nucleosynthesis in Moderately Hot Flows
Case A.1: In this case, the termination of the Keplerian component in the weakly viscous flow takes place at $`x=1655.7`$. The soft photons intercepted by the sub-Keplerian region reduce the temperature of this region but not by a large factor. The net accretion rate $`\dot{m}=1`$ is the sum of a (very low) Keplerian component and the sub-Keplerian component. Using the computations of CT95 and C97 for $`\dot{m}_d\sim 0.1`$ and $`\dot{m}_h\sim 0.9`$, we find that the electron temperature $`T_e`$ is around $`60`$ keV, i.e., $`T_9\sim 0.6`$ ($`T_9`$ is the temperature in units of $`10^9`$K), and the ion temperature is around $`T_9=2.5`$. This fixes the Comptonization factor to about $`F_{Compt}=0.03`$. This factor is used to reduce the temperature distribution of the solutions of C96 (which do not explicitly use Comptonization) to the temperature distribution with Comptonization. The ion temperature (in $`T_9`$) and density (in units of $`10^{-10}`$ gm cm<sup>-3</sup> to bring it onto the same plot) distributions computed in this manner are shown in Fig. 2a. Figure 2b gives the velocity distribution (velocity is measured in units of $`10^{10}`$ cm sec<sup>-1</sup>). Note the sudden rise in temperature and slowing down of matter close to the centrifugal barrier $`x\sim 30`$. Figure 2c shows the changes in composition as matter is accreted onto the black hole. Only those species with abundance $`Y_i>10^{-4}`$ have been shown for clarity. Also, compositions closer to the black hole are shown, as variations farther out are negligible. Most of the burning of species takes place below $`x=10`$. A significant amount of neutrons (with a final abundance of $`Y_n\sim 10^{-3}`$) is produced by the photo-dissociation process. Note that closer to the black hole, $`{}_{}{}^{12}C`$, $`{}_{}{}^{16}O`$, $`{}_{}{}^{24}Mg`$ and $`{}_{}{}^{28}Si`$ are all destroyed completely, even though at around $`x=5`$ or so, the abundance of some of them went up first before going down. Among the new species which are formed closer to the black hole are $`{}_{}{}^{30}Si`$, $`{}_{}{}^{46}Ti`$, $`{}_{}{}^{50}Cr`$. The final abundance of $`{}_{}{}^{20}Ne`$ is significantly higher than the initial value. This was not dissociated as the residence time in the hotter region was insufficient. Thus a significant metallicity could be supplied by winds from the centrifugal barrier.
Figure 2d shows the energy release and absorption due to exothermic and endothermic nuclear reactions ($`Q_{\mathrm{nuc}}`$) that are taking place inside the disk (solid). Superposed on it are the energy generation rate $`Q^+`$ (long dashed curve) due to the viscous process and the energy loss rate $`Q^{-}`$ in the sub-Keplerian flows. For comparison, we also plot the hypothetical energy generation and loss rates (short dashed curves marked as $`Q_{\mathrm{Kep}}^+`$ and $`Q_{\mathrm{Kep}}^{-}`$ respectively) if the disk had a purely Keplerian angular momentum distribution even in the sub-Keplerian regime. All these quantities are in units of $`3\times 10^6`$ and they represent the height-integrated energy release rate (ergs cm<sup>-2</sup> sec<sup>-1</sup>). Note that these Qs are on a logarithmic scale (if $`Q<0`$, $`log(|Q|)`$ is plotted). As matter leaves the Keplerian flow, the proton capture ($`p,\alpha `$) processes (such as $`{}_{}{}^{18}O(p,\alpha )^{15}N`$, $`{}_{}{}^{15}N(p,\alpha )^{12}C`$, $`{}_{}{}^{6}Li(p,\alpha )^3He`$, $`{}_{}{}^{7}Li(p,\alpha )^4He`$, $`{}_{}{}^{11}B(p,\gamma )3\alpha `$, $`{}_{}{}^{17}O(p,\alpha )^{14}N`$, etc.) burn hydrogen and release energy into the disk. (Since the temperature of the disk is very high, PP chains or CNO cycles are not the dominant processes for the energy release.) At around $`x=40`$, the deuterium starts burning ($`D(\gamma ,n)p`$) and the endothermic reaction causes the nuclear energy release to become ‘negative’, i.e., a huge amount of energy is absorbed from the disk. At the completion of the deuterium burning (at around $`x=20`$) the energy release tends to go back to the positive value to the level dictated by the original proton capture processes. The excessive temperature at around $`x=5`$ breaks $`{}_{}{}^{3}He`$ down into deuterium ($`{}_{}{}^{3}He(\gamma ,p)D`$, $`D(\gamma ,n)p`$). Another major endothermic reaction which is dominant in this region is $`{}_{}{}^{17}O(\gamma ,n)^{16}O`$. These reactions absorb a significant amount of energy from the flow. Note that the nuclear energy release or absorption is of the same order as the energy release due to the viscous process. This energy was incorporated in computing thermodynamic quantities following these steps:
(a) Compute thermodynamic quantities without nuclear energy (b) Run nucleosynthesis code and compute $`Q_{\mathrm{nuc}}`$
(c) Fit $`Q_{\mathrm{nuc}}`$ using piecewise analytical curves and include this into the definition of $`f`$,
$$f=1-\frac{Q^{-}}{Q^{+}+Q_{\mathrm{nuc}}}$$
$`(1)`$
(d) Do sonic point analysis once more using this extra heating/cooling term and compute thermodynamic quantities.
(e) Repeat from step (b) till the results converge. In this case, there is virtually no difference in the solution and the solution appears to be completely stable under nucleosynthesis. (A minimal sketch of this iteration loop is given below.)
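In algorithmic form, the loop (a)–(e) is a simple fixed-point iteration; the sketch below is purely schematic (the helpers `solve_transonic_flow` and `run_network` stand in for the sonic-point solver and the reaction network and are not the code actually used here):

```python
import numpy as np

def iterate_with_nuclear_feedback(x, f_init, tol=1e-3, max_iter=20):
    """Steps (a)-(e): iterate the flow solution and the nuclear energy to convergence."""
    flow = solve_transonic_flow(x, f_init)          # (a) rho(x), T(x), v(x) without Q_nuc
    for _ in range(max_iter):
        q_nuc = run_network(flow)                   # (b) nuclear energy release/absorption
        # (c) fold Q_nuc into the cooling factor, Eq. (1); in practice a piecewise
        #     analytical fit of q_nuc would be used instead of the raw array
        f_new = 1.0 - flow.q_minus / (flow.q_plus + q_nuc)
        new_flow = solve_transonic_flow(x, f_new)   # (d) redo the sonic-point analysis
        if np.max(np.abs(new_flow.temperature - flow.temperature) / flow.temperature) < tol:
            return new_flow                         # (e) converged
        flow = new_flow
    return flow
```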
Case A.2: Here we choose the same net accretion rate, but with a larger viscosity. As a result, the Keplerian component moves closer. The Comptonization is still not very effective, and the flow is moderately hot as above with $`F_{\mathrm{Compt}}=0.03`$. The flow deviates from a very hot (sufficient to cause the flow to pass through the outer sonic point) Keplerian disk at $`x_K=401.0`$, and after passing through an outer sonic point at $`x=50`$, and through a shock at $`x_S=15`$, the flow enters into the black hole through the inner sonic point at $`x=2.9115`$. We show the results both for the shock-free branch (i.e., the one which passes through only the outer sonic point before plunging into the black hole, dotted curves) and the shocked branch of the solution (solid curves). Figure 3a shows the comparison of the temperatures and densities (scaled in the same way as in Fig. 2a). The temperature and density jump sharply at the shock. Figure 3b shows the comparison of the radial velocities. The velocity sharply drops at the shock. Both of these effects hasten the nuclear burning in the case which includes the shock. Figure 3c shows the comparison of the abundances of only those species whose abundances reached a value of at least $`10^4`$. The difference between the shocked and the shock-free cases is that in the shock case similar burning takes place farther away from the black hole because of much higher temperature in the post-shock region.
The nature of the (height integrated) nuclear energy release is very similar to Case A.1 as the major reactions which take place inside the disk are basically the same, except that the exact locations where any particular reactions take place are different since they are temperature sensitive. In Fig. 3d, we show all the energy release/absorption components for the shocked flow (solid curve). For comparison, we include the nuclear energy curve of the shock-free branch (very long dashed curve). Note that in the post-shock region, the hotter and denser flow of the shocked branch causes a particular nuclear reaction to take place farther away from the black hole when compared with the behaviour in the shock-free branch, as is also reflected in the composition variation in Fig. 3c. The viscous energy generation ($`Q^+`$) and the loss of energy ($`Q^{-}`$) from the disk are also shown (long dashed). As before, these quantities, if the inner part had Keplerian distribution, are also plotted (short dashed). When big-bang abundance is chosen to be the initial abundance, the net composition does not change very much, but the dominating reactions themselves are somewhat different because the initial compositions are different. The dot-dashed curve shows the energy release/absorption in the shocked flow when big-bang abundance is chosen. All these quantities are, as before, in units of $`3\times 10^6`$ and they represent height integrated energy release rate (ergs cm<sup>-2</sup> sec<sup>-1</sup>). For instance, in place of proton capture reactions for computations with solar abundance, the fusion of deuterium into $`{}_{}{}^{4}He`$ plays a dominant role via the following reactions: $`D(D,n)^3He`$, $`D(p,\gamma )^3He`$, $`D(D,p)T`$, $`{}_{}{}^{3}He(D,p)^4He`$. This is because no heavy elements were present to begin with and proton capture processes involving heavy elements such as those prevalent in the solar abundance case cannot take place here. Endothermic reactions at around $`x=20`$–$`40`$ are dominated by deuterium dissociation as before. However, after the complete destruction of deuterium, the exothermic reaction is momentarily dominated by neutron capture processes (due to the same neutrons which are produced earlier via $`D(\gamma ,n)p`$) such as $`{}_{}{}^{3}He(n,p)T`$ which produces the spike at around $`x=14.5`$. Following this, $`{}_{}{}^{3}He`$ and $`T`$ are destroyed as in the solar abundance case (i.e., $`{}_{}{}^{3}He(\gamma ,p)D`$, $`D(\gamma ,n)p`$, $`T(\gamma ,n)D`$) and the energy release curve reaches its minimum at around $`x=6`$. The tendency of going back to the exothermic region is stopped due to the photo-dissociation of $`{}_{}{}^{4}He`$ via $`{}_{}{}^{4}He(\gamma ,p)T`$ and $`{}_{}{}^{4}He(\gamma ,n)^3He`$. At the end of the big-bang abundance calculation, a significant amount of neutrons is produced. The disk was found to be perfectly stable under nuclear reactions.
Case A.3: This case is exactly the same as A.2 except that the mass of the black hole is chosen to be $`10^6M_{\odot }`$. The temperature and velocity variations are similar to the above case. Because the accretion rate (in non-dimensional units) is the same, the density (which goes as $`\dot{m}/r^2v`$) is lower by a factor of $`10^5`$. A tenuous plasma should change its composition significantly only at higher temperatures than in the previous case. However, the increase in residence time by a factor of around $`10^5`$ causes the nuclear burning to take place farther out even at a lower temperature. This is exactly what is seen. Figure 4a shows the comparison (without including nuclear energy) of the composition of matter when the flow has a shock (solid curves) and when the flow is shock-free (dashed curve). We recall that the shock-free flow is in reality not stable. It is kept only for comparison purposes. Note that unlike earlier cases, a longer residence time also causes all the $`{}_{}{}^{20}Ne`$ that was generated from $`{}_{}{}^{16}O`$ to burn.
In Fig. 4b, we show a comparison of various height-integrated energy release and absorption curves as in Fig. 3d (in ergs cm<sup>-2</sup> sec<sup>-1</sup>). The nuclear energy remains negligibly small till around $`x=100`$. After that the endothermic reactions dominate. This is due to the dissociation of $`D`$, $`{}_{}{}^{3}He`$ and $`{}_{}{}^{7}Li`$ and also of $`{}_{}{}^{12}C`$, $`{}_{}{}^{16}O`$, $`{}_{}{}^{20}Ne`$ etc., all of which produce $`{}_{}{}^{4}He`$. The solid curve is for the branch with a shock and the very long dashed curve is for the shock-free branch. A small amount of neutrons is produced ($`Y_n\sim 10^{-3}`$) primarily due to the dissociation of $`D`$. These considerations are valid for solar abundance as the initial composition. In the case of big-bang abundance (dash-dotted curve), similar reactions take place but no elements heavier than $`{}_{}{}^{7}Li`$ are involved. The three successive dips are due to dissociation of $`D`$, $`{}_{}{}^{3}He`$ and $`{}_{}{}^{4}He`$ respectively.
Below $`x=10`$, $`|Q_{\mathrm{nuc}}|`$ is larger than $`Q^+`$ by 3–4 orders of magnitude. This is because of the superposition of a large number of photo-dissociation effects. We expect that in this case the disk would be unstable. This is exactly what we see. In Fig. 4c, we show the effects of nuclear reactions more clearly. The dotted curve and the solid curves are, as in Fig. 3b, the variation of velocity for the solution without and with shocks, respectively. The dot-dashed curve represents the velocity variation without a shock when the nuclear energy is included. The dashed curve is the corresponding solution when nucleosynthesis of the shocked branch is included. Both branches are unstable since the steady flow is subsonic at the inner edge. In these cases, the flow is expected to pass through the inner sonic point in a time-dependent manner and some sort of quasi-periodic oscillations cannot be ruled out.
### 3.2 Nucleosynthesis in Hot Flows
Case B.1: This case is chosen with such a set of parameters that a standing shock forms at $`x_s=13.9`$. A very low accretion rate is chosen so that the Compton cooling is negligible and the flow remains very hot (Comptonization factor $`F_{\mathrm{Compt}}=0.1`$). We show the results both for the shock-free branch (dashed) and the shocked branch (solid) of the solution. Figure 5a shows the comparison of the temperatures and densities (in units of $`10^{-20}`$ gm cm<sup>-3</sup>, to bring them onto the same plot). Figure 5b shows the comparison of the radial velocities. This behaviour is similar to that shown in Case A.2. Because the temperature is suitable for photo-dissociation, we chose a very small set of species in the network (only 21 species up to $`{}_{}{}^{11}B`$ are chosen). Figure 5c shows the comparison of the abundances of proton (p), $`{}_{}{}^{4}He`$ and neutron (n). In the absence of the shock, the breaking up of $`{}_{}{}^{4}He`$ into n and p takes place much closer to the black hole, while the shock hastens it due to higher temperature and density. Although initially the flow starts with $`Y_p=0.7425`$ and a $`{}_{}{}^{4}He`$ abundance of $`0.2380`$, at the end of the simulation only protons ($`Y_p\simeq 0.8786`$) and neutrons ($`Y_n\simeq 0.1214`$) remain and the rest of the species become insignificant.
Figure 5d shows the comparison of the height-integrated nuclear energy release (units are as Fig. 2d). As the flow leaves the Keplerian disk at $`x_K=481.4`$, the deuterium and $`{}_{}{}^{9}Be`$ are burnt instantaneously at the cost of some energy from the disk. At the end of deuterium burning at around $`x=200`$, the rp and proton capture processes (mainly via $`{}_{}{}^{11}B(p,\gamma )3^4He`$ which releases significant energy) and neutron capture ($`{}_{}{}^{3}He(n,p)T`$) take place, but further in, $`{}_{}{}^{3}He`$ (via $`{}_{}{}^{3}He(\gamma ,p)D`$) first and $`{}_{}{}^{4}He`$ (mainly via $`{}_{}{}^{4}He(\gamma ,n)^3He`$ and $`{}_{}{}^{4}He(\gamma ,p)T`$, $`T(\gamma ,n)D`$) subsequently, are rapidly dissociated. As soon as the entire helium is burnt out, the energy release becomes negligible. This is because there is nothing left other than free protons and neutrons and hence no more reactions take place and no energy is released or absorbed. The solid curve is for the branch with a shock and the very long dashed curve is for the shock-free branch. Inclusion of an opacity factor (which reduces photo-dissociation) shifts the burning towards the black hole. The disk is found to be completely stable even in presence of nucleosynthesis.
Case B.2: As discussed in Sect. 2, in extreme hard states, a black hole may accrete very little matter in the Keplerian component and very large amount of matter in the sub-Keplerian component. To simulate this we used B.1 parameters, but $`\dot{m}=4`$. The resulting solution is found to be unstable when shocks are present. In Fig. 5b, we superimposed velocity variation without nuclear energy (same as with nuclear energy as far as Case B.1 is concerned) and with nuclear energy. The dash-dotted curve next to the un-shocked branch and dashed curve next to the shocked branch show the resulting deviation. While the branch without shock still remains stable, the other branch is distinctly unstable as the steady-state solution is sub-sonic at the inner edge. The only solution available must be non-steady with oscillations near the sonic point.
Case B.3: In this case, the accretion rate is chosen to be even smaller ($`\dot{m}=0.001`$) and the polytropic index is chosen to be $`5/3`$. The maximum temperature reaches $`T_9^{\mathrm{max}}=47`$. After leaving the Keplerian flow, the temperature and velocity of the flow monotonically increase. Because of the excessive temperature, $`D`$ and $`{}_{}{}^{3}He`$ are photo-dissociated immediately after the flow leaves the Keplerian disk at $`x_K=84.4`$. At around $`x=30`$, all $`{}_{}{}^{4}He`$ is photo-dissociated exactly as in Case B.1. Subsequently, the flow contains only protons and neutrons and there is no more energy release from the nuclear reactions. This behaviour is clearly seen in Fig. 6. The notations are the same as in the previous run. This ultra-hot case is found to be stable since the energy release took place far away from the black hole where the matter was moving slowly and therefore the rate ($`Q_{\mathrm{nuc}}`$) was not high compared to that due to viscous dissipation (units are as in Fig. 2d).
Case B.4: In this case, the net accretion rate is low ($`\dot{m}=0.01`$) but the viscosity is high and the efficiency of emission is intermediate ($`f=0.2`$). That means that the temperature of the flow is high ($`F_{\mathrm{Compt}}=0.1`$, maximum temperature $`T_9^{\mathrm{max}}=13`$). Matter deviates from a Keplerian disk at around $`x_K=8.4`$. Assuming that the high viscosity is due to a stochastic magnetic field, protons would drift towards the black hole due to magnetic viscosity, but the neutrons will not (Rees et al. 1982). They will generally circle around the black hole till they decay. This principle has been used to do the simulation in this case. The modified composition in one sweep is allowed to interact with freshly accreting matter with the understanding that the accumulated neutrons do not drift radially. After a few iterations or sweeps the steady distribution of the composition is achieved. Figure 7 shows the neutron distribution in the sub-Keplerian region. The formation of a ‘neutron torus’ is very apparent in this result. In fact, the formation of a neutron disk is very generic in all the hot, highly viscous accretion flows as also seen in Cases B.1-B.3 (for details, see Paper 1). The nuclear reactions leading to the neutron torus formation are exactly the same as in the previous cases and are not described here (a schematic sketch of the sweep iteration is given below).
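A minimal sketch of this sweep iteration (the function `burn_one_pass` stands in for a full radial pass of the nuclear network and is only a placeholder, not the simulation code itself):

```python
def neutron_torus_sweeps(fresh_composition, burn_one_pass, n_sweeps=10):
    """Schematic sweep iteration for Case B.4: charged species drift inward and are
    lost, while the neutrons made in one sweep are retained and mixed with freshly
    accreting matter in the next sweep."""
    retained_Yn = 0.0
    for _ in range(n_sweeps):
        inflow = dict(fresh_composition)
        inflow["n"] = inflow.get("n", 0.0) + retained_Yn   # add the stored neutrons
        processed = burn_one_pass(inflow)                  # one pass of nucleosynthesis
        retained_Yn = processed["n"]                       # neutrons do not drift radially
    return retained_Yn                                     # approaches a steady value
```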
### 3.3 Nucleosynthesis in Cooler Flows
Case C.1: Here we choose a high-viscosity flow with a very high accretion rate. Matter deviates from the Keplerian disk very close to the black hole, at $`x_K=4.8`$. The flow in the centrifugal barrier is cooler (temperature maximum $`T_9^{\mathrm{max}}=0.8`$). Here clearly, high viscosity removes the centrifugal barrier completely and matter falls in almost freely. Due to the very short residence time, no significant change in the composition takes place. Only a small amount of proton capture (mainly due to $`{}_{}{}^{11}B(p,\gamma )3^4He`$, $`{}_{}{}^{16}O(p,\alpha )^{13}N`$, $`{}_{}{}^{15}N(p,\alpha )^{12}C`$, $`{}_{}{}^{18}O(p,\alpha )^{15}N`$, $`{}_{}{}^{19}F(p,\alpha )^{16}O`$) takes place. A small amount of deuterium dissociation also takes place, but it does not change the energetics significantly. The flow is not found to be unstable in this case.
Case C.2: This is a test case for the proto-galactic accretion flow. In the early phase of galaxy formation, the supply of matter is high, and the temperature of the flow is very low. The viscosity may or may not be very high, but we choose a very low (presumably, radiative) viscosity ($`\alpha =10^{-4}`$). The motivation is to use similar parameters to those used in CJA while studying the nucleosynthesis in thick accretion disks. The central mass is $`M=10^6M_{\odot }`$, the maximum temperature is $`T_9^{\mathrm{max}}\sim 0.2`$ and the Comptonization factor $`F_{\mathrm{Compt}}=0.001`$. The temperature variation is similar to Fig. 2a when scaled down by a factor of $`30`$ (basically by the ratio of the $`F_{\mathrm{Compt}}`$ values). The velocity variation is similar to Fig. 2b and is not repeated here. Due to the low temperature, there is no significant change in the nuclear abundance. Note that since thick accretion disks are rotation dominated, the residence time was very long in the CJA simulation and there was a significant change in composition even at lower temperatures. But in this case the flow radial velocity is very high and the residence time is shorter. The nuclear energy release is negligible throughout and is not shown.
## 4 Nucleosynthesis Induced Instability
CJA, while studying nucleosynthesis in cooler, mainly rotating disks, suggested that as long as the nuclear energy release is smaller than the gravitational energy release, the disk would be stable. In the present paper, we find that this suggestion is still valid. Indeed, even when momentarily the nuclear energy release or absorption is as high as the gravitational energy release (through viscous dissipation), the disk may be stable. For instance, in case A.1 (Fig. 2d) at around $`x=4`$ these rates are similar. Yet the velocity, temperature and density distributions (Fig. 2a-b) remain unchanged. In Case A.3, $`Q_{\mathrm{nuc}}`$ is several orders of magnitude greater than the viscous energy release $`Q^+`$ and the thermodynamic quantities are indeed disturbed to the extent that the flow with the same injected quantities (with the same density and velocity and their gradients) at the outer edge does not become supersonic at the inner edge. In these cases, the flow must be unsteady in an effort to search for the ‘right’ sonic point to enter into the black hole. On the other hand, ultra-hot cases like B.2 show a deviation in the non-shocked solution while the shocked solution is unstable.
The general behaviour suggests that the present model of accretion disks is more stable under nuclear reactions compared to the earlier, predominantly rotating model. Here, the radial velocity ($`v`$) spreads energy release or absorption radially to a distance $`v\tau _D(\rho ,T)=vN_D/\dot{N}_D`$ cm, where, $`N_D`$ is the number density of, say, Deuterium and $`\dot{N}_D`$ is its depletion rate. For a free fall, $`vx^{1/2}`$, while for most nuclear reactions, $`\tau _D(\rho ,T)x^n`$, with $`n>>1`$ (since reaction rates are strongly dependent on density and temperature). Thus, $`Q_{\mathrm{nuc}}`$ for the destruction of a given element spreads out farther away from the black hole, but steepens closer to it. Large $`dQ_{nuc}/dx`$ causes instability since the derivatives such as $`dv/dx`$ at the inner regions (including the sonic point) become imaginary.
## 5 Discussions and Conclusions
In this paper, we have explored the possibility of nuclear reactions in inner accretion flows. Because of high radial motion and ion pressure, matter deviates from a Keplerian disk close to the black hole. The temperature in this region is controlled by the efficiencies of bremsstrahlung and Comptonization processes (CT96, C97) and possible heating by magnetic fields (Shapiro 1973): for a higher Keplerian rate and higher viscosity, the inner edge of the Keplerian component comes closer to the black hole and the sub-Keplerian region becomes cooler (CT95). The nucleosynthesis in this soft state of the black hole is quite negligible. However, as the viscosity is decreased to around $`0.05`$ or less, the inner edge of the Keplerian component moves away and the Compton cooling becomes less efficient due to the paucity of the supply of soft photons. The sub-Keplerian region, though cooler by a factor of about $`F_{\mathrm{Compt}}=0.01`$ to $`0.03`$ with respect to the value obtained from the purely hydrodynamical calculations of C96, is still hot enough for significant nuclear reactions to modify the composition. The composition changes very close to the black hole, especially in the centrifugal-pressure-supported denser region, where matter is hotter and slower.
The degree of change in compositions which takes place in the Group A and B calculations, is very interesting and its importance must not be underestimated. Since the centrifugal-pressure-supported region can be treated as an effective surface of the black hole which may generate winds and outflows in the same way as the stellar surface (Chakrabarti 1998a,b; Das & Chakrabarti 1999), one could envisage that the winds produced in this region would carry away a modified composition and contaminate the atmosphere of the surrounding stars and the galaxy in general.
One could estimate the contamination of the galactic metallicity due to nuclear reactions. For instance, in Case A.1, $`{}_{}{}^{12}C`$, $`{}_{}{}^{16}O`$, $`{}_{}{}^{20}Ne`$, $`{}_{}{}^{30}Si`$, $`{}_{}{}^{44}Ca`$ and $`{}_{}{}^{52}Cr`$ are found to be over-abundant in some region of the disk. Assume that, on average, all the $`N`$ stellar black holes are of equal mass $`M`$ and have a non-dimensional accretion rate of around $`\dot{m}\sim 1`$ ($`\dot{m}=\dot{M}/\dot{M}_{\mathrm{Edd}}`$). Let $`\mathrm{\Delta }Y_i`$ (a few times $`10^{-3}`$) be the typical change in composition of this matter during the run and let $`f_w`$ be the fraction of the incoming flow that goes out as winds and outflows (could be from ten percent to more than a hundred percent when disk evacuation occurs), then in the lifetime of a galaxy (say, $`10^{10}`$yrs), the total ‘change’ in abundance of a particular species deposited in the surroundings by all the stellar black holes is given by:
$$<\mathrm{\Delta }Y_i>_{\mathrm{small}}\sim 10^{-7}\left(\frac{\dot{m}}{1}\right)\left(\frac{N}{10^6}\right)\left(\frac{\mathrm{\Delta }Y_i}{10^{-3}}\right)\left(\frac{f_w}{0.1}\right)\left(\frac{M}{10M_{\odot }}\right)\left(\frac{T_{\mathrm{gal}}}{10^{10}\mathrm{Yr}}\right)\left(\frac{M_{\mathrm{gal}}}{10^{11}M_{\odot }}\right)^{-1}.$$
$`(2)`$
The subscript ‘small’ is used here to represent the contribution from small black holes. We also assume a conservative estimate that there are $`10^6`$ such stellar black holes in a galaxy, the mass of the host galaxy is around $`10^{11}M_{\odot }`$ and the lifetime of the galaxy during which such reactions are going on is about $`10^{10}`$Yrs. We also assume that $`\mathrm{\Delta }Y_i\sim 10^{-3}`$ and a fraction of ten percent of matter is blown off as winds. The resulting $`<\mathrm{\Delta }Y_i>\sim 10^{-7}`$ may not be very significant if one considers averaging over the whole galaxy. However, for a lighter galaxy $`<\mathrm{\Delta }Y_i>`$ could be much higher. For example, for $`M_{\mathrm{gal}}=10^9M_{\odot }`$, $`<\mathrm{\Delta }Y_i>\sim 10^{-5}`$. This would significantly change the average abundances of $`{}_{}{}^{30}Si`$, $`{}_{}{}^{44}Ca`$ and $`{}_{}{}^{52}Cr`$. On the other hand, if one concentrates on the region of the outflows only, the change in abundance is the same as in the disk, and should be detectable (e.g., through line emissions). One such observation of stronger iron-line emission was reported for SS433 (Lamb et al. 1983; see also Arnould & Takahashi 1999, for a recent discussion on galactic contaminations).
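For convenience, the scaling relation (2) can be evaluated directly; the default arguments below are just the fiducial values quoted above (this helper is ours, for illustration only):

```python
def delta_Y_small(mdot=1.0, N_bh=1e6, dY=1e-3, f_w=0.1,
                  M_bh=10.0, T_gal=1e10, M_gal=1e11):
    """Average galactic contamination from stellar-mass black holes, Eq. (2).
    Masses in solar masses, T_gal in years, mdot in Eddington units."""
    return (1e-7 * (mdot / 1.0) * (N_bh / 1e6) * (dY / 1e-3) * (f_w / 0.1)
            * (M_bh / 10.0) * (T_gal / 1e10) / (M_gal / 1e11))

print(delta_Y_small())             # ~1e-7 for the fiducial parameters
print(delta_Y_small(M_gal=1e9))    # ~1e-5 for a lighter galaxy, as quoted above
```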
When we consider a case like A.3, we find that $`{}_{}{}^{12}C`$, $`{}_{}{}^{16}O`$, $`{}_{}{}^{20}Ne`$, and $`{}_{}{}^{28}Si`$ are increased by about $`10^{-3}`$ in some regions. In this case, the average change of abundance due to accretion onto the massive black hole situated at the galactic centre would be,
$$<\mathrm{\Delta }Y_i>_{\mathrm{big}}\sim \mathrm{few}\times 10^{-8}\left(\frac{\dot{m}}{1}\right)\left(\frac{\mathrm{\Delta }Y_i}{10^{-3}}\right)\left(\frac{f_w}{0.1}\right)\left(\frac{M}{10^6M_{\odot }}\right)\left(\frac{T_{\mathrm{gal}}}{10^{10}\mathrm{Yr}}\right)\left(\frac{M_{\mathrm{gal}}}{10^{11}M_{\odot }}\right)^{-1}.$$
$`(3)`$
Here, we have put ‘big’ as the subscript to indicate the contribution from the massive black hole. Even for a lighter galaxy, e.g., of mass $`M_{\mathrm{gal}}=10^9M_{\odot }`$, $`\mathrm{\Delta }Y_i=10^{-6}`$, which may not be significant. If one considers only the regions of outflows, the contamination may not be negligible.
A few related questions have been asked lately: Can lithium be produced in black hole accretion? We believe not. The spallation reactions (Jin 1990; Yi & Narayan 1997) could in principle produce such elements, assuming that a helium beam hits a helium target in the disk. Using a full network, rather than only the He–He reaction, we find that the hotter disks where spallation would have been important also photo-dissociate (particularly due to the presence of photons from the Keplerian disk) helium to deuterium and then to protons and neutrons before any significant lithium could be produced. Even when photo-dissociation is very low (when the Keplerian disk is far away, for instance), or when a late-type stellar composition is taken as the initial composition, we find that the $`{}_{}{}^{7}Li`$ production is insignificant, particularly if one considers more massive black holes ($`M\sim 10^8M_{\odot }`$).
Recently, it has been reported by several authors (Martin et al. 1992; 1994; Filippenko et al. 1995; Harlaftis et al. 1996) that a high abundance of $`Li`$ is observed in late-type stars which are also companions of black hole and neutron star candidates. This is indeed surprising since the theory of stellar evolution predicts that these stars should have at least a factor of ten lower $`Li`$ abundance. These workers have suggested that this excess $`Li`$ could be produced in the hot accretion disks. However, in Paper 1 as well as in our Cases A and B computations we showed that $`Li`$ is not likely to be produced in accretion disks. Indeed, we ran several cases with a mass fraction of He as high as 0.5 to 0.98, but we are still unable to produce $`Li`$ with a mass fraction more than $`10^{-10}`$. Recent work of Guessoum & Kazanas (1999) agrees with our conclusion that profuse neutrons would be produced in the disk. They further suggested that these energetic neutrons can produce adequate $`Li`$ through spallation reactions with the $`C`$, $`N`$, and $`O`$ that are present in the atmospheres of these stars. For instance, in Cases B.1 and B.3 we see that neutrons could have an abundance of $`0.1`$ in the disk. Since the production rate is similar to what Guessoum & Kazanas (1999) found, $`Li`$ should also be produced on the stellar surface at a similar rate.
What would be the neutrino flux on earth if nucleosynthesis does take place? The energy release by neutrinos (the pair neutrino process, the photoneutrino process and the plasma neutrino process) can be calculated using the prescription of Beaudet et al. (1967, hereafter BPS; see also Itoh et al. 1996) provided the pairs are in equilibrium with the radiation field. However, in the case of accretion disks, the situation is significantly different from that inside a star (where matter is in static equilibrium). Because of rapid infall, matter density is much lower and the infall time scale could be much shorter compared to the time-scale of various neutrino processes, especially the pair and photo-neutrino processes. As a result, the pair density need not attain equilibrium. One important thing in this context is the opacity ($`\tau _{\mathrm{pair}}`$) of the pair process. Following treatments of Colpi et al. (1984) we find that $`\tau _{\mathrm{pair}}<1`$ for all our cases, and therefore pair process is expected to be negligible (for Case B.2, $`\tau _{\mathrm{pair}}`$ is the highest \[$`0.9`$\]). Park (1990a,b), while studying pair creation processes in spherical accretion, shows that even in the most favourable condition, the ratio of positron ($`n_+`$) and ion ($`n_i`$) is no more than $`0.05`$. A simple analysis suggests that neutrino production rate is many orders of magnitude smaller compared to what the equilibrium solutions of BPS and Itoh et al. would predict. Thus, we can safely ignore the neutrino luminosity.
When the nuclear energy release or absorption is comparable to the gravitational energy release through viscous processes, we find that the disk is still stable. Stability seems to depend on how steeply the energy is released or absorbed in the disk. This in turn depends on $`\tau _Dv`$, the distance traversed inside the disk by the element contributing the highest change of energy before depleting significantly. Thus, an ultra-hot case (Group B) can be stable even though a hot (Group A) case can be unstable as we explicitly showed by including nuclear energy release. In these ‘unstable’ cases, we find that the steady flow does not satisfy the inner boundary condition and becomes subsonic close to the horizon. This implies that in these cases the flow must become non-steady, constantly searching for the supersonic branch to enter into the black hole. This can induce oscillations as have been found elsewhere (Ryu et al. 1997). In such cases, one is required to do time dependent simulations (e.g., Molteni et al. 1994, 1996) to include nuclear reactions. This will be attempted in future.
We thank the referee for many helpful comments. This research is partially supported by DST grant under the project “Analytical and numerical studies of astrophysical flows around black holes and neutron stars” with SKC. |
no-problem/9912/astro-ph9912130.html | ar5iv | text | # 1 Why (old) open clusters?
## 1 Why (old) open clusters?
Open clusters (OC’s) are important to study the properties of the Galactic disk, since they offer information on ages, both absolute and relative, and on the metallicity evolution, both in space and time. In fact, OC’s are found in different regions of the disk, cover a large interval in age (from a few Myr to about 10 Gyr) and in metallicity (Z=Z/20 to supersolar). Moreover, their ages and distances are accurate, much more than for any other disk object like e.g., single field stars. Also, since OC’s orbits do not generally take them too far away from their birthplaces, we may assume that their current position in the Galaxy is representative also of their original one, and that we are not smearing properties by mixing populations, always a problem when dealing with field stars.
To define at least a reliable ranking of the open clusters' properties we need a large sample of objects whose age, distance and metallicity are accurately and homogeneously known: see e.g., Janes & Phelps (1994), Carraro & Chiosi (1994), Friel (1995), Twarog et al. (1997). These authors have tried to define such samples, but had to sacrifice something in the quality of the data, often taken from the literature, and not as excellent as attainable with today’s means, and/or on the homogeneity of the treatment to get a sample as large as possible.
We have started a few years ago a project, admittedly ambitious, to build a homogeneous and statistically significant sample of clusters of various ages, metallicities and positions in the disk. We work only on new photometry taken by our group, or on literature data of comparable quality. The clusters’ properties are derived using the synthetic colour-magnitude diagram method (Tosi et al. 1991). We compare the observed diagram to a grid of synthetic ones, generated from a series of homogeneous sets of theoretical evolutionary tracks by several authors. The comparison is based both on morphology (e.g., main sequence shape, red clump position, gaps, etc.) and on population ratios (e.g., the luminosity function). We have already applied this method to nine open clusters, ranging in age from 0.1 to about 10 Gyr (see Bragaglia et al. 1999 for a recent review), and we have data for ten more.
We plan now to turn to spectroscopy, in particular to high resolution, in order to get precise information on the clusters’ metallicity.
## 2 What do we know about open clusters’ metallicities?
Of all the OC’s we have studied photometrically, four have the metallicity measured by high dispersion spectroscopy, the best technique to get reliable results, with uncertainties of less than about 0.1 dex in \[Fe/H\] (see Gratton 1999 for a recent review). If we look at the whole sample of old OC’s this is true for only 18 of the about 80 known. There are other ways to get the metallicity, like low resolution spectroscopy calibrated to high-res, as done by Janes and Friel (see e.g., Friel 1995; this allows uncertainties of about 0.15 dex), or Washington, UBV and DDO photometry (with uncertainties of about 0.2 dex). A little more than half of the old OC’s sample has the metal abundance measured by at least one of the above methods. An interesting feature comes out of the comparison between metallicities measures by high-res spectroscopy and any of the other methods: they all underestimate \[Fe/H\] by at least 0.1 dex on average (even if the individual values may be off by much more).
Apart from a possible systematic effect, does the technique used influence the derived general properties of the cluster sample? If we consider the old OC’s we can study the existence of the radial metallicity gradient using metallicities derived by all methods. Figure 1 shows the different cases; at least part of the observed dispersion at all Galactocentric distances is real. If we determine the slope of the metallicity gradient we find values very similar to each other within the errors; they are indicated in Figure 1, and are of the order of –0.1 dex kpc<sup>-1</sup>. Notice the larger slope obtained for the high-res values; the sample is still too small to decide whether it is really significant, and it would be interesting to have many more high-res determinations to put the absolute and relative values of OC’s metallicities on firmer grounds.
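The slopes quoted here are simply the coefficients of straight-line fits of \[Fe/H\] against Galactocentric distance; a minimal example of such a fit is sketched below (the numbers are invented placeholders, not the cluster data of Figure 1):

```python
import numpy as np

# Placeholder values of Galactocentric distance (kpc) and [Fe/H] (dex);
# illustrative only, they do not reproduce our cluster sample.
r_gc = np.array([7.0, 8.5, 9.0, 10.5, 12.0, 14.0, 16.0])
feh  = np.array([0.15, 0.05, -0.02, -0.20, -0.30, -0.45, -0.60])

slope, intercept = np.polyfit(r_gc, feh, 1)      # least-squares straight line
print(f"d[Fe/H]/dR_GC = {slope:.2f} dex/kpc")    # of the order of -0.1 dex/kpc
```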
Regarding the slope of the radial gradient, it seems irrelevant which method is used to determine metallicities. On this basis, to study whether the metallicity gradient has changed with time, we use the Friel (1995) values, since they are the most numerous for the old OC’s (36 OC’s, compared to 26 with Washington, 14 with UBV and 20 with DDO photometry). If we divide the whole sample in four age intervals (age $`<`$ 1, 1–3, 3–6, $`>`$ 6 Gyr) we find that there is no variation of the gradient slope with age (see Figure 2). There may be a slight indication of a shallower slope for the younger OC’s, but it’s well within the errors.
Finally, using samples with metallicities obtained homogeneously, we do not see any indication of the discontinuity at R<sub>GC</sub> of about 10 kpc assumed by Twarog et al. (1997) as an indication of the edge of the original thick disk.
## 3 The metallicity of NGC 6253 and future work
Our first direct contribution to the knowledge of OC’s metallicities comes from NGC 6253, a cluster in the direction of the Galactic centre, with age $``$ 3 Gyr, and metallicity solar or (more probably) twice solar as derived from photometry alone (Bragaglia et al. 1997). We have high resolution spectra of four giant stars, taken with EMMI mounted on NTT as part of a backup program in a night not suitable for photometry. As a result, the signal-to-noise ratio (about 35) and the resolution (about 15,000) are not as good as one would desire to derive precise abundances from fine analysis of high resolution spectra. This is especially true for stars of metallity so high that continuum tracing is a very difficult task, and for this reason we compared the observed spectra with a field star of similar metallicity ($`\zeta `$ Cygni, \[Fe/H\]=+0.05), as shown in Figure 3.
On the basis of their radial velocities, all the four stars are cluster members. We have only studied the iron content of two red clump stars (see Carretta et al. 1999 for a more detailed description of the followed procedure), and derived for them the following values: #2971: \[Fe/H\] = +0.33, and #2508: \[Fe/H\]= +0.39. The overall uncertainty, resulting from the measured equivalent widths and the adopted atmospheric parameters, is 0.15–0.20 dex, so we may assume for NGC 6253: \[Fe/H\] = +0.36 $`\pm `$ 0.20 dex. This is in very good agreement with the photometric study and with the value found by Piatti et al. (1998) from integrated spectra. NGC 6253 appears in Figure 1 (high resolution panel) as the innermost cluster of the whole sample, and its metallicity conforms to the Galactocentric gradient found for the other clusters.
We have recently acquired spectra of better quality of the same stars in NGC 6253, and we plan to analyze them to derive the abundances of other elements. Our plan for the near future is to obtain high-resolution spectra in several open clusters in order to derive accurate and homogeneous metallicities of a large sample of clusters. This is a prerequisite for a really meaningful study of the history of the chemical enrichment of the Galactic disk.
no-problem/9912/hep-th9912169.html | ar5iv | text | # 1 Introduction
## 1 Introduction
The CPT theorem is one of the most important results in flat-spacetime quantum field theory \[1–3\]. The theorem states that the combined operation (CPT) of charge conjugation (C), parity reflection (P) and time reversal (T) is an invariance of local relativistic quantum field theory, even if some of the separate invariances do not hold. Any CPT violation is, therefore, believed to require fundamentally different physics, for example quantum gravity or strings . It may, then, come as a surprise that a particular class of local relativistic quantum field theories has given an indication of CPT violation .
The specific theories considered in Ref. are non-Abelian chiral gauge theories with one compact spatial dimension singled out by the prescribed four-dimensional spacetime manifold $`M`$. An example would be $`SU(3)`$ Yang–Mills theory with a single triplet of left-handed Weyl fermions , defined over the flat spacetime manifold $`M`$ $`=`$ $`\mathrm{IR}^3\times S^1`$, which corresponds to the usual Minkowski spacetime with one spatial coordinate compactified to a circle. The perturbative chiral gauge anomalies of this theory \[10–13\] can be cancelled by the introduction of an octet of elementary pseudoscalar fields with the standard gauged Wess-Zumino term in the action . There remains a nonperturbative $`SU(3)`$ gauge anomaly , which is similar to, but not the same as, the Witten $`SU(2)`$ gauge anomaly . In this case, however, there exists a local counterterm for the action which restores $`SU(3)`$ gauge invariance, but at the price of Lorentz noninvariance and CPT violation . In other words, the remaining non-Abelian chiral gauge anomaly is transmuted into a CPT anomaly. (The situation is analogous to that of certain three-dimensional non-Abelian gauge theories with massless fermions, where gauge invariance is restored at the price of P and T violation and the non-Abelian gauge anomaly is transmuted into the so-called parity anomaly \[18–23\].)
The particular counterterm presented in Ref. is the spacetime integral of a Chern–Simons density involving three of the four gauge potentials. (The precise definition will be given later.) Such a term obviously violates local Lorentz invariance. Also, the integrand of the counterterm is CPT-odd, whereas the standard Yang–Mills action density is CPT-even. This Lorentz and CPT noninvariance would show up, to first order, as a direction-dependent, but wavelength-independent, rotation of the linear polarization of a plane wave of gauge fields traveling *in vacuo* .
We have obtained some heuristic arguments of why the counterterm must violate Lorentz and CPT invariance, but the uniqueness of the counterterm has not been established. If, on the other hand, the non-Abelian chiral gauge anomaly is really as discussed above, then the effective gauge field action due to the chiral fermions, formulated and regularized in a gauge-invariant manner, must already exhibit some sign of CPT violation (and Lorentz noninvariance). It is the goal of the present paper to establish this CPT violation. Maintaining chiral gauge invariance, we will find for spacetime manifolds with the appropriate Cartesian product structure a CPT-violating term in the effective gauge field action which is precisely equal to the counterterm presented in Ref. . This CPT-violating term can appear in anomaly-free chiral gauge theories but not in vectorlike gauge theories such as quantum electrodynamics.
The outline of this paper is as follows. In Section 2, we give the setup of the problem and establish our notation. In Section 3, we choose some simple background gauge potentials and find a Chern–Simons term in the effective gauge field action. The calculation applies to both Abelian and non-Abelian chiral gauge groups, provided the theory is free of chiral gauge anomalies. In Section 4, we show that the Chern–Simons term found violates CPT. In other words, these particular chiral gauge field theories have a CPT anomaly if gauge invariance is maintained. In Section 5, we give some generalizations of our basic result. Also, we exhibit a class of chiral gauge theories which necessarily have the CPT anomaly, as long as the spacetime manifold has the appropriate topology. Remarkably, the so-called Standard Model of elementary particle physics (with three families of quarks and leptons) can be embedded in some of these anomalous theories. In Section 6, finally, we present some remarks on how the CPT theorem is circumvented and discuss two possible applications of the CPT anomaly.
## 2 Setup
For definiteness, we take spacetime to be the flat Euclidean manifold
$$M=\mathrm{IR}^3\times S^1,$$
(2.1)
with Cartesian coordinates $`x^m\in \mathrm{IR}^3`$, $`m=1`$, 2, 3, and $`x^4\in S^1`$. At the end of the calculation, we can make the Wick rotation from Euclidean to Lorentzian metric signature, with $`x^4`$ corresponding to a compact spatial coordinate and $`x^1`$, say, to the time coordinate. The length of the circle in the 4-direction is denoted by $`L`$. Throughout this paper, Latin indices $`k`$, $`l`$, $`m`$, etc. run over the coordinate labels 1, 2, 3, and Greek indices $`\kappa `$, $`\lambda `$, $`\mu `$, etc. over 1, 2, 3, 4. Repeated coordinate (and internal) indices are summed over. Also, natural units are used for which $`c=\hbar =k=1`$.
We will first consider non-Abelian chiral gauge theories with a *single* irreducible representation of massless left-handed fermions. Specifically, we take the standard chiral Yang–Mills theory with gauge group $`G=SO(10)`$ and left-handed Weyl fermions in the complex representation $`R_L=\mathrm{𝟏𝟔}`$ . (This particular model may have some relevance for elementary particle physics, as part of a so-called grand-unified theory. See Refs. and references therein.) The left-handed fermion field is then $`\psi _{L\alpha i}(x)`$, with a spinor index $`\alpha =1`$, 2, and an internal symmetry index $`i=1,\dots ,16`$. The gauge potentials are $`A_\mu (x)\equiv eA_\mu ^a(x)T^a`$, with $`e`$ the gauge coupling constant and $`T^a`$, $`a=1,\dots ,45`$, the anti-Hermitian generators of the Lie group $`SO(10)`$ in the representation chosen, normalized by $`\mathrm{tr}(T^aT^b)=-\frac{1}{2}\delta ^{ab}`$. The fermion and gauge fields are periodic in $`x^4`$, with period $`L`$.
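As a quick numerical illustration of this normalization convention for anti-Hermitian generators (done here for the fundamental representation of $`SU(2)`$ rather than the $`\mathrm{𝟏𝟔}`$ of $`SO(10)`$, purely to keep the check small; the snippet is illustrative and not part of the original derivation):

```python
import numpy as np

# Pauli matrices; T^a = -(i/2) sigma^a are anti-Hermitian su(2) generators.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
T = [-0.5j * s for s in sigma]

for a in range(3):
    assert np.allclose(T[a].conj().T, -T[a])                  # anti-Hermiticity
    for b in range(3):
        expected = -0.5 if a == b else 0.0
        assert np.isclose(np.trace(T[a] @ T[b]), expected)    # tr(T^a T^b) = -delta^{ab}/2
print("normalization tr(T^a T^b) = -(1/2) delta^{ab} verified for su(2)")
```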
In this paper, we are interested in the effective gauge field action obtained from integrating out the chiral fermions, while maintaining gauge invariance. Formally, we have the following functional integral :
$$\mathrm{exp}\left\{-\mathrm{\Gamma }_\mathrm{W}[A]\right\}=\int 𝒟\psi _L^{\dagger }𝒟\psi _L\mathrm{exp}\left\{-\mathrm{I}_\mathrm{W}[\psi _L^{\dagger },\psi _L,A]\right\},$$
(2.2)
for the Euclidean Weyl action
$`\mathrm{I}_\mathrm{W}[\psi _L^{\dagger },\psi _L,A]`$ $`=`$ $`{\displaystyle \int _M}\mathrm{d}^4x\psi _L^{\dagger }i\sigma _{-}^\mu \left(\partial _\mu +A_\mu \right)\psi _L,`$ (2.3)
with $`\sigma _\pm ^\mu \equiv (\pm i\sigma ^m,\mathbf{1})`$ defined in terms of the $`2\times 2`$ Pauli matrices $`\sigma ^m`$ and the $`2\times 2`$ identity matrix $`\mathbf{1}`$. The $`SO(10)`$ chiral gauge theory is anomaly free and the effective gauge field action is invariant under local gauge transformations,
$$\mathrm{\Gamma }_\mathrm{W}[g(A+\mathrm{d})g^{-1}]=\mathrm{\Gamma }_\mathrm{W}[A],\qquad g(x)\in G,$$
(2.4)
with $`\mathrm{d}`$ the exterior derivative for differential forms ($`\mathrm{d}g\equiv (\partial g/\partial x^\mu )\mathrm{d}x^\mu `$) and $`A\equiv A_\mu \mathrm{d}x^\mu `$ a one-form taking values in the Lie algebra (here, in the defining representation).
If the chiral gauge theory considered is not anomaly free (for example, the theory mentioned in the Introduction, with $`G`$ $`=`$ $`SU(3)`$ and $`R_L`$ $`=`$ $`\mathrm{𝟑}`$), then the theory has to be modified in order to make it gauge invariant. One way to restore gauge invariance is by averaging over the gauge orbits,
$$\mathrm{exp}\left\{-\mathrm{\Gamma }[A]\right\}\equiv \int 𝒟h\mathrm{exp}\left\{-\mathrm{\Gamma }_\mathrm{W}[h(A+\mathrm{d})h^{-1}]\right\}.$$
(2.5)
But the interpretation of the resulting theory with the dimensionless variables $`h(x)\in G`$ is not entirely clear . Another way to restore gauge invariance is by introducing further fermions, which cancel the chiral anomalies of the original fermions . In Section 5, we will discuss some of these theories with reducible fermion representations. All of these complications are, however, not necessary for the anomaly-free chiral gauge theory considered here, which has the gauge group $`G=SO(10)`$ and the fermion representation $`R_L=\mathrm{𝟏𝟔}`$.
At this point, there is no need to be explicit about the regularization of the effective gauge field action $`\mathrm{\Gamma }_\mathrm{W}[A]`$. One possible regularization would be the introduction of a spacetime lattice cutoff, which (temporarily?) sacrifices Lorentz invariance but keeps the gauge and chiral invariances intact. (See Refs. and references therein.) This last condition on the regularization method is important, since we intend to look for symmetry violations being forced upon us by maintaining exact gauge invariance in a theory with genuine chiral fermions.
## 3 Calculation
As discussed in the Introduction, our goal is to establish the presence of a CPT-violating term in the effective gauge field action for the theory defined in Section 2. The strategy is to simplify the calculation as much as possible. We, therefore, take the case of $`x^4`$-independent $`SO(10)`$ gauge potentials, with the one gauge potential corresponding to the special direction (here, $`x^4\in S^1`$ for the Euclidean spacetime manifold $`M=\mathrm{IR}^3\times S^1`$) vanishing altogether,
$$A_m(\stackrel{}{x},x^4)=\stackrel{~}{A}_m(\stackrel{}{x}),A_4(\stackrel{}{x},x^4)=\stackrel{~}{A}_4(\stackrel{}{x})=0.$$
(3.1)
Also, the gauge potentials considered vanish on the boundary of a ball $`B^3`$ embedded in $`\mathrm{IR}^3`$, and outside of it,
$$\stackrel{~}{A}_m(\stackrel{}{x})=0\quad \mathrm{for}\quad |\stackrel{}{x}|\geq R,$$
(3.2)
with $`R`$ a fixed radius which can be taken to infinity at the end of the calculation.
The left-handed fermion field $`\psi _L`$ in the complex representation $`R_L=\mathrm{𝟏𝟔}`$ of $`SO(10)`$ and the independent fermion field $`\psi _L^{\dagger }`$ in the conjugate representation can be expanded in Fourier modes
$`\psi _L(\stackrel{}{x},x^4)`$ $`=`$ $`{\displaystyle \sum _{n=-\infty }^{\infty }}e^{+2\pi inx^4/L}\xi _n(\stackrel{}{x}),`$
$`\psi _L^{\dagger }(\stackrel{}{x},x^4)`$ $`=`$ $`{\displaystyle \sum _{n=-\infty }^{\infty }}e^{-2\pi inx^4/L}\xi _n^{\dagger }(\stackrel{}{x}).`$ (3.3)
The Weyl action (2.3) for the gauge potentials (3.1) then becomes
$`\mathrm{I}_\mathrm{W}`$ $`=`$ $`{\displaystyle \sum _{n=-\infty }^{\infty }}{\displaystyle \int _{\mathrm{IR}^3}}\mathrm{d}^3x\,L\,\xi _n^{\dagger }\left(\sigma ^m(\partial _m+\stackrel{~}{A}_m)-2\pi n/L\right)\xi _n.`$ (3.4)
Redefining the two independent sets of spinor fields
$$\chi _n(\stackrel{}{x})\equiv iL\xi _n(\stackrel{}{x}),\qquad \chi _n^{\dagger }(\stackrel{}{x})\equiv \xi _n^{\dagger }(\stackrel{}{x}),$$
(3.5)
the action reads
$`\mathrm{I}_\mathrm{W}`$ $`=`$ $`{\displaystyle \sum _{n=-\infty }^{\infty }}{\displaystyle \int _{\mathrm{IR}^3}}\mathrm{d}^3x\,\chi _n^{\dagger }\left(-i\sigma ^m(\partial _m+\stackrel{~}{A}_m)+i\,2\pi n/L\right)\chi _n`$ (3.6)
$`\equiv `$ $`{\displaystyle \sum _{n=-\infty }^{\infty }}\mathrm{I}_{\mathrm{\hspace{0.17em}3}}[\chi _n^{\dagger },\chi _n,\stackrel{~}{A}].`$
We have thus obtained an infinite set of three-dimensional Euclidean Dirac fields $`\chi _n(\stackrel{}{x})`$ with masses $`2\pi n/L`$, all of which interact with the *same* three-dimensional gauge potentials $`\stackrel{~}{A}_m(\stackrel{}{x})`$. (This is, of course, reminiscent of Kaluza-Klein theory, which reduces five spacetime dimensions to four. See Refs. and references therein.)
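The mass assignment can be made explicit for a free momentum mode: acting on a plane wave, the operator in (3.6) reduces to a $`2\times 2`$ matrix whose singular values are $`\sqrt{k^2+(2\pi n/L)^2}`$, i.e., those of a three-dimensional fermion of mass $`2\pi n/L`$. A small numerical illustration (free fields only, not part of the original argument):

```python
import numpy as np

L_circle, n = 1.0, 3
M_n = 2 * np.pi * n / L_circle            # Kaluza-Klein mass of the n-th mode
k = np.array([0.7, -1.2, 0.4])            # arbitrary 3-momentum

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# Free-field form of the operator in Eq. (3.6) acting on a plane wave exp(i k.x):
D = sum(ki * si for ki, si in zip(k, sigma)) + 1j * M_n * np.eye(2)
print(np.linalg.svd(D, compute_uv=False))   # both singular values ...
print(np.sqrt(k @ k + M_n**2))              # ... equal sqrt(k^2 + M_n^2)
```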
For the special gauge potentials (3.1), the effective action (2.2) now factorizes to
$$\mathrm{exp}\left\{-\mathrm{\Gamma }_\mathrm{W}[\stackrel{~}{A}]\right\}\propto \prod _{n=-\infty }^{\infty }\left(\int 𝒟\chi _n^{\dagger }𝒟\chi _n\mathrm{exp}\left\{-\mathrm{I}_{\mathrm{\hspace{0.17em}3}}[\chi _n^{\dagger },\chi _n,\stackrel{~}{A}]\right\}\right),$$
(3.7)
with the three-dimensional action $`\mathrm{I}_{\mathrm{\hspace{0.17em}3}}`$ as defined in (3.6). Each factor in (3.7) can be regularized separately by the introduction of appropriate *three-dimensional* Pauli–Villars fields . This ultraviolet regularization preserves the restricted gauge invariance
$$\chi _n\to U_r(\stackrel{~}{g})\chi _n,\qquad \stackrel{~}{A}_m^{(r)}\to U_r(\stackrel{~}{g})\left(\stackrel{~}{A}_m^{(r)}+\partial _m\right)U_r^{-1}(\stackrel{~}{g}),\qquad \stackrel{~}{g}(\stackrel{}{x})\in G,$$
(3.8)
with $`U_r`$ the appropriate unitary representation for the fermions (here, $`r=\mathrm{𝟏𝟔}`$ and $`G=SO(10)`$) and gauge functions $`\stackrel{~}{g}(\stackrel{}{x})=\mathbf{1}`$ for $`|\stackrel{}{x}|\geq R`$. Even though this is not the full gauge invariance (2.4) of the theory, it turns out to be sufficient for our purpose (see Section 4).
In addition to the ultraviolet divergences in the separate factors of (3.7), which are regularized by the corresponding three-dimensional Pauli–Villars fields, there are also infrared divergences in the $`n=0`$ factor. These infrared divergences can be regularized by imposing antiperiodic boundary conditions for the Dirac (and Pauli–Villars) fields on the surface of the ball $`B^3`$, where the gauge potentials (3.2) vanish.
The massive Pauli–Villars regulator fields for the $`n=0`$ factor of (3.7), viewed as $`x^4`$-independent four-dimensional fields, introduce a breaking of Lorentz and CPT invariance in the four-dimensional context. This breaking will show up later as a finite remnant in the effective gauge field action. (Preliminary results seem to indicate that this is also the case for the lattice regularization mentioned in the last paragraph of Section 2.)
For the present calculation, it is sufficient to introduce for each (anticommuting) field $`\chi _n(\stackrel{}{x})`$ with mass $`M_n\equiv 2\pi n/L`$ a single (commuting) Pauli–Villars field $`\varphi _n(\stackrel{}{x})`$ with mass $`\mathrm{\Lambda }_0`$ for $`n=0`$ and $`\mathrm{\Lambda }_n\equiv M_n+\mathrm{sign}(n)\mathrm{\Lambda }`$ for $`n\ne 0`$, where $`\mathrm{\Lambda }`$ is taken to be positive. Formally, this gives for (3.7) the following product:
$$\mathrm{exp}\left\{-\mathrm{\Gamma }_\mathrm{W}[\stackrel{~}{A}]\right\}\propto \prod _k\frac{\lambda _k}{\lambda _k+i\mathrm{\Lambda }_0}\left(\prod _{l=1}^{\infty }\frac{\lambda _k^2+M_l^2}{\lambda _k^2+(M_l+\mathrm{\Lambda })^2}\right),$$
(3.9)
in terms of the real eigenvalues $`\lambda _k`$ of the massless three-dimensional Dirac operator $`-i\sigma ^m(\partial _m+\stackrel{~}{A}_m)`$. The factors in (3.7) with $`n=\pm l`$, for $`l>0`$, thus combine to give a *real* contribution to the effective gauge field action $`\mathrm{\Gamma }_\mathrm{W}[\stackrel{~}{A}]`$. Moreover, it is clear that the spectral flow of the full three-dimensional Dirac operator as given in (3.6) can occur only in the $`n=0`$ sector (there is a mass gap for $`n\ne 0`$), and that the potential non-Abelian gauge anomaly , which shows up in the imaginary part of $`\mathrm{\Gamma }_\mathrm{W}[\stackrel{~}{A}]`$, resides there.
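As a simple way to see where the sign-dependent imaginary part comes from, one can evaluate the phase of the $`n=0`$ factors in (3.9) for a toy spectrum: for $`|\mathrm{\Lambda }_0|`$ much larger than the eigenvalues, each factor $`\lambda _k/(\lambda _k+i\mathrm{\Lambda }_0)`$ contributes a phase close to $`-(\pi /2)\,\mathrm{sign}(\lambda _k)\,\mathrm{sign}(\mathrm{\Lambda }_0)`$, so that the sign of the regulator mass controls the sign of the imaginary part. The following check uses random numbers and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0 * rng.normal(size=12)          # toy eigenvalues of the n=0 Dirac operator
Lambda0 = 1.0e4                          # Pauli-Villars mass, |Lambda0| >> |lambda_k|

phase = np.sum(np.angle(lam / (lam + 1j * Lambda0)))
estimate = -(np.pi / 2) * np.sign(Lambda0) * np.sum(np.sign(lam))
print(phase, estimate)                   # agree up to corrections of order lambda/Lambda0
```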
The imaginary part of the effective gauge field action for massless three-dimensional Dirac fermions, with Pauli–Villars regularization to maintain gauge invariance, has already been calculated . Revisiting the perturbative calculation, we have for the $`n=0`$ sector of our non-Abelian $`SO(10)`$ gauge theory the one-loop result
$$\mathrm{\Gamma }_\mathrm{W}^{n=0}[\stackrel{~}{A}]\supset i\int _{B^3}\mathrm{d}^3x\,s_0\,\pi \,\omega _{\mathrm{CS}}[\stackrel{~}{A}_1,\stackrel{~}{A}_2,\stackrel{~}{A}_3],$$
(3.10)
in terms of a sign factor $`s_0=\pm \mathrm{\hspace{0.17em}1}`$ whose origin will be explained shortly and the Chern–Simons density
$$\omega _{\mathrm{CS}}[A_1,A_2,A_3]\equiv \frac{1}{16\pi ^2}ϵ^{klm}\mathrm{tr}\left(A_{kl}A_m-\frac{2}{3}A_kA_lA_m\right),$$
(3.11)
with indices $`k`$, $`l`$, $`m`$, running over 1, 2, 3. Here, $`ϵ^{klm}`$ is the completely antisymmetric Levi-Civita symbol, normalized to $`ϵ^{123}=+1`$, and $`A_{kl}\equiv \partial _kA_l-\partial _lA_k+[A_k,A_l]`$ is the field strength tensor for the gauge potential $`A_m\equiv eA_m^aT^a`$, with gauge coupling constant $`e`$ and anti-Hermitian Lie group generators $`T^a`$, normalized by $`\mathrm{tr}(T^aT^b)=-\frac{1}{2}\delta ^{ab}`$.
The sign ambiguity $`s_0`$ in (3.10) traces back to the parity-violating Pauli–Villars mass $`\mathrm{\Lambda }_0`$ used to regularize the ultraviolet divergences of the three-dimensional Feynman diagrams. (Here, parity violation is meant in the three-dimensional sense. As will become clear in the next section, three-dimensional parity corresponds effectively to CPT in the four-dimensional context .) The factor $`s_0`$ in (3.10) comes, in fact, from a factor $`\mathrm{\Lambda }_0/|\mathrm{\Lambda }_0|`$ out of the momentum integrals. The triangle diagram, for example, gives in the limit $`|\mathrm{\Lambda }_0|\to \infty `$
$$\pi ^{-2}\int _0^{\infty }dq\,4\pi q^2\,\mathrm{\Lambda }_0(q^2+\mathrm{\Lambda }_0^2)(q^2+\mathrm{\Lambda }_0^2)^{-3}=\mathrm{\Lambda }_0/|\mathrm{\Lambda }_0|\equiv s_0,$$
(3.12)
with the explicit factor $`\mathrm{\Lambda }_0(q^2+\mathrm{\Lambda }_0^2)`$ from the spinor trace in the integrand on the left-hand side. It is also important that the infrared divergences of the three-dimensional Feynman diagrams without Pauli–Villars fields are *not* regularized by the introduction of a small Dirac mass $`\lambda _0`$, which would again violate parity invariance, but that they are kept under control by the antiperiodic boundary conditions imposed on the fermions (turning the momentum integrals into sums).
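The elementary momentum integral in (3.12) can also be checked directly; the following snippet (a plain numerical cross-check, nothing more) confirms that the result is just the sign of $`\mathrm{\Lambda }_0`$:

```python
import numpy as np
from scipy.integrate import quad

def s0(Lambda0):
    """pi^-2 * int_0^infty dq 4 pi q^2 Lambda_0 (q^2+Lambda_0^2)^-2, cf. Eq. (3.12)."""
    integrand = lambda q: 4 * np.pi * q**2 * Lambda0 / (q**2 + Lambda0**2) ** 2
    value, _ = quad(integrand, 0, np.inf)
    return value / np.pi**2

print(s0(+2.5), s0(-2.5))   # -> +1.0 and -1.0, i.e. s0 = Lambda_0 / |Lambda_0|
```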
The essential conditions for the derivation of (3.10) are thus the requirement of gauge invariance (3.8) and the control of infrared divergences in the $`n=0`$ factor of (3.7). For non-Abelian gauge groups, there is, in addition to the local term (3.10) obtained in perturbation theory, also a nonlocal term in $`\stackrel{~}{A}_m(\stackrel{}{x})`$ which restores the full three-dimensional gauge invariance (3.8), not just its infinitesimal version. This nonlocal term vanishes, however, for gauge potentials $`\stackrel{~}{A}_m(\stackrel{}{x})`$ sufficiently close to zero. See Refs. and references therein.
The integral in (3.10) can be extended over the whole of 3-space, because the gauge potentials $`\stackrel{~}{A}_m`$ of (3.1), (3.2) vanish outside the ball $`B^3`$. The gauge potentials $`\stackrel{~}{A}_m`$ are also $`x^4`$-independent. Insisting upon translation invariance, the expression (3.10) can then be written as the following four-dimensional integral:
$$\mathrm{\Gamma }_\mathrm{W}^{n=0}[\stackrel{~}{A}]\supset i\int _{\mathrm{IR}^3}\mathrm{d}^3x\int _0^Ldx^4\,\frac{s_0(1+a)\pi }{L}\,\omega _{\mathrm{CS}}[\stackrel{~}{A}_1(\stackrel{}{x}),\stackrel{~}{A}_2(\stackrel{}{x}),\stackrel{~}{A}_3(\stackrel{}{x})],$$
(3.13)
with $`s_0=\pm \mathrm{\hspace{0.17em}1}`$ as defined in (3.12) and parameter $`a=0`$ for the simple non-Abelian gauge group considered up till now. The one-loop calculation for three-dimensional Abelian $`U(1)`$ gauge potentials gives essentially the same result , with the factor $`\pi `$ in (3.10) replaced by $`2\pi `$ and the parameter $`a`$ in (3.13) set equal to $`1`$.
The Chern–Simons term (3.13) is the main result of this paper. The result was obtained for the particular chiral gauge theory with the gauge group $`G`$ $`=`$ $`SO(10)`$ and the fermion representation $`R_L`$ $`=`$ $`\mathrm{𝟏𝟔}`$, but holds for an arbitrary simple compact Lie group (or Abelian $`U(1)`$ group) and an arbitrary nonsinglet irreducible fermion representation, as long as the fermion representation is normalized appropriately and the complete theory is free of chiral gauge anomalies (see Section 5). In the next two sections, we will take a closer look at this result and present some generalizations.
## 4 Lorentz and CPT noninvariance
For the special gauge potentials (3.1), (3.2) and the Euclidean spacetime manifold $`M`$ $`=`$ $`\mathrm{IR}^3\times S^1`$, we have found in the previous section the emergence of a Chern–Simons term (3.13) in the effective gauge field action. The calculation, which relies on earlier results for the three-dimensional parity anomaly, applies to both Abelian and non-Abelian gauge groups, provided the chiral gauge anomalies cancel in the complete theory (see Section 5).
For arbitrary gauge potentials $`A_\mu (\stackrel{}{x},x^4)`$ which drop to zero faster than $`r^{-1}`$ as $`r\equiv |\stackrel{}{x}|\to \infty `$ and which have trivial holonomies (see below), the effective action term (3.13) can be written as the following local expression:
$$\mathrm{\Gamma }_{\mathrm{CS}\text{-}\mathrm{like}}^{\mathrm{IR}^3\times S^1}[A]=i\int _{\mathrm{IR}^3}\mathrm{d}^3x\int _0^Ldx^4\,\frac{s_0(1+a)\pi }{L}\,\omega _{\mathrm{CS}}[A_1(\stackrel{}{x},x^4),A_2(\stackrel{}{x},x^4),A_3(\stackrel{}{x},x^4)],$$
(4.1)
with the Chern–Simons density $`\omega _{\mathrm{CS}}`$ given by (3.11), an integer factor $`s_0=\pm \mathrm{\hspace{0.17em}1}`$ defined in (3.12), and an integer parameter $`a=0`$ or 1 for a simple non-Abelian gauge group or an Abelian $`U(1)`$ gauge group, respectively. Eq. (4.1), for simple non-Abelian gauge groups, is precisely equal to the counterterm presented in Ref. . The expression (4.1) is called Chern–Simons-like, because a genuine topological Chern–Simons term exists only in an odd number of dimensions . Note that this Chern–Simons-like term (4.1) comes from a combination of infrared ($`1/L`$) and ultraviolet ($`s_0\equiv \mathrm{\Lambda }_0/|\mathrm{\Lambda }_0|`$) effects.
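As a reminder of its structure (not of its normalization, which is fixed by the paper's Eq. (3.11) and is not reproduced here), the density $`\omega _{\mathrm{CS}}`$ entering (3.13) and (4.1) is of the standard Chern–Simons form,

$$\omega _{\mathrm{CS}}[A_1,A_2,A_3]\propto ϵ^{klm}\mathrm{tr}\left(A_k\partial _lA_m+\frac{2}{3}A_kA_lA_m\right),$$

with $`ϵ^{klm}`$ the three-dimensional Levi-Civita symbol.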
The local Chern–Simons-like term (4.1) has the important property of invariance under *infinitesimal* four-dimensional gauge transformations. (This property would not hold if the particular Chern–Simons density $`\omega _{\mathrm{CS}}`$ as given by (4.1) were replaced by, for example, $`\omega _{\mathrm{CS}}[𝒜_1(\stackrel{}{x}),𝒜_2(\stackrel{}{x}),𝒜_3(\stackrel{}{x})]`$, with the averaged gauge potentials $`𝒜_m(\stackrel{}{x})\equiv L^{-1}\int _0^Ldx^4A_m(\stackrel{}{x},x^4)`$. Of course, such an effective action term using the averaged gauge potentials $`𝒜_m`$ would not be local either.) For simple compact connected Lie groups $`G`$, there are also *large* gauge transformations with a gauge function $`g=g(\stackrel{}{x})\in G`$ corresponding to a nontrivial element of the homotopy group $`\pi _3(G)=`$ Z Z. As mentioned in Section 3, there is a nonlocal term in the effective gauge field action which restores invariance under these finite gauge transformations, but this nonlocal term vanishes for gauge potentials $`A_m(\stackrel{}{x},x^4)`$ sufficiently close to zero. In addition, there are, for Lie groups $`G=SO(N\ge 3)`$ or $`U(1)`$ with homotopy group $`\pi _1(G)\ne 0`$, large gauge transformations with gauge function $`g=g(x^4)\in G`$, but the Chern–Simons-like term (4.1) is obviously invariant under these particular finite gauge transformations.
Turning to spacetime transformations, the effective action term (4.1) is clearly invariant under translations. The Chern–Simons density in its integrand, though, involves only three of the four gauge potentials $`A_\mu (x)`$ and three of the six components of the field strength tensor $`A_{\mu \nu }(x)\equiv \partial _\mu A_\nu -\partial _\nu A_\mu +[A_\mu ,A_\nu ]`$, which makes the effective gauge field action manifestly Lorentz noninvariant. Physically, this would, for example, lead to anisotropic propagation (birefringence) of the gauge boson fields .
We are not able to determine the imaginary part of the effective gauge field action exactly. (The effective action might, for example, have some dependence on the trace of the path-ordered exponential integral (holonomy) $`\mathrm{tr}h(\stackrel{}{x})\equiv \mathrm{tr}\mathrm{P}\mathrm{exp}\{\int _0^Ldx^4A_4(\stackrel{}{x},x^4)\}`$, which could not have been detected by the gauge potentials (3.1) used in Section 3. The effective action term (4.1) holds, most likely, only for trivial holonomies $`h(\stackrel{}{x})=1`$.) But the partial result (3.13) suffices for the main purpose of this paper. The appropriate CPT transformation for an anti-Hermitian gauge potential is, namely,
$$A_\mu (x)\to -A_\mu ^\mathrm{T}(-x),$$
(4.2)
with the suffix T indicating the transpose of the matrix. For a Hermitian electromagnetic vector potential $`a_\mu (x)`$, this corresponds to the usual transformation $`a_\mu (x)\to -a_\mu (-x)`$. Using (4.2), one then readily verifies that the Yang–Mills action density $`\mathrm{tr}\left(A_{\mu \nu }A^{\mu \nu }\right)`$ is CPT-even and that the integrand of the effective action term (3.13), or (4.1) for that matter, is CPT-odd. (The overall factor $`i`$ in (3.13), or (4.1), is absent for spacetime metrics with Lorentzian signature and need not be complex conjugated.) This establishes the CPT anomaly for chiral gauge theories with left-handed fermions in an arbitrary nonsinglet irreducible representation (provided the chiral gauge anomalies cancel in the complete theory) and spacetime manifold $`M=\mathrm{IR}^3\times S^1`$.
The Abelian Chern–Simons density has no cubic term and the integrand of the Abelian version of (4.1) is odd under both CPT and T (and even under both C and P), provided $`x^4`$ corresponds to a spatial coordinate after the Wick rotation from Euclidean to Lorentzian metric signature. The T and CPT violation would, for example, show up in the anisotropic propagation of the circular polarization modes of these Abelian gauge fields (a given circular polarization mode would, generically, have a different phase velocity for propagation in opposite directions ).
Note, finally, that the terms (4.1) applied to the gauge groups $`SU(3)`$, $`SU(2)`$, and $`U(1)`$, with undetermined coefficients replacing $`s_0(1+a)\pi /L`$, have also been considered in a Standard-Model extension with Lorentz and CPT violation . As discussed above, these $`SU(3)`$ and $`SU(2)`$ Chern–Simons-like terms are noninvariant under certain large gauge transformations and the $`U(1)`$ Chern–Simons-like term may also become gauge dependent if magnetic flux from monopoles is allowed for . This suggests that either the corresponding coefficients must be zero or that additional nonlocal terms restoring gauge invariance must be included in the theory. The anomaly calculation of the present paper follows the second path, with nonlocal terms restoring gauge invariance.<sup>3</sup><sup>3</sup>3The same is to be expected for the effective gauge field action from fermions with *explicit* CPT-violating, but gauge-invariant, terms in the action . For recent results on the induced Abelian Chern–Simons-like term from a massive Dirac fermion with a CPT-violating axial-vector term in the action, see Ref. and references therein. Chiral fermions with a real chemical potential $`\mu `$ (and a corresponding CPT-odd term in the action) also give rise to an induced Chern–Simons-like term, which is now proportional to $`\mu `$, see Ref. and the last equation therein.
## 5 Generalizations
The effective gauge field action for certain chiral gauge theories defined over a fixed four-dimensional Euclidean spacetime manifold $`M`$ with Cartesian product structure $`\mathrm{IR}^3\times S^1`$ has been found to contain a CPT-violating term if gauge invariance is maintained. For the special gauge potentials (3.1), this CPT-violating term is given by (3.13), which can be written as (4.1) for arbitrary localized gauge potentials with trivial holonomies. As mentioned in the previous section, the overall factor $`i`$ in (3.13) and (4.1) would be absent for a Lorentzian signature of the metric.
Essentially the same result holds for other orientable spacetime manifolds $`M`$, as long as at least one compact spatial dimension can be factored out. The crucial point is that the Weyl operator should be separable with respect to this compact coordinate. Also, the spin structure over the compact spatial dimension must be such as to allow for zero momentum of the fermions, cf. Eq. (3.6). One example would be the flat Euclidean spacetime manifold $`M=\mathrm{IR}^3\times \mathrm{I}`$, with the closed interval $`\mathrm{I}\equiv [0,L]\subset \mathrm{IR}`$ replacing the circle $`S^1`$ considered before. Here, the chiral fermions are taken to have free boundary conditions over $`\mathrm{I}`$. (There would be no CPT anomaly for strictly antiperiodic boundary conditions. This would be the case for finite-temperature field theory in the Euclidean path integral formulation , which uses the same manifold $`\mathrm{IR}^3\times \mathrm{I}`$ with antiperiodic boundary conditions for the fermions over the interval $`\mathrm{I}\equiv [0,\beta ]`$, where $`\beta `$ stands for the inverse temperature.) Another example would be the flat Minkowski-like manifold $`M=\mathrm{IR}\times S^1\times S^1\times S^1`$, with time $`t\in \mathrm{IR}`$ and a compact space manifold, which would have three possible terms of the form (4.1) in the effective gauge field action. Similar effects may occur in higher- and lower-dimensional chiral gauge theories, but for the rest of this section we concentrate on the four-dimensional case, again with the spacetime manifold $`M=\mathrm{IR}^3\times S^1`$.
The calculation of the CPT-violating term in Section 3 was performed for a single irreducible representation of left-handed Weyl fermions. The particular theory considered, with the gauge group $`G`$ $`=`$ $`SO(10)`$ and the fermion representation $`R_L`$ $`=`$ $`\mathrm{𝟏𝟔}`$, is free of chiral gauge anomalies . It is, of course, possible to have more than one nonsinglet irreducible fermion representation $`r`$ for the left-handed fermions, provided the chiral anomalies cancel. The reducible fermion representation is then $`R_L=\sum _fr_f`$, with the label $`f`$ running over $`1`$, $`\mathrm{}`$ , $`N_F`$. Here, and in the following, the gauge group $`G`$ is taken to be either a simple compact Lie group or an Abelian $`U(1)`$ group.
Vectorlike gauge theories with, for example, one nonsinglet irreducible representation $`r`$ have $`R_L`$ $`=`$ $`r`$ $`+`$ $`\overline{r}`$ and corresponding three-dimensional Pauli–Villars masses $`\mathrm{\Lambda }_{0r}`$ and $`\mathrm{\Lambda }_{0\overline{r}}`$, where $`\overline{r}`$ denotes the conjugate representation of $`r`$. Four-dimensional parity invariance gives $`\mathrm{\Lambda }_{0r}=-\mathrm{\Lambda }_{0\overline{r}}`$. Recalling (3.12), this implies that the CPT anomaly (4.1) cancels for this particular vectorlike gauge theory. The same cancellation occurs, in fact, for *any* vectorlike gauge theory.
Chiral gauge theories with $`R_L=\sum _fr_f`$ (and $`R_L\ne \overline{R}_L`$) may or may not have a CPT-violating term (4.1) left over in the effective gauge field action, depending on the relative signs of the corresponding three-dimensional Pauli–Villars masses $`\mathrm{\Lambda }_{0f}`$. Of course, attention must be paid to the normalization of the different irreducible representations $`r_f`$. Also, the situation can be complicated further by having more three-dimensional Pauli–Villars fields than the ones used in Section 3. The factor $`s_0=\pm \mathrm{\hspace{0.17em}1}`$ in (3.10), (3.13), and (4.1), is then replaced by $`(2k_{0f}+1)`$, for $`k_{0f}\in \text{Z Z}`$. The same odd integer prefactors of the induced Chern–Simons density also appear for other three-dimensional ultraviolet regularization methods and are, in fact, to be expected on general grounds .
For a chiral gauge theory with an *odd* number $`N_F`$ of *equal* irreducible left-handed fermion representations, there necessarily appears a CPT-violating term proportional to (4.1) in the effective gauge field action. The reason is that the sum of an odd number of odd numbers does not vanish, $`\sum _f(2k_{0f}+1)\ne 0`$ for $`f`$ summed over $`1`$ to $`N_F`$. An example for $`N_F=3`$ would be the $`SO(10)`$ chiral gauge theory with the reducible fermion representation $`R_L`$ $`=`$ $`\mathrm{𝟏𝟔}`$ $`+`$ $`\mathrm{𝟏𝟔}`$ $`+`$ $`\mathrm{𝟏𝟔}`$, which necessarily has a CPT-violating term proportional to (4.1) for the $`SO(10)`$ gauge fields in the effective action.
This particular $`SO(10)`$ model contains, as is well known, the $`SU(3)`$ $`\times `$ $`SU(2)`$ $`\times `$ $`U(1)`$ Standard Model with $`N_F=3`$ families of 15 left-handed Weyl fermions (quarks and leptons), together with $`N_F=3`$ left-handed Weyl fermion singlets (conjugates of the hypothetical right-handed neutrinos). The Standard Model has thus a CPT-violating term proportional to (4.1) for the hypercharge $`U(1)`$ gauge fields in the effective action, together with similar terms for the weak $`SU(2)`$ and color $`SU(3)`$ gauge fields, as long as the spacetime manifold has the appropriate Cartesian product structure and the Standard Model is embedded in this particular $`SO(10)`$ chiral gauge theory.<sup>4</sup><sup>4</sup>4It is not clear to what extent the 33 remaining gauge bosons from $`SO(10)`$ need to be physical, but they could always be given large masses by the Higgs mechanism . Other simple compact Lie groups instead of $`SO(10)`$ may also be used for the embedding of the Standard Model fermions, as long as they have an odd number of equal irreducible representations. This embedding condition for the Standard Model fermions *guarantees* the presence of the CPT-violating Chern–Simons-like terms for the $`SU(3)`$ $`\times `$ $`SU(2)`$ $`\times `$ $`U(1)`$ gauge fields in the effective action, otherwise these terms may or may not appear, depending on the regularization scheme.
## 6 Discussion
In the previous sections, we have established for certain chiral gauge theories defined over the spacetime manifold $`M`$ $`=`$ $`\mathrm{IR}^3\times S^1`$ the necessary presence of a CPT-violating term in the gauge-invariant effective action. The question, now, is what happened to the CPT theorem? It appears that the CPT theorem is circumvented by a breakdown of local Lorentz invariance at the quantum level. (See also the paragraph above (3.9), which discusses the breaking of Lorentz invariance by the ultraviolet regularization used.) More specifically, the second-quantized vacuum seems to play a role in connecting the global spacetime structure to the local physics. The next two paragraphs elaborate this point but may be skipped in a first reading.
For non-Abelian chiral gauge groups, there is the condition of gauge invariance to deal with in these particular quantum field theories which are potentially afflicted by the nonperturbative chiral gauge anomaly discovered earlier . This nonperturbative chiral gauge anomaly depends on the global spacetime structure in a Lorentz noninvariant way, one spatial direction being singled out by the so-called $`Z`$-string configuration responsible for the gauge anomaly in the Hamiltonian formulation. If the theory has indeed this $`Z`$-string chiral gauge anomaly, then the restoration of gauge invariance obviously requires interactions which are themselves Lorentz noninvariant . But, even if the theory does not have a net $`Z`$-string chiral gauge anomaly, there still occurs, in first-quantization, the spectral flow which treats one spatial dimension differently from the others . This implies that a tentative second-quantized vacuum state varies along the corresponding loop of gauge transformations. Imposing gauge invariance throughout then leads to Lorentz noninvariance of the theory. In both cases, the invariance under the proper orthochronous Lorentz group is lost and the CPT theorem no longer applies . It is then possible to have a non-Abelian CPT-odd term (4.1) in the effective gauge field action.
For Abelian chiral gauge groups, nonzero magnetic flux from monopoles can also give rise to spectral flow, as discussed for the three-dimensional case in Ref. . For the four-dimensional case, this setup again breaks Lorentz invariance (just as the $`Z`$-string does for the non-Abelian chiral gauge anomaly) and the CPT theorem no longer applies, with the possibility of having an Abelian CPT-odd term (4.1) in the effective gauge field action.
For both Abelian and non-Abelian chiral gauge groups, it remains to be seen whether or not the gauge-invariant, but CPT-violating, theory is consistent. In particular, the properties of microcausality and positivity of the energy need to be established, cf. Refs. . If the theory in question turns out to be inconsistent, then perhaps it could be interpreted as part of a more fundamental theory, possibly involving curvature and torsion.
In this paper, we have primarily been concerned with the mechanism of the CPT anomaly, not potential applications. Let us, however, mention two possibilities. First, there may be the “optical activity” discussed in the Introduction, where the linear polarization of a plane wave of gauge fields gets rotated *in vacuo* (in our case, through the quantum effects of the chiral fermions encoded in the effective gauge field action). As mentioned in Section 5, the phenomenon could occur for the photon field of the Standard Model, as long as there is the $`SO(10)`$-like embedding of the Standard Model fermions and the appropriate Cartesian product structure of the spacetime manifold. The laboratory measurement of this optical activity of the vacuum could, in principle, provide information about the global structure and size of the universe. More realistically, the mass scale of the CPT-violating term (4.1) for the photon field is of the order of
$$\alpha L^{-1}\approx 10^{-35}\mathrm{eV}\left(\frac{\alpha }{1/137}\right)\left(\frac{1.5\times 10^{10}\mathrm{lyr}}{L}\right),$$
(6.1)
with $`\alpha \equiv e^2/4\pi `$ the fine-structure constant and $`L`$ the range of the compact spatial coordinate. This mass is, of course, very small on the scale of the known elementary particles (the present universe being very large), but, remarkably, it is only a factor $`100`$ below the current upper bound of $`10^{-33}\mathrm{eV}`$ obtained from observations on distant radio galaxies, see Refs. and references therein. (The “laboratory” has now been expanded to a significant part of the visible universe.) A dedicated observation program to map the linear polarization in a large number of distant radio sources , or future satellite experiments to measure the polarization of the cosmic microwave background , can perhaps reach the sensitivity level set by (6.1).
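As a quick numerical cross-check of the estimate (6.1) (a sketch only; the conversion constants below are standard values and are not taken from the paper):

```python
# Order-of-magnitude check of Eq. (6.1): alpha / L expressed in eV.
hbar_c_eV_m = 197.327e6 * 1e-15      # hbar*c in eV*m
lyr_in_m    = 9.461e15               # one light year in metres
alpha       = 1.0 / 137.036          # fine-structure constant

L = 1.5e10 * lyr_in_m                # compact length scale of Eq. (6.1)
L_inv_eV   = hbar_c_eV_m / L         # 1/L as an energy, ~1.4e-33 eV
mass_scale = alpha * L_inv_eV        # alpha/L, ~1.0e-35 eV

print(f"1/L     ~ {L_inv_eV:.1e} eV")
print(f"alpha/L ~ {mass_scale:.1e} eV")   # about two orders of magnitude below the 1e-33 eV bound
```

The output reproduces the $`10^{-35}`$ eV order of magnitude quoted above.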
Second, the CPT anomaly may have been important in the very early universe. In the present paper, we have considered a fixed spacetime manifold with given topology. With gravity, spacetime becomes dynamic. For an inverse size $`1/L(t)`$ and typical scattering energies of the order of the gravitational scale (Planck mass), the CPT-violating effects of the effective action term (4.1) are relatively unsuppressed compared to gravity, that is, they are suppressed only by the square of the gauge coupling constant. Of course, the fundamental theory of gravity remains to be determined if there is indeed Lorentz noninvariance in certain inertial frames. Still, it is conceivable that the CPT anomaly plays a role in defining a “fundamental arrow-of-time” , as a quantum mechanical effect coming from the interplay of chiral fermions, gauge field interactions and the topology of spacetime.
## Acknowledgements
This work was completed at the Niels Bohr Institute and we gratefully acknowledge the hospitality of the High-Energy Theory group. We also thank C. Adam and V.A. Kostelecký for comments on the manuscript and R. Jackiw and the referee for pointing out some additional references. |
no-problem/9912/hep-th9912115.html | ar5iv | text | # 1 Introduction
## 1 Introduction
Quantum field theory studies the properties of algebras which are expected to give accurate mathematical descriptions of physical systems. In general, the manner in which one can extract information of direct physical relevance from the algebraic description is very subtle because, for a given abstract algebra, there may exist many (unitarily inequivalent) representations in terms of operator algebras acting on a Hilbert space. Therefore, the basic problem of quantum field theory concerns the characterization of physically admissible representations.
This problem is considerably simplified in the presence of space-time symmetries. For example, in quantum field theory in Minkowski space, because of the Lorentz symmetry, it is always possible to refer to a representation containing the physical vacuum. A similar simplification could, in principle, arise in any theory admitting at least a group of space-time symmetry with a global time-like generator.
It turns out that in a generally covariant quantum field theory, because of the dynamical role played by the space-time metric, no a priori notion of space-time symmetry exists. Consequently, considerable difficulties arise if one wants to characterize the physically admissible representations.
Because of the lack of a priori space-time symmetries in the generally covariant context, it is useful for the general treatment of the basic problem of quantum field theory in that context to isolate those features of the problem which can be discussed without reference to any pre-assigned space-time symmetries. It is perfectly possible that this may not resolve the problem completely; nevertheless, attempts in this direction may provide important indications for understanding the physical content of generally covariant quantum field theory. The present paper considers this issue within the scope of the algebraic approach to quantum field theory .
We first briefly discuss the question of how general covariance can be incorporated into the conventional framework of quantum field theory . The basic idea is to start with free algebras, i.e. algebras which are free from a priori relations. The need for this is obvious, since otherwise we have a priori no principle at hand ensuring that the algebraic relations are kept unchanged under the action of an arbitrary space-time diffeomorphism. The general scheme we shall now describe is a generalization of the scheme used in \[2-5\].
We consider a differentiable manifold $``$ and assume the existence of a net of free algebras over $``$ generated by what we call kinematical procedures. In specific terms we require an intrinsic correspondence between each open set $`𝒪`$ and a free involutive algebra $`𝒜(𝒪)`$ such that the additivity
$$𝒜(𝒪)\subset 𝒜(𝒪^{}),\text{ if }𝒪\subset 𝒪^{}$$
(1)
holds. The attribute ’intrinsic’ means that the principle of general covariance is implemented by considering the group $`Diff()`$ of all diffeomorphisms of the manifold as acting by automorphisms on the net of the algebras $`𝒜(𝒪)`$, i.e. each diffeomorphism $`\chi Diff()`$ is represented by an automorphism $`\alpha _\chi `$ such that
$$\alpha _\chi (𝒜(𝒪))=𝒜(\chi (𝒪))$$
(2)
holds. Given such an intrinsic correspondence between open sets and algebras, we call a self-adjoint element of $`𝒜(𝒪)`$ a kinematical procedure in $`𝒪`$.
We should emphasize that, because there is no diffeomorphism invariant notion of locality, it is by no means clear whether there is an a priori correspondence between kinematical procedures and local properties in the underlying manifold. For example we may find a coordinate system in which the kinematical procedures carry the global properties of the entire manifold in a ”local domain”, i.e., in a finite range of that coordinate system. Typical examples of such coordinate systems in general relativity are coordinate systems which compactify the structure of infinity. Indeed, the exploration of the question concerning the characterization of local kinematical procedures is one of the basic tasks of the present analysis. It will be dealt with in the next section.
There could be many kinematical procedures which are equivalent with respect to the action of a physical system on them; this action is, in general, expected to connect the kinematical procedures with dynamical procedures (traditionally identified with observables). Thus, the essential question is how to identify the dynamical procedures of the net $`𝒪\to 𝒜(𝒪)`$ as suitable equivalence classes of kinematical procedures.
For this aim, we first note that the precise mathematical description of a physical system is given in terms of a state which is taken to be a positive linear functional over the total algebra of kinematical procedures $`𝒜:=\bigcup _𝒪𝒜(𝒪)`$. Given a state $`\omega `$, one gets via the GNS-construction a representation $`\pi ^\omega `$ of $`𝒜`$ by an operator algebra acting on a Hilbert space $`^\omega `$ with a cyclic vector $`\mathrm{\Omega }^\omega \in ^\omega `$. In the representation $`(\pi ^\omega ,^\omega ,\mathrm{\Omega }^\omega )`$ one can select a family of related states on $`𝒜`$, namely those represented by vectors and density matrices in $`^\omega `$. It corresponds to the set of normal states of the representation $`\pi ^\omega `$, the so-called folium of $`\omega `$.
Once a physical state $`\omega `$ has been specified, one can consider in each subalgebra $`𝒜(𝒪)`$ the equivalence relation
$$A\sim B\iff \omega ^{}(A-B)=0,\forall \omega ^{}\in ^\omega .$$
(3)
Here $`^\omega `$ denotes the folium of the state $`\omega `$. The set of such equivalence relations generates a two-sided ideal $`^\omega (𝒪)`$ in $`𝒜(𝒪)`$. One can construct the algebra of dynamical procedures $`𝒜^\omega (𝒪)`$ from the algebra of kinematical procedures $`𝒜(𝒪)`$ by taking the quotient
$$𝒜^\omega (𝒪)=𝒜(𝒪)/^\omega (𝒪).$$
(4)
By this annihilation all the relevant relations between the dynamical procedures can be characterized by the totality of elements in the kernel of the representation $`\pi ^\omega `$, namely the total ideal $`^\omega `$
$$^\omega =\bigcup _𝒪^\omega (𝒪).$$
(5)
This construction implies that the mapping from kinematical procedures to dynamical procedures becomes fundamentally state-dependent. This aspect reflects one of the characteristic features of generally covariant quantum field theory.
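As a purely illustrative toy version of the passage in Eqs. (3) and (4) from kinematical to dynamical procedures (the labels, matrices and finite-dimensional setting below are hypothetical and not part of the construction above), one may assign to a few formal generators the matrices a chosen representation maps them to; generators whose images coincide, i.e. whose difference is annihilated by every normal state of the representation, fall into the same equivalence class:

```python
import numpy as np

# Hypothetical 'representation' pi of four kinematical generators by 2x2 matrices.
pi = {
    "A1": np.diag([1.0, -1.0]),
    "A2": np.diag([1.0, -1.0]),          # A1 - A2 lies in the kernel: same class as A1
    "B":  np.array([[0.0, 1.0], [1.0, 0.0]]),
    "C":  np.zeros((2, 2)),              # represented by 0: the class of the ideal itself
}

classes = []                             # dynamical procedures = equivalence classes
for name, mat in pi.items():
    for cls in classes:
        if np.allclose(pi[cls[0]], mat): # same image <=> difference annihilated by all states
            cls.append(name)
            break
    else:
        classes.append([name])

print(classes)                           # [['A1', 'A2'], ['B'], ['C']]
```

A different state (hence a different representation with a different kernel) would, in general, produce a different grouping, which is the state-dependence emphasized above.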
Crucial for further investigations is the realization that a diffeomorphism $`\chi \in Diff()`$ can act as an automorphism $`\alpha _\chi `$ on the net $`𝒪\to 𝒜^\omega (𝒪)`$ provided
$$\alpha _\chi (^\omega (𝒪))=^\omega (\chi (𝒪))$$
(6)
holds. Any diffeomorphism satisfying this condition is called dynamical (or proper). Otherwise it is called nondynamical (or improper). Nondynamical diffeomorphisms cannot be represented as automorphisms on the algebra of the dynamical procedures. For dynamical diffeomorphisms such a representation is possible. They generate a group, called in the following the dynamical group of $`\omega `$ and denoted by $`G_\omega `$. The elements of $`G_\omega `$ correspond to state-dependent automorphisms of the algebra of dynamical procedures with a purely geometric action.
## 2 Local inertial sector
One of the basic difficulties of the above scheme is that, in general, the GNS-representation of a physical state cannot be fixed unitarily in an intrinsic manner, because the structure of the total ideal $`^\omega `$ depends crucially on the particular coordinates one uses. For example, starting from the GNS-representation of a physical state one can obtain another representation if the kinematical procedures are transformed by a nondynamical diffeomorphism. The representation obtained in this way may not be in the equivalence class of the former because it may have a different kernel. Thus a physical state will, in general, provide us with a variety of unitarily inequivalent representations depending on the nature of the coordinates that happen to have been chosen for a given problem, and a priori it is not known which representation is physical.
The problem can be addressed on various levels. One possibility is to take a global point of view and select the equivalence class of representations for a physical state $`\omega `$ as that for which the dynamical group $`G_\omega `$ is nontrivial and acts globally on the manifold $``$. The geometric action of this group would then determine the nature of the equivalence class of coordinates to which the representation refers. These are coordinates which are related by the geometric action of the dynamical group $`G_\omega `$. Such coordinates may be considered as typical examples of global inertial coordinates.
A criterion of this type may be useful to analyze the particular type of a physical theory resulting from the transition from the generally covariant description of a physical state to the special relativistic one.
For the description of a physical state in the generally covariant context we shall formulate a local variant of the above criterion. Specifically, we assume that, given a physical state $`\omega `$, we can assign to any point $`x`$ a neighbourhood $`𝒪_x^\omega `$ so that by the restriction of the GNS- representation $`\pi ^\omega `$ to $`𝒪_x^\omega `$ a nontrivial dynamical group $`G_\omega `$ is established which acts on $`𝒪_x^\omega `$. To emphasize the individuality of the point $`x`$, we shall assume that the geometric action of $`G_\omega `$ on $`𝒪_x^\omega `$ leaves the point $`x`$ invariant. In an alternative formulation we shall require the invariance of the local ideal $`^\omega (𝒪_x^\omega )`$ under the (nontrivial) action of the dynamical group $`G_\omega `$, namely
$$\alpha (^\omega (𝒪_x^\omega ))=^\omega (𝒪_x^\omega ),\alpha \in G_\omega $$
(7)
with $`x`$ being invariant under the geometric action of $`G_\omega `$. For any physical state $`\omega `$ this acts as a criterion to select a characteristic local equivalence class of representations. In symbols we shall write for this local equivalence class $`\{\pi ^\omega |𝒪_x^\omega \}`$ and refer to it as a local inertial sector of a physical state $`\omega `$. Correspondingly, the equivalence class of local coordinate systems to which $`\{\pi ^\omega |𝒪_x^\omega \}`$ refers is called the equivalence class of local inertial coordinates with the origin at $`x`$. The neighbourhood $`𝒪_x^\omega `$ is called a normal neighbourhood.
## 3 Local and translocal properties
One consequence of a local inertial sector of a physical state would be the distinction it would draw between the two different ideal sets of kinematical procedures. Given a physical state $`\omega `$ and a point $`x`$, consider a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$. A kinematical procedure $`A\in 𝒜(𝒪_x^\omega )`$ is called translocal (or absolute) if it escapes the local action of the dynamical group $`G_\omega `$ in $`𝒪_x^\omega `$. In mathematical terms, this is taken to mean that for an arbitrary element $`\alpha \in G_\omega `$ we have $`\alpha (A)-A\in ^\omega (𝒪_x^\omega )`$. A kinematical procedure $`A\in 𝒜(𝒪_x^\omega )`$ for which this condition cannot be satisfied is called local<sup>2</sup><sup>2</sup>2In reality there are certain limitations on the applicability of this definition, because of the limited accuracy of actual experiments which makes it impossible to determine the ideal $`^\omega (𝒪_x^\omega )`$ exactly. We shall ignore problems of this type..
It can be shown that this distinction between local and translocal kinematical procedures is preserved at the dynamical level of the theory. In fact we prove the following
$`Statement`$: For a local (respectively translocal) kinematical procedure $`A\in 𝒜(𝒪_x^\omega )`$ the corresponding equivalence class in the sense of (3) contains local (respectively translocal) kinematical procedures only.
Consider first the case of a translocal kinematical procedure $`A\in 𝒜(𝒪_x^\omega )`$. We show that any kinematical procedure $`B\in 𝒜(𝒪_x^\omega )`$ which is equivalent to $`A`$ is translocal. For an arbitrary element $`\alpha `$ of the dynamical group $`G_\omega `$ we have $`\alpha (A)=A+I`$ with $`I\in ^\omega (𝒪_x^\omega )`$. Since $`B\sim A`$ we also have $`B=A+I^{}`$ with $`I^{}\in ^\omega (𝒪_x^\omega )`$. It then follows for all $`\alpha \in G_\omega `$ that
$$\alpha (B)=\alpha (A)+\alpha (I^{})=A+I+\alpha (I^{})=B-I^{}+I+\alpha (I^{}).$$
This together with the invariance of the ideal, relation (7), implies $`\alpha (B)-B\in ^\omega (𝒪_x^\omega )`$. Thus $`B`$ is translocal. Now consider the case of a local kinematical procedure $`A\in 𝒜(𝒪_x^\omega )`$. We show that any kinematical procedure $`B\in 𝒜(𝒪_x^\omega )`$ which is equivalent to $`A`$ is local. We have $`B=A+I`$ with $`I\in ^\omega (𝒪_x^\omega )`$. Since $`A`$ is local there exists an element $`\alpha `$ of the dynamical group so that the difference $`\mathrm{\Delta }=\alpha (A)-A`$ does not lie in $`^\omega (𝒪_x^\omega )`$. It follows that
$$\alpha (B)=\alpha (A)+\alpha (I)=A+\mathrm{\Delta }+\alpha (I)=B-I+\mathrm{\Delta }+\alpha (I)$$
from which one infers that $`\alpha (B)-B`$ cannot be in $`^\omega (𝒪_x^\omega )`$. Thus $`B`$ is local.
From this consideration it follows that the dynamical procedures of a local inertial sector decompose into two distinct sets, namely the sets containing all equivalence classes of local and translocal kinematical procedures respectively. A member of the first set (respectively the second set) is called a local (respectively translocal) dynamical procedure.
We emphasize that this distinction between dynamical procedures takes the concept of dynamical activity in a local inertial sector as basic. A translocal dynamical procedure in a local inertial sector is taken to be a dynamical procedure that continually transforms into itself by the local action of the dynamical group. They correspond to absolute properties of a local inertial sector.
It should be emphasized that the appearance of the translocal kinematical procedures in $`𝒜(𝒪_x^\omega )`$ illustrates a novel effect of the principle of general covariance. In fact, any restriction to local kinematical procedures inside a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ would be fundamental only to the extend to which the diffeomorphism group refers only to the properties inside the normal neighbourhood $`𝒪_x^\omega `$. That this is not the case is seen by the following consideration which furnishes the necessary prerequisite for our subsequent presentations.
Consider the identification of points inside $`𝒪_x^\omega `$, made in a system of local inertial coordinates, and consider a kinematical procedure parametrized in a local coordinate system outside $`𝒪_x^\omega `$. In general a kinematical procedure of this type characterizes a nonlocal kinematical property outside $`𝒪_x^\omega `$ which does not transform under the change of the system of local inertial coordinates inside $`𝒪_x^\omega `$, so it escapes the local action of the dynamical group in $`𝒪_x^\omega `$. Now consider a diffeomorphism acting entirely outside $`𝒪_x^\omega `$. We shall call a diffeomorphism of this type a gauge transformation. The essential point is that one need only apply an appropriate gauge transformation, namely a gauge transformation which has its image inside $`𝒪_x^\omega `$, to convert a nonlocal kinematical property outside $`𝒪_x^\omega `$ into a translocal kinematical procedure inside $`𝒪_x^\omega `$. This argument demonstrates that a translocal kinematical procedure is the image of a nonlocal kinematical procedure outside $`𝒪_x^\omega `$ under an appropriate gauge transformation. Thus, gauge transformations can be applied to generate the totality of all translocal kinematical procedures inside $`𝒪_x^\omega `$ as the local codifications of the totality of all nonlocal kinematical procedures outside $`𝒪_x^\omega `$. This connection between a local inertial sector and the associated appearance of translocal (absolute) properties is the distinctive conclusion of the present analysis.
At this point a clarifying remark concerning the status of translocal kinematical procedures with respect to conventional quantum field theory appears to be in order. From our presentation one can immediately observe that, in any theory in which one finds a dynamical group globally acting on the underlying (space-time) manifold, there would be no obvious way to introduce (quasi) invariant kinematical procedures with respect to that group, so a translocal kinematical procedure would not be obvious in the fundamental description of the theory. This is especially so in Minkowski-space quantum field theory with the Lorentz-group playing the role of a global dynamical group<sup>3</sup><sup>3</sup>3The situation would change if one considers the embedding of the manifold with a global dynamical group into a larger manifold without extending the action of the dynamical group. In this case any kinematical procedure which lies outside the initial manifold can obviously be interpreted as a (quasi)invariant object with respect to the action of the dynamical group.. In particular, in the latter theory the statement proven at the beginning of this section trivializes because all kinematical procedures become essentially local, because they cannot escape the global action of a nontrivial Lorentz-transformation.
## 4 The axioms of translocality
From the scheme presented so far one can immediately infer that the set of all translocal dynamical procedures in a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ is closed under the algebraic operations. This statement may not, in general, have an analog with respect to the local dynamical procedures. Actually, there is, in principle, the possibility that a translocal dynamical procedure can be approximated by finite algebraic operations on local dynamical procedures. In such a situation the dynamical information monitored by an actual measurement on a physical system would algebraically connect both the local and the translocal properties. It is not the objective of this paper to develop the particular mathematical formalism needed to describe physics of this sort, which is indeed a very complicated enterprise<sup>4</sup><sup>4</sup>4It may be expected that nonunitary evolution would be the dominating feature of physics of this type..
The kind of behavior that we may expect to occur for a large class of physical systems in the generally covariant context is that it should categorically not be possible to connect the local and translocal dynamical procedures by a finite (or infinite in an admissible sense) algebraic process in a local inertial sector. Mathematically, this requirement may be converted into the first axiom of translocality, formulated as the statement:
The set of all local dynamical procedures in a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ generates a weakly closed subalgebra of $`𝒜^\omega (𝒪_x^\omega )`$ which has a trivial intersection<sup>5</sup><sup>5</sup>5The difference between a trivial intersection and an empty intersection is that the former is allowed to contain multiples of the identity element. with the algebra of translocal dynamical procedures inside $`𝒪_x^\omega `$.
This axiom emphasizes the feasibility of a substantial distinction between the local and translocal properties inside a local inertial sector.
We shall exclusively deal with theories satisfying this axiom. For such theories the local kinematical procedures in $`𝒜(𝒪_x^\omega )`$ can be identified with ordinary local observation procedures (pure description of possible laboratory measurements) and their corresponding equivalence classes in $`𝒜^\omega (𝒪_x^\omega )`$ with local observables. The equivalence classes of translocal kinematical procedures in $`𝒜^\omega (𝒪_x^\omega )`$ correspond to the properties which do not respond to a local measurement process inside $`𝒪_x^\omega `$. We denote the algebra generated by local observables of a normal neighbourhood $`𝒪_x^\omega `$ by $`𝒜_{obs}^\omega (𝒪_x^\omega )`$. It is considered as a weakly closed subalgebra of $`𝒜^\omega (𝒪_x^\omega )`$.
Particular attention should be directed to the transformation properties of a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ under various automorphisms of $`𝒜^\omega (𝒪_x^\omega )`$. Consider first the case of an inner-automorphism $`\alpha `$ of $`𝒜^\omega (𝒪_x^\omega )`$ generated by a translocal dynamical procedure $`𝒰`$, namely
$$\alpha (A)=𝒰A𝒰^{-1},A\in 𝒜(𝒪_x^\omega ).$$
(8)
An inner-automorphism of this kind is called a translocal morphism. The properties of a physical system in the generally covariant context depend very crucially on the particular way in which a translocal morphism acts geometrically. The second axiom of translocality assumes a one-to-one correspondence between the action of a translocal morphism and the action of a gauge transformation. More precisely, this axiom emphasizes that a given translocal morphism has a geometric action corresponding to the action of a gauge transformation and, conversely, a given gauge transformation has an algebraic action corresponding to a translocal morphism.
Since gauge transformations are diffeomorphisms acting entirely outside $`𝒪_x^\omega `$, it follows that a translocal morphism should not affect the local observables inside the normal neighbourhood $`𝒪_x^\omega `$. This would require an arbitrary element $`𝒰`$ of the algebra of the translocal dynamical procedures to commute with all local observables of a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$, namely
$$[A,𝒰]=0,A\in 𝒜_{obs}^\omega (𝒪_x^\omega ).$$
(9)
Thus the second axiom of translocality implies that the total activity of translocal dynamical procedures inside a local inertial sector can be reduced to the presence of a (nontrivial) commutant of the algebra of local observables in that sector. We call it the translocal commutant of a local inertial sector. It will be denoted by $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$.
Using the first axiom of translocality we can establish a general property of the translocal commutant $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$. We prove, namely, that $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$ should have a trivial center: Let us assume the opposite case. Then, by applying the bicommutant property $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{\prime \prime }=𝒜_{obs}^\omega (𝒪_x^\omega )`$, we would get a nontrivial intersection of the local elements of $`𝒜_{obs}^\omega (𝒪_x^\omega )`$ and the translocal elements of $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$. This is a contradiction to the first axiom of translocality. Thus, the triviality of the center of $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$ becomes imperative.
We may note that the triviality of the center of $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$ may be illustrated as a statement about the global definiteness of the totality of all (non-local) complementary properties of a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$. In the generally covariant context, this definiteness seems to be important in determining the long-range dynamical coupling of a physical state with distant sources. In particular, this global definiteness proves to be very crucial in forming the algebraic action of the dynamical group $`G_\omega `$ inside a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$. To illustrate this point, we note first that, by assumption, this action leaves the translocal dynamical procedures in $`\{\pi ^\omega |𝒪_x^\omega \}`$ unchanged. The most immediate way to manifestly express this property is to approximate an element $`\alpha \in G_\omega `$ inside $`𝒪_x^\omega `$ by an inner-automorphism of $`𝒜^\omega (𝒪_x^\omega )`$ generated by a corresponding element $`L_\alpha \in 𝒜_{obs}^\omega (𝒪_x^\omega )`$, namely
$$\alpha (A)=L_\alpha AL_\alpha ^{-1},A\in 𝒜^\omega (𝒪_x^\omega ).$$
(10)
This relation can be used to study the nature of the group-operator $`L_\alpha `$. We are particularly interested in a situation in which the group-operator $`L_\alpha `$ is uniquely determined by this relation. In general, this relation leaves us with an ambiguity concerning the choice of the group-operator $`L_\alpha `$. In fact, with (10) we get the freedom to replace the group-operator $`L_\alpha `$ by $`L_\alpha C`$, where $`C`$ is an arbitrary element in the center of the translocal commutant $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$. We infer that the triviality of the center of $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$, which was implied by the first axiom of translocality, appears to be a powerful restriction for characterizing the group-operator $`L_\alpha `$.
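A minimal numerical illustration of this ambiguity (the matrices below are arbitrary choices, not objects of the theory): multiplying a candidate group-operator by a central element, here simply a scalar phase, implements exactly the same inner automorphism, so only the triviality of the center can single out $`L_\alpha `$:

```python
import numpy as np

theta = 0.3
L = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # candidate group-operator
c = np.exp(1j * 0.7)                               # central (scalar) element
A = np.array([[1.0, 2.0], [0.0, -1.0]])            # arbitrary algebra element

same = np.allclose(L @ A @ np.linalg.inv(L),
                   (c * L) @ A @ np.linalg.inv(c * L))
print(same)                                        # True: L and c*L define the same automorphism
```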
Putting the totality of the translocal dynamical procedures into the translocal commutant $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$ by no means implies that correlations cannot occur between the local observables and the translocal dynamical procedures inside $`𝒪_x^\omega `$. Indeed, an essential input is to make an assumption of a general nature to characterize the form of the correlations implied by the activity of the translocal dynamical procedures. This issue is addressed by formulating the third axiom of translocality, which reflects the impossibility of isolating the algebra generated by local observables with respect to the dynamical activity of the translocal commutant. To arrive at its mathematical formulation we shall require that, for a physical state $`\omega `$, the corresponding vector $`\mathrm{\Omega }^\omega `$ in a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ be a separating vector for the algebra of local observables $`𝒜_{obs}^\omega (𝒪_x^\omega )`$. This means that it should not be possible to annihilate the vector $`\mathrm{\Omega }^\omega `$ by elements of $`𝒜_{obs}^\omega (𝒪_x^\omega )`$, namely
$$A\mathrm{\Omega }^\omega =0\Rightarrow A=0,A\in 𝒜_{obs}^\omega (𝒪_x^\omega ).$$
(11)
By the standard theorems of the theory of operator algebras the above requirement can alternatively be replaced by the requirement of the cyclicity of the vector $`\mathrm{\Omega }^\omega `$ with respect to the translocal commutant $`[𝒜_{obs}^\omega (𝒪_x^\omega )]^{}`$. In this formulation the third axiom of translocality emphasizes the distinguishing role played by translocal dynamical procedures inside a local inertial sector in singling out a dense subset of the corresponding Hilbert space.
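The content of Eq. (11), and of the equivalent cyclicity statement, can be made concrete in a finite-dimensional toy model (an illustrative sketch with assumed ingredients, not the setting of this paper): on a four-dimensional Hilbert space built as the tensor product of two two-dimensional factors, let the 'local observables' act on the first factor, let their commutant act on the second, and take a maximally entangled vector as a stand-in for $`\mathrm{\Omega }^\omega `$; this vector is then separating for the observable algebra and cyclic for the commutant:

```python
import numpy as np

d = 2
Omega = np.eye(d).reshape(d * d) / np.sqrt(d)     # maximally entangled vector sum_i |i>|i>/sqrt(d)

basis = []                                        # matrix units E_kl spanning the 2x2 matrices
for k in range(d):
    for l in range(d):
        E = np.zeros((d, d)); E[k, l] = 1.0
        basis.append(E)

# Separating: the linear map A -> (A x 1) Omega has trivial kernel, i.e. full rank d^2.
sep = np.column_stack([np.kron(E, np.eye(d)) @ Omega for E in basis])
# Cyclic for the commutant: the vectors (1 x B) Omega span the whole space.
cyc = np.column_stack([np.kron(np.eye(d), E) @ Omega for E in basis])

print("separating for observables:", np.linalg.matrix_rank(sep) == d * d)   # True
print("cyclic for the commutant:  ", np.linalg.matrix_rank(cyc) == d * d)   # True
```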
## 5 Commutant duality
In reality, the additional information which should, in principle, be available in the form of correlations between local observables and translocal dynamical procedures has a significant effect on the effective description of the short-distance behavior of the underlying theory. To understand this effect, one has to extrapolate the physical information carried by the members of the translocal commutant to the short-distance characteristics of a local inertial sector. This issue can be addressed by formulating a duality principle which, in essence, connects the long-distance properties of states with their corresponding short-distance counterparts using a gauge transformation:
Given a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$, we call any neighbourhood $`𝒪_x\subset 𝒪_x^\omega `$ of the point $`x`$ which is invariant under the geometric action of the dynamical group $`G_\omega `$ an invariant neighbourhood of the origin.
We argue that to any local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ one can assign a characteristic invariant neighbourhood of the origin. By definition, the origin $`x`$ is invariant under the geometric action of the dynamical group. Thus, one need only pass from the origin to one of its neighborhoods $`𝒪_x`$ on which the action of the dynamical group remains arbitrarily close to the identity, such that no local observable can properly be affiliated to $`𝒪_x`$. The operational way to achieve this is as follows: One may start with a contracting sequence of neighborhoods $`𝒪_x^\lambda \subset 𝒪_x^\omega `$ of the point $`x`$
$$𝒪_x^{\lambda +1}\subset 𝒪_x^\lambda ,$$
(12)
which is ideally taken to shrink to the point $`x`$ as $`\lambda \to \mathrm{}`$, and then truncate the sequence at some sufficiently large $`\lambda `$. The prefix ’sufficiently large’ characterizes an index $`\lambda `$ for which the set of numbers
$$|\omega ^{}(\alpha (A))-\omega ^{}(A)|,A\in 𝒜^\omega (𝒪_x^\lambda )$$
taken for all states $`\omega ^{}`$ in the folium $`^\omega `$ of $`\omega `$ and for all elements $`\alpha `$ of the dynamical group $`G_\omega `$, remains smaller than a characteristic nonvanishing small number $`ϵ`$ characterizing the limited accuracy of the local measurements. In this way the correspondence between a local inertial sector and a characteristic invariant neighbourhood of the origin may be established. For this neighbourhood we use the name the continuous image of the origin.
Our objective is now to apply the second axiom of translocality to derive a duality principle which emphasizes the significance of translocal dynamical procedures for modelling the algebra corresponding to the continuous image of the origin in a local inertial sector. Consider a gauge-transformation $`\sigma `$ in a local inertial sector, which according to the second axiom of translocality, has the algebraic action corresponding to a translocal morphism. Given a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$ it is always possible to obtain a representation in the equivalence class of $`\pi ^\omega `$ by applying a gauge transformation $`\sigma `$ to $`𝒜^\omega (𝒪_x^\omega )`$. Let us now consider a gauge transformation $`\sigma `$ which sends the totality of points outside $`𝒪_x^\omega `$ into the continuous image of the origin inside a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$. It follows that for the image of the translocal commutant under $`\sigma `$ we can establish the inclusion property
$$\sigma [𝒜_{obs}^\omega (𝒪_x^\omega )]^{}\subset 𝒜^\omega (𝒪_x)$$
(13)
which holds for any neighbourhood $`𝒪_x\subset 𝒪_x^\omega `$ which contains the continuous image of the origin as a proper subset. The inclusion property (13) tells us that the gauge-transformation $`\sigma `$ can be used to affiliate the translocal commutant to the continuous image of the origin. Since gauge transformations are symmetry operations inside a local inertial sector, we infer that by restricting a state to the continuous image of the origin the folium of $`\omega `$ becomes indistinguishable from the set of normal states over the translocal commutant. This is the expression of what we call the commutant duality.
We should emphasize that the geometric gauge transformations underlying the formulation of commutant duality is a novel feature of the principle of general covariance and can not be exemplified in conventional models of quantum field theory with no geometric gauge group. Since the geometric gauge transformations can, in principle, be used to affiliate the translocal commutant to any open region inside the normal neighbourhood $`𝒪_x^\omega `$, one can generally say that the theory deals profoundly with two different phases inside a local inertial sector, depending on whether the local or the translocal properties are considered as primary properties. Once this has been recognized, then the investigation of a possible symmetry between these two distinct phases appears to be a problem of direct physical relevance. This symmetry, which can generally be termed under the name of ‘duality’, needs the study of those coordinate transformations exchanging the local and translocal dynamical procedures which are related, in a specific model, to different sets of dynamical variables. One can generally expect that the formulation of this symmetry would reflect new geometric gauge invariance which is not visible inside a local inertial sector. It is needless to say that such a development would also shed new light on the symmetry behind the currently discussed duality of supersymmetric gauge theory<sup>6</sup><sup>6</sup>6See and references therein. and string theory.
Our last remark in this section concerns the notion of a quantum equivalence principle. There exists a formulation of this principle in the framework of quantum field theory in curved space which takes the correspondence between the leading short-distance singularity of states and the corresponding singularity of the vacuum in Minkowski space as basic. In the present context, the commutant duality requires a profoundly smooth short-distance behavior, so there is the need to reformulate the quantum equivalence principle in a different way. This formulation is implied by the commutant duality itself. In fact, combining it with the third axiom of translocality it follows that the state-vector $`\mathrm{\Omega }^\omega `$ can be considered as a cyclic vector for any algebra $`𝒜^\omega (𝒪_x)\subset 𝒜^\omega (𝒪_x^\omega )`$ for which the neighbourhood $`𝒪_x`$ contains the continuous image of the origin as a proper subset. This cyclicity property establishes an exact correspondence between the structure of correlations of the state $`\omega `$ in a local inertial sector and that of the vacuum state in Minkowski-space<sup>7</sup><sup>7</sup>7Actually, in Minkowski-space there is a general result, obtained by Reeh and Schlieder, which states that the vacuum is cyclic not only for the whole algebra but also for the algebra of any open region . We may, therefore, take this correspondence, which is implied by the commutant duality, as a coded form of a quantum equivalence principle.
## 6 Classical properties
We now analyze the consequence of commutant duality in an idealized limit which destroys the algebraic information of the translocal commutant in a local inertial sector. At this level of description a state is unable to monitor the exact form of all conceivable correlations between the local observables and the individual members of the translocal commutant, and the description of a state is transferred to a positive linear functional over the algebra of local observables in a local inertial sector. This corresponds to the conventional description of states in quantum field theory. However, the essential point is that, at such a level of description, the ignorance concerning the accurate form of the algebraic information contained in the translocal commutant implies a structural dependence of the short-distance behavior of the underlying theory on classical properties. We have to clarify this.
Given a local inertial sector $`\{\pi ^\omega |𝒪_x^\omega \}`$, we may ideally transfer the description of the state $`\omega `$ to a positive linear functional over the algebra of local observables in that sector. The question we shall address is how this change of the level of description will alter the nature of the translocal commutant. The resolution is quite immediate. Indeed, the inclusion relation (13) implies that the translocal commutant can then be approximated by a commutative algebra lying in the center of any subalgebra $`𝒜_{obs}^\omega (𝒪_x)\subset 𝒜_{obs}^\omega (𝒪_x^\omega )`$ for which the neighbourhood $`𝒪_x\subset 𝒪_x^\omega `$ contains the continuous image of the origin as a proper subset. In this way the emergence of classical properties in a local inertial sector may be an irreducible feature of the theory if we pass to the conventional level of the description of a state in quantum field theory.
## 7 Concluding remarks
In this paper we have discussed the impact of the principle of general covariance on the algebraic framework of quantum field theory. At first sight the implementation of this principle seems to create confusion concerning a substantial identification of local properties. We have proposed a tentative resolution of the problem which takes the dynamical activity in a local inertial sector as basic. However, the principle of general covariance implies that the set of all local properties in a local inertial sector may not be considered as a completed totality. The notion of translocality was introduced to address this issue. In our approach there is an effective crossover from local properties to the translocal properties, once the short-distance scaling is performed inside a local inertial sector. This interrelation of short-distance scaling with the translocal properties, which is implied by the commutant duality, may be of particular importance for expressing Mach’s principle within the framework of quantum field theory . In particular it emphasizes that the short-distance behavior of quantum field theory in the generally covariant context is profoundly different from that of ordinary quantum field theory. Remarkably, this is especially so for an important class of currently discussed theories generally termed string theory. We cannot at present understand how an exemplification of the general principles of generally covariant quantum field theory in a model can be related to string theory. Nevertheless, it can be expected that for the unification of quantum field theory with certain features of string theory the commutant duality may have a vital role to play.
The next point implied by commutant duality concerns the transition to the conventional description of states in quantum field theory. On this level of description the dominant structure of a generally covariant quantum field theory has been recognized to be the occurrence of classical properties. It is an interesting subject to analyze the interrelation of such classical properties with the classical space-time metric of general relativity. We hope to address the issue elsewhere.
Acknowledgment
The Author would like to specially acknowledge the financial support of the Office of Scientific Research of Shahid Beheshti University. Thanks are also due to an anonymous referee for useful suggestions.
References
1. Haag R, Local Quantum Physics, Springer (1992)
2. Fredenhagen K and Haag R, Commun. Math. Phys. 108, 91 (1987)
3. Salehi H, Class. Quantum Grav. 9, 2557-2571 (1992)
4. Salehi H, Int. J. Theor. Phys. 36, 9 (1997)
5. Salehi H, Int. J. Theor. Phys. 36, No. 4 (1998)
6. Bratteli O and Robinson D, Operator Algebras and Quantum Statistical Mechanics I, Springer (1979)
7. Seiberg N, The power of duality: exact results in 4d SUSY field theory, hep-th/9506077
8. Haag R, Narnhofer H and Stein U, Commun. Math. Phys. 94, 219 (1984)
9. Brans C and Dicke R H, Phys. Rev. 124, 925-935 (1961) |
no-problem/9912/hep-ph9912419.html | ar5iv | text | # Color Superconducting Quark Matter in Neutron Stars
## Abstract
Color superconductivity in quark matter is studied for electrically charge neutral neutron star matter in $`\beta `$-equilibrium. Both bulk quark matter and mixed phases of quark and nuclear matter are treated. The electron chemical potential and strange quark mass affect the various quark chemical potentials and therefore also the color superconductivity due to dicolor pairing or color-flavor locking.
Strongly interacting matter is expected to undergo a transition to chirally restored matter of quarks and gluons at sufficiently high baryon or energy density. Such phase transitions are currently investigated in relativistic heavy-ion collisions and may exist in the interior of neutron stars. At low temperatures a condensate of quark Cooper pairs may appear, characterized by a BCS gap $`\mathrm{\Delta }`$, usually referred to as color superconductivity (CSC). The appearance of a gap through color-flavor locking (CFL) requires the gap to exceed the difference between the quark Fermi momenta, which is not the case for sufficiently large strange quark masses. In neutron star matter the presence of an appreciable electron chemical potential, $`\mu _e`$, also changes the conditions for CFL as discussed in the following.
In neutron star matter $`\beta `$-equilibrium relates the quark and electron chemical potentials
$`\mu _d=\mu _s=\mu _u+\mu _e.`$ (1)
Temperatures are normally much smaller than typical Fermi energies in neutron stars. If interactions are weak, the chemical potentials are then related to Fermi momenta by $`\mu _i=\sqrt{m_i^2+p_i^2}`$. If the strange quark mass $`m_s`$ is much smaller than the quark chemical potentials, Eq. (1) implies a difference between the quark Fermi momenta
$`p_u-p_d=-\mu _e,`$ (2)
$`p_u-p_s\simeq \frac{m_s^2}{2\mu }-\mu _e,`$ (3)
$`p_d-p_s\simeq \frac{m_s^2}{2\mu },`$ (4)
where $`\mu `$ is an average quark chemical potential. In Fig. 1 the differences between the Fermi momenta given by Eqs. (2-4) are plotted as a function of electron chemical potential. Strange quark masses are estimated from low energy QCD as $`m_s\simeq 150`$–$`200`$ MeV and typical quark chemical potentials are $`\mu \simeq 400`$–$`600`$ MeV in quark matter . Consequently, $`m_s^2/2\mu \simeq 10`$–$`25`$ MeV.
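These mismatches are easy to tabulate directly. The sketch below (Python) evaluates Eqs. (2)-(4) for illustrative values of $`m_s`$ and $`\mu `$ taken from the ranges just quoted; it is only meant to reproduce the trend shown in Fig. 1, not the figure itself.

```python
import numpy as np

mu, m_s = 500.0, 175.0                            # MeV; illustrative values from the ranges above
mu_e = np.linspace(0.0, 50.0, 6)                  # electron chemical potential (MeV)

dp_ud = -mu_e                                     # p_u - p_d, Eq. (2)
dp_us = m_s**2 / (2.0 * mu) - mu_e                # p_u - p_s, Eq. (3)
dp_ds = np.full_like(mu_e, m_s**2 / (2.0 * mu))   # p_d - p_s, Eq. (4)

for row in zip(mu_e, dp_ud, dp_us, dp_ds):
    print("mu_e = %5.1f   p_u-p_d = %6.1f   p_u-p_s = %6.1f   p_d-p_s = %5.1f" % row)
```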
Perturbative corrections change the relation between Fermi momenta and chemical potentials for relativistic quarks to $`p_q=\mu _q(1-2\alpha _s/3\pi )^{1/3}`$ and lead to corrections of order $`\alpha _sm_s^2/\mu `$ in Eqs. (3-4) for a massive strange quark. For weak coupling and small strange quark masses such effects are small and will be ignored in the following.
The BCS gap equation has previously been solved for u,d and u,d,s quark matter ignoring electrons and $`\beta `$-equilibrium and the conditions for condensates of dicolor pairs and CFL respectively were obtained . The CFL condition consists of three pair-wise “CFL” conditions
$`\mathrm{\Delta }>|p_i-p_j|,\quad i,j=u,d,s,`$ (5)
and thus requires both a small electron chemical potential and a small strange quark mass according to Eqs. (2-4). Alternatively, only one of these conditions may be fulfilled. For example, ud “CFL” requires small electron chemical potential whereas ds “CFL” requires a small strange quark mass. The us “CFL” condition can actually be satisfied when $`\mu _e\simeq m_s^2/2\mu `$. For these three cases a condensate of dicolor pairs (2CS) can appear between ud, us, ds-quarks analogous to the standard ud 2CS usually discussed for symmetric ud quark matter .
The magnitude of the electron chemical potential will now be discussed for electrically charge neutral bulk quark matter as well as for mixed phases of quark and nuclear matter .
Bulk quark matter must be electrically neutral, i.e., the net positively charged quark density must be balanced by the electron density
$`n_e=\frac{\mu _e^3}{3\pi ^2}=\frac{2}{3}n_u-\frac{1}{3}n_d-\frac{1}{3}n_s`$ (6)
$`\simeq \frac{1}{\pi ^2}\left(\frac{1}{2}m_s^2\mu -2\mu _e\mu ^2\right).`$ (7)
Muons will appear when their chemical potential exceeds their rest masses, $`\mu _\mu =\mu _e>m_\mu `$, but this occurs for very large electron chemical potentials only, where CSC is unlikely, and muons will therefore be ignored here.
When the electron chemical potential is small as compared to the quark chemical potentials the l.h.s. of Eq. (7) is negligible and we obtain
$`\mu _e\simeq \frac{m_s^2}{4\mu }.`$ (8)
Although the electron charge density is negligible, the electrons affect the quark chemical potentials through $`\beta `$-equilibrium. The “CFL” condition for ud-quarks, Eq. (2), is therefore the same as the “CFL” condition for us-quarks, Eq. (3), in pure quark matter.
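A quick numerical check of this approximation (Python, with illustrative values of $`m_s`$ and $`\mu `$) solves the neutrality condition of Eqs. (6)-(7) for $`\mu _e`$ by bisection and compares it with $`m_s^2/4\mu `$:

```python
m_s, mu = 175.0, 500.0                    # MeV, illustrative values

# Eqs. (6)-(7): mu_e^3 / (3 pi^2) = (m_s^2 mu / 2 - 2 mu_e mu^2) / pi^2
f = lambda mu_e: mu_e**3 / 3.0 + 2.0 * mu**2 * mu_e - 0.5 * m_s**2 * mu

lo, hi = 0.0, 100.0                       # MeV; f(lo) < 0 < f(hi)
for _ in range(60):                       # bisection
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)

print("mu_e from Eqs. (6)-(7):   %.2f MeV" % (0.5 * (lo + hi)))
print("approximation m_s^2/4mu:  %.2f MeV" % (m_s**2 / (4.0 * mu)))
```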
A mixed phase of quark and nuclear matter has lower energy per baryon at a wide range of densities if the Coulomb and surface energies associated with the structures are sufficiently small . The mixed phase will then consist of two coexisting phases of nuclear and quark matter in droplet, rod- or plate-like structures in a continuous background of electrons, much like the mixed phase of nuclear matter and a neutron gas in the inner crust of neutron stars . Another requirement for a mixed phase is that the length scales of such structures must be shorter than typical screening lengths.
In the mixed phase the nuclear and quark matter will be positively and negatively charged respectively. Total charge neutrality requires
$`n_e=(1-f)n_p+f\left(\frac{2}{3}n_u-\frac{1}{3}n_d-\frac{1}{3}n_s\right),`$ (9)
where $`n_p`$ is the proton density and $`f`$ is the “filling fraction”, i.e. the fraction of the volume filled by quark matter. For pure nuclear matter, $`f=0`$, the nuclear symmetry energy can force the electron chemical potential above $`100`$ MeV at a few times normal nuclear matter densities. With increasing filling fraction, however, negative charged droplets of quark matter replace some of the electrons and $`\mu _e`$ decreases. With increasing density and filling fraction it drops to its minimum value given by Eq. (8) corresponding to pure quark matter, $`f=1`$.
Gap sizes of order a few MeV or less were originally estimated within perturbative QCD . Non-perturbative calculations give large gaps of order a few tens of MeV . At high densities, $`\mu \to \infty `$, and weak couplings ($`g`$) the gap has been calculated, $`\mathrm{\Delta }\sim g^{-5}\mathrm{exp}(-3\pi ^2/\sqrt{2}g)`$, and drops below $`1`$ MeV . Information about the non-perturbative low density limit may be obtained from studies of dilute Fermi systems. At low densities, when the typical range of interaction, $`R`$, is much shorter than the scattering length, $`|a|`$, the gap is $`\mathrm{\Delta }\sim \mu \mathrm{exp}(-\pi /2p_F|a|)`$ , where $`a`$ is the non-relativistic scattering length. For $`p_F|a|\gtrsim 1`$ the gap may be large - of order the Fermi energy. Large scattering lengths appear when the two interacting particles almost form a bound state. However, confinement of quarks is different from such a simple bound state analogy and the large gap of order the chemical potential may not be conjectured for relativistic quarks.
In the mixed phase gaps may also be affected by the finite size of the quark matter structures. For example, pairing in nuclei is dominated by surface effects since gaps in nuclear matter are larger at lower densities. As droplets of quark matter are of similar size and baryon number we may expect similar finite size effects to enhance the CSC gap sizes.
For large gaps it may also be energetically favorable to have spatially varying quark chemical potentials and densities such that CFL occurs in some regions but not in others. From the gain in energy of order $`\mathrm{\Delta }^2/\mu `$ per particle the system must, however, pay Coulomb and surface energies associated with these structures . A similar scenario is considered in for u,d quark matter.
Consequences: Some bulk or mixed phase regions of quark matter in neutron stars can be color superconducting either by CFL or 2CS depending on the gap sizes, electron chemical potentials and strange quark masses as described above. Furthermore, temperatures in neutron stars are so low, $`T\lesssim 10^6\mathrm{K}\simeq 10^{-4}`$ MeV, that quark matter structures would be solid frozen. As a consequence, lattice vibration will couple electrons at the Fermi surface with opposite momenta and spins via phonons and lead to a “standard” BCS gap for electrons. The isotopic masses are similar but as densities and Debye frequencies are larger, we can expect considerably larger BCS gaps for electrons. At typical neutron star densities neutrons and protons are superfluid as well due to $`{}^{1}S_{0}`$ and, in the case of protons, also $`{}^{3}P_{2}`$ pairing . These superfluid and superconducting components will have drastically different transport properties than normal Fermi liquids . Generally the resistance, specific heat, viscosities, cooling, etc. are suppressed by factors of order $`\mathrm{exp}(-\mathrm{\Delta }_i/T)`$, where $`\mathrm{\Delta }_i`$ is the gap of quarks, nucleons or electrons.
In relativistic nuclear collisions the strange quark chemical potential is zero initially and expansion times $`R/c\simeq 10`$ fm/c are short as compared to time scales for weak decay and strangeness distillation. Therefore, $`\mu _s\simeq 0`$ and we expect no CFL. In heavy ion collisions the amount of neutrons and therefore also d-quarks exceeds that of protons and u-quarks. The resulting difference, $`|p_d-p_u|`$, can prohibit a 2CS depending on density, temperature and gap size.
In summary the conditions for color superconductivity in quark matter were given in Eqs. (2-4) for electrically charge neutral neutron star matter in $`\beta `$-equilibrium - both bulk quark matter and mixed phases of quark and nuclear matter. The electron chemical potential and strange quark mass affect the various quark chemical potentials. For CFL to occur the gap must exceed both the electron chemical potential, $`\mathrm{\Delta }\gtrsim \mu _e`$, and the mismatch in Fermi momenta induced by a massive strange quark, $`\mathrm{\Delta }\gtrsim m_s^2/2\mu `$. Alternatively, if $`\mu _e`$, $`m_s^2/2\mu `$ or the difference $`|\mu _e-m_s^2/2\mu |`$ are smaller than the gap, then a condensate of dicolor pairs (2CS) can appear between ud, ds, us-quarks respectively analogous to the standard ud 2CS usually discussed for symmetric ud quark matter. |
no-problem/9912/cond-mat9912345.html | ar5iv | text | # Metal-Insulator transitions in generalized Hubbard models
## 1 Introduction
The Hubbard model is an elementary model to study strongly interacting systems. In particular it can be used to understand the Mott metal-insulator transition. The original Hubbard model describes $`s`$ electrons in a narrow band. With one electron per site, the half-filled system will become insulating for large enough correlation strength. To describe more realistic situations one has to generalize the Hubbard model to systems with degenerate (or near-degenerate) orbitals. Such degenerate orbitals can arise for example in molecular solids or transition metal compounds. For such a degenerate Hubbard model there will be a Mott transition not only at half-filling, but for all integer fillings. It is then a natural question how the location of the Mott transition depends, for an otherwise unchanged Hamiltonian, on the filling. Another issue is the problem of how the Mott transition depends, for the same filling, on the lattice-structure of the system. To address these questions we have determined the Mott transition for degenerate Hubbard models with various integer fillings and different lattice-structures. We have used Hamiltonians that describe the alkali doped Fullerenes, since these materials have been synthesized in various integer dopings and crystal structures.
## 2 Model and Method
The inter-molecular interaction in solid C<sub>60</sub> is very weak. Therefore the energy levels of the molecule merely broaden into narrow, well separated bands . The conduction band originates from the lowest unoccupied molecular orbital, the 3-fold degenerate $`t_{1u}`$ orbital. To get a realistic, yet simple description of the electrons in the $`t_{1u}`$ band, we use a Hubbard-like model that describes the interplay between the hopping of the electrons and their mutual Coulomb repulsion :
$$H=\sum _{\langle ij\rangle }\sum _{mm^{\prime }\sigma }t_{im,jm^{\prime }}c_{im\sigma }^{\dagger }c_{jm^{\prime }\sigma }+U\sum _i\sum _{(m\sigma )<(m^{\prime }\sigma ^{\prime })}n_{im\sigma }n_{im^{\prime }\sigma ^{\prime }}.$$
(1)
The sum $`\langle ij\rangle `$ is over nearest-neighbor sites. The hopping matrix elements $`t_{im,jm^{\prime }}`$ between orbital $`m`$ on molecule $`i`$ and orbital $`m^{\prime }`$ on molecule $`j`$ are obtained from a tight-binding parameterization . The molecules are orientationally disordered , and the hopping integrals are chosen such that this orientational disorder is included . The band-width for the infinite system is $`W=0.63`$ eV. The on-site Coulomb interaction is $`U\approx 1.2`$ eV. The model neglects multiplet effects, but we remark that these tend to be counteracted by the Jahn-Teller effect, which is also not included in the model.
To identify the Mott transition we calculate the energy gap
$$E_g=E(N+1)-2E(N)+E(N-1),$$
(2)
where $`E(N)`$ is the energy of a cluster of $`N_{\mathrm{mol}}`$ molecules with $`N=nN_{\mathrm{mol}}`$ electrons (integer filling $`n`$). We determine these energies using fixed-node diffusion Monte Carlo . Starting from a trial function $`|\mathrm{\Psi }_T\rangle `$ we calculate
$$|\mathrm{\Psi }^{(n)}\rangle =[1-\tau (H-w)]^n|\mathrm{\Psi }_T\rangle ,$$
(3)
where $`w`$ is an estimate of the ground-state energy. The $`|\mathrm{\Psi }^{(n)}\rangle `$ are guaranteed to converge to the ground state $`|\mathrm{\Psi }_0\rangle `$ of $`H`$, if $`\tau `$ is sufficiently small and $`|\mathrm{\Psi }_T\rangle `$ is not orthogonal to $`|\mathrm{\Psi }_0\rangle `$. Since we are dealing with Fermions, the Monte Carlo realization of the projection (3) suffers from the sign-problem. To avoid the exponential decay of the signal-to-noise ratio we use the fixed-node approximation . For lattice models this involves defining an effective Hamiltonian $`H_{\mathrm{eff}}`$ by deleting from $`H`$ all nondiagonal terms that would introduce a sign-flip. Thus, by construction, $`H_{\mathrm{eff}}`$ is free of the sign-problem. To ensure that the ground-state energy of $`H_{\mathrm{eff}}`$ is an upper bound of $`E_0`$, for each deleted hopping term an on-site energy is added in the diagonal of $`H_{\mathrm{eff}}`$. Since $`|\mathrm{\Psi }_T\rangle `$ is used for importance sampling, $`H_{\mathrm{eff}}`$ will depend on the trial function. Thus, in a fixed-node diffusion Monte Carlo calculation for a lattice Hamiltonian we choose a trial function and construct the corresponding effective Hamiltonian, for which the ground-state energy $`E_{\mathrm{FNDMC}}`$ can then be determined without sign-problem by diffusion Monte Carlo.
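The projection in Eq. (3) can be illustrated deterministically on a small matrix; the sketch below (Python) applies $`[1-\tau (H-w)]`$ repeatedly to an arbitrary Hermitian matrix and converges towards its ground state. This only illustrates the projection itself, not the stochastic fixed-node machinery that makes it usable for large Hilbert spaces.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))
H = 0.5 * (H + H.T)                        # an arbitrary Hermitian "Hamiltonian"
E0 = np.linalg.eigvalsh(H)[0]              # exact ground-state energy, for comparison

psi = np.ones(4) / 2.0                     # trial state |Psi_T>, not orthogonal to |Psi_0>
tau, w = 0.1, E0                           # small step; w is an estimate of E0
for _ in range(1000):
    psi = psi - tau * (H @ psi - w * psi)  # |Psi^(n+1)> = [1 - tau (H - w)] |Psi^(n)>
    psi /= np.linalg.norm(psi)             # normalization does not affect the physics

print("projected energy:", psi @ H @ psi)
print("exact E0:        ", E0)
```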
For the trial function we make the Gutzwiller Ansatz
$$|\mathrm{\Psi }(U_0,g)\rangle =g^D|\mathrm{\Phi }(U_0)\rangle ,$$
(4)
where the Gutzwiller factor reflects the Coulomb term $`UD=U\sum _i\sum _{(m\sigma )<(m^{\prime }\sigma ^{\prime })}n_{im\sigma }n_{im^{\prime }\sigma ^{\prime }}`$ in the Hamiltonian (1). $`|\mathrm{\Phi }(U_0)\rangle `$ is a Slater determinant that is constructed by solving the Hamiltonian in the Hartree-Fock approximation, replacing $`U`$ by a variational parameter $`U_0`$. Increasing $`U_0`$ will change the character of the trial function from paramagnetic to antiferromagnetic. This transition is also reflected in the variational energies obtained in quantum Monte Carlo, as shown in Fig. 1. Clearly, for small $`U`$ the paramagnetic state is more favorable, while for large $`U`$ the antiferromagnetic state gives a lower variational energy. Details on the character of the trial function and the optimization of the parameters can be found in Ref. .
## 3 Results
We now turn to the problem of the Mott transition in degenerate Hubbard models for different integer dopings and for different lattice-structures. The examples will be for integer-doped Fullerides A<sub>n</sub>C<sub>60</sub>, where A stands for an alkali metal like K, Rb, or Cs. Density functional calculations predict that all the doped Fullerides A<sub>n</sub>C<sub>60</sub> with $`n=1`$ to 5 are metals . Only C<sub>60</sub> and A<sub>6</sub>C<sub>60</sub> are insulators with a completely empty/filled $`t_{1u}`$ band. On the other hand, Hartree-Fock calculations for the Hamiltonian (1) predict a Mott transition already for $`U`$ smaller than the band-width, and hardly any doping dependence. General arguments also suggest that the alkali doped Fullerenes should be Mott insulators, since the Coulomb repulsion $`U`$ between two electrons on the same C<sub>60</sub> molecule ($`U\approx 1.2`$ eV) is substantially larger than the width of the $`t_{1u}`$ band ($`W\approx 0.6`$ eV). It has therefore even been suggested that experimental samples of, say, the superconductor K<sub>3</sub>C<sub>60</sub> are metallic only because they are non-stoichiometric, i.e. that they actually are K<sub>3-δ</sub>C<sub>60</sub> .
### 3.1 K<sub>3</sub>C<sub>60</sub>
In a first step we investigate what consequences the degeneracy of the $`t_{1u}`$-band has for the Mott transition in K<sub>3</sub>C<sub>60</sub>. The analysis is motivated by the following simple argument . In the limit of very large $`U`$ we can estimate the energies needed to calculate the gap (2). For half-filling, all molecules will have three electrons in the $`t_{1u}`$ orbital (Fig. 2 a). Hopping is strongly suppressed since it would increase the energy by $`U`$. Therefore, to leading order in $`t^2/U`$, there will be no kinetic contribution to the total energy $`E(N)`$. In contrast, the systems with $`N\pm 1`$ electrons have an extra electron/hole that can hop without additional cost in Coulomb energy. To estimate the kinetic energy we calculate the matrix element for the hopping of the extra charge against an antiferromagnetic background. Denoting the initial state with extra charge on molecule $`i`$ by $`|1\rangle `$, we find that the second moment $`\langle 1|H^2|1\rangle `$ is given by the number of different possibilities for a next-neighbor hop times the single electron hopping matrix element $`t`$ squared. By inserting $`\sum _j|j\rangle \langle j|`$, where $`|j\rangle `$ denotes the state with the extra charge hopped from site $`i`$ to site $`j`$, we find $`\langle 1|H|j\rangle =\sqrt{3}t`$, since, with an antiferromagnetic background and degeneracy 3, there are three different ways an extra charge can hop to a neighboring molecule (Fig. 2, b). Thus, due to the 3-fold degeneracy, the hopping matrix element is enhanced by a factor $`\sqrt{3}`$ compared to the single electron hopping matrix element $`t`$. In the single-electron problem the kinetic energy is of the order of half the band-width $`W/2`$. The enhancement of the hopping matrix element in the many-body case therefore suggests that the kinetic energy for the extra charge is correspondingly enhanced. Inserting the energies into (2) we find that for the 3-fold degenerate system our simple argument predicts a gap
$$E_g=U-\sqrt{3}W,$$
(5)
instead of $`E_g=U-W`$ in the non-degenerate case. Extrapolating to intermediate $`U,`$ it appears that the degeneracy shifts the Mott transition towards larger $`U`$.
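Taking the large-$`U`$ estimate at face value and inserting the band-width quoted above, $`W=0.63`$ eV, the gap of Eq. (5) would close at $`U=\sqrt{3}W\approx 1.09`$ eV instead of at $`U=W=0.63`$ eV; the short sketch below (Python) just records this arithmetic:

```python
import math

W = 0.63                                           # t1u band-width in eV, quoted above
print("gap closes at U = W        = %.2f eV (no degeneracy enhancement)" % W)
print("gap closes at U = sqrt(3)W = %.2f eV (with the sqrt(3) enhancement)" % (math.sqrt(3) * W))
```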
The above argument is, of course, not rigorous. First, it is not clear whether the result for $`E_g`$ that was obtained in the limit of large $`U`$ can be extrapolated to intermediate $`U,`$ where the Mott transition actually takes place. Also the analogy of the hopping in the many-body case with the hopping of a single electron is not rigorous, since the hopping of an extra charge against an antiferromagnetic background creates a string of flipped spins . Nevertheless the argument suggests that orbital degeneracy should play an important role for the Mott transition.
To check this proposition, we look at the results of the quantum Monte Carlo calculations for the model Hamiltonian (1) . The Coulomb interaction $`U`$ has been varied from $`U=0`$ to $`1.75`$ eV to study the opening of the gap. Since the Monte Carlo calculations are for finite systems, we have to extrapolate to infinite system size. To improve the extrapolation we correct for finite-size effects: First, there could be a gap $`E_g(U=0)`$ already in the spectrum of the non-interacting system. Further, even for a metallic system of $`M`$ molecules, there will be a finite-size contribution of $`U/M`$ to the gap. It comes from the electrostatic energy of the extra charge, uniformly distributed over all sites. Both corrections vanish in the limit $`M\to \mathrm{\infty }`$, as they should. The finite-size corrected gap $`\stackrel{~}{E}_g=E_g-U/M-E_g(U=0)`$ for systems with $`M=`$ 4, 8, 16, 32, and 64 molecules is shown in Fig. 3. We find that the gap opens for $`U`$ between $`1.50`$ eV and $`1.75`$ eV. Since for the real system $`U=1.2`$–$`1.4`$ eV, K<sub>3</sub>C<sub>60</sub> is thus close to a Mott transition, but still on the metallic side — even though $`U`$ is considerably larger than the band-width $`W`$. This is in contrast to simpler theories that neglect orbital degeneracy.
### 3.2 Doping dependence
The degeneracy argument described above for K<sub>3</sub>C<sub>60</sub> can be generalized to integer fillings. Away from half-filling the enhancement of the hopping matrix elements for an extra electron is different from that for an extra hole. The effective enhancement for different fillings is given in Table 1. We find that the enhancement decreases as we move away from half-filling. Therefore we expect that away from half-filling correlations become more important, putting the system closer to the Mott transition, or maybe even pushing it across the transition, making it an insulator. We have analyzed the doping dependence of the Mott transition for the same Hamiltonian as used for K<sub>3</sub>C<sub>60</sub>, changing the filling of the $`t_{1u}`$ band from $`n=1`$ to 5. This model describes the Fm$`\overline{3}`$m-Fullerides A<sub>n</sub>C<sub>60</sub> with fcc lattice and orientational disorder . The critical Coulomb interaction $`U_c`$, at which, for the different integer fillings, the transition from a metal (for $`U<U_c`$) to an insulator ($`U>U_c`$) takes place, is shown in Fig. 4. As expected from the degeneracy argument, $`U_c`$ decreases away from $`n=3`$. We note, however, that $`U_c`$ is asymmetric around half-filling. This asymmetry is not present in the simple degeneracy argument, where we implicitly assumed that the lattice is bipartite. In such a situation we have electron-hole symmetry, which implies symmetry around half-filling. For frustrated lattices, like the fcc lattice, electron-hole symmetry is broken, leading to an asymmetry in $`U_c`$ that is seen in Fig. 4.
### 3.3 Dependence on lattice-structure
To understand the effect of frustration in terms of the hopping arguments that we have made so far, we have to consider more than just one next-neighbor hop. The simplest system where we encounter frustration is a triangle with hopping matrix elements $`t`$ between neighboring sites. In the single-electron case we can form a bonding state with energy $`E_{\mathrm{min}}=-2t,`$ but because of frustration we cannot form an anti-bonding state. Instead the maximum eigenenergy is $`E_{\mathrm{max}}=t.`$ Hence frustration leads to an asymmetric ’band’ of width $`W=3t.`$
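The single-electron spectrum of the triangle is easy to verify; the sketch below (Python) diagonalizes the $`3\times 3`$ hopping matrix with the sign convention (assumed here) that the hopping term enters the Hamiltonian with a minus sign, so that the bonding state indeed lies at $`-2t`$:

```python
import numpy as np

t = 1.0
H = -t * (np.ones((3, 3)) - np.eye(3))    # hopping -t between each pair of sites on the triangle
print(np.linalg.eigvalsh(H))              # -> [-2.  1.  1.]: E_min = -2t, E_max = +t, width 3t
```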
In the many-body case the situation is different. Like in the degeneracy argument, we look at the hopping of an extra electron against a (frustrated) antiferromagnetic background in the large-$`U`$ limit. For simplicity we assume a non-degenerate system, i.e. there is one electron per site on the triangle, plus the extra electron. In this case we have to move the extra charge twice around the triangle to come back to the many-body state we started from (cf. Fig. 5). Thus in the large-$`U`$ limit the many-body problem is an eigenvalue problem of a $`6\times 6`$ matrix with extreme eigenvalues $`\pm 2t`$. In the degeneracy argument we have assumed that the kinetic energy of the extra charge is given by $`W/2`$. On the triangle, we find, however, that the hopping energy is by a factor of $`4/3`$ larger than that. This suggests that for frustrated systems the single electron band-width $`W`$ in (5) should be multiplied by a prefactor larger than one. We therefore expect that frustration alone, already without degeneracy, shifts the Mott transition to larger $`U`$.
To analyze the effect of frustration on the Mott transition we have determined the critical $`U`$ for a hypothetical doped Fullerene A<sub>4</sub>C<sub>60</sub> with body centered tetragonal (bct) structure, a lattice without frustration, having the same band-width ($`W=0.6`$ eV) as the fcc-Fullerides, shown in Fig. 4. For $`U=1.3`$ eV, we find a gap $`E_g\approx 0.6`$ eV for the Fulleride with bct structure, while the frustrated fcc compound still is metallic, $`E_g=0`$. This difference is entirely due to the lattice-structure. Using realistic parameters for K<sub>4</sub>C<sub>60</sub> that crystallizes in a bct structure we find a Mott insulator with gap $`E_g\approx 0.7`$ eV, which is in line with experimental findings: $`E_g=0.5\pm 0.1`$ eV .
## 4 Conclusion
We have seen that, due to more efficient hopping, orbital degeneracy increases the critical $`U`$ at which the Mott transition takes place. This puts the integer-doped Fullerenes close to a Mott transition. Whether they are on the metallic or insulating side depends on the filling of the band and the lattice-structure: Since the degeneracy enhancement is most efficient for a half-filled band, systems doped away from half-filling tend to be more insulating. The effect of frustration, on the other hand, is to make the system more metallic.
This work has been supported by the Alexander-von-Humboldt-Stiftung under the Feodor-Lynen-Program and the Max-Planck-Forschungspreis, and by the Department of Energy, grant DEFG 02-96ER45439. |
no-problem/9912/nucl-th9912037.html | ar5iv | text | # An investigation of standard thermodynamic quantities as determined via models of nuclear multifragmentation
## I Introduction and Overview
Many models have been proposed to describe the breakup of a large nucleus subjected to excitation energies greater than a few MeV per nucleon, a process known as multifragmentation. Experimentally, the signature of multifragmentation is the production of a wide range of nuclear reaction products, particularly intermediate mass fragments (IMFs), $`3\le Z\le 30`$. On the basis of inclusive data, it was proposed that these fragments were produced in analogy to a liquid-to-gas phase transition occurring in a nucleus. A recent experiment that permitted the total charge reconstruction of each event studied multifragmentation resulting from the breakup of gold nuclei as a function of the excitation energy deposited . The statistical aspects of these data have provided strong evidence that multifragmentation is indeed related to a phase transition occurring in a finite system. Whether the production of IMFs in such collisions is due to a phase transition, and if so, what type, is still an issue of much debate .
One class of models developed to explore the fate of a nucleus as a function of excitation energy is based on the phenomenological description of the free energy, $`F(V,T)`$, of the breakup state, where $`T`$ is the common temperature of all nucleons and nuclei within the breakup volume $`V`$. These nuclei are considered to be at normal nuclear density and interact only via the Coulomb force. The distribution of nuclear fragments prior to any secondary decay can then be calculated as a first step in the disassembly of the excited initial system. To compare with data, deexcitation of fragments and expansion of the system due to the Coulomb repulsion between the fragments must be accounted for in the model. However, if the thermodynamics of the model is of interest, as is the case in this work, then only the behavior of thermodynamic variables need be examined, e.g. free energy, entropy, specific heat, pressure, isothermal compressibility. Thus, no fragment distributions need be explicitly calculated and therefore no fragment distributions are analyzed in this work.
Here, several variations of a previously discussed model are explored. A canonical ensemble approach is used to investigate the thermodynamics of the system where the free energy, $`F`$, is written as a function of the temperature, $`T`$, and the volume, $`V`$. Calculations are restricted to a system which contains 162 constituents, since this is representative of the size of the system studied in . Contributions to $`F`$, e.g. the surface free energy, the Coulomb energy, are examined by turning them off or altering the form of the contribution in question. In this way insight can be gained as to how the important features of the thermodynamics such as specific heat, isothermal compressibility, etc. depend on the parameterization of the free energy.
This paper is organized as follows. In section II the details of the models are presented. Three versions of a standard statistical multifragmentation model are examined as well as a well-known mean field model whose results are used for comparison. In section III a description of the analysis and the results of that analysis are presented. Section IV discusses the standard interpretations of the models and analysis. Finally, a brief discussion of the questions raised by this work concludes this paper. In general, the notation of references and are followed.
## II Details of the models
This work follows directly the efforts presented in reference in which the canonical partition function was examined as a function of temperature in a fixed volume system for evidence of a phase transition. In that work, evidence for a first order phase transition was found. In the present work, the volume (average density) of the system is permitted to vary. It shall be seen that the nature of the phase transition depends on the volume of the system. The work of is also extended by examining the effects of the Coulomb force on the system and the effects of the choice of surface energy parameterizations. The units used for the nuclear models are: energy and free energy in MeV/nucleon, temperature in MeV, volume in fm<sup>3</sup>, pressure in MeV$`/`$fm<sup>3</sup> and so on.
A general description of each system follows.
### A V1: Full statistical description of an excited nucleus
Calculations begin by considering the free energy of a nuclear fragment. It is assumed that the free energy of a nuclear fragment of mass $`A`$ and charge $`Z`$, for $`A>1`$ is given by:
$$F_{A,Z}=F_{A,Z}^B+F_{A,Z}^{sym}+F_{A,Z}^S+E_{A,Z}^C.$$
(1)
The terms in eq. (1) refer to the bulk, symmetry, surface and Coulomb contributions to the free energy of a nuclear fragment. The forms of these terms are given :
$$F_{A,Z}^B=(-W_0-T^2/ϵ_0)A,$$
(2)
$$F_{A,Z}^{sym}=\gamma (A-2Z)^2/A,$$
(3)
$$F_{A,Z}^S=\beta _0\left(\frac{T_c^2-T^2}{T_c^2+T^2}\right)^{5/4}A^{2/3},$$
(4)
$$E_{A,Z}^C=\frac{3}{5}e^2Z^2(1-(1+\kappa )^{-1/3})/R_{A,Z}.$$
(5)
In eq. (2) the constants are taken as $`W_0=16`$ MeV and $`ϵ_0=16`$ MeV. In eq. (3) $`\gamma =25`$ MeV. In eq. (4) $`\beta _0=18`$ MeV and $`T_c=16`$ MeV, following reference . The contribution from the Coulomb term is estimated via a Wigner-Seitz approximation as in reference .
The $`\kappa `$-term is related to the volume of the system through
$$1+\kappa =V/V_0.$$
(6)
This simplified model presented here differs from the standard version in that there is only one parameter relating the volume excluded by the constituents, $`V_0`$, to the total volume of the system, $`V`$, and to the free volume, $`V_f`$. Here the free volume is the difference between the total volume $`V`$ and the sum of the volume of the fragments, assumed to be at normal nuclear density, and is the volume available for the translational motion of the fragments.
In the standard version of the model the free volume is given by $`V_f=\chi V_0`$, where $`\chi `$ is parameterized to increase with fragment multiplicity such that it varies between $`0.2`$ and $`2`$; the parameter $`\kappa `$ is fixed, usually at $`\kappa =2`$. For simplicity, here it is assumed that $`\kappa =\chi `$ so that specifying $`V_f`$ determines the value of $`\kappa `$ in eq. (6). See reference for details of $`\kappa `$ and $`\chi `$ in the standard version.
For this work then, the total volume of the system is then given by:
$$V=V_0+V_f.$$
(7)
Two things become obvious from eq. (7); first, with this form of $`V`$ the free energy of the system varies with $`V_f`$ since $`V_0`$ is a constant. Second, the loss of free volume in the closest packing of spherical clusters is ignored. The issue of whether spherical nuclei can actually be placed in a total volume, $`V_0`$, given a free volume, $`V_f`$, is not addressed. Undoubtedly, there will be situations where it is not possible for the total volume to accommodate all of the nuclear clusters. The purpose here, however, is to explore the thermodynamics and self-consistency of the model and not physical consistency.
Finally $`R_{A,Z}`$ is the radius of the fragment in question and is determined by
$$R_{A,Z}=r_0A^{1/3},$$
(8)
with $`r_0=1.17`$ fm. The version of the model presented above will be termed V1.
From this point the intrinsic partition function of a fragment of $`A`$, $`Z`$ at temperature $`T`$ and volume $`V`$ can be determined as follows:
$$z_{A,Z}=\mathrm{exp}(-F_{A,Z}/T).$$
(9)
Using a technique developed in reference and used on a simplified version of this model , the canonical partition function can be built via a recursion relation:
$$𝒵_p=\frac{1}{p}\sum _{A=1}^{p}A\omega _A𝒵_{p-A},$$
(10)
starting from $`𝒵_0=1`$. Here for calculational simplicity the approximation has been made that for each and every fragment with $`A>1`$, $`A/Z=2.5`$ which represents an average mass to charge ratio for fragments. The $`\omega _A`$ term is
$$\omega _A=\frac{V_f}{h^3}(2\pi mT)^{3/2}A^{3/2}z_{A,Z},$$
(11)
where the terms to the left of the fragment partition function, $`z_{A,Z}`$, account for the translational free energy contribution, $`F^{tr}`$. It is now straightforward to calculate the partition function of the system for a given $`T`$, $`V_f`$, $`A_0`$ and $`Z_0`$. The free energy of the system of $`p`$-particles is then determined as usual
$$F=-T\mathrm{log}(𝒵_p)+E_0^c(V),$$
(12)
where the last term is the usual Coulomb contribution of a uniformly charged sphere:
$$E_0^c(V)=\frac{3}{5}\frac{Z_0^2e^2}{R},$$
(13)
with $`R=(3V/4\pi )^{1/3}`$ .
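Equations (9)-(13) are straightforward to implement. The following sketch (Python) is only an illustration, not the code used for this work: it applies the $`A/Z=2.5`$ approximation to every fragment including $`A=1`$, it assumes $`e^2=1.44`$ MeV fm and a nucleon mass of 939 MeV (values not quoted above), and at low temperatures the Boltzmann factors overflow double precision, so a production version of the recursion would have to work with logarithms.

```python
import numpy as np

W0, EPS0, GAMMA, BETA0, TC = 16.0, 16.0, 25.0, 18.0, 16.0    # MeV, from eqs. (2)-(4)
R0, HBARC = 1.17, 197.327                                    # fm, MeV fm
MN, E2 = 939.0, 1.44                                         # nucleon mass (MeV), e^2 (MeV fm); assumed

def frag_free_energy(A, T, kappa):
    """F_{A,Z} of eqs. (1)-(5), with the average ratio A/Z = 2.5 applied to every fragment."""
    Z = A / 2.5
    fb   = (-W0 - T**2 / EPS0) * A                                          # bulk, eq. (2)
    fsym = GAMMA * (A - 2.0 * Z)**2 / A                                     # symmetry, eq. (3)
    fs   = BETA0 * ((TC**2 - T**2) / (TC**2 + T**2))**1.25 * A**(2.0/3.0)   # surface, eq. (4)
    ec   = 0.6 * E2 * Z**2 * (1.0 - (1.0 + kappa)**(-1.0/3.0)) / (R0 * A**(1.0/3.0))  # eq. (5)
    return fb + fsym + fs + ec

def canonical_Z(A0, T, Vf, kappa):
    """Recursion of eq. (10): Z_p = (1/p) sum_{A=1}^p A w_A Z_{p-A}, starting from Z_0 = 1."""
    lam = (2.0 * np.pi * MN * T)**1.5 / (2.0 * np.pi * HBARC)**3            # (2 pi m T)^{3/2}/h^3, fm^-3
    w = [0.0] + [Vf * lam * A**1.5 * np.exp(-frag_free_energy(A, T, kappa) / T)
                 for A in range(1, A0 + 1)]                                  # eq. (11)
    Z = [1.0]
    for p in range(1, A0 + 1):
        Z.append(sum(A * w[A] * Z[p - A] for A in range(1, p + 1)) / p)
    return Z[A0]

A0, Z0, T, kappa = 162, 162 / 2.5, 8.0, 2.0        # V = 3 V_0, a moderate temperature
V0 = 4.0 / 3.0 * np.pi * R0**3 * A0
Vf = kappa * V0
R  = (3.0 * (V0 + Vf) / (4.0 * np.pi))**(1.0 / 3.0)
F  = -T * np.log(canonical_Z(A0, T, Vf, kappa)) + 0.6 * E2 * Z0**2 / R      # eqs. (12)-(13)
print("F(T = 8 MeV, V = 3 V0) = %.1f MeV" % F)
```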
#### 1 Comparison of V1 to the full version of the model
The model and calculations described above were compared to the full, or unmodified version of the model often cited in the literature, see for example references . See Figure 1. In Figure 1 results from the full version of this model are shown for the mean fragment distribution calculated at a given input excitation energy. To generate event-by-event distributions Poissonian fluctuations about the mean are introduced, after which, temperature is adjusted to ensure energy conservation. To more fully recover the standard version of the model most often used, higher order corrections were introduced just as in the full version of the model; e.g. $`ϵ_0`$ in eq. (2) was made dependent on the fragment mass $`A`$, for light clusters, $`A\le 4`$, the empirical masses and binding energies, radii and spin degeneracy factors of the ground state were used, the total volume was held constant at $`3V_0`$ and the free volume was set to depend on the input excitation energy. Finally, energy was explicitly conserved; an input excitation energy was given and a temperature was determined such that total energy was conserved. The explicit conservation of energy produced results that were essentially the same as those resulting from the unconstrained canonical ensemble.
Figure 1a shows the caloric curve from the full version of this model for a system with 100 nucleons (60 neutrons and 40 protons) compared to the same size system used in calculations with a modified version of V1. The general trend of the modified V1 reproduced the average behavior of the full model, though there is not a perfect agreement. This is to be expected. While this modified version of V1 is closer to the model, there are still some differences, e.g. the charge of $`A>4`$ fragments is treated in only an average fashion in V1. The reproduction of the general trends indicates that V1 captures the essence of the full model. Figure 1b shows the fragment multiplicity, before any secondary decay, from both models. Again there is general agreement between the two.
The break observed in the caloric curve shown in Figure 1 is well known in the full model, see, for example, Figure 4 in ref. and Figure 11 in ref. . The break is due to the initial guess of the system’s multiplicity which is in turn used to guess the system’s free volume. For low energies the multiplicity is chosen to be 1, 2 or 3 (Figure 1b shows that the initial guess of the multiplicity is consistent with the final state multiplicity), while at higher energies the multiplicity depends smoothly on a function of the input excitation energy . In some systems, e.g. a system of 100 nucleons, there is a jump in multiplicity at the transition from the low energy computations and the high energy computations which gives rise to a jump in the final state multiplicity, Figure 1b, and a break in the caloric curve, Figure 1a. When the simplified model used in this work is given the same volume dependence as the full model, the results of the full model are reproduced.
Energy conservation is explored in Figure 2. Here the unmodified version of V1 was used with a system of 162 particles and energy was explicitly conserved as outlined above. In order to examine the change in energy between the initial and final state of each term contributing to the system’s total energy, the temperature of the initial state of the system has been calculated corresponding to the input $`E^{*}`$. The assumption that the initial nuclear state is in thermal equilibrium prior to its deexcitation to the final state has no bearing on the thermodynamics of the final state and is done only for purposes of the abovementioned calculation. The initial state used for this calculation was the system of 162 nucleons at excitation energy, $`E^{*}`$, at normal nuclear density, $`\rho _0`$ and at a temperature that conserves energy when the total energy is determined using eqns (2)-(5), (12) and:
$$E=F-T\left(\frac{\partial F}{\partial T}\right)_V.$$
(14)
The total energy of the initial state is shown in Figure 2 as well as its various components and a caloric curve. The final state of the system was computed with the same $`E^{*}`$ but was held at a third normal density, $`\rho _0/3`$, and allowed to fragment in the manner outlined above. The caloric curve produced for the final state via this explicit conservation of energy calculation is identical to the caloric curve produced via the calculations without an explicit conservation of energy. Figure 2 shows that the temperature in the final state is lower than that in the initial state. Further inspection indicates that while the Coulomb energy is reduced by creating smaller charged nuclei, the energy required to create the additional surface area is more than offsetting. Thus, the temperature must decrease.
### B V2: Description of an excited charge-free nucleus
The general ideas of V1 are followed but with the Coulomb energy of the system set to zero. The free energy shown in eq. (1) then becomes:
$$F_A=F_A^B+F_A^S.$$
(15)
And the total free energy of the system is given by
$$F=-T\mathrm{log}(𝒵_p).$$
(16)
Every other aspect of the model is the same as in V1. This version of the model will be termed V2.
### C V3: Description of an excited charge-free nucleus with a temperature independent surface
In this version, the Coulomb force is suppressed and the temperature dependent surface term in eq. (4) is made independent of temperature:
$$F_A^S=\beta _0A^{2/3}.$$
(17)
The free energy of a fragment and the entire system are still given by eqns (15) and (16). Every other aspect of the model is the same as in V1 and V2. This version of the model will be termed V3.
### D The van der Waals fluid
The free energy of the van der Waals fluid is determined in the standard textbook fashion. Starting from the free energy
$$F/A=-t\left\{\mathrm{ln}\left[n_Q\left(V-Ab\right)/A\right]+1\right\}-Aa/V,$$
(18)
with $`t=k_bT`$, $`n_Q=\left(Mt/2\pi \hbar ^2\right)^{3/2}`$ and $`a`$ and $`b`$ the usual van der Waals constants. In this work, eq. (18) is computed in terms of the density, $`\rho =A/V`$, so that the number of constituents, A, is not a factor. For the van der Waals' constants $`a`$ and $`b`$, values were used for helium so that $`T_c\approx 4.5\times 10^{-4}`$ eV$`/`$A. This also suggested the value of $`M`$ in $`n_Q`$. Finally, eq. (18) was evaluated in terms of $`T/T_c`$ and $`V/V_c`$. The van der Waals fluid model is well defined and free of internal inconsistencies. It will be used as a benchmark for the analysis presented in this paper. Units for the van der Waals fluid results will be in eV$`/A`$ for energy and free energy, eV$`/A\times V_c`$ for pressure and so on.
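Written with the density $`\rho =A/V`$, eq. (18) per particle becomes $`F/A=-t\{\mathrm{ln}[n_Q(1/\rho -b)]+1\}-a\rho `$, and the critical constants follow from $`a`$ and $`b`$ alone. The short sketch below (Python, with placeholder values of $`a`$, $`b`$ and $`n_Q`$ rather than the helium-based ones) records these textbook relations and checks that the pressure obtained from eq. (18) reproduces the analytic critical pressure:

```python
import numpy as np

a, b, nQ = 1.0, 0.1, 1.0            # placeholder van der Waals constants (not the helium values)

def f_per_particle(rho, t):
    """Eq. (18) per particle, written with the density rho = A/V."""
    return -t * (np.log(nQ * (1.0 / rho - b)) + 1.0) - a * rho

v_c, t_c, p_c = 3.0 * b, 8.0 * a / (27.0 * b), a / (27.0 * b**2)   # textbook critical point
print("compressibility factor P_c*V_c/T_c =", p_c * v_c / t_c)     # 3/8 for any a, b

# pressure from the free energy, P = rho^2 d(F/A)/d rho, checked at the critical point
rho_c, d = 1.0 / v_c, 1.0e-6
P_num = rho_c**2 * (f_per_particle(rho_c + d, t_c) - f_per_particle(rho_c - d, t_c)) / (2.0 * d)
print("P_c from eq. (18): %.4f   analytic: %.4f" % (P_num, p_c))
```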
## III Discussion of calculations
Calculations were performed for each version of the model to determine $`F(T,V_f,A_0,Z_0)`$ for $`A_0=162`$, $`A_0/Z_0=2.5`$ and over a range in temperature, $`1`$ MeV $`\le T\le 14`$ MeV, and volume of $`2\times 10^{-8}\le (V_f/V_0)\le 2\times 10^8`$. Once the general vicinity of the critical point was identified, a smaller range in $`(T,V_f)`$ was used for more detailed calculations. For the van der Waals fluid the range was smaller and in terms of reduced temperature and volume: $`0.1\le T/T_c\le 2.0`$ and $`0.34\le V/V_c\le 2.0`$. Figure 3 shows the behavior of the free energy over the ranges of temperature and volume used in the calculations.
After the value of the free energy was calculated, it was simple to determine other thermodynamic quantities. Holding the volume fixed the entropy is given by the usual relation:
$$S=-\left(\frac{\partial F}{\partial T}\right)_{V_f}.$$
(19)
In the case of this work differences were used instead of derivatives due to the numerical nature of the calculation, thus eq. (19) becomes
$$S=-\left(\frac{\mathrm{\Delta }F}{\mathrm{\Delta }T}\right)_{V_f}.$$
(20)
Similarly the specific heat at constant volume was determined via
$$C_V=T\left(\frac{\mathrm{\Delta }S}{\mathrm{\Delta }T}\right)_V,$$
(21)
where $`T`$ is the average value of $`T`$ over the $`\mathrm{\Delta }T`$ interval. Using the entropy and the free energy, the total energy can be determined from:
$$E=F+TS.$$
(22)
In these calculations it was possible to hold either the temperature or the volume constant. The pressure was then found by holding the temperature fixed
$$P=-\left(\frac{\mathrm{\Delta }F}{\mathrm{\Delta }V_f}\right)_T.$$
(23)
Taking another derivative then gave the isothermal compressibility
$$\kappa _T=-\frac{1}{V_f}\left(\frac{\mathrm{\Delta }V_f}{\mathrm{\Delta }P}\right)_T.$$
(24)
where $`V_f`$ is the average value of $`V_f`$ over the $`\mathrm{\Delta }V_f`$ interval. With this information it is possible to determine if there is a phase transition in a model such as this and, if present, the nature of that phase transition. The following section addresses this question.
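Before applying these finite differences to the model free energies, they can be checked against a case whose derivatives are known in closed form. The sketch below (Python) does this for an ideal monatomic gas (chosen purely for illustration; it is not one of the models of Sec. II) and recovers $`C_V=3N/2`$, $`P=NT/V`$ and $`\kappa _T=1/P`$ to the accuracy expected of eqs. (20)-(24):

```python
import numpy as np

# Ideal monatomic gas free energy, F(T,V) = -N*T*[ln(c*T**1.5*V/N) + 1]; the constant c
# absorbs the quantum-concentration prefactor and only shifts the entropy by a constant.
N, c = 162.0, 1.0
F = lambda T, V: -N * T * (np.log(c * T**1.5 * V / N) + 1.0)

T, V, dT, dV = 5.0, 1000.0, 1.0e-3, 1.0e-2
S_of = lambda T_: -(F(T_ + dT, V) - F(T_ - dT, V)) / (2.0 * dT)   # eq. (20)
P_of = lambda V_: -(F(T, V_ + dV) - F(T, V_ - dV)) / (2.0 * dV)   # eq. (23)

Cv = T * (S_of(T + dT) - S_of(T - dT)) / (2.0 * dT)               # eq. (21)
kT = -(2.0 * dV) / (V * (P_of(V + dV) - P_of(V - dV)))            # eq. (24)

print("C_V: numeric %.3f  exact %.3f" % (Cv, 1.5 * N))
print("P:   numeric %.4f  exact %.4f" % (P_of(V), N * T / V))
print("k_T: numeric %.4f  exact %.4f" % (kT, V / (N * T)))
```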
## IV Results of calculations
In this section, each of the axes of the standard phase diagram, $`T`$, $`V`$ and $`P`$, will in turn be held fixed. The behavior of other quantities will be examined in order to understand the behavior of each system. The van der Waals fluid will serve as a guide for the interpretation of the analysis and also as a benchmark to illustrate the accuracy of this analysis.
### A Isotherms
Determination of the critical point, coexistence and spinodal curves is discussed below. The variation of the free energy as a function of density shows the same general features for these systems.
Figure 4 shows the behavior of the free energy isotherms for each system as a function of reduced density. Each plot in Figure 4 shows three isotherms. The isotherms are sub-critical ($`T=0.95T_c`$), critical ($`T=T_c`$) and super-critical ($`T=1.05T_c`$). Also shown are the approximate locations of the coexistence and spinodal curves. Determination of these curves is also discussed below. The behavior of the free energy for these isotherms for all systems is more or less the same. As the reduced density increases, the free energy increases. At some mid range in reduced density the slope of the increase in free energy changes. At a greater reduced density the slope of the increase in free energy changes again. This is most clearly demonstrated by the van der Waals fluid system. See Figure 4d. However, the behavior is present in all the models. This behavior, while appearing modest in these plots, will be seen to be the cause of the critical-like behavior exhibited by these models.
The pressure was calculated from the free energy isotherms via eq. (23). Figure 5 shows the results for each model. The determination of the location of the critical point and the coexistence and spinodal curves is based on the phase diagram of pressure, temperature and reduced density. By searching for inflection points along the pressure versus reduced density isotherms the spinodal curve was determined. The isotherm immediately following, as the temperature of each isotherm increases, the last isotherm with two inflection points was labeled the critical isotherm.
Another method to determine the location of the critical point began with the isothermal compressibility which was calculated with eq. (24). Isotherms of $`\kappa _T`$ versus reduced density were inspected. All sub-critical isotherms showed at least one negative value of $`\kappa _T`$. The first isotherm, as a function of increasing temperature, which showed only positive values of $`\kappa _T`$ was labeled the critical isotherm. Both procedures yielded the same results, as they are essentially identical. See Table I.
The coexistence curve was determined by making a Maxwell equal area construction for each isotherm. See Figure 4 for examples of the Maxwell construction.
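For the van der Waals benchmark both constructions can be checked against the reduced equation of state, $`P_r=8T_r/(3V_r-1)-3/V_r^2`$ (a textbook relation, not a result of this work). The sketch below (Python) locates the spinodal points of a sub-critical isotherm as the extrema of $`P`$ versus $`V`$ and finds the coexistence pressure by bisecting on a trial pressure until the two lobes of the loop have equal area; for $`T_r=0.95`$ this gives a coexistence pressure of roughly $`0.81`$ in reduced units.

```python
import numpy as np

def p_red(v, t):
    """Reduced van der Waals equation of state P_r(V_r, T_r)."""
    return 8.0 * t / (3.0 * v - 1.0) - 3.0 / v**2

def coexistence_pressure(t, vmin=0.45, vmax=30.0, n=200001):
    """Equal-area (Maxwell) pressure for a sub-critical reduced temperature t < 1."""
    v = np.linspace(vmin, vmax, n)
    dv = v[1] - v[0]
    p = p_red(v, t)
    turning = np.where(np.diff(np.sign(np.gradient(p, v))) != 0)[0]   # spinodal: (dP/dV)_T = 0
    p_lo, p_hi = sorted((p[turning[0]], p[turning[1]]))
    p_lo = max(p_lo, 1e-6)                       # keep the trial pressure positive

    def net_area(ptrial):
        # signed area between the isotherm and P = ptrial from the liquid-side to the
        # gas-side crossing; it vanishes when the two lobes of the loop have equal area
        cross = np.where(np.diff(np.sign(p - ptrial)) != 0)[0]
        lo, hi = cross[0], cross[-1]
        return np.sum(p[lo:hi + 1] - ptrial) * dv

    for _ in range(100):                         # bisection on the trial pressure
        pm = 0.5 * (p_lo + p_hi)
        if net_area(pm) > 0.0:
            p_lo = pm
        else:
            p_hi = pm
    return 0.5 * (p_lo + p_hi), v[turning[0]], v[turning[1]]

pco, vs1, vs2 = coexistence_pressure(0.95)
print("T_r = 0.95: spinodal volumes ~ %.2f and %.2f, coexistence pressure ~ %.2f" % (vs1, vs2, pco))
```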
At first glance at Table 1 several noteworthy features stand out: (1) none of the values of $`T_c`$ determined for the models is the same as the value of the parameter $`T_c`$ specified in the surface term in eq. (4); (2) the critical densities for each model are close to unity; and (3) the critical temperature of V1, the model which includes the Coulomb force, is larger than the critical temperature of V2, the model with no Coulomb force. The error of the analysis of the van der Waals fluid was on the order of a few percent; $`T_c^{vdW}\approx 1`$. This illustrates the error inherent with this type of analysis.
It is not surprising that the critical temperature found by the analysis of thermodynamical quantities is different than the parameter $`T_c`$ used to parameterize the surface free energy of infinite nuclear matter. If one considers the critical point to be that temperature at which the surface free energy vanishes, then this can only be at $`T_c`$ (16 MeV in this case). However, the form of the surface term given by eq. (4), approximately the form of the macroscopic surface free energy of a fluid near its critical point, is not an appropriate description of the microscopic surface of a droplet . Moreover, eq. (4) leads to a specific heat which approaches negative infinity as $`T`$ approaches $`T_c`$. In a more fundamental model the critical temperature would be an output of the model rather than an input. Of the phenomenological models studied here, only the van der Waals model is known to be self-consistent.
The high densities found at the critical point for V1, V2, and V3 cannot be realized if one is constrained to placing spherical nuclei without overlaps inside of the total volume. Again this issue is mentioned but not dealt with since the aim of this paper is to explore only the thermodynamical predictions of the above models.
Were this sort of model interpreted physically, the high value of the critical density determined here would suggest that the critical point could never be reached by finite nuclear matter as a multiplicity of spherical clusters at normal nuclear density could not physically fit into the critical volume. If the constraint that the breakup volume must be large enough to avoid overlapping volumes of the final state (spherical) nuclei is added, then only first order phase transitions are possible. Of course there are several problems with a strictly physical interpretation of models such as the one presented here, not the least of which is the introduction of a volume for the system. Actual nuclei excited to high energies in nucleus-nucleus collisions do not exist in a box and thus have no volume in the sense suggested here.
Item (3), the rise in $`T_c`$ with the vanishing of the Coulomb force, is, on the surface, counterintuitive. Many other models of nuclear systems show just the opposite behavior , . However, those models are fundamentally different than the ones examined in this work. Such models begin by describing the free energy or chemical potential or some equivalent quantity using formulae which assume a uniform distribution of material in much the same way that the van der Waals fluid assumes a uniform density. The model in this work samples a different part of the final state phase space . It will be seen that the calculation of the Coulomb term via eq. (5) gives rise to the counterintuitive rise in the critical temperature when the Coulomb force is suppressed in the model.
To begin to understand the effect of the Coulomb energy on the critical point the same isotherm for models V1 and V2 was examined. Figures 4a and b show the isotherm of $`T=7.2`$ MeV for V1 and V2. When the coexistence curve is shown for both systems, it is obvious that for V1 the isotherm is sub-critical while for V2 the isotherm is super-critical. Figures 4c and d begin to shed light on the cause of this counterintuitive occurrence.
For the sake of illustration, the Maxwell constructed (a line of constant slope through the coexistence region, which then leads to a constant value of the pressure through the coexistence region that will give ”equal areas” on a pressure-volume plot, see Figure 5a) path of the free energy through the coexistence region is shown as a dashed line, barely visible just below the isotherm in the coexistence region, in Figure 6a. The Maxwell constructed free energy is a straight line through the coexistence region. The constant slope of the Maxwell constructed free energy leads to a constant pressure for the system in the coexistence region. Because the path of Maxwell constructed free energy is very close to the path of the free energy of V1, it is difficult to see the difference in a plot such as shown in Figure 6a. A plot of the difference in the calculated free energy of V1 and the Maxwell constructed free energy shows what gives rise to the van der Waals loops in Figure 5a. See Figure 6c. There are two inflection points in the curve of the calculated free energy of V1, these are shown more clearly as the points in Figure 6c where the curve shows an ordinate value of zero. These plots for a canonical system’s free energy are in the same spirit as plots for a microcanonical system’s entropy . It is clear from Figures 4b and d that there are no similar inflection points in the free energy curve for V2. Therefore the isotherm of $`T=7.2`$ MeV in V2 is super-critical while the very same isotherm is sub-critical for V1.
It is possible to understand what gives rise to the inflection points introduced when going from V2 to V1 by looking at the contributions to the total free energy of each system along the isotherm $`T=7.2`$ MeV. See Figure 7. Plotted in this figure for both systems are the components of the overall free energy for the isotherm in question: translational, bulk, surface, symmetry, total Coulomb, background Coulomb and clusterization Coulomb free energies. Figure 7 shows that the translational and bulk free energies of each system are nearly identical. An inspection of eqns (11) and (2) shows that these quantities are relatively insensitive to small changes in the fragment distribution. On the other hand, the surface free energy shows an obvious difference in behavior between systems. In V1 the initial decrease in surface free energy as a function of reduced density is slower than in V2. An inspection of eq. (4) shows that the $`A^{2/3}`$-term introduces more sensitivity to the fragment distribution than the previously discussed terms. The cause of this difference, and all the differences between these two systems, is the presence of the Coulomb force in V1 and its absence in V2.
The behavior of the Coulomb contribution to the free energy of this isotherm is now discussed. The first, and simplest, additional term in the free energy due to the Coulomb force is the asymmetry term. The change in the asymmetry term is smooth as a function of increasing density and will not introduce the inflection points in the free energy curve that will change a super-critical isotherm to a sub-critical isotherm.
In this model the total Coulomb contribution to the free energy comes from two sources. One represents the background energy due to a uniform distribution of charges, eq. (13), and the other is due to the energy from the clusterization of fragments, eq. (5) . The background energy, $`E_0^c`$, goes as $`\rho ^{1/3}`$ and therefore varies smoothly with volume. The clusterization free energy, $`E_{A,Z}^C`$, shown in Figure 7 has a different behavior. It decreases as the reduced density increases; the rate of decrease is at first nearly constant, then slows and then increases rapidly over some small interval in reduced density. The combination of these two Coulomb terms introduces sufficient changes in the overall free energy of the system from V2 to V1 that inflection points arise and thus the critical temperature increases when the Coulomb force is added to the system. A smooth or constant version of the Coulomb free energy added to version V2 should not cause this sort of behavior. It is the variation in $`E_{A,Z}^C`$ that introduces the inflection points and increases the critical temperature. In the end, it is the behavior of the free energy curve that served to determine the location of the critical point and that behavior is, at times, counterintuitive.
Also listed in Table I is the compressibility factor:
$$C_f=\frac{P_cV_c}{T_c}.$$
(25)
The textbook value for $`C_f`$ for the van der Waals gas is recovered to within error bars. According to the law of corresponding states, the value of $`C_f`$ should be universal. For fluid systems this is the case and $`C_f\approx 0.292`$ . For V1 and V2 there is no such universal behavior observed to within error bars, while V2 shows some degree of universality.
### B Isochores
The volume of the system is held constant and the behavior of various quantities with respect to the system’s temperature is explored.
Beginning again with the primary quantity calculated, Figure 8 shows the free energy for each system as a function of the system’s temperature. For models V1, V2 and V3 the isochores are for $`\rho =\rho _0/3`$, $`\rho =\rho _c`$, and $`\rho =0.9995\rho _0`$. For the van der Waals fluid, the isochores shown are for $`\rho =1.25\rho _c`$, $`\rho =\rho _c`$ and $`\rho =0.75\rho _c`$. See table I for critical density values. Also shown are the values of the free energy of the systems along their respective coexistence curves.
In reference the authors show results of a model similar to V2, for an isochore of approximately $`\rho =\rho _0/3`$. In that work there was a kink in the free energy curve which was interpreted as evidence for a first order phase transition. For the small system used in this work the kink is smoothed out, negating the efficacy of using the kink as evidence for determining the order of a phase transition, if one is present. In larger systems (the systems in ref. were more than 15 times larger than the system used in this work) the kink is more evident and this procedure may be possible.
Knowledge of the location of the coexistence curve allows for the identification of sub-critical, critical and super-critical isochores. For the projections shown in Figure 8, for all the versions of the nuclear model, a sub-critical isochore crosses the coexistence region. The critical isochore travels along the high value edge of the coexistence curve passing through the critical point. Super-critical isochores do not come into contact with the coexistence curve nor do they traverse the coexistence region. The behavior of the van der Waals fluid is somewhat different from the behavior of the nuclear models. Figure 8d shows that for a van der Waals fluid in this projection of the phase diagram all isochores appear to lie in the coexistence region for low temperatures and cross the coexistence curve at higher temperatures. This apparent behavior results from projection of the three dimensional phase diagram onto a two dimensional plot. In a three dimensional figure, the super-critical isochore is seen to travel outside the coexistence region, the critical isochore is observed to intersect with the critical point and the sub-critical isochore is seen to traverse the coexistence region. See Figure 9.
The pressure isochore was obtained by following the procedure outlined in eq. (23) and using the average density $`\rho `$ and the average temperature $`T`$ over the $`\mathrm{\Delta }T`$ interval. See Figure 10. In the limit of vanishing $`\mathrm{\Delta }T`$ this procedure is valid. Figure 10 shows the same three isochores discussed above as well as the location of the coexistence curve, seen edge on in this projection of the phase diagram. All the figures for the nuclear model show similar behaviors for the critical and super-critical isochores. All super-critical isochores follow a trajectory above the coexistence curve. The critical isochore for all versions of the model approaches the coexistence curve, follows it, and leaves at the termination or critical point. The sub-critical isochores for V2 and V3 pass through the coexistence curve as the temperature is increased and the system makes a first order transition from a liquid to a gaseous state. For version V1 the behavior is more complicated. The sub-critical isochore begins on the gaseous side of the coexistence curve, crosses the coexistence curve into the liquid region before crossing back over the coexistence curve into the gaseous region at higher temperature. The $`P`$-$`T`$ projection of the van der Waals fluid looks as expected. See Figure 10d.
Following the analysis procedure outlined in the previous section, the entropy was determined via eq. (20). See Figure 11. The same three isochores are plotted showing the entropy as a function of temperature. All three isochores for the nuclear models show a smooth rise as a function of temperature with some region of increased slope. This behavior is consistent with a continuous phase transition, if a phase transition were present. Along an isochore, a first order transition would be indicated by a sudden change in the behavior of the entropy as a function of temperature at one edge of the coexistence region. However, due to the small size of the system such sharp behavior is smoothed out, making it impossible to draw a conclusion about the order of a phase transition from plots such as those shown in Figure 11. As before, knowing the location of the coexistence curves makes identification of sub-critical, critical and super-critical isochores in the nuclear model trivial. The sub-critical isochore traverses the coexistence region, the critical isochore passes through the critical point and the super-critical isochore avoids the coexistence region. Also as before, the van der Waals system shows a different behavior, and a third dimension must be added to the plot in Figure 11d to understand the behavior of the various isochores.
The specific heat at a constant volume, $`C_V`$, for each system is shown in Figure 12, again for the same three isochores. All curves show a peak that could be due to a smoothed out discontinuity (first order phase transition), the remnants of a power law divergence at the critical point (continuous phase transition) or a specific heat anomaly (super-critical behavior). As with the free energy and entropy, it is impossible to come to a definite conclusion regarding the presence and nature of a phase transition from this plot. Of particular importance is the specific heat for the van der Waals fluid, which is seen to be nearly constant and equal to $`3/2`$ as it should be . There is a small slope in the specific heat over the temperature range in question: for $`T=T_c/2`$, $`C_V=1.507`$, and for $`T=2T_c`$, $`C_V=1.502`$. This illustrates that the technique employed here, following equations (20) and (21), yields results that are accurate to no better than $`0.25`$% for the quantities determined in this paper.
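Since the free energy and entropy are tabulated on a discrete temperature grid, the specific heat in this section is necessarily a finite-difference estimate. The fragment below is a minimal sketch of that step (not the authors' code, and eqs. (20)–(21) are only mirrored schematically): it computes $`C_V=T(\mathrm{\Delta }S/\mathrm{\Delta }T)_V`$ along an isochore for the ideal (van der Waals) entropy, recovering the constant value 3/2 quoted in the text.

```python
# Minimal sketch (not the authors' code; eqs. (20)-(21) are only mirrored schematically):
# specific heat along an isochore from finite differences of tabulated entropy,
# C_V = T*(dS/dT)_V.  The entropy below is the ideal monatomic (van der Waals) form up to
# an additive constant, so the recovered C_V should be ~3/2 per particle, as in the text.
import numpy as np

T = np.linspace(3.0, 15.0, 121)        # MeV, grid spacing 0.1 MeV
S = 1.5 * np.log(T) + 4.0              # entropy per particle, arbitrary offset

dS_dT = np.gradient(S, T)              # centered differences (one-sided at the ends)
C_V = T * dS_dT

interior = C_V[1:-1]                   # drop the less accurate one-sided endpoints
print(interior.min(), interior.max())  # both within ~0.1% of 3/2
```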
Finally, Figure 13 shows the constant volume caloric curves of $`T`$ as a function of $`E`$ for each system at the same isochores discussed above. Also shown are the values of the temperature and energy of the system, $`E=F+TS`$, along the boundary of the coexistence region. Just as with the entropy isochores, the caloric curve for each isochore shows behavior that is consistent with either a first order or a continuous phase transition in a small system. Each isochore shows similar behavior: a steep rise, followed by a region of shallower incline, followed by a portion which approaches $`E/A=\frac{3}{2}T`$. The lack of a flat region, or back bend, in the caloric curve is not due to the small size of the system but rather to the system being held at a constant volume. Only for isobars will flat regions or back bends be observed in the canonical nuclear model. See the following section. Again the van der Waals system shows very different behavior: a steady rise in the temperature as a function of energy. And again a three dimensional plot is needed to clearly understand the nature of each isochore.
### C Isobars
The pressure of the system is held constant and the behavior of various quantities with respect to the system’s temperature is explored. Here some care should be taken with the interpretation of the results. The analysis outlined above is still followed. However, since the pressure is a derived quantity, obtained from finite differences such as eq. (23), plots such as free energy versus temperature are now plots of $`F`$ versus $`T`$, where $`F`$ is the mid point of the $`\mathrm{\Delta }F`$ range over which a difference such as eq. (23) is taken. In the limit of vanishing interval size, this approximation is accurate. Also because the pressure is not directly controlled in these calculations, it was necessary to allow some small variation in $`P`$ in order to make a plot such as $`F`$ against $`T`$. The variation in $`P`$ was usually less than one percent; i.e. $`\mathrm{\Delta }P/P\lesssim 0.01`$. Small changes in the amount of variation of $`P`$ had little effect on the analysis presented here. Large changes in the variation in $`P`$ wash out the behavior observed below.
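Because the pressure exists only as a derived number at each grid point, an "isobar" has to be assembled by collecting grid points whose pressures agree to within the quoted tolerance. The sketch below is a schematic reconstruction of that bookkeeping (not the authors' code): eq. (23) is not reproduced in this excerpt, so a $`P=(\mathrm{\Delta }F/\mathrm{\Delta }V)_T`$ finite difference is assumed, and the free energy is a placeholder ideal-gas form.

```python
# Minimal sketch (not the authors' code): build an approximate isobar from a (T, V) grid
# on which P is obtained by finite differences, keeping only points with
# |P - P_target|/P_target below a tolerance (the text quotes dP/P <~ 0.01).
import numpy as np

A = 100                                    # placeholder particle number
T = np.linspace(3.0, 15.0, 121)            # MeV
V = np.linspace(100.0, 4000.0, 200)        # fm^3
TT, VV = np.meshgrid(T, V, indexing="ij")

F = -A * TT * np.log(VV)                   # placeholder free energy (ideal-gas V dependence)
P = -np.gradient(F, V, axis=1)             # assumed P = -(dF/dV)_T finite difference

P_target, tol = 0.15, 0.01                 # MeV/fm^3, relative tolerance dP/P
mask = np.abs(P - P_target) / P_target < tol

# averages over the selected grid cells play the role of <T> and <F> in the text
T_iso, F_iso = TT[mask], F[mask]
order = np.argsort(T_iso)
print(len(T_iso), T_iso[order][:5])
```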
The isobaric free energy as a function of temperature is shown for all systems in Figure 14 as well as the values of the isobaric free energy along the boundaries of the coexistence region. In all systems there is a back bend in the free energy curve for sub-critical isobars. The sub-critical isobar also traverses the coexistence region. It would be possible to perform the Maxwell construction procedure and deduce the critical point from these plots. The van der Waals fluid system shows that the critical point determined in the construction of the $`P`$-$`V`$ coexistence curves agrees with considerations of isobaric $`F`$. See Figure 14d. The critical isobar shows a vertical slope tangent to the coexistence curve and no back bend. The super-critical isobar does not traverse the coexistence region and shows no back bend.
Figure 15 shows the isobaric temperature as a function of reduced density. Temperature is plotted as a function of reduced density in the spirit of the Guggenheim plot, which shows the universal behavior of several fluids near their critical point. On the scales shown here no universal behavior is observed. It may be that very near the critical point the coexistence curves for each system are identical. The $`T`$-$`\rho `$ isobars for all systems show the expected behaviors: a sub-critical back bending curve that traverses the coexistence region, giving way to the critical isobar; a critical curve with a flat section which intercepts the coexistence region at the critical point; and finally a super-critical isobar which avoids the coexistence region altogether.
The behavior of the isobaric entropy as a function of temperature is also just as expected. See Figure 16.
Figure 17 shows the isobaric caloric curves for each system. As the energy is constructed from the free energy and entropy of the system, the back bending observed in the sub-critical isobars is expected. Also shown are the values of the temperature and energy along the boundary of the coexistence region. Note the difference between the isochoric caloric curves shown in Figure 13 and the isobaric caloric curves shown in Figure 17. When the pressure is held constant, all of the canonical systems discussed here, including the van der Waals fluid, show sub-critical caloric curves with a back bend. No back bending is present when the volume is held constant.
Finally Figure 18 shows the constant pressure specific heat, $`C_P`$, as calculated from:
$$C_P=\left(\frac{\mathrm{\Delta }E}{\mathrm{\Delta }T}\right)_P$$
(26)
taking as input the isobars shown in Figure 17. The results for $`C_P`$ of these canonical calculations are similar to the behavior reported in micro canonical models for various systems , , , . The sub-critical isobars show the remnants of poles with $`C_P<0`$ values in between. The critical isobar shows the remnants of a divergence, and the super-critical isobar shows some peaking behavior. The lack of poles and divergences is not due to the finite size of the systems V1, V2 and V3, but rather due to the computational nature of these calculations. The calculations for the van der Waals fluid are, in effect, for a truly thermodynamic system and the van der Waals critical isobar of $`C_P`$ still shows no true divergence. The critical isobar of $`C_P`$ for V2 shows a negative value which is due to the computational nature of the calculation and the manner in which $`C_P`$ was calculated from $`E`$ and $`T`$ values.
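Evaluating eq. (26) is again a finite-difference step along the assembled isobars. Below is a minimal sketch of that step (the caloric-curve points are a smooth placeholder with a mild back bend, not model output), including the flagging of intervals where $`C_P`$ comes out negative, as discussed above.

```python
# Minimal sketch: C_P = (dE/dT)_P from finite differences along a tabulated isobar, eq. (26).
# The (T, E) values below are placeholders with a mild back bend, not model output.
import numpy as np

T = np.array([4.0, 4.5, 5.0, 5.5, 5.8, 5.7, 5.9, 6.4, 7.0, 8.0])   # MeV (note the back bend)
E = np.array([1.0, 1.6, 2.3, 3.2, 4.4, 5.8, 7.1, 8.3, 9.4, 10.6])  # MeV per nucleon

C_P = np.diff(E) / np.diff(T)            # forward differences between adjacent points
T_mid = 0.5 * (T[:-1] + T[1:])

for t, c in zip(T_mid, C_P):
    flag = "  <-- C_P < 0" if c < 0 else ""
    print(f"T = {t:4.2f} MeV   C_P = {c:6.2f}{flag}")
```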
### D Iso-nothing: Variable $`(P,V,T)`$
The nuclear model presented here assumes a system enclosed in some volume. An actual excited nucleus is not enclosed in a volume. This has the effect of forcing a path through the thermodynamic phase space which is considerably different from any of the paths investigated thus far. To bridge the gap between reality and tractable calculations, an energy dependent free volume is assumed in models such as the ones presented in this work . At low energies the free volume of the system is assumed to be nearly constant and vanishingly small. At a given energy the system is allowed to expand and the free volume increases from near zero. This has the effect of tracing a path through the thermodynamic phase space of the system off any of the trivial paths along one of the axes investigated above. It is possible to examine the effects of such a parameterization of the free volume as a function of energy with the calculations made here. Calculations in this section were performed only for V1.
Following the same ideas presented in the discussion of isobars, the values of $`F`$, $`S`$ and $`T`$ are determined along a path in $`V_f`$ as a function of $`E`$. As $`V_f`$ changes, values of $`F`$, $`S`$ and $`E`$ are picked from the appropriate isochore. For example, instead of traveling along a path parallel to one of the axes in Figure 2a, e.g. an isochore, isotherm or isobar, an energy dependent free volume was chosen so that the system evolved through thermodynamic phase space on a non-trivial trajectory. See Figure 19. In Figure 19a small points show a small sample of the set of calculations for $`F(T,V)`$. Larger points show the values of $`F(T,V)`$ selected for the $`V_f(E)`$ trajectory described above. Figures 19b and c show the same for $`S(T,V)`$ and $`E(T,V)`$.
For the purposes of the present analysis, values of $`S(T,V)`$ were used from different isochores: as $`V_f`$ changed, values of $`S(T,V)`$ were selected from the appropriate isochore. If the change from isochore to isochore is small, $`\mathrm{\Delta }V_f\to 0`$, then the procedure is a good approximation. The intervals in $`V_f`$ for the calculation of $`F(T,V)`$ for this analysis were on the order of two percent of the free volume of the system; e.g. when $`V_f=500`$ fm<sup>3</sup>, $`\mathrm{\Delta }V_f\approx 10`$ fm<sup>3</sup>, and when $`V_f=5`$ fm<sup>3</sup>, $`\mathrm{\Delta }V_f\approx 0.1`$ fm<sup>3</sup>.
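The bookkeeping of reading $`F`$, $`S`$ and $`E`$ off the nearest isochore as $`V_f(E)`$ evolves can be made concrete with the short sketch below (not the authors' code; both the isochore table and the $`V_f(E)`$ parameterization are placeholders chosen only to show the mechanics).

```python
# Minimal sketch (not the authors' code): follow a V_f(E) trajectory through a table of
# isochores.  For each energy the isochore with free volume closest to V_f(E) is read out.
import numpy as np

E_grid = np.linspace(0.5, 12.0, 60)      # MeV per nucleon
V_grid = np.linspace(5.0, 500.0, 100)    # fm^3, free volumes of the tabulated isochores

# placeholder table S(V, E): rows are isochores, columns follow E_grid
S_table = np.log(V_grid)[:, None] + 1.5 * np.log(1.0 + E_grid)[None, :]

def V_f(E, E_on=2.0, V_min=5.0, V_max=500.0, scale=80.0):
    """Placeholder free-volume parameterization: ~constant below E_on, expanding above."""
    return np.where(E < E_on, V_min, np.minimum(V_max, V_min + scale * (E - E_on) ** 2))

Vf = V_f(E_grid)
iv = np.abs(V_grid[None, :] - Vf[:, None]).argmin(axis=1)   # nearest isochore at each energy
S_traj = S_table[iv, np.arange(E_grid.size)]                # entropy along the trajectory

print(list(zip(E_grid[:3].round(2), Vf[:3].round(1), S_traj[:3].round(2))))
```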
Figure 20a shows three different paths through thermodynamic phase space: one which travels very near to the critical point and into the coexistence region (solid curve), and two others which avoid the coexistence region altogether (dotted and dashed curves). As mentioned previously, for the full version of this model the free volume is nearly zero at the lower end of the energy range. At some energy, $`2`$ MeV/A for the solid curve in Figure 20, the system is allowed to expand and the free volume increases as a function of energy. The functional form of $`V_f(E)`$ is not identical to that of other models, but is close enough to show the same behavior observed in the full version of the model . Also shown in Figure 20a are the values of the free volume and energy along the boundary of the coexistence region. Note that for the solid curve the $`V_f(E)`$ trajectory enters the coexistence region near the critical point and leaves the coexistence region at a higher energy and free volume. The other two trajectories shown in Figure 20 have the same general behavior, increasing free volume with increasing energy, but the precise paths differ.
In a plot of free volume against temperature, back bending is observed for two of the trajectories presented here. See Figure 20b. The solid curve trajectory of $`V_f(T)`$ begins with a small free volume that is constant until a temperature of just over $`8`$ MeV; then the system expands and cools. This is shown by the back bend. The solid curve $`V_f(T)`$ trajectory then enters the coexistence region near the critical point. At a temperature between $`7`$ and $`7.5`$ MeV the slope of the free volume nearly diverges and then changes sign. After this point, further expansion in the free volume is accompanied by an increase in the temperature of the system. The $`V_f(T)`$ curve then leaves the coexistence region at $`T<T_c`$ and $`V>V_c`$. The other trajectories show similar, but less extreme, behavior.
Other projections of the phase diagram for this model with the solid curve $`V_f(E)`$ trajectory show back bends as well. See Figures 20c and d. The $`V_f`$-$`P`$ projection shows that the system’s pressure increases by nearly an order of magnitude over the constant free volume section. When the system is allowed to expand, the pressure drops. The trajectory passes near the critical point as it enters the coexistence region and, in the course of back bending, exits the coexistence region.
The solid curve $`P`$-$`T`$ trajectory is equally interesting. As the system increases in temperature with a fixed free volume, the pressure increases. When the system reaches a temperature between $`8`$ and $`8.5`$ MeV the expansion sets in and the pressure and temperature both drop so that the trajectory moves towards the critical point. At a temperature between $`7`$ and $`7.5`$ MeV, the system reverses the trend and both pressure and temperature increase. See Figure 20d. Again the other two trajectories show similar behavior, but to a lesser extent as the $`V_f(E)`$ trajectory becomes smoother.
As back bending has already been observed in other projections of the $`V_f(E)`$ trajectory, it is no surprise that the caloric curve for this changing volume system also shows a back bend. See Figure 20e. The solid caloric curve shown here is reminiscent of other back bending caloric curves already published in references , where variable free volume constrained canonical and constrained grand canonical calculations are made, but not of those published in reference , where a constant volume micro canonical calculation is made. Also shown in Figure 20e are the values of the temperature and energy along the coexistence curve. Again the trajectory of the solid curve variable free volume enters the coexistence region near the critical point and exits the coexistence region at $`T<T_c`$ and $`E>E_c`$. The dotted caloric curve also shows back bending, albeit to a more limited extent, and the dashed caloric curve shows no major back bend.
Next the specific heat of the system was calculated via eq. (26). See Figure 20f. The application of eq. (26) to this trajectory through thermodynamic phase space is problematic. From Figure 20d it is clear that the pressure is not constant and thus eq. (26) should not be used. However, it has become commonplace to follow this sort of procedure , even though it is in contradiction with the definition of $`C_P`$ or $`C_V`$. When eq. (26) is applied to the solid and dotted curves in Figure 20e, the resulting specific heat shows negative values and the remnants of a divergence.
Finally the specific heat of the system was determined in the same manner as the entropy of the system. The specific heat at a constant volume was calculated along an isochore, and then the values of $`C_V`$ were selected along the $`V_f(E)`$ trajectory through thermodynamic phase space. See Figure 20g. For the path corresponding to the solid curve, the value of $`C_V`$ shows a steady rise until an energy of $`2`$ MeV/A and then a sharp rise as the system expands. As the system continues to expand and the energy increases, the value of $`C_V`$ reaches a maximum and then shows a gradual decline. No $`C_V<0`$ is observed in this plot. The other two paths show smoother behavior.
The question that now arises is what, if any, insight into the nature of the phase transition can be obtained from curves such as those in Figures 20e and f. Were there no other knowledge of the system, the back bends observed in the solid and dotted caloric curves would suggest that the system had gone through a first order phase transition. Negative values and a peak in the specific heat would seem to confirm this. For the dashed curve, the lack of back bending in the caloric curve and the lack of a negative specific heat would argue either for a continuous phase transition or for no phase transition. However, from the analysis of the previous sections, the location of the critical point and the shape and location of the coexistence curve are known. The addition of this knowledge makes it clear that the naive analysis of Figures 20e and f can provide misleading results. While the solid caloric curve shows a back bend, the trajectory of the system goes through the critical point, into the coexistence region, and exits along a sub-critical path. Can one conclude that the system has undergone a continuous phase transition, or has it undergone a first order phase transition because the trajectory traverses the coexistence region? The answer to the first question is yes, since the system does reach the critical temperature and density simultaneously. The answer to the second question also appears to be “yes” since the specific heat found from eq. (26) is less than zero (for the solid curve). Furthermore, the solid curve intersects the coexistence curve at a $`T`$ less than $`T_c`$. Note, however, that without knowledge of the location of the coexistence curve, the above questions cannot be unambiguously answered. A naive inspection of the dotted caloric curve may lead one to conclude that the back bending is indicative of a first order phase transition. Such is not the case; Figure 20e shows that the dotted curve never traverses the coexistence region. It is clear that drawing conclusions based on caloric curves is difficult unless one has knowledge of the complete thermodynamics of the system.
### E Critical exponents from thermodynamic quantities
Any model that attempts to describe a system capable of undergoing a continuous phase transition should exhibit quantities with singular behavior that, near the critical point, are described by power laws with a consistent set of critical exponents. These critical exponents should obey well known scaling laws and may, or may not, fall into one of the established universality classes. To that end, four critical exponent values are determined and three scaling laws are checked for these models using the critical point, $`(T_c,P_c,\rho _c)`$, determined previously and other thermodynamic quantities. Note that in the determination of critical exponents presented here, thermodynamic variables are used explicitly, e.g. in the extraction of $`\gamma `$ it is the isothermal compressibility that is used, and not moments of the fragment distribution.
#### 1 Power law results
The exponent $`\alpha `$ is determined by the behavior of the specific heat along the critical isochore, see Figure 12. The $`C_V(T)`$ curve was fit with the functional form:
$$C_V(T)=H_\pm \left(\frac{\left|\frac{T-T_c}{T_c}\right|^{-\alpha _\pm }-1}{\alpha _\pm }\right)+G_\pm $$
(27)
on both sides of the critical point, $`T\gtrless T_c`$ . The fit parameters $`H_\pm `$, $`\alpha _\pm `$ and $`G_\pm `$ were allowed to vary to minimize the $`\chi ^2`$ of the fit. Figure 21 shows the results for each system and Table II lists the extracted exponents.
The functional form in eq. (27) did not fit the curves shown in Figure 21a, b and c over the entire range of $`(T-T_c)/T_c`$. To some degree this is to be expected. Near the critical point the finite size effects, which manifest themselves first in the smoothing of the kink of sub-critical free energy isochores, turn a diverging specific heat into a peaking specific heat. Far from the critical point, the analytic terms in the expression for the specific heat become dominant and the power law behavior is overwhelmed. In some mid-range region, neither too far from nor too near to the critical point, behavior consistent with eq. (27) was observed. Various fits were tried on both sides of the critical point, but only those which gave a matching value for $`\alpha `$ were considered. Figures 21a, b and c show the results of one such fit. Table II lists the average results for many such fits.
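A least-$`\chi ^2`$ fit of the singular form in eq. (27) can be done with any standard nonlinear fitter. The sketch below (not the authors' code; the synthetic points stand in for the model's $`C_V`$ along the critical isochore) fits one side of $`T_c`$; in practice the fit is repeated for $`T>T_c`$ and $`T<T_c`$ and only fits with matching $`\alpha _\pm `$ are kept, as described above.

```python
# Minimal sketch (not the authors' code): fit the specific heat to the singular form of
# eq. (27), C_V = H*((|t|^-alpha - 1)/alpha) + G with t = (T - T_c)/T_c, on one side of T_c.
import numpy as np
from scipy.optimize import curve_fit

T_c = 7.0                                               # MeV, assumed known from the isotherms

def cv_form(T, H, alpha, G):
    t = np.abs((T - T_c) / T_c)
    return H * (t ** (-alpha) - 1.0) / alpha + G

# synthetic "data" on the T < T_c side, generated with alpha = 0.3 plus a little noise
rng = np.random.default_rng(0)
T = np.linspace(5.0, 6.8, 40)
C_V = cv_form(T, H=0.8, alpha=0.3, G=2.0) + rng.normal(0.0, 0.02, T.size)

popt, pcov = curve_fit(cv_form, T, C_V, p0=(1.0, 0.2, 1.5))
H_fit, alpha_fit, G_fit = popt
print(f"alpha_- = {alpha_fit:.3f} +/- {np.sqrt(pcov[1, 1]):.3f}")
```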
The van der Waals fluid shows much different behavior than do the nuclear models. The constant value of $`C_V=3/2`$ in the van der Waals fluid leads to the result $`\alpha =0`$, as expected for a mean field model. Based on this result it would seem that the nuclear models are not mean field models. They show a peaking in the specific heat that is inconsistent with the behavior of a van der Waals fluid type of mean field model or with the behavior of the Landau model, which shows a discontinuity in the specific heat.
The exponent $`\beta `$ is determined using the $`(P,V_f)`$ points along the coexistence curve, shown in Figure 15, which should be described by
$$\rho _l-\rho _g\propto \left(\frac{T_c-T}{T_c}\right)^\beta $$
(28)
Fitting $`\rho _l-\rho _g`$ versus $`\left(T_c-T\right)/T_c`$ to a simple power law for the positive slope portion of Figure 22a, b and c gives the exponent $`\beta `$. See Table II for results. The van der Waals fluid recovers the mean field value of $`\beta =1/2`$. The model V1 gives the least impressive fit results.
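The $`\beta `$ extraction is a simple power-law fit in the reduced temperature, most easily done as a straight line in log-log space; the same routine can be reused for the $`\kappa _T`$ fit of eq. (29) and the critical-isotherm fit of eq. (30). A minimal sketch follows (synthetic inputs, not the coexistence-curve data of Figure 22).

```python
# Minimal sketch (not the authors' code): extract a power-law exponent by a straight-line
# fit in log-log space, y = a * x^p  =>  log y = log a + p * log x.  Used here for
# rho_l - rho_g vs (T_c - T)/T_c (exponent beta); the same routine applies to eqs. (29)-(30).
import numpy as np

def powerlaw_exponent(x, y):
    p, loga = np.polyfit(np.log(x), np.log(y), 1)
    return p, np.exp(loga)

# synthetic coexistence-curve points with beta = 0.5 (the mean-field value)
t = np.linspace(0.02, 0.4, 25)                           # reduced temperature (T_c - T)/T_c
drho = 1.3 * t ** 0.5 * (1.0 + 0.01 * np.sin(40 * t))    # small wiggle stands in for noise

beta, coeff = powerlaw_exponent(t, drho)
print(f"beta = {beta:.3f}, coefficient = {coeff:.3f}")
```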
Near the critical point the isothermal compressibility, $`\kappa _T`$, is given by
$$\kappa _T=\mathrm{\Gamma }_\pm \left|\frac{T-T_c}{T_c}\right|^{-\gamma _\pm }$$
(29)
For $`T<T_c`$ fitting $`\kappa _T`$ versus $`\left|(T-T_c)/T_c\right|`$ along the coexistence curve gives $`\gamma _{-}`$ while $`\gamma _+`$ is determined by fitting $`\kappa _T`$ versus $`\left|(T-T_c)/T_c\right|`$ for $`T>T_c`$ at $`\rho =\rho _c`$. Due to the imprecise nature of the data along the coexistence curve for $`T<T_c`$, fits were made only for $`T>T_c`$. Fits were made over the entire region of $`\left|(T-T_c)/T_c\right|`$ for $`T>T_c`$. The results for the extraction of the exponent $`\gamma `$ are shown in Figure 23.
For V1 two different power law regions appear to be present, one close to the critical point and one further from the critical point. See Figure 23a. The error bars on the $`\gamma `$-value in Table II account for this behavior. The fit to the entire $`\left|(T-T_c)/T_c\right|`$ region is used because the resulting power law shows some agreement with the behavior of $`\kappa _T`$ for $`T<T_c`$ when the coefficient of the power law is increased by some factor. See the dashed line and open squares in Figure 23a. Similar arguments apply to the results for V2 and V3. See Figures 23b and c.
The van der Waals fluid shows the expected behavior and recovers the value of $`\gamma =1`$ to within error bars. See Figure 23d and Table II for results. The $`T<T_c`$ behavior of $`\kappa _T`$ also shows the expected power law behavior with the appropriate exponent value.
Examining the shape of the critical isotherm leads to an estimation of the exponent $`\delta `$ from:
$$\left|P-P_c\right|\propto \left|\frac{\rho -\rho _c}{\rho _c}\right|^{\delta _\pm }.$$
(30)
The critical isotherm was examined independently for $`\rho <\rho _c`$, which gives $`\delta _{-}`$, and $`\rho >\rho _c`$, which gives $`\delta _+`$. As with the exponents $`\alpha `$ and $`\gamma `$, for a system with a continuous phase transition the values of $`\delta _\pm `$ should be the same on both sides of the critical density. This fact is again used as a guide in searching for fitting regions to extract the exponent $`\delta `$. See Figure 24 and Table II.
For V1 only the regions closest to the critical point gave matching $`\delta `$-values. The error bars on the $`\delta `$-values in Table II reflect the changes in $`\delta _\pm `$ when different fit regions are examined. In V2 there are regions on both sides of the critical point which yield a matching set of $`\delta _\pm `$ values. No such region could be found for V3, even very close to the critical point. The van der Waals fluid shows some regions on both sides of the critical point where $`\delta _\pm `$ match, to within error bars, and agree with the expected value of $`\delta =3`$.
Finally, the topological exponent, $`\tau `$, from Fisher’s droplet model can be recovered based on considerations of the compressibility factor, $`C_f`$ via the relationship :
$$C_f=\frac{\zeta (\tau )}{\zeta (\tau -1)}.$$
(31)
The Riemann $`\zeta `$ functions of eq. (31) were summed from $`1`$ to $`10^9`$. When a value of $`\tau =7/3`$ was input for the van der Waals fluid, eq. (31) yielded a value of $`0.393`$, indicating that terminating the summation at $`10^9`$ yields a value of $`C_f`$ that is approximately $`5`$% too high; for the van der Waals fluid $`C_f=3/8`$. This supposition is supported by decreasing the upper summation limit and observing an increase in the value of $`C_f`$. This error was accounted for in the estimation of the value of $`\tau `$. See Table II for results.
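Equation (31) can also be inverted numerically for $`\tau `$ once $`C_f`$ is known. The sketch below (not the authors' code) uses the converged Riemann $`\zeta `$ function from SciPy rather than the truncated sum described above, reproduces the quoted check at $`\tau =7/3`$, and solves for $`\tau `$ for a placeholder value of $`C_f`$ (the actual measured values are in Table I).

```python
# Minimal sketch (not the authors' code): evaluate and invert eq. (31),
# C_f = zeta(tau)/zeta(tau-1), using the converged Riemann zeta function rather than a
# truncated sum.  The "measured" C_f below is a placeholder, not a value from Table I.
from scipy.special import zeta            # Riemann zeta, valid for real arguments > 1
from scipy.optimize import brentq

def cf_from_tau(tau):
    return zeta(tau) / zeta(tau - 1.0)    # requires tau > 2 so both arguments exceed 1

# check quoted in the text: tau = 7/3 gives ~0.393, about 5% above the exact
# van der Waals value of 3/8
print(cf_from_tau(7.0 / 3.0))

C_f_measured = 0.30                        # placeholder
tau = brentq(lambda t: cf_from_tau(t) - C_f_measured, 2.05, 3.5)
print(f"tau = {tau:.3f}")
```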
#### 2 Scaling laws
With four critical exponents determined it is possible to perform a consistency check using the well known scaling relations. For example, the Rushbrooke inequality shows that:
$$\alpha +2\beta +\gamma =2,$$
(32)
here shown as an equality in keeping with the scaling hypothesis and renormalization . And the Griffiths equality:
$$\alpha +\beta (1+\delta )=2$$
(33)
and the Widom equality:
$$\beta (\delta -1)-\gamma =0.$$
(34)
And finally from Fisher’s droplet model:
$$\frac{\beta }{\gamma }-\frac{\tau -2}{3-\tau }=0.$$
(35)
Using the average values determined for $`\alpha `$, $`\beta `$, $`\delta `$, $`\gamma `$ and $`\tau `$, the results for these scaling laws are compiled in Table III. Only the van der Waals fluid results consistently satisfy the above scaling laws to within error bars. The nuclear models generally fail to satisfy three of the four scaling laws. This failure is inconsistent with the behavior of the phase diagram, shown for example in Figure 5, which appears to show a critical point, thus indicating the presence of a continuous phase transition. While moderately good fits are observed for the specific heat, the liquid-gas density difference, the isothermal compressibility and the critical isotherm for each of the versions of the nuclear model, the meaning of these power laws and critical exponents remains an open question in light of the failure to adhere to well known scaling laws.
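Checking eqs. (32)–(35) is simple arithmetic once the exponents and their uncertainties are available; a minimal sketch with first-order, uncorrelated error propagation is shown below (the exponent values are placeholders, not the entries of Table II).

```python
# Minimal sketch: evaluate the scaling-law combinations of eqs. (32)-(35) with
# first-order (uncorrelated) error propagation.  Exponent values are placeholders.
import math

alpha, d_alpha = 0.25, 0.05
beta,  d_beta  = 0.35, 0.04
gamma, d_gamma = 1.20, 0.10
delta, d_delta = 4.00, 0.40
tau,   d_tau   = 2.20, 0.10

rushbrooke = alpha + 2 * beta + gamma - 2.0
d_rush = math.sqrt(d_alpha**2 + (2 * d_beta)**2 + d_gamma**2)

griffiths = alpha + beta * (1 + delta) - 2.0
d_grif = math.sqrt(d_alpha**2 + ((1 + delta) * d_beta)**2 + (beta * d_delta)**2)

widom = beta * (delta - 1) - gamma
d_wid = math.sqrt(((delta - 1) * d_beta)**2 + (beta * d_delta)**2 + d_gamma**2)

fisher = beta / gamma - (tau - 2) / (3 - tau)
d_fis = math.sqrt((d_beta / gamma)**2 + (beta * d_gamma / gamma**2)**2
                  + (d_tau / (3 - tau)**2)**2)

for name, val, err in [("Rushbrooke", rushbrooke, d_rush), ("Griffiths", griffiths, d_grif),
                       ("Widom", widom, d_wid), ("Fisher", fisher, d_fis)]:
    print(f"{name:10s}: {val:+.2f} +/- {err:.2f}  (consistent with 0: {abs(val) < err})")
```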
## V Summary
It has been shown that the type of nuclear model discussed here exhibits many features commonly associated with a system in which critical phenomena are present, e.g. a coexistence curve, power laws, critical exponents. By removing both the Coulomb and temperature-dependent free energy terms, it was found that the appearance of a critical point in these models is due to the interplay between the surface, volume, and translational free energy terms. However, these types of models are not without inconsistencies. One striking inconsistency is the fact that the temperature dependent surface free energy gives rise to an infinite negative specific heat at the critical temperature used by the model. Furthermore, no version of the model showed a critical temperature that agreed with the one explicitly input into the surface free energy term. Additionally, when the surface term was rendered temperature independent the critical point remained, suggesting that the appearance of a critical point in these models does not depend on the temperature dependence of the surface term but rather results from the interplay between the surface and volume free energy terms.
The critical temperature and density have been determined by examining isotherms in the $`P\rho `$ plane. In the neighborhood of this critical point, singular behavior characterized by power laws was observed. However, these critical exponents do not obey well known scaling relations. This is a particularly troublesome occurrence as any model with true critical behavior, even the simple van der Waals fluid, does have exponents which obey these scaling relations. It is possible that an examination of this model for larger systems, with smaller steps in temperature and volume in the calculation of the free energy, will yield a consistent set of critical exponents.
It is important to note that the critical densities found here are much higher than could be realized with a closest packing of normal density nuclei. Additionally, these critical densities are significantly higher than those typically used to compare model predictions , to data.
A major conclusion of this work is that the particular phenomenological description of the free energy of a hot nucleus leads to several inconsistencies regarding both temperature and density. It was pointed out that the parameterization of the surface free energy leads to a negative and divergent contribution to the specific heat as $`T`$ approaches the value of the parameter $`T_c`$ in eq. (4). Furthermore, all values of the critical temperature found from examination of isotherms in the $`P`$-$`\rho `$ plane are well below this parameter value. Thus, while use of such a model may well lead to an excellent description of multifragmentation data, the lack of internal consistency noted here makes the interpretation of data in terms of the model problematic. Such agreement may rest more on the phase space sampling and variable free volume inherent in the model than on the finer details examined here.
Finally, it has been shown that the variable volume version of this phenomenological model of multifragmentation exhibits caloric curves which can be misinterpreted in the absence of detailed knowledge of the complete thermodynamic phase diagram.
This work was supported in part by the U.S. Department of Energy Contracts or Grants No. DE-ACO3-76F00098, DE-FG02-89ER-40513, DE-FG02-88ER-40408, DE-FG02-88ER40412, DE-FG05-ER40437 and by the U.S. National Science Foundation under Grant No. PHY-91-23301. |
no-problem/9912/nucl-ex9912002.html | ar5iv | text | # Level density and 𝛾 strength function in 162Dy from inelastic 3He scattering
## 1 Introduction
Nuclear level densities have recently gained new interest. Whereas earlier studies of level densities were mainly based on counting levels close to the ground state and on neutron resonance spacings at the neutron binding energy , a variety of new methods and experimental results are available today. A more recent compilation of all existing data on level densities includes level spacing data of several other reactions involving light particles up to $`A=4`$ as well as results from Ericson fluctuation measurements. Recently, experimental level densities in <sup>69</sup>As and <sup>70</sup>Ge over a large excitation energy interval of 5-24 MeV have been reported , obtained from proton evaporation spectra of <sup>12</sup>C induced reactions. Also the Oslo cyclotron group has reported on a new method to extract level density and $`\gamma `$ strength function from primary $`\gamma `$ spectra (see for the basic assumptions and for the method). This method has the advantage that the level density is deduced from $`\gamma `$ transitions; thus the nucleus is likely to be thermalized and the measured level density is expected to be independent of the formation mechanism of the excited nucleus. Several applications of the method are reported in .
The experimental progress has been accompanied by new theoretical developments with respect to the first analytical nuclear level density formula proposed by Bethe . Level densities have been studied for finite temperatures within the BCS model . Today, Monte Carlo shell model calculations are able to estimate nuclear level densities for heavy mid shell nuclei like <sup>162</sup>Dy . Also more schematic approaches like binomial level densities have been revived lately. Important applications of the theoretical and experimental efforts are calculations of nucleosynthesis in stars, where the level densities are inputs in large computer codes and thousands of cross sections are estimated .
Also the present knowledge of the $`\gamma `$ strength function is poor. Although the strengths can be roughly calculated by the Weisskopf estimate, which is based on single particle transitions (see e.g. ), some transitions deviate by many orders of magnitude from this approximation. A compilation of average $`\gamma `$ transition strengths for dipole and electric quadrupole transitions can be found in . The uncertainty of the $`\gamma `$ strength function concerns both the absolute value and the $`\gamma `$ energy dependence. For E1 transitions one assumes that the $`\gamma `$ energy dependence follows the Giant Dipole Resonance (GDR) $`(\gamma ,\gamma ^{})`$ cross section. This, however, remains to be proven.
In this work, we determine the level density and the $`\gamma `$ strength function for <sup>162</sup>Dy for energies close up to the neutron binding energy $`B_n`$. By comparing the present data, which were obtained from the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction, to previous data , which were obtained from the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy reaction, we can test if the basic assumption of our analysis method is fulfilled.
This main assumption is that the $`\gamma `$ decay pattern from any excitation energy bin is independent of the population mechanism of states within this bin, e.g. direct population by a nuclear reaction, or indirect population by a nuclear reaction followed by one or several $`\gamma `$ rays. Since the $`\gamma `$ decay probabilities of an excited state are independent of the populating reaction, the assumption above is generally equivalent to the assumption that the same states are populated equally by the direct and indirect population mechanisms. One can now imagine several cases where this assumption might be invalid.
Firstly, thermalization time might compete with the half life of excited states, and the selectivity of the direct population by a nuclear reaction will be reflected by a different $`\gamma `$ decay pattern with few and relatively strong $`\gamma `$ transitions compared to a statistical spectrum which is the expected $`\gamma `$ decay pattern after complete thermalization.
Secondly, direct population might populate states with different exact or approximate quantum numbers, like spin or parity, than indirect population does. Since states with different exact quantum numbers do not mix at all, and states with different approximate quantum numbers mix only very weakly, the ensemble of populated states after thermalization will differ for the two population mechanisms and therefore one can expect different $`\gamma `$ decay patterns.
It is very difficult to judge where the assumption of the method is applicable and how good this approximation is. Below, we will, by comparing two different direct population mechanisms represented by two different nuclear reactions, investigate in which excitation energy interval the assumption might break down.
## 2 Experiment and data analysis
The experiment was carried out at the Oslo Cyclotron Laboratory (OCL) using the MC35 Scanditronix cyclotron. The beam current was $``$1 nA of <sup>3</sup>He particles with an energy of 45 MeV. The experiment ran for a total of 2 weeks. The target was an isotopically enriched 95% <sup>162</sup>Dy self-supporting metal foil with a thickness of 1.4 mg/cm<sup>2</sup> glued on an aluminum frame. Particle identification and energy measurements were performed by a ring of 8 Si(Li) telescopes at 45$`\mathrm{°}`$ relative to the beam axis. The telescopes consist of a front and an end detector with thicknesses of some 150 $`\mu `$m and 3000 $`\mu `$m, respectively, which is enough to effectively stop the ejectiles of the reaction. The $`\gamma `$ rays were detected by a ball of 27 5”$`\times `$5” NaI(Tl) detectors (CACTUS) covering a solid angle of $``$15% of $`4\pi `$. Three 60% Ge(HP) detectors were used to monitor the selectivity of the reaction and the entrance spin distribution of the product nucleus. During the experiment we collected, in addition to data for the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’)<sup>162</sup>Dy reaction, whose results are presented in this work, also data for the <sup>162</sup>Dy(<sup>3</sup>He,$`\alpha `$)<sup>161</sup>Dy reaction, for which some results were presented in . A comprehensive description of the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy experiment, which we will compare our findings to, can be found in .
In the first step of the data analysis, the measured ejectile energy is transformed into excitation energy of the product nucleus. In Fig. 1 the raw data are shown. In the next step, the $`\gamma `$ spectra are unfolded for every excitation energy bin using measured response functions of the CACTUS detector array . In Fig. 2 the unfolded data are shown. In the third step, the primary $`\gamma `$ spectra for every excitation energy bin are extracted from the unfolded data by the subtraction technique of Ref. . In Fig. 3 the primary $`\gamma `$ spectra are shown.
In the fourth step, we extract level density and $`\gamma `$ strength function from the primary $`\gamma `$ spectra. The main assumption behind this method is the Axel Brink hypothesis
$$\mathrm{\Gamma }(E_x,E_\gamma )\propto F(E_\gamma )\varrho (E_f)$$
(1)
with $`E_f=E_x-E_\gamma `$. It says that the $`\gamma `$ decay probability in the continuum energy region represented by the primary $`\gamma `$ spectrum $`\mathrm{\Gamma }`$ is proportional to the level density $`\varrho `$ and a $`\gamma `$ energy dependent factor $`F`$. The level density and the $`\gamma `$ energy dependent factor are estimated by a least $`\chi ^2`$ fit to the experimental data . In Fig. 4 the experimental data including estimated errors are compared to the fit according to Eq. (1).
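The structure of such a fit can be illustrated with the short sketch below. It is not the actual extraction code of the cited reference (which also weights each data point by its uncertainty); it only shows the bare alternating least-squares updates applied to a synthetic, noise-free primary $`\gamma `$ matrix, with placeholder binning and functional forms. The resulting $`\varrho `$ and $`F`$ are determined only up to the normalization freedom discussed below.

```python
# Minimal sketch (not the actual extraction code of the references): alternating
# least-squares updates that fit F(Eg)*rho(Ex-Eg) to a primary-gamma matrix P(Ex,Eg),
# cf. Eq. (1).  Each update is an exact least-squares step for one factor, so the chi^2
# figure of merit cannot increase from one iteration to the next.
import numpy as np

nE, dE = 50, 0.14                          # placeholder binning (MeV)
E = np.arange(nE) * dE
rho_true = np.exp(E / 1.0)                 # placeholder level density
F_true = (E + 0.3) ** 3                    # placeholder gamma-energy factor

P = np.zeros((nE, nE))
for i in range(nE):                        # decays with Eg <= Ex populate E_f = Ex - Eg
    P[i, : i + 1] = F_true[: i + 1] * rho_true[i::-1]

def chi2(rho, F):
    return sum(np.sum((P[i, : i + 1] - F[: i + 1] * rho[i::-1]) ** 2) for i in range(nE))

rho, F = np.ones(nE), np.ones(nE)
for it in range(1, 301):
    num, den = np.zeros(nE), np.zeros(nE)  # update F for fixed rho
    for i in range(nE):
        num[: i + 1] += P[i, : i + 1] * rho[i::-1]
        den[: i + 1] += rho[i::-1] ** 2
    F = num / (den + 1e-30)
    num, den = np.zeros(nE), np.zeros(nE)  # update rho for fixed F
    for i in range(nE):
        num[i::-1] += P[i, : i + 1] * F[: i + 1]
        den[i::-1] += F[: i + 1] ** 2
    rho = num / (den + 1e-30)
    if it in (1, 10, 100, 300):
        print(it, chi2(rho, F))            # the figure of merit decreases monotonically
```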
The data are fitted very well by the theoretical expression of Eq. (1). This is a remarkable example of the validity of the Axel Brink hypothesis. However, it can never be completely ruled out that a minor portion of the primary $`\gamma `$ matrix cannot be factorized into a level density and a $`\gamma `$ energy dependent factor. One might also encounter large fluctuations in these quantities at very low level densities around the ground state, or when considering highly collective $`\gamma `$ transitions and single particle $`\gamma `$ transitions at similar $`\gamma `$ energies.
Since the least $`\chi ^2`$ fit according to Eq. (1) yields an infinitely large number of equally good solutions, which can be obtained by transforming one arbitrary solution by
$`\stackrel{~}{\varrho }(E_x-E_\gamma )`$ $`=`$ $`\varrho (E_x-E_\gamma )A\mathrm{exp}(\alpha [E_x-E_\gamma ])`$ (2)
$`\stackrel{~}{F}(E_\gamma )`$ $`=`$ $`F(E_\gamma )B\mathrm{exp}(\alpha E_\gamma ),`$ (3)
we have to determine the three parameters $`A`$, $`B`$ and $`\alpha `$ of the transformation by comparing the results to other experimental data. We fix the parameters $`A`$ and $`\alpha `$ by comparing the extracted level density curve to the number of known levels per excitation energy bin around the ground state and to the level density at the neutron binding energy $`B_n`$ calculated from neutron resonance spacing data . Since the procedure is described in detail in Ref. , we only show in Fig. 5 how the extracted level density curve compares to other experimental data.
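The way the two level-density constraints fix $`A`$ and $`\alpha `$ can be made explicit with the sketch below. It is a simplified stand-in for the fit of the cited reference (reduced to a two-point matching), and all numbers are placeholders rather than the actual <sup>162</sup>Dy values.

```python
# Minimal sketch (not the authors' normalization code, which uses the fit described in the
# reference): fix A and alpha of eqs. (2)-(3) by forcing the transformed level density
# through two anchor points, one from discrete levels at low excitation energy and one at
# the neutron binding energy from the resonance spacing.  All numbers are placeholders.
import numpy as np

E = np.linspace(0.5, 8.5, 41)               # MeV, excitation energies of the extracted rho
rho_raw = np.exp(E / 0.7)                   # placeholder un-normalized solution of Eq. (1)

E1, rho1 = 1.0, 30.0                        # (MeV, MeV^-1) anchor from counted levels
E2, rho2 = 8.2, 2.0e6                       # anchor at B_n from neutron resonance spacing

r1 = np.interp(E1, E, rho_raw)
r2 = np.interp(E2, E, rho_raw)

# rho_tilde(E) = A*exp(alpha*E)*rho_raw(E); the two conditions fix alpha and then A
alpha = np.log((rho2 / r2) / (rho1 / r1)) / (E2 - E1)
A = rho1 / (r1 * np.exp(alpha * E1))

rho_norm = A * np.exp(alpha * E) * rho_raw
print(f"alpha = {alpha:.3f} MeV^-1, A = {A:.3e}")
```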
The parameter $`B`$ could now in principle be fixed by comparing the extracted $`\gamma `$ energy dependent factor $`F`$ to other experimental data of the $`\gamma `$ strength function. However since data are very sparse and the absolute normalization of $`\gamma `$ strength function data is very uncertain, we give the $`\gamma `$ energy dependent factor in arbitrary units.
## 3 Results and discussion
### 3.1 The level density
We compare extracted level densities of <sup>162</sup>Dy from two reactions, namely <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy and <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy. While level densities from the latter reaction were already published in using approximate extraction methods, and in in the present form, data from the first reaction are shown here for the first time. Figure 6 shows the relative level densities, which are calculated by dividing the extracted level densities by an exponential $`C\mathrm{exp}(E/T)`$ with $`T=580`$ keV and $`C=10`$ MeV<sup>-1</sup> in our case. One can see that both level densities agree very well within 10% in the excitation energy interval 1.5 MeV to 6.5 MeV. This result is very encouraging, since level densities are generally only known within an error of $`\pm `$50-100%. Above 6.5 MeV the errors are too large in order to make conclusive observations. Below $``$1.5 MeV the two level densities differ dramatically from each other. In Fig. 5 one can see that the extracted level density from the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy reaction agrees very well with the number of known levels per excitation energy bin below $``$1.2 MeV, whereas the extracted level density from the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction overestimates the number of levels in this energy region by a factor of $``$3.
The level density at $``$0.5 MeV of excitation energy is determined by the data in the primary $`\gamma `$ matrix which lie approximately on the diagonal $`E_x\stackrel{>}{}E_\gamma `$ (see Fig. 3). Careful examination of Fig. 4 shows that the bumps at $`E_x\stackrel{>}{}E_\gamma `$ are very well fitted by the factorization given by Eq. (1). We therefore conclude that the differences in level density around $``$0.5 MeV of excitation energy are not artifacts of the extraction method, but have their origin in differences of the primary $`\gamma `$ spectra. We actually find in the primary $`\gamma `$ matrix of the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction a large number of high energy $`\gamma `$ transitions connecting the directly populated states with the ground state rotational band. This surplus of counts compared to primary $`\gamma `$ spectra from the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy reaction is the reason for overestimating the level density at $``$0.5 MeV of excitation energy.
We argue that the level density curve extracted from the neutron pick up reaction data is the more realistic one, as supported by Fig. 5. Since the neutron pick up reaction cross section is dominated by high $`l`$ neutron transfer, the direct population of the <sup>162</sup>Dy nucleus takes place through one particle one hole components of the wave functions. Such configurations are not eigenstates of the nucleus, but are rather distributed over virtually all eigenstates in the neighboring excitation energy region. Thus, we can expect fast and complete thermalization before $`\gamma `$ emission. Inelastic <sup>3</sup>He scattering, on the other hand, is known to populate mainly collective excitations. These collective excitations will thermalize rather slowly, since their structure is much more like that of eigenstates of the nucleus, and their wave functions are less spread over eigenstates in the nearby excitation energy region. However, we can expect that their structure is similar to the structure of states in the ground state rotational band. Therefore, the large $`\gamma `$ transition rates from the directly populated states to the ground state rotational band might just reflect the inverse process of inelastic scattering. The surplus of $`\gamma `$ counts can therefore be interpreted as preequilibrium decay. An extreme example of this is nuclear resonance fluorescence (NRF) studies . It is estimated that in even-even nuclei more than 90% of the $`\gamma `$ strength from states excited by $`\gamma `$ rays goes to the ground state or to the first excited state. Thermalization of the excited states in NRF is also hindered by the fact that one populates isovector states, which in the proton-neutron interacting boson model (IBA-2) are characterized by a different (approximate) $`F`$ spin quantum number than other states in the same excitation energy region.
We would like to point out that, although the basic assumption behind the primary $`\gamma `$ method is partially violated in the case of the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction, the level densities in the excitation energy interval 1.5 MeV to 6.5 MeV deduced from the two reactions agree extremely well. This indicates that the extracted level density curves are quite robust with respect to the goodness of the assumption. In particular, the bump at $``$2.5 MeV excitation energy, indicating the breaking of nucleon pairs and the quenching of pairing correlations, could be very well reproduced. One should also keep in mind that the two reactions populate states with slightly different spin distributions due to the different target spins in the two reactions, which might account for some differences in the extracted level densities.
### 3.2 The $`\gamma `$ energy dependent factor
We compare the extracted $`\gamma `$ energy dependent function $`F`$ of <sup>162</sup>Dy for the two reactions. The $`F`$ function from the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy reaction was already published in using an approximate extraction method; however, the data were reanalyzed using the exact extraction method of Ref. and are, in the present form, published for the first time in this work, as are the data from the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction. Figure 7 shows the relative $`F`$ functions, which are obtained by dividing the extracted $`F`$ functions by $`E_\gamma ^n`$ with $`n=4.3`$ and scaling them to $``$1 at $``$4 MeV of $`\gamma `$ energy. Also in this case the two functions agree within 10% in the $`\gamma `$ energy interval of 1.5 MeV to 6.5 MeV. Above $``$6.5 MeV, again, the error bars are too large to allow for any conclusions. Below $``$1.3 MeV of $`\gamma `$ energy, the two functions differ dramatically from each other. Due to experimental difficulties, like ADC threshold walk and bad timing properties of low energetic $`\gamma `$ rays, we had to exclude $`\gamma `$ rays with energies below 1 MeV from the data analysis . It is therefore very difficult to judge whether the differences in the $`F`$ function curves below 1.5 MeV of $`\gamma `$ energy are also due to experimental problems (i.e. the experimental cut was too optimistic, and we should rather have excluded all $`\gamma `$ rays with energies below 1.5 MeV) or due to the different nuclear reactions used to excite the <sup>162</sup>Dy nucleus.
Also here we would like to emphasize that, although the basic assumption behind the primary $`\gamma `$ method is not completely fulfilled in the case of the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction, the two $`F`$ functions agree very well. In particular, the bump at $``$2.5 MeV of $`\gamma `$ energy, which we interpret as a pygmy resonance, is equally pronounced in both reactions. We are therefore very confident that the extracted level density and $`\gamma `$ energy dependent factor for <sup>162</sup>Dy presented in this work are not, or only very little, reaction dependent.
## 4 Conclusions
This work compares the results from the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction to those of the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy reaction. The level density $`\varrho `$ and the $`\gamma `$ strength function $`F`$ in <sup>162</sup>Dy are shown to be reliably extracted with our method in the energy interval 1.5-6.5 MeV. The findings are independent of the particular reaction chosen to excite the <sup>162</sup>Dy nucleus. The two reactions differ from each other (i) in the reaction type, i.e. inelastic <sup>3</sup>He scattering versus neutron pick up, and thus in the nuclear states populated before thermalization, namely collective excitations versus one particle one hole states, (ii) in the target spins, $`0^+`$ for <sup>162</sup>Dy versus $`5/2^{-}`$ for <sup>163</sup>Dy, and thus in the spin distribution of directly populated states, and (iii) in the $`Q`$-value, 0 MeV for inelastic <sup>3</sup>He scattering versus 14.3 MeV for the neutron pick up reaction. Nevertheless, the only differences in the extracted quantities are those in the level densities below $``$1.5 MeV of excitation energy. These might be explained by preequilibrium $`\gamma `$ decay in the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction, whereas the <sup>163</sup>Dy(<sup>3</sup>He,$`\alpha \gamma `$)<sup>162</sup>Dy reaction is supposed to show only equilibrium $`\gamma `$ decay, and thus reveals reliable level densities below 1.5 MeV of excitation energy, which is supported by comparison to known data. However, although preequilibrium $`\gamma `$ decay violates the basic assumption of the primary $`\gamma `$ method, the effect on the extracted level density $`\varrho `$ and the $`\gamma `$ energy dependent factor $`F`$ between 1.5 MeV and 6.5 MeV of energy is shown to be less than 10%. In conclusion, the present results have given further confidence in the new extraction techniques, and open up several interesting applications in the future.
The preequilibrium decay does not seem to violate the Axel Brink hypothesis, since the respective parts of the primary $`\gamma `$ spectrum could be fitted within this assumption. However, the extracted quantities $`\varrho `$ and $`F`$ will then only represent a weighted sum of the respective quantities obtained from preequilibrium and equilibrium $`\gamma `$ decay, where in the case of the <sup>162</sup>Dy(<sup>3</sup>He,<sup>3</sup>He’$`\gamma `$)<sup>162</sup>Dy reaction, the preequilibrium process dominates the level density below 1.5 MeV of excitation energy. We conclude therefore that neutron pick up reactions are more suitable than inelastic <sup>3</sup>He scattering for our method, since the states populated by the former reaction presumably thermalize completely, whereas those populated by the latter reaction might not completely thermalize before $`\gamma `$ emission.
## 5 Acknowledgments
The authors wish to thank Jette Sörensen for making the target and E.A. Olsen and J. Wikne for excellent experimental conditions. Financial support from the Norwegian Research Council (NFR) is gratefully acknowledged. |
no-problem/9912/astro-ph9912092.html | ar5iv | text | # Sub-Arcsecond Imaging of 3C 123: 108-GHz Continuum Observations of the Radio Hotspots
## 1 Introduction
Hotspots in the jets of radio galaxies are manifestations of the interaction between the jet and the intergalactic medium— a strong shock which converts some of the beam energy into relativistic particles (Blandford & Rees 1974). Morphologically, hotspots are bright compact regions toward the end of the jet lobe, primarily observed in the radio with a few sources having optical counterparts (e.g. Läteenmäki & Valtaoja 1999). First detected in Cygnus A (Hargrave & Ryle 1974), hotspots are a characteristic and ubiquitous feature in high luminosity, class FRII radio galaxies (Fanaroff & Riley 1974) that can provide constraints on the energetics of the lobes and the powering of radio loud active galactic nuclei.
Unfortunately, the simple, constant beam model of Blandford & Rees does not fully explain the common occurrence of multiple hotspot regions in radio galaxies and quasars (cf. Laing 1989). To accommodate these observations, two modifications have been proposed: (1) the end of the beam precesses from point to point, the ‘dentist’s drill’ model of Scheuer (1982) or (2) the shocked material flows from the initial impact site to the secondary site, the ‘splatter-spot’ model of Williams & Gull (1985) or the deflection model of Lonsdale & Barthel (1986). Both of these models predict that there should be a compact hotspot at the jet termination; indeed, observations have shown that when the jet is explicitly seen to terminate, it is always at the most compact hotspot (Laing 1989; Leahy et al. 1997; Hardcastle et al. 1997).
However, the models in their simplest forms predict two essentially different physical processes in the hotspots. If the secondary (or less compact) hotspots are the relics of primary (more compact) hotspots, as suggested in the ‘dentist’s drill’ model, then the shock-driven particle acceleration has ceased, and the spectrum of the continuum emission seen toward these objects will steepen rapidly with increasing frequency as a result of synchrotron aging and adiabatic expansion. On the other hand, the secondary hotspots in the ‘splatter-spot’ or deflection models still have ongoing particle acceleration as a result of outflow from the primary hotspot, and as long as the observing frequency does not correspond to an energy close to the expected high-energy cutoff in the electron population, the spectral index will not be steeper than $`\alpha `$ = 1.0 (where $`S\propto \nu ^\alpha `$), indicative of a balance between spectral aging and particle acceleration. Of course, it may be that neither of these simple models can properly describe the physics of the interaction. For example, in a more sophisticated version of the dentist’s drill model (Cox, Gull, & Scheuer 1991), the disconnected jet material can continue to flow into the secondary hotspot, causing particle acceleration for some time after the disconnection event. This type of hybrid model will make predictions that will not always be distinguishable from the simple cases.
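With the convention $`S\nu ^\alpha `$ used above, a two-point spectral index between two frequencies follows directly from the flux-density ratio. The fragment below is a small illustrative calculation (the flux densities are placeholders, not measurements of 3C 123), of the kind used when comparing hotspot spectra between, e.g., 8.4 GHz and 108 GHz.

```python
# Minimal sketch: two-point spectral index alpha with the convention S ~ nu^-alpha,
# so alpha = -log(S2/S1) / log(nu2/nu1).  Flux densities below are placeholders.
import math

def spectral_index(S1, nu1, S2, nu2):
    return -math.log(S2 / S1) / math.log(nu2 / nu1)

S_8GHz, S_108GHz = 5.0, 0.4          # Jy, placeholder hotspot flux densities
alpha = spectral_index(S_8GHz, 8.4e9, S_108GHz, 108.0e9)
print(f"alpha = {alpha:.2f}  (alpha > 1 would indicate a steepened, aged spectrum)")
```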
Hotspots have been well studied with high resolution at radio frequencies. To probe the hotspots at higher electron energies, and to test models for multiple hotspot formation, we present in this paper the first high-resolution image of the FRII radio galaxy 3C 123 in the 108 GHz continuum, focusing on the hotspot regions. The radio galaxy 3C 123 (z = 0.218; Spinrad et al. 1985) is one of the original FRII objects from Fanaroff & Riley (1974) and has an extremely high radio luminosity, a highly unusual radio structure (Riley & Pooley 1978), and an optically peculiar host galaxy (Hutchings 1987; Hutchings, Johnson & Pyke 1988). With the highest resolution to date at these high frequencies, we can compare the morphology and emission of the hotspots to other high-resolution images at longer wavelengths.
Throughout the paper we use a cosmology with $`H_0=50`$ km s<sup>-1</sup> Mpc<sup>-1</sup> and $`q_0=0`$. With this cosmology, 1 arcsecond at the distance of 3C 123 corresponds to 4.74 kpc. The physical conditions we derive in the components of 3C 123 are not sensitive to the value of $`H_0`$.
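The quoted conversion of 1 arcsecond to 4.74 kpc follows from the angular-diameter distance for the stated cosmology; for $`q_0=0`$ a closed-form expression exists. The sketch below uses the standard formulas (it is not code from the paper) and reproduces the number.

```python
# Minimal sketch: arcsec-to-kpc conversion for H_0 = 50 km/s/Mpc, q_0 = 0.
# For q_0 = 0 the luminosity distance is d_L = (c/H_0) * z * (1 + z/2), and the
# angular-diameter distance is d_A = d_L / (1 + z)^2.
import math

c = 299792.458            # km/s
H0 = 50.0                 # km/s/Mpc
z = 0.218

d_L = (c / H0) * z * (1.0 + 0.5 * z)       # Mpc
d_A = d_L / (1.0 + z) ** 2                 # Mpc
arcsec = math.pi / (180.0 * 3600.0)        # radians per arcsecond

kpc_per_arcsec = d_A * arcsec * 1000.0     # Mpc -> kpc
print(f"{kpc_per_arcsec:.2f} kpc per arcsecond")   # ~4.74 kpc, as quoted in the text
```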
## 2 Observations and Imaging
3C 123 was observed in three configurations (C, B, and A) of the 9-element BIMA Array<sup>1</sup><sup>1</sup>1The BIMA Array is operated by the Berkeley Illinois Maryland Association under funding from the National Science Foundation. (Welch et al. 1996). The observations were acquired from 1996 November to 1997 February, with the digital correlator configured with two 700 MHz bands centered at 106.04 GHz and 109.45 GHz. The two continuum bands were checked for consistency, then combined in the final images. During all of the observations, the system temperatures ranged from 150-700 K (SSB).
In the compact C array (typical synthesized beam of $``$8$`\mathrm{}`$), the shortest baselines were limited by the antenna size of 6.1 m, yielding a minimum projected baseline of 2.1 k$`\lambda `$ and good sensitivity to structures as large as $``$50$`\mathrm{}`$. This resolution is critical for obtaining an accurate observation of the structure in the large-scale radio lobes. In the mid-sized B array (typical synthesized beam of $`2\mathrm{}`$), the observations are sensitive to structures as large as $`10\mathrm{}`$. In the long-baseline A array (typical synthesized beam of $`0\stackrel{}{\mathrm{.}}5`$), the longest baselines were typically 450 k$`\lambda `$. With the high-resolution imaging of the hotspots, we can make direct comparisons of the hotspots, and their components, out to millimeter wavelengths. The combination of the three arrays provides a well-sampled u,v plane from 2.1 k$`\lambda `$ to 400 k$`\lambda `$.
The uncertainty in the amplitude calibration is estimated to be between 10% and 15%. In the B and C arrays, the amplitude calibration was boot-strapped from Mars. In the A Array, amplitude calibration was done by assuming the flux density of the quasar 3C 273 to be 23.0 Jy. This flux assumption was an interpolation through the A array configuration and supported by data from other observatories. Absolute positions in our image have uncertainty due to the uncertainty in the antenna locations and the statistical variation from the signal-to-noise of the observation. These two factors add in quadrature to give a typical absolute positional uncertainty of $`0\stackrel{}{\mathrm{.}}`$10 in the highest resolution image.
The A array observations required careful phase calibration. On long baselines, the interferometer phase is very sensitive to atmospheric fluctuations. We employed rapid phase referencing; the observations were switched between source and phase calibrator (separation of 9$`\mathrm{°}`$) on a two minute cycle, to follow the atmospheric phase (Holdaway & Owen 1995; Looney, Mundy, & Welch 1997). Since 3C 123 was one of three sources included in the A array calibration cycle, the time spent on-source was approximately 3 hours; thus, the noise in the high-resolution image is higher than would otherwise be expected in a single track with the BIMA array.
## 3 Results
The data span u,v distances from 2.1k$`\lambda `$ to 430k$`\lambda `$, providing information on the brightness distribution on spatial scales from 0$`\stackrel{}{\mathrm{.}}`$4 to 60$`\mathrm{}`$. In order to display the complete u,v information in the image plane, we imaged the emission with four different u,v weighting schemes which include all of the u,v data and stress structures on spatial scales of roughly 5$`\mathrm{}`$, 3$`\mathrm{}`$, 1$`\mathrm{}`$, and 0$`\stackrel{}{\mathrm{.}}5`$. These resolutions were obtained with natural weighting, robust weighting (Briggs 1995) of 1.0, robust weighting of -0.2, and robust weighting of -0.6, respectively. All data reduction was performed using MIRIAD (Sault, Teuben, & Wright 1995), and the images shown were deconvolved using the CLEAN algorithm (Högbom 1974).
The 108 GHz continuum emission from 3C 123, imaged at the four resolutions mentioned above, is shown in Fig. 1. In this Figure, each successive panel is a higher-resolution zoom, beginning with the 5$`\mathrm{}`$ image. Fig. 1a shows the large-scale overall jet-lobe structure, which is very similar to lower frequency images (e.g. Hardcastle et al. 1997) and other low resolution millimeter images at 98 GHz (Okayasu, Ishiguro, & Tabara 1992). Our observations, which have more sensitivity to large-scale structure and better signal-to-noise than the 98 GHz data, do not detect the extended emission to the south of the bright eastern hotspot that is seen at longer wavelengths (component F of Riley & Pooley 1978). We also do not detect feature H of Okayasu et al. (1992), which does not in any case correspond to any feature seen on lower-frequency radio images.
In Fig. 1b, the 4 major sources of millimeter emission at 3$`\mathrm{}`$ resolution are clearly distinguished– from east to west, the eastern hotspot, the core, the western hotspot, and the northwest lobe, respectively. As the resolution increases to $``$1$`\mathrm{}`$ in Fig. 1c1 and 1c2, the western hotspot and the northwest lobe corner are resolved into three peaks that contain only a small fraction of the large scale flux. Since the interferometer is acting as a spatial filter, this implies that the northern lobe consists mainly of large-scale emission; however, the eastern hotspot is dominated by compact emission at this resolution. In the highest resolution image, Fig. 1d, the eastern hotspot is resolved at a principal axis of $``$38$`\mathrm{°}`$, while the core is a point source. The western hotspot is too faint to be seen in this image. Our image of the eastern hotspot looks very similar to high resolution 8.4 GHz observations (Hardcastle et al. 1997), which resolve the hotspot into two components — an extended southeastern component (E4), which corresponds to the peak of the 108 GHz image, and a very compact northwestern component (E3), which accounts for the extension seen in the present image.
## 4 VLA data and Spectral Indices
To compare our data with observations at longer wavelengths, we obtained existing Very Large Array (VLA) data or images at 1.4, 5, 8.4 and 15 GHz. The 1.4 GHz image was taken from Leahy, Bridle & Strom (1998) based on observations with the VLA A configuration, the 5 and 15 GHz images were made from VLA archive observations re-reduced by R. A. Laing, using A and B configurations and B and C configurations respectively, and the 8.4 GHz data were from Hardcastle et al. (1997), using A, B and C configurations. All these datasets have shortest baselines very similar to that of our BIMA data, so that they sample comparable largest angular scales; with the exception of the 1.4 GHz data, they are also comparable in longest baseline and thus angular resolution. Flux density scales were calibrated using observations of 3C 48 and 3C 286; we applied a correction to the flux levels of the 15 GHz B-configuration data to compensate for an estimated 7% decrease in the flux density of 3C 48 between the epoch of observation (1982 August 06) and the epoch at which the flux density coefficients for 3C 48 used in AIPS were measured (1995.2).
Having imaged the VLA data, we measured the flux densities of the various components of 3C 123 using the regions specified on the 8.4 GHz VLA image in Fig. 2. These flux densities are tabulated in Table 1. Except where otherwise stated in the final column of the Table, they are derived by integration using MIRIAD, from aligned images, convolved to the same (3$`\mathrm{}`$) resolution, with polygonal regions defined on low-frequency images. This process ensures that we are measuring the same region at each frequency. The exceptions are the flux density of the core, which was measured by fitting a Gaussian to the matched-resolution maps, and the flux densities of the two components of the E hotspot, which were measured from maps with resolution matched to the highest resolution of the BIMA data. Using these flux densities, we derived a spectral index between each of the 5 frequencies (4 two-point spectral indices). Table 2 lists these spectral indices for each component in the four bands.
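The two-point spectral indices in Table 2 follow directly from the measured flux densities, with the convention $`S\propto \nu ^{-\alpha }`$ used throughout this paper. A minimal helper is shown below; the numbers in the example are hypothetical and are not taken from Table 1.

```python
import math

def two_point_alpha(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined through S ∝ nu**(-alpha)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# Hypothetical example: 1.0 Jy at 8.4 GHz and 0.25 Jy at 108 GHz gives alpha ~ 0.54
print(two_point_alpha(1.0, 8.4, 0.25, 108.0))
```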
The radio core shows an approximately flat spectral index across the radio and millimeter bands. The 8.4 GHz data were taken in 1993–1995 while the other radio frequencies were taken in 1982–1983, so we are comparing data separated in time by a decade, but there was no evidence for core variability on timescales of years in the observations at different epochs that make up the 5, 8.4 and 15 GHz datasets, and the similarity in the flux densities at 8.4 GHz and 5 and 15 GHz \[cf. also the 15 GHz core flux density of 120 mJy from Riley & Pooley (1978) and the 5 GHz core flux density of 99 mJy measured from the MERLIN images of Hardcastle et al. (1997)\] suggests that there is little variability even on timescales of decades at centimeter wavelengths, contrasting with the variability found in some other well-observed radio galaxies with bright radio cores. However, our 108 GHz core flux density is a factor 3 lower than the flux density measured by Okayasu et al. (1992) between 1989 and 1990 at 98 GHz. Either the spectrum cuts off very sharply between these frequencies — more sharply than would be expected in a synchrotron model — or, more probably, the core is more variable at higher frequency. It is generally found in studies of core-dominated objects that the amplitude of nuclear variability is higher in the millimeter band than at centimeter wavelengths, a fact which can be explained in terms of synchrotron self-absorption effects at lower frequencies (e.g., Hughes, Aller & Aller 1989). Unfortunately, little is known about the millimeter-wave variability of lobe-dominated objects like 3C 123.
All the other components of the radio source have relatively steep spectra even at centimeter wavelengths. As expected, the flattest spectra are observed in the hot spots. We cannot distinguish between the NW and SE component of the E hotspot, within the errors, on the basis of their high-frequency spectral indices, and the W hotspot, also detected at 108 GHz, has a comparable spectrum. The southern lobe (all extended emission to the south of the eastern hotspot, see Fig. 2) has spectral indices which indicate a spectral cutoff at centimeter wavelengths, so it is not surprising that we do not detect it at 108 GHz. However, the northern lobe (the extended emission E and S of the ‘NW corner’) shows no strong indication of a spectral cutoff even at millimeter wavelengths.
## 5 Spectral Fitting
In order to investigate the synchrotron emission, we fit simulated spectra to the different components of the source, using the code from Hardcastle, Birkinshaw & Worrall (1998). We assume an injection energy index for the electrons of 2, corresponding to a low-frequency spectral index of 0.5, since we cannot derive an injection index from any of our existing data; the 81.5 MHz scintillation measurements of Readhead & Hewish (1974) suggest a flatter spectral index for the hotspots, but this low frequency may be below a spectral turnover due to synchrotron self-absorption or a low-energy cutoff in the electron energy spectrum, as seen in the hotspots of Cygnus A (Carilli et al. 1991). To find magnetic field strengths, we assume equipartition between the electrons and magnetic fields, with no contribution to the energy density from relativistic protons. The choice of an equipartition field does not affect our conclusions about spectral shape, but does affect our estimates of break and cutoff electron energies. Since the fitting is essentially done in the frequency domain, all energies quoted may be scaled by a factor $`\sqrt{B_{\mathrm{eq}}/B}`$ if the field deviates from equipartition. We perform $`\chi ^2`$ fitting of the simulated spectra by combining the systematic errors in flux calibration (fixed at 2% for the VLA data and 10% for the BIMA data) with the statistical errors tabulated in Table 1; the systematic errors are the dominant source of error for the VLA data. Because the systematic errors are uncorrelated from frequency to frequency, this procedure is valid when fitting spectra, though not when comparing fluxes or spectral indices from different parts of the source.
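The way the error budget enters the fit can be summarized in a few lines of code. The sketch below is schematic (the flux densities shown are placeholders, not values from Table 1), but it implements the quadrature combination of statistical and fractional flux-scale errors described above.

```python
import numpy as np

def chi_squared(s_obs, s_model, sigma_stat, frac_sys):
    """Chi-squared with statistical errors and fractional flux-scale errors
    combined in quadrature (frac_sys ~ 0.02 for VLA points, ~ 0.10 for BIMA)."""
    s_obs, s_model = np.asarray(s_obs), np.asarray(s_model)
    sigma_tot = np.sqrt(np.asarray(sigma_stat) ** 2 + (np.asarray(frac_sys) * s_obs) ** 2)
    return np.sum(((s_obs - s_model) / sigma_tot) ** 2)

# Placeholder numbers, for illustration only:
s_obs = [10.0, 6.0, 3.5, 2.1, 0.30]            # Jy at 1.4, 5, 8.4, 15, 108 GHz
s_model = [9.8, 6.1, 3.4, 2.2, 0.28]
sigma_stat = [0.05, 0.03, 0.02, 0.02, 0.03]
frac_sys = [0.02, 0.02, 0.02, 0.02, 0.10]
print(chi_squared(s_obs, s_model, sigma_stat, frac_sys))
```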
We consider two basic models for the electron energy spectrum. Both have high-energy cutoffs, but one has a constant electron energy index of 2, while the other is a broken power law model, allowed to steepen from an electron energy index of 2 to 3 at a given energy. The latter is appropriate for a situation in which particle acceleration is being balanced by synchrotron losses or in which loss processes are important within the hotspot (Pacholczyk 1970; Heavens & Meisenheimer 1987). These two models are equivalent to models (i) and (ii) of Meisenheimer et al. (1989), respectively.
### 5.1 Component Fitting Results
The fitting results are tabulated in Table 3. We find that model (i), the simple, single power-law, never fits the data well, and that in half of the component fits, model (ii), the broken power-law model, fits well with a very high-energy cutoff (labeled as “break” in Table 3). For the rest of the components, a broken-power law and a high-energy cutoff within our data’s frequency range is necessary (labeled as “both” in Table 3). For the chosen model, we tabulate the equipartition magnetic field strength in nT and the best-fitting break energies and, where appropriate, cutoff energies in GeV.
The NW component of the E hotspot (E3) is well fit with the break model (Fig. 3a), but it is very poorly fit with break models having an energy cutoff within our data frequency range; all of the best high cutoff fits to our data have cutoff energies above $`10^{10}`$ eV (corresponding to $`>200`$ GHz). This is due mainly to the essentially constant spectral index between 8 and 108 GHz. The SE component of the E hotspot (E4) is also best fit with a broken power-law spectrum, although not as well, and only poorly with a high-energy cutoff spectrum (Fig. 3b).
These results differ from the conclusion of Meisenheimer, Yates & Röser (1997), who prefer a model with only a high-energy cutoff as a fit to the overall spectrum of the eastern hotspot. This may be the result of subtle measurement differences in the regions and frequencies used by Meisenheimer et al., who took flux densities for the E hotspot from a variety of sources in the literature, or it may be the effect of combining the two hotspot regions. Our results are more consistent with the model favored by Meisenheimer et al. (1989).
The W hotspot is also best fit with a broken power-law model (Fig. 3c), although no fit is particularly good because of the anomalously flat spectral index between 15 and 108 GHz, which our simple models cannot reproduce. The effect may be due to a bad data point at 15 GHz, but it should be noted that we are not resolving the two components of this hotspot (Hardcastle et al. 1997), so the spectral situation is probably more complex than is represented by our simple one-component model. Again, a high-energy cutoff within our frequency range fits the data even more poorly.
Although the northwest corner region is resolved out at high resolution, it dominates the western side of our low-resolution 108 GHz images (Fig. 1a). The spectrum of this region is smoothly curved from centimeter to millimeter wavelengths. It is poorly fit with a single power-law and cutoff model, but reasonably well fit with a spectral break model; however, better fits are obtained with a model with a high-energy cutoff as well as a spectral break (though the improvement is not significant on an F-test) because of the steep 15–108 GHz spectral index.
The northern lobe’s spectrum is poorly fit with the break model or with a high-frequency cutoff; even the combination of the two, though a substantial improvement, gives a clearly poor fit, modeling the 108 GHz data badly (Fig. 3d), because of the way the spectrum first curves between 8 and 15 GHz, then remains straight between 15 and 108 GHz (Table 2). A Jaffe & Perola (1973) aged synchrotron spectrum is also a poor fit, though it does represent the 108 GHz data better. Like the northern lobe, the southern lobe is best fit with a spectral break and high-energy cutoff, but again the fits are not particularly good.
Overall, the regions required models with broken power-laws to achieve good fits, but the three hotspot component models have high-energy cutoffs significantly above 108 GHz, while the three other regions required energy cutoffs within our data frequency range.
### 5.2 Spectral Model Interpretation
The mm-to-cm spectra of both components of the eastern hotspot, resolved at millimeter wavelengths for the first time in our observations, are consistent with a simple, spectral break model, as expected for regions in which ongoing particle acceleration is balanced by synchrotron losses. There is no evidence for significant spectral differences between the two hotspot components, which implies either that particle acceleration (and hence energy supply) is still ongoing in the less compact SE component, as in the model of Williams & Gull (1985), or that it was disconnected from the energy flow less than $`1.5\times 10^4`$ years ago, assuming Jaffe & Perola (1973) spectral aging on top of the broken power-law model for the electron spectrum and an aging field equal to the equipartition field in Table 3. (The estimate of $`1.5\times 10^4`$ years is a 99% confidence limit with $`\mathrm{\Delta }\chi ^2=6.6`$. We neglect the possible effects of adiabatic expansion.)
The three non-hotspot regions studied all show evidence for a high-energy cutoff in addition to the broken power-law spectrum of the hotspots. However, it is clearly more difficult to draw conclusions from the fitted spectra. The fact that the fitted break energies in the lobes are much lower than the break energies in the hotspots may suggest that the assumption of equipartition is wrong in one or both regions, with $`B`$-field strengths deviating from their equipartition values by up to a factor $`40`$. However, X-ray observations suggest that both in the hotspots and in the lobes of other radio galaxies the magnetic field strength is close to equipartition with the energy density in relativistic electrons (Harris, Carilli & Perley 1994; Feigelson et al. 1995; Tsakiris et al. 1996). (The equipartition assumption in the hotspots of 3C 123 will be tested by forthcoming Chandra observations.) Instead, the lower break energies seen in the lobes may simply be due to adiabatic expansion of the electron population as it leaves the hotspot. Radial expansion by a factor $`ϵ`$ moves the electron energy spectrum down by a factor $`ϵ^1`$, so the estimated change in break energies between the hotspots and lobes implies expansion out of the hotspots by factors up to $`6`$, though we emphasize that the break energies in the lobes are only weakly constrained by the data. These factors are rather higher than those that would be estimated from the ratio of magnetic fields between lobe and hotspot (field strength $`Bϵ^2`$ on adiabatic expansion). If either expansion has taken place or the magnetic field in the lobes is much weaker than equipartition, the high-energy cutoffs fitted to the lobe data cannot be said to be unambiguously due to spectral aging; to take the most extreme example, shifting the break energy of the S lobe up to match that of the E4 component of the E hotspot brings the corresponding cutoff energy up to 30 GeV, which is not ruled out by our data. In any case, the expected aged spectrum depends on the detailed order of expansion and aging, and the lobes are probably not spectrally homogeneous, so we do not attempt to fit aging models to the data.
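For reference, the two expansion estimates invoked above follow directly from the quoted scalings (a radial expansion by a factor $`\epsilon `$ takes the electron energies to $`E/\epsilon `$ and the field strength to $`B/\epsilon ^2`$):

$$\epsilon _E=\frac{E_{\mathrm{b},\mathrm{hotspot}}}{E_{\mathrm{b},\mathrm{lobe}}},\qquad \epsilon _B=\left(\frac{B_{\mathrm{hotspot}}}{B_{\mathrm{lobe}}}\right)^{1/2},$$

so a mismatch between $`\epsilon _E`$ and $`\epsilon _B`$, as found here, points either to departures from equipartition or to a more complicated expansion and aging history.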
Unlike the lobe spectra, the best-fit spectrum of the NW corner shows a break energy that is comparable to those in the W hotspot, and is certainly consistent within the large errors introduced by uncertainties in the geometry and field strength. The brightening here may be due either to particle reacceleration in this region or simply to compression. The fact that the break energy is higher than that in the W hotspot while the magnetic field strength is lower might seem to favor a reacceleration model, but if, as seems likely, the hotspots are transient features, the present-day properties of the W hotspot do not necessarily reflect those of the hotspot that was present when the material now at the NW corner was first accelerated. The same caveat, of course, applies to a comparison of the hotspot and lobe spectra.
## 6 Conclusions
We have presented the first sub-arcsecond millimeter wavelength continuum imaging of the radio galaxy 3C 123, resolving the eastern hotspot. These are only the second observations at millimeter wavelengths to resolve a double hotspot pair. Hat Creek and later BIMA observations of the bright, nearby classical double radio galaxy Cygnus A (Wright & Birkinshaw 1983; Wright & Sault 1993; Wright, Chernin & Forster 1997) resolve both the eastern and western double hotspots in that source, and, as in the case of 3C 123, it is found that in Cygnus A there is little or no clear spectral difference between the primary (more compact) and secondary (more diffuse) hotspots. Thus, in both these sources it is impossible to say whether or not there is continued energy supply to the secondary hotspot. The short synchrotron lifetimes at millimeter wavelengths mean that if the secondary hotspots are disconnected from the energy supply, as in the ‘dentist’s drill’ model, the disconnection must have taken place on timescales which are much shorter (by factors of $`>100`$) than the lifetime of a typical radio source. Indeed, numerical simulations suggest that such short-timescale transient hotspot structures are expected in low-density radio sources (Norman 1996).
In both 3C 123 and Cygnus A, there is no clear evidence in the radio structure for continuing outflow between the primary and secondary hotspots. Specifically, there are no filaments connecting the eastern hotspots in 3C 123 together, as there are in several other multiple-hotspot sources or even in the western hotspot pair of 3C 123 (Hardcastle et al. 1997), and the suggestion that the hotspots in Cygnus A are connected by an outflow marked by a ridge seen in the radio is inconsistent with the pressure gradients in the lobes, as pointed out by Cox et al. (1991). Overall, therefore, the situation in these two sources seems most consistent with the picture of Cox et al., in which the bright secondary hotspots are recently disconnected remnants of earlier primaries and are still being, or have been until recently, powered by continued inflow of disconnected jet material. These models predict that sources should exist in which the secondary hotspots are genuinely no longer powered, as in the original dentist’s drill model; such sources should, observed at the right time in the evolution of their hotspots, show a clear spectral difference between the primary and secondary hotspots at millimeter wavelengths. To find them, it seems likely that it will be necessary to look at sources with more typical double hotspot structure and without the dominant, compact secondary hotspots of 3C 123 and Cygnus A; we have BIMA data for such a source (3C 20) and will report on our results in a future paper.
On larger scales, our observations of 3C 123 show a striking difference in the spectra of the northern and southern lobes; the northern ‘arm’ of the northern lobe is quite clearly detected at our observing frequency, while there is absolutely no detection of any extended emission at 108 GHz south of the eastern hotspot. The spectral difference extends back down to GHz frequencies, in spite of the fact that at 1.4 GHz the northern and southern lobe regions are morphologically quite similar and have similar surface brightness. We have not been able to rule out particle (re)acceleration at the bright ‘northwest corner’ of the northern lobe, which might account for the difference, but we note that there is some detected extended emission at 108 GHz in the northern lobe between the western hotspot and the ‘northwest corner’, which is not consistent with such a picture. The difference could be caused simply by different aging processes or different magnetic field strengths in the two regions. However, it is tempting to relate the differences in northern and southern lobes with the differences in the corresponding hotspots. Specifically, we suggest, as in the models of Meisenheimer et al. (1989), that the ‘high-loss’ eastern hotspot does not efficiently accelerate particles to the high energies required to produce 108-GHz emission from the lobes, while the less spectacular western hotspot is more efficient at putting the energy supplied by the jet into high-energy electrons.
We thank the Hat Creek staff for their efforts in the construction and operation of the long baseline array. We would also like to thank Matt Lehnert and Christian Kaiser for discussions, and Robert Laing for allowing us to use his archival VLA data. This work was supported by NSF grants NSF-FD93-20238, NSF-FD96-13716, and AST-9314847, and PPARC grant GR/K98582. The National Radio Astronomy Observatory Very Large Array is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
Fixed Number and Quantum Size Effects in Nanoscale Superconductors
## Abstract
In recent experiments on nanoscale Al particles, whose electron number was fixed by charging effects, a “negative gap” was observed in particles with an odd number of electrons. This observation has called into question the use of a grand canonical ensemble in describing superconductivity in such ultrasmall particles.
We have studied the effects of fixed electron number and finite size in nanoscale superconductors, by applying the canonical BCS theory for the attractive Hubbard model. The ground state energy and the energy gap are compared with the conventional and parity-projected grand canonical BCS results, and in one dimension also with the exact solutions by the Bethe ansatz. The crossover from the bulk to quantum limit is studied for various regimes of electron density and coupling strength. The effects of boundary conditions and different lattice structures are also examined.
A “negative gap” for odd electron number emerges most naturally in the canonical scheme. For even electron number, the gap is particularly large for “magic numbers” of electrons for a given system size or of atoms for a fixed electron density. These features are in accordance with the exact solutions, but are essentially missed in the grand canonical results.
The ability to fabricate ultrasmall superconducting particles in a reasonably controlled way allows us to revive old questions . The question we focus on here is the validity (and usefulness) of the grand canonical ensemble vs a canonical one for a description of very small superconducting particles.
The canonical and parity-projected BCS formalisms have been described elsewhere . Figure 1 shows a comparison of even and odd canonical (CBCS) and grand canonical (GCBCS) ground state energies, along with exact ones, for the attractive Hubbard model (AHM) in 1 D, for coupling strength (scaled by the kinetic energy parameter $`t`$) $`|U|/t=4`$ and 10. Odd-even effects are clearly discernible.
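For reference, the attractive Hubbard model referred to throughout is the standard lattice Hamiltonian with nearest-neighbor hopping $`t`$ and on-site attraction $`|U|`$, written here in its usual form:

$$H=-t\sum _{\langle ij\rangle \sigma }\left(c_{i\sigma }^{\dagger }c_{j\sigma }+\mathrm{h.c.}\right)-|U|\sum _in_{i\uparrow }n_{i\downarrow }.$$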
Figure 2 shows the (even) GCBCS result for the gap ($`\mathrm{\Delta }_{}`$) vs. electron density $`n`$, along with the smoothed density of states (DOS) as a function of single-electron energy $`ϵ_k`$. The structure visible in the gap requires painstaking numerical work, and reflects the underlying discrete density of states (as seen in Fig. 3 below). In Fig. 3 we show a smaller system, with both CBCS and (even) GCBCS results as a function of electron density. Quite a few anomalously high gaps occur, at various values of $`n`$, as revealed by the CBCS result ($`\mathrm{\Delta }_{N_e}`$).
Finally, in Fig. 4 we examine the CBCS gap (normalized by the bulk value) as a function of system size $`N`$, for systems with an even number of electrons. There are two distinct curves which approach the bulk limit (solid circles), corresponding to $`N_e=4m`$ or $`N_e=4m+2`$, with $`m`$ an integer, the so-called ‘super-even’ effect . Clearly, the transition to the bulk is smooth.
Viscoelastic Depinning of Driven Systems: Mean-Field Plastic Scallops
## Abstract
We have investigated the mean field dynamics of an overdamped viscoelastic medium driven through quenched disorder. The model allows for the coexistence of pinned and sliding regions and can exhibit continuous elastic depinning or first order hysteretic depinning. Numerical simulations indicate mean field instabilities that correspond to macroscopic stick-slip events and lead to premature switching. The model describes the elastic and plastic dynamics of driven vortex arrays in superconductors and other extended disordered systems.
Extended condensed matter systems driven over quenched disorder exhibit a very complex dynamics, including nonequilibrium phase transitions and history dependence. Such systems include vortex arrays in type-II superconductors, charge density waves in anisotropic conductors, and many others. Closely related behavior also arises in friction and lubrication, where a surface or monolayer is brought in contact with another solid surface and forced to slide relative to it.
Most of the theoretical work to date has focused on the dissipative dynamics of driven elastic media that are distorted by disorder, but cannot tear. At zero temperature such systems exhibit a sharp depinning transition from a pinned to a sliding state. The transition, first studied in the context of charge density waves, is continuous, with universal critical behavior. The sliding state is unique and there is no hysteresis or history dependence . More recent work, while still focusing on elastic media, has shown that the dynamics is quite rich well into the uniformly sliding state.
On the other hand, experiments and simulations show that the elastic medium model is inadequate for many physical systems with strong disorder that upon depinning exhibit a spatially inhomogeneous plastic response, without long wavelength elastic restoring forces. In this plastic flow regime, topological defects proliferate and the system is broken up in fluid-like regions flowing around pinned solid regions. Not much progress has been made in describing this behavior analytically. The wealth of experimental work on driven vortex arrays clearly indicates that, in most of the field and temperature region of interest, the current-driven vortex dynamics is strongly history dependent, with long-term memory and switching as the system explores a variety of nonequilibrium sliding states .
In this paper we describe a coarse-grained model for the dynamics of a driven medium that allows for spatially inhomogeneous response, with the coexistence of moving and pinned regions. The model is inspired by the well-known phenomenology of viscoelasticity in dense fluids . The elastic couplings between the local displacements are replaced by couplings that are nonlocal in time and allow for elastic restoring forces to turn into dissipative fluid flow on short time scales. The model yields elastic depinning in one limit; as the parameters are varied, it incorporates continuous depinning, hysteretic plastic depinning and eventually viscous flow, allowing the crossovers between these regimes (such as those, observed in vortex arrays ) to be studied in detail. For a wide range of parameter values the depinning transition is first order, with velocity hysteresis (switching.) The nonlinear velocity-force characteristic can be evaluated analytically in mean field for various types of pinning forces, under the assumption of constant mean field velocity. Numerical simulations confirm the inhomogeneous nature of the dynamics, with pinning and tearing (coexisting moving and pinned degrees of freedom.) In addition, the mean velocity near depinning fluctuates, due to macroscopic stick-slip type events. These events appear to only mildly violate the uniform mean-velocity assumption but directly lead to switching from one velocity branch to another before the first branch terminates (premature switching.) Models that account for switching in charge density waves and are in spirit similar to ours have been proposed and studied by Strogatz and collaborators . In such models, plasticity is modeled by a non-convex elastic potential, in contrast with the velocity convolutions studied here. A model similar to ours has also been proposed for crack propagation .
The model: a driven viscoelastic medium. To motivate our model, we first recall the generic model of driven elastic media discussed extensively in the literature, where the long-wavelength dynamics is described in terms of a coarse-grained displacement field, $`u(𝐫,t)`$. The displacement fields represent deformations of regions pinned collectively by disorder (e.g., a Larkin domains) and are coupled by convex elastic interactions. No topological defects are allowed. Considering, for simplicity, the overdamped dynamics of a scalar field (the model is easily extended to the more general case) and modeling the displacement field on lattice sites, $`u(𝐫,t)u_i(t)`$, the equation of motion for the local field $`u_i`$ (measured in the laboratory frame) at site $`i`$ is
$$\gamma _0\dot{u}_i=\sum _{\langle ij\rangle }\mu _{ij}(u_j-u_i)+F+F_i(u_i),$$
(1)
where the summation is restricted to nearest neighbor pairs and $`\gamma _0`$ is the friction. If all the nearest-neighbor elastic couplings, $`\mu _{ij}\geq 0`$, are equal, the first term on the right hand side of Eq. (1) is the discrete Laplacian in $`d`$ dimensions. The second term is the homogeneous driving force, $`F`$, and $`F_i(u_i)`$ denotes the pinning force arising from a quenched random potential, $`V_i(u_i)`$, $`F_i(u_i)=-dV_i/du_i=h_if(u_i-\beta _i)`$, with $`f(u)`$ a periodic function with period $`1`$ and $`\beta _i`$ random phases uniformly distributed in $`[0,1]`$. The $`h_i`$ are chosen independently at each site from a distribution $`\rho (h)`$. One of the quantities of interest is the average velocity of the driven medium, $`\overline{v}(F)=N^{-1}\sum _i\dot{u}_i`$. For an elastic medium there is a unique stationary sliding state for $`F>F_c`$, with critical behavior $`\overline{v}(F)\propto (F-F_c)^\beta `$ , and no hysteresis at the transition .
We now modify the elastic interactions in Eq. (1) to allow for local tearing of the medium. Inspired by standard models of viscoelasticity, we replace the elastic interaction by couplings to the local velocity field, $`v_i=\dot{u}_i`$, that are nonlocal in time. Our model equation for the overdamped dynamics of a driven viscoelastic medium is
$$\gamma _0\dot{u}_i=\sum _{\langle ij\rangle }\int _0^tds\,C_{ij}(t-s)\left[\dot{u}_j(s)-\dot{u}_i(s)\right]+F+F_i(u_i),$$
(2)
where the viscous couplings $`C_{ij}(s)`$ have finite first moments, $`\int _0^{\infty }ds\,C_{ij}(s)=\eta _{ij}<\infty `$, and $`C_{ij}(0)=\mu _{ij}`$. Such nonlocal couplings to velocity are of course not present at the microscopic level, but are generated generically upon coarse-graining. Eq. (2) is a coarse-grained model for the dynamics of a driven disordered medium that allows for slip or friction of the interacting Larkin domains relative to each other.
A simple, yet successful, model of viscoelasticity due to Maxwell is obtained when the memory kernels are assumed to be uniform in space and to decay exponentially in time, according to $`C_{ij}(t)=\mu e^{-t/\tau }`$, with $`\tau =\eta /\mu `$ the Maxwell relaxation time. For $`\tau \rightarrow \infty `$ and fixed $`\mu `$, Eq. (2) reduces to Eq. (1) for a driven elastic medium. For $`\tau \rightarrow 0`$ and $`\eta `$ fixed, the first term on the right hand side of Eq. (2) can be approximated as $`\sum _{\langle ij\rangle }\eta _{ij}[v_j(t)-v_i(t)]`$, which represents viscous forces coupling the local fluid velocity at different spatial points. In this limit, Eq. (2) describes an overdamped lattice-fluid of viscosity $`\eta `$. We propose Eq. (2) as a simple, yet realistic model for a driven disordered system that exhibits spatially inhomogeneous plastic response.
Mean Field Approximation. As for the driven elastic media, substantial analytical progress in two or three dimensions is presumably only possible via perturbation theory or by a functional renormalization group treatment . An alternative approach that has provided valuable insight for a driven elastic medium is mean field theory (MFT), first discussed by D. S. Fisher . MFT is formally valid in the limit of infinite-range interaction, with $`_jC_{ij}=NC(t)`$ held fixed. The equation of motion for the displacement at each site is then given by
$$\gamma _0\dot{u}_i=\int _0^tds\,C(t-s)\left[\overline{v}(s)-\dot{u}_i(s)\right]+F+F_i(u_i),$$
(3)
where the mean field is given by $`\overline{u}(t)=N^{-1}\sum _{i=1}^Nu_i(t)`$, and $`\overline{v}(t)=\dot{\overline{u}}(t)`$.
If the memory kernel $`C(t)`$ is chosen to be of the Maxwell form, it is then possible to transform the integro-differential equation (3) to a second-order differential equation, given by
$`\tau \ddot{u}+\gamma (\eta ,\tau ,h;u_i)\dot{u}=F+F_i(u_i)+\eta \overline{v},`$ (4)
with $`\gamma (\eta ,\tau ,u_i;h)=1+\eta -\tau \frac{\partial F_i}{\partial u_i}`$ an effective friction. We have scaled Eq. (4) by letting $`\tau \rightarrow \tau h_0`$, $`t\rightarrow th_0`$, $`\eta \rightarrow \eta /\gamma _0`$, $`F\rightarrow F/(\gamma _0h_0)`$ and $`h\rightarrow h/(\gamma _0h_0)`$, where $`h_0`$ is the characteristic scale of the distribution $`\rho (h)`$. With this change of variables, the model is now characterized by two parameters, $`\eta `$ and $`\tau `$, and the shape of $`\rho (h)`$. The MF equation for our viscoelastic model closely resembles the MF equation for a driven massive elastic medium, with $`\tau `$ playing the role of the mass. The most important difference is that in the massive elastic medium the MF term $`\eta \overline{v}`$ is replaced by $`\mu \overline{u}`$. As a result, the MFT of a driven massive elastic medium even with constant $`\overline{v}`$ contains three degrees of freedom (as opposed to the two of our problem) and the dynamics of a single $`u_i`$ is chaotic .
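The reduction from Eq. (3) to Eq. (4) is a standard manipulation for exponential memory kernels, spelled out here for completeness. Writing the memory term of Eq. (3) as

$$I_i(t)=\mu \int _0^tds\,e^{-(t-s)/\tau }\left[\overline{v}(s)-\dot{u}_i(s)\right],$$

one has $`\dot{I}_i=\mu [\overline{v}(t)-\dot{u}_i(t)]-I_i/\tau `$. Differentiating Eq. (3) once with respect to time and eliminating $`I_i`$ through $`I_i=\gamma _0\dot{u}_i-F-F_i(u_i)`$ then yields the second-order form of Eq. (4), with the effective friction $`\gamma `$ quoted above.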
We are first interested here in steadily sliding solutions of the MF model, Eq. (3). It is natural to look for periodic solutions $`u_p(t;h)`$ of period $`T(h)`$, ($`\int _0^{T(h)}dt\,\dot{u}_p(t;h)=1`$) that may set in after an initial transient ($`t\gg T(h)`$). Such solutions need not be unique. Guided by a large body of previous work on driven elastic media, we focus on the MFT for the case of a pinning potential with cusp-like singularities, which better captures the physics of the corresponding finite-dimensional model . An explicit solution of Eq. (4) can be obtained for the scalloped parabolic potential, $`V(u)=(h/2)(u^2-u+1/4)`$. In this case Eq. (4) is linear and its general solution is $`u_p(t;h)=C_1\mathrm{exp}(-\lambda _1t)+C_2\mathrm{exp}(-\lambda _2t)+1/2+(\eta \overline{v}+F)/h`$, with $`\lambda _{1,2}=\left(1+\eta +\tau h\pm \sqrt{(1+\eta +\tau h)^2-4\tau h}\right)/(2\tau )`$. For each fixed value of $`h`$, we obtain an implicit equation for the period, $`T(h)`$, $`\eta \overline{v}+F=𝒢(T;\eta ,\tau ,h)`$, with
$$𝒢(T;\eta ,\tau ,h)=\frac{\lambda _1(1-e^{-\lambda _1T})-\lambda _2(1-e^{-\lambda _2T})+\tau \lambda _1\lambda _2(e^{-\lambda _1T}-e^{-\lambda _2T})}{(\lambda _1-\lambda _2)(1-e^{-\lambda _1T})(1-e^{-\lambda _2T})}-\frac{h}{2}.$$
(5)
The solution of Eq. (5), together with the self-consistency constraint $`\overline{v}=\langle [T(h)]^{-1}\rangle _h`$, determines the drift velocity as a function of driving force, $`F`$. When $`T(h_i)\rightarrow \infty `$, the $`u_i`$ is pinned.
Figure 1 shows the analytical solution for the mean velocity as a function of driving force for $`\rho (h)=\mathrm{exp}(-h)`$. The depinning occurs at $`F=0`$ for all distributions of pinning strengths, $`\rho (h)`$, with support not bounded from below by a positive $`h_{\mathrm{min}}`$. For small $`\eta `$ and $`\tau `$, corresponding to weak coupling among the local displacements, the analytical solution is single-valued and the depinning is continuous. For large $`\eta `$ and $`\tau `$ the analytical solution yields multi-valued velocity curves, reflecting the existence of multiple sliding states, and the depinning is hysteretic. As shown in the inset of Fig. 1, there is a critical value, $`\eta _c(\tau )`$, that separates single-valued from multi-valued solutions. The value $`\eta =\eta _c`$ is a critical point and the velocity curve is expected to exhibit critical scaling. While the value of $`\eta _c`$ depends on $`\tau `$, the existence of a hysteretic region at large $`\eta `$, with coexistence of pinned and sliding states and early switching (see also Fig. 1), occurs for all finite values of $`\tau `$, including $`\tau =0`$. For $`\tau \rightarrow \infty `$ and $`\eta \rightarrow \infty `$, with the ratio $`\mu =\eta /\tau `$ held fixed, Eq. (4) reduces to the MFT of an overdamped elastic medium . In this case an analytical solution is available and the velocity vanishes linearly as $`F\rightarrow F_c`$ .
Numerical work. We have investigated the stability of the branches of the analytically determined current-drive relationship. We performed direct numerical simulation of the equations of motion, for both force drive and constrained mean velocity. The simulations were performed using two codes, for verification: a Runge-Kutta integration and an event-driven Euler integration, with the “events” being crossings of a displacement $`u_i`$ from one parabolic region to the next. The results were checked for insensitivity to time step $`\mathrm{\Delta }t`$ and size $`N`$. For the constant $`\overline{v}`$ constraint, the drive-velocity relationship matches the analytical prediction.
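For concreteness, the kind of direct integration described above can be sketched in a few lines. The following is a minimal illustration of Eq. (4) with the scalloped parabolic pinning potential, in the scaled units of the text; the parameter values, the simple explicit Euler update, and the run length are arbitrary illustrative choices and are not those used to produce the figures.

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau, eta, F = 1000, 0.5, 6.0, 0.3        # illustrative parameters only
h = rng.exponential(1.0, N)                 # pinning strengths, rho(h) = exp(-h)
beta = rng.random(N)                        # random phases
u = np.zeros(N)                             # displacements
v = np.zeros(N)                             # velocities
dt, nsteps = 1e-3, 50000

def f_pin(u):
    """Force of the scalloped parabolic potential (period 1, cusps at u = beta mod 1)."""
    x = (u - beta) % 1.0
    return h * (0.5 - x)

for _ in range(nsteps):
    vbar = v.mean()                         # mean field
    gamma = 1.0 + eta + tau * h             # since dF_i/du = -h between cusps
    a = (F + f_pin(u) + eta * vbar - gamma * v) / tau
    v += dt * a
    u += dt * v

print("mean velocity:", v.mean())
```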
In the regions where the velocity is a unique function of the drive, the simulation results with slowly changing $`F`$ for the force-drive curve match very closely those of the analytical results, which assume a constant $`\overline{v}`$. In the presumed hysteretic region, though, the simulation results can be quite different. In particular, we note two features: mean field velocity oscillations on the lower branch and “early” switching, where the mean velocity switches from the lower to upper branch prior to the end of the analytically computed hysteresis region. A sample hysteresis curve indicating early switching is shown in Fig. 2. We have computed the magnitude of the fluctuations in the mean velocity on the upper branch as a function of $`N`$: the results are numerically consistent with a magnitude $`\propto N^{-1/2}`$, indicating that these fluctuations vanish as $`N\rightarrow \infty `$. The fluctuations on the lower branch do not vanish in the limit of large $`N`$, however. These fluctuations are presumably due to an instability of the constant $`\overline{v}`$ solution in the large volume limit. We hypothesize, with the support of detailed analysis of the numerics, that nearly depinned degrees of freedom (which would remain pinned at constant $`\overline{v}`$) are made unstable by velocity fluctuations and lead to an avalanche type of behavior, which causes a peak in $`\overline{v}`$. The magnitude of this instability apparently becomes large enough to drive the mean velocity to the upper branch before the presumed constant $`\overline{v}`$ velocity jump occurs.
In conclusion, we have introduced a coarse-grained model of plastic flow that allows for slip of coherently pinned domains. We have solved this model analytically in mean field for the case of Maxwellian kernel, under the assumption of non-fluctuating mean velocity. We find that (1) the model exhibits both continuous and first order hysteretic depinning as the parameters are varied, (2) we can recover the case of elastic depinning in one limit, (3) pinned and sliding regions coexist in the hysteretic regime, and (4) the mean velocity curves display features observed in experiments. Numerical simulations suggest that the behavior is much richer than suggested by the MF calculation and includes stick-slip-like instabilities which lead to early switching. Strong history dependence has been observed in the dc response of vortex lattices in type-II superconductors and in charge density waves . Hysteresis in vortex lattice motion is most pronounced in the region of the so-called peak effect, where the dc response during ramp-up of the current proceeds via a series of jumps. These have been attributed to strong spatial inhomogeneities in the distribution of vortex velocities, not unlike what is observed in our model . We expect that in finite dimensions, the transition to hysteresis will be characterized by non-trivial universal scaling exponents , similar to the situation for hysteresis in random magnets , and that these exponents could be experimentally tested.
One of us (MCM) thanks Daniel Fisher and Jennifer Schwarz for illuminating discussions. The work was supported by NSF through grants DMR-9730678, POWRE-DMR9805818 and CAREER-DMR9702242.
Massive Stellar Clusters in Interacting Galaxies
## 1. Introduction
Ten years ago there were only suggestions that interacting and merging galaxies contained young, massive star clusters (Schweizer 1982; Lutz 1991). However, subsequent observations, especially those using the Hubble Space Telescope, have shown that young clusters are nearly ubiquitous in such systems. Table 1 of Schweizer (1999) gives a nearly complete list of galaxies observed to have young star clusters. To this list can be added recent papers on NGC 3597 (Carlson et al. 1999; Forbes & Hau 1999) and NGC 5128 (Holland et al. 1999). It would seem that cluster formation is a natural result of the star formation triggered by strong gravitational interactions or direct collisions.
Many of these observations were motivated by the question of whether ellipticals can form from the mergers of two spirals, but, in addition, they provide important information about the formation and evolution of the star clusters themselves. The sizes and profiles of the youngest clusters can constrain their initial states. The distribution of ages gives the cluster formation history, which can be compared with the dynamical and star formation histories. The ages and metallicities allow us to determine masses and the mass function. This is critical for understanding the physics of how clusters form. The evolution of the mass function then shows us how the interplay between stellar evolution and both internal and external dynamics affect cluster evolution. This paper will review the sizes, ages, and masses of young star clusters in merging galaxies. The focus will be on recent results from WFPC2 observations of NGC 4038/39 (“the Antennae”, Whitmore et al. 1999, hereafter W+99; Zhang & Fall 1999, hereafter ZF99) and NGC 7252 (Miller et al. 1997, hereafter M+97).
## 2. Cluster Sizes
The sizes of the star cluster candidates determine whether they are structurally similar to Galactic globular clusters (GCs) with effective radii of a few parsecs, or to open clusters and associations which can have a much wider range of sizes but which are generally bigger than GCs. The sizes and profiles of the youngest clusters are also important initial conditions for dynamical models of clusters (see Zwart’s comments in the Discussion). In addition, Galactic GCs have a lognormal or broken power-law mass function with a characteristic mass of about $`10^5`$ M$`_{\odot }`$, while open clusters have an unbroken power-law mass function. Thus, density could be related to formation process. Pre-refurbishment HST images of NGC 4038/39 and NGC 7252 showed the cluster candidates to have $`R_{\mathrm{eff}}>10`$ pc and a power-law luminosity function. Thus, it was argued that these objects would not become GCs and that galaxy mergers would not produce GC systems like those seen in ellipticals, calling into question whether ellipticals were produced by mergers (van den Bergh 1995).
Observations with the corrected optics of WFPC2 have consistently shown that the bulk of the young cluster candidates are marginally resolved and that $`R_{\mathrm{eff}}<5`$ pc (Schweizer et al. 1996; Whitmore et al. 1997; M+97; Carlson et al. 1999; W+99). Thus, the effective radii are consistent with the values for old GCs in M87 (Whitmore et al. 1995) and the Milky Way. As suspected, the larger effective radii measured previously were due to the difficulty of measuring sizes on the aberrated WF/PC1 images.
In the Antennae we may now be seeing changes in both the effective and tidal radii with age (see Section 3). Old cluster candidates have $`R_{\mathrm{eff}}=3.0\pm 0.3`$ pc while young and intermediate age cluster candidates have $`R_{\mathrm{eff}}=4.6\pm 0.4`$ pc. Further, the tidal radii of the young clusters can be much larger than for the old clusters (Figure 1). Thus, the density distribution of clusters may extend beyond their tidal radii at birth and a few orbits around the galaxy are needed to remove the stars beyond the tidal radius.
## 3. Metallicities and Ages
Since broad-band colors are degenerate in age and metallicity, we must have an independent measurement of one of these properties in order to determine the other from evolutionary models. Spectroscopy of the brightest three young clusters in NGC 7252 shows that they have near-solar metallicity (Schweizer & Seitzer 1998), so solar metallicity is assumed for all the young clusters. Then, ages are determined by comparing the broad-band colors and luminosities with evolutionary models for simple stellar populations. The youngest clusters are often surrounded by considerable dust, so we attempt to correct for this internal extinction by calculating “reddening-free” indices based on the ($`U-B`$), ($`B-V`$), and ($`V-I`$) colors (M+97; W+99). In NGC 4038/39, the youngest clusters can also be distinguished by their H$`\alpha `$ emission.
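The classic example of such a reddening-free combination is Johnson’s $`Q`$ parameter, which for a standard Galactic extinction law is

$$Q=(U-B)-\frac{E(U-B)}{E(B-V)}(B-V)\approx (U-B)-0.72(B-V);$$

the indices used in M+97 and W+99 are built in the same spirit but combine all three of the colors listed above.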
Multiple populations of star clusters can be distinguished in several systems. Four populations have been identified in the Antennae: 1) a $`<10`$ Myr-old population with compact H$`\alpha `$ emission located near the dusty overlap region; 2) a $`100`$ Myr-old population found further out in the disk of NGC 4038; 3) a $`500`$ Myr-old population that may have been formed during the first close encounter when the tidal tails were formed; and 4) a few $`10`$ Gyr-old clusters that are probably original GCs from the progenitor galaxies (W+99). The very young ages of the youngest clusters are confirmed by ultraviolet spectroscopy (W+99) and infrared spectroscopy (see the contributions by Gilbert and Mengel). The older merger remnant NGC 7252 has a $`<10`$ Myr-old population of rather extended clusters associated with the central gas disk, a 500–800 Myr population that formed during the merger, and old clusters from the progenitor galaxies (M+97). Young cluster formation lasts for several hundred Myr, consistent with the dynamical time-scale of the merger event.
## 4. Luminosity and Mass Functions
WFPC2 observations of young star clusters have most often found the luminosity functions (LFs) to have a power-law shape, $`\varphi (L)\propto L^{-\alpha }`$, with $`\alpha \approx 1.8`$ down to the completeness limits of the observations (e.g. Schweizer et al. 1996; M+97; Carlson et al. 1999). The masses of the most luminous clusters, as inferred from evolutionary models of the appropriate age and metallicity, can approach $`10^8`$ M$`_{\odot }`$, over an order of magnitude more massive than the most massive Galactic GC (Schweizer & Seitzer 1998). These are extreme clusters, even considering the fact that the mass-to-light ratios of the models are about a factor of two higher than measured (Fritze-v. Alvensleben, this proceedings). At the faint end, the new observations are sensitive enough to detect objects less massive than $`10^5`$ M$`_{\odot }`$, the mass at the peak of the old GC mass function. If the mass function were peaked, then one would expect to see a bump in the luminosity function, since fading shifts the luminosity function without changing its shape. This immediately suggests that the mass function is a power law. However, one must be sure that large relative age spreads, reddening, and stellar contamination do not affect the shape of the cluster luminosity function (cf. Meurer 1995).
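To make the fading argument explicit: for a coeval population with a single mass-to-light ratio, a power-law LF of index $`\alpha `$ is equivalent, in magnitude units, to

$$\frac{dN}{dm}\propto 10^{0.4(\alpha -1)m},$$

and uniform fading only shifts this distribution along the magnitude axis without changing its slope, so a peak in the mass function would survive as a bump in the observed LF.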
A few observations are now suggesting that the young cluster LF may not be a single power law. Zepf et al. (1999) find the LF for young clusters in NGC 3256 to be slightly flattened for $`M_B>-11`$. However, the statistical significance of the flattening is relatively weak (2.5$`\sigma `$) and the most likely mass function is a power law with $`\alpha =1.8`$. The new observations of the Antennae show a stronger flattening of the luminosity function (Figure 2). The break occurs at a mass of $`10^5`$ M$`_{\odot }`$, similar to the peak in the old GC mass function. While this is suggestive that the mass function has a break or peak, a reconstruction of the mass function by ZF99 shows that it is still most likely a single power law with $`\varphi (M)\propto M^{-2}`$.
The proximity of the Antennae, the depth of the photometry, and the youth of the starburst made stellar contamination a significant issue. However, stellar contamination at the faint end of the cluster LF may be significant even for the older and more distant merger remnants like NGC 3921 and NGC 7252.
I have attempted to determine the young cluster mass function in NGC 7252 by matching the observed LF with Monte Carlo simulations. Artificial clusters are drawn from either a power-law mass function consistent with the GC mass function of the Galaxy, or a power-law mass function with slope equal to the observed slope of the cluster LF. The masses are converted to magnitudes and colors using Bruzual & Charlot (1996) evolutionary models of simple stellar populations. Young clusters are assumed to have solar metallicity, and the mean age is determined by matching the ($`V-I`$) colors of the clusters. Some simulations also include artificial stars drawn from a Salpeter IMF and placed in the color-magnitude diagram using Geneva evolutionary tracks (Schaller et al. 1992) and bolometric corrections from Bessell, Castelli, & Plez (1998) for solar metallicity.
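A schematic version of one Monte Carlo realization is sketched below. The mass range, the mass-to-light ratio, and the photometric scatter are placeholder values; the actual calculation uses the Bruzual & Charlot models, the measured errors, and the selection criteria described in the next paragraph.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_power_law(n, m_lo, m_hi, alpha=1.8):
    """Draw n cluster masses from dN/dM ~ M**(-alpha) between m_lo and m_hi."""
    r = rng.random(n)
    a = 1.0 - alpha
    return (m_lo**a + r * (m_hi**a - m_lo**a)) ** (1.0 / a)

M_SUN_V = 4.83                                 # absolute V magnitude of the Sun
masses = sample_power_law(20000, 1e3, 1e7)     # placeholder mass range [M_sun]
ml_v = 0.05                                    # placeholder V-band M/L for a young population
L_v = masses / ml_v                            # luminosities in solar units
M_v = M_SUN_V - 2.5 * np.log10(L_v)            # absolute V magnitudes
M_v += rng.normal(0.0, 0.1, M_v.size)          # crude stand-in for photometric errors

lf, edges = np.histogram(M_v, bins=30)         # simulated luminosity function
```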
Measurement error and selection criteria are applied to the simulations before comparing them with the observations. Gaussian-distributed random errors are added to the colors and magnitudes of model clusters and stars based on the photometric uncertainties of the observed clusters. Then, both the observed and simulated clusters are selected according to the same criteria. The goodness-of-fit between a simulation and the observations is measured by the $`\chi ^2`$ per degree of freedom, $`\chi ^2/\nu `$.
If all the observed objects are clusters, then the mass function is a power law with slope $`\alpha \approx 1.8`$ (Figure 3). However, stellar contamination at the faint end of the cluster LF may be important. Using a star formation rate (SFR) measured from H$`\alpha `$ images results in more objects at faint magnitudes than are observed. With young clusters drawn from a lognormal mass function, the SFR needed to match the observed LF is about half the observed SFR (Figure 4). This difference could be explained if the binary fraction is about 50%. The observed SFR could be much lower if the H$`\alpha `$ flux is due to shocks or other processes besides star formation. Most of the flux in the region under consideration is from diffuse emission rather than discrete HII regions. The mass-loss prescription in the stellar evolutionary tracks is also a crucial parameter; higher mass-loss rates yield fewer supergiants. The main point is that determining cluster mass functions is complicated; all these factors must be considered.
## 5. Conclusions
The study of young star clusters in interacting and merging galaxies is a good example of the symbiotic relationship between observations and theory. Observations of the youngest star clusters in the Antennae are providing the initial density distributions and the initial cluster mass function that are needed for models of individual star clusters and cluster systems. Further observations of older merger remnants will hopefully show how the mass function evolves with time. On the other hand, evolutionary models are needed to convert colors and luminosities into ages and masses. Dynamical models will explain the processes that cause clusters and cluster systems to evolve.
The current observations suggest that young clusters may be born without a tidal radius, that cluster formation occurs over several hundred Myr in a merger, and that the mass function of young clusters is most likely a power-law (though there are indications of flattening). Thus, there still appears to be a difference between the mass functions of young clusters and old GCs. This could be due either to the effect of different initial conditions at recent epochs (e.g. increased metallicity) or to the slow destruction of low-mass clusters over a Hubble time (see ZF99 and references therein).
### Acknowledgments.
I would like to thank the Leids Kerkhoven Bosscha Fonds for a subsidy that allowed me to attend this workshop. Rob Kennicutt and Audra Baleisis kindly provided the H$`\alpha `$ image of NGC 7252. Thanks to Michael Fall, Brad Whitmore, and Gerhardt Meurer for many useful suggestions on modeling the cluster mass function.
## References
Bessell, M. S., Castelli, F., & Plez, B. 1998, A&A, 333, 231
Carlson, M. N., et al. 1999, AJ, 117, 1700
Forbes, D. A., & Hau, G. K. T. 1999, MNRAS, in press (astro-ph/9910421)
Harris, W. E. 1991, ARA&A, 29, 543
Holland, S., Côté, P., & Hesser, J. E. 1999, A&A, 348, 418
Lutz, D. 1991, A&A, 245, 31
Meurer, G. R. 1995, Nature, 375, 742
Miller, B. W., Whitmore, B. C., Schweizer, F., & Fall, S. M. 1997, AJ, 114, 2381 (M+97)
Schaller, G., Schaerer, D., Meynet, G., & Maeder, A. 1992, A&AS, 96, 269
Schweizer, F. 1982, ApJ, 252, 455
Schweizer, F. 1999, in Spectrophotometric Dating of Stars and Galaxies, eds. I. Hubeny, S. R. Heap, & R. H. Cornett (San Francisco: ASP), in press.
Schweizer, F., Miller, B. W., Whitmore, B. C., Fall, S. M. 1996, AJ, 112, 1839
Schweizer, F., & Seitzer, P. 1998, AJ, 116, 2206
van den Bergh, S. 1995, ApJ, 450, 27
Whitmore, B. C., Sparks, W. B., Lucas, R. A., Macchetto, F. D., & Biretta, J. A. 1995, ApJ, 454, L73
Whitmore, B. C., Miller, B. W., Schweizer, F., & Fall, S. M. 1997, AJ, 114, 1797
Whitmore, B. C., Zhang, Q., Leitherer, C., Fall, S. M., Schweizer, F., & Miller, B. W. 1999, AJ, 118, 1551 (W+99)
Zhang, Q., & Fall, S. M. 1999, ApJ, 527, L81 (ZF99)
## Discussion
C. Boily: How are the tidal radii of candidate clusters estimated? What might be the velocity dispersion of populations of candidate clusters?
B. Miller: The tidal radius depends on the shape of the potential, which is complicated in a merging system. However, the potential of a young merger like NGC 4038 may still not be too different from that of a normal disk. Thus, we can get a range of likely tidal radii by looking at the Galaxy, M31, and truncated clusters in the Antennae itself. These estimates do seem to be similar, with values of $`r_t=50`$–$`100`$ pc. The velocity dispersion of the cluster system is probably on the order of 100 km s<sup>-1</sup> (see Schweizer & Seitzer 1998).
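As a rough illustration of this kind of estimate, the Jacobi (tidal) radius of a cluster on a circular orbit in a galaxy with a flat rotation curve can be evaluated directly; the cluster mass, orbital radius and circular speed below are assumed purely for illustration and are not fitted values.

```python
G = 4.301e-3    # gravitational constant in pc (km/s)^2 / Msun

def jacobi_radius(m_cluster, r_orbit_pc, v_circ_kms):
    """Tidal (Jacobi) radius for a circular orbit in a flat-rotation-curve galaxy:
    r_t = r * (m_cluster / (2 * M_enclosed))**(1/3), with M(<r) = v^2 r / G."""
    m_enclosed = v_circ_kms**2 * r_orbit_pc / G
    return r_orbit_pc * (m_cluster / (2.0 * m_enclosed)) ** (1.0 / 3.0)

# illustrative numbers only: a 1e5 Msun cluster at 5 kpc with v_c = 220 km/s
print(jacobi_radius(1.0e5, 5000.0, 220.0))    # ~50 pc, in the 50-100 pc range quoted above
```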
S. P. Zwart: The luminosity density of your youngest globular cluster seems to extend beyond its tidal radius in the potential of its parent galaxy. The age of this system may be smaller than the cluster’s crossing time, which suggests that when clusters form their density distribution extends beyond the tidal radius. This is important for understanding the initial conditions of globular clusters, which are required for numerical models.
P. Kroupa: The flattening of the LF near 10<sup>5</sup> M$`_{}`$ may be due to cluster-cluster disruptions in cluster-rich regions. Are there any HST images for tidal dwarf galaxies?
B. Miller: HST images of tidal dwarf candidates in NGC 4038/39 and NGC 7252 have been taken and are being analyzed.
J. C. Mermilliod: The important point is to distinguish a real bound star cluster from a large OB association which can cover several hundred parsecs. Galactic examples would be the Sco OB1 region with a dense cluster and a whole population of supergiants, or the h and $`\chi `$ Persei region.
B. Miller: Agreed, we try to select as cluster candidates only the most compact objects that do not appear to be stars.
Virtual states of light non-Borromean halo nuclei
## Abstract
It is shown that three-body non-Borromean halo nuclei like $`{}_{}{}^{12}Be`$, $`{}_{}{}^{18}C`$ and $`{}_{}{}^{20}C`$ have $`p`$-wave virtual states with energies of about 1.6 times the corresponding neutron-core binding energy. We use a renormalizable model that guarantees the general validity of our results in the context of short-range interactions.
PACS 21.10.Dr, 21.45.+v, 24.30.Gd, 27.20.+n
Halo nuclei offer the opportunity to study the few-body aspects of the nuclear interaction through their peculiar three-body phenomena. Recently, attention was drawn to the possible existence of Efimov states in such systems, because some halo nuclei can be viewed as a three-body system with two loosely bound neutrons and a core . $`{}_{}{}^{18}C`$ and $`{}_{}{}^{20}C`$ were suggested as promising candidates to have Efimov states. In Ref. , by considering the critical conditions that allow the existence of one Efimov state, and using the experimental values for the neutron separation energies ($`{}_{}{}^{19}C+n`$ and $`{}_{}{}^{18}C+2n`$) given in , it was concluded that $`{}_{}{}^{20}C`$ could have such a state.
The weakly bound Efimov states appear in the zero angular momentum state of a three-boson system, and their number grows to infinity, condensing at zero energy, as the pair interactions are just about to bind two particles in the $`s`$-wave. Such states are loosely bound and their wave functions extend far beyond those of normal states. If such states exist in nature, they will dominate the low-energy scattering of one of the particles with the bound state of the remaining two particles. Such states have been studied in several numerical model calculations. There have been theoretical searches for Efimov states in atomic and nuclear systems, but without a clear experimental signature of their occurrence .
The physical picture underlying such phenomena is related to the unusually large size of these light three-body halo nuclei. The core can be assumed structureless , considering that the radius of the neutron halo is much greater than the radius of the core. The large size of the orbit of the outer neutrons in halo nuclei comes from the small neutron separation energies, characteristic of a weakly bound few-body system. Thus, the detailed form of the nuclear interaction is not important, giving the system universal properties as long as some physical scales are known . This situation allows the use of concepts coming from short-range interactions.
In the limit of a zero-range interaction the three-body system is parameterized by the physical two-body and three-body scales. In a renormalization approach to the quantum mechanical many-body model with the $`s`$-wave zero-range force, all the low-energy properties of the three-body system are well defined if one three-body and one two-body physical input are known . The three-body input can be chosen as the experimental ground-state binding energy. All the detailed information about the short-range force, beyond the low-energy two-body observables, is retained in only one three-body physical quantity in the limit of a zero-range interaction. The sensitivity of the three-body binding energy to the interaction properties comes from the collapse of the system in the limit of a zero-range force, which is known as the Thomas effect .
The three-body scale vanishes as a physical parameter if angular momentum or symmetry does not allow the simultaneous presence of the particles close to each other. In three-body $`p`$-wave states, for particles interacting through $`s`$-wave potentials, the centrifugal barrier forbids the third particle from coming close to the interacting pair. Consequently, the third particle only notices the asymptotic wave of the interacting pair, which is defined by a two-body physical scale, and the three-body scale is not seen by the system in these states. The observables of the three-body system in states of nonzero angular momentum are therefore determined by two-body scales alone. We look for special possibilities in the $`p`$-wave, such as the virtual state. The trineutron system in the $`p`$-wave presents a peculiar pole in the second energy sheet , when the neutron-neutron ($`nn`$) system is artificially bound. The value of the pole scales with the binding energy of the fictitious $`nn`$ system, as this is the only scale of the three-body system . It is not forbidden, in principle, for a virtual state of the three-body halo-nucleus system to exist in the $`p`$-wave, and if it exists, it depends exclusively on the two-body scales: the binding energy of the neutron to the core and the $`nn`$ virtual state energy.
In this work, we search for the virtual state of three-body halo nuclei in the $`p`$-wave. We make use of the zero-range model, which is well defined in the $`p`$-wave, and the inputs are the energy of the bound state of the neutron to the core and the virtual $`nn`$ state energy. We look for weakly bound $`n`$-core systems, in particular $`{}_{}{}^{12}Be`$ ($`{}_{}{}^{10}Be+2n`$), $`{}_{}{}^{18}C`$ ($`{}_{}{}^{16}C+2n`$), and $`{}_{}{}^{20}C`$ ($`{}_{}{}^{18}C+2n`$). The zero-range model is analytically continued to the second sheet in the complex energy plane, and there we seek the solution of the homogeneous equation. In the case of Borromean halo nuclei such as $`{}_{}{}^{11}Li`$, our method does not work directly. However, to get rid of the virtual state, the $`{}_{}{}^{10}Li`$ subsystem is made artificially bound, which allows the analytical continuation to the second energy sheet through the elastic cut.
The nuclei $`{}_{}{}^{12}Be`$, $`{}_{}{}^{18}C`$, and $`{}_{}{}^{20}C`$ have an interesting non-Borromean nature with strong $`nn`$ pairing in the ground state. Specifically, $`{}_{}{}^{12}Be`$ is $`\left\{0^+,23.6\mathrm{ms},E_n=3169\mathrm{keV}\right\}`$, $`{}_{}{}^{18}C`$ is $`\left\{0^+,95\mathrm{ms},E_n=4180\mathrm{keV}\right\}`$, and $`{}_{}{}^{20}C`$ is $`\left\{0^+,\mathrm{?},E_n=3340\mathrm{keV}\right\}`$, where the first entry is the spin-parity of the ground state, the second is the mean lifetime and the third is the neutron separation energy. The lifetime of $`{}_{}{}^{20}C`$, shown by a question mark, is not available. These numbers should be compared to those of the one-neutron-less isotopes, $`{}_{}{}^{11}Be`$, $`{}_{}{}^{17}C`$, and $`{}_{}{}^{19}C`$, respectively given by $`\left\{1/2^+,13.81\mathrm{ms},E_n=504\mathrm{keV}\right\}`$, $`\left\{\mathrm{?},193\mathrm{ms},E_n=729\mathrm{keV}\right\}`$, and $`\left\{5/2^+(1/2^+),\mathrm{?},E_n=160(530)\mathrm{keV}\right\}`$. The numbers in round brackets for $`{}_{}{}^{19}C`$ refer to the recent measurement of Nakamura et al. . Again, the question marks refer to unavailable results. The above nuclei are used to determine the neutron-core binding energies in the calculation that follows. Note that the $`nn`$ pairing energies $`\mathrm{\Delta }_{nn}`$ are in the range $`2260\le \mathrm{\Delta }_{nn}\le 3400`$ keV. In our calculation of the $`p`$-wave virtual state, the pairing is taken to be inoperative and the only energy scales left are the neutron-core binding energy $`(E_{nc})`$ and the $`nn`$ virtual state energy $`(E_{nn})`$ in the $`p`$-wave three-body virtual state (pygmy dipole state).
As the input energies are fixed in the renormalized model, a more realistic potential will not affect the generality of the present conclusions. The Pauli principle correction between the halo and the core neutrons affects essentially the ground state, and it is weakened in the $`p`$-wave state due to the centrifugal barrier. One should also consider that this is a short-range phenomenon that occurs for distances smaller than the core size (about $`3`$ fm for light halo nuclei). We believe that our results are valid even in the case where the spin of the core is nonzero. The results show little dependence on the mass difference of the particles, in a sense explained together with the numerical results, which indicates that the dependence on the details of the interactions cannot be larger than that.
In another context, the three-nucleon system has been studied with zero-range force models . These succeeded in explaining the qualitative properties of the three-nucleon system and described the known correlations between three-nucleon observables. Universality in the three-nucleon system means the independence of these correlations from the details of the short-range nucleon-nucleon potentials .
Here we use a notation appropriate for halo nuclei, $`n`$ for neutron and $`c`$ for core, but we would like to point out that our approach is applicable to any three-particle system interacting via $`s`$-wave short-range interactions in which two of the particles are identical. The $`s`$-wave interaction for the $`nc`$ potential is justified in the present analysis because the $`p`$-wave virtual state, if it exists, should have a small energy, and is thus sensitive only to the properties of the zero angular momentum two-particle state in the relative coordinates. It was also observed in Ref. , when discussing $`{}_{}{}^{11}Li`$, that even a three-body wave function with an $`s`$-wave $`nn`$ correlation produces a ground state of the halo nucleus with two or more shell-model configurations.
The energies of the two-particle subsystems, $`E_{nn}`$ and $`E_{nc}`$, can correspond to virtual or bound states. However, the extension to the second energy sheet will be done through the cut of the elastic scattering of the neutron on the bound neutron-core subsystem. Thus, we use the value of the virtual state energy $`E_{nn}`$=143 keV and the binding energy of the neutron to the core, $`E_{nc}`$, in our calculations. We vary the core mass to study light halo nuclei like $`{}_{}{}^{11}Li`$, $`{}_{}{}^{12}Be`$, $`{}_{}{}^{18}C`$ and $`{}_{}{}^{20}C`$.
The zero-range three-body integral equations for the bound state of two identical particles and a core are written as a generalization of the three-boson equation . They consist of two coupled integral equations, in close analogy to the $`s`$-wave separable potential model presented in Ref. . The antisymmetrization of the two outer neutrons is satisfied since the spins couple to zero . In our approach the potential form factors and corresponding strengths are replaced, in the renormalization procedure, by the two-body energies, $`E_{nn}`$ and $`E_{nc}`$. In the case of bound systems, these quantities are the separation energies. We distinguish the two cases by the following definition:
$$K_{nn}\equiv \pm \sqrt{E_{nn}},\qquad K_{nc}\equiv \pm \sqrt{E_{nc}},$$
(1)
where $`+`$ refers to bound and $`-`$ to virtual state energies. Our units are such that $`\hbar =1`$ and the nucleon mass $`m_n=1`$.
After partial wave projection, the $`\ell `$-wave coupled integral equations for the three-body system consisting of two neutrons and a core ($`nnc`$) are:
$$\chi_{nn}^{\ell}(q)=2\,\tau_{nn}(q;E;K_{nn})\int_0^{\infty}dk\,G_1^{\ell}(q,k;E)\,\chi_{nc}^{\ell}(k)$$ (2)

$$\chi_{nc}^{\ell}(q)=\tau_{nc}(q;E;K_{nc})\int_0^{\infty}dk\left[G_1^{\ell}(k,q;E)\,\chi_{nn}^{\ell}(k)+A_c\,G_2^{\ell}(q,k;E)\,\chi_{nc}^{\ell}(k)\right],$$ (3)

where

$$\tau_{nn}(q;E;K_{nn})=\frac{1}{\pi}\left[\sqrt{E+\frac{A_c+2}{4A_c}q^2}-K_{nn}\right]^{-1},$$ (4)

$$\tau_{nc}(q;E;K_{nc})=\frac{1}{\pi}\left(\frac{A_c+1}{2A_c}\right)^{3/2}\left[\sqrt{E+\frac{A_c+2}{2(A_c+1)}q^2}-K_{nc}\right]^{-1},$$ (5)

$$G_1^{\ell}(q,k;E)=2A_ck^2\int_{-1}^{1}dx\,\frac{P_{\ell}(x)}{2A_c(E+k^2)+q^2(A_c+1)+2A_cqkx},$$ (6)

$$G_2^{\ell}(q,k;E)=2k^2\int_{-1}^{1}dx\,\frac{P_{\ell}(x)}{2A_cE+(q^2+k^2)(A_c+1)+2qkx}.$$ (7)
In the above equations, $`A_c`$ is the core mass number and $`E`$ is the modulus of the energy of the three-body halo state. As we are interested in three-body states with angular momentum $`\ell >0`$, the Thomas collapse is forbidden and the integration over momentum extends to infinity. For $`\ell >0`$ the short-range three-body scale is not seen by the system, while for $`\ell =0`$ the renormalization of the Faddeev equations is necessary. In the renormalization procedure for $`\ell =0`$, a subtraction should be performed in the Faddeev equations, and the momentum scale which represents the subtraction point in the integral equation qualitatively represents the inverse of the interaction radius . The subtraction point goes to infinity as the radius of the interaction decreases. The three-body model is renormalizable for $`\ell =0`$, requiring only one three-body observable to be fixed, which is the physical meaning of the subtraction performed in the Faddeev equations, together with the two-body low-energy physical information. The scheme is invariant under renormalization group transformations. However, for $`\ell >0`$, the original equations as given by (2) and (3) are well defined and the three-body observables are completely determined by the two-body physical scales corresponding to $`K_{nn}`$ and $`K_{nc}`$.
The analytic continuation of the scattering equations for separable potentials to the second energy sheet has been extensively discussed by Glöckle and, in the case of the zero-range three-body model , by Frederico et al. . The integral equations on the second energy sheet are obtained by analytical continuation through the two-body elastic scattering cut due to neutron scattering on the bound neutron-core subsystem. The elastic scattering cut comes through the pole of the neutron-core elastic scattering amplitude in Eq.(5). In what follows, we perform the analytic continuation of Eqs. (2)-(7) to the second energy sheet. The spectator function $`\chi _{nc}^{\ell }(k)`$ is substituted by $`\chi _{nc}^{\ell }(k)/\left[E_v-E_{nc}+\frac{A_c+2}{2(A_c+1)}k^2\right]`$, where $`E_v`$ is the modulus of the virtual state energy. The resulting coupled equations on the second energy sheet are:
$$\chi_{nn}^{\ell}(q)=\tau_{nn}(q;E_v;K_{nn})\,\frac{4i(A_c+1)}{\pi q(A_c+2)}\,G_1^{\ell}(q,ik_v;E_v)\,\chi_{nc}^{\ell}(ik_v)$$ (8)

$$+\,2\tau_{nn}(q;E_v;K_{nn})\int_0^{\infty}dk\,\frac{G_1^{\ell}(q,k;E_v)\,\chi_{nc}^{\ell}(k)}{E_v-E_{nc}+\frac{A_c+2}{2(A_c+1)}k^2},$$ (9)

$$\chi_{nc}^{\ell}(q)=\overline{\tau}_{nc}(q;E_v;K_{nc})\,\frac{2iA_c(A_c+1)}{\pi q(A_c+2)}\,G_2^{\ell}(q,ik_v;E_v)\,\chi_{nc}^{\ell}(ik_v)$$ (10)

$$+\,\overline{\tau}_{nc}(q;E_v;K_{nc})\int_0^{\infty}dk\left(G_1^{\ell}(k,q;E_v)\,\chi_{nn}^{\ell}(k)+\frac{A_c\,G_2^{\ell}(q,k;E_v)\,\chi_{nc}^{\ell}(k)}{E_v-E_{nc}+\frac{A_c+2}{2(A_c+1)}k^2}\right),$$ (11)

where the on-energy-shell momentum at the virtual state is $`k_v=\sqrt{\frac{2(A_c+1)}{A_c+2}(E_v-E_{nc})}`$, and
$`\overline{\tau }_{nc}(q;E;K_{nc})`$ $`=`$ $`{\displaystyle \frac{1}{\pi }}\left({\displaystyle \frac{A_c+1}{2A_c}}\right)^{3/2}\left[\sqrt{E+{\displaystyle \frac{A_c+2}{2(A_c+1)}}q^2}+K_{nc}\right],`$ (12)
The cut of the elastic amplitude, generated by the exchange of the core between the two possible bound core-neutron subsystems, lies near the physical region of the virtual state pole, due to the small value of $`E_{nc}`$. This cut is given by the values of imaginary $`k`$ between the extreme poles of the free three-body Green’s function $`G_2^{\ell }(q,k;E_v)`$ of Eq.(7), which appears in the first term on the right-hand side of Eq.(11),
$`2A_cE+(q^2+k^2)(A_c+1)+2qkx=0,`$ (13)
with $`-1<x<1`$, $`q=k=ik_{cut}`$ and $`E={\displaystyle \frac{A_c+2}{2(A_c+1)}}k_{cut}^2+E_{nc}`$. Introducing this value of $`E`$ and substituting the imaginary $`k`$ in Eq.(13), the cut is found at values of $`E`$ satisfying
$`2{\displaystyle \frac{A_c+1}{A_c}}E_{nc}>E>2{\displaystyle \frac{A_c+1}{A_c+2}}E_{nc}.`$ (14)
The virtual state energy $`E_v`$ in the second energy sheet is found between the scattering threshold and the cut, $`E_v<2{\displaystyle \frac{A_c+1}{A_c+2}}E_{nc}`$, which gives $`B_v=E_v-E_{nc}<{\displaystyle \frac{A_c}{A_c+2}}E_{nc}`$.
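As a quick numerical illustration of Eq.(14) and of this bound, one may use the values relevant for $`{}_{}{}^{20}C`$ quoted earlier ($`A_c=18`$ and $`E_{nc}=530`$ keV from the $`{}_{}{}^{19}C`$ separation energy); the sketch below simply evaluates the cut window and the maximum allowed $`B_v`$.

```python
A_c, E_nc = 18, 0.530      # core mass number and n-core binding energy (MeV), 20C case

cut_low  = 2.0 * (A_c + 1) / (A_c + 2) * E_nc    # lower edge of the cut, Eq.(14)
cut_high = 2.0 * (A_c + 1) / A_c * E_nc          # upper edge of the cut, Eq.(14)
B_v_max  = A_c / (A_c + 2.0) * E_nc              # upper bound on B_v = E_v - E_nc

print(cut_low, cut_high, B_v_max)                # ~1.01 MeV, ~1.12 MeV, ~0.48 MeV
```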
In the limit of a zero-range interaction the only physical scales of the three-body system for $`p`$-waves are $`E_{nn}`$ and $`E_{nc}`$, implying that $`B_v=E_{nc}\mathcal{F}(E_{nc}/E_{nn},A_c)`$, where $`\mathcal{F}`$ is a scaling function to be determined by the solution of Eqs. (9) and (11). However, because of the proximity of the cut to the scattering threshold, it is reasonable to believe that the cut should be of major importance for the formation of the virtual state, and that $`\mathcal{F}(E_{nc}/E_{nn},A_c)`$ should be roughly independent of the ratio $`E_{nc}/E_{nn}`$. Another consequence of the dominance of the cut in the virtual state energy should be a soft dependence on $`A_c`$ of the ratio $`B_v(A_c+2)/(E_{nc}A_c)`$.
In Figure 1, the results for the virtual state energy are shown in the form of the ratio $`B_v(A_c+2)/(E_{nc}A_c)`$ as a function of the core mass $`A_c`$, for $`E_{nc}=E_{nn}`$. The numerical values of the virtual $`nn`$ and bound $`nc`$ state energies can be chosen as equal. The calculations are shown for an extreme variation of $`A_c`$ between 0.001 and 1000, while the ratio changes by only a factor of three. The other characteristic of the virtual state is the approximate independence of $`B_v(A_c+2)/(E_{nc}A_c)`$ of the ratio $`E_{nc}/E_{nn}`$, which is confirmed in Figure 2, where calculations were performed for $`E_{nc}/E_{nn}`$ between 0.01 and 1000.
The three-body halo nuclei $`{}_{}{}^{11}Li`$, $`{}_{}{}^{12}Be`$, $`{}_{}{}^{18}C`$ and $`{}_{}{}^{20}C`$ have the $`p`$-wave virtual state. In the case of $`{}_{}{}^{11}Li`$ we artificially changed the virtual state of $`{}_{}{}^{10}Li`$ to a bound state, just to give the reader one value of the three-body virtual state in case the binding energy of the neutron to the core is some tenths of keV. Our results are shown in Table I. The $`p`$-wave virtual state energy scales with the binding energy of the neutron to the core, and it is roughly twice $`E_{nc}`$.
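A rough guide to these numbers can be obtained by simply applying the quoted scaling, between about 1.6 and 2 times $`E_{nc}`$, to the neutron-core binding energies listed earlier; the exact values require solving Eqs.(9) and (11) and are given in Table I, so the sketch below is only an order-of-magnitude estimate.

```python
# neutron-core binding energies (MeV), read off the one-neutron-less isotopes above
E_nc = {"12Be (n + 10Be)": 0.504, "18C (n + 16C)": 0.729, "20C (n + 18C)": 0.530}

for nucleus, e in E_nc.items():
    # the abstract quotes E_v ~ 1.6 E_nc; the text above says roughly twice E_nc
    print(f"{nucleus}: E_nc = {e:.3f} MeV, E_v ~ {1.6 * e:.2f} to {2.0 * e:.2f} MeV")
```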
In summary, we have discussed the universal aspects of $`p`$-wave virtual states of three-body halo nuclei in the limit of a zero-range interaction. We have shown the existence of scaling properties of the three-body $`p`$-wave virtual state energy with respect to the energies of the $`nn`$ virtual and $`nc`$ bound states, which determine the value of the $`p`$-wave virtual state energy. We conclude that the scaling function $`\mathcal{F}(E_{nc}/E_{nn},A_c)`$, which gives the virtual state energy as $`E_v=E_{nc}\left[1+\mathcal{F}(E_{nc}/E_{nn},A_c)\right]`$, is roughly independent of the ratio $`E_{nc}/E_{nn}`$ and is approximately determined by $`A_c`$ alone. From knowledge of $`E_{nc}`$, we calculated the $`p`$-wave virtual state energies for $`{}_{}{}^{12}Be`$, $`{}_{}{}^{18}C`$ and $`{}_{}{}^{20}C`$, which came out to be about 1.6 times the neutron-core binding energy. These threshold-dominated excited states, commonly called “pygmy resonances”, are therefore not resonances at all and correspond to a manifestation of predominantly dipole final state interaction, just as in the two-body case of the best known halo nucleus, the deuteron .
Our thanks for support from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and from Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) of Brazil.
Figure Captions
Scaling plot of $`{\displaystyle \frac{A_c+2}{A_c}}{\displaystyle \frac{B_v}{E_{nc}}}`$ as a function of $`A_c`$ for $`E_{nc}=E_{nn}`$.
Scaling plot of $`{\displaystyle \frac{A_c+2}{A_c}}{\displaystyle \frac{B_v}{E_{nc}}}`$ as a function of $`E_{nc}/E_{nn}`$ for $`A_c=`$ 0.1, 10 and 100.
Effects of HBT correlations on flow measurements
## I Introduction
In a heavy ion collision, the azimuthal distribution of particles with respect to the direction of impact (reaction plane) is not isotropic for non-central collisions. This phenomenon, referred to as collective flow, was first observed fifteen years ago at Bevalac , and more recently at the higher AGS and SPS energies. Azimuthal anisotropies are very sensitive to nuclear matter properties . It is therefore important to measure them accurately. Throughout this paper, we use the word “flow” in the restricted meaning of “azimuthal correlation between the directions of outgoing particles and the reaction plane”. We do not consider radial flow , which is usually measured for central collisions only.
Flow measurements are done in three steps (see for a recent review of the methods): first, one estimates the direction of the reaction plane event by event from the directions of the outgoing particles; then, one measures the azimuthal distribution of particles with respect to this estimated reaction plane; finally, one corrects this distribution for the statistical error in the reaction plane determination. In performing this analysis, one usually assumes that the only azimuthal correlations between particles result from their correlations with the reaction plane, i.e. from flow. This implicit assumption is made, in particular, in the “subevent” method proposed by Danielewicz and Odyniec in order to estimate the error in the reaction plane determination. This method is now used by most, if not all, heavy ion experiments.
However, other sources of azimuthal correlations are known, which do not depend on the orientation of the reaction plane. For instance, there are quantum correlations between identical particles, due to the (anti)symmetry of the wave function : this is the so-called Hanbury-Brown and Twiss effect , hereafter denoted by HBT (see for reviews). Azimuthal correlations due to the HBT effect have been studied recently in . In the present paper, we show that if the standard flow analysis is performed, these correlations produce a spurious flow. This effect is important when pions are used to estimate the reaction plane, which is often the case at ultrarelativistic energies, in particular for the NA49 experiment at CERN . We show that when these correlations are properly subtracted, the flow observables are considerably modified at low transverse momentum.
In section 2, we recall how the Fourier coefficients of the azimuthal distribution with respect to the reaction plane are extracted from the two-particle correlation function in the standard flow analysis. Then, in section 3, we apply this procedure to the measured two-particle HBT correlations, and calculate the spurious flow arising from these correlations. Finally, in section 4, we explain how to subtract HBT correlations in the flow analysis, and perform this subtraction on the NA49 data, using the HBT correlations measured by the same experiment. Conclusions are presented in section 5.
## II Standard flow analysis
In nucleus–nucleus collisions, the determination of the reaction plane event by event allows in principle to measure the distribution of particles not only in transverse momentum $`p_T`$ and rapidity $`y`$, but also in azimuth $`\varphi `$, where $`\varphi `$ is the azimuthal angle with respect to the reaction plane. The $`\varphi `$ distribution is conveniently characterized by its Fourier coefficients
$$v_n(p_T,y)\equiv \langle \mathrm{cos}\,n\varphi \rangle =\frac{\int_0^{2\pi }\mathrm{cos}\,n\varphi \,\frac{dN}{d^3𝐩}\,d\varphi }{\int_0^{2\pi }\frac{dN}{d^3𝐩}\,d\varphi }$$
(1)
where the brackets denote an average value over many events. Since the system is symmetric with respect to the reaction plane for spherical nuclei, $`\langle \mathrm{sin}\,n\varphi \rangle `$ vanishes. Most of the time, because of limited statistics, $`v_n`$ is averaged over $`p_T`$ and/or $`y`$. The average value of $`v_n(p_T,y)`$ over a domain $`𝒟`$ of the $`(p_T,y)`$ plane, corresponding to a detector, will be denoted by $`v_n(𝒟)`$.
Since the orientation of the reaction plane is not known a priori, $`v_n`$ must be extracted from the azimuthal correlations between the produced particles. We introduce the two-particle distribution, which is generally written as
$$\frac{dN}{d^3𝐩_\mathrm{𝟏}d^3𝐩_\mathrm{𝟐}}=\frac{dN}{d^3𝐩_\mathrm{𝟏}}\frac{dN}{d^3𝐩_\mathrm{𝟐}}\left(1+C(𝐩_1,𝐩_2)\right)$$
(2)
where $`C(𝐩_1,𝐩_2)`$ is the two-particle connected correlation function, which vanishes for independent particles. The Fourier coefficients of the relative azimuthal distribution are given by
$$c_n(p_{T1},y_1,p_{T2},y_2)\equiv \langle \mathrm{cos}\,n(\varphi _1-\varphi _2)\rangle =\frac{\int \mathrm{cos}\,n(\varphi _1-\varphi _2)\,\frac{dN}{d^3𝐩_1d^3𝐩_2}\,d\varphi _1d\varphi _2}{\int \frac{dN}{d^3𝐩_1d^3𝐩_2}\,d\varphi _1d\varphi _2}.$$
(3)
We denote the average value of $`c_n`$ over $`(p_{T2},y_2)`$ in the domain $`𝒟`$ by $`c_n(p_{T1},y_1,𝒟)`$, and the average over both $`(p_{T1},y_1)`$ and $`(p_{T2},y_2)`$ by $`c_n(𝒟,𝒟)`$.
Using the decomposition (2), one can write $`c_n`$ as the sum of two terms:
$$c_n(p_{T1},y_1,p_{T2},y_2)=c_n^{\mathrm{flow}}(p_{T1},y_1,p_{T2},y_2)+c_n^{\mathrm{non}\mathrm{flow}}(p_{T1},y_1,p_{T2},y_2)$$
(4)
where the first term is due to flow:
$$c_n^{\mathrm{flow}}(p_{T1},y_1,p_{T2},y_2)=v_n(p_{T1},y_1)v_n(p_{T2},y_2)$$
(5)
and the remaining term comes from two-particle correlations:
$$c_n^{\mathrm{non}\mathrm{flow}}(p_{T1},y_1,p_{T2},y_2)=\frac{\int \mathrm{cos}\,n(\varphi _1-\varphi _2)\,C(𝐩_1,𝐩_2)\,\frac{dN}{d^3𝐩_1}\frac{dN}{d^3𝐩_2}\,d\varphi _1d\varphi _2}{\int \frac{dN}{d^3𝐩_1d^3𝐩_2}\,d\varphi _1d\varphi _2}$$
(6)
In writing Eq.(5), we have used the fact that $`\langle \mathrm{sin}\,n\varphi _1\rangle =\langle \mathrm{sin}\,n\varphi _2\rangle =0`$ and neglected the correlation $`C(𝐩_1,𝐩_2)`$ in the denominator.
In the standard flow analysis, non-flow correlations are neglected , with a few exceptions: the correlations due to momentum conservation are taken into account at intermediate energies , and correlations between photons originating from $`\pi ^0`$ decays were considered in . The effect of non-flow correlations on flow observables is considered from a general point of view in . In the remainder of this section, we assume that $`c_n^{\mathrm{non}\mathrm{flow}}=0`$. Then, $`v_n`$ can be calculated simply as a function of the measured correlation $`c_n`$ using Eq.(5), as we now show. Note, however, that Eq.(5) is invariant under a global change of sign: $`v_n(p_T,y)\to -v_n(p_T,y)`$. Hence the sign of $`v_n`$ cannot be determined from $`c_n`$. It is fixed either by physical considerations or by an independent measurement. For instance, NA49 chooses the minus sign for the $`v_1`$ of charged pions, in order to make the $`v_1`$ of protons at forward rapidities come out positive .
$$v_n(𝒟)=\pm \sqrt{c_n(𝒟,𝒟)}.$$
(7)
This equation shows in particular that the average two-particle correlation $`c_n(𝒟,𝒟)`$ due to flow is positive. Finally, integrating (5) over $`(p_{T2},y_2)`$, and using (7), one obtains the expression of $`v_n`$ as a function of $`c_n`$:
$$v_n(p_{T1},y_1)=\pm \frac{c_n(p_{T1},y_1,𝒟)}{\sqrt{c_n(𝒟,𝒟)}}.$$
(8)
This formula serves as a basis for the standard flow analysis.
Note that the actual experimental procedure is usually different: one first estimates, for a given Fourier harmonic $`m`$, the azimuth of the reaction plane (modulo $`2\pi /m`$) by summing over many particles. Then one studies the correlation of another particle (in order to remove autocorrelations) with respect to the estimated reaction plane. One can then measure the coefficient $`v_n`$ with respect to this reaction plane if $`n`$ is a multiple of $`m`$. In this paper, we consider only the case $`n=m`$. Both procedures give the same result, since they start from the same assumption (the only azimuthal correlations are from flow). This equivalence was first pointed out in .
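For illustration, Eqs.(7) and (8) can be checked with a toy Monte Carlo in which events are generated with a known $`v_2`$ and no non-flow correlations, and $`v_2`$ is then recovered from the two-particle azimuthal correlation averaged over the full domain; the multiplicity, number of events and input $`v_2`$ below are arbitrary and do not correspond to NA49 conditions.

```python
import numpy as np

rng = np.random.default_rng(0)
v2_true, n_events, mult = 0.05, 2000, 100    # toy values, not NA49 parameters

def sample_phis(psi, n):
    """Draw n azimuths from dN/dphi ~ 1 + 2*v2*cos(2(phi - psi)) by rejection sampling."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi, 4 * n)
        envelope = rng.uniform(0.0, 1.0 + 2.0 * v2_true, phi.size)
        keep = envelope < 1.0 + 2.0 * v2_true * np.cos(2.0 * (phi - psi))
        out.extend(phi[keep][: n - len(out)])
    return np.array(out)

num = den = 0.0
for _ in range(n_events):
    phi = sample_phis(rng.uniform(0.0, 2.0 * np.pi), mult)   # random reaction-plane angle
    q2 = np.sum(np.exp(2j * phi))                            # second-harmonic flow vector
    num += np.abs(q2) ** 2 - mult                            # sum of cos 2(phi_i - phi_j) over pairs
    den += mult * (mult - 1)

print(np.sqrt(num / den))    # estimate of v2(D) from Eq.(7); close to the input 0.05
```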
## III Azimuthal correlations due to the HBT effect
The HBT effect yields two-particle correlations, i.e. a non-zero $`C(𝐩_1,𝐩_2)`$ in Eq.(2). According to Eq.(6), this gives rise to an azimuthal correlation $`c_n^{\mathrm{non}\mathrm{flow}}`$, which contributes to the total, measured correlation $`c_n`$ in Eq.(4). In particular, there will be a correlation between randomly chosen subevents when one particle of a HBT pair goes into each subevent. The contribution of HBT correlations to $`c_n^{\mathrm{non}\mathrm{flow}}`$ will be denoted by $`c_n^{\mathrm{HBT}}`$.
In the following, we shall consider only pions. Since they are bosons, their correlation is positive, i.e. of the same sign as the correlation due to flow. Therefore, if one applies the standard flow analysis to HBT correlations alone, i.e. if one replaces $`c_n`$ by $`c_n^{\mathrm{HBT}}`$ in Eq.(8), they yield a spurious flow $`v_n^{\mathrm{HBT}}`$, which we calculate in this section.
First, let us estimate its order of magnitude. The HBT effect gives a correlation of order unity between two identical pions with momenta $`𝐩_\mathrm{𝟏}`$ and $`𝐩_\mathrm{𝟐}`$ if $`|𝐩_\mathrm{𝟐}-𝐩_\mathrm{𝟏}|\lesssim \hbar /R`$, where $`R`$ is a typical HBT radius, corresponding to the size of the interaction region. From now on, we take $`\hbar =1`$. In practice, $`R\simeq 4`$ fm for a semi–peripheral Pb–Pb collision at 158 GeV per nucleon, so that $`1/R\simeq 50`$ MeV/c is much smaller than the average transverse momentum, which is close to $`400`$ MeV/c: the HBT effect correlates only pairs with low relative momenta.
In particular, the azimuthal correlation due to the HBT effect is short-ranged : it is significant only if $`\varphi _2-\varphi _1\lesssim 1/(Rp_T)\simeq 0.1`$. This localization in $`\varphi `$ implies a delocalization in $`n`$ of the Fourier coefficients, which are expected to be roughly constant up to $`n\sim Rp_T\simeq 10`$, as will be confirmed below.
For small $`n`$ and $`(p_{T1},y_1)`$ in $`𝒟`$, the order of magnitude of $`c_n^{\mathrm{HBT}}(p_{T1},y_1,𝒟)`$ is the fraction of particles in $`𝒟`$ whose momentum lies in a circle of radius $`1/R`$ centered at $`𝐩_\mathrm{𝟏}`$. This fraction is of order $`(R^3p_T^2m_T\mathrm{\Delta }y)^{-1}`$, where $`p_T`$ and $`m_T`$ are typical magnitudes of the transverse momentum and transverse mass ($`m_T=\sqrt{p_T^2+m^2}`$, where $`m`$ is the mass of the particle), respectively, while $`\mathrm{\Delta }y`$ is the rapidity interval covered by the detector. Using Eq.(7), this gives a spurious flow of order
$$\left|v_n^{\mathrm{HBT}}(𝒟)\right|\sim \left(\frac{1}{R^3p_T^2m_T\mathrm{\Delta }y}\right)^{1/2}.$$
(9)
The effect is therefore larger for the lightest particles, i.e. for pions. Taking $`R=4`$ fm, $`p_T\simeq m_T\simeq 400`$ MeV/c and $`\mathrm{\Delta }y=2`$, one obtains $`\left|v_n(𝒟)\right|\simeq 3`$ %, which is of the same order of magnitude as the flow values measured at SPS. It is therefore a priori important to take HBT correlations into account in the flow analysis.
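This estimate is easily reproduced numerically from Eq.(9), using $`\hbar c=197.3`$ MeV fm to express $`1/R`$ in momentum units; the sketch below uses exactly the numbers quoted above.

```python
hbar_c = 197.327                          # MeV fm
R, pT, mT, dy = 4.0, 400.0, 400.0, 2.0    # fm, MeV/c, MeV, rapidity window

inv_R = hbar_c / R                        # 1/R expressed in MeV/c, about 49 MeV/c
v_hbt = (inv_R**3 / (pT**2 * mT * dy)) ** 0.5
print(inv_R, v_hbt)                       # ~49 MeV/c and ~0.03, i.e. a spurious flow of ~3%
```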
We shall now turn to a more quantitative estimate of $`c_n^{\mathrm{HBT}}`$. For this purpose, we use the standard gaussian parametrization of the correlation function (2) between two identical pions:
$$C(𝐩_1,𝐩_2)=\lambda \,e^{-q_s^2R_s^2-q_o^2R_o^2-q_L^2R_L^2}$$
(10)
One chooses a frame boosted along the collision axis in such a way that $`p_{1z}+p_{2z}=0`$ (“longitudinal comoving system”, denoted by LCMS). In this frame, $`q_L`$, $`q_o`$ and $`q_s`$ denote the projections of $`𝐩_2-𝐩_1`$ along the collision axis, the direction of $`𝐩_1+𝐩_2`$ and the third direction, respectively. The corresponding radii $`R_L`$, $`R_o`$ and $`R_s`$, as well as the parameter $`\lambda `$ ($`0\le \lambda \le 1`$), depend on $`𝐩_1+𝐩_2`$. We neglect this dependence in the following calculation. Note that the parametrization (10) is valid for central collisions, for which the pion source is azimuthally symmetric. Therefore the azimuthal correlations studied in this section have nothing to do with flow. Note also that we neglect Coulomb correlations, which should be taken into account in a more careful study. We hope that repulsive Coulomb correlations between like-sign pairs will be compensated, at least partially, by attractive correlations between opposite sign pairs.
Since $`C(𝐩_1,𝐩_2)`$ vanishes unless $`𝐩_2`$ is very close to $`𝐩_1`$, we may replace $`dN/d^3𝐩_\mathrm{𝟐}`$ by $`dN/d^3𝐩_\mathrm{𝟏}`$ in the numerator of Eq.(6), and then integrate over $`𝐩_2`$. As we have already said, $`q_s`$, $`q_o`$ and $`q_L`$ are the components of $`𝐩_2-𝐩_1`$ in the LCMS, and one can equivalently integrate over $`q_s`$, $`q_o`$ and $`q_L`$. In this frame, $`y_1\simeq 0`$ and one may also replace $`dN/d^3𝐩_1`$ by $`(1/m_{T1})dN/d^2𝐩_{T1}dy_1`$. The resulting formula is boost invariant and can also be used in the laboratory frame.
The relative angle $`\varphi _2-\varphi _1`$ can be expressed as a function of $`q_s`$ and $`q_o`$. If $`p_{T1}\gg 1/R`$, then to a good approximation
$$\varphi _2-\varphi _1\simeq q_s/p_{T1}.$$
(11)
If $`p_{T1}\lesssim 1/R`$, Eq.(11) is no longer valid. We assume that $`R_s\simeq R_o`$ and use, instead of (11), the following relation :
$$q_s^2+q_o^2=p_{T1}^2+p_{T2}^2-2p_{T1}p_{T2}\mathrm{cos}(\varphi _2-\varphi _1).$$
(12)
To calculate $`c_n^{\mathrm{HBT}}(p_{T1},y_1,𝒟)`$, we insert Eqs.(10) and (11) in the numerator of (6) and integrate over $`(q_s,q_o,q_L)`$. The limits on $`q_o`$ and $`q_L`$ are deduced from the limits on $`(p_{T2},y_2)`$, using the following relations, valid if $`p_{T1}\gg 1/R`$ :
$`q_o`$ $`=`$ $`p_{T2}-p_{T1}`$ (13)

$`q_L`$ $`=`$ $`m_{T1}(y_2-y_1).`$ (14)
Since $`q_s`$ is independent of $`p_{T2}`$ and $`y_2`$ (see Eq.(11)), the integral over $`q_s`$ extends from $`-\infty `$ to $`+\infty `$.
Note that values of $`q_o`$ and $`q_L`$ much larger than $`1/R`$ do not contribute to the correlation (10), so that one can extend the integrals over $`q_o`$ and $`q_L`$ to $`\pm \infty `$ as soon as the point $`(p_{T1},y_1)`$ lies in $`𝒟`$ and is not too close to the boundary of $`𝒟`$. By too close, we mean within an interval $`1/R_o\simeq 50`$ MeV/c in $`p_T`$ or $`1/(R_Lm_T)\simeq 0.3`$ in $`y`$. One then obtains after integration
$$c_n^{\mathrm{HBT}}(p_{T1},y_1,𝒟)=\frac{\lambda \pi ^{3/2}}{R_sR_oR_L}\,\mathrm{exp}\left(-\frac{n^2}{4p_{T1}^2R_s^2}\right)\,\frac{\frac{1}{m_{T1}}\frac{dN}{d^2𝐩_{T1}dy_1}}{\int_𝒟\frac{dN}{d^2𝐩_{T2}dy_2}\,d^2𝐩_{T2}\,dy_2}.$$
(15)
At low $`p_T`$, Eq.(11) must be replaced by Eq.(12). Then, one must do the following substitution in Eq.(15) :
$$\mathrm{exp}\left(-\frac{n^2}{4\chi ^2}\right)\;\rightarrow \;\frac{\sqrt{\pi }}{2}\,\chi \,e^{-\chi ^2/2}\left(I_{\frac{n-1}{2}}\left(\frac{\chi ^2}{2}\right)+I_{\frac{n+1}{2}}\left(\frac{\chi ^2}{2}\right)\right)$$
(16)
where $`\chi =R_sp_T`$ and $`I_k`$ is the modified Bessel function of order $`k`$.
Let us discuss our result (15). First, the correlation depends on $`n`$ only through the exponential factor, which suppresses $`c_n^{\mathrm{HBT}}`$ in the very low $`p_T`$ region $`p_{T1}\lesssim n/2R_s`$. For $`n`$ smaller than $`R_sp_T\simeq 10`$, the correlation depends weakly on $`n`$, as discussed above. Neglecting this $`n`$ dependence, (15) reproduces the order of magnitude (9). To see this, we normalize the particle distribution in $`𝒟`$ in order to get rid of the denominator in (15); the numerator $`(1/m_{T1})(dN/d^2𝐩_{T1}dy_1)`$ is then of order $`1/(p_T^2m_T\mathrm{\Delta }y)`$. However, Eq.(15) is more detailed, and shows in particular that the dependence of the correlation on $`p_{T1}`$ and $`y_1`$ follows that of the momentum distribution in the LCMS (neglecting the $`m_T`$ and $`y`$ dependence of HBT radii). This is because the correlation $`c_n^{\mathrm{HBT}}`$ is proportional to the number of particles surrounding $`𝐩_1`$ in phase space.
Let us now present numerical estimates for a Pb–Pb collision at SPS. We assume for simplicity that the $`p_T`$ and $`y`$ dependence of the particle distribution factorize, thereby neglecting the observed variation of $`p_T`$ with rapidity . The rapidity dependence of charged pions can be parametrized by :
$$\frac{dN}{dy}=\frac{1}{\sigma \sqrt{2\pi }}\mathrm{exp}\left(-\frac{\left(y-\langle y\rangle \right)^2}{2\sigma ^2}\right)$$
(17)
with $`\sigma =1.4`$ and $`\langle y\rangle =2.9`$. The normalized $`p_T`$ distribution is parametrized by
$$\frac{dN}{d^2𝐩_𝐓}=\frac{e^{m/T}}{2\pi T(m+T)}\mathrm{exp}\left(-\frac{m_T}{T}\right).$$
(18)
with $`T\simeq 190`$ MeV . This parametrization underestimates the number of low-$`p_T`$ pions. The values of $`R_o`$, $`R_s`$ and $`R_L`$ used in our computations, taking into account that the collisions are semi-peripheral, are respectively 4 fm, 4 fm and 5 fm . The correlation strength $`\lambda `$ is approximately 0.4 for pions .
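As a consistency check, the normalization of the parametrization (18) can be verified numerically; the sketch below assumes the charged pion mass together with the slope parameter quoted above.

```python
import numpy as np

m, T = 139.57, 190.0                     # charged pion mass and slope parameter (MeV)
pT = np.linspace(0.0, 5000.0, 200001)    # MeV/c; the upper cutoff acts as infinity here
mT = np.sqrt(pT**2 + m**2)

dN_d2pT = np.exp(m / T) / (2.0 * np.pi * T * (m + T)) * np.exp(-mT / T)
norm = np.sum(2.0 * np.pi * pT * dN_d2pT) * (pT[1] - pT[0])   # integral over d^2 p_T
print(norm)                              # ~1.0, confirming the normalization of Eq.(18)
```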
Finally, we must define the domain $`𝒟`$ in Eq.(15). It is natural to choose different rapidity windows for odd and even harmonics, because odd harmonics have opposite signs in the target and projectile rapidity region, by symmetry, and vanish at mid-rapidity ($`y=2.9`$), while even harmonics are symmetric around mid-rapidity. Following the NA49 collaboration , we take $`4<y<6`$ and $`0.05<p_T<0.6`$ GeV/c for odd $`n`$, and $`3.5<y<5`$ and $`0.05<p_T<2`$ GeV/c for even $`n`$. We assume that the particles in $`𝒟`$ are 85% pions , half $`\pi ^+`$, half $`\pi ^-`$. Then, for an identified charged pion (a $`\pi ^+`$, say) with $`p_T=p_{T1}`$ and $`y=y_1`$, the right-hand side of Eq.(15) must be multiplied by $`0.85\times 0.5`$, which is the probability that a particle in $`𝒟`$ is also a $`\pi ^+`$.
Substituting the correlation calculated from Eq.(15) in Eq.(8), one obtains the value of the spurious flow $`v_n^{\mathrm{HBT}}(p_T,y)`$ due to the HBT effect. Fig.1 displays $`\left|v_n^{\mathrm{HBT}}\right|`$, integrated between $`4<y<5`$ (as are the NA49 data) as a function of $`p_T`$. As expected, $`v_n^{\mathrm{HBT}}`$ depends on the order $`n`$ only at low $`p_T`$, where it vanishes due to the exponential factor in Eq.(15). HBT correlations, which follow the momentum distribution, also vanish if $`p_T`$ is much larger than the average transverse momentum. Assuming that $`1/R_s\ll m,T`$, we find from Eq.(15) that the correlation is maximum at $`p_T=p_{T\mathrm{max}}`$ where
$$p_{T\mathrm{max}}=\left(\frac{2T}{m+T}\right)^{1/4}\sqrt{\frac{nm}{2R_s}}\simeq 60\sqrt{n}\,\mathrm{MeV}/\mathrm{c},$$
(19)
which reproduces approximately the maxima in Fig.1.
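For instance, evaluating Eq.(19) with the charged pion mass, $`T=190`$ MeV, $`R_s=4`$ fm and $`\hbar c=197.3`$ MeV fm used to convert $`R_s`$ into momentum units gives the following values.

```python
hbar_c = 197.327                      # MeV fm
m, T, R_s = 139.57, 190.0, 4.0        # MeV, MeV, fm

prefactor = (2.0 * T / (m + T)) ** 0.25
for n in (1, 2, 3, 4):
    pT_max = prefactor * (n * m * hbar_c / (2.0 * R_s)) ** 0.5   # Eq.(19), in MeV/c
    print(n, round(pT_max, 1))        # ~61, 86, 105, 122 MeV/c, i.e. ~60*sqrt(n)
```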
Although data on higher order harmonics are still unpublished, they were shown at the Quark Matter ’99 conference by the NA45 Collaboration which reports values of $`v_3`$ and $`v_4`$ of the same order as $`v_1`$ and $`v_2`$, respectively, suggesting that most of the effect is due to HBT correlations. Similar results were found with NA49 data .
## IV Subtraction of HBT correlations
Now that we have evaluated the contribution of HBT correlations to $`c_n^{\mathrm{non}\mathrm{flow}}`$, we can subtract this term from the measured correlation (left-hand side of Eq.(4), which will be denoted by $`c_n^{\mathrm{measured}}`$ in this section) to isolate the correlation due to flow. Then, the flow $`v_n`$ can be calculated using Eq.(8), replacing in this equation $`c_n`$ by the corrected correlation $`c_n^{\mathrm{flow}}=c_n^{\mathrm{measured}}-c_n^{\mathrm{HBT}}`$. In this section, we show the result of this modification on the directed and elliptic flow data published by NA49 for pions .
The published data do not give directly the two-particle correlation $`c_n^{\mathrm{measured}}`$, but rather the measured flow $`v_n^{\mathrm{measured}}`$. Since these analyses assume that the correlation factorizes according to Eq.(5), we can reconstruct the measured correlation as a function of the measured $`v_n`$. In particular,
$$c_n^{\mathrm{measured}}(p_{T1},y_1,𝒟)=v_n^{\mathrm{measured}}(p_{T1},y_1)v_n^{\mathrm{measured}}(𝒟).$$
(20)
We then perform the subtraction of HBT correlations in both the numerator and the denominator of Eq.(8).
The integrated flow values measured by NA49 are $`v_1^{\mathrm{measured}}(𝒟)=3.0\pm 0.1\%`$ and $`v_2^{\mathrm{measured}}(𝒟)=3.0\pm 0.1\%`$ . After subtraction of HBT correlations, the coefficients are smaller by some 20% : $`v_1(𝒟)=2.5\%`$ and $`v_2(𝒟)=2.6\%`$.
Fig.2 displays the rapidity dependence of $`v_1`$ and $`v_2`$ at low transverse momentum, where the effect of HBT correlations is largest. Let us first comment on the uncorrected data. We note that $`v_1^{\mathrm{measured}}`$ is zero for $`y<4`$ (i.e. outside $`𝒟`$, where there are no HBT correlations) and jumps to a roughly constant value when $`y>4`$ (where HBT correlations set in). This gap disappears once HBT correlations are subtracted, and the resulting values of $`v_1`$ are considerably smaller. The values of $`v_2`$ are also much smaller after correction, except near mid-rapidity.
Fig.3 displays the $`p_T`$ dependence of $`v_1`$ and $`v_2`$. The behaviour of $`v_n(p_T)`$ is constrained at low $`p_T`$ : if the momentum distribution is regular at $`𝐩_𝐓=\mathrm{𝟎}`$, then $`v_n(p_T)`$ must vanish like $`p_T^n`$. One naturally expects this decrease to occur on a scale of the order of the average $`p_T`$. This is what is observed for protons . However, the uncorrected $`v_1^{\mathrm{measured}}`$ and $`v_2^{\mathrm{measured}}`$ for pions remain large far below 400 MeV/c. In order to explain this behaviour, one would need to invoke a specific phenomenon occurring at low $`p_T`$. No such phenomenon is known. Even though resonance (mostly $`\mathrm{\Delta }`$) decays are known to populate the low-$`p_T`$ pion spectrum, they are not expected to produce any spectacular increase in the flow.
HBT correlations provide this low-$`p_T`$ scale, since they are important down to $`1/R\simeq 50`$ MeV/c. Once they are subtracted, the peculiar behaviour of the pion flow at low $`p_T`$ disappears. $`v_1`$ and $`v_2`$ are now compatible with a variation of the type $`v_1\propto p_T`$ and $`v_2\propto p_T^2`$, up to $`\simeq 400`$ MeV/c.
## V Conclusions
We have shown that the HBT effect produces correlations which can be misinterpreted as flow when pions are used to estimate the reaction plane. This effect is present only for pions, in the $`(p_T,y)`$ window used to estimate the reaction plane. Azimuthal correlations due to the HBT effect depend on $`p_T`$ and $`y`$ like the momentum distribution in the LCMS, i.e. $`(1/m_T)dN/dyd^2p_T`$, and depend weakly on the order of the harmonic $`n`$.
The pion flow observed by NA49 has peculiar features at low $`p_T`$: the rapidity dependence of $`v_1`$ is irregular, and both $`v_1`$ and $`v_2`$ remain large down to values of $`p_T`$ much smaller than the average transverse momentum, while they should decrease with $`p_T`$ as $`p_T`$ and $`p_T^2`$, respectively. All these features disappear once HBT correlations are properly taken into account. Furthermore, we predict that HBT correlations should also produce spurious higher harmonics of the pion azimuthal distribution ($`v_n`$ with $`n\ge 3`$) at low $`p_T`$, weakly decreasing with $`n`$, with an average value of the order of 1%. The data on these higher harmonics should be published. This would provide a confirmation of the role played by HBT correlations. More generally, our study shows that although non-flow azimuthal correlations are neglected in most analyses, they may be significant.
Acknowledgements
We thank A. M. Poskanzer and S. A. Voloshin for detailed explanations concerning the NA49 flow analysis and useful comments, and J.-P. Blaizot for careful reading of the manuscript and helpful suggestions.
Untitled Document
Fig. 1. Solid line: Schwarzschild Potential.
The Effective Potential for $`\lambda =0`$ and $`ϵ=0.001`$
Dashed line: $`\stackrel{~}{E}=1`$. Dotted line: $`\stackrel{~}{E}=1.5`$. Dot-dashed line: $`\stackrel{~}{E}=2`$.
Fig. 2. Solid line: Schwarzschild Potential.
The Effective Potential for $`\lambda =90`$ and $`ϵ=0.001`$
Dashed line: $`\stackrel{~}{E}=9`$. Dotted line: $`\stackrel{~}{E}=13`$. Dot-dashed line: $`\stackrel{~}{E}=22`$.
Fig. 3. The curves represent the points at which the effective potential $`\stackrel{~}{V}_{eff}^2`$, the scalar curvature R and the Kretschmann curvature invariant I diverge.
Dashed line: $`\lambda =0`$ and $`ϵ=0.001`$. Solid line: $`\lambda =0`$ and $`ϵ=0.001`$.
Fig. 4. The Scalar Curvature $`R`$ for $`ϵ=0.001`$
Solid line: $`\lambda =0`$ and $`\stackrel{~}{E}=9`$
Dashed line: $`\lambda =90`$ and $`\stackrel{~}{E}=22`$
Early Planet Formation as a Trigger for further Planet Formation<sup>1</sup>
Philip J. Armitage<sup>2</sup> & Brad M.S. Hansen<sup>3</sup>
Canadian Institute for Theoretical Astrophysics, University of Toronto, Toronto, ON, M5S 3H8, Canada
Recent discoveries of extrasolar planets<sup>1</sup><sup>,</sup><sup>2</sup> at small orbital radii, or with significant eccentricities, indicate that interactions between massive planets and the disks of gas and dust from which they formed are vital for determining the final shape of planetary systems<sup>3</sup><sup>,</sup><sup>4</sup><sup>,</sup><sup>5</sup><sup>,</sup><sup>6</sup>. We show that if this interaction occurs at an early epoch, when the protoplanetary disc was still massive, then rapid planet growth through accretion causes an otherwise stable disc to fragment into additional planetary mass bodies when the planetary mass reaches 4–5 $`m_{\mathrm{Jupiter}}`$. We suggest that such catastrophic planet formation could account for apparent differences in the mass function of massive planets and brown dwarfs<sup>1</sup>, and the existence of young stars that appear to have dissipated their discs at an early epoch<sup>7</sup>. Subsequent gravitational interactions<sup>5</sup><sup>,</sup><sup>6</sup><sup>,</sup><sup>8</sup><sup>,</sup><sup>9</sup> will lead to planetary systems comprising a small number of massive planets in eccentric orbits.
<sup>1</sup>To appear in Nature, 9th December 1999. <sup>2</sup>Present address: Max-Planck-Institut for Astrophysik, Karl-Schwarzschild-Str. 1, D-85740 Garching, Germany. <sup>3</sup>Present address: Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544-1001
The planet–disc interaction has been studied extensively for low mass protoplanetary discs<sup>3</sup><sup>,</sup><sup>10</sup><sup>,</sup><sup>11</sup><sup>,</sup><sup>12</sup><sup>,</sup><sup>13</sup><sup>,</sup><sup>14</sup>. This is appropriate for the proto-Solar nebula where the rate limiting step, the assembly of the cores of the giant planets from smaller bodies<sup>15</sup>, is believed to require timescales comparable to the lifetimes of protoplanetary discs, which are observed<sup>7</sup> to last for a $`\mathrm{few}\times 10^6`$–$`10^7\mathrm{yr}`$. However, some extrasolar giant planets could form more rapidly, either via direct hydrodynamic collapse<sup>16</sup>, or via accelerated core formation in discs that are significantly more massive<sup>17</sup> than the minimum mass Solar nebula<sup>18</sup>. More broadly, if angular momentum transport mechanisms other than self-gravity are inefficient in discs where the ionization fraction is low (and no purely hydrodynamic instabilities that lead to outward angular momentum transport are known to exist in Keplerian disc flows<sup>19</sup>), then the outer regions of protoplanetary discs may remain only marginally stable against gravitational instability even at late evolutionary epochs<sup>20</sup><sup>,</sup><sup>21</sup>. In either case, planet-disc interactions could occur while the effects of disc self-gravity are still important.
The local stability of a gaseous disc against gravitational instability depends upon the balance between thermal pressure and self-gravity. At a point in a disc with sound speed $`c_s`$, surface density $`\mathrm{\Sigma }`$, and angular velocity $`\mathrm{\Omega }`$, the controlling parameter<sup>22</sup> is Toomre’s $`Q`$, defined as,
$$Q=\frac{c_s\mathrm{\Omega }}{\pi G\mathrm{\Sigma }},$$
with $`G`$ the gravitational constant. Numerical simulations<sup>23</sup><sup>,</sup><sup>24</sup> show that non-axisymmetric instabilities set in at $`Q<1.5`$, and become increasingly violent for smaller values of $`Q\lesssim 1`$. We consider the relatively cool outer regions of the disc, at radii of several a.u., and set our initial conditions such that the disc is both marginally unstable, and has properties comparable to the upper end of the mass distribution of T Tauri discs inferred from mm wavelength observations<sup>17</sup>, which have $`m_{\mathrm{disc}}\simeq 0.1m_{\odot }`$. This is around an order of magnitude greater than the canonical minimum mass Solar nebula value of $`10^{-2}m_{\odot }`$, though even for our Solar System the initial disc mass may have been substantially in excess of this minimum<sup>25</sup><sup>,</sup><sup>26</sup>. The surface density profile is taken to be
$$\mathrm{\Sigma }=\mathrm{\Sigma }_0r^{-3/2}\left(1-\sqrt{\frac{r_{\mathrm{in}}}{r}}\right),$$
where $`r_{\mathrm{in}}`$ is the inner edge of the simulated disc annulus. $`\mathrm{\Sigma }_0`$ and $`c_s`$ are chosen such that $`m_{\mathrm{disc}}=0.1`$, and the ratio of disc scale height to radius at the outer edge is $`(h/r)=0.075`$. With these parameters $`Q>1.5`$ at all radii, and the disc is everywhere close to stable against gravitational instability, as expected if it is the endpoint of an earlier phase of violent gravitational instabilities that drive rapid angular momentum transport<sup>21</sup>.
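The implied stability profile can be sketched numerically as follows; since the radial extent of the simulated annulus is not quoted above, the inner and outer radii, and the assumption of a single constant sound speed fixed by $`(h/r)=0.075`$ at the outer edge, are illustrative choices rather than values taken from the simulations.

```python
import numpy as np

G, Msun, au = 6.674e-11, 1.989e30, 1.496e11      # SI units

# assumed for illustration only: the annulus radii are not quoted in the text
M_star, m_disc = 1.0 * Msun, 0.1 * Msun
r_in, r_out, h_over_r_out = 1.0 * au, 25.0 * au, 0.075

r = np.linspace(1.001 * r_in, r_out, 2000)
shape = r**-1.5 * (1.0 - np.sqrt(r_in / r))                 # Sigma / Sigma_0
sigma0 = m_disc / np.sum(2.0 * np.pi * r * shape * np.gradient(r))
sigma = sigma0 * shape                                       # normalized to m_disc

omega = np.sqrt(G * M_star / r**3)
c_s = h_over_r_out * r_out * np.sqrt(G * M_star / r_out**3)  # constant c_s set at r_out (assumption)

Q = c_s * omega / (np.pi * G * sigma)
print(Q.min())     # ~1.8 for these assumed radii: just above the Q = 1.5 instability threshold
```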
Figure 1 shows the evolution of the disc, computed using a Lagrangian hydrodynamics code. The isolated disc is shown at $`t=512`$, in units where $`\mathrm{\Delta }t=1`$ corresponds to the dynamical time, $`\mathrm{\Omega }^{-1}`$, at $`r_{\mathrm{in}}`$. This run shows weak spiral arms in the outer disc, as expected from the $`Q`$ profile on the basis of previous simulations of gravitationally unstable discs<sup>23</sup><sup>,</sup><sup>24</sup>, and is amply stable against fragmentation. The disc surface density does not evolve significantly over this relatively short interval, as expected for a thin disc where the efficiency of angular momentum transport from gravitational instabilities, if parameterized approximately via an equivalent Shakura-Sunyaev $`\alpha `$ parameter<sup>27</sup>, corresponds to a fairly small effective $`\alpha \sim 10^{-2}`$.
We now consider the evolution of the same star-disc system with an embedded planet of initial mass $`m_p=10^{-3}m_{\odot }`$. For seeds of this mass and smaller, the additional potential fluctuations induced by the planet at the Lindblad resonances<sup>12</sup> are small compared to the background fluctuations due to the disc’s own self-gravity, as measured in the control run. This is shown in Figure 2. Neither the mass resolution nor the equation of state are realistic enough to model the internal structure of individual planets, so we focus solely on their influence on the disc, for which purpose details of their internal structure are unimportant.
The presence of a Jupiter mass planet significantly modifies the disc evolution. A partial gap is cleared in the disc on the dynamical timescale at $`r_p`$, bounded by a strongly compressed gravitational wake attached to the planet. This forms part of an $`m=2`$ pattern of strong spiral arms, along with weaker transient spiral features excited in the disc by the combination of gravitational instability and planetary perturbation. The presence of a gap fails to prevent ongoing accretion along the spiral arms at a rate $`\dot{m}_p\propto m_p`$, with an e-folding time of a few planetary orbits. Continuing accretion is expected since the disc viscosity needs to be significantly lower than the values expected in a gravitationally unstable disc to inhibit accretion altogether<sup>11</sup>. Much of this mass accumulates in a resolved, strongly tidally distorted disc surrounding the planet. As the planet mass grows, the overdensity in the spiral arms and at the Lindblad resonances increases while the background surface density profile is unable to evolve on as rapid a timescale. This essential imbalance in timescales is expected to be valid even for the formation of giant planets in a minimum mass Solar nebula<sup>26</sup>, and is therefore a robust prediction for the more massive discs studied here. The rapid growth in mass leads to an increased amplitude of potential fluctuations, as shown in Figure 2, decreasing disc stability, and inevitable fragmentation at the gap edges, shown in the lower panels of Figure 1. For these disc parameters and equation of state this occurs at $`m_p=4`$–$`5\times 10^{-3}m_{\odot }`$. Once this mass is reached, rapid fragmentation into numerous planetary mass bodies occurs near both the inner and outer Lindblad resonances.
Several additional calculations were used to check the sensitivity of the results to the initial conditions and numerical method. With the same initial conditions, fragmentation occurs at the same final planet mass in lower resolution simulations with 10,000 and 20,000 particles, though the initial masses of fragments do vary. In all cases, however, the fragments accrete rapidly from the disc, and so their final masses would fall into the regime of massive planets. Fragmentation also occurs at the same planet mass in a simulation where the initial seed mass was $`m_p=2\times 10^{-4}m_{\odot }`$, which is closer to the mass of a giant planet core beginning runaway accretion of disc gas<sup>26</sup>. For this simulation the time required prior to fragmentation was roughly doubled. Fragmentation does not occur if we artificially set the mass accreted by a Jupiter mass seed to zero, verifying that it is the increased planetary mass after significant accretion that leads to instability.
The existence of a propagating mode of planet formation has implications for the evolution of protoplanetary discs and the statistics of planetary systems. In particular, the formation of one massive planet could suffice to trigger rapid planet formation across the range of disc radii for which $`Q\simeq 1`$–$`2`$. This could allow massive planets to form at large radii where the timescales for planet formation via other mechanisms can become worryingly long compared to typical disc lifetimes. The consequent disruption of the outer disc concomitant with such violent planet formation would allow the unreplenished inner disc to drain viscously onto the star in a short timescale. Studies of the UV and H$`\alpha `$ flux arising from the accretion process, and near infra-red flux from the inner disc, suggest that a significant fraction of T Tauri stars are able to dissipate their inner discs rapidly<sup>7</sup>. Related processes may be relevant to the formation of planetary satellite systems<sup>12</sup>.
For planetary formation, these results imply that steady growth of giant planets in massive discs around solar mass stars is limited by the vulnerability of the disc to fragmentation once the planetary mass reaches approximately $`5m_{\mathrm{Jupiter}}`$. The resulting formation of additional planets, which then compete to accrete the available disc gas, implies an upper limit to the mass of massive planets formed via this mechanism. In particular, even if the disc was sufficiently massive it would not be possible to grow a planet from a Jupiter mass far into the brown dwarf regime. This is consistent with observational evidence that planets and brown dwarfs do not share a common mass function<sup>1</sup>, which has prompted suggestions that a break in formation mechanisms exists at around $`7m_{\mathrm{Jupiter}}`$. Finally we note that the endpoint of early disc fragmentation would be a system of numerous massive coplanar planets in initially close to circular orbits. Such a system would possess a global organisation imprinted via non-local gravitational effects at birth. The subsequent evolution will be strongly affected by mutual perturbations. These would be favourable initial conditions for the eventual formation of a system comprising one or more massive planets on eccentric orbits<sup>9</sup>.
References
1. Mayor, M., Udry, S., & Queloz, D., The mass function below the substellar limit, in The Tenth Cambridge Workshop on Cool Stars, Stellar Systems and the Sun, ASP Conf. Ser. 154, eds R. A. Donahue & J. A. Bookbinder, 77-87 (1998)
2. Marcy, G. W., & Butler, R. P., Detection of extrasolar giant planets, Ann. Rev. Astron. Astrophys., 36, 57-98 (1998)
3. Lin, D. N. C., Bodenheimer, P., & Richardson, D. C., Orbital migration of the planetary companion of 51 Pegasi to its present location, Nature, 380, 606-607 (1996)
4. Murray, N., Hansen, B., Holman, M., & Tremaine, S., Migrating planets, Science, 279, 69 (1998)
5. Rasio, F. A., & Ford, E. B., Dynamical instabilities and the formation of extrasolar planetary systems, Science, 274, 954-965 (1996)
6. Weidenschilling, S. J., & Marzari, F., Gravitational scattering as a possible origin for giant planets at small stellar distances, Nature, 384, 619-621 (1996)
7. Strom, S. E., Initial frequency, lifetime and evolution of YSO disks, Rev. Mex. Astron. Astrophys. Conf. Ser., 1, 317-328 (1995)
8. Gladman, B., Dynamics of systems of two close planets, Icarus, 106, 247-263 (1993)
9. Lin, D. N. C., & Ida, S., On the origin of massive eccentric planets, Astrophys. J., 477, 781-791 (1997)
10. Artymowicz, P., & Lubow, S. H., Mass flow through gaps in circumbinary disks, Astrophys. J., 467, L77-L80 (1996)
11. Takeuchi, T., Miyama, S. M., & Lin, D. N. C., Gap formation in protoplanetary disks, Astrophys. J., 460, 832-847 (1996)
12. Lin, D. N. C., & Papaloizou, J., On the structure of circumbinary accretion discs and the tidal evolution of commensurable satellites, Mon. Not. R. Astron. Soc., 188, 191-201 (1979)
13. Kley, W., Mass flow and accretion through gaps in accretion discs, Mon. Not. R. Astron. Soc., 303, 696 (1999)
14. Bryden, G., Chen, X., Lin, D. N. C., Nelson, R. P., & Papaloizou, J. C. B., Tidally induced gap formation in protostellar disks: Gap clearing and suppression of protoplanetary growth, Astrophys. J., 514, 344-367 (1999)
15. Safronov, V. S., Evolution of the protoplanetary cloud and formation of the Earth and the planets, Nauka (Moscow), (1969), (English translation for NASA and NSF by Israel Program for Scientific Translations, NASA-TT-F-677, 1972)
16. Boss, A. P., Evolution of the solar nebula IV. Giant gaseous protoplanet formation, Astrophys. J., 503, 923-937 (1998)
17. Osterloh, M., & Beckwith, S. V. W., Millimeter-wave continuum measurements of young stars, Astrophys. J., 439, 288-302 (1995)
18. Hayashi, C., Nakazawa, K., & Nakagawa, Y., Formation of the solar system, in Protostars and planets II, eds D. C. Black & M. S. Matthews, Univ. of Arizona Press (Tucson), p. 1100-1153 (1985)
19. Balbus, S. A., Hawley, J. F., & Stone, J. M., Nonlinear stability, hydrodynamical turbulence, and transport in disks, Astrophys. J., 467, 76-86 (1996)
20. Larson, R. B., Gravitational torques and star formation, Mon. Not. R. Astron. Soc., 206, 197-207 (1984)
21. Lin, D. N. C., & Pringle, J. E., The formation and initial evolution of protostellar disks, Astrophys. J., 358, 515-524 (1990)
22. Toomre, A., On the gravitational stability of a disk of stars, Astrophys. J., 139, 1217-1238 (1964)
23. Laughlin, G., & Bodenheimer, P., Nonaxisymmetric evolution in protostellar disks, Astrophys. J., 436, 335-354 (1994)
24. Nelson, A. F., Benz, W., Adams, F. C., & Arnett, D., Dynamics of circumstellar disks, Astrophys. J., 502, 342-371 (1998)
25. Lissauer, J. J., Timescales for planetary accretion and the structure of the protoplanetary disk, Icarus, 69, 249-265 (1987)
26. Pollack, J. B., Hubickyj, O., Bodenheimer, P., Lissauer, J. J., Podolak, M., & Greenzweig, Y., Formation of the giant planets by concurrent accretion of solids and gas, Icarus, 124, 62-85 (1996)
27. Shakura, N. I., & Sunyaev, R. A., Black holes in binary systems. Observational appearance, Astron. Astrophys., 24, 337-355 (1973)
28. Benz, W., Smooth particle hydrodynamics – A review, in The numerical modelling of nonlinear stellar pulsations, ed J. R. Buchler, Kluwer Academic Publishers (Dordrecht), 269-287 (1990)
29. Monaghan, J. J., Smoothed particle hydrodynamics, Ann. Rev. Astron. Astrophys., 30, 543-574 (1992)
30. Barnes, J., & Hut, P., A hierarchical O(N log N) force calculation algorithm, Nature, 324, 446 (1986)
ACKNOWLEDGEMENTS. We thank Norm Murray for many helpful discussions, and Norman Wilson for maintaining the required computational resources.
Correspondence to P. Armitage (email: armitage@mpa-garching.mpg.de)
Figure 1 – Disc surface density: Disc structure, computed using a Smooth Particle Hydrodynamics (SPH) code<sup>28</sup><sup>,</sup><sup>29</sup>, with individual timesteps for the particles and a tree structure for computing the gravitational forces<sup>30</sup>. We use 60,000 SPH particles, an isothermal equation of state, and standard artificial viscosity parameters<sup>29</sup>. We have verified that torques due to gravity and pressure forces dominate over those attributed to artificial viscosity in determining the planet-disc evolution. Additional support for collapsed objects is provided by limiting the force resolution via a minimum SPH smoothing length, $`h_{\mathrm{min}}=0.075`$. The central star is treated as a smoothed point mass chosen to give a Keplerian potential at $`r>r_{\mathrm{in}}`$. We use units in which $`r_{\mathrm{in}}=m_{\ast }=1`$, and set the outer disc edge at $`r_{\mathrm{out}}=25`$. The upper left panel shows the isolated disc evolved to a time $`t=512`$, the upper right panel the disc at the same time but with a planet, initially of $`10^{-3}m_{\ast }`$, orbiting in a coplanar circular orbit at $`r_p=12.5`$. The planet is treated as a point mass smoothed on a scale $`h_p=0.1`$. The colours signify density on a logarithmic scale. In the presence of a planet the spiral structure is significantly amplified, and a partial gap has been cleared in the disc material. By $`t=608`$ (lower left), when the planetary mass (plus surrounding circumplanetary disc) has reached $`4`$–$`5\times 10^{-3}m_{\ast }`$, the disc near the inner Lindblad resonance has become unstable and fragments into additional planets. Thereafter rapid destruction of the disc occurs. By $`t=736`$ (lower right) numerous fragments have formed, including one near the outer Lindblad resonance, and are accreting rapidly.
Figure 2 – Gravitational potential fluctuations, evaluated as a function of azimuthal angle $`\varphi `$ at the inner Lindblad resonance where fragmentation first occurs. The amplitude of potential fluctuations in the control run (dotted line) is not significantly increased by the addition of a Jupiter mass seed planet (dashed line). By $`t=592`$ (solid line), the greatly increased planet mass has led to a strong $`m=2`$ mode. Fragmentation occurs shortly afterwards. |
no-problem/9912/astro-ph9912421.html | ar5iv | text | # EUVE Observations of clusters of galaxies: Virgo and M87
## 1 Introduction
Observations with the Extreme Ultraviolet Explorer (EUVE) have provided evidence that a number of clusters of galaxies produce intense EUV emission (e.g., Bowyer et al. 1997). The initial explanation for this emission was that it is produced by a diffuse, (5–10) $`\times 10^5`$K thermal gas component of the intracluster medium (ICM). Gas at these temperatures cools very rapidly, however, and there is no obvious energy source to re-heat it (Fabian, 1996). Consequently, a number of other mechanisms have been investigated as the source of the emission. Inverse Compton (IC) scattering of cosmic microwave background photons by relativistic electrons present in the ICM was proposed as the source of the observed EUV emission in the Coma cluster (Hwang 1997; Enßlin & Biermann 1998). However, Bowyer & Berghöfer (1998) have shown that the spatial distribution of the EUV emission in this cluster is not consistent with IC emission from the observed population of relativistic electrons.
A variety of alternative explanations has been advanced which dismiss the EUVE excess in clusters of galaxies. Most recently, Arabadjis & Bregman (1999) argue that the EUV excess can be explained away by a different cross section for absorption by hydrogen and helium in the foreground ISM column. Bowyer, Berghöfer & Korpela (1999) find that in some clusters this may account for some of the excess present in the ROSAT PSPC data, however, this cannot explain the intense EUV excesses found with EUVE.
Bowyer, Berghöfer & Korpela (1999) reexamined EUVE data of the clusters Abell 1795, Abell 2199, and Coma. They demonstrated that the initially reported results are based on an improper background subtraction. In previous analyses a flat background had been assumed. However, a detailed investigation of blank field observations with the EUVE Deep Survey (DS) instrument shows that the background consists of two independent components, a non-photonic background and a background due to photons. The non-photonic background level can be determined in obscured regions of the detector and can be directly subtracted from the raw data. However, the photonic background is affected by telescope vignetting and must be treated differently.
In the case of Abell 1795 and Abell 2199, Bowyer, Berghöfer & Korpela (1999) show that the extent of the diffuse EUV emission is much smaller than earlier reported. Furthermore, the radial EUV emission profiles of these two clusters show a flux deficit compared to the soft energy tail of the X-ray emitting intracluster gas. These findings are consistent with the presence of strong cooling flows in Abell 1795 and Abell 2199.
In this paper we employ our new reduction method to EUVE archival data of the central part of the Virgo cluster. We compare our results with results derived from radio observations of this region. We consider the possibility that the observed diffuse EUV excess emission is due to an inverse Compton process of the known population of relativistic electrons in the ICM near M87. Furthermore, we investigate the emission originating from the jet in M87 and compare our results with observations at other wavelengths.
## 2 Data and Data Analysis
The Virgo cluster has been observed for 36,000 s with the Deep Survey (DS) Telescope of EUVE (Bowyer & Malina, 1991). Data reduction was carried out with the EUVE package built in IRAF which is especially designed to process EUVE data.
In order to reduce the non-photonic (non-astronomical) background contribution to the data we investigated the pulse-height spectrum of all detected events. A large number of EUVE DS observations of all kinds of astronomical targets has shown that a typical pulse-height spectrum consists of two components, a Gaussian profile representing the source events and an exponential background distribution. More details about the different background contributions to the DS data and the method of pulse-height thresholding can be found in Berghöfer et al. (1998). From our experience with stellar and extragalactic observations with EUVE we know that the pulse-height selection effectively reduces the non-astronomical background in the data without any significant reduction of the source signal. By comparing the source signal with and without pulse-height selection we find that the effect on the source signal is lower than 4%.
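As an illustration of this step, the two-component pulse-height model can be fitted with a standard non-linear least-squares routine. The sketch below uses synthetic counts and an arbitrary channel range purely to show the form of the model (Gaussian source events plus an exponential non-photonic tail); it is not the EUVE DS calibration itself.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: Gaussian (photon events) + exponential (non-photonic background)
# pulse-height model.  Channel range and synthetic counts are placeholders.
def ph_model(ch, a_src, mu, sigma, a_bkg, scale):
    return (a_src * np.exp(-0.5 * ((ch - mu) / sigma) ** 2)
            + a_bkg * np.exp(-ch / scale))

channels = np.arange(256)
rng = np.random.default_rng(1)
counts = rng.poisson(ph_model(channels, 400.0, 90.0, 20.0, 300.0, 40.0))

popt, _ = curve_fit(ph_model, channels, counts, p0=(300, 100, 25, 200, 50))
print("fitted (a_src, mu, sigma, a_bkg, scale):", np.round(popt, 1))
```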
Then we applied corrections for detector dead time and for telemetry limitations (primbsching) to the screened event list and produced a DS EUV image of the Virgo cluster. We then determined the non-photonic background level in the image from highly obscured regions at the outer most parts of the field of view near the Lexan/B filter frame bars. This non-astronomical background contribution is assumed to be constant over the entire detector field and was subtracted from the image.
In order to subtract the (vignetted) photonic background we computed the azimuthally averaged radial emission profile centered on M87. We used the EUVE DS sensitivity map provided by Bowyer, Berghöfer & Korpela (1999) to determine a radial sensitivity profile centered on the detector position of M87. This was then fit to the outer part (15–20′) of the radial emission profile to determine the scaling factor between sensitivity profile and the photonic background in the data. The radial emission profile and the best fit background model are shown in Figure EUVE Observations of clusters of galaxies: Virgo and M87.
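In practice this normalisation is a one-parameter least-squares scaling of the sensitivity profile to the source-free outer annulus. A schematic version, with placeholder arrays standing in for the measured radial profile and the DS sensitivity map, is:

```python
import numpy as np

# Sketch: scale the vignetted sensitivity profile to the observed radial
# profile in the 15-20 arcmin annulus, then subtract it at all radii.
# The two profiles below are placeholders for the measured quantities.
r = np.linspace(0.5, 20.0, 40)        # arcmin
observed = np.ones_like(r)            # azimuthally averaged radial profile
sensitivity = np.ones_like(r)         # radial DS sensitivity profile

outer = (r >= 15.0) & (r <= 20.0)     # assumed source-free fitting region
scale = (sensitivity[outer] * observed[outer]).sum() / (sensitivity[outer] ** 2).sum()

excess = observed - scale * sensitivity   # background-subtracted profile
print(f"photonic background scaling factor: {scale:.3f}")
```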
## 3 Results
The data in Figure EUVE Observations of clusters of galaxies: Virgo and M87 demonstrate the presence of diffuse EUV emission in the vicinity of M87 which extends to a radius of $`\sim `$13′. At larger radii the radial profile is well fit by the background model demonstrating the absence of any significant cluster emission beyond this. The initial publication on the diffuse EUV emission from Virgo (Lieu et al. 1996) claimed to detect excess emission to 20′.
In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we plot the background subtracted radial EUV emission profile (solid line). The dashed line shows the expected EUV emission of the low energy tail of the X-ray emitting diffuse intracluster gas as derived in the following. Note that the inner 1′ bin is dominated by the core and jet of M87 and must be ignored for the discussion of the diffuse emission.
To determine the diffuse X-ray contribution to the observed EUV emission we processed ROSAT PSPC archival data of the Virgo cluster. We used standard procedures implemented in the EXSAS software package to produce an image from the photon event list. Then a vignetting corrected exposure map was computed for this data set and a PSPC count rate image was generated by dividing the PSPC image by the exposure map.
We point out that the background in the ROSAT PSPC hard energy band is dominated by the photonic (vignetted) background and the contribution of the non-photonic background is minor. Therefore, a similar analysis as described for the EUVE DS data including a separation of the photonic and non-photonic background contributions is not essential. However, in the case of detectors with low effective areas (e.g., BeppoSAX), and less efficient rejection mechanisms for non-photonic events, this background contribution must be treated separately.
For our analysis of the ROSAT PSPC data we selected only photon events in the hardest energy band, channels 90–236. This channel selection has several advantages: First, any contamination by a possible steep-spectrum source at soft X-ray energies is excluded and, therefore, ensures that this band pass represents only thermal contributions to the overall diffuse emission in Virgo. Second, this part of the ROSAT band pass is nearly unaffected by interstellar absorption. This minimizes errors due to possible differential ISM absorption effects when modeling conversion factors between DS and PSPC counts. Third, the count rate conversion factor between DS and PSPC is nearly temperature independent in the range of X-ray temperatures measured in the central Virgo region and, thus, ROSAT count rates of the diffuse X-ray emission can be converted into DS count rates by using one single conversion factor.
In order to be able to convert PSPC counts into DS counts we modeled conversion factors for a range of plasma temperatures (0.1–2.7 keV) employing the MEKAL plasma emission code with abundances of 0.34 solar (Hwang et al. 1997). These calculations include absorption by the interstellar medium. We used an ISM absorption column density of $`1.72\times 10^{20}`$ cm<sup>-2</sup> (Hartmann & Burton 1997) and an absorption model including cross sections and ionization ratios for the ISM as described in Bowyer, Berghöfer & Korpela (1999). In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we show the DS to PSPC count rate conversion factor. The left-hand scale and the solid curve give the plasma temperature as a function of the DS to PSPC count rate ratio. As can be seen, for a wide range of temperatures (0.6–2.7 keV) the model conversion factor is constant within 15%. According to Böhringer et al. (1995) and Böhringer (1999), the temperature of the X-ray emitting intracluster gas in the Virgo cluster is $`\sim `$2 keV. In addition to this thermal gas component these authors detected several diffuse emission features near M87 which are significantly softer than the average Virgo cluster gas temperature. However, spectral fits to the ROSAT data do not provide any evidence for gas at temperatures below 1 keV (Böhringer, private communication). For temperatures near 1 keV the modeled conversion factor for a thermal gas is slightly lower than for higher temperatures. Therefore, the contribution of the lower temperature components to the overall diffuse X-ray emission in the EUV band pass is lower than the dominant 2 keV cluster gas component. Using the conversion factor appropriate for the mean cluster gas temperature of 2 keV for the entire emission including the softer thermal enhancements, slightly overestimates the low energy X-ray contribution to the EUV emission.
We also modeled the DS to PSPC conversion factor for a non-thermal power law type spectrum including ISM absorption. The right-hand scale and dashed curve give the power law spectral index as a function of the modeled conversion factor.
In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we show the observed ratio between azimuthally averaged radial intensity profiles observed with the EUVE DS and PSPC. Within the error bars the ratio is constant (reduced $`\chi ^2`$ = 0.9). The best fit value is $`0.0186\pm 0.0057`$. The ratio for the inner 1′ bin is consistent with this value, however, we excluded this point due to the presence of emission from the core and jet of M87. Sarazin & Lieu (1998) have suggested that an increasing EUV to X-ray emission ratio towards larger distances from the cluster center is an indication of an inverse Compton process producing the EUV emission in the cluster. However, the data in Figure EUVE Observations of clusters of galaxies: Virgo and M87 demonstrate that this is not observed in the central Virgo region.
Our best fit value of 0.0186 is $`\sim `$4.3 times larger than expected for the low energy tail of the X-ray emitting gas in the Virgo cluster. Therefore, the X-ray contribution to the observed EUV excess in the central part of the Virgo cluster must be minor.
It is clear that the ratio between observed EUV flux and modeled X-ray contribution cannot directly be used to determine the physical parameters of the source. Instead, one must first subtract the X-ray contribution from the observed EUV emission.
In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we show the spatial distribution of the EUV excess emission in the central Virgo region; the background and the contribution of the low energy tail of the X-ray emitting ICM have been subtracted. The central emission peak at the position of M87 is surrounded by a diffuse EUV emission structure which is asymmetric in shape. Its extent varies between 1′ and 7′. Several arm-like features are visible. At larger radii the EUV emission results from a number of apparently discrete and extended diffuse features in the M87 radio halo region. These emission features are consistent with the emission seen in the surface brightness profile (Figure EUVE Observations of clusters of galaxies: Virgo and M87) between 9–13′. These asymmetric features show the flux is not produced by a gravitationally bound thermal gas. For the diffuse EUV emission within 7′ (excluding the core + jet emission in the inner 1′) we determine a total count rate of $`(0.036\pm 0.006)`$ counts s<sup>-1</sup>. Assuming an extraction radius of 13′ results in a total count rate of $`(0.066\pm 0.009)`$ counts s<sup>-1</sup>.
We also investigated the EUV emission peak at the position of M87. X-ray observations with the Einstein and ROSAT HRIs have demonstrated that the central X-ray emission peak splits into two major components which are associated with the core and mainly knots A+B+C of the jet in M87. The spatial resolution of the EUVE DS ($`\sim `$20″) is not sufficient to completely resolve the jet from the galaxy core. However, the central peak indicates emission slightly elongated by about one resolution element in the direction from the core to the jet. The central emission peak (core + jet) provides a total count rate of $`(4.9\pm 0.6)\times 10^{-3}`$ counts s<sup>-1</sup> in excess of the diffuse emission component.
## 4 Discussion and Conclusions
### 4.1 Diffuse EUV emission
The results of our reanalysis show a clear EUV excess in the central Virgo region around M87. Compared to previous studies the azimuthally averaged extent of this emission is smaller and extends only to $`\sim `$13′ from the center of M87.
To explore the nature of the EUV excess we compare this emission with a 90 cm radio map of the central Virgo region near M87 (Owen, Eilek & Kassim 1999). If the diffuse EUV emission is due to inverse Compton processes in the ICM, one would expect to see similar emission features in both the EUV and radio image. In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we show a contour plot of the EUV emission superposed on the 90 cm radio map. As can be seen, the EUV emission peaks at the position of the radio emission of the core and jet of M87. EUV excess emission features are, however, not directly coincident with any of the other brighter features visible in the radio map. The EUV emission is also not associated with the higher temperature X-ray emission features seen in the ROSAT PSPC images in Virgo (cf. Böhringer 1999 and Harris 1999).
We next investigate whether the integrated flux of the diffuse EUV emission is compatible with an inverse Compton origin of the observed EUV excess in the central Virgo region. We use the observed radio synchrotron power law spectrum of the M87 halo ($`\alpha =0.84`$, Herbig & Readhead 1992) to compute the underlying distribution of relativistic electrons in this region and its inverse Compton flux. Note that the radio spectrum needs to be extrapolated into the low frequency range near 1 MHz which is not observable due to ionospheric effects. The conversion from the synchrotron spectrum into an electron energy distribution depends on the magnetic field strength in the ICM. We derive a relation between magnetic field strength and the inverse Compton flux produced by the relativistic electrons; the results are shown in Figure EUVE Observations of clusters of galaxies: Virgo and M87. The flux is folded with the EUVE DS response and given in units of DS counts s<sup>-1</sup> which allows a direct comparison to the observed integrated DS count rate of the diffuse emission (horizontal line in Figure EUVE Observations of clusters of galaxies: Virgo and M87). As can be seen, for a magnetic field strength of $`3\mu `$G the observed flux matches the model flux. Note that this value would also be consistent with Faraday rotation measurements in the M87 halo (Dennison 1980).
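The scaling behind this comparison can be made explicit. At fixed observed synchrotron flux the required number of relativistic electrons, and hence the predicted inverse Compton flux, varies as $`B^{-(\alpha +1)}`$, while the relative strength of inverse Compton and synchrotron losses is governed by the ratio of the CMB photon to magnetic energy densities. The snippet below (cgs units) illustrates both dependences; the normalisation at 3 $`\mu `$G is arbitrary and merely stands in for the model curve in the figure.

```python
import numpy as np

# Sketch of the B-field dependence of the predicted IC flux.  At fixed radio
# synchrotron flux the electron normalisation scales as B**-(alpha+1), so the
# predicted IC (EUV) flux does too; U_B/U_CMB sets the ratio of synchrotron to
# IC losses.  cgs units; normalisation at 3 microgauss is arbitrary.
a_rad, T_cmb = 7.566e-15, 2.725          # erg cm^-3 K^-4, K
u_cmb = a_rad * T_cmb**4                 # CMB photon energy density
alpha = 0.84                             # radio spectral index of the M87 halo

def u_mag(B):                            # magnetic energy density, B in gauss
    return B**2 / (8.0 * np.pi)

def ic_flux_rel(B, B_ref=3.0e-6):        # IC flux relative to its value at B_ref
    return (B / B_ref) ** (-(alpha + 1.0))

for B in (1.0e-6, 3.0e-6, 1.0e-5):
    print(f"B = {B*1e6:4.1f} uG   U_B/U_CMB = {u_mag(B)/u_cmb:5.2f}   "
          f"IC flux (rel.) = {ic_flux_rel(B):5.2f}")
```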
However, with $`\alpha =0.84`$ the radio synchrotron spectrum is inconsistent with the required steep EUV to X-ray power law spectrum. In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we show three dotted vertical lines labeled with 100%, 10%, and 5%. These lines indicate relative contributions of the hard energy tail of the EUV excess component to the overall X-ray emission in the ROSAT band. A contribution of 100% is obviously not realistic since this would require that no emission is seen from the gravitationally bound intracluster gas. The other two dotted lines show 10% and 5% contributions, respectively. No other emission component in excess of the thermal component has been detected in the ROSAT PSPC data of Virgo and only an upper limit can be derived from this data. A determination of an accurate upper limit for the EUV excess component in the ROSAT band is highly model dependent. However, from our experience with ROSAT data of diffuse sources we estimate that a contribution of 10% should be detectable. If we assume a 10% contribution as the upper limit for the EUV excess component in the ROSAT band, according to Figure EUVE Observations of clusters of galaxies: Virgo and M87 a power law photon number index of at least $`\alpha =3.2`$ is required to explain the observed EUV flux and the upper limit in the ROSAT PSPC hard band (channels 90–238) by a non-thermal power law source. Therefore, inverse Compton emission from the known population of relativistic electrons in the M87 halo cannot account for the observed EUV excess in the central Virgo region.
We compute the total luminosity of the diffuse EUV emission for a steep non-thermal power law spectrum and for a low temperature thermal plasma spectrum since these have been discussed in the literature, but we make no claim that either of these are the correct spectral distribution for the emission. Assuming a power law spectrum with $`\alpha =3.2`$ results in a luminosity of $`5.2\times 10^{42}`$ erg s<sup>-1</sup> in the 0.05–0.2 keV band. For a thermal plasma with a temperature of 0.15 keV we obtain a luminosity of $`5.7\times 10^{42}`$ erg s<sup>-1</sup>. These values were derived from the total count rate of the diffuse EUV emission within 7′. Including the apparently discrete and extended diffuse EUV emission detected between 7′ and 13′ increases the luminosity by 80%. Assuming larger power law indices or lower plasma temperatures result in higher luminosities. For the luminosity calculations we assume a distance of 17 Mpc.
### 4.2 EUV emission of the jet in M87
Since the core and jet of M87 cannot be resolved in the EUVE image of M87, we assume that the X-ray flux ratio between core and jet which can be determined from the ROSAT HRI observations is also valid for the EUV fluxes. Harris, Biretta & Junor (1997) give a ratio of $`\sim `$1.5 for the core/jet X-ray flux ratio. Based on their compilation of measurements for the jet in M87, Meisenheimer et al. (1996) derived a spectral index of 0.65 for the radio to near-UV spectrum. In order to be able to explain the X-ray emission of the jet in M87 by the same spectrum, these authors introduced a spectral cut-off near 10<sup>15</sup>Hz. The spectral index of the UV to X-ray power law spectrum then has to be $`\alpha \simeq 1.4`$ to explain the UV and X-ray data.
Based on these assumptions we compute a flux of $`3.4\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup> ($`6.5\times 10^{-6}`$ Jy) and a luminosity of $`1.2\times 10^{41}`$ erg s<sup>-1</sup> for the emission of the M87 jet in the EUVE DS bandpass. For the luminosity calculation we assume a distance of 17 Mpc.
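As a quick consistency check, the quoted luminosity follows directly from $`L=4\pi d^2F`$ at the adopted distance:

```python
import numpy as np

# L = 4 pi d^2 F for d = 17 Mpc and the EUVE DS bandpass flux quoted above.
Mpc_cm = 3.086e24
d = 17.0 * Mpc_cm                          # cm
flux = 3.4e-12                             # erg cm^-2 s^-1
print(f"L = {4.0 * np.pi * d**2 * flux:.2e} erg/s")   # ~1.2e41 erg/s
```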
In Figure EUVE Observations of clusters of galaxies: Virgo and M87 we show the radio-to-X-ray spectrum of the jet in M87 including the EUVE data point. As can be seen, the spectral model provided by Meisenheimer et al. (1996) also fits the EUVE observations. This confirms the suggested cut-off in the UV and further supports that the entire jet emission, from the radio to the X-ray band, is synchrotron radiation produced by relativistic electrons in the jet.
## 5 Summary
The observed EUV excess in the central Virgo region is not spatially coincident with either the distribution of the radio emission or the observed high temperature thermal X-ray emission seen in the ROSAT images. This provides strong evidence that a separate source mechanism is present. In addition, due to the required steep EUV to X-ray spectrum, this emission cannot be produced by an extrapolation to lower energies of the observed synchrotron radio emitting electrons. If the observed EUV excess is inverse Compton emission, a new population of relativistic electrons is required. Therefore, the same difficulties as in the case of the explanation of the EUV excess of the Coma cluster (cf. Bowyer & Berghöfer 1998) exist in the central Virgo region. The EUVE observations of M87 are consistent with the spectral cut-off in the spectrum of the jet in M87 as suggested by Meisenheimer et al. (1996). This further supports the idea that the EUV and X-ray emission of the jet is synchrotron radiation.
We thank Jean Eilek for providing us a postscript file of the M87 radio map. We acknowledge useful discussions with John Vallerga, Jean Dupuis, and Hans Böhringer. This work was supported in part by NASA contract NAS 5-30180. TWB was supported in part by a Feodor-Lynen Fellowship of the Alexander-von-Humboldt-Stiftung. |
no-problem/9912/astro-ph9912174.html | ar5iv | text | # The Effect of Time Variation in the Higgs Vacuum Expectation Value on the Cosmic Microwave Background
## I INTRODUCTION
The possibility that the fundamental constants of nature are not, in fact, constant, but might vary with time has long been an object of speculation by physicists . The fundamental constants which have received the greatest attention in this regard are the coupling constants which determine the interaction strengths of the fundamental forces: the gravitational constant $`G`$, the fine-structure constant $`\alpha `$, and the coupling constants for the weak and strong interactions. It has recently been noted that measurements of the cosmic microwave background (CMB) fluctuations in the near future will sharply constrain the variation of $`\alpha `$ at redshifts $`\sim 1000`$ ; here we extend this analysis to the Fermi coupling constant, through its dependence on the Higgs vacuum expectation value.
As emphasized by Dixit and Sher (see also reference ) the Fermi constant is not a fundamental coupling constant; it is actually independent of the gauge coupling constant and depends directly on the Higgs vacuum expectation value $`\varphi `$: specifically, $`G_F\propto \varphi ^{-2}`$. Hence, it is most meaningful to discuss constraints on the time variation of $`\varphi `$, rather than $`G_F`$. Furthermore, the possibility of a time-variation in the vacuum expectation value of a field seems more plausible than the time variation of a fundamental coupling constant. (For more detailed arguments in favor of considering (spatial) variations in $`\varphi `$, see reference ).
Constraints on the time variation of $`G_F`$ or $`\varphi `$ have been considered previously in references . As noted in reference , changing $`\varphi `$ has four main physical effects with astrophysical consequences: $`G_F`$ changes, the electron mass $`m_e`$ changes, and the nuclear masses and binding energies change. All four of these alter Big Bang nucleosynthesis, and requiring consistency with the observed element abundances gives limits of $`\mathrm{\Delta }G_F/G_F<20\%`$ at a redshift on the order of $`10^{10}`$. In contrast, only one effect is relevant for the CMB spectrum: the change in $`m_e`$. The weak interactions have no relevance at the epoch of recombination, while the effect of changing the nuclear masses and binding energies is negligible compared to the effect of altering $`m_e`$. Hence, for the purposes of the CMB, we can treat a change in the Higgs vacuum expectation value as equivalent to a change in $`m_e`$ alone, where $`m_e\propto \varphi `$.
In the next section, we describe the changes in recombination produced by a change in $`m_e`$ and show how the CMB fluctuation spectrum is altered. We also examine the degeneracy between altering $`m_e`$ and changing the fine structure constant $`\alpha `$. In Sec. III, we translate our results into limits on a time-variation in $`m_e`$ and, therefore, on the variation of $`\varphi `$ and $`G_F`$. We find that the MAP and PLANCK experiments might be sensitive to variations as small as $`|\mathrm{\Delta }m_e/m_e|\sim 10^{-2}`$–$`10^{-3}`$, although the limits are much weaker if $`\alpha `$ is allowed to vary as well.
## II Changes in the recombination scenario and the CMB
As in references , we will assume that the variation in $`m_e`$ is sufficiently small during the process of recombination that we need only consider the difference between $`m_e`$ at recombination and $`m_e`$ today; i.e., we treat $`m_e`$ as constant during recombination. The electron mass $`m_e`$ changes the CMB fluctuations because it enters into the expression for the differential optical depth $`\dot{\tau }`$ of photons due to Thomson scattering:
$$\dot{\tau }=x_en_pc\sigma _T,$$
(1)
where $`\sigma _T`$ is the Thomson scattering cross-section, $`n_p`$ is the number density of electrons (both free and bound) and $`x_e`$ is the ionization fraction. The Thomson cross section depends on $`m_e`$ through the relation
$$\sigma _T=8\pi \alpha ^2\hbar ^2/3m_e^2c^2.$$
(2)
The dependence of $`x_e`$ on $`m_e`$ is more complicated; it depends on both the change in the binding energy of hydrogen:
$$B=\alpha ^2m_ec^2/2,$$
(3)
which is the dominant effect, and also on the change in the recombination rates with $`m_e`$. Note that $`m_e`$ and $`\alpha `$ enter into the expressions for $`B`$ and $`\sigma _T`$ in different ways, so that the effect of changing $`m_e`$ cannot be parametrized in a simple way in terms of the effect of changing $`\alpha `$ (calculated in references ). However, since the change in $`B`$ dominates all other effects, we expect significant degeneracy between the effect of changing $`m_e`$ and the effect of changing $`\alpha `$. Since a change in $`m_e`$ affects the same physical quantities as a change in $`\alpha `$, our discussion will parallel that in reference .
The ionization fraction $`x_e`$ is determined by the ionization equation for hydrogen :
$$-\frac{dx_e}{dt}=𝒞\left[\mathcal{R}n_px_e^2-\beta (1-x_e)\mathrm{exp}\left(-\frac{B_1-B_2}{kT}\right)\right],$$
(4)
where $`\mathcal{R}`$ is the recombination coefficient, $`\beta `$ is the ionization coefficient, $`B_n`$ is the binding energy of the $`n^{th}`$ hydrogen atomic level and $`n_p`$ is the sum of free protons and hydrogen atoms. The Peebles correction factor $`𝒞`$ accounts for the effect of non-thermal Lyman-$`\alpha `$ resonance photons and is given by:
$$𝒞=\frac{1+A}{1+A+C}=\frac{1+K\mathrm{\Lambda }(1-x_e)}{1+K(\mathrm{\Lambda }+\beta )(1-x_e)},$$
(5)
where $`K=H^{-1}n_pc^3/8\pi \nu _{12}^3`$ ($`\nu _{12}`$ is the Lyman-$`\alpha `$ transition frequency), and $`\mathrm{\Lambda }`$ is the rate of decay of the 2s excited state to the ground state via 2 photons and scales as $`m_e`$ . Since $`\nu _{12}`$ scales as $`m_e`$, we have $`K\propto m_e^{-3}`$. The ionization and recombination coefficients are related by the principle of detailed balance:
$$\beta =\mathcal{R}\left(\frac{2\pi m_ekT}{h^2}\right)^{3/2}\mathrm{exp}\left(-\frac{B_2}{kT}\right),$$
(6)
and the recombination coefficient can be expressed as
$$\mathcal{R}=\sum _{n,\ell }^{*}\frac{(2\ell +1)8\pi }{c^2}\left(\frac{kT}{2\pi m_e}\right)^{3/2}\mathrm{exp}\left(\frac{B_n}{kT}\right)\int _{B_n/kT}^{\infty }\frac{\sigma _{n\ell }y^2dy}{\mathrm{exp}(y)-1},$$
(7)
where $`\sigma _{n\ell }`$ is the ionization cross-section for the $`(n,\ell )`$ excited level of hydrogen . In the above, the asterisk on the summation indicates that the sum from $`n=2`$ to $`\infty `$ needs to be regulated. The $`m_e`$ dependence of the ionization cross-section is rather complicated, but can be written as $`\sigma _{n\ell }\propto m_e^{-2}f(h\nu /B_1)`$, from which one can derive the following equation:
$$\frac{\partial \mathcal{R}(T)}{\partial m_e}=\frac{1}{m_e}\left(-2\mathcal{R}(T)+T\frac{\partial \mathcal{R}(T)}{\partial T}\right).$$
(8)
This equation allows us to relate the $`m_e`$ dependence of the recombination coefficient to its temperature parametrization $`\mathcal{R}(T)`$, which can be approximated by a power law of the form $`\mathcal{R}(T)\propto T^\xi `$. Then a solution of equation (8) has the $`m_e`$ dependence $`m_e^{\xi -2}`$. As in reference we will take $`\xi =-0.7`$, corresponding to a power law $`\mathcal{R}(T)\propto T^{-0.7}`$. We are interested in small changes in $`m_e`$, so that $`m_e^{\prime }=m_e(1+\mathrm{\Delta }_m)`$ with $`\mathrm{\Delta }_m\ll 1`$. Now equation (4) including a change in $`m_e`$ can be written as:
$$-\frac{dx_e}{dt}=𝒞^{\prime }\left[\mathcal{R}^{\prime }n_px_e^2-\beta ^{\prime }(1-x_e)\mathrm{exp}\left(-\frac{B_1^{\prime }-B_2^{\prime }}{kT}\right)\right],$$
(9)
with $`\mathcal{R}^{\prime }=\mathcal{R}(1+\mathrm{\Delta }_m)^{\xi -2}`$, the changed binding energies $`B_n^{\prime }=B_n(1+\mathrm{\Delta }_m)`$,
$$\beta ^{\prime }=\beta (1+\mathrm{\Delta }_m)^{\xi -1/2}\mathrm{exp}\left(-\frac{B_2\mathrm{\Delta }_m}{kT}\right),$$
(10)
and the changes in the Peebles factor (equation 5). We then integrated equations (9) and (1) using CMBFAST to obtain the CMB fluctuation spectra for different values of $`m_e`$.
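For orientation, the modified inputs amount to simple rescalings of the standard recombination quantities for a given $`\mathrm{\Delta }_m`$. The sketch below collects them (with $`\xi =-0.7`$ as above); it is bookkeeping for illustration, not the modified CMBFAST integration used to produce the figures.

```python
import numpy as np

# Sketch: multiplicative rescalings of the recombination inputs for a
# fractional electron-mass shift Delta_m, following the relations above
# (xi = -0.7 is the power-law index of the recombination coefficient).
k_B = 8.617e-5        # Boltzmann constant [eV/K]
B2 = 3.4              # hydrogen n = 2 binding energy [eV]
xi = -0.7

def rescale(delta_m, T):
    """Return factors multiplying the unperturbed quantities at temperature T [K]."""
    f = 1.0 + delta_m
    return {
        "B_n":      f,                         # binding energies
        "R":        f ** (xi - 2.0),           # recombination coefficient
        "beta":     f ** (xi - 0.5) * np.exp(-B2 * delta_m / (k_B * T)),
        "sigma_T":  f ** (-2.0),               # Thomson cross-section
        "K":        f ** (-3.0),               # K ~ nu_12^-3
        "Lambda":   f,                         # 2s -> 1s two-photon rate
    }

print(rescale(0.05, 3000.0))
```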
Fig. 1 shows the results for a change in $`m_e`$ of $`\pm 5`$% for a standard cold dark matter model (SCDM) with $`h=0.65`$ and $`\mathrm{\Omega }_bh^2=0.02`$. There are two main effects, similar to what is seen for a change in the fine-structure constant . First, an increase in $`m_e`$ shifts the curves to the right (i.e., larger $`l`$ values) due to the increase in the hydrogen binding energy, which results in earlier recombination, corresponding to a smaller sound horizon at the surface of last scattering. Second, the amplitude of the curves increases with increasing $`m_e`$. This second change is due to two different physical effects: an increase in the early ISW effect due to earlier recombination (which dominates at small $`l`$) and a change in the diffusion damping (which dominates at large $`l`$) .
Since a change in $`\alpha `$ affects the same physical quantities as a change in $`m_e`$, it is not surprising that the effects on the CMB fluctuation spectrum are similar. However, they are not identical. This can best be illustrated by choosing changes in $`m_e`$ and $`\alpha `$ which leave the binding energy $`B`$ unchanged, i.e., $`(1+\mathrm{\Delta }_\alpha )^2(1+\mathrm{\Delta }_m)=1`$, since the change in $`B`$ dominates the changes in the fluctuation spectrum. This is illustrated in Fig. 2, in which we have taken a $`3\%`$ increase in $`\alpha `$ and a $`5.74\%`$ decrease in $`m_e`$.
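The quoted pairing follows immediately from the degeneracy condition on the binding energy:

```python
# (1 + Delta_alpha)^2 * (1 + Delta_m) = 1 keeps B = alpha^2 m_e c^2 / 2 fixed.
delta_alpha = 0.03
delta_m = 1.0 / (1.0 + delta_alpha) ** 2 - 1.0
print(f"Delta_m = {delta_m:.4f}")   # -0.0574, i.e. the 5.74% decrease quoted above
```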
As expected, the changes in $`m_e`$ and $`\alpha `$ nearly cancel in their effect on the CMB, and there is no shift in the location of the peaks. However, there is a residual increase in the amplitude which is largest at large $`l`$. Recall that the shift in the position of the peaks and the change in their amplitude at small $`l`$ are dominated by the change in the binding energy, which is zero in this case. However, the change in the diffusion damping, which dominates the change in the amplitude at large $`l`$, scales differently with $`m_e`$ and $`\alpha `$, producing an increase in the peak amplitude at large $`l`$. If both $`\alpha `$ and $`m_e`$ are assumed to be variable, any CMB constraints on this variation will be considerably weaker. There is some theoretical justification to consider such models .
## III LIMITS ON VARIATIONS IN THE ELECTRON MASS
We know from the analysis in references and in the previous section that variations in $`\alpha `$ and/or $`m_e`$ will change the CMB spectrum significantly. In order to impose limits on this variation from future CMB data, the Fisher information matrix is a very useful tool. For small variations in the parameters ($`\theta _i`$) of a cosmological model the likelihood function ($`\mathcal{L}`$) can be expanded about its maximum as
$$\mathcal{L}\simeq \mathcal{L}_m\mathrm{exp}(-F_{ij}\delta \theta _i\delta \theta _j),$$
(11)
where $`F_{ij}`$ is the Fisher information matrix, as defined in reference
$$F_{ij}=\sum _{\ell =2}^{\ell _{\mathrm{max}}}\frac{1}{\mathrm{\Delta }𝒞_{\ell }^2}\left(\frac{\partial 𝒞_{\ell }}{\partial \theta _i}\right)\left(\frac{\partial 𝒞_{\ell }}{\partial \theta _j}\right),$$
(12)
where $`\mathrm{\Delta }𝒞_{\ell }`$ is the error in the measurement of $`𝒞_{\ell }`$. In this approximation the inverse of the Fisher matrix $`F^{-1}`$ is the covariance matrix, and in particular the variance of parameter $`\theta _i`$ is given by $`\sigma _i^2=(F^{-1})_{ii}`$. In the case of the CMB the cosmological parameters ($`\theta _i`$) that are taken to be determined from the measured fluctuation spectrum are the Hubble parameter $`h`$, the number density of baryons (parametrized as $`\mathrm{\Omega }_bh^2`$), the cosmological constant (parametrized as $`\mathrm{\Omega }_\mathrm{\Lambda }h^2`$), the effective number of relativistic neutrino species $`N_\nu `$, and the primordial helium abundance $`Y_p`$. Additionally, we allow the electron mass $`m_e`$ to serve as an undetermined parameter, and consider also the effect of adding the fine-structure constant $`\alpha `$ to this set. We make the assumption that the experiments are limited only by the cosmic variance up to a maximum $`\ell `$, denoted by $`\ell _{max}`$. Analysis of the Fisher information matrix will now enable us to calculate a rough upper bound for the limits on $`\mathrm{\Delta }_m`$ which could be obtained from future CMB experiments.
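A schematic of this estimate is given below. In the actual analysis the derivatives of $`𝒞_{\ell }`$ with respect to each parameter come from CMBFAST runs with that parameter varied; here random placeholders are used purely to show the bookkeeping, with cosmic-variance errors $`\mathrm{\Delta }𝒞_{\ell }=\sqrt{2/(2\ell +1)}𝒞_{\ell }`$.

```python
import numpy as np

# Sketch of the Fisher-matrix estimate: F_ij = sum_l (dC_l/dtheta_i)(dC_l/dtheta_j)/DeltaC_l^2,
# sigma_i = sqrt((F^-1)_ii).  The spectrum and its parameter derivatives below
# are random placeholders; in practice they come from CMBFAST runs.
rng = np.random.default_rng(0)
l_max = 1500
ell = np.arange(2, l_max + 1)
n_par = 7                                   # h, Omega_b h^2, ..., m_e, alpha

cl = 1.0 / ell**2                           # placeholder spectrum
dcl = rng.normal(size=(n_par, ell.size)) * cl       # placeholder dC_l/dtheta_i
delta_cl = np.sqrt(2.0 / (2.0 * ell + 1.0)) * cl    # cosmic-variance errors

A = dcl / delta_cl
F = A @ A.T                                 # Fisher matrix
sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # 1-sigma limits on each parameter
print(np.round(sigma, 4))
```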
We analyze two flat ($`\mathrm{\Omega }=1`$) cold dark matter models, a standard cold dark matter model (SCDM) and one with $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$ ($`\mathrm{\Lambda }`$CDM). Both models have $`h=0.65`$, $`\mathrm{\Omega }_bh^2=0.02`$, $`N_\nu =3.04`$ and $`Y_p=0.246`$. For each model we calculate the variation in the electron mass, $`\sigma _m/m`$, as a function of $`\ell _{max}`$ for two different cases. In the first case we consider only $`m_e`$ to vary, taking $`\alpha `$ as constant; in the second case we take both $`m_e`$ and $`\alpha `$ to be variable. The results are shown in Fig. 3.
If $`\alpha `$ is taken to be constant, the upper limits on $`|\mathrm{\Delta }m_e/m_e|`$ are of order $`10^{-2}`$–$`10^{-3}`$ for $`\ell _{max}\sim 500`$–$`2500`$ in both the SCDM and $`\mathrm{\Lambda }`$CDM models. Since $`m_e\propto \varphi `$, and $`G_F\propto \varphi ^{-2}`$, similar limits apply to the variation in $`\varphi `$ and $`G_F`$. This represents potentially a much tighter limit on the time variation in $`G_F`$ than can be obtained from Big Bang nucleosynthesis . However, if we allow for an independent variation in both $`m_e`$ and $`\alpha `$, then these limits become much less restrictive, since these two effects are nearly degenerate. For $`\ell _{max}\sim 500`$–$`1000`$ the limit on $`|\mathrm{\Delta }m_e/m_e|`$ is no better than 10%, while for $`\ell _{max}>1500`$ it can be as small as $`10^{-2}`$. This is consistent with the results shown in Fig. 2: the degeneracy between the effect of changing $`m_e`$ and the effect of changing $`\alpha `$ is broken only at the largest values of $`l`$. As we have noted, there are models in which simultaneous variation of $`\varphi `$ and $`\alpha `$ occurs "naturally" . Hence, our result also supplies an important caveat to the limits on $`|\mathrm{\Delta }\alpha /\alpha |`$ discussed in references : these limits will apply only if the Higgs vacuum expectation value is taken to be constant.
We are grateful to M. Kaplinghat for helpful discussions, and to S. Hannestad for useful comments on the manuscript. We thank U. Seljak and M. Zaldariagga for the use of CMBFAST . This work was supported in part by the DOE (DE-FG02-91ER40690). |
no-problem/9912/nucl-th9912016.html | ar5iv | text | # Properties of 𝛽-stable neutron star matter with hyperons
## I Introduction
The physics of compact objects like neutron stars offers an intriguing interplay between nuclear processes and astrophysical observables. Neutron stars exhibit conditions far from those encountered on earth; typically, expected densities $`\rho `$ of a neutron star interior are of the order of $`10^3`$ or more times the density $`\rho _d\simeq 4\times 10^{11}`$ g/cm<sup>3</sup> at ’neutron drip’, the density at which nuclei begin to dissolve and merge together. Thus, the determination of an equation of state (EoS) for dense matter is essential to calculations of neutron star properties. The EoS determines properties such as the mass range, the mass-radius relationship, the crust thickness and the cooling rate. The same EoS is also crucial in calculating the energy released in a supernova explosion.
At densities near the saturation density of nuclear matter (with number density $`n_0=0.16`$ fm<sup>-3</sup>), we expect the matter to be composed of mainly neutrons, protons and electrons in $`\beta `$-equilibrium, since neutrinos have on average a mean free path larger than the radius of the neutron star. The equilibrium conditions can then be summarized as
$$\mu _n=\mu _p+\mu _e,n_p=n_e,$$
(1)
where $`\mu _i`$ and $`n_i`$ refer to the chemical potential and number density in fm<sup>-3</sup> of particle species $`i`$, respectively. At the saturation density of nuclear matter, $`n_0`$, the electron chemical potential is of the order $`100`$ MeV. Once the rest mass of the muon is exceeded, it becomes energetically favorable for an electron at the top of the $`e^{-}`$ Fermi surface to decay into a $`\mu ^{-}`$. We then develop a Fermi sea of degenerate negative muons, and we have to modify the charge balance according to $`n_p=n_e+n_\mu `$, and require that $`\mu _e=\mu _\mu `$.
As the density increases, new hadronic degrees of freedom may appear in addition to neutrons and protons. One such degree of freedom is hyperons, baryons with a strangeness content. Contrary to terrestrial conditions where hyperons are unstable and decay into nucleons through the weak interaction, the equilibrium conditions in neutron stars can make the inverse process happen, so that the formation of hyperons becomes energetically favorable. As soon as the chemical potential of the neutron becomes sufficiently large, energetic neutrons can decay via weak strangeness non-conserving interactions into $`\mathrm{\Lambda }`$ hyperons leading to a $`\mathrm{\Lambda }`$ Fermi sea with $`\mu _\mathrm{\Lambda }=\mu _n`$. However, one expects $`\mathrm{\Sigma }^{-}`$ to appear via
$$e^{-}+n\rightarrow \mathrm{\Sigma }^{-}+\nu _e,$$
(2)
at lower densities than the $`\mathrm{\Lambda }`$, even though $`\mathrm{\Sigma }^{-}`$ is more massive. The negatively charged hyperons appear in the ground state of matter when their masses equal $`\mu _e+\mu _n`$, while the neutral hyperon $`\mathrm{\Lambda }`$ appears when $`\mu _n`$ equals its mass. Since the electron chemical potential in matter is larger than the mass difference $`m_\mathrm{\Sigma }^{-}-m_\mathrm{\Lambda }=81.76`$ MeV, $`\mathrm{\Sigma }^{-}`$ will appear at lower densities than $`\mathrm{\Lambda }`$. For matter with hyperons as well the chemical equilibrium conditions become,
$`\mu _\mathrm{\Xi }^{-}=\mu _\mathrm{\Sigma }^{-}=\mu _n+\mu _e,`$ (3)
$`\mu _\mathrm{\Lambda }=\mu _{\mathrm{\Xi }^0}=\mu _{\mathrm{\Sigma }^0}=\mu _n,`$ (4)
$`\mu _{\mathrm{\Sigma }^+}=\mu _p=\mu _n-\mu _e.`$ (5)
We have omitted isobars $`\mathrm{\Delta }`$, see the discussion below.
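The practical content of Eqs. (3)–(5) is a set of threshold tests: a hyperon enters the ground state once its in-medium energy falls below $`\mu _n+\mu _e`$ (negative charge), $`\mu _n`$ (neutral) or $`\mu _n-\mu _e`$ (positive charge). A schematic check, with placeholder chemical potentials and single-particle potentials in place of the self-consistent G-matrix values discussed below, is:

```python
# Sketch: threshold test for the appearance of a hyperon of charge q,
#   m + U <= mu_n - q * mu_e,
# i.e. mu_n + mu_e for q = -1, mu_n for q = 0, mu_n - mu_e for q = +1.
# Chemical potentials and the potentials U are illustrative placeholders.
hyperons = {                # name: (mass [MeV], charge, assumed U [MeV])
    "Sigma-": (1197.4, -1, -10.0),
    "Lambda": (1115.7,  0,  -5.0),
    "Xi-":    (1321.7, -1,   0.0),
}
mu_n, mu_e = 1050.0, 180.0  # MeV, illustrative values only

for name, (m, q, U) in hyperons.items():
    threshold = mu_n - q * mu_e
    print(f"{name:7s} m+U = {m + U:7.1f} MeV  threshold = {threshold:7.1f} MeV  "
          f"appears: {m + U <= threshold}")
```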
Hyperonic degrees of freedom have been considered by several authors, but mainly within the framework of relativistic mean field models or parametrized effective interactions , see also Balberg et al. for a recent update. Realistic hyperon-nucleon interactions were employed by Schulze et al. recently, see Ref. , in a many-body calculation in order to study where hyperons appear in neutron star matter. All these works show that hyperons appear at densities of the order of $`2n_0`$.
In Ref. however, one was only able to fix the density where $`\mathrm{\Sigma }^{-}`$ appears, since only a hyperon-nucleon interaction was employed. As soon as $`\mathrm{\Sigma }^{-}`$ appears, one needs a hyperon-hyperon interaction in order to estimate e.g., the self-energy of $`\mathrm{\Lambda }`$. The aim of this work is thus to present results from many-body calculations of hyperonic degrees of freedom for $`\beta `$-stable neutron star matter employing interactions which also account for strangeness $`S<-1`$. To achieve this goal, our many-body scheme starts with the most recent parametrization of the free baryon-baryon potentials for the complete baryon octet as defined by Stoks and Rijken in Ref. . This entails a microscopic description of matter starting from realistic nucleon-nucleon, hyperon-nucleon and hyperon-hyperon interactions. In a recent work we have developed a formalism for microscopic Brueckner-type calculations of dense nuclear matter that includes all types of baryon-baryon interactions and allows us to treat any asymmetry in the fractions of the different species ($`n,p,\mathrm{\Lambda },\mathrm{\Sigma }^{-},\mathrm{\Sigma }^0,\mathrm{\Sigma }^+,\mathrm{\Xi }^{-}`$ and $`\mathrm{\Xi }^0`$). Results for various fractions of the above particles were also discussed.
Here we extend the calculations of Ref. to studies of $`\beta `$-stable neutron star matter. Our results, together with a brief summary of the formalism discussed in Ref. , are presented in section II. There we discuss the equation of state (EoS) and the composition of $`\beta `$-stable matter with various baryon-baryon potentials. Based on the composition of matter we present also results for baryon superfluidity and discuss the possible neutron star structures.
## II Equation of state and composition of $`\beta `$-stable matter
Our many-body scheme starts with the most recent parametrization of the free baryon-baryon potentials for the complete baryon octet as defined by Stoks and Rijken in Ref. . This potential model, which aims at describing all interaction channels with strangeness from $`S=0`$ to $`S=-4`$, is based on SU(3) extensions of the Nijmegen potential models for the $`S=0`$ and $`S=-1`$ channels, which are fitted to the available body of experimental data and constrain all free parameters in the model. In our discussion we employ the interaction version NSC97e of Ref. , since this model, together with the model NSC97f of Ref. , result in the best predictions for hypernuclear observables . For a discussion of other interaction models, see Refs. .
### A Formalism
With a given interaction model, the next step is to introduce effects from the nuclear medium. Here we will construct the so-called $`G`$-matrix, which takes into account short-range correlations for all strangeness sectors, and solve the equations for the single-particle energies of the various baryons self-consistently. The $`G`$-matrix is formally given by
$`\langle B_1B_2\left|G(\omega )\right|B_3B_4\rangle =\langle B_1B_2\left|V\right|B_3B_4\rangle +`$ (6)
$`{\displaystyle \underset{B_5B_6}{\sum }}\langle B_1B_2\left|V\right|B_5B_6\rangle {\displaystyle \frac{1}{\omega -\epsilon _{B_5}-\epsilon _{B_6}+ı\eta }}`$ (7)
$`\times \langle B_5B_6\left|G(\omega )\right|B_3B_4\rangle `$ . (8)
Here $`B_i`$ represents all possible baryons $`n`$, $`p`$, $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }^{-}`$, $`\mathrm{\Sigma }^0`$, $`\mathrm{\Sigma }^+`$, $`\mathrm{\Xi }^{-}`$ and $`\mathrm{\Xi }^0`$ and their quantum numbers such as spin, isospin, strangeness, linear momenta and orbital momenta. The intermediate states $`B_5B_6`$ are those which are allowed by the Pauli principle, and the energy variable $`\omega `$ is the starting energy defined by the single-particle energies of the incoming external particles $`B_3B_4`$. The $`G`$-matrix is solved using relative and centre-of-mass coordinates, see e.g., Refs. for computational details. The single-particle energies are given by
$$\epsilon _{B_i}=t_{B_i}+u_{B_i}+m_{B_i}$$
(9)
where $`t_{B_i}`$ is the kinetic energy and $`m_{B_i}`$ the mass of baryon $`B_i`$. The single-particle potential $`u_{B_i}`$ is defined by
$$u_{B_i}=\mathrm{Re}\underset{B_j\le F_j}{\sum }\langle B_iB_j\left|G(\omega =\epsilon _{B_j}+\epsilon _{B_i})\right|B_iB_j\rangle .$$
(10)
The linear momentum of the intermediate single-particle state $`B_j`$ is limited by the size of the Fermi surface $`F_j`$ for particle species $`B_j`$. The last equation is displayed in terms of Goldstone diagrams in Fig. 1. Diagram (a) represents contributions from nucleons only as hole states, while diagram (b) has only hyperons as hole states in case we have a finite hyperon fraction in $`\beta `$-stable neutron star matter. The external legs represent nucleons and hyperons.
The total non-relativistic energy density, $`\epsilon `$, and the total binding energy per baryon, $`\mathcal{E}`$, can be evaluated from the baryon single-particle potentials in the following way
$$\epsilon =2\underset{B}{\sum }\int _0^{k_F^{(B)}}\frac{d^3k}{(2\pi )^3}\left(\frac{\hbar ^2k^2}{2M_B}+\frac{1}{2}U_B(k)\right)$$
(11)
$$\mathcal{E}=\frac{\epsilon }{n},$$
(12)
where $`n`$ is the total baryonic density. The density of a given baryon species is given by
$$n_B=\frac{k_{F_B}^3}{3\pi ^2}=x_Bn,$$
(13)
where $`x_B=n_B/n`$ is the fraction of baryon $`B`$, which is of course constrained by
$$\underset{B}{\sum }x_B=1.$$
(14)
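Equations (13) and (14) are simple bookkeeping, sketched below with illustrative (not computed) fractions:

```python
import numpy as np

# Sketch: each species has a Fermi momentum fixed by its partial density,
# k_F = (3 pi^2 x_B n)^(1/3), with the fractions summing to one (Eq. (14)).
hbar_c = 197.327                                  # MeV fm

def fermi_momenta(n_total, fractions):            # n_total in fm^-3
    assert abs(sum(fractions.values()) - 1.0) < 1e-10
    return {b: (3.0 * np.pi**2 * x * n_total) ** (1.0 / 3.0)
            for b, x in fractions.items()}        # fm^-1

x_B = {"n": 0.70, "p": 0.12, "Sigma-": 0.12, "Lambda": 0.06}   # illustrative
for b, kf in fermi_momenta(0.6, x_B).items():
    print(f"{b:7s} k_F = {kf:5.3f} fm^-1 ({kf * hbar_c:6.1f} MeV)")
```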
Detailed expressions for the single-particle energies and the $`G`$-matrices involved can be found in Ref. . In order to satisfy the equations for $`\beta `$-stable matter summarized in Eq. (5), we need to solve Eqs. (8) and (9) to obtain the single-particle energies of the particles involved at the corresponding Fermi momenta. Typically, for every total baryonic density $`n=n_N+n_Y`$, the density of nucleons plus hyperons, Eqs. (8) and (9) were solved for five nucleon fractions and five hyperon fractions and, for every nucleon and hyperon fraction, we computed three proton fractions and three fractions for the relevant hyperons. The set of equations in Eq. (5) was then solved by interpolating between different nucleon and hyperon fractions.
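A schematic of that interpolation step, with placeholder grid values standing in for the G-matrix chemical potentials at one fixed total density, is:

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

# Sketch: chemical potentials are tabulated on a coarse grid of proton
# fractions and the condition mu_n - mu_p = mu_e is solved on the interpolants.
# All grid values below are placeholders, not computed G-matrix results.
x_p = np.array([0.05, 0.10, 0.15, 0.20])
mu_n = interp1d(x_p, [1010.0, 1000.0, 992.0, 986.0], kind='cubic')   # MeV
mu_p = interp1d(x_p, [860.0, 880.0, 898.0, 914.0], kind='cubic')     # MeV
mu_e = interp1d(x_p, [140.0, 125.0, 112.0, 100.0], kind='cubic')     # MeV

x_eq = brentq(lambda x: mu_n(x) - mu_p(x) - mu_e(x), 0.05, 0.20)
print(f"beta-equilibrium proton fraction: {x_eq:.3f}")
```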
The many-body approach outlined above is the lowest-order Brueckner-Hartree-Fock (BHF) method extended to the hyperon sector. This means also that we consider only two-body interactions. However, it is well-known from studies of nuclear matter and neutron star matter with nucleonic degrees of freedom only that three-body forces are important in order to reproduce the saturation properties of nuclear matter, see e.g., Ref. for the most recent approach. In order to include such effects, we replace the contributions to the proton and neutron self-energies arising from intermediate nucleonic states only, see diagram (a) of Fig. 1, with those derived from Ref. (hereafter APR98) where the Argonne $`V_{18}`$ nucleon-nucleon interaction is used with relativistic boost corrections and a fitted three-body interaction, model. The calculations of Ref. represent at present perhaps the most sophisticated many-body approach to dense matter. In the discussions below we will thus present two sets of results for $`\beta `$-stable matter, one where the nucleonic contributions to the self-energy of nucleons is derived from the baryon-baryon potential model of Stoks and Rijken and one where the nucleonic contributions are replaced with the results from Ref. following the parametrization discussed in Eq. (49) of Ref. . Replacing the nucleon-nucleon part of the interaction model of Ref. with that from the $`V_{18}`$ nucleon-nucleon interaction , does not introduce large differences at the BHF level. However, the inclusion of three-body forces as done in Ref. is important. Hyperonic contributions will however all be calculated with the baryon-baryon interaction of Stoks and Rijken .
### B $`\beta `$-stable neutron star matter
The above models for the pure nucleonic part combined with the hyperon contribution yield the composition of $`\beta `$-stable matter, up to total baryonic number density $`n=1.2`$ fm<sup>-3</sup>, shown in Fig. 2. The corresponding energies per baryon are shown in Fig. 3 for both pure nucleonic (BHF and APR98 pn-matter) and hyperonic matter (BHF and APR98 with hyperons) in $`\beta `$-equilibrium for the same baryonic densities as in Fig. 2.
For both types of calculations $`\mathrm{\Sigma }^{-}`$ appears at densities $`2`$–$`3n_0`$. Since the EoS of APR98 for nucleonic matter yields a stiffer EoS than the corresponding BHF calculation, $`\mathrm{\Sigma }^{-}`$ appears at $`n=0.27`$ fm<sup>-3</sup> for the APR98 EoS and $`n=0.35`$ fm<sup>-3</sup> for the BHF EoS. These results are in fair agreement with results obtained from mean field calculations, see e.g., Refs. . The introduction of hyperons leads to a considerable softening of the EoS. Moreover, as soon as hyperons appear, the leptons tend to disappear, totally in the APR98 case whereas in the BHF calculation only muons disappear. For the APR98 case, positrons appear at higher densities, i.e., $`n=1.18`$ fm<sup>-3</sup>. This result is related to the fact that $`\mathrm{\Lambda }`$ does not appear at the densities considered here for the BHF EoS. For the APR98 EoS, $`\mathrm{\Lambda }`$ appears at a density $`n=0.67`$ fm<sup>-3</sup>. Recalling $`\mu _\mathrm{\Lambda }=\mu _n=\mu _p+\mu _e`$ and that the APR98 EoS is stiffer due to the inclusion of three-body forces, this clearly enhances the possibility of creating a $`\mathrm{\Lambda }`$ with the APR98 EoS. However, the fact that $`\mathrm{\Lambda }`$ does not appear in the BHF calculation can also, in addition to the softer EoS, be retraced to a delicate balance between the nucleonic and hyperonic hole state contributions (and thereby to features of the baryon-baryon interaction) to the self-energy of the baryons considered here, see diagrams (a) and (b) in Fig. 1. Stated differently, the contributions from $`\mathrm{\Sigma }^{-}`$, proton and neutron hole states to the $`\mathrm{\Lambda }`$ chemical potential are not attractive enough to lower the chemical potential of the $`\mathrm{\Lambda }`$ so that it equals that of the neutron. Furthermore, the chemical potential of the neutron does not increase enough since contributions from $`\mathrm{\Sigma }^{-}`$ hole states to the neutron self-energy are attractive, see e.g., Ref. for a detailed account of these aspects of the interaction model.
We illustrate the role played by the two different choices for nucleonic EoS in Fig. 4 in terms of the chemical potentials for various baryons for matter in $`\beta `$-equilibrium. We also note that, using the criteria in Eq. (5), neither the $`\mathrm{\Sigma }^0`$ nor the $`\mathrm{\Sigma }^+`$ appears for either the BHF or the APR98 equations of state. This is due to the fact that none of the $`\mathrm{\Sigma }^0`$-baryon and $`\mathrm{\Sigma }^+`$-baryon interactions are attractive enough. A similar argument applies to $`\mathrm{\Xi }^0`$ and $`\mathrm{\Xi }^{-}`$. In the latter case the mass of the particle is $`1315`$ MeV and almost $`200`$ MeV in attraction is needed in order to fulfil e.g., the condition $`\mu _\mathrm{\Lambda }=\mu _{\mathrm{\Xi }^0}=\mu _n`$. This has also been checked by us in studies of the self-energy of $`\mathrm{\Xi }^{-}`$ in finite nuclei, using the recipe outlined in Ref. . For both light and medium heavy nuclei, $`\mathrm{\Xi }^{-}`$ is unbound with the present hyperon-hyperon interactions, except for version NSC97f of Ref. . The latter results in a weakly bound $`\mathrm{\Xi }^{-}`$, in agreement with the recent studies of Batty et al. . From the bottom panel of Fig. 4 we see however that $`\mathrm{\Sigma }^0`$ could appear at densities close to $`1.2`$ fm<sup>-3</sup>. Thus, for the present densities, which would be within the range of energies for which the interaction model has been fitted, the only hyperons which can appear are $`\mathrm{\Sigma }^{-}`$ and $`\mathrm{\Lambda }`$.
In summary, using the realistic EoS of Akmal et al. for the nucleonic sector and including hyperons through the most recent model for the baryon-baryon interaction of the Nijmegen group , we find through a many-body calculation for matter in $`\beta `$-equilibrium that $`\mathrm{\Sigma }^{}`$ appears at a density of $`n=0.27`$ fm<sup>-3</sup> while $`\mathrm{\Lambda }`$ appears at $`n=0.67`$ fm<sup>-3</sup>. Due to the formation of hyperons, the matter is deleptonized at a density of $`n=0.85`$ fm<sup>-3</sup>. Within our many-body approach, no other hyperons appear at densities below $`n=1.2`$ fm<sup>-3</sup>. Although the EoS of Akmal et al. may be viewed as the currently most realistic approach to the nucleonic EoS, our results have to be gauged with the uncertainty in the hyperon-hyperon and nucleon-hyperon interactions. Especially, if the hyperon-hyperon interactions tend to be more attractive, this may lead to the formation of hyperons such as the $`\mathrm{\Lambda }`$, $`\mathrm{\Sigma }^0`$, $`\mathrm{\Sigma }^+`$, $`\mathrm{\Xi }^{}`$ and $`\mathrm{\Xi }^0`$ at lower densities. The hyperon-hyperon interaction and the stiffness of the nucleonic contribution play crucial roles in the formation of various hyperons. These results differ from present mean field calculations , where all kinds of hyperons can appear at the densities considered here.
### C Baryon superfluidity in $`\beta `$-stable matter
A generic feature of fermion systems with attractive interactions is that they may be superfluid in a region of the density-temperature plane. The $`{}_{}{}^{1}S_{0}^{}`$ wave of the nucleon-nucleon interaction is the best known and most investigated case in neutron stars, and the results indicate that one may expect a neutron superfluid in the inner crust of the star and a proton superfluid in the quantum liquid interior, both with energy gaps of the order of $`1\mathrm{MeV}`$ . Furthermore, neutrons in the quantum liquid interior may form a superfluid due to the attractive $`{}_{}{}^{3}P_{2}^{}`$-$`{}_{}{}^{3}F_{2}^{}`$ wave of the nucleon-nucleon interaction . Baryon superfluidity has important consequences for a number of neutron star phenomena, including glitches and cooling . If hyperons appear in neutron stars, they may also form superfluids if their interactions are sufficiently attractive. The case of $`\mathrm{\Lambda }`$ superfluidity has been investigated by Balberg and Barnea using parametrized effective $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ interactions. Results for $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }^{}`$-pairing using bare hyperon-hyperon interaction models have recently been presented by Takatsuka and Tamagaki . The result of both groups indicate the presence of a $`\mathrm{\Lambda }`$ superfluid for baryon densities in the range of $`2`$$`4n_0`$. The latter authors also suggest that the formation of a $`\mathrm{\Sigma }^{}`$ superfluid may be more likely than $`\mathrm{\Lambda }`$-superfluidity. Along the lines followed by these authors we will here present results for hyperon superfluidity within our model.
The crucial quantity in determining the onset of superfluidity is the energy gap function $`\mathrm{\Delta }(𝐤)`$. The value of this function at the Fermi surface is proportional to the critical temperature of the superfluid, and by determining $`\mathrm{\Delta }`$ we therefore map out the region of the density-temperature plane where the superfluid may exist. When the $`{}_{}{}^{1}S_{0}^{}`$ interaction is the driving cause of the superfluidity, the gap function becomes isotropic and depends on the magnitude of $`𝐤`$ only. It can be determined by solving the BCS gap equation
$$\mathrm{\Delta }(k)=-\frac{1}{\pi }\int _0^{\mathrm{\infty }}dk^{\prime }\,k^{\prime 2}\,\stackrel{~}{V}_{{}^{1}S_{0}}(k,k^{\prime })\frac{\mathrm{\Delta }(k^{\prime })}{\sqrt{(ϵ_{k^{\prime }}-\mu )^2+\mathrm{\Delta }(k^{\prime })^2}}.$$
(15)
In this equation, $`ϵ_k`$ is the momentum-dependent single particle energy in the medium for the particle species in question, $`\mu `$ is the corresponding chemical potential, and $`\stackrel{~}{V}_{{}_{}{}^{1}S_{0}^{}}`$ is the effective pairing interaction. At this point we emphasize that using parametrized effective interactions in the gap equation can lead to errors. The gap equation includes diagrams also found in the $`G`$-matrix, and one therefore needs to calculate $`\stackrel{~}{V}`$ systematically from microscopic many-body theory to avoid double counting of ladder contributions. The expansion for $`\stackrel{~}{V}`$ can be found in e.g. Migdal , and to lowest order $`\stackrel{~}{V}=V`$, the free-space two-particle interaction. Higher order terms include contributions from e.g. density- and spin-density fluctuations. In this first exploratory calculation we will follow Ref. and use the bare hyperon-hyperon interaction in Eq. (15). The relevant hyperon fractions and single-particle energies are taken from the BHF calculations described earlier in this paper. Details of the numerical solution of the gap equation can be found in Ref. .
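For orientation, the gap equation above can be solved by straightforward fixed-point iteration once the pairing matrix elements and single-particle energies are specified. The following minimal Python sketch does this for an assumed separable, attractive toy interaction and a free single-particle spectrum; the interaction strength, mass and chemical potentials are illustrative stand-ins, not the NSC97e $`\mathrm{\Sigma }^{-}`$-$`\mathrm{\Sigma }^{-}`$ matrix elements or the BHF spectra used in this work.

```python
import numpy as np

# Assumed separable attractive 1S0 interaction V(k, k') = -g * w(k) * w(k'),
# standing in for the bare hyperon-hyperon matrix elements (NOT the NSC97e model).
def w(k, k0=1.0):
    return np.exp(-(k / k0) ** 2)

def solve_gap(mu, g=2.5, mass=5.0, nk=400, kmax=6.0, tol=1e-10):
    """Fixed-point iteration of the isotropic BCS gap equation, Eq. (15)."""
    k = np.linspace(1e-4, kmax, nk)
    dk = k[1] - k[0]
    eps = k ** 2 / (2.0 * mass)            # free single-particle spectrum (assumption)
    V = -g * np.outer(w(k), w(k))          # V(k, k') on the momentum grid
    delta = np.ones(nk)                    # initial guess for the gap function
    for _ in range(500):
        E = np.sqrt((eps - mu) ** 2 + delta ** 2)
        new = -(1.0 / np.pi) * V @ (k ** 2 * delta / E) * dk
        if np.max(np.abs(new - delta)) < tol:
            break
        delta = 0.5 * (new + delta)        # damped update for numerical stability
    kF = np.sqrt(2.0 * mass * mu)
    return np.interp(kF, k, delta)         # gap at the Fermi surface, Delta_F

if __name__ == "__main__":
    for mu in (0.1, 0.3, 0.6):
        print(f"mu = {mu:4.2f}  Delta_F = {solve_gap(mu):.4f}")
```

The damped update is a common way to stabilize the iteration; replacing the toy interaction by tabulated $`{}_{}{}^{1}S_{0}^{}`$ matrix elements and BHF single-particle energies is straightforward.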
Fig. 5 shows the energy gap $`\mathrm{\Delta }_F\equiv \mathrm{\Delta }(k_F^{(\mathrm{\Sigma }^{-})})`$ as a function of the total baryon density for $`\mathrm{\Sigma }^{-}`$ hyperons in $`\beta `$-stable matter for the NSC97E model. Although $`\mathrm{\Lambda }`$ may appear at higher densities, the $`{}_{}{}^{1}S_{0}^{}`$ $`\mathrm{\Lambda }`$-$`\mathrm{\Lambda }`$ matrix elements of the NSC97E interaction are all repulsive, and therefore the energy gap for $`\mathrm{\Lambda }`$ hyperons would (to lowest order) have been zero at all densities, i.e. these particles would not have formed a superfluid. This is at variance with the results of Ref. ; however, as remarked earlier, this work employs an effective, parametrized interaction to drive the gap equation and therefore overestimates the $`\mathrm{\Lambda }`$ energy gap. Our $`\mathrm{\Sigma }^{-}`$ results are comparable to those of Ref. which were obtained with a Gaussian soft core parametrization of the bare $`\mathrm{\Sigma }^{-}`$-$`\mathrm{\Sigma }^{-}`$ interaction.
If taken at face value these results have implications for neutron star cooling. Since at low densities $`\mathrm{\Sigma }^{-}`$ is the only hyperon species that is present in our calculation, the most important contribution to the neutrino cooling rate at such densities comes from the reaction $`\mathrm{\Sigma }^{-}\to n+e^{-}+\overline{\nu }_e`$. According to Ref. the threshold density for this reaction to occur is at around $`2.4n_0`$. If the $`\mathrm{\Sigma }^{-}`$s are superfluid with energy gaps similar to what we found here, a sizeable reduction of the order of $`\mathrm{exp}(-\mathrm{\Delta }_F/kT)`$ may be expected in the reaction rate. If neutron stars were to cool through direct Urca processes, their surface temperatures would be barely detectable within less than 100 yr of the star’s birth. This is at odds with present observations. Thus, the formation of a hyperon superfluid will clearly suppress the hyperon direct Urca process, and cooling will most likely proceed through less efficient processes, bringing the results closer to the observed surface temperatures.
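To give a feeling for the size of this suppression, the following lines evaluate $`\mathrm{exp}(-\mathrm{\Delta }_F/kT)`$ for a few gap values and interior temperatures; the numbers are assumptions chosen for illustration only and are not the Fig. 5 results themselves.

```python
import math

# Order-of-magnitude illustration of the exp(-Delta_F / kT) suppression factor.
K_B = 8.617e-11                     # Boltzmann constant in MeV per Kelvin

for delta_f in (0.3, 0.8):          # assumed 1S0 Sigma^- gaps in MeV (illustrative)
    for T in (1e9, 5e8):            # assumed interior temperatures in K
        print(f"Delta_F = {delta_f} MeV, T = {T:.0e} K: "
              f"suppression ~ {math.exp(-delta_f / (K_B * T)):.1e}")
```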
### D Structure of neutron stars
We end this section with a discussion on neutron star properties with the above equations of state.
The best determined neutron star masses are found in binary pulsars and all lie in the range $`1.35\pm 0.04M_{\odot }`$ except for the nonrelativistic pulsar PSR J1012+5307 of mass $`M=(2.1\pm 0.8)M_{\odot }`$ . Several X-ray binary masses have been measured of which the heaviest are Vela X-1 with $`M=(1.9\pm 0.2)M_{\odot }`$ and Cygnus X-2 with $`M=(1.8\pm 0.4)M_{\odot }`$ . The recent discovery of high-frequency brightness oscillations in low-mass X-ray binaries provides a promising new method for determining masses and radii of neutron stars, see Ref. . The kilohertz quasi-periodic oscillations (QPO) occur in pairs and are most likely the orbital frequencies of accreting matter in Keplerian orbits around neutron stars of mass $`M`$ and its beat frequency with the neutron star spin. According to Zhang et al. and Kaaret et al. the accretion can for a few QPO’s be tracked to its innermost stable orbit. For slowly rotating stars the resulting mass is $`M\approx 2.2M_{\odot }(\mathrm{kHz}/\nu _{QPO})`$. For example, the maximum frequency of the 1060 Hz upper QPO observed in 4U 1820-30 gives $`M\approx 2.25M_{\odot }`$ after correcting for the neutron star rotation frequency. If the maximum QPO frequencies of 4U 1608-52 ($`\nu _{QPO}=1125`$ Hz) and 4U 1636-536 ($`\nu _{QPO}=1228`$ Hz) also correspond to innermost stable orbits, the corresponding masses are $`2.1M_{\odot }`$ and $`1.9M_{\odot }`$. These constraints give us an upper limit for the mass of the order of $`M\approx 2.2M_{\odot }`$ and a lower limit $`M\approx 1.35M_{\odot }`$, and restrict thereby severely the EoS for dense matter.
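As a quick check of the numbers quoted above, the slowly-rotating estimate $`M\approx 2.2M_{\odot }(\mathrm{kHz}/\nu _{QPO})`$ can be evaluated directly. Rotational corrections, which raise the 4U 1820-30 value to about $`2.25M_{\odot }`$, are neglected in this small sketch, so the values come out slightly below those quoted in the text.

```python
# Innermost-stable-orbit mass estimate for slowly rotating stars,
# M ~ 2.2 M_sun * (1 kHz / nu_QPO), for the QPO frequencies quoted above
# (rotational corrections are neglected here).
qpo_frequencies_hz = {"4U 1820-30": 1060.0, "4U 1608-52": 1125.0, "4U 1636-536": 1228.0}

for source, nu in qpo_frequencies_hz.items():
    mass = 2.2 * (1000.0 / nu)      # in solar masses
    print(f"{source}: nu_QPO = {nu:6.1f} Hz  ->  M ~ {mass:.2f} M_sun")
```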
In the following, we display the results for mass and radius using the equations of state discussed above. In order to obtain the radius and mass of a neutron star, we have solved the Tolman-Oppenheimer-Volkoff equation with and without rotational corrections, following the approach of Hartle , see also Ref. . Our results are shown in Figs. 6 and 7; a minimal integration sketch for the spherically symmetric case, with a toy equation of state, is given after the list below. The equations of state we have used are those for
1. $`\beta `$-stable $`pn`$-matter with the parametrization of the results from Akmal et al. made in Ref. . This EoS is rather stiff compared with the EoS obtained with hyperons, see Fig. 3. The EoS yields a maximum mass $`M\approx 1.9M_{\odot }`$ without rotational corrections and $`M\approx 2.1M_{\odot }`$ when rotational corrections are included. The results for the mass are shown in Fig. 6 as functions of central density $`n_c`$. They are labelled as $`pn`$-matter with and without rotational corrections. The corresponding mass-radius relation (without rotational corrections) is shown in Fig. 7.
2. The other EoS employed is that which combines the nucleonic part of Ref. with the computed hyperon contribution. As can be seen from Fig. 6, the softening of the EoS due to additional binding from hyperons leads to a reduction of the total mass. Without rotational corrections, we obtain a maximum mass $`M\approx 1.3M_{\odot }`$ whilst the rotational correction increases the mass to $`M\approx 1.4M_{\odot }`$. The size of the reduction, $`\mathrm{\Delta }M\approx 0.6`$-$`0.7M_{\odot }`$, and the obtained neutron star masses due to hyperons are comparable to those reported by Balberg et al. .
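The integration sketch announced above is given here. It solves the spherically symmetric Tolman-Oppenheimer-Volkoff equations in geometrized units for an assumed $`\mathrm{\Gamma }=2`$ polytropic equation of state; this toy EoS and the chosen central densities are illustrative only and do not reproduce the APR98 or BHF+hyperon results of Fig. 6, but the same routine can be fed any tabulated $`ϵ(P)`$.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal TOV integration sketch in geometrized units (G = c = 1, lengths in km).
# A Gamma = 2 polytrope, P = K * eps**2 with K = 100 km**2, is used as a stand-in
# equation of state; it is NOT the APR98 or BHF+hyperon EoS discussed in the text.
K, GAMMA = 100.0, 2.0
M_SUN_KM = 1.4766                              # solar mass in km

def eps_of_P(P):
    return (np.maximum(P, 0.0) / K) ** (1.0 / GAMMA)

def tov_rhs(r, y):
    P, m = y
    eps = eps_of_P(P)
    dPdr = -(eps + P) * (m + 4.0 * np.pi * r ** 3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r ** 2 * eps
    return [dPdr, dmdr]

def star(eps_c):
    P_c = K * eps_c ** GAMMA
    surface = lambda r, y: y[0] - 1e-12 * P_c  # stop when the pressure vanishes
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, (1e-6, 100.0), [P_c, 0.0],
                    events=surface, rtol=1e-8, atol=1e-12, max_step=0.05)
    R, M = sol.t[-1], sol.y[1, -1]             # radius (km) and gravitational mass
    return R, M / M_SUN_KM

if __name__ == "__main__":
    for eps_c in np.linspace(5e-4, 3e-3, 6):   # assumed central energy densities (km^-2)
        R, M = star(eps_c)
        print(f"eps_c = {eps_c:.1e} km^-2  R = {R:5.2f} km  M = {M:4.2f} M_sun")
```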
There are other features as well to be noted from Fig. 6. The EoS with hyperons reaches a maximum mass at a central density $`n_c\approx 1.2`$-$`1.3`$ fm<sup>-3</sup>. In Fig. 2 we showed that the only hyperons which can appear at these densities are $`\mathrm{\Lambda }`$ and $`\mathrm{\Sigma }^{-}`$. If other hyperons were to appear at higher densities, this would most likely lead to a further softening of the EoS, and thereby smaller neutron star masses. Furthermore, the softer EoS yields also a smaller moment of inertia, as seen in Fig. 8.
The reader should however note that our calculation of hyperon degrees of freedom is based on a non-relativistic Brueckner-Hartree-Fock approach. Although the nucleonic part extracted from Ref. , including three-body forces and relativistic boost corrections, is to be considered as a benchmark calculation for nucleonic degrees of freedom, relativistic effects in the hyperonic calculation could result in a stiffer EoS and thereby a larger mass. However, relativistic mean field calculations with parameters which result in a similar composition of matter as shown in Fig. 2, result in similar masses as those reported in Fig. 6. In this sense, our results may provide lower and upper bounds for the maximum mass. This leaves two natural options when compared to the observed neutron star masses. If the above heavy neutron star masses prove erroneous by more detailed observations and only masses like those of binary pulsars are found, this may indicate that heavier neutron stars simply are not stable, which in turn implies a soft EoS, or that a significant phase transition must occur already at a few times nuclear saturation densities. Our EoS with hyperons would fit into this case, although the mass without rotational corrections is on the lower side. Else, if the large masses from QPO’s are confirmed, then the EoS for baryonic matter needs to be stiffer and in our case, this would rule out the presence of hyperons up to densities $`10n_0=1.2`$ fm<sup>-3</sup>.
Although we have only considered the formation of hyperons in neutron stars, transitions to other degrees of freedom such as quark matter, kaon condensation and pion condensation may or may not take place in neutron star matter. We would however like to emphasize that the hyperon formation mechanism is perhaps the most robust one and is likely to occur in the interior of a neutron star, unless the hyperon self-energies are strongly repulsive due to repulsive hyperon-nucleon and hyperon-hyperon interactions, a repulsion which would contradict present data on hypernuclei . The EoS with hyperons yields however neutron star masses without rotational corrections which are even below $`1.4M_{\odot }`$. This means that our EoS with hyperons needs to be stiffer, a fact which may in turn imply that more complicated many-body terms not included in our calculations, such as three-body forces between nucleons and hyperons and/or relativistic effects, are needed.
## III Conclusions
Employing the recent parametrization of the free baryon-baryon potentials for the complete baryon octet of Stoks and Rijken , we have performed a microscopic many-body calculation of the structure of $`\beta `$-stable neutron star matter including hyperonic degrees of freedom. The potential model employed allows for the presence of only two types of hyperons up to densities ten times nuclear matter saturation density. These hyperons are $`\mathrm{\Sigma }^{-}`$ and $`\mathrm{\Lambda }`$. The interactions for strangeness $`S=-1`$, $`S=-2`$, $`S=-3`$ and $`S=-4`$ are not attractive enough to allow the formation of other hyperons. The presence of hyperons leads however to a considerable softening of the EoS, entailing a corresponding reduction of the maximum mass of the neutron star. With hyperons, we obtain maximum masses of the order $`M\approx 1.3`$-$`1.4M_{\odot }`$.
In addition, since $`\mathrm{\Sigma }^{-}`$ hyperons appear already at total baryonic densities $`n=0.27`$ fm<sup>-3</sup>, we have also considered the possibility of forming a hyperon superfluid. The latter would in turn quench the increased emission of neutrinos due to the presence of hyperons. Within our many-body approach, we find that $`\mathrm{\Sigma }^{-}`$ forms a superfluid in the $`{}_{}{}^{1}S_{0}^{}`$ wave, whereas the $`\mathrm{\Lambda }\mathrm{\Lambda }`$ interaction for the same partial wave leads to a vanishing gap for the potential model employed here.
We are much indebted to H. Heiselberg, H.-J. Schulze and V. G. J. Stoks for many useful comments. This work has been supported by the DGICYT (Spain) Grant PB95-1249 and the Program SCR98-11 from the Generalitat de Catalunya. One of the authors (I.V.) wishes to acknowledge support from a doctoral fellowship of the Ministerio de Educación y Cultura (Spain).
no-problem/9912/cond-mat9912148.html | ar5iv | text | # Metastabilities in vortex matter
## I Introduction
The mixed state of type II superconductors is seen for applied fields H lying between the lower (H<sub>C1</sub>) and upper (H<sub>C2</sub>) critical fields. It consists of vortices carrying quantized flux which ideally form a two-dimensional hexagonal lattice under repulsive forces. Can this lattice of vortices undergo structural transitions? Can vortex structures show metastabilities seen in usual condensed matter? These questions assume a broader significance because the density of vortex matter in superconductors can be varied over a wide range ( from nearly zero to about 10<sup>12</sup> per cm<sup>2</sup>) by varying applied magnetic field, and may thus provide better experimental tests for metastabilities around phase transitions.
The period after the discovery of high-T<sub>C</sub> superconductors (HTSC) has seen many theoretical works proposing vortex- matter phase transitions1 . Vortex lattice melting as the field (or temperature) is raised towards the H<sub>C2</sub>(T) line now stands established as a first order phase transition and experiments have established a latent heat, as well as a jump in equilibrium magnetisation, satisfying the Clausius-Clapeyron relation 2 ; 3 ; 4 .
Simultaneously, an early theoretical prediction of a first order phase transition in the vortex lattice of paramagnetic superconductors5 ; 6 ; 7 , in which the infinitely long vortices get segmented into short strings with a sudden enhancement of pinning8 , (and thus critical current density J<sub>C</sub> vs H shows a peak) has been motivating experimentalists into studying in great detail the "peak effect" (PE) in CeRu<sub>2</sub>. The first thermodynamic signature indicating that the onset of the PE is a first order transition (FOT), consistent with the theoretical prediction of Fulde and Ferrell, and Larkin and Ovchinnikov (FFLO), came through the observation that the PE appears at a field H<sub>a</sub><sup>*</sup> on increasing field, but vanishes at a lower field H<sub>d</sub><sup>*</sup> on decreasing fields9 ; 10 ; 11 . This hysteresis in the occurrence of the PE was taken as the hysteresis expected in a FOT. We have attempted to identify other measurable signals of a FOT. The vortex matter in CeRu<sub>2</sub> has been our paradigm, and the FFLO theory has been motivating us, possibly as a red herring. The theory is correct and is still used by theorists in understanding coexisting superconductivity and (weak) magnetism12 . Since our experiments cannot probe the microscopic nature of the phase in the PE region of CeRu<sub>2</sub>, we shall not discuss the relevance of FFLO theory to CeRu<sub>2</sub> any further in this talk; those interested can see our recent papers13 ; 14 .
In this talk we shall briefly outline the existing wisdom of experimental tests for a FOT, and then present our extension to the case where one can interchangeably vary two control parameters to traverse the FOT line. The need for this extension was necessitated by our studies on CeRu<sub>2</sub>. We shall state new predictions, and discuss experimental verification.
## II Supercooling across first order phase transitions
A phase transition is defined as an nth-order transition in the Ehrenfest scheme15 if the nth derivatives of the free energy are discontinuous, whereas all lower derivatives are continuous, at the transition point. (The derivatives are taken with respect to the control variables.) The derivative with respect to temperature is entropy and its discontinuity in a FOT implies a latent heat, while the derivative with respect to pressure is volume which should show a discontinuous change at T<sub>C</sub>. The latent heat and volume change are further related by the Clausius-Clapeyron equation.
The Ehrenfest scheme is ambiguous15 for some phase transitions - one example being the lambda transition in liquid helium. Phase transitions are now classified using an order parameter S that changes across the phase boundary. The change is discontinuous, from $`S=0`$ to $`S=S_0`$ for a FOT, but continuous for a second-order transition. Two phases can coexist at the transition point of a FOT. This is put on a formal footing by writing the free energy as a function of the order parameter S. When the control variable (say T) corresponds to the transition point (T<sub>C</sub>), then the free energy f(S) has two equal minima (at $`S=0`$ and $`S=S_0`$) for a FOT16 , while there is only one minimum for a second order transition. One can obviously show that a FOT is accompanied by a latent heat and a sudden volume change, consistent with the Ehrenfest scheme.
The existence of two equal minima implies the coexistence of two phases at the transition point; slightly away from the transition point one still has two minima - one global and one local - with slightly unequal values of the free energy f. We show in fig. 1a schematic of f(S) curves as the control variable (T) is varied from above to below the transition point (T<sub>C</sub>). The high temperature phase has higher entropy and is ‘disordered’, having an order parameter $`S=0`$. while the low temperature phase has a finite (but T–dependent) order parameter. Since $`S=0`$ continues to correspond to a local minimum in f(S) slightly below T<sub>C</sub>, one can supercool the higher entropy state below the transition point16 . Similarly one can superheat the ordered phase above the transition point. One thus sees another experimental characteristic of a FOT, viz. the possibility of hysteresis in the transition point as one varies a control variable. This was the observation9 ; 10 ; 11 that led to the inference that the onset of PE in CeRu<sub>2</sub> is a FOT.
Concentrating on supercooling, we note from Fig. 1 that the barrier f<sub>B</sub>(T) in f(S) separating the metastable state at $`S=0`$ from the stable ordered state reduces continuously as T is lowered below T<sub>C</sub>, and vanishes at the limit of metastability (T<sup>*</sup>) of the supercooled state16 . Supercooling is easily observed across the water-ice transition17 , a FOT familiar to all of us, and we believe that the hysteresis in the onset of PE in CeRu<sub>2</sub> is also a manifestation of the same18 ; 19 . If the system is in the disordered state at T$`<`$T<sub>C</sub>, then nucleation of the stable ordered phase occurs, with $`f_B(T)>>kT`$, only by introducing localised fluctuations of large energy e<sub>f</sub>. The nucleation rate is extremely sensitive to the height of the barrier f<sub>B</sub>, and carefully purified metastable liquids evolve suddenly from apparent stability to catastrophic growth of the ordered phase17 . The barrier vanishes below T<sup>*</sup>, and the unstable disordered state now relaxes into the ordered state by the spontaneous growth of long-wavelength fluctuations of small amplitude, i.e. by spinodal decomposition17 . (Here we shall assume that the ordered stable phase is formed fast compared to experimental time scales if $`f_B(T)\le [e_f+kT]`$, and the system remains in the metastable state if f<sub>B</sub>(T) is larger. We shall briefly initiate a discussion on kinetics and kinetic metastabilities in the last section of this paper).
Both the water-ice transition, and the onset of PE in CeRu<sub>2</sub>, have been studied extensively with density as a second control variable. While the density of water has been varied by varying pressure up to 3kbar17 , the density of vortices is varied by varying the applied magnetic field, and the onset of PE in CeRu<sub>2</sub> has been tracked from 1 Tesla to 4 Tesla11 , corresponding to a four-fold change in vortex density. We can now talk of supercooling the disordered $`S=0`$ phase, at differing densities, below the T<sub>C</sub>(P) corresponding to that density. Can one compare the extent of metastability in such supercooled states? Secondly, we can cross the transition line T<sub>C</sub>(P) by varying density rather than by varying temperature. The f(S) curves are defined once a (T,P) point is defined; the $`S=0`$ state would be metastable just below the FOT line irrespective of whether the line is crossed by varying T or by varying density. We can thus supercool the disordered phase into the region below the FOT line even by varying density. Can one compare the metastability in a supercooled state, at a point (T,P) below the FOT line, as depending on the path followed to reach this (T,P) point? Before pursuing this we must emphasize that such questions cropped up in our studies on CeRu<sub>2</sub> because it is experimentally easy to follow arbitrary paths in density (magnetic field) and temperature space in the case of vortex matter. The disordered phase here is characterised by a larger critical current density J<sub>C</sub> compared to the ordered phase; supercooling is confirmed by measuring the minor hysteresis loops14 ; 18 ; 19 in contrast to the case of supercooled water where one measures diffusivity17 .
We have recently argued that while reduction of temperature at constant density does not a priori cause building up of fluctuations, the very procedure of varying density introduces fluctuations20 . Lowering temperature isothermally can keep e<sub>f</sub> zero in ‘carefully purified liquids’. Density variation at constant T, however, builds up e<sub>f</sub> even in such systems. It was noted that free energy curves should be plotted for three parameters as f(P,T,S) where P is a generic pressure that implies magnetic field in the case of vortex matter. Supercooling along various paths in (T,P) space involves moving from a f(P<sub>1</sub>,T<sub>1</sub>,S) curve to f(P<sub>2</sub>,T<sub>2</sub>,S) curve in this multidimensional space. These curves have two equal minima for (T,P) values lying on the FOT line T<sub>C</sub>(P), and the barrier f<sub>B</sub> vanishes for (T,P) values lying on a line T(P). We refer to this T(P) line as the limit of metastability on supercooling. The first point to note is that while f(P,T,0) is weakly dependent on T, it depends strongly on P20 . The second point is that the S-dependence of f(P,T,S), for fixed T, is different at different P. This originates from the different densities of the ordered and disordered phases, and this was also incorporated by us20 . Finally, we argued that if density is varied then a fraction of the energy change f(P<sub>1</sub>,T,0) - f(P<sub>2</sub>,T,0) will be randomised into a fluctuation energy e<sub>f</sub>. This last point20 looks obvious in the case of vortex matter where vortices get pinned and unpinned as they move, and the energy dissipated in the process is easily measured as the area within the M-H loop. With these physical inputs, we could make the following predictions20 :
1. When T<sub>C</sub> falls with rising density as in the vortex matter FOT, or in the water-ice transition below 2 kbar, then ($`T_C-T^{*}`$) will rise with rising density. If T<sub>C</sub> rises with rising density as in water-ice above 2 kbar, and in most other solid-liquid transitions, then ($`T_C-T^{*}`$) will fall with rising density. This prediction is consistent with known data on water17 .
2. The disordered phase can be supercooled up to the limit of metastability T<sup>*</sup>(P) only if T is lowered in constant P. If the T<sub>C</sub>(P) line is crossed by lowering P at constant T, then supercooling will terminate at T<sub>0</sub>(P), which lies above the T<sup>*</sup>(P) line. If T<sub>C</sub> falls with rising density, then ($`T_0(P)-T^{*}(P)`$) rises with rising density.
3. A supercooled metastable state can be transformed into the stable ordered state by density variations through variation of pressure or magnetic field. These variations produce a fluctuation energy e<sub>f</sub> which, when large enough, can cause a jump over the free energy barrier f<sub>B</sub>. In vortex matter e<sub>f</sub> is related to the area under the M-H loop as the field is varied by h. This area, and thus e<sub>f</sub>, increase monotonically but nonlinearly with h. If this field is varied n times with fixed h, then e<sub>f</sub> will increase linearly with n. With this basic idea, one can predict the effect of field variation h on various supercooled metastable states21 . We show, in fig. 2, three points in (T,P) space where supercooled states are produced by lowering T at constant field. It follows that if h<sub>0</sub> is the lowest field excursion (with $`n=1`$) for which the metastable state is transformed into a stable state, then h<sub>0</sub> will be smallest for point 1, and largest for point 3. Further if one uses a field variation h<sub>i</sub> which is lower than the smallest h<sub>0</sub>, but makes repeated excursions until the stable state is formed after n<sub>0</sub> such field excursions, then it follows that n<sub>0</sub> will be smallest for point 1 and largest for point 3. The predictions made above are qualitative and based on general arguments20 ; 21 ; such predictions can be made for various possible paths of crossing the T<sub>C</sub>(P) line. We have experimentally confirmed most of the predictions stated above with the PE in CeRu<sub>2</sub> as our paradigm FOT22 .
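A toy numerical illustration of prediction 3 is given below. The functional forms, e<sub>f</sub> growing quadratically with the excursion h, linearly with the number of excursions n, and barriers f<sub>B</sub> increasing from point 1 to point 3 of Fig. 2, are assumptions made only to display the predicted ordering of h<sub>0</sub> and n<sub>0</sub>; they are not extracted from the CeRu<sub>2</sub> data.

```python
import math

# Toy illustration of prediction 3. The functional forms below are assumptions:
# e_f(h, n) = n * a * h**2 (monotonic, nonlinear in h, linear in n), and barriers
# f_B that grow with the undercooling of the supercooled state.
A_COEFF = 1.0

def h0(barrier):
    """Smallest single excursion (n = 1) that overcomes the barrier."""
    return math.sqrt(barrier / A_COEFF)

def n0(barrier, h_i):
    """Number of repeated excursions of fixed amplitude h_i needed."""
    return math.ceil(barrier / (A_COEFF * h_i ** 2))

# Assumed barriers at the three supercooled points of Fig. 2 (point 1 least undercooled).
barriers = {"point 1": 0.5, "point 2": 2.0, "point 3": 5.0}
h_i = 0.5   # fixed small excursion, chosen below the smallest h0

for name, fb in barriers.items():
    print(f"{name}: f_B = {fb}  h0 = {h0(fb):.2f}  n0(h_i={h_i}) = {n0(fb, h_i)}")
```

With these assumed forms, h<sub>0</sub> and n<sub>0</sub> are smallest at point 1 and largest at point 3, i.e. the ordering stated in the text.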
## III Hindered kinetics and kinetic metastabilities.
The experimental confirmation of a FOT involves measurement of a volume discontinuity, also of a latent heat, and of these two satisfying the Clausius-Clapeyron equation. For vortex matter in CeRu<sub>2</sub> the discontinuity in vortex volume was observed by us19 but was tedious because we were extracting equilibrium magnetisation from hysteretic M-H curves23 . The latent heat has so far not been measurable, and hysteresis was invoked9 ; 10 ; 11 as a signature of an FOT. We have made predictions on path-dependence of metastabilities associated with an FOT, and these have also been observed. We must recognise that while we have advanced our understanding of metastabilities associated with FOTs, metastability can also be kinetic in origin. We wish to now address this and pose some questions.
Glasses are known to be metastable, but differ significantly from supercooled liquids24 . The diffusivity of a supercooled liquid does not drop suddenly below T<sub>C</sub>; its diffusivity is large enough to permit it to explore configuration space on laboratory timescales. The ergodic hypothesis is valid, entropy is a valid concept and free energy can be defined, permitting the arguments we made in Section 2. A glass on the other hand is characterised by low diffusivity and hindered kinetics (with a viscosity greater than 10<sup>13</sup> poise). It sits in a local minimum of only the energy landscape and not of the free energy, and is non-ergodic24 . The low diffusivity of a glass causes metastabilities; the metastabilities are associated with hindered kinetics and not with local minima in free energy. Hindered kinetics (with kinetic hysteresis) will be seen wherever diffusivities are low; examples are critical slowing down near a second order phase transition and, closer to home, M-H hysteresis in hard superconductors where the pinning, or hindered kinetics, of vortices prevents decay of shielding currents (Bean’s critical state model). Does a metastability induced by hindered kinetics also depend on the path followed in (T,P) space? If the metastability is due to reduced diffusivity, then naïve arguments suggest that the metastability will be more persistent when larger motions (of particles in configuration space) are involved. And larger motions are involved when density is varied, rather than when temperature is varied. For the case of vortex matter, a much larger rearrangement of vortex structure is involved when we reach an (H,T) point by varying field isothermally, than when we reach that point by varying temperature at constant field. Hysteresis would thus be lower in the field-cooled case, for hard superconductors, than in the case of isothermal field variation. This is consistent with observations25 , and with predictions of Bean’s critical state model26 . It is also well known that Bitter patterns generally show an almost disorder-free vortex lattice on reducing T in constant H, in striking contrast to the case when H is reduced in constant T. We thus conclude, with our naïve arguments, that the path-dependence of metastability associated with hindered kinetics may be opposite to the case of metastability associated with a FOT.
## IV Acknowledgement
We gratefully acknowledge helpful discussions with Dr. S. M. Sharma, Dr. S. K. Sikka, Dr. Srikanth Sastry, Prof. Deepak Dhar and Dr. Sujeet Chaudhary.
Figure Captions
Fig. 1: We show schematic free energy curves for (a) $`T=T^{**}`$, (b) $`T_C<T<T^{**}`$, (c) $`T=T_C`$, (d) $`T^{*}<T<T_C`$, and (e) $`T=T^{*}`$, where $`T^{*}`$ and $`T^{**}`$ denote the limits of metastability on supercooling and superheating, respectively.
Fig. 2: We show a schematic of the phase diagram with supercooled states at 1, 2, and 3 obtained by lowering T in constant field (or ‘pressure’ ). |
no-problem/9912/cond-mat9912237.html | ar5iv | text | # Hysteresis in vibrated granular media
## I Introduction
The phenomenology of granular materials is very rich. One of the most characteristic dynamical behaviors is compaction. If a system of grains in a loosely packed configuration is submitted to vertical vibrations, or tappings, it approaches slowly a more compact state. The relaxation is well described by an inverse logarithmic law, and both the relaxation function and the density of the “stationary” state seem to depend only on a dimensionless parameter characterizing the vibration intensity .
More recently, it has also been shown that granular materials exhibit irreversible-reversible cycles if the vibration strength is increased and decreased alternately. When the system is tapped with increasing intensity starting from a loosely packed state, the density shows a non-monotonic behavior, with a maximum at a certain value of the intensity of vibration. If, afterwards, the granular medium is tapped with decreasing intensity, the density increases monotonically, and hysteresis effects show up. Interestingly, if the vibration intensity is again increased, the density follows a curve that is approximately equal to the evolution in the decreasing intensity process, being thus “reversible” . Of course, the rate of variation of the vibration intensity is the same for all the processes described above.
The static properties of granular materials have been studied by Edwards and co-workers . The starting point is the plausible idea that all the “microscopic” configurations of a powder having the same volume are equiprobable, provided that the powder has been prepared by extensive manipulation, i. e., by processes which do not act on individual grains. This has been called the “ergodic hypothesis” of powders. On this basis, it is possible to make an analogy between the variables of a molecular system and the parameters characterizing the state of a powder. The volume of a powder is analogous to the energy, while the entropy remains the same quantity, measuring the available number of configurations. The derivative of the volume with respect to the entropy is called the “compactivity” of the powder, and plays the role of the temperature. The lowest compactivity corresponds to the densest state (minimum volume), and the highest compactivity to the fluffiest stable configuration (maximum volume). The stationary value of the volume is an increasing function of the compactivity, like the energy increases with the temperature in a thermal system.
It seems interesting to try to establish a connection between the dynamical and the equilibrium properties of powders. It looks reasonable that, if the steady state is reached in a tapping process, there should be a relationship between the vibration intensity and the compactivity of a powder. The tapping process allows the system to explore the phase space of available configurations, and the same role is played by the temperature in a thermal system. In this way, it is tempting to explain the tendency of a granular system to move over an almost reversible curve, when the tapping strength varies in time, as the approach towards the stationary curve of the powder. If the above is true, the compactivity will be an increasing function of the vibration intensity, because the density over the “reversible” curve decreases with the tapping strength. In fact, this kind of behavior has been found in a simple model of a granular system described in terms of a Fokker-Planck equation .
Also, glass formers exhibit hysteresis effects when cooled and heated through the glass transition region. When a glass former is cooled, the system follows the equilibrium curve until a certain temperature $`T_g`$, at which it gets frozen due to the fast increase of the relaxation time. If this structural glass is reheated from its frozen state, the equilibrium curve is only approached for temperatures larger than $`T_g`$. This is due to the fact that the system starts from a configuration in which the structural rearrangements are very difficult . In this way, also hysteresis effects show up in glass formers, when they are submitted to thermal cycles.
The study of simple models have been very useful in order to understand, at least qualitatively, the behavior of structural glasses. In particular, hysteresis cycles are also shown by simple models when cooled and heated . The analytical approach to thermal cycles is a very difficult task, because it is necessary to solve the kinetic equations of the model with time-dependent coefficients. Nevertheless, some models whose dynamics is formulated by means of a simple master equation can be exactly solved . In those situations, the role played by the “normal solution” of the master equation, which is monotonically approached by all the other solutions , has been shown to play a fundamental role.
Here we will consider a simple model for granular media, which has been previously introduced . Its dynamical evolution is governed by a master equation, with the transition rates given as functions of the tapping strength. When the system is vibrated with a given intensity from a low density state, the density of the system increases monotonically until reaching an stationary value. The relaxation of the density is very slow, being very well fitted by an empirical inverse-logarithmic law. Interestingly, the stationary state of the system is described by Edward’s theory of powders. Then, it seems worth studying its behavior when the tapping strength changes in time for several reasons. Firstly, in order to verify if its behavior resembles that of real granular systems, showing the irreversible and reversible branches found in experiments . Secondly, to understand whether the hysteresis effects are related to the existence of a “normal solution” of the master equation with time-dependent tapping strength.
The paper is organized as follows. In Sec. II some general properties of models for granular media based on a master equation formulation of the dynamics are considered. The existence of a “normal solution” of the master equation is analyzed for the case of time-dependent transition rates. Also, the conditions to be verified by the law of variation of the tapping strength in order to guarantee that the system approaches the steady state curve are discussed. Section III is devoted to the analysis of the normal solution by means of Hilbert’s expansion of the master equation. A quite general expression for the first order deviation of the normal solution from the stationary curve is obtained. The specific model to be considered is presented is Sec. IV, as well as a brief discussion of its equilibrium state. Section V deals with the behavior of the model in processes with time-dependent vibration intensity. Firstly, processes in which the tapping strength decreases in time are studied. The existence of a phenomenon similar to the laboratory glass transition is analyzed. Secondly, we discuss processes with increasing vibration intensity. The role played by the normal solution turns out to be fundamental. Finally, the main conclusions of the paper are summarized in Sec. VI.
## II Some general dynamical properties
Let us consider a model system whose dynamics is described by means of the master equation
$$\frac{dp_i(t)}{dt}=\underset{j}{\sum }\left[W_{ij}(t)p_j(t)-W_{ji}(t)p_i(t)\right].$$
(1)
Here $`p_i(t)`$ is the probability of finding the system in state $`i`$ at time $`t`$, and the rates $`W_{ij}`$ for transition from state $`j`$ to state $`i`$ depend on time in a given way, independently of the state of the system. Let us define a function
$$H(t)=\underset{i}{\sum }p_i(t)\mathrm{ln}\frac{p_i(t)}{p_i^{\prime }(t)},$$
(2)
where $`p_i(t)`$ and $`p_i^{\prime }(t)`$ are two solutions of Eq. (1) corresponding to different initial conditions. The above definition for $`H(t)`$ assumes that $`p_i^{\prime }(t)`$ is positive for all the states $`i`$. This condition will be fulfilled after a transient period if the process defined by Eq. (1) is irreducible, even in the case that some initial probabilities vanish. The time variation of $`H(t)`$ is given by
$$\frac{dH(t)}{dt}=-A(t)$$
(3)
with $`A(t)`$ being a complicated functional of the two solutions $`p`$ and $`p^{\prime }`$, that can be written in the form
$$A(t)=-\underset{ij}{\sum }W_{ij}p_j^{\prime }\left[\left(\frac{p_j}{p_j^{\prime }}-\frac{p_i}{p_i^{\prime }}\right)\left(\mathrm{ln}\frac{p_i}{p_i^{\prime }}+1\right)+\frac{p_i}{p_i^{\prime }}\mathrm{ln}\frac{p_i}{p_i^{\prime }}-\frac{p_j}{p_j^{\prime }}\mathrm{ln}\frac{p_j}{p_j^{\prime }}\right].$$
(4)
Its main property is that $`A(t)\ge 0`$. Besides, if the transition rates define an irreducible process at time $`t`$, the equality sign holds only when
$$p_i(t)=p_i^{\prime }(t),$$
(5)
for all the states $`i`$. As the function $`H(t)`$ is bounded below, $`H(t)\ge 0`$, it must tend to a limit and, therefore,
$$\underset{t\to \mathrm{\infty }}{lim}p_i(t)=\underset{t\to \mathrm{\infty }}{lim}p_i^{\prime }(t).$$
(6)
Thus all the solutions of the master equation converge toward the same behavior, if the long-time limit of the transition rates still define an irreducible process. This equation can be understood as showing the existence of a long-time regime, where the influence of the initial conditions has been lost, and the state of the system and its dynamics is fully determined by the law of variation of the transition rates. Therefore, there will be a special solution of the master equation such that all the other solutions approach it after an initial transient period. We will refer to this special solution as the “normal” solution of the master equation for the given time dependence of the transition rates. A more detailed and general discussion of the H-theorem leading to the existence of this “normal” solution can be found in Ref. .
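The convergence argument above is easy to verify numerically. The short Python sketch below integrates Eq. (1) for an arbitrary (assumed) four-state system with time-dependent rates, starting from two different initial conditions, and monitors H(t) of Eq. (2): H never increases and the two solutions approach each other.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical illustration of the H-theorem: two solutions of the same master
# equation, Eq. (1), with time-dependent rates converge to a common behavior and
# H(t) of Eq. (2) never increases. The 4-state rates used here are arbitrary.
rng = np.random.default_rng(0)
base = rng.uniform(0.2, 1.0, size=(4, 4))
np.fill_diagonal(base, 0.0)

def W(t):
    gamma = 1.0 + 0.5 * np.sin(t)          # arbitrary, always positive "intensity"
    return gamma * base

def rhs(t, p):
    w = W(t)
    return w @ p - w.sum(axis=0) * p       # gain minus loss terms of Eq. (1)

def H(p, q):
    return np.sum(p * np.log(p / q))

p0 = np.array([1.0, 0.0, 0.0, 0.0])        # two different initial conditions
q0 = np.array([0.25, 0.25, 0.25, 0.25])
ts = np.linspace(0.0, 8.0, 40)
p = solve_ivp(rhs, (0, 8), p0, t_eval=ts, rtol=1e-9, atol=1e-12).y
q = solve_ivp(rhs, (0, 8), q0, t_eval=ts, rtol=1e-9, atol=1e-12).y

for i in (0, 10, 20, 39):
    print(f"t = {ts[i]:4.1f}  H = {H(p[:, i] + 1e-300, q[:, i]):.3e}  "
          f"max|p - p'| = {np.abs(p[:, i] - q[:, i]).max():.3e}")
```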
Now suppose that the master equation models the dynamical behavior of a granular system submitted to vertical vibration. The transition rates $`W_{ij}`$ would be functions of the parameter $`\mathrm{\Gamma }`$ characterizing the strength of the vibration. If the granular pile is vibrated with sinusoidal pulses of amplitude $`A`$ and frequency $`\omega `$, the quantity $`\mathrm{\Gamma }=A\omega ^2/g`$ , where $`g`$ is the gravity, is usually defined. In the case of time dependent intensity $`\mathrm{\Gamma }`$, the equation determining the evolution of the granular pile will be of the type given by Eq. (1). Assuming that for arbitrary $`\mathrm{\Gamma }0`$ the tapping process allows the system to explore the whole configuration space, the stochastic process will be irreducible and the existence of a normal solution for a given program of variation of the intensity $`\mathrm{\Gamma }`$ follows, provided that $`\mathrm{\Gamma }`$ does not vanish in the long-time limit.
Let us assume that for every value of the intensity $`\mathrm{\Gamma }>0`$, the equation
$$\underset{j}{\sum }W_{ij}(\mathrm{\Gamma })p_j^{(s)}(\mathrm{\Gamma })=\underset{j}{\sum }W_{ji}(\mathrm{\Gamma })p_i^{(s)}(\mathrm{\Gamma })$$
(7)
has a “canonical” solution, of the form
$$p_i^{(s)}(\mathrm{\Gamma })=\frac{1}{Z(X)}\mathrm{exp}\left[-\frac{V_i}{\lambda X}\right],$$
(8)
where $`V_i`$ is the volume of the system in state $`i`$,
$$Z(X)=\underset{i}{\sum }\mathrm{exp}\left[-\frac{V_i}{\lambda X}\right]$$
(9)
is a partition function, $`\lambda `$ is a constant with the dimension of volume, and $`X`$ is a variable termed the compactivity of the granular system in the framework of Edward’s statistical mechanics theory of powders . Of course, in the context considered here the compactivity $`X`$ will be a function of the vibration intensity $`\mathrm{\Gamma }`$, $`X=f(\mathrm{\Gamma })`$, which has to be found for each particular model. Equation (8) defines the steady state reached by the system if the vibration intensity is constant in time. The macroscopic value of the volume in this state is
$$\overline{V}^{(s)}(X)=\underset{i}{\sum }V_ip_i^{(s)}\left[\mathrm{\Gamma }(X)\right],$$
(10)
and the configurational entropy $`S(X)`$ reads
$$S(X)=-\lambda \underset{i}{\sum }p_i^{(s)}(X)\mathrm{ln}p_i^{(s)}(X).$$
(11)
Of course, the compactivity $`X`$ can be obtained from its usual definition,
$$X=\frac{d\overline{V}^{(s)}}{dS}.$$
(12)
For the sake of simplicity, in the following we will take the unit of volume such that $`\lambda =1`$. The stationary volume $`\overline{V}^{(s)}(X)`$ is an increasing function of the compactivity $`X`$, since the powder “compressibility”
$$\kappa (X)\equiv \frac{d\overline{V}^{(s)}(X)}{dX}=\frac{1}{X^2}\underset{i}{\sum }\left[V_i-\overline{V}^{(s)}(X)\right]^2p_i^{(s)}(X),$$
(13)
is proportional to the volume fluctuations over the ensemble of granular systems considered. Then, the compactivity $`X`$ should be an increasing function of the vibration intensity $`\mathrm{\Gamma }`$, because it is reasonable to expect that the powder became fluffier with higher vibration intensities. This is, for instance, the behavior found in the simple two-volume model considered in Ref. . In general, this property must be checked once the relationship between $`\mathrm{\Gamma }`$ and $`X`$, $`X=f(\mathrm{\Gamma })`$, has been derived for each specific model.
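The statistical-mechanics relations (8)-(13) are summarized in the following sketch for an assumed, arbitrary set of state volumes $`V_i`$ (with $`\lambda =1`$); it checks numerically that the compressibility equals the volume fluctuations divided by $`X^2`$ and that $`X=d\overline{V}^{(s)}/dS`$.

```python
import numpy as np

# Sketch of Eqs. (8)-(13) for an assumed spectrum of state volumes V_i (arbitrary
# toy values, lambda = 1): stationary averages, configurational entropy, and the
# checks kappa(X) = <(V - <V>)^2>/X^2 and X = dV/dS.
V = np.array([0.0, 1.0, 1.0, 2.0, 2.0, 2.0, 3.0])    # assumed state volumes

def ensemble(X):
    w = np.exp(-V / X)
    p = w / w.sum()                                  # Eq. (8)
    Vbar = np.sum(V * p)                             # Eq. (10)
    S = -np.sum(p * np.log(p))                       # Eq. (11)
    kappa = np.sum((V - Vbar) ** 2 * p) / X ** 2     # Eq. (13)
    return Vbar, S, kappa

X, dX = 0.8, 1e-5
V1, S1, kappa = ensemble(X)
V2, S2, _ = ensemble(X + dX)
print(f"V(X) = {V1:.4f}   kappa = {kappa:.4f}   dV/dX = {(V2 - V1) / dX:.4f}")
print(f"X = {X}   dV/dS (numerical) = {(V2 - V1) / (S2 - S1):.4f}")      # Eq. (12)
```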
Of course, $`p_i^{(s)}`$ is not a solution of the master equation when the intensity $`\mathrm{\Gamma }`$ is time-dependent, and in general the system does not monotonically approach the steady distribution. Nevertheless, define \[compare with Eq. (2)\]
$$H^{(s)}(t)=\underset{i}{\sum }p_i(t)\mathrm{ln}\frac{p_i(t)}{p_i^{(s)}(X)},$$
(14)
where $`p_i(t)`$ is again one solution of Eq. (1), and $`p_i^{(s)}(X)`$ depend on time through the compactivity $`X=X(t)`$. If we define a statistical entropy as
$$S^{*}(t)=-\lambda \underset{i}{\sum }p_i(t)\mathrm{ln}p_i(t),$$
(15)
and use the notation
$$\overline{V}(t)=\underset{i}{\sum }V_ip_i(t),$$
(16)
for the actual average volume at time $`t`$, after a very simple algebra it is found that
$$H^{(s)}(t)=\frac{1}{X}\left[\left(\overline{V}(t)-XS^{*}(t)\right)-\left(\overline{V}^{(s)}(X)-XS(X)\right)\right].$$
(17)
Thus $`H^{(s)}(t)`$ is proportional to the deviation of the actual “effective” volume at time $`t`$,
$$Y(t)=\overline{V}(t)-XS^{*}(t),$$
(18)
from its stationary value.
The time variation of $`H^{(s)}`$ is easily obtained as
$$\frac{dH^{(s)}}{dt}=-A^{(s)}(t)-\underset{i}{\sum }\frac{p_i(t)}{p_i^{(s)}(X)}\frac{dp_i^{(s)}(X)}{dX}\frac{dX}{dt},$$
(19)
where $`A^{(s)}(t)`$ is given by Eq. (4), but replacing $`p_i^{\prime }(t)`$ by $`p_i^{(s)}(X)`$. Taking into account Eqs. (8-10), it is found
$$\frac{dH^{(s)}}{dt}=-A^{(s)}(t)-\frac{1}{X^2}\frac{dX}{dt}\left[\overline{V}(t)-\overline{V}^{(s)}(X)\right].$$
(20)
Equation (20) does not have a well-defined sign and, therefore, the stationary distribution is not monotonically approached, in general, when the vibration intensity is time-dependent. However, in those processes such that the compactivity (or, equivalently, the vibration intensity $`\mathrm{\Gamma }`$) increases monotonically in time, only the term
$$B(t)=\frac{1}{X^2}\frac{dX}{dt}\overline{V}^{(s)}(X)$$
(21)
is positive. Using the analogy between temperature and compactivity, we will refer to those processes as “heating” processes. If in a given “heating” process it is verified that
$$\underset{t\to \mathrm{\infty }}{lim}B(t)=0,$$
(22)
we can conclude
$$\underset{t\to \mathrm{\infty }}{lim}\frac{dH^{(s)}(t)}{dt}\le 0.$$
(23)
Since $`H^{(s)}(t)`$ is bounded below, the only possibility is in fact the equality, which implies
$$\underset{t\to \mathrm{\infty }}{lim}A^{(s)}(t)=0$$
(24)
and
$$\underset{t\to \mathrm{\infty }}{lim}\frac{1}{X^2}\frac{dX}{dt}\overline{V}(t)=0.$$
(25)
From Eq. (24) and the irreducibility of the stochastic process, it follows that in the long time limit,
$$\underset{t\to \mathrm{\infty }}{lim}p_i(t)=\underset{t\to \mathrm{\infty }}{lim}p_i^{(s)}(X),$$
(26)
i. e., the system goes to the steady curve for long enough times, if the “heating” program verifies the condition given by Eq. (22).
The following picture emerges for the evolution of a granular system submitted to vibrations with increasing intensity: Starting from an arbitrary initial condition, in a first step the system tends to a behavior which is determined by the law of variation of the vibration intensity $`\mathrm{\Gamma }`$, and the initial condition has been forgotten. This implies the existence of a special solution of the master equation, called the “normal” solution, that is approached by all the other solution after an initial transient period. Besides, if the system is “heated” slowly, in the sense that Eq. (22) is verified, the normal solution tends afterwards to the stationary curve. This picture is similar to the one found in some models of structural glasses .
## III Hilbert’s method around the steady curve
In this Section we will use Hilbert’s method to derive a quite general form of the normal solution of Eq. (1) near the steady state curve. We will focus on the class of models for granular media considered in the previous section, but the results can be directly extended to any system with a “canonical” distribution describing the stationary state.
Let us consider Eq. (1), rewritten in the form
$$\frac{d𝒑(t)}{dt}=\widehat{𝑾}(t)𝒑(t),$$
(27)
where $`𝒑`$ is a vector (column matrix) whose elements are the probabilities $`p_i(t)`$ of the $`i`$-th state of the system at time $`t`$, and $`\widehat{𝑾}`$ is a square matrix with elements $`\widehat{W}_{ij}`$ given by
$$\widehat{W}_{ij}(t)=W_{ij}(t)-\delta _{ij}\underset{k}{\sum }W_{kj}(t).$$
(28)
If the transition rates are time independent and the detailed balance condition is verified, solving Eq. (27) is equivalent to obtain the solution of the eigenvalue problem
$$\widehat{𝑾}𝝋(q)=-\lambda (q)𝝋(q),$$
(29)
with $`\lambda (q)>0`$ for all $`q`$. The eigenvectors $`𝝋(q)`$ are completed with the stationary distribution $`𝒑^{(s)}`$, which is an eigenvector of $`\widehat{𝑾}`$ corresponding to the null eigenvalue, i. e.,
$$\widehat{𝑾}𝒑^{(s)}=0.$$
(30)
Besides, if the Markov process is irreducible there is only one stationary distribution with all its components positive . The matrix $`\widehat{𝑾}`$ is hermitian with the following definition for the scalar product of any two vectors $`𝒂`$ and $`𝒃`$,
$$(𝒂,𝒃)=\underset{i}{\sum }\frac{a_ib_i}{p_i^{(s)}}.$$
(31)
If the Markov process is irreducible and the detailed balance condition is fulfilled for all times, Eqs. (29-31) remain valid for time dependent transition rates. Of course, the eigenvalues and eigenvectors will depend on time in general. The usual situation is that the transition rates depend on time through an externally controlled parameter like the temperature in a thermal system or the compactivity $`X`$ in a granular medium. Thus, we will write sometimes in the following $`\widehat{𝑾}(X)`$, $`\lambda (q,X)`$, $`𝝋(q,X)`$ and $`𝒑^{(s)}(X)`$.
Hilbert’s method consists in solving the master equation by means of the iterative process
$$\widehat{𝑾}(t)𝒑^{(0)}(t)=0,$$
(33)
$$\widehat{𝑾}(t)𝒑^{(n)}(t)=\frac{d𝒑^{(n-1)}(t)}{dt},n\ge 1.$$
(34)
In this way we obtain a probability distribution
$$𝒑_H(t)=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}𝒑^{(n)}(t),$$
(35)
which is a solution of Eq. (27). The first term $`𝒑^{(0)}(t)`$ gives the “stationary” distribution,
$$p_i^{(0)}(t)=p_i^{(s)}(X),$$
(36)
where $`X`$ stands for the value of the compactivity at time $`t`$, $`X=X(t)`$. The Hilbert solution $`𝒑_H`$, constructed following the above rules, is a “normal” solution, because it only depends on the external law of variation of the transition rates, and it does not refer to any specific initial conditions. Nevertheless, the range of validity of $`𝒑_H(t)`$ is limited in general because of the divergence of Hilbert’s expansion . From a physical point of view, this divergence is connected to the fact that we are expanding $`𝒑(t)`$, which may describe a situation arbitrarily far from the steady state, around the stationary solution $`𝒑^{(s)}(X)`$.
In “heating” processes we have shown in Sec. II that a normal solution of the master equation exists and, under very general conditions, it tends to the stationary solution for very high compactivity. Thus in the high compactivity limit Hilbert’s method can be useful, since it is an expansion around the steady state. We will restrict ourselves to the first correction $`𝒑^{(1)}(t)`$, because the difference between the probability distributions $`𝒑_H(t)`$ and $`𝒑^{(s)}(X)`$ is expected to be small in the high compactivity limit. The normalization of $`𝒑^{(s)}(X)`$ implies that
$$(𝒑^{(s)}(X),\frac{d𝒑^{(s)}(X)}{dX})=\underset{i}{\sum }\frac{dp_i^{(s)}(X)}{dX}=0,$$
(37)
and, therefore,
$$𝒑^{(1)}(t)=\widehat{𝓣}(X)\frac{d𝒑^{(s)}(X)}{dX}\frac{dX}{dt},$$
(38)
where $`\widehat{𝓣}(X)`$ is the inverse operator of $`\widehat{𝑾}(X)`$ in the space orthogonal to the equilibrium distribution $`𝒑^{\mathbf{(}𝒔\mathbf{)}}(X)`$. Using Eq. (8), we obtain
$$\frac{dp_i^{(s)}(X)}{dX}=\frac{1}{X^2}\left[V_i-\overline{V}^{(s)}(X)\right]p_i^{(s)}.$$
(39)
By introducing a function
$$\xi (q,X)=\underset{i}{\sum }V_i\phi _i(q,X),$$
(40)
where $`𝝋(q,X)`$ is the eigenvector defined in Eq. (29), we can rewrite Eq. (39) in a vectorial form as
$$\frac{d𝒑^{(s)}(X)}{dX}=\frac{1}{X^2}\underset{q}{\sum }\xi (q,X)𝝋(q,X).$$
(41)
The above expression follows from the completeness of the eigenvectors and the property
$$(𝝋(q,X),\frac{d𝒑^{(s)}(X)}{dX})=\frac{1}{X^2}\underset{i}{\sum }\phi _i(q,X)\left[V_i-\overline{V}^{(s)}(X)\right]=\frac{1}{X^2}\xi (q,X).$$
(42)
The term proportional to $`\overline{V}^{(s)}(X)`$ vanishes as a consequence of the orthogonality of the eigenvectors $`𝒑^{(s)}(X)`$ and $`𝝋(q,X)`$ for all $`q`$. Then, to first order in deviations from the equilibrium curve we have
$$𝒑_H(t)=𝒑^{(s)}(X)-\frac{1}{X^2}\frac{dX}{dt}\underset{q}{\sum }\lambda ^{-1}(q,X)\xi (q,X)𝝋(q,X).$$
(43)
The notation used reflects the fact that the right hand side depends on time only through the compactivity $`X`$ of the system.
From Eq. (43) it is possible to evaluate the average values of the physical quantities of the system over Hilbert’s distribution in the first order approximation. For instance, the mean value of the volume is
$$\overline{V}_H(t)=\overline{V}^{(s)}(X)-\frac{1}{X^2}\frac{dX}{dt}\underset{q}{\sum }\lambda ^{-1}(q,X)\xi ^2(q,X).$$
(44)
This expression can be written in a more transparent way by using the following identity for the powder “compressibility” defined in Eq. (13),
$$\kappa (X)=\frac{1}{X^2}\underset{q}{\sum }\xi ^2(q,X),$$
(45)
and the expression for the average relaxation time of the volume found in linear relaxation theory (see the appendix)
$$\tau (X)=\frac{\underset{q}{\sum }\lambda ^{-1}(q,X)\xi ^2(q,X)}{\underset{q}{\sum }\xi ^2(q,X)}.$$
(46)
In this way, it is found
$$\overline{V}_H(t)=\overline{V}^{(s)}(X)-\frac{dX}{dt}\kappa (X)\tau (X).$$
(47)
It is important to note that the detailed balance condition is a key point in our derivation of Eq. (47), while the existence of the normal solution does not need detailed balance to be satisfied, but it follows if the Markov process is irreducible in the long time limit. On the other hand, Eq. (47) can also be applied to estimate the departure from the stationary curve in “cooling” processes, if the system is initially prepared in the stationary state. In other words, Eq. (47) should be applicable to any situation near the stationary curve. Therefore, it allows to calculate the first order in the deviation of the volume from its steady value over the normal curve when the system asymptotically tends to the stationary curve, as it is the case of “heating” processes. The same remark can be made about Eq. (43) for Hilbert’s distribution $`𝒑_H(t)`$. It contains all the information needed to evaluate one-time properties over the normal curve in situations close to the steady state.
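Equation (47) can be checked explicitly on a small system. The sketch below builds a three-state master equation with an assumed set of state volumes and detailed-balance rates, obtains $`\lambda (q)`$ and $`\xi (q,X)`$ from the symmetrized generator (equivalent to using the scalar product of Eq. (31)), evaluates $`\kappa (X)`$ and $`\tau (X)`$ from Eqs. (45) and (46), and compares the first-order lag $`(dX/dt)\kappa \tau `$ with the lag of a numerically integrated, slowly "heated" solution. The rate model is an arbitrary choice made for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Numerical check of Eq. (47) on an assumed 3-state system with volumes V = (0, 1, 2)
# and detailed-balance rates W_ij = nu * sqrt(p_i^s / p_j^s) (an arbitrary choice).
V = np.array([0.0, 1.0, 2.0])
NU = 1.0

def p_s(X):
    w = np.exp(-V / X)
    return w / w.sum()

def W_hat(X):
    ps = p_s(X)
    W = NU * np.sqrt(np.outer(ps, 1.0 / ps))           # rate from j to i
    np.fill_diagonal(W, 0.0)
    return W - np.diag(W.sum(axis=0))                  # Eq. (28)

def kappa_tau(X):
    ps = p_s(X)
    A = W_hat(X) * np.sqrt(np.outer(1.0 / ps, ps))     # symmetrized generator
    lam, psi = np.linalg.eigh(A)                       # eigenvalues are -lambda(q) <= 0
    phi = np.sqrt(ps)[:, None] * psi                   # eigenvectors of Eq. (29)
    xi = V @ phi                                       # Eq. (40)
    nz = lam < -1e-12                                  # discard the stationary mode
    kappa = np.sum(xi[nz] ** 2) / X ** 2               # Eq. (45)
    tau = np.sum(xi[nz] ** 2 / (-lam[nz])) / np.sum(xi[nz] ** 2)   # Eq. (46)
    return kappa, tau

# Slow "heating" X(t) = X0 + r*t and the measured lag of the volume behind V^(s)(X).
X0, r = 0.5, 0.02
Xt = lambda t: X0 + r * t
sol = solve_ivp(lambda t, p: W_hat(Xt(t)) @ p, (0.0, 40.0), p_s(X0),
                t_eval=[40.0], rtol=1e-10, atol=1e-12)
X_end = Xt(40.0)
lag = np.sum(V * p_s(X_end)) - np.sum(V * sol.y[:, -1])
kappa, tau = kappa_tau(X_end)
print(f"measured lag = {lag:.5f}   first-order prediction r*kappa*tau = {r * kappa * tau:.5f}")
```

For a slow enough rate of variation the two printed numbers agree up to higher-order corrections in $`dX/dt`$, which is the content of Eq. (47).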
## IV The model
In this Section we will briefly review a simple model for the vibrocompaction of a dry granular system which has been recently introduced . We consider a one-dimensional lattice with $`N`$ sites. Each site can be either occupied by a particle or empty, i. e., occupied by a hole, with the restriction that there cannot be two nearest neighbor holes (such a configuration would be unstable). A variable $`m_i`$ is assigned to each site $`i`$, taking the value $`m_i=1`$ if the site is empty, and $`m_i=0`$ if there is a particle on it. Then, a configuration of the system is fully specified by giving the values of the set of variables $`𝒎\{m_1,m_2,\mathrm{},m_N\}.`$
The dynamics of the system is defined as a Markov process, and formulated by means of a master equation for the probability $`p(𝒎,t)`$ of finding the system in the configuration $`𝒎`$ at time $`t`$ ,
$$\frac{d}{dt}p(𝒎,t)=\underset{𝒎^{}}{}\left[W(𝒎|𝒎^{})p(𝒎^{},t)W(𝒎^{}|𝒎)p(𝒎,t)\right],$$
(48)
where $`W(𝒎|𝒎^{})`$ is the transition rate from state $`𝒎^{}`$ to state $`𝒎`$. The possible transitions in the system can be classified in three groups. Firstly, there are transitions conserving the number of particles, corresponding to purely diffusive processes. Their rates are given by
$$W(010|100)=W(010|001)=\frac{1}{2}\alpha ,$$
(49)
with $`\alpha `$ being a constant. These transition rates must be understood as it is usually done, as the transition rates between states connected by the given rearrangement. Only the variables of the set of sites involved in the transition are indicated in the notation. Secondly, there are also transitions increasing the number of particles, with rates
$$W(010|101)=\frac{1}{2}\alpha ,$$
(51)
$$W(001|101)=W(100|101)=\frac{1}{4}\alpha .$$
(52)
Finally, the transition rates for those processes decreasing the number of particles are
$$W(01010|00100)=\frac{1}{2}\alpha ^2,$$
(54)
$$W(01010|01000)=W(01010|00010)=\frac{1}{4}\alpha ^2.$$
(55)
The transition rates in Eqs. (49)-(52) define an effective dynamics for tapping processes taking place in a previous more general model presented in Ref. . The parameter $`\alpha `$ characterizes the tapping process completely. Note that we have rescaled all the expressions of the transition rates in Ref. in such a way that the time scale $`t`$ is a measure of the number of taps. For $`\alpha =0`$ no transition is possible in the system and, therefore, the parameter $`\alpha `$ measures the intensity of the vibration. The configuration with the highest density, i. e., no holes present, is completely isolated from the rest of the states, in the sense that no transition is possible from or towards it for any value of $`\alpha `$. Aside from this particular state, all the other possible configurations are connected through a chain of transitions with non-zero probability for $`\alpha 0`$, and the Markov process defining the dynamics is then irreducible.
When $`\alpha `$ does not depend on time, i. e., the intensity of vibration is constant, there is a unique stationary solution $`p^{(s)}(𝒎)`$ of the master equation (48). We will use the notation $`𝒎^{\mathbf{(}𝒌\mathbf{)}}`$ for a configuration of the system with $`k`$ holes. Clearly, it is verified that $`1k(N+1)/2`$. It is easily found that
$$p^{(s)}(𝒎^{\mathbf{(}𝒌\mathbf{)}})=\frac{e^{k/X}}{Z},$$
(56)
where $`Z`$ is the normalization constant, and $`X`$ is related to the vibration intensity $`\alpha `$ through the relation
$$\alpha =e^{1/X}.$$
(57)
Comparison with Eq. (8) shows that in our model the number of holes $`k`$ plays the role of the volume of the system and $`X`$ is the compactivity of Edward’s theory of powders . More precisely, $`k`$ would be proportional to the excess volume from the densest state. Interestingly, the compactivity $`X`$ is an increasing function of $`\alpha `$, as it has been argued on general basis in Section II. Besides, the compactivity vanishes for the no vibrated case. The partition function can be analytically derived in the infinite system limit, with the result
$$\mathrm{ln}\zeta \frac{1}{N}\mathrm{ln}Z=\mathrm{ln}2+\mathrm{ln}\left[1+\left(1+4\alpha \right)^{1/2}\right].$$
(58)
From here, the stationary value of the density of holes is obtained in the standard way,
$$x_1^{(s)}=\frac{\overline{k}^{(s)}}{N}=\frac{d\mathrm{ln}\zeta }{d(1/X)}=\frac{1}{2}\left[1\left(1+4\alpha \right)^{1/2}\right].$$
(59)
The stationary density of holes increases monotonically from the densest state $`x_1^{(s)}=0`$ to the fluffiest state $`x_1^{(s)}=1/2`$ as the vibration intensity increases monotonically from $`\alpha =0`$ to $`\alpha =\mathrm{}`$, since the analogous to the “compressibility”
$$\kappa =\frac{dx_1^{(s)}}{dX}=\frac{e^{1/X}}{X^2\left(1+4e^{1/X}\right)^{3/2}}$$
(60)
is positive definite for any value of $`X`$. Nevertheless, the densest configuration cannot be actually reached, since for $`\alpha =0`$ no transition is possible and the system gets trapped in its initial state. As the number of holes $`k`$ is an upper bounded variable, the compactivity $`X`$ can take negative values, corresponding to configurations fluffier than those of positive compactivities. For instance, $`X0^{}`$ gives $`\alpha \mathrm{}`$ and $`x_1^{(s)}=1/2`$, the least dense state.
Finally, let us qualitatively study the dynamics of the system for time-independent transition rates in the low vibration limit, $`\alpha 1`$ or $`X0^+`$. In that limit, the stationary value of the density of holes $`x_1^{(s)}`$ is very small, and a typical configuration of the system consist of a few holes, separated by long arrays of particles. Since $`x_1^{(s)}\alpha `$, the mean distance between holes is of order $`\alpha ^1`$. Moreover, at least in linear relaxation theory and at low compactivities, the evolution of the system will be mainly associated to diffusive processes. The characteristic time $`\widehat{\tau }`$ of the relaxation process would be the square of the mean distance between holes divided by an effective diffusion coefficient, which can be estimated from Eq. (49) as of the order $`\alpha `$. Therefore,
$$\widehat{\tau }\alpha ^3.$$
(61)
This result will be important in the following, since an estimate of the relaxation time in the linear relaxation regime is necessary when studying the time-dependent rates case, as it follows from Eq. (47).
## V Processes with time-dependent vibration intensity
Next we will study the dynamical behavior of the model described in the previous section, when submitted to processes in which the vibration intensity $`\alpha `$ changes in time in a given way. Due to the relationship between $`\alpha `$ and the compactivity $`X`$, Eq. (57), this is equivalent to consider that the compactivity $`X`$ varies in time following a given law. As already mentioned, we will refer to a process as a “heating” (“cooling”) one when the vibration intensity is monotonically increased (decreased). Such kind of processes have been already considered in the literature, both in real granular systems and in simple models .
The general results of Sec. II are applicable to this particular model. The only limitation is due to the loss of irreducibility of the dynamics for $`\alpha =0`$. Thus the existence of a special solution of the master equation, such that all the others approach it in the long time limit, applies to any program of variation of $`\alpha `$ except for “cooling” processes up to $`X=0`$. As a consequence, for “heating” processes there will be a special “normal” curve, to which all the other solutions tend at a first stage. Later on, the system will approach the “stationary” distribution $`p^{(s)}[\alpha (t)]`$ in the long time limit, provided that the condition in Eq. (22) is verified, i. e.,
$$\underset{t\mathrm{}}{lim}\frac{1}{X^2(t)}\frac{dX(t)}{dt}x_1^{(s)}\left[\alpha (t)\right]=0.$$
(62)
Taking into account that $`x_1^{(s)}`$ is upper bounded by $`1/2`$, and the relationship between $`\alpha `$ and $`X`$, Eq. (57), the above condition can also be written as
$$\underset{t\mathrm{}}{lim}\frac{d\mathrm{ln}\alpha (t)}{dt}=0.$$
(63)
Equation (63) expresses a restriction for the “heating” programs driving the system to the stationary curve in the long time limit, but it does not affect at all the existence of the normal solution, which only depends on the ergodicity of the process.
Application of Eq. (47) to the present model yields
$$x_1(t)=x_1^{(s)}[X(t)]\frac{dX(t)}{dt}\kappa [X(t)]\tau [X(t)]+\mathrm{},$$
(64)
where $`\tau (X)`$ is the mean relaxation time of the density in the linear relaxation approximation. Upon writing the above expression we have taken into account that in our model the number of holes plays the role of the volume. For high compactivities the second term on the right hand side of Eq. (64) is negligible against the first one, since the compressibility $`\kappa 0`$ when $`X\mathrm{}`$. Then, for high vibration intensities the system remains over the stationary curve. As the compactivity decreases, the system departs from that curve, as a consequence of the increase of the mean relaxation time $`\tau `$, which is expected to be proportional to the characteristic time $`\widehat{\tau }`$ estimated in Eq. (61).
There is some freedom when choosing the law of variation of the vibration intensity $`\alpha `$. Our choice will be motivated by simplicity, but also by the analogies with a glass-like behavior previously found in granular systems . In supercooled liquids the temperature is usually varied at a constant rate , so we have considered processes in which the compactivity changes linearly in time,
$$\frac{dX}{dt}=\pm r,$$
(65)
with $`r>0`$, which is equivalent to
$$\frac{d\alpha }{dt}=\pm r\alpha \left(\mathrm{ln}\alpha \right)^2,$$
(66)
the plus sign corresponding to “heating” processes and the minus sign to “cooling” programs.
The rest of this section is organized as follows. First, we study “cooling” processes. The system is initially put in the stationary state corresponding to a given value $`\alpha _0`$ of the vibration intensity. Then, the compactivity $`X=1/\mathrm{ln}\alpha `$ is decreased following Eq. (65). The existence of a phenomenon analogous to the laboratory glass transition of supercooled liquids arises. Afterwards, “heating” processes are considered, paying special attention to the appearance of hysteresis effects, and relating them to the trend of the system to approach the normal curve.
### A “Cooling” processes
We consider the continuous decreasing of the compactivity of the system from a given initial value $`X_0`$ down to $`X=0`$. The latter corresponds to $`\alpha =0`$, i. e., no vibration. The system is initially placed in the stationary state corresponding to the value $`\alpha _0=\mathrm{exp}(1/X_0)`$ of the vibration intensity. Then, the compactivity is decreased following the law
$$\frac{dX}{dt}=r_c.$$
(67)
Our starting point will be Eq. (64), particularized for the “cooling” process we are considering, i. e.,
$$x_1(t)=x_1^{(s)}[X(t)]+r_c\kappa [X(t)]\tau [X(t)]+O(r_c^2).$$
(68)
As we have already discussed, from the above equation follows that the system remains in equilibrium at “high” compactivities, since the second term in the right hand side vanishes for $`X\mathrm{}`$. Nevertheless, as the compactivity becomes smaller this term grows, due to the increase of the relaxation time $`\tau `$. This means that there will exist a range of values of the compactivity in which the second term is comparable to the first one. A rough estimate of the value of the compactivity $`X_g`$ at which the system would depart from the stationary curve can be obtained by equalling both terms. If we consider that the system is slowly “cooled”, $`r_c1`$, it is also $`\alpha _g=e^{1/X_g}1`$, and we can approximate both terms for their leading behaviors in that limit, namely
$$x_1^{(s)}(X)\alpha ,$$
(70)
$$\kappa (X)=\frac{dx_1^{(s)}}{dX}\alpha \left(\mathrm{ln}\alpha \right)^2$$
(71)
$$\tau (X)\tau _0\alpha ^3$$
(72)
where $`\tau _0`$ is a constant of the order of unity. Therefore, we get
$$r_c\tau _0\left(\mathrm{ln}\alpha _g\right)^2=\alpha _g^3.$$
(73)
For $`r_c1`$ it is
$$\alpha _gr_c^{1/3}\left|\mathrm{ln}r_c\right|^{2/3}.$$
(74)
We have omitted the factors containing $`\tau _0`$, as well as any other factor of the order of unity.
For $`\alpha <\alpha _g`$ the system is effectively “frozen”, due to the divergent tendency of the relaxation time no more transitions are possible, and the density of holes will be approximately constant in this region. A measure of the effective number of transitions left to the system before reaching $`\alpha =0`$ from a given time $`t`$ is given by the scale
$$s(t)=_t^{t_0}𝑑t^{}\frac{1}{\tau [X(t^{})]},$$
(75)
where $`t_0`$ is the time instant for which the “cooling” program finishes, i.e. $`\alpha (t_0)=0`$. In that way, the system would get frozen for $`t>t_f`$ such that $`s(t_f)=1`$, being easily obtained that $`\alpha (t_f)\alpha _g`$. Using Eq. (74) it is possible to estimate the leading order value of the compactivity at which the system gets frozen,
$$X_g=\frac{1}{\mathrm{ln}\alpha _g}\frac{3}{|\mathrm{ln}r_c|}.$$
(76)
This kind of behavior has been also found numerically in a granular system model . The inverse logarithmic dependence of the cooling rate is typical for the laboratory glass transition temperature of supercooled liquids , and it has also been analytically derived in some simple models of structural glasses .
Since the density of holes remains nearly constant for $`\alpha <\alpha _g`$, and the two first terms of Hilbert’s expansion (68) are of the same order for $`\alpha \alpha _g`$, it is reasonable to expect that
$$x_{1,\text{res}}=\underset{tt_0}{lim}x_1(t)x_1^{(s)}(\alpha _g)=\alpha _gr_c^{1/3}\left|\mathrm{ln}r_c\right|^{2/3},$$
(77)
where $`x_{1,\text{res}}`$ is the residual value of the density of holes, extending again to this model of granular system the terminology of structural glasses.
In order to check the above results, Fig. 1 shows the residual value of the density of holes as a function of the “cooling” rate, measured by the parameter
$$\delta =r_c\left(\mathrm{ln}r_c\right)^2,$$
(78)
in a log-log scale. The numerical result agrees with the theoretical prediction, Eq. (77), since the curve is well fitted by a straight line with a slope approximately equal to $`1/3`$. In Fig. 2 the evolution of the density of particles, $`\rho =1x_1`$, in a “cooling” process with rate $`r_c=10^5`$ is plotted. For comparison, the equilibrium curve, given by Eq. (59), is also shown. The estimate of the freezing compactivity from Eq. (76) is $`X_g0.26`$, which is seen to be in good agreement with the region in which the Monte Carlo density is approximately constant. Similar behaviors have been observed for other small values of the cooling rate.
### B “Heating” processes and hysteresis effects
Let us analyze “heating” processes, i. e., processes in which the vibration intensity is monotonically increased. In this kind of processes, the dynamics of the system is irreducible. Thus there is a “normal” solution of the master equation, such that any other solution tends to it. Besides, if the heating program verifies Eq. (63), the normal solution approaches the stationary curve for large enough times. We are going to discuss how these results can be applied to our model, in order to understand its behavior when “heated” from $`\alpha =0`$.
The compactivity will be increased according to the law
$$\frac{dX}{dt}=r_h,$$
(79)
where $`r_h`$ is the rate for this process. According to Eq. (64), the system will approach the stationary curve in the high compactivity (long time) limit. In the vicinity of the stationary curve, the evolution of the system is given by
$$x_1(t)=x_1^{(s)}[X(t)]r_h\kappa [X(t)]\tau [X(t)]+O(r_h^2).$$
(80)
This equation explains why, in “heating processes”, the system tends to the stationary curve following a curve different from the “cooling” one, even when $`r_c=r_h`$. It follows directly from the comparison of Eqs. (80) and Eq. (68), by noting that the deviation from the stationary behavior is of opposite signs in “heating” and “cooling” processes. Therefore, hysteresis effects show up.
However, perhaps the main result for “heating” processes is the existence of the normal solution. Fig. 3 shows the evolution of the density of particles in a heating process with $`r_h=10^5`$. The shape of the curve depends on the initial condition for $`X=0`$. Two different initial preparations of the system have been considered. The system was previously cooled down to $`X=0`$, following two different linear programs with $`r_c=10^5`$ and $`r_c=10^3`$, respectively. For the sake of clarity, these “cooling” curves are not shown. From Fig. 3 it is seen that both “heating” curves tend to a common behavior and, afterwards, they approach the stationary curve for high compactivities. Also plotted is the normal curve, which was obtained by starting from the loosest packing state, $`x_1=0.5`$ .
Figure 4 depicts a particular cycle of “cooling” and “heating” with the same rate, namely $`r_c=r_h=10^5`$, as well as the normal curve of the “heating” process. A behavior similar to the one found in real granular systems , and also in the “Tetris” model , is observed. When starting the heating process from the loosest packing state the normal curve is obtained, which tends to the stationary behavior in the high vibration intensity limit. Afterwards, cooling and reheating with the same rate leads to the other two curves of the figure. These are approximately “reversible” for very small rates, since the deviation from the stationary curve is smaller the smaller the rate. Nevertheless, they cannot be used to obtain the stationary values of the density for low compactivities, due to the glass-like kinetic transition. On the other hand, at high compactivities the deviations of the system from the stationary curve for the “cooling” and the “heating“ processes are symmetric, as predicted by Eqs. (68) and (80).
## VI Final remarks
In the first part of the paper we have considered a wide class of models of granular systems, namely those models whose dynamics under tapping is described by means of a master equation. It has been assumed that the tapping process is such that the system is able to explore the whole space of metastable states, when it is tapped with any vibration intensity $`\mathrm{\Gamma }0`$. Then, if $`\mathrm{\Gamma }`$ changes in time, and the Markov process remains irreducible in the long time limit, all the solutions of the master equation tend to approach a special solution, called the “normal” solution for the given vibration program. Besides, if the stationary distribution for constant $`\mathrm{\Gamma }`$ is consistent with Edward’s statistical mechanics theory of powders, the normal solution tends to the stationary curve in the high vibration intensity regime, provided that the “heating” program is not too fast, in the sense given by Eq. (22). It has also been argued that the compactivity $`X`$ of Edward’s theory should be an increasing function of the vibration intensity $`\mathrm{\Gamma }`$. A quite general prediction for the behavior of the normal solution in situations near the steady state has been found, by using Hilbert’s method to solve the master equation. This result can be applied to any system having a “canonical” steady state distribution.
Very recently, a simple model for a granular system submitted to vertical vibration with an stationary state described by Edward’s theory of powders has been introduced . The model belongs to the class of systems described in the previous paragraph, and here we have investigated its behavior in processes in which the vibration intensity depends on time. Such kind of processes have been carried out in real granular systems , and also investigated in some models . For the sake of concreteness we have taken the compactivity as a linear function of time.
The behavior of the model under “cooling” processes up to zero vibration intensity exhibits a phenomenon similar to the laboratory glass transition. The “cooling” evolution departs from the stationary curve, and freezes, in a narrow region around the value of the compactivity $`X_g`$ such that the effective number of transitions left to the system until reaching $`X=0`$ becomes smaller than unity. This allows us to estimate the dependence of $`X_g`$ and the residual values of the density upon the “cooling” rate. The results are analogous to those obtained previously in simple models of structural glasses .
In “heating” processes, a crucial role is played by the “normal” solution of the master equation, which is completely determined by the program of variation of the compactivity, and attracts any other solution, independently of the initial condition. The hysteresis effects found when the system is “cooled” and “reheated” are related to the trend of the system to approach the normal curve. The normal curve corresponds to “heating” the system from the loosest state. In a first stage, the system tends to approach monotonically the “normal” curve and, for longer times, corresponding to high enough compactivities, the stationary curve is reached if the “heating” program verifies Eq. (63).
The work reported here suggests a very close relationship between structural glasses and the behavior of a granular system under vibrations, supporting previous results in the same direction . It also raises some questions, both from a theoretical and experimental point of view. We think that it is worth checking the existence of the normal solution in real granular systems, and whether it can be constructed starting from the loosest packing state. Also, it seems interesting to know if the glass analogy also extends to the linear relaxation regime, giving a stretched exponential behavior of the response functions, or if the inverse logarithmic relaxation law remains valid in such a region. The latter possibility would lead to the need of investigating the reason for such a behavior.
###### Acknowledgements.
This research was partially supported by the Dirección General de Investigación Científica y Técnica (Spain) through Grant No. PB98–1124.
## A Average relaxation time of the volume
Linear relaxation describes the evolution of the system at constant value of the compactivity $`X`$, when starting from the steady state corresponding to a compactivity $`X+\mathrm{\Delta }X`$, with $`\mathrm{\Delta }X`$ very small, in the sense that we can approximate
$$𝒑^{(s)}(X+\mathrm{\Delta }X)=𝒑^{(s)}(X)+\frac{d𝒑^{(s)}(X)}{dX}\mathrm{\Delta }X.$$
(A1)
Taking into account Eq. (41), the evolution of the probability distribution from this initial condition, $`𝒑(t=0)=𝒑^{(s)}(X+\mathrm{\Delta }X)`$, is
$$𝒑(t)=𝒑^{(s)}(X)+\frac{\mathrm{\Delta }X}{X^2}\underset{q}{}\xi (q,X)𝝋(q,X)e^{t\lambda (q,X)},$$
(A2)
because $`𝝋(q,X)`$ is the eigenvector of the evolution operator corresponding to the eigenvalue $`\lambda (q,X)`$. From the above expression it is straightforward to calculate the time evolution of the average volume,
$$\overline{V}(t)\overline{V}^{(s)}(X)=\frac{\mathrm{\Delta }X}{X^2}\underset{q}{}\xi ^2(q,X)e^{t\lambda (q,X)},$$
(A3)
where we have made use of the definition of the function $`\xi (q,X)`$, Eq. (40).
The volume linear relaxation function corresponding to a value of the compactivity $`X`$ is defined in the usual way,
$$\varphi _V(t;X)\frac{\overline{V}(t)\overline{V}^{(s)}(X)}{\overline{V}(t=0)\overline{V}^{(s)}(X)},$$
(A4)
yielding
$$\varphi _V(t;X)=\frac{_q\xi ^2(q,X)e^{t\lambda (q,X)}}{_q\xi ^2(q,X)}.$$
(A5)
On the other hand, the linear relaxation time for the volume $`\tau (X)`$ is given by definition by the area under the curve $`\varphi _V(t;X)`$ as a function of $`t`$. This leads to Eq. (46). |
no-problem/9912/cond-mat9912157.html | ar5iv | text | # Dynamic instabilities and memory effects in vortex matter
Understanding the nature of flow is essential for the resolution of a wide class of phenomena in condensed matter physics, ranging from dynamic friction, through pattern formation in sand dunes, to the pinning of charge density waves. The flux line lattice in type II superconductors serves as a unique model system with tunable dynamic properties. Indeed, recent studies have shown a number of puzzling phenomena including: (i) low frequency noise , (ii) slow voltage oscillations , (iii) history dependent dynamic response , (iv) memory of the direction, amplitude, duration, and even the frequency of the previously applied current , (v) high vortex mobility for ac current with no apparent vortex motion at dc current , and (vi) strong suppression of an ac response by small dc bias . Taken together, these phenomena are incompatible with the current understanding of vortex dynamics. By investigating the current distribution across single crystals of 2H-NbSe<sub>2</sub> we reveal a generic mechanism that accounts for these observations in terms of a competition between the injection of a disordered vortex phase at the sample edges, and the dynamic annealing of this metastable disorder by the transport current. For an ac current, only narrow regions near the edges are in the disordered phase, while for dc bias, most of the sample is filled by the pinned disorder, preventing vortex motion. The resulting spatial profile of disorder acts as an active memory of the previous history.
In conventional superconductors like NbSe<sub>2</sub> the anomalous phenomena are found in the vicinity of the ‘peak effect’ where the critical current $`I_c`$ increases sharply below the upper critical field $`H_{c2}`$, as described in Fig. 1a. The peak effect marks a structural transformation of the vortex lattice: Below the peak region an ordered phase (OP) is present, which is dominated by the elastic energy of the lattice and is, therefore, weakly pinned. On approaching the peak region, however, the increased softening of the lattice causes a transition into a disordered vortex phase (DP), which accommodates better to the pinning landscape, resulting in a sharp increase in $`I_c`$. In high-temperature superconductors like Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub>, this situation is equivalent to the second magnetization peak , where the ordered Bragg-glass phase is believed to transform into a disordered solid . Figure 1a shows the $`I_c`$ measured at various frequencies. On the high temperature side of the peak effect $`I_c`$ is frequency independent; in this region the DP is thermodynamically stable. In contrast, on the low temperature side, a significant frequency dependence is observed ; in this region all the unusual vortex response phenomena appear . As described below, a dynamic coexistence of the OP and a metastable DP is established in this region in the presence of an applied current. We first outline the proposed mechanism, and then present the experimental evidence.
The first important ingredient of the proposed model is the observation that in NbSe<sub>2</sub> the DP can be readily supercooled to below the peak effect by field cooling, where it remains metastable, since the thermal fluctuations are negligible . This supercooled DP is pinned more strongly and displays a significantly larger critical current $`J_c^{dis}`$ as compared to $`J_c^{ord}`$ of the stable OP. An externally applied current in excess of $`J_c^{dis}`$ serves as an effective temperature and ‘anneals’ the metastable DP as observed by transport , magnetic response , decoration , and SANS experiments on NbSe<sub>2</sub>.
The second ingredient of the model is the presence of substantial surface barriers , as observed recently in NbSe<sub>2</sub>. Consider a steady state flow of an OP in the presence of a transport current. In the standard platelet strip geometry, in a perpendicular field $`B`$, vortices penetrate from one edge of the sample and exit at the opposite edge. In the absence of a surface barrier, vortex penetration does not require any extra force. As a result, the vortices penetrate close to their proper vortex lattice locations, as dictated by the elastic forces of the lattice. In the presence of a surface barrier, however, a large force is required for vortex penetration and exit, and hence much of the applied current flows at the edges in order to provide the necessary driving force . The surface barrier is known to be very sensitive to surface imperfections. Therefore, the penetrating vortices are injected predominantly at the weakest points of the barrier, thus destroying the local order and forming a metastable DP near the edge, which drifts into the sample with the flow of the entire lattice. (Note that steps on the surface or extended defects in inhomogeneous samples could also act as injection points of the DP). The applied current, therefore, has two effects: the current that flows at the edges causes ‘contamination’ by injecting a DP, while the current that flows in the bulk acts as an annealing mechanism. The observed dynamic instabilities and memory phenomena arise from the fine balance between these two competing processes.
The annealing process is sensitive to the exact location on the $`HT`$ phase diagram. Below the peak effect, the DP is highly unstable and therefore its relaxation time $`\tau _r`$ in the presence of a driving force is very short. As a result, it anneals rapidly over a characteristic ‘healing’ length $`L_r=v\tau _r`$, where $`v`$ is the vortex lattice drift velocity. The corresponding profile of the local critical current $`J_c(x)`$ should therefore decay from $`J_c^{dis}`$ to $`J_c^{ord}`$ over the characteristic length scale $`L_r`$, as illustrated by the dotted line in Fig. 1b. Note that $`L_r`$ and $`\tau _r`$ are generally current dependent and decrease dramatically at elevated currents . On the other hand, near the peak effect the free energies of the DP and OP are comparable and therefore the ‘life time’ of the disordered phase $`\tau _r`$ and the corresponding $`L_r`$ are very large. As a result, the front of the DP, given by $`x_d(t)`$, progressively penetrates into the bulk as shown by the solid line in Fig. 1b, until the entire sample is contaminated. In this situation the experimental, steady state dc critical current $`I_c^{dc}`$ does not reflect an equilibrium property, but rather a dynamic coexistence of two phases. It is given by $`I_c^{dc}=d_0^WJ_c(x)𝑑xdL_rJ_c^{dis}+d(WL_r)J_c^{ord}`$ for $`L_r<W`$, and $`I_c^{dc}I_c^{dis}=dWJ_c^{dis}`$ for $`L_rW`$, where $`d`$ and $`W`$ are the thickness and width of the sample (neglecting, for simplicity, the surface barrier edge currents, and assuming, for example, an exponential decay of $`J_c(x)`$ in Fig. 1b).
The ac response of the system should be distinctly different since the contamination process occurs only near the edges, where the disordered lattice periodically penetrates and exits the sample. For a square wave $`I_{ac}`$ of period $`T_{ac}=1/f`$, by the end of the positive half cycle the DP occupies the left edge to a depth of $`x_d^{ac}`$, as illustrated by the solid curve in Fig. 1b. During the negative half cycle the DP on the left exits the sample, while a DP on the right edge penetrates, until at $`t=T_{ac}`$ a mirror-image profile is obtained, as shown by the dashed line. Assuming $`x_d^{ac}<W<L_r`$, the effective $`I_c`$ observed by ac transport measurement is given by $`I_c^{ac}dx_d^{ac}J_c^{dis}+d(Wx_d^{ac})J_c^{ord}`$. Thus, an ac current necessarily contaminates the sample less than a dc current of the same amplitude, and therefore $`I_c^{ac}I_c^{dc}`$ always, as seen in Fig. 1a. In addition, since $`x_d^{ac}`$ decreases with frequency, $`I_c^{ac}`$ should decrease with $`f`$ explaining the frequency dependence of $`I_c^{ac}`$ in Fig. 1a. Furthermore, and most importantly, at sufficiently high frequency $`I_c^{ac}`$ should approach the true $`I_c`$ of the stable phase. The steep increase of the 881 Hz $`I_c^{ac}`$ data in Fig. 1a (open circles) therefore indicates that the OP transforms sharply into the DP at the peak effect. In contrast, the smooth behavior of $`I_c^{dc}`$ reflects rather the dynamic coexistence of the two phases in which $`L_r`$ gradually increases and diverges upon approaching the peak effect from below. From Fig. 1a we can evaluate $`L_r`$ and $`\tau _r`$. For example, at T=5.1K, $`I_c^{dc}`$ 50 mA is about half way between $`I_c^{ord}`$ 5 mA and the extrapolated $`I_c^{dis}`$ 100 mA, which means that $`L_r0.5W=170\mu `$m. The $`I_c^{dc}`$ was measured at a voltage criterion of 1 $`\mu `$V, which translates into vortex velocity of $`v4\times 10^3`$ m/sec, and hence $`\tau _r=L_r/v4\times 10^2`$ sec. This value is well within the range of the relaxation times measured previously by applying a current step to the field-cooled matastable DP.
We now provide a direct experimental manifestation of the key aspect of the model, which is the spatial variation of the disorder and $`J_c(x)`$, and of the transport current distribution that traces this $`J_c(x)`$ (see Fig. 1b). We have used Hall sensor arrays to measure the ac transport current self-induced field $`B_{ac}(x)`$, which is then directly inverted into the current density distribution $`J_{ac}(x)`$ using the Biot-Savart law, as described previously (see inset to Fig 1a). Figure 2 shows the corresponding current profiles $`J_{ac}(x)`$ measured at different frequencies. At high $`f`$, the DP with the enhanced $`J_c`$ is present only in narrow regions near the edges (481 Hz data). As the frequency is reduced, $`x_d^{ac}`$ grows and correspondingly the enhanced $`J_{ac}(x)`$ flows in wider regions near the edges. Note that our measurement procedure (see Fig. 2) provides the time averaged local amplitude of $`J_{ac}(x)`$, which is much smoother as compared to the sharp instantaneous profiles in Fig. 1b.
We confirm the above finding independently by measuring the corresponding ac resistance of the sample $`R_{ac}(f)`$ as shown in Fig. 3a. At high frequencies most of the sample is in the low pinning OP and therefore $`R`$ is large. As $`f`$ is decreased, progressively wider regions near the edges become contaminated with the more strongly pinned DP and thus $`R_{ac}(f)`$ decreases. If the applied $`I_{ac}`$ is larger than $`I_c^{dc}`$, a finite $`R`$ will be measured at all frequencies, however, if $`I_c^{ac}I_{ac}I_c^{dc}`$ (see Fig. 1a) the measured $`R`$ will vanish as $`f0`$, as observed in Fig. 3a. This explains the surprising phenomenon of finite vortex response to ac current, while for dc drive the vortex motion is absent . From $`R_{ac}(f)`$ one can directly calculate the width of the disordered regions by noting that $`x_d^{ac}`$ equals the distance the entire lattice is displaced during half an ac period, $`x_d^{ac}(f)=v/2f=R(f)I_{ac}/2fLB`$, where $`L`$ is the voltage contact separation. The open circles in Fig. 3b show $`x_d^{ac}`$ obtained from $`R_{ac}(f)`$, while the open squares show the $`x_d^{ac}`$ derived directly from the $`J_{ac}(x)`$ profiles of Fig. 2. The good correspondence between the two independent evaluations of $`x_d^{ac}`$ demonstrates the self-consistency of the model.
Next we address the extreme sensitivity of the ac response to a small dc bias , as shown in Fig. 4a, where $`R_{ac}`$ is presented as a function of a superposed $`I_{dc}`$. A dc bias of only 10 to 20% of $`I_{ac}`$ suppresses $`R_{ac}`$ by orders of magnitude. This behavior is a natural consequence of the described mechanism since the dc bias contaminates the sample very similarly to the pure dc case, except that $`L_r`$ is now renormalized as following. For $`I_{dc}I_{ac}`$, the vortices move back and forth during the ac cycle, with a forward displacement being larger by about $`2I_{dc}/I_{ac}`$. Therefore, a vortex that enters through the sample edge and reaches a position $`x`$, accumulates a much longer total displacement path of $`xI_{ac}/I_{dc}`$. Since the annealing process of the DP depends on the total displacement regardless of the direction, the lattice at this location is thus annealed substantially, as if the effective $`L_r`$ is reduced to $`L_r^{eff}L_rI_{dc}/I_{ac}`$. Thus, at very small biases, the DP is present only within $`x_d^{ac}`$ from the edges, as in the absence of a bias, where the disordered vortices exit and re-penetrate every cycle. Vortices that drift deeper into the bulk under the influence of $`I_{dc}`$ are practically fully annealed due to the very short $`L_r^{eff}`$. As a result in Fig. 4a the initial decrease of $`R_{ac}`$ up to $`I_{dc}2`$ mA is relatively small. The corresponding $`J_{ac}(x)`$ in Fig. 4b at $`I_{dc}=1.7`$ mA shows narrow contaminated regions near the edges, very similar to the zero bias case in Fig. 2. However, as $`I_{dc}`$ is increased, $`L_r^{eff}`$ grows and the bulk of the sample becomes contaminated by the penetrating DP, leading to a dramatic drop of $`R_{ac}`$. In this situation, $`J_{ac}(x)`$ at $`I_{dc}=5.7`$ mA shows a wide region of DP at the left edge. When $`I_{dc}`$ is inverted to -5.7 mA, a similar situation is observed, but now the vortices and hence the DP penetrate from the right edge, as expected.
The revealed mechanism readily explains a wide range of additional reported phenomena: (i) The history of the previously applied current is encoded in the spatial profile of the lattice disorder, which is preserved while the current is switched off due to negligible thermal relaxation. Upon reapplying the current, the vortex system will display a memory of all the parameters of the previously applied current including its direction, duration, amplitude, and frequency, as observed experimentally . (ii) Application of a current step $`I<I_c^{dc}`$ to a sample in the OP, results in a transient response which decays to zero since the DP is able to penetrate only a limited distance. The resulting new $`I_c`$ of the sample is given by the condition that $`I_c=I`$, as derived by fast transport measurements . Such transient phenomena, would also display characteristic times shorter or comparable to the vortex transit time across the sample, in agreement with observations . (iii) The competition between the contamination and annealing processes is expected to result in local instabilities causing the reported noise enhancement below the peak effect (see also Fig. 4a). (iv) Related phenomena should be observed in high-temperature superconductors in the vicinity of the peak effect associated with the melting transition, or near the second magnetization peak, consistent with experiments . (v) In high-temperature superconductors there is an additional consideration of thermal activation of vortices over the surface barriers, which may explain the reported slow voltage oscillations . If the thermal activation rate is higher or comparable to the driving rate, the slowly injected lattice will be ordered in contrast to the DP injected at higher drives. Thus, at a given applied current, if the bulk of the sample is in the OP, much of the current flows on the edges, rapidly injecting a DP through the surface barrier. Once the bulk gets contaminated, the resulting slower vortex motion causes again injection of an OP. This feedback mechanism can explain the voltage oscillations in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> and similar narrow band noise in YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub> and Bi<sub>2</sub>Sr<sub>2</sub>CaCu<sub>2</sub>O<sub>8</sub> with characteristic frequencies comparable to the inverse transit time. (vi) Finally, the described phenomena should be absent in the Corbino disk geometry where vortices do not cross the sample edges. Our studies of NbSe<sub>2</sub> in this geometry confirm this prediction, as will be published elsewhere.
ACKNOWLEDGEMENTS
We acknowledge helpful discussions with P. B. Littlewood. The work at WIS was supported by the Israel Science Foundation - Center of Excellence Program, by the US-Israel Binational Science Foundation (BSF), and by Alhadeff Research Award. EYA acknowledges support from NSF.
FIGURE CAPTIONS
Fig. 1. The experimental setup (inset), the critical current vs. temperature in the vicinity of the peak effect (a), and (b) a schematic plot of the dynamic coexistence of the ordered phase (OP) with a metastable disordered phase (DP).
Experimental. Several Fe (200 ppm) doped single crystals of 2H-NbSe<sub>2</sub> were investigated. Here we report data on crystal A of $`2.6\times 0.34\times 0.05`$ mm<sup>3</sup> and $`T_c=6.0`$K, and crystal B of $`2.4\times 0.29\times 0.02`$ mm<sup>3</sup> with $`T_c=6.05`$K. Four electrical contacts were attached to the top surface for transport measurements, with the voltage contact separation of 0.6 $`\pm `$ 0.2 mm. The bottom surface of the crystal was attached to an array of 19 2DEG Hall sensors $`10\times 10`$ $`\mu m^2`$ each (inset). The vortex lattice was initially prepared in the OP by zero-field-cooling to a low temperature at which the DP is unstable, and then slowly heated to the desired temperature in the presence of a constant field $`H_a`$ applied parallel to the c axis.
(a) The peak effect in critical current $`I_c`$ in NbSe<sub>2</sub> crystal A vs. temperature, as measured with a dc drive, $`I_c^{dc}`$ ($`\mathrm{}`$), and ac drive, $`I_c^{ac}`$, at 181 ($`\mathrm{}`$) and 881 Hz ($``$). The critical current was determined resistively using a voltage criterion of 1 $`\mu `$V. At low temperatures only the stable OP is present. On the lower temperature side of the peak effect a metastable DP coexists dynamically with the stable OP resulting in a frequency dependent $`I_c`$. On the high temperature side of the peak effect only the stable DP is present with no anomalous behavior.
(b) Schematic plot of the local critical current density $`J_c(x)`$ across a crystal of width $`W`$. $`J_c^{dis}`$ and $`J_c^{ord}`$ are the values of the critical current density in the fully disordered and in the OP, respectively. For $`L_rW`$ the DP relaxes rapidly into OP resulting in the dotted $`J_c(x)`$ in the steady state dc flow. For $`L_r>W`$ the DP penetrates to a depth $`x_d(t)`$ following the application of dc current at $`t=0`$ (solid curve). For an ac current at $`t=T_{ac}/2`$ the DP occupies the left edge to a depth of $`x_d^{ac}`$ (solid curve), and symmetrically the right edge at $`t=T_{ac}`$ (dashed curve).
Fig. 2. Current density profiles $`J_{ac}(x)`$ in crystal B obtained by inversion of the self-induced field measured by the Hall sensors. Shown are three frequencies $`f=22`$ ($``$), 181 ($`\mathrm{}`$), and 481 Hz ($`\mathrm{}`$). A square wave ac current $`I_{ac}`$ was applied and the corresponding self induced magnetic field $`B_{ac}(x)`$ across the crystal was measured by the Hall sensors using a lock-in amplifier (see Fig. 1 inset). By using the Biot-Savart law the $`B_{ac}(x)`$ was directly inverted into the current density profiles $`J_{ac}(x)`$. The width $`x_d^{ac}`$ of the highly pinned DP near the edges grows with decreasing frequency as expected. The measured $`J_{ac}(x)`$ is the magnitude of the local current density averaged over the ac cycle period. As a result, $`J_{ac}(x)`$ reflects a time-averaged superposition of solid and dashed $`J_c(x,t)`$ profiles in Fig. 1b, which are present separately during the positive and negative half-cycles. Close to the edges the high $`J`$ is present most of the time, while close to $`x_d^{ac}`$ it is present only a small fraction of the ac period as the DP moves in and out of the sample. Therefore, the time-averaged $`J_{ac}(x)`$, decreases smoothly from the edge to $`x_d^{ac}`$. A more detailed analysis shows that the first-harmonic measurement by the lock-in amplifier Fourier transforms the sharp instantaneous $`J_c(x,t)`$ of Fig. 1b into the observed smooth $`J_c^{ac}(x)=J_c^{ord}+(J_c^{dis}J_c^{ord})(1+cos(\pi x/x_d^{ac}))/2+(J_c^{dis}J_c^{ord})(1cos(\pi (x+x_d^{ac}W)/x_d^{ac}))/2`$. The second and third terms hold at $`0xx_d^{ac}`$ and $`Wx_d^{ac}xW`$, respectively, and are zero otherwise.
Fig. 3. Frequency dependence of the resistance $`R_{ac}(f)`$ and of the width of the disordered regions $`x_d^{ac}`$ in crystal B. (a) At high frequencies $`R_{ac}`$ is large since most of the sample is in the weakly pinned OP. As the frequency is decreased the disordered regions increase and $`R_{ac}`$ drops sharply, when $`x_d^{ac}`$ reaches values close to the sample width. In the limit of zero $`f`$ the resistance is zero since the applied current is lower than $`I_c^{dc}`$. (b) The corresponding width of the DP near the edges $`x_d^{ac}`$ derived from the $`R_{ac}(f)`$ data ($``$) and from the $`J_{ac}(x)`$ current profiles of Fig. 2 ($`\mathrm{}`$). As expected, $`x_d^{ac}`$ increases monotonically with decreasing frequency. The drop of the $`x_d^{ac}`$ data ($``$) at very low frequencies is an artifact. At such frequencies the instantaneous vortex motion is present mainly at the onset of the square wave $`I_{ac}`$ pulses, and decays towards zero during the pulse . In this situation the first-harmonic $`R_{ac}`$ measurement by the lock-in amplifier underestimates the integrated vortex displacement. In addition to the frequency dependence, $`x_d^{ac}`$ also changes significantly by varying the amplitude of the ac current $`I_{ac}`$. Here $`I_{ac}`$=10 mA was chosen such that $`x_d^{ac}`$ becomes comparable to the sample width $`W`$ at low frequencies.
Fig. 4. Measured resistance $`R_{ac}`$ at $`I_{ac}=20`$ mA as a function of dc bias $`I_{dc}`$ (a) and the corresponding distribution (b) of the ac current $`J_{ac}(x)`$ in crystal A. (a) At low bias $`I_{dc}`$ 2 mA only the regions near the edges are contaminated by the DP since $`L_r^{eff}`$ is very short, resulting in only a moderate decrease of $`R_{ac}`$. For $`I_{dc}`$ 2 mA the contamination becomes substantial and the significant decrease of $`R_{ac}`$ is accompanied by enhanced noise due to the local instabilities during the competing contamination and annealing processes. At still larger dc bias most of the sample becomes contaminated and $`R_{ac}`$ drops below our noise level. The corresponding $`J_{ac}(x)`$ profiles in (b) show that for positive $`I_{dc}=+5.7`$ mA ($``$) a substantial part of the sample is contaminated from the left edge where the vortices enter into the crystal, and similarly from the right edge for negative bias $`I_{dc}=5.7`$ mA ($`\mathrm{}`$). |
no-problem/9912/hep-ph9912508.html | ar5iv | text | # Untitled Document
##
The current wisdom on neutrinos is that the seesaw mechanism forces their masses to be very small. This paper presents a rather different explanation of the experimental facts based upon the approximate conservation of baryon-minus-lepton number, $`BL`$: If $`BL`$ is almost conserved, then the six two-component neutrino fields form three nearly Dirac neutrinos, and the six neutrino masses coalesce into three nearly degenerate pairs.
If there are three right-handed neutrinos, then there are six left-handed fields, the three left-handed flavor eigenfields $`\nu _e,\nu _\mu `$, and $`\nu _\tau `$ and the charge conjugates of the three right-handed neutrinos. The neutrino mass matrix is then a $`6\times 6`$ complex symmetric matrix $``$ which admits a singular-value decomposition $`=UMV^{}`$. The singular values are the six neutrino masses $`m_j`$, and the unitary matrix $`V^{}`$ describes the neutrino mixings.
An angle $`\theta _\nu `$ is introduced that describes the kind of the neutrinos. Dirac neutrinos have $`\theta _\nu =0`$, and Majorana neutrinos have $`\theta _\nu =\pi /2`$. If all Majorana mass terms vanish, that is if $`\theta _\nu =0`$, then the standard model conserves $`BL`$, which is a global $`U(1)`$ symmetry. It is therefore natural in the sense of ’t Hooft to assume that $`\theta _\nu 0`$ so that this symmetry is only slightly broken. The neutrinos then are nearly Dirac fermions and their masses coalesce into three pairs of almost degenerate masses. Thus the approximate conservation of $`BL`$ explains the tiny mass differences seen in the solar and atmospheric neutrino experiments without requiring the neutrino masses to be absurdly small. If one sets $`\theta _\nu 0.003`$, suppresses inter-generational mixing, and imposes a quark-like mass hierarchy, then one may fit the essential features of the solar, reactor, and atmospheric neutrino data with otherwise random mass matrices $``$ in the eV range. Thus neutrinos easily can have masses that saturate the cosmological bound of about 8 eV. Moreover because neutrinos are almost Dirac fermions, neutrinoless double-beta decay is suppressed by an extra factor $`\mathrm{sin}^2\theta _\nu \mathrm{sin}^2\varphi _\nu \stackrel{<}{}\mathrm{\hspace{0.25em}10}^5`$, where $`\varphi _\nu `$ is a second neutrino angle, and is very slow, with lifetimes in excess of $`2\times 10^{27}`$ years.
This $`BL`$ model of neutrino masses and mixings leads to these predictions about future experiments: The three flavor neutrinos oscillate mainly into the conjugates of the right-handed fields, which are sterile. Thus all experiments that look for the appearance of neutrinos will yield small or null signals, like those of LSND and KARMEN. Secondly because neutrino masses are not required to be nearly as small as the solar and atmospheric mass differences might suggest, neutrinos may well be an important part of hot dark matter. Thirdly if a suitable experiment can be designed, it should be possible to see the tau neutrino disappear. Fourthly, the rate of neutrinoless double-beta decay is suppressed by an extra factor $`\mathrm{sin}^2\theta _\nu \mathrm{sin}^2\varphi _\nu \stackrel{<}{}\mathrm{\hspace{0.25em}10}^5`$ and hence will not be seen in the Heidelberg/Moscow, IGEX, GENIUS, or CUORE experiments.
## Masses and Mixings
Because left- and right-handed fields transform differently under Lorentz boosts, they cannot mix. It is therefore convenient to write the action exclusively in terms of two-component, left-handed fields. The two-component, left-handed neutrino flavor eigenfields $`\nu _e,\nu _\mu ,\nu _\tau `$ will be denoted $`\nu _i,`$ for $`i=e,\mu ,\tau `$. The two-component, left-handed fields that are the charge conjugates of the putative right-handed neutrino fields $`n_{re},n_{r\mu },n_{r\tau }`$ will be denoted $`n_i=i\sigma ^2n_{ri}^{}`$ for $`i=e,\mu ,\tau `$, where $`\sigma ^2`$ is the second Pauli spin matrix.
The six left-handed neutrino fields $`\nu _i,n_i`$ for $`i=1,2,3`$ can have three kinds of mass terms: The fields $`\nu _i`$ and $`n_j`$ can form the Dirac mass terms $`iD_{ij}\nu _i\sigma ^2n_jiD_{ij}^{}n_j^{}\sigma ^2\nu _i^{}`$; in a minimal extension of the standard model, the complex numbers $`D_{ij}`$ are proportional to the mean value in the vacuum of the neutral component of the Higgs field. The fields $`n_i`$ and $`n_j`$ can form the Majorana mass terms $`iE_{ij}n_i\sigma ^2n_jiE_{ij}^{}n_j^{}\sigma ^2n_i^{}`$, which break $`BL`$. Because these mass terms connect right-handed neutrino fields, which are sterile, they do not affect neutrinoless double-beta decay, at least in leading order. Within the standard model, the complex numbers $`E_{ij}`$ are simply numbers; in a more unified theory, they might be proportional to the mean values in the vacuum of neutral components of Higgs bosons. The fields $`\nu _i`$ and $`\nu _j`$ can form the Majorana mass terms $`iF_{ij}\nu _i\sigma ^2\nu _jiF_{ij}^{}\nu _j^{}\sigma ^2\nu _i^{}`$, which break $`SU(2)_LU(1)_Y`$ and $`BL`$. Because these mass terms connect left-handed neutrino fields, they potentially drive neutrinoless double-beta decay. In a minimal extension of the standard model, the complex numbers $`F_{ij}`$ might be proportional to the mean values in the vacuum of the neutral component of a new Higgs triplet $`h_{ab}=h_{ba}`$.
Since $`\sigma ^2`$ is antisymmetric and since any two fermion fields $`\chi `$ and $`\psi `$ anticommute, it follows that $`\chi \sigma ^2\psi =\psi \sigma ^2\chi `$ and $`\chi ^{}\sigma ^2\psi ^{}=\psi ^{}\sigma ^2\chi ^{}`$, which implies that the $`3\times 3`$ complex matrices $`E`$ and $`F`$ are symmetric $`E^{}=E\mathrm{and}F^{}=F`$ and that $`iD_{ij}n_j\sigma ^2\nu _i=iD_{ij}\nu _i\sigma ^2n_j.`$ Thus if we introduce the $`6\times 6`$ matrix
$$=\left(\begin{array}{cc}F& D\\ D^{}& E\end{array}\right)$$
(1)
and the (transposed) six-vector $`N^{}=(\nu _e,\nu _\mu ,\nu _\tau ,n_e,n_\mu ,n_\tau )`$ of left-handed neutrino fields, then we may gather the mass terms into the matrix expression
$$\frac{i}{2}N^{}\sigma ^2N\frac{i}{2}N^{}^{}\sigma ^2N^{}.$$
(2)
The complex symmetric mass matrix $``$ is not normal unless the positive hermitian matrix $`^{}`$ is real because $`[,^{}]=2i\mathrm{}m\left(^{}\right)`$. When the mass matrix $``$ is real, it may be diagonalized by an orthogonal transformation. In general $``$ is neither real nor normal; but like every matrix, it admits a singular-value decomposition
$$=UMV^{}$$
(3)
in which the $`6\times 6`$ matrices $`U`$ and $`V`$ are both unitary and the $`6\times 6`$ matrix $`M`$ is diagonal, $`M=\mathrm{diag}(m_1,m_2,m_3,m_4,m_5,m_6)`$, with singular values $`m_j0`$, which will turn out to be the masses of the six neutrinos.
The free, kinetic action density of a two-component left-handed spinor $`\psi `$ is $`i\psi ^{}\left(_0\stackrel{}{\sigma }\right)\psi `$. Thus by including the mass terms (2), one may write the free action density of the six left-handed neutrino fields $`N`$ as
$$_0=iN^{}\left(_0\stackrel{}{\sigma }\right)N+\frac{i}{2}N^{}\sigma ^2N\frac{i}{2}N^{}^{}\sigma ^2N^{}$$
(4)
from which follow the equations of motion for $`N`$
$$\left(_0\stackrel{}{\sigma }\right)N=^{}\sigma ^2N^{}$$
(5)
and $`N^{}`$
$$\left(_0+\stackrel{}{\sigma }\right)\sigma ^2N^{}=N.$$
(6)
Applying $`\left(_0+\stackrel{}{\sigma }\right)`$ to the field equation (5) for $`N`$ and then using the field equation (6) for $`N^{}`$, we find that
$$\left(\mathrm{}^{}\right)N=0,$$
(7)
in which we used the symmetry of the matrix $``$ to write $`^{}`$ as $`^{}`$.
The singular-value decomposition $`=UMV^{}`$ allows us to express this equation (7) in the form
$$\left(\mathrm{}M^2\right)V^{}N=0,$$
(8)
which shows that the singular values $`m_i`$ of the mass matrix $``$ are the neutrino masses and that the eigenfield of mass $`m_j`$ is
$$\nu _{m_j}=\underset{i=1}{\overset{6}{}}V_{ij}^{}N_i.$$
(9)
The vector $`N_m`$ of mass eigenfields is thus $`N_m=V^{}N`$, and so the flavor eigenfields $`N`$ are given by $`N=VN_m`$. In particular, the three left-handed fields $`\nu _i`$ for $`i=e,\mu ,\tau `$ are linear combinations of the six mass eigenfields, $`\nu _i=_{j=1}^6V_{ij}\nu _{m_j}`$ and not simply linear combinations of three mass eigenfields.
## Experimental Constraints
The four LEP measurements of the invisible partial width of the $`Z`$ impose upon the number of light neutrino types the constraint $`N_\nu =2.984\pm 0.008.`$ The amplitude for the $`Z`$ production of two neutrinos $`\nu _{m_j}`$ and $`\nu _{m_k}`$ to lowest order is $`A(\nu _{m_j},\nu _{m_k})_{i=1}^3V_{ik}^{}V_{ij}`$, and therefore the x-section for that process is $`\sigma (\nu _{m_j},\nu _{m_k})|_{i=1}^3V_{ik}^{}V_{ij}|^2`$. The LEP measurement of the number $`N_\nu `$ of light neutrino species thus implies that the sum over the light-mass eigenfields is
$$\underset{j,k\mathrm{light}}{}|\underset{i=1}{\overset{3}{}}V_{ik}^{}V_{ij}|^2=2.984\pm 0.008.$$
(10)
This constraint on the $`6\times 6`$ unitary matrix $`V`$ is quite well satisfied if all six neutrino masses are light. For in this all-light scenario, the sum is
$$\underset{j,k=1}{\overset{6}{}}\underset{i=1}{\overset{3}{}}\underset{i^{}=1}{\overset{3}{}}V_{ik}^{}V_{ij}V_{i^{}k}V_{i^{}j}^{}=\underset{i=1}{\overset{3}{}}\underset{i^{}=1}{\overset{3}{}}\delta _{ii^{}}\delta _{ii^{}}=\underset{i=1}{\overset{3}{}}1=32.984\pm 0.008.$$
(11)
If the Hubble constant in units of 100 km/sec/Mpc is $`h0.65`$, then the conservative upper bound on the neutrino component of hot dark matter, $`\mathrm{\Omega }_\nu \stackrel{<}{}\mathrm{\hspace{0.25em}0.2}`$, implies that the sum of the masses of the light, stable two-component neutrinos that interact weakly is bounded by
$$\underset{j\mathrm{light}}{}m_j\stackrel{<}{}\mathrm{\hspace{0.25em}\hspace{0.17em}8}\mathrm{eV}.$$
(12)
The lowest-order amplitude for a neutrino $`\nu _i`$ to be produced by a charged lepton $`e_i`$, to propagate with energy $`E`$ a distance $`L`$ as some light-mass eigenfield of mass $`m_jE`$, and to produce a charged lepton $`e_i^{}`$ is
$$A(\nu _i\nu _i^{})\underset{j\mathrm{light}}{}V_{i^{}j}V_{ij}^{}e^{\frac{im_j^2L}{2E}}.$$
(13)
The lowest-order amplitude for the anti-process, $`\overline{\nu }_i\overline{\nu }_i^{}`$, involves the complex conjugate of the matrix $`V`$
$$A(\overline{\nu }_i\overline{\nu }_i^{})\underset{j\mathrm{light}}{}V_{i^{}j}^{}V_{ij}e^{\frac{im_j^2L}{2E}}.$$
(14)
To lowest order the corresponding probabilities are
$$P(\nu _i\to \nu _{i^{\prime }})\propto \sum _{j,j^{\prime }\in \mathrm{light}}V_{i^{\prime }j}V_{ij}^{*}V_{i^{\prime }j^{\prime }}^{*}V_{ij^{\prime }}\mathrm{exp}\left(\frac{i(m_{j^{\prime }}^2-m_j^2)L}{2E}\right)$$
(15)
and
$$P(\overline{\nu }_i\to \overline{\nu }_{i^{\prime }})\propto \sum _{j,j^{\prime }\in \mathrm{light}}V_{i^{\prime }j}^{*}V_{ij}V_{i^{\prime }j^{\prime }}V_{ij^{\prime }}^{*}\mathrm{exp}\left(\frac{i(m_{j^{\prime }}^2-m_j^2)L}{2E}\right).$$
(16)
If all six neutrinos are light, then in the limit $`L/E\to 0`$ these sums are $`\delta _{ii^{\prime }}`$.
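As an illustration of Eq. (15), the sketch below evaluates an oscillation probability from the $`V`$ and $`m`$ obtained in the SVD sketch above. The flavor ordering, the choice of units, and the coefficient 2.534 (the $`\mathrm{}=c=1`$ conversion of $`m^2L/2E`$ with $`m`$ in eV, $`L`$ in km, and $`E`$ in GeV) are my conventions, not the paper's.

```python
import numpy as np

# Flavor order assumed: 0,1,2 = nu_e, nu_mu, nu_tau; 3,4,5 = sterile partners.
def osc_prob(V, m, i, ip, L_km, E_GeV, light=None):
    """Lowest-order P(nu_i -> nu_i') from Eq. (15).

    The phase m_j^2 L / (2E) equals 2.534 * m_j^2[eV^2] * L[km] / E[GeV]
    in hbar = c = 1 units."""
    if light is None:
        light = range(len(m))          # treat all six eigenstates as light
    amp = sum(V[ip, j] * np.conj(V[i, j]) *
              np.exp(-2.534j * m[j] ** 2 * L_km / E_GeV)
              for j in light)
    return abs(amp) ** 2

# Example: nu_e survival for a 1 MeV neutrino after 1.5e8 km (about 1 AU),
# using V and m from the SVD sketch above.
# P_ee = osc_prob(V, m, 0, 0, L_km=1.5e8, E_GeV=1e-3)
```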
If for simplicity we stretch the error bars on the Chlorine experiment and average over one year, then the solar neutrino experiments, especially Gallex and SAGE, see a diminution of electron neutrinos by a factor of about one-half:
$$P_{\mathrm{sol}}(\nu _e\to \nu _e)\approx \frac{1}{2},$$
(17)
which requires a pair of mass eigenstates whose squared masses differ by at least $`10^{-10}\mathrm{eV}^2`$ . The reactor experiments, Palo Verde and especially Chooz, imply that these squared masses differ by less than $`10^{-3}\mathrm{eV}^2`$ .
The atmospheric neutrino experiments, Soudan II, Kamiokande III, IMB-3, and especially SuperKamiokande, see a diminution of muon neutrinos and antineutrinos by about one-third:
$$P_{\mathrm{atm}}(\nu _\mu \to \nu _\mu )\approx \frac{2}{3},$$
(18)
which requires a pair of mass eigenstates whose squared masses differ by $`10^{-3}\mathrm{eV}^2\stackrel{<}{}|m_j^2-m_k^2|\stackrel{<}{}\mathrm{\hspace{0.25em}10}^{-2}\mathrm{eV}^2`$ .
## The $`B-L`$ Model
When the Majorana mass matrices $`E`$ and $`F`$ are both zero, the action density (4) is invariant under the $`U(1)`$ transformation $`N^{\prime }=e^{i\theta G}N`$ in which the $`6\times 6`$ block-diagonal matrix $`G=\mathrm{diag}(I,-I)`$ with $`I`$ the $`3\times 3`$ identity matrix. The kinetic part of (4) is clearly invariant under this transformation. The mass terms are invariant only when the anti-commutator
$$\{\mathcal{M},G\}=2\left(\begin{array}{cc}F& 0\\ 0& -E\end{array}\right)=0$$
(19)
vanishes.
This $`U(1)`$ symmetry is the restriction to the neutrino sector of the symmetry generated by baryon-minus-lepton number, $`B-L`$, which is exactly conserved in the standard model. A minimally extended standard model with right-handed neutrino fields $`n_{ri}`$ and a Dirac mass matrix $`D`$ but with no Majorana mass matrices, $`E=F=0`$, also conserves $`B-L`$. When $`B-L`$ is exactly conserved, *i.e.,* when $`D\ne 0`$ but $`E=F=0`$, then the six neutrino masses $`m_j`$ collapse into three pairs of degenerate masses because the left-handed and right-handed fields that form a Dirac neutrino have the same mass.
Suppose this symmetry is slightly broken by the Majorana mass matrices $`E`$ and $`F`$. Then for random mass matrices $`D`$, $`E`$, and $`F`$, the six neutrino masses $`m_j`$ will form three pairs of nearly degenerate masses as long as the ratio
$$\mathrm{sin}^2\theta _\nu =\frac{\mathrm{Tr}(E^{\dagger }E+F^{\dagger }F)}{\mathrm{Tr}(2D^{\dagger }D+E^{\dagger }E+F^{\dagger }F)}$$
(20)
is small. For a generic mass matrix $``$, the parameter $`\mathrm{sin}^2\theta _\nu `$ lies between the extremes $`0\mathrm{sin}^2\theta _\nu 1`$ and characterizes the kind of the neutrinos. The parameter $`\mathrm{sin}^2\theta _\nu `$ is zero for purely Dirac neutrinos and unity for purely Majorana neutrinos.
Let us now recall ’t Hooft’s definition of naturalness: It is natural to assume that a parameter is small if the theory becomes more symmetrical when the parameter vanishes. In this sense it is natural to assume that the parameter $`\mathrm{sin}^2\theta _\nu `$ is small because the minimally extended standard model becomes more symmetrical, conserving $`B-L`$, when $`\mathrm{sin}^2\theta _\nu =0`$.
In Fig. 1 the six neutrino masses $`m_j`$ are plotted for a set of mass matrices $`\mathcal{M}`$ that differ only in the parameter $`\mathrm{sin}\theta _\nu `$. Apart from $`\mathrm{sin}\theta _\nu `$, every other parameter of the mass matrices $`\mathcal{M}`$ is a complex number $`z=x+iy`$ in which $`x`$ and $`y`$ were chosen randomly and uniformly on the interval $`[-1\mathrm{eV},1\mathrm{eV}]`$. It is clear in the figure that when $`\mathrm{sin}^2\theta _\nu \approx 0`$, the six neutrino masses $`m_j`$ coalesce into three nearly degenerate pairs. Although the six masses of the neutrinos are in the eV range, they form three pairs with very tiny mass differences when $`\mathrm{sin}^2\theta _\nu \approx 0`$.
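A sketch of the kind of scan behind Fig. 1 (the normalization is mine: $`D`$ is scaled by $`\mathrm{cos}\theta _\nu `$ and $`E`$, $`F`$ by $`\mathrm{sin}\theta _\nu `$ so that Eq. (20) holds exactly; how the figure itself was generated is not spelled out in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
rand_c = lambda: rng.uniform(-1, 1, (3, 3)) + 1j * rng.uniform(-1, 1, (3, 3))

# One random draw in eV, reused for every value of sin(theta_nu).
D0 = rand_c()
E0 = rand_c(); E0 = (E0 + E0.T) / 2
F0 = rand_c(); F0 = (F0 + F0.T) / 2
nD = np.sqrt(2 * np.trace(D0.conj().T @ D0).real)
nM = np.sqrt(np.trace(E0.conj().T @ E0 + F0.conj().T @ F0).real)

def masses(sin_t):
    cos_t = np.sqrt(1 - sin_t ** 2)
    M = np.block([[sin_t * F0 / nM, cos_t * D0 / nD],
                  [cos_t * D0.T / nD, sin_t * E0 / nM]])
    return np.sort(np.linalg.svd(M, compute_uv=False))

for s in (0.3, 0.03, 0.003):
    print(f"sin(theta_nu) = {s:>6}:", np.round(masses(s), 5))
# As sin(theta_nu) -> 0 the six masses visibly coalesce into three
# nearly degenerate pairs, which is the behavior shown in Fig. 1.
```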
Thus the very small mass differences required by the solar and atmospheric experiments are naturally explained by the assumption that the symmetry generated by $`B-L`$ is broken only slightly by the Majorana mass matrices $`E`$ and $`F`$. This same assumption implies that neutrinos are very nearly Dirac fermions and hence explains the very stringent upper limits on neutrinoless double-beta decay. Because the masses of the six neutrinos may lie in the range of a few eV, instead of being squashed down to the meV range by the seesaw mechanism, they may contribute to hot dark matter in a way that is cosmologically significant. This $`B-L`$ model with $`\mathrm{sin}^2\theta _\nu \approx 0`$ is the converse of the seesaw mechanism.
If $`\mathrm{sin}^2\theta _\nu =0`$, then there are three purely Dirac neutrinos, and the mixing matrix $`V`$ is block diagonal $`V=\mathrm{diag}(u^{*},v)`$ in which the $`3\times 3`$ unitary matrices $`u`$ and $`v`$ occur in the singular-value decomposition of the $`3\times 3`$ matrix $`D=umv^{\dagger }`$. If these three Dirac neutrinos are also light, then unitarity implies that the sum of the normalized probabilities is unity $`\sum _{i^{\prime }=e}^{\tau }P(\nu _i\to \nu _{i^{\prime }})=1`$. If $`\mathrm{sin}^2\theta _\nu =1`$, then this sum is also unity by unitarity because in this case the mixing matrix for the six purely Majorana neutrinos is also block diagonal $`V=\mathrm{diag}(v_F,v_E)`$. But if there are six light, nearly Dirac neutrinos, then each neutrino flavor $`\nu _i`$ will oscillate both into other neutrino flavor eigenfields and into sterile neutrino eigenfields. In this case this sum tends to be roughly a half $`\sum _{i^{\prime }=e}^{\tau }P(\nu _i\to \nu _{i^{\prime }})\approx \frac{1}{2}`$ as long as $`\mathrm{sin}^2\theta _\nu `$ is small but not infinitesimal. Because of this approximate, empirical sum rule for $`i=e`$ and $`\mu `$, the only way in which the probabilities $`P_{\mathrm{sol}}(\nu _e\to \nu _i)`$ and $`P_{\mathrm{atm}}(\nu _\mu \to \nu _i)`$ can fit the experimental results (17) and (18) is if inter-generational mixing is suppressed so that $`\nu _e`$ oscillates into $`n_e`$ and so that $`\nu _\mu `$ oscillates into $`n_\mu `$. In other words, random mass matrices $`\mathcal{M}`$, even with $`\mathrm{sin}\theta _\nu \approx 0`$, produce probabilities $`P_{\mathrm{sol}}(\nu _e\to \nu _e)`$ and $`P_{\mathrm{atm}}(\nu _\mu \to \nu _\mu )`$ (suitably averaged respectively over the Earth’s orbit and over the atmosphere) that are too small. The probabilities $`P_{\mathrm{sol}}(\nu _e\to \nu _e)`$ and $`P_{\mathrm{atm}}(\nu _\mu \to \nu _\mu )`$ do tend to cluster around $`(\frac{1}{2},\frac{2}{3})`$ as required by the experiments when inter-generational mixing is severely repressed, that is if the singly off-diagonal matrix elements of $`D,E,`$ and $`F`$ are suppressed by 0.05 and the doubly off-diagonal matrix elements by 0.0025.
It is possible to relax the factors that suppress inter-generational mixing to 0.2 and 0.04 and improve the agreement with the experimental constraints (17) and (18) (while satisfying the CHOOZ constraint) provided that one also requires that there be a quark-like mass hierarchy. The points in Fig. 2 were generated by random mass matrices $`\mathcal{M}`$ with $`\mathrm{sin}\theta _\nu =0.003`$ by using CKM-suppression factors of 0.2 and 0.04 and by scaling the $`i,j`$-th elements of the mass matrices $`E,F,`$ and $`D`$ by the factor $`f(i)f(j)`$ where $`\vec{f}=(0.2,1,2)`$. Thus the mass matrix $`\mathcal{M}`$ has $`\tau ,\tau `$ elements that are larger than its $`\mu ,\mu `$ elements and $`\mu ,\mu `$ elements that in turn are larger than its $`e,e`$ elements. The clustering of the probabilities $`P_{\mathrm{sol}}(\nu _e\to \nu _e)`$ and $`P_{\mathrm{atm}}(\nu _\mu \to \nu _\mu )`$ around $`(\frac{1}{2},\frac{2}{3})`$ in Fig. 2 shows that the experimental results (17) and (18) are satisfied. The vector $`\vec{f}`$ was tuned so as to nearly saturate the cosmological upper bound (12) of about $`8\mathrm{eV}`$.
In this scatter plot, every parameter of each of the 10000 matrices $`\mathcal{M}`$ is a complex number $`z=x+iy`$ with $`x`$ and $`y`$ chosen randomly and uniformly from the interval $`[-1\mathrm{eV},1\mathrm{eV}]`$. The solar neutrinos are taken to have an energy of 1 MeV, and the probability (15) is averaged over one revolution of the Earth about the Sun. The atmospheric neutrinos are taken to have an energy of 1 GeV, and the probability (15) is averaged over the atmosphere weighted by $`\mathrm{sec}\theta _Z`$ in the notation of Fisher *et al.* . The thousands of singular-value decompositions were performed by the LAPACK driver subroutine zgesvd .
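A sketch of the matrix construction just described (the suppression factors 0.2 and 0.04, the hierarchy vector $`\vec{f}=(0.2,1,2)`$, and $`\mathrm{sin}\theta _\nu =0.003`$ are taken from the text; exactly how $`\mathrm{sin}\theta _\nu `$ multiplies $`E`$ and $`F`$ is my assumption, and the averaging over the Earth's orbit and over the atmosphere is left out here):

```python
import numpy as np

rng = np.random.default_rng(2)
f = np.array([0.2, 1.0, 2.0])                    # generation hierarchy f(i)
ckm = np.array([[1.0, 0.2, 0.04],
                [0.2, 1.0, 0.2],
                [0.04, 0.2, 1.0]])               # inter-generational suppression
weight = ckm * np.outer(f, f)                    # weight for element (i, j)

def weighted_block(symmetric=False):
    z = rng.uniform(-1, 1, (3, 3)) + 1j * rng.uniform(-1, 1, (3, 3))   # eV
    if symmetric:
        z = (z + z.T) / 2
    return weight * z

sin_t = 0.003
D = weighted_block()
E = sin_t * weighted_block(symmetric=True)
F = sin_t * weighted_block(symmetric=True)
m = np.linalg.svd(np.block([[F, D], [D.T, E]]), compute_uv=False)
print("masses (eV):", np.round(np.sort(m), 4), "  sum:", round(m.sum(), 2))
```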
Neutrinoless double beta decay occurs when a right-handed antineutrino emitted in one decay $`n\to p+e^{-}+\overline{\nu }_e`$ is absorbed as a left-handed neutrino in another decay $`\nu _e+n\to p+e^{-}`$. To lowest order these decays proceed via the Majorana mass term $`iF_{ee}^{*}\nu _e^{\dagger }\sigma ^2\nu _e^{*}`$. Let us introduce a second angle $`\varphi _\nu `$ defined by
$$\mathrm{sin}^2\varphi _\nu =\frac{\mathrm{Tr}(F^{\dagger }F)}{\mathrm{Tr}(E^{\dagger }E+F^{\dagger }F)}.$$
(21)
We have seen that we may fit the experimental data (17) and (18) by assuming that $`\mathrm{sin}\theta _\nu \approx 0.003`$ and by requiring the mass matrices $`E,F,`$ and $`D`$ to exhibit quark-like mass hierarchies with little inter-generational mixing. Under these conditions the rate of $`0\nu \beta \beta `$ decay is limited by the factor
$$|F_{ee}|^2\stackrel{<}{}\mathrm{sin}^2\theta _\nu \mathrm{sin}^2\varphi _\nu m_{\nu _e}^2,$$
(22)
in which $`m_{\nu _e}`$ is the heavier of the lightest two neutrino masses. Thus the rate of $`0\nu \beta \beta `$ decay is suppressed by an extra factor $`\mathrm{sin}^2\theta _\nu \mathrm{sin}^2\varphi _\nu \stackrel{<}{}\mathrm{\hspace{0.25em}10}^{-5}`$ resulting in lifetimes $`T_{\frac{1}{2},0\nu \beta \beta }>2\times 10^{27}`$yr. The $`B-L`$ model therefore explains why neutrinoless double-beta decay has not been seen and predicts that the current and upcoming experiments Heidelberg/Moscow, IGEX, GENIUS, and CUORE will not see $`0\nu \beta \beta `$ decay.
## Conclusions
The standard model slightly extended to include right-handed neutrino fields exactly conserves $`B-L`$ if all Majorana mass terms vanish. It is therefore natural to assume that the Majorana mass terms are small compared to the Dirac mass terms. A parameter $`\mathrm{sin}^2\theta _\nu `$ is introduced that characterizes the relative importance of these two kinds of mass terms. When this parameter is very small, then the neutrinos are nearly Dirac and only slightly Majorana. In this case the six neutrino masses $`m_j`$ coalesce into three pairs of nearly degenerate masses. Thus the very tiny mass differences seen in the solar and atmospheric neutrino experiments are simply explained by the natural assumption that $`\mathrm{sin}\theta _\nu \approx 0.003`$ or equivalently that $`B-L`$ is almost conserved. In these experiments the probabilities $`P_{\mathrm{sol}}(\nu _e\to \nu _e)`$ and $`P_{\mathrm{atm}}(\nu _\mu \to \nu _\mu )`$ are respectively approximately one half and two thirds. One may fit these probabilities with random mass matrices in the eV range by requiring the neutrino mass matrices $`E,F,`$ and $`D`$ to exhibit quark-like mass hierarchies with little inter-generational mixing.
This $`B-L`$ model leads to these predictions:
1. Because $`\mathrm{sin}^2\theta _\nu \ne 0`$ and because inter-generational mixing is suppressed, neutrinos oscillate mainly into sterile neutrinos of the same flavor and not into neutrinos of other flavors. Hence rates for the appearance of neutrinos, $`P(\nu _i\to \nu _{i^{\prime }})`$ with $`i\ne i^{\prime }`$, are very low as shown by LSND and KARMEN.
2. The assumption that $`\mathrm{sin}^2\theta _\nu `$ is very small naturally explains the very small differences of squared masses seen in the solar and atmospheric experiments without requiring that the neutrino masses themselves be very small. Thus the neutrinos may very well saturate the cosmological bound, $`\sum _jm_j\stackrel{<}{}\mathrm{\hspace{0.25em}8}\mathrm{eV}`$. In fact the masses associated with the points of Fig. 2 do nearly saturate this bound. Neutrinos thus may well be an important part of hot dark matter.
3. The disappearance of $`\nu _\tau `$ should *in principle* be observable.
4. In the $`B-L`$ model, the rate of neutrinoless double-beta decay is suppressed by an extra factor $`\mathrm{sin}^2\theta _\nu \mathrm{sin}^2\varphi _\nu \stackrel{<}{}\mathrm{\hspace{0.25em}10}^{-5}`$ resulting in lifetimes greater than $`2\times 10^{27}`$yr. Thus the current and upcoming experiments Heidelberg/Moscow, IGEX, GENIUS, and CUORE will not see $`0\nu \beta \beta `$ decay.
## Acknowledgements
I am grateful to H. Georgi for a discussion of neutrinoless double beta decay and to B. Bassalleck, J. Demmel, B. Dieterle, M. Gold, G. Herling, D. Karlen, B. Kayser, P. Krastev, S. McCready, R. Mohapatra, R. Reeder, and G. Stephenson for other helpful conversations. |
no-problem/9912/hep-th9912117.html | ar5iv | text | # 1 Introduction
## 1 Introduction
$`D`$-branes play a significant role in superstrings and superconformal field theories. Two of the most outstanding developments in this direction have been achieved:
1. The generalized AdS/CFT correspondence , which relates the superconformal field theory on $`Dp`$-branes placed at the orbifold singularity and the Type IIB string theory compactified on $`AdS_{p+2}\times H^{8-p}`$ ;
2. The K-theory approach to $`D`$-brane charges , which identifies $`D`$-brane charges with elements of Grothendieck K-groups of horizon manifolds.
In the present paper we use K-theory to compute the $`D`$-brane spectra in the Type IIB string theory compactified on $`AdS_{p+2}\times S^{8-p}`$.
## 2 $`D`$-brane spectra
Let us consider the fibre bundle
| $`S^{8-p}`$ | $`\to `$ | $`B^{9-p}`$ |
| --- | --- | --- |
| | | $`\downarrow `$ |
| | | $`B^{9-p}/S^{8-p}`$ |
The K-groups characterizing this bundle are related by the exact hexagon
| | | $`\stackrel{~}{K}\left(B^{9-p}/S^{8-p}\right)`$ | $`\to `$ | $`\stackrel{~}{K}\left(B^{9-p}\right)`$ | | |
| --- | --- | --- | --- | --- | --- | --- |
| $`\stackrel{\delta }{\uparrow }`$ | | | | | | $`\downarrow `$ |
| $`\stackrel{~}{K}\left(SS^{8-p}\right)`$ | | | | | | $`\stackrel{~}{K}\left(S^{8-p}\right)`$ |
| $`\uparrow `$ | | | | | | $`\stackrel{\delta }{\downarrow }`$ |
| | | $`\stackrel{~}{K}\left(SB^{9-p}\right)`$ | $`\leftarrow `$ | $`\stackrel{~}{K}\left(S\left(B^{9-p}/S^{8-p}\right)\right)`$ | | |
where $`\delta `$ is the coboundary homomorphism. This hexagon is the counterpart of the generalized AdS/CFT correspondence (cf. ).
Since
$$\stackrel{~}{K}\left(B^{9-p}\right)=\stackrel{~}{K}\left(SB^{9-p}\right)=0,$$
the hexagon splits into the exact sequences
$$0\to \stackrel{~}{K}\left(SS^{8-p}\right)\stackrel{\delta }{\to }\stackrel{~}{K}\left(B^{9-p}/S^{8-p}\right)\to 0$$
(1)
$$0\to \stackrel{~}{K}\left(S^{8-p}\right)\stackrel{\delta }{\to }\stackrel{~}{K}\left(S\left(B^{9-p}/S^{8-p}\right)\right)\to 0$$
(2)
The sequences (1) and (2) are related by T-duality.
The group $`\stackrel{~}{K}\left(SS^{8-p}\right)`$ from (1) reproduces the $`D`$-brane spectrum
Table 1
| $`Dp`$ | $`D9`$ | $`D8`$ | $`D7`$ | $`D6`$ | $`D5`$ | $`D4`$ | $`D3`$ | $`D2`$ | $`D1`$ | $`D0`$ | $`D(-1)`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`S^{9-p}`$ | $`S^0`$ | $`S^1`$ | $`S^2`$ | $`S^3`$ | $`S^4`$ | $`S^5`$ | $`S^6`$ | $`S^7`$ | $`S^8`$ | $`S^9`$ | $`S^{10}`$ |
| $`\stackrel{~}{K}(S^{9-p})`$ | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ |
which coincides with the known result in Type IIB theory .
The group $`\stackrel{~}{K}\left(S^{8-p}\right)`$ from (2) reproduces the $`D`$-brane spectrum
Table 2
| $`Dp`$ | $`D9`$ | $`D8`$ | $`D7`$ | $`D6`$ | $`D5`$ | $`D4`$ | $`D3`$ | $`D2`$ | $`D1`$ | $`D0`$ | $`D(-1)`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`S^{8-p}`$ | $`S^{-1}`$ | $`S^0`$ | $`S^1`$ | $`S^2`$ | $`S^3`$ | $`S^4`$ | $`S^5`$ | $`S^6`$ | $`S^7`$ | $`S^8`$ | $`S^9`$ |
| $`\stackrel{~}{K}(S^{8-p})`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 | $`ℤ`$ | 0 |
which signals the existence of the mirror-symmetry-analogue for branes, analogous to that raised in the context of derived categories .
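Both tables are an instance of Bott periodicity: the reduced K-theory of spheres is $`\stackrel{~}{K}(S^n)=ℤ`$ for $`n`$ even (including $`n=0`$) and $`0`$ for $`n`$ odd. A short sketch (mine, not from the paper) that regenerates the last rows of Tables 1 and 2:

```python
# Reduced K-theory of spheres from Bott periodicity.
def k_tilde_sphere(n):
    return "Z" if n % 2 == 0 else "0"

# Table 1 uses K~(S^(9-p)), Table 2 uses K~(S^(8-p)), for p = 9, ..., -1.
# (For p = 9 the second entry is the formal S^(-1), listed as 0 in Table 2.)
for p in range(9, -2, -1):
    print(f"D{p:>2}: K~(S^{9 - p}) = {k_tilde_sphere(9 - p)},"
          f"  K~(S^{8 - p}) = {k_tilde_sphere(8 - p)}")
```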
## 3 Vacuum manifold
Using standard definitions , we obtain
$$\stackrel{~}{K}\left(SS^{8-p}\right)=\pi _{9-p}\left(BU\right),$$
$$\stackrel{~}{K}\left(S^{8-p}\right)=\pi _{8-p}\left(BU\right),$$
where $`BU`$ is the inductive limit of the manifold
$$U\left(2N\right)/U\left(N\right)\times U\left(N\right)$$
(3)
The vacuum manifold (3) has the following interpretation in terms of $`D`$-branes . When $`2N`$ coincident branes are separated to form two parallel stacks of $`N`$ coincident branes, their gauge symmetry $`U\left(2N\right)`$ is spontaneously broken to $`U\left(N\right)\times U\left(N\right)`$. This situation generically allows for the existence of topological solitons.
## 4 Acknowledgements
I would like to thank S. Gukov for stimulating discussions. It is a pleasure to thank E. Witten for his attention to this work. |
no-problem/9912/cond-mat9912278.html | ar5iv | text | # Mechanical Mixing in Nonlinear Nanomechanical Resonators
## Abstract
Nanomechanical resonators, machined out of Silicon-on-Insulator wafers, are operated in the nonlinear regime to investigate higher-order mechanical mixing at radio frequencies, relevant to signal processing and nonlinear dynamics on nanometer scales. Driven by two neighboring frequencies the resonators generate rich power spectra exhibiting a multitude of satellite peaks. This nonlinear response is studied and compared to $`n^{th}`$-order perturbation theory and nonperturbative numerical calculations.
Mechanical devices in combination with modern semiconductor electronics offer great advantages, such as their robustness against electrical shocks and ionization due to radiation. In outstanding work by Rugar and Grütter, the importance of mechanical cantilevers for applications in scanning probe microscopy was demonstrated. Greywall et al. investigated noise evasion techniques for frequency sources and clocks with microscopic mechanical resonators. The main disadvantage of mechanical devices so far is the low speed of operation. This has been overcome with the realization of nanomechanical resonators, which allow operation at frequencies up to 500 MHz .
In the present work we realize such a nanomechanical resonator to study its nonlinear dynamics and its mechanical mixing properties. Mixing is of great importance for signal processing in common electronic circuits. Combining signal mixing with the advantages of mechanical systems, i.e. their insensitivity to the extremes of temperature and radiation, is very promising, especially when considering the high speed of operation currently becoming available. Here we present measurements on such a nonlinear nanomechanical resonator, forced into resonance by application of two different but neighboring driving frequencies. We also present a theoretical model, based on the Duffing equation, which accurately describes the behavior of the mechanical resonator. The model gives insight into the degree of nonlinearity of the resonator and hence into the generation of higher-harmonic mechanical mixing.
The starting materials are commercially available Silicon-on-insulator (SOI) substrates with thicknesses of the Si-layer and the SiO<sub>2</sub> sacrificial layer of 205 nm and 400 nm, respectively (Smart-Cut wafers). The gate leads connecting the resonator to the chip carrier are defined using optical lithography. In a next step the nanomechanical resonator is defined by electron beam lithography. The sample is dry-etched in a reactive-ion etcher (RIE) in order to obtain a mesa structure with clear-cut walls. Finally, we perform a hydrofluoric (HF) wet-etch step in order to remove the sacrificial layer below the resonators and the metallic etch mask. The last step of processing is critical point drying, in order to avoid damage from the surface tension of the solvents. The suspended resonator is shown in a scanning electron beam micrograph in Fig. 1(a): The beam has a length of $`l=3\mu `$m, a width of $`w=200`$ nm, and a height of $`h=250`$ nm and is clamped on both sides. The inset shows a close-up of the suspended beam. The restoring force of this Au/Si-hybrid beam is dominated by the stiffer Si supporting membrane. The selection of the appropriate HF etch allows for attacking only the Si and thus for fine-tuning the beam’s flexibility and in turn the strength of the nonlinear response.
The chip is mounted in a sample holder and a small amount of <sup>4</sup>He exchange gas is added (10 mbar) to ensure thermal coupling. The sample is placed at 4.2 K in a magnetic field, directed in parallel to the sample surface but perpendicular to the beam. When an alternating current is applied to the beam a Lorentz force arises perpendicular to the sample surface and sets the beam into mechanical motion. For characterization we employ a spectrum analyzer (Hewlett Packard 8594A): The output frequency scans the frequency range of interest ($`37`$ MHz), the reflected signal is tracked and then amplified (setup $`\alpha `$ in Fig. 1(b), reflectance measured in mV). The reflected power changes when the resonance condition is met, which can be tuned by the gate voltages $`V_g`$ in a range of several 10 kHz. The mixing properties of the suspended nanoresonators are probed with a different setup comprising two synthesizers (Marconi 2032 and Wavetek 3010) emitting excitations at constant, but different, frequency (setup $`\beta `$ in Fig. 1(b)). Here, the reflectance is measured in dBm for better comparison of the driving amplitudes and the mixing products.
In Fig. 2 the radio-frequency (rf) response of the beam near resonance is depicted for increasing magnetic field strength $`B=0,1,2,\mathrm{\dots },12`$ T. The excitation power of the spectrum analyzer was fixed at $`-50`$ dBm. The mechanical quality factor, $`Q=f/\delta f`$, of the particular resonator under test in the linear regime is $`Q=2330`$. As seen, the profile of the resonance curve changes from a symmetric shape at moderate fields to an asymmetric, sawtooth shape at large field values, characteristic of an oscillator operated in the nonlinear regime.
This behavior can be described by the Duffing equation
$$\ddot{x}(t)+\mu \dot{x}(t)+\omega _0^2x(t)+\alpha x^3(t)=F(t)$$
(1)
with a positive prefactor $`\alpha `$ of the cubic term parametrizing the strength of the nonlinearity. In Eq. (1) $`\mu `$ is the damping coefficient of the mechanical system, $`\omega _0=2\pi f_0`$, where $`f_0`$ is the mechanical eigenfrequency of the beam, and $`x(t)`$ its elongation. In our case the external driving $`F(t)`$ is given by the Lorentz force:
$$F(t)=\frac{lB}{m_{\mathrm{eff}}}I(t)=\frac{lB}{m_{\mathrm{eff}}}I_0\mathrm{cos}(2\pi ft),$$
(2)
where $`l=1.9\times 10^{-6}`$ m is the effective length and $`m_{\mathrm{eff}}=4.3\times 10^{-16}`$ kg is the effective mass of the resonator. $`B`$ is the magnetic field and $`I_0`$ the input current corresponding to the amplitude of the driving power.
Solving Eq. (1) and computing the amplitude of the oscillation as a function of the driving frequency $`f`$ for several excitation strengths reproduces the measured curves shown in Fig. 2. The solutions at large power exhibit a region where three different amplitude values coexist at a single frequency. This behavior leads to a hysteretic response in the measurements at high powers (e.g. $`-50`$ dBm), as shown in the inset of Fig. 2, where we used an external source (Marconi) to sweep the frequencies in both directions. If the frequency is increased (inverted triangles in the inset), the resonance first follows the lower branch, and then suddenly jumps to the upper branch. When sweeping downwards from higher to lower frequencies (triangles), the jump in resonance occurs at a different frequency.
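A brute-force numerical sketch of this hysteresis is given below (the beam parameters are the ones quoted later in the text; the drive level, the number of drive cycles, and the sweep grid are my choices, and the sweep is slow to run):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Beam parameters quoted in the text (SI units).
f0, mu, alpha = 37.26e6, 50265.0, 9.1e28
w0 = 2 * np.pi * f0
l, m_eff, B, I0 = 1.9e-6, 4.3e-16, 12.0, 1.9e-5
F0 = l * B * I0 / m_eff                      # Lorentz-force amplitude, Eq. (2)

def steady_amplitude(f, y0, cycles=2500):
    """Integrate the Duffing equation (1) at drive frequency f and return the
    late-time amplitude plus the final state (used to seed the next step)."""
    w = 2 * np.pi * f
    rhs = lambda t, y: [y[1],
                        F0 * np.cos(w * t) - mu * y[1] - w0**2 * y[0] - alpha * y[0]**3]
    T = cycles / f
    sol = solve_ivp(rhs, (0, T), y0, max_step=1 / (30 * f), rtol=1e-8)
    tail = sol.t > 0.95 * T
    return np.abs(sol.y[0][tail]).max(), sol.y[:, -1]

freqs = np.linspace(37.26e6, 37.33e6, 25)
state, up, down = np.array([0.0, 0.0]), [], []
for f in freqs:                              # upward sweep
    a, state = steady_amplitude(f, state)
    up.append(a)
for f in freqs[::-1]:                        # downward sweep
    a, state = steady_amplitude(f, state)
    down.append(a)
# In the bistable region the two sweeps settle on different branches,
# reproducing the jumps marked by the triangles in the inset of Fig. 2.
```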
Turning now to the unique properties of the nonlinear nanomechanical system: By applying two separate frequency sources as sketched in Fig. 1(b) (setup $`\beta `$) it is possible to demonstrate mechanical mixing, as shown in Fig. 3(a). The two sources are tuned to $`f_1=37.28`$ MHz and $`f_2=37.29`$ MHz with constant offset and equal output power of $`-48`$ dBm, well in the nonlinear regime. Without applying a magnetic field the two input signals are simply reflected (upper left panel). Crossing a critical field of $`B\approx 8`$ T, higher-order harmonics appear. Increasing the field strength further, a multitude of satellite peaks evolves. As seen, the limited bandwidth of this mechanical mixer allows effective signal filtering.
Variation of the offset frequencies leads to the data presented in Fig. 3(b): Excitation at $`-48`$ dBm and $`B=12`$ T with the base frequency fixed at $`f_1=37.290`$ MHz and varying the sampling frequency in 1 kHz steps from $`f_2=37.285`$ MHz to 37.290 MHz yields satellites at the offset frequencies $`f_{1,2}\pm n\mathrm{\Delta }f`$, $`\mathrm{\Delta }f=f_1-f_2`$. The dotted line is taken at zero field for comparison, showing only the reflected power when the beam is not set into mechanical motion. At the smallest offset frequency of 1 kHz the beam reflects the input signal as a broad band of excitations.
We model the nanomechanical system as a Duffing oscillator (1) with a driving force
$$F(t)=F_1\mathrm{cos}(2\pi f_1t)+F_2\mathrm{cos}(2\pi f_2t),$$
(3)
with two different, but neighboring, frequencies $`f_1`$ and $`f_2`$ and amplitudes $`F_i=lBI_i/m_{\mathrm{eff}}`$.
Before presenting our results of a numerical solution of Eq. (1) for the driving forces (3) we perform an analysis based on $`n^{th}`$-order perturbation theory to explain the generation of higher harmonics. Expanding
$$x=x_0+ϵx_1+ϵ^2x_2+\mathrm{\dots },$$
(4)
where we assume that the (small) parameter $`ϵ`$ is of order of the nonlinearity $`\alpha `$, and inserting this expansion into Eq. (1) yields equations for the different orders in $`ϵ`$. In zeroth order we have
$$\ddot{x}_0+\mu \dot{x}_0+\omega _0^2x_0=F_1\mathrm{cos}(2\pi f_1t)+F_2\mathrm{cos}(2\pi f_2t),$$
(5)
to first-order $`\ddot{x}_1+\mu \dot{x}_1+\omega _0^2x_1+\alpha x_0^3=0,`$ and similar equations for higher orders. After inserting the solution of Eq. (5) into the first-order equation and assuming $`f_1f_2f_0=\omega _0/2\pi `$, two types of resonances can be extracted: One resonance is located at $`3f_0`$ which we, however, could not detect experimentally. Resonances of the other type are found at frequencies $`f_i\pm \mathrm{\Delta }f`$. Proceeding along the same lines in second-order perturbation theory we obtain resonances at $`5f_0`$ and $`f_i\pm 2\mathrm{\Delta }f`$. Accordingly, owing to the cubic nonlinear term, $`n^{th}`$-order resonances are generated at $`(2n+1)f_0`$ and $`f_i\pm n\mathrm{\Delta }f`$. While the $`(2n+1)f_0`$-resonances could not be observed, the whole satellite family $`f_i\pm n\mathrm{\Delta }f`$ is detected in the experimental power spectra Fig. 3(a,b).
The perturbative approach yields the correct peak positions and, for $`B<4`$ T, also the peak amplitudes. However, in the hysteretic, strongly nonlinear regime a nonperturbative numerical calculation proves necessary to explain quantitatively the measured peak heights. To this end we determined the parameters entering into Eq. (1) in the following way: The damping is estimated from the quality factor $`Q=2330`$ which gives $`\mu =50265`$ Hz. The eigenfrequency is $`f_0=37.26`$ MHz as seen from Fig. 2 in the linear regime. The nonlinearity $`\alpha `$ is estimated from the shift
$$\delta f(B)=f_{\mathrm{max}}(B)f_0=\frac{3\alpha [\mathrm{\Lambda }_0(B)]^2}{32\pi ^2f_0}$$
(6)
in frequency $`f_{\mathrm{max}}`$ at maximum amplitude in Fig. 2. In zero order the displacement of the beam is given by $`\mathrm{\Lambda }_0=lI_0B/(4\pi f_0\mu m_{\mathrm{eff}})`$. With $`I_0=1.9\times 10^{-5}`$ A, relation (6) yields a value of $`\alpha =9.1\times 10^{28}(\mathrm{ms})^{-2}`$.
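As a quick consistency check of these numbers (the frequency shift $`\delta f`$ is read off Fig. 2 and is not quoted explicitly in the text, so the 42 kHz used here is an assumption):

```python
import numpy as np

l, m_eff = 1.9e-6, 4.3e-16                 # m, kg
f0, mu = 37.26e6, 50265.0                  # Hz, Hz
B, I0 = 12.0, 1.9e-5                       # T, A

Lambda0 = l * I0 * B / (4 * np.pi * f0 * mu * m_eff)   # zero-order amplitude
delta_f = 42e3                             # assumed shift of f_max at B = 12 T
alpha = 32 * np.pi**2 * f0 * delta_f / (3 * Lambda0**2)   # invert Eq. (6)
print(f"Lambda0 = {Lambda0:.2e} m, alpha = {alpha:.2e} m^-2 s^-2")
# -> Lambda0 ~ 4e-8 m and alpha ~ 9e28 m^-2 s^-2, the value quoted above.
```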
We first computed $`x(t)`$ by numerical integration of the Duffing equation with driving (3) and $`F_1=F_2=lBI_0/m_{\mathrm{eff}}`$, $`I_0=2.9\times 10^{-5}`$ A. We then calculated the power spectrum from the Fourier transform $`\widehat{x}(\omega )`$ of $`x(t)`$ for large times (beyond the transient regime). For a direct comparison with the measured power $`P`$ in Fig. 3 we employ $`P\propto RI_{\mathrm{imp}}^2`$. Here $`R`$ is the resistance of the electromechanical circuit and $`I_{\mathrm{imp}}=[4\pi f_0\mu m_{\mathrm{eff}}/(lB)]\widehat{x}(\omega )`$ in close analogy to the zero-order relation between displacement $`\mathrm{\Lambda }_0`$ and $`I_0`$.
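A corresponding two-tone sketch (same parameters as in the earlier sweep sketch with $`I_0=2.9\times 10^{-5}`$ A; the record length, the sampling rate, and the Hann window are my choices, and the integration is again slow):

```python
import numpy as np
from scipy.integrate import solve_ivp

f0, mu, alpha = 37.26e6, 50265.0, 9.1e28
w0 = 2 * np.pi * f0
l, m_eff, B, I0 = 1.9e-6, 4.3e-16, 12.0, 2.9e-5
F1 = F2 = l * B * I0 / m_eff
f1, f2 = 37.28e6, 37.29e6

def rhs(t, y):
    drive = F1 * np.cos(2 * np.pi * f1 * t) + F2 * np.cos(2 * np.pi * f2 * t)
    return [y[1], drive - mu * y[1] - w0**2 * y[0] - alpha * y[0]**3]

t_transient, t_end = 3e-4, 1.3e-3          # transient dies out over several 1/mu
fs = 16 * f2                               # sampling rate for the FFT
t_eval = np.arange(t_transient, t_end, 1 / fs)
sol = solve_ivp(rhs, (0, t_end), [0.0, 0.0], t_eval=t_eval,
                max_step=1 / (30 * f2), rtol=1e-8)
x = sol.y[0]

# Power spectrum of the displacement; satellites appear at f_{1,2} +/- n*(f1 - f2).
X = np.fft.rfft(x * np.hanning(len(x)))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
# Emitted power is proportional to R * I_imp^2 with I_imp ~ |X| (see text).
```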
The numerically obtained power spectra are displayed in Fig. 4: (a) shows the emitted power for the same parameters as in Fig. 3(a), but with $`B=4,8,9,10,11`$, and $`12`$ T. Corresponding curves are shown in Fig. 4(b) for fixed $`B`$ and various $`\mathrm{\Delta }f`$ for the same set of experimental parameters as in Fig. 3(b). The positions of the measured satellite peaks, $`f_i\pm n\mathrm{\Delta }f`$, and their amplitudes are in good agreement with the numerical simulations for the entire parameter range shown. Even small modulations in the peak heights to the left of the two central peaks in Fig. 3(b) seem to be reproduced by the calculations in Fig. 4(b). (Note that the height of the two central peaks in Fig. 3 cannot be reproduced by the simulations, since they are dominated by the reflected input signal.)
The numerical results in Fig. 4(a) show clearly the evolution of an increasing number of peaks with growing magnetic field, i.e. increasing driving amplitude. As in the experiment, the spectra exhibit an asymmetry in number and height of the satellite peaks which switches from lower to higher frequencies by increasing the magnetic field from 8 T to 12 T. This behavior can be understood from Eq. (6) predicting a shift $`\delta f`$ in resonance frequency with increasing magnetic field. This shift is reflected in the crossover in Figs. 3(a) and 4(a). For $`B=8`$ T the amplitudes of the satellite peaks are larger on the left than on the right side of the two central peaks. As the field is increased the frequency shift drives the right-hand-side satellites into resonance increasing their heights.
The power spectra in Fig. 3(a) and 4(a) are rather insensitive to changes in magnetic field for $`B<8`$ T compared to the rapid evolution of the satellite pattern for 8 T $`<B<12`$ T. Our analysis shows that this regime corresponds to scanning through the hysteretic part (inset Fig. 2) in the amplitude/frequency (or amplitude/$`B`$-field) diagram, involving abrupt changes in the amplitudes. The resonator studied is strongly nonlinear but not governed by chaotic dynamics. Similar setups should allow for entering into the truly chaotic regime.
In summary we have shown how to employ the nonlinear response of a strongly driven nanomechanical resonator as a mechanical mixer in the radio-frequency regime. This opens up a wide range of applications, especially for signal processing. The experimental results are in very good agreement with numerical calculations based on a generalized Duffing equation, a prototype of a nonlinear oscillator. Hence these mechanical resonators allow for studying nonlinear, possibly chaotic dynamics on the nanometer scale.
We thank J.P. Kotthaus for helpful discussions. We acknowledge financial support by the Deutsche Forschungsgemeinschaft (DFG). |