Recently, large pretrained language models have demonstrated strong language
understanding capabilities. This is particularly reflected in their zero-shot
and in-context learning abilities on downstream tasks through prompting. To
assess their impact on spoken language understanding (SLU), we evaluate several
such models, including ChatGPT and OPT of different sizes, on multiple
benchmarks. We verify the emergent ability unique to the largest models: given
oracle transcripts, they can reach intent classification accuracy close to that
of supervised models with zero or few shots in various languages. By contrast,
the results for smaller models that fit on a single GPU fall far behind. We note that
the error cases often arise from the annotation scheme of the dataset;
responses from ChatGPT are still reasonable. We show, however, that the model
is worse at slot filling, and its performance is sensitive to ASR errors,
suggesting serious challenges for the application of those textual models on
SLU.
|
A cooperative interval game is a cooperative game in which every coalition gets
assigned some closed real interval. This models uncertainty about how much the
members of a coalition get for cooperating together.
In this paper we study convexity, core and the Shapley value of games with
interval uncertainty.
Our motivation to do so is twofold. First, we want to capture which
properties are preserved when we generalize concepts from classical cooperative
game theory to interval games. Second, since these generalizations can be done
in different ways, mainly with regard to the resulting level of uncertainty, we
try to compare them and show their relation to each other.
|
We propose a QCD motivated theoretical approach to high energy soft
interactions, which successfully describes the experimental data on total,
elastic and diffraction cross sections. We predict that the survival
probability for the diffractive Higgs production at the LHC energy is small
(less than 1%), and investigate the influence of suggested corrections, e.g.
threshold effects and semi-enhanced diagrams, on this value.
|
The main goal of this paper is to model the epidemic and flatten the
infection curve of social networks. Flattening the infection curve implies
slowing down the spread of the disease and reducing the infection rate via
social distancing, isolation (quarantine) and vaccination. The
non-pharmaceutical methods are a much simpler and more efficient way to control
the spread of the epidemic and the infection rate. By specifying a target group
with high centrality for isolation and quarantine, one can reach a much flatter
infection curve (related to Corona, for example) without adding extra costs to
health services. The aim of this research is, first, to model the epidemic and,
then, to give strategies and structural algorithms for targeted vaccination or
targeted non-pharmaceutical methods for reducing the peak of the viral disease
and flattening the infection curve. These methods are more efficient for
non-pharmaceutical interventions, as finding the target quarantine group
flattens the infection curve much more easily. For this purpose, a small number
of particular nodes with high centrality are isolated and the infection curve
is analyzed. Our research shows meaningful results for flattening the infection
curve by isolating only a small number of targeted nodes in the social network.
The proposed methods are independent of the type of disease and are effective
for any viral disease, e.g., Covid-19.
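As a toy sketch of the idea of isolating a few high-centrality nodes and analyzing the resulting infection curve (this is not the paper's algorithm: the graph model, the use of degree as the centrality measure, and the SIR parameters below are all illustrative assumptions):

```python
import random

def toy_preferential_graph(n=200, m=2, seed=0):
    """Toy preferential-attachment network as an adjacency dict (stdlib only)."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    endpoints = [0, 1]                  # nodes repeated once per incident edge
    for v in range(2, n):
        chosen = {rng.choice(endpoints) for _ in range(m)}
        adj[v] = set(chosen)
        for u in chosen:
            adj[u].add(v)
            endpoints += [u, v]
    return adj

def sir_curve(adj, isolated, p_inf=0.1, t_max=60, seed=1):
    """Discrete-time SIR epidemic; isolated nodes are removed from the start."""
    rng = random.Random(seed)
    status = {v: "S" for v in adj}
    for v in isolated:
        status[v] = "R"                 # quarantined: can neither catch nor pass
    patient_zero = next(v for v in adj if status[v] == "S")
    status[patient_zero] = "I"
    curve = []
    for _ in range(t_max):
        infected = [v for v in adj if status[v] == "I"]
        curve.append(len(infected))     # infection curve: cases per time step
        for v in infected:
            for u in adj[v]:
                if status[u] == "S" and rng.random() < p_inf:
                    status[u] = "I"
            status[v] = "R"             # recover after one step
    return curve

adj = toy_preferential_graph()
hubs = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:10]
peak_targeted = max(sir_curve(adj, isolated=hubs))
peak_baseline = max(sir_curve(adj, isolated=[]))
```

Comparing `peak_targeted` against `peak_baseline` is the kind of flattening analysis the abstract describes, with degree centrality standing in for whichever centrality measure is actually used.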
|
Neural sequence-to-sequence models have provided a viable new approach for
abstractive text summarization (meaning they are not restricted to simply
selecting and rearranging passages from the original text). However, these
models have two shortcomings: they are liable to reproduce factual details
inaccurately, and they tend to repeat themselves. In this work we propose a
novel architecture that augments the standard sequence-to-sequence attentional
model in two orthogonal ways. First, we use a hybrid pointer-generator network
that can copy words from the source text via pointing, which aids accurate
reproduction of information, while retaining the ability to produce novel words
through the generator. Second, we use coverage to keep track of what has been
summarized, which discourages repetition. We apply our model to the CNN / Daily
Mail summarization task, outperforming the current abstractive state-of-the-art
by at least 2 ROUGE points.
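The core of the pointer-generator idea is that the output distribution mixes the generator's vocabulary distribution with the attention-weighted copy distribution. A minimal numerical sketch of that mixing step (toy dimensions and names, not the full model):

```python
import numpy as np

def final_distribution(p_gen, p_vocab, attention, src_ids):
    """Mix generator and copy distributions, pointer-generator style.

    p_gen:     scalar in [0, 1], probability of generating from the vocabulary
    p_vocab:   (vocab_size,) distribution over the fixed vocabulary
    attention: (src_len,) attention weights over source tokens
    src_ids:   (src_len,) vocabulary ids of the source tokens
    """
    p_final = p_gen * p_vocab
    # Scatter-add the copy probabilities onto the source-token ids
    # (np.add.at accumulates correctly when an id repeats in the source).
    np.add.at(p_final, src_ids, (1.0 - p_gen) * attention)
    return p_final

# Toy example: 5-word vocabulary, 3-token source sentence.
p_vocab = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
attention = np.array([0.5, 0.3, 0.2])
src_ids = np.array([4, 1, 4])   # source tokens map to vocab ids 4, 1, 4
p = final_distribution(0.7, p_vocab, attention, src_ids)
```

The result is still a valid distribution (it sums to 1), and source tokens that receive attention gain probability mass even if the generator assigns them little.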
|
We discuss the role of light absorption by pairs of atoms (radiative
collisions) in the context of a model for an atom laser. The model is applied
to the case of VSCPT cooling of metastable triplet helium. We show that,
because of radiative collisions, for positive detuning of the driving light
fields from an atomic resonance the operating conditions for the atom laser can
only be marginally met. It is shown that the system only behaves as an atom
laser if a very efficient sub-Doppler precooling mechanism is operative. In the
case of negative frequency detuning the requirements on this sub-Doppler
mechanism are less restricting, provided one avoids molecular resonances.
|
In a previous paper, we developed a numerical method to obtain a static black
hole localized on a 3-brane in the Randall-Sundrum infinite braneworld, and
presented examples of numerical solutions that describe small localized black
holes. In this paper we quantitatively analyze the behavior of the numerically
obtained black hole solutions, focusing on thermodynamic quantities. The
thermodynamic relations show that the localized black hole deviates smoothly
from a five-dimensional Schwarzschild black hole, which is a solution in the
limit of a small horizon radius. We compare the thermodynamic behavior of these
solutions with that of the exact solution on the 2-brane in the 4D braneworld.
We find similarities between them.
|
Thermo-optical effects cause a bifocusing of incoming beams in optical media,
due to the birefringence created by a thermal lens that can resolve the
incoming beams into two-component signals of different polarizations. We
propose a non-perturbative theoretical description of the process of formation
of double-pulse solitons in Kerr optical media with a thermally-induced
birefringence, based on solving simultaneously the heat equation and the
propagation equation for a beam in a one-dimensional medium with uniform heat
flux load. By means of a non-isospectral Inverse Scattering Transform assuming
an initial solution with a pulse shape, a one-soliton solution to the wave
equation is obtained that represents a double-pulse beam whose characteristic
properties depend strongly on the profile of the heat spatial distribution.
|
In order to reveal possible mass shifts of the vector mesons in a dense
strongly interacting medium presumably created in high energy heavy ion
collisions it is necessary to know their free masses reliably. The rho(770)
mass quoted in the last two editions of the Review of Particle Physics is
significantly larger than the values quoted in previous editions. The new value
is mostly influenced by the results of recent experiments CMD-2 and SND at the
VEPP-2M e+e- collider at Novosibirsk. We show that the values of the mass and
width of the rho(770) meson measured in the e+e- -> pi+pi- annihilation depend
crucially on the parametrization used for the pion form factor. We propose a
parametrization of the rho(770) contribution to the pion form factor based on
the running mass calculated from a single-subtracted dispersion relation and
compare it with the parametrization based on the formula of Gounaris and
Sakurai used recently by the CMD-2 collaboration. We show that our
parametrization gives equally good or better fits when applied to the data of
the CMD-2, SND, and KLOE collaborations, but yields much smaller values of the
rho(770) mass, consistent with the photoproduction and hadronic reactions
results. Our fit to the KLOE data becomes exceptionally good (confidence
level of 99.88%) if an energy shift of about 2 MeV in the rho-omega region is
allowed.
|
We construct new multi-field realisations of the $N=2$ super-$W_3$ algebra,
which are important for building super-$W_3$ string theories. We derive the
structure of the ghost vacuum for such theories, and use the result to
calculate the intercepts. These results determine the conditions for physical
states in the super-$W_3$ string theory.
|
Life on earth depends on healthy oceans, which supply a large percentage of
the planet's oxygen, food, and energy. However, the oceans are under threat
from climate change, which is devastating the marine ecosystem and the economic
and social systems that depend on it. The Internet-of-underwater-things
(IoUTs), a global interconnection of underwater objects, enables
round-the-clock monitoring of the oceans. It provides high-resolution data for
training machine learning (ML) algorithms for rapidly evaluating potential
climate change solutions and speeding up decision-making. The sensors in
conventional IoUTs are battery-powered, which limits their lifetime and makes
them environmental hazards when they die. In this paper, we propose a
sustainable scheme to improve the throughput and lifetime of underwater
networks, enabling them to potentially operate indefinitely. The scheme is
based on simultaneous wireless information and power transfer (SWIPT) from an
autonomous underwater vehicle (AUV) used for data collection. We model the
problem of jointly maximising throughput and harvested power as a Markov
Decision Process (MDP), and develop a model-free reinforcement learning (RL)
algorithm as a solution. The model's reward function incentivises the AUV to
find optimal trajectories that maximise throughput and power transfer to the
underwater nodes while minimising energy consumption. To the best of our
knowledge, this is the first attempt at using RL to ensure sustainable
underwater networks via SWIPT. The scheme is implemented in an open 3D RL
environment specifically developed in MATLAB for this study. The performance
results show up to a 207% improvement in energy efficiency compared to that of a
random trajectory scheme used as a baseline model.
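The abstract describes a reward that jointly incentivises throughput and power transfer while penalising the AUV's energy use. A minimal sketch of a reward of that shape follows; the state fields and weights are illustrative assumptions, not the paper's actual formulation:

```python
from dataclasses import dataclass

@dataclass
class Step:
    throughput_bits: float    # data collected from underwater nodes this step
    harvested_joules: float   # power transferred to the nodes via SWIPT
    auv_energy_joules: float  # propulsion energy spent by the AUV

# Illustrative trade-off weights (assumed, not taken from the paper).
W_T, W_H, W_E = 1.0, 0.5, 0.2

def reward(step: Step) -> float:
    """Reward throughput and harvested power, penalise AUV energy consumption."""
    return (W_T * step.throughput_bits
            + W_H * step.harvested_joules
            - W_E * step.auv_energy_joules)

r = reward(Step(throughput_bits=10.0, harvested_joules=4.0,
                auv_energy_joules=5.0))
```

A model-free RL algorithm would then learn trajectories that maximise the discounted sum of such rewards over an episode.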
|
Spin is one fundamental property of microscopic particles. A lot of
theoretical work postulated the possible coupling between spin and gravitation,
which could result in the violation of equivalence principle. In a recent joint
mass-and-energy test of weak equivalence principle (WEP) with a 10-meter
$^{85}$Rb-$^{87}$Rb dual-species atom interferometer, the E\"{o}tv\"{o}s
parameters of four $^{85}$Rb-$^{87}$Rb combinations with specific atomic spin
states were measured to the
$10^{-10}$-level. These experimental results are used to constrain the
postulated spin-gravity coupling effects in a robust way. The bounds on the
spin-independent and spin-dependent anomalous passive gravitational mass
tensors are consistently set to the $10^{-10}$-level, which improve existing
bounds by three orders of magnitude.
|
Neutron stars (NS) and black holes (BH) are sources of gravitational waves
(GW) and the investigation of young isolated radio-quiet NS can in principle
lead to constraints of the equation of state (EoS). The GW signal of merging
NSs critically depends on the EoS. However, unlike radio pulsars, young
isolated radio-quiet neutron stars are hard to detect, and only seven of them
are known so far. Furthermore, for GW projects it is necessary to constrain the
regions in the sky where, and in what numbers, sources of GW can be expected.
We suggest strategies for the search for young isolated radio-quiet NSs. One of the
strategies is to look for radioactivities which are formed during a supernova
(SN) event and are detectable due to their decay. Radioactivities with half
lives of ~1 Myr can indicate such an event while other remnants like nebulae
only remain observable for a few kyrs. Here we give a brief overview of our
strategies and discuss their advantages and disadvantages.
|
The Magellanic Stream (MS) - an enormous ribbon of gas spanning $140^\circ$
of the southern sky trailing the Magellanic Clouds - has been exquisitely
mapped in the five decades since its discovery. However, despite concerted
efforts, no stellar counterpart to the MS has been conclusively identified.
This stellar stream would reveal the distance and 6D kinematics of the MS,
constraining its formation and the past orbital history of the Clouds. We have
been conducting a spectroscopic survey of the most distant and luminous red
giant stars in the Galactic outskirts. From this dataset, we have discovered a
prominent population of 13 stars matching the extreme angular momentum of the
Clouds, spanning up to $100^\circ$ along the MS at distances of $60-120$ kpc.
Furthermore, these kinematically-selected stars lie along a
[$\alpha$/Fe]-deficient track in chemical space from $-2.5 < \mathrm{[Fe/H]} <
-0.5$, consistent with their formation in the Clouds themselves. We identify
these stars as high-confidence members of the Magellanic Stellar Stream. Half
of these stars are metal-rich and closely follow the gaseous MS, whereas the
other half are more scattered and metal-poor. We argue that the metal-rich
stream is the recently-formed tidal counterpart to the MS, and speculate that
the metal-poor population was thrown out of the SMC outskirts during an earlier
interaction between the Clouds. The Magellanic Stellar Stream provides a strong
set of constraints - distances, 6D kinematics, and birth locations - that will
guide future simulations towards unveiling the detailed history of the Clouds.
|
The non-elementary integrals $\mbox{Si}_{\beta,\alpha}=\int [\sin{(\lambda
x^\beta)}/(\lambda x^\alpha)] dx,\beta\ge1,\alpha>\beta+1$ and
$\mbox{Ci}_{\beta,\alpha}=\int [\cos{(\lambda x^\beta)}/(\lambda x^\alpha)] dx,
\beta\ge1, \alpha>2\beta+1$, where $\{\beta,\alpha\}\in\mathbb{R}$, are
evaluated in terms of the hypergeometric function $_{2}F_3$. On the other hand,
the exponential integral $\mbox{Ei}_{\beta,\alpha}=\int (e^{\lambda
x^\beta}/x^\alpha) dx, \beta\ge1, \alpha>\beta+1$ is expressed in terms of
$_{2}F_2$. The method used to evaluate these integrals consists of expanding
the integrand as a Taylor series and integrating the series term by term.
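As a sketch of the method (using the first integral as an example, under the stated conditions on $\alpha$ and $\beta$), expanding the sine in its Taylor series and integrating term by term gives

$$\mbox{Si}_{\beta,\alpha}=\int \frac{\sin{(\lambda x^\beta)}}{\lambda x^\alpha}\, dx
=\int \sum_{n=0}^{\infty}\frac{(-1)^n \lambda^{2n}\, x^{\beta(2n+1)-\alpha}}{(2n+1)!}\, dx
=\sum_{n=0}^{\infty}\frac{(-1)^n \lambda^{2n}\, x^{\beta(2n+1)-\alpha+1}}{(2n+1)!\,\left(\beta(2n+1)-\alpha+1\right)}+C,$$

and the resulting series is then resummed in terms of the hypergeometric function $_{2}F_3$.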
|
We consider the high energy limit of the colour ordered one-loop five-gluon
amplitude in the planar maximally supersymmetric N=4 Yang-Mills theory in the
multi-Regge kinematics where all of the gluons are strongly ordered in
rapidity. We apply the calculation of the one-loop pentagon in D=6-2 eps
performed in a companion paper to compute the one-loop five-gluon amplitude
through to O(eps^2). Using the factorisation properties of the amplitude in the
high-energy limit, we extract the one-loop gluon-production vertex to the same
accuracy, and, by exploiting the iterative structure of the gluon-production
vertex implied by the BDS ansatz, we perform the first computation of the
two-loop gluon-production vertex up to and including finite terms.
|
We study the $\rho$-meson spectral function in hot nuclear matter by taking
into account the isospin-symmetric pion and the nucleon loops within the
quantum hadrodynamics (QHD) model as well as using an effective chiral SU(3)
model. The spectral function of the $\rho$ meson is studied in the mean field
approximation (MFA) as well as in the relativistic Hartree (RHA) approximation.
The inclusion of the nucleon loop considerably changes the $\rho$-meson
spectral function. Due to a larger mass drop of $ \rho $ meson in the RHA, it
is seen that the spectral function shifts towards the low invariant mass
region, whereas in the MFA the spectral function is seen to be slightly shifted
towards the high mass region. Moreover, while the spectral function is observed
to be sharper with the nucleon-antinucleon polarization in RHA, the spectral
function is seen to be broader in the MFA.
|
We study the electromagnetic structure of the pion in terms of a quantum
chromodynamics (QCD) model in the Breit frame. We calculate observables such
as the electromagnetic form factor. To obtain a covariant calculation, one
needs a priori the valence term of the electromagnetic form factor. We use the
usual formalisms of quantum field theory (QFT) and light-front quantum field
theory (LFQFT) in order to test the properties of the form factor in
nonperturbative QCD. In this particular case, the form factor can be obtained
using the pion light-front (LF) wave function including the self-energy from
lattice QCD. Specifically, these calculations were performed in the LF
formalism. We consider a quark-antiquark vertex model with a quark
self-energy. We can also use other models to compare the pion electromagnetic
form factor with different wave functions and to observe the degree of
agreement between them.
|
We discuss the physical effects causing a modification of resonance masses,
widths and even shapes in a dilute hadronic gas at late stages of heavy ion
collisions. We quantify the conditions at which resonances are produced at
RHIC, and find that it happens at $T\approx 120$ MeV. Although in the pp case
the ``kinematic'' effects, like thermal weighting of the states, are
sufficient, in AA we see a clear effect of dynamical interaction with matter,
both due to a variety of s-channel resonances and due to t-channel scalar
exchanges. The particular quantity we focus on mostly is the $\rho$ meson
mass, for which these dynamical effects lead to about a -50 MeV shift, on top
of about -20 MeV from a thermal effect: both agree well with preliminary data
from the STAR experiment at RHIC. We also predict a complete change of shape
of the $f_0(600)$ resonance, even
by thermal effects alone.
|
Here we design, construct, and characterize a compact
Raman-spectroscopy-based sensor that measures the concentration of a
water-methanol mixture. The sensor measures the concentration with an accuracy
of 0.5% and a precision of 0.2% with a 1 second measuring time. With longer
measurement times, the precision reaches as low as 0.006%. We characterize the
long-term stability of the instrument over an 11-day period of constant
measurement, and confirm that systematic drifts are on the level of 0.02%. We
describe methods to improve the sensor performance, providing a path towards
accurate, precise, and reliable concentration measurements in harsh
environments. This sensor should be adaptable to other water-alcohol mixtures,
or other small-molecule liquid mixtures.
|
For the Malvenuto-Reutenauer Hopf algebra of permutations, we provide a
cancellation-free antipode formula for any permutation of the form
$ab1\cdots(b-1)(b+1)\cdots(a-1)(a+1)\cdots n$, which starts with the decreasing
sequence $ab$ and ends with the increasing sequence
$1\cdots(b-1)(b+1)\cdots(a-1)(a+1)\cdots n$, where $1\leq b<a\leq n$. As a
consequence, we confirm two conjectures posed by Carolina Benedetti and Bruce
E. Sagan.
|
We present a second set of results from a wide-field photometric survey of
the environs of Milky Way globular clusters. The clusters studied are NGC 1261,
NGC 1851 and NGC 5824: all have data from DECam on the Blanco 4m telescope. NGC
5824 also has data from the Magellan Clay telescope with MegaCam. We confirm
the existence of a large diffuse stellar envelope surrounding NGC 1851 of size
at least 240 pc in radius. The radial density profile of the envelope follows a
power-law decline with index $\gamma = -1.5 \pm 0.2$ and the projected shape is
slightly elliptical. For NGC 5824 there is no strong detection of a diffuse
stellar envelope, but we find the cluster is remarkably extended and is similar
in size (at least 230 pc in radius) to the envelope of NGC 1851. A stellar
envelope is also revealed around NGC 1261. However, it is notably smaller in
size with radius $\sim$105 pc. The radial density profile of the envelope is
also much steeper with $\gamma = -3.8 \pm 0.2$. We discuss the possible nature
of the diffuse stellar envelopes, but are unable to draw definitive conclusions
based on the current data. NGC 1851, and potentially NGC 5824, could be
stripped dwarf galaxy nuclei, akin to the cases of $\omega$ Cen, M54 and M2. On
the other hand, the different characteristics of the NGC 1261 envelope suggest
that it may be the product of dynamical evolution of the cluster.
|
We study the impact of the Landau-Pomeranchuk-Migdal (LPM) effect on the
dynamics of parton interactions in proton-proton collisions at the Large Hadron
Collider energies. For our investigation we utilize a microscopic kinetic
theory based on the Boltzmann equation. The calculation traces the space-time
evolution of the cascading partons interacting via semihard pQCD scatterings
and fragmentations. We focus on the impact of the LPM effect on the production
of charm quarks, since their production is exclusively governed by processes
well described in our kinetic theory. The LPM effect is found to become more
prominent as the collision energy rises and at central rapidities and may
significantly affect the model's predicted charm distributions at low momenta.
|
We propose a straightforward extension of symbolic transfer entropy to enable
the investigation of delayed directional relationships between coupled
dynamical systems from time series. Analyzing time series from chaotic model
systems, we demonstrate the applicability and limitations of our approach. Our
findings obtained from applying our method to infer delayed directed
interactions in the human epileptic brain underline the importance of our
approach for improving the construction of functional network structures from
data.
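The abstract gives no code, but a minimal plug-in estimator of delayed symbolic transfer entropy (ordinal-pattern symbols of order $m$, counts over the triple $(y_{t+\delta}, y_t, x_t)$) might look like the following sketch; the embedding choices and the absence of bias correction are simplifying assumptions:

```python
import numpy as np
from collections import Counter
from math import log2

def symbolize(x, m=3):
    """Map each length-m window to its ordinal (rank-order) pattern."""
    x = np.asarray(x)
    return [tuple(np.argsort(x[i:i + m])) for i in range(len(x) - m + 1)]

def delayed_ste(x, y, delay=1, m=3):
    """Symbolic transfer entropy x -> y at a given delay (plug-in estimate)."""
    sx, sy = symbolize(x, m), symbolize(y, m)
    n = min(len(sx), len(sy)) - delay
    triples = Counter((sy[t + delay], sy[t], sx[t]) for t in range(n))
    pairs_yy = Counter((sy[t + delay], sy[t]) for t in range(n))
    pairs_yx = Counter((sy[t], sx[t]) for t in range(n))
    singles_y = Counter(sy[t] for t in range(n))
    te = 0.0
    for (y2, y1, x1), c in triples.items():
        p_joint = c / n                              # p(y_{t+d}, y_t, x_t)
        p_full = c / pairs_yx[(y1, x1)]              # p(y_{t+d} | y_t, x_t)
        p_base = pairs_yy[(y2, y1)] / singles_y[y1]  # p(y_{t+d} | y_t)
        te += p_joint * log2(p_full / p_base)
    return te

# Toy example: y follows x with a lag of 2 samples plus noise.
rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = np.concatenate(([0.0, 0.0], x[:-2])) + 0.1 * rng.normal(size=1000)
te = delayed_ste(x, y, delay=2)
```

Scanning `delay` over a range and locating the maximum of the estimate is one simple way to probe for a delayed directional coupling.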
|
We use the model for the migration of planets introduced in Del Popolo,
Yesilyurt & Ercan (2003) to calculate the observed mass and semimajor axis
distribution of extra-solar planets. The assumption that the surface density in
planetesimals is proportional to that of gas is relaxed, and in order to
describe disc evolution we use a method which, using a series of simplifying
assumptions, is able to simultaneously follow the evolution of gas and solid
particles for up to $10^7 {\rm yr}$. The distribution of planetesimals obtained
after $10^7 {\rm yr}$ is used to study the migration rate of a giant planet
through the model of this paper. The disk and migration models are used to
calculate the distribution of planets as function of mass and semimajor axis.
The results show that the model can give a reasonable prediction of planets'
semi-major axes and mass distribution. In particular there is a pile-up of
planets at $a \simeq 0.05$ AU, a minimum near 0.3 AU, indicating a paucity of
planets at that distance, and a rise for semi-major axes larger than 0.3 AU,
out to 3 AU. The semi-major axis distribution shows that the more massive
planets (typically, masses larger than $4 M_{\rm J}$) form preferentially in
the outer regions and do not migrate much. Intermediate-mass objects migrate
more easily whatever the distance at which they form, and the lighter planets
(masses from sub-Saturnian to Jovian) migrate easily.
|
Abell~1142 is a low-mass galaxy cluster at low redshift containing two
comparable Brightest Cluster Galaxies (BCGs), resembling a scaled-down version of
the Coma Cluster. Our Chandra analysis reveals an X-ray emission peak, roughly
100 kpc away from either BCG, which we identify as the cluster center. The
emission center manifests itself as a second beta-model surface brightness
component distinct from that of the cluster on larger scales. The center is
also substantially cooler and more metal rich than the surrounding intracluster
medium (ICM), which makes Abell 1142 appear to be a cool core cluster. The
redshift distribution of its member galaxies indicates that Abell 1142 may
contain two subclusters with each containing one BCG. The BCGs are merging at a
relative velocity of ~1200 km/s. This ongoing merger may have shock-heated the
ICM from ~ 2 keV to above 3 keV, which would explain the anomalous L_X--T_X
scaling relation for this system. This merger may have displaced the
metal-enriched "cool core" of either of the subclusters from the BCG. The
southern BCG consists of three individual galaxies residing within a radius of
5 kpc in projection. These galaxies should rapidly sink into the subcluster
center due to the dynamical friction of a cuspy cold dark matter halo.
|
Atoms at liquid metal surfaces are known to form layers parallel to the
surface. We analyze the two-dimensional arrangement of atoms within such layers
at the surface of liquid sodium, using ab initio molecular dynamics (MD)
simulations based on density functional theory. Nearest neighbor distributions
at the surface indicate mostly 5-fold coordination, though there are noticeable
fractions of 4-fold and 6-fold coordinated atoms. Bond angle distributions
suggest a movement toward the angles corresponding to a six-fold coordinated
hexagonal arrangement of the atoms as the temperature is decreased towards the
solidification point. We rationalize these results with a distorted hexagonal
order at the surface, showing a mixture of regions of five and six-fold
coordination. The liquid surface results are compared with classical MD
simulations of the liquid surface, with similar effects appearing, and with ab
initio MD simulations for a model solid-liquid interface, where a pronounced
shift towards hexagonal ordering is observed as the temperature is lowered.
|
It is a well known fact that in $\mathbb{R}^n$ a subset of minimal perimeter
$L$ among all sets of a given volume is also a set of maximal volume among all
sets of the same perimeter $L$. This is called the reciprocity principle for
isoperimetric problems. The aim of this note is to prove this relation in the
case where the class of admissible sets is restricted to the subsets of some
subregion $G\subsetneq\mathbb{R}^n$. Furthermore, we give a characterization of
those (unbounded) convex subsets of $\mathbb{R}^2$ in which the isoperimetric
problem has a solution. The perimeter that we consider is the one relative to
$\mathbb{R}^n$.
|
In this paper, we introduce the concept of a super-pseudoconvex domain. We
prove that the solution of the Fefferman equation on a smoothly bounded
strictly pseudoconvex domain $D$ in $\mathbb{C}^n$ is plurisubharmonic if and
only if $D$ is super-pseudoconvex. As an application, we give a lower bound
estimate for the bottom of the spectrum of the Laplace-Beltrami operator when
$D$ is super-pseudoconvex, by using the result of Li and Wang \cite{LiWang}.
|
For many complex diseases, prognosis is of essential importance. It has been
shown that, beyond the main effects of genetic (G) and environmental (E) risk
factors, the gene-environment (G$\times$E) interactions also play a critical
role. In practice, the prognosis outcome data can be contaminated, and most of
the existing methods are not robust to data contamination. In the literature,
it has been shown that even a single contaminated observation can lead to
severely biased model estimation. In this study, we describe prognosis using an
accelerated failure time (AFT) model. An exponential squared loss is proposed
to accommodate possible data contamination. A penalization approach is adopted
for regularized estimation and marker selection. The proposed method is
realized using an effective coordinate descent (CD) and minorization
maximization (MM) algorithm. Simulation shows that without contamination, the
proposed method has performance comparable to or better than the unrobust
alternative. With contamination, it outperforms the unrobust alternative and,
under certain scenarios, can be superior to the robust method based on quantile
regression. The proposed method is applied to the analysis of TCGA (The Cancer
Genome Atlas) lung cancer data. It identifies interactions different from those
using the alternatives. The identified markers have important implications and
satisfactory stability.
|
We derive quantitative estimates of propagation of chaos for large
interacting particle systems in terms of the relative entropy between the
joint law of the particles and the tensorized law of the mean-field PDE. We
resolve this problem for the first time for the viscous vortex model
approximating the 2D Navier-Stokes equation in the vorticity formulation on
the whole space. As key tools, we obtain Li-Yau-type estimates and
Hamilton-type heat kernel estimates for the 2D Navier-Stokes equation on the
whole space.
|
Blue Luminescence (BL) was first discovered in a proto-planetary nebula, the
Red Rectangle (RR) surrounding the post-AGB star HD 44179. BL has been
attributed to fluorescence by small, 3-4 ringed neutral polycyclic aromatic
hydrocarbon (PAH) molecules, and was thought to be unique to the RR environment
where such small molecules are actively being produced and shielded from the
harsh interstellar radiation by a dense circumstellar disk. In this paper we
present the BL spectrum detected in several ordinary reflection nebulae
illuminated by stars having temperatures between 10,000 -- 23,000 K. All these
nebulae are known to also exhibit the infrared emission features called
aromatic emission features (AEFs) attributed to large PAHs. We present the
spatial distribution of the BL in these nebulae. In the case of Ced~112, the BL
is spatially correlated with mid-IR emission structures attributed to AEFs.
These observations provide evidence for grain processing and possibly for
in-situ formation of small grains and large molecules from larger aggregates.
Most importantly, the detection of BL in these ordinary reflection nebulae
suggests that the BL carrier is a ubiquitous component of the ISM and is not
restricted to the particular environment of the RR.
|
An accurate estimation of the experimental field-emission area remains a
great challenge in vacuum electronics. The lack of convenient means, which can
be used to measure this parameter, creates a critical knowledge gap, making it
impossible to compare theory to experiment. In this work, a fast pattern
recognition algorithm was developed to complement field emission microscopy,
together creating a methodology to obtain and analyze electron emission
micrographs in order to quantitatively estimate the field emission area. The
algorithm is easy to use and is made available to the community as freeware,
and is therefore described in detail. Three examples of dc emission are given to
demonstrate the applicability of this algorithm to determine spatial
distribution of emitters, calculate emission area, and finally obtain
experimental current density as a function of the electric field for two
technologically important field emitter materials, namely an
ultrananocrystalline diamond film and a carbon nanotube fiber. Unambiguous
results, demonstrating the current density saturation and once again proving
that conventional Fowler-Nordheim theory, its Murphy-Good extension, and the
vacuum space-charge effect fail to describe such behaviour, are presented and
discussed. We also show that the transit-time limited charge resupply captures
the current density saturation behaviour observed in experiments and provides
good quantitative agreement with experimental data for all cases studied in
this work.
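The published algorithm is distributed as freeware; purely to illustrate the underlying idea of estimating emission area from a micrograph (thresholding against the peak intensity and counting bright pixels), a toy version might look as follows. The threshold rule and pixel scale here are assumptions, not the published method:

```python
import numpy as np

def emission_area(micrograph, pixel_area_um2, rel_threshold=0.1):
    """Estimate emission area as the pixels brighter than a fraction of the peak."""
    mask = micrograph > rel_threshold * micrograph.max()
    return mask.sum() * pixel_area_um2

# Synthetic micrograph: dark background with two bright emission spots.
img = np.zeros((100, 100))
img[10:14, 10:14] = 1.0     # 16-pixel spot
img[50:53, 60:63] = 0.8     # 9-pixel spot
area = emission_area(img, pixel_area_um2=0.25)   # 25 bright pixels * 0.25 um^2
```

Dividing a measured emission current by such an area estimate is what turns current-voltage data into the experimental current density discussed above.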
|
We systematically investigate the robustness of symmetry protected
topological (SPT) order in open quantum systems by studying the evolution of
string order parameters and other probes under noisy channels. We find that
one-dimensional SPT order is robust against noisy couplings to the environment
that satisfy a strong symmetry condition, while it is destabilized by noise
that satisfies only a weak symmetry condition, which generalizes the notion of
symmetry for closed systems. We also discuss "transmutation" of SPT phases into
other SPT phases of equal or lesser complexity, under noisy channels that
satisfy twisted versions of the strong symmetry condition.
|
Recently spatially localized anomalies have been considered in higher
dimensional field theories. The question of the quantum consistency and
stability of these theories needs further discussion. Here we would like to
investigate what string theory might teach us about theories with localized
anomalies. We consider the Z_3 orbifold of the heterotic E_8 x E_8 theory, and
compute the anomaly of the gaugino in the presence of Wilson lines. We find an
anomaly localized at the fixed points, which depends crucially on the local
untwisted spectra at those points. We show that non-Abelian anomalies cancel
locally at the fixed points for all Z_3 models with or without additional
Wilson lines. At various fixed points different anomalous U(1)s may be present,
but at most one at a given fixed point. It is in general not possible to
construct one generator which is the sole source of the anomalous U(1)s at the
various fixed points.
|
We have studied the binding energies and electronic structures of metal (Ti,
Al, Au) chains adsorbed on single-wall carbon nanotubes (SWNT) using first
principles methods. Our calculations show that titanium is far more
energetically favorable than gold or aluminum for forming a continuous chain on a
variety of SWNTs. The interaction between titanium and carbon nanotube
significantly modifies the electronic structures around Fermi energy for both
zigzag and armchair tubes. The delocalized 3d electrons from the titanium chain
generate additional states in the band gap regions of the semiconducting tubes,
transforming them into metals.
|
We study the statistics of weak lensing convergence peaks, such as their
abundance and two-point correlation function (2PCF), for a wide range of
cosmological parameters $\Omega_m$ and $\sigma_8$ within the standard
$\Lambda$CDM paradigm, focusing on intermediate-height peaks with
signal-to-noise ratio (SNR) of $1.5$ to $3.5$. We find that the cosmology
dependence of the peak abundance can be described by a one-parameter fitting
formula that is accurate to within $\sim3\%$. The peak 2PCFs are shown to
feature a self-similar behaviour: if the peak separation is rescaled by the
mean inter-peak distance, catalogues with different minimum peak SNR values
have identical clustering, which suggests that the peak abundance and
clustering are closely interconnected. A simple fitting model for the rescaled
2PCF is given, which together with the peak abundance model above can predict
peak 2PCFs with an accuracy better than $\sim5\%$. The abundance and 2PCFs for
intermediate peaks have very different dependencies on $\Omega_m$ and
$\sigma_8$, implying that their combination can be used to break the degeneracy
between these two parameters.
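The self-similar rescaling described above can be illustrated with a short NumPy sketch; the mock peak catalogue and all function names here are ours, not the paper's.

```python
import numpy as np

def rescaled_separations(peaks, box_size):
    """Pairwise peak separations rescaled by the mean inter-peak
    distance d = (area / N)**0.5, as in the self-similarity test."""
    n = len(peaks)
    d_mean = (box_size**2 / n) ** 0.5
    diffs = peaks[:, None, :] - peaks[None, :, :]
    r = np.sqrt((diffs**2).sum(-1))
    iu = np.triu_indices(n, k=1)          # each pair once
    return r[iu] / d_mean

rng = np.random.default_rng(0)
peaks = rng.uniform(0.0, 10.0, size=(50, 2))   # mock peak catalogue
x = rescaled_separations(peaks, 10.0)
print(x.shape)  # (1225,)
```

Binning `x` for catalogues with different minimum SNR cuts would, per the claimed self-similarity, yield overlapping 2PCFs.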
|
In this paper we completely characterize the graphs which have an edge
weighted adjacency matrix belonging to the class of $n \times n$ involutions
with spectrum equal to $\{ \lambda_1^{n-2}, \lambda_2^{2} \}$ for some
$\lambda_1$ and some $\lambda_2$. The connected graphs turn out to be the
cographs constructed as the join of at least two unions of pairs of complete
graphs, and possibly joined with one other complete graph.
|
A real-world example of adding OpenACC to a legacy MPI FORTRAN Preconditioned
Conjugate Gradient code is described, and timing results for multi-node
multi-GPU runs are shown. The code is used to obtain three-dimensional
spherical solutions to the Laplace equation. Its application is finding
potential field solutions of the solar corona, a useful tool in space weather
modeling. We highlight key tips, strategies, and challenges faced when adding
OpenACC. Performance results are shown for running the code with MPI-only on
multiple CPUs, and with MPI+OpenACC on multiple GPUs and CPUs.
|
Many Control Systems are indeed Software Based Control Systems, i.e. control
systems whose controller consists of control software running on a
microcontroller device. This motivates investigation on Formal Model Based
Design approaches for automatic synthesis of control software.
Available algorithms and tools (e.g., QKS) may require weeks or even months
of computation to synthesize control software for large-size systems. This
motivates search for parallel algorithms for control software synthesis.
In this paper, we present a Map-Reduce style parallel algorithm for control
software synthesis when the controlled system (plant) is modeled as a discrete
time linear hybrid system. Furthermore, we present an MPI-based implementation
PQKS of our algorithm. To the best of our knowledge, this is the first parallel
approach for control software synthesis.
We experimentally show effectiveness of PQKS on two classical control
synthesis problems: the inverted pendulum and the multi-input buck DC/DC
converter. Experiments show that PQKS efficiency is above 65%. As an example,
PQKS requires about 16 hours to complete the synthesis of control software for
the pendulum on a cluster with 60 processors, instead of the 25 days needed by
the sequential algorithm in QKS.
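The Map-Reduce structure of the approach can be caricatured as follows; the per-region "synthesis" below is a placeholder function, not the QKS algorithm.

```python
from functools import reduce

def synthesize_region(region):
    # Map step: compute a (state -> action) table for one slice of
    # the plant's abstract state space (placeholder dynamics).
    return {s: max(0, s - 1) for s in region}

def merge(ctrl_a, ctrl_b):
    # Reduce step: controllers on disjoint regions merge by union.
    ctrl_a.update(ctrl_b)
    return ctrl_a

states = range(12)
regions = [list(states)[i::4] for i in range(4)]      # 4 workers
local = map(synthesize_region, regions)               # map phase
controller = reduce(merge, local, {})                 # reduce phase
print(len(controller))  # 12
```

In the MPI implementation the map phase would run on separate ranks, with the reduce phase gathering the partial controllers.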
|
We report on work to increase the number of well-measured Type Ia supernovae
(SNe Ia) at high redshifts. Light curves, including high signal-to-noise HST
data, and spectra of six SNe Ia that were discovered during 2001 are presented.
Additionally, for the two SNe with z>1, we present ground-based J-band
photometry from Gemini and the VLT. These are among the most distant SNe Ia for
which ground based near-IR observations have been obtained. We add these six
SNe Ia together with other data sets that have recently become available in the
literature to the Union compilation (Kowalski et al. 2008). We have made a
number of refinements to the Union analysis chain, the most important ones
being the refitting of all light curves with the SALT2 fitter and an improved
handling of systematic errors. We call this new compilation, consisting of 557
supernovae, the Union2 compilation. The flat concordance LambdaCDM model
remains an excellent fit to the Union2 data with the best fit constant equation
of state parameter w=-0.997^{+0.050}_{-0.054} (stat) ^{+0.077}_{-0.082}
(stat+sys together) for a flat universe, or w=-1.035^{+0.055}_{-0.059}
(stat)^{+0.093}_{-0.097} (stat+sys together) with curvature. We also present
improved constraints on w(z). While no significant change in w with redshift is
detected, there is still considerable room for evolution in w. The strength of
the constraints depend strongly on redshift. In particular, at z > 1, the
existence and nature of dark energy are only weakly constrained by the data.
|
We report the first near infrared (NIR) imaging of a circumstellar annular
disk around the young (~8 Myr), Vega-like star, HR 4796A. NICMOS coronagraph
observations at 1.1 and 1.6 microns reveal a ring-like symmetrical structure
peaking in reflected intensity 1.05 arcsec +/- 0.02 arcsec (~ 70 AU) from the
central A0V star. The ring geometry, with an inclination of 73.1 deg +/- 1.2
deg and a major axis PA of 26.8 deg +/- 0.6 deg, is in good agreement with
recent 12.5 and 20.8 micron observations of a truncated disk (Koerner et al.
1998). The ring is resolved with a characteristic width of less than 0.26
arcsec (17 AU) and appears abruptly truncated at both the inner and outer
edges. The region of the disk-plane inward of ~60 AU appears to be relatively
free of scattering material. The integrated flux density of the part of the
disk that is visible (greater than 0.65 arcsec from the star) is found to be
7.5 +/- 0.5 mJy and 7.4 +/- 1.2 mJy at 1.1 and 1.6 microns, respectively.
Correcting for the unseen area of the ring yields total flux densities of 12.8
+/- 1.0 mJy and 12.5 +/- 2.0 mJy, respectively (Vega magnitudes = 12.92 +/-
0.08 and 12.35 +/- 0.18). The NIR luminosity ratio is evaluated from these
results and ground-based photometry of the star. At these wavelengths
Ldisk(lambda)/L*(lambda) = (1.4 +/- 0.2)E-3 and (2.4 +/- 0.5)E-3, giving reasonable
agreement between the stellar flux scattered in the NIR and that which is
absorbed in the visible and re-radiated in the thermal infrared. The somewhat
red reflectance of the disk at these wavelengths implies mean particle sizes in
excess of several microns, larger than typical interstellar grains. The
confinement of material to a relatively narrow annular zone implies dynamical
constraints on the disk particles by one or more as yet unseen bodies.
|
Galaxy-scale strong gravitational lensing is not only a valuable probe of the
dark matter distribution of massive galaxies, but can also provide valuable
cosmological constraints, either by studying the population of strong lenses or
by measuring time delays in lensed quasars. Due to the rarity of galaxy-scale
strongly lensed systems, fast and reliable automated lens finding methods will
be essential in the era of large surveys such as LSST, Euclid, and WFIRST. To
tackle this challenge, we introduce CMU DeepLens, a new fully automated
galaxy-galaxy lens finding method based on Deep Learning. This supervised
machine learning approach does not require any tuning after the training step
which only requires realistic image simulations of strongly lensed systems. We
train and validate our model on a set of 20,000 LSST-like mock observations
including a range of lensed systems of various sizes and signal-to-noise ratios
(S/N). We find on our simulated data set that for a rejection rate of
non-lenses of 99%, a completeness of 90% can be achieved for lenses with
Einstein radii larger than 1.4" and S/N larger than 20 on individual $g$-band
LSST exposures. Finally, we emphasize the importance of realistically complex
simulations for training such machine learning methods by demonstrating that
the performance of models of significantly different complexities cannot be
distinguished on simpler simulations. We make our code publicly available at
https://github.com/McWilliamsCenter/CMUDeepLens.
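The quoted completeness at a fixed rejection rate corresponds to thresholding the classifier score; here is a minimal sketch with mock scores (all names and the score distributions are hypothetical).

```python
import numpy as np

def completeness_at_rejection(scores_lens, scores_nonlens, rejection=0.99):
    """Pick the score threshold that rejects the given fraction of
    non-lenses, then report the completeness (true-positive rate)
    for lenses at that threshold."""
    thresh = np.quantile(scores_nonlens, rejection)
    return (scores_lens > thresh).mean()

rng = np.random.default_rng(2)
s_lens = rng.normal(2.0, 1.0, 10000)      # mock classifier scores
s_non = rng.normal(0.0, 1.0, 10000)
c = completeness_at_rejection(s_lens, s_non, 0.99)
print(0.0 <= c <= 1.0)  # True
```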
|
We present results of new photometry for the globular star cluster NGC 2155
in the Large Magellanic Cloud (LMC). Our I- and V-band observations were
obtained with the 6.5-meter Magellan 1 Baade Telescope at Las Campanas
Observatory resulting in deep photometry down to V ~ 24 mag. By analyzing the
color-magnitude diagram for the cluster and utilizing the Victoria-Regina grid
of isochrone models, we estimated the age of the cluster at ~2.25 Gyr and
[Fe/H] = -0.71, values which place NGC 2155 outside the age-gap in the
age-metallicity relation for LMC clusters. Using the Difference Image Analysis
Package (DIAPL), we detected 7 variable stars in the cluster field with
variability at the level of 0.01 magnitude in the I-band. Three variables are
particularly interesting: two SX Phoenicis (SX Phe) stars pulsating in the
fundamental mode, and a detached eclipsing binary which is a prime candidate to
estimate the distance to the cluster.
|
We propose a data-driven approach for deep convolutional neural network
compression that achieves high accuracy with high throughput and low memory
requirements. Current network compression methods either find a low-rank
factorization of the features that requires more memory, or select only a
subset of features by pruning entire filter channels. We propose the Cascaded
Projection (CaP) compression method that projects the output and input filter
channels of successive layers to a unified low dimensional space based on a
low-rank projection. We optimize the projection to minimize classification loss
and the difference between the next layer's features in the compressed and
uncompressed networks. To solve this non-convex optimization problem we propose
a new optimization method of a proxy matrix using backpropagation and
Stochastic Gradient Descent (SGD) with geometric constraints. Our cascaded
projection approach leads to improvements in all critical areas of network
compression: high accuracy, low memory consumption, low parameter count and
high processing speed. The proposed CaP method demonstrates state-of-the-art
results compressing VGG16 and ResNet networks with over 4x reduction in the
number of computations and excellent performance in top-5 accuracy on the
ImageNet dataset before and after fine-tuning.
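A toy version of projecting a layer's output channels to a unified low-dimensional space can be written with a truncated SVD basis; note this is a stand-in for CaP's proxy matrix, which the paper optimizes with backpropagation and SGD rather than computing in closed form.

```python
import numpy as np

def project_channels(W, rank):
    """Low-rank projection of a layer's output channels.
    W: (out_channels, in_features) flattened conv weights."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :rank]                 # projection basis (SVD stand-in)
    W_small = P.T @ W               # compressed weights
    return P, W_small

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 288))      # e.g. 64 filters of 3x3x32
P, W_small = project_channels(W, rank=16)
print(W_small.shape)  # (16, 288)
```

The next layer's input channels would be projected with the same basis, which is what makes the projections "cascaded".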
|
In this paper we give a randomized reduction for the Rank Syndrome Decoding
problem and Rank Minimum Distance problem for rank codes. Our results are based
on an embedding from linear codes equipped with Hamming distance onto linear
codes over an extension field equipped with the rank metric. We prove that if
both previous problems for rank metric are in ZPP = RP$\cap$coRP, then we would
have NP=ZPP. We also give complexity results for the respective approximation
problems in rank metric.
|
We look for and place observational constraints on the imprint of ultralight
dark matter (ULDM) soliton cores in rotation-dominated galaxies. Extending
previous analyses, we find a conservative constraint which disfavors the
soliton-host halo relation found in some numerical simulations over a broad
range in the ULDM particle mass $m$. Combining the observational constraints
with theoretical arguments for the efficiency of soliton formation via
gravitational dynamical relaxation, and assuming that the soliton-halo relation
is correct, our results disfavor ULDM from comprising 100\% of the total
cosmological dark matter in the range $10^{-24}~{\rm eV}\lesssim
m\lesssim10^{-20}~{\rm eV}$. The constraints probe the ULDM fraction down to
$f\lesssim0.3$ of the total dark matter.
|
An enormous and ever-growing volume of data is nowadays becoming available in
a sequential fashion in various real-world applications. Learning in
nonstationary environments constitutes a major challenge, and this problem
becomes orders of magnitude more complex in the presence of class imbalance. We
provide new insights into learning from nonstationary and imbalanced data in
online learning, a largely unexplored area. We propose the novel Adaptive
REBAlancing (AREBA) algorithm that selectively includes in the training set a
subset of the majority and minority examples that appeared so far, while at its
heart lies an adaptive mechanism to continually maintain the class balance
between the selected examples. We compare AREBA with strong baselines and other
state-of-the-art algorithms and perform extensive experimental work in
scenarios with various class imbalance rates and different concept drift types
on both synthetic and real-world data. AREBA significantly outperforms the rest
with respect to both learning speed and learning quality. Our code is made
publicly available to the scientific community.
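The core idea of keeping the selected examples balanced can be caricatured by a per-class buffer; this is a simplified stand-in, not the paper's AREBA algorithm.

```python
from collections import deque

class BalancedBuffer:
    """Keeps at most `cap` recent examples per class so the training
    set stays balanced as the stream evolves (illustrative only)."""
    def __init__(self, cap):
        self.buf = {0: deque(maxlen=cap), 1: deque(maxlen=cap)}

    def add(self, x, y):
        self.buf[y].append(x)

    def training_set(self):
        # Trim the majority side to the minority count seen so far.
        k = min(len(self.buf[0]), len(self.buf[1]))
        return [(x, y) for y in (0, 1) for x in list(self.buf[y])[-k:]]

b = BalancedBuffer(cap=5)
for i in range(20):
    b.add(i, 0)           # majority-class stream
b.add(99, 1)              # single minority example
print(len(b.training_set()))  # 2
```

Bounding the buffers with `maxlen` also discards stale examples, which helps under concept drift.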
|
This paper is mainly concerned with applying the theory of M-regularity
developed in the previous math.AG/0110003 to the study of linear series given
by multiples of ample line bundles on abelian varieties. We define a new
invariant of a line bundle, called M-regularity index, which is seen to govern
the higher order properties and (partly conjecturally) the defining equations
of such embeddings. We prove a general result on the behavior of higher
syzygies in embeddings given by multiples of ample bundles whose base locus has
no fixed components, extending a conjecture of Lazarsfeld known to be true by
work of the first author. This approach also unifies essentially all the
previously known results in this direction and is mainly based on vanishing
theorems rather than representations of theta groups.
|
The magnetic hyperfine structure constants have been calculated for low-lying
levels in neutral potassium atom taking into account the Bohr--Weisskopf (BW)
and Breit--Rosenthal (BR) effects. According to our results the $4p_{1/2}$
state of K~I is free from both BR and BW corrections on the level of the
current theoretical uncertainties. Using this finding and the measured values
of the $A(4p_{1/2})$ constants, we corrected the nuclear magnetic moments for
several short-lived potassium isotopes. The BW correction is represented as a
product of atomic and nuclear factors. We calculated the atomic factor for the
ground state of K I, which allowed us to extract nuclear factors for potassium
$I^\pi = 3/2^+$ isotopes from the experimental data. In this way the
application range of the single-particle nuclear model for nuclear-factor
calculation in these isotopes has been clarified.
|
This study is devoted to the search for a model that best describes the
liquid and solid phases of the systems Cd-Te, Hg-Te, and Cd-Hg-Te. For the
liquid phases, we used the model of Sub-regular Associated Solution (S.A.S).
Thermodynamic quantities of dissociation, for the associated species are
calculated by the numerical resolution of the phase equilibrium equations. The
results of this calculation indicate a more marked associated character in
Cd-Te than in Hg-Te with a pseudo-regular behavior of the binary liquid Cd-Te.
The phase diagrams calculated from these thermodynamic quantities describe
the experimental diagrams very well, especially for the binary systems. The
model is used to calculate the interaction parameters and the enthalpies and
entropies of dissociation and mixing; these otherwise unmeasurable
thermodynamic quantities are used to describe the phase diagrams and in
further studies.
|
This paper aims to provide a description of totally isotropic Willmore
two-spheres and their adjoint transforms. We first recall the isotropic
harmonic maps which are introduced by H\'elein, Xia-Shen and Ma for the study
of Willmore surfaces. Then we derive a description of the normalized potential
(some Lie algebra valued meromorphic 1-forms) of totally isotropic Willmore
two-spheres in terms of the isotropic harmonic maps. In particular, the
corresponding isotropic harmonic maps are of finite uniton type. The proof also
contains a concrete way to construct examples of totally isotropic Willmore
two-spheres and their adjoint transforms. As illustrations, two kinds of
examples are obtained this way.
|
We investigate the role of dipolar interactions on the magnetic properties of
nanowires aggregates. Micromagnetic simulations show that dipolar interactions
between wires are not detrimental to the high coercivity properties of magnetic
nanowires composites even in very dense aggregates. This is confirmed by
experimental magnetization measurements and Henkel plots which show that the
dipolar interactions are small. Indeed, we show that misalignment of the
nanowires in aggregates leads to a coercivity reduction of only 30%. Direct
dipolar interactions between nanowires, even as close as 2 nm, have small
effects (maximum coercivity reduction of ~15%) and are very sensitive to the
detailed geometrical arrangement of wires. These results strengthen the
potential of magnetic composite materials based on elongated single domain
particles for the fabrication of permanent magnetic materials.
|
We present an energy conserving space discretisation based on a Poisson
bracket that can be used to derive the dry compressible Euler as well as
thermal shallow water equations. It is formulated using the compatible finite
element method, and extends the incorporation of upwinding for the shallow
water equations as described in Wimmer, Cotter, and Bauer (2019). While the
former is restricted to DG upwinding, an energy conserving SUPG scheme for the
(partially) continuous Galerkin thermal field space is newly introduced here.
The energy conserving property is validated by coupling the Poisson bracket
based spatial discretisation to an energy conserving time discretisation.
Further, the discretisation is demonstrated to lead to an improved temperature
field development with respect to stability when upwinding is included. An
approximately energy conserving full discretisation with a smaller
computational cost is also presented.
|
Board games are a great source of entertainment for all ages, as they create
a competitive and engaging environment, as well as stimulating learning and
strategic thinking. It is common for digital versions of board games, like any
other type of digital game, to offer the option to select the difficulty of
the game. This is usually done by customizing the search parameters of the AI
algorithm. However, this approach cannot be extended to General Game Playing
agents, as different games might require different parametrization for each
difficulty level. In this paper, we present a general approach to implement an
artificial intelligence opponent with difficulty levels for zero-sum games,
together with a proposed Minimax-MCTS hybrid algorithm, which combines the
minimax search process with GGP aspects of MCTS. This approach was tested in
our mobile application LoBoGames, an extensible board-game platform that is
intended to have a broad catalog of games, with an emphasis on accessibility:
the platform is friendly to visually-impaired users, and is compatible with
more than 92\% of Android devices. The tests in this work indicate that both
the hybrid Minimax-MCTS and the new difficulty adjustment system are promising
GGP approaches that could be expanded in future work.
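A difficulty level realized as a search-depth cap in plain minimax can be sketched as below; this is illustrative only, since the paper's hybrid additionally blends in MCTS aspects, and the toy game here is ours.

```python
def minimax(state, depth, maximizing, moves, apply, score):
    """Depth-limited minimax; 'depth' is what a difficulty
    setting would control."""
    ms = moves(state)
    if depth == 0 or not ms:
        return score(state)
    vals = (minimax(apply(state, m), depth - 1, not maximizing,
                    moves, apply, score) for m in ms)
    return max(vals) if maximizing else min(vals)

# Toy zero-sum game: the state is a number; each move adds or
# subtracts 1; the maximizer wants a large final number.
moves = lambda s: [+1, -1]
apply_m = lambda s, m: s + m
score = lambda s: s
print(minimax(0, 3, True, moves, apply_m, score))  # 1
```

In a GGP setting, `moves`, `apply`, and `score` come from the game description, so only `depth` changes per difficulty level.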
|
The morphology of glands has been used routinely by pathologists to assess
the malignancy degree of adenocarcinomas. Accurate segmentation of glands from
histology images is a crucial step to obtain reliable morphological statistics
for quantitative diagnosis. In this paper, we propose an efficient deep
contour-aware network (DCAN) to solve this challenging problem under a unified
multi-task learning framework. In the proposed network, multi-level contextual
features from the hierarchical architecture are explored with auxiliary
supervision for accurate gland segmentation. When incorporated with multi-task
regularization during the training, the discriminative capability of
intermediate features can be further improved. Moreover, our network can not
only output accurate probability maps of glands, but also depict clear contours
simultaneously for separating clustered objects, which further boosts the gland
segmentation performance. This unified framework can be efficient when applied
to large-scale histopathological data without resorting to additional steps to
generate contours based on low-level cues for post-separating. Our method won
the 2015 MICCAI Gland Segmentation Challenge out of 13 competitive teams,
surpassing all the other methods by a significant margin.
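The multi-task idea of jointly predicting gland probability maps and contour maps can be sketched as a weighted sum of two per-pixel losses; the binary cross-entropy form and the weight `lam` are hypothetical choices, not necessarily the paper's.

```python
import numpy as np

def joint_loss(p_obj, p_cnt, y_obj, y_cnt, lam=1.0):
    """Segmentation term plus contour term (binary cross-entropy),
    in the spirit of the multi-task framework."""
    eps = 1e-9
    bce = lambda p, y: -np.mean(
        y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    return bce(p_obj, y_obj) + lam * bce(p_cnt, y_cnt)

y = np.array([0.0, 1.0, 1.0, 0.0])   # mock per-pixel labels
p = np.array([0.1, 0.9, 0.8, 0.2])   # mock predicted probabilities
loss = joint_loss(p, p, y, y)
print(loss > 0)  # True
```

At inference, the predicted contour map is what separates clustered glands that would otherwise merge in the probability map.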
|
We introduce and explore a family of self-dual models of single-particle
motion in quasiperiodic potentials, with hopping amplitudes that fall off as a
power law with exponent $p$. These models are generalizations of the familiar
Aubry-Andre model. For large enough $p$, their static properties are similar to
those of the Aubry-Andre model, although the low-frequency conductivity in the
localized phase is sensitive to $p$. For $p \leq 2.1$ the Aubry-Andre
localization transition splits into three transitions; two distinct
intermediate regimes with both localized and delocalized states appear near the
self-dual point of the Aubry-Andre model. In the intermediate regimes, the
density of states is singular continuous in much of the spectrum, and is
approximately self-similar: states form narrow energy bands, which are divided
into yet narrower sub-bands; we find no clear sign of a mobility edge. When $p
< 1$, localized states are not stable in random potentials; in the present
model, however, tightly localized states are present for relatively large
systems. We discuss the frequency-dependence and strong sample-to-sample
fluctuations of the low-frequency optical conductivity, although a suitably
generalized version of Mott's law is recovered when the power-law is slowly
decaying. We present evidence that many of these features persist in models
that are away from self-duality.
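The model family can be sketched by its Hamiltonian matrix: hopping falling off as 1/|i-j|^p plus a quasiperiodic on-site potential. The conventions below (golden-mean incommensuration, unit hopping scale) are our assumptions for illustration.

```python
import numpy as np

def power_law_aa(L, p, lam, alpha=(np.sqrt(5) - 1) / 2, phi=0.0):
    """Single-particle Hamiltonian with power-law hopping 1/|i-j|**p
    and a quasiperiodic potential (generalized Aubry-Andre)."""
    n = np.arange(L)
    H = np.zeros((L, L))
    for d in range(1, L):
        hop = np.full(L - d, 1.0 / d**p)
        H += np.diag(hop, k=d) + np.diag(hop, k=-d)
    H += np.diag(2 * lam * np.cos(2 * np.pi * alpha * n + phi))
    return H

H = power_law_aa(L=144, p=3.0, lam=2.0)
E, psi = np.linalg.eigh(H)
ipr = (psi**4).sum(axis=0)   # inverse participation ratio per state
```

Scanning `ipr` across the spectrum for various p and lam is one way to probe the localized, delocalized, and intermediate regimes discussed above.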
|
This paper demonstrates the undecidability of a number of logics with
quantification over public announcements: arbitrary public announcement logic
(APAL), group announcement logic (GAL), and coalition announcement logic (CAL).
In APAL we consider the informative consequences of any announcement, in GAL we
consider the informative consequences of a group of agents (this group may be a
proper subset of the set of all agents) all of which are simultaneously (and
publicly) making known announcements. So this is more restrictive than APAL.
Finally, CAL is as GAL except that we now quantify over anything the agents not
in that group may announce simultaneously as well. The logic CAL therefore has
some features of game logic and of ATL. We show that when there are multiple
agents in the language, the satisfiability problem is undecidable for APAL,
GAL, and CAL. In the single agent case, the satisfiability problem is decidable
for all three logics. This paper corrects an error in the submitted version of
Undecidability of Quantified Announcements, identified by Yuta Asami. The
error was in the definition of the formula $cga(X)$ (see Subsection 5.2),
which is corrected in this version.
|
In the mixed-valence manganites, a near-infrared laser typically melts the
orbital and spin order simultaneously, corresponding to the photoinduced
$d^{1}d^{0}$ $\xrightarrow{}$ $d^{0}d^{1}$ excitations in the Mott-Hubbard
bands of manganese. Here, we use ultrafast methods -- both femtosecond resonant
x-ray diffraction and optical reflectivity -- to demonstrate that the orbital
response in the layered manganite Nd$_{1-x}$Sr$_{1+x}$MnO$_{4}$ ($\it{x}$ =
2/3) does not follow this scheme. At the photoexcitation saturation fluence,
the orbital order is only diminished by a few percent in the transient state.
Instead of the typical $d^{1}d^{0}$ $\xrightarrow{}$ $d^{0}d^{1}$ transition, a
near-infrared pump in this compound promotes a fundamentally distinct mechanism
of charge transfer, the $d^{0}$ $ \xrightarrow{}$ $d^{1}L$, where $\it{L}$
denotes a hole in the oxygen band. This novel finding may pave a new avenue for
selectively manipulating specific types of order in complex materials of this
class.
|
We investigate braid group representations associated with unitary braided
vector spaces, focusing on a conjecture that such representations should have
virtually abelian images in general and finite image provided the braiding has
finite order. We verify this conjecture for the two infinite families of
Gaussian and group-type braided vector spaces, as well as the generalization to
quasi-braided vector spaces of group-type.
|
In this paper we consider massless spin 2 interacting with massive
arbitrary-spin fermions in d=3. First, we study all possible
deformations for the massive fermion unfolded equations in the massless spin 2
background. We find three linearly independent solutions one of which
corresponds to the standard gravitational interactions. Then for all three
cases we reconstruct appropriate Lagrangian formulation.
|
We present an analytical model to study the role of expectation feedbacks and
overlapping portfolios on systemic stability of financial systems. Building on
[Corsi et al., 2016], we model a set of financial institutions having Value at
Risk capital requirements and investing in a portfolio of risky assets, whose
prices evolve stochastically in time and are endogenously driven by the trading
decisions of financial institutions. Assuming that they use adaptive
expectations of risk, we show that the evolution of the system is described by
a slow-fast random dynamical system, which can be studied analytically in some
regimes. The model shows how the risk expectations play a central role in
determining the systemic stability of the financial system and how wrong risk
expectations may create panic-induced reduction or over-optimistic expansion of
balance sheets. Specifically, when investors are myopic in estimating the risk,
the fixed point equilibrium of the system breaks into leverage cycles and
financial variables display a bifurcation cascade eventually leading to chaos.
We discuss the role of financial policy and the effects of some market
frictions, such as the cost of diversification and financial transaction taxes, in
determining the stability of the system in the presence of adaptive
expectations of risk.
|
Heavy Fermion physics deals with the ground state formation and interactions
in f-electron materials where the electron effective masses are extremely
large, more than 100 times the rest mass of an electron. The details of how the
f-electrons correlate at low temperature to become so massive lack a coherent
theory, partially because so few materials display this heavy behavior and thus
global trends remain unclear. UHg_{3} is now found experimentally to be a heavy
Fermion antiferromagnet, just as are all the other U_{x}M_{y} compounds with
the metal M being in column II B (filled d electron shells) in the periodic
table (Zn/Cd/Hg) and the spacing between Uranium ions being greater than the
Hill limit of 3.5 Angstroms. This result that, independent of the structure of
these U_{x}M_{y}, M=Zn/Cd/Hg, compounds and independent of the value of their
Uranium-Uranium spacings (ranging from 4.39 to 6.56 Angstroms), all exhibit
heavy Fermion antiferromagnetism, is a clear narrowing of the parameters
important for understanding the formation of this ground state. The sequence of
antiferromagnetic transition temperatures, T_{Neel}, of 9.7 K, 5.0 K, and 2.6 K
for U_{x}M_{y} as the metal M varies down column II B (Zn/Cd/Hg) indicates an
interesting regularity for the antiferromagnetic coupling strength.
|
We propose a new method to construct maximin distance designs with arbitrary
number of dimensions and points. The proposed designs hold interleaved-layer
structures and are by far the best maximin distance designs in four or more
dimensions. Applicable to distance measures with equal or unequal weights, our
method is useful for emulating computer experiments when a relatively accurate
a priori guess of the variable importance is available.
|
A self-adjoint first order system with Hermitian $\pi$-periodic potential
$Q(z)$, integrable on compact sets, is considered. It is shown that all zeros
of $\Delta + 2e^{-i\int_0^\pi \Im q dt}$ are double zeros if and only if this
self-adjoint system is unitarily equivalent to one in which $Q(z)$ is
$\frac{\pi}{2}$-periodic. Furthermore, the zeros of $\Delta - 2e^{-i\int_0^\pi
\Im q dt}$ are all double zeros if and only if the associated self-adjoint
system is unitarily equivalent to one in which $Q(z) = \sigma_2 Q(z) \sigma_2$.
Here $\Delta$ denotes the discriminant of the system and $\sigma_0$, $\sigma_2$
are Pauli matrices. Finally, it is shown that all instability intervals vanish
if and only if $Q = r\sigma_0 + q\sigma_2$, for some real valued $\pi$-periodic
functions $r$ and $q$ integrable on compact sets.
|
The COVID-19 pandemic has led to a need for widespread and rapid vaccine
development. As several vaccines have recently been approved for human use or
are in different stages of development, governments across the world are
preparing comprehensive guidelines for vaccine distribution and monitoring. In
this early article, we identify challenges in logistics, health outcomes,
user-centric matters, and communication associated with disease-related,
individual, societal, economic, and privacy consequences. Primary challenges
include difficulty in equitable distribution, vaccine efficacy, duration of
immunity, multi-dose adherence, and privacy-focused record-keeping to be HIPAA
compliant. While many of these challenges have been previously identified and
addressed, some have not been acknowledged from a comprehensive view accounting
for unprecedented interactions between challenges and specific populations. The
logistics of equitable widespread vaccine distribution in disparate populations
and countries of various economic, racial, and cultural constitutions must be
thoroughly examined and accounted for. We also describe unique challenges
regarding the efficacy of vaccines in specialized populations including
children, the elderly, and immunocompromised individuals. Furthermore, we
report the potential for understudied drug-vaccine interactions as well as the
possibility that certain vaccine platforms may increase susceptibility to HIV.
Given these complicated issues, the importance of privacy-focused, user-centric
systems for vaccine education and incentivization along with clear
communication from governments, organizations, and academic institutions is
imperative. These challenges are by no means insurmountable, but require
careful attention to avoid consequences spanning a range of disease-related,
individual, societal, economic, and security domains.
|
Digital zero-noise extrapolation (dZNE) has emerged as a common approach for
quantum error mitigation (QEM) due to its conceptual simplicity, accessibility,
and resource efficiency. In practice, however, properly applying dZNE to extend
the computational reach of noisy quantum processors is rife with subtleties.
Here, based on literature review and original experiments on noisy simulators
and real quantum hardware, we define best practices for QEM with dZNE for each
step of the workflow, including noise amplification, execution on the quantum
device, extrapolation to the zero-noise limit, and composition with other QEM
methods. We anticipate that this effort to establish best practices for dZNE
will be extended to other QEM methods, leading to more reproducible and
rigorous calculations on noisy quantum hardware.
|
Hydrogen-based fuel cells are promising solutions for the efficient and clean
delivery of electricity. Since hydrogen is an energy carrier, a key step for
the development of a reliable hydrogen-based technology requires solving the
issue of storage and transport of hydrogen. Several proposals based on the
design of advanced materials such as metal hydrides and carbon structures have
been made to overcome the limitations of the conventional solution of
compressing or liquefying hydrogen in tanks. Nevertheless, none of these
systems currently offers the required performance in terms of hydrogen storage
capacity and control of adsorption/desorption processes. Therefore, the problem
of hydrogen storage remains unsolved and continues to represent a
significant bottleneck to the advancement and proliferation of fuel cell and
hydrogen technologies. Recently, however, several studies on graphene, the
one-atom-thick membrane of carbon atoms packed in a honeycomb lattice, have
highlighted the potential of this material for hydrogen storage and raised
new hopes for the development of an efficient solid-state hydrogen storage
device. Here we review on-going efforts and studies on functionalized and
nanostructured graphene for hydrogen storage and suggest possible developments
for efficient storage/release of hydrogen at ambient conditions.
|
The aim of this paper is to describe the largest inscribed circle into the
fundamental domains of a discontinuous group in Bolyai-Lobachevsky hyperbolic
plane. We give some known basic facts related to the Poincare-Delone problem
and the existence notion of the inscribed circle. We study the best circle of
the group G = [3, 3, 3, 3] with 4 rotational centers each of order 3. Using the
Lagrange multiplier method, we describe the characteristics of the
best-inscribed circle. The method could be applied to the more general case of
G = [3, 3, 3,..., 3] with at least 4 rotational centers each of order 3, at the
cost of increasingly extensive computations. We observe, via the more geometric
Theorem 2, that the maximum radius is attained by equalizing the angles at
equivalent centers and at the additional vertices with trivial stabilizers,
respectively. Theorem 3 closes our arguments, in which Lemmas 3 and 4 play key
roles.
|
We develop a strategy for optimal portfolios based on information theory and
Tsallis statistics. The growth rate of a stock market is defined using
$q$-deformed functions, and we find that the wealth after $n$ days with the
optimal portfolio is given by a $q$-exponential function. In this context, the
asymptotic optimality is investigated for causal portfolios, showing the
advantages of the optimal portfolio over an arbitrary choice of causal
portfolios. Finally, we apply the formulation to a small number of stocks in
the Brazilian stock market $[B]^{3}$ and analyze the results.
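As a purely illustrative sketch (not taken from the paper: the deformation parameter, the returns, and the use of a plain product in place of the $q$-product are all assumptions), the $q$-deformed growth rate and the $q$-exponential form of the wealth can be written as:

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm ln_q(x) = (x^(1-q) - 1) / (1 - q); ordinary log as q -> 1."""
    if np.isclose(q, 1.0):
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exp(x, q):
    """Tsallis q-exponential, the inverse of ln_q (written for q < 1, with a cutoff at zero)."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

# Hypothetical daily gross returns of a fixed portfolio and a deformation parameter q
returns = np.array([1.02, 0.99, 1.01, 1.03, 0.98])
q = 0.8

# q-deformed growth rate: the average q-logarithm of the daily gross returns
growth_rate = np.mean(q_log(returns, q))

# In the paper's framework the wealth after n days is a q-exponential of the
# accumulated growth; here we only illustrate the functional form.
n = len(returns)
wealth = q_exp(n * growth_rate, q)
```

Note that `q_exp(q_log(x, q), q)` recovers `x`, and both functions reduce to the ordinary exponential and logarithm in the limit q -> 1.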
|
We report the detection of three mercury-manganese stars in the Orion OB1
association. HD 37886 and BD-0 984 are in the approximately 1.7 million year
old Orion OB1b. HD 37492 is in the approximately 4.6 million year old Orion
OB1c. Orion OB1b is now the youngest cluster with known HgMn star members. This
places an observational upper limit on the time scale needed to produce the
chemical peculiarities seen in mercury-manganese stars, which should help in
the search for the cause or causes of the peculiar abundances in HgMn and other
chemically peculiar upper main sequence stars.
|
Radiation emitted by nonthermal particles accelerated during relativistic
magnetic reconnection is critical for understanding the nonthermal emission in
a variety of astrophysical systems, including blazar jets, black hole coronae,
pulsars, and magnetars. By means of fully kinetic Particle-in-Cell (PIC)
simulations, we demonstrate that reconnection-driven particle acceleration
imprints an energy-dependent pitch-angle anisotropy and gives rise to broken
power laws in both the particle energy spectrum and the pitch-angle anisotropy.
The particle distributions depend on the relative strength of the
non-reconnecting (guide field) versus the reconnecting component of the
magnetic field ($B_g/B_0$) and the lepton magnetization ($\sigma_0$). Below the
break Lorentz factor $\gamma_0$ (injection), the particle energy spectrum is
ultra-hard ($p_< < 1$), while above $\gamma_0$, the spectral index $p_>$ is
highly sensitive to $B_g/B_0$. Particles' velocities align with the magnetic
field, reaching minimum pitch angles $\alpha$ at a Lorentz factor $\gamma_{\min
\alpha}$ controlled by $B_g/B_0$ and $\sigma_0$. The energy-dependent
pitch-angle anisotropy, evaluated through the mean of $\sin^2 \alpha$ of
particles at a given energy, exhibits power-law ranges with negative ($m_<$)
and positive ($m_>$) slopes below and above $\gamma_{\min \alpha}$, becoming
steeper as $B_g/B_0$ increases. The generation of anisotropic pitch angle
distributions has important astrophysical implications. We address their
effects on regulating synchrotron luminosity, spectral energy distribution,
polarization, particle cooling, the synchrotron burnoff limit, emission
beaming, and temperature anisotropy.
|
One of the outstanding puzzles of theoretical physics is whether quantum
information indeed gets lost in the case of Black Hole (BH) evaporation or
accretion. Let us recall that Quantum Mechanics (QM) demands an upper limit on
the acceleration of a test particle. On the other hand, it is pointed out here
that, if a Schwarzschild BH existed, the acceleration of a test particle
would blow up at the event horizon, in violation of QM. Thus the concept of an
exact BH contradicts QM and quantum gravity (QG). We also recall that the mass
of a BH actually appears as an integration constant of the Einstein equations,
and it has been shown that the value of this integration constant is actually
zero. Thus, even classically, there cannot be finite-mass BHs, though a
zero-mass BH is allowed. It has been further shown that during
continued gravitational collapse, radiation emanating from the contracting
object gets trapped within it by the runaway gravitational field. As a
consequence, the contracting body attains a quasi-static state where outward
trapped radiation pressure gets balanced by inward gravitational pull and the
ideal classical BH state is never formed in a finite proper time. In other
words, continued gravitational collapse results in an "Eternally Collapsing
Object" which is a ball of hot plasma and which is asymptotically approaching
the true BH state with M=0 after radiating away its entire mass energy. If we
include QM, this contraction must halt at the radius set by the maximal QM
acceleration. In any case, no event horizon is ever formed and, in reality,
there is no quantum information paradox.
|
We apply a recently developed technique utilizing machine learning for
statistical analysis of computational nitrogen K-edge spectra of aqueous
triglycine. This method, emulator-based component analysis, identifies
spectrally relevant structural degrees of freedom in a data set, filtering
out irrelevant ones. A tremendous reduction in the dimensionality of the
ill-posed nonlinear inverse problem of spectrum interpretation is thus achieved.
Structural and spectral variation across the sampled phase space is notable.
Using these data, we train a neural network to predict the intensities of
spectral regions of interest from the structure. These regions are defined by
the temperature-difference profile of the simulated spectra, and the analysis
yields a structural interpretation for their behavior. Even though the utilized
local many-body tensor representation implicitly encodes the secondary
structure of the peptide, our approach proves that this information is
irrecoverable from the spectra. A hard X-ray Raman scattering experiment
confirms the overall sensibility of the simulated spectra, but the predicted
temperature-dependent effects therein remain beyond the achieved statistical
confidence level.
|
In 1964 A. Bruckner observed that any bounded open set in the plane has an
inscribed triangle, that is, a triangle contained in the open set and with the
vertices lying on the boundary. We prove that this triangle can be taken
uniformly fat, more precisely having all internal angles larger than $\sim 0.3$
degrees, independently of the choice of the initial open set. We also build a
polygon in which all the inscribed triangles are not too fat, meaning that at
least one angle is less than $\sim 55$ degrees. These results show the
existence of a maximal number $\Theta$ strictly between 0 and 60, whose exact
value remains unknown, for which all bounded open sets admit an inscribed
triangle with all angles larger than or equal to $\Theta$ degrees.
|
While electrical compatibility constraints normally prevent head-to-head (HH)
and tail-to-tail (TT) domain walls from forming in ferroelectric materials, we
propose that such domain walls could be stabilized by intentional growth of
atomic layers in which the cations are substituted from a neighboring column of
the periodic table. In particular, we carry out predictive first-principles
calculations of superlattices in which Sc, Nb, or other substitutional layers
are inserted periodically into PbTiO$_3$. We confirm that this gives rise to a
domain structure with the longitudinal component of the polarization
alternating from domain to domain, and with the substitutional layers serving
as HH and TT domain walls. We find that a substantial transverse component
of the polarization can also be present.
|
The cutoff phenomenon is an abrupt transition from out of equilibrium to
equilibrium undergone by certain Markov processes in the limit where the size
of the state space tends to infinity: instead of decaying gradually over time,
their distance to equilibrium remains close to the maximal value for a while
and suddenly drops to zero as the time parameter reaches a critical threshold.
Despite the accumulation of many examples, this phenomenon is still far from
being understood, and identifying the general conditions that trigger it has
become one of the biggest challenges in the quantitative analysis of finite
Markov chains. Very recently, the author proposed a general sufficient
condition for the occurrence of a cutoff, based on a certain
information-theoretic statistic called varentropy. In the present paper, we
demonstrate the sharpness of this approach by showing that the cutoff
phenomenon is actually equivalent to the varentropy criterion for all sparse,
fast-mixing chains. Reversibility is not required.
|
Cepheids in multiple systems provide information on the outcome of the
formation of massive stars. They can also lead to exotic end-stage objects.
This study concludes our survey of 70 Galactic Cepheids using the {\it Hubble
Space Telescope} ({\it HST}) Wide Field Camera~3 (WFC3), with images at two
wavelengths to identify companions closer than $5\arcsec$. In the entire WFC3
survey we identify 16 probable companions for 13 Cepheids. The seven Cepheids
having resolved candidate companions within $2"$ all have the surprising
property of themselves being spectroscopic binaries (as compared with a 29\%
incidence of spectroscopic binaries in the general Cepheid population). That is
a strong suggestion that an inner binary is linked to the scenario of a third
companion within a few hundred~AU. This characteristic continues for
more widely separated companions. Under a model where the outer companion is
formed first, it is unlikely that it can anticipate a subsequent inner binary.
Rather it is more likely that a triple system has undergone dynamical
interaction, resulting in one star moving outward to its current location. {\it
Chandra} and {\it Gaia} data, as well as radial velocities and {\it HST} STIS
and {\it IUE} spectra, are used to derive properties of the components of the
Cepheid systems.
The colors of the companion candidates show a change in distribution at
approximately 2000~AU separations, from a range including both hot and cool
colors for closer companions, to only low-mass companions for wider
separations.
|
Online social networks have become the medium for efficient viral marketing
exploiting social influence in information diffusion. However, the emerging
application of Social Coupons (SCs), which incorporate social referral into
coupons, cannot be efficiently handled by previous research, which does not
take into account the effect of SC allocation. The number of allocated SCs
restricts the number of influenced friends for each user. In this paper, we
investigate not
only the seed selection problem but also the effect of SC allocation for
optimizing the redemption rate which represents the efficiency of SC
allocation. Accordingly, we formulate a problem named Seed Selection and SC
allocation for Redemption Maximization (S3CRM) and prove the hardness of S3CRM.
We design an effective algorithm with a performance guarantee, called Seed
Selection and Social Coupon allocation algorithm. For S3CRM, we introduce the
notion of marginal redemption to evaluate the efficiency of investment in seeds
and SCs. Moreover, for a balanced investment, we develop a new graph structure
called guaranteed path, to explore the opportunity to optimize the redemption
rate. Finally, we perform a comprehensive evaluation on our proposed algorithm
with various baselines. The results validate our ideas and show the
effectiveness of the proposed algorithm over baselines.
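This is NOT the paper's S3CRM algorithm, but the generic idea behind investing by "marginal redemption" can be sketched with a greedy rule that repeatedly spends one unit of budget where the estimated marginal gain is largest (all names and numbers below are hypothetical):

```python
# Greedy budget allocation by marginal gain: a minimal, illustrative sketch.

def greedy_allocate(candidates, gain, budget):
    """Pick up to `budget` items, each time taking the one whose marginal
    gain (given the items already chosen) is largest."""
    chosen = []
    remaining = set(candidates)
    for _ in range(budget):
        if not remaining:
            break
        best = max(remaining, key=lambda c: gain(chosen, c))
        if gain(chosen, best) <= 0:
            break
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy instance: each seed reaches an audience; gains diminish with overlap.
audiences = {"a": {1, 2, 3}, "b": {2, 3, 4, 5}, "c": {6, 7}}

def coverage_gain(chosen, item):
    covered = set().union(*(audiences[c] for c in chosen)) if chosen else set()
    return len(audiences[item] - covered)

print(greedy_allocate(audiences, coverage_gain, 2))  # ['b', 'c']
```

For monotone submodular gain functions, this greedy rule carries the classical (1 - 1/e) approximation guarantee, which is the usual justification for such heuristics.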
|
We elucidate the mismatch between the $A$-anomaly coefficient and the
coefficient of the logarithmic term in the entanglement entropy of a Maxwell
field. In contrast to the usual assumptions about the protection of
renormalization group charges at the infrared, the logarithmic term is
different for a free Maxwell field and a Maxwell field interacting with heavy
charges. This is possible because of the presence of superselection sectors in
the IR theory. However, the correction due to the coupling with charged vacuum
fluctuations, that restores the anomaly coefficient, is independent of the
precise UV dynamics. The problem is invariant under electromagnetic duality,
and the solution requires both the existence of electric charges and magnetic
monopoles. We use a real-time operator approach but we also show how the
results for the free and interacting fields are translated into an effective
correction to the four-sphere partition function.
|
In this paper we study some new theories of characteristic homology classes
for singular complex algebraic varieties. First we introduce a natural
transformation T_{y}: K_{0}(var/X) -> H_{*}(X,Q)[y] commuting with proper
pushdown, which generalizes the corresponding Hirzebruch characteristic. Here
K_{0}(var/X) is the relative Grothendieck group of complex algebraic varieties
over X as introduced and studied by Looijenga and Bittner in relation to
motivic integration. T_{y} is a homology class version of the motivic measure
corresponding to a suitable specialization of the well known Hodge polynomial.
This transformation unifies the Chern class transformation of MacPherson and
Schwartz (for y=-1) and the Todd class transformation in the singular
Riemann-Roch theorem of Baum-Fulton-MacPherson (for y=0). In fact, T_{y} is the
composition of a generalized version of this Todd class transformation due to
Yokura, and a new motivic Chern class transformation mC_{*}: K_{0}(var/X)->
G_{0}(X)[y], which generalizes the total lambda-class of the cotangent bundle
to singular spaces. Here G_{0}(X) is the Grothendieck group of coherent
sheaves, and the construction of mC_{*} is based on some results from the
theory of algebraic mixed Hodge modules due to M.Saito. In the final part of
the paper we use the algebraic cobordism theory of Levine and Morel to lift
mC_{*} further up to a natural transformation mC'_{*} from K_{0}(var/X) to a
suitable universal Borel-Moore weak homology theory. Moreover, all our results
can be extended to varieties over a base field k, which can be embedded into
the complex numbers.
|
From generation to generation there are increasing requirements for wireless
standards, both in terms of spectral and energy efficiency. While the layered
wireless transceiver architecture has worked up to now, allowing, e.g., the
separation of channel-decoding algorithms from front-end design, this may need
reconsideration in the 6G era. In particular, hardware-originated distortions
have to be taken into account when designing algorithms at other layers, as the
high throughput and energy efficiency requirements will push these devices to
their limits, revealing their nonlinear characteristics. This position paper
sheds some light on the new degrees of freedom in the cross-layer design and
control of multicarrier and multiantenna transceivers in 6G systems.
|
The 2-dimensional Ising model on a square lattice is investigated with a
variational autoencoder in the non-vanishing field case for the purpose of
extracting the crossover region between the ferromagnetic and paramagnetic
phases. The encoded latent-variable space is found to provide suitable metrics
for tracking order and disorder in the Ising configurations, extending to the
extraction of a crossover region in a way that is consistent with expectations.
The extracted results achieve an excellent prediction for the
critical point as well as agreement with previously published results on the
configurational magnetizations of the model. The performance of this method
provides encouragement for the use of machine learning to extract meaningful
structural information from complex physical systems where little a priori data
is available.
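As a toy illustration (not the paper's autoencoder; lattice size and seed are arbitrary), the kind of order/disorder signal that the latent variables are reported to track can be computed directly as the magnetization of a configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def magnetization(config):
    """Mean spin of an Ising configuration with spins in {-1, +1}."""
    return np.mean(config)

ordered = np.ones((32, 32))                      # ferromagnetic ground state
disordered = rng.choice([-1, 1], size=(32, 32))  # infinite-temperature state

m_ord = abs(magnetization(ordered))     # exactly 1.0
m_dis = abs(magnetization(disordered))  # close to 0 for a 32x32 lattice
```

The point of the latent-space approach is that such metrics emerge from the data without being hand-coded as above.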
|
We present PDFFlow, a new software for fast evaluation of parton distribution
functions (PDFs) designed for platforms with hardware accelerators. PDFs are
essential for the calculation of particle physics observables through Monte
Carlo simulation techniques. The evaluation of a generic set of PDFs for
quarks and gluons at a given momentum fraction and energy scale requires the
implementation of interpolation algorithms, as introduced for the first time by
the LHAPDF project. PDFFlow extends and implements these interpolation
algorithms using Google's TensorFlow library, providing the capability to
perform PDF evaluations while taking full advantage of multi-threaded CPU and
GPU setups. We benchmark the performance of this library on multiple scenarios
relevant for the particle physics community.
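The core numerical task can be sketched as follows; this is a simplified illustration and not PDFFlow's actual implementation (LHAPDF-style grids use (log-)cubic interpolation and per-flavor subgrids, while this toy version is merely bilinear in (log x, log Q2)):

```python
import numpy as np

# Toy bilinear interpolation of a gridded function f(x, Q2) in log variables.

def bilinear_pdf(logx_grid, logq2_grid, values, x, q2):
    lx, lq = np.log(x), np.log(q2)
    i = np.clip(np.searchsorted(logx_grid, lx) - 1, 0, len(logx_grid) - 2)
    j = np.clip(np.searchsorted(logq2_grid, lq) - 1, 0, len(logq2_grid) - 2)
    tx = (lx - logx_grid[i]) / (logx_grid[i + 1] - logx_grid[i])
    tq = (lq - logq2_grid[j]) / (logq2_grid[j + 1] - logq2_grid[j])
    return ((1 - tx) * (1 - tq) * values[i, j]
            + tx * (1 - tq) * values[i + 1, j]
            + (1 - tx) * tq * values[i, j + 1]
            + tx * tq * values[i + 1, j + 1])

# Toy grid whose values are linear in the log variables, so the
# interpolation should be exact.
logx = np.log(np.array([1e-3, 1e-2, 1e-1]))
logq2 = np.log(np.array([2.0, 10.0, 100.0]))
vals = logx[:, None] + 2.0 * logq2[None, :]   # f = log x + 2 log Q2

approx = bilinear_pdf(logx, logq2, vals, 3e-3, 5.0)
exact = np.log(3e-3) + 2.0 * np.log(5.0)
```

Expressing this lookup-and-blend as tensor operations is what lets PDFFlow batch millions of evaluations on a GPU.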
|
Machine learning models, especially deep neural networks have been shown to
be susceptible to privacy attacks such as membership inference where an
adversary can detect whether a data point was used for training a black-box
model. Such privacy risks are exacerbated when a model's predictions are used
on an unseen data distribution. To alleviate privacy attacks, we demonstrate
the benefit of predictive models that are based on the causal relationships
between input features and the outcome. We first show that models learnt using
causal structure generalize better to unseen data, especially on data from
different distributions than the train distribution. Based on this
generalization property, we establish a theoretical link between causality and
privacy: compared to associational models, causal models provide stronger
differential privacy guarantees and are more robust to membership inference
attacks. Experiments on simulated Bayesian networks and the colored-MNIST
dataset show that associational models exhibit up to 80% attack accuracy under
different test distributions and sample sizes whereas causal models exhibit
attack accuracy close to a random guess.
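A minimal sketch of the kind of attack involved (a loss-threshold membership-inference attack; the synthetic loss distributions and all numbers here are illustrative assumptions, not the paper's experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-example losses: an overfit associational model has much lower
# loss on training members than on non-members; a causal model's losses are
# assumed, for illustration, to be similar on both, defeating the attack.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

def attack_accuracy(members, nonmembers, threshold):
    # Guess "member" when loss < threshold; report balanced accuracy.
    tpr = np.mean(members < threshold)
    tnr = np.mean(nonmembers >= threshold)
    return 0.5 * (tpr + tnr)

thresholds = np.linspace(0.0, 3.0, 301)
best = max(attack_accuracy(member_losses, nonmember_losses, t) for t in thresholds)
print(f"best attack accuracy on the overfit model: {best:.2f}")  # well above 0.5
```

When member and non-member loss distributions coincide, no threshold does better than a random guess (accuracy 0.5), which is the regime the causal models are reported to approach.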
|
In low power wireless sensor networks, MAC protocols usually employ periodic
sleep/wake schedule to reduce idle listening time. Even though this mechanism
is simple and efficient, it results in high end-to-end latency and low
throughput. On the other hand, the previously proposed CSMA/CA-based MAC
protocols have tried to reduce inter-node interference at the cost of increased
latency and lower network capacity. In this paper we propose IAMAC, a CSMA/CA
sleep/wake MAC protocol that minimizes inter-node interference, while also
reducing per-hop delay through cross-layer interactions with the network layer.
Furthermore, we show that IAMAC can be integrated into the SP architecture to
perform its inter-layer interactions. Through simulation, we have extensively
evaluated the performance of IAMAC in terms of different performance metrics.
Simulation results confirm that IAMAC reduces energy consumption per node and
leads to higher network lifetime compared to S-MAC and Adaptive S-MAC, while it
also provides lower latency than S-MAC. Throughout our evaluations we have
considered IAMAC in conjunction with two error recovery methods, i.e., ARQ and
Seda. It is shown that using Seda as the error recovery mechanism of IAMAC
results in higher throughput and lifetime compared to ARQ.
|
We report the discovery of 28 quasars and 7 luminous galaxies at 5.7 $\le$ z
$\le$ 7.0. This is the tenth in a series of papers from the Subaru High-z
Exploration of Low-Luminosity Quasars (SHELLQs) project, which exploits the
deep multi-band imaging data produced by the Hyper Suprime-Cam (HSC) Subaru
Strategic Program survey. The total number of spectroscopically identified
objects in SHELLQs has now grown to 93 high-z quasars, 31 high-z luminous
galaxies, 16 [O III] emitters at z ~ 0.8, and 65 Galactic cool dwarfs (low-mass
stars and brown dwarfs). These objects were found over 900 deg2, surveyed by
HSC between 2014 March and 2018 January. The full quasar sample includes 18
objects with very strong and narrow Ly alpha emission, whose stacked spectrum
is clearly different from that of other quasars or galaxies. While the stacked
spectrum shows N V 1240 emission and resembles that of lower-z narrow-line
quasars, the small Ly alpha width may suggest a significant contribution from
the host galaxies. Thus these objects may be composites of quasars and
star-forming galaxies.
|
We provide infinitely many rational homology 3-spheres with weight-one
fundamental groups which do not arise from Dehn surgery on knots in $S^3$. In
contrast with previously known examples, our proofs do not require any gauge
theory or Floer homology. Instead, we make use of the $SU(2)$ character variety
of the fundamental group, which for these manifolds is particularly simple:
they are all $SU(2)$-cyclic, meaning that every $SU(2)$ representation has
cyclic image.
|
The Kaigorodov space is a homogeneous Einstein space and it describes a
pp-wave propagating in anti-de Sitter space. It is conjectured in the
literature that M-theory or string theory on the Kaigorodov space times a
compact manifold is dual to a conformal field theory in an infinitely-boosted
frame with constant momentum density. In this note we present a charged
generalization of the Kaigorodov space by boosting a non-extremal charged
domain wall to the ultrarelativistic limit, where the boost velocity approaches
the speed of light. The finite boost of the domain wall solution gives the
charged generalization of the Carter-Novotn\'y-Horsk\'y metric. We study the
thermodynamics associated with the charged Carter-Novotn\'y-Horsk\'y space and
discuss its relation to that of the static black domain walls and its
implications in the domain wall/QFT (quantum field theory) correspondence.
|
We consider bulk quantum fields in AdS/CFT in the background of an eternal
black hole. We show that for black holes with finite entropy, correlation
functions of semiclassical bulk operators close to the horizon deviate from
their semiclassical value and are ill-defined inside the horizon. This is due
to the large-time behavior of correlators in a unitary CFT, and means the
region near and inside the horizon receives corrections. We give a prescription
for modifying the definition of a bulk field in a black hole background, such
that one can still define operators that mimic the inside of the horizon, but
at the price of violating microcausality. For supergravity fields we find that
commutators at spacelike separation are generically of order exp(-S/2). Similar
results
hold for stable black holes that form in collapse. The general lesson may be
that a small amount of non-locality, even over arbitrarily large spacelike
distances, is an essential aspect of non-perturbative quantum gravity.
|
The possible effect of environment on the efficiency of a quantum algorithm
is considered explicitly. It is illustrated, through the example of Shor's
prime factorization algorithm, that this effect may be disastrous. The
influence of environment on quantum computation is probed on the basis of its
analogy to the problem of wave function collapse in quantum measurement.
Techniques from the Hepp-Coleman approach and its generalization are used to
deal with decoherence problems in quantum computation, including the dynamic
mechanism of decoherence, quantum error avoiding tricks, and the calculation of
decoherence time.
|
In this work, we introduce a facile procedure to graft a thin graphitic C3N4
(g-C3N4) layer on aligned TiO2 nanotube arrays (TiNT) by one-step chemical
vapor deposition (CVD) approach. This provides a platform to enhance the
visible-light response of TiO2 nanotubes for antimicrobial applications. The
formed g-C3N4/TiNT binary nanocomposite exhibits excellent bactericidal
efficiency against E. coli as a visible-light-activated antibacterial coating.
|
Let $W$ be a random variable with mean zero and variance $\sigma^2$. The
distribution of a variate $W^*$, satisfying $EWf(W)=\sigma ^2 Ef'(W^*)$ for
smooth functions $f$, exists uniquely and defines the zero bias transformation
on the distribution of $W$. The zero bias transformation shares many
interesting properties with the well known size bias transformation for
non-negative variables, but is applied to variables taking on both positive and
negative values. The transformation can also be defined on more general random
objects. The relation between the transformation and the expression
$wf'(w)-\sigma^2 f''(w)$ which appears in the Stein equation characterizing the
mean zero, variance $\sigma ^2$ normal $\sigma Z$ can be used to obtain bounds
on the difference $E\{h(W/\sigma)-h(Z)\}$ for smooth functions $h$ by
constructing the pair $(W,W^*)$ jointly on the same space. When $W$ is a sum of
$n$ not necessarily independent variates, under certain conditions which
include a vanishing third moment, bounds on this difference of the order $1/n$
for classes of smooth functions $h$ may be obtained. The technique is
illustrated by an application to simple random sampling.
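As a standard concrete instance (a textbook example, not one worked in the text): for the symmetric two-point variable with $P(W = a) = P(W = -a) = 1/2$, so that $EW = 0$ and $\sigma^2 = a^2$, the zero biased variate $W^*$ is uniform on $[-a, a]$, since for smooth $f$,

```latex
\[
  E\,W f(W)
  = \frac{a}{2}\bigl(f(a) - f(-a)\bigr)
  = a^2 \cdot \frac{1}{2a}\int_{-a}^{a} f'(w)\,dw
  = \sigma^2\, E f'(W^*).
\]
```

The mean zero normal $\sigma Z$ is the unique fixed point of the transformation, which is what makes the coupling $(W, W^*)$ useful for normal approximation bounds.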
|
We study the real-time dynamics of the Kondo effect after a quantum quench in
which a magnetic impurity is coupled to two metallic Hubbard chains. Using an
effective field theory approach, we find that for noninteracting electrons the
charge current across the impurity is given by a scaling function that involves
the Kondo time. In the interacting case, we show that the Kondo time decreases
with the strength of the repulsive interaction and the time dependence of the
current reveals signatures of the Kondo effect in a Luttinger liquid. In
addition, we verify that the relaxation of the impurity magnetization does not
exhibit universal scaling behavior in the perturbative regime below the Kondo
time. Our results highlight the role of nonequilibrium dynamics as a valuable
tool in the study of quantum impurities in interacting systems.
|
We extend and improve the known results about the boundedness of the bilinear
pseudo-differential operators with symbols in the bilinear H\"ormander class
$BS^{m}_{0,0}(\mathbb{R}^n)$. We consider wider classes of symbols and improve
estimates for the corresponding operators. A key idea is to consider the
operators in Wiener amalgam spaces.
|
In recent years, \emph{search story}, a combined display with other organic
channels, has become a major source of user traffic on platforms such as
e-commerce search platforms, news feed platforms and web and image search
platforms. The recommended search story guides a user to identify her own
preference and personal intent, which subsequently influences the user's
real-time and long-term search behavior. As search stories become increasingly
important, in this work we study the problem of personalized search story
recommendation within a
search engine, which aims to suggest a search story relevant to both a search
keyword and an individual user's interest. To address the challenge of modeling
both immediate and future values of recommended search stories (i.e.,
cross-channel effect), for which conventional supervised learning framework is
not applicable, we resort to a Markov decision process and propose a deep
reinforcement learning architecture trained by both imitation learning and
reinforcement learning. We empirically demonstrate the effectiveness of our
proposed approach through extensive experiments on real-world data sets from
JD.com.
|
We present a twofold contribution to the numerical simulation of Lindblad
equations. First, an adaptive numerical approach to approximate Lindblad
equations using low-rank dynamics is described: a deterministic low-rank
approximation of the density operator is computed, and its rank is adjusted
dynamically, using an on-the-fly estimator of the error committed when reducing
the dimension. Second, when the intrinsic dimension of the Lindblad
equation is too high to allow for such a deterministic approximation, we
combine classical ensemble averages of quantum Monte Carlo trajectories and a
denoising technique. Specifically, a variance reduction method based upon the
consideration of a low-rank dynamics as a control variate is developed.
Numerical tests for quantum collapse and revivals show the efficiency of each
approach, along with the complementarity of the two approaches.
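The control-variate mechanism itself can be sketched generically (with synthetic data; this is not the paper's Lindblad setup, only the variance-reduction identity it relies on):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# "Expensive" samples X (e.g., an observable along stochastic trajectories)
# and correlated "cheap" samples C with known mean (e.g., the same observable
# under an approximate low-rank dynamics).
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)   # target: E[X] = 0
c = z                              # control variate with known mean E[C] = 0

# Optimal coefficient beta = Cov(X, C) / Var(C), estimated from the samples.
beta = np.cov(x, c)[0, 1] / np.var(c)
estimate = np.mean(x - beta * (c - 0.0))  # 0.0 is the known mean of C

var_plain = np.var(x) / n
var_cv = np.var(x - beta * c) / n
print(var_cv < var_plain)  # the control variate reduces the estimator variance
```

The closer the cheap dynamics tracks the full one, the stronger the correlation and the larger the variance reduction, which is why a low-rank approximation is a natural control variate here.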
|
We report a process for fabricating high quality, defect-free spherical
mirror templates suitable for developing high finesse optical Fabry-Perot
resonators. The process utilizes the controlled reflow of borosilicate glass
and differential pressure to produce mirrors with 0.3 nanometer surface
roughness. The dimensions of the mirrors are in the 0.5-5mm range making them
suitable candidates for integration with on-chip neutral atom and ion
experiments where enhanced interaction between atoms and photons are required.
Moreover, the mirror curvature, dimensions, and placement are readily controlled,
and the process can easily provide an array of such mirrors. We show that
cavities constructed with these mirror templates are well suited to quantum
information applications such as single photon sources and atom-photon
entanglement.
|
Coupled-resonance spectroscopy has recently been reported and applied to
spectroscopic measurements and laser stabilization. With coupled-resonance
spectroscopy, one may indirectly measure transitions between excited states
that are hard to measure directly because of the lack of population in those
states. We propose an improvement of coupled-resonance spectroscopy that
combines it with electromagnetically induced transparency (EIT). The
coupled-resonance spectroscopy signal can be significantly
enhanced by EIT. Several experimental schemes are discussed, and the line
shape of the EIT-enhanced coupled-resonance spectroscopy is calculated. The
EIT-enhanced spectroscopy can be used to simultaneously stabilize two lasers
to the same atomic source.
|
Not all generate-and-test search algorithms are created equal. Bayesian
Optimization (BO) invests a lot of computation time to generate the candidate
solution that best balances the predicted value and the uncertainty given all
previous data, taking increasingly more time as the number of evaluations
performed grows. Evolutionary Algorithms (EA), on the other hand, rely on
search heuristics that typically do not depend on all previous data and can be
computed in constant time. Both the BO and EA communities typically assess
their performance as a function of the number of evaluations. However, this
comparison becomes unfair once we consider the efficiency of these classes of
algorithms, as their overhead times for generating candidate solutions differ
significantly. We suggest
to measure the efficiency of generate-and-test search algorithms as the
expected gain in the objective value per unit of computation time spent. We
observe that which algorithm is preferable can change after a number of
function evaluations. We therefore propose a new algorithm, a
combination of Bayesian optimization and an Evolutionary Algorithm, BEA for
short, that starts with BO, then transfers knowledge to an EA, and subsequently
runs the EA. We compare the BEA with BO and the EA. The results show that BEA
outperforms both BO and the EA in terms of time efficiency, and ultimately
leads to better performance on well-known benchmark objective functions with
many local optima. Moreover, we test the three algorithms on nine test cases of
robot learning problems and here again we find that BEA outperforms the other
algorithms.
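The BO-to-EA handoff can be sketched schematically. The "BO phase" below is a crude stand-in (the best of a few random probes, in place of a surrogate model and acquisition-function maximizer); the objective `f`, the budgets and the mutation scale are all hypothetical. The point is the knowledge transfer: the archive of BO evaluations seeds the EA population.

```python
import math
import random

random.seed(0)

def f(x):
    # Multimodal toy objective: global maximum near x = 0.15.
    return -(x * x) + 0.5 * math.sin(10 * x)

def bo_phase(n_evals):
    # Stand-in for BO: evaluate candidates and archive every (value, x) pair.
    return [(f(x), x) for x in (random.uniform(-5, 5) for _ in range(n_evals))]

def ea_phase(archive, n_gens, pop_size=5, sigma=0.2):
    # Transfer: seed the population with the best archived solutions,
    # then continue with cheap Gaussian mutations.
    pop = [x for _, x in sorted(archive, reverse=True)[:pop_size]]
    for _ in range(n_gens):
        children = [x + random.gauss(0.0, sigma) for x in pop]
        ranked = sorted(((f(x), x) for x in pop + children), reverse=True)
        pop = [x for _, x in ranked[:pop_size]]   # elitist survivor selection
    return max((f(x), x) for x in pop)

best_val, best_x = ea_phase(bo_phase(20), n_gens=100)
print(best_val, best_x)
```

After the handoff, each candidate costs only a mutation and an evaluation, which is what makes the combined scheme more time-efficient once the per-candidate overhead of BO starts to dominate.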
|