We present new X-ray and UV observations of the Wolf-Rayet + black hole
binary system NGC 300 X-1 with the Chandra X-ray Observatory and the Hubble
Space Telescope Cosmic Origins Spectrograph. When combined with archival X-ray
observations, our X-ray and UV observations sample the entire binary orbit,
providing clues to the system geometry and interaction between the black hole
accretion disk and the donor star wind. We measure a binary orbital period of
32.7921$\pm$0.0003 hr, in agreement with previous studies, and perform
phase-resolved spectroscopy using the X-ray data. The X-ray light curve reveals
a deep eclipse, consistent with inclination angles of $i=60-75^{\circ}$, and a
pre-eclipse excess consistent with an accretion stream impacting the disk edge.
We further measure radial velocity variations for several prominent FUV
spectral lines, most notably He II $\lambda$1640 and C IV $\lambda$1550. We
find that the He II emission lines systematically lag the expected Wolf-Rayet
star orbital motion by a phase difference $\Delta \phi\sim0.3$, while C IV
$\lambda$1550 matches the phase of the anticipated radial velocity curve of the
Wolf-Rayet donor. We assume the C IV $\lambda$1550 emission line follows a
sinusoidal radial velocity curve (semi-amplitude = 250 km s$^{-1}$) and infer a
BH mass of 17$\pm$4 M$_{\odot}$. Our observations are consistent with the
presence of a wind-Roche lobe overflow accretion disk, where an accretion
stream forms from gravitationally focused wind material and impacts the edge of
the black hole accretion disk.
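For reference, the quoted BH mass rests on the standard binary mass function for a circular orbit (textbook relation, with $K$ the C IV semi-amplitude, $P$ the orbital period, $M_{\rm WR}$ the donor mass, and $i$ the inclination): \begin{align*} f(M)=\frac{P K^{3}}{2\pi G}=\frac{M_{\rm BH}^{3}\sin^{3} i}{(M_{\rm BH}+M_{\rm WR})^{2}}, \end{align*} so the inferred $M_{\rm BH}$ depends on the adopted donor mass and the eclipse-constrained inclination range.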
|
Imidazolium cations are promising for anion exchange membranes, electrochemical applications, and gas capture. They can be chemically modified in many ways, including halogenation. The halogenation possibilities of the imidazole ring are of particular interest. This work investigates fluorination and chlorination reactions of all symmetrically non-equivalent sites of the imidazolium cation. Halogenation of all carbon atoms is thermodynamically permitted. Of these, the most favorable site is the first methylene group of the alkyl chain; in turn, the least favorable site is the carbon of the imidazole ring. The temperature dependence of the enthalpy, entropy, and Gibbs free energy at 1 bar is discussed. The reported results provide important guidance for the functionalization of ionic liquids in the search for task-specific compounds.
|
We report a measurement of the inclusive semileptonic branching fraction of
the B_s meson using data collected with the BaBar detector in the
center-of-mass (CM) energy region above the Upsilon(4S) resonance. We use the
inclusive yield of phi mesons and the phi yield in association with a
high-momentum lepton to perform a simultaneous measurement of the semileptonic
branching fraction and the production rate of B_s mesons relative to all B
mesons as a function of CM energy. The inclusive semileptonic branching
fraction of the B_s meson is determined to be B(B_s to l nu X)=9.5
(+2.5/-2.0)(stat)(+1.1/-1.9)(syst)%, where l indicates the average of e and mu.
|
We present XMM-Newton observations of the radio pulsar PSR J1119-6127, which
has an inferred age of 1,700 yr and surface dipole magnetic field strength of
4.1x10^13 G. We report the first detection of pulsed X-ray emission from PSR
J1119-6127. In the 0.5--2.0 keV range, the pulse profile shows a narrow peak
with a very high pulsed fraction of (74 +/- 14)%. In the 2.0--10.0 keV range,
the upper limit for the pulsed fraction is 28% (99% confidence). The pulsed
emission is well described by a thermal blackbody model with a temperature of
T^{\infty} = 2.4^{+0.3}_{-0.2}x10^6 K and emitting radius of 3.4^{+1.8}_{-0.3}
km (at a distance of 8.4 kpc). Atmospheric models result in problematic
estimates for the distance/emitting area. PSR J1119-6127 is now the radio
pulsar with the smallest characteristic age from which thermal X-ray emission has
been detected. The combined temporal and spectral characteristics of this
emission are unlike those of other radio pulsars detected at X-ray energies and
challenge current models of thermal emission from neutron stars.
|
It is shown that the location of the Fisher zeros of the Baxter-Wu model, for two series of finite-sized clusters with spherical boundary conditions, is extremely simple: they lie on the unit circle in the complex $\sinh[2\beta J_3]$ plane. This is the same location as that of the Fisher zeros of the Ising model with nearest-neighbor interactions, $J_2$, on the square lattice with Brascamp-Kunz boundary conditions. The Baxter-Wu model is an Ising model with three-site interactions, $J_3$, on the triangular lattice. From the leading Fisher zeros, using finite-size scaling, accurate estimates of the critical exponent $1/\nu$ are obtained. Furthermore, plotting the imaginary parts of the leading zeros against their real parts yields results similar to those of Janke and Kenna for the nearest-neighbor Ising model on the square lattice, extending this behavior to a multi-site interaction system.
|
In this note, by using ideas of other researchers, we derive several relations among multiple zeta-star values from the hypergeometric identities of C. Krattenthaler and T. Rivoal.
|
Analysing (e,e'p) experimental data involves corrections for radiative
effects which change the interaction kinematics and which have to be carefully
considered in order to obtain the desired accuracy. Missing momentum and energy
due to bremsstrahlung have so far always been calculated using the peaking
approximation which assumes that all bremsstrahlung is emitted in the direction
of the radiating particle. In this article we introduce a full angular Monte
Carlo simulation method which overcomes this approximation. The angular
distribution of the bremsstrahlung photons is reconstructed from H(e,e'p) data.
Its width is found to be underestimated by the peaking approximation and
described much better by the approach developed in this work.
|
Applying Bayesian optimization in problems wherein the search space is
unknown is challenging. To address this problem, we propose a systematic volume expansion strategy for Bayesian optimization. We devise a strategy to guarantee that, over iterative expansions of the search space, our method can find a point whose function value is within epsilon of the objective function maximum. Without the need to specify any parameters, our algorithm automatically triggers the minimal required expansion at each iteration. We derive analytic
expressions for when to trigger the expansion and by how much to expand. We
also provide theoretical analysis to show that our method achieves
epsilon-accuracy after a finite number of iterations. We demonstrate our method
on both benchmark test functions and machine learning hyper-parameter tuning
tasks and demonstrate that our method outperforms baselines.
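As a rough illustration of the idea (not the paper's algorithm, whose expansion trigger and amount are derived analytically), the sketch below grows a one-dimensional search box whenever the acquisition optimum lands on the current boundary; the objective, kernel, and expansion factor are placeholder choices:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):                       # hypothetical test function,
    return -(x - 2.7) ** 2              # maximum outside the initial box

lo, hi = 0.0, 1.0                       # deliberately too-small initial box
X = np.array([[0.2], [0.8]])
y = np.array([objective(v[0]) for v in X])

for it in range(20):
    gp = GaussianProcessRegressor().fit(X, y)
    cand = np.linspace(lo, hi, 200).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmax(mu + 2.0 * sd)][0]   # UCB acquisition
    # Placeholder trigger: if the acquisition optimum sits on the box
    # edge, the optimum may lie outside, so enlarge the volume.
    if np.isclose(x_next, lo) or np.isclose(x_next, hi):
        width = hi - lo
        lo, hi = lo - 0.5 * width, hi + 0.5 * width
    X = np.vstack([X, [[x_next]]])
    y = np.append(y, objective(x_next))

print(f"best x ~ {X[np.argmax(y)][0]:.2f} (true maximizer 2.7)")
```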
|
We present a systematic study of the photo-absorption spectra of various
Si$_{n}$H$_{m}$ clusters (n=1-10, m=1-14) using the time-dependent density
functional theory (TDDFT). The method uses a real-time, real-space
implementation of TDDFT involving full propagation of the time dependent
Kohn-Sham equations. Our results for SiH$_{4}$ and Si$_{2}$H$_{6}$ show good
agreement with the earlier calculations and experimental data. We find that for
small clusters (n<7) the photo-absorption spectrum is atomic-like while for the
larger clusters it shows bulk-like behaviour. We study the photo-absorption
spectra of silicon clusters as a function of hydrogenation. For single
hydrogenation, we find that, in general, the absorption optical gap decreases, and as the number of silicon atoms increases, the effect of a single hydrogen atom on the optical gap diminishes. Upon further hydrogenation the optical gap increases, and for the fully hydrogenated clusters it is larger than that of the corresponding pure silicon clusters.
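Schematically, the real-time approach propagates the time-dependent Kohn-Sham orbitals directly (standard formulation of real-time TDDFT, not details specific to this work): \begin{align*} i\hbar\,\partial_t \varphi_j(\mathbf{r},t)=H_{\rm KS}[n(\mathbf{r},t)]\,\varphi_j(\mathbf{r},t), \qquad n(\mathbf{r},t)=\sum_j |\varphi_j(\mathbf{r},t)|^{2}, \end{align*} and after a weak dipole kick the photo-absorption spectrum follows from the Fourier transform of the induced dipole moment, via the dipole strength function $S(\omega)\propto\omega\,\mathrm{Im}\,\alpha(\omega)$.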
|
The intrinsic Baldwin effect is an anti-correlation between the line
equivalent width and the flux of the underlying continuum detected in a single
variable active galactic nucleus (AGN). This effect, in spite of the extensive
research, is still not well understood, and might give us more information
about the physical properties of the line and continuum emission regions in
AGNs. Here we present preliminary results of our investigation of the intrinsic
Baldwin effect of the broad H$\beta$ line in several Narrow-line Seyfert 1
(NLSy 1) galaxies, for which data were taken from the long-term monitoring
campaigns and from the Sloan Digital Sky Survey Reverberation Mapping project.
|
The effects of long-range interactions in quantum transport are still largely
unexplored, mainly due to the difficulty of devising efficient embedding
schemes. In this work we present substantial progress on the interacting
resonant level model by reducing the problem to the solution of
Kadanoff-Baym-like equations with a correlated embedding self-energy. The
method allows us to deal with short- and long-range interactions and is
applicable from the transient to the steady-state regime. Furthermore, memory
effects are consistently incorporated and the results are not plagued by
negative densities or non-conservation of the electric charge. We employ the
method to calculate densities and currents with long-range interactions
appropriate to low-dimensional leads, and show the occurrence of a jamming
effect which drastically reduces the screening time and suppresses the
zero-bias conductance. None of these effects are captured by short-range
dot-lead interactions.
|
The planets HR 8799bc display colours and spectra nearly identical to those of variable young exoplanet analogues such as VHS 1256-1257ABb and PSO J318.5-22, and are
likely to be similarly variable. Here we present results from a 5-epoch SPHERE
IRDIS broadband-$H$ search for variability in these two planets. HR 8799b
aperture photometry and HR 8799bc negative simulated planet photometry share
similar trends within uncertainties. Satellite spot lightcurves share the same
trends as the planet lightcurves in the August 2018 epochs, but diverge in the
October 2017 epochs. We consider $\Delta(mag)_{b} - \Delta(mag)_{c}$ to trace
non-shared variations between the two planets, and rule out non-shared
variability in $\Delta(mag)_{b} - \Delta(mag)_{c}$ to the 10-20$\%$ level over
4-5 hours. To quantify our sensitivity to variability, we simulate variable
lightcurves by inserting and retrieving a suite of simulated planets at similar
radii from the star as HR 8799bc, but offset in position angle. For HR 8799b,
for periods $<$10 hours, we are sensitive to variability with amplitude $>5\%$.
For HR 8799c, our sensitivity is limited to variability $>25\%$ for similar
periods.
|
Counting independent sets in graphs and hypergraphs under a variety of
restrictions is a classical question with a long history. It is the subject of
the celebrated container method which found numerous spectacular applications
over the years. We consider the question of how many independent sets we can
have in a graph under structural restrictions. We show that any $n$-vertex
graph with independence number $\alpha$ without $bK_a$ as an induced subgraph
has at most $n^{O(1)} \cdot \alpha^{O(\alpha)}$ independent sets. This
substantially improves the trivial upper bound of $n^{\alpha}$ whenever $\alpha \le n^{o(1)}$, and gives a characterization of the forbidden induced subgraphs which allow for such an improvement. It is also in general tight up to a constant in the exponent, since there exist triangle-free graphs with $\alpha^{\Omega(\alpha)}$ independent sets. We also prove that if one in addition assumes the ground graph is $\chi$-bounded, then one can improve the bound to $n^{O(1)} \cdot 2^{O(\alpha)}$, which is tight up to a constant factor in the exponent.
|
We study the 'spectral projector' method for the computation of the chiral
condensate and the topological susceptibility, using $N_f=2+1+1$ dynamical
flavors of maximally twisted mass Wilson fermions. In particular, we perform a
study of the quark mass dependence of the chiral condensate $\Sigma$ and
topological susceptibility $\chi_{top}$ in the range $270~\mathrm{MeV} < m_{\pi} < 500~\mathrm{MeV}$ and compare our data with analytical predictions. In addition, we compute $\chi_{top}$ in the quenched approximation where we match the lattice spacing to the $N_f=2+1+1$ dynamical simulations. Using the kaon, $\eta$ and
$\eta^{\prime}$ meson masses computed on the $N_f=2+1+1$ ensembles, we then
perform a preliminary test of the Witten-Veneziano relation.
|
Thanks to a Multiple Scattering Theory algorithm, we present a way to focus
energy at the deep subwavelength scale, from the far-field, inside a cubic
disordered bubble cloud by using broadband Time Reversal (TR). We show that the
analytical calculation of an effective wavenumber within the Independent Scattering Approximation (ISA) matches the numerical results for the focal extension. Subwavelength focusing of $\lambda/100$ is reported for simulations with perfect bubbles (no loss). A more realistic case, with viscous and thermal losses, allows us to obtain a $\lambda/14$ focal spot, with a low volume fraction of scatterers ($\phi = 0.01$). Bubbly materials could open new perspectives for acoustic actuation in the microfluidic context.
|
Conversion between different types of entangled states is an interesting
problem in quantum mechanics, but research on the conversion between Greenberger-Horne-Zeilinger (GHZ) and Knill-Laflamme-Milburn (KLM) states in atomic systems is absent. In this paper, we propose a scheme to realize the one-step interconversion between GHZ and KLM states with Rydberg atoms. By utilizing Rydberg-mediated interactions, we simplify the system, and by combining this with Lie-transform-based pulse design, we build up an evolution path that realizes the interconversion. Numerical simulations show that the present scheme is robust against decoherence and operational imperfections, and our analysis shows that it is feasible with current experimental technology.
|
Let $A, B$ be positive definite $n\times n$ matrices. We present several
reverse Heinz type inequalities, in particular \begin{align*} \|AX+XB\|_2^2+
2(\nu-1) \|AX-XB\|_2^2\leq \|A^{\nu}XB^{1-\nu}+A^{1-\nu}XB^{\nu}\|_2^2,
\end{align*} where $X$ is an arbitrary $n \times n$ matrix, $\|\cdot\|_2$ is
the Hilbert-Schmidt norm and $\nu>1$. We also establish a Heinz type inequality
involving the Hadamard product of the form \begin{align*} 2|||A^{1\over2}\circ
B^{1\over2}|||\leq|||A^{s}\circ B^{1-t}+A^{1-s}\circ B^{t}|||
\leq\max\{|||(A+B)\circ I|||,|||(A\circ B)+I|||\}, \end{align*} in which $s,
t\in [0,1]$ and $|||\cdot|||$ is a unitarily invariant norm.
|
We study the asymptotic phase concentration phenomena for the
Kuramoto-Sakaguchi(K-S) equation in a large coupling strength regime. For this,
we analyze the detailed dynamics of the order parameters such as the amplitude
and the average phase. For the infinite ensemble of oscillators with the
identical natural frequency, we show that the total mass distribution
concentrates on the average phase asymptotically, whereas the mass around the
antipodal point of the average phase decays to zero exponentially fast in any
positive coupling strength regime. Thus, generic initial kinetic densities
evolve toward the Dirac measure concentrated on the average phase. In contrast,
for the infinite ensemble with distributed natural frequencies, we find a
certain time-dependent interval whose length can be explicitly quantified in
terms of the coupling strength. Provided that the coupling strength is
sufficiently large, the mass on such an interval is eventually non-decreasing
over time. We also show that the amplitude order parameter has a positive
lower bound that depends on the size of support of the distribution function
for the natural frequencies and the coupling strength. The proposed asymptotic
lower bound on the order parameter tends to unity, as the coupling strength
increases to infinity. This is reminiscent of practical synchronization for the
Kuramoto model, in which the diameter for the phase configuration is inversely
proportional to the coupling strength. Our results for the K-S equation
generalize the results in [19] on the emergence of phase-locked states for the
Kuramoto model in a large coupling strength regime.
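For orientation, the amplitude and average phase referred to above are the standard Kuramoto order parameters; in the finite-$N$ model whose mean-field limit is the K-S equation they read (textbook definitions, not notation taken from the paper): \begin{align*} \dot{\theta}_i=\nu_i+\frac{K}{N}\sum_{j=1}^{N}\sin(\theta_j-\theta_i), \qquad R\,e^{\mathrm{i}\phi}=\frac{1}{N}\sum_{j=1}^{N}e^{\mathrm{i}\theta_j}, \end{align*} with $K$ the coupling strength, $R$ the amplitude, and $\phi$ the average phase.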
|
The first experimental information on the strong interaction between
$\Lambda$ and $\Xi^-$ strange baryons is presented in this Letter. The
correlation function of $\Lambda-\Xi^-$ and
$\overline{\Lambda}-\overline{\Xi}^{+}$ pairs produced in high-multiplicity
proton-proton (pp) collisions at $\sqrt{s}$ = 13 TeV at the LHC is measured as
a function of the relative momentum of the pair. The femtoscopy method is used
to calculate the correlation function, which is then compared with theoretical
expectations obtained using a meson exchange model, chiral effective field
theory, and Lattice QCD calculations close to the physical point. Data support
predictions of small scattering parameters while discarding versions with large
ones, thus suggesting a weak $\Lambda-\Xi^{-}$ interaction. The limited
statistical significance of the data does not yet allow one to constrain the
effects of coupled channels like $\Sigma-\Xi$ and $\mathrm{N}-\Omega$.
|
We review the General Relativistic model of a (quasi) point-like particle
represented by a massive shell of neutral matter which has vanishing total
energy in the small-volume limit. We then show that, by assuming a Generalised
Uncertainty Principle, which implies the existence of a minimum length of the
order of the Planck scale, the total energy instead remains finite and equal to
the shell's proper mass both for very heavy and very light particles. This
suggests that the quantum structure of space-time might be related to the
classical Equivalence Principle and possible implications for the late stage of
evaporating black holes are briefly mentioned.
|
Cellular elastomeric metamaterials are interesting for various applications,
e.g. soft robotics, as they may exhibit multiple microstructural pattern
transformations, each with its characteristic mechanical behavior. Numerical
literature studies revealed that pattern formation is restricted in (thick)
boundary layers causing significant mechanical size effects. This paper aims to
experimentally validate these findings on miniaturized specimens, relevant for
real applications, and to investigate the effect of increased geometrical and
material imperfections resulting from specimen miniaturization. To this end,
miniaturized cellular metamaterial specimens are manufactured with different
scale ratios, subjected to in-situ micro-compression tests combined with
digital image correlation yielding full-field kinematics, and compared to
complementary numerical simulations. The specimens' global behavior agrees well
with the numerical predictions, in terms of pre-buckling stiffness, buckling
strain and post-buckling stress. Their local behavior, i.e. pattern
transformation and boundary layer formation, is also consistent between
experiments and simulations. Comparison of these results with idealized
numerical studies from literature reveals the influence of the boundary
conditions in real cellular metamaterial applications, e.g. lateral
confinement, on the mechanical response in terms of size effects and boundary
layer formation.
|
In this paper, we introduce some notions on the pair consisting of a Chern connection and a Higgs field closely related to the first and second variations of the Yang-Mills-Higgs functional, such as strong, degenerate, and stable Yang-Mills-Higgs pairs. We investigate some properties of such pairs.
|
We study the pseudoscalar, vector and axial current correlation functions in the SU(2) NJL model with scalar and vector couplings. The correlation functions are evaluated at leading order in the number of colors $N_c$. As expected, in the pseudoscalar channel pions appear as Goldstone bosons, and after fixing the cutoff to reproduce the physical pion decay constant, we obtain well-known current-algebra results. For the vector and axial channels we exploit the fact that at spacelike momenta the correlation functions can be related to the
experimentally known spectral density via dispersion relations. We show that
the latter imposes strong bounds on the strength of the vector coupling in the
model. We find that the commonly used on-shell treatment of the vector and
axial mesons (identified as poles at large timelike momenta) fails to reproduce
the behavior of the corresponding correlation functions at small spacelike
momenta extracted from the physical spectral density. The parameters of the NJL
model fixed by the correlation functions at small spacelike momenta differ
noticeably from those of the on-shell treatment.
|
This is a revision (and partial retraction) of my previous abstract. Let
$\lambda(X)$ denote Lebesgue measure. If $X\subseteq [0,1]$ and $r \in (0,1)$
then the $r$-Hausdorff capacity of $X$ is denoted by $H^r(X)$ and is defined to
be the infimum of all $\sum_{i=0}^\infty \lambda(I_i)^r$ where
$\{I_i\}_{i\in\omega}$ is a cover of $X$ by intervals. The $r$-Hausdorff capacity has the same null sets as the $r$-Hausdorff measure, which is familiar
from the theory of fractal dimension. It is shown that, given $r < 1$, it is
possible to enlarge a model of set theory, $V$, by a generic extension $V[G]$
so that the reals of $V$ have Lebesgue measure zero but still have positive
$r$-Hausdorff capacity.
|
We give a new presentation of the Yangian for the orthosymplectic Lie
superalgebra $\mathfrak{osp}_{1|2m}$. It relies on the Gauss decomposition of
the generator matrix in the $R$-matrix presentation. The defining relations
between the Gaussian generators are derived from a new version of the
Drinfeld-type presentation of the Yangian for $\mathfrak{osp}_{1|2}$ and some
additional relations in the Yangian for $\mathfrak{osp}_{1|4}$ by an
application of the embedding theorem for the super-Yangians.
|
A universal quantum computer of moderate scale is not available yet; however, intermediate models of quantum computation would still permit demonstrations of a quantum computational advantage over classical computing and could challenge the Extended Church-Turing Thesis. One of these models, based on single photons interacting via linear optics, is called Boson Sampling. Although Boson Sampling was demonstrated and the threshold to claim quantum computational advantage was achieved, the question of how to scale up Boson Sampling experiments remains.
To make progress with this problem, here we present a practically achievable
pathway to scale Boson Sampling experiments with current technologies by
combining continuous-variables quantum information and temporal encoding. We
propose the combination of switchable dual-homodyne and single-photon
detections, the temporal loop technique and scattershot-based Boson Sampling.
We detail the required assumptions for concluding computational hardness for
this configuration. Furthermore, this particular combination of techniques
permits an efficient scaling and certification of Boson Sampling, all in a
single experimental setup.
|
In this paper, we present a novel method for constrained cluster size signed
spectral clustering which allows us to subdivide large groups of people based
on their relationships. In general, signed clustering only requires K hard
clusters and does not constrain the cluster sizes. We extend signed clustering
to include cluster size constraints. Using an example of seating assignment, we
efficiently find groups of people with high social affinity while mitigating
awkward social interaction between people who dislike each other.
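A minimal sketch of the unconstrained base method (one common signed-Laplacian formulation; the paper's cluster-size constraints are not implemented here, and the toy affinity matrix is hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

# Signed affinities: +1 = like, -1 = dislike, 0 = no relation.
A = np.array([[ 0,  1,  1, -1, -1],
              [ 1,  0,  1, -1, -1],
              [ 1,  1,  0, -1,  1],
              [-1, -1, -1,  0,  1],
              [-1, -1,  1,  1,  0]], dtype=float)

Dbar = np.diag(np.abs(A).sum(axis=1))   # degrees use |weights|
L = Dbar - A                            # signed Laplacian (positive semidefinite)
vals, vecs = np.linalg.eigh(L)          # ascending eigenvalues
emb = vecs[:, :2]                       # spectral embedding
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
print(labels)                           # roughly people 0-2 vs. 3-4
```

Adding size constraints would replace the unconstrained k-means step with a capacity-respecting assignment, which is the part the paper contributes.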
|
V605 Aquilae is today widely assumed to have been the result of a final
helium shell flash occurring on a single post-asymptotic giant branch star. The
fact that the outbursting star is in the middle of an old planetary nebula and
that the ejecta associated with the outburst is hydrogen deficient supports
this diagnosis. However, the material ejected during that outburst is also
extremely neon rich, suggesting that it derives from an oxygen-neon-magnesium
star, as is the case in the so-called neon novae. We have therefore attempted
to construct a scenario that explains all the observations of the nebula and
its central star, including the ejecta abundances. We find two scenarios that
have the potential to explain the observations, although neither is a perfect
match. The first scenario invokes the merger of a main sequence star and a
massive oxygen-neon-magnesium white dwarf. The second invokes an
oxygen-neon-magnesium classical nova that takes place shortly after a final
helium shell flash. The main drawback of the first scenario is the inability to
determine whether the ejecta would have the observed composition and whether a
merger could result in the observed hydrogen-deficient stellar abundances
observed in the star today. The second scenario is based on better understood
physics, but, through a population synthesis technique, we determine that its
frequency of occurrence should be very low and possibly lower than what is
implied by the number of observed systems. While we could not envisage a
scenario that naturally explains this object, this is the second final flash
star which, upon closer scrutiny, is found to have hydrogen-deficient ejecta
with abnormally high neon abundances. These findings are in stark contrast with
the predictions of the final helium shell flash and beg for an alternative
explanation.
|
We compute higher moments of the Siegel--Veech transform over quotients of
$SL(2,\mathbb{R})$ by the Hecke triangle groups. After fixing a normalization
of the Haar measure on $SL(2,\mathbb{R})$ we use geometric results and linear
algebra to create explicit integration formulas which give information about
densities of $k$-tuples of vectors in discrete subsets of $\mathbb{R}^2$ which
arise as orbits of Hecke triangle groups. This generalizes work of W.~Schmidt
on the variance of the Siegel transform over
$SL(2,\mathbb{R})/SL(2,\mathbb{Z})$.
|
Sensitive computation often has to be performed in a trusted execution
environment (TEE), which, in turn, requires tamper-proof hardware. If the
computational fabric can be tampered with, we may no longer be able to trust
the correctness of the computation. We study the idea of using computational
platforms in space as a means to protect data from adversarial physical access.
In this paper, we propose SpaceTEE - a practical implementation of this
approach using low-cost nano-satellites called CubeSats. We study the
constraints of such a platform, the cost of deployment, and discuss possible
applications under those constraints. As a case study, we design a hardware
security module solution (called SpaceHSM) and describe how it can be used to
implement a root-of-trust for a certificate authority (CA).
|
We show that discretization of spacetime naturally suggests discretization of
Hilbert space itself. Specifically, in a universe with a minimal length (for
example, due to quantum gravity), no experiment can exclude the possibility
that Hilbert space is discrete. We give some simple examples involving qubits
and the Schr\"odinger wavefunction, and discuss implications for quantum
information and quantum gravity.
|
We extend the Khintchine transference inequalities, as well as a
homogeneous-inhomogeneous transference inequality for lattices, due to Bugeaud
and Laurent, to a weighted setting. We also provide applications to
inhomogeneous Diophantine approximation on manifolds and to weighted badly
approximable vectors. Finally, we interpret and prove a conjecture of
Beresnevich-Velani (2010) about inhomogeneous intermediate exponents.
|
Electroweak precision tests of the SM and MSSM as well as searches for
Supersymmetric Particles and Higgs bosons at LEP II and their significance
within the MSSM are discussed. Invited talk presented at the EPIPHANY Conf. in
Cracow, Jan. 3-7, 1997.
|
In this paper, we study the Hankel edge ideals of graphs. We determine the
minimal prime ideals of the Hankel edge ideal of labeled Hamiltonian and
semi-Hamiltonian graphs, and we investigate radicality, being a complete
intersection, almost complete intersection and set theoretic complete
intersection for such graphs. We also consider the Hankel edge ideal of trees
with a natural labeling, called rooted labeling. We characterize such trees
whose Hankel edge ideal is a complete intersection, and moreover, we determine
those whose initial ideal with respect to the reverse lexicographic order
satisfies this property.
|
This paper proposes an Arbitrary Lagrangian-Eulerian incompressible finite
volume scheme based on Consistent Flux Reconstruction method (ALE-CFR) on
deforming two-dimensional unstructured triangular grids. The scheme is further
applied to solve the problem of two-dimensional incompressible viscous flow
past a rotationally oscillating circular cylinder when subjected to forced
transverse oscillations at Re = 100. The paper first focuses on the
mathematical development and validation of the ALE-CFR scheme and then its
application on the problem. The rotational oscillation frequencies were taken
corresponding to the four primary vortex shedding mode frequencies discovered
from the studies conducted by Tokumaru and Dimotakis. The transverse
oscillation frequency was varied for a frequency ratio $f_{tv}$ with respect to
natural Strouhal frequency for flow past a stationary circular cylinder, from
$0.9 \leq f_{tv} \leq 3.0$. The amplitude ratio for transverse oscillations was
taken as 0.2 with respect to the cylinder diameter. The distinct effects on the
vortex shedding patterns and the contributions of such rotational and
transverse oscillations on lift and drag characteristics of the circular
cylinder are studied using the newly developed scheme.
|
The antiferromagnetic Ising model in uncorrelated scale-free networks has
been studied by means of Monte Carlo simulations. These networks are
characterized by a connectivity (or degree) distribution P(k) ~ k^(- gamma).
The disorder present in these complex networks frustrates the antiferromagnetic
spin ordering, giving rise to a spin-glass (SG) phase at low temperature. The
paramagnetic-SG transition temperature T_c has been studied as a function of
the parameter gamma and the minimum degree present in the networks. T_c is
found to increase when the exponent gamma is reduced, in line with a larger
frustration caused by the presence of nodes with higher degree.
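A minimal Metropolis sketch of this setup (illustrative only: a Barabási-Albert graph stands in for the uncorrelated scale-free networks used in the paper, and all parameter values are arbitrary):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(1000, 3, seed=0)  # scale-free proxy (gamma ~ 3)
N = G.number_of_nodes()
s = rng.choice([-1, 1], size=N)                # random initial Ising spins
J, T = 1.0, 0.5                                # AF energy: E = +J sum s_i s_j

for sweep in range(200):
    for i in rng.permutation(N):
        h = sum(s[j] for j in G[i])            # local field from neighbors
        dE = -2.0 * J * s[i] * h               # energy change if s_i flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i] = -s[i]

E = J * sum(s[u] * s[v] for u, v in G.edges()) / G.number_of_edges()
print(f"energy per bond: {E:.3f}")             # frustration keeps this above -1
```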
|
There is a one-to-one and onto correspondence between the class of numerical
semigroups of depth $n$, where $n$ is an integer, and a certain language over
the alphabet $\{1,\ldots,n\}$ which we call a Kunz language of depth $n$. The
Kunz language associated with the numerical semigroups of depth $2$ is the
regular language $\{1,2\}^*2\{1,2\}^*$. We prove that Kunz languages associated
with numerical semigroups of larger depth are context-sensitive but not
regular.
|
Let $\mu$ be a strong limit singular cardinal. We prove that if $2^{\mu} >
\mu^+$ then $\binom{\mu^+}{\mu}\to \binom{\tau}{\mu}_{<{\rm cf}(\mu)}$ for
every ordinal $\tau<\mu^+$. We obtain an optimal positive relation under $2^\mu
= \mu^+$, as after collapsing $2^\mu$ to $\mu^+$ this positive relation is
preserved.
|
A {\em pseudo-arc} in $\mathrm{PG}(3n-1,q)$ is a set of $(n-1)$-spaces such
that any three of them span the whole space. A pseudo-arc of size $q^n+1$ is a
{\em pseudo-oval}. If a pseudo-oval $\mathcal{O}$ is obtained by applying field
reduction to a conic in $\mathrm{PG}(2,q^n)$, then $\mathcal{O}$ is called a
{\em pseudo-conic}.
We first explain the connection of (pseudo-)arcs with Laguerre planes,
orthogonal arrays and generalised quadrangles. In particular, we prove that the
Ahrens-Szekeres GQ is obtained from a $q$-arc in $\mathrm{PG}(2,q)$ and we
extend this construction to that of a GQ of order $(q^n-1,q^n+1)$ from a
pseudo-arc of $\mathrm{PG}(3n-1,q)$ of size $q^n$.
The main theorem of this paper shows that if $\mathcal{K}$ is a pseudo-arc in
$\mathrm{PG}(3n-1,q)$, $q$ odd, of size larger than the size of the second
largest complete arc in $\mathrm{PG}(2,q^n)$, where for one element $K_i$ of
$\mathcal{K}$, the partial spread
$\mathcal{S}=\{K_1,\ldots,K_{i-1},K_{i+1},\ldots,K_{s}\}/K_i$ extends to a
Desarguesian spread of $\mathrm{PG}(2n-1,q)$, then $\mathcal{K}$ is contained
in a pseudo-conic. The main result of \cite{Casse} also follows from this
theorem.
|
We have studied null geodesics of the charged black hole surrounded by
quintessence. Quintessence is a candidate for dark energy and is represented by
a scalar field. Here, we have done a detailed study of the photon trajectories.
The exact solutions for the trajectories are obtained in terms of the
Jacobi elliptic integrals for all possible energies and angular momenta of the
photons. We have also studied the bending angle using the Rindler and Ishak
method.
|
Using the AdS/CFT duality, we study the expectation value of stress tensor in
$2+1$-dimensional quantum critical theories with a general dynamical scaling
$z$, and explore various constraints on negative energy density for strongly
coupled field theories. The holographic dual theory is the theory of gravity in
3+1-dimensional Lifshitz backgrounds. We adopt a consistent approach to obtain
the boundary stress tensor from bulk construction, which satisfies the trace
Ward identity associated with Lifshitz scaling symmetry. In particular, the
boundary stress tensor, constructed from the gravitational wave deformed
Lifshitz geometry, is found up to second order in gravitational wave
perturbations. The result is compared to its counterpart in free scalar
field theory at the same order in an expansion of small squeezing parameters.
This allows us to relate the boundary values of gravitational waves to the
squeezing parameters of squeezed vacuum states. We find that, in both cases
with $z=1$, the stress tensor satisfies the averaged null energy condition, and
is consistent with the quantum interest conjecture. Moreover, the negative
lower bound on null-contracted stress tensor, which is averaged over time-like
trajectories along nearly null directions, is obtained. We find a weaker
constraint on the magnitude and duration of negative null energy density in
strongly coupled field theory as compared with the constraint in free
relativistic field theory. The implications are discussed.
|
E394 in the Enestrom index. Translated from the Latin original, "De
partitione numerorum in partes tam numero quam specie datas" (1768).
Euler finds a lot of recurrence formulas for the number of partitions of $N$
into $n$ parts from some set like 1 to 6 (numbers on the sides of a die). He
starts the paper talking about how many ways a number $N$ can be formed by
throwing $n$ dice. There do not seem to be any new results or ideas here that
weren't in "Observationes analyticae variae de combinationibus", E158 and "De
partitione numerorum", E191. In this paper Euler just does a lot of special
cases. My impression is that Euler is trying to make his theory of partitions
more approachable. Also, maybe for his own benefit he wants to say it all
again in different words, to make it clear.
|
Starting from the representation of the $(n-1)+n-$dimensional Lorentz
pseudo-sphere on the projective space $\mathbb{P}\mathbb{R}^{n,n}$, we propose
a method to derive a class of solutions underlying a Dirac-K\"ahler type equation on the lattice. We make use of the Cayley transform $\varphi({\bf w})=\dfrac{1+{\bf w}}{1-{\bf w}}$ to show that the resulting group representation arises from the same mathematical framework as the conformal
group representation in terms of the {\it general linear group}
$GL\left(2,\Gamma(n-1,n-1)\cup\{ 0\}\right)$. That allows us to describe such
class of solutions as a commutative $n-$ary product, involving the
quasi-monomials $\varphi\left({\bf z}_j\right)^{-\frac{x_j}{h}}$ ($x_j \in
h\mathbb{Z}$) with membership in the paravector space $\mathbb{R}\oplus
\mathbb{R}{\bf e}_j{\bf e}_{n+j}$.
|
We report on a first search for resonant pair production of neutral
long-lived particles (NLLP) which each decay to a bbbar pair, using 3.6 fb^-1
of data recorded with the D0 detector at the Fermilab Tevatron collider. We
search for pairs of displaced vertices in the tracking detector at radii in the
range 1.6--20 cm from the beam axis. No significant excess is observed above
background, and upper limits are set on the production rate in a hidden-valley
benchmark model for a range of Higgs boson masses and NLLP masses and
lifetimes.
|
On the Stack Overflow (SO) Q&A site, users often request solutions to their
code-related problems (e.g., errors, unexpected behavior). Unfortunately, they
often miss required code snippets during their question submission, which could
prevent their questions from getting prompt and appropriate answers. In this
study, we conduct an empirical study investigating the cause & effect of
missing code snippets in SO questions whenever required. Here, our
contributions are threefold. First, we analyze how the presence or absence of
required code snippets affects the correlation between question types (missed
code, included code after requests & had code snippets during submission) and
corresponding answer meta-data (e.g., presence of an accepted answer).
According to our analysis, the chance of getting accepted answers is three
times higher for questions that include required code snippets during their
question submission than those that missed the code. We also investigate
whether the confounding factors (e.g., user reputation) affect questions
receiving answers besides the presence or absence of required code snippets. We
found that such factors do not hurt the correlation between the presence or
absence of required code snippets and answer meta-data. Second, we surveyed 64
practitioners to understand why users miss necessary code snippets. About 60%
of them agree that users are unaware of whether their questions require any
code snippets. Third, we thus extract four text-based features (e.g., keywords)
and build six ML models to identify the questions that need code snippets. Our
models can predict the target questions with 86.5% precision, 90.8% recall,
85.3% F1-score, and 85.2% overall accuracy. Our work has the potential to save
significant time in programming question-answering and improve the quality of
the valuable knowledge base by decreasing unanswered and unresolved questions.
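As a toy sketch of the flavor of the third contribution (the study itself uses four text-based features and six ML models; the examples and labels below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = ["I get a NullPointerException when calling my method",
             "What is the difference between REST and SOAP?",
             "My loop prints the wrong index, here is what I tried",
             "Which book should I read to learn Haskell?"]
needs_code = [1, 0, 1, 0]   # hypothetical labels: 1 = snippet required

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(questions, needs_code)
print(clf.predict(["Getting IndexError in my parsing function"]))
```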
|
We address two bottlenecks for concise QBF encodings of maker-breaker
positional games, like Hex and Tic-Tac-Toe. Our baseline is a QBF encoding with
explicit variables for board positions and an explicit representation of
winning configurations. The first improvement is inspired by lifted planning
and avoids variables for explicit board positions, introducing a universal
quantifier representing a symbolic board state. The second improvement
represents the winning configurations implicitly, exploiting their structure.
The paper evaluates the size of several encodings, depending on board size and
game depth. It also reports the performance of QBF solvers on these encodings.
We evaluate the techniques on Hex instances and also apply them to Harary's
Tic-Tac-Toe. In particular, we study scalability to 19$\times$19 boards, played
in human Hex tournaments.
|
Character rigging is the process of endowing a character with a set of custom manipulators and controls that make it easy for animators to animate. These controls consist of simple joints, handles, or even separate character selection windows. This research paper presents an automated rigging system for quadruped characters with custom controls and manipulators for animation. The full character rigging mechanism is procedurally driven, based on various principles and requirements used by riggers and animators. The automation is achieved initially by creating widgets according to the character type. These widgets can then be customized by the rigger according to the character's shape, height, and proportions. Joint locations for each body part are then calculated and the widgets are replaced programmatically. Finally, a complete and fully operational, procedurally generated character control rig is created and attached to the underlying skeletal joints. The functionality and feasibility of the rig were analyzed against various sources of actual character motion, and a requirements criterion was met. The final rigged character provides an efficient and easy-to-manipulate control rig with no lag at high frame rates.
|
A spinor representation of the group GL(4,R) on a special spinor space is developed. The representation space has the structure of a fiber space, with the space of diagonal metrics as the base and the standard spinor space as the typical fiber. Non-isometric motions of space-time entail spinor transformations which are represented by translations over the fibration base in addition to the standard $Spin(4,C)$ representation.
|
Although Marcus theory is widely used to describe charge transfer in
molecular systems, in its usual form it is restricted to transfer from one
molecule to another. If a charge is delocalised across multiple donor
molecules, this approach requires us to treat the entire donor aggregate as a
unified supermolecule, leading to potentially expensive quantum-chemical
calculations and making it more difficult to understand how the aggregate
components contribute to the overall transfer. Here, we show that it is
possible to describe charge transfer between groups of molecules in terms of
the properties of the constituent molecules and couplings between them,
obviating the need for expensive supermolecular calculations. We use the
resulting theory to show that charge delocalisation between molecules in either
the donor or acceptor aggregates can enhance the rate of charge transfer
through a process we call supertransfer (or suppress it through subtransfer).
The rate can also be enhanced above what is possible with a single molecule by
judiciously tuning energy levels and reorganisation energies. We also describe
bridge-mediated charge transfer between delocalised molecular aggregates. The
equations of generalised Marcus theory are in closed form, providing
qualitative insight into the impact of delocalisation on charge dynamics in
molecular systems.
|
The structure of the particle-hole nucleus 132Sb provides a direct source of
information on the effective neutron-proton interaction in the 132Sn region. We
have performed a shell-model calculation for this nucleus using a realistic
effective interaction derived from the Bonn A nucleon-nucleon potential. The
results are in very good agreement with the experimental data, evidencing the
reliability of our effective interaction matrix elements both with isospin T=1
and T=0.
|
Fractional Chern insulators (FCIs) are lattice generalizations of the
conventional fractional quantum Hall effect (FQHE) in two-dimensional (2D)
electron gases. They typically arise in a 2D lattice without time-reversal
symmetry when a nearly flat Bloch band with nonzero Chern number is partially
occupied by strongly interacting particles. Band topology and interactions
endow FCIs with exotic topological orders which are characterized by the precisely
quantized Hall conductance, robust ground-state degeneracy on high-genus
manifolds, and fractionalized quasiparticles. Since in principle FCIs can exist
at zero magnetic field and be protected by a large energy gap, they provide a
potentially experimentally more accessible avenue for observing and harnessing
FQHE phenomena. Moreover, the interplay between FCIs and lattice-specific
effects that do not exist in the conventional continuum FQHE poses new
theoretical challenges. In this chapter, we provide a general introduction of
the theoretical models and numerical simulation of FCIs, then pay special attention to the recent developments of this field in moir\'e materials while
also commenting on potential alternative implementations in cold atom systems.
With a plethora of exciting theoretical and experimental progress, topological
flat bands in moir\'e materials such as magic-angle twisted bilayer graphene on
hexagonal boron nitride have indeed turned out to be a remarkably versatile
platform for FCIs featuring an intriguing interplay between topology, geometry,
and interactions.
|
We use the phase space position-velocity ($x,v$) to deal with the statistical
properties of velocity dependent dynamical systems, like dissipative ones.
Within this approach, we study the statistical properties of an ensemble of
harmonic oscillators in a linear, weakly dissipative medium. Using the Debye model
of a crystal, we calculate at first order in the dissipative parameter the
entropy, free energy, internal energy, equation of state and specific heat
using the classical and quantum approaches. For the classical approach we found that the entropy, the equation of state, and the free energy depend on the dissipative parameter, but the internal energy and specific heat do not depend on it. For the quantum case, we found that all the thermodynamical quantities
depend on this parameter.
|
The ever-increasing competitiveness in the academic publishing market
incentivizes journal editors to pursue higher impact factors. This translates
into journals becoming more selective, and, ultimately, into higher publication
standards. However, the fixation on higher impact factors leads some journals
to artificially boost impact factors through the coordinated effort of a
"citation cartel" of journals. "Citation cartel" behavior has become
increasingly common in recent years, with several instances being reported.
Here, we propose an algorithm -- named CIDRE -- to detect anomalous groups of
journals that exchange citations at excessively high rates when compared
against a null model that accounts for scientific communities and journal size.
CIDRE detects more than half of the journals suspended from Journal Citation
Reports due to anomalous citation behavior in the year of suspension or in
advance. Furthermore, CIDRE detects many new anomalous groups, where the impact
factors of the member journals are lifted substantially higher by the citations
from other member journals. We describe a number of such examples in detail and
discuss the implications of our findings with regard to the current academic
climate.
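To give a feel for the kind of comparison involved (a deliberately simplified null model: CIDRE's actual null also accounts for scientific communities, and the citation counts below are invented):

```python
import numpy as np

# C[i, j]: citations from journal i to journal j (toy numbers).
C = np.array([[ 0, 40,  2],
              [35,  0,  3],
              [ 4,  5,  0]], dtype=float)

out_s, in_s = C.sum(axis=1), C.sum(axis=0)
expected = np.outer(out_s, in_s) / C.sum()   # size-aware null expectation
excess = C / np.maximum(expected, 1e-9)      # observed / expected
print(np.round(excess, 2))                   # pairs far above 1 look anomalous
```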
|
Path integrals make it possible to compute in detail the routes of particles from particle sources through slit gratings and further to detectors. The path integral for a particle passing through a Gaussian slit results in a Gaussian wavepacket. The wavepackets prepared on N slits and superposed together give rise to an interference pattern in the near-field zone. It transforms to diffraction in the far-field zone, represented by divergent principal rays, with all principal rays separated from each other by (N-2) subsidiary rays. The Bohmian trajectories in the near-field zone of N-slit gratings show wavy behavior, and they become straight in the far-field zone. The trajectories show zigzag behavior on the interference Talbot carpet (the ratio of the particle wavelength to the distance between slits is much smaller than 1 and N>>1). Interference from the N-slit gratings is simulated by scattering monochromatic neutrons (wavelength = 0.5 nm). We have also considered simulation of the interference fringes arising from the scattering of fullerene molecules on an N-slit grating (according to the real experiment described in e-print 1001.0468).
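For reference, the packet behind a Gaussian slit takes the textbook free-spreading form (standard result with the convention $\psi(x,0)\propto e^{-x^{2}/2\sigma^{2}}$, where $\sigma$ is the slit width and $m$ the particle mass): \begin{align*} \psi(x,t)\propto\exp\!\left(-\frac{x^{2}}{2\sigma^{2}\,(1+\mathrm{i}\hbar t/m\sigma^{2})}\right), \qquad \sigma(t)=\sigma\sqrt{1+\left(\frac{\hbar t}{m\sigma^{2}}\right)^{2}}, \end{align*} and the superposition of N such packets, one per slit, produces the near-field interference pattern described above.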
|
Within the framework of the Wigner function formalism, we derive a coalescence model to be used with a hydrodynamical picture of the reaction zone.
|
There are a number of well-established methods such as principal components
analysis (PCA) for automatically capturing systematic variation due to latent
variables in large-scale genomic data. PCA and related methods may directly
provide a quantitative characterization of a complex biological variable that
is otherwise difficult to precisely define or model. An unsolved problem in
this context is how to systematically identify the genomic variables that are
drivers of systematic variation captured by PCA. Principal components (and
other estimates of systematic variation) are directly constructed from the
genomic variables themselves, making measures of statistical significance
artificially inflated when using conventional methods due to over-fitting. We
introduce a new approach called the jackstraw that allows one to accurately
identify genomic variables that are statistically significantly associated with
any subset or linear combination of principal components (PCs). The proposed
method can greatly simplify complex significance testing problems encountered
in genomics and can be utilized to identify the genomic variables significantly
associated with latent variables. Using simulation, we demonstrate that our
method attains accurate measures of statistical significance over a range of
relevant scenarios. We consider yeast cell-cycle gene expression data, and show
that the proposed method can be used to straightforwardly identify
statistically significant genes that are cell-cycle regulated. We also analyze
gene expression data from post-trauma patients, allowing the gene expression
data to provide a molecularly-driven phenotype. We find a greater enrichment
for inflammatory-related gene sets compared to using a clinically defined
phenotype. The proposed method provides a useful bridge between large-scale
quantifications of systematic variation and gene-level significance analyses.
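A conceptual sketch of the jackstraw logic (not the authors' implementation; the data, the number of permuted rows, and the iteration count are toy choices): a few rows are permuted to break their association with the PCs, the PCs are recomputed, and the statistics of the permuted rows build the null distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))                    # genes x samples (synthetic)
X[:50] += np.sin(np.linspace(0, 6, 40))           # 50 rows carry PC signal

def stat(data):
    """Squared correlation of each row with the top principal component."""
    d = data - data.mean(axis=1, keepdims=True)
    pc = np.linalg.svd(d, full_matrices=False)[2][0]   # top PC, unit norm
    return (d @ pc) ** 2 / (d ** 2).sum(axis=1)

obs = stat(X)
null, s, B = [], 25, 100                          # permute s rows, B rounds
for _ in range(B):
    Xs, rows = X.copy(), rng.choice(500, s, replace=False)
    for r in rows:
        Xs[r] = rng.permutation(Xs[r])            # break row-PC association
    null.extend(stat(Xs)[rows])
null = np.asarray(null)
pvals = (1 + (null[None, :] >= obs[:, None]).sum(axis=1)) / (1 + null.size)
print("smallest p-values:", np.sort(pvals)[:5])
```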
|
The gas pixel detector (GPD) is designed and developed for high-sensitivity
astronomical X-ray polarimetry, which is a new window about to open in a few
years. Due to the small mass, low power, and compact geometry of the GPD, we
propose a CubeSat mission Polarimeter Light (PolarLight) to demonstrate and
test the technology directly in space. There is no optics but a collimator to
constrain the field of view to 2.3 degrees. Filled with pure dimethyl ether
(DME) at 0.8 atm and sealed by a beryllium window 100 microns thick, with a
sensitive area of about 1.4 mm by 1.4 mm, PolarLight allows us to observe the
brightest X-ray sources on the sky, with a count rate of, e.g., ~0.2 counts/s
from the Crab nebula. PolarLight is 1U in size and mounted in a 6U CubeSat,
which was launched into a low Earth Sun-synchronous orbit on October 29, 2018,
and is currently under test. More launches with improved designs are planned in
2019. These tests will help increase the technology readiness for future
missions such as the enhanced X-ray Timing and Polarimetry (eXTP), better
understand the orbital background, and may help constrain the physics with
observations of the brightest objects.
|
We consider the electromagnetic field in the presence of polarizable point
dipoles. In the corresponding effective Maxwell equation these dipoles are
described by three dimensional delta function potentials. We review the
approaches handling these: the selfadjoint extension,
regularization/renormalisation and the zero range potential methods. Their
close interrelations are discussed in detail and compared with the
electrostatic approach which drops the contributions from the self fields. For
a homogeneous two dimensional lattice of dipoles we write down the complete
solutions, which allow, for example, for an easy numerical treatment of the
scattering of the electromagnetic field on the lattice or for investigating
plasmons. Using these formulas, we consider the limiting case of vanishing
lattice spacing. For a scalar field and for the TE polarization of the
electromagnetic field this transition is smooth and reproduces the results known for the continuous sheet. In particular, for the TE polarization, we
reproduce the results known from the hydrodynamic model describing a two
dimensional electron gas. For the TM polarization, for polarizability parallel
and perpendicular to the lattice, in both cases, the transition is singular.
For the parallel polarizability this is surprising and different from the
hydrodynamic model. For perpendicular polarizability this is what was known in
literature. We also investigate the case when the transition is done with
dipoles described by smeared delta function, i.e., keeping a regularization.
Here, for TM polarization for parallel polarizability, when subsequently doing
the limit of vanishing lattice spacing, we reproduce the result known from the
hydrodynamic model. In case of perpendicular polarizability we need an
additional renormalization to reproduce the result obtained previously by
stepping back from the dipole approximation.
|
The observation of our home galaxy, the Milky Way (MW), is made difficult by
our internal viewpoint. The Gaia survey that contains around 1.6 billion star
distances is the new flagship of MW structure studies and can be combined with other
large-scale infrared (IR) surveys to provide unprecedented long distance
measurements inside the Galactic plane. Concurrently, the past two decades have
seen an explosion of the use of Machine Learning (ML) methods that are also
increasingly employed in astronomy.
I will first describe the construction of an ML classifier to improve a widely adopted classification scheme for Young Stellar Object (YSO) candidates. Stars being born in dense interstellar environments, the youngest ones that have not had time to move away from their formation location are a probe of the densest
structures of the interstellar medium. The combination of YSO identification
and Gaia distance measurements then enables the reconstruction of dense cloud
structures in 3D. Our ML classifier is based on Artificial Neural Networks
(ANN) and uses IR data from the Spitzer space telescope to reconstruct the YSO
classification automatically from given examples.
In a second part, I will propose a new method for reconstructing the 3D
extinction distribution of the MW based on Convolutional Neural Networks (CNN).
The CNN is trained using a large-scale Galactic model, the Besan\c{c}on Galaxy
Model, and learns to infer the extinction distance distribution by comparing
results of the model with observed data. This method is able to resolve distant
structures up to 10 kpc with a formal resolution of 100 pc, and was found to be
capable of combining 2MASS and Gaia datasets without the necessity of a cross
match. The results from this combined prediction are encouraging and open the
possibility for future full Galactic plane prediction using a larger
combination of various datasets.
|
In this Letter we propose a systematic approach for detecting and calculating
preserved measures and integrals of a rational map. The approach is based on
the use of cofactors and Discrete Darboux Polynomials and relies on the use of
symbolic algebra tools. Given sufficient computing power, all rational
preserved integrals can be found.
We show, in two examples, how to use this method to detect and determine
preserved measures and integrals of the considered rational maps.
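To illustrate the mechanism on a deliberately trivial toy map (our own example, not one from the Letter): a polynomial $P$ with $P(\varphi(x,y)) = C(x,y)\,P(x,y)$ is a discrete Darboux polynomial with cofactor $C$, and a product of such polynomials whose cofactors multiply to $1$ is a preserved integral.

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = {x: 2*x, y: y/2}          # toy map (x, y) -> (2x, y/2)

def cofactor(P):
    """Return C such that P(phi(x, y)) = C(x, y) * P(x, y)."""
    return sp.simplify(P.subs(phi, simultaneous=True) / P)

for P in [x, y, x*y]:
    print(P, '-> cofactor:', cofactor(P))
# x -> 2, y -> 1/2, x*y -> 1: the cofactors of x and y cancel,
# so H = x*y is a preserved integral of phi.
```

For a genuine rational map one would instead posit a polynomial ansatz of fixed degree and solve the resulting linear system for its coefficients and cofactor with the same symbolic tools.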
|
Inclusive single-particle production cross sections have been calculated
including higher-order QCD corrections. Transverse-momentum and rapidity
distributions are presented and the scale dependence is studied. The results
are compared with experimental data from the CERN S(p anti-p)S Collider and the
Fermilab Tevatron.
|
Denoising diffusion models have emerged as one of the most powerful
generative models in recent years. They have achieved remarkable success in
many fields, such as computer vision, natural language processing (NLP), and
bioinformatics. Although there are a few excellent reviews on diffusion models
and their applications in computer vision and NLP, there is a lack of an
overview of their applications in bioinformatics. This review aims to provide a
rather thorough overview of the applications of diffusion models in
bioinformatics to aid their further development in bioinformatics and
computational biology. We start with an introduction of the key concepts and
theoretical foundations of three cornerstone diffusion modeling frameworks
(denoising diffusion probabilistic models, noise-conditioned scoring networks,
and stochastic differential equations), followed by a comprehensive description
of diffusion models employed in the different domains of bioinformatics,
including cryo-EM data enhancement, single-cell data analysis, protein design
and generation, drug and small molecule design, and protein-ligand interaction.
The review is concluded with a summary of the potential new developments and
applications of diffusion models in bioinformatics.
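For orientation, a schematic training step for the first of the three frameworks (denoising diffusion probabilistic models) is sketched below. The denoiser is a placeholder MLP and the data are random stand-ins; real bioinformatics applications use domain-specific architectures and conditioning.

```python
# Schematic DDPM training step: noise data with the closed-form forward
# process, then fit a network to predict the injected noise.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = torch.nn.Sequential(torch.nn.Linear(17, 64), torch.nn.ReLU(),
                               torch.nn.Linear(64, 16))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

x0 = torch.randn(128, 16)                       # stand-in clean data
t = torch.randint(0, T, (128,))
eps = torch.randn_like(x0)
# forward process: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
xt = alpha_bar[t].sqrt().unsqueeze(1)*x0 + (1 - alpha_bar[t]).sqrt().unsqueeze(1)*eps
# condition on t (crudely appended as an extra input feature here)
pred = denoiser(torch.cat([xt, t.float().unsqueeze(1)/T], dim=1))
loss = torch.nn.functional.mse_loss(pred, eps)  # noise-prediction objective
opt.zero_grad(); loss.backward(); opt.step()
```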
|
The goal of this article is to study the limit of the empirical distribution
induced by a mutation-selection multi-allelic Moran model, whose dynamics are
given by a continuous-time irreducible Markov chain. The rate matrix driving
the mutation is assumed irreducible and the selection rates are assumed
uniformly bounded. The paper is divided into two parts. The first one deals
with processes with general selection rates. For this case we are able to prove
the propagation of chaos in $\mathbb{L}^p$ over the compacts, with speed of
convergence of order $1/\sqrt{N}$. Further on, we consider a specific type of
selection that we call additive selection. Essentially, we assume that the
selection rate can be decomposed as the sum of three terms: a term depending on
the allelic type of the parent (which can be understood as selection at death),
another term depending on the allelic type of the descendant (which can be
understood as selection at birth) and a third term which is symmetric. Under
this setting, our results include a uniform-in-time bound for the propagation
of chaos in $\mathbb{L}^p$ of order $1/\sqrt{N}$, and a proof of asymptotic
normality, with zero mean and explicit variance, for the approximation error
between the empirical distribution and its limit as the number of individuals
tends towards infinity. Additionally, we explore the
interpretation of the Moran model with additive selection as a particle process
whose empirical distribution approximates a quasi-stationary distribution, in
the same spirit as the Fleming--Viot particle systems. We then address the
problem of minimising the asymptotic quadratic error, when the time and the
number of particles go to infinity.
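The two propagation-of-chaos statements can be recorded schematically as follows (this only restates the rates quoted in the abstract; the exact constants and norms are the paper's, and $\mu^N_t$, $\mu_t$ denote the empirical distribution and its limit):

```latex
% General selection: L^p propagation of chaos on compact time intervals.
\sup_{t\in[0,T]} \mathbb{E}\Big[\big|\mu^{N}_{t}(f)-\mu_{t}(f)\big|^{p}\Big]^{1/p}
  \;\le\; \frac{C_{T}\,\|f\|_{\infty}}{\sqrt{N}} .
% Additive selection: the same bound holds uniformly in time,
\sup_{t\ge 0} \mathbb{E}\Big[\big|\mu^{N}_{t}(f)-\mu_{t}(f)\big|^{p}\Big]^{1/p}
  \;\le\; \frac{C\,\|f\|_{\infty}}{\sqrt{N}} .
```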
|
We construct algebraic-geometric families of genus one (i.e. elliptic)
current and affine Lie algebras of Krichever-Novikov type. These families
deform the classical current and affine Kac-Moody Lie algebras, respectively. The
construction is induced by the geometric process of degenerating the elliptic
curve to singular cubics. If the finite-dimensional Lie algebra defining the
infinite dimensional current algebra is simple then, even if restricted to
local families, the constructed families are non-equivalent to the trivial
family. In particular, we show that the current algebra is geometrically not
rigid, despite its formal rigidity. This shows that in the infinite-dimensional
Lie algebra case the relations between geometric deformations, formal
deformations and Lie algebra two-cohomology are not as close as in the
finite-dimensional case. The constructed families are e.g. of relevance in the
global operator approach to the Wess-Zumino-Witten-Novikov models appearing in
the quantization of Conformal Field Theory.
|
Nutrition information is crucial in precision nutrition and the food
industry. The current food composition compilation paradigm relies on laborious
and experience-dependent methods. However, these methods struggle to keep up
with the dynamic consumer market, resulting in delayed and incomplete nutrition
data. In addition, earlier machine learning methods overlook the information in
food ingredient statements or ignore the features of food images. To this end,
we propose a novel vision-language model, UMDFood-VL, using front-of-package
labeling and product images to accurately estimate food composition profiles.
In order to empower model training, we established UMDFood-90k, the most
comprehensive multimodal food database to date, containing 89,533 samples, each
labeled with image and text-based ingredient descriptions and 11 nutrient
annotations. UMDFood-VL achieves a macro-AUCROC of up to 0.921 for fat-content
estimation, significantly higher than existing baseline methods and sufficient
for the practical requirements of food composition compilation. Moreover, for
up to 82.2% of the selected products, the discrepancy between model estimates
and chemical analysis results is less than 10%. This performance suggests that
the approach can generalize to other food- and nutrition-related data
compilation tasks and catalyze the adoption of generative AI-based technology
in other food applications that require personalization.
|
We study the excitation dynamics of a single molecular nanomagnet by static
and pulsed magnetic fields. Based on a stability analysis of the classical
magnetization dynamics, we identify analytically the field parameters for which
energy is stochastically pumped into the system, in which case the
magnetization undergoes a diffusive, irreversible large-angle deflection. An
approximate analytical expression for the diffusion constant in terms of the
field parameters is given and assessed by full numerical calculations.
|
There is a strong consensus that combining the versatility of machine
learning with the assurances given by formal verification is highly desirable.
It is much less clear what verified machine learning should mean exactly. We
consider this question from the (unexpected?) perspective of computable
analysis. This allows us to define the computational tasks underlying verified
ML in a model-agnostic way, and show that they are in principle computable.
|
We design a new family of hybrid CNN-ViT neural networks, named FasterViT,
with a focus on high image throughput for computer vision (CV) applications.
FasterViT combines the benefits of fast local representation learning in CNNs
and global modeling properties in ViT. Our newly introduced Hierarchical
Attention (HAT) approach decomposes global self-attention with quadratic
complexity into a multi-level attention with reduced computational costs. We
benefit from efficient window-based self-attention. Each window has access to
dedicated carrier tokens that participate in local and global representation
learning. At a high level, global self-attention enables efficient
cross-window communication at lower cost. FasterViT achieves a SOTA
Pareto-front in terms of accuracy and image throughput. We have extensively
validated its effectiveness on various CV tasks including classification,
object detection and segmentation. We also show that HAT can be used as a
plug-and-play module for existing networks and enhance them. We further
demonstrate significantly faster and more accurate performance than competitive
counterparts for images with high resolution. Code is available at
https://github.com/NVlabs/FasterViT.
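The snippet below is a toy sketch of the carrier-token idea (local attention inside windows, plus a small set of carrier tokens attending globally). It is loosely inspired by the description above and is not the official implementation; see the linked repository for that. All sizes and the module name are illustrative.

```python
# Toy window attention with carrier tokens: each window attends locally
# together with its carrier tokens; carrier tokens then attend globally.
import torch
import torch.nn as nn

class ToyHAT(nn.Module):
    def __init__(self, dim=64, num_windows=4, carriers_per_window=1):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, 4, batch_first=True)
        self.carriers = nn.Parameter(torch.randn(num_windows * carriers_per_window, dim))
        self.nw, self.cpw = num_windows, carriers_per_window

    def forward(self, x):                      # x: (B, num_windows*win_len, dim)
        B, N, D = x.shape
        win_len = N // self.nw
        xw = x.view(B * self.nw, win_len, D)
        # prepend each window's carrier tokens, then do local attention
        c = self.carriers.view(self.nw, self.cpw, D).repeat(B, 1, 1)
        seq = torch.cat([c, xw], dim=1)
        seq, _ = self.local_attn(seq, seq, seq)
        c, xw = seq[:, :self.cpw], seq[:, self.cpw:]
        # carrier tokens exchange information globally (cheap: few tokens)
        cg = c.reshape(B, self.nw * self.cpw, D)
        cg, _ = self.global_attn(cg, cg, cg)
        return xw.reshape(B, N, D), cg

x = torch.randn(2, 4 * 16, 64)
out, carriers = ToyHAT()(x)
print(out.shape, carriers.shape)   # (2, 64, 64) and (2, 4, 64)
```

Because only a handful of carrier tokens attend globally, cross-window communication avoids the quadratic cost of full global self-attention.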
|
Quantum computers have the potential to outperform classical computers for
some complex computational problems. However, current quantum computers (e.g.,
from IBM and Google) have inherent noise that results in errors in the outputs
of quantum software executing on the quantum computers, affecting the
reliability of quantum software development. The industry is increasingly
interested in machine learning (ML)-based error mitigation techniques, given
their scalability and practicality. However, existing ML-based techniques have
limitations, such as only targeting specific noise types or specific quantum
circuits. This paper proposes a practical ML-based approach, called Q-LEAR,
with a novel feature set, to mitigate noise errors in quantum software outputs.
We evaluated Q-LEAR on eight quantum computers and their corresponding noisy
simulators, all from IBM, and compared Q-LEAR with a state-of-the-art ML-based
approach taken as baseline. Results show that, compared to the baseline, Q-LEAR
achieved a 25% average improvement in error mitigation on both real quantum
computers and simulators. We also discuss the implications and practicality of
Q-LEAR, which, we believe, is valuable for practitioners.
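A generic sketch of the ML-based error-mitigation pattern is shown below: learn to predict the error of a noisy expectation value from circuit features, then subtract the prediction. The feature set here is hypothetical and the data synthetic; Q-LEAR's novel feature set is its own contribution and is described in the paper.

```python
# Generic ML-based error mitigation sketch (hypothetical features,
# synthetic data) in the spirit of, but not identical to, Q-LEAR.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 2000
feats = np.column_stack([
    rng.integers(2, 30, n),     # circuit depth
    rng.integers(0, 40, n),     # number of two-qubit gates
    rng.uniform(-1, 1, n),      # noisy measured expectation value
])
# stand-in ground truth: error grows with depth and two-qubit gate count
error = 0.01*feats[:, 0] + 0.005*feats[:, 1] + rng.normal(0, 0.02, n)

model = GradientBoostingRegressor().fit(feats[:1500], error[:1500])
mitigated = feats[1500:, 2] - model.predict(feats[1500:])
print("predicted-error MAE:",
      np.abs(model.predict(feats[1500:]) - error[1500:]).mean())
```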
|
The reconstruction mechanisms built by the human auditory system during sound
reconstruction are still a matter of debate. The purpose of this study is to
propose a mathematical model of sound reconstruction based on the functional
architecture of the auditory cortex (A1). The model is inspired by the
geometrical modelling of vision, which has undergone a great development in the
last ten years. There are, however, fundamental dissimilarities, due to the
different role played by time and the different group of symmetries. The
algorithm transforms the degraded sound into an 'image' in the time-frequency
domain via a short-time Fourier transform. This image is then lifted to the
Heisenberg group and reconstructed via a Wilson-Cowan integro-differential
equation. Preliminary numerical experiments are provided, showing the good
reconstruction properties of the algorithm on synthetic sounds concentrated
around two frequencies.
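A toy version of the pipeline is sketched below: build the time-frequency 'image' with an STFT, then run a discretized Wilson-Cowan-type evolution. The paper's interaction kernel lives on the Heisenberg group; a plain Gaussian kernel is used here only as a stand-in, and the two-frequency signal mimics the synthetic sounds mentioned above.

```python
# Toy pipeline: STFT "image" of a degraded two-frequency sound, followed by
# a discretized Wilson-Cowan-type update  da/dt = -a + sigma(K*a + input).
import numpy as np
from scipy.signal import stft
from scipy.ndimage import gaussian_filter

fs = 8000
t = np.arange(0, 1.0, 1/fs)
sound = np.sin(2*np.pi*440*t) + np.sin(2*np.pi*660*t)   # two frequencies
sound[2000:2400] = 0.0                                   # degraded segment

_, _, Z = stft(sound, fs=fs, nperseg=256)
a = np.abs(Z)                                            # time-frequency image

for _ in range(50):   # stand-in Gaussian kernel instead of the paper's
    a = a + 0.1 * (-a + np.tanh(gaussian_filter(a, sigma=1.0) + np.abs(Z)))
print(a.shape)
```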
|
After more than 30 years of validation of Moore's law, the CMOS technology
has already entered the nanoscale (sub-100nm) regime and faces strong
limitations. The nanowire transistor is one candidate which has the potential
to overcome the problems caused by short-channel effects in SOI MOSFETs and has
gained significant attention from both device and circuit developers. In
addition to the effective suppression of short channel effects due to the
improved gate strength, the multi-gate NWFETs show excellent current drive and
have the merit that they are compatible with conventional CMOS processes. To
simulate these devices, accurate modeling and calculations based on quantum
mechanics are necessary to assess their performance limits, since
cross-sections of the multigate NWFETs are expected to be a few nanometers wide
in their ultimate scaling. In this paper we have explored the use of ATLAS,
including the Bohm Quantum Potential (BQP), for simulating and studying the
short-channel behaviour of nanowire FETs.
|
The relations between the hidden symmetries of the six-dimensional
pseudo-Euclidean space with signature (+,+,+,-,-,-) and the conserved quantum
characteristics of elementary particles are established. The hidden symmetries
are brought out by the various forms of representation of the pseudo-Euclidean
space metric with the aid of spinors and hyperbolic complex numbers. Using the
emerging hidden symmetry groups one can disclose such conserved quantum
characteristics as spin, isospin, electric and baryon charges, hypercharge,
color and flavor. One can also predict the exact number of such conserved
quantum characteristics of quarks as color and flavor.
|
Tech-leading organizations are embracing the forthcoming artificial
intelligence revolution. Intelligent systems are replacing and cooperating with
traditional software components. Thus, the development processes and standards
of software engineering ought to be applied to artificial intelligence systems
as well. This study aims to understand the processes by which
artificial intelligence-based systems are developed and how state-of-the-art
lifecycle models fit the current needs of the industry. We conducted an
exploratory case study at ING, a global bank with a strong European base. We
interviewed 17 people with different roles and from different departments
within the organization. We have found that the following stages have been
overlooked by previous lifecycle models: data collection, feasibility study,
documentation, model monitoring, and model risk assessment. Our work shows that
the real challenges of applying Machine Learning go much beyond sophisticated
learning algorithms - more focus is needed on the entire lifecycle. In
particular, regardless of the existing development tools for Machine Learning,
we observe that they are still not meeting the particularities of this field.
|
We numerically evaluate the Casimir interaction energy for configurations
involving two perfectly conducting eccentric cylinders and a cylinder in front
of a plane. We consider in detail several special cases. For quasi-concentric
cylinders, we analyze the convergence of a perturbative evaluation based on
sparse matrices. For concentric cylinders, we obtain analytically the
corrections to the proximity force approximation up to second order, and we
present an improved numerical procedure to evaluate the interaction energy at
very small distances. Finally, we consider the configuration of a cylinder in
front of a plane. We first show numerically that, in the appropriate limit, the
Casimir energy for this configuration can be obtained from that of two
eccentric cylinders. Then we compute the interaction energy at small distances,
and compare the numerical results with the analytic predictions for the first
order corrections to the proximity force approximation.
|
Compensation mechanisms are used to counterbalance the discomfort suffered by
users due to quality service issues. Such mechanisms are currently used for
different purposes in the electrical power and energy sector, e.g., power
quality and reliability. This paper proposes a compensation mechanism using EV
flexibility management of a set of charging sessions managed by a charging
point operator (CPO). Users' preferences and bilateral agreements with the CPO
are modelled via discrete utility functions for the energy not served. A
mathematical proof of the proposed compensation mechanism is given and applied
to a test scenario using historical data from an office building with a parking
lot in the Netherlands. Synthetic data for 400 charging sessions was generated
using multivariate elliptical copulas to capture the complex dependency
structures in EV charging data. Numerical results validate the usefulness of
the proposed compensation mechanism as an attractive measure both for the CPO
and the users in case of energy not served.
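For illustration, synthetic sessions can be drawn from a Gaussian copula, one member of the elliptical family the paper uses. The marginals and correlation matrix below are illustrative placeholders, not the ones fitted to the office-building data.

```python
# Sketch: synthetic EV charging sessions from a Gaussian copula.
# Dependence comes from the correlated normals; marginals are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])    # arrival/duration/energy dependence
z = rng.multivariate_normal(np.zeros(3), corr, size=400)
u = stats.norm.cdf(z)                 # uniform marginals, copula dependence

arrival  = stats.norm.ppf(u[:, 0], loc=8.5, scale=1.5)   # hour of day
duration = stats.gamma.ppf(u[:, 1], a=2.0, scale=3.0)    # hours
energy   = stats.gamma.ppf(u[:, 2], a=2.5, scale=5.0)    # kWh
sessions = np.column_stack([arrival, duration, energy])
print(sessions.shape)                 # (400, 3) synthetic charging sessions
```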
|
Microfluidics have shown great promise in multiple applications, especially
in biomedical diagnostics and separations. While the flow properties of these
microfluidic devices can be solved by numerical methods such as computational
fluid dynamics (CFD), the process of mesh generation and setting up a numerical
solver requires some domain familiarity, while more intuitive commercial
programs such as Fluent and StarCCM can be expensive. Hence, in this work, we
demonstrated the use of a U-Net convolutional neural network as a surrogate
model for predicting the velocity and pressure fields that would result for a
particular set of microfluidic filter designs. The surrogate model is fast,
easy to set up, and can be used to predict and assess the flow velocity and
pressure fields across the domain for new designs of interest via the input of
a geometry-encoding matrix. In addition, we demonstrate that the same
methodology can also be used to train a network to predict pressure based on
velocity data, and propose that this can be an alternative to numerical
algorithms for calculating pressure based on velocity measurements from
particle-image velocimetry measurements. Critically, in both applications, we
demonstrate prediction test errors of less than 1%, suggesting that this is
indeed a viable method.
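The sketch below shows the shape of such a surrogate: a minimal U-Net-style network (one downsampling level, one skip connection) mapping a geometry-encoding matrix to three output fields (u, v, p). The actual network in the work is deeper; sizes and names here are illustrative.

```python
# Minimal U-Net-style surrogate: geometry mask in, (u, v, p) fields out.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.head = nn.Conv2d(32, 3, 1)          # 3 channels: u, v, p

    def forward(self, g):                        # g: (B, 1, H, W) geometry mask
        e = self.enc(g)
        d = self.up(self.down(e))
        return self.head(torch.cat([e, d], 1))   # skip connection

geometry = torch.zeros(1, 1, 64, 64)
geometry[:, :, 20:44, 20:44] = 1.0               # a toy filter obstacle
fields = TinyUNet()(geometry)
print(fields.shape)                              # (1, 3, 64, 64)
```

Training would minimize a pixelwise loss against CFD-generated fields; the same architecture, fed velocity fields instead of geometry, covers the pressure-from-velocity application described above.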
|
Isotropy-violation statistics can highlight polarized galactic foregrounds
that contaminate primordial $B$-modes in the Cosmic Microwave Background (CMB).
We propose a particular isotropy-violation test and apply it to polarized
Planck 353 GHz data, constructing a map that indicates $B$-mode foreground
dust power over the sky. We build our main isotropy test in harmonic space via
the bipolar spherical harmonic basis, and our method helps us to identify the
least-contaminated directions. By this measure, there are regions of low
foreground in and around the BICEP field, near the South Galactic Pole, and in
the Northern Galactic Hemisphere. There is also a possible foreground feature
in the BICEP field. We compare our results to those based on the local power
spectrum, which is computed on discs using a version of the method of Planck
Int.~XXX (2016). The discs method is closely related to our isotropy-violation
diagnostic. We pay special care to the treatment of noise, including chance
correlations with the foregrounds. Currently we use our isotropy tool to assess
the cleanest portions of the sky, but in the future such methods will allow
isotropy-based null tests for foreground contamination in maps purported to
measure primordial $B$-modes, particularly in cases of limited frequency
coverage.
|
We present a novel method and analysis to train generative adversarial
networks (GAN) in a stable manner. As shown in recent analysis, training is
often undermined by the probability distribution of the data being zero on
neighborhoods of the data space. We notice that the distributions of real and
generated data should match even when they undergo the same filtering.
Therefore, to address the limited support problem we propose to train GANs by
using different filtered versions of the real and generated data distributions.
In this way, filtering does not prevent the exact matching of the data
distribution, while helping training by extending the support of both
distributions. As a filter, we consider adding samples from an arbitrary
distribution to the data, which corresponds to convolving the data
distribution with that arbitrary distribution. We also propose to learn the generation of
these samples so as to challenge the discriminator in the adversarial training.
We show that our approach results in a stable and well-behaved training of even
the original minimax GAN formulation. Moreover, our technique can be
incorporated in most modern GAN formulations and leads to a consistent
improvement on several common datasets.
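A stripped-down discriminator step illustrating the filtering idea is shown below: real and generated samples are corrupted with noise from the same distribution before the discriminator sees them, extending the support of both. Learning the noise generator, as proposed above, is omitted; a fixed Gaussian filter is used for brevity, and all sizes are illustrative.

```python
# Sketch: discriminator update where both real and fake samples pass
# through the same additive-noise "filter" before classification.
import torch

G = torch.nn.Sequential(torch.nn.Linear(8, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
D = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

real = torch.randn(64, 2) @ torch.tensor([[2.0, 0.0], [0.0, 0.1]])  # thin support
fake = G(torch.randn(64, 8)).detach()
noise = lambda x: x + 0.3*torch.randn_like(x)   # same filter on both inputs

bce = torch.nn.functional.binary_cross_entropy_with_logits
loss_d = bce(D(noise(real)), torch.ones(64, 1)) + \
         bce(D(noise(fake)), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```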
|
We describe a diffraction microscopy technique based on refractive optics to
study structural variations in crystals. The X-ray beam diffracted by a crystal
was magnified by beryllium parabolic refractive lenses and imaged on a 2D X-ray camera.
The microscopy setup was integrated into the 6-circle Huber diffractometer at
the ESRF beamline ID06. Our setup allowed us to visualize structural
imperfections with a resolution of approximately 1 micrometer. The
configuration, however, can easily be adapted for sub-micrometer resolution.
|
We report results of long timescale adaptive kinetic Monte Carlo simulations
aimed at identifying possible molecular reordering processes on both
proton-disordered and ordered (Fletcher) basal plane (0001) surfaces of
hexagonal ice. The simulations are based on a force field for flexible
molecules and span a time interval of up to 50 {\mu}s at a temperature of 100
K, which represents a lower bound to the temperature range of Earth's
atmosphere. Additional calculations using both density functional theory and an
ab initio based polarizable potential function are performed to test and refine
the force field predictions. Several distinct processes are found to occur
readily even at this low temperature, including concerted reorientation
(flipping) of neighboring surface molecules, which changes the pattern of
dangling H-atoms, and the formation of interstitial defects by the downwards
motion of upper-bilayer molecules. On the proton-disordered surface, one major
surface roughening process is observed that significantly disrupts the
crystalline structure. Despite much longer simulation times, such roughening
processes are not observed on the highly ordered Fletcher surface, which is
energetically more stable because of the weaker repulsive interaction between
neighboring dangling H-atoms. However, a more localized process takes place on
the Fletcher surface involving a surface molecule transiently leaving its
lattice site. The flipping process provides a facile pathway of increasing
proton-order and stabilizing the surface, supporting a predominantly
Fletcher-like ordering of low-temperature ice surfaces, but our simulations
also show that proton-disordered patches on the surface may induce significant
local reconstructions. Further, a subset of the molecules on the Fletcher
surface are susceptible to forming interstitial defects.
|
A spectroscopic analysis was carried out to clarify the properties of KIC
11145123 -- the first main-sequence star with a determination of
core-to-surface rotation -- based on spectra observed with the High Dispersion
Spectrograph (HDS) of the Subaru telescope. The atmospheric parameters ($T_{\rm
eff} = 7600$ K, $\log g = 4.2$, $\xi = 3.1$ km s$^{-1}$ and $ {\rm [Fe/H]} =
-0.71$ dex), the radial and rotation velocities, and elemental abundances were
obtained by analysing line strengths and fitting line profiles, which were
calculated with a 1D LTE model atmosphere. The main properties of KIC 11145123
are: (1) A low $ {\rm [Fe/H]} = -0.71\pm0.11$ dex and a high radial velocity of
$-135.4 \pm 0.2$ km s$^{-1}$. These are remarkable among late-A stars. Our best
asteroseismic models with this low [Fe/H] have a slightly enhanced helium abundance
and low masses of 1.4 M$_\odot$. All of these results strongly suggest that KIC
11145123 is a Population II blue straggler; (2) The projected rotation velocity
confirms the asteroseismically predicted slow rotation of the star; (3)
Comparisons of abundance patterns between KIC 11145123 and Am, Ap, and blue
stragglers show that KIC 11145123 is neither an Am star nor an Ap star, but has
abundances consistent with a blue straggler. We conclude that the remarkably
long 100-d rotation period of this star is a consequence of it being a blue
straggler, but both pathways for the formation of blue stragglers -- merger and
mass loss in a binary system -- pose difficulties for our understanding of the
exceedingly slow rotation. In particular, we show that there is no evidence of
any secondary companion star, and we put stringent limits on the possible mass
of any such purported companion through the phase modulation (PM) technique.
|
We study the magneto-elastic coupling behavior of paramagnetic chains in soft
polymer gels exposed to external magnetic fields. To this end, a laser scanning
confocal microscope is used to observe the morphology of the paramagnetic
chains together with the deformation field of the surrounding gel network. The
paramagnetic chains in soft polymer gels show rich morphological shape changes
under oblique magnetic fields, in particular a pronounced buckling deformation.
The details of the resulting morphological shapes depend on the length of the
chain, the strength of the external magnetic field, and the modulus of the gel.
Based on the observation that the magnetic chains are strongly coupled to the
surrounding polymer network, a simplified model is developed to describe their
buckling behavior. A coarse-grained molecular dynamics simulation model
featuring an increased matrix stiffness on the surfaces of the particles leads
to morphologies in agreement with the experimentally observed buckling effects.
|
Delta ($\delta$) sunspots sometimes host fast photospheric flows along the
central magnetic polarity inversion line (PIL). Here we study the strong
Doppler shift signature in the central penumbral light bridge of solar active
region NOAA 12673. Observations from the Helioseismic and Magnetic Imager (HMI)
indicate highly sheared, strong magnetic fields. Large Doppler shifts up to 3.2
km s$^{-1}$ appeared during the formation of the light bridge and persisted for
about 16 hours. A new velocity estimator, called DAVE4VMwDV, reveals fast
converging and shearing motion along the PIL from HMI vector magnetograms, and
recovers the observed Doppler signal much better than an old version of the
algorithm. The inferred velocity vectors are largely (anti-)parallel to the
inclined magnetic fields, suggesting that the observed Doppler shift contains
significant contribution from the projected, field-aligned flows.
High-resolution observations from the Hinode/Spectro-Polarimeter (SP) further
exhibit a clear correlation between the Doppler velocity and the cosine of the
magnetic inclination, which is in agreement with HMI results and consistent
with a field-aligned flow of about 9.6 km s$^{-1}$. The complex Stokes profiles
suggest significant gradients of physical variables along the line of sight. We
discuss the implications on the $\delta$-spot magnetic structure and the
flow-driving mechanism.
|
The presence of brown dwarfs in the dark galactic halo could be detected
through their gravitational lensing effect, and experiments under way monitor
about one million stars to observe a few lensing events per year. We show that
if the photon flux from a galaxy is measured with good precision, it is not
necessary to resolve the individual stars, and that, moreover, more events
could be observed.
|
Deflation techniques are typically used to shift isolated clusters of small
eigenvalues in order to obtain a tighter distribution and a smaller condition
number. Such changes induce a positive effect in the convergence behavior of
Krylov subspace methods, which are among the most popular iterative solvers for
large sparse linear systems. We develop a deflation strategy for symmetric
saddle point matrices by taking advantage of their underlying block structure.
The vectors used for deflation come from an elliptic singular value
decomposition relying on the generalized Golub-Kahan bidiagonalization process.
The block targeted by deflation is the off-diagonal one since it features a
problematic singular value distribution for certain applications. One example
is the Stokes flow in elongated channels, where the off-diagonal block has
several small, isolated singular values, depending on the length of the
channel. Applying deflation to specific parts of the saddle point system is
important when using solvers such as CRAIG, which operates on individual blocks
rather than the whole system. The theory is developed by extending the existing
framework for deflating square matrices before applying a Krylov subspace
method like MINRES. Numerical experiments confirm the merits of our strategy
and lead to interesting questions about using approximate vectors for
deflation.
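For orientation, the objects involved can be recorded schematically as follows; normalisation conventions for the elliptic singular triplets may differ from the paper's generalized Golub-Kahan setting.

```latex
% Symmetric saddle point matrix with off-diagonal block A targeted by
% deflation, and elliptic singular triplets (u_i, sigma_i, v_i) defined
% through the inner products induced by M and N:
\mathcal{K} \;=\;
\begin{pmatrix} M & A \\ A^{T} & 0 \end{pmatrix},
\qquad
A\,v_{i} \;=\; \sigma_{i}\, M\,u_{i},
\qquad
A^{T} u_{i} \;=\; \sigma_{i}\, N\,v_{i}.
```

Deflation is then built from the triplets with the smallest $\sigma_i$, the isolated singular values of $A$ that slow Krylov convergence in applications such as elongated Stokes channels.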
|
We investigate the behaviour of dissipative accreting matter close to a black
hole, as it gives rise to the important observational features of galactic and
extra-galactic black hole candidates. We find the complete set of global
solutions in the presence of viscosity and synchrotron cooling. We show that an
advective accretion flow can have a standing shock wave, and that the dynamics
of the shock are controlled by the dissipation parameters (both viscosity and cooling).
We study the effective region of the parameter space for standing as well as
oscillating shock. We find that shock front always moves towards the black hole
as the dissipation parameters are increased. However, viscosity and cooling
have opposite effects in deciding the solution topologies. We obtain two
critical cooling parameters that separate the nature of accretion solution.
|
The Gertsenshtein effect could in principle be used to detect a single
graviton by firing it through a region filled with a constant magnetic field
that enables its conversion to a photon, which can be efficiently detected via
standard techniques. The quantization of the gravitational field could then be
inferred indirectly. We show that for currently available single-photon
detector technology, the Gertsenshtein detector is generically inefficient,
meaning that the probability of detection is $\ll 1$. The Gertsenshtein
detector can become efficient on astrophysical scales for futuristic
single-photon detectors sensitive to frequencies in the Hz to kHz range. It is
not clear whether such devices are in principle possible.
|
We explore the evolution of the specific star formation rate (SSFR) for
3.6um-selected galaxies of different M_* in the COSMOS field. The average SFR
for sub-sets of these galaxies is estimated with stacked 1.4GHz radio continuum
emission. We separately consider the total sample and a subset of galaxies (SF)
that shows evidence for substantive recent star formation in the rest-frame
optical SED. At 0.2<z<3 both populations show a strong and M_*-independent
decrease in their SSFR towards z=0.2, best described by a power-law (1+z)^n,
where n~4.3 for all galaxies and n~3.5 for SF sources. The decrease appears to
have started at z>2, at least above 4x10^10M_Sun where our conclusions are most
robust. We find a tight correlation with power-law dependence, SSFR ∝
(M_*)^beta, between SSFR and M_* at all z. It tends to flatten below
~10^10M_Sun if quiescent galaxies are included; if they are excluded, a shallow
index beta_SFG ~ -0.4 fits the correlation. On average, higher M_* objects always have lower
SSFRs, also among SF galaxies. At z>1.5 there is tentative evidence for an
upper SSFR-limit that an average galaxy cannot exceed. It is suggested by a
flattening of the SSFR-M_* relation (also for SF sources), but affects massive
(>10^10M_Sun) galaxies only at the highest z. Below z=1.5 there thus is no
direct evidence that galaxies of higher M_* experience a more rapid waning of
their SSFR than lower M_* SF systems. In this sense, the data rule out any
strong 'downsizing'. We combine our results with recent measurements of the
galaxy (stellar) mass function in order to determine the characteristic mass of
a SF galaxy (M_*=10^(10.6\pm0.4)M_Sun). In this sense, too, there is no
'downsizing'. Our analysis constitutes the most extensive SFR density
determination with a single technique to z=3. Recent Herschel results are
consistent with our results, but rely on far smaller samples.
|
Breath analysis enables rapid, non-invasive diagnostics, as well as long-term
monitoring, of human health through the identification and quantification of
exhaled biomarkers. Here, for the first time, we demonstrate the remarkable
capabilities of mid-infrared (mid-IR) cavity-enhanced direct frequency comb
spectroscopy (CE-DFCS) applied to breath analysis. We simultaneously detect and
monitor as a function of time four breath biomarkers - CH$_3$OH, CH$_4$, H$_2$O
and HDO - and illustrate the feasibility of detecting at least six
more (H$_2$CO, C$_2$H$_6$, OCS, C$_2$H$_4$, CS$_2$ and NH$_3$) without
modifications to the experimental apparatus. We achieve ultra-high detection
sensitivity at the parts-per-trillion level. This is made possible by the
combination of the broadband spectral coverage of a frequency comb, the high
spectral resolution afforded by the individual comb teeth, and the sensitivity
enhancement resulting from a high-finesse cavity. Exploiting recent advances in
frequency comb, optical coating, and photodetector technologies, we can access
a large variety of biomarkers with strong carbon-hydrogen bond spectral
signatures in the mid-IR.
|
Let $K$ be an imaginary quadratic number field of class number one and
$\mathcal{O}_K$ be its ring of integers. We show that, if the arithmetic
functions $f, g:\mathcal{O}_K\rightarrow \mathbb{C}$ both have level of
distribution $\vartheta$ for some $0<\vartheta\leq 1/2$, then the Dirichlet
convolution $f*g$ also has level of distribution $\vartheta$.
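For orientation, the classical rational-integer analogue of the hypothesis reads as follows; the paper works with the corresponding adaptation to $\mathcal{O}_K$.

```latex
% f has level of distribution \vartheta (classical form): for every A > 0,
\sum_{q \le x^{\vartheta}} \max_{(a,q)=1}
  \Bigl| \sum_{\substack{n \le x \\ n \equiv a \ (\mathrm{mod}\ q)}} f(n)
   - \frac{1}{\varphi(q)} \sum_{\substack{n \le x \\ (n,q)=1}} f(n) \Bigr|
  \;\ll_{A}\; \frac{x}{(\log x)^{A}} .
```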
|
We present the results of the characterization of pixel modules composed of
75 um thick n-in-p sensors and ATLAS FE-I3 chips, interconnected with the SLID
(Solid Liquid Inter-Diffusion) technology. This technique, developed at
Fraunhofer-EMFT, is explored as an alternative to the bump-bonding process.
These modules have been designed to demonstrate the feasibility of a very
compact detector to be employed in the future ATLAS pixel upgrades, making use
of vertical integration technologies. This module concept also envisages
Inter-Chip-Vias (ICV) to extract the signals from the backside of the chips,
thereby achieving a higher fraction of active area with respect to the present
pixel module design. In the case of the demonstrator module, ICVs are etched
over the original wire bonding pads of the FE-I3 chip. In the modules with ICVs
the FE-I3 chips will be thinned down to 50 um. The status of the ICV
preparation is presented.
|
Computer Aided Design systems provide tools for building and manipulating
models of solid objects. Some also provide access to programming languages so
that parametrised designs can be expressed. There is a sharp distinction,
therefore, between building models, a concrete graphical editing activity, and
programming, an abstract, textual, algorithm-construction activity. The
recently proposed Language for Structured Design (LSD) was motivated by a
desire to combine the design and programming activities in one language. LSD
achieves this by extending a visual logic programming language to incorporate
the notions of solids and operations on solids. Here we investigate another
aspect of the LSD approach; namely, that by using visual logic programming as
the engine to drive the parametrised assembly of objects, we also gain the
powerful symbolic problem-solving capability that is the forte of logic
programming languages. This allows the designer/programmer to work at a higher
level, giving declarative specifications of a design in order to obtain the
design descriptions. Hence LSD integrates problem solving, design synthesis,
and prototype assembly in a single homogeneous programming/design environment.
We demonstrate this specification-to-final-assembly capability using the
masterkeying problem for designing systems of locks and keys.
|
In this paper, we investigate the space of certain weak stability conditions
on the triangulated category of D0-D2-D6 bound states on a smooth projective
Calabi-Yau 3-fold. In the case of a quintic 3-fold, the resulting space is
interpreted as a universal covering space of an infinitesimal neighborhood of
the conifold point in the stringy Kahler moduli space. We then construct the DT
type invariants counting semistable objects in our triangulated category, which
are new curve counting invariants on a Calabi-Yau 3-fold. We also investigate
the wall-crossing formula of our invariants and their interplay with the
Seidel-Thomas twist.
|
We establish both a Boltzmann-Gibbs principle and a Parisi formula for the
limiting free energy of an abstract GREM (Generalized Random Energy Model)
which provides an approximation of the TAP (Thouless-Anderson-Palmer) free
energies associated to the Sherrington-Kirkpatrick (SK) model.
|
Below $T_N = 1.1$K, the XY pyrochlore Er$_2$Ti$_2$O$_7$ orders into a $k=0$
non-collinear, antiferromagnetic structure referred to as the $\psi_2$ state.
The magnetic order in Er$_2$Ti$_2$O$_7$ is known to obey conventional three
dimensional (3D) percolation in the presence of magnetic dilution, and in that
sense is robust to disorder. Recently, however, two theoretical studies have
predicted that the $\psi_2$ structure should be unstable to the formation of a
related $\psi_3$ magnetic structure in the presence of magnetic vacancies. To
investigate these theories, we have carried out systematic elastic and
inelastic neutron scattering studies of three single crystals of
Er$_{2-x}$Y$_x$Ti$_2$O$_7$ with $x=0$ (pure), 0.2 (10$\%$-Y) and 0.4
(20$\%$-Y), where magnetic Er$^{3+}$ is substituted by non-magnetic Y$^{3+}$.
We find that the $\psi_2$ ground state of pure Er$_2$Ti$_2$O$_7$ is
significantly affected by magnetic dilution. The characteristic domain
selection associated with the $\psi_2$ state, and the corresponding energy gap
separating $\psi_2$ from $\psi_3$, vanish for Y$^{3+}$ substitutions between
10$\%$-Y and 20$\%$-Y, far removed from the 3D percolation threshold of
$\sim$60$\%$-Y. The resulting ground state for Er$_2$Ti$_2$O$_7$ with magnetic
dilutions from 20$\%$-Y up to the percolation threshold is naturally
interpreted as a frozen mosaic of $\psi_2$ and $\psi_3$ domains.
|
With the emergence of cyber-attacks on control systems it has become clear
that improving the security of control systems is an important task in today's
society. We investigate how an attacker that has access to the measurements
transmitted from the plant to the controller can perfectly estimate the
internal state of the controller. This attack on sensitive information of the
control loop is, on the one hand, a violation of the privacy, and, on the other
hand, a violation of the security of the closed-loop system if the obtained
estimate is used in a larger attack scheme. The current literature on sensor
attacks often assumes that the attacker already has access to the controller's
state; however, this is not always possible. We derive conditions for when the
attacker is able to perfectly estimate the controller's state. These conditions
show that if the controller has unstable poles, a perfect estimate of the
controller state is not possible. Moreover, we propose a defence mechanism to
render the attack infeasible. This defence is based on adding uncertainty to
the controller dynamics. We also discuss why an unstable controller is only a
good defence for certain plants. Finally, simulations with a three-tank system
verify our results.
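The core observation can be illustrated numerically: an attacker who knows the controller dynamics can run a replica driven by the intercepted measurements, and the estimation error then obeys the controller's own dynamics, vanishing exactly when the controller is stable. The matrices below are arbitrary stand-ins, not the three-tank system.

```python
# Attacker runs a copy of the (known) controller x_{k+1} = A_c x_k + B_c y_k
# driven by intercepted measurements y_k. The error obeys e_{k+1} = A_c e_k,
# so it decays iff the spectral radius of A_c is below one (stable poles).
import numpy as np

A_c = np.array([[0.5, 0.1], [0.0, 0.8]])   # stable stand-in controller
B_c = np.array([[1.0], [0.5]])

rng = np.random.default_rng(0)
x_c = np.zeros((2, 1))                      # true controller state
x_hat = rng.normal(size=(2, 1))             # attacker's initial guess

for _ in range(100):
    y = rng.normal(size=(1, 1))             # intercepted plant measurement
    x_c = A_c @ x_c + B_c @ y               # controller update
    x_hat = A_c @ x_hat + B_c @ y           # attacker's replica
print("estimation error:", np.linalg.norm(x_c - x_hat))  # ~0 for stable A_c
```

Replacing A_c with a matrix that has an eigenvalue outside the unit circle makes the error diverge, which is the intuition behind both the impossibility result and the proposed defence of adding uncertainty to the controller dynamics.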
|
Production distributed systems are challenging to formally verify, in
particular when they are based on distributed protocols that are not rigorously
described or fully understood. In this paper, we derive models and properties
for two core distributed protocols used in eventually consistent production
key-value stores such as Riak and Cassandra. We propose a novel modeling
approach, called certified program models, in which complete distributed systems are captured as
programs written in traditional systems languages such as concurrent C.
Specifically, we model the read-repair and hinted-handoff recovery protocols as
concurrent C programs, test them for conformance with real systems, and then
verify that they guarantee eventual consistency, modeling precisely the
specification as well as the failure assumptions under which the results hold.
|
We prove in this paper the linear stability of the celebrated Schwarzschild
family of black holes in general relativity: Solutions to the linearisation of
the Einstein vacuum equations around a Schwarzschild metric arising from
regular initial data remain globally bounded on the black hole exterior and in
fact decay to a linearised Kerr metric. We express the equations in a suitable
double null gauge. To obtain decay, one must in fact add a residual pure gauge
solution which we prove to be itself quantitatively controlled from initial
data. Our result a fortiori includes decay statements for general solutions of
the Teukolsky equation (satisfied by gauge-invariant null-decomposed curvature
components). These latter statements are in fact deduced in the course of the
proof by exploiting associated quantities shown to satisfy the Regge--Wheeler
equation, for which appropriate decay can be obtained easily by adapting
previous work on the linear scalar wave equation. The bounds on the rate of
decay to linearised Kerr are inverse polynomial, suggesting that dispersion is
sufficient to control the non-linearities of the Einstein equations in a
potential future proof of nonlinear stability. This paper is self-contained and
includes a physical-space derivation of the equations of linearised gravity
around Schwarzschild from the full non-linear Einstein vacuum equations
expressed in a double null gauge.
|
This paper develops distributed-optimization-based, platoon-centered CAV
car-following schemes, motivated by the recent interest in CAV platooning
technologies. Various distributed optimization or control schemes have been
developed for CAV platooning. However, most existing distributed schemes for
platoon-centered CAV control require either centralized data processing or
centralized computation in at least one step of their schemes, referred to as
partially distributed schemes. In this paper, we develop fully distributed,
optimization-based, platoon-centered CAV platooning control under linear
vehicle dynamics via the model predictive control approach with a general
prediction horizon. These fully distributed schemes do not require centralized
data processing or centralized computation through the entire schemes. To
develop these schemes, we propose a new formulation of the objective function
and a decomposition method that decomposes a densely coupled central objective
function into the sum of several locally coupled functions whose coupling
satisfies the network topology constraint. We then exploit the formulation of
locally coupled optimization and operator splitting methods to develop fully
distributed schemes. Control design and stability analysis are carried out to
achieve desired traffic transient performance and asymptotic stability.
Numerical tests demonstrate the effectiveness of the proposed fully distributed
schemes and CAV platooning control.
|
Radar detection and communication can be operated simultaneously in joint
radar-communication (JRC) system. In this paper, we propose a bistatic JRC
system which is applicable in multi-path environments. Based on a novel joint
waveform, a joint detection process is designed for both target detection and
channel estimation. Meanwhile, a low-cost channel equalization method that
utilizes the channel state information acquired from the detection process is
proposed. The numerical results show that the symbol error rate (SER) of the
proposed system is similar to that of the binary frequency-shift keying system,
and that the signal-to-noise ratio required in multi-path environments to reach
a SER of 10^-5 is less than 2 dB higher than in a single-path environment.
Moreover, knowledge of the embedded information is not required for
the joint detection process and the detection performance is robust to unknown
information.
|