Graphical analysis methods are widely used in positron emission tomography
quantification because of their simplicity and model independence. However,
particularly for reversible kinetics, they may lead to bias in the estimated
parameters. The source of this bias is commonly attributed to noise in the data.
Assuming a two-tissue compartmental model, we investigate the bias that
originates from model error. This bias is an intrinsic property of the
simplified linear models used for limited scan durations, and it is exaggerated
by random noise and numerical quadrature error. Conditions are derived under
which Logan's graphical method either over- or under-estimates the distribution
volume in the noise-free case. The bias caused by model error is quantified
analytically. The presented analysis shows that the bias of graphical methods
is inversely proportional to the dissociation rate. Furthermore, visual
examination of the linearity of the Logan plot is not sufficient for
guaranteeing that equilibrium has been reached. A new model which retains the
elegant properties of graphical analysis methods is presented, along with a
numerical algorithm for its solution. We perform simulations with the fibrillar
amyloid-beta radioligand [11C] benzothiazole-aniline using published data from
the University of Pittsburgh and Rotterdam groups. The results show that the
proposed method significantly reduces the bias due to model error. Moreover,
the results for data acquired over a 70-minute scan duration are at least as
good as those obtained using existing methods for data acquired over a
90-minute scan duration.
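For context, the Logan computation analyzed here is short; below is a minimal
sketch, assuming noise-free tissue and plasma curves sampled at frame mid-times
and a user-chosen start time t* for the linear fit (function and variable names
are illustrative, not from the paper):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_vt(t, c_t, c_p, t_star):
    """Estimate the distribution volume V_T by Logan's graphical method.

    t      : frame mid-times (min), with c_t > 0 on all sampled frames
    c_t    : tissue time-activity curve
    c_p    : (metabolite-corrected) plasma input curve
    t_star : start of the visually linear segment of the plot
    """
    int_ct = cumulative_trapezoid(c_t, t, initial=0.0)
    int_cp = cumulative_trapezoid(c_p, t, initial=0.0)
    y = int_ct / c_t              # ordinate of the Logan plot
    x = int_cp / c_t              # abscissa of the Logan plot
    late = t >= t_star            # fit only the late, near-linear tail
    slope, _ = np.polyfit(x[late], y[late], 1)
    return slope                  # slope -> V_T as t -> infinity
```

Even with exact inputs, this slope is biased whenever the linear regime has not
truly been reached within the scan, which is the model error quantified above.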
|
Using basic ideas of symplectic geometry, we find the covariant canonically
conjugate variables, the commutation relations, and the Poincar\'e charges for
chiral superconducting membranes (with null currents), as well as the stress
tensor for the theory under study.
|
The assignment of classifying spectra to saturated fusion systems was
suggested by Linckelmann and Webb and has been carried out by Broto, Levi and
Oliver. A more rigid (but equivalent) construction of the classifying spectra
is given in this paper. It is shown that the assignment is functorial for
fusion-preserving homomorphisms in a way which extends the assignment of stable
p-completed classifying spaces to finite groups, and admits a transfer theory
analogous to that for finite groups. Furthermore, the group of homotopy classes
of maps between classifying spectra is described, and in particular it is shown
that a fusion system can be reconstructed from its classifying spectrum
regarded as an object under the stable classifying space of the underlying
p-group.
|
While in most fluid models the generation mechanism and the magnitude of
anomalous transport are treated as auxiliary terms external to the model
description, free to be tuned, anomalous transport is in fact a noticeable,
self-generated effect exhibited in a multi-fluid system. Comparing the current
relaxation levels with kinetic Vlasov simulations of the same initial setups,
we find a higher anomalous transport in the multi-fluid plasma, i.e. a
stronger current reduction in the multi-fluid simulation than in the kinetic
Vlasov simulation for the same setup. To isolate the mechanism that causes
the different anomalous transport levels, we investigate the detailed
wave-particle interaction by using spectral analysis of the generated waves,
combined with spatially averaged distributions at different instants. The
analysis shows that Landau damping in the kinetic simulation stabilizes the
drifting plasma system once the bulk drift velocity of the electrons drops
below the phase velocity of the waves. The current relaxation process stops
while the relative drift velocity remains high.
|
Attention-based neural networks are state of the art in a large range of
applications. However, their performance tends to degrade when the number of
layers increases. In this work, we show that enforcing Lipschitz continuity by
normalizing the attention scores can significantly improve the performance of
deep attention models. First, we show that, for deep graph attention networks
(GAT), gradient explosion appears during training, leading to poor performance
of gradient-based training algorithms. To address this issue, we derive a
theoretical analysis of the Lipschitz continuity of attention modules and
introduce LipschitzNorm, a simple and parameter-free normalization for
self-attention mechanisms that constrains the model to be Lipschitz
continuous. We then apply LipschitzNorm to GAT and Graph Transformers and show
that their performance is substantially improved in the deep setting (10 to 30
layers). More specifically, we show that a deep GAT model with LipschitzNorm
achieves state-of-the-art results for node label prediction tasks that exhibit
long-range dependencies, while showing consistent improvements over its
unnormalized counterpart in benchmark node classification tasks.
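As a rough illustration of the idea of normalizing attention scores, the sketch
below divides the raw scores by a data-dependent scale built from the largest
query and key norms before the softmax, keeping the attention map bounded as
depth grows. This is a hedged sketch of the general recipe only; the exact
normalizer used by LipschitzNorm may differ:

```python
import torch

def normalized_attention(q, k, v, eps=1e-6):
    """Parameter-free score normalization in the spirit of LipschitzNorm
    (illustrative; not necessarily the paper's exact formula).

    q, k, v : (n, d) query/key/value matrices.
    """
    scores = q @ k.T
    # Rescale scores by the largest query/key norm product so they stay
    # bounded regardless of the magnitude of the inputs.
    scale = (q.norm(dim=1).max() * k.norm(dim=1).max()).clamp_min(eps)
    return torch.softmax(scores / scale, dim=-1) @ v
```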
|
Deep networks realize complex mappings that are often understood by their
locally linear behavior at or around points of interest. For example, we use
the derivative of the mapping with respect to its inputs for sensitivity
analysis, or to explain (obtain coordinate relevance for) a prediction. One key
challenge is that such derivatives are themselves inherently unstable. In this
paper, we propose a new learning problem to encourage deep networks to have
stable derivatives over larger regions. While the problem is challenging in
general, we focus on networks with piecewise linear activation functions. Our
algorithm consists of an inference step that identifies a region around a point
where linear approximation is provably stable, and an optimization step to
expand such regions. We propose a novel relaxation to scale the algorithm to
realistic models. We illustrate our method with residual and recurrent networks
on image and sequence datasets.
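For a single layer with piecewise linear (ReLU-type) activations, the region
where the linearization at x is exact has a simple description: no
pre-activation may change sign, so the distance from x to the boundary of the
region is the smallest hyperplane margin. A minimal sketch of this inference
step for one layer only (the paper's algorithm handles whole networks and adds
a relaxation):

```python
import numpy as np

def stable_radius(W, b, x):
    """L2 distance from x to the nearest ReLU activation boundary of the
    layer z = relu(W @ x + b); within this radius the layer's linear
    approximation at x is provably exact."""
    pre = W @ x + b                                    # pre-activations at x
    margins = np.abs(pre) / np.linalg.norm(W, axis=1)  # per-neuron margins
    return margins.min()
```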
|
The Radiation Monitor (RADOM) payload is a miniature dosimeter-spectrometer
onboard the Chandrayaan-1 mission for monitoring the local radiation
environment in near-Earth space and in lunar space. RADOM measured the total
absorbed dose and spectrum of the deposited energy from high-energy particles
in near-Earth space, en route, and in lunar orbit. RADOM was the first
experiment to be switched on soon after the launch of Chandrayaan-1 and was
operational until the end of the mission. This paper summarizes the
observations carried out by RADOM during the entire lifetime of the
Chandrayaan-1 mission and some of the salient results.
|
We show that a quantized large-scale system with unknown parameters and
training signals can be analyzed by examining an equivalent system with known
parameters, after modifying the signal power and noise variance in a prescribed
manner. Applications to training in wireless communications and signal
processing are shown. In wireless communications, we show that the optimal
number of training signals can be significantly smaller than the number of
transmitting elements. Similar conclusions can be drawn when considering the
symbol error rate in signal processing applications, as long as the number of
receiving elements is large enough. We show that a linear analysis of training
in a quantized system can be accurate when the thermal noise is high or the
system is operating near its saturation rate.
|
We have measured the index of refraction for sodium de Broglie waves in gases
of Ar, Kr, Xe, and nitrogen over a wide range of sodium velocities. We observe
glory oscillations -- a velocity-dependent oscillation in the forward
scattering amplitude. An atom interferometer was used to observe glory
oscillations in the phase shift caused by the collision, which are larger than
glory oscillations observed in the cross section. The glory oscillations depend
sensitively on the shape of the interatomic potential, allowing us to
discriminate among various predictions for these potentials, none of which
completely agrees with our measurements.
|
We study transport in a class of physical systems possessing two conserved
chiral charges. We describe a relation between universality of transport
properties of such systems and the chiral anomaly. We show that the
non-vanishing of a current expectation value implies the presence of gapless
modes, in analogy to the Goldstone theorem. Our main tool is a new formula
expressing currents in terms of anomalous commutators. Universality of
conductance arises as a natural consequence of the nonrenormalization of
anomalies. To illustrate our formalism we examine transport properties of a
quantum wire in (1+1) dimensions and of massless QED in a background magnetic
field in (3+1) dimensions.
|
This paper presents a systematic numerical study of the effects of noise on
the invariant probability densities of dynamical systems with varying degrees
of hyperbolicity. It is found that the rate of convergence of invariant
densities in the small-noise limit is frequently governed by power laws. In
addition, a simple heuristic is proposed and found to correctly predict the
power law exponent in exponentially mixing systems. In systems which are not
exponentially mixing, the heuristic provides only an upper bound on the power
law exponent. As this numerical study requires the computation of invariant
densities across more than 2 decades of noise amplitudes, it also provides an
opportunity to discuss and compare standard numerical methods for computing
invariant probability densities.
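A toy version of such an experiment, using a noisy logistic map as the system:
estimate the invariant density by a long-run histogram at several noise
amplitudes, then read a power-law exponent off a log-log fit against a
small-noise reference. All parameter choices below are illustrative:

```python
import numpy as np

def invariant_density(eps, n_steps=10**5, bins=256, seed=0):
    """Histogram estimate of the invariant density of the logistic map
    x -> 4x(1-x) perturbed by uniform noise of amplitude eps."""
    rng = np.random.default_rng(seed)
    x, samples = 0.3, np.empty(n_steps)
    for i in range(n_steps):
        x = 4.0 * x * (1.0 - x) + eps * rng.uniform(-1.0, 1.0)
        x = min(max(x, 0.0), 1.0)     # keep the orbit in [0, 1]
        samples[i] = x
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0), density=True)
    return hist

eps_values = [1e-2, 1e-3, 1e-4]
ref = invariant_density(1e-5)         # small-noise reference density
dists = [np.abs(invariant_density(e) - ref).mean() for e in eps_values]
exponent = np.polyfit(np.log(eps_values), np.log(dists), 1)[0]
```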
|
We study the Goussarov-Habiro theory of finite type invariants for framed string
links in homology balls.
Their degree 1 invariants are computed: they are given by Milnor's triple
linking numbers, the mod 2 reduction of the Sato-Levine invariant, Arf and
Rochlin's $\mu$ invariant. These invariants are seen to be naturally related to
invariants of homology cylinders through the so-called Milnor-Johnson
correspondence: in particular, an analogue of the Birman-Craggs homomorphism
for string links is computed.
The relation with Vassiliev theory is studied.
|
A gravitational field can be defined in terms of a moving frame, which when
made noncommutative yields a preferred basis for a differential calculus. It is
conjectured that to a linear perturbation of the commutation relations which
define the algebra there corresponds a linear perturbation of the gravitational
field. This is shown to be true in the case of a perturbation of Minkowski
space-time.
|
Recently, there has been a revival of interest in low-rank matrix completion-based
unsupervised learning through the lens of dual-graph regularization, which has
significantly improved the performance of multidisciplinary machine learning
tasks such as recommendation systems, genotype imputation and image inpainting.
While the dual-graph regularization contributes a major part of the success,
it usually involves computationally costly hyper-parameter tuning. To circumvent
such a drawback and improve the completion performance, we propose a novel
Bayesian learning algorithm that automatically learns the hyper-parameters
associated with dual-graph regularization, and at the same time, guarantees the
low-rankness of matrix completion. Notably, a novel prior is devised to promote
the low-rankness of the matrix and encode the dual-graph information
simultaneously, which is more challenging than the single-graph counterpart. A
nontrivial conditional conjugacy between the proposed priors and likelihood
function is then explored such that an efficient algorithm is derived under
the variational inference framework. Extensive experiments using synthetic and
real-world datasets demonstrate the state-of-the-art performance of the
proposed learning algorithm for various data analysis tasks.
|
Thermal leptogenesis is an attractive mechanism for generating the baryon
asymmetry of the Universe. However, in supersymmetric models, the parameter
space is severely restricted by the gravitino bound on the reheat temperature
$T_{RH}$. Using a parametrisation of the seesaw in terms of weak-scale inputs,
the low-energy footprints of thermal leptogenesis are discussed.
|
Macroscopic realism is a classical worldview that a macroscopic system is
always determinately in one of the two or more macroscopically distinguishable
states available to it, and so is never in a superposition of these states. The
question of whether there is a fundamental limitation on the possibility to
observe quantum phenomena at the macroscopic scale remains unclear. Here we
implement a strict and simple protocol to test macroscopic realism in a
light-matter interfaced system. We create a micro-macro entanglement with two
macroscopically distinguishable solid-state components and rule out those
theories which would deny coherent superpositions of up to 76 atomic
excitations shared by 10^10 ions in two separated solids. These results provide
a general method to enhance the size of superposition states of atoms by
utilizing quantum memory techniques and to push the envelope of macroscopicity
to higher levels.
|
Particle Markov chain Monte Carlo techniques rank among current
state-of-the-art methods for probabilistic program inference. A drawback of
these techniques is that they rely on importance resampling, which results in
degenerate particle trajectories and a low effective sample size for variables
sampled early in a program. We here develop a formalism to adapt ancestor
resampling, a technique that mitigates particle degeneracy, to the
probabilistic programming setting. We present empirical results that
demonstrate nontrivial performance gains.
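The core move of ancestor resampling, stated loosely for a Markov prior with
transition density f(x_t | x_{t-1}): rather than keeping the reference
trajectory's ancestry fixed, its ancestor index is redrawn with weights
proportional to w_{t-1}^i f(x_t^ref | x_{t-1}^i). A minimal sketch of that
step; adapting it to general probabilistic programs is the paper's
contribution and is not shown here:

```python
import numpy as np

def ancestor_index(weights, particles_prev, x_ref_t, trans_logpdf, rng):
    """Redraw the ancestor of the reference state x_ref_t.

    weights        : normalized particle weights at time t-1
    particles_prev : particle states at time t-1
    trans_logpdf   : log f(x_t | x_{t-1}) of the transition kernel
    """
    logw = np.log(weights) + np.array(
        [trans_logpdf(x_ref_t, xp) for xp in particles_prev])
    w = np.exp(logw - logw.max())          # stabilized unnormalized weights
    return rng.choice(len(weights), p=w / w.sum())
```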
|
A new research project on spectral analysis that aims to characterize the
vertical stratification of element abundances in stellar atmospheres of
chemically peculiar (CP) stars is discussed in detail. Some results on
detection of vertical abundance stratification in several slowly rotating main
sequence CP stars are presented and considered as an indicator of the
effectiveness of the atomic diffusion mechanism responsible for the observed
peculiarities of chemical abundances. This study is carried out in the frame of
Project VeSElkA (Vertical Stratification of Elements Abundance) for which 34
slowly rotating CP stars have been observed with the ESPaDOnS
spectropolarimeter at CFHT.
|
Out-of-distribution (OOD) detection is a crucial aspect of deploying machine
learning models in open-world applications. Empirical evidence suggests that
training with auxiliary outliers substantially improves OOD detection. However,
such outliers typically exhibit a distribution gap compared to the test OOD
data and do not cover all possible test OOD scenarios. Additionally,
incorporating these outliers introduces additional training burdens. In this
paper, we introduce a novel paradigm called test-time OOD detection, which
utilizes unlabeled online data directly at test time to improve OOD detection
performance. While this paradigm is efficient, it also presents challenges such
as catastrophic forgetting. To address these challenges, we propose adaptive
outlier optimization (AUTO), which consists of an in-out-aware filter, an ID
memory bank, and a semantically-consistent objective. AUTO adaptively mines
pseudo-ID and pseudo-OOD samples from test data, utilizing them to optimize
networks in real time during inference. Extensive results on CIFAR-10,
CIFAR-100, and ImageNet benchmarks demonstrate that AUTO significantly enhances
OOD detection performance.
|
Extragalactic jets are the most powerful persistent sources in the universe.
Those pointing at us are called blazars. Their relativistically boosted
emission extends from radio frequencies to TeV energies. They are also
suspected to be the sources of energetic neutrinos and high-energy cosmic
rays. The study of their overall spectrum indicates that most of the emission
of powerful blazars is in hard X-rays or in soft gamma-rays. In this band we
can find the most powerful jets, visible also at high redshifts. It is found
that the jet power is linked to the accretion luminosity, and exceeds it,
especially if the jets produce energetic neutrinos, which require the presence
of ultrarelativistic protons.
|
Vehicular Communication (VC) systems will greatly enhance intelligent
transportation systems. But their security and the protection of their users'
privacy are a prerequisite for deployment. Efforts in industry and academia
brought forth a multitude of diverse proposals. These have now converged to a
common view, notably on the design of a security infrastructure, a Vehicular
Public Key Infrastructure (VPKI) that shall enable secure conditionally
anonymous VC. Standardization efforts and industry readiness to adopt this
approach hint at its maturity. However, several open questions remain, and
it is paramount to have conclusive answers before deployment. In
this article, we distill and critically survey the state of the art for
identity and credential management in VC systems, and we sketch a roadmap for
addressing a set of critical remaining security and privacy challenges.
|
The O(alpha) electroweak radiative corrections to gamma gamma --> WW --> 4f
within the electroweak Standard Model are calculated in double-pole
approximation (DPA). Virtual corrections are treated in DPA, leading to a
classification into factorizable and non-factorizable contributions, and
real-photonic corrections are based on complete lowest-order matrix elements
for gamma gamma --> 4f + gamma. Soft and collinear singularities appearing in
the virtual and real corrections are combined alternatively in two different
ways, namely by using the dipole subtraction method or by applying phase-space
slicing. The radiative corrections are implemented in a Monte Carlo generator
called COFFERgammagamma, which optionally includes anomalous triple and quartic
gauge-boson couplings and performs a convolution over realistic
spectra of the photon beams. A detailed survey of numerical results comprises
O(alpha) corrections to integrated cross sections as well as to angular,
energy, and invariant-mass distributions. Particular attention is paid to the
issue of collinear safety in the observables.
|
We provide sufficient conditions for Rees algebras of modules to be
Cohen-Macaulay, generalizing results proven in the case of Rees algebras of
ideals by Johnson-Ulrich and Goto-Nakamura-Nishida. As it turns out, the
generalization from ideals to modules is not routine, but requires a great
deal of technical development. We use the technique of generic Bourbaki
ideals introduced by Simis, Ulrich and Vasconcelos to obtain the
Cohen-Macaulayness of Rees algebras of modules.
|
We show that first order logic (FO) and first order logic extended with
modulo counting quantifiers (FOMOD) over purely functional vocabularies which
extend addition, satisfy the Crane Beach property (CBP) if the logic satisfies
a normal form (called positional normal form). This not only shows why logics
over the addition vocabulary have the CBP but also gives new CBP results, for
example for the vocabulary which extends addition with the exponentiation
function. The above results can also be viewed from the perspective of circuit
complexity. Showing the existence of regular languages not definable in
FOMOD[<, +, *] is equivalent to the separation of the circuit complexity
classes ACC0 and NC1. Our theorem shows that a weaker logic, namely
FOMOD[<, +, 2^x], cannot define all regular languages.
|
Massive volumes of data continuously generated on social platforms have
become an important information source for users. A primary method to obtain
fresh and valuable information from social streams is \emph{social search}.
Although there have been extensive studies on social search, existing methods
only focus on the \emph{relevance} of query results but ignore the
\emph{representativeness}. In this paper, we propose a novel Semantic and
Influence aware $k$-Representative ($k$-SIR) query for social streams based on
topic modeling. Specifically, we consider that both user queries and elements
are represented as vectors in the topic space. A $k$-SIR query retrieves a set
of $k$ elements with the maximum \emph{representativeness} over the sliding
window at query time w.r.t. the query vector. The representativeness of an
element set comprises both semantic and influence scores computed by the topic
model. Subsequently, we design two approximation algorithms, namely
\textsc{Multi-Topic ThresholdStream} (MTTS) and \textsc{Multi-Topic
ThresholdDescend} (MTTD), to process $k$-SIR queries in real-time. Both
algorithms leverage the ranked lists maintained on each topic for $k$-SIR
processing with theoretical guarantees. Extensive experiments on real-world
datasets demonstrate the effectiveness of the $k$-SIR query compared with existing
methods as well as the efficiency and scalability of our proposed algorithms
for $k$-SIR processing.
|
We study charge transport in the Peierls-Harper model with a quasi-periodic
cosine potential. We compute the Landauer-type conductance of the wire. Our
numerical results show the following: (i) When the Fermi energy lies in the
absolutely continuous spectrum that is realized in the regime of the weak
coupling, the conductance is quantized to the universal conductance. (ii) For
the regime of localization that is realized for the strong coupling, the
conductance is always vanishing irrespective of the value of the Fermi energy.
Unfortunately, we cannot draw a definite conclusion about the case with the
critical coupling. We also compute the conductance of the Thue-Morse model.
Although the potential of the model is not quasi-periodic, the energy spectrum
is known to be a Cantor set with zero Lebesgue measure. Our numerical results
for the Thue-Morse model also show the quantization of the conductance at many
locations of the Fermi energy, except for the trivial localization regime.
Besides, for the rest of the values of the Fermi energy, the conductance shows
a similar behavior to that of the Peierls-Harper model with the critical
coupling.
|
The recent isolation of two-dimensional van der Waals magnetic materials has
uncovered rich physics that often differs from the magnetic behaviour of their
bulk counterparts. However, the microscopic details of fundamental processes
such as the initial magnetization or domain reversal, which govern the magnetic
hysteresis, remain largely unknown in the ultrathin limit. Here we employ a
widefield nitrogen-vacancy (NV) microscope to directly image these processes in
few-layer flakes of magnetic semiconductor vanadium triiodide (VI$_3$). We
observe complete and abrupt switching of most flakes at fields
$H_c\approx0.5-1$ T (at 5 K) independent of thickness down to two atomic
layers, with no intermediate partially-reversed state. The coercive field
decreases as the temperature approaches the Curie temperature ($T_c\approx50$
K), however, the switching remains abrupt. We then image the initial
magnetization process, which reveals thickness-dependent domain wall depinning
fields well below $H_c$. These results point to ultrathin VI$_3$ being a
nucleation-type hard ferromagnet, where the coercive field is set by the
anisotropy-limited domain wall nucleation field. This work illustrates the
power of widefield NV microscopy to investigate magnetization processes in van
der Waals ferromagnets, which could be used to elucidate the origin of the hard
ferromagnetic properties of other materials and explore field- and
current-driven domain wall dynamics.
|
We study the electronic transport properties of dual-gated bilayer graphene
devices. We focus on the regime of low temperatures and high electric
displacement fields, where we observe a clear exponential dependence of the
resistance as a function of displacement field and density, accompanied by a
strong non-linear behavior in the transport characteristics. The effective
transport gap is typically two orders of magnitude smaller than the optical
band gaps reported by infrared spectroscopy studies. Detailed temperature
dependence measurements shed light on the different transport mechanisms in
different temperature regimes.
|
In the present paper, we construct an invariant for virtual knots in the
thickened sphere with g handles; this invariant is a Laurent polynomial in 2g+3
variables. To this end, we use a modification of the Wirtinger presentation of
the knot group and the concept of parity introduced by V.O. Manturov.
Section 4 of the paper is devoted to an enhancement of the invariant
(construction of the invariant module) by using the parity hierarchy concept
suggested by V.O. Manturov. Namely, we discriminate between odd crossings and
two types of even crossings; the latter two types depend on whether an even
crossing remains even/odd after all odd crossings of the diagram are removed.
The construction of the invariant also works for virtual knots.
|
In this work we study differential problems in which the reflection operator
and the Hilbert transform are involved. We reduce these problems to ODEs in
order to solve them. Also, we describe a general method for obtaining the
Green's function of reducible functional differential equations and illustrate
it with the case of homogeneous boundary value problems with reflection and
several specific examples.
|
Many online social networks thrive on automatic sharing of friends'
activities to a user through activity feeds, which may influence the user's
next actions. However, identifying such social influence is tricky because
these activities are simultaneously impacted by influence and homophily. We
propose a statistical procedure that uses commonly available network and
observational data about people's actions to estimate the extent of
copy-influence---mimicking others' actions that appear in a feed. We assume
that non-friends don't influence users; thus, comparing how a user's activity
correlates with friends versus non-friends who have similar preferences can
help tease out the effect of copy-influence.
Experiments on datasets from multiple social networks show that estimates
that don't account for homophily overestimate copy-influence by varying, often
large, amounts. Further, copy-influence estimates fall below 1% of total actions
in all networks: most people, and almost all actions, are not affected by the
feed. Our results question common perceptions around the extent of
copy-influence in online social networks and suggest improvements to diffusion
and recommendation models.
|
The plant hormone auxin has critical roles in plant growth, which depend on
its heterogeneous distribution in plant tissues. Exactly how auxin transport and
developmental processes such as growth coordinate to achieve the precise
patterns of auxin observed experimentally is not well understood. Here we use
mathematical modelling to examine the interplay between auxin dynamics and
growth and their contribution to formation of patterns in auxin distribution in
plant tissues. Mathematical models describing the auxin-related signalling
pathway, PIN and AUX1 dynamics, auxin transport, and cell growth in plant
tissues are derived. A key assumption of our models is the regulation of PIN
proteins by the auxin-responsive ARF-Aux/IAA signalling pathway, with
upregulation of PIN biosynthesis by ARFs. Models are analysed and solved
numerically to examine the long-time behaviour and auxin distribution. Changes
in auxin-related signalling processes are shown to be able to trigger
transition between passage and spot type patterns in auxin distribution. The
model was also shown to be able to generate isolated cells with oscillatory
dynamics in levels of components of the auxin signalling pathway which could
explain oscillations in levels of ARF targets that have been observed
experimentally. Cell growth was shown to have influence on PIN polarisation and
determination of auxin distribution patterns. Numerical simulation results
indicate that auxin-related signalling processes can explain the different
patterns in auxin distributions observed in plant tissues, whereas the
interplay between auxin transport and growth can explain the `reverse-fountain'
pattern in auxin distribution observed at plant root tips.
|
We study the notion of algebraic tangent cones at singularities of reflexive
sheaves. These correspond to extensions of reflexive sheaves across a negative
divisor. We show the existence of optimal extensions in a constructive manner,
and we prove the uniqueness in a suitable sense. The results here are an
algebro-geometric counterpart of our previous study on singularities of
Hermitian-Yang-Mills connections.
|
94 Ceti is a triple star system with a circumprimary gas giant planet and
far-infrared excess. Such excesses around main sequence stars are likely due to
debris discs, and are considered signposts of planetary systems and,
therefore, provide important insights into the configuration and evolution of
the planetary system. Consequently, in order to learn more about the 94 Ceti
system, we aim to precisely model the dust emission to fit its observed SED and
to simulate its orbital dynamics. We interpret our APEX bolometric observations
and complement them with archived Spitzer and Herschel bolometric data to
explore the stellar excess and to map out background sources in the fields.
Dynamical simulations and 3D radiative transfer calculations were used to
constrain the debris disc configurations and model the dust emission. The best
fit dust disc model for 94 Ceti implies a circumbinary disc around the
secondary pair, limited by dynamics to radii smaller than 40 AU and with a
grain size power-law distribution of ~a^-3.5. This model exhibits a
dust-to-star luminosity ratio of (4.6+-0.4)*10^-6. The system is dynamically
stable, and N-body symplectic simulation results are consistent with
semi-analytical equations that describe orbits in binary systems. In the
observations we also find tentative evidence of a circumtertiary ring that
could be edge-on.
|
Over the years, a wide variety of terminologies, motivations, approaches,
and evaluation criteria have been developed within the research field of
explainable artificial intelligence (XAI). With the number of XAI methods
vastly growing, a taxonomy of methods is needed by researchers as well as
practitioners: To grasp the breadth of the topic, compare methods, and to
select the right XAI method based on traits required by a specific use-case
context. Many taxonomies for XAI methods of varying levels of detail and depth
can be found in the literature. While they often have a different focus, they
also exhibit many points of overlap. This paper unifies these efforts and
provides a complete taxonomy of XAI methods with respect to notions present in
the current state of research. In a structured literature analysis and
meta-study, we identified and reviewed more than 50 of the most cited and
current surveys on XAI methods, metrics, and method traits. After summarizing
them in a survey of surveys, we merge terminologies and concepts of the
articles into a unified structured taxonomy. Single concepts therein are
illustrated by more than 50 diverse selected example methods in total, which we
categorize accordingly. The taxonomy may serve beginners, researchers, and
practitioners alike as a reference and wide-ranging overview of XAI method
traits and
aspects. Hence, it provides foundations for targeted, use-case-oriented, and
context-sensitive future research.
|
Cooperative Adaptive Cruise Control (CACC) is a pivotal vehicular application
that would allow the transportation field to achieve its goals of increased
traffic throughput and roadway capacity. This application is of paramount
interest to the vehicular technology community, with a large body of
literature dedicated to research within different aspects of CACC, including
but not limited to security with CACC. Of all available literature, the
overwhelming focus is on CACC utilizing vehicle-to-vehicle (V2V)
communication. In this work, we assert
that a qualitative increase in vehicle-to-infrastructure (V2I) and
infrastructure-to-vehicle (I2V) involvement has the potential to add greater
value to CACC. In this study, we developed a strategy for detection of a
denial-of-service (DoS) attack on a CACC platoon where the system edge in the
vehicular network plays a central role in attack detection. The proposed
security strategy is substantiated with a simulation-based evaluation using the
ns-3 discrete event network simulator. Empirical evidence obtained through
simulation-based results illustrate successful detection of the DoS attack at
four different levels of attack severity using this security strategy.
|
The atmospheric pressure fluctuations on Mars induce an elastic response in
the ground that creates a ground tilt, detectable as a seismic signal on the
InSight seismometer SEIS. The seismic pressure noise is modeled using Large
Eddy Simulations of the wind and surface pressure at the InSight landing site
and a Green's function ground deformation approach that is subsequently
validated via a detailed comparison with two other methods based on Sorrells'
theory (Sorrells 1971; Sorrells et al. 1971). The horizontal acceleration as a
result of the ground tilt due to the LES turbulence-induced pressure
fluctuations is found to be typically ~2 - 40 nm/s^2 in amplitude, whereas the
direct horizontal acceleration is two orders of magnitude smaller and is thus
negligible in comparison. The vertical accelerations are found to be ~0.1 - 6
nm/s^2 in amplitude.
We show that under calm conditions, a single-pressure measurement is
representative of the large-scale pressure field (to a distance of several
kilometers), particularly in the prevailing wind direction. However, during
windy conditions, small-scale turbulence results in a reduced correlation
between the pressure signals, and the single-pressure measurement becomes less
representative of the pressure field. Nonetheless, the correlation between the
seismic signal and the pressure signal is found to be higher for the windiest
period because the seismic pressure noise reflects the atmospheric structure
close to the seismometer. In the same way that we reduce the atmospheric
seismic signal by making use of a pressure sensor that is part of the InSight
APSS (Auxiliary Payload Sensor Suite), we also use the synthetic noise data
obtained from the LES pressure field to demonstrate a decorrelation strategy.
|
Using differential geometry, I derive a form of the Bayesian Cram\'er-Rao
bound that remains invariant under reparametrization. With the invariant
formulation at hand, I find the optimal and naturally invariant bound among the
Gill-Levit family of bounds. By assuming that the prior probability density is
the square of a wavefunction, I also express the bounds in terms of functionals
that are quadratic with respect to the wavefunction and its gradient. The
problem of finding an unfavorable prior to tighten the bound for minimax
estimation is shown, in a special case, to be equivalent to finding the ground
state of a Schr\"odinger equation, with the Fisher information playing the role
of the potential. To illustrate the theory, two quantum estimation problems,
namely, optomechanical waveform estimation and subdiffraction incoherent
optical imaging, are discussed.
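For reference, the scalar form of the Bayesian Cram\'er-Rao bound that the
invariant formulation generalizes is the van Trees inequality: for a prior
density $\pi$ and Fisher information $J(\theta)$,
$$
\mathbb{E}\big[(\hat\theta-\theta)^2\big] \;\ge\; \frac{1}{\mathbb{E}_\pi[J(\theta)] + J(\pi)},
\qquad
J(\pi) = \int \frac{\pi'(\theta)^2}{\pi(\theta)}\,d\theta,
$$
and substituting $\pi=\psi^2$ for a real wavefunction $\psi$ gives
$J(\pi)=4\int \psi'(\theta)^2\,d\theta$, a functional that is quadratic in the
wavefunction and its gradient, as described above.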
|
An approximate formula for the complex Riemann Xi function, previously
developed, is used to refine Backlund's estimate of the number of zeros up to
a chosen imaginary coordinate.
|
Next-gen computing paradigms foresee deploying applications to virtualised
resources along a continuum of Cloud-Edge nodes. Much literature has focussed on
how to place applications onto such resources so as to meet their requirements.
To lease resources to application operators, infrastructure providers need to
identify a portion of their Cloud-Edge assets to meet set requirements. This
article proposes a novel declarative resource selection strategy prototyped in
Prolog to determine a suitable infrastructure portion that satisfies all
requirements. The proposal is showcased over a lifelike scenario.
|
Recent self-supervised advances in medical computer vision exploit global and
local anatomical self-similarity for pretraining prior to downstream tasks such
as segmentation. However, current methods assume i.i.d. image acquisition,
which is invalid in clinical study designs where follow-up longitudinal scans
track subject-specific temporal changes. Further, existing self-supervised
methods for medically-relevant image-to-image architectures exploit only
spatial or temporal self-similarity and only do so via a loss applied at a
single image-scale, with naive multi-scale spatiotemporal extensions collapsing
to degenerate solutions. To these ends, this paper makes two contributions: (1)
It presents a local and multi-scale spatiotemporal representation learning
method for image-to-image architectures trained on longitudinal images. It
exploits the spatiotemporal self-similarity of learned multi-scale
intra-subject features for pretraining and develops several feature-wise
regularizations that avoid collapsed identity representations; (2) During
finetuning, it proposes a surprisingly simple self-supervised segmentation
consistency regularization to exploit intra-subject correlation. Benchmarked in
the one-shot segmentation setting, the proposed framework outperforms both
well-tuned randomly-initialized baselines and current self-supervised
techniques designed for both i.i.d. and longitudinal datasets. These
improvements are demonstrated across both longitudinal neurodegenerative adult
MRI and developing infant brain MRI and yield both higher performance and
longitudinal consistency.
|
A widely applied approach to causal inference from a non-experimental time
series $X$, often referred to as "(linear) Granger causal analysis", is to
regress present on past and interpret the regression matrix $\hat{B}$ causally.
However, if there is an unmeasured time series $Z$ that influences $X$, then
this approach can lead to wrong causal conclusions, i.e., distinct from those
one would draw if one had additional information such as $Z$. In this paper we
take a different approach: We assume that $X$ together with some hidden $Z$
forms a first order vector autoregressive (VAR) process with transition matrix
$A$, and argue why it is more valid to interpret $A$ causally instead of
$\hat{B}$. Then we examine under which conditions the most important parts of
$A$ are identifiable or almost identifiable from only $X$. Essentially,
sufficient conditions are (1) non-Gaussian, independent noise or (2) no
influence from $X$ to $Z$. We present two estimation algorithms that are
tailored towards conditions (1) and (2), respectively, and evaluate them on
synthetic and real-world data. We discuss how to check the model using $X$.
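A toy numeric illustration of the gap between $\hat{B}$ and the $X$-block of
$A$, assuming a two-dimensional VAR in which a hidden $Z$ drives the observed
$X$ but not conversely (condition (2) above); all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.4],       # X_t depends on X_{t-1} and hidden Z_{t-1}
              [0.0, 0.9]])      # no influence from X to Z
T, s = 100_000, np.zeros(2)
states = np.empty((T, 2))
for t in range(T):
    s = A @ s + rng.normal(size=2)
    states[t] = s
x = states[:, 0]                # only X is observed
b_hat = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])   # OLS of X_t on X_{t-1}
print(b_hat, "vs A_xx =", A[0, 0])  # b_hat is biased away from 0.5
```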
|
The extension of an $r$-uniform hypergraph $G$ is obtained from it by adding
for every pair of vertices of $G$, which is not covered by an edge in $G$, an
extra edge containing this pair and $(r-2)$ new vertices. In this paper we
determine the Tur\'an number of the extension of an $r$-graph consisting of two
vertex-disjoint edges, settling a conjecture of Hefetz and Keevash, who
previously determined this Tur\'an number for $r=3$. As the key ingredient of
the proof we show that the Lagrangian of intersecting $r$-graphs is maximized
by principally intersecting $r$-graphs for $r \geq 4$.
|
For a long time, malware classification and analysis have been an arms race
between antivirus systems and malware authors. Though static analysis is
vulnerable to evasion techniques, it is still popular as the first line of
defense in antivirus systems. But most of the static analyzers failed to gain
the trust of practitioners due to their black-box nature. We propose MAlign, a
novel static malware family classification approach inspired by genome sequence
alignment that can not only classify malware families but can also provide
explanations for its decision. MAlign encodes raw bytes using nucleotides and
adopts genome sequence alignment approaches to create a signature of a malware
family based on the conserved code segments in that family, without any human
labor or expertise. We evaluate MAlign on two malware datasets, and it
outperforms other state-of-the-art machine learning based malware classifiers
(by 4.49% - 0.07%), especially on small datasets (by 19.48% - 1.2%).
Furthermore, we explain the generated signatures by MAlign on different malware
families illustrating the kinds of insights it can provide to analysts, and
show its efficacy as an analysis tool. Additionally, we evaluate its
theoretical and empirical robustness against some common attacks. In this
paper, we approach static malware analysis from a unique perspective, aiming to
strike a delicate balance among performance, interpretability, and robustness.
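One natural byte-to-nucleotide encoding uses two bits per base, i.e. four
bases per byte; whether MAlign uses exactly this mapping is an assumption of
the sketch below:

```python
BASES = "ACGT"  # two bits per nucleotide -> four nucleotides per byte

def bytes_to_nucleotides(data: bytes) -> str:
    """Encode raw bytes as a nucleotide string for sequence alignment."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):          # high-order bit pairs first
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

print(bytes_to_nucleotides(b"\x4d\x5a"))    # PE 'MZ' header -> 'CATCCCGG'
```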
|
Recent developments in High Level Synthesis tools have attracted software
programmers to accelerate their high-performance computing applications on
FPGAs. Even though it has been shown that FPGAs can compete with GPUs in terms
of performance for stencil computation, most previous work achieves this by
avoiding spatial blocking and restricting input dimensions relative to FPGA
on-chip memory. In this work we create a stencil accelerator using Intel FPGA
SDK for OpenCL that achieves high performance without having such restrictions.
We combine spatial and temporal blocking to avoid input size restrictions, and
employ multiple FPGA-specific optimizations to tackle issues arising from the
added design complexity. Accelerator parameter tuning is guided by our
performance model, which we also use to project performance for the upcoming
Intel Stratix 10 devices. On an Arria 10 GX 1150 device, our accelerator can
reach up to 760 and 375 GFLOP/s of compute performance, for 2D and 3D stencils,
respectively, which rivals the performance of a highly-optimized GPU
implementation. Furthermore, we estimate that the upcoming Stratix 10 devices
can achieve a performance of up to 3.5 TFLOP/s and 1.6 TFLOP/s for 2D and 3D
stencil computation, respectively.
|
The contribution of atmospheric transient eddies and low-frequency flow to
the ocean surface wave climate in the North Atlantic during boreal winter
(1980 - 2016) is investigated. We conduct a set of numerical simulations with the
state-of-the-art spectral wave model Wavewatch III forced by decomposed wind
fields derived from the ERA-Interim reanalysis (0.7{\deg} horizontal
resolution). Synoptic-scale processes (2-10 day bandpassed winds) are found to
have the largest impact on the formation of wind waves in the western
mid-latitude North Atlantic along the North American and western Greenland
coasts. The eastern North Atlantic is found to be influenced by the combination
of low-frequency forcing (>10 day bandpassed winds) contributing up to 60% and
synoptic processes contributing up to 30% to mean wave heights. Mid-latitude
storm track variability is found to have a direct relationship with wave height
variability on the eastern and western margins of the North Atlantic in
particular implying an association between cyclone formation over the North
American Eastern Seaboard and wave height anomalies in the eastern North
Atlantic. A shift in wave height regimes, defined using an EOF analysis, is
reflected in anomalies in their occurrence distribution. Results highlight
the dominant role of transient eddies on the ocean surface wave climatology in
the mid-latitude eastern North Atlantic both locally and through association
with cyclone formation in the western part of the basin. These conclusions are
presented and discussed particularly within the context of long-term
storm-track shifts projected as a possible response to climate warming over the
coming century.
|
Let $\alpha>1$ be an irrational number. We establish asymptotic formulas for
the number of partitions of $n$ into summands and distinct summands, chosen
from the Beatty sequence $(\lfloor\alpha m\rfloor)_{m\in\mathbb{N}}$. This
improves some results of Erd\H{o}s and Richmond established in 1977.
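The counting functions in question have the standard restricted-partition
generating functions: for partitions into summands and into distinct summands
drawn from the Beatty sequence, respectively,
$$
\sum_{n\ge 0} p_\alpha(n)\,q^n \;=\; \prod_{m=1}^{\infty}\frac{1}{1-q^{\lfloor\alpha m\rfloor}},
\qquad
\sum_{n\ge 0} q_\alpha(n)\,q^n \;=\; \prod_{m=1}^{\infty}\bigl(1+q^{\lfloor\alpha m\rfloor}\bigr).
$$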
|
We show that the strong operator topology, the weak operator topology and the
compact-open topology agree on the space of unitary operators of an
infinite-dimensional separable Hilbert space. Moreover, we show that the
unitary group
endowed with any of these topologies is a Polish group.
|
The application of machine learning principles in the photometric search of
elusive astronomical objects has been a less-explored frontier of research.
Here we have used three methods: the Neural Network and two variants of
k-Nearest Neighbour, to identify brown dwarf candidates using the photometric
colours of known brown dwarfs. We initially check the efficiencies of these
three classification techniques, both individually and collectively, on known
objects. This is followed by their application to three regions in the sky,
namely Hercules (2 deg x 2 deg), Serpens (9 deg x 4 deg) and Lyra (2 deg x 2
deg). Testing these algorithms on sets of objects that include known brown
dwarfs shows a high level of completeness. This includes the Hercules and
Serpens regions where brown dwarfs have been detected. We use these methods to
search and identify brown dwarf candidates towards the Lyra region. We infer
that the collective method of classification, also known as ensemble
classifier, is highly efficient in the identification of brown dwarf
candidates.
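A minimal sketch of such an ensemble ("collective") classifier with
scikit-learn, using placeholder colours and labels; the paper's actual
photometric features, survey data, and network architecture differ:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
colours = rng.normal(size=(500, 4))          # placeholder photometric colours
labels = (colours[:, 0] + colours[:, 1] > 1.0).astype(int)  # placeholder classes

models = [
    KNeighborsClassifier(n_neighbors=5),                      # uniform kNN
    KNeighborsClassifier(n_neighbors=5, weights="distance"),  # weighted kNN
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
]
for m in models:
    m.fit(colours, labels)

# Ensemble: flag a candidate only when a majority of classifiers agree.
votes = sum(m.predict(colours) for m in models)
candidates = votes >= 2
```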
|
Observing, understanding, and mitigating the effects of failure in embedded
systems is essential for building dependable control systems. We develop a
software-based monitoring methodology to further this goal. This methodology
can be applied to any embedded system peripheral and allows the system to
operate normally while the monitoring software is running. We use software to
instrument the operating system kernel and record indicators of system
behavior. By comparing those indicators against baseline indicators of normal
system operation, faults can be detected and appropriate action can be taken.
We implement this methodology to detect faults caused by electrostatic
discharge in a USB host controller. As indicators, we select specific control
registers that provide a manifestation of the internal execution of the host
controller. Analysis of the recorded register values reveals differences in
system execution when the system is subject to interference. This improved
understanding of system behavior may
lead to better hardware and software mitigation of electrostatic discharge and
assist in root-cause analysis and repair of failures.
|
The best-response dynamics is an example of an evolutionary game where
players update their strategy in order to maximize their payoff. The main
objective of this paper is to study a stochastic spatial version of this game
based on the framework of interacting particle systems in which players are
located on an infinite square lattice. In the presence of two strategies, and
calling a strategy selfish or altruistic depending on a certain ordering of the
coefficients of the underlying payoff matrix, a simple analysis of the
non-spatial mean-field approximation of the spatial model shows that a strategy
is evolutionarily stable if and only if it is selfish, making the system bistable
when both strategies are selfish. The spatial and non-spatial models agree when
at least one strategy is altruistic. In contrast, we prove that, in the
presence of two selfish strategies and in any spatial dimension, only the most
selfish strategy remains evolutionarily stable. The main ingredients of the proof
are monotonicity results and a coupling between the best-response dynamics
properly rescaled in space with bootstrap percolation to compare the infinite
time limits of both systems.
|
Let $f$ be the germ of a real analytic function at the origin in
$\mathbb{R}^n $ for $n \geq 2$, and suppose the codimension of the zero set of
$f$ at $\mathbf{0}$ is at least $2$. We show that $\log |f|$ is
$W^{1,1}_{\operatorname{loc}}$ near $\mathbf{0}$. In particular, this implies
the differential inequality $|\nabla f |\leq V |f|$ holds with $V \in
L^1_{\operatorname{loc}}$. As an application, we derive an inequality relating
the {\L}ojasiewicz exponent and singularity exponent for such functions.
|
The purpose of this experiment was to use the known analytical techniques to
study the creation, simulation, and measurements of molecular Hamiltonians. The
techniques used consisted of the Linear Combination of Atomic Orbitals (LCAO),
the Linear Combination of Unitaries (LCU), and the Phase Estimation Algorithm
(PEA). The molecules studied were $H_2$ with and without spin, as well as
$He_2$ without spin. Hamiltonians were created under the LCAO basis, and
reconstructed using the Jordan-Wigner transform in order to create a linear
combination of Pauli spin operators. The length of each molecular Hamiltonian
increased greatly from $H_2$ without spin to $He_2$. This resulted in a
reduced ability to simulate the Hamiltonians under ideal conditions. Thus, only
low orders of l = 1 and l = 2 were used when expanding the Hamiltonian in
accordance with the LCU method of simulation. The resulting Hamiltonians were
measured using PEA, and plotted as a function of $\frac{2\pi K}{N}$ against the
probability distribution of each register. The resolution of the graph was
dependent on the amount of registers, N, being used. However, the reduction of
order hardly changed the image of the $H_2$ graphs. Qualitative comparisons
between the three molecules were drawn.
|
The finite sample variance of an inverse propensity weighted estimator is
derived in the case of discrete control variables with finite support. The
obtained expressions generally corroborate widely-cited asymptotic theory
showing that estimated propensity scores are superior to true propensity scores
in the context of inverse propensity weighting. However, similar analysis of a
modified estimator demonstrates that foreknowledge of the true propensity
function can confer a statistical advantage when estimating average treatment
effects.
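For concreteness, the estimator in question is the standard inverse propensity
weighted estimator of the average treatment effect, with propensity score
$e(x)=\Pr(T=1\mid X=x)$:
$$
\hat\tau_{\mathrm{IPW}} \;=\; \frac{1}{n}\sum_{i=1}^{n}\left[\frac{T_i Y_i}{\hat e(X_i)} \;-\; \frac{(1-T_i)\,Y_i}{1-\hat e(X_i)}\right].
$$
The widely-cited asymptotic result referred to above is that plugging in the
estimated $\hat e$ yields an asymptotic variance no larger than that obtained
with the true $e$.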
|
Causality is a non-obvious concept that is often considered to be related to
temporality. In this paper we present a number of past and present approaches
to the definition of temporality and causality from philosophical, physical,
and computational points of view. We note that time is an important ingredient
in many relationships and phenomena. The topic is then divided into the two
main areas of temporal discovery, which is concerned with finding relations
that are stretched over time, and causal discovery, where a claim is made as to
the causal influence of certain events on others. We present a number of
computational tools used for attempting to automatically discover temporal and
causal relations in data.
|
Furihata and Matsuo proposed in 2010 an energy-conserving scheme for the
Zakharov equations, as an application of the discrete variational derivative
method (DVDM).
This scheme is distinguished from conventional methods (in particular the one
devised by Glassey in 1992) in that the invariants are consistent with respect
to time, but it has not been sufficiently studied either theoretically or
numerically.
In this study, we theoretically prove the solvability under the loosest
possible assumptions.
We also prove the convergence of this DVDM scheme by improving the argument
by Glassey.
Furthermore, we perform intensive numerical experiments for comparing the
above two schemes.
It is found that the DVDM scheme is superior in terms of accuracy, but since
it is fully-implicit, the linearly-implicit Glassey scheme is better for
practical efficiency.
In addition, we propose a way to choose a solution for the first step that
would allow Glassey's scheme to work more efficiently.
|
Self-similarity and fractals have fascinated researchers across various
disciplines. In graphene placed on boron nitride and subjected to a magnetic
field, self-similarity appears in the form of numerous replicas of the original
Dirac spectrum, and their quantization gives rise to a fractal pattern of
Landau levels, referred to as the Hofstadter butterfly. Here we employ
capacitance spectroscopy to probe directly the density of states (DoS) and
energy gaps in this spectrum. Without a magnetic field, replica spectra are
seen as pronounced DoS minima surrounded by van Hove singularities. The
Hofstadter butterfly shows up as recurring Landau fan diagrams in high fields.
Electron-electron interactions add another twist to the self-similar behaviour.
We observe suppression of quantum Hall ferromagnetism, a reverse Stoner
transition at commensurable fluxes and additional ferromagnetism within replica
spectra. The strength and variety of the interaction effects indicate a large
playground to study many-body physics in fractal Dirac systems.
|
Local unitary stabilizer subgroups constitute powerful invariants for
distinguishing various types of multipartite entanglement. In this paper, we
show how stabilizers can be used as a basis for entanglement verification
protocols on distributed quantum networks using minimal resources. As an
example, we develop and analyze the performance of a protocol to verify
membership in the space of Werner states, that is, multi-qubit states that are
invariant under the action of any 1-qubit unitary applied to all the qubits.
|
The influence of nuclear matter on the properties of coherently produced
resonances is discussed. It is shown that, in general, the mass distribution of
resonance decay products has a two-component structure corresponding to decay
outside and inside the nucleus. The first (narrow) component of the amplitude
has a Breit-Wigner form determined by the vacuum values of mass and width of
the resonance. The second (broad) component corresponds to interactions of the
resonance with the nuclear medium. It can be also described by a Breit-Wigner
shape with parameters depending e.g. on the nuclear density and on the cross
section of the resonance-nucleon interaction. The resonance production is
examined both at intermediate energies, where interactions with the nucleus can
be considered as a series of successive local rescatterings, and at high
energies, $E>E_{crit}$, where a change of interaction picture occurs. This
change of mechanisms of the interactions with the nucleus is typical for the
description within the Regge theory approach and is connected with the nonlocal
nature of the reggeon interaction.
|
Intuitively, image classification should profit from using spatial
information. Recent work, however, suggests that this might be overrated in
standard CNNs. In this paper, we are pushing the envelope and aim to further
investigate the reliance on spatial information. We propose spatial shuffling
and GAP+FC to destroy spatial information during both training and testing
phases. Interestingly, we observe that spatial information can be deleted from
later layers with small performance drops, which indicates spatial information
at later layers is not necessary for good performance. For example, test
accuracy of VGG-16 only drops by 0.03% and 2.66% with spatial information
completely removed from the last 30% and 53% layers on CIFAR100, respectively.
Evaluation on several object recognition datasets (CIFAR100, Small-ImageNet,
ImageNet) with a wide range of CNN architectures (VGG16, ResNet50, ResNet152)
shows an overall consistent pattern.
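One plausible reading of the spatial-shuffling operation, as a sketch: permute
the H x W positions of an activation tensor, with the permutation shared
across channels. Whether the paper shares the permutation across samples or
draws one per image is an assumption here:

```python
import torch

def spatial_shuffle(x: torch.Tensor) -> torch.Tensor:
    """Destroy spatial information by randomly permuting the H*W positions
    of a feature map (same permutation for every channel and sample).

    x : (batch, channels, height, width) activation tensor.
    """
    n, c, h, w = x.shape
    perm = torch.randperm(h * w, device=x.device)
    return x.flatten(2)[:, :, perm].view(n, c, h, w)
```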
|
We study the quantum entanglement and quantum phase transition (QPT) of the
anisotropic spin-1/2 XY model with staggered Dzyaloshinskii-Moriya (DM)
interaction by means of quantum renormalization group method. The scaling of
coupling constants and the critical points of the system are obtained. It is
found that when the number of renormalization group iterations tends to
infinity, the system exhibits a QPT between the spin-fluid and N\'eel phases,
which corresponds to two saturated values of the concurrence for a given
value of the strength of the DM interaction. The DM interaction can enhance the
entanglement and influence the QPT of the system. To gain further insight, the
first derivative of the entanglement exhibits nonanalytic behavior at the
critical point and is directly associated with the divergence of the
correlation length. This shows that the correlation length exponent is closely
related to the critical exponent, i.e., the scaling behaviors of the system.
|
Determining the number of clusters is a fundamental issue in data clustering.
Several algorithms have been proposed, including centroid-based algorithms
using the Euclidean distance and model-based algorithms using a mixture of
probability distributions. Among these, greedy algorithms for searching the
number of clusters by repeatedly splitting or merging clusters have advantages
in terms of computation time for problems with large sample sizes. However,
systematic evaluation experiments comparing these methods are still lacking.
This study examines centroid- and model-based cluster search
algorithms in various cases that Gaussian mixture models (GMMs) can generate.
The cases are generated by combining five factors: dimensionality, sample size,
the number of clusters, cluster overlap, and covariance type. The results show
that some cluster-splitting criteria based on Euclidean distance make
unreasonable decisions when clusters overlap. The results also show that
model-based algorithms are insensitive to covariance type and cluster overlap
compared to the centroid-based method if the sample size is sufficient. Our
cluster search implementation codes are available at
https://github.com/lipryou/searchClustK
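For orientation, a standard criterion-based search over the number of clusters
in the model-based family can be sketched with scikit-learn as below; this is a
common baseline, not the greedy split/merge algorithms evaluated in the study.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def select_k_by_bic(X: np.ndarray, k_max: int = 10) -> int:
        # Fit a GMM for each candidate k and return the k minimizing BIC.
        bics = [
            GaussianMixture(n_components=k, covariance_type="full",
                            random_state=0).fit(X).bic(X)
            for k in range(1, k_max + 1)
        ]
        return int(np.argmin(bics)) + 1

Greedy split/merge searches avoid refitting from scratch for every candidate k,
which is where their computation-time advantage for large samples comes from.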
|
Parametric amplification of vacuum fluctuations is crucial in modern quantum
optics, enabling the creation of squeezing and entanglement. We demonstrate the
parametric amplification of vacuum fluctuations for matter waves using a spinor
F=2 Rb-87 condensate. Interatomic interactions lead to correlated pair creation
in the m_F= +/- 1 states from an initial unstable m_F=0 condensate, which acts
as a vacuum for nonzero m_F. Although this pair creation from a pure m_F=0
condensate is ideally triggered by vacuum fluctuations, unavoidable spurious
initial m_F= +/- 1 atoms induce a classical seed which may become the dominant
triggering mechanism. We show that pair creation is insensitive to a classical
seed for sufficiently large magnetic fields, demonstrating the dominant role of
vacuum fluctuations. The presented system thus provides a direct path towards
the generation of non-classical states of matter on the basis of spinor
condensates.
|
We study several properties of equi-Baire 1 families of functions between
metric spaces. We consider the related equi-Lebesgue property for such
families. We examine the behaviour of equi-Baire 1 and equi-Lebesgue families
with respect to pointwise and uniform convergence. In particular, we obtain a
criterion for a choice of a uniformly convergent subsequence from a sequence of
functions that form an equi-Baire 1 family, which solves a problem posed in
[3]. Finally, we discuss the notion of equi-cliquishness and relations between
equi-Baire 1 families and sets of equi-continuity points.
|
This work is concerned with the study of singular limits for the
Vlasov-Poisson system in the case of massless electrons (VPME), which is a
kinetic system modelling the ions in a plasma. Our objective is threefold:
first, we provide a mean field derivation of the VPME system in dimensions
$d=2,3$ from a system of $N$ extended charges. Secondly, we prove a rigorous
quasineutral limit for initial data that are perturbations of analytic data,
deriving the Kinetic Isothermal Euler (KIE) system from the VPME system in
dimensions $d=2,3$. Lastly, we combine these two singular limits in order to
show how to obtain the KIE system from an underlying particle system.
|
We show some computations on representations of the fundamental group in
SL(2;C) and Reidemeister torsion for a homology 3-sphere obtained by Dehn
surgery along the figure-eight knot. This is the second version, in which
several errors from the first version have been corrected.
|
This paper presents a fast spectral unmixing algorithm based on Dykstra's
alternating projection. The proposed algorithm formulates the fully constrained
least squares optimization problem associated with the spectral unmixing task
as an unconstrained regression problem followed by a projection onto the
intersection of several closed convex sets. This projection is achieved by
iteratively projecting onto each of the convex sets individually, following
Dykstra's scheme. The sequence thus obtained is guaranteed to converge to the
sought projection. Thanks to the preliminary matrix decomposition and variable
substitution, the projection is implemented intrinsically in a subspace, whose
dimension is very often much lower than the number of bands. A benefit of this
strategy is that the order of the computational complexity for each projection
is decreased from quadratic to linear time. Numerical experiments considering
diverse spectral unmixing scenarios provide evidence that the proposed
algorithm competes with the state-of-the-art, namely when the number of
endmembers is relatively small, a circumstance often observed in real
hyperspectral applications.
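The projection step at the heart of the method is Dykstra's algorithm; a
generic sketch follows, with each convex set supplied as its own projection
operator. The subspace implementation via matrix decomposition, which gives the
complexity reduction, is omitted here.

    import numpy as np

    def dykstra_projection(y, projections, n_iter=200):
        # Project y onto the intersection of closed convex sets, each given
        # by its individual projection operator, using Dykstra's corrections.
        x = y.copy()
        corrections = [np.zeros_like(y) for _ in projections]
        for _ in range(n_iter):
            for i, proj in enumerate(projections):
                z = x + corrections[i]
                x = proj(z)
                corrections[i] = z - x
        return x

    # Example sets for fully constrained abundances:
    proj_nonneg = lambda v: np.maximum(v, 0.0)                # nonnegativity
    proj_sum_to_one = lambda v: v + (1.0 - v.sum()) / v.size  # affine sum-to-one

Unlike plain alternating projections, the correction terms ensure convergence
to the actual projection of y, not merely to some point in the intersection.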
|
Hilbert space representations of the cross product *-algebras of the Hopf
*-algebra U_q(su_2) and its module *-algebras O(S^2_{qr}) of Podles spheres are
investigated and classified by describing the action of generators. The
representations are analyzed within two approaches. It is shown that the Hopf
*-algebra O(SU_q(2)) of the quantum group SU_q(2) decomposes into an orthogonal
sum of projective Hopf modules corresponding to irreducible integrable
*-representations of the cross product algebras and that each irreducible
integrable *-representation appears with multiplicity one. The projections of
these projective modules are computed. The decompositions of tensor products of
irreducible integrable *-representations with spin l representations of
U_q(su_2) are given. The invariant state h on O(S^2_{qr}) is studied in detail.
By passing to function algebras over the quantum spheres S^2_{qr}, we give
chart descriptions of quantum line bundles and describe the representations
from the first approach by means of the second approach.
|
We study the decay of dynamically generated resonances from the interaction
of two vectors into a $\gamma$ and a pseudoscalar meson. The dynamics requires
anomalous terms involving vertices with two vectors and a pseudoscalar, which
renders it special. We compare our result with data on $K^{*+}(1430)\to
K^+\gamma$ and $K^{*0}(1430)\to K^0\gamma$ and find a good agreement with the
data for the $K^{*+}(1430)$ case and a width considerably smaller than the
upper bound measured for the $K^{*0}(1430)$ meson.
|
Surface stress drives long-range elastocapillary interactions at the surface
of compliant solids, where it has been observed to mediate interparticle
interactions and to alter the transport of liquid drops. We show that such an
elastocapillary interaction arises between neighboring structures that are
simply protrusions of the compliant solid. For compliant micropillars arranged
in a square lattice with spacing p less than an interaction distance p*, the
distance of a pillar to its neighbors determines how much it deforms due to
surface stress: pillars that are close together tend to be rounder and flatter
than those that are far apart. The interaction is mediated by the formation of
an elastocapillary meniscus at the base of each pillar, which sets the
interaction distance and causes neighboring structures to deform more than
those that are relatively isolated. Neighboring pillars also displace toward
each other to form clusters, leading to the emergence of pattern formation and
ordered domains.
|
The classical Bj\"orling problem is to find the minimal surface containing a
given real analytic curve with tangent planes prescribed along the curve. We
consider the generalization of this problem to non-minimal constant mean
curvature (CMC) surfaces, and show that it can be solved via the loop group
formulation for such surfaces. The main result gives a way to compute the
holomorphic potential for the solution directly from the Bj\"orling data, using
only elementary differentiation, integration and holomorphic extensions of real
analytic functions. Combined with an Iwasawa decomposition of the loop group,
this gives the solution, in analogy with Schwarz's formula for the minimal case.
Some preliminary examples of applications to the construction of CMC surfaces
with special properties are given.
|
Prototypical part network (ProtoPNet) methods have been designed to achieve
interpretable classification by associating predictions with a set of training
prototypes, which we refer to as trivial prototypes because they are trained to
lie far from the classification boundary in the feature space. Note that it is
possible to make an analogy between ProtoPNet and support vector machine (SVM)
given that the classification from both methods relies on computing similarity
with a set of training points (i.e., trivial prototypes in ProtoPNet, and
support vectors in SVM). However, while trivial prototypes are located far from
the classification boundary, support vectors are located close to this
boundary, and we argue that this discrepancy with the well-established SVM
theory can result in ProtoPNet models with inferior classification accuracy. In
this paper, we aim to improve the classification of ProtoPNet with a new method
to learn support prototypes that lie near the classification boundary in the
feature space, as suggested by the SVM theory. In addition, we target the
improvement of classification results with a new model, named ST-ProtoPNet,
which exploits our support prototypes and the trivial prototypes to provide
more effective classification. Experimental results on CUB-200-2011, Stanford
Cars, and Stanford Dogs datasets demonstrate that ST-ProtoPNet achieves
state-of-the-art classification accuracy and interpretability results. We also
show that the proposed support prototypes tend to be better localised in the
object of interest rather than in the background region.
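To ground the SVM analogy, the generic ProtoPNet-style classification step can
be sketched as follows: logits come from each class's weighted similarities
between local features and prototypes. The cosine similarity here is an
illustrative simplification; specific ProtoPNet variants use other similarity
functions, and this is not the authors' ST-ProtoPNet code.

    import torch
    import torch.nn.functional as F

    def prototype_logits(features, prototypes, class_weights):
        # features:      (N, C, H, W) local feature map from a CNN backbone
        # prototypes:    (P, C) learned prototype vectors
        # class_weights: (K, P) linear layer over prototype similarities
        sims = torch.einsum(
            "nchw,pc->nphw",
            F.normalize(features, dim=1),
            F.normalize(prototypes, dim=1),
        )                                            # per-location similarity
        scores = sims.flatten(2).max(dim=-1).values  # (N, P) max over locations
        return scores @ class_weights.t()            # (N, K) class logits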
|
We explore the ferromagnetic quantum critical point in a three-dimensional
semimetallic system with upward- and downward-dispersing bands touching at the
Fermi level. Evaluating the static spin susceptibility to leading order in the
coupling between the fermions and the fluctuating ferromagnetic order
parameter, we find that the ferromagnetic quantum critical point is masked by
an incommensurate, longitudinal spin density wave phase. We first analyze an
idealized model which, despite having strong spin-orbit coupling, still
possesses O(3) rotational symmetry generated by the total angular momentum
operator. In this case, the direction of the incommensurate spin density wave
propagation can point anywhere, while the magnetic moment is aligned along the
direction of propagation. Including symmetry-allowed anisotropies in the
fermion dispersion and the coupling to the order parameter field, however, the
ordering wavevector instead breaks a discrete symmetry and aligns along either
the [111] or [100] direction, depending on the signs and magnitudes of these
two types of anisotropy.
|
We introduce a methodology to efficiently exploit biomedical knowledge
expressed in natural language for repurposing existing drugs towards diseases
for which they were not initially intended. Leveraging developments in
Computational Linguistics and Graph Theory, a methodology is defined to build a graph
representation of knowledge, which is automatically analysed to discover hidden
relations between any drug and any disease: these relations are specific paths
among the biomedical entities of the graph, representing possible Modes of
Action for any given pharmacological compound. These paths are ranked according
to their relevance, exploiting a measure induced by a stochastic process
defined on the graph. Here we show, providing real-world examples, how the
method successfully retrieves known pathophysiological Modes of Action and
finds new ones by meaningfully selecting and aggregating contributions from
known bio-molecular interactions. Applications of this methodology are
presented, proving the efficacy of the method for selecting drugs as
treatment options for rare diseases.
|
In the regime where traditional approaches to electronic structure cannot
afford to achieve accurate energy differences via exhaustive wave function
flexibility, rigorous approaches to balancing different states' accuracies
become desirable. As a direct measure of a wave function's accuracy, the energy
variance offers one route to achieving such a balance. Here, we develop and
test a variance matching approach for predicting excitation energies within the
context of variational Monte Carlo and selective configuration interaction. In
a series of tests on small but difficult molecules, we demonstrate that the
approach is effective at delivering accurate excitation energies when the
wave function is far from the exhaustive flexibility limit. Results in C$_3$,
where we combine this approach with variational Monte Carlo orbital
optimization, are especially encouraging.
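For reference, the quantity being matched is the energy variance of a trial
wave function,

$$\sigma^2[\Psi] = \langle \Psi | \hat{H}^2 | \Psi \rangle - \langle \Psi | \hat{H} | \Psi \rangle^2,$$

which vanishes for any exact eigenstate and therefore measures a wave
function's accuracy directly; matching it between the ground and excited states
balances their accuracies before the energy difference is taken.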
|
Multi-document summarization (MDS) aims to generate a summary for a number of
related documents. We propose HGSUM, an MDS model that extends an
encoder-decoder architecture, to incorporate a heterogeneous graph to represent
different semantic units (e.g., words and sentences) of the documents. This
contrasts with existing MDS models which do not consider different edge types
of graphs and as such do not capture the diversity of relationships in the
documents. To preserve only key information and relationships of the documents
in the heterogeneous graph, HGSUM uses graph pooling to compress the input
graph. And to guide HGSUM to learn compression, we introduce an additional
objective that maximizes the similarity between the compressed graph and the
graph constructed from the ground-truth summary during training. HGSUM is
trained end-to-end with graph similarity and standard cross-entropy objectives.
Experimental results over MULTI-NEWS, WCEP-100, and ARXIV show that HGSUM
outperforms state-of-the-art MDS models. The code for our model and experiments
is available at: https://github.com/oaimli/HGSum.
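A minimal sketch of the combined training objective described above: standard
cross-entropy plus a graph-similarity term between the compressed input graph
and the graph built from the ground-truth summary. The cosine-similarity form
and the weighting factor are illustrative assumptions, not details taken from
the paper.

    import torch
    import torch.nn.functional as F

    def hgsum_style_loss(logits, targets, g_compressed, g_summary, lam=1.0):
        # logits: (N, vocab) decoder outputs; targets: (N,) gold token ids
        # g_compressed, g_summary: pooled graph embeddings of shape (d,)
        ce = F.cross_entropy(logits, targets)
        graph_sim = F.cosine_similarity(g_compressed, g_summary, dim=0)
        return ce + lam * (1.0 - graph_sim)  # reward similar graph embeddings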
|
One pivotal challenge in image anomaly detection (AD) is to learn
discriminative information from normal-class training images only. Most image
reconstruction based AD methods rely on the discriminative capability of
reconstruction error. This is heuristic as image reconstruction is unsupervised
without incorporating normal-class-specific information. In this paper, we
propose an AD method called dual deep reconstruction networks based image
decomposition (DDR-ID). The networks are trained by jointly optimizing for
three losses: the one-class loss, the latent space constraint loss, and the
reconstruction loss. After training, DDR-ID can decompose an unseen image into
its normal class and the residual components, respectively. Two anomaly scores
are calculated to quantify the anomalous degree of the image in either normal
class latent space or reconstruction image space. Thereby, anomaly detection
can be performed via thresholding the anomaly score. The experiments
demonstrate that DDR-ID outperforms multiple related benchmarking methods in
image anomaly detection using MNIST, CIFAR-10 and Endosome datasets and
adversarial attack detection using GTSRB dataset.
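The detection step described above reduces to thresholding an anomaly score; a
minimal sketch for the reconstruction-space score is given below, with the
threshold assumed to be tuned on held-out normal data.

    import numpy as np

    def is_anomalous(x, reconstruct, threshold):
        # reconstruct: trained model mapping an image to its normal-class
        # reconstruction; large residual energy indicates an anomaly.
        residual = x - reconstruct(x)
        score = float(np.mean(residual ** 2))
        return score > threshold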
|
With the advent of machine learning in applications of critical
infrastructure such as healthcare and energy, privacy is a growing concern in
the minds of stakeholders. It is pivotal to ensure that neither the model nor
the data can be used to extract sensitive information used by attackers against
individuals or to harm whole societies through the exploitation of critical
infrastructure. The applicability of machine learning in these domains is
mostly limited due to a lack of trust regarding the transparency and the
privacy constraints. Various safety-critical use cases (mostly relying on
time-series data) are currently underrepresented in privacy-related
considerations. By evaluating several privacy-preserving methods regarding
their applicability on time-series data, we validated the inefficacy of
encryption for deep learning, the strong dataset dependence of differential
privacy, and the broad applicability of federated methods.
|
This study investigates the impacts of the Automobile NOx Law of 1992 on
ambient air pollutants and fetal and infant health outcomes in Japan. Using
panel data taken from more than 1,500 monitoring stations between 1987 and
1997, we find that NOx and SO2 levels fell by 87% and 52%, respectively, in
regulated areas following the 1992 regulation. In addition, using a
municipal-level Vital Statistics panel dataset and adopting the regression
difference-in-differences method, we find that the enactment of the regulation
explained most of the improvements in the fetal death rate between 1991 and
1993. This study is the first to provide evidence on the positive impacts of
this large-scale automobile regulation policy on fetal health.
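To illustrate the regression difference-in-differences design, a minimal
specification with municipality and year fixed effects is sketched below on
synthetic data; all variable names and numbers are hypothetical placeholders,
not the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "municipality": rng.integers(0, 50, 2000),
        "year": rng.integers(1987, 1998, 2000),
    })
    df["regulated"] = (df["municipality"] < 25).astype(int)  # treated areas
    df["post"] = (df["year"] >= 1992).astype(int)            # after the law
    df["fetal_death_rate"] = (rng.normal(5.0, 1.0, 2000)
                              - 0.5 * df["regulated"] * df["post"])

    # The coefficient on regulated:post is the DiD estimate of the effect;
    # main effects are absorbed by the municipality and year fixed effects.
    model = smf.ols(
        "fetal_death_rate ~ regulated:post + C(municipality) + C(year)",
        data=df,
    ).fit(cov_type="cluster", cov_kwds={"groups": df["municipality"]})
    print(model.params["regulated:post"])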
|
The characterization of the dynamical state of clusters is key to study their
evolution, their selection, and use them as a cosmological probe. The offsets
between different definitions of the center have been used to estimate the
cluster disturbance. Our goal is to study the distribution of the offset
between the X-ray and optical centers in clusters of galaxies. We study the
offset for eROSITA clusters. We aim to connect observations to hydrodynamical
simulations and N-body models. We assess the astrophysical effects affecting
the displacements. We measure the offset for clusters observed in eFEDS and
eRASS1. We focus on a subsample of 87 massive eFEDS clusters at low redshift.
We link the observations to the offset parameter Xoff measured on dark matter
halos in N-body simulations, using the hydrodynamical simulations as a bridge.
eFEDS clusters show a smaller offset compared to eRASS1, because the latter
contains a larger fraction of massive and disturbed structures. We measure an
average offset of $76.3^{+30.1}_{-27.1}$ kpc on the subsample of 87 eFEDS clusters.
This is in agreement with the predictions from TNG and Magneticum, and the
distribution of Xoff from DMO simulations. The tails of the distributions are
different. Using the offset to classify relaxed and disturbed clusters, we
measure a relaxed fraction of 31% in the eFEDS subsample. Finally, we find a
correlation between the offset in hydrodynamical simulations and Xoff measured
on their parent DMO run and calibrate a relation between them. There is good
agreement between eROSITA data and simulations. Baryons cause a decrement
(increment) in the low (high) offset regime compared to the Xoff distribution.
The offset-Xoff relation provides an accurate prediction of the true Xoff
distribution in Magneticum and TNG. This allows introducing the offsets into
cosmological analyses while marginalizing over dynamical selection effects.
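The underlying measurement, converting the angular separation between the X-ray
and optical centers into a physical offset at the cluster redshift, can be
sketched with astropy; the coordinates, redshift, and cosmology choice below
are placeholder assumptions.

    import astropy.units as u
    from astropy.coordinates import SkyCoord
    from astropy.cosmology import Planck18

    def center_offset_kpc(xray, optical, z):
        # Angular separation converted to a proper transverse distance.
        theta = xray.separation(optical)
        return (theta.radian * Planck18.angular_diameter_distance(z)).to(u.kpc)

    xray = SkyCoord(ra=135.00 * u.deg, dec=1.50 * u.deg)
    optical = SkyCoord(ra=135.01 * u.deg, dec=1.49 * u.deg)
    print(center_offset_kpc(xray, optical, z=0.3))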
|
In this paper, we study entire solutions of the difference equation
$\psi(z+h)=M(z)\psi(z)$, $z\in{\mathbb C}$, $\psi(z)\in {\mathbb C}^2$. In this
equation, $h$ is a fixed positive parameter and $M: {\mathbb C}\to
SL(2,{\mathbb C})$ is a given matrix function. We assume that $M(z)$ is a
$2\pi$-periodic trigonometric polynomial. We construct the minimal entire
solutions, i.e., entire solutions with the minimal possible growth
both for im$z\to+\infty$ and for im$z\to-\infty$. We show that the
monodromy matrices corresponding to the minimal entire solutions are
trigonometric polynomials of the same order as $M$. This property relates the
spectral analysis of difference Schr\"odinger equations with trigonometric
polynomial coefficients to an analysis of finite dimensional dynamical systems.
|
Under certain conditions, space-charge limited emission in vacuum microdiodes
manifests as clearly defined bunches of charge with a regular size and
interval. The frequency corresponding to this interval is in the Terahertz
range. In this computational study it is demonstrated that, for a range of
parameters conducive to generating THz-frequency oscillations, the frequency
is dependent only on the cold cathode electric field and on the emitter area.
For a planar micro-diode of given dimension, the modulation frequency can be
easily tuned simply by varying the applied potential. Simulations of the
microdiode are done for 84 different combinations of emitter area, applied
voltage and gap spacing, using a molecular dynamics based code with exact
Coulomb interaction between all electrons in the vacuum gap, whose number is of
order 100. It is found, for a fixed emitter area, that the frequency of the
pulse train is solely dependent on the vacuum electric field in the diode,
described by a simple power law. It is also found that, for a fixed value of
the electric field, the frequency increases with diminishing size of the
emitting spot on the cathode. Some observations are made on the spectral
quality, and how it is affected by the gap spacing in the diode and the initial
velocity of the electrons.
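The reported field dependence can be extracted from such simulation outputs
with a straight-line fit in log-log space; the sketch below uses synthetic
numbers standing in for the simulated (field, frequency) pairs.

    import numpy as np

    E = np.array([10.0, 20.0, 40.0, 80.0])  # cold cathode field, arbitrary units
    f = 0.5 * E ** 1.5                      # assumed power law f = a * E**b

    b, log_a = np.polyfit(np.log(E), np.log(f), 1)
    print(f"exponent b = {b:.3f}, prefactor a = {np.exp(log_a):.3f}")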
|
A universal scheme is introduced to speed up the dynamics of a driven open
quantum system along a prescribed trajectory of interest. This framework
generalizes counterdiabatic driving to open quantum processes. Shortcuts to
adiabaticity designed in this fashion can be implemented in two alternative
physical scenarios: one characterized by the presence of balanced gain and
loss, the other involves non-Markovian dynamics with time-dependent Lindblad
operators. As an illustration, we engineer superadiabatic cooling, heating, and
isothermal strokes for a two-level system, and provide a protocol for the fast
thermalization of a quantum oscillator.
|
Security-critical system requirements are increasingly enforced through
mandatory access control systems. These systems are controlled by security
policies, highly sensitive system components, which emphasizes the paramount
importance of formally verified security properties regarding policy
correctness. For the class of safety-properties, addressing potential dynamic
right proliferation, a number of known and tested formal analysis methods and
tools already exist. Unfortunately, these methods need to be redesigned from
scratch for each particular policy from a broad range of different application
domains.
In this paper, we seek to mitigate this problem by proposing a uniform formal
framework, tailorable to a safety analysis algorithm for a specific application
domain. We present a practical workflow, guided by model-based knowledge, that
is capable of producing a meaningful formal safety definition along with an
algorithm to heuristically analyze that safety. Our method is demonstrated
based on security policies for the SELinux operating system.
Keywords: Security engineering, security policies, access control systems,
access control models, safety, heuristic analysis, SELinux.
|
This manuscript originated from the discussion at the workshop on the "Future
of Few-body Low Energy Experimental Physics" (FFLEEP), which was held at the
University of Trento on December 4-7, 2002 and has been written in its present
form on March 19, 2003. It illustrates a selection of theoretical advancements
in the nuclear few-body problem, including two- and many-nucleon interactions,
the three-nucleon bound and scattering system, the four-body problem, the
A-body (A$>$4) problem, and fields of related interest, such as reactions of
astrophysical interest and few-neutron systems. Particular attention is called
to the contradictory situation one experiences in this field: while theory is
currently advancing and has the potential to inspire new experiments, the
experimental activity is nevertheless rapidly phasing out. If this trend
continues, advancements in this area will become critically difficult.
|
Reinforcement learning of real-world tasks is very data inefficient, and
extensive simulation-based modelling has become the dominant approach for
training systems. However, in human-robot interaction and many other real-world
settings, there is no appropriate one-model-for-all due to differences in
individual instances of the system (e.g. different people) or necessary
oversimplifications in the simulation models. This leaves two options: 1.
learning the individual system's dynamics approximately from data, which
requires data-intensive training, or 2. using a complete digital twin of the
instances, which may not be realisable in many cases. We introduce two
approaches: co-kriging adjustments (CKA) and ridge regression adjustment (RRA)
as novel ways to combine the advantages of both approaches. Our adjustment
methods are based on an auto-regressive AR1 co-kriging model that we integrate
with GP priors. This yields a data- and simulation-efficient way of using
simplistic simulation models (e.g., simple two-link model) and rapidly adapting
them to individual instances (e.g., biomechanics of individual people). Using
CKA and RRA, we obtain more accurate uncertainty quantification of the entire
system's dynamics than pure GP-based and AR1 methods. We demonstrate the
efficiency of co-kriging adjustment with an interpretable reinforcement
learning control example, learning to control a biomechanical human arm using
only a two-link arm simulation model (offline part) and CKA derived from a
small amount of interaction data (on-the-fly online). Our method unlocks an
efficient and uncertainty-aware way to implement reinforcement learning methods
in real world complex systems for which only imperfect simulation models exist.
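A minimal sketch of the AR1 co-kriging structure behind CKA, in the classic
autoregressive form f_hi(x) = rho * f_lo(x) + delta(x): one GP is fit to cheap
simulator data, and a second GP models the discrepancy on a small amount of
real interaction data. Fixing rho and using off-the-shelf scikit-learn GPs are
simplifying assumptions; the paper's integration with GP priors is richer.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def fit_ar1_cokriging(X_lo, y_lo, X_hi, y_hi, rho=1.0):
        # GP on low-fidelity (simulator) data.
        gp_lo = GaussianProcessRegressor(normalize_y=True).fit(X_lo, y_lo)
        # GP on the residual between real data and scaled simulator output.
        resid = y_hi - rho * gp_lo.predict(X_hi)
        gp_delta = GaussianProcessRegressor(normalize_y=True).fit(X_hi, resid)

        def predict(X):
            return rho * gp_lo.predict(X) + gp_delta.predict(X)

        return predict

Because the discrepancy GP only has to capture what the simulator gets wrong,
far fewer real interactions are needed than when learning the full dynamics
from data.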
|
We describe a reversible, spatially-controlled doping method for cuprate
films. The technique has been used to create superconductor-antiferromagnetic
insulator-superconductor (S-AFI-S) junctions and optimally doped
superconductor-underdoped superconductor-optimally doped superconductor
(OS-US-OS) cuprate structures. We demonstrate how the S-AFI-S structure can be
employed to reliably measure the transport properties of the antiferromagnetic
insulator region at cryogenic temperatures using the superconductors as
seamless electrical leads. We also discuss applied and fundamental issues which
may be addressed with the structures created with this doping method. Although
it is implemented on a cuprate film (YBa2Cu3O7-delta) in this work, the method
can also be applied to any mixed-valence transition metal oxide whose physical
properties are determined by oxygen content.
|
This study introduces a novel expert generation method that dynamically
reduces task and computational complexity without compromising predictive
performance. It is based on a new hierarchical classification network topology
that combines sequential processing of generic low-level features with
parallelism and nesting of high-level features. This structure allows for the
innovative extraction technique: the ability to select only high-level features
of task-relevant categories. In certain cases, it is possible to skip almost
all unneeded high-level features, which can significantly reduce the inference
cost and is highly beneficial in resource-constrained conditions. We believe
this method paves the way for future network designs that are lightweight and
adaptable, making them suitable for a wide range of applications, from compact
edge devices to large-scale clouds. In terms of dynamic inference, our
methodology can exclude up to 88.7\,\% of parameters and perform
73.4\,\% fewer giga multiply-accumulate (GMAC) operations; analysis against
comparative baselines shows an average reduction of 47.6\,\% in parameters
and 5.8\,\% in GMACs across the cases we evaluated.
|
Index coding studies multiterminal source-coding problems where a set of
receivers are required to decode multiple (possibly different) messages from a
common broadcast, and they each know some messages a priori. In this paper, at
the receiver end, we consider a special setting where each receiver knows only
one message a priori, and each message is known to only one receiver. At the
broadcasting end, we consider a generalized setting where there could be
multiple senders, and each sender knows a subset of the messages. The senders
collaborate to transmit an index code. This work looks at minimizing the number
of total coded bits the senders are required to transmit. When there is only
one sender, we propose a pruning algorithm to find a lower bound on the optimal
(i.e., the shortest) index codelength, and show that it is achievable by linear
index codes. When there are two or more senders, we propose an appending
technique to be used in conjunction with the pruning technique to give a lower
bound on the optimal index codelength; we also derive an upper bound based on
cyclic codes. While the two bounds do not match in general, for the special
case where no two distinct senders know any message in common, the bounds
match, giving the optimal index codelength. The results are expressed in terms
of strongly connected components in directed graphs that represent the
index-coding problems.
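Since the bounds are organized by strongly connected components, the graph
construction can be sketched as follows, under one common convention: one
vertex per message, with an arc from i to j when the receiver that wants
message i knows message j. The wants/knows sets below are toy placeholders, and
the encoding itself is omitted.

    import networkx as nx

    wants = {0: [1, 2], 1: [0], 2: [0, 1]}  # receiver -> requested messages
    knows = {0: [0], 1: [1], 2: [2]}        # receiver -> side information

    G = nx.DiGraph()
    for r, msgs in wants.items():
        for i in msgs:
            for j in knows[r]:
                G.add_edge(i, j)  # arc: wanting i while knowing j

    for scc in nx.strongly_connected_components(G):
        print(sorted(scc))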
|
We investigate the pressing down game and its relation to the Banach Mazur
game. In particular we show: Consistently, there is a nowhere precipitous
normal ideal $I$ on $\aleph_2$ such that player nonempty wins the pressing down
game of length $\aleph_1$ on $I$ even if player empty starts.
|
This paper presents a novel framework for accurate pedestrian intent
prediction at intersections. Given some prior knowledge of the curbside
geometry, the presented framework can accurately predict pedestrian
trajectories, even in new intersections that it has not been trained on. This
is achieved by making use of the contravariant components of trajectories in
the curbside coordinate system, which ensures that the transformation of
trajectories across intersections is affine, regardless of the curbside
geometry. Our method is based on the Augmented Semi Nonnegative Sparse Coding
(ASNSC) formulation, and we use ASNSC as a baseline to show improvement in
prediction performance on real pedestrian datasets collected at two
intersections in Cambridge, with distinctly different curbside and crosswalk
geometries. We demonstrate a 7.2% improvement in prediction accuracy when
training and testing on the same intersection. Furthermore, TASNSC trained and
tested on different intersections achieves prediction performance comparable to
that of the baseline trained and tested on the same intersection.
|
Sketches are a family of streaming algorithms widely used in the world of big
data to perform fast, real-time analytics. A popular sketch type is Quantiles,
which estimates the data distribution of a large input stream. We present
Quancurrent, a highly scalable concurrent Quantiles sketch. Quancurrent's
throughput increases linearly with the number of available threads, and with
$32$ threads, it reaches an update speedup of $12$x and a query speedup of
$30$x over a sequential sketch. Quancurrent allows queries to occur
concurrently with updates and achieves an order of magnitude better query
freshness than existing scalable solutions.
|
We wish to understand the macroscopic plastic behaviour of metals by
upscaling the micro-mechanics of dislocations. We consider a highly simplified
dislocation network, which allows our microscopic model to be a one dimensional
particle system, in which the interactions between the particles (dislocation
walls) are singular and non-local.
As a first step towards treating realistic geometries, we focus on
finite-size effects rather than considering an infinite domain as typically
discussed in the literature. We derive effective equations for the dislocation
density by means of $\Gamma$-convergence on the space of probability measures.
Our analysis yields a classification of macroscopic models, in which the size
of the domain plays a key role.
|
We consider the possibility of constructing realistic Higgsless models within
the context of deconstructed or moose models. We show that the constraints
coming from the electroweak experimental data are very severe and that it is
very difficult to reconcile them with the requirement of improving the
unitarity bound of the Higgsless Standard Model. On the other hand, with some
fine tuning, a solution is found by delocalizing the standard fermions along
the lattice line, that is allowing the fermions to couple to the moose gauge
fields.
|
We prove central and non-central limit theorems for the Hermite variations of
the anisotropic fractional Brownian sheet $W^{\alpha, \beta}$ with Hurst
parameter $(\alpha, \beta) \in (0,1)^2$. When $0<\alpha \leq 1-\frac{1}{2q}$ or
$0<\beta \leq 1-\frac{1}{2q}$ a central limit theorem holds for the
renormalized Hermite variations of order $q\geq 2$, while for
$1-\frac{1}{2q}<\alpha, \beta < 1$ we prove that these variations satisfy a
non-central limit theorem. In fact, they converge to a random variable which is
the value of a two-parameter Hermite process at time $(1,1)$.
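For orientation, one standard way to write the Hermite variation of order $q$
over an $n\times m$ grid is

$$V^{q}_{n,m}=\sum_{i=0}^{n-1}\sum_{j=0}^{m-1} H_q\!\left(n^{\alpha}m^{\beta}\,\Delta_{i,j}W^{\alpha,\beta}\right), \qquad \Delta_{i,j}W^{\alpha,\beta}=W^{\alpha,\beta}_{\frac{i+1}{n},\frac{j+1}{m}}-W^{\alpha,\beta}_{\frac{i}{n},\frac{j+1}{m}}-W^{\alpha,\beta}_{\frac{i+1}{n},\frac{j}{m}}+W^{\alpha,\beta}_{\frac{i}{n},\frac{j}{m}},$$

where $H_q$ is the $q$-th Hermite polynomial and the factor
$n^{\alpha}m^{\beta}$ normalizes the rectangular increment to unit variance;
the renormalization of $V^{q}_{n,m}$ then depends on which of the two regimes
above applies.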
|
There are two Rellich inequalities for the bilaplacian, that is for $\int
(\Delta u)^2dx$, the one involving $|\nabla u|$ and the other involving $|u|$
on the right-hand side. In this article we consider these inequalities with sharp constants
and obtain sharp Sobolev-type improvements. More precisely, in our first result
we improve the Rellich inequality with $|\nabla u|$ obtained recently by Cazacu
in dimensions $n=3,4$ by a sharp Sobolev term thus complementing existing
results for the case $n\geq 5$. In the second theorem the sharp constant of the
Sobolev improvement for the Rellich inequality with $|u|$ is obtained.
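For context, the classical Rellich inequality involving $|u|$ reads, with its
sharp constant,

$$\int_{\mathbb{R}^n}(\Delta u)^2\,dx \;\geq\; \frac{n^2(n-4)^2}{16}\int_{\mathbb{R}^n}\frac{u^2}{|x|^4}\,dx, \qquad u\in C_c^{\infty}(\mathbb{R}^n\setminus\{0\}),\ n\geq 5,$$

and the results above concern Sobolev-type remainder terms that can be added to
this and to the companion inequality with $|\nabla u|^2/|x|^2$ on the
right-hand side while keeping the constants sharp.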
|
Explaining the origin and evolution of exoplanetary "hot Jupiters" remains a
significant challenge. One possible mechanism for their production is
planet-planet interactions, which produce hot Jupiters from planets born far
from their host stars but near their dynamical stability limits. In the much
more likely case of planets born far from their dynamical stability limits, can
hot Jupiters be formed in star clusters? Our N-body simulations of
planetary systems inside star clusters answer this question in the affirmative,
and show that hot Jupiter formation is not a rare event. We detail three case
studies of the dynamics-induced births of hot Jupiters on highly eccentric
orbits that can only occur inside star clusters. The hot Jupiters' orbits bear
remarkable similarities to those of some of the most extreme exoplanets known:
HAT-P-32 b, HAT-P-2 b, HD 80606 b and GJ 876 d. If stellar perturbations formed
these hot Jupiters then our simulations predict that these very hot, inner
planets are often accompanied by much more distant gas giants in highly
eccentric orbits.
|
\textit{Resolve} onboard the X-ray satellite XRISM is a cryogenic instrument
with an X-ray microcalorimeter in a Dewar. A lid partially transparent to
X-rays (called gate valve, or GV) is installed at the top of the Dewar along
the optical axis. Because observations will be made through the GV for the
first few months, the X-ray transmission calibration of the GV is crucial for
initial scientific outcomes. We present the results of our ground calibration
campaign of the GV, which is composed of a Be window and a stainless steel
mesh. For the stainless steel mesh, we measured its transmission using the
X-ray beamline at ISAS. For the Be window, we used synchrotron facilities to
measure the transmission and modeled the data with (i) photoelectric absorption
and incoherent scattering of Be, (ii) photoelectric absorption of contaminants,
and (iii) coherent scattering of Be changing at specific energies. We discuss
the physical interpretation of the transmission discontinuity caused by
Bragg diffraction in polycrystalline Be, which we incorporated into our
phenomenological transmission model. We present the X-ray diffraction
measurement on the sample to support our interpretation. The measurements and
the constructed model meet the calibration requirements of the GV. We also
performed a spectral fitting of the Crab nebula observed with Hitomi SXS and
confirmed improvements of the model parameters.
|
This paper describes a simple and efficient Binary Byzantine faulty tolerant
consensus algorithm using a weak round coordinator and the partial synchrony
assumption to ensure liveness. In the algorithm, non-faulty nodes perform an
initial broadcast followed by a executing a series of rounds consisting of a
single message broadcast until termination. Each message is accompanied by a
cryptographic proof of its validity. In odd rounds the binary value 1 can be
decided, in even round 0. Up to one third of the nodes can be faulty and
termination is ensured within a number of round of a constant factor of the
number of faults. Experiments show termination can be reached in less than 200
milliseconds with 300 Amazon EC2 instances spread across 5 continents even with
partial initial disagreement.
|
We consider a real Lagrangian off-critical submodel describing the soliton
sector of the so-called conformal affine $sl(3)^{(1)}$ Toda model coupled to
matter fields (CATM). The theory is treated as a constrained system in the
context of Faddeev-Jackiw and the symplectic schemes. We exhibit the parent
Lagrangian nature of the model from which generalizations of the sine-Gordon
(GSG) or the massive Thirring (GMT) models are derivable. The dual description
of the model is further emphasized by providing the relationships between
bilinears of GMT spinors and relevant expressions of the GSG fields. In this
way we exhibit the strong/weak coupling phases and the (generalized)
soliton/particle correspondences of the model. The $sl(n)^{(1)}$ case is also
outlined.
|