Cooperative game theory deals with systems where players want to cooperate to
improve their payoffs. But players may choose coalitions in a non-cooperative
manner, leading to a coalition-formation game. We consider such a game with
several players (willing to cooperate) and an adamant player (unwilling to
cooperate) involved in resource-sharing. Here, the strategy of a player is the
set of players with whom it wants to form a coalition. Given a strategy
profile, an appropriate partition of coalitions is formed; players in each
coalition maximize their collective utilities, leading to a non-cooperative
resource-sharing game among the coalitions; the utilities at the resulting
equilibrium are shared via the Shapley value; these shares define the
utilities of the players for the given strategy profile in the
coalition-formation game. We also consider the utilitarian solution to derive
the price of anarchy (PoA). We consider a case with symmetric players and an
adamant player, wherein we observe that players prefer to stay alone at Nash
equilibrium when the number of players (n) is more than 4. In contrast, in the
majority of the cases, the utilitarian partition is the grand coalition.
Interestingly, the PoA is smaller with an adamant player of intermediate
strength. Further, the PoA grows like O(n).
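As a concrete illustration of the Shapley-value sharing step described above, the following is a minimal sketch (not the authors' code) of the textbook Shapley-value computation for a small coalition; the characteristic function `worth` is a made-up example with symmetric players.

```python
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all orderings.
    v maps a frozenset of players to that coalition's worth."""
    shares = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shares[p] += v(with_p) - v(coalition)  # marginal contribution
            coalition = with_p
    return {p: s / len(orderings) for p, s in shares.items()}

# Hypothetical 3-player characteristic function (illustration only).
worth = {frozenset(): 0, frozenset('A'): 1, frozenset('B'): 1, frozenset('C'): 1,
         frozenset('AB'): 3, frozenset('AC'): 3, frozenset('BC'): 3,
         frozenset('ABC'): 6}
print(shapley_values(['A', 'B', 'C'], lambda S: worth[frozenset(S)]))
```

With symmetric players, as in the case studied above, each player receives an equal share of the coalition's equilibrium utility.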
|
The I-V characteristics of the high T$_c$ superconductor
Bi$_2$Sr$_2$Ca$_1$Cu$_2$O$_8$ show a strong hysteresis, producing many
branches. The origin of hysteresis jumps is studied by use of the model of
multi-layered Josephson junctions proposed by one of the authors (T. K.). The
charging effect at superconducting layers produces a coupling between the next
nearest neighbor phase-differences, which determines the structure of
hysteresis branches. It will be shown that a solution of phase motions is
understood as a combination of rotating and oscillating phase-differences, and
that, at points of hysteresis jumps, there occurs a change in the number of
rotating phase-differences. Effects of dissipation are analyzed. The
dissipation in insulating layers works to damp the phase motion itself, while
the dissipation in superconducting layers works to damp relative motions of
phase-differences. Their effects on hysteresis jumps are discussed.
|
About three million hadronic decays of the Z collected by ALEPH in the years
1991-1994 are used to search for anomalous CP violation beyond the Standard
Model in the decay Z -> b \bar{b} g. The study is performed by analyzing
angular correlations between the two quarks and the gluon in three-jet events
and by measuring the differential two-jet rate. No signal of CP violation is
found. For the combinations of anomalous CP violating couplings, ${\hat{h}}_b =
{\hat{h}}_{Ab}g_{Vb}-{\hat{h}}_{Vb}g_{Ab}$ and $h^{\ast}_b =
\sqrt{\hat{h}_{Vb}^{2}+\hat{h}_{Ab}^{2}}$, limits of ${\hat{h}}_b < 0.59$ and
$h^{\ast}_{b} < 3.02$ are given at 95\% CL.
|
The Minkowski problem for electrostatic capacity characterizes measures
generated by electrostatic capacity, which is a well-known variant of the
Minkowski problem. This problem has been generalized to the $L_p$ Minkowski problem
for $\mathfrak{p}$-capacity. In particular, the logarithmic case $p=0$ relates
to cone-volumes and therefore has a geometric significance. In this paper we
solve the discrete logarithmic Minkowski problem for $1<\mathfrak{p}<n$ in the
case where the support of the given measure is in general position.
|
The first three-dimensional simulation of shear-induced phase transitions in
a polymeric system has been performed. The method is based on dynamic
density-functional theory. The pathways between a bicontinuous phase with
developing gyroid mesostructure and a lamellar/cylinder phase coexistence are
investigated for a mixture of flexible triblock ABA-copolymer and solvent under
simple steady shear.
|
First-principles calculations predict that olivine Li$_4$MnFeCoNiP$_4$O$_{16}$ has a
ferrotoroidic characteristic and a ferrimagnetic configuration with a magnetic
moment of 1.56 $\mu_B$ per formula unit. The ferrotoroidicity of this material
makes it a potential candidate for magnetoelectric materials. Based on the
orbital-resolved density of states for the transition-metal ions in
Li$_4$MnFeCoNiP$_4$O$_{16}$, the spin configurations for Mn$^{2+}$, Fe$^{3+}$, Co$^{2+}$, and Ni$^{2+}$ are
$t_{2g}^3e_g^2$, $t_{2g}^3e_g^2$, $t_{2g}^1t_{2g}^3e_g^1e_g^2$, and $t_{2g}^2t_{2g}^3e_g^1e_g^2$, respectively.
Density functional theory plus U (DFT+U) shows an indirect band gap of 1.25 eV
in this predicted material, which is not simply related to the electronic
conductivity relevant to its use as a cathode material in rechargeable Li-ion
batteries.
|
We analyse the influence of diagonal disorder (random site energy) on Charge
Density Wave (CDW) and Superconductivity (SS) in local pair systems which are
described by the model of hard core charged bosons on a lattice. This problem
was previously studied within the mean field approximation for the case of half
filled band (n = 1). Here we extend that investigation to the case of arbitrary
particle concentration (0 < n < 2) and examine the phase diagrams of the model
and the behaviour of superfluid density as a function of n and the increasing
disorder. Depending on the strength of the random on-site energies, the intersite
density-density repulsion and the concentration, the model can exhibit various
phases, including the homogeneous phases CDW, SS and Bose-glass (NO), as
well as the phase-separated states CDW-SS, CDW-NO and particle droplets. The
results obtained for the SS phase are in qualitative agreement with the available
Monte Carlo calculations for a two-dimensional lattice. Also, in a definite range
of parameters the system exhibits phenomena which we call disorder-induced
superconductivity and disorder-induced charge ordering.
|
In this work, we address the problem of solving complex collaborative robotic
tasks subject to multiple varying parameters. Our approach combines
simultaneous policy blending with system identification to create generalized
policies that are robust to changes in system parameters. We employ a blending
network whose state space relies solely on parameter estimates from a system
identification technique. As a result, this blending network learns how to
handle parameter changes instead of trying to learn how to solve the task for a
generalized parameter set simultaneously. We demonstrate our scheme's ability
on a collaborative human-robot itch-scratching task in which the human has motor
impairments. We then showcase our approach's efficiency with a variety of
system identification techniques when compared to standard domain
randomization.
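A minimal PyTorch sketch of the blending idea described above: the blending network sees only the system-identification parameter estimate and outputs mixture weights over pre-trained base policies. Module names and sizes are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BlendingNetwork(nn.Module):
    """Maps estimated system parameters to mixture weights over base policies."""
    def __init__(self, param_dim, num_policies, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(param_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_policies),
        )

    def forward(self, param_estimate):
        return torch.softmax(self.net(param_estimate), dim=-1)

def blended_action(base_policies, blender, observation, param_estimate):
    # Each (pre-trained) base policy proposes an action; the blender weighs
    # them using only the system-identification estimate, not the raw state.
    actions = torch.stack([p(observation) for p in base_policies], dim=0)
    weights = blender(param_estimate)                    # (num_policies,)
    return (weights.unsqueeze(-1) * actions).sum(dim=0)  # weighted sum of actions
```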
|
We extend the recent proposal of Papadodimas and Raju of a CFT construction
of operators inside the black hole interior to arbitrary non-maximally mixed
states. Our construction builds on the general prescription given in earlier
work, based on ideas from quantum error correction. We indicate how the CFT
state dependence of the interior modes can be removed by introducing an
external system, such as an observer, that is entangled with the CFT.
|
In recent years, convolutional neural networks (CNNs) have revolutionized
medical image analysis. One of the most well-known CNN architectures in
semantic segmentation is the U-net, which has achieved much success in several
medical image segmentation applications. More recently, with the rise of
AutoML and advancements in neural architecture search (NAS), methods like
NAS-Unet have been proposed for NAS in medical image segmentation. In this
paper, with inspiration from LadderNet, U-Net, AutoML and NAS, we propose an
ensemble deep neural network with an underlying U-Net framework consisting of
bi-directional convolutional LSTMs and dense connections, where the first (from
left) U-Net-like network is deeper than the second (from left). We show that
this ensemble network outperforms recent state-of-the-art networks in several
evaluation metrics, and also evaluate a lightweight version of this ensemble
network, which also outperforms recent state-of-the-art networks in some
evaluation metrics.
|
Does the training of large language models potentially infringe upon code
licenses? Furthermore, are there any datasets available that can be safely used
for training these models without violating such licenses? In our study, we
assess the current trends in the field and the importance of incorporating code
into the training of large language models. Additionally, we examine publicly
available datasets to see whether these models can be trained on them without
the risk of legal issues in the future. To accomplish this, we compiled a list
of 53 large language models trained on file-level code. We then extracted their
datasets and analyzed how much they overlap with a dataset we created,
consisting exclusively of strong copyleft code.
Our analysis revealed that every dataset we examined contained license
inconsistencies, despite being selected based on their associated repository
licenses. We analyzed a total of 514 million code files, discovering 38 million
exact duplicates present in our strong copyleft dataset. Additionally, we
examined 171 million file-leading comments, identifying 16 million with strong
copyleft licenses and another 11 million comments that discouraged copying
without explicitly mentioning a license. Based on the findings of our study,
which highlights the pervasive issue of license inconsistencies in large
language models trained on code, our recommendation for both researchers and
the community is to prioritize the development and adoption of best practices
for dataset creation and management.
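A minimal sketch of how exact file-level duplicates between a model's training corpus and a strong-copyleft corpus could be counted by content hashing; the directory names and the normalization choice are assumptions, not the study's actual pipeline.

```python
import hashlib
from pathlib import Path

def content_fingerprint(path: Path) -> str:
    """Hash file contents after light whitespace normalization, so that
    identical code files map to the same fingerprint."""
    text = path.read_text(errors="ignore")
    normalized = "\n".join(line.rstrip() for line in text.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def exact_duplicates(model_dataset_dir: str, copyleft_dir: str) -> int:
    copyleft_hashes = {content_fingerprint(p)
                       for p in Path(copyleft_dir).rglob("*") if p.is_file()}
    return sum(1 for p in Path(model_dataset_dir).rglob("*")
               if p.is_file() and content_fingerprint(p) in copyleft_hashes)

# Example usage (directory names are placeholders):
# print(exact_duplicates("llm_training_files/", "strong_copyleft_files/"))
```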
|
We show that any isometric immersion of a flat plane domain into $\mathbb
R^3$ is developable provided it enjoys the little H\"older regularity
$c^{1,2/3}$. In particular, isometric immersions of local $C^{1,\alpha}$
regularity with $\alpha > 2/3$ belong to this class. The proof is based on the
existence of a weak notion of second fundamental form for such immersions, the
analysis of the Gauss-Codazzi-Mainardi equations in this weak setting, and a
parallel result on the very weak solutions to the degenerate Monge-Amp\`ere
equation analyzed by Lewicka and the second author.
|
Sixth generation (6G) mobile networks may include new passive technologies,
such as ambient backscatter communication or the use of reconfigurable
intelligent surfaces, to avoid the emission of waves and the corresponding
energy consumption. On the one hand, a reconfigurable intelligent surface
improves the network performance by adding electronically controlled reflected
paths in the radio propagation channel. On the other hand, in an ambient
backscatter system, a device, named tag, communicates towards a reader by
backscattering the waves of an ambient source (such as a TV tower). However,
the tag's backscattered signal is weak and strongly interfered by the direct
signal from the ambient source. In this paper, we propose a new reconfigurable
intelligent surface assisted ambient backscatter system. The proposed surface
allows one to control the reflection of an incident wave coming from the exact
source location towards the tag and reader locations (creating hot spots at
their locations), thanks to passive reflected beams from a predefined codebook.
A common phase-shift can also be applied to the beam. Thanks to these features,
we demonstrate experimentally that the performance of ambient backscatter
communications can be significantly improved.
|
I construct a secure multi-party scheme to compute a classical function by a
succinct use of a specially designed fault-tolerant random polynomial quantum
error correction code. This scheme is secure provided that (asymptotically)
strictly greater than five-sixths of the players are honest. Moreover, the
security of this scheme follows directly from the theory of quantum error
correcting codes, and hence is valid without any computational assumption. I
also discuss the quantum-classical complexity-security tradeoff in secure
multi-party computation schemes and argue why a full-blown quantum code is
necessary in my scheme.
|
We present some results and conjectures on a generalization to the
noncommutative setup of the Brouwer fixed-point theorem from the Borsuk-Ulam
theorem perspective.
|
We investigate the number fluctuations in small cells of quantum gases
pointing out important deviations from the thermodynamic limit fixed by the
isothermal compressibility. Both quantum and thermal fluctuations in weakly as
well as highly compressible fluids are considered. For the two-dimensional (2D)
superfluid Bose gas we find a significant quenching of fluctuations with
respect to the thermodynamic limit, in agreement with recent experimental
findings. An enhancement of the thermal fluctuations is instead predicted for
the 2D dipolar superfluid Bose gas, which becomes dramatic when the size of the
sample cell is of the order of the wavelength of the rotonic excitation induced
by the interaction.
|
We study by mean-field analysis and stochastic simulations chemical models
for genetic toggle switches formed from pairs of genes that mutually repress
each other. In order to determine the stability of the genetic switches, we
make a connection with reactive flux theory and transition state theory. The
switch stability is characterised by a well defined lifetime $\tau$. We find
that $\tau$ grows exponentially with the mean number $\langle N \rangle$ of transcription
factor molecules involved in the switching. In the regime accessible to direct
numerical simulations, the growth law is well characterised by
$\tau\sim\langle N \rangle^{\alpha}\exp(b\langle N \rangle)$, where $\alpha$ and $b$ are
parameters. The switch stability is decreased by phenomena that increase the
noise in gene expression, such as the production of multiple copies of a
protein from a single mRNA transcript (shot noise), and fluctuations in the
number of proteins produced per transcript. However, robustness against
biochemical noise can be drastically enhanced by arranging the transcription
factor binding domains on the DNA such that competing transcription factors
mutually exclude each other on the DNA. We also elucidate the origin of the
enhanced stability of the exclusive switch with respect to that of the general
switch: while the kinetic prefactor is roughly the same for both switches, the
`barrier' for flipping the switch is significantly higher for the exclusive
switch than for the general switch.
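As a small illustration of how the quoted growth law can be extracted from simulation data, the sketch below fits $\tau \sim \langle N \rangle^{\alpha}\exp(b\langle N \rangle)$ in log-space; the lifetime values are placeholders, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_growth_law(N, log_c, alpha, b):
    # log(tau) = log(c) + alpha*log(N) + b*N, i.e. tau ~ c * N^alpha * exp(b*N)
    return log_c + alpha * np.log(N) + b * N

# Placeholder data standing in for lifetimes measured in stochastic simulations.
N = np.array([10., 20., 30., 40., 60., 80.])
tau = np.array([2e2, 1.5e3, 9e3, 5e4, 1.2e6, 2.5e7])

(log_c, alpha, b), _ = curve_fit(log_growth_law, N, np.log(tau), p0=(0.0, 1.0, 0.1))
print(f"alpha = {alpha:.2f}, b = {b:.3f}")
```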
|
This paper briefly analyzes the advantages and problems of mainstream AI
technology and argues that, to achieve stronger artificial intelligence, the
end-to-end function-computation paradigm must be changed in favour of a
technology system centered on scene fitting. It also discusses a concrete
scheme named the Dynamic Cognitive Network model (DC Net). Discussions: the
knowledge and data in the comprehensive domain are uniformly represented by a
richly connected, heterogeneous Dynamic Cognitive Network constructed from
conceptualized elements; a two-dimensional, multi-layer network structure is
designed to achieve a unified implementation of AI core processing such as
combination and generalization; the paper analyzes the implementation
differences of computer systems in different scenes, such as open domain,
closed domain, significant probability and non-significant probability, and
points out that implementation in the open-domain, significant-probability
scene is the key to AI, and a cognitive probability model combining
bidirectional conditional probability, probability passing and superposition,
and probability collapse is designed; an omnidirectional network
matching-growth algorithm system driven by target and probability is designed
to realize the integration of parsing, generating, reasoning, querying,
learning and so on; the principle of cognitive network optimization is
proposed, and the basic framework of the Cognitive Network Learning algorithm
(CNL) is designed, in which structure learning is the primary method and
parameter learning is auxiliary. The logical similarity of implementation
between the DC Net model and human intelligence is also analyzed.
|
In this article we show that the payment flow of a linear tax on trading
gains from a security with a semimartingale price process can be constructed
for all c\`agl\`ad and adapted trading strategies. It is characterized as the
unique continuous extension of the tax payments for elementary strategies
w.r.t. the convergence "uniformly in probability". In this framework we prove
that under quite mild assumptions dividend payoffs almost surely have a
negative effect on the investor's after-tax wealth if the riskless interest rate is
always positive.
|
Recently, dataset-generation-based zero-shot learning has shown promising
results by training a task-specific model with a dataset synthesized from large
pre-trained language models (PLMs). The final task-specific model often
achieves comparable or even better performance than PLMs under the zero-shot
setting, with orders of magnitude fewer parameters. However, synthetic datasets
have their drawbacks. They have long suffered from quality issues
(e.g., low informativeness and redundancy). This explains why massive
synthetic data does not lead to better performance -- an improvement we would
expect with human-labeled data. To improve the quality of dataset synthesis,
we propose a progressive zero-shot dataset generation framework, ProGen, which
leverages the feedback from the task-specific model to guide the generation of
new training data via in-context examples. Extensive experiments on five text
classification datasets demonstrate the effectiveness of the proposed approach.
We also show ProGen achieves on-par or superior performance with only 1\%
synthetic dataset size compared to baseline methods without in-context
feedback.
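Schematically, the progressive feedback loop described above can be written as follows; `plm_generate`, `train_task_model` and `score` are hypothetical placeholders for the PLM call, the task-model training routine and the feedback signal, not ProGen's actual interface.

```python
def progen_loop(plm_generate, train_task_model, score, label_space,
                rounds=3, per_round=1000, k_in_context=8):
    """Progressive zero-shot dataset generation (schematic sketch)."""
    dataset, in_context = [], []
    for _ in range(rounds):
        # 1. Generate a batch of labeled examples, conditioned on the
        #    in-context demonstrations chosen in the previous round.
        batch = [(plm_generate(label, in_context), label)
                 for _ in range(per_round) for label in label_space]
        dataset.extend(batch)
        # 2. Train the small task-specific model on everything so far.
        model = train_task_model(dataset)
        # 3. Use the task model's feedback to select the most useful
        #    examples as in-context demonstrations for the next round.
        in_context = sorted(dataset, key=lambda ex: score(model, ex),
                            reverse=True)[:k_in_context]
    return dataset, model
```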
|
We present a lightweight Python framework for distributed training of neural
networks on multiple GPUs or CPUs. The framework is built on the popular Keras
machine learning library. The Message Passing Interface (MPI) protocol is used
to coordinate the training process, and the system is well suited for job
submission at supercomputing sites. We detail the software's features, describe
its use, and demonstrate its performance on systems of varying sizes on a
benchmark problem drawn from high-energy physics research.
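The framework itself is not reproduced here, but the core idea of coordinating Keras training with MPI can be sketched as follows. This is a minimal illustration with mpi4py and synchronous weight averaging, not the package's actual API; `model` and `local_batches` are assumed to be defined by the user.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def average_weights(weights):
    """Allreduce-average a list of numpy weight arrays across all ranks."""
    averaged = []
    for w in weights:
        buf = np.zeros_like(w)
        comm.Allreduce(w, buf, op=MPI.SUM)
        averaged.append(buf / size)
    return averaged

# Schematic training loop: `model` is any Keras model, `local_batches`
# is this rank's shard of the training data (both assumed to exist).
# for x, y in local_batches:
#     model.train_on_batch(x, y)                               # local update
#     model.set_weights(average_weights(model.get_weights()))  # synchronize
```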
|
The hypergeometric type operators are shape invariant, and a factorization
into a product of first order differential operators can be explicitly
described in the general case. Some additional shape invariant operators
depending on several parameters are defined in a natural way by starting from
this general factorization. The mathematical properties of the eigenfunctions
and eigenvalues of the operators thus obtained depend on the values of the
involved parameters. We study the parameter dependence of orthogonality, square
integrability, and of the monotonicity of the eigenvalue sequence. The obtained results
allow us to define certain systems of Gazeau-Klauder coherent states and to
describe some of their properties. Our systematic study recovers a number of
well-known results in a natural unified way and also leads to new findings.
|
We prove that a generic differential operator of type DN is irreducible,
regular, (anti)self-adjoint, and has quasiunipotent local monodromies. We prove
that the defining matrix of a DN operator can be recovered from the expression
of the operator as a polynomial in t and d/dt.
|
As a simple corollary of a highly general framework for differential and
difference Galois theory introduced by Y. Andre, we formulate a version of the
Galois correspondence that applies over a difference field with arbitrary field
of constants.
|
Scale spaces were defined by H. Hofer, K. Wysocki, and E. Zehnder. In this note
we introduce a subclass of scale spaces and explain why we believe that this
subclass is the right class for a general setup of Floer theory.
|
The critical properties of the Frank model of spontaneous chiral synthesis
are discussed by applying results from the field theoretic renormalization
group (RG). The long time and long wavelength features of this microscopic
reaction scheme belong to the same universality class as multi-colored directed
percolation processes. Thus, the following RG fixed points (FP) govern the
critical dynamics of the Frank model for d<4: one unstable FP that corresponds
to complete decoupling between the two enantiomers, a saddle-point that
corresponds to symmetric interspecies coupling, and two stable FPs that
individually correspond to unidirectional couplings between the two chiral
molecules. These latter two FPs are associated with the breakdown of mirror or
chiral symmetry. In this simplified model of molecular synthesis, homochirality
is a natural consequence of the intrinsic reaction noise in the critical
regime, which corresponds to extremely dilute chemical systems.
|
Quantum gravity introduces a new source of the combined parity (CP) violation
in gauge theories. We argue that this new CP violation gets bundled with the
strong CP violation through the coloured gravitational instantons.
Consequently, the standard axion solution to the strong CP problem is
compromised. Further, we argue that the ultimate solution to the strong CP
problem must involve at least one additional axion particle.
|
We present a differential-calculus-based method which allows one to derive
more identities from {\it any} given Fibonacci-Lucas identity containing a
finite number of terms and having at least one free index. The method has two
{\it independent} components. The first component allows new identities to be
obtained directly from an existing identity while the second yields a
generalization of the existing identity. The strength of the first component is
that no additional information is required about the given original identity.
We illustrate the method by providing new generalizations of some well-known
identities such as the d'Ocagne identity, Candido's identity, the Gelin-Ces\`aro
identity and Catalan's identity. The method readily extends to a generalized
Fibonacci sequence.
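For reference, the classical (non-generalized) forms of two of the identities named above are easy to check numerically; the snippet below only verifies the well-known statements, not the new generalizations derived in the paper.

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Catalan's identity: F(n)^2 - F(n-r) F(n+r) = (-1)^(n-r) F(r)^2
assert all(fib(n)**2 - fib(n - r) * fib(n + r) == (-1)**(n - r) * fib(r)**2
           for n in range(2, 20) for r in range(0, n))

# Candido's identity: (F(n)^2 + F(n+1)^2 + F(n+2)^2)^2
#                     = 2 (F(n)^4 + F(n+1)^4 + F(n+2)^4)
assert all((fib(n)**2 + fib(n + 1)**2 + fib(n + 2)**2)**2
           == 2 * (fib(n)**4 + fib(n + 1)**4 + fib(n + 2)**4)
           for n in range(0, 20))
print("classical identities verified for small n")
```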
|
A maximally supersymmetric configuration of super Yang-Mills living on a
noncommutative torus corresponds to a constant curvature connection. On a
noncommutative toroidal orbifold there is an additional constraint that the
connection be equivariant. We study moduli spaces of (equivariant) constant
curvature connections on noncommutative even-dimensional tori and on toroidal
orbifolds. As an illustration we work out the cases of Z_{2} and Z_{4}
orbifolds in detail. The results we obtain agree with a commutative picture
describing systems of branes wrapped on cycles of the torus and branes stuck at
exceptional orbifold points.
|
We investigate mixed Lusin area integrals associated with Jacobi
trigonometric polynomial expansions. We prove that these operators can be
viewed as vector-valued Calder\'on-Zygmund operators in the sense of the
associated space of homogeneous type. Consequently, their various mapping
properties, in particular on weighted $L^p$ spaces, follow from the general
theory.
|
We introduce COMAP-EoR, the next generation of the Carbon Monoxide Mapping
Array Project aimed at extending CO intensity mapping to the Epoch of
Reionization. COMAP-EoR supplements the existing 30 GHz COMAP Pathfinder with
two additional 30 GHz instruments and a new 16 GHz receiver. This combination
of frequencies will be able to simultaneously map CO(1--0) and CO(2--1) at
reionization redshifts ($z\sim5-8$) in addition to providing a significant
boost to the $z\sim3$ sensitivity of the Pathfinder. We examine a set of
existing models of the EoR CO signal, and find power spectra spanning several
orders of magnitude, highlighting our extreme ignorance about this period of
cosmic history and the value of the COMAP-EoR measurement. We carry out the
most detailed forecast to date of an intensity mapping cross-correlation, and
find that five out of the six models we consider yield signal to noise ratios
(S/N) $\gtrsim20$ for COMAP-EoR, with the brightest reaching a S/N above 400.
We show that, for these models, COMAP-EoR can make a detailed measurement of
the cosmic molecular gas history from $z\sim2-8$, as well as probe the
population of faint, star-forming galaxies predicted by these models to be
undetectable by traditional surveys. We show that, for the single model that
does not predict numerous faint emitters, a COMAP-EoR-type measurement is
required to rule out their existence. We briefly explore prospects for a
third-generation Expanded Reionization Array (COMAP-ERA) capable of detecting
the faintest models and characterizing the brightest signals in extreme detail.
|
On the one hand, there is a growing demand for high throughput which can be
satisfied thanks to the deployment of new networks using massive multiple-input
multiple-output (MIMO) and beamforming. On the other hand, in some countries or
cities, there is a demand for arbitrarily low electromagnetic field exposure
(EMFE) of people not concerned by the ongoing communication, which slows down
the deployment of new networks. Recently, it has been proposed to take the
opportunity, when designing the future 6th generation (6G), to offer, in
addition to higher throughput, a new type of service: arbitrarily low EMFE.
Recent works have shown that a reconfigurable intelligent surface (RIS),
jointly optimized with the base station (BS) beamforming can improve the
received throughput at the desired location whilst reducing EMFE everywhere. In
this paper, we introduce a new concept of a non-intended user (NIU). An NIU is
a user of the network who requests low EMFE when he/she is not
downloading/uploading data. An NIU lets his/her device, called NIU equipment
(NIUE), exchange some control signaling and pilots with the network, to help
the network avoid exposing NIU to waves that are transporting data for another
user of the network: the intended user (IU), whose device is called IU
equipment (IUE). Specifically, we propose several new schemes to maximize the
IU throughput under an EMFE constraint at the NIU (in practice, an interference
constraint at the NIUE). Several propagation scenarios are investigated.
Analytical and numerical results show that proper power allocation and beam
optimization can remarkably boost the EMFE-constrained system's performance
with limited complexity and channel information.
|
Information is an inherent component of stochastic processes and to measure
the distance between different stochastic processes it is not sufficient to
consider the distance between their laws. Instead, the information which
accumulates over time and which is mathematically encoded by filtrations has to
be accounted for as well. The nested distance/bicausal Wasserstein distance
addresses this challenge by incorporating the filtration. It is of emerging
importance due to its applications in stochastic analysis, stochastic
programming, mathematical economics and other disciplines.
This article establishes a number of fundamental properties of the nested
distance. In particular we prove that the nested distance of processes
generates a Polish topology but is itself not a complete metric. We identify
its completion to be the set of nested distributions, which are a form of
generalized stochastic processes. We also characterize the extreme points of
the set of couplings which participate in the definition of the nested
distance, proving that they can be identified with adapted deterministic maps.
Finally, we compare the nested distance to an alternative metric, which could
possibly be easier to compute in practical situations.
|
In this paper, we give precise mathematical form to the idea of a structure
whose data and axioms are faithfully represented by a graphical calculus; some
prominent examples are operads, polycategories, properads, and PROPs. Building
on the established presentation of such structures as algebras for monads on
presheaf categories, we describe a characteristic property of the associated
monads---the shapeliness of the title---which says that "any two operations of
the same shape agree". An important part of this work is the study of analytic
functors between presheaf categories, which are a common generalisation of
Joyal's analytic endofunctors on sets and of the parametric right adjoint
functors on presheaf categories introduced by Diers and studied by
Carboni--Johnstone, Leinster and Weber. Our shapely monads will be found among
the analytic endofunctors, and may be characterised as the submonads of a
universal analytic monad with "exactly one operation of each shape". In fact,
shapeliness also gives a way to define the data and axioms of a structure
directly from its graphical calculus, by generating a free shapely monad on the
basic operations of the calculus. In this paper we do this for some of the
examples listed above; in future work, we intend to do so for graphical calculi
such as Milner's bigraphs, Lafont's interaction nets, or Girard's
multiplicative proof nets, thereby obtaining canonical notions of denotational
model.
|
A study is made of nuclear size corrections to the energy levels of
single-electron atoms for the ground state of hydrogen-like atoms. We consider
a Fermi charge distribution for the nucleus and calculate the atomic energy level
shift due to the finite size of the nucleus within the context of perturbation theory.
The exact relativistic correction based upon the available analytical
calculations is compared to the result of first-order relativistic perturbation
theory and the non-relativistic approximation. We find small discrepancies
between our perturbative results and those obtained from exact relativistic
calculation, even for large nuclear charge numbers.
|
We introduce a methodology for designing and training deep neural networks
(DNN) that we call "Deep Regression Ensembles" (DRE). It bridges the gap
between DNN and two-layer neural networks trained with random feature
regression. Each layer of DRE has two components, randomly drawn input weights
and output weights trained myopically (as if the final output layer) using
linear ridge regression. Within a layer, each neuron uses a different subset of
inputs and a different ridge penalty, constituting an ensemble of random
feature ridge regressions. Our experiments show that a single DRE architecture
is on par with or exceeds state-of-the-art DNNs on many data sets. Yet, because
DRE neural weights are either known in closed-form or randomly drawn, its
computational cost is orders of magnitude smaller than DNN.
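A minimal numpy sketch of the kind of layer described above, under my own assumptions about widths, activations and penalties (the paper's exact choices are not reproduced here): random input weights, a myopic ridge-regression readout trained as if it were the final layer, and an ensemble over random input subsets and ridge penalties.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_dre_layer(X, y, n_neurons=32, width=64, penalties=(0.1, 1.0, 10.0)):
    """One Deep-Regression-Ensembles-style layer (schematic)."""
    neurons, outputs = [], []
    for i in range(n_neurons):
        idx = rng.choice(X.shape[1], size=min(width, X.shape[1]), replace=False)
        W = rng.normal(size=(len(idx), width))      # random input weights
        H = np.tanh(X[:, idx] @ W)                  # random features
        lam = penalties[i % len(penalties)]         # per-neuron ridge penalty
        beta = np.linalg.solve(H.T @ H + lam * np.eye(width), H.T @ y)
        neurons.append((idx, W, beta))
        outputs.append(H @ beta)
    return neurons, np.column_stack(outputs)        # outputs feed the next layer

# Toy usage: two stacked layers on random regression data.
X, y = rng.normal(size=(200, 50)), rng.normal(size=200)
_, Z1 = fit_dre_layer(X, y)
_, Z2 = fit_dre_layer(Z1, y)
```

Because the only trained quantities are the per-neuron ridge coefficients, each layer is fit in closed form, which is what keeps the computational cost far below that of backpropagation-trained networks.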
|
We study how the quasiparticle picture of the quark can be modified near but
above the critical temperature (Tc) of the chiral phase transition; we
incorporate into the quark self-energy the effects of the precursory soft modes
of the phase transition, i.e., `para-\sigma(\pi) meson'. It is found that the
quark spectrum has a three-peak structure near Tc: We show that the additional
new spectra originate from the mixing between a quark (anti-quark) and an
anti-quark hole (quark hole) caused by a ``resonant scattering'' of the
quasi-fermions with the thermally-excited soft modes.
|
We investigate the geometry of the maximum a posteriori (MAP) partition in
the Bayesian Mixture Model where the component and the base distributions are
chosen from conjugate exponential families. We prove that in this case the
clusters are separated by the contour lines of a linear functional of the
sufficient statistic. As a particular example, we describe Bayesian Mixture of
Normals with Normal-inverse-Wishart prior on the component mean and covariance,
in which the clusters in any MAP partition are separated by a quadratic
surface. In connection with results of Rajkowski (2018), where the linear
separability of clusters in the Bayesian Mixture Model with a fixed component
covariance matrix was proved, it gives a nice Bayesian analogue of the
geometric properties of Fisher Discriminant Analysis (LDA and QDA).
|
Additive manufacturing (AM; 3D printing) in aluminium using laser powder bed
fusion provides a new design space for lightweight mirror production. Printing
layer-by-layer enables the use of intricate lattices for mass reduction, as
well as organic shapes generated by topology optimisation, resulting in mirrors
optimised for function as opposed to subtractive machining. However, porosity,
a common AM defect, is present in printed aluminium and it is a result of the
printing environment being either too hot or too cold, or of gas bubbles entrapped
within the aluminium powder. When present in an AM mirror substrate, porosity
manifests as pits on the reflective surface, which increase micro-roughness
and therefore scattered light. There are different strategies to reduce the
impact of porosity: elimination during printing, coating the aluminium print in
nickel phosphorous, or to apply a heat and pressure treatment to close the
pores, commonly known as a hot isostatic press (HIP).
This paper explores the application of HIP on printed aluminium substrates
intended for mirror production using single point diamond turning (SPDT). The
objective of the HIP is to reduce porosity whilst targeting a small grain
growth within the aluminium, which is important in allowing the SPDT to
generate surfaces with low micro-roughness. For this study, three disks, 50 mm
diameter by 5 mm, were printed in AlSi10Mg at 0 deg, 45 deg, and 90 deg with
respect to the build plate. X-ray computed tomography (XCT) was conducted
before and after the HIP cycle to confirm the effectiveness of HIP to close
porosity. The disks were single point diamond turned and the micro-roughness evaluated. Mechanical
testing and electron backscatter diffraction (EBSD) were used to quantify the
mechanical strength and the grain size after HIP.
|
We report the value of the dynamical critical exponent z for the six
dimensional Ising spin glass, measured in three different ways: from the
behavior of the energy and the susceptibility with the Monte Carlo time and by
studying the overlap-overlap correlation function as a function of the space
and time. All three results are in very good agreement with the Mean Field
prediction z=4. Finally we have studied numerically the remanent magnetization
in 6 and 8 dimensions and we have compared it with the behavior observed in the
SK model, that we have computed analytically.
|
In this paper we study the recently proposed tensor networks/AdS
correspondence. We find that the Coxeter group is a useful tool to describe
tensor networks in a negatively curved space. Studying a generic tensor network
populated by perfect tensors, we find that the physical wave function
generically does not admit any connected correlation functions of local
operators. To remedy the problem, we assume that wavefunctions admitting such
semi-classical gravitational interpretation are composed of tensors close to,
but not exactly perfect tensors. Computing corrections to the connected two
point correlation functions, we find that the leading contribution is given by
structures related to geodesics connecting the operators inserted at the
boundary physical dofs. Such considerations admit generalizations at least to
three point functions. This is highly suggestive of the emergence of the
analogues of Witten diagrams in the tensor network. The perturbations alone
however do not give the right entanglement spectrum. Using the Coxeter
construction, we also constructed the tensor network counterpart of the BTZ
black hole, by orbifolding the discrete lattice on which the network resides.
We found that the construction naturally reproduces some of the salient
features of the BTZ black hole, such as the appearance of RT surfaces that
could wrap the horizon, depending on the size of the entanglement region A.
|
Ground state of the dissipative two-state system is investigated by means of
the Lanczos diagonalization method. We adopted the Hilbert-space-reduction
scheme proposed by Zhang, Jeckelmann and White so as to reduce the overwhelming
reservoir Hilbert space to a size tractable on computers. Both the
implementation of the algorithm and the precision applied for the present
system are reported in detail. We evaluate the dynamical susceptibility
(resolvent) with the continued-fraction-expansion formula. By analysing
the resolvent over the frequency range often called the `interesting'
frequency range, we obtain the damping rate and the oscillation frequency. Our
results agree with those of a recent quantum Monte-Carlo study, which concludes
that the critical dissipation from oscillatory to over-damped behavior
decreases as the tunneling amplitude is strengthened.
|
We consider optimization of the average entropy production in inhomogeneous
temperature environments within the framework of stochastic thermodynamics. For
systems modeled by Langevin equations (e.g. a colloidal particle in a heat
bath) it has been recently shown that a space dependent temperature breaks the
time reversal symmetry of the fast velocity degrees of freedom resulting in an
anomalous contribution to the entropy production of the overdamped dynamics. We
show that optimization of entropy production is determined by an auxiliary
deterministic problem describing motion on a curved manifold in a potential.
The "anomalous contribution" to entropy plays the role of the potential and the
inverse of the diffusion tensor is the metric. We also find that entropy
production is not minimized by adiabatically slow, quasi-static protocols but
there is a finite optimal duration for the transport process. As an example we
discuss the case of a linearly space dependent diffusion coefficient.
|
We present Spitzer MIR spectra of 25 FR-I radio galaxies and investigate the
nature of their MIR continuum emission. MIR spectra of star-forming galaxies
and quiescent elliptical galaxies are used to identify host galaxy
contributions while radio/optical core data are used to isolate the nuclear
non-thermal emission. Out of the 15 sources with detected optical compact
cores, four sources are dominated by emission related to the host galaxy.
Another four sources show signs of warm, nuclear dust emission: 3C15, 3C84,
3C270, and NGC 6251. It is likely that these warm dust sources result from
hidden AGN of optical spectral type 1. The MIR spectra of seven sources are
dominated by synchrotron emission, with no significant component of nuclear
dust emission. In parabolic SED fits of the non-thermal cores FR-Is tend to
have lower peak frequencies and stronger curvature than blazars. This is
roughly consistent with the common picture in which the core emission in FR-Is
is less strongly beamed than in blazars.
|
In this paper, we study the problem of stereo matching from a pair of images
with different resolutions, e.g., those acquired with a tele-wide camera
system. Due to the difficulty of obtaining ground-truth disparity labels in
diverse real-world systems, we start from an unsupervised learning perspective.
However, resolution asymmetry caused by unknown degradations between two views
hinders the effectiveness of the generally assumed photometric consistency. To
overcome this challenge, we propose to impose the consistency between two views
in a feature space instead of the image space, named feature-metric
consistency. Interestingly, we find that, although a stereo matching network
trained with the photometric loss is not optimal, its feature extractor can
produce degradation-agnostic and matching-specific features. These features can
then be utilized to formulate a feature-metric loss to avoid the photometric
inconsistency. Moreover, we introduce a self-boosting strategy to optimize the
feature extractor progressively, which further strengthens the feature-metric
consistency. Experiments on both simulated datasets with various degradations
and a self-collected real-world dataset validate the superior performance of
the proposed method over existing solutions.
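As a rough illustration of the feature-metric consistency described above, the following PyTorch sketch compares left-view features with right-view features warped by the predicted disparity. The warping convention and shape handling are my own assumptions, and `feature_extractor` stands for the progressively optimized extractor mentioned in the abstract; this is not the authors' code.

```python
import torch
import torch.nn.functional as F

def warp_right_to_left(right_feat, disparity):
    """Warp right-view features to the left view using per-pixel disparity.
    right_feat: (B, C, H, W), disparity: (B, 1, H, W) in pixels."""
    B, _, H, W = right_feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    xs = xs[None].float().to(right_feat.device) - disparity[:, 0]  # shift by disparity
    ys = ys[None].float().to(right_feat.device).expand(B, H, W)
    grid = torch.stack((2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1), dim=-1)
    return F.grid_sample(right_feat, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def feature_metric_loss(feature_extractor, left, right, disparity):
    # Impose consistency in feature space rather than image space, so that
    # unknown photometric degradations between the two views matter less.
    f_left = feature_extractor(left)
    f_right = feature_extractor(right)
    if disparity.shape[-2:] != f_left.shape[-2:]:   # match feature resolution
        scale = f_left.shape[-1] / disparity.shape[-1]
        disparity = F.interpolate(disparity, size=f_left.shape[-2:],
                                  mode="bilinear", align_corners=True) * scale
    return (f_left - warp_right_to_left(f_right, disparity)).abs().mean()
```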
|
We discuss weighted estimates for the squares of the Riesz transforms, R^{2},
on L^{2}(W) where W is a matrix A2 weight. We prove that if W is close to the
Identity matrix Id, then the operator norm of R^{2} is close to its unweighted
norm on L^{2} which is one. This is done by the use of the Bellman function
technique.
|
Build automation tools and package managers have a profound influence on
software development. They facilitate the reuse of third-party libraries,
support a clear separation between the application's code and its external
dependencies, and automate several software development tasks. However, the
wide adoption of these tools introduces new challenges related to dependency
management. In this paper, we propose an original study of one such challenge:
the emergence of bloated dependencies.
Bloated dependencies are libraries that the build tool packages with the
application's compiled code but that are actually not necessary to build and
run the application. This phenomenon artificially grows the size of the built
binary and increases maintenance effort. We propose a tool, called DepClean, to
analyze the presence of bloated dependencies in Maven artifacts. We analyze
9,639 Java artifacts hosted on Maven Central, which include a total of 723,444
dependency relationships. Our key result is that 75.1% of the analyzed
dependency relationships are bloated. In other words, it is feasible to reduce
the number of dependencies of Maven artifacts to as little as 1/4 of the current count.
We also perform a qualitative study with 30 notable open-source projects. Our
results indicate that developers pay attention to their dependencies and are
willing to remove bloated dependencies: 18/21 answered pull requests were
accepted and merged by developers, removing 131 dependencies in total.
|
A new nuclear reaction channel on 197Au with the neutron as a projectile and
a bound dineutron (^2n) in the output channel is considered based on available
experimental observations. The dineutron is assumed to be formed as a
particle-satellite, separated from the volume but not from the potential well
of 196Au nucleus. The dineutron was identified by statistically significant
radioactivity detection due to decay of 196gAu nuclei. Cross sections for the
197Au (n,^2n) 196gAu reaction are determined as 180 +/- 60 microbarns and 37
+/- 8 microbarns for the [6.09-6.39] and [6.175-6.455] MeV energy ranges,
respectively. Possible outcomes of dineutron detection near the surface of
deformed nuclei are also raised and discussed.
|
The paper discusses technical aspects of constructing a highly versatile
multi-photon source. The source is able to generate up to four photons which is
sufficient for a large number of quantum communications protocols. It can be
set to generate separable, partially-entangled or maximally-entangled photon
pairs with a tunable degree of purity. Furthermore, the two generated pairs can
be prepared in different quantum states. In this paper, we provide all the
necessary information needed for construction and alignment of the source. To
prove the working principle, we also provide results of thorough testing.
|
Proteins have evolved through mutations, amino acid substitutions, since life
appeared on Earth, some $10^9$ years ago. The study of these phenomena has been of
particular significance because of their impact on protein stability, function,
and structure. Three of the most recent findings in these areas deserve to be
highlighted. First, an innovative method has made it feasible to massively
determine the impact of mutations on protein stability. Second, a theoretical
analysis showed how mutations impact the evolution of protein folding rates.
Lastly, it has been shown that native-state structural changes brought on by
mutations can be explained in detail by the amide hydrogen exchange protection
factors. This study offers a new perspective on how those findings can be used
to analyze proteins in the light of mutations. The preliminary results indicate
that: (i) mutations can be viewed as sensitive probes to identify "typos" in
the amino-acid sequence and also to assess the resistance of naturally
occurring proteins to unwanted sequence alterations; (ii) the presence of
"typos" in the amino acid sequence, rather than being an evolutionary obstacle,
could promote faster evolvability and, in turn, increase the likelihood of
higher protein stability; (iii) the mutation site is far more important than
the substituted amino acid in terms of the protein's marginal stability
changes, and (iv) the protein evolution unpredictability at the molecular level
by mutations exists even in the absence of epistasis effects. Finally, the
study results support the Darwinian concept of evolution as "descent with
modification" by demonstrating that some regions of any protein sequence are
susceptible to mutations while others are not.
|
We construct a statistical indicator for the detection of short-term asset
price bubbles based on the information content of bid and ask market quotes for
plain vanilla put and call options. Our construction makes use of the
martingale theory of asset price bubbles and the fact that such scenarios where
the price for an asset exceeds its fundamental value can in principle be
detected by analysis of the asymptotic behavior of the implied volatility
surface. For extrapolating this implied volatility, we choose the SABR model,
mainly because of its decent fit to real option market quotes for a broad range
of maturities and its ease of calibration. As main theoretical result, we show
that under lognormal SABR dynamics, we can compute a simple yet powerful
closed-form martingale defect indicator by solving an ill-posed inverse
calibration problem. In order to cope with the ill-posedness and to quantify
the uncertainty which is inherent to such an indicator, we adopt a Bayesian
statistical parameter estimation perspective. We probe the resulting posterior
densities with a combination of optimization and adaptive Markov chain Monte
Carlo methods, thus providing a full-blown uncertainty estimation of all the
underlying parameters and the martingale defect indicator. Finally, we provide
real-market tests of the proposed option-based indicator with focus on tech
stocks due to increasing concerns about a tech bubble 2.0.
|
For a simple algebraic group G in characteristic p, a triple (a,b,c) of
positive integers is said to be rigid for G if the dimensions of the
subvarieties of G of elements of order dividing a,b,c sum to 2dim G. In this
paper we complete the proof of a conjecture of the third author, that for a
rigid triple (a,b,c) for G with p>0, the triangle group T_{a,b,c} has only
finitely many simple images of the form G(p^r). We also obtain further results
on the more general form of the conjecture, where the images G(p^r) can be
arbitrary quasisimple groups of type G.
|
Recommendation models are typically trained on observational user interaction
data, but the interactions between latent factors in users' decision-making
processes lead to complex and entangled data. Disentangling these latent
factors to uncover their underlying representation can improve the robustness,
interpretability, and controllability of recommendation models. This paper
introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel
approach for learning causal disentangled representations from interaction data
in recommender systems. The CaD-VAE method considers the causal relationships
between semantically related factors in real-world recommendation scenarios,
rather than enforcing independence as in existing disentanglement methods. The
approach utilizes structural causal models to generate causal representations
that describe the causal relationship between latent factors. The results
demonstrate that CaD-VAE outperforms existing methods, offering a promising
solution for disentangling complex user behavior data in recommendation
systems.
|
Molecular imaging generates large volumes of heterogeneous biomedical imagery
with a pressing need for guidelines for handling image data. Although several
successful solutions have been implemented for human epidemiologic studies, few
and limited approaches have been proposed for animal population studies.
Preclinical imaging research deals with a variety of machinery yielding tons of
raw data but the current practices to store and distribute image data are
inadequate. Therefore, standard tools for the analysis of large image datasets
need to be established. In this paper, we present an extension of XNAT for
Preclinical Imaging Centers (XNAT-PIC). XNAT is a widely used, open-source
platform for securely hosting, sharing, and processing of clinical imaging
studies. Despite its success, neither tools for importing large, multimodal
preclinical image datasets nor pipelines for processing whole imaging studies
are yet available in XNAT. In order to overcome these limitations, we have
developed several tools to expand the XNAT core functionalities for supporting
preclinical imaging facilities. Our aim is to streamline the management and
exchange of image data within the preclinical imaging community, thereby
enhancing the reproducibility of the results of image processing and promoting
open science practices.
|
The detection of gamma-rays, antiprotons and positrons due to pair
annihilation of dark matter particles in the Milky Way halo is a viable
indirect technique to search for signatures of supersymmetric dark matter where
the major challenge is the discrimination of the signal from the background
generated by standard production mechanisms. The new PAMELA antiproton data are
consistent with the standard secondary production and this allows us to
constrain exotic contribution to the spectrum due to neutralino annihilations.
In particular, we show that in the framework of minimal supergravity (mSUGRA),
in a clumpy halo scenario (with clumpiness factor > 10) and for large values of
tan(beta)> 55, almost all the parameter space allowed by WMAP is excluded.
Instead, the PAMELA positron fraction data exhibit an excess that cannot be
explained by secondary production. PPB-BETS and ATIC reported a feature in
electron spectrum at a few hundred GeV. The excesses seem to be consistent and
imply a source, conventional or exotic, of additional leptonic component. Here
we discuss the status of indirect dark matter searches and prospects
for the PAMELA and Fermi Gamma-ray Space Telescope (Fermi) experiments.
|
We present bounds for the sparseness and for the degrees of the polynomials
in the Nullstellensatz. Our bounds depend mainly on the unmixed volume of the
input polynomial system. The degree bounds can substantially improve the known
ones when this polynomial system is sparse, and they are, in the worst case,
simply exponential in terms of the number of variables and the maximum degree
of the input polynomials.
|
Based on the state-channel isomorphism, we point out that semidefinite
programming can be used as a quick test for nonzero one-way quantum channel
capacity. This can be achieved by searching for symmetric extensions of states
isomorphic to a given quantum channel. With this method we provide examples of
quantum channels that can lead to high entanglement transmission but still have
zero one-way capacity; in particular, regions of symmetric extendibility for
isotropic states in arbitrary dimensions are presented. Further we derive {\it
a new entanglement parameter} based on (normalised) relative entropy distance
to the set of states that have symmetric extensions and show explicitly the
symmetric extension of isotropic states being the nearest to singlets in the
set of symmetrically extendible states. The suitable regularisation of the
parameter provides a new upper bound on one-way distillable entanglement.
|
When introduced in a 2018 article in the American Mathematical Monthly, the
omega integral was shown to be an extension of the Riemann integral. Although
results for continuous functions such as the Fundamental Theorem of Calculus
follow immediately, a much more satisfying approach would be to provide direct
proofs not relying on the Riemann integral. This note provides those proofs.
|
Non-adherence to assigned treatment is a common issue in cluster randomised
trials (CRTs). In these settings, the efficacy estimand may also be of
interest. Many methodological contributions in recent years have advocated
using instrumental variables to identify and estimate the local average
treatment effect (LATE). However, the clustered nature of randomisation in CRTs
adds to the complexity of such analyses.
In this paper, we show that under certain assumptions, the LATE can be
estimated via two-stage least squares (TSLS) using cluster-level summaries of
outcomes and treatment received. Implementation needs to account for this, as
well as the possible heteroscedasticity, to obtain valid inferences.
We use simulations to assess the performance of TSLS of cluster-level
summaries under cluster-level or individual-level non-adherence, with and
without weighting and robust standard errors. We also explore the impact of
adjusting for cluster-level covariates and of appropriate degrees of freedom
correction for inference.
We find that TSLS estimation using cluster-level summaries provides estimates
with small to negligible bias and coverage close to nominal level, provided
small sample degrees of freedom correction is used for inference, with
appropriate use of robust standard errors. We illustrate the methods by
re-analysing a CRT in UK primary health settings.
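A minimal numpy sketch, under my own assumptions about the data layout, of the cluster-level two-stage least squares described above: outcomes and treatment received are first averaged within clusters, then TSLS uses the randomised assignment as the instrument. The cluster-size weighting and robust standard errors that the paper emphasises are omitted for brevity.

```python
import numpy as np

def cluster_level_tsls(cluster_id, assigned, received, outcome):
    """TSLS LATE estimate from cluster-level summaries.
    assigned: cluster-level randomised arm (instrument),
    received: individual-level treatment received,
    outcome:  individual-level outcome."""
    clusters = np.unique(cluster_id)
    z = np.array([assigned[cluster_id == c].mean() for c in clusters])  # instrument
    d = np.array([received[cluster_id == c].mean() for c in clusters])  # endogenous
    y = np.array([outcome[cluster_id == c].mean() for c in clusters])

    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: project treatment received onto the instrument.
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    # Stage 2: regress cluster-mean outcome on fitted treatment received.
    X_hat = np.column_stack([np.ones_like(d_hat), d_hat])
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]   # LATE estimate

# Toy usage with simulated non-adherence:
rng = np.random.default_rng(1)
cid = np.repeat(np.arange(20), 30)
arm = np.repeat(rng.integers(0, 2, 20), 30)
took = (arm == 1) & (rng.random(cid.size) < 0.7)          # imperfect adherence
y = 1.0 * took + rng.normal(size=cid.size)
print(cluster_level_tsls(cid, arm, took.astype(float), y))
```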
|
Generalized method of moments estimators based on higher-order moment
conditions derived from independent shocks can be used to identify and estimate
the simultaneous interaction in structural vector autoregressions. This study
highlights two problems that arise when using these estimators in small
samples. First, imprecise estimates of the asymptotically efficient weighting
matrix and the asymptotic variance lead to volatile estimates and inaccurate
inference. Second, many moment conditions lead to a small sample scaling bias
towards innovations with a variance smaller than the normalizing unit variance
assumption. To address the first problem, I propose utilizing the assumption of
independent structural shocks to estimate the efficient weighting matrix and
the variance of the estimator. For the second issue, I propose incorporating a
continuously updated scaling term into the weighting matrix, eliminating the
scaling bias. To demonstrate the effectiveness of these measures, I conducted a
Monte Carlo simulation which shows a significant improvement in the performance
of the estimator.
|
Let $\R(\cdot)$ stand for the bounded-error randomized query complexity. We
show that for any relation $f \subseteq \{0,1\}^n \times \mathcal{S}$ and
partial Boolean function $g \subseteq \{0,1\}^n \times \{0,1\}$, $\R_{1/3}(f
\circ g^n) = \Omega(\R_{4/9}(f) \cdot \sqrt{\R_{1/3}(g)})$. Independently of
us, Gavinsky, Lee and Santha \cite{newcomp} proved this result. By an example
demonstrated in their work, this bound is optimal. We prove our result by
introducing a novel complexity measure called the \emph{conflict complexity} of
a partial Boolean function $g$, denoted by $\chi(g)$, which may be of
independent interest. We show that $\chi(g) = \Omega(\sqrt{\R(g)})$ and $\R(f
\circ g^n) = \Omega(\R(f) \cdot \chi(g))$.
|
Given a positive integer $n$ the $k$-fold divisor function $d_k(n)$ equals
the number of ordered $k$-tuples of positive integers whose product equals $n$.
In this article we study the variance of sums of $d_k(n)$ in short intervals
and establish asymptotic formulas for the variance of sums of $d_k(n)$ in short
intervals of certain lengths for $k=3$ and for $k \ge 4$ under the assumption
of the Lindel\"of hypothesis.
|
In this paper the hybrid-NLIE approach of [38] is extended to the ground
state of a D-brane anti-D-brane system in AdS/CFT. The hybrid-NLIE equations
presented in the paper are finite component alternatives of the previously
proposed TBA equations and they admit an appropriate framework for the
numerical investigation of the ground state of the problem. Straightforward
numerical iterative methods fail to converge, thus new numerical methods are
worked out to solve the equations. Our numerical data confirm the previous TBA
data. In view of the numerical results, the mysterious L = 1 case is also
commented on in the paper.
|
Convolutional neural networks for computer vision are fairly intuitive. In a
typical CNN used in image classification, the first layers learn edges, and the
following layers learn some filters that can identify an object. But CNNs for
Natural Language Processing are not used often and are not completely
intuitive. We have a good idea of what the convolution filters learn for the
task of text classification, and based on that, we propose a neural network structure
that will be able to give good results in less time. We will be using
convolutional neural networks to predict the primary or broader topic of a
question, and then use separate networks for each of these predicted topics to
accurately classify their sub-topics.
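As a rough sketch of this two-stage routing (assuming a standard embedding-convolution-pooling text classifier; the vocabulary size, topic counts, and all hyperparameters below are hypothetical, not the authors' exact architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """A small convolutional text classifier: embedding -> 1-D convolution -> max-pool -> linear."""
    def __init__(self, vocab_size, num_classes, embed_dim=64, num_filters=32, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size, padding=1)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)      # (batch, embed_dim, seq_len)
        x = F.relu(self.conv(x))
        x = x.max(dim=2).values                        # global max pooling over time
        return self.fc(x)

# Hypothetical sizes: 5 broad topics, each with its own sub-topic classifier.
VOCAB, N_TOPICS, N_SUBTOPICS = 10_000, 5, 8
coarse_model = TextCNN(VOCAB, N_TOPICS)
fine_models = nn.ModuleList(TextCNN(VOCAB, N_SUBTOPICS) for _ in range(N_TOPICS))

def predict_subtopic(token_ids):
    """Route each question through the coarse model, then its topic-specific model."""
    topics = coarse_model(token_ids).argmax(dim=1)     # predicted broad topic per example
    subtopics = [fine_models[t](token_ids[i:i + 1]).argmax(dim=1)
                 for i, t in enumerate(topics.tolist())]
    return topics, torch.cat(subtopics)

tokens = torch.randint(0, VOCAB, (4, 50))              # a dummy batch of tokenised questions
print(predict_subtopic(tokens))
```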
|
Tidal disruption events (TDEs), events in which a star passes very close to a
supermassive black hole, are generally imagined as leading either to the star's
complete disruption or to its passage directly into the black hole. In the
former case, it is widely believed that the bound portion of the debris quickly
"circularizes" due to relativistic apsidal precession, i.e., forms a compact
accretion disk, and emits a flare with a standardized lightcurve and spectrum.
We show here that TDEs are more diverse and can be grouped into
several distinct categories on the basis of stellar pericenter distance $r_p$;
we calculate the relative frequency of these categories. In particular, because
rapid circularization requires $r_p \lesssim 10r_g$ ($r_g \equiv GM_{\rm
BH}/c^2$), it can happen in only a minority of total disruptions, $\lesssim
1/4$ when the black hole has mass $M_{\rm BH} = 10^6 M_\odot$. For larger
pericenter distances, $10 < r_p/r_g < 27$ (for $M_{\rm BH}=10^6M_\odot$), main
sequence stars are completely disrupted, but the bound debris orbits are highly
eccentric and possess semimajor axes $\sim 100\times$ the scale of the expected
compact disk. Partial disruptions with fractional mass-loss $\gtrsim 10\%$
should occur with a rate similar to that of total disruptions; for fractional
mass-loss $\gtrsim 50\%$, the rate is $\approx 1/3$ as large. Partial
disruptions -- which must precede total disruptions when the stars' angular
momenta evolve in the "empty loss-cone" regime -- change the orbital energy by
factors $\gtrsim O(1)$. Remnants of partial disruptions are in general far from
thermal equilibrium. Depending on the orbital energy of the remnant and
conditions within the stellar cluster surrounding the SMBH, it may return after
hundreds or thousands of years and be fully disrupted, or it may rejoin the
stellar cluster.
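As a quick check of the scales quoted above, the snippet below evaluates r_g = GM_BH/c^2 for M_BH = 10^6 M_sun together with the 10 r_g and 27 r_g pericenter distances; the constants are standard values rounded for illustration.

```python
# Back-of-the-envelope evaluation of the pericenter scales quoted above.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
R_sun = 6.957e8        # solar radius, m

M_bh = 1e6 * M_sun
r_g = G * M_bh / c**2                       # gravitational radius GM_BH / c^2
print(f"r_g    ~ {r_g:.2e} m (~{r_g / R_sun:.1f} R_sun)")
print(f"10 r_g ~ {10 * r_g:.2e} m   (rapid-circularization limit)")
print(f"27 r_g ~ {27 * r_g:.2e} m   (full-disruption limit for main sequence stars)")
```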
|
There are 26 possibilities for the torsion group of elliptic curves defined
over quadratic number fields. We present examples of high rank elliptic curves
with given torsion group which give the current records for most of the torsion
groups. In particular, we show that for each possible torsion group, except
maybe for Z/15Z, there exists an elliptic curve over some quadratic field with
this torsion group and with rank >= 2.
|
We consider two areas of research that have been developing in parallel over
the last decade: blind source separation (BSS) and electromagnetic source
estimation (ESE). BSS deals with the recovery of source signals when only
mixtures of signals can be obtained from an array of detectors and the only
prior knowledge consists of some information about the nature of the source
signals. On the other hand, ESE utilizes knowledge of the electromagnetic
forward problem to assign source signals to their respective generators, while
information about the signals themselves is typically ignored. We demonstrate
that these two techniques can be derived from the same starting point using the
Bayesian formalism. This suggests a means by which new algorithms can be
developed that utilize as much relevant information as possible. We also
briefly mention some preliminary work that supports the value of integrating
information used by these two techniques and review the kinds of information
that may be useful in addressing the ESE problem.
|
We explore the low-temperature behavior of the Abelian Higgs model in AdS_4,
away from the probe limit in which back-reaction of matter fields on the metric
can be neglected. Over a significant range of charges for the complex scalar,
we observe a second order phase transition at finite temperature. The
symmetry-breaking states are superconducting black holes. At least when the
charge of the scalar is not too small, we observe at low temperatures the
emergence of a domain wall structure characterized by a definite index of
refraction. We also compute the conductivity as a function of frequency.
|
Graph Neural Networks (GNNs) bring the power of deep representation learning
to graph and relational data and achieve state-of-the-art performance in many
applications. GNNs compute node representations by taking into account the
topology of the node's ego-network and the features of the ego-network's nodes.
When the nodes do not have high-quality features, GNNs learn an embedding layer
to compute node embeddings and use them as input features. However, the size of
the embedding layer is proportional to the product of the number of nodes in the
graph and the dimensionality of the embedding and does not scale to big data
and graphs with hundreds of millions of nodes. To reduce the memory associated
with this embedding layer, hashing-based approaches, commonly used in
applications like NLP and recommender systems, can potentially be used.
However, a direct application of these ideas fails to exploit the fact that in
many real-world graphs, nodes that are topologically close will tend to be
related to each other (homophily) and as such their representations will be
similar.
In this work, we present approaches that take advantage of the nodes'
position in the graph to dramatically reduce the memory required, with minimal
if any degradation in the quality of the resulting GNN model. Our approaches
decompose a node's embedding into two components: a position-specific component
and a node-specific component. The position-specific component models homophily
and the node-specific component models the node-to-node variation. Extensive
experiments using different datasets and GNN models show that our methods are
able to reduce the memory requirements by 88% to 97% while achieving, in nearly
all cases, better classification accuracy than other competing approaches,
including the full embeddings.
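A minimal sketch of this decomposition is given below: each node's input embedding is the sum of a coarse position-specific vector (shared by all nodes assigned to the same position) and a small node-specific correction. How positions are assigned (e.g. via graph partitioning) and the dimensionalities used are illustrative assumptions, not the exact construction of the paper.

```python
import torch
import torch.nn as nn

class PositionalNodeEmbedding(nn.Module):
    """Embed a node as a position-specific vector plus a projected node-specific correction."""
    def __init__(self, num_nodes, num_positions, pos_dim=64, node_dim=8):
        super().__init__()
        self.position_table = nn.Embedding(num_positions, pos_dim)  # models homophily
        self.node_table = nn.Embedding(num_nodes, node_dim)         # node-to-node variation
        self.project = nn.Linear(node_dim, pos_dim, bias=False)

    def forward(self, node_ids, position_ids):
        return self.position_table(position_ids) + self.project(self.node_table(node_ids))

# Memory intuition: 1M nodes x 64 dims needs 64M floats for a full embedding table,
# versus roughly 10k x 64 + 1M x 8 ~ 8.6M floats for the decomposition below.
emb = PositionalNodeEmbedding(num_nodes=1_000_000, num_positions=10_000)
node_ids = torch.tensor([0, 42, 99])
position_ids = torch.tensor([7, 7, 123])     # e.g. partition/cluster id of each node
print(emb(node_ids, position_ids).shape)     # torch.Size([3, 64])
```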
|
Euclid is a European Space Agency medium class mission selected for launch in
2019 within the Cosmic Vision 2015-2025 programme. The main goal of Euclid is
to understand the origin of the accelerated expansion of the Universe. Euclid
will explore the expansion history of the Universe and the evolution of cosmic
structures by measuring shapes and redshifts of galaxies as well as the
distribution of clusters of galaxies over a large fraction of the sky. Although
the main driver for Euclid is the nature of dark energy, Euclid science covers
a vast range of topics, from cosmology to galaxy evolution to planetary
research. In this review we focus on cosmology and fundamental physics, with a
strong emphasis on science beyond the current standard models. We discuss five
broad topics: dark energy and modified gravity, dark matter, initial
conditions, basic assumptions and questions of methodology in the data
analysis. This review has been planned and carried out within Euclid's Theory
Working Group and is meant to provide a guide to the scientific themes that
will underlie the activity of the group during the preparation of the Euclid
mission.
|
We show that each end of a noncompact self-shrinker in $\mathbb{R}^3$ of
finite topology is smoothly asymptotic to either a regular cone or a
self-shrinking round cylinder.
|
In this paper, we present new expressions for n-point NMHV tree-level gravity
amplitudes. We introduce a method of factorization diagrams which is a simple
graphical representation of R-invariants in Yang-Mills theory. We define the
gravity analogues which we call G-invariants, and expand the NMHV gravity
amplitudes in terms of these objects. We provide explicit formulas of NMHV
gravity amplitudes up to eight points in terms of G-invariants, and give the
general definition for any number of points. We discuss the connection to BCFW
representation, special behavior under large momentum shift, the role of
momentum twistors and the intricate web of spurious poles cancelation. Because
of the close connection between R-invariants and the (tree-level) Amplituhedron
for Yang-Mills amplitudes, we speculate that the new expansion for gravity
amplitudes should correspond to the triangulation of the putative Gravituhedron
geometry.
|
Over the years, Twitter has become one of the largest communication platforms
providing key data to various applications such as brand monitoring, trend
detection, among others. Entity linking is one of the major tasks in natural
language understanding from tweets and it associates entity mentions in text to
corresponding entries in knowledge bases in order to provide unambiguous
interpretation and additional context. State-of-the-art techniques have
focused on linking explicitly mentioned entities in tweets with reasonable
success. However, we argue that in addition to explicit mentions, e.g. "The
movie Gravity was more expensive than the Mars Orbiter Mission", entities (the
movie Gravity) can also be mentioned implicitly, e.g. "This new space movie is
crazy. You must watch it!". This paper introduces the problem of implicit entity
linking in tweets. We propose an approach that models the entities by
exploiting their factual and contextual knowledge. We demonstrate how to use
these models to perform implicit entity linking on a ground truth dataset with
397 tweets from two domains, namely, Movie and Book. Specifically, we show: 1)
the importance of linking implicit entities and its value addition to the
standard entity linking task, and 2) the importance of exploiting contextual
knowledge associated with an entity for linking their implicit mentions. We
also make the ground truth dataset publicly available to foster the research in
this new research area.
|
To a region $C$ of the plane satisfying a suitable convexity condition we
associate a knot concordance invariant $\Upsilon^C$. For appropriate choices of
the domain this construction gives back some known knot Floer concordance
invariants like Rasmussen's $h_i$ invariants, and the Ozsv\'ath-Stipsicz-Szab\'o
upsilon invariant. Furthermore, to three such regions
$C$, $C^+$ and $C^- $ we associate invariants $\Upsilon_{C^\pm, C}$
generalising the Kim-Livingston secondary invariant. We show how to compute these
invariants for some interesting classes of knots (including alternating and
torus knots), and we use them to obstruct concordances to Floer thin knots and
algebraic knots.
|
We consider Diophantine inequalities of the kind |F(x)| \le m, where F(X) \in
Z[X] is a homogeneous polynomial which can be expressed as a product of d
homogeneous linear forms in n variables with complex coefficients and m\ge 1.
We say such a form is of finite type if the total volume of all real solutions
to this inequality is finite and if, for every n'-dimensional subspace
S\subseteq R^n defined over Q, the corresponding n'-dimensional volume for F
restricted to S is also finite. We show that the number of integral solutions x
\in Z^n to our inequality above is finite for all m if and only if the form F
is of finite type. When F is of finite type, we show that the number of
integral solutions is estimated asymptotically as m\to \infty by the total
volume of all real solutions. This generalizes a previous result due to
Mahler for the case n=2. Further, we prove a conjecture of W. M. Schmidt,
showing that for F of finite type the number of integral solutions is bounded
above by c(n,d)m^(n/d), where c(n,d) is an effectively computable constant
depending only on n and d.
|
We explore the role of interfacial antiferromagnetic interaction in coupled
soft and hard ferromagnetic layers to ascribe the complex variety of
magneto-transport phenomena observed in $La_{0.7}Sr_{0.3}MnO_3/SrRuO_3$
(LSMO/SRO) superlattices (SLs) within a one-band double exchange model using
Monte-Carlo simulations. Our calculations incorporate the magneto-crystalline
anisotropy interactions and super-exchange interactions of the constituent
materials, and two types of antiferromagnetic interactions between Mn and Ru
ions at the interface: (i) carrier-driven and (ii) Mn-O-Ru bond super-exchange
in the model Hamiltonian to investigate the properties along the hysteresis
loop. We find that the antiferromagnetic coupling at the interface induces the
LSMO and SRO layers to align in anti-parallel orientation at low temperatures.
Our results reproduce the positive exchange bias of the minor loop and inverted
hysteresis loop of LSMO/SRO SL at low temperatures as reported in experiments.
In addition, conductivity calculations show that the carrier-driven
antiferromagnetic coupling between the two ferromagnetic layers steers the SL
towards a metallic (insulating) state when LSMO and SRO are aligned in
anti-parallel (parallel) configuration, in good agreement with the experimental
data. This demonstrates the necessity of carrier-driven antiferromagnetic
interactions at the interface to understand the one-to-one correlation between
the magnetic and transport properties observed in experiments. At high
temperatures, just below the ferromagnetic $T_C$ of SRO, we unveil an
unconventional three-step flipping process along the magnetic hysteresis loop.
We emphasize the key role of interfacial antiferromagnetic coupling between
LSMO and SRO to understand these multiple-step flipping processes along the
hysteresis loop.
|
The dehydrogenation reaction of methanol on metal supported MgO(100) films
has been studied by employing periodic density functional calculations. As far
as we know, the dehydrogenation of a single methanol molecule over inert oxide
insulators such as MgO has never been realized before without the introduction
of defects and low-coordinated atoms. By depositing very thin oxide films
on a Mo substrate, we have successfully obtained the dissociative state of
methanol. The dehydrogenation reaction is energetically exothermic and nearly
barrierless. The metal supported thin oxide films studied here provide a
versatile approach to enhance the activity and properties of oxides.
|
Let $g : X \to Y$ be the contraction of an extremal ray of a smooth
projective 4-fold $X$ such that $\dim Y=3$. Then $g$ may have a finite number
of 2-dimensional fibers. We shall classify those fibers. In particular, we shall
prove that any two points of such a fiber are joined by a chain of rational
curves of length at most 2 with respect to $-K_X$, and that $|-K_X|$ is
$g\text{-free}$.
|
We present a detailed analysis of time- and energy-dependent synchrotron
polarization signatures in a shock-in-jet model for gamma-ray blazars. Our
calculations employ a full 3D radiation transfer code, assuming a helical
magnetic field throughout the jet. The code considers synchrotron emission from
an ordered magnetic field, and takes into account all light-travel-time and
other relevant geometric effects, while the relevant synchrotron self-Compton
and external Compton effects are taken care of with the 2D MCFP code. We
consider several possible mechanisms through which a relativistic shock
propagating through the jet may affect the jet plasma to produce a synchrotron
and high-energy flare. Most plausibly, the shock is expected to lead to a
compression of the magnetic field, increasing the toroidal field component and
thereby changing the direction of the magnetic field in the region affected by
the shock. We find that such a scenario leads to correlated synchrotron + SSC
flaring, associated with substantial variability in the synchrotron
polarization percentage and position angle. Most importantly, this scenario
naturally explains large PA rotations by > 180 deg., as observed in connection
with gamma-ray flares in several blazars, without the need for bent or helical
jet trajectories or other non-axisymmetric jet features.
|
The goal of this book is to present classical mechanics, quantum mechanics,
and statistical mechanics in an almost completely algebraic setting, thereby
introducing mathematicians, physicists, and engineers to the ideas relating
classical and quantum mechanics with Lie algebras and Lie groups. The book
emphasizes the closeness of classical and quantum mechanics, and the material
is selected in a way to make this closeness as apparent as possible.
Much of the material covered here is not part of standard textbook treatments
of classical or quantum mechanics (or is only superficially treated there). For
physics students who want to get a broader view of the subject, this book may
therefore serve as a useful complement to standard treatments of quantum
mechanics.
Almost without exception, this book is about precise concepts and exact
results in classical mechanics, quantum mechanics, and statistical mechanics.
The structural properties of mechanics are discussed independent of
computational techniques for obtaining quantitatively correct numbers from the
assumptions made. The standard approximation machinery for calculating from
first principles explicit thermodynamic properties of materials, or explicit
cross sections for high energy experiments can be found in many textbooks and
is not repeated here.
|
The N-1-1 contingency criterion considers the consecutive loss of two
components in a power system, with intervening time for system adjustments. In
this paper, we consider the problem of optimizing generation unit commitment
(UC) while ensuring N-1-1 security. Due to the coupling of time periods
associated with consecutive component losses, the resulting problem is a very
large-scale mixed-integer linear optimization model. For efficient solution, we
introduce a novel branch-and-cut algorithm using a temporally decomposed
bilevel separation oracle. The model and algorithm are assessed using multiple
IEEE test systems, and a comprehensive analysis is performed to compare system
performances across different contingency criteria. Computational results
demonstrate the value of considering intervening time for system adjustments in
terms of total cost and system robustness.
|
Measurements in the disappearance channel of long-baseline accelerator-based
experiments (like NO$\nu$A) are afflicted by the problem of octant degeneracy.
In these experiments, the mass hierarchy (MH) sensitivity depends
upon the value of CP-violating phase $\delta_{CP}$. Moreover, MH of light
neutrino masses is still not fixed. Also, the flavour structure of fermions is
not yet fully understood. We discuss all these issues in a highly predictive,
low-scale inverse seesaw (ISS) model within the framework of $A_4$ flavour
symmetry. Recent global analysis has shown a preference for normal hierarchy
and higher octant of $\theta_{23}$, and hence we discuss our results with
reference to these, and find that the vacuum alignment of $A_4$ triplet flavon
(1,-1,-1) favours these results. Finally, we check if our very precise
prediction on $m_{ee}$ and the lightest neutrino mass falls within the range of
sensitivities of the neutrinoless double beta decay ($0\nu\beta\beta$)
experiments. We note that when octant of $\theta_{23}$ and MH is fixed by more
precise measurements of future experiments, then through our results, it would
be possible to precisely identify the favourable vacuum alignment corresponding
to the $A_{4}$ triplet field as predicted in our model.
|
The visual world is vast and varied, but its variations divide into
structured and unstructured factors. We compose free-form filters and
structured Gaussian filters, optimized end-to-end, to factorize deep
representations and learn both local features and their degree of locality. Our
semi-structured composition is strictly more expressive than free-form
filtering, and changes in its structured parameters would require changes in
free-form architecture. In effect this optimizes over receptive field size and
shape, tuning locality to the data and task. Dynamic inference, in which the
Gaussian structure varies with the input, adapts receptive field size to
compensate for local scale variation. Optimizing receptive field size improves
semantic segmentation accuracy on Cityscapes by 1-2 points for strong dilated
and skip architectures and by up to 10 points for suboptimal designs. Adapting
receptive fields by dynamic Gaussian structure further improves results,
equaling the accuracy of free-form deformation while improving efficiency.
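A hedged sketch of such a semi-structured composition: a free-form 3x3 convolution composed with a depthwise Gaussian blur whose standard deviation is a learnable parameter, so the effective receptive field size is optimized end-to-end. The kernel radius, initialization, and per-channel sharing are illustrative choices, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianCompose(nn.Module):
    """Compose a learnable Gaussian blur (structured) with a free-form 3x3 convolution."""
    def __init__(self, channels, log_sigma_init=0.0):
        super().__init__()
        self.free_form = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.log_sigma = nn.Parameter(torch.tensor(log_sigma_init))  # learnable scale

    def gaussian_kernel(self, radius=7):
        sigma = self.log_sigma.exp()
        x = torch.arange(-radius, radius + 1, dtype=torch.float32)
        k1d = torch.exp(-x**2 / (2 * sigma**2))
        k1d = k1d / k1d.sum()
        return torch.outer(k1d, k1d)                  # separable 2-D Gaussian kernel

    def forward(self, x):
        k2d = self.gaussian_kernel().to(x.dtype)
        c = x.shape[1]
        weight = k2d.expand(c, 1, *k2d.shape).contiguous()           # depthwise blur
        blurred = F.conv2d(x, weight, padding=k2d.shape[-1] // 2, groups=c)
        return self.free_form(blurred)

layer = GaussianCompose(channels=16)
out = layer(torch.randn(2, 16, 32, 32))
print(out.shape, layer.log_sigma.requires_grad)       # torch.Size([2, 16, 32, 32]) True
```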
|
In this paper, we investigate a self-developed sine wave gated (SWG)
single-photon detector (SPD) for the 1550 nm wavelength, intended primarily for
quantum key distribution (QKD) use. We study the influence of different gate
parameters on the SPD's noise characteristics and observe that, at a constant
value of quantum efficiency (QE), the dark count rate (DCR) decreases as the
gating voltage is increased. Based on these observations, we make
recommendations for improving the characteristics of the SPD and of the QKD
device as a whole. We also find a rapid rise of the DCR when the gating voltage
is increased above a certain value, and this value differs between detectors.
It is shown that compiling a universal empirical dependence connecting the
control and operational parameters of the SPD is a non-trivial task.
|
We study what we call quasi-spline sheaves over locally Noetherian schemes.
This is done with the intention of considering splines from the point of view
of moduli theory. In other words, we study the way in which certain objects
that arise in the theory of splines can be made to depend on parameters. In
addition to quasi-spline sheaves, we treat ideal difference-conditions, and
individual quasi-splines. Under certain hypotheses each of these types of
objects admits a fine moduli scheme. The moduli of quasi-spline sheaves is
proper, and there is a natural compactification of the moduli of ideal
difference-conditions. We include some speculation on the uses of these moduli
in the theory of splines and topology, and an appendix with a treatment of the
Billera-Rose homogenization in scheme theoretic language.
|
We present the first results from the science demonstration phase for the
Hi-GAL survey, the Herschel key-project that will map the inner Galactic Plane
of the Milky Way in 5 bands. We outline our data reduction strategy and present
some science highlights on the two observed 2{\deg} x 2{\deg} tiles
approximately centered at l=30{\deg} and l=59{\deg}. The two regions are
extremely rich in intense and highly structured extended emission which shows a
widespread organization in filaments. Source SEDs can be built for hundreds of
objects in the two fields, and physical parameters can be extracted, for a good
fraction of them where the distance could be estimated. The compact sources
(which we will call 'cores' in the following) are found for the most part to be
associated with the filaments, and the relationship to the local beam-averaged
column density of the filament itself shows that a core seems to appear when a
threshold of around A_V ~ 1 is exceeded for the regions in the l=59{\deg}
field; an A_V value between 5 and 10 is found for the l=30{\deg} field, likely
due to the relatively larger distances of the sources. This outlines an
exciting scenario where diffuse clouds first collapse into filaments, which
later fragment to cores where the column density has reached a critical level.
In spite of core L/M ratios being well in excess of a few for many sources, we
find core surface densities between 0.03 and 0.5 g cm^-2. Our results are in
good agreement with recent MHD numerical simulations of filaments forming from
large-scale converging flows.
|
Here we demonstrate that tensor network techniques - originally devised for
the analysis of quantum many-body problems - are well suited for the detailed
study of rare event statistics in kinetically constrained models (KCMs). As
concrete examples we consider the Fredrickson-Andersen and East models, two
paradigmatic KCMs relevant to the modelling of glasses. We show how variational
matrix product states allow one to numerically approximate - systematically and
with high accuracy - the leading eigenstates of the tilted dynamical generators
which encode the large deviation statistics of the dynamics. Via this approach
we can study system sizes beyond what is possible with other methods, allowing
us to characterise in detail the finite size scaling of the trajectory-space
phase transition of these models, the behaviour of spectral gaps, and the
spatial structure and "entanglement" properties of dynamical phases. We discuss
the broader implications of our results.
|
This paper presents "oriented pivoting systems" as an abstract framework for
complementary pivoting. It gives a unified simple proof that the endpoints of
complementary pivoting paths have opposite sign. A special case is given by the
Nash equilibria of a bimatrix game at the ends of Lemke-Howson paths, which have
opposite index. For Euler complexes or "oiks", an orientation is defined which
extends the known concept of oriented abstract simplicial manifolds. Ordered
"room partitions" for a family of oriented oiks come in pairs of opposite sign.
For an oriented oik of even dimension, this sign property holds also for
unordered room partitions. In the case of a two-dimensional oik, these are
perfect matchings of an Euler graph, with the sign as defined for Pfaffian
orientations of graphs. A near-linear time algorithm is given for the following
problem: given a graph with an Eulerian orientation with a perfect matching,
find another perfect matching of opposite sign. In contrast, the complementary
pivoting algorithm for this problem may be exponential.
|
Systolic Array (SA) architectures are well suited for accelerating matrix
multiplications through the use of a pipelined array of Processing Elements
(PEs) communicating with local connections and pre-orchestrated data movements.
Even though most of the dynamic power consumption in SAs is due to
multiplications and additions, pipelined data movement within the SA
constitutes an additional important contributor. The goal of this work is to
reduce the dynamic power consumption associated with the feeding of data to the
SA, by synergistically applying bus-invert coding and zero-value clock gating.
By exploiting salient attributes of state-of-the-art CNNs, such as the value
distribution of the weights, the proposed SA applies appropriate encoding only
to the data that exhibits high switching activity. Similarly, when one of the
inputs is zero, unnecessary operations are entirely skipped. This selectively
targeted, application-aware encoding approach is demonstrated to reduce the
dynamic power consumption of data streaming in CNN applications using Bfloat16
arithmetic by 1%-19%. This translates to an overall dynamic power reduction of
6.2%-9.4%.
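As a toy illustration of the two mechanisms (the bus width, data, and the policy of when to apply the encoding are simplified assumptions; the selective, application-aware restriction to high-switching weight data is not modeled here):

```python
# Bus-invert coding on a single 16-bit operand stream: if more than half of the
# bus lines would toggle relative to the previously driven word, send the bitwise
# complement instead and assert an extra "invert" line. Zero-valued operands are
# skipped entirely, mimicking zero-value clock gating.
def bus_invert_encode(words, width=16):
    mask = (1 << width) - 1
    prev, encoded = 0, []
    for w in words:
        if w == 0:
            encoded.append((None, False))          # gate the transfer for zero operands
            continue
        toggles = bin((w ^ prev) & mask).count("1")
        if toggles > width // 2:
            w_tx, invert = (~w) & mask, True       # fewer lines toggle when complemented
        else:
            w_tx, invert = w, False
        encoded.append((w_tx, invert))
        prev = w_tx                                # the bus physically holds the sent word
    return encoded

print(bus_invert_encode([0x00FF, 0xFF00, 0x0000, 0xFFFF]))
```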
|
We consider a bipartite entangled system half of which falls through the
event horizon of an evaporating black hole, while the other half remains
coherently accessible to experiments in the exterior region. Beyond complete
evaporation, the evolution of the quantum state past the Cauchy horizon cannot
remain unitary, raising the questions: How can this evolution be described as a
quantum map, and how is causality preserved? The answers are subtle, and are
linked in unexpected ways to the fundamental laws of quantum mechanics. We show
that terrestrial experiments can be designed to constrain exactly how these
laws might be altered by evaporation.
|
This work successfully generates uncertainty aware surrogate models, via the
Bayesian neural network with noise contrastive prior (BNN-NCP) technique, of
the EuroPED plasma pedestal model using data from the JET-ILW pedestal database
and subsequent model evaluations; together these constitute EuroPED-NN. The
BNN-NCP technique is shown to be a good fit for uncertainty-aware surrogate
models, matching the outputs of a regular neural network, providing prediction
confidence as uncertainties, and highlighting the out-of-distribution (OOD)
regions using surrogate model uncertainties. This provides critical insights
into model robustness and reliability. EuroPED-NN has been physically
validated, first, analyzing electron density
$n_e\!\left(\psi_{\text{pol}}=0.94\right)$ with respect to increasing plasma
current, $I_p$, and second, validating the $\Delta-\beta_{p,ped}$ relation
associated with the EuroPED model, affirming the robustness of the underlying
physics learned by the surrogate model.
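As a rough illustration of an uncertainty-aware surrogate (a plain heteroscedastic network with mean and log-variance heads trained with a Gaussian negative log-likelihood, not the full BNN-NCP construction; the input and output dimensions are hypothetical):

```python
import torch
import torch.nn as nn

class MeanVarianceSurrogate(nn.Module):
    """Surrogate with two heads: predicted mean and predicted log-variance per output."""
    def __init__(self, n_inputs, n_outputs, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_inputs, hidden), nn.Tanh(),
                                  nn.Linear(hidden, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, n_outputs)
        self.logvar_head = nn.Linear(hidden, n_outputs)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # Negative log-likelihood of a Gaussian with predicted mean and variance.
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

model = MeanVarianceSurrogate(n_inputs=8, n_outputs=2)   # e.g. engineering inputs -> pedestal quantities
x, y = torch.randn(32, 8), torch.randn(32, 2)            # dummy training batch
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
print(float(loss))
```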
|
The starting point of our analysis is a class of one-dimensional interacting
particle systems with two species. The particles are confined to an interval
and exert a nonlocal, repelling force on each other, resulting in a nontrivial
equilibrium configuration. This class of particle systems covers the setting of
pile-ups of dislocation walls, which is an idealised setup for studying the
microscopic origin of several dislocation density models in the literature.
Such density models are used to construct constitutive relations in plasticity
models.
Our aim is to pass to the many-particle limit. The main challenge is the
combination of the nonlocal nature of the interactions, the singularity of the
interaction potential between particles of the same type, the non-convexity of
the interaction potential between particles of the opposite type, and the
interplay between the length-scale of the domain with the length-scale $\ell_n$
of the decay of the interaction potential. Our main results are the
$\Gamma$-convergence of the energy of the particle positions, the evolutionary
convergence of the related gradient flows for $\ell_n$ sufficiently large, and
the non-convergence of the gradient flows for $\ell_n$ sufficiently small.
|
Knockoffs is a new framework for controlling the false discovery rate (FDR)
in multiple hypothesis testing problems involving complex statistical models.
While there has been great emphasis on Type-I error control, Type-II errors
have been far less studied. In this paper we analyze the false negative rate
or, equivalently, the power of a knockoff procedure associated with the Lasso
solution path under an i.i.d. Gaussian design, and find that knockoffs
asymptotically achieve close to optimal power with respect to an omniscient
oracle. Furthermore, we demonstrate that for sparse signals, performing model
selection via knockoff filtering achieves nearly ideal prediction errors as
compared to a Lasso oracle equipped with full knowledge of the distribution of
the unknown regression coefficients. The i.i.d. Gaussian design is adopted to
leverage results concerning the empirical distribution of the Lasso estimates,
which makes power calculation possible for both knockoff and oracle procedures.
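A small numerical sketch of this setting: for an i.i.d. Gaussian design, fresh independent Gaussian columns are valid model-X knockoffs, and the knockoff+ filter is applied to Lasso coefficient-magnitude statistics. The signal amplitudes, Lasso penalty, and target FDR level below are arbitrary demo choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n, p, k, q = 600, 200, 20, 0.2                       # samples, features, true signals, FDR level

X = rng.normal(size=(n, p))                          # i.i.d. Gaussian design
beta = np.zeros(p)
beta[:k] = rng.choice([-1.0, 1.0], size=k)           # unit-amplitude signals (demo choice)
y = X @ beta + rng.normal(size=n)

# For i.i.d. N(0, 1) features, independent Gaussian copies are valid model-X knockoffs.
X_knock = rng.normal(size=(n, p))

coef = Lasso(alpha=0.1).fit(np.hstack([X, X_knock]), y).coef_
W = np.abs(coef[:p]) - np.abs(coef[p:])              # knockoff statistics

# Knockoff+ threshold controlling the FDR at level q.
T = np.inf
for t in np.sort(np.abs(W[W != 0])):
    if (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t)) <= q:
        T = t
        break
selected = np.where(W >= T)[0]
fdp = np.sum(selected >= k) / max(1, len(selected))  # true signals are indices 0..k-1
print(f"selected {len(selected)} features, empirical FDP = {fdp:.2f}")
```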
|
We establish global existence and scattering for radial solutions to the
energy-critical focusing Hartree equation with energy and $\dot{H}^1$ norm less
than those of the ground state in $\mathbb{R}\times \mathbb{R}^d$, $d\geq 5$.
|
The purpose of this paper is to describe explicitly the solution for linear
control systems on Lie groups. In the case of linear control systems with inner
derivations, the solution is given basically by the product of the exponential
of the associated invariant system and the exponential of the associated
invariant drift field. We present the solutions in low dimensional cases and
apply the results to obtain some controllability results.
|
Encapsulation of dsDNA fragments (contour length 54 nm) by the cationic
diblock copolymer poly(butadiene-b-N-methyl 4-vinyl pyridinium) [PBd-b-P4VPQ]
has been studied with phase contrast, polarized light, and fluorescence
microscopy, as well as scanning electron microscopy. Encapsulation was achieved
with a single emulsion technique. For this purpose, an aqueous DNA solution is
emulsified in an organic solvent (toluene) and stabilized by the amphiphilic
diblock copolymer. The PBd block forms an interfacial brush, whereas the
cationic P4VPQ block complexes with DNA. A subsequent change of the quality of
the organic solvent results in a collapse of the PBd brush and the formation of
a capsule. Inside the capsules, the DNA is compacted as shown by the appearance
of birefringent textures under crossed polarizers and the increase in
fluorescence intensity of labeled DNA. The capsules can also be dispersed in
aqueous medium to form vesicles, provided they are stabilized with an osmotic
agent (polyethylene glycol) in the external phase. It is shown that the DNA is
released from the vesicles once the osmotic pressure drops below 10^5 N/m^2 or if
the ionic strength of the supporting medium exceeds 0.1 M. The method has also
proven to be efficient to encapsulate pUC18 plasmid in sub-micron sized
vesicles and the general applicability of the method has been demonstrated by
the preparation of the charge inverse system: cationic poly(ethylene imine)
encapsulated by the anionic diblock poly(styrene-b-acrylic acid).
|
Lighting is a crucial technology that is used every day. The introduction of
the white light-emitting diode (LED), which consists of a blue LED combined with
a phosphor layer, greatly reduces the energy consumption for lighting. Despite
the fast-growing market, white LEDs are still designed using slow, numerical,
trial-and-error algorithms. Here we introduce a radically new design principle
that is based on an analytical model instead of a numerical approach. Our
design model predicts the color point for any combination of design parameters.
In addition the model provides the reflection and transmission coefficients -
as well as the energy density distribution inside the LED - of the scattered
and re-emitted light intensities. To validate our model we performed extensive
experiments on an emblematic white LED and found excellent agreement. Our model
provides for a fast and efficient design, resulting in reduction of both design
and production costs.
|
A class of codimension one foliations has been recently introduced by
imposing a natural compatibility condition with a closed maximally
non-degenerate 2-form. In this paper we study for such foliations the
information captured by a Donaldson type submanifold. In particular we deduce
that their leaf spaces are homeomorphic to leaf spaces of 3-dimensional taut
foliations. We also introduce surgery constructions to show that this class of
foliations is broad enough. Our techniques come mainly from symplectic
geometry.
|
The Gaia Sausage is the major accretion event that built the stellar halo of
the Milky Way galaxy. Here, we provide dynamical and chemical evidence for a
second substantial accretion episode, distinct from the Gaia Sausage. The
Sequoia Event provided the bulk of the high energy retrograde stars in the
stellar halo, as well as the recently discovered globular cluster FSR 1758.
There are up to 6 further globular clusters, including $\omega$~Centauri, as
well as many of the retrograde substructures in Myeong et al. (2018),
associated with the progenitor dwarf galaxy, named the Sequoia. The stellar
mass in the Sequoia galaxy is $\sim 5 \times 10^{7} M_\odot$, whilst the total
mass is $\sim 10^{10} M_\odot$, as judged from abundance matching or from the
total sum of the globular cluster mass. Although clearly less massive than the
Sausage, the Sequoia has a distinct chemo-dynamical signature. The strongly
retrograde Sequoia stars have a typical eccentricity of $\sim0.6$, whereas the
Sausage stars have no clear net rotation and move on predominantly radial
orbits. On average, the Sequoia stars have lower metallicity by $\sim 0.3$ dex
and higher abundance ratios as compared to the Sausage. We conjecture that the
Sausage and the Sequoia galaxies may have been associated and accreted at a
comparable epoch.
|
We study the spectrum of stable static fermion bags in the 1+1 dimensional
Gross-Neveu model with $N$ flavors of Dirac fermions, in the large $N$ limit.
In the process, we discover a new kink, heavier than the
Callan-Coleman-Gross-Zee (CCGZ) kink, which is marginally stable (at least in
the large $N$ limit). The connection of this new kink and the conjectured $S$
matrix of the Gross-Neveu model is obscured at this point. After identifying
all stable static fermion bags, we arrange them into a periodic table,
according to their $O (2N)$ and topological quantum numbers.
|