We give a systematic and thorough study of geometric notions and results connected to Minkowski's measure of symmetry and the extension of the well-known Minkowski functional to arbitrary, not necessarily symmetric, convex bodies K in any (real) normed space X. Although many of the notions and results we treat in this paper can be found elsewhere in the literature, they are scattered and possibly hard to find. Further, we are not aware of a systematic study of this kind, and we feel that several features, connections and properties - e.g. the connections between many equivalent formulations - are new, more general, and put in a better perspective here. In particular, we prove a number of fundamental properties of the extended Minkowski functional, including convexity, global Lipschitz boundedness, linear growth, and approximation of the classical Minkowski functional of the central symmetrization of the body K. Our aim is to present how in recent years these notions have proved to be surprisingly relevant and effective in problems of approximation theory.
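For reference, the classical Minkowski functional (gauge) being extended here is standard: for a convex body $K$ in a normed space $X$ with the origin in the interior of $K$,
$$ \|x\|_K \;=\; \inf\{\lambda > 0 \,:\, x \in \lambda K\}, \qquad x \in X, $$
and the central symmetrization referred to above is $\tfrac{1}{2}(K + (-K)) = \{(x - y)/2 : x, y \in K\}$.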
Let $\mathbb{F}\subset \mathbb{G}$ be two filtrations and $S$ be an $\mathbb{F}$ semimartingale possessing an $\mathbb{F}$ local martingale deflator. Consider a $\mathbb{G}$ stopping time $\tau$. We study whether $S^{\tau-}$ or $S^{\tau}$ can have $\mathbb{G}$ local martingale deflators. A suitable theoretical framework is set up in this paper, within which necessary/sufficient conditions for the problem to be solvable are proved. Under these conditions, we construct $\mathbb{G}$ local martingale deflators for $S^{\tau-}$ or for $S^{\tau}$. Among other results, it is proved that $\mathbb{G}$ local martingale deflators are multiples of $\mathbb{F}$ local martingale deflators, with a multiplicator coming from the multiplicative decomposition of the Az\'ema supermartingale of $\tau$. The proofs of the necessary/sufficient conditions require various results to be established about the Az\'ema supermartingale, local martingale deflators, and filtration enlargement, which are interesting in themselves. Our study is based on a filtration enlargement setting. For applications, it is important to have a method to infer the existence of such a setting from knowledge of the market information. This question is discussed at the end of the paper.
In this thesis, I investigate various aspects of one of the most fundamental questions in thermodynamics: what state transformations can quantum systems undergo while interacting with a thermal bath under specific constraints? These constraints may involve total energy conservation, memory effects, or finite-size considerations. Addressing this question leads to (i) a characterisation of the structure of the thermodynamic arrow of time, (ii) a framework bridging the gap between memoryless and arbitrarily non-Markovian thermodynamic processes, and (iii) a derivation of the famous fluctuation-dissipation relation within a quantum information framework. The last part of this thesis focuses on a phenomenon ubiquitous in science, so-called catalysis, which involves using an auxiliary system (a catalyst) to enable processes that would otherwise be impossible. Over the last two decades, this notion has spread to the field of quantum physics. However, the effect is typically described within a highly abstract framework which, despite its successes, struggles to fully capture the behaviour of physically realisable systems, thereby limiting the applicability of quantum catalysis in practical scenarios. Strikingly, I demonstrate this effect in a paradigmatic quantum optics setup, namely the Jaynes-Cummings model, where an atom interacts with an optical cavity. The atom plays the role of the catalyst and allows for the deterministic generation of non-classical light in the cavity, as evidenced by sub-Poissonian statistics or Wigner negativity.
This paper presents a framework for the study of convergence in networks where the node dynamics may be piecewise smooth, nonidentical across the network, or both. Specifically, we derive sufficient conditions for global convergence of all node trajectories towards the same bounded region of their state space. The analysis is based on the use of set-valued Lyapunov functions, and bounds are derived on the minimum coupling strength required to make all nodes in the network converge towards each other. We also provide an estimate of the asymptotic bound $\epsilon$ on the mismatch between the node states at steady state. The analysis is performed for both linear and nonlinear coupling protocols. The theoretical analysis is extensively illustrated and validated via its application to a set of representative numerical examples.
We show that parameterized versions of splitting theorems in Morse theory can be effectively used to generalize some famous bifurcation theorems for potential operators. In particular, such generalizations, based on the author's recent splitting theorems [38, 39, 42, 43] and that of [8], are given even though the potential operators in [42, 43] have weaker differentiability and may even be discontinuous. As applications, we obtain many bifurcation results for quasi-linear elliptic Euler equations and systems of higher order.
The semi-Dirac semi-Weyl semi-metal has been of interest in recent years due to its naturally occurring point Fermi surface and the associated exotic band structure near the Fermi surface, which is linear (graphene-like) in one direction of the Brillouin zone, but quadratic in a direction perpendicular to it. In this paper the effect of a magnetic adatom impurity in a semi-Dirac system is studied. As in a metal, the magnetic impurity in a semi-Dirac system interacts with the sea of conduction electrons and gives rise to magnetism. The transition of the semi-Dirac system from the non-magnetic to the magnetic phase is studied as a function of the impurity energy, the strength of hybridization between the impurity and the bath, as well as that of the electron-electron interaction at the impurity atom. The results are compared and contrasted with those of graphene and an ordinary metal. Since the semi-Dirac and the Dirac dispersion share similar features, e.g., both are particle-hole symmetric and linear in one direction, the two systems share resemblances in their characteristics in the presence of a magnetic impurity. But some features are unique to the semi-Dirac dispersion.
We present an experimental and theoretical study of electron tunnelling through quantum dots which focuses attention on the amplitude of the current peaks as a function of magnetic field. We demonstrate that the amplitudes of the current peaks in the tunnelling spectra show a dramatically different behaviour as a function of the magnetic field, depending on the angular momentum of the dot state through which tunnelling occurs. This is seen in the non-monotonic behaviour of the current amplitude in magnetic field. Furthermore, the magnetic field severely hinders tunnelling through states with angular momentum parallel to the field, and in some cases it makes it altogether impossible. This type of investigation allows us to directly probe the details of the confined wave functions of the quantum dot.
For an arbitrary integer n, we describe a large class of right-angled Coxeter systems for which the visual boundary (of the corresponding Coxeter-Davis complex) is homeomorphic to the n-dimensional Sierpi\'nski compactum. We also provide a necessary and sufficient condition on a planar simplicial complex L under which the right-angled Coxeter system whose nerve is L has the visual boundary homeomorphic to the Sierpi\'nski curve.
First-principles-based models have been extremely successful in providing crucial insights and predictions for complex biological functions and phenomena. However, they can be hard to build and expensive to simulate for complex living systems. On the other hand, modern data-driven methods excel at modeling many types of high-dimensional and noisy data. Still, the training and interpretation of these data-driven models remain challenging. Here, we combine the two types of methods to model stochastic neuronal network oscillations. Specifically, we develop a class of first-principles-based artificial neural networks to provide faithful surrogates for the high-dimensional, nonlinear oscillatory dynamics produced by neural circuits in the brain. Furthermore, when the training data set is enlarged within a range of parameter choices, the artificial neural networks become generalizable to these parameters, covering cases in distinctly different dynamical regimes. Altogether, our work opens a new avenue for modeling complex neuronal network dynamics with artificial neural networks.
Most amino acid and sugar molecules occur in mirror-image, or chiral, forms of each other, known as enantiomers. However, life on Earth is mostly homochiral: proteins contain almost exclusively L-amino acids, while only D-sugars appear in RNA and DNA. The mechanism behind this fundamental asymmetry of life remains unknown, despite much progress in the theoretical and experimental understanding of homochirality in the past decades. We review three potential mechanisms for the emergence of biological homochirality on the early Earth and explore their implications for astrobiology: first, that biological homochirality is a stochastic process driven by local environmental fluctuations; second, that it is driven by circularly-polarized ultraviolet radiation in star-forming regions; and third, that it is driven by parity violation at the elementary particle level. We argue that each of these mechanisms leads to different observational consequences for the existence of enantiomeric excesses in our solar system and in exoplanets, pointing to the possibility that the search for life elsewhere will help elucidate the origins of homochirality on Earth.
We study the leading term of the holonomy map of a perturbed plane polynomial Hamiltonian foliation. The non-vanishing of this term implies the non-persistence of the corresponding Hamiltonian identity cycle. We prove that this does happen for generic perturbations and cycles, as well as for cycles which are commutators in Hamiltonian foliations of degree two. Our approach relies on Chen's theory of iterated path integrals, which we briefly review.
The exact solution of the asymmetric exclusion problem with N distinct classes of particles (c = 1, 2, ..., N) with hierarchical order is presented. In this model the particles (of size 1) are located at lattice points and diffuse with equal asymmetric rates, but particles in a class c do not distinguish those in classes c' > c from holes (empty sites). We generalize and solve exactly this model by considering the molecules in each distinct class c = 1, 2, ..., N to have sizes s_c (s_c = 0, 1, 2, ...), in units of the lattice spacing. The solution is derived by a nested Bethe ansatz.
We build a setup for path integral quantization through the Faddeev-Jackiw approach, extending it to include Grassmannian degrees of freedom, to be later implemented in a model of generalized electrodynamics that involves fourth-order derivatives in the components of a massive vector field endowed with gauge freedom due to an additional scalar field, namely generalized Stueckelberg electrodynamics. We first work out the free case to gain some familiarity with the program and subsequently add the interaction with fermionic matter fields to complete our aim. In addition to deriving the correct classical brackets for this model, we obtain the full expression for the associated generating functional and its integration measure.
The set of coupled equations for the self-consistent propagator and the field expectation value is solved numerically with high accuracy in Euclidean space at zero temperature and in the broken symmetry phase of the phi^4 model. Explicitly finite equations are derived by adapting the renormalization method of van Hees and Knoll [H. van Hees, J. Knoll, Phys. Rev. D65, 025010 (2001)] to the case of a non-vanishing field expectation value. The set of renormalization conditions used in this method leads to the same set of counterterms obtained recently in A. Patkos, Zs. Szep, Nucl. Phys. A811, 329-352 (2008). This makes possible the direct comparison of the accurate solution of the explicitly finite equations with the solution of renormalized equations containing counterterms. A numerically efficient way of solving these latter equations iteratively is obtained by deriving at each order of the iteration new counterterms which evolve during the iteration process towards the counterterms determined from the asymptotic behavior of the converged propagator. As shown at different values of the coupling, the use of these evolving counterterms accelerates the convergence of the solution of the equations.
We propose a stochastic process driven by memory effects with novel distributions, including both exponential and leptokurtic heavy-tailed distributions. One class of distributions is derived analytically from the continuum limit of the discrete binary process with renormalized auto-correlation, and the closed-form moment generating function is obtained; the cumulants are thus calculated and shown to be convergent. The other class of distributions is investigated numerically. Combining the two stochastic processes with opposite signs of memory under a regime-switching mechanism produces power-law decay behavior, which strongly suggests that memory is an alternative origin of heavy tails.
We present results for the azimuthal anisotropy of charged hadron distributions in A+A, p+A, d+A, and $^3$He+A collisions within the IP-Glasma+MUSIC model. The obtained anisotropies are due to the fluid dynamic response of the system to the fluctuating initial geometry of the interaction region. While the elliptic and triangular anisotropies in peripheral Pb+Pb collisions at $\sqrt{s}=2.76$ TeV are well described by the model, the calculations underestimate the experimental data for the same quantities in $\sqrt{s}=5.02$ TeV p+Pb collisions. This disagreement may be due to neglected initial state correlations, the lack of a detailed description of the fluctuating spatial structure of the proton, or both. We further present predictions for azimuthal anisotropies in p+Au, d+Au, and $^3$He+Au collisions at $\sqrt{s}=200$ GeV. For d+Au and $^3$He+Au collisions we expect the detailed substructure of the nucleon to become less important.
Let G be a nilpotent complete p-valued group of finite rank and let k be a field of characteristic p. We prove that every faithful prime ideal of the Iwasawa algebra kG is controlled by the centre of G, and use this to show that the prime spectrum of kG is a disjoint union of commutative strata. We also show that every prime ideal of kG is completely prime. The key ingredient in the proof is the construction of a non-commutative valuation on certain filtered simple Artinian rings.
UV radiation has been used as a disinfection strategy to deactivate a wide range of pathogens, but existing irradiation strategies do not ensure sufficient exposure of all environmental surfaces and/or require long disinfection times. We present a near-optimal coverage planner for mobile UV disinfection robots. The formulation optimizes the irradiation time efficiency while ensuring that a sufficient dosage of radiation is received by each surface. The trajectory and dosage plan are optimized taking collision and light occlusion constraints into account. We propose a two-stage scheme to approximate the solution of the resulting NP-hard optimization problem and, for efficiency, perform key irradiance and occlusion calculations on a GPU. Empirical results show that our technique achieves more coverage for the same exposure time than strategies used by existing UV robots, can be used to compare UV robot designs, and produces near-optimal plans. This is an extended version of the paper originally contributed to ICRA2021.
We study genericity of dynamical properties in the space of homeomorphisms of the Cantor set and in the space of subshifts of a suitably large shift space. These rather different settings are related by a Glasner-King type correspondence: genericity in one is equivalent to genericity in the other. By applying symbolic techniques in the shift-space model we derive new results about genericity of dynamical properties for transitive and totally transitive homeomorphisms of the Cantor set. We show that the isomorphism class of the universal odometer is generic in the space of transitive systems. On the other hand, the space of totally transitive systems displays much more varied dynamics. In particular, we show that in this space the isomorphism class of every Cantor system without periodic points is dense, and the following properties are generic: minimality, zero entropy, disjointness from a fixed totally transitive system, weak mixing, strong mixing, and minimal self-joinings. The last two stand in striking contrast to the situation in the measure-preserving category. We also prove a correspondence between genericity of dynamical properties in the measure-preserving category and genericity of systems supporting an invariant measure with the same property.
Next-generation accelerator concepts, which hinge on the precise shaping of beam distributions, demand equally precise diagnostic methods capable of reconstructing beam distributions within 6-dimensional position-momentum spaces. However, the characterization of intricate features within 6-dimensional beam distributions using conventional diagnostic techniques necessitates hundreds of measurements, consuming many hours of valuable beam time. Novel phase space reconstruction techniques are needed to substantially reduce the number of measurements required to reconstruct detailed, high-dimensional beam features, in order to resolve complex beam phenomena and to serve as feedback in precision beam shaping applications. In this study, we present a novel approach to reconstructing detailed 6-dimensional phase space distributions from experimental measurements using generative machine learning and differentiable beam dynamics simulations. We demonstrate that, for a collection of synthetic beam distribution test cases, this approach can be used to resolve 6-dimensional phase space distributions using basic beam manipulations and as few as 20 2-dimensional measurements of the beam profile, without the need for prior data collection or model training. We also demonstrate an application of the reconstruction method in an experimental setting at the Argonne Wakefield Accelerator, where it is able to reconstruct the beam distribution and accurately predict previously unseen measurements 75x faster than previous methods.
The extension of the highly optimized local natural orbital (LNO) CCSD(T) method is presented for high-spin open-shell molecules. The techniques enabling the outstanding efficiency of the closed-shell LNO-CCSD(T) variant are adopted, including the iteration- and redundancy-free MP2 and (T) formulations, as well as the integral-direct, memory- and disk-use-economic, and OpenMP-parallel algorithms. For large molecules, the efficiency of our open-shell LNO-CCSD(T) method approaches that of its closed-shell parent method due to a novel approximation for higher-order long-range spin-polarization effects. The accuracy of open-shell LNO-CCSD(T) is extensively tested for radicals and reactions thereof, ionization processes, as well as spin-state splittings and transition-metal compounds. In the size range where the canonical CCSD(T) reference is accessible (up to 20-30 atoms), the average open-shell LNO-CCSD(T) correlation energies are found to be 99.9-99.95% accurate, which translates into average absolute deviations of a few tenths of a kcal/mol in the investigated energy differences already with the default settings. This enables the accurate modeling of large systems with complex electronic structure, as illustrated on open-shell organic radicals and transition metal complexes of up to 179 atoms, as well as on challenging biochemical systems, including up to 601 atoms and 11,000 basis functions. While the protein models involve difficulties for local approximations, such as the spin states of a bound iron ion or an extremely delocalized singly occupied orbital, the corresponding single-node LNO-CCSD(T) computations were feasible in a matter of days with tens to a hundred GB of memory use. Therefore, the new LNO-CCSD(T) implementation enables highly accurate computations for open-shell systems of unprecedented size and complexity with widely accessible hardware.
The U(2)_R x U(2)_L symmetry of QCD with two massless flavours is subject to anomalies which affect correlation functions involving the singlet currents A^0_\mu or V^0_\mu. These are relevant for pion-photon interactions, because - for two flavours - the electromagnetic current contains a singlet piece. We give the effective Lagrangian required for the corresponding low energy analysis to next-to-leading order, without invoking an expansion in the mass of the strange quark. In particular, the Wess-Zumino-Witten term that accounts for the two-flavour anomalies within the effective theory is written down in closed form.
In this paper, a general framework is proposed for the analysis and characterization of observability and diagnosability of finite state systems. Observability corresponds to the reconstruction of the system's discrete state, while diagnosability corresponds to the possibility of determining the past occurrence of some particular states, for example faulty states. A unifying framework is proposed where observability and diagnosability properties are defined with respect to a critical set, i.e. a set of discrete states representing a set of faults or, more generally, a set of interest. These properties are characterized, and the involved conditions provide an estimation of the delay required for the detection of a critical state, of the precision of the delay estimation, and of the duration of a possible initial transient where the diagnosis is not possible or not required. Our framework makes it possible to precisely compare some of the observability and diagnosability notions existing in the literature with the ones introduced in this paper, and we present this comparison.
In this paper, we discuss the Cram\'er-Lundberg model with investments, where the price of the invested risky asset follows a geometric Brownian motion with drift $a$ and volatility $\sigma > 0$. By assuming there is a cap on the claim sizes, we prove that the probability of ruin has at least an algebraic decay rate if $2a/\sigma^2 > 1$. More importantly, without this assumption, we show that ruin is certain for all initial capital $u$ if $2a/\sigma^2 \le 1$.
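The dichotomy in $2a/\sigma^2$ can be explored numerically. Below is a minimal Monte Carlo sketch (not the paper's proof technique) of finite-horizon ruin for a surplus continuously invested in a GBM, with compound-Poisson claims capped as assumed above; all parameter values are illustrative placeholders.

```python
import numpy as np

def ruin_probability(u, a, sigma, c=1.0, lam=1.0, claim_mean=0.8,
                     claim_cap=5.0, T=100.0, dt=1e-2, n_paths=500, seed=0):
    """Crude Euler-scheme Monte Carlo estimate of the finite-horizon ruin
    probability when the surplus is continuously invested in a GBM asset
    with drift a and volatility sigma. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    ruined, n_steps = 0, int(T / dt)
    for _ in range(n_paths):
        x = u
        for _ in range(n_steps):
            # investment return plus premium income over dt
            x += x * (a * dt + sigma * np.sqrt(dt) * rng.standard_normal()) + c * dt
            # compound-Poisson claims with capped (bounded) claim sizes
            if rng.random() < lam * dt:
                x -= min(rng.exponential(claim_mean), claim_cap)
            if x < 0:
                ruined += 1
                break
    return ruined / n_paths

# 2a/sigma^2 = 4 > 1: ruin probability decays algebraically in u
print(ruin_probability(u=5.0, a=0.08, sigma=0.2))
# 2a/sigma^2 ~ 0.44 <= 1: ruin becomes certain over long horizons
print(ruin_probability(u=5.0, a=0.02, sigma=0.3))
```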
This paper discusses the design and performance of the time measurement technique and of the synchronization systems of the CMS hadron calorimeter. Time measurement performance results are presented from test beam data taken in the years 2004 and 2006. For hadronic showers of energy greater than 100 GeV, the timing resolution is measured to be about 1.2 ns. Time synchronization and out-of-time background rejection results are presented from the Cosmic Run At Four Tesla and LHC beam runs taken in the Autumn of 2008. The inter-channel synchronization is measured to be within 2 ns.
We provide exact and approximation methods for solving a geometric relaxation of the Traveling Salesman Problem (TSP) that occurs in curve reconstruction: for a given set of vertices in the plane, the problem Minimum Perimeter Polygon (MPP) asks for a (not necessarily simply connected) polygon with shortest possible boundary length. Even though the closely related problem of finding a minimum cycle cover is polynomially solvable by matching techniques, we show how the topological structure of a polygon leads to NP-hardness of the MPP. On the positive side, we show how to achieve a constant-factor approximation. When trying to solve MPP instances to provable optimality by means of integer programming, an additional difficulty compared to the TSP is the fact that only a subset of subtour constraints is valid, depending not on combinatorics, but on geometry. We overcome this difficulty by establishing and exploiting additional geometric properties. This allows us to reliably solve a wide range of benchmark instances with up to 600 vertices within reasonable time on a standard machine. We also show that using a natural geometry-based sparsification yields results that are on average within 0.5% of the optimum.
We define the notion of inseparable coverings of schemes and we propose a ramification formalism for them, along the lines of the classical one. Using this formalism we prove a formula analogous to the classical Riemann-Hurwitz formula for generic torsors under infinitesimal diagonalizable group schemes.
Exact expressions for certain integrated correlators of four half-BPS operators in $\mathcal{N}=4$ supersymmetric Yang-Mills theory with gauge group $SU(N)$ have been recently obtained thanks to a beautiful interplay between supersymmetric localisation and modular invariance. The large-$N$ expansion at fixed Yang-Mills coupling of such integrated correlators produces an asymptotic series of perturbative terms, holographically related to higher derivative interactions in the low energy expansion of the type IIB effective action, as well as exponentially suppressed corrections at large $N$, interpreted as contributions from coincident $(p,q)$-string world-sheet instantons. In this work we define a manifestly modular invariant Borel resummation of the perturbative large-$N$ expansion of these integrated correlators, from which we extract the exact non-perturbative large-$N$ sectors via resurgence analysis. Furthermore, we show that in the 't Hooft limit such modular invariant non-perturbative completions reduce to known resurgent genus expansions. Finally, we clarify how the same non-perturbative data is encoded in the decomposition of the integrated correlators based on $\rm{SL}(2,\mathbb{Z})$ spectral theory.
We consider operators in N=4 SYM theory which are dual, at strong coupling, to classical strings rotating in S^5. Three point correlation functions of such operators factorize into a universal contribution coming from the AdS part of the string sigma model and a state-dependent S^5 contribution. Consequently a similar factorization arises for the OPE coefficients. In this paper we evaluate the AdS universal factor of the OPE coefficients which is explicitly expressed just in terms of the anomalous dimensions of the three operators.
With the rapid growth of blockchain, an increasing number of users have been attracted and many applications have emerged in different fields. Especially in the cryptocurrency investment field, blockchain technology has shown vigorous vitality. However, along with the rise of online business, numerous fraudulent activities, e.g., money laundering, bribery, and phishing, have emerged as the main threat to trading security. Due to the openness of Ethereum, researchers can easily access Ethereum transaction records and smart contracts, which brings unprecedented opportunities for Ethereum scam detection and analysis. This paper mainly focuses on the Ponzi scheme, a typical fraud, which has caused large financial losses to users on Ethereum. To identify Ponzi contracts and support Ethereum's sustainable development, we model Ponzi scheme identification and detection as a node classification task. We first collect target contracts' transactions to establish transaction networks and propose a detection model based on graph convolutional networks (GCN) to precisely distinguish Ponzi contracts. Experiments on different real-world Ethereum datasets demonstrate that our proposed model achieves promising results compared with general machine learning methods for detecting Ponzi schemes.
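For background, node classification with a GCN rests on the standard propagation rule of Kipf and Welling; the toy sketch below illustrates that mechanism on a tiny assumed transaction graph, and is not the paper's specific architecture or data pipeline.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W),
    the standard GCN propagation rule. A: adjacency, H: node features."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Hypothetical toy transaction network: 4 contracts, 3 features each
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 3))
W = np.random.default_rng(1).normal(size=(3, 2))   # 2 classes: Ponzi / benign
logits = gcn_layer(A, H, W)
print(logits.shape)  # (4, 2): per-node class scores for node classification
```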
Multi-task language models show outstanding performance for various natural language understanding tasks with only a single model. However, these language models utilize an unnecessarily large number of model parameters, even when used only for a specific task. This paper proposes a novel training-free compression method for multi-task language models based on pruning. Specifically, we use an attribution method to determine which neurons are essential for performing a specific task. We prune unimportant neurons task-specifically and leave only the task-specific parameters. Furthermore, we extend our method to be applicable in low-resource and unsupervised settings. Since our compression method is training-free, it uses few computing resources and does not destroy the pre-trained knowledge of language models. Experimental results on six widely used datasets show that our proposed pruning method significantly outperforms baseline pruning methods. In addition, we demonstrate that our method preserves performance even in an unseen domain setting.
Reconfigurable intelligent surfaces (RISs) provide a promising way to proactively augment propagation environments for better transmission performance in wireless communications. Existing multi-RIS works mainly focus on link-level optimization with predetermined transmission paths, which cannot be directly extended to system-level management, since they consider neither the interference caused by undesired scattering of RISs nor the performance balancing between different transmission paths. To address this, we study an innovative multi-hop multi-RIS communication system, where a base station (BS) transmits information to a set of distributed users over a multi-RIS configuration space in a multi-hop manner. The signals for each user are subsequently reflected by the selected RISs via multi-reflection line-of-sight (LoS) links. To ensure that all users have fair access to the system and to avoid an excessive number of RISs serving one user, we aim to find the optimal beam reflecting path for each user, while judiciously determining the path scheduling strategies with the corresponding beamforming design to ensure fairness. Due to the presence of interference caused by undesired scattering of RISs, it is highly challenging to solve the formulated multi-RIS multi-path beamforming optimization problem. To solve it, we first derive the optimal RIS phase shifts and the corresponding reflecting path selection for each user based on its practical deployment location. With the optimized multi-reflection paths, we obtain a feasible user grouping pattern for effective interference mitigation by constructing maximum independent sets (MISs). Finally, we propose a joint heuristic algorithm to iteratively update the beamforming vectors and the group scheduling policies to maximize the minimum equivalent data rate of all users.
Landau level (LL) gaps are important parameters for understanding electronic interactions and symmetry-broken processes in bilayer graphene (BLG). Here we present transport spectroscopy measurements of LL gaps in double-gated suspended BLG with high mobilities in the quantum Hall regime. By using bias as a spectroscopic tool, we measure the gap {\Delta} for the quantum Hall (QH) states at filling factors {\nu}={\pm}4 and -2. The single-particle gap for {\nu}=4 scales linearly with magnetic field B and is independent of the out-of-plane electric field E. For the symmetry-broken {\nu}=-2 state, the measured values of the gap are 1.1 meV/T and 0.17 meV/T for the singly-gated geometry and the dual-gated geometry at E=0, respectively. The difference between the two values arises from the E-dependence of the gap, suggesting that the {\nu}=-2 state is layer polarized. Our studies provide the first measurements of the gaps of the broken-symmetry QH states in BLG with well-controlled E, and establish a robust method that can be implemented for studying similar states in other layered materials.
We define monotone links on a torus, obtained as projections of curves in the plane whose coordinates are monotone increasing. Using the work of Morton-Samuelson, to each monotone link we associate elements in the double affine Hecke algebra and the elliptic Hall algebra. In the case of torus knots (when the curve is a straight line), we recover symmetric function operators appearing in the rational shuffle conjecture. We show that the class of monotone links viewed as links in $\mathbb R^3$ coincides with the class of Coxeter links, studied by Oblomkov-Rozansky in the setting of the flag Hilbert scheme. When the curve satisfies a convexity condition, we recover positroid links that we previously studied. In the convex case, we conjecture that the associated symmetric functions are Schur positive, extending a recent conjecture of Blasiak-Haiman-Morse-Pun-Seelinger, and we speculate on the relation to Khovanov-Rozansky homology. Our constructions satisfy a skein recurrence where the base case consists of piecewise almost linear curves. We show that convex piecewise almost linear curves give rise to algebraic links.
We give an asymptotic formula for the number of elliptic curves over $\mathbb{Q}$ with bounded Faltings height. Silverman has shown that the Faltings height for elliptic curves over number fields can be expressed in terms of modular functions and the minimal discriminant of the elliptic curve. We use this to recast the problem as one of counting lattice points in a particular region in $\mathbb{R}^2$.
We consider statistical and algorithmic aspects of solving large-scale least-squares (LS) problems using randomized sketching algorithms. Prior results show that, from an \emph{algorithmic perspective}, when using sketching matrices constructed from random projections and leverage-score sampling, if the number of samples $r$ is much smaller than the original sample size $n$, then the worst-case (WC) error is the same as that of solving the original problem, up to a very small relative error. From a \emph{statistical perspective}, one typically considers the mean-squared error performance of randomized sketching algorithms when data are generated according to a statistical linear model. In this paper, we provide a rigorous comparison of both perspectives, leading to insights on how they differ. To do this, we first develop a framework for assessing, in a unified manner, algorithmic and statistical aspects of randomized sketching methods. We then consider the statistical prediction efficiency (PE) and the statistical residual efficiency (RE) of the sketched LS estimator, and we use our framework to provide upper bounds for several types of random projection and random sampling algorithms. Among other results, we show that the RE can be upper bounded when $r$ is much smaller than $n$, while the PE typically requires the number of samples $r$ to be substantially larger. Lower bounds developed in subsequent work show that our upper bounds on PE cannot be improved.
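A minimal sketch-and-solve illustration of the algorithmic perspective, using a Gaussian random projection (leverage-score sampling is analogous) and assumed toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 20_000, 50, 400                    # sketch size r much smaller than n

A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true + 0.1 * rng.normal(size=n)    # statistical linear model

# Full least-squares solution
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)

# Sketch-and-solve: project to r rows, then solve the small LS problem
S = rng.normal(size=(r, n)) / np.sqrt(r)     # Gaussian random projection
x_sk, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

# Residual-efficiency-style comparison: close to 1 even though r << n
print(np.linalg.norm(A @ x_sk - b) / np.linalg.norm(A @ x_full - b))
```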
In this paper, we use the stochastic approximation method to estimate Sliced Average Variance Estimation (SAVE). This method is known for its efficiency in recursive estimation. Stochastic approximation is particularly effective for constructing recursive estimators and has been widely used in density estimation, regression, and semi-parametric models. We demonstrate that the resulting estimator is asymptotically normal and root-$n$ consistent. Through simulation studies and an application to real data, we show that it is faster than the previously proposed kernel method.
The burgeoning demand for collaborative robotic systems to execute complex tasks collectively has intensified the research community's focus on advancing simultaneous localization and mapping (SLAM) in a cooperative context. Despite this interest, the scalability and diversity of existing datasets for collaborative trajectories remain limited, especially in scenarios with constrained perspectives where the generalization capabilities of Collaborative SLAM (C-SLAM) are critical for the feasibility of multi-agent missions. Addressing this gap, we introduce S3E, an expansive multimodal dataset. Captured by a fleet of unmanned ground vehicles traversing four distinct collaborative trajectory paradigms, S3E encompasses 13 outdoor and 5 indoor sequences. These sequences feature meticulously synchronized and spatially calibrated data streams, including 360-degree LiDAR point clouds, high-resolution stereo imagery, high-frequency inertial measurement unit (IMU) data, and ultra-wideband (UWB) relative observations. Our dataset not only surpasses previous efforts in scale, scene diversity, and data intricacy but also provides a thorough analysis and benchmarks for both collaborative and individual SLAM methodologies. For access to the dataset and the latest information, please visit our repository at https://pengyu-team.github.io/S3E.
Spectrum pooling allows multiple operators, or tenants, to share the same frequency bands. This work studies the optimization of spectrum pooling for the downlink of a multi-tenant Cloud Radio Access Network (C-RAN) system in the presence of inter-tenant privacy constraints. The spectrum available for downlink transmission is partitioned into private and shared subbands, and the participating operators cooperate to serve the user equipments (UEs) on the shared subband. The network of each operator consists of a cloud processor (CP) that is connected to proprietary radio units (RUs) by means of finite-capacity fronthaul links. In order to enable inter-operator cooperation, the CPs of the participating operators are also connected by finite-capacity backhaul links. Inter-operator cooperation may hence result in loss of privacy. Fronthaul and backhaul links are used to transfer quantized baseband signals. Standard quantization is considered first. Then, a novel approach based on the idea of correlating quantization noise signals across RUs of different operators is proposed to control the trade-off between distortion at UEs and inter-operator privacy. The problem of optimizing the bandwidth allocation, precoding, and fronthaul/backhaul compression strategies is tackled under constraints on backhaul and fronthaul capacity, as well as on per-RU transmit power and inter-operator privacy. For both cases, the optimization problems are tackled using the concave convex procedure (CCCP), and extensive numerical results are provided.
Small cell networks are seen as a promising technology for boosting the performance of future wireless networks. In this paper, we propose a novel context-aware user-cell association approach for small cell networks that exploits the information about the velocity and trajectory of the users while also taking into account their quality of service (QoS) requirements. We formulate the problem in the framework of matching theory with externalities in which the agents, namely users and small cell base stations (SCBSs), have strict interdependent preferences over the members of the opposite set. To solve the problem, we propose a novel algorithm that leads to a stable matching among the users and SCBSs. We show that the proposed approach can better balance the traffic among the cells while also satisfying the QoS of the users. Simulation results show that the proposed matching algorithm yields significant performance advantages relative to traditional context-unaware approaches.
We present a theory of superconducting pairing originating from soft critical fluctuations near isospin-polarized states in rhombohedral trilayer graphene. Using a symmetry-based approach, we determine possible isospin order types and derive the effective electron-electron interactions mediated by isospin fluctuations. Superconductivity arising from these interactions has a symmetry and order parameter structure that depend in a unique way on the "mother" isospin order. This model naturally leads to a superconducting phase adjacent to the isospin-ordering phase transition, which mimics the behavior observed in experiment. The symmetry of the paired state predicted for the isospin order type inferred in experiments matches the observations. These findings support a scenario of superconductivity originating from electron-electron interactions.
Based on the continuous time random walk, we derive the Fokker-Planck equations with the Caputo-Fabrizio fractional derivative, which can effectively model a variety of physical phenomena, especially material heterogeneities and structures with different scales. Extending the discretizations for fractional substantial calculus [Chen and Deng, \emph{ESAIM: M2AN.} \textbf{49}, (2015), 373--394], we first provide numerical discretizations of the Caputo-Fabrizio fractional derivative with global truncation error $\mathcal{O}(\tau^\nu)$ $(\nu=1,2,3,4)$. Then we use the derived schemes to solve the Caputo-Fabrizio fractional diffusion equation. By analysing the positive definiteness of the stiffness matrices of the discretized Caputo-Fabrizio operator, the unconditional stability and the convergence with global truncation error $\mathcal{O}(\tau^2+h^2)$ are theoretically proved and numerically verified.
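As background (a standard definition from the literature, not specific to the above paper), the Caputo-Fabrizio derivative of order $\alpha \in (0,1)$ replaces the singular power-law kernel of the Caputo derivative by a nonsingular exponential kernel:
$$ {}^{CF}\!D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha} \int_0^t f'(s)\, \exp\!\left(-\frac{\alpha (t-s)}{1-\alpha}\right) \mathrm{d}s, $$
where $M(\alpha)$ is a normalization function with $M(0)=M(1)=1$.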
We propose a superconducting instability where microscopic supercurrent loops form spontaneously within a unit cell at the superconducting transition temperature with only uniform, onsite, and intra-orbital singlet pairing. As a result of the circulating currents, time-reversal symmetry is spontaneously broken in the superconducting state. Using Ginzburg-Landau theory, we describe in detail how these currents emerge in a toy model. We discuss the crystallographic symmetry requirements to realize such a state and show that they are met by the Re6X (X=Zr, Hf, Ti) family of time-reversal symmetry breaking, but otherwise seemingly conventional, superconductors. We estimate an upper bound for the resulting internal fields and find it to be consistent with recent muon-spin relaxation experiments.
Microlensing of stars places significant constraints on sub-planetary-mass compact objects, including primordial black holes, as dark matter candidates. As the lens' Einstein radius in the source plane becomes comparable to the size of the light source, however, source amplification is strongly suppressed, making it challenging to constrain lenses with a mass at or below $10^{-10}$ solar masses, i.e. asteroid-mass objects. Current constraints, using Subaru HSC observations of M31, assume a fixed source size of one solar radius. Here we point out that the actual stars in M31 bright enough to be used for microlensing are typically much larger. We correct the HSC constraints by constructing a source size distribution based on the M31 PHAT survey and on a synthetic stellar catalogue, and by correspondingly weighing the finite-size source effects. We find that the actual HSC constraints are weaker by up to almost three orders of magnitude in some cases, broadening the range of masses for which primordial black holes can be the totality of the cosmological dark matter by almost one order of magnitude.
Internet services now contribute a large fraction of worldwide carbon emissions, as an increasing number of companies provide such services and more and more developers build on them. Notably, service providers are trying to reduce their carbon emissions by utilizing on-site or off-site renewable energy in their datacenters in order to attract more customers. Despite these efforts, some users are aggressively calling for even cleaner Internet services. For example, over 500,000 Facebook users petitioned the social networking site to use renewable energy to power its datacenter. However, it seems impossible to satisfy such demand merely from the inside of production datacenters, considering the transition cost and stability. A party outside the existing Internet services, on the other hand, may easily set up a proxy service to attract those renewable-energy-sensitive users, by 1) using carbon-neutral or even over-offsetting cloud instances to bridge the end user and traditional Internet services; and 2) estimating and offsetting the carbon emissions of the traditional Internet services. In this paper, we propose GreenMail, a general IMAP proxy caching system that connects email users and traditional email services. GreenMail runs on green web hosts to cache users' emails on green cloud instances. Besides, it offsets the carbon emitted by traditional backend email services. With GreenMail, users can set a carbon emission constraint and use a traditional email service without any code modification on either the user side or the email server side.
We discuss an ongoing study of the connection between galaxy merging/interaction and AGN activity, based on integral field spectroscopy. We focus on the search for AGN ionization in the central regions of mergers, previously not classified as AGNs. We present here the science case, the current status of the project, and plans for future observations.
In this paper we revisit the idea of measuring the magnetic dipole moments of charm baryons and, in particular, of the charmed Lambda, by studying the spin precession induced by the strong effective magnetic field inside the channels of a bent crystal. We present a detailed sensitivity study showing the feasibility of such an experiment at the LHC in the coming years.
With the widespread use of biometric recognition, several issues related to the privacy and security provided by this technology have been recently raised and analysed. As a result, the early common belief among the biometrics community in template irreversibility has been proven wrong. It is now an accepted fact that it is possible to reconstruct from an unprotected template a synthetic sample that matches the bona fide one. This reverse engineering process, commonly referred to as \textit{inverse biometrics}, constitutes a severe threat for biometric systems from two different angles: on the one hand, sensitive personal data (i.e., biometric data) can be derived from compromised unprotected templates; on the other hand, other powerful attacks can be launched building upon these reconstructed samples. Given its important implications, biometric stakeholders have produced over the last fifteen years numerous works analysing the different aspects related to inverse biometrics: development of reconstruction algorithms for different characteristics; proposal of methodologies to assess the vulnerabilities of biometric systems to these algorithms; and development of countermeasures to reduce the possible effects of attacks. The present article is an effort to condense all this information in one comprehensive review of: the problem itself, the evaluation of the problem, and the mitigation of the problem.
Simulating thin and extended galactic disks has long been a challenge in computational astrophysics. We introduce the NIHAO-UHD suite of cosmological hydrodynamical simulations of Milky Way mass galaxies and study stellar disk properties such as stellar mass, size and rotation velocity, which agree well with observations of the Milky Way and local galaxies. In particular, the simulations reproduce the age-velocity dispersion relation and a multi-component stellar disk as observed for the Milky Way. Half of our galaxies show a double exponential vertical profile, while the others are well described by a single exponential model, which we link to the disk merger history. In all cases, mono-age populations follow a single exponential whose scale height varies monotonically with stellar age and radius. The scale length decreases with stellar age while the scale height increases. The general structure of the stellar disks is already set at the time of birth, as a result of the inside-out and upside-down formation. Subsequent evolution modifies this structure by increasing both the scale length and height of all mono-age populations. Thus, our results put tight constraints on how much dynamical memory stellar disks can retain over cosmological timescales. Our simulations demonstrate that it is possible to form thin galactic disks in cosmological simulations provided there are no significant stellar mergers at low redshifts. Most of the stellar mass is formed in-situ, with only a few percent ($\lesssim5\%$) brought in by merging satellites at early times. Redshift zero snapshots and halo catalogues are publicly available.
We present an exact solution of superstring theory that interpolates in time between an initial type 0 phase and a final phase whose physics is exactly that of the bosonic string. The initial theory is deformed by closed-string tachyon condensation along a lightlike direction. In the limit of large tachyon vev, the worldsheet conformal field theory precisely realizes the Berkovits-Vafa embedding of bosonic string theory into superstring theory. Our solution therefore connects the bosonic string dynamically with the superstring, settling a longstanding question about the relationship between the two theories.
How can we justify the validity of our computer security methods? This meta-methodological question is related to recent explorations on the science of computer security, which have been hindered by computer security's unique properties. We confront this by developing a taxonomy of properties and methods. Interdisciplinary foundations provide a solid grounding for a set of essential concepts, including a decision tree for characterizing adversarial interaction. Several types of invalidation and general ways of addressing them are described for technical methods. An interdisciplinary argument from theory explains the role that meta-methodological validation plays in the adversarial science of computer security.
We study the effect of a non-homogeneous out-of-plane magnetic field on the behaviour of 2D spatially indirect excitons. Due to the difference in the magnetic field acting on electrons and holes, a net Lorentz force affects the center-of-mass motion of an indirect exciton. Consequently, an indirect exciton acquires an effective charge proportional to the gradient of the magnetic field. The appearance of the Lorentz force causes a Hall effect for neutral bosons, which can be detected by measuring the spatially inhomogeneous blueshift of the photoluminescence in a counter-flow experiment.
Shortest paths in treespace, which represent minimal deformations between trees, are unique and can be computed in polynomial time. The ability to quickly compute shortest paths has enabled new approaches for statistical analysis of populations of trees and phylogenetic inference. This paper gives a new algorithm for updating geodesic paths when the end points are dynamic. Such algorithms will be especially useful when optimizing objectives that are functions of distances from a search point to other points, e.g., for finding a tree which has the minimum average distance to a collection of trees. Our method for updating treespace shortest paths is based on parametric sensitivity analysis of the maximum flow subproblems that are optimized when solving for a treespace geodesic.
Van der Waals heterostructures have recently garnered interest for application in high-performance photovoltaic materials. Consequently, understanding the basic electronic characteristics of these heterostructures is important for their utilisation in optoelectronic devices. The electronic structures and bond relaxation of two-dimensional (2D) Sb/transition metal dichalcogenide (TMD; MoSe2 and MoTe2) van der Waals heterostructures were systematically studied using the bond-charge (BC) correlation and hybrid density functional theory. We found that the Sb/MoSe2 and Sb/MoTe2 heterostructures had indirect band gaps of 0.701 and 0.808 eV, respectively; further, these heterostructures effectively modulated the band gaps of MoSe2 (1.463 eV) and MoTe2 (1.173 eV). The BC correlation revealed four bonding and electronic contributions (electron-hole, antibonding, nonbonding, and bonding states) of the heterostructures. Our results provide an in-depth understanding of the Sb/TMD van der Waals heterojunction, which can be utilised to design 2D metal/semiconductor-based devices.
We develop an embedded boundary method (EBM) to solve the two-phase incompressible flow with piecewise constant density. The front tracking method is used to track the interface. The fractional step methods are used to solve the incompressible Navier-Stokes equations while the EBM is used in the projection step to solve an elliptic interface problem for the pressure with a jump equal to the surface tension force across the interface. Several examples are used to verify the accuracy of the method.
Emission at far-infrared wavelengths makes up a significant fraction of the total light detected from galaxies over the age of the Universe. Herschel provides an opportunity for studying galaxies at the peak wavelength of their emission. Our aim is to provide a benchmark for models of galaxy population evolution and to test pre-existing models of galaxies. With the Herschel Multi-tiered Extragalactic Survey, HerMES, we have observed a number of fields of different areas and sensitivity using the SPIRE instrument on Herschel. We have determined the number counts of galaxies down to ~20 mJy. Our constraints from directly counting galaxies are consistent with, though more precise than, estimates from the BLAST fluctuation analysis. We have found a steep rise in the Euclidean-normalised counts at <100 mJy. We have directly resolved 15% of the infrared extragalactic background at the wavelength near where it peaks.
We consider sequential decision optimization in a periodic environment, which occurs in a wide variety of real-world applications where the data involve seasonality, such as the daily demand for drivers in ride-sharing and dynamic traffic patterns in transportation. In this work, we focus on learning the stochastic periodic world by leveraging this seasonal law. To deal with general action spaces, we use a bandit based on Gaussian processes (GP) as the base model, due to its flexibility and generality, and propose the Periodic-GP method, which combines a temporal periodic kernel with an upper confidence bound rule. Theoretically, we provide a new regret bound for the proposed method by explicitly characterizing the periodic kernel in the periodic stationary model. Empirically, the proposed algorithm significantly outperforms existing methods in both synthetic data experiments and a real data application to Madrid traffic pollution.
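To make the ingredients concrete, the sketch below combines the standard periodic kernel with a GP-UCB acquisition rule; it illustrates the general recipe under assumed toy parameters, not the exact Periodic-GP algorithm or its regret analysis.

```python
import numpy as np

def periodic_kernel(t1, t2, period=24.0, ell=1.0, var=1.0):
    """Standard periodic kernel: k(t,t') = var * exp(-2 sin^2(pi|t-t'|/p)/ell^2),
    a natural temporal kernel for seasonal data (e.g. a 24-hour cycle)."""
    d = np.abs(t1[:, None] - t2[None, :])
    return var * np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / ell ** 2)

def gp_ucb_score(t_obs, y_obs, t_cand, beta=2.0, noise=0.1):
    """GP posterior mean + beta * std at candidate times (UCB acquisition)."""
    K = periodic_kernel(t_obs, t_obs) + noise ** 2 * np.eye(len(t_obs))
    Ks = periodic_kernel(t_cand, t_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)
    var = periodic_kernel(t_cand, t_cand).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mu + beta * np.sqrt(np.maximum(var, 0.0))

rng = np.random.default_rng(0)
t_obs = rng.uniform(0, 72, size=30)                 # three "days" of observations
y_obs = np.sin(2 * np.pi * t_obs / 24) + 0.1 * rng.normal(size=30)
t_cand = np.linspace(0, 24, 97)
t_next = t_cand[np.argmax(gp_ucb_score(t_obs, y_obs, t_cand))]
print(f"next query time: {t_next:.2f} h")
```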
With the development of ultra-intense laser technology, MeV ions from laser-foil interactions have been obtained through different mechanisms, such as target normal sheath acceleration, radiation pressure acceleration, collisionless shock acceleration, breakout afterburner, and combinations of different mechanisms. These energetic ion beams can be applied in fast ignition for inertial confinement fusion, medical therapy, and proton imaging. However, these ions are mainly accelerated in the laser propagation direction, and ion acceleration in an azimuthal orientation is scarcely mentioned. Here, a doughnut Laguerre-Gaussian (LG) laser is used for the first time to study the laser-plasma interaction in the relativistic intensity regime in three-dimensional particle-in-cell simulations. Studies have shown that a novel rotation of the plasma is produced by the hollow screw-like drill of an LG mode laser. The angular momentum of the protons in the longitudinal direction produced by the LG laser is remarkably enhanced compared with that produced by the usual laser pulses, such as linearly and circularly polarized Gaussian pulses. Moreover, the particles, including electrons and ions, can be trapped and uniformly compressed in the dark central minimum of the doughnut LG pulse. Such a hollow structured LG laser may be used to investigate some difficult problems, such as screw-like drilling in inertial confinement fusion, laser-driven particle acceleration, and pulsars in the astrophysical environment.
We construct $C^\infty$ solutions to the one-dimensional nonlinear wave equation $$ u_{tt} - u_{xx} - \tfrac{2(p+2)}{p^2} |u|^p u=0 \quad \text{with} \quad p>0 $$ that blow up on any prescribed uniformly space-like $C^\infty$ hypersurface. As a corollary, we show that smooth solutions can blow up (at the first instant) on an arbitrary compact set. We also construct solutions that blow up on general space-like $C^k$ hypersurfaces, but only when $4/p$ is not an integer and $k > (3p+4)/p$.
The contribution contains the preface to the Proceedings of the 23rd International Workshop "What Comes Beyond the Standard Models", July 04 -- July 12, 2020, Bled, Slovenia, [Virtual Workshop -- July 6.--10. 2020], Volume 1: Invited Talks and Volume 2: Further Talks And Scientific Debuts, published in Bled workshops in physics, Vol.21, No. 1 and 2, DMFA-Zalo\v{z}nistvo, Ljubljana, Dec. 2020, links to (most of) the published contributions, a section (by M.Yu. Khlopov) on VIA and the virtual conference at Bled 2020, and two poems by Astri Kleppe.
The ubiquity of distributed machine learning (ML) in sensitive public domain applications calls for algorithms that protect data privacy while being robust to faults and adversarial behaviors. Although privacy and robustness have been extensively studied independently in distributed ML, their synthesis remains poorly understood. We present the first tight analysis of the error incurred by any algorithm ensuring robustness against a fraction of adversarial machines, as well as differential privacy (DP) for honest machines' data against any other curious entity. Our analysis exhibits a fundamental trade-off between privacy, robustness, and utility. To prove our lower bound, we consider the case of mean estimation, subject to distributed DP and robustness constraints, and devise reductions to centralized estimation of one-way marginals. We prove our matching upper bound by presenting a new distributed ML algorithm using a high-dimensional robust aggregation rule. The latter amortizes the dimension dependence of the error (caused by adversarial workers and DP) while remaining agnostic to the statistical properties of the data.
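To make the setting concrete, the two constraints can be combined as follows (an illustrative combination of standard ingredients, not the paper's algorithm; the clipping bound, noise scale, and trimming fraction are assumptions of the example): each worker releases a clipped, Gaussian-noised local mean, and the server aggregates with a coordinate-wise trimmed mean.

```python
# Sketch: DP-noised local means + robust (trimmed-mean) aggregation.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(1)
n_workers, n_bad, d = 20, 3, 5
C, sigma = 1.0, 0.5                       # clipping bound and DP noise scale (assumed)

local_means = rng.normal(0.3, 0.1, size=(n_workers, d))  # honest local estimates
local_means[:n_bad] = 100.0                              # adversarial workers

norms = np.linalg.norm(local_means, axis=1, keepdims=True)
clipped = local_means * np.minimum(1.0, C / norms)       # clip each mean to norm <= C
released = clipped + sigma * rng.standard_normal((n_workers, d))  # Gaussian DP noise

estimate = trim_mean(released, proportiontocut=0.2, axis=0)  # drop 20% extremes per coord
print(estimate)
```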
Post-common-envelope binary systems form when matter is transferred from the primary star at a rate that cannot be accommodated by its secondary companion. A common envelope forms, which is subsequently ejected, resulting in a system with a binary period frequently between 2 and 3 hours. Where circumbinary companions are predicted, it remains unclear whether they form before or after the common envelope ejection. Based on observations of eclipse time variations (ETVs), exoplanet databases, e.g. the NASA Exoplanet Archive, typically list a dozen systems with confirmed circumbinary planets. Here we examine seven of these systems, discuss other possible causes, and consider whether, for these dynamic systems, the ETV methodology is a reliable indicator of planetary companions. The systems selected were those where we could determine precise eclipse timings, free from significant extraneous effects such as pulsations; we present 163 new times of minima, permitting us to test existing models. Over thirty circumbinary models have been proposed for these seven systems, and we note that all, other than the latest model for NY Vir which remains to be fully tested, fail within a year to accurately predict eclipse times. In examining alternative mechanisms, we find that magnetic effects could contribute significantly in two of the seven systems studied. We conclude that the structure of these dynamic systems, with their extreme temperature differences and small binary separations, is not fully understood and that many factors may contribute to the observed ETVs.
Strongly lensed systems with peculiar configurations allow us to probe the local properties of the deflecting lens mass while simultaneously testing general profile assumptions. The quasar HE0230$-$2130 is lensed by two galaxies at similar redshifts ($\Delta z \sim 0.003$) into four observed images. Using modeled quasar positions from fitting the brightness of the quasar images in ground-based imaging data from the Magellan telescope, we find that lens-mass models where each of these two galaxies is parametrized with a singular power-law (PL) profile predict five quasar images. To interpret the quad configuration of the system, we tested 12 different profile assumptions with the aim of obtaining lens-mass models that correctly predict only the four observed images. We tested the effects of adopting: cored profiles for the lensing galaxies; external shear; and additional profiles to represent a dark matter clump. We find that half of our model classes can produce the correct image multiplicity. By comparing the Bayesian evidence of different model parametrizations, we favor two model classes: (i) one that incorporates two singular PL profiles for the lensing galaxies and a cored isothermal sphere in the region of the previously predicted fifth image (rNIS profile), and (ii) one with the larger lensing galaxy parametrized by a singular PL profile and the smaller galaxy by a cored PL profile with external shear. We estimated the mass of the rNIS clump for each candidate model of our final Markov chain Monte Carlo sample, and found that only 2\% lie in the range $10^6 M_{\odot} \leq M_{\rm rNIS}\leq 10^9 M_{\odot}$, which is the predicted mass range of dark matter subhalos in cold dark matter simulations, or the mass of dark-matter-dominated and low-surface-brightness galaxies. We therefore favor the models with a cored mass distribution for the lens galaxy close to the predicted fifth image.
This paper deals with Bayesian inference of a mixture of Gaussian distributions. A novel formulation of the mixture model is introduced, which includes the prior constraint that each Gaussian component is always assigned a minimal number of data points. This enables noninformative improper priors such as the Jeffreys prior to be placed on the component parameters. We demonstrate difficulties involved in specifying a prior for the standard Gaussian mixture model, and show how the new model can be used to overcome these. MCMC methods are given for efficient sampling from the posterior of this model.
It is difficult to extract reliable criteria for causal locality from the limited ingredients found in textbook quantum theory. In the end, Bell humbly warned that his eponymous theorem was based on criteria that "should be viewed with the utmost suspicion." Remarkably, by stepping outside the wave-function paradigm, one can reformulate quantum theory in terms of old-fashioned configuration spaces together with 'unistochastic' laws. These unistochastic laws take the form of directed conditional probabilities, which turn out to provide a hospitable foundation for encoding microphysical causal relationships. This unistochastic reformulation provides quantum theory with a simpler and more transparent axiomatic foundation, plausibly resolves the measurement problem, and deflates various exotic claims about superposition, interference, and entanglement. Making use of this reformulation, this paper introduces a new principle of causal locality that is intended to improve on Bell's criteria, and shows directly that systems that remain at spacelike separation cannot exert causal influences on each other, according to that new principle. These results therefore lead to a general hidden-variables interpretation of quantum theory that is arguably compatible with causal locality.
We investigate Dirac fermions in the antiferromagnetic metallic state of iron-based superconductors. Deriving an effective Hamiltonian for the Dirac fermions, we reveal that there exist two Dirac cones carrying the same chirality, contrary to graphene, compensated by a Fermi surface with a quadratic energy dispersion as a consequence of a non-trivial topological property inherent in the band structure. We also find that the presence of the Dirac fermions gives rise to different sign-change temperatures for the Hall coefficient and the thermopower. This is consistent with available experimental data.
We investigate the edge conductance of particles subjected to an Iwatsuka magnetic field, which plays the role of a purely magnetic barrier. We also consider magnetic guides generated by generalized Iwatsuka potentials. In both cases we prove quantization of the edge conductance. Next, we consider magnetic perturbations of such magnetic barriers or guides and prove stability of the quantized value of the edge conductance. Further, we establish a sum rule for edge conductances. Regularization within the context of disordered systems is discussed as well.
According to standard quantum theory, the time evolution operator of a quantum system is independent of the state of the system. One can, however, consider systems in which this is not the case: the evolution operator may depend on the density operator itself. The presence of such modifications of quantum theory can be tested in long-baseline oscillation experiments.
Recent advances have shown that the implicit bias of gradient descent on over-parameterized models enables the recovery of low-rank matrices from linear measurements, even with no prior knowledge of the intrinsic rank. In contrast, for robust low-rank matrix recovery from grossly corrupted measurements, over-parameterization leads to overfitting without prior knowledge of both the intrinsic rank and the sparsity of the corruption. This paper shows that with a double over-parameterization, for both the low-rank matrix and the sparse corruption, gradient descent with discrepant learning rates provably recovers the underlying matrix even without prior knowledge of either the rank of the matrix or the sparsity of the corruption. We further extend our approach to the robust recovery of natural images by over-parameterizing images with deep convolutional networks. Experiments show that our method handles different test images and varying corruption levels with a single learning pipeline where the network width and termination conditions do not need to be adjusted on a case-by-case basis. Underlying the success is again the implicit bias with discrepant learning rates on different over-parameterized parameters, which may bear on broader applications.
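As a toy illustration of the double over-parameterization (a sketch under our own simplifying assumptions: fully observed matrix, square factors, hand-picked step sizes; not the paper's exact algorithm), one can factor the low-rank part as $UV^\top$ and the sparse part as $g\circ g - h\circ h$, then run plain gradient descent with two different step sizes:

```python
# Sketch: double over-parameterization with discrepant learning rates.
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 3
L_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # low-rank truth
S_star = np.where(rng.random((n, n)) < 0.05, 5.0, 0.0)              # sparse corruption
Y = L_star + S_star

U = 1e-3 * rng.standard_normal((n, n))   # over-parameterized: full-size factors
V = 1e-3 * rng.standard_normal((n, n))
g = 1e-3 * np.ones((n, n))
h = 1e-3 * np.ones((n, n))

lr_uv, lr_gh = 0.002, 0.02               # discrepant step sizes, chosen by hand here
for _ in range(2000):
    E = U @ V.T + g * g - h * h - Y      # residual of the current fit
    U, V = U - lr_uv * (E @ V), V - lr_uv * (E.T @ U)
    g, h = g - lr_gh * (2 * E * g), h + lr_gh * (2 * E * h)

# Relative recovery error of the low-rank part; small if the steps are in regime.
print(np.linalg.norm(U @ V.T - L_star) / np.linalg.norm(L_star))
```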
In this paper we offer a review and bibliography of work on Hankel low-rank approximation and completion, with particular emphasis on how this methodology can be used for time series analysis and forecasting. We begin by describing possible formulations of the problem and offer commentary on related topics and challenges in obtaining globally optimal solutions. Key theorems are provided, and the paper closes with some expository examples.
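As one concrete instance of this methodology (a minimal sketch, assuming the classical Cadzow alternating-projection scheme rather than any specific algorithm surveyed in the paper), a noisy series can be denoised by alternating rank truncation with Hankel re-averaging:

```python
# Sketch: Cadzow iteration -- alternate low-rank truncation and Hankel averaging.
import numpy as np

def hankel(x, L):
    """L x (len(x)-L+1) Hankel matrix of the series x."""
    n = len(x)
    return np.array([x[i:i + n - L + 1] for i in range(L)])

def dehankel(H):
    """Average anti-diagonals back into a series."""
    L, K = H.shape
    x = np.zeros(L + K - 1)
    counts = np.zeros_like(x)
    for i in range(L):
        x[i:i + K] += H[i]
        counts[i:i + K] += 1
    return x / counts

def cadzow(x, L, rank, n_iter=50):
    y = x.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(hankel(y, L), full_matrices=False)
        H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r truncation
        y = dehankel(H_r)                            # project back to Hankel structure
    return y

t = np.arange(200)
noisy = np.sin(0.2 * t) + 0.3 * np.random.default_rng(2).standard_normal(200)
denoised = cadzow(noisy, L=50, rank=2)               # a sinusoid has Hankel rank 2
```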
Edge states are studied for the two-dimensional Dirac equation in a circular geometry. The properties of the two-component electromagnetic field are discussed in terms of the three-component polarization field, which can form a vortex structure near the Dirac node with a vorticity changing with the sign of the Dirac mass. The Berry curvature of the polarization field is related to the Berry curvature of the Dirac spinor state. This quantity is sensitive to a change of boundary conditions. In particular, it vanishes for a geometry with a single boundary but not for a geometry with two boundaries. This effect is robust against the creation of a step-like edge inside the sample.
Measurements and data analysis have proved very effective in the study of the Internet's physical fabric and have shown heterogeneities and statistical fluctuations extending over several orders of magnitude. Here we analyze performance measurements obtained by the PingER monitoring infrastructure. We focus on the relationship between the Round-Trip-Time (RTT) and the geographical distance. We define dimensionless variables that contain information on the quality of Internet connections, finding that their probability distributions are characterized by a slow power-law decay, signalling the presence of scale-free features. These results point out the extreme heterogeneity of the Internet, since the transmission speed between different points of the network exhibits very large fluctuations. The associated scaling exponents appear to have fairly stable values in different data sets and thus define an invariant characteristic of the Internet that might be used in the future as a benchmark of the overall state of ``health'' of the Internet. The observed scale-free character should be incorporated in models and analysis of Internet performance.
In this paper, the theoretical terms of contemporary cosmology are examined as intellectual artefacts. An ontology and methodology are introduced for this purpose, which includes defining the concept of a hypothetical object. Introducing a hypothetical object is contrasted with the modification of physical laws as alternative ways of explaining the discrepancy between observations and theoretical predictions. Historical examples of theory choice, which involved these alternatives, are discussed. This is followed by a study of theory choice in contemporary cosmology. In particular, the focus is on the case of dark matter and modified gravity as alternative explanations for observed mass discrepancies in galaxies and galaxy clusters. These alternatives are analyzed, and their similarities and differences to the historical examples are pointed out.
The use of geometric invariants has recently played an important role in the solution of classification problems in non-commutative ring theory. We construct geometric invariants of non-commutative projectivizations, a significant class of examples in non-commutative algebraic geometry.
We report on simulations of capillary filling of highly wetting fluids in nano-channels with and without obstacles. We use atomistic (molecular dynamics) and hydrokinetic (lattice-Boltzmann) approaches, which provide clear evidence of the formation of thin precursor films moving ahead of the main capillary front. The dynamics of the precursor films is found to obey the same square-root law as the main capillary front, z^2(t) ~ t, although with a larger prefactor, which we find to take the same value for the different geometries (2D-3D) under inspection. The two methods show a quantitative agreement which indicates that the formation and propagation of thin precursors can be handled at a mesoscopic/hydrokinetic level. This can be considered a validation of the lattice-Boltzmann (LB) method and opens the possibility of using hydrokinetic methods to explore space-time scales and complex geometries of direct experimental relevance. The LB approach is then used to study the fluid behaviour in a nano-channel when the precursor film encounters a square obstacle. A complete parametric analysis is performed, which suggests that thin-film precursors may have an important influence on the efficiency of nanochannel-coating strategies.
Unsupervised domain adaptive person re-identification (Re-ID) methods alleviate the burden of data annotation by generating pseudo supervision messages. However, real-world Re-ID systems, with continuously accumulating data streams, simultaneously demand more robust adaptation and anti-forgetting capabilities. Methods based on image rehearsal address the forgetting issue with limited extra storage but carry the risk of privacy leakage. In this work, we propose a Color Prompting (CoP) method for data-free continual unsupervised domain adaptive person Re-ID. Specifically, we employ a lightweight prompter network to fit the color distribution of the current task together with Re-ID training. For incoming new tasks, the learned color distribution then serves as color style transfer guidance to transfer images into past styles. CoP achieves accurate color style recovery for past tasks with adequate data diversity, leading to superior anti-forgetting effects compared with image rehearsal methods. Moreover, CoP demonstrates strong generalization performance for fast adaptation to new domains, given only a small amount of unlabeled images. Extensive experiments demonstrate that after the continual training pipeline, the proposed CoP achieves 6.7% and 8.1% average rank-1 improvements over the replay method on seen and unseen domains, respectively. The source code for this work is publicly available at https://github.com/vimar-gu/ColorPromptReID.
Let $\Gamma$ be a surface group of higher genus. Let $\rho_0: \Gamma \to {PGL}(V)$ be a discrete faithful representation with image contained in the natural embedding of ${SL}(2, {\mathbb R})$ in ${PGL}(3, {\mathbb R})$ as a group preserving a point and a disjoint projective line in the projective plane. We prove that such a representation is $(G,Y)$-Anosov (following the terminology of \cite{labourieanosov}), where $Y$ is the frame bundle. More generally, we prove that all the deformations $\rho: \Gamma \to {PGL}(3, {\mathbb R})$ studied in \cite{barflag} are $(G,Y)$-Anosov. As a corollary, we obtain all the main results of \cite{barflag}, and extend them to any small deformation of $\rho_0$, not necessarily preserving a point or a projective line in the projective space: in particular, there is a $\rho(\Gamma)$-invariant solid torus $\Omega$ in the flag variety. The quotient space $\rho(\Gamma)\backslash\Omega$ is a flag manifold, naturally equipped with two 1-dimensional transversely projective foliations arising from the projections of the flag variety on the projective plane and its dual; if $\rho$ is strongly irreducible, these foliations are not minimal. More precisely, if one of these foliations is minimal, then it is topologically conjugate to the strong stable foliation of a double covering of a geodesic flow, and $\rho$ preserves a point or a projective line in the projective plane. All these results hold for any $(G,Y)$-Anosov representation which is not quasi-Fuchsian, i.e., does not preserve a strictly convex domain in the projective plane.
We introduce a new technique for gradient normalization during neural network training. The gradients are rescaled during the backward pass using normalization layers introduced at certain points within the network architecture. These normalization nodes do not affect forward activity propagation, but they modify the backpropagation equations to permit a well-scaled gradient flow that reaches the deepest network layers without vanishing or exploding. Tests with very deep neural networks show that the new technique can effectively control the gradient norm, allowing the update of weights in the deepest layers and improving network accuracy under several experimental conditions.
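One plausible realization of such a node (a minimal sketch, assuming unit-norm rescaling of the incoming gradient; the paper's exact rescaling rule may differ) is an identity layer whose backward pass renormalizes the gradient:

```python
# Sketch: identity in the forward pass, gradient renormalization in the backward pass.
import torch

class GradNorm(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)                       # forward activity is untouched

    @staticmethod
    def backward(ctx, grad_out):
        # Rescale the gradient to unit norm (assumed rule) to stop vanishing/explosion.
        return grad_out / (grad_out.norm() + 1e-12)

x = torch.randn(8, 16, requires_grad=True)
y = GradNorm.apply(x).sum() * 1e6                 # huge upstream gradient on purpose
y.backward()
print(x.grad.norm())                              # ~1.0 regardless of the 1e6 factor
```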
We determine the pre-asymptotic critical behavior at the quantum ferromagnetic transition in strongly disordered metals. We find that it is given by effective power laws, in contrast to the previously analyzed asymptotic critical behavior, which is valid only in an unobservably small region. The consequences for analyzing experiments are discussed, in particular ways to distinguish between critical behavior and Griffiths-phase effects.
We give a self-contained algebraic description of a formal symplectic groupoid over a Poisson manifold M. To each natural star product on M we then associate a canonical formal symplectic groupoid over M. Finally, we construct a unique formal symplectic groupoid `with separation of variables' over an arbitrary Kaehler-Poisson manifold.
We study dynamics of two-dimensional N=(0,1) supersymmetric gauge theories. In particular, we propose that there is an infrared triality between certain triples of theories with orthogonal and symplectic gauge groups. The proposal is supported by matching of anomalies and elliptic genera. This triality can be viewed as a (0,1) counterpart of the (0,2) triality proposed earlier by two of the authors and A. Gadde. We also describe the relation between global anomalies in gauge theoretic and sigma-model descriptions, filling in a gap in the present literature.
Accurate traffic forecasting is challenging due to complex dependencies on road networks, various road types, and abrupt speed changes caused by events. Recent works mainly focus on dynamic spatial modeling with adaptive graph embedding or graph attention, with less consideration of temporal characteristics and in-situ modeling. In this paper, we propose a novel deep learning model named TESTAM, which individually models recurring and non-recurring traffic patterns by a mixture-of-experts model with three experts: temporal modeling, spatio-temporal modeling with a static graph, and dynamic spatio-temporal dependency modeling with a dynamic graph. By introducing different experts and routing them properly, TESTAM can better model various circumstances, including spatially isolated nodes, highly related nodes, and recurring and non-recurring events. For proper routing, we reformulate the gating problem as a classification problem with pseudo labels. Experimental results on three public traffic network datasets, METR-LA, PEMS-BAY, and EXPY-TKY, demonstrate that TESTAM better captures and models recurring and non-recurring traffic. The official code is published at https://github.com/HyunWookL/TESTAM
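The routing idea can be illustrated schematically (our own sketch, not the TESTAM implementation: the experts are stand-in linear layers, and the pseudo label is simply the index of the expert with the lowest per-sample error):

```python
# Sketch: training a gate by classification against pseudo labels
# (pseudo label = expert with the lowest per-sample loss).
import torch, torch.nn as nn

d_in, d_out, n_experts = 16, 4, 3
experts = nn.ModuleList([nn.Linear(d_in, d_out) for _ in range(n_experts)])
gate = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(), nn.Linear(32, n_experts))
opt = torch.optim.Adam(list(experts.parameters()) + list(gate.parameters()), lr=1e-3)

x, y = torch.randn(64, d_in), torch.randn(64, d_out)
for _ in range(100):
    preds = torch.stack([e(x) for e in experts], dim=1)    # (B, E, d_out)
    per_expert = ((preds - y.unsqueeze(1)) ** 2).mean(-1)  # (B, E) per-sample losses
    pseudo = per_expert.argmin(dim=1).detach()             # best expert as pseudo label
    loss = nn.functional.cross_entropy(gate(x), pseudo) + per_expert.mean()
    opt.zero_grad(); loss.backward(); opt.step()

# At inference: route each sample to the expert chosen by the gate.
with torch.no_grad():
    choice = gate(x).argmax(dim=1)
    out = torch.stack([e(x) for e in experts], dim=1)[torch.arange(len(x)), choice]
```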
Connectivity and reachability on temporal networks, which can describe the spreading of a disease, the dissemination of information, or the accessibility of a public transport system over time, have been among the main contemporary areas of study in complex systems for the last decade. However, while isotropic percolation theory successfully describes connectivity in static networks, a similar description has not yet been developed for temporal networks. Here we address this problem and formalize a mapping of the concept of temporal network reachability to percolation theory. We show that limited-waiting-time reachability, a generic notion of constrained connectivity in temporal networks, displays a directed percolation phase transition in connectivity. Consequently, the critical percolation properties of spreading processes on temporal networks can be estimated by a set of known exponents characterising the directed percolation universality class. This result is robust across a diverse set of temporal network models with different temporal and topological heterogeneities, and using our methodology we uncover similar reachability phase transitions in real temporal networks as well. These findings open up an avenue to apply the theory, concepts, and methodology of the well-developed directed percolation literature to temporal networks.
In this work, we leverage atomistic spin-lattice simulations to examine how magnetic interactions impact the propagation of sound waves through a ferromagnetic material. To achieve this, we characterize the sound wave velocity in BCC iron, a prototypical ferromagnetic material, using three different approaches based on the oscillations of kinetic energy, finite-displacement derived forces, and corrections to the elastic constants, respectively. Successfully applying these methods within the spin-lattice framework, we find good agreement with the Simon effect, including higher-order terms. In analogy to experiments, morphic coefficients associated with the transverse and longitudinal waves propagating along the [001] direction are extracted from fits to the fractional change in velocity data. The present efforts represent an advancement in magnetoelastic modelling capabilities which can expedite the design of future magneto-acoustic devices.
This paper presents inference rules for the Resource Description Framework (RDF), RDF Schema (RDFS) and the Web Ontology Language (OWL). Our formalization is based on Notation 3 Logic, which extends RDF with logical symbols and creates a Semantic Web logic for deductive RDF graph stores. We also propose OWL-P, a lightweight formalism of OWL that supports soft inferences by omitting complex language constructs.
There are different approaches to diffractive photoproduction of charmonia. Recently, a new approach has been proposed, in which charm quarks are taken as heavy quarks and the nonperturbative effects related to charmonia can be handled with nonrelativistic QCD. The interaction between the $c\bar c$ pair and the initial hadron proceeds through the exchange of soft gluons. This exchange of soft gluons can be studied with heavy quark effective theory, employing an expansion in the inverse of the charm quark mass $m_c$. In this approach a simple formula for the S-matrix can be derived by neglecting higher orders in $m_c^{-1}$ and relativistic corrections related to charmonia. The S-matrix is related to the usual gluon distribution $g(x)$ at small $x$. This result differs from those of other approaches. When confronted with experiment, the result does not agree with measurements because of large errors from higher orders in $m_c^{-1}$ and from relativistic corrections. Nevertheless, the ratio of the cross sections of $J/\psi$ and $\psi(2S)$ can be predicted more precisely than the cross sections themselves. In this letter we show that the ratio predicted in this approach, with an estimation of relativistic corrections, is in good agreement with the recent measurement at HERA.
The $\Omega$-phase of the liquid sodium $\alpha$-$\Omega$ dynamo experiment at NMIMT, in cooperation with LANL, has successfully demonstrated the production of a high toroidal field, $B_{\phi} \simeq 8\times B_r$, from the radial component of an applied poloidal magnetic field, $B_r$. This enhanced toroidal field is produced by rotational shear in stable Couette flow within liquid sodium at $Rm \simeq 120$. The small turbulence in stable Taylor-Couette flow is caused by Ekman flow, where $(\delta v/v)^2 \sim 10^{-3}$. This high $\Omega$-gain in low-turbulence flow contrasts with the smaller $\Omega$-gain in higher-turbulence, Helmholtz-unstable shear flows. This result supports the ansatz that large-scale astrophysical magnetic fields are created within semi-coherent large-scale motions in which turbulence plays only a smaller, diffusive role that enables magnetic flux linkage.
We present a method to calculate directly the K-matrices for the pion electro-production processes in the framework of chiral quark models, which allows for a clean separation of the resonant amplitudes from the background. The method is applied to the calculation of the multipole amplitudes M_{1+}, E_{1+}, and S_{1+} in the Delta channel within the Cloudy Bag Model. A good overall description is found in a broad energy range.
The evaluation of large language models is an essential task in the field of language understanding and generation. As language models continue to advance, the need for effective benchmarks to assess their performance has become imperative. In the context of Traditional Chinese, there is a scarcity of comprehensive and diverse benchmarks to evaluate the capabilities of language models, despite the existence of certain benchmarks such as DRCD, TTQA, CMDQA, and the FGC dataset. To address this gap, we propose a novel set of benchmarks that leverage existing English datasets and are tailored to evaluate language models in Traditional Chinese. These benchmarks encompass a wide range of tasks, including contextual question-answering, summarization, classification, and table understanding. The proposed benchmarks offer a comprehensive evaluation framework, enabling the assessment of language models' capabilities across different tasks. In this paper, we evaluate the performance of GPT-3.5, Taiwan-LLaMa-v1.0, and Model 7-C, our proprietary model, on these benchmarks. The evaluation results highlight that our model, Model 7-C, achieves performance comparable to GPT-3.5 on a subset of the evaluated capabilities. In an effort to advance the evaluation of language models in Traditional Chinese and stimulate further research in this field, we have open-sourced our benchmark and opened the model for trial.
Most prostate cancer survivors are confronted with disease-related and treatment-related side effects that impact their quality of life. A tool that combines specific physical activity coaching with the promotion of a healthy lifestyle and self-management guidance might be a successful way to promote lifestyle change in these patients. As a prerequisite for useful health technology, it is important to follow a design process centred on the patients. The aim of this study was to investigate the context of the problem and the user needs in order to support the ideation of a low-fidelity prototype of a tool to promote a healthy lifestyle among early-stage prostate cancer survivors. A user-centred design approach was followed, involving a multidisciplinary team. The prototype was developed in 3 phases. In phase 1, the context was studied with 2 systematic reviews of the state of practice and consultations with 3 specialists in oncology, resulting in a global use case and main requirements. In phase 2, the needs and barriers of the users were studied based on literature research and validated with 3 specialists, resulting in the creation of 3 personas. In phase 3, 2 sessions were held to ideate and prioritize possible app features, based on brainstorming and selection techniques. Using the Ninja Mock and Proto.io software, a low-fidelity prototype was developed, resulting in 25 interactive screens. Understanding the user needs and context seems essential to highlight key goals, hence facilitating the bridge between the ideation of the tool and the intended users' tasks and experiences. The conclusion of this first stage of the design process provides valuable details (such as the users' barriers to technology and to physical activity) for future design iterations of the mobile app.
We study the non-equilibrium transport properties of a one-dimensional array of dissipative quantum dots. Using the Keldysh formalism, we show that the dots' dissipative nature leads to a spatial variation of the chemical potential, which, in disordered arrays, breaks the invariance of the current, I, under bias reversal. Moreover, the array's nanoscopic size results in an algebraic low-temperature dependence of I. Finally, we show that a local Coulomb interaction splits the dots' electronic levels, resulting in a Coulomb blockade, which is softened with increasing dissipation and array size.
We present measurements of bias triangles in several biasing configurations. Thorough analysis of the data allows us to present data from all four possible bias configurations on a single plot in chemical-potential space. This presentation allows comparisons between different biasing directions to be made in a clean and straightforward manner. Our analysis and presentation will prove useful in demonstrations of Pauli spin blockade, where comparisons between different biasing directions are paramount. The long-term stability of the CMOS-compatible, Si/SiO2-only architecture underpins the success of this analysis. We also propose a simple variation of this analysis that will extend its use to systems lacking the long-term stability of these devices.
This paper presents a novel method for the unsupervised segmentation of pathology images. Staging of lung cancer is a major factor of prognosis, and measuring the maximum dimensions of the invasive component in a pathology image is an essential task. Therefore, image segmentation methods for visualizing the extent of invasive and noninvasive components on pathology images could support pathological examination. However, it is challenging for most recent segmentation methods, which rely on supervised learning, to cope with unlabeled pathology images. In this paper, we propose a unified approach to unsupervised representation learning and clustering for pathology image segmentation. Our method consists of two phases. In the first phase, we learn feature representations of training patches from a target image using spherical k-means. The purpose of this phase is to obtain cluster centroids which can be used as filters for feature extraction. In the second phase, we apply conventional k-means to the representations extracted by the centroids and then project the cluster labels onto the target images. We evaluated our method on pathology images of lung cancer specimens. Our experiments showed that the proposed method outperforms traditional k-means segmentation and the multithreshold Otsu method both quantitatively and qualitatively, with an improved normalized mutual information (NMI) score of 0.626 compared to 0.168 and 0.167, respectively. Furthermore, we found that the centroids can be applied to the segmentation of other slices from the same sample.
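The two phases can be sketched as follows (a minimal illustration with assumed patch size, filter count, and segment count; the actual method's sampling and feature-extraction details may differ):

```python
# Sketch: spherical k-means centroids as filters, then k-means on the responses.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
image = rng.random((256, 256))                  # stand-in for a pathology slide
p, n_filters, n_segments = 8, 32, 3             # patch size, centroids, segments (assumed)

# Phase 1: spherical k-means = k-means on unit-normalized patches.
ys, xs = rng.integers(0, 256 - p, (2, 5000))
patches = np.stack([image[y:y+p, x:x+p].ravel() for y, x in zip(ys, xs)])
centroids = KMeans(n_clusters=n_filters, n_init=10).fit(normalize(patches)).cluster_centers_

# Phase 2: per-block features = centroid responses of the local patch; cluster them.
coords = [(y, x) for y in range(0, 256 - p, p) for x in range(0, 256 - p, p)]
feats = normalize(np.stack([image[y:y+p, x:x+p].ravel() for y, x in coords])) @ centroids.T
labels = KMeans(n_clusters=n_segments, n_init=10).fit_predict(feats)

segmentation = np.zeros((256, 256), dtype=int)
for (y, x), lab in zip(coords, labels):
    segmentation[y:y+p, x:x+p] = lab            # project cluster labels back to the image
```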
Cellular hit probabilities of alpha particles emitted by inhaled radon progeny in sensitive bronchial epithelial cell nuclei were simulated at low exposure levels to obtain useful data for the rejection of, or in support of, the linear-non-threshold (LNT) hypothesis. In this study, local distributions of deposited inhaled radon progeny in airway bifurcation models were computed under exposure conditions characteristic of homes and uranium mines. Then, maximum local deposition enhancement factors at bronchial airway bifurcations, expressed as the ratio of local to average deposition densities, were determined to characterize the inhomogeneity of deposition and to elucidate their effect on the resulting hit probabilities. The results obtained suggest that in the vicinity of the carinal regions of the central airways the probability of multiple hits can be quite high even at low average doses. Assuming a uniform distribution of activity, there are practically no multiple hits, and the hit probability as a function of dose exhibits a linear shape in the low-dose range. The results are quite the opposite in the case of the hot spots revealed by realistic deposition calculations, where practically all cells receive multiple hits and the hit probability as a function of dose is non-linear in the average dose range of 10-100 mGy.
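The contrast between the uniform and hot-spot cases can be made quantitative with a simple Poisson hit model (our own illustrative reasoning, not the simulation used in the paper): if a nucleus receives on average $\lambda$ alpha-particle traversals, then
$$ P(\text{multiple hits}) = 1 - e^{-\lambda} - \lambda e^{-\lambda} \approx \tfrac{\lambda^2}{2} \quad (\lambda \ll 1), $$
so for a uniform activity distribution with $\lambda \ll 1$ multiple hits are negligible and the single-hit probability $1-e^{-\lambda}\approx\lambda$ grows linearly with dose, whereas a deposition enhancement factor that multiplies the local $\lambda$ by one or two orders of magnitude drives $P(\text{multiple hits})$ toward unity.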
This paper continues our study of radio pulsar emission-beam configurations with the primary intent of extending the study to the lowest possible frequencies. Here we focus on a group of 133 more recently discovered pulsars, most of which were included in the (100-200 MHz) LOFAR High Band Survey, observed with Arecibo at 1.4 GHz and 327 MHz, and some observed at decameter wavelengths. Our analysis framework is the core/double-cone beam model, and we took the opportunity to apply it as widely as possible, both conceptually and quantitatively, while highlighting situations where modeling is difficult or where its premises may be violated. In the great majority of pulsars, beam forms consistent with the core/double-cone model were identified. Moreover, we found that each pulsar's beam structure remained largely constant over the frequency range available; where profile variations were observed, they were attributable to different component spectra and in some instances to varying conal beam sizes. As an Arecibo population, many or most of the objects fall in the Galactic anticenter region and/or at high Galactic latitudes, so overall the group includes a number of nearer, older pulsars. We found a number of interesting or unusual characteristics in some of the pulsars that would benefit from additional study. The scattering levels encountered for this group are low to moderate, apart from a few pulsars lying in directions more toward the inner Galaxy.
A heuristic hypothesis about the domination of Bose-Einstein statistics in the early Universe is suggested. The possibility of Bose-Einstein condensation (BEC) of primordial baryon-antibaryon pairs is considered. According to this postulate, enormous masses, on the order of a galactic mass, may be accumulated on cosmic scales. At a certain threshold value of the matter density, the structural bosons decay into fermions and a sharp breakdown of the quantum-mechanical symmetry of the particles' wave functions occurs. Then, due to the Pauli exclusion principle, a large-scale phase transition occurs because of an enormous pressure jump in the matter. This phenomenon might cause a Cosmological Bang at the beginning of the Matter Era. As a mechanism for the accumulation of a galactic mass much larger than the configuration with structural bosons, a hypothetical BEC of elementary bosons (the gauge bosons $W^{\pm}$ and $Z^{0}$) is discussed as well.
The charged-particle final-state spectrum is derived from an analytic perturbative solution of relativistic viscous hydrodynamics. By taking into account the longitudinal acceleration effect in relativistic viscous hydrodynamics, the pseudorapidity spectrum describes well the nucleus-nucleus colliding systems at RHIC and the LHC. Based on both the extracted longitudinal acceleration parameters $\lambda^{*}$ and a phenomenological description of $\lambda^{*}$, the charged-particle pseudorapidity distributions for $\sqrt{s_{NN}}$ = 5.44 TeV Xe+Xe collisions are computed from the final-state expression in a limited space-time rapidity $\eta_{s}$ region.
In this study, we conducted a numerical investigation of the Hall conductance ($\sigma_{Hall}$) of graphene based on the magnetic energy band structure calculated using a nonperturbative magnetic-field-containing relativistic tight-binding approximation (MFRTB) method. The nonperturbative MFRTB method reproduces two types of plateaus in the dependence of $\sigma_{Hall}$ on the Fermi energy. One set is characterized as wide plateaus (WPs), with filling factors (FFs) of 2, 6, 10, 14, etc., known as the half-integer quantum Hall effect. The width of the WPs decreases with increasing FF, and this decrease exceeds that expected from the linear dispersion relation of graphene. The other set is characterized by narrow plateaus (NPs), with FFs of 0, 4, 8, 12, etc. The NPs correspond to the energy gaps caused by the spin-Zeeman effect and spin-orbit interaction. Furthermore, we found that the degeneracy of the magnetic energy bands calculated using the nonperturbative MFRTB method leads to a quantized $\sigma_{Hall}$.
This work studies the behaviors of two large-population teams competing in a discrete environment. The team-level interactions are modeled as a zero-sum game, while the agent dynamics within each team are formulated as a collaborative mean-field team problem. Drawing inspiration from the mean-field literature, we first approximate the large-population team game with its infinite-population limit. Subsequently, we construct a fictitious centralized system and transform the infinite-population game into an equivalent zero-sum game between two coordinators. We study the optimal coordination strategies for each team via a novel reachability analysis and later translate them back to decentralized strategies that the original agents deploy. We prove that the strategies are $\epsilon$-optimal for the original finite-population team game, and we further show that the suboptimality diminishes as the team size approaches infinity. The theoretical guarantees are verified by numerical examples.
In (2+1)-dimensional nonrelativistic Chern-Simons gauge theories on $S^2$ with a global $SU(M)$ symmetry, the semilocal Popov vortex equations are obtained as Bogomolny equations by minimizing the energy in the presence of a uniform external magnetic field. We study the equations with many flavors and find several families of exact solutions. The equations are transformed into the semilocal Liouville equations, for which some exact solutions are known. In this paper, we find new exact solutions of the semilocal Liouville equations. Using these solutions, we construct solutions to the semilocal Popov equations. The solutions are expressed in terms of one or more arbitrary rational functions on $S^2$. Some simple solutions reduce to $CP^{M-1}$ lump configurations.