I show how Hamilton's philosophical commitments led him to a causal interpretation of classical mechanics. I argue that Hamilton's metaphysics of causation was injected into his dynamics by way of a causal interpretation of force. I then detail how forces remain indispensable to both Hamilton's formulation of classical mechanics and what we now call Hamiltonian mechanics (i.e., the modern formulation). On this point, my efforts primarily consist of showing that the orthodox interpretation of potential energy is the interpretation found in Hamilton's work. Hamilton called the potential energy function the force-function because he believed that it represents forces at work in the world. Multifarious non-historical arguments for this orthodox interpretation of potential energy are provided, and matters are concluded by showing that in classical Hamiltonian mechanics, facts about potential energies of systems are grounded in facts about forces. Thus, if one can tolerate the view that forces are causes of motions, then Hamilton provides one with a road map for transporting causation into one of the most mathematically sophisticated formulations of classical mechanics, viz., Hamiltonian mechanics.
Scalable memories that can match the speeds of superconducting logic circuits have long been desired to enable a superconducting computer. A superconducting loop that includes a Josephson junction can store a flux quantum state in picoseconds. However, the requirement for the loop inductance to create a bi-state hysteresis sets a limit on the minimal area occupied by a single memory cell. Here, we present a miniaturized superconducting memory cell based on a Three-Dimensional (3D) Nb nano-Superconducting QUantum Interference Device (nano-SQUID). The major cell area here fits within an 8*9 {\mu}m^2 rectangle with a cross-selected function for memory implementation. The cell shows periodic tunable hysteresis between two neighbouring flux quantum states produced by bias current sweeping because of the large modulation depth of the 3D nano-SQUID (~66%). Furthermore, the measured Current-Phase Relations (CPRs) of nano-SQUIDs are shown to be skewed from a sine function, as predicted by theoretical modelling. The skewness and the critical current of 3D nano-SQUIDs are linearly correlated. It is also found that the hysteresis loop size is in a linear scaling relationship with the CPR skewness using the statistics from characterisation of 26 devices. We show that the CPR skewness range of {\pi}/4~3{\pi}/4 is equivalent to a large loop inductance in creating a stable bi-state hysteresis for memory implementation. Therefore, the skewed CPR of 3D nano-SQUID enables further superconducting memory cell miniaturization by overcoming the inductance limitation of the loop area.
The Bayesian Optimisation Algorithm (BOA) is an Estimation of Distribution Algorithm (EDA) that uses a Bayesian network as its probabilistic graphical model (PGM). Determining the optimal Bayesian network structure given a solution sample is an NP-hard problem. This step must be completed at each iteration of BOA, making the process very time-consuming. For this reason, most implementations use greedy estimation algorithms such as K2. However, we show in this paper that significant changes in PGM structure do not occur very frequently and can be particularly sparse at the end of evolution. A statistical study of BOA is thus presented to characterise a pattern of PGM adjustments that can be used as a guide to reduce the frequency of PGM updates during the evolutionary process. This is accomplished by proposing a new BOA-based optimisation approach (FBOA) whose PGM is not updated at each iteration. This new approach avoids the computational burden usually found in the standard BOA. The results compare the performances of both algorithms on an NK-landscape optimisation problem, using the correlation between the ruggedness and the expected runtime over enumerated instances. The experiments show that FBOA presents competitive results while significantly saving computational time.
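A minimal sketch of the reduced-update idea in Python: the probabilistic model is refit only every few generations instead of at every iteration. All names are hypothetical, and a cheap univariate marginal model stands in for FBOA's Bayesian network (whose structure learning is the expensive step being skipped).

```python
import random

def fboa_sketch(fitness, n_bits=20, pop_size=100, max_gens=50, update_every=5):
    """EDA loop where the model is re-estimated only every `update_every`
    generations, mimicking FBOA's reduced PGM-update schedule."""
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    model = [0.5] * n_bits  # marginal probability of each bit being 1
    for gen in range(max_gens):
        elite = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        if gen % update_every == 0:  # skip the costly model estimation otherwise
            model = [sum(ind[i] for ind in elite) / len(elite)
                     for i in range(n_bits)]
        population = [[1 if random.random() < p else 0 for p in model]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = fboa_sketch(fitness=sum)  # OneMax as a toy objective
print(sum(best), "ones out of 20")
```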
The present article is devoted to certain examples of functions whose argument is represented in terms of Cantor series.
We theoretically investigate magnon-mediated superconductivity in a heterostructure consisting of a normal metal and a two-sublattice antiferromagnetic insulator. The attractive electron-electron pairing interaction is caused by an interfacial exchange coupling with magnons residing in the antiferromagnet, resulting in p-wave, spin-triplet superconductivity in the normal metal. Our main finding is that one may significantly enhance the superconducting critical temperature by coupling the normal metal to only one of the two antiferromagnetic sublattices, employing, for example, an uncompensated interface. Using realistic material parameters, the critical temperature increases from vanishingly small values to values significantly larger than 1 K as the interfacial coupling becomes strongly sublattice-asymmetric. We provide a general physical picture of this enhancement mechanism based on the notion of squeezed bosonic eigenmodes.
Catalan words are particular growth-restricted words counted by the eponymous integer sequence. In this article we consider Catalan words avoiding a pair of patterns of length 3, pursuing the recent initial work of the first and last authors and of S. Kirgizov, where (among other things) the enumeration of Catalan words avoiding a single pattern of length 3 is completed. More precisely, we systematically explore the structural properties of the sets of words under consideration and give enumerative results by means of recursive decompositions, constructive bijections, or bivariate generating functions with respect to the length and descent number. Some of the obtained enumerating sequences are known, and thus the corresponding results establish new combinatorial interpretations for them.
The 1/N expansion for the O(N) vector model in four dimensions is reconsidered. It is emphasized that the effective potential for this model becomes everywhere complex just at the critical point, which signals an unstable vacuum. Thus a critical O(N) vector model cannot be consistently defined in the 1/N expansion in four dimensions, which makes the existence of a double-scaling limit for this theory doubtful.
We give a complete investigation of Morley's trisector theorem. If, for an arbitrary triangle, the intersections of the half-lines starting from adjacent vertices form an equilateral triangle, then the half-lines are the angle trisectors. To derive the result we use elementary trigonometry and Taylor series expansions, and solve systems of polynomial equations step by step. As a byproduct we get what is probably a new trigonometric identity.
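As a quick numerical sanity check of the forward (classical) statement of Morley's theorem, the sketch below constructs the adjacent-trisector intersections of a concrete triangle and verifies that they form an equilateral triangle up to floating-point error; the construction and point values are illustrative, not taken from the paper.

```python
import cmath

def morley_vertex(p, q, r):
    """Morley vertex near side qr: intersection of the angle trisectors
    drawn from q and r that are adjacent to side qr."""
    def trisector_dir(v, along, toward):
        # one third of the signed angle at v, measured from the ray v->along
        ang = cmath.phase((toward - v) / (along - v)) / 3
        return (along - v) * cmath.exp(1j * ang)
    d1, d2 = trisector_dir(q, r, p), trisector_dir(r, q, p)
    cross = lambda a, b: a.real * b.imag - a.imag * b.real
    t = cross(r - q, d2) / cross(d1, d2)  # solve q + t*d1 = r + s*d2
    return q + t * d1

A, B, C = 0 + 0j, 4 + 0j, 1 + 3j  # an arbitrary scalene triangle
m = [morley_vertex(A, B, C), morley_vertex(B, C, A), morley_vertex(C, A, B)]
print([abs(m[i] - m[(i + 1) % 3]) for i in range(3)])  # three equal side lengths
```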
We present results of optical spectroscopic observations of candidate Lyman Break Galaxies (LBGs) at $z \sim 5$ in the regions including the GOODS-N and the J0053+1234 fields, using GMOS-N and GMOS-S, respectively. Among 25 candidates, five objects are identified to be at $z \sim 5$ (two of them were already identified by an earlier study), and one object very close to the color-selection window turned out to be a foreground galaxy. With this spectroscopically identified sample and those from previous studies, we derived lower limits on the number density of bright ($M_{UV}<-22.0$ mag) LBGs at $z \sim 5$. These lower limits are comparable to or slightly smaller than the number densities of the UV luminosity functions (UVLFs) that show the smallest number density among $z \sim 5$ UVLFs in the literature. However, considering that many LBG candidates remain without spectroscopic observations, the number density of bright LBGs is expected to increase by a factor of two or more. The evidence for the deficiency of UV-luminous LBGs with large Ly$\alpha$ equivalent widths is reinforced. We discuss possible causes for the deficiency and prefer the interpretation of dust absorption.
Recently, some S-stars (S4711, S62, S4714) orbiting the supermassive black hole (SMBH) in Sgr A$^\ast$ with short orbital periods ($7.6\,\mathrm{yr}\leq P_\mathrm{b}\leq 12\,\mathrm{yr}$) were discovered. It was suggested that they may be used to measure the general relativistic Lense-Thirring (LT) precessions of their longitudes of ascending node $\mathit{\Omega}$ induced by the SMBH's angular momentum $\boldsymbol{J}_\bullet$. In fact, the proposed numerical estimates hold only in the particular case of a perfect alignment of $\boldsymbol{J}_\bullet$ with the line of sight, which does not seem to be the case. Moreover, the inclination $I$ and the argument of perinigricon $\omega$ also undergo LT precessions for an arbitrary orientation of $\boldsymbol{J}_\bullet$ in space. We explicitly show the analytical expressions of $\dot I^\mathrm{LT},\,\dot{\mathit{\Omega}}^\mathrm{LT},\,\dot\omega^\mathrm{LT}$ in terms of the SMBH's spin polar angles $i^\bullet,\,\varepsilon^\bullet$, and calculate the range of values for each of them in arcseconds per year ($^{\prime\prime}\,\mathrm{yr}^{-1}$). For each star, the corresponding values of $i^\bullet_\mathrm{max},\,\varepsilon^\bullet_\mathrm{max}$ and $i^\bullet_\mathrm{min},\,\varepsilon^\bullet_\mathrm{min}$ are determined as well, along with those $i_0^\bullet,\,\varepsilon_0^\bullet$ that cancel the LT precessions. The LT perinigricon precessions $\dot\omega^\mathrm{LT}$ are overwhelmed by the systematic uncertainties in the Schwarzschild ones due to the current errors in the stars' orbital parameters and the mass of Sgr A$^\ast$ itself. [Abridged]
Results of forward modelling of acoustic wave propagation in a realistic solar sub-photosphere with two cases of steady horizontal flows are presented and analysed by means of local helioseismology. The simulations are based on fully compressible ideal hydrodynamical modelling in a Cartesian grid. The initial model is characterised by solar density and pressure stratifications taken from the standard Model S, adjusted in order to suppress convective instability. Acoustic waves are excited by a non-harmonic source located below the depth corresponding to the visible surface of the Sun. Numerical experiments with coherent horizontal flows with linear and Gaussian dependences of flow speed on depth are carried out. These flow fields may mimic horizontal motions of plasma surrounding a sunspot, differential rotation, or meridional circulation. An inversion of the velocity profiles from the simulated travel-time differences is carried out. The inversion is based on the ray approximation. The results of the inversion are then compared with the original velocity profiles. The influence of steady flows on the propagation of sound waves through the solar interior is analysed. Further, we propose a method of obtaining the travel-time differences for waves propagating in sub-photospheric solar regions with horizontal flows. The method directly employs the difference between travel-time diagrams of waves propagating with and against the background flow. The analysis shows that the flow speed profiles obtained from the inversion based on the ray approximation differ from the original ones. The difference is caused by the fact that the wave packets propagate along a ray bundle of finite extent, and thus reach deeper regions of the sub-photosphere than ray theory predicts.
We propose an immersed boundary scheme for the numerical resolution of the Complete Electrode Model in Electrical Impedance Tomography, which we use as a main ingredient in the resolution of inverse problems in medical imaging. Such a method allows the use of a Cartesian mesh without an accurate discretization of the boundary, which is useful in situations where the boundary is complicated and/or changing. We prove the convergence of our method and illustrate its efficiency on two-dimensional direct and inverse problems.
We study the phase diagram of a dipolar Fermi gas at half-filling in a cubic optical lattice with dipole moments aligned along the z-axis. The anisotropic dipole-dipole interaction leads to competition between p_z-wave superfluid and nematic charge-density-wave (CDW) orders at low temperatures. We find that the superfluid phase survives for weak interactions, while the CDW phase dominates for strong interactions. In between, a supersolid phase appears as a balance between the superfluid and CDW orders. The superfluid density is anisotropic in the supersolid and superfluid phases. In the CDW phase, there is a semimetal-to-insulator transition as the interaction strength increases. Experimental implications are discussed.
We provide alternative proofs of two recent Grothendieck theorems for jointly completely bounded bilinear forms, originally due to Pisier and Shlyakhtenko (Invent. Math. 2002) and Haagerup and Musat (Invent. Math. 2008). Our proofs are elementary and are inspired by the so-called embezzlement states in quantum information theory. Moreover, our proofs lead to quantitative estimates.
In this work we present a systematic study of the three-dimensional extension of the ring dark soliton, examining its existence, stability, and dynamics in isotropic harmonically trapped Bose-Einstein condensates. As the chemical potential is detuned from the linear limit, the ring dark soliton becomes unstable immediately, but it can be fully stabilized by an external cylindrical potential. The ring has a large number of unstable modes which are analyzed through spectral stability analysis. Furthermore, a few typical destabilization scenarios are revealed, with a number of interesting vortical structures emerging, such as two or four coaxial parallel vortex rings. In the process of considering the stability of the structure, we also develop a modified version of the degenerate perturbation theory method for characterizing the spectra of the coherent structure. This semi-analytical method can be reliably applied to any soliton with a linear limit to explore its spectral properties near this limit. The good agreement of the resulting spectrum is illustrated via comparison with the full numerical Bogolyubov-de Gennes spectrum. The application of the method to the two-component ring dark-bright soliton is also discussed.
We propose and analyze a scalable and fully autonomous scheme for preparing spatially distributed multiqubit entangled states in a dual-rail waveguide QED setup. In this approach, arrays of qubits located along two separated waveguides are illuminated by correlated photons from the output of a nondegenerate parametric amplifier. These photons drive the qubits into different classes of pure entangled steady states, for which the degree of multipartite entanglement can be conveniently adjusted by the chosen pattern of local qubit-photon detunings. Numerical simulations for moderate-sized networks show that the preparation time for these complex multiqubit states increases at most linearly with the system size and that one may benefit from an additional speedup in the limit of a large amplifier bandwidth. Therefore, this scheme offers an intriguing new route for distributing ready-to-use multipartite entangled states across large quantum networks, without requiring any precise pulse control and relying on a single Gaussian entanglement source only.
We describe a way to complete a correlation matrix that is not fully specified. Such matrices often arise in financial applications when the number of stochastic variables becomes large or when several smaller models are combined in a larger model. We argue that the proper completion to consider is the matrix that maximizes the entropy of the distribution described by the matrix. We then give a way to construct this matrix starting from the graph associated with the incomplete matrix. If this graph is chordal, our construction will result in a proper correlation matrix. We give a detailed description of the construction for a cross-currency model with six stochastic variables and describe extensions to larger models involving more currencies.
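A small worked instance, under the Gaussian reading of maximum entropy (where maximizing entropy amounts to maximizing the log-determinant): with correlations r12 and r23 given and r13 missing, the associated graph is a chordal path, and the maximum-determinant completion has the closed form r13 = r12*r23. The numbers below are illustrative, not from the paper.

```python
import numpy as np

# Toy instance: correlations (1,2) and (2,3) are given, (1,3) is missing.
r12, r23 = 0.5, 0.3

# For this chordal (path) graph the maximum-entropy / maximum-determinant
# completion has a closed form: the missing entry is the product of the
# correlations along the path (conditional independence given variable 2).
r13 = r12 * r23
C = np.array([[1.0, r12, r13],
              [r12, 1.0, r23],
              [r13, r23, 1.0]])

# Check it is a proper correlation matrix and (numerically) det-maximal.
assert np.all(np.linalg.eigvalsh(C) > 0)
best = max(np.linspace(-0.99, 0.99, 1999),
           key=lambda r: np.linalg.det(np.array([[1, r12, r],
                                                 [r12, 1, r23],
                                                 [r, r23, 1]])))
print(r13, best)  # both ~0.15
```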
Convolutional recurrent neural networks (CRNNs) have achieved state-of-the-art performance for sound event detection (SED). In this paper, we propose to use a dilated CRNN, namely a CRNN with a dilated convolutional kernel, as the classifier for the task of SED. We investigate the effectiveness of dilation operations, which provide a CRNN with expanded receptive fields to capture long temporal context without increasing the number of the CRNN's parameters. Compared to the classifier of the baseline CRNN, the classifier of the dilated CRNN obtains a maximum increase of 1.9%, 6.3% and 2.5% in F1 score and a maximum decrease of 1.7%, 4.1% and 3.9% in error rate (ER) on the publicly available audio corpora of the TUT-SED Synthetic 2016, the TUT Sound Event 2016 and the TUT Sound Event 2017, respectively.
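A minimal PyTorch illustration of the mechanism (not the paper's architecture): with kernel size 3, a dilation of 4 widens the temporal receptive field from 3 to 9 frames at an identical parameter count; the tensor shapes are arbitrary stand-ins for spectrogram features.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 400)  # (batch, feature maps, time frames), illustrative

conv_plain   = nn.Conv1d(64, 64, kernel_size=3, padding=1)
conv_dilated = nn.Conv1d(64, 64, kernel_size=3, padding=4, dilation=4)

print(conv_plain(x).shape, conv_dilated(x).shape)   # same output length
n = lambda m: sum(p.numel() for p in m.parameters())
print(n(conv_plain) == n(conv_dilated))             # True: identical parameter count
```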
We propose BokehMe, a hybrid bokeh rendering framework that marries a neural renderer with a classical physically motivated renderer. Given a single image and a potentially imperfect disparity map, BokehMe generates high-resolution photo-realistic bokeh effects with adjustable blur size, focal plane, and aperture shape. To this end, we analyze the errors from the classical scattering-based method and derive a formulation to calculate an error map. Based on this formulation, we implement the classical renderer by a scattering-based method and propose a two-stage neural renderer to fix the erroneous areas from the classical renderer. The neural renderer employs a dynamic multi-scale scheme to efficiently handle arbitrary blur sizes, and it is trained to handle imperfect disparity input. Experiments show that our method compares favorably against previous methods on both synthetic image data and real image data with predicted disparity. A user study is further conducted to validate the advantage of our method.
Network Function Virtualization (NFV) aims to simplify deployment of network services by running Virtual Network Functions (VNFs) on commercial off-the-shelf servers. Service deployment involves placement of VNFs and in-sequence routing of traffic flows through the VNFs comprising a Service Chain (SC). The joint VNF placement and traffic routing is usually referred to as SC mapping. In a Wide Area Network (WAN), a situation may arise where several traffic flows, generated by many distributed node pairs, require the same SC; in such a situation, a single instance (or occurrence) of that SC might not be enough. SC mapping with multiple SC instances for the same SC turns out to be a very complex problem, since the sequential traversal of VNFs has to be maintained while accounting for traffic flows in various directions. Our study is the first to deal with SC mapping with multiple SC instances to minimize network resource consumption. Exact mathematical modeling of this problem results in a quadratic formulation. We propose a two-phase column-generation-based model and solution approach in order to obtain results over large network topologies within reasonable computational times. Using such an approach, we observe that an appropriate choice of only a small set of SC instances can lead to a solution very close to the minimum bandwidth consumption.
Recently, Lai and Rohatgi discovered a shuffling theorem for lozenge tilings of doubly-dented hexagons, which generalized the earlier work of Ciucu. Later, Lai proved an analogous theorem for centrally symmetric tilings, which generalized some other previous work of Ciucu. In this paper, we give a unified proof of these two shuffling theorems, which also covers the weighted case. Unlike the original proofs, our arguments do not use the graphical condensation method but instead rely on a well-known tiling enumeration formula due to Cohn, Larsen, and Propp. Fulmek independently found a similar proof of Lai and Rohatgi's original shuffling theorem. Our proof also gives a combinatorial explanation for Ciucu's recent conjecture relating the total number and the number of centrally symmetric lozenge tilings.
We have used archive HST WFPC2 data for three elliptical galaxies (NGC 3379 in the Leo I group, and NGC 4472 and NGC 4406 in the Virgo cluster) to determine their distances using the Surface Brightness Fluctuation (SBF) method as described by Tonry and Schneider (1988). A comparison of the HST results with the SBF distance moduli of Ciardullo et al. (1993) shows significant disagreement and suggests that the r.m.s. error on these ground-based distance moduli is actually as large as +/-0.25 mag. The agreement is only slightly improved when we compare our results with the HST and ground-based SBF distances of Ajhar et al. (1997) and Tonry et al. (1997); the comparison suggests that a lower limit on the error of the HST SBF distance moduli is +/-0.17 mag. Overall, these results suggest that previously quoted measurement errors may underestimate the true error in SBF distance moduli by at least a factor of 2-3.
The Multipurpose InfraRed Imaging System (MIRIS) performed the MIRIS Pa{\alpha} Galactic Plane Survey (MIPAPS), which covers the entire Galactic plane within the latitude range of -3{\deg} < b < +3{\deg} at Pa{\alpha} (1.87 um). We present the first results from the MIPAPS data, extracted from the longitude range of l = 96.5{\deg}-116.3{\deg}, and demonstrate the data quality and scientific potential of the data by comparing them with H{\alpha} maps obtained from the INT Photometric H{\alpha} Survey (IPHAS) data. We newly identify 90 H II region candidates in the WISE H II region catalog as definite H II regions by detecting the Pa{\alpha} and/or H{\alpha} recombination lines, out of which 53 H II regions are detected at Pa{\alpha}. We also report the detection of an additional 29 extended and 18 point-like sources at Pa{\alpha}. We estimate the E(B-V) color excesses and the total Lyman continuum luminosities of H II regions by combining the MIPAPS Pa{\alpha} and IPHAS H{\alpha} fluxes. The E(B-V) values are found to be systematically lower than those estimated from point stars associated with the H II regions. Utilizing the MIPAPS Pa{\alpha} and IPHAS H{\alpha} images, we obtain an E(B-V) map for the entire region of the H II region Sh2-131, with an angular size of ~2.5{\deg}. The E(B-V) map shows not only numerous high-extinction filamentary features but also negative E(B-V) regions, indicating H{\alpha} excess. The H{\alpha} excess and the systematic underestimation of E(B-V) are attributed to light scattered by dust.
The low-temperature dependence of the specific heat of one-dimensional multicomponent systems at the commensurate-incommensurate phase transition point is studied. It is found that for canonical systems, with a fixed total number of particles, the low-temperature specific heat depends linearly on temperature with a diverging prefactor.
In this note, we show how to obtain a "characteristic power series" of graphons -- infinite limits of dense graphs -- as the limit of normalized reciprocal characteristic polynomials. This leads to a new characterization of graph quasi-randomness and another perspective on spectral theory for graphons, a complete description of the function in terms of the spectrum of the graphon as a self-adjoint kernel operator. Interestingly, while we apply a standard regularization to classical determinants, it is unclear how necessary this is.
The analysis of data on hyperon transverse momentum distributions, dN/dPt, gathered from various experiments (ISR, STAR, UA1, UA5 and CDF) reveals an important difference in the dynamics of multiparticle production in proton-proton vs. antiproton-proton collisions in the region of transverse momenta 0.3 GeV/c < Pt < 3 GeV/c. Hyperons produced with proton beams display a sharp exponential slope at low Pt, while spectra produced with antiproton beams do not. Since LHC experiments have proton projectiles, the spectra of baryon production should seem "softer" than expected, because the Monte Carlo simulations were based on the Tevatron antiproton-proton data. From the point of view of the Quark-Gluon String Model, the most important contribution to the particle production spectra comes from antidiquark-diquark string fragmentation, which exists only in the topological diagram for antiproton-proton collisions and is a very interesting object of investigation even at lower energies. This study may have an impact not only on the interpretation of LHC results, but also on cosmic ray physics and astrophysics, where the baryon contribution to matter-antimatter asymmetry is being studied.
Objective: Global Maxwell Tomography (GMT) is a recently introduced volumetric technique for noninvasive estimation of electrical properties (EP) from magnetic resonance measurements. Previous work evaluated GMT using ideal radiofrequency (RF) excitations. The aim of this simulation study was to assess GMT performance with a realistic RF coil. Methods: We designed a transmit-receive RF coil with $8$ decoupled channels for $7$T head imaging. We calculated the RF transmit field ($B_1^+$) inside heterogeneous head models for different RF shimming approaches, and used them as input for GMT to reconstruct EP for all voxels. Results: Coil tuning/decoupling remained relatively stable when the coil was loaded with different head models. Mean error in EP estimation changed from $7.5\%$ to $9.5\%$ and from $4.8\%$ to $7.2\%$ for relative permittivity and conductivity, respectively, when changing head model without re-tuning the coil. Results slightly improved when an SVD-based RF shimming algorithm was applied, in place of excitation with one coil at a time. Despite errors in EP, RF transmit field ($B_1^+$) and absorbed power could be predicted with less than $0.5\%$ error over the entire head. GMT could accurately detect a numerically inserted tumor. Conclusion: This work demonstrates that GMT can reliably reconstruct EP in realistic simulated scenarios using a tailored 8-channel RF coil design at $7$T. Future work will focus on construction of the coil and optimization of GMT's robustness to noise, to enable in-vivo GMT experiments. Significance: GMT could provide accurate estimations of tissue EP, which could be used as biomarkers and could enable patient-specific estimation of RF power deposition, which is an unsolved problem for ultra-high-field magnetic resonance imaging.
Low-Rank Adaptation~(LoRA), which updates the dense neural network layers with pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning paradigms. Furthermore, it has significant advantages in cross-task generalization and privacy preservation. Hence, LoRA has gained much attention recently, and the volume of related literature has grown exponentially. It is necessary to conduct a comprehensive overview of the current progress on LoRA. This survey categorizes and reviews the progress from the perspectives of (1) downstream adaptation variants that improve LoRA's performance on downstream tasks; (2) cross-task generalization methods that mix multiple LoRA plugins to achieve cross-task generalization; (3) efficiency-improving methods that boost the computational efficiency of LoRA; (4) data privacy-preserving methods that use LoRA in federated learning; and (5) applications. Besides, this survey also discusses future directions in this field. Finally, we provide a GitHub page (https://github.com/ZJU-LLMs/Awesome-LoRAs.git) for readers to check updates and initiate discussions on this survey paper.
Anisotropic flow of kaons, antikaons, and lambdas is studied in heavy-ion collisions at SPS and RHIC energies within the microscopic quark-gluon string model. At SPS energy the directed flow of kaons differs considerably at midrapidity from that of antikaons, while at RHIC energy the kaon and antikaon flows coincide. The change is attributed to the formation of dense meson-dominated matter at RHIC, where the differences in the interaction cross-sections of hadrons become unimportant. The directed flows of strange particles, $v_1^{K,\bar{K},\Lambda}(y)$, have a universal negative slope at $|y| \leq 2$ at RHIC. The elliptic flow of strange hadrons develops at midrapidity at times 3 < t < 10 fm/c. It increases almost linearly with rising p_t and saturates at $p_t \approx 1.5$ GeV/c, reaching the same saturation value $v_2^{K,\Lambda}(p_t) \approx 10\%$, in accord with experimental results.
Along a microtubule, certain active motors propel themselves in one direction whereas others propel themselves in the opposite direction. For example, the cargo-transporting motor proteins dynein and kinesin propel themselves towards the so-called minus- and plus-ends of the microtubule, respectively, and in so doing are able to pass one another, but not without interacting. We address the emergent collective behavior of systems composed of many motors, some propelling towards the plus-end and others towards the minus-end. To do this, we use an analogy between this strongly interacting, far-from-equilibrium, classical stochastic many-motor system and a certain quantum-mechanical many-body system evolving in imaginary time. We apply well-known methods from quantum many-body theory, including self-consistent mean-field theory and bosonization, to shed light on phenomena exhibited by the many-motor system, such as structure formation and the dynamics of collective modes at low frequencies and long wavelengths. In particular, via the bosonized description we find analogs of chiral Luttinger liquids, as well as a qualitative transition in the nature of the low-frequency modes, from propagating to purely dissipative, controlled by density and interaction strength.
High resolution (0.4 arcsec) Atacama Large Millimeter/submillimeter Array (ALMA) Cycle 0 observations of HCO+(4-3) and HCN(4-3) toward the mid-stage infrared-bright merger VV114 have revealed compact nuclear (<200 pc) and extended (3 - 4 kpc) dense gas distributions across the eastern part of the galaxy pair. We find a significant enhancement of HCN(4-3) emission in an unresolved, compact, and broad (290 km/s) component found in the eastern nucleus of VV114, which we suggest traces dense gas in the material surrounding an Active Galactic Nucleus (AGN), with a mass upper limit of < 4 x 10^8 Msun. The extended dense gas is distributed along a filamentary structure with resolved dense gas concentrations (230 pc; 10^6 Msun) separated by a mean projected distance of 600 pc, many of which are generally consistent with the location of star formation traced in Pa alpha emission. Radiative transfer calculations suggest moderately dense (10^5 - 10^6 cm^-3) gas averaged over the entire emission region. These new ALMA observations demonstrate the strength of the dense gas tracers in identifying both the AGN and star formation activity in a galaxy merger, even in the most dust-enshrouded environments in the local universe.
The cores of neutron stars (NSs) near the maximum mass realize the most highly compressed matter in the universe, where quark degrees of freedom may be liberated. Such a state of dense matter is hypothesized as quark matter (QM), and its existence has awaited confirmation for decades in nuclear physics. Gravitational waves from binary NS mergers are expected to convey useful information on the equation of state (EOS). However, a signature for QM with a realistic EOS has not yet been established. Here, we show that the gravitational wave in the post-merger stage can distinguish theory scenarios with and without a transition to QM. Instead of adopting specific EOSs as studied previously, we compile reliable EOS constraints from ab initio approaches. We demonstrate that early collapse to a black hole after a NS merger signifies softening of the EOS associated with the onset of QM, in accord with the ab initio constraints. The nature of the hadron-quark phase transition can be further constrained by the condition that electromagnetic counterparts need to be energized by the material left outside the remnant black hole.
In this work, we tackle the task of learning generalizable 3D human Gaussians from a single image. The main challenge for this task is to recover detailed geometry and appearance, especially for the unobserved regions. To this end, we propose a single-view generalizable Human Gaussian Model (HGM), a diffusion-guided framework for 3D human modeling from a single image. We design a diffusion-based coarse-to-fine pipeline, where the diffusion model is adapted to refine novel-view images rendered from a coarse human Gaussian model. The refined images are then used together with the input image to learn a refined human Gaussian model. Although effective in hallucinating the unobserved views, this approach may generate unrealistic human poses and shapes due to the lack of supervision. We circumvent this problem by further encoding the geometric priors from the SMPL model. Specifically, we propagate geometric features from the SMPL volume to the predicted Gaussians via sparse convolution and an attention mechanism. We validate our approach on publicly available datasets and demonstrate that it significantly surpasses state-of-the-art methods in terms of PSNR and SSIM. Additionally, our method exhibits strong generalization to in-the-wild images.
In this work we introduce a new class of optimal tensor codes related to the Ravagnani-type anticodes, namely the $j$-tensor maximum rank distance codes. We show that it extends the family of $j$-maximum rank distance codes and contains the $j$-tensor binomial moment determined codes (with respect to the Ravagnani-type anticodes) as a proper subclass. We define and study the generalized zeta function for tensor codes. We establish connections between this object and the weight enumerator of a code with respect to the Ravagnani-type anticodes. We introduce a new refinement of the invariants of tensor codes exploiting the structure of product lattices of some classes of anticodes and we derive the corresponding MacWilliams identities. In this framework, we also define a multivariate version of the tensor weight enumerator and we establish relations with the corresponding zeta function. As an application we derive connections on the generalized tensor weights related to the Delsarte and Ravagnani-type anticodes.
We studied the phase diagram of Sr$_3$Ru$_2$O$_7$ by means of heat capacity and magnetocaloric effect measurements at temperatures as low as 0.06 K and fields up to 12 T. We confirm the presence of a new quantum critical point at 7.5 T, which is characterized by a strong non-Fermi-liquid behavior of the electronic specific heat coefficient $\Delta$C/T $\sim$ -logT over more than a decade in temperature, placing strong constraints on theories of its criticality. In particular, logarithmic corrections are found when the dimension d is equal to the dynamic critical exponent z, in contrast to the conclusion proposed recently [Y. Tokiwa et al., Phys. Rev. Lett. 116, 226402 (2016)]. Moreover, we achieved a clear determination of the new second thermodynamic phase adjoining the first one at lower temperatures. Its thermodynamic features differ significantly from those of the dominant phase, and characteristics expected of classical equilibrium phase transitions are not observed, indicating fundamental differences in the phase formation.
We present new algorithms to compute fundamental properties of a Boolean function given in truth-table form. Specifically, we give an O(N^2.322 log N) algorithm for block sensitivity, an O(N^1.585 log N) algorithm for `tree decomposition,' and an O(N) algorithm for `quasisymmetry.' These algorithms are based on new insights into the structure of Boolean functions that may be of independent interest. We also give a subexponential-time algorithm for the space-bounded quantum query complexity of a Boolean function. To prove this algorithm correct, we develop a theory of limited-precision representation of unitary operators, building on work of Bernstein and Vazirani.
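For orientation, here is the trivial truth-table baseline for plain (single-bit) sensitivity, which lower-bounds block sensitivity; this is a hypothetical helper for contrast, not the paper's algorithm.

```python
def sensitivity(f, n):
    """Naive O(N log N) baseline (N = 2^n table entries): the maximum, over
    inputs x, of the number of single-bit flips that change f(x). Block
    sensitivity, which the paper computes in O(N^2.322 log N), maximises
    over disjoint *blocks* of flipped bits instead of single bits."""
    return max(sum(f[x] != f[x ^ (1 << i)] for i in range(n))
               for x in range(1 << n))

f = [1 if x else 0 for x in range(8)]  # truth table of OR on 3 bits
print(sensitivity(f, 3))  # 3: at x = 000 every single-bit flip changes OR
```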
The critical behavior of a quenched random hypercubic sample of linear size $L$ is considered, within the ``random-$T_{c}$'' field-theoretical model, by using the renormalization group method. A finite-size scaling behavior is established and analyzed near the upper critical dimension $d=4-\epsilon$, and some universal results are obtained. The problem of self-averaging is clarified for different critical regimes.
In the context of XACML-based access control systems, intensive testing is among the most widely adopted means to assure that sensitive information or resources are correctly accessed. Unfortunately, it requires a huge effort for manual inspection of results; thus, automated verdict derivation is a key aspect for improving the cost-effectiveness of testing. To this purpose, we introduce XACMET, a novel approach for automated model-based oracle definition. XACMET defines a typed graph, called the XAC-Graph, that models the XACML policy evaluation. The expected verdict for a specific request execution can thus be automatically derived by executing the corresponding path in this graph. Our validation of the XACMET prototype implementation confirms the effectiveness of the proposed approach.
The tippedisk is a mathematical-mechanical archetype for a peculiar friction-induced instability phenomenon leading to the inversion of an unbalanced spinning disk, reminiscent of (but different from) the well-known inversion of the tippetop. A reduced model of the tippedisk, in the form of a three-dimensional ordinary differential equation, has been derived recently, followed by a preliminary local stability analysis of stationary spinning solutions. In the current paper, a global analysis of the reduced system is pursued using the framework of singular perturbation theory. It is shown how the presence of friction leads to slow-fast dynamics and the creation of a two-dimensional slow manifold. Furthermore, it is revealed that a bifurcation scenario involving a homoclinic bifurcation and a Hopf bifurcation leads to an explanation of the inversion phenomenon. In particular, a closed-form condition for the critical spinning speed of the inversion phenomenon is derived. Hence, the tippedisk forms an excellent mathematical-mechanical problem for the analysis of global bifurcations in singularly perturbed dynamics.
Context. The Class 0 protostellar binary IRAS 16293-2422 is an interesting target for (sub)millimeter observations due to both the rich chemistry toward the two main components of the binary and its complex morphology. Its proximity to Earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. Such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1D models of the envelope. Aims. The purpose of this paper is to study the environment of the two components of the binary through 3D radiative transfer modeling and to compare with data from the Atacama Large Millimeter/submillimeter Array. Such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary, and the chemistry of simple species. Methods. We present 13CO, C17O and C18O J=3-2 observations from the ALMA Protostellar Interferometric Line Survey (PILS), together with a qualitative study of the dust and gas density distribution of IRAS 16293-2422. A 3D dust and gas model including disks and a dust filament between the two protostars is constructed which qualitatively reproduces the dust continuum and gas line emission. Results and conclusions. Radiative transfer modeling of sources A and B, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk-like emission around sources A and B from the observed dust continuum and CO isotopologue gas emission. If a protoplanetary disk model is used around source B, it has to have an unusually high scale height in order to reach the dust continuum peak emission value while fulfilling the other observational constraints. Our 3D model requires source A to be much more luminous than source B: L_A ~ 18 $L_\odot$ and L_B ~ 3 $L_\odot$.
This paper introduces a novel metaheuristic algorithm, known as the efficient multiplayer battle game optimizer (EMBGO), specifically designed for addressing complex numerical optimization tasks. The motivation behind this research stems from the need to rectify identified shortcomings in the original MBGO, particularly in the search operators during the movement phase, as revealed through ablation experiments. EMBGO mitigates these limitations by integrating the movement and battle phases to simplify the original optimization framework and improve search efficiency. In addition, two efficient search operators, differential mutation and L\'evy flight, are introduced to increase the diversity of the population. To evaluate the performance of EMBGO comprehensively and fairly, numerical experiments are conducted on benchmark functions such as CEC2017, CEC2020, and CEC2022, as well as on engineering problems. Twelve well-established metaheuristic algorithms serve as competitors for comparison. Furthermore, we apply the proposed EMBGO to complex adversarial robust neural architecture search (ARNAS) tasks and explore its robustness and scalability. The experimental results and statistical analyses confirm the efficiency and effectiveness of EMBGO across various optimization tasks. As a potential optimization technique, EMBGO holds promise for diverse applications in real-world problems and deep learning scenarios. The source code of EMBGO is made available at \url{https://github.com/RuiZhong961230/EMBGO}.
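The two added operators are standard in the metaheuristics literature; a minimal Python sketch follows, using Mantegna's algorithm for the Lévy step and DE/rand/1 for differential mutation. The abstract does not specify EMBGO's exact variants, so beta, F, and the function names here are illustrative.

```python
import math, random

def levy_step(beta=1.5):
    """One Levy-flight step via Mantegna's algorithm (a common choice in
    metaheuristics; not necessarily EMBGO's exact formulation)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def differential_mutation(a, b, c, F=0.5):
    """DE/rand/1 mutation: perturb `a` by the scaled difference of two peers."""
    return [ai + F * (bi - ci) for ai, bi, ci in zip(a, b, c)]

print(levy_step(), differential_mutation([0.1, 0.2], [0.4, 0.1], [0.0, 0.3]))
```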
We report the results of a CGRO 3-week observation of the binary system containing the 47 ms pulsar PSR B1259-63 orbiting around a Be star companion in a very eccentric orbit. The PSR B1259-63 binary is a unique system for the study of the interaction of a rapidly rotating pulsar with time-variable nebular surroundings. CGRO observed the PSR B1259-63 system in coincidence with its most recent periastron passage (January 3-23, 1994). Unpulsed and non-thermal hard X-ray emission was detected up to 200 keV, with a photon index $1.8 \pm 0.2$ and a flux of ~4 mCrab, corresponding to a luminosity of a few 10^{34} erg/s at the distance of 2 kpc. The hard X-ray flux and spectrum detected by CGRO agrees with the X-ray emission measured by simultaneous ASCA observations. EGRET upper limits are significant, and exclude strong inverse Compton cooling in the PSR B1259-63 system. We interpret the observed non-thermal emission as synchrotron radiation of shocked electron/positron pairs of the relativistic pulsar wind interacting with the mass outflow from the Be star. Our results clearly indicate, for the first time in a binary pulsar, that high energy emission can be shock-powered rather than caused by accretion. The lack of X-ray/gamma-ray pulsations constrains models of high-energy emission from rapidly rotating pulsars.
It is clarified how the fluctuation-exchange (FLEX) approximation and Fermi-liquid theory fail to explain the anomalous behavior of the Hall coefficient in the normal state of high-Tc superconductors.
With 5394 security certificates of IT products and systems, the Common Criteria for Information Technology Security Evaluation have bred an ecosystem entangled with various kinds of relations between the certified products. Yet, the prevalence and nature of dependencies among Common Criteria certified products remain largely unexplored. This study devises a novel method for building the graph of references among the Common Criteria certified products, determining the different contexts of references with a supervised machine-learning algorithm, and measuring how often the references constitute actual dependencies between the certified products. With the help of the resulting reference graph, this work identifies just a dozen certified components that are relied on by at least 10% of the whole ecosystem, making them a prime target for malicious actors. The impact of their compromise is assessed and potentially problematic references to archived products are discussed.
The deformations of multi-$\Lambda$ hypernuclei corresponding to even-even core nuclei ranging from $^8$Be to $^{40}$Ca with 2, 4, 6, and 8 hyperons are studied in the framework of the deformed Skyrme-Hartree-Fock approach. It is found that the deformations are reduced when adding 2 or 8 $\Lambda$ hyperons, but enhanced when adding 4 or 6 $\Lambda$ hyperons. These differences are attributed to the fact that $\Lambda$ hyperons are filled gradually into the three deformed $p$ orbits, of which the [110]1/2$^-$ orbit is prolately deformed and the degenerate [101]1/2$^-$ and [101]3/2$^-$ orbits are oblately deformed.
We have searched for intermediate-scale anisotropy in the arrival directions of ultrahigh-energy cosmic rays with energies above 57~EeV in the northern sky using data collected over a 5 year period by the surface detector of the Telescope Array experiment. We report on a cluster of events that we call the hotspot, found by oversampling using 20$^\circ$-radius circles. The hotspot has a Li-Ma statistical significance of 5.1$\sigma$, and is centered at R.A.=146.7$^{\circ}$, Dec.=43.2$^{\circ}$. The position of the hotspot is about 19$^{\circ}$ off of the supergalactic plane. The probability of a cluster of events of 5.1$\sigma$ significance, appearing by chance in an isotropic cosmic-ray sky, is estimated to be 3.7$\times$10$^{-4}$ (3.4$\sigma$).
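A sketch of the oversampling step described above (counting events within 20-degree circles via the spherical law of cosines); the grid choice and data are illustrative stand-ins, and the Li-Ma significance computation is omitted.

```python
import numpy as np

def counts_in_circles(event_ra, event_dec, grid_ra, grid_dec, radius_deg=20.0):
    """For each grid point, count events whose angular distance is within
    `radius_deg` (the oversampling step; all coordinates in degrees)."""
    ra1, dec1 = np.radians(event_ra)[None, :], np.radians(event_dec)[None, :]
    ra2, dec2 = np.radians(grid_ra)[:, None], np.radians(grid_dec)[:, None]
    cos_d = (np.sin(dec1) * np.sin(dec2)
             + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return (cos_d >= np.cos(np.radians(radius_deg))).sum(axis=1)

# Toy data: 100 isotropic events, one grid point at the reported hotspot.
rng = np.random.default_rng(0)
ra = rng.uniform(0, 360, 100)
dec = np.degrees(np.arcsin(rng.uniform(-1, 1, 100)))
print(counts_in_circles(ra, dec, np.array([146.7]), np.array([43.2])))
```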
We critically evaluate the treatment of the notion of measurement in the Consistent Histories approach to quantum mechanics. We find such a treatment unsatisfactory because it relies, often implicitly, on elements external to those provided by the formalism. In particular, we note that, in order for the formalism to be informative when dealing with measurement scenarios, one needs to assume that the appropriate choice of framework is such that apparatuses are always in states of well defined pointer positions after measurements. The problem is that there is nothing in the formalism to justify this assumption. We conclude that the Consistent Histories approach, contrary to what is claimed by its proponents, fails to provide a truly satisfactory resolution for the measurement problem in quantum theory.
A distribution matcher (DM) encodes a binary input data sequence into a sequence of symbols (a codeword) with a desired target probability distribution. The set of output codewords constitutes the codebook (or code) of a DM. A constant-composition DM (CCDM) uses arithmetic coding to efficiently encode data into codewords from a constant-composition (CC) codebook. The CC constraint limits the size of the codebook, and hence the coding rate of the CCDM. The performance of the CCDM degrades with decreasing output length. To improve the performance for short transmission blocks, we present a class of multi-composition (MC) codes and an efficient arithmetic coding scheme for encoding and decoding. The resulting multi-composition DM (MCDM) is able to encode more data into distribution-matched codewords than the CCDM and achieves lower KL divergence, especially for short block messages.
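To see why the CC constraint caps the rate: a CC codebook of length n with composition (n_1, ..., n_m) contains exactly the multinomial number of codewords, and a DM with k input bits can address at most 2^k of them. A short sketch, with an illustrative composition not taken from the paper:

```python
from math import factorial

def ccdm_rate(composition):
    """A CC codebook has n!/(n_1!...n_m!) codewords; a CCDM with k input
    bits can use at most 2^k of them, so k = floor(log2(size))."""
    n = sum(composition)
    size = factorial(n)
    for c in composition:
        size //= factorial(c)
    k = size.bit_length() - 1  # exact floor(log2(size))
    return size, k / n

# Illustrative target: composition (8, 6, 4, 2) over 4 symbols, n = 20.
size, rate = ccdm_rate([8, 6, 4, 2])
print(size, rate)  # codebook size and matching rate in bits per symbol
```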
We present a method for characterizing image-subtracted objects based on shapelet analysis to identify transient events in ground-based time-domain surveys. We decompose the image-subtracted objects onto a set of discrete Zernike polynomials and use their resulting coefficients to compare them to other point-like objects. We derive a norm in this Zernike space that we use to score transients for their point-like nature and show that it is a powerful comparator for distinguishing image artifacts, or residuals, from true astrophysical transients. Our method allows for a fast and automated way of scanning overcrowded, wide-field telescope images with minimal human interaction and we reduce the large set of unresolved artifacts left unidentified in subtracted observational images. We evaluate the performance of our method using archival intermediate Palomar Transient Factory and Dark Energy Camera survey images. However, our technique allows flexible implementation for a variety of different instruments and data sets. This technique shows a reduction in image subtraction artifacts by 99.95% for surveys extending up to hundreds of square degrees and has strong potential for automated transient identification in electromagnetic follow-up programs triggered by the Laser Interferometer Gravitational Wave Observatory-Virgo Scientific Collaboration.
Skin cancer is one of the major types of cancers, with an increasing incidence over the past decades. Accurately diagnosing skin lesions to discriminate between benign and malignant skin lesions is crucial to ensure appropriate patient treatment. While there are many computerised methods for skin lesion classification, convolutional neural networks (CNNs) have been shown to be superior to classical methods. In this work, we propose a fully automatic computerised method for skin lesion classification which employs optimised deep features from a number of well-established CNNs and from different abstraction levels. We use three pre-trained deep models, namely AlexNet, VGG16 and ResNet-18, as deep feature generators. The extracted features are then used to train support vector machine classifiers. In the final stage, the classifier outputs are fused to obtain a classification. Evaluated on the 150 validation images from the ISIC 2017 classification challenge, the proposed method is shown to achieve very good classification performance, yielding an area under the receiver operating characteristic curve of 83.83% for melanoma classification and of 97.55% for seborrheic keratosis classification.
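The core pipeline step, for one backbone only (the paper fuses three networks and multiple abstraction levels), can be sketched as follows, assuming PyTorch/torchvision and scikit-learn; the data are random stand-ins and weights are left uninitialized so the snippet runs offline.

```python
import torch
import torchvision.models as models
from sklearn.svm import SVC

# Backbone as a fixed feature generator (ResNet-18, one of the three models
# named above); pass weights=models.ResNet18_Weights.DEFAULT for pre-trained
# features in practice. The final classification layer is dropped.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()
backbone.eval()

def deep_features(images):          # images: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return backbone(images).numpy()

# Train an SVM on the extracted features (toy random stand-ins for data).
X = deep_features(torch.randn(12, 3, 224, 224))
y = [0, 1] * 6                      # benign / malignant labels (toy)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:2]))
```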
In this work we study dynamical changes in neuronal networks of neonatal rat cortex in vitro mediated by excitatory AMPA and NMDA receptors and inhibitory GABAA receptors. Extracellular network-wide activity was recorded with 59 planar electrodes simultaneously under different pharmacological conditions. We analyzed the changes in overall network activity and network-wide burst frequency between baseline and AMPA receptor (AMPA-R) or NMDA receptor (NMDA-R) driven activity, as well as between the latter states and disinhibited activity. Additionally, the spatiotemporal structures of pharmacologically modified bursts and the recruitment of electrodes during the network bursts were studied. Our results show that AMPA-Rs and NMDA-Rs have clearly distinct roles in network dynamics: AMPA-Rs are chiefly responsible for initiating network-wide bursts, whereas NMDA-Rs maintain the already initiated activity. GABAA receptors (GABAA-Rs) inhibit AMPA-R driven network activity more strongly than NMDA-R driven activity during the bursts.
There are two major routes to address the ubiquitous family of inverse problems appearing in signal and image processing, such as denoising or deblurring. A first route relies on Bayesian modeling, where prior probabilities are used to embody models of both the distribution of the unknown variables and their statistical dependence with the observed data. The estimation process typically relies on the minimization of an expected loss (e.g. minimum mean squared error, or MMSE). The second route has received much attention in the context of sparse regularization and compressive sensing: it consists in designing (often convex) optimization problems involving the sum of a data fidelity term and a penalty term promoting certain types of unknowns (e.g., sparsity, promoted through an l1 norm). Well-known relations between these two approaches have led to some widespread misconceptions. In particular, while the so-called Maximum A Posteriori (MAP) estimate with a Gaussian noise model does lead to an optimization problem with a quadratic data-fidelity term, we disprove through explicit examples the common belief that the converse would be true. It has already been shown [7, 9] that for denoising in the presence of additive Gaussian noise, for any prior probability on the unknowns, MMSE estimation can be expressed as a penalized least squares problem, with the apparent characteristics of a MAP estimation problem with Gaussian noise and a (generally) different prior on the unknowns. In other words, the variational approach is rich enough to build all possible MMSE estimators associated to additive Gaussian noise via a well-chosen penalty. We generalize these results beyond Gaussian denoising and characterize noise models for which the same phenomenon occurs. In particular, we prove that with (a variant of) Poisson noise and any prior probability on the unknowns, MMSE estimation can again be expressed as the solution of a penalized least squares optimization problem. For additive scalar denoising the phenomenon holds if and only if the noise distribution is log-concave. In particular, Laplacian denoising can (perhaps surprisingly) be expressed as the solution of a penalized least squares problem. In the multivariate case, the same phenomenon occurs when the noise model belongs to a particular subset of the exponential family. For multivariate additive denoising, the phenomenon holds if and only if the noise is white and Gaussian.
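For the Gaussian case discussed above, the key identity behind the MMSE-as-penalized-least-squares result is Tweedie's formula, a standard fact stated here for reference rather than taken verbatim from the paper: for $y = x + w$ with $w \sim \mathcal{N}(0, \sigma^2 I)$,

```latex
\[
  \psi_{\mathrm{MMSE}}(y) \;=\; \mathbb{E}[x \mid y]
  \;=\; y + \sigma^{2}\,\nabla \log p_{Y}(y),
\]
```

and the results of [7, 9] show that this map can always be rewritten as $\psi_{\mathrm{MMSE}}(y) = \arg\min_{z} \tfrac{1}{2}\lVert y - z\rVert^{2} + \phi(z)$ for a suitable penalty $\phi$, which generally differs from the negative log-prior.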
We give a complete characterization of those $f: [0,1] \to X$ (where $X$ is a Banach space which admits an equivalent Fr\'echet smooth norm) which allow an equivalent $C^2$ parametrization. For $X=\mathbb{R}$, a characterization is well-known. However, even in the case $X=\mathbb{R}^2$, several quite new ideas are needed. Moreover, the very close case of parametrizations with a bounded second derivative is solved.
The Marchenko phase-equivalent transformation of the Schr\"{o}dinger equation for two coupled channels is discussed. The combination of the Marchenko transformations valid in the Bargmann potential case is suggested.
This second paper of the series (see the first one in [1]) models the dynamics and structure of the upper hurricane layer in the adiabatic approximation. The formulation of a simplified aerodynamic model allows us to express analytically the radial distributions of pressure and wind speed components. The vertical evolution of these distributions and of the hurricane structure in the layer is described by a coupled set of equations for the vertical mass flux and vertical momentum balance, averaged over the eye wall cross section. Several realistic predictions of the model are demonstrated, including the change with altitude of the direction of the radial wind speed component and of the angular velocity of the hurricane.
Being an assembly of identical upright helixes, a chiral sculptured thin film (CSTF) exhibits the circular Bragg phenomenon and can therefore be used as a circular-polarization filter in a spectral regime called the circular Bragg regime. This has already been demonstrated in the near-infrared and short-wavelength infrared regimes. If two CSTFs are fabricated in identical conditions to differ only in the helical pitch, and if both are made of a material whose bulk refractive index is constant in a wide enough spectral regime, then the center wavelengths of the circular Bragg regimes of the two CSTFs must be in the same ratio as their helical pitches, by virtue of the scale invariance of the frequency-domain Maxwell postulates. This theoretical result was confirmed by measuring the linear-transmittance spectrums of two zinc-selenide CSTFs with helical pitches in the ratio 1:7.97. The center wavelengths were found to be in the ratio 1:7.1, the deviation from the ratio of helical pitches being explainable at least in part because the bulk refractive index of zinc selenide decreased a little with wavelength. We concluded that CSTFs can be fabricated to function as circular-polarization filters in the mid-wavelength infrared regime.
Given a graph $F$, let $I(F)$ be the class of graphs containing $F$ as an induced subgraph. Let $W[F]$ denote the minimum $k$ such that $I(F)$ is definable in $k$-variable first-order logic. The recognition problem of $I(F)$, known as Induced Subgraph Isomorphism (for the pattern graph $F$), is solvable in time $O(n^{W[F]})$. Motivated by this fact, we are interested in determining or estimating the value of $W[F]$. Using Olariu's characterization of paw-free graphs, we show that $I(K_3+e)$ is definable by a first-order sentence of quantifier depth 3, where $K_3+e$ denotes the paw graph. This provides an example of a graph $F$ with $W[F]$ strictly less than the number of vertices in $F$. On the other hand, we prove that $W[F]=4$ for all $F$ on 4 vertices except the paw graph and its complement. If $F$ is a graph on $t$ vertices, we prove a general lower bound $W[F]>(1/2-o(1))t$, where the function in the little-o notation approaches 0 as $t$ increases. This bound holds true even for a related parameter $W^*[F]\le W[F]$, which is defined as the minimum $k$ such that $I(F)$ is definable in the infinitary logic $L^k_{\infty\omega}$. We show that $W^*[F]$ can be strictly less than $W[F]$. Specifically, $W^*[P_4]=3$ for $P_4$ being the path graph on 4 vertices. Using the lower bound for $W[F]$, we also obtain a succinctness result for existential monadic second-order logic: the usage of just one monadic quantifier sometimes reduces the first-order quantifier depth at a super-recursive rate.
End-to-end spoken language understanding (SLU) models are a class of model architectures that predict semantics directly from speech. Because of their input and output types, we refer to them as speech-to-interpretation (STI) models. Previous works have successfully applied STI models to targeted use cases, such as recognizing home automation commands; however, no study has yet addressed how these models generalize to broader use cases. In this work, we analyze the relationship between the performance of STI models and the difficulty of the use case to which they are applied. We introduce empirical measures of dataset semantic complexity to quantify the difficulty of the SLU tasks. We show that near-perfect performance metrics for STI models reported in the literature were obtained with datasets that have low semantic complexity values. We perform experiments in which we vary the semantic complexity of a large, proprietary dataset and show that STI model performance correlates with our semantic complexity measures, such that performance increases as complexity values decrease. Our results show that it is important to contextualize an STI model's performance with the complexity values of its training dataset to reveal the scope of its applicability.
Mechanical characterization of brain tissue has been investigated extensively by various research groups over the past fifty years. These properties are particularly important for modelling Traumatic Brain Injury (TBI). In this research, we present the design and calibration of a High Rate Tension Device (HRTD) capable of performing tests up to a maximum strain rate of 90/s. We use experimental and numerical methods to investigate the effects of inhomogeneous deformation of porcine brain tissue during tension at different specimen thicknesses (4.0-14.0 mm), by performing tension tests at a strain rate of 30/s. One-term Ogden material parameters (mu = 4395.0 Pa, alpha = -2.8) were derived by performing an inverse finite element analysis to model all experimental data. A similar procedure was adopted to determine the Young's modulus (E = 11200 Pa) of the linear elastic regime. Based on this analysis, brain specimens of aspect ratio (diameter/thickness) S < 1.0 are required to minimise the effects of inhomogeneous deformation during tension tests.
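For reference, the one-term Ogden parameters quoted above conventionally enter a strain-energy density of the form (a common normalization; the paper's exact convention may differ)

\[ W = \frac{2\mu}{\alpha^2}\left(\lambda_1^{\alpha} + \lambda_2^{\alpha} + \lambda_3^{\alpha} - 3\right), \qquad \lambda_1 \lambda_2 \lambda_3 = 1, \]

where the $\lambda_i$ are the principal stretches and incompressibility is assumed, as is usual for soft tissue.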
Deterministically growing (wild-type) populations which seed stochastically developing mutant clones have found an expanding number of applications from microbial populations to cancer. The special case of exponential wild-type population growth, usually termed the Luria-Delbr\"uck or Lea-Coulson model, is often assumed but seldom realistic. In this article we generalise this model to different types of wild-type population growth, with mutants evolving as a birth-death branching process. Our focus is on the size distribution of clones - that is, the number of progeny of a founder mutant - which can be mapped to the total number of mutants. Exact expressions are derived for exponential, power-law and logistic population growth. Additionally, for a large class of population growth we prove that the long time limit of the clone size distribution has a general two-parameter form, whose tail decays as a power-law. Considering metastases in cancer as the mutant clones, upon analysing a data-set of their size distribution, we indeed find that a power-law tail is more likely than an exponential one.
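A minimal sketch of the mutant dynamics described above, simulating a single clone as a linear birth-death branching process with a Gillespie loop (Python; the rates and horizon are illustrative, and the coupling of seeding times to wild-type growth is omitted):

```python
import random

def clone_size(t_end, birth=1.0, death=0.1):
    """One mutant clone as a linear birth-death branching process,
    simulated with the Gillespie algorithm; returns its size at t_end."""
    n, t = 1, 0.0
    while n > 0:
        t += random.expovariate(n * (birth + death))
        if t > t_end:
            break
        n += 1 if random.random() < birth / (birth + death) else -1
    return n

# Empirical clone-size distribution for clones all founded at time 0.
sizes = [clone_size(t_end=5.0) for _ in range(10_000)]
```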
We report on the discovery of HAT-P-11b, the smallest radius transiting extrasolar planet (TEP) discovered from the ground, and the first hot Neptune discovered to date by transit searches. HAT-P-11b orbits the bright (V=9.587) and metal rich ([Fe/H] = +0.31 +/- 0.05) K4 dwarf star GSC 03561-02092 with P = 4.8878162 +/- 0.0000071 days and produces a transit signal with depth of 4.2 mmag. We present a global analysis of the available photometric and radial-velocity data that result in stellar and planetary parameters, with simultaneous treatment of systematic variations. The planet, like its near-twin GJ 436b, is somewhat larger than Neptune (17Mearth, 3.8Rearth) both in mass Mp = 0.081 +/- 0.009 MJ (25.8 +/- 2.9 Mearth) and radius Rp = 0.422 +/- 0.014 RJ (4.73 +/- 0.16 Rearth). HAT-P-11b resides in an eccentric orbit with e = 0.198 +/- 0.046 and omega = 355.2 +/- 17.3, causing a reflex motion of its parent star with amplitude 11.6 +/- 1.2 m/s, a challenging detection due to the high level of chromospheric activity of the parent star. Our ephemeris for the transit events is Tc = 2454605.89132 +/- 0.00032 (BJD), with duration 0.0957 +/- 0.0012 d, and secondary eclipse epoch of 2454608.96 +/- 0.15 d (BJD). The basic stellar parameters of the host star are M* = 0.809+0.020-0.027 Msun, R* = 0.752 +/- 0.021 Rsun and Teff = 4780 +/- 50 K. Importantly, HAT-P-11 will lie on one of the detectors of the forthcoming Kepler mission. We discuss an interesting constraint on the eccentricity of the system by the transit light curve and stellar parameters. We also present a blend analysis that for the first time treats the case of a blended transiting hot Jupiter mimicking a transiting hot Neptune, and proves that HAT-P-11b is not such a blend.
The magnetic dipole and electric quadrupole hyperfine constants of the Aluminium ($^{27}$Al) atom are computed using the relativistic coupled cluster (CC) and unitary coupled cluster (UCC) methods. Effects of electron correlations are investigated using different levels of CC approximations and truncation schemes. The ionization potentials (IPs), excitation energies (EEs), transition probabilities, oscillator strengths and nuclear quadrupole moment are computed to assess the accuracy of these schemes. The nuclear quadrupole moments obtained from the present CC and UCC calculations in the singles and doubles approximation are 142.5 mbarn and 141.5 mbarn, respectively. The discrepancies between our calculated IPs and EEs and the measured values are less than 0.3%. The other one-electron properties reported here are also in excellent agreement with the measurements.
This article proves that if M is a smooth manifold of dimension at least four, then for generic choice of metric on M, all prime parametrized minimal surfaces in M are free of branch points and lie on nondegenerate critical submanifolds for the two-variable energy function which have the same dimension as the group of complex automorphisms of the domain Riemann surface.
For $1\leq p\leq \infty$ and $\alpha>0$, Besov spaces $B^p_\alpha$ play a key role in the theory of $\alpha$-M\"obius invariant function spaces. In some sense, $B^1_\alpha$ is the minimal $\alpha$-M\"obius invariant function space, $B^2_\alpha$ is the unique $\alpha$-M\"obius invariant Hilbert space, and $B^\infty_\alpha$ is the maximal $\alpha$-M\"obius invariant function space. In this paper, under the $\alpha$-M\"obius invariant pairing and by the space $B^\infty_\alpha$, we identify the predual and dual spaces of $B^1_\alpha$. In particular, the corresponding identifications are isometric isomorphisms. The duality theorem via the $\alpha$-M\"obius invariant pairing for $B^p_\alpha$ with $p>1$ is also given.
For salient subject recognition, computer algorithms have traditionally relied on scanning images systematically from top-left to bottom-right and applying brute force when attempting to locate objects of interest, which makes the process quite time-consuming. Here a novel approach and a simple solution to the above problem are discussed. In this paper, we implement an approach to object manipulation and detection through a segmentation map, which helps to desaturate or, in other words, wash out the background of the image. The performance is evaluated using the Jaccard index against the well-known ground-truth target-box technique.
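The Jaccard evaluation mentioned above reduces to intersection-over-union of binary masks; a minimal sketch (Python with NumPy; the function name and empty-mask convention are our own choices):

```python
import numpy as np

def jaccard_index(pred, truth):
    """Jaccard index (intersection over union) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return np.logical_and(pred, truth).sum() / union
```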
Generalized estimating equations (GEE) are of great importance in analyzing clustered data without full specification of multivariate distributions. A recent approach jointly models the mean, variance, and correlation coefficients of clustered data through three sets of regressions (Luo and Pan, 2022). We observe that these estimating equations, however, are a special case of those of Yan and Fine (2004), which further allow the variance to depend on the mean through a variance function. The variance estimators proposed by Luo and Pan may be incorrect for the variance and correlation parameters because of a subtle dependence induced by the nested structure of the estimating equations. We characterize model settings where their variance estimation is invalid and show that the variance estimators in Yan and Fine (2004) correctly account for such dependence. In addition, we introduce a novel model selection criterion that enables the simultaneous selection of the mean-scale-correlation model. The sandwich variance estimator and the proposed model selection criterion are tested in several simulation studies and a real data analysis, which validate their effectiveness in variance estimation and model selection. Our work also extends the R package geepack with the flexibility to apply different working covariance matrices for the variance and correlation structures.
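For orientation, the sandwich (robust) variance estimator at issue has, up to sign conventions, the generic form

\[ \widehat{\operatorname{Var}}(\hat\theta) = A^{-1} B \,(A^{-1})^{\top}, \qquad A = \sum_i \frac{\partial U_i(\hat\theta)}{\partial \theta^{\top}}, \qquad B = \sum_i U_i(\hat\theta)\, U_i(\hat\theta)^{\top}, \]

where $U_i$ is the stacked estimating function of cluster $i$. Roughly speaking, the dependence discussed above enters through the off-diagonal blocks of $A$ that couple the mean, scale, and correlation equations; neglecting them is what can invalidate the variance estimates.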
Fluid flows in nature and applications are frequently subject to periodic velocity modulations. Surprisingly, even for the generic case of flow through a straight pipe, there is little consensus regarding the influence of pulsation on the transition threshold to turbulence: while most studies predict a monotonically increasing threshold with pulsation frequency (i.e. Womersley number, $\alpha$), others observe a decreasing threshold for identical parameters and only observe an increasing threshold at low $\alpha$. In the present study we apply recent advances in the understanding of transition in steady shear flows to pulsating pipe flow. For moderate pulsation amplitudes we find that the first instability encountered is subcritical (i.e. requiring finite-amplitude disturbances) and gives rise to localized patches of turbulence ("puffs"), analogous to steady pipe flow. By monitoring the impact of pulsation on the lifetime of turbulence we map the onset of turbulence in parameter space. Transition in pulsatile flow can be separated into three regimes. At small Womersley numbers the dynamics are dominated by the decay turbulence suffers during the slower part of the cycle, and hence transition is delayed significantly. As we show, in this regime thresholds closely agree with estimates based on a quasi-steady flow assumption that takes only puff decay rates into account. The transition point predicted in the zero-$\alpha$ limit equals the critical point for steady pipe flow offset by the oscillation Reynolds number. In the high-frequency limit puff lifetimes are identical to those in steady pipe flow and hence the transition threshold appears to be unaffected by flow pulsation. In the intermediate frequency regime the transition threshold sharply drops (with increasing $\alpha$) from the decay-dominated (quasi-steady) threshold to the steady pipe flow level.
Poisoning attacks have emerged as a significant security threat to machine learning algorithms. It has been demonstrated that adversaries who make small changes to the training set, such as adding specially crafted data points, can hurt the performance of the output model. Some of the stronger poisoning attacks require full knowledge of the training data. This leaves open the possibility of achieving the same attack results using poisoning attacks that do not have full knowledge of the clean training set. In this work, we initiate a theoretical study of the problem above. Specifically, for the case of feature selection with LASSO, we show that full-information adversaries (that craft poisoning examples based on the rest of the training data) are provably stronger than the optimal attacker that is oblivious to the training set yet has access to the distribution of the data. Our separation result shows that the two settings, data-aware and data-oblivious, are fundamentally different, and we cannot hope to always achieve the same attack or defense results in these scenarios.
Given the stellar density near the galactic center, close encounters between compact object binaries and the supermassive black hole are a plausible occurrence. We present results from a numerical study of close to 13 million such encounters. Consistent with previous studies, we corroborate that, for binary systems tidally disrupted by the black hole, the component of the binary remaining bound to the hole has eccentricity ~ 0.97 and circularizes dramatically by the time it enters the classical LISA band. Our results also show that the population of surviving binaries merits attention. These binary systems experience perturbations to their internal orbital parameters with potentially interesting observational consequences. We investigated the regions of parameter space for survival and estimated the distribution of orbital parameters post-encounter. We found that surviving binaries harden and their eccentricity increases, thus accelerating their merger due to gravitational radiation emission and increasing the predicted merger rates by up to 1%.
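For scale, the merger acceleration noted above can be read off the standard Peters (1964) results: a circular binary of masses $m_1$, $m_2$ and separation $a$ coalesces by gravitational-wave emission after

\[ T_{\mathrm{merge}} = \frac{5}{256}\,\frac{c^5 a^4}{G^3 m_1 m_2 (m_1+m_2)}, \]

and a nonzero eccentricity shortens this time by roughly a factor $(1-e^2)^{7/2}$, so hardened, more eccentric survivors merge sooner.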
The use of floating bipolar electrodes in copper electrowinning cells constitutes a nonconventional technology that promises economic and operational impacts. This thesis presents a computational tool for the simulation and analysis of such electrochemical cells. A new model is developed for the floating electrodes, and a finite-difference method is used to obtain the three-dimensional distribution of the potential and the current-density field inside the cell. The analysis of the results is based on a technique for the interactive visualization of three-dimensional vector fields as streamlines.
We describe the PDFI_SS software library, which is designed to find the electric field at the Sun's photosphere from a sequence of vector magnetogram and Doppler velocity measurements, together with estimates of horizontal velocities obtained from local correlation tracking using the recently upgraded FLCT code. The library, a collection of Fortran subroutines, uses the "PDFI" technique described by Kazachenko et al. (2014), but modified for use in spherical, Plate-Carr\'ee geometry on a staggered grid. The domain over which solutions are found is a subset of the global spherical surface, defined by user-specified limits of colatitude and longitude. Our staggered-grid approach, based on that of Yee (1966), is more conservative and self-consistent compared to the centered, Cartesian grid used by Kazachenko et al. (2014). The library can be used to compute an end-to-end solution for electric fields from data taken by the HMI instrument aboard NASA's SDO Mission. This capability has been incorporated into the HMI pipeline processing system operating at SDO's JSOC. The library is written in a general and modular way so that the calculations can be customized to modify or delete electric field contributions, or used with other data sets. Other applications include "nudging" numerical models of the solar atmosphere to facilitate assimilative simulations. The library includes an ability to compute "global" (whole-Sun) electric field solutions. The library also includes an ability to compute Potential Magnetic Field solutions in spherical coordinates. This distribution includes a number of test programs which allow the user to test the software.
The structure of the Cu(110) surface is studied at high temperatures using a combination of lattice-gas Monte Carlo and molecular dynamics methods with identical many-atom interactions derived from the effective medium theory. The anisotropic six-vertex model is used in the interpretation of the lattice-gas results. We find a clear roughening transition around T_R = 1000 K, with T_R/T_M = 0.81. Molecular dynamics reveals the clustering of surface defects as the atomistic mechanism of the transition and allows us to estimate characteristic time scales. For a system of size 50x50, the time scale of the local roughening of an initially smooth surface at 1150 K is of the order of 100 ps.
The theory of factor-equivalence of integral lattices gives a far reaching relationship between the Galois module structure of units of the ring of integers of a number field and its arithmetic. For a number field $K$ that is Galois over $\mathbb{Q}$ or an imaginary quadratic field, we prove a necessary and sufficient condition on the quotients of class numbers of subfields of $K$, for the quotient $E_{K}$ of the group of units of the ring of integers of $K$ by the subgroup of roots of unity to be factor equivalent to the standard cyclic Galois module. By using strong arithmetic properties of totally real $p$-rational number fields, we prove that the non-abelian $p$-rational $p$-extensions of $\mathbb{Q}$ do not have Minkowski units, which extends a result of Burns to non-abelian number fields. We also study the relative Galois module structure of $E_{L}$ for varying Galois extensions $L/F$ of totally real $p$-rational number fields whose Galois groups are isomorphic to a fixed finite group $G$. In that case, we prove that there is a finite set $\Omega$ of $\mathbb{Z}_p[G]$-lattices such that for every $L$, $\mathbb{Z}_{p} \otimes_{\mathbb{Z}} E_{L}$ is factor equivalent to $\mathbb{Z}_{p}[G]^{n} \oplus X$ as $\mathbb{Z}_p[G]$-lattices for some $X \in \Omega$ and an integer $n \geq 0$.
In a previous paper [3] we computed the cohomology groups $H^5(\Gamma_0(N), \mathbb{C})$, where $\Gamma_0(N)$ is a certain congruence subgroup of $SL(4, \mathbb{Z})$, for a range of levels $N$. In this note we update this earlier work by extending the range of levels, and we describe cuspidal cohomology classes and additional boundary phenomena found since the publication of [3]. The cuspidal cohomology classes in this paper are the first cusp forms for GL(4) concretely constructed in terms of Betti cohomology.
Throughout the course of mathematical history, generalizations of previously understood concepts and structures have led to the fruitful development of the hierarchy of number systems, non-euclidean geometry, and many other epochal phases in mathematical progress. In the study of formalized theories of arithmetic, it is only natural to consider the extension from the standard model of Peano arithmetic, $\langle \mathbb{N},+,\times,\leq,0,1 \rangle$, to non-standard models of arithmetic. The existence of non-standard models of Peano arithmetic provided motivation in the early $20^{th}$ century for a variety of questions in model theory regarding the classification of models up to isomorphism and the properties that non-standard models of Peano arithmetic have. This paper presents these questions and the necessary results to prove Tennenbaum's Theorem, which draws an explicit line between the properties of standard and non-standard models; namely, that no countable non-standard model of Peano arithmetic is recursive. These model-theoretic results have contributed to the foundational framework within which research programs developed by Skolem, Rosser, Tarski, Mostowski and others have flourished. While such foundational topics were crucial to active fields of research during the middle of the $20^{th}$ century, numerous open questions about models of arithmetic, and model theory in general, still remain pertinent to the realm of $21^{st}$ century mathematical discourse.
The recent discovery of a spectacular dust plume in the system 2XMM J160050.7-514245 (referred to as "Apep") suggested a physical origin in a colliding-wind binary by way of the "Pinwheel" mechanism. Observational data pointed to a hierarchical triple-star system, however several extreme and unexpected physical properties seem to defy the established physics of such objects. Most notably, a stark discrepancy was found in the observed outflow speed of the gas as measured spectroscopically in the line-of-sight direction compared to the proper motion expansion of the dust in the sky plane. This enigmatic behaviour arises at the wind base within the central Wolf-Rayet binary: a system that has so far remained spatially unresolved. Here we present an updated proper-motion study over a two-year baseline, deriving an expansion speed for Apep's dust plume that is four times slower than the spectroscopic wind speed, confirming and strengthening the previous finding. We also present the results from high-angular-resolution near-infrared imaging studies of the heart of the system, revealing a close binary with properties matching a Wolf-Rayet colliding-wind system. Based on these new observational constraints, an improved geometric model is presented yielding a close match to the data, constraining the orbital parameters of the Wolf-Rayet binary and lending further support to the anisotropic wind model.
The charge $F_1(0)$ and magnetic $F_2(0)$ form factors of heavy charged leptons have been shown, in the framework of perturbation theory, to have imaginary parts. The imaginary parts of the form factors for the muon and the tau lepton have been calculated at the two-loop level in the Standard Model. The effects in which these imaginary parts could manifest themselves are discussed.
A generalist robot equipped with learned skills must be able to perform many tasks in many different environments. However, zero-shot generalization to new settings is not always possible. When the robot encounters a new environment or object, it may need to finetune some of its previously learned skills to accommodate this change. But crucially, previously learned behaviors and models should still be suitable to accelerate this relearning. In this paper, we aim to study how generative models of possible outcomes can allow a robot to learn visual representations of affordances, so that the robot can sample potentially possible outcomes in new situations, and then further train its policy to achieve those outcomes. In effect, prior data is used to learn what kinds of outcomes may be possible, such that when the robot encounters an unfamiliar setting, it can sample potential outcomes from its model, attempt to reach them, and thereby update both its skills and its outcome model. This approach, visuomotor affordance learning (VAL), can be used to train goal-conditioned policies that operate on raw image inputs, and can rapidly learn to manipulate new objects via our proposed affordance-directed exploration scheme. We show that VAL can utilize prior data to solve real-world tasks such as drawer opening, grasping, and placing objects in new scenes with only five minutes of online experience in the new scene.
With the emergence of collaborative robots (cobots), human-robot collaboration in industrial manufacturing is coming into focus. For a cobot to act autonomously and as an assistant, it must understand human actions during assembly. To effectively train models for this task, a dataset containing suitable assembly actions in a realistic setting is crucial. For this purpose, we present the ATTACH dataset, which contains 51.6 hours of assembly with 95.2k annotated fine-grained actions monitored by three cameras, which represent potential viewpoints of a cobot. Since in an assembly context workers tend to perform different actions simultaneously with their two hands, we annotated the performed actions for each hand separately. Therefore, in the ATTACH dataset, more than 68% of annotations overlap with other annotations, which is many times more than in related datasets that typically feature more simplistic assembly tasks. For better generalization with respect to the background of the working area, we not only recorded color and depth images but also used the Azure Kinect body tracking SDK for estimating 3D skeletons of the worker. To create a first baseline, we report the performance of state-of-the-art methods for action recognition as well as action detection on video and skeleton-sequence inputs. The dataset is available at https://www.tu-ilmenau.de/neurob/data-sets-code/attach-dataset .
In this paper, the thermoEMF of powder-based thermoelectric materials (TEM) is calculated. The calculation is made under the assumption of a power-law dependence of the mean free path on energy. The thermoEMF decreases with increasing average radius of the powder particles; however, it increases drastically with an increase in the exponent of the power-law dependence of the mean free path (relaxation time) on energy, i.e. the scattering index. It therefore turns out that the thermoEMF of powder-based TEM with a higher scattering index can be even greater than that of a single-crystal material with a low scattering index. As a consequence, a significant increase in the thermoEMF and, hence, in the thermoelectric figure of merit of TEM in going to powder materials, especially in the case of a degenerate electron gas, can be expected only if dielectric or vacuum barriers between powder particles do not lead to a significant decrease in electrical conductivity. At the same time, tunnelling through the above-mentioned barriers should provide an energy filtration of charge carriers that leads both to an increase in the proportion of "useful" charge carriers with energy greater than the chemical potential and to an increase in the scattering index. Experimentally, however, no significant increase in thermoEMF is observed in going from single-crystal materials to powders, most likely because such energy filtering does not take place.
We investigated the paramagnetic resonance in single crystals of LiCuVO$_4$ with special attention to the angular variation of the absorption spectrum. To explain the large resonance linewidth of the order of 1 kOe, we analyzed the anisotropic exchange interaction in the chains of edge-sharing CuO$_6$ octahedra, taking into account the ring-exchange geometry of the nearest-neighbor coupling via two symmetric rectangular Cu-O bonds. The exchange parameters, which can be estimated from theoretical considerations, nicely agree with the parameters obtained from the angular dependence of the linewidth. The anisotropy of this magnetic ring exchange is found to be much larger than is usually expected from conventional estimations which neglect the bonding geometry. Hence, the data yield evidence that in copper oxides with edge-sharing structures the role of the orbital degrees of freedom is strongly enhanced. These findings establish LiCuVO$_4$ as a one-dimensional compound at high temperatures.
Topologically ordered systems exhibit large-scale correlation in their ground states, which may be characterized by quantities such as topological entanglement entropy. We propose that the concept of irreducible many-body correlation, the correlation that cannot be implied by all local correlations, may also be used as a signature of topological order. In a topologically ordered system, we demonstrate that for a part of the system with holes, the reduced density matrix exhibits irreducible many-body correlation which becomes reducible when the holes are removed. The appearance of these irreducible correlations then represents a key feature of the topological phase. We analyze the many-body correlation structures in the ground state of the toric code model in an external magnetic field, and show that the topological phase transition is signaled by the irreducible many-body correlations.
An integration of distributionally robust risk allocation into sampling-based motion planning algorithms for robots operating in uncertain environments is proposed. We perform non-uniform risk allocation by decomposing the distributionally robust joint risk constraints, defined over the entire planning horizon, into individual risk constraints given the total risk budget. Specifically, the deterministic tightening defined using the individual risk constraints is leveraged to define our proposed exact risk-allocation procedure. Embedding the risk-allocation technique into sampling-based motion planning algorithms yields guaranteed-conservative, yet increasingly risk-feasible, trajectories for efficient state-space exploration.
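A minimal sketch of the risk-allocation idea (Python; this is generic Boole-inequality budgeting with a common reallocation heuristic, not the paper's exact procedure):

```python
def uniform_allocation(total_risk, n_steps):
    """Boole's inequality: if each step violates its constraint with
    probability at most delta_i and sum(delta_i) <= total_risk, the
    joint chance constraint over the horizon holds."""
    return [total_risk / n_steps] * n_steps

def reallocate(deltas, active):
    """Non-uniform heuristic: move budget from steps whose constraints
    are inactive (slack) to the active ones, preserving the total."""
    slack = sum(d for d, a in zip(deltas, active) if not a)
    n_active = max(sum(active), 1)
    return [d + slack / n_active if a else 0.0
            for d, a in zip(deltas, active)]
```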
When the bulk geometry in AdS/CFT contains a black hole, the boundary reconstruction of a given bulk operator will often necessarily depend on the choice of black hole microstate, an example of state dependence. As a result, whether a given bulk operator can be reconstructed on the boundary at all can depend on whether the black hole is described by a pure state or thermal ensemble. We refine this dichotomy, demonstrating that the same boundary operator can often be used for large subspaces of black hole microstates, corresponding to a constant fraction $\alpha$ of the black hole entropy. In the Schr\"odinger picture, the boundary subregion encodes the $\alpha$-bits (a concept from quantum information) of a bulk region containing the black hole and bounded by extremal surfaces. These results have important consequences for the structure of AdS/CFT and for quantum information. First, they imply that bulk reconstruction is necessarily only approximate and allow us to place non-perturbative lower bounds on the error when doing so. Second, they provide a simple and tractable limit in which the entanglement wedge is state-dependent, but in a highly controlled way. Although the state dependence of operators comes from ordinary quantum error correction, there are clear connections to the Papadodimas-Raju proposal for understanding operators behind black hole horizons. In tensor network toy models of AdS/CFT, we see how state dependence arises from the bulk operator being `pushed' through the black hole itself. Finally, we show that black holes provide the first `explicit' examples of capacity-achieving $\alpha$-bit codes. Unintuitively, Hawking radiation always reveals the $\alpha$-bits of a black hole as soon as possible. In an appendix, we apply a result from the quantum information literature to prove that entanglement wedge reconstruction can be made exact to all orders in $1/N$.
We compute the radiative corrections to the mass of a test boson field in an inflating space-time. The calculations are carried out for the bosonic part of a supersymmetric chiral multiplet. We show that its mass is preserved up to logarithmic divergences in both the ultraviolet and infrared domains. Consequences of these results for inflationary models are discussed.
Next-generation high-power lasers that can be focused to intensities exceeding 10^23 W/cm^2 are enabling new physics and applications. The physics of how these lasers interact with matter is highly nonlinear, relativistic, and can involve lowest-order quantum effects. The current tool of choice for modeling these interactions is the particle-in-cell (PIC) method. In strong fields, the motion of charged particles and their spin is affected by radiation reaction (RR). Standard PIC codes usually use the Boris pusher or its variants to advance the particles, which requires very small time steps in the strong-field regime to obtain accurate results. In addition, some problems require tracking the spin of particles, which creates a 9D particle phase space (x, u, s). Therefore, numerical algorithms that enable high-fidelity modeling of the 9D phase space in the strong-field regime are desired. We present a new 9D phase space particle pusher based on analytical solutions to the position, momentum and spin advance from the Lorentz force, together with the semi-classical form of RR in the Landau-Lifshitz equation and spin evolution given by the Bargmann-Michel-Telegdi equation. These analytical solutions are obtained by assuming a locally uniform and constant electromagnetic field during a time step. The solutions provide the 9D phase space advance in terms of a particle's proper time, and a mapping is used to determine the proper time step for each particle from the simulation time step. Due to the analytical integration, the constraint on the time step needed to resolve trajectories in ultra-high fields can be greatly reduced. We present single-particle simulations and full PIC simulations to show that the proposed particle pusher can greatly improve the accuracy of particle trajectories in 9D phase space for given laser fields. A discussion on the numerical efficiency of the proposed pusher is also provided.
Within the paradigm of non-perturbative Einstein gravity we study continuous manifolds which possess de Sitter interiors and Kerr exteriors. These manifolds could represent the spacetime of rotating gravastars or other similar black hole mimickers. The scheme presented here allows for a $C^{n}$ transition from the exactly de Sitter interior to the exactly Kerr exterior, with $n$ arbitrarily large. Generic properties that such models must possess are discussed, such as the changing of the topology of the ergosphere from $S^{2}$ to $S^{1}\times S^{1}$. It is shown how in the outer layers of the transition region (the "atmosphere" as it is often called in astrophysics) the dominant/weak and strong energy conditions can be respected. However, much like in the case of its static spherically symmetric gravastar counterpart, there must be some assumptions imposed in the atmosphere for the energy conditions to hold. These assumptions turn out to not be severe. The class of manifolds presented here are expected to possess all the salient features of the fully generic case. Strictly speaking, a number of the results are also applicable to the locally anti-de Sitter core scenario, although we focus on the case of a positive cosmological constant.
We investigate quantum aspects of Gopakumar-Minwalla-Strominger (GMS) solutions of noncommutative field theory (NCFT) in the large noncommutativity limit, $\theta \to \infty$. Building upon a quantitative map between the operator formulation of 2-(respectively, (2+1))-dimensional NCFTs and large $N$ matrix models of $c=0$ (respectively, $c=1$) noncritical strings, we show that GMS solutions are quantum mechanically sensible only if we make an appropriate joint scaling of $\theta$ and $N$. For 't Hooft's planar scaling, GMS solutions are replaced by large $N$ saddle-point solutions. GMS solutions are recovered from the saddle-point solutions in the small 't Hooft coupling regime, but are destabilized by quantum effects in the large 't Hooft coupling regime. We make comparisons between these large $N$ effects and recently studied infrared effects in NCFTs. We estimate U(N) symmetry-breaking gradient effects and argue that they are suppressed only in the small 't Hooft coupling regime.
We consider the pion-photon transition form factor at low to intermediate spacelike momenta within the theoretical framework of light-cone sum rules. We derive predictions which take into account all currently known contributions stemming from QCD perturbation theory up to the next-to-next-to-leading order (NNLO) and by including all twist terms up to order six. In order to enable a more detailed comparison with forthcoming high-precision data, we also estimate the main systematic theoretical uncertainties, stemming from various sources, and discuss their influence on the calculations --- in particular the dominant one related to the still uncalculated part of the NNLO contribution. The analysis addresses, in broad terms, also the role of the twist-two pion distribution amplitude derived with different approaches.
This paper presents the first results of two-phase flow simulations obtained using the recently introduced physical, mathematical and numerical model of the intermittency region between two phases (Wac{\l}awczyk 2017, 2021). The statistical interpretation of the intermittency region evolution equations allows one to account for non-equilibrium effects in the domain separating the two phases. The source of non-equilibrium is spatial variation in the ratio of the work done by volume and interfacial forces, which governs the region's width. As the statistical description of two-phase flow differs from the deterministic two-phase flow models known in the literature, in the present work we focus the discussion of the results on these differences. To this end, a rising two-dimensional gas bubble is studied, and the differences between equilibrium and non-equilibrium solutions are investigated. It is argued that the statistical description of the intermittency region has the potential to account for physical phenomena not considered previously in computer simulations of two-phase flow.
We consider dense wireless random-access networks, modeled as systems of particles with hard-core interaction. The particles represent the network users that try to become active after an exponential back-off time, and stay active for an exponential transmission time. Due to wireless interference, active users prevent other nearby users from simultaneous activity, which we describe as hard-core interaction on a conflict graph. We show that dense networks with aggressive back-off schemes lead to extremely slow transitions between dominant states, and inevitably cause long mixing times and starvation effects.
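A minimal sketch of the model described above (Python): continuous-time hard-core dynamics on a conflict graph, where an unblocked inactive node activates at an assumed exponential back-off rate beta and an active node deactivates at rate 1 (unit-mean transmission times).

```python
import random

def simulate(conflicts, beta, t_end):
    """Hard-core dynamics on a conflict graph: an inactive node with no
    active neighbour activates at back-off rate beta; an active node
    deactivates at rate 1."""
    n = len(conflicts)
    active = [False] * n
    t = 0.0
    while True:
        rates = [1.0 if active[i] else
                 (beta if not any(active[j] for j in conflicts[i]) else 0.0)
                 for i in range(n)]
        total = sum(rates)
        t += random.expovariate(total)
        if t > t_end:
            return active
        r, i = random.random() * total, 0   # pick node i with prob rates[i]/total
        while r >= rates[i]:
            r -= rates[i]
            i += 1
        active[i] = not active[i]

# Example: 4-cycle conflict graph given as adjacency lists.
print(simulate([[1, 3], [0, 2], [1, 3], [0, 2]], beta=10.0, t_end=100.0))
```

With aggressive back-off (large beta) on this bipartite example, the system spends long stretches in one of the two dominant independent sets, illustrating the slow mixing discussed above.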
We look for spectral-type differential equations for the generalized Jacobi polynomials found by T.H. Koornwinder in 1984 and for the Sobolev-Laguerre polynomials. We introduce a method which makes use of computer algebra packages like Maple and Mathematica, and we give some preliminary results.
We consider gravity in 2+1 dimensions in the presence of extended stationary sources with rotational symmetry. We prove by direct use of Einstein's equations that if i) the energy-momentum tensor satisfies the weak energy condition, ii) the universe is open (conical at space infinity), and iii) there are no closed timelike curves (CTCs) at space infinity, then there are no CTCs at all.
A trie $\mathcal{T}$ is a rooted tree such that each edge is labeled by a single character from the alphabet, and the labels of out-going edges from the same node are mutually distinct. Given a trie $\mathcal{T}$ with $n$ edges, we show how to compute all distinct palindromes and all maximal palindromes on $\mathcal{T}$ in $O(n)$ time, in the case of integer alphabets of size polynomial in $n$. This improves the state-of-the-art $O(n \log h)$-time algorithms by Funakoshi et al. [PCS 2019], where $h$ is the height of $\mathcal{T}$. Using our new algorithms, the eertree with suffix links for a given trie $\mathcal{T}$ can readily be obtained in $O(n)$ time. Further, our trie-based $O(n)$-space data structure allows us to report all distinct palindromes and maximal palindromes in a query string represented in the trie $\mathcal{T}$, in output optimal time. This is an improvement over an existing (na\"ive) solution that precomputes and stores all distinct palindromes and maximal palindromes for each and every string in the trie $\mathcal{T}$ separately, using a total $O(n^2)$ preprocessing time and space, and reports them in output optimal time upon query.
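For the special case where the trie is a single path (i.e., one string), maximal palindromes are already computable in $O(n)$ time by Manacher's algorithm; a sketch follows (Python), with the caveat that the paper's contribution is the substantially harder trie generalization.

```python
def maximal_palindromes(s):
    """Manacher's algorithm: for the '#'-interleaved string, rad[i] is the
    radius of the maximal palindrome centred at position i, in O(n) total."""
    t = '#' + '#'.join(s) + '#'
    n = len(t)
    rad = [0] * n
    c = r = 0                      # centre and right end of rightmost palindrome
    for i in range(n):
        if i < r:
            rad[i] = min(r - i, rad[2 * c - i])   # mirror within known palindrome
        while (i - rad[i] - 1 >= 0 and i + rad[i] + 1 < n
               and t[i - rad[i] - 1] == t[i + rad[i] + 1]):
            rad[i] += 1
        if i + rad[i] > r:
            c, r = i, i + rad[i]
    return rad        # rad[i] equals the length of the palindrome in s

print(maximal_palindromes("abacaba"))
```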
Observations of Her X-1 by the Extreme Ultraviolet Explorer (EUVE) at the end of the x-ray Short High state are reported here. Her X-1 is found to exhibit a strong orbital modulation of the EUV flux, with a large dip superposed on a broad peak around orbital phase 0.5, when the neutron star is closest to the observer. Alternative mechanisms for producing the observed EUV lightcurve are modeled. We conclude that: i) the x-ray heated surface of the companion is too cool to produce enough emission; ii) the accretion disk can produce enough emission but does not explain the orbital modulation; iii) reflection of x-rays off the companion can produce the shape and intensity of the observed lightcurve. The only viable cause for the large dip at orbital phase 0.5 is shadowing of the companion by the accretion disk.
The internal control problem for the Kadomtsev-Petviashvili II equation, known as KP-II, is the object of study in this paper. Controllability in $L^2(T)$ from a vertical strip is proved using the Hilbert Uniqueness Method together with techniques of semiclassical and microlocal analysis. Additionally, a negative result for controllability in $L^2(T)$ from a horizontal strip is also shown.
We study interacting bosons in a two-dimensional bipartite optical lattice. By focusing on the regime where the first three excited bands are nearly degenerate, we derive a three-orbital tight-binding model which captures the most relevant features of the band structure. In addition, we derive a corresponding generalized Bose-Hubbard model and solve it numerically in different situations, both with and without a confining trap. In particular, we find that the hybridization between sublattices can strongly influence the phase diagrams and, in a trap, can even enable the appearance of condensed phases intersecting the same Mott-insulating plateaus.
We propose a novel value approximation method, namely Eigensubspace Regularized Critic (ERC), for deep reinforcement learning (RL). ERC is motivated by an analysis of the dynamics of the Q-value approximation error in the Temporal-Difference (TD) method, which follows a path defined by the 1-eigensubspace of the transition kernel associated with the Markov Decision Process (MDP). This reveals a fundamental property of TD learning that has remained unused in previous deep RL approaches. In ERC, we propose a regularizer that guides the approximation error towards the 1-eigensubspace, resulting in a more efficient and stable path of value approximation. We also theoretically prove the convergence of the ERC method. Moreover, theoretical analysis and experiments demonstrate that ERC effectively reduces the variance of value functions. ERC outperforms state-of-the-art methods on 20 of the 26 tasks in the DMControl benchmark and shows significant advantages in Q-value approximation and variance reduction. Our code is available at https://sites.google.com/view/erc-ecml23/.
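Since the transition kernel is row-stochastic, its 1-eigensubspace contains the constant vectors; one plausible reading of the regularizer above is therefore a penalty on the component of the TD error orthogonal to that subspace. A hedged sketch (Python with PyTorch; the coefficient and exact form are illustrative, not the paper's):

```python
import torch

def erc_loss(q, target, beta=0.005):
    """TD loss plus a hypothetical ERC-style term: penalize the part of
    the approximation error orthogonal to span{1} (the 1-eigensubspace
    of a row-stochastic transition kernel)."""
    err = q - target
    td = err.pow(2).mean()
    off_subspace = err - err.mean()          # component off span{1}
    return td + beta * off_subspace.pow(2).mean()
```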
We report on the elastic contact between a spherical lens and a patterned substrate composed of a hexagonal lattice of cylindrical pillars. The stress field and the size of the contact area are obtained by means of numerical methods (discrete Green's function superposition and iterative bisection-like methods). For small indentations, a transition from a Hertzian to a Boussinesq-Cerruti-like behavior is observed when the surface fraction of the substrate covered by pillars is increased. In particular, we present a master curve defined by two dimensionless parameters, which allows one to predict the stress at the center of the contact region in terms of the surface fraction occupied by pillars. The transition between the limiting contact regimes, Hertzian and Boussinesq-Cerruti-like, is well described by a rational function. Additionally, a simple model is presented to describe the Boussinesq-Cerruti-like contact between the lens and a single elastic pillar, taking into account the pillar geometry and the elastic properties of the two bodies.
Calculations of the bootstrap current for the TJ-II stellarator are presented. DKES and NEO-MC codes are employed; the latter has allowed, for the first time, the precise computation of the bootstrap transport coefficient in the long mean free path regime of this device. The low error bars allow a precise convolution of the monoenergetic coefficients, which is confirmed by error analysis. The radial profile of the bootstrap current is presented for the first time for the 100_44_64 configuration of TJ-II for three different collisionality regimes. The bootstrap coefficient is then compared to that of other configurations of TJ-II regularly operated. The results show qualitative agreement with toroidal current measurements; precise comparison with real discharges is ongoing.