U_2=U_4$. In the four-band tunneling case, transmission decreases in the $T^+_+$, $T^-_+$ and $T^-_-$ channels compared with the single barrier case, whereas it increases in the $T^+_-$ channel. Transmission is suppressed in the gap region when an interlayer bias is introduced. Our results are relevant for electron confinement in AB bilayer graphene and for the development of graphene-based transistors.'\nauthor:\n- Mouhamadou Hassane Saley\n- Ahmed Jellal\ntitle: Magnetic field effect on tunneling through triple barrier in AB bilayer
+"---\nabstract: 'We investigate spatial correlations of strain fluctuations in sheared colloidal glasses and simulations of sheared amorphous solids. The correlations reveal a quadrupolar symmetry reminiscent of the strain field due to an Eshelby inclusion. However, they display an algebraic decay $1/r^{\\alpha}$, where the exponent $\\alpha$ is close to $1$ in the steady state, unlike the Eshelby field, for which $\\alpha=3$. The exponent takes values between $3$ and $1$ in the transient stages of deformation. We explain these observations using a simple model based on interacting Eshelby inclusions. As the system is sheared beyond the linear response to plastic flow, the density correlations of inclusions are enhanced, and this emerges as key to understanding the elastoplastic response of the system to applied shear.'\nauthor:\n- 'Sagar Malik$^{1}$, Meenakshi L.$^{2}$, Atharva Pandit$^{1}$, Antina Ghosh$^{1}$, Peter Schall$^{3}$, Bhaskar Sengupta$^{2}$, Vijayakumar Chikkadi$^1$'\ntitle: Microscopic strain correlations in sheared amorphous solids\n---\n\nAmorphous solids are an important class of materials that appear in various forms ranging from metallic glasses to polymeric glasses and soft materials made of emulsions, foams and granular matter [@Barrat18; @Biroli11]. Even though their mechanical properties differ significantly, they display similar elastic and plastic properties. Therefore, the elastoplastic deformation of"
+"---\nabstract: '*We study singularity formation of K$\\ddot{\\text{a}}$hler-Ricci flow on a K$\\ddot{\\text{a}}$hler manifold that admits a horizontally homothetic conformal submersion into another K$\\ddot{\\text{a}}$hler manifold. We will derive necessary and sufficient conditions for the preservation of the horizontally homothetic conformal submersion along the flow and establish the formation of a type I singularity together with a standard splitting of the Cheeger-Gromov limit. This generalizes the setup of Calabi symmetry that was discussed in [@1] and [@2], and thus gives new proofs of the results listed there.*'\nauthor:\n- NGUYEN THE HOAN\nbibliography:\n- 'references.bib'\nnocite: '[@*]'\ntitle: 'K$\\ddot{\\text{A}}$HLER-RICCI FLOW AND CONFORMAL SUBMERSION'\n---\n\nIntroduction\n============\n\nIn [@2], Song and Weinkove showed that a K$\\ddot{\\text{a}}$hler-Ricci flow on a Hirzebruch surface (a $\\mathbb{P}^1$ bundle over a projective space) satisfying certain rotational symmetry conditions (Calabi symmetry) will either collapse the $\\mathbb{P}^1$ fibers, shrink the base to a point, or contract the exceptional divisor. In all of these cases, the manifold, viewed as a metric space with the distance induced by the K$\\ddot{\\text{a}}$hler metrics along the flow, is known to converge in the Gromov-Hausdorff sense.\n\nThe singularity behavior of the flow in these cases is also of particular interest. Since it was proved in [@8] that the flow could"
+"---\nabstract: 'When an exposure of interest is confounded by unmeasured factors, an instrumental variable (IV) can be used to identify and estimate certain causal contrasts. Identification of the marginal average treatment effect (ATE) from IVs relies on strong untestable structural assumptions. When one is unwilling to assert such structure, IVs can nonetheless be used to construct bounds on the ATE. Famously, [@balke1997bounds] proved tight bounds on the ATE for a binary outcome, in a randomized trial with noncompliance and no covariate information. We demonstrate how these bounds remain useful in observational settings with baseline confounders of the IV, as well as randomized trials with measured baseline covariates. The resulting bounds on the ATE are non-smooth functionals, and thus standard nonparametric efficiency theory is not immediately applicable. To remedy this, we propose (1) under a novel margin condition, influence function-based estimators of the bounds that can attain parametric convergence rates when the nuisance functions are modeled flexibly, and (2) estimators of smooth approximations of these bounds. We propose extensions to continuous outcomes, explore finite sample properties in simulations, and illustrate the proposed estimators in an observational study targeting the effect of higher education on wages.'\nauthor:\n- |\n Alexander W."
+"---\nabstract: 'Machine learning (ML) compilers are an active area of research because they offer the potential to automatically speed up tensor programs. Kernel fusion is often cited as an important optimization performed by ML compilers. However, there exists a knowledge gap about how XLA, the most common ML compiler, applies this nuanced optimization, what kind of speedup it can afford, and what low-level effects it has on hardware. Our paper aims to bridge this knowledge gap by studying key compiler passes of XLA\u2019s source code. Our evaluation on the reinforcement learning environment Cartpole shows how different fusion decisions in XLA are made in practice. Furthermore, we implement several XLA kernel fusion strategies that can achieve up to 10.56x speedup compared to our baseline implementation.'\nauthor:\n- |\n [Daniel Snider, Ruofan Liang]{}\\\n University of Toronto [^1]\\\n {firstname.lastname}@mail.utoronto.ca\nbibliography:\n- 'main.bib'\ntitle: 'Operator Fusion in XLA: Analysis and Evaluation'\n---\n\nIntroduction\n============\n\nMachine learning (ML) has become increasingly important in various computing tasks, including computer vision, natural language processing, and robotic control. The computational efficiency of ML has also emerged as a popular topic in computer systems and architecture research. Today\u2019s ML applications rely on specialized hardware accelerators like GPUs"
+"---\nabstract: 'Gas leakage is a serious hazard in a variety of settings, including industrial sites, homes, and gas-powered vehicles. The installation of gas leak detection systems has emerged as a critical preventative measure to address this problem. Traditional gas sensors, such as electrochemical, infrared point, and Metal Oxide Semiconductor sensors, have been widely used for gas leak detection. However, these sensors have limitations in terms of their adaptation to various gases, as well as their high cost and difficulties in scaling. In this paper, a novel non-contact gas detection technique based on a 40 kHz ultrasonic signal is described. The proposed approach employs the reflections of the emitted ultrasonic wave to detect gas leaks and is also able to identify the specific gas in real-time. To confirm the method\u2019s effectiveness, trials were carried out using hydrogen, helium, argon, and butane. The system identified gas flow breaches in 0.01 seconds, whereas the gas identification procedure took 0.8 seconds. An interesting extension of the proposed approach is real-time visualisation of gas flow employing an array of transducers.'\nauthor:\n- \ntitle: 'Ultrasound based Gas Detection: Analyzing Acoustic Impedance for High-Performance and Low-Cost Solutions'\n---\n\nGas leakage, Detection, Identification, Ultrasound, Real-time.\n\nIntroduction\n============"
+"---\nabstract: 'The classical homotopy optimization approach has the potential to deal with highly nonlinear landscapes, such as the energy landscape of QAOA problems. Following this motivation, we introduce Hamiltonian-Oriented Homotopy QAOA (HOHo-QAOA), a heuristic method for combinatorial optimization using QAOA, based on classical homotopy optimization. The method consists of a homotopy map that produces an optimization problem for each value of the interpolating parameter. Therefore, HOHo-QAOA decomposes the optimization of QAOA into several loops, each using a mixture of the mixer and the objective Hamiltonian for cost function evaluation. Furthermore, we conclude that HOHo-QAOA improves the search for low-energy states in the nonlinear energy landscape and outperforms other variants of QAOA.'\nauthor:\n- 'Akash Kundu$^{1,2}$[^1], Ludmila Botelho$^{1,2 *}$[^2], Adam Glos$^{1,3}$'\nbibliography:\n- 'reference.bib'\ntitle: 'Hamiltonian-Oriented Homotopy QAOA'\n---\n\nIntroduction\n============\n\nSpeedup of practical applications is yet to be realized for quantum devices as they are small and noise-prone. The limitations of available hardware initiated the Noisy Intermediate Scale Quantum (NISQ) era [@preskill2018quantum]. The NISQ algorithms [@bharti2022noisy] can operate on a limited amount of resources, in particular by distributing tasks between quantum and classical devices. Many of those algorithms are represented by a broad class of variational"
+"---\nabstract: 'We prove that topological disks with positive curvature and strictly convex boundary of large length are close to round spherical caps of constant boundary curvature in the Gromov-Hausdorff sense. This proves stability for a theorem of F.\u00a0Hang and X.\u00a0Wang in [@HW], and can be viewed as an affirmative answer to a convex stability version of the Min-Oo Conjecture in dimension two. As an intermediate step, we obtain a compactness result for a Liouville-type PDE problem.'\naddress: 'The University of Pennsylvania, Department of Mathematics, David Rittenhouse Lab., 209 South 33rd Street, Philadelphia, PA 19104, USA.'\nauthor:\n- Hunter Stufflebeam\nbibliography:\n- 'Stab\\_Conv\\_Disks\\_v5\\_arxiv.bib'\ntitle: Stability of Convex Disks\n---\n\nIntroduction\n============\n\nInequalities in geometric analysis, such as the isoperimetric and systolic, Faber-Krahn and Penrose, relate given geometric objects to understood model cases, taking as input data bounds on curvatures, volumes, eigenvalues, etc. Via such relationships, much work has been done to understand the structure of spaces with natural geometric conditions phrased in terms of such quantities.\n\nGiven an inequality for which one has some understanding of extremizers (the geometric objects which realize equality), one might ask if an object *nearly* realizing equality must somehow share characteristics with the"
+"---\nabstract: |\n A polynomial homotopy is a family of polynomial systems, typically in one parameter $t$. Our problem is to compute power series expansions of the coordinates of the solutions in the parameter $t$, accurately, using multiple double arithmetic. One application of this problem is the location of the nearest singular solution in a polynomial homotopy, via the theorem of Fabry. Power series serve as input to construct Pad\u00e9 approximations.\n\n Exploiting the massive parallelism of Graphics Processing Units capable of performing several trillion floating-point operations per second, the objective is to compensate for the cost overhead caused by arithmetic with power series in multiple double precision. The application of Newton\u2019s method for this problem requires the evaluation and differentiation of polynomials, followed by solving a blocked lower triangular linear system. Experimental results are obtained on NVIDIA GPUs, in particular the RTX 2080, P100 and V100.\n\n Code generated by the CAMPARY software is used to obtain results in double double, quad double, and octo double precision. The programs in this study are self-contained, available in a public GitHub repository under the GPL-v3.0 License.\nauthor:\n- 'Jan Verschelde[^1]'\ndate: 29 January 2023\ntitle: |\n GPU Accelerated Newton for Taylor Series
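The core idea in the abstract above, running Newton's method in truncated power-series arithmetic to expand a homotopy solution $x(t)$, can be sketched on a toy univariate example. This is my own minimal illustration in ordinary double precision (not the paper's multiple-double GPU code); the function names `mul`, `inv`, and `newton_series` are illustrative. It expands the solution of $x^2 - (1+t) = 0$ with $x(0)=1$, i.e. $\sqrt{1+t}$:

```python
import numpy as np

def mul(a, b, n):
    """Product of two truncated power series (coefficient arrays), mod t^n."""
    return np.convolve(a, b)[:n]

def inv(a, n):
    """Reciprocal of a power series with a[0] != 0, by Newton iteration."""
    r = np.array([1.0 / a[0]])
    m = 1
    while m < n:
        m = min(2 * m, n)
        r = np.pad(r, (0, m - len(r)))
        corr = -mul(a[:m], r, m)
        corr[0] += 2.0
        r = mul(r, corr, m)          # r <- r*(2 - a*r): doubles the correct terms
    return r

def newton_series(n):
    """First n Taylor coefficients of x(t) solving x^2 - (1 + t) = 0, x(0) = 1."""
    x = np.array([1.0])
    m = 1
    while m < n:
        m = min(2 * m, n)
        x = np.pad(x, (0, m - len(x)))
        f = mul(x, x, m)
        f[0] -= 1.0
        if m > 1:
            f[1] -= 1.0              # f(x, t) = x^2 - (1 + t)
        x = x - mul(f, inv(2.0 * x, m), m)   # Newton step in series arithmetic
    return x

print(newton_series(4))  # coefficients of sqrt(1+t): 1, 1/2, -1/8, 1/16
```

Each Newton step doubles the number of correct coefficients; in the multivariate setting of the paper the division by $f'$ is replaced by the blocked lower triangular solve mentioned in the abstract.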
+"---\nabstract: |\n Auto-bidding has recently become a popular feature in ad auctions. This feature enables advertisers to simply provide high-level constraints and goals to an automated agent, which optimizes their auction bids on their behalf. These auto-bidding intermediaries interact in a decentralized manner in the underlying auctions, leading to new interesting practical and theoretical questions on auction design, for example, in understanding the bidding equilibrium properties between auto-bidder intermediaries for different auctions. In this paper, we examine the effect of different auctions on the incentives of advertisers to report their constraints to the auto-bidder intermediaries. More precisely, we study whether canonical auctions such as first price auction (FPA) and second price auction (SPA) are [*auto-bidding incentive compatible (AIC)*]{}: whether an advertiser can gain by misreporting their constraints to the autobidder.\n\n We consider value-maximizing advertisers in two important settings: when they have a budget constraint and when they have a target cost-per-acquisition constraint. The main result of our work is that for both settings, FPA and SPA are not AIC. This contrasts with FPA being AIC when auto-bidders are constrained to bid using a (sub-optimal) uniform bidding policy. We further extend our main result and show that any (possibly randomized)"
+"---\nabstract: 'Free-running Recurrent Neural Networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information $I\\left[\\vec{x}(t),\\vec{x}(t\\!+\\!1)\\right]$ between subsequent system states $\\vec{x}$. Although former studies have shown that $I$ depends on the statistics of the network\u2019s connection weights, it is unclear (1) how to maximize $I$ systematically and (2) how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information $I$ is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of $I\\left[\\vec{x}(t),\\vec{x}(t\\!+\\!1)\\right]$ reveals a general design principle for the weight matrices enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.'\nauthor:\n- Claus Metzner\n- 'Marius E. Yamakou'\n- Dennis Voelkl\n- Achim Schilling\n-
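The proxy quantity named in the abstract above, a root-mean-square average of Pearson correlations between states at $t$ and $t+1$, is cheap to compute from a state time series. The sketch below is my own minimal illustration (not the authors' Boltzmann-machine setup); the helper name `rms_lagged_correlation` and the synthetic data are assumptions for demonstration only:

```python
import numpy as np

def rms_lagged_correlation(X):
    """Root-mean-square of Pearson correlations between every unit at time t
    and every unit at time t+1, for a state time series X of shape (T, N)."""
    A, B = X[:-1], X[1:]
    A = (A - A.mean(0)) / (A.std(0) + 1e-12)   # standardize each unit
    B = (B - B.mean(0)) / (B.std(0) + 1e-12)
    C = A.T @ B / len(A)                       # N x N lagged correlation matrix
    return np.sqrt(np.mean(C ** 2))

rng = np.random.default_rng(0)
# i.i.d. noise carries no information between t and t+1 ...
weak = rng.normal(size=(500, 8))
# ... while slowly varying states (each value held for 2 steps) do
persistent = np.repeat(rng.normal(size=(250, 8)), 2, axis=0)
print(rms_lagged_correlation(weak), rms_lagged_correlation(persistent))
```

The second value comes out clearly larger than the first, mirroring the abstract's point that this scalar tracks the spontaneous information flux without ever estimating a joint distribution.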
+"---\nabstract: 'Let $K \\subseteq {\\mathbb{R}}^d$ be a convex body and let ${\\mathbf{w}}\\in {\\operatorname{int}}(K)$ be an interior point of\u00a0$K$. The *coefficient of asymmetry* ${\\operatorname{ca}}(K,{\\mathbf{w}}) := \\min\\{ \\lambda \\geq 1 : {\\mathbf{w}}- K \\subseteq \\lambda (K - {\\mathbf{w}}) \\}$ has been studied extensively in the realm of Hensley\u2019s conjecture on the maximal volume of a $d$-dimensional lattice polytope that contains a fixed positive number of interior lattice points. We study the coefficient of asymmetry for *lattice zonotopes*, i.e., Minkowski sums of line segments with integer endpoints. Our main result gives the existence of an interior lattice point whose coefficient of asymmetry is bounded above by an explicit constant in $\\Theta(d \\log\\log d)$, for any lattice zonotope that has an interior lattice point. Our work is both inspired by and feeds on Wills\u2019 lonely runner conjecture from Diophantine approximation: we make intensive use of a discrete version of this conjecture, and reciprocally, we reformulate the lonely runner conjecture in terms of the coefficient of asymmetry of a zonotope.'\naddress:\n- |\n Department of Mathematics\\\n San Francisco State University\\\n San Francisco, CA 94132\\\n U.S.A.\n- |\n Institut f\u00fcr Mathematik\\\n Universit\u00e4t Rostock\\\n Campus Ulmenstra\u00dfe 69\\\n D-18051 Rostock\\\n Germany\nauthor:\n- Matthias Beck\n-"
+"---\nabstract: 'Household robots operate in the same space for years. Such robots incrementally build dynamic maps that can be used for tasks requiring remote object localization. However, benchmarks in robot learning often test generalization through inference on tasks in unobserved environments. In an observed environment, locating an object is reduced to choosing from among all object proposals in the environment, which may number in the 100,000s. Armed with this intuition, using only a generic vision-language scoring model with minor modifications for 3d encoding and operating in an embodied environment, we demonstrate an absolute performance gain of 9.84% on remote object grounding above state of the art models for REVERIE and of 5.04% on FAO. When allowed to pre-explore an environment, we also exceed the previous state of the art pre-exploration method on REVERIE. Additionally, we demonstrate our model on a real-world TurtleBot platform, highlighting the simplicity and usefulness of the approach. Our analysis outlines a \u201cbag of tricks\u201d essential for accomplishing this task, from utilizing 3d coordinates and context, to generalizing vision-language models to large 3d search spaces.'\nauthor:\n- |\n Gunnar A. Sigurdsson \u00a0\u00a0\u00a0\u00a0 Jesse Thomason \u00a0\u00a0\u00a0 Gaurav S. Sukhatme \u00a0\u00a0\u00a0 Robinson Piramuthu\\\n Amazon Alexa AI\nbibliography:\n- 'main.bib'\ntitle: 'RREx-BoT:"
+"---\nabstract: 'To date, epsilon-near-zero (ENZ) responses, characterized by an infinite phase velocity, are primarily achieved by applying a monochromatic light source to a tailored metamaterial. Here, we derive the equations for inducing a dynamically generated broadband ENZ response in a large class of many-body systems via tracking and feedback control. We further find that this response leads to a current-energy relationship identical to that of an ideal inductor. Using a Fermi-Hubbard model, we numerically confirm these results, which have the potential to advance optical computation on the nanoscale.'\nauthor:\n- Jacob Masur\n- 'Denys I. Bondar'\n- Gerard McCaul\nbibliography:\n- 'refs.bib'\ntitle: 'Dynamical Generation of Epsilon-Near-Zero Behaviour via Tracking and Feedback Control'\n---\n\n#### Introduction.\n\nOne of the principal research goals of twenty-first-century optics has been the development and application of epsilon-near-zero (ENZ) materials. These form a subclass of near zero index (NZI) materials, whose common feature is the extraordinary optical properties they possess [@WuENZPhotonics; @ziolkowski2004propagation; @ENZPhotonics; @nziPhotonics; @riseNZItech; @nziPhotonicMaterials]. Such materials exhibit a near zero permittivity or permeability at a given frequency, leading to a decoupling of the electric and magnetic fields [@ENZPhotonics; @nziPhotonics; @engheta2006metamaterials; @ziolkowski2004propagation; @engheta2013pursuing]. Such behaviour has exceptional potential"
+"---\nabstract: 'Hit song prediction, one of the emerging fields in music information retrieval (MIR), remains a considerable challenge. Being able to understand what makes a given song a hit is clearly beneficial to the whole music industry. Previous approaches to hit song prediction have focused on using audio features of a record. This study aims to improve the prediction of the top 10 hits among Billboard Hot 100 songs using additional metadata, including song audio features provided by Spotify, song lyrics, and novel metadata-based features (title topic, popularity continuity and genre class). Five machine learning approaches are applied: k-nearest neighbours, Na\u00efve Bayes, Random Forest, Logistic Regression and Multilayer Perceptron. Our results show that Random Forest (RF) and Logistic Regression (LR) with all features (including novel features, song audio features and lyrics features) outperform other models, achieving 89.1% and 87.2% accuracy, and 0.91 and 0.93 AUC, respectively. Our findings also demonstrate the utility of our novel music metadata features, which contributed most to the models\u2019 discriminative performance.'\nauthor:\n- Mengyisong Zhao\n- Morgan Harvey\n- David Cameron\n- Frank Hopfgartner\n- 'Valerie J. Gillet'\ntitle: An Analysis of Classification Approaches for Hit Song Prediction using Engineered Metadata
+"---\nabstract: 'We propose a Model-Based Reinforcement Learning (MBRL) algorithm named VF-MC-PILCO, specifically designed for application to mechanical systems where velocities cannot be directly measured. This circumstance, if not adequately considered, can compromise the success of MBRL approaches. To cope with this problem, we define a velocity-free state formulation which consists of the collection of past positions and inputs. Then, VF-MC-PILCO uses Gaussian Process Regression to model the dynamics of the velocity-free state and optimizes the control policy through a particle-based policy gradient approach. We compare VF-MC-PILCO with our previous MBRL algorithm, MC-PILCO4PMS, which handles the lack of direct velocity measurements by modeling the presence of velocity estimators. Results on both simulated (cart-pole and UR5 robot) and real mechanical systems (Furuta pendulum and a ball-and-plate rig) show that the two algorithms achieve similar results. Conveniently, VF-MC-PILCO does not require the design and implementation of state estimators, which can be a challenging and time-consuming activity to be performed by an expert user.'\nauthor:\n- 'Fabio Amadio$^1$, Alberto Dalla Libera$^1$, Daniel Nikovski$^2$, Ruggero Carli$^1$, Diego Romeres$^2$ [^1] [^2]'\nbibliography:\n- 'references.bib'\ntitle: '**Learning Control from Raw Position Measurements**'\n---\n\nIntroduction\n============\n\nModel-Based Reinforcement Learning (MBRL) [@polydoros2017survey] proved to be a promising strategy"
+"---\nabstract: 'In this paper we investigate the ability of modern machine learning algorithms to infer basic offline activities,\u00a0e.g., shopping and dining, from location data. Using anonymized data of thousands of users of a prominent location-based social network, we empirically demonstrate that not only does state-of-the-art machine learning excel at the task at hand\u00a0(F1 score$>$0.9), but tabular models are also among the best performers. The findings we report here not only fill an existing gap in the literature, but also highlight the potential risks of such capabilities given the ubiquity of location data and the high accessibility of tabular machine learning models.'\nauthor:\n- \n- \nbibliography:\n- 'bibs.bib'\ntitle: 'Where You Are Is What You Do: On Inferring Offline Activities From Location Data'\n---\n\nlocation data, activity inference, privacy\n\nIntroduction {#sec1}\n============\n\nWith the proliferation of social media, smartphones, Internet-of-Things (IoT) devices and low Earth orbit (LEO) satellites, we live in a world where location data (data with reference to a physical location) is ubiquitous. Recent estimates [@huang2020big] suggest that location data make up over 80% of the data created on a daily basis. While location data can be and have been used for widely agreed-on, positive applications"
+"---\nabstract: 'We analyze the effect of circuit parameter variation on the performance of Josephson traveling-wave parametric amplifiers (JTWPAs). Specifically, the JTWPA concept we investigate is using flux-biased nonhysteretic rf-SQUIDs in a transmission line configuration, which harnesses the three-wave mixing (3WM) regime. Dispersion engineering enables phase-matching to achieve power gain of $\\sim$20\u00a0dB, while suppressing the generation of unwanted mixing processes. Two dispersion engineering concepts using a 3WM-JTWPA circuit model, i.e., resonant phase-matching (RPM) and periodic capacitance modulation (PCM), are discussed, with results potentially also applicable to four-wave-mixing (4WM) JTWPAs. We propose suitable circuit parameter sets and evaluate amplifier performance with and without circuit parameter variance using transient circuit simulations. This approach inherently takes into account microwave reflections, unwanted mixing products, imperfect phase-matching, pump depletion, etc. In the case of RPM the resonance frequency spread is critical, while PCM is much less sensitive to parameter spread. We discuss degrees of freedom to make the JTWPA circuits more tolerant to parameter spread. Finally, our analysis shows that the flux-bias point where rf-SQUIDs exhibit Kerr-free nonlinearity is close to the sweet spot regarding critical current spread.'\nauthor:\n- 'C. Kissling, V. Gaydamachenko, F. Kaap, M. Khabipov, R. Dolata, A. B. Zorin, L."
+"---\nabstract: 'We apply the Tenerife Inversion Code (TIC) to the plage spectropolarimetric observations obtained by the Chromospheric LAyer SpectroPolarimeter (CLASP2). These unprecedented data consist of full Stokes profiles in the spectral region around the h and k lines for a single slit position, with around two thirds of the 200\u00a0arcsec slit crossing a plage region and the rest crossing an enhanced network. A former analysis of these data had allowed us to infer the longitudinal component of the magnetic field by applying the weak field approximation (WFA) to the circular polarization profiles, and to assign the inferred magnetic fields to different layers of the solar atmosphere based on the results of previous theoretical radiative transfer investigations. In this work, we apply the recently developed TIC to the same data. We obtain the stratified model atmosphere that fits the intensity and circular polarization profiles at each position along the spectrograph slit and we compare our results for the longitudinal component of the magnetic field with the previously obtained WFA results, highlighting the generally good agreement in spite of the fact that the WFA is known to produce an underestimation when applied to the outer lobes of the h and"
+"---\nabstract: 'In the present paper, the electronic, structural, thermodynamic, and elastic properties of the cubic $PbGeO_3$ perovskite oxide are computed using the WIEN2k code. The structural properties have been evaluated and are in good agreement with theoretical and experimental data. The phonon dispersion reveals that the cubic $PbGeO_3$ perovskite is dynamically stable. In addition, the electronic properties of $PbGeO_3$ show a gap opening, indicating semiconductor behavior with an indirect band gap of $1.67\\;eV$. Moreover, the obtained elastic constants of cubic $PbGeO_3$ satisfy Born\u2019s mechanical stability criteria, indicating that our compound behaves as a stable, ductile material. The effects of temperature and pressure on the thermodynamic parameters are investigated using the Gibbs2 package. Finally, based on the obtained results on the properties of the cubic $PbGeO_3$ perovskite, we expect this compound to have potential applications.'\nauthor:\n- |\n M. Agouri$^{1}$, A. Waqdim$^{1}$, A. Abbassi$^{1,}$[^1], M. Ouali$^{1}$, S. Taj$^{1}$, B. Manaut$^{1,}$[^2], M. Driouich$^1$\\\n [*[$^1$ Laboratory of Research in Physics and Engineering Sciences,]{}*]{}\\\n [*[Sultan Moulay Slimane University, Polydisciplinary Faculty, Beni Mellal, 23000, Morocco.]{}*]{}\\\ndate:\n- \n- \ntitle: '\\'\n---\n\nKeywords: DFT, Elastic, Thermodynamic, Perovskite, WIEN2K, mBJ.\n\nIntroduction\n============\n\nSince the discovery of $CaTiO_3$ [@1], the"
+"---\nabstract: 'This note addresses the problem of evaluating the impact of an attack on discrete-time nonlinear stochastic control systems. The problem is formulated as an optimal control problem with a joint chance constraint that forces the adversary to avoid detection throughout a given time period. Due to the joint constraint, the optimal control policy depends not only on the current state, but also on the entire history, leading to an explosion of the search space and making the problem generally intractable. However, we discover that the current state and whether an alarm has been triggered, or not, is sufficient for specifying the optimal decision at each time step. This information, which we refer to as the alarm flag, can be added to the state space to create an equivalent optimal control problem that can be solved with existing numerical approaches using a Markov policy. Additionally, we note that the formulation results in a policy that does not avoid detection once an alarm has been triggered. We extend the formulation to handle multi-alarm avoidance policies for more reasonable attack impact evaluations, and show that the idea of augmenting the state space with an alarm flag is valid in this extended"
+"---\nabstract: 'WebAssembly (abbreviated WASM) has emerged as a promising language of the Web and has also been used for a wide spectrum of software applications such as mobile and desktop applications. These applications, referred to as WASM applications, commonly run in WASM runtimes. Bugs in WASM runtimes are frequently reported by developers and cause WASM applications to crash. However, these bugs have not been well studied. To fill this knowledge gap, we present a systematic study to characterize and detect bugs in WASM runtimes. We first harvest a dataset of 311 real-world bugs from hundreds of related posts on GitHub. Based on the collected high-quality bug reports, we distill 31 bug categories of WASM runtimes and summarize their common fix strategies. Furthermore, we develop a pattern-based bug detection framework to automatically detect bugs in WASM runtimes. We apply the detection framework to five popular WASM runtimes and successfully uncover 53 bugs that have never been reported previously, among which 14 have been confirmed and 6 have been fixed by runtime developers.'\nauthor:\n- Yixuan Zhang\n- Shangtong Cao\n- Haoyu Wang\n- Zhenpeng Chen\n- Xiapu Luo\n- Dongliang Mu\n- Yun Ma\n- Gang Huang\n- Xuanzhe
+"---\nabstract: 'Enumerating the directed acyclic graphs (DAGs) of a Markov equivalence class (MEC) is an important primitive in causal analysis. The central resource from the perspective of computational complexity is the delay, that is, the time an algorithm that lists all members of the class requires between two consecutive outputs. Commonly used algorithms for this task utilize the rules proposed by Meek (1995) or the transformational characterization by Chickering (1995), both resulting in superlinear delay. In this paper, we present the first linear-time delay algorithm. On the theoretical side, we show that our algorithm can be generalized to enumerate DAGs represented by models that incorporate background knowledge, such as MPDAGs; on the practical side, we provide an efficient implementation and evaluate it in a series of experiments. Complementary to the linear-time delay algorithm, we also provide intriguing insights into Markov equivalence itself: All members of an MEC can be enumerated such that two successive DAGs have structural Hamming distance at most three.'\nauthor:\n- 'Marcel Wien\u00f6bst,^1^ Malte Luttermann,^2^ Max Bannach,^1^ Maciej Li\u015bkiewicz^1^'\nbibliography:\n- 'main.bib'\ntitle:\n- 'Efficient Enumeration of Markov Equivalent DAGs[^1]'\n-
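The structural Hamming distance (SHD) mentioned at the end of the abstract above counts the adjacencies present in one DAG but not the other, plus the shared adjacencies whose orientation disagrees. A minimal self-contained sketch (the function name `shd` and the edge-set encoding are my own, for illustration only):

```python
def shd(E1, E2):
    """Structural Hamming distance between two DAGs given as sets of
    directed edges (u, v). An adjacency present in only one graph counts 1;
    a shared adjacency with opposite orientation also counts 1."""
    und1 = {frozenset(e) for e in E1}   # undirected skeletons
    und2 = {frozenset(e) for e in E2}
    d = len(und1 ^ und2)                # adjacencies in exactly one graph
    for adj in und1 & und2:             # shared adjacencies: compare orientation
        u, v = tuple(adj)
        if ((u, v) in E1) != ((u, v) in E2):
            d += 1
    return d

# X -> Y -> Z versus X <- Y -> Z: same skeleton, one edge reversed
print(shd({("X", "Y"), ("Y", "Z")}, {("Y", "X"), ("Y", "Z")}))  # 1
```

Under this definition, the abstract's claim is that the MEC can be listed so that each output differs from its predecessor by at most three such elementary differences.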
+"Introduction\n============\n\nFederated Learning (FL) is an emerging privacy-preserving distributed machine learning paradigm. The model is transmitted to the clients by the server, and when the clients have completed local training, the parameter updates are sent back to the server for integration. Clients are not required to provide local raw data during this procedure, maintaining their privacy. However, the non-IID nature of clients\u2019 local distribution hinders the performance of FL algorithms\u00a0[@mcmahan2016communication; @li2018federated; @karimireddy2020scaffold; @li2021model], and the distribution shifts among clients become a main challenge in FL.\n\n#### Distribution shifts in FL.\n\nAs identified in the seminal surveys\u00a0[@kairouz2021advances; @moreno2012unifying; @lu2018learning], there are three types of distribution shifts across clients that bottleneck the deployment of FL (see Figure\u00a0\\[fig:Illustration of the scenario we constructed\\]):\n\n- *Concept shift*: For tasks of using feature $\\xx$ to predict label $y$, the conditional distributions of labels $\\cP(y|\\xx)$ may differ across clients, even if the marginal distributions of labels $\\cP(y)$ and features $\\cP(\\xx)$ are shared.\n\n- *Label distribution shift*: The marginal distributions of labels $\\cP(y)$ may vary across clients, even if the conditional distribution of features $\\cP(\\xx | y)$ is the same.\n\n- *Feature distribution shift*: The marginal distribution of features $\\cP(\\xx)$ may"
+"---\nabstract: 'In turbulence modeling, we are concerned with finding closure models that represent the effect of the subgrid scales on the resolved scales. Recent approaches gravitate towards machine learning techniques to construct such models. However, the stability of machine-learned closure models and their abidance by physical structure (e.g. symmetries, conservation laws) are still open problems. To tackle both issues, we take the \u2018discretize first, filter next\u2019 approach. In this approach we apply a spatial averaging filter to existing fine-grid discretizations. The main novelty is that we introduce an additional set of equations which dynamically model the energy of the subgrid scales. Having an estimate of the energy of the subgrid scales, we can use the concept of energy conservation to derive stability. The subgrid energy containing variables are determined via a data-driven technique. The closure model is used to model the interaction between the filtered quantities and the subgrid energy. Therefore the total energy should be conserved. Abiding by this conservation law yields guaranteed stability of the system. In this work, we propose a novel skew-symmetric convolutional neural network architecture that satisfies this law. The result is that stability is guaranteed, independent of the weights and biases of the"
+"---\nabstract: |\n  Self-supervised sequential recommendation significantly improves recommendation performance by maximizing mutual information with well-designed data augmentations. However, the mutual information estimation is based on the calculation of the Kullback\u2013Leibler (KL) divergence, which has several limitations, including an exponential sample-size requirement and training instability. Also, existing data augmentations are mostly stochastic and can potentially break sequential correlations with random modifications. These two issues motivate us to investigate an alternative robust mutual information measurement capable of modeling uncertainty and alleviating KL divergence\u2019s limitations.\n\n  To this end, we propose a novel self-supervised learning framework based on **M**utual Wasser**Stein** discrepancy minimization\u00a0(MStein) for sequential recommendation. We propose the Wasserstein Discrepancy Measurement to measure the mutual information between augmented sequences. Wasserstein Discrepancy Measurement builds upon the 2-Wasserstein distance, which is more robust, more efficient in small batch sizes, and able to model the uncertainty of stochastic augmentation processes. We also propose a novel contrastive learning loss based on Wasserstein Discrepancy Measurement. Extensive experiments on four benchmark datasets demonstrate the effectiveness of MStein over baselines. Further quantitative analyses demonstrate its robustness against perturbations and its training efficiency across batch sizes. Finally, an analysis of the improvements indicates better representations of popular users/items with significant uncertainty. The source"
+"---\nabstract: 'In recent years, cloud and edge architectures have gained tremendous focus for offloading computationally heavy applications. From machine learning and the Internet of Things (IoT) to industrial procedures and robotics, cloud computing has been used extensively for data processing and storage purposes, thanks to its \u201cinfinite\u201d resources. On the other hand, cloud computing is characterized by long delays due to the long distance between the cloud servers and the machine requesting the resources. In contrast, edge computing provides almost real-time services since edge servers are located significantly closer to the source of data. This capability sets edge computing as an ideal option for real-time applications, like high-level control, on resource-constrained platforms. In order to utilize the edge resources, several technologies, most fundamentally containers and orchestrators like Kubernetes, have been developed to provide an environment with many features, based on each application\u2019s requirements. In this context, this work presents the implementation and evaluation of a novel edge architecture based on Kubernetes orchestration for controlling the trajectory of a resource-constrained Unmanned Aerial Vehicle (UAV) by enabling Model Predictive Control (MPC).'\nauthor:\n- 'Achilleas Santi Seisa, Sumeet Gajanan Satpute and George Nikolakopoulos[^1] [^2] [^3]'\nbibliography:\n- './IEEEtranBST/IEEEabrv.bib'\n-"
+"---\nabstract: |\n Creating a taxonomy of interests is expensive and human-effort intensive: not only do we need to identify nodes and interconnect them, in order to use the taxonomy, we must also connect the nodes to relevant entities such as users, pins, and queries. Connecting to entities is challenging because of ambiguities inherent to language but also because individual interests are dynamic and evolve.\n\n Here, we offer an alternative approach that begins with bottom-up discovery of [$\\mu$-topics]{}called pincepts. The discovery process itself connects these [$\\mu$-topics]{}dynamically with relevant queries, pins, and users at high precision, automatically adapting to shifting interests. Pincepts cover all areas of user interest and automatically adjust to the specificity of user interests and are thus suitable for the creation of various kinds of taxonomies. Human experts associate taxonomy nodes with [$\\mu$-topics]{}(on average, 3 [$\\mu$-topics]{}per node), and the [$\\mu$-topics]{}offer a high-level data layer that allows quick definition, immediate inspection, and easy modification. Even more powerfully, [$\\mu$-topics]{}allow easy exploration of nearby semantic space, enabling curators to spot and fill gaps. Curators\u2019 domain knowledge is heavily leveraged and we thus don\u2019t need untrained mechanical Turks, allowing further cost reduction. These [$\\mu$-topics]{}thus offer a satisfactory \u201csymbolic\u201d stratum over which to"
+"---\nabstract: 'Anomaly detection methods are deployed in systems where rare events may endanger an operation\u2019s profitability, safety, and environmental aspects. Although many state-of-the-art anomaly detection methods have been developed to date, their deployment is limited to the operating conditions present during model training. Online anomaly detection brings the capability to adapt to data drifts and change points that may not be represented during model development, resulting in prolonged service life. This paper proposes an online anomaly detection algorithm for existing real-time infrastructures where low-latency detection is required and novel patterns in data occur unpredictably. An online approach based on the inverse cumulative distribution is introduced to eliminate common problems of offline anomaly detectors while providing dynamic process limits for normal operation. The benefits of the proposed method are its ease of use, fast computation, and deployability, as shown in two case studies of real microgrid operation data.'\nauthor:\n- \nbibliography:\n- 'main.bib'\ntitle: 'Real-Time Outlier Detection with Dynamic Process Limits'\n---\n\nanomaly detection, interpretable machine learning, online machine learning, real-time systems, streaming analytics\n\nIntroduction {#Introduction}\n============\n\nThe era of Industry 4.0 is ruled by data. Effective data-based decision-making is driven by the quantity of collected data. Internet of Things (IoT) devices"
+"=1\n\nIntroduction\n============\n\nThe goal of this paper is to present explicit formulas for certain algebraic Poisson brackets on\u00a0${{\\mathbb P}}^5$.\n\nRecall that two Poisson brackets $\\{\\cdot,\\cdot\\}_1$, $\\{\\cdot,\\cdot\\}_2$ are called [*compatible*]{} if any linear combination $\\{\\cdot,\\cdot\\}_1+{\\lambda}\\cdot \\{\\cdot,\\cdot\\}_2$ is still a Poisson bracket (i.e., satisfies the Jacobi identity). Pairs of compatible Poisson brackets play an important role in the theory of integrable systems.\n\nWith every normal elliptic curve $C$ in ${{\\mathbb P}}^n$ one can associate naturally a Poisson bracket on ${{\\mathbb P}}^n$, called a [*Feigin\u2013Odesskii bracket of type $q_{n+1,1}$*]{}. The corresponding quadratic Poisson brackets on ${{\\mathbb A}}^{n+1}$ arise as quasi-classical limits of Feigin\u2013Odesskii elliptic algebras. On the other hand, they can be constructed using the geometry of vector bundles on $C$ (see\u00a0[@FO95; @P98]).\n\nIt was discovered by Odesskii\u2013Wolf\u00a0[@OW] that for every $n$ there exists a family of $9$ linearly independent mutually compatible Poisson brackets on ${{\\mathbb P}}^n$, such that their generic linear combinations are Feigin\u2013Odesskii brackets of type $q_{n+1,1}$. In\u00a0[@HP], this construction was explained and extended in terms of anticanonical line bundles on del Pezzo surfaces. It was observed in\u00a0[@HP Example\u00a04.6] that in this framework one also obtains $10$ linearly independent mutually compatible Poisson brackets on"
+"---\nabstract: 'In financial markets, the market order sign exhibits strong persistence, widely known as the long-range correlation (LRC) of order flow; specifically, the sign autocorrelation function (ACF) displays long memory with power-law exponent $\\gamma$, such that $C(\\tau) \\propto \\tau^{-\\gamma}$ for large time-lag $\\tau$. One of the most promising microscopic hypotheses is the order-splitting behaviour at the level of individual traders. Indeed, Lillo, Mike, and Farmer (LMF) introduced in 2005 a simple microscopic model of order-splitting behaviour, which predicts that the macroscopic sign correlation is quantitatively associated with the microscopic distribution of metaorders. While this hypothesis has been a central issue of debate in econophysics, its direct quantitative validation has been missing because it requires large microscopic datasets with high resolution to observe the order-splitting behaviour of all individual traders. Here we present the first quantitative validation of this LMF prediction by analysing a large microscopic dataset in the Tokyo Stock Exchange market for more than nine years. On classifying all traders as either order-splitting traders or random traders as a statistical clustering, we directly measured the metaorder-length distributions $P(L)\\propto L^{-\\alpha-1}$ as the microscopic parameter of the LMF model and examined the theoretical prediction on the macroscopic order correlation $\\gamma"
+"---\nabstract: 'We develop the first end-to-end sample complexity of model-free policy gradient (PG) methods in discrete-time infinite-horizon Kalman filtering. Specifically, we introduce the receding-horizon policy gradient (RHPG-KF) framework and demonstrate $\\tilde{\\mathcal{O}}(\\epsilon^{-2})$ sample complexity for RHPG-KF in learning a stabilizing filter that is $\\epsilon$-close to the optimal Kalman filter. Notably, the proposed RHPG-KF framework does not require the system to be open-loop stable nor assume any prior knowledge of a stabilizing filter. Our results shed light on applying model-free PG methods to control a linear dynamical system where the state measurements could be corrupted by statistical noises and other (possibly adversarial) disturbances.'\nauthor:\n- 'Xiangyuan Zhang Bin Hu Tamer Ba\u015far[^1]'\nbibliography:\n- 'main.bib'\ntitle: '**Learning the Kalman Filter with Fine-Grained Sample Complexity**'\n---\n\nIntroduction\n============\n\nIn recent years, policy-based reinforcement learning (RL) methods [@sutton2000policy; @kakade2002natural; @schulman2015trust; @schulman2017proximal] have gained increasing attention in continuous control applications [@schulman2015high; @lillicrap2015continuous; @recht2019tour]. While traditional model-based techniques synthesize controller designs in a case-by-case manner [@anderson1979optimal; @anderson1990optimal], model-free policy gradient (PG) methods promise a universal framework that learns controller designs in an end-to-end fashion. The universality of model-free PG methods makes them desired candidates in complex control applications that involve nonlinear system dynamics and imperfect state"
+"---\nabstract: 'As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de-facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. It conceptualizes this performance as an *operationalization of truth*, where distinct, often conflicting claims are smoothly synthesized and confidently presented into truth-statements. We argue that these same logics and inconsistencies play out in Instruct\u2019s successor, ChatGPT, reiterating truth as a non-trivial problem. We suggest that enriching sociality and thickening \u201creality\u201d are two promising vectors for enhancing the truth-evaluating capacities of future language models. We conclude, however, by stepping back to consider AI truth-telling as a social practice: what kind of \u201ctruth\u201d do we as listeners desire?'\nauthor:\n- Luke Munn\n- Liam Magee\n- Vanicka Arora\ndate: January 2023\ntitle: 'Truth Machines: Synthesizing Veracity in AI Language Models'\n---\n\n[ ***Keywords\u2014*** truthfulness, veracity, AI, large language model, GPT-3,"
+"---\nabstract: 'Fast radio bursts (FRBs) are being detected with increasing regularity. However, their spontaneous and often once-off nature makes high-precision burst position and frequency-time structure measurements difficult without specialised real-time detection techniques and instrumentation. The Australian Square Kilometre Array Pathfinder (ASKAP) has been enabled by the Commensal Real-time ASKAP Fast Transients Collaboration (CRAFT) to detect FRBs in real-time and save raw antenna voltages containing FRB detections. We present the CRAFT Effortless Localisation and Enhanced Burst Inspection pipeline (CELEBI), an automated [[[offline]{}]{}]{} software pipeline that extends CRAFT\u2019s existing software to process ASKAP voltages in order to produce sub-arcsecond precision localisations and polarimetric data at time resolutions as fine as 3ns of FRB events. We use Nextflow to link together Bash and Python code that performs software correlation, interferometric imaging, and beamforming, making use of common astronomical software packages.'\nauthor:\n- 'D.\u00a0R.\u00a0Scott'\n- 'H.\u00a0Cho'\n- 'C.\u00a0K.\u00a0Day'\n- 'A.\u00a0T.\u00a0Deller'\n- 'M.\u00a0Glowacki'\n- 'K.\u00a0Gourdji'\n- 'K.\u00a0W.\u00a0Bannister'\n- 'A.\u00a0Bera'\n- 'S.\u00a0Bhandari'\n- 'C.\u00a0W.\u00a0James'\n- 'R.\u00a0M.\u00a0Shannon'\nbibliography:\n- 'references.bib'\ntitle: 'CELEBI: The CRAFT Effortless Localisation and Enhanced Burst Inspection Pipeline'\n---\n\nFast Radio Bursts ,Radio Interferometry ,Astronomy Software"
+"---\nabstract: 'Gromov\u2013Wasserstein (GW) transport is inherently invariant under isometric transformations of the data. With this property in mind, we propose to estimate dynamical systems by transfer operators derived from GW transport plans, when merely the initial and final states are known. We focus on entropy-regularized GW transport, which allows us to utilize the fast Sinkhorn algorithm and a spectral clustering procedure to extract coherent structures. Moreover, the GW framework provides a natural quantitative assessment of the shape-coherence of the extracted structures. We discuss fused and unbalanced variants of GW transport for labelled and noisy data, respectively. Our models are verified by three numerical examples of dynamical systems governed by rotational forces.'\nauthor:\n- 'Florian Beier,'\ntitle: 'Gromov\u2013Wasserstein Transfer Operators[^1] '\n---\n\nIntroduction\n============\n\nOptimal transport (OT) aims to find an optimal mass transport between two input (marginal) measures according to an underlying cost function. To improve the speed of the numerical computation, Cuturi [@cuturi] introduced a regularized OT version which can be solved by the fast and parallelizable Sinkhorn algorithm. Further effort has been made to generalize OT to different settings, e.g., unbalanced optimal transport [@LMS18], which relaxes the hard matching of the marginal measures. Another line"
+"---\nabstract: |\n  We introduce a new algorithm promoting sparsity called [*Support Exploration Algorithm (SEA)*]{} and analyze it in the context of support recovery/model selection problems.\n\n  The algorithm can be interpreted as an instance of the [*straight-through estimator (STE)*]{} applied to the resolution of a sparse linear inverse problem. SEA uses a non-sparse exploratory vector and makes it evolve in the input space to select the sparse support. We exhibit an oracle update rule for the exploratory vector and consider the STE update.\n\n  The theoretical analysis establishes general sufficient conditions for support recovery. The general conditions are specialized to the case where the matrix $A$ performing the linear measurements satisfies the [*Restricted Isometry Property (RIP)*]{}.\n\n  Experiments show that SEA can efficiently improve the results of any algorithm. Because of its exploratory nature, SEA also performs remarkably well when the columns of $A$ are strongly coherent.\nauthor:\n- Mimoun Mohamed\n- Fran\u00e7ois Malgouyres\n- Valentin Emiya\n- Caroline Chaux\nbibliography:\n- 'abbr.bib'\n- 'bibliography.bib'\ntitle: Support Exploration Algorithm for Sparse Support Recovery\n---\n\nIntroduction\n============\n\nSparse representations and sparsity-inducing algorithms are widely used in statistics and machine learning\u00a0[@Hastie_T_2015_Statistical_SLwS], as well as in signal processing\u00a0[@Elad2010]. For instance,"
+"---\nabstract: |\n  We consider the gradient flow of the Ambrosio-Tortorelli functional at fixed $\\epsilon >0$, proving existence and uniqueness of a solution in dimension 2. The strategy of the proof essentially follows the one in the first part of [@feng2004analysis], but, as suggested in a footnote by the authors of [@feng2004analysis], it employs a different and simpler technique, which is used for a different equation in [@barrett2006convergence], and in the end it allows us to prove better estimates than those obtained in the original article. In particular, we prove that if $U \\subset \\mathbb{R}^2$ is a bounded Lipschitz domain, the initial data $(u_0, z_0) \\in [H^1 (U)]^2$ and $0 \\leq z_0 \\leq 1$, then for every $T>0$ there exists a unique gradient flow $(u(t), z(t))$ of the Ambrosio-Tortorelli functional such that\n\n  $$(u,z) \\in [L^2 (0,T; H^2 (U)) \\cap L^\\infty (0,T; H^1 (U)) \\cap H^1 (0,T; L^2 (U)) ]^2.$$\n\n  The basic difference from [@feng2004analysis], as already said, is a better regularity result and a simpler proof: while in [@feng2004analysis] they used a localization argument based on an idea by Struwe (see [@struwe1996geometric]), here crucial estimates on the fourth powers of the $L^4 _t (L^4 _x )$ norms of the gradients"
+"---\nabstract: |\n Nowadays traffic on the Internet has been widely encrypted to protect its confidentiality and privacy. However, traffic encryption is always abused by attackers to conceal their malicious behaviors. Since the encrypted malicious traffic has similar features to benign flows, it can easily evade traditional detection methods. Particularly, the existing encrypted malicious traffic detection methods are supervised and they rely on the prior knowledge of known attacks (e.g., labeled datasets). Detecting unknown encrypted malicious traffic in real time, which does not require prior domain knowledge, is still an open problem.\n\n In this paper, we propose [HyperVision]{}, a realtime unsupervised machine learning (ML) based malicious traffic detection system. Particularly, [HyperVision]{}is able to detect unknown patterns of encrypted malicious traffic by utilizing a compact in-memory graph built upon the traffic patterns. The graph captures flow interaction patterns represented by the graph structural features, instead of the features of specific known attacks. We develop an unsupervised graph learning method to detect abnormal interaction patterns by analyzing the connectivity, sparsity, and statistical features of the graph, which allows [HyperVision]{}to detect various encrypted attack traffic without requiring any labeled datasets of known attacks. Moreover, we establish an information theory model to demonstrate that"
+"---\nabstract: 'Since random initial models in Federated Learning (FL) can easily result in unregulated Stochastic Gradient Descent (SGD) processes, existing FL methods greatly suffer from both slow convergence and poor accuracy, especially for non-IID scenarios. To address this problem, we propose a novel FL method named CyclicFL, which can quickly derive effective initial models to guide the SGD processes, thus improving the overall FL training performance. Based on the concept of Continual Learning (CL), we prove that CyclicFL approximates existing centralized pre-training methods in terms of classification and prediction performance. Meanwhile, we formally analyze the significance of data consistency between the pre-training and training stages of CyclicFL, showing the limited Lipschitzness of loss for the pre-trained models by CyclicFL. Unlike traditional centralized pre-training methods that require public proxy data, CyclicFL pre-trains initial models on selected clients cyclically without exposing their local data. Therefore, they can be easily integrated into any security-critical FL methods. Comprehensive experimental results show that CyclicFL can not only improve the classification accuracy by up to $16.21\\%$, but also significantly accelerate the overall FL training processes.'\nauthor:\n- Pengyu Zhang\n- Yingbo Zhou\n- Ming Hu\n- Xin Fu\n- Xian Wei\n- Mingsong Chen\nbibliography:"
+"---\nabstract: 'Optical nanofiber cavity research has mainly focused on the fundamental mode. Here, a Fabry-P\u00e9rot fiber cavity with an optical nanofiber supporting the higher-order modes, TE$_{01}$, TM$_{01}$, HE$_{21}^o$, and HE$_{21}^e$, is demonstrated. Using cavity spectroscopy, with mode imaging and analysis, we observe cavity resonances that exhibit complex, inhomogeneous states of polarization with topological features containing Stokes singularities such as C-points, Poincar\u00e9 vortices, and L-lines. *In situ* tuning of the intracavity birefringence enables the desired profile and polarization of the cavity mode to be obtained. These findings open new research possibilities for cold atom manipulation and multimode cavity quantum electrodynamics using the evanescent fields of higher-order mode optical nanofibers.'\naddress: 'Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan'\nauthor:\n- 'Maki Maeda, Jameesh Keloth, and S\u00edle [Nic Chormaic]{}'\ntitle: 'Manipulation of polarization topology using a Fabry-P\u00e9rot fiber cavity with a higher-order mode optical nanofiber'\n---\n\nIntroduction {#sec:introduction}\n============\n\nNovel phenomena that can be revealed in non-paraxial light, such as transverse spin and spin-orbit coupling, have led to increasing interest in the tightly confined light observed in nano-optical devices [@Bliokh2015]. Optical nanofibers (ONFs), where the waist is subwavelength in size, are useful in this context because they"
+"---\nabstract: 'Graph alignment, which aims at identifying corresponding entities across multiple networks, has been widely applied in various domains. As the graphs to be aligned are usually constructed from different sources, the inconsistency issues of structures and features between two graphs are ubiquitous in real-world applications. Most existing methods follow the \u201cembed-then-cross-compare\u201d paradigm, which computes node embeddings in each graph and then processes node correspondences based on cross-graph embedding comparison. However, we find these methods are unstable and sub-optimal when structure or feature inconsistency appears. To this end, we propose SLOTAlign, an unsupervised graph alignment framework that jointly performs Structure Learning and Optimal Transport Alignment. We convert graph alignment to an optimal transport problem between two intra-graph matrices without the requirement of cross-graph comparison. We further incorporate multi-view structure learning to enhance graph representation power and reduce the effect of structure and feature inconsistency inherited across graphs. Moreover, an alternating scheme based algorithm has been developed to address the joint optimization problem in SLOTAlign, and the provable convergence result is also established. Finally, we conduct extensive experiments on six unsupervised graph alignment datasets and the DBP15K knowledge graph (KG) alignment benchmark dataset. The proposed SLOTAlign shows superior performance and"
+"---\nabstract: 'Black hole (BH) evaporation is caused by the creation of entangled particle-antiparticle pairs near the event horizon, with one carrying positive energy to infinity and the other carrying negative energy into the BH. Since, beneath the event horizon, particles always move toward the BH center, they can only be absorbed, not emitted, at the center. This breaks absorption-emission symmetry and, as a result, annihilation of the particle at the BH center is described by a non-Hermitian Hamiltonian. We show that, due to entanglement between photons moving inside and outside the event horizon, non-unitary absorption of the negative-energy photons near the BH center alters the outgoing radiation. As a result, the radiation of the evaporating BH is not thermal; it carries information about the BH interior, and entropy is preserved during evaporation.'\nauthor:\n- 'Anatoly A. Svidzinsky'\ntitle: Nonthermal radiation of evaporating black holes\n---\n\nIntroduction\n============\n\nAccording to the principles of quantum mechanics, the state of an isolated system remains pure during evolution. This is the case for both types of quantum mechanical evolution: a unitary evolution governed by the Schr\u00f6dinger equation and a non-unitary state-vector collapse brought about by a measurement. If the system remains in a pure"
+"---\nabstract: 'The remarkable ability of language models (LMs) has also brought challenges at the interface of AI and security. A critical challenge pertains to how much information these models retain and leak about the training data. This is particularly urgent as the typical development of LMs relies on huge, often highly sensitive data, such as emails and chat logs. To address this shortcoming, this paper introduces the Context-Aware Differentially Private Language Model (CADP-LM), a privacy-preserving LM framework that relies on two key insights: First, it utilizes the notion of *context* to define and audit the potentially sensitive information. Second, it adopts the notion of Differential Privacy to protect sensitive information and characterize the privacy leakage. A unique characteristic of CADP-LM is its ability to target the protection of sensitive sentences and contexts only, providing a highly accurate private model. Experiments on a variety of datasets and settings demonstrate these strengths of CADP-LM.'\nauthor:\n- |\n  My H.\u00a0Dinh\\\n  Syracuse University\\\n  `mydinh@syr.edu`\\\n  Ferdinando Fioretto\\\n  Syracuse University\\\n  `ffiorett@syr.edu`\\\nbibliography:\n- 'references.bib'\ntitle: 'Context-Aware Differential Privacy for Language Modeling'\n---\n\nIntroduction\n============\n\nLanguage models (LMs) are essential components of state-of-the-art natural language processing. Their recent development has focused on training increasingly large"
+"---\nabstract: 'Robotic grasping aims to detect graspable points and their corresponding gripper configurations in a particular scene, and is fundamental for robot manipulation. Existing research works have demonstrated the potential of using a transformer model for robotic grasping, which can efficiently learn both global and local features. However, such methods are still limited in grasp detection on a 2D plane. In this paper, we extend a transformer model for 6-Degree-of-Freedom (6-DoF) robotic grasping, which makes it more flexible and suitable for tasks that concern safety. The key designs of our method are a serialization module that turns a 3D voxelized space into a sequence of feature tokens that a transformer model can consume and skip-connections that merge multiscale features effectively. In particular, our method takes a Truncated Signed Distance Function (TSDF) as input. After serializing the TSDF, a transformer model is utilized to encode the sequence, which can obtain a set of aggregated hidden feature vectors through multi-head attention. We then decode the hidden features to obtain per-voxel feature vectors through deconvolution and skip-connections. Voxel feature vectors are then used to regress parameters for executing grasping actions. On a recently proposed pile and packed grasping dataset, we showcase that"
+"---\nabstract: 'With the advent of technologies such as Edge computing, the horizons of remote computational applications have broadened multidimensionally. Autonomous Unmanned Aerial Vehicle (UAV) missions are a vital application that can utilize remote computation to boost performance. However, offloading computational complexity to a remote system increases the latency in the system. Though technologies such as 5G networking minimize communication latency, the effects of latency on the control of UAVs are inevitable and may destabilize the system. Hence, it is essential to consider the delays in the system and compensate for them in the control design. Therefore, we propose a novel Edge-based predictive control architecture enabled by 5G networking, PACED-5G (Predictive Autonomous Control using Edge for Drones over 5G). In the proposed control architecture, we have designed a state estimator for estimating the current states based on the available knowledge of the time-varying delays, devised a Model Predictive Controller (MPC) for the UAV to track the reference trajectory while avoiding obstacles, and provided an interface to offload high-level tasks to Edge systems. The proposed architecture is validated in two experimental test cases using a quadrotor UAV.'\nauthor:\n- |\n  Viswa Narayanan Sankaranarayanan$^{1}$, Gerasimos Damigos$^{2}$, Achilleas Santi Seisa$^{1}$, Sumeet Gajanan"
+"---\nabstract: |\n  Can we infer the sources of errors from the outputs of complex data analytics software? Bidirectional programming promises that we can reverse the flow of software and translate corrections of the output into corrections of either the input or the data analysis.\n\n  This allows us to achieve the holy grail of automated approaches to debugging, risk reporting and large-scale distributed error tracking.\n\n  Since the processing of risk reports and data analysis pipelines can frequently be expressed as a sequence of relational algebra operations, we propose a replacement of this traditional approach with a data summarization algebra that helps to determine the impact of errors. It works by defining data analysis as a necessarily complete summarization of a dataset, possibly in multiple ways along multiple dimensions. We also present a description to better communicate how complete summarizations of the input data may facilitate easier debugging and more efficient development of analysis pipelines. This approach can also be described as a generalization of axiomatic theories of accounting to data analytics, and is thus dubbed data accounting.\n\n  We also propose formal properties that allow for transparent assertions about the impact of individual records on the aggregated data and ease debugging by allowing one to find minimal changes that change behaviour"
+"---\nabstract: 'The famous Hadwiger-Nelson problem asks for the minimum number of colors needed to color the points of the Euclidean plane so that no two points unit distance apart are assigned the same color. In this note we consider a variant of the problem in Minkowski metric planes, where the unit circle is a regular polygon of even and at most $22$ vertices. We present a simple lattice\u2013sublattice coloring scheme that uses $6$ colors, proving that the chromatic number of the Minkowski planes above are at most 6. This result is new for regular polygons having more than 8 vertices.'\nauthor:\n- '*Panna Geh\u00e9r [^1]*'\ntitle: |\n Note on the chromatic number of Minkowski planes:\\\n the regular polygon case\n---\n\nIntroduction\n============\n\nIn 1950, Nelson raised the following question: What is the minimum number of colors that are needed to color the Euclidean plane so that no two points of the same color determine unit distance? We refer to such a coloring with $k$ color classes as a proper $k$-coloring. Thus Nelson\u2019s question asks for the smallest $k$ value such that the plane can be properly $k$-colored. This value is known as the chromatic number of the Euclidean plane,"
+"---\nabstract: 'We present a general lower bound for the fundamental tone for the $p$-Laplacian on Riemannian manifolds carrying a special kind of function. We then apply our result to the cases of negatively curved simply connected manifolds, a class of warped product manifolds and for a class of Riemannian submersions.'\naddress:\n- ' Universidade Federal do Piau\u00ed, Picos, PI, Brazil'\n- 'Universidade Federal de Alagoas, Macei\u00f3, AL, Brazil'\nauthor:\n- 'Francisco G. de S. Carvalho'\n- 'Marcos P. de A. Cavalcante'\nbibliography:\n- 'biblio.bib'\ndate:\n- \n- \ntitle: 'On the fundamental tone of the $p$-Laplacian on Riemannian manifolds and applications'\n---\n\nIntroduction {#intro}\n============\n\nLet $\\Omega$ be a bounded domain in a $n$-dimensional Riemannian manifold $M$, $n\\geq 2$. The first eigenvalue of the Dirichlet problem for the Laplace-Beltrami operator on $\\Omega$ can be characterized variationally as $$\\lambda_ 1(\\Omega) = \\inf \\bigg\\{\\frac{\\int_\\Omega \\|\\nabla u\\|^2 \\, dM}{\\int_\\Omega u^2\\, dM}:\nu\\in W^{1,2}_0(\\Omega), u\\neq 0 \\bigg \\},$$ where $W^{1,2}_0(\\Omega)$ denotes the Sobolev space with trace in $\\partial \\Omega$. If $M$ is not compact, *the first eigenvalue* of $M$ is defined as the limit $$\\lambda_1(M)=\\lim_{k\\to \\infty}\\lambda_1(\\Omega_k),$$ if it exists, where $\\Omega_1\\subset \\Omega_2\\subset\\cdots$ is an exhaustion of $M$. It is easy to see that this"
+"---\nabstract: 'Specification inference techniques aim at (automatically) inferring a set of assertions that capture the exhibited software behaviour by generating and filtering assertions through dynamic test executions and mutation testing. Although powerful, such techniques are computationally expensive due to a large number of assertions, test cases and mutated versions that need to be executed. To overcome this issue, we demonstrate that a small subset, i.e., 12.95% of the mutants used by mutation testing tools is sufficient for assertion inference, this subset is significantly different, i.e., 71.59% different from the subsuming mutant set that is frequently cited by mutation testing literature, and can be statically approximated through a learning based method. In particular, we propose [[*AIMS*]{}]{}, an approach that selects [[*Assertion Inferring Mutants*]{}]{}, i.e., a set of mutants that are well-suited for assertion inference, with 0.58 MCC, 0.79 Precision, and 0.49 Recall. We evaluate [[*AIMS*]{}]{}on 46 programs and demonstrate that it has comparable inference capabilities with full mutation analysis (misses 12.49% of assertions) while significantly limiting execution cost (runs 46.29 times faster). A comparison with randomly selected sets of mutants, shows the superiority of [[*AIMS*]{}]{}by inferring 36% more assertions while requiring approximately equal amount of execution time. We also show"
+"---\nabstract: 'Tidal disruption events (TDEs) are transient, multi-wavelength events in which a star is ripped apart by a supermassive black hole. Observations show that in a small fraction of TDEs, a short-lived, synchrotron emitting jet is produced. We observed the newly discovered TDE AT2022cmc with a slew of radio facilities over the first 100 days after its discovery. The light curve from the AMI\u2013LA radio interferometer shows day-timescale variability which we attribute to a high brightness temperature emitting region as opposed to scintillation. We measure a brightness temperature of 2$\\times$10^15^K, which is unphysical for synchrotron radiation. We suggest that the measured high brightness temperature is a result of relativistic beaming caused by a jet being launched at velocities close to the speed of light along our line of sight. We infer from day-timescale variability that the jet associated with AT2022cmc has a relativistic Doppler factor of at least 16, which corresponds to a bulk Lorentz factor of at least 8 if we are observing the jet directly on axis. Such an inference is the first conclusive evidence that the radio emission observed from some TDEs is from relativistic jets because it does not rely on an outflow model. We"
+"---\nabstract: 'In this paper we extensively explore the suitability of YOLO architectures to monitor the process flow across a Fischertechnik industry 4.0 application. Specifically, different YOLO architectures in terms of size and complexity design along with different prior-shapes assignment strategies are adopted. To simulate the real world factory environment, we prepared a rich dataset augmented with different distortions that highly enhance and in some cases degrade our image qualities. The degradation is performed to account for environmental variations and enhancements opt to compensate the color correlations that we face while preparing our dataset. The analysis of our conducted experiments shows the effectiveness of the presented approach evaluated using different measures along with the training and validation strategies that we tailored to tackle the unavoidable color correlations that the problem at hand inherits by nature.'\nauthor:\n- 'Slavomira Schneidereit, Ashkan Mansouri Yarahmadi'\n- Toni Schneidereit\n- 'Michael Breu[\u00df]{}'\n- Marc Gebauer\nbibliography:\n- 'ref.bib'\ntitle: 'YOLO-based Object Detection in Industry 4.0 Fischertechnik Model Environment'\n---\n\nIntroduction {#sec:Introduction}\n============\n\nWithin the context of computer vision, the task of simultaneously performing object localisation and recognition in an given image is perhaps one of the most interesting and important tasks with a broad"
+"---\nabstract: 'The StyleGAN family succeed in high-fidelity image generation and allow for flexible and plausible editing of generated images by manipulating the semantic-rich latent style space. However, projecting a real image into its latent space encounters an inherent trade-off between inversion quality and editability. Existing encoder-based or optimization-based StyleGAN inversion methods attempt to mitigate the trade-off but see limited performance. To fundamentally resolve this problem, we propose a novel two-phase framework by designating two separate networks to tackle editing and reconstruction respectively, instead of balancing the two. Specifically, in Phase I, a $\\mathcal{W}$-space-oriented StyleGAN inversion network is trained and used to perform image inversion and editing, which assures the editability but sacrifices reconstruction quality. In Phase II, a carefully designed rectifying network is utilized to rectify the inversion errors and perform ideal reconstruction. Experimental results show that our approach yields near-perfect reconstructions without sacrificing the editability, thus allowing accurate manipulation of real images. Further, we evaluate the performance of our rectifying network, and see great generalizability towards unseen manipulation types and out-of-domain images.'\nauthor:\n- 'Bingchuan Li, Tianxiang Ma, Peng Zhang, Miao Hua, Wei Liu, Qian He, Zili Yi[^1]'\n- Author Name\nbibliography:\n- 'aaai23.bib'\ntitle:\n- 'ReGANIE: Rectifying"
+"---\nabstract: 'The analysis of data stored in multiple sites has become more popular, raising new concerns about the security of data storage and communication. Federated learning, which does not require centralizing data, is a common approach to preventing heavy data transportation, securing valued data, and protecting personal information protection. Therefore, determining how to aggregate the information obtained from the analysis of data in separate local sites has become an important statistical issue. The commonly used averaging methods may not be suitable due to data nonhomogeneity and incomparable results among individual sites, and applying them may result in the loss of information obtained from the individual analyses. Using a sequential method in federated learning with distributed computing can facilitate the integration and accelerate the analysis process. We develop a data-driven method for efficiently and effectively aggregating valued information by analyzing local data without encountering potential issues such as information security and heavy transportation due to data communication. In addition, the proposed method can preserve the properties of classical sequential adaptive design, such as data-driven sample size and estimation precision when applied to generalized linear models. We use numerical studies of simulated data and an application to COVID-19 data collected from"
+"---\nabstract: 'Trustworthy machine learning aims at combating distributional uncertainties in training data distributions compared to population distributions. Typical treatment frameworks include the Bayesian approach, (min-max) distributionally robust optimization (DRO), and regularization. However, two issues have to be raised: 1) All these methods are biased estimators of the true optimal cost; 2) the prior distribution in the Bayesian method, the radius of the distributional ball in the DRO method, and the regularizer in the regularization method are difficult to specify. This paper studies a new framework that unifies the three approaches and that addresses the two challenges mentioned above. The asymptotic properties (e.g., consistency and asymptotic normalities), non-asymptotic properties (e.g., unbiasedness and generalization error bound), and a Monte\u2013Carlo-based solution method of the proposed model are studied. The new model reveals the trade-off between the robustness to the unseen data and the specificity to the training data.'\nauthor:\n- |\n Shixiong Wang\\\n Institute of Data Science\\\n National University of Singapore, Singapore\\\n `s.wang@u.nus.edu`\\\n- |\n Haowei Wang\\\n Department of Industrial Systems Engineering and Management\\\n National University of Singapore, Singapore\\\n `haowei_wang@u.nus.edu`\\\n- |\n Jean Honorio\\\n Department of Computer Science\\\n Purdue University, USA\\\n `jhonorio@purdue.edu`\nbibliography:\n- 'References.bib'\ntitle: '**Learning Against Distributional Uncertainty: On the"
+"---\nabstract: 'The graph removal lemma is a fundamental result in extremal graph theory which says that for every fixed graph $H$ and $\\varepsilon > 0$, if an $n$-vertex graph $G$ contains $\\varepsilon n^2$ edge-disjoint copies of $H$ then $G$ contains $\\delta n^{v(H)}$ copies of $H$ for some $\\delta = \\delta(\\varepsilon,H) > 0$. The current proofs of the removal lemma give only very weak bounds on $\\delta(\\varepsilon,H)$, and it is also known that $\\delta(\\varepsilon,H)$ is not polynomial in $\\varepsilon$ unless $H$ is bipartite. Recently, Fox and Wigderson initiated the study of minimum degree conditions guaranteeing that $\\delta(\\varepsilon,H)$ depends polynomially or linearly on $\\varepsilon$. In this paper we answer several questions of Fox and Wigderson on this topic.'\nauthor:\n- 'Lior Gishboliner[^1]'\n- Zhihan Jin\n- Benny Sudakov\nbibliography:\n- 'library.bib'\ntitle: The Minimum Degree Removal Lemma Thresholds \n---\n\nIntroduction\n============\n\nThe graph removal lemma, first proved by Ruzsa and Szemer\u00e9di [@ruzsatriple], is a fundamental result in extremal graph theory. It also have important applications to additive combinatorics and property testing. The lemma states that for every fixed graph $H$ and $\\varepsilon > 0$, if an $n$-vertex graph $G$ contains $\\varepsilon n^2$ edge-disjoint copies of $H$ then $G$ it contains $\\delta"
+"---\nabstract: 'In this work we consider the KAM renormalizability problem for small pseudodifferential perturbations of the semiclassical isochronous transport operator with Diophantine frequencies on the torus. Assuming that the symbol of the perturbation is real analytic and globally bounded, we prove convergence of the quantum Lindstedt series and describe completely the set of semiclassical measures and quantum limits of the renormalized system. Each of these measures is given by symplectic deformation of the Haar measure on an invariant torus for the unperturbed classical system.'\naddress: 'Laboratoire de Math\u00e9matiques Jean Leray, Nantes Universit\u00e9. UMR CNRS 6629, 2 rue de la Houssini\u00e8re, 44322 Nantes Cedex 03, France.'\nauthor:\n- V\u00edctor Arnaiz\nbibliography:\n- 'Referencias.bib'\ntitle: Convergence of quantum Lindstedt series and semiclassical renormalization\n---\n\nIntroduction and main results\n=============================\n\nThe present work is concerned with the *renormalization problem* for quantum Hamiltonian systems. Let $\\mathbb{T}^d := {\\mathbb{R}}^d/2\\pi \\mathbb{Z}^d$ be the flat torus, we consider the linear Hamiltonian $\\mathcal{L}_\\omega : T^*\\mathbb{T}^d \\to {\\mathbb{R}}$ defined by $$\\label{classical_unperturbed_hamiltonian}\n\\mathcal{L}_\\omega (x,\\xi) = \\omega \\cdot \\xi, \\quad (x,\\xi) \\in \\mathbb{T}^d \\times {\\mathbb{R}}^d \\simeq T^*\\mathbb{T}^d,$$ where the vector of frequencies $\\omega$ satisfies the Diophantine condition . The renormalization problem in the classical framework [@Elia89; @Gall82; @Gen96] wonders if,"
+"---\nabstract: 'Weakly supervised image segmentation approaches in the literature usually achieve high segmentation performance using tight bounding box supervision and decrease the performance greatly when supervised by loose bounding boxes. However, compared with loose bounding box, it is much more difficult to acquire tight bounding box due to its strict requirements on the precise locations of the four sides of the box. To resolve this issue, this study investigates whether it is possible to maintain good segmentation performance when loose bounding boxes are used as supervision. For this purpose, this work extends our previous parallel transformation based multiple instance learning (MIL) for tight bounding box supervision by integrating an MIL strategy based on polar transformation to assist image segmentation. The proposed polar transformation based MIL formulation works for both tight and loose bounding boxes, in which a positive bag is defined as pixels in a polar line of a bounding box with one endpoint located inside the object enclosed by the box and the other endpoint located at one of the four sides of the box. Moreover, a weighted smooth maximum approximation is introduced to incorporate the observation that pixels closer to the origin of the polar transformation are"
+"---\nabstract: 'Millions of smart contracts have been deployed onto the Ethereum platform, posing potential attack subjects. Therefore, analyzing contract binaries is vital since their sources are unavailable, involving identification comprising function entry identification and detecting its boundaries. Such boundaries are critical to many smart contract applications, e.g. reverse engineering and profiling. Unfortunately, it is challenging to identify functions from these stripped contract binaries due to the lack of internal function call statements and the compiler-inducing instruction reshuffling. Recently, several existing works excessively relied on a set of handcrafted heuristic rules which impose several faults. To address this issue, we propose a novel neural network-based framework for EVM bytecode Function Entries and Boundaries Identification (neural-FEBI) that does not rely on a fixed set of handcrafted rules. Instead, it used a two-level bi-Long Short-Term Memory network and a Conditional Random Field network to locate the function entries. The suggested framework also devises a control flow traversal algorithm to determine the code segments reachable from the function entry as its boundary. Several experiments on 38,996 publicly available smart contracts collected as binary demonstrate that neural-FEBI confirms the lowest and highest F1-scores for the function entries identification task across different datasets of 88.3"
+"---\nabstract: 'The surging adoption of electric vehicles (EV) calls for accurate and efficient approaches to coordinate with the power grid operation. By being responsive to distribution grid limits and time-varying electricity prices, EV charging stations can minimize their charging costs while aiding grid operation simultaneously. In this study, we investigate the economic benefit of vehicle-to-grid (V2G) using real-time price data from New York State and a real-world charging network dataset. We incorporate nonlinear battery models and price uncertainty into the V2G management design to provide a realistic estimation of cost savings from different V2G options. The proposed control method is computationally tractable when scaling up to real-world applications. We show that our proposed algorithm leads to an average of 35% charging cost savings compared to uncontrolled charging when considering unidirectional charging, and bi-directional V2G enables additional 18$\\%$ cost savings compared to unidirectional smart charging. Our result also shows the importance of using more accurate nonlinear battery models in V2G controllers and evaluating the cost of price uncertainties over V2G.'\nauthor:\n- 'Joshua Jaworski, Ningkun Zheng,\u00a0, Matthias Preindl,\u00a0, Bolun\u00a0Xu,\u00a0 [^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'literature\\_v1.bib'\ntitle: |\n Vehicle-to-Grid Fleet Service Provision\\\n considering Nonlinear Battery Behaviors\n---\n\nEnergy"
+"---\nabstract: 'The thermodynamic evolution of Coronal Mass Ejections (CMEs) in the inner corona ($\\leq$ 1.5 R$_{sun} $) is not yet completely understood. In this work, we study the evolution of thermodynamic properties of a CME core observed in the inner corona on July 20, 2017, by combining the MLSO/K-Cor white-light and the MLSO/CoMP Fe XIII 10747 [\u00c5]{} line spectroscopic data. We also estimate the emission measure weighted temperature (T$_{EM}$) of the CME core by applying the Differential Emission Measure (DEM) inversion technique on the SDO/AIA six EUV channels data and compare it with the effective temperature (T$_{eff}$) obtained using Fe XIII line width measurements. We find that the T$_{eff}$ and T$_{EM}$ of the CME core show similar variation and remain almost constant as the CME propagates from $\\sim$1.05 to 1.35 R$_{sun}$. The temperature of the CME core is of the order of million-degree kelvin, indicating that it is not associated with a prominence. Further, we estimate the electron density of this CME core using K-Cor polarized brightness (pB) data and found it decreasing by a factor of $\\sim$3.6 as the core evolves. An interesting finding is that the temperature of the CME core remains almost constant despite expected adiabatic"
+"---\nauthor:\n- Simone Magaletti\n- Ludovic Mayer\n- Xuan Phuc Le\n- Thierry Debuisschert\ntitle: Magnetic sensitivity enhancement via polarimetric excitation and detection of an ensemble of NV centers\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe negatively charged nitrogen-vacancy center (NV) is a spin-1 color center in diamond composed by a nitrogen atom and a carbon vacancy in two adjacent positions of the lattice (\\[fig:fig2\\]a). It presents remarkable spin-dependent optical properties and a long spin-coherence time (ms) at room-temperature that allow for optical polarization, optical read-out and coherent manipulation of its electron spin [@Doherty2013]. These features make it an appealing tool for quantum technologies and in particular for magnetic field sensing [@Rondin2014], where magnetic field sensitivity as low as a few pT$\\cdot$Hz$^{-1/2}$ have been demonstrated [@Zhang2021; @Wolf2015; @Wang2022]. The NV center magnetic sensitivity \u00a0scales as [@Barry2020]: $$\\eta\\propto\\frac{\\Delta\\nu}{C\\sqrt{S_0}}\n\\label{eq:SensitivityScaling}$$ where () is the linewidth of the spin resonance [@Bauch2018], (S~0~) is the detected photoluminescence (PL) and (C) is the contrast of the measurement, namely the relative photoluminescence variation between the system on resonance and out of resonance. Since the sensitivity scales with the square root of S~0~, magnetic field measurements that do not require a nanometer-scale resolution are usually performed with"
+"---\nabstract: 'Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs. This has led to a large body of work analyzing the capabilities and limitations of these models, particularly pertaining to their representation and extrapolation capacity. We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs, by answering the question: how do GNNs behave as the number of graph nodes become very large? Under mild assumptions, we show that when we draw graphs of increasing size from the Erd\u0151s-R[\u00e9]{}nyi model, the probability that such graphs are mapped to a particular output by a class of GNN classifiers tends to either *zero* or to *one*. This class includes the popular graph convolutional network architecture. The result establishes \u2018zero-one laws\u2019 for these GNNs, and analogously to other convergence laws, entails theoretical limitations on their capacity. We empirically verify our results, observing that the theoretical asymptotic limits are evident already on relatively small graphs.'\nauthor:\n- |\n Sam Adam-Day[^1]\\\n Department of Mathematics\\\n University of Oxford\\\n Oxford, UK\\\n `sam.adam-day@cs.ox.ac.uk`\\\n Theodor-Mihai Iliant\\\n Department of Computer Science\\\n University of Oxford\\\n Oxford, UK\\\n `theodor-mihai.iliant@lmh.ox.ac.uk`\\\n smail [\u0130]{}lkan Ceylan\\\n Department of Computer Science\\\n University of Oxford\\\n Oxford,"
+"---\nabstract: 'In this paper, we propose a bilateral peer-to-peer (P2P) energy trading scheme under single-contract and multi-contract market setups, both as an assignment game, a special class of coalitional games. [The proposed market formulation allows for efficient computation of a market equilibrium while keeping the desired economic properties offered by the coalitional games. Furthermore, our market model allows buyers to have heterogeneous preferences (product differentiation) over the energy sellers, which can be economic, social, or environmental. To address the problem of scalability in coalitional games, we design a novel distributed negotiation mechanism that utilizes the geometric structure of the equilibrium solution to improve the convergence speed. Our algorithm enables market participants (prosumers) to reach a consensus on a set of \u201cstable\" and \u201cfair\" bilateral contracts which encourages prosumer participation.]{} The negotiation process is executed with virtually minimal information requirements on a time-varying communication network that in turn preserves privacy. We use operator-theoretic tools to rigorously prove its convergence. [Numerical simulations illustrate the benefits of our negotiation protocol and show that the average execution time of a negotiation step is much faster than the benchmark.]{}'\nbibliography:\n- 'bibliography.bib'\n---\n\nIntroduction\n============\n\nof power systems is rapidly materializing under the smart"
+"---\nabstract: 'We present a source of states for Quantum Key Distribution (QKD) based on a modular design exploiting the iPOGNAC, a stable, low-error, and calibration-free polarization modulation scheme, for both intensity and polarization encoding. This source is immune to the security vulnerabilities of other state sources such as side channels and some quantum hacking attacks. Furthermore, our intensity modulation scheme allows full tunability of the intensity ratio between the decoy and signal states, and mitigates patterning effects. The source was implemented and tested at the near-infrared optical band around 800 nm, of particular interest for satellite-based QKD. Remarkably, the modularity of the source simplifies its development, testing, and qualification, especially for space missions. For these reasons, our work paves the way for the development of the second generation of QKD satellites that can guarantee excellent performances at higher security levels.'\nauthor:\n- Federico\u00a0Berra\n- Costantino\u00a0Agnesi\n- Andrea\u00a0Stanco\n- Marco\u00a0Avesani\n- Sebastiano\u00a0Cocchi\n- Paolo\u00a0Villoresi\n- Giuseppe\u00a0Vallone\nbibliography:\n- 'modularQKD.bib'\ntitle: 'Modular source for near-infrared quantum communication'\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nQuantum Key Distribution (QKD)\u00a0[@Gisin2002; @Pirandola2019rev] is essential to ensure the safe exchange of sensitive data between distant parties. Establishing its security"
+"---\nabstract: 'In this paper we study $g$-fractional diffusion on bounded domains in $\\mathbb{R}^d$ with absorbing boundary conditions. A new general and explicit representation of the solution is obtained. We study the first-passage time distribution, showing the dependence on the particular choice of the function $g$. Then, we specialize the analysis to the interesting case of a rectangular domain. Finally, we briefly discuss the connection of this general theory with the physical application to the so-called fractional Dodson diffusion model, recently discussed in the literature.'\naddress:\n- '$^1$ Istituto dei Sistemi Complessi, Consiglio Nazionale delle Ricerche, P.le A. Moro 2, 00185 Roma, Italy'\n- '$^2$ Dipartimento di Fisica, Sapienza Universit\u00e0 di Roma, P.le A. Moro 2, 00185 Roma, Italy'\n- '$^3$ Section of Mathematics, International Telematic University Uninettuno, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy'\nauthor:\n- 'L. Angelani$^{1,2}$ and R. Garra$^{3}$'\ntitle: 'G-fractional diffusion on bounded domains in $\\mathbb{R}^d$'\n---\n\nIntroduction\n============\n\nTime-fractional diffusion processes are widely studied in the literature for their relation to continuous-time random walks (see, e.g., [@ksbook]) and for pervasive applications in various fields, from applied physics to pure mathematics and probability (see, for example, the recent monograph [@lenzi] and references therein). More recently,"
+"---\nabstract: 'As the number of open and shared scientific datasets on the Internet increases under the open science movement, efficiently retrieving these datasets is a crucial task in information retrieval (IR) research. In recent years, the development of large models, particularly the pre-training and fine-tuning paradigm, which involves pre-training on large models and fine-tuning on downstream tasks, has provided new solutions for IR match tasks. In this study, we use the original BERT token in the embedding layer, improve the Sentence-BERT model structure in the model layer by introducing the SimCSE and K-Nearest Neighbors method, and use the cosent loss function in the optimization phase to optimize the target output. Our experimental results show that our model outperforms other competing models on both public and self-built datasets through comparative experiments and ablation implementations. This study explores and validates the feasibility and efficiency of pre-training techniques for semantic retrieval of Chinese scientific datasets.'\nauthor:\n- |\n Xintao Chu\\\n College of Computer Science and Engineering\\\n North Minzu University\\\n Yinchuan\\\n `xtchu@stu.nun.edu.cn`\\\n Jianping Liu\\\n College of Computer Science and Engineering\\\n North Minzu University\\\n Yinchuan\\\n `liujianping01@nmu.edu.cn`\\\n Jian Wang\\\n Agricultural Information Institute\\\n Chinese Academy of Agricultural Sciences\\\n Beijing\\\n `wangjian01@caas.cn`\\\n Xiaofeng Wang\\\n College of Computer Science"
+"---\nabstract: 'We briefly review the origins and development of Borsuk\u2019s Theory of Shapes and the Multivalued Shape of Sanjurjo. We use a construction over metric compacta using hyperspaces to define a finite type version.'\nauthor:\n- Diego Mond\u00e9jar\nbibliography:\n- 'bibFinmul.bib'\ntitle: '**[On a finite type Multivalued Shape]{}**'\n---\n\n*Dedicated with respect to Professor J.M.R. Sanjurjo for his 70th birthday*\n\nIntroduction {#sec:intro}\n============\n\nShape Theory is a suitable extension of homotopy theory for topological spaces with bad local properties, where this theory does not give relevant information. The paradigmatic example is the Warsaw circle $\\mathcal{W}$ of Figure \\[warsawcircle\\]. It can be defined as the graph of function $f(x)=\\sin\\left(\\frac{1}{x}\\right)$ in the interval $(0,\\frac{2}{\\pi}]$ adding its closure (that is, the segment joining $(0,-1)$ and $(0,1)$) and closing the space by any simple (not intersecting itself or the rest of the space) arc joining the points $(0,-1)$ and $(\\frac{\\pi}{2},1)$.\n\nplot (, [sin((1/)r)]{}); (0,1)\u2013(0,-1.5)\u2013(,-1.5)\u2013(,[sin((1/)r)]{});\n\nIt is readily seen that the fundamental group of $\\mathcal{W}$ is trivial, since there are not continuous maps from $S^1$ to $\\mathcal{W}$ appart from the class of nullhomotopic maps. Moreover, so are all its homology and homotopy groups. But it is also easy to see that $\\mathcal{W}$ is not"
+"---\nabstract: 'A derangement is a permutation with no fixed point, and a nonderangement is a permutation with at least one fixed point. There is a one-term recurrence for the number of derangements of\u00a0$n$\u00a0elements, and we describe a bijective proof of this recurrence which can be found using a recursive map. We then show the combinatorial interpretation of this bijection and how it compares with other known bijections, and show how this gives an involution on $\\mathfrak{S}_n$. Nonderangements satisfy a similar recurrence. We convert the bijective proof of the one-term identity for derangements into a bijective proof of the one-term identity for nonderangements.'\nauthor:\n- Melanie Ferreri\nbibliography:\n- 'references.bib'\ntitle: Naturally emerging maps for derangements and nonderangements\n---\n\nIntroduction\n============\n\nA *derangement* is a permutation $\\s \\in \\S_n$ such that for all $i \\in [n]$, $$\\s(i) \\neq i,$$ i.e. a permutation which does not fix any element. We denote by $D_n$ the set of derangements on $n$ elements, and let $d_n = |D_n|$. Let $E_n$ be the set of permutations of $n$ elements with exactly one fixed point, and let $e_n = |E_n|$. Two well-known recurrence relations for counting derangements are $$\\label{eqn:twoterm}\n d_n = (n-1)d_{n-1} + (n-1)d_{n-2}$$"
+"---\nabstract: 'A likelihood function on a smooth very affine variety gives rise to a twisted de Rham complex. We show how its top cohomology vector space degenerates to the coordinate ring of the critical points defined by the likelihood equations. We obtain a basis for cohomology from a basis of this coordinate ring. We investigate the dual picture, where twisted cycles correspond to critical points. We show how to expand a twisted cocycle in terms of a basis, and apply our methods to Feynman integrals from physics.'\nauthor:\n- 'Saiei-Jaeyeong Matsubara-Heo and Simon Telen'\nbibliography:\n- 'references.bib'\ntitle: Twisted Cohomology and Likelihood Ideals\n---\n\nIntroduction\n============\n\nVery affine varieties are closed subvarieties of an algebraic torus. They have applications in algebraic statistics [@huh2014likelihood] and particle physics [@mizera2018scattering]. We study smooth such varieties given by hypersurface complements in the algebraic torus. Fix $\\ell$ Laurent polynomials $f_1, \\ldots, f_\\ell \\in \\mathbb{C}[x_1^{\\pm 1}, \\ldots, x_n^{\\pm 1}]$ in $n$ variables. Localizing at the product $f_1 \\cdots f_\\ell$ gives the very affine variety $$\\label{eq:defX}\n X \\, = \\, \\{ x \\in (\\mathbb{C}^*)^n \\, : \\, f_i(x) \\neq 0 , \\text{ for all } i\\} \\, = \\, (\\mathbb{C}^*)^n \\setminus V(f_1 \\cdots f_\\ell).$$ This is"
+"---\nabstract: 'While neural text-to-speech (TTS) has achieved human-like natural synthetic speech, multilingual TTS systems are limited to resource-rich languages due to the need for paired text and studio-quality audio data. This paper proposes a method for zero-shot multilingual TTS using text-only data for the target language. The use of text-only data allows the development of TTS systems for low-resource languages for which only textual resources are available, making TTS accessible to thousands of languages. Inspired by the strong cross-lingual transferability of multilingual language models, our framework first performs masked language model pretraining with multilingual text-only data. Then we train this model with paired data in a supervised manner, while freezing a language-aware embedding layer. This allows inference even for languages not included in the paired data but present in the text-only data. Evaluation results demonstrate highly intelligible zero-shot TTS with a character error rate of less than 12% for an unseen language.'\nauthor:\n- Takaaki Saeki$^1$\n- Soumi Maiti$^2$\n- Xinjian Li$^2$\n- Shinji Watanabe$^2$\n- |\n \\\n Shinnosuke Takamichi$^1$ Hiroshi Saruwatari$^1$ $^1$The University of Tokyo, Japan\\\n $^2$Carnegie Mellon University, USA takaaki\\_saeki@ipc.i.u-tokyo.ac.jp, {smaiti, swatanab}@andrew.cmu.edu\nbibliography:\n- 'ijcai23.bib'\ntitle: 'Learning to Speak from Text: Zero-Shot Multilingual
+"---\nabstract: 'In previous theoretical studies of phonon-mediated superconductors, the electron-phonon coupling is treated by solving the Migdal-Eliashberg equations under the bare vertex approximation, whereas the effect of Coulomb repulsion is incorporated by introducing one single pseudopotential parameter. These two approximations become unreliable in low carrier-density superconductors in which the vertex corrections are not small and the Coulomb interaction is poorly screened. Here, we shall go beyond these two approximations and employ the Dyson-Schwinger equation approach to handle the interplay of electron-phonon interaction and Coulomb interaction in a self-consistent way. We first derive the exact Dyson-Schwinger integral equation of the full electron propagator. Such an equation contains several unknown single-particle propagators and fermion-boson vertex functions, and thus seems to be intractable. To solve this difficulty, we further derive a number of identities satisfied by all the relevant propagators and vertex functions and then use these identities to show that the exact Dyson-Schwinger equation of electron propagator is actually self-closed. This self-closed equation takes into account not only all the vertex corrections, but also the mutual influence between electron-phonon interaction and Coulomb interaction. Solving it by using proper numerical methods leads to the superconducting temperature $T_{c}$ and other quantities. As an"
+"---\nabstract: 'Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works may be limited by the inflexible structures of PLMs and the insufficient utilization of PLMs. To alleviate these problems, we propose the hierarchical knowledge distillation (HKD) on the continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs cross-modal knowledge distillation with contrastive loss at the acoustic level and knowledge distillation with regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively.'\naddress: |\n $^1$Institute of Automation, Chinese Academy of Sciences\\\n $^2$School of Artificial Intelligence, University of Chinese Academy of Sciences\\\n $^3$School of Future Technology, University of Chinese Academy of Sciences \nbibliography:\n- 'mybib.bib'\ntitle: 'Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation'\n---\n\n**Index Terms**: continuous integrate-and-fire, knowledge distillation, contrastive learning, pre-trained language models\n\nIntroduction\n============\n\nEnd-to-end (E2E) models have recently made remarkable progress on automatic speech recognition (ASR)"
+"---\nabstract: 'Let ${\\mathcal{M}}$ be an invariant subvariety in the moduli space of translation surfaces. We contribute to the study of the dynamical properties of the horocycle flow on ${\\mathcal{M}}$. In the context of dynamics on the moduli space of translation surfaces, we introduce the notion of a \u2018weak classification of horocycle invariant measures\u2019 and we study its consequences. Among them, we prove genericity of orbits and related uniform equidistribution results, asymptotic equidistribution of sequences of pushed measures, and counting of saddle connection holonomies. As an example, we show that invariant varieties of rank one, Rel-dimension one and related spaces obtained by adding marked points satisfy the \u2018weak classification of horocycle invariant measures\u2019. Our results extend prior results obtained by Eskin-Masur-Schmoll, Eskin-Marklof-Morris, and Bainbridge-Smillie-Weiss.'\naddress:\n- 'University of Utah [chaika@math.utah.edu ]{}'\n- 'Dept. of Mathematics, Tel Aviv University, Tel Aviv, Israel [barakw@post.tau.ac.il]{}'\n- 'Dept. of Mathematics, Tel Aviv University, Tel Aviv, Israel [florentygouf@mail.tau.ac.il]{}'\nauthor:\n- Jon Chaika\n- Barak Weiss\n- Florent Ygouf\nbibliography:\n- 'Rel\\_weak\\_classification.bib'\ntitle: 'Horocycle dynamics in rank one invariant subvarieties I: weak measure classification and equidistribution'\n---\n\nIntroduction\n============\n\nAny stratum of translation surfaces ${\\mathcal{H}}$ is endowed with an action of the group $G {{\\, \\stackrel{\\mathrm{def}}{=}\\,"
+"In this section we provide convergence rate and sample complexity results for Algorithm \\[alg:PG\\_MAG\\]. We extend the MLMC analysis of @dorfman2022 to the actor-critic setting, where we combine it with the two-timescale finite-time analysis of @wu2020finite to obtain non-asymptotic convergence guarantees for MAC (cf. Algorithm \\[alg:PG\\_MAG\\]). Salient features of our approach: **(1)** it avoids uniform ergodicity assumptions required in previous finite-time analyses [@zou2019finite; @wu2020finite; @chen2022finite]; **(2)** it explicitly characterizes convergence rate dependence on the mixing times encountered during training; **(3)** it (i) clarifies the trade-offs between mixing times and MLMC rollout length $T_{\\max}$, and (ii) extends the standard analysis to handle additional sources of bias in the MLMC estimator, both of which were missing from the analysis of @dorfman2022; **(4)** it leverages modified Adagrad stepsizes to avoid the slower convergence rates of previous two-timescale analyses [@wu2020finite] (cf. Theorem \\[thm:convergence\\_rate\\]). The rest of this section is structured as follows. We first outline standard assumptions (cf. Sec. \\[aaaumptions\\]) from the literature and provide some preliminary results. Second, we analyze the policy gradient norm (cf. Sec. \\[actor\\_convergence\\]) associated with Algorithm \\[alg:PG\\_MAG\\], which provides a preliminary convergence rate and characterizes its dependence on the error arising from the critic estimation procedure, the MLMC bias"
+"---\nabstract: 'In this paper, we develop the mathematical framework to describe the physical phenomenon behind the equilibrium configuration joining two antiferromagnetic domains. We first define the total energy of the system and deduce the governing equations by minimizing it with respect to the field variables. Then, we solve the resulting system of nonlinear PDEs together with proper initial and boundary conditions by varying the orientation of the 90$^{\\circ}$ domain wall (DW) configuration along the sample. Finally, the angular dependence of elastic and magnetoelastic energies as well as of incompatibility-driven volume effects is computed.'\nauthor:\n- Pierandrea Vergallo\n- Bennet Karetta\n- Giancarlo Consolo\n- 'Olena Gomonay [^1]'\nbibliography:\n- 'Ref.bib'\ndate: \ntitle: 'Domain-wall orientation in antiferromagnets controlled by magnetoelastic effects'\n---\n\nIntroduction {#sec::introduction}\n============\n\nAntiferromagnets are considered as prospective materials for spintronic applications because of their fast internal magnetic dynamics and robustness with respect to the external magnetic field. Those antiferromagnets that are suitable for practical applications due to high ordering temperature and high, terahertz, frequencies are also known for pronounced magnetoelastic coupling. Magnetoelastic effects are usually neglected while considering antiferromagnetic dynamics and switching. However, they can pin the domain walls [@Gomonay:PhysRevLett.91.237205], modify magnon spectra [@Bossini:PhysRevLett.127.077202; @Gomonay_2021; @Bauer:PhysRevB.104.014403],
+"---\nabstract: 'Prior work has provided strong evidence that, within organizational settings, teams that bring a diversity of information and perspectives to a task are more effective than teams that do not. If this form of *informational diversity* confers performance advantages, why do we often see largely homogeneous teams in practice? One canonical argument is that the benefits of informational diversity are in tension with *affinity bias*. To better understand this tension, we analyze a sequential model of team formation in which individuals care about their team\u2019s performance (captured in terms of accurately predicting some future outcome based on a set of features) but experience a cost as a result of interacting with teammates who use different approaches to the prediction task. Our results formalize a fundamental limitation of utility-based motivations to drive informational diversity in organizations and hint at interventions that may improve informational diversity and performance simultaneously.'\nauthor:\n- |\n \\\n Carnegie Mellon University\\\n [hheidari@cmu.edu](hheidari@cmu.edu)\\\n- |\n \\\n Cornell University\\\n [sbarocas@cornell.edu](sbarocas@cornell.edu)\\\n- |\n \\\n Cornell University\\\n [kleinberg@cornell.edu](kleinberg@cornell.edu)\\\n- |\n \\\n Cornell University\\\n [karen.levy@cornell.edu](karen.levy@cornell.edu)\\\nbibliography:\n- 'biblio.bib'\ntitle: |\n Informational Diversity and Affinity Bias\\\n in Team Growth Dynamics\n---\n\nIntroduction\n============\n\nA long line of work in the social sciences has
+"---\nauthor:\n- Maryam Khalid\nbibliography:\n- 'Related.bib'\ntitle: ' **Traffic Prediction in Cellular Networks using Graph Neural Networks** '\n---\n\nIntroduction\n============\n\nCellular networks are ubiquitous entities that provide a major means of communication all over the world. A cellular network is composed of three main components: the service provider, cellular users, and base stations that connect the cellular users to the service provider. The density of base stations and cellular users is extremely high and is much more significant than the backhaul setup. Thus, base stations and cellular users constitute the major part of a cellular network, whose study and evaluation are extremely critical as the number of users and their requirements are changing rapidly and new challenges are emerging.\\\nOne major challenge in cellular networks is the dynamic change in the number of users and their usage of telecommunication services, which results in overloading at certain base stations. One class of solutions to this overloading issue is the deployment of drones [@uav] that can act as temporary base stations and offload the traffic from the overloaded base station. There are two main challenges in the development of this solution. Firstly, the drone is expected to be present around the base station where
+"---\nabstract: |\n In this paper, we introduce a new, spectral notion of approximation between directed graphs, which we call *singular value (SV) approximation*. SV-approximation is stronger than previous notions of spectral approximation considered in the literature, including spectral approximation of Laplacians for undirected graphs\u00a0[@ST04], standard approximation for directed graphs\u00a0[@CKPPRSV17], and unit-circle (UC) approximation for directed graphs\u00a0[@AKMPSV20]. Further, SV approximation enjoys several useful properties not possessed by previous notions of approximation, e.g., it is preserved under products of random-walk matrices and bounded matrices.\n\n We provide a nearly linear-time algorithm for SV-sparsifying (and hence UC-sparsifying) Eulerian directed graphs, as well as $\\ell$-step random walks on such graphs, for any $\\ell\\leq {\\operatorname{poly}}(n)$. Combined with the Eulerian scaling algorithms of [@CKKPPRS18], given an arbitrary (not necessarily Eulerian) directed graph and a set $S$ of vertices, we can approximate the stationary probability mass of the $(S,S^c)$ cut in an $\\ell$-step random walk to within a multiplicative error of $1/{\\operatorname{polylog}}(n)$ and an additive error of $1/{\\operatorname{poly}}(n)$ in nearly linear time. As a starting point for these results, we provide a simple black-box reduction from SV-sparsifying Eulerian directed graphs to SV-sparsifying undirected graphs; such a directed-to-undirected reduction was not known for previous notions"
+"---\nabstract: 'Minerals, metals, and plastics are indispensable for a functioning modern society. Yet, their supply is limited, causing a need for optimizing ore extraction and recuperation from recyclable materials. Typically, those processes must be meticulously adapted to the precise properties of the processed materials. Advancing our understanding of these materials is thus vital and can be achieved by crushing them into particles of micrometer size followed by their characterization. Current imaging approaches perform this analysis based on segmentation and characterization of particles imaged with computed tomography (CT), and rely on rudimentary postprocessing techniques to separate touching particles. However, owing to their inability to reliably perform this separation, as well as the need to retrain methods for each new image, these approaches leave significant potential untapped. Here, we propose ParticleSeg3D, an instance segmentation method able to extract individual particles from large CT images of particle samples containing different materials. Our approach is based on the powerful nnU-Net framework, introduces a particle size normalization, uses a border-core representation to enable instance segmentation, and is trained with a large dataset containing particles of numerous different sizes, shapes, and compositions of various materials. We demonstrate that ParticleSeg3D can be applied out-of-the-box to a
+"---\nabstract: 'Shear and bulk viscosity of liquid water and Argon are evaluated from first principles in the Density Functional Theory (DFT) framework, by performing Molecular Dynamics simulations in the NVE ensemble and using the Kubo-Greenwood equilibrium approach. The standard DFT functional is corrected in such a way as to allow for a reasonable description of van der Waals (vdW) effects. For liquid Argon the thermal conductivity has also been calculated. Concerning liquid water, to our knowledge this is the first estimate of the bulk viscosity and of the shear-viscosity/bulk-viscosity ratio from first principles. By analyzing our results we can conclude that our first-principles simulations, performed at a nominal average temperature of 366 K to guarantee that the system is liquid-like, actually describe the basic dynamical properties of liquid water at about 330 K. In comparison with liquid water, the normal, monatomic liquid Ar is characterized by a much smaller bulk-viscosity/shear-viscosity ratio (close to unity) and this feature is well reproduced by our first-principles approach which predicts a value of the ratio in better agreement with experimental reference data than that obtained using the empirical Lennard-Jones potential. The computed thermal conductivity of liquid Argon is also in good agreement with the experimental
+"---\nabstract: 'The process of designing costmaps for off-road driving tasks is often a challenging and engineering-intensive task. Recent work in costmap design for off-road driving focuses on training deep neural networks to predict costmaps from sensory observations using corpora of expert driving data. However, such approaches are generally subject to overconfident mis-predictions and are rarely evaluated in-the-loop on physical hardware. We present an inverse reinforcement learning-based method of efficiently training deep cost functions that are uncertainty-aware. We do so by leveraging recent advances in highly parallel model-predictive control and robotic risk estimation. In addition to demonstrating improvement at reproducing expert trajectories, we also evaluate the efficacy of these methods in challenging off-road navigation scenarios. We observe that our method significantly outperforms a geometric baseline, resulting in 44% improvement in expert path reconstruction and 57% fewer interventions in practice. We also observe that varying the risk tolerance of the vehicle results in qualitatively different navigation behaviors, especially with respect to higher-risk scenarios such as slopes and tall grass. [ Appendix](https://drive.google.com/file/d/18LitjcoVRY_s6K-3jK0hRccd7PYAcpsT/view?usp=sharing)'\nauthor:\n- |\n Samuel Triest$^{1}$, Mateo Guaman Castro$^{1}$, Parv Maheshwari$^{2}$,\\\n Matthew Sivaprakasam$^{1}$, Wenshan Wang$^{1}$, and Sebastian Scherer$^{1}$ [^1][^2][^3]\nbibliography:\n- 'refs.bib'\ntitle: 'Learning Risk-Aware Costmaps via Inverse Reinforcement Learning for"
+"---\nabstract: 'Vision-language alignment learning for video-text retrieval has attracted much attention in recent years. Most of the existing methods either transfer the knowledge of image-text pretraining models to the video-text retrieval task without fully exploring the multi-modal information of videos, or simply fuse multi-modal features in a brute force manner without explicit guidance. In this paper, we integrate multi-modal information in an explicit manner by tagging, and use the tags as the anchors for better video-text alignment. Various pretrained experts are utilized for extracting the information of multiple modalities, including object, person, motion, audio, etc. To take full advantage of this information, we propose the TABLE (TAgging Before aLignmEnt) network, which consists of a visual encoder, a tag encoder, a text encoder, and a tag-guiding cross-modal encoder for jointly encoding multi-frame visual features and multi-modal tag information. Furthermore, to strengthen the interaction between video and text, we build a joint cross-modal encoder with the triplet input of \\[vision, tag, text\\] and perform two additional supervised tasks, Video Text Matching (VTM) and Masked Language Modeling (MLM). Extensive experimental results demonstrate that the TABLE model is capable of achieving State-Of-The-Art (SOTA) performance on various video-text retrieval benchmarks, including MSR-VTT, MSVD, LSMDC
+"---\nbibliography:\n- 'References.bib'\n---\n\n**Article title**\\\nLow-cost fluorescence microscope with microfluidic device fabrication for optofluidic applications\n\n**Authors**\\\nNagaraj Nagalingam$^{a,*}$, Aswin Raghunathan$^{a,*}$, Vikram Korede$^{a}$, Edwin F. J. Overmars$^{a}$, Shih-Te Hung$^{b}$, Remco Hartkamp$^{a}$, Johan T. Padding$^{a}$, Carlas S. Smith$^{b}$ and Huseyin Burak Eral$^{a}$\n\n$^*$denotes equal contribution\n\n**Affiliations**\\\n$^{a}$Process & Energy Department, Delft University of Technology, Leeghwaterstraat 39, 2628 CB Delft, The Netherlands.\\\n$^{b}$Delft Center for Systems and Control, Delft University of Technology, Mekelweg 2, 2628 CD Delft, The Netherlands\n\n**Corresponding author\u2019s email address and Twitter handle**\\\nn.nagalingam@tudelft.nl (Nagaraj Nagalingam)\n\n**Abstract**\\\nOptofluidic devices have revolutionized the manipulation and transportation of fluid at smaller length scales ranging from micrometers to millimeters. We describe a dedicated optical setup for studying laser-induced cavitation inside a microchannel. In a typical experiment, we use a tightly focused laser beam to locally evaporate the solution laced with a dye resulting in the formation of a microbubble. The evolving bubble interface is tracked using high-speed microscopy and digital image analysis. Furthermore, we extend this system to analyze fluid flow through fluorescence$-$Particle Image Velocimetry (PIV) technique with minimal adaptations. In addition, we demonstrate the protocols for the in-house fabrication of a microchannel tailored to function as a sample holder in"
+"---\nabstract: 'We present global 3D radiation magnetohydrodynamical simulations of accretion onto a 6.62 solar mass black hole with quasi-steady state accretion rates reaching 0.016 to 0.9 times the critical accretion rate, which is defined as the accretion rate required to power the Eddington luminosity assuming a 10% radiative efficiency, in different runs. The simulations show no sign of thermal instability over hundreds of thermal timescales at 10\u00a0$r_{\\rm g}$. The energy dissipation happens close to the mid-plane in the near-critical runs and near the disk surface in the low accretion rate run. The total radiative luminosity inside $\\sim$20\u00a0$r_{\\rm g}$ is about 1% to 30% the Eddington limit, with a radiative efficiency of about 6% and 3%, respectively, in the sub- and near-critical accretion regimes. In both cases, self-consistent turbulence generated by the magnetorotational instability (MRI) leads to angular momentum transfer, and the disk is supported by magnetic pressure. Outflows from the central low-density funnel with a terminal velocity of $\\sim$0.1$c$ are seen only in the near-critical runs. We conclude that these magnetic pressure dominated disks are thermally stable and thicker than the $\\alpha$ disk, and the effective temperature profiles are much flatter than that in the $\\alpha$ disks. The
+"---\nabstract: 'Local search is an effective method for solving large-scale combinatorial optimization problems, and it has made remarkable progress in recent years through several subtle mechanisms. In this paper, we find two ways to improve local search algorithms for solving Pseudo-Boolean Optimization (PBO): First, some of these mechanisms, such as unit propagation, were previously used only in solving MaxSAT, but can be generalized to solve PBO as well; Second, existing local search algorithms mainly guide the search using a heuristic on variables, the so-called score. We attempt to gain more insight into the clause, as it plays the role of a middleman that builds a bridge between variables and the given formula. Hence, we first extend the unit propagation-based decimation algorithm to the PBO problem, giving a further generalized definition of the unit clause for the PBO problem, and apply it to the existing solver LS-PBO for constructing an initial assignment; then, we introduce a new heuristic on clauses, dubbed care, to set a higher priority for the clauses that are less satisfied in current iterations. Experiments on benchmarks from the most recent PB Competition, as well as three real-world application benchmarks including minimum-width confidence band, wireless sensor network
+"---\nabstract: 'We report spectral and polarimetric observations of two weak, low frequency (${\\approx}$85-60MHz) solar coronal type II radio bursts that occurred on 2020 May 29 within a time interval of ${\\approx}$2min. The bursts had fine structures, and were due to harmonic plasma emission. Our analysis indicates that the magnetohydrodynamic (MHD) shocks responsible for the 1st and 2nd type II bursts were generated by the leading edge (LE) of an extreme-ultraviolet (EUV) flux rope/coronal mass ejection (CME) and interaction of its flank with a neighbouring coronal structure, respectively. The CME deflected from the radial direction by ${\\approx}25^{\\arcdeg}$ during propagation in the near-Sun corona. The estimated power spectral density (PSD) and magnetic field strength ($B$) near the location of the 1st burst at heliocentric distance $r{\\approx}1.35R_{\\odot}$ are $\\rm {\\approx}2{\\times}10^{-3}\\,W^{2}m$ and ${\\approx}$1.8G, respectively. The corresponding values for the 2nd burst at the same $r$ are $\\rm {\\approx}10^{-3}\\,W^{2}m$ and ${\\approx}$0.9G. The significant spatial scales of the coronal turbulence at the location of the two type II bursts are ${\\approx}$62-1Mm. Our conclusions from the present work are that the turbulence and magnetic field strength in the coronal region near the CME LE are higher compared to the corresponding values close to its flank. The derived
+"---\nabstract: 'Microquasar SS 433 located at the geometric center of radio nebula W50 is a suitable source for investigating the physical process of how galactic jets affect the surrounding interstellar medium (ISM). Previous studies have searched for evidence of the interaction between the SS 433 jet and ISM, such as neutral hydrogen gas and molecular clouds; however, it is still unclear which ISM interacts with the jet. We looked for new molecular clouds that possibly interact at the terminus of the SS 433 eastern jet using the Nobeyama 45-m telescope and the Atacama Submillimeter Telescope Experiment (ASTE). We identified two molecular clouds, comprising many small clumps, in the velocity range of 30.1\u201336.5 km s$^{-1}$ for the first time. These clouds have complex velocity structures, and one of them has a density gradient toward SS 433. Although it is difficult to establish a definite relation between the molecular clouds and the SS 433/W50 system, there is a possibility that the eastern structure of W50 constructed by the SS 433 jet swept up tiny molecular clumps drifting in the surroundings and formed the molecular clouds that we identified in this study.'\nauthor:\n- 'Haruka Sakemi$^{1,*}$, Mami Machida$^{2}$, Hiroaki Yamamoto$^{3}$,
+"---\nabstract: 'The paper presents a\u00a0novel methodology to build surrogate models of complicated functions by an active learning-based sequential decomposition of the input random space and construction of localized polynomial chaos expansions, referred to as domain adaptive localized polynomial chaos expansion. The approach utilizes sequential decomposition of the input random space into smaller sub-domains approximated by low-order polynomial expansions. This allows approximation of functions with strong nonlinearities, discontinuities, and/or singularities. Decomposition of the input random space and local approximations alleviate the Gibbs phenomenon for these types of problems and confine error to a\u00a0very small vicinity near the non-linearity. The global behavior of the surrogate model is therefore significantly better than existing methods as shown in numerical examples. The whole process is driven by an active learning routine that uses the recently proposed $\\Theta$ criterion to assess local variance contributions [@NovVorSadShi:CMAME:21]. The proposed approach balances both *exploitation* of the surrogate model and *exploration* of the input random space and thus leads to efficient and accurate approximation of the original mathematical model. The numerical results show the superiority of the proposed approach in comparison to (i) a\u00a0single global polynomial chaos expansion and (ii) the recently proposed stochastic spectral embedding (SSE)
+"---\nabstract: 'Quantum thermodynamics allows for the interconversion of quantum coherence and mechanical work. Quantum coherence is thus a potential physical resource for quantum machines. However, formulating a general nonequilibrium thermodynamics of quantum coherence has turned out to be challenging. In particular, precise conditions under which coherence is beneficial to or, on the contrary, detrimental for work extraction from a system have remained elusive. We here develop a generic dynamic-Bayesian-network approach to the far-from-equilibrium thermodynamics of coherence. We concretely derive generalized fluctuation relations and a maximum-work theorem that fully account for quantum coherence at all times, for both closed and open dynamics. We obtain criteria for successful coherence-to-work conversion, and identify a nonequilibrium regime where maximum work extraction is increased by quantum coherence for fast processes beyond linear response.'\nauthor:\n- 'Franklin L. S. Rodrigues'\n- Eric Lutz\ntitle: Nonequilibrium thermodynamics of quantum coherence beyond linear response\n---\n\nCoherence is a central feature of quantum theory. It is intimately associated with linear superpositions of states and related interference phenomena [@str17]. In the past decades, it has been recognized as an essential physical resource for quantum technologies that can outperform their classical counterparts [@nie00], from quantum communication [@gis07] and quantum computation"
+"---\nabstract: 'While acute stress has been shown to have both positive and negative effects on performance, not much is known about the impacts of stress on students\u2019 grades during examinations. To answer this question, we examined whether a correlation could be found between physiological stress signals and exam performance. We conducted this study using multiple physiological signals of ten undergraduate students over three different exams. The study focused on three signals, i.e., skin temperature, heart rate, and electrodermal activity. We extracted statistics as features and fed them into a variety of binary classifiers to predict relatively higher or lower grades. Experimental results showed up to 0.81 ROC-AUC with $k$-nearest neighbor algorithm among various machine learning algorithms.'\nauthor:\n- |\n Willie Kang$^{*1, 2}$, Sean Kim$^{*1, 3}$, Eliot Yoo$^{*1, 4}$, and Samuel Kim$^{1}$\\\n \\\n $^1$ IF Research Lab, La Palma, CA, USA\\\n $^2$ El Toro High School, Lake Forest, CA, USA\\\n $^3$ Oxford Academy, Cypress, CA, USA\\\n $^4$ Cypress High School, Cypress, CA, USA\\\n [{wildmanwillie25, seankim.hahjean, philliot1304}@gmail.com]{}\\\n [sam@ifresearchlab.com]{} [^1]\ntitle: |\n **Predicting Students\u2019 Exam Scores\\\n Using Physiological Signals**\n---\n\nINTRODUCTION\n============\n\nCollege students are prone to stress due to the highly transitional and demanding nature of their lives, which may be"
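As a toy illustration of the pipeline this abstract describes (summary statistics as features, then a $k$-nearest-neighbor vote), the following sketch uses synthetic heart-rate-like signals; the feature set, data, and labels are invented for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_features(signal):
    # Simple summary statistics used as features (an assumed, minimal set).
    return np.array([signal.mean(), signal.std(), signal.min(), signal.max()])

def knn_predict(train_X, train_y, x, k=3):
    # Plain k-nearest-neighbor majority vote in feature space.
    dist = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(dist)[:k]
    return int(train_y[nearest].mean() > 0.5)

# Synthetic sessions: "lower grade" (label 0) has a higher simulated
# heart-rate baseline than "higher grade" (label 1).
lower = [rng.normal(80, 2, 100) for _ in range(20)]
higher = [rng.normal(70, 2, 100) for _ in range(20)]
X = np.array([extract_features(s) for s in lower + higher])
y = np.array([0] * 20 + [1] * 20)
```

A real study would additionally cross-validate and report a threshold-free metric such as ROC-AUC, as the abstract does.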
+"---\nabstract: 'Sparse model identification enables nonlinear dynamical system discovery from data. However, the control of false discoveries for sparse model identification is challenging, especially in the low-data and high-noise limit. In this paper, we perform a theoretical study on ensemble sparse model discovery, which shows empirical success in terms of accuracy and robustness to noise. In particular, we analyse the bootstrapping-based sequential thresholding least-squares estimator. We show that this bootstrapping-based ensembling technique can perform a provably correct variable selection procedure with an exponential convergence rate of the error rate. In addition, we show that the ensemble sparse model discovery method can perform computationally efficient uncertainty estimation, compared to expensive Bayesian uncertainty quantification methods via MCMC. We demonstrate the convergence properties and connection to uncertainty quantification in various numerical studies on synthetic sparse linear regression and sparse model discovery. The experiments on sparse linear regression support that the bootstrapping-based sequential thresholding least-squares method has better performance for sparse variable selection compared to LASSO, thresholding least-squares, and bootstrapping-based LASSO. In the sparse model discovery experiment, we show that the bootstrapping-based sequential thresholding least-squares method can provide valid uncertainty quantification, converging to a delta measure centered around the true value with increased"
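A minimal numpy sketch of the bootstrapping-based sequential thresholding least-squares estimator analysed in this abstract, applied to a toy sparse linear regression; the threshold, ensemble size, and inclusion-frequency rule here are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def stls(X, y, threshold=0.1, iters=10):
    # Sequential thresholding least squares: alternate a least-squares
    # fit with hard-thresholding of small coefficients.
    coef = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(coef) < threshold
        coef[small] = 0.0
        big = ~small
        if big.any():
            coef[big] = np.linalg.lstsq(X[:, big], y, rcond=None)[0]
    return coef

def ensemble_stls(X, y, n_boot=50, threshold=0.1, inclusion=0.6):
    # Bootstrap ensembling: refit STLS on row-resampled data and keep
    # only variables selected in at least `inclusion` of the fits.
    n, p = X.shape
    coefs = np.zeros((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, n)
        coefs[b] = stls(X[idx], y[idx], threshold)
    keep = (coefs != 0).mean(axis=0) >= inclusion
    return keep, coefs.mean(axis=0) * keep

# Toy sparse regression: only features 0 and 3 are truly active.
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.05 * rng.normal(size=200)
keep, coef = ensemble_stls(X, y)
```

The per-variable inclusion frequencies computed across bootstrap fits also give a cheap uncertainty estimate for the selection, in the spirit of the comparison with Bayesian methods mentioned above.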
+"---\nabstract: |\n We propose another interpretation of well-known derivatives computations from regular expressions, due to Brzozowski, Antimirov or Lombardy and Sakarovitch, in order to abstract the underlying data structures (*e.g.* sets or linear combinations) using the notion of monad. As an example of the advantage of this generalization, we first introduce a new derivation technique based on the graded module monad and then show an application of this technique to generalize the parsing of expressions with capture groups and back references.\n\n We also extend the operators defining expressions to any $n$-ary functions over value sets, such as classical operations (like negation or intersection for Boolean weights) or more exotic ones (like the algebraic mean for rational weights).\n\n Moreover, we present how to compute a (non-necessarily finite) automaton from such an extended expression, using the Colcombet and Petrisan categorical definition of automata. These category theory concepts allow us to perform this construction in a unified way, whatever the underlying monad.\n\n Finally, to illustrate our work, we present a Haskell implementation of these notions using advanced techniques of functional programming, and we provide a web interface to manipulate concrete examples.\nauthor:\n- Samira Attou\n- Ludovic Mignot\n- Cl\u00e9ment Miklarz\n- Florent Nicart\nbibliography:\n-
+"---\nabstract: 'Infrared small object segmentation (ISOS) aims to segment small objects, covering only a few pixels, from the cluttered background in infrared images. It is highly challenging because: 1) small objects lack sufficient intensity, shape and texture information; 2) small objects are easily lost in the process where detection models, say deep neural networks, obtain high-level semantic features and image-level receptive fields through successive downsampling. This paper proposes a reliable segmentation model for ISOS, dubbed UCFNet (U-shape network with central difference convolution and fast Fourier convolution), which handles both issues well. It builds upon central difference convolution (CDC) and fast Fourier convolution (FFC). On one hand, CDC can effectively guide the network to learn the contrast information between small objects and the background, as contrast information is essential to the human visual system when dealing with the ISOS task. On the other hand, FFC can gain image-level receptive fields and extract global information on high-resolution feature maps while preventing small objects from being overwhelmed. Experiments on several public datasets demonstrate that our method significantly outperforms the state-of-the-art ISOS models, and can provide useful guidelines for designing better ISOS deep models. Code is available at https://github.com/wcyjerry/BasicISOS.'\nauthor:
+"---\nabstract: 'Motivated by quantum simulation, we consider lattice Hamiltonians for Yang-Mills gauge theories with finite gauge group, for example a finite subgroup of a compact Lie group. We show that the electric Hamiltonian admits an interpretation as a certain natural, non-unique Laplacian operator on the finite Abelian or non-Abelian group, and derive some consequences from this fact. Independently of the chosen Hamiltonian, we provide a full explicit description of the physical, gauge-invariant Hilbert space for pure gauge theories and derive a simple formula to compute its dimension. We illustrate the use of the gauge-invariant basis to diagonalize a dihedral gauge theory on a small periodic lattice.'\nauthor:\n- 'A.\u00a0Mariani$^1$, S.\u00a0Pradhan$^2$, E.\u00a0Ercolessi$^2$'\nbibliography:\n- 'biblio.bib'\ndate: |\n $^1$\\\n $^2$\\\ntitle: |\n Hamiltonians and gauge-invariant Hilbert\\\n space for lattice Yang-Mills-like theories\\\n with finite gauge group\n---\n\nIntroduction\n============\n\nQuantum simulation is a field of growing interest both experimentally and theoretically [@Feynman; @buluta2009simulators; @cirac2012goals; @georgescu2014simulation]. It holds the promise to overcome technical difficulties, such as the *sign problem* [@Troyer_Wiese; @condmattsignproblem; @banuls2020lgtreview], which affect numerical Monte Carlo simulations in several interesting regimes, with applications to particle physics and condensed matter systems [@AartsReview; @condmattsignproblem]. In recent years, advances in experimental techniques"
+"---\nabstract: 'Nonlinear optical effects including stimulated Brillouin scattering (SBS) and four-wave mixing (FWM) play an important role in microwave photonics, optical frequency combs, and quantum photonics. Harnessing SBS and FWM in a low-loss and versatile integrated platform would open the path to building large-scale Brillouin/Kerr-based photonic integrated circuits. In this letter, we investigate the Brillouin and Kerr properties of a low-index (n=1.513\u00a0@\u00a01550\u00a0nm) silicon oxynitride (SiON) platform. We observed, for the first time, backward SBS in SiON waveguides with a Brillouin gain coefficient of 0.3\u00a0m$^{-1}$W$^{-1}$, which can potentially be increased to 0.95\u00a0m$^{-1}$W$^{-1}$ by just tailoring the waveguide cross-section. We also performed FWM experiments in SiON rings and obtained the nonlinear parameter $\\gamma$, of 0.02\u00a0m$^{-1}$W$^{-1}$. Our results point to a low-loss and low-index photonic integrated platform that is both Brillouin and Kerr active.'\nauthor:\n- Kaixuan\u00a0Ye\n- Yvan\u00a0Klaver\n- Oscar\u00a0A\u00a0Jimenez\u00a0Gordillo\n- Roel\u00a0Botter\n- Okky\u00a0Daulay\n- Francesco Morichetti\n- Andrea Melloni\n- David\u00a0Marpaung\nbibliography:\n- 'library.bib'\ntitle: 'Brillouin and Kerr nonlinearities of a low-index silicon oxynitride platform'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nStimulated Brillouin scattering (SBS), which is an interaction between optical and acoustic waves, is currently revolutionizing"
+"---\nabstract: 'Real-time social media data can provide useful information on evolving hazards. Alongside traditional methods of disaster detection, the integration of social media data can considerably enhance disaster management. In this paper, we investigate the problem of detecting geolocation-content communities on Twitter and propose a novel distributed system that provides, in near real-time, information on hazard-related events and their evolution. We show that content-based community analysis leads to better and faster dissemination of reports on hazards. Our distributed disaster reporting system analyzes the social relationships among worldwide geolocated tweets, and applies topic modeling to group tweets by topics. Considering for each tweet the following information: user, timestamp, geolocation, retweets, and replies, we create a publisher-subscriber distribution model for topics. We use content similarity and the proximity of nodes to create a new model for geolocation-content based communities. Users can subscribe to different topics in specific geographical areas or worldwide and receive real-time reports regarding these topics. As misinformation can lead to increased damage if propagated in hazard-related tweets, we propose a new deep learning model to detect fake news. The misinformed tweets are then removed from display. We also empirically demonstrate the scalability of the proposed system.'"
+"---\nabstract: 'This work investigates online learning techniques for a cognitive radar network utilizing feedback from a central coordinator. The network attempts to optimize radar tracking accuracy by learning the optimal channel selection for spectrum sharing and radar performance. We define optimal selection for such a network in relation to the radar observation quality obtainable in a given channel. Since the presence of primary users appears as interference, the approach also improves spectrum sharing performance. In other words, maximizing radar performance also minimizes interference to primary users. Each node is able to learn the quality of several available channels through repeated sensing. Importantly, each part of the network acts as an online learner, observing the environment to inform future actions. We show that in interference-limited spectrum, where the signal-to-interference-plus-noise ratio , a cognitive radar network is able to use information from the central coordinator in order to reduce the amount of time necessary to learn the optimal channel selection. We also show that even limited use of a central coordinator can eliminate collisions, which occur when two nodes select the same channel. We provide several reward functions which capture different aspects of the dynamic radar scenario and describe the online"
+"---\nabstract: 'In-ice radio detectors are a promising tool for the discovery of EeV neutrinos. For astrophysics, the implications of such a discovery will rely on the reconstruction of the neutrino arrival direction. This paper describes a first complete neutrino arrival direction reconstruction for detectors employing deep antennas, such as RNO-G, or planning to employ them, like IceCube-Gen2. We didactically introduce the challenges of neutrino direction reconstruction using radio emission in ice, elaborate on the details of the algorithm used, describe the obtainable performance based on a simulation study, and discuss its implications for astrophysics.'\nauthor:\n- Ilse Plaisier\n- Sjoerd Bouma\n- Anna Nelles\nbibliography:\n- 'BIB.bib'\ndate: 'Received: date / Accepted: date'\ntitle: 'Reconstructing the arrival direction of neutrinos in deep in-ice radio detectors'\n---\n\nIntroduction {#sec:intro}\n============\n\nNeutrinos with energies up to have been detected in large numbers, and IceCube has discovered a component of astrophysical neutrinos above the atmospheric neutrino background [@IceCube:2014stg]. These extraterrestrial neutrinos are expected to be created in extreme cosmic sources that accelerate charged particles, cosmic rays, to high energies, which produce secondary particles, among them neutrinos, in their interactions with ambient matter and photon fields, e.g.\u00a0[@Margolis:1977wt; @Berezinsky:1969erk; @ParticleDataGroup:2022pth]. Several point sources are revealing
+"---\nbibliography:\n- 'ecjsample.bib'\n---\n\nIntroduction\n============\n\nContinual learning (CL), also named lifelong learning, is a machine learning paradigm that progressively accumulates knowledge over a continuous data stream to support future learning while maintaining previously digested information (@CGL_survey2019 [@CGL_survey2021]). Neural networks can surpass human-level performance on single tasks under ideal scenarios (@AlphaGo [@He_2015_ICCV]). However, when exposed to a continuous data stream, their performance on past tasks or steps dramatically deteriorates over time, a phenomenon known as catastrophic forgetting (CF) (@MCCLOSKEY1989109). Retraining a model from scratch is a naive method to overcome CF. Nevertheless, previous data might be inaccessible due to privacy concerns or limited memory capacity (@CGL_survey2021); also, revisiting a large batch of old data is expensive. To circumvent CF, a model should strike a balance between the plasticity to accommodate novel knowledge and the stability to prevent existing knowledge from being perturbed by incoming inputs, which is referred to as the stability\u2013plasticity dilemma (@grossbergstudies [@SP-delimma]). Most CL studies focus on data represented in the Euclidean space where the data are arranged regularly (e.g., computer vision), whereas CL upon graph-structured data is still in its infancy.\n\nGraphs are widely applied to represent relational data (e.g., citation networks, social networks and traffic
+"---\nabstract: 'Although it has been around for decades, the problem of Authorship Attribution is still very much in focus today. Some of the more recent instruments used are pre-trained language models, the most prevalent being BERT. Here we used such a model to detect the authorship of texts written in the Romanian language. The dataset used is highly unbalanced, i.e., there are significant differences in the number of texts per author, the sources from which the texts were collected, the time period in which the authors lived and wrote these texts, the intended medium (i.e., paper or online), and the type of writing (i.e., stories, short stories, fairy tales, novels, literary articles, and sketches). The results are better than expected, sometimes exceeding 87% macro-accuracy.'\nauthor:\n- 'Sanda-Maria Avram'\nbibliography:\n- 'main.bib'\ndate: January 2023\ntitle: 'BERT-based Authorship Attribution on the Romanian Dataset called ROST'\n---\n\nIntroduction\n============\n\nThe problem of automated Authorship Attribution (AA) is not new; however, it is still relevant nowadays. It is defined as the task of determining authorship of an unknown text based on the textual characteristics of the text itself\u00a0[@oliveira2013comparing].\n\nMost approaches that treat the AA problem are in the area of artificial intelligence
+"---\nabstract: 'Motion planning for autonomous vehicles sharing the road with human drivers remains challenging. The difficulty arises from three challenging aspects: human drivers are 1) multi-modal, 2) interacting with the autonomous vehicle, and 3) actively making decisions based on the current state of the traffic scene. We propose a motion planning framework based on Branch Model Predictive Control to deal with these challenges. The multi-modality is addressed by considering multiple future outcomes associated with different decisions taken by the human driver. The interactive nature of humans is considered by modeling them as reactive agents impacted by the actions of the autonomous vehicle. Finally, we consider a model developed in human neuroscience studies as a possible way of encoding the decision making process of human drivers. We present simulation results in various scenarios, showing the advantages of the proposed method and its ability to plan assertive maneuvers that convey intent to humans.'\nauthor:\n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: 'Interaction and Decision Making-aware Motion Planning using Branch Model Predictive Control[^1] '\n---\n\nIntroduction\n============\n\nAutonomous vehicles (AVs) must drive in the presence of other traffic participants, such as human-driven vehicles (HVs) and pedestrians. To this date, sharing the road"
+"---\nauthor:\n- Roberto Serafinelli\n- Valentina Braito\n- 'James N. Reeves'\n- Paola Severgnini\n- |\n \\\n Alessandra De Rosa\n- Roberto Della Ceca\n- Tracey Jane Turner\nbibliography:\n- 'biblio.bib'\ndate: 'Received XXX; accepted YYY'\ntitle: 'The [*NuSTAR*]{} view of the changing look AGN ESO 323-G77'\n---\n\n[The presence of an obscuring torus at parsec-scale distances from the central black hole is the main ingredient for the Unified Model of Active Galactic Nuclei (AGN), as obscured sources are thought to be seen through this structure. However, the Unified Model fails to describe a class of sources that undergo dramatic spectral changes, transitioning from obscured to unobscured and vice-versa through time. The variability in such sources, so-called Changing Look AGN (CLAGN), is thought to be produced by a clumpy medium at much smaller distances than the conventional obscuring torus. ESO 323-G77 is a CLAGN that was observed in various states through the years with [*Chandra*]{}, [*Suzaku*]{}, [*Swift*]{}-XRT and [*XMM-Newton*]{}, from unobscured ($N_{\\rm H}<3\\times10^{22}$ cm$^{-2}$) to Compton-thin ($N_{\\rm H}\\sim1-6\\times10^{23}$ cm$^{-2}$) and even Compton-thick ($N_{\\rm H}>1\\times10^{24}$ cm$^{-2}$), with timescales as short as one month. We present the analysis of the first [*NuSTAR*]{} monitoring of ESO 323-G77, consisting of 5 observations taken"
+"---\nauthor:\n- 'Xin\u00a0Li, Mingqiang\u00a0Wei, and Songcan\u00a0Chen'\nbibliography:\n- 'egbib.bib'\ntitle: 'PointSmile: Point Self-supervised Learning via Curriculum Mutual Information'\n---\n\nIntroduction {#sec:intro}\n============\n\nThere is an increasing demand to capture the real world by 3D sensing techniques for applications such as Metaverse and digital twins [@zhuzhetvcg]. The captured scenes are often represented in a simple and flexible form, i.e., point clouds [@Wei2022AGConv]. Recent years have witnessed considerable efforts to use deep learning to understand point clouds [@gulipeng]. The first step for point cloud understanding is to extract discriminative geometric features [@xianzhili], which is referred to as geometric representation learning (GRL). Ideally, when fed with sufficient annotated data, GRL becomes powerful enough to be combined with various neural networks, e.g., PointNet [@DBLP:conf/cvpr/QiSMG17], PointNet++ [@DBLP:conf/nips/QiYSG17], and DGCNN [@DBLP:journals/tog/WangSLSBS19], to facilitate downstream tasks such as classification and segmentation. However, real-world scenarios often lack labeled 3D scans, and human annotations of those scans are very laborious due to their irregular structures [@honghuachen]. Although training on synthetic scans is promising to alleviate the shortage of labeled real-world data, a GRL model trained this way will inevitably suffer from domain shift. Self-supervised learning, as an unsupervised learning paradigm, can relieve the shortcomings of supervised models, and
+"---\nabstract: |\n We study the interior of black holes in the presence of charged scalar hair of small amplitude $\\epsilon$ on the event horizon and show their terminal boundary is a crushing Kasner-like singularity. These spacetimes are spherically symmetric, spatially homogeneous and they differ significantly from the hairy black holes with uncharged matter previously studied in *\\[M. Van de Moortel, Violent nonlinear collapse inside charged hairy black holes, arxiv.2109.10932\\]* in that the electric field is dynamical.\n\n We prove that the backreaction of charged matter on the electric field induces the following phenomena, that ultimately impact upon the formation of the spacelike singularity:\n\n - :\u00a0oscillatory growth of the scalar hair, nonlinearly induced by the collapse.\n\n - :\u00a0the final Kasner exponents\u2019 dependency in ${\\epsilon}$ is via an expression of the form\\\n $|\\sin\\left(\\omega_0 \\cdot {\\epsilon}^{-2}+ O(\\log ({\\epsilon}^{-1}))\\right)|$.\n\n - :\u00a0a transition from an unstable Kasner metric to a different stable Kasner metric.\n\n The Kasner inversion occurring in our spacetime is reminiscent of the celebrated BKL scenario in cosmology.\n\n We additionally propose a construction indicating the relevance of the above phenomena \u2013 including Kasner inversions \u2013 to spacelike singularities inside more general black holes, beyond the hairy case.\n\n Our spacetime finally"
+"---\nabstract: '[A major challenge in generating single photons with a single emitter is to excite the emitter while avoiding laser leakage into the collection path. Ideally, any scheme to suppress this leakage should not result in a loss in efficiency of the single-photon source. Here, we investigate a scheme in which a single emitter, a semiconductor quantum dot, is embedded in a microcavity. The scheme exploits the splitting of the cavity mode into two orthogonally-polarised modes: one mode is used for excitation, the other for collection. By linking experiment to theory, we show that the best population inversion is achieved with a laser pulse detuned from the quantum emitter. The Rabi oscillations have an unusual dependence on pulse power. Our theory describes them quantitatively, allowing us to determine the absolute photon creation probability. For the optimal laser detuning, the population inversion is 98%. The Rabi oscillations depend on the sign of the laser-pulse detuning. We show that this arises from the non-trivial effect of phonons on the exciton dynamics. The exciton-phonon interaction is included in the theory and gives excellent agreement with all the experimental results.]{}'\nauthor:\n- Alisa Javadi\n- Natasha Tomm\n- 'Nadia O. Antoniadis'\n- 'Alistair
+"---\nabstract: 'A recent advancement in the domain of biomedical Entity Linking is the development of powerful two-stage algorithms \u2013 an initial *candidate retrieval* stage that generates a shortlist of entities for each mention, followed by a *candidate ranking* stage. However, the effectiveness of both stages is inextricably dependent on computationally expensive components. Specifically, in candidate retrieval via dense representation retrieval, it is important to have *hard* negative samples, which require repeated forward passes and nearest neighbour searches across the entire entity label set throughout training. In this work, we show that pairing a proxy-based metric learning loss with an adversarial regularizer provides an efficient alternative to hard negative sampling in the candidate retrieval stage. In particular, we show competitive performance on the recall@1 metric, thereby providing the option to leave out the expensive candidate ranking step. Finally, we demonstrate how the model can be used in a zero-shot setting to discover out-of-knowledge-base biomedical entities.'\nauthor:\n- |\n Maciej Wiatrak^1^[^1], Eirini Arvaniti^1^, Angus Brayne^1^, Jonas Vetterle^1,\\ 2^, Aaron Sim^1^\\\n ^1^BenevolentAI ^2^Moonfire Ventures\\\n London, United Kingdom\\\n `{maciej.wiatrak, eirini.arvaniti, angus.brayne, aaron.sim}@benevolent.ai`\\\n `jonas@moonfire.com`\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: 'Proxy-based Zero-Shot Entity Linking by Effective Candidate Retrieval'\n---\n\nIntroduction
+"---\nabstract: 'Biedl et al. introduced the minimum ply cover problem in CG 2021 following the seminal work of Erlebach and van Leeuwen in SODA 2008. They showed that determining the minimum ply cover number for a given set of points by a given set of axis-parallel unit squares is NP-hard, and gave a polynomial time $2$-approximation algorithm for instances in which the minimum ply cover number is bounded by a constant. Durocher et al. recently presented a polynomial time $(8 + \\epsilon)$-approximation algorithm for the general case when the minimum ply cover number is $\\omega(1)$, for every fixed $\\epsilon > 0$. They divide the problem into subproblems by using a standard grid decomposition technique. They designed an involved dynamic programming scheme to solve the subproblems, where each subproblem is defined by a square grid cell of unit side length. They then merge the solutions of the subproblems to obtain the final ply cover. We use a horizontal slab decomposition technique to divide the problem into subproblems. Our algorithm uses a simple greedy heuristic to obtain a $(27+\\epsilon)$-approximation algorithm for the general problem, for a small constant $\\epsilon>0$. Our algorithm runs considerably faster than the algorithm of Durocher et al. We
+"---\nabstract: 'Intravascular ultrasound (IVUS) is the preferred modality for capturing real-time and high-resolution cross-sectional images of the coronary arteries and evaluating stenosis. Accurate and real-time segmentation of IVUS images involves the delineation of lumen and external elastic membrane borders. In this paper, we propose a two-stream framework, named CSDN, that combines a shallow network and a deep network for efficient segmentation of 60 MHz high-resolution IVUS images. The shallow network with thick channels focuses on extracting low-level details. The deep network with thin channels takes charge of learning high-level semantics. Treating these two types of information separately enables the model to achieve both high accuracy and high efficiency for real-time segmentation. To further improve the segmentation performance, a mutual guided fusion module is used to enhance and fuse the two types of feature representations. The experimental results show that our CSDN accomplishes a good trade-off between analysis speed and segmentation accuracy.'\naddress: |\n $^{1\\ }$Institute of Artificial Intelligence, Insight Lifetech, Shenzhen, China\\\n $^{2\\ }$School of Biomedical Engineering, Southern Medical University, Guangzhou, China\nbibliography:\n- 'refs.bib'\ntitle: 'CSDN: Combining Shallow and Deep Networks for Accurate Real-time Segmentation of High-definition Intravascular Ultrasound Images'\n---\n\nShallow network, Deep network, Real-time segmentation, Intravascular ultrasound images,
+"---\nabstract: 'The slow viscous flow through a doubly-periodic array of cylinders does not have an analytical solution. However, as a reduced model for the flow within fibrous porous media, this solution is important for many real-world systems. We asymptotically determine the flow around a doubly-periodic array of infinite slender cylinders, by placing doubly-periodic two-dimensional singularity solutions within the cylinder and expanding the no-slip condition on the cylinder\u2019s surface in powers of the cylinder radius. The asymptotic solution provides a closed-form estimate for the flow and forces as a function of the radius and the dimensions of the cell. The force is compared to results from lattice-Boltzmann simulations of low-Reynolds-number flows in the same geometry, and the accuracy of the no-slip condition on the surface of the cylinder, predicted by the asymptotic theory, is checked. Finally, the behaviour of the flow, flux, force and effective permeability of the cell is investigated as a function of the geometric parameters. The structure of the asymptotic permeability is consistent with other models for the flow parallel to an array of rods. These models could be used to help understand the flows within porous systems composed of fibres and systems involving periodic arrays such"
+"---\naddress: 'Observatoire Astronomique de l\u2019Universit\u00e9 de Gen\u00e8ve, d\u00e9partement d\u2019astronomie Chemin Pegasi 51, CH-1290 Versoix, Switzerland; william.pluriel@unige.ch\\'\n---\n\nIntroduction {#intro}\n============\n\nGiant planets are diverse and complex objects\u00a0[@Guillot2006; @Showman2008; @Guerlet2014; @Miguel2018]. Thanks to the hundreds of hot giant exoplanets discovered so far, the\u00a0study of their atmospheres is at the forefront of exoplanet research. Spectroscopic observations are now being used to probe these worlds in search of the molecular features, physical properties and dynamics of their atmospheres. Thanks to recent space (JWST) or ground-based (Espresso, NIRPS, CRIRES) instruments, a\u00a0sort of revolution is occurring in the field, since we will observe more planets in one year than have ever been observed by Hubble and Spitzer! Moreover, we are on the verge of measuring accurate abundances and accurate radial velocities, and are working with temporal resolutions high enough to see the impact of the 3D structures of these exoplanets\u2019 atmospheres. Such studies are crucial in the pursuit of understanding the diverse nature of the chemical compositions, atmospheric processes and internal structures of exoplanets, as\u00a0well as the conditions required for planetary\u00a0formation.\n\nIn recent years, there has been a surge in transit spectroscopy observations using both space-borne and ground-based
+"---\nabstract: 'A general upper bound for topological entropy of switched nonlinear systems is constructed, using an asymptotic average of upper limits of the matrix measures of Jacobian matrices of strongly persistent individual modes, weighted by their active rates. A general lower bound is constructed as well, using a similar weighted average of lower limits of the traces of these Jacobian matrices. In a case of interconnected structure, the general upper bound is readily applied to derive upper bounds for entropy that depend only on \u201cnetwork-level\u201d information. In a case of block-diagonal structure, less conservative upper and lower bounds for entropy are constructed. In each case, upper bounds for entropy that require less information about the switching signal are also derived. The upper bounds for entropy and their relations are illustrated by numerical examples of a switched Lotka\u2013Volterra ecosystem model.'\nauthor:\n- 'Guosong\u00a0Yang, Daniel\u00a0Liberzon, and Jo\u00e3o\u00a0P.\u00a0Hespanha [^1] [^2] [^3] [^4]'\nbibliography:\n- 'reference-abbr.bib'\n- 'reference-temp.bib'\ntitle: |\n **Topological entropy of switched nonlinear and\\\n interconnected systems**\n---\n\nIntroduction {#sec:intro}\n============\n\nTopological entropy is a fundamental concept in dynamical systems theory. Roughly speaking, it describes the rate at which uncertainty about the state of a system grows as"
+"---\nabstract: 'We study photoinduced dynamics triggered by an inhomogeneity due to competition between charge density waves (CDWs) and superconductivity. As a simple example, we consider the superconducting (SC) interface between two CDW domains with opposite signs. The real-time dynamics are calculated within the time-dependent Hartree\u2013Fock\u2013Bogoliubov framework, where the order parameter dynamics and the nonequilibrium quasiparticle distribution functions are studied. We also calculate the various dynamical response functions within a generalized random phase approximation. Through comparisons between the real time dynamics and the analysis of the response functions, it is found that the photo-driven SC interface can emit collective modes of the SC order parameter. This is analogous to the spin wave emission from the magnetic domain wall in an antiferromagnet, particularly in the case of a low driving frequency, where the order parameters can be mapped onto the pseudospin picture. In the high-frequency case, we find a domain wall melting caused by changes in the quasiparticle distribution, which induces superconductivity in the whole system.'\nauthor:\n- Yukihiro Matsubayashi\n- Yusuke Masaki\n- Hiroaki Matsueda\nbibliography:\n- 'reference.bib'\ntitle: |\n Photoinduced pseudospin-wave emission from\\\n charge-density-wave domain wall with superconductivity\n---\n\nINTRODUCTION\n============\n\nRecent advances in terahertz laser technologies have enabled"
+"---\nabstract: 'In this work, we elaborate on a measure-theoretic approach to negative probabilities. We study a natural notion of contextuality measure and characterize its main properties. Then, we apply this measure to relevant examples of quantum physics. In particular, we study the role played by contextuality in quantum computing circuits.'\nauthor:\n- 'Elisa Monchietti$^{1}$, C\u00e9sar Massri$^{2}$, Acacio de Barros$^{3}$ and Federico Holik$^{4}$'\ntitle: 'Measure-theoretic approach to negative probabilities'\n---\n\n1 - Universidad Nacional de Rosario.\\\n2 - Instituto de Investigaciones Matem\u00e1ticas \u201cLuis A. Santalo\u201d.\\\n3 - School of Liberal Studies, San Francisco State University, 1900 Holloway Ave., San Francisco, California, USA.\\\n4 - Instituto de F\u00edsica La Plata (CONICET-UNLP), Calle 113 entre 64 y 64 S/N, 1900, La Plata, Buenos Aires, Argentina.\n\nIntroduction\n============\n\nAs reported by Peter Shor [^1], R. P. Feynman was very interested in *negative probabilities*. He thought that, perhaps, they could be used to give a natural explanation to the violation of Bell inequalities by quantum systems. In a subsequent paper, Feynman gave arguments for considering negative probabilities as an interesting option for handling different problems of modern physics [@feynman_negative_1987]. They also called
+"---\nabstract: 'Solar flares are efficient particle accelerators with a large fraction of released magnetic energy ($10-50$%) converted into energetic particles such as hard X-ray producing electrons. This energy transfer process is not well constrained, with competing theories regarding the acceleration mechanism(s), including MHD turbulence. We perform a detailed parameter study examining how various properties of the acceleration region, including its spatial extent and the spatial distribution of turbulence, affect the observed electron properties, such as those routinely determined from X-ray imaging and spectroscopy. Here, a time-independent Fokker-Planck equation is used to describe the acceleration and transport of flare electrons through a coronal plasma of finite temperature. Motivated by recent non-thermal line broadening observations that suggested extended regions of turbulence in coronal loops, an extended turbulent acceleration region is incorporated into the model. We produce outputs for the density weighted electron flux, a quantity directly related to observed X-rays, modelled in energy and space from the corona to chromosphere. We find that by combining several spectral and imaging diagnostics (such as spectral index differences or ratios, energy or spatial-dependent flux ratios, and electron depths into the chromosphere) the acceleration properties, including the timescale and velocity dependence, can be constrained alongside"
+"---\nabstract: 'The shear measurement from DECaLS (Dark Energy Camera Legacy Survey) provides an excellent opportunity for galaxy-galaxy lensing studies with DESI (Dark Energy Spectroscopic Instrument) galaxies, given the large ($\\sim 9000$ deg$^2$) sky overlap. We explore this potential by combining the DESI 1% survey and DECaLS DR8. With $\\sim 106$ deg$^2$ sky overlap, we achieve significant detection of galaxy-galaxy lensing for BGS and LRG as lenses. [Scaled to the full BGS sample, we expect the statistical errors to improve from $18(12)\\%$ to a promising level of $2(1.3)\\%$ at $\\theta>8^{''}(<8^{''})$. This places stronger requirements on future systematics control.]{} To fully realize such potential, we need to control the residual multiplicative shear bias to $|m|<0.01$ and the bias in the mean redshift to $|\\Delta z|<0.015$. We also expect significant detection of galaxy-galaxy lensing with the full DESI LRG/ELG samples as lenses, and of cosmic magnification of ELG through cross-correlation with low-redshift DECaLS shear. [If such systematic error control can be achieved,]{} we find that the advantages of DECaLS, compared with KiDS (Kilo Degree Survey) and HSC (Hyper-Suprime Cam), lie at low redshift, on large scales, and in measuring the shear-ratio (to $\\sigma_R\\sim 0.04$) and cosmic magnification.'\nauthor:\n- |\n Ji Yao$^{1,2,3}$[^1], Huanyuan Shan$^{1,4}$[^2], Pengjie Zhang$^{2,3,5}$[^3], Eric Jullo$^{6}$, Jean-Paul"
+"---\nabstract: 'During outbreaks of emerging infectious diseases, internationally connected cities often experience large and early outbreaks, while rural regions follow after some delay [@brockmann_hidden_2013; @may_spatial_1984; @rice_variation_2021; @viboud_synchrony_2006; @Davis2021; @Balcan21484]. This hierarchical structure of disease spread is influenced primarily by the multiscale structure of human mobility [@watts_multiscale_2005; @alessandretti_scales_2020; @susswein_ignoring_2021]. However, during the COVID-19 epidemic, public health responses typically did not take into consideration the explicit spatial structure of human mobility when designing non-pharmaceutical interventions (NPIs). NPIs were applied primarily at national or regional scales [@oliu_barton_green_2021]. Here we use weekly anonymized and aggregated human mobility data and spatially highly resolved data on COVID-19 cases, deaths and hospitalizations at the municipality level in Mexico to investigate how behavioural changes in response to the pandemic have altered the spatial scales of transmission and interventions during its first wave (March - June 2020). We find that the epidemic dynamics in Mexico were initially driven by SARS-CoV-2 exports from Mexico State and Mexico City, where early outbreaks occurred. The mobility network shifted after the implementation of interventions in late March 2020, and the mobility network communities became more disjointed while epidemics in these communities became increasingly synchronised. Our results provide actionable and dynamic insights into"
+"---\nauthor:\n- Federico Lelli\n- 'Zhi-Yu Zhang'\n- 'Thomas G. Bisbas'\n- Lingrui Lin\n- Padelis Papadopoulos\n- 'James M. Schombert'\n- Enrico Di Teodoro\n- Antonino Marasco\n- 'Stacy S. McGaugh'\nbibliography:\n- 'highz.bib'\ndate: 'Received 30/09/2022; accepted 30/01/2023'\ntitle: |\n Cold gas disks in main-sequence galaxies at cosmic noon:\\\n Low turbulence, flat rotation curves, and disk-halo degeneracy\n---\n\nIntroduction {#sec:intro}\n============\n\nDuring the past decades, there has been outstanding progress in studying the internal dynamics of high-$z$ galaxies. Near-infrared (NIR) spectroscopy with integral field units (IFUs) allowed for the kinematics of warm ($T\\simeq10^4$ K) ionized gas to be traced using the H$\\alpha$ emission line at $z\\simeq1.0-2.5$ [e.g., @Forster2009; @Wisnioski2015; @Stott2016] and the $\\lambda$5007 \u00c5\u00a0line up to $z\\simeq3.5$ [e.g., @Gnerucci2011; @Turner2017a]. Radio and submillimeter observations with the Jansky Very Large Array (JVLA) and the NOrthern Extended Millimeter Array (NOEMA) allowed for the kinematics of cold neutral gas ($T\\simeq10-100$ K) to be traced using CO transitions at $z\\simeq1-4$ [e.g., @Hodge2012; @Ubler2018]. Moreover, the Atacama Large Millimeter Array (ALMA) made it possible to study gas dynamics using \u00a0lines at $z\\simeq1-3$ [@Lelli2018; @Dye2022; @Gururajan2022], the \u00a0line at $z\\simeq4-7$ [@DeBreuck2014; @Jones2017; @Smit2018], and high-$J$ CO lines [@Tadaki2017; @Talia2018].\n\nThe first IFU"
+"---\nabstract: 'In evolved and dusty circumstellar discs, two planets with masses comparable to Jupiter and Saturn that migrate outwards while maintaining an orbital resonance can produce distinctive features in the dust distribution. Dust accumulates at the outer edge of the common gas gap, which behaves as a dust trap, where the local dust concentration is significantly enhanced by the planets\u2019 outward motion. Concurrently, an expanding cavity forms in the dust distribution inside the planets\u2019 orbits, because dust does not filter through the common gaseous gap and grain depletion in the region continues via inward drifting. There is no cavity in the gas distribution because gas can filter through the gap, although ongoing gas accretion on the planets can reduce the gas density in the inner disc. Such behaviour was demonstrated by means of simulations neglecting the effects of dust diffusion due to turbulence and of dust backreaction on the gas. Both effects may alter the formation of the dust peak at the gap outer edge and of the inner dust cavity, by letting grains filter through the dust trap. We performed high resolution hydrodynamical simulations of the coupled evolution of gas and dust species, the latter treated as pressureless"
+"---\nabstract: 'An integro-differential equation for the probability density of the generalized stochastic Ornstein-Uhlenbeck process with jump diffusion is considered for a special case of the Laplacian distribution of jumps. It is shown that for a certain ratio between the intensity of jumps and the speed of reversion, the fundamental solution can be found explicitly, as a finite sum. Alternatively, the fundamental solution can be represented as a converging power series. The properties of this solution are investigated. The fundamental solution makes it possible to obtain explicit formulas for the density at each instant of time, which is important, for example, for testing numerical methods.'\naddress: ' Mathematics and Mechanics Department, Lomonosov Moscow State University, Leninskie Gory, Moscow, 119991, Russian Federation'\nauthor:\n- 'Olga S. Rozanova\\*, Nikolai A. Krutov'\ntitle: ' The fundamental solution of the master equation for a jump-diffusion Ornstein-Uhlenbeck process '\n---\n\nIntroduction\n============\n\nModels using stochastic dynamics have natural applications in various areas of physics, biology, and financial mathematics. In recent decades, it has become clear that many phenomena cannot be explained by adding only standard Wiener processes to deterministic models; it is necessary to consider models that take into account differently distributed jumps, that is, use"
+"---\nabstract: 'The bright pulsar PSR B0329+54 has long been known to have two emission modes. Sensitive observations of individual pulses reveal that the central component of the pulse profile, called the core component, occasionally becomes very weak for some periods and then recovers. This is the newly identified core-weak mode. Based on our long observations of PSR B0329+54 with the Jiamusi 66-m telescope at 2250 MHz, we report here that the profile components of individual pulses, including those for the core and the leading and trailing peaks, vary in a correlated manner over some periods even before and after the core-weak mode, forming a regular pattern in the phase-vs-time plot for a train of period-folded pulses. The pattern has a similar structure for the core-weak mode, with a time scale of 3 to 14 periods. It starts with an intensity brightening at the trailing phase of the core component; the core intensity then declines to a very low level, as if the core component were drifting out of the normal radiation window within one or two periods. The intensity of the trailing components is then enhanced, and the leading component appears at an advanced"
+"---\nauthor:\n- |\n Yangguang Li, Bin Huang, Zeren Chen, Yufeng Cui, Feng Liang, Mingzhu Shen,\\\n Fenggang Liu, Enze Xie, Lu Sheng, Wanli Ouyang, Jing Shao\ntitle: 'Fast-BEV: A Fast and Strong Bird\u2019s-Eye View Perception Baseline'\n---\n\nIntroduction\n============\n\nA fast and accurate 3D perception system is essential for autonomous driving. Classic methods\u00a0[@zhou2018voxelnet; @lang2019pointpillars; @shi2019pointrcnn] rely on the accurate 3D information provided by Lidar point clouds. However, Lidar sensors usually cost thousands of dollars\u00a0[@neuvition_2022], hindering their application in economical vehicles. Pure camera-based Bird\u2019s-Eye View (BEV) methods\u00a0[@wang2022detr3d; @xie2022m; @huang2021bevdet; @liu2022petr; @li2022bevformer; @li2022bevdepth; @liu2022bevfusion] have recently shown great potential for their impressive 3D perception capability and economical cost. They basically follow the paradigm: multi-camera 2D image features are transformed to the 3D BEV feature in ego-car coordinates, then specific heads are applied on the unified BEV representation to perform specific 3D tasks, e.g., 3D detection, segmentation, etc. The unified BEV representation can handle a single task separately or multiple tasks simultaneously, which is highly efficient and flexible.\n\nTo perform 3D perception from 2D image features, state-of-the-art BEV methods on nuScenes\u00a0[@caesar2020nuscenes] either use query-based transformation\u00a0[@wang2022detr3d; @li2022bevformer] or implicit/explicit depth-based transformation\u00a0[@philion2020lift; @huang2021bevdet; @li2022bevdepth]. However, they are difficult"
+"---\nabstract: 'We investigate the ability of a free pro-$\\CC$ group of infinite rank to abstractly solve abstract embedding problems, and conclude that for some varieties $\\CC$, the profinite completion of any order, of a free pro-$\\CC$ group of infinite rank, is a free pro-$\\CC$ group as well, of the corresponding rank.'\nauthor:\n- 'Tamar Bar-On'\nbibliography:\n- 'references.bib'\n- '../../Results\\_for\\_PHD/references.bib'\ntitle: 'Profinite Completion of Free Pro-$\\mathcal{C}$ Groups'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThroughout the paper, $\\mm$ will be an infinite cardinal, and $\\CC$ a variety of finite groups, i.e., a collection of finite groups that is closed under taking subgroups, quotients, and finite products. The free pro-$\\CC$ group of rank $\\mm$ is defined as follows:\n\nConsider a discrete topological space of cardinality $\\mm$, $X$. Denote by $X\\cup \\{\\ast\\}$ its one-point compactification. One easily checks that this is a pointed profinite space, i.e., a topological space which is compact, Hausdorff and totally disconnected, with $\\ast$ being the distinguished point. The free pro-$\\CC$ group over $X\\cup \\{\\ast\\}$, also known as the free pro-$\\CC$ group over the set $X$ converging to 1, is a pro-$\\CC$ group $F_{\\CC}(\\mm)$, considered as a pointed profinite space with the identity element serving as the distinguished"
+"---\nabstract: |\n Reduced graphene oxide (rGO) is a bulk-processable quasi-amorphous 2D material with broad spectral coverage and fast electronic response. rGO sheets are suspended in a polymer matrix and sequentially photoreduced while measuring the evolving optical spectra and ultrafast electron relaxation dynamics. Photoreduced rGO yields optical absorption spectra that fit with the same Fano lineshape parameters as monolayer graphene. With increasing photoreduction time, rGO transient absorption kinetics accelerate monotonically, reaching an optimal point that matches the hot electron cooling in graphene. All stages of rGO ultrafast kinetics are simulated with a hot-electron cooling model mediated by disorder-assisted supercollisions. While the rGO room temperature 0.31 ps$^{-1}$ electronic cooling rate matches monolayer graphene, subsequent photoreduction can rapidly increase the rate by 10-12$\\times$. Such accelerated supercollision rates imply a reduced mean-free scattering length caused by photoionized point-defects on the rGO sp$^2$ sub-lattice. For visible range excitations of rGO, photoreduction shows three increasing spectral peaks that match graphene quantum dot (GQD) transitions, while a broad peak from oxygenated defect edge states shrinks. These three confined GQD states donate their hot carriers to the graphene sub-lattice with a 0.17 ps rise-time that accelerates with photoreduction. Collectively, many desirable photophysical properties of 2D graphene are"
+"---\nabstract: 'The Ornstein-Uhlenbeck process is interpreted as Brownian motion in a harmonic potential. This Gaussian Markov process has a bounded variance and admits a stationary probability distribution, in contrast to the standard Brownian motion. It also tends to drift towards its mean function; such a process is called mean-reverting. Two examples of the generalized Ornstein-Uhlenbeck process are considered. In the first one, we study the Ornstein-Uhlenbeck process on a comb model, as an example of harmonically bounded random motion in a topologically constrained geometry. The main dynamical characteristics (such as the first and second moments) and the probability density function are studied in the framework of both the Langevin stochastic equation and the Fokker-Planck equation. The second example is devoted to the study of the effects of stochastic resetting on the Ornstein-Uhlenbeck process, including stochastic resetting in the comb geometry. Here, the non-equilibrium stationary state is the main question at hand, where the two divergent forces, namely the resetting and the drift towards the mean, lead to compelling results both in the case of the Ornstein-Uhlenbeck process with resetting and in its generalization on the two-dimensional comb structure.'\naddress:\n- '^1^*Research Center for Computer Science and"
+"---\nabstract: 'Topological defects and smooth excitations determine the properties of systems showing collective order. We introduce a generic non-singular field theory that comprehensively describes defects and excitations in systems with $O(n)$ broken rotational symmetry. Within this formalism, we explore fast events, such as defect nucleation/annihilation and dynamical phase transitions where the interplay between topological defects and non-linear excitations is particularly important. To highlight its versatility, we apply this formalism in the context of Bose-Einstein condensates, active nematics, and crystal lattices.'\nauthor:\n- Vidar Skogvoll\n- Jonas R\u00f8nning\n- Marco Salvalaglio\n- Luiza Angheluta\nbibliography:\n- 'Bibliography.bib'\ntitle: 'A unified field theory of topological defects and non-linear local excitations'\n---\n\nIntroduction {#Sec:Intro}\n============\n\nTopological defects are hallmarks of systems exhibiting collective order. They are widely encountered from condensed matter, including biological systems, to elementary particles, and the very early Universe\u00a0[@merminTopologicalTheoryDefects1979; @TWBKibble_1976; @michel1980symmetry; @vilenkin1994cosmic; @chaikin_lubensky_1995; @nelson2002defects; @ozawa2019topological; @ardavseva2022topological]. The small-scale dynamics of interacting topological defects are crucial for the emergence of large-scale non-equilibrium phenomena, such as quantum turbulence in superfluids\u00a0[@madeira2020quantum], spontaneous flows in active matter\u00a0[@alert2022active], or dislocation plasticity in crystals\u00a0[@papanikolaou2017avalanches]. In fact, classical discrete modeling approaches such as point vortex models\u00a0[@eyink2006onsager] and discrete dislocation dynamics\u00a0[@zhouDiscreteDislocationDynamics2010]"
+"---\nabstract: 'Monte Carlo (MC) methods are the most widely used methods to estimate the performance of a policy. Given a policy of interest, MC methods give estimates by repeatedly running this policy to collect samples and taking the average of the outcomes. Samples collected during this process are called online samples. To get an accurate estimate, MC methods consume massive numbers of online samples. When online samples are expensive, e.g., in online recommendations and inventory management, we want to reduce the number of online samples while achieving the same estimate accuracy. To this end, we use off-policy MC methods that evaluate the policy of interest by running a different policy, called the behavior policy. We design a tailored behavior policy such that the variance of the off-policy MC estimator is provably smaller than that of the ordinary MC estimator. Importantly, this tailored behavior policy can be efficiently learned from existing offline data, i.e., previously logged data, which are much cheaper than online samples. With reduced variance, our off-policy MC method requires fewer online samples to evaluate the performance of a policy compared with the ordinary MC method. Moreover, our off-policy MC estimator is always unbiased.'\nauthor:\n- |\n Shuze Liu shuzeliu@virginia.edu\\\n Department of Computer Science\\\n University of"
+"---\nabstract: |\n Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., automobiles), there is limited progress towards the development of methods that support functional safety analysis of DNN-based systems. In particular, there is a lack of automated approaches for the identification of root causes of errors, to drive both risk analysis and DNN retraining. The identification of such root causes is indeed important in safety analysis, as they are the basis on which to identify and evaluate the risks associated with safety violations.\n\n In this paper, we propose [SAFE]{}, a black-box approach to automatically detect the root causes of DNN errors. [SAFE]{} relies on a transfer learning model pre-trained on ImageNet to extract the features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images representing plausible causes of error. Last, clusters are used to effectively retrain and improve the DNN. The black-box nature of [SAFE]{} is due to the fact that it does not require any internal information about the DNN or its modification. This property facilitates its adoption among practitioners as it is"
+"---\nabstract: |\n There has been an increasing recognition of the value of data and of data-based decision making. As a consequence, the development of data science as a field of study has intensified in recent years. However, there is no systematic and comprehensive treatment and understanding of data science. This article describes a systematic and end-to-end framing of the field based on an inclusive definition. It identifies the core components making up the data science ecosystem, presents its lifecycle modeling the development process, and argues for its interdisciplinarity.\n\n (I will write a better abstract in due course.)\nauthor:\n- |\n M. Tamer \u00d6zsu\\\n University of Waterloo, Canada\\\n [`tamer.ozsu@uwaterloo.ca`](mailto:tamer.ozsu@uwaterloo.ca)\nbibliography:\n- 'ds-references.bib'\ntitle: 'Foundations and Scoping of Data Science[^1]'\n---\n\nIntroduction {#sec:intro}\n============\n\nThere is a data-driven revolution underway in science and society, disrupting every form of enterprise. We are collecting and storing data more rapidly than ever before. The value of data as a central asset in an organization is now well-established and generally accepted. *The Economist* called it \u201cthe world\u2019s most valuable resource\u201d\u00a0[@Economist2017]. The World Economic Forum\u2019s briefing paper *A New Paradigm for Business of Data* states \u201cAt the heart of digital economy and society is the explosion of insight,"
+"---\nabstract: 'We present an efficient implementation of analytical non-adiabatic derivative coupling elements for the coupled cluster singles and doubles model. The derivative coupling elements are evaluated in a biorthonormal formulation in which the nuclear derivative acts on the right electronic state, where this state is biorthonormal with respect to the set of left states. This stands in contrast to earlier implementations based on normalized states and a gradient formula for the derivative coupling. As an illustration of the implementation, we determine a minimum energy conical intersection between the $n\\pi^\\ast$ and $\\pi\\pi^\\ast$ states in the nucleobase thymine.'\nauthor:\n- 'Eirik F.\u00a0Kj[\u00f8]{}nstad'\n- Henrik Koch\nbibliography:\n- 'apssamp.bib'\ntitle: 'Communication: Non-adiabatic derivative coupling elements for the coupled cluster singles and doubles model'\n---\n\nIntroduction\n============\n\nThe nuclear dynamics that follows photoexcitation typically involves non-adiabatic population transfer between several electronic states. For example, in the nucleobase thymine, photoexcitation to the bright $\\pi \\pi^\\ast$ state is followed by rapid ($60$ fs) non-adiabatic population transfer to the dark $n \\pi^\\ast$ state.[@wolf2017probing] As is well known, the approximate description of the electronic structure can have a dramatic qualitative impact on the simulated nuclear dynamics, often complicating the task of correctly identifying the actual physics"
+"---\nabstract: 'Quantization is the process of mapping an input signal from an infinite continuous set to a countable set with a finite number of elements. It is a non-linear irreversible process, which makes the traditional methods of system identification no longer applicable. In this work, we propose a method for parsimonious linear time-invariant system identification when only quantized observations, discerned from noisy data, are available. More formally, given a priori information on the system, represented by a compact set containing the poles of the system, and quantized realizations, our algorithm aims at identifying the least order system that is compatible with the available information. The proposed approach also takes into account that the available data can be subject to fragmentation. Our proposed algorithm relies on an ADMM approach to solve an $\\ell_{p}$ $(0<p<1)$ minimization problem. $$\\begin{aligned}\n u &>0 \\ \\text{in} \\ {\\Omega}\\tag{$P_\\lambda$} \\\\\n u &=0 \\ \\text{on} \\ {\\partial}{\\Omega}\\nonumber\\end{aligned}$$ where ${\\Omega}$ is a bounded domain in $\\mathbb{R}^N$ with a smooth boundary, $0<{\\beta}<1$, and $\\lambda>0$ is a parameter. The term"
+"---\nabstract: 'Using molecular dynamics and thermodynamic integration, we report on the solvation process in water and in cyclohexane of seven polypeptides (GLY, ALA, ILE, ASN, LYS, ARG, GLU). The polypeptides are selected to cover the full hydrophobic scale, while varying their chain length from tri- to undeca-homopeptides provides indications of possible non-additivity effects as well as of the role of the peptide backbone in the overall stability of the polypeptides. The use of different solvents and different polypeptides allows us to investigate the relation between *solvent quality* \u2013 the capacity of a given solvent to fold a given biopolymer, often described on a scale ranging from \u201cgood\u201d to \u201cpoor\u201d, and *solvent polarity* \u2013 related to the specific interactions of any solvent with respect to a reference solvent. Undeca-glycine is found to be the only polypeptide to have a proper stable collapse in water (polar solvent), with the other hydrophobic polypeptides displaying repeated folding and unfolding events in water and with polar polypeptides presenting an even more complex behavior. By contrast, all polypeptides are found to keep an extended conformation in cyclohexane, irrespective of their polarity. All considered polypeptides are also found to have a favorable solvation free energy"
+"---\nabstract: 'We study the non-relativistic limit of quantum fields for an inertial and a non-inertial observer. We show that non-relativistic particle states appear as a superposition of relativistic and non-relativistic particles in different frames. Hence, the non-relativistic limit is frame-dependent. We detail this result when the non-inertial observer has uniform constant acceleration. Only for low accelerations, the accelerated observer agrees with the inertial frame about the non-relativistic nature of particles locally. In such a quasi-inertial regime, both observers agree about the number of particles describing quantum field states. The same does not occur when the acceleration is arbitrarily large (e.g., the Unruh effect). We furthermore prove that wave functions of particles in the inertial and the quasi-inertial frame are identical up to the coordinate transformation relating the two frames.'\nauthor:\n- Riccardo Falcone\n- Claudio Conti\nbibliography:\n- 'bibliography.bib'\ntitle: 'Frame-dependence of the non-relativistic limit of quantum fields'\n---\n\nIntroduction\n============\n\nSince the theoretical proposal of the Unruh effect [@PhysRevD.7.2850; @Davies:1974th; @PhysRevD.14.870] as the equivalent of the Hawking effect [@Hawking:1975vcx] in accelerated frames, there has been a wide interest in detectors able to reveal such effect. In their pioneering works, Unruh and DeWitt [@PhysRevD.14.870; @PhysRevD.29.1047; @hawking1980general] considered a particle"
+"---\nabstract: '*Orcinus orca* (killer whales) exhibit complex calls. They last about a second. In a call, an orca typically uses multiple frequencies simultaneously, varies the frequencies, and varies their volumes. Behavior data is hard to obtain because orcas live under water and travel quickly. Sound data is relatively easy to capture. As a science goal, we would like to know whether orca vocalizations constitute a semantic language. We do this by studying whether machine learning can predict behavior from vocalizations. Such prediction would also help scientific research and safety applications because one would like to predict behavior while only having to capture sound. A significant challenge in this process is lack of labeled data. We work with recent recordings of McMurdo Sound orcas\u00a0[@Wellard20:Cold] where each recording is labeled with the behaviors observed during the recording. This yields a dataset where sound *segments*\u2014continuous vocalizations that can be thought of as call *sequences* or more general structures\u2014within the recordings are labeled with superfluous behaviors. Despite that, with a careful combination of recent machine learning techniques, we achieve 96.4% classification accuracy. This suggests that orcas do use a semantic language. It is also promising for research and applications.'\nauthor:\n- 'Sophia"
+"---\nabstract: 'The supersymmetrized DFSZ axion model is especially compelling in that it contains 1. the SUSY solution to the gauge hierarchy problem, 2. the Peccei-Quinn (PQ) solution to the strong CP problem and 3. the Kim-Nilles solution to the SUSY $\\mu$ problem. In a string setting, where a discrete $R$-symmetry (${\\bf Z}_{24}^R$ for example) may emerge from the compactification process, a high-quality accidental axion (accion) can emerge from the accidental, approximate remnant global $U(1)_{PQ}$ symmetry where the decay constant $f_a$ is linked to the SUSY breaking scale, and is within the cosmological sweet zone. In this setup, one also expects the presence of stringy remnant moduli fields $\\phi_i$. Here, we consider the situation of a single light modulus $\\phi$ coupled to the PQMSSM in the early universe, with mixed axion plus higgsino-like WIMP dark matter. We evaluate dark matter and dark radiation production via nine coupled Boltzmann equations and assess the severity of the cosmological moduli problem (CMP) along with dark matter and dark radiation production rates. We find that typically the light modulus mass should be $m_{\\phi}\\agt 10^4$ TeV to avoid the moduli-induced dark matter overproduction problem. If one is able to (anthropically) tune the modulus field amplitude,"
+"---\nabstract: 'The continuous growth of computational power over the last decades has made solving several optimization problems of significance to humankind a tractable task; however, tackling some of them remains a challenge due to the overwhelming number of candidate solutions to be evaluated, even by using sophisticated algorithms. In such a context, a set of nature-inspired stochastic methods, called meta-heuristic optimization, can provide robust approximate solutions to different kinds of problems with a small computational burden, such as derivative-free real function optimization. Nevertheless, these methods may converge to inadequate solutions if the function landscape is too harsh, e.g., enclosing too many local optima. Previous works addressed this issue by employing a hypercomplex representation of the search space, like quaternions, where the landscape becomes smoother and supposedly easier to optimize. Under this approach, meta-heuristic computations happen in the hypercomplex space, whereas variables are mapped back to the real domain before function evaluation. Although this latter operation is performed via the Euclidean norm, we have found that after the optimization procedure has finished, it is usually possible to obtain even better solutions by employing the Minkowski $p$-norm instead and fine-tuning $p$ through an auxiliary sub-problem with negligible additional cost and no hyperparameters. Such"
+"---\nauthor:\n- som\n- Tamas Almos Vami\n- Morris Swartz\nbibliography:\n- 'main.bib'\ntitle: 'Simulated performance and calibration of CMS Phase-2 Upgrade Inner Tracker sensors'\n---\n\nIntroduction\n============\n\nThe Large Hadron Collider (LHC) will be upgraded to its High Luminosity phase (HL-LHC) in the shutdown period starting in 2026. The HL-LHC will reach an instantaneous luminosity of $7.5\\,\\times\\,10^{34} \\,\\,\\,\\text{s}^{-1}\\text{cm}^{-2}$, which is about four times the Run-2 value of $2\\,\\times\\,10^{34} \\,\\,\\,\\text{s}^{-1}\\text{cm}^{-2}$. The number of simultaneous proton-proton interactions per 25 ns bunch crossing (pileup) is expected to be between 140 and 200, which is a similar fourfold increase with respect to the current value of 55. The current detectors would not be able to operate in such conditions, which motivates the need to upgrade them. This is referred to as the Phase-2 Upgrade of CMS.\n\nThe upgraded CMS tracker detector [@CERN-LHCC-2017-009] will provide increased acceptance for tracking up to $|\\eta| < 4$ and significantly reduced mass through the use of carbon fiber mechanics and $\\text{CO}_2$ cooling. The layout of the upgraded tracker is shown in Fig.\u00a0\\[fig:Phase2Tracker\\]. It consists of two parts, the Outer Tracker (OT) and the Inner Tracker (IT). The OT has six barrel layers and five pairs of forward disks,"
+"---\nabstract: 'Magnetic topological materials are a class of compounds with an underlying interplay of nontrivial band topology and magnetic spin configuration. Extensive interest has been aroused by their application potential involving an array of exotic quantum states. With angle-resolved photoemission spectroscopy and first-principles calculations, here we study the electronic properties of two magnetic Weyl semimetal candidates, PrAlSi and SmAlSi. Though the two compounds harbor distinct magnetic ground states (ferromagnetic and antiferromagnetic for PrAlSi and SmAlSi, respectively) and 4$f$ shell fillings, we find that they share a quite analogous low-energy band structure. By measurements across the magnetic transitions, we further reveal that there is no evident evolution of the band structure in either compound and that the experimental spectra can be well reproduced by the nonmagnetic calculations, together suggesting a negligible effect of the magnetism on their electronic structures and a possibly weak coupling between the localized 4$f$ electrons and the itinerant conduction electrons. Our results offer essential insights into the interactions between magnetism, electron correlations, and topological orders in the $R$Al$X$ ($R$ = light rare earth and $X$ = Si or Ge) family.'\nauthor:\n- Rui Lou\n- Alexander Fedorov\n- Lingxiao Zhao\n- Alexander Yaresko\n- 'Bernd"
+"---\nabstract: 'There are various research strategies used for non-Hermitian systems, which typically involve introducing non-Hermitian terms to pre-existing Hermitian Hamiltonians. It can be challenging to directly design non-Hermitian many-body models that exhibit unique features not found in Hermitian systems. In this Letter, we propose a new method to construct non-Hermitian many-body systems by generalizing the parent Hamiltonian method into non-Hermitian regimes. This allows us to build a local Hamiltonian using given matrix product states as its left and right ground states. We demonstrate this method by constructing a non-Hermitian spin-$1$ model from the asymmetric Affleck-Kennedy-Lieb-Tasaki (AKLT) state, which preserves both chiral order and symmetry-protected topological order. Our approach opens up a new paradigm for systematically constructing and studying non-Hermitian many-body systems, providing guiding principles to explore new properties and phenomena in non-Hermitian physics.'\nauthor:\n- Ruohan Shen\n- Yuchen Guo\n- Shuo Yang\nbibliography:\n- 'ref.bib'\ntitle: 'Construction of Non-Hermitian Parent Hamiltonian from Matrix Product States'\n---\n\n[^1]\n\n[^2]\n\n*Introduction.* Non-Hermitian physics has attracted much attention both theoretically\u00a0[@Konotop2016; @Leykam2017; @Gong2018; @Kawabata2019; @Ashida2020; @Bergholtz2021; @Yamamoto2022; @Ding2022; @Guo2022; @Chen2022] and experimentally\u00a0[@Guo2009; @Schindler2011; @Zeuner2015; @El2018; @Hokmabadi2019; @Zhang2022A; @Gu2022] for describing open systems\u00a0[@Vega2017], such as photonics\u00a0[@Takata2018] and acoustics\u00a0[@Zhang2021A;"
+"---\nabstract: |\n In this paper, we investigate the equilibria and their stability in an asymmetric duopoly model of Kopel by using several tools based on symbolic computations. We explore the possible positions of the equilibria in Kopel\u2019s model. We discuss the possibility of the existence of multiple positive equilibria and establish a necessary and sufficient condition for a given number of equilibria to exist. Furthermore, if the two duopolists adopt the best response reactions or homogeneous adaptive expectations, we establish rigorous conditions for the existence of distinct numbers of positive equilibria for the first time.\n\n *Keywords: duopoly; Kopel\u2019s model; equilibrium; local stability; symbolic computation*\nauthor:\n- Xiaoliang Li\n- 'Kongyan Chen[^1]'\ntitle: Equilibria and their stability in an asymmetric duopoly model of Kopel\n---\n\n[UTF8]{}[gbsn]{}\n\nIntroduction\n============\n\nDuopoly is an intermediate market between monopoly and perfect competition [@guirao_extensions_2010]. The study of duopolistic competitions lies at the core of the field of industrial organization. Since about three decades ago, economists have been making efforts to model duopolistic competitions by discrete dynamical systems of form with unimodal reaction functions. Among them, Kopel [@kopel_simple_1996] proposed a famous duopoly model by assuming a linear demand function and nonlinear cost functions, which can be"
+"---\nauthor:\n- 'Noah Glennon,'\n- 'Anthony E. Mirasola,'\n- 'Nathan Musoke,'\n- 'Mark C. Neyrinck,'\n- 'Chanda Prescod-Weinstein'\nbibliography:\n- 'bib.bib'\ntitle: Scalar dark matter vortex stabilization with black holes\n---\n\nIntroduction\n============\n\nScalar dark matter (SDM, closely related to, or also known as superfluid, wave, and fuzzy dark matter) models have seen substantial recent attention as a promising alternative to the longstanding WIMP cold dark matter (WIMP CDM) model. Simulations of SDM show that these alternative models behave similarly to WIMP CDM in large-scale structure formation (where WIMP CDM has been successful), but ultralight scalars radically differ on galactic scales. WIMP CDM encounters well-known issues on these scales; there is an apparent deficit of observed satellite galaxies compared to simple estimates from $N$-body simulations of WIMPs, and there is a disagreement between simulated and observed density profiles in galactic cores\u00a0[@Weinberg2015; @Moore1994; @Papastergis2015]. Some work has suggested that baryonic effects could possibly account for these discrepancies\u00a0[@Kim2018; @Governato2010], but this motivation for SDM or warm dark matter remains. There also are other motivations for SDM, beyond addressing observational problems such as the missing-satellite problem. They are well-motivated in string theory\u00a0[@Svrcek2006; @Arvanitaki2010axiverse; @Marsh2016; @Hui2017].\n\nA crucial piece of"
+"---\nabstract: 'The incomplete fusion has been proved as the formation and emission of the $\\alpha$ particle by the increase in the rotational energy of the very mass-asymmetric dinuclear system. The results of the dinuclear system model have confirmed that the incomplete fusion in heavy-ion collisions occurs at a large orbital angular momentum ($L > 30 \\hbar$) due to the strong increase of the intrinsic fusion barrier.'\nauthor:\n- 'A.K. Nasirov$^{1,2}$'\n- 'B.M. Kayumov$^{2,3}$'\n- 'O.K. Ganiev$^{2,4,5}$'\n- 'G.A. Yuldasheva$^{2}$'\nbibliography:\n- 'References\\_PRL.bib'\ntitle: 'A new dynamical mechanism of incomplete fusion in heavy-ion collision'\n---\n\nThe incomplete fusion (ICF) of nuclei at the heavy-ion collision is observed in reactions of light projectiles with the intermediate-mass target nucleus. This phenomenon was first observed more than 60 years ago [@Knox1960]. From the analysis of experimental data it was found that the peripheral collisions are a favorable condition for the incomplete fusion. This phenomenon is studied by observation of the $\\alpha$ particle flying in the forward angles or other light clusters or by identification of the evaporation residue accompanied with the emitted fast light clusters.\n\nA mean value $ = 40 \\hbar$ of the angular momentum distribution of the entrance channel corresponding to"
+"---\nabstract: 'In this paper, we study the fundamental theorem of calculus and its consequences from an algebraic point of view. For functions with singularities, this leads to a generalized notion of evaluation. We investigate properties of such integro-differential rings and discuss many examples. We also construct corresponding integro-differential operators and provide normal forms via rewrite rules. They are then used to derive several identities and properties in a purely algebraic way, generalizing well-known results from analysis. In identities like shuffle relations for nested integrals and the Taylor formula, additional terms are obtained that take singularities into account. Another focus lies on treating basics of linear ODEs in this framework of integro-differential operators. These operators can have matrix coefficients, which allow to treat systems of arbitrary size in a unified way. In the appendix, using tensor reduction systems, we give the technical details of normal forms and prove them for operators including other functionals besides evaluation.'\nauthor:\n- 'Clemens G.\u00a0Raab^a,^[^1]\u00a0\u00a0and Georg Regensburger^a,b^'\nbibliography:\n- 'ref.bib'\ntitle: The fundamental theorem of calculus in differential rings\n---\n\n^a^Institute for Algebra, Johannes Kepler University Linz, Austria\\\n^b^Institute of Mathematics, University of Kassel, Germany\\\n\\\n\n\n#### Keywords\n\nIntegro-differential rings, integro-differential operators,"
+"---\nabstract: 'Given a representation of a finite group $G$ over some commutative base ring $\\mathbf{k}$, the *cofixed space* is the largest quotient of the representation on which the group acts trivially. If $G$ acts by $\\mathbf{k}$-algebra automorphisms, then the cofixed space is a module over the ring of $G$-invariants. When the order of $G$ is not invertible in the base ring, little is known about this module structure. We study the cofixed space in the case that $G$ is the symmetric group on $n$ letters acting on a polynomial ring by permuting its variables. When $\\mathbf{k}$ has characteristic 0, the cofixed space is isomorphic to an ideal of the ring of symmetric polynomials in $n$ variables. Localizing $\\mathbf{k}$ at a prime integer $p$ while letting $n$ vary reveals striking behavior in these ideals. As $n$ grows, the ideals stay stable in a sense, then jump in complexity each time $n$ reaches a multiple of $p$.'\nauthor:\n- Alexandra Pevzner\nbibliography:\n- 'biblio.bib'\ntitle: Symmetric Group Fixed Quotients of Polynomial Rings\n---\n\nIntroduction\n============\n\nFix a commutative ring ${\\mathbf{k}}$ with unit. Given a representation $U$ of a finite group $G$ over ${\\mathbf{k}}$, there are two natural ${\\mathbf{k}}G$-modules one can associate"
+"---\nabstract: 'The global trend toward renewable power generation has drawn great attention to hydrogen Fuel Cells (FCs), which have a wide variety of applications, from utility power stations to laptops. The Multi-stack Fuel Cell System (MFCS), which is an assembly of FC stacks, can be a remedy for obstacles in high-power applications. However, the output voltage of FC stacks varies dramatically under variable load conditions; hence, in order for MFCS to be efficiently operated and guarantee an appropriate load-current sharing among the FC stacks, advanced converter controllers for power conditioning need to be designed. An accurate circuit model is essential for controller design, which accounts for the fact that the parameters of some converter components may change due to aging and repetitive stress in long-term operations. Existing control frameworks and parametric and non-parametric system identification techniques do not consider the aforementioned challenges. Thus, this paper investigates the potential of a data-driven method that, without system identification, directly implements control on paralleled converters using raw data. Based on pre-collected input/output trajectories, a non-parametric representation of the overall circuit is produced for implementing predictive control. While approaching equal current sharing within the MFCS, the proposed method considers the minimization of load-following"
+"---\nabstract: 'On a general open set of the euclidean space, we study the relation between the embedding of the homogeneous Sobolev space $\\mathcal{D}^{1,p}_0$ into $L^q$ and the summability properties of the distance function. We prove that in the superconformal case (i.e. when $p$ is larger than the dimension) these two facts are equivalent, while in the subconformal and conformal cases (i.e. when $p$ is less than or equal to the dimension) we construct counterexamples to this equivalence. In turn, our analysis permits to study the asymptotic behaviour of the positive solution of the Lane-Emden equation for the $p-$Laplacian with sub-homogeneous right-hand side, as the exponent $p$ diverges to $\\infty$. The case of first eigenfunctions of the $p-$Laplacian is included, as well. As particular cases of our analysis, we retrieve some well-known convergence results, under optimal assumptions on the open sets. We also give some new geometric estimates for generalized principal frequencies.'\naddress:\n- 'Dipartimento di Matematica e Informatica Universit\u00e0 degli Studi di Ferrara Via Machiavelli 35, 44121 Ferrara, Italy'\n- 'Dipartimento di Scienze Agrarie, Alimentari e Agro-ambientali Universit\u00e0 di Pisa Via del Borghetto 80, 56124 Pisa, Italy'\n- 'Dipartimento di Scienze Matematiche, Fisiche e Informatiche Universit\u00e0 di Parma Parco"
+"---\nabstract: 'The early kinetic decoupling (eKD) effect is an inevitable ingredient in calculating the relic density of dark matter (DM) for various well-motivated scenarios. It appears naturally in forbidden dark matter annihilation, the main focus of this work, which contains fermionic DM and a light singlet scalar that connects the DM and standard model (SM) leptons. The strong suppression of the scattering between DM and SM particles happens quite early in the DM depletion history, where the DM temperature drops away from the thermal equilibrium, $T_\\chi < T_{\\rm SM}$, leading to the decreased kinetic energy of DM. The forbidden annihilation thus becomes inefficient since small kinetic energy cannot help exceed the annihilation threshold, naturally leading to a larger abundance. To show the eKD discrepancy, we numerically solve the coupled Boltzmann equations that govern the evolution of DM number density and temperature. It is found that eKD significantly affects the DM abundance, resulting in almost an order of magnitude higher than that by the traditional calculation. We also discuss the constraints from experimental searches on the model parameters, where the viable parameter space shrinks when considering the eKD effect.'\nauthor:\n- Yu Liu\n- Xuewen Liu\n- Bin Zhu\nbibliography:"
+"---\nabstract: |\n Rank-metric codes were studied by E. Gabidulin in 1985 after a brief introduction by Delsarte in 1978 as an equivalent of Reed-Solomon codes, but based on linearized polynomials. They have found applications in many areas, including linear network coding and space-time coding. They are also used in cryptography to reduce the size of the keys compared to Hamming metric codes at the same level of security. However, some families of rank-metric codes suffer from structural attacks due to the strong algebraic structure from which they are defined. It therefore becomes interesting to find new code families in order to address these questions in the landscape of rank-metric codes.\n\n In this paper, we provide a generalization of Subspace Subcodes in Rank metric introduced by Gabidulin and Loidreau. We also characterize this family by giving an algorithm which allows to have its generator and parity-check matrices based on the associated extended codes. We have also studied the specific case of Gabidulin codes whose underlying decoding algorithms are known. Bounds for the cardinalities of these codes, both in the general case and in the case of Gabidulin codes, are also provided.\nauthor:\n- Ousmane Ndiaye\n- Peter Arnaud Kidoudou\n-"
+"---\nabstract: 'The Linux kernel makes considerable use of Berkeley Packet Filter (BPF) to allow user-written BPF applications to execute in the kernel space. BPF employs a verifier to statically check the security of user-supplied BPF code. Recent attacks show that BPF programs can evade security checks and gain unauthorized access to kernel memory, indicating that the verification process is not flawless. In this paper, we present , a system that isolates potentially malicious BPF programs using Intel Memory Protection Keys (MPK). Enforcing BPF program isolation with MPK is not straightforward; \u00a0is carefully designed to alleviate technical obstacles, such as limited hardware keys and supporting a wide variety of kernel BPF helper functions. We have implemented in a prototype kernel module, and our evaluation shows that delivers low-cost isolation of BPF programs under various real-world usage scenarios, such as the isolation of a packet-forwarding BPF program for the `memcached` database with an average throughput loss of 6%.'\nauthor:\n- \nbibliography:\n- 'main.bib'\ntitle: '**: Towards Safe BPF Kernel Extension**'\n---\n\nIntroduction {#sec:intro}\n============\n\nIt is common to extend kernel functionality by allowing user applications to download code into the kernel space. In 1993, the well-known was introduced for this purpose"
+"---\nabstract: 'Regression tests are often used to ensure that software behaves as intended, as software evolution takes place. However, tests in general and regression tests in particular are inherently partial as behavioural descriptions, and thus regression tests may fail in clearly capturing how software behaviour is altered by code changes. In this paper, we propose an assertion-based approach to capture software evolution, through the notion of *commit-relevant specification*. A commit-relevant specification summarises the program properties that have changed as a consequence of a *commit* (understood as a specific software modification), via two sets of assertions, the *delta-added assertions*, properties that did not hold in the pre-commit version but hold on the post-commit, and the *delta-removed assertions*, those that were valid in the pre-commit, but no longer hold after the code change. We also present [DeltaSpec]{}, an approach that combines test generation and dynamic specification inference to automatically compute commit-relevant specifications from given commits. We evaluate [DeltaSpec]{} on two datasets that include a total of 57 commits (63 classes and 797 methods). We show that commit-relevant assertions can precisely describe the semantic deltas of code changes, providing a useful mechanism for validating the behavioural evolution of software. We"
+"---\nabstract: 'Cosmological observations precisely measure primordial variations in the density of the Universe at megaparsec and larger scales, but much smaller scales remain poorly constrained. However, sufficiently large initial perturbations at small scales can lead to an abundance of ultradense dark matter minihalos that form during the radiation epoch and survive into the late-time Universe. Because of their early formation, these objects can be compact enough to produce detectable microlensing signatures. We investigate whether the EROS, OGLE, and HSC surveys can probe these halos by fully accounting for finite source size and extended lens effects. We find that current data may already constrain the amplitudes of primordial curvature perturbations in a new region of parameter space, but this conclusion is strongly sensitive to yet undetermined details about the internal structures of these ultradense halos. Under optimistic assumptions, current and future HSC data would constrain a power spectrum that features an enhancement at scales $k \\sim 10^7/{\\rm Mpc}$, and an amplitude as low as $\\mathcal{P}_\\zeta\\simeq 10^{-4}$ may be accessible. This is a particularly interesting regime because it connects to primordial black hole formation in a portion of the LIGO/Virgo/Kagra mass range and the production of scalar-induced gravitational waves in the"
+"---\nabstract: 'Numerical simulations of neutron star mergers represent an essential step toward interpreting the full complexity of multimessenger observations and constraining the properties of supranuclear matter. Currently, simulations are limited by an array of factors, including computational performance and input physics uncertainties, such as the neutron star equation of state. In this work, we expand the range of nuclear phenomenology efficiently available to simulations by introducing a new analytic parametrization of cold, beta-equilibrated matter that is based on the relativistic enthalpy. We show that the new *enthalpy parametrization* can capture a range of nuclear behavior, including strong phase transitions. We implement the enthalpy parametrization in the [`SpECTRE`]{} code, simulate isolated neutron stars, and compare performance to the commonly used spectral and polytropic parametrizations. We find comparable computational performance for nuclear models that are well represented by either parametrization, such as simple hadronic EoSs. We show that the enthalpy parametrization further allows us to simulate more complicated hadronic models or models with phase transitions that are inaccessible to current parametrizations.'\nauthor:\n- Isaac Legred\n- Yoonsoo Kim\n- Nils Deppe\n- Katerina Chatziioannou\n- Francois Foucart\n- Fran\u00e7ois H\u00e9bert\n- 'Lawrence E.\u00a0Kidder'\nbibliography:\n- 'references.bib'\ntitle: ' Simulating neutron"
+"---\nabstract: 'In this paper, we propose several new stochastic second-order algorithms for policy optimization that only require gradient and Hessian-vector product in each iteration, making them computationally efficient and comparable to policy gradient methods. Specifically, we propose a dimension-reduced second-order method (DR-SOPO) which repeatedly solves a projected two-dimensional trust region subproblem. We show that DR-SOPO obtains an $\\mathcal{O}(\\epsilon^{-3.5})$ complexity for reaching approximate first-order stationary condition and certain subspace second-order stationary condition. In addition, we present an enhanced algorithm (DVR-SOPO) which further improves the complexity to $\\mathcal{O}(\\epsilon^{-3})$ based on the variance reduction technique. Preliminary experiments show that our proposed algorithms perform favorably compared with stochastic and variance-reduced policy gradient methods.'\n---\n\nIntroduction\n============\n\nThe policy gradient (PG) method, pioneered by @williams1992simple, is a widely used approach for finding the optimal policy in reinforcement learning (RL). The main idea behind PG is to directly maximize the total reward by using the (stochastic) gradient of the cumulative rewards. PG is particularly useful for high dimensional continuous state and action spaces due to its ease in implementation. In recent years, the PG method has gained much attention due to its significant empirical successes in a variety of challenging RL applications\u00a0[@lillicrap2015continuous; @silver2014deterministic].\n\nDespite"
+"---\nabstract: 'A multiscale mathematical model describing the genesis and ecology of algal-bacterial photogranules and the metals biosorption on their solid matrix within a sequencing batch reactor (SBR) is presented. The granular biofilm is modelled as a spherical free boundary domain with radial symmetry and a vanishing initial value. The free boundary evolution is governed by an ordinary differential equation (ODE) accounting for microbial growth, attachment and detachment phenomena. The model is based on systems of partial differential equations (PDEs) derived from mass conservation principles. Specifically, two systems of nonlinear hyperbolic PDEs model the growth of attached species and the dynamics of free adsorption sites; and two systems of quasi-linear parabolic PDEs govern the diffusive transport and conversion of nutrients and metals. The model is completed with systems of impulsive ordinary differential equations (IDEs) describing the evolution of dissolved substrates, metals, and planktonic and detached biomasses within the granular-based SBR. All main phenomena involved in the process are considered in the mathematical model. Moreover, the dual effect of metal presence on the formation process of photogranules is accounted: metal stimulates the production of EPS by sessile species and negatively affects the metabolic activities of microbial species. To describe the effects"
+"---\nabstract: |\n The Black-Litterman model extends the framework of the Markowitz Modern Portfolio Theory to incorporate investor views. We consider a case where multiple view estimates, including uncertainties, are given for the same underlying subset of assets at a point in time. This motivates our consideration of data fusion techniques for combining information from multiple sources. In particular, we consider consistency-based methods that yield fused view and uncertainty pairs; such methods are not common to the quantitative finance literature. We show a relevant, modern case of incorporating machine learning model-derived view and uncertainty estimates, and the impact on portfolio allocation, with an example subsuming Arbitrage Pricing Theory. Hence we show the value of the Black-Litterman model in combination with information fusion and artificial intelligence-grounded prediction methods.\\\n [**Keywords:**]{} Black-Litterman portfolio allocation, information fusion, machine learning, financial time-series analysis.\nauthor:\n- 'Trent Spears[^1] [^2], Stefan Zohren$^*$, Stephen Roberts$^*$'\nbibliography:\n- 'bibs.bib'\ntitle: 'View fusion vis-\u00e0-vis a Bayesian interpretation of Black-Litterman for portfolio allocation.'\n---\n\nIntroduction\n============\n\nThe Black-Litterman model extends the Markowitz portfolio optimisation framework to incorporate investor beliefs about asset returns [@Black91]. Such beliefs are termed \u2018views\u2019. In its original formulation, the Black-Litterman framework is such that a given view"
+"---\nbibliography:\n- 'refs.bib'\n---\n\nThis thesis is distributed under license \u201c**Creative Commons Atributtion - Non Commercial - Non Derivatives**\u201d.\\\n{width=\"4.2cm\"}\n\n*A mis padres,\\\nCrist\u00f3bal y Margarita.*\n\nAcknowledgements {#acknowledgements .unnumbered}\n================\n\nI owe a great deal to the people who have helped and advised me over the last three years, without whom I could not have written this thesis.\n\nFirst and foremost, I am tremendously grateful to my supervisors, who gave me such an absorbing topic to work on and offered their guidance, Eduardo J.S. Villase\u00f1or, J. Fernando Barbero G. and Juan Margalef Bentabol. I want to thank your incredible patience, continuous encouragement and your priceless time answering all of my questions. It has been a privilege to learn from you and work under your supervision.\n\nI thank Tom\u00e1s Ort\u00edn for making me fall in love with general relativity in my master studies and his patience with my endless and possibly pointless questions about spacetime and black holes. I also inevitably need to thank Mercedes Mart\u00edn Benito for encouraging me in the first place to pursue a PhD in this particular field of research.\n\nSpecial thanks also to Laura Hiscott for correcting my English, and to Pablo H. Ufarte for"
+"---\nabstract: 'We present a deep Graph Convolutional Kernel Machine (GCKM) for semi-supervised node classification in graphs. The method is built of two main types of blocks: (i) We introduce unsupervised kernel machine layers propagating the node features in a one-hop neighborhood, using implicit node feature mappings. (ii) We specify a semi-supervised classification kernel machine through the lens of the Fenchel-Young inequality. We derive an effective initialization scheme and efficient end-to-end training algorithm in the dual variables for the full architecture. The main idea underlying GCKM is that, because of the unsupervised core, the final model can achieve higher performance in semi-supervised node classification when few labels are available for training. Experimental results demonstrate the effectiveness of the proposed framework.'\nauthor:\n- 'Sonny Achten, Francesco Tonin, Panagiotis Patrinos, Johan A.K. Suykens'\nbibliography:\n- 'achten.bib'\ntitle: |\n Unsupervised Neighborhood Propagation Kernel Layers\\\n for Semi-supervised Node Classification\n---\n\nIntroduction\n============\n\nSemi-supervised node classification has been an important research area for several years. In many real-life applications, one has structured data for which the entire graph can be observed (e.g., a social network where users are represented as nodes and the relationships between users as edges). However, the node labels can only be"
+"---\nabstract: |\n We introduce partitioned matching games as a suitable model for international kidney exchange programmes, where in each round the total number of available kidney transplants needs to be distributed amongst the participating countries in a \u201cfair\u201d way. A partitioned matching game $(N,v)$ is defined on a graph $G=(V,E)$ with an edge weighting $w$ and a partition $V=V_1 \\cup \\dots \\cup V_n$. The player set is $N = \\{ 1, \\dots, n\\}$, and player $p \\in N$ owns the vertices in $V_p$. The value $v(S)$ of a coalition\u00a0$S \\subseteq N$ is the maximum weight of a matching in the subgraph of $G$ induced by the vertices owned by the players in\u00a0$S$. If $|V_p|=1$ for all $p\\in N$, then we obtain the classical matching game. Let $c=\\max\\{|V_p| \\; |\\; 1\\leq p\\leq n\\}$ be the width of $(N,v)$. We prove that checking core non-emptiness is polynomial-time solvable if $c\\leq 2$ but co-[[NP]{}]{}-hard if $c\\leq 3$. We do this via pinpointing a relationship with the known class of $b$-matching games and completing the complexity classification on testing core non-emptiness for $b$-matching games. With respect to our application, we prove a number of complexity results on choosing, out of possibly"
+"---\nabstract: 'Digital communication has made the public discourse considerably more complex, and new actors and strategies have emerged as a result of this seismic shift. Aside from the often-studied interactions among individuals during opinion formation, which have been facilitated on a large scale by social media platforms, the changing role of traditional media and the emerging role of \u201cinfluencers\u201d are not well understood, and the implications of their engagement strategies arising from the incentive structure of the attention economy even less so. Here we propose a novel opinion dynamics model that accounts for these different roles, namely that media and influencers change their own positions on slower time scales than individuals, while influencers dynamically gain and lose followers. Numerical simulations show the importance of their relative influence in creating qualitatively different opinion formation dynamics: with influencers, fragmented but short-lived clusters emerge, which are then counteracted by more stable media positions. Mean-field approximations by partial differential equations reproduce this dynamic. Based on the mean-field model, we study how strategies of influencers to gain more followers can influence the overall opinion distribution. We show that moving towards extreme positions can be a beneficial strategy for influencers to gain followers. Finally, we"
+"---\nauthor:\n- Giorgio Nicoletti\n- Matteo Bruzzone\n- Samir Suweis\n- Marco Dal Maschio\n- Daniel Maria Busiello\ntitle: '[Adaptation maximizes information and minimizes dissipation across biological scales]{}'\n---\n\nSensing and adaptation mechanisms in biological systems span a wide range of temporal and spatial scales, from cellular to multi-cellular level, forming a basis for decision-making and the optimization of limited resources [@bialek_review; @signaling_review; @gnesotto2018broken; @nemenman2012information; @nakajima2015biologically; @whiteley2017progress; @perkins2009strategies; @koshland1982amplification]. Prominent examples include the modulation of flagellar motion operated by bacteria according to changes in the local nutrient concentration [@tu_chemotaxis; @tu2008nonequilibrium; @mattingly2021escherichia], the regulation of immune responses through feedback mechanisms [@cheong_tnf; @wajant2003tumor], and the maintenance of high sensitivity in varying environments for olfactory or visual sensing in mammalian neurons [@tu_tradeoff; @menini1999calcium; @kohn2007visual; @lesica2007adaptation; @benucci2013adaptation].\n\nIn the last decade, advances in experimental techniques fostered the quest for the core biochemical mechanisms governing information processing. Simultaneous recordings of hundreds of biological signals made it possible to infer distinctive features directly from data [@maxent1; @maxent2; @kurtz2015sparse; @tunstrom2013collective]. However, many approaches fall short of describing the connection between the underlying chemical processes and the observed behaviors [@nicoletti2021mutual; @nicoletti2022mutual; @de2010advantages; @nicoletti2022information]. As a step in this direction, a multitude of works focused on the architecture"
+"---\nabstract: 'This paper links sizes of model classes to the minimum lengths of their defining formulas, that is, to their description complexities. Limiting to models with a fixed domain of size n, we study description complexities with respect to the extension of propositional logic with the ability to count assignments. This logic, called GMLU, can alternatively be conceived as graded modal logic over Kripke models with the universal accessibility relation. While GMLU is expressively complete for defining multisets of assignments, we also investigate its fragments GMLU(d) that can count only up to the integer threshold d. We focus in particular on description complexities of equivalence classes of GMLU(d). We show that, in restriction to a poset of type realizations, the order of the equivalence classes based on size is identical to the order based on description complexities. This also demonstrates a monotone connection between Boltzmann entropies of model classes and description complexities. Furthermore, we characterize how the relation between domain size n and counting threshold d determines whether or not there exists a dominating class, which essentially means a model class with limit probability one. To obtain our results, we prove new estimates on r-associated Stirling numbers. As another"
+"---\nabstract: |\n We introduce `MOSAIC`, a Python program for machine learning models. Our framework is designed to accelerate machine learning studies by making the implementation and testing of arbitrary network architectures and data sets simpler, faster, and less error-prone. `MOSAIC` features a full execution pipeline, from declaring the models, data and related hyperparameters within a simple configuration file, to the generation of ready-to-interpret figures and performance metrics. It also includes advanced run management, stores the results within a database, and incorporates several run monitoring options. Through all these functionalities, the framework should provide a useful tool for researchers, engineers, and general practitioners of machine learning.\\\n `MOSAIC` is available from the dedicated [PyPI](https://pypi.org/project/ml-mosaic/) repository.\nauthor:\n- |\n Matt\u00e9o Papin matpapin0@gmail.com\\\n \u00c9cole 42, 75017 Paris, France Yann Beaujeault-Taudi\u00e8re yann.beaujeault-taudiere@ijclab.in2p3.fr\\\n Universit\u00e9 Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France\\\n Laboratoire Leprince-Ringuet (LLR), \u00c9cole polytechnique, CNRS/IN2P3, 91120 Palaiseau, France Fr\u00e9d\u00e9ric Magniette frederic.magniette@llr.in2p3.fr\\\n Laboratoire Leprince-Ringuet (LLR), \u00c9cole polytechnique, CNRS/IN2P3, 91120 Palaiseau, France\nbibliography:\n- 'biblio.bib'\ntitle: '`MOSAIC`, a comparison framework for machine learning models'\n---\n\nOpen-source software, network optimization, classification, regression, Python, neural networks\n\nIntroduction\n============\n\nConceiving optimal machine learning models, especially with artificial neural networks, is known to be a complex art. Although substantial"
+"---\nabstract: 'Current multilingual semantic parsing (MSP) datasets are almost all collected by translating the utterances in the existing datasets from the resource-rich language to the target language. However, manual translation is costly. To reduce the translation effort, this paper proposes the first active learning procedure for MSP (AL-MSP). AL-MSP selects only a subset from the existing datasets to be translated. We also propose a novel selection method that prioritizes the examples diversifying the logical form structures with more lexical choices, and a novel hyperparameter tuning method that needs no extra annotation cost. Our experiments show that AL-MSP significantly reduces translation costs with ideal selection methods. Our selection method with proper hyperparameters yields better parsing performance than the other baselines on two multilingual datasets.'\nauthor:\n- |\n Zhuang Li, Gholamreza Haffari\\\n Openstream.AI\\\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: Active Learning for Multilingual Semantic Parser\n---\n\n=1\n\nIntroduction {#sec:intro}\n============\n\nMultilingual semantic parsing converts multilingual natural language utterances into logical forms (LFs) using a single model. However, there is a severe data imbalance among the MSP datasets. Currently, most semantic parsing datasets are in English, while only a limited number of non-English datasets exist. To tackle the data imbalance issue, almost"
+"---\nabstract: 'Hybrid metal-dielectric structures, which combine the advantages of both metal and dielectric materials, support highly confined yet low-loss magnetic and electric resonances under deliberate arrangements. However, their potential for enhancing magnetic emission has not been explored. Here, we study the simultaneous magnetic and electric Purcell enhancement supported by a hybrid structure consisting of a dielectric nanoring and a silver nanorod. Such a structure enables low Ohmic loss and a highly confined field under the mode hybridization of magnetic resonances on the nanoring and electric resonances on the nanorod in the optical communication band. As a result, a 60-fold magnetic Purcell enhancement and a 45-fold electric Purcell enhancement can be achieved simultaneously with $>95\\%$ of the radiation transmitted to the far field. The emitter position tolerates displacements of several tens of nanometers while maintaining a sufficiently large Purcell enhancement, which eases experimental fabrication. Moreover, an array formed by this hybrid nanostructure can further enhance the magnetic Purcell factors. The findings provide a possibility to selectively excite magnetic and electric emission in integrated photonic circuits. They may also facilitate brighter magnetic emission sources and light-emitting metasurfaces in a simpler arrangement.'\nauthor:\n- Lingxiao Shan$^1$\n- 'Qi Liu$^{1,2}$'\n- Yun Ma$^1$\n- Yali Jia$^1$\n- Hai Lin$^1$\n- 'Guowei Lu$^{1,2,3,4}$'\n-"
+"---\nabstract: 'Detecting anomalies in multivariate time series (MTS) data plays an important role in many domains. The abnormal values could indicate events, medical abnormalities, cyber-attacks, or faulty devices which, if left undetected, could lead to significant loss of resources, capital, or human lives. In this paper, we propose a novel approach to anomaly detection called *Bayesian State-Space Anomaly Detection (`BSSAD`)*. The `BSSAD` consists of two modules: the neural network module and the Bayesian state-space module. The design of our approach combines the strength of Bayesian state-space algorithms in predicting the next state and the effectiveness of recurrent neural networks and autoencoders at capturing the relationships within the data to achieve high accuracy in detecting anomalies. The modular design of our approach allows flexibility in implementation with the option of changing the parameters of the Bayesian state-space models or swapping neural network algorithms to achieve different levels of performance. In particular, we focus on using Bayesian state-space models based on particle filters and ensemble Kalman filters. We conducted extensive experiments on five different datasets. The experimental results show the superior performance of our model over baselines, achieving an F1-score greater than 0.95. In addition, we propose using a"
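The state-space half of the idea above — score each observation by how surprising its filter innovation is — can be illustrated with a deliberately minimal one-dimensional Kalman filter. This is my sketch under a local-level model with hand-picked noise variances, not the particle/ensemble Kalman filters or neural modules of the paper:

```python
import numpy as np

def innovation_anomaly_scores(y, q=1e-3, r=0.1):
    """Score each point of a 1-D series by its normalized Kalman-filter
    innovation under a local-level (random-walk) state-space model.
    Large |score| means the point is poorly predicted by the model."""
    x, p = y[0], 1.0                  # state estimate and its variance
    scores = np.zeros(len(y))
    for t, obs in enumerate(y):
        p = p + q                     # predict: x_t = x_{t-1} + noise
        nu = obs - x                  # innovation
        s = p + r                     # innovation variance
        scores[t] = nu / np.sqrt(s)
        k = p / s                     # Kalman gain
        x = x + k * nu                # update state estimate
        p = (1 - k) * p               # update state variance
    return scores

# usage: inject a spike into a smooth noisy series and score it
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 6, 300)) + 0.05 * rng.standard_normal(300)
y[150] += 2.0                         # the anomaly
scores = innovation_anomaly_scores(y)
```

Thresholding `|scores|` then flags the injected spike; swapping in a richer state-space model or a learned predictor changes only the prediction step, which is the flexibility the modular design above refers to.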
+"---\nabstract: 'The stratified inclined duct (SID) sustains an exchange flow in a long, gently sloping duct as a model for continuously-forced density-stratified flows such as those found in estuaries. Experiments have shown that the emergence of interfacial waves and their transition to turbulence as the tilt angle is increased appears linked to a threshold in the exchange flow rate given by inviscid two-layer hydraulics. We uncover these hydraulic mechanisms with (i) recent direct numerical simulations (DNS) providing full flow data in the key flow regimes (Zhu & Atoufi *et al.*, [arXiv:2301.09773]{}, 2023), (ii) averaging these DNS into two layers, (iii) an inviscid two-layer shallow water and instability theory to diagnose interfacial wave behaviour and provide physical insight. The laminar flow is subcritical and stable throughout the duct and hydraulically controlled at the ends of the duct. As the tilt is increased, the flow becomes everywhere supercritical and unstable to long waves. An internal undular jump featuring stationary waves first appears near the centre of the duct, then leads to larger-amplitude travelling waves, and to stronger jumps, wave breaking and intermittent turbulence at the largest tilt angle. Long waves described by the (nonlinear) shallow water equation are locally interpreted as"
+"---\nabstract: 'We report our analysis results for the globular cluster (GC) NGC\u00a06341 (M92), as the millisecond pulsar (MSP) J1717$+$4308A has recently been discovered in this GC. The data used are from the Large Area Telescope onboard the [*Fermi Gamma-ray Space Telescope (Fermi)*]{}. We detect $\\gamma$-ray pulsations of the MSP at a $4.4\\sigma$ confidence level (the corresponding weighted H-test value is $\\sim$28.4). This MSP, the fourth $\\gamma$-ray pulsar found in a GC, does not have significant off-pulse emission and has a $\\gamma$-ray luminosity and efficiency of $1.3\\times10^{34}$ erg s$^{-1}$ and 1.7%, respectively. In order to have a clear view of the properties of the known GC $\\gamma$-ray MSPs, we re-analyze the [[*Fermi*]{}]{}\u00a0LAT data for the other three. These four MSPs share the properties of either having high $\\dot{E}$ ($\\sim 10^{36}$ erg s$^{-1}$) or being in GCs that contain only limited numbers of known MSPs. In addition, we find that PSRs\u00a0J1823$-$3021A and B1821$-$24, in NGC\u00a06624 and NGC\u00a06626 respectively, have detectable off-pulse $\\gamma$-ray emission, while PSR J1835$-$3259B in NGC\u00a06652 does not. Using the obtained off-pulse spectra or spectral upper limits, we constrain the numbers of other MSPs in the four GCs. The results are consistent with the numbers of"
+"---\nabstract: |\n Using the Shioda-Tate theorem and an adaptation of Silverman\u2019s specialization theorem, we reduce the specialization of Mordell-Weil ranks for abelian varieties over fields finitely generated over infinite finitely generated fields $k$ to the specialization theorem for N\u00e9ron-Severi ranks recently proved by Ambrosi in positive characteristic. More precisely, we prove that after a blow-up of the base surface $S$, for all vertical curves $S_x$ of a fibration $S \\to U \\subseteq {{\\mathbf{P}}}^1_k$ with $x$ from the complement of a sparse subset of $|U|$, the Mordell-Weil rank of an abelian scheme over $S$ stays the same when restricted to $S_x$.\n\n *Keywords:* specialization of Mordell-Weil ranks; abelian schemes over higher-dimensional bases; specialization of N\u00e9ron-Severi groups; rational points.\n\n *2020 Mathematics* subject classification: 11G10 (primary); 11G05; 11G35\naddress: |\n Lehrstuhl Mathematik\u00a0II (Computeralgebra)\\\n Universit\u00e4t Bayreuth\\\n Universit\u00e4tsstra\u00dfe 30\\\n 95440 Bayreuth, Germany\nauthor:\n- Timo Keller\ntitle: |\n Specialization of Mordell-Weil ranks\\\n of abelian schemes over surfaces to curves\n---\n\n[^1]\n\nIntroduction\n============\n\nMotivation and overview\n-----------------------\n\nSilverman\u2019s specialization theorem\u00a0[@Silverman1983] states that for a family ${\\mathscr{A}}$ of abelian varieties fibered over a curve $C$ over a number field $K$, the specialization homomorphism from the generic fiber ${\\mathscr{A}}(K(C))$ to a special fiber ${\\mathscr{A}}(\\kappa(x))$"
+"---\nabstract: 'Applying pressure around a megabar is indispensable in the synthesis of high-temperature superconducting hydrides, such as H$_3$S and LaH$_{10}$. Stabilizing the high-pressure phase of a hydride around ambient conditions is a severe challenge. Based on density-functional theory calculations, we give the first example in which the structure of the hydride CaBH$_5$, predicted above 280 GPa, can maintain its dynamical stability at pressures down to 1 GPa by modulating the charge transfer from metal atoms to hydrogen atoms via the replacement of Ca with alkali metal atoms, e.g. Cs, in which the \\[BH$_5$\\]$^{2-}$ anion shrinks along the $c$ axis and expands in the $ab$ plane, experiencing an anisotropic virtual high pressure. This mechanism, namely the charge-transfer-modulated virtual high pressure effect, plays a vital role in enhancing the structural stability and leads to the reemergence of the ambient-pressure-forbidden \\[BH$_5$\\]$^{2-}$ anion around 1 GPa in CsBH$_5$. Moreover, we find that CsBH$_5$ is a strongly coupled superconductor, with a transition temperature as high as 98 K, well above the liquid-nitrogen temperature. Our findings provide a novel mechanism to reduce the critical pressure required by hydrogen-rich compounds without changing their crystal structure, and also shed light on the search for ambient-pressure high-temperature superconductivity in metal borohydrides.'\nauthor:\n- Miao Gao"
+"---\nabstract: 'This paper presents a complementary approach to establishing the stability of finite receding horizon control with a terminal cost. First, a new augmented stage cost is defined by rotating the terminal cost. Then a one-step optimisation problem is defined based on this augmented stage cost. It is shown that a slightly modified Model Predictive Control (MPC) algorithm is stable if the value function of the augmented one-step cost (OSVF) is a control Lyapunov function. The proposed stability condition is completely complementary to the existing terminal-cost-based MPC stability conditions in the sense that the two are mutually exclusive. By using this approach, we are able to establish stability for MPC algorithms with zero terminal cost or even negative terminal cost as special cases. Combining this new approach with the existing MPC stability theory, we are able to significantly relax the stability requirements on MPC and extend the design space where stability is guaranteed. The proposed approach will help to further reduce the gap between stability theory and practical applications of MPC and other optimisation-based control methods.'\nauthor:\n- |\n Wen-Hua Chen\\\n Department of Aeronautical and Automotive Engineering\\\n Loughborough University\\\n Loughborough LE11 3TU, U.K.\\\n `w.chen@lboro.ac.uk`\\\n Yunda"
+"---\nabstract: 'In this paper, we shall first introduce the Pascal determinantal array of order $k$ as a generalization of the standard Pascal array. We also present an algorithm to produce determinantal arrays of any order. Then we investigate geometric properties of these new determinantal arrays. As a by-product, we give a proof of the correctness of our proposed algorithm. Next, we give a geometric interpretation of the determinant of any $k$ by $k$ subarray of the Pascal array. Finally, we will give a generalization of Rahimpour\u2019s determinantal identity, using the above geometric interpretation.'\naddress:\n- ' Department of Computer Science, Faculty of Mathematical and Computer Sciences, Allameh Tabataba\u2019i University, Tehran, Iran.\\'\n- 'Ministry of Information and Communication Technology (ICT), Zanjan, Iran.'\nauthor:\n- 'H. Teimoori and H. Khodakarami'\ntitle: 'Pascal Determinantal Arrays and A Generalization of Rahimpour\u2019s Determinantal Identity'\n---\n\nIntroduction\n============\n\nIn [@m1], the first author and M. Bayat proved a conjecture by Rahimpour about determinants inside the Pascal array. Following further investigation of this earlier work, the second author of this paper obtained a generalization of Rahimpour\u2019s determinantal identity. Here, we first introduce a new infinite class of square arrays which are called the Pascal"
+"---\nauthor:\n- 'Anushree Datta$^{1,2,3}$'\n- 'M.J. Calder\u00f3n$^1$'\n- 'A. Camjayi$^{4,5}$'\n- 'E. Bascones$^1$'\ntitle: Heavy quasiparticles and cascades without symmetry breaking in twisted bilayer graphene\n---\n\nAbstract {#abstract .unnumbered}\n========\n\n[**Among the variety of correlated states exhibited by twisted bilayer graphene are cascades in the spectroscopic properties and in the compressibility. Here we show that the spectral weight associated with the formation of local moments and heavy quasiparticles can explain the cascades without invoking symmetry-breaking orders. The phenomena reproduced here include the cascade flow of spectral weight, the oscillations of remote band energies, and the asymmetric jumps of the inverse compressibility. We also predict a strong momentum differentiation in the incoherent spectral weight associated with the fragile topology of twisted bilayer graphene.** ]{}\n\nIntroduction {#introduction .unnumbered}\n============\n\nA large variety of correlated states including insulating, superconducting and topological phases\u00a0[@CaoNat2018_1; @CaoNat2018_2; @LuNature2019; @SharpeScience2019; @SaitoNatPhys2019; @ChoiNatPhys2019; @XieNat2019; @JiangNature2019; @NuckollsNature2020; @WongNature2020; @ChoiNature2021; @WuNatMaterials2021; @StepanovPRL2021; @XieNature2021] have been detected in twisted bilayer graphene. Among these phenomena, scanning tunneling microscopy (STM) experiments find a strong doping-dependent broadening of the density of states (DOS) with a spectral weight reorganization up to several tens of meV\u00a0[@KerelskyNat2019; @ChoiNatPhys2019; @JiangNature2019; @XieNat2019]. This reorganization happens"
+"---\nabstract: 'This work is concerned with fractional Gaussian fields, i.e. Gaussian fields whose covariance operator is given by the inverse fractional Laplacian $(-\\Delta)^{-s}$ (where, in particular, we include the case $s >1$). We define a lattice discretization of these fields and show that their scaling limits \u2013 with respect to the optimal Besov space topology (up to an endpoint case) \u2013 are the original continuous fields. As a byproduct, in dimension $d<2s$, we prove the convergence in distribution of the maximum of the fields. A key tool in the proof is a sharp error estimate for the natural finite difference scheme for $(-\\Delta)^s$ under minimal regularity assumptions, which is also of independent interest.'\naddress:\n- 'Friedrich-Alexander-Universit\u00e4t Erlangen-N\u00fcrnberg, Department of Data Science, Chair for Dynamics, Control and Numerics (Alexander von Humboldt Professorship), Cauerstr. 11, 91058 Erlangen, Germany.'\n- 'Weizmann Institute of Science, Department of Mathematics, Rehovot 7610001, Israel.'\nauthor:\n- Nicola De Nitti\n- Florian Schweiger\nbibliography:\n- 'FGF-ref.bib'\ntitle: Scaling limits for fractional polyharmonic Gaussian fields\n---\n\nIntroduction {#sec:intro}\n============\n\nFractional Gaussian Fields {#s:introFGF}\n--------------------------\n\nFractional Gaussian fields form a natural one-parameter family of Gaussian interface models. For a fixed parameter $s\\ge0$, the $s$-fractional Gaussian field is"
+"---\nauthor:\n- 'Carlos Mora$^\dagger$'\n- 'Jonathan Tammer Eweis-Labolle[^1]'\n- Tyler Johnson\n- Likith Gadde\n- 'Ramin Bostanabad[^2]'\nbibliography:\n- '01\\_Ref.bib'\ntitle: 'Probabilistic Neural Data Fusion for Learning from an Arbitrary Number of Multi-fidelity Data Sets'\n---\n\nIn many applications in engineering and the sciences, analysts have simultaneous access to multiple data sources. In such cases, the overall cost of acquiring information can be reduced via data fusion or multi-fidelity (MF) modeling where one leverages inexpensive low-fidelity (LF) sources to reduce the reliance on expensive high-fidelity (HF) data. In this paper, we employ neural networks (NNs) for data fusion in scenarios where data is very scarce and obtained from an arbitrary number of sources with varying levels of fidelity and cost. We introduce a unique NN architecture that converts MF modeling into a nonlinear manifold learning problem. Our NN architecture inversely learns non-trivial (e.g., non-additive and non-hierarchical) biases of the LF sources in an interpretable and visualizable manifold where each data source is encoded via a low-dimensional distribution. This probabilistic manifold quantifies model form uncertainties such that LF sources with small bias are encoded close to the HF source. Additionally, we endow the output of our NN with a parametric distribution"
+"---\nabstract: 'In this work we explore in detail the presence of scalar resonances in the $WW$ fusion process in the context of the LHC experiments, working in the theoretical framework provided by Higgs effective field theories (HEFTs). While the phenomenology of vector resonances is reasonably well understood in the framework of Weinberg sum rules and unitarization studies, scalar resonances are far less constrained and, more importantly, depend on HEFT low-energy effective couplings different from those of vector resonances that are difficult to constrain experimentally. More specifically, unitarization techniques combined with the requirement of causality allow us to set nontrivial bounds on Higgs self-interactions. This is due to the need to consider coupled channels in the scalar case along the unitarization process. As a byproduct, we can gain some relevant information on the Higgs sector from $WW\\to WW$ elastic processes without needing to consider two-Higgs production.'\nauthor:\n- I\u00f1igo Asi\u00e1in\n- Dom\u00e8nec Espriu\n- Federico Mescia\nbibliography:\n- 'bibliography.bib'\ntitle: ' Introducing tools to test Higgs interactions via $WW$ scattering. II. The coupled channel formalism and scalar resonances '\n---\n\nIntroduction {#sec:intro}\n============\n\nIn a companion paper Ref. [@Asiain:2021lch], we highlighted the importance of $W_LW_L$ scattering in investigating"
+"---\nabstract: |\n In practical asset management by investment trusts and similar institutions, the general practice is to manage portfolios over the medium to long term, owing to the operational burden and the transaction costs that grow with the turnover ratio. However, when machine learning is used to construct a management model, the amount of training data decreases as the time scale lengthens, which degrades the learning precision. Accordingly, in this study, data augmentation was applied by combining the training data of the target time scale with data from shorter time scales, demonstrating that degradation of the generalization performance can be suppressed even when the target tasks of machine learning have long-term time scales. Moreover, as an illustration of how this data augmentation can be applied, we conducted portfolio management in which a multifactor model was learned by an autoencoder and mispricing relative to the estimated theoretical values was exploited. Its effectiveness was confirmed not only in the stock market but also in the FX market, and a general-purpose management model could be constructed for various financial markets.\n\n Highlights {#highlights"
+"---\nabstract: 'Coherent optical manipulation of electronic bandstructures via Floquet Engineering is a promising means to control quantum systems on an ultrafast timescale. However, the ultrafast switching on/off of the driving field comes with questions regarding the limits of validity of the Floquet formalism, which is defined for an infinite periodic drive, and to what extent the transient changes can be driven adiabatically. Experimentally addressing these questions has been difficult, in large part due to the absence of an established technique to measure coherent dynamics through the duration of the pulse. Here, using multidimensional coherent spectroscopy we explicitly excite, control, and probe a coherent superposition of excitons in the $K$ and $K^\\prime$ valleys in monolayer WS$_2$. With a circularly polarized, red-detuned, pump pulse, the degeneracy of the $K$ and $K^\\prime$ excitons can be lifted and the phase of the coherence rotated. We demonstrate phase rotations during the 100 fs driving pulse that exceed $\\pi$, and show that this can be described by a combination of the AC-Stark shift of excitons in one valley and Bloch-Siegert shift of excitons in the opposite valley. Despite showing a smooth evolution of the phase that directly follows the intensity envelope of the pump pulse,"
+"---\nabstract: 'In $\\R^d$, it is well-known that cumulants provide an alternative to moments that can achieve the same goals with numerous benefits such as lower variance estimators. In this paper we extend cumulants to reproducing kernel Hilbert spaces (RKHS) using tools from tensor algebras and show that they are computationally tractable by a kernel trick. These kernelized cumulants provide a new set of all-purpose statistics; the classical maximum mean discrepancy and Hilbert-Schmidt independence criterion arise as the degree one objects in our general construction. We argue both theoretically and empirically (on synthetic, environmental, and traffic data analysis) that going beyond degree one has several advantages and can be achieved with the same computational complexity and minimal overhead in our experiments.'\nauthor:\n- |\n Patric Bonnier$^{1*}$ Harald Oberhauser $^{1}$ Zolt[\u00e1]{}n Szab[\u00f3]{}$^{2}$\\\n $^{1}$Mathematical Institute, University of Oxford $^{2}$Department of Statistics, London School of Economics\\\n `bonnier,oberhauser@maths.ox.ac.uk`\\\n `z.szabo@lse.ac.uk`\nbibliography:\n- 'BIB/refs\\_smoothed.bib'\ntitle: 'Kernelized Cumulants: Beyond Kernel Mean Embeddings'\n---\n\n#### Keywords:\n\nkernel, cumulant, mean embedding, Hilbert-Schmidt independence criterion, kernel Lancaster interaction, kernel Streitberg interaction, maximum mean discrepancy, maximum variance discrepancy\n\nIntroduction {#sec:intro}\n============\n\nThe moments of a random variable are arguably the most popular all-purpose statistic. However, cumulants are often more favorable statistics"
+"---\nabstract: 'Stein thinning is a promising algorithm proposed by [@riabiz2020optimal] for post-processing outputs of Markov chain Monte Carlo (MCMC). The main principle is to greedily minimize the kernelized Stein discrepancy (KSD), which only requires the gradient of the log-target distribution, and is thus well-suited for Bayesian inference. The main advantages of Stein thinning are the automatic removal of the burn-in period, the correction of the bias introduced by recent MCMC algorithms, and the asymptotic properties of convergence towards the target distribution. Nevertheless, Stein thinning suffers from several empirical pathologies, which may result in poor approximations, as observed in the literature. In this article, we conduct a theoretical analysis of these pathologies, to clearly identify the mechanisms at stake, and suggest improved strategies. Then, we introduce the regularized Stein thinning algorithm to alleviate the identified pathologies. Finally, theoretical guarantees and extensive experiments show the high efficiency of the proposed algorithm. An implementation of regularized Stein thinning as the `kernax` library in Python and JAX is available at .'\nauthor:\n- |\n **Cl\u00e9ment B\u00e9nard**$^{1}$ **Brian Staber**$^{1}$ **S\u00e9bastien Da Veiga**$^{2}$\\\n $^1$ Safran Tech, Digital Sciences & Technologies, 78114 Magny-Les-Hameaux, France\\\n $^2$ ENSAI, CREST, F-35000 Rennes, France\\\n `{clement.benard, brian.staber}@{safrangroup.com}`\\\n `sebastien.da-veiga@ensai.fr`\nbibliography:\n- 'bibfile.bib'"
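The greedy KSD-minimizing selection at the heart of Stein thinning is compact enough to sketch. The following is a minimal vanilla version with a Gaussian base kernel and a standard-normal target for concreteness; it is my illustration of the principle, not the paper's regularized algorithm or the `kernax` implementation:

```python
import numpy as np

def stein_kernel_gauss(X, score, sigma=1.0):
    """Langevin-Stein kernel matrix k_p(x_i, x_j) built from a Gaussian
    base kernel k and the score function s(x) = grad log p(x):
    k_p = div_x div_y k + s(x).grad_y k + s(y).grad_x k + s(x).s(y) k."""
    n, d = X.shape
    diff = X[:, None, :] - X[None, :, :]          # (n, n, d), x_i - x_j
    sq = (diff ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    S = score(X)                                  # (n, d) score values
    div = (d / sigma**2 - sq / sigma**4) * K
    cross = (np.einsum('id,ijd->ij', S, diff)
             - np.einsum('jd,ijd->ij', S, diff)) * K / sigma**2
    return div + cross + (S @ S.T) * K

def stein_thin(kp, m):
    """Vanilla greedy Stein thinning: repeatedly add the point that
    minimizes the KSD of the growing subsample (repeats allowed)."""
    obj = np.diag(kp) / 2.0
    idx = []
    for _ in range(m):
        i = int(np.argmin(obj))
        idx.append(i)
        obj = obj + kp[i]        # accumulate cross terms with the new pick
    return idx

# usage: thin an over-dispersed "chain" toward a standard-normal target
rng = np.random.default_rng(2)
X = 1.8 * rng.standard_normal((400, 2))          # stand-in for MCMC output
kp = stein_kernel_gauss(X, score=lambda x: -x)   # score of N(0, I)
sel = stein_thin(kp, 40)
```

The selected subsample has a markedly lower KSD than the first 40 raw points, which is exactly the behavior whose failure modes (and their regularized fix) the paper analyzes.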
+"---\nabstract: 'A machine learning model that generalizes well should obtain low error on unseen test examples. Thus, if we could learn an optimal model on the training data, it would have better generalization performance on testing tasks. However, learning such a model is not possible in standard machine learning frameworks, as the distribution of the test data is unknown. To tackle this challenge, we propose a novel robust meta-learning method that is more robust to image-based testing tasks that are unknown and exhibit distribution shifts with respect to the training tasks. Our robust meta-learning method can provide robust optimal models even when data from each distribution are scarce. In experiments, we demonstrate that our algorithm not only has better generalization performance, but is also robust to different unknown testing tasks.'\nauthor:\n- 'Penghao Jiang, Ke Xin, Zifeng Wang, Chunxi Li[^1]'\ntitle: '**Robust Meta Learning for Image based tasks** '\n---\n\nIntroduction\n============\n\nDeep learning achieves good performance in many real-world tasks such as visual recognition [@20; @11], but relies on large amounts of training data, which makes it unable to generalize in small-data regimes. To overcome this limitation of deep learning, recently, researchers have explored meta-learning [@15; @29] approaches, whose goal is to"
+"---\nauthor:\n- 'S. Zambito,[!!]{}'\n- 'M. Milanesio,'\n- 'T. Moretti,'\n- 'L. Paolozzi,'\n- 'M. Munker,'\n- 'R. Cardella,'\n- 'T. Kugathasan,'\n- 'F. Martinelli,'\n- 'A. Picardi,'\n- 'M. Elviretti,'\n- 'H. R\u00fccker,'\n- 'A. Trusch,'\n- 'F. Cadoux,'\n- 'R. Cardarelli[!!]{},'\n- 'S. D\u00e9bieux,'\n- 'Y. Favre,'\n- 'C. A. Fenoglio,'\n- 'D. Ferrere,'\n- 'S. Gonzalez-Sevilla,'\n- 'L. Iodice,'\n- 'R. Kotitsa,'\n- 'C. Magliocca,'\n- 'M. Nessi,'\n- 'A. Pizarro-Medina,'\n- 'J. Sabater Iglesias,'\n- 'J. Saidi,'\n- 'M. Vicente Barreto Pinto'\n- 'and G. Iacobucci'\nbibliography:\n- 'bibliography.bib'\ntitle: '20 ps Time Resolution with a Fully-Efficient Monolithic Silicon Pixel Detector without Internal Gain Layer'\n---\n\nIntroduction {#sec:intro}\n============\n\nMonolithic Active Pixel Sensors (MAPS)\u00a0[@peric], which contain the sensor in the same CMOS substrate utilised for the electronics, are particularly appealing since they offer all the advantages of industrial standard CMOS processing, avoiding the production complexity and high cost of the bump-bonded hybrid pixel sensors that are commonly used in particle-physics experiments. Today MAPS represent a mature technology that matches the performance of hybrid silicon pixel sensors. Indeed MAPS are already used in a large LHC experiment\u00a0[@alice].\n\nThe large pile-up of events expected during"
+"---\nabstract: 'While Chain-of-Thought (CoT) prompting boosts Language Models\u2019 (LM) performance on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (aka. *faithfulness*). We propose **Faithful CoT**, a reasoning framework involving two stages: **Translation** (Natural Language query $\\rightarrow$ symbolic reasoning chain) and **Problem Solving** (reasoning chain $\\rightarrow$ answer), using an LM and a deterministic solver respectively. This guarantees that the reasoning chain provides a faithful explanation of the final answer. Aside from interpretability, Faithful CoT also improves empirical performance: it outperforms standard CoT on 9 of 10 benchmarks from 4 diverse domains, with a relative accuracy gain of 6.3% on Math Word Problems (MWP), 3.4% on Planning, 5.5% on Multi-hop Question Answering (QA), and 21.4% on Relational Inference. Furthermore, with GPT-4 and Codex, it sets the new state-of-the-art few-shot performance on 7 datasets (with 95.0+ accuracy on 6 of them), showing a strong synergy between faithfulness and accuracy.[^1]'\nauthor:\n- |\n Qing Lyu [^2] Shreya Havaldar Adam Stein Li Zhang\\\n [Delip Rao]{} [Eric Wong]{} [Marianna Apidianaki]{} [Chris Callison-Burch]{}\\\n University of Pennsylvania\\\n `{lyuqing, shreyah, steinad, zharry, exwong,`\\\n `marapi, ccb}@seas.upenn.edu, deliprao@gmail.com`\nbibliography:\n- 'custom.bib'\ntitle: 'Faithful Chain-of-Thought Reasoning'\n---\n\n=1"
+"---\nabstract: 'Two-sample testing tests whether the distributions generating two samples are identical. We pose the two-sample testing problem in a new scenario where the sample measurements (or sample features) are inexpensive to access, but their group memberships (or labels) are costly. We devise the first *active sequential two-sample testing framework* that not only sequentially but also *actively queries* sample labels to address the problem. Our test statistic is a likelihood ratio where one likelihood is found by maximization over all class priors, and the other is given by a classification model. The classification model is adaptively updated and then used to guide an active query scheme called bimodal query to label sample features in the regions with high dependency between the feature variables and the label variables. The theoretical contributions in the paper include proof that our framework produces an *anytime-valid* $p$-value; and, under reachable conditions and a mild assumption, the framework asymptotically generates a minimum normalized log-likelihood ratio statistic that a passive query scheme can only achieve when the feature variable and the label variable have the highest dependence. Lastly, we provide a *query-switching (QS)* algorithm to decide when to switch from passive query to active query and"
+"---\nabstract: 'A highly accurate and efficient method to compute the expected values of the count, sum, and squared norm of the sum of the centre vectors of a random maximal sized collection of non-overlapping unit diameter disks touching a fixed unit-diameter disk is presented. This extends earlier work on R\u00e9nyi\u2019s parking problem \\[Magyar Tud. Akad. Mat. Kutat\u00f3 Int. K\u00f6zl. 3 (1-2), 1958, pp. 109-127\\]. Underlying the method is a splitting of the problem conditional on the value of the first disk. This splitting is proven and then used to derive integral equations for the expectations. These equations take a lower block triangular form. They are solved using substitution and approximation of the integrals to very high accuracy using a polynomial approximation within the blocks.'\naddress:\n- 'Mathematical Sciences Institute, The Australian National University, Australian Capital Territory\u00a02601, Australia.'\n- 'Mathematical Sciences Institute, The Australian National University, Australian Capital Territory\u00a02601, Australia.'\n- 'School of Engineering, The Australian National University, Australian Capital Territory\u00a02601, Australia.'\nauthor:\n- 'M.\u00a0Hegland'\n- 'C.J.\u00a0Burden'\n- 'Z.\u00a0Stachurski'\nbibliography:\n- 'eg.bib'\ntitle: Computing expected moments of the R\u00e9nyi parking problem on the circle\n---\n\nIntroduction\n============\n\nConsider"
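The geometry of the problem above reduces to one dimension: unit-diameter disks touching a fixed unit-diameter disk have their centres on the circle of radius 1 around it, and two such disks overlap exactly when their angular separation is below pi/3 (chord 2 sin(gap/2) < 1). A Monte Carlo sketch of the random sequential filling (my illustration, not the paper's integral-equation method):

```python
import math
import random

def parked_angles(rng):
    """Random sequential adsorption of unit-diameter disks touching a
    fixed unit-diameter disk.  Centres lie on the circle of radius 1;
    two disks overlap iff their angular separation is < pi/3.  Uniform
    proposals are rejected until no admissible slot remains (jamming:
    every gap between neighbouring centres is at most 2*pi/3)."""
    min_gap = math.pi / 3
    angles = []
    while True:
        if angles:
            a = sorted(angles)
            gaps = [a[0] + 2 * math.pi - a[-1]] + \
                   [a[i + 1] - a[i] for i in range(len(a) - 1)]
            if max(gaps) <= 2 * min_gap:      # no room for another centre
                return a
        theta = rng.uniform(0.0, 2 * math.pi)
        if all(min((theta - p) % (2 * math.pi),
                   (p - theta) % (2 * math.pi)) >= min_gap
               for p in angles):
            angles.append(theta)

# usage: empirical distribution of the count over independent fillings
counts = [len(parked_angles(random.Random(seed))) for seed in range(50)]
```

Since at most six and (almost surely) at least four disks fit, the simulated counts concentrate on 4 and 5; the paper's method computes the corresponding expectations to very high accuracy instead of by sampling.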
+"---\nabstract: 'We explore the magnetohydrodynamics of Dirac fermions in neutral graphene in the Corbino geometry. Based on the fully consistent hydrodynamic description derived from a microscopic framework and taking into account all peculiarities of graphene-specific hydrodynamics, we report the results of a comprehensive study of the interplay of viscosity, disorder-induced scattering, recombination, energy relaxation, and interface-induced dissipation. In the clean limit, magnetoresistance of a Corbino sample is determined by viscosity. Hence the Corbino geometry could be used to measure the viscosity coefficient in neutral graphene.'\nauthor:\n- Vanessa Gall\n- 'Boris N. Narozhny'\n- 'Igor V. Gornyi'\nbibliography:\n- 'hydro-refs.bib'\n- 'refs-books.bib'\ntitle: Corbino magnetoresistance in neutral graphene\n---\n\nTransport measurements remain one of the most common experimental tools in condensed matter physics. Having dramatically evolved past the original task of establishing bulk material characteristics such as electrical and thermal conductivities, modern experiments often involve samples that are tailor-made to target particular properties or behavior.\n\nIn recent years considerable efforts have been devoted to uncovering the collective or hydrodynamic flows of charge carriers in ultraclean materials as predicted theoretically [@pg; @me0; @luc; @rev]. Several dedicated experiments focused on answering two major questions: is the observed electronic flow really hydrodynamic"
+"---\nabstract: '\\[S U M M A R Y\\] Self-supervised auto-encoders have emerged as a successful framework for representation learning in computer vision and natural language processing in recent years, However, their application to graph data has been met with limited performance due to the non-Euclidean and complex structure of graphs in comparison to images or text, as well as the limitations of conventional auto-encoder architectures. In this paper, we investigate factors impacting the performance of auto-encoders on graph data and propose a novel auto-encoder model for graph representation learning. Our model incorporates a hierarchical adaptive masking mechanism to incrementally increase the difficulty of training in order to mimic the process of human cognitive learning, and a trainable corruption scheme to enhance the robustness of learned representations. Through extensive experimentation on ten benchmark datasets, we demonstrate the superiority of our proposed method over state-of-the-art graph representation learning models.'\naddress: 'College of Computer Science and Technology, Jilin University, Changchun, 130000, China'\nauthor:\n- Chengyu Sun\n- Liang Hu\n- Hongtu Li\n- Shuai Li\n- Tuohang Li\n- Ling Chi\nbibliography:\n- 'myrefs.bib'\ntitle: ' HAT-GAE: Self-Supervised Graph Auto-encoders with Hierarchical Adaptive Masking and Trainable Corruption'\n---\n\ngraph representation learning ,generative"
+"---\nabstract: 'We establish sharp global regularity of a class of multilinear oscillatory integral operators that are associated to nonlinear dispersive equations with both Banach and quasi-Banach target spaces. As a consequence we also prove the (local in time) continuous dependence on the initial data for solutions of a large class of coupled systems of dispersive partial differential equations.'\naddress:\n- 'A.\u00a0Bergfeldt, Department of Mathematics, Uppsala University, SE-751 06 Uppsala, Sweden'\n- 'S.\u00a0Rodr\u00edguez-L\u00f3pez, Department of Mathematics, Stockholm University, SE-106 91 Stockholm, Sweden'\n- 'D.\u00a0Rule, Department of Mathematics, Link\u00f6ping University, SE-581 83 Link\u00f6ping, Sweden'\n- 'W.\u00a0Staubach, Department of Mathematics, Uppsala University, SE-751 06 Uppsala, Sweden'\nauthor:\n- Aksel Bergfeldt\n- 'Salvador Rodr\u00edguez-L\u00f3pez'\n- David Rule\n- Wolfgang Staubach\ntitle: Multilinear oscillatory integrals and estimates for coupled systems of dispersive PDEs\n---\n\n[^1]\n\nIntroduction {#sec:intro}\n============\n\nIn this paper, we consider the regularity of multilinear oscillatory integral operators (multilinear OIOs for short) that are associated to nonlinear dispersive equations in the realm of Banach and quasi-Banach function spaces. Examples include non-linear water-wave and capillary wave equations, nonlinear wave and Klein\u2013Gordon equations, the nonlinear Schr\u00f6dinger equations, the Korteweg\u2013deVries-type equations, and higher order nonlinear dispersive equations. To achieve this, we"
+"---\nabstract: 'Variational quantum algorithms represent a promising approach to quantum machine learning where classical neural networks are replaced by parametrized quantum circuits. However, both approaches suffer from a clear limitation, that is a lack of interpretability. Here, we present a variational method to quantize projective simulation (PS), a reinforcement learning model aimed at interpretable artificial intelligence. Decision making in PS is modeled as a random walk on a graph describing the agent\u2019s memory. To implement the quantized model, we consider quantum walks of single photons in a lattice of tunable Mach-Zehnder interferometers trained via variational algorithms. Using an example from transfer learning, we show that the quantized PS model can exploit quantum interference to acquire capabilities beyond those of its classical counterpart. Finally, we discuss the role of quantum interference for training and tracing the decision making process, paving the way for realizations of interpretable quantum learning agents.'\nauthor:\n- Fulvio Flamini\n- Marius Krumm\n- 'Lukas J. Fiderer'\n- Thomas M\u00fcller\n- 'Hans J. Briegel'\nbibliography:\n- 'qPSrefList.bib'\ntitle: 'Towards interpretable quantum machine learning via single-photon quantum walks'\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nClassical machine learning methods are revolutionizing science and technology, with applications ranging from drug discovery"
+"---\nabstract: 'We consider the problem of entanglement detection in the presence of faulty, potentially malicious detectors. A common\u2014and, as of yet, the only\u2014approach to this problem is to perform a Bell test in order to identify nonlocality of the measured entangled state. However, there are two significant drawbacks in this approach: the requirement to exceed a critical, and often high, detection efficiency, and much lower noise tolerance. In this paper, we propose an alternative approach to this problem, which is resilient to the detection loophole and is based on the standard tool of entanglement witness. We discuss how the two main techniques to detection losses, namely the discard and assignment strategies, apply to entanglement witnessing. We demonstrate using the example of a two-qubit Bell state that the critical detection efficiency can be significantly reduced compared to the Bell test approach.'\naddress:\n- '$^1$ International Centre for Theory of Quantum Technologies (ICTQT), University of Gdansk, 80-309 Gda\u0144sk, Poland'\n- '$^2$ Heinrich Heine University D\u00fcsseldorf, Universit\u00e4tsstra\u00dfe 1, 40225 D\u00fcsseldorf, Germany'\n- '$^3$ Institute for Theoretical Physics, University of Cologne, Germany'\n- '$^4$ Department of Computer Science, Technical University of Darmstadt, Darmstadt, 64289 Germany'\nauthor:\n- 'Giuseppe Viola$^1$, Nikolai Miklin$^{1,2}$, Mariami Gachechiladze$^{3,4}$,"
+"---\nabstract: 'The nature of dark matter (DM) remains one of the unsolved mysteries of modern physics. An intriguing possibility is to assume that DM consists of ultralight bosonic particles in the Bose-Einstein condensate (BEC) state. We study stationary DM structures by using the system of the Gross-Pitaevskii and Poisson equations, including the effective temperature effect with parameters chosen to describe the Milky Way galaxy. We have investigated DM structure with BEC core and isothermal envelope. We compare the spherically symmetric and vortex core states, which allows us to analyze the impact of the core vorticity on the halo density, velocity distribution, and, therefore, its gravitational field. Gravitational field calculation is done in the gravitoelectromagnetism approach to include the impact of the core rotation, which induces a gravimagnetic field. As result, the halo with a vortex core is characterized by smaller orbital velocity in the galactic disk region in comparison with the non-rotating halo. It is found that the core vorticity produces gravimagnetic perturbation of celestial body dynamics, which can modify the circular trajectories.'\nauthor:\n- 'K. Korshynska'\n- 'Y.M. Bidasyuk'\n- 'E.V. Gorbar'\n- Junji Jia\n- 'A.I. Yakimenko'\nbibliography:\n- 'Bibl.bib'\ntitle: Dynamical galactic effects induced by stable"
+"---\nabstract: 'Silicon is indispensable in semiconductor industry. Understanding its high-temperature thermodynamic properties is essential both for theory and applications. However, first-principle description of high-temperature thermodynamic properties of silicon (thermal expansion coefficient and specific heat) is still incomplete. Strong deviation of its specific heat at high temperatures from the Dulong-Petit law suggests substantial contribution of anharmonicity effects. We demonstrate, that anharmonicity is mostly due to two transverse phonon modes, propagating in (111) and (100) directions, and can be quantitatively described with formation of the certain type of nanostructured planar defects of the crystal structure. Calculation of these defects\u2019 formation energy enabled us to determine their input into the specific heat and thermal expansion coefficient. This contribution turns out to be significantly greater than the one calculated in quasi-harmonic approximation.'\nauthor:\n- 'M. V. Kondrin'\n- 'Y.B. Lebed'\n- 'V.V. Brazhkin'\ntitle: 'Planar defects as a way to account for explicit anharmonicity in high temperature thermodynamic properties of silicon.'\n---\n\nIntroduction\n============\n\nSilicon is a widely used semiconductor with immense impact in industry. Understanding its high-temperature thermodynamic properties is necessary both for theory and applications. Also, due to its archetypal diamond structure, silicon is among the touchstones for testing new theories."
+"---\nabstract: 'Data sharing partnerships are increasingly an imperative for research institutions and, at the same time, a challenge for established models of data governance and ethical research oversight. We analyse four cases of data partnership involving academic institutions and examine the role afforded to the research partner in negotiating the relationship between risk, value, trust and ethics. Within this terrain, far from being a restraint on financialisation, the instrumentation of ethics forms part of the wider mobilisation of infrastructure for the realisation of profit in the big data economy. Under what we term \u2018combinatorial data governance\u2019 academic structures for the management of research ethics are instrumentalised as organisational functions that serve to mitigate reputational damage and societal distrust. In the alternative model of \u2018experimental data governance\u2019 researchers propose frameworks and instruments for the rethinking of data ethics and the risks associated with it \u2014 a model that is promising but limited in its practical application.'\nauthor:\n- Tsvetelina Hristova\n- Liam Magee\n- Emma Kearney\ndate: January 2023\ntitle: 'Academic Institutions in Multilateral Data Governance: Emerging Arrangements for Negotiating Risk, Value and Ethics in the Big Data Economy'\n---\n\n[ ***Keywords\u2014*** Big Data, Data Governance, Data Partnerships, Data Ecologies,"
+"---\nabstract: 'Human agency and autonomy have always been fundamental concepts in HCI. New developments, including ubiquitous AI and the growing integration of technologies into our lives, make these issues ever pressing, as technologies increase their ability to influence our behaviours and values. However, in HCI understandings of autonomy and agency remain ambiguous. Both concepts are used to describe a wide range of phenomena pertaining to sense-of-control, material independence, and identity. It is unclear to what degree these understandings are compatible, and how they support the development of research programs and practical interventions. We address this by reviewing 30 years of HCI research on autonomy and agency to identify current understandings, open issues, and future directions. From this analysis, we identify ethical issues, and outline key themes to guide future work. We also articulate avenues for advancing clarity and specificity around these concepts, and for coordinating integrative work across different HCI communities.'\nauthor:\n- Daniel Bennett\n- Oussama Metatla\n- Anne Roudaut\n- 'Elisa D. Mekler'\nbibliography:\n- 'references.bib'\n- 'additional\\_refs.bib'\n- 'corpus.bib'\ntitle: 'How does HCI Understand Human Agency and Autonomy?'\n---\n\n<ccs2012> <concept> <concept\\_id>10003120.10003121.10003126</concept\\_id> <concept\\_desc>Human-centered computing\u00a0HCI theory, concepts and models</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nAutonomy and"
+"---\nabstract: 'Distributed stochastic optimization has drawn great attention recently due to its effectiveness in solving large-scale machine learning problems. Though numerous algorithms have been proposed and successfully applied to general practical problems, their theoretical guarantees mainly rely on certain boundedness conditions on the stochastic gradients, varying from uniform boundedness to the relaxed growth condition. In addition, how to characterize the data heterogeneity among the agents and its impacts on the algorithmic performance remains challenging. In light of such motivations, we revisit the classical Federated Averaging (FedAvg) algorithm [@mcmahan2017communication] as well as the more recent SCAFFOLD method [@karimireddy2020scaffold] for solving the distributed stochastic optimization problem and establish the convergence results under only a mild variance condition on the stochastic gradients for smooth nonconvex objective functions. Almost sure convergence to a stationary point is also established under the condition. Moreover, we discuss a more informative measurement for data heterogeneity as well as its implications.'\nauthor:\n- 'Kun Huang, Xiao Li, , and Shi Pu, [^1] [^2]'\nbibliography:\n- 'references\\_all.bib'\ntitle: Distributed Stochastic Optimization under a General Variance Condition\n---\n\nDistributed Optimization, Stochastic Optimization, Nonconvex Optimization\n\nIntroduction {#sec:introduction}\n============\n\nconsider solving the following optimization problem by a group of agents $[n]:={\\{1,2,\\cdots, n\\}}$:"
+"---\nabstract: 'Quantum systems driven by a time-periodic field are a platform of condensed matter physics where effective (quasi)stationary states, termed \u201cFloquet states\u201d, can emerge with external-field-dressed quasiparticles during driving. They appear, for example, as a prethermal intermediate state in isolated driven quantum systems or as a nonequilibrium steady state in driven open quantum systems coupled to environment. Floquet states may have various intriguing physical properties, some of which can be drastically different from those of the original undriven systems in equilibrium. In this article, we review fundamental aspects of Floquet states, and discuss recent topics and applications of Floquet states in condensed matter physics.'\nauthor:\n- Naoto Tsuji\nbibliography:\n- 'ref.bib'\ntitle: Floquet States \n---\n\nKey objectives\n==============\n\n- Introduce Floquet states in periodically driven quantum systems.\n\n- Describe the basic aspects of the Floquet theory for isolated and open quantum systems.\n\n- Discuss several examples of Floquet states, including ac Wannier-Stark effects and Floquet topological phases.\n\n- Review experimental observations of Floquet states in solids and cold-atom systems.\n\nIntroduction\n============\n\nDriving quantum systems far from equilibrium provides various possibilities to control quantum states and realize new phases of matter that are otherwise inaccessible within thermal equilibrium. The interest"
+"---\nauthor:\n- 'James A. C. Young'\nbibliography:\n- 'references.bib'\ndate: Summer 2021\nnocite: '[@*]'\ntitle: On the Cauchy Integral and Jump Decomposition\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nPreface {#preface .unnumbered}\n-------\n\nThe purpose of these notes is to introduce the reader to the Cauchy type integral and the Sokhotski formulae (also know as the Jump decomposition, or simply the Jump problem). The article is split into two main chapters. The first is based on the contents of the first few sections of the book \u201cBoundary Value Problems\" by F.D. Gakhov [@gakhov2014boundary]. The style of presentation is geared towards readers who have a background at the level of a first course in complex variables, and are comfortable with basic arguments in analysis. One of the goals of the text is to be expository, and for that reason nearly all proofs will contain thoroughly worked out arguments and explanations. After establishing the basic ideas in the first chapter, we will skip past much rigorous training to (informally) discuss some open questions and advancements in the field lying on the fringe of our understanding in the second chapter. We will see how many disciplines work together to tell a richer story, some"
+"---\nabstract: 'Hypergraph clustering is a basic algorithmic primitive for analyzing complex datasets and systems characterized by multiway interactions, such as group email conversations, groups of co-purchased retail products, and co-authorship data. This paper presents a practical $O(\\log n)$-approximation algorithm for a broad class of hypergraph *ratio cut* clustering objectives. This includes objectives involving generalized hypergraph cut functions, which allow a user to penalize cut hyperedges differently depending on the number of nodes in each cluster. Our method is a generalization of the cut-matching framework for graph ratio cuts, and relies only on solving maximum s-t flow problems in a special reduced graph. It is significantly faster than existing hypergraph ratio cut algorithms, while also solving a more general problem. In numerical experiments on various types of hypergraphs, we show that it quickly finds ratio cut solutions within a small factor of optimality.'\nauthor:\n- Nate Veldt\nbibliography:\n- 'arxiv-full-updated.bib'\ntitle: 'Cut-matching Games for Generalized Hypergraph Ratio Cuts'\n---\n\nIntroduction\n============\n\nGraphs are a popular way to model social networks and other datasets characterized by pairwise interactions, but there has been a growing realization that many complex systems of interactions are characterized by *multiway* relationships that cannot be directly encoded"
+"---\nauthor:\n- 'Sara Venturini[^1] , Andrea Cristofari[^2], Francesco Rinaldi[^3], Francesco Tudisco[^4]'\nbibliography:\n- 'main.bib'\ntitle: |\n Laplacian-based Semi-Supervised Learning in\\\n Multilayer Hypergraphs by Coordinate Descent\n---\n\n**Abstract.** Graph Semi-Supervised learning is an important data analysis tool, where given a graph and a set of labeled nodes, the aim is to infer the labels to the remaining unlabeled nodes. In this paper, we start by considering an optimization-based formulation of the problem for an undirected graph, and then we extend this formulation to multilayer hypergraphs. We solve the problem using different coordinate descent approaches and compare the results with the ones obtained by the classic gradient descent method. Experiments on synthetic and real-world datasets show the potential of using coordinate descent methods with suitable selection rules.\\\n**Key words.** semi-supervised learning, coordinate methods, multilayer hypergraphs.\\\n\nIntroduction\n============\n\nConsider a finite, weighted and undirected graph $G = (V,E,w)$, with node set $V$, edge set $E \\subseteq V\\times V$ and edge-weight function $w$ such that $w(e)=w(uv)>0$ if $e = (u,v)\\in E$ and $0$ otherwise. Suppose each node $u\\in V$ can be assigned to one of $m$ classes, or labels, $C_1,\\dots,C_m$. In graph-based Semi-Supervised Learning (SSL), given a graph $G$ and an observation set"
+"---\nabstract: 'The parabolic Airy line ensemble $\\mathfrak A$ is a central limit object in the KPZ universality class and related areas. On any compact set $K = \\{1, \\dots, k\\} \\times [a, a + t]$, the law of the recentered ensemble $\\mathfrak A - \\mathfrak A(a)$ has a density $X_K$ with respect to the law of $k$ independent Brownian motions. We show that $$X_K(f) = \\exp \\left(-\\textsf{S}(f) + o(\\textsf{S}(f))\\right)$$ where $\\textsf{S}$ is an explicit, tractable, non-negative function of $f$. We use this formula to show that $X_K$ is bounded above by a $K$-dependent constant, give a sharp estimate on the size of the set where $X_K < \\epsilon$ as $\\epsilon \\to 0$, and prove a large deviation principle for $\\mathfrak A$. We also give density estimates that take into account the relative positions of the Airy lines, and prove sharp two-point tail bounds that are stronger than those for Brownian motion. These estimates are a key input in the classification of geodesic networks in the directed landscape [@dauvergne2023geodesic]. The paper is essentially self-contained, requiring only tail bounds on the Airy point process and the Brownian Gibbs property as inputs.'\nauthor:\n- Duncan Dauvergne\nbibliography:\n- 'bibliography.bib'\ntitle: Wiener densities"
+"---\nabstract: 'A bootstrap approach to the effective action in quantum field theory is discussed which entails the invariance under quantum fluctuations of the effective action describing physical reality (via the S-matrix).'\n---\n\n[0.9cm On selfconsistency in quantum field theory\\\n]{}\n\n[K. Scharnhorst ]{}\\\n[ Vrije Universiteit Amsterdam, Faculty of Sciences, Department of Physics and Astronomy, De Boelelaan 1081, 1081 HV Amsterdam, The Netherlands]{}\\\n\n\\[introduction\\]Introduction\n============================\n\nThese are times of contemplation and reorientation in quantum field theory. With the experimental detection of the Higgs boson in 2012 finally the finishing stone of the Standard Model of elementary particle physics [@1992donoghue] surfaced. On the theoretical side, the Standard Model is based on the concept of renormalized local quantum field theory. The confidence in this concept originally and primarily relies on the extraordinary success of the centerpiece of the Standard Model, quantum electrodynamics (QED), which exhibits an impressive agreement between theory and experiment (Cf., e.g., [@2006mohr], for more comprehensive reviews see [@1990kinoshita].). The successful application of renormalized local quantum field theory to the other components of the Standard Model, the electroweak theory and to quantum chromodynamics (QCD), have further advanced this confidence. On the other hand, many practicing quantum field theorists are"
+"---\nabstract: 'Many practical applications of robotics require systems that can operate safely despite uncertainty. In the context of motion planning, two types of uncertainty are particularly important when planning safe robot trajectories. The first is environmental uncertainty \u2014 uncertainty in the locations of nearby obstacles, stemming from sensor noise or (in the case of obstacles\u2019 future locations) prediction error. The second class of uncertainty is uncertainty in the robots own state, typically caused by tracking or estimation error. To achieve high levels of safety, it is necessary for robots to consider both of these sources of uncertainty. In this paper, we propose a risk-bounded trajectory optimization algorithm, known as Sequential Convex Optimization with Risk Optimization (SCORA), to solve chance-constrained motion planning problems despite both environmental uncertainty and tracking error. Through experiments in simulation, we demonstrate that SCORA significantly outperforms state-of-the-art risk-aware motion planners both in planning time and in the safety of the resulting trajectories.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: 'Chance-Constrained Trajectory Optimization for High-DOF Robots in Uncertain Environments'\n---\n\nIntroduction\n============\n\nFor most robots, developed in the cradle of a well-controlled, carefully structured simulation or laboratory environment, primary challenge posed by deployment in the outside world is"
+"---\nabstract: 'In this paper we give a physically transparent picture of singular-continuous spectrum in disordered systems which possess a non-ergodic extended phase. We present a simple model of identically and independently distributed level spacing in the spectrum of local density of states and show how a fat tail appears in this distribution at the broad distribution of eigenfunction amplitudes. For the model with a power-law local spacing distribution we derive the correlation function $K(\\omega)$ of the local density of states and show that depending on the relation between the eigenfunction fractal dimension $D_{2}$ and the spectral fractal dimension $D_{s}$ encoded in the power-law spacing distribution, a singular continuous spectrum of a random Cantor set or that of an isolated mini-band may appear. In the limit of an infinite number of degrees of freedom the function $K(\\omega)$ in the non-ergodic extended phase is singular at $\\omega=0$ with the branch-cut singularity for the case of a random Cantor set and with the $\\delta$-function singularity for the case of an isolated mini-band. For an absolutely continuous spectrum $K(\\omega)$ tends to a finite limit as $\\omega\\rightarrow 0$. For an arbitrary local spacing distribution function we formulated a criterion of fractality of local spectrum"
+"---\nabstract: 'Model Predictive Control (MPC) is attracting tremendous attention in the autonomous driving task as a powerful control technique. The success of an MPC controller strongly depends on an accurate internal dynamics model. However, the static parameters, usually learned by system identification, often fail to adapt to both internal and external perturbations in real-world scenarios. In this paper, we firstly (1) reformulate the problem as a Partially Observed Markov Decision Process (POMDP) that absorbs the uncertainties into observations and maintains Markov property into hidden states; and (2) learn a recurrent policy continually adapting the parameters of the dynamics model via Recurrent Reinforcement Learning (RRL) for optimal and adaptive control; and (3) finally evaluate the proposed algorithm (referred as $\\textit{MPC-RRL}$) in CARLA simulator and leading to robust behaviours under a wide range of perturbations.'\nauthor:\n- 'Yuan Zhang$^{1}$, Chuxuan Li$^{2}$, Joschka Boedecker$^{1}$ and Guyue Zhou$^{2}$ [^1] [^2] [^3]'\nbibliography:\n- 'ref.bib'\ntitle: ' **Incorporating Recurrent Reinforcement Learning into Model Predictive Control for Faster Adaption in Autonomous Driving** '\n---\n\nReinforcement Learning, Model Learning for Control, Robust/Adaptive Control\n\nINTRODUCTION {#sec:introduction}\n============\n\nModel Predictive Control (MPC) has become the primary control method for enormous fields, e.g. autonomous driving [@reiteremcore] and robotics [@song2020learning]."
+"---\nabstract: 'We investigate the tunneling phenomenon of particles through the horizon of a magnetized Ernst-like black hole. We employ the modified Lagrangian equation with the extended uncertainty principle for this black hole. We determine a tunneling rate and the related Hawking temperature for this black hole by using the WKB approach in the field equation. In addition, we examine the graph behavior of the Hawking temperature in relation to the black hole event horizon. We explore the stability analysis of this black hole by taking into account the impact of quantum gravity on Hawking temperatures. The temperature for a magnetized Ernst-like black hole rises as the correction parameter is decreased. Moreover, we analyze the thermodynamics quantities such as Hawking temperature, heat capacity and Bekenstein entropy by using the different approach. We obtain the corrected entropy to study the impact of logarithmic corrections on the different thermodynamic quantities. It is shown that these correction terms makes the system stable under thermal fluctuations.'\nauthor:\n- Riasat Ali\n- Zunaira Akhtar\n- Kazuharu Bamba\n- 'M. Umar Khan'\ntitle: 'Tunneling and thermodynamics evolution of the magnetized Ernst-like black hole'\n---\n\nIntroduction\n============\n\nThe black holes (BHs) is a real physical object, due"
+"---\nauthor:\n- 'Ying Yu$^{1}$, Xun-Wang Yan$^{2}$, Fengjie Ma$^{3}$, Miao Gao$^{1,4}$[^1], and Zhong-Yi Lu$^{5}$'\ntitle: 'Cubic C$_{20}$: An intrinsic superconducting carbon allotrope'\n---\n\nThe studies of carbon-based superconductor can be traced back to the 1960s. Superconductivity was first discovered in graphite intercalation compounds (GICs) below 1 K, such as KC$_8$ [@Hannay-PRL14]. Under ambient pressure, the highest $T_c$ for GICs is 11.5 K in CaC$_6$ [@Emery-PRL95]. Graphene, the two-dimensional form of graphite, has gapless Dirac bands. Superconductivity in doped graphene has been extensively investigated [@Uchoa-PRL98; @Profeta-NP8; @Chapman-SR6]. Recently, correlated insulating states were observed in twisted bilayer graphene, and superconductivity emerges at 1.7 K after electrostatic doping [@Cao-Nature556-1; @Cao-Nature556-2]. For twisted trilayer graphene, rich phase diagram and better tunability of electric field were realized, compared with the bilayer case [@Zhu-PRL125; @Park-N590], providing a fascinating playground to explore the interplay between correlated states and superconductivity. Insulator to superconductor transition was reported in boron-doped diamond [@Ekimov-Nature428], in which the $T_c$ exhibits a positive dependence on the proportion of boron that incorporated into diamond [@Takano-DRM16]. It was found that the onset $T_c$ of 27% boron-doped Q-carbon is 55 K [@Bhaumik-AN11]. Fullerene shows superconductivity at 18 K, after the potassium intercalation with stoichiometry of K$_3$C$_{60}$ [@Hebard-Nature350]."
+"---\nabstract: 'Systems with 1-bit quantization and oversampling are promising for the Internet of Things (IoT) devices in order to reduce the power consumption of the analog-to-digital-converters. The novel time-instance zero-crossing (TI ZX) modulation is a promising approach for this kind of channels but existing studies rely on optimization problems with high computational complexity and delay. In this work, we propose a practical waveform design based on the established TI ZX modulation for a multiuser multi-input multi-output (MIMO) downlink scenario with 1-bit quantization and temporal oversampling at the receivers. In this sense, the proposed temporal transmit signals are constructed by concatenating segments of coefficients which convey the information into the time-instances of zero-crossings according to the TI ZX mapping rules. The proposed waveform design is compared with other methods from the literature. The methods are compared in terms of bit error rate and normalized power spectral density. Numerical results show that the proposed technique is suitable for multiuser MIMO system with 1-bit quantization while tolerating some small amount of out-of-band radiation.'\naddress: |\n Centre for Telecommunications Studies,\\\n Pontifical Catholic University of Rio de Janeiro,\\\n Rio de Janeiro, Brazil 22453-900\\\n Email: diana;lukas.landau;delamare@cetuc.puc-rio.br\nbibliography:\n- 'ref.bib'\ntitle: \n---\n\nZero-crossing precoding, oversampling, Moore"
+"---\nabstract: 'We construct a $q$-analogue of a truncated version of symmetric multiple zeta values which satisfies the double shuffle relation. Using it, we define a $q$-analogue of symmetric multiple zeta values and see that it satisfies many of the same relations as symmetric multiple zeta values, namely the inverse relation, a part of the double shuffle relation, and the Ohno-type relation.'\naddress: 'Department of Mathematics, Institute of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan'\nauthor:\n- Yoshihiro Takeyama\ntitle: 'A $q$-analogue of symmetric multiple zeta value'\n---\n\n[^1]\n\nIntroduction\n============\n\nThe *multiple zeta value* (MZV) is the real value defined by $$\\begin{aligned}\n \\zeta({\\mathbf{k}})=\\sum_{0<m_{1}<\\cdots<m_{r}}\\frac{1}{m_{1}^{k_{1}}\\cdots m_{r}^{k_{r}}}\\end{aligned}$$ Does there exist an element $\\psi$ in the group of automorphisms of the one-sided shift Aut($\\{0,1,\\ldots,n-1\\}^{\\mathbb{N}}, {\\sigma_{n}}$) so that all points of $\\{0,1,\\ldots,n-1\\}^{{\\mathbb{N}}}$ have orbits of length $n$ under $\\psi$ and $\\psi$ is not conjugate to a permutation?\n\n Here, by a *permutation* we mean an automorphism of the one-sided shift dynamical system induced by a permutation of the symbol set $\\{0,1,\\ldots,n-1\\}$.\n\n We resolve this question by showing that any $\\psi$ with the properties above must be conjugate to a permutation.\n\n Our techniques naturally extend those of BFK using the strongly synchronizing automata technology developed here and in several articles of the authors and collaborators (although this article has been written to be largely self-contained).\nauthor:\n- Collin Bleak and Feyishayo Olukoya\nbibliography:\n- 'references.bib'\ntitle: 'Conjugacy for certain automorphisms of the one-sided shift via transducers'\n---\n\nIntroduction\n============\n\nLet $n$ be a positive integer and set ${X_{n}}{:=}\\{0,1,\\ldots,n-1\\}$. 
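The nested sums defining MZVs in the record above can be checked numerically by truncation. The sketch below is our own illustration, not code from the paper; it uses the increasing-index convention $\zeta(\mathbf{k})=\sum_{0<m_1<\cdots<m_r} m_1^{-k_1}\cdots m_r^{-k_r}$, and the helper name `mzv` and the cutoff `N` are ours:

```python
def mzv(k, N):
    """Truncated multiple zeta value in the increasing-index convention:
    sum over 0 < m_1 < ... < m_r <= N of 1/(m_1^k_1 * ... * m_r^k_r)."""
    r = len(k)

    def rec(depth, start):
        # Sum over the remaining indices m_{depth+1} < ... < m_r,
        # each strictly larger than `start`.
        if depth == r:
            return 1.0
        total = 0.0
        for m in range(start + 1, N + 1):
            total += rec(depth + 1, m) / m ** k[depth]
        return total

    return rec(0, 0)
```

In this convention Euler's relation reads $\zeta(1,2)=\zeta(3)$, so the truncated sums `mzv((1, 2), N)` approach $\zeta(3)\approx 1.2020569$ as `N` grows.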
We will use ${X_{n}}$ to represent our standard alphabet of size $n$ and we will denote by ${\\sigma_{n}}$ the usual shift map on ${X_n^{\\mathbb{N}}}$. The group"
+"---\nabstract: 'Adversarial nets have proved to be powerful in various domains including generative modeling (GANs), transfer learning, and fairness. However, successfully training adversarial nets using first-order methods remains a major challenge. Typically, careful choices of the learning rates are needed to maintain the delicate balance between the competing networks. In this paper, we design a novel learning rate scheduler that dynamically adapts the learning rate of the adversary to maintain the right balance. The scheduler is driven by the fact that *the loss of an ideal adversarial net is a constant known a priori*. The scheduler is thus designed to keep the loss of the optimized adversarial net close to that of an ideal network. We run large-scale experiments to study the effectiveness of the scheduler on two popular applications: GANs for image generation and adversarial nets for domain adaptation. Our experiments indicate that adversarial nets trained with the scheduler are less likely to diverge and require significantly less tuning. For example, on CelebA, a GAN with the scheduler requires only one-tenth of the tuning budget needed without a scheduler. Moreover, the scheduler leads to statistically significant improvements in model quality, reaching up to $27\\%$ in Frechet Inception Distance"
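The scheduling idea in the abstract above, keeping the adversary's loss near its a-priori-known ideal value, can be sketched as a simple multiplicative controller. This is a hedged reconstruction of the concept, not the paper's exact rule; the class name, gain, and clamping bounds are our assumptions, while $2\ln 2$ is the well-known equilibrium value of the standard BCE discriminator loss (where $D(\text{real})=D(\text{fake})=1/2$):

```python
import math


class AdversaryLRScheduler:
    """Sketch (assumed form, not the paper's exact rule): nudge the
    adversary's learning rate so its loss tracks the known ideal
    constant, e.g. 2*ln(2) for a BCE discriminator at equilibrium."""

    def __init__(self, base_lr, ideal_loss=2 * math.log(2.0),
                 gain=0.1, lr_min=1e-6, lr_max=1.0):
        self.lr = base_lr
        self.ideal = ideal_loss
        self.gain = gain
        self.lr_min, self.lr_max = lr_min, lr_max

    def step(self, adv_loss):
        # Loss below the ideal constant: the adversary is winning, so
        # slow it down. Loss above: let it learn faster.
        err = adv_loss - self.ideal
        self.lr *= math.exp(self.gain * err)
        self.lr = min(max(self.lr, self.lr_min), self.lr_max)
        return self.lr
```

A typical usage would call `scheduler.step(d_loss)` once per training iteration and feed the returned rate to the adversary's optimizer.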
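The scheduling idea in the abstract above, keeping the adversary's loss near its a-priori-known ideal value, can be sketched as a simple multiplicative controller. This is a hedged reconstruction of the concept, not the paper's exact rule; the class name, gain, and clamping bounds are our assumptions, while $2\ln 2$ is the well-known equilibrium value of the standard BCE discriminator loss (where $D(\text{real})=D(\text{fake})=1/2$):

```python
import math


class AdversaryLRScheduler:
    """Sketch (assumed form, not the paper's exact rule): nudge the
    adversary's learning rate so its loss tracks the known ideal
    constant, e.g. 2*ln(2) for a BCE discriminator at equilibrium."""

    def __init__(self, base_lr, ideal_loss=2 * math.log(2.0),
                 gain=0.1, lr_min=1e-6, lr_max=1.0):
        self.lr = base_lr
        self.ideal = ideal_loss
        self.gain = gain
        self.lr_min, self.lr_max = lr_min, lr_max

    def step(self, adv_loss):
        # Loss below the ideal constant: the adversary is winning, so
        # slow it down. Loss above: let it learn faster.
        err = adv_loss - self.ideal
        self.lr *= math.exp(self.gain * err)
        self.lr = min(max(self.lr, self.lr_min), self.lr_max)
        return self.lr
```

A typical usage would call `scheduler.step(d_loss)` once per iteration and feed the returned rate to the adversary's optimizer, which matches the abstract's goal of keeping the trained adversary's loss close to the ideal network's.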
+"---\nabstract: 'Vision Transformer (ViT) has recently gained significant attention in solving computer vision (CV) problems due to its capability of extracting informative features and modeling long-range dependencies through the self-attention mechanism. Whereas recent works have explored the trustworthiness of ViT, including its robustness and explainability, the issue of fairness has not yet been adequately addressed. We establish that the existing fairness-aware algorithms designed for CNNs do not perform well on ViT, which highlights the need for developing our novel framework via Debiased Self-Attention (DSA). DSA is a fairness-through-blindness approach that forces ViT to eliminate spurious features correlated with the sensitive label for bias mitigation and simultaneously retain real features for target prediction. Notably, DSA leverages adversarial examples to locate and mask the spurious features in the input image patches with an additional attention weights alignment regularizer in the training objective to encourage learning real features for target prediction. Importantly, our DSA framework leads to improved fairness guarantees over prior works on multiple prediction tasks without compromising target prediction performance.'\nauthor:\n- |\n Yao Qiang Chengyin Li Prashant Khanduri Dongxiao Zhu\\\n Department of Computer Science, Wayne State University\\\n `{yao,cyli,khanduri.prashant,dzhu}@wayne.edu`\nbibliography:\n- 'egbib.bib'\ntitle: 'Fairness-aware Vision Transformer via Debiased Self-Attention'\n---"
+"---\nabstract: 'In this study, we address a control-constrained optimal control problem pertaining to the transformation of quantum states. Our objective is to navigate a quantum system from an initial state to a desired target state while adhering to the principles of the Liouville-von Neumann equation. To achieve this, we introduce a cost functional that balances the dual goals of fidelity maximization and energy consumption minimization. We derive optimality conditions in the form of the Pontryagin Maximum Principle (PMP) for the matrix-valued dynamics associated with this problem. Subsequently, we present a time-discretized computational scheme designed to solve the optimal control problem. This computational scheme is rooted in an indirect method grounded in the PMP, showcasing its versatility and efficacy. To illustrate the practicality and applicability of our methodology, we employ it to address the case of a spin $\\frac{1}{2}$ particle subjected to interaction with a magnetic field. Our findings shed light on the potential of this approach to tackle complex quantum control scenarios and contribute to the broader field of quantum state transformations.'\nauthor:\n- 'Nahid Binandeh Dehaghani and A. Pedro Aguiar [^1]'\ntitle: 'Quantum State Transfer Optimization: Balancing Fidelity and Energy Consumption using Pontryagin Maximum Principle'\n---\n\n[Shell :"
+"---\nabstract: 'Many experimental time series measurements share unobserved causal drivers. Examples include genes targeted by transcription factors, ocean flows influenced by large-scale atmospheric currents, and motor circuits steered by descending neurons. Reliably inferring this unseen driving force is necessary to understand the intermittent nature of top-down control schemes in diverse biological and engineered systems. Here, we introduce a new unsupervised learning algorithm that uses recurrences in time series measurements to gradually reconstruct an unobserved driving signal. Drawing on the mathematical theory of skew-product dynamical systems, we identify recurrence events shared across response time series, which implicitly define a recurrence graph with glass-like structure. As the amount or quality of observed data improves, this recurrence graph undergoes a percolation transition manifesting as weak ergodicity breaking for random walks on the induced landscape\u2014revealing the shared driver\u2019s dynamics, even in the presence of strongly corrupted or noisy measurements. Across several thousand random dynamical systems, we empirically quantify the dependence of reconstruction accuracy on the rate of information transfer from a chaotic driver to the response systems, and we find that effective reconstruction proceeds through gradual approximation of the driver\u2019s dominant orbit topology. Through extensive benchmarks against classical and neural-network-based signal processing techniques,"
+"---\nauthor:\n- Christian Bespin\n- Ivan Caicedo\n- Jochen Dingfelder\n- Tomasz Hemperek\n- Toko Hirono\n- Fabian H\u00fcgging\n- Hans Kr\u00fcger\n- Konstantinos Moustakas\n- Heinz Pernegger\n- Petra Riedler\n- Lars Schall\n- Walter Snoeys\n- Norbert Wermes\nbibliography:\n- 'bibliography.bib'\ntitle: 'Charge collection and efficiency measurements of the TJ-Monopix2 DMAPS in 180nm CMOS technology'\n---\n\nIntroduction\n============\n\nIn recent years, advances in CMOS technologies have fueled the development of a new generation of monolithic active pixel sensors (MAPS) with fast readout and high radiation tolerance by depleting the charge-sensitive volume\u00a0[@peric_hvcmos]. These depleted MAPS (DMAPS) devices are therefore an interesting candidate for high-energy particle physics experiments with harsh radiation environments and high particle rates. Depletion is achieved by using high-voltage add-ons in the CMOS technology and/or high-resistivity substrates. The increasing availability of these features in commercial CMOS processes makes it possible to combine the advantages of this detector concept with potentially faster and cheaper production for the mentioned purposes than for common hybrid pixel detectors. The design ideas behind, and measurement results from, one of several DMAPS prototypes, TJ-Monopix2\u00a0[@moustakas_cmos_design; @moustakas_thesis], are presented in the following.\n\nDesign of TJ-Monopix2\n=====================\n\nTJ-Monopix2 is the latest DMAPS"
+"---\nabstract: 'By performing global fits to inverse beta decay (IBD) yield measurements from existing neutrino experiments based at highly $^{235}$U enriched reactor cores and conventional low-enriched cores, we explore current direct bounds on neutrino production by the sub-dominant fission isotope $^{241}$Pu. For this nuclide, we determine an IBD yield of $\\sigma_{241}$ = 8.16 $\\pm$ 3.47 cm$^2$/fission, a value (135 $\\pm$ 58)% that of current beta conversion models. This constraint is shown to derive from the non-linear relationship between burn-in of $^{241}$Pu and $^{239}$Pu in conventional reactor fuel. By considering new hypothetical neutrino measurements at high-enriched, low-enriched, mixed-oxide, and fast reactor facilities, we investigate the feasible limits of future knowledge of IBD yields for [$^{235}$U]{}, [$^{238}$U]{}, [$^{239}$Pu]{}, [$^{241}$Pu]{}, and [$^{240}$Pu]{}. We find that a first direct measurement of the [$^{240}$Pu]{}\u00a0IBD yield can be performed at plutonium-burning fast reactors, while a suite of correlated measurements at multiple reactor types can achieve a precision in direct [$^{238}$U]{}, [$^{239}$Pu]{}, and [$^{241}$Pu]{}\u00a0yield knowledge that meets or exceeds that of current theoretical predictions.'\nauthor:\n- Yoshi Fujikake\n- Bryce Littlejohn\n- 'Ohana B. Rodrigues'\n- Pranava Teja Surukuchi\nbibliography:\n- 'main.bib'\ntitle: 'Exploring Current Constraints on Antineutrino Production by [$^{241}$Pu]{}\u00a0and Paths Towards the"
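The "global fits" mentioned above rest on a linear model: each experiment's measured IBD yield is a fission-fraction-weighted sum of per-isotope yields, $\sigma_f=\sum_i F_i\,\sigma_i$. A minimal sketch with made-up numbers (all fractions and yields below are illustrative, not the paper's data) shows the structure of such a fit and why nearly collinear $^{239}$Pu/$^{241}$Pu burn-in makes it ill-conditioned:

```python
import numpy as np

# Illustration (our own made-up numbers, not the paper's data) of the
# linear model behind such global fits: each experiment's measured IBD
# yield is a fission-fraction-weighted sum of per-isotope yields,
#   sigma_f = sum_i F_i * sigma_i.
# Rows: experiments; columns: 235U, 238U, 239Pu, 241Pu fractions.
F = np.array([
    [1.00, 0.00, 0.00, 0.00],   # highly-enriched 235U core
    [0.56, 0.08, 0.30, 0.06],   # low-enriched core, early cycle
    [0.52, 0.08, 0.33, 0.07],   # low-enriched core, mid cycle
    [0.45, 0.07, 0.40, 0.08],   # low-enriched core, high burnup
])
sigma_true = np.array([6.7, 10.1, 4.4, 6.4])  # illustrative yields
y = F @ sigma_true                            # idealized measurements

# Least-squares recovery of the per-isotope yields. Because the 239Pu
# and 241Pu fractions burn in almost in lockstep, the design matrix is
# ill-conditioned, so with real (noisy) data the 241Pu yield comes out
# with a large uncertainty, as in the abstract above.
sigma_fit, *_ = np.linalg.lstsq(F, y, rcond=None)
```

With noiseless synthetic data the fit recovers the input yields exactly; adding measurement noise would inflate the $^{241}$Pu uncertainty first, mirroring the constraint quoted above.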
+"---\nabstract: 'Synthesizing high-fidelity complex images from text is challenging. Based on large-scale pretraining, autoregressive and diffusion models can synthesize photo-realistic images. Although these large models have shown notable progress, three flaws remain. 1) These models require tremendous training data and parameters to achieve good performance. 2) The multi-step generation design slows the image synthesis process heavily. 3) The synthesized visual features are difficult to control and require delicately designed prompts. To enable high-quality, efficient, fast, and controllable text-to-image synthesis, we propose Generative Adversarial CLIPs, namely GALIP. GALIP leverages the powerful pretrained CLIP model both in the discriminator and generator. Specifically, we propose a CLIP-based discriminator. The complex scene understanding ability of CLIP enables the discriminator to accurately assess the image quality. Furthermore, we propose a CLIP-empowered generator that induces the visual concepts from CLIP through bridge features and prompts. The CLIP-integrated generator and discriminator boost training efficiency, and as a result, our model only requires about $3\\%$ training data and $6\\%$ learnable parameters, achieving comparable results to large pretrained autoregressive and diffusion models. Moreover, our model achieves $\\sim$120$\\times$ faster synthesis speed and inherits the smooth latent space from GAN. The extensive experimental results demonstrate the excellent performance of"
+"---\nabstract: 'The Quantum Fourier Transformation (QFT) is a well-known subroutine for algorithms on qubit-based universal quantum computers. In this work, the known QFT circuit is used to derive an efficient circuit for the multidimensional QFT. The complexity of the algorithm is $\\mathcal{O}( \\log^2(M)/d )$ for an array with $M=(2^n)^d$ elements $(n \\in \\mathbb{N})$ equally separated along $d$ dimensions. Relevant properties for application are discussed. An example on current hardware is demonstrated by a 6-qubit 2D-QFT on an IBM quantum computer.'\nauthor:\n- Philipp Pfeffer\nbibliography:\n- 'references.bib'\ntitle: Multidimensional Quantum Fourier Transformation\n---\n\nThe Quantum Fourier Transformation [@QFT] (QFT) is a key subroutine in quantum information processing, most prominently used within quantum phase estimation [@Kit_QPE] and the factoring algorithm of Shor [@Shor_F]. Compared to its classical counterpart, the fast Fourier transformation [@FFT] (FFT) solves the same problem with effort $\\mathcal{O}(N\\log(N))$, while the QFT needs $\\mathcal{O}(\\log^2(N))$ operations [@NielsenChua] for a vector with $N=2^n$ elements ($n \\in \\mathbb{N}$). Though this speed-up has not led to a broad replacement of the FFT, the aspect of quantum parallelism is important for the construction of further quantum algorithms. Single operations acting on a large quantum state, such that the whole state is"
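The construction above exploits the separability of the multidimensional transform: the $d$-dimensional QFT is the 1D QFT applied once per dimension, on disjoint qubit registers. The classical analogue of this separability can be verified directly with NumPy (our own illustration; the array shape is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
# A 2D array with M = (2^n)^d = (2^3)^2 = 64 equally spaced samples.
x = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

# The d-dimensional Fourier transform is separable: applying the 1D
# transform along each axis in turn reproduces the full transform.
# This is the classical counterpart of reusing the known 1D QFT
# circuit once per dimension.
y_seq = x.copy()
for axis in range(x.ndim):
    y_seq = np.fft.fft(y_seq, axis=axis)

y_full = np.fft.fft2(x)  # direct 2D transform for comparison
```

The two results agree to floating-point precision, which is the property the multidimensional QFT circuit relies on.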
+"---\nabstract: 'We present results of numerical simulations of the tensor-valued elliptic-parabolic PDE model for biological network formation. The numerical method is based on a non-linear finite difference scheme on a uniform Cartesian grid in a 2D domain. The focus is on the impact of different discretization methods and choices of regularization parameters on the symmetry of the numerical solution. In particular, we show that using the symmetric alternating-direction implicit (ADI) method for time discretization helps preserve the symmetry of the solution, compared to the (non-symmetric) ADI method. Moreover, we study the effect of regularization by isotropic background permeability $r>0$, showing that the increased condition number of the elliptic problem due to a decreasing value of $r$ leads to loss of symmetry. [We show that in this case, not even the use of the symmetric ADI method preserves the symmetry of the solution.]{} Finally, we perform a numerical error analysis of our method [making use of the]{} Wasserstein distance.'\nauthor:\n- Clarissa Astuto\n- Daniele Boffi\n- Jan Haskovec\n- Peter Markowich\n- Giovanni Russo\ntitle: 'Asymmetry and condition number of an elliptic-parabolic system for biological network formation'\n---\n\nIntroduction\n============\n\nPrinciples of formation, adaptation and functioning of biological transportation networks have been a"
+"---\nabstract: 'Using natural language, conversational bots offer unprecedented ways to tackle many challenges in areas such as information searching, item recommendation, and question answering. Existing bots are usually developed through retrieval-based or generative-based approaches, yet both of them have their own advantages and disadvantages. To combine these two approaches, we propose a **h**ybrid r**e**t**r**ieval-generati**o**n **net**work (*HeroNet*) with the following three-fold ideas: 1). To produce high-quality sentence representations, *HeroNet* performs multi-task learning on two subtasks: Similar Queries Discovery and Query-Response Matching. Specifically, the retrieval performance is improved while the model size is reduced by training two lightweight, task-specific adapter modules that share only one underlying T5-Encoder model. 2). By introducing adversarial training, *HeroNet* is able to solve both retrieval&generation tasks simultaneously while each task boosts the performance of the other. 3). The retrieval results are used as prior knowledge to improve the generation performance, while the generative results are scored by the discriminator and their scores are integrated into the generator\u2019s cross-entropy loss function. The experimental results on an open dataset demonstrate the effectiveness of *HeroNet*, and our code is available at '\nauthor:\n- Bolin Zhang\n- Yunzhe Xu\n- 'Zhiying Tu [^1] and Dianhui Chu'\nbibliography:\n- 'ref.bib'\ntitle: 'HeroNet: A Hybrid"
+"---\nabstract: 'The question under which conditions oscillators with slightly different frequencies synchronize appears in various settings. We show that synchronization can be achieved even for *harmonic* oscillators that are bilinearly coupled via a purely dissipative interaction. By appropriately tuned gain/loss, stable dynamics may be achieved where, for the cases studied in this work, all oscillators are synchronized. These findings are interpreted using the complex eigenvalues and eigenvectors of the non-Hermitian matrix describing the dynamics of the system.'\nauthor:\n- 'Juan N.\u00a0Moreno'\n- 'Christopher W.\u00a0W\u00e4chtler'\n- Alexander Eisfeld\ntitle: Synchronized states in dissipatively coupled harmonic oscillator networks\n---\n\nIntroduction\n============\n\nSynchronization is a fascinating phenomenon, which can be interpreted as a display of cooperative behavior appearing in many complex systems [@pikovsky2003synchronization; @StrogatzBook2018]. Since the first observation by Huygens in the late 1600s [@bennett2002huygens], it has been studied in diverse communities, where it plays an important role in our understanding of, for example, electric networks in engineering, circadian rhythms in biology, pattern formation in statistical mechanics, and chemical reactions in chemistry [@Strogatz1993; @Rosenblum2003; @arenas2008synchronization]. By now, it is seen as a universal phenomenon that is important both in fundamental studies and in technical applications, ranging from laser networks [@thornburg1997chaos],"
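The eigenvalue-based analysis described in the abstract above can be illustrated on a toy model (our own choice of equations, not necessarily the paper's exact setup): two detuned harmonic oscillators coupled purely through their relative velocity, a bilinear dissipative interaction, whose first-order dynamics matrix is non-Hermitian:

```python
import numpy as np

# Toy model (our own illustrative choice, not necessarily the paper's
# exact setup): two detuned harmonic oscillators coupled only through
# their relative velocity, i.e. a purely dissipative bilinear coupling:
#   x1'' = -w1^2 x1 - g (x1' - x2')
#   x2'' = -w2^2 x2 - g (x2' - x1')
w1, w2, g = 1.0, 1.1, 0.5

# First-order form z = (x1, x2, x1', x2'), z' = A z, A non-Hermitian.
A = np.array([
    [0.0,     0.0,    1.0,  0.0],
    [0.0,     0.0,    0.0,  1.0],
    [-w1**2,  0.0,   -g,    g  ],
    [0.0,    -w2**2,  g,   -g  ],
])

eigvals = np.linalg.eigvals(A)
# With detuned frequencies no common motion escapes the relative-velocity
# damping, so all eigenvalues lie strictly in the left half-plane; the
# long-time dynamics is set by the slowest-decaying mode.
```

In this toy case stability follows because only identical velocities would escape the damping, which the detuning forbids; the gain/loss tuning of the paper would shift the real parts of these eigenvalues.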
+"---\nabstract: |\n In our paper, we introduce [*special-generic-like*]{} maps or [*SGL*]{} maps as smooth maps, present their fundamental algebraic topological and differential topological theory, and give non-trivial examples.\n\n The new class generalizes the class of so-called [*special generic*]{} maps. Special generic maps are smooth maps which are locally projections or the product maps of Morse functions and the identity maps on disks. Morse functions with exactly two singular points on spheres, or Morse functions in Reeb\u2019s theorem, are the simplest examples. Special generic maps and the manifolds of their domains have been studied well. Their structures are simple, and this helps us to study them explicitly. As an important property, they have been shown by Saeki and Sakuma, followed by Nishioka, Wrazidlo and the author, to strongly restrict the topologies and the differentiable structures of the manifolds. To cover wider classes of manifolds as the domains, the author previously introduced a class generalizing the class of special generic maps and smaller than our class: [*simply generalized special generic maps*]{}.\naddress: |\n Institute of Mathematics for Industry, Kyushu University, 744 Motooka, Nishi-ku Fukuoka 819-0395, Japan\\\n TEL (Office): +81-92-802-4402\\\n FAX (Office): +81-92-802-4405\\\nauthor:\n- Naoki Kitazawa\ntitle: Smooth maps like special generic maps\n---"
+"---\nabstract: 'Stream processing engines (SPEs) are widely used for large-scale streaming analytics over unbounded, time-ordered data streams. Modern-day streaming analytics applications exhibit diverse compute characteristics and demand strict latency and throughput requirements. Over the years, significant attention has been devoted to building hardware-efficient SPEs that support several query optimization, parallelization, and execution strategies to meet the performance requirements of large-scale streaming analytics applications. However, in this work, we observe that these strategies often fail to generalize well on many real-world streaming analytics applications due to several inherent design limitations of current SPEs. We further argue that these limitations stem from the shortcomings of the fundamental design choices and the query representation model followed in modern SPEs. To address these challenges, we first propose *TiLT*, a novel intermediate representation (IR) that offers a highly expressive temporal query language amenable to effective query optimization and parallelization strategies. We subsequently build a compiler backend for TiLT that applies such optimizations on streaming queries and generates hardware-efficient code to achieve high performance on multi-core stream query executions. We demonstrate that TiLT achieves up to $326\\times$ ($20.49\\times$ on average) higher throughput compared to state-of-the-art SPEs (e.g., Trill) across"
+"---\nabstract: 'The basic idea of lifelike computing systems is the transfer of concepts from living systems to technical use that goes even beyond existing concepts of self-adaptation and self-organisation (SASO). As a result, these systems become even more autonomous and changeable - up to a runtime transfer of the actual target function. Maintaining controllability requires a complete and dynamic (self-)quantification of the system behaviour with regard to aspects of SASO but also, in particular, lifelike properties. In this article, we discuss possible approaches for such metrics and establish a first metric for transferability. We analyse the behaviour of the metric using example applications and show that it is suitable for describing the system\u2019s behaviour at runtime.'\nauthor:\n- 'Martin Goller$^{1}$'\n- |\n Sven Tomforde$^1$\\\n \\\n $^1$Intelligent Systems, Christian-Albrechts-Universit\u00e4t zu Kiel, Germany\\\n goller.cau@gmail.com / st@informatik.uni-kiel.de\nbibliography:\n- 'references.bib'\ntitle: A Quantification Approach for Transferability in Lifelike Computing Systems\n---\n\nI. Introduction\n===============\n\nThe last two decades have seen a trend towards more autonomous behaviour in technical systems \\[[@zhang2017current]\\]. On the one hand, this leads to reduced controllability for the human administrator, as self-adaptation of behaviour and self-organisation of the system structure occur through the technical systems themselves (here summarised as"
+"---\nabstract: 'Many deep learning tasks require annotations that are too time-consuming for human operators, resulting in small dataset sizes. This is especially true for dense regression problems such as crowd counting, which requires the location of every person in the image to be annotated. Techniques such as data augmentation and synthetic data generation based on simulations can help in such cases. In this paper, we introduce PromptMix, a method for artificially boosting the size of existing datasets, which can be used to improve the performance of lightweight networks. First, synthetic images are generated in an end-to-end data-driven manner, where text prompts are extracted from existing datasets via an image captioning deep network, and subsequently introduced to text-to-image diffusion models. The generated images are then annotated using one or more high-performing deep networks, and mixed with the real dataset for training the lightweight network. Through extensive experiments on five datasets and two tasks, we show that PromptMix can significantly increase the performance of lightweight networks by up to 26%.'\nauthor:\n- 'Arian\u00a0Bakhtiarnia, Qi\u00a0Zhang, and\u00a0Alexandros\u00a0Iosifidis [^1] [^2]'\nbibliography:\n- 'references.bib'\ntitle: 'PromptMix: Text-to-image diffusion models enhance the performance of lightweight networks'\n---\n\nEfficient deep learning, lightweight"
+"---\nabstract: 'We introduce a causal framework for designing optimal policies that satisfy fairness constraints. We take a pragmatic approach, asking what we can do with the action space available to us and with access only to historical data. We propose two different fairness constraints: a moderation-breaking constraint, which aims at blocking moderation paths from the action and sensitive attribute to the outcome, and thereby at reducing disparity in outcome levels as much as the provided action space permits; and an equal-benefit constraint, which aims at distributing the gain from the new, maximized policy equally across sensitive attribute levels, and thus at keeping pre-existing preferential treatment in place or avoiding the introduction of new disparity. We introduce practical methods for implementing the constraints and illustrate their uses on experiments with semi-synthetic models.'\nauthor:\n- |\n Limor Gultchin\\\n University of Oxford, The Alan Turing Institute, DeepMind Siyuan Guo\\\n University of Cambridge, Max Planck Institute for Intelligent Systems Alan Malek\\\n DeepMind Silvia Chiappa\\\n DeepMind Ricardo Silva\\\n University College London\nbibliography:\n- 'bib.bib'\ntitle: 'Pragmatic Fairness: Developing Policies with Outcome Disparity Control'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe fairness of decisions made by machine learning models involving underprivileged groups has seen"
+"---\nabstract: 'HTTP Adaptive Streaming (HAS) is nowadays a popular solution for multimedia delivery. The novelty of HAS lies in the possibility of continuously adapting the streaming session to current network conditions, facilitated by Adaptive Bitrate (ABR) algorithms. Various popular streaming and Video on Demand services such as Netflix, Amazon Prime Video, and Twitch use this method. Given this broad consumer base, ABR algorithms are continuously improved to increase user satisfaction. The insights for these improvements are, among others, gathered within the research area of Quality of Experience (QoE). Within this field, various researchers have dedicated their work to identifying potential impairments and testing their impact on viewers\u2019 QoE. Two frequently discussed visual impairments influencing QoE are stalling events and quality switches. So far, it is commonly assumed that stalling events have the worst impact on QoE. This paper challenges this belief and reviews this assumption by comparing stalling events with multiple quality switches and high-amplitude quality switches. Two subjective studies were conducted. During the first subjective study, participants received a monetary incentive, while the second subjective study was carried out with volunteers. The statistical analysis demonstrated that stalling events do not result in the worst degradation of QoE. These"
+"---\nabstract: 'We establish existence and uniqueness of the solution to the Dyson equation for the density-density response function in time-dependent density functional theory (TDDFT) in the random phase approximation (RPA). We show that the poles of the RPA density-density response function are forward-shifted with respect to those of the non-interacting response function, thereby explaining mathematically the well known empirical fact that the non-interacting poles (given by the spectral gaps of the time-independent Kohn-Sham equations) underestimate the true transition frequencies. Moreover we show that the RPA poles are solutions to an eigenvalue problem, justifying the approach commonly used in the physics community to compute these poles.'\nauthor:\n- 'Thiago Carvalho Corso[^1]'\n- 'Mi-Song Dupuy[^2]'\n- 'Gero Friesecke[^3]'\nbibliography:\n- 'linear-response.bib'\ntitle: 'The density-density response function in time-dependent density functional theory: mathematical foundations and pole shifting'\n---\n\n\\[section\\] \\[theo\\][Proposition]{} \\[theo\\][Remark]{} \\[theo\\][Lemma]{} \\[theo\\][Corollary]{} \\[theo\\][Definition]{}\n\nIntroduction\n============\n\n[While ground state properties of molecules are very successfully captured by time-independent Kohn-Sham density functional theory (KS-DFT), excitation energies provide a much greater challenge. In particular, the excitation energies of the time-independent Kohn-Sham equations do not accurately capture the true excitation energies, and [*have no theoretically supported meaning*]{}.]{}\n\nInstead, time-dependent density functional theory (TDDFT) in the"
+"---\nabstract: |\n The Delone (Selling) scalars, which are used in unit cell reduction and in lattice type determination, are studied in [$\\bf{C^{3}}$]{}, the space of three complex variables. The three complex coordinate planes are composed of the six Delone scalars.\n\n [**Note:**]{} In his later publications, Boris Delaunay used the Russian version of his surname, Delone.\\\nauthor:\n- 'Herbert J. Bernstein'\nbibliography:\n- 'Reduced.bib'\ntitle: 'Delone lattice studies in [$\\bf{C^{3}}$]{}, the space of three complex variables'\n---\n\nThe space [$\\bf{C^{3}}$]{} is explained in more detail than in the original description. Boundary transformations of the fundamental unit are described in detail. A graphical presentation of the basic coordinates is described and illustrated.\n\nIntroduction\n============\n\nThe scalars used by\u00a0 in his formulation of Selling reduction \u00a0[@Selling1874] are (in the conventional order) $b \\cdot c$, $a \\cdot c$, $a \\cdot b$, $a \\cdot d$, $b \\cdot d$, $c \\cdot d$, where $d = -a-b-c$. (As a mnemonic device, observe that the first three terms use $\\alpha$, $\\beta$, and $\\gamma$, in that order, and the following terms use $a$, $b$, $c$, in that order.)\n\n\u00a0 chose to represent the Selling scalars in the space [$\\bf{S^{6}}$]{}, [{[$s_1$]{}, [$s_2$]{}, [$s_3$]{}, [$s_4$]{}, [$s_5$]{}, [$s_6$]{}}]{} (defined in the"
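The six Selling scalars defined in the record above are straightforward to compute from a lattice basis. A minimal sketch (our own helper, assuming the basis vectors $a,b,c$ are given as Cartesian coordinate triples):

```python
import numpy as np


def selling_scalars(a, b, c):
    """The six Delone (Selling) scalars of a lattice basis a, b, c, in
    the conventional order b.c, a.c, a.b, a.d, b.d, c.d, where
    d = -a - b - c, as in the definition quoted above."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    d = -a - b - c
    return np.array([b @ c, a @ c, a @ b, a @ d, b @ d, c @ d])


# For an orthogonal (orthorhombic) cell the first three scalars vanish
# and the last three reduce to -|a|^2, -|b|^2, -|c|^2.
s = selling_scalars([2, 0, 0], [0, 3, 0], [0, 0, 4])
```

For the orthogonal example the result is $(0,0,0,-4,-9,-16)$, consistent with $a\cdot d=-|a|^2$ when the basis vectors are mutually perpendicular.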
+"---\nabstract: 'We study the positive solutions of the logistic elliptic equation with a nonlinear Neumann boundary condition that models coastal fishery harvesting ([@GUU19]). An essential role is played by the smallest eigenvalue of the Dirichlet eigenvalue problem, with respect to which a noncritical case is studied in [@Um2022]. In this paper, we extend our analysis to the critical case and further study the noncritical case for a more precise description of the positive solution set. Our approach relies on the energy method, sub- and supersolutions, and implicit function analysis.'\naddress: 'Department of Mathematics, Faculty of Education, Ibaraki University, Mito 310-8512, Japan'\nauthor:\n- Kenichiro Umezu\ntitle: Logistic elliptic equation with a nonlinear boundary condition arising from coastal fishery harvesting II\n---\n\n[^1]\n\nIntroduction\n============\n\nThis paper is devoted to the study of the positive solutions for the following logistic elliptic equation with a nonlinear boundary condition arising from coastal fishery harvesting ([@GUU19]): $$\\begin{aligned}\n \\label{p}\n\\begin{cases}\n-\\Delta u = u-u^{p} & \\mbox{ in } \\Omega, \\\\\nu\\geq0 & \\mbox{ in } \\Omega, \\\\ \n\\frac{\\partial u}{\\partial \\nu} = -\\lambda u^q & \\mbox{ on } \\partial\\Omega. \n\\end{cases}\\end{aligned}$$ Here, $\\Omega\\subset \\mathbb{R}^N$, $N\\geq1$, is a bounded domain with smooth boundary $\\partial\\Omega$, $\\Delta ="
+"---\nabstract: 'In this paper, we investigate the uplink performance of cell-free (CF) extremely large-scale multiple-input-multiple-output (XL-MIMO) systems, which is a promising technique for future wireless communications. More specifically, we consider the practical scenario with multiple base stations (BSs) and multiple user equipments (UEs). To this end, we derive exact achievable spectral efficiency (SE) expressions for any combining scheme. It is worth noting that we derive the closed-form SE expressions for the CF XL-MIMO with maximum ratio (MR) combining. Numerical results show that the SE performance of the CF XL-MIMO can be hugely improved compared with the small-cell XL-MIMO. It is interesting that a smaller antenna spacing leads to a higher correlation level among patch antennas. Finally, we prove that increasing the number of UE antennas may decrease the SE performance with MR combining.'\nauthor:\n- |\n Hao Lei, Zhe Wang, Huahua Xiao, Jiayi Zhang,\u00a0, Bo Ai,\u00a0\\\n [^1]\nbibliography:\n- 'IEEEabrv.bib'\n- 'Ref.bib'\ntitle: 'Uplink Performance of Cell-Free Extremely Large-Scale MIMO Systems'\n---\n\nIntroduction\n============\n\nThe extremely large-scale multiple-input-multiple-output (XL-MIMO) is a promising technique for the future communication system where the system is equipped with a massive number of antennas in a compact space [@bjornson2019massive], [@9113273]. Compared with"
+"---\nabstract: |\n Over the years, Software Quality Engineering has attracted increasing interest, as demonstrated by the significant number of research papers published in this area. Determining when a software artifact is qualitatively valid is tricky, given the impossibility of providing an objective definition valid for any perspective, context, or stakeholder. Many quality model solutions have been proposed that reference specific quality attributes in this context. However, these approaches do not consider the context in which the artifacts will operate or the perspective of the stakeholder who evaluates their validity. Furthermore, these solutions suffer from the limitations of being artifact-specific and not extensible.\n\n In this paper, we provide a generic and extensible mechanism that makes it possible to aggregate and prioritize quality attributes. The user, taking into account his perspective and the context in which the software artifact will operate, is guided in defining all the criteria for his quality model. The management of these criteria is then facilitated through Multi-Criteria Decision Making (MCDM). In addition, we present the PRETTEF model, a concrete instance of the proposed approach for assessing and selecting MVC frameworks.\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'bib.bib'\ntitle: 'A customizable approach to assess software quality through Multi-Criteria Decision Making'\n---\n\nQuality Model,"
+"---\nauthor:\n- Kristof De Bruyn\n- Robert Fleischer\n- Eleftheria Malami\n- Philine van Vliet\ntitle: 'Studies of New Physics in $B^0_q-\\bar B^0_q$ Mixing and Implications for Leptonic Decays'\n---\n\nIntroduction\n============\n\nThe phenomenon of $B^0_q$-$\\bar{B}^0_q$ mixing (where $q=d,s$) arises only from loop processes in the Standard Model (SM) and is sensitive to possible New Physics (NP) contributions, which could enter through the loop topologies or even at tree level, for instance in $Z'$ models. Associated with the mixing phenomenon are the mixing parameters and the CP-violating phases, for which we have impressive experimental data. In this presentation, we follow Ref.\u00a0[@DeBruyn:2022zhw] and explore the space allowed for NP by current measurements and the state-of-the-art parameters. In addition, we point out interesting connections to the studies of leptonic rare $B$ decays.\n\nIn order to determine the parameter space of possible NP effects in $B_q^0$\u2013$\\bar{B}_q^0$ mixing, we have to compare the SM predictions of the mixing parameters with the corresponding experimental values. For these SM predictions, a careful analysis of the Unitarity Triangle (UT) apex is required. We pay special attention to the different determinations of the Cabibbo-Kobayashi-Maskawa (CKM) parameters and the tensions that arise between the extractions"
+"---\nabstract: 'Hematite is a canted antiferromagnetic insulator, promising for applications in spintronics. Here, we present [*ab initio*]{} calculations of the tensorial exchange interactions of hematite and use them to understand its magnetic properties by parameterizing a semiclassical Heisenberg spin model. Using atomistic spin dynamics simulations, we calculate the equilibrium properties and phase transitions of hematite, most notably the Morin transition. The computed isotropic and Dzyaloshinskii\u2013Moriya interactions result in a N\u00e9el temperature and weak ferromagnetic canting angle that are in good agreement with experimental measurements. Our simulations show how dipole-dipole interactions act in a delicate balance with first and higher-order on-site anisotropies to determine the material\u2019s magnetic phase. Comparison with spin-Hall magnetoresistance measurements on a hematite single-crystal reveals deviations of the critical behavior at low temperatures. Based on a mean-field model, we argue that these differences result from the quantum nature of the fluctuations that drive the phase transitions.'\nauthor:\n- Tobias Dannegger\n- Andr\u00e1s De\u00e1k\n- Levente R\u00f3zsa\n- 'E. Galindez-Ruales'\n- Shubhankar Das\n- Eunchong Baek\n- Mathias Kl\u00e4ui\n- L\u00e1szl\u00f3 Szunyogh\n- Ulrich Nowak\ntitle: 'Magnetic properties of hematite revealed by an [*ab initio*]{} parameterized spin model'\n---\n\nIntroduction\n============\n\nAs a prototypical weak ferromagnet, the insulating"
+"---\nabstract: 'In the context of adiabatic quantum computation (AQC), it has been argued that first-order quantum phase transitions (QPTs) due to localisation phenomena cause AQC to fail by exponentially decreasing the minimal spectral gap of the Hamiltonian along the annealing path. The vanishing of the spectral gap is often linked to the localisation of the ground state in a local minimum, requiring the system to tunnel into the global minimum at a later stage of the annealing. Recent methods have been proposed to avoid this phenomenon by carefully designing the involved Hamiltonians. However, it remains a challenge to formulate a comprehensive theory on the effect of the various parameters and the conditions under which QPTs make the AQC algorithm fail. Equipped with concepts from graph theory, in this work we link graph quantities associated with the Hamiltonians along the anneal path with the occurrence of QPTs. These links allow us to derive bounds on the location of the minimal spectral gap along the anneal path, augmenting the toolbox for the design of strategies to improve the runtime of AQC algorithms.'\nauthor:\n- Matthias Werner\n- 'Artur Garc\u00eda-S\u00e1ez'\n- 'Marta P. Estarellas'\nbibliography:\n- 'references.bib'\ntitle: 'Bounding first-order quantum phase"
+"---\nabstract: 'Collectives of actively-moving particles can spontaneously separate into dilute and dense phases\u2014a fascinating phenomenon known as motility-induced phase separation (MIPS). MIPS is well-studied for randomly-moving particles with no directional bias. However, many forms of active matter exhibit collective chemotaxis, directed motion along a chemical\u00a0gradient that the constituent particles can generate themselves. Here, using theory and simulations, we demonstrate that collective chemotaxis strongly competes with MIPS\u2014in some cases, arresting or completely suppressing phase separation, or in other cases, generating fundamentally new dynamic instabilities. We establish quantitative principles describing this competition, thereby helping to reveal and clarify the rich physics underlying active matter systems that perform chemotaxis, ranging from cells to robots.'\nauthor:\n- Hongbo Zhao\n- Andrej Ko\u0161mrlj\n- 'Sujit S. Datta'\ntitle: 'Chemotactic motility-induced phase separation'\n---\n\nThe thermodynamics of active matter\u2014collections of active agents that consume energy\u2014has been studied extensively due to its fundamental richness as well as its importance to biological and engineering applications\u00a0[@Marchetti2013; @Gompper2020]. One prominent class of active matter is that composed of self-propelled agents, ranging from enzymes\u00a0[@Mohajerani2018; @Agudo-Canalejo2018; @Jee2018], motile microorganisms\u00a0[@Murray2007; @Liu2019], and mammalian cells\u00a0[@Alert2019; @Scarpa2016] to synthetic microswimmers and robots\u00a0[@Palacci2013; @Theurkauff2012; @Palagi2018]. These forms of active"
+"---\nabstract: 'The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model\u2019s emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.'\nbibliography:\n- 'main.bib'\n---\n\nIntroduction\n============\n\n\\[sec:intro\\] Vision-language pre-training (VLP) research has witnessed a rapid advancement in the past few years, where pre-trained models with increasingly larger scale have been developed to continuously push the state-of-the-art on various downstream tasks\u00a0[@clip; @ALBEF; @blip; @ofa; @flamingo; @beit3]. However, most state-of-the-art vision-language models incur a high computation cost during pre-training, due to end-to-end training using large-scale models and datasets."
+"---\nauthor:\n- 'V\u00edt\u011bzslav Kala'\ntitle: 'Universal quadratic forms and indecomposables in number fields: A survey'\n---\n\nThe goal of this survey article is to give an overview of the arithmetic theory of universal quadratic forms. I will primarily focus on the results over number fields obtained since 2015. For other surveys focusing on different facets of the area, see\u00a0[@Ea; @Han; @Kim].\n\nWhile I try to explain the broad ideas behind the proofs and the tools that are used, many details are nevertheless missing and there are numerous simplifications. The interested reader is therefore always encouraged to look into the original papers or to contact me. Overall, the notes are aimed more at a\u00a0junior audience than at the experts in the field (who might at least prefer to start reading only in Sections\u00a0\\[sec:4\\] or\u00a0\\[sec:5\\]).\n\nThe paper is primarily based on the notes from my lectures at the *XXIII International Workshop for Young Mathematicians* in Krakow, Poland (and from some of my other talks). Parts of the text are also taken from the introduction to my habilitation thesis\u00a0[@Ka3].\n\nIntroduction {#sec:1}\n============\n\nThe study of representations of integers by quadratic forms has a\u00a0long"
+"---\nabstract: 'The reconfigurable intelligent surface (RIS) is useful to effectively improve the coverage and data rate of end-to-end communications. In contrast to the well-studied coverage-extension use case, in this paper, multiple RIS panels are introduced, aiming to enhance the data rate of multi-input multi-output (MIMO) channels in the presence of insufficient scattering. Specifically, via the operator-valued free probability theory, the asymptotic mutual information of the large-dimensional RIS-assisted MIMO channel is obtained under the Rician fading with Weichselberger\u2019s correlation structure, in the presence of both the direct and the reflected links. Although the mutual information of Rician MIMO channels scales linearly with the number of antennas and with the signal-to-noise ratio (SNR) in decibels, numerical results show that a sufficiently large SNR, proportional to the Rician factor, is required in order to obtain the theoretically guaranteed linear improvement. This paper shows that the proposed multi-RIS deployment is especially effective in improving the mutual information of MIMO channels under large Rician factor conditions. When the reflected links have similar arriving and departing angles across the RIS panels, a small number of RIS panels is sufficient to harness the spatial degrees of freedom of the multi-RIS assisted MIMO channels.'\nauthor:\n- 'Zhong\u00a0Zheng, \u00a0 Siqiang\u00a0Wang,"
+"---\nabstract: |\n In this paper, we analyze a speed restarting scheme for the dynamical system given by $$\\ddot{x}(t) + \\dfrac{\\alpha}{t}\\dot{x}(t) + \\nabla \\phi(x(t)) + \\beta \\nabla^2 \\phi(x(t))\\dot{x}(t)=0,$$ where $\\alpha$ and $\\beta$ are positive parameters, and $\\phi:{{\\mathbb R}}^n \\to {{\\mathbb R}}$ is a smooth convex function. If $\\phi$ has quadratic growth, we establish a linear convergence rate for the function values along the restarted trajectories. As a byproduct, we improve the results obtained by Su, Boyd and Cand\u00e8s [@JMLR:v17:15-084] in the strongly convex case for $\\alpha=3$ and $\\beta=0$. Preliminary numerical experiments suggest that both adding a positive Hessian driven damping parameter $\\beta$ and implementing the restart scheme help improve the performance of the dynamics and the corresponding iterative algorithms as means to approximate minimizers of $\\phi$.\\\n **Keywords:** Convex optimization $\\cdot$ Hessian driven damping $\\cdot$ First order methods $\\cdot$ Restarting $\\cdot$ Differential equations.\\\n **MSC2020:** 37N40 $\\cdot$ 90C25 $\\cdot$ 65K10 (primary). 34A12 (secondary).\nauthor:\n- 'Juan Jos\u00e9 Maul\u00e9n[^1], Juan Peypouquet[^2]'\nbibliography:\n- 'referencias.bib'\ntitle: A speed restart scheme for a dynamics with Hessian driven damping\n---\n\nIntroduction\n============\n\nIn convex optimization, first-order methods are iterative algorithms that use function values and (generalized) derivatives to build minimizing sequences. Perhaps the oldest and simplest"
+"---\nabstract: 'Two different approaches exist to handle missing values for prediction: either imputation, prior to fitting any predictive algorithms, or dedicated methods able to natively incorporate missing values. While imputation is widely (and easily) used, it is unfortunately biased when low-capacity predictors (such as linear models) are applied afterward. However, in practice, naive imputation exhibits good predictive performance. In this paper, we study the impact of imputation in a high-dimensional linear model with MCAR missing data. We prove that zero imputation performs an implicit regularization closely related to the ridge method, often used in high-dimensional problems. Leveraging this connection, we establish that the imputation bias is controlled by a ridge bias, which vanishes in high dimension. As a predictor, we argue in favor of the averaged SGD strategy, applied to zero-imputed data. We establish an upper bound on its generalization error, highlighting that imputation is benign in the $d \\gg \\sqrt{n}$ regime. Experiments illustrate our findings.'\nauthor:\n- |\n Alexis Ayme, Claire Boyer, Aymeric Dieuleveut\\\n & Erwan Scornet\nbibliography:\n- 'references.bib'\ntitle: 'Naive imputation implicitly regularizes high-dimensional linear models.'\n---\n\nIntroduction\n============\n\nMissing data has become an inherent problem in modern data science. Indeed, most real-world"
+"---\nabstract: 'In this paper, we study dimension reduction techniques for large-scale controlled stochastic differential equations (SDEs). The drift of the considered SDEs contains a polynomial term satisfying a one-sided growth condition. Such nonlinearities in high dimensional settings occur, e.g., when stochastic reaction diffusion equations are discretized in space. We provide a brief discussion around existence, uniqueness and stability of solutions. (Almost) stability is then the basis for the new concepts of Gramians that we introduce and study in this work. With the help of these Gramians, dominant subspaces are identified, leading to a balancing-related, highly accurate reduced-order SDE. We provide an algebraic error criterion and an error analysis of the proposed model reduction schemes. The paper is concluded by applying our method to spatially discretized reaction diffusion equations.'\nauthor:\n- 'Martin Redmann[^1]'\ntitle: Model reduction for stochastic systems with nonlinear drift\n---\n\n**Keywords:** model order reduction $\\cdot$ nonlinear stochastic systems $\\cdot$ Gramians $\\cdot$ L\u00e9vy processes\n\n**MSC classification:** 60G51 $\\cdot$ 60H10 $\\cdot$ 65C30 $\\cdot$ 93C10 $\\cdot$ 93E03 $\\cdot$ 93E15\n\nIntroduction\n============\n\nModel order reduction (MOR) aims to find low-order approximations for high-/infinite-dimensional systems of differential equations, reducing the complexity of the original problem. Many MOR schemes are based on projections"
+"---\nabstract: 'The great success of Deep Neural Networks (DNNs) has inspired the algorithmic development of DNN-based Fixed-Point (DNN-FP) methods for computer vision tasks. DNN-FP methods, trained by Back-Propagation Through Time or by computing an inaccurate inversion of the Jacobian, suffer from inferior representation ability. Motivated by the representation power of the Transformer, we propose a framework to unroll the FP and approximate each unrolled process via Transformer blocks, called FPformer. To reduce the high consumption of memory and computation, we propose FPRformer by sharing parameters between successive blocks. We further design a module that adapts Anderson acceleration to FPRformer to enlarge the unrolled iterations and improve the performance, called FPAformer. In order to fully exploit the capability of the Transformer, we apply the proposed model to image restoration, using self-supervised pre-training and supervised fine-tuning. 161 tasks from 4 categories of image restoration problems are used in the pre-training phase. Thereafter, the pre-trained FPformer, FPRformer, and FPAformer are further fine-tuned for the comparison scenarios. Using self-supervised pre-training and supervised fine-tuning, the proposed FPformer, FPRformer, and FPAformer achieve competitive performance with state-of-the-art image restoration methods and better training efficiency. FPAformer employs only 29.82% of the parameters used in SwinIR models, and provides"
+"---\nabstract: 'This work describes experiments on the thermal dynamics of pure water excited by hydrodynamic cavitation, which has been reported to facilitate the spin conversion of para- and ortho-isomers at water interfaces. Previous measurements by NMR and capillary methods of excited samples demonstrated changes of proton density by 12-15% and of surface tension by up to 15.7%, which can be attributed to a non-equilibrium para-/ortho- ratio. Besides these changes, we also expect a variation of heat capacity. Experiments use a differential calorimetric approach with two devices: one with an active thermostat for diathermic measurements, the other fully passive for long-term measurements. Samples after excitation are degassed at -0.09MPa and thermally equalized in a water bath. The conducted experiments demonstrated changes in the heat capacity of experimental samples by 4.17%\u20135.72% measured in the transient dynamics within 60 min after excitation, which decrease to 2.08% in the steady-state dynamics 90-120 min after excitation. Additionally, we observed the occurrence of thermal fluctuations at the level of 10$^{-3}$ $^\\circ C$ relative temperature on the 20-40 min mesoscale dynamics and a long-term increase of such fluctuations in experimental samples. Obtained results are reproducible in both devices and are supported by previously published outcomes on four-photon scattering spectra in the range from"
+"---\nabstract: |\n Measurements of heavy flavor azimuthal angular correlations provide insight into the production, propagation, and hadronization of heavy flavor jets in ultra-relativistic hadronic and heavy-ion collisions. These measurements across different colliding systems, like p\u2013A and A\u2013A, help us isolate the possible modifications in particle production due to cold nuclear matter (CNM) effects and the formation of the Quark-Gluon Plasma (QGP), respectively. Jet correlation studies give direct access to the dynamics of the initial partons produced in these collisions.\n\n This article studies the azimuthal angular correlations of electrons from heavy flavor hadron decays in pp, p\u2013Pb, and Pb\u2013Pb collisions at [$\\sqrt{s_{\\rm{NN}}}$]{}= 5.02 TeV using PYTHIA8+Angantyr. We study the production of heavy flavor jets with different parton-level processes, including multi-parton interactions, different color reconnection prescriptions, and initial- and final-state radiation processes. In addition, we add the hadron-level processes, i.e., Bose-Einstein and rescattering effects, to quantify the effect due to these processes. The heavy flavor electron correlations are calculated in different trigger and associated [$p_{\\rm{T}}$]{} intervals to characterize the impact of hard and soft scattering in the various colliding systems. The yields and the widths (sigmas) associated with the near-side (NS) and away-side (AS) correlation peaks are calculated and studied as a function"