diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2023_01.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2023_01.jsonl" new file mode 100644--- /dev/null +++ "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2023_01.jsonl" @@ -0,0 +1,1000 @@ +"---\nabstract: 'In this work we report state-of-the-art theoretical calculations of the dipole polarizability of the argon atom. Frequency dependence of the polarizability is taken into account by means of the dispersion coefficients (Cauchy coefficients) which is sufficient for experimentally relevant wavelengths below the first resonant frequency. In the proposed theoretical framework, all known physical effects including the relativistic, quantum electrodynamics, finite nuclear mass, and finite nuclear size corrections are accounted for. We obtained $\\alpha_0=11.0775(19)$ for the static polarizability and $\\alpha_2=27.976(15)$ and $\\alpha_4=95.02(11)$ for the second and fourth dispersion coefficients, respectively. The result obtained for the static polarizability agrees (within the estimated uncertainty) with the most recent experimental data \\[C. Gaiser and B. Fellmuth, Phys. Rev. Lett. **120**, 123203 (2018)\\], but is less accurate. The dispersion coefficients determined in this work appear to be most accurate in the literature, improving by more than an order of magnitude upon previous estimates. By combining the experimentally determined value of the static polarizability with the dispersion coefficients from our calculations, the polarizability of argon can be calculated with accuracy of around $10\\,$ppm for wavelengths above roughly $450\\,$nm. This result is important from the point of view of quantum metrology, especially for a new" +"---\nabstract: 'Gravitational-wave backgrounds are expected to arise from the superposition of gravitational wave signals from a large number of unresolved sources and also from the stochastic processes that occurred in the Early universe. So far, we have not detected any gravitational wave background, but with the improvements in the detectors\u2019 sensitivities, such detection is expected in the near future. The detection and inferences we draw from the search for a gravitational-wave background will depend on the source model, the type of search pipeline used, and the data generation in the gravitational-wave detectors. In this work, we focus on the effect of the data generation process, specifically the calibration of the detectors\u2019 digital output into strain data used by the search pipelines. Using the calibration model of the current LIGO detectors as an example, we show that for power-law source models and calibration uncertainties $\\lesssim 10 \\%$, the detection of isotropic gravitational wave background is not significantly affected. We also show that the source parameter estimation and upper limits calculations get biased. For calibration uncertainties of $\\lesssim 5 \\%$, the biases are not significant ($\\lesssim 2 \\%$), but for larger calibration uncertainties, they might become significant, especially when trying to" +"---\nabstract: |\n A class of graphs ${{\\mathscr C}}$ is [*[monadically stable]{}*]{} if for any unary expansion $\\widehat{{{\\mathscr C}}}$ of ${{\\mathscr C}}$, one cannot interpret, in first-order logic, arbitrarily long linear orders in graphs from $\\widehat{{{\\mathscr C}}}$. 
It is known that nowhere dense graph classes are monadically stable; these encompass most of the studied concepts of sparsity in graphs, including classes of graphs that exclude a fixed topological minor. On the other hand, monadic stability is a property expressed in purely model-theoretic terms and hence it is also suited for capturing structure in dense graphs.\n\n For several years, it has been suspected that one can construct a structure theory for monadically stable graph classes that mirrors the theory of nowhere dense graph classes in the dense setting. In this work we provide a next step in this direction by giving a characterization of monadic stability through the [*[Flipper game]{}*]{}: a game on a graph played by [*[Flipper]{}*]{}, who in each round can complement the edge relation between any pair of vertex subsets, and [*[Connector]{}*]{}, who in each round is forced to localize the game to a ball of bounded radius. This is an analog of the [*[Splitter game]{}*]{}, which characterizes" +"---\nabstract: 'A comprehensive wideband spectral analysis of the brightest black hole X-ray binary 4U $1543-47$ during its 2021 outburst is carried out for the first time using *NICER, NuSTAR,* and *AstroSat* observations by phenomenological and reflection modelling. The source attains a super-Eddington peak luminosity and remains in the soft state, with a small fraction ($< 3\\%$) of inverse-Comptonized photons. The spectral modelling reveals a steep photon index ($\\Gamma \\sim 2-2.6$) and a relatively high inner disk temperature (T$_{in}\\sim 0.9-1.27$ keV). The line-of-sight column density varies between ($0.45-0.54$)$\\times10^{22}$ cm$^{-2}$. Reflection modelling using the RELXILL model suggests that 4U $1543-47$ is a low-inclination system ($\\theta \\sim 32^\\circ - 40^\\circ$). The accretion disk is highly ionized (log $\\xi$ > 3) and has a super-solar abundance (3.6$-$10 $A_{Fe,\\odot}$) over the entire period of study. We detected a prominent dynamic absorption feature between $\\sim 8-11$ keV in the spectra throughout the outburst. This detection is the first of its kind for X-ray binaries. We infer that the absorption of the primary X-ray photons by the highly ionized, fast-moving disk-winds can produce the observed absorption feature. The phenomenological spectral modelling also shows the presence of a neutral absorption feature $\\sim 7.1 - 7.4$ keV, and" +"---\nabstract: 'The paper suggests a generalization of the Sign-Perturbed Sums (SPS) finite-sample system identification method for the identification of closed-loop observable stochastic linear systems in state-space form. The solution builds on the theory of matrix-variate regression and instrumental variable methods to construct distribution-free confidence regions for the state-space matrices. Both direct and indirect identification are studied, and the exactness as well as the strong consistency of the construction are proved. Furthermore, a new, computationally efficient ellipsoidal outer-approximation algorithm for the confidence regions is proposed. The new construction results in a semidefinite optimization problem which has an order-of-magnitude smaller number of constraints, as if one applied the ellipsoidal outer-approximation after vectorization. 
The effectiveness of the approach is also demonstrated empirically via a series of numerical experiments.'\naddress:\n- 'Institute for Computer Science and Control (SZTAKI), E\u00f6tv\u00f6s Lor\u00e1nd Research Network (ELKH), Budapest, Hungary'\n- 'Department of Probability Theory and Statistics, Institute of Mathematics, E\u00f6tv\u00f6s Lor\u00e1nd University (ELTE), Budapest, Hungary'\nauthor:\n- Szabolcs Szentp\u00e9teri\n- Bal\u00e1zs Csan\u00e1d Cs\u00e1ji\nbibliography:\n- 'iv-mimo-clss-sps.bib'\ntitle: |\n Non-Asymptotic State-Space Identification of Closed-Loop\\\n Stochastic Linear Systems using Instrumental Variables\n---\n\nclosed-loop identification, distribution-free methods, non-asymptotic guarantees, instrumental variables\n\nIntroduction\n============\n\nEstimating a [*mathematical model*]{} from observations" +"---\nabstract: 'Numerical approximations of partial differential equations (PDEs) are routinely employed to formulate the solution of physics, engineering and mathematical problems involving functions of several variables, such as the propagation of heat or sound, fluid flow, elasticity, electrostatics, electrodynamics, and more. While this has led to solving many complex phenomena, there are some limitations. Conventional approaches such as Finite Element Methods (FEMs) and Finite Difference Methods (FDMs) require considerable time and are computationally expensive. In contrast, data-driven machine learning-based methods such as neural networks provide a faster, fairly accurate alternative, and have certain advantages such as discretization invariance and resolution invariance. This article aims to provide a comprehensive insight into how data-driven approaches can complement conventional techniques to solve engineering and physics problems, while also noting some of the major pitfalls of machine learning-based approaches. Furthermore, we highlight a novel and fast machine learning-based approach ($\\sim$1000x) to learning the solution operator of a PDE: operator learning. We will note how these new computational approaches can bring immense advantages in tackling many problems in fundamental and applied physics.'\naddress:\n- 'Department of Computer Science, Purdue University, West Lafayette, IN, USA '\n- 'Department of Materials Science and Engineering, MIT,"
CancerUniT is trained end-to-end using curated large-scale CT images of 10,042 patients, including eight major types of cancers and co-occurring non-cancer tumors (all are pathology-confirmed, with 3D tumor masks annotated by radiologists). On the test set of 631 patients, CancerUniT" +"---\nabstract: 'Deep visual models have widespread applications in high-stakes domains. Hence, their black-box nature is currently attracting significant interest from the research community. We present the first survey in Explainable AI that focuses on the methods and metrics for interpreting deep visual models. Covering the landmark contributions alongside the state-of-the-art, we not only provide a taxonomic organisation of the existing techniques, but also excavate a range of evaluation metrics and collate them as measures of different properties of model explanations. Alongside an insightful discussion of the current trends, we also discuss the challenges and future avenues for this research direction.'\nauthor:\n- 'Naveed Akhtar The University of Western Australia naveed.akhtar@uwa.edu.au'\n- First Author$^1$\n- Second Author$^2$\n- |\n Third Author$^{2,3}$Fourth Author$^4$ $^1$First Affiliation\\\n $^2$Second Affiliation\\\n $^3$Third Affiliation\\\n $^4$Fourth Affiliation {first, second}@example.com, third@other.example.com, fourth@example.com\nbibliography:\n- 'ijcai23.bib'\ntitle: 'A Survey of Explainable AI in Deep Visual Modeling: Methods and Metrics'\n---\n\nIntroduction\n============\n\nVisual computational models have widespread applications, ranging from casual use in handheld devices to high-stakes tasks in forensics, surveillance, autonomous driving, medical diagnosis, etc. Contemporary visual models rely heavily on deep learning, which is a black-box technology. Hence, the opacity of deep visual models is" +"---\nabstract: 'Twitter (one example of microblogging) is widely used by researchers to understand human behavior, specifically how people behave when a significant event occurs and how it changes user microblogging patterns. The changing microblogging behavior can reveal patterns that can help in detecting real-world events. However, the Twitter data that is available has limitations: it is incomplete and noisy, and the samples are irregular. In this paper we create a model, called the *Twitter Behavior Agent-Based Model (TBAM)*, to simulate Twitter patterns and behavior using Agent-Based Modeling (ABM). The generated data from ABM simulations can be used in place of, or to complement, the real-world data toward improving the accuracy of event detection. We confirm the validity of our model by finding the cross-correlation between the real data collected from Twitter and the data generated using TBAM.'\nbibliography:\n- 'datafusion.bib'\n---\n\nIntroduction\n============\n\nThe widespread use of microblogging services, such as Twitter, which generate immense content, has resulted in considerable research focusing on utilizing their counts and semantic content for many different practical applications. For example, researchers can use microblogging data to gain insight into events and how people behave when an event occurs. The change in microblogging"
To address these challenges, we do the following: i) Instead of a neural network, we do model-based planning using a parallel memory retrieval system (which we term the *slow* mechanism); ii) Instead of learning state values, we guide the agent\u2019s actions using goal-directed exploration, with a neural network choosing the next action given the current state and the goal state (which we term the *fast* mechanism). The goal-directed exploration is trained online using hippocampal replay of visited states and future imagined states every single time step, leading to fast and efficient training. Empirical studies show that our proposed method has a 92% solve rate across 100 episodes in a dynamically changing grid world, significantly outperforming state-of-the-art actor-critic mechanisms such as PPO (54%), TRPO (50%) and A2C (24%). Ablation studies demonstrate that both mechanisms are crucial. We posit that the future of Reinforcement Learning (RL) will be to model goals and sub-goals for various tasks, and plan them out in a goal-directed memory-based approach.'\nauthor:\n- |\n John Chong Min Tan\\\n Department of Electrical and Computer Engineering\\\n National University of Singapore\\\n `johntancm@u.nus.edu.sg`\\\n Mehul Motani\\" +"---\nabstract: 'Anomaly detection in videos is a significant yet challenging problem. Previous methods based on deep neural networks employ either reconstruction-based or prediction-based approaches. Nevertheless, existing reconstruction-based methods [**1)**]{} rely on old-fashioned convolutional autoencoders and are poor at modeling temporal dependency; [**2)**]{} are prone to overfit the training samples, leading to indistinguishable reconstruction errors of normal and abnormal frames during the inference phase. To address such issues, firstly, we draw inspiration from the transformer and propose [**S**]{}patio-[**T**]{}emporal [**A**]{}uto-[**T**]{}rans-[**E**]{}ncoder, dubbed STATE, as a new autoencoder model for enhanced consecutive frame reconstruction. Our STATE is equipped with a specifically designed learnable convolutional attention module for efficient temporal learning and reasoning. Secondly, we put forward a novel reconstruction-based input perturbation technique during testing to further differentiate anomalous frames. With the same perturbation magnitude, the testing reconstruction error of the normal frames decreases more than that of the abnormal frames, which contributes to mitigating the overfitting problem of reconstruction. Owing to the high relevance between frame abnormality and the objects in the frame, we conduct object-level reconstruction using both the raw frame and the corresponding optical flow patches. Finally, the anomaly score is designed based on the combination of the raw and"
Nonetheless, the fundamental question of how losses affect the behaviour of the surviving subset of a multi-particle system has not been investigated so far. For this reason, here we study the impact of particle losses in a quantum walk of two photons, reconstructing the output probability distributions for one photon conditioned on the loss of the other in a known mode and temporal step of our evolution network. We present the underlying theoretical scheme that we have devised in order to model controlled particle losses, and describe an experimental platform capable of implementing our theory in a time-multiplexing encoding. In the end we show" +"---\nabstract: |\n Vandermonde matrices are usually exponentially ill-conditioned and often result in unstable approximations. In this paper, we introduce and analyze the *multivariate Vandermonde with Arnoldi (V+A) method*, which is based on least-squares approximation together with a Stieltjes orthogonalization process, for approximating continuous, multivariate functions on $d$-dimensional irregular domains. The V+A method addresses the ill-conditioning of the Vandermonde approximation by creating a discrete orthogonal basis with respect to a discrete measure. The V+A method is simple and general. It relies only on the sample points from the domain and requires no prior knowledge of the domain. In this paper, we first analyze the sample complexity of the V+A approximation. In particular, we show that, for a large class of domains, the V+A method gives a well-conditioned and near-optimal $N$-dimensional least-squares approximation using $M=\\mathcal{O}(N^2)$ equispaced sample points or $M=\\mathcal{O}(N^2\\log N)$ random sample points, independently of $d$. We also give a comprehensive analysis of the error estimates and rate of convergence of the V+A approximation. Based on the multivariate V+A approximation, we propose a new variant of the weighted V+A least-squares algorithm that uses only $M=\\mathcal{O}(N\\log N)$ sample points to give a near-optimal approximation. Our numerical results confirm that" +"---\nabstract: 'It has become a consensus that autonomous vehicles (AVs) will first be widely deployed on highways. However, the complexity of highway interchanges becomes the bottleneck for deploying AVs. An AV should be sufficiently tested on different highway interchanges, which is still challenging due to the lack of available datasets containing diverse highway interchanges. In this paper, we propose a model-driven method, [Flyover]{}, to generate a dataset consisting of diverse interchanges with measurable diversity coverage. First, [Flyover]{} proposes a labeled digraph to model the topology of an interchange. Second, [Flyover]{} takes real-world interchanges as input to guarantee topology practicality and extracts different topology equivalence classes by classifying the corresponding topology models. Third, for each topology class, [Flyover]{} identifies the corresponding geometrical features for the ramps and generates concrete interchanges using k-way combinatorial coverage and differential evolution. To illustrate the diversity and applicability of the generated interchange dataset, we test the built-in traffic flow control algorithm in SUMO and the fuel-optimization trajectory tracking algorithm deployed to Alibaba\u2019s autonomous trucks on the dataset. 
The results show that except for the geometrical difference, the interchanges are diverse in throughput and fuel consumption under the traffic flow control and trajectory tracking algorithms," +"---\nabstract: 'Numerous types of social biases have been identified in pre-trained language models (PLMs), and various intrinsic bias evaluation measures have been proposed for quantifying those social biases. Prior works have relied on human-annotated examples to compare existing intrinsic bias evaluation measures. However, this approach is not easily adaptable to different languages nor amenable to large-scale evaluations due to the costs and difficulties of recruiting human annotators. To overcome this limitation, we propose a method to compare intrinsic gender bias evaluation measures without relying on human-annotated examples. Specifically, we create multiple *bias-controlled* versions of PLMs using varying amounts of male vs. female gendered sentences, mined automatically from an unannotated corpus using gender-related word lists. Next, each bias-controlled PLM is evaluated using an intrinsic bias evaluation measure, and the rank correlation between the computed bias scores and the gender proportions used to fine-tune the PLMs is computed. Experiments on multiple corpora and PLMs repeatedly show that the correlations reported by our proposed method that does not require human-annotated examples are comparable to those computed using human-annotated examples in prior work.'\nauthor:\n- |\n Masahiro Kaneko$^{1}$ Danushka Bollegala$^{2,3}$ Naoaki Okazaki$^{1}$\\\n $^1$Tokyo Institute of Technology $^2$University of Liverpool" +"---\nabstract: 'We analyze the results of a recent experiment \\[D. Razmadze et al., *Phys. Rev. Lett.*, **125**, 116803 (2020)\\] on transport through a quantum dot between two full-shell nanowires and show that the observed effects are caused by the Kondo effect enhancement due to a nontrivial geometry (magnetic flux in a full-shell nanowire) rather than the presence of Majorana bound states. Moreover, we propose that such a setup presents a unique and convenient system to study the competition between superconductivity and the Kondo effect and has significant advantages in comparison to other known approaches, as the important parameter is controlled by the magnetic flux through the full-shell nanowire, which can be significantly varied with small changes of the magnetic field, and does not require additional gates. This competition is of fundamental interest as it results in a quantum phase transition between the unscreened doublet and many-body Kondo singlet ground states of the system.'\nauthor:\n- 'Aleksandr E. Svetogorov'\n- Daniel Loss\n- Jelena Klinovaja\nbibliography:\n- 'QD.bib'\ntitle: 'Enhancement of the Kondo effect in a quantum dot formed in a full-shell nanowire.'\n---\n\nIntroduction\n============\n\nSemiconducting nanowires with a full superconducting shell were recently introduced as possible realizations of topological superconductors,"
Empirical results validate our findings, showing that bagging successfully stabilizes even highly unstable base algorithms.'\nauthor:\n- 'Jake A. Soloff'\n- Rina Foygel Barber\n- Rebecca Willett\nbibliography:\n- 'reference.bib'\nnocite: '[@devroye1979distribution2]'\ntitle: 'Bagging Provides Assumption-free Stability'\n---\n\nIntroduction {#sec-intro}\n============\n\nAlgorithmic stability\u2014that is, how perturbing training data influences a learned model\u2014is fundamental to modern data analysis. In learning theory, certain forms of stability are necessary and sufficient for generalization [@bousquet2002stability; @poggio2004general; @shalev2010learnability]. In model selection, stability measures can reliably identify important features [@meinshausen2010stability; @shah2013variable; @ren2021derandomizing]. In scientific applications, stable methods promote reproducibility, a prerequisite for meaningful inference [@yu2013stability]. In distribution-free prediction, stability is a key assumption for the validity of jackknife prediction intervals [@barber2021predictive; @steinberger2023conditional].\n\nAnticipating various benefits of stability, @breiman1996bagging [@breiman1996heuristics] proposed bagging as an ensemble meta-algorithm to stabilize any" +"---\nabstract: |\n Picture-valued invariants are the main achievement of parity theory by V.O. Manturov. In this paper we give a general description of such invariants, which can be assigned to a parity (in general, a trait) on diagram crossings. We distinguish two types of picture-valued invariants: derivations (Turaev bracket, index polynomial, etc.) and functorial maps (Kauffman bracket, parity bracket, parity projection, etc.). We consider some examples of binary functorial maps.\n\n Besides known cases of functorial maps, we present two new examples. The order functorial map is closely connected with (pre)orderings of surface groups and leads to the notion of sibling knots, i.e. knots such that any diagram of one knot can be transformed into a diagram of the other by crossing switching. The other is the lifting map, which is inverse to the forgetting of under/over-crossing information that turns virtual knots into flat knots. We give some examples of liftable flat knots and flattable virtual ones.\n\n An appendix of the paper contains a description of some smoothing skein modules. In particular, we show that $\\Delta$-equivalence of tangles in a fixed surface is classified by the extended homotopy index polynomial.\nauthor:\n- Igor Nikonov\ntitle: Local transformations and functorial maps\n---\n\nKeywords:" +"---\nabstract: 'Hazard detection and avoidance is a key technology for future robotic small body sample return and lander missions. Current state-of-the-practice methods rely on high-fidelity, *a priori* terrain maps, which require extensive human-in-the-loop verification and expensive reconnaissance campaigns to resolve mapping uncertainties. We propose a novel safety mapping paradigm that leverages deep semantic segmentation techniques to predict landing safety directly from a single monocular image, thus reducing reliance on high-fidelity, *a priori* data products. 
We demonstrate precise and accurate safety mapping performance on real *in-situ* imagery of prospective sample sites from the OSIRIS-REx mission.'\nauthor:\n- 'Travis Driver[^1], \u00a0Kento Tomita[^2], \u00a0Koki Ho[^3], \u00a0and Panagiotis Tsiotras[^4]'\nbibliography:\n- 'references.bib'\ntitle: |\n Deep Monocular Hazard Detection for\\\n Safe Small Body Landing\n---\n\nIntroduction\n============\n\nHazard detection and avoidance (HD&A) is a key technology for future robotic small body sample return and lander missions. Current approaches rely on high-fidelity digital elevation maps (DEMs) derived from digital terrain models (DTMs), local topography and albedo maps, generated on the ground\u00a0[@berry2022scitech]. However, DTM construction involves extensive human-in-the-loop verification, carefully designed image acquisition plans, and expensive reconnaissance campaigns to resolve mapping uncertainties\u00a0[@barnouin2020; @palmer2022practical]. We, instead, propose a novel safety mapping paradigm that leverages Bayesian" +"---\nabstract: 'Estimating the Shannon entropy of a discrete distribution from which we have only observed a small sample is challenging. Estimating other information-theoretic metrics, such as the Kullback-Leibler divergence between two sparsely sampled discrete distributions, is even harder. Existing approaches to address these problems have shortcomings: they are biased, heuristic, work only for some distributions, and/or cannot be applied to all information-theoretic metrics. Here, we propose a fast, semi-analytical estimator for sparsely sampled distributions that is efficient, precise, and general. Its derivation is grounded in probabilistic considerations and uses a hierarchical Bayesian approach to extract as much information as possible from the few observations available. Our approach provides estimates of the Shannon entropy with precision at least comparable to the state of the art, and most often better. It can also be used to obtain accurate estimates of any other information-theoretic metric, including the notoriously challenging Kullback-Leibler divergence. Here, again, our approach performs consistently better than existing estimators.'\nauthor:\n- Angelo Piga\n- 'Lluc Font-Pomarol'\n- 'Marta Sales-Pardo'\n- Roger Guimer\u00e0\nbibliography:\n- 'sample.bib'\ntitle: 'Bayesian estimation of information-theoretic metrics for sparsely sampled distributions'\n---\n\nIntroduction\n============\n\nInformation theory is gaining momentum as a methodological framework to study complex" +"---\nabstract: 'Force Sensing and Force Control are essential to many industrial applications. Typically, a 6-axis Force/Torque (F/T) sensor is mounted between the robot\u2019s wrist and the end-effector in order to measure the forces and torques exerted by the environment onto the robot (the external wrench). Although a typical 6-axis F/T sensor can provide highly accurate measurements, it is expensive and vulnerable to drift and external impacts. Existing methods aiming at estimating the external wrench using only the robot\u2019s internal signals are limited in scope: for example, wrench estimation accuracy was mostly validated in free-space motions and simple contacts as opposed to tasks like assembly that require high-precision force control. 
Here we present a Neural Network-based method and argue that, by devoting particular attention to the training data structure, it is possible to accurately estimate the external wrench in a wide range of scenarios based solely on internal signals. As an illustration, we demonstrate a pin insertion experiment with 100-micron clearance and a hand-guiding experiment, both performed without external F/T sensors or joint torque sensors. Our result opens the possibility of equipping the existing 2.7 million industrial robots with Force Sensing and Force Control capabilities without any additional hardware.'" +"---\nabstract: 'In the framework of coupled 1D Gross-Pitaevskii equations, we explore the dynamics of a binary Bose-Einstein condensate where the intra-component interaction is repulsive, while the inter-component one is attractive. The existence regimes of stable self-trapped localized states in the form of symbiotic solitons have been analyzed. Imbalanced mixtures, where the number of atoms in one component exceeds the number of atoms in the other component, are considered in a parabolic potential and a box-like trap. When all the intra-species and inter-species interactions are repulsive, we numerically find a new type of symbiotic soliton resembling dark-bright solitons. A variational approach has been developed which allows us to find the stationary state of the system and the frequency of small-amplitude dynamics near the equilibrium. It is shown that the strength of inter-component coupling can be retrieved from the frequency of the localized state\u2019s vibrations.'\nauthor:\n- 'K. K. Ismailov$^{1,3}$, B. B. Baizakov$^1$, F. Kh. Abdullaev$^{1,2}$ and M. Salerno$^3$'\ntitle: 'Dynamics of imbalanced quasi-one-dimensional binary Bose-Einstein condensate in external potentials'\n---\n\nIntroduction\n============\n\nTwo-component Bose-Einstein condensates (BEC) may show a variety of interesting phenomena depending on the character and strength of intra-species and inter-species forces [@kevrekidis2016]. Among all possible settings the case of" +"---\nabstract: 'Stellar candidates in the Ursa Minor (UMi) dwarf galaxy have been found using a new Bayesian algorithm applied to *Gaia* EDR3 data. Five of these targets are located in the extreme outskirts of UMi, from $\\sim5$ to 12 elliptical half-light radii (r$_h$), where r$_h$(UMi) $= 17.32 \\pm 0.11$ arcmin, and have been observed with the GRACES high-resolution spectrograph at the Gemini-Northern telescope. Precise radial velocities ($\\sigma_{\\rm{RV}} < 2$ km s$^{-1}$) and metallicities ($\\sigma_{\\rm{\\FeH}} < 0.2$ dex) confirm their membership in UMi. Detailed analysis of the brightest and outermost star (Target\u00a01, at $\\sim12$ r$_h$) yields precision chemical abundances for the $\\alpha$- (Mg, Ca, Ti), odd-Z (Na, K, Sc), Fe-peak (Fe, Ni, Cr), and neutron-capture (Ba) elements. With data from the literature and APOGEE DR17, we find the chemical patterns in UMi are consistent with an outside-in star formation history that includes yields from core-collapse supernovae, asymptotic giant branch stars, and supernovae Ia. Evidence for a knee in the \\[$\\alpha$/Fe\\] ratios near $\\FeH\\sim-2.1$ indicates a low star formation efficiency similar to that in other dwarf galaxies. 
Detailed analysis of the surface number density profile shows evidence that UMi\u2019s outskirts have been populated by tidal effects, likely as" +"---\nabstract: 'We introduce a general description of localised distortions in active nematics using the framework of active nematic multipoles. We give the Stokesian flows for arbitrary multipoles in terms of differentiation of a fundamental flow response and describe them explicitly up to quadrupole order. We also present the response in terms of the net active force and torque associated with the multipole. This allows the identification of the dipolar and quadrupolar distortions that generate self-propulsion and self-rotation, respectively, and serves as a guide for the design of arbitrary flow responses. Our results can be applied both to defect loops in three-dimensional active nematics and to systems with colloidal inclusions. They reveal the geometry-dependence of the self-dynamics of defect loops and provide insights into how colloids might be designed to achieve propulsive or rotational dynamics, and, more generally, into how work might be extracted from active nematics. Finally, we also extend our analysis to two dimensions and to systems with chiral active stresses.'\nauthor:\n- 'Alexander J.H. Houston'\n- 'Gareth P. Alexander'\nbibliography:\n- 'ActiveNematicMultipoles.bib'\ntitle: 'Active Nematic Multipoles: Flow Responses and the Dynamics of Defects and Colloids'\n---\n\nIntroduction {#sec:intro}\n============\n\nActive liquid crystals model a wide range of materials," +"---\nabstract: 'Tuberculosis (TB) is still considered a leading cause of death and a substantial threat to global child health. Both TB infection and disease are curable using antibiotics. However, most children who die of TB are never diagnosed or treated. In clinical practice, experienced physicians assess TB by examining chest X-rays (CXR). Pediatric CXR has specific challenges compared to adult CXR, which makes TB diagnosis in children more difficult. Computer-aided diagnosis systems supported by Artificial Intelligence have shown performance comparable to experienced radiologist TB readings, which could ease mass TB screening and reduce clinical burden. We propose a multi-view deep learning-based solution which, by following a proposed template, aims to automatically regionalize and extract lung and mediastinal regions of interest from pediatric CXR images where key TB findings may be present. Experimental results have shown accurate region extraction, which can be used for further analysis to confirm the presence of TB findings and assess their severity. Code publicly available at: .'\nauthor:\n- 'Daniel Capell\u00e1n-Mart\u00edn'\n- 'Juan J. G\u00f3mez-Valverde'\n- 'Ramon Sanchez\u2011Jacob'\n- 'David Bermejo-Pel\u00e1ez'\n- 'Lara Garc\u00eda-Delgado'\n- 'Elisa L\u00f3pez-Varela'\n- 'Maria J. Ledesma-Carbayo'\nbibliography:\n- 'references.bib'\ntitle: 'Deep learning-based lung segmentation and automatic regional template in chest X-ray images for" +"---\nabstract: 'This study aims to improve the performance of automatically scoring student responses in science education. BERT-based language models have shown significant superiority over traditional NLP models in various language-related tasks. However, the science writing of students, including argumentation and explanation, is domain-specific. In addition, the language used by students is different from the language in journals and Wikipedia, which are training sources of BERT and its existing variants. 
All these suggest that a domain-specific model pre-trained using science education data may improve model performance. However, the ideal type of data to contextualize a pre-trained language model and improve its performance in automatically scoring students\u2019 written responses remains unclear. Therefore, we employ different data in this study to contextualize both BERT and SciBERT models and compare their performance on automatic scoring of assessment tasks for scientific argumentation. We use three datasets to pre-train the model: 1) journal articles in science education, 2) a large dataset of students\u2019 written responses (sample size over 50,000), and 3) a small dataset of students\u2019 written responses to scientific argumentation tasks. Our experimental results show that in-domain training corpora constructed from science questions and responses improve language model performance on a wide variety of downstream tasks." +"---\nabstract: 'In this paper, we study the identifiability and the estimation of the parameters of a copula-based multivariate model when the margins are unknown and are arbitrary, meaning that they can be continuous, discrete, or mixtures of continuous and discrete. When at least one margin is not continuous, the range of values determining the copula is not the entire unit square and this situation could lead to identifiability issues that are discussed here. Next, we propose estimation methods when the margins are unknown and arbitrary, using pseudo log-likelihood adapted to the case of discontinuities. In view of applications to large data sets, we also propose a pairwise composite pseudo log-likelihood. These methodologies can also be easily modified to cover the case of parametric margins. One of the main theoretical results is an extension to arbitrary distributions of known convergence results of rank-based statistics when the margins are continuous. As a by-product, under smoothness assumptions, we obtain that the asymptotic distributions of the estimation errors of our estimators are Gaussian. Finally, numerical experiments are presented to assess the finite-sample performance of the estimators, and the usefulness of the proposed methodologies is illustrated with a copula-based regression model for hydrological" +"---\nabstract: 'Networked control systems (NCSs) are an example of task-oriented communication systems, where the purpose of communication is real-time control of processes over a network. In the context of NCSs, with the processes sending their state measurements to the remote controllers, the deterioration of control performance due to network congestion can be partly mitigated by shaping the traffic injected into the network at the transport layer (TL). In this work, we conduct an extensive performance evaluation of selected TL protocols and show that existing approaches from communication and control theories fail to deliver sufficient control performance in realistic network scenarios. Moreover, we propose a new semantic-aware TL policy, which uses the process state information to filter the most relevant updates and the network state information to prevent delays due to network congestion. 
The proposed mechanism is shown to outperform all the considered TL protocols with respect to control performance.'\nauthor:\n- \nbibliography:\n- 'biblio.bib'\ntitle: |\n [Towards Semantic-Aware Transport Layer Protocols: A Control Performance Perspective]{}\\\n [^1]\n---\n\nIntroduction {#sec:intro}\n============\n\nIn the evolving post-Shannon 6G systems [@strinati20216g], the perspective on the network is shifting from reliable transmission of bits to delivering heterogeneous services with respect to application-specific goals." +"---\nabstract: 'Expressive text-to-speech (TTS) aims to synthesize speech with varying speaking styles to better reflect human speech patterns. In this study, we attempt to use natural language as a style prompt to control the styles in the synthetic speech, *e.g.*, \u201cSigh tone in full of sad mood with some helpless feeling\". Considering that there is no existing TTS corpus suitable for benchmarking this novel task, we first construct a speech corpus whose speech samples are annotated with not only content transcriptions but also style descriptions in natural language. Then we propose an expressive TTS model, named InstructTTS, which is novel in the following aspects: (1) We fully take advantage of self-supervised learning and cross-modal metric learning and propose a novel three-stage training procedure to obtain a robust sentence embedding model that can effectively capture semantic information from the style prompts and control the speaking style in the generated speech. (2) We propose to model acoustic features in a discrete latent space and train a novel discrete diffusion probabilistic model to generate vector-quantized (VQ) acoustic tokens rather than the commonly-used mel spectrogram. (3) We jointly apply mutual information (MI) estimation and minimization during acoustic model training" +"---\nabstract: 'We study the motion of charge carriers in curved Dirac materials, in the presence of a local Fermi velocity. An explicit parameterization of the latter emerging quantity for a nanoscroll cylindrical geometry is also provided, together with a discussion of related physical effects and observable properties.'\nauthor:\n- 'B.\u00a0Bagchi[^1]'\n- 'A.\u00a0Gallerati[^2]'\n- 'R.\u00a0Ghosh[^3]'\nbibliography:\n- 'bibliografia.bib'\ntitle: ' **Dirac equation in curved spacetime: the role of local Fermi velocity**'\n---\n\n**Keywords:** Dirac equation, graphene, local Fermi velocity, nanoscrolls.\n\nIntroduction\n============\n\nThe Dirac equation is one of the most relevant contributions in the history of quantum mechanics. Over the decades, its study has been conducted from different points of view [@thaller1992dirac; @bjorken1964relativistic; @Peskin:1995ev], with countless applications in many areas of physics. The reformulation of the Dirac formalism in curved backgrounds is an appealing field of research due to its remarkable applications in high-energy physics, quantum field theory, analogue gravity scenarios and condensed matter.\n\nA real, solid-state system in which to observe the properties of Dirac spinorial quantum fields in a curved space is provided by graphene and other two-dimensional materials. The latter have attracted great interest because of their electronic, mechanical and optical characteristics [@novoselov2004electric;" +"---\nabstract: 'The extensive damage caused by malware requires anti-malware systems to be constantly improved to prevent new threats. 
The current trend in malware detection is to employ machine learning models to aid in the classification process. We propose a new dataset with the objective of improving current anti-malware systems. The focus of this dataset is to improve host-based intrusion detection systems by providing API call sequences for thousands of malware samples executed in Windows 10 virtual machines. A tutorial on how to create and expand this dataset is provided along with a benchmark demonstrating how to use this dataset to classify malware. The data contains long sequences of API calls for each sample, and in order to create models that can be deployed on resource-constrained devices, three feature selection methods were tested. The principal innovation, however, lies in the multi-label classification system in which one sequence of APIs can be tagged with multiple labels describing its malicious behaviours.'\nauthor:\n- '\\'\nbibliography:\n- 'main.bib'\ntitle: 'Behavioural Reports of Multi-Stage Malware'\n---\n\n[Carpenter: Behavioural Reports of Multi-Stage Malware]{}\n\nIntroduction\n============\n\nThere are billions of malware attacks worldwide every year [@StatistaMalware2022]. Most malicious programs are created by cyber-criminals" +"---\nabstract: 'In this paper we present a novel method, *Knowledge Persistence* ($\\mathcal{KP}$), for faster evaluation of Knowledge Graph (KG) completion approaches. Current ranking-based evaluation is quadratic in the size of the KG, leading to long evaluation times and consequently a high carbon footprint. $\\mathcal{KP}$ addresses this by representing the topology of the KG completion methods through the lens of topological data analysis, concretely using persistent homology. The characteristics of persistent homology allow $\\mathcal{KP}$ to evaluate the quality of the KG completion while looking only at a fraction of the data. Experimental results on standard datasets show that the proposed metric is highly correlated with ranking metrics (Hits@N, MR, MRR). Performance evaluation shows that $\\mathcal{KP}$ is computationally efficient: in some cases, the evaluation time (validation+test) of a KG completion method has been reduced from 18 hours (using Hits@10) to 27 seconds (using $\\mathcal{KP}$), and on average (across methods & data) $\\mathcal{KP}$ reduces the evaluation time (validation+test) by $\\approx$ **99.96**%.'\nauthor:\n- Anson Bastos\n- Kuldeep Singh\n- Abhishek Nadgeri\n- Johannes Hoffart\n- Toyotaro Suzumura\n- Manish Singh\nbibliography:\n- 'bibliography.bib'\ntitle: 'Can Persistent Homology provide an efficient alternative for Evaluation of Knowledge Graph Completion Methods?'\n---\n\nIntroduction {#sec:introduction}\n============\n\nPublicly available" +"---\nabstract: 'This paper makes a case for accelerating lattice-based post-quantum cryptography (PQC) with memristor-based crossbars, and shows that these inherently error-tolerant algorithms are a good fit for noisy analog MAC operations in crossbars. We compare different NIST round-3 lattice-based candidates for PQC, and identify that SABER is not only a front-runner when executing on traditional systems, but it is also amenable to acceleration with crossbars. SABER is a module-LWR-based approach, which performs modular polynomial multiplications with rounding. We map the polynomial multiplications in SABER onto crossbars and show that analog dot-products can yield a $1.7-32.5\\times$ performance and energy efficiency improvement, compared to recent hardware proposals. 
This initial design combines the innovations in multiple state-of-the-art works \u2013 the algorithm in SABER and the memristive acceleration principles proposed in ISAAC (for deep neural network acceleration). We then identify the bottlenecks in this initial design and introduce several additional techniques to improve its efficiency. These techniques are synergistic and especially benefit from SABER\u2019s power-of-two modulo operation. First, we show that some of the software techniques used in SABER, which are effective on CPU platforms, are unhelpful in crossbar-based accelerators. Relying on simpler algorithms further improves our efficiency by $1.3-3.6\\times$." +"---\nabstract: 'Simulation-based inference (SBI) techniques are now an essential tool for the parameter estimation of mechanistic and simulatable models with intractable likelihoods. Statistical approaches to SBI such as approximate Bayesian computation and Bayesian synthetic likelihood have been well studied in the well-specified and misspecified settings. However, most implementations are inefficient in that many model simulations are wasted. Neural approaches such as sequential neural likelihood (SNL) have been developed that exploit all model simulations to build a surrogate of the likelihood function. However, SNL approaches have been shown to perform poorly under model misspecification. In this paper, we develop a new method for SNL that is robust to model misspecification and can identify areas where the model is deficient. We demonstrate the usefulness of the new approach on several illustrative examples.'\nauthor:\n- 'Ryan P. Kelly'\n- 'David J. Nott'\n- 'David T. Frazier'\n- 'David J. Warne'\n- Christopher Drovandi\nbibliography:\n- 'refs.bib'\ntitle: '**Misspecification-robust Sequential Neural Likelihood**'\n---\n\n[*Keywords: generative models, implicit models, likelihood-free inference, normalising flows, simulation-based inference*]{}\n\nIntroduction {#sec:intro}\n============\n\nStatistical inference for complex models can be challenging when the likelihood function is infeasible to evaluate many times. However, if the model is computationally inexpensive" +"---\nabstract: 'Ranking algorithms in traditional search engines are powered by enormous training data sets that are meticulously engineered and curated by a centralized entity. Decentralized peer-to-peer (p2p) networks such as torrenting applications and Web3 protocols deliberately eschew centralized databases and computational architectures when designing services and features. As such, robust search-and-rank algorithms designed for such domains must be engineered specifically for decentralized networks, and must be lightweight enough to operate on consumer-grade personal devices such as a smartphone or laptop computer. We introduce G-Rank, an unsupervised ranking algorithm designed exclusively for decentralized networks. We demonstrate that accurate, relevant ranking results can be achieved in fully decentralized networks without any centralized data aggregation, feature engineering, or model training. Furthermore, we show that such results are obtainable with minimal data preprocessing and computational overhead, and can still return highly relevant results even when a user\u2019s device is disconnected from the network. G-Rank is highly modular in design, is not limited to categorical data, and can be implemented in a variety of domains with minimal modification. 
The results herein show that unsupervised ranking models designed for decentralized p2p networks are not only viable, but worthy of further research. *Author\u2019s note: the experiments" +"---\nauthor:\n- Biswarup Mukhopadhyaya\n- 'Tousik Samui'\n- 'Ritesh K. Singh'\ntitle: Dynamic Radius Jet Clustering Algorithm\n---\n\n[abstract[ The study of standard QCD jets produced along with fat jets, which may appear as a result of the decay of a heavy particle, has become an essential part of collider studies. Current jet clustering algorithms, which use a fixed radius parameter for the formation of jets from the hadrons of an event, may be inadequate to capture the differing radius features. In this work, we develop an alternative jet clustering algorithm that allows the radius to vary dynamically based on local kinematics and distribution in the $\\eta$-$\\phi$ plane inside each evolving jet. We demonstrate the usefulness of this dynamic radius clustering algorithm through two Standard Model processes, and thereafter illustrate it for a scenario beyond the Standard Model at the 13\u00a0TeV LHC.]{}]{}\n\nIntroduction {#sec:intro}\n============\n\nThe physics extraction capacity of any high-energy collider depends crucially on the handling of coloured particles in various final states. These are produced as partons via either short-distance interactions of quantum chromodynamics (QCD) or electroweak processes [@Campbell:2006wx; @Ellis:1996mzs]. The partons, however, hadronize through" +"---\nauthor:\n- 'J. Bouvier'\n- 'A. Sousa'\n- 'K. Pouilly'\n- 'J.M. Almenara'\n- 'J.-F. Donati'\n- 'S. Alencar'\n- 'A. Frasca'\n- 'K. Grankin'\n- 'A. Carmona'\n- 'G. Pantolmos'\n- 'B. Zaire'\n- 'X. Bonfils'\n- 'A. Bayo'\n- 'L.M. Rebull'\n- 'J. Alonso-Santiago'\n- 'J. F. Gameiro'\n- 'N. J. Cook'\n- 'E. Artigau'\n- 'the Spirou Legacy Survey (SLS) consortium'\nbibliography:\n- 'gmaur\\_rev0.bib'\ndate: 'Received 2 November 2022; accepted 5 January 2023'\nsubtitle: 'A semester-long optical and near-infrared spectrophotometric monitoring campaign[^1][^2]'\ntitle: 'Stable accretion and episodic outflows in the young transition disk system GM Aurigae. '\n---\n\n[We investigate the structure and dynamics of magnetospheric accretion and associated outflows on a scale smaller than 0.1 au around the young transitional disk system GM Aur.]{} [We devised a coordinated observing campaign to monitor the variability of the system on timescales ranging from days to months, including partly simultaneous high-resolution optical and near-infrared spectroscopy, multiwavelength photometry, and low-resolution near-infrared spectroscopy, over a total duration of six months, covering 30 rotational cycles. We analyzed the photometric and line profile variability to characterize the accretion and ejection processes.]{} [The optical and near-infrared light curves indicate that the luminosity of" +"---\nabstract: 'The modern S-Matrix Bootstrap provides non-perturbative bounds on low-energy aspects of scattering amplitudes, leveraging the constraints of unitarity, analyticity and crossing. Typically, the solutions saturating such bounds also saturate the unitarity constraint as much as possible, meaning that they are almost exclusively elastic. This is expected to be unphysical in $d>2$ because of Aks\u2019 theorem. 
We explore this issue by adding inelasticity as an additional input, both using a primal approach in general dimensions, which extends the usual ansatz, and establishing a dual formulation in the 2d case. We then measure the effects on the low-energy observables, where we observe stronger bounds than in the standard setup.'\nauthor:\n- 'Ant\u00f3nio Antunes$^{a,b}$, Miguel S. Costa$^a$, Jos\u00e9 Pereira$^a$'\ntitle: 'Exploring Inelasticity in the S-Matrix Bootstrap '\n---\n\n[**Introduction.**]{} Scattering amplitudes are some of the most studied observables in quantum field theory. They encode the probability amplitudes of transitions between asymptotic states with a definite number of particles. The simplest non-trivial amplitude $_{\\textrm{in}}\\langle p_1,p_2|p_3,p_4\\rangle_{\\textrm{out}}$, corresponding to 2-2 scattering, has been extensively studied for decades, notably through Feynman perturbation theory, which extracts the connected amplitude through the LSZ procedure, which takes as input a four-point correlation function. Additionally, modern on-shell perturbative techniques" +"---\nabstract: 'We study null and timelike constant radii geodesics in the environment of an over-spinning putative Kerr-type naked singularity. We are particularly interested in two topics: first, the differences between the shadows of the naked rotating singularity and the Kerr black hole; and second, the spin-down effect of particles falling from the accretion disk. Our findings are as follows: around the naked singularity, the non-equatorial prograde orbits in the Kerr black hole remain intact up to a critical rotation parameter ($\\alpha=\\frac{4\\sqrt{2}}{3\\sqrt{3}}$) and cease to exist above this value. This has an important consequence for the shadow of the naked singularity if the shadow is registered by an observer on the rotation plane or close to it, as the shadow cannot be distinguished from that of a Kerr black hole viewed from the same angle. We also show that the timelike retrograde orbits in the equatorial plane immediately (after about an 8% increase in mass) reduce the spin parameter of the naked singularity from larger values to $\\alpha=1$, at which an event horizon appears. This happens because the retrograde orbits have a larger capture cross-section than the prograde ones. So if a naked singularity happens to have an" +"---\nauthor:\n- 'Jianyang\u00a0Qi,[!!]{}'\n- 'Noah Hood,'\n- 'Abigail Kopec,[!!]{}'\n- 'Yue Ma, Haiwen Xu, Min Zhong,'\n- 'Kaixuan\u00a0Ni[!!]{}'\nbibliography:\n- 'bibliography.bib'\ntitle: Low Energy Electronic Recoils and Single Electron Detection with a Liquid Xenon Proportional Scintillation Counter\n---\n\nIntroduction {#sec:intro}\n============\n\nDual-phase Liquid Xenon Time Projection Chambers (LXeTPCs) have traditionally been used in large-scale rare-event searches, and operate by detecting the prompt scintillation light (S1) and the proportional electroluminescence of ionization electrons (S2) from an energy deposition. However, these detectors have never achieved perfect charge collection efficiency in practice\u00a0[@XENON:2022ltv; @LZ:2022ufs]. Additionally, they display a background composed of delayed single electrons that can last $\\mathcal{O}(1)$\u00a0s after a large S2 signal\u00a0[@XENONCollaborationSS:2021sgk; @LUX:2020vbj; @Kopec:2021ccm]. This background can impact low-energy event searches that are only capable of producing S2s, such as those from Coherent Elastic Neutrino Nucleus Scattering (CE$\\nu$NS)\u00a0[@Ni:2021mwa]. 
Two main hypotheses for this background are that electrons are trapped on impurities and then released, or are trapped at the liquid-gas interface and extracted later than most of the S2 electrons. This raises the question of whether or not it is possible to make a detector with a sensitivity to single" +"---\nabstract: 'Communications between unmanned aerial vehicles (UAVs) play an important role in deploying aerial networks. Although some studies reveal that drone-based air-to-air (A2A) channels are relatively clear and thus can be modeled as free-space propagation, such an assumption may not be applicable to drones flying at low altitudes in built-up environments. In practice, low-altitude A2A channel modeling becomes more challenging in urban scenarios since buildings can obstruct the line-of-sight (LOS) path, and multipaths from buildings lead to additional losses. Therefore, we herein focus on modeling low-altitude A2A channels considering a generic urban deployment, where we exploit the small first Fresnel zone at the millimeter-wave (mmWave) band to approximately derive the LOS probability. Then, the path loss under different propagation conditions is investigated to obtain an integrated path loss model. In addition, we incorporate the impact of imperfect beam alignment on the path loss, where the relation between path loss fluctuation and beam misalignment level is modeled in an exponential form. Finally, comparisons with the 3GPP model show the effectiveness of the proposed analytical model. Numerical simulations in different environments and heights provide practical deployment guidance for aerial networks.'\nauthor:\n- \ntitle: 'Path Loss Analysis for Low-Altitude" +"---\nabstract: |\n We propose a multigrid method to solve the linear system of equations arising from a hybrid discontinuous Galerkin (in particular, a single face hybridizable, a hybrid Raviart\u2013Thomas, or a hybrid Brezzi\u2013Douglas\u2013Marini) discretization of a Stokes problem. Our analysis is centered around the augmented Lagrangian approach and we prove uniform convergence in this setting. Numerical experiments underline our analytical findings.\\\n Keywords. Augmented Lagrangian approach, hybrid discontinuous Galerkin, multigrid method, Stokes equation.\naddress:\n- 'Department of Mathematics Sciences, Soochow University, Suzhou, 215006, China'\n- 'School of Mathematics, University of Minnesota, 206 Church St SE, Minneapolis, MN, 55455, USA'\n- 'Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Mathematikon, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany'\n- 'School of Engineering Science, Lappeenranta\u2013Lahti University of Technology, P.O. Box 20, 53851 Lappeenranta, Finland'\nauthor:\n- Peipei Lu\n- Wei Wang\n- Guido Kanschat\n- Andreas Rupp\nbibliography:\n- 'MultigridStokes.bib'\ntitle: Homogeneous multigrid method for HDG applied to the Stokes equation\n---\n\nIntroduction\n============\n\nHybrid discontinuous Galerkin (HDG) methods have been a very active field of research in recent years, and they have been applied to many different partial differential equations (PDEs). For Stokes problems, one advantage of HDG schemes is" +"---\nabstract: |\n Safety in the automotive domain is a well-known topic, which has been in constant development in the past years. The complexity of new systems that add more advanced components in each function has opened new trends that have to be covered from the safety perspective. 
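The low-altitude A2A record above leans on the first Fresnel zone being small at mmWave frequencies. A quick worked example with the standard midpoint formula $r_1=\sqrt{\lambda d_1 d_2/(d_1+d_2)}$ makes the point concrete; the 28 GHz carrier and 200 m link below are illustrative assumptions, not values from that paper.

```python
# First Fresnel zone radius at the midpoint of a link:
#   r_1 = sqrt(lambda * d1 * d2 / (d1 + d2))
# Illustrative numbers only; 28 GHz and a 200 m link are assumptions.
import math

c = 3e8                      # speed of light, m/s
f = 28e9                     # carrier frequency, Hz (mmWave assumption)
lam = c / f                  # wavelength, m

d1 = d2 = 100.0              # transmitter and receiver each 100 m from midpoint
r1 = math.sqrt(lam * d1 * d2 / (d1 + d2))
print(f"wavelength = {lam * 1000:.1f} mm, max first-Fresnel radius = {r1:.2f} m")
# ~0.73 m: an obstruction less than a metre from the direct ray already
# matters, which is why the LOS region is so narrow at mmWave bands.
```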
In this case, not only the functions themselves have to be covered but also scenarios, which cover all relevant information of the vehicle environment. Many of them are still not sufficiently defined or considered. In this context, Safety of the Intended Functionality (SOTIF) emerges to ensure the system's safety when it might fail because of technological shortcomings or misuse by users.\n\n An identification of the plausible insufficiencies of ADAS/ADS functions has to be done to discover the potential triggering conditions that can lead to these unknown scenarios, which might result in hazardous behaviour. The main goal of this publication is the definition of a methodology to identify these triggering conditions, which has been applied to the collision avoidance function implemented in our self-developed mobile Hardware-in-Loop (HiL) platform.\nauthor:\n- 'V\u00edctor\u00a0J.\u00a0Exp\u00f3sito\u00a0Jim\u00e9nez'\n- Helmut\u00a0Martin\n- Christian\u00a0Schwarzl\n- Georg\u00a0Macher\n- Eugen\u00a0Brenner\nbibliography:\n- 'bibliography.bib'\ntitle: 'Triggering Conditions Analysis and for Validation of" +"---\nabstract: |\n In this paper, we use algebro-geometric methods in order to derive classification results for so-called $D$-bialgebra structures on the power series algebra $A[\\![z]\\!]$ for certain central simple non-associative algebras $A$. These structures are closely related to a version of the classical Yang-Baxter equation (CYBE) over $A$.\n\n If $A$ is a Lie algebra, we obtain new proofs for pivotal steps in the known classification of non-degenerate topological Lie bialgebra structures on $A[\\![z]\\!]$ as well as of non-degenerate solutions of the usual CYBE.\n\n If $A$ is associative, we achieve the classification of non-triangular topological balanced infinitesimal bialgebra structures on $A[\\![z]\\!]$ as well as of all non-degenerate solutions of an associative version of the CYBE.\naddress: |\n ETH Z\u00fcrich\\\n Department of Mathematics\\\n R\u00e4mistrasse 101\\\n 8092 Zurich\\\n Switzerland \nauthor:\n- Raschid Abedin\nbibliography:\n- 'Literatur.bib'\ntitle: 'Classification of $D$-bialgebra structures on power series algebras'\n---\n\nIntroduction\n============\n\nBackground and motivation {#background-and-motivation .unnumbered}\n-------------------------\n\nA Lie bialgebra $(L,\\delta)$ over a field $\\Bbbk$ consists of a Lie algebra $L$ over $\\Bbbk$ equipped with a skew-symmetric 1-cocycle $\\delta \\colon L \\to L \\otimes L$ such that the dual map $\\delta^*\\colon (L \\otimes L)^* \\to L^*$ restricted to $L^*\\otimes L^* \\subseteq (L\\otimes L)^*$ is a" +"---\nabstract: 'It is widely believed that typical finite families of $d \\times d$ matrices admit finite products that attain the joint spectral radius. This conjecture is supported by computational experiments and it naturally leads to the following question: are these spectrum maximizing products typically unique, up to cyclic permutations and powers? We answer this question negatively. As discovered by Horowitz around fifty years ago, there are products of matrices that always have the same spectral radius despite not being cyclic permutations of one another. 
We show that the simplest Horowitz products can be spectrum maximizing in a robust way; more precisely, we exhibit a small but nonempty open subset of pairs of $2 \\times 2$ matrices $(A,B)$ for which the products $A^2 B A B^2$ and $B^2 A B A^2$ are both spectrum maximizing.'\naddress: 'Department of Mathematics, The Pennsylvania State University'\nauthor:\n- Jairo Bochi and Piotr Laskawiec\ndate: 'First version: January, 2023. Revision: August, 2023.'\ntitle: Spectrum Maximizing Products are not generically unique\n---\n\nIntroduction\n============\n\nThe *joint spectral radius* of a family of linear operators was introduced by Rota and Strang [@rota-strang] in 1960 and later became a topic of intense research. It measures the maximal" +"---\nabstract: 'We provide a new perspective on shadow tomography by demonstrating its deep connections with the general theory of measurement frames. By showing that the formalism of measurement frames offers a natural framework for shadow tomography \u2014 in which \u201cclassical shadows\u201d correspond to unbiased estimators derived from a suitable dual frame associated with the given measurement \u2014 we highlight the intrinsic connection between standard state tomography and shadow tomography. Such perspective allows us to examine the interplay between measurements, reconstructed observables, and the estimators used to process measurement outcomes, while paving the way to assess the influence of the input state and the dimension of the underlying space on estimation errors. Our approach generalizes the method described in \\[H.-Y. Huang [*et al.*]{}, Nat. Phys. [**16**]{}, 1050 (2020)\\], whose results are recovered in the special case of covariant measurement frames. As an application, we demonstrate that a sought-after target of shadow tomography can be achieved for the entire class of tight rank-1 measurement frames \u2014 namely, that it is possible to accurately estimate a finite set of generic rank-1 bounded observables while avoiding the growth of the number of the required samples with the state dimension.'\nauthor:\n- 'L. Innocenti'" +"---\nauthor:\n- 'Jonathan W. Bartlett'\n- Camila Olarte Parra\n- Emily Granger\n- 'Ruth H. Keogh'\n- 'Erik W. van Zwet'\n- 'Rhian M. Daniel'\ntitle: 'G-formula for causal inference via multiple imputation'\n---\n\n**Keywords:** G-formula, multiple imputation, synthetic imputation\n\nIntroduction\n============\n\nThe collection of methods referred to as G-methods, developed by James Robins and co-workers, can provide valid inference for the effects of time-varying exposures or treatments in the presence of time-varying confounders\u2014variables that affect treatment over time and the outcome of interest\u2014even when these are affected by previous values of treatment [@naimi2017introduction]. One such method is parametric G-formula (sometimes known as G-computation). Parametric G-formula involves postulating models for the time-varying confounders and outcomes. The expected outcome under specified longitudinal treatment regimes of interest can then be estimated and contrasted.\n\nThe evaluation of G-formula estimators often involves intractable integrals. To overcome this in practice, G-formula implementations make use of Monte-Carlo integration (simulation) [@daniel2011gformula; @mcgrath2020gformula]. The Monte-Carlo error in the resulting estimator can be reduced by increasing the number of simulations (simulated individuals). 
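The Horowitz phenomenon quoted in the spectrum-maximizing abstract above is easy to check numerically: for $2\times 2$ matrices, the words $A^2BAB^2$ and $B^2ABA^2$ are mutual reversals, so they share trace and determinant and hence spectral radius. A minimal sketch (numpy assumed; not code from the paper):

```python
# Numerical check of the Horowitz identity mentioned above: for 2x2
# matrices, A^2 B A B^2 and B^2 A B A^2 always have equal trace and
# determinant, hence the same spectral radius.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A, B = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
    W1 = A @ A @ B @ A @ B @ B   # A^2 B A B^2
    W2 = B @ B @ A @ B @ A @ A   # B^2 A B A^2
    r1 = max(abs(np.linalg.eigvals(W1)))
    r2 = max(abs(np.linalg.eigvals(W2)))
    assert np.isclose(r1, r2), (r1, r2)
print("spectral radii agree for all 1000 random pairs")
```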
However, the number required to ensure this error is sufficiently small may be quite large - [@deStavola2015mediation] found in a data analysis that a Monte-Carlo sample" +"---\nabstract: 'Image anomaly detection (IAD) is an urgent issue that needs to be addressed in modern industrial manufacturing (IM). Recently, many advanced algorithms have been released, but their performance varies greatly due to non-uniform settings. That is, researchers find them difficult to compare because they are designed for different or specific cases in IM. To eliminate this problem, we first propose a uniform IAD setting to systematically assess the effectiveness of these algorithms, mainly considering three aspects of supervision level (unsupervised, fully supervised), learning paradigm (few-shot, continual, noisy label), and efficiency (memory usage, inference speed). Then, we skillfully construct a comprehensive image anomaly detection benchmark (IM-IAD), which includes 19 algorithms on 7 major datasets with the same setting. Our extensive experiments (17,017 total) provide new insights into the redesign or selection of the IAD algorithm under uniform conditions. Importantly, the proposed IM-IAD presents feasible challenges and future directions for further work. We believe that this work can have a significant impact on the IAD field. To ensure reproducibility and accessibility, our source codes are uploaded to the website: .'\nauthor:\n- 'Guoyang Xie$^{1}$,\u00a0 Jinbao Wang$^{1}$,\u00a0 Jiaqi Liu$^{1}$, Jiayi Lyu, Yong Liu, Chengjie Wang, Feng Zheng$^{*}$,\u00a0 and Yaochu Jin$^{*}$,\u00a0 [^1]" +"---\nbibliography:\n- 'refs-emergencePHENO.bib'\n---\n\nBased on Quantum Gravity arguments, it has been suggested that all kinetic terms of light particles below the UV cut-off could arise in the IR via quantum (loop) corrections. These loop corrections involve infinite towers of states becoming light (e.g. Kaluza-Klein or string towers). We study implications of this *Emergence Proposal* for fundamental scales in the Standard Model (SM). In this scheme all Yukawa couplings are of order one in the UV and small Yukawas for lighter generations appear via large anomalous dimensions induced by the towers of states. Thus, the observed hierarchies of quark and lepton masses are a reflection of the structure of towers of states that lie below the Quantum Gravity scale, $\\Lambda_{\\text{QG}}$. Small Dirac neutrino masses consistent with experimental observation appear due to the existence of a tower of SM singlet states of mass $m_{0}\\simeq Y_{\\nu_3}M_p\\simeq 7\\times 10^5$ GeV, opening up a new extra dimension, while the UV cut-off occurs at $\\Lambda_{\\text{QG}}\\lesssim 10^{14}$ GeV. Additional constraints relating the Electro-Weak (EW) and cosmological constant (c.c.) scales (denoted $M_{\\text{EW}}$ and $V_0$) appear if the Swampland condition $m_{\\nu_1}\\lesssim V_0^{1/4}$ is imposed (with $\\nu_1$ denoting the lightest neutrino), which itself" +"---\nabstract: 'Recently, inversion methods have been exploring the incorporation of additional high-rate information from pretrained generators (such as weights or intermediate features) to improve the refinement of inversion and editing results from embedded latent codes. While such techniques have shown reasonable improvements in reconstruction, they often lead to a decrease in editing capability, especially when dealing with complex images that contain occlusions, detailed backgrounds, and artifacts. 
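The Monte-Carlo remark in the G-formula passage above (the error shrinks as the number of simulated individuals grows) follows the usual $O(1/\sqrt{N})$ behaviour. A toy illustration, generic and deliberately not the parametric G-formula itself:

```python
# Toy illustration of the Monte-Carlo point made in the G-formula
# passage above: the error of a simulation-based mean decays roughly
# like 1/sqrt(N), so driving it "sufficiently small" can require a
# large simulated sample. Generic sketch, not the G-formula.
import numpy as np

rng = np.random.default_rng(1)
true_mean = 0.0
for n in [10**2, 10**4, 10**6]:
    draws = rng.normal(true_mean, 1.0, size=n)
    err = abs(draws.mean() - true_mean)
    print(f"N = {n:>9,d}   |MC estimate - truth| ~ {err:.5f}")
# Each 100-fold increase in N buys roughly one extra decimal digit.
```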
To address this problem, we propose a novel refinement mechanism called **Domain-Specific Hybrid Refinement** (DHR), which weighs the advantages and disadvantages of two mainstream refinement techniques. We find that weight modulation can achieve favorable editing results but is vulnerable to these complex image areas, while feature modulation is efficient at reconstruction. Hence, we divide the image into two domains and process them with these two methods separately. We first propose a Domain-Specific Segmentation module to automatically segment images into in-domain and out-of-domain parts according to their invertibility and editability without additional data annotation, where our hybrid refinement process aims to maintain the editing capability for in-domain areas and improve fidelity for both of them. We achieve this through Hybrid Modulation Refinement, which respectively refines these two domains by weight modulation and" +"---\nabstract: 'In the development of acoustic signal processing algorithms, their evaluation in various acoustic environments is of utmost importance. In order to advance evaluation in realistic and reproducible scenarios, several high-quality acoustic databases have been developed over the years. In this paper, we present another complementary database of acoustic recordings, referred to as the Multi-arraY Room Acoustic Database (MYRiAD). The MYRiAD database is unique in its diversity of microphone configurations suiting a wide range of enhancement and reproduction applications (such as assistive hearing, teleconferencing, or sound zoning), the acoustics of the two recording spaces, and the variety of contained signals including 1214 room impulse responses (RIRs), reproduced speech, music, and stationary noise, as well as recordings of live cocktail parties held in both rooms. The microphone configurations comprise a dummy head (DH) with in-ear omnidirectional microphones, two behind-the-ear (BTE) pieces equipped with 2 omnidirectional microphones each, 5 external omnidirectional microphones (XMs), and two concentric circular microphone arrays (CMAs) consisting of 12 omnidirectional microphones in total. The two recording spaces, namely the SONORA Audio Laboratory (SAL) and the Alamire Interactive Laboratory (AIL), have reverberation times of and , respectively. Audio signals were reproduced using 10 movable loudspeakers in the SAL" +"---\nabstract: 'The spectral index images of the jet in the nearby radio galaxy M87 have previously been shown with Very Long Baseline Interferometric arrays at 2-43GHz. They exhibit flattening of the spectra at the location of the inner (central) spine and toward the outer ridges. This could imply optical depth effects, a lower energy cutoff or stratification of the emitting particles energy distribution. In this paper we employ simulations of multifrequency VLBI observations of the M87 radio jet with various model brightness distributions. CLEAN deconvolution errors produce significant features in the observed images. For intensity images they result in the appearance of the inner ridge line in the intrinsically edge brightened jet models. For spectral index images they flatten the spectra in a series of stripes along the jet. Another bias encountered in our simulations is steepening of the spectra in low surface brightness jet regions. These types of imaging artefacts do not depend on the model considered. 
We propose a method for the compensation of the systematics using only the observed data.'\nauthor:\n- '\\'\nbibliography:\n- 'nee.bib'\ndate: 'Accepted \u2026Received \u2026; in original form \u2026'\n---\n\n\\[firstpage\\]\n\ngalaxies: active\u00a0\u2013 radio continuum: galaxies\u00a0\u2013 galaxies: jets\u00a0\u2013 methods: data" +"---\nabstract: 'Most of today\u2019s communication systems are designed to target reliable message recovery after receiving the entire encoded message (codeword). However, in many practical scenarios, the transmission process may be interrupted before receiving the complete codeword. This paper proposes a novel rateless autoencoder (AE)-based code design suitable for decoding the transmitted message before the noisy codeword is fully received. Using particular dropout strategies applied during the training process, rateless AE codes allow trading off between decoding delay and reliability, providing a graceful improvement of the latter with each additionally received codeword symbol. The proposed rateless AEs significantly outperform the conventional AE designs for scenarios where it is desirable to trade off reliability for lower decoding delay.'\nauthor:\n- \ntitle: |\n Rateless Autoencoder Codes:\\\n Trading off Decoding Delay and Reliability\\\n [^1] \n---\n\nIntroduction {#intro}\n============\n\nThe design of short block-length error-correcting codes for unpredictable and time-varying wireless channels still remains a challenge [@Shirvanimoghaddam_2018]. Particularly challenging is the design of codes for channels experiencing prolonged deep fades or even complete channel failures that prevent the receiver from receiving the complete (noisy) codeword. Such channels, referred to as *dying channels* in [@Zeng_2008] and [@Varshney_2012], arise in various communication systems, e.g., due" +"---\nabstract: 'In this work, we create artistic closed loop curves that trace out images and 3D shapes, which we then hide in musical audio as a form of steganography. We use traveling salesperson art to create artistic plane loops to trace out image contours, and we use Hamiltonian cycles on triangle meshes to create artistic space loops that fill out 3D surfaces. Our embedding scheme is designed to faithfully preserve the geometry of these loops after lossy compression, while keeping their presence undetectable to the audio listener. To accomplish this, we hide each dimension of the curve in a different frequency, and we perturb a sliding window sum of the magnitude of that frequency to best match the target curve at that dimension, while hiding scale information in that frequency\u2019s phase. In the process, we exploit geometric properties of the curves to help to more effectively hide and recover them. Our scheme is simple and encoding happens efficiently with a nonnegative least squares framework, while decoding is trivial. We validate our technique quantitatively on large datasets of images and audio, and we show results of a crowd sourced listening test that validate that the hidden information is indeed unobtrusive.'" +"---\nabstract: 'Semantic parsing is a means of taking natural language and putting it in a form that a computer can understand. There has been a multitude of approaches that take natural language utterances and form them into lambda calculus expressions - mathematical functions to describe logic. 
Here, we experiment with a sequence to sequence model to take natural language utterances, convert those to lambda calculus expressions, which can then be parsed, and place them in an XML format that can be used by a finite state machine. Experimental results show that we can have a high accuracy model such that we can bridge the gap between technical and nontechnical individuals in the robotics field.'\nauthor:\n- |\n Jake Imyak\\\n `imyak.1@osu.edu`\\\n Parth Parekh\\\n `parekh.86@osu.edu`\\\n Cedric McGuire\\\n `mcguire.389@osu.edu`\\\ndate: 'December 10th, 2021'\ntitle: Underwater Robotics Semantic Parser Assistant\n---\n\nCredits\n=======\n\nJake Imyak was responsible for the creation of the 1250 dataset terms and finding the RNN encoder/decoder model. This took 48 hours. Cedric McGuire was responsible for the handling of the output logical form via the implementation of the Tokenizer and Parser. This took 44 hours. Parth Parekh assembled the Python structure for the behavior tree as well as created the" +"---\nauthor:\n- 'Xiao Wang,'\n- 'Chi Tian[!!]{},'\n- and Fa Peng Huang\nbibliography:\n- 'ref.bib'\ntitle: 'Model-dependent analysis method for energy budget of the cosmological first-order phase transition'\n---\n\nIntroduction\n============\n\nThe cosmological first-order phase transition (FOPT) and its associated phase transition gravitational waves (GWs) open a new window to explore many fundamental problems in particle cosmology, such as electroweak baryogenesis\u00a0[@Trodden:1998ym; @Morrissey:2012db] and dark matter\u00a0[@Baker:2019ndr; @Chway:2019kft; @Huang:2017kzu; @Huang:2017rzf; @Elor:2021swj], etc. During an FOPT, GW signals can be generated by bubble collisions\u00a0[@Kosowsky:1991ua; @Kosowsky:1992vn; @Huber:2008hg], sound waves\u00a0[@Hindmarsh:2013xza; @Hindmarsh:2015qta; @Hindmarsh:2017gnf], and turbulence\u00a0[@Kosowsky:2001xp; @Caprini:2009yp; @RoperPol:2019wvy]. Future GW experiments, such as TianQin\u00a0[@TianQin:2015yph; @TianQin:2020hid], LISA\u00a0[@LISA:2017pwj], Taiji\u00a0[@Hu:2017mde], and Big Bang Observatory (BBO)\u00a0[@Corbin:2005ny], among others, may be able to detect phase transition GW signals and reveal various puzzles about our Universe. Recent studies\u00a0[@Hindmarsh:2013xza; @Hindmarsh:2015qta; @Hindmarsh:2017gnf] have shown that the sound wave is the dominant source of phase transition GWs in a thermal FOPT. To finally pin down the underlying physics of an FOPT, we need precise quantification of the GW spectra and their signal-to-noise ratio in future GW experiments. To that end, obtaining accurate estimations of phase transition parameters that determine the GW spectra becomes critical. The
The displacement field due to a moment tensor source can be computed using the spatial derivative of the elastodynamic Green\u2019s function. The existing matrix-based implementation of the integral equation is computationally inefficient to model the wavefield in a three-dimensional earth. An integral equation for the particle displacement is, hence, formulated in a matrix-free manner through the application of the Fourier transform. The biconjugate gradient stabilized method is used to iteratively obtain the solution of this equation. We apply the numerical scheme to three different models in order of increasing geological complexity and obtain the elastic displacement" +"---\nabstract: 'The inclusion of long-range couplings in the Kitaev chain is shown to modify the universal scaling of topological states close to the critical point. By means of the scattering approach, we prove that the Majorana states *soften*, becoming increasingly delocalised at a universal rate which is only determined by the interaction range. This edge mechanism can be related to a change in the value of the bulk topological index at criticality, upon careful redefinition of the latter. The critical point turns out to be topologically akin to the trivial phase rather than interpolating between the two phases. Our treatment moreover showcases how various topological aspects of quantum models can be investigated analytically.'\nauthor:\n- Alessandro Tarantola\n- Nicol\u00f2 Defenu\nbibliography:\n- 'bib.bib'\ntitle: 'Softening of Majorana edge states by long-range couplings'\n---\n\nIntroduction {#sec:Intro}\n============\n\nEfficient quantum computing is probably the primary goal of modern physics research and the race for quantum advantage involves research groups all around the globe[@higgins2007fundamental; @carlo2009demonstration; @monroe2013trapped; @debnath2016small; @barends2014universal; @ofek2016extending; @arute2019quantum]. The first spark to this intense research activity came from the formulation of Shor\u2019s algorithm for prime factorization[@shor1994algorithms; @shor1997polynomial], which was followed by a large number of influential theoretical proposals[@cirac1995quantum; @beckman1996efficient; @lloyd1996universal;" +"---\nabstract: 'Challenging the Nvidia monopoly, dedicated AI-accelerator chips have begun emerging for tackling the computational challenge that the inference and, especially, the training of modern deep neural networks (DNNs) poses to modern computers. The field has been ridden with studies assessing the performance of these contestants across various DNN model types. However, AI-experts are aware of the limitations of current DNNs and have been working towards the fourth AI wave which will, arguably, rely on more biologically inspired models, predominantly on spiking neural networks (SNNs). At the same time, GPUs have been heavily used for simulating such models in the field of computational neuroscience, yet AI-chips have not been tested on such workloads. The current paper aims at filling this important gap by evaluating multiple, cutting-edge AI-chips (Graphcore IPU, GroqChip, Nvidia GPU with Tensor Cores and Google TPU) on simulating a highly biologically detailed model of a brain region, the inferior olive (IO). This IO application stress-tests the different AI-platforms for highlighting architectural tradeoffs by varying its compute density, memory requirements and floating-point numerical accuracy. 
Our performance analysis reveals that the simulation problem maps extremely well onto the GPU and TPU architectures, which for networks of 125,000 cells leads" +"---\nauthor:\n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \nbibliography:\n- 'epj\\_st\\_scordo.bib'\ntitle: New opportunities for kaonic atoms measurements from CdZnTe detectors\n---\n\nIntroduction {#sec_intro}\n============\n\nKaonic atoms are formed when a $\\mathrm{K^-}$ is moderated inside a target until it reaches a low enough kinetic energy to be stopped, replacing one of the outer electrons and forming an exotic atom in a highly excited state. The kaonic atom then undergoes atomic cascade to the ground state. These systems provide an ideal tool to study the low-energy regime of Quantum Chromodynamics (QCD) since, due to the much heavier $\\mathrm{K^-}$ mass with respect to the $\\mathrm{e^-}$ one, the lower levels are close enough to the nucleus to be influenced by the short-range strong interaction between the nucleus and the $\\mathrm{K^-}$ [@Napolitano:2022eik].\\\nKaonic atoms have been intensively studied in the 1970s and 1980s with a series of measurements, still representing today the main database for low-energy antikaon-nucleon studies [@Davies:1979; @Izycki:1980; @Bird:1983; @Wiegand:1971zz; @Baird:1983ub; @Friedman:1994hx]. More" +"---\nabstract: 'Extreme events are unusual and rare large-amplitude fluctuations that can occur unexpectedly in nonlinear dynamical systems. Events above the extreme event threshold of the probability distribution of a nonlinear process characterize extreme events. Different mechanisms for the generation of extreme events and their prediction measures have been reported in the literature. [ Based on the properties of extreme events, such as their rarity of occurrence and extreme amplitude, various studies have shown that extreme events are both linear and nonlinear in nature.]{} Interestingly, in this work, we report on a special class of extreme events which are nonchaotic and nonperiodic. These nonchaotic extreme events appear in between the quasi-periodic and chaotic dynamics of the system. We report the existence of such extreme events with various statistical measures and characterization techniques.'\nauthor:\n- 'Premraj Durairaj$^1$, Sathiyadevi Kanagaraj$^1$, Suresh Kumarasamy$^1$, Karthikeyan Rajagopal$^{1,2}$'\ntitle: 'Emergence of extreme events in a quasi-periodic oscillator'\n---\n\nExtreme events are unanticipated, rare events that occur in many natural and engineering systems. Extreme events (EE) can exist in various forms, including floods, cyclones, droughts, pandemics, power outages, material ruptures, explosions, chemical contamination, and stock market crashes, among others [@albeverio06]. 
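The thresholding idea in the extreme-events abstract above can be made concrete. A convention common in this literature, assumed here rather than taken from that particular paper, classifies a peak as extreme when it exceeds the mean by several (often 4 to 8) standard deviations:

```python
# Sketch of the thresholding step described above. The choice
# mean + 8*sigma is a convention common in the extreme-event
# literature, assumed here rather than taken from this paper.
import numpy as np

rng = np.random.default_rng(2)
# heavy-tailed stand-in for peak amplitudes of some nonlinear process
peaks = rng.standard_t(df=3, size=100_000)

threshold = peaks.mean() + 8 * peaks.std()
events = peaks[peaks > threshold]
print(f"threshold = {threshold:.2f}, "
      f"{events.size} of {peaks.size} peaks flagged as extreme "
      f"({100 * events.size / peaks.size:.3f}%)")
```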
Such events have a" +"[**Homoclinic orbit and the violation of the chaos bound around a black hole with anisotropic matter fields** ]{}\\\n\n[$\\mbox{Soyeon \\,\\, Jeong}^{\\dag}$]{}[^1], [$\\mbox{Bum-Hoon \\,\\, Lee}^{\\S\\dag}$]{}[^2], [$\\mbox{Hocheol \\,\\, Lee}^{\\dag}$]{}[^3], [$\\mbox{Wonwoo \\,\\, Lee}^{\\S}$]{}[^4]\\\n\n[\u00a7*Center for Quantum Spacetime, Sogang University, Seoul 04107, Korea*]{}\\\n[*Department of Physics, Sogang University, Seoul 04107, Korea*]{}\\\n\n[**Abstract**]{}\n\n[ We study the homoclinic orbit and the violation of chaos bound, which are obtained by particle motions around a black hole that coexist with anisotropic matter fields. The homoclinic one is associated with an unstable local maximum of the effective potential. By perturbing a particle located slightly away from the homoclinic one, we numerically compute Lyapunov exponents indicating the sensitivity of the initial value. Our results demonstrate that the violation of the chaos bound increases with higher angular momentum, and the anisotropic matter gives rise to violating the chaos bound further, even in the case of the nonextremal black hole. We utilize the Hamiltonian-Jacobi formalism to explicitly illustrate how the geodesic motion of a particle can be integrable in the procedure of obtaining our findings.]{}\n\nIntroduction \\[sec1\\]\n=====================\n\nBlack holes are not only theoretically predicted in the theory of gravitation, but also become real celestial objects to exist in the Universe" +"---\nabstract: 'For a finite simple graph $G$, the bunkbed graph $G^\\pm$ is defined to be the product graph $G\\square K_2$. We will label the two copies of a vertex $v\\in V(G)$ as $v_-$ and $v_+$. The bunkbed conjecture, posed by Kasteleyn, states that for independent bond percolation on $G^\\pm$, percolation from $u_-$ to $v_-$ is at least as likely as percolation from $u_-$ to $v_+$, for any $u,v\\in V(G)$. Despite the plausibility of this conjecture, so far the problem in full generality remains open. Recently, Hutchcroft, Nizi\u0107-Nikolac, and Kent gave a proof of the conjecture in the $p\\uparrow 1$ limit. Here we present a new proof of the bunkbed conjecture in this limit, working in the more general setting of allowing different probabilities on different edges of $G^\\pm$.'\naddress: 'Department of Pure Mathematics and Mathematical Statistics (DPMMS), University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom'\nauthor:\n- Lawrence Hollom\nbibliography:\n- 'main.bib'\ntitle: 'A new proof of the bunkbed conjecture in the $p\\uparrow 1$ limit'\n---\n\nIntroduction {#sec:intro}\n============\n\nIn this introduction we state two forms of the bunkbed conjecture, discuss briefly the known results about special cases of the conjecture, and then proceed to state the" +"---\nabstract: 'We introduce a new Bravais lattice determination algorithm. SELLA is a straight-forward algorithm and a program for determining Bravais lattice type based on Selling (Delone) reduction. It is a complete, closed solution, and it provides a clear metric of fit to each type.'\nauthor:\n- 'Herbert J.'\n- 'Nicholas K.'\nbibliography:\n- 'Reduced.bib'\ntitle: 'SELLA - A Program for Determining Bravais Lattice Types'\n---\n\n[**]{}\\\n\n[Bernstein]{}\n\n[Sauter]{}\n\nA method for determining likely Bravais lattice types based on Selling (Delone) reduction. It is a complete, closed solution.\n\n[**Note:**]{} Boris Delaunay in his later publications used the Russian version of his surname: Delone. 
We will follow that choice.\\\n\nIntroduction\n============\n\nWe introduce a new Bravais lattice determination algorithm. The Bravais lattice types were created by . and developed methods for the identification of the Bravais lattice type of a crystal using the measured unit cell dimensions; their methods were exact only if the cell parameters exactly corresponded to the actual type [@Patterson1957]. Delone discussed the issues of scalars that have nearly zero values, but the broader issues of other measurement errors were not discussed. review the literature on the efforts to create methods to utilize data that contain unavoidable measurement" +"---\nabstract: 'Siamese networks are one of the most trending methods to achieve self-supervised visual representation learning (SSL). Since hand labeling is costly, SSL can play a crucial part by allowing deep learning to train on large unlabeled datasets. Meanwhile, Neural Architecture Search (NAS) is becoming increasingly important as a technique to discover novel deep learning architectures. However, early NAS methods based on reinforcement learning or evolutionary algorithms suffered from ludicrous computational and memory costs. In contrast, differentiable NAS, a gradient-based approach, has the advantage of being much more efficient and has thus retained most of the attention in the past few years. In this article, we present NASiam, a novel approach that uses for the first time differentiable NAS to improve the multilayer perceptron projector and predictor (encoder/predictor pair) architectures inside siamese-networks-based contrastive learning frameworks (e.g., SimCLR, SimSiam, and MoCo) while preserving the simplicity of previous baselines. We crafted a search space designed explicitly for multilayer perceptrons, inside which we explored several alternatives to the standard ReLU activation function. We show that these new architectures allow ResNet backbone convolutional models to learn strong representations efficiently. NASiam reaches competitive performance in both small-scale (i.e., CIFAR-10/CIFAR-100) and large-scale (i.e., ImageNet) image" +"---\nabstract: 'Sinc-collocation methods are known to be efficient for Fredholm integral equations of the second kind, even if functions in the equations have endpoint singularity. However, existing methods have the disadvantage of inconsistent collocation points. This inconsistency complicates the implementation of such methods, particularly for large-scale problems. To overcome this drawback, this study proposes another Sinc-collocation methods with consistent collocation points. The results of a theoretical error analysis show that the proposed methods have the same convergence property as existing methods. Numerical experiments suggest the superiority of the proposed methods in terms of implementation and computational cost.'\nauthor:\n- 'Tomoaki Okayama[^1]'\ntitle: 'Sinc-collocation methods with consistent collocation points for Fredholm integral equations of the second kind'\n---\n\nIntroduction\n============\n\nThis paper is concerned with Fredholm integral equations of the second kind of the following form: $$u(t) - \\int_a^b k(t, s)u(s) {\\,\\mathrm{d}}s = g(t),\n\\quad a\\leq t\\leq b,\n\\label{eq:Fredholm}$$ where $k$ and $g$ are given continuous functions, and $u$ is the solution to be determined. Most numerical methods provided in the literature do not perform well when the functions $k$ and $g$ have derivative singularity at the endpoints. 
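To make the displayed Fredholm equation concrete, here is a generic Nystrom (quadrature) discretization, deliberately not the paper's Sinc-collocation scheme: the integral is replaced by trapezoid weights and the resulting linear system is solved directly.

```python
# A generic Nystrom discretization of the second-kind Fredholm equation
# displayed above, u(t) - int_a^b k(t,s) u(s) ds = g(t): approximate the
# integral with trapezoid weights and solve (I - K diag(w)) u = g. This
# is a baseline illustration, NOT the paper's Sinc-collocation method.
import numpy as np

a, b, n = 0.0, 1.0, 200
t = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1)); w[0] *= 0.5; w[-1] *= 0.5  # trapezoid

# Example with a known answer: k(t,s) = t*s and u(t) = t gives
# g(t) = t - t * int_0^1 s^2 ds = (2/3) t.
k = np.outer(t, t)
g = (2.0 / 3.0) * t

u = np.linalg.solve(np.eye(n) - k * w, g)   # w broadcasts over columns
print("max error vs exact u(t)=t:", np.abs(u - t).max())
```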
To overcome the difficulty, Rashidinia and Zarebnia\u00a0[@RZ1] proposed a Sinc-collocation method," +"---\nabstract: 'Pin fins are imperative in the cooling of turbine blades. The designs of pin fins, therefore, have seen significant research in the past. With the developments in metal additive manufacturing, novel design approaches toward complex geometries are now feasible. To that end, this article presents a Bayesian optimization approach for designing inline pins that can achieve low pressure loss. The pin-fin shape is defined using featurized (parametrized) piecewise cubic splines in 2D. The complexity of the shape is dependent on the number of splines used for the analysis. From a method development perspective, the study is performed using three splines. Owing to this piece-wise modeling, a unique pin fin design is defined using five features. After specifying the design, a computational fluid dynamics-based model is developed that computes the pressure drop during the flow. Bayesian optimization is carried out on a Gaussian processes-based surrogate to obtain an optimal combination of pin-fin features to minimize the pressure drop. The results show that the optimization tends to approach an aerodynamic design leading to low pressure drop, corroborating the existing knowledge. Furthermore, multiple iterations of optimizations are conducted with varying degrees of input data. The results reveal that a convergence" +"---\nabstract: 'Improving the deployment efficiency of transformer-based language models has been challenging given their high computation and memory cost. While INT8 quantization has recently been shown to be effective in reducing both the memory cost and latency while preserving model accuracy, it remains unclear whether we can leverage INT4 (which doubles peak hardware throughput) to achieve further latency improvement. In this study, we explore the feasibility of employing INT4 weight and activation (W4A4) quantization for language models. Our findings indicate that W4A4 quantization introduces no to negligible accuracy degradation for encoder-only and encoder-decoder models, but causes a significant accuracy drop for decoder-only models. To materialize the performance gain using W4A4, we develop a highly-optimized end-to-end W4A4 encoder inference pipeline supporting different quantization strategies. Our INT4 pipeline is $8.5\\times$ faster for latency-oriented scenarios and up to $3\\times$ for throughput-oriented scenarios compared to the inference of FP16, and improves the SOTA BERT INT8 performance from FasterTransformer by up to $1.7\\times$. We provide insights into the failure cases when applying W4A4 to decoder-only models, and further explore the compatibility of INT4 quantization with other compression methods, like pruning and layer reduction.'\nauthor:\n- DeepSpeed\ntitle: 'Random-LTD: Random and Layerwise Token Dropping Brings
In this paper, we choose a suitable singular metric $g_0$ for ${\\check g}_H$ and clarify the structure of the curvature surfaces: to what kind of subset of $\\mathbb{R}^2$ the curvature surfaces extend analytically beyond the regular set of $g_0$; we explicitly determine the singularities and the points of infinity of $g_0$ for the surface. In this case, all principal curvature lines in the extended surface are expressed by a frame field of $\\mathbb{R}^4$ induced on the surface from the hypersurfaces, and they lie on some standard $2$-spheres $\\mathbb{S}^2$, respectively. We also provide a general method of constructing an approximation of such frame fields, and obtain the figures of those lines including the singular points of" +"---\nabstract: 'Hierarchical Clustering is a popular unsupervised machine learning method with decades of history and numerous applications. We initiate the study of [*differentially private*]{} approximation algorithms for hierarchical clustering under the rigorous framework introduced by\u00a0@dasgupta2016cost. We show strong lower bounds for the problem: that any $\\epsilon$-DP algorithm must exhibit $O(|V|^2/ \\epsilon)$-additive error for an input dataset $V$. Then, we exhibit a polynomial-time approximation algorithm with $O(|V|^{2.5}/ \\epsilon)$-additive error, and an exponential-time algorithm that meets the lower bound. To overcome the lower bound, we focus on the stochastic block model, a popular model of graphs, and, with a separation assumption on the blocks, propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly. Finally, we perform an empirical study of our algorithms and validate their performance.'\nauthor:\n- |\n Jacob Imola[^1]\\\n UC San Diego\\\n \n- |\n Alessandro Epasto\\\n Google\\\n \n- |\n Mohammad Mahdian\\\n Google\\\n \n- |\n Vincent Cohen-Addad\\\n Google\\\n \n- |\n Vahab Mirrokni\\\n Google\\\n \nbibliography:\n- 'citations.bib'\ntitle: 'Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees'\n---\n\nIntroduction {#sec:introduction}\n============\n\nHierarchical Clustering is a staple of unsupervised machine learning with more than 60 years of history\u00a0[@ward1963hierarchical]. Contrary to [*flat*]{} clustering methods (such" +"---\nabstract: 'We investigate predictions of the trilinear Higgs self-coupling with radiative corrections in the context of the Inert Doublet Model. The triple Higgs vertex is computed at the one-loop level based on the on-shell renormalization scheme. We calculate its possible deviation from the predictions within the standard model, taking into account all relevant theoretical and experimental constraints, including dark matter searches and the latest bounds on the branching fraction of the Higgs boson decaying to invisible particles. By scanning the model\u2019s parameter space, we find that the deviation in the triple Higgs boson self-coupling from standard model expectations can be substantial, exceeding 100% in certain regions of the parameter space.'\nauthor:\n- |\n Jaouad El Falaki$^{1\\,}$[^1]\\\n [*$^1$ LPTHE, Physics Department, Faculty of Sciences, Ibnou Zohr University, P.O.B. 
8106 Agadir, Morocco.*]{}\\\n *I would like to dedicate this paper to my sister Fatima, who is bravely fighting against cancer*\nbibliography:\n- 'biblio.bib'\ntitle: |\n **Revisiting one-loop corrections to the trilinear Higgs boson self-coupling in the Inert Doublet Model\\\n **\n---\n\n=1\n\nIntroduction {#sec:introduction}\n============\n\nA great achievement in the history of high energy physics was made on July 4, 2012, with the discovery of the Higgs boson by ATLAS and CMS," +"---\nabstract: 'We investigate five English benchmark datasets (on the superGLUE leaderboard) and two Swedish datasets for bias, along multiple axes. The datasets are the following: , , , , , Swedish , and SWEDN. Bias can be harmful and it is known to be common in data, which models learn from. In order to mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to estimate and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common. Hence, we also contribute a new, large Swedish bias-labeled dataset (of 2 million samples), translated from the English version, and train the model on it. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We make the codes, model, and new dataset publicly available.'\nauthor:\n- |\n \\\n Tosin Adewumi\\*^+^, Isabella S\u00f6dergren^++^, Lama Alkhaled^+^, Sana Sabah Sabry^+^,\\\n Foteini Liwicki^+^ and Marcus Liwicki^+^\\\n ^+^Machine Learning Group, EISLAB, ^++^Digital Services and Systems\\\n Lule\u00e5 University of Technology, Sweden\\\nbibliography:\n- 'ranlp2023.bib'\ntitle: 'Bipol: Multi-axes Evaluation of Bias with Explainability in Benchmark Datasets'\n---\n\n**Caution: This paper contains examples, from datasets, of what" +"---\nabstract: 'We present a high time resolution, multi-frequency linear polarization analysis of Very Large Array (VLA) radio observations during some of the brightest radio flaring (${\\sim} 1{\\text{\\,Jy}\\xspace}$) activity of the 2015 outburst of V404 Cygni. The VLA simultaneously captured the radio evolution in two bands (each with two 1 GHz base-bands), recorded at 5/7 and 21/26, allowing for a broadband polarimetric analysis. Given the source\u2019s high flux densities, we were able to measure polarization on timescales of ${\\sim}13\\,$minutes, constituting one of the highest temporal resolution radio polarimetric studies of a black hole X-ray binary (BHXB) outburst to date. Across all base-bands, we detect variable, weakly linearly polarized emission (${<} 1\\%$) with a single, bright peak in the time-resolved polarization fraction, consistent with an origin in an evolving, dynamic jet component. We applied two independent polarimetric methods to extract the intrinsic electric vector position angles and rotation measures from the 5 and 7$\\,$GHz base-band data and detected a variable intrinsic polarization angle, indicative of a rapidly evolving local environment or a complex magnetic field geometry. Comparisons to the simultaneous, spatially-resolved observations taken with the Very Long Baseline Array at 15.6, do not show a significant connection between the jet ejections and" +"---\nabstract: 'In the power system, security assessment (SA) plays a pivotal role in determining the safe operation in a normal situation and in some contingency scenarios. 
Electrical variables are mainly considered as the model's input variables to indicate whether the power system operation is secure or insecure, according to the reliability criteria for contingency scenarios. In this approach, the features are in a grid data format, where relations between the features and any knowledge of the network topology are absent. Moreover, the traditional and common models, such as neural networks (NN), are not applicable if the input variables are structured as a graph. Therefore, this paper examines the security analysis in the graph neural network (GNN) framework such that the GNN model incorporates the network connectivity and the influence of each node\u2019s neighbors into the assessment. Here the input features are separate graphs representing different network conditions in electrical and structural statuses. Topological characteristics defined by network centrality measures are added to the feature vector representing the structural properties of the network. The proposed model is simulated in the IEEE 118-Bus system for the voltage static security assessment (SSA). The performance indices validate the efficiency of the GNN-based model compared to the traditional NN" +"---\nabstract: 'We extend the renormalizability study of the formulation of chiral effective field theory with a finite cutoff, applied to nucleon-nucleon scattering, by taking into account non-perturbative effects. We consider the nucleon-nucleon interaction up to next-to-leading order in the chiral expansion. The leading-order interaction is treated non-perturbatively. In contrast to the previously considered case when the leading-order interaction was assumed to be perturbative, new features related to the renormalization of the effective field theory are revealed. In particular, more severe constraints on the leading-order potential are formulated, which can enforce the renormalizability and the correct power counting for the next-to-leading order amplitude. To illustrate our theoretical findings, several partial waves in the nucleon-nucleon scattering, $^3P_0$, $^3S_1-{^3D_1}$ and $^1S_0$ are analyzed numerically. The cutoff dependence and the convergence of the chiral expansion for those channels are discussed.'\nauthor:\n- 'A.\u00a0M.\u00a0Gasparyan'\n- 'E.\u00a0Epelbaum'\nbibliography:\n- '5.bib'\ntitle: 'Renormalization of nuclear chiral effective field theory with non-perturbative leading order interactions'\n---\n\nIntroduction\n============\n\nOver the last decades, the effective field theory (EFT) approach has become a standard tool in studies of the nucleon-nucleon (NN), few-nucleon and many-nucleon systems due to the possibility to perform systematically improvable calculations in accordance" +"---\nabstract: 'X-ray radiation, in particular radiation between 0.1 keV and 10 keV, is evident from both point-like sources, such as compact objects and T-Tauri young stellar objects, and extended emission from hot, cooling gas, such as in supernova remnants. The X-ray radiation is absorbed by nearby gas, providing a source of both heating and ionization. While protoplanetary chemistry models now often include X-ray emission from the central young stellar object, simulations of star-forming regions have yet to include X-ray emission coupled to the chemo-dynamical evolution of the gas. 
We present an extension of the [TreeRay]{} reverse raytrace algorithm implemented in the [Flash]{} magneto-hydrodynamic code which enables the inclusion of X-ray radiation from 0.1 keV $< E_{\\gamma} <$ 100 keV, dubbed [XrayTheSpot]{}. [XrayTheSpot]{} allows for the use of an arbitrary number of bins, minimum and maximum energies, and both temperature-independent and temperature-dependent user-defined cross sections, along with the ability to include both point and extended diffuse emission and is coupled to the thermochemical evolution. We demonstrate the method with several multi-bin benchmarks testing the radiation transfer solution and coupling to the thermochemistry. Finally, we show two example star formation science cases for this module: X-ray emission from protostellar accretion irradiating" +"---\nabstract: 'This paper proposes EyeNet, a novel semantic segmentation network for point clouds that addresses the critical yet often overlooked parameter of coverage area size. Inspired by human peripheral vision, EyeNet overcomes the limitations of conventional networks by introducing a simple but efficient multi-scale input and a parallel processing network with connection blocks between parallel streams. The proposed approach effectively addresses the challenges of dense point clouds, as demonstrated by our ablation studies and state-of-the-art performance on Large-Scale Outdoor datasets.'\nauthor:\n- |\n Sunghwan Yoo, Yeonjeong Jeong, Maryam Jameela, Gunho Sohn\\\n Department of Earth and Space Science and Engineering\\\n Lassonde School of Engineering York University, Canada\\\n [(jacobyoo, yjjeong, maryumja, gsohn)@yorku.ca]{}\ntitle: 'Human Vision Based 3D Point Cloud Semantic Segmentation of Large-Scale Outdoor Scene'\n---\n\nIntroduction\n============\n\nRecently, there has been growing interest in developing digital twins of the three-dimensional world, driven by their various applications. With advancements in LiDAR devices and survey techniques, point cloud datasets have become more accurate, dense, and spatially extensive, both on the ground level[@paris2014; @iqmulus; @hackel2017semantic3d; @semantickitti; @toronto3d] and in the airborne level[@isprs2012; @dales; @sensaturban; @sum].\n\nHowever, the functional coverage area of an input batch is critical for effective feature learning in semantic segmentation" +"---\nabstract: 'This paper presents a methodology for the simulation of non-Gaussian wind field as a stochastic wave using the 3rd-order Spectral Representation Method. Traditionally, the wind field is modeled as a stochastic vector process at discrete locations in space. But the simulation of vector process is well-known to be computationally challenging and numerically unstable when modeling wind at a large number of discrete points in space. Recently, stochastic waves have been used to model the field as a continuous process indexed both in time and space. We extend the classical Spectral Representation Method for simulation of Gaussian stochastic waves to a third-order representation modeling asymmetrically skewed non-Gaussian stochastic waves from a prescribed power spectrum and bispectrum. We present an efficient implementation using the fast Fourier transform, which reduces the computational time dramatically. We then apply the method for simulation of a non-Gaussian wind velocity field along a long-span bridge.'\naddress: 'Johns Hopkins University, Baltimore, United States'\nauthor:\n- Lohit Vandanapu\n- 'Michael D. 
Shields'\nbibliography:\n- 'elsarticle-template-1-num.bib'\ntitle: 'Simulation of non-Gaussian wind field as a $3^{rd}$-order stochastic wave'\n---\n\nwind field simulation ,stochastic wave ,non-Gaussian ,stochastic process ,simulation\n\nIntroduction {#sec:introduction}\n============\n\nDynamic wind loads can have unpredictable and devastating" +"---\nabstract: 'Energy modelling can enable energy-aware software development and assist the developer in meeting an application\u2019s energy budget. Although many energy models for embedded processors exist, most do not account for processor-specific configurations, nor are they suitable for static energy consumption estimation. This paper introduces a set of comprehensive energy models for Arm\u2019s Cortex-M0 processor, ready to support energy-aware development of edge computing applications using either profiling- or static-analysis-based energy consumption estimation. We use a commercially representative physical platform together with a custom modified Instruction Set Simulator to obtain the physical data and system state markers used to generate the models. The models account for different processor configurations which all have a significant impact on the execution time and energy consumption of edge computing applications. Unlike existing works, which target a very limited set of applications, all developed models are generated and validated using a very wide range of benchmarks from a variety of emerging IoT application areas, including machine learning, and have a prediction error of less than 5%.'\nauthor:\n- |\n Kris Nikov, Kyriakos Georgiou, Zbigniew Chamski,\\\n Kerstin Eder[^1] [ ]{}and Jose Nunez-Yanez[^2]\nbibliography:\n- 'shortedRef.bib'\ntitle: 'Accurate Energy Modelling on the Cortex-M0 Processor for Profiling and" +"---\nabstract: 'The Long Term Evolution (LTE) signal is ubiquitously present in the electromagnetic (EM) background environment, which makes it an attractive signal source for ambient backscatter communications (AmBC). In this paper, we propose a system in which a backscatter device (BD) introduces an artificial Doppler shift to the channel that is larger than the natural Doppler but still small enough to be tracked by the channel estimator at the User Equipment (UE). Channel estimation is done using the downlink cell specific reference signals (CRS) that are present regardless of whether the UE is attached to the network or not. FSK was selected due to its robust operation in a fading channel. We describe the whole AmBC system and use two receivers. 
Finally, numerical simulations and measurements are provided to validate the proposed FSK AmBC performance.'\nauthor:\n- 'Jingyi Liao, Xiyu Wang, Kalle Ruttik, Riku J\u00e4ntti, and Phan-Huy Dinh-Thuy [^1] [^2]'\nbibliography:\n- 'References.bib'\ntitle: Ambient FSK Backscatter Communications using LTE Cell Specific Reference Signals\n---\n\nAmbient Backscatter Communications, LTE Cell Specific Reference Signals, Channel Estimation\n\nIntroduction {#sec:Intro}\n============\n\nThe introduction of ambient backscatter communications (AmBC) [@liu2013ambient] in mobile networks [@PhDFara] has recently been proposed for the sustainable development of asset" +"---\nauthor:\n- 'F.\u00a0de\u00a0Gasperin'\n- 'H.\u00a0W.\u00a0Edler'\n- 'W.\u00a0L.\u00a0Williams'\n- 'J.\u00a0R.\u00a0Callingham'\n- 'B.\u00a0Asabere'\n- 'M.\u00a0Br\u00fcggen'\n- 'G.\u00a0Brunetti'\n- 'T.\u00a0J.\u00a0Dijkema'\n- 'M.\u00a0J.\u00a0Hardcastle'\n- 'M.\u00a0Iacobelli'\n- 'A.\u00a0Offringa'\n- 'M.\u00a0J.\u00a0Norden'\n- 'H.\u00a0J.\u00a0A.\u00a0R\u00f6ttgering'\n- 'T.\u00a0Shimwell'\n- 'R.\u00a0J.\u00a0van\u00a0Weeren'\n- 'C.\u00a0Tasse'\n- 'D.\u00a0J.\u00a0Bomans'\n- 'A.\u00a0Bonafede'\n- 'A.\u00a0Botteon'\n- 'R.\u00a0Cassano'\n- 'K.\u00a0T.\u00a0Chy\u017cy'\n- 'V.\u00a0Cuciti'\n- 'K.\u00a0L.\u00a0Emig'\n- 'M.\u00a0Kadler'\n- 'G.\u00a0Miley'\n- 'B.\u00a0Mingo'\n- 'M.\u00a0S.\u00a0S.\u00a0L.\u00a0Oei'\n- 'I.\u00a0Prandoni'\n- 'D.\u00a0J.\u00a0Schwarz'\n- 'P.\u00a0Zarka'\nbibliography:\n- 'library.bib'\nsubtitle: 'II. First data release'\ntitle: The LOFAR LBA Sky Survey\n---\n\n[The Low Frequency Array (LOFAR) is the only existing radio interferometer able to observe at ultra-low frequencies ($<100$\u00a0MHz) with high resolution ($<15$) and high sensitivity ($<1$\u00a0). To exploit these capabilities, the LOFAR Surveys Key Science Project is using the LOFAR Low Band Antenna (LBA) to carry out a sensitive wide-area survey at $41-66$ MHz named the LOFAR LBA Sky Survey (LoLSS).]{} [LoLSS is covering the whole northern sky above declination $24\\deg$ with a" +"---\nabstract: |\n Diffusion-driven instability and bifurcation analysis are studied in a predator-prey model with herd behavior and quadratic mortality by incorporating a multiple Allee effect into the prey species. The existence and stability of the equilibria of the system are studied. Sufficient and necessary conditions for the occurrence of Turing instability are obtained, and the stability and direction of Hopf and steady state bifurcations are explored by using the normal form method. Furthermore, some numerical simulations are presented to support our theoretical analysis. We find that too large a diffusion rate of the prey prevents Turing instability from emerging. The biomass conversion rate does affect the stability of the system and the occurrence of Turing instability. This indicates that the biomass conversion rate is essential for the predator-prey system. Finally, we summarize our findings in the conclusion.\\\n [**keywords**]{}: Turing instability; Hopf bifurcation; Steady state bifurcation; Multiple Allee effect; Herd behavior; Predator-prey;\nauthor:\n- |\n Jianglong Xiao Yonghui Xia$\\footnote{Corresponding author. 
Yonghui Xia, yhxia@zjnu.cn; xiadoc@163.com.}$\\\n [*$^a$ College of Mathematics Science, Zhejiang Normal University, 321004, Jinhua, China*]{}\\\n [Email: jianglongxiao@zjnu.edu.cn; yhxia@zjnu.cn; xiadoc@163.com.]{}\ntitle: 'Turing instability in a diffusive predator-prey model with multiple Allee effect and herd behavior [^1]'\n---\n\nIntroduction\n============\n\nHistory\n-------\n\nModeling the interactions between" +"---\nabstract: |\n Verification of discrete time or continuous time dynamical systems over the reals is known to be undecidable. It is however known that undecidability does not hold for various classes of systems when considering *robust* systems: if robustness is defined as the fact that the reachability relation is stable under infinitesimal perturbation, then the reachability relation is decidable. In other words, undecidability implies sensitivity under infinitesimal perturbation, a property usually not expected in systems considered \u201cin practice\u201d, and hence it can be seen (somewhat informally) as an artifact of the theory, which always assumes exactness. In a similar vein, it is known that, while undecidability holds for logical formulas over the reals, it does not hold when considering $\\delta$-decidability: one must determine whether a property is true, or $\\delta$-far from being true.\n\n We first extend the previous statements to a theory for general (discrete time, continuous-time, and even hybrid) dynamical systems, and we relate the two approaches. We also relate robustness to some geometric properties of the reachability relation.\n\n But mainly, when a system is robust, it then makes sense to quantify at which level of perturbation. We prove that assuming robustness to polynomial perturbations on precision leads to reachability verifiable" +"---\nbibliography:\n- 'biblio.bib'\n---\n\n[**Long-range quenched bond disorder in the bi-dimensional Potts model**]{}\\\n[**Francesco\u00a0Chippari**]{}\\\nSorbonne Universit\u00e9 & CNRS, UMR 7589, LPTHE, F-75005, Paris, France\\\ne-mail: [fchippari@lpthe.jussieu.fr]{}\\\n[**Marco\u00a0Picco**]{}\\\nSorbonne Universit\u00e9 & CNRS, UMR 7589, LPTHE, F-75005, Paris, France\\\ne-mail: [picco@lpthe.jussieu.fr]{}\\\n[**Raoul\u00a0Santachiara**]{}\\\nParis-Saclay Universit\u00e9 & CNRS, UMR 8626, LPTMS, 91405, Saclay, France\\\ne-mail: [raoul.santachiara@gmail.com]{}\n\n**ABSTRACT**\n\n> We study the bi-dimensional $q$-Potts model with long-range bond correlated disorder. Similarly to [@Chatelain], we implement a bimodal disorder distribution by coupling the Potts model to auxiliary spin-variables, which are correlated with a power-law decaying function. The universal behaviour of different observables, especially the thermal and the order-parameter critical exponents, is computed by Monte-Carlo techniques for $q=1,2,3$-Potts models for different values of the power-law decaying exponent $a$. On the basis of our conclusions, which are in agreement with previous theoretical and numerical results for $q=1$ and $q=2$, we can conjecture the phase diagram for $q\\in [1,4]$. In particular, we establish that the system is driven to a fixed point at finite or infinite long-range disorder depending on the values of $q$ and $a$. Finally, we discuss the role of the higher cumulants of the disorder distribution. This is done" +"---\nabstract: 'Disclosure avoidance (DA) systems are used to safeguard the confidentiality of data while allowing it to be analyzed and disseminated for analytic purposes. 
These methods, e.g., cell suppression, swapping, and k-anonymity, are commonly applied and may have significant societal and economic implications. However, a formal analysis of their privacy and bias guarantees has been lacking. This paper presents a framework that addresses this gap: it proposes differentially private versions of these mechanisms and derives their privacy bounds. In addition, the paper compares their performance with traditional differential privacy mechanisms in terms of accuracy and fairness on US Census data release and classification tasks. The results show that, contrary to popular belief, traditional differential privacy techniques may be superior in terms of accuracy and fairness to differentially private counterparts of widely used DA mechanisms.'\nauthor:\n- Keyu Zhu$^1$\n- Ferdinando Fioretto$^2$\n- 'Pascal Van Hentenryck$^{1}$'\n- |\n Saswat Das$^3$ Christine Task$^4$ $^1$Georgia Institute of Technology\\\n $^2$Syracuse University\\\n $^3$National Institute of Science Education and Research\\\n $^4$Knexus Research Corporation\\\n keyu.zhu@gatech.edu, ffiorett@syr.edu, pvh@isye.gatech.edu, saswat.das@niser.ac.in, christine.task@knexusresearch.com\nbibliography:\n- 'ijcai23.bib'\ntitle: Privacy and Bias Analysis of Disclosure Avoidance Systems\n---\n\nIntroduction\n============\n\nDisclosure avoidance (DA) systems are methods used to protect confidentiality while still" +"---\nabstract: 'A modular form on an even lattice $M$ of signature $(l,2)$ is called reflective if it vanishes only on quadratic divisors orthogonal to roots of $M$. In this paper we show that every reflective modular form on a lattice of type $2U\\oplus L$ induces a root system satisfying certain constraints. As applications, (1) we prove that there is no lattice of signature $(21,2)$ with a reflective modular form and that $2U\\oplus D_{20}$ is the unique lattice of signature $(22,2)$ and type $U\\oplus K$ which has a reflective Borcherds product; (2) we give an automorphic proof of Shvartsman and Vinberg\u2019s theorem, asserting that the algebra of modular forms for an arithmetic subgroup of $\\mathrm{O}(l,2)$ is never freely generated when $l\\geq 11$. We also prove several results on the finiteness of lattices with reflective modular forms.'\naddress: 'Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 37673, Korea'\nauthor:\n- Haowu Wang\nbibliography:\n- 'refs.bib'\ntitle: On the classification of reflective modular forms\n---\n\nIntroduction\n============\n\nThe need to study modular forms on orthogonal groups ${\\mathop{\\null\\mathrm {O}}\\nolimits}(l,2)$ was first pointed out by Weil [@Wei79] in his program for the study of $K3$ surfaces in the late 1950s. In" +"---\nabstract: 'The bulk photovoltaic effect (BPVE) refers to the phenomenon of generating photocurrent or photovoltage in homogeneous noncentrosymmetric materials under illumination, and the intrinsic contribution to the BPVE is known as the shift current effect. We calculate the shift current conductivities of the ferroelectric SnTe monolayer using first-principles methods. We find that the monolayer SnTe has giant shift-current conductivity near the valley points. More remarkably, the linear optical absorption coefficient at this energy is very small, and therefore leads to an enormous Glass coefficient that is four orders of magnitude larger than that of BaTiO$_3$. The unusual shift-current effects are further investigated using a three-band model. 
We find that the giant shift current conductivities and Glass coefficient are induced by the nontrivial energy band geometries near the valley points, where the shift-vector diverges. This is a prominent example of how the band geometry can play an essential role in the fundamental properties of solids.'\nauthor:\n- Gan Jin\n- Lixin He\ntitle: Peculiar band geometry induced giant shift current in ferroelectric SnTe monolayer\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe study of the BPVE has a long history\u00a0[@Sturman_1992; @Baltz_1981; @Sipe_2000], and recently it has attracted great renewed interest because it potentially allows" +"---\nabstract: 'We consider wave propagation problems over 2-dimensional domains with piecewise-linear boundaries, possibly including scatterers. Under the assumption that the initial conditions and forcing terms are radially symmetric and compactly supported, we propose an approximation of the propagating wave as the sum of some special space-time functions. Each term in this sum identifies a particular field component, modeling the result of a single reflection or diffraction effect. We describe an algorithm for identifying such components automatically, based on the domain geometry. To showcase our proposed method, we present several numerical examples, such as waves scattering off wedges and waves propagating through a room in the presence of obstacles.'\nauthor:\n- 'Davide Pradovera[^1]'\n- Monica Nonino\n- Ilaria Perugia\nbibliography:\n- 'references.bib'\ntitle: 'Geometry-based approximation of waves in complex domains[^2]'\n---\n\n**Keywords:** wave propagation, surrogate modeling, scattering, geometrical theory of diffraction.\n\n**AMS subject classifications:** 35L05, 35Q60, 65M25, 78A45, 78M34.\n\nIntroduction {#sec:intro}\n============\n\nThe discretization of numerical models for the simulation of complex phenomena results in high-dimensional systems to be solved, usually at an extremely high cost in terms of computational time and storage memory. Among these models, wave propagation problems represent an extremely interesting topic: relevant applications can be found, e.g.," +"---\nabstract: |\n Software Defined Networks have opened the door to statistical and AI-based techniques to improve the efficiency of networking, especially to ensure a certain *Quality of Service* (QoS) for specific applications by routing packets with awareness of the content nature (VoIP, video, files, etc.) and its needs (latency, bandwidth, etc.), so as to use the resources of a network efficiently.\n\n Monitoring and predicting various Key Performance Indicators (KPIs) at any level may help handle such problems while preserving network bandwidth.\n\n The question addressed in this work is the design of efficient, low-cost adaptive algorithms for KPI estimation, monitoring and prediction. 
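A minimal sketch of the flavour of estimator such a design calls for (an exponentially forgetting recursive least-squares update; the features, constants, and class name are illustrative assumptions, not the estimators of the cited works):

```python
import numpy as np

class AdaptiveLatencyEstimator:
    """Recursive least squares with a forgetting factor: each new
    (features, measured latency) pair updates the model in O(d^2) time,
    letting the estimator track a slowly varying network without
    revisiting old data."""

    def __init__(self, dim, forgetting=0.98):
        self.w = np.zeros(dim)          # linear model weights
        self.P = np.eye(dim) * 1e3      # inverse covariance estimate
        self.lam = forgetting

    def predict(self, x):
        return float(self.w @ x)

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)    # gain vector
        self.w += k * (y - self.w @ x)  # correct using the prediction error
        self.P = (self.P - np.outer(k, Px)) / self.lam

# Toy usage; features could be per-path load statistics (illustrative only).
est = AdaptiveLatencyEstimator(dim=3)
rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.random(3)
    y = 5.0 * x[0] + 2.0 * x[1] + 0.1 * rng.standard_normal()  # "true" latency
    est.update(x, y)
print(est.w)  # approaches [5, 2, 0]
```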
We focus on end-to-end latency prediction, for which we illustrate our approaches and results on data obtained from a public generator provided after the recent international challenge on GNN [@suarez2021graph].\n\n In this paper, we improve [our]{} previously proposed low-cost estimators [@larrenie2022icccnt] by adding the adaptive dimension, and show that the performances are minimally modified while gaining the ability to track varying networks.\nauthor:\n- |\n Pierre Larrenie\\\n Thales SIX & LIGM\\\n Universit\u00e9 Gustave Eiffel, CNRS\\\n Marne-la-Vall\u00e9e, France\\\n `pierre.larrenie@esiee.fr`\\\n Jean-Fran\u00e7ois Bercher\\\n LIGM\\\n Universit\u00e9 Gustave Eiffel, CNRS\\\n Marne-la-Vall\u00e9e, France\\\n `jean-francois.bercher@esiee.fr`\\\n Olivier Venard\\\n ESYCOM\\\n Universit\u00e9 Gustave Eiffel, CNRS\\\n Marne-la-Vall\u00e9e, France\\\n `olivier.venard@esiee.fr`\\\n Iyad Lahsen-Cherif\\\n Institut National des Postes" +"---\nabstract: 'Given a dataset on actions and resulting long-term rewards, a direct estimation approach fits value functions that minimize prediction error on the training data. Temporal difference learning (TD) methods instead fit value functions by minimizing the degree of temporal inconsistency between estimates made at successive time-steps. Focusing on finite state Markov chains, we provide a crisp asymptotic theory of the statistical advantages of this approach. First, we show that an intuitive *inverse trajectory pooling coefficient* completely characterizes the percent reduction in mean-squared error of value estimates. Depending on problem structure, the reduction could be enormous or nonexistent. Next, we prove that there can be dramatic improvements in estimates of the difference in value-to-go for two states: TD\u2019s errors are bounded in terms of a novel measure \u2014 the problem\u2019s *trajectory crossing time* \u2014 which can be much smaller than the problem\u2019s time horizon.'\nauthor:\n- |\n David Cheikhi Daniel Russo\\\n Columbia University\nbibliography:\n- 'ref.bib'\ntitle: On the Statistical Benefits of Temporal Difference Learning\n---\n\nIntroduction\n============\n\nTemporal difference learning is a distinctive approach to estimation in long-term optimization problems. Its importance to reinforcement learning is hard to overstate. In their seminal book, @sutton2018reinforcement write: *If one had" +"---\nabstract: 'During the last decade, forcing and response modes produced by resolvent analysis have demonstrated great potential to guide sensor and actuator placement and design in flow control applications. However, resolvent modes are frequency-dependent, which, although responsible for their success in identifying scale interactions in turbulence, complicates their use for control purposes. In this work, we seek orthogonal bases of forcing and response modes that are the most responsive and receptive, respectively, across all frequencies. We show that these frequency-independent bases of *representative* resolvent modes are given by the eigenvectors of the observability and controllability Gramians of the system considering full state inputs and outputs. We present several numerical examples where we leverage these bases by building orthogonal or interpolatory projectors onto the dominant forcing and response subspaces. 
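For a stable linear model $\\dot{x} = Ax + f$ with full-state inputs and outputs, the Gramians referred to above solve Lyapunov equations, and their leading eigenvectors give the frequency-independent bases; a minimal sketch on a random stable operator (the operator, dimensions, and mode counts are illustrative, not the Ginzburg-Landau or channel-flow setups of the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)  # shifted to be stable

# Controllability Gramian with full-state inputs (B = I):  A Wc + Wc A^T = -I
Wc = solve_continuous_lyapunov(A, -np.eye(n))
# Observability Gramian with full-state outputs (C = I):   A^T Wo + Wo A = -I
Wo = solve_continuous_lyapunov(A.T, -np.eye(n))

# Leading eigenvectors (eigh returns ascending order, hence the reversal).
response_modes = np.linalg.eigh(Wc)[1][:, ::-1][:, :5]  # dominant response directions
forcing_modes = np.linalg.eigh(Wo)[1][:, ::-1][:, :5]   # dominant forcing directions

# Orthogonal projector onto the dominant response subspace.
P = response_modes @ response_modes.T
```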
Gramian-based forcing modes are used to identify dynamically relevant disturbances, to place point sensors to measure disturbances, and to design actuators for feedforward control in the subcritical linearized Ginzburg\u2013Landau equation. Gramian-based response modes are used to identify coherent structures and for point sensor placement aiming at state reconstruction in the turbulent flow in a minimal channel at $\\mathrm{Re}_{\\tau}=185$. The approach does not require data snapshots and relies only on knowledge of" +"---\nabstract: 'Propyl cyanide (PrCN, C$_3$H$_7$CN), with both linear and branched isomers, is ubiquitous in interstellar space and is important for astrochemistry as it is one of the most complex molecules found to date in the interstellar medium. Furthermore, it is the only observed species to share the branched atomic backbone of amino acids, some of the building blocks of life. Radical-radical chemical reactions are examined in detail using density functional theory, second-order M\u00f8ller-Plesset perturbation theory, coupled cluster methods, and the energy resolved master equation formalism to compute the rate constants in the low pressure limit prevalent in the ISM. Quantum chemical studies are reported for the formation of propyl cyanide (n-PrCN) and its branched isomer (iso-PrCN) from the gas phase association and surface reactions of radicals on a 34-water model ice cluster. We identify two and three paths for the formation of iso-PrCN and n-PrCN, respectively. The reaction mechanism involves the association of the following radicals: CH$_3$CHCH$_3$+CN, CH$_3$+CH$_3$CHCN for iso-PrCN formation and CH$_3$CH$_2$+CH$_2$CN, CH$_3$+CH$_2$CH$_2$CN, CN+CH$_3$CH$_2$CH$_2$ leading to n-PrCN formation. We employ the M062X/6-311$++$G(d,p) DFT functional and MP2/aug-cc-pVTZ for reactions on the ice model and in the gas phase, respectively, to optimize the structures, compute minimum energy paths and zero-point vibrational energies" +"---\nabstract: 'Self-attention weights and their transformed variants have been the main source of information for analyzing token-to-token interactions in Transformer-based models. But despite their ease of interpretation, these weights are not faithful to the models\u2019 decisions as they are only one part of an encoder, and other components in the encoder layer can have considerable impact on information mixing in the output representations. In this work, by expanding the scope of analysis to the whole encoder block, we propose , a novel context mixing score customized for Transformers that provides us with a deeper understanding of how information is mixed at each encoder layer. 
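As a point of contrast, the raw self-attention weights alone give the naive token-to-token mixing map; a minimal sketch of this baseline (single head, random toy weights, ignoring values, residual connections, LayerNorm and the feed-forward sublayer, which is precisely the information a block-level score adds):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def naive_context_mixing(X, Wq, Wk):
    """Raw self-attention map: entry (i, j) is how much token i attends
    to token j. This is only one component of the encoder block."""
    Q, K = X @ Wq, X @ Wk
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]))

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 16))   # 6 tokens, width-16 embeddings (toy values)
A = naive_context_mixing(X, rng.standard_normal((16, 8)),
                            rng.standard_normal((16, 8)))
print(A.sum(axis=1))               # each row is a distribution summing to 1
```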
We demonstrate the superiority of our context mixing score over other analysis methods through a series of complementary evaluations with different viewpoints based on linguistically informed rationales, probing, and faithfulness analysis.[^1]'\nauthor:\n- |\n Hosein Mohebbi$^{1}$ \u00a0 Willem Zuidema$^{2}$ \u00a0 Grzegorz Chrupa\u0142a$^{1}$ \u00a0 Afra Alishahi$^{1}$\\\n $^1$ CSAI, Tilburg University \u00a0 $^2$ ILLC, University of Amsterdam\\\n `{h.mohebbi, a.alishahi}@tilburguniversity.edu`\\\n `w.h.zuidema@uva.nl`\\\n `grzegorz@chrupala.me`\\\nbibliography:\n- 'custom.bib'\ntitle: Quantifying Context Mixing in Transformers\n---\n\n=1\n\nIntroduction\n============\n\nTransformers [@NIPS2017_3f5ee243], with their impressive empirical success, have become a prime choice of architecture to learn contextualized representations across a wide range of modalities, such as language" +"---\nabstract: 'Here we analyze the Hawking radiation detected by an inertial observer in an arbitrary position in a Reissner-Nordstr\u00f6m spacetime, with special emphasis on the asymptotic behavior of the Hawking spectrum as an observer approaches the inner or outer horizon. Two different methods are used to analyze the Hawking flux: first, we calculate an effective temperature quantifying the rate of exponential redshift experienced by an observer from an emitter\u2019s vacuum modes, which reproduces the Hawking effect provided the redshift is sufficiently adiabatic. Second, we compute the full Bogoliubov graybody spectrum observed in the three regimes where the wave equation can be solved analytically (at infinity and at the outer and inner horizons). We find that for an observer at the event horizon, the effective Hawking temperature is finite and becomes negative when $(Q/M)^2>8/9$, while at the inner horizon, the effective temperature is always negative and infinite in every direction the observer looks, coinciding with an ultraviolet-divergent spectrum.'\nauthor:\n- Tyler McMaken\n- 'Andrew J. S. Hamilton'\nbibliography:\n- 'apsbib.bib'\ntitle: Hawking radiation inside a charged black hole\n---\n\n\\[sec:int\\]Introduction\n=======================\n\nSome of the most extraordinary effects in the study of quantum field theory in curved spacetime occur near the" +"---\nabstract: 'In recent years, the attention mechanism has demonstrated superior performance in various tasks, leading to the emergence of GAT and Graph Transformer models that utilize this mechanism to extract relational information from graph-structured data. However, the high computational cost associated with the Transformer block, as seen in Vision Transformers, has motivated the development of alternative architectures such as MLP-Mixers, which have been shown to improve performance in image tasks while reducing the computational cost. Despite the effectiveness of Transformers in graph-based tasks, their computational efficiency remains a concern. The logic behind MLP-Mixers, which addresses this issue in image tasks, has the potential to be applied to graph-structured data as well. In this paper, we propose the Graph Mixer Network (GMN), also referred to as Graph Nasreddin Nets (GNasNets), a framework that incorporates the principles of MLP-Mixers for graph-structured data. Using a PNA model with multiple aggregators as the foundation, our proposed GMN has demonstrated improved performance compared to Graph Transformers. 
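A rough sketch of the mixer idea transplanted to a dense node-feature matrix: alternating node ("token") mixing and channel mixing with plain MLPs and residual connections. This is only the generic MLP-Mixer pattern, not the specific GMN/PNA architecture:

```python
import numpy as np

def mlp(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2      # two-layer MLP with ReLU

def mixer_block(H, params):
    """H: (num_nodes, channels). Node mixing shares information across
    nodes; channel mixing shares it across features."""
    W1n, W2n, W1c, W2c = params
    H = H + mlp(H.T, W1n, W2n).T             # mix across nodes
    H = H + mlp(H, W1c, W2c)                 # mix across channels
    return H

rng = np.random.default_rng(0)
n, c, h = 8, 16, 32                          # nodes, channels, hidden width (toy)
params = (rng.standard_normal((n, h)) * 0.1, rng.standard_normal((h, n)) * 0.1,
          rng.standard_normal((c, h)) * 0.1, rng.standard_normal((h, c)) * 0.1)
H = mixer_block(rng.standard_normal((n, c)), params)
print(H.shape)                               # (8, 16)
```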
The source code is publicly available at .'\nauthor:\n- |\n Ahmet Sar\u0131g\u00fcn\\\n \\\nbibliography:\n- 'reference.bib'\ntitle: Graph Mixer Networks\n---\n\nIntroduction\n============\n\nGraph Neural Networks (GNNs) are a powerful tool for working with graph-structured data, which" +"---\nabstract: |\n TextFormats is a software system for efficient and user-friendly creation of text format specifications, accessible from multiple programming languages (C/C++, Python, Nim) and the Unix command line. To work with a format, a specification written in the TextFormats Specification Language (TFSL) must be created. The specification defines datatypes for each part of the format.\n\n The syntax for datatype definitions in TextFormats specifications is based on the text representation. Thus this system is well suited for the description of existing formats. However, when creating a new text format for representing existing data, the user may use different possible definitions, based on the type of value and the representation choices.\n\n This study explores the possible definition syntax in the TextFormats Specification Language to be used for creating text representations of scalar values (e.g.\u00a0string, numeric value, boolean) and compound data structures (e.g.\u00a0array, mapping). The results of the analysis are presented systematically, together with examples for each type of value that can be represented, and usage advice.\nauthor:\n- Giorgio Gonnella\nbibliography:\n- 'references.bib'\ntitle: Designing text representations for existing data using the TextFormats Specification Language\n---\n\nTextFormats | Datatype definition | Parser | Text representation |" +"---\nabstract: 'Ulam\u2019s method is a popular discretization scheme for stochastic operators that involves the construction of a transition probability matrix controlling a Markov chain on a set of cells covering some domain. We consider an application to satellite-tracked undrogued surface-ocean drifting buoy trajectories obtained from the NOAA Global Drifter Program dataset. Motivated by the motion of in the tropical Atlantic, we apply Transition Path Theory (TPT) to drifters originating off the west coast of Africa to the Gulf of Mexico. We find that the most common case of a regular covering by equal longitude\u2013latitude side cells can lead to a large instability in the computed transition times as a function of the number of cells used. We propose a different covering based on a clustering of the trajectory data which is stable against the number of cells in the covering. We also propose a generalization of the standard transition time statistic of TPT which can be used to construct a partition of the domain of interest into weakly dynamically connected regions.'\nauthor:\n- 'G.\u00a0Bonner'\n- 'F.J.\u00a0Beron-Vera'\n- 'M.J.\u00a0Olascoaga'\ntitle: Stability of temporal statistics in Transition Path Theory with sparse data\n---\n\n> Transition Path Theory (TPT)" +"---\nabstract: 'Voice synthesis has seen significant improvements in the past decade resulting in highly intelligible voices. Further investigations have resulted in models that can produce variable speech, including conditional emotional expression. The problem lies, however, in a focus on phrase-level modifications and prosodic vocal features. Using the CREMA-D dataset we have trained a GAN conditioned on emotion to generate word lengths for a given input text. 
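A minimal sketch of the hand-off this enables, turning predicted relative word lengths into per-word SSML prosody tags (the `<prosody>`-based encoding below is one common option; tag support varies across TTS engines, and all values are made up):

```python
from xml.sax.saxutils import escape

def to_ssml(words, rel_lengths):
    """rel_lengths: predicted word durations relative to neutral speech
    (1.0 = unchanged). A longer word maps to a slower local rate."""
    parts = []
    for word, r in zip(words, rel_lengths):
        rate = int(round(100.0 / r))   # e.g. a 1.25x-long word -> 80% rate
        parts.append(f'<prosody rate="{rate}%">{escape(word)}</prosody>')
    return "<speak>" + " ".join(parts) + "</speak>"

# Hypothetical generator output for a "happy" conditioning label.
print(to_ssml(["I", "am", "so", "glad"], [0.9, 1.0, 1.3, 1.2]))
```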
These word lengths are relative to neutral speech and can be provided, through speech synthesis markup language (SSML), to a text-to-speech (TTS) system to generate more expressive speech. Additionally, a generative model is trained using implicit maximum likelihood estimation (IMLE), and a comparative analysis with GANs is included. We were able to achieve better performances on objective measures for neutral speech, and better time alignment for happy speech when compared to an out-of-the-box model. However, further investigation of subjective evaluation is required.'\nauthor:\n- |\n Navjot Kaur\\\n `nka77@sfu.ca`\\\n Paige Tuttosi\\\n `ptuttosi@sfu.ca`\\\nbibliography:\n- 'references.bib'\ntitle: 'Time out of Mind: Generating Rate of Speech conditioned on emotion and speaker'\n---\n\nIntroduction\n============\n\nAs humans, we are particularly fascinated by the aspects of ourselves that are difficult to put words to, yet are inherent" +"---\nabstract: |\n Pulsar wind nebulae are fascinating systems, and archetypal sources for high-energy astrophysics in general. Due to their vicinity, their brightness, the fact that they shine at multiple wavelengths, and especially their long-lived emission at gamma-rays, modelling their properties is particularly important for the correct interpretation of the visible Galaxy. A complication in this respect is the variety of properties and morphologies they show at different ages. Here we discuss the differences among the evolutionary phases of pulsar wind nebulae, how they have been modeled in the past, and what progress has recently been made. We approach the discussion from a phenomenological, theoretical (especially numerical) and observational point of view, with particular attention to the most recent results and open questions about the physics of such intriguing sources.\\\n Accepted for publication in PASA, 2023 January 30. Received 2022 December 14; in original form 2022 August 16.\nauthor:\n- 'Barbara Olmi$^{1,2}$[^1] and Niccol\u00f2 Bucciantini$^{2,3,4}$[^2]'\nbibliography:\n- 'biblio.bib'\ntitle: 'From young to old: the evolutionary path of Pulsar Wind Nebulae'\n---\n\nHigh Energy Astrophysics: Plasma Astrophysics \u2013 ISM: Supernova Remnants \u2013 ISM: Pulsar Wind Nebulae \u2013 ISM: Cometary Nebulae \u2013 Pulsars: General \u2013 Relativistic Processes \u2013 Methods: Numerical\n\nINTRODUCTION {#sec:intro}" +"---\nabstract: 'Many real-world domains require safe decision making in uncertain environments. In this work, we introduce a deep reinforcement learning framework for approaching this important problem. We consider a distribution over transition models, and apply a risk-averse perspective towards model uncertainty through the use of coherent distortion risk measures. We provide robustness guarantees for this framework by showing it is equivalent to a specific class of distributionally robust safe reinforcement learning problems. Unlike existing approaches to robustness in deep reinforcement learning, however, our formulation does not involve minimax optimization. This leads to an efficient, model-free implementation of our approach that only requires standard data collection from a single training environment. 
In experiments on continuous control tasks with safety constraints, we demonstrate that our framework produces robust performance and safety at deployment time across a range of perturbed test environments.'\nauthor:\n- |\n James Queeney[^1]\\\n Division of Systems Engineering\\\n Boston University\\\n `jqueeney@bu.edu`\\\n Mouhacine Benosman\\\n Mitsubishi Electric Research Laboratories\\\n `benosman@merl.com`\\\nbibliography:\n- 'Queeney\\_NeurIPS23\\_CameraReady.bib'\ntitle: |\n Risk-Averse Model Uncertainty for\\\n Distributionally Robust Safe Reinforcement Learning\n---\n\nIntroduction\n============\n\nIn many real-world decision making applications, it is important to satisfy safety requirements while achieving a desired goal. In addition, real-world environments often involve" +"---\nabstract: 'Precise robotic grasping of several novel objects is a huge challenge in manufacturing, automation, and logistics. Most of the current methods for model-free grasping are disadvantaged by the sparse data in grasping datasets and by errors in sensor data and contact models. This study combines data generation and sim-to-real transfer learning in a grasping framework that reduces the sim-to-real gap and enables precise and reliable model-free grasping. A large-scale robotic grasping dataset with dense grasp labels is generated using domain randomization methods and a novel data augmentation method for deep learning-based robotic grasping to solve the data sparsity problem. We present an end-to-end robotic grasping network with a grasp optimizer. The grasp policies are trained with sim-to-real transfer learning. The presented results suggest that our grasping framework reduces the uncertainties in grasping datasets, sensor data, and contact models. In physical robotic experiments, our grasping framework grasped single known objects and novel complex-shaped household objects with a success rate of 90.91%. In a complex scenario with multi-object robotic grasping, the success rate was 85.71%. The proposed grasping framework outperformed two state-of-the-art methods in both known and unknown object robotic grasping.'\nauthor:\n- 'Lei Zhang$^{1,2}$, Kaixin Bai$^{1,2}$, Zhaopeng Chen$^{2,1}$\\*, Yunlei Shi$^{1,2}$," +"---\nabstract: 'AI-enabled chatbots have recently been put to use to answer customer service queries; however, common feedback from users is that bots lack a personal touch and are often unable to understand the real intent of the user\u2019s question. To this end, it is desirable to have human involvement in the customer servicing process. In this work, we present a system where a human support agent collaborates in real-time with an AI agent to satisfactorily answer customer queries. We describe the user interaction elements of the solution, along with the machine learning techniques involved in the AI agent.'\nauthor:\n- 'Debayan Banerjee Mathis Poser Christina Wiethof Varun Shankar Subramanian Richard Paucar Eva A. C. Bittner Chris Biemann'\nbibliography:\n- 'aaai23.bib'\ntitle: 'A System for Human-AI collaboration for Online Customer Support'\n---\n\n![image](images/ui1.png){width=\"90.00000%\"}\n\nIntroduction\n============\n\nIn the pursuit of operational efficiency, companies across the globe have been deploying automation technology aided by Artificial Intelligence (AI) for Online Customer Support (OCS) use cases [^1]. 
With the explosive growth of social media usage, incoming customer queries have grown exponentially, and to handle this growth, the use of proper technology is critical. Some estimates say that by the year" +"---\nabstract: 'Using computer simulations, we have studied the percolation and the electrical conductance of two-dimensional, random percolating networks of curved, zero-width metallic nanowires. We mimicked the curved nanowires using circular arcs. The percolation threshold decreased as the aspect ratio of the arcs increased. Comparison with published data on the percolation threshold of symmetric quadratic B\u00e9zier curves suggests that, when the percolation of slightly curved wires is simulated, the particular choice of curve to mimic the shape of real-world wires is of little importance. Considering the electrical properties, we took into account both the nanowire resistance per unit length and the junction (nanowire/nanowire contact) resistance. Using a mean-field approximation (MFA), we derived the total electrical conductance of the nanowire-based networks as a function of their geometrical and physical parameters. The MFA predictions have been confirmed by our Monte Carlo numerical simulations. For our random homogeneous and isotropic systems of conductive curved wires, the electric conductance decreased as the wire shape changed from a stick to a ring when the wire length remained fixed.'\nauthor:\n- 'Yuri\u00a0Yu.\u00a0Tarasevich'\n- 'Andrei\u00a0V.\u00a0Eserkepov'\n- 'Irina\u00a0V.\u00a0Vodolazskaya'\nbibliography:\n- 'arcs.bib'\ntitle: 'Percolation and electrical conduction in random systems of curved linear" +"---\nabstract: 'The basic idea of this work is to achieve the observed relic density of non-thermal dark matter (DM) and its connection with the Cosmic Microwave Background (CMB) via additional relativistic degrees of freedom which are simultaneously generated during the period $T_{\\rm BBN}~{\\rm to}~T_{\\rm CMB}$ from a long-lived dark sector particle. To realize this phenomenon, we minimally extend the type-I seesaw scenario with a Dirac fermion singlet ($\\chi$) and a complex scalar singlet ($\\varphi$) which transform non-trivially under an unbroken symmetry $\\mathcal{Z}_3$. $\\chi$, being the lightest particle in the dark sector, acts as a stable dark matter candidate, while the next-to-lightest state $\\varphi$ operates like a long-lived dark scalar particle. The initial density of $\\varphi$ can be thermally produced through either self-interacting number changing processes ($3 \\varphi \\to 2 \\varphi$) within the dark sector or the standard annihilation to SM particles ($2 \\varphi \\to 2~ {\\rm SM}$). The late time (after neutrino decoupling) non-thermal decay of $\\varphi$ can produce dark matter in association with active neutrinos. The presence of extra relativistic neutrino degrees of freedom at the time of CMB can have a significant impact on $\\Delta \\rm N_{eff}$. Thus the precise measurement of $\\Delta \\rm N_{eff}$ by" +"---\nabstract: |\n In $p$-median location interdiction the aim is to find a subset of edges in a graph, such that the objective value of the $p$-median problem in the same graph without the selected edges is as large as possible.\n\n We prove that this problem is $\\operatorname*{\\mathsf{NP}}$-hard even on acyclic graphs. Restricting the problem to trees with unit lengths on the edges, unit interdiction costs, and a single edge interdiction, we provide an algorithm which solves the problem in polynomial time. 
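For intuition only (a brute-force sketch, not the polynomial-time algorithm of the paper), single-edge interdiction of the $2$-median on a small tree can be written directly, treating unreachable vertex-facility pairs as infinitely distant:

```python
import itertools
import networkx as nx

def p_median_value(G, p):
    """Optimal p-median objective: minimize over facility sets the summed
    shortest-path distance from every vertex to its nearest facility."""
    dist = dict(nx.all_pairs_shortest_path_length(G))
    best = float("inf")
    for facilities in itertools.combinations(G.nodes, p):
        cost = sum(min(dist[v].get(f, float("inf")) for f in facilities)
                   for v in G.nodes)
        best = min(best, cost)
    return best

def best_single_interdiction(T, p=2):
    """Try every edge removal; keep the one maximizing the p-median value."""
    return max(T.edges, key=lambda e: p_median_value(
        nx.restricted_view(T, [], [e]), p))

T = nx.balanced_tree(2, 3)     # a small tree with unit edge lengths
e = best_single_interdiction(T)
print(e, p_median_value(nx.restricted_view(T, [], [e]), 2))
```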
Furthermore, we investigate path graphs with unit and arbitrary lengths. For the former case, we present an algorithm where multiple edges can be interdicted. For the latter case, we present a method to compute an optimal solution for one interdiction step, which can also be extended to multiple interdicted edges.\nauthor:\n- 'L. Lei\u00df[^1]'\n- 'T. Heller'\n- 'L. Sch\u00e4fer'\n- 'M. Streicher'\n- 'S. Ruzika'\nbibliography:\n- 'mybibfile.bib'\ntitle: '$p$-median location interdiction on trees'\n---\n\n**Keywords:** Network Interdiction, Location Planning, Median Problems, Edge Interdiction, Network Location Planning\\\n\nIntroduction {#sec:intro}\n============\n\nLocation planning is a field of mathematical research which crosses our daily life more often than we might think at first sight. The root of modern" +"---\nabstract: 'The optimized certainty equivalent (OCE) is a family of risk measures that cover important examples such as entropic risk, conditional value-at-risk and mean-variance models. In this paper, we propose a new episodic risk-sensitive reinforcement learning formulation based on tabular Markov decision processes with recursive OCEs. We design an efficient learning algorithm for this problem based on value iteration and upper confidence bound. We derive an upper bound on the regret of the proposed algorithm, and also establish a minimax lower bound. Our bounds show that the regret rate achieved by our proposed algorithm has optimal dependence on the number of episodes and the number of actions.'\nbibliography:\n- 'main.bib'\n---\n\n**Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents**\n\n[Wenhao Xu]{}[^1], Xuefeng Gao[^2], Xuedong He[^3]\n\nIntroduction\n============\n\nReinforcement learning (RL) studies the problem of sequential decision making in an unknown environment by carefully balancing between exploration and exploitation [@sutton2018reinforcement]. In the classical setting, it describes how an agent takes actions to maximize *expected cumulative rewards* in an environment typically modeled by a Markov decision process (MDP, [@puterman2014markov]). However, optimizing the expected cumulative rewards alone is often not sufficient in many practical applications such as finance, healthcare" +"---\nabstract: 'The characterization of finite-time thermodynamic processes is of crucial importance for extending equilibrium thermodynamics to nonequilibrium thermodynamics. The central issue is to quantify responses of thermodynamic variables and irreversible dissipation associated with non-quasistatic changes of thermodynamic forces applied to the system. In this study, we derive a simple formula that relates the non-quasistatic response coefficients to Onsager\u2019s kinetic coefficients, where the Onsager coefficients characterize the relaxation dynamics of fluctuations of extensive thermodynamic variables of semi-macroscopic systems. Moreover, the thermodynamic length and the dissipated availability that quantifies the efficiency of irreversible thermodynamic processes are formulated in terms of the derived non-quasistatic response coefficients. The present results are demonstrated by using an ideal gas model. 
The present results are, in principle, verifiable through experiments and are thus expected to provide a guiding principle for the nonequilibrium control of macroscopic thermodynamic systems.'\naddress: 'Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa 277-8561, Japan'\nauthor:\n- Yuki Izumida\ntitle: 'Non-quasistatic response coefficients and dissipated availability for macroscopic thermodynamic systems'\n---\n\nIntroduction\n============\n\nIn recent years, adiabatic control and even its shortcuts have received renewed and considerable interest both in theoretical and experimental points of" +"---\nabstract: |\n **Context:** DevOps responds to the growing need of companies to streamline the software development process and, thus, has experienced widespread adoption in the past years. However, the successful adoption of DevOps requires companies to address significant cultural and organizational changes. Understanding the organizational structure and characteristics of teams adopting DevOps is key, and comprehending the existing theories and representations of team taxonomies is critical to guide companies in a more systematic and structured DevOps adoption process. As there was no unified theory to explain the different topologies of DevOps teams, in previous work, we built a theory to represent the organizational structure and characteristics of teams adopting DevOps, harmonizing the existing knowledge. **Objective:** In this paper, we expand the theory-building in the context of DevOps Team Taxonomies. Our main contributions are presenting and executing the Operationalization and Testing phases for a continuously evolving theory on DevOps team structures. **Method:** We operationalize the constructs and propositions that make up our theory to generate empirically testable hypotheses to confirm or disconfirm the theory. Specifically, we focus on the research operation side of the theory-research cycle: identifying propositions, deriving empirical indicators from constructs, establishing testable hypotheses, and testing them. **Results:**" +"---\nabstract: |\n We investigate unbiased high-dimensional mean estimators in differential privacy. We consider differentially private mechanisms whose expected output equals the mean of the input dataset, for every dataset drawn from a fixed bounded domain $K$ in ${\\mathbb{R}}^d$. A classical approach to private mean estimation is to compute the true mean and add unbiased, but possibly correlated, Gaussian noise to it. In the first part of this paper, we study the optimal error achievable by a Gaussian noise mechanism for a given domain $K$ when the error is measured in the $\\ell_p$ norm for some $p \\ge 2$. We give algorithms that compute the optimal covariance for the Gaussian noise for a given $K$ under suitable assumptions, and prove a number of nice geometric properties of the optimal error. These results generalize the theory of factorization mechanisms from domains $K$ that are symmetric and finite (or, equivalently, symmetric polytopes) to arbitrary bounded domains.\n\n In the second part of the paper we show that Gaussian noise mechanisms achieve nearly optimal error among all private unbiased mean estimation mechanisms in a very strong sense. 
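A minimal sketch of the baseline Gaussian noise mechanism in the simplest isotropic case, where $K$ is a Euclidean ball (the covariance shaping for general $K$ is the subject of the paper; the $\\varepsilon$, $\\delta$ values and data are illustrative):

```python
import numpy as np

def private_mean_ball(X, eps=1.0, delta=1e-5, radius=1.0):
    """Unbiased DP mean for rows in an l2 ball: release the true mean plus
    isotropic Gaussian noise calibrated to the mean's l2 sensitivity
    2*radius/n, using the classic (eps, delta) Gaussian-mechanism scaling."""
    n, d = X.shape
    sensitivity = 2.0 * radius / n   # replacing one row moves the mean this much
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X.mean(axis=0) + np.random.normal(0.0, sigma, size=d)

rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(10_000, 3))   # rows lie inside the unit ball
print(private_mean_ball(X))                    # unbiased: E[output] = mean(X)
```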
In particular, for *every input dataset*, an unbiased mean estimator satisfying concentrated differential privacy introduces approximately at" +"---\nabstract: 'Differential equations are used to model and predict the behaviour of complex systems in a wide range of fields, and the ability to solve them is an important asset for understanding and predicting the behaviour of these systems. Complicated physics mostly involves difficult differential equations, which are hard to solve analytically. In recent years, physics-informed neural networks have been shown to perform very well in solving systems with various differential equations. The main ways to approximate differential equations are through penalty function and reparameterization. Most researchers use penalty functions rather than reparameterization due to the complexity of implementing reparameterization. In this study, we quantitatively compare physics-informed neural network models with and without reparameterization using the approximation error. The performance of reparameterization is demonstrated based on two benchmark mechanical engineering problems, a one-dimensional bar problem and a two-dimensional bending beam problem. Our results show that when dealing with complex differential equations, applying reparameterization results in a lower approximation error.'\nauthor:\n- |\n Siddharth Nand\\\n Department of Mathematics\\\n The University of British Columbia\\\n `sidnand@student.ubc.ca`\\\n Yuecheng Cai\\\n Department of Mechanical Engineering\\\n The University of British Columbia\\\n `ycai05@mail.ubc.ca`\\\ntitle: 'Physics-informed Neural Network: The Effect of Reparameterization in Solving Differential Equations'\n---\n\nIntroduction" +"---\nabstract: 'We study the set of incentive compatible and efficient two-sided matching mechanisms. We classify all such mechanisms under an additional assumption \u2013 \u201cgender-neutrality\" \u2013 which guarantees that the two sides be treated symmetrically. All group strategy-proof, efficient, and gender-neutral mechanisms are recursive and the outcome is decided in a sequence of rounds. In each round two agents are selected, one from each side. These agents are either \u201cmatched-by-default\" or \u201cunmatched-by-default.\" In the former case either of the selected agents can unilaterally force the other to match with them while in the latter case, they may only match together if both agree. In either case, if this pair of agents is not matched together, each gets their top choices among the set of remaining agents. As an important step in the characterization, we first show that in one-sided matching all group strategy-proof and efficient mechanisms are sequential dictatorships. An immediate corollary is that there are no individually rational, group strategy-proof and efficient one-sided matching mechanisms.'\nauthor:\n- 'Sophie Bade[^1]'\n- 'Joseph Root[^2]'\nbibliography:\n- 'matching.bib'\ndate: January 2023\ntitle: 'Royal Processions: Incentives, Efficiency and Fairness in Two-sided Matching'\n---\n\nIntroduction\n============\n\ninitiated the study of stability in two-sided matching" +"---\nabstract: 'Modern data aggregation often takes the form of a platform collecting data from a network of users. More than ever, these users are now requesting that the data they provide is protected with a guarantee of privacy. 
This has led to the study of optimal data acquisition frameworks, where the optimality criterion is typically the maximization of utility for the agent trying to acquire the data. This involves determining how to allocate payments to users for the purchase of their data at various privacy levels. The main goal of this paper is to characterize a *fair* amount to pay users for their data at a given privacy level. We propose an axiomatic definition of fairness, analogous to the celebrated Shapley value. Two concepts for fairness are introduced. The first treats the platform and users as members of a common coalition and provides a complete description of how to divide the utility among the platform and users. In the second concept, fairness is defined only among users, leading to a potential fairness-constrained mechanism design problem for the platform. We consider explicit examples involving private heterogeneous data and show how these notions of fairness can be applied. To the best" +"---\nauthor:\n- 'G. Desprez[^1]'\n- 'V. Picouet'\n- 'T. Moutard'\n- 'S. Arnouts'\n- 'M. Sawicki [^2]'\n- 'J. Coupon'\n- 'S. Gwyn'\n- 'L. Chen'\n- 'J. Huang'\n- 'A. Golob'\n- 'H. Furusawa'\n- 'H. Ikeda'\n- 'S. Paltani'\n- 'C. Cheng'\n- 'W. Hartley'\n- 'B. C. Hsieh'\n- 'O. Ilbert'\n- 'O. B. Kauffmann'\n- 'H. J. McCracken'\n- 'M. Shuntov'\n- 'M. Tanaka'\n- 'S. Toft'\n- 'L. Tresse'\n- 'J. R. Weaver'\nbibliography:\n- 'references.bib'\ndate: 'Received date; accepted date'\nsubtitle: '$U+grizy(+YJHK_s)$ photometry and photometric redshifts for 18M galaxies in the $20~{\\rm deg}^2$ of the HSC-SSP Deep and ultraDeep fields'\ntitle: 'Combining the CLAUDS & HSC-SSP surveys'\n---\n\nIntroduction {#sec:intro}\n============\n\nDeep, wide-area multiband imaging surveys play a pivotal role in our studies of the Universe and of the formation and evolution of its content (e.g., CANDELS, @Grogin2011; COSMOS, @Scoville2007; CFHTLS, @Hudelot2012; DES, @Abbott2018). They are expected to continue to do so for the foreseeable future given the large investment of resources into projects such as LSST on the Rubin Observatory [@Ivezic2019], *Euclid* [@Laureijs2011], and Roman Space Telescope [@Akeson2019]. While spectroscopy \u2014 particularly when highly multiplexed \u2014 can yield detailed physical information on" +"---\nabstract: |\n In recent decades, due to huge technological growth, it has become increasingly common for temporal data to rapidly accumulate in vast amounts. This provides an opportunity for extracting valuable information through the estimation of increasingly precise models, but at the same time it imposes the challenge of continuously updating the models as new data become available.\n\n Currently available methods for addressing this problem, the so-called online learning methods, use current parameter estimates and new data to update the estimators. These approaches avoid using the full raw data, thereby speeding up the computations.\n\n In this work we consider three online learning algorithms for parameter estimation in the context of time series models. In particular, the methods implemented are: gradient descent, Newton-step and Kalman filter recursions. These algorithms are applied to the recently developed irregularly observed autoregressive (iAR) model. 
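A sketch of the flavour of update such methods perform, for the simplest regularly observed AR(1) case (the iAR model additionally accounts for irregular time gaps, a detail omitted here; the coefficient and learning rate are illustrative):

```python
import numpy as np

def online_ar1(y, lr=0.01):
    """Online gradient descent for the AR(1) coefficient phi: after each
    new observation, take one gradient step on the squared one-step-ahead
    prediction error, never revisiting old data."""
    phi = 0.0
    for t in range(1, len(y)):
        err = y[t] - phi * y[t - 1]   # prediction error with current phi
        phi += lr * err * y[t - 1]    # single cheap update per observation
    return phi

rng = np.random.default_rng(0)
y = np.zeros(5000)
for t in range(1, len(y)):            # simulate an AR(1) series with phi = 0.7
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()
print(online_ar1(y))                  # roughly 0.7
```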
The estimation accuracy of the proposed methods is assessed by means of Monte Carlo experiments.\n\n The results obtained show that the proposed online estimation methods allow for a precise estimation of the parameters that generate the data both for the regularly and irregularly observed time series. These online approaches are numerically efficient, allowing substantial computational" +"---\nabstract: 'We consider a stochastic volatility model where the dynamics of the volatility are described by linear functions of the (time extended) signature of a primary underlying process, which is supposed to be some multidimensional continuous semimartingale. Under the additional assumption that this primary process is of polynomial type, we obtain closed form expressions for the VIX squared, exploiting the fact that the truncated signature of a polynomial process is again a polynomial process. Adding to such a primary process the Brownian motion driving the stock price, allows then to express both the log-price and the VIX squared as linear functions of the signature of the corresponding augmented process. This feature can then be efficiently used for pricing and calibration purposes. Indeed, as the signature samples can be easily precomputed, the calibration task can be split into an offline sampling and a standard optimization. For both the SPX and VIX options we obtain highly accurate calibration results, showing that this model class allows to solve the joint calibration problem without adding jumps or rough volatility.'\nauthor:\n- 'Christa Cuchiero[^1]'\n- 'Guido Gazzani[^2]'\n- 'Janka M\u00f6ller[^3]'\n- 'Sara Svaluto-Ferro[^4]'\ntitle: 'Joint calibration to SPX and VIX options with signature-based models'" +"---\nabstract: 'A connected graph is called a block graph if each of its blocks is a complete graph. Let $\\mathbf{Bl}(\\textbf{k}, \\varphi)$ be the class of block graph on $\\textbf{k}$ vertices with given dissociation number $\\varphi$. In this article, we have obtained a block graph $\\mathbb{B}_{\\textbf{k},\\varphi}$ in $\\mathbf{Bl}(\\textbf{k}, \\varphi)$ that uniquely attains the maximum spectral radius $\\rho(G)$ among all graphs $G$ in $\\mathbf{Bl}(\\textbf{k}, \\varphi)$. Furthermore, we also provide bounds on $\\rho(\\mathbb{B}_{\\textbf{k},\\varphi})$.'\nauthor:\n- 'Joyentanuj Das[^1] and Sumit Mohanty[^2]'\ntitle: |\n On the spectral radius of block graphs with a given\\\n dissociation number \n---\n\n[*Keywords*:]{} complete graphs, block graphs, dissociation number, spectral radius, bounds.\n\n[**MSC**:]{} 05C50, 15A18\n\nIntroduction {#sec:intro}\n============\n\nLet $G=(V(G),E(G))$ be a finite, simple, connected graph with $V(G)$ as the set of vertices and $E(G)$ as the set of edges in $G$. We write $u\\sim v$ to indicate that the vertices $u,v \\in V(G)$ are adjacent in $G$. The degree of the vertex $v$, denoted by $d_G(v)$, equals the number of vertices in $V$ that are adjacent to $v$. A graph $H$ is said to be a subgraph of $G$ if $V(H) \\subset V(G)$ and $E(H) \\subset E(G)$. For any subset $S \\subset V (G)$, a subgraph $H$ of" +"---\nabstract: 'We present a novel solver technique for the anisotropic heat flux equation, aimed at the high level of anisotropy seen in magnetic confinement fusion plasmas. Such problems pose two major challenges: (i) discretization accuracy and (ii) efficient implicit linear solvers. 
We simultaneously address each of these challenges by constructing a new finite element discretization with excellent accuracy properties, tailored to a novel solver approach based on algebraic multigrid (AMG) methods designed for advective operators. We pose the problem in a mixed formulation, introducing the heat flux as an auxiliary variable and discretizing the temperature and auxiliary fields in a discontinuous Galerkin space. The resulting block matrix system is then reordered and solved using an approach in which two advection operators are inverted using AMG solvers based on approximate ideal restriction (AIR), which is particularly efficient for upwind discontinuous Galerkin discretizations of advection. To ensure that the advection operators are non-singular, in this paper we restrict ourselves to considering open (acyclic) magnetic field lines. We demonstrate the proposed discretization\u2019s superior accuracy over other discretizations of anisotropic heat flux, achieving error $1000\\times$ smaller for anisotropy ratio of $10^9$, while also demonstrating fast convergence of the proposed iterative solver in highly" +"---\nabstract: 'Originally proposed as a method for knowledge transfer from one model to another, some recent studies have suggested that knowledge distillation ([kd]{}) is in fact a form of regularization. Perhaps the strongest argument of all for this new perspective comes from its apparent similarities with label smoothing ([ls]{}). Here we re-examine this stated equivalence between the two methods by comparing the predictive confidences of the models they train. Experiments on four text classification tasks involving models of different sizes show that: (a)\u00a0In most settings, [kd]{} and [ls]{} drive model confidence in completely opposite directions, and (b) In [kd]{}, the student inherits not only its knowledge but also its confidence from the teacher, reinforcing the classical knowledge transfer view.'\nauthor:\n- |\n Md Arafat Sultan\\\n IBM Research AI\\\n arafat.sultan@ibm.com\nbibliography:\n- 'custom.bib'\ntitle: 'Knowledge Distillation $\\approx$ Label Smoothing: Fact or Fallacy?'\n---\n\n=1\n\nIntroduction {#section:introduction}\n============\n\nKnowledge distillation ([kd]{}) was originally proposed as a mechanism for a small and lightweight *student* model to learn to perform a task, [*e.g.*]{}, classification, from a higher-capacity *teacher* model [@hinton2014distilling]. In recent years, however, this view of [kd]{} as a knowledge transfer process has come" +"---\nabstract: 'Some quality indicators have been proposed for benchmarking preference-based evolutionary multi-objective optimization algorithms using a reference point. Although a systematic review and analysis of the quality indicators are helpful for both benchmarking and practical decision-making, neither has been conducted. In this context, first, this paper reviews existing regions of interest and quality indicators for preference-based evolutionary multi-objective optimization using the reference point. We point out that each quality indicator was designed for a different region of interest. Then, this paper investigates the properties of the quality indicators. We demonstrate that an achievement scalarizing function value is not always consistent with the distance from a solution to the reference point in the objective space. We observe that the regions of interest can be significantly different depending on the position of the reference point and the shape of the Pareto front. 
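The inconsistency mentioned above between the achievement scalarizing function (ASF) and the plain distance to the reference point is easy to reproduce; a minimal sketch with the standard weighted Chebyshev ASF and illustrative points (for minimization of both objectives):

```python
import numpy as np

def asf(f, z_ref, w):
    """Standard weighted Chebyshev achievement scalarizing function."""
    return np.max((f - z_ref) / w)

z = np.array([0.3, 0.3])        # decision maker's reference point
w = np.ones(2)
a = np.array([0.32, 0.32])      # slightly worse than z in both objectives
c = np.array([0.10, 0.25])      # strictly better than z in both objectives

# ASF prefers c (negative value: beyond the reference point) ...
print(asf(a, z, w), asf(c, z, w))                    # 0.02 vs -0.05
# ... while Euclidean distance to the reference point prefers a.
print(np.linalg.norm(a - z), np.linalg.norm(c - z))  # ~0.028 vs ~0.206
```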
We identify undesirable properties of some quality indicators. We also show that the ranking of preference-based evolutionary multi-objective optimization algorithms depends on the choice of quality indicators.'\nauthor:\n- 'Ryoji\u00a0Tanabe,\u00a0\u00a0and\u00a0Ke\u00a0Li,\u00a0 [^1] [^2]'\nbibliography:\n- 'reference.bib'\ntitle: 'Quality Indicators for Preference-based Evolutionary Multi-objective Optimization Using a Reference Point: A Review and Analysis'\n---\n\nPreference-based evolutionary multi-objective" +"---\naddress: |\n $^{1}$CNR-IOM-Democritos, c/o SISSA (International School for Advanced Studies), Via Bonomea 265, I-34136, Trieste, Italy\\\n $^{2}$Dipartimento di Fisica Teorica, Universit\u00e0 Trieste, Strada Costiera 11, I-34014 Trieste, Italy\\\n $^{3}$Univ Lyon, Ens de Lyon, CNRS, Laboratoire de Physique, F-69342 Lyon, France\\\n $^{4}$Dipartimento di Fisica \u201cE. R. Caianiello\u201d, Universit\u00e0 degli Studi di Salerno and CNR-SPIN, Via Giovanni Paolo II, I-84084 Fisciano (Sa), Italy\\\n $^{5}$Dipartimento di Fisica e Astronomia \u201cGalileo Galilei\u201d, INFN and QTech, Universit\u00e0 di Padova, via Marzolo 8, I-35131 Padova, Italy\\\n $^{6}$INO-CNR, Unit\u00e0 di Sesto Fiorentino, via Nello Carrara 1, I-50019 Sesto Fiorentino (Firenze), Italy\n---\n\nIntroduction\n============\n\nTrapped ultracold atomic gases offer a convenient and flexible platform to explore the fascinating aspects of many-body physics in one dimension [@cazalilla_review_bosons; @mistadikis_1d]. In particular, in the last years the one dimensional dipolar Bose gas has been largely investigated theoretically, see for instance Ref.\u00a0 [@baranov_condensed_2012]. This kind of study is nowadays a vibrant topic of research, triggered by recent observations of self-bound droplets in attractive bosonic mixtures [@tarruell2018a; @Semeghini2018; @derrico2019] and in dipolar atoms [@pfau2016; @luo2021]. In the real experiment, however, the system is not strictly one dimensional: the system made of identical atoms of mass $m$ is usually confined by a" +"---\nabstract: |\n This paper shows that to compute the Haar state on $\\mathcal{O}(SL_q(n))$, it suffices to compute the Haar states of a special type of monomials which we define as standard monomials. Then, we provide an algorithm to explicitly compute the Haar states of standard monomials on $\\mathcal{O}(SL_q(3))$ with reasonable computational cost. The numerical results on $\\mathcal{O}(SL_q(3))$ will be used in the future study of the $q$-deformed Weingarten function.\n\n *Keywords* \u2014 Quantum groups; quantum special linear group; Haar state.\naddress: 'Math department, Texas A&M University, College Station, TX 77843, USA'\nauthor:\n- Ting Lu\nbibliography:\n- 'reference.bib'\ntitle: 'Computing the Haar state on ${\\mathcal{O}(SL_q(3))}$'\n---\n\nIntroduction\n============\n\nThe Haar measure on a compact topological group is a well-studied object. In particular, when the group is $U(n)$, the group of $n\\times n$ unitary matrices, there is an elegant formula for the integral of matrix coefficients with respect to the Haar measure. This formula is given by so-called Weingarten functions, introduced by Collins in 2003\u00a0[@collins2003moments]. 
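In the classical ($q=1$) case, the Weingarten formula gives, for example, $\\int_{U(n)} |U_{11}|^2\\, dU = 1/n$; a quick Monte Carlo sanity check of that classical value (illustrative of the classical Haar integral only, not the $q$-deformed computation of the paper):

```python
import numpy as np
from scipy.stats import unitary_group  # samples Haar-distributed unitaries

np.random.seed(0)
n, samples = 3, 20000
estimate = np.mean([abs(unitary_group.rvs(n)[0, 0]) ** 2 for _ in range(samples)])
print(estimate, 1.0 / n)  # Monte Carlo estimate vs exact value 1/n
```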
The current paper will study a $q$-deformation of the Haar measure on the Drinfeld\u2013Jimbo\u00a0[@drinfeld1986quantum]\u00a0[@jimbo1985aq] quantum groups $\\mathcal{O}(SL_q(n))$ which is dual to $U_q(sl_n)$\u00a0[@korogodski1998algebras].\n\nIn the context of $\\mathcal{O}(SL_q(n))$, the most relevant algebraic structure" +"---\nabstract: 'Dialogue summarization aims to condense a given dialogue into a simple and focused summary text. Typically, both the roles\u2019 viewpoints and conversational topics change in the dialogue stream. Thus, how to effectively handle the shifting topics and select the most salient utterance becomes one of the major challenges of this task. In this paper, we propose a novel topic-aware Global-Local Centrality (GLC) model to help select the salient context from all sub-topics. The centralities are constructed at both the global and local levels. The global one aims to identify vital sub-topics in the dialogue and the local one aims to select the most important context in each sub-topic. Specifically, the GLC collects sub-topics based on the utterance representations, and each utterance is aligned with one sub-topic. Based on the sub-topics, the GLC calculates global- and local-level centralities. Finally, we combine the two to guide the model to capture both salient context and sub-topics when generating summaries. Experimental results show that our model outperforms strong baselines on three public dialogue summarization datasets: CSDS, MC, and SAMSUM. Further analysis demonstrates that our GLC can accurately identify vital content from sub-topics.\u00a0[^1]'\nauthor:\n- |\n Xinnian Liang^1^, Shuangzhi Wu^2^, Chenhao Cui^2^," +"---\nabstract: 'The Schr\u00f6dinger equation admits smooth and finite solutions that spontaneously evolve into a singularity, even for a free particle. This blowup is generally ascribed to the intrinsic dispersive character of the associated time evolution. We resort to the notion of quantum trajectories to reinterpret this singular behavior. We show that the blowup can be directly related to local phase variations, which generate an underlying velocity field responsible for driving the quantum flux toward the singular region.'\nauthor:\n- 'Angel S. Sanz'\n- 'Luis\u00a0L. S\u00e1nchez-Soto'\n- Andrea Aiello\ntitle: A quantum trajectory analysis of singular wave functions\n---\n\nIntroduction\n============\n\nThe Schr\u00f6dinger equation is, perhaps, the prototype of a dispersive equation; that is, if no boundary conditions are imposed, its wave solutions spread out in space as they evolve in time\u00a0[@Tao:2006aa]. A frequent way to quantify this dispersion is by the so-called dispersive estimates, a topic with a long history\u00a0[@Schlag:2007aa; @Mandel:2020aa; @Dietze:2021aa] and whose main goal is to establish tight bounds on the decay of the solutions.\n\nRecently, it has been pointed out that the Schr\u00f6dinger equation, even for a free particle, presents dispersive singularities\u00a0[@Peres:2002aa; @Bona:2010aa]: an initial square-integrable profile $\\psi(x,0)$ could result in a" +"---\nabstract: 'We present in this reference paper an instrumental project dedicated to the monitoring of solar activity during solar cycle 25. It concerns the survey of fast evolving chromospheric events involved in Space Weather, such as flares, coronal mass ejections, filament instabilities and Moreton waves. 
Coronal waves are produced by large flares around the solar maximum and propagate with chromospheric counterparts; they are rare, faint, difficult to observe, and for that reason, challenging. They require systematic observations with automatic, fast and multi-channel optical instruments. MeteoSpace is a high cadence telescope assembly specially designed for that purpose. The large amount of data will be freely available to the solar community. We describe in detail the optical design, the qualification tests and capabilities of the telescopes, and show how waves can be detected. MeteoSpace will be installed at Calern observatory (C\u00f4te d\u2019Azur, 1270 m) and will be in full operation in 2023.'\nauthor:\n- 'Jean-Marie Malherbe'\n- Thierry Corbard\n- Ga\u00eble Barbary\n- Fr\u00e9d\u00e9ric Morand\n- Claude Collin\n- Daniel Crussaire\n- Florence Guitton\nbibliography:\n- 'papier.bib'\ndate: 'Received: 15 December 2021 / Accepted: date'\ntitle: 'Monitoring fast solar chromospheric activity: the MeteoSpace project'\n---\n\nIntroduction {#sec:Intro}\n============\n\nSolar activity is" +"---\nabstract: 'In addition to longitudinal spin angular momentum (SAM) along the axis of propagation of light, spatially structured electromagnetic fields such as evanescent waves and focused beams have recently been found to possess transverse SAM in the direction perpendicular to the axis of propagation. In particular, the SAM of SPPs with spatial structure has been extensively studied in the last decade after it became clear that evanescent fields with spatially structured energy flow generate three-dimensional spin texture. Here we present numerical calculations of the space-time surface plasmon polariton (ST-SPP) wave packet, a plasmonic bullet that propagates at an arbitrary group velocity while maintaining its spatial distribution. ST-SPP wave packets with complex spatial structure and energy flow density distribution determined by the group velocity are found to propagate with accompanying three-dimensional spin texture and finite topological charge density. Furthermore, the spatial distribution of the spin texture and topological charge density determined by the spatial structure of the SPP is controllable, and the deformation associated with propagation is negligible. ST-SPP wave packets, which can stably transport customizable three-dimensional spin textures and topological charge densities, can be excellent subjects of observation in studies of spin photonics and optical topological materials.'\nauthor:\n- Naoki" +"---\nabstract: 'Evolutionary Reinforcement Learning (ERL), which applies Evolutionary Algorithms (EAs) to optimize the weight parameters of Deep Neural Network (DNN) based policies, has been widely regarded as an alternative to traditional reinforcement learning methods. However, the evaluation of the iteratively generated population usually requires a large amount of computational time and can be prohibitively expensive, which may potentially restrict the applicability of ERL. Surrogate models are often used to reduce the computational burden of evaluation in EAs. Unfortunately, in ERL, each policy individual usually comprises millions of DNN weight parameters. This high-dimensional representation of policies poses a great challenge to applying surrogates in ERL to speed up training. This paper proposes the PE-SAERL framework to enable, for the first time, surrogate-assisted evolutionary reinforcement learning via policy embedding (PE). 
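The abstract does not spell out the PE-SAERL components. The following is a hypothetical minimal sketch of the general idea, projecting high-dimensional policy weights into a low-dimensional embedding and fitting a cheap surrogate there; it should not be read as the authors' actual algorithm, and all names and thresholds are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
D, d = 100_000, 32                        # DNN weight count vs. embedding size
P = rng.normal(size=(d, D)) / np.sqrt(d)  # fixed random projection (PE stand-in)

archive_z, archive_f = [], []             # embeddings and true fitness values
surrogate = KNeighborsRegressor(n_neighbors=5)

def fitness(theta, rollout):
    """Return a cheap surrogate estimate once enough real rollouts are archived;
    otherwise run the expensive environment evaluation and update the surrogate."""
    z = P @ theta
    if len(archive_f) >= 50:
        return float(surrogate.predict(z[None, :])[0])
    f = rollout(theta)                    # expensive true evaluation
    archive_z.append(z); archive_f.append(f)
    surrogate.fit(np.asarray(archive_z), np.asarray(archive_f))
    return f
```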
Empirical results on 5 Atari games show that the proposed method can perform more efficiently than four state-of-the-art algorithms. The training process is accelerated by up to 7x on the tested games, compared to its counterpart without the surrogate and PE.'\nauthor:\n- Lan Tang\n- 'Xiaxi Li[^1]'\n- Jinyuan Zhang\n- Guiying Li\n- 'Peng Yang[^2] ()'\n- Ke Tang\nbibliography:\n- 'references.bib'\ntitle: 'Enabling surrogate-assisted evolutionary" +"---\nabstract: 'There is a current interest in quantum thermodynamics in the context of open quantum systems. An important issue is the consistency of quantum thermodynamics, in particular the second law of thermodynamics, i.e., the flow of heat from a hot reservoir to a cold reservoir. Here, recent emphasis has been on composite systems and in particular the issue regarding the application of local or global master equations. In order to contribute to this discussion we consider two cases, namely a single qubit as an example and, as a simple composite system, two coupled qubits driven by two heat reservoirs at different temperatures. Applying a global Lindblad master equation approach we present explicit expressions for the heat currents in agreement with the second law of thermodynamics. The analysis is carried out in the Born-Markov approximation. We also discuss issues regarding the possible presence of coherences in the steady state.'\nauthor:\n- 'Hans C. Fogedby'\ntitle: Heat currents in qubit systems\n---\n\nIntroduction\n============\n\nThere is a current interest in quantum thermodynamics [@Alicki79; @Barra15; @Kosloff13a; @Kosloff14; @Mari12; @Kosloff19; @Levy20; @Colla22; @Linden10]. Thermodynamics is a universal theory and must from a fundamental point of view emerge from a microscopic point of" +"---\nabstract: 'In this paper, we investigate the mixed-state entanglement in a model of p-wave superconductivity phase transition using holographic methods. We calculate several entanglement measures, including holographic entanglement entropy (HEE), mutual information (MI), and entanglement wedge cross-section (EWCS). Our results show that these measures display critical behavior at the phase transition points, with the EWCS exhibiting opposite temperature behavior compared to the HEE. Additionally, we find that the critical exponents of all entanglement measures are twice those of the condensate. Moreover, we find that the EWCS is a more sensitive indicator of the critical behavior of phase transitions than the HEE. Furthermore, we uncover a universal inequality in the growth rates of EWCS and MI near critical points in thermal phase transitions, such as p-wave and s-wave superconductivity, suggesting that MI captures more information than EWCS when a phase transition first occurs.'\nauthor:\n- 'Zhe Yang $^{1}$'\n- 'Fang-Jing Cheng $^{2}$'\n- 'Chao Niu $^{1}$'\n- 'Cheng-Yong Zhang $^{1}$'\n- 'Peng Liu $^{1}$'\ntitle: ' The mixed-state entanglement in holographic p-wave superconductor model '\n---\n\n[^1]\n\nIntroduction {#sec:intro}\n============\n\nQuantum entanglement is the most crucial characteristic of quantum systems and lays the key foundation of quantum information theory." +"---\nabstract: 'We present LM-GAN, an HDR sky model that generates photorealistic environment maps with weathered skies. Our sky model retains the flexibility of traditional parametric models and enables the reproduction of photorealistic all-weather skies with visual diversity in cloud formations. 
This is achieved with flexible and intuitive user controls for parameters, including sun position, sky color, and atmospheric turbidity. Our method is trained directly from inputs fitted to real HDR skies, learning both to preserve the input\u2019s illumination and to correlate it to the real reference\u2019s atmospheric components in an end-to-end manner. Our main contributions are a generative model trained on both sky appearance and scene rendering losses, as well as a novel sky-parameter fitting algorithm. We demonstrate that our fitting algorithm surpasses existing approaches in both accuracy and sky fidelity, and also provide quantitative and qualitative analyses, demonstrating LM-GAN\u2019s ability to match parametric input to photorealistic all-weather skies. The generated HDR environment maps are ready to use in 3D rendering engines and can be applied to a wide range of image-based lighting applications.'\nauthor:\n- Lucas Valen\u00e7a\n- Ian Maquignaz\n- Hadi Moazen\n- Rishikesh Madan\n- 'Yannick Hold-Geoffroy'\n- 'Jean-Fran\u00e7ois Lalonde'\ntitle: 'LM-GAN: A Photorealistic All-Weather Parametric Sky" +"---\nabstract: 'The robustness of the Kalman filter to double talk and its rapid convergence make it a popular approach for addressing acoustic echo cancellation (AEC) challenges. However, the inability to model nonlinearity and the need to tune control parameters cast limitations on such adaptive filtering algorithms. In this paper, we integrate the frequency domain Kalman filter (FDKF) and deep neural networks (DNNs) into a hybrid method, called NeuralKalman, to leverage the advantages of deep learning and adaptive filtering algorithms. Specifically, we employ a DNN to estimate nonlinearly distorted far-end signals, a transition factor, and the nonlinear transition function in the state equation of the FDKF algorithm. Experimental results show that the proposed NeuralKalman improves the performance of FDKF significantly and outperforms strong baseline methods.'\naddress: |\n $^1$The Ohio State University, Columbus, OH, USA\\\n $^2$Tencent AI Lab, Bellevue, WA, USA\nbibliography:\n- 'mybib.bib'\ntitle: 'NeuralKalman: A Learnable Kalman Filter for Acoustic Echo Cancellation'\n---\n\n**Index Terms**: Acoustic echo cancellation, Kalman filter, deep learning, NeuralKalman\n\nIntroduction {#sec:intro}\n============\n\nAcoustic echo cancellation (AEC), as an active and challenging research problem in the domain of speech processing, has been studied for decades and is widely used in mobile communication and teleconferencing systems. The" +"---\nabstract: 'Federated learning (FL), as an effective decentralized distributed learning approach, enables multiple institutions to jointly train a model without sharing their local data. However, the domain feature shift caused by different acquisition devices/clients substantially degrades the performance of the FL model. Furthermore, most existing FL approaches aim to improve accuracy without considering reliability (e.g., confidence or uncertainty). The predictions are thus unreliable when deployed in safety-critical applications. Therefore, we aim at improving the performance of FL under non-IID domain feature shift while making the model more reliable. In this paper, we propose a novel reliable federated disentangling network, termed RFedDis, which utilizes feature disentangling to capture the global domain-invariant cross-client representation and preserve local client-specific feature learning. 
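The RFedDis architecture is not detailed in the abstract. Below is a generic, hypothetical sketch of feature disentangling with a shared branch for the domain-invariant (global) representation and a private branch for client-specific (local) features; in an FL setting only the shared branch would be averaged across clients:

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Two-branch encoder: a shared branch for the domain-invariant (global)
    representation and a private branch for client-specific (local) features.
    A hypothetical sketch, not the RFedDis design."""
    def __init__(self, in_dim=256, z_dim=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                    nn.Linear(128, z_dim))
        self.private = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, z_dim))

    def forward(self, x):
        return self.shared(x), self.private(x)

# Only self.shared's weights would be federated (averaged across clients);
# each client keeps self.private local.
enc = DisentangledEncoder()
z_global, z_local = enc(torch.randn(8, 256))
```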
Meanwhile, to effectively integrate the decoupled features, an uncertainty-aware decision fusion is also introduced to guide the network in dynamically integrating the decoupled features at the evidence level, while producing a reliable prediction with an estimated uncertainty. To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling, which enhances the performance and reliability of FL in non-IID domain features. Extensive experimental results show" +"---\nabstract: 'In this work, we present UV completions of the recently proposed number-changing Co-SIMP freeze-out mechanism. In contrast to the standard cannibalistic-type dark matter picture that occurs entirely in the dark sector, the $3\\to 2$ process setting the relic abundance in this case requires one Standard Model particle in the initial and final states. This prevents the dark sector from overheating and leads to rich experimental signatures. We generate the Co-SIMP interaction with a dark sector consisting of two scalars, with the mediator coupling to either nucleons or electrons. In either case, *the dark matter candidate is naturally light*: nucleophilic interactions favor the sub-GeV mass range and leptophilic interactions favor the sub-MeV mass range. Viable thermal models in these lighter mass regimes are particularly intriguing to study at this time, as new developments in low-threshold detector technologies will begin probing this region of parameter space. While particles in the sub-MeV regime can potentially impact light element formation and CMB decoupling, we show that a late-time phase transition opens up large fractions of parameter space. These thermal light dark matter models can instead be tested with dedicated experiments. We discuss the viable parameter space in each scenario in light of" +"---\nabstract: 'Semantic segmentation is an important technique for environment perception in intelligent transportation systems. With the rapid development of convolutional neural networks (CNNs), road scene analysis can usually achieve satisfactory results in the source domain. However, guaranteeing good generalization to different target domain scenarios remains a significant challenge. Recently, semi-supervised learning and active learning have been proposed to alleviate this problem. Semi-supervised learning can improve model accuracy with massive unlabeled data, but noisy pseudo labels may be generated when training data are limited or imbalanced, and the resulting models are suboptimal if human guidance is absent. Active learning can select more effective data for human intervention, but model accuracy cannot be fully improved because the massive unlabeled data are left unused, and the probability of querying sub-optimal samples increases when the domain difference is too large, raising the annotation cost. This paper proposes an iterative loop method combining active and semi-supervised learning for domain adaptive semantic segmentation. The method first uses semi-supervised learning on massive unlabeled data to improve model accuracy and provide more accurate selection models for active learning. 
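A minimal sketch of such an iterative loop, adopting confident pseudo-labels and querying the least confident samples for manual annotation, could look as follows; the sklearn-style interfaces and thresholds are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def iterative_loop(model, labeled, unlabeled, oracle, rounds=5,
                   conf_thresh=0.9, query_budget=100):
    """Alternate semi-supervised pseudo-labeling with active querying."""
    for _ in range(rounds):
        model.fit(*labeled)                        # supervised training step
        probs = model.predict_proba(unlabeled)
        conf = probs.max(axis=1)

        # Semi-supervised step: adopt confident pseudo-labels.
        pseudo = conf >= conf_thresh
        X = np.vstack([labeled[0], unlabeled[pseudo]])
        y = np.concatenate([labeled[1], probs[pseudo].argmax(axis=1)])

        # Active step: a human oracle annotates the least confident samples.
        query = np.argsort(conf)[:query_budget]
        X = np.vstack([X, unlabeled[query]])
        y = np.concatenate([y, oracle(unlabeled[query])])

        labeled = (X, y)
        keep = ~pseudo
        keep[query] = False
        unlabeled = unlabeled[keep]
    return model
```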
Secondly, combined with the predictive uncertainty sample selection strategy of active learning, manual intervention is used to correct" +"---\nabstract: |\n In this work, we present new data on the $^{182,183,184}$W($\\gamma,n$) cross sections, utilizing a quasi-monochromatic photon beam produced at the NewSUBARU synchrotron radiation facility. Further, we have extracted the nuclear level density and $\\gamma$-ray strength function of $^{186}$W from data on the $^{186}$W($\\alpha,\\alpha^\\prime\\gamma$)$^{186}$W reaction measured at the Oslo Cyclotron Laboratory. Combining previous measurements on the $^{186}$W($\\gamma,n$) cross section with our new $^{182,183,184}$W($\\gamma,n$) and ($\\alpha,\\alpha^\\prime\\gamma$)$^{186}$W data sets, we have deduced the $^{186}$W $\\gamma$-ray strength function in the range of $1 < E_\\gamma < 6$ MeV and $7 < E_\\gamma < 14$ MeV.\n\n Our data are used to extract the level density and $\\gamma$-ray strength functions needed as input to the nuclear-reaction code , providing an indirect, experimental constraint for the $^{185}$W($n,\\gamma$)$^{186}$W cross section and reaction rate. Compared to the recommended Maxwellian-averaged cross section (MACS) in the KADoNiS-1.0 database, our results are on average lower for the relevant energy range $k_B T \\in [5,100]$ keV, and we provide a smaller uncertainty for the MACS. The theoretical values of Bao *et al.* and the cross section experimentally constrained by photoneutron data of Sonnabend *et al.* are significantly higher than our result. The lower value by Mohr *et al.* is" +"---\nabstract: 'This letter presents a continuous probabilistic modeling methodology for spatial point cloud data using finite Gaussian Mixture Models (GMMs) where the number of components is adapted based on the scene complexity. Few hierarchical and adaptive methods have been proposed to address the challenge of balancing model fidelity with size. Instead, state-of-the-art mapping approaches require tuning parameters for specific use cases, but do not generalize across diverse environments. To address this gap, we utilize a self-organizing principle from information-theoretic learning to automatically adapt the complexity of the GMM model based on the relevant information in the sensor data. The approach is evaluated against existing point cloud modeling techniques on real-world data with varying degrees of scene complexity.'\nauthor:\n- 'Kshitij Goel, Nathan Michael, and Wennie Tabib[^1][^2] [^3][^4]'\nbibliography:\n- 'refs.bib'\ntitle: 'Probabilistic Point Cloud Modeling via Self-Organizing Gaussian Mixture Models'\n---\n\nMapping, RGB-D Perception, Field Robots\n\nINTRODUCTION {#sec:intro}\n============\n\nPoint cloud data are used in physical simulations\u00a0[@Ummenhofer2020Lagrangian], computer graphics\u00a0[@vedaldi_neural_2020], and robotic perception\u00a0[@tabib_autonomous_2021]. For robotic perception applications, in particular, three-dimensional (3D) perception algorithms do not operate directly on raw point cloud data; instead, they subsample, discretize, or create an intermediate representation\u00a0[@eckart_compact_2017]. Gaussian mixture models (GMMs) have
As a by-product we obtain conjectures for three new symmetry classes of plane partitions and prove that another new symmetry class, namely *quasi transpose complementary plane partitions*, is equinumerous to symmetric plane partitions.'\naddress: 'University of Vienna, Austria'\nauthor:\n- 'Florian Schreier-Aigner'\nbibliography:\n- 'LiteraturListe.bib'\ntitle: Fully complementary higher dimensional partitions\n---\n\nIntroduction {#sec: intro}\n============\n\nA [*plane partition*]{} $\\pi$ is an array $(\\pi_{i,j})$ of non-negative integers with all but finitely many entries equal to $0$, which is weakly decreasing along rows and columns, i.e., $\\pi_{i,j} \\geq \\pi_{i+1,j}$ and $\\pi_{i,j} \\geq \\pi_{i,j+1}$; see Figure\u00a0\\[fig: PP\\] (left) for an example. MacMahon [@MacMahon97] introduced them at the end of the 19th century as two dimensional generalisations of ordinary partitions and proved in [@MacMahon16] two enumeration results: He showed that the generating function of plane partitions is given by $$\\sum_{\\pi} q^{|\\pi|} = \\prod_{i\\geq 1} \\frac{1}{(1-q^i)^i},$$ where the sum is over all plane partitions and $|\\pi|$" +"---\nauthor:\n- '\u00c1ngel Rinc[\u00f3]{}n [^1]'\n- 'Grigoris Panotopoulos [^2]'\n- 'Il[\u00ed]{}dio Lopes [^3]'\nbibliography:\n- 'biblio\\_1.bib'\ndate: 'Received: date / Revised version: date'\ntitle: Anisotropic stars made of exotic matter within the complexity factor formalism\n---\n\nIntroduction\n============\n\nAny reasonable modern cosmological model must include Dark Energy (DE). Nevertheless, the nature and origin of Dark Energy remain a mystery despite its fundamental importance in modern theoretical cosmology [@SupernovaSearchTeam:1998fmf; @SupernovaCosmologyProject:1998vns; @Freedman:2003ys]. As is well known, a cosmological model made of only matter and radiation cannot lead to accelerated solutions for the universe as predicted by Einstein\u2019s Theory of General Relativity (GR) [@Einstein:1916vd]. This kind of solution is obtained by including a constant $\\Lambda$ in Einstein\u2019s field equations [@Einstein:1917ce], i.e., by adding the contribution of the dark energy. Despite its simplicity, such an accelerated cosmological model is in exceptional agreement with a vast amount of observational data. Such a cosmological model is known as the concordance cosmological model or the $\\Lambda$CDM model. Nevertheless, $\\Lambda$ suffers from the ongoing cosmological constant problem [@Weinberg:1988cp; @Zeldovich:1967gd]. Additionally, this $\\Lambda$\u2013problem is amplified by the current values" +"---\nabstract: 'The dynamic Schr\u00f6dinger bridge problem provides an appealing setting for solving constrained time-series data generation tasks posed as optimal transport problems. It consists of learning non-linear diffusion processes using efficient iterative solvers. Recent works have demonstrated state-of-the-art results ([*e.g.*]{}, in modelling single-cell embryo RNA sequences or sampling from complex posteriors) but are limited to learning bridges with only initial and terminal constraints. Our work extends this paradigm by proposing the Iterative Smoothing Bridge (ISB). We integrate Bayesian filtering and optimal control into learning the diffusion process, enabling the generation of constrained stochastic processes governed by sparse observations at intermediate stages and terminal constraints. 
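The ISB itself is considerably more involved, but a classical Brownian bridge gives a minimal runnable illustration of a diffusion constrained to hit a terminal value; the Euler-Maruyama discretization below is standard:

```python
import numpy as np

def brownian_bridge(x0, xT, T=1.0, n=1000, rng=np.random.default_rng(0)):
    """Euler-Maruyama simulation of the Brownian bridge SDE
    dX_t = (xT - X_t) / (T - t) dt + dW_t, pinned to xT at t = T.
    A classical, minimal example of a terminal-constrained diffusion;
    the ISB additionally handles sparse intermediate observations."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = (xT - x[k]) / (T - k * dt)
        x[k + 1] = x[k] + drift * dt + np.sqrt(dt) * rng.normal()
    return x

path = brownian_bridge(0.0, 2.0)
print(path[-1])   # close to 2.0 by construction
```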
We assess the effectiveness of our method on synthetic and real-world data generation tasks and we show that the ISB generalises well to high-dimensional data, is computationally efficient, and provides accurate estimates of the marginals at intermediate and terminal times.'\nauthor:\n- |\n Ella Tamir ella.tamir@aalto.fi\\\n Department of Computer Science\\\n Aalto University Martin Trapp martin.trapp@aalto.fi\\\n Department of Computer Science\\\n Aalto University Arno Solin arno.solin@aalto.fi\\\n Department of Computer Science\\\n Aalto University\ntitle: 'Transport with Support: Data-Conditional Diffusion Bridges'\n---\n\nIntroduction\n============\n\nGenerative diffusion models have gained increasing popularity and achieved impressive results in a variety of" +"---\nabstract: 'We discuss a new type of delay differential equation that exhibits resonating transient oscillations. The power spectrum peak of the dynamical trajectory reaches its maximum height when the delay is suitably tuned. Furthermore, our analysis of the resonant conditions for this equation has revealed a new connection between the solutions of the transcendental trigonometric equation and the Lambert $W$ function. These results offer fresh insights into the nonlinear dynamics induced by delayed feedback.'\nauthor:\n- |\n Kenta Ohira$^{1}$ and Toru Ohira$^{2}$\\\n \u00a0$^{1}$Future Value Creation Research Center,\\\n Graduate School of Informatics, Nagoya University, Japan\\\n \u00a0$^{2}$Graduate School of Mathematics, Nagoya University, Japan\ntitle: 'Delay, resonance and the Lambert W function'\n---\n\n[**Keywords**]{}: Delay, Resonance, Transient Oscillation, Lambert W function, Transcendental equation\n\nIntroduction\n============\n\nThere has been interest in investigating the effect of delays in various fields such as biology, mathematics, economics, and engineering [@heiden1979; @bellman1963; @cabrera_1; @hayes1950; @insperger; @kcuhler; @longtinmilton1989a; @mackeyglass1977; @miltonetal2009b; @ohirayamane2000; @smith2010; @stepan1989; @stepaninsperger; @szydlowski2010]). Typically, delays introduce oscillations and complex behaviors to otherwise simple and well-behaved systems. Longer delays are known to induce an increase in the complexity of dynamics. The Mackey\u2013Glass equation[@mackeyglass1977], which exhibits various types of dynamics including chaos, serves as a representative example.\n\nMathematical" +"---\nabstract: 'The interaction of rarefied gases with functionalized surfaces is of great importance in technical applications such as gas separation membranes and catalysis. To investigate the influence of functionalization and rarefaction on gas flow rate in a defined geometry, pressure-driven gas flow experiments with helium and carbon dioxide through plain and alkyl-functionalized microchannels are performed. The experiments cover Knudsen numbers from 0.01 to 200 and therefore the slip flow regime up to free molecular flow. To minimize the experimental uncertainty which is prevalent in micro flow experiments, a methodology is developed to make optimal use of the measurement data. The results are compared to an analytical model predicting rarefied gas flow in straight channels and to numerical simulations of the S-model and BGK equations. The experimental data shows no significant difference between plain and functionalized channels. 
This stands in contrast to previous measurements in smaller geometries and suggests that the surface-to-volume ratio is too small for the functionalization to have an influence, highlighting the importance of geometric scale for surface effects. These results also shed light on the molecular reflection characteristics described by the tangential momentum accommodation coefficient (TMAC).'\nauthor:\n- |\n Simon Kunze$^1$, Pierre Perrier$^2$, Rodion Groll$^{3,6}$, Benjamin Besser$^1$,\\" +"---\nauthor:\n- Minzhao Liu\n- Changhun Oh\n- Junyu Liu\n- Liang Jiang\n- Yuri Alexeev\nbibliography:\n- 'boson.bib'\ntitle: Simulating lossy Gaussian boson sampling with matrix product operators\n---\n\n[**Gaussian boson sampling, a computational model that is widely believed to admit quantum supremacy, has already been experimentally demonstrated and is claimed to surpass the classical simulation capabilities of even the most powerful supercomputers today. However, whether the current approach limited by photon loss and noise in such experiments prescribes a scalable path to quantum advantage is an open question. To understand the effect of photon loss on the scalability of Gaussian boson sampling, we analytically derive the asymptotic operator entanglement entropy scaling, which relates to the simulation complexity. As a result, we observe that efficient tensor network simulations are likely possible under the $N_\\text{out}\\propto\\sqrt{N}$ scaling of the number of surviving photons $N_\\text{out}$ in the number of input photons $N$. We numerically verify this result using a tensor network algorithm with $U(1)$ symmetry, and overcome previous challenges due to the large local Hilbert space dimensions in Gaussian boson sampling with hardware acceleration. Additionally, we observe that increasing the photon number through larger squeezing does not increase the entanglement entropy" +"---\nabstract: 'We explore the influences of the higher-order Gauss-Bonnet (GB) correction terms on the growth of perturbations at the early stage of a $(n+1)$-dimensional Friedmann-Robertson-Walker (FRW) universe. Considering a cosmological constant in the FRW background, we study the linear perturbations by adopting the spherically symmetric collapse (SC) formalism. In light of the modifications that appear in the field equations, we disclose the role of the GB coupling constant $\\alpha$, as well as the extra dimensions $n>3$, on the growth of perturbations. In essence, this is done by defining a dimensionless parameter $\\tilde{\\beta}=(n-2)(n-3) \\alpha H_0^2$ in which $H_0$ is the Hubble constant. We find that the matter density contrast starts growing at the early stages of the universe and, as the universe expands, it grows faster compared to the standard cosmology. Besides, in the framework of GB gravity, the growth of matter perturbations in higher dimensions is faster than in its standard counterpart $(n=3)$. Further, in the presence of $\\alpha$, the growth of perturbations increases as $\\alpha$ increases. This is an expected result, since the higher order GB correction terms increase the strength of gravity and thus support the growth of perturbations. For the existing cosmological model, we also investigate" +"---\nabstract: 'We give a complete characterisation of the domain of attraction of fixed points of branching Brownian motion (BBM) with critical drift. 
Prior to this classification, we introduce a suitable metric space of locally finite point measures on which we prove 1) that the BBM with critical drift is a well-defined Markov process and 2) that it satisfies the Feller property. Several applications of this characterisation are given.'\naddress:\n- 'Beijing Normal University, School of Mathematical Sciences, China '\n- 'Universit\u00e9 Claude Bernard Lyon 1, CNRS UMR 5208, Institut Camille Jordan, 69622 Villeurbanne, France , Institut Universitaire de France (IUF) and Universit\u00e9 de Gen\u00e8ve (Unige)'\n- 'Tata Institute of Fundamental Research-CAM, Bangalore, India'\nauthor:\n- 'Xinxin Chen, Christophe Garban, Atul Shekhar'\nbibliography:\n- 'biblio.bib'\ntitle: Domain of attraction of the fixed points of Branching Brownian motion \n---\n\n[**]{}\n\nIntroduction {#intro}\n============\n\nContext.\n--------\n\nIn this article we study the critical-drifted branching Brownian motion (BBM) seen as a Markov process, and answer some natural questions about it. A (binary)[^1] branching Brownian motion (BBM) can be described as follows: starting from a countable set of initial particles, each particle evolves independently of each other according to standard Brownian motions in ${\\mathbb{R}}$" +"---\nauthor:\n- |\n [Cameron Perot[^1]]{}\\\n [Master\u2019s Thesis]{}\\\n [submitted to]{}\\\n [The Faculty of Mathematics, Computer Science, and Natural Sciences]{}\\\n [of RWTH Aachen University]{}\\\n [written at]{}\\\n [J\u00fclich Supercomputing Centre]{}\\\n [Forschungszentrum J\u00fclich]{}\\\n [First Examiner: Prof. Dr. Kristel Michielsen[^2]^,^[^3]]{}\\\n [Second Examiner: Prof. Dr. Holger Rauhut]{}\\\n [Adviser: Dr. Dennis Willsch]{}\nbibliography:\n- 'references.bib'\ndate: 'July 4, 2022'\ntitle: |\n [Quantum Boltzmann Machines]{}\\\n [Applications in Quantitative Finance]{} \n---\n\nAbstract {#abstract .unnumbered}\n========\n\nIn this thesis we explore using the D-Wave Advantage 4.1 quantum annealer to sample from quantum Boltzmann distributions and train quantum Boltzmann machines (QBMs). We focus on the real-world problem of using QBMs as generative models to produce synthetic foreign exchange market data and analyze how the results stack up against classical models based on restricted Boltzmann machines (RBMs). Additionally, we study a small 12-qubit problem which we use to compare samples obtained from the Advantage 4.1 with theory, and in the process gain vital insights into how well the Advantage 4.1 can sample quantum Boltzmann random variables and be used to train QBMs. Through this, we are able to show that the Advantage 4.1 can sample classical Boltzmann random variables to some extent, but is limited in its ability to sample from" +"---\nabstract: 'We study PAC learnability and PAC stabilizability of Hedonic Games (HGs), i.e., efficiently inferring preferences or core-stable partitions from samples. We first expand the known learnability/stabilizability landscape for some of the most prominent HGs classes, providing results for Friends and Enemies Games, Bottom Responsive, and Anonymous HGs. Then, having a broader view in mind, we attempt to shed light on the structural properties leading to learnability/stabilizability, or lack thereof, for specific HGs classes. Along this path, we focus on the fully expressive Hedonic Coalition Nets representation of HGs. We identify two sets of conditions that lead to efficient learnability, and which encompass all of the known positive learnability results. 
On the side of stability, we reveal that, while the freedom of choosing an ad hoc adversarial distribution is the most obvious hurdle to achieving PAC stability, it is not the only one. First, we show a distribution-independent necessary condition for PAC stability. Then, we focus on ${\\ensuremath{\\mathcal{W}}}$-games, where players have individual preferences over other players and evaluate coalitions based on the least preferred member. We prove that these games are PAC stabilizable under the class of bounded distributions, which assign positive probability mass to all coalitions. Finally," +"Introduction {#sec:introduction}\n============\n\nIn the last decade, Natural Language Processing (NLP) has gained relevance in Legal Artificial Intelligence, transitioning from symbolic to subsymbolic techniques [@Villata2022ThirtyYO]. Such a shift is motivated partially by the nature of legal resources, which appear primarily in a textual format (legislation, legal proceedings, contracts, etc.). Following the advancements in NLP technologies, the legal NLP literature [@zhong-etal-2020-nlp; @nllp-2022-natural; @katz_natural_2023] is flourishing with many new resources, such as large legal corpora [@henderson_pile_2022], task-specific datasets [@shen2022multilexsum; @christen_resolving_2023; @brugger_multilegalsbd; @niklaus_automatic_2023], and pre-trained legal-oriented language models [@chalkidis-etal-2020-legal; @zlucia/custom-legalbert; @lawformer; @niklaus-giofre-2023-pretrain; @hua_legalrelectra_2022; @chalkidis-etal-2023-lexfiles]. @DBLP:journals/corr/abs-2308-05502 offer a comprehensive survey on the topic.\n\nSpecifically, the emergence of pre-trained language models (PLMs) has led to significant performance boosts on popular benchmarks like GLUE [@GLUE] or SuperGLUE [@SUPERGLUE], emphasizing the need for more challenging benchmarks to measure progress. Legal benchmark suites have also been developed to systematically evaluate the performance of PLMs, showcasing the superiority of legal-oriented models over generic ones on downstream tasks such as legal document classification or question answering [@chalkidis-etal-2022-lexglue; @hwang2022a]. Even though these PLMs are shown to be effective for numerous downstream tasks, they are general-purpose models that are trained on broad-domain resources, such as Wikipedia or News, and therefore, can be insufficient to address tasks" +"---\nabstract: |\n This work is concerned with studying the bouncing nature of the universe for an isotropic configuration of fluid $\\mathcal{T}_{\\alpha\\beta}$ and a Friedmann-Lema\u00eetre-Robertson-Walker metric scheme. This work is carried out under the novel $f(\\mathcal{G},\\mathcal{T}_{\\alpha\n \\beta} \\mathcal{T}^{\\alpha \\beta})$ gravitation by assuming a specific model, i.e., $f(\\mathcal{G},\\mathcal{T}^2)=\\mathcal{G}+\\alpha\n \\mathcal{G}^2+2\\lambda \\mathcal{T}^2$, where $\\alpha$ and $\\lambda$ are constants serving as free parameters. [The terms $\\mathcal{G}$ and $\\mathcal{T}^2$ denote the Gauss-Bonnet invariant and the square of the energy-momentum tensor included in the gravitational action, respectively, with $\\mathcal{T}^2=\\mathcal{T}_{\\alpha \\beta}\n \\mathcal{T}^{\\alpha \\beta}$.]{} A specific functional form of the Hubble parameter is taken to provide the evolution of cosmographic parameters. A well-known equation of state parameter, $\\omega(t)=-\\frac{k \\log (t+\\epsilon)}{t}-1$, is used to represent the dynamical behavior of energy density, matter pressure and energy conditions. 
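A quick numerical evaluation of this equation of state clarifies the behavior it encodes; the values of $k$ and $\epsilon$ below are purely illustrative:

```python
import numpy as np

def w_eos(t, k=0.5, eps=0.5):
    """Equation-of-state parameter from the abstract:
    w(t) = -k * log(t + eps) / t - 1.  k and eps are illustrative only."""
    return -k * np.log(t + eps) / t - 1.0

t = np.array([0.1, 0.5, 1.0, 5.0, 50.0])
print(w_eos(t))
# For t + eps < 1 the log is negative, so w > -1 (quintessence-like side);
# for t + eps > 1 it is positive, so w < -1 (phantom-like side);
# and w -> -1 as t -> infinity.
```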
A detailed graphical analysis is also provided to review the bounce. Furthermore, all free parameters are set in such a way as to make the assumed Hubble parameter act as a bouncing solution and ensure the viability of the energy conditions. In conclusion, all necessary conditions for a bouncing model are checked.\nauthor:\n- |\n Z. Yousaf$^1$ [^1], M. Z. Bhatti$^1$ [^2], H. Aman$^1$ [^3], P.K. Sahoo$^2$ [^4]\\\n $^1$Department of" +"---\nabstract: 'In this work, we have presented a way to increase the contrast of an image. Our target is to find a transformation that will be image-specific. We have used a fuzzy system as our transformation function. To tune the system according to an image, we have used Genetic Algorithm and Hill Climbing in multiple ways to evolve the fuzzy system and conducted several experiments. Different variants of the method are tested on several images and two variants that are superior to others in terms of fitness are selected. We have also conducted a survey to assess the visual improvement of the enhancements made by the two variants. The survey indicates that one of the methods can enhance the contrast of the images visually.'\nauthor:\n- Mohimenul Kabir\n- Jaiaid Mobin\n- Ahmad Hassanat\n- 'M. Sohel Rahman[^1]'\ntitle: Image Contrast Enhancement using Fuzzy Technique with Parameter Determination using Metaheuristics\n---\n\nIntroduction\n============\n\nImage enhancement is the procedure of improving an image\u2019s quality and information content. Image enhancement aims to increase visual differences among its features and make it more suitable for applications (e.g. increasing the brightness of dark images for viewing). Some common image enhancement techniques are" +"---\nabstract: 'We present high S/N measurements of the [H$\\;$[I]{}]{}\u00a0[Ly$\\alpha$]{}\u00a0absorption line toward 16 Galactic targets which are at distances between approximately 190 and 2200 pc, all beyond the wall of the Local Bubble. We describe the models used to remove stellar emission and absorption features and the methods used to account for all known sources of error in order to compute high precision values of the [H$\\;$[I]{}]{}\u00a0column density with robust determinations of uncertainties. When combined with [H$_2$]{}\u00a0column densities from other sources, we find total H column densities ranging from 10$^{20.01}$ to 10$^{21.25}$ cm$^{-2}.$ Using deuterium column densities from [[*FUSE*]{}]{}\u00a0observations we determine the D/H ratio along the sight lines. We confirm and strengthen the conclusion that D/H is spatially variable over these [H$\\;$[I]{}]{}\u00a0column density and target distance regimes, which predominantly probe the ISM outside the Local Bubble. We discuss how these results affect models of Galactic chemical evolution. We also present an analysis of metal lines along the five sight lines for which we have high resolution spectra and, along with results reported in the literature, discuss the corresponding column densities in the context of a generalized depletion analysis. We find that D/H is only" +"---\nauthor:\n- Xinyu\u00a0Zhou\n- Yang\u00a0Li\n- Jun Zhao\ntitle: Resource Allocation of Federated Learning Assisted Mobile Augmented Reality System in the Metaverse\n---\n\nIntroduction {#sec:introduction}\n============\n\nThe Metaverse has become a buzzword in recent years. It seeks to create a society that integrates virtual/augmented reality and allows millions of people to communicate online with a virtual avatar. 
Augmented Reality (AR) technology, which is expected to be one of the most significant components of the Metaverse, is an enhanced version of the real physical world that is achieved through the use of digital visual elements, sound, or other sensory stimuli and delivered via technology. Mobile Augmented Reality (MAR) implements AR technology on mobile devices and allows users to experience services through AR devices (e.g. smart glasses, headsets, controllers, etc.).\n\n**Motivation**. According to Moore\u2019s law, the storage capacity and computing power of mobile devices will be further improved in the future, making it possible to implement machine learning models on mobile devices [@dionisio20133d]. However, limited by the small amount of personal data, it is difficult to train a high-performing MAR model on a single device. Federated learning (FL) [@mcmahan2017communication], presented in 2017, allows models from diverse participants to train" +"---\nauthor:\n- 'Gustavo. P.\u00a0de Brito [^1]'\n- 'Astrid Eichhorn[^2]'\n- Christopher Pfeiffer\nbibliography:\n- 'refs.bib'\ntitle: 'Higher-order curvature operators in causal set quantum gravity '\n---\n\nIntroduction and motivation\n===========================\n\nIn this paper, we construct higher-order curvature operators for causal sets. Our motivation is twofold: First, geometric quantities such as curvature operators are important when one reconstructs a continuous spacetime from a discrete causal set \u2013 which is one of the key outstanding problems in causal set quantum gravity. Second, higher-order curvature operators are important when one uses causal sets to search for asymptotic safety in quantum gravity in Lorentzian signature \u2013 which is one of the key outstanding problems in asymptotically safe quantum gravity.\\\nBelow, we introduce these motivations in more detail.\\\nOur first motivation comes from the reconstruction of continuum geometry from a discrete causal set. Causal set quantum gravity is based on a discretization of Lorentzian spacetimes [@Bombelli:1987aa], see [@Surya:2019ndm] for a recent review. It substitutes Lorentzian continuum manifolds by networks of spacetime points, in which the links that connect the nodes of the network correspond to causal relations. Mathematically, such networks are partial orders. However, the set of partial orders which satisfy the causal-set" +"---\nabstract: 'Generative adversarial networks (GANs) learn a target probability distribution by optimizing a generator and a discriminator with minimax objectives. This paper addresses the question of whether such optimization actually provides the generator with gradients that make its distribution close to the target distribution. We derive metrizable conditions, sufficient conditions for the discriminator to serve as the distance between the distributions, by connecting the GAN formulation with the concept of sliced optimal transport. Furthermore, by leveraging these theoretical results, we propose a novel GAN training scheme, called slicing adversarial network (SAN). With only simple modifications, a broad class of existing GANs can be converted into SANs. 
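The sliced optimal transport concept invoked here admits a compact Monte-Carlo estimator. The following standard sketch computes a sliced 2-Wasserstein distance between two point clouds; it is not the SAN training procedure itself:

```python
import numpy as np

def sliced_wasserstein2(x, y, n_proj=128, rng=np.random.default_rng(0)):
    """Monte-Carlo estimate of the sliced 2-Wasserstein distance between two
    equally sized point clouds x, y of shape (n, d): average the 1D W2
    distances of their projections onto random unit directions."""
    d = x.shape[1]
    theta = rng.normal(size=(n_proj, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px, py = x @ theta.T, y @ theta.T      # (n, n_proj) projected samples
    px.sort(axis=0); py.sort(axis=0)       # 1D optimal transport = sorting
    return np.sqrt(((px - py) ** 2).mean())

x = np.random.default_rng(1).normal(size=(512, 8))
y = np.random.default_rng(2).normal(loc=0.5, size=(512, 8))
print(sliced_wasserstein2(x, y))
```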
Experiments on synthetic and image datasets support our theoretical results and the SAN\u2019s effectiveness as compared to usual GANs.'\nauthor:\n- |\n Yuhta Takida${}^{1}$ Masaaki Imaizumi${}^{2}$ Takashi Shibuya${}^{1}$ Chieh-Hsin Lai${}^{1}$Toshimitsu Uesaka${}^{1}$ Naoki Murata${}^{1}$ Yuki Mitsufuji${}^{1,3}$\\\n ${}^1$Sony AI, Tokyo, Japan\\\n ${}^{2}$The University of Tokyo, Tokyo, Japan\\\n ${}^{3}$Sony Group Corporation, Tokyo, Japan\nbibliography:\n- 'str\\_def\\_abrv.bib'\n- 'refs\\_dgm.bib'\n- 'refs\\_ml.bib'\ntitle: |\n SAN: Inducing Metrizability of GAN with\\\n Discriminative Normalized Linear Layer\n---\n\nIntroduction\n============\n\nA generative adversarial network (GAN)\u00a0[@goodfellow2014generative] is a popular approach for generative modeling. GANs have achieved remarkable performance in various domains such as image\u00a0[@brock2018large; @karras2019style; @karras2021alias], audio\u00a0[@kumar2019melgan; @donahue2019adversarial; @kong2020hifi], and" +"---\nabstract: 'Solving optimization problems on quantum annealers usually requires each variable of the problem to be represented by a connected set of qubits called a logical qubit or a chain. Chain weights, in the form of ferromagnetic coupling between the chain qubits, are applied so that the physical qubits in a chain favor taking the same value in low energy samples. Assigning a good chain-strength value is crucial for the ability of quantum annealing to solve hard problems, but there are no general methods for computing such a value and, even if an optimal value is found, it may still not be suitable by being too large for accurate annealing results. In this paper, we propose an optimization-based approach for producing suitable logical qubits representations that results in smaller chain weights and show that the resulting optimization problem can be successfully solved using the augmented Lagrangian method. Experiments on the D-Wave Advantage system and the maximum clique problem on random graphs show that our approach outperforms both the default D-Wave method for chain-strength assignment as well as the quadratic penalty method.'\naddress: |\n Institute of Information and Communication Technologies, Bulgarian Academy of Sciences, Sofia, Bulgaria;\\\n Los Alamos National Laboratory," +"---\nabstract: 'We present 04-resolution imaging polarimetry at 8.7, 10.3, and 12.5 $\\mu$m, obtained with CanariCam at the Gran Telescopio Canarias (GTC), of the central 0.11 pc x 0.28 pc (42 x 108) region of W51 IRS2. The polarization, as high as $\\sim$14%, arises from silicate particles aligned by the interstellar magnetic field (B-field). We separate, or unfold, the polarization of each sightline into emission and absorption components, from which we infer the morphologies of the corresponding projected B-fields that thread the emitting and foreground-absorbing regions. We conclude that the projected B-field in the foreground material is part of the larger-scale ambient field. The morphology of the projected B-field in the mid-IR emitting region spanning the cometary region W51 IRS2W is similar to that in the absorbing region. Elsewhere, the two B-fields differ significantly with no clear relationship between them. The B-field across the W51 IRS2W cometary core appears to be an integral part of a champagne outflow of gas originating in the core and dominating the energetics there. 
The bipolar outflow, W51north jet, that appears to originate at or near SMA1/N1 coincides almost exactly with a clearly demarcated north-south swath of lower polarization. While speculative, comparison of mid-IR and" +"---\nabstract: 'The relationship between structure and dynamics in glassy fluids remains an intriguing open question. Recent work has shown impressive advances in our ability to predict local dynamics using structural features, most notably due to the use of advanced machine learning techniques. Here we explore whether a simple linear regression algorithm combined with intelligently chosen structural order parameters can reach the accuracy of the current, most advanced machine learning approaches for predicting dynamic propensity. To do this we introduce a method to pinpoint the cage state of the initial configuration \u2013 i.e. the configuration consisting of the average particle positions when particle rearrangement is forbidden. We find that, in comparison to both the initial state and the inherent state, the structure of the cage state is highly predictive of the long-time dynamics of the system. Moreover, by combining the cage state information with the initial state, we are able to predict dynamic propensities with unprecedentedly high accuracy over a broad regime of time scales, including the caging regime.'\nauthor:\n- 'Rinske M. Alkemade'\n- Frank Smallenburg\n- Laura Filion\nbibliography:\n- 'myref.bib'\ntitle: Improving the prediction of glassy dynamics by pinpointing the local cage\n---\n\nIntroduction\n============\n\nUnderstanding the" +"---\nabstract: 'Cluster states are versatile quantum resources and an essential building block for measurement-based quantum computing. The possibility to generate cluster states in specific systems may thus serve as an indicator regarding if and to what extent these systems can be harnessed for quantum technologies and quantum information processing in particular. Here, we apply this analysis to networks of degenerate optical parametric oscillators (DOPOs), also called coherent Ising machines (CIMs). CIMs are distinguished by their highly flexible coupling capabilities, which makes it possible to use them, e.g., to emulate large spin systems. As CIMs typically operate with coherent states (and superpositions thereof), it is natural to consider cluster states formed by superpositions of coherent states, i.e., coherent cluster states. As we show, such coherent cluster states can, under ideal conditions, be generated in DOPO networks with the help of beam splitters and classical pumps. Our subsequent numerical analysis provides the minimum requirements for the generation of coherent cluster states under realistic conditions. Moreover, we discuss how nonequilibrium pumps can improve the generation of coherent cluster states. In order to assess the quality of the cluster-state generation, we map the generated states to an effective spin space using modular variables," +"---\nabstract: 'Stabilization, disturbance rejection, and control of optical beams and optical spots are ubiquitous problems that are crucial for the development of optical systems for ground and space telescopes, free-space optical communication terminals, precise beam steering systems, and other types of optical systems. High-performance disturbance rejection and control of optical spots require the development of disturbance estimation and data-driven Kalman filter methods. 
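As a minimal illustration of the kind of estimator whose noise covariances must be tuned in such applications, here is a textbook constant-velocity Kalman filter for one axis of a spot centroid; the process and measurement noise levels are illustrative assumptions:

```python
import numpy as np

def kalman_track(z, dt=1e-2, q=1e-4, r=1e-2):
    """Constant-velocity Kalman filter for one axis of an optical-spot
    centroid measured by a camera. q, r are illustrative process/measurement
    noise levels; tuning these covariances from data is the hard part."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # only position is measured
    Q, R = q * np.eye(2), np.array([[r]])
    x, P = np.zeros(2), np.eye(2)
    out = []
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q        # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ (zk - H @ x)).ravel()   # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

z = np.sin(np.linspace(0, 3, 300)) + 0.1 * np.random.default_rng(0).normal(size=300)
est = kalman_track(z)
```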
Motivated by this, we propose a unified and experimentally verified data-driven framework for optical-spot disturbance modeling and tuning of covariance matrices of Kalman filters. Our approach is based on covariance estimation, nonlinear optimization, and subspace identification methods. Also, we use spectral factorization methods to emulate optical-spot disturbances with a desired power spectral density in an optical laboratory environment. We test the effectiveness of the proposed approaches on an experimental setup consisting of a piezo tip-tilt mirror, piezo linear actuator, and a CMOS camera.'\nauthor:\n- Aleksandar Haber\n- Michael Krainak\nbibliography:\n- 'sample.bib'\ntitle: 'Data-driven Estimation, Tracking, and System Identification of Deterministic and Stochastic Optical Spot Dynamics'\n---\n\nIntroduction\n============\n\nStabilization, disturbance rejection, and precise control of optical beams and optical spots are fundamental and ubiquitous problems that appear in a number of applications and optical systems." +"---\nauthor:\n- |\n Mehdi Sadeghi[^1] and Faramarz Rahmani[^2]\\\n \\\n \\\ntitle: '**The phase transition of Rastall AdS black hole with cloud of strings and quintessence**'\n---\n\n**Keywords:** Phase transition, Rastall theory of gravity, quintessence, cloud of strings.\n\nIntroduction {#intro}\n============\n\nBlack holes behave like thermodynamic systems [@Kubiznak:2014zwa]. This motivates us to study the thermodynamic behavior of black holes. The Hawking-Page phase transition, which is a first-order phase transition, describes the transition between phases which are static spherically symmetric vacuum solutions of the Einstein equations in AdS spacetime. According to the gauge/gravity duality [@Witten:1998qj]-[@Aharony], this must correspond to a phase transition in gauge theory. Witten showed that this corresponds to a confinement-deconfinement phase transition on the gauge theory side [@Witten:1998zw]. The thermodynamics of black holes is a combination of general relativity and quantum field theory which helps us to formulate quantum gravity. Bardeen, Carter and Hawking were the ones who introduced the four laws of black hole thermodynamics [@Bardeen:1973gs]. In this regard, the mass of a black hole and the surface gravity on the event horizon are interpreted as the enthalpy and the temperature of space-time, respectively. This rather novel idea originates from a consideration of the Smarr relation [@Caldarelli:1999xj]-[@Smarr:1972kt]. The" +"---\nabstract: 'Within the Friedmann-Lema\u00eetre-Robertson-Walker (FLRW) framework, the Hubble constant $H_0$ is an integration constant. Thus, mathematical consistency demands that $H_0$ is also observationally a constant. Building on earlier results, we demonstrate redshift evolution of flat $\\Lambda$CDM cosmological parameters $(H_0, \\Omega_{m})$ in Pantheon+ supernovae (SN) in the redshift range $0 < z \\lesssim 2.26$. We compare the whole SN sample and the SN sample split into low and high redshift subsamples demarcated by redshift $z_{\\textrm{split}}$. We show that $z_{\\textrm{split}}=1$ has a marginal Bayesian preference through the Akaike Information Criterion for evolution in $H_0$ (also $\\Omega_m$) compared to the whole sample. Such evolution is strictly forbidden in FLRW models. 
Through mock analysis, we estimate the evolution as a $1.4 \\sigma$ effect ($p=0.08$), and the presence of $\\Omega_m >1$ best fits, indicative of negative dark energy (DE) density, beyond $z_{\\textrm{split}} =1$ as $1.3 \\sigma$ ($p=0.1$) to $1.9 \\sigma$ effects ($p=0.026$) depending on the criteria. [Finally, using complementary profile distributions we confirm a robust $> 2 \\sigma$ shift in $H_0$ for SN with $z > 1$.]{}'\nauthor:\n- 'M. Malekjani'\n- 'R. Mc Conville'\n- 'E. \u00d3 Colg\u00e1in'\n- 'S. Pourojaghi'\n- 'M. M. Sheikh-Jabbari'\ntitle: Negative Dark Energy Density from High" +"---\nabstract: 'In this article, we study the inconsistency of a system of $\\max$-product fuzzy relational equations and of a system of $\\max$-Lukasiewicz fuzzy relational equations. For a system of $\\max-\\min$ fuzzy relational equations $A \\Box_{\\min}^{\\max} x = b$ and using the $L_\\infty$ norm, [@arxiv.2301.06141] showed that the Chebyshev distance $\\Delta = \\inf_{c \\in \\mathcal{C}} \\Vert b - c \\Vert$, where $\\mathcal{C}$ is the set of second members of consistent systems defined with the same matrix $A$, can be computed by an explicit analytical formula according to the components of the matrix $A$ and its second member $b$. In this article, we give analytical formulas analogous to that of [@arxiv.2301.06141] to compute the Chebyshev distance associated to the second member of a system of $\\max$-product fuzzy relational equations and that associated to the second member of a system of $\\max$-Lukasiewicz fuzzy relational equations.'\nauthor:\n- |\n Isma\u00efl Baaj\\\n Univ. Artois, CNRS, CRIL, F-62300 Lens, France\\\n [baaj@cril.fr](baaj@cril.fr)\ntitle: 'Chebyshev distances associated to the second members of systems of max-product/Lukasiewicz fuzzy relational equations'\n---\n\nIntroduction\n============\n\nArtificial Intelligence (AI) applications based on systems of fuzzy relational equations emerged thanks to [@sanchez1976resolution; @sanchez1977]\u2019s seminal work on solving systems of $\\max-\\min$ fuzzy relational equations." +"---\nabstract: 'The melting of a homopolymer double-stranded (ds) deoxyribonucleic acid (DNA) in the dilute limit is studied numerically in the presence of an attractive and impenetrable surface on a simple cubic lattice. The two strands of the DNA are modeled using two self-avoiding walks, capable of interacting at complementary sites, thereby mimicking the base pairing. The impenetrable surface is modeled by restricting the DNA configurations to the $z\\geq 0$ half-space, with attractive interactions for monomers at $z=0$. Further, we consider two variants for $z=0$ occupations by ds segments, where one or two surface interactions are counted. This consideration has significant consequences, to the extent of changing the stability of the bound phase in the adsorbed state. Interestingly, adsorption changes from critical to first-order with a modified exponent upon coinciding with the melting transition. For simulations, we use the pruned and enriched Rosenbluth algorithm.'\nauthor:\n- Debjyoti Majumdar\ntitle: Adsorption of melting deoxyribonucleic acid\n---\n\nIntroduction\n============\n\nThe denaturation of the double-stranded (ds) deoxyribonucleic acid (DNA) from a bound (ds) to an unbound single-stranded (ss) phase is an important step towards fundamental biological processes such as DNA replication, ribonucleic acid (RNA) transcription, packaging of DNA, and repair [@watson2003]. 
[*In vitro*]{}," +"---\nauthor:\n- \n- \n- \n- \ntitle: Characterising Solutions of Anomalous Cancellation\n---\n\nIntroduction {#sec1}\n============\n\nA zeal for some interesting mathematical problems brought us to a very peculiar problem, a quest to find all integers with an odd number of digits $[a_1a_2\\ldots a_{2k+1}]$ (all $a_i$\u2019s are digits, $a_1\\neq 0$) such that the following property holds. $$[a_1a_2\\ldots a_k]\\cdot [a_{k+1}a_{k+2}\\ldots a_{2k+1}] = [a_1a_2\\ldots a_{k+1}]\\cdot [a_{k+2}a_{k+3}\\ldots a_{2k+1}]$$\n\nAn elementary example is the number 164, which has the property, as shown by $1\\times 64 = 16\\times 4$. Similarly $24\\times 996 = 249\\times 96$ implies that $24996$ also fits our problem requirement. Now the question is, can one find a way to generate all such numbers? A brute force algorithm always works, but it is never satisfying to leave things at that \u2013 a proof of being a mathematics aspirant. This prompted us to go through existing literature which, though scarce, hides a gold mine of information. Adding on to previous work, we arrived at several interesting results that demand a place in this paper. The paper has been structured so that the general reader can be presented with all the beautiful results, while seasoned readers can move to the appendix to get a flavour of" +"---\nabstract: 'Control and characterization of networks are a paramount step for the development of many quantum technologies. Even for moderate-sized networks, this amounts to exploring an extremely vast parameter space in search of the couplings defining the network topology. Here we explore the use of a genetic algorithm to retrieve the topology of a network from the measured probability distribution obtained from the evolution of a continuous-time quantum walk on the network. Our results show that the algorithm is capable of efficiently retrieving the required information even in the presence of noise.'\nauthor:\n- Claudia Benedetti\n- Ilaria Gianani\nbibliography:\n- 'main.bib'\ntitle: Identifying network topologies via quantum walk distributions \n---\n\nNetworks are a fundamental model to understand the underlying properties of complex systems. They are invaluable tools to describe phenomena happening at different scales ranging from social interactions [@wasserman94; @Onnela07], to biological processes [@jeong00; @pastor01; @maslov02; @silva05; @plenio08], from the configurations of molecules [@Winterbach13; @dekeer21], to the structure of the internet [@faloutsos99; @Caldarelli_2000; @Pastor04; @He2009] and physical systems alike [@zoller97; @kuzmich05; @mulken16; @krutitsky16; @nokkala16]. In the context of quantum technologies, networks constitute the prime structure of communication and computation protocols [@deutch89; @Christandl05; @bose07; @politi08; @aspuru12]. Understanding how quantum information" +"---\nabstract: 'Besides the transient effect, the passage of a gravitational wave also causes a persistent displacement in the relative position of an interferometer\u2019s test masses through the *nonlinear memory effect*. This effect is generated by the gravitational backreaction of the waves themselves, and encodes additional information about the source. In this work, we explore the implications of using this information for the parameter estimation of massive binary black holes with LISA. 
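The defining identity in the anomalous-cancellation entry above is easy to search exhaustively, which is exactly the brute-force route the authors mention before developing theory. A minimal sketch:

```python
def has_property(n: int) -> bool:
    """Check [a1..ak]*[a(k+1)..a(2k+1)] == [a1..a(k+1)]*[a(k+2)..a(2k+1)]."""
    s = str(n)
    if len(s) < 3 or len(s) % 2 == 0:
        return False
    k = len(s) // 2
    return int(s[:k]) * int(s[k:]) == int(s[:k + 1]) * int(s[k + 1:])

# 3-digit solutions; includes 164 (1*64 == 16*4) and trivial cases like 111
print([n for n in range(100, 1000) if has_property(n)])
print(has_property(24996))   # True: 24*996 == 249*96
```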
Based on a Fisher analysis for nonprecessing black hole binaries, our results show that the memory can help to reduce the degeneracy between the luminosity distance and the inclination for binaries observed only for a short time ($\\sim$\u00a0few hours) before merger. To assess how many such short signals will be detected, we utilized state-of-the-art predictions for the population of massive black hole binaries and models for the gaps expected in the LISA data. We forecast tens to a few hundred binaries with observable memory, but only\u00a0$\\sim \\mathcal{O}(0.1)$ events in 4 years for which the memory helps to reduce the degeneracy between distance and inclination. Based on this, we conclude that the new information from the nonlinear memory, while promising for testing general relativity in the" +"---\nabstract: 'The Milky Way halo is one of the few galactic haloes that provides a unique insight into galaxy formation by resolved stellar populations. Here, we present a catalogue of $\\sim$47 million halo stars selected independently of parallax and line-of-sight velocities, using a combination of Gaia DR3 proper motion and photometry by means of their reduced proper motion. We select high tangential velocity (halo) main sequence stars and fit distances to them using their simple colour-absolute-magnitude relation. This sample reaches out to $\\sim$21 kpc with a median distance of $6.6$ kpc, thereby probing much further out than would be possible using reliable Gaia parallaxes. The typical uncertainty in their distances is $0.57_{-0.26}^{+0.56}$ kpc. Using the colour range $0.45<(G_0-G_\\text{RP,0})<0.715$, where the main sequence is narrower, gives an even better accuracy down to $0.39_{-0.12}^{+0.18}$ kpc in distance. The median velocity uncertainty for stars within this colour range is 15.5 km/s. The distribution of these sources in the sky, together with their tangential component velocities, is very well-suited to study retrograde substructures. We explore the selection of two complex retrograde streams: GD-1 and Jhelum. For these streams, we resolve the gaps, wiggles and density breaks reported in the literature more clearly. We" +"---\nabstract: 'A famous theorem by R. Brauer shows how to modify a single eigenvalue of a matrix $A$ by a rank-one update without changing the remaining eigenvalues. A generalization of this theorem (due to R. Rado) is used to change a pair of eigenvalues $\\lambda, 1/\\lambda$ of a symplectic matrix $S$ in a structure-preserving way to desired target values $\\mu, 1/\\mu$. Universal bounds on the relative distance between $S$ and the newly constructed symplectic matrix $\\hat{S}$ with modified spectrum are given. The eigenvalues\u2019 Segre characteristics of $\\hat{S}$ are related to those of $S$ and a statement on the eigenvalue condition numbers of $\\hat{S}$ is derived. The main results are extended to matrix pencils.'\ntitle: 'Structure-preserving eigenvalue modification of symplectic matrices and matrix pencils'\n---\n\nIntroduction {#sec:intro}\n============\n\nIn numerical linear algebra and matrix analysis one occasionally encounters the necessity of modifying special eigenvalues of a matrix without altering its remaining eigenvalues. Techniques for changing certain eigenvalues of a matrix have, for instance, been applied to solve nonnegative inverse eigenvalue problems [@Perfect1955; @Soto2006] or, in the form of deflation methods, to remove dominant eigenvalues in eigenvalue computations [@saad_evals Sec.4.2]. 
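To illustrate the kind of Brauer-type eigenvalue surgery the entry above builds on, here is a minimal numerical sketch on a symmetric matrix, where a rank-one update along an eigenvector moves exactly one eigenvalue; the matrix size and target value are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A + A.T                               # symmetric: real eigenpairs
evals, evecs = np.linalg.eigh(A)
lam, v = evals[0], evecs[:, 0]            # eigenpair to modify (v normalized)

mu = 7.0                                  # arbitrary target eigenvalue
A_hat = A + (mu - lam) * np.outer(v, v)   # rank-one update along v

print(np.sort(evals))                      # original spectrum
print(np.sort(np.linalg.eigvalsh(A_hat)))  # lam replaced by mu, rest unchanged
```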
Furthermore, the task of modifying eigenvalues of matrices is of interest in" +"---\nabstract: |\n Software Defined Networks have opened the door to statistical and AI-based techniques to improve the efficiency of networking, especially to ensure a certain *Quality of Service* (QoS) for specific applications by routing packets with awareness of the content nature (VoIP, video, files, etc.) and its needs (latency, bandwidth, etc.) so as to use network resources efficiently.\n\n Predicting various Key Performance Indicators (KPIs) at any level may help address such problems while preserving network bandwidth.\n\n The question addressed in this work is the design of efficient and low-cost algorithms for KPI prediction, implementable at the local level. We focus on end-to-end latency prediction, for which we illustrate our approaches and results on a public dataset from the recent international challenge on GNN [@suarez2021graph]. We propose several low-complexity, locally implementable approaches, achieving significantly lower wall time for both training and inference, with marginally worse prediction accuracy compared to state-of-the-art global GNN solutions.\nauthor:\n- |\n Pierre Larrenie\\\n Thales SIX & LIGM\\\n Universit\u00e9 Gustave Eiffel, CNRS\\\n Marne-la-Vall\u00e9e, France\\\n `pierre.larrenie@esiee.fr`\\\n Jean-Fran\u00e7ois Bercher\\\n LIGM\\\n Universit\u00e9 Gustave Eiffel, CNRS\\\n Marne-la-Vall\u00e9e, France\\\n `jean-francois.bercher@esiee.fr`\\\n Olivier Venard\\\n ESYCOM\\\n Universit\u00e9 Gustave Eiffel, CNRS\\\n Marne-la-Vall\u00e9e, France\\\n `olivier.venard@esiee.fr`\\\n Iyad Lahsen-Cherif\\\n Institut National des Postes et T\u00e9l\u00e9communications (INPT)\\\n Rabat, Morocco\\\n `lahsencherif@inpt.ac.ma`\\\nbibliography:" +"---\nabstract: 'We consider the problem Enum$\\cdot{\\textit{IP}}$ of enumerating prime implicants of Boolean functions represented by decision decomposable negation normal form (dec-DNNF) circuits. We study Enum$\\cdot{\\textit{IP}}$ from dec-DNNF within the framework of enumeration complexity and prove that it is in [OutputP]{}, the class of output polynomial enumeration problems, and more precisely in [IncP]{}, the class of polynomial incremental time enumeration problems. We then focus on two closely related, but seemingly harder, enumeration problems where further restrictions are put on the prime implicants to be generated. In the first problem, one is only interested in prime implicants representing subset-minimal abductive explanations, a notion much investigated in AI for more than three decades. In the second problem, the target is prime implicants representing sufficient reasons, a recent yet important notion in the emerging field of eXplainable AI, since they aim to explain predictions achieved by machine learning classifiers. We provide evidence showing that enumerating specific prime implicants corresponding to subset-minimal abductive explanations or to sufficient reasons is not in [OutputP]{}.'\nauthor:\n- |\n Alexis de Colnet$^1$ Pierre Marquis$^{1,2}$ $^1$Univ. Artois, CNRS, Centre de Recherche en Informatique de Lens (CRIL), F-62300 Lens, France\\\n $^2$Institut Universitaire de France {decolnet, marquis}@cril.fr\nbibliography:\n- 'enumIP.bib'\ntitle:" +"---\nabstract: |\n Thousands of papers have reported two-way cluster-robust (TWCR) standard errors. 
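As a toy illustration of the objects in the prime-implicant entry above (and nothing like the dec-DNNF algorithms it studies), a brute-force enumerator for small arities follows directly from the definitions; the example function is $x_1 \lor x_2$.

```python
from itertools import product

def prime_implicants(n, ones):
    """Brute-force prime implicants of f: {0,1}^n -> {0,1} given by its
    true points; a term fixes some variables (None = unconstrained)."""
    def covers(term, point):
        return all(t is None or t == p for t, p in zip(term, point))
    def is_implicant(term):
        return all(point in ones
                   for point in product((0, 1), repeat=n) if covers(term, point))
    implicants = {t for t in product((0, 1, None), repeat=n) if is_implicant(t)}
    # prime = no fixed literal can be dropped while remaining an implicant
    return [t for t in implicants
            if not any(t[:i] + (None,) + t[i + 1:] in implicants
                       for i, v in enumerate(t) if v is not None)]

# f = x1 OR x2: the prime implicants are the two single literals
print(prime_implicants(2, {(1, 0), (0, 1), (1, 1)}))
```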
However, the recent econometrics literature points out the potential non-gaussianity of two-way cluster sample means, and thus the invalidity of inference based on TWCR standard errors. Fortunately, simulation studies nonetheless show that gaussianity is more common than exceptional. This paper provides theoretical support for this encouraging observation. Specifically, we derive a novel central limit theorem for two-way clustered triangular arrays that justifies the use of the TWCR standard errors under very mild and interpretable conditions. We therefore hope that this paper will provide a theoretical justification for the legitimacy of most, if not all, of the thousands of empirical papers that have used the TWCR standard errors. We provide practical guidance as to when a researcher can employ the TWCR standard errors.\\\n [**Keywords:**]{} asymptotic gaussianity, two-way clustering, triangular arrays, central limit theorem\\\nauthor:\n- 'Harold D. Chiang[^1]'\n- 'Yuya Sasaki[^2]'\nbibliography:\n- 'biblio.bib'\ntitle: 'On Using The Two-Way Cluster-Robust Standard Errors'\n---\n\nIntroduction\n============\n\nMulti-way clustering is ubiquitous in empirical studies. For example, market structures by construction induce two-way clustering, where common supply shocks cause cluster dependence within a firm across markets and common" +"---\nabstract: 'We propose a generalization of the standard matched pairs design in which experimental units (often geographic regions or *geos*) may be combined into larger units/regions called \u201csupergeos\u201d in order to improve the average matching quality. Unlike the optimal matched pairs design, which can be found in polynomial time [@lu2011optimal], this generalized matching problem is NP-hard. We formulate it as a mixed-integer program (MIP) and show that the experimental design obtained by solving this MIP can often provide a significant improvement over the standard design regardless of whether the treatment effects are homogeneous or heterogeneous. Furthermore, we present the conditions under which trimming techniques that often improve performance in the case of homogeneous effects [@chen2022robust] may lead to biased estimates, and show that the proposed design does not introduce such bias. We use empirical studies based on real-world advertising data to illustrate these findings.'\nauthor:\n- |\n Aiyou Chen\\\n Google\\\n- |\n Nick Doudchenko\\\n Google\\\n- |\n Shunhua Jiang\\\n Columbia\\\n- |\n Cliff Stein\\\n Google\\\n- |\n Bicheng Ying\\\n Google\\\nbibliography:\n- 'ref.bib'\ntitle: 'Supergeo Design: Generalized Matching for Geographic Experiments'\n---\n\nIntroduction {#intro}\n============\n\nWith online advertising revenue in the US amounting to almost 200 billion dollars in 2021" +"---\nauthor:\n- 'Thomas Gehrmann,'\n- 'Andreas von Manteuffel,'\n- 'and Tong-Zhi Yang'\nbibliography:\n- 'Renormalization.bib'\ntitle: 'Renormalization of twist-two operators in covariant gauge to three loops in QCD'\n---\n\nIntroduction {#sec:introduction}\n============\n\nThe operator product expansion (OPE)\u00a0[@Wilson:1969zs; @Frishman:1973pp] provides an elegant method to separate short-distance from long-distance contributions in quantum field theory. 
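To make the two-way clustering discussion above concrete, here is a minimal sketch of the usual inclusion-exclusion variance estimate (in the spirit of Cameron, Gelbach and Miller) for a plain sample mean; it ignores degrees-of-freedom corrections and is illustrative only.

```python
import numpy as np

def oneway_var_of_mean(x, ids):
    # (1/n^2) * sum over clusters of (within-cluster sum of demeaned x)^2
    e, n = x - x.mean(), len(x)
    sums = {}
    for ei, g in zip(e, ids):
        sums[g] = sums.get(g, 0.0) + ei
    return sum(s * s for s in sums.values()) / n**2

def twoway_cluster_se_of_mean(x, gid, hid):
    # inclusion-exclusion: V = V_g + V_h - V_{g x h}
    inter = list(zip(gid, hid))
    v = (oneway_var_of_mean(x, gid) + oneway_var_of_mean(x, hid)
         - oneway_var_of_mean(x, inter))
    return max(v, 0.0) ** 0.5

rng = np.random.default_rng(0)
firm, market = rng.integers(0, 20, 5000), rng.integers(0, 30, 5000)
x = rng.standard_normal(20)[firm] + rng.standard_normal(30)[market] \
    + rng.standard_normal(5000)
print(twoway_cluster_se_of_mean(x, firm, market))
```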
Its early application to deeply inelastic lepton-nucleon scattering processes\u00a0[@Gross:1974cs] in quantum chromodynamics (QCD) successfully predicted the violation of Bjorken scaling\u00a0[@Bjorken:1968dy; @Bjorken:1969ja], thereby enabling the development of the QCD-improved parton model\u00a0[@Altarelli:1977zs]. The anomalous dimensions of quark and gluon operators in the OPE are directly related to the Altarelli-Parisi splitting functions\u00a0[@Altarelli:1977zs; @Dokshitzer:1977sg; @Gribov:1972ri] of the QCD-improved parton model by an inverse Mellin transformation.\n\nThe splitting functions determine the scale evolution of the parton distributions, which are an essential ingredient to all quantitative predictions for high-energy hadron collider processes. The precise determination of parton distributions requires the iterated comparison of highly accurate experimental data for a multitude of processes with theoretical predictions at a comparable level of precision. These predictions require higher-order perturbative corrections\u00a0[@Heinrich:2020ybq] to the underlying hard scattering processes as well as to the splitting functions.\n\nSplitting functions are currently known to" +"---\nabstract: '\\[sec:abs\\] Amateurs working on mini-films and short-form videos usually spend lots of time and effort on the complicated, multi-round process of setting and adjusting scenes, plots, and cameras to deliver satisfying video shots. We present Virtual Dynamic Storyboard ([VDS]{}) to allow users to storyboard shots in virtual environments, where the filming staff can easily test the settings of shots before the actual filming. [VDS]{} runs in a \u201cpropose-simulate-discriminate\u201d mode: given a formatted story script and a camera script as input, it generates several character animation and camera movement proposals following predefined story and cinematic rules to allow an off-the-shelf simulation engine to render videos. To pick out the top-quality dynamic storyboard from the candidates, we equip it with a shot ranking discriminator based on shot quality criteria learned from professional, manually created data. [VDS]{} is comprehensively validated via extensive experiments and user studies, demonstrating its efficiency, effectiveness, and great potential in assisting amateur video production.'\nauthor:\n- Anyi Rao\n- Xuekun Jiang\n- Yuwei Guo\n- Linning Xu\n- Lei Yang\n- Libiao Jin\n- Dahua Lin\n- Bo Dai\nbibliography:\n- 'main.bib'\ntitle: 'Dynamic Storyboard Generation in an Engine-based Virtual Environment for Video Production'\n---" +"---\nabstract: |\n We provide a game-theoretic analysis of the problem of front-running attacks. We use it to distinguish attacks from legitimate competition among honest users for having their transactions included earlier in the block. We also use it to introduce an intuitive notion of the severity of front-running attacks. We then study a simple commit-reveal protocol and discuss its properties. This protocol has costs because it requires two messages and imposes a delay. However, we show that it prevents the most severe front-running attacks while preserving legitimate competition between users, guaranteeing that the earliest transaction in a block belongs to the honest user who values it the most. 
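Since the entry above centers on a commit-reveal protocol, a minimal hash-commitment sketch may help; it shows only the generic two-message mechanics (commit to a hidden payload, then reveal and verify), not the paper's specific market design, and the payload string is illustrative.

```python
import hashlib
import secrets

def commit(payload: bytes) -> tuple[bytes, bytes]:
    """First message: publish H(payload || nonce); the payload stays hidden."""
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(payload + nonce).digest(), nonce

def reveal_ok(digest: bytes, payload: bytes, nonce: bytes) -> bool:
    """Second message: anyone can verify the revealed payload and nonce."""
    return hashlib.sha256(payload + nonce).digest() == digest

c, n = commit(b"swap 10 ETH for DAI")      # hypothetical transaction payload
assert reveal_ok(c, b"swap 10 ETH for DAI", n)
```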
When the protocol does not fully eliminate attacks, it nonetheless benefits honest users because it reduces competition among attackers (and overall expenditure by attackers).\\\n **Keywords**: Front running, Game theory, MEV, Transactions reordering, commit-reveal\nauthor:\n- 'Andrea Canidio [^1] and Vincent Danos[^2]'\nbibliography:\n- 'bib.bib'\ntitle: ' Commitment Against Front-Running Attacks[^3]'\n---\n\nIntroduction\n============\n\nOn the Ethereum network, each validator decides how to order pending transactions to form the next block, hence determining the order in which these transactions are executed. As a consequence, users often compete with each other to have" +"---\nabstract: 'When estimating causal effects, it is important to assess external validity, i.e., determine how useful a given study is to inform a practical question for a specific target population. One challenge is that the covariate distribution in the population underlying a study may be different from that in the target population. If some covariates are effect modifiers, the average treatment effect (ATE) may not generalize to the target population. To tackle this problem, we propose new methods to generalize or transport the ATE from a source population to a target population, in the case where the source and target populations have different sets of covariates. When the ATE in the target population is identified, we propose new doubly robust estimators and establish their rates of convergence and limiting distributions. Under regularity conditions, the doubly robust estimators provably achieve the efficiency bound and are locally asymptotically minimax optimal. A sensitivity analysis is provided when the identification assumptions fail. Simulation studies show the advantages of the proposed doubly robust estimator over simple plug-in estimators. Importantly, we also provide minimax lower bounds and higher-order estimators of the target functionals. The proposed methods are applied in transporting causal effects of dietary intake" +"---\nabstract: 'In this work we extend the class of Consensus-Based Optimization (CBO) metaheuristic methods by considering memory effects and a random selection strategy. The proposed algorithm iteratively updates a population of particles according to a consensus dynamics inspired by social interactions among individuals. The consensus point is computed taking into account the past positions of all particles. While the method shares features with the popular Particle Swarm Optimization (PSO) method, its exploratory behavior is fundamentally different and allows better control over the convergence of the particle system. We discuss some implementation aspects which lead to increased efficiency while preserving the success rate in the optimization process. In particular, we show how employing a random selection strategy to discard particles during the computation improves the overall performance. Several benchmark problems and applications to image segmentation and Neural Network training are used to validate and test the proposed method. A theoretical analysis allows us to recover convergence guarantees under mild assumptions on the objective function. This is done by first approximating the particles\u2019 evolution with continuous-in-time dynamics, and then by taking the mean-field limit of such dynamics. 
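A minimal sketch of the consensus dynamics described in the CBO entry above, in a common anisotropic-noise variant; the parameters are illustrative, and the paper's memory effects and random particle selection are omitted.

```python
import numpy as np

def cbo_step(x, f, alpha=30.0, lam=1.0, sigma=0.8, dt=0.05, rng=None):
    """One consensus-based optimization step on particles x of shape (N, d)."""
    rng = rng or np.random.default_rng()
    w = np.exp(-alpha * np.apply_along_axis(f, 1, x))    # Gibbs weights
    consensus = (w[:, None] * x).sum(axis=0) / w.sum()   # weighted mean
    drift = -lam * (x - consensus) * dt
    noise = sigma * (x - consensus) * rng.standard_normal(x.shape) * np.sqrt(dt)
    return x + drift + noise

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, (100, 2))
for _ in range(400):
    x = cbo_step(x, lambda z: ((z - 1.0) ** 2).sum(), rng=rng)
# particles concentrate near the global minimizer (1, 1) of the toy objective
print(x.mean(axis=0))
```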
Convergence to a global minimizer is finally proved at the mean-field level.'\nauthor:\n- 'Giacomo" +"---\nabstract: 'The shot-down process is a strong Markov process which is annihilated, or shot down, when *jumping over* or to the complement of a given open subset of a\u00a0vector space. Due to specific features of the shot-down time, such processes suggest a new type of boundary conditions for nonlocal differential equations. In this work we construct the shot-down process for the fractional Laplacian in Euclidean space. For smooth bounded sets $D$, we study its transition density and characterize its Dirichlet form. We show that the corresponding Green function is comparable to that of the fractional Laplacian with Dirichlet conditions on $D$. However, for nonconvex $D$, the transition density of the shot-down stable process is incomparable with the Dirichlet heat kernel of the fractional Laplacian for $D$. Furthermore, the Harnack inequality in general fails for harmonic functions of the shot-down process.'\naddress:\n- ' Krzysztof Bogdan Department of Pure and Applied Mathematics Wroc\u0142aw University of Science and Technology, Wroc\u0142aw, Poland '\n- ' Kajetan Jastrz\u0119bski Institute of Mathematics University of Wroc\u0142aw, Wroc\u0142aw, Poland '\n- ' Moritz Kassmann Fakult\u00e4t f\u00fcr Mathematik Universit\u00e4t Bielefeld, Germany '\n- ' Micha\u0142 Kijaczko Department of Pure and Applied Mathematics Wroc\u0142aw University of Science and Technology, Wroc\u0142aw," +"---\nabstract: 'This paper presents the development of a distributed application that facilitates the understanding and application of swarm intelligence in solving optimization problems. The platform comprises a search space of customizable random particles, allowing users to tailor the solution to their specific needs. By leveraging the power of Ray distributed computing, the application can support multiple users simultaneously, offering a flexible and scalable solution. The primary objective of this project is to provide a user-friendly platform that enhances the understanding and practical use of swarm intelligence in problem-solving.'\nauthor:\n- |\n Karthik Reddy Kanjula\\\n School of Computing and Information\\\n West Chester University of Pennsylvania\\\n West Chester, PA 19383\\\n `karthikreddykanjula99@gmail.com`\\\n Sai Meghana Kolla\\\n School of Mathematics and Computer Science\\\n Pennsylvania State University\\\n Harrisburg, PA 17057\\\n `szk6163@psu.edu`\\\ntitle: Distributed Swarm Intelligence\n---\n\nIntroduction\n============\n\nThe Particle Swarm Optimization (PSO) algorithm is an approximation algorithm that finds the best solution among all the explored feasible solutions for any problem that can be formulated as a mathematical equation. In the field of algorithms and theoretical computer science, such methods are known as \u201capproximation\u201d algorithms. In this project, we built a web application that hosts a PSO algorithm with interactive features such" +"---\nabstract: 'By flexibly manipulating the radio propagation environment, reconfigurable intelligent surface (RIS) is a promising technique for future wireless communications. However, the single-side coverage and double-fading attenuation faced by conventional RISs largely restrict their applications. To address this issue, we propose a novel concept of multi-functional RIS (MF-RIS), which provides reflection, transmission, and amplification simultaneously for the incident signal. 
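For the PSO method at the heart of the distributed-swarm entry above, a compact single-process reference implementation may be useful (the web and Ray distribution layers are out of scope here; all hyperparameters are conventional defaults, not taken from the paper):

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))    # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_f = pso(lambda z: float((z ** 2).sum()))   # sphere function
```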
With the aim of enhancing the performance of a non-orthogonal multiple-access (NOMA) downlink multiuser network, we deploy an MF-RIS to maximize the sum rate by jointly optimizing the active beamforming and MF-RIS coefficients. Then, an alternating optimization algorithm is proposed to solve the formulated non-convex problem by exploiting successive convex approximation and a penalty-based method. Numerical results show that the proposed MF-RIS outperforms conventional RISs under different settings.'\nauthor:\n- 'Ailing\u00a0Zheng, Wanli\u00a0Ni, Wen\u00a0Wang, and Hui\u00a0Tian [^1] [^2]'\ntitle: 'Enhancing NOMA Networks via Reconfigurable Multi-Functional Surface'\n---\n\nMulti-functional reconfigurable intelligent surface, non-orthogonal multiple access, rate maximization.\n\nIntroduction {#Introduction}\n============\n\nCompared to orthogonal multiple access (OMA), non-orthogonal multiple access (NOMA) is capable of achieving high spectrum efficiency and massive connectivity [@LiuNOMA2017]. Prior investigations have shown that the differences between users\u2019 channel conditions can be exploited to enhance NOMA performance" +"---\nabstract: 'In Bhattacharya et al. (Science Advances, 2020), a set of chemical reactions involved in the dynamics of actin waves in cells was studied, both at the microscopic level, where the individual chemical reactions are directly modelled using Gillespie-type algorithms, and at the macroscopic level, where a deterministic reaction-diffusion equation arises as the large-scale limit of the underlying chemical reactions. In this work, we derive, and subsequently study, the related mesoscopic stochastic reaction-diffusion system, or Chemical Langevin Equation, that arises from the same set of chemical reactions. We explain how the stochastic patterns that arise from this equation can be used to understand the experimentally observed dynamics from Bhattacharya et al. In particular, we argue that the mesoscopic stochastic model better captures the microscopic behaviour than the deterministic reaction-diffusion equation, while being more amenable to mathematical analysis and numerical simulations than the microscopic model.'\naddress:\n- |\n Biometris - Wageningen University and Research\\\n Wageningen; The Netherlands\\\n Email: \n- |\n Biometris - Wageningen University and Research\\\n Wageningen; The Netherlands\\\n Email: \nauthor:\n- 'C. H. S. Hamster'\n- 'P. van Heijster'\nbibliography:\n- 'ref.bib'\ntitle: Waves in a Stochastic Cell Motility Model\n---\n\nGillespie Algorithms, Cell Motility, Mesoscopic Patterns," +"---\nabstract: 'The standard model (SM) one-loop contributions to the most general $H^*Z^*Z^*$ coupling are obtained via the background field method in terms of Passarino-Veltman scalar functions, from which the contributions to the $H^*ZZ$ and $HZZ^*$ couplings are obtained in terms of two $CP$-conserving $h_{1,2}^V$ and one $CP$-violating $h_3^V$ form factors ($V=H, Z$). The current CMS constraints on the $HZZ$ coupling ratios are then used to obtain bounds on the real and absorptive parts of the anomalous $HZZ$ couplings. The former are up to two orders of magnitude tighter than previous ones, whereas the latter are the first of their kind. 
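The microscopic simulations mentioned in the actin-wave entry above rely on Gillespie-type algorithms; a minimal stochastic simulation algorithm (SSA) sketch for a toy birth-death process looks as follows (the rate constants and the example reactions are illustrative, not the paper's actin network):

```python
import numpy as np

def gillespie(rates, stoich, x0, t_max, seed=0):
    """Minimal SSA: rates(x) -> propensity vector, stoich -> state changes."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=int)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = rates(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)          # time to next reaction
        j = rng.choice(len(a), p=a / a0)        # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return times, states

# toy birth-death process: 0 -> A at rate k1, A -> 0 at rate k2 * A
k1, k2 = 5.0, 0.1    # stationary mean is k1 / k2 = 50
ts, xs = gillespie(lambda x: np.array([k1, k2 * x[0]]),
                   [np.array([1]), np.array([-1])], [0], 100.0)
```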
The effects of the absorptive parts of the $HZZ$ anomalous couplings, which have been overlooked in the past, are analyzed via the partial decay width $\\Gamma_{H^\\ast\\rightarrow ZZ}$, and a significant deviation from the SM tree-level contribution is observed at low energies, though it becomes negligible at high energies. We also explore the possibility that polarized $Z$ gauge bosons are used for the study of non-SM $HZZ$ contributions via a new left-right asymmetry $\\mathcal{A}_{LR}$, which is sensitive to $CP$-violating complex form factors and can be as large as unity at most, though in a more conservative scenario it" +"---\nabstract: 'Gradient-based meta-learning methods have primarily been applied to classical machine learning tasks such as image classification. Recently, PDE-solving deep learning methods, such as neural operators, are starting to make an important impact on learning and predicting the response of a complex physical system directly from observational data. Since the data acquisition in this context is commonly challenging and costly, the need to utilize and transfer existing knowledge to new and unseen physical systems is even more acute. Herein, we propose a novel meta-learning approach for neural operators, which can be seen as transferring the knowledge of solution operators between governing (unknown) PDEs with varying parameter fields. Our approach is a provably universal solution operator for multiple PDE solving tasks, with a key theoretical observation that underlying parameter fields can be captured in the first layer of neural operator models, in contrast to typical final-layer transfer in existing meta-learning methods. As applications, we demonstrate the efficacy of our proposed approach on PDE-based datasets and a real-world material modeling problem, illustrating that our method can handle complex and nonlinear physical response learning tasks while greatly improving the sampling efficiency in unseen tasks.'\naddress:\n- 'Department of Mathematics, Lehigh University," +"---\nabstract: |\n Self-organizing complex systems can be modeled using cellular automaton models. However, the parametrization of these models is crucial and significantly determines the resulting structural pattern. In this research, we introduce and successfully apply a sound statistical method to estimate these parameters. The method is based on constructing Gaussian likelihoods using characteristics of the structures such as the mean particle size. We show that our approach is robust with respect to the method parameters, the domain size of the patterns, and the number of CA iterations.\\\n Keywords. Cellular automaton, discrete model, parameter identification, statistical approach.\naddress:\n- 'Interdisciplinary Center for Scientific Computing (IWR), Heidelberg University, Mathematikon, Im Neuenheimer Feld 205, 69120 Heidelberg, Germany'\n- 'Mathematical Institute for Machine Learning and Data Science, Catholic University of Eichst\u00e4tt-Ingolstadt, Goldknopfgasse 7, 85049 Ingolstadt, Germany'\n- 'School of Engineering Science, Lappeenranta\u2013Lahti University of Technology, P.O. 
Box 20, 53851 Lappeenranta, Finland'\nauthor:\n- Alexey Kazarnikov\n- Nadja Ray\n- Heikki Haario\n- Joona Lappalainen\n- Andreas Rupp\nbibliography:\n- 'CAMparameters.bib'\ntitle: Parameter estimation for cellular automata\n---\n\nIntroduction\n============\n\nCellular automaton (CA) models are widely used to describe self-organizing, complex systems such as tumor growth\u00a0[@moreira2002cellular], protein bioinformatics\u00a0[@xiao2011cellular], chemical reactions\u00a0[@menshutina2020cellular], formation and turnover of" +"---\nauthor:\n- 'Zhongjie Huang$^{a,b}$,'\n- 'Bo Wang$^{a,b}$,'\n- 'Ellis Ye Yuan$^{a,b}$,'\n- 'Xinan Zhou$^{c}$'\nbibliography:\n- 'refs.bib'\ntitle: 'AdS super gluon scattering up to two loops: A position space approach'\n---\n\nIntroduction\n============\n\nThe AdS/CFT correspondence maps correlation functions of local operators in the CFT to on-shell scattering amplitudes in AdS. In the holographic limit, these observables are expanded in powers of $1/c$ with respect to the large central charge. At the leading order, the holographic correlators are just given by the generalized free field theory due to the large $N$ factorization and they can be computed simply by Wick contractions. However, to extract nontrivial dynamical information one needs to go to higher orders in $1/c$. Computing these subleading contributions is in general intractable from the CFT side alone as the theory is strongly coupled. The weakly coupled dual description makes it possible, at least in principle, as holographic correlators can be computed as amplitudes at various loop orders by using the AdS generalization of the standard Feynman diagram expansion. However, it should be noted that such a recipe is rather impractical to use beyond the few simplest cases [@Freedman:1998tz; @DHoker:1999pj; @Arutyunov:2000py; @Arutyunov:2002fh; @Arutyunov:2003ae], due to the proliferation" +"---\nabstract: 'We propose an optical clock based on narrow, spin-forbidden M1 and E2 transitions in laser-cooled neutral titanium. These transitions exhibit much smaller black body radiation shifts than those in alkaline earth atoms, small quadratic Zeeman shifts, and have wavelengths in the S, C, and L-bands of fiber-optic telecommunication standards, allowing for integration with robust laser technology. We calculate lifetimes; transition matrix elements; dynamic scalar, vector, and tensor polarizabilities; and black body radiation shifts of the clock transitions using a high-precision relativistic hybrid method that combines configuration interaction and coupled cluster approaches. We also calculate the line strengths and branching ratios of the transitions used for laser cooling. To identify magic trapping wavelengths, we have completed the largest-to-date direct dynamical polarizability calculations. Finally, we identify new challenges that arise in precision measurements due to magnetic dipole-dipole interactions and describe an approach to overcome them. Direct access to a telecommunications-band atomic frequency standard will aid the deployment of optical clock networks and clock comparisons over long distances.'\nauthor:\n- Scott Eustice\n- Dmytro Filin\n- Jackson Schrott\n- Sergey Porsev\n- Charles Cheung\n- Diego Novoa\n- 'Dan M. Stamper-Kurn'\n- 'Marianna S. 
Safronova'\nbibliography:\n- 'ti\\_clock\\_refs.bib'\ntitle: 'Optical" +"---\nabstract: '*Curriculum learning (CL)* - training using samples that are generated and presented in a meaningful order - was introduced in the machine learning context around a decade ago. While CL has been extensively used and analysed empirically, there has been very little mathematical justification for its advantages. We introduce a CL model for learning the class of $k$-parities on $d$ bits of a binary string with a neural network trained by stochastic gradient descent (SGD). We show that a wise choice of training examples, involving two or more product distributions, allows one to significantly reduce the computational cost of learning this class of functions, compared to learning under the uniform distribution. We conduct experiments to support our analysis. Furthermore, we show that for another class of functions - namely the \u2018Hamming mixtures\u2019 - CL strategies involving a bounded number of product distributions are not beneficial, while we conjecture that CL with unboundedly many curriculum steps can learn this class efficiently.'\nauthor:\n- 'Elisabetta Cornacchia[^1]'\n- 'Elchanan Mossel[^2]'\nbibliography:\n- 'references.bib'\ntitle: A Mathematical Model for Curriculum Learning\n---\n\nIntroduction\n============\n\nSeveral experimental studies have shown that humans and animals learn considerably better if the learning materials are presented in" +"---\nabstract: 'In this paper we solve in the positive the question of whether any finite set of integers $A$, containing $0$, is the mapping degree set between two oriented closed connected manifolds of the same dimension. We extend this question to the rational setting, where an affirmative answer is also given.'\naddress:\n- 'CITIC, CITMAga, Departamento de Computaci\u00f3n, Universidade da Coru[\u00f1]{}a, 15071-A Coru[\u00f1]{}a, Spain.'\n- 'Instituto de Matem\u00e1tica Interdisciplinar and Departamento de \u00c1lgebra, Geometr\u00eda y Topolog\u00eda, Universidad Complutense de Madrid, Plaza de las Ciencias, 3, 28040-Madrid, Spain'\n- 'Departamento de \u00c1lgebra, Geometr\u00eda y Topolog\u00eda, Universidad de M\u00e1laga, Campus de Teatinos, s/n, 29071-M\u00e1laga, Spain'\nauthor:\n- 'C. Costoya'\n- Vicente Mu\u00f1oz\n- Antonio Viruel\nbibliography:\n- 'CMV-references.bib'\ntitle: Finite sets containing zero are mapping degree sets\n---\n\n[^1]\n\nIntroduction {#sec:intro}\n============\n\nIn this paper, we settle in the positive various questions which have been raised about $D(M,N)$, the set of mapping degrees between two oriented closed connected manifolds $M$ and $N$ of the same dimension: $$D(M,N)=\\{ d \\in {\\mathbb{Z}}\\, |\\, \\exists f:M\\to N, \\, \\deg(f)=d\\}.$$\n\nC. Neofytidis, S. Wang, and Z. Wang [@NWW Problem 1.1] discuss the problem of finding, for any set $A \\subset \\mathbb Z$ containing $0$, two" +"---\nabstract: 'Magnetars are the most strongly magnetized neutron stars, and one of the most promising targets for X-ray polarimetric measurements. We present here the first Imaging X-ray Polarimetry Explorer ([[*IXPE*]{}]{}) observation of the magnetar [1RXS\u00a0J170849.0-400910]{}, jointly analysed with a new Swift observation and archival NICER data. The total (energy and phase integrated) emission in the 2\u20138\u00a0keV energy range is linearly polarized, at a $\\sim 35$% level. 
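The benefit of biased product distributions for parities in the curriculum-learning entry above can be checked numerically: with $\pm 1$ coordinates, a relevant coordinate has correlation $(2p-1)^{k-1}$ with the $k$-parity under the $p$-biased distribution, but zero correlation under the uniform one. A small Monte Carlo sketch (all sizes are illustrative):

```python
import numpy as np

def corr_with_parity(p, n=20, k=5, m=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.where(rng.random((m, n)) < p, 1, -1)   # P(x_j = +1) = p
    parity = x[:, :k].prod(axis=1)                # parity on the first k bits
    return (x[:, 0] * parity).mean()              # a relevant coordinate

print(corr_with_parity(0.5))   # ~0: no single-coordinate signal under uniform
print(corr_with_parity(0.9))   # ~(2p-1)^(k-1) = 0.8^4, roughly 0.41
```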
The phase-averaged polarization signal shows a marked increase with energy, ranging from $\\sim$ 20% at 2\u20133\u00a0keV up to $\\sim$ 80% at 6\u20138\u00a0keV, while the polarization angle remains constant. This indicates that radiation is mostly polarized in a single direction. The spectrum is well reproduced by a combination of either two thermal (blackbody) components or a blackbody and a power law. Both the polarization degree and angle also show a variation with the spin phase, and the former is almost anti-correlated with the source counts in the 2\u20138\u00a0keV and 2\u20134\u00a0keV bands. We discuss the possible implications and interpretations, based on a joint analysis of the spectral, polarization and pulsation properties of the source. A scenario in which the surface temperature is not homogeneous, with a hotter cap covered" +"---\nabstract: 'BERTScore is an effective and robust automatic metric for reference-based machine translation evaluation. In this paper, we incorporate a multilingual knowledge graph into BERTScore and propose a metric named KG-BERTScore, which linearly combines the results of BERTScore and bilingual named entity matching for reference-free machine translation evaluation. On the WMT19 \u201cQE as a metric without references\u201d shared task, our metric KG-BERTScore achieves a higher overall correlation with human judgements than the current state-of-the-art metrics for reference-free machine translation evaluation.[^1] Moreover, the pre-trained multilingual model used by KG-BERTScore and the parameter for linear combination are also studied in this paper.'\nauthor:\n- 'Zhanglin Wu, Min Zhang, Ming Zhu, Yinglu Li, Ting Zhu, Hao Yang\\*, Song Peng, Ying Qin'\ntitle: 'KG-BERTScore: Incorporating Knowledge Graph into BERTScore for Reference-Free Machine Translation Evaluation'\n---\n\nIntroduction {#sec1}\n============\n\nMachine translation (MT) evaluation is an important research topic in natural language processing, and its development plays a crucial role in the progress of machine translation. Although human judgement is an ideal MT evaluation metric, automatic MT evaluation metrics are applied in most cases due to the former\u2019s long evaluation cycle and high labor cost. With the continuous deepening of research, automatic MT evaluation" +"---\nabstract: 'As human-robot collaboration increases in the workforce, it becomes essential for human-robot teams to coordinate efficiently and intuitively. Traditional approaches for human-robot scheduling either utilize exact methods that are intractable for large-scale problems and struggle to account for stochastic, time-varying human task performance, or application-specific heuristics that require expert domain knowledge to develop. We propose a deep learning-based framework, called HybridNet, combining a heterogeneous graph-based encoder with a recurrent schedule propagator for scheduling stochastic human-robot teams under upper- and lower-bound temporal constraints. The HybridNet\u2019s encoder leverages Heterogeneous Graph Attention Networks to model the initial environment and team dynamics while accounting for the constraints. By formulating task scheduling as a sequential decision-making process, the HybridNet\u2019s recurrent neural schedule propagator leverages Long Short-Term Memory (LSTM) models to propagate forward consequences of actions to carry out fast schedule generation, removing the need to interact with the environment between every task-agent pair selection. 
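Returning to the KG-BERTScore entry above: the final score is a linear combination of BERTScore with a bilingual named-entity matching signal. A toy sketch follows (the weight, the entity sets, and the matching rule are illustrative assumptions; real cross-lingual entity linking is far more involved):

```python
def entity_match(src_entities: set, hyp_entities: set) -> float:
    # toy matching: fraction of source entities preserved in the hypothesis
    if not src_entities:
        return 1.0
    return len(src_entities & hyp_entities) / len(src_entities)

def kg_bertscore(bertscore_f1: float, ent_score: float, beta: float = 0.7) -> float:
    # linear combination of the two signals; beta is a hypothetical weight
    return beta * bertscore_f1 + (1.0 - beta) * ent_score

print(kg_bertscore(0.86, entity_match({"Airbus", "Toulouse"}, {"Airbus"})))
```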
The resulting scheduling policy network provides a computationally lightweight yet highly expressive model that is end-to-end trainable via Reinforcement Learning algorithms. We develop a virtual task scheduling environment for mixed human-robot teams in a multi-round setting, capable of modeling the stochastic learning behaviors of human workers. Experimental results" +"---\nabstract: 'On a closed Riemannian surface $(M,\\bar g)$ with negative Euler characteristic, we study the problem of finding conformal metrics with prescribed volume $A>0$ and the property that their Gauss curvatures $f_\\lambda= f + \\lambda$ are given as the sum of a prescribed function $f \\in C^\\infty(M)$ and an additive constant $\\lambda$. Our main tool in this study is a new variant of the prescribed Gauss curvature flow, for which we establish local well-posedness and global compactness results. In contrast to previous work, our approach does not require any sign conditions on $f$. Moreover, we exhibit conditions under which the function $f_\\lambda$ is sign changing and the standard prescribed Gauss curvature flow is not applicable.'\nauthor:\n- 'Franziska Borer[^1]'\n- 'Peter Elbau[^2]'\n- 'Tobias Weth[^3]'\nbibliography:\n- 'articles.bib'\ntitle: A Variant Prescribed Curvature Flow on Closed Surfaces with Negative Euler Characteristic\n---\n\nAcknowledgment {#acknowledgment .unnumbered}\n==============\n\nThis work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project 408275461 (Smoothing and Non-Smoothing via Ricci Flow).\\\nWe would like to thank Esther Cabezas\u2013Rivas for helpful discussions.\n\nIntroduction\n============\n\nLet $(M,\\bar g)$ be a two-dimensional, smooth, closed, connected, oriented Riemannian manifold endowed with a smooth background metric $\\bar g$. A" +"---\nabstract: 'We have carried out the first spectro-polarimetric study of the bright NS-LMXB [GX\u00a09+9]{} using *IXPE* and *AstroSat* observations. We report a significant detection of polarization of $1.7\\pm 0.4\\%$ over the $2-8$\u00a0keV energy band, with a polarization angle of $63^{\\circ}\\pm 7^{\\circ}$. The polarization is found to be energy-dependent, with a $3\\sigma$ polarization degree consistent with null polarization in $2-4$\u00a0keV, and $3.2\\%$ in $4-8$\u00a0keV. Typical of the spectra seen in NS-LMXBs, we find that a combination of soft thermal emission from the accretion disc and a Comptonized component from the optically thick corona produces a good fit to the spectra. We also attempt to infer the individual polarization of these components, obtaining a $3\\sigma$ upper limit of $\\sim 11\\%$ on the polarization degree of the thermal component and constraining that of the Comptonized component to $\\sim 3\\%$. We comment on the possible corona geometry of the system based on our results.'\nauthor:\n- |\n Rwitika Chatterjee$^{1}$[^1], Vivek K. Agrawal$^{1}$, Kiran M. Jayasurya$^{1}$, Tilak Katoch$^{2}$\\\n $^{1}$Space Astronomy Group, ISITE Campus, U. R. Rao Satellite Center, ISRO, Bengaluru 560037, India\\\n $^{2}$Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India\nbibliography:\n-" +"---\nabstract: 'The Arrhenius crossover temperature, $T_{A}$, corresponds to a thermodynamic state wherein the atomistic dynamics of a liquid becomes heterogeneous and cooperative, and the activation barrier of diffusion dynamics becomes temperature-dependent at temperatures below $T_{A}$. 
The theoretical estimation of this temperature is difficult for some types of materials, especially silicates and borates. In these materials, self-diffusion as a function of the temperature $T$ is reproduced by the Arrhenius law, where the activation barrier is practically independent of the temperature $T$. The purpose of the present work was to establish the relationship between the Arrhenius crossover temperature $T_{A}$ and the physical properties of liquids directly related to their glass-forming ability. Using a machine learning model, the crossover temperature $T_{A}$ was calculated for silicates, borates, organic compounds and metal melts of various compositions. The empirical values of the glass transition temperature $T_{g}$, the melting temperature $T_{m}$, the ratio of these temperatures $T_{g}/T_{m}$ and the fragility index $m$ were applied as input parameters. It has been established that the temperatures $T_{g}$ and $T_{m}$ are significant parameters, whereas their ratio $T_{g}/T_{m}$ and the fragility index $m$ do not correlate much with the temperature $T_{A}$. An important result of the present work is the analytical" +"---\nabstract: |\n One way of introducing sparsity into deep networks is by attaching an external table of parameters that is sparsely looked up at different layers of the network. By storing the bulk of the parameters in the external table, one can increase the capacity of the model without necessarily increasing the inference time. Two crucial questions in this setting are then: what is the lookup function for accessing the table and how are the contents of the table consumed? Prominent methods for accessing the table include 1) using word/wordpiece token-ids as table indices, 2) LSH hashing the token vector in each layer into a table of buckets, and 3) learnable softmax style routing to a table entry. The ways to consume the contents include adding/concatenating to the input representation, and using the contents as expert networks that specialize to different inputs. In this work, we conduct rigorous experimental evaluations of existing ideas and their combinations. We also introduce a new method, alternating updates, that enables access to an increased token dimension without increasing the computation time, and demonstrate its effectiveness in language modeling.\n\n It has been well established that increasing scale in deep transformer networks leads to improved quality" +"---\nabstract: 'Firms have access to abundant data on market participants. They use these data to target contracts to agents with specific characteristics, and describe these contracts in opaque terms. In response to such practices, recently proposed regulations aim to increase transparency, especially in digital markets. In order to understand when opacity arises in contracting and the potential effects of proposed regulations, we study a moral hazard model in which a risk-neutral principal faces a continuum of weakly risk-averse agents. The agents differ in an observable characteristic that affects the payoff of the principal. In a *described contract*, the principal sorts the agents into groups, and to each group communicates a distribution of output-contingent payments. Within each group, the realized distribution of payments must be consistent with the communicated contract. A described contract is *transparent* if the principal communicates the realized contract to the agent ex-ante, and otherwise it is *opaque*. 
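For the external-table entry above, the simplest access pattern (token-id indexing, with the retrieved rows added to the input representation) can be sketched in a few lines of numpy; the sizes and the additive combination are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

class ExternalTable:
    """Large external parameter table, sparsely looked up by token id and
    added to a layer's input; only the indexed rows are touched per batch."""
    def __init__(self, vocab_size, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.table = 0.01 * rng.standard_normal((vocab_size, dim))

    def __call__(self, token_ids, hidden):
        return hidden + self.table[token_ids]   # sparse gather, dense add

table = ExternalTable(vocab_size=50_000, dim=64)
out = table(np.array([17, 4242, 9]), np.zeros((3, 64)))
```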
We provide a geometric characterization of the principal\u2019s optimal described contract as well as conditions under which the optimal described mechanism is transparent or opaque.'\nauthor:\n- 'Andreas Haupt and Zo\u00eb Hitzig[^1]'\nbibliography:\n- 'refs.bib'\ntitle: Opaque Contracts\n---\n\nIntroduction\n============\n\nFirms have access to abundant data on consumers" +"---\nabstract: 'In many modern statistical problems, the limited available data must be used both to develop the hypotheses to test, and to test these hypotheses\u2014that is, both for exploratory and confirmatory data analysis. Reusing the same dataset for both exploration and testing can introduce massive selection bias, leading to many false discoveries. Selective inference is a framework that allows for performing valid inference even when the same data is reused for exploration and testing. In this work, we are interested in the problem of selective inference for data clustering, where a clustering procedure is used to hypothesize a separation of the data points into a collection of subgroups, and we then wish to test whether these data-dependent clusters in fact represent meaningful differences within the data. Recent work by @gao2020selective provides a framework for doing selective inference for this setting, where a hierarchical clustering algorithm is used for producing the cluster assignments, which was then extended to k-means clustering by @chen2022selective. Both these works rely on assuming a known covariance structure for the data, but in practice, the noise level needs to be estimated\u2014and this is particularly challenging when the true cluster structure is unknown. In our work," +"---\nabstract: 'We study the tail of $p(U)$, the probability distribution of $U=\\vert\\psi(0,L)\\vert^2$, for $\\ln U\\gg 1$, $\\psi(x,z)$ being the solution to $\\partial_z\\psi -\\frac{i}{2m}\\nabla_{\\perp}^2 \\psi =g\\vert S\\vert^2\\, \\psi$, where $S(x,z)$ is a complex Gaussian random field, $z$ and $x$ respectively are the axial and transverse coordinates, with $0\\le z\\le L$, and both $m\\ne 0$ and $g>0$ are real parameters. We perform the first instanton analysis of the corresponding Martin-Siggia-Rose action, from which it is found that the realizations of $S$ concentrate onto long filamentary instantons, as $\\ln U\\to +\\infty$. The tail of $p(U)$ is deduced from the statistics of the instantons. The value of $g$ above which $\\langle U\\rangle$ diverges coincides with the one obtained by the completely different approach developed in Mounaix et al. 2006 [*Commun. Math. Phys.*]{} [**264**]{}\u00a0741. Numerical simulations clearly show a statistical bias of $S$ towards the instanton for the largest sampled values of $\\ln U$. The high maxima \u2014 or \u2018hot spots\u2019 \u2014 of $\\vert S(x,z)\\vert^2$ for the biased realizations of $S$ tend to cluster in the instanton region.'\nauthor:\n- Philippe Mounaix\ntitle: 'Schr\u00f6dinger Equation Driven by the Square of a Gaussian Field: Instanton Analysis in the Large Amplification Limit'\n---\n\nIntroduction {#intro}" +"---\nabstract: '[There is a persistent lack of funding, especially for SMEs, which cyclically worsens. The factoring and invoice discounting market appears to address delays in paying commercial invoices: sellers bring still-to-be-paid invoices to financial intermediaries, typically banks, which provide an advance payment. 
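The selection problem motivating the clustering-inference entry above is easy to reproduce: cluster pure noise, then naively test the discovered groups, and the p-values are wildly anti-conservative. A small simulation sketch (the sample sizes and the clustering method are arbitrary choices):

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

# Pure noise has no true clusters, yet clustering followed by a naive
# two-sample t-test between the discovered groups "rejects" almost always.
rng = np.random.default_rng(0)
pvals = []
for _ in range(200):
    x = rng.standard_normal((60, 1))
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(x)
    p = stats.ttest_ind(x[labels == 0, 0], x[labels == 1, 0]).pvalue
    pvals.append(p)
print(np.mean(np.array(pvals) < 0.05))   # close to 1.0, not the nominal 0.05
```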
This article contains research on novel decentralized approaches to said lending services without intermediaries by using liquidity pools and their associated heuristics, creating an Automated Market Maker. In our approach, the risk of the contributed collateral and the invoice trades is measured with a formula: the Kelly criterion is used to calculate the optimal premium to be contributed to a liquidity pool in the funding of the said invoices. The behavior of the algorithm is studied in several scenarios of streams of invoices with representative amounts, collaterals, payment delays, and nonpayment rates (mora). We completed the study with attack scenarios involving bogus, nonpayable invoices. As a result, we have created a resilient solution that performs best with partially collateralized invoices. The outcome is a decentralized market, developed with the Kelly criterion, that is reasonably resilient to a wide variety of invoicing cases and provides sound profit to liquidity providers; several premium distribution policies were" +"---\nabstract: 'A self-driven hybrid atomic spin oscillator is demonstrated in theory and experiment with a vapor Rb-Xe dual-spin system. The raw signal of Rb spin oscillation is amplified, phase-shifted and sent back to drive the Xe spins coherently. By fine-tuning the driving field strength and phase, a self-sustaining spin oscillation signal with zero frequency shift is obtained. The effective coherence time is infinitely prolonged beyond the intrinsic coherence time of Xe spins, forming a hybrid atomic spin oscillator. Spectral analysis indicates that a frequency resolution of 13.1 nHz is achieved, enhancing the detection sensitivity for magnetic fields. Allan deviation analysis shows that the spin oscillator can operate in continuous wave mode like a spin maser. The prototype spin oscillator can be easily incorporated into other hybrid spin systems and enhance the detection sensitivity of alkali metal-noble gas comagnetometers.'\nauthor:\n- Erwei Li\n- Qianjin Ma\n- Guobin Liu\n- Peter Yun\n- Shougang Zhang\ntitle: 'Self-driven Hybrid Atomic Spin Oscillator'\n---\n\n[^1]\n\n[^2]\n\nAlkali metal-noble gas comagnetometers have been used for both fundamental and practical applications, such as the search for axion-like particles or new physics beyond the standard model [@Bulatowicz2013PRL; @Domainwall2013PRL; @Limes2018PRL; @EDM2019PRL] and inertial" +"---\nabstract: 'Nearest neighbor machine translation augments the Autoregressive Translation\u00a0(AT) with $k$-nearest-neighbor retrieval, by comparing the similarity between the token-level context representations of the target tokens in the query and the datastore. However, the token-level representation may introduce noise when translating ambiguous words, or fail to provide accurate retrieval results when the representation generated by the model contains indistinguishable context information, e.g., Non-Autoregressive Translation\u00a0(NAT) models. In this paper, we propose a novel $n$-gram nearest neighbor retrieval method that is model-agnostic and applicable to both AT and NAT models. Specifically, we concatenate the adjacent $n$-gram hidden representations as the key, while the tuple of corresponding target tokens is the value. At inference time, we propose tailored decoding algorithms for AT and NAT models, respectively. 
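For the Kelly-criterion sizing mentioned in the invoice-funding entry above: for a bet that gains $b$ per unit staked with probability $p$ and loses $a$ per unit otherwise, the log-wealth-optimal fraction is $f^* = p/a - q/b$. A hedged sketch with hypothetical parameter names (this is the textbook criterion, not the article's exact premium formula):

```python
def kelly_fraction(p_pay: float, premium: float, loss_given_default: float) -> float:
    """p_pay: probability the invoice is paid; premium: gain per unit staked;
    loss_given_default: loss per unit staked on default (lower when the
    invoice is partially collateralized). All names are illustrative."""
    q = 1.0 - p_pay
    f = p_pay / loss_given_default - q / premium
    return max(0.0, min(1.0, f))          # stake a fraction in [0, 1]

print(kelly_fraction(0.95, premium=0.03, loss_given_default=0.5))  # ~0.23
print(kelly_fraction(0.95, premium=0.03, loss_given_default=1.0))  # 0.0
```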
We demonstrate that the proposed method consistently outperforms the token-level method on both AT and NAT models, on both general and domain adaptation translation tasks. On domain adaptation, the proposed method brings average BLEU improvements of $1.03$ and $2.76$ on AT and NAT models, respectively.'\nauthor:\n- |\n Rui Lv$^{1}$[^1]\u00a0\u00a0, Junliang Guo$^2$\u00a0\u00a0, Rui Wang$^2$, Xu Tan$^2$, Qi Liu$^{1}$, Tao Qin$^{2}$\\\n $^1$University of Science and Technology of China $^2$Microsoft Research" +"---\nabstract: 'We report new examples of Sidon sets in abelian groups arising from generalized jacobians of curves, and discuss some of their properties with respect to size and structure.'\naddress:\n- 'Univ. Lille, CNRS, UMR 8524 - Laboratoire Paul Painlev\u00e9, F-59000 Lille, France'\n- 'CMLS, \u00c9cole polytechnique, F-91128 Palaiseau cedex, France'\n- 'D-MATH, ETH Z\u00fcrich, R\u00e4mistrasse 101, CH-8092 Z\u00fcrich, Switzerland'\nauthor:\n- Arthur Forey\n- Javier Fres\u00e1n\n- Emmanuel Kowalski\nbibliography:\n- 'sidon.bib'\ntitle: Sidon sets in algebraic geometry\n---\n\nIntroduction\n============\n\nLet $A$ be an abelian group. A subset $S$ of $A$ is called a *Sidon set* if $S$ does not contain non-trivial additive quadruples; that is, if any solution $(x_1,x_2,x_3,x_4)\\in S^4$ of the equation $$\\label{eq-sidon}\n x_1+x_2=x_3+x_4$$ satisfies $x_1\\in\\{x_3,x_4\\}$ (see, e.g.,\u00a0[@eberhard-manners \u00a71]). In other words, up to transposition an element of\u00a0$A$ is in at most one way the sum of two elements of\u00a0$S$.\n\nWe will explain how to construct a range of new examples of Sidon sets using the theory of commutative algebraic groups. In fact, we sometimes most naturally obtain a slight variant: given an element\u00a0$a$ of\u00a0$A$, we say that a subset $S$ of\u00a0$A$ is a *symmetric Sidon set with center" +"---\nabstract: 'While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when evaluating *long-form* summaries. Through a survey of [162]{}\u00a0papers on long-form summarization, we first shed light on current human evaluation practices surrounding long-form summaries. We find that 73% of these papers do not perform any human evaluation on model-generated summaries, while other works face new difficulties that manifest when dealing with long documents (e.g., low inter-annotator agreement). Motivated by our survey, we present [LongEval]{}, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores? (2) How can we minimize annotator workload while maintaining accurate faithfulness scores? and (3) Do humans benefit from automated alignment between summary and source snippets? We deploy [LongEval]{}\u00a0in annotation studies on two long-form summarization datasets in different domains (SQuALITY and PubMed), and we find that switching to a finer granularity of judgment (e.g., clause-level) reduces inter-annotator variance in faithfulness scores (e.g., std-dev from 18.5 to 6.8). 
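The defining property in the Sidon entry above is straightforward to test computationally, and a greedy search already produces integer Sidon sets (the classical Mian-Chowla sequence, unrelated to the article's algebro-geometric constructions):

```python
from itertools import combinations_with_replacement

def is_sidon(s):
    """Sidon iff all pairwise sums x + y with x <= y are distinct."""
    sums = [x + y for x, y in combinations_with_replacement(sorted(s), 2)]
    return len(sums) == len(set(sums))

def mian_chowla(n):
    """Greedy integer Sidon set: always take the smallest admissible element."""
    s, candidate = [], 1
    while len(s) < n:
        if is_sidon(s + [candidate]):
            s.append(candidate)
        candidate += 1
    return s

print(mian_chowla(8))   # [1, 2, 5, 11, 22, 40, 56, 69]
```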
We also show that scores from a *partial* annotation of fine-grained units" +"---\nauthor:\n- Rui Zhu\n- Di Tang\n- Siyuan Tang\n- Guanhong Tao\n- Shiqing Ma\n- Xiaofeng Wang\n- Haixu Tang\nbibliography:\n- 'reference.bib'\ntitle: 'Gradient Shaping: Enhancing Backdoor Attack Against Reverse Engineering'\n---\n\nAbstract {#abstract .unnumbered}\n--------\n\nMost existing methods to detect backdoored machine learning (ML) models take one of two approaches: trigger inversion (aka reverse engineering) and weight analysis (aka model diagnosis). In particular, the gradient-based trigger inversion is considered to be among the most effective backdoor detection techniques, as evidenced by the TrojAI competition\u00a0[@trojai], Trojan Detection Challenge\u00a0[@NIPS_competation] and backdoorBench\u00a0[@benchmark]. However, little has been done to understand why this technique works so well and, more importantly, whether it raises the bar for the backdoor attack. In this paper, we report the first attempt to answer this question by analyzing the change rate of the backdoored model around its trigger-carrying inputs. Our study shows that existing attacks tend to inject backdoors characterized by a low change rate around trigger-carrying inputs, which is easy to capture by gradient-based trigger inversion. In the meantime, we found that the low change rate is not necessary for a backdoor attack to succeed: we design a new" +"---\nabstract: 'Motivated by the markets operating on fast time scales, we present a framework for online coalitional games with time-varying coalitional values and propose real-time payoff distribution mechanisms. Specifically, we design two online distributed algorithms to track the Shapley value and the core, the two most widely studied payoff distribution criteria in coalitional game theory. We show that the payoff distribution trajectory resulting from our proposed algorithms converges to a neighborhood of the time-varying solutions. We adopt an operator-theoretic perspective to show the convergence of our algorithms. Numerical simulations of a real-time local electricity market and a cooperative energy forecasting market illustrate the performance of our algorithms: [the difference between online payoffs and static payoffs (Shapley and the core) to the participants is small; online algorithms considerably improve the scalability of the mechanism with respect to the number of market participants.]{}'\nauthor:\n- 'Aitazaz Ali Raja and Sergio Grammatico [^1] [^2]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'Bibliography.bib'\ntitle: 'Online coalitional games for real-time payoff distribution with applications to energy markets'\n---\n\nIntroduction {#sec: intro}\n============\n\nA technological transformation is currently underway converting key infrastructures, such as power grids, commerce, and trading platforms, into highly dynamic complex systems. In these domains, predictive" +"---\nabstract: 'The Near-Infrared Spectrograph (NIRSpec) is one of the four focal plane instruments on the James Webb Space Telescope. In this paper, we summarize the in-orbit performance of NIRSpec, as derived from data collected during its commissioning campaign and the first few months of nominal science operations. More specifically, we discuss the performance of some critical hardware components such as the two NIRSpec Hawaii-2RG (H2RG) detectors, wheel mechanisms, and the micro-shutter array.
We also summarize the accuracy of the two target acquisition procedures used to accurately place science targets into the slit apertures, discuss the current status of the spectrophotometric and wavelength calibration of NIRSpec spectra, and provide the \u2019as measured\u2019 sensitivity in all NIRSpec science modes. Finally, we point out a few important considerations for the preparation of NIRSpec science programs.'\nauthor:\n- 'T. B\u00f6ker'\n- 'T. L. Beck'\n- 'S. M. Birkmann'\n- 'G. Giardino'\n- 'C. Keyes'\n- 'N. Kumari'\n- 'J. Muzerolle'\n- 'T. Rawle'\n- 'P. Zeidler'\n- 'Y. Abul-Huda'\n- 'C. Alves de Oliveira'\n- 'S. Arribas'\n- 'K. Bechtold'\n- 'R. Bhatawdekar'\n- 'N. Bonaventura'\n- 'A. J. Bunker'\n- 'A. J. Cameron'\n- 'S. Carniani'\n- 'S. Charlot'\n- 'M. Curti'" +"---\nabstract: 'Assuming the Riemann hypothesis, we prove the latest explicit version of the prime number theorem for short intervals. Using this result, and assuming the generalised Riemann hypothesis for Dirichlet $L$-functions is true, we then establish explicit formulae for $\\psi(x,\\chi)$, $\\theta(x,\\chi)$, and an explicit version of the prime number theorem for primes in arithmetic progressions that hold for general moduli $q\\geq 3$. Finally, we restrict our attention to $q\\leq 10\\,000$ and use an exact computation to refine these results.'\naddress: 'University of Bristol, School of Mathematics, Fry Building, Woodland Road, Bristol, BS8 1UG'\nauthor:\n- Ethan Simpson Lee\nbibliography:\n- 'refs.bib'\ntitle: The prime number theorem for primes in arithmetic progressions at large values\n---\n\n[^1]\n\nIntroduction\n============\n\nSuppose that $x\\geq 2$, $p$ are prime numbers, $\\chi$ is a Dirichlet character modulo $q\\geq 3$, $$ \\psi(x,\\chi) = \\sum_{n\\leq x} \\chi(n)\\Lambda(n),\n \\quad\\text{and}\\quad\n \\psi_1(x,\\chi) \n = \\int_0^x \\psi(t,\\chi)\\,dt\n = \\sum_{n\\leq x} \\chi(n) \\Lambda(n) (x-n).$$ The purpose of this paper is to prove the latest explicit and conditional version of the prime number theorem for primes in arithmetic progressions, which is a collection of asymptotic estimates for $$\\pi(x;q,a) \n = \\sum_{\\substack{p \\leq x \\\\ p \\equiv a {\\,\\left(\\textnormal{mod }q\\right)\\,}}} 1,\n \\quad\n \\theta(x;q,a) \n = \\sum_{\\substack{p" +"---\naddress: |\n $^{1}$ Robotics Department, University of Michigan, Ann Arbor, MI 48109, USA\\\n $^{2}$ Naval Architecture and Marine Engineering, University of Michigan, Ann Arbor, MI 48109, USA;maanigj@umich.edu (M.G.)\\\n $^{3}$ Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA 24061, USA; ematkins@vt.edu (E.M.A.)\\\n $^{4}$ Aerospace Engineering, University of Michigan, Ann Arbor, MI 48109, USA; jwcutler@umich.edu (J.W.C.)\\\nbibliography:\n- 'refs.bib'\n---\n\nIntroduction\n============\n\nEarth\u2019s magnetic field provides a reference measurement anywhere on the planet to assist with navigation. Magnetometers, or 3D compasses, onboard many modern robotics platforms can measure the local magnetic field to assist with navigation. To do this, we often rely on simplifying assumptions (i.e., the magnetic field points northward and is constant in a target region) or on maps of Earth\u2019s magnetic field like the World Magnetic Model (WMM). However, when testing in indoor environments, these assumptions and worldwide maps are inaccurate due to the contribution of the magnetic field from metallic objects inside buildings. 
Because of this, measurements from magnetometers are often ignored for indoor navigation.\n\nThere are a number of existing works that leverage indoor magnetic field measurements to estimate position or attitude [@Kuevor2021; @Kok2018; @Vallivaara2011; @Vallivaara2010; @Akai2015; @Akai2017; @Suksakulchai2000; @Li2012; @Haverinen2009; @Wu2019]. However, a common theme" +"---\nabstract: |\n A proper $k$-coloring of a graph $G$ is a *neighbor-locating $k$-coloring* if for each pair of vertices in the same color class, the sets of colors found in their neighborhoods are different. The neighbor-locating chromatic number $\\chi_{NL}(G)$ is the minimum $k$ for which $G$ admits a neighbor-locating $k$-coloring. A proper $k$-coloring of a graph $G$ is a *locating $k$-coloring* if for each pair of vertices $x$ and $y$ in the same color class, there exists a color class $S_i$ such that $d(x,S_i)\\neq d(y,S_i)$. The locating chromatic number $\\chi_{L}(G)$ is the minimum $k$ for which $G$ admits a locating $k$-coloring. It follows that $\\chi(G)\\leq\\chi_L(G)\\leq\\chi_{NL}(G)$ for any graph $G$, where $\\chi(G)$ is the usual chromatic number of $G$.\n\n We show that for any three integers $p,q,r$ with $2\\leq p\\leq q\\leq r$ (except when $2=p=q$ [...])" +"---\nabstract: '[...] \u201cGPTCompare\u201d, which allows programmers to visually compare multiple source code solutions generated by GPT-n models for the same programming-related query by highlighting their similarities and differences.'\nauthor:\n- \ntitle: |\n Navigating Complexity in Software Engineering:\\\n A Prototype for Comparing GPT-n" +"---\nabstract: 'While the concepts of quantum many-body integrability and chaos are of fundamental importance for the understanding of quantum matter, their precise definition has so far remained an open question. In this work, we introduce an alternative indicator for quantum many-body integrability and chaos, which is based on the statistics of eigenstates by means of nearest-neighbor subsystem trace distances. We show that this provides us with a faithful classification through extensive numerical simulations for a large variety of paradigmatic model systems including random matrix theories, free fermions, Bethe-ansatz solvable systems, and models of many-body localization. While existing indicators, such as those obtained from level-spacing statistics, have already been utilized with great success, they also face limitations. This concerns for instance the quantum many-body kicked top, which is exactly solvable but classified as chaotic in certain regimes based on the level-spacing statistics, while our introduced indicator signals the expected quantum many-body integrability. We discuss the universal behaviors we observe for the nearest-neighbor trace distances and point out that our indicator might be useful also in other contexts such as for the many-body localization transition.'\nauthor:\n- Reyhaneh Khasseh\n- Jiaju Zhang\n- Markus Heyl\n- 'M.\u00a0A.\u00a0Rajabpour'\ntitle: 'Identifying" +"---\nabstract: 'This paper describes the acceleration of high-energy protons captured by electrostatic waves in the frame of a jet, arising in the instability region of a relativistic jet, where spiral structures are excited. The wave has a spatially heterogeneous structure of $\\exp(i k_\\parallel z +i m_\\phi \\phi)$. Protons can be captured in potential wells created by spiral waves, and thereby experience acceleration via a mechanism known as surfatron acceleration.
Expressions for the maximum energy ($E_p \\simeq 10^{19}\\,\\mathrm{eV}$) and the energy spectrum in terms of the jet parameters are obtained.'\nauthor:\n- 'Ya. N. Istomin'\n- 'A. A. Gunya'\nbibliography:\n- 'References.bib'\ntitle: 'Surfatron acceleration of the protons to high-energy in the relativistic jets'\n---\n\nIntroduction {#section1}\n============\n\nAxisymmetric collimated quasi-stationary ejections, called relativistic jets, arise in the process of plasma accretion onto the central black hole from the surrounding disk. Such a flow of relativistic plasma with a nonthermal nature of radiation is typical mainly for a number of active galactic nuclei (AGNs) and microquasars [@2006MNRAS.370..399B]. The pioneering works, [@1977MNRAS.179..433B] and [@1982MNRAS.199..883B], describe the prerequisites for the emergence of axisymmetric jets. The quasi-cylindrical structure of the jet is formed by the ratio of the prevailing toroidal magnetic field over" +"---\nabstract: 'Response generation is one of the critical components in task-oriented dialog systems. Existing studies have shown that large pre-trained language models can be adapted to this task. The typical paradigm of adapting such extremely large language models would be by fine-tuning on the downstream tasks which is not only time-consuming but also involves significant resources and access to fine-tuning data. Prompting [@schick2020exploiting] has been an alternative to fine-tuning in many NLP tasks. In our work, we explore the idea of using prompting for response generation in task-oriented dialog systems. Specifically, we propose an approach that performs *contextual dynamic prompting* where the prompts are learnt from dialog contexts. We aim to distill useful prompting signals from the dialog context. On experiments with MultiWOZ 2.2 dataset [@zang2020multiwoz], we show that contextual dynamic prompts improve response generation in terms of *combined score* [@mehri-etal-2019-structured] by 3 absolute points, and a massive 20 points when dialog states are incorporated. Furthermore, human annotation on these conversations found that agents which incorporate context were preferred over agents with vanilla prefix-tuning.'\nauthor:\n- |\n Sandesh Swamy\\\n AWS AI Labs\\\n sanswamy@amazon.com Narges Tabari\\\n AWS AI Labs\\\n nargesam@amazon.com Chacha Chen[^1]\\\n University of Chicago\\\n chacha@uchicago.edu Rashmi Gangadharaiah\\\n AWS AI" +"---\nabstract: |\n **Motivation:** As viruses that mainly infect bacteria, phages are key players across a wide range of ecosystems. Analyzing phage proteins is indispensable for understanding phages\u2019 functions and roles in microbiomes. High-throughput sequencing enables us to obtain phages in different microbiomes at low cost. However, compared to the fast accumulation of newly identified phages, phage protein classification remains difficult. In particular, a fundamental need is to annotate virion proteins, the structural proteins such as major tail, baseplate, etc. Although there are experimental methods for virion protein identification, they are too expensive or time-consuming, leaving a large number of proteins unclassified. Thus, there is a great demand to develop a computational method for fast and accurate phage virion protein classification.\\\n **Results:** In this work, we adapted the state-of-the-art image classification model, Vision Transformer, to conduct virion protein classification.
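The chaos game representation (CGR) invoked next maps a sequence onto an image. A minimal sketch of the classic four-letter CGR follows (ours, for illustration only; PhaVIP's exact amino-acid mapping may differ from this nucleotide version):

```python
# Illustrative sketch: classic chaos game representation of a sequence.
# Each symbol pulls the current point halfway toward its assigned corner;
# accumulating visit counts on a grid yields a 2D "image" of the sequence.
import numpy as np

CORNERS = {"A": (0, 0), "C": (0, 1), "G": (1, 1), "T": (1, 0)}

def cgr_image(seq: str, resolution: int = 64) -> np.ndarray:
    img = np.zeros((resolution, resolution))
    x, y = 0.5, 0.5                         # start at the center
    for ch in seq:
        cx, cy = CORNERS[ch]
        x, y = (x + cx) / 2, (y + cy) / 2   # jump halfway to the corner
        i = min(int(y * resolution), resolution - 1)
        j = min(int(x * resolution), resolution - 1)
        img[i, j] += 1                      # accumulate visit counts
    return img

img = cgr_image("ACGTGTCAGCTTAGGC" * 50)
print(img.shape, img.sum())                 # a 64x64 image of the sequence
```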
By encoding protein sequences into unique images using chaos game representation, we can leverage Vision Transformer to learn both local and global features from sequence \u201cimages\u201d. Our method, PhaVIP, has two main functions: classifying PVP and non-PVP sequences and annotating the types of PVP, such as capsid and tail. We tested PhaVIP on several datasets with increasing difficulty and benchmarked it" +"---\nabstract: 'The usage of Python idioms is popular among Python developers. In a formative study of 101 Python idiom performance-related questions on Stack Overflow, we find that developers often get confused about the performance impact of Python idioms and rely on anecdotal toy code or on personal project experience, which often yields contradictory performance outcomes. There has been no large-scale, systematic empirical evidence to reconcile these performance debates. In this paper, we create a large synthetic dataset with 24,126 pairs of non-idiomatic and functionally-equivalent idiomatic code for the nine unique Python idioms identified in\u00a0[@zhang2022making], and reuse a large real-project dataset of 54,879 such code pairs provided in\u00a0[@zhang2022making]. We develop a reliable performance measurement method to compare the speedup or slowdown by idiomatic code against its non-idiomatic counterpart, and analyze the performance discrepancies between the synthetic and real-project code, the relationships between code features and performance changes, and the root causes of performance changes at the bytecode level. We summarize our findings as some actionable suggestions for using Python idioms.'\nauthor:\n- \nbibliography:\n- 'debug.bib'\ntitle: 'Faster or Slower? Performance Mystery of Python Idioms Unveiled with Empirical Evidence '\n---\n\nIntroduction\n============\n\nPython supports many unique idioms" +"---\nauthor:\n- 'Jeremy S. Sanders'\nbibliography:\n- 'refs.bib'\ntitle: Clusters of galaxies\n---\n\nIntroduction\n============\n\nMany galaxies in the universe are found to be gravitationally bound into objects known as groups and clusters of galaxies. The richest clusters of galaxies consist of thousands of individual galaxies, with total masses of $\\sim 10^{15} {\\hbox{$\\rm\\thinspace M_{\\odot}$}}$. Groups of galaxies are lower mass objects containing fewer galaxies ($\\lesssim 50$), although the boundary between clusters and groups is not exact. In the hierarchical theory of the formation of structure, clusters are expected to lie at the densest regions of the cosmic web, built up by the merger of smaller structures over time (e.g. [@Allen11]).\n\nIt was noted that a number of the X-ray sources in the sky discovered by the *Uhuru* X-ray observatory were associated with groups and clusters of galaxies [@Cavaliere71]. The discovery of an emission feature around 7 keV in an X-ray spectrum of the Perseus cluster observed using the *Ariel 5* X-ray telescope [@Mitchell76] provided key evidence that the X-ray emission from clusters of galaxies was thermal in nature, originating from Fe K-shell transitions within the hot plasma of the cluster. The presence of this line in Perseus and other" +"---\nabstract: 'Let $l\\geq 3$ and $F$ be a modular form of weight $l/2-1$ on $\\mathrm{O}(l,2)$ which vanishes only on rational quadratic divisors. We prove that $F$ has only simple zeros and that $F$ is anti-invariant under every reflection fixing a quadratic divisor in the zeros of $F$. In particular, $F$ is a reflective modular form.
As a corollary, the existence of $F$ leads to $l\\leq 20$ or $l=26$, in which case $F$ equals the Borcherds form on ${\\mathop{\\mathrm {II}}\\nolimits}_{26,2}$. This answers a question posed by Borcherds in 1995.'\naddress:\n- 'Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 37673, Korea'\n- 'Lehrstuhl A für Mathematik, RWTH Aachen, 52056 Aachen, Germany'\nauthor:\n- Haowu Wang\n- Brandon Williams\nbibliography:\n- 'refs.bib'\ntitle: 'On the non-existence of singular Borcherds products'\n---\n\nIntroduction\n============\n\nIn this paper we prove some nice properties of holomorphic automorphic products of singular weight on orthogonal groups ${\\mathop{\\null\\mathrm {O}}\\nolimits}(l,2)$, and resolve an open problem posed by Borcherds in 1995.\n\nLet $M$ be an even integral lattice of signature $(l,2)$ with $l\\geq 3$. Let $(-,-)$ denote the bilinear form on $M$ and let $M'$ be the dual lattice of $M$. Either of the two connected" +"---\nabstract: 'Recent text-to-image generative models have demonstrated an unparalleled ability to generate diverse and creative imagery guided by a target text prompt. While revolutionary, current state-of-the-art diffusion models may still fail in generating images that fully convey the semantics in the given text prompt. We analyze the publicly available Stable Diffusion model and assess the existence of *catastrophic neglect*, where the model fails to generate one or more of the subjects from the input prompt. Moreover, we find that in some cases the model also fails to correctly bind attributes (e.g., colors) to their corresponding subjects. To help mitigate these failure cases, we introduce the concept of *Generative Semantic Nursing (GSN)*, where we seek to intervene in the generative process on the fly during inference time to improve the faithfulness of the generated images. Using an attention-based formulation of GSN, dubbed *Attend-and-Excite*, we guide the model to refine the cross-attention units to *attend* to all subject tokens in the text prompt and strengthen \u2014 or *excite* \u2014 their activations, encouraging the model to generate all subjects described in the text prompt. We compare our approach to alternative approaches and demonstrate that it conveys the desired concepts more faithfully across" +"---\nabstract: 'Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong task performance while updating a much smaller number of parameters compared to full model fine-tuning (FFT). However, it is non-trivial to make informed design choices on the *PEFT configurations*, such as their architecture, the number of tunable parameters, and even the layers in which the PEFT modules are inserted. Consequently, it is highly likely that the current, manually designed configurations are suboptimal in terms of their performance-efficiency trade-off. Inspired by advances in neural architecture search, we propose [[autoPEFT]{}]{} for automatic PEFT configuration selection: we first design an expressive configuration search space with multiple representative PEFT modules as building blocks. Using multi-objective Bayesian optimisation in a low-cost setup, we then discover a Pareto-optimal *set* of configurations with strong performance-cost trade-offs across different numbers of parameters that are also highly transferable across different tasks.
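One building block of such a multi-objective search is extracting the Pareto-optimal set from evaluated configurations. A minimal sketch (ours, with toy numbers; `candidates` is hypothetical) for the two objectives of parameter count (minimize) and task score (maximize):

```python
# Illustrative sketch: Pareto-front extraction over PEFT configurations.
import numpy as np

# (number of tunable parameters, validation score) per candidate config.
candidates = np.array([
    [1e5, 81.2], [3e5, 84.0], [1e6, 84.1], [5e4, 78.9],
    [2e6, 86.5], [8e5, 85.9], [4e6, 86.4], [1.5e5, 82.7],
])

def pareto_optimal(points):
    """Keep configs not dominated by any other config, i.e. no rival with
    fewer/equal params AND higher/equal score that is strictly better in one."""
    keep = []
    for i, (p_i, s_i) in enumerate(points):
        dominated = any(
            (p_j <= p_i and s_j >= s_i) and (p_j < p_i or s_j > s_i)
            for j, (p_j, s_j) in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

print("Pareto-optimal configs:", pareto_optimal(candidates))
```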
Empirically, on GLUE and SuperGLUE tasks, we show that [[autoPEFT]{}]{}-discovered configurations significantly outperform existing PEFT methods and are on par with or better than FFT, without incurring substantial training efficiency costs.'\nauthor:\n- |\n Han Zhou^1,\\*^ Xingchen Wan^2,\\*^" +"---\nabstract: 'We calculate deviations in cosmological observables as a function of parameters in a class of connection\u2013based models of quantum gravity. In this theory non-trivial modifications to the background cosmology can occur due to a distortion of the wave function of the Universe at the transition from matter to dark energy domination (which acts as a \u201creflection\u201d in connection space). We are able to exclude some regions of parameter space and show with projected constraints that future experiments like DESI will be able to further constrain these models. An interesting feature of this theory is that there exists a region of parameter space that could naturally alleviate the $S_8$ tension.'\nauthor:\n- 'Michael W.\u00a0Toomey'\n- Savvas Koushiappas\n- Bruno Alexandre\n- João Magueijo\nbibliography:\n- 'bibo.bib'\ntitle: 'Quantum Gravity Signatures in the Late-Universe'\n---\n\nIntroduction\n============\n\nOne of the most striking features of $\\Lambda$CDM cosmology is the recent transition from decelerated to accelerated expansion. While the dynamics of this transition through the lens of the concordance model is well understood and has been studied *ad nauseam*, it has recently been pointed out [@Alexandre:2022ijm] that from the perspective of some theories of quantum gravity, such a transition can result" +"---\nauthor:\n- 'Adam Davis,'\n- 'Tony Menzo,'\n- 'Ahmed Youssef,'\n- 'Jure Zupan,'\nbibliography:\n- 'CPV\\_ML\\_biblio.bib'\ntitle: 'Earth mover\u2019s distance as a measure of CP violation'\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nWe introduce a new unbinned two-sample test statistic sensitive to CP violation utilizing the optimal transport plan associated with the Wasserstein (earth mover\u2019s) distance. The efficacy of the test statistic is shown via two examples of CP asymmetric distributions with varying sample sizes: the Dalitz distributions of $B^0 \\rightarrow K^+\\pi^-\\pi^0$ and of $D^0 \\rightarrow \\pi^+\\pi^-\\pi^0$ decays. The windowed version of the Wasserstein distance test statistic is shown to have comparable sensitivity to CP violation as the commonly used energy test statistic, but also retains information about the localized distributions of CP asymmetry over the Dalitz plot. For large-statistics datasets we introduce two modified Wasserstein distance based test statistics \u2013 the binned and the sliced Wasserstein distance statistics, which show comparable sensitivity to CP violation, but improved computing time and memory scalings. Finally, general extensions and applications of the introduced statistics are discussed.\n\nIntroduction\n============\n\nThe Wasserstein distance or earth mover\u2019s distance (EMD) is a measure" +"---\nauthor:\n- 'Jenny Wagner,[!!]{}'\nbibliography:\n- 'charged\\_bh.bib'\ntitle: 'Observables for moving, stupendously charged and massive primordial black holes'\n---\n\nIntroduction {#sec:introduction}\n============\n\nPrimordial black holes (PBHs) have long been established as dark matter candidates [@bib:Hawking1971; @bib:Zeldovic1966].
Their signatures are being searched for on mass scales ranging from sub-solar masses to primordial supermassive black holes (PSMBHs) with up to $10^{11} M_\\odot$, see, for instance, [@bib:Cappelluti2022; @bib:Carr2022] or [@bib:Villanueva2021] for recent overviews. As discussed in [@bib:Carr_stup], so-called Stupendously LArge Black holes (SLABs) beyond $10^{11} M_\\odot$ could also evolve and exist. Even though no clear evidence for the existence of PBHs has been found so far, PBHs are good candidates for SLABs.\n\nConventional structure growth models, which also assume black hole formation only starts at the cosmic time when the first stars were created, were recently challenged by several James Webb Space Telescope (JWST) observations of fast galaxy evolution at high redshifts, see, for instance, [@bib:Boylan2023; @bib:Kocevski2023; @bib:Labbe2023; @bib:Matthee2023; @bib:Pacucci2023]. Thus, possible black hole formation scenarios may be extended or revised alongside explanations for these early, massive galaxies. As far as observational signatures are concerned, [@bib:Carr2021] and [@bib:Carr_stup] elaborately showed that SLABs up to $10^{16} M_\\odot$ would not leave significant anisotropic imprints" +"---\nabstract: |\n It is known that multiple partonic scatterings in high-energy proton-proton ($pp$) collisions must happen in parallel. However, a rigorous parallel scattering formalism, taking energy sharing properly into account, fails to reproduce factorization, which on the other hand is the basis of almost all $pp$ event generators. In addition, binary scaling in nuclear scatterings is badly violated. These problems are usually \u201csolved\u201d by simply not considering strictly parallel scatterings, which is not a solution. I will report on new ideas (leading to EPOS4), which allow one to recover factorization perfectly, and also binary scaling in $AA$ collisions, in a rigorous unbiased parallel scattering formalism. In this new approach, dynamical saturation scales play a crucial role, and this seems to be the missing piece needed to reconcile parallel scattering with factorization. From a practical point of view, one can compute within the EPOS4 framework parton distribution functions (EPOS PDFs) and use them to compute inclusive $pp$ cross sections. So, for the first time, one may compute inclusive jet production (for heavy or light flavors) at very high transverse momentum ($p_{t}$) and at the same time in the same formalism study flow effects at low $p_{t}$ in high-multiplicity $pp$ events, making EPOS4" +"---\nabstract: |\n This paper presents a conditional convergence result of solutions to the Allen\u2013Cahn equation with arbitrary potentials to a De Giorgi type $ \\operatorname{BV}$-solution to multiphase mean curvature flow.
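For reference, a commonly used scaling of the (possibly vector-valued) Allen\u2013Cahn equation with potential $W$ \u2013 our notation, not necessarily the paper's precise setup \u2013 reads $$\\partial_t u_\\varepsilon = \\Delta u_\\varepsilon - \\frac{1}{\\varepsilon^2}\\,\\nabla W(u_\\varepsilon),$$ where the diffuse interfaces have width of order $\\varepsilon$ and are expected to converge, as $\\varepsilon \\to 0$, to interfaces evolving by mean curvature.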
Moreover we show that De Giorgi type $ \\operatorname{BV}$-solutions are De Giorgi type varifold solutions, and thus our solution is unique in a weak-strong sense.\\\n **Keywords:** Gradient flows, Mean curvature flow, Allen\u2013Cahn equation\\\n **Mathematical Subject Classification:** 35A15, 35K57, 35K93, 53E10, 74N20\nauthor:\n- 'Pascal Steinke[^1]'\nbibliography:\n- 'references.bib'\ntitle: 'Convergence of Allen\u2013Cahn equations to De Giorgi\u2019s multiphase mean curvature flow'\n---\n\nIntroduction\n============\n\nHistory and main results\n------------------------\n\nMultiphase mean curvature flow is an important geometric evolution equation which has been studied for a long time, being important not only in mathematics but also in the applied sciences. Originally it was proposed to study the evolution of grain boundaries in annealed recrystallized metal, as described by Mullins in [@mullins_two_dimensional_motion_of_idealized_grain_boundaries], who cites Beck in [@beck_metal_interfaces] as already having observed such behaviour in 1952.\n\nOver the years numerous different solution concepts for multiphase mean curvature flow have been proposed. Classically we have smooth solutions, where we require the evolution of the interfaces to be smooth, for example described by Huisken in" +"---\nabstract: 'Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite $\\lambda$-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite $\\lambda$-terms as a projective limit of finite sets of usual $\\lambda$-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite $\\lambda$-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite $\\lambda$-term is compositional by constructing a cartesian closed category of profinite $\\lambda$-terms, and we establish that the embedding from $\\lambda$-terms modulo $\\beta\\eta$-conversion to profinite $\\lambda$-terms is faithful using Statman\u2019s finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into $\\lambda$-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite $\\lambda$-terms of the corresponding Church type.'\naddress:\n- Université Paris Cité\n- 'CNRS, Université Paris Cité, Inria'\n- 'Université" +"---\nauthor:\n- \ntitle: 'Wright\u2019s Strict Finitistic Logic in the Classical Metatheory: The Propositional Case'\n---\n\nIntroduction\n============\n\nAims {#section: Aims}\n----\n\nThe present paper provides and explores the propositional part of \u2018strict finitistic logic\u2019 according to Wright, as obtained via classical counterparts of his strict finitistic models of arithmetic, under an additional assumption we call the \u2018atomic prevalence condition\u2019. \u2018Strict finitism\u2019 is the view of mathematics according to which an object, or a number in particular, is admitted iff it is constructible in practice, and a statement holds iff it is verifiable in practice.
It is constructivist, in that it accepts a number and a statement on grounds of our cognitive capabilities; and more restrictive than intuitionism, since it uses the notion of \u2018in practice\u2019 in place of intuitionism\u2019s \u2018in principle\u2019. Strict finitism is finitistic, because as a consequence, it rejects the idea that there are infinitely many natural numbers. Strict finitistic logic is meant to be the abstract system of logical reasoning concerning actually constructible objects, based on actual verifiability.\n\nAmong the sources in the literature, Wright\u2019s 1982 paper [@Wright1982] is, to us, the most philosophically inspiring and appears to contain the most formal-logical content. While [@Wright1982] is" +"---\nabstract: 'Quorum sensing (QS) mimickers can be used as an effective tool to disrupt biofilms which consist of communicating bacteria and extracellular polymeric substances (EPS). In this paper, a stochastic biofilm disruption model based on the usage of QS mimickers is proposed. A chemical reaction network (CRN) involving four different states is employed to model the biological processes during the biofilm formation and its disruption via QS mimickers. In addition, a state-based stochastic simulation algorithm is proposed to simulate this CRN. The proposed model is validated by the *in vitro* experimental results of *Pseudomonas aeruginosa* biofilm and its disruption by rosmarinic acid as the QS mimicker. Our results show that there is an uncertainty in state transitions due to the effect of the randomness in the CRN. In addition to the QS activation threshold, the presented work demonstrates that there are two more underlying thresholds for the disruption of EPS and bacteria, which provides realistic modeling of biofilm disruption with QS mimickers.'\nauthor:\n- 'Fatih Gulec,\u00a0 Andrew W. Eckford,\u00a0 [^1] [^2][^3]'\nbibliography:\n- 'ref\\_fg\\_biofilm\\_disruption.bib'\ntitle: A Stochastic Biofilm Disruption Model based on Quorum Sensing Mimickers\n---\n\nMolecular communication, biological communication, biofilm disruption, quorum sensing mimickers.\n\nIntroduction\n============\n\ncan" +"---\nabstract: 'We study the classical scheduling problem on parallel machines where the precedence graph has bounded depth $h$. Our goal is to minimize the maximum completion time. We focus on developing approximation algorithms that use only sublinear space or sublinear time. We develop the first one-pass streaming approximation schemes using sublinear space when all jobs\u2019 processing times differ by no more than a constant factor $c$ and the number of machines $m$ is at most $\\tfrac {2n \\epsilon}{3 h c }$. This is so far the best approximation we can have in terms of $m$, since no polynomial time approximation better than $\\tfrac{4}{3}$ exists when $m = \\tfrac{n}{3}$ unless P=NP. The algorithms are then extended to the more general problem where the largest $\\alpha n$ jobs differ by no more than a factor of $c$. We also develop the first sublinear time algorithms for both problems. For the more general problem, when $ m \\le \\tfrac { \\alpha n \\epsilon}{20 c^2 \\cdot h } $, our algorithm is a randomized $(1+\\epsilon)$-approximation scheme that runs in sublinear time.
This work not only provides an algorithmic solution to the studied problem in a big data environment, but also gives a methodological framework for designing" +"---\nauthor:\n- 'Cheng Zhao (\u8d75\u6210)'\nbibliography:\n- 'FCFC.bib'\ndate: 'Received September 15, 1996; accepted March 16, 1997'\nsubtitle: 'A high-performance pair counting toolkit'\ntitle: Fast Correlation Function Calculator\n---\n\n[UTF8]{}[gbsn]{}\n\n[A novel high-performance exact pair counting toolkit called Fast Correlation Function Calculator ([`FCFC`]{}) is presented, which is publicly available at .]{} [With the rapid growth of modern cosmological datasets, the evaluation of correlation functions with observational and simulation catalogues has become a challenge. High-efficiency pair counting codes are thus in great demand.]{} [We introduce different data structures and algorithms that can be used for pair counting problems, and perform comprehensive benchmarks to identify the most efficient ones for real-world cosmological applications. We then describe the three levels of parallelism used by [`FCFC`]{} \u2013 including SIMD, OpenMP, and MPI \u2013 and run extensive tests to investigate the scalability. Finally, we compare the efficiency of [`FCFC`]{} against alternative pair counting codes.]{} [The data structures and histogram update algorithms implemented in [`FCFC`]{} are shown to outperform alternative methods. [`FCFC`]{} does not benefit much from SIMD as the bottleneck of our histogram update algorithm is mostly cache latency. Nevertheless, the efficiency of [`FCFC`]{} scales well with the number of OpenMP threads and MPI" +"---\nauthor:\n- 'Cat P. Le, Luke Dai, Michael Johnston, Yang Liu, Marilyn Walker, Reza Ghanadan'\nbibliography:\n- 'refs.bib'\n- 'athena.bib'\ntitle: 'Improving Open-Domain Dialogue Evaluation with a Causal Inference Model'\n---\n\nopen-domain dialogue, user ratings, dialogue evaluation, causal inference, user satisfaction\n\nIntroduction {#sec:intro}\n============\n\nEvaluation has always been a complex challenge for interactive dialogue systems. For task-oriented dialogues, frameworks such as Paradise\u00a0[@walker-etal-1997-paradise] model the relationship between user satisfaction, task completion, and cost factors such as dialogue length, word error rate, and dialogue behaviors. However, open-domain dialogue systems such as those built for the Alexa Prize SocialBot Grand Challenge\u00a0[@ram2018conversational; @gabriel2020further], where there is no clearly defined task, require new metrics and methods for evaluation that better reflect their affordances [@walker2021modeling; @kim2020speech; @ghazarian-etal-2019-better; @ghazarian2022wrong; @higashinaka2019improving; @sinha-etal-2020-learning].\n\nIn Alexa Prize SocialBot, user conversation ratings are elicited primarily for comparative evaluation of systems competing in the Grand Challenge. Numerous other use cases for dialogue ratings include selecting training data and running A/B tests to evaluate new capabilities, features, and models. Manually collected user ratings are limited in that only a fraction of users of \u201cAlexa Let\u2019s Chat\u201d leave ratings, and these ratings can be highly subjective. An alternative is the post-hoc" +"---\nabstract: 'Dynamic treatment rules or policies are a sequence of decision functions over multiple stages that are tailored to individual features.
One important class of treatment policies for practice, namely multi-stage stationary treatment policies, prescribes treatment assignment probabilities using the same decision function over stages, where the decision is based on the same set of features consisting of both baseline variables (e.g., demographics) and time-evolving variables (e.g., routinely collected disease biomarkers). Although there is extensive literature on constructing valid inference for the value function associated with dynamic treatment policies, little work has been done for the policies themselves, especially in the presence of high dimensional feature variables. We aim to fill this gap in this work. Specifically, we first estimate the multistage stationary treatment policy based on an augmented inverse probability weighted estimator for the value function to increase the asymptotic efficiency, and further apply a penalty to select important feature variables. We then construct one-step improvement of the policy parameter estimators. Theoretically, we show that the improved estimators are asymptotically normal, even if nuisance parameters are estimated at a slow convergence rate and the dimension of the feature variables increases with the sample size. Our" +"---\nabstract: 'In compact and dense star-forming clouds a global star cluster wind could be suppressed. In this case the stellar feedback is unable to expel the leftover gas from the cluster. Young massive stars remain embedded in a dense residual gas and stir it moving in the gravitational well of the system. Here we present a self-consistent model for the molecular gas distribution in such young, enshrouded stellar clusters. It is assumed that the cloud collapse terminates and the star formation ceases when a balance between the turbulent pressure and gravity and between the turbulent energy dissipation and regeneration rates is established. These conditions result in an equation that determines the residual gas density distribution that, in turn, allows one to determine the other characteristics of the leftover gas and the star formation efficiency. It is shown that model predictions are in good agreement with several observationally determined properties of cloud D1 in nearby dwarf spheroidal galaxy NGC 5253 and its embedded cluster.'\nauthor:\n- Sergiy Silich\n- Jean Turner\n- Jonathan Mackey\n- 'Sergio Martínez-González'\nbibliography:\n- 'TUR.bib'\ndate: 'Accepted . Received ; in original form'\ntitle: Molecular gas properties in young stellar clusters with a suppressed star" +"---\nabstract: 'The spectrum of laser-plasma-generated X-rays is very important as it can characterize electron dynamics and also be useful for applications, and nowadays with the forthcoming high-repetition-rate laser-plasma experiments, there is a rising demand for online diagnostics of the X-ray spectrum. In this paper, scintillators and silicon PIN diodes are used to build a wideband online filter stack spectrometer. The genetic algorithm is used to optimize the arrangements of the X-ray sensors and filters by minimizing the condition number of the response matrix, so that, according to numerical experiments, the unfolding error can be significantly decreased. The detector responses are quantitatively calibrated by irradiating the scintillator and PIN diode using different nuclides and comparing the measured $\\gamma$-ray peaks. Finally, a 15-channel spectrometer prototype has been implemented.
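A minimal mutation-only evolutionary sketch in the spirit of the genetic algorithm just described (ours, not the authors' code; the response library below is a random stand-in) that selects channels to minimize the condition number of the response matrix:

```python
# Illustrative sketch: evolve a selection of filter/sensor channels whose
# stacked response rows give a well-conditioned (easily unfolded) matrix.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_bins, n_candidates = 15, 30, 40
library = rng.random((n_candidates, n_bins))   # toy response library

def fitness(selection):
    """Lower condition number -> better-posed spectrum unfolding."""
    return np.linalg.cond(library[selection])

def evolve(pop_size=50, generations=200, mutation_rate=0.2):
    pop = [rng.choice(n_candidates, n_channels, replace=False)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent.copy()
            if rng.random() < mutation_rate:   # mutate: swap one channel
                i = rng.integers(n_channels)
                choices = np.setdiff1d(np.arange(n_candidates), child)
                child[i] = rng.choice(choices)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

print("best condition number:", fitness(evolve()))
```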
The X-ray detector, front-end electronics, and back-end electronics are integrated into the prototype, and the prototype can determine the spectrum at 1 kHz repetition rates.'\naddress:\n- 'Science and Technology on Plasma Physics Laboratory, Laser Fusion Research Center, CAEP, Mianyang, 621900, Sichuan, China'\n- 'Department of Engineering Physics, Tsinghua University, Beijing, 100084, China'\n- 'Key Laboratory of Particle and Radiation Imaging (Tsinghua University), Ministry of Education, Beijing, 100084, China'\n- 'School of Information" +"---\nbibliography:\n- 'lb.bib'\n---\n\n[ **Finite temperature spin diffusion in the Hubbard model in the strong coupling limit** ]{}\n\nOleksandr Gamayun^1,2\\*^, Arthur Hutsalyuk^3^, Balázs Pozsgay^3^, Mikhail B. Zvonarev^4^\n\n[**1**]{} London Institute for Mathematical Sciences, Royal Institution, 21 Albemarle St, London W1S 4BS, UK,\n\n[**2**]{} Faculty of Physics, University of Warsaw, ul. Pasteura 5, 02-093 Warsaw, Poland,\n\n[**3**]{} MTA-ELTE \u201cMomentum\u201d Integrable Quantum Dynamics Research Group, Department of Theoretical Physics, Eötvös Loránd University, Pázmány Péter stny. 1A, 1117 Budapest, Hungary\\\n[**4**]{} Université Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France\\\n\\* og@lims.ac.uk\n\nAbstract {#abstract .unnumbered}\n========\n\n[**We investigate finite temperature spin transport in one spatial dimension by considering the spin-spin correlation function of the Hubbard model in the limiting case of infinitely strong repulsion. We find that in the absence of a magnetic field the transport is diffusive, and derive the spin diffusion constant. Our approach is based on asymptotic analysis of a Fredholm determinant representation. The obtained results are in agreement with the Generalized Hydrodynamics approach.**]{}\n\nIntroduction\n============\n\nQuantum transport in integrable systems attracts ever-increasing attention from the physics community\u00a0[@transport-review]. Distinctive features of these systems \u2013 a completely elastic and factorized (two-body reducible) scattering, and the presence of an infinite" +"---\nabstract: 'In online exploration systems where users with fixed preferences repeatedly arrive, it has recently been shown that $O(1)$, i.e., bounded regret, can be achieved when the system is modeled as a linear contextual bandit. This result may be of interest for recommender systems, where the popularity of their items is often short-lived, as the exploration itself may be completed quickly before potential long-run non-stationarities come into play. However, in practice, exact knowledge of the linear model is difficult to justify. Furthermore, the potential existence of unobservable covariates, uneven user arrival rates, interpretation of the necessary rank condition, and users opting out of private data tracking all need to be addressed for practical recommender system applications. In this work, we conduct a theoretical study to address all these issues while still achieving bounded regret. Aside from proof techniques, the key differentiating assumption we make here is the presence of effective Synthetic Control Methods (SCM), which are shown to be a practical relaxation of the exact linear model knowledge assumption.
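To make the setting concrete, here is a toy linear-reward simulation in this spirit (our sketch, not the paper's experiment): a greedy ridge-regression learner facing users with a fixed preference vector, whose cumulative regret flattens once the estimate becomes accurate.

```python
# Illustrative sketch: bounded-regret behavior of a greedy linear learner.
import numpy as np

rng = np.random.default_rng(3)
d, n_arms, T = 5, 10, 2000
theta = rng.normal(size=d)                 # unknown fixed preference vector
arms = rng.normal(size=(n_arms, d))        # fixed item features
best = max(arms @ theta)

A, b, total, regret = np.eye(d), np.zeros(d), 0.0, []
for t in range(T):
    theta_hat = np.linalg.solve(A, b)      # ridge estimate of theta
    a = arms[t % n_arms] if t < n_arms else arms[np.argmax(arms @ theta_hat)]
    reward = a @ theta + rng.normal(scale=0.1)
    A += np.outer(a, a); b += reward * a   # update sufficient statistics
    total += best - a @ theta
    regret.append(total)

print("cumulative regret at T/2 vs T:", regret[T // 2], regret[-1])
```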
We verify our theoretical bounded regret result using a minimal simulation experiment.'\nauthor:\n- \n- \nbibliography:\n- 'bib.bib'\ntitle: 'Bounded (O(1)) Regret Recommendation Learning via Synthetic Controls Oracle [^1] '\n---" +"---\nabstract: 'Optimizing the performance of thermal machines is an essential task of thermodynamics. Here we consider the optimization of information engines that convert information about the state of a system into work. We study a generalized finite-time Carnot cycle for a quantum information engine and optimize its power output in the regime of low dissipation. We derive a general formula for its efficiency at maximum power valid for arbitrary working media. We further investigate the optimal performance of a qubit information engine subjected to weak energy measurements.'\nauthor:\n- Paul Fadler\n- Alexander Friedenberger\n- Eric Lutz\ntitle: Efficiency at maximum power of a Carnot quantum information engine\n---\n\nHeat engines convert thermal energy into mechanical work by running cyclically between two heat baths at different temperatures. They have been widely used to generate motion, from ancient steam engines to modern internal combustion motors [@cen01]. Information engines, on the other hand, extract energy from a single heat bath by processing information, for instance, via cyclic measurement and feedback operations [@cao09; @sag10; @abr11; @hor11; @bau12; @sag12; @esp12; @man13; @hor13; @um15; @par16; @yam16; @hor19]. They thus exploit information gained about the state of a system to produce useful work [@sei12; @sag12a]. Such machines may be" +"---\nauthor:\n- 'Semyon Yurchenko$^{1}$, Mikhail Zhabitsky$^{2}$[^1][^2]'\ndate: |\n $^1$Saint Petersburg State University, Laboratory of ultra-high energy physics, St.\u00a0Petersburg, Russia\\\n $^2$Joint Institute for Nuclear Research, Dubna, Russia\\\ntitle: 'Genetic Algorithm for determination of the event collision time and particle identification by time-of-flight at NICA SPD'\n---\n\n**Keywords**: Genetic Algorithm; Time-Of-Flight; Particle identification\n\nIntroduction\n============\n\nThe Spin Physics Detector (SPD) is a future experiment that will be placed at one of the two interaction points of the NICA collider in the Joint Institute for Nuclear Research. By studying collisions of polarized proton and deuteron beams, the SPD collaboration will perform a comprehensive study of the unpolarized and polarized gluon content of nucleons and other spin related phenomena\u00a0[@CDRSPD]. With polarized proton-proton collision energies $\\sqrt{s}$ up to $27~\\text{GeV}$, SPD will cover a kinematic range between the low-energy measurements at ANKE-COSY\u00a0[@Dymov:2016jgy] and SATURNE and the high-energy measurements at RHIC\u00a0[@STAR:2021mfd] and LHC\u00a0[@Hadjidakis:2018ifr].\n\nThe SPD experimental setup is planned as a general-purpose $4\\pi$\u00a0detector with advanced tracking and particle identification capabilities. The particle identification will be performed by means of $dE/dx$, Time-Of-Flight (TOF), Electromagnetic calorimetry and Muon-filtering techniques. The experiment will use a system of Multigap Resistive Plate Chambers (MRPC)\u00a0[@ZeballosMRPC;" +"---\nabstract: 'This article presents a fast direct solver, termed Algebraic Inverse Fast Multipole Method (from now on abbreviated as AIFMM), for linear systems arising out of $N$-body problems.
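Before the method's ingredients are listed, a quick numerical illustration (ours, not from the article) of the low-rank structure that fast direct solvers of this kind exploit: the off-diagonal interaction block between two well-separated point clusters has rapidly decaying singular values.

```python
# Illustrative sketch: numerical low rank of a well-separated kernel block.
# Kernel choice here is the toy interaction K[i, j] = 1 / |x_i - y_j|.
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((200, 2))            # source cluster near the origin
Y = rng.random((200, 2)) + 5.0      # target cluster shifted far away

diff = X[:, None, :] - Y[None, :, :]
K = 1.0 / np.linalg.norm(diff, axis=2)

s = np.linalg.svd(K, compute_uv=False)
rank = np.sum(s / s[0] > 1e-10)     # numerical rank at 1e-10 tolerance
print(f"block size {K.shape}, numerical rank ~ {rank}")  # rank << 200
```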
AIFMM relies on the following three main ideas: (i) Certain sub-blocks in the matrix corresponding to $N$-body problems can be efficiently represented as low-rank matrices; (ii) The low-rank sub-blocks in the above matrix are leveraged to construct an extended sparse linear system; (iii) While solving the extended sparse linear system, certain fill-ins that arise in the elimination phase are represented as low-rank matrices and are \u201credirected\u201d through other variables, maintaining zero fill-in sparsity. The main highlights of this article are the following: (i) Our method is completely algebraic (as opposed to the existing Inverse Fast Multipole Method\u00a0[@ambikasaran2014inverse; @doi:10.1137/15M1034477; @TAKAHASHI2017406], from now on abbreviated as IFMM). We rely on our new Nested Cross Approximation\u00a0[@gujjula2022nca] (from now on abbreviated as NNCA) to represent the matrix arising out of $N$-body problems. (ii) A significant contribution is that the algorithm presented in this article is more efficient than the existing IFMMs. In the existing IFMMs, the fill-ins are compressed and redirected as and when they are created. Whereas in this article, we update" +"---\nabstract: 'We say that $\\Gamma$, the boundary of a bounded Lipschitz domain, is locally dilation invariant if, at each $x\\in \\Gamma$, $\\Gamma$ is either locally $C^1$ or locally coincides (in some coordinate system centred at $x$) with a Lipschitz graph $\\Gamma_x$ such that $\\Gamma_x=\\alpha_x\\Gamma_x$, for some $\\alpha_x\\in (0,1)$. In this paper we study, for such $\\Gamma$, the essential spectrum of $D_\\Gamma$, the double-layer (or Neumann-Poincaré) operator of potential theory, on $L^2(\\Gamma)$. We show, via localisation and Floquet-Bloch-type arguments, that this essential spectrum is the union of the spectra of related continuous families of operators $K_t$, for $t\\in [-\\pi,\\pi]$; moreover, each $K_t$ is compact if $\\Gamma$ is $C^1$ except at finitely many points. For the 2D case where, additionally, $\\Gamma$ is piecewise analytic, we construct convergent sequences of approximations to the essential spectrum of $D_\\Gamma$; each approximation is the union of the eigenvalues of finitely many finite matrices arising from Nyström-method approximations to the operators $K_t$. Through error estimates with explicit constants, we also construct functionals that determine whether any particular locally-dilation-invariant piecewise-analytic $\\Gamma$ satisfies the well-known spectral radius conjecture, that the essential spectral radius of $D_\\Gamma$ on $L^2(\\Gamma)$ is $<1/2$ for all Lipschitz $\\Gamma$. We illustrate this theory with" +"---\nabstract: 'Structural pruning enables model acceleration by removing structurally-grouped parameters from neural networks. However, the parameter-grouping patterns vary widely across different models, making architecture-specific pruners, which rely on manually-designed grouping schemes, non-generalizable to new architectures. In this work, we study a highly-challenging yet barely-explored task, any structural pruning, to tackle general structural pruning of arbitrary architecture like CNNs, RNNs, GNNs and Transformers. The most prominent obstacle towards this goal lies in the structural coupling, which not only forces different layers to be pruned simultaneously, but also expects all removed parameters to be consistently unimportant, thereby avoiding structural issues and significant performance degradation after pruning.
To address this problem, we propose a general and [fully automatic]{} method, *Dependency Graph* (DepGraph), to explicitly model the dependency between layers and comprehensively group coupled parameters for pruning. In this work, we extensively evaluate our method on several architectures and tasks, including ResNe(X)t, DenseNet, MobileNet and Vision transformer for images, GAT for graph, DGCNN for 3D point cloud, alongside LSTM for language, and demonstrate that, even with a simple norm-based criterion, the proposed method consistently yields gratifying performances.'\nauthor:\n- |\n **Gongfan Fang$^1$ Xinyin Ma$^1$ Mingli Song$^2$ Michael Bi Mi$^3$ Xinchao Wang$^1$[^1]\\\n [National University" +"---\nabstract: 'Loss function learning is a new meta-learning paradigm that aims to automate the essential task of designing a loss function for a machine learning model. Existing techniques for loss function learning have shown promising results, often improving a model\u2019s training dynamics and final inference performance. However, a significant limitation of these techniques is that the loss functions are meta-learned in an offline fashion, where the meta-objective only considers the very first few steps of training, which is a significantly shorter time horizon than the one typically used for training deep neural networks. This causes significant bias towards loss functions that perform well at the very start of training but perform poorly at the end of training. To address this issue we propose a new loss function learning technique for adaptively updating the loss function online after each update to the base model parameters. The experimental results show that our proposed method consistently outperforms the cross-entropy loss and offline loss function learning techniques on a diverse range of neural network architectures and datasets.'\nauthor:\n- Christian Raymond\n- Qi Chen\n- |\n Bing Xue, Mengjie Zhang\\\n Victoria University of Wellington, Wellington, New Zealand\\\n {Christian.Raymond, Qi.Chen, Bing.Xue, Mengjie.Zhang}@ecs.vuw.ac.nz\nbibliography:\n- 'references.bib'" +"---\nabstract: 'We studied integration contour deformations in the chiral random matrix theory of Stephanov\u00a0[@Stephanov:1996ki] with the goal of alleviating the finite-density sign problem. We considered simple ansätze for the deformed integration contours, and optimized their parameters. We find that optimization of a single parameter manages to considerably reduce the severity of the sign problem. We show numerical evidence that the improvement achieved is exponential in the degrees of freedom of the system, i.e., the size of the random matrix. We also compare the optimization method with contour deformations coming from the holomorphic flow equations.'\nauthor:\n- Matteo Giordano\n- Attila Pásztor\n- Dávid Pesznyák\n- Zoltán Tulipánt\ntitle: Fighting the sign problem in a chiral random matrix model with contour deformations\n---\n\nIntroduction\n============\n\nEuclidean quantum field theories at non-zero particle density (or chemical potential) generally suffer from a complex action problem: the weights in the path integral representation are complex, and thus cannot be interpreted as a joint probability density function on the space of field configurations (up to a proportionality factor). This prevents the use of importance sampling methods for the direct simulation of these theories.
In QCD, this complex action problem severely hampers first-principles" +"---\nabstract: |\n Motivated by the challenge of nonstationarity in sequential decision making, we study Online Convex Optimization (OCO) under the coupling of two problem structures: the domain is unbounded, and the comparator sequence $u_1,\\ldots,u_T$ is arbitrarily time-varying. As no algorithm can guarantee low regret simultaneously against all comparator sequences, handling this setting requires moving from minimax optimality to comparator adaptivity. That is, sensible regret bounds should depend on certain complexity measures of the comparator relative to one\u2019s prior knowledge.\n\n This paper achieves a new type of these adaptive regret bounds via a sparse coding framework. The complexity of the comparator is measured by its energy and its sparsity on a user-specified dictionary, which offers considerable versatility. Equipped with a wavelet dictionary for example, our framework improves the state-of-the-art bound [@jacobsen2022parameter] by adapting to both ($i$) the magnitude of the comparator average ${\\left\\|{\\bar u}\\right\\|}={\\|{\\sum_{t=1}^Tu_t/T}\\|}$, rather than the maximum $\\max_t{\\left\\|{u_t}\\right\\|}$; and ($ii$) the comparator variability $\\sum_{t=1}^T{\\left\\|{u_t-\\bar u}\\right\\|}$, rather than the uncentered sum $\\sum_{t=1}^T{\\left\\|{u_t}\\right\\|}$. Furthermore, our analysis is simpler due to decoupling function approximation from regret minimization.\nauthor:\n- |\n Zhiyu Zhang[^1]\\\n Harvard University\\\n `zhiyuz@seas.harvard.edu`\\\n- |\n Ashok Cutkosky\\\n Boston University\\\n `ashok@cutkosky.com`\\\n- |\n Ioannis Ch. Paschalidis\\\n Boston University\\\n `yannisp@bu.edu`\\\nbibliography:\n-" +"---\nabstract: |\n I study mechanism design with blockchain-based tokens, that is, tokens that can be used within a mechanism but can also be saved and traded outside of the mechanism. I do so by considering a repeated, private-value auction, in which the auctioneer accepts payments in a blockchain-based token he creates and initially owns. I show that the present-discounted value of the expected revenues is the same as in a standard auction with dollars, but these revenues accrue earlier and are less variable. The optimal monetary policy involves the burning of tokens used in the auction, a common feature of many blockchain-based auctions. I then introduce non-contractible effort and the possibility of misappropriating revenues. I compare the auction with tokens to an auction with dollars in which the auctioneer can also issue financial securities. An auction with tokens is preferred when there are sufficiently severe contracting frictions, while the opposite is true when contracting frictions are low.\\\n **JEL classification**: D44, E42, L86\\\n **Keywords**: Mechanism design, Auctions, Blockchain, Cryptocurrencies, Tokens, Private Money\nauthor:\n- 'Andrea Canidio [^1]'\nbibliography:\n- 'bibliography.bib'\ntitle: ' Auctions with Tokens: Monetary Policy as a Mechanism Design Choice.[^2] '\n---\n\nFirst version: September 30, 2021. This" +"---\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'sn-bibliography.bib'\ntitle: A Human Word Association based model for topic detection in social networks\n---\n\nWith the widespread use of social networks, detecting the topics discussed in these networks has become a significant challenge. The current works are mainly based on frequent pattern mining or semantic relations, and the language structure is not considered. 
Language-structure methods aim to discover the relationships between words and the way humans understand them. Therefore, this paper uses the concept of imitating the human mental ability of word association to propose a topic detection framework in social networks. This framework is based on the Human Word Association method. A special extraction algorithm has also been designed for this purpose. The performance of this method is evaluated on the FA-CUP dataset, a benchmark dataset in the field of topic detection. The results show that the proposed method improves on other methods in terms of the Topic-recall and the keyword F1 measure. Also, most of the previous works in the field of topic detection are limited to the English language, and the Persian language, especially microblogs written in this language," +"---\nabstract: 'Existing algorithms for ensuring fairness in AI use a single-shot training strategy, where an AI model is trained on an annotated training dataset with sensitive attributes and then fielded for utilization. This training strategy is effective in problems with stationary distributions, where both training and testing data are drawn from the same distribution. However, it is vulnerable with respect to distributional shifts in the input space that may occur after the initial training phase. As a result, the time-dependent nature of data can introduce biases into the model predictions. Model retraining from scratch using a new annotated dataset is a naive solution that is expensive and time-consuming. We develop an algorithm to adapt a fair model to remain fair under domain shift using solely new unannotated data points. We recast this learning setting as an unsupervised domain adaptation problem. Our algorithm is based on updating the model such that the internal representation of data remains unbiased despite distributional shifts in the input space. We provide extensive empirical validation on three widely employed fairness datasets to demonstrate the effectiveness of our algorithm.'\nauthor:\n- 'Serban Stan and Mohammad Rostami University of Southern California {rostamim,sstan}@usc.edu'" +"---\nabstract: 'We consider optimal sensor placement for a family of linear Bayesian inverse problems characterized by a deterministic hyper-parameter. The hyper-parameter describes distinct configurations in which measurements can be taken of the observed physical system. To optimally reduce the uncertainty in the system\u2019s model with a single set of sensors, the initial sensor placement needs to account for the non-linear state changes of all admissible configurations. We address this requirement through an observability coefficient which links the posteriors\u2019 uncertainties directly to the choice of sensors. We propose a greedy sensor selection algorithm to iteratively improve the observability coefficient for all configurations through orthogonal matching pursuit. The algorithm allows explicitly correlated noise models even for large sets of candidate sensors, and remains computationally efficient for high-dimensional forward models through model order reduction.
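The greedy, orthogonal-matching-pursuit flavour of sensor selection described in the preceding abstract can be sketched in a few lines. This is a generic stand-in (row norms of a deflated forward operator as the selection score), not the authors' observability-coefficient algorithm:

```python
import numpy as np

def greedy_sensor_selection(G, k):
    """Pick k rows of the candidate forward operator G (one row per candidate
    sensor) by a greedy OMP-style rule: repeatedly take the row with the
    largest residual norm, then deflate all rows by the chosen direction."""
    residual = G.astype(float).copy()
    chosen = []
    for _ in range(k):
        i = int(np.argmax(np.linalg.norm(residual, axis=1)))
        chosen.append(i)
        q = residual[i] / np.linalg.norm(residual[i])
        residual -= np.outer(residual @ q, q)   # Gram-Schmidt deflation
    return chosen

G = np.random.default_rng(1).normal(size=(50, 8))
print(greedy_sensor_selection(G, 4))   # indices of 4 informative sensors
```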
We demonstrate our approach on a large-scale geophysical model of the Perth Basin, and provide numerical studies regarding optimality and scalability with regard to classic optimal experimental design utility functions.'\naddress:\n- 'Oden Institute for Computational Engineering and Sciences, University of Texas at Austin, 201 E 24th St, Austin, TX 78712, USA'\n- 'School of Computational Science and Engineering, Georgia Institute of Technology, 756 W Peachtree St NW," +"---\nabstract: 'Quantum batteries are quantum systems that store energy which can then be used for quantum tasks. One relevant question about such systems concerns the differences and eventual advantages over their classical counterparts, whether in the efficiency of the energy transference, input power, total stored energy or other relevant physical quantities. Here, we show how a purely quantum effect related to the vacuum of the electromagnetic field can enhance the charging of a quantum battery. In particular, we demonstrate how an anti-Jaynes Cummings interaction derived from an off-resonant Raman configuration can be used to increase the stored energy of an effective two-level atom when compared to its classically driven counterpart, eventually achieving full charging of the battery with zero entropic cost.'\nauthor:\n- 'Tiago F. F. Santos'\n- Yohan Vianna de Almeida\n- 'Marcelo F. Santos'\ntitle: Vacuum enhanced charging of a quantum battery\n---\n\nThe quest for advanced quantum technologies or the irreversible role of measurements in quantum dynamics are examples of subjects that have stimulated the study of thermodynamics in the microscopic world. An important recent topic of investigation involves the role played by quantum resources in the storage and use of energy by quantized systems\u00a0[@karen2013;" +"---\nauthor:\n- |\n Chenqi\u00a0Kong, Kexin\u00a0Zheng, Yibing\u00a0Liu, Shiqi\u00a0Wang,\\\n Anderson\u00a0Rocha, and Haoliang\u00a0Li\nbibliography:\n- 'main.bib'\ntitle: 'M$^{3}$FAS: An Accurate and Robust MultiModal Mobile Face Anti-Spoofing System'\n---\n\nAutomated face recognition (AFR) systems have been prevalently deployed on mobile devices and play a vital role around the globe. It is reported that the market of AFR will reach USD 3.35B by 2024\u00a0[@AFR_market]. Despite AFR\u2019s extraordinary success, face presentation attacks (FPA), also known as face spoofing, have recently posed high-security risks over impersonation, financial fraud, and privacy leakage. 2D FPA, comprising photo print and video replay attacks, is the most detrimental and notorious attack type due to its low costs\u00a0[@patel2016secure]. Malicious attackers can easily launch it by accessing the target person\u2019s face images/videos on social media and presenting them to the target AFR systems. The abuse of 2D FPA will certainly lead to trust destruction and tangible concerns in the long run. Therefore, safeguarding AFR systems from 2D FPA and suppressing the pressing security concerns is of utmost importance.\n\n![Illustration of M$^3$FAS system. The mobile device employs the front camera to capture the input" +"---\nabstract: 'Carrier relaxation measurements in moir\u00e9 materials offer a unique probe of the microscopic interactions, in particular the ones that are not easily measured by transport.
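The anti-Jaynes-Cummings charging mechanism in the quantum-battery abstract above can be illustrated numerically: starting from the atomic ground state and the field vacuum, the counter-rotating coupling transfers energy into the atom. A self-contained NumPy/SciPy sketch with assumed units (hbar = omega_0 = 1) and an assumed coupling g:

```python
import numpy as np
from scipy.linalg import expm

N = 10                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), k=1)    # field annihilation operator
sm = np.array([[0.0, 1.0], [0.0, 0.0]])       # atomic lowering operator
g = 1.0
# anti-Jaynes-Cummings interaction: sigma_+ a^dagger + sigma_- a
H = g * (np.kron(sm.T, a.T) + np.kron(sm, a))

psi0 = np.zeros(2 * N, dtype=complex)
psi0[0] = 1.0                                 # |ground atom, vacuum field>
P_e = np.kron(sm.T @ sm, np.eye(N))           # excited-state projector

for t in np.linspace(0.0, np.pi, 5):
    psi = expm(-1j * H * t) @ psi0
    # stored energy of the two-level "battery" is omega_0 * P(excited)
    print(f"t = {t:.2f}  stored energy = {np.vdot(psi, P_e @ psi).real:.3f}")
# full charge (P_e = 1) is reached at g*t = pi/2, with the state staying pure
```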
Umklapp scattering between phonons is a ubiquitous momentum-nonconserving process that governs the thermal conductivity of semiconductors and insulators. In contrast, Umklapp scattering between electrons and phonons has not been demonstrated experimentally. Here, we study the cooling of hot electrons in moir\u00e9 graphene using time- and frequency-resolved photovoltage measurements as a direct probe of its complex energy pathways including electron-phonon coupling. We report on a dramatic speedup in hot carrier cooling of twisted bilayer graphene near the magic angle: the cooling time is a few picoseconds from room temperature down to 5 K, whereas in pristine graphene coupling to acoustic phonons takes nanoseconds. Our analysis indicates that this ultrafast cooling is a combined effect of the formation of a superlattice with low-energy moir\u00e9 phonons, spatially compressed electronic Wannier orbitals, and a reduced superlattice Brillouin zone, enabling Umklapp scattering that overcomes electron-phonon momentum mismatch. These results demonstrate a way to engineer electron-phonon coupling in twistronic systems, an approach that could contribute to the fundamental understanding of their transport properties and enable applications in thermal management" +"---\nabstract: 'There is a long-standing question of whether it is possible to extend the formalism of equilibrium thermodynamics to the case of non-equilibrium systems in steady states. We have made such an extension for an ideal gas in a heat flow \\[Ho\u0142yst *et al.*, J. Chem. Phys. 157, 194108 (2022)\\]. Here we investigate whether such a description exists for the system with interactions: the Van der Waals gas in a heat flow. We introduce the parameters of state, each associated with a single way of changing energy. The first law of non-equilibrium thermodynamics follows from these parameters. The internal energy $U$ for the non-equilibrium states has the same form as in equilibrium thermodynamics. For the Van der Waals gas, $U(S^*, V, N, a^*,b^* )$ is a function of only 5 parameters of state (irrespective of the number of parameters characterizing the boundary conditions): the entropy $S^*$, volume $V$, number of particles $N$, and the rescaled Van der Waals parameters $a^*$, $b^*$. The state parameters, $a^*$, $b^*$, together with $S^*$, determine the net heat exchange with the environment.'\nauthor:\n- Robert Ho\u0142yst\n- Karol Makuch\n- Konrad Gi\u017cy\u0144ski\n- Anna Macio\u0142ek\n- 'Pawe\u0142 J. \u017buk'\ntitle: Steady thermodynamic fundamental relation" +"---\nabstract: 'We generate anti-self-polar polytopes via a numerical implementation of the gradient flow induced by the diameter functional on the space of all finite subsets of the sphere, and prove related results on the critical points of the diameter functional as well as results about the combinatorics of such polytopes. We also discuss potential connections to Borsuk\u2019s conjecture.'\naddress:\n- 'Bar Ilan University.'\n- 'The Ohio State University.'\n- 'University of Utah.'\nauthor:\n- Mikhail Katz\n- Facundo M\u00e9moli\n- Qingsong Wang\ntitle: 'Extremal spherical polytopes and Borsuk\u2019s conjecture'\n---\n\nIntroduction\n============\n\nLet $(X, d_X)$ be a metric space. The *Kuratowski embedding* $x\n\\mapsto d_X(x, \\cdot)$ is an embedding of $X$ into $L^{\\infty}(X)$, the space of all bounded real-valued functions on $X$ with the uniform norm. 
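The gradient flow of the diameter functional mentioned in the polytope abstract above can be mimicked with a toy subgradient step on the farthest pair of points; this is a sketch under simplifying assumptions (single farthest pair, fixed step size), not the authors' implementation:

```python
import numpy as np

def diameter_step(P, eta=0.01):
    # subgradient step on diam(P) = max_{i,j} arccos(<p_i, p_j>): move the
    # farthest pair toward each other along the sphere, then renormalize
    G = np.clip(P @ P.T, -1.0, 1.0)
    np.fill_diagonal(G, 1.0)                        # exclude i == j
    i, j = np.unravel_index(np.argmin(G), G.shape)  # max angle = min cosine
    for a, b in ((i, j), (j, i)):
        c = float(P[a] @ P[b])
        tangent = P[b] - c * P[a]                   # direction toward p_b
        norm = np.linalg.norm(tangent)
        if norm > 1e-12:
            P[a] = P[a] + eta * tangent / norm
            P[a] /= np.linalg.norm(P[a])
    return P

rng = np.random.default_rng(2)
P = rng.normal(size=(12, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)
for _ in range(5000):
    P = diameter_step(P)
G = np.clip(P @ P.T, -1.0, 1.0)
np.fill_diagonal(G, 1.0)
print(np.degrees(np.arccos(G.min())))   # diameter of the final configuration
```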
When $X$ is the unit sphere with its geodesic distance, the homotopy types of the $r$-neighborhoods $B_r(X, L^{\\infty}(X))$ in the Kuratowski embedding of $X$ were studied by Katz in\u00a0[@katz1991neighborhoods]. The values at which the homotopy type changes are closely related to the critical configurations of the diameter functional ${\\mathrm{diam}}$ of $X$ which maps a finite subset $A$ of $X$ to ${\\mathrm{diam}}(A):=\\max_{a,a'\\in A}d_X(a,a')$. When $X$ is the unit circle, such critical values turn" +"---\nabstract: 'Image denoising is a typical ill-posed problem due to complex degradation. Leading methods based on normalizing flows have tried to solve this problem with an invertible transformation instead of a deterministic mapping. However, the implicit bijective mapping is not explored well. Inspired by a latent observation that noise tends to appear in the high-frequency part of the image, we propose a fully invertible denoising method that injects the idea of disentangled learning into a general invertible neural network to split noise from the high-frequency part. More specifically, we decompose the noisy image into clean low-frequency and hybrid high-frequency parts with an invertible transformation and then disentangle case-specific noise and high-frequency components in the latent space. In this way, denoising is made tractable by inversely merging noiseless low and high-frequency parts. Furthermore, we construct a flexible hierarchical disentangling framework, which aims to decompose most of the low-frequency image information while disentangling noise from the high-frequency part in a coarse-to-fine manner. Extensive experiments on real image denoising, JPEG compressed artifact removal, and medical low-dose CT image restoration have demonstrated that the proposed method achieves competing performance on both quantitative metrics and visual quality, with significantly less computational cost.'\nauthor:\n-" +"---\nabstract: 'We investigate the mathematical capabilities of two iterations of ChatGPT (released 9-January-2023 and 30-January-2023) and of GPT-4 by testing them on publicly available datasets, as well as hand-crafted ones, using a novel methodology. In contrast to formal mathematics, where large databases of formal proofs are available (e.g., the Lean Mathematical Library), current datasets of natural-language mathematics, used to benchmark language models, either cover only elementary mathematics or are very small. We address this by publicly releasing two new datasets: GHOSTS and miniGHOSTS. These are the first natural-language datasets curated by working researchers in mathematics that (1) aim to cover graduate-level mathematics, (2) provide a holistic overview of the mathematical capabilities of language models, and (3) distinguish multiple dimensions of mathematical reasoning. These datasets also test whether ChatGPT and GPT-4 can be helpful assistants to professional mathematicians by emulating use cases that arise in the daily professional activities of mathematicians. We benchmark the models on a range of fine-grained performance metrics. For advanced mathematics, this is the most detailed evaluation effort to date. We find that ChatGPT can be used most successfully as a mathematical assistant for querying facts, acting as a mathematical search engine and knowledge base interface." 
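The latent observation in the denoising abstract above, that noise concentrates in the high-frequency part of an image, can be checked with a crude but perfectly invertible frequency split (low + high reconstructs the input exactly). The hard FFT mask below is an assumed stand-in for the paper's learned invertible transform:

```python
import numpy as np

def split_frequencies(img, radius=0.1):
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y = np.arange(h) - h / 2
    x = np.arange(w) - w / 2
    r = np.sqrt(y[:, None] ** 2 + x[None, :] ** 2)
    low_mask = r <= radius * min(h, w)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * low_mask)))
    high = img - low          # exact: low + high == img
    return low, high

rng = np.random.default_rng(3)
clean = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
low, high = split_frequencies(noisy)
# most of the noise energy lands in `high`, most of the signal in `low`
print(np.std(high), np.std(noisy - clean))
```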
+"---\nabstract: 'Ensemble methods can deliver surprising performance gains but also bring significantly higher computational costs, e.g., can be up to 2048X in large-scale ensemble tasks. However, we found that the majority of computations in ensemble methods are redundant. For instance, over 77% of samples in CIFAR-100 dataset can be correctly classified with only a single ResNet-18 model, which indicates that only around 23% of the samples need an ensemble of extra models. To this end, we propose an inference efficient ensemble learning method, to simultaneously optimize for effectiveness and efficiency in ensemble learning. More specifically, we regard ensemble of models as a sequential inference process and learn the optimal halting event for inference on a specific sample. At each timestep of the inference process, a common selector judges if the current ensemble has reached ensemble effectiveness and halt further inference, otherwise filters this challenging sample for the subsequent models to conduct more powerful ensemble. Both the base models and common selector are jointly optimized to dynamically adjust ensemble inference for different samples with various hardness, through the novel optimization goals including sequential ensemble boosting and computation saving. The experiments with different backbones on real-world datasets illustrate our method can" +"---\nabstract: 'Physics-informed neural networks (PINNs) have been widely used to solve partial differential equations in a forward and inverse manner using deep neural networks. However, training these networks can be challenging for multiscale problems. While statistical methods can be employed to scale the regression loss on data, it is generally challenging to scale the loss terms for equations. This paper proposes a method for scaling the mean squared loss terms in the objective function used to train PINNs. Instead of using automatic differentiation to calculate the temporal derivative, we use backward Euler discretization. This provides us with a scaling term for the equations. In this work, we consider the two and three-dimensional Navier-Stokes equations and determine the kinematic viscosity using the spatio-temporal data on the velocity and pressure fields. We first consider numerical datasets to test our method. We test the sensitivity of our method to the time step size, the number of timesteps, noise in the data, and spatial resolution. Finally, we use the velocity field obtained using Particle Image Velocimetry (PIV) experiments to generate a reference pressure field. We then test our framework using the velocity and reference pressure field.'\nauthor:\n- Sukirt Thakur\n- Maziar Raissi" +"---\nabstract: 'The flipped classroom is a new pedagogical strategy that has been gaining increasing importance recently. Spoken discussion dialog commonly occurs in flipped classroom, which embeds rich information indicating processes and progression of students\u2019 learning. This study focuses on learning analytics from spoken discussion dialog in the flipped classroom, which aims to collect and analyze the discussion dialogs in flipped classroom in order to get to know group learning processes and outcomes. We have recently transformed a course using the flipped classroom strategy, where students watched video-recorded lectures at home prior to group-based problem-solving discussions in class. The in-class group discussions were recorded throughout the semester and then transcribed manually. 
After features are extracted from the dialogs by multiple tools and customized processing techniques, we performed statistical analyses to explore the indicators that are related to the group learning outcomes from face-to-face discussion dialogs in the flipped classroom. Then, machine learning algorithms are applied to the indicators in order to predict the group learning outcome as High, Mid or Low. The best prediction accuracy reaches 78.9%, which demonstrates the feasibility of achieving automatic learning outcome prediction from group discussion dialog in the flipped classroom.'\nauthor:\n- 'Hang\u00a0Su, Borislav\u00a0Dzodzo," +"---\nabstract: 'Previous work suggests that performance of cross-lingual information retrieval correlates highly with the quality of Machine Translation. However, there may be a threshold beyond which improving query translation quality yields little or no benefit to further improve the retrieval performance. This threshold may depend upon multiple factors including the source and target languages, the existing MT system quality and the search pipeline. In order to identify the benefit of improving an MT system for a given search pipeline, we investigate the sensitivity of retrieval quality to the presence of different levels of MT quality using experimental datasets collected from actual traffic. We systematically improve the quality of our MT systems on language pairs, as measured by MT evaluation metrics including Bleu and Chrf, to determine their impact on search precision metrics and extract signals that help to guide the improvement strategies. Using this information we develop techniques to compare query translations for multiple language pairs and identify the most promising language pairs to invest in and improve.'\nauthor:\n- |\n Bryan Hang Zhang\\\n Amazon.com\\\n `bryzhang@amazon.com`\\\n Amita Misra\\\n Amazon.com\\\n `misrami@amazon.com`\\\nbibliography:\n- 'anthology.bib'\n- 'others.bib'\n- 'acl2021.bib'\n- 'reference.bib'\ntitle: 'Machine Translation Impact in E-commerce Multilingual Search '\n---" +"---\nabstract: 'The minimum completion (fill-in) problem is defined as follows: Given a graph family\u00a0$\\mathcal{F}$ (more generally, a property\u00a0$\\Pi$) and a graph\u00a0$G$, the completion problem asks for the minimum number of non-edges needed to be added to $G$ so that the resulting graph belongs to the graph family\u00a0$\\mathcal{F}$ (or has property\u00a0$\\Pi$). This problem is NP-complete for many subclasses of perfect graphs and polynomial solutions are available only for minimal completion sets. We study the minimum completion problem of a $P_4$-sparse graph\u00a0$G$ with an added edge. For any optimal solution of the problem, we prove that there is an optimal solution whose form is one of a small number of possibilities. This, along with the solution of the problem when the added edge connects two non-adjacent vertices of a spider or two vertices in different connected components of the graph, enables us to present a polynomial-time algorithm for the problem.'\nauthor:\n- Anna Mpanti\n- 'Stavros D. Nikolopoulos'\n- Leonidas Palios\ntitle: 'Adding an Edge in a $P_4$-sparse Graph [^1] '\n---\n\nIntroduction\n============\n\nOne instance of the general (${\\cal C},+k$)-MinEdgeAddition problem [@NP05] is the ($P_4$-sparse,[$+$]{}$1$)-MinEdgeAddition Problem.
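The core analysis in the MT-and-search abstract above, relating per-language-pair MT quality gains to search precision gains, can be prototyped with a rank correlation. All numbers below are hypothetical placeholders for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-language-pair measurements: change in MT quality (e.g.
# chrF) after an MT upgrade, and the corresponding change in search precision.
delta_chrf = np.array([1.2, 0.4, 3.1, 2.2, 0.1, 1.8])
delta_precision = np.array([0.006, 0.001, 0.015, 0.012, 0.000, 0.007])

rho, pval = spearmanr(delta_chrf, delta_precision)
# language pairs with a large expected precision gain per unit of MT
# improvement are the promising ones to invest in
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```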
In this problem, we add $1$" +"---\nabstract: 'We look at a stochastic time-varying optimization problem and we formulate online algorithms to find and track its optimizers in expectation. The algorithms are derived from the intuition that standard prediction and correction steps can be seen as a nonlinear dynamical system and a measurement equation, respectively, yielding the notion of nonlinear filter design. The optimization algorithms are then based on an extended Kalman filter in the unconstrained case, and on a bilinear matrix inequality condition in the constrained case. Some special cases and variations are discussed, notably the case of parametric filters, yielding certificates based on LPV analysis and, if one wishes, matrix sum-of-squares relaxations. Supporting numerical results are presented from real data sets in ride-hailing scenarios. The results are encouraging, especially when predictions are accurate, a case which is often encountered in practice when historical data is abundant.'\nauthor:\n- 'Andrea Simonetto [^1]Paolo Massioni [^2]'\nbibliography:\n- 'PaperCollection00.bib'\ntitle: 'Nonlinear Optimization Filters for Stochastic Time-Varying Convex Optimization'\n---\n\nIntroduction {#sec1}\n============\n\nWe look at time-varying optimization problems of the form $$~\\label{eq:tv}\n\\min_{\\x \\in {\\mathbb R}^n} f(\\x; \\y(t)) + g(\\x),\\qquad t\\geq 0,$$ where $f: {\\mathbb R}^n \\times {\\mathbb R}^d \\to {\\mathbb R}$ is a smooth strongly convex" +"---\nabstract: 'Dantzig-Wolfe (DW) decomposition is a well-known technique in mixed-integer programming (MIP) for decomposing and convexifying constraints to obtain potentially strong dual bounds. We investigate cutting planes that can be derived using the DW decomposition algorithm and show that these cuts can provide the same dual bounds as DW decomposition. More precisely, we generate one cut for each DW block, and when combined with the constraints in the original formulation, these cuts imply the objective function cut one can simply write using the DW bound. This approach typically leads to a formulation with lower dual degeneracy that consequently has a better computational performance when solved by standard MIP solvers in the original space. We also discuss how to strengthen these cuts to improve the computational performance further. We test our approach on the Multiple Knapsack Assignment Problem and the Temporal Knapsack Problem, and show that the proposed cuts are helpful in accelerating the solution time without the need to implement branch and price.'\nauthor:\n- |\n Rui Chen$^1$, Oktay G\u00fcnl\u00fck$^2$, Andrea Lodi$^1$\\\n $^1$ Cornell Tech, Cornell University ({rui.chen,andrea.lodi}@cornell.edu)\\\n $^2$ School of ORIE, Cornell University (oktay.gunluk@cornell.edu)\nbibliography:\n- 'ref.bib'\ntitle: 'Recovering Dantzig-Wolfe Bounds by Cutting Planes'\n---\n\nIntroduction\n============\n\nIn" +"---\nabstract: 'The presence of Galactic cirrus is an obstacle for studying both faint objects in our Galaxy and low surface brightness extragalactic structures. With the aim of studying individual cirrus filaments in SDSS Stripe\u00a082 data, we develop techniques based on machine learning and neural networks that allow one to isolate filaments from foreground and background sources in the entirety of Stripe\u00a082 with a precision similar to that of the human expert. 
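The prediction-correction template that the time-varying-optimization abstract above reinterprets as a dynamical system plus measurement equation is easy to state concretely. A sketch assuming, for brevity, a unit Hessian (so the prediction step needs only the time derivative of the gradient):

```python
import numpy as np

def prediction_correction(grad, grad_t, x0, times, alpha=0.5):
    """Track argmin_x f(x; t): predict by following the drift of the
    optimality condition, then correct with a gradient step at the new time.
    grad(x, t) is the gradient and grad_t(x, t) its partial time derivative."""
    x = np.asarray(x0, dtype=float)
    dt = times[1] - times[0]
    traj = []
    for t in times:
        x = x - dt * grad_t(x, t)        # prediction: follow the drift
        x = x - alpha * grad(x, t + dt)  # correction: gradient step at t+dt
        traj.append(x.copy())
    return np.array(traj)

# example: f(x; t) = 0.5 * (x - sin t)^2, so the optimizer is x*(t) = sin(t)
times = np.linspace(0.0, 10.0, 500)
traj = prediction_correction(lambda x, t: x - np.sin(t),
                             lambda x, t: -np.cos(t) * np.ones_like(x),
                             x0=[0.0], times=times)
dt = times[1] - times[0]
print(np.abs(traj[:, 0] - np.sin(times + dt)).max())  # small tracking error
```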
Our photometric study of individual filaments indicates that only those brighter than 26 mag arcsec$^{-2}$ in the SDSS $r$ band are likely to be identified in SDSS Stripe\u00a082 data by their distinctive colours in the optical bands. We also show a significant impact of data processing (e.g. flat-fielding, masking of bright stars, and sky subtraction) on colour estimation. Our work provides a useful framework for an analysis of all types of low surface brightness features (cirri, tidal tails, stellar streams, etc.) in existing and future deep optical surveys. For practical purposes, we provide the catalogue of dust filaments.'\nauthor:\n- |\n Anton\u00a0A.\u00a0Smirnov,$^{1,2}$[^1] Sergey\u00a0S.\u00a0Savchenko,$^{1,2,5}$ Denis\u00a0M.\u00a0Poliakov$^{1,2}$ Alexander A. Marchuk,$^{1,2}$ Aleksandr\u00a0V.\u00a0Mosenkov,$^{3,1}$ Vladimir\u00a0B.\u00a0Il\u2019in,$^{1,2,4}$ George\u00a0A.\u00a0Gontcharov,$^{1}$ Javier\u00a0Rom[\u00e1]{}n,$^{6,7,8}$" +"---\nabstract: 'Supernova (SN) plays an important role in galaxy formation and evolution. In high-resolution galaxy simulations using massively parallel computing, short integration timesteps for SNe are serious bottlenecks. This is an urgent issue that needs to be resolved for future higher-resolution galaxy simulations. One possible solution would be to use the Hamiltonian splitting method, in which regions requiring short timesteps are integrated separately from the entire system. To apply this method to the particles affected by SNe in a smoothed-particle hydrodynamics simulation, we need to detect the shape of the shell on and within which such SN-affected particles reside during the subsequent global step in advance. In this paper, we develop a deep learning model, 3D-MIM, to predict a shell expansion after a SN explosion. Trained on turbulent cloud simulations with particle mass $m_{\\rm gas}=1$ M$_\\odot$, the model accurately reproduces the anisotropic shell shape, where densities decrease by over 10 per cent by the explosion. We also demonstrate that the model properly predicts the shell radius in the uniform medium beyond the training dataset of inhomogeneous turbulent clouds. We conclude that our model enables the forecast of the shell and its interior where SN-affected particles will be present.'\nauthor:" +"---\ntitle: |\n **Tachyons and Misaligned Supersymmetry in Closed String Vacua**\n\n \\\n---\n\nAbstract\\\n\nIn a remarkable paper, Dienes discovered that the absence of physical tachyons in closed string theory is intimately related to oscillations in the net number of bosonic minus fermionic degrees of freedom, a pattern predicted by an underlying misaligned supersymmetry. The average of these oscillations was linked to an exponential growth controlled by an effective central charge $C_\\text{eff}$ smaller than the expected inverse Hagedorn temperature. Dienes also conjectured that $C_\\text{eff}$ should vanish when tachyons are absent.\n\nIn this paper, we revisit this problem and show that boson-fermion oscillations are realised even when tachyons are present in the physical spectrum. In fact, we prove that the average growth rate $C_\\text{eff}$ is set by the mass of the \u201clightest\u201d state, be it massless or tachyonic, and coincides with the effective inverse Hagedorn temperature of the associated thermal theory.
We also provide a general proof that the necessary and sufficient condition for classical stability is the vanishing of the sector-averaged sum, which implies $C_\\text{eff} =0$, in agreement with Dienes\u2019 conjecture.\n\n- [carlo.angelantonj@unito.it]{}\\\n [iflorakis@uoi.gr]{}\\\n [giorgio.leone@unito.it]{}\n\nIntroduction\n============\n\nSuperstring vacua are typically unstable when space-time supersymmetry is absent. The" +"---\naddress: '$^{1}$ Max Planck Institute for the Physics of Complex Systems, Dresden; crt@pks.mpg.de\\'\n---\n\nIntroduction\n============\n\nOne of the central areas of study in quantum chaos is that of the spectral statistics of quantum chaotic systems and how they relate to classical chaos and random matrix theory (RMT) [@StoeBook; @HaakeBook]. The spectral form factor (SFF) is one of the most widely used spectral statistics, due to the stark contrast in behaviour between the chaotic and integrable regimes. However, the SFF is not a self-averaging quantity [@prange1997spectral], meaning that its typical value at a given time may be far from the average value. Because of this, its numerical computation remains challenging, and its practical evaluation requires some sort of smoothing procedure, either by computing disorder averages (only possible when considering systems with disorder) or local time averages. Nevertheless, the SFF has been used as the fundamental indicator of quantum chaos in many of the central rigorous results. A heuristic proof of the quantum chaos (Bohigas-Giannoni-Schmit) conjecture [@BGS; @CVG], initiated by Berry [@berry1985semiclassical], developed by Sieber and Richter [@Sieber2001], and later completed by the group of Haake\u00a0[@Mueller2004a; @Mueller2004b; @Mueller2005], clearly relates random matrix spectral correlations to correlations among classical unstable" +"---\nabstract: 'According to recent new definitions, a multi-party behavior is *genuinely multipartite nonlocal* (GMNL) if it cannot be modeled by measurements on an underlying network of bipartite-only nonlocal resources, possibly supplemented with local (classical) resources shared by all parties. The new definitions differ on whether to allow entangled measurements upon, and/or superquantum behaviors among, the underlying bipartite resources. Here, we categorize the full hierarchy of these new candidate definitions of GMNL in three-party quantum networks, highlighting the intimate link to device-independent witnesses of network effects. A key finding is the existence of a behavior in the simplest nontrivial multi-partite measurement scenario (3 parties, 2 measurement settings, and 2 outcomes) that cannot be simulated in a bipartite network prohibiting entangled measurements and superquantum resources \u2013 thus witnessing the most general form of GMNL \u2013 but can be simulated with bipartite-only quantum states *with* an entangled measurement, indicating an approach to device-independent certification of entangled measurements with fewer settings than in previous protocols. Surprisingly, we also find that this (3,2,2) behavior, as well as the others previously studied as device-independent witnesses of entangled measurements, can all be simulated at a higher echelon of the GMNL hierarchy that allows superquantum bipartite
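The non-self-averaging of the spectral form factor discussed above, and the local time averaging used to tame it, are straightforward to reproduce numerically. A minimal sketch on a GOE random-matrix spectrum (the unnormalized single-spectrum SFF convention is an assumption):

```python
import numpy as np

def spectral_form_factor(E, ts, window=25):
    """SFF K(t) = |sum_n exp(-i E_n t)|^2 / N from a single spectrum,
    followed by a running time average -- the kind of local smoothing
    needed because K(t) fluctuates wildly at any fixed t."""
    E = np.asarray(E)
    K = np.abs(np.exp(-1j * np.outer(ts, E)).sum(axis=1)) ** 2 / len(E)
    kernel = np.ones(window) / window
    return np.convolve(K, kernel, mode="same")

# GOE random-matrix spectrum as a test case
rng = np.random.default_rng(4)
A = rng.normal(size=(500, 500))
E = np.linalg.eigvalsh((A + A.T) / 2)
ts = np.linspace(0.1, 20.0, 2000)
K_smooth = spectral_form_factor(E, ts)
print(K_smooth[:5])
```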
Motivated by trap models of slow dynamics, we consider a model in which the trap depth increases logarithmically with distance from the origin. This leads to a random walk which has symmetric transition probabilities that decrease with distance $|k|$ from the origin as $1/|k|$ for large $|k|$. We show that the typical position after time $t$ scales as $t^{1/3}$ with a nontrivial scaling function for the position distribution which has a trough (a cusp singularity) at the origin, even though the transition probabilities are symmetric. We also compute the survival probability of the walker in the presence of a sink at the origin and show that it decays as $t^{-1/3}$ at late times. Furthermore, we compute the distribution of the maximum position, $M(t)$, to the right of the origin up to time $t$, and show that it has a nontrivial scaling function. Finally, we provide a generalisation of this model where the transition probabilities decay as $1/|k|^\\alpha$ with $\\alpha >0$.'\naddress:\n- '$^1$ Department of Physics, Indian Institute of Science Education and Research, Dr. Homi Bhabha Road, Pune 411008, India'\n- '$^2$ Department" +"---\nabstract: |\n **Background** Alzheimer\u2019s disease and related dementia (ADRD) are characterized by multiple and progressive anatomo-clinical changes including accumulation of abnormal proteins in the brain, brain atrophy and severe cognitive impairment. Understanding the sequence and timing of these changes is of primary importance to gain insight into the disease natural history and ultimately allow earlier diagnosis. Yet, modeling changes over disease course from cohort data is challenging as the usual timescales (time since inclusion, chronological age) are inappropriate and time-to-clinical diagnosis is available on small subsamples of participants with short follow-up durations prior to diagnosis. One solution to circumvent this challenge is to define the disease time as a latent variable.\n\n **Methods** We developed a multivariate mixed model approach that realigns individual trajectories into the latent disease time to describe disease progression. In contrast with the existing literature, our methodology exploits the clinical diagnosis information as a partially observed and approximate reference to guide the estimation of the latent disease time. The model estimation was carried out in the Bayesian framework using Stan. We applied the methodology to the MEMENTO study, a French multicentric clinic-based cohort of 2186 participants with 5-year intensive follow-up. Repeated measures of 12 ADRD markers" +"---\nabstract: 'We investigate policy transfer using image-to-semantics translation to mitigate learning difficulties in vision-based robotics control agents. This problem assumes two environments: a simulator environment with semantics, that is, low-dimensional and essential information, as the state space, and a real-world environment with images as the state space. By learning a mapping from images to semantics, we can transfer a policy, pre-trained in the simulator, to the real world, thereby eliminating the costly and risky real-world on-policy interactions otherwise needed for learning. In addition, using image-to-semantics mapping is advantageous in terms of the computational efficiency to train the policy and the interpretability of the obtained policy over other types of sim-to-real transfer strategies.
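The $t^{1/3}$ scaling in the sluggish-random-walk abstract above can be checked by direct Monte Carlo. The regularisation of the hop probability at the origin below is an assumption; the abstract only fixes the $1/|k|^\alpha$ tail at large $|k|$:

```python
import numpy as np

rng = np.random.default_rng(5)

def sluggish_walk(t_max, alpha=1.0):
    """Symmetric nearest-neighbour walk that hops with probability
    ~ 1/(1+|k|)^alpha at site k and otherwise stays put."""
    k = 0
    for _ in range(t_max):
        if rng.random() < (1.0 + abs(k)) ** (-alpha):
            k += -1 if rng.random() < 0.5 else 1
    return k

# dimensional analysis: d<k^2>/dt ~ k^(-alpha) gives k ~ t^(1/(alpha+2)),
# i.e. the t^(1/3) scaling quoted in the abstract for alpha = 1
T = 30_000
samples = np.array([abs(sluggish_walk(T)) for _ in range(200)])
print(samples.mean() / T ** (1.0 / 3.0))   # O(1) ratio if the scaling holds
```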
To tackle the main difficulty in learning image-to-semantics mapping, namely the human annotation cost for producing a training dataset, we propose two techniques: pair augmentation with the transition function in the simulator environment and active learning. We observed a reduction in the annotation cost without a decline in the performance of the transfer, and the proposed approach outperformed the existing approach without annotation.'\nauthor:\n- \nbibliography:\n- 'rl.bib'\ntitle: 'Few-Shot Image-to-Semantics Translation for Policy Transfer in Reinforcement Learning [^1] '\n---\n\ndeep reinforcement learning, policy transfer, sim-to-real\n\nIntroduction" +"---\nabstract: 'We present a comprehensive study of the temperature and magnetic-field dependent photoluminescence (PL) of individual NV centers in diamond, spanning the temperature-range from cryogenic to ambient conditions. We directly observe the emergence of the NV\u2019s room-temperature effective excited state structure and provide a clear explanation for a previously poorly understood broad quenching of NV PL at intermediate temperatures around $50~$K. We develop a model that quantitatively explains all of our findings, including the strong impact that strain has on the temperature-dependence of the NV\u2019s PL. These results complete our understanding of orbital averaging in the NV excited state and have significant implications for the fundamental understanding of the NV center and its applications in quantum sensing.'\nauthor:\n- Jodok Happacher\n- Juanita Bocquel\n- 'Hossein T. Dinani'\n- 'M\u00e4rta A. Tschudin'\n- Patrick Reiser\n- 'David A. Broadway'\n- 'Jeronimo R. Maze'\n- Patrick Maletinsky\nbibliography:\n- 'Bibliography\\_NV\\_Temperature\\_Dependence.bib'\ntitle: Temperature Dependent Photophysics of Single NV Centers in Diamond\n---\n\nColor centers in solid state hosts are crucial for a variety of quantum technologies, including spin-based quantum sensors[@Degen2017a], highly stable fluorescent labels[@Alkahtani2018a], and single-photon light sources for advanced microscopy[@Nelz2020a]. Among the many potential systems, the nitrogen vacancy (NV) lattice" +"---\nabstract: 'Facial expression recognition (FER) plays an important role in a variety of real-world applications such as human-computer interaction. POSTER achieves the state-of-the-art (SOTA) performance in FER by effectively combining facial landmark and image features through two-stream pyramid cross-fusion design. However, the architecture of POSTER is undoubtedly complex. It causes expensive computational costs. In order to relieve the computational pressure of POSTER, in this paper, we propose POSTER++. It improves POSTER in three directions: cross-fusion, two-stream, and multi-scale feature extraction. In cross-fusion, we use window-based cross-attention mechanism replacing vanilla cross-attention mechanism. We remove the image-to-landmark branch in the two-stream design. For multi-scale feature extraction, POSTER++ combines images with landmark\u2019s multi-scale features to replace POSTER\u2019s pyramid design. Extensive experiments on several standard datasets show that our POSTER++ achieves the SOTA FER performance with the minimum computational cost. For example, POSTER++ reached 92.21% on RAF-DB, 67.49% on AffectNet (7 cls) and 63.77% on AffectNet (8 cls), respectively, using only 8.4G floating point operations (FLOPs) and 43.7M parameters (Param). 
This demonstrates the effectiveness of our improvements.'\nauthor:\n- |\n Jiawei Mao$^\\dag$ Rui Xu$^\\dag$ Xuesong Yin[[^1]]{} Yuanqi Chang Binling Nie Aibin Huang$^*$\\\n School of Media and Design, Hangzhou Dianzi University, Hangzhou, China\\\n [{jiaweima0,211330017,yinxs,yuanqichang,binlingnie,huangaibin}@hdu.edu.cn" +"---\nabstract: |\n User interaction data in recommender systems is a form of dyadic relation that reflects the preferences of users with items. Learning the representations of these two discrete sets of objects, users and items, is critical for recommendation. Recent multimodal recommendation models leveraging multimodal features ([*e.g.*,]{}images and text descriptions) have been demonstrated to be effective in improving recommendation accuracy. However, state-of-the-art models enhance the dyadic relations between users and items by considering either user-user or item-item relations, leaving the high-order relations of the other side ([*i.e.*,]{}users or items) unexplored. Furthermore, we experimentally reveal that the current multimodality fusion methods in the state-of-the-art models may degrade their recommendation performance. That is, without tainting the model architectures, these models can achieve even better recommendation accuracy with uni-modal information. On top of this finding, we propose a model that enhances the dyadic relations by learning dual representations of both users and items via constructing homogeneous graphs for multimodal recommendation. We name our model [`DRAGON`]{}. Specifically, [`DRAGON`]{} constructs the user-user graph based on the commonly interacted items and the item-item graph from item multimodal features. It then utilizes graph learning on both the user-item heterogeneous graph and the homogeneous graphs (user-user and
Bos'\nbibliography:\n- 'sample-base.bib'\ntitle: 'Cultural Differences in Friendship Network Behaviors: A Snapchat Case Study'\n---\n\n<ccs2012> <concept> <concept\\_id>10003120.10003130.10003131.10003292</concept\\_id> <concept\\_desc>Human-centered computing\u00a0Social networks</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10003120.10003130.10003131.10011761</concept\\_id> <concept\\_desc>Human-centered computing\u00a0Social media</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept>" +"---\nabstract: 'Graph-structured data can be found in numerous domains, yet the scarcity of labeled instances hinders the effective utilization of deep learning in many scenarios. Traditional unsupervised domain adaptation (UDA) strategies for graphs primarily hinge on adversarial learning and pseudo-labeling. These approaches fail to effectively leverage graph discriminative features, leading to class mismatching and unreliable label quality. To navigate these obstacles, we develop the Denoising and Nuclear-Norm Wasserstein Adaptation Network (DNAN). DNAN employs the Nuclear-norm Wasserstein discrepancy (NWD), which can simultaneously achieve domain alignment and class distinguishment. DNAN also integrates a denoising mechanism via a variational graph autoencoder that mitigates data noise. This denoising mechanism helps capture essential features of both source and target domains, improving the robustness of the domain adaptation process. Our comprehensive experiments demonstrate that DNAN outperforms the state-of-the-art methods on standard UDA benchmarks for graph classification.'\nauthor:\n- |\n Mengxi Wu mengxiwu@usc.edu\\\n USC Computer Science Department\\\n Mohammad Rostami rostamim@usu.edu\\\n USC Computer Science Department\nbibliography:\n- 'main.bib'\ntitle: 'Graph Harmony: Denoising and Nuclear-Norm Wasserstein Adaptation for Enhanced Domain Transfer in Graph-Structured Data'\n---\n\nIntroduction\n============\n\nWhile deep learning has made substantial progress in handling graph-structured data, it shares a substantial drawback with other methods in the same" +"---\nabstract: 'This note modifies the reference encoding of Turing machines in the $\\l$-calculus by Dal Lago and Accattoli [@DBLP:journals/corr/abs-1711-10078], which is tuned for time efficiency, so as to accommodate logarithmic space. There are two main changes: Turing machines now have *two* tapes, an input tape and a work tape, and the input tape is encoded differently, because the reference encoding comes with a linear space overhead for managing tapes, which is excessive for studying logarithmic space.'\nauthor:\n- Beniamino Accattoli\n- Ugo Dal Lago\n- Gabriele Vanoni\nbibliography:\n- 'main.bib'\ntitle: |\n A Log-Sensitive Encoding of\\\n Turing Machines in the $\\l$-Calculus\n---\n\nIntroduction\n============\n\nThis note presents a new encoding of Turing machines into the $\\l$-calculus and proves its correctness. It is based on Dal Lago and Accattoli\u2019s reference encoding of single-tape Turing machines [@DBLP:journals/corr/abs-1711-10078]. The new encoding is tuned for studying logarithmic space complexity even though such a study is not carried out here but in a companion paper.
The aim of this note is to provide the formal definition of the encoding and the tedious calculations to prove its correctness.\n\nThe key points of the new encoding with respect to the reference one are:\n\n- *Log-sensitivity*:" +"---\nabstract: 'The instability of a cryogenic $^4$He jet exiting through a small nozzle into vacuum leads to the formation of $^4$He drops which are considered as ideal matrices for spectroscopic studies of embedded atoms and molecules. Here, we present a He-DFT description of droplet formation resulting from jet breaking and contraction of superfluid $^4$He filaments. Whereas the fragmentation of long jets closely follows the predictions of linear theory for inviscid fluids, leading to droplet trains interspersed with smaller satellite droplets, the contraction of filaments with an aspect ratio larger than a threshold value leads to the nucleation of vortex rings which hinder their breakup into droplets.'\nauthor:\n- Francesco Ancilotto\n- Manuel Barranco\n- Mart\u00ed Pi\ntitle: |\n Nanoscopic jets and filaments of superfluid $^4$He at zero temperature:\\\n a DFT study\n---\n\nIntroduction\n============\n\nLiquid $^4$He droplets at low temperature offer a unique environment for molecular spectroscopy[@Leh98; @Cho06; @Cal11] and the study of superfluidity on the atomic scale,[@Sin89; @Kri90; @Gre98] including the study of quantum vortices.[@Gom14; @Lan18; @Ges19; @Oco20] Usually, $^4$He droplets are produced by expansion of cooled $^4$He gas or by instability of a cryogenic $^4$He jet exiting a source chamber into vacuum throughout a nozzle, whose temperature" +"---\nabstract: 'Cross-modality data translation has attracted great interest in image computing. Deep generative models (*e.g.*, GANs) show performance improvement in tackling those problems. Nevertheless, as a fundamental challenge in image translation, the problem of Zero-shot-Learning Cross-Modality Data Translation with fidelity remains unanswered. This paper proposes a new unsupervised zero-shot-learning method named Mutual Information guided Diffusion cross-modality data translation Model (MIDiffusion), which learns to translate the unseen source data to the target domain. The MIDiffusion leverages a score-matching-based generative model, which learns the prior knowledge in the target domain. We propose a differentiable local-wise-MI-Layer ($LMI$) for conditioning the iterative denoising sampling. The $LMI$ captures the identical cross-modality features in the statistical domain for the diffusion guidance; thus, our method does not require retraining when the source domain is changed, as it does not rely on any direct mapping between the source and target domains. This advantage is critical for applying cross-modality data translation methods in practice, as a reasonable amount of source domain dataset is not always available for supervised training. We empirically show the advanced performance of MIDiffusion in comparison with an influential group of generative models, including adversarial-based and other score-matching-based models.'\nauthor:\n- |\n Zihao Wang$^1$[^1], Yingyu" +"---\nabstract: 'A new variant of Newton\u2019s method for empirical risk minimization is studied, where at each iteration of the optimization algorithm, the gradient and Hessian of the objective function are replaced by robust estimators taken from existing literature on robust mean estimation for multivariate data. 
After proving a general theorem about the convergence of successive iterates to a small ball around the population-level minimizer, consequences of the theory in generalized linear models are studied when data are generated from Huber\u2019s epsilon-contamination model and/or heavy-tailed distributions. An algorithm for obtaining robust Newton directions based on the conjugate gradient method is also proposed, which may be more appropriate for high-dimensional settings, and conjectures about the convergence of the resulting algorithm are offered. Compared to robust gradient descent, the proposed algorithm enjoys the faster rates of convergence for successive iterates often achieved by second-order algorithms for convex problems, i.e., quadratic convergence in a neighborhood of the optimum, with a stepsize that may be chosen adaptively via backtracking linesearch.'\nauthor:\n- Eirini Ioannou\n- Muni Sreenivas Pydi\n- 'Po-Ling Loh'\nbibliography:\n- 'refs.bib'\ntitle: 'Robust empirical risk minimization via Newton\u2019s method'\n---\n\nIntroduction\n============\n\nStatistical estimation via classical procedures often depends on strong" +"---\nabstract: 'Percolation theory shows that removing a small fraction of critical nodes can lead to the disintegration of a large network into many disconnected tiny subnetworks. The *network dismantling* task focuses on how to efficiently select the fewest such critical nodes. Most existing approaches focus on measuring nodes\u2019 importance from either a functional or a topological viewpoint. In contrast, we argue that nodes\u2019 importance can be measured from both of these two complementary aspects: the functional importance can be based on the nodes\u2019 competence in relaying network information, while the topological importance can be measured from nodes\u2019 regional structural patterns. In this paper, we propose an unsupervised learning framework for network dismantling, called DCRS, which encodes and fuses both node diffusion competence and role significance. Specifically, we propose a graph diffusion neural network which emulates information diffusion for competence encoding; we divide nodes with similar egonet structural patterns into a few roles, and construct a role graph on which to encode node role significance. The DCRS converts and fuses the two encodings to output a final ranking score for selecting critical nodes. Experiments on both real-world networks and synthetic networks demonstrate that our scheme significantly outperforms the state-of-the-art competitors for" +"---\nauthor:\n- 'Devojyoti Kansabanik,$^{1}$ Surajit Mondal$^{2}$, Divya Oberoi $^1$, Puja Majee $^1$'\nbibliography:\n- 'example.bib'\ntitle: 'Space Weather Research using Spectropolarimetric Radio Imaging Combined With Aditya-L1 and PUNCH Missions'\n---\n\nIntroduction\n============\n\nThe space weather around the Earth is determined by the Sun. The most important phenomenon determining the space weather is coronal mass ejection (CME). CMEs are large-scale eruptions of magnetized plasma from the Sun into the heliosphere. It is well-established that the magnetic fields play important roles in their propagation and in determining their geo-effectiveness. While propagating, CMEs interact with other heliospheric components like solar wind, co-rotating interaction regions, and stream interaction regions and change their propagation direction and magnetic field topology [@Manchester2017].
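Returning to the robust empirical-risk-minimization abstract above: the scheme admits a compact illustration in which the empirical gradient and Hessian are swapped for median-of-means estimates (one simple robust choice; the paper allows more refined estimators) before each Newton step. A sketch with a fixed unit stepsize instead of the paper's backtracking linesearch:

```python
import numpy as np

def median_of_means(rows, n_blocks=10):
    # rows: per-sample contributions; robust mean via coordinate-wise
    # median of block means (outliers corrupt only a few blocks)
    blocks = np.array_split(rows, n_blocks)
    return np.median(np.stack([b.mean(axis=0) for b in blocks]), axis=0)

def robust_newton(x, per_grad, per_hess, data, steps=10):
    for _ in range(steps):
        g = median_of_means(per_grad(x, data))
        H = median_of_means(per_hess(x, data))
        x = x - np.linalg.solve(H, g)   # full Newton step
    return x

# toy problem: linear regression with 5% gross outliers in the responses
rng = np.random.default_rng(6)
X = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=500)
y[:25] += 50.0                                            # contamination
per_grad = lambda w, d: (d[0] @ w - d[1])[:, None] * d[0]  # per-sample grads
per_hess = lambda w, d: d[0][:, :, None] * d[0][:, None, :]  # per-sample x x^T
w_hat = robust_newton(np.zeros(3), per_grad, per_hess, (X, y))
print(w_hat)   # close to w_true despite the outliers
```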
These deformations complicate the prediction of CME arrival times or the $B_\\mathrm{z}$ component of the magnetic field at 1 AU. Hence, tracking and measuring the magnetic fields of a CME as it propagates from the corona into the heliosphere is essential for improving space-weather forecasting.\n\nThere are several state-of-the-art CME models [@Isavnin_2016; @M\u00f6stl2018] developed over the last few years to take these deformations of CMEs into account. These models have multiple independent parameters. One needs to constrain these model parameters of a CME during its" +"---\nabstract: 'One of the alternative theories of gravitation with a possible UV completion of general relativity is Horava-Lifshitz gravity. Regarding a particular class of pure $F(R)$ gravity in three dimensions, we obtain an analytical rotating Lifshitz-like black hole solution. We first investigate some geometrical properties of the obtained solution that reduces to a charged rotating BTZ black hole in a special limit. Then, we study the optical features of such a black hole, such as the photon orbit and the energy emission rate, and discuss how electric charge, angular momentum, and exponents affect them. In order to have an acceptable optical behavior, we should apply some constraints on the exponents. We continue our investigation with the study of the thermodynamic behavior of the solutions in the extended phase space and examine the validity of the first law of thermodynamics besides local thermal stability by using the heat capacity. Evaluating the existence of van der Waals-like phase transition, we obtain critical quantities and show how they change under the variation of black hole parameters. Finally, we construct a holographic heat engine of such a black hole and obtain its efficiency in a cycle. By comparing the obtained efficiency with the Carnot" +"---\nabstract: 'Despite the great success of state-of-the-art deep neural networks, several studies have reported models to be over-confident in predictions, indicating miscalibration. Label Smoothing has been proposed as a solution to the over-confidence problem and works by softening hard targets during training, typically by distributing part of the probability mass from a \u2018one-hot\u2019 label uniformly to all other labels. However, neither model nor human confidence in a label is likely to be uniformly distributed in this manner, with some labels more likely to be confused than others. In this paper we integrate notions of model confidence and human confidence with label smoothing, respectively *Model Confidence LS* and *Human Confidence LS*, to achieve better model calibration and generalization. To enhance model generalization, we show how our model and human confidence scores can be successfully applied to curriculum learning, a training strategy inspired by learning of \u2018easier to harder\u2019 tasks. A higher model or human confidence score indicates a more recognisable and therefore easier sample, and can therefore be used as a scoring function to rank samples in curriculum learning. We evaluate our proposed methods with four state-of-the-art architectures for image and text classification tasks, using datasets with multi-rater label annotations" +"---\nabstract: 'Ultrarelativistic gamma-ray burst (GRB) jets are strong gravitational wave (GW) sources with memory-type signals. The plateau (or shallow decay) phases driven by the energy injection might appear in the early X-ray afterglows of GRBs.
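The confidence-based label smoothing described in the abstract above amounts to one line of target construction: move a fraction eps of the probability mass onto a per-sample confidence distribution rather than onto the uniform distribution. A sketch of the idea (not the paper's exact recipe):

```python
import numpy as np

def confidence_label_smoothing(y, conf, eps=0.1):
    """Soft targets: (1-eps) on the one-hot label, eps on a per-sample
    confidence distribution (model- or human-derived).  `conf[i]` is a
    probability vector over classes for sample i."""
    c = conf.shape[1]
    onehot = np.eye(c)[y]
    return (1.0 - eps) * onehot + eps * conf

y = np.array([0, 2])
conf = np.array([[0.7, 0.2, 0.1],   # e.g. normalized multi-rater votes
                 [0.1, 0.3, 0.6]])
print(confidence_label_smoothing(y, conf, eps=0.2))
```

The same per-sample confidence can double as the curriculum score: higher confidence marks an easier sample, which is then scheduled earlier in training.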
In this paper, we investigate the GW signal as well as X-ray afterglow emission in the framework of GRB jets with energy injection, and both short- and long-duration GRBs are considered. We find that, regardless of the case, because of the antibeaming and time delay effects, a rising slope emerging in the waveform of the GW signal due to the energy injection lags far behind the energy ejection, and the typical frequency of the characteristic amplitudes falls within a low-frequency region of $\\sim10^{-4}-10^{-6} \\,{\\rm Hz}$; and we note that the GW memory triggered by GRB jets with energy injection was previously unrecognized, and that nearby GRBs with strong energy injection might disturb the measurement of the stochastic GW background. Such GW memory detection would provide a direct test for models of energy injection in the scenario of GRB jets.'\nauthor:\n- 'Bao-Quan Huang'\n- Tong Liu\n- Li Xue\n- 'Yan-Qing Qi'\ntitle: 'Low-frequency gravitational wave memory from gamma-ray burst afterglows with energy injection'\n---" +"[to $~{}$ revised version ]{}\n\n2.0cm\n\n[**Primordial black holes from Volkov\u2013Akulov\u2013Starobinsky\\\n.1in supergravity**]{}\n\n.3in\n\nYermek Aldabergenov\u00a0${}^{a,b,}$[^1] and Sergei V. Ketov\u00a0${}^{c,d,e,}$[^2] .1in\n\n${}^a$\u00a0[*Department of Physics, Faculty of Science, Chulalongkorn University,\\\nPhayathai Road, Pathumwan, Bangkok 10330, Thailand*]{}\\\n${}^b$\u00a0[*Department of Theoretical and Nuclear Physics, Al-Farabi Kazakh National University,\\\n71 Al-Farabi Ave., Almaty 050040, Kazakhstan*]{}\\\n${}^c$\u00a0[*Department of Physics, Tokyo Metropolitan University\\\n1-1 Minami-ohsawa, Hachioji-shi, Tokyo 192-0397, Japan*]{}\\\n${}^d$\u00a0[*Research School of High-Energy Physics, Tomsk Polytechnic University\\\n2a Lenin Avenue, Tomsk 634028, Russian Federation*]{}\\\n${}^e$\u00a0[*Kavli Institute for the Physics and Mathematics of the Universe (WPI)\\\nThe University of Tokyo Institutes for Advanced Study, Kashiwa 277-8583, Japan*]{}\\\n.1in\n\n.3in\n\n[**Abstract**]{} .2in\n\nWe study the formation of primordial black holes (PBH) in the Starobinsky supergravity coupled to the nilpotent superfield describing Volkov\u2013Akulov goldstino. By using the no-scale K\u00e4hler potential and a polynomial superpotential, we find that under certain conditions our model can describe effectively single-field inflation with the ultra-slow-roll phase that appears near a critical (near-inflection) point of the scalar potential. This can lead to the formation of PBH as part of (or the whole of) dark matter, while keeping the inflationary spectral tilt and the tensor-to-scalar ratio in good agreement with the
It relies on a weaker *average* notion of policy coverage (compared to the $\\ell_\\infty$ single-policy concentrability) that exploits the structure of policy visitations.\n\n 3. It outperforms the data-collection behavior policy over a wide-range of hyperparameter and is the first algorithm to do so *without* solving a minimax optimization problem.\n\nbibliography:\n- 'references.bib'\ntitle: 'Importance Weighted Actor-Critic for Optimal Conservative Offline Reinforcement Learning'\n---\n\nIntroduction {#sec:intro}\n============\n\nOffline reinforcement learning (RL) algorithms aim at learning a good policy based only on historical interaction data. This paradigm allows for" +"---\nabstract: |\n We initiate the study of strategic behavior in screening processes with *multiple* classifiers. We focus on two contrasting settings: a \u201cconjunctive\u201d setting in which an individual must satisfy all classifiers simultaneously, and a sequential setting in which an individual to succeed must satisfy classifiers one at a time. In other words, we introduce the combination of *strategic classification* with screening processes.\n\n We show that sequential screening pipelines exhibit new and surprising behavior where individuals can exploit the sequential ordering of the tests to \u201czig-zag\u201d between classifiers without having to simultaneously satisfy all of them. We demonstrate an individual can obtain a positive outcome using a limited manipulation budget even when far from the intersection of the positive regions of every classifier. Finally, we consider a learner whose goal is to design a sequential screening process that is robust to such manipulations, and provide a construction for the learner that optimizes a natural objective.\nauthor:\n- 'Lee Cohen[^1]'\n- 'Saeed Sharifi-Malvajerdi[^2]'\n- 'Kevin Stangl[^3]'\n- 'Ali Vakilian[^4]'\n- 'Juba Ziani [^5]'\nbibliography:\n- 'strategic-pipeline.bib'\ntitle: Sequential Strategic Screening\n---\n\nIntroduction {#sec:intro}\n============\n\nScreening processes\u00a0[@arunachaleswaran2022pipeline; @blum2022multi; @cohen2019efficient] involve evaluating and selecting individuals for a specific, pre-defined purpose, such" +"---\nabstract: 'Finding the ground state of a quantum many-body system is a fundamental problem in quantum physics. In this work, we give a classical machine learning (ML) algorithm for predicting ground state properties with an inductive bias encoding geometric locality. The proposed ML model can efficiently predict ground state properties of an $n$-qubit gapped local Hamiltonian after learning from only $\\mathcal{O}(\\log(n))$ data about other Hamiltonians in the same quantum phase of matter. This improves substantially upon previous results that require $\\mathcal{O}(n^c)$ data for a large constant $c$. Furthermore, the training and prediction time of the proposed ML model scale as $\\mathcal{O}(n \\log n)$ in the number of qubits $n$. Numerical experiments on physical systems with up to $45$ qubits confirm the favorable scaling in predicting ground state properties using a small training dataset.'\nauthor:\n- Laura Lewis\n- 'Hsin-Yuan Huang'\n- 'Viet T. Tran'\n- Sebastian Lehner\n- Richard Kueng\n- John Preskill\ntitle: |\n Improved machine learning algorithm for\\\n predicting ground state properties\n---\n\nIntroduction\n============\n\nFinding the ground state of a quantum many-body system is a fundamental problem with far-reaching consequences for physics, materials science, and chemistry. 
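The A-Crab abstract above hinges on a critic with a small *average* (importance-weighted) Bellman error, as opposed to the usual per-sample squared error. Below is a rough NumPy sketch of that quantity under one plausible reading of the abstract; all names and shapes are illustrative, and this is not the authors' implementation:

```python
import numpy as np

def avg_weighted_bellman_error(w, q_sa, r, q_next, gamma=0.99):
    """Average importance-weighted Bellman error over an offline batch.

    w      : (N,) marginalized importance weights for (s, a) pairs
    q_sa   : (N,) critic values Q(s, a)
    r      : (N,) observed rewards
    q_next : (N,) E_{a' ~ pi}[Q(s', a')] under the current actor
    Unlike a squared TD loss, the weighted errors are averaged *before*
    taking the absolute value, which is the "average" notion above.
    """
    td = q_sa - (r + gamma * q_next)
    return np.abs(np.mean(w * td))

rng = np.random.default_rng(0)
N = 512
err = avg_weighted_bellman_error(
    w=rng.uniform(0.5, 2.0, N), q_sa=rng.normal(0, 1, N),
    r=rng.normal(0, 1, N), q_next=rng.normal(0, 1, N))
print(f"average weighted Bellman error: {err:.4f}")
```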
Many powerful methods [@HohenbergKohn; @NobelKohn; @CEPERLEY555; @SandvikSSE; @becca_sorella_2017; @DMRG1; @DMRG2]" +"---\nabstract: 'In many applications of online decision making, the environment is non-stationary and it is therefore crucial to use bandit algorithms that handle changes. Most existing approaches are designed to protect against non-smooth changes, constrained only by total variation or Lipschitzness over time, where they guarantee $\\tilde \\Theta(T^{2/3})$ regret. However, in practice environments are often changing [**smoothly**]{}, so such algorithms may incur higher-than-necessary regret in these settings and do not leverage information on the rate of change. We study a non-stationary two-armed bandits problem where we assume that an arm\u2019s mean reward is a $\\beta$-H\u00f6lder function over (normalized) time, meaning it is $(\\beta-1)$-times Lipschitz-continuously differentiable. We show the first separation between the smooth and non-smooth regimes by presenting a policy with $\\tilde O(T^{3/5})$ regret for $\\beta=2$. We complement this result by an ${\\Omega}(T^{(\\beta+1)/(2\\beta+1)})$ lower bound for any integer $\\beta\\ge 1$, which matches our upper bound for $\\beta=2$.'\nauthor:\n- 'Su Jia, Qian Xie, Nathan Kallus, Peter I. Frazier'\nbibliography:\n- 'bandits.bib'\ntitle: '**Smooth Non-Stationary Bandits**'\n---\n\nIntroduction\n============\n\nAs a fundamental variant of the MAB problem, non-stationary bandits provide a middleground between the stochastic bandits [@lai1985asymptotically] and adversarial bandits [@auer2002nonstochastic]. In the standard non-stationary model [@besbes2014stochastic], the mean reward" +"---\nabstract: 'We use 28 quasar fields with high-resolution (HIRES and UVES) spectroscopy from the MUSE Analysis of Gas Around Galaxies survey to study the connection between Ly$\\alpha$ emitters (LAEs) and metal-enriched ionized gas traced by \u00a0in absorption at redshift $z\\approx3-4$. In a sample of 220 \u00a0absorbers, we identify 143 LAEs connected to \u00a0gas within a line-of-sight separation $\\pm500\\rm\\,km\\,s^{-1}$, equal to a detection rate of $36\\pm5$ per cent once we account for multiple LAEs connected to the same \u00a0absorber. The luminosity function of LAEs associated with \u00a0absorbers shows a $\\approx 2.4$ higher normalization factor compared to the field. \u00a0with higher equivalent width and velocity width are associated with brighter LAEs or multiple galaxies, while weaker systems are less often identified near LAEs. The covering fraction in groups is up to $\\approx 3$ times larger than for isolated galaxies. Compared to the correlation between optically-thick \u00a0absorbers and LAEs, \u00a0systems are twice less likely to be found near LAEs especially at lower equivalent width. Similar results are found using \u00a0as tracer of ionized gas. We propose three components to model the gas environment of LAEs: i) the circumgalactic medium of galaxies, accounting for the strongest correlations between absorption and emission; ii) overdense" +"---\nabstract: 'We generalize a Maximum Principle for optimal control problems involving sweeping systems previously derived in [@nosso_2022] to cover the case where the moving set may be nonsmooth. Noteworthy, we consider problems with constrained end point. A remarkable feature of our work is that we rely upon an ingenious smooth approximating family of standard differential equations in the vein of that used in [@nosso_2019].'\nauthor:\n- 'M. d. R. de Pinho, M. 
Margarida A. Ferreira [^1] and Georgi Smirnov [^2]'\ntitle: A Maximum Principle for Optimal Control Problems involving Sweeping Processes with a Nonsmooth Set\n---\n\n**Keywords:** Sweeping Process Optimal Control, Maximum Principle, Approximations\n\nIntroduction\n============\n\nIn recent years, there has been a surge of interest in optimal control problems involving the controlled sweeping process of the form $$\\label{SP}\n\\dot x(t) \\in f(t,x(t),u(t))- N_{C(t)}(x(t)), ~u(t)\\in U, ~~x(0) \\in C_0.$$ In this respect, we refer to, for example, [@ArCo17], [@BrKr], [@MoCa17], [@CoPa16], [@CoHeHoMo], [@KuMa00], [@zeidan2020], [@nosso_2019] (see also accompanying correction [@correction_2019]), [@CCBN_2021], [@Palladino2022] and [@nosso_2022]. Sweeping processes first appeared in the seminal paper [@Mo74] by J.J. Moreau as a mathematical framework for problems in plasticity and friction theory. They have proved of interest to tackle problems in mechanics, engineering, economics" +"---\nabstract: 'High-dimensional biphoton states are promising resources for quantum applications, ranging from high-dimensional quantum communications to quantum imaging. A pivotal task is fully characterising these states, which is generally time-consuming and not scalable when projective measurement approaches are adopted. However, new advances in coincidence imaging technologies allow for overcoming these limitations by parallelising multiple measurements. Here, we introduce biphoton digital holography, in analogy to off-axis digital holography, where coincidence imaging of the superposition of an unknown state with a reference one is used to perform quantum state tomography. We apply this approach to single photons emitted by spontaneous parametric down-conversion in a nonlinear crystal when the pump photons possess various quantum states. The proposed reconstruction technique allows for a more efficient (3 order-of-magnitude faster) and reliable (an average fidelity of 87%) characterisation of states in arbitrary spatial modes bases, compared with previously performed experiments. Multi-photon digital holography may pave the route toward efficient and accurate computational ghost imaging and high-dimensional quantum information processing.'\naddress:\n- 'Dipartimento di Fisica, Sapienza Universit\u00e0 di Roma, Piazzale Aldo Moro 5, I-00185 Roma, Italy'\n- 'Nexus for Quantum Technologies, University of Ottawa, Ottawa, K1N 6N5, ON, Canada'\n- 'Nexus for Quantum Technologies, University of" +"---\nabstract: 'We report the discovery with [*TESS*]{}\u00a0of a third set of eclipses from V994 Herculis (TIC 424508303), previously only known as a doubly-eclipsing system. The key implication of this discovery and our analyses is that V994 Her is the second fully-characterized (2+2) + 2 sextuple system, in which all three binaries eclipse. In this work, we use a combination of ground-based observations and [*TESS*]{}\u00a0data to analyze the eclipses of binaries A and B in order to update the parameters of the inner quadruple\u2019s orbit (with a derived period of 1062 $\\pm$ 2d). The eclipses of binary C that were detected in the [*TESS*]{}\u00a0data were also found in older ground-based observations, as well as in more recently obtained observations. The eclipse timing variations of all three pairs were studied in order to detect the mutual perturbations of their constituent stars, as well as those of the inner pairs in the (2+2) core. 
At the longest periods they arise from apsidal motion, which may help constraining parameters of the component stars\u2019 internal structure. We also discuss the relative proximity of the periods of binaries A and B to a 3:2 mean motion resonance. This work represents a step" +"---\nabstract: 'Team diversity can be seen as a double-edged sword. It brings additional cognitive resources to teams at the risk of increased conflict. Few studies have investigated how different types of diversity impact software teams. This study views diversity through the lens of the *categorization-elaboration model (CEM)*. We investigated how diversity in gender, age, role, and cultural background impacts team effectiveness and conflict, and how these associations are moderated by psychological safety. Our sample consisted of 1,118 participants from 161 teams and was analyzed with Covariance-Based Structural Equation Modeling (CB-SEM). We found a positive effect of age diversity on team effectiveness and gender diversity on relational conflict. Psychological safety contributed directly to effective teamwork and less conflict but did not moderate the diversity-effectiveness link. While our results are consistent with the CEM theory for age and gender diversity, other types of diversity did not yield similar results. We discuss several reasons for this, including curvilinear effects, moderators such as task interdependence, or the presence of a diversity mindset. With this paper, we argue that a dichotomous nature of diversity is oversimplified. Indeed, it is a complex relationship where context plays a pivotal role. A more nuanced understanding of diversity" +"---\nabstract: 'ALMA observations of the disk around HD 163296 have resolved a crescent-shape substructure at around 55 au, inside and off-center from a gap in the dust that extends from 38 au to 62 au. In this work we propose that both the crescent and the dust rings are caused by a compact pair (period ratio $\\simeq 4:3$) of sub-Saturn-mass planets inside the gap, with the crescent corresponding to dust trapped at the $L_5$ Lagrange point of the outer planet. This interpretation also reproduces well the gap in the gas recently measured from the CO observations, which is shallower than what is expected in a model where the gap is carved by a single planet. Building on previous works arguing for outer planets at $\\approx 86$ and $\\approx 137$ au, we provide with a global model of the disk that best reproduces the data and show that all four planets may fall into a long resonant chain, with the outer three planets in a 1:2:4 Laplace resonance. We show that this configuration is not only an expected outcome from disk-planet interaction in this system, but it can also help constraining the radial and angular position of the planet candidates" +"---\nabstract: 'In this work we study the neutron star phenomenology of $R^p$ attractor theories in the Einstein frame. The Einstein frame $R^p$ attractor theories have the attractor property that they originate from a large class of Jordan frame scalar theories with arbitrary non-minimal coupling. These theories in the Einstein frame provide a viable class of inflationary models, and in this work we investigate their implications on static neutron stars. We numerically solve the Tolman-Oppenheimer-Volkoff equations in the Einstein frame, for three distinct equations of state, and we provide the mass-radius diagrams for several cases of interest of the $R^p$ attractor theories. 
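The $R^p$-attractor abstract above rests on numerically solving the Tolman-Oppenheimer-Volkoff (TOV) equations to obtain mass-radius diagrams. For orientation, here is a minimal TOV integrator for a simple $\Gamma = 2$ polytrope in geometric units; the polytropic parameters and central densities are illustrative stand-ins for the realistic equations of state used in the paper:

```python
import numpy as np

# Minimal TOV integrator for a Gamma = 2 polytrope, P = K * rho**Gamma,
# in geometric units G = c = Msun = 1 (one length unit ~ 1.4766 km).
K, GAMMA = 100.0, 2.0

def rhs(r, y):
    P, m = y
    if P <= 0.0:
        return np.zeros(2)
    rho = (P / K) ** (1.0 / GAMMA)
    dPdr = -(rho + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    dmdr = 4.0 * np.pi * r**2 * rho
    return np.array([dPdr, dmdr])

def solve_tov(rho_c, dr=1e-4):
    r, y = dr, np.array([K * rho_c**GAMMA, 0.0])
    while y[0] > 1e-12:                 # integrate outwards until P -> 0
        k1 = rhs(r, y); k2 = rhs(r + dr/2, y + dr*k1/2)
        k3 = rhs(r + dr/2, y + dr*k2/2); k4 = rhs(r + dr, y + dr*k3)
        y = y + dr * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        r += dr
    return r * 1.4766, y[1]             # radius [km], mass [Msun]

for rho_c in (5e-4, 1.28e-3, 3e-3):     # central densities, code units
    R, M = solve_tov(rho_c)
    print(f"rho_c = {rho_c:.2e} -> R = {R:6.2f} km, M = {M:5.3f} Msun")
```

Sweeping the central density traces out one mass-radius curve; modified-gravity studies such as the one above repeat this with field equations that add correction terms to the right-hand sides.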
We confront the results with several timely constraints on the radii of specific mass neutron stars, and as we show, only a few cases corresponding to specific equations of state pass the stringent tests on neutron stars phenomenology.'\nauthor:\n- |\n Vasilis K. Oikonomou$^{1,2}$\\\n $^{1}$ Department of Physics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece\\\n $^{2}$ Institut f\u00fcr Theoretische Physik, Goethe Universit\u00e4t Frankfurt, Max-von-Laue-Str.1, 60438 Frankfurt am Main, Germany\ntitle: $R^p$ Attractors Static Neutron Star Phenomenology\n---\n\n\\[firstpage\\]\n\nstars: neutron; Physical Data and Processes, cosmology: theory\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe direct gravitational wave observation GW170817 [@TheLIGOScientific:2017qsa; @Abbott:2020khf]" +"---\nabstract: 'The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very large-scale models (100+ billion parameters). We show that such abilities can, in fact, be distilled down from GPT-3.5 ($\\ge$ 175B) to T5 variants ($\\le$ 11B). We propose *model specialization*, to specialize the model\u2019s ability towards a target task. The hypothesis is that large models (commonly viewed as larger than 100B) have strong modeling power, but are spread on a large spectrum of tasks. Small models (commonly viewed as smaller than 10B) have limited model capacity, but if we concentrate their capacity on a specific target task, the model can achieve a decent improved performance. We use multi-step math reasoning as our testbed because it is a very typical emergent ability. We show two important aspects of model abilities: (1). there exists a very complex balance/ tradeoff between language models\u2019 multi-dimensional abilities; (2). by paying the price of decreased generic ability, we can clearly lift up the scaling curve of models smaller than 10B towards a specialized multi-step math reasoning ability. We further give comprehensive discussions about important design choices for better generalization," +"---\nabstract: 'Atom controlled sub-nanometer MoS$_2$ pores have been recently fabricated. Oxidative environments are of particular interest for MoS$_2$ applications in electronics, sensing and energy storage. In this work we carried out first-principles calculations of oxygen adsorption in plain and sub-nanometer MoS$_2$ nanopores. The chemical stability of the layers and pores towards oxygen was verified using density-functional theory. Dissociation and diffusion barriers have been calculated in order to understand surface and pore oxidation and its electronic properties at the atomic scale.'\nauthor:\n- Murilo Kendjy Onita\n- Flavio Bento de Oliveira\n- Andr\u00e9ia Luisa da Rosa\ntitle: 'Interaction of oxygen with pristine and defective $\\rm MoS_2$ monolayers'\n---\n\nIntroduction\n============\n\nOwing to their fascinating properties, two dimensional transition metal dichalcogenides (TMDs) have been explored for a variety of applications, including electronics and optoelectronics, photonics, catalysis and energy storage[@Feng2015; @Feng2016; @Nature2016; @Karmodak2021; @Bhim2021]. In particular, molybdenum disulfide (MoS$_2$), the most promising TMDCs, is efficiently exfoliated in monolayer or multilayers[@Santosh2015; @NatComm2017]. 
Recently, the fabrication of MoS$_2$ sub-nanometer pores has offered several opportunities, making them promising candidates for several technological applications such as membranes for DNA translocation [@Feng2015; @Sen2021; @Graf2019], water filtration and desalination [@Cao2020; @Kou2016; @Nature2015; @Wang2021], energy harvesting [@Graf2019a] and the hydrogen evolution reaction [@Wu2019; @Li2019; @Frenkeldefects2022].\n\nControl" +"---\nauthor:\n- 'H[\u00e9]{}ctor R. Olivares S.'\n- 'Monika A. Mo[\u015b]{}cibrodzka'\n- Oliver Porth\nbibliography:\n- 'references.bib'\ndate: \ntitle: General relativistic hydrodynamic simulations of perturbed transonic accretion\n---\n\n[Comparison of horizon-scale observations of Sgr\u00a0A\\* and M87\\* with numerical simulations has provided considerable insight into their interpretation. Most of these simulations are variations of the same physical scenario consisting of a rotation-supported torus seeded with a poloidal magnetic field. However, this approach has several well-known limitations such as secular decreasing trends in mass accretion rate that render long-term variability studies difficult, a lack of connection with the large-scale accretion flow which is replaced by an artificial medium emulating vacuum, and important differences with respect to the predictions of models of accretion onto Sgr\u00a0A\\* fed by stellar winds.]{} [We aim to study the flow patterns that arise at horizon scales in more general accretion scenarios that have a clearer connection with the large-scale flow and are at the same time controlled by a reduced set of parameters.]{} [As a first step in this direction, we perform three-dimensional general relativistic hydrodynamic simulations of rotating transonic flows with velocity perturbations injected from a spherical boundary located far" +"---\nabstract: 'We employ the Feedback In Realistic Environments (FIRE-2) physics model to study how the properties of giant molecular clouds (GMCs) evolve during galaxy mergers. We conduct a pixel-by-pixel analysis of molecular gas properties in both the simulated control galaxies and galaxy major mergers. The simulated GMC-pixels in the control galaxies follow a similar trend in a diagram of velocity dispersion ($\\sigma_v$) versus gas surface density ($\\Sigma_{\\mathrm{mol}}$) to the one observed in local spiral galaxies in the Physics at High Angular resolution in Nearby GalaxieS (PHANGS) survey. For GMC-pixels in simulated mergers, we see a significant increase of a factor of 5 \u2013 10 in both $\\Sigma_{\\mathrm{mol}}$ and $\\sigma_v$, which puts these pixels above the trend of PHANGS galaxies in the $\\sigma_v$ vs $\\Sigma_{\\mathrm{mol}}$ diagram. This deviation may indicate that GMCs in the simulated mergers are much less gravitationally bound compared with simulated control galaxies, with the virial parameter ($\\alpha_{\\mathrm{vir}}$) reaching 10 \u2013 100. Furthermore, we find that the increase in $\\alpha_{\\mathrm{vir}}$ happens at the same time as the increase in global star formation rate (SFR), which suggests stellar feedback is responsible for dispersing the gas. 
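The FIRE-2 abstract above characterises pixels by the virial parameter $\alpha_{\mathrm{vir}} = 5\sigma_v^2 R / (G M)$. A small sketch of that pixel-wise estimate, assuming the pixel mass follows from the surface density as $M = \Sigma_{\mathrm{mol}}\,\pi R^2$ (the input values below are made up purely to contrast a disc-like with a merger-like pixel):

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc * (km/s)^2 / Msun

def virial_parameter(sigma_v, Sigma_mol, R_pc):
    """Pixel-wise virial parameter alpha_vir = 5 sigma_v^2 R / (G M),
    with the pixel mass estimated as M = Sigma_mol * pi * R^2.

    sigma_v   : velocity dispersion [km/s]
    Sigma_mol : molecular gas surface density [Msun/pc^2]
    R_pc      : effective pixel (beam) radius [pc]
    """
    M = Sigma_mol * np.pi * R_pc**2
    return 5.0 * sigma_v**2 * R_pc / (G * M)

# Disc-like vs. merger-like pixel (illustrative numbers only).
print(virial_parameter(sigma_v=5.0,  Sigma_mol=100.0, R_pc=60.0))  # ~1.5, bound
print(virial_parameter(sigma_v=40.0, Sigma_mol=800.0, R_pc=60.0))  # ~12, unbound
```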
We also find that the gas depletion time is significantly lower for high [$\\alpha_{\\mathrm{vir}}$]{} GMCs during" +"---\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'sn-bibliography.bib'\ntitle: 'Physics-agnostic and Physics-infused machine learning for thin films flows: modeling, and predictions from small data'\n---\n\nIntroduction {#sec1}\n============\n\nThe study of multiphase flows is often limited by the computational effort involved in solving the Navier-Stokes equations [@glasser1997fully]. One such example, the flow of thin films of liquid on inclined planes, has fascinated researchers not only because of the wide range of industrial applications but also because of the interesting dynamics of the liquid-air interface [@Kalliadasis2012]. The Navier-Stokes (NS) equations accurately describe the fluid motion and also the evolution of the surface but suffer from high computational cost [@Pettas2019a; @Pettas2019b]. To this end, significant effort has led to several approximate interface evolution equations that are much simpler to solve but are nevertheless valid under specific assumptions and limitations. Beyond their limits of validity, it is often found that they yield nonphysical solutions, or even blow up [@Kalliadasis2012], posing significant restrictions to their applicability.\n\nIn order to drastically enable Computational Fluid Dynamics and break new barriers in flow control, uncertainty quantification and shape optimization, it is crucial to develop novel, robust and efficient data-driven/data-assisted models that combine physical and mathematical" +"---\nabstract: 'Large language models (LLMs) have shown promise for automatic summarization but the reasons behind their successes are poorly understood. By conducting a human evaluation on ten LLMs across different pretraining methods, prompts, and model scales, we make two important observations. First, we find instruction tuning, and not model size, is the key to the LLM\u2019s zero-shot summarization capability. Second, existing studies have been limited by low-quality references, leading to underestimates of human performance and lower few-shot and finetuning performance. To better evaluate LLMs, we perform human evaluation over high-quality summaries we collect from freelance writers. Despite major stylistic differences such as the amount of paraphrasing, we find that LLM summaries are judged to be on par with human-written summaries.'\nauthor:\n- |\n  Tianyi Zhang$\\phantom{}^{1}$[[^1]]{}, Faisal Ladhak$\\phantom{}^{2*}$, Esin Durmus$\\phantom{}^{1}$, Percy Liang$\\phantom{}^{1}$,\\\n  **Kathleen McKeown$\\phantom{}^{2}$, Tatsunori B. Hashimoto$\\phantom{}^{1}$**\\\n  $\\phantom{}^{1}$Stanford University $\\phantom{}^{2}$Columbia University\\\nbibliography:\n- 'tacl2021.bib'\ntitle: Benchmarking Large Language Models for News Summarization\n---\n\nIntroduction\n============\n\nLarge language models (LLMs) have shown promising results in zero-/few-shot tasks across a wide range of domains\u00a0[@palm; @anthropic; @gpt3; @opt] and raised significant interest for their potential for automatic summarization\u00a0[@gpt3-era; @Liu2022RevisitingTG]. However, the design decisions contributing to their success on summarization remain" +"---\nauthor:\n- 'Evan Grohs [^1] and George M. 
Fuller'\nbibliography:\n- 'references.bib'\ntitle: Big Bang Nucleosynthesis\n---\n\nIntroduction\n============\n\nThe success of Big Bang Nucleosynthesis (BBN) theory in predicting the primordial abundances of helium and deuterium and the baryon (ordinary matter) content of the universe represents one of the greatest triumphs of modern physics \\[see for reviews of the various physical phenomena present in BBN\\]. It is all the more remarkable that this success is born of very simplistic assumptions about the universe and its evolution. These are: (1) General Relativity (GR) is a correct description of spacetime dynamics and that the distribution of mass-energy on any 3-dimensional spacelike hypersurface at a given value of the time $t$ (age of the universe) is homogeneous and isotropic; and (2) that the standard model of particle physics and, more specifically, simple nuclear physics obtains at very early times in the history of the universe.\n\nIn fact, the Friedmann-LeMa\u00eetre-Robertson-Walker metric, the solution to the field equations in the symmetry implied by homogeneity and isotropy \\[see [@1973grav.book.....M]\\], was worked out in [@1917SPAW.......142E] shortly after Einstein\u2019s original work on GR. [@1922ZPhy...10..377F] and [@1924ZPhy...21..326F] showed that the solutions to the GR field equations led to" +"---\nabstract: 'Methods to detect malignant lesions from screening mammograms are usually trained with fully annotated datasets, where images are labelled with the localisation and classification of cancerous lesions. However, real-world screening mammogram datasets commonly have a subset that is fully annotated and another subset that is weakly annotated with just the global classification (i.e., without lesion localisation). Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it. The first option will reduce detection accuracy because it does not use the whole dataset, and the second option is too expensive given that the annotation needs to be done by expert radiologists. In this paper, we propose a middle-ground solution for the dilemma, which is to formulate the training as a weakly- and semi-supervised learning problem that we refer to as malignant breast lesion detection with incomplete annotations. To address this problem, our new method comprises two stages, namely: 1) pre-training a multi-view mammogram classifier with weak supervision from the whole dataset, and 2) extending the trained classifier to become a multi-view detector that is trained with semi-supervised student-teacher learning, where the training set contains" +"---\nabstract: 'Observations with the Hubble Space Telescope unexpectedly revealed that the dwarf galaxy ESO006-001 is a near neighbor to the Local Group at a distance of $2.70\\pm0.11$\u00a0Mpc. The stellar population in the galaxy is well resolved into individual stars to a limit of $M_I\\sim-0.5$\u00a0mag. The dominant population is older than 12 Gyr yet displays a significant range in metallicity of $-2 < \\rm{[Fe/H]} < -1$, as evidenced by a Red Giant Branch with substantial width. Superimposed on the dominant population are stars on the Main Sequence with ages less than 100\u00a0Myr and Helium burning Blue Loop stars with ages of several hundred Myr. ESO006-001 is an example of a transition dwarf; a galaxy dominated by old stars but one that has experienced limited recent star formation in a swath near the center. 
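The BBN discussion above rests on the Friedmann equation for a radiation-dominated universe, which fixes a simple time-temperature relation. A minimal sketch, assuming the standard approximation $t \approx 2.4\, g_*^{-1/2} (T/\mathrm{MeV})^{-2}\,$s that follows from $H^2 = (8\pi G/3)(\pi^2/30) g_* T^4$ and $t = 1/(2H)$; note $g_*$ is temperature dependent, and the value 10.75 applies only above $e^\pm$ annihilation:

```python
import numpy as np

def age_at_temperature(T_MeV, g_star=10.75):
    """Radiation-dominated time-temperature relation,
    t [s] ~ 2.4 / sqrt(g_*) * (T / 1 MeV)^(-2).
    g_* = 10.75 counts photons, e+/e- pairs and three neutrino species."""
    return 2.4 / np.sqrt(g_star) * T_MeV**-2.0

# Weak freeze-out happens near 1 MeV, nucleosynthesis below ~0.1 MeV.
for T in (10.0, 1.0, 0.1):
    print(f"T = {T:5.2f} MeV  ->  t ~ {age_at_temperature(T):8.2f} s")
```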
No [H[i]{}]{}\u00a0gas is detected at the location of the optical galaxy in spite of the evidence for young stars. Intriguingly, an [H[i]{}]{}\u00a0cloud with a similar redshift is detected 9 kpc away in projection. Otherwise, ESO006-001 is a galaxy in isolation with its nearest known neighbor IC3104, itself a dwarf, at a distance of $\\sim 500$\u00a0kpc.'\nauthor:\n- 'Lidia N. Makarova'\n-" +"---\nabstract: 'Artificial learners often behave differently from human learners in the context of neural agent-based simulations of language emergence and change. A common explanation is the lack of appropriate cognitive biases in these learners. However, it has also been proposed that more naturalistic settings of language learning and use could lead to more human-like results. We investigate this latter account focusing on the word-order/case-marking trade-off, a widely attested language universal that has proven particularly hard to simulate. We propose a new Neural-agent Language Learning and Communication framework where pairs of speaking and listening agents first learn a miniature language via supervised learning, and then optimize it for communication via reinforcement learning. Following closely the setup of earlier human experiments, we succeed in replicating the trade-off with the new framework without hard-coding specific biases in the agents. We see this as an essential step towards the investigation of language universals with neural learners.'\nauthor:\n- |\n Yuchen Lian$^\\diamond$ $^\\dagger$ Arianna Bisazza$^\\ddagger$ Tessa Verhoef$^\\dagger$\\\n \u00a0\\\n $^\\diamond$Faculty of Electronic and Information Engineering, Xi\u2019an Jiaotong University\\\n $^\\dagger$Leiden Institute of Advanced Computer Science, Leiden University\\\n `{y.lian, t.verhoef}@liacs.leidenuniv.nl`\\\n $^\\ddagger$Center for Language and Cognition, University of Groningen\\\n `a.bisazza@rug.nl`\nbibliography:\n- 'tacl2021.bib'\n- 'anthology.bib'\ntitle: |\n Communication" +"---\nauthor:\n- Simone Veronese\n- 'W. J. G. de Blok'\n- 'F. Walter'\nbibliography:\n- 'reference.bib'\ndate: 'Received <date> / Accepted <date>'\ntitle: Extended neutral hydrogen filamentary network in NGC 2403\n---\n\nIntroduction {#sec:intro}\n============\n\nGalaxies form stars through the collapsing of giant molecular (mainly molecular hydrogen, H[ii]{}) clouds on timescales of $\\sim10^7$ yr [@meidt15; @schinnerer19; @walter20] and over the cosmic time [@madau14] galaxies progressively deplete their molecular gas content. In a perfect steady state, the H[ii]{} reservoir is replenished by the cooling of the atomic hydrogen (H[i]{}) in the interstellar medium [@clark12; @walch15], which will cause a reduction in the content of the atomic gas. However, both simulations and observations reveal that the H[i]{} content in galaxies is almost constant from $z\\sim1$ [@dave17; @chen21]. Consequently, in order to maintain star formation over cosmic time, galaxies must accrete H[i]{}.\\\nPrevious studies of H[i]{} interaction features and dwarf galaxies have shown that they do not provide enough cold gas to replenish the material reservoir for star formation [@sancisi08; @putman12; @blok20]. Other extraplanar gas observed closer to the galaxy disks is usually related to galactic fountains [@putman12; @li21; @marasco22]: star formation and supernova explosions eject interstellar medium from the disk to" +"---\nabstract: 'Colloidal gels consist of percolating networks of interconnected arms. 
Their mechanical properties depend on the individual arms, and on their collective behaviour. We use numerical simulations to pull on a single arm, built from a model colloidal gel-former with short-ranged attractive interactions. Under elongation, the arm breaks by a necking instability. We analyse this behaviour at three different length scales: a rheological continuum model of the whole arm; a microscopic analysis of the particle structure and dynamics; and the local stress tensor. Combining these different measurements gives a coherent picture of the necking and failure: the neck is characterised by plastic flow that occurs for stresses close to the arm\u2019s yield stress. The arm has an amorphous local structure and has large residual stresses from its initialisation. We find that neck formation is associated with increased plastic flow, a reduction in the stability of the local structure, and a reduction in the residual stresses; this indicates that [the]{} system loses its solid character and starts to behave more like a viscous fluid. We discuss the implications of these results for the modelling of gel dynamics.'\nauthor:\n- Kristian Thijssen\n- 'Tanniemola B. Liverpool'\n- 'C. Patrick Royall'\n-" +"---\nabstract: '[ Simultaneous optimization of multiple objective functions results in a set of trade-off, or Pareto, solutions. Choosing a, in some sense, best solution in this set is in general a challenging task: In the case of three or more objectives the Pareto front is usually difficult to view, if not impossible, and even in the case of just two objectives constructing the whole Pareto front so as to visually inspect it might be very costly. Therefore, optimization over the Pareto (or efficient) set has been an active area of research. Although there is a wealth of literature involving finite dimensional optimization problems in this area, there is a lack of problem formulation and numerical methods for optimal control problems, except for the convex case. In this paper, we formulate the problem of optimizing over the Pareto front of nonconvex constrained and time-delayed optimal control problems as a bi-level optimization problem. Motivated by existing solution differentiability results, we propose an algorithm incorporating (i) the Chebyshev scalarization, (ii) a concept of the essential interval of weights, and (iii) the simple but effective bisection method, for optimal control problems with two objectives. We illustrate the working of the algorithm on two" +"---\nabstract: 'Millimetre-wave (mmWave) radars can generate 3D point clouds to represent objects in the scene. However, the accuracy and density of the generated point cloud can be lower than a laser sensor. Although researchers have used mmWave radars for various applications, there are few quantitative evaluations on the quality of the point cloud generated by the radar and there is a lack of a standard on how this quality can be assessed. This work aims to fill the gap in the literature. A radar simulator is built to evaluate the most common data processing chains of 3D point cloud construction and to examine the capability of the mmWave radar as a 3D imaging sensor under various factors. It will be shown that the radar detection can be noisy and have an imbalance distribution. 
To address the problem, a novel super-resolution point cloud construction (SRPC) algorithm is proposed to improve the spatial resolution of the point cloud and is shown to be able to produce a more natural point cloud and reduce outliers.'\nauthor:\n- \nbibliography:\n- 'ref.bib'\ntitle: 'Millimetre-wave Radar for Low-Cost 3D Imaging: A Performance Study'\n---\n\nmmWave radar, 3D imaging, point cloud\n\nIntroduction\n============\n\nMillimetre-wave (mmWave) radars" +"---\nabstract: 'Accurate acne detection plays a crucial role in acquiring precise diagnosis and conducting proper therapy. However, the ambiguous boundaries and arbitrary dimensions of acne lesions severely limit the performance of existing methods. In this paper, we address these challenges via a novel Decoupled Sequential Detection Head (DSDH), which can be easily adopted by mainstream two-stage detectors. DSDH brings two simple but effective improvements to acne detection. Firstly, the offset and scaling tasks are explicitly introduced, and their incompatibility is settled by our task-decouple mechanism, which improves the capability of predicting the location and size of acne lesions. Second, we propose the task-sequence mechanism, and execute offset and scaling sequentially to gain a more comprehensive insight into the dimensions of acne lesions. In addition, we build a high-quality acne detection dataset named ACNE-DET to verify the effectiveness of DSDH. Experiments on ACNE-DET and the public benchmark ACNE04 show that our method outperforms the state-of-the-art methods by significant margins. Our code and dataset are publicly available at (temporarily anonymous).'\nauthor:\n- Xin Wei$^1$\n- Lei Zhang$^1$\n- Jianwei Zhang$^1$\n- Junyou Wang$^1$\n- Wenjie Liu$^1$\n- |\n Jiaqi Li$^2$ Xian Jiang$^2$ $^1$College of Computer Science, Sichuan University, Chengdu 610065, China\\" +"---\nabstract: 'Dimensionality reduction is a crucial technique in data analysis, as it allows for the efficient visualization and understanding of high-dimensional datasets. The circular coordinate is one of the topological data analysis techniques associated with dimensionality reduction but can be sensitive to variations in density. To address this issue, we propose new circular coordinates to extract robust and density-independent features. Our new methods generate a new coordinate system that depends on a shape of an underlying manifold preserving topological structures. We demonstrate the effectiveness of our methods through extensive experiments on synthetic and real-world datasets.'\naddress:\n- 'Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University'\n- 'Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University'\nauthor:\n- Taejin Paik\n- Jaemin Park\nbibliography:\n- 'main.bib'\ntitle: 'Circular Coordinates for Density-Robust Analysis'\n---\n\nIntroduction\n============\n\nDimensionality reduction allows us to understand high-dimensional data and gives us intuitive information about a dataset. One of the key challenges in this area is preserving the intrinsic topological structure. 
Different dimensionality reduction strategies try to handle this problem in different ways.\n\nPrincipal component analysis (PCA) [@pearson1901liii; @hotelling1933analysis] is one of the most basic techniques for linear dimensionality" +"---\nabstract: 'Magnetic resonance imaging (MRI) is a common technique to scan brains for strokes, tumors, and other abnormalities that cause forms of dementia. However, correctly diagnosing forms of dementia from MRIs is difficult, as nearly 1 in 3 patients with Alzheimer\u2019s were misdiagnosed in 2019, an issue neural networks can rectify. The performance of these neural networks has been shown to be improved by applying quantum algorithms. This proposed novel neural network architecture uses a fully-connected (FC) layer, which reduces the number of features to obtain an expectation value by implementing a variational quantum circuit (VQC). This study found that the proposed hybrid quantum-classical convolutional neural network (QCCNN) provided 97.5% and 95.1% testing and validation accuracies, respectively, which was considerably higher than the classical neural network (CNN) testing and validation accuracies of 91.5% and 89.2%. Additionally, using a testing set of 100 normal and 100 dementia MRI images, the QCCNN detected normal and demented images correctly 95% and 98% of the time, compared to the CNN accuracies of 89% and 91%. With hospitals like Massachusetts General Hospital beginning to adopt machine learning applications for biomedical image detection, this proposed architecture would improve accuracies and potentially save more lives. Furthermore," +"---\nabstract: 'We propose a highly efficient and fast method of translational cooling for high-angular-momentum atoms. Optical pumping and stimulated transitions, combined with magnetic forces, can be used to compress phase-space density, and the efficiency of each compression step increases with the angular momentum. Entropy is removed by spontaneously emitted photons, and particle number is conserved. This method may be an attractive alternative to evaporative cooling of atoms and possibly molecules in order to produce quantum degenerate gases.'\naddress:\n- '$^1$ Department of Physics, The University of Texas at Austin, Austin, Texas, 78712, USA'\n- '$^2$Johannes Gutenberg-Universit[\u00e4]{}t Mainz, Helmholtz-Institut Mainz, GSI Helmholtzzentrum f[\u00fc]{}r Schwerionenforschung, 55128 Mainz, Germany'\n- '$^3$ Department of Physics, University of California, Berkeley, California 94720, USA'\n- '$^4$Rochester Scientific, LLC, El Cerrito, California 94530, USA'\nauthor:\n- 'Logan E. Hillberry$^1$, Dmitry Budker$^{2,3}$, Simon M. Rochester$^4$, Mark G. Raizen$^1$'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Efficient cooling of high-angular-momentum atoms'\n---\n\nMay 2023\n\n[*Keywords*]{}: atomic physics, cold atoms, phase-space compression\n\nIntroduction\n============\n\nLaser cooling, first proposed almost half a century ago, remains the standard approach for producing ultracold atoms [@metcalf1999laser; @schreck2021laser]. This method relies on momentum transfer from light to atoms as photons are repeatedly scattered, enabling the production and" +"---\nabstract: 'Virtual Mental Health Assistants (VMHAs) have become a prevalent method for receiving mental health counseling in the digital healthcare space. 
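As a baseline for the circular-coordinates discussion above, linear PCA can be written in a few lines via the SVD; unlike the topology-preserving coordinates proposed in that abstract, it only captures directions of maximal variance. A generic sketch, not tied to any particular paper's code:

```python
import numpy as np

def pca(X, n_components=2):
    """Basic linear dimensionality reduction via PCA: center the data,
    take the SVD, and project onto the top right-singular vectors
    (the principal directions)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # (N, n_components) scores

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.1])  # anisotropic cloud
Y = pca(X, n_components=2)
print(Y.shape)  # (200, 2)
```

A method like this flattens any loop in the data, which is exactly the structure circular coordinates are designed to retain.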
An assistive counseling conversation commences with natural open-ended topics to familiarize the client with the environment and later converges into more fine-grained domain-specific topics. Unlike other conversational systems, which are categorized as open-domain or task-oriented systems, VMHAs possess a hybrid conversational flow. These counseling bots need to comprehend various aspects of the conversation, such as dialogue-acts, intents, etc., to engage the client in an effective and appropriate conversation. Although the surge in digital health research highlights applications of many general-purpose response generation systems, they are barely suitable in the mental health domain \u2013 the prime reason is the lack of understanding in the mental health counseling conversation. Moreover, in general, dialogue-act guided response generators are either limited to a template-based paradigm or lack appropriate semantics in dialogue generation. To this end, we propose [`READER`]{}\u00a0\u2013 a **RE**sponse-[**A**]{}ct guided reinforced **D**ialogue gen**ER**ation model for the mental health counseling conversations. [`READER`]{}\u00a0is built on transformer to jointly predict a potential dialogue-act $d_{t+1}$ for the next utterance (*aka* response-act) and to generate an appropriate response ($u_{t+1}$). Through the transformer-reinforcement-learning (TRL) with" +"---\nabstract: 'Preferential attachment (PA) network models have a wide range of applications in various scientific disciplines. Efficient generation of large-scale PA networks helps uncover their structural properties and facilitate the development of associated analytical methodologies. Existing software packages only provide limited functions for this purpose with restricted configurations and efficiency. We present a generic, user-friendly implementation of weighted, directed PA network generation with package . The core algorithm is based on an efficient binary tree approach. The package further allows adding multiple edges at a time, heterogeneous reciprocal edges, and user-specified preference functions. The engine under the hood is implemented in . Usages of the package are illustrated with detailed explanation. A benchmark study shows that is efficient for generating general PA networks not available in other packages. In restricted settings that can be handled by existing packages, provides comparable efficiency.'\naddress:\n- 'Department of Statistics, , '\n- 'Shanghai Center for Mathematical Sciences, , '\n- 'Department of Biostatistics, , '\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'generatePA.bib'\ntitle: Generating General Preferential Attachment Networks with Package \n---\n\n.\n\nIntroduction {#sec:intro}\n============\n\nPreferential attachment (PA) networks are important network models in scientific research. The standard PA model\u00a0[@Barabasi1999emergence]" +"---\nabstract: 'The design of feedback channels in frequency division duplex (FDD) systems is a major challenge because of the limited available feedback bits. We consider non-orthogonal multiple access (NOMA) systems that incorporate reconfigurable intelligent surfaces (RISs). In limited feedback RIS-aided NOMA systems, the RIS-aided channel and the direct channel gains should be quantized and fed back to the transmitter. This paper investigates the rate loss of the overall RIS-aided NOMA systems suffering from quantization errors. 
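The network-generation abstract above attributes its efficiency to a binary tree over node weights. One standard way to realise $O(\log n)$ preferential attachment sampling is a Fenwick (binary-indexed) tree, sketched below in Python; the structure is analogous to, but not necessarily identical with, the package's actual internals:

```python
import random

class FenwickSampler:
    """Binary-indexed tree over node weights: O(log n) weight updates
    and O(log n) sampling of an index proportionally to its weight."""
    def __init__(self, n):
        self.n, self.tree = n, [0.0] * (n + 1)

    def add(self, i, delta):            # add `delta` to 0-based index i
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)

    def _total(self):
        s, i = 0.0, self.n
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)
        return s

    def sample(self):                   # draw index ~ weight
        u = random.random() * self._total()
        idx, step = 0, 1 << self.n.bit_length()
        while step:
            nxt = idx + step
            if nxt <= self.n and self.tree[nxt] < u:
                u -= self.tree[nxt]
                idx = nxt
            step >>= 1
        return idx

def grow_pa_network(n, delta=1.0):
    """Grow a PA network: node t attaches to an existing node chosen
    with probability proportional to (degree + delta)."""
    sampler, edges = FenwickSampler(n), [(0, 1)]
    for v in (0, 1):                    # seed: one edge, two nodes
        sampler.add(v, 1.0 + delta)
    for t in range(2, n):
        target = sampler.sample()
        edges.append((t, target))
        sampler.add(target, 1.0)        # target's degree grew by one
        sampler.add(t, 1.0 + delta)     # newcomer enters with degree one
    return edges

random.seed(42)
print(len(grow_pa_network(10_000)))     # 9999 edges
```

A naive linear scan over the weights would make each attachment $O(n)$; the tree brings the whole generation down to $O(n \log n)$, which is the point of the binary tree approach named above.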
We first consider random vector quantization for the overall RIS-aided channel and identical uniform quantizers for the direct channel gains. We then obtain an upper bound for the rate loss, due to the quantization error, as a function of the number of feedback bits and the size of RIS. Our numerical results indicate the sum rate performance of the limited feedback system approaches that of the system with full CSI as the number of feedback bits increases.'\nauthor:\n- '[^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\ntitle: 'Reconfigurable Intelligent Surface-Aided NOMA with Limited Feedback'\n---\n\nIntroduction {#sec:introduction}\n============\n\nReconfigurable intelligent surfaces (RISs) are presumed as an attractive solution to enhance the spectral, power efficiency, and coverage of wireless communication systems\u00a0[@8741198]. These surfaces consist of" +"---\nabstract: 'The Krylov subspace methods, being one category of the most important classical numerical methods for linear algebra problems, their quantum generalisation can be much more powerful. However, quantum Krylov subspace algorithms are prone to errors due to inevitable statistical fluctuations in quantum measurements. To address this problem, we develop a general theoretical framework to analyse the statistical error and measurement cost. Based on the framework, we propose a quantum algorithm to construct the Hamiltonian-power Krylov subspace that can minimise the measurement cost. In our algorithm, the product of power and Gaussian functions of the Hamiltonian is expressed as an integral of the real-time evolution, such that it can be evaluated on a quantum computer. We compare our algorithm with other established quantum Krylov subspace algorithms in solving two prominent examples. It is shown that the measurement number in our algorithm is typically $10^4$ to $10^{12}$ times smaller than other algorithms. Such an improvement can be attributed to the reduced cost of composing projectors onto the ground state. These results show that our algorithm is exceptionally robust to statistical fluctuations and promising for practical applications.'\nauthor:\n- Zongkang Zhang\n- Anbang Wang\n- Xiaosi Xu\n- Ying Li\nbibliography:" +"---\nabstract: 'Throughout 2021, GitGuardian\u2019s monitoring of public GitHub repositories revealed a two-fold increase in the number of secrets (database credentials, API keys, and other credentials) exposed compared to 2020, accumulating more than six million secrets. To our knowledge, the challenges developers face to avoid checked-in secrets are not yet characterized. *The goal of our paper is to aid researchers and tool developers in understanding and prioritizing opportunities for future research and tool automation for mitigating checked-in secrets through an empirical investigation of challenges and solutions related to checked-in secrets*. We extract 779 questions related to checked-in secrets on Stack Exchange and apply qualitative analysis to determine the challenges and the solutions posed by others for each of the challenges. We identify 27 challenges and 13 solutions. The four most common challenges, in ranked order, are: (i) store/version of secrets during deployment; (ii) store/version of secrets in source code; (iii) ignore/hide of secrets in source code; and (iv) sanitize VCS history. 
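To make the checked-in-secrets setting above concrete, a toy regex-based scanner in the spirit of such tools is sketched below; the rules are deliberately simplified and the matched example strings are fabricated (production scanners such as the one named above combine hundreds of detectors with entropy and context checks):

```python
import re

# Simplified, illustrative detection rules; real rule sets are far richer.
PATTERNS = {
    "AWS access key id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key":   re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"),
    "Private key block": re.compile(
        r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text):
    """Return (rule name, matched string) pairs for candidate secrets."""
    return [(name, m.group(0)) for name, rx in PATTERNS.items()
            for m in rx.finditer(text)]

config = 'db_url = "postgres://app:hunter2@db"\napi_key = "0123456789abcdef0123"\n'
for name, hit in scan(config):
    print(f"{name}: {hit}")
```

Note that the database password in the first line slips through, which illustrates why pattern-only scanning is only part of the mitigation story told above.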
The three most common solutions, in ranked order, are: (i) move secrets out of source code/version control and use template config file; (ii) secret management in deployment; and (iii) use local environment variables. Our findings indicate that the same" +"---\nabstract: 'Optimal decision-making compels us to anticipate the future at different horizons. However, in many domains connecting together predictions from multiple time horizons and abstractions levels across their organization becomes all the more important, else decision-makers would be planning using separate and possibly conflicting views of the future. This notably applies to smart grid operation. To optimally manage energy flows in such systems, accurate and *coherent* predictions must be made across varying aggregation levels and horizons. Such hierarchical structures are said to be coherent when values at different scales are equal when brought to the same level, else would need to be reconciled. With this work, we propose a novel multi-dimensional hierarchical forecasting method built upon structurally-informed machine-learning regressors and established hierarchical reconciliation taxonomy. A generic formulation of multi-dimensional hierarchies, reconciling spatial and temporal hierarchies under a common frame is initially defined. Next, a coherency-informed hierarchical learner is developed built upon a custom loss function leveraging optimal reconciliation methods. Coherency of the produced hierarchical forecasts is then secured using similar reconciliation technics. The outcome is a unified and coherent forecast across all examined dimensions, granting decision-makers a common view of the future serving aligned decision-making. The method is evaluated" +"---\nauthor:\n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \ntitle: 'Grasian: Towards the first demonstration of gravitational quantum states of atoms with a cryogenic hydrogen beam'\n---\n\nIntroduction {#sec1}\n============\n\nQuantum bouncers were first predicted in 1928 [@Bre:1928pr]. Nearly 75 years later, this phenomenon was demonstrated through the observation of neutron ($n$) gravitational quantum states (gqs) [@Lus:1978jl; @Nes:2000nima; @Nes:2002Nat; @Nes:2003prd; @Nes:2003prdbis; @Nes:2005epjc; @Wes:2007epjc]. Confined by the gravitational- and the mirror potential, the $n$ are settled in gravitationally bound quantum states.\n\nStudies of $n$ gqs have a broad impact on fundamental and applied physics. They serve as a unique method to study the interaction of a particle in a quantum state with a gravitational field. For example, paired with more recent measurements of $n$ whispering gallery states (wgs) - quantum states trapped by the centrifugal- and the mirror potential [@Nes:2010NatPhys], they result in the first direct demonstration of the validity of the weak equivalence principle for a particle in a pure quantum state.\n\nThe observation of gqs initiated active analysis of the pecularities of this phenomenon [@Rob:2004pr; @Ber:2005jmc; @Mat:2006pra; @ber:2006plb; @Rom:2007prl; @Del:2009prl; @Gar:2012pr; @Bel:2014pr] and their application to" +"---\nabstract: 'Measuring empathy in conversation can be challenging, as empathy is a complex and multifaceted psychological construct that involves both cognitive and emotional components. Human evaluations can be subjective, leading to inconsistent results. Therefore, there is a need for an automatic method for measuring empathy that reduces the need for human evaluations. 
In this paper, we propose a novel approach, EMP-EVAL, a simple yet effective automatic empathy evaluation method. The proposed technique takes into account the influence of emotion, cognitive empathy, and emotional empathy. To the best of our knowledge, our work is the first to systematically measure empathy without human-annotated scores. Experimental results demonstrate that our metrics can correlate with human preference, achieving results comparable with human judgments.'\nauthor:\n- |\n  Bushra Amjad, Muhammad Zeeshan, Mirza Omer Beg\\\n  Department of Artificial Intelligence and Data Science\\\n  National University of Computer and Emerging Sciences\\\n  Islamabad, Pakistan\\\n  `{bushra.amjad,i191711,omer.beg}@nu.edu.pk`\nbibliography:\n- 'custom.bib'\ntitle: 'EMP-EVAL: A Framework for Measuring Empathy in Open Domain Dialogues'\n---\n\n=1\n\nIntroduction\n============\n\nEmpathy is a vital component of human communication, and it is increasingly being recognized as an important aspect of conversational agents and chatbots. Empathy refers to the ability to understand and share the feelings of others, which" +"---\nabstract: 'Assortment optimization has received active exploration in the past few decades due to its practical importance. Despite the extensive literature dealing with optimization algorithms and latent score estimation, uncertainty quantification for the optimal assortment still needs to be explored and is of great practical significance. Instead of estimating and recovering the complete optimal offer set, decision-makers may only be interested in testing whether a given property holds true for the optimal assortment, such as whether they should include several products of interest in the optimal set, or how many categories of products the optimal set should include. This paper proposes a novel inferential framework for testing such properties. We consider the widely adopted multinomial logit (MNL) model, where we assume that each customer will purchase an item within the offered products with a probability proportional to the underlying preference score associated with the product. We reduce inferring a general optimal assortment property to quantifying the uncertainty associated with the sign change point detection of the marginal revenue gaps. We show the asymptotic normality of the marginal revenue gap estimator, and construct a maximum statistic via the gap estimators to detect the sign change point. By approximating the distribution" +"---\nabstract: 'As global digitization continues to grow, technology becomes more affordable and easier to use, and social media platforms thrive, becoming the new means of spreading information and news. Communities are built around sharing and discussing current events. Within these communities, users are enabled to share their opinions about each event. Using Sentiment Analysis to understand the polarity of each message belonging to an event, as well as the entire event, can help to better understand the general and individual feelings of significant trends and the dynamics on online social networks. In this context, we propose a new ensemble architecture, EDSA-Ensemble (Event Detection Sentiment Analysis Ensemble), that uses Event Detection and Sentiment Analysis to improve the detection of the polarity for current events from Social Media. For Event Detection, we use techniques based on Information Diffusion taking into account both the time span and the topics. 
To detect the polarity of each event, we preprocess the text and employ several Machine and Deep Learning models to create an ensemble model. The preprocessing step includes several word representation models, i.e., raw frequency, $TFIDF$, Word2Vec, and Transformers. The proposed EDSA-Ensemble architecture improves the event sentiment classification over the individual Machine and" +"---\nabstract: 'The effect of viscosity contrast between a jet and its surroundings is experimentally investigated using density-matched fluids. A gravity-driven flow is established with a jet of saltwater emerging into an ambient medium composed of high-viscosity propylene glycol. Jet Reynolds numbers, $Re$, ranging from 1600 to 3400 were studied for an ambient-to-jet viscosity ratio, $M$, between 1 and 50. Visualization suggests that at low values of the viscosity ratio, the jet breakdown mode is axisymmetric, while helical modes develop at high values of viscosity ratio. The transition between these two modes is attempted to be delineated using a variety of diagnostic tools. Hot film anemometry measurements indicate that the onset of the helical mode is accompanied by the appearance of a discrete peak in the frequency spectrum of velocity fluctuations, which exhibits little spatial variation for the first several diameters in the downstream direction. Laser-Induced Fluorescence (LIF) is used to identify the jet boundary against the background. An analysis of high-speed images acquired using the LIF technique enables identification of the spatial growth rate of waves on the jet boundary, as well as the frequency of oscillation of the weakly diffusive interface. Temporal fluctuations of fluorescence intensity are found" +"---\nabstract: 'Autonomous robots are required to reason about the behaviour of dynamic agents in their environment. The creation of models to describe these relationships is typically accomplished through the application of causal discovery techniques. However, as it stands observational causal discovery techniques struggle to adequately cope with conditions such as causal sparsity and non-stationarity typically seen during online usage in autonomous agent domains. Meanwhile, interventional techniques are not always feasible due to domain restrictions. In order to better explore the issues facing observational techniques and promote further discussion of these topics we carry out a benchmark across 10 contemporary observational temporal causal discovery methods in the domain of autonomous driving. By evaluating these methods upon causal scenes drawn from real world datasets in addition to those generated synthetically we highlight where improvements need to be made in order to facilitate the application of causal discovery techniques to the aforementioned use-cases. Finally, we discuss potential directions for future work that could help better tackle the difficulties currently experienced by state of the art techniques.'\nbibliography:\n- 'references.bib'\ntitle: 'Evaluating Temporal Observation-Based Causal Discovery Techniques Applied to Road Driver Behaviour'\n---\n\nCausal Discovery, Time Series Data Analysis, Autonomous Driving\n\nIntroduction\n============" +"---\nabstract: 'Eukaryotes swim with coordinated flagellar (ciliary) beating and steer by fine-tuning the coordination. The model organism for studying flagellate motility, *C. reinhardtii* (CR), employs synchronous, breast-stroke-like flagellar beating to swim, and it modulates the beating amplitudes differentially to steer. 
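For the EDSA-Ensemble pipeline described above, one branch of the sentiment ensemble could pair a TF-IDF representation with a classic classifier and combine branches by majority vote. A minimal scikit-learn sketch on fabricated toy messages (illustrative only; the paper's ensemble also spans Word2Vec- and Transformer-based branches and deep models):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy labelled messages for one detected event (1 = positive, 0 = negative).
texts  = ["great news, loving it", "this is awful", "fantastic update",
          "terrible decision", "really happy about this", "so disappointing"]
labels = np.array([1, 0, 1, 0, 1, 0])

# Two classic branches of the ensemble, each its own TF-IDF pipeline.
branches = [make_pipeline(TfidfVectorizer(), LogisticRegression()),
            make_pipeline(TfidfVectorizer(), MultinomialNB())]
for branch in branches:
    branch.fit(texts, labels)

new = ["what a great event", "horrible outcome"]
votes = np.stack([b.predict(new) for b in branches])  # (branches, messages)
majority = (votes.mean(axis=0) >= 0.5).astype(int)    # simple majority vote
print(majority)
```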
This strategy hinges on both inherent flagellar asymmetries (e.g. different response to chemical messengers) and such asymmetries being effectively coordinated in the synchronous beating. In CR, the synchrony of beating is known to be supported by a mechanical connection between flagella; however, how flagellar asymmetries persist in the synchrony remains elusive. For example, it has been speculated for decades that one flagellum leads the beating, as its dynamic properties (i.e. frequency, waveform, etc.) appear to be copied by the other one. In this study, we combine experiments, computations, and modeling efforts to elucidate the roles played by each flagellum in synchronous beating. With a non-invasive technique to selectively load each flagellum, we show that the coordinated beating essentially responds only to load exerted on the *cis* flagellum, and that such asymmetry in response derives from a unilateral coupling between the two flagella. Our results highlight a distinct role for each flagellum in coordination and have implications for biflagellates\u2019 tactic" +"---\nabstract: 'Summation-by-parts (SBP) operators allow us to systematically develop energy-stable and high-order accurate numerical methods for time-dependent differential equations. Until recently, the main idea behind existing SBP operators was that polynomials can accurately approximate the solution, and SBP operators should thus be exact for them. However, polynomials do not provide the best approximation for some problems, with other approximation spaces being more appropriate. We recently addressed this issue and developed a theory for *one-dimensional* SBP operators based on general function spaces, coined function-space SBP (FSBP) operators. In this paper, we extend the theory of FSBP operators to *multiple dimensions*. We focus on their existence, connection to quadratures, construction, and mimetic properties. A more exhaustive numerical demonstration of multi-dimensional FSBP (MFSBP) operators and their application will be provided in future works. Similar to the one-dimensional case, we demonstrate that most of the established results for polynomial-based multi-dimensional SBP (MSBP) operators carry over to the more general class of MFSBP operators. Our findings imply that the concept of SBP operators can be applied to a significantly larger class of methods than is currently done. This can increase the accuracy of the numerical solutions and/or provide stability to the methods.'\nauthor:\n-" +"---\nabstract: 'Ultralong-range Rydberg molecules (ULRM) are highly imbalanced bound systems formed via the low-energy scattering of a Rydberg electron with a ground-state atom. We investigate for $^{23}$Na the $d$-state and the energetically close-by trilobite state, exhibiting avoided crossings that lead to the breakdown of the adiabatic Born-Oppenheimer (BO) approximation. We develop a coupled-channel approach to explore the non-adiabatic interaction effects between these electronic states. The resulting spectrum exhibits stark differences in comparison to the BO spectra, such as the existence of above-threshold resonant states without any adiabatic counterparts, and a significant rearrangement of the spectral structure as well as the localization of the eigenstates.
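The FSBP record above builds on the classical SBP structure $D = P^{-1}Q$ with $Q + Q^{T} = B$, which mimics integration by parts discretely. A minimal one-dimensional, polynomial-based sketch (the textbook second-order operator, not the paper's function-space construction) verifies the property numerically:

```python
import numpy as np

def classical_sbp_first_derivative(n, h):
    """Classical second-order SBP first-derivative operator D = P^{-1} Q.

    P is a diagonal quadrature (norm) matrix and Q satisfies
    Q + Q^T = B = diag(-1, 0, ..., 0, 1), so that
    u^T P (D v) + (D u)^T P v = u_n v_n - u_0 v_0 holds discretely.
    """
    P = h * np.eye(n)
    P[0, 0] = P[-1, -1] = h / 2.0
    Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))  # central differences
    Q[0, 0], Q[-1, -1] = -0.5, 0.5                # boundary closures
    return np.linalg.solve(P, Q), P, Q

n, h = 11, 0.1
D, P, Q = classical_sbp_first_derivative(n, h)
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
assert np.allclose(Q + Q.T, B)            # the SBP property
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x - 1.0)))        # exact for linear functions
```

FSBP operators keep exactly this algebraic structure while replacing polynomial exactness with exactness on a general function space.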
Our study motivates the use of $^{23}$Na ULRMs as a probe to explore vibronic interaction effects on exaggerated time and length scales.'\nauthor:\n- Rohan Srikumar\n- Frederic Hummel\n- Peter Schmelcher\nbibliography:\n- 'references.bib'\ntitle: 'Non-adiabatic interaction effects in the spectra of ultralong-range Rydberg molecules'\n---\n\nIntroduction {#Sec1}\n============\n\nRydberg atoms are an important player in modern quantum physics due to their unique and extreme properties. Their size and dipole moment scale as $n^2$, and lifetimes and polarizability scale as $n^3$ and $n^7$, respectively, where $n$ is the principal quantum number [@gallagher_1994; @sibalic_2018]. They" +"---\nabstract: 'We present a new algorithm to train a robust malware detector. Malware is a prolific problem and malware detectors are a front-line defense. Modern detectors rely on machine learning algorithms. Now, the adversarial objective is to devise alterations to the malware code to decrease the chance of being detected whilst *preserving the functionality and realism of the malware*. Adversarial learning is effective in improving robustness, but generating functional and realistic adversarial malware samples is non-trivial. This is because: i)\u00a0in contrast to tasks capable of using gradient-based feedback, adversarial learning in a domain without a *differentiable mapping function* from the *problem space* (malware code inputs) to the *feature space* is hard; and ii)\u00a0it is difficult to ensure the adversarial malware is realistic and functional. This presents a challenge for developing scalable adversarial machine learning algorithms for large datasets at a production or commercial scale to realize robust malware detectors. We propose an alternative: perform adversarial learning in the *feature space* in contrast to the problem space. We *prove* that the projection of perturbed, yet valid, malware from the problem space into the feature space will always be a subset of the adversarials generated in the feature space. Hence, by generating a robust" +"---\nabstract: 'This paper presents a new approach to Model Predictive Control for environments where essential, discrete variables are partially observed. Under this assumption, the belief state is a probability distribution over a finite number of states. We optimize a *control-tree* where each branch assumes a given state-hypothesis. The control-tree optimization uses the probabilistic belief state information. This leads to policies more optimized with respect to likely states than unlikely ones, while still guaranteeing robust constraint satisfaction at all times. We apply the method to both linear and non-linear MPC with constraints. The optimization of the *control-tree* is decomposed into optimization subproblems that are solved in parallel, leading to good scalability for a high number of state-hypotheses. We demonstrate the real-time feasibility of the algorithm on two examples and show the benefits compared to a classical MPC scheme optimizing w.r.t. a single hypothesis.'\nauthor:\n- 'Camille Phiquepal$^{1}$ and Marc Toussaint$^{2}$ [^1][^2]'\nbibliography:\n- 'references.bib'\ntitle: '**Control-Tree Optimization: an approach to MPC under discrete Partial Observability** '\n---\n\nINTRODUCTION\n============\n\nIn the field of receding horizon motion planning, uncertainty about the robot environment is often neglected or integrated in the environment representation through heuristics (e.g.
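The malware record above advocates adversarial learning in the feature space rather than the problem space. Below is a minimal sketch of that general idea, FGSM-style perturbations applied directly to feature vectors while training a logistic-regression detector; the synthetic data and all hyperparameters are invented for illustration, and this is not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical binary data: rows are feature vectors, label 1 = malware.
X = rng.normal(size=(200, 16))
w_true = rng.normal(size=16)
y = (X @ w_true > 0).astype(float)

w = np.zeros(16)
eps, lr = 0.1, 0.5
for _ in range(200):
    # FGSM-style step in feature space: perturb each sample in the
    # direction that increases its own classification loss.
    grad_x = w * (sigmoid(X @ w) - y)[:, None]   # d(loss)/dx per sample
    X_adv = X + eps * np.sign(grad_x)
    # Standard logistic-regression gradient step on the perturbed batch.
    p = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p - y) / len(y)

print("clean accuracy:", ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean())
```

The paper's contribution is precisely about when such feature-space adversarials cover the projections of valid problem-space malware; the sketch only shows the training loop's mechanics.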
by adding safety distances or potential fields" +"---\nabstract: 'Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases. We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term \u201cnegotiation.\u201d These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.'\nauthor:\n- Venkatesh Sivaraman\n- 'Leigh A. Bukowski'\n- Joel Levin\n- 'Jeremy M. Kahn'\n- Adam Perer\nbibliography:\n- 'references.bib'\ntitle: 'Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care'\n---\n\n<ccs2012> <concept> <concept\\_id>10003120.10003121.10003129</concept\\_id> <concept\\_desc>Human-centered computing\u00a0Interactive systems and tools</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept>" +"---\nabstract: 'Quadrature formulas (QFs) based on radial basis functions (RBFs) have become an essential tool for multivariate numerical integration of scattered data. Although numerous works have been published on RBF-QFs, their stability theory can still be considered underdeveloped. Here, we strive to pave the way towards a more mature stability theory for global and function-independent RBF-QFs. In particular, we prove stability of these QFs for compactly supported RBFs under certain conditions on the shape parameter and the data points. As an alternative to changing the shape parameter, we demonstrate how the least-squares approach can be used to construct stable RBF-QFs by allowing the number of data points used for numerical integration to be larger than the number of centers used to generate the RBF approximation space. Moreover, it is shown that asymptotic stability of many global RBF-QFs is independent of polynomial terms, which are often included in RBF approximations. While our findings provide some novel conditions for stability of global RBF-QFs, the present work also demonstrates that there are still many gaps to fill in future investigations.'\nauthor:\n- 'Jan Glaubitz[^1]'\n- 'Jonah Reeger[^2]'\nbibliography:\n- 'literature.bib'\ntitle: 'Towards stability results for global radial basis function based quadrature formulas[^3]'" +"---\nabstract: |\n The use of Artificial Intelligence (AI) in the real estate market has been growing in recent years. In this paper, we propose a new method for property valuation that utilizes self-supervised vision transformers, a recent breakthrough in computer vision and deep learning. Our proposed algorithm uses a combination of machine learning, computer vision and hedonic pricing models trained on real estate data to estimate the value of a given property.
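The control-tree formulation concluding above, one branch per state hypothesis, a shared first control, and branch costs weighted by the belief, can be sketched as a small convex program. The 1-D dynamics, horizon, and hypothetical obstacle positions below are illustrative assumptions, not the paper's parallel decomposition:

```python
import cvxpy as cp

# Two hypotheses about an obstacle position, with belief probabilities.
beliefs = [0.7, 0.3]
obstacle_pos = [4.0, 6.0]      # hypothetical 1-D obstacle locations
T, dt = 10, 0.5

cost, constraints = 0, []
u_first = cp.Variable()        # first control, shared by all branches
for b, obs in zip(beliefs, obstacle_pos):
    x = cp.Variable(T + 1)     # position along this branch
    v = cp.Variable(T + 1)     # velocity along this branch
    u = cp.Variable(T)         # branch-specific controls
    constraints += [x[0] == 0, v[0] == 1, u[0] == u_first]
    for t in range(T):
        constraints += [x[t + 1] == x[t] + dt * v[t],
                        v[t + 1] == v[t] + dt * u[t],
                        x[t + 1] <= obs - 0.5]   # stay short of the obstacle
    # Branch cost is weighted by the belief in its hypothesis.
    cost += b * (cp.sum_squares(u) + cp.sum_squares(v[1:]))

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print("shared first control:", u_first.value)
```

Tying only the first control across branches captures the receding-horizon logic: the next action must be safe under every hypothesis, while later actions may specialize once the hypothesis is resolved.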
We collected and pre-processed a data set of real estate properties in the city of Boulder, Colorado, and used it to train, validate and test our algorithm. Our data set consisted of qualitative images (including house interiors, exteriors, and street views) as well as quantitative features such as the number of bedrooms, bathrooms, square footage, lot square footage, property age, crime rates, and proximity to amenities. We evaluated the performance of our model using metrics such as Root Mean Squared Error (RMSE). Our findings indicate that these techniques are able to accurately predict the value of properties, with a low RMSE. The proposed algorithm outperforms traditional appraisal methods that do not leverage property images and has the potential to be used in real-world applications.\\\n *keywords*: housing price" +"---\nabstract: 'We develop a novel computational framework to approximate solution operators of evolution partial differential equations (PDEs). By employing a general nonlinear reduced-order model, such as a deep neural network, to approximate the solution of a given PDE, we realize that the evolution of the model parameters is a control problem in the parameter space. Based on this observation, we propose to approximate the solution operator of the PDE by learning the control vector field in the parameter space. From any initial value, this control field can steer the parameter to generate a trajectory such that the corresponding reduced-order model solves the PDE. This allows for substantially reduced computational cost to solve the evolution PDE with arbitrary initial conditions. We also develop comprehensive error analysis for the proposed method when solving a large class of semilinear parabolic PDEs. Numerical experiments on different high-dimensional evolution PDEs with various initial conditions demonstrate the promising results of the proposed method.'\nauthor:\n- 'Nathan Gaby[^1]'\n- 'Xiaojing Ye[^2]'\n- 'Haomin Zhou[^3]'\nbibliography:\n- 'library.bib'\ntitle: 'Neural Control of Parametric Solutions for High-Dimensional Evolution PDEs[^4]'\n---\n\nIntroduction {#sec:intro}\n============\n\nPartial differential equations (PDEs) are ubiquitous in modeling and are vital in numerous applications from" +"---\nabstract: 'Generative models have the ability to synthesize data points drawn from the data distribution; however, not all generated samples are of high quality. In this paper, we propose using a combination of coreset selection methods and \u201centropic regularization\u201d to select the highest-fidelity samples. We leverage an Energy-Based Model which resembles a variational auto-encoder, with an inference and generator model whose latent prior is complexified by an energy-based model.
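The valuation pipeline in the real-estate record above reduces, at evaluation time, to a regression on hedonic features scored by RMSE. A minimal sketch on synthetic data follows; the feature set, price model, and regressor choice are assumptions for illustration (image embeddings from a vision transformer would simply be concatenated as extra columns):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical hedonic features: bedrooms, bathrooms, sqft, property age.
n = 500
X = np.column_stack([
    rng.integers(1, 6, n),          # bedrooms
    rng.integers(1, 4, n),          # bathrooms
    rng.normal(1800, 400, n),       # square footage
    rng.integers(0, 80, n),         # property age
])
# Synthetic price: linear hedonic effects plus noise.
price = 50_000 + 120 * X[:, 2] + 15_000 * X[:, 0] - 500 * X[:, 3] \
        + rng.normal(0, 20_000, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE: {rmse:,.0f}")
```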
In a semi-supervised learning scenario, we show that augmenting the labeled data-set by adding our selected subset of samples leads to a larger accuracy improvement than using all the synthetic samples.'\nauthor:\n- |\n Omead Pooladzandi$^*$,\u00a0\u00a0Pasha Khosravi$^*$,\u00a0\u00a0 Erik Nijkamp$^*$,\u00a0\u00a0 Baharan Mirzasoleiman\\\n University of California, Los Angeles\\\n ` {opooladz, pashak, enijkamp, baharan}@ucla.edu`\nbibliography:\n- 'main.bib'\ntitle: |\n Generating High Fidelity Synthetic Data\\\n via Coreset selection and Entropic Regularization\n---\n\nIntroduction\n============\n\nIn machine learning, augmenting data-sets with synthetic data has become a common practice that can provide significant improvements in downstream tasks such as classification. For example, in the case of images, recent methods like MixMatch, FixMatch and Mean Teacher [@berthelot2019mixmatch] [@raffel2020fixmatch] [@tarvainen2017mean] have proposed data augmentation techniques which rely on simple pre-defined transformations such as cropping, resizing," +"---\nabstract: 'In federated learning (FL), the global model at the server requires an efficient mechanism for weight aggregation and a systematic strategy for collaboration selection to manage and optimize communication payload. We introduce a practical and cost-efficient method for regularized weight aggregation and propose a laborsaving technique to select collaborators per round. We illustrate the performance of our method, regularized similarity weight aggregation (RegSimAgg), on the Federated Tumor Segmentation (FeTS) 2022 challenge\u2019s federated training (weight aggregation) problem. Our scalable approach is principled, frugal, and suitable for heterogeneous non-IID collaborators. Using the FeTS2021 evaluation criterion, our proposed algorithm RegSimAgg stands in 3rd position in the final rankings of the FeTS2022 challenge in the weight aggregation task. Our solution is open sourced at: '\nauthor:\n- Muhammad Irfan Khan\n- Mohammad Ayyaz Azeem\n- Esa Alhoniemi\n- Elina Kontio\n- 'Suleiman A. Khan'\n- Mojtaba Jafaritadi\nbibliography:\n- 'main.bib'\ntitle: Regularized Weight Aggregation in Networked Federated Learning for Glioblastoma Segmentation\n---\n\nIntroduction\n============\n\nFederated learning (FL) is on the horizon to replace the current paradigm of data sharing, allowing for privacy-preserving cross-institutional research across a wide range of biomedical disciplines. In simple terms, FL is a machine learning paradigm in a distributed or" +"---\nabstract: 'Few-shot learning allows pre-trained language models to adapt to downstream tasks while using a limited number of training examples. However, practical applications are limited when all model parameters must be optimized. In this work we apply a new technique for parameter-efficient few-shot learning while adopting a strict definition of parameter efficiency. Our training method combines 1) intermediate training by reformulating natural language tasks as entailment tasks [@wang_entailment_2021] and 2) differentiable optimization of template and label tokens [@zhang_differentiable_2021].
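The federated record above aggregates collaborator weights with a similarity-regularized rule. A sketch of one plausible variant, weighting each collaborator's update by its cosine similarity to the mean update, is shown below; this is an illustration of the general idea, not the exact RegSimAgg rule:

```python
import numpy as np

def similarity_weighted_aggregate(global_w, client_ws, eps=1e-12):
    """Aggregate client weight vectors, weighting each client by the
    cosine similarity of its update to the mean update (a sketch of
    similarity-regularized aggregation, not the published algorithm)."""
    updates = np.stack([cw - global_w for cw in client_ws])
    mean_update = updates.mean(axis=0)
    sims = np.array([
        u @ mean_update
        / (np.linalg.norm(u) * np.linalg.norm(mean_update) + eps)
        for u in updates
    ])
    weights = np.clip(sims, 0.0, None)       # down-weight dissimilar clients
    weights = weights / (weights.sum() + eps)
    return global_w + (weights[:, None] * updates).sum(axis=0)

global_w = np.zeros(4)
clients = [np.array([1.0, 0.9, 1.1, 1.0]),
           np.array([0.8, 1.2, 0.9, 1.0]),
           np.array([-5.0, 4.0, -3.0, 2.0])]  # outlier collaborator
print(similarity_weighted_aggregate(global_w, clients))
```

The outlier's update points away from the consensus direction, so its weight is driven toward zero, which is the behaviour one wants with heterogeneous non-IID collaborators.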
We quantify the tradeoff between parameter efficiency and performance in the few-shot regime and propose a simple, model-agnostic approach that can be extended to any task. By achieving competitive performance while only optimizing 3% of a model\u2019s parameters and allowing for batched inference, we allow for more efficient practical deployment of models.'\nauthor:\n- Anonymous\n- |\n Ethan Kim\\\n Harvard University\\\n Jerry Yang\\\n Harvard University\\\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: Differentiable Entailment for Parameter Efficient Few Shot Learning\n---\n\n=1\n\nIntroduction\n============\n\nLarge pre-trained language models have demonstrated adaptability to solve natural language processing (NLP) tasks. Typically, such language models are adapted to a downstream task through fine-tuning [@howard_universal_2018]. Although fine-tuning improves performance on downstream tasks," +"---\nauthor:\n- 'Dimitra Tsigkari, George Iosifidis, and Thrasyvoulos Spyropoulos [^1] [^2] [^3] [^4] [^5]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'tsigkari\\_quid-pro-quo.bib'\ntitle: |\n Quid pro Quo in Streaming Services:\\\n Algorithms for Cooperative Recommendations\n---\n\nIntroduction\n============\n\nBackground and Motivation\n-------------------------\n\nRecommender systems\u00a0(RSs) permeate today\u2019s on-demand streaming services such as Netflix, Disney+, etc., and substantially affect the content requests issued by their subscribers. In Netflix, for example, it is estimated that $80\\%$ of the requests stem from the recommendations that are offered to its users\u00a0[@gomez2016netflix]. Indeed, by proposing contents that are relevant to their users\u2019 interests, Content Providers\u00a0(CPs) can increase the viewing activity in their platforms, reduce user churn, and eventually boost their revenues\u00a0[@gomez2016netflix]. Therefore, it is not surprising that CPs recognize the business value of these systems and invest research and financial resources to improve their accuracy.\n\nAt the same time, recommendations can be leveraged by content caching networks to steer user requests towards nearby-cached contents. These caching networks are either today\u2019s traditional Content Delivery Networks\u00a0(CDNs) or edge cache providers in future wireless architectures\u00a0(we will use, hereafter, the term CDN to imply any such caching network provider). The recently-coined terms of cache/network-friendly recommendations" +"---\nabstract: 'In this paper, we consider the varifold associated to the Allen\u2013Cahn phase transition problem in $\\mathbb R^{n+1}$ (or $(n+1)$-dimensional Riemannian manifolds with bounded curvature) with integral $L^{q_0}$ bounds on the Allen\u2013Cahn mean curvature (first variation of the Allen\u2013Cahn energy). It is shown here that there is an equidistribution of energy between the Dirichlet and potential energy in the phase field limit and that the varifold associated to the total energy converges to an integer rectifiable varifold with mean curvature in $L^{q_0}, q_0 > n$.
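The parameter-efficiency idea in the few-shot record above, freezing the backbone and optimizing only continuous template (prompt) and label tokens, can be sketched with a toy stand-in for a pre-trained model. Everything below (the tiny GRU "backbone", dimensions, random data) is a hypothetical illustration, not the paper's entailment setup:

```python
import torch
import torch.nn as nn

# Toy frozen "language model": embeddings + encoder + vocabulary head.
vocab, dim, n_prompt = 100, 32, 5
embed = nn.Embedding(vocab, dim)
encoder = nn.GRU(dim, dim, batch_first=True)
head = nn.Linear(dim, vocab)
for module in (embed, encoder, head):
    for p in module.parameters():
        p.requires_grad = False      # the backbone stays frozen

# The only trainable parameters: continuous template (prompt) tokens,
# a small fraction of the total parameter count.
prompt = nn.Parameter(torch.randn(1, n_prompt, dim) * 0.02)
optimizer = torch.optim.Adam([prompt], lr=1e-2)

tokens = torch.randint(0, vocab, (8, 12))   # hypothetical input batch
target = torch.randint(0, vocab, (8,))      # hypothetical label tokens

for step in range(50):
    # Prepend the learned prompt embeddings to the token embeddings.
    x = torch.cat([prompt.expand(8, -1, -1), embed(tokens)], dim=1)
    out, _ = encoder(x)
    logits = head(out[:, -1])               # predict the label token
    loss = nn.functional.cross_entropy(logits, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final loss:", float(loss))
```

Because the backbone never changes, many tasks can share one deployed model and differ only in their small prompt tensors, which is what enables batched inference across tasks.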
The latter is a diffused version of Allard\u2019s convergence theorem for integer rectifiable varifolds.'\naddress:\n- |\n School of Mathematical Sciences\\\n Queen Mary University of London\\\n Mile End Road\\\n London E1 4NS\n- |\n School of Mathematical Sciences\\\n Queen Mary University of London\\\n Mile End Road\\\n London E1 4NS\nauthor:\n- Huy The Nguyen\n- Shengwen Wang\nbibliography:\n- 'AllardAllenCahn.bib'\ntitle: 'Quantization of the Energy for the inhomogeneous Allen\u2013Cahn mean curvature'\n---\n\nIntroduction\n============\n\nLet $\\Omega\\subset(M^{n+1},g)$ be an open subset of a Riemannian manifold with bounded curvature. Consider $u\\in W^{2,p}(\\Omega)$ satisfying the following equation $$\\begin{aligned}\n\\label{PFVe}\n {\\varepsilon}\\Delta u_{\\varepsilon}-\\frac{W'(u_{\\varepsilon})}{{\\varepsilon}}=f_{\\varepsilon},\n \\end{aligned}$$ where $W(t)=\\frac{(1-t^2)^2}{2}$ is a double-well potential. The equation can be viewed as a prescribed first variation" +"---\nabstract: 'This paper investigates big Ramsey degrees of unrestricted relational structures in (possibly) infinite languages. While significant progress has been made in studying big Ramsey degrees, many classes of structures with finite small Ramsey degrees still lack an understanding of their big Ramsey degrees. We show that if there are only finitely many relations of every arity greater than one, then unrestricted relational structures have finite big Ramsey degrees, and give some evidence that this is tight. This is the first time that finiteness of big Ramsey degrees has been established for an infinite-language random structure. Our results represent an important step towards a better understanding of big Ramsey degrees for structures with relations of arity greater than two.'\naddress:\n- 'Computer Science Institute of Charles University (I\u00daUK), Charles University, Malostransk\u00e9 n\u00e1m\u011bst\u00ed\u00a025, Praha\u00a01, Czech Republic'\n- 'Department of Applied Mathematics (KAM), Charles University, Malostransk\u00e9 n\u00e1m\u011bst\u00ed\u00a025, Praha\u00a01, Czech Republic'\n- 'Laboratoire Paul Painlev\u00e9, Universit\u00e9 de Lille, 59 655 Villeneuve d\u2019Ascq C\u00e9dex, France'\n- 'Department of Applied Mathematics (KAM), Charles University, Malostransk\u00e9 n\u00e1m\u011bst\u00ed\u00a025, Praha\u00a01, Czech Republic'\n- 'Department of Mathematics, University of Toronto, Toronto, Canada, M5S 2E4'\n- 'Department of Applied Mathematics (KAM), Charles University," +"---\nauthor:\n- Tien Mai\n- Avinandan Bose\n- Arunesh Sinha\n- 'Thanh H. Nguyen'\nbibliography:\n- 'refs.bib'\ntitle: Tackling Stackelberg Network Interdiction against a Boundedly Rational Adversary\n---\n\nIntroduction\n============\n\nNetwork interdiction is a well-studied topic in Artificial Intelligence. There are many practical problems\u00a0[@smith2020survey], such as in cyber systems and illicit supply networks, that can be modelled as network interdiction problems. In the literature, many variations in models of network interdiction exist and, consequently, a variety of techniques have been used for solving different types of these problems. Our work focuses on a particular type in which there is a set of critical nodes ${{\\mathcal{L}}}$ to protect within a larger network of ${{\\mathcal{S}}}$ nodes.
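For the Allen–Cahn record above, the "Allen–Cahn mean curvature" is the first variation of the phase-transition energy. A short LaTeX block consistent with the equation quoted in the text reads as follows, where $\mathrm{d}\mu_g$ denotes the Riemannian volume measure (a notational assumption here):

```latex
% Allen--Cahn energy and its first variation; the PDE quoted above,
% \varepsilon\Delta u_\varepsilon - W'(u_\varepsilon)/\varepsilon = f_\varepsilon,
% prescribes this first variation to equal -f_\varepsilon pointwise.
E_\varepsilon(u) = \int_\Omega \frac{\varepsilon\,|\nabla u|^2}{2}
    + \frac{W(u)}{\varepsilon}\,\mathrm{d}\mu_g ,
\qquad
\delta E_\varepsilon(u)[\phi] = \int_\Omega
    \Bigl(-\varepsilon\,\Delta u + \frac{W'(u)}{\varepsilon}\Bigr)\,\phi
    \,\mathrm{d}\mu_g .
```

The first term is the Dirichlet energy and the second the potential energy; the record's equidistribution statement concerns precisely these two contributions in the limit $\varepsilon \to 0$.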
We employ a popular network interdiction model\u00a0[@fulkerson1977maximizing; @israeli2002shortest], where the interdictor (defender) uses a randomized allocation of limited defense resources for the critical nodes in ${{\\mathcal{L}}}$. The adversary traverses the graph, starting from an origin $s_o$ and reaching a destination $s_d$. There is an interaction with the defender only if the adversary crosses any node in ${{\\mathcal{L}}}$. The interaction is modelled using a leader-follower (Stackelberg) game setting where the defender first allocates resources in a randomized fashion and then the adversary chooses its" +"---\nabstract: |\n Pen testing is the problem of selecting high-capacity resources when the only way to measure the capacity of a resource expends its capacity. We have a set of $n$ pens with unknown amounts of ink and our goal is to select a feasible subset of pens maximizing the total ink in them. We are allowed to gather more information by writing with them, but this uses up ink that was previously in the pens. Algorithms are evaluated against the standard benchmark, i.e., the optimal pen testing algorithm, and the omniscient benchmark, i.e., the optimal selection if the quantity of ink in the pens is known.\n\n We identify optimal and near-optimal pen testing algorithms by drawing analogies to auction-theoretic frameworks of deferred-acceptance auctions and virtual values. Our framework allows the conversion of any near-optimal deferred-acceptance mechanism into a near-optimal pen testing algorithm. Moreover, these algorithms guarantee an additional overhead of at most $(1+o(1)) \\ln n$ in the approximation factor of the omniscient benchmark. We use this framework to give pen testing algorithms for various combinatorial constraints like matroid, knapsack and general downward-closed constraints and also for online environments.\nauthor:\n- Aadityan Ganesh\n- Jason" +"---\nabstract: 'The recent emergence of 6G raises the challenge of increasing the transmission data rate even further in order to break the barrier set by the Shannon limit. Traditional communication methods fall short of the 6G goals, paving the way for Semantic Communication (SemCom) systems. These systems find applications in a wide range of fields such as economics, metaverse, autonomous transportation systems, healthcare, smart factories, etc. In SemCom systems, only the relevant information from the data, known as semantic data, is extracted to eliminate unwanted overheads in the raw data and then transmitted after encoding. In this paper, we first use the shared knowledge base to extract the keywords from the dataset. Then, we design an auto-encoder and auto-decoder that only transmit these keywords and, respectively, recover the data using the received keywords and the shared knowledge. We show analytically that the overall semantic distortion function has an upper bound, which is shown in the literature to converge. We numerically compute the accuracy of the reconstructed sentences at the receiver. Using simulations, we show that the proposed methods outperform a state-of-the-art method in terms of the average number of words per sentence.'\nauthor:\n- '[^1]'\nbibliography:\n- 'references.bib'\ntitle: 'Knowledge-Aware" +"---\nabstract: 'Contrary to common assumptions, a transcritical domain exists during the early times of liquid hydrocarbon fuel injection at supercritical pressure. A sharp two-phase interface is sustained before substantial heating of the liquid.
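The semantic-communication record above transmits only keywords extracted via a shared knowledge base and reconstructs sentences at the receiver. A deliberately simple, rule-based sketch of that encode/decode loop follows; the paper itself uses a learned auto-encoder/decoder, and the knowledge base and templates here are invented for illustration:

```python
# Minimal sketch of keyword-based semantic coding with a shared
# knowledge base (illustrative only; not the paper's learned model).
SHARED_KB = {"storm", "flight", "delayed", "airport", "hours"}

def semantic_encode(sentence, kb=SHARED_KB):
    """Transmit only the words found in the shared knowledge base."""
    return [w for w in sentence.lower().split() if w in kb]

def semantic_decode(keywords, kb_templates):
    """Reconstruct a sentence from received keywords via shared templates."""
    for required, template in kb_templates:
        if required.issubset(keywords):
            return template
    return " ".join(keywords)       # fallback: emit the keywords only

templates = [({"flight", "delayed", "storm"},
              "the flight is delayed because of the storm")]
sent = "Unfortunately the flight was delayed by the storm for two hours"
kws = semantic_encode(sent)
print(kws, "->", semantic_decode(set(kws), templates))
```

The payload shrinks from the full sentence to a handful of keywords, which is the overhead reduction the record quantifies via words per sentence.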
Thus, two-phase dynamics has been shown to drive the early three-dimensional deformation and atomisation. A recent study of a transcritical liquid jet shows distinct deformation features caused by interface thermodynamics, low surface tension, and intraphase diffusive mixing. In the present work, the vortex identification method $\\lambda_\\rho$, which considers the fluid compressibility, is used to study the vortex dynamics in a cool liquid *n*-decane transcritical jet surrounded by a hotter oxygen gaseous stream at supercritical pressures. The relationship between vortical structures and the liquid surface evolution is detailed, along with the vorticity generation mechanisms, including variable-density effects. The roles of hairpin and roller vortices in the early deformation of lobes, the layering and tearing of liquid sheets, and the formation of fuel-rich gaseous blobs are analysed. At these high pressures, enhanced intraphase mixing and ambient gas dissolution affect the local liquid structures (i.e., lobes). Thus, liquid breakup differs from classical sub-critical atomisation. Near the interface, liquid density and viscosity drop by up to 10% and 70%, respectively," +"---\nabstract: |\n The superconducting diode effect, in analogy to the nonreciprocal resistive charge transport in a semiconducting diode, is a nonreciprocity of dissipationless supercurrent. Such an exotic phenomenon originates from the intertwining between symmetry-constrained supercurrent transport and intrinsic quantum functionalities of helical/chiral superconductors. In this article, research progress on the superconducting diode effect, including fundamental concepts, material aspects, device prospects, and theoretical/experimental development, is reviewed. First, fundamental mechanisms causing the superconducting diode effect, including simultaneous space-inversion and time-reversal symmetry breaking, magnetochiral anisotropy, the interplay between the spin-orbit interaction energy and the characteristic energy scale of supercurrent carriers, and finite-momentum Cooper pairing, are discussed. Second, the progress of the superconducting diode effect from theoretical predictions to experimental observations is reviewed. Third, the interplay between various system parameters leading to a superconducting diode effect with optimal performance is presented. Then, it is explicitly highlighted that the nonreciprocity of supercurrent can be characterized either by the current-voltage relation obtained from resistive direct-current measurements in the metal-superconductor fluctuation region ($T\\approx T_c$) or by the current-phase relation and the nonreciprocity of superfluid inductance obtained from alternating-current measurements in the superconducting phase ($T