diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2023_06.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2023_06.jsonl" deleted file mode 100644--- "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2023_06.jsonl" +++ /dev/null @@ -1,1000 +0,0 @@ -"---\nabstract: 'This work considers the problem of approximating initial condition and time-dependent optimal control and trajectory surfaces using multivariable Fourier series. A modified Augmented Lagrangian algorithm for translating the optimal control problem into an unconstrained optimization one is proposed and two problems are solved: a quadratic control problem in the context of Newtonian mechanics, and a control problem arising from an odd-circulant game ruled by the replicator dynamics. Various computational results are presented. Use of automatic differentiation is explored to circumvent the elaborated gradient computation in the first-order optimization procedure. Furthermore, mean square error bounds are derived for the case of one and two-dimensional Fourier series approximations, inducing a general bound for problems with state space of $n$ dimensions.'\nauthor:\n- 'Gabriel Nicolosi[^1]'\n- Christopher Griffin\n- Terry Friesz\nbibliography:\n- 'References.bib'\ntitle: A Multidimensional Fourier Approximation of Optimal Control Surfaces\n---\n\nIntroduction\n============\n\nRecently, the fields of nonlinear dynamics and nonlinear control have witnessed an increasing interest in the application of machine learning methods to tackle the common analytically challenging problems pertaining to them. These models are mostly useful when the so-called curse of dimensionality hinders the efficient derivation of closed-form or approximated solutions to problems presenting nonlinearities" -"---\nabstract: 'We present a detailed study of mechanically compliant, photonic-crystal-based microcavities featuring a quasibound state in the continuum. Such systems were recently predicted to reduce the optical loss in Fabry-P\u00e9rot-type optomechanical cavities. However, they require two identical photonic-crystal slabs facing each other, which poses a considerable challenge for experimental implementation. We investigate how such an ideal system can be simplified and still exhibit a quasibound state in the continuum. We find that a suspended photonic-crystal slab facing a distributed Bragg reflector realizes an optomechanical system with a quasibound state in the continuum. In this system, the radiative cavity loss can be eliminated to the extent that the cavity loss is dominated by dissipative loss originating from material absorption only. These proposed optomechanical cavity designs are predicted to feature optical quality factors in excess of $10^5$.'\nauthor:\n- Cindy P\u00e9ralle\n- Sushanth Kini Manjeshwar\n- Anastasiia Ciers\n- Witlef Wieczorek\n- Philippe Tassin\ntitle: |\n Quasibound states in the continuum\\\n in photonic-crystal-based optomechanical microcavities\n---\n\nIntroduction\n============\n\nReducing optical loss is paramount for a variety of engineered devices. Loss of confined modes can be reduced by designing structured materials that create bandgaps around the modes of interest\u00a0[@akahane2003high; @englund2005general; @deotare2009high]." -"---\nabstract: 'Real-time object detection plays a vital role in various computer vision applications. 
However, deploying real-time object detectors on resource-constrained platforms poses challenges due to high computational and memory requirements. This paper describes a low-bit quantization method to build a highly efficient one-stage detector, dubbed Q-YOLO, which can effectively address the performance degradation problem caused by activation distribution imbalance in traditional quantized YOLO models. Q-YOLO introduces a fully end-to-end Post-Training Quantization (PTQ) pipeline with a well-designed Unilateral Histogram-based (UH) activation quantization scheme, which determines the maximum truncation values through histogram analysis by minimizing the Mean Squared Error (MSE) of the quantization. Extensive experiments on the COCO dataset demonstrate the effectiveness of Q-YOLO, outperforming other PTQ methods while achieving a more favorable balance between accuracy and computational cost. This research contributes to advancing the efficient deployment of object detection models on resource-limited edge devices, enabling real-time detection with reduced computational and memory overhead.'\nauthor:\n- 'Mingze Wang $^{\\dag}$'\n- 'Huixin Sun $^{\\dag}$'\n- 'Jun Shi $^{\\dag}$'\n- Xuhui Liu\n- Baochang Zhang\n- 'Xianbin Cao$^{\\star}$'\nbibliography:\n- 'references.bib'\ntitle: 'Q-YOLO: Efficient Inference for Real-time Object Detection [^1]'\n---\n\nIntroduction\n============\n\nReal-time object detection is a crucial component in various computer" -"---\nabstract: 'Infrared observations of Sgr A$^*$ and M87$^*$ are incompatible with the assumption that these sources have physical surfaces in thermal equilibrium with their accreting environments. In this paper we discuss a general parametrization of the energy balance in a horizonless object, which permits us to quantify how close a horizonless object is in its behavior to a black hole, and analyze the timescale in which its surface can thermalize. We show that the thermalization timescale is unbounded, growing large for objects that mimic closely the behavior of a black hole (and being infinite for the latter). In particular, the thermalization timescale is proportional to the time that energy spends inside the horizonless object due to propagation and interactions with the bulk. Hence, these observations can be used to quantitatively restrict the dynamical behavior of horizonless objects, without being able to discard the existence of a physical surface.'\nauthor:\n- 'Ra\u00fal Carballo-Rubio'\n- Francesco Di Filippo\n- Stefano Liberati\n- Matt Visser\nbibliography:\n- 'refs.bib'\ntitle: Constraints on thermalizing surfaces from infrared observations of supermassive black holes\n---\n\nYITP-22-84\n\n\u00d8\n\nIntroduction\n============\n\nWhile black holes have been for a long time a central topic in gravitation theory, the fast-paced advancements" -"---\nabstract: 'Let $k \\geq 2$ be a constant. Given any $k$ convex polygons in the plane with a total of $n$ vertices, we present an $O(n\\log^{2k-3}n)$ time algorithm that finds a translation of each of the polygons such that the area of intersection of the $k$ polygons is maximized. 
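
The quantity maximized in the statement above can be explored directly on small instances. The following brute-force grid search (assuming `shapely` is installed) is not the paper's $O(n\log^{2k-3}n)$ algorithm; it only illustrates the objective, here for $k=2$ with one polygon held fixed.

```python
import numpy as np
from shapely.geometry import Polygon
from shapely.affinity import translate

P0 = Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])   # kept fixed
P1 = Polygon([(0, 0), (1.5, 0), (0.75, 2.0)])    # translated copy

best_area, best_v = -1.0, None
for dx in np.linspace(-1, 3, 61):
    for dy in np.linspace(-1, 3, 61):
        # area of intersection for this candidate translation of P1
        area = P0.intersection(translate(P1, xoff=dx, yoff=dy)).area
        if area > best_area:
            best_area, best_v = area, (dx, dy)

print(f"best translation {best_v}, overlap area {best_area:.3f}")
```
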
Given one such placement, we also give an $O(n)$ time algorithm which computes the set of all translations of the polygons which achieve this maximum.'\naddress:\n- 'Department of Mathematics, University of Georgia, Athens, GA 30602, USA'\n- 'Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA'\nauthor:\n- Hyuk Jun Kweon\n- Honglin Zhu\nbibliography:\n- 'ref.bib'\ntitle: Maximum Overlap Area of Several Convex Polygons Under Translations\n---\n\nIntroduction\n============\n\nShape matching is a critical area in computational geometry, with overlap area or volume often used to measure the similarity between shapes when translated. In this paper, we present a quasilinear time algorithm to solve the problem of maximizing the overlap area of several convex polygons, as stated in the following theorem.\n\n[thm]{}[mainTheorem]{} \\[thm:main\\] Let $P_0,P_1,\\dots,P_{k-1}$ be convex polygons, with a total of $n$ vertices, where $k$ is constant. In $O(n\\log^{2k-3}n)$ time, we can find translations ${\\mathbf{v}}_0,{\\mathbf{v}}_1,\\dots,{\\mathbf{v}}_{k-1}$" -"---\nabstract: 'The [orders of magnitude variation in]{} lithium abundances of evolved stars have long been a puzzle. Diluted signals, ambiguous evolutionary states and unknown masses have made it challenging to both map the expected lithium signals and explain the anomalously lithium-rich stars. We show here using a set of asteroseismically characterized evolved stars that the base lithium abundance in red giant stars is mass dependent, with higher mass stars having higher \u2018normal\u2019 lithium abundances, while highly lithium enhanced stars may cluster around 0.8 or 1.8 [M$_\\sun$]{}. We confirm previous studies that have shown that lithium enhancement and rapid rotation are often coincident, but find that the actual correlation between lithium abundance and the rotation rate, whether surface rotation, internal rotation, or radial differential rotation, is weak. Our data support previous assertions that most lithium rich giants are in the core-helium burning phase. We also note a tentative correlation between the highest lithium abundances and unusual carbon to nitrogen ratios, which is suggestive of binary interactions, though we find no simple correlation between lithium richness and indicators of binarity.'\nauthor:\n- Jamie Tayar\n- 'Joleen K.\u00a0Carlberg'\n- 'Claudia Aguilera-G\u00f3mez'\n- Maryum Sayeed\nbibliography:\n- 'arxiv.bib'\ntitle: 'Lithium in Kepler" -"---\nauthor:\n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \nbibliography:\n- 'sn-bibliography.bib'\ntitle: 'Evidence of free-bound transitions in warm dense matter and their impact on equation-of-state measurements'\n---\n\nThe study of matter at extreme temperatures ($T\\sim10^{3}-10^8\\,$K) and pressures ($P\\sim1-10^4\\,$Mbar) constitutes a highly active frontier at the interface of a variety of research fields including plasma physics, electronic structure, material science, and scientific computing\u00a0[@wdm_book; @drake2018high; @Hatfield_Nature_2021]. Such *warm dense matter* (WDM) occurs in astrophysical objects\u00a0[@Bailey2015] such as giant planet interiors\u00a0[@Liu2019; @Brygoo2021; @Kraus_Science_2022] and brown dwarfs\u00a0[@Kritcher_Nature_2020; @becker]. For terrestrial applications, WDM is of prime relevance for materials synthesis and discovery, with the recent observation of diamond formation at high pressures\u00a0[@Kraus2016; @Kraus2017] being a case in point. 
Additionally, the WDM regime must be traversed on the way to ignition\u00a0[@hu_ICF] in inertial confinement fusion (ICF)\u00a0[@Betti2016], where recent breakthroughs\u00a0[@Zylstra2022] promise a potential abundance of clean energy in the future.\n\nAs a direct consequence of this remarkable interest, WDM is nowadays routinely created in large research centers around the globe, including the European XFEL in Germany\u00a0[@Tschentscher_2017], SACLA in Japan\u00a0[@SACLA_2011], as well as LCLS\u00a0[@LCLS_2016], the OMEGA laser\u00a0[@OMEGA], the Z Pulsed Power" -"---\nabstract: 'In a recent work a quantum error mitigation protocol was applied to the expectation values obtained from circuits on the IBM Eagle quantum processor with up to $127$ qubits and up to $60$ CNOT layers. To benchmark the efficacy of this quantum protocol a physically motivated quantum circuit family was considered that allowed access to exact solutions in different regimes. The family interpolated between Clifford circuits and was additionally evaluated at low depth where exact validation is practical. It was observed that for highly entangling parameter regimes the circuits are beyond the validation of matrix product state and isometric tensor network state approximation methods. Here we compare the experimental results to matrix product operator simulations of the Heisenberg evolution, and find that they provide a closer approximation than these pure-state methods by exploiting the closeness to Clifford circuits and limited operator growth. Recently other approximation methods have been used to simulate the full circuit up to its largest extent. We observe a discrepancy of up to $20\\%$ among the different classical approaches so far, an uncertainty comparable to the bootstrapped error bars of the experiment. Based on the different approximation schemes we propose modifications to the original" -"---\nabstract: 'Accurate classification of molecular chemical motifs from experimental measurement is an important problem in molecular physics, chemistry and biology. In this work, we present neural network ensemble classifiers for predicting the presence (or lack thereof) of $41$ different chemical motifs on small molecules from simulated C, N and O K-edge X-ray absorption near-edge structure (XANES) spectra. Our classifiers not only reach a maximum average class-balanced accuracy of $0.99$ but also accurately quantify uncertainty. We also show that including multiple XANES modalities improves predictions notably on average, demonstrating a \u201cmulti-modal advantage\" over any single modality. In addition to structure refinement, our approach can be generalized for broad applications with molecular design pipelines.'\nauthor:\n- 'Matthew R.\u00a0Carbone'\n- 'Phillip M.\u00a0Maffettone'\n- Xiaohui Qu\n- Shinjae Yoo\n- Deyu Lu\ntitle: 'Accurate, uncertainty-aware classification of molecular chemical motifs from multi-modal X-ray absorption spectroscopy'\n---\n\n=1 =7\n\n#### Introduction\u2014 {#introduction .unnumbered}\n\nArtificial intelligence and machine learning (AI/ML), and more broadly data-driven methods, have made a huge impact in our society over the last decade, with exciting applications in image processing, self-driving vehicles and natural language processing using generative AI. 
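
A hypothetical sketch of the multi-modal ensemble idea in the XANES record above: concatenate the C, N and O spectra into one feature vector, train several independently seeded networks, and average their motif probabilities, using the spread as a crude uncertainty proxy. All shapes and the random data below are placeholders, not the paper's dataset or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_mol, n_grid, n_motifs = 500, 100, 41
X = np.hstack([rng.random((n_mol, n_grid)) for _ in range(3)])  # C, N, O modalities
y = rng.integers(0, 2, size=(n_mol, n_motifs))                  # multi-label motif presence

# Five members differing only in their random initialization.
ensemble = [MLPClassifier(hidden_layer_sizes=(64,), max_iter=300,
                          random_state=s).fit(X, y) for s in range(5)]

# Mean over members -> probability estimate; spread -> a crude uncertainty proxy.
probs = np.stack([m.predict_proba(X) for m in ensemble])
mean_p, std_p = probs.mean(axis=0), probs.std(axis=0)
pred = (mean_p > 0.5).astype(int)
```
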
The adaptation of AI/ML methods in scientific research has quickly spread" -"---\nabstract: 'Twin revolutions in wearable technologies and smartphone-delivered digital health interventions have significantly expanded the accessibility and uptake of mobile health (mHealth) interventions across various health science domains. Sequentially randomized experiments called micro-randomized trials (MRTs) have grown in popularity to empirically evaluate the effectiveness of these mHealth intervention components. MRTs have given rise to a new class of causal estimands known as \u201ccausal excursion effects\", which enable health scientists to assess how intervention effectiveness changes over time or is moderated by individual characteristics, context, or responses in the past. However, current data analysis methods for estimating causal excursion effects require pre-specified features of the observed high-dimensional history to construct a working model of an important nuisance parameter. While machine learning algorithms are ideal for automatic feature construction, their naive application to causal excursion estimation can lead to bias under model misspecification, potentially yielding incorrect conclusions about intervention effectiveness. To address this issue, this paper revisits the estimation of causal excursion effects from a meta-learner perspective, where the analyst remains agnostic to the choices of supervised learning algorithms used to estimate nuisance parameters. The paper presents asymptotic properties of the novel estimators and compares them theoretically and through extensive simulation" -"---\nabstract: 'Deploying service robots in our daily life, whether in restaurants, warehouses or hospitals, calls for the ability to reason about the interactions happening in dense and dynamic scenes. In this paper, we present and benchmark three new approaches to model and predict multi-agent interactions in dense scenes, including the use of an intuitive qualitative representation. The proposed solutions take into account static and dynamic context to predict individual interactions. They exploit an input- and a temporal-attention mechanism, and are tested on medium- and long-term time horizons. The first two approaches integrate different relations from the so-called Qualitative Trajectory Calculus\u00a0(QTC) within a state-of-the-art deep neural network to create a symbol-driven neural architecture for predicting spatial interactions. The third approach implements a purely data-driven network for motion prediction, the output of which is post-processed to predict QTC spatial interactions. Experimental results on a popular robot dataset of challenging crowded scenarios show that the purely data-driven prediction approach generally outperforms the other two. The three approaches were further evaluated on different but related human scenarios to assess their generalisation capability.'\nauthor:\n- 'Sariah Mghames$^1$, Luca Castri$^1$, Marc Hanheide$^1$, Nicola Bellotto$^{1,2}$ [^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\ntitle: '**Qualitative Prediction" -"---\nabstract: 'Counterspeech offers direct rebuttals to hateful speech by challenging perpetrators of hate and showing support to targets of abuse. It provides a promising alternative to more contentious measures, such as content moderation and deplatforming, by contributing a greater amount of positive online speech rather than attempting to mitigate harmful content through removal. 
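
For the qualitative interaction-prediction record above: the basic Qualitative Trajectory Calculus (QTC_B) relations it builds on can be computed from consecutive positions of two agents. A minimal sketch under the usual textbook definition; the threshold `eps` and the coordinates are illustrative assumptions.

```python
import numpy as np

def qtc_b(k_prev, k_next, l_prev, l_next, eps=1e-6):
    """Each symbol says whether an agent moves towards (-), away from (+),
    or keeps its distance (0) to the other between consecutive frames."""
    d = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    q1 = d(k_next, l_prev) - d(k_prev, l_prev)   # k relative to l's old position
    q2 = d(l_next, k_prev) - d(l_prev, k_prev)   # l relative to k's old position
    sym = lambda v: "0" if abs(v) < eps else ("-" if v < 0 else "+")
    return sym(q1) + sym(q2)

# agent k approaches while agent l retreats -> "-+"
print(qtc_b(k_prev=(0, 0), k_next=(0.5, 0), l_prev=(2, 0), l_next=(2.6, 0)))
```
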
Advances in the development of large language models mean that the process of producing counterspeech could be made more efficient by automating its generation, which would enable large-scale online campaigns. However, we currently lack a systematic understanding of several important factors relating to the efficacy of counterspeech for hate mitigation, such as which types of counterspeech are most effective, what are the optimal conditions for implementation, and which specific effects of hate it can best ameliorate. This paper aims to fill this gap by systematically reviewing counterspeech research in the social sciences and comparing methodologies and findings with computer science efforts in automatic counterspeech generation. By taking this multi-disciplinary view, we identify promising future directions in both fields.'\nauthor:\n- |\n Yi-Ling Chung^1^ Gavin Abercrombie^2^ Florence Enock^1^\\\n [ **Jonathan Bright^1^ **Verena Rieser^2^[^1]**** ]{}\\\n \\\n ^1^The Alan Turing Institute ^2^The Interaction Lab, Heriot-Watt University\\\n {ychung, fenock, jbright}@turing.ac.uk\\" -"---\nabstract: 'The goal of Automatic Voice Over (AVO) is to generate speech in sync with a silent video given its text script. Recent AVO frameworks built upon text-to-speech synthesis (TTS) have shown impressive results. However, the current AVO learning objective of acoustic feature reconstruction brings in indirect supervision for inter-modal alignment learning, thus limiting the synchronization performance and synthetic speech quality. To this end, we propose a novel AVO method leveraging the learning objective of self-supervised discrete speech unit prediction, which not only provides more direct supervision for the alignment learning, but also alleviates the mismatch between the text-video context and acoustic features. Experimental results show that our proposed method achieves remarkable lip-speech synchronization and high speech quality by outperforming baselines in both objective and subjective evaluations. Code and speech samples are publicly available.'\naddress: |\n $^1 $National University of Singapore, Singapore $^2 $The University of Texas at Dallas, USA\\\n $^3 $Shenzhen Research Institute of Big Data, School of Data Science,\\\n The Chinese University of Hong Kong, Shenzhen, China\ntitle: |\n High-Quality Automatic Voice Over with Accurate Alignment:\\\n Supervision through Self-Supervised Discrete Speech Units[^1][^2]\n---\n\n**Index Terms**: Text-to-speech, lip-speech synchronization, automatic voice over, discrete speech units, speech synthesis\n\nIntroduction" -"---\nabstract: 'Blockchain\u2019s influence extends beyond finance, impacting diverse sectors such as real estate, oil and gas, and education. This extensive reach stems from blockchain\u2019s intrinsic ability to reliably manage digital transactions and supply chains. Within the oil and gas sector, the merger of blockchain with supply chain management and data handling is a notable trend. The supply chain encompasses several operations: extraction, transportation, trading, and distribution of resources. Unfortunately, the current supply chain structure misses critical features such as transparency, traceability, flexible trading, and secure data storage \u2014 all of which blockchain can provide. Nevertheless, it is essential to investigate blockchain\u2019s security and privacy in the oil and gas industry. Such scrutiny enables the smooth, secure, and usable execution of transactions. 
For this purpose, we reviewed $124$ peer-reviewed academic publications, conducting an in-depth analysis of $21$ among them. We classified the articles by their relevance to various phases of the supply chain flow: upstream, midstream, downstream, and data management. Despite blockchain\u2019s potential to address existing security and privacy voids in the supply chain, there is a significant lack of practical implementation of blockchain integration in oil and gas operations. This deficiency substantially challenges the transition from conventional methods to" -"---\nabstract: 'It is well known that a single anchor can be used to determine the position and orientation of an agent communicating with it. However, it is not clear what information about the anchor or the agent is necessary to perform this localization, especially when the agent is in the near-field of the anchor. Hence, in this paper, to investigate the limits of localizing an agent with some uncertainty in the anchor location, we consider a wireless link consisting of source and destination nodes. More specifically, we present a Fisher information theoretical investigation of the possibility of estimating different combinations of the source and destination\u2019s position and orientation from the signal received at the destination. To present a comprehensive study, we perform this Fisher information theoretic investigation under both the near and far field propagation models. One of the key insights is that while the source or destination\u2019s $3$D orientation can be jointly estimated with the source or destination\u2019s $3$D position in the near-field propagation regime, only the source or destination\u2019s $2$D orientation can be jointly estimated with the source or destination\u2019s $2$D position in the far-field propagation regime. Also, a simulation of the FIM indicates that in the" -"---\nabstract: 'Safety is a central requirement for autonomous system operation across domains. Hamilton-Jacobi (HJ) reachability analysis can be used to construct \u201cleast-restrictive\u201d safety filters that result in infrequent, but often extreme, control overrides. In contrast, control barrier function (CBF) methods apply smooth control corrections to guard the system against an often conservative safety boundary. This paper provides an online scheme to construct an implicit CBF through HJ reach-avoid differential dynamic programming in a receding-horizon framework, enabling smooth safety filtering with infinite-time safety guarantees. Simulations with the Dubins car and 5D bicycle dynamics demonstrate the scheme\u2019s ability to preserve safety smoothly without the conservativeness of handcrafted CBFs.'\nauthor:\n- 'Athindran Ramesh Kumar, Kai-Chieh Hsu, Peter J. Ramadge, and Jaime F. Fisac [^1]'\nbibliography:\n- 'kai.bib'\n- 'references.bib'\ntitle: ' Fast, Smooth, and Safe: Implicit Control Barrier Functions through Reach-Avoid Differential Dynamic Programming '\n---\n\nAutonomous systems, Reach-avoid analysis, Control barrier functions\n\nIntroduction\n============\n\nimproved sensors, computation, learning, and decision-making techniques, autonomous robotic systems are becoming increasingly capable. However, deploying these systems in safety-critical environments requires robust fail-safe methods to ensure the avoidance of catastrophic *failure states*. 
To enforce safe behavior, various modern techniques rely on finding a *safe set* ${{\\Omega}}$," -"---\nabstract: 'Chromonic nematics are lyotropic liquid crystals that have already been known for half a century, but have only recently raised interest for their potential applications in life sciences. Determining elastic constants and anchoring strengths for rigid substrates has thus become a priority in the characterization of these materials. Here, we present a method to determine chromonics\u2019 planar anchoring strength. We call it *geometric* as it is based on recognition and fitting of the stable equilibrium shapes of droplets surrounded by the isotropic phase in a thin cell with plates enforcing parallel alignments of the nematic director. We apply our method to shapes observed in experiments; they resemble elongated rods with round ends, which are called *b\u00e2tonnets*. Our theory also predicts other droplets\u2019 equilibrium shapes, which are either slender and round, called *discoids*, or slender and pointed, called *tactoids*. In particular, sufficiently small droplets are expected to display shape bistability, with two equilibrium shapes, one tactoid and one discoid, exchanging roles as stable and metastable shapes upon varying their common area.'\nauthor:\n- Silvia Paparini\n- 'Epifanio G. Virga'\ntitle: 'A geometric method to determine chromonics\u2019 planar anchoring strength'\n---\n\nIntroduction {#sec:intro}\n============\n\nLiquid crystals come in two fashions:" -"---\nabstract: 'Object detection and segmentation are two core modules of an autonomous vehicle perception system. They should have high efficiency and low latency while reducing computational complexity. Currently, the most commonly used algorithms are based on deep neural networks, which guarantee high efficiency but require high-performance computing platforms. In the case of autonomous vehicles, i.e. cars, but also drones, it is necessary to use embedded platforms with limited computing power, which makes it difficult to meet the requirements described above. A reduction in the complexity of the network can be achieved by using an appropriate: architecture, representation (reduced numerical precision, quantisation, pruning), and computing platform. In this paper, we focus on the first factor \u2013 the use of so-called detection-segmentation networks as a component of a perception system. We considered the task of segmenting the drivable area and road markings in combination with the detection of selected objects (pedestrians, traffic lights, and obstacles). We compared the performance of three different architectures described in the literature: MultiTask V3, HybridNets, and YOLOP. We conducted the experiments on a custom dataset consisting of approximately 500 images of the drivable area and lane markings, and 250 images of detected objects. Of the three" -"---\nabstract: 'Quantum computers and simulators can potentially outperform classical computers in finding ground states of classical and quantum Hamiltonians. However, if this advantage can persist in the presence of noise without error correction remains unclear. In this paper, by exploiting the principle of Lagrangian duality, we develop a numerical method to classically compute a certifiable lower bound on the minimum energy attainable by the output state of a quantum circuit in the presence of depolarizing noise. 
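
Returning to the reach-avoid record above: the control-barrier-function filtering it discusses can be illustrated for a 2D single integrator avoiding a disc, where the minimum-norm safe input has a closed form. The dynamics, barrier, and gain below are toy assumptions, not the paper's HJ-based implicit barrier.

```python
import numpy as np

def cbf_filter(x, u_des, center, radius, alpha=1.0):
    h = np.sum((x - center)**2) - radius**2      # h(x) >= 0  <=>  safe
    grad = 2.0 * (x - center)                    # dh/dx
    # Enforce grad . u >= -alpha * h; project u_des onto this half-space.
    viol = grad @ u_des + alpha * h
    if viol >= 0:
        return u_des                             # desired input already safe
    return u_des - viol * grad / (grad @ grad)   # closest safe input

x = np.array([1.2, 0.1])
u = cbf_filter(x, u_des=np.array([-1.0, 0.0]), center=np.zeros(2), radius=1.0)
print(u)   # corrected input so that h decays no faster than -alpha*h
```
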
We provide theoretical and numerical evidence that this approach can provide circuit-architecture dependent bounds on the performance of noisy quantum circuits.'\nauthor:\n- 'Sattwik Deb Mishra$^{*}$'\n- 'Miguel Fr\u00edas-P\u00e9rez$^{*}$'\n- Rahul Trivedi\nbibliography:\n- 'vqa\\_paper.bib'\ntitle: Classically computing performance bounds on depolarized quantum circuits\n---\n\nIntroduction\n============\n\nFault-tolerant quantum computers hold promise for outperforming classical computers at several computational tasks. One of the most explored computational tasks is the problem of finding the ground state of a given many-body Hamiltonian \u2014 a problem that naturally arises in studying equilibrium properties of condensed matter systems [@amico2008entanglement]. Moreover, classical optimization problems can also be framed as finding ground states of commuting Hamiltonians [@gharibian2015quantum]. Unsurprisingly, quantum algorithms for finding Hamiltonian ground states have been extensively studied" -"---\nabstract: 'In the $0+1$ dimensional imaginary-time path integral formulation of quantum impurity problems, the retarded action encodes the hybridization of the impurity with the bath. In this Article, we explore the computational power of representing the retarded action as matrix product state (RAMPS). We focus on the challenging Kondo regime of the single-impurity Anderson model, where non-perturbative strong-correlation effects arise at very low energy scales. We demonstrate that the RAMPS approach reliably reaches the Kondo regime for a range of interaction strengths $U$, with a numerical error scaling as a weak power law with inverse temperature. We investigate the convergence behavior of the method with respect to bond dimension and time discretization by analyzing the error of local observables in the full interacting problem and find polynomial scaling in both parameters. Our results show that the RAMPS approach offers promise as an alternative tool for studying quantum impurity problems in regimes that challenge established methods, such as multi-orbital systems. Overall, our study contributes to the development of efficient and accurate non-wavefunction-based tensor-network methods for quantum impurity problems.'\nauthor:\n- Benedikt Kloss\n- Julian Thoenniss\n- Michael Sonner\n- Alessio Lerose\n- 'Matthew T. Fishman'\n- 'E. M. Stoudenmire'\n-" -"---\nabstract: 'This research paper addresses the challenge of detecting obscured wildfires (when the fire flames are covered by trees, smoke, clouds, and other natural barriers) in real-time using drones equipped only with RGB cameras. We propose a novel methodology that employs semantic segmentation based on the temporal analysis of smoke patterns in video sequences. Our approach utilizes an encoder-decoder architecture based on deep convolutional neural network architecture with a pre-trained CNN encoder and 3D convolutions for decoding while using sequential stacking of features to exploit temporal variations. The predicted fire locations can assist drones in effectively combating forest fires and pinpoint fire retardant chemical drop on exact flame locations. We applied our method to a curated dataset derived from the FLAME2 dataset that includes RGB video along with IR video to determine the ground truth. 
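
A schematic sketch of the temporal segmentation idea just described: a shared 2D encoder per frame, features stacked along time, and a 3D-convolutional decoder emitting one smoke/fire mask. Channel counts and depths are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TemporalSegNet(nn.Module):
    def __init__(self, frames=4):
        super().__init__()
        self.encoder = nn.Sequential(           # shared per-frame 2D encoder
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(           # 3D conv fuses the temporal stack
            nn.Conv3d(32, 16, kernel_size=(frames, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),
        )

    def forward(self, clip):                    # clip: (B, T, 3, H, W)
        feats = [self.encoder(clip[:, t]) for t in range(clip.shape[1])]
        vol = torch.stack(feats, dim=2)         # (B, 32, T, H, W)
        return self.decoder(vol).squeeze(2)     # (B, 1, H, W) mask logits

mask_logits = TemporalSegNet()(torch.rand(2, 4, 3, 64, 64))
print(mask_logits.shape)  # torch.Size([2, 1, 64, 64])
```
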
Our proposed method has the unique ability to detect obscured fire and achieves a Dice score of 85.88%, while achieving a high precision of 92.47% and classification accuracy of 90.67% on test data, showing promising results when inspected visually. Indeed, our method outperforms other methods by a significant margin in terms of video-level fire classification as we obtained about 100% accuracy using MobileNet+CBAM as" -"---\nauthor:\n- Matt Ryan\n- Gary Glonek\n- Jono Tuke\n- Melissa Humphries\nbibliography:\n- 'main.bib'\ntitle: Capturing functional connectomics using Riemannian partial least squares\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe functional and anatomical connections of the human brain form complex networks that link the infrastructure of our minds. Understanding these connectomes has the potential to provide insight into the effect of neurological diseases which can be used to better inform targeted interventions and treatment strategies[@Contreras2015; @Yang2022a]. In particular, the functional connectome can shed new light on neurological conditions such as schizophrenia and , two conditions that alter brain function from healthy, neurotypical controls[@Woodward2015; @Shi2017].\n\nA popular approach used to investigate brain function is functional magnetic resonance imaging (fMRI), a non-invasive neuroimaging technique that measures blood flow through the brain over time[@Ogawa1990]. An fMRI image is a complex spatio-temporal picture of the brain with voxels (volumetric pixels) describing the spatial location and a time series for each voxel describing the blood flow over time. To reduce the spatial complexity, voxels can be collated into user-specified regions of interest (ROIs). Functional connectomes can then be investigated through the Pearson correlation matrix between ROIs, known as the functional connectivity matrix.\n\nOne approach to investigating functional connectivity is using the" -"---\nabstract: 'A growing number of studies have investigated the large-scale drivers and upstream precursors of extreme weather events, making it clear that the earliest warning signs of extreme events can be remote in both time and space from the impacted region. Integrating and leveraging our understanding of dynamical precursors provides a new perspective on ensemble forecasting for extreme events, focused on building story-lines of possible event evolution. This then acts as a tool for raising awareness of the conditions conducive to high-impact weather, and providing early warning of their possible development. However, operational applications of this developing knowledge-base are limited so far, perhaps partly for want of a clear framework for doing so. Here, we present such a framework, supported by open software tools, designed for identifying the large-scale precursors of categorical weather events in an automated fashion, and for reducing them to scalar indices suitable for statistical prediction, forecast interpretation, and model validation. We demonstrate this framework by systematically analysing the precursor circulations of daily precipitation extremes across 18 regional- to national-scale European domains. We discuss the precursor rainfall dynamics for three disparate regions, and show our findings are consistent with, and extend, previous findings. We provide an estimation" -"---\nabstract: 'We introduce a new Hopf algebra that operates on pairs of finite interval partitions and permutations of equal length. 
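
The functional connectivity construction in the connectomics record above reduces to one line of numpy once ROI time series are in hand; the random data below stand in for fMRI signals.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 200
roi_series = rng.standard_normal((n_regions, n_timepoints))  # one row per ROI

fc = np.corrcoef(roi_series)       # (10, 10) functional connectivity matrix
assert fc.shape == (n_regions, n_regions)
print(np.round(fc[:3, :3], 2))     # symmetric, with unit diagonal
```
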
This algebra captures *vincular patterns*, which involve specifying both the permutation patterns and the consecutive occurrence of values. Our motivation stems from linear functionals that encode the number of occurrences of these patterns, and we show that they behave well with respect to the operations of this Hopf algebra.'\nauthor:\n- 'Joscha Diehl$^\dagger$, Emanuele Verri$^\dagger$'\nbibliography:\n- 'references.bib'\ndate: ' $^\dagger$*Institute of Mathematics and Computer Science, University of Greifswald*\\'\ntitle: Hopf Algebra on Vincular Permutation Patterns\n---\n\n[ ]{}\n\nIntroduction\n============\n\n*Permutation patterns* are ubiquitous in discrete mathematics. Much effort is devoted to developing algorithms that count these patterns efficiently; see, for example, [@even2020independence] and [@even2021counting].\n\nThey have also been successfully used in *time series analysis* in the popular work from [@bandt2002permutation], where the authors introduced the concept of *permutation entropy*. For a discrete time series, consider only the order of the values. As an example, the time series\n\n -------------------------------------------------------------\n ![image](pictures/time_series/time_series.png){width=\"2cm\"}\n 123456\n -------------------------------------------------------------\n\ncan be [\u201creduced\u201d]{} to the permutation $134265 \\in {\\mathbf{S}}_{6}$. Permutation entropy is based on specific permutation patterns, namely consecutive patterns. If we fix an order for" -"---\nabstract: 'Multimodal learning is an essential paradigm for addressing complex real-world problems, where individual data modalities are typically insufficient to accurately solve a given modelling task. While various deep learning approaches have successfully addressed these challenges, their reasoning process is often opaque, limiting the capabilities for a principled explainable cross-modal analysis and any domain-expert intervention. In this paper, we introduce SHARCS (SHARed Concept Space) \u2013 a novel concept-based approach for explainable multimodal learning. SHARCS learns and maps interpretable concepts from different heterogeneous modalities into a single unified concept-manifold, which leads to an intuitive projection of semantically similar cross-modal concepts. We demonstrate that such an approach can lead to inherently explainable task predictions while also improving downstream predictive performance. Moreover, we show that SHARCS can operate and significantly outperform other approaches in practically significant scenarios, such as retrieval of missing modalities and cross-modal explanations. Our approach is model-agnostic and easily applicable to different types (and number) of modalities, thus advancing the development of effective, interpretable, and trustworthy multimodal approaches.'\nauthor:\n- |\n Gabriele Dominici\\\n Department of Computer Science\\\n University of Cambridge\\\n Cambridge, UK\\\n `gd489@cam.ac.uk`\\\n Pietro Barbiero\\\n Department of Computer Science\\\n University of Cambridge\\\n Cambridge, UK\\\n `pb737@cam.ac.uk`\\\n Lucie Charlotte Magister\\\n Department" -"---\nabstract: 'The nearest GRB 170817A provided an opportunity to probe the angular structure of the jet of this short gamma-ray burst (SGRB), by using its off-axis observed afterglow emission. It is investigated whether the afterglow-constrained jet structures can be consistent with the luminosity of the prompt emission of GRB 170817A. 
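
A brute-force sketch of counting vincular pattern occurrences, the objects the Hopf-algebra record above is built on. In vincular notation, letters not separated by a dash must occupy consecutive positions in the host permutation (a pattern with no dashes at all is a consecutive pattern, the kind permutation entropy uses). The encoding of the adjacency constraints below is our own illustrative convention.

```python
from itertools import combinations

def count_vincular(perm, pattern, adjacent):
    """Count occurrences of `pattern` in `perm`, where for each index i in
    `adjacent` the chosen positions satisfy pos[i+1] == pos[i] + 1.
    adjacent={1} with pattern (2, 3, 1) encodes the vincular pattern 2-31."""
    k, count = len(pattern), 0
    for pos in combinations(range(len(perm)), k):
        if any(pos[i + 1] != pos[i] + 1 for i in adjacent):
            continue
        vals = [perm[p] for p in pos]
        rank = sorted(range(k), key=lambda i: vals[i])        # argsort of values
        if all(pattern[rank[j]] == j + 1 for j in range(k)):  # order-isomorphic?
            count += 1
    return count

print(count_vincular((3, 1, 4, 2, 6, 5), (2, 3, 1), adjacent={1}))  # 1: (3, 4, 2)
```
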
Furthermore, by assuming that all SGRBs including GRB 170817A have the same explosive mechanism and jet structure, we apply the different jet structures to the calculation of the flux and redshift distributions of the SGRB population, in comparison with the observational distributions of the Swift and Fermi sources. As a result, it is found that the single-Gaussian structure can be basically ruled out, whereas the power-law and two-Gaussian models can in principle survive.'\nauthor:\n- 'Xiao-Feng Cao'\n- 'Wei-Wei Tan'\n- 'Yun-Wei Yu'\n- 'Zhen-Dong Zhang'\ntitle: Joint constraint on the jet structure from the short GRB population and GRB 170817A\n---\n\n[UTF8]{}[gbsn]{}\n\nIntroduction\n============\n\nGamma-ray bursts (GRBs) are generated by highly-beamed relativistic jets, which are driven by rapidly rotating black hole or neutron star engines. Before the gamma-ray emission is produced, the jets should first propagate through dense progenitor material, which can be a stellar envelope for" -"---\nabstract: 'The *Dissipative Spectral Form Factor* (DSFF), recently introduced in [@li2021spectral] for the Ginibre ensemble, is a key tool to study universal properties of dissipative quantum systems. In this work we compute the DSFF for a large class of random matrices with real or complex entries up to an intermediate time scale, confirming the predictions from [@li2021spectral]. The analytic formula for the DSFF in the real case was previously unknown. Furthermore, we show that for short times the connected component of the DSFF exhibits a non-universal correction depending on the fourth cumulant of the entries. These results are based on the central limit theorem for linear eigenvalue statistics of non-Hermitian random matrices [@cipolloni2021fluctuation; @cipolloni2019central].'\naddress:\n- 'Princeton Center for Theoretical Science, Princeton University, Princeton, NJ 08544, USA'\n- 'Sherrerd Hall, Princeton University, Princeton, NJ 08540, USA'\nauthor:\n- Giorgio Cipolloni\n- Nicolo Grometto\nbibliography:\n- 'references.bib'\ntitle: 'THE DISSIPATIVE SPECTRAL FORM FACTOR FOR I.I.D. MATRICES'\n---\n\nIntroduction\n============\n\nNon-Hermitian physics has significantly advanced in recent years, leading to a deeper understanding of open (dissipative) quantum systems [@deng2010exciton; @muller2012engineered; @ritsch2013cold; @sieberer2016keldysh; @chou2011non], optics [@feng2017non; @el2018non], biological systems [@may1972will; @marchetti2013hydrodynamics], acoustics [@ma2016acoustic; @cummer2016controlling], and many more. The relaxation of the Hermiticity" -"---\nabstract: 'We show that the hydrodynamic lubrication of contacting conformal surfaces with a typical texture height gives rise to a universal behaviour in the Stribeck curve in which the friction coefficient shows an anomalous power-law dependence on the Sommerfeld number, $\\mu \\sim S^{2/3}$. When the gap height drops below the \u2018texture length scale\u2019, deviations from $S^{2/3}$ occur, which may resemble the onset of elasto-hydrodynamic and boundary lubrication. Within this framework, we analyse literature data for oral processing and find $S^{2/3}$ scaling with deviations consistent with measured lengthscales.'\nauthor:\n- 'James A. Richards'\n- 'Patrick B. Warren'\n- 'Wilson C. K. 
Poon'\ntitle: 'Gap-Dependent Hydrodynamic Lubrication in Conformal Contacts'\n---\n\nIntroduction\\[sec:pre:calcNon\\]\n===============================\n\n![Schematic Stribeck curve, friction coefficient ($\\mu = {F}/{N}$) as a function of Sommerfeld number ($S=\\eta U {N}/R$, for sliding speed $U$, lubricant viscosity $\\eta$ and radius of curvature of lubrication geometry $R$). Regimes of $\\mu$ with decreasing $S$: hydrodynamic lubrication (HL), elasto-hydrodynamic lubrication (EHL) at minimum, and constant boundary lubrication (BL, shading).[]{data-label=\"fig:stribeck\"}](Fig1.pdf)\n\nThe importance of lubricated contacts between sliding surfaces cannot be overstated\u00a0[@hamrock2004fundamentals; @Hirani2016]. Such contacts are often characterised by a \u2018pin-on-disc test\u2019, which is analysed in terms of a hemisphere of radius $R$ trapping a lubricant" -"---\nabstract: 'Despite the success of two-stage few-shot classification methods, in the episodic meta-training stage, the model suffers from severe overfitting. We hypothesize that it is caused by over-discrimination, i.e., the model learns to over-rely on the superficial features that fit for base class discrimination while suppressing the novel class generalization. To penalize over-discrimination, we introduce knowledge distillation techniques to retain novel generalization knowledge from the teacher model during training. Specifically, we select the teacher model as the one with the best validation accuracy during meta-training and restrict the symmetric Kullback-Leibler (SKL) divergence between the output distribution of the linear classifier of the teacher model and that of the student model. This simple approach outperforms the standard meta-training process. We further propose the Nearest Neighbor Symmetric Kullback-Leibler (NNSKL) divergence for meta-training to push the limits of knowledge distillation techniques. NNSKL takes few-shot tasks as input and penalizes the output of the nearest neighbor classifier, which affects the relationships between query embedding and support centers. By combining SKL and NNSKL in meta-training, the model achieves even better performance and surpasses state-of-the-art results on several benchmarks.'\nauthor:\n- Siqi Hui\n- Sanping Zhou\n- Ye Deng\n- Jinjun Wang\nbibliography:" -"---\nabstract:\n- '\\[...\\]'\n- 'In this work we introduce a phase-space description based on the positive P representation for bosonic fields interacting with a system of quantum emitters. The formalism is applicable to collective light-matter interactions and open quantum systems with decoherence. Conservation of particle numbers is considered, and a Jordan-Schwinger transformation enables the representation of multi-level quantum emitters. The evolution of the phase-space description of the combined system of emitters and field is formulated in terms of stochastic trajectories and we derive the rules of mapping from traditional quantum mechanics to this stochastic formalism. The resulting equations of motion encode deterministic, classical evolution with quantum effects incorporated by stochastic noise terms. The framework\u2019s equations and properties are provided without specifying the Hamiltonian, aiming for broad applicability in diverse research domains. 
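
For the few-shot record above: the symmetric Kullback-Leibler penalty between teacher and student class distributions is a few lines of PyTorch. The logits below are random placeholders; only the loss itself follows the record's description.

```python
import torch
import torch.nn.functional as F

def skl_loss(student_logits, teacher_logits):
    p = F.log_softmax(student_logits, dim=-1)
    q = F.log_softmax(teacher_logits, dim=-1)
    kl_ts = F.kl_div(p, q, log_target=True, reduction="batchmean")  # KL(teacher || student)
    kl_st = F.kl_div(q, p, log_target=True, reduction="batchmean")  # KL(student || teacher)
    return kl_ts + kl_st

student = torch.randn(8, 5, requires_grad=True)  # e.g. 5-way episode logits
teacher = torch.randn(8, 5)
loss = skl_loss(student, teacher.detach())       # teacher is kept frozen
loss.backward()
```
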
A potential future application is the quantum mechanical description of collective spontaneous emission of an incoherently pumped ensemble of atoms.'\nauthor:\n- Stasis Chuchurka\n- Andrei Benediktovitch\n- Nina Rohringer\nbibliography:\n- 'bib.bib'\ntitle: |\n Quantum stochastic trajectories for particles and fields\\\n based on positive P-representation\n---\n\n\\[sec:introduction\\] Introduction\n=================================\n\nPhase-space descriptions of many-body quantum systems are potentially powerful approaches to study their time evolution. In these approaches," -"---\nabstract: 'The perception of the value and propriety of modern engineered systems is changing. In addition to their functional and extra-functional properties, nowadays\u2019 systems are also evaluated by their sustainability properties. The next generation of systems will be characterized by an overall elevated sustainability\u2014including their post-life, driven by efficient value retention mechanisms. Current systems engineering practices fall short of supporting these ambitions and need to be revised appropriately. In this paper, we introduce the concept of circular systems engineering, a novel paradigm for systems sustainability, and define two principles to successfully implement it: end-to-end sustainability and bipartite sustainability. We outline typical organizational evolution patterns that lead to the implementation and adoption of circularity principles, and outline key challenges and research opportunities.'\nauthor:\n- 'Istvan David[^1^]{}'\n- 'Dominik Bork[^2^]{}'\n- 'Gerti Kappel[^2^]{}'\nbibliography:\n- 'bib/references.bib'\ndate: 'Received: date / Accepted: date'\ntitle: Circular Systems Engineering\n---\n\nIntroduction\n============\n\nThe steadily accelerating innovation pathways of humankind have rendered prevailing systems engineering paradigms unsustainable. By Brundtland\u2019s classic definition of sustainability\u00a0[@brundtland1987our], systems engineering falls short of \u201c*meeting the needs of the present without compromising the ability of future generations to meet their own needs*\u201d. Our systems engineering practices fail to fulfill the" -"---\nabstract: 'Despite great successes, model predictive control (MPC) relies on an accurate dynamical model and requires high onboard computational power, impeding its wider adoption in engineering systems, especially for nonlinear real-time systems with limited computation power. These shortcomings of MPC motivate this work to make such a control framework more practically viable for real-world applications. Specifically, to remove the required accurate dynamical model and reduce the computational cost for nonlinear MPC (NMPC), this paper develops a unified online data-driven predictive control pipeline to efficiently control a system with guaranteed safety without incurring large computational complexity. The new aspect of this idea is learning not only the real system but also the control policy, which results in a reasonable computational cost for the data-driven predictive controllers. More specifically, we first develop a spatial temporal filter (STF)-based concurrent learning scheme to systematically identify system dynamics for general nonlinear systems. We then develop a robust control barrier function (RCBF) for safety guarantees in the presence of model uncertainties and learn the RCBF-based NMPC policy. 
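
As a toy stand-in for the online model identification that the data-driven MPC record above relies on (its STF-based concurrent learning scheme is more elaborate), recursive least squares for a linear-in-parameters model shows the general shape of such an estimator. The scalar system and feature map are assumptions for illustration.

```python
import numpy as np

class RLS:
    def __init__(self, n_feat, n_out, lam=0.99):
        self.P = np.eye(n_feat) * 1e3   # parameter covariance
        self.theta = np.zeros((n_feat, n_out))
        self.lam = lam                  # forgetting factor

    def update(self, phi, target):
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta += np.outer(gain, target - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam

# identify a scalar system x+ = 0.9 x + 0.2 u from streaming data
rls, x = RLS(n_feat=2, n_out=1), 0.0
rng = np.random.default_rng(1)
for _ in range(200):
    u = rng.standard_normal()
    x_next = 0.9 * x + 0.2 * u
    rls.update(np.array([x, u]), np.array([x_next]))
    x = x_next
print(rls.theta.ravel())   # ~ [0.9, 0.2]
```
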
Furthermore, to mitigate the performance degradation due to the existing model uncertainties, we propose an online policy correction scheme through perturbation analysis and design of an ancillary feedback" -"---\nabstract: 'We present the unified computational framework for modeling the sulcal patterns of human brain obtained from the magnetic resonance images. The Wasserstein distance is used to align the sulcal patterns nonlinearly. These patterns are topologically different across subjects making the pattern [matching a challenge.]{} We work out the mathematical details [and develop the gradient descent algorithms for estimating the deformation field. We [further]{} quantify the image registration performance. This method is applied in identifying the differences between male and female sulcal patterns.]{}'\naddress: |\n University of Wisconsin, Madison, USA\\\n [{zijian.chen,mkchung}@wisc.edu]{}\nbibliography:\n- 'reference.ISBI.bib'\ntitle: Sulcal Pattern Matching with the Wasserstein Distance\n---\n\nIntroduction\n============\n\nThe concave regions in the highly convoluted cerebral cortex of the human brain are referred to as the [*sulci*]{} (Fig. \\[fig:R2\\_sulcal\\_2subj\\]). These complex tree-shaped sulcal curves are highly variable in length, area, depth, curvature and topology across different subjects [@cachia.TMI.2003]. There have been extensive studies that connect the variabilities of such biomarkers with the differences in cognitive or pathological characteristics between populations [@im2019sulcal]. However, since each subject have different topological patterns, it is difficult to match the sulcal patterns across subjects [@huang.2020.TMI]. One approach of reducing the difficulty of matching is to smooth the" -"---\nabstract: 'Long-period variables are bright, evolved red giant stars showing periodic photometric changes due to stellar pulsation. They follow one or more period-luminosity and period-age relations, which make them highly promising distance indicators and tracers of young and intermediate-age stellar populations. Such a potential is especially interesting in view of the massive amount of data delivered by modern large-scale variability surveys. Crucially, these applications require a clear theoretical understanding of pulsation physics in connection with stellar evolution. Here, I describe an ongoing effort from our collaboration dedicated to the modelling of stellar pulsation in evolved stars, and how this work is impacting our capability of investigating long-period variables and exploiting them for other astrophysical studies. Furthermore, I present our ongoing work aimed at assessing the potential of semi-regular variables, an often neglected sub-type of long-period variables, to be distance indicators complementary to their better-known, more evolved counterparts, the Mira variables.'\nauthor:\n- 'Michele Trabucchi$^{1,2}$'\ntitle: 'Long-Period Variables as distance and age indicators in the era of *Gaia* and LSST'\n---\n\nstars: AGB and post-AGB stars - stars: oscillations - stars: variables: general - stars: distances\n\nIntroduction {#sec:Introduction}\n============\n\nLow- and intermediate-mass stars ($0.8\\lesssim M/{\\rm M}_{\\odot}\\lesssim 8$) approach the end" -"---\nabstract: 'Inverse protein folding is challenging due to its inherent one-to-many mapping characteristic, where numerous possible amino acid sequences can fold into a single, identical protein backbone. 
This task involves not only identifying viable sequences but also representing the sheer diversity of potential solutions. However, existing discriminative models, such as transformer-based auto-regressive models, struggle to encapsulate the diverse range of plausible solutions. In contrast, diffusion probabilistic models, as an emerging genre of generative approaches, offer the potential to generate a diverse set of sequence candidates for determined protein backbones. We propose a novel graph denoising diffusion model for inverse protein folding, where a given protein backbone guides the diffusion process on the corresponding amino acid residue types. The model infers the joint distribution of amino acids conditioned on the nodes\u2019 physiochemical properties and local environment. Moreover, we utilize amino acid replacement matrices for the diffusion forward process, encoding the biologically meaningful prior knowledge of amino acids from their spatial and sequential neighbors as well as themselves, which reduces the sampling space of the generative process. Our model achieves state-of-the-art performance over a set of popular baseline methods in sequence recovery and exhibits great potential in generating diverse protein sequences" -"---\nabstract: 'In this paper, we study the maximum clique problem on hyperbolic random graphs. A hyperbolic random graph is a mathematical model for analyzing scale-free networks since it effectively explains the power-law degree distribution of scale-free networks. We propose a simple algorithm for finding a maximum clique in a hyperbolic random graph. We first analyze the running time of our algorithm theoretically. We can compute a maximum clique on a hyperbolic random graph $G$ in $O(m + n^{4.5(1-\\alpha)})$ expected time if a geometric representation is given or in $O(m + n^{6(1-\\alpha)})$ expected time if a geometric representation is not given, where $n$ and $m$ denote the numbers of vertices and edges of $G$, respectively, and $\\alpha$ denotes a parameter controlling the power-law exponent of the degree distribution of $G$. Also, we implemented and evaluated our algorithm empirically. Our algorithm outperforms the previous algorithm \\[BFK18\\] practically and theoretically. Beyond hyperbolic random graphs, we also ran experiments on real-world networks. For most instances, we get large cliques close to the optimum solutions efficiently.'\nauthor:\n- 'Eunjin Oh[^1]'\n- 'Seunghyeok Oh[^2]'\nbibliography:\n- 'paper.bib'\ntitle: 'Algorithms for Computing Maximum Cliques in Hyperbolic Random Graphs[[^3]]{}'\n---\n\nIntroduction\n============\n\nDesigning algorithms for analyzing large" -"---\nabstract: 'When an immiscible oil drop is immersed in a stably stratified ethanol-water mixture, the Marangoni flow on the surface of the drop can experience an oscillatory instability, so that the drop undergoes a transition from levitating to bouncing. The onset of the instability and its mechanisms have been studied previously [@li2021marangoni; @li2022marangoni], yet the bouncing motion of the drop itself, which is a completely different problem, has not yet been investigated. Here we study how the bouncing characteristics (jumping height, rising and sinking time) depend on the control parameters (drop radius, stratification strength, drop viscosity). We first record experimentally the bouncing trajectories of drops of different viscosities in different stratifications. 
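
Back to the inverse-protein-folding record above: its forward diffusion over residue types can be caricatured with a row-stochastic transition matrix. A real implementation would derive the matrix from amino acid replacement statistics (e.g., BLOSUM-like matrices); the uniform-noise `Q` below is a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
n_types, beta = 20, 0.05                           # 20 amino acids, noise rate
Q = (1 - beta) * np.eye(n_types) + beta / n_types  # rows sum to 1

def forward_step(residues):
    """One noising step: resample each residue from its transition row."""
    return np.array([rng.choice(n_types, p=Q[r]) for r in residues])

seq = rng.integers(0, n_types, size=30)            # a toy 30-residue sequence
noised = seq.copy()
for _ in range(10):                                # 10 forward steps
    noised = forward_step(noised)
print(np.mean(noised == seq))                      # fraction of residues intact
```
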
Then a simplified dynamical analysis is performed to get the scaling relations of the jumping height and the rising & sinking times. The rising & sinking time scales are found to depend on the drag coefficient of the drop $C_D^S$ in the stratified liquid, which is determined empirically for the current parameter space [@zhang2019core]. For low viscosity () oil drops, the results on the drag coefficient match the ones from the literature [@yick2009enhanced; @candelier2014history]. For high viscosity () oil drops, the parameter space had not been explored and" -"---\nabstract: 'This work is the second part of a simulation study investigating the processing of densely packed and moving granular assemblies by positron emission particle tracking (PEPT). Since medical PET scanners commonly used for PEPT are very expensive, a PET-like detector system based on cost-effective organic plastic scintillator bars is being developed and tested for its capabilities. In this context, the spatial resolution of a resting positron source, a source moving on a freely designed model path, and a particle motion given by a DEM (Discrete Element Method) simulation is studied using Monte Carlo simulations and the software toolkit Geant4. This not only extended the simulation and reconstruction to a moving source but also significantly improved the spatial resolution compared to previous work by adding oversampling and iteration to the reconstruction algorithm. Furthermore, in the case of a source following a trajectory developed from DEM simulations, a very good resolution of about 1mm in all three directions and an average three-dimensional deviation between simulated and reconstructed events of 2.3mm could be determined. Thus, the resolution for realistic particle motion within the generic grate system (which is the test rig for further experimental studies) is well below the smallest particle" -"---\nabstract: 'In this paper, a modification of the A\\* algorithm is considered for the shortest path problem. A weightage is introduced in the heuristic part of the A\\* algorithm to improve its efficiency. An application of the algorithm is considered for UAV path planning wherein velocity is taken as the weightage to the heuristic. At the outset, calculus of variations based Lagrange\u2019s equation was used to identify velocity as the decisive factor for the dynamical system. This approach would be useful for other problems as well to improve the efficiency of algorithms in those areas.'\naddress: 'Department of Basic Science, Muthoot Institute of Technology and Science, Ernakulam, Kerala 682308, India'\nauthor:\n- Renju Rajan\ntitle: 'Lagrangian based A\\* algorithm for automated reasoning'\n---\n\nalgorithm, heuristic, automated reasoning, graph theory\n\nIntroduction\n============\n\nArtificial Intelligence (AI) deals with a set of algorithms that realize automation which do not require human intervention in decision making\u00a0[@Mue18]. One of the most visible manifestations of artificial intelligence in daily life is in AI cameras in smartphones. In these AI cameras, camera settings are automated based on the scene detected\u00a0[@Kle14]. AI camera distinguishes between various scenes such as street, plant, food, text, etc., and captures" -"---\nabstract: 'Micro aerial vehicles are making a large impact in applications such as search-and-rescue, package delivery, and recreation. 
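
The A\* modification in the record above amounts to weighting the heuristic, $f(n) = g(n) + w\,h(n)$. A runnable grid sketch follows; with $w > 1$ the search favors the goal more greedily (faster, possibly suboptimal), and $w = 1$ recovers plain A\*. How the paper derives its velocity-based weightage is not reproduced here.

```python
import heapq

def weighted_astar(grid, start, goal, w=2.0):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier, g, came = [(w * h(start), start)], {start: 0}, {}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                       # reconstruct the path
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            if grid[nxt[0]][nxt[1]]:          # 1 marks an obstacle
                continue
            if g[cur] + 1 < g.get(nxt, float("inf")):
                g[nxt] = g[cur] + 1
                came[nxt] = cur
                heapq.heappush(frontier, (g[nxt] + w * h(nxt), nxt))
    return None

grid = [[0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 0, 0]]
print(weighted_astar(grid, (0, 0), (2, 3)))
```
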
Unfortunately, these diminutive drones are currently constrained to carrying small payloads, in large part because they use propellers optimized for larger aircraft and inviscid flow regimes. Fully realizing the potential of emerging microflyers requires next-generation propellers that are specifically designed for low-Reynolds number conditions and that include new features advantageous in highly viscous flows. One aspect that has received limited attention in the literature is the addition of roughness to propeller blades as a method of reducing drag and increasing thrust. To investigate this possibility, we used large eddy simulation to conduct a numerical investigation of smooth and rough propellers. Our results indicate that roughness produces a 2% increase in thrust and a 5% decrease in power relative to a baseline smooth propeller operating at the same Reynolds number of $Re_c=6500$, held constant by rotational speed. We corroborated our numerical findings using thrust-stand-based experiments on 3D-printed propellers identical to those of the numerical simulations. Our study confirms that surface roughness is an additional parameter within the design space for micro-propellers that will lead to unprecedented drone efficiencies and payloads.'\nauthor:\n-" -"---\nabstract: 'Complex graphic representations[\u2014]{}such as annotated visualizations, molecular structure diagrams, or Euclidean geometry[\u2014]{}convey information through overlapping perceptual relations. To author such representations, users are forced to use rigid, purpose-built tools with limited flexibility and expressiveness. User interface (UI) frameworks provide only limited relief as their tree-based models are a poor fit for expressing overlaps. We present Bluefish, a diagramming framework that extends UI architectures to support overlapping perceptual relations. Bluefish graphics are instantiated as *relational scenegraphs*: hierarchical data structures augmented with adjacency relations. Authors specify these relations with *scoped references* to components found elsewhere in the scenegraph. For layout, Bluefish *lazily materializes* necessary coordinate transformations. We demonstrate that Bluefish enables authoring graphic representations across a diverse range of domains while preserving the compositional and abstractional affordances of traditional UI frameworks. Moreover, we show how relational scenegraphs capture previously latent semantics that can later be retargeted (e.g., for screen reader accessibility).'\nauthor:\n- Josh Pollock\n- Catherine Mei\n- Grace Huang\n- Daniel Jackson\n- Arvind Satyanarayan\nbibliography:\n- 'sample-base.bib'\ntitle: 'Bluefish: A Relational Framework for Graphic Representations'\n---\n\n![image](figures/Teaser_Jetpack.png){width=\"\\textwidth\"}\n\nIntroduction\n============\n\nGraphic representations, such as tables, charts, and diagrams, are essential problem solving tools. They externalize information, aiding recall," -"---\nabstract: 'We propose and analyze a scalable and fully autonomous scheme for preparing spatially distributed multi-qubit entangled states in a dual-rail waveguide QED setup. In this approach, arrays of qubits located along two separated waveguides are illuminated by correlated photons from the output of a non-degenerate parametric amplifier. 
These photons drive the qubits into different classes of pure entangled steady states, for which the degree of multipartite entanglement can be conveniently adjusted by the chosen pattern of local qubit-photon detunings. Numerical simulations for moderate-sized networks show that the preparation time for these complex multi-qubit states increases at most linearly with the system size and that one may benefit from an additional speedup in the limit of a large amplifier bandwidth. Therefore, this scheme offers an intriguing new route for distributing ready-to-use multipartite entangled states across large quantum networks, without requiring any precise pulse control and relying on a single Gaussian entanglement source only.'\nauthor:\n- 'J. Agust\u00ed$^{1,2,3}$, X. H. H. Zhang$^{1,2,3}$, Y. Minoguchi$^4$, P. Rabl$^{1,2,3,4}$'\ntitle: 'Autonomous distribution of programmable multi-qubit entanglement in a dual-rail quantum network'\n---\n\nAs quantum computing and quantum communication systems with an increasing number of coherently integrated components become technologically available, a growing demand" -"---\nabstract: 'The physical connection between thermal convection in the solar interior and the solar wind remains unclear due to their significant scale separation. Using an extended version of the three-dimensional radiative magnetohydrodynamic code RAMENS, we perform the first comprehensive simulation of the solar wind formation, starting from the wave excitation and the small-scale dynamo below the photosphere. The simulation satisfies various observational constraints as a slow solar wind emanating from the coronal hole boundary. The magnetic energy is persistently released in the simulated corona, showing a hot upward flow at the interface between open and closed fields. To evaluate the energetic contributions from Alfv\u00e9n wave and interchange reconnection, we develop a new method to quantify the cross-field energy transport in the simulated atmosphere. The measured energy transport from closed coronal loops to open field accounts for approximately half of the total. These findings suggest a significant role of the supergranular-scale interchange reconnection in solar wind formation.'\nauthor:\n- Haruhisa Iijima\n- Takuma Matsumoto\n- Hideyuki Hotta\n- Shinsuke Imada\ntitle: |\n A Comprehensive Simulation of Solar Wind Formation from the Solar Interior:\\\n Significant Cross-field Energy Transport by Interchange Reconnection near the Sun \n---\n\nIntroduction {#sec:intro}\n============\n\nThe physical mechanisms" -"---\nauthor:\n- 'David Blessing [^1]'\n- 'J.D. Mireles James [^2]'\nbibliography:\n- 'papers.bib'\ntitle: |\n Weighted Birkhoff Averages\\\n and the Parameterization Method \n---\n\nIntroduction {#sec:intro}\n============\n\nSuppose that $\\Gamma$ is an invariant torus of a discrete or continuous time dynamical system. We say that $\\Gamma$ is a rotational invariant torus if the dynamics on $\\Gamma$ are topologically conjugate to independent irrational rotations. A quasiperiodic orbit is any orbit on a rotational invariant torus and, since the rotations are independent, all such orbits are dense in the torus.\n\nCantor families of invariant tori are common in structure preserving dynamical systems like reversible maps, area and volume preserving maps on manifolds, and also for higher dimensional generalizations to symplectic maps on (even dimensional) symplectic manifolds. 
Indeed, for such systems typical orbits are observed to be either chaotic or quasiperiodic. Given a long enough finite orbit segment sampled from an invariant torus, an important problem is to be able to rapidly and accurately approximate a parameterization of the invariant torus.\n\nTwo powerful approaches for solving this problem are given by the Parameterization method and the method of exponentially weighted Birkhoff sums. The Parameterization method is a functional analytic framework for studying" -"---\nabstract: 'Edge states occurring in Chern and quantum spin-Hall phases are signatures of the topological electronic band structure in two-dimensional (2D) materials. Recently, a new topological electromagnetic phase of graphene characterized by the optical N-invariant has been proposed. The optical N-invariant arises from repulsive Hall viscosity in hydrodynamic many-body electron systems, fundamentally different from the Chern and $Z_2$ invariants. In this paper, we introduce the topologically protected edge excitation \u2013 optical N-plasmon of interacting many-body electron systems in the topological optical N-phase. These optical N-plasmons are signatures of the topological plasmonic band structure in 2D materials. We demonstrate that optical N-plasmons exhibit fundamentally different dispersion relations, stability, and edge profiles from the topologically trivial edge magneto plasmons. Based on the optical N-plasmon, we design an ultra sub-wavelength broadband topological hydrodynamic circulator, which is a chiral quantum radio-frequency circuit component crucial for information routing and interfacing quantum-classical computing systems. Furthermore, we reveal that optical N-plasmons can be effectively tuned by the neighboring dielectric environment without breaking the topological properties. Our work provides a smoking gun signature of repulsive Hall viscosity and opens practical applications of topological electromagnetic phases of two-dimensional materials.'\nauthor:\n- Wenbo Sun\n- Todd Van Mechelen\n- Sathwik" -"---\nabstract: 'A micro-macro variant of the parallel-in-time algorithm Parareal has been applied to the ocean-circulation and sea-ice model FESOM2. The state-of-the-art software in climate research has been developed by the Alfred-Wegener-Institut (AWI) in Bremen, Germany. The algorithm requires two meshes of low and high spatial resolution to define the coarse and fine propagator. As a first assessment, we refined the PI mesh, increasing its resolution by a factor of 4. The main objective of this study was to demonstrate that micro-macro Parareal can provide convergence in diagnostic variables in complex climate research problems. After the introduction to FESOM2, we show how to generate the refined mesh and which interpolation methods were chosen. With the convergence results presented, we discuss the success of this attempt and which steps have to be taken to extend the approach to current research problems.'\nauthor:\n- 'B. $\\text{Philippi}^1$, T. $\\text{Slawig}^2$'\nbibliography:\n- 'sources/sources.bib'\ntitle: 'A Micro-Macro Parareal Implementation for the Ocean-Circulation Model FESOM2'\n---\n\n**${}^1$ *Christian-Albrechts-Universit\u00e4t Kiel, Dept. of Computer Science, b.k.philippi@gmail.com***\n\n**${}^2$ *Christian-Albrechts-Universit\u00e4t Kiel, Dept. 
of Computer Science, ts@informatik.uni-kiel.de***\n\nIntroduction\n============\n\nPredicting the earth\u2019s future climate by numerical simulation represents one of the most urgent scientific tasks of our time. The urgency for understanding" -"---\nabstract: 'We present working notes on transfer learning with semi-supervised dataset annotation for the BirdCLEF 2023 competition, focused on identifying African bird species in recorded soundscapes. Our approach utilizes existing off-the-shelf models, BirdNET and MixIT, to address representation and labeling challenges in the competition. We explore the embedding space learned by BirdNET and propose a process to derive an annotated dataset for supervised learning. Our experiments involve various models and feature engineering approaches to maximize performance on the competition leaderboard. The results demonstrate the effectiveness of our approach in classifying bird species and highlight the potential of transfer learning and semi-supervised dataset annotation in similar tasks.'\naddress: 'Georgia Institute of Technology, North Ave NW, Atlanta, GA 30332'\nauthor:\n- Anthony Miyaguchi\n- Nathan Zhong\n- Murilo Gustineli\n- Chris Hayduk\nbibliography:\n- 'report.bib'\ntitle: 'Transfer Learning with Semi-Supervised Dataset Annotation for Birdcall Classification'\n---\n\n\\[ orcid=0000-0002-9165-8718, email=acmiyaguchi@gatech.edu, url=https://acmiyaguchi.me, \\]\n\n\\[ email=nathanzhong@gatech.edu, \\]\n\n\\[ orcid=0009-0003-9818-496X, email=murilogustineli@gatech.edu, url=https://linkedin.com/in/murilo-gustineli, \\]\n\n\\[ email=chayduk3@gatech.edu \\]\n\nTransfer Learning, Dataset Annotation, BirdNET, Bird-MixIT, CEUR-WS\n\nIntroduction\n============\n\nThe BirdCLEF 2023 competition [@birdclef2023] focuses on classifying bird species in 10-minute-long soundscapes recorded in various parts of Africa as part of the LifeCLEF lab [@lifeclef2023]." -"---\nabstract: '[Transient gravitational waves (aka gravitational wave bursts)]{} within the nanohertz frequency band could be generated by a variety of astrophysical phenomena such as the encounter of supermassive black holes, the kinks or cusps in cosmic strings, or other as-yet-unknown physical processes. Radio pulses emitted from millisecond pulsars could be perturbed by passing gravitational waves, hence the correlation of the perturbations in a pulsar timing array can be used to detect and characterize burst signals with a duration of $\\mathcal{O}(1\\text{-}10)$ years. We propose a fully Bayesian framework for the analysis of the pulsar timing array data, where the burst waveform is generically modeled by piecewise straight lines, and the waveform parameters in the likelihood can be integrated out analytically. As a result, with merely three parameters (in addition to those describing the pulsars\u2019 intrinsic and background noise), one is able to efficiently search for the existence and the sky location of [a burst signal]{}. If a signal is present, the posterior of the waveform can be found without further Bayesian inference. We demonstrate this model by analyzing simulated data sets containing a stochastic gravitational wave background [and a burst signal generated by the parabolic encounter of two supermassive black holes.]{}'
However, their performance in more complex tasks such as cross-lingual voice conversion and expressive voice conversion remains imperfect. In this study, we propose a novel approach that combines a jointly trained speaker encoder and content features extracted from the cross-lingual speech recognition model Whisper to achieve high-quality cross-lingual voice conversion. Additionally, we introduce a speaker consistency loss to the joint encoder, which improves the similarity between the converted speech and the reference speech. To further explore the capabilities of the joint speaker encoder, we use the phonetic posteriorgram as the content feature, which enables the model to effectively reproduce both the speaker characteristics and the emotional aspects of the reference speech. The code and pre-trained model are open-sourced [^1].'\naddress: |\n $^1$Interactive Robot Research Team, Guardian Robot Project, RIKEN, Japan\\\n $^2$Graduate School of Engineering Science, Osaka University, Japan\\\n $^3$Advanced Telecommunications Research Institute International, Japan\nbibliography:\n- 'strings.bib'\n- 'refs.bib'\ntitle: 'Using joint training speaker encoder with consistency loss to achieve cross-lingual voice conversion and expressive voice conversion '\n---\n\ncross-lingual voice conversion, expressive voice conversion, joint speaker encoder, speaker consistency" -"---\nauthor:\n- 'B. Portilla-Revelo, I. Kamp, S. Facchini, E. F. van Dishoeck, C. Law, Ch. Rab, J. Bae, M. Benisty, K. \u00d6berg, and R. Teague'\nbibliography:\n- 'references.bib'\ndate: Accepted XXX\ntitle: 'Constraining the gas distribution in the PDS 70 disk as a method to assess the effect of planet-disk interactions'\n---\n\n[Embedded planets are potentially the cause of substructures like gaps and cavities observed in the continuum images of several protoplanetary disks. Likewise, the gas distribution is expected to change in the presence of one or several planets and the effect can be detected with current observational facilities. Thus, the properties of the substructures observed in the continuum and in line emission encode information about the [[presence of planets in the system and how they interact with the natal disk]{}]{}. The pre-transitional disk around the star PDS 70 is the first case of two young planets imaged within a dust-depleted gap that they likely carved themselves.]{} [We aim to determine the spatial distribution of the gas and dust components in the PDS 70 disk. [[The axisymmetric substructures observed in the resulting profiles are interpreted in the context
Specifically, we observe that partial decoding consistently speeds up the minimum-weight perfect matching stage by $2$x-$4$x on average, depending on the parameter regime, and raises the threshold from $0.94\\%$ to $1.02\\%$.'\nauthor:\n- 'Laura Caune$^*$'\n- Brendan Reid\n- Joan Camps\n- Earl Campbell\nbibliography:\n- 'references.bib'\ndate: June 2023\ntitle: Belief propagation as a partial decoder\n---\n\nIntroduction\n============\n\nQuantum computers are expected to disrupt domain areas where quantum algorithms are much faster than their classical counterparts. These algorithms typically require running deep quantum circuits. For the output of these circuits to be meaningful, one needs minimal" -"---\nabstract: 'Decreased visibility, intensive noise, and biased color are the common problems existing in low-light images. These visual disturbances further reduce the performance of high-level vision tasks, such as object detection and tracking. To address this issue, some image enhancement methods have been proposed to increase the image contrast. However, most of them are implemented only in the spatial domain, which can be severely influenced by noise signals during enhancement. Hence, in this work, we propose a novel residual recurrent multi-wavelet convolutional neural network ([**R2-MWCNN**]{}) learned in the frequency domain that can simultaneously increase the image contrast and reduce noise signals well. This end-to-end trainable network utilizes a multi-level discrete wavelet transform to divide input feature maps into distinct frequencies, resulting in a better denoising effect. A channel-wise loss function is proposed to correct the color distortion for more realistic results. Extensive experiments demonstrate that our proposed R2-MWCNN outperforms the state-of-the-art methods quantitatively and qualitatively.'\nauthor:\n- |\n Hao Chen\\\n Sun Yat-sen University\\\n [chenh366@mail2.sysu.edu.cn ]{}\n- |\n Zhi Jin\\\n Sun Yat-sen University\\\n [jinzh26@mail.sysu.edu.cn]{}\nbibliography:\n- 'egbib.bib'\ntitle: |\n Low-Light Image Enhancement\\\n in the Frequency Domain\n---\n\nIntroduction {#sec:intro}\n============\n\nLow-light image enhancement is critical for many vision tasks, including
However, despite its success, it is well-known that standard density functional approximations (DFAs) are incapable of accurately describing dispersion interactions.[@becke1995] Various approaches, such as Grimme\u2019s dispersion corrections,[@grimme2016a; @grimme2004a; @grimme2006a; @grimme2006b; @grimme2010a; @grimme2017a; @grimme2019a] have been developed to address" -"---\nabstract: 'We apply an Ising-type model to estimate the band gaps of the polytypes of group IV elements (C, Si, and Ge) and binary compounds of groups IV-IV (SiC, GeC, and GeSi) and III-V (nitride, phosphide, and arsenide of B, Al, and Ga). The models use reference band gaps of the simplest polytypes comprising 2\u20136 bilayers calculated with the hybrid density functional approximation, HSE06. We report four models capable of estimating band gaps of nine polytypes containing 7 and 8 bilayers with an average error of $\\lesssim0.05$ eV. We apply the best model with an error of $<0.04$ eV to predict the band gaps of 497 polytypes with up to 15 bilayers in the unit cell, providing a comprehensive view of the variation in the electronic structure with the degree of hexagonality of the crystal structure. Within our enumeration, we identify four rhombohedral polytypes of SiC\u20149$R$, 12$R$, 15$R$(1), and 15$R$(2)\u2014and perform detailed stability and band structure analysis. Of these, 15$R$(1), which has not been experimentally characterized, has the widest band gap ($>3.4$ eV); phonon analysis and cohesive energy reveal 15$R$(1)-SiC to be metastable. Additionally, we model the energies of valence and conduction bands of the rhombohedral SiC phases at" -"---\nabstract: |\n Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features. It serves as a pivotal approach to combat the curse of dimensionality, enhance model generalization, mitigate data sparsity, and extend the applicability of classical models. Existing research predominantly focuses on domain knowledge-based feature engineering or learning latent representations. However, these methods, while insightful, lack full automation and fail to yield a traceable and optimal representation space. An indispensable question arises: Can we concurrently address these limitations when reconstructing a feature space for a machine learning task? Our initial work took a pioneering step towards this challenge by introducing a novel self-optimizing framework. This framework leverages the power of three cascading reinforced agents to automatically select candidate features and operations for generating improved feature transformation combinations. Despite the impressive strides made, there was room for enhancing its effectiveness and generalization capability. In this extended journal version, we advance our initial work from two distinct yet interconnected perspectives: 1) We propose a refinement of the original framework, which integrates a graph-based state representation method to capture the feature interactions more effectively and develop different Q-learning strategies to alleviate Q-value overestimation further. 2) We utilize" -"---\nauthor:\n- Yuchen Fan\n- Ning Xi\n- Changle Liu\n- Bruce Normand\n- Rong Yu\ntitle: Emergent criticality in fully frustrated quantum magnets\n---\n\n[**[ Phase transitions in condensed matter are often linked to exotic emergent properties. 
We study the fully frustrated bilayer Heisenberg antiferromagnet to demonstrate that an applied magnetic field creates a novel emergent criticality. The quantum phase diagram contains four states: the DS (singlets on every interlayer dimer bond), DTAF (all triplets with antiferromagnetic order), TC (a singlet-triplet checkerboard) and FM (saturated ferromagnet). The thermal phase diagram is dominated by a wall of discontinuities extending from the zero-field DTAF-DS transition to a quantum critical endpoint where the field drives the DTAF and TC into the FM. This first-order wall is terminated at finite temperatures by a line of critical points, where the Berezinskii-Kosterlitz-Thouless (BKT) transition of the DTAF and the thermal Ising transition of the TC also terminate. We demonstrate by quantum Monte Carlo simulations that the BKT transition does not change the Ising nature of the DTAF-DS critical line. By contrast, the combination of symmetries merging on the multicritical DTAF-TC line leads to a 4-state Potts universality not contained in the microscopic Hamiltonian, which" -"---\nabstract: 'This research paper focuses on the implementation of Radial Basis Function (RBF) Support Vector Machines (SVM) for classifying asteroid orbits. Asteroids are important astronomical objects, and their orbits play a crucial role in understanding the dynamics of the solar system. The International Astronomical Union maintains data archives that provide a playground to experiment with various machine-learning techniques. In this study, we explore the application of the RBF SVM algorithm to classify asteroids. The results show that the RBF SVM algorithm provides good efficiency and accuracy on the dataset. We also analyze the impact of various parameters on the performance of the RBF SVM algorithm and present the optimal parameter settings. Our study highlights the importance of using machine learning techniques for classifying asteroid orbits and the effectiveness of the RBF SVM algorithm in this regard.'\nauthor:\n- Yashvir Tibrewal\n- Nishchal Dwivedi\nbibliography:\n- 'bib.bib'\ntitle: Orbit Classification of asteroids using implementation of radial Basis Function on Support Vector Machines\n---\n\nIntroduction\n============\n\nAsteroid classification is a subject of immense importance for astronomical associations and governing bodies as potentially hazardous asteroids (PHA) pose a potential threat to global safety[@board2019finding]. Many attempts to do such classifications are in place" -"---\nabstract: 'We introduce a finite element method for computing the damping rate of fluid oscillations in nozzles of drop-on-demand (DoD) microfluidic devices. Accurate knowledge of the damping rates for the least-damped oscillation modes following droplet ejection is paramount for assessing jetting stability at higher jetting frequencies, as ejection from a non-quiescent meniscus can result in deviations from nominal droplet properties. Computational fluid dynamics (CFD) simulations often struggle to accurately predict meniscus damping in the limit of low viscosity and high surface tension. Moreover, their use in design loops aimed at optimizing the nozzle geometry for stable jetting is slow and computationally expensive. The faster alternative we adopt here is to compute the damping rate directly from the eigenvalues of the linearized problem. 
Starting from a variational formulation of the linearized governing equations, we obtain a generalized eigenvalue problem for the oscillation modes, and approximate its solutions with a finite element method that uses Taylor-Hood elements. We solve the matrix eigenvalue problem with a sparse, parallelized implementation of the Krylov-Schur algorithm. The spatial shape and temporal evolution (angular frequency and damping rate) of the set of least-damped oscillation modes are obtained in a matter of minutes, compared to days for" -"---\nabstract: 'We analyze the optical power spectral density (PSD) for 22 active galactic nuclei (AGN) with measured X-ray PSDs using light curves from the All-Sky Automated Survey for SuperNovae (ASAS-SN) and the Transiting Exoplanet Survey Satellite (TESS). The joint optical PSD is measured over up to six orders of magnitude in frequency space from timescales of minutes to a decade. We fit either a damped random walk (DRW) or a broken power law model to constrain the PSD model and break frequency. For the broken power-law fits to the joint PSDs, we find a high-frequency timescale which is proportional to both the X-ray timescales and the black hole masses, but the optical timescale is 2.7 dex longer. Assuming the optical and X-ray breaks are related by a physical process, such as reprocessing of X-ray emission, the break frequency difference interpreted as a light crossing time is consistent with the expected size difference between the optical and X-ray emission regions. On timescales of months to a decade, we also measured a correlation between the low-frequency optical break timescales and the X-ray break timescales, but with a much shallower slope. The DRW model provides acceptable fits and we generally confirm previously" -"---\nbibliography:\n- 'SDCturnV2.bib'\n---\n\n\u00a7\n\n\u00d8\n\n[ **[ Cosmic Acceleration and Turns in the Swampland ]{}**]{}\n\n[Julian Freigang$^{1}$, Dieter\u00a0L\u00fcst$^{1,2}$, Guo-En Nian$^{3}$ and Marco\u00a0Scalisi$^{1}$]{}\n\n$^1$[*Max-Planck-Institut f\u00fcr Physik (Werner-Heisenberg-Institut),\\\nF\u00f6hringer Ring 6, 80805, M\u00fcnchen, Germany* ]{}\n\n$^2$[*Arnold-Sommerfeld-Center for Theoretical Physics,\\\nLudwig-Maximilians-Universit\u00e4t, 80333 M\u00fcnchen, Germany* ]{}\n\n$^3$[*Institute for Theoretical Physics,\\\nUtrecht University, Princetonplein 5, 3584 CE Utrecht, The Netherlands*]{}\n\n[ABSTRACT]{}\n\nWe argue that field trajectories, which lead to cosmic acceleration and feature rapid turns near the boundary of the moduli space, are in the Swampland. We obtain this result by assuming the validity of the Swampland Distance Conjecture (SDC) in the presence of a positive scalar potential and by focusing on hyperbolic spaces, as prototype geometries of infinite distance limits of Calabi\u2013Yau compactifications. We find that, in a quasi-de Sitter space with Hubble rate $H$ and acceleration parameter $\\epsilon$, the turning rate $\\Omega$ is upper bounded such as $\\Omega/H<\\mathcal{O}(\\sqrt{\\epsilon})$. Therefore, field trajectories consistent with the SDC can only have a negligible deviation from geodesics. This has direct implications for the realization and consistency of multi-field scenarios in string theory. Moreover, it implies a tension between asymptotic accelerating expansion, consistent with observations, and the de Sitter conjecture.\n\n0.5cm\n\n5.6 mm\n\nIntroduction" -"---\nabstract: 'In this paper we address the problem of path planning in an unknown environment with an aerial robot. 
The main goal is to safely follow the planned trajectory by avoiding obstacles. The proposed approach is suitable for aerial vehicles equipped with 3D sensors, such as LiDARs. It performs obstacle avoidance in real time and on an on-board computer. We present a novel algorithm based on the conventional Artificial Potential Field (APF) that corrects the planned trajectory to avoid obstacles. To this end, our modified algorithm uses a rotation-based component to avoid local minima. The smooth trajectory following, achieved with the MPC tracker, allows us to quickly change and re-plan the UAV trajectory. Comparative experiments in simulation have shown that our approach solves local minima problems in trajectory planning and generates more efficient paths to avoid potential collisions with static obstacles compared to the original APF method.'\nauthor:\n- 'Ana Batinovic, Jurica Goricanec, Lovro Markovic, Stjepan Bogdan [^1]'\nbibliography:\n- 'main.bib'\ntitle: 'Path Planning with Potential Field-Based Obstacle Avoidance in a 3D Environment by an Unmanned Aerial Vehicle'\n---\n\nIntroduction\n============\n\nUnmanned aerial vehicles (UAVs) have been recently utilized in various applications, such as agriculture [@Tsouros2019], wind turbine inspection" -"---\nabstract: 'Very recently, several pulsar timing array collaborations, including CPTA, EPTA, and NANOGrav, reported their results from searches for an isotropic stochastic gravitational wave background (SGWB), with each finding positive evidence for an SGWB. In this work, we assessed the credibility of interpreting the Hellings-Downs correlated free-spectrum process of EPTA, PPTA, and NANOGrav as either the result of supermassive black hole binary mergers or various stochastic SGWB sources that originated in the early Universe, including first-order phase transitions, cosmic strings, domain walls, and large-amplitude curvature perturbations. Our observations show that the current datasets do not display a strong preference for any specific SGWB source based on Bayesian analysis.'\nauthor:\n- Ligong Bian\n- Shuailiang Ge\n- Jing Shu\n- Bo Wang\n- 'Xing-Yu Yang'\n- Junchao Zong\nbibliography:\n- 'citelib.bib'\ntitle: Gravitational wave sources for Pulsar Timing Arrays\n---\n\n Introduction\n=============\n\nPulsar timing array (PTA) experiments provide a unique window to probe the gravitational waves (GWs) at nano-Hertz frequencies, with possible sources being supermassive black hole binaries (SMBHBs)\u00a0[@Rajagopal:1994zj; @Phinney:2001di; @Jaffe:2002rt; @Wyithe:2002ep; @Arzoumanian:2020vkk], curvature perturbations\u00a0[@Ananda:2006af; @Baumann:2007zm], and new-physics models including first-order phase transition (FOPT)\u00a0[@Kosowsky:1992rz; @Caprini:2010xv], cosmic strings\u00a0[@Siemens:2006yp], and domain walls\u00a0[@Hiramatsu:2013qaa], etc.\n\nPreviously, hints of a" -"---\nabstract: 'Heterogeneous federated multi-task learning (HFMTL) is a federated learning technique that combines heterogeneous tasks of different clients to achieve more accurate, comprehensive predictions. In real-world applications, visual and natural language tasks typically require large-scale models to extract high-level abstract features. However, large-scale models cannot be directly applied to existing federated multi-task learning methods. Existing HFMTL methods also disregard the impact of gradient conflicts on multi-task optimization during the federated aggregation process. 
In this work, we propose an innovative framework called $\\mathtt{FedBone}$, which enables the construction of large-scale models with better generalization from the perspective of server-client split learning and gradient projection. We split the entire model into two components: a large-scale general model (referred to as *the general model*) on the cloud server and multiple task-specific models (referred to as *the client model*) on edge clients, solving the problem of insufficient computing power on edge clients. The conflicting gradient projection technique is used to enhance the generalization of the large-scale general model across different tasks. The proposed framework is evaluated on two benchmark datasets and a real ophthalmic dataset. Comprehensive results demonstrate that $\\mathtt{FedBone}$ efficiently adapts to heterogeneous local tasks of each client and outperforms existing federated learning" -"---\nabstract: |\n The ESTHER shock tube is a new state-of-the-art facility at Instituto Superior T\u00e9cnico designed to support future ESA planetary exploration missions. Its driver is a high-pressure combustion chamber using a mixture of He:H$_2$:O$_2$ ignited by a high-power Nd:YAG laser. Both hydrogen as an energy vector and laser ignition are promising techniques with applications in high-pressure combustion. The influence of gas mixture and laser parameters, namely the air:fuel ratio, filling pressure, inert gas dilution and ignition mode, on the combustion, and thus on shock tube performance, was extensively studied. A second, low-velocity driver mixture with nitrogen in place of helium as a dilutant was also studied, and experimental shots were performed.\n\n Our results show that the filling pressure and helium dilution are the dominant parameters for peak pressure, acoustic oscillation, and combustion velocity. The peak pressure and acoustic wave amplitude of the gas mixture increase with increasing filling pressure. Yet, increased filling pressure lowers the combustion velocity. The helium in the mixture had a dilution effect, lowering the overall effectiveness of combustion. Higher dilution factors lower the combustion compression ratio, acoustic wave amplitude, and flame velocity. The air:fuel equivalence ratio influence was expected with
Additionally, we apply this method to improve the accuracy of the widely used simple update gate evolution algorithm.]{}**'\nbibliography:\n- 'Bibliography.bib'\n---\n\n[ **Gauging tensor networks with belief propagation** ]{}\n\nJoseph Tindall^1$\\star$^, Matthew T. Fishman^1^\n\n[**1**]{} Center for Computational Quantum Physics, Flatiron Institute, New York, New York 10010, USA ${}^\\star$ [jtindall@flatironinstitute.org]{}\n\n------------------------------------------------------------------------\n\n------------------------------------------------------------------------\n\nIntroduction\n============\n\nTensor networks are" -"---\nabstract: 'The IceCube Neutrino Observatory has recently reported strong evidence for neutrino emission from the Galactic plane. The signal is consistent with model predictions of diffuse emission from cosmic ray propagation in the interstellar medium. However, due to IceCube\u2019s limited potential for identifying individual neutrino sources, it is also feasible that unresolved Galactic sources could contribute to the observation. We investigate the contribution of this quasi-diffuse emission and show that the observed Galactic diffuse flux at 100\u00a0TeV could be dominated by hard emission from unresolved sources. Particularly interesting candidate sources are young massive stellar clusters that have been considered as cosmic-ray PeVatrons. We examine whether this hypothesis can be tested by the upcoming KM3NeT detector or the planned future facility IceCube-Gen2 with about five times the sensitivity of IceCube.'\nauthor:\n- Antonio Ambrosone\n- Kathrine M\u00f8rch Groth\n- Enrico Peretti\n- Markus Ahlers\nbibliography:\n- 'references.bib'\ntitle: Galactic Diffuse Neutrino Emission from Sources beyond the Discovery Horizon\n---\n\nIntroduction {#sec1}\n============\n\nCosmic rays (CRs) with energies up to a few PeV are expected to originate in Galactic sources; see, [*e.g.*]{},\u00a0the recent reviews\u00a0[@Blasi:2013rva; @Amato:2017dbs; @Gabici:2019jvz]. This hypothesis can be indirectly tested by observing the
Cooperative localization is a technique in which multiple robots share information and perform relative" -"---\nbibliography:\n- 'fidelity\\_Jacobi.bib'\n---\n\n[ **Beyond Fermi\u2019s golden rule with the statistical Jacobi approximation** ]{}\n\nDavid M. Long^1,2\\*^, Dominik Hahn^3^, Marin Bukov^3^, Anushya Chandran^1^\n\n[**1**]{} Department of Physics, Boston University, Boston, Massachusetts 02215, USA\\\n[**2**]{} Condensed Matter Theory Center and Joint Quantum Institute,\\\nDepartment of Physics, University of Maryland, College Park, Maryland 20742, USA\\\n[**3**]{} Max Planck Institute for the Physics of Complex Systems, 01187 Dresden, Germany\\\n\\* dmlong@umd.edu, hahn@pks.mpg.de\n\nAbstract {#abstract .unnumbered}\n========\n\nMany problems in quantum dynamics can be cast as the decay of a single quantum state into a continuum. The time-dependent overlap with the initial state, called the fidelity, characterizes this decay. We derive an analytic expression for the fidelity after a quench to an ergodic Hamiltonian. The expression is valid for both weak and strong quenches, and timescales before finiteness of the Hilbert space limits the fidelity. It reproduces initial quadratic decay and asymptotic exponential decay with a rate which, for strong quenches, differs from Fermi\u2019s golden rule. The analysis relies on the *statistical Jacobi approximation* (SJA), which was originally applied in nearly localized systems, and which we here adapt to well-thermalizing systems. Our results demonstrate that the SJA is predictive in disparate regimes" -"---\nabstract: |\n Domain-generalized urban-scene semantic segmentation (USSS) aims to learn generalized semantic predictions across diverse urban-scene styles. Unlike generic domain gap challenges, USSS is unique in that the semantic categories are often similar in different urban scenes, while the styles can vary significantly due to changes in urban landscapes, weather conditions, lighting, and other factors. Existing approaches typically rely on convolutional neural networks (CNNs) to learn the content of urban scenes.\n\n In this paper, we propose a Content-enhanced Mask TransFormer (CMFormer) for domain-generalized USSS. The main idea is to enhance the focus of the fundamental component, the mask attention mechanism, in Transformer segmentation models on content information. We have observed through empirical analysis that a mask representation effectively captures pixel segments, albeit with reduced robustness to style variations. Conversely, its lower-resolution counterpart exhibits greater ability to accommodate style variations, while being less proficient in representing pixel segments. To harness the synergistic attributes of these two approaches, we introduce a novel content-enhanced mask attention mechanism. It learns mask queries from both the image feature and its down-sampled counterpart, aiming to simultaneously encapsulate the content and address stylistic variations. These features are fused into a Transformer decoder and integrated into a" -"---\nabstract: 'Autoregressive language models (LMs) map token sequences to probabilities. The usual practice for computing the probability of any character string (e.g. English sentences) is to first transform it into a sequence of tokens that is scored by the model. However, there are exponentially many token sequences that represent any given string. To truly compute the probability of a string one should *marginalize* over all tokenizations, which is typically intractable. 
Here, we analyze whether the practice of ignoring the marginalization is justified. To this end, we devise an importance-sampling-based algorithm that allows us to compute estimates of the marginal probabilities and compare them to the default procedure in a range of state-of-the-art models and datasets. Our results show that the gap in log-likelihood is no larger than 0.5% in most cases, but that it becomes more pronounced for data with long complex words.'\nauthor:\n- |\n Nadezhda Chirkova[$^1$]{}\u00a0 Germ\u00e1n Kruszewski[$^1$]{}\u00a0 Jos Rozen[$^1$]{}\u00a0 Marc Dymetman[$^2$]{}\u00a0\\\n [$^1$]{}Naver Labs Europe [$^2$]{}Independent Researcher\\\n `{nadia.chirkova, german.kruszewski, jos.rozen}@naverlabs.com`\\\n `marc.dymetman@gmail.com`\\\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: 'Should you marginalize over possible tokenizations?'\n---\n\n=1\n\nIntroduction\n============\n\nLanguage models are probability distributions over text strings. In practice, these distributions are defined over a vocabulary of *tokens*," -"---\nabstract: 'Numerous studies have underscored the significant privacy risks associated with various leakage patterns in encrypted data stores. Most existing systems that conceal leakage either (1) incur substantial overheads, (2) focus on specific subsets of leakage patterns, or (3) apply the same security notion across various workloads, thereby impeding the attainment of fine-tuned privacy-efficiency trade-offs. In light of various detrimental leakage patterns, this paper starts with an investigation into which specific leakage patterns require our focus in the contexts of key-value, range-query, and dynamic workloads, respectively. Subsequently, we introduce new security notions tailored to the specific privacy requirements of these workloads. Accordingly, we present [Swat]{}, an efficient construction that progressively enables these workloads, while provably mitigating system-wide leakage via a suite of algorithms with tunable privacy-efficiency trade-offs. We conducted extensive experiments and compiled a detailed result analysis, showing the efficiency of our solution. [Swat]{} is about $10.6\\times$ slower than an encryption-only data store that reveals various leakage patterns and is $31.6\\times$ faster than a trivial zero-leakage solution. Meanwhile, the performance of [Swat]{} remains highly competitive compared to other designs that mitigate specific types of leakage.'\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'reference.bib'\ntitle: 'SWAT: A System-Wide
In core density ramps in a lower single-null (SN) configuration, the impact of Rt on the CIII emission front movement in the divertor outer leg \u2013 used as a proxy for the plasma temperature \u2013 is substantially weaker than 2PM predictions. Furthermore, in OSP radial sweeps in lower and upper SN configurations, in ohmic L-mode scenarios with a constant core density, the peak" -"---\nabstract: 'Motivated by distribution problems arising in the supply chain of Haleon, we investigate a discrete optimization problem that we call the *container delivery scheduling problem*. The problem models a supplier dispatching ordered products with shipping containers from manufacturing sites to distribution centers, where orders are collected by the buyers at agreed due times. The supplier may expedite or delay item deliveries to reduce transshipment costs at the price of increasing inventory costs, as measured by the number of containers and distribution center storage/backlog costs, respectively. The goal is to compute a delivery schedule attaining good trade-offs between the two. This container delivery scheduling problem is a temporal variant of classic bin packing problems, where the item sizes are not fixed, but depend on the item due times and delivery times. An approach for solving the problem should specify a batching policy for container consolidation and a scheduling policy for deciding when each container should be delivered. Based on the available item due times, we develop algorithms with sequential and nested batching policies as well as on-time and delay-tolerant scheduling policies. We elaborate on the problem\u2019s hardness and substantiate the proposed algorithms with positive and negative approximation bounds, including" -"---\nabstract: 'State-of-the-art deep learning-based registration methods employ three different learning strategies: supervised learning, which requires costly manual annotations, unsupervised learning, which heavily relies on hand-crafted similarity metrics designed by domain experts, or learning from synthetic data, which introduces a domain shift. To overcome the limitations of these strategies, we propose a novel self-supervised learning paradigm for unsupervised registration, relying on self-training. Our idea is based on two key insights. Feature-based differentiable optimizers 1) perform reasonable registration even from random features and 2) stabilize the training of the preceding feature extraction network on noisy labels. Consequently, we propose cyclical self-training, where pseudo labels are initialized as the displacement fields inferred from random features and cyclically updated based on more and more expressive features from the learning feature extractor, yielding a self-reinforcement effect. We evaluate the method for abdomen and lung registration, consistently surpassing metric-based supervision and outperforming diverse state-of-the-art competitors. Source code is available at .'\nauthor:\n- 'Alexander Bigalke^()^'\n- Lasse Hansen\n- 'Tony C. W. Mok'\n- 'Mattias P. Heinrich'\nbibliography:\n- 'paper1171-bibliography.bib'\ntitle: 'Unsupervised 3D registration through optimization-guided cyclical self-training'\n---\n\nIntroduction\n============\n\nMedical image registration is a fundamental task in medical imaging with applications ranging from
This paper focuses on millimeter-wave communications, which are subject to severe attenuation due to blockages, ultimately detrimental to system performance. In this context, the sensing functionality can allow measuring or even imaging the wireless environment, enabling anticipation of possible link failures and thus proactive resource reallocation such as handover. This work proposes a novel mechanism for opportunistic environment sensing, which leverages existing network infrastructure with low complexity. More specifically, our approach exploits the fluctuations of interference, perceived in antenna side lobes, to detect local activity due to a moving blocker around the reference communication link. Numerical evaluations show that the proposed method is promising as it allows effective assessment of the blocker direction and trajectory and, possibly, its location, speed, and size.'\nauthor:\n- \nbibliography:\n- 'biblio.bib'\ntitle: Sensing of Side Lobes Interference for Blockage Prediction in Dense mmWave Networks\n---\n\nSensing, Blockages Prediction, mmWave Communications, Network densification, 6G Networks.\n\nIntroduction\n============\n\nMillimeter-Wave (mmWave) frequencies (ranging, [*e.g.*]{}, between $28$ and $300$ GHz) have recently been receiving great attention for their various advantages over traditional" -"---\nabstract: 'We consider the eigenvalues and eigenvectors of an axisymmetric matrix $A$ with some special structures. We propose the S-Oja-Brockett equation $\\frac{dX}{dt}=AXB-XBX^TSAX,$ where $X(t) \\in {\\mathbb R}^{n \\times m}$ with $m \\leq n$, $S$ is a positive definite symmetric solution of the Sylvester equation $A^TS = SA$, and $B$ is a real positive definite diagonal matrix whose diagonal elements are distinct from each other, and show that the S-Oja-Brockett equation has global convergence to the eigenvalues and eigenvectors of $A$.'\nauthor:\n- 'Shintaro Yoshizawa [^1]'\ntitle: Dynamical systems for eigenvalue problems of axisymmetric matrices with positive eigenvalues\n---\n\nIntroduction\n============\n\nIn least squares optimization, Brockett [@Bro1],[@Bro2],[@Bro3] showed that the tasks of diagonalizing a matrix, linear programming, and sorting could all be solved by dynamical systems given by $$\\frac{dX}{dt}=AXB-XBX^TAX,$$ where $X(t)$ belongs to the real special orthogonal group, that is, $X^TX=I$ and $\\det(X)=1,$ and $A$, $B$ are real symmetric matrices. The symbol $T$ denotes the transpose of the matrix. His results had their origins in earlier work dating back to that of Fischer [@Fis], Courant [@Cou] and von Neumann [@VonN]. Also, there were parallel efforts in numerical analysis by Chu [@Chu].\n\nOn the other hand, in the field of neural networks, Amari [@Ama]" -"---\nabstract: 'This paper studies an intelligent reflecting surface (IRS)-aided multi-antenna simultaneous wireless information and power transfer (SWIPT) system where an $M$-antenna access point (AP) serves $K$ single-antenna information users (IUs) and $J$ single-antenna energy users (EUs) with the aid of an IRS with phase errors. We explicitly concentrate on overloaded scenarios where $K + J > M$ and $K \\geq M$. Our goal is to maximize the minimum throughput among all the IUs by optimizing the allocation of resources (including time, transmit beamforming at the AP, and reflect beamforming at the IRS), while guaranteeing the minimum amount of harvested energy at each EU. 
Towards this goal, we propose two user grouping (UG) schemes, namely, the non-overlapping UG scheme and the overlapping UG scheme, where the difference lies in whether identical IUs can exist in multiple groups. Different IU groups are served in orthogonal time dimensions, while the IUs in the same group are served simultaneously with all the EUs via spatial multiplexing. The two problems corresponding to the two UG schemes are mixed-integer non-convex optimization problems and are difficult to solve optimally. We first provide a method to check the feasibility of these two problems, and then propose efficient algorithms" -"---\nabstract: |\n Recently, the NANOGrav, PPTA, EPTA, and CPTA collaborations independently reported their evidence for the Stochastic Gravitational Wave Background (SGWB). While the inferred gravitational-wave background amplitude and spectrum are consistent with astrophysical expectations for a signal from the population of supermassive black-hole binaries (SMBHBs), the search for new physics remains plausible in this observational window. In this work, we explore the possibility of explaining such a signal by scalar-induced gravitational waves (IGWs) in the very early universe. We use a parameterized broken power-law function as a general description of the energy spectrum of the SGWB and fit it to the new results of NANOGrav, PPTA and EPTA. We find that this approach can put constraints on the parameters of the IGW energy spectrum and further yield restrictions on various inflation models that may produce primordial black holes (PBHs) in the early universe, which is also expected to be examined by the forthcoming space-based GW experiments.\\\n\n [ ****Keywords:**** pulsar timing array observation, Bayesian inference, stochastic gravitational wave background, induced gravitational waves, early universe ]{}\nauthor:\n- 'Yi-Fu Cai'\n- 'Xin-Chen He'\n- 'Xiao-Han Ma'\n- 'Sheng-Feng Yan'\n- 'Guan-Wen Yuan'\ntitle: 'Limits on scalar-induced gravitational waves from the stochastic" -"---\nabstract: 'Given sparse views of a 3D object, estimating their camera poses is a long-standing and intractable problem. Toward this goal, we consider harnessing the pre-trained diffusion model of novel views conditioned on viewpoints (Zero-1-to-3). We present ID-Pose, which inverts the denoising diffusion process to estimate the relative pose given two input images. ID-Pose adds noise to one image and predicts the noise conditioned on the other image and a hypothesis of the relative pose. The prediction error is used as the minimization objective to find the optimal pose with the gradient descent method. We extend ID-Pose to handle more than two images and estimate each pose with multiple image pairs from triangular relations. ID-Pose requires no training and generalizes to open-world images. We conduct extensive experiments using casually captured photos and rendered images with random viewpoints. The results demonstrate that ID-Pose significantly outperforms state-of-the-art methods. 
\\[Project Page: \\]'\nauthor:\n- |\n Weihao Cheng Yan-Pei Cao Ying Shan\\\n ARC Lab, Tencent PCG\\\n [whcheng@tencent.com caoyanpei@gmail.com yingsshan@tencent.com]{}\nbibliography:\n- 'paper.bib'\ntitle: 'ID-Pose: Sparse-view Camera Pose Estimation by Inverting Diffusion Models'\n---\n\nIntroduction {#sec:intro}\n============\n\nEstimating camera poses of images that depict a 3D object is important to shape understanding [@mvdepthnet;" -"---\nabstract: |\n In this paper, we have considered the dense rank for assigning positions to alternatives in weak orders. If we arrange the alternatives in tiers (i.e., indifference classes), the dense rank assigns position 1 to all the alternatives in the top tier, 2 to all the alternatives in the second tier, and so on. We have proposed a formal framework to analyze the dense rank when compared to other well-known position operators such as the standard, modified and fractional ranks. As the main results, we have provided two different axiomatic characterizations which determine the dense rank by considering position invariance conditions along horizontal extensions (duplication), as well as through vertical reductions and movements (truncation, and upwards or downwards independency).\n\n *Keywords*: preferences; linear orders; weak orders; positions; dense rank; duplication; truncation.\naddress: |\n IMUVA, PRESAD Research Group, Departamento de Econom\u00eda Aplicada,\\\n Universidad de Valladolid, Valladolid, Spain\\\n $^*$Corresponding author\nauthor:\n- 'Jos\u00e9 Luis Garc\u00eda-Lapresta, Miguel Mart\u00ednez-Panero$^*$'\ntitle: Two characterizations of the dense rank\n---\n\nIntroduction {#sect:introduction}\n============\n\nWhen it is possible to rank order objects (individuals, alternatives, etc.) taking into account some quality or criterion, it is natural to assign positive integer numbers to them in an ascending manner, starting" -"---\nabstract: 'Fuzzy dark matter (FDM), a practical alternative to cold dark matter, can exist in compact stars. Here, applying the FDM equation of state (EoS) constrained by CMB and large-scale structure data, we calculate the structure of relativistic stars in the presence of FDM. For this aim, the EoS for the visible matter in neutron stars, quark stars, and hybrid stars from the observational data are employed. A piecewise polytropic EoS constrained by the observational data of GW170817 and the data of six low-mass X-ray binaries with thermonuclear burst or the symmetry energy of the nuclear interaction describes the neutron star matter. For quark star matter, we apply the EoSs within the Bayesian statistical approach using the mass and radius measurements of PSR J0030+0451 from NICER. Employing the two-fluid formalism, we study the structure of FDM admixed relativistic stars.'\nauthor:\n- |\n Zeinab Rezaei[^1]\\\n Department of Physics, School of Science, Shiraz University, Shiraz 71454, Iran.\\\n Biruni Observatory, School of Science, Shiraz University, Shiraz 71454, Iran.\ndate: Accepted XXX Received XXX \ntitle: Fuzzy Dark Matter in Relativistic Stars\n---\n\n(cosmology:) dark matter, stars: interiors, cosmology: observations.\n\nIntroduction\n============\n\nFuzzy dark matter (FDM) composed of ultralight bosonic particles with $m \\sim" -"---\nabstract: 'Despite considerable progress in neural relevance ranking techniques, search engines still struggle to process complex queries effectively \u2014 both in terms of precision and recall. 
Sparse and dense Pseudo-Relevance Feedback (PRF) approaches have the potential to overcome limitations in recall, but are only effective with high precision in the top ranks. In this work, we tackle the problem of search over complex queries using three complementary techniques. First, we demonstrate that applying a strong neural re-ranker before sparse or dense PRF can improve the retrieval effectiveness by 5\u20138%. This improvement in PRF effectiveness can be attributed directly to improving the precision of the feedback set. Second, we propose an enhanced expansion model, Latent Entity Expansion (LEE), which applies fine-grained word and entity-based relevance modelling incorporating localized features. Specifically, we find that including both words and entities for expansion achieves a further 2\u20138% improvement in NDCG. Our analysis also demonstrates that LEE is largely robust to its parameters across datasets and performs well on entity-centric queries. Third, we include an \u201cadaptive\u201d component in the retrieval process, which iteratively refines the re-ranking pool during scoring using the expansion model and avoids re-ranking additional documents. We find that this" -"---\nabstract: 'Optically addressable spin defects hosted in two-dimensional van der Waals materials represent a new frontier for quantum technologies, promising to lead to a new class of ultrathin quantum sensors and simulators. Recently, hexagonal boron nitride (hBN) has been shown to host several types of optically addressable spin defects, thus offering a unique opportunity to simultaneously address and utilise various spin species in a single material. Here we demonstrate an interplay between two separate spin species within a single hBN crystal, namely $S=1$ boron vacancy defects and visible emitter spins. We unambiguously prove that the visible emitters are $S=\\sfrac{1}{2}$ spins and further demonstrate room temperature coherent control and optical readout of both spin species. Importantly, by tuning the two spin species into resonance with each other, we observe cross-relaxation indicating strong inter-species dipolar coupling. We then demonstrate magnetic imaging using the $S=\\sfrac{1}{2}$ defects, both under ambient and cryogenic conditions, and leverage their lack of intrinsic quantization axis to determine the anisotropic magnetic susceptibility of a test sample. Our results establish hBN as a versatile platform for quantum technologies in a van der Waals host at room temperature.'\nauthor:\n- 'Sam\u00a0C.\u00a0Scholten'\n- Priya\u00a0Singh\n- 'Alexander\u00a0J.\u00a0Healey'" -"---\nabstract: 'Given the rapid advancements in wireless communication and terminal devices, high-speed and convenient WiFi has permeated various aspects of people\u2019s lives, and attention has been drawn to the location services that WiFi can provide. Fingerprint-based methods, as an excellent approach for localization, have gradually become a hot research topic. However, in practical localization, fingerprint features of traditional methods suffer from low reliability and lack robustness in complex indoor environments. To overcome these limitations, this paper proposes an innovative feature extraction-enhanced intelligent localization scheme named Secci, based on diversified channel state information (CSI).
By modifying the device driver, diversified CSI data are extracted and transformed into RGB CSI images, which serve as input to a deep convolutional neural network (DCNN) with SE attention mechanism-assisted training in the offline stage. Employing a greedy probabilistic approach, rapid prediction of the estimated location is performed in the online stage using test RGB CSI images. The Secci system is implemented using off-the-shelf WiFi devices, and comprehensive experiments are carried out in two representative indoor environments to showcase the superior performance of Secci compared to four existing algorithms.'\nauthor:\n- 'Jiyu Jiao, Xiaojun Wang, Chenlin He [^1] [^2] [^3]'\nbibliography:\n- 'myrefarxiv.bib'\ntitle: Enhancing" -"---\nabstract: 'The Large Magellanic Cloud (LMC) will induce a dynamical friction (DF) wake on infall to the Milky Way (MW). The MW\u2019s stellar halo will respond to the gravity of the LMC and the dark matter (DM) wake, forming a stellar counterpart to the DM wake. This provides a novel opportunity to constrain the properties of the DM particle. We present a suite of high-resolution, windtunnel-style simulations of the LMC\u2019s DF wake that compare the structure, kinematics, and stellar tracer response of the DM wake in cold DM (CDM), with and without self-gravity, vs. fuzzy DM (FDM) with $m_a = 10^{-23}$ eV. We conclude that the self-gravity of the DM wake cannot be ignored. Its inclusion raises the wake\u2019s density by $\\sim 10\\%$, and holds the wake together over larger distances ($\\sim$ 50 kpc) than if self-gravity is ignored. The DM wake\u2019s mass is comparable to the LMC\u2019s infall mass, meaning the DM wake is a significant perturber to the dynamics of MW halo tracers. An FDM wake is more granular in structure and is $\\sim 20\\%$ dynamically colder than a CDM wake, but with comparable density. The granularity of an FDM wake increases the stars\u2019 kinematic response at" -"---\nabstract: 'It is well-known that the spectral radius of a connected uniform hypergraph is an eigenvalue of the hypergraph. However, its algebraic multiplicity remains unknown. In this paper, we use the Poisson Formula and matching polynomials to determine the algebraic multiplicity of the spectral radius of a uniform hypertree.'\naddress: 'College of Mathematical Sciences, Harbin Engineering University, Harbin, PR China'\nauthor:\n- Lixiang Chen\n- Changjiang Bu\nbibliography:\n- 'atbib.bib'\ntitle: '**The algebraic multiplicity of the spectral radius of a hypertree**'\n---\n\n[1.15]{}\n\nhypertree, spectral radius, algebraic multiplicity, characteristic polynomial\\\n*AMS classification(2020):*05C65, 05C50.\n\nIntroduction\n============\n\nFrom the Perron-Frobenius Theorem (for matrices), it is known that the spectral radius of a connected graph is an eigenvalue of the graph with the algebraic multiplicity 1. Part of the Perron-Frobenius Theorem has been generalized to tensors, in particular, it is known that the spectral radius of a connected uniform hypergraph is an eigenvalue [@chang2008perron]. However, it is unknown what its algebraic multiplicity is. In this paper, we aim to determine the algebraic multiplicity of the spectral radius of a uniform hypertree. The characteristic polynomial of a hypergraph is defined to be the characteristic polynomial of its adjacency tensor. The Poisson Formula, given" -"---\nabstract: 'In this paper, we investigate a class of $n$-dimensional degenerate parabolic equations with abstract coefficients. 
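Looking back at the Secci pipeline above: the SE attention mechanism it attaches to the DCNN is a standard component, sketched below in PyTorch. The channel count and reduction ratio are generic defaults, not values from the paper.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (N, C, H, W) feature maps
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze: global average pooling
        return x * w[:, :, None, None]     # excite: per-channel reweighting

feats = torch.randn(8, 64, 32, 32)         # e.g. features of RGB CSI images
print(SEBlock(64)(feats).shape)            # torch.Size([8, 64, 32, 32])
```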
Our focus is on improving the regularity of solutions and establishing Carleman estimates for these equations through the construction of specialized weight functions. Using these results, we demonstrate the null controllability of the corresponding equations. Additionally, we provide a specific example to illustrate the efficacy of our methodology.'\nauthor:\n- |\n Hongli Sun$^{1}$, Yuanhang Liu$^{2}$, Weijia Wu$^{2,*}$, Donghui Yang$^{2}$\\\n [$^1$ School of Mathematics, Physics and Big data, Chongqing University of Science and Technology, Chongqing 401331, China ]{}\\\n [$^2$ School of Mathematics and Statistics, Central South University, Changsha 410083, China\\\n ]{}\nbibliography:\n- 'ref20230507.bib'\ntitle: 'Null controllability of a kind of n-dimensional degenerate parabolic equation'\n---\n\nIntroduction\n============\n\nControllability is a fundamental concept in control theory that was first introduced by the renowned mathematician Kalman. It holds great importance in solving control problems within linear systems. The study of controllability for parabolic equations has a rich history spanning half a century (see [@dolecki1977general; @fattorini1971exact; @fattorini1974uniform; @russell1973unified; @carleman1939probleme; @hormander2013linear; @hormander2009analysis; @zuily1983uniqueness; @lebeau1995controle; @fursikov1996controllability; @emanuilov1995controllability]), and can be categorized into two main branches: the controllability of non-degenerate parabolic equations and the controllability of degenerate parabolic equations." -"---\nabstract: |\n Saturating sets are combinatorial objects in projective spaces over finite fields that have been intensively investigated in the last three decades. They are related to the so-called covering problem of codes in the Hamming metric. In this paper, we consider the recently introduced linear version of such sets, which is, in turn, related to the covering problem in the rank metric. The main questions in this context are how small the rank of a saturating linear set can be and how to construct saturating linear sets of small rank. Recently, Bonini, Borello, and Byrne provided a lower bound on the rank of saturating linear sets in a given projective space, which is shown to be tight in some cases. In this paper, we provide constructions of saturating linear sets meeting the lower bound and we develop a link between the saturating property and the scatteredness of linear sets. The last part of the paper is devoted to showing some parameters for which the bound is not tight.\\\n **Keywords**: Linear sets, saturating sets, rank-metric codes, covering radius.\n\n **MSC2020**. Primary: 05B40, 51E20, 52C17. Secondary: 11T71, 94B75.\nauthor:\n- Daniele Bartoli\n- Martino Borello\n- Giuseppe Marino\nbibliography:\n- 'Biblio.bib'" -"---\nabstract: 'The $\\psi(x)$-function, which solves the equation $x = \\sinh(aw)e^w$ for $0]{}'\nauthor:\n- 'Cem Bilaloglu, Tobias L\u00f6w, and Sylvain Calinon[^1][^2]" -"---\nabstract: 'We consider a coupled system of partial differential equations describing the interactions between a closed free interface and two viscous incompressible fluids. The fluids are assumed to satisfy the incompressible Navier-Stokes equations in time-dependent domains that are determined by the free interface. The mean curvature of the interface induces a surface tension force that creates a jump of the Cauchy stress tensor on both sides.
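Although the $\psi(x)$ record above is truncated mid-definition, its defining equation $x = \sinh(aw)e^w$ is intact, and since the left-hand side is strictly increasing in $w \ge 0$ when $a > 0$, the function can be evaluated with any bracketing root-finder. The parameter range assumed below ($x > 0$, $0 < a \le 1$) is a guess consistent with the truncated text.

```python
import numpy as np
from scipy.optimize import brentq

def psi(x, a):
    """Solve x = sinh(a*w) * exp(w) for w >= 0 by bracketing + Brent's method."""
    f = lambda w: np.sinh(a * w) * np.exp(w) - x
    hi = 1.0
    while f(hi) < 0:          # grow the bracket until it straddles the root
        hi *= 2.0
    return brentq(f, 0.0, hi)

w = psi(10.0, 0.5)
print(w, np.sinh(0.5 * w) * np.exp(w))   # second number should be ~10
```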
It influences the behavior of the surrounding fluids, and therefore the deformation of this interface via the equality of velocities. In dimension 2, the steady states correspond to immobile interfaces that are circles, all with the same volume. Considering small displacements of steady states, we are led to consider a linearized version of this system. We prove that the latter is approximately controllable to a given steady state for any time $T>0$ by means of additional surface tension type forces, provided that the radius of the circle of reference does not coincide with a scaled zero of the Bessel function of first kind.'\nauthor:\n- S\u00e9bastien Court\nbibliography:\n- 'PAMM\\_2023\\_ref.bib'\ntitle: Approximate controllability of a 2D linear system related to the motion of two fluids with surface tension\n---\n\n[**Keywords:**]{} Navier-Stokes equations," -"---\nabstract: |\n A subset of $[n] = \{1,2,\ldots,n\}$ is called stable if it forms an independent set in the cycle on the vertex set $[n]$. In 1978, Schrijver proved via a topological argument that for all integers $n$ and $k$ with $n \geq 2k$, the family of stable $k$-subsets of $[n]$ cannot be covered by $n-2k+1$ intersecting families. We study two total search problems whose totality relies on this result.\n\n In the first problem, denoted by ${\textsc{Schrijver}}(n,k,m)$, we are given access to a coloring of the stable $k$-subsets of $[n]$ with $m = m(n,k)$ colors, where $m \leq n-2k+1$, and the goal is to find a pair of disjoint subsets that are assigned the same color. While for $m = n-2k+1$ the problem is known to be ${\mathsf{PPA}}$-complete, we prove that for $m < d \cdot \lfloor \frac{n}{2k+d-2} \rfloor$, with $d$ being any fixed constant, the problem admits an efficient algorithm. For $m = \lfloor n/2 \rfloor-2k+1$, we prove that the problem is efficiently reducible to the ${\textsc{Kneser}}$ problem. Motivated by the relation between the problems, we investigate the family of [*unstable*]{} $k$-subsets of $[n]$, which might be of independent interest.\n\n In the second problem, called Unfair Independent" -"---\nabstract: 'The search and retrieval of digital histopathology slides is an important task that has yet to be solved. In this case study, we investigate the clinical readiness of four state-of-the-art histopathology slide search engines, Yottixel, SISH, RetCCL, and HSHR on both unseen datasets and several patient cases. We provide a qualitative and quantitative assessment of each model\u2019s performance in providing retrieval results that are reliable and useful to pathologists. We found high levels of performance across all models using conventional metrics for tissue and subtyping search. Upon testing the models on real patient cases, we found the results were still less than ideal for clinical use. Based on our findings, we propose a minimal set of requirements to further advance the development of accurate and reliable histopathology image search engines for successful clinical adoption.'\nauthor:\n- 'Helen H.
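For the first search problem in the Schrijver record above, a brute-force reference implementation makes the objects concrete; the instance size and toy coloring are illustrative only.

```python
from itertools import combinations

def stable_subsets(n, k):
    """Stable k-subsets of [n]: independent sets in the cycle on [n]."""
    for S in combinations(range(1, n + 1), k):
        adjacent = any(b - a == 1 for a, b in zip(S, S[1:])) \
                   or (S[0] == 1 and S[-1] == n)
        if not adjacent:
            yield frozenset(S)

def find_monochromatic_disjoint_pair(n, k, color):
    seen = {}                                  # color -> stable subsets seen so far
    for S in stable_subsets(n, k):
        for T in seen.get(color(S), []):
            if S.isdisjoint(T):
                return sorted(S), sorted(T)
        seen.setdefault(color(S), []).append(S)
    return None   # unreachable with at most n - 2k + 1 colors, by Schrijver's theorem

# n = 8, k = 3, and m = n - 2k + 1 = 3 colors: a pair must exist.
print(find_monochromatic_disjoint_pair(8, 3, lambda S: min(S) % 3))
```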
Shang, MD, MS[^1]'\n- Mohammad Sadegh Nasr\n- Jai Prakash Veerla\n- Jillur Rahman Saurav\n- Amir Hajighasemi\n- \n- 'Manfred Huber, PhD'\n- 'Chace Moleta, MD'\n- 'Jitin Makker, MD'\n- \nbibliography:\n- 'references.bib'\ntitle: '**Histopathology Slide Indexing and Search: Are We There Yet?**'\n---\n\nIntroduction\n============\n\n![A summary of feature extraction and database creation processes proposed by **(a)**" -"---\nabstract: 'Irregularities in public health data streams (like COVID-19 Cases) hamper data-driven decision-making for public health stakeholders. A real-time, computer-generated list of the most important, outlying data points from thousands of daily-updated public health data streams could assist an expert reviewer in identifying these irregularities. However, existing outlier detection frameworks perform poorly on this task because they do not account for the data volume or for the statistical properties of public health streams. Accordingly, we developed FlaSH (**Fla**gging **S**treams in public **H**ealth), a practical outlier detection framework for public health data users that uses simple, scalable models to capture these statistical properties explicitly. In an experiment where human experts evaluate FlaSH and existing methods (including deep learning approaches), FlaSH scales to the data volume of this task, matches or exceeds these other methods in mean accuracy, and identifies the outlier points that users empirically rate as more helpful. Based on these results, [FlaSH](https://github.com/cmu-delphi/covidcast-indicators/tree/main/_delphi_utils_python/delphi_utils/flash_eval) has been deployed on data streams used by public health stakeholders.'\nauthor:\n- Ananya Joshi\n- Kathryn Mazaitis\n- Roni Rosenfeld\n- |\n Bryan Wilder Carnegie Mellon University\\\n {aajoshi, kmazaitis, rrosenfeld, bwilder}@andrew.cmu.edu\nbibliography:\n- 'bib.bib'\ntitle: Computationally Assisted Quality Control for Public Health Data Streams\n---" -"---\nabstract: |\n Parallel self-assembly is an efficient approach to accelerate the assembly process for modular robots. However, these approaches cannot accommodate complicated environments with obstacles, which restricts their applications. This paper considers the surrounding stationary obstacles and proposes a parallel self-assembly planning algorithm named SAPOA. With this algorithm, modular robots can avoid immovable obstacles when performing docking actions, which adapts the parallel self-assembly process to complex scenes. [To validate the efficiency and scalability]{}, we have designed 25 distinct grid maps with different obstacle configurations to simulate the algorithm. Compared to the existing parallel self-assembly algorithms, our algorithm shows a significantly higher success rate of more than $80\\%$. For verification in real-world applications, a multi-agent hardware testbed system is developed. The algorithm is successfully deployed on four omnidirectional unmanned surface vehicles, CuBoats. The navigation strategy that translates the discrete planner, SAPOA, to the continuous controller on the CuBoats is presented. The algorithm\u2019s feasibility and flexibility were demonstrated through successful self-assembly experiments on 5 maps with varying obstacle configurations.\n\n *Note to Practitioners-*This paper addresses the deployment of self-assembly technologies for modular robots in practical environments with obstacles to facilitate overwater construction tasks or collective transportation systems.
Unpredictable stationary" -"---\nabstract: |\n In this paper, numerical methods based on Vieta-Lucas wavelets are proposed for solving a class of singular differential equations. The operational matrix of the derivative for Vieta-Lucas wavelets is derived. It is employed to reduce the differential equations into the system of algebraic equations by applying the ideas of the collocation scheme, Tau scheme, and Galerkin scheme respectively. Furthermore, the convergence analysis and error estimates for Vieta-Lucas wavelets are performed. In the numerical section, the comparative analysis is presented among the different versions of the proposed Vieta-Lucas wavelet methods, and the accuracy of the approaches is evaluated by computing the errors and comparing them to the existing findings.\\\n \\\n **Keywords:** Vieta-Lucas wavelets, generating function, Rodrigues\u2019 formula, collocation method, Galerkin method, Tau method.\nauthor:\n- Shivani Aeri\n- Rakesh Kumar\n- Dumitru Baleanu\n- Kottakkaran Sooppy Nisar\ndate:\n- \n- \ntitle: ' Vieta-Lucas Wavelet based schemes for the numerical solution of the singular models'\n---\n\nIntroduction {#sec1}\n============\n\nSingular differential equations (SDEs) have been attracting applied mathematicians for many years because of their applicability in various branches of science, engineering, and technology [@stakgold2000boundary; @gatica1989singular]. We consider the following SDEs [@sabir2021design; @sabir2020new]: $$\\label{introeq1}\n\\rm{ Y''(\\zeta) + \\frac{\\mu}{\\zeta} Y'(\\zeta) +" -"---\nabstract: 'Automated Machine Learning (AutoML) frameworks regularly use ensembles. Developers need to compare different ensemble techniques to select appropriate techniques for an AutoML framework from the many potential techniques. So far, the comparison of ensemble techniques is often computationally expensive, because many base models must be trained and evaluated one or multiple times. Therefore, we present Assembled-OpenML. Assembled-OpenML is a Python tool, which builds meta-datasets for ensembles using OpenML. A meta-dataset, called Metatask, consists of the data of an OpenML task, the task\u2019s dataset, and prediction data from model evaluations for the task. We can make the comparison of ensemble techniques computationally cheaper by using the predictions stored in a metatask instead of training and evaluating base models. To introduce Assembled-OpenML, we describe the first version of our tool. Moreover, we present an example of using Assembled-OpenML to compare a set of ensemble techniques. For this example comparison, we built a benchmark using Assembled-OpenML and implemented ensemble techniques expecting predictions instead of base models as input. In our example comparison, we gathered the prediction data of $1523$ base models for $31$ datasets. Obtaining the prediction data for all base models using Assembled-OpenML took ${\\sim} 1$ hour in total. In" -"---\nabstract: |\n We consider the problem of evaluating forecasts of binary events whose predictions are consumed by rational agents who take an action in response to a prediction, but whose utility is unknown to the forecaster. We show that optimizing forecasts for a single scoring rule (e.g., the Brier score) cannot guarantee low regret for all possible agents. In contrast, forecasts that are well-calibrated guarantee that all agents incur sublinear regret. 
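The economics behind the Assembled-OpenML record above are easy to demonstrate generically: once base-model predictions are cached, comparing ensemble techniques is array arithmetic rather than model training. The snippet below is a self-contained illustration with synthetic data, not the tool's actual Metatask API.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_samples, n_classes = 10, 500, 3
y = rng.integers(0, n_classes, size=n_samples)

# Cached class-probability predictions, one (n_samples, n_classes) block per
# base model -- the role the stored prediction data of a metatask plays.
cached = rng.dirichlet(np.ones(n_classes), size=(n_models, n_samples))

def accuracy(proba):
    return float((proba.argmax(axis=1) == y).mean())

soft_vote = accuracy(cached.mean(axis=0))        # average the probabilities
best_single = max(accuracy(p) for p in cached)   # strongest base model
print(f"soft-voting ensemble: {soft_vote:.3f}, best single model: {best_single:.3f}")
```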
However, calibration is not a necessary criterion here (it is possible for miscalibrated forecasts to provide good regret guarantees for all possible agents), and calibrated forecasting procedures have provably worse convergence rates than forecasting procedures targeting a single scoring rule.\n\n Motivated by this, we present a new metric for evaluating forecasts that we call *U-calibration*, equal to the maximal regret of the sequence of forecasts when evaluated under any bounded scoring rule. We show that sublinear U-calibration error is a necessary and sufficient condition for all agents to achieve sublinear regret guarantees. We additionally demonstrate how to compute the U-calibration error efficiently and provide an online algorithm that achieves $O(\\sqrt{T})$ U-calibration error (on par with optimal rates for optimizing for a single scoring rule, and bypassing lower bounds" -"---\nabstract: 'The asymptotic structure of null and spatial infinities of asymptotically flat spacetimes plays an essential role in discussing gravitational radiation, gravitational memory effect, and conserved quantities in General Relativity. Bondi, Metzner and Sachs established that the asymptotic symmetry group for asymptotically simple spacetimes is the infinite-dimensional BMS group. Given that null infinity is divided into two sets: past null infinity $\\mathscr{I}^{-}$ and future null infinity $\\mathscr{I}^{+}$, one can identify two independent symmetry groups: $\\text{BMS}^{-}$ at $\\mathscr{I}^{-}$ and $\\text{BMS}^{+}$ at $\\mathscr{I}^{+}$. Associated with these symmetries are the so-called BMS charges. A recent conjecture by Strominger suggests that the generators of $\\text{BMS}^{-}$ and $\\text{BMS}^{+}$ and their associated charges are related via an antipodal reflection map near spatial infinity. To verify this matching, an analysis of the gravitational field near spatial infinity is required. This task is complicated due to the singular nature of spatial infinity for spacetimes with non-vanishing ADM mass. Different frameworks have been introduced in the literature to address this singularity, e.g., Friedrich\u2019s cylinder, Ashtekar-Hansen\u2019s hyperboloid and Ashtekar-Romano\u2019s asymptote at spatial infinity. This paper reviews the role of Friedrich\u2019s formulation of spatial infinity in the investigation of the matching of the spin-2 charges on Minkowski spacetime and in" -"---\nabstract: |\n *Functional Connectivity* between brain regions is known to be altered in Alzheimer\u2019s disease, and promises to be a biomarker for early diagnosis of the disease. While several approaches for functional connectivity obtain an un-directed network representing stochastic associations (correlations) between brain regions, association does not necessarily imply causation. In contrast, *Causal Functional Connectivity* is more informative, providing a directed network representing causal relationships between brain regions. In this paper, we obtained the causal functional connectome for the whole brain from recordings of resting-state functional magnetic resonance imaging (rs-fMRI) for subjects from three clinical groups: cognitively normal, mild cognitive impairment, and Alzheimer\u2019s disease. We applied the recently developed *Time-aware PC* (TPC) algorithm to infer the causal functional connectome for the whole brain. TPC supports model-free estimation of whole brain causal functional connectivity based on directed graphical modeling in a time series setting. 
We then identified the causal connections between brain regions whose strengths differ significantly between pairs of subject groups and across the three subject groups. We used the significant causal brain connections thus obtained to compile a comprehensive list of brain regions impacted by Alzheimer\u2019s disease according to the current data set. The obtained brain" -"---\nabstract: 'This paper develops an optimal data aggregation policy for learning-based traffic control systems based on imagery collected from Road Side Units (RSUs) under imperfect communications. Our focus is optimizing semantic information flow from RSUs to a nearby edge server or cloud-based processing units by maximizing data diversity based on the target machine learning application while taking into account heterogeneous channel conditions and constrained total transmission rate. To this end, we enforce fairness among class labels to increase data diversity for classification problems. Furthermore, we propose a greedy interval-by-interval scheduling policy powered by coalition game theory to reduce the computational complexity. Once RSUs are selected, we employ a maximum uncertainty method to handpick data samples that contribute the most to the learning performance. Our method yields higher learning accuracy compared to random selection, uniform selection, and network-based optimization methods (e.g., FedCS).'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'main.bib'\ntitle: Diversity Maximized Scheduling in RoadSide Units for Traffic Monitoring Applications\n---\n\nIntroduction\n============\n\nRoadSide Units (RSUs) are an integral part of smart transportation systems due to their role in communicating with vehicles and collecting visual information to develop temporal and spatial traffic flow models. This information can be used" -"---\nabstract: |\n The goal of this paper is to present an approach to $\sf{Hod\ Pair\ Capturing}$ (${\sf{HPC}}$), which was introduced in [@SteelCom Definition 1.7.1]. ${\sf{HPC}}$ is the most outstanding open problem of descriptive inner model theory (see [@DIMT]). More specifically, we introduce two principles, the ${\sf{Direct\ Limit\ Independence}}$ (see [Definition\u00a0\[dli\]]{}) and the ${\sf{Bounded\ Direct\ Limits}}$ (see [Definition\u00a0\[bdl\]]{}), and show that they together imply ${\sf{HPC}}$.\n\n The paper is part of a sequence of four papers whose collective goal is to establish ${\sf{HPC}}$ below a Woodin cardinal that is a limit of Woodin cardinals and then use this result to show that the ${\sf{Iterability\ Conjecture}}$ for the Mitchell-Schindler $K^c$ is not provable in ${\sf{ZFC}}$ (see [@JSSS], [@MitSch], [@OIMT Conjecture 6.5] and [@LaSa21]).\n\n Mathematics Subject Classification: 03E55, 03E57, 03E60 and 03E45.\naddress: 'Grigor Sargsyan, IMPAN, Antoniego Abrahama 18, 81-825 Sopot, Poland.'\nauthor:\n- Grigor Sargsyan\nbibliography:\n- 'main.bib'\ntitle: Generic generators\n---\n\n[^1]\n\nDescriptive inner model theory is an area of set theory that uses tools from inner model theory and descriptive set theory to study the internal structure of models of determinacy and also to develop tools to build models of determinacy from various large cardinals. One of its" -"---\nabstract: 'Recently, Ye et\u00a0al.
(Mathematical Programming 2023) designed an algorithm for solving a specific class of bilevel programs with an emphasis on applications related to hyperparameter selection, utilizing the difference of convex algorithm based on the value function approach reformulation. The proposed algorithm is particularly powerful when the lower level problem is fully convex, such as a support vector machine model or a least absolute shrinkage and selection operator model. In this paper, to suit more applications related to machine learning and statistics, we substantially weaken the underlying assumption from lower level full convexity to weak convexity. Accordingly, we propose a new reformulation using the Moreau envelope of the lower level problem and demonstrate that this reformulation is a difference of weakly convex program. Subsequently, we develop a sequentially convergent algorithm for solving this difference of weakly convex program. To evaluate the effectiveness of our approach, we conduct numerical experiments on the bilevel hyperparameter selection problem from elastic net, sparse group lasso, and RBF kernel support vector machine models.'\nauthor:\n- 'Lucy L. Gao [^1]'\n- 'Jane J. Ye [^2]'\n- 'Haian Yin [^3]'\n- 'Shangzhi Zeng [^4]'\n- 'Jin Zhang [^5]'\nbibliography:\n- 'reference.bib'\ntitle: ' Moreau" -"---\nabstract: 'Tidal dissipation in star-planet systems can occur through various mechanisms, among which is the elliptical instability. This acts on elliptically deformed equilibrium tidal flows in rotating fluid planets and stars, and excites inertial waves in convective regions if the dimensionless tidal amplitude ($\epsilon$) is sufficiently large. We study its interaction with turbulent convection, and attempt to constrain the contributions of both elliptical instability and convection to tidal dissipation. For this, we perform an extensive suite of Cartesian hydrodynamical simulations of rotating Rayleigh-B\u00e9nard convection in a small patch of a planet. We find that tidal dissipation resulting from the elliptical instability, when it operates, is consistent with $\epsilon^3$, as in prior simulations without convection. Convective motions also act as an effective viscosity on large-scale tidal flows, resulting in continuous tidal dissipation (scaling as $\epsilon^2$). We derive scaling laws for the effective viscosity using (rotating) mixing-length theory, and find that they predict the turbulent quantities found in our simulations very well. In addition, we examine the reduction of the effective viscosity for fast tides, which we observe to scale with tidal frequency ($\omega$) as $\omega^{-2}$. We evaluate our scaling laws using interior models of Hot Jupiters computed with MESA. We" -"---\nabstract: 'As Internet of Things (IoT) technology grows, so does the threat of malware infections. A proposed countermeasure, the use of benevolent \u201cwhite worms\u201d to combat malicious \u201cblack worms\u201d, presents unique ethical and practical challenges. This study examines these issues via network epidemiology models and simulations, considering the propagation dynamics of both types of worms in various network topologies. Our findings highlight the critical role of the rate at which white worms activate themselves, relative to the user\u2019s system update rate, as well as the impact of the network structure on worm propagation.
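As a worked miniature of the Moreau envelope used in the bilevel reformulation above: for $f(y)=|y|$ the envelope $e_\gamma f(x)=\min_y\, |y| + \frac{1}{2\gamma}(y-x)^2$ has the Huber function as its closed form, so a grid minimization can be checked against it. The choice of $f$ and $\gamma$ is illustrative.

```python
import numpy as np

def moreau_envelope(f, x, gamma, grid):
    # e_gamma f(x) = min_y f(y) + (1 / (2 * gamma)) * (y - x)^2, on a grid
    return float(np.min(f(grid) + (grid - x) ** 2 / (2.0 * gamma)))

grid = np.linspace(-5.0, 5.0, 20001)
gamma = 0.5
for x in (-2.0, 0.3, 1.5):
    # closed form: Huber -- smooth even though f itself is nonsmooth
    huber = x**2 / (2 * gamma) if abs(x) <= gamma else abs(x) - gamma / 2
    print(x, moreau_envelope(np.abs, x, gamma, grid), huber)
```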
The results point to the potential of white worms as an effective countermeasure, while underscoring the ethical and practical complexities inherent in their deployment.'\nauthor:\n- Francesco Bonacina\n- Ignacio Echegoyen\n- Diego Escribano\n- Marcus Krellner\n- Francesco Paolo Nerini\n- Rasha Shanaz\n- Andreia Sofia Teixeira\n- Alberto Aleta\nbibliography:\n- 'references\_new.bib'\ntitle: |\n Ethics in rotten apples:\\\n A network epidemiology approach for active cyber defense\n---\n\n\[sec:intro\]Introduction\n=========================\n\n\u2018Internet of Things\u2019 (IoT) technology is everywhere. Even seemingly trivial household devices like light bulbs and toasters are connected to the internet over local networks. Unfortunately, the rise of malware infections has become a critical" -"---\nabstract: 'Weak Impulsive Narrowband Quiet Sun Emissions (WINQSEs) are a newly discovered class of radio emission from the solar corona. These emissions are characterized by their extremely impulsive, narrowband and ubiquitous nature. We have systematically been working on their detailed characterization, including their strengths, morphologies, temporal characteristics, energies, etc. This work is the next step in this series and focuses on the spectral nature of WINQSEs. Given that their strength is only a few percent of the background solar emission, we have adopted an extremely conservative approach to reliably identify WINQSEs. Only a handful of WINQSEs meet all of our stringent criteria. Their flux densities lie in the 20 $-$ 50 Jy range and they have compact morphologies. For the first time, we estimate their bandwidths and find them to be less than 700 kHz, consistent with expectations based on earlier observations. Interestingly, we also find similarities between the spectral nature of WINQSEs and solar radio spikes. This is consistent with our hypothesis that the WINQSEs are the weaker cousins of the type-III radio bursts and are likely to be the low-frequency radio counterparts of the nanoflares, originally hypothesized as a possible explanation for coronal heating.'\nauthor:\n-" -"---\nabstract: 'The idea of decision-aware model learning, that models should be accurate where it matters for decision-making, has gained prominence in model-based reinforcement learning. While promising theoretical results have been established, the empirical performance of algorithms leveraging a decision-aware loss has been lacking, especially in continuous control problems. In this paper, we present a study on the necessary components for decision-aware reinforcement learning models and we showcase design choices that enable well-performing algorithms. To this end, we provide a theoretical and empirical investigation into prominent algorithmic ideas in the field. We highlight that empirical design decisions established in the MuZero line of works are vital to achieving good performance for related algorithms, and we showcase differences in behavior between different instantiations of value-aware algorithms in stochastic environments.
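A toy version of the black-versus-white worm race from the record above fits in a few lines; the topology, rates, and update rule are all illustrative assumptions, but they expose the key quantity the abstract highlights: the white worm's spread rate relative to the user update rate.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(2000, 3)            # scale-free-ish IoT network
state = {v: "S" for v in G}                      # S-usceptible, B-lack, W-hite
state[0], state[1] = "B", "W"
beta_black, beta_white, user_update = 0.15, 0.15, 0.01

for _ in range(200):
    nxt = dict(state)
    for v in G:
        if state[v] != "S":
            continue
        if random.random() < user_update:
            nxt[v] = "W"                         # user patches independently
            continue
        nbrs = [state[u] for u in G[v]]
        if "B" in nbrs and random.random() < beta_black:
            nxt[v] = "B"                         # black worm infects the node
        elif "W" in nbrs and random.random() < beta_white:
            nxt[v] = "W"                         # white worm patches the node
    state = nxt

print({s: sum(v == s for v in state.values()) for s in "SBW"})
```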
Using these insights, we propose the Latent Model-Based Decision-Aware Actor-Critic framework ($\lambda$-AC) for decision-aware model-based reinforcement learning in continuous state-spaces and highlight important design choices in different environments.'\nauthor:\n- |\n Claas Voelcker\\\n University of Toronto\\\n Vector Institute, Toronto\\\n cvoelcker@cs.toronto.edu\\\n Arash Ahmadian\\\n University of Toronto\\\n Vector Institute, Toronto\\\n arash.ahmadian@mail.utoronto.ca\\\n Romina Abachi\\\n University of Toronto\\\n Vector Institute, Toronto\\\n rabachi@cs.toronto.edu Igor Gilitschenski\\\n University of Toronto\\\n gilitschenski@cs.toronto.edu\\\n Amir-massoud Farahmand\\\n Vector Institute, Toronto\\\n University of Toronto\\" -"---\nabstract: 'We present a method for solving general nonconvex-strongly-convex bilevel optimization problems. Our method\u2014the *Restarted Accelerated HyperGradient Descent* (`RAHGD`) method\u2014finds an $\epsilon$-first-order stationary point of the objective with $\tilde{\mathcal{O}}(\kappa^{3.25}\epsilon^{-1.75})$ oracle complexity, where $\kappa$ is the condition number of the lower-level objective and $\epsilon$ is the desired accuracy. We also propose a perturbed variant of `RAHGD` for finding an $\big(\epsilon,\mathcal{O}(\kappa^{2.5}\sqrt{\epsilon}\,)\big)$-second-order stationary point within the same order of oracle complexity. Our results achieve the best-known theoretical guarantees for finding stationary points in bilevel optimization and also improve upon the existing upper complexity bound for finding second-order stationary points in nonconvex-strongly-concave minimax optimization problems, setting a new state-of-the-art benchmark. Empirical studies are conducted to validate the theoretical results in this paper.'\nauthor:\n- Haikuo Yang\n- Luo Luo\n- Chris Junchi Li\n- 'Michael I.\u00a0Jordan'\nbibliography:\n- 'ref.bib'\ntitle: Accelerating Inexact HyperGradient Descent for Bilevel Optimization\n---\n\nIntroduction {#intro}\n============\n\nBilevel optimization is emerging as a key unifying problem formulation in machine learning, encompassing a variety of applications including meta-learning, model-free reinforcement learning and hyperparameter optimization\u00a0[@franceschi2018bilevel; @stadie2020learning]. Our work focuses on a version of the general problem that is particularly relevant to machine learning\u2014the *nonconvex-strongly-convex bilevel optimization problem*:\n\n\[bilevel\_ab\] $$\begin{aligned}" -"---\nabstract: 'Spin$^h$ manifolds are the quaternionic analogue to spin$^c$ manifolds. We compute the bordism groups at the prime $2$ by proving a structure theorem for the cohomology of the bordism spectrum $\mathrm{MSpin^h}$ as a module over the mod 2 Steenrod algebra. This provides a 2-local splitting of $\mathrm{MSpin^h}$ as a wedge sum of familiar spectra. We also compute the decomposition of $H^*(\mathrm{MSpin^h};\mathbb{Z}/2\mathbb{Z})$ explicitly in degrees up through 30 via a counting process.'\naddress: 'University of Maryland, College Park'\nauthor:\n- Keith Mills\nbibliography:\n- 'spinh.bib'\ndate: 'December 8, 2023'\ntitle: The Structure of the Spin$^h$ Bordism Spectrum\n---\n\nIntroduction {#sec:intro}\n============\n\nSpin$^h$ manifolds are the quaternionic analogue to spin$^c$ manifolds.
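To make the hypergradient in the RAHGD record above concrete: for $F(x)=f(x,y^*(x))$ with $y^*(x)=\arg\min_y g(x,y)$ and strongly convex $g$, the chain rule gives $\nabla F(x)=\nabla_x f - \nabla^2_{xy}g\,[\nabla^2_{yy}g]^{-1}\nabla_y f$. The quadratic instance below is an invented example (not from the paper) where this can be checked against finite differences.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)        # Hessian of g in y: positive definite
C = rng.standard_normal((n, n))    # mixed second derivative of g
c = rng.standard_normal(n)

# g(x, y) = 0.5 * y^T H y + x^T C y  =>  y*(x) = -H^{-1} C^T x
# f(x, y) = 0.5 * ||x||^2 + c^T y
def hypergradient(x):
    return x - C @ np.linalg.solve(H, c)   # grad_x f - C H^{-1} grad_y f

def F(x):                                  # the implicit objective f(x, y*(x))
    return 0.5 * x @ x + c @ (-np.linalg.solve(H, C.T @ x))

x = rng.standard_normal(n)
eps = 1e-6
fd = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps) for e in np.eye(n)])
print(np.allclose(hypergradient(x), fd, atol=1e-5))   # True
```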
${\text{Spin}}^h(n)$ is a central extension of $SO(n) \times {\text{Sp}}(1)$ by ${\mathbb{Z}}_2 = \mathbb{Z}/2\mathbb{Z}$, and a spin$^h$ structure on an oriented $n$-manifold is a lifting of the principal frame bundle from $SO(n)$ to ${\text{Spin}}^h(n)$. We aim to compute the bordism groups $\Omega_{*}^{spin^h}.$ As explained in (a section of) Jiahao Hu\u2019s thesis [@Hu], there is a bordism spectrum $\mathrm{MSpin^h}$, so computing the bordism groups is equivalent to determining the homotopy groups of this spectrum.\n\nSpin$^h$ manifolds have been the subject of recent research of various flavors. In [@albanesemilivojevic2021spinh] it is shown that there is a" -"---\nabstract: 'We present a novel *alignment-before-generation* approach to tackle the challenging task of generating general 3D shapes based on 2D images or texts. Directly learning a conditional generative model from images or texts to 3D shapes is prone to producing inconsistent results with the conditions because 3D shapes have an additional dimension whose distribution significantly differs from that of 2D images and texts. To bridge the domain gap among the three modalities and facilitate multi-modal-conditioned 3D shape generation, we explore representing 3D shapes in a shape-image-text-aligned space. Our framework comprises two models: a Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE) and a conditional Aligned Shape Latent Diffusion Model (ASLDM). The former model encodes the 3D shapes into the shape latent space aligned to the image and text and reconstructs the fine-grained 3D neural fields corresponding to given shape embeddings via the transformer-based decoder. The latter model learns a probabilistic mapping function from the image or text space to the latent shape space. Our extensive experiments demonstrate that our proposed approach can generate higher-quality and more diverse 3D shapes that better semantically conform to the visual or textual conditional inputs, validating the effectiveness of the shape-image-text-aligned space for cross-modality 3D shape generation.'\nauthor:" -"---\nabstract: |\n In this work, we revisit the problem of solving large-scale semidefinite programs using randomized first-order methods and stochastic smoothing. We introduce two oblivious stochastic mirror descent algorithms based on a complementary composite setting. One algorithm is designed for non-smooth objectives, while an accelerated version is tailored for smooth objectives. Remarkably, both algorithms work without prior knowledge of the Lipschitz constant or smoothness of the objective function.
We further show how to extend our framework to relative scale and demonstrate the efficiency and robustness of our methods on large scale semidefinite programs.\nauthor:\n- 'Cl\u00e9ment Lezane, Crist\u00f3bal Guzm\u00e1n, Alexandre d\u2019Aspremont'\n- |\n Cl\u00e9ment Lezane\\\n University of Twente\\\n `c.w.lezane@utwente.nl`\\\n Crist\u00f3bal Guzm\u00e1n\\\n Catholic University of" -"---\nauthor:\n- 'Johannes R. Eskilt,'\n- 'Yashar Akrami,'\n- 'Stefano Anselmi,'\n- 'Craig J. Copi,'\n- 'Andrew H. Jaffe,'\n- 'Arthur Kosowsky,'\n- 'Deyan P. Mihaylov,'\n- 'Glenn D. Starkman,'\n- 'Andrius Tamosiunas,'\n- 'James B. Mertens,'\n- 'Pip Petersen,'\n- 'Samanta Saha,'\n- 'Quinn Taylor,'\n- and \u00d6zen\u00e7 G\u00fcng\u00f6r\nbibliography:\n- 'topology.bib'\n- 'additional.bib'\ntitle: 'Cosmic topology. Part II. Eigenmodes, correlation matrices, and detectability of orientable Euclidean manifolds'\n---\n\nIntroduction {#secn:intro}\n============\n\nIn the century since the proposal of general relativity (GR) [@Einstein:1916vd] as the dynamical theory of spacetime, and therefore of cosmology [@Einstein:1917ce], we have widely come to view space as a three-dimensional (Riemannian) manifold with a geometry that is inhomogeneous on small scales but homogeneous and isotropic on large scales [@peebles:1993; @Ostriker1995; @mukhanov2005physical; @Efstathiou2020]. This geometry evolves according to the Einstein field equations, which are local second-order differential equations in which the evolution of the geometry is sourced by the stress-energy content of space. Meanwhile, the evolution of that stress-energy content is governed by Euler-Lagrange equations that incorporate the influence of the geometry on the stress-energy.\n\nIt is useful to do a small-scale/large-scale decomposition in the metric. The largest-scale geometry is, per this view, given by" -"---\nabstract: 'We present new microscopic effective shell-model interactions in the valence $sd$ shell, obtained from the modern Daejeon16 nucleon-nucleon potential using no-core shell-model (NCSM) wave functions of $^{18}$F at $N_{\\rm max} =6$ (total oscillator quanta of excitation) model space and the Okubo\u2013Lee\u2013Suzuki transformation. First, we explore the convergence properties of our calculations and show that the excitation energies of states in $^{18}$F, characterized by the largest valence-like configurations, are reasonably converged and the lowest states are in sensible agreement with experiment. Then, we investigate the monopole properties of that interaction in comparison with the phenomenological universal $sd$-shell interaction, USDB, and with the previously derived interaction at $N_{\\rm max} =4$. Theoretical binding energies and low-energy spectra of the O isotopes, as well as low-energy spectra of a selection of $sd$-shell nuclei, are presented. We conclude that the use of larger-space NCSM wave functions leads to a noticeable improvement in the quality of the derived effective interaction. We propose monopole modifications of the Daejeon16 centroids which further improve the agreement with experiment throughout the $sd$ shell, as demonstrated by a compilation of spectra contained in Supplemental Material.'\nauthor:\n- Ik\u00a0Jae\u00a0Shin\n- 'Nadezda\u00a0A.\u00a0Smirnova'\n- 'Andrey\u00a0M.\u00a0Shirokov'\n-" -"---\nabstract: 'The high temperature limit of interacting spins is usually not associated with ordering or critical phenomena. 
Nevertheless, spontaneous fluctuations of a local spin polarization at equilibrium have nontrivial dynamics even in this limit. Here, we demonstrate that the spin noise power spectrum of these fluctuations can undergo discontinuous changes as a function of an external magnetic field. As a simple illustration, we consider a model of Ising-like long range spin-spin interactions with a transverse magnetic field as a control parameter. This system undergoes a phase transition associated with the disappearance of the noise power peak responsible for the most detrimental decoherence effect of the interactions.'\naddress:\n- 'National Technical University of Ukraine, 37 Prospect Peremogy, Kyiv 03056, Ukraine'\n- 'T-4, Los Alamos National Laboratory, Los Alamos NM 87545'\n- 'T-4, Los Alamos National Laboratory, Los Alamos NM 87545'\nauthor:\n- 'V. N. Gorshkov'\n- 'N. A. Sinitsyn'\n- 'D. Mozyrsky'\ntitle: Phase transition in fluctuations of interacting spins at infinite temperature\n---\n\nA phase transition is a discontinuous change of some measurable characteristic of a many-body system. Traditionally, phase transitions in condensed matter have been associated with sharp changes of a long range order at equilibrium when a control" -"---\nabstract: 'We consider a system of non-interacting Brownian particles on a line with a step-like initial condition, and we investigate the behavior of the local time at the origin at large times. We compute the mean and the variance of the local time, and we show that the memory effects are governed by the Fano factor associated with the initial condition. For the uniform initial condition, we show that the probability distribution of the local time admits a large deviation form, and we compute the corresponding large deviation functions for the annealed and quenched averaging schemes. The two resulting large deviation functions are very different. Our analytical results are supported by extensive numerical simulations.'\nauthor:\n- 'Ivan N. Burenev'\n- 'Satya N. Majumdar'\n- Alberto Rosso\nbibliography:\n- 'LTD\_BM\_v2.bib'\ntitle: Local time of a system of Brownian particles on the line with steplike initial condition\n---\n\nIntroduction {#sec:intro}\n============\n\nImagine a box with a wall dividing it into two parts: one contains a gas of particles and the other is empty. What happens when we remove the wall? Indeed, the system is not in equilibrium (even if it was initially). It is also clear that the gas will eventually" -"---\nabstract: 'In this paper we study, for each $d>0$, the minimum integer $h_{3,2d}\in \mathbb{N}$ for which there exists a complex polarized K3 surface $(X,H)$ of degree $H^2=2d$ and Picard number $\rho (X):=\mathrm{rank} \operatorname{Pic}X = h_{3,2d}$ admitting an automorphism of order $3$. We show that $h_{3,2d}=6$ if $d=1$ and $h_{3,2d}=2$ if $d>1$. We provide explicit examples of K3 surfaces defined over $\mathbb{Q}$ realizing these bounds.'\naddress: 'Dipartimenti di matematica \u201cFederigo Enriques\u201d, Universit\u00e0 degli studi di Milano, 20133, Milano, Italy.'\nauthor:\n- Dino Festi\nbibliography:\n- 'references.bib'\ntitle: Polarized K3 surfaces with an automorphism of order 3 and low Picard number\n---\n\nIntroduction\n============\n\nThe study of automorphisms of K3 surfaces has seen very intense activity in the last 40 years. In the 80\u2019s Nikulin and Stark proved that a group acting purely non-symplectically on a K3 is cyclic and finite\u00a0[@Nik76; @Ste85].
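The local time studied in the Brownian record above can be estimated directly by Monte Carlo, discretizing the indicator-integral definition $L(T)=\lim_{\epsilon\to 0}\frac{1}{2\epsilon}\int_0^T \sum_i \mathbf{1}\{|x_i(t)|<\epsilon\}\,dt$; the density, horizon, and tolerance below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, L, T, dt, eps = 5.0, 40.0, 4.0, 1e-3, 0.05
n = int(rho * L)                        # particles on [-L, 0): step-like profile
x = -rng.uniform(0.0, L, size=n)
local_time = 0.0
for _ in range(int(T / dt)):
    x += np.sqrt(dt) * rng.standard_normal(n)   # independent Brownian increments
    local_time += dt * np.count_nonzero(np.abs(x) < eps) / (2 * eps)
# One sample with a freshly drawn initial condition, as in the annealed scheme.
print(local_time)
```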
In\u00a0[@MO98], Machida and Oguiso prove that such a group can have order at most 66; if the group has prime order, then its maximal order is 19. In these notes we consider non-symplectic automorphisms of order $3$, a topic extensively treated in\u00a0[@AS08; @Tak11]. In particular, we focus on the interplay between the existence" -"---\nabstract: 'The objective of augmented reality (AR) is to add digital content to natural images and videos to create an interactive experience between the user and the environment. Scene analysis and object recognition play a crucial role in AR, as they must be performed quickly and accurately. In this study, a new approach is proposed that involves using oriented bounding boxes with a detection and recognition deep network to improve performance and processing time. The approach is evaluated using two datasets: a real image dataset (DOTA dataset) commonly used for computer vision tasks, and a synthetic dataset that simulates different environmental, lighting, and acquisition conditions. The focus of the evaluation is on small objects, which are difficult to detect and recognise. The results indicate that the proposed approach tends to produce better Average Precision and greater accuracy for small objects in most of the tested conditions.'\nauthor:\n- |\n [Vladislav Li](https://orcid.org/0000-0003-0298-4931)\\\n Department of Networks and Digital Media\\\n Kingston University\\\n London, UK\\\n `v.li@kingston.ac.uk`\\\n [Barbara Villarini](https://orcid.org/0000-0002-2846-0610)\\\n School of Computer Science and Engineering\\\n University of Westminster\\\n London, UK\\\n `b.villarini@westminster.ac.uk`\\\n [Jean-Christophe Nebel](https://orcid.org/0000-0003-1812-5269)\\\n Department of Computer Science\\\n Kingston University\\\n London, UK\\\n `j.nebel@kingston.ac.uk`\\\n [Thomas Lagkas](https://orcid.org/0000-0002-0749-9794)\\\n Department of Computer Science\\\n International Hellenic University\\\n Greece & S.E." -"---\nabstract: 'The Federal Communications Commission (FCC) has allocated the 6 GHz band (5.925 - 7.125 GHz) for unlicensed, shared use in the US. Incumbents in the band are protected via Low Power Indoor (LPI) rules that do not require the use of an Automatic Frequency Control (AFC) mechanism and Standard Power (SP) rules which do. As the deployment of Wi-Fi 6E APs implementing LPI rules has been increasing, there is limited research examining the real-world interference potential of dense LPI deployments to fixed links, which remains a concern for incumbents. We have conducted a first-of-its-kind extensive measurement campaign of a dense indoor Wi-Fi 6E network at the University of Michigan, which includes walking, driving, and drone measurements to assess outdoor beacon Received Signal Strength Indicator (RSSI), building entry loss (BEL), channel utilization, and appropriate enabling signal level for a proposed client-to-client (C2C) mode in 6 GHz. Our detailed measurements under various conditions show median outdoor RSSI between -75 dBm and -85 dBm, BEL between 12 dB and 16 dB through double-pane low-emission windows, and only $5\%$ of indoor Basic Service Set Identifiers (BSSIDs) observed outdoors.
Our overall conclusion is that the probability of interference to incumbent fixed links is" -"---\nabstract: |\n For the non-stationary Stokes system, it is well-known that one can improve spatial regularity in the interior, but not near the boundary if it is coupled with the no-slip boundary condition. In this note we show that, to the contrary, spatial regularity can be improved near a flat boundary if it is coupled with the Navier boundary condition, with either infinite or finite slip length. The case with finite slip length is more difficult than the case with infinite slip length.\n\n 0.2cm\n\n [[*Key words:*]{} Gradient estimates, Navier boundary condition, Stokes equations, Navier-Stokes equations, half space, slip length]{}\n\n 0.2cm\n\n [*AMS Subject Classification (2000):*]{} 35Q30, 35B65\naddress:\n- 'School of Science, Zhejiang University of Science and Technology, Hangzhou, 310023, People\u2019s Republic of China '\n- 'Department of Mathematics, University of British Columbia, Vancouver, BC V6T1Z2, Canada '\n- 'Department of Mathematics, University of British Columbia, Vancouver, BC V6T1Z2, Canada '\nauthor:\n- Hui Chen\n- Su Liang\n- 'Tai-Peng Tsai'\nbibliography:\n- 'NavierBC.bib'\ntitle: 'Gradient estimates for the non-stationary Stokes system with the Navier boundary condition'\n---\n\n**Dedicated to Professor Vladim\u00edr \u0160ver\u00e1k on the occasion of his 65th birthday**\n\nIntroduction\n============\n\nLet $\Omega$ be an open subset of ${\mathbb" -"---\nauthor:\n- 'Daiki Nishiguchi$^1$[^1]'\nbibliography:\n- 'Ref\_JPSJreview.bib'\ntitle: |\n Deciphering long-range order in active matter:\\\n Insights from swimming bacteria in quasi-2D and electrokinetic Janus particles \n---\n\nIntroduction {#sec:Introduction}\n============\n\nCollections of self-propelled elements prevail in nature, such as flocks of birds, schools of fish, cell populations, bacterial colonies, and cytoskeletal systems [@vicsek2012collective]. A framework of nonequilibrium statistical physics that tries to unveil the underlying universal laws in these novel nonequilibrium materials is now called active matter physics [@vicsek2012collective; @marchetti2013hydrodynamics; @chate2020dry; @chate2022dry]. More specifically, active matter refers to nonequilibrium matter composed of elements that individually convert some sort of free energy into motion.\n\nActive matter presents a rich variety of fundamental questions in statistical physics. Active matter systems are driven out of equilibrium at the individual element level and as such, they are far from equilibrium compared with other nonequilibrium systems that physics has traditionally dealt with. Examples of traditional nonequilibrium systems include electrical conduction, thermal conduction, and fluid flow, in which nonequilibrium states are realized by imposing gradients of electric potential, temperature, or pressure as boundary conditions, respectively. Other examples are glassy or granular systems, whose extremely slow dynamics prevent them from reaching thermal equilibrium within a realistic timescale." -"---\nabstract: 'This paper presents the UMASS\_BioNLP team\u2019s participation in the MEDIQA-Chat 2023 shared task for Task-A and Task-C. We focus especially on Task-C and propose a novel LLM cooperation system named a doctor-patient loop to generate high-quality conversation data sets.
The experiment results demonstrate that our approaches yield reasonable performance as evaluated by automatic metrics such as ROUGE, medical concept recall, BLEU, and Self-BLEU. Furthermore, we conducted a comparative analysis between our proposed method and ChatGPT and GPT-4. This analysis also investigates the potential of utilizing cooperation LLMs to generate high-quality datasets. [^1]'\nauthor:\n- |\n Junda Wang [^2] Zonghai Yao Avijit Mitra Samuel Osebe Zhichao Yang Hong Yu\\\n \\\n **[CICS, University of Massachusetts, Amherst, MA, USA]{}\\\n \\\n **[jundawang@umass.edu zonghaiyao@umass.edu]{}\\\n ****\nbibliography:\n- 'clinical\\_nlp23.bib'\ntitle: |\n UMASS\\_BioNLP at MEDIQA-Chat 2023:\n\n Can LLMs generate high-quality synthetic note-oriented doctor-patient conversations?\n---\n\n=1\n\nIntroduction\n============\n\nThe issue of the growing burden of clinical documentation has become a critical concern in healthcare, resulting in increased job dissatisfaction and burnout rates among clinicians and adversely affecting patient experiences. Nevertheless, timely and accurate documentation of patient encounters is crucial for safe, effective care and communication between specialists. Consequently, there is a growing interest in automating assisting" -"---\nabstract: 'Electron dynamics of anatase TiO$_2$ under the influence of ultrashort and intense laser field is studied using the real-time time-dependent density functional theory (TDDFT). Our findings demonstrate the effectiveness of TDDFT calculations in modeling the electron dynamics of solids during ultrashort laser excitation, providing valuable insights for designing and optimizing nonlinear photonic devices. We analyze the perturbative and non-perturbative responses of TiO$_2$ to 30 fs laser pulses at 400 and 800 nm wavelengths, elucidating the underlying mechanisms. At 400 nm, ionization via single photon absorption dominates, even at very low intensities. At 800 nm, we observe ionization through two-photon absorption within the intensity range of $1\\times10^{10}$ to $9\\times10^{12}$ W/cm$^2$, with a transition from multiphoton to tunneling ionization occurring at $9\\times10^{12}$ W/cm$^2$. We observe a sudden increase in energy and the number of excited electrons beyond $1\\times10^{13}$ W/cm$^2$, leading to their saturation and subsequent laser-induced damage. We estimate the damage threshold of TiO$_2$ for 800 nm to be 0.1 J/cm$^2$. In the perturbative regime, induced currents exhibit a phase shift proportional to the peak intensity of the laser pulse. This phase shift is attributed to the intensity-dependent changes in the number of free carriers, indicative of the optical Kerr" -"---\nabstract: 'Demand flexibility plays a vital role in maintaining grid balance, reducing peak demand, and saving customers\u2019 energy bills. Given their highly shiftable load and significant contribution to a building\u2019s energy consumption, Heating, Ventilation, and Air Conditioning (HVAC) systems can provide valuable demand flexibility to the power systems by adjusting their energy consumption in response to electricity price and power system needs. To exploit this flexibility in both operation time and power, it is imperative to accurately model and aggregate the load flexibility of a large population of HVAC systems as well as designing effective control algorithms. 
In this paper, we tackle the curse of dimensionality issue in modeling and control by utilizing the concept of laxity to quantify the emergency level of each HVAC operation request. We further propose a two-level approach to address energy optimization for a large population of HVAC systems. The lower level involves an aggregator to aggregate HVAC load laxity information and use the least-laxity-first (LLF) rule to allocate real-time power for individual HVAC systems based on the controller\u2019s total power. Due to the complex and uncertain nature of HVAC systems, we leverage a reinforcement learning (RL)-based controller to schedule the total power based on" -"---\nabstract: 'The unimodular theory of gravity is an alternative perspective to traditional Einstein\u2019s general relativity and opens new possibilities for exploring its implications in cosmology. In this paper, we investigate the unimodular gravity (UG) with the latest cosmological data from the Pantheon sample of Type Ia supernovae (SN), Baryon Acoustic Oscillations (BAO), and the observational H(z) data from Differential Age method (DA). We consider a model consisting of a generalized cosmological constant with radiation and dark matter. The considered theory respects only unimodular coordinate transformations. We fit our model with low-redshift data from SN and DA and determine the value of parameter $\\xi$ of the theory. We find the best-fit value of parameter $\\xi =6.23 \\pm 0.5$, which deviates from 6, for which the theory becomes the standard general theory of relativity. We further study the Hubble constant problem by combining the SN and DA data with BAO data. We observe deviation in the value of $H_0$ from the standard $\\Lambda$CDM model. We obtain $H_0$ as $70.7 \\pm 4.1 \\ \\mbox{Km s}^{-1} \\mbox{Mpc} ^{-1}$ and $69.24 \\pm 0.90 \\ \\mbox{Km s}^{-1} \\mbox{Mpc} ^{-1}$ from supernovae data and BAO data, respectively in unimodular gravity. Combining the BAO data with SN+DA" -"---\nabstract: |\n **Abstract:** The recently discovered ATi$_3$Bi$_5$ (A=Cs, Rb) exhibit intriguing quantum phenomena including superconductivity, electronic nematicity, and abundant topological states, which provide promising platforms for studying kagome superconductivity, band topology, and charge orders. In this work, we comprehensively study various properties of ATi$_3$Bi$_5$ including superconductivity under pressure and doping, band topology under pressure, thermal conductivity, heat capacity, electrical resistance, and spin Hall conductivity (SHC) using first-principles calculations. Calculated superconducting transition temperatures ($\\mathrm{ T_{c}}$) of CsTi$_3$Bi$_5$ and RbTi$_3$Bi$_5$ at ambient pressure are about 1.85 and 1.92K. When subject to pressure, $\\mathrm{ T_{c}}$ of CsTi$_3$Bi$_5$ exhibits a special valley and dome shape, which arises from quasi-two-dimensional to three-dimensional isotropic compression within the context of an overall decreasing trend. Furthermore, $\\mathrm{ T_{c}}$ of RbTi$_3$Bi$_5$ can be effectively enhanced up to 3.09K by tuning the kagome van Hove singularities (VHSs) and flat band through doping. Pressure can also induce abundant topological surface states at the Fermi energy ($\\mathrm{E}_{\\mathrm{F}}$) and tune VHSs across $\\mathrm{E}_{\\mathrm{F}}$. Additionally, our transport calculations are in excellent agreement with recent experiments, confirming the absence of a charge density wave. 
Notably, SHC of CsTi$_3$Bi$_5$ can reach as large as 226$ \\hbar $\u00b7(e\u00b7$ \\Omega $\u00b7cm)$ ^{-1} $ at $\\mathrm{E}_{\\mathrm{F}}$. Our work provides" -"---\nabstract: 'We analyze the anisotropic Dicke model in the presence of a periodic drive and under a quasiperiodic drive. The study of drive-induced phenomena in this experimentally accessible model is important since although it is simpler than full-fledged many-body quantum systems, it is still rich enough to exhibit many interesting features. We show that under a quasiperiodic Fibonacci (Thue-Morse) drive, the system features a prethermal plateau that increases as an exponential (stretched exponential) with the driving frequency before heating to an infinite-temperature state. In contrast, when the model is periodically driven, the dynamics reaches a plateau that is not followed by heating. In either case, the plateau value depends on the energy of the initial state and on the parameters of the undriven Hamiltonian. Surprisingly, this value does not always approach the infinite-temperature state monotonically as the frequency of the periodic drive decreases. We also show how the drive modifies the quantum critical point and discuss open questions associated with the analysis of level statistics at intermediate frequencies.'\nauthor:\n- Pragna Das\n- Devendra Singh Bhakuni\n- 'Lea F. Santos'\n- Auditya Sharma\nbibliography:\n- 'ref\\_new.bib'\ntitle: 'Periodically and quasiperiodically driven-anisotropic Dicke model '\n---\n\nIntroduction {#sec_1}\n============\n\nThe" -"---\nabstract: 'In this work we are revisiting the well-studied Ellis wormhole solution in a Horndeski theory motivated from the Kaluza-Klein compactification procedure of the more fundamental higher dimensional Lovelock gravity. We show that the Ellis wormhole is analytically supported by a gravitational theory with a non-trivial coupling to the Gauss-Bonnet term and we expand upon this notion by introducing higher derivative contributions of the scalar field. The extension of the gravitational theory does not yield any back-reacting component on the spacetime metric, which establishes the Ellis wormhole as a stealth solution in the generalized framework. We propose two simple mechanisms that dress the wormhole with an effective ADM mass. The first procedure is related to a conformal transformation of the metric which maps the theory to another Horndeski subclass, while the second one is inspired by the spontaneous scalarization effect on black holes.'\nauthor:\n- Athanasios Bakopoulos\n- Nikos Chatzifotis\n- Cristian Erices\n- Eleftherios Papantonopoulos\nbibliography:\n- 'Bibliography.bib'\ntitle: Stealth Ellis Wormholes in Horndeski Theories\n---\n\nIntroduction\n============\n\nWormholes are one of the simplest and most exotic static solutions of Einstein\u2019s equations. The throat of a wormhole is able to connect two space-times, or sometimes two distant" -"---\nabstract: 'This paper aims to extend the Besag model, a widely used Bayesian spatial model in disease mapping, to a non-stationary spatial model for irregular lattice-type data. The goal is to improve the model\u2019s ability to capture complex spatial dependence patterns and increase interpretability. The proposed model uses multiple precision parameters, accounting for different intensities of spatial dependence in different sub-regions. 
We derive a joint penalized complexity prior for the flexible local precision parameters to prevent overfitting and ensure contraction to the stationary model at a user-defined rate. The proposed methodology can be used as a basis for the development of various other non-stationary effects over other domains such as time. An accompanying R package `fbesag` equips the reader with the necessary tools for immediate use and application. We illustrate the novelty of the proposal by modeling the risk of dengue in Brazil, where the stationary spatial assumption fails and interesting risk profiles are estimated when accounting for spatial non-stationarity.'\nauthor:\n- |\n [![image](orcid.pdf)Esmail Abdul-Fattah](https://orcid.org/0000-0003-1587-3288)[^1]\\\n Statistics Program, CEMSE Division\\\n King Abdullah University of Science and Technology\\\n Thuwal, 23955, Makkah\\\n `esmail.abdulfattah@kaust.edu.sa`\\\n [E](https://orcid.org/0000-0002-7063-2615)lias Krainski\\\n Statistics Program, CEMSE Division\\\n King Abdullah University of Science and Technology\\\n Thuwal, 23955, Makkah\\\n `elias.krainski@kaust.edu.sa `\\\n [J](https://orcid.org/0000-0002-4334-2057)anet" -"---\nabstract: 'Motility-induced phase separation (MIPS) is a nonequilibrium phase separation that has a different origin from equilibrium phase separation induced by attractive interactions. Similarities and differences in collective behaviors between these two types of phase separation have been intensely discussed. Here, to study another kind of similarity between MIPS and attraction-induced phase separation under a nonequilibrium condition, we perform simulations of active Brownian particles with uniaxially anisotropic self-propulsion (uniaxial ABPs) in two dimensions. We find that (i) long-range density correlation appears in the homogeneous state, (ii) anisotropic particle configuration appears in MIPS, where the anisotropy removes the possibility of microphase separation suggested for isotropic ABPs \\[X.-Q. Shi *et al*., Phys. Rev. Lett. 125, 168001 (2020)\\], and (iii) critical phenomena for the anisotropic MIPS presumably belong to the universality class for two-dimensional uniaxial ferromagnets with dipolar long-range interactions. Properties (i)-(iii) are common to the well-studied randomly driven lattice gas (RDLG), which is a particle model that undergoes phase separation by attractive interactions under external driving forces, suggesting that the origin of phase separation is not essential for macroscopic behaviors of uniaxial ABPs and RDLG. Based on the observations in uniaxial ABPs, we construct a coarse-grained Langevin model, which shows properties" -"---\nabstract: |\n In [@Monod], Nicolas Monod showed that the evaluation map $$H^*_m(G\\curvearrowright G/P)\\longrightarrow H^*_m(G)$$ between the measurable cohomology of the action of a connected semisimple Lie group $G$ on its Furstenberg boundary $G/P$ and the measurable cohomology of $G$ is surjective with a non-trivial kernel in a range $c_G{\\leqslant}d{\\leqslant}C_G{\\leqslant}\\mathrm{rk}_\\mathbb{R}(G)+2$, where $c_G$ is equal to $2$ or $3$ and $C_G$ is a constant depending on $G$. In contrast, we show that $H^*_m(G)$ is isomorphic to the alternating measurable cohomology of $G$ on $G/P$ in all even degrees $$H^{2k}_{m,\\operatorname{\\mathrm{alt}}}(G\\curvearrowright G/P)\\cong H^{2k}_m(G),$$ for a majority of Lie groups, namely those for which the longest element of the Weyl group acts as $-1$ on the Lie algebra of a maximal split torus $A$ in $G$. 
Furthermore, we show that the cohomology of the cocomplex of non-alternating measurable functions on the Furstenberg boundary $G/P$ is isomorphic to the invariant cohomology of $A$ shifted by two, thus it is not trivial in general. Similarly, we show that the cohomology of the cocomplex of alternating measurable functions on $G/P$ surjects on the measurable cohomology of $G$ with a kernel given by the invariant cohomology of $A$ shifted by one.\n\n This analysis of the kernel sheds some" -"---\nabstract: 'Hidden Markov models (HMMs) are flexible tools for clustering dependent data coming from unknown populations, allowing nonparametric modelling of the population densities. Identifiability fails when the data is in fact independent, and we study the frontier between learnable and unlearnable two-state nonparametric HMMs. Interesting new phenomena emerge when the cluster distributions are modelled via density functions (the \u2018emission\u2019 densities) belonging to standard smoothness classes compared to the multinomial setting [@AGNparamhmm]. Notably, in contrast to the multinomial setting previously considered, the identification of a direction separating the two emission densities becomes a critical, and challenging, issue. Surprisingly, it is possible to \u201cborrow strength\u201d from estimators of the smoother density to improve estimation of the other. We conduct precise analysis of minimax rates, showing a transition depending on the relative smoothnesses of the emission densities.'\naddress:\n- 'University of Cambridge, Statistical Laboratory, Wilberforce Road, Cambridge CB3 0WB, UK'\n- 'Universit\u00e9 Paris-Saclay, CNRS, Laboratoire de math\u00e9matiques d\u2019Orsay, 91405, Orsay, France'\nauthor:\n- \u00a0\n- \u00a0\n- \u00a0\nbibliography:\n- 'bibliography.bib'\ntitle: Frontiers to the learning of nonparametric hidden Markov models\n---\n\n,\n\nIntroduction {#sec:intro}\n============\n\nConsider a two-state HMM with real-valued emissions, in which we observe the first $n$ entries of a sequence $\\bm{Y}=(Y_1,Y_2,\\dots)\\in" -"---\nabstract: 'We propose a loop optimization algorithm based on nuclear norm regularization for tensor network. The key ingredient of this scheme is to introduce a rank penalty term proposed in the context of data processing. Compared to standard variational periodic matrix product states method, this algorithm can circumvent the local minima related to short-ranged correlation in a simpler fashion. We demonstrate its performance when used as a part of the tensor network renormalization algorithms \\[S. Yang, Z.-C. Gu, and X.-G. Wen, Phys. Rev. Lett. 118, 110504 (2017)\\] for the critical 2D Ising model. The scale invariance of the renormalized tensors is attained with higher accuracy while the higher parts of the scaling dimension spectrum are obtained in a more stable fashion.'\nauthor:\n- Kenji Homma\n- Naoki Kawashima\nbibliography:\n- 'apssamp.bib'\nnocite: '[@*]'\ntitle: Nuclear norm regularized loop optimization for tensor network\n---\n\nIntroduction\n============\n\nTensor network is a convenient representation of quantum and classical many-body problems in that effective truncation of degrees of freedom can be realized in a flexible way. 
Because of this capability, a number of promising approximation schemes have been proposed, as typified by Density Matrix Renormalization Group (DMRG) [@PhysRevLett.69.2863; @PhysRevB.48.10345] and Corner Transfer Matrix" -"---\nabstract: 'Since the times of Holtsmark (1911), statistics of fields in random environments have been widely studied, for example in astrophysics, active matter, and line-shape broadening. The power-law decay of the two-body interaction, of the form $1/|r|^\\delta$, and assuming spatial uniformity of the medium particles exerting the forces, imply that the fields are fat-tailed distributed, and in general are described by stable L\u00e9vy distributions. With this widely used framework, the variance of the field diverges, which is non-physical, due to finite size cutoffs. We find a complementary statistical law to the L\u00e9vy-Holtsmark distribution describing the large fields in the problem, which is related to the finite size of the tracer particle. We discover bi-scaling, with a sharp statistical transition of the force moments taking place when the order of the moment is $d/\\delta$, where $d$ is the dimension. The high-order moments, including the variance, are described by the framework presented in this paper, which is expected to hold for many systems. The new scaling solution found here is non-normalized similar to infinite invariant densities found in dynamical systems.'\nauthor:\n- Avraham Samama$^1$\n- Eli Barkai$^1$\ntitle: 'Statistics of Long-Range Force Fields in Random Environments: Beyond Holtsmark'\n---\n\nINTRODUCTION\n============" -"---\nabstract: 'It was recently discovered that scalarized neutron stars in scalar-tensor theories can undergo a gravitational phase transition to a non-scalarized (GR) state. Surprisingly, even though the driving mechanism is totally different, the process resembles closely the first-order matter phase transition from confined nuclear matter to deconfined quark matter in neutron star cores. The studies until now were limited, though, to only one theory of gravity and a limited range of parameters. With the present paper, we aim at demonstrating that gravitational phase transitions are more common than expected. More specifically, we show that the phenomenon of nonlinear scalarization is present for neutron stars in Gauss-Bonnet gravity leading to the possibility of gravitational phase transition. Moreover, it can be observed for a wide range of parameters so no fine-tuning is needed. This solidifies the conjecture that gravitational phase transitions are an important phenomenon for compact objects and their astrophysical implications deserve an in-depth study.'\nauthor:\n- 'Daniela D. Doneva'\n- 'Christian J. Kr\u00fcger'\n- 'Kalin V. Staykov'\n- 'Petar Y. Yordanov'\nbibliography:\n- 'references.bib'\ndate: April 2023\ntitle: 'Neutron stars in Gauss-Bonnet gravity \u2013 nonlinear scalarization and gravitational phase transitions'\n---\n\nIntroduction\n============\n\nElectromagnetic and gravitational wave observations have" -"---\nabstract: 'Uses of artificial intelligence (AI), especially those powered by machine learning approaches, are growing in sectors and societies around the world. How will AI adoption proceed, especially in the international security realm? Research on automation bias suggests that humans can often be overconfident in AI, whereas research on algorithm aversion shows that, as the stakes of a decision rise, humans become more cautious about trusting algorithms. 
We theorize about the relationship between background knowledge about AI, trust in AI, and how these interact with other factors to influence the probability of automation bias in the international security context. We test these in a preregistered task identification experiment across a representative sample of 9000 adults in 9 countries with varying levels of AI industries. The results strongly support the theory, especially concerning AI background knowledge. A version of the Dunning Kruger effect appears to be at play, whereby those with the lowest level of experience with AI are slightly more likely to be algorithm-averse, then automation bias occurs at lower levels of knowledge before leveling off as a respondent\u2019s AI background reaches the highest levels. Additional results show effects from the task\u2019s difficulty, overall AI trust, and whether a" -"---\nabstract: |\n Convex optimization is crucial in controlling legged robots, where stability and optimal control are vital. Many control problems can be formulated as convex optimization problems, with a convex cost function and constraints capturing system dynamics. Our review focuses on active balancing problems and presents a general framework for formulating them as second-order cone programming (SOCP) for robustness and efficiency with existing interior point algorithms. We then discuss some prior work around the Zero Moment Point stability criterion, Linear Quadratic Regulator Control, and then the feedback model predictive control (MPC) approach to improve prediction accuracy and reduce computational costs. Finally, these techniques are applied to stabilize the robot for jumping and landing tasks.\\\n Further research in convex optimization of legged robots can have a significant societal impact. It can lead to improved gait planning and active balancing which enhances their ability to navigate complex environments, assist in search and rescue operations and perform tasks in hazardous environments. These advancements have the potential to revolutionize industries and help humans in daily life.\nauthor:\n- '\\* All authors have equal contribution'\ntitle: Convex Optimization in Legged Robots\n---\n\nconvex optimization, legged robots, model predictive control, stability\n\nIntroduction\n============\n\nControl problems" -"---\nabstract: 'Detecting objects and estimating their 6D poses is essential for automated systems to interact safely with the environment. Most 6D pose estimators, however, rely on a single camera frame and suffer from occlusions and ambiguities due to object symmetries. We overcome this issue by presenting a novel symmetry-aware multi-view 6D pose estimator called SyMFM6D. Our approach efficiently fuses the RGB-D frames from multiple perspectives in a deep multi-directional fusion network and predicts predefined keypoints for all objects in the scene simultaneously. Based on the keypoints and an instance semantic segmentation, we efficiently compute the 6D poses by least-squares fitting. To address the ambiguity issues for symmetric objects, we propose a novel training procedure for symmetry-aware keypoint detection including a new objective function. Our SyMFM6D network significantly outperforms the state-of-the-art in both single-view and multi-view 6D pose estimation. 
We furthermore show the effectiveness of our symmetry-aware training procedure and demonstrate that our approach is robust towards inaccurate camera calibration and dynamic camera setups.'\nauthor:\n- 'Fabian Duffhauss$^{1, 2}$, Sebastian Koch$^{1, 3}$, Hanna Ziesche$^{1}$, Ngo Anh Vien$^{1}$, and Gerhard Neumann$^{4}$[^1][^2][^3][^4][^5]'\nbibliography:\n- 'IEEEabrv.bib'\ntitle: ' SyMFM6D: Symmetry-aware Multi-directional Fusion for Multi-View 6D Object Pose Estimation '\n---\n\nIntroduction\n============\n\nEstimating" -"---\nabstract: 'Large pretrained plain vision Transformers (ViTs) have been the workhorse for many downstream tasks. However, existing works utilizing off-the-shelf ViTs are inefficient in terms of training and deployment, because adopting ViTs with individual sizes requires separate trainings and is restricted by fixed performance-efficiency trade-offs. In this paper, we are inspired by stitchable neural networks (SN-Net), which is a new framework that cheaply produces a single model that covers rich subnetworks by stitching pretrained model families, supporting diverse performance-efficiency trade-offs at runtime. Building upon this foundation, we introduce SN-Netv2, a systematically improved model stitching framework to facilitate downstream task adaptation. Specifically, we first propose a two-way stitching scheme to enlarge the stitching space. We then design a resource-constrained sampling strategy that takes into account the underlying FLOPs distributions in the space for better sampling. Finally, we observe that learning stitching layers as a low-rank update plays an essential role on downstream tasks to stabilize training and ensure a good Pareto frontier. With extensive experiments on ImageNet-1K, ADE20K, COCO-Stuff-10K and NYUv2, SN-Netv2 demonstrates superior performance over SN-Netv1 on downstream dense predictions and shows strong ability as a flexible vision backbone, achieving great advantages in both training efficiency and deployment flexibility." -"---\nabstract: 'Numerous observations have shown that almost all galaxies in our Universe host supermassive black holes (SMBHs), but there is still much debate about their formation and evolutionary processes. Recently, gravitational waves (GWs) have been expected to be a new and important informative observation, in particular, in the low-frequency region by making use of the Laser Interferometer Space Antenna (LISA) and Pulsar Timing Arrays (PTAs). As an evolutionary process of the SMBHs, we revisit a dark matter (DM) halo-SMBH coevolution model based on the halo merger tree employing an ansatz for the mass relation between the DM halos and the SMBHs at $z=6$. In this model, the mass of SMBHs grows through their mergers associated with the halo mergers, and hence the evolutionary information must be stored in the GWs emitted at the mergers. We investigate the stochastic gravitational background from the coalescing SMBH binaries, which the PTAs can detect, and also the GW bursts emitted at the mergers, which can be detected by the mHz band observations such as LISA. 
We also discuss the possibility of probing the mass relation between the DM halos and the SMBHs at high redshift by future GW observations.'\nauthor:\n- Kazuya Furusawa" -"---\nabstract: 'Recent generative approaches for multi-hop question answering (QA) utilize the fusion-in-decoder method\u00a0[@izacard-grave-2021-leveraging] to generate a single sequence output which includes both a final answer and a reasoning path taken to arrive at that answer, such as passage titles and key facts from those passages. While such models can lead to better interpretability and high quantitative scores, they often have difficulty accurately identifying the passages corresponding to key entities in the context, resulting in incorrect passage hops and a lack of faithfulness in the reasoning path. To address this, we propose a single-sequence prediction method over a local reasoning graph ([SeqGraph]{})[^1] that integrates a graph structure connecting key entities in each context passage to relevant subsequent passages for each question. We use a graph neural network to encode this graph structure and fuse the resulting representations into the entity representations of the model. Our experiments show significant improvements in answer exact-match/F1 scores and faithfulness of grounding in the reasoning path on the HotpotQA dataset and achieve state-of-the-art numbers on the Musique dataset with only up to a 4% increase in model parameters.'\nauthor:\n- |\n Gowtham Ramesh[^2], Makesh Sreedhar, and Junjie Hu\\\n University of Wisconsin-Madison\\\n `{gramesh4,msreedhar,junjie.hu}`@wisc.edu\nbibliography:" -"---\nabstract: 'The paper has explored analogue of gravitational synchrotron massive particle and Penrose process in MOdified Gravity (MOG) known as Scalar-Tensor-Vector-Gravity (STVG). Investigation of the gravitational field around Kerr-MOG black hole showed that it has strong gravitational field with large horizon and can rotate faster than Kerr black hole due to the effect of STVG. We have studied influence of STVG in circular motion of massive particle around Kerr-MOG black hole and discussed the Innermost Stable Circular Orbit (ISCO) of massive test particle. It is shown that STVG plays a crucial role in energy extraction from a rotating black hole, with an energy efficiency of more than $100\\%$ according to the Penrose process. Furthermore, we have explored the gravitational synchrotron radiation analogue produced by a massive particle orbiting around a Kerr-MOG black hole. It has been shown that the intensity of gravitational radiation from binary systems of stellar black holes (SBH) and supermassive black holes (SMBH).'\naddress:\n- 'Ulugh Beg Astronomical Institute, Astronomy St. 33, Tashkent 100052, Uzbekistan'\n- 'School of Engineering, Akfa University, Milliy Bog St. 264, Tashkent 111221, Uzbekistan'\n- 'Department of Civil Systems Engineering, Ajou University in Tashkent, Asalobod St. 113, Tashkent 100204, Uzbekistan'\n- 'Webster" -"---\nabstract: 'Continual learning (CL) is an approach to address catastrophic forgetting, which refers to forgetting previously learned knowledge by neural networks when trained on new tasks or data distributions. The adversarial robustness has decomposed features into robust and non-robust types and demonstrated that models trained on robust features significantly enhance adversarial robustness. However, no study has been conducted on the efficacy of robust features from the lens of the CL model in mitigating catastrophic forgetting in CL. 
In this paper, we introduce the CL robust dataset and train four baseline models on both the standard and CL robust datasets. Our results demonstrate that the CL models trained on the CL robust dataset experienced less catastrophic forgetting of the previously learned tasks than when trained on the standard dataset. Our observations highlight the significance of the features provided to the underlying CL models, showing that CL robust features can alleviate catastrophic forgetting.'\nauthor:\n- |\n Hikmat Khan\\\n *Dept. of Electrical and*\\\n *Computer Engineering*\\\n *Rowan University*\\\n Glassboro, New Jersey, USA\\\n bouaynaya@rowan.edu Nidhal C. Bouaynaya\\\n *Dept. of Electrical and*\\\n *Computer Engineering*\\\n *Rowan University*\\\n Glassboro, New Jersey, USA\\\n bouaynaya@rowan.edu Ghulam Rasool\\\n *Dept. of Machine Learning*\\\n *Moffitt Cancer Center*\\\n Tampa, Florida, USA\\\n ghulam.rasool@moffitt.org\\\nbibliography:" -"---\nabstract: 'We introduce an open-domain topic classification system that accepts user-defined taxonomy in real time. Users will be able to classify a text snippet with respect to any candidate labels they want, and get instant response from our web interface. To obtain such flexibility, we build the backend model in a zero-shot way. By training on a new dataset constructed from Wikipedia, our label-aware text classifier can effectively utilize implicit knowledge in the pretrained language model to handle labels it has never seen before. We evaluate our model across four datasets from various domains with different label sets. Experiments show that the model significantly improves over existing zero-shot baselines in open-domain scenarios, and performs competitively with weakly-supervised models trained on in-domain data.[^1][^2]'\nauthor:\n- |\n Hantian Ding$^1$, Jinrui Yang$^{1,2}$, Yuqian Deng$^1$, Hongming Zhang$^1$, Dan Roth$^1$\\\n $^1$University of Pennsylvania, $^2$University of Melbourne\\\n `{hantian2, jinruiy, yuqiand, hzhangal, danroth}@seas.upenn.edu`\\\nbibliography:\n- 'anthology.bib'\ntitle: 'Towards Open-Domain Topic Classification'\n---\n\n=1\n\nIntroduction\n============\n\nText classification is a fundamental natural language processing problem, with one of its major applications in topic labeling [@Lang95; @wang-manning-2012-baselines]. Over the past decades, supervised classification models have achieved great success in closed-domain tasks with large-scale annotated datasets [@DBLP:conf/nips/ZhangZL15; @tang-etal-2015-document; @yang-etal-2016-hierarchical]." -"---\nabstract: 'We review recent results for forward jets at the LHC and EIC as obtained within small-x Improved Transverse Momentum Dependent factorization (ITMD). In addition to an elementary overview of various approaches to perturbative QCD at high energy, including High Energy Factorization, Color Glass Condensate and ITMD, we describe the Monte Carlo implementation and discuss the existing and unpublished phenomenological results for forward dijets.'\nauthor:\n- |\n A. van Hameren$\\,\\,^a$, H. Kakkad$\\,\\,^b$, P. Kotko$\\,\\,^b$,\\\n K. Kutak$\\,\\,^a$, S. Sapeta$\\,\\,^a$\\\n \\\n $^a$ [*Institute of Nuclear Physics, Polish Academy of Sciences*]{}\\\n [*Radzikowskiego 152, 31-342 Krak\u00f3w, Poland* ]{}\\\n \\\n $^b$ [*AGH University Of Krakow,* ]{}\\\n [*Faculty of Physics and Applied Computer Science,*]{}\\\n [*al. 
Mickiewicza 30, 30-059 Krak\u00f3w, Poland*]{}\\\n \\\nbibliography:\n- 'sudakov.bib'\n- 'references.bib'\n---\n\nIntroduction {#sec:Intro}\n============\n\nQuantum Chromodynamics (QCD) is a well-established theory that describes interactions of quarks and gluons. However, it still has its challenges. In the high energy domain, one of the long-standing problems is finding clear experimental signals of gluon saturation, which is a signature of quasi-equilibrium between gluon splitting and gluon fusion in dense nuclear systems. Gluon saturation has been predicted from QCD a long time ago [@Gribov:1984tu; @Mueller:1985wy] and has been extensively studied using various" -"---\nabstract: 'Feature engineering is of critical importance in the field of Data Science. While any data scientist knows the importance of rigorously preparing data to obtain good-performing models, only scarce literature formalizes its benefits. In this work, we will present the method of Statistically Enhanced Learning (SEL), a formalization framework of existing feature engineering and extraction tasks in Machine Learning (ML). The difference compared to classical ML consists in the fact that certain predictors are not directly observed but obtained as statistical estimators. Our goal is to study SEL, aiming to establish a formalized framework and illustrate its improved performance by means of simulations as well as applications on real-life use cases.'\nauthor:\n- |\n Florian Felice\\\n Department of Mathematics\\\n University of Luxembourg\\\n [`florian.felice@uni.lu`](mailto:florian.felice@uni.lu)\\\n Christophe Ley\\\n Department of Mathematics\\\n University of Luxembourg\\\n [`christophe.ley@uni.lu`](mailto:christophe.ley@uni.lu)\\\n St\u00e9phane Bordas\\\n Department of Engineering\\\n University of Luxembourg\\\n [`stephane.bordas@uni.lu`](mailto:stephane.bordas@uni.lu)\\\n Andreas Groll\\\n Department of Statistics\\\n University of Dortmund\\\n [`groll@statistik.tu-dortmund.de`](mailto:groll@statistik.tu-dortmund.de)\\\nbibliography:\n- 'references.bib'\ntitle: 'Statistically Enhanced Learning: a feature engineering framework to boost (any) learning algorithms'\n---\n\n=\\[draw=black,thick,anchor=west\\] =\\[draw=red,fill=red!30\\] =\\[dashed,fill=gray!50\\]\n\n#### Significance statement\n\nStatistically Enhanced Learning (SEL) is a promising approach to improving learning performance. This work provides a formal definition of SEL and presents a" -"---\naddress: |\n $^{1}$ Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, USA\\\n $^{2}$ Department of Physics, University of Colorado Denver, Campus Box 157, P.O.Box 173364, Denver, CO 80217, USA; alberto.sadun@ucdenver.edu\n---\n\nIntroduction\n============\n\nBlazars form a subclass of radio-loud active galactic nuclei (AGN) that have jets closely aligned to our line-of-sight, resulting in the emission from these objects being highly Doppler-boosted and making them some of the brightest gamma-ray sources in the extragalactic sky\u00a0[@gammaagn]. 
Blazars are generally characterized by non-thermal, highly-polarized continuum emission, spanning the entire electromagnetic spectrum and characteristically show very fast variability, which has been observed down to timescales of minutes in the gamma-ray regime\u00a0[@1996_Gaidos; @PKS_2155_minute_variability; @2020_Mrk421], [[as well as in the optical regime\u00a0[@2018_Aranzana; @2018_Kim; @galaxies6010034]]{}.]{}\n\nThe spectral energy distribution (SED) of a typical blazar comprises two distinct peaks. Although the first peak, occurring in the radio to the X-ray regime, has been attributed to synchrotron emission from electrons and positrons within the jet, the physical mechanisms responsible for the second peak, produced in the X-ray to gamma-ray regime, are still a matter of debate and two main scenarios have been postulated to explain it. Leptonic models [@Blandford_and_Levinson_1995; @Georganopoulos_2002] attribute the" -"---\nabstract: 'This study aims to explore the complex interactions between an internal solitary wave and an external force using the Benjamin-Ono equation as the theoretical framework. The investigation encompasses both asymptotic and numerical approaches. By assuming a small amplitude for the external force, we derive a dynamical system that describes the behavior of the solitary wave amplitude and the position of its crest. Our findings reveal three distinct scenarios: (i) resonance between the solitary wave and the external force, (ii) oscillatory motion with closed orbits, and (iii) displacement from the initial position while maintaining the wave direction. However, through numerical simulations, we observe a different relationship between the amplitude of the solitary wave and its crest position. Specifically, for external forces of small amplitude, the simulations indicate the presence of an unstable spiral pattern. Conversely, when subjected to external forces of larger amplitudes, the solitary wave exhibits a stable spiral trajectory which resembles the classical damped mass-spring system.'\nauthor:\n- 'Marcelo V. Flamarion$^{1}$ and Efim Pelinovsky$^{2,3}$'\ntitle: 'Interaction of interfacial waves with an external force: The Benjamin-Ono equation framework'\n---\n\n[$^1$Unidade Acad[\\^ e]{}mica do Cabo de Santo Agostinho,\\\nUFRPE/Rural Federal University of Pernambuco, BR 101 Sul, Cabo de Santo" -"---\nabstract: 'Singularly perturbed problems present inherent difficulty due to the presence of a thin boundary layer in their solutions. To overcome this difficulty, we propose using deep operator networks (DeepONets), a method previously shown to be effective in approximating nonlinear operators between infinite-dimensional Banach spaces. In this paper, we demonstrate for the first time the application of DeepONets to one-dimensional singularly perturbed problems, achieving promising results that suggest their potential as a robust tool for solving this class of problems. We consider the convergence rate of the approximation error incurred by the operator networks in approximating the solution operator, and examine the generalization gap and empirical risk, all of which are shown to converge uniformly with respect to the perturbation parameter. 
By utilizing Shishkin mesh points as locations of the loss function, we conduct several numerical experiments that provide further support for the effectiveness of operator networks in capturing the singular boundary layer behavior.'\naddress: |\n \u00a0Tsinghua University, Beijing, China.\\\n \u00a0Nanjing University of Aeronautics and Astronautics, Nanjing, China.\nauthor:\n- 'Ting Du, Zhongyi Huang and Ye Li'\ntitle: Approximation and Generalization of DeepONets for Learning Operators Arising from a Class of Singularly Perturbed Problems\n---\n\nIntroduction\n============\n\nSingularly perturbed" -"---\nabstract: 'Quantifying treatment effect heterogeneity is a crucial task in many areas of causal inference, e.g. optimal treatment allocation and estimation of subgroup effects.\u00a0We study the problem of estimating the level sets of the conditional average treatment effect (CATE), identified under the no-unmeasured-confounders assumption. Given a user-specified threshold, the goal is to estimate the set of all units for whom the treatment effect exceeds that threshold. For example, if the cutoff is zero, the estimand is the set of all units who would benefit from receiving treatment. Assigning treatment just to this set represents the optimal treatment rule that maximises the\u00a0mean\u00a0population outcome.\u00a0Similarly, cutoffs greater than zero represent optimal rules under resource constraints. Larger cutoffs can also be used for anomaly detection, i.e., finding which subjects are most affected by treatments.\u00a0Being able to accurately estimate CATE level sets is therefore of great practical relevance. The level set estimator that we study follows the plug-in principle and consists of simply thresholding a good estimator of the CATE. While many CATE estimators have been recently proposed and analysed, how their properties relate to those of the corresponding level\u00a0set\u00a0estimators remains unclear. Our first goal is thus" -"---\nabstract: 'The Unmanned Aerial Vehicle (UAV) swarm networks will play a crucial role in the B5G/6G network thanks to its appealing features, such as wide coverage and on-demand deployment. Emergency communication (EC) is essential to promptly inform UAVs of potential danger to avoid accidents, whereas the conventional communication-only feedback-based methods, which separate the digital and physical identities (DPI), bring intolerable latency and disturb the unintended receivers. In this paper, we present a novel DPI-Mapping solution to match the identities (IDs) of UAVs from dual domains for visual networking, which is the first solution that enables UAVs to communicate promptly with what they see without the tedious exchange of beacons. The IDs are distinguished dynamically by defining feature similarity, and the asymmetric IDs from different domains are matched via the proposed bio-inspired matching algorithm. We also consider Kalman filtering to combine the IDs and predict the states for accurate mapping. Experiment results show that the DPI-Mapping reduces individual inaccuracy of features and significantly outperforms the conventional broadcast-based and feedback-based methods in EC latency. 
Furthermore, it also reduces the disturbing messages without sacrificing the hit rate.'\nauthor:\n- \ntitle: 'Dual Identities Enabled Low-Latency Visual Networking for UAV Emergency Communication'\n---\n\nUAV" -"---\nabstract: 'Missing cascades from TeV blazar beams indicate that collective plasma effects may play a significant role in their energy loss. It is possible to mimic the evolution of such highly energetic pair beams in laboratory experiments using modern accelerators. The fate of the beam is governed by two different processes, energy loss through the unstable mode and energetic broadening of the pair beam through diffusion in momentum space. We chalk out this evolution using a Fokker-Planck approach in which the drift and the diffusion terms respectively describe these phenomena in a compact form. We present particle-in-cell simulations to trace the complete evolution of the unstable beam-plasma system for a generic narrow Gaussian pair beam for which the growth rate is reactive. We show that the instability leads to an energetic broadening of the pair beam, slowing down the instability growth in the linear phase, in line with the analytical and numerical solutions of the Fokker-Planck equation. Whereas in a laboratory experiment the change in the momentum distribution is an easily measured observable as a feedback of the instability, the consequence of diffusive broadening in an astrophysical scenario can be translated to an increase in the opening angle of" -"---\nabstract: 'Large documents written in juridical language are difficult to interpret, with long sentences leading to intricate and intertwined relations between the nouns. The present paper frames this problem in the context of recent European security directives. The complexity of their language is here thwarted by automating the extraction of the relevant information, namely of the parts of speech from each clause, through a specific tailoring of Natural Language Processing (NLP) techniques. These contribute, in combination with ontology development principles, to the design of our automated method for the representation of security directives as ontologies. The method is showcased on a practical problem, namely to derive an ontology representing the NIS 2 directive, which is the peak of cybersecurity prescripts at the European level. Although the NLP techniques adopted showed some limitations and had to be complemented by manual analysis, the overall results provide valid support for directive compliance in general and for ontology development in particular.'\naddress: Universit\u00e0 degli Studi di Catania\nauthor:\n- Giampaolo Bella\n- Gianpietro Castiglione\n- Daniele Francesco Santamaria\nbibliography:\n- 'sample-ceur.bib'\ntitle: An automated method for the ontological representation of security directives\n---\n\n\\[ orcid=0000-0002-7615-8643, email=giamp@dmi.unict.it, \\]\n\n\\[orcid=0000-0003-2215-0416, email=gianpietro.castiglione@phd.unict.it, \\]\n\n\\[orcid=0000-0002-4273-6521, email=daniele.santamaria@unict.it, \\]" -"---\nabstract: 'The accurate segmentation of medical images is a crucial step in obtaining reliable morphological statistics. However, training a deep neural network for this task requires a large amount of labeled data to ensure high-accuracy results. To address this issue, we propose using progressive text prompts as prior knowledge to guide the segmentation process. Our model consists of two stages. 
In the first stage, we perform contrastive learning on natural images to pretrain a powerful prior prompt encoder (PPE). This PPE leverages text prior prompts to generate multimodality features. In the second stage, medical image and text prior prompts are sent into the PPE inherited from the first stage to achieve the downstream medical image segmentation task. A multiscale feature fusion block (MSFF) combines the features from the PPE to produce multiscale multimodality features. These two progressive features not only bridge the semantic gap but also improve prediction accuracy. Finally, an UpAttention block refines the predicted results by merging the image and text features. This design provides a simple and accurate way to leverage multiscale progressive text prior prompts for medical image segmentation. Compared with using only images, our model achieves high-quality results with low data annotation costs. Moreover," -"---\nabstract: 'It has been argued that recycled gas from stellar mass loss in galaxies might serve as an important fuelling source for black holes (BHs) in their centers. Utilizing spectroscopic samples of galaxies from the Sloan Digital Sky Survey (SDSS) at $z = 0$\u20130.35 and the Large Early Galaxy Astrophysics Census (LEGA-C) survey at $z = 0.6$\u20131 that have X-ray coverage from \u00a0or , we test this stellar mass loss fuelling scenario by investigating how AGN activity and BH growth vary with the break strength at 4000\u00a0\u00c5, \u00a0(which is closely related to the age of stellar populations), as younger galaxies are considered to have higher stellar mass loss rates. We found that when controlling for host-galaxy properties, the fraction of log\u00a0/\u00a0$> 32$ (which roughly corresponds to Eddington ratios $\\gtrsim 1$%) AGN and sample-averaged black hole accretion rate () decrease with \u00a0among \u00a0$\\lesssim$ 1.9 galaxies, suggesting a higher level of AGN activity among younger galaxies, which supports the stellar mass loss fuelling scenario. For the oldest and most massive galaxies at $z = 0$\u20130.35, this decreasing trend is not present anymore. 
We found that, among these most massive galaxies at low redshift, the fraction of low specific-accretion-rate" -"---\nabstract: 'Triangulenes are open-shell triangular graphene flakes with total spin increasing with their size.\u00a0In the last years, on-surface-synthesis strategies have permitted fabricating and engineering triangulenes of various sizes and structures with atomic precision.\u00a0However, direct proof of the increasing total spin with their size remains elusive.\u00a0In this work, we report the combined in-solution and on-surface synthesis of a large nitrogen-doped triangulene (aza-\\[5\\]-triangulene) and the detection of its high spin ground state on a Au(111) surface.\u00a0Bond-resolved scanning tunneling microscopy images uncovered radical states distributed along the zigzag edges, which were detected as weak zero-bias resonances in scanning tunneling spectra.\u00a0These spectral features reveal the partial Kondo screening of a high spin state.\u00a0Through a combination of several simulation tools, we find that the observed distribution of radical states is explained by a quintet ground state ($S=2$), instead of the expected quartet state ($S=3/2$), confirming the positively charged state of the molecule on the surface.\u00a0We further provide a qualitative description of the change of (anti)aromaticity introduced by N-substitution, and its role in the charge stabilization on a surface, resulting in a $S=2$ aza-\\[5\\]-triangulene on Au(111).'\nauthor:\n- 'Manuel Vilas-Varela'\n- 'Francisco Romero-Lara'\n- Alessio Vegliante\n- Jan" -"---\nabstract: 'Large language models (LLMs) may not equitably represent diverse global perspectives on societal issues. In this paper, we develop a quantitative framework to evaluate whose opinions model-generated responses are more similar to. We first build a dataset, GlobalOpinionQA, comprised of questions and answers from cross-national surveys designed to capture diverse opinions on global issues across different countries. Next, we define a metric that quantifies the similarity between LLM-generated survey responses and human responses, conditioned on country. With our framework, we run three experiments on an LLM trained to be helpful, honest, and harmless with Constitutional AI. By default, LLM responses tend to be more similar to the opinions of certain populations, such as those from the USA, and some European and South American countries, highlighting the potential for biases. When we prompt the model to consider a particular country\u2019s perspective, responses shift to be more similar to the opinions of the prompted populations, but can reflect harmful cultural stereotypes. When we translate GlobalOpinionQA questions to a target language, the model\u2019s responses do not necessarily become the most similar to the opinions of speakers of those languages. We release our dataset for others to use and build on.[^1] We" -"---\nabstract: 'The study of complex human interactions and group activities has become a focal point in human-centric computer vision. However, progress in related tasks is often hindered by the challenges of obtaining large-scale labeled datasets from real-world scenarios. To address the limitation, we introduce [[`M^{3}Act`]{}]{}, a synthetic data generator for **m**ulti-view **m**ulti-group **m**ulti-person human atomic **act**ions and group **act**ivities. 
Powered by the Unity engine, [[`M^{3}Act`]{}]{} features multiple semantic groups, highly diverse and photorealistic images, and a comprehensive set of annotations, which facilitates the learning of human-centered tasks across single-person, multi-person, and multi-group conditions. We demonstrate the advantages of [[`M^{3}Act`]{}]{} across three core experiments using various input modalities. First, adding our synthetic data significantly improves the performance of MOTRv2 on DanceTrack, leading to a hop on the leaderboard from $10^{th}$ to $2^{nd}$ place. With [[`M^{3}Act`]{}]{}, we achieve tracking results on par with MOTRv2\\*, which is trained with 62.5% more real-world data. Second, [[`M^{3}Act`]{}]{} improves the benchmark performances on CAD2 by 5.59% and 7.43% on human group activity and atomic action recognition accuracy respectively. Moreover, [[`M^{3}Act`]{}]{} opens new research for controllable 3D group activity generation. We define multiple metrics and propose a competitive baseline for the novel task.'\nauthor:\n- |" -"---\nabstract: 'Computational chemical combustion problems are known to be stiff, and are typically solved with implicit time integration methods. A novel exponential time integrator, EPI3V, is introduced and applied to a spatially homogeneous isobaric reactive mixture. Three chemical mechanism of increasing complexity are considered, and in two cases the novel method can perform similar if not marginally better to a well-known implementation of a BDF implicit method. In one specific case we see relative performance degradation of the EPI3V to the implicit method. Despite this, the novel exponential method does converge for this case. A performance analysis of the exponential method is provided, demonstrating possible avenues for performance improvement.'\nauthor:\n- '**Stewart Jared**'\n- '**Tokman Mayya**'\n- '**Bisetti Fabrizio**'\n- Dallerit Valentin\n- '**Diaz-Ibarra Oscar**'\ntitle: '**Variable time-stepping exponential integrators for chemical reactors with analytical Jacobians**\\'\n---\n\n------------------------------------------------------------------------\n\n------------------------------------------------------------------------\n\nIntroduction\n============\n\nCombustion is relevant to energy production, transportation, military technology, and most industrial processes. Furthermore, combustion is central to natural events relevant to ecological systems and climate, such as forest fires. Because of combustion\u2019s ubiquity, the ability to model and predict combustion accurately is critical to many engineering and scientific applications. Due to the physical complexity of combustion, numerical" -"---\nauthor:\n- 'Akshatha Jagadish, Manoj Varma'\nbibliography:\n- 'sn-article.bib'\ntitle: 'Navigation of micro-robot swarms for targeted delivery using reinforcement learning'\n---\n\n**Keywords**: micro-swimmers, RL, PPO, RPO, curriculum learning, swarm-control\n\nIntroduction {#sec:IntroCh6}\n============\n\nMicro-scale is a fertile area for research and provides the promise of great applications in a variety of fields such as micro-surgery [@Li17], micro-manufacturing [@Goodrich17], cargo delivery [@Yang20], pollution rectification [@Soler13; @Chen21] and many more. 
While there has been substantial research going on to understand the physics at this scale for many decades, the research in the design and development of robots that can operate at this scale has exponentially increased in recent years, and we see different methods of realizing them [@Li16RL; @Li17; @Yin21]. In addition to the design and propulsion methods, researchers have also been looking at different navigation strategies for these micro-robots [@Ider21]. These methods, however, require complete information of the environment that the micro-robots operate in, which is generally difficult to obtain.\n\nThe physical system of micro-robots can be controlled computationally, making it a cyber-physical system, which makes it scalable and reliable. Here, we explore reinforcement learning (RL) as the computational part of the system owing to its incredible performance in recent years" -"---\nabstract: 'We study characters of states in $p$-adic vertex operator algebras. In particular, we show that the image of the character map for both the $p$-adic Heisenberg and $p$-adic lattice vertex operator algebras contain infinitely-many non-classical $p$-adic modular forms which are not contained in the image of the algebraic character map. We obtain also new expressions for square-bracket modes in the Heisenberg VOA which are used in the study of such characters.'\nauthor:\n- Daniel Barake\n- Cameron Franc\nbibliography:\n- 'references.bib'\ntitle: 'Characters in *p*-adic Heisenberg and Lattice Vertex Operator Algebras'\n---\n\nIntroduction\n============\n\nWe study $p$-adic properties of certain vertex operator algebras. Motivated by both physical and number-theoretical methods, the authors of [@FM] introduce the study of $p$-adic VOAs which arise from a completion of the axioms for usual (algebraic) VOAs. The existence of $p$-adic variants of known VOAs such as the Virasoro, Monster and Heisenberg was central to this work, however the precise image of the character map on such VOAs described in Sections 9 and 10 remains undetermined. Here, we expand on the techniques of [@FM] to provide results which we hope will assist in resolving this problem.\n\nOur focus is first on the $p$-adic" -"---\nabstract: 'Reconstructing a signal on a graph from observations on a subset of the vertices is a fundamental problem in the field of graph signal processing. It is often assumed that adding additional observations to an observation set will reduce the expected reconstruction error. We show that under the setting of noisy observation and least-squares reconstruction this is not always the case, characterising the behaviour both theoretically and experimentally.'\nauthor:\n- \nbibliography:\n- 'main.bib'\ntitle: On the Impact of Sample Size in Reconstructing Graph Signals\n---\n\nGraph signal processing, sampling, reconstruction, least squares, robustness.\n\nIntroduction\n============\n\nsignal processing (GSP) has gained popularity owing to its ability to process and analyze signals on graphs, such as political preferences [@renoust2017estimating], brain fMRIs [@itani2021graph] and urban air pollution [@jain2014big]. GSP generalises the highly successful tools of classical signal processing from regular domains such as grids to graphs. Similar to the classical case, the computational costs of processing and storing large volumes of graph signals can be prohibitive, and complete data may not be available owing to impractically high observation costs. 
Graph sampling provides a solution to these problems by efficiently extrapolating the full data across the graph from observations on a set" -"---\nabstract: 'Neutral particles capable of travelling cosmic distances from a source to detectors on Earth are limited to photons and neutrinos. Examination of the Dark Matter annihilation/decay spectra for these particles reveals the presence of continuum spectra (e.g. due to fragmentation and W or Z decay) and peaks (due to direct annihilations/decays). However, when one explores extensions of the Standard Model (BSM), unexplored spectra emerge that differ significantly from those of the Standard Model (SM) for both neutrinos and photons. In this paper, we argue for the inclusion of important spectra that include peaks as well as previously largely unexplored entities such as boxes and combinations of box, peak and continuum decay spectra.'\nauthor:\n- 'Wim Beenakker [^1]'\n- 'Sascha Caron [^2]'\n- 'Jochem Kip [^3]'\n- 'Roberto Ruiz de Austri [^4]'\n- 'Zhongyi Zhang [^5]'\nbibliography:\n- 'main.bib'\ntitle: New energy spectra in neutrino and photon detectors to reveal hidden dark matter signals\n---\n\nIntroduction\n============\n\nThe search for Dark Matter (DM) by indirect detection is the subject of many studies. A large number of experiments have investigated the cosmic antiproton, positron, photon and neutrino spectra. Notable experiments include but are not limited to, AMS-02\u00a0[@AMS-02:AGUILAR20211], Fermi-LAT\u00a0[@FERMI-LAT:Ackermann_2015]," -"---\nabstract: 'The repulsive three-body force between the lambda ($\\Lambda$) hyperon and medium nucleons is a key element in solving the hyperon puzzle in neutron stars. We investigate the binding energies of the $\\Lambda$ hyperon in hypernuclei to verify the repulsive $\\Lambda$ potentials from the chiral effective field theory ($\\chi$EFT) employing the Skyrme Hartree-Fock method. We find that the $\\chi$EFT $\\Lambda$ potential with $\\Lambda NN$ three-body forces reproduces the existing hypernuclear binding energy data, whereas the $\\Lambda$ binding energies are overestimated without the $\\Lambda NN$ three-body force. Additionally, we search for the parameter space of the $\\Lambda$ potentials by varying the Taylor coefficients of the $\\Lambda$ potential and the effective mass of $\\Lambda$ at the saturation density. Our analysis demonstrates that the parameter region consistent with the $\\Lambda$ binding energy data spans a wide range of the parameter space, including even more repulsive potentials than the $\\chi$EFT prediction. We confirm that these strong repulsive $\\Lambda$ potentials suppress the presence of $\\Lambda$ in neutron star matter. We found that the $\\Lambda$ potentials repulsive at high densities are favored when the depth of the $\\Lambda$ potential at the saturation density, $U_\\Lambda(\\rho_0)=J_\\Lambda$, is $J_\\Lambda\\gtrsim-29~{\\textrm{MeV}}$, while attractive ones are favored when $J_\\Lambda \\lesssim -31~{\\textrm{MeV}}$." -"---\nabstract: 'Over recent years, denoising diffusion generative models have come to be considered as state-of-the-art methods for synthetic data generation, especially in the case of generating images. These approaches have also proved successful in other applications such as tabular and graph data generation. However, due to computational complexity, to this date, the application of these techniques to graph data has been restricted to small graphs, such as those used in molecular modeling. 
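For the peak and box spectra invoked in the dark-matter record earlier on this line, standard two-body kinematics (our own summary of textbook results, not quoted from that paper) gives a monochromatic line for direct annihilation to a photon, and a flat box when the photons come from an intermediate state decaying in flight:

```latex
% Hedged reference kinematics for annihilating DM of mass m_chi.
\begin{align}
  \chi\chi \to \gamma X:\quad
    & E_\gamma = m_\chi\left(1 - \frac{m_X^2}{4 m_\chi^2}\right)
      && \text{(monochromatic peak)}\\
  \chi\chi \to \phi\phi,\ \phi\to\gamma\gamma:\quad
    & \frac{dN}{dE} = \frac{4}{E_+ - E_-}\,
      \Theta(E_+ - E)\,\Theta(E - E_-),
      && E_\pm = \frac{m_\chi}{2}
         \left(1 \pm \sqrt{1 - m_\phi^2/m_\chi^2}\right)
\end{align}
```

Superposing such peak, box, and continuum components is what produces the "combination" spectra the record argues should be included in detector searches.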
In this paper, we propose [[SaGess]{}]{}, a discrete denoising diffusion approach, which is able to generate large real-world networks by augmenting a diffusion model ([[DiGress]{}]{}) with a generalized divide-and-conquer framework. The algorithm is capable of generating larger graphs by sampling a covering of subgraphs of the initial graph in order to train [[DiGress]{}]{}. [[SaGess]{}]{} then constructs a synthetic graph using the subgraphs that have been generated by [[DiGress]{}]{}. We evaluate the quality of the synthetic data sets against several competitor methods by comparing graph statistics between the original and synthetic samples, as well as evaluating the utility of the synthetic data set produced by using it to train a task-driven model, namely link prediction. In our experiments, [SaGess]{} outperforms most of" -"---\nabstract: 'In this paper, we focus on the parametrization of the effective equation of state (EoS) parameter within the framework of $f(Q)$ symmetric teleparallel gravity. Here, the gravitational action is represented by an arbitrary function of the non-metricity scalar $Q$. By utilizing a specific parametrization of the effective EoS parameter and a power-law model of $f(Q)$ theory, namely $f(Q)=\\beta Q^{\\left( m+1\\right) }$ (where $\\beta$ and $m$ are arbitrary constants), we derive the cosmological solution of the Hubble parameter $H(z)$. To constrain model parameters, we employ recent observational data, including the Observational Hubble parameter Data ($OHD$), Baryon Acoustic Oscillations data ($BAO$), and Type Ia supernovae data ($SNe$ Ia). The current constrained value of the deceleration parameter is found to be $q_{0}=-0.50^{+0.01}_{-0.01}$, indicating that the current Universe is accelerating. Furthermore, we examine the evolution of the density, EoS, and $Om(z)$ diagnostic parameters to deduce the accelerating nature of the Universe. Finally, we perform a stability analysis with linear perturbations to confirm the model\u2019s stability.'\nauthor:\n- 'A. Mussatayeva [[](https://orcid.org/0000-0000-0000-0000)]{}'\n- 'N. Myrzakulov[[](https://orcid.org/0000-0001-8691-9939)]{}'\n- 'M. Koussour[[](https://orcid.org/0000-0002-4188-0572)]{}'\ntitle: 'Cosmological constraints on dark energy in $f(Q)$ gravity: A parametrized perspective'\n---\n\nIntroduction {#sec1}\n============\n\nIn modern cosmology, the observational aspect is critical. The introduction" -"---\nabstract: 'The complete positivity vs positivity correspondence in the Choi-Jamio[\u0142]{}kowski-Kraus-Sudarshan quantum channel-state isomorphism depends on the choice of basis. Instead of the \u201ccanonical\u201d basis, if we use, e.g., the Pauli spin matrices along with the identity as the basis for the space of bounded operators on the two-dimensional complex Hilbert space, this correspondence breaks down. A sufficient condition on the basis for validity of this correspondence is provided in the work of Paulsen and Shultz\u00a0[@Paulsen], which was later proven to be necessary by Kye\u00a0[@Kye]. A correspondence is also present between the space of super-maps and the tensor product of the spaces of the inputs and outputs of the same. In particular, a super-map is completely CP-preserving if and only if its Choi-type representation is completely positive (CP). This correspondence also depends on a specific choice of basis. 
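A concrete instance of the basis-dependent channel-state correspondence discussed in the record above: in the canonical matrix-unit basis, the Choi matrix of a completely positive map is positive semidefinite. The sketch below checks this for a qubit depolarizing channel; the channel, dimension, and tolerance are our choices for illustration.

```python
# Sketch: Choi matrix J(Phi) = sum_{ij} E_ij (x) Phi(E_ij) in the canonical
# matrix-unit basis; CP of Phi <=> J(Phi) >= 0 in this basis.
import numpy as np

def choi(phi, d):
    J = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex); E[i, j] = 1.0
            J += np.kron(E, phi(E))
    return J

# Example: qubit depolarizing channel (CP), p in [0, 1].
p = 0.3
dep = lambda X: (1 - p) * X + p * np.trace(X) * np.eye(2) / 2
eig = np.linalg.eigvalsh(choi(dep, 2))
print(np.all(eig >= -1e-12))   # True: positive semidefinite
```

Replacing the matrix units `E_ij` by, say, the Pauli basis changes `J` and can break the positivity test, which is precisely the basis dependence the record analyzes.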
In this work, we find the necessary and sufficient condition on a basis such that this correspondence holds true.'\nauthor:\n- 'Sohail$^1$, Sahil$^{2,3}$, Ritabrata Sengupta$^4$, and Ujjwal Sen$^1$'\nbibliography:\n- 'referencefile.bib'\ntitle: 'Duality between quantum channels and super-channels is basis-dependent'\n---\n\nintroduction\n============\n\nIn quantum mechanics, a physical system is represented by a complex separable Hilbert space. The state of" -"---\nabstract: 'This paper is devoted to investigating the nonlinear non-abelian Yang-Mills black holes. We consider three Born-Infeld, exponential, and logarithmic nonlinear Yang-Mills theories with $SO(n-1)$ and $SO(n-2,1)$ semi-simple groups, which n is the dimension of spacetime, and obtain a new class of nonlinear Yang-Mills (NYM) black hole solutions. Depending on the values of dimension $n$, Yang-Mills charge $e$ and the mass $m$ and nonlinear parameters $\\beta$, our solutions can lead to a naked singularity, a black hole with two horizons, an extreme or a Schwarzschild-type black hole. We also investigate the thermodynamic behaviors of the NYM black holes. For small charge values, the NYM solutions may be thermally stable in the canonical ensemble, if we consider an AdS spacetime with spherical $k=+1$ and hyperbolic $k=-1$ coordinates or a flat one with $k=+1$. However, there are no stable regions in the grand canonical ensemble in higher dimensions. For the NYM black hole, we observe a reentrant phase transition between large and small black holes in the BI-branch with small $\\beta$, which cannot be visible for the nonlinear Reissner-Nordstrom AdS black hole in the higher dimension. For the limit $\\beta\\rightarrow\\infty$, the critical ratio $\\frac{P_{c} v_{c}}{T_{c}}$ tends to the constant value $3/8$" -"---\nabstract: 'The unambiguous identification of Majorana zero modes (MZMs) is one of the most outstanding problems of condensed matter physics. Thermal transport provides a detection tool that is sensitive to these chargeless quasiparticles. We study thermoelectric transport between metallic leads transverse to a Josephson junction. The central double quantum dot hosts conventional or topological Andreev states that depend on the phase difference $\\phi$. We show that the presence of MZMs can be identified by a significant amplification of both the electrical and thermal conductance at $\\phi \\approx \\pi$ as well as the Seebeck coefficient at $\\phi \\approx 0$. We further investigate the robustness of our results against Cooper pair splitting processes.'\nauthor:\n- 'Raffael\u00a0L.\u00a0Klees'\n- Daniel\u00a0Gresta\n- Jonathan\u00a0Sturm\n- 'Laurens W. Molenkamp'\n- 'Ewelina\u00a0M.\u00a0Hankiewicz'\nbibliography:\n- 'refs.bib'\ntitle: 'Majorana-mediated thermoelectric transport in multiterminal junctions'\n---\n\n[^1]\n\n[^2]\n\n*Introduction.*Josephson junctions (JJs) have been extensively studied in numerous works, driven by their wide range of applications, from metrology [@Fatemi2021; @belcher2018] and quantum simulation [@manousakis2002quantum] to quantum computation [@Flensberg2011; @stern2013topological; @sarma2015majorana]. Recently, topological JJs gained significant attention as they provide robust platforms hosting Majorana zero modes (MZMs) [@Fernando2014Robustsignatures; @sato2017topological; @Ren2019; @Fornieri2019]. 
In particular, similar quantum-dot-based setups" -"---\nabstract: 'The ability to characterise the three-dimensional microstructure of multiphase materials is essential for understanding the interaction between phases and associated materials properties. Here, laboratory-based diffraction-contrast tomography (lab-based DCT), a recently-established materials characterization technique that can determine grain phases, morphologies, positions and orientations in a voxel-based reconstruction method, was used to map part of a dual-phase steel alloy sample. To assess the resulting microstructures that were produced by the lab-based DCT technique, an electron backscatter diffraction (EBSD) map was collected within the same sample volume. To identify the two-dimensional (2D) slice of the three-dimensional (3D) lab-based DCT reconstruction that best corresponded to the 2D EBSD map, a novel registration technique based solely on grain-averaged orientations was developed \u2013 this registration technique requires very little *a priori* knowledge of dataset alignment and can be extended to other techniques that only recover grain-averaged orientation data such as far-field 3D X-ray diffraction microscopy. Once the corresponding 2D slice was identified in the lab-based DCT dataset, comparisons of phase balance, grain size, shape and texture were performed between lab-based DCT and EBSD techniques. More complicated aspects of the microstructural morphology such as grain boundary shape and grains less than a critical size were" -"---\nabstract: |\n The advancement of new digital image sensors has enabled the design of exposure multiplexing schemes where a single image capture can have multiple exposures and conversion gains in an interlaced format, similar to that of a Bayer color filter array. In this paper, we ask the question of how to design such multiplexing schemes for *adaptive* high-dynamic range (HDR) imaging where the multiplexing scheme can be updated according to the scenes. We present two new findings.\n\n \\(i) We address the problem of *design optimality*. We show that given a multiplex pattern, the conventional optimality criteria based on the input/output-referred signal-to-noise ratio (SNR) of the independently measured pixels can lead to flawed decisions because it cannot encapsulate the location of the saturated pixels. We overcome the issue by proposing a new concept known as the spatially varying exposure risk (SVE-Risk) which is a pseudo-idealistic quantification of the amount of recoverable pixels. We present an efficient enumeration algorithm to select the optimal multiplex patterns.\n\n \\(ii) We report a *design universality* observation that the design of the multiplex pattern can be decoupled from the image reconstruction algorithm. This is a significant departure from the recent literature that the multiplex pattern" -"---\nabstract: 'Various simplicial complexes can be associated with a graph. Box complexes form an important families of such simplicial complexes and are especially useful for providing lower bounds on the chromatic number of the graph via some of their topological properties. They provide thus a fascinating topic mixing topology and discrete mathematics. This paper is intended to provide an up-do-date survey on box complexes. It is based on classical results and recent findings from the literature, but also establishes new results improving our current understanding of the topic, and identifies several challenging open questions.'\naddress:\n- 'HR. 
Daneshpajouh, School of Mathematical Sciences, University of Nottingham Ningbo China, 199 Taikang East Road, Ningbo 315100, China'\n- 'F. Meunier, CERMICS, \u00c9cole des Ponts, 77455 Marne-la-Vall\u00e9e CEDEX, France'\nauthor:\n- Hamid Reza Daneshpajouh\n- Fr\u00e9d\u00e9ric Meunier\nbibliography:\n- 'box-complex-survey.bib'\ntitle: |\n Box complexes: at the crossroad of\\\n graph theory and topology\n---\n\nIntroduction\n============\n\nSince the 1978 breakthrough paper by Lov\u00e1sz solving the Kneser conjecture\u00a0[@lovasz1978kneser], various simplicial complexes associated with graphs have been studied, in relations with other combinatorial problems or in their own right. The search for good topological bounds on the chromatic number of graphs has been a great" -"---\nabstract: 'A central task in finite-time thermodynamics is to minimize the excess or dissipated work $W_{\\rm diss}$ when manipulating the state of a system immersed in a thermal bath. We consider this task for an $N$-body system whose constituents are identical and uncorrelated at the beginning and end of the process. In the regime of slow but finite-time processes, we show that $W_{\\rm diss}$ can be dramatically reduced by considering collective protocols in which interactions are suitably created along the protocol. This can even lead to a sub-linear growth of $W_{\\rm diss}$ with $N$: $W_{\\rm diss}\\propto N^x$ with $x<1$; to be contrasted to the expected\u00a0$W_{\\rm diss}\\propto N$ satisfied in any non-interacting protocol. We derive the fundamental limits to such collective advantages and show that $x=0$ is in principle possible, however it requires long-range interactions. We explore collective processes with spin models featuring two-body interactions and achieve noticeable gains under realistic levels of control in simple interaction architectures. As an application of these results, we focus on the erasure of information in finite time and prove a faster convergence to Landauer\u2019s bound.'\nauthor:\n- Alberto Rolandi\n- Paolo Abiuso\n- 'Mart\u00ed Perarnau-Llobet'\nbibliography:\n- 'mybib.bib'\ntitle: 'Collective advantages in" -"---\nabstract: 'The flying ad hoc network (FANET) will play a crucial role in the B5G/6G era since it provides wide coverage and on-demand deployment services in a distributed manner. The detection of Sybil attacks is essential to ensure trusted communication in FANET. Nevertheless, the conventional methods only utilize the untrusted information that UAV nodes passively \u201cheard\u201d from the \u201cauditory\" domain (AD), resulting in severe communication disruptions and even collision accidents. In this paper, we present a novel VA-matching solution that matches the neighbors observed from both the AD and the \u201cvisual\u201d domain (VD), which is the first solution that enables UAVs to accurately correlate what they \u201csee\u201d from VD and \u201chear\u201d from AD to detect the Sybil attacks. Relative entropy is utilized to describe the similarity of observed characteristics from dual domains. The dynamic weight algorithm is proposed to distinguish neighbors according to the characteristics\u2019 popularity. The matching model of neighbors observed from AD and VD is established and solved by the vampire bat optimizer. Experiment results show that the proposed VA-matching solution removes the unreliability of individual characteristics and single domains. It significantly outperforms the conventional RSSI-based method in detecting Sybil attacks. 
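To make the dual-domain matching of the Sybil-detection record above concrete, the sketch below scores auditory-domain/visual-domain neighbor pairs by relative entropy and solves the resulting assignment problem. We deliberately substitute the Hungarian algorithm for the paper's vampire bat optimizer for brevity, and the characteristic distributions are toy data of ours.

```python
# Sketch: match neighbors "heard" (AD) to neighbors "seen" (VD) by the
# KL divergence between their observed characteristic distributions.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import entropy           # entropy(p, q) = KL(p || q)

rng = np.random.default_rng(0)
ad = rng.dirichlet(np.ones(8), size=5)    # characteristics from the AD
vd = ad[rng.permutation(5)] + 0.01        # same neighbors in the VD, noisy
vd /= vd.sum(1, keepdims=True)

cost = np.array([[entropy(a, v) for v in vd] for a in ad])
rows, cols = linear_sum_assignment(cost)  # AD neighbor i <-> VD neighbor cols[i]
# Unmatched or high-cost pairs would be flagged as candidate Sybil identities.
```

The intuition carries over directly: a fabricated Sybil identity is "heard" but never consistently "seen", so it ends up in a high-cost, implausible match.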
Furthermore, it has strong robustness and" -"---\nabstract: 'With the availability of extensive databases of inorganic materials, data-driven approaches leveraging machine learning have gained prominence in materials science research. In this study, we propose an innovative adaptation of data-driven concepts to the mapping and exploration of chemical compound space. Recommender systems, widely utilized for suggesting items to users, employ techniques such as collaborative filtering, which rely on bipartite graphs composed of users, items, and their interactions. Building upon the Open Quantum Materials Database (OQMD), we constructed a bipartite graph where elements from the periodic table and sites within crystal structures are treated as separate entities. The relationships between them, defined by the presence of ions at specific sites and weighted according to the thermodynamic stability of the respective compounds, allowed us to generate an embedding space that contains vector representations for each ion and each site. Through the correlation of ion-site occupancy with their respective distances within the embedding space, we explored new ion-site occupancies, facilitating the discovery of novel stable compounds. Moreover, the graph\u2019s embedding space enabled a comprehensive examination of chemical similarities among elements, and a detailed analysis of local geometries of sites. To demonstrate the effectiveness and robustness of our method, we conducted" -"---\nabstract: 'Black-box zero-th order optimization is a central primitive for applications in fields as diverse as finance, physics, and engineering. In a common formulation of this problem, a designer sequentially attempts candidate solutions, receiving noisy feedback on the value of each attempt from the system. In this paper, we study scenarios in which feedback is also provided on the *safety* of the attempted solution, and the optimizer is constrained to limit the number of unsafe solutions that are tried throughout the optimization process. Focusing on methods based on Bayesian optimization (BO), prior art has introduced an optimization scheme \u2013 referred to as \u2013 that is guaranteed not to select *any* unsafe solution with a controllable probability over feedback noise as long as strict assumptions on the safety constraint function are met. In this paper, a novel BO-based approach is introduced that satisfies safety requirements irrespective of properties of the constraint function. This strong theoretical guarantee is obtained at the cost of allowing for an arbitrary, controllable but non-zero, rate of violation of the safety constraint. The proposed method, referred to as , builds on online conformal prediction (CP) and is specialized to the cases in which feedback on the" -"---\nabstract: '6G and beyond networks will merge communication and computation capabilities in order to adapt to changes. As they will consist of many sensors gathering information from its environment, new schemes for managing these large amounts of data are needed. For this purpose, we review Over the Air (OTA) computing in the context of estimation and detection. For distributed scenarios, such as a Wireless Sensor Network, it has been proven that a separation theorem does not necessarily hold, whereas analog schemes may outperform digital designs. 
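The essence of the over-the-air computing reviewed above is that the multiple-access channel itself performs the aggregation: simultaneous analog transmissions superpose, so the receiver obtains a sum of the sensors' estimates in a single channel use. A toy simulation follows, with ideal unit channel gains as our simplifying assumption.

```python
# Sketch: over-the-air aggregation of K local estimates in one channel use.
import numpy as np

rng = np.random.default_rng(1)
K, d = 20, 4
theta = rng.normal(size=(K, d))                # local estimates at the sensors
h = np.ones(K)                                 # unit gains (perfect inversion)
noise = 0.05 * rng.normal(size=d)              # receiver noise

y = (h[:, None] * theta).sum(axis=0) + noise   # superposition over the air
estimate = y / K                               # scaled readout = average
print(np.allclose(estimate, theta.mean(axis=0), atol=0.05))
```

A digital scheme would instead spend K orthogonal channel uses to collect the same average, which is the efficiency gap the survey builds on.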
We outline existing gaps in the literature, evincing that current state of the art requires a theoretical framework based on analog and hybrid digital-analog schemes that will boost the evolution of OTA computing. Furthermore, we motivate the development of 3D networks based on OTA schemes, where satellites function as sensors. We discuss its integration within the satellite segment, delineate current challenges and present a variety of use cases that benefit from OTA computing in 3D networks.'\nauthor:\n- \nbibliography:\n- 'refs.bib'\ntitle: |\n Over the Air Computing for\\\n Satellite Networks in 6G\\\n [^1] \n---\n\nIntroduction\n============\n\nNext-generation networks will join communication and computing since a large number of sensors will be deployed to enable" -"---\nabstract: 'We propose a second-order accurate semi-implicit and well-balanced finite volume scheme for the equations of ideal magnetohydrodynamics (MHD) including gravitational source terms. The scheme treats all terms associated with the acoustic pressure implicitly while keeping the remaining terms part of the explicit sub-system. This semi-implicit approach makes the method particularly well suited for problems in the low Mach regime. We combine the semi-implicit scheme with the deviation well-balancing technique and prove that it maintains equilibrium solutions for the magnetohydrostatic case up to rounding errors. In order to preserve the divergence-free property of the magnetic field enforced by the solenoidal constraint, we incorporate a constrained transport method in the semi-implicit framework. Second order of accuracy is achieved by means of a standard spatial reconstruction technique with total variation diminishing (TVD) property, and by an asymptotic preserving (AP) time stepping algorithm built upon the implicit-explicit (IMEX) Runge-Kutta time integrators. Numerical tests in the low Mach regime and near magnetohydrostatic equilibria support the low Mach and well-balanced properties of the numerical method.'\nauthor:\n- Claudius Birke\n- Walter Boscheri\n- Christian Klingenberg\nbibliography:\n- 'biblio.bib'\ndate: 'Received: date / Accepted: date'\ntitle: 'A well-balanced semi-implicit IMEX finite volume scheme for ideal" -"---\nabstract: 'In the context of Industry 4.0, the use of artificial intelligence (AI) and machine learning for anomaly detection is being hampered by high computational requirements and associated environmental effects. This study seeks to address the demands of high-performance machine learning models with environmental sustainability, contributing to the emerging discourse on \u2019Green AI.\u2019 An extensive variety of machine learning algorithms, coupled with various Multilayer Perceptron (MLP) configurations, were meticulously evaluated. Our investigation encapsulated a comprehensive suite of evaluation metrics, comprising Accuracy, Area Under the Curve (AUC), Recall, Precision, F1 Score, Kappa Statistic, Matthews Correlation Coefficient (MCC), and F1 Macro. Simultaneously, the environmental footprint of these models was gauged through considerations of time duration, CO2 equivalent, and energy consumption during the training, cross-validation, and inference phases. Traditional machine learning algorithms, such as Decision Trees and Random Forests, demonstrate robust efficiency and performance. However, superior outcomes were obtained with optimised MLP configurations, albeit with a commensurate increase in resource consumption. 
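Returning to the semi-implicit scheme described earlier on this line: the IMEX idea is to advance the stiff (acoustic-pressure-like) terms implicitly and the remaining terms explicitly. The first-order scalar illustration below is ours; the paper's method is a second-order IMEX Runge-Kutta finite-volume scheme, so this only shows why the split removes the acoustic time-step restriction.

```python
# Sketch: first-order IMEX Euler for u' = F_E(u) + F_I(u) on a linear toy
# problem, treating the stiff part implicitly.
import numpy as np

lam_stiff, lam_soft = -1000.0, -0.5       # implicit / explicit pieces
u, h = 1.0, 0.01
for _ in range(100):
    # u_{n+1} = u_n + h*(lam_soft*u_n + lam_stiff*u_{n+1})
    u = (u + h * lam_soft * u) / (1.0 - h * lam_stiff)
print(u)   # stable although h*|lam_stiff| >> 1
```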
The study incorporated a multi-objective optimisation approach, invoking Pareto optimality principles, to highlight the trade-offs between a model\u2019s performance and its environmental impact. The insights derived underscore the imperative of striking a balance between model performance, complexity, and environmental implications, thus" -"---\nabstract: 'Analyzing quantum many-body problems and elucidating the entangled structure of quantum states is a significant challenge common to a wide range of fields. Recently, a novel approach using machine learning was introduced to address this challenge. The idea is to \u201cembed\u201d nontrivial quantum correlations (quantum entanglement) into artificial neural networks. Through intensive developments, artificial neural network methods are becoming new powerful tools for analyzing quantum many-body problems. Among various artificial neural networks, this topical review focuses on Boltzmann machines and provides an overview of recent developments and applications.'\naddress: 'Department of Applied Physics and Physico-Informatics, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan'\nauthor:\n- Yusuke Nomura\nbibliography:\n- 'main.bib'\ntitle: 'Boltzmann machines and quantum many-body problems'\n---\n\nJanuary 2023\n\nIntroduction {#Sec_introduction}\n============\n\nThe motions of particles in a quantum many-body system are described by the following Schr\u00f6dinger equation $$\\begin{aligned}\n\\label{eq:many-body}\n {\\mathcal H} | \\psi \\rangle = E | \\psi \\rangle \\end{aligned}$$ where ${\\mathcal H}$ is the Hamiltonian, an operator that represents the total energy of the system. The specific form of the Hamiltonian depends on the type of particles constituting the quantum many-body system, how they interact with each other, and the form of the potential.\n\nDespite" -"---\nabstract: 'When an EMRI in a perturbed integrable gravitational field, such as a deformed Kerr black hole, undergoes a prolonged resonance, the frequencies that engage in resonance retain a fixed rational ratio, despite experiencing adiabatic changes due to radiation reaction. In the past, this plateau effect in the evolution of the ratio of frequencies has been investigated by studying the orbital evolution through kludge models, which provide approximate average losses of energy and angular momentum experienced by a test particle in this field. By employing a Newtonian gravitational field that closely resembles a pure Kerr or a perturbed Kerr relativistic field, we demonstrate that the actual adiabatic evolution of an orbit driven by an artificial \u201cself-force\u201d results in more prolonged periods of resonance crossings compared to those obtained by imposing a predetermined rate of energy and angular momentum change throughout the orbital progression.'\nauthor:\n- |\n Areti Eleni [^1]\\\n \\\n- |\n Theocharis A. Apostolatos [^2]\\\n \\\nbibliography:\n- 'sample.bib'\ntitle: |\n Enhanced plateau effect at resonance\\\n in realistic non-integrable EMRIs\n---\n\nIntroduction\n============\n\nExtreme mass ratio inspirals (EMRIs) are prominent sources of gravitational waves (GWs) for the future space-based detector Laser Interferometer Space Antenna (LISA) [@LISA]. EMRIs are" -"---\nabstract: 'We introduce a classical algorithm for sampling the output of shallow, noisy random circuits on two-dimensional qubit arrays.
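For the Boltzmann-machine review above, the workhorse construction is the restricted-Boltzmann-machine wavefunction, in which the hidden spins are traced out analytically. This is the standard Carleo-Troyer form, quoted from the general literature rather than from the record itself:

```latex
% RBM variational ansatz for a spin wavefunction; hidden units h_j = ±1
% sum out in closed form.
\begin{equation}
  \psi(\sigma; a, b, W)
  = \sum_{\{h_j=\pm 1\}}
      e^{\sum_i a_i \sigma_i + \sum_j b_j h_j + \sum_{ij} W_{ij} h_j \sigma_i}
  = e^{\sum_i a_i \sigma_i}
    \prod_j 2\cosh\Bigl(b_j + \sum_i W_{ij}\,\sigma_i\Bigr)
\end{equation}
```

The product over hidden units is what lets the network encode nontrivial correlations between arbitrarily distant spins while remaining cheap to evaluate per configuration.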
The algorithm builds on the recently-proposed \u201cspace-evolving block decimation\u201d (SEBD)\u00a0\\[[Napp [*et al*]{}, PRX [**12**]{}, 021021 (2022)](https://journals.aps.org/prx/abstract/10.1103/PhysRevX.12.021021)\\] and extends it to the case of noisy circuits. SEBD is based on a mapping of 2D unitary circuits to 1D [*monitored*]{} ones, which feature measurements alongside unitary gates; it exploits the presence of a measurement-induced entanglement phase transition to achieve efficient (approximate) sampling below a finite critical depth $T_c$. Our noisy-SEBD algorithm unravels the action of noise into measurements, further lowering entanglement and enabling efficient classical sampling up to larger circuit depths. We analyze a class of physically-relevant noise models (unital qubit channels) within a two-replica statistical mechanics treatment, finding weak measurements to be the optimal (i.e. most disentangling) unraveling. We then locate the noisy-SEBD complexity transition as a function of circuit depth and noise strength in realistic circuit models. As an illustrative example, we show that circuits on heavy-hexagon qubit arrays with noise rates of $\approx 2\%$ per CNOT, based on IBM Quantum processors, can be efficiently sampled up to a depth of 5 iSWAP (or 10 CNOT) gate layers. Our" -"---\nabstract: 'Pretraining with large-scale 3D volumes has the potential to improve the segmentation performance on a target medical image dataset where the training images and annotations are limited. Due to the high cost of acquiring pixel-level segmentation annotations on the large-scale pretraining dataset, pretraining with unannotated images is highly desirable. In this work, we propose a novel self-supervised learning strategy named Volume Fusion (VF) for pretraining 3D segmentation models. It fuses several random patches from a foreground sub-volume to a background sub-volume based on a predefined set of discrete fusion coefficients, and forces the model to predict the fusion coefficient of each voxel, which is formulated as a self-supervised segmentation task without manual annotations. Additionally, we propose a novel network architecture based on parallel convolution and transformer blocks that is suitable to be transferred to different downstream segmentation tasks with various scales of organs and lesions. The proposed model was pretrained with 110k unannotated 3D CT volumes, and experiments with different downstream segmentation targets including head and neck organs, thoracic/abdominal organs, showed that our pretrained model largely outperformed training from scratch and several state-of-the-art self-supervised training methods and segmentation models. The code and pretrained model are available at .'" -"---\nabstract: 'Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the \u00a0 logic-programming-based framework for formalizing and reasoning with Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques for improving legal reasoning using , identifying four distinct options ranging from enhancing fact extraction using deep learning to end-to-end solutions for reasoning with textual legal descriptions.
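The Volume Fusion pretext task described earlier on this line can be stated in a few lines of code: blend foreground patches into a background sub-volume with coefficients drawn from a discrete set, and use the per-voxel coefficient index as a free segmentation label. The patch count, patch size, and coefficient set below are illustrative assumptions of ours.

```python
# Sketch of a Volume-Fusion-style pretext task: the model is trained to map
# the fused volume x to the voxel-wise coefficient index label.
import numpy as np

rng = np.random.default_rng(2)
coeffs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])      # predefined fusion levels

def volume_fusion(fg, bg, n_patches=4, size=8):
    x, label = bg.copy(), np.zeros(bg.shape, dtype=int)
    for _ in range(n_patches):
        k = rng.integers(1, len(coeffs))             # nonzero coefficient index
        z, y, w = (rng.integers(0, s - size) for s in bg.shape)
        sl = (slice(z, z + size), slice(y, y + size), slice(w, w + size))
        x[sl] = coeffs[k] * fg[sl] + (1 - coeffs[k]) * bg[sl]
        label[sl] = k                                # self-supervised target
    return x, label

fused, target = volume_fusion(rng.random((32,) * 3), rng.random((32,) * 3))
```

Because the label is manufactured from the fusion itself, the pretraining consumes only unannotated CT volumes, which is the point of the scheme.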
We assess advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of challenges and potential advancements by neuro-symbolic approaches in legal applications.'\naddress:\n- 'National Institute of Informatics (NII), 2-1-2 Hitotsubashi, Chiyoda City, Tokyo, Japan'\n- 'Imperial College London, Exhibition Rd, South Kensington, London SW7 2BX, United Kingdom'\n- 'Royal Holloway University of London, Egham Hill, Egham TW20 0EX, United Kingdom'\nauthor:\n- Ha Thanh Nguyen\n- Francesca Toni\n- Kostas Stathis\n- Ken" -"---\nabstract: 'The rotations of the polarization angle (PA) with time (energy) can lead to the depolarization of the time-integrated (energy-integrated) polarization. However, we don\u2019t know how and when it will rotate. Here, we consider the magnetic reconnection model to investigate the polarizations, especially the PA rotations of GRB prompt emission. For the large-scale ordered aligned magnetic field configuration, we find that PAs will evolve with time (energy) for off-axis observations. Our studies show that the rotations of the PAs are due to the changes of the \u201cobserved shape\u201d of the emitting region (before averaged). We apply our models to the single pulse burst of GRB 170101A and GRB 170114A with time-resolved PA observations. We find it can interpret the violent PA variation of GRB 170101A. The model could not predict the twice $90^{\\circ}$ PA changes in GRB 170114A. Detailed model should be considered.'\nauthor:\n- 'Hao-Bing Wang'\n- 'Mi-Xiang Lan'\ntitle: 'Rotation of polarization angle in gamma-ray burst prompt phase'\n---\n\nIntroduction {#intro}\n============\n\nGamma-ray bursts (GRBs) are the bursts of high-energy electromagnetic radiation in the universe. GRBs can be divided into long and short bursts with a duration seperation of two seconds. Long bursts originate from the collapse" -"---\nabstract: 'This paper explores the [similarity]{} of the streamwise velocity fluctuations in a channel. In the analysis, we employ a one-dimensional scalar variant of the proper orthogonal decomposition (POD). This approach naturally motivates the introduction of two different levels of [similarity]{} which we will refer to as strong and weak [similarity]{}. Strong [similarity]{} requires that [the two-point correlation, and thus, all POD modes, show Reynolds number similarity]{}, while weak [similarity]{} only requires that the first few POD modes [show similarity]{}. As POD concerns information at more than one location, these [similarities]{} are more general than various similarities found in the literature concerning single-point flow statistics. We examine flows at $Re_\\tau=180$, 540, 1000, and 5200. Strong [similarity]{} is observed in the viscous layer and the wake region, and weak [similarity]{} is found in both the viscous wall region and the outer part of the logarithmic layer. The presence of weak [similarity]{} suggests the existence of an extension to the law of the wall (LoW). We propose such an extension based on the results from the one-dimensional POD analysis. 
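For the POD analysis above, here is a minimal sketch of how such modes are computed in practice: stack mean-subtracted velocity profiles into a snapshot matrix and take its SVD. Comparing the leading left singular vectors across Reynolds numbers is what the strong/weak similarity statements quantify. The synthetic snapshots below stand in for channel-flow data.

```python
# Sketch: one-dimensional scalar POD via the SVD of a snapshot matrix
# (rows: wall-normal points, columns: time snapshots).
import numpy as np

rng = np.random.default_rng(3)
ny, nt = 64, 500
y = np.linspace(0, 1, ny)
snapshots = (np.outer(np.sin(np.pi * y), rng.normal(size=nt))
             + 0.3 * np.outer(np.sin(2 * np.pi * y), rng.normal(size=nt)))

fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(fluct, full_matrices=False)
modes = U[:, :2]                      # leading POD modes phi_1, phi_2
energy = s**2 / np.sum(s**2)          # fraction of variance per mode
```

Weak similarity in the paper's sense requires only that the first few columns of `U` collapse across Reynolds numbers; strong similarity requires the whole two-point correlation, hence all modes, to do so.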
The usefulness of the LoW extension is then assessed by comparing flow reconstructions according to the conventional equilibrium LoW and the extended" -"---\nabstract: '\u00a0is an Be/X-ray binary system ([BeXRB]{}) in the Large Magellanic Cloud (LMC) exhbiting a $\\sim$6s pulse period. Like many such systems the variable X-ray emission is believed to be driven by the underlying behaviour of the mass donor Be star. In this paper we report on X-ray observations of the brightest known outburst from this system which reached a luminosity of ${\\sim8} \\times 10^{37}$\u00a0erg$\\cdot {\\rm s}^{-1}$. These observations are supported by contemporaneous optical photometric observations, the first reported optical spectrum, as well as several years of historical data from OGLE and GAIA. The latter strongly suggest a binary period of 46.1d. All the observational data indicate that \u00a0is a system that spends the vast majority of its time in X-ray quiescence, or even switched off completely. This suggests that occasional observations may easily miss it, and many similar systems, and thereby underestimate the massive star evolution numbers for the LMC.'\nauthor:\n- |\n M.\u00a0J. Coe$^{1}$[^1], J.\u00a0A. Kennea$^{2}$, I. M. Monageng$^{3,4}$, D.A.H. Buckley $^{3}$, A. Udalski$^{5}$ & P.\u00a0A. Evans$^{6}$\\\n $^{1}$Physics & Astronomy, The University of Southampton, SO17 1BJ, UK\\\n $^{2}$Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA" -"---\nabstract: 'Dynamics models learned from visual observations have shown to be effective in various robotic manipulation tasks. One of the key questions for learning such dynamics models is what scene representation to use. Prior works typically assume representation at a fixed dimension or resolution, which may be inefficient for simple tasks and ineffective for more complicated tasks. In this work, we investigate how to learn dynamic and adaptive representations at different levels of abstraction to achieve the optimal trade-off between efficiency and effectiveness. Specifically, we construct dynamic-resolution particle representations of the environment and learn a unified dynamics model using graph neural networks (GNNs) that allows continuous selection of the abstraction level. During test time, the agent can adaptively determine the optimal resolution at each model-predictive control (MPC) step. We evaluate our method in object pile manipulation, a task we commonly encounter in cooking, agriculture, manufacturing, and pharmaceutical applications. Through comprehensive evaluations both in the simulation and the real world, we show that our method achieves significantly better performance than state-of-the-art fixed-resolution baselines at the gathering, sorting, and redistribution of granular object piles made with various instances like coffee beans, almonds, corn, etc.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: |" -"---\nabstract: 'Recently, the NANOGrav, PPTA, EPTA and CPTA collaborations reported compelling evidence of the existence of the Stochastic Gravitational-Wave Background (SGWB). The amplitude and spectrum of this inferred gravitational-wave background align closely with the astrophysical predictions for a signal originating from the population of supermassive black-hole binaries. 
In light of these findings, we explore the possibility to detect dark matter spikes surrounding massive black holes, which could potentially impact the gravitational-wave waveform and modulate the SGWB. We demonstrate that the SMBH binary evolution induced by the combined effects of GW radiation and the dynamical friction of the dark matter spike exhibits detectable manifestations within the nHz frequency range of the SGWB.'\nauthor:\n- 'Zhao-Qiang Shen'\n- 'Guan-Wen Yuan'\n- 'Yi-Ying Wang'\n- 'Yuan-Zhu Wang'\ntitle: Dark Matter Spike surrounding Supermassive Black Holes Binary and the nanohertz Stochastic Gravitational Wave Background\n---\n\nIntroduction\n============\n\nGravitational waves (GWs) were initially predicted by Einstein in General Relativity, but their actual detection through the orbital decay of a binary pulsar system PSR B1913+16 [@PSRB1913+16] took more than half a century. And almost a century had to elapse until their direct measurement (GW150914) from binaries of stellar-mass black holes [@GW150914]. In most galaxies, supermassive" -"---\nabstract: 'The framework of iterated Prisoner\u2019s Dilemma (IPD) is commonly used to study direct reciprocity and cooperation, with a focus on the assessment of the generosity and reciprocal fairness of an IPD strategy in one-on-one settings. In order to understand the persistence and resilience of reciprocal cooperation, here we study long-term population dynamics of IPD strategies using the Moran process where stochastic dynamics of strategy competition can lead to the rise and fall of cooperation. Although prior work has included a handful of typical IPD strategies in the consideration, it remains largely unclear which type of IPD strategies is pivotal in steering the population away from defection and providing an escape hatch for establishing cooperation. We use a network-based approach to analyze and characterize networks of evolutionary pathways that bridge transient episodes of evolution dominated by depressing defection and ultimately catalyze the evolution of reciprocal cooperation in the long run. We group IPD strategies into three types according to their stationary cooperativity with an unconditional cooperator: the good (fully cooperative), the bad (fully exploitive), and the ugly (in between the former two types). We consider the mutation-selection equilibrium with rare mutations and quantify the impact of the presence versus" -"---\nabstract: 'State estimation and control is a well-studied problem in conventional aerial vehicles such as multi-rotors. But multi-rotors, while versatile, are not suitable for all applications. Due to turbulent airflow from ground effects, multi-rotors cannot fly in confined spaces. Flapping wing micro aerial vehicles have gained research interest in recent years due to their lightweight structure and ability to fly in tight spaces. Further, their soft deformable wings also make them relatively safer to fly around humans. This thesis will describe the progress made towards developing state estimation and controls on Northeastern University\u2019s Aerobat, a bio-inspired flapping wing micro aerial vehicle, with the goal of achieving untethered autonomous flight. Aerobat has a total weight of about 40g and an additional payload capacity of 40g, precluding the use of large processors or heavy sensors. 
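The good/bad/ugly grouping in the direct-reciprocity record earlier on this line can be reproduced mechanically for memory-one strategies: build the four-state Markov chain of joint outcomes against an unconditional cooperator and read off the stationary probability of cooperating. The sketch below uses tit-for-tat and always-defect as examples; the state ordering and helper names are our conventions.

```python
# Sketch: stationary cooperativity of a memory-one IPD strategy
# p = (p_CC, p_CD, p_DC, p_DD) against ALLC. States: CC, CD, DC, DD
# as (focal move, co-player move) in the previous round.
import numpy as np

def stationary_coop(p, q=(1.0, 1.0, 1.0, 1.0)):
    q_seen = (q[0], q[2], q[1], q[3])   # co-player sees the roles swapped
    M = np.array([[pi * qi, pi * (1 - qi), (1 - pi) * qi, (1 - pi) * (1 - qi)]
                  for pi, qi in zip(p, q_seen)])
    w, v = np.linalg.eig(M.T)           # left Perron eigenvector of M
    pi_vec = np.real(v[:, np.argmax(np.real(w))])
    pi_vec /= pi_vec.sum()
    return pi_vec[0] + pi_vec[1]        # prob. the focal player cooperates

print(stationary_coop((1, 0, 1, 0)))    # tit-for-tat vs ALLC -> 1.0 ("good")
print(stationary_coop((0, 0, 0, 0)))    # always-defect vs ALLC -> 0.0 ("bad")
```

Strategies whose stationary cooperativity lands strictly between 0 and 1 are the "ugly" class of the record.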
With limited computation resources, this report discusses the challenges in achieving perception on such a platform and the steps taken towards untethered autonomous flight.'\nauthor:\n- Adarsh Salagame\nbibliography:\n- 'bib/references.bib'\nnocite:\n- '[@ramezani_bat_2016; @ramezani_nonlinear_nodate; @ramezani_modeling_2016; @hoff_synergistic_2016; @hoff_reducing_2017; @ramezani_describing_2017; @ramezani_biomimetic_2017; @syed_rousettus_2017-1; @hoff_optimizing_2018; @hoff_trajectory_2019; @ramezani_towards_2020]'\n- '[@sihite_mechanism_2020; @sihite_computational_2020; @sihite_enforcing_2020; @sihite_orientation_2021; @ramezani_aerobat_2022; @sihite_unsteady_2022]'\ntitle: 'Progress Towards Untethered Autonomous Flight of Northeastern University\u2019s Aerobat'\n---\n\nTo my family.\n\nList" -"---\nabstract: 'Automatic query reformulation is a widely utilized technology for enriching user requirements and enhancing the outcomes of code search. It can be conceptualized as a machine translation task, wherein the objective is to rephrase a given query into a more comprehensive alternative. While showing promising results, training such a model typically requires a large parallel corpus of query pairs (, the original query and a reformulated query) that are confidential and unpublished by online code search engines. This restricts its practicality in software development processes. In this paper, we propose [*SSQR*]{}, a self-supervised query reformulation method that does not rely on any parallel query corpus. Inspired by pre-trained models, [*SSQR*]{}treats query reformulation as a masked language modeling task conducted on an extensive unannotated corpus of queries. [*SSQR*]{}extends T5 (a sequence-to-sequence model based on Transformer) with a new pre-training objective named *corrupted query completion* (CQC), which randomly masks words within a complete query and trains T5 to predict the masked content. Subsequently, for a given query to be reformulated, [*SSQR*]{}identifies potential locations for expansion and leverages the pre-trained T5 model to generate appropriate content to fill these gaps. The selection of expansions is then based on the information gain" -"---\nabstract: |\n Recent years have witnessed the fast penetration of Virtual Reality (VR) and Augmented Reality (AR) systems into our daily life, the security and privacy issues of the VR/AR applications have been attracting considerable attention. Most VR/AR systems adopt head-mounted devices (i.e., smart headsets) to interact with users and the devices usually store the users\u2019 private data. Hence, authentication schemes are desired for the head-mounted devices. Traditional knowledge-based authentication schemes for general personal devices have been proved vulnerable to shoulder-surfing attacks, especially considering the headsets may block the sight of the users. Although the robustness of the knowledge-based authentication can be improved by designing complicated secret codes in virtual space, this approach induces a compromise of usability. Another choice is to leverage the users\u2019 biometrics; however, it either relies on highly advanced equipments which may not always be available in commercial headsets or introduce heavy cognitive load to users.\n\n In this paper, we propose a vibration-based authentication scheme, VibHead, for smart headsets. 
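The corrupted-query-completion objective in the SSQR record earlier on this line amounts to T5-style span masking over search queries. A minimal sketch follows, with the mask ratio and sentinel-token format as assumptions on our part rather than the paper's exact recipe.

```python
# Sketch: build a (corrupted query, completion target) training pair by
# masking random words with T5 sentinel tokens.
import random

def corrupt_query(query, mask_ratio=0.3, seed=0):
    random.seed(seed)
    words = query.split()
    n_mask = max(1, int(mask_ratio * len(words)))
    masked = sorted(random.sample(range(len(words)), n_mask))
    target = []
    for k, i in enumerate(masked):
        target.append(f"<extra_id_{k}> {words[i]}")
        words[i] = f"<extra_id_{k}>"
    return " ".join(words), " ".join(target)

src, tgt = corrupt_query("convert json string to java object")
# e.g. src: 'convert json string to <extra_id_0> object', tgt: '<extra_id_0> java'
```

At inference time the same mechanism fills sentinel slots inserted at candidate expansion points, turning the pretraining task directly into query reformulation without a parallel query corpus.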
Since the propagation of vibration signals through human heads presents unique patterns for different individuals, VibHead employs a CNN-based model to classify registered legitimate users based on the features extracted from the vibration signals. We also design" -"---\nabstract: 'Numerically solving partial differential equations typically requires fine discretization to resolve necessary spatiotemporal scales, which can be computationally expensive. Recent advances in deep learning have provided a new approach to solving partial differential equations that involves the use of neural operators. Neural operators are neural network architectures that learn mappings between function spaces and have the capability to solve partial differential equations based on data. This study utilizes a novel neural operator called Hyena, which employs a long convolutional filter that is parameterized by a multilayer perceptron. The Hyena operator enjoys sub-quadratic complexity and uses a state-space model to parameterize a long convolution with a global receptive field. This mechanism enhances the model\u2019s comprehension of the input\u2019s context and enables data-dependent weights for different partial differential equation instances. To measure how effective the layers are in solving partial differential equations, we conduct experiments on the Diffusion-Reaction and Navier-Stokes equations. Our findings indicate that the Hyena neural operator can serve as an efficient and accurate model for learning the solution operators of partial differential equations. The data and code used can be found at: '\nauthor:\n- Saurabh Patil\n- Zijie Li\n- Amir Barati Farimani\nbibliography:\n- 'reference.bib'" -"---\nauthor:\n- 'Matilde Barberi Squarotti,'\n- 'Stefano Camera,'\n- and Roy Maartens\ntitle: 'Radio-optical synergies at high redshift to constrain primordial non-Gaussianity'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe $\mathrm{\Lambda CDM}$ model can successfully explain a wide range of cosmological observations, but still leaves some issues open. In addition to dark matter and dark energy, a phase of exponential expansion \u2013 cosmological inflation \u2013 is required to produce the fluctuations in the density field that seed the large-scale structure of the Universe. A key probe of inflation is primordial non-Gaussianity [@encyclinfl; @bartoloPNGinfl].\n\nThe simplest inflationary models generate primordial fluctuations that follow a Gaussian distribution, but many scenarios of inflation predict departures from Gaussianity, which may be quantified by the local primordial non-Gaussianity parameter, ${f_\mathrm{NL}}$ [@bartoloPNGinfl; @celoriamatarresePNG]. The effects of non-Gaussianity can be probed in various ways, the most prominent being a measurement of a non-vanishing primordial bispectrum [@planck18PNG; @baldaufsenatore; @creminelliPNG; @pajerPNG]. However, the power spectrum of clustering of biased tracers of the underlying matter density field also exhibits a peculiar scale-dependence on the largest scales [@dalalPNG; @matarresePNG]. A measurement of ${f_\mathrm{NL}}\neq 0$ from the power spectrum would allow us to rule out entire classes of inflationary models.\n\nCurrently, the tightest constraints" -"---\nabstract: 'In this paper, we provide a precise mathematical model of crystal-to-crystal response which is used to generate the white image - a necessary compensation model needed to overcome the physical limitations of the PET scanner.
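Returning to the Hyena record earlier on this line: the reason a filter as long as the sequence is viable is that the convolution can be evaluated with FFTs in O(L log L) rather than O(L^2). A self-contained sketch follows, with a random filter standing in for the MLP-parameterized one.

```python
# Sketch: FFT-based long convolution with an implicit sequence-length filter.
import numpy as np

def long_conv(x, k):
    """Causal convolution of signal x with long filter k via FFT."""
    L = len(x)
    n = 2 * L                                 # zero-pad to avoid wrap-around
    y = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)
    return y[:L]

L = 1024
x, k = np.random.randn(L), np.random.randn(L)
assert np.allclose(long_conv(x, k), np.convolve(x, k)[:L])
```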
We present a closed-form solution, as well as several accurate approximations, due to the complexity of the exact mathematical expressions. We prove, experimentally and analytically, that the difference between the best approximations and real crystal-to-crystal response is insignificant. The obtained responses are used to generate the white image compensation model. It can be written as a single closed-form expression, making it easy to implement in known reconstruction methods. The maximum likelihood expectation maximization (MLEM) algorithm is modified and our white image model is integrated into it. The modified MLEM algorithm is not based on the system matrix; rather, it is based on ray-driven projections and back-projections. The compensation model provides all necessary information about the system. Finally, we check our approach on synthetic and real data. For the real-world acquisition, we use the Raytest ClearPET camera for small animals and the NEMA NU 4-2008 phantom. The proposed approach outperforms competitive, non-compensated reconstruction methods.'\nbibliography:\n- 'refer.bib'\n---\n\nTomislav Matuli\u0107\\*, Damir Ser\u0161i\u0107\n\nDepartment of" -"---\nabstract: 'We determine the minimal spectral radii among all skew-reciprocal integer matrices of a fixed even dimension that are primitive or nonnegative and irreducible. In particular, except for dimension six, we show that each such class of matrices realises smaller spectral radii than the corresponding reciprocal class.'\naddress: |\n Department of Mathematics\\\n University of Fribourg\\\n Chemin du Mus\u00e9e 23\\\n 1700 Fribourg\\\n Switzerland\nauthor:\n- Livio Liechti\ntitle: 'On the minimal spectral radii of skew-reciprocal integer matrices'\n---\n\nIntroduction\n============\n\nCuriously, orientation-reversing integer linear dynamical systems can be simpler than orientation-preserving ones in the following sense: among all matrices\u00a0$A\\in\\mathrm{GL}_2(\\mathbb{Z})$ with\u00a0$\\det(A)=-1$, the smallest spectral radius $>1$ is the golden ratio\u00a0$\\varphi$, while among matrices\u00a0$A$ with\u00a0$\\det(A)=1$, the smallest spectral radius\u00a0$>1$ is\u00a0$\\varphi^2$. In this article, we generalise this comparison to reciprocal and skew-reciprocal matrices of any even dimension, under the assumption of either primitivity or nonnegativity and irreducibility.\n\nA matrix is *nonnegative* if all its coefficients are nonnegative. Such a matrix is *primitive* if some power has strictly positive coefficients. A matrix is *irreducible* if it is not conjugate via a permutation matrix to an upper triangular block matrix. We call a matrix *reciprocal* if the set" -"---\nabstract: 'We present here a theory of Majorana excitons, photo-excited conduction electron-valence band hole pairs, interacting with Majorana Fermions in a Kitaev chain of semiconductor quantum dots embedded in a nanowire. Using analytical tools and exact diagonalisation methods we identify the presence of Majorana Zero Modes in the nanowire absorption spectra.'\nauthor:\n- 'Mahan Mohseni, Hassan Allami, Daniel Miravet, David J.
Gayowsky, Marek Korkusinski\\*, Pawel Hawrylak'\nbibliography:\n- 'ref.bib'\ntitle: Majorana excitons in a Kitaev chain of semiconductor quantum dots in a nanowire\n---\n\nIntroduction\n============\n\nThere is currently interest in realizing synthetic topological quantum matter with topologically protected quasiparticles at its edges [@gyongyosi2019survey; @field2018introduction; @campbell2017roads], with potential application in topological quantum computation [@stern2013topological; @nayak2008non; @sarma2015majorana; @das2006topological; @freedman2003topological; @jaworowski2019quantum]. Haldane fractional spin quasiparticles in a spin one chain and Majorana Fermions in topological superconductors are good examples [@haldane1983nonlinear; @kitaev2001unpaired; @jaworowski2019quantum]. To realize Majorana Fermions Kitaev proposed [@kitaev2001unpaired; @kitaev2003fault] a chain of quantum dots on a p-wave superconductor that carries such non-local zero energy Majorana Fermions localized on its two ends, the Majorana zero modes (MZMs). Since then there have been numerous proposals to realize the Kitaev chain [@lutchyn2010majorana; @mourik2012signatures; @s-type_sc_qd_array; @poor_man_mzm; @minimal_mzm; @nadj2014observation; @sun2017detection]. In all cases, experimental confirmation" -"---\nabstract: 'Federated edge learning (FEEL) is a framework for training models in a distributed fashion using edge devices and a server that coordinates the learning process. In FEEL, edge devices periodically transmit model parameters to the server, which aggregates them to generate a global model. To reduce the burden of transmitting high-dimensional data by many edge devices, a broadband analog transmission scheme has been proposed. The devices transmit the parameters simultaneously using a linear analog modulation, which are aggregated by the superposition nature of the wireless medium. However, linear analog modulations incur in an excessive power consumption for edge devices and are not suitable for current digital wireless systems. To overcome this issue, in this paper we propose a digital frequency broadband aggregation. The scheme integrates a Multiple Frequency Shift Keying (MFSK) at the transmitters and a type-based multiple access (TBMA) at the receiver. Using concurrent transmission, the server can recover the type (i.e., a histogram) of the transmitted parameters and compute any aggregation function to generate a shared global model. We provide an extensive analysis of the communication scheme in an additive white Gaussian noise (AWGN) channel and compare it with linear analog modulations. Our experimental results show" -"---\nabstract: 'Over the years, integer linear programs have been employed to model inference in many natural language processing problems. This survey is meant to guide the reader through the process of framing a new inference problem as an instance of an integer linear program and is structured as a collection of recipes. At the end, we will see two worked examples to illustrate the use of these recipes.'\nauthor:\n- |\n Vivek Srikumar\\\n University of Utah\n- |\n Dan Roth\\\n University of Pennsylvania\nbibliography:\n- 'cited.bib'\n- 'ccg.bib'\ntitle: The Integer Linear Programming Inference Cookbook\n---\n\nIntroduction {#sec:introduction}\n============\n\nEffective decision-making requires the use of knowledge. 
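For the Kitaev chain underlying the Majorana-exciton record above, the zero modes can be checked numerically in a few lines: diagonalize the Bogoliubov-de Gennes matrix of a finite open chain at the sweet spot t = Delta, mu = 0 and observe the two (near-)zero eigenvalues. The parameter values and matrix conventions below are our own.

```python
# Sketch: BdG spectrum of a finite Kitaev chain,
# H = sum_i [-mu c_i^+ c_i - t c_i^+ c_{i+1} + Delta c_i c_{i+1} + h.c.].
import numpy as np

def bdg_spectrum(N=40, mu=0.0, t=1.0, delta=1.0):
    h = -mu * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
    d = delta * (np.eye(N, k=1) - np.eye(N, k=-1))   # antisymmetric pairing
    H = np.block([[h, d], [d.T, -h]])                # real symmetric BdG matrix
    return np.sort(np.linalg.eigvalsh(H))

E = bdg_spectrum()
print(E[len(E) // 2 - 1 : len(E) // 2 + 1])   # two eigenvalues pinned at zero
```

Moving mu away from the topological window |mu| < 2t gaps these modes out, which is the transition the quantum-dot proposals aim to realize.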
This has been a clear and long-standing principle in AI research, as reflected, for example, in the seminal early work on knowledge and AI\u2014summarized by\u00a0@brachman1985readings\u2014and the thriving *Knowledge Representation and Reasoning* and the *Uncertainty in AI* communities. However, the message has been somewhat diluted as data-driven statistical learning has become increasingly pervasive across AI. Nevertheless, the idea that reasoning and learning need to work together\u00a0[@khardon1996reasoning; @roth1996learning] and that knowledge representation is a crucial bridge between them has not been lost.\n\nOne area where the link between learning, representation, and reasoning has" -"---\nabstract: 'The back-propagation algorithm has long been the de-facto standard in optimizing weights and biases in neural networks, particularly in cutting-edge deep learning models. Its widespread adoption in fields like natural language processing, computer vision, and remote sensing has revolutionized automation in various tasks. The popularity of back-propagation stems from its ability to achieve outstanding performance in tasks such as classification, detection, and segmentation. Nevertheless, back-propagation is not without its limitations, encompassing sensitivity to initial conditions, vanishing gradients, overfitting, and computational complexity. The recent introduction of a forward-forward algorithm (FFA), which computes local goodness functions to optimize network parameters, alleviates the dependence on substantial computational resources and the constant need for architectural scaling. This study investigates the application of FFA for hyperspectral image classification. Experimental results and a comparative analysis with the traditional back-propagation algorithm are provided. Preliminary results show the potential of FFA and its promise.'\nauthor:\n- Sidike Paheding\n- 'Abel A. Reyes-Angulo'\nbibliography:\n- 'ref.bib'\ntitle: 'Forward-Forward Algorithm for Hyperspectral Image Classification: A Preliminary Study'\n---\n\nIntroduction\n============\n\nDeep Learning (DL) [@lecun2015deep] has been revolutionizing many different fields due to its ability to achieve unprecedented performance when applied to real-world problems, including applications in agriculture" -"---\nauthor:\n- |\n Greg Serapio-Garc\u00eda,$^{1,2,3\dag}$ Mustafa Safdari,$^{1\dag}$ Cl\u00e9ment Crepy,$^{4}$ Luning Sun,$^{3}$\\\n Stephen Fitz,$^{5}$ Peter Romero,$^{3,5}$ Marwa Abdulhai,$^{6}$ Aleksandra Faust,$^{1\ddag}$ Maja Matari\u0107$^{1\ddag}$[^1]\\\n \\\n \\\n \\\n \\\nbibliography:\n- 'complete\\_refs.bib'\ntitle: Personality Traits in Large Language Models\n---\n\nIntroduction\n============\n\nLarge language models (LLMs) have revolutionized natural language processing with their ability to generate human-like text. As LLMs become ubiquitous and are increasingly used by the general public world-wide, the synthetic personality embedded in these models and its potential for misalignment are becoming a topic of importance for responsible AI. Some observed LLM agents have inadvertently manifested undesirable personality profiles[^2], raising serious safety and fairness concerns in AI, computational social science, and psychology research [@hagendorff2023machine]. LLMs are large-capacity machine-learned models that generate text and recently inspired major breakthroughs in natural language processing (NLP) and conversational agents [@wei2022emergent; @OpenAI2023GPT4; @palm].
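The layer-local training rule of the forward-forward algorithm described earlier on this line is compact enough to state directly: each layer maximizes a "goodness" (sum of squared activations) on positive data and minimizes it on negative data, with no backward pass across layers. A sketch follows; the logistic loss form and threshold value are assumptions consistent with common FFA write-ups.

```python
# Sketch: one layer-local forward-forward update.
import torch

def ff_layer_loss(layer, x_pos, x_neg, theta=2.0):
    g_pos = layer(x_pos).pow(2).sum(dim=1)       # goodness of positive batch
    g_neg = layer(x_neg).pow(2).sum(dim=1)       # goodness of negative batch
    # logistic loss: push g_pos above theta and g_neg below it
    return (torch.nn.functional.softplus(theta - g_pos)
            + torch.nn.functional.softplus(g_neg - theta)).mean()

layer = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU())
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
x_pos, x_neg = torch.randn(32, 784), torch.randn(32, 784)
opt.zero_grad()
loss = ff_layer_loss(layer, x_pos, x_neg)
loss.backward()
opt.step()                                       # update is local to the layer
```

Because gradients never cross layer boundaries, memory and compute scale per layer, which is the efficiency argument the record makes for resource-limited settings.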
Vast amounts of human-generated training data [@gpt3] enable LLMs to mimic human characteristics in their outputs and exhibit a form of synthetic personality. [*Personality*]{} encompasses an entity\u2019s characteristic patterns of thought, feeling, and behavior [@allport1937personality; @roberts2022personalityreview]. In humans, personality is formed from biological and social factors, and fundamentally influences daily interactions and preferences [@roberts2007personalityoutcomes]. *Psychometrics*, the science of psychological test construction and validation" -"---\n---\n\n\\\n[**Mohd Ali [^1]**]{}, [**Vardarajan Suneeta[^2]**]{}\\\n[*The Indian Institute of Science Education and Research (IISER),\\\nPune, India - 411008.*]{}\n\n[**Abstract**]{}\n\nRecently, Chandrasekaran, Penington and Witten (CPW) have shown that the generalized entropy of the Schwarzschild black hole at the bifurcation surface equals the entropy of an extended von Neumann algebra of quantum observables in the black hole exterior, in semiclassical Einstein gravity. They also derive a version of the Generalized Second Law. We generalize these results to a static black hole in an arbitrary diffeomorphism invariant theory of gravity. Thus, a version of the Generalized Second Law for an arbitrary diffeomorphism invariant theory of gravity follows.\n\nIntroduction\n============\n\nGeneralized entropy in Einstein gravity was introduced by Bekenstein in order that the second law of thermodynamics be valid near black holes [@JB], [@JB2]. He suggested that the Generalized Second Law (GSL) holds, namely that the generalized entropy increases under future evolution along the black hole horizon. The generalized entropy for a quantum black hole coupled to matter, in the semiclassical $G \\to 0$ limit, was defined to be $$\\label{In1}\nS_{gen} = \\left\\langle \\frac{A}{4\\hbar G} \\right\\rangle + S_{QFT},$$ where $A$ is the black hole horizon area and $S_{QFT}$ is the entanglement entropy of" -"---\nabstract: 'Contemporary distribution networks host diverse dispatchable and non-dispatchable energy resources. The coordinated scheduling of these dispatchable resources with non-dispatchable resources can provide several techno-economic and social benefits. Since battery energy storage systems (BESSs) and microturbine (MT) units are capital intensive, a thorough investigation of their coordinated scheduling on a purely economic basis will be an interesting and challenging task when considering dynamic electricity prices and the uncertainty of non-dispatchable resources and load demand. This paper proposes a new methodology for optimal coordinated scheduling of BESSs and MT units considering existing renewable energy resources and dynamic electricity prices to maximize the daily profit function of the utility by employing a recently explored modified African buffalo optimization (MABO) algorithm. The key attributes of the proposed methodology comprise mean price-based adaptive scheduling embedded within a decision mechanism system (DMS) to maximize arbitrage benefits. The DMS keeps track of system states a priori and thus guides the artificial intelligence based solution technique for sequential optimization. This may also reduce the computational burden of complex real-life engineering optimization problems. Further, a novel concept of fictitious charges is proposed to restrict the counterproductive operational management of BESSs. 
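(A deliberately simple illustration of the arbitrage logic in the BESS abstract above: charge when the price is below the daily mean, discharge when above, subject to capacity and power limits. This is a toy mean-price heuristic, not the paper's MABO/DMS method; the price curve, capacity, and efficiency are invented.)

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 24-hour price curve (arbitrary currency per MWh).
price = 40 + 15 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 3, 24)

cap, power, eff = 10.0, 2.5, 0.9   # MWh capacity, MW limit, round-trip efficiency
soc, profit = 0.0, 0.0
mean_price = price.mean()

for p in price:
    if p < mean_price and soc < cap:        # cheap hour: charge
        e = min(power, cap - soc)
        soc += e
        profit -= p * e                      # pay for charging energy
    elif p > mean_price and soc > 0.0:      # expensive hour: discharge
        e = min(power, soc)
        soc -= e
        profit += p * e * eff                # sell with efficiency loss
print(f"daily arbitrage profit: {profit:.1f}")
```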
The application results investigated and" -"---\nabstract: |\n After three decades of computational multibody system (MBS) dynamics, current research is centered on the development of compact and user-friendly yet computationally efficient formulations for the analysis of complex MBS. The key to this is a holistic geometric approach to the kinematics modeling observing that the general motion of rigid bodies as well as the relative motion due to technical joints are screw motions. Moreover, screw theory provides the geometric setting and Lie group theory the analytic foundation for an intuitive and compact MBS modeling. The inherent frame invariance of this modeling approach gives rise to very efficient recursive $O\\left( n \\right)$ algorithms, for which the so-called \u2019spatial operator algebra\u2019 is one example, and allows for use of readily available geometric data. In this paper three variants for describing the configuration of tree-topology MBS in terms of relative coordinates, i.e. joint variables, are presented: the standard formulation using body-fixed joint frames, a formulation without joint frames, and a formulation without either joint or body-fixed reference frames. This allows for describing the MBS kinematics without introducing joint reference frames and therewith renders the use of restrictive modeling conventions, such as Denavit-Hartenberg parameters, redundant. Four different definitions of" -"---\nabstract: 'We show that if a group $G$ acts geometrically by type-preserving automorphisms on a building, then $G$ satisfies the weak Tits alternative, namely, that $G$ is either virtually abelian or contains a non-abelian free group.'\nauthor:\n- 'Chris Karpinski, Damian Osajda, and Piotr Przytycki'\nbibliography:\n- 'refs2.bib'\ntitle: Weak Tits alternative for uniform lattices in buildings\n---\n\nIntroduction\n============\n\nBuildings were introduced by Jacques Tits in the 1950s as a tool to study semisimple algebraic groups. Since their inception, buildings have found diverse applications throughout mathematics, well beyond their roots in the theory of algebraic groups; see for instance the survey article [@BuildingsApp].\n\nAn ongoing area of interest has been the study of algebraic properties of groups acting on buildings. Buildings can be equipped with the structure of a nonpositively curved metric space (see e.g.\u00a0[@Davis Chapter 18]), so it is believed that groups acting on them in a nice enough manner exhibit a property shared by many \u2018non-positively curved\u2019 groups: the *Tits alternative*. The Tits alternative is a dichotomy for groups and their subgroups, first studied by Tits in [@Tits1], where it was shown that every finitely generated linear group is either virtually solvable or contains" -"---\nabstract: 'Multimodal Sarcasm Explanation (MuSE) is a new yet challenging task, which aims to generate a natural language sentence for a multimodal social post (an image as well as its caption) to explain why it contains sarcasm. Although the existing pioneering study has achieved great success with the BART backbone, it overlooks the gap between the visual feature space and the decoder semantic space, the object-level metadata of the image, as well as the potential external knowledge. To address these limitations, in this work, we propose a novel mulTi-source sEmantic grAph-based Multimodal sarcasm explanation scheme, named TEAM. 
In particular, TEAM extracts the object-level semantic meta-data instead of the traditional global visual features from the input image. Meanwhile, TEAM resorts to ConceptNet to obtain the related external knowledge concepts for the input text and the extracted object meta-data. Thereafter, TEAM introduces a multi-source semantic graph that comprehensively characterizes the multi-source ([*i.e.,* ]{}caption, object meta-data, external knowledge) semantic relations to facilitate the sarcasm reasoning. Extensive experiments on the publicly released dataset MORE verify the superiority of our model over cutting-edge methods.'\nauthor:\n- |\n Liqiang Jing$^1$, [**Xuemeng Song**]{}$^1$[^1], [**Kun Ouyang**]{}$^1$, [**Mengzhao Jia**]{}$^1$, [**Liqiang Nie**]{}$^2$\\\n $^1$Shandong University\\\n $^2$Harbin Institute of Technology (Shenzhen)\\" -"---\nabstract: 'This paper tackles the problem of object counting in images. Existing approaches rely on extensive training data with point annotations for each object, making data collection labor-intensive and time-consuming. To overcome this, we propose a training-free object counter that treats the counting task as a segmentation problem. Our approach leverages the Segment Anything Model (SAM), known for its high-quality masks and zero-shot segmentation capability. However, the vanilla mask generation method of SAM lacks class-specific information in the masks, resulting in inferior counting accuracy. To overcome this limitation, we introduce a prior-guided mask generation method that incorporates three types of priors into the segmentation process, enhancing efficiency and accuracy. Additionally, we tackle the issue of counting objects specified through text by proposing a two-stage approach that combines reference object selection and prior-guided mask generation. Extensive experiments on standard datasets demonstrate the competitive performance of our training-free counter compared to learning-based approaches. This paper presents a promising solution for counting objects in various scenarios without the need for extensive data collection and counting-specific training. Code is available at .'\nauthor:\n- |\n Zenglin Shi ^1^, Ying Sun^1,\\ 2^, Mengmi Zhang^1,\\ 2,\\ 3^\\\n ^1^ I2R, Agency for Science, Technology and Research," -"---\nabstract: 'We have investigated the structure of hydrogen-intercalated quasi-free-standing monolayer graphene (QFMLG) grown on 6H-SiC(0001) by employing total-reflection high-energy positron diffraction (TRHEPD). At least nine diffraction spots of the zeroth order Laue zone were resolved along and three along , which are assigned to graphene, SiC and higher order spots from multiple diffraction on both lattices. We further performed rocking curve analysis based on the full dynamical diffraction theory to precisely determine the spacing between QFMLG and the SiC substrate. 
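(A schematic of the counting-as-segmentation recipe from the training-free counter abstract above: generate candidate masks, embed each region, and count only the candidates similar to a reference exemplar. The random-projection "encoder", synthetic patches, and similarity threshold below stand in for SAM features and the paper's priors, which are not reproduced here.)

```python
import numpy as np

rng = np.random.default_rng(3)
proj = rng.normal(0, 1, (64, 8))             # stand-in visual encoder (fixed)

def embed(patch):
    v = patch.flatten() @ proj               # 8x8 patch -> 8-dim feature
    return v / (np.linalg.norm(v) + 1e-9)

# Candidate regions from a class-agnostic mask generator (here: random 8x8
# patches, twelve of which resemble the exemplar and twenty of which do not).
exemplar = rng.normal(1.0, 0.2, (8, 8))
objects = [exemplar + rng.normal(0, 0.05, (8, 8)) for _ in range(12)]
clutter = [rng.normal(-1.0, 0.2, (8, 8)) for _ in range(20)]
candidates = objects + clutter

ref = embed(exemplar)
tau = 0.8                                    # cosine-similarity threshold (assumed)
count = sum(1 for c in candidates if float(embed(c) @ ref) > tau)
print(f"estimated count: {count} (ground truth 12)")
```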
Our study yields a spacing of that is in excellent agreement with the results from density-functional theory (DFT) calculations published previously.'\nauthor:\n- Matthias Dodenh\u00f6ft\n- Izumi Mochizuki\n- Ken Wada\n- Toshio Hyodo\n- Peter Richter\n- Philip Sch\u00e4dlich\n- Thomas Seyller\n- Christoph Hugenschmidt\nbibliography:\n- 'bibliography\\_TRHEPD\\_QFMLG\\_paper.bib'\ntitle: 'Determination of the Spacing Between Hydrogen-Intercalated Quasi-Free-Standing Monolayer Graphene and 6H-SiC(0001) Using Total-Reflection High-Energy Positron Diffraction'\n---\n\n\\[sec:Intro\\]Introduction\n=========================\n\nGraphene has been extensively studied due to its exceptional properties, such as extremely high thermal conductivity [@Bal08] and mechanical strength [@Lee08; @Cao20], as well as massless charge carriers with unconventional behavior in tunneling, confinement or magnetotransport . Among different approaches to produce large-area graphene on an industrial scale, its synthesis" -"---\nabstract: 'We analyze the spectral properties of a particular class of unbounded open sets. These are made of a central bounded \u201ccore\u201d, with finitely many unbounded tubes attached to it. We adopt an elementary and purely variational point of view, studying the compactness (or the defect of compactness) of level sets of the relevant constrained Dirichlet integral. As a byproduct of our argument, we also get exponential decay at infinity of variational eigenfunctions. Our analysis includes as a particular case a planar set (sometimes called \u201cbookcover\u201d), already encountered in the literature on curved quantum waveguides. J. Hersch suggested that this set could provide the sharp constant in the [*Makai-Hayman inequality*]{} for the bottom of the spectrum of the Dirichlet-Laplacian of planar simply connected sets. We disprove this fact, by means of a singular perturbation technique.'\naddress:\n- 'Dipartimento di Scienze Matematiche, Fisiche e Informatiche Universit\u00e0 di Parma Parco Area delle Scienze 53/a, Campus, 43124 Parma, Italy'\n- 'Dipartimento di Matematica e Informatica Universit\u00e0 degli Studi di Ferrara Via Machiavelli 35, 44121 Ferrara, Italy'\n- 'Dipartimento di Matematica Universit\u00e0 di Pisa Largo Bruno Pontecorvo 5, 56127 Pisa, Italy'\nauthor:\n- Francesca Bianchi\n- Lorenzo Brasco\n- Roberto Ognibene\ntitle: On" -"---\nauthor:\n- \n- \n- \n- \n- \n- \nbibliography:\n- 'bibliography.bib'\ntitle: 'Surgical Phase and Instrument Recognition: How to identify appropriate Dataset Splits'\n---\n\nIntroduction\n============\n\nTechnologies that enable next-generation context-aware systems in the operating room are currently intensively researched in the domain of surgical workflow recognition [@MaierHein2017]. Recent studies that apply machine learning algorithms to this task have shown the most promising results [@Garrow2021]. To further support advances in this area, academic machine learning competitions are hosted regularly [@MaierHein2021; @Wagner2023]. However, despite the progress in surgical workflow recognition, the developers of machine learning algorithms are faced with several challenges that result from the heterogeneous nature and complexity of surgical workflows, and the temporal correlation of sensor data.\n\nSpecifically, one of the major challenges of surgical workflow data lies in the unequal distribution of classes (i.e., surgical phases), which is commonly referred to as data imbalance in the machine learning literature. 
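(One standard mitigation for the phase imbalance just described is to reweight the training loss by inverse class frequency; a minimal sketch with a made-up phase distribution follows. This is a generic technique, not a claim about what these competitions or their participants specifically do.)

```python
import numpy as np

# Hypothetical frame-level phase labels for one surgical video.
phases = ["prep"] * 500 + ["dissection"] * 4000 + ["closure"] * 250
labels, counts = np.unique(phases, return_counts=True)

# Inverse-frequency weights, normalized so the average weight is 1; these
# are passed to a weighted cross-entropy so rare phases contribute comparably.
weights = counts.sum() / (len(labels) * counts)
for name, w in zip(labels, weights):
    print(f"{name:>10s}: weight {w:.2f}")
```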
This issue is further exacerbated by the fact that some phases can occur several times during surgery while other phases may not occur at all. This results in an imbalanced representation of classes in the dataset which in turn hinders the ability of machine learning classifiers to accurately predict" -"---\nabstract: 'Monocular depth estimation is scale-ambiguous, and thus requires scale supervision to produce metric predictions. Even so, the resulting models will be geometry-specific, with learned scales that cannot be directly transferred across domains. Because of that, recent works focus instead on relative depth, eschewing scale in favor of improved up-to-scale zero-shot transfer. In this work we introduce [ZeroDepth]{}, a novel monocular depth estimation framework capable of predicting metric scale for arbitrary test images from different domains and camera parameters. This is achieved by (i) the use of input-level geometric embeddings that enable the network to learn a scale prior over objects; and (ii) decoupling the encoder and decoder stages, via a variational latent representation that is conditioned on single frame information. We evaluated [ZeroDepth]{}targeting both outdoor (KITTI, DDAD, nuScenes) and indoor (NYUv2) benchmarks, and achieved a new state-of-the-art in both settings using the same pre-trained model, outperforming methods that train on in-domain data and require test-time scaling to produce metric estimates. Project page: .'\nauthor:\n- |\n Vitor Guizilini Igor Vasiljevic Dian Chen Rare\u0219 Ambru\u0219 Adrien Gaidon\\\n Toyota Research Institute (TRI), Los Altos, CA\nbibliography:\n- 'references.bib'\ntitle: ' Towards Zero-Shot Scale-Aware Monocular Depth Estimation '\n---\n\nIntroduction\n============" -"---\nabstract: 'In the context of operator valued W$^*$-free probability theory, we study Haar unitaries, R-diagonal elements and circular elements. Several classes of Haar unitaries are differentiated from each other. The term bipolar decomposition is used for the expression of an element as $vx$ where $x$ is self-adjoint and $v$ is a partial isometry, and we study such decompositions of operator valued R-diagonal and circular elements that are free, meaning that $v$ and $x$ are $*$-free from each other. In particular, we prove, when $B={{\\mathbf C}}^2$, that if a $B$-valued circular element has a free bipolar decomposition with $v$ unitary, then it has one where $v$ normalizes $B$.'\naddress:\n- 'Ken Dykema, Department of Mathematics, Texas A&M University, College Station, TX 77843-3368, USA.'\n- John Griffin\nauthor:\n- Ken Dykema\n- John Griffin\ndate: 29 June 2023\ntitle: 'On operator valued Haar unitaries and bipolar decompositions of R-diagonal elements'\n---\n\n[^1]\n\nIntroduction\n============\n\nConsider a tracial W$^*$-noncommutative probability space $(A,\\tau)$, namely a von Neumann algebra $A$, with a normal faithful tracial state $\\tau$. Voiculescu\u2019s circular operator, introduced in\u00a0[[@V91]]{}, arises naturally in free probability theory. Voiculescu proved\u00a0[[@V-paper]]{} that a circular operator has polar decomposition $z=u|z|$, where $u$ is a" -"---\nabstract: 'Opacity is a notion that describes an eavesdropper\u2019s inability to estimate a system\u2019s \u2018secret\u2019 states by observing the system\u2019s outputs. In this paper, we propose algorithms to compute the minimum sparse perturbation to be added to a system to make its initial states opaque. 
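(To fix intuition for the opacity notion just introduced, consider its simplest linear-systems reading: over a finite horizon the output sequence equals $O_N x_0$ for the observability matrix $O_N$, so a secret initial state stays hidden whenever some non-secret initial state produces the same outputs. The matrices, horizon, and sets below are invented for illustration, and this is a simplified reading rather than the paper's exact formulation.)

```python
import numpy as np

A = np.array([[0.9, 0.2], [0.0, 0.8]])
C = np.array([[1.0, 0.0]])
N = 5

# Finite-horizon observability matrix O_N = [C; CA; ...; CA^{N-1}].
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(N)])

x_secret = np.array([1.0, 2.0])
V = np.array([[1.0], [2.0]])     # non-secret initial states: span of V (assumed)

# The secret state is (approximately) opaque over horizon N if some
# non-secret state V c reproduces its outputs: min_c ||O (x_s - V c)|| = 0.
c, *_ = np.linalg.lstsq(O @ V, O @ x_secret, rcond=None)
residual = np.linalg.norm(O @ x_secret - O @ V @ c)
print("opaque" if residual < 1e-9 else f"distinguishable (residual {residual:.3f})")
```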
For these perturbations, we consider two sparsity constraints - structured and affine. We develop an algorithm to compute the global minimum-norm perturbation for the structured case. For the affine case, we use the global minimum solution of the structured case as initial point to compute a local minimum. Empirically, this local minimum is very close to the global minimum. We demonstrate our results via a running example.'\nauthor:\n- 'Varkey M. John and Vaibhav Katewa[^1]'\ntitle: '**Minimum-norm Sparse Perturbations for Opacity in Linear Systems** '\n---\n\nIntroduction\n============\n\nPrivacy in Cyber-Physical Systems (CPS) has attracted significant interest in recent years, particularly due to increasing connectivity and computation capability in embedded devices. Recent works on privacy in CPS have explored ideas on opacity, differential privacy and information-theoretic security, among others [@DiffPriv_ACC_2016; @InfPriv_ARC_2019; @OpacityLinearSystems_IEEETAC_2020].\n\nOpacity, in particular, is a privacy notion which was introduced in the computer science literature, and has later been studied in depth" -"---\nabstract: 'We show that the convolution algebra of smooth, compactly-supported functions on a Lie groupoid is H-unital in the sense of Wodzicki. We also prove H-unitality of infinite order vanishing ideals associated to invariant, closed subsets of the unit space. This furthermore gives H-unitality for the quotients by such ideals, which are noncommutative algebras of Whitney functions. These results lead immediately to excision properties in discrete Hochschild and cyclic homology around invariant, closed subsets. This work extends previous work of the author establishing the Dixmier-Malliavin theorem in this setting.'\nauthor:\n- 'Michael D. Francis'\nbibliography:\n- 'Francis.bib'\ntitle: 'H-Unitality of Smooth Groupoid Algebras'\n---\n\nIntroduction\n============\n\nLet $A$ denote an associative algebra over ${\\mathbb{C}}$. We do not assume $A$ is commutative or unital. Let us say that $A$ has the *weak factorization property* if every $a \\in A$ can be expressed as a finite sum $a = \\sum b_i c_i$, where $b_i,c_i \\in A$. Notice that every unital algebra has the weak factorization property, so this notion is only of interest in the nonunital setting.\n\nRecall that, given a Lie group $G$ equipped with Haar measure, the space $C_c^\\infty(G)$ of smooth, compactly-supported functions on $G$ becomes an algebra with" -"---\nabstract: 'The K\u00e4hler-Dirac fermion, recognized as an elegant geometric approach, offers an alternative to traditional representations of relativistic fermions. Recent studies have demonstrated that symmetric mass generation (SMG) can precisely occur with two copies of K\u00e4hler-Dirac fermions across any spacetime dimensions. This conclusion stems from the study of anomaly cancellation within the fermion system. Our research provides an alternative understanding of this phenomenon from a condensed matter perspective, by associating the interacting K\u00e4hler-Dirac fermion with the boundary of bosonic symmetry-protected topological (SPT) phases. We show that the low-energy bosonic fluctuations in a single copy of the K\u00e4hler-Dirac fermion can be mapped to the boundary modes of a $\\mathbb{Z}_2$-classified bosonic SPT state, protected by an inversion symmetry universally across all dimensions. This implies that two copies of K\u00e4hler-Dirac fermions can always undergo SMG through interactions mediated by these bosonic modes. 
This picture aids in systematically designing SMG interactions for K\u00e4hler-Dirac fermions in any dimension. We present the exact lattice Hamiltonian of these interactions and validate their efficacy in driving SMG.'\nauthor:\n- Yuxuan Guo\n- 'Yi-Zhuang You'\nbibliography:\n- 'main.bib'\ntitle: 'Symmetric Mass Generation of K\u00e4hler-Dirac Fermions from the Perspective of Symmetry-Protected Topological Phases '\n---\n\nIntroduction\n============\n\nSymmetric mass" -"---\nabstract: 'Photoinjectors and Free Electron Lasers (FEL) are amongst the most advanced systems in accelerator physics and have consistently pushed the boundaries of emittance and x-ray peak power. In this paper, laser shaping at the cathode is proposed to further lower the emittance and reduce electron beam tails, which would result in brighter x-ray production. Using dispersion controlled nonlinear shaping (DCNS), laser pulses and beam dynamics were simulated in LCLS-II. The photoinjector emittance was optimized and the resulting e-beam profiles were then simulated and optimized in the linac. Finally, the expected FEL performance is estimated and compared to the current technology: Gaussian laser pulses on the cathode. The e-beams produced by DCNS pulses show a potential for 35% increase in x-ray power per pulse during SASE when compared to the standard Gaussian laser pulses.'\nauthor:\n- Nicole Neveu\n- Randy Lemons\n- Joseph Duris\n- Jingyi Tang\n- Yuantao Ding\n- Agostino Marinelli\n- Sergio Carbajo\nbibliography:\n- 'frontiers.bib'\ntitle: 'Nonlinearly Shaped Pulses in Photoinjectors and Free-Electron Lasers'\n---\n\n[^1]\n\n\\[sec:intro\\]Introduction\n=========================\n\nAccelerator performance and optimization is the foundation of advances in state-of-the-art ultrafast and fundamental space-time resolution instrumentation. Advances in light sources, emerging medical, industrial, and ultrafast electron" -"---\nabstract: 'Various pulsar timing array (PTA) experiments (NANOGrav, EPTA, PPTA, CPTA, including data from InPTA) very recently reported evidence for excess red common-spectrum signals in their latest datasets, with inter-pulsar correlations following the Hellings-Downs pattern, pointing to a stochastic gravitational wave background (SGWB) origin. Focusing for concreteness on the NANOGrav signal (given that all signals are in good agreement between each other), I inspect whether it supports an inflationary SGWB explanation, finding that such an interpretation calls for an extremely blue tensor spectrum, with spectral index $n_T \\simeq 1.8 \\pm 0.3$, while Big Bang Nucleosynthesis limits require a very low reheating scale, $T_{\\rm rh} \\lesssim 10\\,{\\rm GeV}$. While not impossible, an inflationary origin for the PTA signal is barely tenable: within well-motivated inflationary models it is hard to achieve such a blue tilt, whereas models who do tend to predict sizeable non-Gaussianities, excluded by observations. Intriguingly, ekpyrotic models naturally predict a SGWB with spectral index $n_T=2$, although with an amplitude too suppressed to be able to explain the signal detected by PTA experiments. 
Finally, I provide explicit expressions for a bivariate Gaussian approximation to the joint posterior distribution for the intrinsic-noise amplitude and spectral index of the NANOGrav signal," -"---\nabstract: 'We present CausalVLR (Causal Visual-Linguistic Reasoning), an open-source toolbox containing a rich set of state-of-the-art causal relation discovery and causal inference methods for various visual-linguistic reasoning tasks, such as VQA, image/video captioning, medical report generation, model generalization and robustness, etc. These methods have been included in the toolbox with PyTorch implementations on NVIDIA computing systems. It not only includes training and inference code, but also provides model weights. We believe this toolbox is by far the most complete visual-linguistic causal reasoning toolbox. We hope that the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to re-implement existing methods and develop their own new causal reasoning methods. Code and models are available at . The project is under active development by HCP-Lab[^1]\u2019s contributors and we will keep this document updated.'\nauthor:\n- |\n Yang Liu\\\n Sun Yat-sen University\\\n `liuy856@mail.sysu.edu.cn`\\\n Weixing Chen\\\n Sun Yat-sen University\\\n `chen867820261@gmail.com`\\\n Guanbin Li\\\n Sun Yat-sen University\\\n `liguanbin@mail.sysu.edu.cn`\\\n Liang Lin\\\n Sun Yat-sen University\\\n `linliang@ieee.org`\\\nbibliography:\n- 'refs.bib'\ntitle: 'CausalVLR: A Toolbox and Benchmark for Visual-Linguistic Causal Reasoning'\n---\n\nIntroduction\n============\n\nThe emergence of vast amounts of heterogeneous multi-modal data, including images [@he2016deep; @liu2016combining], videos [@liu2018transferable; @liu2018global; @liu2018hierarchically; @liu2022tcgl; @yan2023skeletonmae], languages" -"---\nbibliography:\n- 'refs.bib'\ntitle: |\n Interfaces and Quantum Algebras, II:\\\n Cigar Partition Function\n---\n\nMykola Dedushenko and Nikita Nekrasov\n\nSimons Center for Geometry and Physics, Stony Brook University, Stony Brook, NY 11794-3636, USA\n\nAbstract: The supersymmetric cigar (half-)index or cigar partition function of 3d $\\mathcal{N}=2$ gauge theories contains a wealth of information. Physically, it captures the spectrum of BPS states, the non-perturbative corrections to various partition functions, the effective twisted superpotential and the data of supersymmetric vacua. Mathematically, it defines the K-theoretic Vertex counting vortices/quasimaps, and connects to quantum K-theory, as well as elliptic cohomology and stable envelopes. We explore these topics from the physics standpoint, systematically developing the foundations and explaining various mathematical properties using the quantum field theory machinery.\n\nIntroduction\n============\n\nQuantum field theory is a rich source of mathematical problems, including formulating its own principles. It is believed that whatever quantum field theory is, it is probably defined on a variety of geometric backgrounds, both real and complex, contains its own deformation theory, and is decorated with a plethora of algebraic structures. 
A class of quantum field theories admits finite dimensional approximations, or solvable sectors," -"---\nabstract: 'Uncertainty is an important and fundamental concept in physics education. Students are often first exposed to uncertainty in introductory labs, expand their knowledge across lab courses, and then are introduced to quantum mechanical uncertainty in upper-division courses. This study is part of a larger project evaluating student thinking about uncertainty across these contexts. In this research, we investigate advanced physics students\u2019 thinking about uncertainty by asking them conceptual questions about how a hypothetical distribution of measurements would change if \u2018more\u2019 or \u2018better\u2019 data were collected in four different experimental scenarios. The scenarios include both classical and quantum experiments, as well as experiments that theoretically result in an expected single value or an expected distribution. This investigation is motivated by our goal of finding insights into students\u2019 potential point- and set-like thinking about uncertainty and of shining light on the limitations of those binary paradigms.'\nauthor:\n- Andy Schang\n- Matthew Dew\n- 'Emily M. Stump'\n- 'N. G. Holmes'\n- Gina Passante\nbibliography:\n- 'references.bib'\n- 'ref2.bib'\ntitle: 'More or better data: A new perspective on student reasoning about measurement uncertainty'\n---\n\nIntroduction\n============\n\nThe concept of uncertainty is a fundamental aspect of physics\u00a0[@heron_phys21_2016], particularly in undergraduate" -"---\nabstract: 'Image super-resolution research has recently been dominated by transformer models, which need higher computational resources than CNNs due to the quadratic complexity of self-attention. We propose a new neural network \u2013 WaveMixSR \u2013 for image super-resolution based on the WaveMix architecture, which uses a 2D-discrete wavelet transform for spatial token-mixing. Unlike transformer-based models, WaveMixSR does not unroll the image as a sequence of pixels/patches. It uses the inductive bias of convolutions along with the lossless token-mixing property of the wavelet transform to achieve higher performance while requiring fewer resources and less training data. We compare the performance of our network with other state-of-the-art methods for image super-resolution. Our experiments show that WaveMixSR achieves competitive performance on all datasets and reaches state-of-the-art performance on the BSD100 dataset on multiple super-resolution tasks. Our model is able to achieve this performance using less training data and fewer computational resources while maintaining high parameter efficiency compared to current state-of-the-art models.'\nauthor:\n- |\n Pranav Jeevan, Akella Srinidhi, Pasunuri Prathiba, Amit Sethi\\\n Department of Electrical Engineering\\\n Indian Institute of Technology Bombay\\\n Mumbai, India\\\n `{194070025, 213079003, asethi }@iitb.ac.in`\\\nbibliography:\n- 'references.bib'\ntitle: 'WaveMixSR: A Resource-efficient Neural Network for Image Super-resolution '\n---\n\nIntroduction {#sec:intro}\n============\n\n![Comparison of PSNR and" -"---\nabstract: 'Considering a ($1+1$)-dimensional fluid in the presence of a gravitational trace anomaly, as an effective description of a higher-dimensional fluid, the hydrodynamics is discussed through a first-order thermodynamic description. 
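(To ground the wavelet token-mixing claim in the WaveMixSR abstract above: a single-level 2D discrete wavelet transform splits an H x W map into four half-resolution subbands without losing information, which is what lets inexpensive convolutions mix spatial content. A minimal PyWavelets sketch; the Haar wavelet and array sizes are illustrative choices, not the paper's exact configuration.)

```python
import numpy as np
import pywt

x = np.random.default_rng(5).normal(size=(64, 64))   # one feature map

# Single-level 2D DWT: approximation + horizontal/vertical/diagonal details.
cA, (cH, cV, cD) = pywt.dwt2(x, "haar")
tokens = np.stack([cA, cH, cV, cD])                  # (4, 32, 32) mixed "tokens"
print(tokens.shape)

# Losslessness: the inverse transform reconstructs the input exactly
# (up to floating point), unlike strided convolution or pooling.
x_rec = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(x, x_rec))
```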
Contrary to existing approaches, the fluid velocity is identified through the auxiliary field required to describe the Polyakov action for the effective description of the relevant energy-momentum tensor. The thermodynamic and fluid quantities, on a static black hole spacetime, are calculated both near the horizon and at asymptotic infinity. The Unruh vacuum appears to be a suitable one for the present analysis. Interestingly, we observe that such a fluid description is equally capable of calculating the Hawking flux and thereby establishing a probable close connection with the well-known anomaly cancellation method for the Hawking effect.'\nauthor:\n- 'Abhinove Nagarajan Seenivasan[^1]'\n- 'Sayan Chakrabarti[^2]'\n- 'Bibhas Ranjan Majhi[^3]'\nbibliography:\n- 'bibl.bib'\n---\n\nIntroduction\n============\n\nAnomalies in field theory at the quantum scale are quite ubiquitous. Usually, classical symmetries may break down in the quantum regime, and this leads to anomalies. Breaking of diffeomorphism symmetry yields non-conservation of the energy-momentum tensor (EMT), and similarly, breaking of conformal symmetry generates a trace anomaly. In gravitational theories one encounters such scenarios and therefore either" -"---\nabstract: 'Reliable real-world deployment of reinforcement learning (RL) methods requires a nuanced understanding of their strengths and weaknesses and how they compare to those of humans. Human-machine systems are becoming more prevalent and the design of these systems relies on a task-oriented understanding of both human learning (HL) and RL. Thus, an important line of research is characterizing how the structure of a learning task affects learning performance. While increasingly complex benchmark environments have led to improved RL capabilities, such environments are difficult to use for the dedicated study of task structure. To address this challenge we present a learning environment built to support rigorous study of the impact of task structure on HL and RL. We demonstrate the environment\u2019s utility for such study through example experiments in task structure that show performance differences between humans and RL algorithms.'\nauthor:\n- |\n Eric Pulick$^{1*}$, Vladimir Menkov$^2$\\\n **Yonatan Mintz$^1$, Paul Kantor$^1$, Vicki M.\u00a0Bier$^1$**\\\n $^1$University of Wisconsin - Madison, $^2$Rutgers University\\\n {pulick, ymintz, pkantor, vmbier}@wisc.edu, vmenkov@gmail.com\nbibliography:\n- 'gohr\\_2023.bib'\ntitle: Comparing Reinforcement Learning and Human Learning using the Game of Hidden Rules\n---\n\nIntroduction\n============\n\nReinforcement learning (RL) [@sutton_reinforcement_2018] benchmarks often come directly from human benchmarks (e.g., games) or are
Finally, we validate the resulting models by running a k-fold cross-validation on the data collected during physical human subject experiments. We also present performance results of the model using data from a similar simulated emergency evacuation experiment, demonstrating that these models can serve as a tool to predict evacuee behavior in novel evacuation simulations.'\nauthor:\n- |\n Mollik Nayyar$^{1}$, Ghanghoon Paik$^{2}$, Zhenyuan Yuan$^{3}$, Tongjia Zheng$^{4}$,\\\n Minghui Zhu$^{5}$, Hai Lin$^{6}$ and Alan R. Wagner$^{7}$[^1][^2][^3][^4][^5][^6][^7]\nbibliography:\n- 'root.bib'\ntitle: '**Learning Evacuee Models from Robot-Guided Emergency Evacuation Experiments** '\n---\n\nINTRODUCTION" -"---\nabstract: 'The NL2SQL task involves parsing natural language statements into SQL queries. While most state-of-the-art methods treat NL2SQL as a slot-filling task and use feature representation learning techniques, they overlook explicit correlation features between the SELECT and WHERE clauses and implicit correlation features between sub-tasks within a single clause. To address this issue, we propose the Clause Feature Correlation Decoupling and Coupling (CFCDC) model, which uses a feature representation decoupling method to separate the SELECT and WHERE clauses at the parameter level. Next, we introduce a multi-task learning architecture to decouple implicit correlation feature representation between different SQL tasks in a specific clause. Moreover, we present an improved feature representation coupling module to integrate the decoupled tasks in the SELECT and WHERE clauses and predict the final SQL query. Our proposed CFCDC model demonstrates excellent performance on the WikiSQL dataset, with significant improvements in logic precision and execution accuracy. The source code for the model will be publicly available on GitHub[^1].'\nauthor:\n- Chenduo Hao\n- Xu Zhang\n- Chuanbao Gao\n- 'Deyu Zhou[^2]'\nbibliography:\n- 'conference.bib'\ntitle: Feature Representation Learning for NL2SQL Generation Based on Coupling and Decoupling\n---\n\nIntroduction\n============\n\nNL2SQL aims to automate the process of" -"---\nabstract: 'We propose a novel value approximation method, namely \u201cEigensubspace Regularized Critic (ERC)\u201d for deep reinforcement learning (RL). ERC is motivated by an analysis of the dynamics of Q-value approximation error in the Temporal-Difference (TD) method, which follows a path defined by the 1-eigensubspace of the transition kernel associated with the Markov Decision Process (MDP). It reveals a fundamental property of TD learning that has remained unused in previous deep RL approaches. In ERC, we propose a regularizer that guides the approximation error towards the 1-eigensubspace, resulting in a more efficient and stable path of value approximation. Moreover, we theoretically prove the convergence of the ERC method. Furthermore, theoretical analysis and experiments demonstrate that ERC effectively reduces the variance of value functions. Among 26 tasks in the DMControl benchmark, ERC outperforms state-of-the-art methods for 20. In addition, it shows significant advantages in Q-value approximation and variance reduction. 
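(One operational reading of the ERC regularizer described above: the 1-eigensubspace is the span of the all-ones vector, so steering the approximation error toward it amounts to penalizing how far per-sample TD errors deviate from their batch mean. The sketch below is that paraphrase, with an assumed coefficient, not the authors' released implementation.)

```python
import torch

def erc_critic_loss(q, q_target, beta=0.05):
    """TD loss plus a penalty on the component of the TD error lying
    outside span{1}, i.e. the batch variance of the per-sample errors."""
    err = q - q_target                       # per-sample approximation error
    td_loss = err.pow(2).mean()
    off_ones = err - err.mean()              # part orthogonal to the 1-direction
    return td_loss + beta * off_ones.pow(2).mean()

q = torch.randn(256, requires_grad=True)     # toy critic outputs
target = torch.randn(256)                    # toy bootstrapped targets
loss = erc_critic_loss(q, target)
loss.backward()
```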
Our code is available at\u00a0.'\nauthor:\n- 'Qiang He$^{\\text{(\\Letter)}}$'\n- Tianyi Zhou\n- Meng Fang\n- Setareh Maghsudi\nbibliography:\n- 'sample.bib'\ntitle: 'Eigensubspace of Temporal-Difference Dynamics and How It Improves Value Approximation in Reinforcement Learning'\n---\n\nIntroduction\n============\n\nIn recent years, deep reinforcement learning (RL), which is built upon the basis" -"---\nabstract: 'We systematically investigate the problem of representing a Grothendieck topos with topological groupoids. We characterise, in model theoretic terms, which open topological groupoids may represent the classifying topos of a theory. Intuitively, this characterises which groupoids of models contain enough information to reconstruct the theory. Our treatment subsumes many previous examples of representing groupoids found in the literature. In addition, we demonstrate that a representing groupoid for a theory remains representing when new isomorphisms are added to the groupoid, yielding a topological parallel to the extant theory of \u00e9tale completeness for localic groupoids.'\nauthor:\n- 'J. L. Wrigley[^1]'\nbibliography:\n- 'biblio.bib'\ntitle: On topological groupoids that represent theories\n---\n\nIntroduction\n============\n\n#### Representation by groups.\n\nFor some theories, there exist models whose automorphism structure is sufficiently rich that we would expect the data of the theory to be somehow recoverable from the group of automorphisms. For example, the rationals with the usual ordering is a conservative, ultrahomogeneous model for the theory of dense linear orders without endpoints. Grothendieck topos theory is a natural language in which to formalise the problem of recoverability since the discipline sits at the intersection of categorical logic and topological algebra.\n\nOn the logical" -"---\nabstract: 'The coset construction is a tool for systematically building low energy effective actions for Nambu-Goldstone modes. This technique is typically used to compute time-ordered correlators appropriate for $S$-matrix computations for systems in their ground state. In this paper, we extend this technique to the Schwinger-Keldysh formalism, which enables one to calculate a wider variety of correlators and applies also to systems in a mixed state. We focus our attention on internal symmetries and demonstrate that, after identifying the appropriate symmetry breaking pattern, Schwinger-Keldysh effective actions for Nambu-Goldstone modes can be constructed using the standard rules of the coset construction. Particular emphasis is placed on the thermal state and ensuring that correlators satisfy the KMS relation. We also discuss explicitly the power counting scheme underlying our effective actions. We comment on the similarities and differences between our approach and others that have previously appeared in the literature. In particular, our prescription does not require the introduction of additional \u201cdiffusive\u201d symmetries and retains the full non-linear structure generated by the coset construction. We conclude with a series of explicit examples, including a computation of the finite-temperature two-point functions of conserved spin currents in non-relativistic paramagnets, antiferromagnets, and ferromagnets. Along the" -"---\nabstract: 'Machine learning has had an enormous impact in many scientific disciplines. Also in the field of low-temperature plasma modeling and simulation it has attracted significant interest within the past years. 
Whereas its application should be carefully assessed in general, many aspects of plasma modeling and simulation have benefited substantially from recent developments within the field of machine learning and data-driven modeling. In this survey, we approach two main objectives: *(a)* We review the state-of-the-art, focusing on approaches to low-temperature plasma modeling and simulation. By dividing our survey into plasma physics, plasma chemistry, plasma-surface interactions, and plasma process control, we aim to extensively discuss relevant examples from literature. *(b)* We provide a perspective of potential advances to plasma science and technology. We specifically elaborate on advances possibly enabled by adaptation from other scientific disciplines. We argue that not only the known unknowns, but also unknown unknowns may be discovered due to the inherent propensity of data-driven methods to spotlight hidden patterns in data.'\nauthor:\n- Jan Trieschmann\n- Luca Vialetto\n- Tobias Gergs\nbibliography:\n- 'references.bib'\ntitle: 'Review: Machine learning for advancing low-temperature plasma modeling and simulation'\n---\n\n[**\\***Jan Trieschmann, ]{}\n\n[2]{}\n\nIntroduction\n============\n\nLow-temperature plasmas (LTPs) consist of" -"---\nabstract: 'An emerging application of Raman spectroscopy is monitoring the state of chemical reactors during biologic drug production. Raman shift intensities scale linearly with the concentrations of chemical species and thus can be used to analytically determine real-time concentrations using non-destructive light irradiation in a label-free manner. Chemometric algorithms are used to interpret Raman spectra produced from complex mixtures of bioreactor contents as a reaction evolves. Finding the optimal algorithm for a specific bioreactor environment is challenging due to the lack of freely available Raman mixture datasets. The RaMix Python package addresses this challenge by enabling the generation of synthetic Raman mixture datasets with controllable noise levels to assess the utility of different chemometric algorithm types for real-time monitoring applications. To demonstrate the capabilities of this package and compare the performance of different chemometric algorithms, 48 datasets of simulated spectra were generated using the RaMix Python package. The four tested algorithms include partial least squares regression (PLS), a simple neural network, a simple convolutional neural network (simple CNN), and a 1D convolutional neural network with a ResNet architecture (ResNet). The performance of the PLS and simple CNN model was found to be comparable, with the PLS algorithm slightly outperforming" -"---\nabstract: 'The activity in the brain cortex remarkably shows a simultaneous presence of robust collective oscillations and neuronal avalanches, where intermittent bursts of pseudo-synchronous spiking are interspersed with long periods of quiescence. The mechanisms allowing for such a coexistence are still a matter of an intensive debate. Here, we demonstrate that avalanche activity patterns can emerge in a rather simple model of an array of diffusively coupled neural oscillators with multiple timescale local dynamics in vicinity of a canard transition. The avalanches coexist with the fully synchronous state where the units perform relaxation oscillations. We show that the mechanism behind the avalanches is based on an inhibitory effect of interactions, which may quench the spiking of units due to an interplay with the maximal canard. 
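(The linear-mixture premise in the Raman abstract above is straightforward to reproduce end to end: synthesize pure-component spectra, mix them with random concentrations plus noise, and regress concentrations with PLS. A compact scikit-learn sketch; the peak positions, noise level, and dataset sizes are invented.)

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
shift = np.linspace(200, 1800, 400)                  # Raman shift axis (cm^-1)

def peak(center, width=25.0):
    return np.exp(-0.5 * ((shift - center) / width) ** 2)

# Three synthetic pure-component spectra built from Gaussian peaks.
pure = np.stack([peak(500) + peak(1200), peak(800), peak(1500) + 0.5 * peak(650)])

conc = rng.uniform(0, 1, (300, 3))                   # 300 mixtures, 3 species
spectra = conc @ pure + rng.normal(0, 0.02, (300, 400))

pls = PLSRegression(n_components=3).fit(spectra[:250], conc[:250])
print(f"held-out R^2: {pls.score(spectra[250:], conc[250:]):.3f}")
```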
The avalanche activity bears certain heralds of criticality, including scale-invariant distributions of event sizes. Furthermore, the system shows an increased sensitivity to perturbations, manifested as critical slowing down and a reduced resilience.'\nauthor:\n- Max Contreras\n- 'Everton S. Medeiros'\n- Anna Zakharova\n- Philipp H\u00f6vel\n- Igor Franovi\u0107\nbibliography:\n- 'fhn.bib'\ntitle: 'Scale-free avalanches in arrays of FitzHugh-Nagumo oscillators'\n---\n\n> Cascading dynamics are a prominent feature of many complex systems, from information" -"---\nabstract: |\n In this paper we investigate the existence of subexponential parameterized algorithms of three fundamental cycle-hitting problems in geometric graph classes. The considered problems, Triangle Hitting (TH), Feedback Vertex Set (FVS), and Odd Cycle Transversal (OCT) ask for the existence in a graph $G$ of a set $X$ of at most $k$ vertices such that $G-X$ is, respectively, triangle-free, acyclic, or bipartite. Such subexponential parameterized algorithms are known to exist in planar and even $H$-minor free graphs from bidimensionality theory \\[Demaine et al., JACM 2005\\], and there is a recent line of work lifting these results to geometric graph classes consisting of intersection of \u201cfat\u201d objects (\\[Grigoriev et al., FOCS 2022\\] and \\[Lokshtanov et al., SODA 2022\\]). In this paper we focus on \u201cthin\u201d objects by considering intersection graphs of segments in the plane with $d$ possible slopes ($d$-DIR graphs) and contact graphs of segments in the plane. Assuming the ETH, we rule out the existence of algorithms:\n\n - solving TH in time $2^{o(n)}$ in [2-DIR]{} graphs; and\n\n - solving TH, FVS, and OCT in time $2^{o(\\sqrt{n})}$ in $K_{2,2}$-free contact-[2-DIR]{} graphs.\n\n These results indicate that additional restrictions are necessary in order to obtain subexponential parameterized" -"---\nabstract: 'We have generalized the well-known statement that the Clifford group is a unitary 3-design into symmetric cases by extending the notion of unitary design. Concretely, we have proven that a symmetric Clifford group is a symmetric unitary 3-design if and only if the symmetry constraint is described by some Pauli subgroup. We have also found a complete and unique construction method of symmetric Clifford groups with simple quantum gates for Pauli symmetries. For the overall understanding, we have also considered physically relevant U(1) and SU(2) symmetry constraints, which cannot be described by a Pauli subgroup, and have proven that the symmetric Clifford group is a symmetric unitary 1-design but not a 2-design under those symmetries. Our findings are numerically verified by computing the frame potentials, which measure the difference in randomness between the uniform ensemble on the symmetric group of interest and the symmetric unitary group. This work will open a new perspective into quantum information processing such as randomized benchmarking, and give a deep understanding to many-body systems such as monitored random circuits.'\nauthor:\n- Yosuke Mitsuhashi\n- Nobuyuki Yoshioka\nbibliography:\n- 'bib.bib'\ntitle: Clifford Group and Unitary Designs under Symmetry\n---\n\n*Introduction*.\u2014 Randomness in quantum systems" -"---\nabstract: 'Despite its broad practical applications such as in fraud prevention, open-set speaker identification (OSI) has received less attention in the speaker recognition community compared to speaker verification (SV). 
OSI deals with determining if a test speech sample belongs to a speaker from a set of pre-enrolled individuals (in-set) or if it is from an out-of-set speaker. In addition to the typical challenges associated with speech variability, OSI is prone to the \u201cfalse-alarm problem\u201d; as the size of the in-set speaker population (a.k.a watchlist) grows, the out-of-set scores become larger, leading to increased false alarm rates. This is in particular challenging for applications in financial institutions and border security where the watchlist size is typically of the order of several thousand speakers. Therefore, it is important to systematically quantify the false-alarm problem, and develop techniques that alleviate the impact of watchlist size on detection performance. Prior studies on this problem are sparse, and lack a common benchmark for systematic evaluations. In this paper, we present the first public benchmark for OSI, developed using the VoxCeleb dataset. We quantify the effect of the watchlist size and speech duration on the watchlist-based speaker detection task using three strong neural network based" -"---\nauthor:\n- 'Samuel Crew,'\n- 'Daniel Zhang,'\n- Boan Zhao\nbibliography:\n- 'higgs.bib'\ntitle: 'Boundaries & Localisation with a Topological Twist'\n---\n\nIntroduction\n============\n\nSupersymmetric indices are powerful tools to study supersymmetric quantum field theories. Localisation techniques allow us to explicitly compute these indices and provide connections between supersymmetric quantum field theories and moduli space geometry. In three dimensions, the Coulomb branch localisation of $\\mathcal{N}=2$ topologically twisted theories on $S^2\\times S^1$ was first considered in [@benini2015topologically] and extended to other closed Riemann surfaces in [@Benini:2016hjo; @Closset_2016]. The study of Higgs branch localisation for 2d $\\mathcal{N}=(2,2)$ theories on $S^2$ was initiated by [@benini2015partition; @Doroud:2012xw] and later extended to $S^3$ partition functions and superconformal indices on $S^2 \\times S^1$ in the works [@benini2014higgs; @fujitsuka2014higgs].\n\nIn this work we study the partial topological twist of 3d $\\mathcal{N}=2$ theories on a spacetime with boundary $HS^2 \\times S^1$, where $HS^2$ is the 2d hemisphere. Our first result is to formulate supersymmetric boundary conditions on $\\partial(HS^2 \\times S^1)=T^2$ for this spacetime and compute the Witten index via both Higgs and Coulomb branch localisation. In particular, we provide a formula for the index with Dirichlet-type boundary conditions for the $\\mathcal{N}=2$ vector multiplet. This provides a UV" -"---\nauthor:\n- Zhengxiang Wang\ntitle: 'Probabilistic Linguistic Knowledge and Token-level Text Augmentation'\n---\n\nIntroduction\n============\n\nData serves as a crucial component in training high-performing and robust machine learning models that can effectively tackle real-world learning tasks. However, data availability is often unpredictable and not guaranteed. In the realm of supervised learning, the development of reliably deployable models typically requires the collection of vast amounts of annotated data, which is affordable only for a select few. In low-resource settings, in particular, the available data may be limited or entirely nonexistent. There are also situations where existing data is imbalanced for specific classes, causing models trained on such data to be easily biased towards classes with abundant training examples. 
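(The watchlist-size effect described in the OSI abstract above can be simulated with impostor scores alone: an out-of-set trial is falsely accepted when its maximum score over the enrolled speakers crosses the threshold, and that maximum grows with watchlist size. A toy Monte Carlo with Gaussian impostor scores, purely for intuition; the threshold and score model are invented.)

```python
import numpy as np

rng = np.random.default_rng(11)
threshold = 3.0                       # fixed detection threshold (assumed)

for n in (10, 100, 1000, 10000):      # watchlist sizes
    # Impostor similarity scores for 5000 out-of-set trials against n speakers.
    scores = rng.normal(0.0, 1.0, (5000, n))
    far = np.mean(scores.max(axis=1) > threshold)
    print(f"watchlist {n:>6d}: false-alarm rate {far:.3f}")
```

At a fixed threshold the false-alarm rate climbs from a fraction of a percent to near certainty as the watchlist grows, which is exactly the scaling problem the benchmark is meant to quantify.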
This can potentially be harmful when the models are deployed. Practical considerations like these have given rise to data augmentation, a widely adopted strategy to mitigate the problems of scarce or imbalanced data. Data augmentation involves applying label-preserving transformations to existing data to generate novel labeled data. This approach has seen considerable success in various fields, such as image and speech recognition [@Simard2003; @alexnet2012; @ko15_interspeech; @Cui2015; @Park2019; @Shorten2019ASO; @Iwana2021].\n\nText augmentation, a subcategory of data augmentation that focuses on augmenting text data," -"---\nabstract: 'Trust is crucial for ensuring the safety, security, and widespread adoption of automated vehicles (AVs), and if trust is lacking, drivers and the public may not be willing to use them. This research seeks to investigate trust profiles in order to create personalized experiences for drivers in AVs. This technique helps in better understanding drivers\u2019 dynamic trust from a persona\u2019s perspective. The study was conducted in a driving simulator where participants were requested to take over control from automated driving in three conditions that included a control condition, a false alarm condition, and a miss condition with eight takeover requests (TORs) in different scenarios. Drivers\u2019 dispositional trust, initial learned trust, dynamic trust, personality, and emotions were measured. We identified three trust profiles (i.e., *believers*, *oscillators*, and *disbelievers*) using a K-means clustering model. In order to validate this model, we built a multinomial logistic regression model based on SHAP explainer that selected the most important features to predict the trust profiles with an F1-score of 0.90 and accuracy of 0.89. We also discussed how different individual factors influenced trust profiles which helped us understand trust dynamics better from a persona\u2019s perspective. Our findings have important implications for designing a" -"---\nabstract: 'Recently, a new class of non-convex optimization problems motivated by the statistical problem of learning an acyclic directed graphical model from data has attracted significant interest. While existing work uses standard first-order optimization schemes to solve this problem, proving the global optimality of such approaches has proven elusive. The difficulty lies in the fact that unlike other non-convex problems in the literature, this problem is not \u201cbenign\u201d, and possesses multiple spurious solutions that standard approaches can easily get trapped in. In this paper, we prove that a simple path-following optimization scheme globally converges to the global minimum of the population loss in the bivariate setting.'\nauthor:\n- '**Chang Deng[^1]**'\n- '**Kevin Bello**'\n- '**Bryon Aragam**'\n- '**Pradeep Ravikumar**'\nbibliography:\n- 'main.bib'\ntitle: '[Global Optimality in Bivariate Gradient-based DAG Learning]{}'\n---\n\nIntroduction\n============\n\nOver the past decade, non-convex optimization has become a major topic of research within the machine learning community, in part due to the successes of training large-scale models with simple first-order methods such as gradient descent\u2014along with their stochastic and accelerated variants\u2014in spite of the non-convexity of the loss function. 
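(A skeletal version of the clustering step in the trust-profile study above: standardize per-participant trust features and partition them into three groups with K-means. The feature columns are invented placeholders for the study's measured trust variables.)

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(13)
# Hypothetical per-participant features: dispositional trust, initial learned
# trust, and mean dynamic trust across the takeover requests.
X = np.column_stack([
    rng.normal(4.0, 1.0, 60),
    rng.normal(3.5, 1.2, 60),
    rng.normal(3.0, 1.5, 60),
])

Xz = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xz)
# Cluster labels play the role of the three profiles
# (believers / oscillators / disbelievers).
print(np.bincount(km.labels_))
```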
A large part of this research has focused on characterizing which problems have *benign* loss landscapes" -"---\nabstract: 'Advances in information technology have increased the availability of time-stamped relational data such as those produced by email exchanges or interaction through social media. Whereas the associated information flows could be aggregated into cross-sectional panels, the temporal ordering of the events frequently contains information that requires new models for the analysis of continuous-time interactions, subject to both endogenous and exogenous influences. The introduction of the *Relational Event Model* (REM) has been a major development that has led to further methodological improvements stimulated by new questions that REMs made possible. In this review, we track the intellectual history of the REM, define its core properties, and discuss why and how it has been considered useful in empirical research. We describe how the demands of novel applications have stimulated methodological, computational, and inferential advancements.'\nauthor:\n- |\n Federica Bianchi, Edoardo Filippi-Mazzola, Alessandro Lomi, and Ernst C. Wit\\\n Universit[\u00e0]{} della Svizzera italiana\\\n Lugano, Switzerland\\\n `federica.bianchi@usi.ch`\\\nbibliography:\n- '00\\_references.bib'\ntitle: Relational Event Modeling\n---\n\nIntroduction {#sec:introduction .unnumbered}\n============\n\nStatistical models for social and other networks are receiving increased attention not only in specialized field journals such as *Network Science* or *Social Networks*, but also in prominent interdisciplinary science journals such as *Science*" -"---\nabstract: 'Model-agnostic feature attributions can provide local insights in complex ML models. If the explanation is correct, a domain expert can validate and trust the model\u2019s decision. However, if it contradicts the expert\u2019s knowledge, related work only corrects irrelevant features to improve the model. To allow for unlimited interaction, in this paper we provide model-agnostic implementations for two popular explanation methods (Occlusion and Shapley values) to enforce entirely different attributions in the complex model. For a particular set of samples, we use the corrected feature attributions to generate extra local data, which is used to retrain the model to have the right explanation for the samples. Through simulated and real data experiments on a variety of models we show how our proposed approach can significantly improve the model\u2019s performance only by augmenting its training dataset based on corrected explanations. Adding our interactive explanations to active learning settings increases the sample efficiency significantly and outperforms existing explanatory interactive strategies. 
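(Of the two attribution methods named in the abstract above, occlusion is the simplest to write down: a feature's attribution is the drop in model output when that feature is replaced by a baseline value. A model-agnostic sketch with a toy linear model, for which the attributions equal w_i * (x_i - baseline_i).)

```python
import numpy as np

def occlusion_attributions(f, x, baseline):
    """Attribution of feature i = f(x) - f(x with feature i occluded)."""
    out = f(x)
    attrs = np.empty_like(x, dtype=float)
    for i in range(x.size):
        x_occ = x.copy()
        x_occ[i] = baseline[i]
        attrs[i] = out - f(x_occ)
    return attrs

# Toy model: a linear score, so the printed attributions are w * (x - baseline).
w = np.array([2.0, -1.0, 0.5])
f = lambda v: float(v @ w)
print(occlusion_attributions(f, np.array([1.0, 1.0, 1.0]), np.zeros(3)))
```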
Additionally we explore how a domain expert can provide feature attributions which are sufficiently correct to improve the model.'\nauthor:\n- 'Joran Michiels [^1]'\n- Maarten De Vos\n- Johan Suykens\nbibliography:\n- 'refs\\_neurips.bib'\ntitle: 'Increasing Performance And Sample Efficiency With" -"---\nauthor:\n- 'Taichi\u00a0Kato, Franz-Josef\u00a0Hambsch$^,$$^,$ Berto\u00a0Monard,$^,$ Rod\u00a0Stubbings'\ntitle: 'ASASSN-22ak: ***La Belle au bois dormant*** in a hydrogen-depleted dwarf nova?'\n---\n\nIntroduction\n============\n\nIn the famous fairy tale *La belle au bois dormant* (the Beauty in the Sleeping Forest or the Sleeping Beauty), a princess was cursed by an evil fairy to sleep for a hundred years before being awakened by a prince [@per1697sleepingbeauty]. This tale produced one of the world most famous ballets composed by Pyotr Tchaikovsky [@tch1889sleepingbeauty][^1]. The similar things appear to have happened in the world of dwarf novae. The giant outburst and subsequent superoutbursts in V3101 Cyg = TCP J21040470$+$4631129 [@tam20v3101cyg; @ham21DNrebv3101cyg] could be a signature of long \u201cdormant\u201d phase before the initial outburst. MASTER OT J030227.28$+$191754.5 [@tam23j0302; @kim23j0302] might be another such example. Here, we report on an instance of ASASSN-22ak, which may be the first similar case in a cataclysmic variable (CV) with an evolved core in the secondary.\n\nASASSN-22ak\n===========\n\nASASSN-22ak was discovered as a dwarf nova by the All-Sky Automated Survey for Supernovae (ASAS-SN: [@ASASSN]) at $g$=15.0 on 2022 January 7.[^2] The object further brightened and reached the peak of $g$=13.2 on 2022 January 8. The object apparently faded" -"---\nabstract: 'We present a comprehensive characterization of the interconnections between single-mode, phase-insensitive Gaussian Bosonic Channels resulting from channel concatenation. This characterization enables us to identify, in the parameter space of these maps, two distinct regions: low-ground and high-ground. In the low-ground region, the information capacities are smaller than a designated reference value, while in the high-ground region, they are provably greater. As a direct consequence, we systematically outline an explicit set of upper bounds for the quantum and private capacity of these maps, which combine known upper bounds and composition rules, improving upon existing results.'\nauthor:\n- Farzad Kianvash\n- Marco Fanizza\n- Vittorio Giovannetti\ntitle: 'Low-ground/High ground capacity regions analysis for Bosonic Gaussian Channels'\n---\n\nIntroduction\n============\n\nThe efficiency of classical communication lines can be expressed using a single, simple formula\u00a0[@shannon1; @shannon2]. However, when it comes to quantum communication lines (quantum channels) that utilize quantum systems as information carriers instead of classical signals\u00a0[@HOL; @BOOK; @WILDE; @BOOK; @VGHOL; @BENSHOR], this simplification no longer holds. Instead, a multitude of different and computationally challenging capacity functionals are required to fully assess the quality of these transmission lines. For instance, the classical capacity of a quantum channel, characterizes the optimal" -"---\nauthor:\n- 'Yihang Zeng$^{1}$'\n- 'Q. Shi$^{1}$'\n- 'A. Okounkova$^{1}$'\n- 'Dihao Sun$^{1}$'\n- 'K. Watanabe$^{2}$'\n- 'T. Taniguchi$^{3}$'\n- 'J. Hone$^{4}$'\n- 'C.R. Dean$^{1}$$^{\\dag}$'\n- 'J.I.A. 
Li$^{5}$$^{\\dag}$'\ntitle: 'Evidence for a Superfluid-to-solid Transition of Bilayer Excitons '\n---\n\n**The low-temperature phase diagram of a Bosonic system is predicted to contain an exotic quantum phase, called a supersolid, that is defined by broken translational symmetry and off-diagonal long-range order \u00a0[@Fisher1989BoseMott; @Penrose1956supersolid; @Andreev1969supersolid; @Leggett1970supersolid; @Meisel1992supersolid]. This unique combination of properties enables a seemingly paradoxical scenario where a bosonic solid exhibits dissipationless mass flow. However, despite decades of extensive efforts, experimental realization of such a supersolid phase remains elusive[@Kim2004He4; @Day2007He4; @Hunt2009He4]. In this work we report experimental observation of a superfluid-to-insulating transition in the bosonic system of spatially indirect excitons in double layer graphene. Utilizing a variety of transport methods to characterize the superfluid-insulator phase boundary as a function of both density and temperature suggests the insulator to be a solid phase driven by repulsive dipole-dipole interactions in the dilute limit. The exciton solid exhibits a unique melting transition, with the high-temperature phase recovering a hallmark transport signature of off-diagonal long-range order, perfect Coulomb drag\u00a0[@Nandi.12; @Eis.14]. The reentrant superfluid-like behaviour" -"---\nabstract: 'The two-time scale nature of SAC, which is an actor-critic algorithm, is characterised by the fact that the critic estimate has not converged for the actor at any given time, but since the critic learns faster than the actor, it ensures eventual consistency between the two. Various strategies have been introduced in literature to learn better gradient estimates to help achieve better convergence. Since gradient estimates depend upon the critic, we posit that improving the critic can provide a better gradient estimate for the actor at each time. Utilizing this, we propose Soft Actor Retrospective Critic (SARC), where we augment the SAC critic loss with another loss term - retrospective loss - leading to faster critic convergence and consequently, better policy gradient estimates for the actor. An existing implementation of SAC can be easily adapted to SARC with minimal modifications. Through extensive experimentation and analysis, we show that SARC provides consistent improvement over SAC on benchmark environments. We plan to open-source the code and all experiment data at .'\nauthor:\n- |\n Sukriti Verma[^1]\\\n Carnegie Mellon University\\\n `sukritiv@andrew.cmu.edu`\\\n Ayush Chopra\\\n MIT Media Lab\\\n `ayushc@mit.edu`\\\n Jayakumar Subramanian\\\n Adobe\\\n `jasubram@adobe.com`\\\n Mausoom Sarkar\\\n Adobe\\\n `msarkar@adobe.com`\\\n Nikaash Puri\\\n Adobe\\\n `nikpuri@adobe.com`\\\n Piyush Gupta\\" -"---\nbibliography:\n- 'library.bib'\n---\n\n[-1.5in]{}[0in]{}\n\n[**** ]{}\\\nS\u00e1ndor Istv\u00e1n Mah\u00f3^1,\\*^, Sergiy Vasylkevych^1^, Nedjeljka \u017dagar^1^,\\\n**[1]{} Meteorological Institute, Center for Earth System Research and Sustainability, Universit\u00e4t Hamburg, Hamburg, Germany\\\nsandor.maho@uni-hamburg.de**\n\nAbstract {#abstract .unnumbered}\n========\n\nThe equatorial mixed Rossby-gravity wave (MRGW) is an important contributor to tropical variability. Its excitation mechanism capable of explaining the observed MRGW variance peak at synoptic scales remains elusive. 
This study investigates wave-mean flow interactions as a generation process for the MRGWs using the barotropic version of the global Transient Inertia-Gravity And Rossby wave dynamics model (TIGAR), which employs Hough harmonics as the basis of spectral expansion, thereby representing MRGWs as prognostic variables. High accuracy numerical simulations manifest that interactions between waves emanating from a tropical heat source and zonal mean jets in the subtropics generate MRGWs with the variance spectra resembling the one observed in the tropical troposphere. Quantification of spectral tendencies associated with the MRGW energy growth underscores the significance of wave-mean flow interactions in comparison to excitation mechanisms driven by external forcing and wave-wave interactions. The MRGW growth and amplitude depend on the asymmetry in the zonal mean flow that may explain not only seasonal variability but also differences between the troposphere and" -"---\nabstract: 'An undisturbed Brownian oscillator may not reach thermal equilibrium with the thermal bath due to the formation of a localized normal mode. The latter may emerge when the spectrum of the thermal bath has a finite upper bound $\\omega_0$ and the oscillator natural frequency exceeds a critical value $\\omega_c$, which depends on the specific form of the bath spectrum. We consider the response of the oscillator with and without a localized mode to the external periodic force with frequency $\\Omega$ lower than $\\omega_0$. The results complement those obtained earlier for the high-frequency response at $\\Omega\\ge \\omega_0$ and require a different mathematical approach. The signature property of the high-frequency response is resonance when the external force frequency $\\Omega$ coincides with the frequency of the localized mode $\\omega_*$. In the low-frequency domain $\\Omega<\\omega_0$ the condition of resonance $\\Omega=\\omega_*$ cannot be met (since $\\omega_*>\\omega_0$). Yet, in the limits $\\omega\\to\\omega_c$ and $\\Omega\\to\\omega_0^-$, the oscillator shows a peculiar quasi-resonance response with an amplitude increasing with time sublinearly.'\nauthor:\n- 'Alex V. Plyukhin'\ntitle: 'Nonergodic Brownian oscillator: Low-frequency response'\n---\n\nIntroduction.\n=============\n\nConsider a dissipative harmonic oscillator, with the mass $m$ and natural frequency $\\omega$, driven by an external ac force $F_{ex}(t)=F_0\\sin (\\Omega t)$," -"---\nabstract: |\n In this paper we consider bilinear sparse forms intimately related to iterated commutators of a rather general class of operators. We establish Bloom weighted estimates for these forms in the full range of exponents, both in the diagonal and off-diagonal cases. As an application, we obtain new Bloom bounds for commutators of (maximal) rough homogeneous singular integrals and the Bochner-Riesz operator at the critical index.\n\n We also raise the question about the sharpness of our estimates. In particular we obtain the surprising fact that even in the case of Calder\u00f3n\u2013Zygmund operators, the previously known quantitative Bloom weighted estimates are not sharp for the second and higher order commutators.\naddress:\n- 'Department of Mathematics, Bar-Ilan University, 5290002 Ramat Gan, Israel'\n- |\n Delft Institute of Applied Mathematics\\\n Delft University of Technology\\\n P.O. 
Box 5031\\\n 2600 GA Delft\\\n The Netherlands\n- |\n Departamento de An\u00e1lisis Matem\u00e1tico y Matem\u00e1tica Aplicada\\\n Universidad Complutense (Spain) & Departamento de Matem\u00e1tica e Instituto de Matem\u00e1tica. Universidad Nacional del Sur - CONICET Argentina\nauthor:\n- 'Andrei K. Lerner'\n- Emiel Lorist\n- Sheldy Ombrosi\nbibliography:\n- 'commutatorbib.bib'\ntitle: Bloom weighted bounds for sparse forms associated to commutators\n---\n\n[^1]\n\nIntroduction\n============\n\nLet ${\\mathcal S}$ be" -"---\nabstract: 'The prodigious growth of digital health data has precipitated a mounting interest in harnessing machine learning methodologies, such as natural language processing (NLP), to scrutinize medical records, clinical notes, and other text-based health information. Although NLP techniques have exhibited substantial potential in augmenting patient care and informing clinical decision-making, data privacy and adherence to regulations persist as critical concerns. Federated learning (FL) emerges as a viable solution, empowering multiple organizations to train machine learning models collaboratively without disseminating raw data. This paper proffers a pragmatic approach to medical NLP by amalgamating FL, NLP models, and the NVFlare framework, developed by NVIDIA. We introduce two exemplary NLP models, the Long-Short Term Memory (LSTM)-based model and Bidirectional Encoder Representations from Transformers (BERT), which have demonstrated exceptional performance in comprehending context and semantics within medical data. This paper encompasses the development of an integrated framework that addresses data privacy and regulatory compliance challenges while maintaining elevated accuracy and performance, incorporating BERT pretraining, and comprehensively substantiating the efficacy of the proposed approach.'\nauthor:\n- |\n Won Joon Yun$^{1}$, Samuel Kim$^{2}$, and Joongheon Kim$^{1}$\\\n \\\nbibliography:\n- 'reference.bib'\ntitle: 'Multi-Site Clinical Federated Learning using Recursive and Attentive Models and NVFlare'\n---\n\nFederated Learning," -"---\nabstract: 'In this research, we reveal the inborn but hitherto ignored properties of quantitative differential phase contrast (qDPC) imaging: the phase transfer function being an edge detection filter. Inspired by this, we highlight the duality of qDPC between optics and pattern recognition, and propose a simple and effective qDPC reconstruction algorithm, termed Pupil-Driven qDPC (pd-qDPC), to facilitate the phase reconstruction quality for the family of qDPC-based phase reconstruction algorithms. We formed a new cost function in which modified ${L_{0}\\text{-norm}}$ was used to represent the pupil-driven edge sparsity, and the qDPC convolution operator is duplicated in the data fidelity term to achieve automatic background removal. Further, we developed the iterative reweighted soft-threshold algorithms based on split Bregman method to solve this modified ${L_{0}\\text{-norm}}$ problem. We tested pd-qDPC on both simulated and experimental data and compare against state-of-the-art (SOTA) methods including ${L_{2}\\text{-norm}}$, total variation regularization (TV-qDPC), isotropic-qDPC, and Retinex qDPC algorithms. Our model is superior in terms of phase reconstruction quality and implementation efficiency, in which it significantly increases the experimental robustness while maintaining the data fidelity. In general, the pd-qDPC enables high-quality qDPC reconstruction without any modification to the optical system. 
It simplifies the system complexity and benefits the qDPC" -"---\nauthor:\n- Daniel Stremmer\n- and Malgorzata Worek\ntitle: 'Associated production of a top-quark pair with two isolated photons at the LHC through NLO in QCD'\n---\n\nIntroduction {#sec:introduction}\n============\n\nThe observation of the $pp \\to t\\bar{t}H$ process at the Large Hadron Collider (LHC) reported by the CMS [@CMS:2018uxb] and ATLAS [@ATLAS:2018mme] collaborations has launched a new endeavour to investigate the tree-level top quark Yukawa coupling $(Y_t)$ and the ${\\cal CP}$ structure of the Higgs boson. One of the most sensitive Higgs-boson decay channels for probing the $pp \\to t\\bar{t}H$ process is $H \\to \\gamma\\gamma$. Despite the small branching ratio the Higgs-boson signal can be extracted in this channel thanks to the excellent photon reconstruction and identification efficiency of the ATLAS and CMS detectors. Even though by probing the interactions between the $H$ boson and electroweak $W/Z$ gauge bosons, CMS and ATLAS have determined that the $H$ boson quantum numbers are consistent with the Standard Model (SM) [@ATLAS:2016ifi; @CMS:2016tad; @ATLAS:2017azn; @ATLAS:2018hxb; @CMS:2019ekd; @CMS:2019jdw], the presence of a pseudoscalar admixture, which introduces a second coupling to the top quark, has not yet been ruled out and is worth investigating. The observation of a non-zero ${\\cal CP}$-odd coupling component would signal" -"---\nauthor:\n- 'Hanjo D. Boekhout'\n- 'Arjan A.J. Blokland'\n- 'Frank W. Takes'\nbibliography:\n- 'bibliography.bib'\ntitle: Early warning signals for predicting cryptomarket vendor success using dark net forum networks\n---\n\nIntroduction {#sect:introduction .unnumbered}\n============\n\nThe dark net, a part of the internet that requires specific software or authorization to access\u00a0[@darkwebhow], hosts a myriad of online fora that are increasingly a hotbed for criminal behavior and radicalisation\u00a0[@nadini2022emergence; @chainanlysis2021]. Dark net fora can, both theoretically and empirically, be split in those functioning as meeting places for the exchange of criminal information and those where criminal goods and services are traded, i.e., criminal marketplaces. These fora and marketplaces can serve up to hundreds of thousands of users. They are often moderated and organized in a professional manner, with cryptocurrencies, such as Bitcoin, serving as currency and are therefore referred to as *cryptomarkets*\u00a0[@martin2014lost; @shortis2020drug]. To efficiently coordinate its activities disrupting these cryptomarkets, law enforcement aims to target key players that are vital to these market\u2019s existence and success\u00a0[@fonhof2018characterizing; @shortis2020drug].\n\nKey players include the administrators and moderators responsible for the existence and proper functioning of the cryptomarket. But also the more successful vendors that are responsible for the majority" -"---\nabstract: 'In a broad class of theories, the accumulation of ultralight dark matter (ULDM) with particles of mass $10^{-22}~\\textrm{eV} < m_{\\phi} < 1~\\textrm{eV}$ leads the to formation of long-lived bound states known as boson stars. When the ULDM exhibits self-interactions, prodigious bursts of energy carried by relativistic bosons are released from collapsing boson stars in bosenova explosions. 
We extensively explore the potential reach of terrestrial and space-based experiments for detecting transient signatures of emitted relativistic bursts of scalar particles, including ULDM coupled to photons, electrons, and gluons, capturing a wide range of motivated theories. For the scenario of relaxion ULDM, we demonstrate that upcoming experiments and technology such as nuclear clocks as well as space-based interferometers will be able to sensitively probe orders of magnitude in the ULDM coupling-mass parameter space, challenging to study otherwise, by detecting signatures of transient bosenova events. Our analysis can be readily extended to different scenarios of relativistic scalar particle emission.'\nauthor:\n- Jason Arakawa\n- Joshua Eby\n- 'Marianna S. Safronova'\n- Volodymyr Takhistov\n- 'Muhammad H. Zaheer'\nbibliography:\n- 'ref.bib'\ntitle: Detection of Bosenovae with Quantum Sensors on Earth and in Space\n---\n\nIntroduction {#sec:introduction}\n============\n\nThe influence of dark matter (DM)," -"---\nabstract: 'Many natural phenomena are intrinsically causal. The discovery of the cause-effect relationships implicit in these processes can help us to understand and describe them more effectively, which boils down to causal discovery about the data and variables that describe them. However, causal discovery is not an easy task. Current methods for this are extremely complex and costly, and their usefulness is strongly compromised in contexts with large amounts of data or where the nature of the variables involved is unknown. As an alternative, this paper presents an original methodology for causal discovery, built on essential aspects of the main theories of causality, in particular probabilistic causality, with many meeting points with the inferential approach of regularity theories and others. Based on this methodology, a non-parametric algorithm is developed for the discovery of causal relationships between binary variables associated to data sets, and the modeling in graphs of the causal networks they describe. This algorithm is applied to gene expression data sets in normal and cancerous prostate tissues, with the aim of discovering cause-effect relationships between gene dysregulations leading to carcinogenesis. The gene characterizations constructed from the causal relationships discovered are compared with another study based on principal component" -"---\nabstract: 'We investigate families of soliton solutions in a spin-orbit coupled Bose-Einstein condensate embedded in an optical lattice, which bifurcate from the nearly flat lowest band. Unlike the conventional gap solitons the obtained solutions have the shape well approximated by a Wannier function (or a few Wannier functions) of the underlying linear Hamiltonian with amplitudes varying along the family and with nearly constant widths. The Wannier solitons (WSs) sharing all symmetries of the system Hamiltonian are found to be stable. Such solutions allow for the construction of Wannier breathers, that can be viewed as nonlinearly coupled one-hump solitons. The breathers are well described by a few-mode model and manifest stable behavior either in an oscillatory regime with balanced average populations or in a self-trapping regime characterized by unbalanced atomic populations of the local potential minima (similarly to the conventional boson Josephson junction), with the frequencies controlled by the inter-atomic interactions.'\nauthor:\n- Chenhui Wang\n- Yongping Zhang\n- 'V. V. 
Konotop'\ntitle: 'Wannier solitons in spin-orbit-coupled Bose-Einstein condensates in optical lattices with a flat-band.'\n---\n\nIntroduction\n============\n\nPeriodic modulation of parameters of a medium, where a wave propagates, introduces artificial dispersion. If the medium is nonlinear the existence of" -"---\nabstract: 'Consider a pair of input distributions which after passing through a Poisson channel become $\\epsilon$-close in total variation. We show that they must necessarily then be $\\epsilon^{0.5+o(1)}$-close after passing through a Gaussian channel as well. In the opposite direction, we show that distributions inducing $\\epsilon$-close outputs over the Gaussian channel must induce $\\epsilon^{1+o(1)}$-close outputs over the Poisson. This quantifies a well-known intuition that \u201csmoothing\u201d induced by Poissonization and Gaussian convolution are similar. As an application, we improve a recent upper bound of Han-Miao-Shen\u20192021 for estimating mixing distribution of a Poisson mixture in Gaussian optimal transport distance from $n^{-0.1 + o(1)}$ to $n^{-0.25 + o(1)}$.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle:\n- 'Comparing Poisson and Gaussian channels (extended)'\n- Comparing Poisson and Gaussian channels\n---\n\n=1\n\nIntroduction\n============\n\nFix three positive parameters $a,\\sigma, \\gamma > 0$ and consider *two channels* with a common input space ${{\\mathcal{X}}}=[0,a]$. The first channel, denoted $\\Gsn_\\sigma$, acts on input $X=x_0$ by outputting $Y_G \\sim {{\\mathcal{N}}}(x_0,\\sigma^2)$. The second channel, denoted $\\Poi_{\\gamma}$, acts by outputting $Y_P \\sim \\text(Poi)(\\gamma x_0)$. Note that the output spaces of these two channels are very different. For the first one $Y_G\n \\in \\mathbb{R}$ and for the second one $Y_P \\in \\mathbb{Z}_+$." -"---\nauthor:\n- 'G. Mart\u00ednez-Somonte'\n- 'A. Marcos-Caballero'\n- 'E. Mart\u00ednez-Gonz\u00e1lez'\nbibliography:\n- 'bibliography.bib'\ntitle: Bayesian inference methodology for Primordial Power Spectrum reconstructions from Large Scale Structure\n---\n\nIntroduction {#sec:introduction}\n============\n\nMost cosmological observations support the hypothesis that the primordial fluctuations were adiabatic, Gaussian and quasi-scale invariant, and that the background universe was spatially isotropic and homogeneous [@Planck18Parameters]. These properties, together with several shortcomings of the standard Hot Big Bang scenario [@InflaGuth1981; @InflaLinde1982], provide strong motivation for cosmological inflation [@InflaGuth1981; @InflaLinde1982; @InflaBrout1978; @InflaStarobinski1980; @InflaAlbrechtSteinhardt1982; @InflaLinde1983], a hypothetical epoch of exponential expansion in the early universe. However, the nature and origin of the fields that drove inflation remain largely unknown and poorly constrained by current observations.\n\nThe primordial correlation functions encode very valuable information about the physical mechanism that generated the initial conditions for cosmic structure formation. Some well-motivated theoretical scenarios can produce distinctive features in those functions, such as the primordial scalar power spectrum of curvature perturbations[^1] $P_\\mathcal{R}(k)$. $P_\\mathcal{R}(k)$ is a key quantity to probe the physics of the very early universe, allowing us to test and constrain different inflationary models. 
$P_\\mathcal{R}(k)$ is usually parametrized by a simple power law with two parameters: the amplitude $A_s$ and the spectral index" -"---\nabstract: 'Triangle fees are a novel fee structure for AMMs, in which marginal fees are decreasing in a trade\u2019s size. That decline is proportional to the movement in the AMM\u2019s implied price, i.e. for every basis point the trade moves the ratio of assets, the marginal fee declines by a basis point. These fees create incentives that protect against price staleness, while still allowing the AMM to earn meaningful fee revenue. Triangle fees can strictly improve the Pareto frontier of price accuracy versus losses generated by the status quo of constant fee mechanisms.'\nauthor:\n- Rithvik Rao\n- Nihar Shah\nbibliography:\n- 'acmart.bib'\ntitle: Triangle Fees\n---\n\nIntroduction\n============\n\nWhen traders make swaps on AMMs, they traditionally pay fees that are constant with respect to the volume traded. A trade that is double the size of another pays twice the fees.\n\nWe introduce the concept of \u201ctriangle fees,\" or fees that are *declining* on the margin with respect to the volume swapped. A trade that is double size of another pays less than twice the fees. More concretely, rather than charging twenty basis points on the first dollar and second dollar alike of a two-dollar trade, this paper proposes" -"---\nabstract: 'Causal reasoning and logical reasoning are two important types of reasoning abilities for human intelligence. However, their relationship has not been extensively explored under machine intelligence context. In this paper, we explore how the two reasoning abilities can be jointly modeled to enhance both accuracy and explainability of machine learning models. More specifically, by integrating two important types of reasoning ability\u2014counterfactual reasoning and (neural) logical reasoning\u2014we propose Counterfactual Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to improve the performance. In particular, we use recommender system as an example to show how CCR alleviate data scarcity, improve accuracy and enhance transparency. Technically, we leverage counterfactual reasoning to generate \u201cdifficult\u201d counterfactual training examples for data augmentation, which\u2014together with the original training examples\u2014can enhance the model performance. Since the augmented data is model irrelevant, they can be used to enhance any model, enabling the wide applicability of the technique. Besides, most of the existing data augmentation methods focus on \u201cimplicit data augmentation\u201d over users\u2019 implicit feedback, while our framework conducts \u201cexplicit data augmentation\u201d over users explicit feedback based on counterfactual logic reasoning. Experiments on three real-world datasets show that CCR achieves better performance than non-augmented models and implicitly augmented models," -"---\nabstract: 'In multimodal-aware recommendation, the extraction of meaningful multimodal features is at the basis of high-quality recommendations. Generally, each recommendation framework implements its multimodal extraction procedures with specific strategies and tools. 
This is limiting for two reasons: (i) different extraction strategies do not ease the interdependence among multimodal recommendation frameworks; thus, they cannot be efficiently and fairly compared; (ii) given the large plethora of pre-trained deep learning models made available by different open source tools, model designers do not have access to shared interfaces to extract features. Motivated by the outlined aspects, we propose , a unified framework for the extraction of multimodal features in recommendation. By integrating three widely-adopted deep learning libraries as backends, namely, TensorFlow, PyTorch, and Transformers, we provide a shared interface to extract and process features where each backend\u2019s specific methods are abstracted to the end user. Noteworthy, the extraction pipeline is easily configurable with a YAML-based file where the user can specify, for each modality, the list of models (and their specific backends/parameters) to perform the extraction. Finally, to make accessible to the community, we build a public Docker image equipped with a ready-to-use CUDA environment and propose three demos to test its functionalities" -"---\nabstract: 'Quantum computing has emerged as a promising field with the potential to revolutionize various domains by harnessing the principles of quantum mechanics. As quantum hardware and algorithms continue to advance, the development of high-quality quantum software has become crucial. However, testing quantum programs poses unique challenges due to the distinctive characteristics of quantum systems and the complexity of multi-subroutine programs. In this paper, we address the specific testing requirements of multi-subroutine quantum programs. We begin by investigating critical properties through a survey of existing quantum libraries, providing insights into the challenges associated with testing these programs. Building upon this understanding, we present a systematic testing process tailored to the intricacies of quantum programming. The process covers unit testing and integration testing, with a focus on aspects such as IO analysis, quantum relation checking, structural testing, behavior testing, and test case generation. We also introduce novel testing principles and criteria to guide the testing process. To evaluate our proposed approach, we conduct comprehensive testing on typical quantum subroutines, including diverse mutations and randomized inputs. The analysis of failures provides valuable insights into the effectiveness of our testing methodology. Additionally, we present case studies on representative multi-subroutine quantum programs, demonstrating" -"---\nabstract: 'This paper provides collapses of massive, fully convective, and non-rotating white dwarfs (WDs) formed by accretion-induced collapse or merger-induced collapse and the subsequent explosions with the general relativistic neutrino-radiation hydrodynamics simulations. We produce initial WDs in hydrostatic equilibrium, which have super-Chandrasekhar mass and are about to collapse. The WDs have masses of 1.6$M_\\odot$ with different initial central densities specifically at $10^{10}$, $10^{9.6}$, $10^{9.3}$ and $10^{9.0}\\,{\\rm g\\,cm^{-3}}$. First, we check whether initial WDs are stable without weak interactions. Second, we calculate the collapse of WDs with weak interactions. We employ hydrodynamics simulations with Newtonian gravity in the first and second steps. 
Third, we calculate the formation of neutron stars and accompanying explosions with general relativistic simulations. As a result, WDs with the highest density of $10^{10}\\,{\\rm g\\,cm^{-3}}$ collapse not by weak interactions but by the photodissociation of the iron, and three WDs with low central densities collapse by the electron capture as expected at the second step and succeed in the explosion with a small explosion energy of $\\sim 10^{48}$ erg at the third step. By changing the surrounding environment of WDs, we find that there is a minimum value of ejecta masses being $\\sim 10^{-5}M_{\\odot}$. With the most" -"---\nabstract: 'We investigate the single photon scattering in a phonon-photon hybrid system in the waveguide QED scheme. In our consideration, an artificial giant atom, which is dressed by the phonons in a surface acoustic wave resonator, interacts with a coupled resonator waveguide (CRW) nonlocally via two connecting sites. Together with the interference effect by the nonlocal coupling, the phonon serves as a controller to the transport of the photon in the waveguide. On the one hand, the coupling strength between the giant atom and the surface acoustic wave resonator modulates the width of the transmission valley or window in the near resonant regime. On the other hand, the two reflective peaks induced by the Rabi splitting degrade into a single one when the giant atom is large detuned from the surface acoustic resonator, which implies an effective dispersive coupling. Our study paves the way for the potential application of giant atoms in the hybrid system.'\nauthor:\n- Xinyu Li\n- Wei Zhao\n- Zhihai Wang\ntitle: Controlling photons by phonons via giant atom in a waveguide QED setup\n---\n\nIntroduction\n============\n\nWaveguide quantum electrodynamics (QED)\u00a0[@Gu2017; @Roy2017] mainly studies the interaction between the limited light field in the waveguides" -"---\nabstract: 'Understanding the geometric properties of gradient descent dynamics is a key ingredient in deciphering the recent success of very large machine learning models. A striking observation is that trained over-parameterized models retain some properties of the optimization initialization. This \u201cimplicit bias\u201d is believed to be responsible for some favorable properties of the trained models and could explain their good generalization properties. The purpose of this article is threefold. First, we rigorously expose the definition and basic properties of \u201cconservation laws\u201d, which are maximal sets of independent quantities conserved during gradient flows of a given model (e.g. of a ReLU network with a given architecture) with any training data and any loss. Then we explain how to find the exact number of these quantities by performing finite-dimensional algebraic manipulations on the Lie algebra generated by the Jacobian of the model. Finally, we provide algorithms (implemented in SageMath) to: a) compute a family of polynomial laws; b) compute the number of (not necessarily polynomial) conservation laws. We provide showcase examples that we fully work out theoretically. Besides, applying the two algorithms confirms for a number of ReLU network architectures that all known laws are recovered by the algorithm, and that" -"---\nabstract: 'We propose a causal framework for decomposing a group disparity in an outcome in terms of an intermediate treatment variable. 
Our framework captures the contributions of group differences in baseline potential outcome, treatment prevalence, average treatment effect, and selection into treatment. This framework is counterfactually formulated and readily informs policy interventions. The decomposition component for differential selection into treatment is particularly novel, revealing a new mechanism for explaining and ameliorating disparities. This framework reformulates the classic Kitagawa-Blinder-Oaxaca decomposition in causal terms, supplements causal mediation analysis by explaining group disparities instead of group effects, and resolves conceptual difficulties of recent random equalization decompositions. We also provide a conditional decomposition that allows researchers to incorporate covariates in defining the estimands and corresponding interventions. We develop nonparametric estimators based on efficient influence functions of the decompositions. We show that, under mild conditions, these estimators are $\\sqrt{n}$-consistent, asymptotically normal, semiparametrically efficient, and doubly robust. We apply our framework to study the causal role of education in intergenerational income persistence. We find that both differential prevalence of and differential selection into college graduation significantly contribute to the disparity in income attainment between income origin groups.'\nauthor:\n- 'Ang Yu[^1]'\n- 'Felix Elwert[^2]'\nbibliography:" -"---\nabstract: 'We compute moments of $L$-functions associated to the polynomial, odd polynomial and ordinary families of Artin\u2013Schreier covers over $\\mathbb{F}_q$, where $q$ is a power of a prime $p$, when the size of the finite field is fixed and the genus of the family goes to infinity. In the polynomial family we compute the $k^{\\text{th}}$ moment for a large range of values of $k$, depending on the sizes of $p$ and $q$. We also compute the second moment in absolute value of the polynomial family, obtaining an exact formula with a lower order term, and confirming the unitary symmetry type of the family. For the odd polynomial family, we obtain asymptotic formulas for the first two moments, in agreement with the symplectic random matrix theory model, and identifying a lower order term in the case of the first moment. We finally obtain an asymptotic formula for the first moment in the ordinary family of Artin-Schreier $L$\u2013functions, again explicitly computing a lower order term.'\naddress:\n- 'Alexandra Florea: Department of Mathematics, UC Irvine, 340 Rowland Hall, Office 540E, Irvine, CA 92697, USA'\n- 'Edna Jones: Department of Mathematics, Duke University, 120 Science Drive, Durham, NC 27708, USA'\n- 'Matilde Lal\u00edn:" -"---\nabstract: 'The possibility to engineer artificial Kitaev chains in arrays of quantum dots coupled via narrow superconducting regions has emerged as an attractive way to overcome the disorder issues that complicate the realization and detection of topological superconducting phases in other platforms. Although a true topological phase would require long chains, already a two-site chain realized in a double quantum dot can be tuned to points in parameter space where it hosts zero-energy states that seem identical to the Majorana bound states that characterize a topological phase. These states were named \u201cpoor man\u2019s Majorana bound states\u201d (PMMs) because they lack formal topological protection. In this work, we propose a roadmap for next-generation experiments on PMMs. 
The roadmap starts with experiments to characterize a single pair of PMMs by measuring the Majorana quality, then moves on to initialization and readout of the parity of a PMM pair, which allows measuring quasiparticle poisoning times. The next step is to couple two PMM systems to form a qubit. We discuss measurements of the coherence time of such a qubit, as well as a test of Majorana fusion rules in the same setup. Finally, we propose and analyse three different types of braiding-like" -"---\nabstract: 'As artificial intelligence (AI) systems are increasingly embedded in our lives, their presence leads to interactions that shape our behaviour, decision-making, and social interactions. Existing theoretical research has primarily focused on human-to-human interactions, overlooking the unique dynamics triggered by the presence of AI. In this paper, resorting to methods from evolutionary game theory, we study how different forms of AI influence the evolution of cooperation in a human population playing the one-shot Prisoner\u2019s Dilemma game in both well-mixed and structured populations. We found that Samaritan AI agents that help everyone unconditionally, including defectors, can promote higher levels of cooperation in humans than Discriminatory AI that only help those considered worthy/cooperative, especially in slow-moving societies where change is viewed with caution or resistance (small intensities of selection). Intuitively, in fast-moving societies (high intensities of selection), Discriminatory AIs promote higher levels of cooperation than Samaritan AIs.'\nauthor:\n- |\n Tim Booker\\*\\\n Complexity Science Hub Vienna, Austria\\\n `booker@csh.ac.at`\\\n Manuel Miranda\\*\\\n IFISC\\\n UIB-CSIC\\\n `mmiranda@ifisc.uib-csic.es`\\\n Jes\u00fas A. Moreno L\u00f3pez\\*\\\n IFISC\\\n UIB-CSIC\\\n `jeslop@ifisc.uib-csic.es`\\\n Jos\u00e9 Mar\u00eda Ramos Fern\u00e1ndez\\*\\\n Universidad de La Laguna\\\n `alu0101100883@ull.edu.es`\\\n Max Reddel\\*\\\n Department of Emerging Technology Governance\\\n International Center for Future Generations, Belgium\\\n `max@reddel.ai`\\\n Valeria Widler\\*\\\n Modeling and Simulation of Complex Processes\\" -"---\nabstract: 'Federated learning (FL) has emerged as a promising approach for training machine learning models on decentralized data without compromising data privacy. In this paper, we propose a FL algorithm for object detection in quality inspection tasks using YOLOv5 as the object detection algorithm and Federated Averaging (FedAvg) as the FL algorithm. We apply this approach to a manufacturing use-case where multiple factories/clients contribute data for training a global object detection model while preserving data privacy on a non-IID dataset. Our experiments demonstrate that our FL approach achieves better generalization performance on the overall clients\u2019 test dataset and generates improved bounding boxes around the objects compared to models trained using local clients\u2019 datasets. 
This work showcases the potential of FL for quality inspection tasks in the manufacturing industry and provides valuable insights into the performance and feasibility of utilizing YOLOv5 and FedAvg for federated object detection.'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'lit.bib'\ntitle: |\n Federated Object Detection for Quality Inspection in Shared Production\\\n [^1] \n---\n\nFederated Object Detection (FedOD), Federated Learning (FL), YOLOv5, non-IID Dataset, Data Privacy\n\nIntroduction\n============\n\nObject detection (OD) is a pivotal deep learning task, sparked by breakthroughs like YOLO (You Only Look Once)" -"---\nabstract: 'This study performed a numerical analysis of the hairpin vortex and heat transport generated by the interference of the wakes behind two hills in a laminar boundary layer. In the case of hills with the same scale, the interference between hairpin vortices in the wake is more intensive than in the different-scale hills. When the hills with different scales are installed, hairpin vortices with different scales are periodically shed. Regardless of the scale ratio of the hills, when the hill spacing in the spanwise direction is narrowed, the asymmetry of the hairpin vortex in the wake increases due to the interference between the wakes. At this time, the turbulence caused by the leg and the horn-shaped secondary vortex on the spanwise center side in the hairpin vortex increases, and heat transport around the hairpin vortex becomes active. In addition, the leg approaches the wall surface and removes high-temperature fluid near the wall surface over a wide area, resulting in a high heat transfer coefficient. These tendencies are most remarkable in the same-scale hills. In the case of hills with different scales, the heat transfer coefficient decreases because the leg on the spanwise center side in a small hairpin" -"---\nabstract: 'Geometrical approaches for room acoustics simulation have the advantage of requiring limited computational resources while still achieving a high perceptual plausibility. A common approach is using the image source model for direct and early reflections in connection with further simplified models such as a feedback delay network for the diffuse reverberant tail. When recreating real spaces as virtual acoustic environments using room acoustics simulation, the perceptual relevance of individual parameters in the simulation is unclear. Here we investigate the importance of underlying acoustical measurements and technical evaluation methods to obtain high-quality room acoustics simulations in agreement with dummy-head recordings of a real space. We focus on the role of source directivity. The effect of including measured, modelled, and omnidirectional source directivity in room acoustics simulations was assessed in comparison to the measured reference. Technical evaluation strategies to verify and improve the accuracy of various elements in the simulation processing chain from source, the room properties, to the receiver are presented. Perceptual results from an ABX listening experiment with random speech tokens are shown and compared with technical measures for a ranking of simulation approaches.'\nauthor:\n- \n- \n- \nbibliography:\n- 'refs.bib'\ntitle: ' On the relevance of acoustic" -"---\nabstract: 'Neuro-symbolic artificial intelligence is an emerging area that combines traditional symbolic techniques with neural networks. In this paper, we consider its application to sequential decision making under uncertainty. 
We introduce neuro-symbolic partially observable Markov decision processes (NS-POMDPs), which model an agent that perceives a continuous-state environment using a neural network and makes decisions symbolically, and study the problem of optimising discounted cumulative rewards. This requires functions over continuous-state beliefs, for which we propose a novel piecewise linear and convex representation (P-PWLC) in terms of polyhedra covering the continuous-state space and value vectors, and extend Bellman backups to this representation. We prove the convexity and continuity of value functions and present two value iteration algorithms that ensure finite representability by exploiting the underlying structure of the continuous-state model and the neural perception mechanism. The first is a classical (exact) value iteration algorithm extending $\\alpha$-functions of Porta [*et al*]{} (2006) to the P-PWLC representation for continuous-state spaces. The second is a point-based (approximate) method called NS-HSVI, which uses the P-PWLC representation and belief-value induced functions to approximate value functions from below and above for two types of beliefs, particle-based and region-based. Using a prototype implementation, we show the practical applicability" -"---\nabstract: |\n We consider a network equilibrium model (i.e.\u00a0a combined model), which was proposed as an alternative to the classic four-step approach for travel forecasting in transportation networks. This model can be formulated as a convex minimization program. We extend the combined model to the case of the stable dynamics (SD) model in the traffic assignment stage, which imposes strict capacity constraints in the network. We propose a way to solve corresponding dual optimization problems with accelerated gradient methods and give theoretical guarantees of their convergence. We conducted numerical experiments with considered optimization methods on Moscow and Berlin networks.\n\n **Keywords:** forecasting, combined model, trip distribution, traffic assignment, capacity constraints, gradient method\nauthor:\n- Meruza\u00a0Kubentayeva\n- Demyan\u00a0Yarmoshik\n- Mikhail\u00a0Persiianov\n- Alexey\u00a0Kroshnin\n- Ekaterina\u00a0Kotliarova\n- Nazarii\u00a0Tupitsa\n- Dmitry\u00a0Pasechnyuk\n- Alexander\u00a0Gasnikov\n- Vladimir\u00a0Shvetsov\n- Leonid\u00a0Baryshev\n- Alexey\u00a0Shurupov\nbibliography:\n- 'lib.bib'\ntitle: 'Primal-Dual Gradient Methods for Searching Network Equilibria in Combined Models with Nested Choice Structure and Capacity Constraints'\n---\n\n=1\n\nIntroduction\n============\n\nOne of the most popular approaches to travel forecasting in transportation networks is the four-step procedure [@ortuzar2011]: sequential run of trip generation, trip distribution, modal split, and traffic" -"---\nauthor:\n- 'Eric Yanchenko, Tsuyoshi Murata and\u00a0Petter Holme'\nbibliography:\n- 'refs.bib'\ntitle: |\n Influence maximization on temporal\\\n networks: a review\n---\n\n[Yanchenko : Influence maximization on temporal networks: a review]{}\n\nIntroduction\n============\n\nNetworks, or graphs, are a simple tool to abstractly represent a system involving interacting entities, where the objects are modeled as nodes and their relationship as edges. 
Because of their generality and flexibility, many real-world settings have leveraged networks over the past few decades including: online social networks\u00a0[@garton1997studying; @mislove2007measurement; @phuvipadawat2010breaking], infrastructure networks\u00a0[@latora2005vulnerability; @liu2020review; @guimera2004modeling] and biological process networks\u00a0[@girvan2002community; @pavlopoulos2011using]. Recently, there has been great interest in not only understanding the topological structure of networks, but also how information diffuses on them\u00a0[@lopez2008diffusion; @rodriguez2011uncovering; @xu2010information; @harush2017dynamic]. For example, in social networks, we may be interested in understanding viral outbreaks in a population, or breaking news spreads in an online setting.\n\nThe most fundamental assumption of network science and machine learning applied to graphs is that the network structure begets the function of the networked system. First discussed in abstract terms by Georg Simmel in the 1890s\u00a0[@simmel] and in the language of graph theory by Jacob Moreno and Helen Jennings in the 1930s\u00a0[@moreno_jennings]," -"---\nabstract: 'Timing synchronization (TS) is one of the key tasks in orthogonal frequency division multiplexing (OFDM) systems. However, multi-path uncertainty corrupts the TS correctness, making OFDM systems suffer from a severe inter-symbol-interference (ISI). To tackle this issue, we propose a timing-metric learning-based TS method assisted by a lightweight one-dimensional convolutional neural network (1-D CNN). Specifically, the receptive field of 1-D CNN is specifically designed to extract the metric features from the classic synchronizer. Then, to combat the multi-path uncertainty, we employ the varying delays and gains of multi-path (the characteristics of multi-path uncertainty) to design the timing-metric objective, and thus form the training labels. This is typically different from the existing timing-metric objectives with respect to the timing synchronization point. Our method substantively increases the completeness of training data against the multi-path uncertainty due to the complete preservation of metric information. By this mean, the TS correctness is improved against the multi-path uncertainty. Numerical results demonstrate the effectiveness and generalization of the proposed TS method against the multi-path uncertainty.'\nauthor:\n- \nbibliography:\n- 'ref.bib'\ntitle: |\n Metric Learning-Based Timing Synchronization by Using Lightweight Neural Network\\\n [^1] \n---\n\nTiming synchronization, OFDM, lightweight CNN, timing-metric objective, multi-path uncertainty\n\nIntroduction {#I:I}\n============" -"---\nabstract: 'Through extensive research on deep learning in recent years and its application in construction, crack detection has evolved rapidly from rough detection at the image-level and patch-level to fine-grained detection at the pixel-level, which better suits the nature of this field. Despite numerous existing studies utilizing off-the-shelf deep learning models or enhancing them, these models are not always effective or efficient in real-world applications. In order to bridge this gap, we propose a High-resolution model with Semantic guidance, specifically designed for real-time crack segmentation, referred to as HrSegNet. Our model maintains high resolution throughout the entire process, as opposed to recovering from low-resolution features to high-resolution ones, thereby maximizing the preservation of crack details. 
Moreover, to enhance the context information, we use low-resolution semantic features to guide the reconstruction of high-resolution features. To ensure the efficiency of the algorithm, we design a simple yet effective method to control the computation cost of the entire model by controlling the capacity of high-resolution channels, while providing the model with extremely strong scalability. Extensive quantitative and qualitative evaluations demonstrate that our proposed HrSegNet has exceptional crack segmentation capabilities, and that maintaining high resolution and semantic guidance are crucial to the final" -"---\nabstract: 'With the rise of bidirectional encoder representations from Transformer models in natural language processing, the speech community has adopted some of their development methodologies. Therefore, the Wav2Vec models were introduced to reduce the data required to obtain state-of-the-art results. This work leverages this knowledge and improves the performance of the pre-trained speech models by simply replacing the fine-tuning dense layer with a lateral inhibition layer inspired by the biological process. Our experiments on Romanian, a low-resource language, show an average improvement of 12.5% word error rate (WER) using the lateral inhibition layer. In addition, we obtain state-of-the-art results on both the Romanian Speech Corpus and the Robin Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively.'\nauthor:\n- \nbibliography:\n- 'mybib.bib'\ntitle: 'Towards Improving the Performance of Pre-Trained Speech Models for Low-Resource Languages Through Lateral Inhibition'\n---\n\nLateral Inhibition; Romanian Language; Speech Recognition; Wav2Vec 2.0\n\nIntroduction\n============\n\nDeep neural networks benefit from large amounts of annotated training data. However, annotated data is challenging to obtain in many settings. Except for English, generating thousands of hours of transcribed audio necessary to train a state-of-the-art speech recognition system is infeasible for most languages worldwide. Self-supervised learning\u00a0[@bao2021beit] has become" -"---\nabstract: 'Strong correlations lead to emergent excitations at low energies. When combined with symmetry constraints, they may produce topological electronic states near the Fermi energy. Within this general framework, here we address the topological features in iron-based superconductors. We examine the effects of orbital-selective correlations on the band inversion in the iron chalcogenide FeSe$_{x}$Te$_{1-x}$ near its doping of optimal superconductivity, within a multiorbital model and using a $U(1)$ slave spin theory. The orbital selectivity of the quasiparticle spectral weight, along with its counterpart of the energy level renormalization, leads to a band inversion and Dirac node formation pinned to the immediate vicinity of the Fermi energy. 
Our work demonstrates both the naturalness and robustness of the topological properties in FeSe$_{x}$Te$_{1-x}$, and uncovers a new setting in which strong correlations and space-group symmetry cooperate in generating strongly correlated electronic topology.'\nauthor:\n- Zhiguang Liao\n- Rong Yu\n- 'Jian-Xin Zhu'\n- Qimiao Si\ntitle: 'Orbital-selective correlations for topology in FeSe$_{x}$Te$_{1-x}$'\n---\n\n[*Introduction.\u00a0*]{} Since its discovery\u00a0[@Kamihara_JACS_2008], iron-based superconductors (FeSCs) have attracted extensive research interest because of their unconventional high-temperature superconductivity and a rich landscape of electronic orders \u00a0[@Si_Hussey2023; @Yi_npjQM_2017; @Bascones_CRP_2016; @Hirschfeld_CRP_2016; @Si_NRM_2016; @Dai_RMP_2015; @Dagotto_RMP_2013; @Wang_Sci_2011; @Johnston_AP_2010]. These properties originate" -"---\nabstract: 'Object detection in 3D is a\u00a0crucial aspect in the context of autonomous vehicles and drones. However, prototyping detection algorithms is time-consuming and costly in terms of energy and environmental impact. To address these challenges, one can check the effectiveness of different models by training on a\u00a0subset of the original training set. In this paper, we present a\u00a0comparison of three algorithms for selecting such a\u00a0subset \u2013 *random sampling*, *random per class sampling*, and our proposed *MONSPeC* (Maximum Object Number Sampling per Class). We provide empirical evidence for the superior effectiveness of random per class sampling and MONSPeC over basic random sampling. By replacing random sampling with one of the more efficient algorithms, the results obtained on the subset are more likely to transfer to the results on the entire dataset. The code is available at: *https://github.com/vision-agh/monspec*.'\nauthor:\n- \n- \nbibliography:\n- 'references.bib'\ntitle: 'Comparative study of subset selection methods for rapid prototyping of 3D object detection algorithms [^1] '\n---\n\nLiDAR, point cloud, object detection, PointPillars, CenterPoint, subset selection, MONSPeC, random per class sampling\n\nIntroduction {#sec:introduction}\n============\n\nAdvanced Driver Assistance Systems (ADAS), Autonomous Vehicles (AVs), and Unmanned Aerial Vehicles (UAVs) rely on object detection for" -"---\nabstract: 'The deconfined quantum critical point (DQCP) is an example of phase transitions beyond the Landau symmetry breaking paradigm that attracts wide interest. However, its nature has not been settled after decades of study. In this paper, we apply the recently proposed fuzzy sphere regularization to study the $\\mathrm{SO}(5)$ non-linear sigma model (NL$\\sigma$M) with a topological Wess-Zumino-Witten term, which serves as a dual description of the DQCP with an exact $\\mathrm{SO}(5)$ symmetry. We demonstrate that the fuzzy sphere functions as a powerful microscope, magnifying and revealing a wealth of crucial information about the DQCP, ultimately paving the way towards its final answer. In particular, through exact diagonalization, we provide clear evidence that the DQCP exhibits approximate conformal symmetry. The evidence includes the existence of a conserved $\\mathrm{SO}(5)$ symmetry current, a stress tensor, and integer-spaced levels between conformal primaries and their descendants. Most remarkably, we have identified 23 primaries and 76 conformal descendants. 
Furthermore, by examining the renormalization group flow of the lowest symmetry singlet as well as other primaries, we provide numerical evidence in favour of DQCP being pseudo-critical, with the approximate conformal symmetry plausibly emerging from nearby complex fixed points. The primary spectrum we compute also has important" -"---\nabstract: 'We investigate cases where the finite dual coalgebra of a twisted tensor product of two algebras is a crossed product coalgebra of their respective finite duals. This is achieved by interpreting the finite dual as a topological dual; in order to prove this, we show that the continuous dual is a strong monoidal functor on linearly topologized vector spaces whose open subspaces have finite codimension. We describe a sufficient condition for the result on finite dual coalgebras to be applied, and we apply this condition to particular constructions including Ore extensions, smash product algebras, and crossed product bialgebras.'\naddress: |\n Department of Mathematics\\\n University of California, Irvine\\\n 419 Rowland Hall\\\n Irvine, CA 92697\u20133875\\\n USA\nauthor:\n- 'Manuel L. Reyes'\nbibliography:\n- 'twisted-dual-v1.bib'\ndate: 'June 29, 2023'\ntitle: Dual coalgebras of twisted tensor products\n---\n\n[^1]\n\nIntroduction {#sec:intro}\n============\n\nLet $k$ be an arbitrary field, and let $A$ be a $k$-algebra. The finite dual\u00a0[@heynemansweedler 1.3] of $A$ is a well-known coalgebra associated to $A$ that is defined as follows. Let ${\\mathcal{F}}(A)$ denote the family of all ideals of finite $k$-codimension in $A$. Then $A^\\circ$ is the subspace of the dual space $A^*$ consisting of all functionals that" -"---\nauthor:\n- \nbibliography:\n- 'matfun.bib'\n- 'my\\_bib.bib'\ntitle: On convergence of waveform relaxation for nonlinear systems of ordinary differential equations\n---\n\nIntroduction\n============\n\nLarge systems of time-dependent ordinary differential equations arise in various applications and in many cases have to be integrated in time by implicit methods, see e.g.\u00a0[@HundsdorferVerwer:book; @ElmanSilvesterWathen:book]. In recent decades, the niche of implicit methods has been gradually taken up by exponential time integrators\u00a0[@HochbruckOstermann2010]. For implicit and exponential methods the key issue is how to solve the arising nonlinear systems and (or) to evaluate related matrix functions efficiently. To achieve efficiency, different approaches exist and are widely applied, such as inexact Newton methods combined with powerful linear preconditioned solvers\u00a0[@BrownSaad; @ChoquetErhel; @TromeurdervoutVassilevski2006], splitting and Rosenbrock methods\u00a0[@Yanenko; @Dyakonov64; @CsomosFaragoHavasi05; @ros2; @HundsdorferVerwer:book] and approximate iterative implicit schemes (which can be seen as stabilized explicit schemes)\u00a0[@RKC; @LokLokDAN; @RKC97; @TalEzer89; @Lebedev98; @RKC2004; @MRAIpap; @MRAIpar; @Zhukov2011].\n\nAnother important approach to achieve efficiency in implicit and exponential time integrators is based on waveform relaxation methods\u00a0[@Lelarasmee_ea1982; @NewtonSangiovanni1983; @Vandewalle1993corrected], where iterative approximations are time dependent functions rather than time step values of a numerical solution. These methods have also been known as dynamic iteration or Picard\u2013Lindel\u00f6f iteration\u00a0[@MiekkalaNevanlinna1996]. 
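To make the dynamic-iteration idea just mentioned concrete, here is a minimal sketch of the classical Picard-Lindelöf (waveform relaxation) iteration for an autonomous system $u' = f(u)$: each sweep updates the whole trajectory at once rather than stepping in time. The uniform grid and trapezoidal quadrature are choices made for this sketch, not taken from the paper above.

```python
import numpy as np

def waveform_relaxation(f, u0, t_end, n_grid=200, sweeps=30):
    """Picard iteration u^{k+1}(t) = u0 + int_0^t f(u^k(s)) ds on a uniform grid."""
    t = np.linspace(0.0, t_end, n_grid)
    u0 = np.atleast_1d(np.asarray(u0, dtype=float))
    u = np.tile(u0, (n_grid, 1))               # initial guess: constant waveform
    dt = np.diff(t)[:, None]
    for _ in range(sweeps):
        fu = np.array([f(ui) for ui in u])     # evaluate f along the whole waveform
        increments = 0.5 * (fu[1:] + fu[:-1]) * dt   # trapezoidal rule per interval
        integral = np.vstack([np.zeros((1, u.shape[1])),
                              np.cumsum(increments, axis=0)])
        u = u0 + integral
    return t, u

# sanity check on u' = -u, u(0) = 1, whose exact solution is exp(-t)
t, u = waveform_relaxation(lambda x: -x, 1.0, t_end=2.0)
print(np.max(np.abs(u[:, 0] - np.exp(-t))))   # small residual
```

On a finite interval with Lipschitz $f$, each sweep contracts the error by roughly $(LT)^k/k!$, which is why the iteration converges even without time stepping; the paper above studies exactly when such convergence holds for nonlinear systems.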
They have been developed" -"---\nabstract: 'Generative Large language models (LLMs) have demonstrated remarkable capabilities for a wide range of applications, but reducing ungrounded or erroneous responses remains a major growth area. Unlike task-specific models, there is no effective method to calibrate the confidence level of LLM responses to indicate potential errors and facilitate human-in-the-loop verification. An important source of calibration stems from expert-stipulated programmatic supervision, which is often available at low cost but has its own limitations such as noise and coverage. In this paper, we introduce a Pareto optimal self-supervision framework that can leverage available programmatic supervision to systematically calibrate LLM responses by producing a risk score for every LLM response, without any additional manual efforts. This is accomplished by learning a harmonizer model to align with LLM output as well as other weak supervision sources. The model assigns higher risk scores to more uncertain LLM responses and facilitates error correction. Experiments on standard relation extraction and classification tasks in biomedical and general domains demonstrate that the proposed risk score is highly correlated with the actual LLM error rate. By using a dynamic prompting strategy based on the risk score, we observed significant accuracy improvement for off-the-shelf LLMs, boosting GPT-3.5 results past" -"---\nauthor:\n- 'Axel Lazzarotto, Alain Hui-Bon-Hoa and Michel Rieutord'\nbibliography:\n- 'Article\\_def.bib'\ndate: 'Received 12 April 2023; accepted 1 June 2023'\ntitle: 'Photometric determination of rotation axis inclination, rotation rate, and mass of rapidly rotating intermediate-mass stars[^1]'\n---\n\n[Intermediate-mass stars are often fast rotators, and hence are centrifugally flattened and notably affected by gravity darkening. To analyse such stars properly, one must resort to 2D models to compute the visible radiative flux and to take the geometrical effect of the star inclination into account.]{} [Assuming a given stellar age and chemical composition, our aim is to derive the mass and rotation rates of main sequence fast rotating stars, along with their inclination, from photometric quantities influenced by gravity darkening.]{} [We chose three observables that vary with mass, rotation, and inclination: the temperature derived by the infrared flux method $T_\\mathrm{IRFM}$, the Str\u00f6mgren $c_1$ index, and a second index $c_2$ built in the same way as the $c_1$ index, but sensitive to the UV side of the Balmer jump. These observables are computed from synthetic spectra produced with the PHOENIX code and rely on a 2D stellar structure from the ESTER code. These quantities are computed for a grid" -"---\nabstract: 'We demonstrate time-of-flight measurements for an ultracold levitated nanoparticle and reveal its velocity for the translational motion brought to the quantum ground state. We discover that the velocity distributions obtained with repeated release-and-recapture measurements are significantly broadened via librational motions of the nanoparticle. Under feedback cooling on all the librational motions, we recover the velocity distributions in reasonable agreement with an expectation from the occupation number, with approximately twice the width of the quantum limit. 
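For context on the "quantum limit" invoked just above: for a harmonic mode of mass $m$ and angular frequency $\Omega$ (generic symbols introduced here for illustration, not taken from the entry), the zero-point momentum uncertainty sets a minimum velocity width, and a residual thermal occupation $\bar n$ broadens it:

$$\Delta v_{\min}=\sqrt{\frac{\hbar\Omega}{2m}},\qquad \Delta v_{\bar n}=\Delta v_{\min}\sqrt{2\bar n+1},$$

so a distribution roughly twice the minimum width would correspond to $2\bar n+1\approx 4$, i.e. $\bar n\approx 1.5$, consistent with the entry's statement that the measured widths agree reasonably with the occupation number.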
The strong impact of librational motions on the translational motions is understood as a result of the deviation between the libration center and the center of mass, induced by the asymmetry of the nanoparticle. Our results elucidate the importance of the control over librational motions and establish the basis for exploring quantum mechanical properties of levitated nanoparticles in terms of their velocity.'\nauthor:\n- 'M. Kamba'\n- 'K. Aikawa'\ntitle: Revealing the velocity uncertainties of a levitated particle in the quantum ground state\n---\n\nThe ingenious control over the motions of nano- and micro-mechanical oscillators has, over the past decade, opened up a wide variety of opportunities such as quantum transducers\u00a0[@andrews2014bidirectional; @bagci2014optical], ultrasensitive force and position sensors\u00a0[@tao2014single; @wilson2015measurement], and nonreciprocal devices[@shen2016experimental; @bernier2017nonreciprocal; @peterson2017demonstration;" -"---\nabstract: 'Large language models show impressive results on few-shot NLP tasks. However, these models are memory and computation-intensive. Meta-training allows one to leverage smaller models for few-shot generalization in a domain-general and task-agnostic manner [@min2022metaicl; @wei2022zeroshot; @chen-etal-2022-meta]; however, these methods alone result in models that may not have sufficient parameterization or knowledge to adapt quickly to a large variety of tasks. To overcome this issue, we propose meta-training *with demonstration retrieval*, where we use a dense passage retriever to retrieve semantically similar labeled demonstrations to each example for more varied supervision. By separating external knowledge from model parameters, we can use meta-training to train parameter-efficient models that generalize well on a larger variety of tasks. We construct a meta-training set from UnifiedQA and CrossFit, and propose a demonstration bank based on UnifiedQA tasks. To our knowledge, our work is the first to combine retrieval with meta-training, to use DPR models to retrieve demonstrations, and to leverage demonstrations from many tasks simultaneously, rather than randomly sampling demonstrations from the training set of the target task. Our approach outperforms a variety of targeted parameter-efficient and retrieval-augmented few-shot methods on QA, NLI, and text classification tasks (including SQuAD, QNLI," -"---\nabstract: 'We have performed classical and quantum dynamical simulations to calculate dynamical quantities for physical processes of atom-surface scattering, e.g., trapping probability, average energy loss, and the final angular distribution of a particle scattered from a corrugated thermal surface. Here we have restricted ourselves to in-plane scattering so that only two degrees of freedom of the particle have to be considered - the vertical distance $z$ and the horizontal coordinate $x$. Moreover, we assumed that only the vertical coordinate fluctuates due to interaction with the thermal phonon bath of the surface. Initial phase-space variables of the system and the bath for our classical simulations were generated according to Wigner distribution functions which were derived from the initial wavefunctions of our quantum dynamics. At very low incident energy, we have found that the quantum mechanical average energy loss of the escaped particle from the corrugated as well as thermal surface is smaller than the classical one at a particular surface temperature. 
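The classical initial conditions in the preceding entry are drawn from Wigner distributions of the initial wavefunctions. For a Gaussian (harmonic ground-state) wavepacket the Wigner function is a positive Gaussian in phase space, so the sampling reduces to two independent normal draws; a minimal sketch under that assumption (natural units and parameter values are illustrative):

```python
import numpy as np

HBAR = 1.0  # natural units for the sketch

def sample_ground_state_wigner(m, omega, n_samples, seed=0):
    """Draw (x, p) samples from the Wigner function of a harmonic ground state.

    W(x, p) is a product of Gaussians with widths
    sigma_x = sqrt(hbar / (2 m omega)) and sigma_p = sqrt(hbar m omega / 2),
    saturating sigma_x * sigma_p = hbar / 2.
    """
    rng = np.random.default_rng(seed)
    sigma_x = np.sqrt(HBAR / (2.0 * m * omega))
    sigma_p = np.sqrt(HBAR * m * omega / 2.0)
    return rng.normal(0.0, sigma_x, n_samples), rng.normal(0.0, sigma_p, n_samples)

x, p = sample_ground_state_wigner(m=1.0, omega=1.0, n_samples=100_000)
print(x.std() * p.std())  # ~ 0.5, i.e. hbar/2
```

For anharmonic or excited initial states the Wigner function is generally not a product of Gaussians (and can be negative), which is where sampling schemes become less trivial than this sketch.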
It is important to note that the escape probability of the scattered particle obtained by classical simulation increases with increasing surface temperature. On the other hand, the quantum rate is almost temperature independent at 2 meV" -"---\nabstract: 'The Pulmonary Function Test (PFT) is a widely utilized and rigorous classification test for lung function evaluation, serving as a comprehensive tool for lung diagnosis. Meanwhile, Electrical Impedance Tomography (EIT) is a rapidly advancing clinical technique that visualizes the conductivity distribution induced by ventilation. EIT provides additional spatial and temporal information on lung ventilation beyond traditional PFT. However, relying solely on conventional isolated interpretations of PFT results and EIT images overlooks the continuous dynamic aspects of lung ventilation. This study aims to classify lung ventilation patterns by extracting spatial and temporal features from the 3D EIT image series. The study uses a Variational Autoencoder network with a MultiRes block to compress the spatial distribution in a 3D image into a one-dimensional vector. These vectors are then concatenated to create a feature map for the exhibition of temporal features. A simple convolutional neural network is used for classification. Data collected from 137 subjects were finally used for training. The model is validated by ten-fold and leave-one-out cross-validation first. The accuracy and sensitivity of the normal ventilation mode are 0.95 and 1.00, and the f1-score is 0.94. Furthermore, we check the reliability and feasibility of the proposed pipeline by testing it on" -"---\nabstract: 'By molecular dynamics simulations we study the spin Seebeck effect as a function of magnetic field in the prototype classical easy-axis antiferromagnetic chain, in the far-from-equilibrium as well as the linear response regime. We find distinct behavior in the low field antiferromagnetic, middle field canted and high field ferromagnetic phase. In particular, in the open boundary system at low temperatures, we observe a divergence of the spin current in the spin-flop transition between the antiferromagnetic and canted phase, accompanied by a change of sign in the spin current generated by the temperature gradient. These results are corroborated by a simple spin-wave phenomenological analysis and simulations in the linear response regime. They shed light on the spin current sign change observed in experiments in bulk antiferromagnetic materials.'\nauthor:\n- 'X. Zotos$^{1,2}$'\ntitle: 'Spin Seebeck effect in the classical easy-axis antiferromagnetic chain'\n---\n\nIntroduction\n============\n\nThe generation and control of spin currents is a central topic in the field of spintronics [@review]. In particular the spin Seebeck effect [@kikkawa], the generation of a spin current by a temperature gradient in a magnetic field, has been extensively experimentally and theoretically studied in a great variety of bulk magnetic systems as" -"---\nabstract: |\n This article deals with an autonomous differential equation model that studies the interaction between the immune system and the growth of tumor cells with strong and weak Allee effects. The Allee effect refers to intraspecific competition, and when the population is small, it can retard population growth. The work focuses on describing analytically, using a set of parameters, the conditions in the phases of the immunoediting theory, particularly in the equilibrium phase, where a latent tumor would exist. 
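For orientation on the strong/weak distinction used in the tumor-growth entry above, one textbook form of logistic growth with a strong Allee effect, with intrinsic rate $r$, carrying capacity $K$, and Allee threshold $A$ (a generic form, not necessarily the exact model of that paper), is

$$\frac{dN}{dt}=rN\left(1-\frac{N}{K}\right)\left(\frac{N}{A}-1\right),\qquad 0<A<K,$$

so the growth rate is negative below the threshold $N=A$ and the population goes extinct from small sizes; weak-Allee variants depress the per-capita growth rate at low density without making it negative, which removes the extinction threshold.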
Saddle-Node, Saddle-symmetric, Hopf, generalized Hopf, and Takens-Bogdanov bifurcations are presented for both Allee effects, and their biological interpretation regarding cancer dynamics is discussed. The Hopf and generalized Hopf bifurcation curves are analyzed through hyper-parameter projections of the model, where it is observed that with a strong Allee effect, more tumor control persists as it has higher antigenicity, in contrast to the weak Allee effect, where lower antigenicity is observed. Also, we observe that the equilibrium phase persists as antigenicity increases with a strong Allee effect. Finally, the numerical continuation is performed to replicate the analytical curves\u2019 bifurcations and draw the limit and double limit cycles.\\\n \\\n [*Keywords: Generalized Hopf bifurcation, Cancer modeling, Immunoediting, Weak-strong Allee effects.*]{}\nauthor:\n- 'Eymard" -"---\nabstract: |\n With the explosion of applications of Data Science, the field has come loose from its foundations. This article argues for a new program of applied research in areas familiar to researchers in Bayesian methods in AI that are needed to ground the practice of Data Science by borrowing from AI techniques for model formulation that we term \u201cDecision Modelling.\u201d This article briefly reviews the formulation process as building a causal graphical model, then discusses the process in terms of six principles that comprise *Decision Quality*, a framework from the popular business literature. We claim that any successful applied ML modelling effort must include these six principles.\n\n We explain how Decision Modelling combines a conventional machine learning model with an explicit value model. To give a specific example we show how this is done by integrating a model\u2019s ROC curve with a utility model.\nauthor:\n- '[John Mark Agosta](mailto:?Subject=Your UAI 2022 paper)'\n- Robert Horton\nbibliography:\n- 'uai2022-ds.bib'\ntitle: Redeeming Data Science by Decision Modelling\n---\n\n Introduction\n=============\n\nData Science suffers from its own success, having seen such rapid adoption across so many fields, in so many different ways that it has lost its principled theoretical foundation." -"---\nabstract: 'This work seeks to answer key research questions regarding the viability of reinforcement learning over the S&P 500 index. The on-policy techniques of Value Iteration (VI) and State\u2013action\u2013reward\u2013state\u2013action (SARSA) are implemented along with the off-policy technique of Q-Learning. The models are trained and tested on a dataset comprising multiple years of stock market data from 2000-2023. The analysis presents the results and findings from training and testing the models using two different time periods: one including the COVID-19 pandemic years and one excluding them. The results indicate that including market data from the COVID-19 period in the training dataset leads to superior performance compared to the baseline strategies. During testing, the on-policy approaches (VI and SARSA) outperform Q-learning, highlighting the influence of the bias-variance tradeoff and the generalization capabilities of simpler policies. However, it is noted that the performance of Q-learning may vary depending on the stability of future market conditions. Future work is suggested, including experiments with updated Q-learning policies during testing and trading diverse individual stocks. 
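To make the on-/off-policy contrast in the trading entry above concrete: tabular SARSA and Q-learning differ only in the bootstrap term of their update rules. A generic sketch follows; the state/action encoding and hyperparameter values are illustrative, not taken from that paper.

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy: bootstrap with the action the behavior policy actually takes next."""
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy: bootstrap with the greedy action, regardless of what is executed."""
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

# e.g. Q = np.zeros((n_states, n_actions)) with states discretized from price
# features and actions in {buy, hold, sell} (a hypothetical encoding)
```

Because SARSA evaluates the policy it actually follows, its targets are lower-variance but biased toward the exploratory policy; Q-learning's max operator targets the greedy policy, which is one lens on the bias-variance remark in that abstract.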
Additionally, the exploration of alternative economic indicators for training the models is proposed.'\nauthor:\n- |\n Ishan Khare\\\n Stanford University\\\n `iskhare@stanford.edu`\\\n Tarun Martheswaran\\\n Stanford University\\\n `tarunkm@stanford.edu`\\\n Jonah Ezekiel\\\n Stanford University\\\n `jezekiel@stanford.edu`\\" -"---\nabstract: 'We explore the feasibility of learning the connection between SDSS galaxies and ELUCID subhaloes with random forest (RF). ELUCID is a constrained $N$-body simulation constructed using the matter density field of SDSS. Based on an SDSS-ELUCID matched catalogue, we build RF models that predict $M_r$ magnitude, colour, stellar mass $M_*$, and specific star formation rate (sSFR) with several subhalo properties. While the RF can predict $M_r$ and $M_*$ with reasonable accuracy, the prediction accuracy of colour and sSFR is low, which could be due to the mismatch between galaxies and subhaloes. To test this, we shuffle the galaxies in subhaloes of narrow mass bins in the local neighbourhood using galaxies of a semi-analytic model (SAM) and the TNG hydrodynamic simulation. We find that the shuffling only slightly reduces the colour prediction accuracy in SAM and TNG, which is still considerably higher than that of the SDSS. This suggests that the true connection between SDSS colour and subhalo properties could be weaker than that in the SAM and TNG without the mismatch effect. We also measure the Pearson correlation coefficient between galaxy properties and the subhalo properties in SDSS, SAM, and TNG. Similar to the RF results, we find" -"---\nabstract: 'This paper addresses the efficient management of Mobile Access Points (MAPs), which are Unmanned Aerial Vehicles (UAV), in 5G networks. We propose a two-level hierarchical architecture, which dynamically reconfigures the network while considering Integrated Access-Backhaul (IAB) constraints. The high-layer decision process determines the number of MAPs through consensus, and we develop a joint optimization process to account for co-dependence in network self-management. In the low-layer, MAPs manage their placement using a double-attention based Deep Reinforcement Learning (DRL) model that encourages cooperation without retraining. To improve generalization and reduce complexity, we propose a federated mechanism for training and sharing one placement model for every MAP in the low-layer. Additionally, we jointly optimize the placement and backhaul connectivity of MAPs using a multi-objective reward function, considering the impact of varying MAP placement on wireless backhaul connectivity.'\nbibliography:\n- 'biblio.bib'\ndate: November 2022\ntitle:\n- 'DEDICAT-EUCNC23-Hierarchical Multi-MAP cooperation for reconfigurable 5G dynamic networks'\n- 'DEDICAT-EUCNC23-Hierarchical Deep Reinforcement Learning for reconfigurable 5G dynamic networks via Multi-UAV cooperation'\n- 'Learning Reconfigurable Cooperative Multi-UAV 5G networks via Federated Multi-Agent Deep Reinforcement Learning'\n- 'Federated Multi-Agent Deep Reinforcement Learning for Dynamic and Flexible 3D Operation of 5G Multi-MAP Networks'\n---\n\nMobile Access Points, Integrated access" -"---\nabstract: 'We consider a Su-Schrieffer-Heeger chain to which we attach a semi-infinite undimerized chain (lead) to both ends. We study the effect of the openness of the SSH model on its properties. A representation of the infinite system using an effective Hamiltonian allows us to examine its low-energy states in more detail. 
We show that, as one would expect, the topological edge states hybridize as the coupling between the systems is increased. As this coupling grows, these states are suppressed, while a new type of edge state emerges from the trivial topological phase. These new states, referred to as phase-inverted edge states, are localized low-energy modes very similar to the edge states of the topological phase. Interestingly, localization occurs on a new shifted interface, moving from the first (and last) site to the second (and second to last) site. This suggests that the topology of the system is strongly affected by the leads, with three regimes of behavior. For very small coupling the system is in a well-defined topological phase; for very large coupling it is in the opposite phase; in the intermediate region, the system is in a transition regime.'\nauthor:\n- Alexei Bissonnette\n- Nicolas Delnour\n-" -"---\nabstract: 'Isolated Sign Language Recognition (SLR) has mostly been applied on datasets containing signs executed slowly and clearly by a limited group of signers. In real-world scenarios, however, we are met with challenging visual conditions, coarticulated signing, small datasets, and the need for signer independent models. To tackle this difficult problem, we require a robust feature extractor to process the sign language videos. One could expect human pose estimators to be ideal candidates. However, due to a domain mismatch with their training sets and challenging poses in sign language, they lack robustness on sign language data and image-based models often still outperform keypoint-based models. Furthermore, whereas the common practice of transfer learning with image-based models yields even higher accuracy, keypoint-based models are typically trained from scratch on every SLR dataset. These factors limit their usefulness for SLR. From the existing literature, it is also not clear which, if any, pose estimator performs best for SLR. We compare the three most popular pose estimators for SLR: OpenPose, MMPose and MediaPipe. We show that through keypoint normalization, missing keypoint imputation, and learning a pose embedding, we can obtain significantly better results and enable transfer learning. We show that keypoint-based embeddings contain" -"---\nabstract: 'The Transformer-based detectors (i.e., DETR) have demonstrated impressive performance on end-to-end object detection. However, transferring DETR to different data distributions may lead to a significant performance degradation. Existing adaptation techniques focus on model-based approaches, which aim to leverage feature alignment to narrow the distribution shift between different domains. In this study, we propose a hierarchical Prompt Domain Memory (PDM) for adapting detection transformers to different distributions. PDM comprehensively leverages the prompt memory to extract domain-specific knowledge and explicitly constructs a long-term memory space for the data distribution, which represents better domain diversity compared to existing methods. Specifically, each prompt and its corresponding distribution value are paired in the memory space, and we inject top M distribution-similar prompts into the input and multi-level embeddings of DETR. Additionally, we introduce the Prompt Memory Alignment (PMA) to reduce the discrepancy between the source and target domains by fully leveraging the domain-specific knowledge extracted from the prompt domain memory. 
Extensive experiments demonstrate that our method outperforms state-of-the-art domain adaptive object detection methods on three benchmarks, including scene, synthetic to real, and weather adaptation. Codes will be released.'\nauthor:\n- Peidong Jia\n- Jiaming Liu\n- Senqiao Yang\n- Jiarui Wu\n- Xiaodong" -"---\nabstract: 'With the increasing popularity and the increasing size of vision transformers (ViTs), there has been an increasing interest in making them more efficient and less computationally costly for deployment on edge devices with limited computing resources. Binarization can be used to help reduce the size of ViT models and their computational cost significantly, using popcount operations when the weights and the activations are in binary. However, ViTs suffer a larger performance drop when directly applying convolutional neural network (CNN) binarization methods or existing binarization methods to binarize ViTs compared to CNNs on datasets with a large number of classes such as ImageNet-1k. With extensive analysis, we find that binary vanilla ViTs such as DeiT miss out on a lot of key architectural properties that CNNs have that allow binary CNNs to have much higher representational capability than binary vanilla ViT. Therefore, we propose BinaryViT, in which inspired by the CNN architecture, we include operations from the CNN architecture into a pure ViT architecture to enrich the representational capability of a binary ViT without introducing convolutions. These include an average pooling layer instead of a token pooling layer, a block that contains multiple average pooling branches, an affine transformation" -"---\nbibliography:\n- 'ref.bib'\n---\n\n[ **2D Fractons from Gauging Exponential Symmetries** ]{}\n\nGuilherme Delfino^1,&^ and Claudio Chamon^1,\\*^ and Yizhi You^2,\\#^\n\n${}^{1}$ [*Department of Physics, Boston University, MA, 02215, USA*]{}\\\n${}^{2}$ [*Department of Physics, Northeastern University, MA, 02115, USA*]{}\\\n\n& \\\n\\* \\\n\\# \n\nAbstract {#abstract .unnumbered}\n========\n\nThe scope of quantum field theory is extended by introducing a broader class of discrete gauge theories with fracton behavior in 2+1D. We consider translation invariant systems that carry special charge conservation laws, which we refer to as exponential polynomial symmetries. Upon gauging these symmetries, the resulting $\\mathbb{Z}_N$ gauge theories exhibit fractonic physics, including constrained mobility of quasiparticles and UV dependence of the ground state degeneracy. For appropriate values of theory parameters, we find a family of models whose excitations, albeit being deconfined, can only move in the form of bound states rather than isolated monopoles. For concreteness, we study in detail the low-energy physics and topological sectors of a particular model through a universal protocol, developed for determining the holonomies of a given theory. We find that a single excitation, isolated in a region of characteristic size $R$, can only move from its original position through the action of operators" -"---\nabstract: 'The Galactic diffuse emission (GDE) is formed when cosmic rays leave the sources where they were accelerated, diffusively propagate in the Galactic magnetic field, and interact with the interstellar medium and interstellar radiation field. GDE in $\\gamma$-ray (GDE-$\\gamma$) has been observed up to sub-PeV energies, though its origin may be explained by either cosmic-ray nuclei or electrons. 
We show that the $\\gamma$-rays accompanying the high-energy neutrinos recently observed by the IceCube Observatory from the Galactic plane have a flux that is consistent with the GDE-$\\gamma$ observed by the [*Fermi*]{}-LAT and Tibet AS$\\gamma$ experiments around 1\u00a0TeV and 0.5\u00a0PeV, respectively. The consistency suggests that the diffuse $\\gamma$-ray emission above $\\sim$1\u00a0TeV could be dominated by hadronuclear interactions, though partial leptonic contribution cannot be excluded. Moreover, by comparing the fluxes of the Galactic and extragalactic diffuse emission backgrounds, we find that the neutrino luminosity of the Milky Way is one to two orders of magnitude lower than the average of distant galaxies. This implies that our Galaxy has not hosted the type of neutrino emitters that dominates the isotropic neutrino background at least in the past few tens of kiloyears.'\nauthor:\n- Ke Fang\n- 'John S. Gallagher'\n-" -"---\nauthor:\n- 'Fabian Zimmer,[!!]{}'\n- 'Camila A. Correa,'\n- 'and Shin\u2019ichiro Ando'\ntitle: Influence of local structure on relic neutrino abundances and anisotropies \n---\n\nIntroduction {#sec:intro}\n============\n\nThe cosmic neutrino background (CNB) is one of the last fundamental predictions of the cosmological model which remains undetected in a laboratory setting. Indirect evidence was found as phase shifts in the cosmic microwave background (CMB) and baryon acoustic oscillations (BAO) power spectra\u00a0[@Follin:2015hya; @Baumann:2019keh]. Additionally, the precise determinations of the effective number of relativistic species in the early Universe from the Planck collaboration\u00a0[@Planck:2018vyg], from big bang nucleosynthesis (BBN) analyses\u00a0[@Pisanti:2020efz; @Fields:2019pfx] and the theoretical predictions of its value from the $\\Lambda$CDM cosmological model\u00a0[@Akita:2020szl; @EscuderoAbenza:2020cmq; @Bennett:2019ewm; @Froustey:2020mcq; @Cielo:2023bqp] all are in remarkable agreement and give us high confidence of the existence of three neutrino families in the early Universe. Understanding the subsequent evolution of the neutrinos comprising the CNB, henceforth referred to as relic neutrinos, is subject to ongoing theoretical and experimental efforts. Of vital importance to future experiments aiming to detect relic neutrinos, especially for neutrino capture on beta-decaying nuclei experiments such as PTOLEMY [@Betts:2013uya; @Long:2014zva; @PTOLEMY:2019hkd], is the number of relic neutrinos in the vicinity of Earth. We" -"---\nabstract: |\n A new multivariate integer-valued Generalized AutoRegressive Conditional Heteroscedastic process based on a multivariate Poisson generalized inverse Gaussian distribution is proposed. The estimation of parameters of the proposed multivariate heavy-tailed count time series model via maximum likelihood method is challenging since the likelihood function involves a Bessel function that depends on the multivariate counts and its dimension. As a consequence, numerical instability is often experienced in optimization procedures. To overcome this computational problem, two feasible variants of the Expectation-Maximization (EM) algorithm are proposed for estimating parameters of our model under low and high-dimensional settings. These EM algorithm variants provide computational benefits and help avoid the difficult direct optimization of the likelihood function from the proposed model. 
Our model and proposed estimation procedures can handle multiple features such as modeling of multivariate counts, heavy-tailedness, overdispersion, accommodation of outliers, allowances for both positive and negative autocorrelations, estimation of cross/contemporaneous-correlation, and the efficient estimation of parameters from both statistical and computational points of view. Extensive Monte Carlo simulation studies are presented to assess the performance of the proposed EM algorithms. An application to modeling bivariate count time series data on cannabis possession-related offenses in Australia is discussed.\\\n [**MOS subject Classification**]{}. Primary:" -"---\nabstract: |\n Recurring auctions are ubiquitous for selling durable assets like artworks and homes, with follow-up auctions held for unsold items. We investigate such auctions theoretically and empirically. Theoretical analysis demonstrates that recurring auctions outperform single-round auctions when buyers face entry costs, enhancing efficiency and revenue due to sorted entry of potential buyers. Optimal reserve price sequences are characterized. Empirical findings from home foreclosure auctions in China reveal significant annual gains in efficiency (3.40 billion USD, 16.60%) and revenue (2.97 billion USD, 15.92%) using recurring auctions compared to single-round auctions. Implementing optimal reserve prices can further improve efficiency (3.35%) and revenue (3.06%).\n\n Keywords:\n\n : recurring auctions, auction design, sorting, entry.\n\n JEL Classification Codes:\n\n : D44, D82, R31.\n\nauthor:\n- 'Shanglyu Deng, Qiyao Zhou[^1]'\nbibliography:\n- 'reference.bib'\ntitle: |\n Recurring Auctions with Costly Entry:\\\n Theory and Evidence\n---\n\nIntroduction\n============\n\nAuctions for durable assets like houses and artworks are commonly recurring: a subsequent auction will often be held if the initial one fails to sell the item. Despite the prevalence of recurring auctions, there has been limited scholarly effort to understand why they exist, let alone their equilibrium properties. One possible explanation for their existence is that sellers are subject" -"---\naddress: ', , '\nauthor:\n- 'Ouyuan Qin and Kuan Xu\\*'\nbibliography:\n- 'nonlinear-bib.bib'\ntitle: Solving nonlinear ODEs with the ultraspherical spectral method\n---\n\nIntroduction {#sec:intro}\n============\n\nIn this article, we extend the ultraspherical spectral method [@olv] to solving the nonlinear ODE boundary value problem $$\\begin{aligned}\n\\mF(u) = 0, ~\\text{s.t.}~~ \\mN(u) = 0,\\end{aligned}$$ where $\\mF$ is a nonlinear differential operator on $u(x)$. The solution $u(x)$ is a univariate function of the independent variable $x \\in [-1, 1]$. By nonlinear, it is meant that $\\mF$ cannot be written in the form of \\[opL\\] (see \\[sec:us\\] below). The functional constraint $\\mN$ contains linear or nonlinear boundary conditions or side constraints of other types, such as interior point conditions, global constraints, etc.\n\nIn the very last paragraph of @olv, the authors briefly discussed the possibility of solving nonlinear differential equations by the ultraspherical spectral method and flagged the loss of bandedness in the multiplication operators as a threat to the sparsity of the linear system and, therefore, to the exceptional speed of the ultraspherical spectral method. A decade has elapsed since the publication of @olv and it seems that no progress has been made towards this extension. 
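For background on why the linear part of the ultraspherical method is so fast: differentiation is banded in these bases because of identities such as $\frac{d}{dx}T_k(x) = k\,U_{k-1}(x)$, where $T_k$ and $U_k$ are Chebyshev polynomials of the first and second kind (the latter being the ultraspherical family $C^{(1)}$), so the first-derivative operator is a single superdiagonal. A quick numerical sanity check of that identity:

```python
import numpy as np
from scipy.special import eval_chebyt, eval_chebyu

# Check d/dx T_k(x) = k * U_{k-1}(x), the identity that makes differentiation
# a single superdiagonal in the Chebyshev-to-ultraspherical C^(1) map.
x = np.linspace(-0.9, 0.9, 7)
h = 1e-6
for k in range(1, 6):
    dT = (eval_chebyt(k, x + h) - eval_chebyt(k, x - h)) / (2 * h)  # central difference
    assert np.allclose(dT, k * eval_chebyu(k - 1, x), atol=1e-5)
print("d/dx T_k = k U_{k-1} holds for k = 1..5")
```

Multiplication operators, by contrast, are banded only for low-degree polynomial coefficients, which is exactly the sparsity concern the nonlinear extension has to confront.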
This paper intends to fill" -"---\nabstract: 'The morphological evolution of nanoporous gold is generally believed to be governed by surface diffusion. This work specifically explores the dependence of mass transport by surface diffusion on the curvature of a gold surface. The surface diffusivity is estimated by molecular dynamics simulations for a variety of surfaces of constant mean curvature, eliminating any chemical potential gradients and allowing the possible dependence of the surface diffusivity on mean curvature to be isolated. The apparent surface diffusivity is found to have an activation energy of ${\\raise.17ex\\hbox{$\\scriptstyle\\sim$}}0.74$ eV with a weak dependence on curvature, but is consistent with the values reported in the literature. The apparent concentration of mobile surface atoms is found to be highly variable, having an Arrhenius dependence on temperature with an activation energy that also has a weak curvature dependence. These activation energies depend on curvature in such a way that the rate of mass transport by surface diffusion is nearly independent of curvature, but with a higher activation energy of ${\\raise.17ex\\hbox{$\\scriptstyle\\sim$}}1.01$ eV. The curvature dependencies of the apparent surface diffusivity and concentration of mobile surface atoms are believed to be related to the expected lifetime of a mobile surface atom, and have the practical consequence" -"---\nauthor:\n- 'M. Pieczarka'\n- 'M. G\u0119bski'\n- 'A. N. Piasecka'\n- 'J. A.\u00a0Lott'\n- 'A. Pelster'\n- 'M. Wasiak'\n- 'T. Czyszanowski'\nbibliography:\n- 'references.bib'\ntitle: 'Bose-Einstein condensation of photons in a vertical-cavity surface-emitting laser'\n---\n\n**Many bosons can occupy a single quantum state without a limit. This state is described by quantum-mechanical Bose-Einstein statistics, which allows the formation of a Bose-Einstein condensate at low temperatures and high particle densities. Photons, historically the first considered bosonic gas, were late to show this phenomenon, which was observed in rhodamine-filled microlaser cavities and doped fiber cavities. These more recent findings have raised the natural question as to whether condensation is common in laser systems, with potential technological applications. Here, we show the Bose-Einstein condensation of photons in a broad-area vertical-cavity surface-emitting laser with positive cavity mode-gain peak energy detuning. We observed a Bose-Einstein condensate in the fundamental transversal optical mode at the critical phase-space density. The experimental results follow the equation of state for a two-dimensional gas of bosons in thermal equilibrium, although the extracted spectral temperatures were lower than those of the device. This is interpreted as originating from the driven-dissipative nature of the device and the stimulated" -"---\nabstract: 'One-dimensional quantized conductance is derived from the electrons in a homogeneous electric field by calculating the traveling time of the accelerated motion and the number of electrons in the one-dimensional region. As a result, the quantized conductance is attributed to the finite time required for ballistic electrons to travel a finite length. In addition, this model requires no Joule heat dissipation, even if the conductance is a finite value, because the electric power is converted to kinetic energy of electrons. 
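For reference, the conductance quantum in which the preceding entry's plateaus are measured evaluates to about $7.75\times10^{-5}$ S, i.e. roughly 12.9 k$\Omega$ per spin-degenerate channel; a one-liner check with CODATA constants:

```python
from scipy.constants import e, h

G0 = 2 * e**2 / h                    # conductance quantum, spin-degenerate channel
print(f"G0   = {G0:.4e} S")          # ~ 7.7481e-05 S
print(f"1/G0 = {1 / G0:.1f} Ohm")    # ~ 12906.4 Ohm
```

The anomalous plateau discussed next sits at about 0.7 of this value.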
Furthermore, the relationship between the non-equilibrium source-drain bias $V_\\mathrm{sd}$ and wavenumber $k$ in a one-dimensional conductor is shown as $k \\propto \\sqrt{V_\\mathrm{sd}}$. This correspondence accounts for the wavelength of the coherent electron flows emitted from a quantum point contact. Furthermore, it explains the anomalous $0.7 \\cdot 2e^2/h$ ($e$ is the elementary charge, and $h$ is Planck\u2019s constant) conductance plateau as a consequence of the perturbation gap at the crossing point of the wavenumber-directional-splitting dispersion relation. We propose that this splitting is caused by the Rashba spin-orbit interaction induced by the potential gradient of the quantum well at quantum point contacts.'\nauthor:\n- 'D. Terasawa'\ntitle: Quantized Conductance by Accelerated Electrons\n---\n\nIntroduction\n============\n\nSince the successful interpretation" -"---\nabstract: 'We develop several provably efficient model-free reinforcement learning (RL) algorithms for infinite-horizon average-reward Markov Decision Processes (MDPs). We consider both the online setting and the setting with access to a simulator. In the online setting, we propose model-free RL algorithms based on reference-advantage decomposition. Our algorithm achieves $\\widetilde{O}(S^5A^2\\mathrm{sp}(h^*)\\sqrt{T})$ regret after $T$ steps, where $S\\times A$ is the size of state-action space, and $\\mathrm{sp}(h^*)$ the span of the optimal bias function. Our results are the first to achieve optimal dependence in $T$ for weakly communicating MDPs. In the simulator setting, we propose a model-free RL algorithm that finds an $\\epsilon$-optimal policy using $\\widetilde{O} \\left(\\frac{SA\\mathrm{sp}^2(h^*)}{\\epsilon^2}+\\frac{S^2A\\mathrm{sp}(h^*)}{\\epsilon} \\right)$ samples, whereas the minimax lower bound is $\\Omega\\left(\\frac{SA\\mathrm{sp}(h^*)}{\\epsilon^2}\\right)$. Our results are based on two new techniques that are unique in the average-reward setting: 1) better discounted approximation by value-difference estimation; 2) efficient construction of confidence region for the optimal bias function with space complexity $O(SA)$.'\nauthor:\n- |\n Zihan Zhang$^\\dagger$, Qiaomin Xie$^\\mathsection$ [^1]\\\n \u00a0\\\n $^\\dagger$ Princeton University\\\n $^\\mathsection$ University of Wisconsin-Madison\nbibliography:\n- 'rl\\_refs.bib'\ntitle: 'Sharper Model-free Reinforcement Learning for Average-reward Markov Decision Processes'\n---\n\nIntroduction\n============\n\nReinforcement learning (RL) has emerged as a paradigm for solving challenging sequential decision-making problems and recently led to" -"---\nabstract: 'These notes present the fundamentals of Fermi acceleration at shocks, with special attention to the role that supernova remnants have in producing Galactic cosmic rays. Then, the recent discoveries in the theory of diffusive shock acceleration (DSA) that stem from first-principle kinetic plasma simulations are discussed. When ion acceleration is efficient, the back-reaction of non-thermal particles and self-generated magnetic fields becomes prominent and leads to both enhanced shock compression and particle spectra significantly softer than those predicted by the standard test-particle DSA theory. 
These results are discussed in the context of the non-thermal phenomenology of astrophysical shocks, with a special focus on the remnant of SN1006.'\nauthor:\n- 'D. Caprioli'\nbibliography:\n- 'Total.bib'\ntitle: 'Particle Acceleration at Shocks: An Introduction'\n---\n\nThe SNR Paradigm for the Origin of Galactic CRs\n===============================================\n\nThe origin of cosmic rays (CRs) has been an outstanding issue in Astrophysics since the pioneering discovery by V. Hess in 1911. At least for relatively low energies, below and around the so-called knee of the overall CR spectrum ($\\sim 10^{15}$ eV), the best source candidates have been supernova remnants (SNRs).\n\nIn 1934 Baade and Zwicky [@baade+34] suggested that supernova (SN) explosions were due to the release of" -"---\nabstract: 'Nowadays, the compression performance of neural-network-based image compression algorithms outperforms state-of-the-art compression approaches such as JPEG or HEIC-based image compression. Unfortunately, most neural-network based compression methods are executed on GPUs and consume a high amount of energy during execution. Therefore, this paper performs an in-depth analysis of the energy consumption of state-of-the-art neural-network based compression methods on a GPU and shows that the energy consumption of compression networks can be estimated using the image size with mean estimation errors of less than $7\\%$. Finally, using a correlation analysis, we find that the number of operations per pixel is the main driving force for energy consumption and deduce that the network layers up to the second downsampling step are consuming most energy.'\naddress: |\n Multimedia Communications and Signal Processing\\\n [Friedrich-Alexander-Universit\u00e4t Erlangen-N\u00fcrnberg (FAU)]{}\\\n Erlangen, Germany \nbibliography:\n- 'literature.bib'\ntitle: |\n Processing Energy Modeling\\\n for Neural Network Based Image Compression \n---\n\n\u00a92023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of" -"---\nabstract: 'In this paper, we mainly study the impact of the implied certainty equivalent rate on investment in financial markets. First, we derived the mathematical expression of the implied certainty equivalent rate by using put-call parity, and then we selected some company stocks and options; we considered the best-performing and worst-performing company stocks and options from the beginning of 2023 to the present for empirical research. By visualizing the relationship between the time to maturity, moneyness, and implied certainty equivalent rate of these options, we have obtained a universal conclusion\u2014a positive implied certainty equivalent rate is more suitable for investment than a negative implied certainty equivalent rate, but for a positive implied certainty equivalent rate, a larger value also means a higher investment risk. Next, we applied these results to the electric vehicle industry, and by comparing several well-known US electric vehicle production companies, we further strengthened our conclusions. 
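The entry above derives its implied certainty equivalent rate from put-call parity. Its exact expression is not reproduced here, but the standard parity-implied rate solves $C - P = S - Ke^{-rT}$ for $r$; a sketch of that standard computation (illustrative only, and possibly differing in detail from the paper's definition):

```python
import math

def parity_implied_rate(call, put, spot, strike, T):
    """Rate implied by put-call parity C - P = S - K * exp(-r*T).

    Illustrative only: the paper's implied certainty equivalent rate is
    derived from the same parity relation but may differ in detail.
    """
    forward_discount = (spot - call + put) / strike   # equals exp(-r*T)
    if forward_discount <= 0:
        raise ValueError("inputs violate no-arbitrage bounds")
    return -math.log(forward_discount) / T

# hypothetical quote: C=12.0, P=7.5, S=100, K=100, 6 months to expiry
print(parity_implied_rate(12.0, 7.5, 100.0, 100.0, 0.5))  # ~ 0.092
```

Because the rate is backed out of traded option prices rather than a yield curve, its sign and magnitude reflect market-implied carry, which is what the entry's investment conclusions are built on.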
Finally, we give a warning concerning risk, that is, investment in the financial market should not focus solely on the implied certainty equivalent rate, because investment is not an easy task, and many factors need to be considered, including some factors that are difficult to predict with models.'\nauthor:\n-" -"---\nabstract: 'Approaching the era of ubiquitous computing, human motion sensing plays a crucial role in smart systems for decision making, user interaction, and personalized services. Extensive research has been conducted on human tracking, pose estimation, gesture recognition, and activity recognition, which are predominantly based on cameras in traditional methods. However, the intrusive nature of cameras limits their use in smart home applications. To address this, mmWave radars have gained popularity due to their privacy-friendly features. In this work, we propose *milliFlow*, a novel deep learning method for scene flow estimation as a complementary motion information for mmWave point cloud, serving as an intermediate level of features and directly benefiting downstream human motion sensing tasks. Experimental results demonstrate the superior performance of our method with an average 3D endpoint error of 4.6cm, significantly surpassing the competing approaches. Furthermore, by incorporating scene flow information, we achieve remarkable improvements in human activity recognition, human parsing, and human body part tracking. To foster further research in this area, we will provide our codebase and dataset for open access upon acceptance.'\nauthor:\n- Fangqiang Ding\n- Zhen Luo\n- Peijun Zhao\n- Chris Xiaoxuan Lu\nbibliography:\n- 'reference.bib'\ntitle: 'milliFlow: Scene Flow Estimation on" -"---\nabstract: 'We propose a generalized qubitization technique for quantum amplitude estimation\u00a0(QAE), which is a fundamental technique used in various problems like quantum simulation and quantum machine learning. Without prior information on the amplitude, we optimize the number of queries to $\\frac{\\pi}{\\sqrt{6}\\epsilon}\\approx 1.28\\epsilon^{-1}$, which is exactly a half compared to the quantum phase estimation based algorithm. We also discuss how our result improves the performance of quantum expectation value estimation and quantum nonlinear quantity estimation like the von Neumann entropy.'\nauthor:\n- Xi Lu\n- Hongwei Lin\nbibliography:\n- 'ref.bib'\ntitle: Quantum Amplitude Estimation by Generalized Qubitization\n---\n\nIntroduction\n============\n\nAn essential application of quantum computing is to simulate quantum systems\u00a0[@lloyd1996universal; @brown2010using]. For example, in quantum chemistry, quantum simulation improves the efficiency in estimating the ground state energy, the dipole moment, the polarizability, the electron density and so on\u00a0[@aspuru2005simulated; @wang2008quantum; @abrams1997simulation; @abrams1999quantum]. Even though the quantum state in the quantum computer is a full description of the quantum system, it gives us only limited access to the process. Like all quantum algorithms, we have to design smart quantum algorithms to retrieve the information. Quantum parameter estimation is a task of estimating the value of a continuous parameter" -"---\nabstract: 'Invariance to spatial transformations such as translations and rotations is a desirable property and a basic design principle for classification neural networks. However, the commonly used convolutional neural networks (CNNs) are actually very sensitive to even small translations. 
A vast body of work aims to achieve exact or approximate transformation invariance by designing transformation-invariant models or by assessing the transformations. These works usually make changes to the standard CNNs and harm the performance on standard datasets. In this paper, rather than modifying the classifier, we propose a pre-classifier restorer to recover translated (or even rotated) inputs to the original ones which will be fed into any classifier for the same dataset. The restorer is based on a theoretical result which gives a sufficient and necessary condition for an affine operator to be translationally equivariant on a tensor space.'\nauthor:\n- |\n Yihan Wang, Lijia Yu, Xiao-Shan Gao\\\n Academy of Mathematics and Systems Science, Chinese Academy of Sciences\\\n University of Chinese Academy of Sciences\ntitle: Restore Translation Using Equivariant Neural Networks \n---\n\nIntroduction {#sec:intro}\n============\n\nDeep convolutional neural networks (CNNs) have outperformed humans in many computer vision tasks\u00a0[@lecun1998gradient; @he2016deep]. One of the key ideas in designing the CNNs is that the" -"---\nabstract: '[*Versatile Video Coding* ]{}\u00a0(VVC) allows for large compression efficiency gains over its predecessor, [*High Efficiency Video Coding* ]{}\u00a0(HEVC). The added efficiency comes at the cost of increased runtime complexity, especially for encoding. It is thus highly relevant to explore all available runtime reduction options. This paper proposes a novel first pass for two-pass rate control in all-intra configuration, using low-complexity video analysis and a Random Forest (RF)-based machine learning model to derive the data required for driving the second pass. The proposed method is validated using VVenC, an open and optimized VVC encoder. Compared to the default two-pass rate control algorithm in VVenC, the proposed method achieves around 32% reduction in encoding time for the preset *faster*, while on average only causing $2\\%$ BD-rate increase and achieving similar rate control accuracy.'\naddress: |\n $^1$ Christian Doppler Laboratory ATHENA, Alpen-Adria-Universit[\u00e4]{}t, Klagenfurt, Austria\\\n $^2$ Video Communication and Applications Department, Fraunhofer HHI, Berlin, Germany\\\n $^3$ CEA, List, F-91120 Palaiseau, Universit\u00e9 Paris-Saclay, France\nbibliography:\n- 'references.bib'\ntitle: |\n All-intra rate control using\\\n low complexity video features for Versatile Video Coding\n---\n\nRate control, Complexity reduction, Random Forest, Machine learning, VVC.\n\nIntroduction\n============\n\nModern video standards come with ever-increasing complexity. The" -"---\nabstract: 'Quantization is commonly used in Deep Neural Networks (DNNs) to reduce the storage and computational complexity by decreasing the arithmetical precision of activations and weights, a.k.a. tensors. Efficient hardware architectures employ linear quantization to enable the deployment of recent DNNs onto embedded systems and mobile devices. However, linear uniform quantization cannot usually reduce the numerical precision to less than 8 bits without sacrificing high performance in terms of model accuracy. The performance loss is due to the fact that tensors do not follow uniform distributions. In this paper, we show that a significant amount of tensors fit into an exponential distribution. Then, we propose DNA-TEQ to exponentially quantize DNN tensors with an adaptive scheme that achieves the best trade-off between numerical precision and accuracy loss. 
The experimental results show that DNA-TEQ provides a much lower quantization bit-width compared to previous proposals, resulting in an average compression ratio of 40% over the linear INT8 baseline, with negligible accuracy loss and without retraining the DNNs. In addition, DNA-TEQ leads the way in performing dot-product operations in the exponential domain. On average for a set of widely used DNNs, DNA-TEQ provides $1.5x$ speedup and $2.5x$ energy savings over a baseline DNN accelerator" -"---\nabstract: 'We develop a self-supervised ensemble learning (SSEL) method to accurately classify distinct types of phase transitions by analyzing the fluctuation properties of machine learning outputs. Employing the 2D Potts model and the 2D Clock model as benchmarks, we demonstrate the capability of SSEL in discerning first-order, second-order, and Berezinskii-Kosterlitz-Thouless transitions, using in-situ spin configurations as the input features. Furthermore, we show that the SSEL approach can also be applied to investigate quantum phase transitions in 1D Ising and 1D XXZ models upon incorporating quantum sampling. We argue that the SSEL model simulates a special state function with higher-order correlations between physical quantities, and hence provides richer information than previous machine learning methods. Consequently, our SSEL method can be generally applied to the identification/classification of phase transitions even without explicit knowledge of the underlying theoretical models.'\nauthor:\n- 'Chi-Ting Ho'\n- 'Daw-Wei Wang'\nbibliography:\n- 'main.bib'\ntitle: ' Self-Supervised Ensemble Learning: A Universal Method for Phase Transition Classification of Many-Body Systems '\n---\n\nIntroduction\n============\n\nThe integration of machine learning (ML) techniques into theoretical and experimental physics has attracted substantial interest in recent years. A notable advantage of these methods, specifically supervised learning, lies in the efficient simulation of" -"---\nauthor:\n- 'M. Corpart'\n- 'F. Restagno'\n- 'F. Boulogne'\nbibliography:\n- 'biblio.bib'\ntitle: Analytical prediction of the temperature and the lifetime of an evaporating spherical droplet\n---\n\nIntroduction\n============\n\nSublimation of solid spheres has been investigated experimentally by Morse in 1910, revealing that the mass loss is not proportional to the surface area but to the radius [@Morse1910]. Langmuir rationalized these findings by considering an adiabatic process where mass transfer is controlled by the diffusion of the vapor in the air\u00a0[@Langmuir1918].\n\nThe study of spherical droplet evaporation holds significant importance in diverse scientific and technical domains that involve aerosols. Aerosols are produced naturally by phenomena such as sea spray, fog, clouds, and raindrops. Suspended droplets are also generated by animals and humans during breathing and speaking, which has recently gained attention for airborne contaminants\u00a0[@Netz2020; @Rezaei2021; @Pan2022]. Aerosols can also be produced artificially with spraying techniques for cooling, painting applications, or fuel dispersion in motor engines\u00a0[@Erbil2012]. Therefore, understanding the mass transfer of airborne volatile drops is crucial. 
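The radius (rather than area) scaling recalled at the start of this entry follows from the quasi-steady diffusion solution around a sphere. With $D$ the vapor diffusivity and $c_s$, $c_\infty$ the vapor concentrations at the surface and far away (standard notation introduced here for illustration), the steady concentration field and the resulting mass-loss rate are

$$c(r)=c_\infty+(c_s-c_\infty)\frac{R}{r},\qquad \frac{dm}{dt}=4\pi R^2 D\left.\frac{\partial c}{\partial r}\right|_{r=R}=-4\pi R\,D\,(c_s-c_\infty),$$

which is linear in $R$ and, once integrated, gives the classical "$d^2$ law": the squared radius decreases linearly in time, so the droplet lifetime scales with the square of its initial radius.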
This phenomenon is complex due to the coupled heat and mass transfer associated with the phase change, while the transport could occur in a diffusive or a" -"---\nabstract: 'We propose a novel probabilistically robust controller for the guidance of an unmanned aerial vehicle (UAV) in coverage planning missions, which can simultaneously optimize both the UAV\u2019s motion, and camera control inputs for the 3D coverage of a given object of interest. Specifically, the coverage planning problem is formulated in this work as an optimal control problem with logical constraints to enable the UAV agent to jointly: a) select a series of discrete camera field-of-view states which satisfy a set of coverage constraints, and b) optimize its motion control inputs according to a specified mission objective. We show how this hybrid optimal control problem can be solved with standard optimization tools by converting the logical expressions in the constraints into equality/inequality constraints involving only continuous variables. Finally, probabilistic robustness is achieved by integrating the unscented transformation to the proposed controller, thus enabling the design of robust open-loop coverage plans which take into account the future posterior distribution of the UAV\u2019s state inside the planning horizon.'\nauthor:\n- |\n Savvas\u00a0Papaioannou,\u00a0Panayiotis\u00a0Kolios,\u00a0Theocharis\u00a0Theocharides,\\\n \u00a0Christos\u00a0G.\u00a0Panayiotou\u00a0 and \u00a0Marios\u00a0M.\u00a0Polycarpou[^1]\nbibliography:\n- 'IEEEabrv.bib'\n- 'main.bib'\ntitle: |\n Unscented Optimal Control for 3D Coverage Planning\\\n with an Autonomous" -"---\nabstract: 'The worldwide COVID-19 pandemic has led to a significant growth of interest in the development of mathematical models that allow to describe effects such as social distancing measures, the development of vaccines, and mutations. Several of these models are based on concepts from soft matter theory. Considerably less well investigated is the reverse direction, i.e., how results from epidemiological research can be of interest for the physics of colloids and polymers. In this work, we consider the SIR-DDFT model, a combination of the susceptible-infected-recovered (SIR) model from epidemiology with dynamical density functional theory (DDFT) from nonequilibrium soft matter physics, which allows for an explicit modeling of social distancing. We extend the SIR-DDFT model both from an epidemiological perspective by incorporating vaccines, asymptomaticity, reinfections, and mutations, and from a soft matter perspective by incorporating noise and self-propulsion and by deriving a phase field crystal (PFC) model that allows for a simplified description. On this basis, we investigate via computer simulations how epidemiological models are affected by the presence of non-reciprocal interactions. This is done in a numerical study of a zombie outbreak.'\nauthor:\n- Michael te Vrugt\n- Julian Jeggle\n- Raphael Wittkowski\nbibliography:\n- 'refs.bib'\ntitle: Passive and" -"---\nabstract: 'In Koopman operator theory, a finite-dimensional nonlinear system is transformed into an infinite but linear system using a set of observable functions. However, manually selecting observable functions that span the invariant subspace of the Koopman operator based on prior knowledge is inefficient and challenging, particularly when little or no information is available about the underlying systems. 
Furthermore, current methodologies tend to disregard the importance of the invertibility of observable functions, which leads to inaccurate results. To address these challenges, we propose the so-called FlowDMD, a Flow-based Dynamic Mode Decomposition that utilizes the Coupling Flow Invertible Neural Network (CF-INN) framework. FlowDMD leverages the intrinsically invertible characteristics of the CF-INN to learn the invariant subspaces of the Koopman operator and accurately reconstruct state variables. Numerical experiments demonstrate the superior performance of our algorithm compared to state-of-the-art methodologies.'\nauthor:\n- Yuhuang Meng\n- Jianguo Huang\n- Yue Qiu\nbibliography:\n- 'flowDMD.bib'\ntitle: 'Physics-informed invertible neural network for the Koopman operator learning '\n---\n\nKoopman operator, Generative models, Invertible neural networks\n\nIntroduction {#sec-Introduction}\n============\n\nNonlinear dynamic systems are widely prevalent in both theory and engineering applications. Since the governing equations are generally unknown in many situations, it can be challenging to study" -"---\nabstract: 'We develop a numerical method based on canonical conformal variables to study two eigenvalue problems for operators fundamental to finding a Stokes wave and its stability in a 2D ideal fluid with a free surface in infinite depth. We determine the spectrum of the linearization operator of the quasiperiodic Babenko equation, and provide new results for eigenvalues and eigenvectors near the limiting Stokes wave identifying new bifurcation points via the Fourier-Floquet-Hill (FFH) method. We conjecture that infinitely many secondary bifurcation points exist as the limiting Stokes wave is approached. The eigenvalue problem for stability of Stokes waves is also considered. The new technique is extended to allow finding quasiperiodic eigenfunctions by introducing the FFH approach into the method based on canonical conformal variables. Our findings agree with and extend existing results for the Benjamin-Feir, high-frequency and localized instabilities. For both problems the numerical methods are based on Krylov subspaces and do not require forming operator matrices. Each operator is applied pseudospectrally, employing the fast Fourier transform (FFT), thus enjoying the benefits of spectral accuracy and $O(N\\log N)$ numerical complexity. Extension to nonuniform grid spacing is possible via introducing auxiliary conformal maps.'\nauthor:\n- 'Sergey A. Dyachenko'\n-" -"---\nabstract: 'Solving symbolic reasoning problems that require compositionality and systematicity is considered one of the key ingredients of human intelligence. However, symbolic reasoning is still a great challenge for deep learning models, which often cannot generalize the reasoning pattern to out-of-distribution test cases. In this work, we propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols. The model acquires such a skill by learning appropriate substitution rules, which are applied iteratively to the input string until the expression is completely resolved.
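Editor's note: to illustrate the iterative substitution strategy described in the last abstract above, here is a toy, hand-written simplifier in the same spirit (the actual system learns its rules; the regex rule below is a hypothetical stand-in):

```python
import re

# innermost "(a op b)" with integer operands; a hypothetical stand-in for
# the learned substitution rules described above
LEAF = re.compile(r"\((-?\d+)([+\-*])(-?\d+)\)")

def simplify_step(expr: str) -> str:
    """Apply one substitution rule: rewrite one innermost sub-expression."""
    def evaluate(m):
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        return str(a + b if op == "+" else a - b if op == "-" else a * b)
    return LEAF.sub(evaluate, expr, count=1)

def solve(expr: str) -> str:
    """Iterate substitutions until the expression is completely resolved."""
    while (nxt := simplify_step(expr)) != expr:
        expr = nxt
    return expr

print(solve("((1+2)*(3-(4+5)))"))  # -> "-18"
```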
We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases, significantly outperforming both a sequence-to-sequence model trained end-to-end and a state-of-the-art large language model.'\naddress:\n- 'Department of Mathematics, University of Padova, Padova, Italy'\n- 'Department of General Psychology, University of Padova, Padova, Italy'\nauthor:\n- Flavio Petruzzellis\n- Alberto Testolin\n- Alessandro Sperduti\nbibliography:\n- '9-bib.bib'\ntitle: A Hybrid System for Systematic Generalization in Simple Arithmetic Problems\n---\n\n\\[email=flavio.petruzzellis@phd.unipd.it,\\]\n\n\\[email=alberto.testolin@unipd.it, \\]\n\n\\[email=alessandro.sperduti@unipd.it, \\]\n\ndeep learning, neural networks, mathematical reasoning, neuro-symbolic systems, formula simplification\n\nIntroduction {#par:intro}\n============\n\nDesigning systems that are" -"---\nabstract: 'Discovering the intended items of user queries from a massive repository of items is one of the main goals of an e-commerce search system. Relevance prediction is essential to the search system since it helps improve performance. When a relevance model is served online, it is required to perform fast and accurate inference. Currently, widely used models such as the Bi-encoder and Cross-encoder are limited in accuracy or inference speed, respectively. In this work, we propose a novel model called the Entity-Based Relevance Model (EBRM). We identify the entities contained in an item and decompose the QI (query-item) relevance problem into multiple QE (query-entity) relevance problems; we then aggregate their results to form the QI prediction using a soft logic formulation. The decomposition allows us to use a Cross-encoder QE relevance module for high accuracy as well as cache QE predictions for fast online inference. Utilizing soft logic makes the prediction procedure interpretable and intervenable. We also show that pretraining the QE module with auto-generated QE data from user logs can further improve the overall performance. The proposed method is evaluated on labeled data from e-commerce websites. Empirical results show that it achieves promising improvements with computation" -"---\nabstract: |\n Effective resistances are ubiquitous in graph algorithms and network analysis. For an undirected graph $G$, its effective resistance $R_G(s,t)$ between two vertices $s$ and $t$ is defined as the equivalent resistance between $s$ and $t$ if $G$ is thought of as an electrical network with unit resistance on each edge. If we use $L_G$ to denote the Laplacian matrix of $G$ and $L_G^{\\dagger}$ to denote its pseudo-inverse, we have $R_G(s,t)=(\\mathbf{1}_s-\\mathbf{1}_t)^{\\top} L_G^{\\dagger} (\\mathbf{1}_s-\\mathbf{1}_t)$ such that classical Laplacian solvers [@SpielmanT14] provide almost-linear time algorithms to approximate $R_G(s,t)$.\n\n In this work, we study *sublinear* time algorithms to approximate the effective resistance of an *adjacent pair* $s$ and $t$. We consider the classical adjacency list model [@ron2019sublinear] for local algorithms. While recent works [@andoni2018solving; @peng2021local; @li2023new] have provided sublinear time algorithms for *expander graphs*, we prove several lower bounds for *general graphs* of $n$ vertices and $m$ edges:\n\n 1.
It needs $\\Omega(n)$ queries to obtain $1.01$-approximations of the effective resistance of an adjacent pair $s$ and $t$, even for graphs of degree at most 3 except $s$ and $t$.\n\n 2. For graphs of degree at most $d$ and any parameter $\\ell$, it needs $\\Omega(m/\\ell)$ queries to obtain $c \\cdot \\min\\{d, \\ell\\}$-approximations" -"---\nauthor:\n- |\n M.\u00a0S.\u00a0Mirmoosa$^{1}$[^1], M.\u00a0H.\u00a0Mostafa$^{2}$[^2], A.\u00a0Norrman$^{1}$, and S.\u00a0A.\u00a0Tretyakov$^{2}$\\\n \\\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\ntitle: Time Interfaces in Bianisotropic Media\n---\n\n[Wave phenomena in bianisotropic media have been broadly scrutinized in classical electrodynamics, as these media offer additional degrees of freedom to engineer electromagnetic waves. However, all investigations concerning such systems have so far been limited to stationary (time-invariant) media. Temporally varying the magnetoelectric coupling manifesting bianisotropy engenders a unique prospect to manipulate wave-matter interactions in new ways. In this paper, we theoretically contemplate electromagnetic effects in weakly dispersive bianisotropic media of all classes when the corresponding magnetoelectric coupling parameter suddenly jumps in time, creating a time interface in spatially uniform bianisotropic media. We investigate scattering effects at such time interfaces, revealing novel polarization- and direction-dependent phenomena. We anticipate that our work paves the way for further exploration of time-varying bianisotropic metamaterials (metasurfaces) and bianisotropic photonic time crystals, thus opening up interesting possibilities to control wave polarization and amplitude in reciprocal and nonreciprocal manners.]{}\n\nIntroduction\n============\n\nInteraction of waves with systems whose effective properties change in time, although remaining uniform in space, has attracted significant curiosity\u00a0[@Engheta20NPH; @galiffi2022photonics; @Ptitcyn2023Tutorial]. In particular, the" -"---\nabstract: 'We report on the curation of several publicly available datasets for age and gender prediction. Furthermore, we present experiments to predict age and gender with models based on a pre-trained [[wav2vec2.0]{}]{}. Depending on the dataset, we achieve an error between $7.1$ years and $10.8$ years for age, and at least $91.1$% for gender (*female*, *male*, *child*). Compared to a modelling approach built on hand-crafted features, our proposed system shows an improvement of $9$% for age and $4$% for gender. To make our findings reproducible, we release the best performing model to the community as well as the sample lists of the data splits.'\naddress: |\n $^1$audEERING GmbH, Germany,\\\n $^2$Chair EIHW, University of Augsburg, Germany,\\\n $^3$GLAM, Imperial College London, UK\nauthor:\n- 'Felix Burkhardt$^1$, Johannes Wagner$^1$, Hagen Wierstorf$^1$, Florian Eyben$^1$, Bj\u00f6rn Schuller,$^{1,2,3}$'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Speech-based Age and Gender Prediction with Transformers'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe automatic detection of speaker age and gender has many use cases in human computer interaction, for example for dialogue adaptation or market research. In contrast to subjective phenomena such as emotional arousal, the age of a person may, like body size for example, be objectively determined by an exact measurement.
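Editor's note: stepping back to the effective-resistance abstract quoted earlier, the definition $R_G(s,t)=(\mathbf{1}_s-\mathbf{1}_t)^{\top} L_G^{\dagger} (\mathbf{1}_s-\mathbf{1}_t)$ translates directly into a few lines of code (a dense-matrix sketch for small graphs; the paper is precisely about avoiding this kind of full computation):

```python
import numpy as np

def effective_resistance(adj: np.ndarray, s: int, t: int) -> float:
    """R_G(s,t) = (1_s - 1_t)^T L^+ (1_s - 1_t) for an undirected graph
    given by its symmetric adjacency matrix with unit edge weights."""
    L = np.diag(adj.sum(axis=1)) - adj    # graph Laplacian
    L_pinv = np.linalg.pinv(L)            # Moore-Penrose pseudo-inverse
    e = np.zeros(len(adj))
    e[s], e[t] = 1.0, -1.0
    return float(e @ L_pinv @ e)

# sanity check: two unit resistors in series give R(0, 2) = 2
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(effective_resistance(path, 0, 2))  # ~2.0
```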
But, just like emotional" -"---\nabstract: 'When introducing a nanoparticle into an optical trap, its mass and shape are not immediately apparent. We combine a charge-based mass measurement with a shape determination method based on light scattering and an analysis of the damping rate anisotropy, all on the same set of silica nanoparticles, trapped using optical tweezers in vacuum. These methods have previously only been used separately, and the mass determination method has not been applied to asymmetric particles before. We demonstrate that the combination of these classification techniques is required to distinguish particles with similar mass but different shape, and vice versa. The ability to identify these parameters is a key step for a range of experiments on precision measurements and sensing using optically levitated nanoparticles.'\nauthor:\n- Bart Schellenberg\n- Mina Morshed Behbahani\n- Nithesh Balasubramanian\n- 'Ties H. Fikkers'\n- Steven Hoekstra\nbibliography:\n- 'nanospheres.bib'\ntitle: Mass and shape determination of optically levitated nanoparticles\n---\n\nWith a rapidly increasing number of developments over recent years, levitated nanospheres have evolved into an exciting platform for innovative measurement opportunities and applications. Demonstrated applications span from the manipulation of microscopic biological systems[@10.1021/cr4003006; @10.1140/epje/i2005-10060-4; @10.1016/s0006-3495(97)78780-0; @10.1007/s12551-019-00599-y] to ultra-sensitive accelerometers and force-sensors, torque detectors,[@10.1103/physrevlett.121.033603; @10.1038/s41565-019-0605-9;" -"---\naddress: '$^{1}$ Department of Aerospace Structures and Materials, Faculty of Aerospace Engineering, Delft University of Technology. Kluyverweg 1, 2629 HS, Delft, The Netherlands'\nbibliography:\n- 'bibliography.bib'\n---\n\nIntroduction\n============\n\nQuantum computers are unique devices that, by leveraging quantum mechanical principles, theoretically allow certain types of problems to be solved much more efficiently than is possible with classical computers [@nielsen_chuang_2010]. While classical computers use binary bits, 1s and 0s, to perform their computations, quantum computers make use of *quantum bits*. Quantum bits, or *qubits*, can not only represent the classical 0 and 1 states, but can also exist in a quantum superposition of these states. This quantum superposition, when leveraged effectively, is one of the reasons why quantum computers promise better performance in certain applications.\n\nThere are two main types of quantum computers currently in development, namely the General Purpose Quantum Computer (GPQC) and the Quantum Annealer (QA). With the GPQC, most of the potential improvements stem from the fact that these systems can run complex quantum algorithms, allowing for more efficient problem-solving methods to be devised. An overview of quantum algorithms is given by Montanaro [@Montanaro2016]. On the other hand, a QA can only use the quantum annealing" -"---\nabstract: 'The pattern of neutrino flavor oscillations could be altered by the influence of noisy perturbations such as those arising from a gravitational wave background (GWB).
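Editor's note: for context on the neutrino-oscillation abstract just begun above, the unperturbed two-flavor vacuum oscillation probability that such GWB-induced noise would modulate is the standard textbook expression (not derived in this excerpt):

```latex
% Two-flavor vacuum oscillations with mixing angle \theta and
% mass-squared splitting \Delta m^2, in natural units:
P_{\nu_\alpha \to \nu_\beta}(L)
  = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\, L}{4E}\right),
% where L is the propagation baseline and E the neutrino energy.
```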
A stochastic process that is consistent with a GWB has been recently reported by the independent analyses of pulsar timing array (PTA) data sets collected over a decadal timescale by the North American Nanohertz Observatory for Gravitational Waves, the European Pulsar Timing Array jointly with the Indian Pulsar Timing Array, the Parkes Pulsar Timing Array, and the Chinese Pulsar Timing Array collaborations. We investigate the modifications in the neutrino flavor oscillations under the influence of the GWB reported by the PTA collaborations and we discuss how such effects could be potentially revealed in near-future neutrino detectors, possibly helping the discrimination of different models for the GWB below the nHz frequency range.'\nauthor:\n- Gaetano Lambiase\n- Leonardo Mastrototaro\n- Luca Visinelli\nbibliography:\n- 'sources.bib'\ntitle: Astrophysical neutrino oscillations after pulsar timing array analyses\n---\n\nIntroduction {#sec:introduction}\n============\n\nNeutrinos are ideal astrophysical messengers owing to their distinctive properties such as feeble interactions and neutrality, which allow them to reach us from the cosmic accelerator where they had originated, avoiding absorption and deflection by magnetic" -"---\nabstract: 'We conduct a theoretical investigation into the impacts of local microwave electric field frequency detuning, laser frequency detuning, and transit relaxation rate on enhancing heterodyne Rydberg atomic receiver sensitivity. To optimize the output signal\u2019s amplitude given the input microwave signal, we derive the steady-state solutions of the atomic density matrix. Numerical results show that laser frequency detuning and local microwave electric field frequency detuning can improve the system\u2019s detection sensitivity, helping the system achieve an extra sensitivity gain. It also shows that the heterodyne Rydberg atomic receiver can detect weak microwave signals continuously over a wide frequency range with the same or even higher sensitivity than in the resonance case. To evaluate the transit relaxation effect, a modified Liouville equation is used. We find that the transit relaxation rate increases the time it takes to reach steady state and decreases the system\u2019s detection sensitivity.'\nauthor:\n- 'Shanchi Wu, Chen Gong, Shangbin Li, Rui Ni, Jinkang Zhu [^1] [^2] [^3]'\nbibliography:\n- './mybib.bib'\ntitle: Theoretical Analysis of Heterodyne Rydberg Atomic Receiver Sensitivity Based on Transit Relaxation Effect and Frequency Detuning\n---\n\nRydberg atom, frequency detuning, sensitivity optimization, transit relaxation.\n\nIntroduction\n============\n\nRydberg atoms show extremely strong microwave" -"---\nabstract: 'We discuss the mathematical modelling of two of the main mechanisms which pushed forward the emergence of multicellularity: phenotype divergence in cell differentiation, and between-cell cooperation. In line with the atavistic theory of cancer, this disease being specific to multicellular animals, we place special emphasis on how both mechanisms appear to be reversed, though not totally impaired but rather hijacked, in tumour cell populations.
Two settings are considered: the completely innovative, tinkering situation of the emergence of multicellularity in the evolution of species, which we assume to be constrained by external pressure on the cell populations, and the completely planned - in the [*body plan*]{} - situation of the physiological construction of a developing multicellular animal from the zygote, or of bet hedging in tumours, assumed to be of clonal formation, although the body plan is largely - but not completely - lost in its constituting cells. We show how cancer impacts these two settings and we sketch mathematical models for them. We present here our contribution to the question at stake with a background from biology, from mathematics, and from philosophy of science.'\nauthor:\n- |\n Frank Ernesto Alvarez$^1$ & Jean Clairambault$^2$\\\n $^{1}$INSA Toulouse, France // orcid 0000-0002-6651-7374\\" -"---\nbibliography:\n- 'library.bib'\n---\n\n**Roman CCS White Paper**\n\n[Balanced Prism Plus Filter Cadence in\\\nthe High Latitude Time Domain Survey Core Community Survey]{}\n\n**Roman Core Community Survey:** High Latitude Time Domain Survey\n\n**Scientific Categories:** stellar physics and stellar types; stellar populations and the interstellar medium; large scale structure of the universe **Additional scientific keywords:** Supernovae, Exotic Transients, Cosmology, Dark energy\n\n**Submitting Author:**\\\nGreg Aldering, Lawrence Berkeley National Lab (galdering@lbl.gov)\n\n**List of contributing authors:**\\\nDavid Rubin, UH, drubin@hawaii.edu\\\nBenjamin Rose, Baylor University (Ben\\_Rose@baylor.edu)\\\nRebekah Hounsell, University of Maryland Baltimore County/ NASA Goddard Space Flight Center, (rebekah.a.hounsell@nasa.gov)\\\nSaul Perlmutter, University of California, Berkeley (saul@lbl.gov)\\\nSusana Deustua, NIST (susana.deustua@nist.gov)\\\n**Abstract:** The Nancy Grace Roman Space Telescope\u2019s (RST) Wide Field Imager (WFI) is equipped with a slitless prism that can be used for spectroscopic discovery and follow-up of explosive transients at high redshift as part of its High Latitude Time Domain Survey. This is a new and unique spectroscopic capability, not only for its original purpose for cosmology, but also for other types of explosive transients. This white paper is intended to help make this new capability clearer to the community. The depth of the RST prism compared to ground-based spectrographs is explored," -"---\nabstract: 'The work of Darmon, Pozzi, and Vonk [@DPVdr] has recently shown that the RM-values of the Dedekind-Rademacher cocycle $J_{DR}$ are Gross-Stark units up to controlled torsion. The authors of [@DPVdr] remarked that the measure-valued cohomology class $\\mu_{DR}$ which underlies $J_{DR}$ is the level 1 incarnation of earlier constructions in [@DD]. In this paper, we make this relationship explicit by computing a concrete cocycle representative of $\\mu_{DR}$ by tracing the construction of the cohomology class and comparing periods of weight 2 Eisenstein series. While maintaining a global perspective in our computations, we configure the appropriate method of smoothing cocycles which exactly yields the $p$-adic measures of [@DD] when applied to $\\mu_{DR}$.
These methods will also explain the optional degree zero condition imposed in [@DD] which was remarked upon in [@DK] and [@FLcomp].'\nauthor:\n- Jae Hyung Sim\nbibliography:\n- 'ksdist.bib'\ntitle: 'Explicit Cocycle of the Dedekind-Rademacher Cohomology Class and the Darmon-Dasgupta Measures'\n---\n\nIn [@DVsingmoduli], Darmon and Vonk introduced the theory of rigid cocycles which drew analogies from classical Complex Multiplication theory to address previously inaccessible questions regarding the arithmetic of real quadratic fields. In [@DPVdr], Darmon, Pozzi, and Vonk used the deformation of Hilbert Eisenstein series to show" -"---\nabstract: 'The truncated singular value decomposition is a widely used methodology in music recommendation for direct similar-item retrieval and for downstream tasks embedding musical items. This paper investigates a curious effect that we show occurs naturally on many recommendation datasets: spiking formations in the embedding space. We first propose a metric to quantify this spiking organization\u2019s strength, then mathematically prove that its origin is tied to underlying communities of items of varying internal popularity. With this new-found theoretical understanding, we finally open the topic with an industrial use case of estimating how music embeddings\u2019 top-k similar items will change over time under the addition of data.'\nauthor:\n- Darius Afchar\n- Romain Hennequin\n- Vincent Guigue\nbibliography:\n- 'main.bib'\ntitle: Of Spiky SVDs and Music Recommendation\n---\n\nIntroduction\n============\n\nThere is no unique definition of music recommendation, but rather a range of tasks that fall under this name: track sequence recommendation, context-aware recommendation, playlist continuation or generation, similar track, artist, or playlist retrieval [@schedl2018current]. These settings represent the many use cases of recommendation found in the wild (*e.g.,* in a streaming service). Despite this proteiformity, many recommenders leverage a model of similar item retrieval as a basis for their computation. A standard" -"---\nabstract: 'Regardless of the domain, forecasting the future behaviour of a running process instance is a question of interest for decision makers, especially when multiple instances interact. Fostered by the recent advances in machine learning research, several methods have been proposed to predict the next activity, outcome or remaining time of a process automatically. Still, building a model with high predictive power requires both intrinsic knowledge of how to extract meaningful features from the event log data and a model that captures complex patterns in data. This work builds upon the recent progress in inter-case Predictive Process Monitoring (PPM) and comprehensively benchmarks the impact of inter-case features on prediction accuracy. Moreover, it includes quantum machine learning models, which are expected to provide an advantage over classical models as the number of feature dimensions scales. The evaluation on real-world training data from the BPI challenge shows that the inter-case features provide a significant boost by more than 4% in accuracy and quantum algorithms are indeed competitive in a handful of feature configurations.
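Editor's note: the truncated-SVD similar-item retrieval underlying the music-recommendation abstract above can be sketched in a few lines (a generic baseline with hypothetical array shapes, not the paper's spike metric):

```python
import numpy as np

def item_embeddings(interactions: np.ndarray, k: int) -> np.ndarray:
    """Rank-k truncated SVD of a (n_users, n_items) interaction matrix;
    rows of the result are item embeddings scaled by singular values."""
    U, s, Vt = np.linalg.svd(interactions, full_matrices=False)
    return Vt[:k].T * s[:k]          # (n_items, k)

def top_k_similar(emb: np.ndarray, item: int, k: int = 5) -> np.ndarray:
    """Indices of the k most cosine-similar items to `item` (self excluded)."""
    norms = np.linalg.norm(emb, axis=1) + 1e-12
    sims = (emb @ emb[item]) / (norms * norms[item])
    sims[item] = -np.inf
    return np.argsort(-sims)[:k]
```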
Yet, as quantum hardware is still in its early stages of development, this paper critically discusses these findings in the light of runtime, noise and the risk" -"---\nauthor:\n- 'Yifeng\u00a0Xiao, Jiang\u00a0Xue,\u00a0 and\u00a0Deyu\u00a0Meng'\nbibliography:\n- 'reference.bib'\ntitle: |\n Hashing-Based Distributed Clustering\\\n for Massive High-Dimensional Data\n---\n\n[Yifeng : Hashing-Based Distributed Clustering]{}\n\nClustering is a classical unsupervised technique widely used to discover the latent structure within a large dataset. Specifically, clustering aims to classify samples in one dataset into several clusters by their distribution features and maximize the similarity of the samples in one cluster while minimizing the samples\u2019 similarity between different clusters. There are various clustering algorithms that can distinguish clusters effectively on different kinds of datasets. However, with the arrival of the big-data era, the change in data properties brings new challenges. Real-world data are usually generated and stored in distributed machines[@2004DBDC], which cannot meet the requirements of centralized clustering. Yet collecting all the data into a central computer is nearly impossible because of the unaffordable transmission cost and privacy concerns. Meanwhile, the high dimension of big data also results in the curse of dimensionality[@2015Dynamic] and rising computational complexity. It may be impossible to process massive high-dimensional data in a single computer. Therefore, how to process and cluster data in a distributed scenario is an inevitable problem.\n\nSome distributed clustering algorithms" -"---\nabstract: 'The [*Fermi*]{}-LAT observations of SN 2023ixf, a Type II supernova in the nearby Pinwheel Galaxy, Messier 101 (M101), present us with an excellent opportunity to constrain MeV-scale Axion-Like Particles (ALPs). By examining the photon decay signature from heavy ALPs that could be produced in the explosion, the existing constraints on the ALP-photon coupling can be improved, under optimistic assumptions, by up to a factor of $ \\sim 2 $ for masses $ m_a \\lesssim 3 \\operatorname{MeV}$. Under very conservative assumptions, we find a bound that is slightly weaker than the existing ones for $ m_a \\lesssim 0.5$\u00a0MeV. The exact reach of these searches depends mostly on properties of the SN progenitor. This study demonstrates the relevance of core-collapse supernovae, also beyond the Magellanic Clouds, as probes of fundamental physics.'\nauthor:\n- Eike Ravensburg\n- Pierluca\u00a0Carenza\n- Christopher Eckner\n- Ariel\u00a0Goobar\nbibliography:\n- 'biblio.bib'\ntitle: |\n Constraining MeV-scale axion-like particles\\\n with [*Fermi*]{}-LAT observations of SN 2023ixf\n---\n\nIntroduction\n============\n\nThe explosion of core-collapse supernova (SN) 2023ixf in the nearby galaxy M101 has been followed over the entire electromagnetic spectrum by the astronomical community: at radio wavelengths\u00a0[@2023ATel16052....1C; @2023TNSAN.146....1M; @Berger:2023jcl], infrared\u00a0[@Jencson:2023bxz; @Soraisam:2023ktz; @Teja:2023hcm; @Yamanaka:2023gbr], optical\u00a0[@Jacobson-Galan:2023ohh;" -"---\nabstract: 'The performance of Federated learning (FL) is negatively affected by device differences and differing statistical characteristics between participating clients. To address this issue, we introduce a deep unfolding network (DUN)-based technique that learns adaptive weights that unbiasedly ameliorate the adverse impacts of heterogeneity.
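Editor's note: the aggregation step for which the DUN-based technique above learns weights is, at its core, a weighted average of client parameters; a minimal sketch (plain FedAvg-style size weighting shown, a learned-weight variant would simply supply different `weights`):

```python
import numpy as np

def aggregate(client_params: list[np.ndarray], weights: np.ndarray) -> np.ndarray:
    """Weighted aggregation of client model parameters.

    In plain FedAvg the weights are proportional to local dataset sizes;
    a DUN-style aggregator would instead learn them. Weights are
    normalized so they sum to one before averaging."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wk * pk for wk, pk in zip(w, client_params))
```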
The proposed method demonstrates impressive accuracy and quality-aware aggregation. Furthermore, it evaluates the best weight-normalization approach to reduce the computational cost of the aggregation method. The numerical experiments in this study demonstrate the effectiveness of this approach and provide insights into the interpretability of the unbiased weights learned. By incorporating unbiased weights into the model, the proposed approach effectively addresses quality-aware aggregation under the heterogeneity of the participating clients and the FL environment. Codes and details are [here](https://github.com/shanikairoshi/Improved_DUN_basedFL_Aggregation).'\nauthor:\n- 'Shanika I Nanayakkara, Shiva Raj Pokhrel and Gang Li'\nbibliography:\n- 'report-ieee.bib'\ntitle: 'Improving Federated Aggregation with Deep Unfolding Networks [^1] '\n---\n\nIntroduction {#sec-intro}\n============\n\nFederated learning (FL), initially introduced by Google in [@mcmahan2017communicationFedAvg], revolutionizes collaborative training of machine learning models using data from numerous participating devices, called clients, while ensuring that the privacy of local data remains intact. Google\u2019s FL consists of a central server, which iteratively incorporates" -"---\nabstract: 'We extend the theory of formal languages in monoidal categories to the multi-sorted, symmetric case, and show how this theory permits a graphical treatment of topics in concurrency. In particular, we show that Mazurkiewicz trace languages are precisely *symmetric monoidal languages* over *monoidal distributed alphabets*. We introduce *symmetric monoidal automata*, which define the class of regular symmetric monoidal languages. Furthermore, we prove that Zielonka\u2019s asynchronous automata coincide with symmetric monoidal automata over monoidal distributed alphabets. Finally, we apply the string diagrams for symmetric premonoidal categories to derive serializations of traces.'\nauthor:\n- Matthew Earnshaw\n- Pawe\u0142 Soboci\u0144ski\nbibliography:\n- 'main.bib'\ntitle: String Diagrammatic Trace Theory\n---\n\nIntroduction\n============\n\n*Monoidal languages* [@earnshaw22] are a generalization of formal languages of words to formal languages of *string diagrams*. String diagrams [@joyal91; @Selinger2011] are a graphical representation of morphisms in *monoidal categories*, introduced in . Monoidal categories can be considered *2-dimensional monoids* [@BURRONI199343]: just as monoids are categories with one object, in which the morphisms are elements of the monoid, (strict) monoidal categories can also be defined as 2-categories with one object. Accordingly, *monoidal languages* are subsets of morphisms in free monoidal categories, just as word languages are subsets of free monoids." -"---\nabstract: 'In recent years, deep learning has become a breakthrough technique in assisting medical image diagnosis. Supervised learning using convolutional neural networks (CNN) provides state-of-the-art performance and has served as a benchmark for various medical image segmentation and classification tasks. However, supervised learning relies heavily on large-scale annotated data, which is expensive, time-consuming, and even impractical to acquire in medical imaging applications.
Active Learning (AL) methods have been widely applied in natural image classification tasks to reduce annotation costs by selecting more valuable examples from the unlabeled data pool. However, their application in medical image segmentation tasks is limited, and there is currently no effective and universal AL-based method specifically designed for 3D medical image segmentation. To address this limitation, we propose an AL-based method that can be simultaneously applied to 2D medical image classification, segmentation, and 3D medical image segmentation tasks. We extensively validated our proposed active learning method on three publicly available and challenging medical image datasets: the Kvasir Dataset, the COVID-19 Infection Segmentation Dataset, and the BraTS2019 Dataset. The experimental results demonstrate that our PCDAL can achieve significantly improved performance with fewer annotations in 2D classification and segmentation and 3D segmentation tasks. The codes of this study are available at" -"---\nabstract: '[We study for the first time the gravitational waves generated during the collapse of domain walls, incorporating the potential bias in the lattice simulations. The final stages of domain wall collapse are crucial for the production of gravitational waves, but have remained unexplored due to computational difficulties. As a significant application of this new result, we show that the observed NANOGrav, EPTA, PPTA, and CPTA data, which indicate stochastic gravitational waves in the nanohertz regime, can be attributed to axion domain walls coupled to QCD. In our model, non-perturbative effects of QCD induce a temperature-dependent bias around the QCD crossover, inducing the rapid collapse of the domain walls. We use sophisticated lattice simulations that account for the temperature-dependent bias to measure the gravitational waves resulting from the domain wall annihilation. We also discuss the future prospects for accelerator-based searches for the axion and the potential for the formation and detection of primordial black holes. ]{}'\nauthor:\n- Naoya Kitajima\n- Junseok Lee\n- Kai Murai\n- Fuminobu Takahashi\n- Wen Yin\nbibliography:\n- 'Ref.bib'\ntitle: ' Gravitational Waves from Domain Wall Collapse, and Application to Nanohertz Signals with QCD-coupled Axions '\n---" -"---\nabstract: 'Most successes in autonomous robotic assembly have been restricted to a single target or category. We propose to investigate general part assembly, the task of creating novel target assemblies with unseen part shapes. We present General Part Assembly Transformer (GPAT), a transformer-based model architecture that accurately predicts part poses by inferring how each part shape corresponds to the target shape. Our experiments on both 3D CAD models and real-world scans demonstrate GPAT\u2019s generalization abilities to novel and diverse target and part shapes.'\nauthor:\n- |\n Yulong Li^1^ Andy Zeng^2^ Shuran Song^1^\\\n ^1^Columbia University ^2^Google Deepmind\\\n `https://general-part-assembly.github.io/`\\\nbibliography:\n- 'main.bib'\ntitle: Rearrangement Planning for General Part Assembly\n---\n\nIntroduction\n============\n\n[r]{}[0.5]{} ![image](figs/teaser_new.pdf){width=\"\\linewidth\"}\n\nThe ability to assemble new objects is a hallmark of visuo-spatial reasoning.
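Editor's note: as a concrete reference for the active-learning selection step described at the start of this passage ("selecting more valuable examples from the unlabeled data pool"), a common acquisition rule is entropy-based uncertainty sampling (a generic sketch, not necessarily the PCDAL criterion):

```python
import numpy as np

def select_for_annotation(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the `budget` unlabeled samples whose predictive distributions
    have the highest entropy, i.e. the most uncertain ones.

    probs: (n_samples, n_classes) softmax outputs on the unlabeled pool."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]
```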
With the mental image of a novel target shape, one can arrange possibly unseen parts at hand to create a resembling assembly, whether building an alien spaceship with Lego blocks or a rain shelter with stones. Building autonomous robotic systems that exhibit these capabilities may give rise to a wide range of robotics applications, from autonomously assembling new objects in a manufacturing plant to building shelters in disaster response scenarios.\n\nDespite the interest and progress in part" -"---\nabstract: 'Plane Couette flow at $\\Rey=1200$ (based on the channel half-height and half the velocity difference between the top and bottom plates) is investigated in the minimal multi-scale flow unit (i.e. a flow unit with only two spanwise integral length scales), a system for which the computation of invariant solutions that are physically representative of the turbulent state has been understood to be challenging. To address this challenge, our approach is to employ an accurate reduced-order model with 600 degrees of freedom (Cavalieri & Nogueira, *Phys. Rev. Fluids*, vol. 7, 2022, L102601). Using the two-scale energy budget and the temporal cross-correlation of key observables, it is first demonstrated that the model contains most of the multi-scale physical processes identified recently (Doohan *et al.*, *J. Fluid Mech.*, vol. 913, 2021, A8): i.e. the large- and small-scale self-sustaining processes, the energy cascade for turbulent dissipation, and an energy-cascade mediated small-scale production mechanism. Invariant solutions of the reduced-order model are subsequently computed, including 96 equilibria and 43 periodic orbits. It is found that none of the computed equilibrium solutions are able to reproduce a sound energy balance associated with the multi-scale dynamics of the turbulent state. Incorporation of unsteadiness into invariant solutions is seen" -"---\nabstract: 'In this paper, we present a solution for robot arm-controlled agricultural spraying, handling the spraying task as a constrained prioritized 3T2R task. 3T2R tasks in robot manipulation consist of three translational and two rotational degrees of freedom, and are frequently used when the end-effector is axis-symmetric. The solution presented in this paper introduces a prioritization between the translational and rotational degrees of freedom of the 3T2R task, and we discuss the utility of this kind of approach for both velocity and positional inverse kinematics, which relate to continuous and selective agricultural spraying applications, respectively.'\nauthor:\n- 'Ivo Vatavuk, Zdenko Kova\u010di\u0107 [^1] [^2] [^3]'\nbibliography:\n- 'bibliography/asdf.bib'\nnocite: '[@*]'\ntitle: ' **Constrained Prioritized 3T2R Task Control for Robotic Agricultural Spraying** '\n---\n\nAgricultural Automation, Mobile Manipulation, Optimization and Optimal Control\n\nIntroduction\n============\n\nAgricultural robotics is a rapidly advancing research field that focuses on developing and deploying robotic technology for various agricultural tasks. The goal is to enhance the efficiency and sustainability of different agricultural procedures and address labor shortages. The research presented in this paper is part of the project HEKTOR [@hektor; @Goricanec2021], which aims to introduce heterogeneous robotic systems to the agricultural areas of viticulture and mariculture.
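Editor's note: the translation-over-rotation prioritization described in the 3T2R abstract above is conventionally realized with null-space projection in differential inverse kinematics; a generic two-priority sketch (a standard robotics formulation, not the paper's exact controller):

```python
import numpy as np

def prioritized_qdot(J_pos, J_rot, v_des, w_des):
    """Two-level prioritized differential IK: track translation first,
    track rotation only within the nullspace of the translational task.

    J_pos: (3, n) translational Jacobian; J_rot: (2 or 3, n) rotational one;
    v_des, w_des: desired task-space velocities."""
    J1p = np.linalg.pinv(J_pos)
    N1 = np.eye(J_pos.shape[1]) - J1p @ J_pos   # nullspace projector of task 1
    return J1p @ v_des + np.linalg.pinv(J_rot @ N1) @ (w_des - J_rot @ J1p @ v_des)
```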
A" -"---\nabstract: |\n Graph Neural Networks (GNNs) are emerging as a powerful tool for learning from graph-structured data and performing sophisticated inference tasks in various application domains. Although GNNs have been shown to be effective on modest-sized graphs, training them on large-scale graphs remains a significant challenge due to lack of efficient data access and data movement methods. Existing frameworks for training GNNs use CPUs for graph sampling and feature aggregation, while the training and updating of model weights are executed on GPUs. However, our in-depth profiling shows the CPUs cannot achieve the throughput required to saturate GNN model training throughput, causing gross under-utilization of expensive GPU resources. Furthermore, when the graph and its embeddings do not fit in the CPU memory, the overhead introduced by the operating system, say for handling page-faults, comes in the critical path of execution.\n\n To address these issues, we propose the GPU Initiated Direct Storage Access ([[GIDS]{}]{}[^1][^2]) dataloader, to enable GPU-oriented GNN training for large-scale graphs while efficiently utilizing all hardware resources, such as CPU memory, storage, and GPU memory with a hybrid data placement strategy. By enabling GPU threads to fetch feature vectors directly from storage, [[GIDS]{}]{} dataloader solves the memory capacity problem" -"---\nabstract: 'We analyze the experimental data on nuclei and hypernuclei yields recently obtained by the STAR collaboration. The hybrid dynamical and statistical approaches which have been developed previously are able to describe the experimental data reasonably. We discuss the intriguing difference between the yields of normal nuclei and hypernuclei which may be related to the properties of hypermatter at subnuclear densities. New (hyper-)nuclei could be detected via particle correlations. Such measurements are important to pin down the production mechanism.'\nauthor:\n- 'N.\u00a0Buyukcizmeci$^{1}$, T.\u00a0Reichert$^{2,3,4}$, A.S.\u00a0Botvina$^{2,3}$, M.\u00a0Bleicher$^{2,3,5}$'\ntitle: 'Nucleosynthesis of light nuclei and hypernuclei in central Au+Au collisions at $\\sqrt{s_{NN}}$=3 GeV'\n---\n\nIntroduction\n============\n\nDuring recent years the production of new nuclei has become again one of the central topics in relativistic nuclear reaction studies. It is known since the late 1970s that many different light complex nuclei can be formed in central nucleus-nucleus collisions [@Gos77]. Later on these studies were considerably extended and presently they involve the production of both normal nuclei and hypernuclei, including exotic nuclear species. In central relativistic nucleus-nucleus collisions the yields and spectra of hydrogen and helium isotopes have been observed. In addition, more heavy species, like Li, Be and others were" -"---\nabstract: 'Backscatter communication (BSC) is a promising solution for Internet-of-Things (IoT) connections due to its low-complexity, low-cost, and energy-efficient solution for sensors. There are several network infrastructure setups that can be used for BSC with IoT nodes/passive devices. One of them is a bistatic setup where there is a need for high dynamic range and high-resolution analog-to-digital converters at the reader side. In this paper, we investigate a bistatic BSC setup with multiple antennas. 
We propose a novel algorithm that suppresses direct link interference between the carrier emitter (CE) and the reader by beamforming into the nullspace of the CE-reader direct link, which decreases the dynamic range of the system and increases the detection performance of the backscatter device (BSD). Further, we derive a Neyman-Pearson (NP) test and an exact closed-form expression for its performance in the detection of the BSD. Finally, simulation results show that the dynamic range of the system is significantly decreased and the detection performance of the BSD is increased by the proposed algorithm compared to a system not using beamforming in the CE, which could then be used in a host of different practical fields such as agriculture, transportation, factories, hospitals, smart cities, and" -"---\nauthor:\n- Wanming Yu\n- Chuanyu Yang\n- Christopher McGreavy\n- Eleftherios Triantafyllidis\n- Guillaume Bellegarda\n- Milad Shafiee\n- Auke Jan Ijspeert\n- Zhibin Li\nbibliography:\n- 'arxiv.bib'\ntitle: '**Identifying Important Sensory Feedback for Learning Locomotion Skills** '\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nRobot motor skills can be learned through deep reinforcement learning (DRL), with neural networks acting as state-action mappings. While the selection of state observations is crucial, there has been a lack of quantitative analysis to date. Here, we present a systematic saliency analysis that quantitatively evaluates the relative importance of different feedback states for motor skills learned through DRL. Our approach can identify the most essential feedback states for locomotion skills, including balance recovery, trotting, bounding, pacing and galloping. By using only key states \u2013 joint positions, gravity vector, base linear and angular velocities \u2013 we demonstrate that a simulated quadruped robot can achieve robust performance in various test scenarios across these distinct skills. The benchmarks using task performance metrics show that locomotion skills learned with key states can achieve comparable performance to those with all states, and the task performance or learning success rate will drop significantly if key states are missing. This work provides" -"---\nabstract: 'This paper presents an investigation into machine learning techniques for violence detection in videos and their adaptation to a federated learning context. The study includes experiments with spatio-temporal features extracted from benchmark video datasets, comparison of different methods, and proposal of a modified version of the \u201cFlow-Gated\u201d architecture called \u201cDiff-Gated.\u201d Additionally, various machine learning techniques, including super-convergence and transfer learning, are explored, and a method for adapting centralized datasets to a federated learning context is developed.
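Editor's note: the null-space beamforming idea in the backscatter abstract at the start of this passage reduces to projecting a candidate beamformer onto the orthogonal complement of the direct-link channel; a minimal sketch (a single direct-link channel vector is assumed):

```python
import numpy as np

def nullspace_beamformer(h: np.ndarray, w0: np.ndarray) -> np.ndarray:
    """Project a candidate transmit beamformer w0 onto the nullspace of the
    direct CE->reader channel h, so the direct link ideally receives nothing.

    h, w0: complex vectors of length M (number of CE antennas)."""
    P = np.eye(len(h)) - np.outer(h, h.conj()) / np.vdot(h, h)  # orthogonal projector
    w = P @ w0
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = nullspace_beamformer(h, rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(abs(np.vdot(h, w)))  # ~0: direct-link leakage is suppressed
```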
The research achieves better accuracy results compared to state-of-the-art models by training the best violence detection model in a federated learning context.'\nauthor:\n- PAJON Quentin$^1$\n- SERRE Swan$^1$\n- WISSOCQ Hugo$^1$\n- RABAUD L\u00e9o$^1$\n- |\n HAIDAR Siba$^{1,2}$ YAACOUB Antoun$^{1,2}$ $^1$ESIEA, Paris, France\\\n $^2$Learning, Data and Robotics (LDR) Lab, ESIEA, Paris, France {pajon, sserre, wissocq, lrabaud}@et.esiea.fr, {siba.haidar, antoun.yaacoub}@esiea.fr\nbibliography:\n- 'ijcai23.bib'\ntitle: 'Balancing Accuracy and Training Time in Federated Learning for Violence Detection in Surveillance Videos: A Study of Neural Network Architectures'\n---\n\nIntroduction\n============\n\nViolence detection can be used in many contexts: soccer stadiums, surveillance cameras, video sharing services, etc. Moreover, humans aren\u2019t able to detect violence on this scale because of the huge quantity of data involved. In the context" -"---\nabstract: 'Weak multivalent interactions govern a large variety of biological processes like cell-cell adhesion and virus-host interactions. These systems distinguish sharply between surfaces based on receptor density, known as superselectivity. Earlier experimental and theoretical work provided insights into the control of selectivity: Weak interactions and a high number of ligands facilitate superselectivity. Present experimental studies typically involve tens or hundreds of interactions, resulting in a high entropic contribution leading to high selectivities. However, whether, and if so how, systems with few ligands, such as multi-domain proteins and virus binding to a membrane, show superselective behavior is an open question. Here, we address this question with a multivalent experimental model system based on star-shaped branched DNA nanostructures (DNA nanostars) with each branch featuring a single-stranded overhang that binds to complementary receptors on a target surface. Each DNA nanostar possesses a fluorophore to directly visualize DNA nanostar surface adsorption by total internal reflection fluorescence microscopy (TIRFM). We observe that DNA nanostars can bind superselectively to surfaces and bind optimally at a valency of three. We quantitatively explain this optimum by extending the current theory with interactions between DNA nanostar binding sites (ligands). Our results add to the understanding of" -"---\nabstract: 'In the task of texture transfer, reference texture images typically exhibit highly repetitive texture features, and the texture transfer results from different content images under the same style also share remarkably similar texture patterns. Encoding such highly similar texture features often requires deep layers and a large number of channels, making it the main source of the entire model\u2019s parameter count, computational load, and inference time. We propose a lightweight texture transfer method based on texture feature presets (**TFP**). TFP takes full advantage of the high repetitiveness of texture features by providing preset universal texture feature maps for a given style. These preset feature maps can be fused and decoded directly with shallow color transfer feature maps of any content to generate texture transfer results, thereby avoiding repeatedly encoding redundant texture information. The texture feature map we preset is encoded through noise input images with consistent distribution (standard normal distribution).
This consistent input distribution can completely avoid the problem of texture transfer differentiation, and by randomly sampling different noise inputs, we can obtain different texture features and texture transfer results under the same reference style. Compared to state-of-the-art techniques, our TFP not only produces" -"---\nabstract: 'This paper investigates an intelligent reflecting surface (IRS) enabled multiuser integrated sensing and communications (ISAC) system, which consists of one multi-antenna base station (BS), one IRS, multiple single-antenna communication users (CUs), and one target at the non-line-of-sight (NLoS) region of the BS. The IRS is deployed to not only assist the communication from the BS to the CUs, but also enable the BS\u2019s NLoS target sensing based on the echo signals from the BS-IRS-target-IRS-BS link. We consider two types of targets, namely the extended and point targets, for which the BS aims to estimate the complete target response matrix and the target direction-of-arrival (DoA) with respect to the IRS, respectively. To provide full degrees of freedom for sensing, we consider that the BS sends dedicated sensing signals in addition to the communication signals. Accordingly, we model two types of CU receivers, namely Type-I and Type-II CU receivers, which do not have and have the capability of canceling the interference from the sensing signals, respectively. Under each setup, we jointly optimize the transmit beamforming at the BS and the reflective beamforming at the IRS to minimize the Cram\u00e9r-Rao bound (CRB) for target estimation, subject to the minimum signal-to-interference-plus-noise ratio" -"---\nabstract: 'In this paper, we apply approximate entropy (ApEn) analysis to the nonlinear beam dynamics in circular accelerators. Due to the presence of strong nonlinear magnets, chaos of beam motion gradually increases with amplitude. Such chaos can be quantitatively characterized with ApEn of beam turn-by-turn readings. Then ApEn, as a chaos indicator, can be used for nonlinear lattice optimization and analysis.'\nauthor:\n- Yongjun Li\nbibliography:\n- 'apen.bib'\ntitle: Approximate Entropy Analysis for Nonlinear Beam Dynamics\n---\n\n[^1]\n\n\\[sect:intro\\]introduction\n==========================\n\nFor circular particle accelerators, the nonlinearities of beam dynamics confine long-term motions to be stable only within a limited region in 6-dimensional phase space, namely, dynamic aperture (DA)\u00a0[@chao2023hb]. Even within DA, particle motions could still be chaotic. It is commonly believed that, for a given magnetic lattice, through suppressing chaos, one can enlarge its DA and local momentum acceptance (LMA), and also enhance its robustness to errors. Therefore, various chaos indicators have been adopted to characterize the nonlinearities of beam motions\u00a0[@Bazzani:2023hbb], such as the Lyapunov exponent (LE)\u00a0[@wolf1985determining; @schmidt1991comparison; @Habib:1995], frequency map analysis (FMA)\u00a0[@laskar1999introduction], forward-reversal integration (FRI)\u00a0[@panichi2017reversibility; @li2021fast; @borland2000elegant], data-driven chaos indicator\u00a0[@li2022data], fluctuation of approximate invariant\u00a0[@li2021design], etc. In this paper we apply approximate" -"---\nabstract: 'For an odd prime $p$, we consider the Chern classes $\\gamma_i$ of the conjugation representation of the projective unitary group ${PU(p^{l})}$. 
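Editor's note: for the approximate-entropy analysis described in the accelerator abstract above, ApEn itself is a short, standard computation (Pincus' definition; the parameters m and r below are conventional defaults, not taken from the paper):

```python
import numpy as np

def apen(u: np.ndarray, m: int = 2, r: float = 0.2) -> float:
    """Approximate entropy ApEn(m, r) of a 1D series, e.g. beam turn-by-turn
    readings; larger values indicate more chaotic motion."""
    u = np.asarray(u, dtype=float)
    tol = r * u.std()  # tolerance as a fraction of the standard deviation

    def phi(mm):
        n = len(u) - mm + 1
        x = np.array([u[i:i + mm] for i in range(n)])
        # C_i: fraction of windows within Chebyshev distance tol of window i
        c = [(np.abs(x - xi).max(axis=1) <= tol).mean() for xi in x]
        return np.log(c).mean()

    return phi(m) - phi(m + 1)

# regular motion scores lower than noisy motion
t = np.arange(2000)
print(apen(np.sin(0.1 * t)), apen(np.random.default_rng(2).standard_normal(2000)))
```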
We show that the restrictions of $\\gamma_i$ to a maximal elementary abelian $p$-subgroup are expressed as Dickson invariants, a collection of purely algebraically defined elements in the polynomial ring ${\\mathbb{F}}_p[x_1,\\cdots,x_n]$. Furthermore, we show some relations in the cohomology algebra $H^*(B{PU(p^{2})};{\\mathbb{F}}_p)$ involving the classes $\\gamma_i$.'\naddress:\n- 'Institute for Theoretical Sciences, Westlake University, 600 Dunyu Road, Sandun town, Xihu district, Hangzhou 310030, Zhejiang Province, China.'\n- 'School of Science, Westlake University, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China.'\n- 'Institute of Natural Sciences, Westlake Institute for Advanced Study, 18 Shilongshan Road, Hangzhou 310024, Zhejiang Province, China.'\nauthor:\n- Xing Gu\nbibliography:\n- 'RefConjRep.bib'\ntitle: Dickson invariants and Chern classes of the conjugation representations\n---\n\n[^1]\n\nIntroduction {#sec:intro}\n============\n\nLet $p$ be a prime number, $q = p^m$ and let ${\\mathbb{F}}_q$ be the finite field of $q$ elements. Consider the polynomial ring ${\\mathbb{F}}_q[x_1,\\cdots,x_n]$. The general linear group ${GL_{n}(\\mathbb{F}_q)}$ acts canonically on the vector space spanned by $x_1,\\cdots,x_n$. This action extends uniquely to a ${GL_{n}(\\mathbb{F}_q)}$-action on ${\\mathbb{F}}_q[x_1,\\cdots,x_n]$ that preserves the products of polynomials.\n\nOver a hundred years ago," -"---\nabstract: 'The multimarginal optimal transport problem with Coulomb cost arises in quantum physics and is vital in understanding strongly correlated quantum systems. Its intrinsic curse of dimensionality can be overcome with a Monge-like ansatz. A nonconvex quadratic programming problem then emerges after employing discretization and $\\ell_1$ penalty. To globally solve this nonconvex problem, we adopt a grid-refinement-based framework, in which a local solver is heavily invoked and hence significantly determines the overall efficiency. The block structure of this nonconvex problem suggests taking block coordinate descent-type methods as the local solvers, although the existing ones suffer from the poor scalability induced by the associated sparse-dense matrix multiplications. In this work, borrowing the tools from optimal transport, we develop novel methods that favor highly scalable schemes for subproblems and are completely free of the full matrix multiplications after introducing entrywise sampling. Convergence and asymptotic properties are built on the theory of random matrices. The numerical results on several typical physical systems corroborate the effectiveness and better scalability of our approach, which also allows the first visualization of the approximate optimal transport maps between electrons in three-dimensional contexts.'\nauthor:\n- 'Yukuan Hu[^1]'\n- 'Mengyu Li[^2]'\n- Xin Liu\n-" -"---\nauthor:\n- Tong Li\n- 'Chang-Yuan Yao'\n- Man Yuan\nbibliography:\n- 'refs.bib'\ntitle: 'Searching for heavy neutral lepton and lepton number violation through VBS at high-energy muon colliders'\n---\n\nIntroduction {#sec:Intro}\n============\n\nNeutrino oscillation experiments provide clear and compelling evidence that neutrinos have non-zero, but very small masses. The Standard Model (SM) includes only left-handed neutrino fields in the lepton doublets, and therefore predicts neutrino masses of exactly zero.
Although one can certainly introduce a right-handed (RH) neutrino field, the Yukawa coupling has to be tuned to a very small constant $y_\\nu\\lesssim 10^{-13}$ to accommodate the observed neutrino mass. A more economical way to generate neutrino mass within the SM field content is through the so-called \u201cWeinberg operator\u201d\u00a0[@Weinberg:1979sa] $$\\begin{aligned}\n\\ell_L\\ell_L HH\\;,\\end{aligned}$$ where $\\ell_L$ and $H$ stand for the SM left-handed lepton doublet and the Higgs doublet, respectively. The price we pay here is to introduce a higher-dimensional (dimension-5) operator and the violation of global lepton number symmetry. The minimal ultraviolet (UV) realization of this dimension-5 operator is the Type I Seesaw mechanism\u00a0[@Minkowski:1977sc; @Yanagida:1979as; @GellMann:1980vs; @Glashow:1979nm; @Mohapatra:1979ia; @Shrock:1980ct; @Schechter:1980gr]. In the minimal Type I Seesaw, the \u201csterile neutrinos\u201d have the nature of Majorana fermions as they transform as singlet" -"---\nabstract: 'Neural 3D scene reconstruction methods have achieved impressive performance when reconstructing complex geometry and low-textured regions in indoor scenes. However, these methods heavily rely on 3D data which is costly and time-consuming to obtain in the real world. In this paper, we propose a novel neural reconstruction method that reconstructs scenes using sparse depth under plane constraints, without 3D supervision. We introduce a signed distance function field, a color field, and a probability field to represent a scene. We optimize these fields to reconstruct the scene by using differentiable ray marching with accessible 2D images as supervision. We improve the reconstruction quality of scene regions with complex geometry by using sparse depth obtained through the geometric constraints. The geometric constraints project 3D points on the surface to similar-looking regions with similar features in different 2D images. We impose the plane constraints to make large planes parallel or perpendicular to the indoor floor. Both constraints help reconstruct accurate and smooth geometry structures of the scene. Without 3D supervision, our method achieves competitive performance compared with existing methods that use 3D supervision on the ScanNet dataset.'\nauthor:\n- |\n Yi Guo^1^, Che Sun^1^, Yunde Jia^2,1^, and Yuwei Wu^1,2^\\\n ^1^Beijing Key" -"---\nabstract: 'Close to equilibrium, the underlying symmetries of a system determine its possible universal behavior. Far from equilibrium, however, different universal phenomena associated with the existence of multiple non-thermal fixed points can be realized for given microscopic symmetries. Here, we study this phenomenon using a quasi-one-dimensional spinor Bose-Einstein condensate. We prepare two different initial conditions and observe two distinct universal scaling dynamics with different exponents. Measurements of the complex-valued order parameter with spatial resolution allow us to characterize the phase-amplitude excitations for the two scenarios. Our study provides new insights into the phenomenon of universal dynamics far from equilibrium and opens a path towards mapping out the associated basins of non-thermal fixed points.'\nauthor:\n- Stefan Lannig\n- Maximilian Pr\u00fcfer\n- Yannick Deller\n- Ido\u00a0Siovitz\n- Jan\u00a0Dreher\n- Thomas Gasenzer\n- Helmut Strobel\n- 'Markus K.
Oberthaler'\ntitle: 'Observation of two non-thermal fixed points for the same microscopic symmetry'\n---\n\n[^1]\n\nUniversality is a powerful concept for characterizing systems by means of effective models based on features that are independent of microscopic details. This concept led to the identification of universality classes for systems at and near equilibrium [@Hohenberg1977; @Bray1994]. For example, close to a phase transition," -"---\nabstract: 'Let $K$ be a field with a discrete valuation, and let $p$ be a prime. It is known that if $\\Gamma \\lhd \\Gamma_0 < {\\mathrm{PGL}}_2(K)$ is a Schottky group normally contained in a larger group which is generated by order-$p$ elements each fixing $2$ points $a_i, b_i \\in {\\mathbb{P}}_K^1$, then the quotient of a certain subset of the projective line ${\\mathbb{P}}_K^1$ by the action of $\\Gamma$ can be algebraized as a superelliptic curve $C : y^p = f(x) / K$. The subset $S \\subset {\\mathbb{P}}_K^1$ consisting of these pairs $a_i, b_i$ of fixed points is mapped modulo $\\Gamma$ to the set of branch points of the superelliptic map $x : C \\to {\\mathbb{P}}_K^1$. We produce an algorithm for determining whether an input even-cardinality subset $S \\subset {\\mathbb{P}}_K^1$ consists of fixed points of generators of such a group $\\Gamma_0$ and which, in the case of a positive answer, modifies $S$ into a subset ${S^{\\mathrm{min}}}\\subset {\\mathbb{P}}_K^1$ with particularly nice properties. Our results do not involve any restrictions on the prime $p$ or on the residue characteristic of $K$ and allow these to be the same.'\nauthor:\n- Jeffrey Yelton\nbibliography:\n- 'bibfile.bib'\ntitle: 'Branch points of split degenerate superelliptic curves" -"---\nabstract: '\\[Background.\\] Empirical research in requirements engineering (RE) is a constantly evolving topic, with a growing number of publications. Several papers address this topic using literature reviews to provide a snapshot of its \u201ccurrent\u201d state and evolution. However, these papers have never built on or updated earlier ones, resulting in overlap and redundancy. The underlying problem is the unavailability of data from earlier works. Researchers need technical infrastructures to conduct sustainable literature reviews. \\[Aims.\\] We examine the use of the Open Research Knowledge Graph (ORKG) as such an infrastructure to build and publish an initial Knowledge Graph of Empirical research in RE (KG-EmpiRE) whose data is openly available. Our long-term goal is to continuously maintain KG-EmpiRE with the research community to synthesize a comprehensive, up-to-date, and long-term available overview of the state and evolution of empirical research in RE. \\[Method.\\] We conduct a literature review using the ORKG to build and publish KG-EmpiRE which we evaluate against competency questions derived from a published vision of empirical research in software (requirements) engineering for 2020 \u2013 2025. 
\\[Results.\\] From 570 papers of the IEEE International Requirements Engineering Conference (2000 \u2013 2022), we extract and analyze data on the reported empirical research" -"---\nabstract: 'A machine-learning approach for data-driven Reynolds-Averaged Navier\u2013Stokes (RANS) predictions of turbulent flows including estimates of turbulence modelling uncertainties is developed by combining a Bayesian symbolic identification methodology for learning customised RANS model corrections for selected classes of flows and a space-dependent model-aggregation algorithm that combines the predictions of a set of competing machine-learned RANS models by means of weighting functions depending on a vector of local flow features. The customised model corrections are learned by using the SBL-SpaRTA algorithm, recently proposed by @cherroud2022sparse, which delivers sparse model correction terms in analytical form and whose parameters are described by probability distribution functions. This makes the learned models naturally interpretable and endowed with a measure of uncertainty. The learned models are subsequently aggregated by training Random Forests Regressors (RFR), which associates a model performance score with a set of local flow features. The scores can be interpreted as the probability that a candidate model performs better than its competitors, given the flow behavior at a given location. Predictions of new flows are then formulated as a locally weighted average of the solutions of a set of machine-learned models. An uncertainty measure is obtained by propagating through the models their posterior" -"---\nauthor:\n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: 'Receiver design for the REACH global 21-cm signal experiment'\n---\n\nIntroduction {#intro}\n============\n\nThe Radio Experiment for the Analysis of Cosmic Hydrogen (REACH) [@reach] is designed to measure the impact of the intergalactic medium (IGM) on the 21-cm neutral hydrogen line attributed to X-ray and UV emission from the first bright objects in the Universe [@furlanetto]. This \u201cglobal\u201d experiment focuses on detecting the spatial 21-cm cosmic signature which is orders of magnitude smaller than the bright foregrounds at frequencies in the region of 50\u2013200MHz. As such, the experiment requires instrumental calibration of millikelvin-level accuracy to remove systematics that would ordinarily hinder such a measurement.\n\nA number of global experiments have already been conducted in this domain such as SARAS [@saras] and LEDA [@leda] as well as EDGES, which in 2018 reported the detection of an absorption profile at 78 MHz, potentially revealing the general characteristics of the Epoch of Reionisation (EoR) and Cosmic Dawn such as" -"---\nabstract: 'The ability to conduct retrospective analyses of attacks on human rights defenders over time and by location is important for humanitarian organizations to better understand historical or ongoing human rights violations and thus better manage the global impact of such events. We hypothesize that NLP can support such efforts by quickly processing large collections of news articles to detect and summarize the characteristics of attacks on human rights defenders. 
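As a rough illustration of the kind of sub-task such a dataset supports, attack-type prediction can be framed as ordinary supervised text classification; the sketch below uses scikit-learn with invented labels and sentences, far simpler than the fine-grained annotations described here.

```python
# Toy sketch: attack-type detection as text classification.
# The label set and training sentences are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Journalist detained after publishing corruption report",
    "Activist's office vandalized overnight",
]
labels = ["arrest", "property damage"]  # hypothetical attack types

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Lawyer arrested at a protest"]))
```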
To that end, we propose a new dataset for detecting **Attack**s on **H**uman **R**ights **D**efender**s** ([HRDsAttack]{}) consisting of crowdsourced annotations on 500 online news articles. The annotations include fine-grained information about the type and location of the attacks, as well as information about the victim(s). We demonstrate the usefulness of the dataset by using it to train and evaluate baseline models on several sub-tasks to predict the annotated characteristics.'\nauthor:\n- |\n Shihao Ran Di Lu\\\n **Joel Tetreault Aoife Cahill Alejandro Jaimes\\\n Dataminr Inc.\\\n `{sran,dlu,jtetreault,`\\\n `acahill,ajaimes}@dataminr.com`**\nbibliography:\n- 'custom.bib'\ntitle: A New Task and Dataset on Detecting Attacks on Human Rights Defenders \n---\n\n=1\n\nIntroduction\n============\n\nIt is essential for human rights organizations to track, analyze and summarize attacks on human rights defenders over time and across locations for" -"---\nauthor:\n- \n- \nbibliography:\n- 'references.bib'\ntitle: Coherent loop states and angular momentum\n---\n\nIntroduction\n============\n\nThe purpose of this paper is to study the [*Bohr-Sommerfeld states*]{} [@borthwick1995legendrian] (which we will call [*coherent loop states*]{} in our setting) in the context of the irreducible representations of $\\operatorname{SU}(2)$, and to use these states to derive the \u2018spherical area\u2019 formula stated in [@littlejohn2009uniform] for the asymptotics of the matrix elements of these representations. We will see that the general theory in [@borthwick1995legendrian] takes a particularly simple and elegant form in this context, where the geometry of the Hopf fibration $S^3 \\rightarrow S^2$ will play a central role.\n\nFrom the viewpoint of physics, the key feature of coherent loop states is that they allow one to actually make rigorous many of the intuitive classical mental images we have for spin angular momentum (such as in [@Ponzano1968; @brussaard1957classical; @biedenharn1981racah]), since they offer a precise and convenient bridge from the classical to the quantum world.\n\nBorthwick, Paul and Uribe\u2019s asymptotic formula {#borthwick-paul-and-uribes-asymptotic-formula .unnumbered}\n----------------------------------------------\n\nRecall that in geometric quantization of K\u00e4hler manifolds, one starts with a compact holomorphic manifold $M$ and a Hermitian line bundle $L$ over $M$ which is positive in the sense" -"---\nabstract: 'Deep learning based image compression has gained a lot of momentum in recent times. To enable a method that is suitable for image compression and subsequently extended to video compression, we propose a novel deep learning model architecture, where the task of image compression is divided into two sub-tasks, learning structural information from luminance channel and color from chrominance channels. The model has two separate branches to process the luminance and chrominance components. The color difference metric CIEDE2000 is employed in the loss function to optimize the model for color fidelity. We demonstrate the benefits of our approach and compare the performance to other codecs. 
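A minimal sketch of the two-branch idea is shown below, assuming the input has already been converted to YCbCr; the layer widths and depths are illustrative placeholders rather than the architecture of this paper.

```python
# Minimal sketch of a two-branch analysis transform: one branch for the
# luminance (Y) channel, one for chrominance (CbCr). Layer widths are
# illustrative, not the paper's architecture.
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Structure (luma) branch: 1 input channel.
        self.luma = nn.Sequential(
            nn.Conv2d(1, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, stride=2, padding=2),
        )
        # Color (chroma) branch: 2 input channels (Cb, Cr).
        self.chroma = nn.Sequential(
            nn.Conv2d(2, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2),
        )

    def forward(self, ycbcr):  # ycbcr: (N, 3, H, W)
        y, cbcr = ycbcr[:, :1], ycbcr[:, 1:]
        return self.luma(y), self.chroma(cbcr)

latents = TwoBranchEncoder()(torch.rand(1, 3, 64, 64))
print([t.shape for t in latents])
```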
Additionally, the visualization and analysis of latent channel impulse response are performed.'\naddress: |\n *Moving Picture Technologies*, Fraunhofer Institute for Integrated Circuits IIS,\\\n Erlangen, Germany\nbibliography:\n- 'refs.bib'\ntitle: Color Learning for Image Compression\n---\n\n[ 0= ]{}\n\nImage compression, deep learning, color learning, non-linear transform coding\n\nIntroduction {#sec:intro}\n============\n\nImage compression is a high-impact technology that minimizes resources for transmission bandwidth and storage. A conventional image codec such as JPEG[@125072] uses a block-based transform coding approach. The images are partitioned into blocks and transformed into the frequency domain using the" -"---\nabstract: 'Advances in optical imaging always look for an increase in sensitivity and resolution among other practicability aspects. Within the same scope, in this work we report a versatile interference contrast imaging technique, capable of sub-sample-thickness resolution, with a large field-of-view of several $\\SI{}{\\milli\\meter}^{2}$. Sensitivity is increased through the use of a self-imaging non-resonant cavity, which causes photons to probe the sample in multiple rounds before being detected, where the configuration can be transmissive or reflective. Phase profiles can be resolved individually for each round thanks to a specially designed single-photon camera with time-of-flight capabilities and true pixels-off gating. Measurement noise is reduced by novel data processing combining the retrieved sample profiles from multiple rounds. Our protocol is especially useful under extremely low light conditions as required by biological or photo-sensitive samples. Results demonstrate at least a five-fold reduction in phase measurement noise, compared to single round imaging, and values close to the predicted sensitivity in the case of the best possible cavity configuration, where all photons are maintained for $n$ rounds. We also find good agreement with the theoretical predictions for a low number of rounds, where experimental imperfections would play a minor role. The absence of a" -"---\nabstract: 'According to some discussions based on syllogism, we present results on the binary Goldbach conjecture in three categories: results that are weaker than the Goldbach conjecture, sufficient conditions for the Goldbach conjecture, and results that are similar in nature to the Goldbach conjecture. Additionally, we explore the connections between the Goldbach conjecture and other well-known conjectures.'\naddress: 'School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China'\nauthor:\n- Huixi Li\nbibliography:\n- 'bib.bib'\ntitle: Some discussions on the Goldbach conjecture\n---\n\n*Dedicated to Professor Chen Jingrun on the occasion of the 90th anniversary of his birth*\n\n*and the 50th anniversary of the birth of Chen\u2019s theorem.*\n\nCurrent status\n==============\n\nIn 1742, Goldbach proposed two conjectures: the binary Goldbach conjecture, which states that every even integer greater than $2$ can be expressed as the sum of two primes, and the ternary Goldbach conjecture, which states that every odd integer greater than $5$ can be expressed as the sum of three primes. For a detailed historical account of the conjecture\u2019s origins, readers can refer to Section 1 of Vaughan\u2019s paper [@Vaughan2016]. 
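For illustration only (a brute-force check is of course no proof), the binary conjecture can be verified for small even integers in a few lines of Python; `sympy` is assumed for primality testing.

```python
# Illustrative check (not a proof): verify the binary Goldbach
# conjecture for small even numbers by brute force.
from sympy import isprime

def goldbach_pair(n):
    """Return primes (p, q) with p + q = n for even n > 2, else None."""
    for p in range(2, n // 2 + 1):
        if isprime(p) and isprime(n - p):
            return p, n - p
    return None

assert all(goldbach_pair(n) for n in range(4, 10**4, 2))
print(goldbach_pair(100))  # e.g. (3, 97)
```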
Note that the ternary Goldbach conjecture can be deduced from the binary Goldbach conjecture, as every odd" -"---\nabstract: '**Abstract.** Due to existence of periodic windows, chaotic systems undergo numerous bifurcations as system parameters vary, rendering it hard to employ an analytic continuation, which constitutes a major obstacle for its effective analysis or computation. In this manuscript, however, based on cycle expansions we found that spectral functions and thus dynamical averages are analytic, if symbolic dynamics is preserved so that a perturbative approach is indeed possible. Even if it changes, a subset of unstable periodic orbits (UPOs) can be selected to preserve the analyticity of the spectral functions. Therefore, with the help of cycle expansions, perturbation theory can be extended to chaotic regime, which opens a new avenue for the analysis and computation in chaotic systems.'\nauthor:\n- Huanyu Cao\n- Yueheng Lan\nbibliography:\n- 'bibfile.bib'\ntitle: Perturbing Chaos with Cycle Expansions\n---\n\n\\[sec:Introduction\\]Introduction\n================================\n\nTurbulent systems often exhibit characteristic recurrent patterns, which are routinely observed in both numerical simulations and wet experiments and are termed coherent structures. These recurrent patterns are compact invariant sets with relatively simple topology in phase space\u00a0[@kline1967structure; @liepmann2001elements; @cvitanovic2010geometry] and dominate dynamics of fluid systems\u00a0[@cvitanovic1991periodic2; @cvitanovic2010geometry]. Intuitively, at finite resolution, the spatiotemporal evolution can be regarded as a walk through" -"---\nauthor:\n- 'Devesh Nandal, John A. Regan, Tyrone E. Woods, Eoin Farrell, Sylvia Ekstr\u00f6m, Georges Meynet'\nbibliography:\n- 'biblio.bib'\ntitle: Critical accretion rates for rapidly growing massive Population III stars\n---\n\nIntroduction\n============\n\nSupermassive stars (SMSs) and massive Population III (PopIII) stars are theorised to be a key intermediate stage in producing black holes with masses in the range $10^3$ [$\\rm{M_{\\odot}}~$]{}to $10^5$ [$\\rm{M_{\\odot}}~$]{}in the early Universe. Observations of distant quasars [@Willott_2010; @Mortlock_2011; @Banados_2018; @Wang_2021] powered by supermassive black holes (SMBHs) with masses in excess of $10^9$ [$\\rm{M_{\\odot}}~$]{}place extremely tight constraints on the time available to grow seed black holes up to these extreme masses. The recent discovery of a SMBH with a mass of approximately $10^7$ [$\\rm{M_{\\odot}}~$]{}by the CEERS survey team at $z \\sim 8.7$ only exacerbates the problem [@Larson_2023].\n\nWhile the seeds of these early SMBHs could in theory be stellar mass black holes formed from the endpoint of PopIII stars, there are a number of significant challenges to this pathway. Firstly, in order for a typical stellar mass black hole of $\\approx$\u00a0100 [$\\rm{M_{\\odot}}~$]{}to grow by the required 6 - 8 orders of magnitude within a few hundred Myr, it would need to accrete at the Eddington" -"---\nabstract: 'Leveraging the extensive training data from SA-1B, the Segment Anything Model (SAM) demonstrates remarkable generalization and zero-shot capabilities. However, as a category-agnostic instance segmentation method, SAM heavily relies on prior manual guidance, including points, boxes, and coarse-grained masks. Furthermore, its performance in remote sensing image segmentation tasks remains largely unexplored and unproven. 
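For reference, this is roughly what SAM's prompt-driven interface looks like with the public `segment_anything` package, here with a single manual box prompt; the checkpoint filename and the stand-in image are assumptions.

```python
# Sketch of SAM's prompt-driven interface (manual box prompt), using the
# public segment_anything package; the checkpoint path is an assumption.
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in RGB image
predictor.set_image(image)
# A single bounding-box prompt (x0, y0, x1, y1) in pixel coordinates.
masks, scores, _ = predictor.predict(
    box=np.array([100, 100, 300, 300]), multimask_output=False
)
print(masks.shape, scores)
```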
In this paper, we aim to develop an automated instance segmentation approach for remote sensing images, based on the foundational SAM model and incorporating semantic category information. Drawing inspiration from prompt learning, we propose a method to learn the generation of appropriate prompts for SAM. This enables SAM to produce semantically discernible segmentation results for remote sensing images, a concept we have termed RSPrompter. We also propose several ongoing derivatives for instance segmentation tasks, drawing on recent advancements within the SAM community, and compare their performance with RSPrompter. Extensive experimental results, derived from the WHU building, NWPU VHR-10, and SSDD datasets, validate the effectiveness of our proposed method. The code for our method is publicly available at .'\nauthor:\n- |\n Keyan\u00a0Chen$^1$,\u00a0Chenyang\u00a0Liu$^1$,\u00a0Hao\u00a0Chen$^2$,\u00a0Haotian\u00a0Zhang$^1$,\u00a0Wenyuan\u00a0Li$^3$, Zhengxia\u00a0Zou$^1$,\u00a0and\u00a0Zhenwei\u00a0Shi$^{1, \\star}$\\\n $^1$Beihang University, $^2$Shanghai AI Laboratory, $^3$The University of Hong" -"---\nabstract: 'In the present paper we propose a new approach on \u2018distributed systems\u2019: the processes are represented through total orders and the communications are characterized by means of biorders. The resulting distributed systems capture situations met in various fields (such as computer science, economics and decision theory). We investigate questions associated to the numerical representability of order structures, relating concepts of economics and computing to each other. The concept of \u2018quasi-finite partial orders\u2019 is introduced as a finite family of chains with a communication between them. The representability of this kind of structure is studied, achieving a construction method for a finite (continuous) Richter-Peleg multi-utility representation.'\nauthor:\n- Asier Estevan Muguerza\ntitle: 'A new approach on distributed systems: orderings and representability'\n---\n\n[Universidad P\u00fablica de Navarra,\\\nDpto. EIM, INARBE Institute\\\nCampus Arrosad\u00eda. Pamplona, 31006, Spain.\\\nasier.mugertza@unavarra.es]{}\n\n**[Keywords:]{}**\n\nDistributed systems, partial orders, biorders, representability.\n\nIntroduction and motivation {#s1}\n===========================\n\nIn the present paper we focus on an ordered structure known as *distributed system*. Although this concept belongs primarily to the field of computer science, its mathematical structure is common to many areas.\n\nThe representability issue appears too in a wide range of fields, such as economics and decision making [@Alc;" -"---\nabstract: 'We search for Nicomachean identities by adding translation parameters, variable parameters, sequence products and adjoining further numbers to sequences. The solutions of definite and indefinite quadratic forms arise in this study of cubic equations obtained from translation parameters. Our search leads to many general Nicomachean-type identities. We also study the geometry of adjoining two numbers to sequences satisfying the Nicomachean identity.'\nauthor:\n- 'Seon-Hong Kim, Kenneth B. 
Stolarsky'\ntitle: Translations and extensions of the Nicomachus identity\n---\n\n[**Key words and phrases:**]{} sums of cubes, variable parameters, translation parameters, Nicomachean identity, Pell\u2019s equations, sequential multiplications, finite differences, Fibonacci numbers, elliptic curves 0.3cm [**2010 Mathematics Subject Classification:**]{} Primary 11B83; Secondary 11D25, 30C15\n\nINTRODUCTION {#intro}\n============\n\nDefine an operator $\\nu$ on finite sequences of complex numbers $a_i$, i.e., sequences $\\sigma=\\{a_1, a_2, \\cdots, a_n\\}$, by $$\\nu(a_1, a_2, \\cdots, a_n)=\\left(\\sum_{i=1}^n a_i\\right)^2-\\sum_{i=1}^n a_i^3.$$ The ancient classical Nicomachean identity is $$\\nu\\left(1, 2, 3, \\cdots, n\\right)=0.$$ The above definition of $\\nu$ does in fact make sense for the $a_i$ being elements of any ring.\n\nThe number and variety of Nicomachean-type identities is very large. Let $\\sigma$ be an arbitrary sequence of positive integers. Then Mason has shown [@Ma], by using bag products, that it can be" -"---\nauthor:\n- 'Oliver\u00a0Buchmueller,'\n- 'John\u00a0Ellis,'\n- Ulrich\u00a0Schneider\nbibliography:\n- 'main.bib'\ntitle: 'Large-Scale Atom Interferometry for Fundamental Physics'\n---\n\n[abstract[ Atom interferometers measure quantum interference patterns in the wave functions of cold atoms that follow superpositions of different space-time trajectories. These can be sensitive to phase shifts induced by fundamental physics processes such as interactions with ultralight dark matter or the passage of gravitational waves. The capabilities of large-scale atom interferometers are illustrated by their estimated sensitivities to the possible interactions of ultralight dark matter with electrons and photons, and to gravitational waves in the frequency range around 1 Hz, intermediate between the peak sensitivities of the LIGO and LISA experiments. Atom interferometers can probe ultralight scalar couplings with much greater sensitivity than is currently available from probes of the Equivalence Principle. Their sensitivity to mid-frequency gravitational waves may open a window on mergers of masses intermediate between those discovered by the LIGO and Virgo experiments and the supermassive black holes present in the cores of galaxies, as well as fundamental physics processes in the early Universe such as first-order phase transitions and the evolution of networks of cosmic strings. \u00a0\u00a0\\\n\u00a0\u00a0\\\n\u00a0\u00a0\\\n\u00a0\u00a0\\\n\u00a0\u00a0\\\n\u00a0\u00a0\\\nAION-REPORT/2023-04," -"---\nabstract: 'The motion of S2, one of the stars closest to the Galactic Centre, has been measured accurately and used to study the compact object at the centre of the Milky Way. It is commonly accepted that this object is a supermassive black hole but the nature of its environment is open to discussion. Here, we investigate the possibility that dark matter in the form of an ultralight scalar field \u201ccloud\u201d clusters around Sgr\u00a0A\\*. We use the available data for S2 to perform a Markov Chain Monte Carlo analysis and find the best-fit estimates for a scalar cloud structure. Our results show no substantial evidence for such structures. 
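As a toy illustration of this kind of inference (emphatically not the GRAVITY pipeline), one can bound a single hypothetical cloud-to-central-mass fraction $f$ against mock orbital residuals with `emcee`; every number below is invented.

```python
# Toy MCMC sketch in the spirit of such fits (not the actual analysis):
# sample one hypothetical parameter, the cloud-to-central-mass fraction f,
# against mock residuals, using emcee.
import numpy as np
import emcee

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=50)  # mock residuals consistent with f = 0

def log_prob(theta):
    f = theta[0]
    if not 0.0 <= f <= 0.1:          # flat prior on the mass fraction
        return -np.inf
    model = 10.0 * f                 # hypothetical linear response to f
    return -0.5 * np.sum((data - model) ** 2)

sampler = emcee.EnsembleSampler(16, 1, log_prob)
sampler.run_mcmc(rng.uniform(0.0, 0.1, (16, 1)), 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print("95% upper bound on f:", np.quantile(samples, 0.95))
```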
When the cloud size is of the order of the size of the orbit of S2, we are able to constrain its mass to be smaller than $0.1\\%$ of the central mass, setting a strong bound on the presence of new fields in the galactic centre.'\nauthor:\n- |\n GRAVITY Collaboration [^1]: A.\u00a0Foschi$^{1, 2}$, R.\u00a0Abuter$^{3}$, N.\u00a0Aimar$^{4}$, P.\u00a0Amaro Seoane$^{5, 7, 9, 26}$, A.\u00a0Amorim$^{1, 10}$, M.\u00a0Baub\u00f6ck$^{5, 11}$, J.P.\u00a0Berger$^{12}$, H.\u00a0Bonnet$^{3}$, G.\u00a0Bourdarot$^{5}$, W.\u00a0Brandner$^{13}$, V.\u00a0Cardoso$^{1, 6}$, Y.\u00a0Cl\u00e9net$^{4}$, Y.\u00a0Dallilar$^{5}$, R.\u00a0Davies$^{5}$, P.T.\u00a0de" -"---\nabstract: 'Links in $S^3$ can be encoded by grid diagrams; a grid diagram is a collection of points on a toroidal grid such that each row and column of the grid contains exactly two points. Grid diagrams can be reinterpreted as front projections of Legendrian links in the standard contact $3$\u2013sphere. In this paper, we define and investigate triple grid diagrams, a generalization to toroidal diagrams consisting of horizontal, vertical, and diagonal grid lines. In certain cases, a triple grid diagram determines a closed Lagrangian surface in ${\\mathbb{CP}^{2}}$. Specifically, each triple grid diagram determines three grid diagrams (row-column, column-diagonal and diagonal-row) and thus three Legendrian links, which we think of collectively as a Legendrian link in a disjoint union of three standard contact $3$\u2013spheres. We show that a triple grid diagram naturally determines a Lagrangian cap in the complement of three Darboux balls in ${\\mathbb{CP}^{2}}$, whose negative boundary is precisely this Legendrian link. When these Legendrians are maximal Legendrian unlinks, the Lagrangian cap can be filled by Lagrangian slice disks to obtain a closed Lagrangian surface in ${\\mathbb{CP}^{2}}$. We construct families of examples of triple grid diagrams and discuss potential applications to obstructing Lagrangian fillings.'\naddress:\n- 'Max-Planck-Institut f[\u00fcr]{}" -"---\nabstract: 'Multi-talker automatic speech recognition (MT-ASR) has been shown to improve ASR performance on speech containing overlapping utterances from more than one speaker. While MT-ASR models have typically been trained from scratch using simulated overlapping speech datasets, there is generally an underlying goal that these models also obtain state of the art performance on single speaker utterances as well. This implies that they must be competitive with the best available fine-tuned speech models that have been trained using massive datasets collected from a wide variety of task domains. This paper presents an MT-ASR model formed by combining a well-trained foundation model with a multi-talker mask model in a cascaded RNN-T encoder configuration. Experimental results show that the cascade configuration provides improved WER on overlapping speech utterances with respect to a baseline multi-talker model with minimal impact on the performance achievable by the foundation model on non-overlapping utterances.'\naddress: 'Google Inc., New York'\nbibliography:\n- 'refs.bib'\ntitle: 'Cascaded encoders for fine-tuning ASR models on overlapped speech'\n---\n\n**Index Terms**: multi-talker speech recognition\n\nIntroduction {#sec-intro}\n============\n\nIt is well known that overlapping speech exists in utterances arising from human-human interaction\u00a0[@Cetin2006; @Tripathi2020]. 
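A minimal sketch of the cascade idea follows: a frozen foundation encoder feeding a small trainable mask-model encoder; the LSTM layers and dimensions are stand-ins and not the RNN-T architecture used here.

```python
# Minimal sketch of the cascade idea: a frozen foundation encoder feeds a
# small multi-talker mask encoder; shapes and sizes are illustrative.
import torch
import torch.nn as nn

class CascadedEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.foundation = nn.LSTM(80, dim, num_layers=2, batch_first=True)
        self.mask_model = nn.LSTM(dim, dim, num_layers=1, batch_first=True)
        for p in self.foundation.parameters():  # keep the foundation fixed
            p.requires_grad = False

    def forward(self, feats):                   # feats: (N, T, 80)
        h, _ = self.foundation(feats)
        h, _ = self.mask_model(h)               # tuned on overlapped speech
        return h

enc = CascadedEncoder()
print(enc(torch.rand(2, 100, 80)).shape)       # (2, 100, 256)
```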
A study of utterances in a meetings domain found" -"---\nabstract: 'Glass surfaces of transparent objects and mirrors are not able to be uniquely and explicitly characterized by their visual appearances because they contain the visual appearance of other reflected or transmitted surfaces as well. Detecting glass regions from a single-color image is a challenging task. Recent deep-learning approaches have paid attention to the description of the glass surface boundary, where the transition of visual appearances between glass and non-glass surfaces is observed. In this work, we analytically investigate how the glass surface boundary helps to characterize glass objects. Inspired by prior semantic segmentation approaches with challenging image types such as X-ray or CT scans, we propose separated internal-external boundary attention modules that individually learn and selectively integrate visual characteristics of the inside and outside regions of the glass surface from a single color image. Our proposed method is evaluated on six public benchmarks in comparison with state-of-the-art methods, showing promising results.'\nauthor:\n- |\n Dongshen Han$^{1}$ Seungkyu Lee$^{1}$\\\n $^1$Kyunghee University\\\n [{han-0129, seungkyu}@khu.ac.kr]{}\nbibliography:\n- 'egbib.bib'\ntitle: 'Internal-External Boundary Attention Fusion for Glass Surface Segmentation'\n---\n\nIntroduction {#sec:intro}\n============\n\nTransparent objects and mirror surfaces are everywhere around us, such as windows, bottles, eyeglasses, and omnipresent mirrors. They are expected to be detected, localized" -"---\nabstract: 'We investigate the sparsity of null vectors of real symmetric matrices whose off-diagonal pattern of zero and nonzero entries is described by the adjacencies of a graph. We use the definition of the spark of a matrix, the smallest number of nonzero coordinates of any null vector, to define the spark of a graph as the smallest possible spark of a corresponding matrix. We study connections of graph spark to well-known concepts including minimum rank, forts, orthogonal representations, Parter and Fiedler vertices, and vertex connectivity.'\nauthor:\n- 'Louis Deaett[^1]'\n- 'Shaun Fallat [^2]'\n- 'Veronika Furst [^3]'\n- 'John Hutchens [^4]\u00a0[^5]'\n- 'Lon Mitchell [^6]'\n- 'Yaqi Zhang[^7]'\nbibliography:\n- 'refs.bib'\ntitle: The Spark of Symmetric Matrices Described by a Graph\n---\n\n[**Keywords:**]{} Null vectors, maximum nullity, spark, zero forcing, forts, connectivity, generic nullity, minimum rank.\n\n[**AMS subject classification:**]{} 05C50, 15A18 (primary) 15A29 (secondary).\n\nIntroduction\n============\n\nDenote the set of all real symmetric $n \\times n$ matrices by $S_n(\\mathbb{R})$, and suppose $A=[a_{ij}] \\in S_n(\\mathbb{R})$. We say $G(A)$ is the graph of $A$ if $G(A)$ has the vertex set $V=\\{v_1,v_2,\\dotsc,v_n\\}$ and edge set $E=\\{v_iv_j \\mid a_{ij} \\neq 0, \\ i \\neq j\\}$. Note that $G(A)$ is independent of" -"---\nabstract: 'Recently, high-order sideband polarimetry has been established as an experimental method that links the polarization of sidebands to an interference of Bloch wavefunctions. However, the robustness of sideband polarizations to increasing dephasing remains to be explored. Here, we investigate the dependence of high-order sideband generation in bulk gallium arsenide on dephasing by tuning temperature. We find that the intensities of the sidebands, but not their polarizations, depend strongly on temperature. 
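For orientation, a common textbook parametrization of such temperature-dependent dephasing separates a term linear in $T$ (acoustic phonons) from a term weighted by the thermal occupation of LO phonons; this generic form is consistent with the acoustic and LO coefficients quoted next, but it is an assumption here, not necessarily the authors' exact fit function: $$\\Gamma(T) = \\Gamma_{0} + \\Gamma_{\\text{A}}\\,T + \\frac{\\Gamma_{\\text{LO}}}{e^{E_{\\text{LO}}/k_{B}T} - 1}\\;,$$ where $E_{\\text{LO}}$ is the LO-phonon energy and the Bose factor counts thermally excited LO phonons.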
Using our polarimetry method, we are able to isolate the contributions of electron-heavy hole (HH) and electron-light hole (LH) pairs to sideband intensities, and separately extract the nonequilibrium dephasing coefficients associated with the longitudinal optical (LO) phonons and acoustic (A) phonons for each species of electron-hole pair. We find that $\\Gamma_{\\text{HH},\\text{A}} = 6.1 \\pm 1.6$ $\\mu$eV/K, $\\Gamma_{\\text{LH},\\text{A}} < 1.5$ $\\mu$eV/K, $\\Gamma_{\\text{HH},\\text{LO}} = 14 \\pm 3$ meV, and $\\Gamma_{\\text{LH},\\text{LO}} = 30 \\pm 3$ meV.'\nauthor:\n- 'Joseph B. Costello'\n- 'Seamus D. O\u2019Hara'\n- Qile Wu\n- Moonsuk Jang\n- 'Loren N. Pfeiffer'\n- 'Ken W. West'\n- 'Mark S. Sherwin'\nbibliography:\n- 'references.bib'\ntitle: 'Breaking a Bloch-wave interferometer: quasiparticle species-specific temperature-dependent nonequilibrium dephasing'\n---\n\n\\[intro\\]Introduction\n=====================\n\nOne of the chief aims of modern condensed matter physics is to" -"---\nabstract: 'Liquidity providers (LPs) on decentralized exchanges (DEXs) can protect themselves from adverse selection risk by updating their positions more frequently. However, repositioning is costly, because LPs have to pay gas fees for each update. We analyze the causal relation between repositioning and liquidity concentration around the market price, using the entry of a blockchain scaling solution, Polygon, as our instrument. Polygon\u2019s lower gas fees allow LPs to update more frequently than on Ethereum. Our results demonstrate that higher repositioning intensity and precision lead to greater liquidity concentration, which benefits small trades by reducing their slippage.'\nauthor:\n- 'Basile Caparros[^1], Amit Chaudhary[^2], Olga Klein[^3]'\nbibliography:\n- 'biblio.bib'\ntitle: 'Blockchain scaling and liquidity concentration on decentralized exchanges[^4]'\n---\n\n**JEL classifications:** G14, G18, G19\n\n**Keywords:** decentralized exchanges, FinTech, gas fees, liquidity concentration, market depth, slippage\n\nIntroduction\n============\n\nOne of the major cryptocurrency exchanges, FTX, filed for bankruptcy on November 11, 2022. Crucially, FTX kept custody of its client deposits. Thus, whereas the exact reasons for its collapse are still being investigated by the US Securities and Exchange Commission (SEC), at least \\$1B of its customers\u2019 funds have vanished, according to Reuters[^5]. FTX was a centralized exchange (CEX), with all trades taking" -"---\nabstract: 'The aim of this research is to recognize human actions performed on stage to aid visually impaired and blind individuals. To achieve this, we have created a theatre human action recognition system that uses skeleton data captured by depth image as input. We collected new samples of human actions in a theatre environment, and then tested the transfer learning technique with three pre-trained Spatio-Temporal Graph Convolution Networks for skeleton-based human action recognition: the spatio-temporal graph convolution network, the two-stream adaptive graph convolution network, and the multi-scale disentangled unified graph convolution network. We selected the NTU-RGBD human action benchmark as the source domain and used our collected dataset as the target domain. We analyzed the transferability of the pre-trained models and proposed two configurations to apply and adapt the transfer learning technique to the diversity between the source and target domains. 
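One typical configuration of this kind, sketched below, freezes the pretrained backbone and retrains only a new classifier head on the target data; the tiny model and the class count are placeholders, not an actual spatio-temporal graph convolution network.

```python
# Sketch of one typical transfer-learning configuration (freeze the
# pretrained backbone, retrain only the classifier head); the model here
# is a stand-in, not an actual ST-GCN implementation.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(75, 128), nn.ReLU())  # "pretrained" part
head = nn.Linear(128, 10)  # new head; 10 action classes is hypothetical

for p in backbone.parameters():
    p.requires_grad = False                    # frozen-backbone configuration

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
x, y = torch.rand(8, 75), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
optimizer.step()
print(loss.item())
```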
The use of transfer learning helped to improve the performance of the human action system within the context of theatre. The results indicate that Spatio-Temporal Graph Convolution Networks is positively transferred, and there was an improvement in performance compared to the baseline without transfer learning.'\nauthor:\n- Leyla Benhamida\n- Slimane Larabi\ntitle: 'Theater Aid System for the" -"---\nabstract: 'Procuring flexibility services from energy consumers has been a potential solution to accommodating renewable generations in future power system. However, efficiently and securely coordinating the behaviors of diverse market participants within a privacy-preserving environment remains a challenge. This paper addresses this issue by introducing a game-theoretic market framework for real-time energy balancing. The competition among energy consumers is modeled as a Generalized Nash Game (GNG), which enables the analysis of their strategic decision-making. To mitigate the market power exerted by active energy consumers, we employ a supply function-based bidding method in this market design. We incorporate physical constraints to ensure the secure operation of the distribution network. Previous approaches to steering consumers towards the Generalized Nash Equilibrium (GNE) of this game often necessitate the sharing of private information, either in full or in part, which may not be practically feasible. To overcome this limitation, we propose a preconditioned forward-backward algorithm, with analytical convergence guarantees, that only requires participants to share limited, non-private sensitive information with others. Finally, numerical simulations on the enhanced IEEE 33-bus test case validate the effectiveness of our proposed market mechanism and algorithm.'\nauthor:\n- 'Xiupeng Chen, Koorosh Shomalzadeh, Jacquelien M. A. Scherpen, and Nima" -"---\nabstract: 'Analytical methods are fundamental in studying acoustics problems. One of the important tools is the Wiener-Hopf method, which can be used to solve many canonical problems with sharp transitions in boundary conditions on a plane/plate. However, there are some strict limitations to its use, usually the boundary conditions need to be imposed on parallel lines (after a suitable mapping). Such mappings exist for wedges with continuous boundaries, but for discrete boundaries, they have not yet been constructed. In our previous article, we have overcome this limitation and studied the diffraction of acoustic waves by a wedge consisting of point scatterers. Here, the problem is generalised to an arbitrary number of periodic semi-infinite arrays with arbitrary orientations. This is done by constructing several coupled systems of equations (one for every semi-infinite array) which are treated independently. The derived systems of equations are solved using the discrete Wiener\u2013Hopf technique and the resulting matrix equation is inverted using elementary matrix arithmetic. Of course, numerically this matrix needs to be truncated, but we are able to do so such that thousands of scatterers on every array are included in the numerical results. Comparisons with other numerical methods are considered, and their strengths/weaknesses" -"---\nabstract: 'We investigate the problem of joint statistical estimation of several parameters for a stochastic differential equations driven by an additive fractional Brownian motion. 
Based on discrete-time observations of the model, we construct an estimator of the Hurst parameter, the diffusion parameter and the drift, which lies in a parametrised family of coercive drift coefficients. Our procedure is based on the assumption that the stationary distribution of the SDE and of its increments permits to identify the parameters of the model. Under this assumption, we prove consistency results and derive a rate of convergence for the estimator. Finally, we show that the identifiability assumption is satisfied in the case of a family of fractional Ornstein-Uhlenbeck processes and illustrate our results with some numerical experiments.'\nauthor:\n- 'El Mehdi Haress [^1]'\n- 'Alexandre Richard[^2]'\ntitle: \n---\n\nIntroduction {#sec:intro}\n============\n\nConsider the following $\\mathbb{R}^d$-valued stochastic differential equation $$\\begin{aligned}\n\\label{eq:fsde0}\n Y_t = Y_0 + \\int_0^t b_{\\xi_0}(Y_s) d s + \\sigma_0 B_t,\\end{aligned}$$ where $B$ is an $\\mathbb{R}^d$-fractional Brownian motion (fBm) with Hurst parameter $H_0 \\in (0,1)$. The goal in this work is to estimate simultaneously the parameter $\\xi_{0}$, the diffusion coefficient $\\sigma_{0}$ and the Hurst parameter $H_0$ from discrete observations of the process" -"---\nabstract: |\n We provide a fully nonlinear port-Hamiltonian formulation for discrete elastodynamical systems as well as a structure-preserving time discretization. The governing equations are obtained in a variational manner and represent index-1 differential algebraic equations. Performing an index reduction one obtains the port-Hamiltonian state space model, which features the nonlinear strains as an independent state next to position and velocity. Moreover, hyperelastic material behavior is captured in terms of a nonlinear stored energy function. The model exhibits passivity and losslessness and has an underlying symmetry yielding the conservation of angular momentum. We perform temporal discretization using the midpoint discrete gradient, such that the beneficial properties are inherited by the developed time stepping scheme in a discrete sense. The numerical results obtained in a representative example are demonstrated to validate the findings.\\\n \\\n [**Keywords:** Port-Hamiltonian systems; Structure-preserving discretization; Discrete mechanics; Nonlinear elastodynamics; Discrete gradients. ]{}\nauthor:\n- 'Philipp L. Kinon [^1]'\n- Tobias Thoma\n- Peter Betsch\n- Paul Kotyczka\nbibliography:\n- 'bib.bib'\ntitle: '[ Discrete nonlinear elastodynamics in a port-Hamiltonian framework[^2] ]{}'\n---\n\nIntroduction\n============\n\nDue to its well-known beneficial properties [@duindam_modeling_2009], the port-Hamiltonian (PH) framework has gained large popularity also when modeling elastodynamical systems (which might occur in" -"---\nabstract: 'We theoretically investigate high-order harmonic generation (HHG) in graphene under mid-infrared (MIR) and terahertz (THz) fields based on a quantum master equation. Numerical simulations show that MIR-induced HHG in graphene can be enhanced by a factor of 10 for fifth harmonic and a factor of 25 for seventh harmonic under a THz field with a peak strength of 0.5\u00a0MV/cm by optimizing the relative angle between the MIR and THz fields. To identify the origin of this enhancement, we compare the fully dynamical calculations with a simple thermodynamic model and a nonequilibrium population model. 
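Generically, harmonic yields in such comparisons are read off from the Fourier transform of a simulated current, as in the sketch below; the cubic toy nonlinearity is only a stand-in for the actual quantum dynamics of graphene.

```python
# Generic sketch of how harmonic yields are read off from a simulated
# current j(t): Fourier-transform and sample at odd multiples of the
# driving frequency. The cubic nonlinearity below is a toy stand-in.
import numpy as np

w0 = 1.0                                    # MIR carrier frequency (arb. units)
t = np.linspace(0, 200 * np.pi, 2 ** 16)
field = np.sin(w0 * t)
current = field + 0.1 * field ** 3          # toy nonlinear response

spectrum = np.abs(np.fft.rfft(current)) ** 2
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2 * np.pi
for n in (1, 3, 5):                         # odd harmonics
    k = np.argmin(np.abs(freqs - n * w0))
    print(f"harmonic {n}: {spectrum[k]:.3e}")
```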
The analysis shows that the enhancement of the high-order harmonics mainly results from a coherent coupling between MIR- and THz-induced transitions that goes beyond a simple THz-induced population contribution.'\nauthor:\n- Wenwen\u00a0Mao\n- Angel\u00a0Rubio\n- 'Shunsuke\u00a0A.\u00a0Sato'\nbibliography:\n- 'ref.bib'\ntitle: 'Enhancement of high-order harmonic generation in graphene by mid-infrared and terahertz fields'\n---\n\n[Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany]{}\n\n[Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany]{} [Center for Computational Quantum Physics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA]{}" -"---\nabstract: 'Hyperspectral images (HSI) captured from earth observing satellites and aircraft are becoming increasingly important for applications in agriculture, environmental monitoring, mining, etc. Due to the limited available hyperspectral datasets, pixel-wise random sampling is the most commonly used training-test dataset partition approach, which has significant overlap between samples in training and test datasets. Furthermore, our experimental observations indicate that regions with larger overlap often exhibit higher classification accuracy. Consequently, the pixel-wise random sampling approach poses a risk of data leakage. Thus, we propose a block-wise sampling method to minimize the potential for data leakage. Our experimental findings also confirm the presence of data leakage in models such as 2DCNN. Further, we propose a spectral-spatial axial aggregation transformer model, namely SaaFormer, to address the challenges associated with hyperspectral image classifiers that consider HSI as long sequential three-dimensional images. The model comprises two primary components: axial aggregation attention and multi-level spectral-spatial extraction. The axial aggregation attention mechanism effectively exploits the continuity and correlation among spectral bands at each pixel position in hyperspectral images, while aggregating spatial dimension features. This enables SaaFormer to maintain high precision even under block-wise sampling. The multi-level spectral-spatial extraction structure is designed to capture the sensitivity" -"---\nabstract: 'Despite its simplicity, the single-trajectory thawed Gaussian approximation has proven useful for calculating vibrationally resolved electronic spectra of molecules with weakly anharmonic potential energy surfaces. Here, we show that the thawed Gaussian approximation can capture surprisingly well even more subtle observables, such as the isotope effects in the absorption spectra, and we demonstrate it on the four isotopologues of ammonia (NH$_{3}$, NDH$_{2}$, ND$_{2}$H, ND$_{3}$). The differences in their computed spectra are due to the differences in the semiclassical trajectories followed by the four isotopologues, and the isotope effects\u2014narrowing of the transition band and reduction of the peak spacing\u2014are accurately described by this semiclassical method. In contrast, the adiabatic harmonic model shows a double progression instead of the single progression seen in the experimental spectra. The vertical harmonic model correctly shows only a single progression but fails to describe the anharmonic peak spacing. 
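The decreasing peak spacing that both harmonic models miss is the textbook signature of anharmonicity. For a Morse-like progression (an illustrative level structure, not this paper's potential energy surface), $$E_{n} = \\hbar\\omega\\left(n+\\tfrac{1}{2}\\right) - \\hbar\\omega x_{e}\\left(n+\\tfrac{1}{2}\\right)^{2} \\quad\\Longrightarrow\\quad E_{n+1}-E_{n} = \\hbar\\omega - 2\\hbar\\omega x_{e}(n+1)\\;,$$ so successive peaks draw closer together with increasing $n$, whereas a purely harmonic progression is equally spaced.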
Analysis of the normal-mode activation upon excitation provides insight into the elusiveness of the symmetric stretching progression in the spectra.'\nauthor:\n- \u0112riks Kl\u0113tnieks\n- Yannick Calvino Alonso\n- 'Ji\u0159\u00ed J.L. Van\u00ed\u010dek'\nbibliography:\n- 'Append\\_biblio53.bib'\n- 'biblio53.bib'\ntitle: Isotope effects in the electronic spectra of ammonia from ab initio semiclassical dynamics\n---\n\n![image](TOC)\n\nVibrationally resolved electronic" -"---\nabstract: 'This is a lightweight manual for [`PTArcade`]{}, a wrapper of `ENTERPRISE` and `ceffyl` that allows for easy implementation of new-physics searches in PTA data. In this manual, we describe how to get [`PTArcade`]{} installed (either on your local machine or an HPC cluster). We discuss how to define a stochastic or deterministic signal and how [`PTArcade`]{} implements these signals in PTA-analysis pipelines. Finally, we show how to handle and analyze the [`PTArcade`]{} output using a series of utility functions that come together with [`PTArcade`]{}.'\nauthor:\n- '**Main developers:**'\n- |\n [Andrea Mitridate]{}\\\n [*Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, D-22607 Hamburg, Germany*]{}\n- |\n [David Wright]{}\\\n [*Department of Physics, University of Central Florida, Orlando, FL 32816-2385, USA*]{}\n- '**Contributors:**'\n- |\n [Richard von Eckardstein and Tobias Schr\u00f6der]{}\\\n [*Institute for Theoretical Physics, University of M\u00fcnster, 48149 M\u00fcnster, Germany*]{}\n- |\n [Jonathan Nay]{}\\\n [*Department of Physics, The University of Texas at Austin, Austin, TX 78712, USA*]{}\n- |\n [Ken Olum]{}\\\n [*Institute of Cosmology, Department of Physics and Astronomy, Tufts University, Medford, MA 02155, USA*]{}\n- |\n [Kai Schmitz]{}\\\n [*Institute for Theoretical Physics, University of M\u00fcnster, 48149 M\u00fcnster, Germany*]{}\n- |\n [Tanner Trickle]{}\\\n [*Theoretical Physics Division, Fermi National Accelerator Laboratory, Batavia, IL 60510," -"---\nabstract: 'In this work we present several characteristic examples of theories of gravity and particle physics scenarios that may yield an observable energy spectrum of stochastic primordial gravitational waves, compatible with the 2023 NANOGrav observations. The resulting theories yield a flat or a peak-like energy spectrum, and we further seek the conditions which if hold true, the energy spectrum can be compatible with the recent NANOGrav stochastic gravitational wave detection. As we show, in most cases a blue tilted spectrum combined with a relatively low reheating temperature is needed, the scale of which is determined by whether the radiation domination era is ordinary or it is an abnormal radiation domination era. One intriguing Higgs-axion model, which predicts short slow-roll eras for the axion field at the post-electroweak breaking epoch, which eventually change the total equation of state parameter at the reheating era, can explain the NANOGrav signal, if a blue tilted tensor spectral index inflationary era precedes the reheating era, and a reheating temperature of the order $\\mathcal{O}(400)\\,$GeV. This specific model produces an energy spectrum of primordial gravitational waves with a characteristic peak that is detectable from both the NANOGrav and future LISA experiment, but not from the future" -"---\nabstract: 'This study delves into the temporal dynamics within the equity market through the lens of bond traders. 
Recognizing that the riskless interest rate fluctuates over time, we leverage the Black-Derman-Toy model to trace its temporal evolution. To gain insights from a bond trader\u2019s perspective, we focus on a specific type of bond: the zero-coupon bond. This paper introduces a pricing algorithm for this bond and presents a formula that can be used to ascertain its real value. By crafting an equation that juxtaposes the theoretical value of a zero-coupon bond with its actual value, we can deduce the risk-neutral probability. It is noteworthy that the risk-neutral probability correlates with variables like the instantaneous mean return, instantaneous volatility, and inherent upturn probability in the equity market. Examining these relationships enables us to discern the temporal shifts in these parameters. Our findings suggest that the mean starts at a negative value, eventually plateauing at a consistent level. The volatility, on the other hand, initially has a minimal positive value, peaks swiftly, and then stabilizes. Lastly, the upturn probability is initially significantly high, plunges rapidly, and ultimately reaches equilibrium.'\nauthor:\n- 'Yifan He[^1]'\n- 'Yuan Hu[^2]'\n- 'Svetlozar Rachev[^3]'\nbibliography:\n-" -"---\nbibliography:\n- 'references.bib'\n---\n\naddtoreset[equation]{}[section]{}\n\n\\\n\nThe Swampland Distance Conjecture (SDC) states that, as we move towards an infinite distance point in moduli space, a tower of states becomes exponentially light with the geodesic distance in any consistent theory of Quantum Gravity. Although this fact has been tested in large sets of examples, it is fair to say that a bottom-up justification based on fundamental Quantum Gravity principles that explains both the *geodesic* requirement and the *exponential behavior* has been missing so far. In the present paper we address this issue by making use of the Covariant Entropy Bound as applied to the EFT. When applied to backgrounds of the Dynamical Cobordism type in theories with a moduli space, we are able to recover these main features of the SDC. Moreover, this naturally leads to universal lower and upper bounds on the \u2018decay rate\u2019 parameter $\\lambda_{\\text{sp}}$ of the species scale, that we propose as a convex hull condition under the name of Species Scale Distance Conjecture (SSDC). This is in contrast to already proposed universal bounds, that apply to the SDC parameter of the lightest tower. We also extend the analysis to the case in which asymptotically exponential potentials" -"---\nabstract: 'In this work is considered a diffusion problem, referred to as the *Ventcel problem*, involving a second order term on the domain boundary (the Laplace-Beltrami operator). A variational formulation of the Ventcel problem is studied, leading to a finite element discretization. The focus is on the construction of high order curved meshes for the discretization of the physical domain and on the definition of the lift operator, which is aimed to transform a function defined on the mesh domain into a function defined on the physical one. This *lift* is defined in a way as to satisfy adapted properties on the boundary, relatively to the trace operator. Error estimations are computed and expressed both in terms of finite element approximation error and of geometrical error, respectively associated to the finite element degree $k\\ge 1$ and to the mesh order $r\\ge 1$. 
The numerical experiments we conducted validate the obtained results and the proven *a priori* error estimates, which depend on the two parameters $k$ and\u00a0$r$.'\nauthor:\n- 'Fabien Caubet[^1], Joyce Ghantous[^2], Charles Pierre[^3]'\nbibliography:\n- 'biblio.bib'\ntitle: A priori error estimates of a diffusion equation with Ventcel boundary conditions on curved meshes\n---\n\nIntroduction" -"---\nabstract: 'Information is instrumental in our understanding of thermodynamics. Their interplay has been studied through completely degenerate Hamiltonians whereby the informational contributions to thermodynamic transformations can be isolated. In this setting, all states other than the maximally mixed state are considered to be in informational non-equilibrium. An important yet still open question is: how to characterise the ability of quantum dynamics to maintain informational non-equilibrium? Here, the dynamical resource theory of informational non-equilibrium preservability is introduced to begin providing an answer to this question. A characterisation of the allowed operations is given for qubit channels and the $n$-dimensional Weyl-covariant channels - a physically relevant subset of the general channels. An operational interpretation of a state discrimination game with Bell state measurements is given. Finally, an explicit link between a channel\u2019s classical capacity and its ability to maintain informational non-equilibrium is made.'\nauthor:\n- Benjamin Stratton\n- 'Chung-Yun Hsieh'\n- Paul Skrzypczyk\nbibliography:\n- 'mainTextBib.bib'\ntitle: 'The Dynamical Resource Theory of Informational Non-Equilibrium'\n---\n\n[\\[sec:Introdcution\\]Introduction]{}\n====================================\n\nResources are precious. Their value arises from their limitation, incentivising them to be efficiently utilised and maintained. Formally, an object is considered to be a resource if it can be used by some" -"---\nabstract: 'Resilience of cyber-physical networks to unexpected failures is a critical need widely recognized across domains. For instance, power grids, telecommunication networks, transportation infrastructures and water treatment systems have all been subject to disruptive malfunctions and catastrophic cyber-attacks. Following such adverse events, we investigate scenarios where a network node suffers a loss of control authority over some of its actuators. These actuators are not following the controller\u2019s commands and are instead producing undesirable outputs. The repercussions of such a loss of control can propagate and destabilize the whole network despite the malfunction occurring at a single node. To assess system vulnerability, we establish resilience conditions for networks with a subsystem enduring a loss of control authority over some of its actuators. Furthermore, we quantify the destabilizing impact on the overall network when such a malfunction perturbs a nonresilient subsystem. We illustrate our resilience conditions on two academic examples and on the classical IEEE 39-bus system.'\nauthor:\n- 'Jean-Baptiste Bouvier[^1], Sai Pushpak Nandanoori[^2], Melkior Ornik[^3]'\nbibliography:\n- 'references.bib'\ntitle: 'Losing Control of your Network? 
Try Resilience Theory'\n---\n\nIntroduction\n============\n\nResilience of cyber-physical networks to catastrophic events is a crucial challenge, widely recognized across government levels [@White_house; @Europe] and research" -"---\nabstract: 'We construct and dynamically evolve dipolar, self-interacting scalar boson stars in a model with sextic (+ quartic) self-interactions. The domain of existence of such dipolar *$Q$-stars* has a similar structure to that of the fundamental monopolar stars of the same model. For the latter it is structured in a Newtonian plus a relativistic branch, wherein perturbatively stable solutions exist, connected by a middle unstable branch. Our evolutions support similar dynamical properties of the dipolar $Q$-stars that: 1) in the Newtonian and relativistic branches are dynamically robust over time scales longer than those for which dipolar stars without self-interactions are seen to decay; 2) in the middle branch migrate to either the Newtonian or the relativistic branch; 3) beyond the relativistic branch decay to black holes. Overall, these results strengthen the observation, seen in other contexts, that self-interactions can mitigate dynamical instabilities of scalar boson star models.'\nauthor:\n- Pedro\u00a0Ildefonso\n- Miguel\u00a0Zilh\u00e3o\n- Carlos\u00a0Herdeiro\n- Eugen\u00a0Radu\n- 'Nuno M. Santos'\nbibliography:\n- 'references.bib'\ntitle: 'Self-interacting dipolar boson stars and their dynamics'\n---\n\nIntroduction {#sec:intro}\n============\n\nAs it is by now well-understood, Einstein\u2019s gravity minimally coupled to massive scalar fields gives rise to macroscopic stable configurations" -"---\nabstract: 'Multi-cancer early detection (MCED) tests offer to screen for multiple types of cancer with a single blood sample. Despite their promising diagnostic performance, evidence regarding their population benefit is not yet available. Expecting that benefit will derive from detecting cancer before it progresses to an advanced stage, we develop a general two-stage model to project the reduction in advanced-stage diagnoses given stage-specific test sensitivities and testing ages. The model can be estimated using cancer registry data and assumptions about overall and advanced-stage preclinical sojourn times. We first estimate the model for lung cancer and validate it using the stage shift observed in the National Lung Screening Trial. We then estimate the model for liver, pancreas, and bladder cancer, which have no recommended screening tests, and we project stage shifts under a shared MCED testing protocol. Our framework transparently integrates available data and working hypotheses to project reductions in advanced-stage diagnoses due to MCED testing.'\nauthor:\n- 'Jane M. Lange$^1$'\n- 'Kemal C. Gogebakan$^2$'\n- Roman Gulati$^2$\n- 'Ruth Etzioni$^{2,3}$'\nbibliography:\n- 'stageshiftbib.bib'\ndate: |\n $^1$Oregon Health and Science University\\\n $^2$Fred Hutchinson Cancer Research Center\\\n $^3$University of Washington, Department of Health Services\\\n June 30, 2023 \ntitle: 'A general framework" -"---\nabstract: 'The notion of a continuous $G$-action on a topological space readily generalizes to that of a continuous $D$-action, where $D$ is any small category. Dror Farjoun and Zabrodsky introduced a generalized notion of orbit, which is key to understanding spaces with continuous $D$-action. 
We give an overview of the theory of orbits and then prove a generalization of \u201cElmendorf\u2019s Theorem,\u201d which roughly states that the homotopical data of a $D$-space is precisely captured by the homotopical data of its orbits.'\naddress: ' Department of Mathematics, Vanderbilt University, Nashville, TN 37240 USA '\nauthor:\n- Hannah Housden\nbibliography:\n- 'cited.bib'\ntitle: 'Elmendorf\u2019s Theorem for Diagrams'\n---\n\n[^1]\n\nIntroduction\n============\n\nEquivariant homotopy theory is the study of topological spaces with the action of a (usually finite) group $G$. Any $G$-space $X$ automatically inherits an action via any subgroup $H \\leq G$, and the corresponding fixed-point subspaces $X^H$ are key to understanding the structure of $X$. One concrete way to see this is via the celebrated result of \u201cElmendorf\u2019s theorem,\u201d which was originally proven by Elmendorf [@Elmendorf]. The following is a reformulation due to Piacenza [@Piacenza Theorem 6.3]:\n\nLet $G$ be a topological group, and let $\\mathcal{O}_G$ be the category" -"---\nabstract: 'We study the problem of watermarking text generated by large language models (LLMs) \u2014 one of the most promising approaches for addressing the safety challenges of LLM usage. In this paper, we propose a rigorous theoretical framework to quantify the effectiveness and robustness of LLM watermarks. We propose a robust and high-quality watermark method, [Unigram-Watermark]{}, by extending an existing approach with a simplified fixed grouping strategy. We prove that our watermark method enjoys guaranteed generation quality, correctness in watermark detection, and is robust against text editing and paraphrasing. Experiments on three varying LLMs and two datasets verify that our [Unigram-Watermark]{} achieves superior detection accuracy and comparable generation quality in perplexity, thus promoting the responsible use of LLMs. Code is available at .'\nauthor:\n- |\n Xuandong Zhao Prabhanjan Ananth Lei Li Yu-Xiang Wang\\\n UC Santa Barbara\\\n `{xuandongzhao,prabhanjan,leili,yuxiangw}@cs.ucsb.edu`\\\nbibliography:\n- 'custom.bib'\ntitle: 'Provable Robust Watermarking for AI-Generated Text'\n---\n\n[ ]{}\n\nIntroduction {#sec:intro}\n============\n\nGenerative Artificial Intelligence (AI) [@brown2020language; @ramesh2022hierarchical; @saharia2022photorealistic; @OpenAI2023GPT4TR] has achieved significant progress in recent years, spanning from computer vision (CV) to natural language processing (NLP). Large language models (LLMs) such as ChatGPT [@OpenAI2022ChatGPT] can generate coherent and contextually relevant long-form text in response to user-specified" -"---\nabstract: 'We introduce TTSWING, a novel dataset designed for table tennis swing analysis. This dataset comprises comprehensive swing information obtained through 9-axis sensors integrated into custom-made racket grips, accompanied by anonymized demographic data of the players. We detail the data collection and annotation procedures. Furthermore, we conduct pilot studies utilizing diverse machine learning models for swing analysis. TTSWING holds tremendous potential to facilitate innovative research in table tennis analysis and is a valuable resource for the scientific community. 
We release the dataset and experimental codes at .'\nauthor:\n- 'Che-Yu Chou$^{1\\dagger}$'\n- 'Zheng-Hao Chen$^{1\\dagger}$'\n- 'Yung-Hoh Sheu$^{2}$'\n- |\n Hung-Hsuan Chen$^{1}$ Sheng K.\u00a0Wu$^{3}$ $^1$Department of Computer Science and Information Engineering, National Central University\\\n $^2$Department of Computer Science and Information Engineering, National Formosa University\\\n $^3$Department of Sport Performance, National Taiwan University of Sport\\\n \u00a0tetsuyu89617@gmail.com, s10727220@cycu.org.tw, yhsheu@nfu.edu.tw, hhchen1105@acm.org, skwu@ntus.edu.tw\nbibliography:\n- 'ref.bib'\ntitle: 'TTSWING: a Dataset for Table Tennis Swing Analysis'\n---\n\nIntroduction\n============\n\nSince its inclusion as an Olympic sport in 1988, table tennis has gained widespread popularity and is enjoyed worldwide not only as a competitive sport but also as a common recreational pastime among players of all levels and ages. Meanwhile, with the" -"---\nabstract: 'Single image 3D reconstruction is an important but challenging task that requires extensive knowledge of our natural world. Many existing methods solve this problem by optimizing a neural radiance field under the guidance of 2D diffusion models but suffer from lengthy optimization time, 3D-inconsistent results, and poor geometry. In this work, we propose a novel method that takes a single image of any object as input and generates a full 360-degree 3D textured mesh in a single feed-forward pass. Given a single image, we first use a view-conditioned 2D diffusion model, Zero123, to generate multi-view images for the input view, and then aim to lift them up to 3D space. Since traditional reconstruction methods struggle with inconsistent multi-view predictions, we build our 3D reconstruction module upon an SDF-based generalizable neural surface reconstruction method and propose several critical training strategies to enable the reconstruction of 360-degree meshes. Without costly optimizations, our method reconstructs 3D shapes in significantly less time than existing methods. Moreover, our method favors better geometry, generates more 3D consistent results, and adheres more closely to the input image. We evaluate our approach on both synthetic data and in-the-wild images and demonstrate its superiority in terms" -"---\nabstract: 'Galaxy morphologies provide valuable insights into their formation processes, tracing the spatial distribution of ongoing star formation and encoding signatures of dynamical interactions. While such information has been extensively investigated at low redshift, it is crucial to develop a robust system for characterising galaxy morphologies at earlier cosmic epochs. Relying solely on the nomenclature established for low-redshift galaxies risks introducing biases that hinder our understanding of this new regime. In this paper, we employ variational auto-encoders to perform feature extraction on galaxies at z $>$ 2 using JWST/NIRCam data. Our sample comprises 6869 galaxies at z $>$ 2, including 255 galaxies at z $>$ 5, which have been detected in both the CANDELS/HST fields and CEERS/JWST, ensuring reliable measurements of redshift, mass, and star formation rates. To address potential biases, we eliminate galaxy orientation and background sources prior to encoding the galaxy features, thereby constructing a physically meaningful feature space.
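The VAE-based feature extraction this record describes can be sketched with a minimal convolutional encoder; the layer sizes and latent dimension below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of a VAE encoder for image cutouts, in PyTorch.
# Architecture and latent dimension are illustrative, not the paper's.
import torch
import torch.nn as nn

class ConvVAEEncoder(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.mu = nn.Linear(64 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(64 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

stamps = torch.randn(8, 1, 64, 64)  # stand-in for preprocessed galaxy cutouts
z, mu, logvar = ConvVAEEncoder()(stamps)
print(z.shape)  # torch.Size([8, 16])
```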
We identify 11 distinct morphological classes that exhibit clear separation in various structural parameters, such as [CAS]{}-$M_{20}$, S[\u00e9]{}rsic indices, specific star formation rates, and axis ratios. We observe a decline in the presence of spheroidal-type galaxies with increasing redshift, indicating a dominance of disk-like galaxies in the early universe." -"---\nabstract: 'Pre-trained large language models (PLMs) underlie most new developments in natural language processing. They have shifted the field from application-specific model pipelines to a single model that is adapted to a wide range of tasks. Autoregressive PLMs like GPT-3 or PaLM, alongside techniques like few-shot learning, have additionally shifted the output modality to generation instead of classification or regression. Despite their ubiquitous use, the generation quality of language models is rarely evaluated when these models are introduced. Additionally, it is unclear how existing generation tasks\u2013\u2013while they can be used to compare systems at a high level\u2013\u2013relate to the real world use cases for which people have been adopting them. In this work, we discuss how to adapt existing application-specific generation benchmarks to PLMs and provide an in-depth, empirical study of the limitations and capabilities of PLMs in natural language generation tasks along dimensions such as scale, architecture, input and output language. Our results show that PLMs differ in their applicability to different data regimes and their generalization to multiple languages and inform which PLMs to use for a given generation task setup. We share best practices to be taken into consideration when benchmarking generation capabilities during the development" -"---\nabstract: 'Electronic spin defects in the environment of an optically-active spin can be used to increase the size and hence the performance of solid-state quantum registers, especially for applications in quantum metrology and quantum communication. Previous works on multi-qubit electronic-spin registers in the environment of a Nitrogen-Vacancy (NV) center in diamond have only included spins directly coupled to the NV. As this direct coupling is limited by the central spin coherence time, it significantly restricts the register\u2019s maximum attainable size. To address this problem, we present a scalable approach to increase the size of electronic-spin registers. Our approach exploits a weakly-coupled probe spin together with double-resonance control sequences to mediate the transfer of spin polarization between the central NV spin and an environmental spin that is not directly coupled to it. We experimentally realize this approach to demonstrate the detection and coherent control of an unknown electronic spin outside the coherence limit of a central NV. Our work paves the way for engineering larger quantum spin registers with the potential to advance nanoscale sensing, enable correlated noise spectroscopy for error correction, and facilitate the realization of spin-chain quantum wires for quantum communication.'\nauthor:\n- Alexander Ungar\n- Paola Cappellaro" -"---\nauthor:\n- 'Evgeniy\u00a0Martyushev, Snehal\u00a0Bhayani, and\u00a0Tomas\u00a0Pajdla'\nbibliography:\n- 'biblio.bib'\ntitle: Automatic Solver Generator for Systems of Laurent Polynomial Equations\n---\n\nIntroduction {#sec:intro}\n============\n\nproblems of applied science can be reduced to finding common roots of a system of multivariate (Laurent) polynomial equations. 
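The record above reduces many applied problems to finding the common roots of multivariate polynomial systems. As a toy illustration (not the elimination-template machinery the paper builds), a small bivariate system can be solved symbolically with SymPy:

```python
# Toy illustration of the root-finding task: a small bivariate polynomial
# system solved symbolically. Template-based symbolic-numeric solvers are
# built for solving whole families of such systems quickly.
import sympy as sp

x, y = sp.symbols("x y")
system = [x**2 + y**2 - 5, x*y - 2]
print(sp.solve(system, [x, y]))  # four common roots: (1, 2), (2, 1), (-1, -2), (-2, -1)
```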
Such problems arise in chemistry, mathematical biology, theory of ODE\u2019s, geodesy, robotics, kinematics, acoustics, geometric computer vision, and many other areas. For some problems, it is only required to find all (or some) roots of a particular polynomial system, and the root-finding time does not matter much.\n\nIn contrast, other problems require finding roots for a family of polynomial systems with the same monomial structure, but different coefficient values. For a given set of coefficients, the roots must be found quickly and with acceptable accuracy. Under appropriate genericity assumptions on the coefficients, the dimension and degree of the corresponding polynomial ideal remain unchanged. The state-of-the-art approach to solving such problems is to use symbolic-numeric solvers based on elimination templates\u00a0[@kukelova2008automatic; @larsson2017efficient; @bhayani2020sparse; @martyushev2022optimizing]. These solvers have two main parts. In the first offline part, an elimination template is constructed. The template consists of a map (formulas) from input data to a (Macaulay) coefficient matrix." -"---\nabstract: |\n We use the geometry of functions associated with martingales under nonlinear expectations to solve risk-sensitive Markovian optimal stopping problems. Generalising the linear case due to Dynkin and Yushkievich (1969), the value function is the majorant or pointwise infimum of those functions which dominate the gain function. An emphasis is placed on the geometry of the majorant and pathwise arguments, rather than exploiting convexity, positive homogeneity or related analytical properties. An algorithm is provided to construct the value function at the computational cost of a two-dimensional search.\n\n [**Key words:** Optimal stopping, nonlinear expectation, risk measures, geometric method, Markov property.]{}\n\n [**MSC2010 Classification:** 60G40, 91B08, 91B06, 60J25.]{}\n\n [**JEL Classification:** C61, D81.]{}\nauthor:\n- Tomasz Kosmala\n- 'John Moriarty[^1]'\nbibliography:\n- 'References.bib'\ntitle: 'Optimal stopping with nonlinear expectation: geometric and algorithmic solutions[^2] '\n---\n\nConvexity and associated properties have been used as main tools in the global solution of both stochastic and deterministic optimisation problems. This paper concerns maximising a Markovian performance criterion when stopping a Brownian motion, where the relationship between concavity and excessivity has been used since the seminal work of Dynkin [@dynkin1963optimum]. However, this concavity arises from the linearity of the expectation operator, and there is increasing recent" -"---\nabstract: 'We address polarization coherence in terms of correlations of Stokes variables. We develop a scalar polarization mutual coherence function that allows us to define a polarization coherence time. We find a suitable spectral polarization density allowing a polarization version of the Wiener-Khintchine theorem. With these tools we also address the polarization version of the van Cittert-Zernike theorem.'\nauthor:\n- Alfredo Luis\ntitle: Spatial and temporal coherence via polarization mutual coherence function\n---\n\nIntroduction\n============\n\nCoherence is a fundamental physical concept at the heart of classical optics and quantum physics [@MW95; @EW07].
Moreover, coherence has been acknowledged in quantum theory as the actual resource for the emerging quantum technologies [@SP17; @CG19].\n\nBeing such a fundamental principle, it only manifests indirectly through some other observable phenomena. The standard realm where coherence is addressed is interference. However, another equally valid domain is polarization, which is actually simpler, more robust, and far easier to handle than interference [@EW07; @GO22; @CB98].\n\nPolarization is conveniently expressed by the Stokes parameters, which involve correlations of complex-field amplitudes. In this work we go further and investigate polarization coherence in terms of correlations of Stokes variables.\n\nFollowing the works in Refs. [@SSKF08; @SRFS17; @SSKF09] we develop" -"---\nabstract: 'Classical pulsating stars such as Cepheid and RR Lyrae variables exhibit well-defined Period\u2013Luminosity relations at near-infrared wavelengths. Despite their extensive use as stellar standard candles, the effects of metallicity on Period\u2013Luminosity relations for these pulsating variables, and in turn, on possible biases in distance determinations, are not well understood. We present ongoing efforts in determining accurate and precise metallicity coefficients of Period\u2013Luminosity-Metallicity relations for classical pulsators at near-infrared wavelengths. For Cepheids, it is crucial to obtain a homogeneous sample of photometric light curves and high-resolution spectra for a wide range of metallicities to empirically determine the metallicity coefficient and reconcile differences with the predictions of the theoretical models. For RR Lyrae variables, using their host globular clusters covering a wide range of metallicities, we determined the most precise metallicity coefficient at near-infrared wavelengths, which is in excellent agreement with the predictions of the horizontal branch evolution and stellar pulsation models.'\nauthor:\n- Anupam Bhardwaj\nbibliography:\n- 'mybib\\_final.bib'\ntitle: 'Period\u2013Luminosity\u2013Metallicity relations for Classical Pulsators at Near-infrared Wavelengths'\n---\n\nVariable Stars - Cepheids, RR Lyrae, Pulsations, Distance Scale\n\nIntroduction\n============\n\nCepheid and RR Lyrae variables are the most popular subclasses of radially pulsating stars and are often referred to as classical pulsators." -"---\nabstract: 'We consider Walsh\u2019s conformal map from the exterior of a set\u00a0$E=\\bigcup_{j=1}^\\ell E_j$ consisting of $\\ell$\u00a0compact disjoint components onto a lemniscatic domain. In particular, we are interested in the case when\u00a0$E$ is a polynomial preimage of $[-1,1]$, i.e., when $E=P^{-1}([-1,1])$, where $P$ is an algebraic polynomial of degree\u00a0$n$. Of special interest are the exponents and the centers of the lemniscatic domain. In the first part of this series of papers, a very simple formula for the exponents has been derived. In this paper, based on general results of the first part, we give an iterative method for computing the centers when $E$ is the union of $\\ell$ intervals. Once the centers are known, the corresponding Walsh map can be computed numerically. In addition, if $E$ consists of $\\ell=2$ or $\\ell=3$ components satisfying certain symmetry relations then the centers and the corresponding Walsh map are given by explicit formulas.
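To make the lemniscatic-domain terminology in the record above concrete, the following sketch evaluates a two-center lemniscate function on a grid; the centers, exponents, and level are illustrative choices, not outputs of the paper's iteration.

```python
# A lemniscatic domain is the exterior of a set {z : |U(z)| <= c} with
# U(z) = |z - a_1|^{m_1} ... |z - a_l|^{m_l} on the boundary level set.
# Centers a, exponents m, and level c below are illustrative assumptions.
import numpy as np

a = np.array([-1.0, 1.0])  # centers
m = np.array([0.5, 0.5])   # exponents (summing to 1)
c = 0.9                    # level of the lemniscate

xs, ys = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
z = xs + 1j * ys
U = np.prod([np.abs(z - ak) ** mk for ak, mk in zip(a, m)], axis=0)
print("fraction of grid inside the lemniscate:", np.mean(U <= c))
```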
All our theorems are illustrated with analytical or numerical examples.'\nauthor:\n- Klaus Schiefermayr\n- Olivier S\u00e8te\nbibliography:\n- 'walshmap.bib'\ndate: 'June 30, 2023'\ntitle: 'Walsh\u2019s Conformal Map onto Lemniscatic Domains for Polynomial Pre-images II'\n---\n\n#### Keywords: {#keywords .unnumbered}\n\nWalsh\u2019s conformal map, lemniscatic domain, multiply connected" -"---\nabstract: |\n The features in many prediction models naturally take the form of a hierarchy. The lower levels represent individuals or events. These units group naturally into locations and intervals or other aggregates, often at multiple levels. Levels of groupings may intersect and join, much as relational database tables do. Besides representing the structure of the data, predictive features in hierarchical models can be assigned to their proper levels. Such models lend themselves to hierarchical Bayes solution methods that \u201cshare\u201d results of inference between groups by generalizing over the case of individual models for each group versus one model that aggregates all groups into one.\n\n In this paper we show our work-in-progress applying a hierarchical Bayesian model to forecast purchases throughout the day at store franchises, with groupings over locations and days of the week. We demonstrate using the package on individual sales transaction data collected over the course of a year. We show how this solves the dilemma of having limited data and hence modest accuracy for each day and location, while being able to scale to a large number of locations with improved accuracy.\nauthor:\n- '[John\u00a0Mark\u00a0Agosta](mailto:?Subject=Your UAI 2022 paper)'\n- '[Mario\u00a0Inchiosa]{}'\nbibliography:\n-" -"---\nabstract: 'The detection and accurate astrometry of fast-moving near-Earth objects (NEOs) has been a challenge for the follow-up community. Their fast apparent motion results in streaks in sidereal images, thus affecting the telescope\u2019s limiting magnitude and astrometric accuracy. A widely adopted technique to mitigate trailing losses is non-sidereal tracking, which transfers the streaking to background reference stars. However, no existing publicly available astrometry software is configured to detect such elongated stars. We present [`Astreaks`]{}, a streaking source detection algorithm, to obtain accurate astrometry of NEOs in non-sidereal data. We validate the astrometric accuracy of [`Astreaks`]{} on 371 non-sidereally tracked images for 115 NEOs with two instrument set-ups of the GROWTH-India Telescope. The observed NEOs had V-band magnitude in the range \\[15, 22\\] with proper motion up to 140[[$^{\\prime\\prime}$]{}]{}/min, thus resulting in stellar streaks as high as 6.5$^\\prime$ (582 pixels) in our data. Our method obtained astrometric solutions for all images with 100% success rate. The standard deviation in Observed-minus-Computed (O-C) residuals is 0.52[[$^{\\prime\\prime}$]{}]{}, with O-C residuals <2[[$^{\\prime\\prime}$]{}]{}(<1[[$^{\\prime\\prime}$]{}]{}) for 98.4% (84.4%) of our measurements. These are appreciable, given the pixel scale of $\\sim$0.3[[$^{\\prime\\prime}$]{}]{}and $\\sim$0.7[[$^{\\prime\\prime}$]{}]{}of our two instrument set-ups. This demonstrates that our modular and fully-automated algorithm helps improve the telescope" -"---\nabstract: 'Multi-arm bandit (MAB) algorithms have been used to learn optimal beams for millimeter wave communication systems. 
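The bandit framing in the record above can be illustrated with a plain UCB1 loop over candidate beams; the paper's idea is to let radar prune the candidate set, which the `beams` array stands in for here. The per-beam success probabilities are synthetic stand-ins, not a channel model from the paper.

```python
# Generic UCB1 beam selection: exploration cost grows with the number of
# candidate beams, so shrinking `beams` (e.g., via radar detections, as the
# record describes) speeds up learning. Rewards are synthetic Bernoulli draws.
import numpy as np

rng = np.random.default_rng(0)
p_success = rng.uniform(0.1, 0.9, size=32)  # hypothetical per-beam link quality
beams = np.arange(32)                       # radar could prune this list

counts = np.zeros(len(beams))
means = np.zeros(len(beams))
for t in range(1, 5000):
    if 0 in counts:
        a = int(np.argmin(counts))          # play each beam once first
    else:
        ucb = means + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    r = rng.random() < p_success[beams[a]]  # Bernoulli reward
    counts[a] += 1
    means[a] += (r - means[a]) / counts[a]

print("best beam found:", beams[np.argmax(means)], "true best:", np.argmax(p_success))
```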
Here, the complexity of learning the optimal beam linearly scales with the number of beams, leading to high latency when there are a large number of beams. In this work, we propose to integrate radar with communication to enhance the MAB learning performance by searching only those beams where the radar detects a scatterer. Further, we use radar to distinguish the beams that show mobile targets from those which indicate the presence of static clutter, thereby reducing the number of beams to scan. Simulations show that our proposed radar-enhanced MAB reduces the exploration time by searching only the beams with distinct radar mobile targets resulting in improved throughput.'\nauthor:\n- '; mhanawal@iitb.ac.in'\nbibliography:\n- 'main.bib'\ntitle: 'Radar Enhanced Multi-Armed Bandit for Rapid Beam Selection in Millimeter Wave Communications '\n---\n\nmulti-armed bandit, joint radar communication, upper confidence bound, analog beamforming\n\nIntroduction {#sec: Intro}\n============\n\nMillimeter wave (mmW) unlicensed spectrum has been identified as a viable solution for realizing high data rate communications between connected vehicles [@USspectrum; @Canadaspectrum; @Europespectrum; @Japanspectrum; @SKspectrum]. The communication links are, however, characterized by high atmospheric absorption and hence" -"---\nabstract: |\n In this paper, we discuss some numerical realizations of Shannon\u2019s sampling theorem. First we show the poor convergence of classical Shannon sampling sums by presenting sharp upper and lower bounds of the norm of the Shannon sampling operator. In addition, it is known that in the presence of noise in the samples of a bandlimited function, the convergence of Shannon sampling series may even break down completely. To overcome these drawbacks, one can use oversampling and regularization with a convenient window function. Such a window function can be chosen either in frequency domain or in time domain. We especially put emphasis on the comparison of these two approaches in terms of error decay rates. It turns out that the best numerical results are obtained by oversampling and regularization in time domain using a $\\sinh$-type window function or a continuous Kaiser\u2013Bessel window function, which results in an interpolating approximation with localized sampling. Several numerical experiments illustrate the theoretical results.\n\n *Key words*: Shannon sampling sums, Whittaker\u2013Kotelnikov\u2013Shannon sampling theorem, bandlimited function, regularization with window function, regularized Shannon sampling formulas, error estimates, numerical robustness.\n\n AMS *Subject Classifications*: 94A20, 65T50.\nauthor:\n- Melanie Kircheis\n- Daniel Potts\n- Manfred Tasche\ntitle: 'On" -"---\nabstract: 'In this paper we propose a simplified model to describe the dissipative effects of tides. We assume a spherical Earth with a dissipative coupling with a mechanical dumbbell. The latter has a mass much smaller than the Earth\u2019s, and it models the presence of the tidal bulges. Using properly the scale analysis, we will show that some of the consequences of tidal dissipation are the circularization and the enlargement of orbit of the Moon and the slowing down of the Earth\u2019s rotation. We will also see that tidal dissipation plays a fundamental role for the establishment of a regime of spin-orbit resonance in the celestial systems. 
The mathematical tools used make our treatment appropriate for senior high school students or college students.'\nauthor:\n- 'Benedetto Scoppola$^{1}$'\n- 'Matteo Veglianti$^{2}$'\nbibliography:\n- 'biblio.bib'\ntitle: 'Dumbbell dynamics: a didactical approach'\n---\n\n[$^{1}$ Dipartimento di Matematica,\\\nUniversit\u00e0 di Roma \u201cTor Vergata\u201d\\\nVia della Ricerca Scientifica - 00133 Roma, Italy\\\n`scoppola@mat.uniroma2.it`\\\n$^{2}$ Dipartimento di Fisica,\\\nUniversit\u00e0 di Roma \u201cTor Vergata\u201d\\\nVia della Ricerca Scientifica - 00133 Roma, Italy\\\n`matteoveglianti@gmail.com`\\\n]{}\n\nIntroduction\n============\n\nAll textbooks in introductory astronomy and many in physics and mechanics mention the existence of oceanic tides as an interesting" -"---\nabstract: 'We investigate a long-ranged coupled and non-Hermitian two-dimensional array of nanomagnets, fabricated on a thin magnetic substrate and subjected to an in-plane magnetic field. We predict topology-driven edge and corner skin effects of magnetic eigenmodes with the localization position at boundaries precisely characterized by a topological winding tuple $({\\cal W}_1,{\\cal W}_2)$. By varying the direction of the in-plane field, all magnon states pile up either at different edges of the array with $({\\cal W}_1=\\pm 1,{\\cal W}_2=0)$ or $({\\cal W}_1=0,{\\cal W}_2=\\pm 1)$, or at different corners characterized by $({\\cal W}_1=\\pm 1,{\\cal W}_2=\\pm 1)$. Exploiting the non-Hermitian topology is potentially helpful for designing useful magnonic metasurface in the future.'\nauthor:\n- Chengyuan Cai\n- 'Dante M. Kennes'\n- 'Michael A. Sentef'\n- Tao Yu\ntitle: Edge and corner skin effects of chirally coupled magnons characterized by a topological winding tuple\n---\n\n*Introduction*.\u2014The discovery of the one-dimensional non-Hermitian skin effect, yielding a localization of a macroscopic number of bulk eigenstates at the edge\u00a0[@Bergholtz; @XZhang_review; @Okuma_review; @RLin; @KDing; @Yu_review], stimulated the recent explorations of open systems, achieving useful functionalities such as funneling of light\u00a0[@Weidemann], unidirectional amplification\u00a0[@XWen; @McDonald], non-local response\u00a0[@Helbig], and enhanced device sensitivity\u00a0[@Ghatak; @Yu_Zeng; @Budich; @HYuan]. The" -"---\nauthor:\n- |\n Mostafa Behtouei\\\n [INFN, Laboratori Nazionali di Frascati, 00044 Frascati RM, Italy]{}\ntitle: 'Invariant Subspace Problem in Hilbert Spaces: Exploring Applications in Quantum Mechanics, Control Theory, Operator Algebras, Functional Analysis and Accelerator Physics'\n---\n\nIntroduction\n============\n\nThe Invariant Subspace Problem is a fundamental question in operator theory and functional analysis [@Halmos]. It addresses the existence of invariant subspaces for bounded linear operators on a given Hilbert space. In this paper, we aim to explore the applications of the invariant subspace problem in various areas of mathematics and physics and highlight its significance in understanding the behavior of linear operators.\\\nIn the field of operator theory, a crucial topic of investigation is the behavior of linear operators acting on a Hilbert space $\\mathcal{H}$. An invariant subspace of an operator $T$ is a closed subspace $\\mathcal{M}\\subseteq\\mathcal{H}$ that remains unchanged under the action of $T$, i.e., $T\\mathcal{M}\\subseteq\\mathcal{M}$. 
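For intuition behind the definition just quoted: in finite dimensions the question is easy, since over the complex numbers every square matrix has an eigenvector, whose span is a one-dimensional invariant subspace. A quick NumPy check (illustrative only, not part of the paper):

```python
# Verify T M ⊆ M numerically for M = span{v}, v an eigenvector of a
# random matrix T: the component of T v orthogonal to v vanishes.
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))
eigvals, eigvecs = np.linalg.eig(T)

v = eigvecs[:, 0]                  # basis of a 1-dim invariant subspace M
Tv = T @ v
residual = Tv - (np.vdot(v, Tv) / np.vdot(v, v)) * v  # remove projection onto v
print(np.allclose(residual, 0))    # True: T v stays in span{v}
```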
The invariant subspace problem poses the question of whether every bounded operator on a Hilbert space possesses a non-trivial closed invariant subspace [@Aronszajn].\\\nThe problem can be stated as follows: Given a bounded operator $T\\in\\mathcal{B}(\\mathcal{H})$, does there exist a closed subspace $\\mathcal{M}\\subseteq\\mathcal{H}$ such that $T\\mathcal{M}\\subseteq\\mathcal{M}$ and $0\\neq\\mathcal{M}\\neq\\mathcal{H}$? The invariant subspace problem was initially" -"---\nauthor:\n- Muhammad Zain Mobeen\n- 'Tomasz Kami[\u0144]{}ski'\n- Alexis Matter\n- Markus Wittkowski\n- 'John D. Monnier'\n- Stefan Kraus\n- 'Jean-Baptiste Le Bouquin'\n- Narsireddy Anugu\n- Theo Ten Brummelaar\n- 'Claire L. Davies'\n- Jacob Ennis\n- Tyler Gardner\n- Aaron Labdon\n- Cyprien Lanthermann\n- 'Gail H. Schaefer'\n- 'Benjamin R. Setterholm'\n- Nour Ibrahim\n- 'Steve B. Howell'\nbibliography:\n- 'export-bibtex.bib'\ntitle: 'Reconstructing the mid-infrared environment in the stellar merger remnant V838 Monocerotis'\n---\n\nIntroduction {#intro}\n============\n\nAt the start of 2002 V838 Monocerotis erupted in a luminous red nova event and in a few weeks brightened by almost two orders of magnitude, finally reaching a peak luminosity of $10^{6} L_{\\sun}$ . The event is thought to have been the result of a stellar merger. According to the scenario proposed in , an 8 $M_{\\sun}$ B-type main sequence star coalesced with a 0.4 $M_{\\sun}$ young stellar object. The outburst was soon followed by a gradual decrease in temperature, and its spectra soon evolved to that of a late M-type supergiant [@2003MNRAS.343.1054E; @2015AJ....149...17L]. Spectra taken in the 2000s revealed the presence of various molecules in V838 Mon, including water and transition-metal oxides [@ref161B; @2009ApJS..182...33K]. Dust" -"---\nabstract: 'We introduce the domain wall color code, a new variant of the quantum error-correcting color code that exhibits exceptionally high code-capacity error thresholds for qubits subject to biased noise. In the infinite bias regime, a two-dimensional color code decouples into a series of repetition codes, resulting in an error-correcting threshold of 50%. Interestingly, at finite bias, our color code demonstrates thresholds identical to those of the noise-tailored XZZX surface code for all single-qubit Pauli noise channels. The design principle of the code is that it introduces domain walls which permute the code\u2019s excitations upon domain crossing. For practical implementation, we supplement the domain wall code with a scalable restriction decoder based on a matching algorithm. The proposed code is identified as a comparably resource-efficient quantum error-correcting code highly suitable for realistic noise.'\nauthor:\n- Konstantin Tiurev\n- Arthur Pesah\n- 'Peter-Jan H. S. Derks'\n- Joschka Roffe\n- Jens Eisert\n- 'Markus S. Kesselring'\n- 'Jan-Michael Reiner'\ntitle: The domain wall color code\n---\n\nQuantum computers hold the promise to solve certain classes of computational problems with exponential speedups over the best known classical algorithms\u00a0[@9781107002173]. To enable large-scale quantum computations, information must be stored and processed in" -"---\nabstract: 'Shifting social opinions has far-reaching implications in various aspects, such as public health campaigns, product marketing, and political candidates. 
In this paper, we study a problem of opinion optimization based on the popular Friedkin-Johnsen (FJ) model for opinion dynamics in an unweighted directed social network with $n$ nodes and $m$ edges. In the FJ model, the internal opinion of every node lies in the closed interval $[0, 1]$, with 0 and 1 being polar opposites of opinions about a certain issue. Concretely, we focus on the problem of selecting a small number of $ k\\ll n $ nodes and changing their internal opinions to 0, in order to minimize the average opinion at equilibrium. We then design an algorithm that returns the optimal solution to the problem in $O(n^3)$ time. To speed up the computation, we further develop a fast algorithm by sampling spanning forests, the time complexity of which is $ O(ln) $, with $l$ being the number of samplings. Finally, we execute extensive experiments on various real directed networks, which show that the effectiveness of our two algorithms is similar to each other, both of which outperform several baseline strategies of node selection. Moreover, our fast" -"---\nabstract: 'The measurement problem dates back to the dawn of quantum mechanics. Here, we measure a quantum dot electron spin qubit through off-resonant coupling with thousands of redundant nuclear spin ancillae. We show that the link from quantum to classical can be made without any \u201cwavefunction collapse\u201d, in agreement with the Quantum Darwinism concept. Large ancilla redundancy allows for single-shot readout with high fidelity $\\approx99.85\\%$. Repeated measurements enable heralded initialization of the qubit and probing of the equilibrium electron spin dynamics. Quantum jumps are observed and attributed to burst-like fluctuations in a thermally populated phonon bath.'\nauthor:\n- 'Harry E. Dyte'\n- George Gillard\n- Santanu Manna\n- 'Saimon F. Covre da Silva'\n- Armando Rastelli\n- 'Evgeny A. Chekhovich'\ntitle: 'Quantum non-demolition measurement of an electron spin qubit through its low-energy many-body spin environment'\n---\n\nHigh fidelity qubit readout is essential in quantum information processing. Usually, such readout starts with conversion of a fragile quantum state into a more robust form, detectable by a classical apparatus. Some readout techniques rely on high-energy excitations, making this conversion dissipative (irreversible). Examples include spin-to-charge conversion [@Elzerman2004; @Hensen2020; @Meunier2006; @Veldhorst2014], single photon detection [@Hadfield2009], optical readout of spin in defects [@Jiang2009; @Robledo2011;" -"---\nabstract: 'We consider the questions of whether or not large language models (LLMs) have beliefs, and, if they do, how we might measure them. First, we evaluate two existing approaches, one due to [@azaria2023internal] and the other to [@burns2022discovering]. We provide empirical results that show that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to be successful for conceptual reasons. Thus, there is still no lie-detector for LLMs. After describing our empirical results we take a step back and consider whether or not we should expect LLMs to have something like beliefs in the first place. We consider some recent arguments aiming to show that LLMs cannot have beliefs. We show that these arguments are misguided. We provide a more productive framing of questions surrounding the status of beliefs in LLMs, and highlight the empirical nature of the problem. 
We conclude by suggesting some concrete paths for future work.'\nauthor:\n- |\n B.A. Levinstein\\\n University of Illinois at Urbana-Champaign\\\n `benlevin@illinois.edu`\\\n Daniel A. Herrmann\\\n University of California, Irvine\\\n `daherrma@uci.edu`\\\nbibliography:\n- 'references.bib'\ntitle: 'Still No Lie Detector for Language Models: Probing Empirical and Conceptual Roadblocks '" -"---\nabstract: |\n Inpatient length of stay (LoS) is an important managerial metric which if known in advance can be used to efficiently plan admissions, allocate resources and improve care. Using historical patient data and machine learning techniques, LoS prediction models can be developed. Ethically, these models can not be used for patient discharge in lieu of unit heads but are of utmost necessity for hospital management systems in charge of effective hospital planning. Therefore, the design of the prediction system should be adapted to work in a true hospital setting. In this study, we predict early hospital LoS at the granular level of admission units by applying domain adaptation to leverage information learned from a potential source domain. Time-varying data from 110,079 and 60,492 patient stays to 8 and 9 intensive care units were respectively extracted from eICU-CRD and MIMIC-IV. These were fed into a Long-Short Term Memory and a Fully connected network to train a source domain model, the weights of which were transferred either partially or fully to initiate training in target domains. Shapley Additive exPlanations (SHAP) algorithms were used to study the effect of weight transfer on model explanability. Compared to the benchmark, the proposed weight" -"---\nbibliography:\n- 'ref.bib'\n---\n\nIntroduction. {#sec:intro}\n=============\n\nSimulation optimization (SO) is a powerful technique used to analyze and optimize complex stochastic systems. One well-known problem in SO is ranking and selection (R&S), which involves comparing a finite number of designs to identify the best one. The performance of each design is measured by a statistical characteristic, such as the mean, which is unknown but can be estimated using simulation samples. In our work, we focus on a contextual top-$m_{c}$ selection problem that requires allocating a fixed number of simulation replications to identify the top-$m_{c}$ designs for each context $c\\in\\mathcal{C}$, where contexts are known prior to decision-making. The true performance of each design is context-dependent, and the top-$m_{c}$ designs also vary with the context. Contexts can include user profiles and known environmental features, thereby enabling personalized decision-making. Taking product design as an example, a manufacturer aims to identify designs of highest quality within each class of production designs, with each class meeting distinct standards. The production standards are considered as contexts. Other application scenarios include recommendation systems [@woerndl2007hybrid], cancer prevention treatment [@li2022efficient], and assortment optimization [@miao2022online].\n\nWe focus on the finite-dimensional context space and does not assume any prior knowledge" -"---\nabstract: 'This paper concerns the structural stability of supersonic flows with a contact discontinuity in a finitely long curved nozzle for the two-dimensional steady compressible rotating Euler system. Concerning the effect of Coriolis force, we first establish the existence of supersonic shear flows with a contact discontinuity in the flat nozzle. 
Then we consider the stability of these background supersonic shear flows with a contact discontinuity when the incoming supersonic flow and the upper and lower nozzle walls are suitably perturbed. The problem can be formulated as an initial boundary value problem with a contact discontinuity as a free boundary. To deal with the free boundary value problem, the Lagrangian transformation is introduced to straighten and fix the contact discontinuity. The rotating Euler system is reduced to a first order hyperbolic system for the Riemann invariants. We design an iteration scheme and derive some estimates for the solution to the hyperbolic system. Finally, by using the inverse Lagrangian transformation, we prove the original free boundary problem admits two layers of smooth supersonic flows separated by a smooth contact discontinuity.'\nauthor:\n- 'Shangkun Weng[^1]'\n- 'Zihao Zhang[^2]'\ntitle: 'Supersonic flows with a contact discontinuity to the two-dimensional steady rotating Euler" -"---\nabstract: 'University admission at many highly selective institutions uses a holistic review process, where all aspects of the application, including protected attributes (e.g., race, gender), grades, essays, and recommendation letters are considered, to compose an excellent and diverse class. In this study, we empirically evaluate how influential protected attributes are for predicting admission decisions using a machine learning (ML) model, and in how far textual information (e.g., personal essay, teacher recommendation) may substitute for the loss of protected attributes in the model. Using data from 14,915 applicants to an undergraduate admission office at a selective U.S. institution in the 2022-2023 cycle, we find that the exclusion of protected attributes from the ML model leads to substantially reduced admission-prediction performance. The inclusion of textual information via both a TF-IDF representation and a Latent Dirichlet allocation (LDA) model partially restores model performance, but does not appear to provide a full substitute for admitting a similarly diverse class. In particular, while the text helps with gender diversity, the proportion of URM applicants is severely impacted by the exclusion of protected attributes, and the inclusion of new attributes generated from the textual information does not recover this performance loss.'\nauthor:\n- Jinsook Lee" -"---\nabstract: 'Generating learning-friendly representations for points in space is a fundamental and long-standing problem in machine learning. Recently, multi-scale encoding schemes (such as [Space2Vec]{}) were proposed to directly encode any point in 2D space as a high-dimensional vector, and has been successfully applied to various (geo)spatial prediction tasks. However, To solve , we propose a multi-scale location encoder called *[Sphere2Vec]{}* which We provide theoretical proof that the *[Sphere2Vec]{}* encoding preserves the spherical surface distance between any two points. Experiments on 20 synthetic datasets show that *[Sphere2Vec]{}* can outperform all baseline models including the state-of-the-art (SOTA) on all these datasets with up to 30.8% error rate reduction. We then apply *[Sphere2Vec]{}* to three geo-aware image classification tasks - fine-grained species recognition, Flickr image recognition, and remote sensing image classification. Results on 7 real-world datasets show the superiority of *[Sphere2Vec]{}* over multiple 2D location encoders on all three tasks. 
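For a flavor of what a multi-scale location encoder computes, here is a generic sinusoidal encoding of a (lat, lon) pair. This is a simplified sketch of the general idea, not Sphere2Vec's actual spherical-distance-preserving formulation.

```python
# Generic multi-scale sinusoidal encoding of a point on the sphere.
# Number of scales and base wavelength are illustrative assumptions.
import numpy as np

def multiscale_encode(lat_deg, lon_deg, n_scales=8, min_wavelength=1.0):
    lam, phi = np.radians(lat_deg), np.radians(lon_deg)
    feats = []
    for s in range(n_scales):
        w = np.pi / (min_wavelength * 2.0 ** s)  # one frequency per scale
        for angle in (lam, phi):
            feats += [np.sin(w * angle), np.cos(w * angle)]
    return np.array(feats)

print(multiscale_encode(40.7, -74.0).shape)  # (32,)
```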
Further analysis shows that *[Sphere2Vec]{}* outperforms other location encoder models, especially in the polar regions and data-sparse areas because of its nature for spherical surface distance preservation.'\naddress:\n- 'Spatially Explicit Artificial Intelligence Lab, Department of Geography, University of Georgia, Athens, Georgia, 30602, USA'\n- 'Department of Mathematics, University of California Santa" -"---\nabstract: 'Federated learning is a decentralized machine learning paradigm that allows multiple clients to collaborate by leveraging local computational power and the model\u2019s transmission. This method reduces the costs and privacy concerns associated with centralized machine learning methods while ensuring data privacy by distributing training data across heterogeneous devices. On the other hand, federated learning has the drawback of data leakage due to the lack of privacy-preserving mechanisms employed during storage, transfer, and sharing, thus posing significant risks to data owners and suppliers. Blockchain technology has emerged as a promising technology for offering secure data-sharing platforms in federated learning, especially in Industrial Internet of Things (IIoT) settings. This survey aims to compare the performance and security of various data privacy mechanisms adopted in blockchain-based federated learning architectures. We conduct a systematic review of existing literature on secure data-sharing platforms for federated learning provided by blockchain technology, providing an in-depth overview of blockchain-based federated learning, its essential components, and discussing its principles, and potential applications. The primary contribution of this survey paper is to identify critical research questions and propose potential directions for future research in blockchain-based federated learning.'\nauthor:\n- |\n Bipin Chhetri, Saroj Gopali, Rukayat Olapojoye, Samin Dehbashi" -"---\nabstract: 'The rapid advancements in computer vision have stimulated remarkable progress in face forgery techniques, capturing the dedicated attention of researchers committed to detecting forgeries and precisely localizing manipulated areas. Nonetheless, with limited fine-grained pixel-wise supervision labels, deepfake detection models perform unsatisfactorily on precise forgery detection and localization. To address this challenge, we introduce the well-trained vision segmentation foundation model, i.e., Segment Anything Model (SAM) in face forgery detection and localization. Based on SAM, we propose the Detect Any Deepfakes (DADF) framework with the Multiscale Adapter, which can capture short- and long-range forgery contexts for efficient fine-tuning. Moreover, to better identify forged traces and augment the model\u2019s sensitivity towards forgery regions, Reconstruction Guided Attention (RGA) module is proposed. The proposed framework seamlessly integrates end-to-end forgery localization and detection optimization. Extensive experiments on three benchmark datasets demonstrate the superiority of our approach for both forgery detection and localization. 
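The general recipe behind adapter-style fine-tuning (a frozen foundation model plus small trainable bottleneck modules), which DADF's Multiscale Adapter builds on for SAM, can be sketched as follows. The dimensions and structure here are illustrative assumptions, not the paper's module.

```python
# Minimal bottleneck-adapter sketch in PyTorch: a small residual update
# inserted into a frozen backbone. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=256, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):  # x: (batch, tokens, dim)
        return x + self.up(self.act(self.down(x)))  # residual update

tokens = torch.randn(2, 196, 256)
print(Adapter()(tokens).shape)  # torch.Size([2, 196, 256])
```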
The codes will be released soon at [](https://github.com/laiyingxin2/DADF).'\nauthor:\n- 'Yingxin Lai , Zhiming Luo'\n- 'Zitong Yu^2[^1]^'\nbibliography:\n- 'ref.bib'\ntitle: 'Detect Any Deepfakes: Segment Anything Meets Face Forgery Detection and Localization'\n---\n\nIntroduction\n============\n\nAmongst the diverse human biometric traits, the face is endowed with relatively abundant information and holds" -"---\nabstract: 'In cancer genomics, it is of great importance to distinguish driver mutations, which contribute to cancer progression, from causally neutral passenger mutations. We propose a random-effect regression approach to estimate the effects of mutations on the expressions of genes in tumor samples, where the estimation is assisted by a prespecified gene network. The model allows the mutation effects to vary across subjects. We develop a subject-specific mutation score to quantify the effect of a mutation on the expressions of its downstream genes, so mutations with large scores can be prioritized as drivers. We demonstrate the usefulness of the proposed methods by simulation studies and provide an application to a breast cancer genomics study.'\nauthor:\n- |\n Kin Yau Wong\\\n Department of Applied Mathematics, The Hong Kong Polytechnic University\\\n \\\n Donglin Zeng and D. Y. Lin\\\n Department of Biostatistics, University of North Carolina at Chapel Hill\ntitle: '**A network-based regression approach for identifying subject-specific driver mutations**'\n---\n\n\\#1\n\n1\n\n[1]{}\n\n0\n\n[1]{}\n\n[**A network-based regression approach for identifying subject-specific driver mutations**]{}\n\n[*Keywords:*]{} EM algorithm; lasso; multivariate regression; penalized regression; random effects.\n\nIntroduction {#sec:intro}\n============\n\nCancer is caused by progressive accumulation of somatic mutations. For better understanding of the disease" -"---\nabstract: |\n Regardless of whether or not all fast radio bursts (FRBs) repeat, those that do form a population with a distribution of rates. This work considers a power-law model of this population, with rate distribution $\\Phi_r \\sim R^{\\ensuremath{{\\gamma_r}}}$ between [$R_{\\rm min}$]{}\u00a0and [$R_{\\rm max}$]{}. The [[zDM]{}]{}\u00a0code is used to model the probability of detecting this population as either apparently once-off or repeat events as a function of redshift, $z$, and dispersion measure, DM. I demonstrate that in the nearby Universe, repeating sources can contribute significantly to the total burst rate. This causes an apparent deficit in the total number of observed sources (once-off and repeaters) relative to the distant Universe that will cause a bias in FRB population models. Thus instruments with long exposure times should explicitly take repetition into account when fitting the FRB population.\n\n I then fit data from The Canadian Hydrogen Intensity Mapping Experiment (CHIME). The relative number of repeat and apparently once-off FRBs, and their DM, declination, and burst rate distributions, can be well-explained by 50\u2013100% of CHIME \u2018single\u2019 FRBs being due to intrinsic repeaters, with ${\\ensuremath{R_{\\rm max}}}> 0.75$day$^{-1}$ above $10^{39}$erg, and ${\\ensuremath{{\\gamma_r}}}= -2.2_{-0.8}^{+0.6}$. 
This result is surprisingly consistent with follow-up studies of" -"---\nabstract: 'One of the most fundamental hypotheses in astrochemistry and astrobiology states that crucial biotic molecules like glycine () found in meteorites and comets are inherited from early phases of star formation. Most observational searches for glycine in the interstellar medium have focused on warm, high-mass molecular cloud sources. However, recent studies suggest that it might be appropriate to shift the observational focus to cold, low-mass sources. We aim to detect glycine towards the so-called methanol hotspot in the Barnard 5 dark cloud. The hotspot is a cold source ($T_\\mathrm{gas}\\approx 7.5$K) with yet high abundances of complex organic molecules (COMs) and water in the gas phase. We carried out deep, pointed observations with the Onsala 20m telescope, targeting several transitions of glycine conformers and ([Gly- ]{}and [Gly-]{}) in the frequency range $70.2$\u2013$77.9$GHz. No glycine lines are detected towards the targeted position, but we use a line stacking procedure to derive sensitive abundance upper limits w.r.t. for [Gly- ]{}and [Gly-]{}, i.e. $\\leq(2$\u2013$5)\\times10^{-10}$ and $\\leq(0.7$\u2013$3)\\times10^{-11}$, respectively. The obtained [Gly- ]{}upper limits are the most stringent for a cold source, while the [Gly- ]{}upper limits are mostly on the same order as previously measured limits. The measured abundances w.r.t. of other COMs" -"---\nabstract: 'We present [$\\hat{P}aRTE$]{}, a collection of 1,126 pairs of Recognizing Textual Entailment (RTE) examples to evaluate whether models are robust to paraphrasing. We posit that if RTE models understand language, their predictions should be consistent across inputs that share the same meaning. We use the evaluation set to determine if RTE models\u2019 predictions change when examples are paraphrased. In our experiments, contemporary models change their predictions on 8-16% of paraphrased examples, indicating that there is still room for improvement.'\nauthor:\n- |\n Dhruv Verma Yash Kumar Lal\\\n Stony Brook University\\\n `{dhverma,ylal}@cs.stonybrook.edu` Shreyashee Sinha\\\n Bloomberg\\\n `ssinha176@bloomberg.net` Benjamin Van Durme\\\n Johns Hopkins University\\\n `vandurme@jhu.edu` Adam Poliak\\\n Bryn Mawr College\\\n `apoliak@brynmawr.edu`\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: Evaluating Paraphrastic Robustness in Textual Entailment Models\n---\n\n=1\n\nIntroduction\n============\n\nRecognizing Textual Entailment (RTE), the task of predicting whether one sentence (*hypothesis*) would likely be implied by another (*premise*), is central to natural language understanding [NLU; @dagan2005pascal], as this task captures \u201call manners of linguistic phenomena and broad variability of semantic expression\u201d\u00a0[@maccartney2009natural]. If an RTE model has a sufficiently high *capacity for reliable, robust inference necessary for full NLU*\u00a0[@maccartney2009natural], then the model\u2019s predictions should be consistent across paraphrased examples.\n\nWe introduce" -"---\nauthor:\n- 'Mauricio J. del Razo'\n- Daan Crommelin\n- 'Peter G. Bolhuis'\nbibliography:\n- 'references.bib'\ntitle: 'Data-driven dynamical coarse-graining for condensed matter systems'\n---\n\nComputer simulations have been extremely powerful in the study of (soft) condensed matter systems. 
Application of molecular dynamics, Langevin dynamics, or Monte Carlo methods enables the modeling of phase transitions, material properties, conformational changes in biomolecules, and many other applications [@frenkel2001understanding; @leimkuhler2016molecular; @Karplus2002; @Dror2012; @Glotzer2002; @Dren2009; @zhou2022molecular; @Praprotnik2008]. In addition to thermodynamical properties, dynamical simulations also give access to time-correlation-dependent properties such as diffusion, viscosity, mean first passage times for reactive events, relaxation and even aging processes. While accurate, but still approximate, (classical) atomistic force fields are available for many types of molecules, they become computationally very costly for large systems. One important reason is that the timescale separation between the fundamental timestep in the integration of the equation of motion and the timescale needed to observe the phenomenon of interest can be many orders of magnitude. This renders the computational effort to reach the physically or biologically relevant timescales needed for e.g. nucleation or protein-ligand (un)binding prohibitively large.\n\nPrecisely for that reason a plethora of coarse-grained methodologies have been developed [@espanol2004statistical; @Praprotnik2008; @saunders2013coarse; @Marrink2013; @Tozzini2005]," -"---\nabstract: |\n This paper considers two well-studied problems, Minimum Fill-In (Min Fill-In) and Treewidth. Since both problems are NP-hard, various reduction rules simplifying an input graph have been intensively studied to better understand the structural properties relevant to these problems. Bodlaender et al.\u00a0[@minDegree] introduced the concept of a safe edge that is included in a solution of the Minimum Fill-In problem and showed some initial results. In this paper, we extend their result and prove a new condition for an edge set to be safe. This in turn helps us to construct a novel reduction tool for Min Fill-In that we use to answer other questions related to the problem.\n\n In this paper, we also study another interesting research question: whether there exists a triangulation that answers both problems, Min Fill-In and Treewidth. To formalise our study, we introduce a new parameter reflecting the distance between triangulations optimising both problems. We present some initial results regarding this parameter and study graph classes where both problems can be solved with one triangulation.\naddress: 'School of Computing, University of Portsmouth, United Kingdom'\nauthor:\n- Janka Chleb\u00edkov\u00e1\n- Mani Ghahremani\nbibliography:\n- 'mybibfile.bib'\ntitle:" -"---\nabstract: 'We explore a three-dimensional counterpart of the Farey tessellation and its relations to Penner\u2019s lambda lengths and $SL_2$-tilings.
In particular, we prove a three-dimensional version of the Ptolemy relation, and generalise results of Short\u00a0[@Sh] to classify tame $SL_2$-tilings over Eisenstein integers in terms of pairs of paths in the $3$D Farey graph.'\naddress:\n- 'Department of Mathematical Sciences, Durham University, Upper Mountjoy Campus, Stockton Road, Durham, DH1 3LE, UK'\n- 'Department of Mathematical Sciences, University of Liverpool, Mathematical Sciences Building, Liverpool L69 7ZL, UK'\n- 'University of Kentucky, Lexington, Department of Mathematics, 951 Patterson Office Tower, Lexington, KY 40506-0027, USA'\n- 'Department of Mathematical Sciences, Durham University, Upper Mountjoy Campus, Stockton Road, Durham, DH1 3LE, UK'\nauthor:\n- Anna Felikson\n- Oleg Karpenkov\n- Khrystyna Serhiyenko\n- Pavel Tumarkin\ntitle: '$3$D Farey graph, lambda lengths and $SL_2$-tilings'\n---\n\n[^1]\n\nIntroduction and main results\n=============================\n\nWe study geometric aspects of the Farey graph over Eisenstein integers and its realisation in the hyperbolic three-dimensional space as the $1$-skeleton of the union of the symmetry planes (including points at the absolute) of the reflection group of the regular ideal hyperbolic tetrahedron. Our first main goal is to generalise relations between Penner\u2019s $\\lambda$-lengths" -"---\nabstract: 'Multiple-input multiple-output (MIMO) is a key ingredient of next-generation wireless communications. Recently, various MIMO signal detectors based on deep learning techniques or quantum(-inspired) algorithms have been proposed to improve the detection performance compared with conventional detectors. This paper focuses on the simulated bifurcation (SB) algorithm, a quantum-inspired algorithm. It proposes two techniques to improve its detection performance. The first is modifying the algorithm inspired by the Levenberg\u2013Marquardt algorithm to eliminate local minima of the maximum likelihood detection. The second is the use of deep unfolding, a deep learning technique to train the internal parameters of an iterative algorithm. We propose a deep-unfolded SB by making the update rule of SB differentiable. The numerical results show that these proposed detectors significantly improve the signal detection performance in massive MIMO systems.'\nauthor:\n- 'Satoshi Takabe, [^1] [^2]'\ntitle: Deep Unfolded Simulated Bifurcation for Massive MIMO Signal Detection\n---\n\nMIMO, signal detection, deep learning, deep unfolding\n\nIntroduction\n============\n\nMultiple-input multiple-output (MIMO) is a key ingredient of next-generation wireless communications\u00a0[@MUMIMO; @Yang]. In massive MIMO systems, the exact maximum likelihood (ML) detection is computationally intractable, and the performance of conventional detectors such as a minimum mean-squared error (MMSE) detector" -"---\nabstract: 'Recent efforts in fake news detection have witnessed a surge of interest in using graph neural networks (GNNs) to exploit rich social context. Existing studies generally leverage fixed graph structures, assuming that the graphs accurately represent the related social engagements. However, *edge noise* remains a critical challenge in real-world graphs, as training on suboptimal structures can severely limit the expressiveness of GNNs. Despite initial efforts in graph structure learning (GSL), prior works often leverage node features to update edge weights, resulting in heavy computational costs that hinder the methods\u2019 applicability to large-scale social graphs.
In this work, we approach the fake news detection problem with a novel aspect of *social graph refinement*. We find that the *degrees* of news article nodes exhibit distinctive patterns, which are indicative of news veracity. Guided by this, we propose DECOR, a novel application of Degree-Corrected Stochastic Blockmodels to the fake news detection problem. Specifically, we encapsulate our empirical observations into a lightweight social graph refinement component that iteratively updates the edge weights via a learnable degree correction mask, which allows for joint optimization with a GNN-based detector. Extensive experiments on two real-world benchmarks validate the effectiveness and efficiency of DECOR. [^1]'\nauthor:" -"---\nabstract: 'We developed a resistance measurement using radio frequency reflection to investigate the electrical transport characteristics under destructive pulsed magnetic fields above 100 T. A homemade flexible printed circuit for a sample stage reduced the noise caused by the induced voltage from the pulsed magnetic fields, improving the accuracy of the measurements of the reflected waves. From the obtained reflectance data, the absolute value of the magnetoresistance was successfully determined by using a phase analysis with admittance charts. These developments enable more accurate and comprehensive measurements of electrical resistance in pulsed magnetic fields.'\nauthor:\n- 'T. Shitaokoshi'\n- 'S. Kawachi'\n- 'T. Nomura'\n- 'F. F. Balakirev'\n- 'Y. Kohama'\nbibliography:\n- 'reference.bib'\nnocite: '[@*]'\ntitle: Radio Frequency Electrical Resistance Measurement under Destructive Pulsed Magnetic Fields\n---\n\nIntroduction\n============\n\nIn the study of metals and their transport properties, the measurement of electrical resistivity in high magnetic fields plays a crucial role. Conventional approaches for measuring magnetoresistance have relied on standard alternating current (AC) techniques, typically conducted at magnetic fields below 100 T, generated by non-destructive pulsed magnets. However, performing magnetoresistance measurements in higher magnetic fields exceeding 100 T has been a significant challenge, requiring the use of destructive pulse" -"---\nauthor:\n- 'Pedro Cal,'\n- 'Rebecca von Kuk,'\n- 'Matthew A. Lim'\n- 'and Frank J. Tackmann'\nbibliography:\n- 'refs.bib'\ndate:\n- '[, [:<10 0]{}\u00a0(last compiled)]{}'\n- \n- 'June 28, 2023'\ntitle: 'The $q_T$ spectrum for Higgs production via heavy quark annihilation at N$^3$LL$''$+aN$^3$LO'\n---\n\nIntroduction {#sec:intro}\n============\n\nWith the discovery of the Higgs boson by the ATLAS and CMS experiments at the LHC [@ATLAS:2012yve; @CMS:2012qbp], the precise measurement of its properties has become essential to establish the Standard Model (SM) as the true mechanism of electroweak symmetry breaking. The four main production mechanisms \u2013 gluon fusion, vector boson fusion, Higgstrahlung, and top-quark pair associated production \u2013 have been observed experimentally\u00a0[@ATLAS:2012yve; @CMS:2012qbp; @ATLAS:2018kot; @ATLAS:2018mme; @CMS:2018uxb], while the Higgs couplings to vector bosons have been found consistent with the SM down to an accuracy of $4\\%$ [@ATLAS:2022vkf; @ATLAS:2021vrm].\n\nProbing the Higgs interactions with the fermionic sector is also of great importance. 
In the SM, the couplings of the Higgs boson to fermions, i.e.\u00a0the Yukawa couplings $y_F$, are proportional to the fermion mass $m_F$, $y_F^\\mathrm{SM}\\equiv m_F / v$, where $v$ denotes the Higgs vacuum expectation value. This implies that the measurement of the Yukawa couplings to the heavy" -"---\nabstract: 'Text correction, especially semantic correction in widely used scenarios, is in strong demand, as it improves the fluency and efficiency of writing. An adversarial multi-task learning method is proposed to enhance the modeling and detection of character polysemy in Chinese sentence contexts. In it, two models, a masked language model and a scoring language model, are introduced as a pair of coupled and adversarial learning tasks. Moreover, the Monte Carlo tree search strategy and a policy network are introduced to accomplish the efficient Chinese text correction task with semantic detection. The experiments are executed on three datasets against five comparable methods, and the experimental results show that our method obtains good performance on the Chinese text correction task with better semantic rationality.'\nauthor:\n- Fanyu Wang\n- 'Zhenping Xie\\*'\nbibliography:\n- 'mybibliography.bib'\ntitle: 'An Adversarial Multi-Task Learning Method for Chinese Text Correction with Semantic Detection'\n---\n\nIntroduction\n============\n\nBottlenecks and Defects\n-----------------------\n\nText correction is an essential process in daily writing and has been widely deployed in current office software products\u00a0[@ghufron2018role; @napoles-etal-2017-jfleg; @omelianchuk-etal-2020-gector]. However, the complexity and flexibility of natural language, especially Chinese, pose huge obstacles to developing a high-quality text
It can be used" -"---\nabstract: 'Perceptually Aligned Gradients (PAG) refer to an intriguing property observed in robust image classification models, wherein their input gradients align with human perception and convey semantic meaning. While this phenomenon has gained significant research attention, it has so far been studied solely in the context of unimodal vision-only architectures. In this work, we extend the study of PAG to Vision-Language architectures, which form the foundation for diverse image-text tasks and applications. Through adversarial robustification finetuning of CLIP, we demonstrate that robust Vision-Language models exhibit PAG in contrast to their vanilla counterparts. This work reveals the merits of CLIP with PAG (CLIPAG) in several vision-language generative tasks. Notably, we show that seamlessly integrating CLIPAG in a \u201cplug-n-play\u201d manner leads to substantial improvements in vision-language generative applications. Furthermore, leveraging its PAG property, CLIPAG enables text-to-image generation without any generative model, a task that typically requires huge generators.'\nauthor:\n- |\n Roy Ganz\\\n Department of ECE\\\n Technion, Haifa, Israel\\\n [ganz@campus.technion.ac.il]{}\n- |\n Michael Elad\\\n Department of Computer Science\\\n Technion, Haifa, Israel\\\n [elad@cs.technion.ac.il]{}\nbibliography:\n- 'egbib.bib'\ntitle: 'CLIPAG: Towards Generator-Free Text-to-Image Generation'\n---\n\nIntroduction {#sec:intro}\n============\n\n![**Unimodal Perceptually Aligned Gradients**. Visualizations of large-$\\epsilon$ targeted adversarial attacks on non-robust and robust ResNet-50, trained on ImageNet. Such attacks" -"---\nabstract: |\n We study robust mean-variance optimization in multiperiod portfolio selection by allowing the true probability measure to be inside a Wasserstein ball centered at the empirical probability measure. Given the confidence level, the radius of the Wasserstein ball is determined by the empirical data. The numerical simulations of the US stock market provide promising results compared to other popular strategies.\\\n [**Keywords:**]{} Mean-Variance, Robust Portfolio Selection, Wasserstein Distance, Modern Portfolio Theory\\\nauthor:\n- |\n Xin Hai & Gregoire Loeper & Kihun Nam\\\n Monash University\\\n Clayton, VIC 3800, Australia\nbibliography:\n- 'ethan.bib'\ntitle: 'Data-driven Multiperiod Robust Mean-Variance Optimization'\n---\n\nIntroduction {#section1}\n============\n\nIn this article, we study robust mean-variance optimization in multiperiod portfolio selection. In particular, we allow the true probability measure to be inside a Wasserstein ball specified by the empirical data and the given confidence level. We transform our optimization problem into a non-robust minimization problem with a penalty, which provides a tractable model for robust mean-variance optimization. This extends the single-period model of [@blanchet2021distributionally] to a multiperiod model. Then, we apply our framework to the US stock market on five different 10-year intervals between 2002 and 2019, which provides Sharpe ratios competitive with the equal-weighted portfolio," -"---\nabstract: 'We study the stability of the electroweak vacuum in the supersymmetric (SUSY) standard model (SM), paying particular attention to its relation to the SUSY contribution to the muon anomalous magnetic moment $a_\\mu$.
If the SUSY contribution to $a_\\mu$ is sizable, the electroweak vacuum may become unstable because of enhanced trilinear scalar interactions, in particular when the sleptons are heavy. Consequently, assuming an enhanced SUSY contribution to $a_\\mu$, an upper bound on the slepton masses is obtained. We give a detailed prescription to perform a full one-loop calculation of the decay rate of the electroweak vacuum for the case in which the SUSY contribution to $a_\\mu$ is enhanced. We also give an upper bound on the slepton masses as a function of the SUSY contribution to $a_\\mu$.'\nbibliography:\n- 'ewvacmumdm.bib'\n---\n\nJune, 2023\\\n\nStability of Electroweak Vacuum and\\\nSupersymmetric Contribution to Muon $g-2$\\\n\n[ So Chigusa$^{(a,b)}$, Takeo Moroi$^{(c)}$ and Yutaro Shoji$^{(d)}$ ]{}\n\n$^{(a)}$ [*Berkeley Center for Theoretical Physics, Department of Physics,\\\nUniversity of California, Berkeley, CA 94720, USA*]{}\n\n$^{(b)}$ [*Theoretical Physics Group, Lawrence Berkeley National Laboratory,\\\nBerkeley, CA 94720, USA*]{}\n\n$^{(c)}$ [*Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan* ]{}\n\n$^{(d)}$ [*Racah
First, a generalized version of the Misner-Chitre variables is constructed, which provides an isomorphism between the quantum Bianchi IX dynamics and the geodesic flow on a suitable Riemannian manifold, extending, in this way, the usual billiard picture. Second, the fractal dimension of the boundary between points with different outcomes in the space of initial data is numerically analyzed. The main conclusion is that, while the quantum system remains chaotic, the strength of its chaos is considerably diminished by quantum effects compared to the classical counterpart.'\n---\n\n[**The" -"---\nabstract: 'Geometric regularity, which leverages data symmetry, has been successfully incorporated into deep learning architectures such as CNNs, RNNs, GNNs, and Transformers. While this concept has been widely applied in robotics to address the curse of dimensionality when learning from high-dimensional data, the inherent reflectional and rotational symmetry of robot structures has not been adequately explored. Drawing inspiration from cooperative multi-agent reinforcement learning, we introduce novel network structures for deep learning algorithms that explicitly capture this geometric regularity. Moreover, we investigate the relationship between the geometric prior and the concept of Parameter Sharing in multi-agent reinforcement learning. Through experiments conducted on various challenging continuous control tasks, we demonstrate the significant potential of the proposed geometric regularity in enhancing robot learning capabilities.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: Geometric Regularity with Robot Intrinsic Symmetry in Reinforcement Learning\n---\n\nIntroduction\n============\n\nRobots have the ability to undertake tasks that are dangerous or difficult for humans. With more degrees of freedom, they can perform increasingly complex tasks. For example, humanoid robots and quadrupedal robots can walk over challenging terrain, while robot arms and hands can achieve dexterous manipulation. However, controlling robots with a large number of degrees of freedom becomes increasingly difficult" -"---\nauthor:\n- 'S. Momme Hengstenberg[^1]'\n- 'Caroline E. P. Robin [^2]'\n- 'Martin J. Savage[^3]'\nbibliography:\n- 'biblio.bib'\ntitle: |\n ![image](IQuSLogo.png)\\\n [IQuS@UW-21-053]{}\\\n \\\n \\\n Multi-Body Entanglement and Information Rearrangement in Nuclear Many-Body Systems \n---\n\nIntroduction {#sec:intro}\n============\n\nEntanglement is a common feature of interacting quantum many-body systems. This phenomenon arises due to the interaction between the constituents of a system, making them correlated in a non-local manner, and thus strongly influences both the structure and dynamics of these systems. Because they are entangled, interacting particles cannot be described independently from each other, and instead have to be considered as a whole. In the context of solving the many-body problem, this means that the state of the system cannot be represented by a simple tensor product state, or classical configuration, and instead is an entangled superposition of those configurations, whose number grows [*a priori*]{} exponentially with the number of degrees of freedom. Such exponential scalings make quantum many-body problems typically hard to solve on classical computers, as they require exponential amounts of computational resources (time and/or memory).
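A standard back-of-the-envelope estimate, added here for illustration (it is not from the paper), makes this scaling concrete for $N$ spin-$1/2$ particles stored in double precision:

```latex
% Hilbert-space dimension and state-vector memory (16 bytes per complex amplitude):
\dim \mathcal{H} = 2^{N}, \qquad
\text{memory} = 16 \cdot 2^{N}\ \text{bytes}
\;\approx\; 16\ \text{GB for } N = 30, \qquad
\approx 16\ \text{TB for } N = 40 .
```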
On the other hand, it is well known that the number of relevant states needed to describe low-energy eigenstates of many interesting systems" -"---\nabstract: |\n We study the long-run behavior of land prices when land plays the dual role of factor of production and store of value. In modern economies where technological progress is faster in non-land sectors, when the elasticity of substitution in production exceeds 1 at high input levels (which always holds if non-land factors do not fully depreciate), unbalanced growth occurs and land becomes overvalued on the long-run trend relative to the fundamental value defined by the present value of land rents. Around the trend, land prices exhibit recurrent stochastic fluctuations, with expansions and contractions in the size of land overvaluation.\n\n **Keywords:** asset price, elasticity of substitution, land, unbalanced growth.\n\n **JEL codes:** D53, G12, O41.\nauthor:\n- 'Tomohiro Hirano[^1]'\n- 'Alexis Akira Toda[^2]'\nbibliography:\n- 'localbib.bib'\ntitle: 'Unbalanced Growth, Elasticity of Substitution, and Land Overvaluation'\n---\n\nIntroduction\n============\n\nAs economies develop and per capita incomes rise, the importance of land as a factor of production diminishes.[^3] This is partly because people face biological constraints regarding the amount of food they can consume (where land produces agricultural products) or the amount of leisure time they can spend (where land produces amenities like tennis courts and national parks). Although people living" -"---\nabstract: 'We study the election control problem with multi-votes, where each voter can present a single vote according to different views (or layers; we use \u201clayer\" to denote \u201cview\"). For example, according to the attributes of the candidates, such as education, hobby, or the relationships of candidates, a voter may present different preferences for the same candidate set. Here, we consider a new model of election control that, by assigning different rules to the votes from different layers, makes the special candidate p the winner of the election (a rule can be assigned to different layers). Given a set of candidates C containing a special candidate \u201cp\", a set of voters V, t layers where each voter gives t votes over all candidates (one for each layer), and a set of voting rules R, the task is to find an assignment of rules to the layers such that p is acceptable for the voters (a possible winner of the election). Three models are considered (denoted as sum-model, max-model, and min-model) to measure the satisfaction of each voter. In this paper, we analyze the computational complexity of finding such a rule assignment, including classical complexity and parameterized complexity. It is interesting to find out that" -"---\nabstract: 'Quantum machine learning with parametrised quantum circuits has attracted significant attention over the past years as an early application for the era of noisy quantum processors. However, the possibility of achieving concrete advantages over classical counterparts in practical learning tasks is yet to be demonstrated. A promising avenue to explore potential advantages is the learning of data generated by quantum mechanical systems and presented in an inherently quantum mechanical form. In this article, we explore the applicability of quantum-data learning to practical problems in high-energy physics, aiming to identify domain-specific use-cases where quantum models can be employed.
We consider quantum states governed by one-dimensional lattice gauge theories and a phenomenological quantum field theory in particle physics, generated by digital quantum simulations or variational methods to approximate target states. We make use of an ansatz based on quantum convolutional neural networks and numerically show that it is capable of recognizing quantum phases of ground states in the Schwinger model and (de)confinement phases from time-evolved states in the\u00a0$\\mathbb{Z}_2$ gauge theory, and that it can extract fermion flavor/coupling constants in a quantum simulation of a parton shower. The observation of non-trivial learning properties demonstrated in these benchmarks will motivate further exploration" -"---\nabstract: 'Large Language Models (LLMs) are trained primarily on minimally processed web text, which exhibits the same wide range of social biases held by the humans who created that content. Consequently, text generated by LLMs can inadvertently perpetuate stereotypes towards marginalized groups, like the LGBTQIA+ community. In this paper, we perform a comparative study of how LLMs generate text describing people with different sexual identities. Analyzing bias in the text generated by an LLM using the regard score shows measurable bias against queer people. We then show that a post-hoc method based on chain-of-thought prompting using SHAP analysis can increase the regard of the sentence, representing a promising approach towards debiasing the output of LLMs in this setting.'\nbibliography:\n- 'main.bib'\ntitle: |\n Queer People are People First:\\\n Deconstructing Sexual Identity Stereotypes in Large Language Models\n---\n\nIntroduction\n============\n\nA large number of current Natural Language Processing (NLP) models, especially Large Language Models (LLMs), yield biased predictions. The output of an LLM is contextually associated with the input prompt [@liang2021towards]. However, in some cases, the generated text can be biased against one or more human identities such as gender, sexual identity, or race. These biases arise due to
We illustrate our findings on an advection-dominated diffusion space-time model problem and present two numerical examples: one with isogeometric analysis discretizations and the second with an" -"---\nabstract: 'Superconductivity allows electrical current to flow without any energy loss, and thus making solids superconducting is a grand goal of physics, material science, and electrical engineering. More than 16 Nobel Laureates have been awarded for their contributions to superconductivity research. Superconductors are valuable for sustainable development goals (SDGs), such as climate change mitigation, affordable and clean energy, and industry, innovation and infrastructure. However, a unified physical theory explaining all superconductivity mechanisms is still unknown. It is believed that superconductivity is microscopically due not only to molecular composition but also to the geometric crystal structure. Hence a new dataset, S2S, containing both crystal structures and superconducting critical temperatures, is built upon SuperCon and Material Project. Based on this new dataset, we propose a novel model, S2SNet, which utilizes the attention mechanism for superconductivity prediction. To overcome the shortage of data, S2SNet is pre-trained on the whole Material Project dataset with Masked-Language Modeling (MLM). S2SNet sets a new state of the art, with an out-of-sample accuracy of 92% and an Area Under Curve (AUC) of 0.92. To the best of our knowledge, S2SNet is the first work to predict superconductivity using only information about crystal structures. This work is beneficial to superconductivity discovery and" -"---\nabstract: 'Multimodal multitask learning has attracted an increasing interest in recent years. Single-modal models have been advancing rapidly and have achieved astonishing results on various tasks across multiple domains. Multimodal learning offers opportunities for further improvements by integrating data from multiple modalities. Many methods have been proposed to learn from a specific type of multimodal data, such as vision and language data. A few of them are designed to handle several modalities and tasks at a time. In this work, we extend and improve Omninet, an architecture that is capable of handling multiple modalities and tasks at a time, by introducing cross-cache attention, integrating patch embeddings for vision inputs, and supporting structured data. The proposed Structured-data-enhanced Omninet (S-Omninet) is a universal model that is capable of learning effectively from structured data of various dimensions together with unstructured data through cross-cache attention, which enables interactions among spatial, temporal, and structured features. We also enhance spatial representations in a spatial cache with patch embeddings. We evaluate the proposed model on several multimodal datasets and demonstrate a significant improvement over the baseline, Omninet.'\nauthor:\n- Ye Xue$^1$\n- |\n Diego Klabjan$^2$, Jean Utke$^{3}$\\\n $^{1,2}$Northwestern University\\\n $^3$Allstate Insurance Company\\\n ye.xue@u.northwestern.edu, d-klabjan@northwestern.edu, jutke@allstate.com\nbibliography:\n- 'main.bib'\ntitle: 'S-Omninet:
Here, we present comprehensive analyses and experiments on the foreground-background (F-B) imbalance problem in object detection, which is very common and caused by small, infrequent objects of interest. We experimentally study the effects of different aspects of F-B imbalance (object size, number of objects, dataset size, object type) on detection performance. In addition, we compare 9 leading methods for addressing this problem, including Faster-RCNN, SSD, OHEM, Libra-RCNN, Focal-Loss, GHM, PISA, YOLO-v3, and GFL, with a range of datasets from different imaging domains. We conclude that (1) the F-B imbalance can indeed cause a significant drop in detection performance, (2) the detection performance is more affected by F-B imbalance when fewer training data are available, (3) in most cases, decreasing object size leads to a larger performance drop than decreasing the number of objects, given the same change in the ratio of object pixels to non-object pixels, (4) among all selected methods, Libra-RCNN and PISA demonstrate the best performance in addressing the issue of F-B imbalance, and (5) when the training dataset size" -"---\nabstract: 'Kernel ridge regression, KRR, is a generalization of linear ridge regression that is non-linear in the data, but linear in the parameters. Here, we introduce an equivalent formulation of the objective function of KRR, opening up both for using penalties other than the ridge penalty and for studying kernel ridge regression from the perspective of gradient descent. Using a continuous-time perspective, we derive a closed-form solution for solving kernel regression with gradient descent, something we refer to as kernel gradient flow, KGF, and theoretically bound the differences between KRR and KGF, where, for the latter, regularization is obtained through early stopping. We also generalize KRR by replacing the ridge penalty with the $\\ell_1$ and $\\ell_\\infty$ penalties, respectively, and use the fact that, analogous to the similarities between KGF and KRR, $\\ell_1$ regularization and forward stagewise regression (also known as coordinate descent), and $\\ell_\\infty$ regularization and sign gradient descent, follow similar solution paths. We can thus alleviate the need for computationally heavy algorithms based on proximal gradient descent. We show theoretically and empirically how the $\\ell_1$ and $\\ell_\\infty$ penalties, and the corresponding gradient-based optimization algorithms, produce sparse and robust kernel regression solutions, respectively.'\nauthor:\n- |\n Oskar Allerbo\\\n Mathematical" -"---\nauthor:\n- 'Sankarshana Srinivasan,'\n- 'Daniel B Thomas,'\n- and Richard Battye\nbibliography:\n- 'References.bib'\ndate: April 2020\ntitle: 'Cosmological gravity on all scales III: non-linear matter power spectrum in phenomenological modified gravity'\n---\n\nIntroduction\n============\n\nOver the last two decades, the field of cosmology has entered an era of precision measurements. The current cosmological paradigm, the $\\Lambda$CDM model, contains Cold Dark Matter (CDM) and dark energy (sourced by the cosmological constant $\\Lambda$) as its main components, but the underlying nature of both remains unknown. The validity of General Relativity (GR) is a crucial assumption in this picture since both CDM and $\\Lambda$ are inferred solely from gravitational observations.
In addition, efforts to detect dark matter directly or indirectly have not yet been successful, while quantum field theory predicts a $\\Lambda$ value that is many orders of magnitude larger than that inferred from cosmological measurements. These issues form part of the motivation for modifying the law of gravity and the establishment of a vast space of modified gravity models [@ref:CliftonReview; @Nojiri_2017] in order to explain the physics attributed to the dark sector. Efficiently testing this model space is a major challenge in cosmology.\n\nThe assumption that GR is the" -"---\nabstract: 'The Gaussian graphical model (GGM) incorporates an undirected graph to represent the conditional dependence between variables, with the precision matrix encoding partial correlations between pairs of variables given the others. To achieve flexible and accurate estimation and inference of the GGM, we propose the novel method FLAG, which utilizes the random effects model for pairwise conditional regression to estimate the precision matrix and applies statistical tests to recover the graph. Compared with existing methods, FLAG has several unique advantages: (i) it provides accurate estimation without sparsity assumptions on the precision matrix, (ii) it allows for element-wise inference of the precision matrix, (iii) it achieves computational efficiency by developing an efficient PX-EM algorithm and an MM algorithm accelerated with low-rank updates, and (iv) it enables joint estimation of multiple graphs using FLAG-Meta or FLAG-CA. The proposed methods are evaluated using various simulation settings and real data applications, including gene expression in the human brain, term association in university websites, and stock prices in the U.S. financial market. The results demonstrate that FLAG and its extensions provide accurate precision estimation and graph recovery.'\nauthor:\n- 'Yueqi Qian, Xianghong Hu, and Can Yang'\nbibliography:\n- 'ref.bib'\ntitle: Flexible and Accurate Methods for" -"---\nabstract: 'Pattern speeds are a fundamental parameter of the dynamical features (e.g. bars, spiral arms) of a galaxy, setting resonance locations. Pattern speeds are not directly observable, so the Tremaine-Weinberg (TW) method has become the most common method used to measure them in galaxies. However, it has not been tested properly whether this method can straightforwardly be applied to gas tracers, despite this being widely done in the literature. When applied to observations, the TW method may return invalid results, which are difficult to diagnose due to a lack of ground truth for comparison. Although some works applying the TW method to simulated galaxies exist, only stellar populations have been tested. Therefore, here we explore the applicability of the TW method for gas tracers by applying it to hydrodynamical simulations of galaxies, where we know the true value of the bar pattern speed. We perform some simple tests to see if the TW method has a physically reasonable output. First, we add different kinds of uncertainties (e.g. in position angle or flux) to the data to mimic observational errors based on the magnitude of uncertainty present in the observations. Second, we test the method on 3D simulations with chemical networks." -"---\nabstract: |\n Resilience assessment is a critical requirement of a power grid to maintain high availability, security, and quality of service. Most grid research work currently pursued does not have access to hardware testbeds.
Additionally, with the integration of distributed energy resources, the attack surface of the grid is increasing. This increases the need for reliable and realistic modeling techniques that are usable by the wider research community. Therefore, simulation testbeds have been used to model a real-world power grid topology and measure the impact of various perturbations.\n\n Existing co-simulation platforms for the power grid focus on a limited set of components of the overall system, for example only on the dynamics of the physical layer. Additionally, a significant number of existing platforms need specialized hardware that may be too expensive for most researchers. Finally, few platforms support realistic modeling of the communication layer, which requires the use of Supervisory Control and Data Acquisition communication protocols such as [DNP3]{} while modeling cybersecurity scenarios.\n\n We present Network Attack Testbed in \\[Power\\] Grid (NATI\\[P\\]G, pronounced *natig*), a standalone, containerized, and reusable environment to enable cyber analysts and researchers to run different cybersecurity and performance scenarios on the power grid. Our tool combines GridLAB-D," -"---\nabstract: 'Although resonant planets have orbital periods near commensurability, resonance is also dictated by other factors, such as the planets\u2019 eccentricities and masses, and therefore must be confirmed through a study of the system\u2019s dynamics. Here, we perform such a study for five multi-planet systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. For each system, we run a suite of *N*-body simulations that span the full parameter-space that is consistent with the constrained orbital and planetary properties. We study the stability of each system and look for resonances based on the libration of the critical resonant angles. We find strong evidence for a two-body resonance in each system; we confirm a 3:2 resonance between Kepler-226c and Kepler-226d, confirm a 3:2 resonance between Kepler-254c and Kepler-254d, and confirm a three-body 1:2:3 resonant chain between the three planets of Kepler-363. We explore the dynamical history of two of these systems and find that these resonances most likely formed without migration. Migration leads to the libration of the three-body resonant angle, but these angles circulate in both Kepler-254 and Kepler-363. Applying our methods to additional near-resonant systems could help us identify which systems are truly resonant or non-resonant and which systems require additional" -"---\nabstract: 'Motivated by the need to study the performance of vehicular communication protocols as applicable to heterogeneous traffic conditions, we study the performance of the IEEE 802.11p medium access protocol under such a traffic setup. We consider a setup comprising connected vehicles and human-driven Motorised Two Wheelers (MTWs), where the connected vehicles are required to move as a platoon with a desired constant headway despite interruptions from the two wheelers. We invoke specific mobility models for the movement of the vehicles (car-following models for the connected vehicle platoons and a gap-acceptance model to capture the movement of the MTWs) and use them to configure (i) the traffic setup and (ii) the rate at which data packets related to safety-critical messages need to be transmitted. A control-theoretic analysis of the car-following models yields a bound on the admissible communication delay to ensure non-oscillatory convergence of the platoon headway.
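The role of such a delay bound can be illustrated with a toy simulation (a deliberately simplified sketch, not the paper's car-following or MAC model; all gains and numerical values below are hypothetical): a follower acting on stale headway information settles smoothly for small delays but overshoots and oscillates once the delay grows too large.

```python
import numpy as np

def headway_response(delay_steps: int, dt: float = 0.01, n_steps: int = 4000,
                     kp: float = 2.0, kv: float = 3.0, h_des: float = 10.0):
    """Follower tracks a constant-speed leader using headway feedback
    that is delay_steps samples old (illustrative gains, not from the paper)."""
    x_lead, v_lead = 50.0, 10.0              # leader position and speed
    x, v = 35.0, 10.0                        # follower starts 15 m behind
    buf = [x_lead - x] * (delay_steps + 1)   # buffer of past headway measurements
    headways = []
    for _ in range(n_steps):
        x_lead += v_lead * dt
        # Acceleration from delayed headway error plus relative-speed damping.
        a = kp * (buf[0] - h_des) + kv * (v_lead - v)
        v += a * dt
        x += v * dt
        buf = buf[1:] + [x_lead - x]
        headways.append(x_lead - x)
    return np.array(headways)

# headway_response(0) converges monotonically to h_des; as delay_steps grows,
# the response becomes oscillatory, mirroring the admissible-delay bound.
```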
We then use suitable Markov chain models to derive the distribution of the MAC access delay experienced by packets pertaining to safety-critical events as well as routine safety messages. The distribution, along with the bound on the admissible delay, enables us to derive the reliability of the 802.11p MAC protocol in terms of traffic and EDCA parameters." -"---\nabstract: 'In this paper we introduce the Boosted Double-proximal Subgradient Algorithm (BDSA), a novel splitting algorithm designed to address general structured nonsmooth and nonconvex mathematical programs expressed as sums and differences of composite functions. BDSA exploits the combined nature of subgradients from the data and proximal steps, and integrates a line-search procedure to enhance its performance. While BDSA encompasses existing schemes proposed in the literature, it extends its applicability to more diverse problem domains. We establish the convergence of BDSA under the Kurdyka\u2013[\u0141]{}ojasiewicz property and provide an analysis of its convergence rate. To evaluate the effectiveness of BDSA, we introduce a novel family of challenging test functions with an abundance of critical points. We conduct comparative evaluations demonstrating its ability to effectively escape non-optimal critical points. Additionally, we present two practical applications of BDSA for testing its efficacy, namely, a constrained minimum-sum-of-squares clustering problem and a nonconvex generalization of Heron\u2019s problem.'\nauthor:\n- 'Francisco J. Arag\u00f3n-Artacho [^1]'\n- 'Pedro P\u00e9rez-Aros[^2]'\n- 'David Torregrosa-Bel\u00e9n'\nbibliography:\n- 'references.bib'\ntitle: |\n The Boosted Double-Proximal Subgradient Algorithm\\\n for Nonconvex Optimization[^3]\n---\n\nnonconvex optimization, difference programming, Kurdyka\u2013[\u0141]{}ojasiewicz property, descent lemma, minimum-sum-of-squares clustering, Heron\u2019s problem\n\n49J53, 90C26, 90C30, 65K05, 90C46\n\nIntroduction\n============\n\nIn" -"---\nabstract: 'Anterior segment optical coherence tomography (AS-OCT) is a non-invasive imaging technique that is highly valuable for ophthalmic diagnosis. However, speckles in AS-OCT images can often degrade the image quality and affect clinical analysis. As a result, removing speckles in AS-OCT images can greatly benefit automatic ophthalmology analysis. Unfortunately, challenges still exist in deploying effective AS-OCT image denoising algorithms, including collecting sufficient paired training data and the requirement to preserve consistent content in medical images. To address these practical issues, we propose an unsupervised AS-OCT despeckling algorithm via a Content Preserving Diffusion Model (CPDM) with statistical knowledge. At the training stage, a Markov chain transforms clean images to white Gaussian noise by repeatedly adding random noise and removes the predicted noise in a reverse procedure. At the inference stage, we first analyze the statistical distribution of speckles and convert it into a Gaussian distribution, aiming to match the fast truncated reverse diffusion process. We then explore the posterior distribution of observed images as a fidelity term to ensure content consistency in the iterative procedure. Our experimental results show that CPDM significantly improves image quality compared to competitive methods.
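For reference, the closed-form forward (noising) step that such diffusion models share can be sketched as follows (standard DDPM form with an illustrative schedule; this is not CPDM's exact configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0: np.ndarray, t: int, betas: np.ndarray) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) in one shot:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    alpha_bar_t = np.cumprod(1.0 - betas)[t]
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

betas = np.linspace(1e-4, 0.02, 1000)   # illustrative linear noise schedule
x0 = rng.standard_normal((64, 64))      # stand-in for a clean AS-OCT image
x_mid = forward_diffuse(x0, t=100, betas=betas)   # partially noised
x_end = forward_diffuse(x0, t=999, betas=betas)   # close to white Gaussian noise
```

Matching the speckle statistics to a Gaussian, as described above, is what allows the reverse chain to be truncated and started from a real noisy image.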
Furthermore, we validate the benefits of CPDM for subsequent clinical analysis, including" -"---\nabstract: 'We show that for every $\\Delta\\in\\mathbb N$, there exists a constant $C$ such that if $G$ is an $(n,d,\\lambda)$-graph with $d/\\lambda\\ge C$ and $d$ is large enough, then $G^2$ contains every $n$-vertex tree with maximum degree bounded by $\\Delta$. This answers a question of Krivelevich.'\nauthor:\n- 'Mat\u00edas Pavez-Sign\u00e9[^1]'\nbibliography:\n- 'spanningtrees.bib'\ntitle: Spanning trees in the square of pseudorandom graphs\n---\n\nIntroduction\n============\n\nA pseudorandom graph $G$ on $n$ vertices is a sparse graph that \u201cresembles\u201d many of the properties that are typically present in the binomial random graph $G(n,p)$ with edge density $p=e(G)/\\binom{n}{2}$. Arguably, the most crucial characteristic of random graphs that pseudorandom graphs try to capture is the *uniform edge distribution* property, that is, that all large subsets of vertices span approximately the expected number of edges that appear in the truly random case.\n\nIn this paper, we will take a widely used approach to pseudorandom graphs based on a *spectral gap* condition. Say that a graph $G$ is an $(n,d,\\lambda)$-graph if $G$ is an $n$-vertex $d$-regular graph such that all of the non-trivial eigenvalues of $G$ are bounded by $\\lambda$ in absolute value, in which case, the so-called *expander mixing lemma* implies that $G$" -"---\nabstract: 'This paper addresses the important need for advanced techniques in continuously allocating workloads on shared infrastructures in data centers, a problem arising due to the growing popularity and scale of cloud computing. It particularly emphasizes the scarcity of research ensuring guaranteed capacity in capacity reservations during large-scale failures. To tackle these issues, the paper presents scalable solutions for resource management. It builds on the prior establishment of capacity reservation in cluster management systems and the two-level resource allocation problem addressed by the Resource Allowance System (RAS). Recognizing the limitations of Mixed Integer Linear Programming (MILP) for server assignment in a dynamic environment, this paper proposes the use of Deep Reinforcement Learning (DRL), which has been successful in achieving long-term optimal results for time-varying systems. A novel two-level design that utilizes a DRL-based algorithm is introduced to solve optimal server-to-reservation assignment, taking into account fault tolerance, server movement minimization, and network affinity requirements due to the impracticality of directly applying DRL algorithms to large-scale instances with millions of decision variables. This design involves a reinforcement learning agent making sequential decisions at the higher level, which are then converted into specific numbers of servers to reserve from the Main" -"---\nauthor:\n- 'L. Biaus, S.E.\u00a0Nuza & C.\u00a0Scannapieco'\nbibliography:\n- 'bibliografia.bib'\ntitle: |\n Kinematics of the Local Group gas and galaxies\\\n in the [Hestia]{} simulations\n---\n\nIntroduction {#S_intro}\n============\n\nThe Local Group (LG) encompasses the Milky Way (MW), Andromeda (M31) and several other minor galaxies. The MW and M31 are on a collision course due to the general motion of LG galaxies towards the group\u2019s barycenter [e.g., @BT08].
Observations suggest that a giant multiphase gas halo surrounds the MW and M31 and possibly point to the existence of LG gas located outside the virial radius of the MW. Observational evidence for the kinematics of the LG gas is mainly derived from absorption-line measurements in the spectra of background sources, probing the chemical composition of intervening material by studying the imprint of a variety of ions at different wavelengths. In particular, [@Richter17] analysed a large sample of high-velocity absorbers drawn from archival UV spectra of extragalactic background sources and determined the existence of a velocity dipole at high Galactic latitudes (as seen from the Local Standard of Rest or LSR). They interpreted this as possible evidence for intragroup gas streaming towards the LG barycenter as a result of" -"---\nabstract: 'Radio blazars have been linked both to individual high-energy neutrino events and to excesses in likelihood sky maps constructed from lower-energy neutrino data. However, the exact mechanism by which neutrinos are produced in these sources is still unknown. Here, we demonstrate that IceCube neutrinos with energies over 200\u00a0TeV, which were previously associated with bright radio blazars, are significantly more likely to be accompanied by flares of lower-energy events, compared to those lacking blazar counterparts. The parsec-scale core radio flux density of blazars, positioned within the error regions of energetic events, is strongly correlated with the likelihood of a day-scale lower-energy neutrino flare in directional and temporal coincidence with the high-energy event, reported by IceCube. The probability of a chance correlation is $3.6\\times 10^{-4}$. This confirms the neutrino-blazar connection in a new and independent way, and provides valuable clues to understanding the origin of astrophysical neutrinos.'\nauthor:\n- |\n Alisa Suray$^{1}$[^1] and Sergey Troitsky$^{2,1}$\\\n $^{1}$Physics Department, Lomonosov Moscow State University, 1-2 Leninskie Gory, Moscow 119991, Russia\\\n $^{2}$Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary prospect\u00a07a, 117312 Moscow, Russia\nbibliography:\n- 'neutrino-flares.bib'\ndate: 'Submitted to MNRAS Letters on June 29, 2023. Revised on
Palacios'\nbibliography:\n- 'SHEnano.bib'\ntitle: 'Non-equilibrium spin accumulation and magneto-conductance in chiral nanojunctions from density-functional & group theory'\n---\n\n![image](ToC_v2.2.png){width=\"8.25cm\" height=\"4.45cm\"}\n\nIntroduction\n============\n\nRelativistic effects experienced by electrons propagating through matter, most importantly in the presence of heavy atoms, are essential for many of the intrinsic magnetic" -"---\nabstract: 'The Laser Interferometer Space Antenna (LISA), due for launch in the mid-2030s, is expected to observe gravitational waves (GWs) from merging massive black hole binaries (MBHBs). These signals can last from days to months, depending on the masses of the black holes, and are expected to be observed with high signal-to-noise ratios (SNRs) out to high redshifts. We have adapted the PyCBC software package to enable a template bank search and inference of GWs from MBHBs. The pipeline is tested on the LISA data challenge (LDC)\u2019s Challenge 2a (\u201cSangria\u201d), which contains MBHBs and thousands of galactic binaries (GBs) in simulated instrumental LISA noise. Our search identifies all 6 MBHB signals with more than $92\\%$ of the optimal SNR. The subsequent parameter inference step recovers the masses and spins within their $90\\%$ confidence interval. The sky position parameters have 8 high-likelihood modes, which are recovered, but our posteriors often favour the incorrect sky mode. We observe that the addition of GBs biases the parameter recovery of masses and spins away from the injected values, reinforcing the need for a global fit pipeline which will simultaneously fit the parameters of the GB signals before estimating the parameters" -"---\nabstract: 'The accumulation of swimming bacteria near surfaces may lead to biological processes such as biofilm formation and wound infection. Previous experimental observations of *Vibrio alginolyticus* showed an interesting correlation between the bacterial entrapment near surfaces and the concentration of NaCl in the swimming medium. At higher concentrations of the ions, *V.\u00a0alginolyticus* in the puller mode (with flagella in front of the body) tends to stay close to the surface whereas in the pusher mode (with flagella behind the body) it is more likely to escape from the surface. Motivated by these observations, we numerically investigate the locomotion of a uniflagellated model bacterium in unbounded fluid and near a planar surface. In our elastohydrodynamic model, the boundary integral technique and Kirchhoff rod model are employed respectively to calculate the hydrodynamic forces on the swimmer and model the elastic deformations of the flagellum consisting of a short, flexible hook and a long, relatively stiff filament. Our numerical results demonstrate that hydrodynamic interactions between the model bacterium and the solid wall cause the puller type to be attracted to the surface, whereas the pusher type is either repelled from or attracted to the surface depending on the flagellum and hook" -"---\nabstract: 'Quantum processor architectures must enable scaling to large qubit numbers while providing two-dimensional qubit connectivity and exquisite operation fidelities. For microwave-controlled semiconductor spin qubits, dense arrays have made considerable progress, but are still limited in size by wiring fan-out and exhibit significant crosstalk between qubits.
To overcome these limitations, we introduce the SpinBus architecture, which uses electron shuttling to connect qubits and features low operating frequencies and enhanced qubit coherence. Device simulations for all relevant operations in the Si/SiGe platform validate the feasibility with established semiconductor patterning technology and operation fidelities exceeding $\\SI{99.9}{\\percent}$. Control using room temperature instruments can plausibly support at least 144 qubits, but much larger numbers are conceivable with cryogenic control circuits. Building on the theoretical feasibility of high-fidelity spin-coherent electron shuttling as a key enabling factor, the SpinBus architecture may be the basis for a spin-based quantum processor that meets the scalability requirements for practical quantum computing.'\nauthor:\n- Matthias K\u00fcnne\n- Alexander Willmes\n- Max Oberl\u00e4nder\n- Christian Gorjaew\n- 'Julian D.'\n- Harsh Bhardwaj\n- Max Beer\n- Eugen Kammerloher\n- Ren\u00e9 Otten\n- Inga Seidler\n- Ran Xue\n- 'Lars R.'\n- Hendrik Bluhm\ntitle: 'The SpinBus Architecture: Scaling Spin Qubits with" -"---\nabstract: 'Detecting the salient objects in a remote sensing image has wide applications in interdisciplinary research. Many existing deep learning methods have been proposed for Salient Object Detection (SOD) in remote sensing images and achieve remarkable results. However, recent adversarial attack examples, generated by changing a few pixel values in the original remote sensing image, can cause a well-trained deep learning based SOD model to collapse. Different from existing methods that add perturbations to the original images, we propose to jointly tune adversarial exposure and additive perturbation for the attack, constraining the image to be close to a cloudy image, which we term Adversarial Cloud. Clouds are natural and common in remote sensing images; however, camouflaging cloud-based adversarial attacks and defenses for remote sensing images have not been well studied before. Furthermore, we design DefenseNet as a learnable pre-processing to the adversarial cloudy images so as to preserve the performance of the deep learning based remote sensing SOD model, without tuning the already deployed deep SOD model. By considering both regular and generalized adversarial examples, the proposed DefenseNet can defend against the proposed Adversarial Cloud in the white-box setting and against other attack methods in the black-box setting. Experimental results on a synthesized benchmark from the public" -"---\nabstract: 'Among the single-trajectory Gaussian-based methods for solving the time-dependent Schr\u00f6dinger equation, the variational Gaussian approximation is the most accurate one. In contrast to Heller\u2019s original thawed Gaussian approximation, it is symplectic, conserves energy exactly, and may partially account for tunneling. However, the variational method is also much more expensive. To improve its efficiency, we symmetrically compose the second-order symplectic integrator of Faou and Lubich and obtain geometric integrators that can achieve an arbitrary even order of convergence in the time step. We demonstrate that the high-order integrators can speed up convergence drastically compared to the second-order algorithm and, in contrast to the popular fourth-order Runge-Kutta method, are time-reversible and conserve the norm and the symplectic structure exactly, regardless of the time step.
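The composition idea can be made concrete with the classic recursive "triple jump" (a generic sketch of symmetric composition; the coefficients used in the paper's integrators may differ):

```python
def compose_triple_jump(step2, order: int):
    """Return a symmetric one-step integrator of the requested even order,
    built recursively from a self-adjoint second-order step `step2(state, dt)`
    (generic triple-jump construction)."""
    assert order >= 2 and order % 2 == 0
    if order == 2:
        return step2
    inner = compose_triple_jump(step2, order - 2)
    g1 = 1.0 / (2.0 - 2.0 ** (1.0 / (order - 1)))
    g2 = 1.0 - 2.0 * g1   # negative: the scheme briefly steps backwards in time

    def step(state, dt):
        state = inner(state, g1 * dt)
        state = inner(state, g2 * dt)
        return inner(state, g1 * dt)
    return step
```

Because every substep is itself the underlying geometric method, the composed step inherits symplecticity and time-reversibility for any time step; the negative intermediate substep is characteristic of such compositions.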
To show that the method is not restricted to low-dimensional systems, we perform most of the analysis on a non-separable twenty-dimensional model of coupled Morse oscillators. We also show that the variational method may capture tunneling and that, in general, it improves accuracy over the non-variational thawed Gaussian approximation.'\nauthor:\n- Roya Moghaddasi Fereidani\n- 'Ji\u0159\u00ed J. L. Van\u00ed\u010dek'\nbibliography:\n- 'VGA\\_high-order.bib'\ntitle: 'High-order geometric integrators for the variational Gaussian approximation'\n---\n\nIntroduction\n============\n\nNuclear quantum effects play" -"---\nauthor:\n- 'Jason\u00a0Aebischer,'\n- 'Marko Pesut,'\n- Zachary Polonsky\nbibliography:\n- 'refs.bib'\ntitle: 'Renormalization scheme factorization of one-loop Fierz identities'\n---\n\nIntroduction\n============\n\nComputations in Effective Field Theories often involve the use of four-dimensional identities in order to reduce Dirac structures. In combination with dimensional regularization, such relations have to be generalized to arbitrary space-time dimensions, which is traditionally treated using evanescent operators [@Buras:1989xd; @Buras:2020xsm; @Dugan:1990df]. A simpler way of dealing with such complications has been proposed recently in the literature in the form of one-loop Fierz transformations [@Aebischer:2022aze; @Aebischer:2022rxf]. The traditional Fierz identities [@Fierz:1939zz], which relate combinations of gamma matrices and spinors, are manifestly valid in $d=4$ space-time dimensions. When performing basis transformations at the one-loop level, such identities have to be generalized in order to accommodate the contributions from evanescent operators. Such Fierz-evanescent operators, even though vanishing in $d=4$ space-time dimensions, can generate finite contributions when inserted into divergent loop-diagrams. These finite contributions, which fix the scheme of the calculation [@Herrlich:1994kh], can be computed at any given loop-order for a complete set of four-Fermi operators. The respective contributions from evanescent operators can then be interpreted as one-loop corrections to the (tree-level) Fierz identities: given an operator" -"---\nabstract: 'Differential privacy (DP), as a promising privacy-preserving model, has attracted great interest from researchers in recent years. Currently, the study of the combination of machine learning and DP is vibrant. In contrast, another widely used artificial intelligence technique, the swarm intelligence (SI) algorithm, has received little attention in the context of DP even though it also triggers privacy concerns. For this reason, this paper attempts to combine DP and SI for the first time, and proposes a general differentially private swarm intelligence algorithm framework (DPSIAF). Based on the exponential mechanism, this framework can easily develop existing SI algorithms into their private versions. As examples, we apply the proposed DPSIAF to four popular SI algorithms, and the corresponding analyses demonstrate its effectiveness. More interestingly, the experimental results show that, for our private algorithms, their performance is not strictly affected by the privacy budget, and one of the private algorithms even achieves better performance than its non-private version in some cases. These findings differ from conventional expectations, indicating the uniqueness of SI with DP.
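For reference, the exponential mechanism underlying such a framework can be sketched in its generic textbook form (this is not the paper's code; the swarm-specific utility below is a hypothetical placeholder):

```python
import numpy as np

def exponential_mechanism(candidates, utility, epsilon, sensitivity=1.0, rng=None):
    """Pick one candidate with Pr[c] proportional to exp(eps * u(c) / (2 * Delta_u)),
    which satisfies eps-differential privacy for a utility of sensitivity Delta_u."""
    rng = rng or np.random.default_rng()
    scores = np.array([utility(c) for c in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()            # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Hypothetical use inside a swarm update: privately select a guiding particle.
# best = exponential_mechanism(positions, lambda p: -loss(p), epsilon=0.5)
```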
Our study may provide a new perspective on DP, and promote the synergy between the metaheuristic optimization community and the privacy computing community.'\nauthor:\n- 'Zhiqiang\u00a0Zhang," -"---\nabstract: 'Stationary coherence in small conducting arrays has been shown to influence the transport efficiency of electronic nanodevices. Model schemes that capture the effect of the interplay between electron delocalization and system-reservoir interactions on the device performance are therefore important for designing next-generation nanojunctions powered by quantum coherence. We use a Lindblad open quantum system approach to obtain the current-voltage characteristics of small-size networks of interacting conducting sites subject to radiative and non-radiative interactions with the environment, for experimentally-relevant case studies. Lindblad theory is shown to reproduce recent measurements of negative conductance in single-molecule junctions using a biased two-site model driven by thermal fluctuations. For array sites with conducting ground and excited orbitals in the presence of radiative incoherent pumping, we show that Coulomb interactions that otherwise suppress charge transport can be overcome to produce light-induced currents. We also show that in nanojunctions having asymmetric transfer rates between the array and electrical contacts, an incoherent driving field can induce photocurrents at zero bias voltage whose direction depends on the type of orbital delocalization established between sites. Possible extensions of the theory are discussed.'\nauthor:\n- Felipe Recabal\n- Felipe Herrera\nbibliography:\n- 'sources.bib'\ntitle: |\n Driven-Dissipative Conductance in Nanojunction Arrays:\\\n Negative Conductance" -"---\nabstract: 'Living systems are chiral on multiple scales, from constituent biopolymers to large scale morphology, and their active mechanics is both driven by chiral components and serves to generate chiral morphologies. We describe the mechanics of active fluid membranes in coordinate-free form, with focus on chiral contributions to the stress. These generate geometric \u2018odd elastic\u2019 forces in response to mean curvature gradients but directed perpendicularly. As a result, they induce tangential membrane flows that circulate around maxima and minima of membrane curvature. When the normal viscous force amplifies perturbations, the membrane shape can become linearly unstable, giving rise to shape instabilities controlled by an active Scriven-Love number. We describe examples for spheroids, membrane tubes and helicoids, discussing the relevance and predictions such examples make for a variety of biological systems from the sub-cellular to the tissue level.'\nauthor:\n- 'Sami C. Al-Izzi'\n- 'Gareth P. Alexander'\ntitle: 'A Twist On Active Membranes: Odd Mechanics, Spontaneous Flows and Shape Instabilities'\n---\n\nThe mechanics of active materials is distinct from that of passive systems: active stresses and forces are used to enable motility, sustain flows and coordinate a multitude of cellular functions essential for all living tissues. Theories of biological tissues as" -"---\nabstract: 'Data augmentation is now an essential part of the image training process, as it effectively prevents overfitting and makes the model more robust against noisy datasets. Recent mixing augmentation strategies have advanced to generate mixup masks that can enrich the saliency information, which serves as a supervisory signal.
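In its simplest form, saliency-guided pixel-wise mixing can be sketched as follows (a hypothetical minimal version for illustration only, not the GuidedMixup algorithm introduced below):

```python
import numpy as np

def saliency_mixup(img_a, img_b, sal_a, sal_b, eps=1e-8):
    """Mix two images with a per-pixel ratio that favours whichever image
    is more salient at each location (imgs: [H, W, C], saliency maps: [H, W])."""
    ratio = sal_a / (sal_a + sal_b + eps)          # per-pixel weight in [0, 1]
    mixed = ratio[..., None] * img_a + (1.0 - ratio[..., None]) * img_b
    lam = float(ratio.mean())                      # label interpolation weight
    return mixed, lam
```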
However, these methods incur a significant computational burden to optimize the mixup mask. From this motivation, we propose a novel saliency-aware mixup method, GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead. We develop an efficient pairing algorithm that seeks to minimize the conflict of salient regions of paired images and to achieve rich saliency in mixup images. Moreover, GuidedMixup controls the mixup ratio for each pixel to better preserve the salient region by interpolating two paired images smoothly. The experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance on classification datasets. In addition, our method shows good performance in experiments with corrupted or reduced datasets.'\nauthor:\n- 'Minsoo Kang^1,2^, Suhyun Kim^2^[^1]'\nbibliography:\n- 'aaai23.bib'\ntitle: 'GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps'\n---\n\nIntroduction\n============\n\nImprovement and success of deep neural networks have been" -"---\nabstract: 'Depth perception is a crucial component of monocular 3D detection tasks that typically involve ill-posed problems. In light of the success of sample mining techniques in 2D object detection, we propose a simple yet effective mining strategy for improving depth perception in 3D object detection. Concretely, we introduce a plain metric to evaluate the quality of depth predictions, which chooses the mined sample for the model. Moreover, we propose a Gradient-aware and Model-perceive Mining strategy (GMM) for depth learning, which exploits the predicted depth quality for better depth learning through easy mining. GMM is a general strategy that can be readily applied to several state-of-the-art monocular 3D detectors, improving the accuracy of depth prediction. Extensive experiments on the nuScenes dataset demonstrate that the proposed methods significantly improve the performance of 3D object detection while outperforming other state-of-the-art sample mining techniques by a considerable margin. On the nuScenes benchmark, GMM achieved state-of-the-art (**42.1%** mAP and **47.3%** NDS) performance in monocular object detection.'\nauthor:\n- |\n Weixin Mao$^{*1}$, Jinrong Yang$^{*2}$, Zheng Ge$^{1}$, Lin Song${^3}$,\\\n Hongyu Zhou$^{4}$, Tiezheng Mao${^1}$, Zeming Li$^{3}$, Osamu Yoshie$^{1}$[^1]\nbibliography:\n- 'sample.bib'\ntitle: 'GMM: Delving into Gradient Aware and Model Perceive Depth Mining for Monocular 3D
The source provided a coherence time of tens of picoseconds, and it was shown to have the potential to be applied in long-distance quantum interference. Furthermore, we experimentally demonstrated the Hong-Ou-Mandel (HOM) interference between two photon sources with visibility exceeding the classical limit.'\nauthor:\n- Bo Li\n- 'Yu-Huai Li'\n- Yuan Cao\n- Juan Yin\n- 'Cheng-Zhi Peng'\ntitle: 'Pure-state photon-pair source with a long coherence time for large-scale quantum information processing'\n---\n\nIntroduction\n============\n\nPhoton-pair" -"---\nabstract: 'Calculations at finite temperatures are fundamental in different scientific fields, from nuclear physics to condensed matter. Evolution in imaginary time is a prominent classical technique for preparing thermal states of quantum systems. We propose a new quantum algorithm that prepares thermal states based on the quantum imaginary time propagation method, using a diluted operator with ancilla qubits to overcome the non-unitary nature of the imaginary time operator. The presented method is the first that allows us to obtain the correct thermal density matrix on a general quantum processor for a generic Hamiltonian. We prove its reliability on actual quantum hardware, computing thermal properties for two- and three-neutron systems.'\nauthor:\n- Francesco\u00a0Turro\nbibliography:\n- 'references.bib'\ntitle: Quantum Imaginary Time Propagation algorithm for preparing thermal states\n---\n\nCalculations at finite temperature are essential for understanding quantum systems across scientific fields. In particular, the thermodynamic properties of nuclear matter play a crucial role in heavy-ion collisions, astrophysics, and general nuclear applications. Some examples are nuclear reactions in the evolution of matter in the early universe and inside the core of stars\u00a0[@RevModPhys_83_195; @annurev-astro-081811-125543; @annurev-nucl-020620-063734], supernova explosions and the phase diagram of QCD\u00a0[@de2010simulating; @shuryak2017strongly]. The recent detection of
Ivlev'\n- Paola Caselli\n- Munan Gong\n-" -"---\nabstract: 'This paper revisits an adaptation of the random forest algorithm for Fr\u00e9chet regression, addressing the challenge of regression in the context of random objects in metric spaces. Recognizing the limitations of previous approaches, we introduce a new splitting rule that circumvents the computationally expensive operation of Fr\u00e9chet means by substituting it with a medoid-based approach. We validate this approach by demonstrating its asymptotic equivalence to Fr\u00e9chet mean-based procedures and establish the consistency of the associated regression estimator. The paper provides a sound theoretical framework and a more efficient computational approach to Fr\u00e9chet regression, broadening its application to non-standard data types and complex use cases.'\nauthor:\n- Matthieu Bult\u00e9\n- Helle S\u00f8rensen\nbibliography:\n- 'main.bib'\ntitle: Medoid splits for efficient random forests in metric spaces\n---\n\nLeast squares regression, Medoid, Metric spaces, Random forest, Random objects\n\nIntroduction {#sec-intro}\n============\n\nWe study the extension of random forests to regression situations where the response takes values in a metric space. The usual expectation is replaced by the Fr\u00e9chet mean, which can be very computationally intensive, restricting its possible applications. Emphasis is therefore on a new medoid-based splitting rule used in the individual trees. Application of this new splitting" -"---\nabstract: 'The Electron-Ion Collider (EIC) provides unique opportunities in searching for new physics through its high center of mass energy and coherent interactions of large nuclei. We examine how light weakly interacting vector bosons from a variety of models can be discovered or constrained, over significant parts of their parameter space, through clean displaced vertex signals at the EIC. Our results indicate that the searches we propose compare favorably with or surpass existing experimental projections for the models examined. The reach for the new physics that we consider can be markedly improved if \u201cfar backward\" particle identification capabilities are included in the EIC detector complex.'\nauthor:\n- 'Hooman Davoudiasl[^1]'\n- 'Roman Marcarelli[^2]'\n- 'Ethan T. Neil[^3]'\nbibliography:\n- 'eic-displaced.bib'\ntitle: 'Displaced Signals of Hidden Vectors at the Electron-Ion Collider'\n---\n\nIntroduction\\[sec:intro\\]\n=========================\n\nA number of experimental observations, as well as conceptual puzzles, lead us to the unavoidable conclusion that new physics beyond the Standard Model (SM) is required for a complete fundamental description of natural phenomena. For a long time, conventional thinking largely assigned new physics to ever shorter distances, corresponding to increasingly larger energy scales. However, recent years have seen a surge of interest in searching for new low mass
The proposed geometric approximation is used to predict the coupling coefficient between sensors as a function of separation distance and angular displacement, and the results are validated against two-dimensional finite element modelling. The analytical approximations show excellent agreement with the FE analysis, predicting comparable trends with changing separation and angular displacement, enabling best fitting to 2D FE and 3D numerical data with a residual standard deviation of less than $0.5\\%$ for the planar coil approximation. The work demonstrates the validity of the analytical approximation for predicting coupling behaviour between neighbouring coils. This has practical uses for the automated estimation of the physical separation between coils, or the curvature of surfaces on which they rest or adhere.'\nauthor:\n- 'Robert R. Hughes[^1], Alexis [Hernandez Arroyo]{}\\*, Anthony" -"---\nauthor:\n- 'I.\u00a0V.\u00a0Anikin'\ntitle: 'Conformal Symmetry and Effective Potential: II. Evolution'\n---\n\nIntroduction\n============\n\nIn field-theoretical models with spontaneous symmetry breaking at the classical level, the geometrical analysis of the Goldstone theorem, based on the interpretation of a given potential minimum as a vacuum state (see for instance [@Peskin:1995ev]), plays an important role for the different theoretical approaches. As is well known, the quantum corrections which have been taken into account in the theoretical studies distort the geometrical picture. However, the use of effective potential methods allows us to return to the classical geometrical analysis of models with spontaneous symmetry breaking.\n\nThe effective potential is given by the corresponding vacuum diagram sets which are obtained as a result of the stationary phase method applied to the generating functional. In the most interesting cases, we need to include a massive parameter in the models. As is well known, if the loop integrations are made from the massive (scalar) propagators, the conformal symmetry is useless in principle. However, within the effective potential approach, we are able to avoid the use of the massive propagator appearing in the vacuum diagrams. To this aim, we propose to treat the mass terms in the Lagrangian together" -"---\ntitle: A survey on algebraic dilatations\n---\n\n$$\\text{{\\large {Adrien Dubouloz, Arnaud Mayeux and Jo\u00e3o Pedro dos Santos }}}$$\n\n$$\\text{\\today}$$\n\n$~~$\n\nIn this text, we wish to provide the reader with a short guide to recent works on the theory of dilatations in Commutative Algebra and Algebraic Geometry. These works fall naturally into two categories: one emphasises [*foundational and theoretical*]{} aspects and the other [*applications*]{} to existing theories.\n\n[ ]{}\n\nIntroduction {#introduction .unnumbered}\n============\n\n***[What is the concept of algebraic dilatations about ? ]{}***\n\nDilatation of rings is a basic construction of commutative algebra, like localization or tensor product. It can be globalized so that it also makes sense on schemes or algebraic spaces. In fact dilatations generalize localizations.\n\nLet $A$ be a ring and let $S$ be a multiplicative subset of $A$. Recall that the localization $S^{-1}A$ is an $A$-algebra such that, for any $A$-algebra $ A \\to B $ for which the image of $s$ is invertible for any $s \\in S$, the map $A \\to B$ factors through $A \\to S^{-1}A$. Intuitively, $S^{-1}A$ is the $A$-algebra obtained from $A$ by adding all fractions $\\frac{a}{s}$ with $a \\in A $ and $s \\in S$. 
Formally, $S^{-1} A$ is made of" -"---\nabstract: 'Individualized treatment rules, cornerstones of precision medicine, inform patient treatment decisions with the goal of optimizing patient outcomes. These rules are generally unknown functions of patients\u2019 pre-treatment covariates, meaning they must be estimated from clinical or observational study data. Myriad methods have been developed to learn these rules, and these procedures are demonstrably successful in traditional asymptotic settings with a moderate number of covariates. The finite-sample performance of these methods in high-dimensional covariate settings, which are increasingly the norm in modern clinical trials, has not been well characterized, however. We perform a comprehensive comparison of state-of-the-art individualized treatment rule estimators, assessing performance on the basis of the estimators\u2019 accuracy, interpretability, and computational efficacy. Sixteen data-generating processes with continuous outcomes and binary treatment assignments are considered, reflecting a diversity of randomized and observational studies. We summarize our findings and provide succinct advice to practitioners needing to estimate individualized treatment rules in high dimensions. All code is made publicly available, facilitating modifications and extensions to our simulation study. A novel pre-treatment covariate filtering procedure is also proposed and is shown to improve estimators\u2019 accuracy and interpretability.'\nauthor:\n- |\n Philippe Boileau\\\n Graduate Group in Biostatistics and\\\n Center for Computational Biology, and\\" -"---\nabstract: 'NVMe SSD caching has demonstrated impressive capabilities in solving cloud block storage\u2019s I/O bottleneck and enhancing application performance in public, private, and hybrid cloud environments. However, traditional host-side caching solutions have several serious limitations. First, the cache cannot be shared across hosts, leading to low cache utilization. Second, the commonly-used fixed-size cache block allocation mechanism is unable to provide good cache performance with low memory overhead for diverse cloud workloads with vastly different I/O patterns. This paper presents AdaCache, a novel userspace disaggregated cache system that utilizes adaptive cache block allocation for cloud block storage. First, AdaCache proposes an innovative adaptive cache block allocation scheme that allocates cache blocks based on the request size to achieve both good cache performance and low memory overhead. Second, AdaCache proposes a group-based cache organization that stores cache blocks into groups to solve the fragmentation problem brought by variable-sized cache blocks. Third, AdaCache designs a two-level cache replacement policy that replaces cache blocks in both single blocks and groups to improve the hit ratio. Experimental results with real-world traces show that AdaCache can substantially improve I/O performance and reduce storage access caused by cache misses with a much lower memory usage compared" -"---\nabstract: |\n In this paper we prove the following new sufficient condition for a digraph to be Hamiltonian:\n\n [*Let $D$ be a 2-strong digraph of order $n\\geq 9$. If $n-1$ vertices of $D$ have degrees at least $n+k$ and the remaining vertex has degree at least $n-k-4$, where $k$ is a non-negative integer, then $D$ is Hamiltonian*]{}.\n\n This is an extension of Ghouila-Houri\u2019s theorem for 2-strong digraphs and is a generalization of an early result of the author (DAN Arm. 
SSR 91(2):6-8, 1990). The obtained result is best possible in the sense that for $k=0$ there is a digraph of order $n=8$ (respectively, $n=9$) with the minimum degree $n-4=4$ (respectively, with the minimum degree $n-5=4$) whose $n-1$ vertices have degrees at least $n-1$, but it is not Hamiltonian.\n\n We also give a new sufficient condition for a 3-strong digraph to be Hamiltonian-connected.\nauthor:\n- 'Samvel Kh. Darbinyan'\ntitle: 'A new sufficient condition for a 2-strong digraph to be Hamiltonian'\n---\n\nIntroduction\n============\n\nIn this paper, we consider finite digraphs without loops and multiple arcs. We shall assume that the reader is familiar with the standard terminology on digraphs and refer the reader to [@[5]]. Every cycle and path" -"---\nabstract: 'Traditionally, convolutional neural networks (CNN) and vision transformers (ViT) have dominated computer vision. However, recently proposed vision graph neural networks (ViG) provide a new avenue for exploration. Unfortunately, for mobile applications, ViGs are computationally expensive due to the overhead of representing images as graph structures. In this work, we propose a new graph-based sparse attention mechanism, Sparse Vision Graph Attention (SVGA), that is designed for ViGs running on mobile devices. Additionally, we propose the first hybrid CNN-GNN architecture for vision tasks on mobile devices, MobileViG, which uses SVGA. Extensive experiments show that MobileViG beats existing ViG models and existing mobile CNN and ViT architectures in terms of accuracy and/or speed on image classification, object detection, and instance segmentation tasks. Our fastest model, MobileViG-Ti, achieves 75.7% top-1 accuracy on ImageNet-1K with 0.78 ms inference latency on iPhone 13 Mini NPU (compiled with CoreML), which is faster than MobileNetV2x1.4 (1.02 ms, 74.7% top-1) and MobileNetV2x1.0 (0.81 ms, 71.8% top-1). Our largest model, MobileViG-B, obtains 82.6% top-1 accuracy with only 2.30 ms latency, which is faster and more accurate than the similarly sized EfficientFormer-L3 model (2.77 ms, 82.4%). Our work proves that well-designed hybrid CNN-GNN architectures can be a new
Furthermore, the\u00a0measured source radii provide information about the transition from the QGP to the hadronic phase, as well as about the phase space of quantum chromodynamics\u00a0[@Lacey:2014wqa].\n\nRecent high-precision femtoscopic measurements\u00a0[@PHENIX:2017ino; @NA61SHINE:2023qzr] have shown that the previously widely assumed Gaussian\u00a0[@Adler:2004rq; @STAR:2004qya; @ALICE:2015hvw] or Cauchy\u00a0[@CMS:2017mdg; @ATLAS:2017shk] source distributions do not provide an adequate description of the measured correlation functions. Instead, a\u00a0generalization of these distributions, the" -"---\nabstract: 'Meeting online is becoming the new normal. Creating an immersive experience for online meetings is necessary for more diverse and seamless environments. Efficient photorealistic rendering of human 3D dynamics is the core of immersive meetings. Current popular applications achieve real-time conferencing but fall short in delivering photorealistic human dynamics, either due to limited 2D space or the use of avatars that lack realistic interactions between participants. Recent advances in neural rendering, such as the Neural Radiance Field (NeRF), offer the potential for greater realism in metaverse meetings. However, the slow rendering speed of NeRF poses challenges for real-time conferencing. We envision a pipeline for a future extended reality metaverse conferencing system that leverages monocular video acquisition and free-viewpoint synthesis to enhance data and hardware efficiency. Towards an immersive conferencing experience, we explore an accelerated NeRF-based free-viewpoint synthesis algorithm for rendering photorealistic human dynamics more efficiently. We show that our algorithm achieves comparable rendering quality while performing training and inference $44.5\\%$ and $213\\%$ faster than state-of-the-art methods, respectively. Our exploration provides a design basis for constructing metaverse conferencing systems that can handle complex application scenarios, including dynamic scene relighting with customized themes and multi-user conferencing that harmonizes real-world
Deuterated ions at Mars are likely difficult to measure with current techniques due to low densities and mass degeneracies with more abundant H" -"---\nabstract: 'We propose an electro-hydrodynamics model to describe the dynamic evolution of a slender drop containing a dilute ionic surfactant on a naturally wettable surface, with a varying external electric field. This unified model reproduces fundamental microfluidic operations controlled by electrical signals, including dewetting, rewetting, and droplet shifting. In this paper, lubrication theory analysis and numerical simulations illustrate how to electrically control the wettability of surface via the charged surfactant. Our numerical results show that electric field promotes dewetting by attracting ionic surfactants onto the transition thin-film region and promotes rewetting by attracting them away from the region.'\nauthor:\n- Weiqi Chu\n- Hangjie Ji\n- Qining Wang\n- 'Chang-Jin \u201cCJ\u201d Kim'\n- 'Andrea L. Bertozzi'\nbibliography:\n- 'bibfile.bib'\ntitle: 'An electro-hydrodynamics modeling of droplet actuation on solid surface by surfactant-mediated electro-dewetting'\n---\n\nIntroduction {#sec: intro}\n============\n\nIn recent years, digital microfluidics (DMF) [@kim2001micropumping], which allows manipulation of liquid droplets individually and independently [@li2020current], has been intensively studied as an important liquid-handling technology [@choi2012digital] for lab-on-a-chip devices [@samiei2016review; @abdelgawad2009digital] and many other applications [@chiu2012liquid; @sen2008microscale; @cheng2010active; @nelson2011miniature; @cha2016thermal]. Among the different mechanisms to actuate droplets for DMF, electrowetting [@beni1981electro] in the form of electrowetting-on-dielectric (EWOD) [@kim2001micropumping] is the most" -"---\nabstract: 'Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities. However, current approaches for automatically generating such captions struggle to articulate the perceptual or cognitive features that are the hallmark of charts (e.g., complex trends and patterns). In response, we introduce VisText: a dataset of 12,441 pairs of charts and captions that describe the charts\u2019 construction, report key statistics, and identify perceptual and cognitive phenomena. In VisText, a chart is available as three representations: a rasterized image, a backing data table, and a *scene graph*\u2014a hierarchical representation of a chart\u2019s visual elements akin to a web page\u2019s Document Object Model (DOM). To evaluate the impact of VisText, we fine-tune state-of-the-art language models on our chart captioning task and apply prefix-tuning to produce captions that vary the semantic content they convey. Our models generate coherent, semantically rich captions and perform on par with state-of-the-art chart captioning models across machine translation and text generation metrics. Through qualitative analysis, we identify six broad categories of errors that our models make that can inform future work.'\nauthor:\n- |\n Benny J. Tang$^*$\\\n MIT CSAIL\\\n `benjtang@csail.mit.edu`\\\n Angie" -"---\nabstract: 'The study of waveguide propagating modes is essential for achieving directional electronic transport in two-dimensional materials. Simultaneously, exploring potential gaps in these systems is crucial for developing devices akin to those employed in conventional electronics. 
Building upon the theoretical groundwork laid by Hartmann et al. [@Hartmann2014Waveguides], which focused on implementing waveguides in pristine graphene monolayers, this work delves into the impact of a waveguide on two-dimensional gapped Dirac systems. We derive exact solutions encompassing wave functions and energy-bound states for a secant-hyperbolic attractive potential in gapped graphene, with a gap generated by sublattice asymmetry or a Kekul\u00e9-distortion. These solutions leverage the inherent properties and boundary conditions of the Heun polynomials. Our findings demonstrate that the manipulation of the number of accessible energy-bound states, i.e., transverse propagating modes, relies on factors such as the width and depth of the potential as well as the gap value of the two-dimensional material.'\nauthor:\n- 'V. G. Ibarra-Sierra'\n- 'E. J. Robles-Raygoza'\n- 'J. C. Sandoval-Santana'\n- 'R. Carrillo-Bastos'\nbibliography:\n- 'references.bib'\ntitle: 'Waveguiding in massive two-dimensional Dirac systems'\n---\n\nIntroduction\n============\n\nTwo-dimensional (2D) materials[@Xu2013; @miro2014atlas; @Mounet2018; @Ibarra2019], such as graphene, hBN, MoS$_2$, black phosphorus, and borophene ($8-Pmmn$),
For a given sampling frequency, these estimates give more accurate approximations of the drift and diffusion functions, making SINDy a far more feasible system identification method.'\nauthor:\n- 'Mathias Wanner[^1]'\n- 'Dr. Igor Mezi\u0107'\nbibliography:\n- 'Sources.bib'\ntitle: 'On Numerical Methods for Stochastic SINDy [^2]'\n---\n\nStochastic Differential Equations, System Identification, Numerical Methods, SINDy\n\n37H99, 37M15, 60H35, 65C40, 93E12\n\nIntroduction\n============\n\nFor many dynamical systems, data may be abundant while there remain no analytic models to describe the system. These systems" -"---\nabstract: 'Knowledge Graph Construction (KGC) can be seen as an iterative process starting from a high quality nucleus that is refined by knowledge extraction approaches in a virtuous loop. Such a nucleus can be obtained from knowledge existing in an open KG like Wikidata. However, due to the size of such generic KGs, integrating them as a whole may entail irrelevant content and scalability issues. We propose an analogy-based approach that starts from seed entities of interest in a generic KG, and keeps or prunes their neighboring entities. We evaluate our approach on Wikidata through two manually labeled datasets that contain either domain-homogeneous or -heterogeneous seed entities. We empirically show that our analogy-based approach outperforms LSTM, Random Forest, SVM, and MLP, with a drastically lower number of parameters. We also evaluate its generalization potential in a transfer learning setting. These results advocate for the further integration of analogy-based inference in tasks related to the KG lifecycle.'\nauthor:\n- Lucas Jarnac\n- Miguel Couceiro\n- Pierre Monnin\nbibliography:\n- 'bibliography.bib'\ntitle: 'Relevant Entity Selection: Knowledge Graph Bootstrapping via Zero-Shot Analogical Pruning'\n---\n\n<ccs2012> <concept> <concept\\_id>10010147.10010178.10010187</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Knowledge representation and reasoning</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10010147.10010178</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Artificial intelligence</concept\\_desc> <concept\\_significance>300</concept\\_significance>
These results demonstrate the significance and versatility of ROMAN\u2019s dynamic adaptability featuring autonomous failure recovery capabilities, and highlight its potential for various autonomous manipulation tasks that demand adaptive motor skills.'\nauthor:\n- '**Eleftherios Triantafyllidis**'\n- '**Fernando Acero**'\n- '**Zhaocheng Liu**'\n- '**Zhibin Li**'\nbibliography:\n-" -"---\nabstract: 'We report the dynamics of a droplet levitated in a single-axis acoustic levitator. The deformation and atomization behavior of the droplet in the acoustic field exhibits a myriad of complex phenomena, in a sequence of steps. These include the primary breakup of the droplet through stable levitation, deformation, sheet formation, and equatorial atomization, followed by secondary breakup which could be umbrella breakup, bag breakup, bubble breakup or multistage breakup depending on the initial size of the droplet. The visualization of the interfacial instabilities on the surface of the liquid sheet using both side and top-view imaging is presented. An approximate size distribution of the droplet after a complete breakup is also provided. Lastly, an aggregation of the atomized smaller droplets is observed after the complete atomization. The primary breakup of the droplet is preceded by a stable levitation of the droplet, in which the acoustic force balances the downward gravity force and the resulting ellipsoidal shape of the droplet is a consequence of the balance of deforming acoustic force and the restoring surface tension force. The acoustic force changes with the change in the shape of the droplet leading to further deformation of the droplet, ultimately resulting in a highly flattened" -"---\nabstract: 'Automatically generating textual content with desired attributes is an ambitious task that people have long pursued. Existing works have made a series of advances in incorporating unimodal controls into language models (LMs), whereas how to generate controllable sentences with multimodal signals and high efficiency remains an open question. To tackle the puzzle, we propose a new paradigm of zero-shot controllable text generation with multimodal signals (ZeroGen). Specifically, ZeroGen leverages controls of text and image successively from token-level to sentence-level and maps them into a unified probability space at decoding, which customizes the LM outputs by weighted addition without extra training. To achieve better inter-modal trade-offs, we further introduce an effective dynamic weighting mechanism to regulate all control weights. Moreover, we conduct substantial experiments to probe the relationship of being in-depth or in-width between signals from distinct modalities. Encouraging empirical results on three downstream tasks show that ZeroGen not only outperforms its counterparts on captioning tasks by a large margin but also shows great potential in multimodal news generation with a higher degree of control. Our code will be released at .'\nauthor:\n- |\n Haoqin Tu, Bowen Yang, Xianfeng Zhao\\\n State Key Laboratory of Information" -"---\nabstract: 'We investigate the increasingly prominent task of jointly inferring *multiple* networks from nodal observations. While most *joint* inference methods assume that observations are available at all nodes, we consider the realistic and more difficult scenario where a subset of nodes are *hidden* and cannot be measured. 
Under the assumptions that the partially observed nodal signals are graph stationary and the networks have similar connectivity patterns, we derive structural characteristics of the connectivity between hidden and observed nodes. This allows us to formulate an optimization problem for estimating networks while accounting for the influence of hidden nodes. We identify conditions under which a convex relaxation yields the sparsest solution, and we formalize the performance of our proposed optimization problem with respect to the effect of the hidden nodes. Finally, synthetic and real-world simulations provide evaluations of our method in comparison with other baselines.'\nauthor:\n- 'Madeline Navarro,\u00a0, Samuel Rey,\u00a0, Andrei Buciulea,\u00a0, Antonio G. Marques,\u00a0, and Santiago Segarra,\u00a0[^1]'\nbibliography:\n- 'biblio.bib'\ntitle: Joint Network Topology Inference in the Presence of Hidden Nodes\n---\n\nGraph learning, network topology inference, hidden nodes, graph signal processing, graph stationarity, multi-layer graphs.\n\nIntroduction {#S:intro}\n============\n\nrecent years, graphs have become" -"---\nabstract: 'Multi-robot platforms are playing an increasingly important role in [warehouse automation for efficient goods transport]{}. This paper proposes [a novel customization of a multi-robot system, called]{} Tactile Mobile Manipulators (TacMMs). Each TacMM integrates a soft optical tactile sensor and a mobile robot with a load-lifting mechanism, enabling cooperative transportation in tasks requiring coordinated physical interaction. More specifically, we mount the TacTip (biomimetic optical tactile sensor) on the Distributed Organisation and Transport System (DOTS) mobile robot. The tactile information then helps the mobile robots adjust the relative robot-object pose, thereby increasing the efficiency of load-lifting tasks. This study compares the performance of [using two TacMMs]{} with tactile perception with traditional vision-based pose adjustment for load-lifting. The results show that the average success rate of the TacMMs (66$\\%$) is improved over a purely visual-based method (34$\\%$), with a larger improvement when the mass of the load was non-uniformly distributed. [Although this initial study considers two TacMMs, we expect the benefits of tactile perception to extend to multiple mobile robots.]{} Website:'\nauthor:\n- 'Zhuochao He, Xuyang Zhang, Simon Jones, Sabine Hauert, Dandan Zhang, Nathan F. Lepora$^{1}$[^1][^2][^3]'\nbibliography:\n- 'RAL-format.bib'\ntitle: 'TacMMs: Tactile Mobile Manipulators for Warehouse Automation'\n---\n\nTactile Sensing, Multi-robot" -"---\nabstract: '\\[sec:Abstract\\] For robots to assist users with household tasks, they must first learn about the tasks from the users. Further, performing the same task every day, in the same way, can become boring for the robot\u2019s user(s), therefore, assistive robots must find creative ways to perform tasks in the household. In this paper, we present a cognitive architecture for a household assistive robot that can learn personalized breakfast options from its users and then use the learned knowledge to set up a table for breakfast. The architecture can also use the learned knowledge to create new breakfast options over a longer period of time. 
The proposed cognitive architecture combines state-of-the-art perceptual learning algorithms, computational implementation of cognitive models of memory encoding and learning, a task planner for picking and placing objects in the household, a graphical user interface (GUI) to interact with the user and a novel approach for creating new breakfast options using the learned knowledge. The architecture is integrated with the Fetch mobile manipulator robot and validated, as a proof-of-concept system evaluation in a large indoor environment with multiple kitchen objects. Experimental results demonstrate the effectiveness of our architecture to learn personalized breakfast options from the" -"---\nabstract: 'Despite recent theoretical progress on the non-convex optimization of two-layer neural networks, it is still an open question whether gradient descent on neural networks without unnatural modifications can achieve better sample complexity than kernel methods. This paper provides a clean mean-field analysis of projected gradient flow on polynomial-width two-layer neural networks. Different from prior works, our analysis does not require unnatural modifications of the optimization algorithm. We prove that with sample size $n = O(d^{3.1})$ where $d$ is the dimension of the inputs, the network trained with projected gradient flow converges in ${\\textup{poly}}(d)$ time to a non-trivial error that is not achievable by kernel methods using $n \\ll d^4$ samples, hence demonstrating a clear separation between unmodified gradient descent and NTK. As a corollary, we show that projected gradient descent with a positive learning rate and a polynomial number of iterations converges to low error with the same sample complexity.'\nauthor:\n- |\n Arvind Mahankali[^1]\\\n Stanford University\\\n `amahanka@stanford.edu`\\\n- |\n Jeff Z. Haochen\\\n Stanford University\\\n `jhaochen@stanford.edu`\\\n- |\n Kefan Dong\\\n Stanford University\\\n `kefandong@stanford.edu`\\\n- |\n Margalit Glasgow\\\n Stanford University\\\n `mglasgow@stanford.edu`\\\n- |\n Tengyu Ma\\\n Stanford University\\\n `tengyuma@stanford.edu`\nbibliography:\n- 'all.bib'\n- 'new.bib'\n- 'sample.bib'\ntitle: 'Beyond NTK with" -"---\nabstract: 'Entangled photon pairs are essential for quantum communication technology. They can be generated on-demand by semiconductor quantum dots, but several mechanisms are known to reduce the degree of entanglement. While some obstacles like the finite fine-structure splitting can be overcome by now, the excitation scheme itself can impair the entanglement fidelity. Here, we demonstrate that the swing-up of quantum emitter population (SUPER) scheme applied to a quantum dot in a cavity yields almost perfectly entangled photons. The entanglement degree remains robust against phonon influences even at elevated temperatures, due to decoupling of the excitation and emission process. With this achievement, quantum dots are ready to be used as entangled photon pair sources in applications requiring high degrees of entanglement up to temperatures of about $\\SI{80}{K}$.'\nauthor:\n- 'Thomas K. Bracht'\n- Moritz Cygorek\n- Tim Seidelmann\n- Vollrath Martin Axt\n- 'Doris E. 
Reiter'\nbibliography:\n- 'bibfile.bib'\ntitle: 'Temperature-independent almost perfect photon entanglement from quantum dots via the SUPER scheme'\n---\n\nIntroduction\n============\n\nWith their ability to generate entangled photons on-demand [@orieux2017semiconductor; @stevenson2006semiconductor; @Huber2018semiconductor], quantum dots offer exciting possibilities for advancing the field of quantum communication [@vajner2022quantum]. To harness their usefulness for quantum applications, considerable efforts have been" -"---\nabstract: 'This paper focuses on an important type of black-box attacks, i.e., transfer-based adversarial attacks, where the adversary generates adversarial examples by a substitute (source) model and utilize them to attack an unseen target model, without knowing its information. Existing methods tend to give unsatisfactory adversarial transferability when the source and target models are from different types of DNN architectures (e.g. ResNet-18 and Swin Transformer). In this paper, we observe that the above phenomenon is induced by the output inconsistency problem. To alleviate this problem while effectively utilizing the existing DNN models, we propose a common knowledge learning (CKL) framework to learn better network weights to generate adversarial examples with better transferability, under fixed network architectures. Specifically, to reduce the model-specific features and obtain better output distributions, we construct a multi-teacher framework, where the knowledge is distilled from different teacher architectures into one student network. By considering that the gradient of input is usually utilized to generated adversarial examples, we impose constraints on the gradients between the student and teacher models, to further alleviate the output inconsistency problem and enhance the adversarial transferability. Extensive experiments demonstrate that our proposed work can significantly improve the adversarial transferability.'\nauthor:\n- Ruijie" -"---\nabstract: 'We study the long-term evolution of the Milky Way (MW) over cosmic time by modeling the star formation, cosmic rays, metallicity, stellar dynamics, outflows and inflows of the galactic system to obtain various insights into the galactic evolution. The mass accretion is modeled by the results of cosmological $N$-body simulations for the cold dark matter. We find that the star formation rate is about half the mass accretion rate of the disk, given the consistency between observed Galactic Diffuse X-ray Emissions (GDXEs) and possible conditions driving the Galactic wind. Our model simultaneously reproduces the quantities of star formation rate, cosmic rays, metals, and the rotation curve of the current MW. The most important predictions of the model are that there is an unidentified accretion flow with a possible number density of $\\sim10^{-2}~{\\rm cm^{-3}}$ and the part of the GDXEs originates from a hot, diffuse plasma which is formed by consuming about 10\u00a0% of supernova explosion energy. The latter is the science case for future X-ray missions; [*XRISM*]{}, [*Athena*]{}, and so on. 
We also discuss further implications of our results for planet formation and observations of external galaxies in terms of multimessenger astronomy.'\nauthor:\n- 'Jiro .'\nauthor:\n- 'Xiang Zhuang$^{1,2,3}$[^1]'\n- 'Qiang Zhang$^{1,2}$[^2]'\n- 'Bin Wu$^{2}$'\n- 'Keyan Ding$^{2}$'\n- |\n Yin Fang$^{1,2,3}$\\\n Huajun" -"---\nabstract: 'We study the convergence behavior of the celebrated temporal-difference (TD) learning algorithm. By looking at the algorithm through the lens of optimization, we first argue that TD can be viewed as an iterative optimization algorithm where the function to be minimized changes per iteration. By carefully investigating the divergence displayed by TD on a classical counterexample, we identify two forces that determine the convergent or divergent behavior of the algorithm. We next formalize our discovery in the linear TD setting with quadratic loss and prove that convergence of TD hinges on the interplay between these two forces. We extend this optimization perspective to prove convergence of TD in a much broader setting than just linear approximation and squared loss. Our results provide a theoretical explanation for the successful application of TD in reinforcement learning.'\nauthor:\n- |\n Kavosh Asadi[^1]\\\n Amazon\\\n Shoham Sabach$^*$\\\n Amazon & Technion\\\n Yao Liu\\\n Amazon\\\n Omer Gottesman\\\n Amazon\\\n Rasool Fakoor\\\n Amazon\\\nbibliography:\n- 'ref.bib'\ndate: March 2023\ntitle: 'TD Convergence: An Optimization Perspective'\n---\n\n=1\n\nIntroduction\n============\n\nTemporal-difference (TD) learning is arguably one of the most important algorithms in reinforcement learning (RL), and many RL algorithms are based on principles that TD embodies. TD
those to be excluded). Great importance is placed on promoting the retrieval of all relevant publications through careful attention to recall-oriented measures, and on demoting the retrieval of non-relevant publications through precision-oriented or cost metrics. This established practice, however, does not accurately reflect the reality of conducting a systematic review, because not all included publications have the same influence on the final outcome of the systematic review. More specifically, if an important publication gets excluded or included, this might significantly change the overall review outcome, while not including or excluding less influential studies may only have a limited impact. However, in terms of evaluation measures, all inclusion and exclusion decisions are treated equally and, therefore, failing to retrieve publications with little to no impact on the review outcome leads to the same decrease in recall as failing to retrieve crucial publications.\n\n We propose a new evaluation framework that takes into account the impact of the reported study on the overall systematic review" -"---\nabstract: 'Grating magneto-optical traps are an enabling quantum technology for portable metrological devices with ultracold atoms. *However*, beam diffraction efficiency and angle *are* affected by wavelength, creating a *single-optic design* challenge for laser cooling in two stages at two distinct wavelengths \u2013 as commonly used for loading e.g.\u00a0Sr or Yb atoms into optical lattice or tweezer clocks. Here, we optically *characterize* a wide variety of binary gratings at different wavelengths to find a simple empirical fit to experimental grating diffraction efficiency data in terms of dimensionless etch depth and period for various duty cycles. The model avoids complex 3D light-grating surface *calculations*, yet still yields results accurate to a few percent across a broad range of parameters. Gratings *optimized* for two (or more) wavelengths can now be designed in an informed manner suitable for a wide class of atomic species enabling advanced quantum technologies.'\naddress:\n- 'Department of Physics, SUPA, University of Strathclyde, Glasgow, G4 0NG, United Kingdom'\n- 'National Institute of Standards and Technology, 325 Broadway, Boulder, Colorado 80305, USA'\n- 'University of Colorado, Department of Physics, Boulder, Colorado 80309, USA'\nauthor:\n- 'Oliver S. Burrow, Robert J.\u00a0Fasano, Wesley Brand, Michael W.\u00a0Wright, Wenbo Li, Andrew
We homogeneously derive planet parameters using a joint photometry and radial velocity modeling framework, discuss the planets\u2019 possible bulk compositions, and comment on their prospects for atmospheric characterization.'\nauthor:\n- 'Joseph M. Akana Murphy'\n- 'Natalie M. Batalha'\n- Nicholas Scarsdale\n- Howard Isaacson\n- 'David R. Ciardi'\n- 'Erica J. Gonzales'\n- Steven Giacalone\n- 'Joseph D. Twicken'\n- Anne Dattilo\n- Tara Fetherolf\n- 'Ryan A. Rubenzahl'\n- 'Ian J. M. Crossfield'\n- 'Courtney D. Dressing'\n- Benjamin Fulton\n- 'Andrew" -"---\nabstract: |\n Motivated by the idea that lack of experience is a source of errors but that experience should reduce them, we model agents\u2019 behavior using a stochastic choice model, *leaving endogenous the accuracy of their choice*. In some games, increased accuracy is conducive to unstable best-response dynamics. We define the barrier to learning as the minimum level of noise which keeps the best-response dynamic stable. Using logit Quantal Response, this defines a *limitQR equilibrium*. We apply the concept to centipede, travelers\u2019 dilemma, and 11-20 money-request games and to first-price and all-pay auctions, and discuss the role of strategy restrictions in reducing or amplifying barriers to learning.\n\n **Keywords:** Learning, Bounded Rationality, Stochastic choice\n\n **JEL Classification Codes:** C72, D83, D90\nauthor:\n- 'Olivier Compte[^1][^2]'\nbibliography:\n- 'reflib.bib'\ndate: June 28th 2023\ntitle: Endogenous Barriers to Learning\n---\n\nIntroduction\n============\n\nPlaying a Nash equilibrium requires that each player comes to play a best response to others\u2019 behavior. One original motivation for studying QRE rather than exact equilibria is that learning to play a best response may be hard to accomplish, or require enough experience. As @mckelvey95 put it in their seminal work, \u201c*as a player gains experience playing a particular game" -"---\nabstract: 'Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an important role in the screening and prognosis assessment of high-risk breast cancer. The segmentation of cancerous regions is essential for the subsequent analysis of breast MRI. To alleviate the annotation effort required to train the segmentation networks, we propose a weakly-supervised strategy using extreme points as annotations for breast cancer segmentation. Without using any bells and whistles, our strategy focuses on fully exploiting the learning capability of the routine training procedure, i.e., the *train* - *fine-tune* - *retrain* process. The network first utilizes the pseudo-masks generated using the extreme points to *train* itself, by minimizing a contrastive loss, which encourages the network to learn more representative features for cancerous voxels. Then the trained network *fine-tune*s itself by using a similarity-aware propagation learning (SimPLe) strategy, which leverages feature similarity between unlabeled and positive voxels to propagate labels. Finally, the network *retrain*s itself by employing the pseudo-masks generated using the previously fine-tuned network. The proposed method is evaluated on our collected DCE-MRI dataset containing 206 patients with biopsy-proven breast cancers. Experimental results demonstrate our method effectively fine-tunes the network by using the SimPLe strategy, and achieves a mean Dice value of 81%."
-"---\nabstract: 'The excitations in the Kitaev spin liquid (KSL) can be described by Majorana fermions, which have characteristic field dependence of bulk gap and topological edge modes. In the high-field state of layered honeycomb magnet $\\alpha$-RuCl$_3$, experimental results supporting these Majorana features have been reported recently. However, there are challenges due to sample dependence and the impact of inevitable disorder on the KSL is poorly understood. Here we study how low-energy excitations are modified by introducing point defects in $\\alpha$-RuCl$_3$ using electron irradiation, which induces site vacancies and exchange randomness. High-resolution measurements of the temperature dependence of specific heat $C(T)$ under in-plane fields $H$ reveal that while the field-dependent Majorana gap is almost intact, additional low-energy states with $C/T=A(H)T$ are induced by introduced defects. At low temperatures, we obtain the data collapse of $C/T\\sim H^{-\\gamma}(T/H)$ expected for a disordered quantum spin system, but with an anomalously large exponent $\\gamma$. This leads us to find a new power-law scaling of the coefficient $A(H)$ with the field-sensitive Majorana gap. These results imply that the disorder induces low-energy linear Majorana excitations, which may be considered as a weak localization effect of Majorana fermions in the KSL.'\nauthor:\n- 'K.\u00a0Imamura'\n- 'Y." -"---\nabstract: |\n Although dominant for tabular data, ML libraries that train tree models over normalized databases (e.g., [`LightGBM`]{}, [`XGBoost`]{}) require the data to be denormalized as a single table, materialized, and exported. This process is not scalable, slow, and poses security risks. In-DB ML aims to train models within DBMSes to avoid data movement and provide data governance. Rather than modify a DBMS to support In-DB ML, is it possible to offer competitive tree training performance to specialized ML libraries...with only SQL?\n\n We present [`JoinBoost`]{}, a Python library that rewrites tree training algorithms over normalized databases into pure SQL. It is portable to any DBMS, offers performance competitive with specialized ML libraries, and scales with the underlying DBMS capabilities. [`JoinBoost`]{}extends prior work from both algorithmic and systems perspectives. Algorithmically, we support factorized gradient boosting, by updating the $Y$ variable to the residual in the [*non-materialized join result*]{}. Although this view update problem is generally ambiguous, we identify [*addition-to-multiplication preserving*]{}, the key property of variance semi-ring to support $rmse$, the most widely used criterion. System-wise, we identify residual updates as a performance bottleneck. Such overhead can be natively minimized on columnar DBMSes by creating a new column of residual values" -"---\nabstract: 'We consider fractional Sobolev spaces $H^\\theta$, $\\theta \\in (0,1)$, on 2D domains and $H^1$-conforming discretizations by globally continuous piecewise polynomials on a mesh consisting of shape-regular triangles and quadrilaterals. We prove that the norm obtained from interpolating between the discrete space equipped with the $L^2$-norm on the one hand and the $H^1$-norm on the other hand is equivalent to the corresponding continuous interpolation Sobolev norm, and the norm-equivalence constants are independent of meshsize and polynomial degree. 
This characterization of the Sobolev norm is then used to show an inverse inequality between $H^1$ and $H^{\theta}$.'\nauthor:\n- 'Michael Karkulik[^1], Jens Markus Melenk[^2], Alexander Rieder[^3],'\nbibliography:\n- 'literature.bib'\ntitle: 'On interpolation spaces of piecewise polynomials on mixed meshes[^4]'\n---\n\nIntroduction\n============\n\nFractional Sobolev spaces arise frequently in both analysis and numerical analysis of partial differential or integral equations. As examples, we mention the classical trace space $H^{1/2}(\partial\Omega)$ and its dual $H^{-1/2}(\partial\Omega)$ on the boundary of some domain $\Omega$, which are basic function spaces in the analysis of boundary integral equations, or the more general spaces $H^\theta(\Omega)$ for $\theta\in(0,1)$, which arise, e.g., in problems involving fractional diffusion processes. These spaces can be characterized as interpolation spaces between $L^2$ and $H^1$, e.g.," -"---\nabstract: |\n As machine learning (ML) based systems are adopted in domains such as law enforcement, criminal justice, finance, hiring and admissions, ensuring the fairness of ML aided decision-making is becoming increasingly important. In this paper, we focus on the problem of fair classification, and introduce a novel min-max F-divergence regularization framework for learning fair classification models while preserving high accuracy.\n\n Our framework consists of two trainable networks, namely, a classifier network and a bias/fairness estimator network, where the fairness is measured using the statistical notion of F-divergence. We show that F-divergence measures possess convexity and differentiability properties, and their variational representation makes them widely applicable in practical gradient based training methods. The proposed framework can be readily adapted to multiple sensitive attributes and for high dimensional datasets. We study the F-divergence based training paradigm for two types of group fairness constraints, namely, demographic parity and equalized odds. We present a comprehensive set of experiments for several real-world data sets arising in multiple domains (including COMPAS, Law Admissions, Adult Income, and CelebA datasets).\n\n To quantify the fairness-accuracy tradeoff, we introduce the notion of fairness-accuracy receiver operating characteristic (FA-ROC) and a corresponding *low-bias* FA-ROC, which we argue is an appropriate" -"---\nabstract: 'Maximal Extractable Value (MEV) has become a critical issue for blockchain ecosystems, as it enables validators or block proposers to extract value by ordering, including or censoring users’ transactions. This paper aims to present a formal approach for determining the appropriate compensation for users whose transactions are executed in bundles, as opposed to individually. We explore the impact of MEV on users, discuss the Shapley value as a solution for fair compensation, and delve into the mechanisms of MEV rebates and auctions as a means to undermine the power of the block producer.'\nauthor:\n- Bruno Mazorra Roig\n- Nicolás Della Penna\ntitle: 'Towards Optimal Prior-Free Permissionless Rebate Mechanisms, with applications to Automated Market Makers & Combinatorial Orderflow Auctions.'\n---\n\nIntroduction\n============\n\nIn the design of decentralized permissionless systems, it can be desirable to rebate users part of the value they create for the system. 
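(Aside: the interpolation spaces in the mixed-mesh abstract above are usually realized via the K-method; for reference, one common normalization of the definitions, which may differ from the paper's by constants:)

```latex
K(t,u) := \inf_{v \in H^1} \left( \|u - v\|_{L^2}^2 + t^2 \|v\|_{H^1}^2 \right)^{1/2},
\qquad
\|u\|_{[L^2,H^1]_\theta} := \left( \int_0^\infty t^{-2\theta} K(t,u)^2 \, \frac{dt}{t} \right)^{1/2},
\quad \theta \in (0,1).
```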
Applying recent advances in the understanding of the limits of permissionless false-name proof (aka Sybil resistant) mechanisms [@mazorra2023cost], we study the fundamental limits on what optimal rebates can be in such settings. We make concrete contributions to the understanding of rebate applications related to (1) automated market maker liquidity providers who have a" -"---\nabstract: 'Inspired by recent observations for elliptic curves, we calculate the murmuration density for Dirichlet characters, normalized by their Gauss sums, over geometric and short intervals.'\naddress:\n- 'Department of Mathematics, University of Connecticut, Storrs, CT 06269, U.S.A.'\n- 'Teesside University, Middlesbrough, U.K.'\n- 'Department of Mathematics, University of Connecticut, Storrs, CT 06269, U.S.A.'\nauthor:\n- 'Kyu-Hwan Lee$^{\star}$'\n- Thomas Oliver\n- Alexey Pozdnyakov\ntitle: Murmurations of Dirichlet Characters\n---\n\n[^1]\n\nIntroduction {#s:intro}\n============\n\nMurmurations of elliptic curves were discovered in [@HLOP], in which a striking oscillation in the average value of $a_p(E)$ was observed. In the original work, the average was taken over elliptic curves $E/\mathbb{Q}$ with conductor in certain intervals $I\subset\mathbb{R}$. Motivated by the Modularity Theorem, one might expect a similar phenomenon for Fourier coefficients $a_p(f)$ averaged over newforms $f$ with rational coefficients and level $N$ in a suitable interval $I$. Such expectations will be validated in [@HLOPS].\n\nTwo important ideas subsequently emerged based on contributions of J. Ellenberg, A. Sutherland, and J. Bober. Firstly, on Ellenberg’s suggestion, Sutherland pursued the idea that it is interesting to study murmurations attached not only to newforms with rational coefficients, but, moreover, Galois orbits of those with coefficients in arbitrary" -"---\nabstract: 'In this letter we evaluate whether the gravitational wave background recently observed by a number of different pulsar timing arrays could be due to merging *primordial* supermassive black hole binaries. We find that for homogeneously distributed primordial black holes this possibility is inconsistent with strong cosmological and astrophysical constraints on their total abundance. If the distribution exhibits some clustering, however, the merger rate will in general be enhanced, opening the window for a consistent interpretation of the PTA data in terms of merging primordial black holes.'\nauthor:\n- Paul Frederik Depta\n- 'Kai Schmidt-Hoberg'\n- Pedro Schwaller\n- Carlo Tasillo\nbibliography:\n- 'bibliography.bib'\ndate: 'August 16, 2023'\ntitle: 'Do pulsar timing arrays observe merging primordial black holes?'\n---\n\n#### Introduction.— {#introduction. .unnumbered}\n\nThe discovery of gravitational waves (GWs) with frequencies $\mathcal{O}(100)\,\mathrm{Hz}$ from binary black hole mergers in 2015\u00a0[@LIGOScientific:2016aoc] has opened new possibilities for the exploration of our Universe. Depending on the astrophysical or cosmological source, GW signals may exist over a wide range of frequencies and complementary experimental approaches aim to explore much of the available parameter space\u00a0[@LISA:2017pwj; @NANOGrav:2020bcs]. Pulsar timing arrays (PTAs) in particular are sensitive at frequencies in the nHz range. In 2020 NANOGrav" -"---\nabstract: 'In this paper, we develop a three-dimensional multiple-relaxation-time lattice Boltzmann method (MRT-LBM) based on a set of non-orthogonal basis vectors. 
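(Aside: the averaging behind the murmuration plots referenced above can be reproduced in a few lines. A minimal sketch: brute-force point counting gives $a_p(E)$, which is then averaged over a naive family swept by Weierstrass coefficients; the papers instead order curves by conductor, so this only illustrates the type of statistic involved.)

```python
import numpy as np

def a_p(a, b, p):
    """Trace of Frobenius a_p = p + 1 - #E(F_p) for E: y^2 = x^3 + a*x + b,
    computed by brute-force point counting (p an odd prime)."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        if rhs == 0:
            count += 1                        # single solution y = 0
        elif pow(rhs, (p - 1) // 2, p) == 1:  # Euler's criterion: rhs is a square
            count += 2                        # two square roots
    return p + 1 - count

primes = [p for p in range(5, 200)
          if all(p % q for q in range(2, int(p ** 0.5) + 1))]
# Naive family; curves that become singular mod p are not filtered here.
family = [(a, b) for a in range(1, 6) for b in range(1, 6)
          if 4 * a ** 3 + 27 * b ** 2 != 0]
means = [np.mean([a_p(a, b, p) for a, b in family]) for p in primes]
print(list(zip(primes, means))[:5])
```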
Compared with the classical MRT-LBM based on a set of orthogonal basis vectors, the present non-orthogonal MRT-LBM simplifies the transformation between the discrete velocity space and the moment space, and exhibits better portability across different lattices. The proposed method is then extended to multiphase flows at large density ratio with tunable surface tension, and its numerical stability and accuracy are well demonstrated by some benchmark cases. Using the proposed method, a practical case of a fuel droplet impacting on a dry surface at high Reynolds and Weber numbers is simulated and the evolution of the spreading film diameter agrees well with the experimental data. Furthermore, another realistic case of a droplet impacting on a super-hydrophobic wall with a cylindrical obstacle is reproduced, which confirms the experimental finding of Liu *et al.* \[“Symmetry breaking in drop bouncing on curved surfaces," Nature communications 6, 10034 (2015)\] that the contact time is minimized when the cylinder radius is comparable with the droplet radius.'\nauthor:\n- Linlin Fei\n- Jingyu Du\n- 'Kai H. Luo'\n- Sauro Succi\n- Marco Lauricella\n-" -"---\nabstract: 'Several pulsar timing array collaborations recently reported evidence of a stochastic gravitational wave background (SGWB) at nHz frequencies. Whilst the SGWB could originate from the merger of supermassive black holes, it could be a signature of new physics near the 100 MeV scale. Supercooled first-order phase transitions (FOPTs) that end at the 100 MeV scale are intriguing explanations, because they could connect the nHz signal to new physics at the electroweak scale or beyond. Here, however, we provide a clear demonstration that it is not simple to create a nHz signal from a supercooled phase transition, due to two crucial issues that should be checked in any proposed supercooled explanations. As an example, we use a model based on non-linearly realized electroweak symmetry that has been cited as evidence for a supercooled explanation. First, we show that a FOPT cannot complete for the required transition temperature of around 100 MeV. Such supercooling implies a period of vacuum domination that hinders bubble percolation and transition completion. Second, we show that even if completion is not required or if this constraint is evaded, the Universe typically reheats to the scale of any physics driving the FOPT. 
The hierarchy between the transition and reheating temperature makes" -"---\nbibliography:\n- 'romanwp.bib'\ntitle: Field Locations\n---\n\n**Roman CCS White Paper**\n\n[Considerations for Selecting Fields for the Roman High-latitude\\\nTime Domain Core Community Survey]{}\n\n**Roman Core Community Survey:** High Latitude Time Domain Survey\n\n**Scientific Categories:** stellar physics and stellar types; stellar populations and the interstellar medium; large scale structure of the universe\n\n**Additional scientific keywords:** Supernovae, Cosmology, Dark energy\n\n**Submitting Author:**\\\nBenjamin Rose, Baylor University (Ben\_Rose@baylor.edu)\\\n**List of contributing authors:**\\\nGreg Aldering, Lawrence Berkeley National Lab (galdering@lbl.gov)\\\nRebekah Hounsell, University of Maryland Baltimore County, NASA Goddard Space Flight Center (rebekah.a.hounsell@nasa.gov)\\\nBhavin Joshi, Johns Hopkins University (bjoshi5@jhu.edu)\\\nDavid Rubin, University of Hawaii (drubin@hawaii.edu)\\\nDan Scolnic, Duke University (dan.scolnic@duke.edu)\\\nSaul Perlmutter, University of California, Berkeley (saul@lbl.gov)\\\nSusana Deustua, NIST (susana.deustua@nist.gov)\\\nMasao Sako, University of Pennsylvania (masao@sas.upenn.edu)\n\n**Abstract:**\\\nIn this white paper, we review five top considerations for selecting locations of the fields of the Roman High-latitude Time Domain Survey. Based on these considerations, we recommend Akari Deep Field South (ADFS)/Euclid Deep Field South (EDFS) in the Southern Hemisphere, as it avoids bright stars, has minimal Milky Way dust, is in the Roman continuous viewing zone, overlaps with multiple past and future surveys, and has minimal zodiacal background variation. In the North, Extended Groth" -"---\nabstract: 'We propose Medial Atom Ray Fields (MARFs), a novel neural object representation that enables accurate differentiable surface rendering with a single network evaluation per camera ray. Existing neural ray fields struggle with multi-view consistency and representing surface discontinuities. MARFs address both using a medial shape representation, a dual representation of solid geometry that yields cheap geometrically grounded surface normals, in turn enabling computing analytical curvature despite the network having no second derivative. MARFs map a camera ray to multiple medial intersection candidates, subject to ray-sphere intersection testing. We illustrate how the learned medial shape quantities apply to sub-surface scattering and part segmentation, and aid in representing a space of articulated shapes. Able to learn a space of shape priors, MARFs may prove useful for tasks like shape retrieval and shape completion, among others. Code and data can be found at [github.com/pbsds/MARF](https://github.com/pbsds/MARF).'\nauthor:\n- Peder Bergebakken Sundt\n- Theoharis Theoharis\ntitle: 'MARF: The Medial Atom Ray Field Object Representation'\n---\n\nLearning efficient and accurate ways to represent 3D geometry is valuable to applications such as 3D shape analysis, computer graphics, computer vision, and robotics. The recent discovery of *neural fields*, also known as coordinate-based networks or implicit neural representations, has brought" -"---\nabstract: 'Spiking neural networks (SNNs) have ultra-low energy consumption and high biological plausibility due to their binary and bio-driven nature compared with artificial neural networks (ANNs). 
While previous research has primarily focused on enhancing the performance of SNNs in classification tasks, the generative potential of SNNs remains relatively unexplored. In our paper, we put forward Spiking Denoising Diffusion Probabilistic Models (SDDPM), a new class of SNN-based generative models that achieve high sample quality. To fully exploit the energy efficiency of SNNs, we propose a purely Spiking U-Net architecture, which achieves comparable performance to its ANN counterpart using only 4 time steps, resulting in significantly reduced energy consumption. Extensive experimental results reveal that our approach achieves state-of-the-art performance on generative tasks and substantially outperforms other SNN-based generative models, achieving up to $12\times$ and $6\times$ improvement on the CIFAR-10 and the CelebA datasets, respectively. Moreover, we propose a threshold-guided strategy that can further improve performance by 2.69% in a training-free manner. SDDPM represents a significant advancement in the field of SNN generation, injecting new perspectives and potential avenues of exploration. Our code is available at .'\nauthor:\n- |\n Jiahang Cao^1^[^1] \u00a0\u00a0 Ziqing Wang^1,2^\u00a0\u00a0 Hanzhong Guo^3^\u00a0\u00a0 Hao Cheng^1^\u00a0 Qiang Zhang^1^" -"---\nauthor:\n- 'Luke\u00a0M.\u00a0Kearney'\n- 'Richard\u00a0F.\u00a0Katz'\n- 'Christopher\u00a0W.\u00a0MacMinn'\n- Chris\u00a0Kirkham\n- Joe\u00a0Cartwright\ntitle: |\n Episodic fluid venting from sedimentary basins fuelled by\\\n pressurised mudstones\n---\n\n**Subsurface sandstone reservoirs sealed by overlying, low-permeability layers provide capacity for long-term sequestration of anthropogenic waste. Leakage can occur if reservoir pressures rise sufficiently to fracture the seal. Such pressures can be generated within the reservoir by vigorous injection of waste or, over thousands of years, by natural processes. In either case, the precise role of intercalated mudstones in the long-term evolution of reservoir pressure remains unclear; these layers have variously been viewed as seals, as pressure sinks or as pressure sources. Here, we use the geological record of episodic fluid venting in the Levant Basin to provide striking evidence for the pressure-source hypothesis. We use a Bayesian framework to combine recently published venting data, which record critical subsurface pressures since $\sim$2\u00a0Ma, with a stochastic model of pressure evolution to infer a pressure-recharge rate of $\sim$30\u00a0MPa/Myr. To explain this large rate, we quantify and compare a range of candidate mechanisms. We find that poroelastic pressure diffusion from mudstones provides the most plausible explanation for these" -"---\nabstract: 'This paper presents Diffusion Model for Scene Text Recognition (DiffusionSTR), an end-to-end text recognition framework using diffusion models for recognizing text in the wild. While existing studies have viewed the scene text recognition task as an image-to-text transformation, we rethink it as a text-to-text transformation, conditioned on images, within a diffusion model. We show for the first time that the diffusion model can be applied to text recognition. 
Furthermore, experimental results on publicly available datasets show that the proposed method achieves competitive accuracy compared to state-of-the-art methods.'\naddress: |\n FA Research, Fast Accounting Co., Ltd.\\\n [fujitake@fastaccounting.co.jp]{} \nbibliography:\n- 'article.bib'\ntitle: 'DiffusionSTR: Diffusion Model for Scene Text Recognition'\n---\n\nScene text recognition, Document analysis, Diffusion model, Deep learning, Machine learning\n\nIntroduction {#sec:intro}\n============\n\nText recognition in natural images is one of the active areas in computer vision and a fundamental and vital task in real-world applications such as document analysis and automated driving\u00a0[@fujitake2021tcbam; @fujitake2023a3s]. However, scene text recognition is challenging because it requires recognizing text in various fonts, colors, and shapes. Many methods have been proposed to address this challenge. Early research proposed methods that utilize information from images using Convolutional Neural Networks (CNNs) and recognize text sequences using" -"---\nabstract: 'The volume function $V(t)$ of a compact set $S\subset{\mathbb R}^d$ is just the Lebesgue measure of the set of points within a distance to $S$ not larger than $t$. According to some classical results in geometric measure theory, the volume function turns out to be a polynomial, at least in a finite interval, under a quite intuitive, easy to interpret, sufficient condition (called “positive reach”) which can be seen as an extension of the notion of convexity. However, many other simple sets, not fulfilling the positive reach condition, also have a polynomial volume function. To our knowledge, there is no general, simple geometric description of such sets. Still, the polynomial character of $V(t)$ has some relevant consequences since the polynomial coefficients carry some useful geometric information. In particular, the constant term is the volume of $S$ and the first order coefficient is the boundary measure (in Minkowski’s sense). This paper is focused on sets whose volume function is polynomial on some interval starting at zero, whose length (that we call “polynomial reach”) might be unknown. Our main goal is to approximate such polynomial reach by statistical means, using only a large enough random sample of points inside $S$." -"---\nauthor:\n- 'Wei Zhang$^{*, \ddagger}$'\n- 'Christof Schütte$^{*,\ddagger}$'\nbibliography:\n- 'reference.bib'\ntitle: 'Understanding recent deep-learning techniques for identifying collective variables of molecular dynamics'\n---\n\nmolecular dynamics, collective variable identification, eigenfunction, autoencoder, variational characterisation, deep learning\n\nIntroduction {#sec-intro}\n============\n\nMolecular dynamics (MD) simulation is a mature computational technique for the study of biomolecular systems. It has proven valuable in a wide range of applications, e.g.\u00a0understanding functional mechanisms of proteins and discovering new drugs\u00a0[@Durrant2011-md-drug; @HOLLINGSWORTH20181129-md-for-all]. 
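(Aside: the polynomial behavior described in the volume-function abstract above is easy to check numerically. A minimal Monte Carlo sketch for $S$ the unit disk in the plane, where $V(t)=\pi(1+t)^2$ exactly, so the fitted constant term approximates the area of $S$ and the linear coefficient its boundary measure.)

```python
import numpy as np

# Monte Carlo estimate of V(t) = |{x : dist(x, S) <= t}| for S the unit disk.
rng = np.random.default_rng(1)
ts = np.linspace(0.0, 1.0, 11)
box = 4.0  # sample the box [-2, 2]^2, which contains every t-neighborhood here
pts = rng.uniform(-2.0, 2.0, size=(200_000, 2))
dist_to_S = np.maximum(np.linalg.norm(pts, axis=1) - 1.0, 0.0)
V = [(dist_to_S <= t).mean() * box ** 2 for t in ts]
coeffs = np.polynomial.polynomial.polyfit(ts, V, deg=2)
print(coeffs)  # approx [pi, 2*pi, pi]: [area, perimeter, curvature term]
```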
However, the capability of direct (all-atom) MD simulations is often limited, due to the disparity between the tiny step-sizes that the simulations have to adopt in order to ensure numerical stability and the large timescales on which the functionally relevant conformational changes of biomolecules, such as protein folding, typically occur.\n\nOne general approach to overcome the aforementioned challenge in MD simulations is by utilizing the fact that in many cases the dynamics of a high-dimensional metastable molecular system can be characterised by a few features, i.e.\u00a0collective variables (CVs) of the system. Indeed, many enhanced sampling methods (see\u00a0[@enhanced-sampling-for-md-review] for a review) and approaches for building surrogate models\u00a0[@perspective-noid-cg; @effective_dynamics; @effective_dyn_2017; @non-markovian-modeling-pfolding-netz] rely on knowing a set of CVs of the underlying molecular" -"---\nabstract: '[The non-radiative electron-relaxation dynamics in the ${\mbox{C$_{60}$}}$ molecule is studied after selective initial photoexcitations. The methodology combines nonadiabatic molecular simulation with time-dependent density functional theory (DFT) and a semi-classical surface hopping approach. Results of treating the DFT exchange-correlation (xc) interaction by the non-empirical Perdew-Burke-Ernzerhof (PBE), hybrid PBE0, and hybrid Becke 3-parameter Lee–Yang–Parr (B3LYP) functional are compared. Even though some differences in the details are found, all three functionals produce qualitatively similar unoccupied band structures in the ground state. The model-dependent differences in the ultrafast population dynamics, including the occurrences of transient entrapment of population, are studied systematically. The trend of the results demonstrates a universal dependence on the structure of the unoccupied band, offering a spectroscopic route to probe this structure. The results can be verified, and the best xc model for quantitative accuracy determined, by comparing with ultrafast transient absorption or time-resolved photoelectron spectroscopy measurements. From the computational standpoint, the study facilitates method optimization to simulate nonadiabatic relaxation dynamics in technologically important fullerene derivatives.]{}'\nauthor:\n- Esam Ali\n- 'Mohamed El-Amine Madjet'\n- Ruma De\n- Thomas Frauenheim\n- 'Himadri S. Chakraborty'\ntitle: 'Ultrafast nonadiabatic electron dynamics in photoexcited ${\mbox{C$_{60}$}}$: A comparative study among DFT exchange-correlation" -"---\nauthor:\n- 'M.P. Garcia del Moral'\n- 'P. León,[!!]{}'\n- 'A. Restuccia'\ntitle: 'Worldsheet description of a *massive* type IIA superstring in 10D'\n---\n\nIntroduction\n============\n\nThe superstring theory in 10D and its nonperturbative description, M-theory in 11D, are unification theories of all the fundamental interactions in a single framework. The low energy limit of M-theory on a flat space corresponds to the maximal supergravity in eleven dimensions. From the 11D supergravity formulation, it is possible to obtain, through a Kaluza Klein reduction, the maximal IIA supergravity in 10D, which is the low energy limit of the $N=2$ type IIA superstring. A Scherk-Schwarz reduction of the 11D supergravity leads to a gauged deformation of the type IIA supergravity [@Howe]. On the other hand, in 10D, the type IIB sector is obtained through a T-duality transformation. 
In [@Romans], it was found that there exists a massive type IIA supergravity in 10D, known as Romans supergravity, whose origin in M-theory was not clear. Brane solutions associated with Type IIA massive supergravity were found in [@Janssen]. In [@mpgm13] an M-theory origin proposal was made. Its uplift to eleven-dimensional supergravity had been previously established in [@Bergshoeff6]. There, the authors found that the" -"---\nabstract: 'A novel relative localization approach for cooperative guidance of a micro-scale UAV fusing VIO with LiDAR is proposed in this paper. LiDAR-based localization is accurate and robust to challenging environmental conditions, but 3D LiDARs are relatively heavy and require large platforms. Visual cameras are cheap and lightweight. However, visual-based self-localization methods exhibit lower accuracy and can suffer from significant drift with respect to the global reference frame. We focus on cooperative navigation in a heterogeneous team of a primary LiDAR-equipped UAV and a secondary camera-equipped UAV. We propose a novel cooperative approach combining LiDAR relative localization data with VIO output on board the primary UAV to obtain an accurate pose of the secondary UAV. The pose estimate is used to guide the secondary UAV along trajectories defined in the primary UAV reference frame. The experimental evaluation has shown the superior accuracy of our method to the raw VIO output and demonstrated its capability to guide the secondary UAV along desired trajectories while mitigating drift.'\naddress:\n- 'The authors are with the Multi-robot Systems Group, Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Czech Republic, [{vaclav.pritzl, matous.vrba, petr.stepan, martin.saska}@fel.cvut.cz]{}.'\n- Corresponding author\nauthor:\n- Václav Pritzl\n- Matouš Vrba\n- Petr Štěpán\n- Martin Saska\nbibliography:\n- 'main.bib'" -"---\nabstract: 'Voltage control generally requires accurate information about the grid’s topology in order to guarantee network stability. However, accurate topology identification is challenging for existing methods, especially as the grid is subject to increasingly frequent reconfiguration due to the adoption of renewable energy. In this work, we combine a nested convex body chasing algorithm with a robust predictive controller to achieve provably finite-time convergence to safe voltage limits in the online setting where there is uncertainty in both the network topology as well as load and generation variations. In an online fashion, our algorithm narrows down the set of possible grid models that are consistent with observations and adjusts reactive power generation accordingly to keep voltages within desired safety limits. Our approach can also incorporate existing partial knowledge of the network to improve voltage control performance. We demonstrate the effectiveness of our approach in a case study on a Southern California Edison 56-bus distribution system. 
Our experiments show that in practical settings, the controller is indeed able to narrow the set of consistent topologies quickly enough to make control decisions that ensure stability in both linearized and realistic non-linear models of the distribution grid.'\nauthor:\n- |\n Christopher Yeh" -"---\nauthor:\n- 'Massimo Bertolini, Matteo Longo, and Rodolfo Venerucci'\nbibliography:\n- 'BLV.bib'\ntitle: The anticyclotomic main conjectures for elliptic curves\n---\n\nIntroduction\n============\n\nLet $E/{\mathbf{Q}}$ be a modular elliptic curve of conductor $N$ and let $f$ be the cuspidal eigenform on $\Gamma_0(N)$ associated to $E$ by the modularity theorem. Denote by $K_\infty$ the anticyclotomic ${\mathbf{Z}}_p$-extension of an imaginary quadratic field $K$. The goal of this article is to obtain a proof of the Main conjectures of Iwasawa theory for $E$ over $K_\infty$, both in the case where the rational prime $p$ is [*good ordinary*]{} or [*supersingular*]{} for $E$.\n\nThe anticyclotomic setting displays a well-known dichotomy, depending on whether the generic sign of the functional equation of the complex $L$-function of $E/K$ twisted by finite order characters of the Galois group of $K_\infty/K$ is $+1$ or $-1$. For reasons which will be explained later we call the former case [*definite*]{} and the latter case [*indefinite*]{}.\n\nAssume first that $p$ is a [*good ordinary*]{} prime for $E$. In the [*indefinite*]{} case, a norm-compatible sequence of Heegner points arising from a Shimura curve parametrisation is defined over the finite layers of $K_\infty/K$. Its position in the compact $p$-adic Selmer group of $E/K_\infty$" -"---\nabstract: 'We present the Chinese Elementary School Math Word Problems (CMATH) dataset, comprising 1.7k elementary school-level math word problems with detailed annotations, sourced from actual Chinese workbooks and exams. This dataset aims to provide a benchmark tool for assessing the following question: to what grade level of elementary school math do the abilities of popular large language models (LLMs) correspond? We evaluate a variety of popular LLMs, including both commercial and open-source options, and discover that only GPT-4 achieves success (accuracy $\geq$ 60%) across all six elementary school grades, while other models falter at different grade levels. Furthermore, we assess the robustness of several top-performing LLMs by augmenting the original problems in the CMATH dataset with distracting information. Our findings reveal that GPT-4 is able to maintain robustness, while other models fail. We anticipate that our study will expose limitations in LLMs' arithmetic and reasoning capabilities, and promote their ongoing development and advancement.'\nauthor:\n- |\n Tianwen Wei, Jian Luan, Wei Liu, Shuang Dong, Bin Wang\\\n Xiaomi AI Lab\\\n `{weitianwen,luanjian,liuwei40,dongshuang1,wangbin11}@xiaomi.com`\nbibliography:\n- 'nlp.bib'\n- 'llm.bib'\ntitle: 'CMATH: Can Your Language Model Pass Chinese Elementary School Math Test?'\n---\n\nIntroduction\n============\n\nRecently, the field of artificial intelligence has witnessed groundbreaking advancements, particularly in
We use the representation to show continuity of the Hilbert transform on this class of distributions and use it to show that solutions to a Schwarz-type boundary value problem can be constructed in the associated Hardy classes.'\naddress: |\n Department of Mathematical Sciences\\\n University of Arkansas\\\n Fayetteville, Arkansas\nauthor:\n- 'William L. Blair'\nbibliography:\n- 'refs.bib'\ntitle: 'An atomic representation for Hardy classes of solutions to nonhomogeneous Cauchy-Riemann equations'\n---\n\nIntroduction\n============\n\nIn this paper, we work to illustrate connections between Hardy classes of functions on the unit disk which satisfy certain nonhomogeneous Cauchy-Riemann equations and the Hardy spaces of distributions on the unit circle.\n\nIn [@GHJH2], G. Hoepfner and J. Hounie prove that functions in the holomorphic Hardy spaces of functions on the unit disk have boundary values in the sense of distributions and these distributions can be represented by an atomic decomposition. While Coifman proved the analogous result" -"---\nabstract: 'We provide a comprehensive theory of magnetic phases in two-dimensional triangulene crystals, using both Hubbard model and density functional theory (DFT) calculations. We consider centrosymmetric and non-centrosymmetric triangulene crystals. In all cases, DFT and mean-field Hubbard model predict the emergence of broken-symmetry antiferromagnetic (ferrimagnetic) phases for the centrosymmetric (non-centrosymmetric) crystals. This includes the special case of the \[4,4\]triangulene crystal, whose non-interacting energy bands feature a gap with flat valence and conduction bands. We show how the lack of contrast between the local density of states of these bands, recently measured via scanning tunneling spectroscopy, is a natural consequence of a broken-symmetry Néel state that blocks intermolecular hybridization. Using the random phase approximation, we also compute the spin wave spectrum of these crystals, including the recently synthesized \[4,4\]triangulene crystal. The results are in excellent agreement with the predictions of a Heisenberg spin model derived from multi-configuration calculations for the unit cell. We conclude that experimental results are compatible with an antiferromagnetically ordered phase where each triangulene retains the spin predicted for the isolated species.'\nauthor:\n- 'G. Catarina$^{1,2}$, J. C. G. Henriques$^{1,3}$, A. Molina-Sánchez$^{4}$, A. T. Costa$^1$, J. Fernández-Rossier$^{1,}$'\nbibliography:\n- 'bibshort.bib'\ntitle: ' Broken-symmetry magnetic phases in two-dimensional triangulene" -"---\nabstract: 'Enhancing the properties and performance of aluminium alloys by controlling their solidification is pivotal in the automotive and aerospace industries. The fundamental role of the structure-diffusion relationship is investigated for Al-Mg-Si liquid alloys taken as a prototype of Al-6xxx. For this purpose, first principles-based molecular dynamics simulations were performed for various Si and Mg contents for Al-rich compositions, including the binary alloy counterparts. Results indicate that Mg and/or Si in alloys create a more compact ordering around Al than in pure Al, lowering diffusion. Mg promotes icosahedral short-range order, while Si displays a preference towards cubic local ordering, impacting diffusion based on their respective content. 
It suggests a mechanism whereby an increase in Mg content generally lowers the diffusion of each species, whereas an increase in Si content enhances their diffusion, providing insights for future alloy design.'\nauthor:\n- Alaa Fahs\n- Philippe Jarry\n- No\u00ebl Jakse\nbibliography:\n- 'References.bib'\ntitle: 'Structure-Dynamics Relationship in Al-Mg-Si Liquid Alloys'\n---\n\nIntroduction\n============\n\nAluminium alloys represent one of the main categories of structural metallic materials widely utilized in automotive construction and the aerospace industries [@Holmestad2012; @Zandbergen1997; @Ravi2004; @Froseth2003; @Jarry2018]. They are attractive due to their high strength-to-density ratio, functional extrudability, age" -"---\nabstract: 'We describe a compression-aware method to compute all-vs-all maximal exact matches (MEM) among strings of a repetitive collection $\\mathcal{T}$. The key concept in our work is the construction of a fully-balanced grammar $\\mathcal{G}$ from $\\mathcal{T}$ that meets a property that we call *fix-free*: the expansions of the nonterminals that have the same height in the parse tree form a fix-free set (i.e., prefix-free and suffix-free). The fix-free property allows us to compute the MEMs of $\\mathcal{T}$ incrementally over $\\mathcal{G}$ using a standard suffix-tree-based MEM algorithm, which runs on a subset of grammar rules at a time and does not decompress nonterminals. By modifying the locally-consistent grammar of Christiansen et al.\u00a0[@christiansen2020optimal], we show how we can build $\\mathcal{G}$ from $\\mathcal{T}$ in linear time and space. We also demonstrate that our MEM algorithm runs on top of $\\mathcal{G}$ in $O(G +occ)$ time and uses $O(\\log G(G+occ))$ bits, where $G$ is the grammar size, and $occ$ is the number of MEMs in $\\mathcal{T}$. In the conclusions, we discuss how our idea can be modified to implement approximate pattern matching in compressed space.'\nauthor:\n- 'Diego D\u00edaz-Dom\u00ednguez'\n- Leena Salmela\nbibliography:\n- 'references.bib'\ntitle: 'Computing all-vs-all MEMs in grammar-compressed text[^1]'\n---" -"---\nabstract: 'For a given graph $G$, a depth-first search (DFS) tree $T$ of $G$ is an $r$-rooted spanning tree such that every edge of $G$ is either an edge of $T$ or is between a *descendant* and an *ancestor* in $T$. A graph $G$ together with a DFS tree is called a *lineal topology* $\\mathcal{T} = (G, r, T)$. Sam et al. (2023) initiated study of the parameterized complexity of the Min-LLT and Max-LLT problems which ask, given a graph $G$ and an integer $k\\geq 0$, whether $G$ has a DFS tree with at most $k$ and at least $k$ leaves, respectively. Particularly, they showed that for the dual parameterization, where the tasks are to find DFS trees with at least $n-k$ and at most $n-k$ leaves, respectively, these problems are fixed-parameter tractable when parameterized by $k$. However, the proofs were based on Courcelle\u2019s theorem, thereby making the running times a tower of exponentials. We prove that both problems admit polynomial kernels with ${\\mathcal{O}}(k^3)$ vertices. In particular, this implies FPT algorithms running in $k^{{\\mathcal{O}}(k)}\\cdot n^{O(1)}$ time. We achieve these results by making use of a ${\\mathcal{O}}(k)$-sized vertex cover structure associated with each problem. 
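(Aside: for intuition about the objects in the all-vs-all MEM abstract above, a quadratic-time baseline that enumerates maximal exact matches between two strings. The paper's grammar-compressed algorithm produces the same output in $O(G+occ)$ time; this naive version just makes the left- and right-maximality conditions explicit.)

```python
def mems(s, t, min_len=3):
    """All maximal exact matches between s and t (quadratic baseline).

    A triple (i, j, k) is reported when s[i:i+k] == t[j:j+k] and the match
    can be extended neither to the left nor to the right."""
    out = []
    for i in range(len(s)):
        for j in range(len(t)):
            if s[i] != t[j]:
                continue
            if i > 0 and j > 0 and s[i - 1] == t[j - 1]:
                continue  # not left-maximal: extendable to the left
            k = 0
            while i + k < len(s) and j + k < len(t) and s[i + k] == t[j + k]:
                k += 1  # extend to the right as far as possible
            if k >= min_len:
                out.append((i, j, k))
    return out

print(mems("GATTACA", "TTACAGA"))  # [(2, 0, 5)] -- the shared "TTACA"
```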
This also allows us" -"---\nbibliography:\n- 'romanwp.bib'\n---\n\nRoman CCS White Paper\n\nNew Compact Object Binary Populations with Precision Astrometry\n\n**Roman Core Community Survey:** Galactic Bulge Time Domain Survey (GBTDS)\n\n**Scientific Categories:** Stellar physics and stellar types\n\n**Additional scientific keywords:** Astrometry, Compact Objects, Black holes, Neutron stars, Binary stars\n\n**Submitting Author:**\n\nName: P. Gandhi Affiliation: University of Southampton, Highfield SO171BJ, UK Email: poshak.gandhi@soton.ac.uk\n\n**Contributing authors:** C.Dashwood Brown (Univ. Southampton)\\\nY.Zhao (Univ. Southampton)\\\nK.El-Badry (Harvard Univ. CfA)\\\nT.J.Maccarone (Texas Tech)\\\nC.Knigge (Univ. Southampton)\\\nJ.Anderson (STScI)\\\nM.Middleton (Univ. Southampton)\\\nJ.C.A.Miller-Jones (ICRAR, Curtin University)\n\n**Abstract:** Compact object binaries (a black hole or a neutron star orbiting a non-degenerate stellar companion) are key to our understanding of late massive star evolution, in addition to being some of the best probes of extreme gravity and accretion physics. Gaia\u00a0has opened the door to astrometric studies of these systems, enabling geometric distance measurements, kinematic estimation, and the ability to find new, previously unknown systems through measurement of binary orbital elements. Particularly puzzling are newly found massive black holes in wide orbits ($\sim$AU or more) whose evolutionary history is difficult to explain. Astrometric identification of such binaries is challenging for Gaia, with only two such examples currently known. Roman's enormous grasp," -"---\nabstract: 'We investigate two-phase flow in porous media and derive a two-scale model, which incorporates pore-scale phase distribution and surface tension into the effective behavior at the larger Darcy scale. The free-boundary problem at the pore scale is modeled using a diffuse interface approach in the form of a coupled Allen-Cahn Navier-Stokes system with an additional momentum flux due to surface tension forces. Using periodic homogenization and formal asymptotic expansions, a two-scale model with cell problems for phase evolution and velocity contributions is derived. We investigate the computed effective parameters and their relation to the saturation for different fluid distributions, in comparison to commonly used relative permeability saturation curves. The two-scale model yields non-monotone relations for relative permeability and saturation. The strong dependence on local fluid distribution and effects captured by the cell problems highlights the importance of incorporating pore-scale information into the macro-scale equations.'\nauthor:\n- 'Mathis Kelm [^1], Carina Bringedal [^2] \u00a0, Bernd Flemisch'\nbibliography:\n- 'preprint\_kelm\_20230630.bib'\ntitle: 'Upscaling and Effective Behavior for Two-Phase Porous-Medium Flow using a Diffuse Interface Model'\n---\n\nIntroduction {#sec1}\n============\n\nFlow through porous media, especially in multi-phase systems, is of interest in a variety of applications from oil recovery and $CO_2$ sequestration" -"---\nabstract: 'Vortex stretching is a common feature of many complex flows, including turbulence. Experiments and simulations of isolated vortex knots demonstrate that this behavior can also be seen in relatively simple systems, and appears to be dependent on vortex topology. Here we simulate the advection of material lines in the frozen flow fields of vortices on the surface of a torus. 
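(Aside: the lineal-topology condition in the DFS-tree abstract above is simple to verify directly: a rooted spanning tree is a DFS tree exactly when every non-tree edge joins an ancestor-descendant pair. A minimal sketch; the function name and example graph are illustrative only.)

```python
def is_dfs_tree(n, edges, parent, root):
    """Check that every non-tree edge connects an ancestor-descendant pair.

    `parent[v]` is v's tree parent; parent[root] is None."""
    depth = {root: 0}
    def get_depth(v):  # depth of v in the rooted tree, memoized
        if v not in depth:
            depth[v] = get_depth(parent[v]) + 1
        return depth[v]
    for v in range(n):
        get_depth(v)

    def is_ancestor(u, v):  # walk v upward until shallower than u
        while v is not None and depth[v] >= depth[u]:
            if v == u:
                return True
            v = parent[v]
        return False

    tree = {frozenset((v, parent[v])) for v in range(n) if parent[v] is not None}
    return all(frozenset(e) in tree or is_ancestor(e[0], e[1])
               or is_ancestor(e[1], e[0]) for e in edges)

# The 4-cycle 0-1-2-3-0: the path rooted at 0 is a DFS tree, since the lone
# non-tree edge (3, 0) joins a descendant to an ancestor.
parent = {0: None, 1: 0, 2: 1, 3: 2}
print(is_dfs_tree(4, [(0, 1), (1, 2), (2, 3), (3, 0)], parent, 0))  # True
```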
We find that knotted configurations lead to exponential stretching behavior which is qualitatively different than that observed by collections of unknots. This stretching can be explained by the formation of bights, sharp bends in the material lines which can be used to predict the stretching rate. This behavior is confirmed by computing the finite time Lyapunov exponents of the flow fields, which demonstrate the exponential stretching is mediated by bight forming regions between the vortex lines. This work both establishes a clear connection between topology and stretching behavior, as well as providing an intuitive mechanism for exponential growth of material lines in knotted flows.'\nauthor:\n- Stefan Faaland\n- Diego Tapia Silva\n- Dustin Kleckner\nbibliography:\n- 'refs.bib'\ntitle: Stretching Behavior of Knotted and Unknotted Flow Fields\n---\n\n\\[sec:level1\\]Introduction:\n===========================\n\n![ **(a)** The time evolution of a circular" -"---\nabstract: 'We carried out 3D smoothed particle hydrodynamics simulations of the common envelope binary interaction using the approximation of Bowen to calculate the dust opacity in order to investigate the resulting dust-driven accelerations. We have simulated two types of binary star: a 1.7 and a 3.7\u00a0[M$_{\\odot}$]{}\u00a0thermally-pulsating, asymptotic giant branch stars with a 0.6\u00a0[M$_{\\odot}$]{}\u00a0companion. We carried out simulations using both an ideal gas and a tabulated equations of state, with the latter considering the recombination energy of the envelope. We found that the dust-driven wind leads to a relatively small increase in the unbound gas, with the effect being smaller for the tabulated equation of state simulations. Dust acceleration does contribute to envelope expansion with only a slightly elongated morphology, if we believe the results from the tabulated equation of state as more reliable. The Bowen opacities in the outer envelopes of the two models, at late times, are large enough that the photosphere of the post-inspiral object is about ten times larger compared to the same without accounting for the dust opacities. As such, the prediction of the appearance of the transient would change substantially if dust is included.'\nauthor:\n- |\n Miguel Gonz\u00e1lez-Bol\u00edvar $^{1,2}$" -"---\nabstract: 'Three-dimensional device integration facilitates the construction of superconducting quantum information processors with more than several tens of qubits by distributing elements such as control wires, qubits, and resonators between multiple layers. The frequencies of resonators and qubits in flip-chip-bonded multi-chip modules depend on the details of their electromagnetic environment defined by the conductors and dielectrics in their vicinity. Accurate frequency targeting therefore requires precise control of the separation between chips and minimization of their relative tilt. Here, we describe a method to control the inter-chip separation by using polymer spacers. Compared to an identical process without spacers, we reduce the measured planarity error by a factor of , to a mean tilt of , and the deviation from the target inter-chip separation by a factor of ten, to a mean of . We apply this process to coplanar waveguide resonator samples and observe chip-to-chip resonator frequency variations below ($\\approx \\SI{1}{\\percent}$). 
We measure internal quality factors of at the single-photon level, suggesting that the added spacers are compatible with low-loss device fabrication.'\nauthor:\n- 'Graham J. Norris'\n- Laurent Michaud\n- David Pahl\n- Michael Kerschbaum\n- Christopher Eichler\n- 'Jean-Claude Besse'\n- Andreas Wallraff\nbibliography:\n- 'references.bib'\ndate:" -"---\nabstract: 'We investigate the application of deep learning techniques employing conditional variational autoencoders for semi-supervised learning of latent parameters to describe phase transitions in the two-dimensional (2D) ferromagnetic Ising model and the two-dimensional XY model. For both models, we utilize spin configurations generated using the Wolff algorithm below and above the critical temperatures. For the 2D Ising model we find the latent parameter of conditional variational autoencoders is correlated to the known order parameter of magnetization more efficiently than its counterpart in the variational autoencoders used previously. It can also clearly identify the restoration of the $\mathbb{Z}_2$ symmetry beyond the critical point. The critical temperature extracted from the latent parameter on larger lattices is found to approach its correct value. Similarly, for the 2D XY model, we find our chosen network with the latent representation of conditional variational autoencoders is equally capable of separating the two phases between the high and low temperatures, again at the correct critical temperature with reasonable accuracy. Together these results show that the latent representation of conditional variational autoencoders can be employed efficiently to identify the phases of condensed matter systems, without prior knowledge of them.'\nauthor:\n- Adwait Naravane\n- Nilmani Mathur\nbibliography:" -"---\nabstract: 'Image inpainting, which refers to the synthesis of missing regions in an image, can help restore occluded or degraded areas and also serve as a precursor task for self-supervision. The current state-of-the-art models for image inpainting are computationally heavy as they are based on transformer or CNN backbones that are trained in adversarial or diffusion settings. This paper diverges from vision transformers by using a computationally-efficient WaveMix-based fully convolutional architecture – WavePaint. It uses a 2D-discrete wavelet transform (DWT) for spatial and multi-resolution token-mixing along with convolutional layers. The proposed model outperforms the current state-of-the-art models for image inpainting on reconstruction quality while also using less than half the parameter count and considerably lower training and evaluation times. Our model even outperforms current GAN-based architectures on the CelebA-HQ dataset without using an adversarially trainable discriminator. 
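(Aside: the training data in the conditional-VAE study above come from Wolff cluster updates. A minimal NumPy sketch of one such update for the 2D ferromagnetic Ising model with $J=1$ and periodic boundaries, plus a short usage example; lattice size and sweep count are illustrative.)

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff cluster flip for the 2D Ising model (J = 1, periodic b.c.)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)  # bond-activation probability
    i, j = rng.integers(L, size=2)     # random seed site
    seed = spins[i, j]
    cluster, stack = {(i, j)}, [(i, j)]
    while stack:
        x, y = stack.pop()
        for nb in (((x + 1) % L, y), ((x - 1) % L, y),
                   (x, (y + 1) % L), (x, (y - 1) % L)):
            if nb not in cluster and spins[nb] == seed and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:               # flip the whole cluster at once
        spins[site] = -seed

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
beta_c = np.log(1.0 + np.sqrt(2.0)) / 2.0   # critical inverse temperature
for _ in range(200):
    wolff_update(spins, 0.8 * beta_c, rng)  # disordered side of T_c
print(abs(spins.mean()))
```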
Our work suggests that neural architectures that are modeled after natural image priors require fewer parameters and computations to achieve generalization comparable to transformers.'\nauthor:\n- |\n Pranav Jeevan, Dharshan Sampath Kumar, Amit Sethi\\\n Department of Electrical Engineering\\\n Indian Institute of Technology Bombay\\\n Mumbai, India\\\n `{pranav13phoenix, dharshan2609 }@gmail.com`\\\nbibliography:\n- 'references.bib'\ntitle: 'WavePaint: Resource-efficient Token-mixer for Self-supervised Inpainting '\n---\n\n![A sample of inpainted" -"---\nabstract: 'Recent years have witnessed a rapid growth of Artificial Intelligence Generated Content (AIGC), among which, with the development of text-to-image techniques, AI-based image generation has been applied to various fields. However, AI Generated Images (AIGIs) may have some unique distortions compared to natural images, thus many generated images are not qualified for real-world applications. Consequently, it is important and significant to study subjective and objective Image Quality Assessment (IQA) methodologies for AIGIs. In this paper, in order to get a better understanding of the human visual preferences for AIGIs, a large-scale IQA database for AIGC is established, which is named AIGCIQA2023. We first generate over 2000 images based on 6 state-of-the-art text-to-image generation models using 100 prompts. Based on these images, a well-organized subjective experiment is conducted to assess the human visual preferences for each image from three perspectives including ***quality***, ***authenticity*** and ***correspondence***. Finally, based on this large-scale database, we conduct a benchmark experiment to evaluate the performance of several state-of-the-art IQA metrics on our constructed database. The AIGCIQA2023 database and benchmark will be released to facilitate future research on'\nauthor:\n- Jiarui Wang\n- Huiyu Duan\n- Jing Liu\n- |\n Shi Chen\\\n Xiongkuo Min$^*$," -"---\nabstract: 'In this paper we formulate a geometric nonlinear theory of the mechanics of accreting-ablating bodies. This is a generalization of the theory of accretion mechanics of @Sozio2019. More specifically, we are interested in large deformation analysis of bodies that undergo continuous and simultaneous accretion and ablation on their boundaries while under external loads. In this formulation the natural configuration of an accreting-ablating body is a time-dependent Riemannian $3$-manifold with a metric that is unknown a priori and is determined after solving the accretion-ablation initial-boundary-value problem. In addition to the time of attachment map, we introduce a time of detachment map that, along with the time of attachment map and the accretion and ablation velocities, describes the time-dependent reference configuration of the body. The kinematics, material manifold, material metric, constitutive equations, and the balance laws are discussed in detail. As a concrete example and application of the geometric theory, we analyze a thick hollow circular cylinder made of an arbitrary incompressible isotropic material that is under a finite time-dependent extension while undergoing continuous ablation on its inner cylinder boundary and accretion on its outer cylinder boundary. 
The state of deformation and stress during the accretion-ablation process, and" -"---\nabstract: 'For the first time, the long-expected stochastic gravitational wave background has probably been discovered, following observations of the Hellings-Downs correlation curve by several pulsar timing array (PTA) collaborations around the globe including NANOGrav, European PTA, Parkes PTA, and Chinese PTA. These new observations can help to explore or constrain the dark matter formation mechanisms in the early universe. We study the implications of those results for dynamical dark matter formation mechanisms through a first-order phase transition in the early universe. Both the Q-ball dark matter and super-cool dark matter are investigated in the strong super-cooling phase transition scenario, which may give an interpretation of the observed stochastic gravitational wave background.'\nauthor:\n- Siyu Jiang\n- Aidi Yang\n- Jiucheng Ma\n- Fa Peng Huang\ntitle: 'Implication of nano-Hertz stochastic gravitational wave on dynamical dark matter through a first-order phase transition'\n---\n\nIntroduction \n=============\n\nRecently, various pulsar timing array (PTA) collaborations from NANOGrav, European PTA, Parkes PTA, and Chinese PTA\u00a0[@NANOGrav:2023gor; @Antoniadis:2023ott; @Antoniadis:2023zhi; @Reardon:2023gzh; @Xu:2023wog] have published their most recent findings on the first observation of the leading-order overlap reduction function, namely, the famous Hellings-Downs\u00a0[@Hellings:1983fr] curve which supports the discovery of the stochastic gravitational wave background (SGWB)." -"---\nabstract: |\n Foundational ontologies devoted to the effective representation of processes and procedures are not widely investigated at present, thereby limiting the practical adoption of semantic approaches in real scenarios where the precise instructions to follow must be considered. Also, the representation ought to include how agents should carry out the actions associated with the process, whether or not agents are able to perform those actions, the possible roles played as well as the related events.\n\n The OASIS 2 ontology\u00a0[@ia2022; @oasis2] provides an established model to capture agents and their interactions but lacks means for representing processes and procedures carried out by agents. This motivates the research presented in this article, which delivers an extension of the OASIS 2 ontology to combine the capabilities for representing agents and their behaviours with the full conceptualization of processes and procedures. The overarching goal is to deliver a foundational OWL ontology that deals with agent planning, reaching a balance between generality and applicability, which is known to be an open challenge.\naddress: 'Department of Mathematics and Computer Science, University of Catania, Viale Andrea Doria 6 - 95125 - Catania, Italy'\nauthor:\n- Giampaolo Bella\n- Gianpietro Castiglione\n- Daniele Francesco Santamaria\nbibliography:\n-" -"---\nabstract: 'We study an evaporating black hole in the boundary conformal field theory (BCFT) model. We show that a new BCFT solution that acts as a time-dependent brane which we call the moving end-of-the-radiation (METR) brane leads to a new type of Hubeny-Rangamani-Takayanagi surface. We further examine the island formulation in this particular time-dependent spacetime. The Page curve is calculated by using Holographic Entanglement Entropy (HEE) in the context of double holography.'\nauthor:\n- 'Chia-Jui Chou'\n- 'Hans B. 
Lao'\n- Yi Yang\nbibliography:\n- 'Page\_Curve\_of\_AdS\_Vaidya.bib'\ntitle: 'Page Curve of AdS-Vaidya Model for Evaporating Black Holes'\n---\n\nIntroduction\n============\n\nThe predictions of Hawking based on semiclassical effective field theory suggest that black holes emit radiation similar to black bodies with a corresponding temperature. As a consequence of Hawking radiation, a black hole should eventually evaporate away if there is no ingoing matter to compensate for the loss of energy [@BF02345020; @PhysRevD.13.191]. This phenomenon encapsulates the essence of the information loss paradox: if we assume that the black hole forms in a pure state, it ends up in a mixed state after evaporation, which violates one of the tenets of quantum mechanics, i.e., the unitarity principle. The fine-grained" -"---\nabstract: 'However, existing recognition models suffer from the limited availability of annotated datasets with both kinematic and video data and an inability to generalize to unseen subjects and tasks. We leverage an aggregated dataset of six dry-lab surgical tasks to train activity recognition models at the gesture and motion primitive levels and for separate robotic arms using only kinematic data. The models are evaluated using the LOUO (Leave-One-User-Out) setup. However, using MPs enables the training of models that can generalize better to unseen tasks. Also, higher recognition accuracy can be achieved by training separate models for the left and right robot arms. For task-generalization, recognition models perform best if trained on similar tasks and/or tasks from the same dataset.'\nauthor:\n- 'Kay Hutchinson$^{*1}$, Ian Reyes$^{2}$, Zongyu Li$^{1}$, and Homa Alemzadeh$^{1}$[^1][^2][^3] [^4]'\nbibliography:\n- 'main.bib'\n---\n\nrobotic surgery, surgical context, gesture recognition, activity recognition, surgical process modeling, action triplets\n\nIntroduction {#sec:introduction}\n============\n\nIn robot-assisted surgery, modeling and analysis at different levels of the surgical hierarchy [@neumuth2011modeling; @lalys2014surgical] are performed to gain an understanding of surgical activity and improve skill assessment [@tao2012sparse; @varadarajan2009data], error detection [@yasar2019context; @yasar2020real; @hutchinson2022analysis; @li2022runtime], and autonomy [@ginesi2021overcoming]. Towards these applications, automated segmentation and classification of surgical workflow has been an active area
Our analysis reveals that RFE achieves significant savings in physical qubit counts while having a much higher runtime upper bound. We anticipate even greater physical qubit savings when considering more realistic assumptions about the performance of EFTQC devices. By providing insights into the performance trade-offs and resource requirements of EFTQC algorithms, our work contributes to the" -"---\nabstract: 'Crack front waves (FWs) are dynamic objects that propagate along moving crack fronts in 3D materials. We study FW dynamics in the framework of a 3D phase-field model that features a rate-dependent fracture energy $\\Gamma(v)$ ($v$ is the crack propagation velocity) and intrinsic lengthscales, and quantitatively reproduces the high-speed oscillatory instability in the quasi-2D limit. We show that in-plane FWs feature a rather weak time dependence, with a decay rate that increases with $d\\Gamma(v)/dv\\!>\\!0$, and largely retain their properties upon FW-FW interactions, similarly to a related experimentally-observed solitonic behavior. Driving in-plane FWs into the nonlinear regime, we find that they propagate slower than predicted by a linear perturbation theory. Finally, by introducing small out-of-plane symmetry-breaking perturbations, coupled in- and out-of-plane FWs are excited, but the out-of-plane component decays under pure tensile loading. Yet, including a small anti-plane loading component gives rise to persistent coupled in- and out-of-plane FWs.'\nauthor:\n- Sanhita Das\n- Yuri Lubomirsky\n- Eran Bouchbinder\ntitle: The dynamics of crack front waves in 3D material failure\n---\n\n[*Introduction*]{}.\u2014Material failure is a highly complex phenomenon, involving multiple scales, strong spatial localization and nonlinear dissipation. It is mediated by the propagation of cracks, which feature nearly singular stresses" -"---\nabstract: |\n Nesse artigo, apresentamos v\u00e1rios paradoxos aparentes da relatividade restrita e suas respectivas solu\u00e7\u00f5es. Esses paradoxos aparecem desde o advento da relatividade, em 1905, e de fato nunca s\u00e3o paradoxos. Do ponto de vista did\u00e1tico, os paradoxos s\u00e3o uma excelente ferramenta de aprendizado. Eles levam o estudante a confrontar, e abandonar, v\u00e1rios conceitos centrais da teoria Galileana, como a simultaneidade e a rigidez dos corpos extensos. Particularmente, revisaremos uma nova e simples solu\u00e7\u00e3o para o paradoxo dos g\u00eameos, encontrada recentemente por um dos presentes autores [@TwinAlencar]. Ela n\u00e3o necessita considerar referenciais acelerados ou sinais de luz, que s\u00e3o as solu\u00e7\u00f5es apresentadas na literatura.\\\n [**Palavras-chave**]{}: Relatividade especial, simultaneidade, contra\u00e7\u00e3o de Lorentz.\\\n \\\n In this article, we present several apparent paradoxes of special relativity and their respective solutions. These paradoxes have appeared since the advent of relativity in 1905, and in fact they are never paradoxes. From a didactic point of view, paradoxes are an excellent learning tool. They lead the student to confront, and abandon, several central concepts of Galilean theory, such as simultaneity and rigidity of extended bodies. In particular, we review a new and simple solution to the twin paradox, recently found by one of the present" -"---\nabstract: 'Identifying and understanding the large-scale biodiversity patterns in time and space is vital for conservation and addressing fundamental ecological and evolutionary questions.
Network-based methods have proven useful for simplifying and highlighting important structures in species distribution data. However, current network-based biogeography approaches cannot exploit the evolutionary information available in phylogenetic data. We introduce a method for incorporating evolutionary relationships into species occurrence networks to produce more biologically informative and robust bioregions. To keep the bipartite network structure where bioregions are grid cells indirectly connected through shared species, we incorporate the phylogenetic tree by connecting ancestral nodes to the grid cells where their descendant species occur. To incorporate the whole tree without destroying the spatial signal of narrowly distributed species or ancestral nodes, we weigh tree nodes by the geographic information they provide. For a more detailed analysis, we enable integration of the evolutionary relationships at a specific time in the tree. By sweeping through the phylogenetic tree in time, our method interpolates between finding bioregions based only on distributional data and finding spatially segregated clades, uncovering evolutionarily distinct bioregions at different time slices. We also introduce a way to segregate the connections between evolutionary branches at a selected" -"---\nabstract: 'Fisher information is a lower bound on the uncertainty in the statistical estimation of classical and quantum mechanical parameters. While some deterministic dynamical systems are not subject to random fluctuations, they do still have a form of uncertainty: Infinitesimal perturbations to the initial conditions can grow exponentially in time, a signature of deterministic chaos. As a measure of this uncertainty, we introduce another classical information, specifically for the deterministic dynamics of classical systems not subject to noise. This classical measure of information is defined with Lyapunov vectors in tangent space, making it less akin to the classical Fisher information and more akin to the quantum Fisher information defined with wavevectors in Hilbert space. Our analysis of the local state space structure and linear stability leads to upper and lower bounds on this information, giving it an interpretation as the net stretching action of the flow. Numerical calculations of this information for illustrative mechanical examples show that it depends directly on the phase space curvature and speed of the flow.'\nauthor:\n- Mohamed\u00a0Sahbani\n- Swetamber\u00a0Das\n- 'Jason\u00a0R.\u00a0Green'\nbibliography:\n- 'references.bib'\ntitle: Classical Fisher information for differentiable dynamical systems\n---\n\nIntroduction\n============\n\nAcross science and engineering" -"---\nabstract: 'State-of-the-art perception systems construct models via multiple disparate pipelines that reuse the same underlying sensor data, which leads to increased computation, redundancy, and complexity.'\nauthor:\n- 'Kshitij Goel and Wennie Tabib[^1]'\nbibliography:\n- 'refs.bib'\n- 'do-not-modify.bib'\ntitle: '**GIRA: Gaussian Mixture Models for Inference and Robot Autonomy**'\n---\n\nIntroduction {#sec:intro}\n============\n\nRecent large-scale robotic exploration deployments, like the DARPA Subterranean (Sub-T) Challenge\u00a0[@chung2022into], have highlighted the need for map compression to facilitate information sharing.
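Since the GIRA excerpt above leans on GMMs as a compact, generative map representation, a small illustration may help before its introduction continues. The following sketch is an assumption-laden stand-in (synthetic points, scikit-learn's generic GaussianMixture, an arbitrary component count), not GIRA's actual implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for a LiDAR scan: a noisy floor plus a wall.
rng = np.random.default_rng(1)
floor = np.column_stack([rng.uniform(0, 10, 10000),
                         rng.uniform(0, 10, 10000),
                         0.02 * rng.standard_normal(10000)])
wall = np.column_stack([rng.uniform(0, 10, 10000),
                        0.02 * rng.standard_normal(10000),
                        rng.uniform(0, 3, 10000)])
cloud = np.vstack([floor, wall])

# Compress 20k x 3 floats into K weights, means, and covariances.
K = 32
gmm = GaussianMixture(n_components=K, covariance_type="full").fit(cloud)
print(f"{cloud.size} floats -> ~{K * (1 + 3 + 9)} GMM parameters")

# The model is generative: a proxy cloud can be resampled on demand,
# which is what makes the representation communication-efficient.
proxy_cloud, _ = gmm.sample(2000)
```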
Further, state-of-the-art perception systems typically leverage separate concurrent perceptual processing pipelines, which increases computation, redundancy, and complexity\u00a0[@Eckart-2017-104773]. For example, the highly sophisticated perception module of the NeBula system architecture\u00a0[@agha_nebula_2022-1] processes the same LiDAR data repeatedly (e.g., odometry, SLAM, terrain mapping, etc.), which is inefficient. Instead, what is needed is a unified framework for common perceptual processing elements, which is compact, generative, and amenable for deployment on low-power embedded systems\u00a0[@Eckart-2017-104773].\n\nGaussian mixture models (GMMs) provide high-fidelity and communication-efficient point cloud modeling and inference\u00a0[@corah_communication-efficient_2019] in real-world environments\u00a0[@tabib_autonomous_2021]. However, there are few open-source implementations, which poses a barrier to broad adoption by the general robotics community. To bridge this gap, this paper introduces GIRA, an open-source framework for GMM-based inference and robot autonomy\u00a0[@tabib_-manifold_2018; @tabib_simultaneous_2021; @tabib_autonomous_2021]." -"---\nabstract: 'Cell line authentication plays a crucial role in the biomedical field, ensuring researchers work with accurately identified cells. Supervised deep learning has made remarkable strides in cell line identification by studying cell morphological features through cell imaging. However, batch effects, a significant issue stemming from the different times at which data is generated, lead to substantial shifts in the underlying data distribution, thus complicating reliable differentiation between cell lines from distinct batch cultures. To address this challenge, we introduce CLANet, a pioneering framework for cross-batch cell line identification using brightfield images, specifically designed to tackle three distinct batch effects. We propose a cell cluster-level selection method to efficiently capture cell density variations, and a self-supervised learning strategy to manage image quality variations, thus producing reliable patch representations. Additionally, we adopt multiple instance learning (MIL) for effective aggregation of instance-level features for cell line identification. Our innovative time-series segment sampling module further enhances MIL\u2019s feature-learning capabilities, mitigating biases from varying incubation times across batches. We validate CLANet using data from 32 cell lines across 93 experimental batches from the AstraZeneca Global Cell Bank. Our results show that CLANet outperforms related approaches (e.g. domain adaptation, MIL), demonstrating its effectiveness in addressing" -"---\nabstract: 'Performance bugs are non-functional bugs that can even manifest in well-tested commercial products. Fixing these performance bugs is an important yet challenging problem. In this work, we address this challenge and present a new approach called Retrieval-Augmented Prompt Generation (RAPGen). Given a code snippet with a performance issue, RAPGen first retrieves a prompt instruction from a pre-constructed knowledge-base of previous performance bug fixes and then generates a prompt using the retrieved instruction. It then uses this prompt on a Large Language Model (such as Codex) in zero-shot to generate a fix. We compare our approach with various prompt variations and state-of-the-art methods in the task of performance bug fixing.
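To make the RAPGen pipeline just described concrete, here is a minimal sketch of the general retrieve-then-prompt pattern it names (retrieve an instruction from a knowledge base of past fixes, prepend it to the prompt). The knowledge-base entries, the TF-IDF retriever, and the prompt format are all assumptions for illustration; the paper's own retriever and prompts are not specified in this excerpt.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical knowledge base of (inefficiency pattern -> fix instruction).
kb = [
    ("repeated string concatenation inside a loop",
     "Use a StringBuilder instead of '+=' in loops."),
    ("Contains called on a List in a hot path",
     "Use a HashSet for membership tests."),
    ("LINQ Count compared against zero",
     "Use Any() instead of Count() > 0."),
]

patterns = [p for p, _ in kb]
vectorizer = TfidfVectorizer().fit(patterns)

def build_prompt(buggy_snippet: str) -> str:
    """Retrieve the closest known pattern and prepend its instruction."""
    sims = cosine_similarity(vectorizer.transform([buggy_snippet]),
                             vectorizer.transform(patterns))[0]
    instruction = kb[int(sims.argmax())][1]
    return f"// Fix hint: {instruction}\n{buggy_snippet}\n// Improved version:"

print(build_prompt("for (...) { s += x; }  // string concatenation in loop"))
```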
Our evaluation shows that RAPGen can generate performance improvement suggestions equivalent to or better than a developer's in $\\sim$60% of the cases, getting $\\sim$39% of them verbatim, in an expert-verified dataset of past performance changes made by C\\# developers.'\nauthor:\n- Spandan Garg\n- Roshanak Zilouchian Moghaddam\n- Neel Sundaresan\nbibliography:\n- 'references.bib'\ntitle: 'RAPGen: An Approach for Fixing Code Inefficiencies in Zero-Shot'\n---\n\nIntroduction {#submission}\n============\n\nPerformance bugs are inefficient code snippets in software code that can unnecessarily waste time and resources. Unlike functional bugs," -"---\nabstract: |\n **Objective**: To determine whether machine learning methods can generate useful potion recipes for research and teaching at Hogwarts School of Witchcraft and Wizardry.\\\n **Design**: Using deep neural networks to classify generated recipes into a standard drug classification system.\\\n **Setting**: Hogwarts School of Witchcraft and Wizardry.\\\n **Data sources**: 72 potion recipes from the Hogwarts curriculum, extracted from the Harry Potter Wiki.\\\n **Results**: Most generated recipes fall into the categories of psychoanaleptics and dermatologicals. The number of recipes predicted for each category reflected the number of training recipes. Predicted probabilities were often above 90% but some recipes were classified into 2 or more categories with similar probabilities, which complicates anticipating the predicted effects.\\\n **Conclusions**: Machine learning powered methods are able to generate potentially useful potion recipes for teaching and research at Hogwarts. This corresponds to similar efforts in the non-magical world where such methods have been applied to identify potentially effective drug combinations.\nauthor:\n- 'Christoph F. Kurz$^{1,2}$, Adriana N. K\u00f6nig$^{1,2}$'\nbibliography:\n- 'biblio.bib'\ndate: |\n \\\n $^2$Munich School of Management and Munich Center of Health Sciences, Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, Geschwister-Scholl-Platz 1, 80539 Munich, Germany\\\ntitle: Machine learning for potion development at Hogwarts\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nPotions are" -"---\nabstract: 'In the past fifteen years, dispersion relations (DRs) in the forward limit have been widely accepted as a model-independent method for estimating the $\\gamma Z$-exchange contributions to the parity asymmetry $A_{\\textrm{PV}}$ in elastic $ep$ scattering. In this work, for the first time, we estimate the corrections to these DRs. Firstly, we analyze the properties of $A_{\\textrm{PV}}$ based on a general formalism, and discuss the possibility of the DRs breaking down due to two kinematic poles in $A_{\\textrm{PV}}$. Then, we use point-like interactions as an example to illustrate the exact breakdown of these DRs in the experimental energy regions.
Furthermore, by using the effective low-energy interactions, we estimate the $\\gamma Z$-exchange contributions for the upcoming P2 experiment, and the results indicate that the correction to the forward limit DR for $\\Box_{\\gamma Z}^{V}$ is about 47%, which is significantly larger than the naive expectation prior to this study.'\nauthor:\n- |\n Qian-Qian Guo and Hai-Qing Zhou\\\n School of Physics, Southeast University, NanJing 211189, China\ntitle: 'Corrections to the Forward Limit Dispersion Relations for $\\gamma Z$-Exchange Contributions'\n---\n\nThe proton is one of the most fundamental particles in our world, and studies of its structure have been ongoing for nearly a" -"---\nabstract: 'This technical report describes our QuAVF@NTU-NVIDIA submission to the Ego4D Talking to Me (TTM) Challenge 2023. Based on observations from the TTM task and the provided dataset, we propose to use two separate models to process the input videos and audio. By doing so, we can utilize all the labeled training data, including those without bounding box labels. Furthermore, we leverage the face quality score from a facial landmark prediction model for filtering noisy face input data. The face quality score is also employed in our proposed quality-aware fusion for integrating the results from two branches. With the simple architecture design, our model achieves $67.4\\%$ mean average precision (mAP) on the test set, which ranks **first** on the leaderboard and outperforms the baseline method by a large margin. Code is available at: https://github.com/hsi-che-lin/Ego4D-QuAVF-TTM-CVPR23'\nauthor:\n- |\n Hsi-Che Lin$^1$ Chien-Yi Wang$^{2}$ Min-Hung Chen$^{2}$ Szu-Wei Fu$^{2}$ Yu-Chiang Frank Wang$^{1,2}$\\\n ^1^ National Taiwan University ^2^ NVIDIA\\\n [hsichelin@gmail.com, {chienyiw, minhungc, szuweif, frankwang}@nvidia.com]{}\nbibliography:\n- 'egbib.bib'\ntitle: 'QuAVF: Quality-aware Audio-Visual Fusion for Ego4D Talking to Me Challenge'\n---\n\n![image](img/QuAVF.pdf){width=\"80.00000%\"}\n\nIntroduction\n============\n\nEgo4D [@grauman2022ego4d] is a large-scale dataset introduced by Meta AI, specifically designed for the purpose of egocentric video understanding. Within the" -"---\nabstract: 'Turing machines and spin models share a notion of universality according to which some simulate all others. Is there a theory of universality that captures this notion? We set up a categorical framework for universality which includes as instances universal Turing machines, universal spin models, NP completeness, top of a preorder, denseness of a subset, and more. By identifying necessary conditions for universality, we show that universal spin models cannot be finite. We also characterize when universality can be distinguished from a trivial one and use it to show that universal Turing machines are non-trivial in this sense. Our framework allows us not only to compare universalities within each instance, but also to compare instances themselves. We leverage a Fixed Point Theorem inspired by a result of Lawvere to establish that universality and negation give rise to unreachability (such as uncomputability).
As such, this work sets the basis for a unified approach to universality and invites the study of further examples within the framework.'\nauthor:\n- Tom\u00e1\u0161 Gonda\n- Tobias Reinhart\n- Sebastian Stengele\n- Gemma De les Coves\nbibliography:\n- 'all-my-bibliography.bib'\n- 'references\\_tomas.bib'\ntitle: |\n A Framework for Universality in Physics,\\\n Computer Science, and Beyond\n---\n\nIntroduction {#sec:Introduction}\n============\n\nTuring" -"---\nabstract: 'Recent advancements in language models (LMs) have led to the emergence of powerful models such as Small LMs[^1] (e.g., T5) and Large LMs (e.g., GPT-4). These models have demonstrated exceptional capabilities across a wide range of tasks, such as named entity recognition (NER) in the general domain. Nevertheless, their efficacy in the medical domain remains uncertain, and medical NER always demands high accuracy because of the particularity of the field. This paper aims to provide a thorough investigation to compare the performance of LMs in medical few-shot NER and answer How far is LMs from 100% Few-shot NER in Medical Domain, and moreover to explore an effective entity recognizer to help improve the NER performance. Based on our extensive experiments conducted on 16 NER models spanning from 2018 to 2023, our findings clearly indicate that LLMs outperform SLMs in few-shot medical NER tasks, given the presence of suitable examples and appropriate logical frameworks. Despite the overall superiority of LLMs in few-shot medical NER tasks, it is important to note that they still encounter some challenges, such as misidentification, wrong template prediction, etc. Building on previous findings, we introduce a simple and effective method called" -"---\nabstract: 'The [Euclidean Steiner Minimal Tree]{} problem takes as input a set $\\mathcal P$ of points in the Euclidean plane and finds the minimum length network interconnecting all the points of $\\mathcal P$. In this paper, in continuation of the works of\u00a0[@du1987steiner] and\u00a0[@weng1995steiner], we study [Euclidean Steiner Minimal Tree]{} when $\\mathcal P$ is formed by the vertices of a pair of regular, concentric and parallel $n$-gons. We restrict our attention to the cases where the two polygons are not very close to each other. In such cases, we show that [Euclidean Steiner Minimal Tree]{} is polynomial-time solvable, and we describe an explicit structure of a Euclidean Steiner minimal tree for $\\mathcal P$. We also consider point sets $\\mathcal P$ of size $n$ where the number of input points not on the convex hull of $\\mathcal P$ is $f(n) \\leq n$. We give an exact algorithm with running time $2^{{\\mathcal{O}}(f(n)\\log n)}$ for such input point sets $\\mathcal P$. Note that when $f(n) = {\\mathcal{O}}(\\frac{n}{\\log n})$, our algorithm runs in single-exponential time, and when $f(n) = o(n)$ the running time is $2^{o(n\\log n)}$, which is better than the known algorithm in\u00a0[@hwang1992steiner].\n\n We know that no FPTAS exists
In this work, we propose a generic framework able to capitalize on an auxiliary acquisition of high spatial resolution to derive tailored data-driven spatial regularizations. This approach leverages the ability of deep learning to extract high-level features. More precisely, the regularization is conceived as a deep generative network able to encode spatial semantic features contained in this auxiliary image of high spatial resolution. To illustrate the versatility of this approach, it is instantiated to conduct two particular tasks, namely multiband image fusion and multiband image inpainting. Experimental results obtained on these two tasks demonstrate the benefit of this class of informed regularizations when compared to more conventional ones.'\nauthor:\n- 'Min\u00a0Zhao,\u00a0 Nicolas Dobigeon,\u00a0 and Jie\u00a0Chen,\u00a0 [^1] [^2]'\nbibliography:\n- 'IEEEfull.bib'\n- 'BIB.bib'\ntitle: 'Guided Deep" -"---\nabstract: 'To comprehend complex systems with multiple states, it is imperative to reveal the identity of these states by system outputs. Nevertheless, the mathematical models describing these systems often exhibit nonlinearity, which renders the resolution of the parameter inverse problem from the observed spatiotemporal data a challenging endeavor. Starting from the observed data obtained from such systems, we propose a novel framework that facilitates the investigation of parameter identification for multi-state systems governed by spatiotemporally varying parametric partial differential equations. Our framework consists of two integral components: a constrained self-adaptive physics-informed neural network, encompassing a sub-network, as our methodology for parameter identification, and a finite mixture model approach to detect regions of probable parameter variations. Through our scheme, we can precisely ascertain the unknown varying parameters of the complex multi-state system, thereby accomplishing the inversion of the varying parameters. Furthermore, we have showcased the efficacy of our framework on two numerical cases: the 1D Burgers\u2019 equation with time-varying parameters and the 2D wave equation with a space-varying parameter.'\naddress:\n- 'SandGold AI Research, Guangzhou 510642, China'\n- |\n Department of Mathematics, Faculty of Science and Technology,\\\n University of Macau, Macau 519000, China\n- |\n School of Reliability and" -"---\nabstract: 'The distribution of the first-passage time $T_a$ for a Brownian particle with drift $\\mu$ subject to hitting an absorber at a level $a>0$ is well-known and given by its density $\\gamma(t) = \\frac{a}{\\sqrt{2 \\pi t^3} } e^{-\\frac{(a-\\mu t)^2}{2 t}}, t>0$, which is normalized only if $\\mu \\geq 0$. In this article, we show that there are two other families of diffusion processes (the first with one parameter and the second with two parameters) having the same first-passage-time distribution when $\\mu <0$. In both cases we establish the propagators and study in detail these new processes.
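A quick numerical check of the density quoted above makes the normalization statement tangible: for $\mu \geq 0$ the density integrates to one, while for $\mu < 0$ the total mass equals the hitting probability $e^{2\mu a}$, a standard fact about drifted Brownian motion stated here as background rather than taken from the paper. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

def gamma_density(t, a, mu):
    """First-passage density of drifted Brownian motion to level a > 0."""
    return a / np.sqrt(2 * np.pi * t**3) * np.exp(-(a - mu * t)**2 / (2 * t))

a = 1.0
for mu in (0.5, 0.0, -0.5):
    mass, _ = quad(gamma_density, 0.0, np.inf, args=(a, mu))
    # Expected total mass: 1 for mu >= 0, exp(2*mu*a) < 1 for mu < 0,
    # since the particle may drift to -infinity and never hit level a.
    expected = 1.0 if mu >= 0 else np.exp(2 * mu * a)
    print(f"mu={mu:+.1f}: integral={mass:.6f}, expected={expected:.6f}")
```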
An immediate consequence is that the distribution of the first-passage time does not allow us to know if the process comes from a drifted Brownian motion or from one of these new processes.'\naddress: '$^{1}$Universit\u00e9 Paris-Saclay, CEA, Service d\u2019\u00c9tudes des R\u00e9acteurs et de Math\u00e9matiques Appliqu\u00e9es, 91191, Gif-sur-Yvette, France'\nauthor:\n- 'Alain Mazzolo$^{1}$'\ndate: 'Received: / Accepted: / Published '\ntitle: 'First-passage time of a Brownian motion: two unexpected journeys'\n---\n\n[*Keywords*]{}: Stochastic particle dynamics (theory), Brownian motion\\\n\\\n\nIntroduction {#sec_intro}\n============\n\nThe distribution of the first time when a diffusion reaches a boundary\u00a0[@ref_book_Redner; @ref_intro_Redner] is a fundamental quantity to characterize" -"---\nabstract: 'Binary boson stars can be used to model the nonlinear dynamics and gravitational wave signals of merging ultracompact, but horizonless, objects. However, doing so requires initial data satisfying the Hamiltonian and momentum constraints of the Einstein equations, something that has not yet been addressed. In this work, we construct constraint-satisfying initial data for a variety of binary boson star configurations. We do this using the conformal thin-sandwich formulation of the constraint equations, together with a specific choice for the matter terms appropriate for scalar fields. The free data is chosen based upon a superposition of isolated boson star solutions, but with several modifications designed to suppress the spurious oscillations in the stars that such an approach can lead to. We show that the standard approach to reducing orbital eccentricity can be applied to construct quasi-circular binary boson star initial data, reducing the eccentricity of selected binaries to the $\\sim 10^{-3}$ level. Using these methods, we construct initial data for quasi-circular binaries with different mass-ratios and spins, including a configuration where the spin is misaligned with the orbital angular momentum, and where the dimensionless spins of the boson stars exceed the Kerr bound. We evolve these to produce the" -"---\nabstract: 'Social mediator robots facilitate human-human interactions by producing behavior strategies that positively influence how humans interact with each other in social settings. As robots for social mediation gain traction in the field of human-human-robot interaction, their ability to \u201cunderstand\u201d the humans in their environments becomes crucial. This objective requires models of human understanding that consider multiple humans in an interaction as a collective entity and represent the group dynamics that exist among its members. Group dynamics are defined as the influential actions, processes, and changes that occur within and between group interactants. Since an individual\u2019s behavior may be deeply influenced by their interactions with other group members, the social dynamics existing within a group can influence the behaviors, attitudes, and opinions of each individual and the group as a whole. Therefore, models of group dynamics are critical for a social mediator robot to be effective in its role. In this paper, we survey existing models of group dynamics and categorize them into models of social dominance, affect, social cohesion, conflict resolution, and engagement. We highlight the multimodal features these models utilize, and emphasize the importance of capturing the interpersonal aspects of a social interaction.
Finally, we make a" -"---\nabstract: 'Large language models (LLMs) have demonstrated impressive performance on various downstream tasks without requiring fine-tuning, including ChatGPT, a chat-based model built on top of LLMs such as GPT-3.5 and GPT-4. Despite having a lower training proportion compared to English, these models also exhibit remarkable capabilities in other languages. In this study, we assess the performance of GPT-3.5 and GPT-4 models on seven distinct Arabic NLP tasks: sentiment analysis, translation, transliteration, paraphrasing, part-of-speech tagging, summarization, and diacritization. Our findings reveal that GPT-4 outperforms GPT-3.5 on five out of the seven tasks. Furthermore, we conduct an extensive analysis of the sentiment analysis task, providing insights into how LLMs achieve exceptional results on a challenging dialectal dataset. Additionally, we introduce a new Python interface[^1] that facilitates the evaluation of these tasks effortlessly.'\nauthor:\n- |\n **Zaid Alyafeai**[^2]$\\;^{,1,\\gamma}$\u00a0\u00a0\u00a0 **Maged S. Alshaibani**$\\;^{,1}$\u00a0\u00a0\u00a0 **Badr AlKhamissi**$\\;^{,2}$\u00a0\u00a0\u00a0\\\n **Hamzah Luqman**$^{1,3}$\u00a0\u00a0\u00a0 **Ebrahim Alareqi**$^4$\u00a0\u00a0\u00a0 **Ali Fadel**$^2$\u00a0\u00a0\u00a0\\\n \\\n $^1$ King Fahd University of Petroleum and Minerals, Saudi Arabia\\\n $^2$ ARBML\\\n $^3$ SDAIA-KFUPM Joint Research Center for Artificial Intelligence\\\n $^4$ Volvo Cars R&D Tech Center, United States\\\n \\\n $^{\\gamma}$ Corresponding Author:\nbibliography:\n- 'references.bib'\ntitle: |\n [![image](taqyim_logo.png){height=\"1.5cm\"}]{}\n\n **Taqyim: Evaluating Arabic NLP Tasks Using ChatGPT Models**\n---\n\n=1" -"---\nabstract: 'One strategy to obtain user location information in a wireless network operating at mmWave frequencies is based on the exploitation of the geometric relationships between the channel parameters and the user position. These relationships can be easily built from the LoS path and/or first order reflections, but high resolution channel estimates are required for high accuracy. In this paper, we consider a mmWave MIMO system based on a hybrid architecture, and first develop a low complexity channel estimation strategy based on MOMP suitable for high dimensional channels, such as those associated with operating large planar arrays. Then, a neural network called *PathNet* is designed to classify the order of the estimated channel paths, so that only the LoS path and first order reflections are selected for localization purposes. Next, a 3D localization strategy exploiting the geometry of the environment is developed to operate in both LoS and NLoS conditions, while considering the unknown clock offset between the user and the base station. Finally, a *Transformer* based network exploiting attention mechanisms called *ChanFormer* is proposed to refine the initial position estimate obtained from the geometric system of equations that connects user position and channel parameters. Simulation results obtained with realistic vehicular channels generated by ray tracing indicate that" -"---\nabstract: 'In this work, drawing inspiration from the type of noise present in real hardware, we study the output distribution of random quantum circuits under practical non\u2013unital noise sources with constant noise rates. We show that even in the presence of unital sources like the depolarizing channel, the distribution, under the combined noise channel, never resembles a maximally entropic distribution at any depth.
To show this, we prove that the output distribution of such circuits never anticoncentrates \u2014 meaning it is never too \u201cflat\u201d \u2014 regardless of the depth of the circuit. This is in stark contrast to the behavior of noiseless random quantum circuits or those with only unital noise, both of which anticoncentrate at sufficiently large depths. As consequences, our results have interesting algorithmic implications for both the hardness and easiness of noisy random circuit sampling, since anticoncentration is a critical property exploited by both state-of-the-art classical hardness and easiness results.'\nauthor:\n- '[^1]'\n- 'Soumik\u00a0Ghosh[^2]'\n- 'Michael\u00a0Gullans[^3]'\n- 'Kohdai\u00a0Kuroiwa[^4]'\n- '[^5]'\nbibliography:\n- 'MasterBib.bib'\ntitle: 'Effect of non\u2013unital noise on random circuit sampling'\n---\n\nIntroduction\n============\n\nThe defining feature of quantum systems today is noise [@Preskill_2018]. A fundamental question in this era of" -"---\nabstract: 'Image-on-scalar regression has been a popular approach to modeling the association between brain activities and scalar characteristics in neuroimaging research. The associations could be heterogeneous across individuals in the population, as indicated by recent large-scale neuroimaging studies, e.g., the Adolescent Brain Cognitive Development (ABCD) study. The ABCD data can inform our understanding of heterogeneous associations and how to leverage the heterogeneity and tailor interventions to increase the number of youths who benefit. It is of great interest to identify subgroups of individuals from the population such that: 1) within each subgroup the brain activities have homogeneous associations with the clinical measures; 2) across subgroups the associations are heterogeneous; and 3) the group allocation depends on individual characteristics. Existing image-on-scalar regression methods and clustering methods cannot directly achieve this goal. We propose a latent subgroup image-on-scalar regression model (LASIR) to analyze large-scale, multi-site neuroimaging data with diverse sociodemographics. LASIR introduces the latent subgroup for each individual and group-specific, spatially varying effects, with an efficient stochastic expectation maximization algorithm for inferences. We demonstrate that LASIR outperforms existing alternatives for subgroup identification of brain activation patterns with functional magnetic resonance imaging data via comprehensive simulations and applications to the ABCD study." -"---\nabstract: 'Graph neural networks (GNNs) have been widely applied in multi-variate time-series forecasting (MTSF) tasks because of their capability in capturing the correlations among different time-series. These graph-based learning approaches improve the forecasting performance by discovering and understanding the underlying graph structures, which represent the data correlation. When the explicit prior graph structures are not available, most existing works cannot guarantee the sparsity of the generated graphs, which makes the overall model computationally expensive and less interpretable. In this work, we propose a decoupled training method, which includes a graph generating module and a GNN forecasting module. First, we use Graphical Lasso (or GraphLASSO) to directly exploit the sparsity pattern from data to build graph structures in both static and time-varying cases.
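As an illustration of the first step just described, the sparse graph can come straight out of an off-the-shelf Graphical Lasso estimator: non-zero off-diagonal entries of the estimated precision matrix define the edges. The toy data and the regularization strength below are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Toy multivariate series: 5 sensors, with sensors 0 and 1 coupled.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]

model = GraphicalLasso(alpha=0.2).fit(X)

# Sparse precision matrix: non-zero off-diagonals become graph edges
# that a downstream forecasting GNN can consume.
adjacency = (np.abs(model.precision_) > 1e-6) & ~np.eye(5, dtype=bool)
print(adjacency.astype(int))
```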
Second, we fit these graph structures and the input data into a Graph Convolutional Recurrent Network (GCRN) to train a forecasting model. The experimental results on three real-world datasets show that our novel approach has competitive performance against existing state-of-the-art forecasting algorithms while providing sparse, meaningful and explainable graph structures and reducing training time by approximately $40\\%$. Our PyTorch implementation is publicly available at .'\nauthor:\n- 'Ngoc-Dung Do , Truong Son Hy , Duy Khuong Nguyen" -"---\nabstract: 'Accurately estimating gas usage is essential for the efficient functioning of gas distribution networks and saving operational costs. Traditional methods rely on centralized data processing, which poses privacy risks. Federated learning (FL) offers a solution to this problem by enabling local data processing on each participant, such as gas companies and heating stations. However, local training and communication overhead may discourage gas companies and heating stations from actively participating in the FL training process. To address this challenge, we propose a Hierarchical FL Incentive Mechanism for Gas Usage Estimation ([Hi-GAS]{}), which has been testbedded in the ENN Group, one of the leading players in the natural gas and green energy industry. It is designed to support horizontal FL among gas companies, and vertical FL among each gas company and heating station within a hierarchical FL ecosystem, rewarding participants based on their contributions to FL. In addition, a hierarchical FL model aggregation approach is also proposed to improve the gas usage estimation performance by aggregating models at different levels of the hierarchy. The incentive scheme employs a multi-dimensional contribution-aware reward distribution function that combines the evaluation of data quality and model contribution to incentivize both gas companies and" -"---\nabstract: 'Agent-based models (ABMs) provide an intuitive and powerful framework for studying social dynamics by modeling the interactions of individuals from the perspective of each individual. In addition to simulating and forecasting the dynamics of ABMs, the demand to solve optimization problems to support, for example, decision-making processes naturally arises. Most ABMs, however, are non-deterministic, high-dimensional dynamical systems, so objectives defined in terms of their behavior are computationally expensive. In particular, if the number of agents is large, evaluating the objective functions often becomes prohibitively time-consuming. We consider data-driven reduced models based on the Koopman generator to enable the efficient solution of multi-objective optimization problems involving ABMs. In a first step, we show how to obtain data-driven reduced models of non-deterministic dynamical systems (such as ABMs) that depend on potentially nonlinear control inputs. We then use them in the second step as surrogate models to solve multi-objective optimal control problems. We first illustrate our approach using the example of a voter model, where we compute optimal controls to steer the agents to a predetermined majority, and then using the example of an epidemic ABM, where we compute optimal containment strategies in a prototypical situation. We demonstrate that the surrogate" -"---\nabstract: 'The gradient index phononic crystal (GRIN-PC) lens concept has been proven very effective for focusing elastic waves at a desired location. 
Although well-studied for planar structures, GRIN-PC lenses for elastic wave focusing in curved structures are scarce and lack the theoretical framework for studying the wave focusing mechanism. In this work, we develop conformal GRIN-PC theory to analyze wave focusing in non-planar geometries and present a design framework for conformal GRIN-PC lenses to be implemented over curved structures. The proposed conformal GRIN-PC theory studies the wave propagation in a curved GRIN-PC lens using ray trajectories that meet at the focal spot of the lens. We apply the conformal GRIN-PC theory to accurately predict the focal region of the GRIN-PC lens implemented over a steel pipe and validate the results with numerical simulations. Further, the design framework is utilized to design a 3D-printed conical GRIN-PC lens. The elastic wave focusing in the conical lens is demonstrated using numerical simulations and is further validated with experiments.'\naddress:\n- 'Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI USA 48109'\n- 'Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI USA 48109'\nauthor:\n- Hrishikesh Danawe\n- Serife Tol" -"---\nabstract: 'Due to the absence of a single standardized imaging protocol, domain shift between data acquired from different sites is an inherent property of medical images and has become a major obstacle for large-scale deployment of learning-based algorithms. For retinal vessel images, domain shift usually presents as the variation of intensity, contrast and resolution, while the basic tubular shape of vessels remains unaffected. Thus, taking advantage of such domain-invariant morphological features can greatly improve the generalizability of deep models. In this study, we propose a method named *VesselMorph* which generalizes the 2D retinal vessel segmentation task by synthesizing a shape-aware representation. Inspired by the traditional Frangi filter and the diffusion tensor imaging literature, we introduce a Hessian-based bipolar tensor field to depict the morphology of the vessels so that the shape information is taken into account. We map the intensity image and the tensor field to a latent space for feature extraction. Then we fuse the two latent representations via a weight-balancing trick and feed the result to a segmentation network. We evaluate on six public datasets of fundus and OCT angiography images from diverse patient populations. VesselMorph achieves superior generalization performance compared with competing methods in different domain" -"---\nabstract: 'Using a density dependent quark model and a relativistic model within the mean-field approximation for hadrons with density dependent meson-baryon couplings, [we construct, for the first time, models that describe hybrid neutron stars]{} consisting of nucleons and exotic baryons (hyperons and $\\Delta$-resonances). We do the study using a Maxwell construction. The quark-hadron phase transition in the stellar matter is thereby determined, along with the structure, composition, and properties of the hybrid neutron star matter. The macroscopic properties of the star are determined, and the results [ for these particular models]{} are [ found to be compatible with recent]{} observational astrophysical data.'\nauthor:\n- |\n A.\u00a0Issifu$^1$ [^1], F. M. da Silva$^1$ and D.\u00a0P.\u00a0Menezes$^1$\\\n \\\n $^1$Departamento de F\u00edsica - CFM - Universidade Federal de Santa Catarina Florian\u00f3polis - SC - CP.
476 - CEP 88.040 - 900 - Brazil\\\nbibliography:\n- 'references.bib'\ntitle: Hybrid Stars Built with Density Dependent Models\n---\n\n\\[firstpage\\]\n\nStars: Neutron, Stars: interiors\n\nIntroduction\n============\n\nRecent progress made in nuclear astrophysics due to the detection of gravitational waves from the merging of two neutron stars (NSs) in the event GW170817 [@LIGOScientific:2017vwq], followed by the kilonova event observation in several wavelength bands of the electromagnetic spectrum [@Abbott_2017]," -"---\nabstract: |\n The cutting plane method is a key technique for successful branch-and-cut and branch-price-and-cut algorithms that find the exact optimal solutions for various vehicle routing problems (VRPs). Among various cuts, the rounded capacity inequalities (RCIs) are the most fundamental. To generate RCIs, we need to solve the separation problem, whose exact solution takes a long time to obtain; therefore, heuristic methods are widely used. We design a learning-based separation heuristic algorithm with graph coarsening that learns the solutions of the exact separation problem with a graph neural network (GNN), which is trained with small instances of 50 to 100 customers. We embed our separation algorithm within the cutting plane method to find a lower bound for the capacitated VRP (CVRP) with up to 1,000 customers. We compare the performance of our approach with CVRPSEP, a popular separation software package for various cuts used in solving VRPs. Our computational results show that our approach finds better lower bounds than CVRPSEP for large-scale problems with 400 or more customers, while CVRPSEP shows strong competency for problems with fewer than 400 customers.\n\n #### Summary of Contribution:\n\n We suggest a novel learning-based separation algorithm for RCIs arising in solving CVRPs. While some" -"---\nabstract: 'Deep learning based food image classification has enabled more accurate nutrition content analysis for image-based dietary assessment by predicting the types of food in eating occasion images. However, there are two major obstacles to applying food classification in real-life applications. First, real-life food images are usually heavy-tail distributed, resulting in a severe class-imbalance issue. Second, it is challenging to train a single-stage (*i.e.* end-to-end) framework under heavy-tailed data distribution, which causes over-predictions towards head classes with rich instances and under-predictions towards tail classes with rare instances. In this work, we address both issues by introducing a novel single-stage heavy-tailed food classification framework. Our method is evaluated on two heavy-tailed food benchmark datasets, Food101-LT and VFN-LT, and achieves the best performance compared to existing work with over $5\\%$ improvements for top-1 accuracy.'\naddress: |\n Elmore Family School of Electrical and Computer Engineering\\\n Purdue University, West Lafayette, Indiana, U.S.A. \nbibliography:\n- 'strings.bib'\n- 'refs.bib'\ntitle: 'Single-Stage Heavy-Tailed Food Classification'\n---\n\nFood classification, Heavy-tailed distribution, Single-stage, Image-based dietary assessment\n\nIntroduction {#sec:intro}\n============\n\nImage-based dietary assessment\u00a0[@he2020multitask] aims to determine the foods and corresponding nutrition from eating occasion images to enable automated analysis of nutrition intake.
Despite significant progress made" -"---\nabstract: 'We consider the problem of approximating a $d \\times d$ covariance matrix $M$ with a rank-$k$ matrix under $({\\varepsilon},\\delta)$-differential privacy. We present and analyze a complex variant of the Gaussian mechanism and show that the Frobenius norm of the difference between the matrix output by this mechanism and the best rank-$k$ approximation to $M$ is bounded by roughly $\\tilde{O}(\\sqrt{kd})$, whenever there is an appropriately large gap between the $k$\u2019th and the $k+1$\u2019th eigenvalues of $M$. This improves on previous work that requires that the gap between every pair of top-$k$ eigenvalues of $M$ is at least $\\sqrt{d}$ for a similar bound. Our analysis leverages the fact that the eigenvalues of complex matrix Brownian motion repel more than in the real case, and uses Dyson\u2019s stochastic differential equations governing the evolution of its eigenvalues to show that the eigenvalues of the matrix $M$ perturbed by complex Gaussian noise have large gaps with high probability. Our results contribute to the analysis of low-rank approximations under average-case perturbations and to an understanding of eigenvalue gaps for random matrices, which may be of independent interest.'\nauthor:\n- |\n Oren Mangoubi\\\n Worcester Polytechnic Institute\n- |\n Nisheeth K. Vishnoi\\\n Yale University\nbibliography:\n-" -"---\nabstract: 'In 1968, R. Steinberg proved a theorem stating that the exterior powers of an irreducible reflection representation of a Euclidean reflection group are again irreducible and pairwise non-isomorphic. We extend this result to a more general context where the inner product invariant under the group action may not necessarily exist.'\naddress: |\n Beijing International Center for Mathematical Research, Peking University,\\\n No. 5 Yiheyuan Road, Haidian District, Beijing 100871, China\nauthor:\n- Hongsheng Hu\nbibliography:\n- 'exterior-powers.bib'\ndate: 'September 13, 2023'\ntitle: On exterior powers of reflection representations\n---\n\nexterior powers ,generalized reflections ,reflection representations\n\n15A75 ,05E10 ,20C15 ,20F55 ,51F15\n\nIntroduction {#sec-intro}\n============\n\nIn [@Steinberg1968 \u00a714.1, \u00a714.3], R. Steinberg proved the following theorem (see also [@Bourbaki2002 Ch. V, \u00a72, Exercise 3.], [@CIK71 Theorem 9.13], [@GP00 Theorem 5.1.4] and [@Kane01 \u00a724-3]).\n\n\\[R. Steinberg\\] \\[thm-steinberg\\] Let $V$ be a finite-dimensional vector space endowed with an inner product (for example, a Euclidean space, or a complex Hilbert space). Let $\\{v_1, \\dots, v_n\\}$ be a basis of $V$, and $W \\subseteq {\\operatorname{GL}}(V)$ be the group generated by (orthogonal) reflections with respect to these basis vectors. Suppose $V$ is a simple $W$-module. Then the $W$-modules $\\{\\bigwedge^d V \\mid 0 \\le d \\le n\\}$ are" -"---\nabstract: 'Contact matrices have become a key ingredient of modern epidemic models. They account for the stratification of contacts for the age of individuals and, in some cases, the context of their interactions. However, age and context are not the only factors shaping contact structures and affecting the spreading of infectious diseases. Socio-economic status (SES) variables such as wealth, ethnicity, and education play a major role as well. Here, we introduce generalized contact matrices capable of stratifying contacts across any number of dimensions including any SES variable. 
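Before the generalized-contact-matrix abstract continues below, here is a compact illustration of the kind of computation it refers to: in standard SIR-type models the basic reproductive number is the spectral radius of a next-generation matrix built from the contact matrix. The sketch assumes uniform susceptibility, illustrative rates, and a toy 4-group stratification; it is not the analytical expression derived in the paper.

```python
import numpy as np

# Toy generalized contact matrix over age x SES strata (4 groups):
# C[i, j] = mean daily contacts of a group-i individual with group j.
C = np.array([[10., 3., 4., 1.],
              [ 3., 8., 1., 4.],
              [ 4., 1., 9., 2.],
              [ 1., 4., 2., 7.]])
beta, gamma = 0.03, 0.2   # per-contact transmission and recovery rates

# Next-generation matrix for a simple SIR model; R0 is its spectral radius.
K = (beta / gamma) * C
r0_stratified = max(abs(np.linalg.eigvals(K)))

# Collapsing the strata to an average contact rate per person generally
# yields a different (often lower) value; this gap is the mis-estimation
# effect the abstract warns about.
r0_homogeneous = (beta / gamma) * C.sum(axis=1).mean()
print(f"stratified R0 = {r0_stratified:.2f}, homogeneous R0 = {r0_homogeneous:.2f}")
```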
We derive an analytical expression for the basic reproductive number of an infectious disease unfolding on a population characterized by such generalized contact matrices. Our results, on both synthetic and real data, show that disregarding higher levels of stratification might lead to the under-estimation of the reproductive number and to a mis-estimation of the global epidemic dynamics. Furthermore, including generalized contact matrices allows for more expressive epidemic models able to capture heterogeneities in behaviours such as different levels of adoption of non-pharmaceutical interventions across different groups. Overall, our work contributes to the literature attempting to bring socio-economic, as well as other dimensions, to the forefront of epidemic modeling. Tackling this issue is" -"---\nabstract: 'Mergers of galaxies are a ubiquitous phenomenon in the Universe and represent a natural consequence of the \u201cbottom-up\u201d mass accumulation and galaxy evolution cosmological paradigm. It is generally accepted that the peak of AGN accretion activity occurs at nuclear separations of $\\lesssim10$kpc for major mergers. Here we present new X-ray observations for a subsample of mid-IR preselected dual AGN candidates in an effort to better constrain the column densities along the line-of-sight for each system. Only one dual AGN candidate, J0841+0101, is detected as a single, unresolved source in the X-ray imaging, while the remaining three dual AGN candidates, J0122+0100, J1221+1137, and J1306+0735, are not detected in these observations; if these non-detections are due to obscuration alone, these systems are consistent with being absorbed by column densities of log($\\nhm{}$) $\\geq$ 24.9, 24.8, and 24.6, which are roughly consistent with previously inferred column densities in these merging systems. In the case of J0841+0101, the analysis of the 0.3-30 keV spectra reveals a line-of-sight column density of $\\gtrsim10^{24}$cm$^{-2}$, significantly larger than the column densities previously reported for this system and demonstrating the importance of the higher signal-to-noise spectra and access to the $>10$keV energies. Though it is unclear if" -"---\nabstract: 'JWST has shown that CO$_2$ and CO are common on the surfaces of objects in the Kuiper belt and have apparent surface coverages even higher than that of water ice, though water ice is expected to be significantly more abundant in the bulk composition. Using full Mie scattering theory, we show that the high abundance and the unusual spectral behaviour around the 4.26 $\\mu$m $\\nu_1$ band of CO$_2$ can be explained by a surface covered in a few $\\mu$m thick layer of $\\sim 1-2$ $\\mu$m CO$_2$ particles. CO is unstable at the temperatures in the Kuiper belt, so the CO must be trapped in some more stable species. While hydrate clathrates [ or amorphous water ice]{} are often invoked as a trapping mechanism for outer solar system ices, the expected spectral shift of the absorption line for CO hydrate clathrates or [ trapping in amorphous ice]{} is not seen, nor does the H$_2$O abundance appear to be high enough to explain the depth of the CO absorption line. Instead, we suggest that the CO is created via irradiation of CO$_2$ and trapped in the CO$_2$ grains during this process. The presence of a thin surface layer of" -"---\nabstract: 'We present the spectra of Complex Organic Molecules (COMs) detected in HOPS 373SW with the Atacama Large Millimeter/submillimeter Array (ALMA).
HOPS 373SW, which is a component of a protostellar binary with a separation of 1500 au, has been discovered as a variable protostar by the JCMT\u00a0Transient monitoring survey with a modest ($\\sim30\\%$) brightness increase at submillimeter wavelengths. Our ALMA Target of Opportunity (ToO) observation at $\\sim$345 GHz for HOPS 373SW revealed extremely young chemical characteristics with strong deuteration of methanol. The dust continuum opacity is very high toward the source center, obscuring line emission from within 0.03''. The other binary component, HOPS 373NE, was detected only in C$^{17}$O in our observation, implying a cold and quiescent environment. We compare the COMs abundances in HOPS 373SW with those of V883 Ori, which is an eruptive disk object, as well as other hot corinos, to demonstrate the chemical evolution from envelope to disk. High abundances of singly, doubly, and triply deuterated methanol (CH$_2$DOH, CHD$_2$OH, and CD$_3$OH) and a low CH$_3$CN abundance in HOPS 373SW compared to other hot corinos suggest a very early evolutionary stage of HOPS 373SW in the hot corino phase. Since the COMs detected" -"---\nabstract: 'This paper explores new methods for locating the sources used to write a text, by fine-tuning a variety of language models to rerank candidate sources. After retrieving candidate sources using a baseline BM25 retrieval model, a variety of reranking methods are tested to see how effective they are at the task of source attribution. We conduct experiments on two datasets\u2014English Wikipedia and medieval Arabic historical writing\u2014and employ a variety of retrieval- and generation-based reranking models. In particular, we seek to understand how the degree of supervision required affects the performance of various reranking models. We find that semi-supervised methods can be nearly as effective as fully supervised methods while avoiding potentially costly span-level annotation of the target and source documents.'\nauthor:\n- Ryan Muther\n- 'David A. Smith'\nbibliography:\n- 'custom.bib'\ntitle: |\n Citations as Queries:\\\n Source Attribution Using Language Models as Rerankers\n---\n\n<ccs2012> <concept> <concept\\_id>10002951.10003317.10003338.10003341</concept\\_id> <concept\\_desc>Information systems\u00a0Language models</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nWhen reading a text, it is often useful to know which sources were used to write it. Knowledge of the sources used to write a news article, for example, can inform a reader of bias in how information in the article is" -"---\nabstract: 'Device authentication is one crucial aspect of any communication system. Recently, the physical-layer approach of radio frequency (RF) fingerprinting has gained increased interest as it provides an extra layer of security without requiring additional components. In this work, we propose an RF fingerprinting based transmitter authentication approach, the density trace plot (DTP), to exploit device-identifiable fingerprints. By considering IQ imbalance solely as the feature source, DTP can efficiently extract device-identifiable fingerprints from symbol transition trajectories and density center drifts. In total, three DTP modalities based on constellation, eye and phase traces are respectively generated and tested against three deep learning classifiers: the 2D-CNN, 2D-CNN+biLSTM and 3D-CNN.
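To visualize what a constellation-trace DTP modality might look like in practice, the sketch below builds a 2D density image from IQ samples with a toy gain/phase imbalance. The QPSK source, the imbalance model, and the bin count are illustrative assumptions; the paper's exact trace construction is not given in this excerpt.

```python
import numpy as np

# Synthetic QPSK symbols with a toy IQ gain/phase imbalance standing
# in for one transmitter's hardware "fingerprint".
rng = np.random.default_rng(7)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 20000)))
g, phi = 1.05, 0.06   # assumed gain and phase imbalance
i_part = g * symbols.real
q_part = symbols.imag * np.cos(phi) + symbols.real * np.sin(phi)
rx = (i_part + 1j * q_part) + 0.08 * (rng.standard_normal(symbols.size)
                                      + 1j * rng.standard_normal(symbols.size))

# Constellation-trace density: a 2D histogram of (I, Q) samples, i.e.
# the kind of image a 2D-CNN classifier would consume.
density, _, _ = np.histogram2d(rx.real, rx.imag, bins=64,
                               range=[[-2, 2], [-2, 2]])
print(density.shape, int(density.sum()))
```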
The feasibility of these DTP and classifier pairs is verified using a practical dataset collected from the ADALM-PLUTO software-defined radios (SDRs).'\nauthor:\n- |\n \\\n [^1]\nbibliography:\n- 'mylib.bib'\ntitle: Deep Learning Methods for Device Identification Using Symbols Trace Plot\n---\n\nphysical layer security, device authentication, RF fingerprinting, IQ imbalance, deep learning\n\nIntroduction\n============\n\nTransmitter authentication has long been a significant task of communication security. Conventionally, popular device authentication algorithms such as the challenge-handshake authentication protocol (CHAP)\u00a0[@liang2005performance] and cryptography-based algorithms\u00a0[@Marin2015] are mainly software-based. Given their complexity, these algorithms are less realistic to" -"---\nabstract: 'Machine learning techniques have shown their competence for representing and reasoning in symbolic systems such as language and phonology. In Sinitic Historical Phonology, notable tasks that could benefit from machine learning include the comparison of dialects and reconstruction of proto-language systems. Motivated by this, this paper provides an approach for obtaining multi-dialectal representations of Sinitic syllables, by constructing a knowledge graph from structured phonological data, then applying the BoxE technique from knowledge base learning. We applied unsupervised clustering techniques to the obtained representations to observe that the representations capture phonemic contrast from the input dialects. Furthermore, we trained classifiers to perform inference of unobserved Middle Chinese labels, showing the representations\u2019 potential for indicating archaic, proto-language features. The representations can be used for performing completion of fragmented Sinitic phonological knowledge bases, estimating divergences between different characters, or aiding the exploration and reconstruction of archaic features.'\nauthor:\n- |\n Zhibai Jia\\\n No.2 High School of East China Normal University\\\n `jiazhibai@proton.me`\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: 'Multi-Dialectal Representation Learning of Sinitic Phonology'\n---\n\n=1\n\nIntroduction\n============\n\nThe evolution of languages in the Sinitic family created intricate correspondences and divergences in its dense dialect clusters. Investigating the dynamics of this evolution," -"---\nabstract: 'Multiple pulsar timing array (PTA) collaborations recently announced evidence of common-spectrum processes caused by gravitational waves (GWs). These could constitute the stochastic GW background, whose origin may be astrophysical and/or cosmological. We interpret it as the GWs induced by the primordial curvature perturbations and discuss their implications for primordial black holes (PBHs).
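Both PTA excerpts in this section appeal to the Hellings-Downs curve, which has a simple closed form worth recording as background. The snippet uses the standard textbook normalization (the correlation tends to 1/2 at zero separation for distinct pulsars); it is reference material, not code from the cited analyses.

```python
import numpy as np

def hellings_downs(theta):
    """Expected angular correlation between pulsar pairs separated by
    theta (radians) under an isotropic SGWB, for distinct pulsars."""
    x = np.clip((1.0 - np.cos(theta)) / 2.0, 1e-12, None)
    return 0.5 + 1.5 * x * np.log(x) - 0.25 * x

for deg in (0.1, 45, 82, 120, 180):
    print(f"{deg:7.1f} deg -> Gamma = {hellings_downs(np.radians(deg)):+.3f}")
```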
We show that the newly released data suggest PBHs much lighter than the Sun ($\\mathcal{O}(10^{-4}) \\, M_\\odot$) in contrast to what was expected from the previous PTA data releases.'\nauthor:\n- Keisuke Inomata\n- Kazunori Kohri\n- Takahiro Terada\nbibliography:\n- 'nanograv\\_gw.bib'\ntitle: |\n The Detected Stochastic Gravitational Waves\\\n and Subsolar-Mass Primordial Black Holes \n---\n\nIntroduction\n============\n\nRecently, the evidence of the Hellings-Downs curve\u00a0[@Hellings:1983fr], a smoking-gun signal of the isotropic stochastic gravitational waves (GWs) representing a particular pattern of angular correlations, has been reported by pulsar timing array (PTA) experiments, in particular, by NANOGrav\u00a0[@NANOGrav:2023gor; @NANOGrav:2023hde] and by EPTA and InPTA\u00a0[@Antoniadis:2023ott; @Antoniadis:2023lym; @Antoniadis:2023xlr] (see also the results of PPTA\u00a0[@Reardon:2023gzh; @Zic:2023gta; @Reardon:2023zen] and CPTA\u00a0[@Xu:2023wog]). The GWs are consistent with the stochastic GW background (SGWB) as there have not been strong hints for continuous GW signals or anisotropy\u00a0[@NANOGrav:2023tcn; @NANOGrav:2023pdq; @Antoniadis:2023bjw]. A" -"---\nabstract: 'We study the scaling properties of the entanglement entropy (EE) near quantum critical points in interacting random antiferromagnetic (AF) spin chains. Using density-matrix renormalization group, we compute the half-chain EE near the topological phase transition between Haldane and Random Singlet phases in a disordered spin-1 chain. It is found to diverge logarithmically in system size with an effective central charge $c_{\\rm eff} = 1.17(4)$ at the quantum critical point (QCP). Moreover, a scaling analysis of EE yields the correlation length exponent $\\nu=2.28(5)$. Our unbiased calculation establishes that the QCP is in the universality class of the infinite-randomness fixed point predicted by previous studies based on strong disorder renormalization group technique. However, in the disordered spin-1/2 Majumdar-Ghosh chain, where a valence bond solid phase is unstable to disorder, the crossover length exponent obtained from a scaling analysis of EE disagrees with the expectation based on Imry-Ma argument. We provide a possible explanation.'\nauthor:\n- 'Prashant Kumar$^{1,2}$ and R. N. Bhatt$^{3}$'\nbibliography:\n- 'bigbib.bib'\ntitle: Scaling of entanglement entropy at quantum critical points in random spin chains\n---\n\n*Introduction:* Entanglement entropy (EE) measures gross quantum mechanical correlations between different parts of a system and incorporates experimentally observable quantities in an" -"---\nauthor:\n- \n- \n- \n- \n- \n- \n- \ntitle: Simultaneous nanorheometry and nanothermometry using intracellular diamond quantum sensors \n---\n\nMain {#sec1}\n====\n\nNanorheology addresses the question of how soft materials deform and flow at the nanoscale [@Squires2010; @Waigh2016]. Of significant interest in nanorheology is the study of complex cellular media such as the cytoplasm, which heavily influence cellular processes such as transport [@GUO2014822], division [@Hurst2021; @adeniba2020simultaneous] and morphological changes [@Pittman2022]. These properties, like many others in the cell, are linked to local biochemical energetics where temperature plays a critical role [@Postmus2008; @Kieling2013]. 
It is well-established that cells regulate their viscoelastic properties in response to external temperature changes through homeoviscous adaptation [@Sinensky1974; @Budin2018] and viscoadaptation [@Persson2020]. Variations in intracellular temperature, rheology and their interdependence at the nanoscale remain outstanding questions today [@Baffou2014; @jawerth2018salt] in the pursuit of a deeper understanding of cellular homeostasis, disease progression [@Chung2022] and pathways for cancer treatment [@sharma2019nanoparticles]. The current challenges for existing biosensing tools include the small length scales and poor signal-to-noise ratio of the phenomena under investigation.\n\nOptical techniques can provide means for investigating intracellular phenomena at the nanoscale in a non-invasive way. These methods are often susceptible to variations in autofluorescence [@Arai2015], spectral transmission" -"---\nabstract: 'This paper investigates an intelligent reflecting surface (IRS) aided millimeter-wave integrated sensing and communication (ISAC) system. Specifically, based on the passive beam scanning in the downlink, the IRS finds the optimal beam for reflecting the signals from the base station to a communication user. Meanwhile, the IRS estimates the angle of a nearby target based on its echo signal received by the sensing elements mounted on the IRS (i.e., semi-passive IRS). We propose an ISAC protocol for achieving the above objective via simultaneous (beam) training and sensing (STAS). Then, we derive the achievable rate of the communication user and the Cramer-Rao bound (CRB) of the angle estimation for the sensing target in closed form. [The achievable rate and CRB exhibit different performance against the duration of beam scanning. Specifically, the average achievable rate initially rises and subsequently declines, while the CRB monotonically decreases. Consequently, the duration of beam scanning should be carefully selected to balance communication and sensing performance.]{} Simulation results have verified our analytical findings and shown that, thanks to the efficient use of the downlink beam scanning signal for simultaneous communication and target sensing, the STAS protocol outperforms the benchmark protocol with orthogonal beam training and sensing.'\nauthor:" -"---\nabstract: |\n This paper traces the historical and analytical development of what is known in the econometrics literature as the Frisch-Waugh-Lovell theorem. This theorem demonstrates that the coefficients on any subset of covariates in a multiple regression are equal to the coefficients in a regression of the residualized outcome variable on the residualized subset of covariates, where residualization uses the complement of the subset of covariates of interest. In this paper, I suggest that the theorem should be renamed as the Yule-Frisch-Waugh-Lovell (YFWL) theorem to recognize the pioneering contribution of the statistician G. Udny Yule in its development. Second, I highlight recent work by the statistician P. Ding, which has extended the YFWL theorem to a comparison of estimated covariance matrices of coefficients from multiple and partial, i.e. residualized regressions.
Third, I show that, in cases where Ding\u2019s results do not apply, one can still resort to a computational method to conduct statistical inference about coefficients in multiple regressions using information from partial regressions.\\\n **JEL Codes:** C01.\\\n **Keywords:** multiple regression; partial regression; Frisch-Waugh-Lovell theorem.\nauthor:\n- 'Deepankar Basu[^1]'\nbibliography:\n- 'yfwl\\_refs.bib'\ntitle: 'The Yule-Frisch-Waugh-Lovell Theorem'\n---\n\nIntroduction\n============\n\nThe Frisch-Waugh-Lovell theorem is a remarkable result about linear regressions estimated" -"---\nabstract: 'This work is inspired by recent experiments on the formation of vortices in exciton-polariton condensates placed in rotating optical traps. We study theoretically the dynamics of formation of such vortices and elucidate the fundamental role of the mode competition effect in determining the properties of stationary polariton states triggered by stimulated scattering of exciton-polaritons. The interplay between linear and non-linear effects is shown to result in peculiar polariton dynamics. However, near the lasing threshold, the predominant contribution of the nonlinear effects is the saturation of the linear gain.'\nauthor:\n- 'A.V. Yulin'\n- 'I.A. Shelykh'\n- 'E. S. Sedov'\n- 'A.V. Kavokin'\nbibliography:\n- 'main\\_text\\_yulin.bib'\ntitle: 'Vorticity of polariton condensates in rotating traps. '\n---\n\nIntroduction\n============\n\nSemiconductor systems suitable for the realization of strong light-matter coupling have been actively studied in recent years [@Carusotto2013]. The reason for the interest they attract is the hybridization between the cavity photons and electronic excitations, which gives rise to the appearance of quasiparticles having extremely low effective masses and able to efficiently interact with each other. Probably the most remarkable achievement in this field is the experimental realization of Bose-Einstein condensation of exciton polaritons at extraordinarily high temperatures [@kasprzak2006bose;" -"---\nabstract: 'One of the challenges of excitonic materials is the accurate determination of the exciton binding energy and bandgap. The difficulty arises from the overlap of the discrete and continuous excitonic absorption at the band edge. Many researchers have modeled the shape of the absorption edge of such materials on the Elliott model and its several modifications such as non-parabolic bands, magnetic potentials and electron-hole-polaron interactions. However, exciton binding energies obtained from measured data often vary strongly depending on the chosen model. Here, we propose an alternative and rather simple approach, which has previously been successful in the determination of the optical bandgap of amorphous, direct and indirect semiconductors, based on the bands-fluctuations (BF) model. In this model, the fluctuations due to disorder, temperature or lattice vibrations give rise to the well known exponential distribution of band tail states (Urbach tails). This analysis results in an analytic equation with 5 parameters only. The binding energies and optical bandgaps of GaAs and the family of tri-halide perovskites ($\\textrm{MAPbX}_{3}$), $\\textrm{X=Br,I,Cl}$, over a wide range of temperatures, are obtained with this model. The results for the bandgap, linewidth and exciton binding energy are in good agreement with previous reports.
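The YFWL theorem described in the econometrics record above is straightforward to verify numerically: the OLS coefficient on a covariate in the multiple regression coincides with the coefficient from regressing the residualized outcome on the residualized covariate. A minimal sketch on simulated data (all variable names and values illustrative):

```python
# Numerical check of the (Y)FWL theorem: the coefficient on x2 in a multiple
# regression equals the coefficient from regressing the residualized y on the
# residualized x2, where both are residualized on the remaining covariates.
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.standard_normal(n)
x2 = 0.6 * x1 + rng.standard_normal(n)            # deliberately correlated covariates
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.standard_normal(n)

X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]  # multiple regression

Z = np.column_stack([np.ones(n), x1])             # the "complement" covariates
P = Z @ np.linalg.pinv(Z)                         # projection onto span(Z)
y_res, x2_res = y - P @ y, x2 - P @ x2            # residualize y and x2 on Z
beta_partial = np.linalg.lstsq(x2_res[:, None], y_res, rcond=None)[0]

print(beta_full[2], beta_partial[0])              # agree up to floating-point error
```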
Moreover, due to" -"---\nabstract: 'We propose a method for speech-to-speech emotion-preserving translation that operates at the level of discrete speech units. Our approach relies on the use of multilingual emotion embedding that can capture affective information in a language-independent manner. We show that this embedding can be used to predict the pitch and duration of speech units in a target language, allowing us to resynthesize the source speech signal with the same emotional content. We evaluate our approach to English and French speech signals and show that it outperforms a baseline method that does not use emotional information, including when the emotion embedding is extracted from a different language. Even if this preliminary study does not address directly the machine translation issue, our results demonstrate the effectiveness of our approach for cross-lingual emotion preservation in the context of speech resynthesis.'\naddress: |\n $^1$LIA - Avignon Universite, France\\\n $^2$University of Cambridge, United-Kingdom\nbibliography:\n- 'mybib.bib'\ntitle: Learning Multilingual Expressive Speech Representation for Prosody Prediction without Parallel Data\n---\n\n**Index Terms**: speech synthesis, prosody prediction, speech generation\n\nIntroduction\n============\n\nSpeech-to-speech translation has become increasingly important in today\u2019s globalized world, facilitating communication across different languages and cultures. However, current speech-to-speech translation systems often fail to" -"---\nabstract: 'We show that heat production in slowly driven quantum systems is linked to the topological structure of the driving protocol through the Fubini-Study tensor. Analyzing a minimal model of a spin weakly coupled to a heat bath, we find that dissipation is controlled by the quantum metric and a \u201cquality factor\" characterizing the spin\u2019s precession. Utilizing these findings, we establish lower bounds on the heating rate in two-tone protocols, such as those employed in topological frequency converters. Notably, these bounds are determined by the topology of the protocol, independent of its microscopic details. Our results bridge topological phenomena and energy dissipation in slowly driven quantum systems, providing a design principle for optimal driving protocols.'\nauthor:\n- Iliya Esin\n- '\u00c9tienne Lantagne-Hurtubise'\n- Frederik Nathan\n- Gil Refael\ntitle: Quantum geometry and bounds on dissipation in slowly driven quantum systems \n---\n\nHeating is a ubiquitous non-equilibrium phenomenon that influences a broad range of systems, including quantum computation platforms, semiconductor devices, and mesoscopic setups. In closed, driven quantum many-body systems, heating manifests through the growth of entanglement entropy, resulting in volume-law entangled states that are akin to infinite temperature states in equilibrium\u00a0[@DAlessio2014; @Lazarides2014; @Kaufman2016]. Typically lacking distinctive features, these" -"---\nabstract: 'The accuracy of finger vein recognition systems gets degraded due to low and uneven contrast between veins and surroundings, often resulting in poor detection of vein patterns. We propose a finger-vein enhancement technique, ResFPN (*Residual Feature Pyramid Network*), as a generic preprocessing method agnostic to the recognition pipeline. A bottom-up pyramidal architecture using the novel Structure Detection block (SDBlock) facilitates extraction of veins of varied widths. 
Using a feature aggregation module (FAM), we combine these vein-structures and train the proposed ResFPN for detection of veins across scales. With the enhanced presentations, our experiments indicate a reduction of up to 5% in the average recognition errors for a commonly used recognition pipeline over two publicly available datasets. These improvements are persistent even in the cross-dataset scenario where the dataset used to train the ResFPN is different from the one used for recognition.'\nauthor:\n- |\n Ketan\u00a0Kotwal\\\n Idiap Research Institute, Switzerland\\\n `ketan.kotwal@idiap.ch`\\\n S\u00e9bastien Marcel\\\n Idiap Research Institute, Switzerland\\\n University of Lausanne, Switzerland\\\n ` sebastien.marcel@idiap.ch`\\\nbibliography:\n- 'egbib.bib'\ntitle: Residual Feature Pyramid Network for Enhancement of Vascular Patterns\n---\n\nIntroduction {#sec:intro}\n============\n\nUse of vascular patterns as the biometric recognition trait is becoming more prevalent due to its distinctive advantages such as high recognition accuracy," -"---\nabstract: 'The use of smart devices (e.g., smartphones, smartwatches) and other wearables to deliver digital interventions to improve health outcomes has grown significantly in the past few years. Mobile health (mHealth) systems are excellent tools for the delivery of adaptive interventions that aim to provide the right type/amount of support, at the right time, by adapting to an individual\u2019s changing context. Micro-randomized trials (MRTs) are an increasingly common experimental design that is the main source for data-driven evidence of mHealth intervention effectiveness. To assess time-varying causal effect moderation in an MRT, individuals are intensively randomized to receive treatment over time. In addition, measurements, including individual characteristics, and context are also collected throughout the study. The effective utilization of covariate information to improve inferences regarding causal effects has been well-established in the context of randomized control trials (RCTs), where covariate adjustment is applied to leverage baseline data to address chance imbalances and improve the asymptotic efficiency of causal effect estimation. However, the application of this approach to longitudinal data, such as MRTs, has not been thoroughly explored. Recognizing the connection to Neyman Orthogonality, we propose a straightforward and intuitive method to improve the efficiency of moderated causal excursion effects by" -"---\nabstract: 'Recently, observational hints for supermassive black holes have been accumulating, which has inspired one to wonder: Can primordial black holes (PBHs) be supermassive, in particular with the mass $M\\gtrsim 10^{9}M_\\odot$? A supercritical bubble (with an inflating baby universe inside it) that nucleated during inflation can develop into a PBH in our observable Universe. Here, we find that when the inflaton slowly passes by a neighboring vacuum, the nucleating rate of supercritical bubbles would inevitably attain a peak, and so does the mass distribution of multiverse PBHs; the peak mass can be up to $M\\gtrsim 10^{11}M_\\odot$.
Thus our mechanism naturally provides a primordial origin of supermassive BHs.'\nauthor:\n- 'Hai-Long Huang$^{1}$[^1]'\n- 'Yong Cai$^{2}$[^2]'\n- 'Jun-Qian Jiang$^{1}$[^3]'\n- 'Jun Zhang$^{3,4}$[^4]'\n- 'Yun-Song Piao$^{1,3,5,6}$[^5]'\ntitle: 'Supermassive primordial black holes in multiverse: for nano-Hertz gravitational wave and high-redshift JWST galaxies'\n---\n\nIntroduction\n============\n\nIn past years, the cosmological implications of PBHs [@Zeldovich; @Hawking:1971ei; @Carr:1974nx], which might be responsible for dark matter and LIGO-Virgo gravitational wave (GW) events [@Bird:2016dcv; @Clesse:2016vqa; @Sasaki:2016jop], have been intensively studied, e.g., in Refs.\u00a0[@Sasaki:2018dmp; @Carr:2020gox; @Carr:2023tpt]. However, it has still been interesting to ask: Can PBHs be supermassive? In particular can the mass of PBHs reach $M\\gtrsim" -"---\nabstract: 'Patents serve as valuable indicators of innovation and provide insights into the spaces of innovation and venture formation within geographic regions. In this study, we utilise patent data to examine the dynamics of innovation and venture formation in the biotech sector across the United Kingdom (UK). By analysing patents, we identify key regions that drive biotech innovation in the UK. Our findings highlight the crucial role of biotech incubators in facilitating knowledge exchange between scientific research and industry. However, we observe that the incubators themselves do not significantly contribute to the diversity of innovations which might be due to the underlying effect of geographic proximity on the influences and impact of the patents. These insights contribute to our understanding of the historical development and future prospects of the biotech sector in the UK, emphasising the importance of promoting innovation diversity and fostering inclusive enterprise for achieving equitable economic growth.'\nauthor:\n- Francesco Marzolla\n- Przemys\u0142aw Nowak\n- Rohit Sahasrabuddhe\n- Chakresh Singh\n- Matteo Straccamore\n- Erik Zhivkoplias\n- Elsa Arcaute\ntitle: 'Spaces of innovation and venture formation: the case of biotech in the United Kingdom'\n---\n\nKeywords {#keywords .unnumbered}\n========\n\nInnovation, diversity, knowledge spillovers, patents, startups, biotechnology." -"---\nabstract: |\n Discount is the difference between the face value of a bond and its present value. I propose an arbitrage-free dynamic framework for discount models, which provides an alternative to the Heath\u2013Jarrow\u2013Morton framework for forward rates. I derive general consistency conditions for factor models, and discuss affine term structure models in particular. There are several open problems, and I outline possible directions for further research.\\\n **Keywords:** discount, factor models, stochastic partial differential equation, term structure models, zero-coupon bonds\\\n **JEL classification:** C32, G12, G13\\\n **MSC 2020 classification:** 91B70, 91G20, 91G30\nauthor:\n- 'Damir Filipovi\u0107[^1]'\nbibliography:\n- 'bib.bib'\ndate: |\n 25 July 2023\\\n forthcoming in *Finance and Stochastics*\ntitle: 'Discount Models[^2]'\n---\n\nDiscount\n========\n\nLet $P(t,T)$ denote the time-$t$ price of a zero-coupon bond with maturity $T$, or short, a $T$-bond. Define the corresponding *discount* $$H(t,T):=1-P(t,T).$$ The discount $H(t,T)$ is the difference between the face value of the bond and its present value. It is the interest earned on investing in a $T$-bond at $t$ and hold it to maturity $T$. 
As such, it quantifies the time value of money. It also equals the time-$t$ price of a long position in a floating rate note paying overnight short rates $r_t=-\\partial_T" -"---\nabstract: 'Central to black hole perturbation theory calculations is the Teukolsky equation that governs the propagation and the generation of radiation emitted by Kerr black holes. However, it is plagued by a long-ranged potential associated to the perturbation equation and hence a direct numerical integration of the equation is challenging. Sasaki and Nakamura devised a formulation that transforms the equation into a new equation that is free from the issue for the case of out-going gravitational radiation. The formulation was later generalized by Hughes to work for any type of radiation. In this work, we revamp the Generalized Sasaki-Nakamura (GSN) formalism and explicitly show the transformations that convert solutions between the Teukolsky and the GSN formalism for both in-going and out-going radiation of scalar, electromagnetic and gravitational type. We derive all necessary ingredients for the GSN formalism to be used in numerical computations. In particular, we describe a new numerical implementation of the formalism, `GeneralizedSasakiNakamura.jl`, that computes homogeneous solutions to both perturbation equation in the Teukolsky and the GSN formalism. The code works well at low frequencies and is even better at high frequencies by leveraging the fact that black holes are highly permeable to waves at high frequencies." -"---\nabstract: |\n In this paper, we investigate the effects of applying generalised (non-exponential) discounting on a long-run impulse control problem for a Feller-Markov process. We show that the optimal value of the discounted problem is the same as the optimal value of its undiscounted version. Next, we prove that an optimal strategy for the undiscounted discrete time functional is also optimal for the discrete-time discounted criterion and nearly optimal for the continuous-time discounted one. This shows that the discounted problem, being time-inconsistent in nature, admits a time-consistent solution. Also, instead of a complex time-dependent Bellman equation one may consider its simpler time-independent version.\n\n **Keywords:** impulse control, average cost per unit time, generalised discounting, non-exponential discounting, Markov process\n\n **MSC2020 subject classifications:** 93E20, 49J21, 49K21, 60J25\naddress: '$^{\\ast}$Institute of Mathematics, Polish Academy of Sciences, Warsaw, Poland'\nauthor:\n- 'Damian Jelito$^{\\ast,\\dagger}$'\n- '\u0141ukasz Stettner$^{\\ast}$'\nbibliography:\n- 'RSC\\_bibliografia.bib'\ntitle: 'Long-run impulse control with generalised discounting'\n---\n\nIntroduction\n============\n\nIn recent years, control problems with generalised discounting have attracted considerable attention in the literature; see e.g.\u00a0[@HuaNgu2018; @JasNow2020; @BauJasNow2021] and references therein. This can be attributed to the observation that the classic exponential discount function improperly describes the behaviour of economic agents; we refer" -"---\nabstract: 'Shor\u2019s factoring algorithm, believed to provide an exponential speedup over classical computation, relies on finding the period of an exactly periodic quantum modular multiplication operator. 
This exact periodicity is the hallmark of an integrable system, which is paradoxical from the viewpoint of quantum chaos, given that the classical limit of the modular multiplication operator is a highly chaotic system that occupies the \u201cmaximally random\u201d Bernoulli level of the classical ergodic hierarchy. In this work, we approach this apparent paradox from a quantum dynamical systems viewpoint, and consider whether signatures of ergodicity and chaos may indeed be encoded in such an \u201cintegrable\u201d quantization of a chaotic system. We show that Shor\u2019s modular multiplication operator, in specific cases, can be written as a superposition of quantized $A$-baker\u2019s maps exhibiting more typical signatures of quantum chaos and ergodicity. This work suggests that the integrability of Shor\u2019s modular multiplication operator may stem from the interference of other \u201cchaotic\u201d quantizations of the same family of maps, and paves the way for deeper studies on the interplay of integrability, ergodicity and chaos in and via quantum algorithms.'\nauthor:\n- Abu Musa Patoary\n- Amit Vikram\n- Laura Shou\n- Victor Galitski\nbibliography:\n- 'references.bib'" -"---\nabstract: 'This paper addresses the Optimal Transport problem, which is regularized by the square of Euclidean $\\ell_2$-norm. It offers theoretical guarantees regarding the iteration complexities of the Sinkhorn\u2013Knopp algorithm, Accelerated Gradient Descent, Accelerated Alternating Minimisation, and Coordinate Linear Variance Reduction algorithms. Furthermore, the paper compares the practical efficiency of these methods and their counterparts when applied to the entropy-regularized Optimal Transport problem. This comparison is conducted through numerical experiments carried out on the MNIST dataset.'\nauthor:\n- 'Dmitry A. Pasechnyuk[^1]'\n- 'Michael Persiianov[^^]{}'\n- Pavel Dvurechensky\n- Alexander Gasnikov\nbibliography:\n- 'main.bib'\ntitle: 'Algorithms for Euclidean-regularised Optimal Transport[^2]'\n---\n\nIntroduction\n============\n\nOptimal Transport (OT) problem has a long history [@kantorovich1942translocation; @monge1781memoire], has been extensively studied [@peyre2017computational; @villani2009optimal] and piques interest in the modern statistical learning community [@arjovsky2017wasserstein; @kolouri2017optimal]. This paper focuses on the discrete OT problem statement and the numerical optimisation methods applied to it. Formally, the original problem to solve is: $$\\label{eq:original}\n \\textstyle\\min_{\\substack{X\\mathbf{1}_m = a\\\\X^\\top\\mathbf{1}_n = b\\\\x_{ij} \\geq 0}} \\langle C, X \\rangle,$$ where $a \\in \\mathcal{S}_n$ and $b \\in \\mathcal{S}_m$ are the source and destination distributions (measures), the unit simplex $\\mathcal{S}_d \\equiv \\{x \\in \\mathbb{R}^d_+\\;\\vert\\; \\sum_{i=1}^d x_i = 1\\}$, $X \\in \\mathbb{R}^{n\\times m}_+$ is a transportation plan" -"---\nabstract: 'Foundation models have exhibited remarkable success in various applications, such as disease diagnosis and text report generation. To date, a foundation model for endoscopic video analysis is still lacking. In this paper, we propose Endo-FM, a foundation model specifically developed using massive endoscopic video data. First, we build a video transformer, which captures both local and global long-range dependencies across spatial and temporal dimensions. 
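For reference against the Optimal Transport record above, the classical Sinkhorn–Knopp iteration is easy to state for the *entropy*-regularised counterpart of the problem (the baseline the paper compares against; the Euclidean-regularised variant requires different updates). Sizes, the cost matrix, and the regularisation strength below are arbitrary assumptions:

```python
# Minimal Sinkhorn--Knopp sketch for entropy-regularised OT: alternately
# rescale rows and columns of the Gibbs kernel until both marginals match.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Return an approximate transport plan X with marginals a and b."""
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)              # match column marginals
        u = a / (K @ v)                # match row marginals
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(2)
n, m = 8, 5
a = np.full(n, 1 / n); b = np.full(m, 1 / m)
C = np.abs(rng.standard_normal((n, 1)) - rng.standard_normal((1, m)))
X = sinkhorn(a, b, C)
print(np.allclose(X.sum(axis=1), a, atol=1e-6), (C * X).sum())  # marginals ok, transport cost
```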
Second, we pre-train our transformer model using global and local views in a self-supervised manner, aiming to make it robust to spatial-temporal variations and discriminative across different scenes. To develop the foundation model, we construct a large-scale endoscopy video dataset by combining 9 publicly available datasets and a privately collected dataset from Baoshan Branch of Renji Hospital in Shanghai, China. Our dataset overall consists of over 33K video clips with up to 5 million frames, encompassing various protocols, target organs, and disease types. Our pre-trained Endo-FM can be easily adopted for a given downstream task via fine-tuning by serving as the backbone. With experiments on 3 different types of downstream tasks, including classification, segmentation, and detection, our Endo-FM surpasses the current state-of-the-art (SOTA) self-supervised pre-training and adapter-based transfer learning methods by a significant margin," -"---\nabstract: 'The Mathisson-Papapetrou-Dixon (MPD) equations describe the motion of spinning test particles in the pole-dipole approximation. It is well-known that these equations, which couple the Riemann curvature tensor with the antisymmetric spin tensor $S^{\\alpha\\beta}$, together with the normalization condition for the four-velocity, constitute a system of eleven equations relating fourteen unknowns. To \u201cclose\u201d the system, it is necessary to introduce a constraint of the form $V_\\mu S^{\\mu \\nu} = 0$, usually known as the spin supplementary condition (SSC), where $V_\\mu$ is a future-oriented reference vector satisfying the normalization condition $V_\\alpha V^\\alpha = -1$. There are several SSCs in the literature. In particular, the Tulzcyjew-Dixon, Mathisson-Pirani, and Ohashi-Kyrian-Semer\u00e1k are the most used by the community. From the physical point of view, choosing a different SSC (a different reference vector $V^\\mu$) is equivalent to fixing the centroid of the test particle. In this manuscript, we compare different SSCs for spinning test particles moving around a Morris-Thorne traversable wormhole. To do so, we first obtain the orbital frequency and expand it up to third order in the particle\u2019s spin; as expected, the zeroth order coincides with the Keplerian frequency, the same in all SSCs; nevertheless, we found that differences appear in the second order
Here, we present a new sample of stars\u2014identified via excess ultraviolet emission\u2014whose luminosities, colors, and spectral morphologies are consistent with predictions for the missing population. We detect radial velocity variations indicative of binary motion and measure high temperatures ($T_{\\rm eff}\\sim60-100$kK), high surface gravities ($\\log(g)\\sim5$) and depleted surface hydrogen mass fractions ($X_{\\rm{H,surf}}\\lesssim0.3$), which match expectations for stars with initial masses between 8\u201325 M$_{\\odot}$ that have been stripped via binary interaction. These systems fill" -"---\nabstract: 'In the recent years, deep learning techniques have shown great success in various tasks related to inverse problems, where a target quantity of interest can only be observed through indirect measurements by a forward operator. Common approaches apply deep neural networks in a post-processing step to the reconstructions obtained by classical reconstruction methods. However, the latter methods can be computationally expensive and introduce artifacts that are not present in the measured data and, in turn, can deteriorate the performance on the given task. To overcome these limitations, we propose a class of equivariant neural networks that can be directly applied to the measurements to solve the desired task. To this end, we build appropriate network structures by developing layers that are equivariant with respect to data transformations induced by well-known symmetries in the domain of the forward operator. We rigorously analyze the relation between the measurement operator and the resulting group representations and prove a representer theorem that characterizes the class of linear operators that translate between a given pair of group actions. Based on this theory, we extend the existing concepts of Lie group equivariant deep learning to inverse problems and introduce new representations that result from" -"---\nabstract: 'It is sometimes claimed that the twin \u201cparadox\u201d requires general relativity for a resolution. This paper presents a simple, exact resolution using only special relativity and the equivalence principle. Two earlier approximate solutions are considered, along with some background review to render the article self-contained. It is hoped that this material will be suitable for classroom instruction.'\nauthor:\n- |\n David Derbes[^1]\\\n *1700 E.56th St., Apt.2007*\\\n *Chicago, IL 60637*\ntitle: 'Special relativity and the twins: a review and a new approach'\n---\n\nProlegomena and apologia\n========================\n\nFull disclosure: Shorter versions of this article have been rejected four times over thirty-five years. The objections (after the first, in 1988) were broadly two: that there was nothing new in it, but more forcefully, that articles on the twins were harmful. Not in themselves, but in the second order: these articles frequently induce a cloud of cranks who swamp journals in the misbegotten hope of disproving relativity and Einstein, thereby obliging conscientious editors to waste valuable time and energy refuting nonsense. So why write another article? Why read one?\n\nThe selfish motive is to publish what this author thinks, *pace* his referees, is a new approach to this old puzzle.[^2] Less" -"---\nabstract: 'We propose a method to measure the respiration of a rhesus monkey using a millimeter-wave radar system with an antenna array. 
Unlike humans, small animals are generally restless and hyperactive in nature, and suppression of their body motion components is thus necessary to realize accurate respiratory measurements. The proposed method detects and suppresses nonperiodic body motion components while also combining and emphasizing the periodic components from multiple echoes acquired from the target. Results indicate that the proposed method can measure the respiration rate of the target monkey accurately, even with frequent body movements.'\nauthor:\n- 'Takuya\u00a0Sakamoto,\u00a0 Daisuke\u00a0Sanematsu, Itsuki\u00a0Iwata,\u00a0 Toshiki\u00a0Minami, and\u00a0Masako\u00a0Myowa [^1] [^2]'\ntitle: |\n Radar-Based Respiratory Measurement of\\\n a Rhesus Monkey by Suppressing Nonperiodic\\\n Body Motion Components\n---\n\n[Sakamoto *et al.*: Radar-Based Respiratory Measurement of a Rhesus Monkey by Suppressing Nonperiodic Body Motion Components]{}\n\nBody movement, radar, respiration, rhesus monkey.\n\nIntroduction\n============\n\nRespiratory patterns in both humans and animals are known to be affected by mental stress and health conditions. Measurement of an animal\u2019s respiration can play an important role in detecting early signs of mental stress, respiratory infections, and other health conditions. Although contact-type respiratory sensors are commonly used for medical purposes, these" -"---\nabstract: 'Generating a long-distance quantum entanglement is one of the most essential functions of a quantum network to support quantum communication and computing applications. The successful entanglement rate during a probabilistic entanglement process decreases dramatically with distance, and swapping is a widely applied quantum technique to address this issue. Most existing entanglement routing protocols use a classic entanglement-swapping method based on Bell State measurements that can only fuse two successful entanglement links. This paper appeals to a more general and efficient swapping method, namely $n$-fusion based on Greenberger-Horne-Zeilinger measurements that can fuse $n$ successful entanglement links, to maximize the entanglement rate for multiple quantum-user pairs over a quantum network. We propose efficient entanglement routing algorithms that utilize the properties of $n$-fusion for quantum networks with general topologies. Evaluation results highlight that our proposed algorithm under $n$-fusion can greatly improve the network performance compared with existing ones.'\nauthor:\n- '{yiming.zeng, jiarui.zhang.2, ji.liu, zhenhua.liu, yuanyuan.yang}@stonybrook.edu'\nbibliography:\n- 'reference.bib'\ntitle: 'Entanglement Routing over Quantum Networks Using Greenberger-Horne-Zeilinger Measurements '\n---\n\n=1\n\nQuantum Networks; Entanglement Routing; $n$-fusion Entanglement-swapping; Greenberger-Horne-Zeilinger (GHZ) Measurements\n\nIntroduction\n============\n\nQuantum information science is viewed as the next scientific breakthrough that will propel scientific and economic developments for the whole society
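To make the periodicity-emphasis idea of the radar record above concrete, here is a generic illustration (not the authors' algorithm): discard high-variance segments of a displacement waveform as nonperiodic body motion, then read the respiratory period off the autocorrelation peak. All rates, amplitudes, and thresholds below are assumptions.

```python
# Toy respiration-rate estimate from a radar-like displacement signal:
# suppress nonperiodic (high-variance) segments, then find the dominant
# period via autocorrelation. Signal model and thresholds are illustrative.
import numpy as np

fs = 20.0                                         # sampling rate [Hz], assumed
t = np.arange(0, 60, 1 / fs)
resp = 0.5e-3 * np.sin(2 * np.pi * 0.6 * t)       # 0.6 Hz (36 breaths/min) respiration
motion = np.zeros_like(t)
motion[400:500] = 5e-3 * np.random.default_rng(3).standard_normal(100)  # body-motion burst
x = resp + motion

seg = x.reshape(-1, int(5 * fs))                  # 5-second segments
quiet = seg[seg.std(axis=1) < 2 * np.median(seg.std(axis=1))].ravel()

z = quiet - quiet.mean()
ac = np.correlate(z, z, mode="full")[z.size - 1:]
lag0 = int(0.3 * fs)                              # ignore lags shorter than 0.3 s
period = (np.argmax(ac[lag0:]) + lag0) / fs
print(60.0 / period, "breaths per minute (approx.)")
```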
The programmability is introduced by placing the atoms at different positions in the cavity with optical tweezers. The NPP solution is encoded in the ground state of atomic qubits coupled through a photonic cavity mode, that can be reached by adiabatic quantum computing (AQC). We construct an explicit mapping for the 3-SAT and vertex cover problems to be efficiently encoded by the cavity system, which costs linear overhead in the number of atomic qubits. The atom cavity encoding is further extended to quadratic unconstrained binary optimization (QUBO) problems. The encoding protocol is optimal in the cost of atom number scaling with the number of binary degrees of freedom of the computation problem. Our theory implies the atom cavity system is a promising quantum optimization platform searching for practical quantum advantage.'" -"---\nabstract: 'Let $Z$ be the germ of a complex hypersurface isolated singularity of equation $f,$ with $Z$ at least of dimension $2.$ We consider the family of analytic $D$-modules generated by the powers of $1/f$ and describe it in terms of the pole order filtration on the de Rham cohomology of the complement of $\\{f=0\\}$ in the neighborhood of the singularity.'\nauthor:\n- Thomas Bitoun\nbibliography:\n- 'bibfilex.bib'\ntitle: 'On the D-module of an isolated singularity'\n---\n\n.\n\nIntroduction\n============\n\nThe $\\mathcal{D}$-modules generated by powers of a polynomial (or analytic function) $f$ have been the topic of several noted publications in the last decade, for example [@MR3867305], [@MR4322001], [@10.1093/imrn/rnac369] and [@saito2022length]. On the one hand, they are elementary objects accessible to those with basic knowledge of $\\mathcal{D}$-module theory. On the other, they relate to Hodge theory and analytic invariants of singularities in deep and subtle ways.\n\nIn this note, we deal with the general isolated singularity case and explain how (in the case of integral powers at least) these modules are characterized by the pole order filtration on the de Rham cohomology, as opposed to the Hodge filtration. We believe our fairly elementary approach, based on residues, is a" -"---\nabstract: |\n We consider the following well studied problem of metric distortion in social choice. Suppose we have an election with $n$ voters and $m$ candidates who lie in a shared metric space. We would like to design a voting rule that chooses a candidate whose average distance to the voters is small. However, instead of having direct access to the distances in the metric space, each voter gives us a ranked list of the candidates in order of distance. Can we design a rule that regardless of the election instance and underlying metric space, chooses a candidate whose cost differs from the true optimum by only a small factor (known as the *distortion*)?\n\n A long line of work culminated in finding deterministic voting rules with metric distortion $3$, which is the best possible for deterministic rules and many other classes of voting rules. However, without any restrictions, there is still a significant gap in our understanding: Even though the best lower bound is substantially lower at $2.112$, the best upper bound is still $3$, which is attained even by simple rules such as Random Dictatorship. 
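The number-partition encoding mentioned in the atom-cavity record above can be stated classically: an NPP instance maps to the Ising energy $E(s)=(\sum_i s_i a_i)^2$ over spins $s_i\in\{-1,+1\}$, whose ground state is a balanced partition. A brute-force check of that encoding (illustrative only; the point of the paper is reaching this ground state by adiabatic evolution in the cavity):

```python
# Classical form of the NPP encoding: minimise (sum_i s_i a_i)^2 over
# spin assignments; a zero-energy ground state is a perfect partition.
from itertools import product

def npp_ground_state(numbers):
    """Exhaustively minimise the Ising energy (sum_i s_i a_i)^2."""
    best = None
    for spins in product((-1, 1), repeat=len(numbers)):
        energy = sum(s * a for s, a in zip(spins, numbers)) ** 2
        if best is None or energy < best[0]:
            best = (energy, spins)
    return best

energy, spins = npp_ground_state([4, 5, 6, 7, 8])
print(energy, spins)   # energy 0: {4, 5, 6} vs {7, 8}, both summing to 15
```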
Finding a rule that guarantees distortion $3 - {\\varepsilon}$ for some constant ${\\varepsilon}$" -"---\nabstract: |\n Interaction of plasma flow with a magnetic obstacle is a frequent process in many laser-plasma experiments in the laboratory, and is an important event in many astrophysical objects such as X-ray pulsars, AGN, GRB etc.\n\n As a result of plasma penetration through the magnetic wall we could expect a formation of magnetohydrodynamic (MHD) shock waves, as well as of electromagnetic (EM) ones. To study these processes we need equations following from hydrodynamic and Maxwell equations, which in the limiting situations describe MHD and EM waves, and are valid for the general case, when both phenomena are present.\n\n Here we derive a set of equations following from hydrodynamic and Maxwell equations, without neglecting a displacement current, needed for a formation of EM waves. We find a dispersion equation describing a propagation of a weak linear wave in a magnetized plasma along the $x$ axis, perpendicular to the magnetic field $H_y(x)$, which contains MHD, hydrodynamic and EM waves in the limiting cases, and some new types of behaviour in a general situation. We consider a plasma with zero viscosity and heat conductivity, but with a finite electric conductivity with a scalar coefficient.\nauthor:\n- 'Gennady S. Bisnovatyi-Kogan'\n- 'Ilya" -"---\nabstract: 'Massive multiple-input multiple-output (mMIMO) technology is considered a key enabler for the 5G and future wireless networks. In most wireless communication systems, mMIMO is employed together with orthogonal frequency-division multiplexing (OFDM) which exhibits a high peak-to-average-power ratio (PAPR). While passing the OFDM signal through one of the common RF front-ends of limited linearity, significant distortion of the transmitted signal can be expected. In mMIMO systems, this problem is still relevant as in some channels the distortion component is beamformed in the same directions as the desired signal. In this work, we propose a multi-antenna clipping noise cancellation (MCNC) algorithm for the downlink of the mMIMO OFDM system. Computer simulations show it can remove nonlinear distortion even under severe nonlinearity. Next, a simplified version of the algorithm is proposed. It was observed that for the direct visibility channels, its performance is only slightly degraded with respect to the MCNC algorithm.'\nauthor:\n- 'Marcin Wachowiak,\u00a0 and\u00a0Pawel\u00a0Kryszkiewicz,\u00a0[^1][^2][^3]'\nbibliography:\n- 'biblio.bib'\ntitle: Clipping noise cancellation receiver for the downlink of massive MIMO OFDM system\n---\n\n[M. Wachowiak, P. Kryszkiewicz: Clipping noise cancellation receiver for the downlink of massive MIMO OFDM system]{}\n\northogonal frequency-division multiplexing (OFDM), massive MIMO (mMIMO), front-end" -"---\nabstract: 'Satellite Internet of Things (Sat-IoT) is a novel framework in which satellites integrate sensing, communication and computing capabilities to carry out task-oriented communications. In this paper we propose to use the Long Range (LoRa) modulation for the purpose of estimation in a Sat-IoT scenario. Then we realize that the collisions generated by LoRa can be harnessed in an Over-the-Air Computing (AirComp) framework. Specifically, we propose to use LoRa for Type-based Multiple Access (TBMA), a semantic-aware scheme in which communication resources are assigned to different parameters, not users. 
Our experimental results show that LoRa-TBMA is suitable as a massive access scheme, provides large gains in terms of mean squared error (MSE) and saves scarce satellite communication resources (i.e., power, latency and bandwidth) with respect to orthogonal multiple access schemes. We also analyze the satellite scenarios that could take advantage of the LoRa-TBMA scheme. In summary, we show that angular modulations, which are very useful in satellite communications, can also benefit from AirComp.'\nauthor:\n- \nbibliography:\n- 'refs.bib'\ntitle: 'LoRa-based Over-the-Air Computing for Sat-IoT [^1] '\n---\n\nIntroduction\n============\n\nThe future 6G (6th Generation) communication is expected to integrate the terrestrial systems and non-terrestrial satellite constellations seamlessly, which is known as a" -"---\nabstract: 'In this work, we introduce a class of Timmermann\u2019s measured multiplier Hopf [$*$]{}-algebroids called [*algebraic quantum transformation groupoids of compact type*]{}. Each object in this class admits a Pontrjagin-like dual called an [*algebraic quantum transformation groupoid of discrete type*]{}. This compact/discrete duality in the framework of algebraic quantum transformation groupoids recovers the one between compact and discrete Van Daele\u2019s algebraic quantum groups. Among the non-trivial examples of algebraic quantum transformation groupoids of compact type, we give constructions arising from Fell bundles and quantum quotient spaces.'\naddress: 'Universit\u00e9 Paris-Saclay, CNRS, Laboratoire de Math\u00e9matiques d\u2019Orsay, 91405 Orsay, France'\nauthor:\n- Frank Taipe\nbibliography:\n- 'biblio.bib'\ntitle: |\n Algebraic quantum transformation groupoids\\\n of compact type\n---\n\nIntroduction\n============\n\nMeasured quantum groupoids were introduced by F. Lesieur in [@LPHD03] using as main ingredients two operator algebraic structures introduced by J.-M. Vallin, namely Hopf-von Neumann bimodules [@V96] and pseudo-multiplicative unitaries [@V00]. Two important features of these general quantum objects are that, on one hand, they generalize von Neumann locally compact quantum groups, in the sense of J. Kustermans and S. Vaes [@KV03], and on the other hand, they are natural objects arising as quantum symmetries of general inclusions of von Neumann algebras [@EV00;
Through extensive experiments and evaluation, we demonstrate that our proposed guidance method greatly improves the fidelity of diffusion models to object count.'\nauthor:\n- 'Wonjun Kang^1,2^, Kevin Galim^1^, Hyung Il Koo^1,3^'\n- Author Name\n- 'First" -"---\nabstract: 'We focus on the task of approximating the optimal value function in deep reinforcement learning. This iterative process is comprised of solving a sequence of optimization problems where the loss function changes per iteration. The common approach to solving this sequence of problems is to employ modern variants of the stochastic gradient descent algorithm such as Adam. These optimizers maintain their own internal parameters such as estimates of the first-order and the second-order moments of the gradient, and update them over time. Therefore, information obtained in previous iterations is used to solve the optimization problem in the current iteration. We demonstrate that this can contaminate the moment estimates because the optimization landscape can change arbitrarily from one iteration to the next one. To hedge against this negative effect, a simple idea is to reset the internal parameters of the optimizer when starting a new iteration. We empirically investigate this resetting idea by employing various optimizers in conjunction with the Rainbow algorithm. We demonstrate that this simple modification significantly improves the performance of deep RL on the Atari benchmark.'\nauthor:\n- |\n Kavosh Asadi\\\n Amazon\\\n Rasool Fakoor\\\n Amazon\\\n Shoham Sabach\\\n Amazon & Technion\\\nbibliography:\n- 'ref.bib'\ndate: March 2023" -"---\nabstract: 'We introduce *NaturalInversion*, a novel model inversion-based method to synthesize images that agrees well with the original data distribution without using real data. In NaturalInversion, we propose: (1) a *Feature Transfer Pyramid* which uses enhanced image prior of the original data by combining the multi-scale feature maps extracted from the pre-trained classifier, (2) a *one-to-one* approach generative model where only one batch of images are synthesized by one generator to bring the non-linearity to optimization and to ease the overall optimizing process, (3) learnable *Adaptive Channel Scaling* parameters which are end-to-end trained to scale the output image channel to utilize the original image prior further. With our NaturalInversion, we synthesize images from classifiers trained on CIFAR-10/100 and show that our images are more consistent with original data distribution than prior works by visualization and additional analysis. Furthermore, our synthesized images outperform prior works on various applications such as knowledge distillation and pruning, demonstrating the effectiveness of our proposed method. Code is available at '\nauthor:\n- 'Yujin Kim^1,2^, Dogyun Park^1,2^, Dohee Kim^2^, Suhyun Kim^2^[^1]'\n- Author Name\n- 'First Author Name,^1^ Second Author Name, ^2^ Third Author Name ^1^'\nbibliography:\n- 'aaai22.bib'\ntitle:\n- 'NaturalInversion: Data-Free Image Synthesis" -"---\nabstract: 'Magnetic Resonance Imaging (MRI) produces excellent soft tissue contrast, albeit it is an inherently slow imaging modality. Promising deep learning methods have recently been proposed to reconstruct accelerated MRI scans. However, existing methods still suffer from various limitations regarding image fidelity, contextual sensitivity, and reliance on fully-sampled acquisitions for model training. 
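The resetting idea in the value-function record above can be illustrated in a few lines: when the regression target changes between iterations, re-initialise the optimiser's internal moment estimates rather than carrying them over. A toy sketch with a hand-rolled Adam on quadratic losses (illustrative setup, not the paper's Rainbow/Atari experiments):

```python
# Sketch of optimizer "resetting": the loss landscape changes per outer
# iteration, so Adam's first/second moment estimates are re-initialised
# each time instead of being contaminated by the previous problem.
import numpy as np

def adam_step(theta, grad, state, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m, v, t = state
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    t += 1
    theta = theta - lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return theta, (m, v, t)

theta = np.zeros(3)
rng = np.random.default_rng(6)
for target in rng.standard_normal((5, 3)):      # a new optimization problem per iteration
    state = (np.zeros(3), np.zeros(3), 0)       # <-- reset moments for the new target
    for _ in range(200):
        theta, state = adam_step(theta, theta - target, state)  # grad of 0.5*||theta-target||^2
print(np.round(theta - target, 4))              # converged to the latest target
```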
To comprehensively address these limitations, we propose a novel self-supervised deep reconstruction model, named Self-Supervised Diffusion Reconstruction (SSDiffRecon). SSDiffRecon expresses a conditional diffusion process as an unrolled architecture that interleaves cross-attention transformers for reverse diffusion steps with data-consistency blocks for physics-driven processing. Unlike recent diffusion methods for MRI reconstruction, a self-supervision strategy is adopted to train SSDiffRecon using only undersampled k-space data. Comprehensive experiments on public brain MR datasets demonstrate the superiority of SSDiffRecon against state-of-the-art supervised and self-supervised baselines in terms of reconstruction speed and quality. Implementation will be available at .'\nauthor:\n- Yilmaz Korkmaz\n- Tolga Cukur\n- Vishal Patel\nbibliography:\n- 'main.bib'\ntitle: 'Self-Supervised MRI Reconstruction with Unrolled Diffusion Models'\n---\n\n\\#1[~\\#1~]{}\n\nIntroduction\n============\n\nMagnetic Resonance Imaging (MRI) is one of the most widely used imaging modalities due to its excellent soft tissue contrast, but it has prolonged and costly scan sessions. Therefore," -"---\nabstract: 'Describing analytically the transport properties of electrolytes, such as their conductivity or the self-diffusion of the ions, has been a central challenge of chemical physics for almost a century. In recent years, this question has regained some interest in light of Stochastic Density Field Theory (SDFT) \u2013 an analytical framework that allows the approximate determination of density correlations in fluctuating systems. In spite of the success of this theory in describing dilute electrolytes, its extension to concentrated solutions raises a number of technical difficulties, and requires simplified descriptions of the short-range repulsion between the ions. In this article, we discuss recent approximations that were proposed to compute the conductivity of electrolytes, in particular truncations of Coulomb interactions at short distances. We extend them to another observable (the self-diffusion coefficient of the ions) and compare them to earlier analytical approaches, such as the mean spherical approximation and mode-coupling theory. We show how the treatment of hydrodynamic effects in SDFT can be improved, that the choice of the modified Coulomb interactions significantly affects the determination of the properties of the electrolytes, and that comparison with other theories provides a guide to extend SDFT approaches in this context.'\nauthor:\n- Olivier
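The "data-consistency blocks for physics-driven processing" mentioned in the SSDiffRecon record above are commonly implemented as a hard projection in k-space: re-impose the acquired samples on the current image estimate. A minimal sketch of such a step under synthetic assumptions (this is the generic operation found in unrolled MRI networks, not necessarily the paper's exact block):

```python
# Hard data-consistency for undersampled MRI: keep the acquired k-space
# samples, trust the network's estimate everywhere else. Mask, image, and
# measurements below are synthetic stand-ins.
import numpy as np

def data_consistency(x: np.ndarray, y: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Replace the sampled k-space locations of image x by measurements y."""
    k = np.fft.fft2(x)
    k = np.where(mask, y, k)            # acquired samples win at sampled locations
    return np.fft.ifft2(k)

rng = np.random.default_rng(4)
img = rng.standard_normal((64, 64))                     # stand-in image estimate
mask = rng.random((64, 64)) < 0.25                      # ~4x undersampling pattern
y = np.fft.fft2(rng.standard_normal((64, 64))) * mask   # synthetic measurements
x_dc = data_consistency(img, y, mask)
print(np.allclose(np.fft.fft2(x_dc)[mask], y[mask]))    # True: consistency enforced
```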
In contrast to *(ii)*, for a many-to-one model without contracts, we prove that the hospital-optimal matching rule is not obviously manipulable. Furthermore, if we focus on quantile stable rules, then we prove that the doctor-optimal matching rule is the only non-obviously manipulable quantile stable rule.\n\n *JEL classification:* D71, D72.\n\n *Keywords:* obvious manipulations, matching, contracts\nauthor:\n- 'R. Pablo Arribillaga[^1]'\n- Eliana Pepa Risma\nbibliography:\n- 'biblio.bib'\ntitle: 'Obvious Manipulations in Matching with and without Contracts[^2]'\n---\n\nIntroduction\n============\n\nIn the two-sided many-to-one matching model with contracts, there is a bilateral market whose disjoint sides are typically referred to as doctors and hospitals. Each contract refers to a doctor-hospital" -"---\nabstract: |\n We propose a novel surrogate modelling approach to efficiently and accurately approximate the response of complex dynamical systems driven by time-varying exogenous excitations over extended time periods. Our approach, namely *manifold nonlinear autoregressive modelling with exogenous input* (mNARX), involves constructing a problem-specific exogenous input manifold that is optimal for constructing autoregressive surrogates. The manifold, which forms the core of mNARX, is constructed incrementally by incorporating the physics of the system, as well as prior expert and domain knowledge. Because mNARX decomposes the full problem into a series of smaller sub-problems, each with a lower complexity than the original, it scales well with the complexity of the problem, both in terms of training and evaluation costs of the final surrogate. Furthermore, mNARX synergizes well with traditional dimensionality reduction techniques, making it highly suitable for modelling dynamical systems with high-dimensional exogenous inputs, a class of problems that is typically challenging to solve.\n\n Since domain knowledge is particularly abundant in physical systems, such as those found in civil and mechanical engineering, mNARX is well suited for these applications. We demonstrate that mNARX outperforms traditional autoregressive surrogates in predicting the response of a classical coupled spring-mass system excited by a one-dimensional
Given integers $r,d\\ge 0$, we denote by $C^r_d(\\Delta)$ the vector space of piecewise polynomial functions on $\\Delta$ which are continuously differentiable of order $r$. A fundamental problem in numerical analysis and computer-aided geometric" -"---\nabstract: |\n **Introduction**\n\n Clinical trials (CTs) often fail due to inadequate patient recruitment. Finding eligible patients involves comparing the patient\u2019s information with the CT eligibility criteria. Automated patient matching offers the promise of improving the process, yet the main difficulties of CT retrieval lie in the semantic complexity of matching unstructured patient descriptions with semi-structured, multi-field CT documents and in capturing the meaning of negation coming from the eligibility criteria.\n\n **Objectives**\n\n This paper tackles the challenges of CT retrieval by presenting an approach that addresses the patient-to-trials paradigm. Our approach involves two key components in a pipeline-based model: (i) a data enrichment technique for enhancing both queries and documents during the first retrieval stage, and (ii) a novel re-ranking schema that uses a Transformer network in a setup adapted to this task by leveraging the structure of the CT documents.\n\n **Methods**\n\n We use named entity recognition and negation detection in both patient description and the eligibility section of CTs. We further classify patient descriptions and CT eligibility criteria into current, past, and family medical conditions. This extracted information is used to boost the importance of disease and drug mentions in both query and index for lexical retrieval. Furthermore, we" -"---\nauthor:\n- 'S. Zarattini, J. A. L. Aguerri, P. Tarr\u00edo, and E. M. Corsini'\nbibliography:\n- 'bibliografia.bib'\nsubtitle: 'XIII. A paradigm shift: fossil groups as isolated structures rather than relics of the ancient Universe.'\ntitle: Fossil group origins\n---\n\n[In this work we study the large-scale structure around a sample of non-fossil systems and compare the results with earlier findings for a sample of genuine fossil systems selected using their magnitude gap.]{} [We compute the distance from each system to the closest filament and intersection as obtained from a catalogue of galaxies in the redshift range $0.05 \\le z \\le 0.7$. We then estimate the average distances and distributions of cumulative distances to filaments and intersections for different bins of magnitude gap.]{} [We find that the average distance to filaments is $(3.0\\pm 0.8)$ $R_{200}$ for fossil systems, whereas it is $(1.1\\pm 0.1)\\,R_{200}$ for non-fossil systems. Similarly, the average distance to intersections is larger in fossil than in non-fossil systems, with values of $(16.3\\pm 3.2)$ and $(8.9\\pm 1.1) \\,R_{200}$, respectively. Moreover, the cumulative distributions of distances to intersections are statistically different between fossil and non-fossil systems.]{} [Fossil systems selected using the magnitude gap appear to be, on average, more isolated" -"---\nabstract: 'Numerical data imputation algorithms replace missing values by estimates to leverage incomplete data sets. Current imputation methods seek to minimize the error between the unobserved ground truth and the imputed values. But this strategy can create artifacts leading to poor imputation in the presence of multimodal or complex distributions. 
To tackle this problem, we introduce the $k$NN$\\times$KDE algorithm: a data imputation method combining nearest neighbor estimation ($k$NN) and density estimation with Gaussian kernels (KDE). We compare our method with previous data imputation methods using artificial and real-world data with different data missing scenarios and various data missing rates, and show that our method can cope with complex original data structure, yields lower data imputation errors, and provides probabilistic estimates with higher likelihood than current methods. We release the code in open-source for the community[^1].'\nauthor:\n- |\n Lalande Florian florian.lalande@oist.jp\\\n Neural Computation Unit\\\n Okinawa Institute of Science and Technology\\\n 1919-1 Tancha, Onna-son, Okinawa, JAPAN Doya Kenji doya@oist.jp\\\n Neural Computation Unit\\\n Okinawa Institute of Science and Technology\\\n 1919-1 Tancha, Onna-son, Okinawa, JAPAN\nbibliography:\n- 'main.bib'\ntitle: |\n Numerical Data Imputation for Multimodal Data Sets:\\\n A Probabilistic Nearest-Neighbor Kernel Density Approach\n---\n\nBackground and related work {#sec:background}\n===========================\n\nAs sensors" -"---\nabstract: 'Positioning accuracy is a critical requirement for vehicle-to-everything (V2X) use cases. Therefore, this paper derives the theoretical limits of estimation for the position and orientation of vehicles in a cooperative vehicle-to-vehicle (V2V) scenario, using a lens-based multiple-input multiple-output (lens-MIMO) system. Following this, we analyze the Cram$\\acute{\\text{e}}$r-Rao lower bounds (CRLBs) of the position and orientation estimation and explore a received signal model of a lens-MIMO for the particular angle of arrival (AoA) estimation with a V2V geometric model. Further, we propose a lower complexity AoA estimation technique exploiting the unique characteristics of the lens-MIMO for a single target vehicle; as a result, its estimation scheme is effectively extended by the successive interference cancellation (SIC) method for multiple target vehicles. Given these AoAs, we investigate the lens-MIMO estimation capability for the positions and orientations of vehicles. Subsequently, we prove that the lens-MIMO outperforms a conventional uniform linear array (ULA) in a certain configuration of a lens\u2019s structure. Finally, we confirm that the proposed localization algorithm is superior to ULA\u2019s CRLB as the resolution of the lens increases in spite of the lower complexity.'\nauthor:\n- 'Joo-Hyun Jo,\u00a0 Jae-Nam Shim,\u00a0 Byoungnam (Klaus) Kim,\u00a0 Chan-Byoung Chae,\u00a0 and\u00a0Dong Ku Kim,\u00a0 [^1]'\nbibliography:" -"---\nabstract: 'We theoretically study the heat flux between electrons and phonons in a thin metallic film embedded in a suspended dielectric slab (called a *membrane*, in accordance with the established nomenclature), forming a layered structure. The thickness of the membrane is much smaller than the other two dimensions and, in the considered temperature range, is comparable to the dominant phonon wavelength. The thickness of the metallic layer is an order of magnitude smaller than the thickness of the membrane. While the dependence of the heat exchange on the thicknesses of the film and of the membrane has been studied before, it is not yet known how this depends on the position of the film inside the membrane. Here we show that the position strongly influences the heat exchange. 
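To make the kNN×KDE record above concrete: the idea of combining neighbour search with Gaussian kernels can be miniaturized as "sample the missing entry from a KDE built on the k nearest complete rows". This illustrates the general recipe, not the authors' exact algorithm; k and the bandwidth below are arbitrary choices.

```python
import numpy as np

def knn_kde_impute(X, row, miss_j, k=5, bandwidth=0.3, rng=None):
    """Sample a value for the missing column miss_j of `row`.

    Distances use only observed columns; the imputed value is drawn from a
    Gaussian KDE over the k nearest rows. k and bandwidth are illustrative only.
    """
    rng = rng or np.random.default_rng()
    obs = [j for j in range(len(row)) if j != miss_j]
    dist = np.linalg.norm(X[:, obs] - row[obs], axis=1)
    nn = np.argsort(dist)[:k]
    center = X[rng.choice(nn), miss_j]                  # pick one Gaussian kernel...
    return center + bandwidth * rng.standard_normal()   # ...and sample from it

rng = np.random.default_rng(1)
# Bimodal toy data: mean imputation would land between the two modes.
X = np.vstack([rng.normal([0, 0], 0.2, (100, 2)),
               rng.normal([3, 3], 0.2, (100, 2))])
row = np.array([2.9, np.nan])
print(knn_kde_impute(X, row, miss_j=1, rng=rng))        # near 3.0, not near 1.5
```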
If we denote by $T_e$ the effective temperature of the electrons in the metal and by $T_{ph}$ the effective temperature of the phonons (assumed to be uniform in the entire system), then we may write in general the heat power as $P \\equiv P^{(0)}(T_e) - P^{(0)}(T_{ph})$, where $P^{(0)}(T) \\equiv P_s^{(0)}(T) + P_a^{(0)}(T)$, with $P_s^{(0)}(T)$ and $P_a^{(0)}(T)$ being the contributions of the symmetric and antisymmetric Lamb modes, respectively. In the low temperature" -"---\nabstract: 'As research interests in medical image analysis become increasingly fine-grained, the cost for extensive annotation also rises. One feasible way to reduce the cost is to annotate with coarse-grained superclass labels while using limited fine-grained annotations as a complement. In this way, fine-grained data learning is assisted by ample coarse annotations. Recent studies in classification tasks have adopted this method to achieve satisfactory results. However, there is a lack of research on efficient learning of fine-grained subclasses in semantic segmentation tasks. In this paper, we propose a novel approach that leverages the hierarchical structure of categories to design network architecture. Meanwhile, a task-driven data generation method is presented to make it easier for the network to recognize different subclass categories. Specifically, we introduce a Prior Concatenation module that enhances confidence in subclass segmentation by concatenating predicted logits from the superclass classifier, a Separate Normalization module that stretches the intra-class distance within the same superclass to facilitate subclass segmentation, and a HierarchicalMix model that generates high-quality pseudo labels for unlabeled samples by fusing only similar superclass regions from labeled and unlabeled images. Our experiments on the BraTS2021 and ACDC datasets demonstrate that our approach achieves comparable accuracy to a" -"---\nbibliography:\n- 'giulia.bib'\n- 'library.bib'\n---\n\n[**Thermodynamic phase diagram of the competition between superconductivity and charge order in cuprates** ]{}\n\nGiulia Venditti^1\\*^, Ilaria Maccari^2^, Jose Lorenzana^3$\\dagger$^ and Sergio Caprara^3^\n\n[**1**]{} SPIN-CNR Institute for Superconducting and other Innovative Materials and Devices, Area della Ricerca di Tor Vergata, Via del Fosso del Cavaliere 100, 00133 Rome, Italy\\\n[**2**]{} Department of Physics, Stockholm University, Stockholm SE-10691, Sweden\\\n[**3**]{} ISC-CNR and Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro 2, 00185, Rome, Italy\\\n\\*giulia.venditti@spin.cnr.it, $^\\dagger$jose.lorenzana@cnr.it\n\nAbstract {#abstract .unnumbered}\n========\n\n[**We argue that there is a special doping point in the phase diagram of cuprates, such that the condensation of holes into a charge-ordered and into a superconducting phase are degenerate in energy but with an energy barrier in between. We present Monte Carlo simulations of this problem without and with quenched disorder in two-dimensions. While in the clean case, charge order and superconductivity are separated by a first-order line which is nearly independent of temperature, in the presence of quenched disorder, charge order is fragmented into domains separated by superconducting filaments reminiscent of the supersolid behaviour in $^4$He. 
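Returning to the coarse-to-fine segmentation record above: its Prior Concatenation module is described only in words, as concatenating predicted superclass logits onto the features that feed the subclass classifier. Below is my reading of that wiring as a PyTorch sketch; channel counts are hypothetical and this is not the authors' code.

```python
import torch
import torch.nn as nn

class PriorConcatHead(nn.Module):
    """Sketch: subclass logits predicted from backbone features concatenated
    with superclass logits (the 'prior'). Channel counts are hypothetical."""

    def __init__(self, feat_ch=64, n_super=3, n_sub=8):
        super().__init__()
        self.super_head = nn.Conv2d(feat_ch, n_super, kernel_size=1)
        self.sub_head = nn.Conv2d(feat_ch + n_super, n_sub, kernel_size=1)

    def forward(self, feats):
        super_logits = self.super_head(feats)
        sub_in = torch.cat([feats, super_logits], dim=1)   # prior concatenation
        return super_logits, self.sub_head(sub_in)

feats = torch.randn(2, 64, 32, 32)                  # fake backbone feature map
sup, sub = PriorConcatHead()(feats)
print(sup.shape, sub.shape)                         # (2, 3, 32, 32) and (2, 8, 32, 32)
```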
Assuming weak interlayer couplings, the resulting three-dimensional phase diagram is in good agreement" -"---\nabstract: 'Monitoring key elements of disease dynamics (e.g., prevalence, case counts) is of great importance in infectious disease prevention and control, as emphasized during the COVID-19 pandemic. To facilitate this effort, we propose a new capture-recapture (CRC) analysis strategy that takes misclassification into account from easily-administered, imperfect diagnostic test kits, such as the Rapid Antigen Test-kits or saliva tests. Our method is based on a recently proposed \u201canchor stream\u201d design, whereby an existing voluntary surveillance data stream is augmented by a smaller and judiciously drawn random sample. It incorporates manufacturer-specified sensitivity and specificity parameters to account for imperfect diagnostic results in one or both data streams. For inference to accompany case count estimation, we improve upon traditional Wald-type confidence intervals by developing an adapted Bayesian credible interval for the CRC estimator that yields favorable frequentist coverage properties. When feasible, the proposed design and analytic strategy provides a more efficient solution than traditional CRC methods or random sampling-based biased-corrected estimation to monitor disease prevalence while accounting for misclassification. We demonstrate the benefits of this approach through simulation studies that underscore its potential utility in practice for economical disease monitoring among a registered closed population.'\nauthor:\n- |\n Lin Ge$^*$, Yuzi" -"---\nabstract: |\n This research proposes a new methodology for optimizing MPI collective communication, specifically the `MPI_Alltoallv` in HPC applications like HeFFTe, a scalable parallel solver for Fast Fourier Transforms (FFTs). Standard implementations of alltoallv consist either of sending to a single process and receiving from a single process at each step, bottlenecked by synchronization costs, or initializing all communication at one time, incurring large costs associated with network contention and queue search costs. The authors present novel methods that eliminate synchronization costs without communicating a large number of messages at once.\n\n This paper measures the impact of the various alltoallv methods within HeFFTe. Results are analyzed within Beatnik, a Z-model solver that is bottlenecked by HeFFTe and representative of applications that rely on FFTs. We evaluate our methodology on UTC\u2019s Epyc cluster. This cluster consists of 16 compute nodes based on the AMD EPYC 7662 128-core processor.\n\n We made a significant discovery regarding the optimization of OpenMPI `MPI_Alltoallv` by utilizing MPI Advance\u2019s algorithms. We observed notable reduction in the minimum, maximum, average time and improvements in the scalability.\nauthor:\n- Evelyn Namugwanya\n- Amanda Bienz\n- Derek Schafer\n- Anthony Skjellum\nbibliography:\n- 'references.bib'\ntitle: 'Collective-Optimized FFTs '\n---" -"---\nabstract: |\n Cloud platforms are widely adopted by many systems, such as time series processing systems, to store and process massive amounts of sensitive time series data. Unfortunately, several incidents have shown that cloud platforms are vulnerable to internal and external attacks that lead to critical data breaches. 
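On the misclassification adjustment in the capture-recapture record above: the full anchor-stream estimator is out of scope for a snippet, but the classical Rogan-Gladen correction shows the core move of folding kit sensitivity and specificity into an apparent prevalence. All numbers below are invented.

```python
def rogan_gladen(p_observed, sensitivity, specificity):
    """Classical correction of an apparent prevalence for test misclassification.
    The result is truncated into [0, 1]."""
    p = (p_observed + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(p, 0.0), 1.0)

# Invented example: 8% of rapid tests come back positive; the kit's spec
# sheet states 85% sensitivity and 98% specificity.
print(f"corrected prevalence: {rogan_gladen(0.08, 0.85, 0.98):.3f}")  # ~0.072
```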
Adopting cryptographic protocols such as homomorphic encryption and secure multi-party computation adds high computational and network overhead to query operations.\n\n    We present [TimeClave]{}, a fully oblivious in-enclave time series processing system: [TimeClave]{} leverages Intel SGX to support aggregate statistics on time series with minimal memory consumption inside the enclave. To hide the access pattern inside the enclave, we introduce a non-blocking read-optimised ORAM named [RoORAM]{}. [TimeClave]{} integrates [RoORAM]{} to obliviously and securely handle client queries with high performance. With an aggregation time interval of $10s$, $2^{14}$ summarised data blocks and 8 aggregate functions, [TimeClave]{} runs a point query in $0.03ms$ and a range query over 50 intervals in $0.46ms$. Compared to the ORAM baseline, [TimeClave]{} achieves lower query latency by up to $2.5\\times$ and up to $2\\times$ throughput, with up to 22K queries per second.\nauthor:\n- Kassem Bagher\n- Shujie Cui\n- Xingliang Yuan\n- Carsten Rudolph\n- Xun Yi\nbibliography:" -"---\nabstract: 'An orbital current can be generated whenever an object has a translational and rotational degree of freedom. In condensed matter physics, intra-atomic contributions to the transverse orbital transport, labeled orbital Hall effect, rely on propagating wave packets that must consist of hybridized atomic orbitals. However, inter-atomic contributions have to be considered as well because they give rise to a new mechanism for generating orbital currents. As we show, even wave packets consisting purely of $s$ electrons can transport orbital angular momentum if they move on a cycloid trajectory. We introduce the kagome lattice with a single $s$ orbital per atom as the minimal model for the orbital Hall effect and observe the cycloid motion of the electrons in the surface states.'\nauthor:\n- Oliver Busch\n- Ingrid Mertig\n- 'B[\u00f6]{}rge G[\u00f6]{}bel'\nbibliography:\n- 'short.bib'\n- 'MyLibrary.bib'\ntitle: Orbital Hall effect and orbital edge states caused by $s$ electrons\n---\n\nIntroduction\n============\n\nThe field of orbitronics is concerned with the orbital degree of freedom of electrons instead of their spin and charge\u00a0[@go2021orbitronics]. Despite the fact that orbital quenching\u00a0[@kittel2004surface] leads to a suppressed orbital magnetization in most solids, orbital currents often surpass spin currents in magnitude, as the" -"---\nabstract: 'In this review, we examine an extended Bayesian inference method and its relation to biological information processing. We discuss the idea of combining two modes of Bayesian inference. The first is the standard Bayesian inference which contracts probability space. The second is its inverse, which extends and enriches the probability space of latent and observable variables. Their combination has been observed to greatly facilitate discovery. Moreover, this dual search during the updating process elucidates a crucial difference between biological and artificial information processing. The latter is restricted due to nonlinearities, while the former utilizes them. This duality is ubiquitous in biological information-processing dynamics (\u2018flee-or-fight\u2019, \u2018explore-or-exploit\u2019 etc.) as is the role of fractality and chaos in its underlying nonequilibrium, nonlinear dynamics. 
We also propose a new experimental set up that stems from testing these ideas.'\nauthor:\n- 'Vasileios Basios, Yukio-Pegio Gunji & Pier-Francesco Moretti'\ntitle: Extending the Bayesian Framework from Information to Action\n---\n\nIntroduction {#intro}\n============\n\n> *\u201cExplore different areas.The statement that one cannot be both deep and broad is a myth. Actually, the importance of being a polymath is that it allows one to make remote associations, and thus to understand the deeper essence of things." -"---\nabstract: 'Most detector systems used for positron emission particle tracking (PEPT) are very expensive due to the use of inorganic plastic scintillators combined with a high number of readout electronic channels. This work aims to reduce the overall cost of a PEPT-capable detector system by using large and cost-effective plastic scintillators and developing custom 2$\\times$2 silicon photomultiplier (SiPM) arrays, preamplifiers, and discriminators. The use of long (20mm\u00a0$\\times$\u00a020mm\u00a0$\\times$\u00a01000mm) plastic scintillator bars read out with photodetectors only at their respective ends allows an overall smaller number of photodetectors and associated readout electronics, which in turn reduces the overall cost of the system. In addition, the development of a custom SiPM array and preamplifier allows a free selection of interconnection and readout, as most commercial producers only offer specific types of interconnections and therefore lack other connections such as serial or hybrid. Thus, several common circuit types for SiPMs and preamplifiers were tested and compared in this work, and it was found that a serial connection implemented in a hybrid interconnection for the SiPMs and an inverting preamplifier based on a high-frequency operational amplifier provided the best results for the proposed detector system. Measured with a $^{22}$Na source," -"---\nabstract: 'We study the satisfiability problem of symbolic finite automata and decompose it into the satisfiability problem of the theory of the input characters and the monadic second-order theory of the indices of accepted words. We use our decomposition to obtain tight computational complexity bounds on the decision problem for this automata class and an extension that considers linear arithmetic constraints on the underlying effective Boolean algebra.'\nauthor:\n- Rodrigo Raya\ntitle: The Complexity of Satisfiability Checking for Symbolic Finite Automata\n---\n\nIntroduction {#section:intro}\n============\n\nSymbolic finite automata (SFAs) are an extension of finite automata that allow transitions to be labelled with monadic predicates over some universe rather than symbols from a finite alphabet. They were first mentioned in [@watson_implementing_1996], but they attracted renewed interest starting in [@veanes_rex_2010]. SFAs have been used in a variety of applications including the analysis of regular expressions [@veanes_rex_2010; @dantoni_minimization_2014], string encoders, sanitizers [@hooimeijer_fast_2011; @dantoni_extended_2015; @hu_automatic_2017], functional programs [@dantoni_fast_2015], code generation, parallelization [@saarikivi_fusing_2017] and symbolic matching [@saarikivi_symbolic_2019].\n\nA series of theoretical investigations has been carried out on this automata model, including [@dantoni_minimization_2014; @tamm_theoretical_2018; @argyros_learnability_2018]. 
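A concrete rendering of the SFA record above: because transitions carry predicates instead of letters, deciding whether an SFA accepts any word at all reduces to reachability of an accepting state along transitions whose guards are satisfiable. The toy below uses integer intervals as the effective Boolean algebra; states and guards are invented.

```python
from collections import deque

# SFA sketch: guards are half-open integer intervals; a guard is satisfiable
# iff its interval is nonempty. The automaton below is invented.
transitions = {
    0: [((10, 20), 1)],          # from state 0, any character in [10, 20) moves to 1
    1: [((5, 5), 2),             # empty interval: this edge can never fire
        ((0, 3), 3)],
}
accepting = {3}

def nonempty(start=0):
    """BFS over states, following only transitions with satisfiable guards."""
    seen, todo = {start}, deque([start])
    while todo:
        q = todo.popleft()
        if q in accepting:
            return True
        for (lo, hi), r in transitions.get(q, []):
            if lo < hi and r not in seen:    # interval-algebra satisfiability check
                seen.add(r)
                todo.append(r)
    return False

print(nonempty())   # True: 0 --[10,20)--> 1 --[0,3)--> 3 reaches an accepting state
```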
In particular, the authors of [@veanes_monadic_2017] observed that such an automata model had been studied previously by B\u00e8s in [@bes_application_2008]." -"---\nabstract: 'A near-field wideband communication system is studied, wherein a base station (BS) employs an extremely large-scale antenna array (ELAA) to serve multiple users situated within its near-field region. To facilitate the near-field beamfocusing and mitigate the wideband beam split, true-time delayer (TTD)-based hybrid beamforming architectures are employed at the BS. Apart from the fully-connected TTD-based architecture, a new sub-connected TTD-based architecture is proposed for enhancing energy efficiency. Three wideband beamfocusing optimization approaches are proposed to maximize spectral efficiency for both architectures. 1) *Fully-digital approximation (FDA) approach*: In this approach, the TTD-based hybrid beamformers are optimized to approximate the optimal fully-digital beamformers using block coordinate descent. 2) *Penalty-based FDA approach*: In this approach, the penalty method is leveraged in the FDA approach to guarantee the convergence to a stationary point of the spectral maximization problem. 3) *Heuristic two-stage (HTS) approach*: In this approach, the closed-form TTD-based analog beamformers are first designed based on the outcomes of near-field beam training and the piecewise-near-field approximation. Subsequently, the low-dimensional digital beamformer is optimized using knowledge of the low-dimensional equivalent channels, resulting in reduced computational complexity and channel estimation complexity. Our numerical results unveil that 1) the proposed approaches effectively eliminate the near-field" -"---\nabstract: 'We define a homological action on sutured instanton Floer homology. This action is well-defined up to scalars, and behaves well under connected sums and sutured manifold decompositions. As an application, we show that instanton knot homology detects link splitting for two-component links.'\naddress: 'Department of Mathematics, Stanford University, Stanford, CA, 94305'\nauthor:\n- Hongjian Yang\nbibliography:\n- 'ref.bib'\ntitle: A Homological Action on Sutured Instanton Homology\n---\n\nIntroduction\n============\n\nSutured manifold theory, introduced by Gabai [@gabai1983foliations], has shown to be a very useful machinery to study knots and $3$-manifolds. Its interaction with Floer theory leads to even more interesting results. It is done in two ways: using Gabai\u2019s result [@gabai1987foliations] to produce taut foliations and then obtain non-vanishing results by relations to contact structures [@eliashberg1998confoliations; @eliashberg2004few; @kronheimer2004witten; @ozsvath2004holomorphic], or using the sutured manifold hierarchies directly.\n\nThe second approach was first realized by Juh\u00e1sz [@juhasz2006holomorphic] in the setting of Heegaard Floer theory, and later by Kronheimer and Mrowka [@kronheimer2010knots] for monopole and instanton Floer theories. 
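On the true-time delayers in the wideband record above: unlike a phase shifter calibrated at one carrier, a TTD element applies a delay whose phase rotation scales linearly with frequency, which is what keeps a wideband beam focused. A small numpy check of that property (array size, carrier, and bandwidth are arbitrary):

```python
import numpy as np

c = 3e8                                   # speed of light (m/s)
N = 16                                    # array elements (arbitrary)
fc, bw = 28e9, 2e9                        # carrier and bandwidth (arbitrary)
d = c / (2 * fc)                          # half-wavelength spacing at the carrier
theta = np.deg2rad(30)                    # intended steering direction

# True-time delays matched to the geometric path difference per element.
tau = np.arange(N) * d * np.sin(theta) / c

for f in (fc - bw / 2, fc, fc + bw / 2):
    w = np.exp(-2j * np.pi * f * tau)     # TTD weights: phase grows with f
    a = np.exp(2j * np.pi * f * np.arange(N) * d * np.sin(theta) / c)
    print(f"f = {f / 1e9:5.1f} GHz  normalized gain = {abs(w @ a) / N:.3f}")
# Prints 1.000 at all three frequencies; frequency-flat phase shifts tuned
# only at fc would lose gain at the band edges (the beam-split effect).
```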
In particular, it leads to series of topological applications, including fibered knot detection for knot Floer homology [@ghiggini2008knot; @ni2007knot], and unknot detection for Khovanov homology [@kronheimer2011khovanov].\n\nThe homological action on Floer homology groups can" -"---\nabstract: 'We show that when a $K3$ surface acquires a node, the existence of stable spherical sheaves of certain Chern classes can be obstructed.'\nauthor:\n- Yeqin Liu\nbibliography:\n- 'sphvbsing.bib'\ntitle: Spherical vector bundles on nodal $K3$ surfaces\n---\n\nIntroduction\n============\n\n\\[nodalK3\\] In this paper, a *nodal $K3$ surface* is a projective surface $X$ with a unique ordinary double point, $\\omega_{X}\\cong \\mathcal{O}_{X}$, and $ \\mathrm{H}^{1}(X, \\mathcal{O}_{X})=0$. A nodal $K3$ surface is called *general*, if the group of Cartier divisors $\\mathrm{Pic}(X)=\\mathbb{Z}$.\n\nThroughout the paper, let $p$ be the unique singularity of a nodal $K3$ surface $X$. The blow up $\\widetilde{X}=\\mathrm{bl}_{p}X$ is a smooth $K3$ surface, and the exceptional divisor $L$ is a $(-2)$-curve on $\\widetilde{X}$. Let $\\pi: \\widetilde{X} \\rightarrow X$ be the contraction. Let $\\mathrm{Pic}(X)$ be the group of Cartier divisors and $\\mathrm{Cl}(X)$ be the group of Weil divisors.\n\nMain theorem\n------------\n\nIn [@Sim94], moduli spaces of stable sheaves on singular varieties are constructed. Stable sheaves on smooth $K3$ surfaces and their moduli spaces have been extensively studied (e.g. [@KLS06; @O'G99; @PR14; @Yos01; @Yos99]). However, less is known when they specialize to a singular surface. In this paper we study stable spherical sheaves on general nodal $K3$ surfaces (Definition" -"---\nabstract: 'With the rapid development of Graph Neural Networks (GNNs), more and more studies focus on system design to improve training efficiency while ignoring the efficiency of GNN inference. Actually, GNN inference is a non-trivial task, especially in industrial scenarios with giant graphs, given three main challenges, i.e., scalability tailored for full-graph inference on huge graphs, inconsistency caused by stochastic acceleration strategies (e.g., sampling), and the serious redundant computation issue. To address the above challenges, we propose a scalable system named InferTurbo to boost the GNN inference tasks in industrial scenarios. Inspired by the philosophy of \u201cthink-like-a-vertex\", a GAS-like (Gather-Apply-Scatter) schema is proposed to describe the computation paradigm and data flow of GNN inference. The computation of GNNs is expressed in an iteration manner, in which a vertex would gather messages via in-edges and update its state information by forwarding an associated layer of GNNs with those messages and then send the updated information to other vertexes via out-edges. Following the schema, the proposed InferTurbo can be built with alternative backends (e.g., batch processing system or graph computing system). Moreover, InferTurbo introduces several strategies like shadow-nodes and partial-gather to handle nodes with large degrees for better load balancing. With" -"---\nabstract: |\n Droplet impact on surfaces is ubiquitous in many natural and industrial processes. While the impact dynamics of droplets composed of simple fluids have been studied extensively, droplets containing particles are less explored, but are more application relevant. The non-Newtonian behavior of particle suspension introduces new physics affecting the impact dynamics. 
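The Gather-Apply-Scatter schema in the InferTurbo record above can be miniaturized in a few lines: every vertex gathers messages over its in-edges, applies an update, and exposes its new state for the next round. Mean aggregation below stands in for an arbitrary GNN layer; the graph and features are invented.

```python
import numpy as np

# Toy GAS (Gather-Apply-Scatter) iteration over an adjacency list.
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]          # invented directed graph
n, dim = 4, 8
h = np.random.default_rng(0).standard_normal((n, dim))
in_edges = {v: [u for u, w in edges if w == v] for v in range(n)}

def gas_step(h):
    h_new = h.copy()
    for v in range(n):                            # "think like a vertex"
        if in_edges[v]:
            msg = h[in_edges[v]].mean(axis=0)     # gather over in-edges
            h_new[v] = 0.5 * h[v] + 0.5 * msg     # apply (toy stand-in for a GNN layer)
    return h_new                                  # scatter: the next round reads h_new

for _ in range(2):                                # one iteration per GNN layer
    h = gas_step(h)
print(h.shape)                                    # (4, 8)
```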
Here, we investigated the impact dynamics of droplets containing cornstarch particles on a deep water pool and systematically characterized the impact outcomes with various Weber number and particle volume fractions. Distinctive phenomena compared to Newtonian droplet impact have been observed. A regime map of the impact outcomes is unveiled and the transition boundaries are quantified with scaling analysis. Rheology of the suspension is found to play a pivotal role in giving rise to distinct impact outcomes. The results lay the foundation for further characterization of the dynamics of suspension droplet impacting on liquid surfaces and can be translated to other suspension fluids.\n\n **Keywords:** Drop impact; Non-colloidal suspension; Shear thickening; Complex fluids; Jamming; Soft matter.\nauthor:\n- Boqian Yan\n- Xiaoyu Tang\nbibliography:\n- 'cornstarch.bib'\ntitle: '**Impact Dynamics of Droplet Containing Particle Suspensions on Deep Liquid Pool**'\n---\n\n\\[section1\\] Introduction\n=========================\n\nDroplet impact on surfaces [@yarin2006drop; @rein1993phenomena; @pan2007dynamics;" -"---\nabstract: 'Alpha matting is widely used in video conferencing as well as in movies, television, and social media sites. Deep learning approaches to the matte extraction problem are well suited to video conferencing due to the consistent subject matter (front-facing humans), however training-based approaches are somewhat pointless for entertainment videos where varied subjects (spaceships, monsters, etc.) may appear only a few times in a single movie \u2013 if a method of creating ground truth for training exists, just use that method to produce the desired mattes. We introduce a *training-free* high quality neural matte extraction approach that specifically targets the assumptions of visual effects production. Our approach is based on the deep image prior, which optimizes a deep neural network to fit a single image, thereby providing a deep encoding of the particular image. We make use of the representations in the penultimate layer to interpolate coarse and incomplete \u201ctrimap\u201d constraints. Videos processed with this approach are temporally consistent. The algorithm is both very simple and surprisingly effective.'\nauthor:\n- Sharif Elcott\n- 'J.P.\u00a0Lewis'\n- Nori Kanazawa\n- Christoph Bregler\nbibliography:\n- 'REFERENCES.bib'\ntitle: 'Training-Free Neural Matte Extraction for Visual Effects'\n---\n\n<ccs2012> <concept> <concept\\_id>10010147.10010371.10010382.10010383</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Image" -"---\nabstract: 'A version of $\\mathcal{N} = 1$ supersymmetric scalar electrodynamics is considered here, and it is shown that an electrically charged nontopological soliton exists in this model. In addition to the long-range electric field, the soliton also possesses a long-range scalar field, which leads to a modification of the intersoliton interaction potential at large distances. The supersymmetry of the model makes it possible to express fermionic zero modes of the soliton in terms of bosonic fields. 
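For the droplet record above, the governing dimensionless group is the Weber number, $\mathrm{We} = \rho v^2 D / \sigma$, the ratio of inertia to surface tension. A tiny helper with rough placeholder properties (the record's actual regime boundaries are not reproduced here):

```python
def weber(rho, v, D, sigma):
    """We = rho * v**2 * D / sigma, all quantities in SI units."""
    return rho * v ** 2 * D / sigma

# Invented example: a 2 mm suspension droplet at 1.5 m/s, with rough
# water-like placeholder values for density and surface tension.
print(f"We = {weber(rho=1200.0, v=1.5, D=2e-3, sigma=0.07):.1f}")   # ~77
```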
The properties of the nontopological soliton are investigated using analytical and numerical methods.'\naddress: 'Laboratory of Applied Mathematics and Theoretical Physics, Tomsk State University of Control Systems and Radioelectronics, 634050 Tomsk, Russia'\nauthor:\n- 'A.Yu.\u00a0Loginov'\nbibliography:\n- 'article.bib'\ntitle: 'A nontopological soliton in an $\\mathcal{N} = 1$ supersymmetric gauge Abelian model'\n---\n\nnontopological soliton ,electric charge ,supersymmetry ,fermionic zero modes\n\nIntroduction {#seq:I}\n============\n\nMany models of field theory have solutions that describe spatially localised and nonspreading field configurations with a finite energy [@Manton; @Rubakov]. Nontopological solitons [@lee_pang_1992] represent one of these field configurations. A necessary condition for the existence of a nontopological soliton is the symmetry of the corresponding field model, which may be both global and local. In addition, the interaction" -"---\nabstract: 'Lexical complexity prediction (LCP) is the task of predicting the complexity of words in a text on a continuous scale. It plays a vital role in simplifying or annotating complex words to assist readers. To study lexical complexity in Japanese, we construct the first Japanese LCP dataset. Our dataset provides separate complexity scores for Chinese/Korean annotators and others to address the readers\u2019 L1-specific needs. In the baseline experiment, we demonstrate the effectiveness of a BERT-based system for Japanese LCP.'\nauthor:\n- |\n \\\n **Yusuke Ide${}^{1}$ $\\;\\;\\;$ Masato Mita${}^2$ $\\;\\;\\;$ Adam Nohejl${}^1$ $\\;\\;\\;$ Hiroki Ouchi${}^{1,3}$ $\\;\\;\\;$ Taro Watanabe${}^1$**\\\n ${}^1$Nara Institute of Science and Technology ${}^2$CyberAgent Inc. ${}^3$RIKEN\\\n `{ide.yusuke.ja6, nohejl.adam.mt3, hiroki.ouchi, taro}@is.naist.jp,`\\\n `mita_masato@cyberagent.co.jp`\\\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: 'Japanese Lexical Complexity for Non-Native Readers: A New Dataset'\n---\n\n=1\n\nIntroduction\n============\n\nReading comprehension requires a certain level of vocabulary knowledge. The results reported by @Hu2000 suggest that most English learners need to understand 98% of tokens in a text to comprehend it. A follow-up study by @Komori2004-es estimates the percentage to be 96% for Japanese learners to comprehend text. Acquiring vocabulary to reach such levels, in turn, is a lengthy and challenging task for learners. This opens up opportunities" -"---\nabstract: 'Cold atom traps are at the heart of many quantum applications in science and technology. The preparation and control of atomic clouds involves complex optimization processes, that could be supported and accelerated by machine learning. In this work, we introduce reinforcement learning to cold atom experiments and demonstrate a flexible and adaptive approach to control a magneto-optical trap. Instead of following a set of predetermined rules to accomplish a specific task, the objectives are defined by a reward function. This approach not only optimizes the cooling of atoms just as an experimentalist would do, but also enables new operational modes such as the preparation of pre-defined numbers of atoms in a cloud. The machine control is trained to be robust against external perturbations and able to react to situations not seen during the training. 
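The cold-atom record above replaces hand-written control rules with a reward function. The experiment's interface is not part of this snippet, so everything below is hypothetical: a fake environment stands in for the apparatus, and plain random search stands in for the reinforcement-learning agent, purely to show how "prepare a target atom number" becomes a reward.

```python
import numpy as np

N_TARGET = 1.0e6                   # desired atom number (hypothetical)

def reward(n_atoms):
    """Reward shaping for 'prepare a pre-defined number of atoms'."""
    return -abs(n_atoms - N_TARGET) / N_TARGET

class FakeMOTEnv:
    """Stand-in for the apparatus: atom number responds to one control knob.
    The response curve below is invented, not measured."""
    def step(self, action):
        return 1.2e6 / (1.0 + (action - 0.7) ** 2)

env = FakeMOTEnv()
rng = np.random.default_rng(0)
actions = rng.uniform(0.0, 2.0, 50)
best = max(((a, reward(env.step(a))) for a in actions), key=lambda t: t[1])
print(f"best action {best[0]:.2f} -> reward {best[1]:.3f}")
```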
Finally, we show that the time-consuming training can be performed in silico using a generic simulation and demonstrate successful transfer to the real-world experiment.'\nauthor:\n- Malte Reinschmidt\n- J\u00f3zsef Fort\u00e1gh\n- Andreas G\u00fcnther\n- Valentin Volchkov\ntitle: Reinforcement Learning in Ultracold Atom Experiments\n---\n\nLaser cooling and magneto-optical trapping of neutral atoms have been among the major breakthroughs in science in" -"---\nabstract: |\n    We consider quantum variants of Parrondo games on low-dimensional Hilbert spaces. The two games which form the Parrondo game are implemented as quantum walks on a small cycle of length $M$. The dimension of the Hilbert space is $2M$. We investigate a random sequence of these two games which is realized by a quantum coin, so that the total Hilbert space dimension is $4M$. We show that in the quantum Parrondo game constructed in this way a systematic win or loss occurs in the long time limit. Due to entanglement and self-interference on the cycle, the game yields a rather complex structure for the win or loss depending on the parameters.\n\n    Keywords: Quantum games; Parrondo\u2019s paradox; Quantum Parrondo games; Entanglement; Self-interference\nauthor:\n- |\n    Andreas Mielke[^1]\\\n    Institut f[\u00fc]{}r Theoretische Physik\\\n    Ruprecht-Karls-Universit[\u00e4]{}t Heidelberg\\\n    Philosophenweg 12\\\n    D-69120 Heidelberg, Germany\ntitle: 'Quantum Parrondo Games in Low-Dimensional Hilbert Spaces'\n---\n\nIntroduction {#sec:org17068b9}\n============\n\nSince their initial description by Harmer and Abbott [@Harmer_Abbott_1999], [@Abbott_Harmer_1999] in 1999, classical Parrondo games have attracted a lot of attention and many different variants have been proposed. In all these variants, two simple fair (or losing but almost fair) games are played in some regular or random sequence" -"---\nabstract: |\n    A RAC graph is one admitting a RAC drawing, that is, a polyline drawing in which each crossing occurs at a right angle. Originally motivated by psychological studies on readability of graph layouts, RAC graphs form one of the most prominent graph classes in beyond planarity.\n\n    In this work, we study a subclass of RAC graphs, called axis-parallel RAC (or apRAC, for short), that restricts the crossings to pairs of axis-parallel edge-segments. apRAC drawings combine the readability of planar drawings with the clarity of (non-planar) orthogonal drawings. We consider these graphs both with and without bends. Our contribution is as follows: (i)\u00a0We study inclusion relationships between apRAC and traditional RAC graphs. (ii)\u00a0We establish bounds on the edge density of apRAC graphs. (iii)\u00a0We show that every graph with maximum degree $8$ is $2$-bend apRAC and give a linear-time drawing algorithm. Some of our results on apRAC graphs also improve the state of the art for general RAC graphs. We conclude our work with a list of open questions and a discussion of a natural generalization of the apRAC model.\nauthor:\n- Patrizio\u00a0Angelini\n- 'Michael\u00a0A.\u00a0Bekos'\n- Julia\u00a0Katheder\n- Michael\u00a0Kaufmann\n-" -"---\nabstract: 'Reconfigurable robot swarms are capable of connecting with each other to form complex structures. Current mechanical or magnetic connection mechanisms can be complicated to manufacture, consume high power, have a limited load-bearing capacity, or can only form rigid structures. In this paper, we present our low-cost soft anchor design that enables flexible coupling and decoupling between robots. 
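Back in the quantum Parrondo record above, each game is a discrete-time quantum walk on a cycle of length M, so the walker lives in a 2M-dimensional coin-times-position space. A bare-bones numpy walk of this kind (Hadamard coin chosen arbitrarily; the record's biased game coins and their random sequencing are not reproduced):

```python
import numpy as np

M = 8                                             # cycle length; total dim = 2 * M
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard coin (arbitrary choice)

psi = np.zeros((2, M), dtype=complex)             # psi[c, x]: coin c, position x
psi[0, 0] = 1.0

def step(psi):
    psi = np.tensordot(H, psi, axes=(1, 0))       # toss the coin at every site
    out = np.empty_like(psi)
    out[0] = np.roll(psi[0], 1)                   # coin 0 walks one site clockwise
    out[1] = np.roll(psi[1], -1)                  # coin 1 walks counter-clockwise
    return out

for _ in range(20):
    psi = step(psi)
prob = (np.abs(psi) ** 2).sum(axis=0)
print(np.round(prob, 3), float(prob.sum()))       # interference pattern, norm 1.0
```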
Our asymmetric anchor requires minimal force to be pushed into the opening of another robot while having a strong pulling force so that the connection between robots can be secured. To maintain this flexible coupling mechanism as an assembled structure, we present our Model Predictive Control (MPC) frameworks with polygon constraints to model the geometric relationship between robots. We conducted experiments on the soft anchor to obtain its force profile, which informed the three-bar linkage model of the anchor in the simulations. We show that the proposed mechanism and MPC frameworks enable the robots to couple, decouple, and perform various behaviors in both the simulation environment and hardware platform. Our code is available at . Video is available at .'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: Reconfigurable Robot Control Using Flexible Coupling Mechanisms\n---\n\nIntroduction\n============\n\nRobot swarms demonstrated collective behaviors" -"---\nabstract: 'For a given class of materials, universal displacements are those displacements that can be maintained for any member of the class by applying only boundary tractions. In this paper we study universal displacements in compressible anisotropic linear elastic solids reinforced by a family of inextensible fibers. For each symmetry class and for a uniform distribution of straight fibers respecting the corresponding symmetry we characterize the respective universal displacements. A goal of this paper is to investigate how an internal constraint affects the set of universal displacements. We have observed that other than the triclinic and cubic solids in the other five classes (a fiber-reinforced solid with straight fibers cannot be isotropic) the presence of inextensible fibers enlarges the set of universal displacements.'\nauthor:\n- 'Arash Yavari[^1]'\nbibliography:\n- 'ref.bib'\ntitle: '**Universal Displacements in Inextensible Fiber-Reinforced Linear Elastic Solids**'\n---\n\nKeywords:\n\n: Universal displacement, universal deformation, fiber-reinforced solids, anisotropic solids.\n\nIntroduction\n============\n\nA universal motion (deformation or displacement) is one that can be maintained in the absence of body forces for all materials in some given class. In other words, a universal motion of a body can be maintained by applying only boundary tractions when the body is made" -"---\nabstract: 'Shock waves are common in astrophysical environments. On many occasions, they are collisionless, which means they occur in settings where the mean free path is much larger than the dimensions of the system. For this very reason, magnetohydrodynamic (MHD) is not equipped to deal with such shocks, be it because it assumes binary collisions, hence temperature isotropy, when such isotropy is not guaranteed in the absence of collisions. Here we solve a model capable of dealing with perpendicular shocks with anisotropic upstream pressure. The system of MHD conservation equations is closed assuming the temperature normal to the flow is conserved at the crossing of the shock front. In the strong shock sonic limit, the behavior of a perpendicular shock with isotropic upstream is retrieved, regardless of the upstream anisotropy. Generally speaking, a rich variety of behaviors is found, inaccessible to MHD, depending on the upstream parameters. The present work can be viewed as the companion paper of MNRAS 520, 6083\u20136090 (2023), where the case of a parallel shock was treated. 
Differences and similarities with the present case are discussed.'\nauthor:\n- |\n Antoine Bret$^{1,2}$[^1]\\\n $^{1}$ETSI Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real, Spain\\\n $^{2}$Instituto de Investigaciones Energ\u00e9ticas" -"---\nabstract: 'The irregularity and permutation invariance of point cloud data pose challenges for effective learning. Conventional methods for addressing this issue involve converting raw point clouds to intermediate representations such as 3D voxel grids or range images. While such intermediate representations solve the problem of permutation invariance, they can result in significant loss of information. Approaches that do learn on raw point clouds either have trouble in resolving neighborhood relationships between points or are too complicated in their formulation. In this paper, we propose a novel approach to representing point clouds as a locality preserving 1D ordering induced by the Hilbert space-filling curve. We also introduce Point2Point, a neural architecture that can effectively learn on Hilbert-sorted point clouds. We show that Point2Point shows competitive performance on point cloud segmentation and generation tasks. Finally, we show the performance of Point2Point on Spatio-temporal Occupancy prediction from Point clouds.'\nauthor:\n- 'Athrva Atul Pandhare$^{1}$[^1]'\nbibliography:\n- 'mainv2.bib'\ntitle: '**Point2Point : A Framework for Efficient Deep Learning on Hilbert sorted Point Clouds with applications in Spatio-Temporal Occupancy Prediction**'\n---\n\nIntroduction\n============\n\nDeep learning on point clouds has been an active research area in recent years. Existing literature usually converts point clouds to an" -"---\nabstract: 'We consider the Online Rent Minimization problem, where online jobs with release times, deadlines, and processing times must be scheduled on machines that can be rented for a fixed length period of $T$. The objective is to minimize the number of machine rents. This problem generalizes the Online Machine Minimization problem where machines can be rented for an infinite period, and both problems have an asymptotically optimal competitive ratio of $O(\\log(p_{\\max}/p_{\\min}))$ for general processing times, where $p_{\\max}$ and $p_{\\min}$ are the maximum and minimum processing times respectively. However, for small values of $p_{\\max}/p_{\\min}$, a better competitive ratio can be achieved by assuming unit-size jobs. Under this assumption, Devanur et al. (2014) gave an optimal $e$-competitive algorithm for Online Machine Minimization, and Chen and Zhang (2022) gave a $(3e+7)\\approx 15.16$-competitive algorithm for Online Rent Minimization. In this paper, we significantly improve the competitive ratio of the Online Rent Minimization problem under unit size to $6$, by using a clean oracle-based online algorithm framework.'\nauthor:\n- 'Enze Sun [^1]'\n- 'Zonghan Yang [^2]'\n- 'Yuhao Zhang [^3]'\nbibliography:\n- 'ref.bib'\ntitle: 'Improved Algorithms for Online Rent Minimization Problem Under Unit-Size Jobs'\n---\n\nIntroduction\n============\n\n*Machine Minimization* is a classical scheduling" -"---\nabstract: 'As in other fields of artificial intelligence, the information retrieval community has grown interested in investigating the power consumption associated with neural models, particularly models of search. 
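The Point2Point record above sorts each point cloud along a Hilbert curve before learning. The sketch below deliberately swaps in Morton (Z-order) indexing, a simpler locality-preserving curve, to show the quantize-index-sort preprocessing; it is a stand-in, not the Hilbert index the paper uses.

```python
import numpy as np

def morton_key(ix, iy, iz, bits=10):
    """Interleave the bits of three grid coordinates (Z-order index).
    Stand-in for the Hilbert index used by the record."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def curve_sort(points, bits=10):
    """Quantize to a 2^bits grid and sort points along the space-filling curve."""
    p = np.asarray(points, dtype=float)
    span = np.ptp(p, axis=0)
    q = ((p - p.min(axis=0)) / np.where(span > 0, span, 1.0)
         * (2 ** bits - 1)).astype(int)
    keys = [morton_key(*xyz) for xyz in q]
    return p[np.argsort(keys)]

cloud = np.random.default_rng(0).random((1000, 3))
ordered = curve_sort(cloud)
# Consecutive points in the 1D ordering stay close in 3D:
print(np.linalg.norm(np.diff(ordered, axis=0), axis=1).mean())
```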
This interest has become particularly relevant as the energy consumption of information retrieval models has risen with new neural models based on large language models, leading to an associated increase of emissions, albeit relatively low compared to fields such as natural language processing. Consequently, researchers have started exploring the development of a green agenda for sustainable information retrieval research and operation. Previous work, however, has primarily considered energy consumption and associated emissions alone. In this paper, we seek to draw the information retrieval community\u2019s attention to the overlooked aspect of water consumption related to these powerful models. We supplement previous energy consumption estimates with corresponding water consumption estimates, considering both off-site water consumption (required for operating and cooling energy production systems such as carbon and nuclear power plants) and on-site consumption (for cooling the data centres where models are trained and operated). By incorporating water consumption alongside energy consumption and emissions, we offer a more comprehensive understanding of the environmental impact of information retrieval research and operation.'\nauthor:\n- Guido Zuccon" -"---\nabstract: 'State-of-the-art speech synthesis models try to get as close as possible to the human voice. Hence, modelling emotions is an essential part of Text-To-Speech (TTS) research. In our work, we selected FastSpeech2 as the starting point and proposed a series of modifications for synthesizing emotional speech. According to automatic and human evaluation, our model, EmoSpeech, surpasses existing models regarding both MOS score and emotion recognition accuracy in generated speech. We provided a detailed ablation study for every extension to FastSpeech2 architecture that forms EmoSpeech. The uneven distribution of emotions in the text is crucial for better, synthesized speech and intonation perception. Our model includes a conditioning mechanism that effectively handles this issue by allowing emotions to contribute to each phone with varying intensity levels. The human assessment indicates that proposed modifications generate audio with higher MOS and emotional expressiveness.'\naddress: |\n $^1$VK, deepvk, Saint Petersburg, Russia\\\n $^2$VK, deepvk, Saint Petersburg, Russia\nbibliography:\n- 'mybib.bib'\ntitle: 'EmoSpeech: Guiding FastSpeech2 Towards Emotional Text to Speech'\n---\n\n**Index Terms**: text to speech, emotional text to speech, fast speech\n\nIntroduction {#sec:1}\n============\n\nIn recent years, the field of Text-to-Speech (TTS) has made significant progress in terms of the quality of synthesised speech" -"---\nabstract: |\n Information extraction (IE) plays very important role in natural language processing (NLP) and is fundamental to many NLP applications that used to extract structured information from unstructured text data. Heuristic-based searching and data-driven learning are two main stream implementation approaches. However, no much attention has been paid to document genre and length influence on IE tasks. To fill the gap, in this study, we investigated the accuracy and generalization abilities of heuristic-based searching and data-driven to perform two IE tasks: named entity recognition (NER) and semantic role labeling (SRL) on domain-specific and generic documents with different length. 
We posited two hypotheses: first, short documents may yield better accuracy results compared to long documents; second, generic documents may exhibit superior extraction outcomes relative to domain-dependent documents due to training document genre limitations.\n\n Our findings reveals that no single method demonstrated overwhelming performance in both tasks. For named entity extraction, data-driven approaches outperformed symbolic methods in terms of accuracy, particularly in short texts. In the case of semantic roles extraction, we observed that heuristic-based searching method and data-driven based model with syntax representation surpassed the performance of pure data-driven approach which only consider semantic information. Additionally, we discovered that" -"---\nauthor:\n- \n- \nbibliography:\n- 'sn-bibliography.bib'\ntitle: 'Defending Black-box Classifiers by Bayesian Boundary Correction'\n---\n\nIntroduction\n============\n\nDeep learning classifiers have been proven to be universally vulnerable to malicious perturbations on data and training, i.e. adversarial attack (AA)\u00a0[@chakraborty_adversarial_2018], causing alarming concerns because such perturbations are imperceptible to humans but destructive to machine intelligence. Existing AA methods can be characterized by the amount of information required. White-box attack\u00a0[@GoodfellowFGSM] assumes the access to the victim model and can compute the loss gradient with respect to samples. Transfer-based attack\u00a0[@dong2018boosting] does not require access to the victim but needs a surrogate victim. Black-box/semi black-box approaches only require access to the input/output of the victim\u00a0[@Bandits; @BA].\n\nCorresponding to AA, defense methods have emerged as a new field recently\u00a0[@chakraborty_adversarial_2018] where most research can be categorized into data enhancement and model enhancement. Popular data enhancement methods involve finding potential adversarial examples e.g. adversarial training (AT)\u00a0[@madry_towards_2018] and randomized smoothing (RS)\u00a0[@Cohen19_ICML], or removing perturbation via denoising\u00a0[@nie2022diffusion]. The philosophy behind them is different. The former biases the classifier by exposing it to potential threats while the latter focuses on learning an accurate representation of the data distribution. Both can be seen" -"---\nabstract: 'Deep learning has successfully solved a wide range of tasks in 2D vision as a dominant AI technique. Recently, deep learning on 3D point clouds has become increasingly popular for addressing various tasks in this field. Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks. These attacks are imperceptible to the human eye, but can easily fool deep neural networks in the testing and deployment stage. To encourage future research, this survey summarizes the current progress on adversarial attack and defense techniques on point-cloud classification. This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods in recent years. Additionally, it provides an overview of defense strategies, organized into data-focused and model-focused methods. 
Finally, it presents several current challenges and potential future research directions in this domain.'\naddress:\n- 'Department of Computer Engineering, Sharif University of Technology Tehran (e-mail: hanieh.naderii@gmail.com)'\n- 'School of Engineering Science, Simon Fraser University, Burnaby, BC, Canada (e-mail: ibajic@ensc.sfu.ca)'\nauthor:\n- ', ,\\'\nbibliography:\n- 'References.bib'\ntitle: 'Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey'\n---\n\n3D deep learning, deep neural network, adversarial examples, adversarial defense, machine learning security, 3D point" -"---\nabstract: |\n Recent data search platforms use ML task-based utility measures rather than metadata-based keywords, to search large dataset corpora. Requesters submit a training dataset, and these platforms search for [*augmentations*]{}\u2014join or union-compatible datasets\u2014that, when used to augment the requester\u2019s dataset, most improve model (e.g., linear regression) performance. Although effective, providers that manage personally identifiable data demand differential privacy (`DP`) guarantees before granting these platforms data access. Unfortunately, making data search differentially private is nontrivial, as a single search can involve training and evaluating datasets hundreds or thousands of times, quickly depleting privacy budgets.\n\n We present [[*Saibot*]{}]{}, a differentially private data search platform that employs Factorized Privacy Mechanism ([`FPM`]{}), a novel `DP` mechanism, to calculate sufficient semi-ring statistics for ML over different combinations of datasets. These statistics are privatized once, and can be freely reused for the search. This allows [[*Saibot*]{}]{}to scale to arbitrary numbers of datasets and requests, while minimizing the amount that `DP` noise affects search results. We optimize the sensitivity of [`FPM`]{}for common augmentation operations, and analyze its properties with respect to linear regression. Specifically, we develop an unbiased estimator for many-to-many joins, prove its bounds, and develop an optimization to redistribute `DP` noise to minimize" -"---\nabstract: 'Wide-area deep imaging surveys have discovered large numbers of extremely low surface brightness (LSB) dwarf galaxies, which challenge galaxy formation theory and, potentially, offer new constraints on the nature of dark matter. Here we discuss one as-yet-unexplored formation mechanism that may account for a fraction of LSB dwarfs. We call this the \u2018ghost galaxy\u2019 scenario. In this scenario, inefficient radiative cooling prevents star formation in the \u2018main branch\u2019 of the merger tree of a low-mass dark matter halo, such that almost all its stellar mass is acquired through mergers with less massive (but nevertheless star-forming) progenitors. Present-day systems formed in this way would be \u2018ghostly\u2019 isolated stellar halos with no central galaxy. We use merger trees based on the extended Press\u2013Schechter formalism and the Copernicus Complexio cosmological $N$-body simulation to demonstrate that mass assembly histories of this kind can occur for low-mass halos in $\\Lambda$CDM, but they are rare. They are most probable in isolated halos of present-day mass $\\sim4\\times10^{9}\\,\\mathrm{M_{\\odot}}$, occurring for $\\sim5\\%$ of all halos of that mass under standard assumptions about the timing and effect of cosmic reionization. 
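The Saibot record above exploits the fact that linear regression needs only the sufficient statistics $X^\top X$ and $X^\top y$, so differential-privacy noise can be added to them once and every later search request reuses the noisy statistics at no extra privacy cost. A toy Gaussian-noise version (noise scale arbitrary; FPM's semi-ring formulation and calibration are in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.01 * rng.standard_normal(500)

# Privatize the sufficient statistics once; a real Gaussian mechanism would
# calibrate sigma to sensitivity/epsilon and symmetrize the noise on X^T X.
sigma = 0.5                                  # arbitrary toy scale, not FPM's calibration
XtX_priv = X.T @ X + rng.normal(0.0, sigma, (3, 3))
Xty_priv = X.T @ y + rng.normal(0.0, sigma, 3)

# Any number of later "search requests" can reuse the noisy statistics
# without spending additional privacy budget.
beta = np.linalg.solve(XtX_priv, Xty_priv)
print(np.round(beta, 2))                     # close to [ 2.0, -1.0, 0.5]
```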
The stellar masses of star-forming progenitors in these systems are highly uncertain; abundance-matching arguments imply a bimodal present-day" -"---\nabstract: 'We introduce the first large-scale dataset, MNISQ, for both the Quantum and the Classical Machine Learning community during the Noisy Intermediate-Scale Quantum era. MNISQ consists of 4,950,000 data points organized in 9 subdatasets. Building our dataset from the quantum encoding of classical information (e.g., MNIST dataset), we deliver a dataset in a dual form: in quantum form, as circuits, and in classical form, as quantum circuit descriptions (quantum programming language, QASM). In fact, also the Machine Learning research related to quantum computers undertakes a dual challenge: enhancing machine learning exploiting the power of quantum computers, while also leveraging state-of-the-art classical machine learning methodologies to help the advancement of quantum computing. Therefore, we perform circuit classification on our dataset, tackling the task with both quantum and classical models. In the quantum endeavor, we test our circuit dataset with Quantum Kernel methods, and we show excellent results up to $97\\%$ accuracy. In the classical world, the underlying quantum mechanical structures within the quantum circuit data are not trivial. Nevertheless, we test our dataset on three classical models: Structured State Space sequence model (S4), Transformer and LSTM. In particular, the S4 model applied on the tokenized QASM sequences reaches an impressive" -"---\nabstract: 'We study visual question answering in a setting where the answer has to be mined from a pool of relevant and irrelevant images given as a context. For such a setting, a model must first retrieve relevant images from the pool and answer the question from these retrieved images. We refer to this problem as retrieval-based visual question answering (or RetVQA in short). The RetVQA is distinctively different and more challenging than the traditionally-studied Visual Question Answering (VQA), where a given question has to be answered with a single relevant image in context. Towards solving the RetVQA task, we propose a unified ulti mage (MI-BART) that takes a question and retrieved images using our relevance encoder for free-form fluent answer generation. Further, we introduce the largest dataset in this space, namely RetVQA, which has the following salient features: multi-image and retrieval requirement for VQA, metadata-independent questions over a pool of heterogeneous images, expecting a mix of classification-oriented and open-ended generative answers. Our proposed framework achieves an accuracy of 76.5% and a fluency of 79.3% on the proposed dataset, namely RetVQA and also outperforms state-of-the-art methods by 4.9% and 11.8% on the image segment" -"---\nabstract: |\n We examine the assumption that the hidden-state vectors of recurrent neural networks (RNNs) tend to form clusters of semantically similar vectors, which we dub the *clustering hypothesis*. While this hypothesis has been assumed in the analysis of RNNs in recent years, its validity has not been studied thoroughly on modern neural network architectures. We examine the clustering hypothesis in the context of RNNs that were trained to recognize regular languages. 
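A compressed version of the probe described in the clustering record above: collect hidden vectors, cluster them, and score the clusters against ground-truth automaton states. Synthetic Gaussian blobs stand in for a trained RNN's hidden states here; a real run would record the RNN's states while it reads words.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in "hidden states": one Gaussian blob per ground-truth automaton state.
n_states, dim = 3, 16
truth = rng.integers(0, n_states, 600)
centers = rng.standard_normal((n_states, dim))
hidden = centers[truth] + 0.1 * rng.standard_normal((600, dim))

labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(hidden)

# Purity: fraction of vectors whose cluster's majority automaton state is their own.
purity = sum(np.bincount(truth[labels == c]).max()
             for c in range(n_states)) / len(truth)
print(f"cluster purity vs ground-truth states: {purity:.3f}")   # ~1.0 here
```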
This enables us to draw on perfect ground-truth automata in our evaluation, against which we can compare the RNN\u2019s accuracy and the distribution of the hidden-state vectors.\n\n We start with examining the (piecewise linear) separability of an RNN\u2019s hidden-state vectors into semantically different classes. We continue the analysis by computing clusters over the hidden-state vector space with multiple state-of-the-art unsupervised clustering approaches. We formally analyze the accuracy of computed clustering functions and the validity of the clustering hypothesis by determining whether clusters group semantically similar vectors to the same state in the ground-truth model.\n\n Our evaluation supports the validity of the clustering hypothesis in the majority of examined cases. We observed that the hidden-state vectors of well-trained RNNs are separable, and that the unsupervised clustering techniques succeed" -"---\nabstract: 'Many large accelerator facilities have adopted the open-source EPICS software as the quasi-industry standard for control systems. They typically have access to their own electronics laboratory and dedicated personnel for control system development. On the other hand, small laboratories, many based at universities, use commercial software like LabView, or entirely homebrewed systems. These often become cumbersome when the number of controlled devices increases over time. Here we present a control system setup, based on a combination of EPICS, React Automation Studio, and our own drivers for electronics available to smaller laboratories \u2013 such as Arduinos \u2013 that is flexible, modular, and robust. It allows small laboratories, working with off-the-shelf modular electronics, power supplies, and other devices to quickly set up a control system without a large facility overhead, while retaining maximum compatibility and upgradeability. We demonstrate our setup for the MIST-1 ion source experiment at MIT. This control system will later be used to serve the entire IsoDAR accelerator complex and, as such, must be easily expandable.'\naddress: 'Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA'\nauthor:\n- Philip Weigel\n- Monica Busza\n- Abutalib Namazov\n- Janette Park\n- Joshua Villarreal\n- 'Loyd H." -"---\nbibliography:\n- 'submitted.bib'\n---\n\nNir Lotan, Einat Minkov^\\*^\\\nUniversity of Haifa, Haifa, Israel\\\n\n\\* einatm@is.haifa.ac.il\n\nAbstract {#abstract .unnumbered}\n========\n\nSocial world knowledge is a key ingredient in effective communication and information processing by humans and machines alike. As of today, there exist many knowledge bases that represent factual world knowledge. Yet, there is no resource that is designed to capture [*social*]{} aspects of world knowledge. We believe that this work makes an important step towards the formulation and construction of such a resource. We introduce [*SocialVec*]{}, a general framework for eliciting low-dimensional entity embeddings from the social contexts in which they occur in social networks. In this framework, [*entities*]{} correspond to highly popular accounts which invoke general interest. We assume that entities that individual users tend to co-follow are socially related, and use this definition of social context to learn the entity embeddings. Similar to word embeddings which facilitate tasks that involve text semantics, we expect the learned social entity embeddings to benefit multiple tasks of social flavor.
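The co-follow context described above is reminiscent of word2vec training, with followed accounts playing the role of words. A minimal sketch of that idea (hypothetical toy follow lists; gensim assumed available; not the authors' exact setup):

```python
# Minimal sketch of learning entity embeddings from co-follow context,
# word2vec-style: each user's followed popular accounts form one "sentence".
# The account lists below are hypothetical toy data.
from gensim.models import Word2Vec

follow_lists = [
    ["nasa", "bbcworld", "nytimes"],
    ["nasa", "natgeo", "bbcworld"],
    ["espn", "nba", "nfl"],
    ["nba", "espn", "natgeo"],
]

model = Word2Vec(
    sentences=follow_lists,
    vector_size=16,   # toy embedding dimension
    window=5,         # co-follow context window
    min_count=1,
    sg=1,             # skip-gram
    seed=0,
)
print(model.wv.most_similar("nasa", topn=2))
```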
In this work, we elicited the social embeddings of roughly $200K$ entities from a sample of 1.3M Twitter users and the accounts that they follow. We employ and gauge" -"---\nauthor:\n- 'Jilang Miao,$^{*1}$ and Miaomiao Jin$^{*}$'\nbibliography:\n- 'SN1D.bib'\ntitle: 'An Accurate S$_N$ Method for Solving Static Multigroup Neutron Transport Equations in Slab Geometry'\n---\n\nIntroduction \n=============\n\nThis paper presents an accurate S$_N$[@Car1953][@Lar2008] solver for slab geometry. For constant cross-section regions, it gives accurate angular fluxes without the need for fine meshes or approximation of solution forms. The method provides a potentially accurate and efficient axial solver in the 2D-1D scheme\u00a0[@stimpson2015azimuthal]\u00a0[@hursin2014development] to solve 3D transport equations.\n\nIn this summary, we first derive the solution form for a constant cross-section region. The solution generalizes the earlier work\u00a0[@analytical2A2G] considering a 1D problem with only two angles and two groups to any number of energy groups and discrete angles. Then we show the steps to find the coefficients for each region from boundary conditions of the slab. Finally, a two-group case problem is studied with S$_2$, S$_4$, S$_6$ and results are verified with references from Monte Carlo simulations.\n\nTheory\n======\n\nS$_N$ equation in a homogeneous slab\n------------------------------------\n\nFor energy groups $g=1,...,G$ and a quadrature set $\\left . \\{\\mu_n,\\omega_n\\} \\right | _{n=1,...,N}$, the transport equation for angular flux $\\psi_{g,n}$ can be written as in Eq\u00a0\\[eq::sn1d\\].\n\n$$\\begin{aligned}\n& \\mu_n \\frac{\\partial}{\\partial x}" -"---\nabstract: 'IoT deployments grow in number and size, and questions of long-term support and maintainability become increasingly important. To prevent vendor lock-in, standards-compliant capabilities to transfer control of IoT devices between service providers must be offered. We propose a lightweight protocol for transfer of control, and we show that the overhead for the involved IoT devices is small and the overall required manual effort is minimal. We analyse the fulfilment of the security requirements to verify that the stipulated requirements are satisfied.'\nauthor:\n- \n- \nbibliography:\n- 'ref.bib'\ntitle: Towards Automated PKI Trust Transfer for IoT\n---\n\nsecurity, IoT, PKI, digital certificates, enrollment, embedded systems\n\nIntroduction {#sec:intro}\n============\n\nThe increasing number of IoT devices used worldwide for safety- and security-critical applications such as grid infrastructure and e-health highlights the need for robust and scalable security solutions suitable for IoT. The last couple of years have seen an increase in protocols and standards targeting the Internet of Things, including standards covering security aspects. These standards define security services such as relatively lightweight secure communication and authentication. Together with recent proposals for key establishment and certificate enrollment, important steps towards bringing Public Key Infrastructure, PKI, to IoT have" -"---\nabstract: |\n Mass live content, such as World Cups, the Super Bowl or the Olympics, attracts audiences of hundreds of millions of viewers. While such events were predominantly consumed on TV, more and more viewers follow big events on the Internet, which poses a scalability challenge: current unicast delivery over the web comes with large overheads and is inefficient.
An attractive alternative is multicast-based transmission; however, current solutions have several drawbacks, mostly related to security and privacy, which prevent them from being implemented in browsers.\n\n In this paper we introduce a multicast extension to QUIC, a widely popular transport protocol standardized by the IETF, that solves several of these problems. It enables multicast delivery by offering encryption as well as integrity verification of packets distributed over multicast and automatic unicast fallback, which solves one of multicast's major obstacles to large-scale deployment. It is transparent to applications and can be easily utilized by simply enabling an option in QUIC. This extension is solely focused on the transport layer and uses already existing multicast mechanisms on the network layer.\nauthor:\n- Max Franke\n- Jake Holland\n- Stefan Schmid\nbibliography:\n- 'sample.bib'\ntitle: 'MCQUIC - A Multicast Extension for QUIC [^1]" -"---\nabstract: 'We describe explicitly all actions of the quantum permutation groups on classical compact spaces. In particular, we show that the defining action is the only non-trivial ergodic one. We then extend these results to all easy quantum groups associated to non-crossing partitions.'\naddress:\n- 'Laboratoire de Math\u00e9matiques d\u2019Orsay, CNRS, Universit\u00e9 Paris-Saclay, 91405 Orsay, France'\n- 'Instituto de Matem\u00e1tica y Ciencias Afines, Universidad Nacional de Ingenier\u00eda, 15012 Lima, Peru'\n- 'Institute for Advanced Study in Mathematics, Harbin Institute of Technology, Harbin 150001, China'\nauthor:\n- Amaury Freslon\n- Frank Taipe\n- Simeng Wang\nbibliography:\n- 'quantum.bib'\ntitle: Classical actions of quantum permutation groups\n---\n\nIntroduction\n============\n\nCompact quantum groups were first defined by S.L. Woronowicz in [@woronowicz1987compact] and [@woronowicz1995compact] as generalisations of compact groups. Among several aspects, it was clear that they could serve as quantum symmetries of \u201cnon-commutative\u201d spaces, and this was explored immediately through the notion of quantum homogeneous space, for instance in the work of P. Podle\u015b and S.L. Woronowicz [@podles1990quantum]. Besides investigating the spaces on which a given quantum group can act, it was natural to consider the converse question: given a space, which quantum group can act on it? This quantum symmetry problem led" -"---\nabstract: 'This paper establishes a novel multi-input multi-output (MIMO) communication network in the presence of full-duplex (FD) transmitters and receivers, with the assistance of a dual-side intelligent omni surface (IOS). Compared with the traditional IOS, the dual-side IOS allows signals from both sides to reflect and refract simultaneously, which further exploits the potential of metasurfaces to avoid frequency dependence, and size, weight, and power (SWaP) limitations. By considering both the downlink and uplink transmissions, we aim to maximize the weighted sum rate, subject to the transmit power constraints of the transmitter and the users and the constraints on the dual-side reflecting and refracting phase shifts. However, the formulated sum rate maximization problem is not convex; hence, we exploit the weighted minimum mean square error (WMMSE) approach, and tackle the original problem iteratively by solving two sub-problems. For the beamforming matrix optimizations of the downlink and uplink, we resort to the Lagrangian dual method combined with a bisection search to obtain the results.
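The abstract leaves the dual update implicit; as a generic illustration of the Lagrangian-dual-plus-bisection pattern (not the paper's exact formulation), one can bisect a dual variable until a water-filling-style power allocation meets the power budget:

```python
# Generic illustration of a Lagrangian dual method with bisection search:
# find the dual variable lam >= 0 such that the water-filling allocation
# p_k = max(1/lam - n_k, 0) uses exactly the power budget P. This mirrors
# the pattern described above but is not the paper's exact formulation.
import numpy as np

def waterfill(noise, budget, lo=1e-9, hi=1e9, iters=100):
    def total_power(lam):
        return np.maximum(1.0 / lam - noise, 0.0).sum()
    for _ in range(iters):            # bisection: total_power(lam) is
        mid = 0.5 * (lo + hi)         # monotonically decreasing in lam
        if total_power(mid) > budget:
            lo = mid                  # too much power -> raise the dual
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.maximum(1.0 / lam - noise, 0.0)

p = waterfill(noise=np.array([0.1, 0.5, 1.0, 2.0]), budget=4.0)
print(p, p.sum())  # allocation and (approximately) the budget
```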
Furthermore, we resort to the quadratically constrained quadratic programming (QCQP) method to optimize the reflecting and refracting phase shifts of both sides of the IOS. In addition, we introduce the case without a dual-side IOS for comparison. Simulation results validate the" -"---\nabstract: 'Depression is a growing mental health issue in society that affects all areas of life and can even lead to suicide. Fortunately, prevention programs can be effective in its treatment. In this context, this work proposes an automatic system for detecting depression on social media based on machine learning and natural language processing methods. This paper presents the following contributions: (i) an ensemble learning system that combines several types of text representations for depression detection, including recent advances in the field; (ii) a contextualization schema through topic and affective information; (iii) an analysis of models\u2019 energy consumption, establishing a trade-off between classification performance and overall computational costs. To assess the proposed models\u2019 effectiveness, a thorough evaluation is performed on two datasets that model depressive text. Experiments indicate that the proposed contextualization strategies can improve the classification and that approaches that use Transformers can improve the overall F-score by 2% while increasing the energy cost a hundredfold. Finally, this work paves the way for future energy-wise systems by considering both the classification performance and the energy consumption.'\nauthor:\n- Andrea Laguna\n- Oscar Araque\ntitle: 'A Cost-aware Study of Depression Language on Social Media using Topic and Affect" -"---\nabstract: 'In this paper, we provide a novel method for the estimation of unknown parameters of the Gaussian Mixture Model (GMM) in Positron Emission Tomography (PET). A vast majority of PET imaging methods are based on a reconstruction model that is defined by values on some pixel/voxel grid. Instead, we propose a continuous parametric GMM model. Usually, Expectation-Maximization (EM) iterations are used to obtain the GMM model parameters from some set of point-wise measurements. The challenge of PET reconstruction is that the measurement is represented by the so-called lines of response (LoRs), instead of points. The goal is to estimate the unknown parameters of the Gaussian mixture directly from a relatively small set of LoRs. Estimation of unknown parameters relies on two facts: the marginal distribution theorem of the multivariate normal distribution; and the properties of the marginal distribution of LoRs. We propose an iterative algorithm that resembles the maximum-likelihood method to determine the unknown parameters. Results show that the estimated parameters follow the correct ones with great accuracy. The result is promising, since the high-quality parametric reconstruction model can be obtained from lower-dose measurements, and is directly suitable for further processing.'\nbibliography:\n- 'biblio.bib'\n---\n\nTomislav
Schleicher'\n- Juan Pablo Hidalgo\n- Daniele Galli\ntitle: |\n Survival of fossil fields during the pre-main sequence evolution\\\n of intermediate-mass stars\n---\n\n[Chemically peculiar Ap and Bp stars host strong large-scale magnetic fields in the range of $200$\u00a0G up to $30$\u00a0kG, which are often considered to be of fossil origin.]{} [We assess the evolution of such fossil fields during the star formation process and the pre-main sequence evolution of intermediate-mass stars, considering fully convective models, models including a transition to a radiative protostar and models with a radiative core. We also examine the implications of the interaction between the fossil field and the core dynamo. ]{} [We employ analytic and semi-analytic calculations combined with current observational constraints.]{} [For fully convective models, we show that magnetic field decay via convection can be expected to be very efficient for realistic parameters of turbulent resistivities. Based on the observed magnetic field strength-density relation, as well as the expected amount of flux loss due to ambipolar diffusion, it appears unlikely that convection could be suppressed via strong enough magnetic fields. On the other hand, a transition from a convective to a" -"---\nabstract: 'Federated learning (FL) has gained significant traction as a privacy-preserving algorithm, but the underlying resemblance of federated learning algorithms like Federated Averaging (FedAvg) or Federated SGD (FedSGD) to ensemble learning algorithms has not been fully explored. The purpose of this paper is to examine the application of FL to object detection as a method to enhance generalizability, and to compare its performance against a centralized training approach for an object detection algorithm. Specifically, we investigate the performance of a YOLOv5 model trained using FL across multiple clients and employ a random sampling strategy without replacement, so each client holds a portion of the same dataset used for centralized training. Our experimental results showcase the superior efficiency of the FL object detector\u2019s global model in generating accurate bounding boxes for unseen objects, with the test set being a mixture of objects from two distinct clients not represented in the training dataset. These findings suggest that FL can be viewed from an ensemble algorithm perspective, akin to a synergistic blend of Bagging and Boosting techniques. Consequently, FL can be seen not only as a method to enhance privacy, but also as a method to enhance the performance" -"UTOPIA: Methods of Aggregations and Constructions of Predictive Intervals {#sec:methods}\n=========================================================================\n\nWe start by assuming that we have a pair of random variables $(X, Y) \\sim P$, where $X\\in\\cX$ and $Y\\in\\cY$. In this paper we typically assume $\\cX = \\reals^p$ and $\\cY = \\reals$. For any such pair, we have the following decomposition: $$Y = m_0(X) + \\sqrt{v_0(X)} \\xi \\,,$$ where $m_0(x) = \\bbE[Y \\mid X = x]$ is the conditional mean function and $v_0(x) = \\var(Y \\mid X = x)$ is the conditional variance function. By definition, we have $\\bbE[\\xi \\mid X] = 0$ and $\\var(\\xi \\mid X) = 1$, albeit $\\xi$ need not be independent of $X$. A special case is when $\\xi \\indep X$, which is known as the *location-scale* family.
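A toy simulation makes the decomposition and the role of the location-scale assumption concrete (standard-normal $\xi$ independent of $X$; the oracle interval is built from the true $m_0$ and $v_0$, which are toy choices here):

```python
# Toy simulation of the decomposition Y = m0(X) + sqrt(v0(X)) * xi in the
# location-scale case (xi standard normal, independent of X), and the
# oracle prediction interval [m0 - z*sqrt(v0), m0 + z*sqrt(v0)].
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.1
n = 100_000

X = rng.uniform(-2, 2, n)
m0 = np.sin(X)              # conditional mean (toy choice)
v0 = 0.5 + 0.5 * X**2       # conditional variance (toy choice)
Y = m0 + np.sqrt(v0) * rng.standard_normal(n)

z = norm.ppf(1 - alpha / 2)
covered = (m0 - z * np.sqrt(v0) <= Y) & (Y <= m0 + z * np.sqrt(v0))
print(f"empirical coverage: {covered.mean():.3f}  (target {1 - alpha})")
```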
As mentioned in the introduction, our goal is to construct a prediction interval of the form $[\\ell(X), u(X)]$ of $Y$ given $X$ based on $n$ i.i.d. random observations $(X_1, Y_1), \\dots, (X_n, Y_n)$ from $P$. Here we aim for an *expected guarantee* instead of a *conditional guarantee*, i.e. $\\bbP(\\ell(X) \\leq Y \\leq u(X)) \\ge 1 - \\alpha$ where the probability is taken over the joint distribution of $(X, Y)$. Constructing any such prediction interval is easy; given any positive" -"---\nabstract: 'Human supervisors in multi-robot systems are primarily responsible for monitoring robots, but can also be assigned secondary tasks. These tasks can act as interruptions and can be categorized as either intrinsic, i.e., being directly related to the monitoring task, or extrinsic, i.e., being unrelated. In this paper, we investigate the impact of these two types of interruptions through a user study ($N=39$), where participants monitor a number of remote mobile robots while intermittently being interrupted by either a robot fault correction task (intrinsic) or a messaging task (extrinsic). We find that the task performance of participants does not change significantly with the interruptions but depends greatly on the number of robots. However, interruptions result in an increase in perceived workload, and extrinsic interruptions have a more negative effect on workload across all NASA-TLX scales. Participants also reported switching between extrinsic interruptions and the primary task to be more difficult compared to the intrinsic interruption case. Statistical significance of these results is confirmed using ANOVA and a one-sample t-test. These findings suggest that when deciding task assignment in such supervision systems, one should limit interruptions from secondary tasks, especially extrinsic ones, in order to limit user workload.'\nauthor:\n- 'Abhinav" -"---\nabstract: 'We show, for the first time, that neural networks trained only on synthetic data achieve state-of-the-art accuracy on the problem of 3D human pose and shape (HPS) estimation from real images. Previous synthetic datasets have been small, unrealistic, or lacked realistic clothing. Achieving sufficient realism is non-trivial, and we show how to do this for full bodies in motion. Specifically, our BEDLAM dataset contains monocular RGB videos with ground-truth 3D bodies in SMPL-X format. It includes a diversity of body shapes, motions, skin tones, hair, and clothing. The clothing is realistically simulated on the moving bodies using commercial clothing physics simulation. We render varying numbers of people in realistic scenes with varied lighting and camera motions. We then train various HPS regressors using BEDLAM and achieve state-of-the-art accuracy on real-image benchmarks despite training with synthetic data. We use BEDLAM to gain insights into what model design choices are important for accuracy. With good synthetic training data, we find that a basic method like HMR approaches the accuracy of the current SOTA method (CLIFF). BEDLAM is useful for a variety of tasks and all images, ground-truth bodies, 3D clothing, support code, and more are available for research purposes. Additionally," -"---\nabstract: |\n Nonconvex-nonconcave minimax problems have found numerous applications in various fields, including machine learning. However, questions remain about what is a good surrogate for a local minimax optimum and how to characterize minimax optimality.
Recently, Jin, Netrapalli, and Jordan (ICML 2020) introduced the concept of a local minimax point and derived optimality conditions for the smooth and unconstrained case. In this paper, we introduce the concept of calm local minimax point, which is a local minimax point with a calm radius function. With the extra calmness property, we obtain first- and second-order sufficient and necessary optimality conditions for a very general class of nonsmooth nonconvex-nonconcave minimax problems. Moreover, we show that the calm local minimax optimality and the local minimax optimality coincide under a weak sufficient optimality condition for the maximization problem. This equivalence allows us to derive stronger optimality conditions under weaker assumptions for local minimax optimality. [**Key words:**]{} minimax problem, local optimality, calmness, first-order optimality condition, second-order optimality condition\n\n [**AMS Subject Classifications:**]{} 90C26, 90C30, 90C31, 90C33, 90C46, 49J52, 91A65\nauthor:\n- 'Xiaoxiao Ma[^1]'\n- 'Wei Yao[^2]'\n- 'Jane J. Ye[^3]'\n- 'Jin Zhang[^4]'\ntitle: 'Calm local optimality for nonconvex-nonconcave minimax problems [^5]'\n---\n\nIntroduction\n============\n\nIn this" -"---\nabstract: '**The study of the iron-based superconductor, FeSe, has given rise to various topics, such as the interplay among superconductivity, nematicity, and magnetism, the Bardeen-Cooper-Schrieffer Bose-Einstein-condensation (BCS-BEC) crossover, and Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconductivity. Recently, topologically protected nodal Fermi surfaces, referred to as Bogoliubov Fermi surfaces (BFSs), have garnered much attention. A theoretical model for the S-substituted FeSe system demonstrated that BFSs can manifest under the conditions of spin-orbit coupling, multi-band systems, and superconductivity with time-reversal symmetry breaking. Here we report the observation of spin fluctuations originating from BFSs in the superconducting (SC) state via $^{77}$Se-nuclear magnetic resonance measurements down to 100 mK. In a heavily S-substituted FeSe, we found an anomalous enhancement of low-energy spin fluctuations deep in the SC state, which cannot be explained by an impurity effect. Such unusual behavior implies the presence of significant spin fluctuations of Bogoliubov quasiparticles, which are associated with possible nesting properties between BFSs.**'\nauthor:\n- 'Zhongyu Yu$^1$[^1], Koya Nakamura$^1$, Kazuya Inomata$^1$, Xiaoling Shen$^{2, 3}$, Taketora Mikuri$^3$, Kohei Matsuura$^4$[^2], Yuta Mizukami$^4$[^3], Shigeru Kasahara$^5$[^4], Yuji Matsuda,$^5$ Takasada Shibauchi,$^4$ Yoshiya Uwatoko,$^3$ and Naoki Fujiwara$^1$[^5]'\nbibliography:\n- 'References.bib'\ntitle: 'Spin fluctuations from Bogoliubov Fermi surfaces in the superconducting state of S-substituted FeSe'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe study" -"---\nabstract: 'The emergence of new public forums in the shape of online social media has introduced unprecedented challenges to public discourse, including polarization, misinformation, and the formation of echo chambers. While existing research has extensively studied the behavior of active users within echo chambers, little attention has been given to the hidden audience, also known as lurkers, who passively consume content without actively engaging. This study aims to estimate the share of the hidden audience and investigate their interplay with the echo chamber effect.
Using Twitter as a case study, we analyze a polarized political debate to understand the engagement patterns and factors influencing the hidden audience\u2019s presence. Our findings reveal a substantial fraction of users who consume content without active interaction, which underscores the importance of considering their presence in online debates. Notably, our results indicate that the engagement of the hidden audience is primarily influenced by factors such as the reliability of media sources mentioned in tweets rather than the ideological stance of the user who produced the content. These findings highlight the need for a comprehensive understanding of the hidden audience\u2019s role in online debates and how they may influence public opinion.'\nauthor:\n- |\n Anees" -"---\nabstract: 'We present a Mathematica package that takes any reductive gauge algebra and fully-reducible fermion representation, and outputs all semisimple gauge extensions under the condition that they have no additional fermions, and are free of local anomalies. These include all simple completions, also known as grand unified theories (GUT). We additionally provide a list of all semisimple completions for 5835 fermionic extensions of the one-generation Standard Model.'\nauthor:\n- Andrew Gomes\n- Maximillian Ruhdorfer\n- 'Joseph Tooby-Smith'\nbibliography:\n- 'SuperFlocci.bib'\ntitle: |\n Superfloccinaucinihilipilification:\\\n Semisimple unifications of any gauge theory\n---\n\nIntroduction\n============\n\nUnification, the idea that the Standard Model (SM) gauge algebra ${\\mathfrak{su}}(3)\\oplus {\\mathfrak{su}}(2) \\oplus {\\mathfrak{u}(1)}$ and particle representations embed into a semisimple algebra in the UV, is an appealing scenario for physics beyond the Standard Model (BSM). Theories based on semisimple algebras have phenomenological benefits over their reductive counterparts (those algebras containing ${\\mathfrak{u}(1)}$ factors). These include the simplicity of local anomaly-cancellation, a possible origin for flavor symmetries, and the freedom from Landau poles.\n\nArguably the most famous of such extensions are Grand Unified Theories (GUTs), a particularly elegant subclass where the unifying algebra is simple. These include the well-known $\\mathfrak{su}(5)$ and $\\mathfrak{so}(10)$ GUTs\u00a0[@Georgi:1974sy; @Fritzsch:1974nn; @Georgi:1974my], which unify" -"---\nabstract: |\n This paper deals with the approximation and homogenization of a thermoelastic wave model. First, we study the homogenization problem of a weakly coupled thermoelastic wave model with rapidly varying coefficients, using a semigroup approach, a two-scale convergence method and some variational techniques. We show that the limit semigroup can be obtained by using a weak version of the Trotter-Kato convergence theorem. Secondly, we consider the approximation of two thermoelastic wave models, one with exponential decay and the other with polynomial decay. The numerical experiments indicate that the two discrete systems show different behavior of the spectra. Moreover, their discrete energies inherit the same behavior as the continuous ones.
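For reference, the two decay notions contrasted here can be stated in generic form (generic constants, not the paper's specific estimates). Exponential stability bounds the energy uniformly for all initial data, $$E(t) \le M e^{-\omega t} E(0), \qquad M, \omega > 0,$$ while polynomial stability only holds with a rate for smoother initial data $z_0$ in the domain of the semigroup generator $A$, $$E(t) \le \frac{C}{t^{\alpha}} \|z_0\|^2_{D(A)}, \qquad C, \alpha > 0.$$ This distinction also makes plausible the observation that follows, namely that the smoothness of the data affects the decay rate in the polynomial case.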
Finally, we show numerically how the smoothness of the data can impact the rate of decay of the energy associated with the weakly coupled thermoelastic wave model.\\\n \\\n [**Key words:**]{} Homogenization, weakly coupled thermoelastic equation, periodic coefficients, long-time behavior, two-scale convergence, exponential stability, polynomial stability, spectral element.\\\n \\\n [**Mathematics Subject Classification:** 35B27, 93C20, 93D20, 73C25, 65M06, 65M60, 65M70.]{}\nauthor:\n- 'S. Nafiri$^{\\dagger}$'\ndate:\n- \n- \ntitle: ' **Approximation and Homogenisation of thermoelastic wave model**'\n---\n\n*Dedicated to the memory of Professor Hammadi Bouslous*\n\n**Introduction** {#sect1}" -"---\nabstract: 'Named entity recognition (NER) is a crucial task for online advertising. State-of-the-art solutions leverage pre-trained language models for this task. However, three major challenges remain unresolved: web queries differ from natural language, on which pre-trained models are trained; web queries are short and lack contextual information; and labeled data for NER is scarce. We propose DeepTagger, a knowledge-enhanced NER model for web-based ads queries. The proposed knowledge enhancement framework leverages both model-free and model-based approaches. For model-free enhancement, we collect unlabeled web queries to augment domain knowledge; and we collect web search results to enrich the information of ads queries. We further leverage effective prompting methods to automatically generate labels using large language models such as ChatGPT. Additionally, we adopt a model-based knowledge enhancement method based on adversarial data augmentation. We employ a three-stage training framework to train DeepTagger models. Empirical results in various NER tasks demonstrate the effectiveness of the proposed framework.'\nauthor:\n- |\n Simiao Zuo[^1], Pengfei Tang, Xinyu Hu, Qiang Lou, Jian Jiao, Denis Charles\\\n `{simiaozuo,pengfeitang,xinyuhu,qilou,jian.jiao,cdx}@microsoft.com`\\\n Microsoft\nbibliography:\n- 'main.bib'\ntitle: '**DeepTagger: Knowledge Enhanced Named Entity Recognition for Web-Based Ads Queries**'\n---\n\nIntroduction\n============\n\nNamed Entity Recognition (NER) is the task of classifying each token" -"---\nabstract: 'In nature, $\\alpha$-quartz crystals frequently form contact twins - two adjacent crystals with the same chemical structure but different crystallographic orientation, sharing a common lattice plane. As $\\alpha$-quartz crystallises in a chiral space group, such twinning can occur between enantiomorphs with the same handedness or with opposite handedness. Here, we use first-principles methods to investigate the effect of twinning and chirality on the bulk and surface phonon spectra, as well as on the topological properties of phonons in $\\alpha$-quartz. We demonstrate that, even though the dispersion appears identical for all twins along all high-symmetry lines and at all high-symmetry points in the Brillouin zone, the dispersions can be distinct at generic momenta for some twin structures. Furthermore, when the twinning occurs between different enantiomorphs, the charges of all Weyl nodal points flip, which leads to mirror-symmetric isofrequency contours of the surface arcs. We show that this allows negative refraction to occur at interfaces between certain twins of $\\alpha$-quartz.'\nauthor:\n- 'Juan D. F. Pottecher'\n- 'Gunnar F.
Lange'\n- Cameron Robey\n- Bartomeu Monserrat\n- Bo Peng\nbibliography:\n- 'references.bib'\ntitle: Negative refraction of Weyl phonons at twin quartz interfaces \n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nNegative refraction" -"---\nabstract: 'The Gross-Neveu model in the $N \\to \\infty$ limit in $d=1$ spatial dimensions exhibits a chiral inhomogeneous phase (IP), where the chiral condensate has a spatial dependence that spontaneously breaks translational invariance and the $\\mathbb{Z}_2$ chiral symmetry. This phase is absent in $d=2$, while in $d=3$ its existence and extent strongly depend on the regularization and the value of the finite regulator. This work connects these three results smoothly by extending the analysis to noninteger spatial dimensions $1 \\leq d <3$, where the model is fully renormalizable. To this end, we adapt the stability analysis, which probes the stability of the homogeneous ground state under inhomogeneous perturbations, to noninteger spatial dimensions. We find that the IP is present for all $d<2$ and vanishes exactly at $d=2$. Moreover, we find no instability towards an IP for $2\\leq d<3$, which suggests that the IP in $d=3$ is solely generated by the presence of a regulator.'\nauthor:\n- Laurin Pannullo\nbibliography:\n- 'main.bib'\ntitle: ' Inhomogeneous condensation in the Gross-Neveu model in noninteger spatial dimensions $1\\leq d <3$ '\n---\n\nIntroduction\n============\n\nA chiral inhomogeneous phase (IP) features a condensate with a spatial dependence that spontaneously breaks translational invariance in addition to chiral symmetry" -"---\nabstract: |\n Many Next Generation (NextG) applications feature devices that are capable of communicating and sensing in the Millimeter-Wave (mmWave) bands. Trust establishment is an important first step to bootstrap secure mmWave communication links, which is challenging due to the lack of prior secrets and the fact that traditional cryptographic authentication methods cannot bind digital trust with physical properties. Previously, context-based device pairing approaches were proposed to extract shared secrets from common context, using various sensing modalities. However, they suffer from various limitations in practicality and security.\n\n In this work, we propose the first secret-free device pairing scheme in the mmWave band that explores the unique physical-layer properties of mmWave communications. Our basic idea is to let Alice and Bob derive common randomness by sampling physical activity in the surrounding environment that disturbs their wireless channel. They construct reliable fingerprints of the activity by extracting event timing information from the channel state. We further propose an uncoordinated path hopping mechanism to resolve the challenges of beam alignment for activity sensing without prior trust. A key novelty of our protocol is that it remains secure against both co-located passive adversaries and active Man-in-the-Middle attacks, which is not possible with existing" -"---\nabstract: 'Soft, growing inflated beam robots, also known as everting vine robots, have previously been shown to navigate confined spaces with ease. Less is known about their ability to navigate three-dimensional open spaces where they have the potential to collapse under their own weight as they attempt to move through a space. Previous work has studied collapse of inflated beams and vine robots due to purely transverse or purely axial external loads.
Here, we extend previous models to predict the length at which straight vine robots will collapse under their own weight at arbitrary launch angle relative to gravity, inflated diameter, and internal pressure. Our model successfully predicts the general trends of collapse behavior of straight vine robots. We find that collapse length increases non-linearly with the robot\u2019s launch angle magnitude, linearly with the robot\u2019s diameter, and with the square root of the robot\u2019s internal pressure. We also demonstrate the use of our model to determine the robot parameters required to grow a vine robot across a gap in the floor. This work forms the foundation of an approach for modeling the collapse of vine robots and inflated beams in arbitrary shapes.'\nauthor:\n- 'Ciera McFarland and Margaret M." -"---\nabstract: 'Fully distributed estimation and tracking solutions to large-scale multi-agent networks suffer from slow convergence and are vulnerable to network failures. In this paper, we aim to speed up the convergence and enhance the resilience of state estimation and tracking using a simple hierarchical system architecture wherein agents are clustered into smaller networks, and a parameter server exists to aid the information exchanges among networks. The information exchange among networks is expensive and occurs only once in a while. We propose two \u201cconsensus + innovation\u201d algorithms for the state estimation and tracking problems, respectively. In both algorithms, we use a novel hierarchical push-sum consensus component. For the state estimation, we use dual averaging as the local innovation component. State tracking is much harder to tackle in the presence of dropping-link failures and the standard integration of the consensus and innovation approaches is no longer applicable. Moreover, dual averaging is no longer feasible. Our algorithm introduces a pair of additional variables per link, ensures that the relevant local variables evolve according to the state dynamics, and uses projected local gradient descent as the local innovation component. We also characterize the convergence rates of both of the algorithms under linear local observation" -"---\nabstract: 'The electrical environment of a ground vacuum testing chamber creates facility effects for gridded ion thrusters. For example, it is well known that the plume from the thruster generates current paths that are very different from what occurs in space, and the neutralization of this plume is also different. For reasons such as this, it is important to clarify how the experimental testing environment affects plasma flows, but understanding this effect solely through ground experiments is difficult. To that end, this study utilizes particle-in-cell and direct simulation Monte Carlo methods to simulate xenon beam ions and electrons emitted from a neutralizer. First, we compare simulations conducted within the chamber to those conducted in space, demonstrating that grounded chamber walls increase the electric potential and electron temperature. Next, we investigate the impact of the neutralizer\u2019s position and the background pressure on the plume in the vacuum chamber. We find that as the neutralizer position moves closer to the location of maximum potential, more electrons are extracted, resulting in increased neutralization of the plume. We also observe that high background pressure generates slow charge-exchange ions, creating ion sheaths on the side walls that alter ion current paths.
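Returning to the "consensus + innovation" estimators described above: a minimal sketch of the generic flat (non-hierarchical) update, shown only to convey the structure and not the paper's hierarchical push-sum variant with a parameter server:

```python
# Generic "consensus + innovation" state-estimation update on a flat network
# with fixed doubly stochastic weights; a structural illustration only, not
# the paper's hierarchical push-sum algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 2
theta = np.array([1.0, -2.0])                      # unknown static state
H = rng.normal(size=(n_agents, dim))               # agent i observes H[i] @ theta
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # consensus weights
x = np.zeros((n_agents, dim))                      # local estimates
alpha = 0.05                                       # innovation step size

for _ in range(3000):
    y = H @ theta + 0.1 * rng.standard_normal(n_agents)  # noisy scalar obs
    x = W @ x                                            # consensus: mix neighbors
    x += alpha * (y - (H * x).sum(axis=1))[:, None] * H  # innovation: fit local data

print(x.round(2))  # every agent's estimate ends up close to theta
```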
Finally, we discuss" -"---\nabstract: 'Fast development in science and technology has driven the need for proper statistical tools to capture special data features such as abrupt changes or sharp contrast. Many applications in data science seek spatiotemporal reconstruction from a sequence of time-dependent objects with discontinuity or singularity, e.g. dynamic computerized tomography (CT) images with edges. Traditional methods based on Gaussian processes (GP) may not provide satisfactory solutions since they tend to offer over-smooth prior candidates. Recently, the Besov process (BP) defined by wavelet expansions with random coefficients has been proposed as a more appropriate prior for this type of Bayesian inverse problems. While BP outperforms GP in imaging analysis to produce edge-preserving reconstructions, it does not automatically incorporate temporal correlation inherited in the dynamically changing images. In this paper, we generalize BP to the spatiotemporal domain (STBP) by replacing the random coefficients in the series expansion with stochastic time functions following a Q-exponential process which governs the temporal correlation strength. Mathematical and statistical properties about STBP are carefully studied. A white-noise representation of STBP is also proposed to facilitate the point estimation through maximum a posteriori (MAP) and the uncertainty quantification (UQ) by posterior sampling. Two limited-angle CT reconstruction examples and a" -"---\nauthor:\n- '[![image](orcid_16x16.png)](https://orcid.org/0000-0002-9637-4319)'\n- '[![image](orcid_16x16.png)](https://orcid.org/0000-0001-6363-0833)'\n- '[![image](orcid_16x16.png)](https://orcid.org/0000-0002-6867-7080)'\n- '[![image](orcid_16x16.png)](https://orcid.org/0000-0002-0587-7704)'\nbibliography:\n- 'sn-bibliography.bib'\ntitle: 'Feature Selection: A perspective on inter-attribute cooperation.'\n---\n\nIntroduction\n============\n\nLarge amounts of data are being generated in various fields of scientific research, including economic, financial, and marketing applications [@IM.chanda2009mining]. These data often have the characteristic of high dimensionality, which poses a significant challenge for data analysis and knowledge discovery. Redundant and irrelevant features increase the learning difficulty of the prediction model, cause overfitting, and reduce prediction performance [@YAO2022117002]. In order to use machine learning methods effectively, preprocessing of the data is essential. Feature selection has been proven effective in preprocessing high-dimensional data and in enhancing learning efficiency, from both theoretical and practical standpoints [@BLUM1997245; @liu2012libro; @JMLR:Guyon-2003]. Thus, to overcome problems arising from the high dimensionality of data, feature selection removes irrelevant and redundant dimensions by analyzing the entire dataset. Depending on whether the class label is used in the feature selection process or not, the feature selection methods can be categorized into supervised and unsupervised. Unsupervised feature selection is used to explore the dataset without the labeled data. Supervised feature selection uses the labels of samples to select the feature subset. In addition, supervised" -"---\nabstract: 'In this paper, we introduce \u201cLottery and Sprint\u201d, a board game creation methodology that combines human design intuition with the structured Design Sprint framework executed by the AutoGPT system.
By aligning AI-driven processes with human creativity, we aim to facilitate a collaborative game design experience. A user study is conducted to investigate the playability and enjoyment of the generated games, revealing both successes and challenges in employing systems like AutoGPT for board game design. Insights and future research directions are proposed to overcome identified limitations and enhance computation-driven game creation.'\nauthor:\n- Maya Grace Torii\n- Takahito Murakami\n- Yoichi Ochiai\nbibliography:\n- 'camera-ready\\_ref.bib'\ntitle: 'Lottery and Sprint: Generate a Board Game with Design Sprint Method on AutoGPT'\n---\n\nIntroduction\n============\n\nCreating novel, enjoyable and effective board games typically requires a detailed understanding of game mechanics, player engagement, and strategic balance\u00a0[@eck2017leveling]. Inexperienced individuals often face challenges in designing board games that cater to various play styles, objectives, and constraints\u00a0[@book]. To address this, we present the \u201cLottery and Sprint\u201d method. This approach allows human designers to" -"---\nauthor:\n- 'Shaoshuai\u00a0Shi$^*$, Li\u00a0Jiang$^*$, Dengxin\u00a0Dai, and\u00a0Bernt\u00a0Schiele,\u00a0\u00a0'\nbibliography:\n- 'egbib.bib'\ntitle: ' MTR++: Multi-Agent Motion Prediction with Symmetric Scene Modeling and Guided Intention Querying '\n---\n\nMotion prediction constitutes a pivotal undertaking within the realm of contemporary autonomous driving systems, and it has gained significant attention in recent years due to its vital role in enabling robotic vehicles to understand driving scenarios and make judicious decisions [@gu2021densetnt; @jia2021ide; @tolstaya2021identifying; @liu2021multimodal; @ye2021tpcn; @jia2022multi; @ngiam2021scene; @zhou2022hivt; @jia2023towards]. The core of motion prediction lies in accurately anticipating the future actions of traffic participants by considering observed agent states and complex road maps. However, this task is challenging due to the inherent multimodal behaviors exhibited by agents and the intricacies of the surrounding environment.\n\n![The comparison of MTR and MTR++ frameworks. The MTR++ framework surpasses its predecessor, MTR, in several key aspects. In subfigure (a), MTR++ demonstrates its ability to predict the future trajectories of multiple agents simultaneously. Notably, in subfigure (b), MTR++ excels in both inference speed and memory efficiency, particularly when dealing with a larger number of interested agents. Additionally, as depicted in subfigure (c), the MTR++" -"---\nabstract: 'Variable selection is a procedure to attain the truly important predictors from inputs. Complex nonlinear dependencies and strong coupling pose great challenges for variable selection in high-dimensional data. In addition, real-world applications have increased demands for interpretability of the selection process.
A pragmatic approach should not only attain the most predictive covariates, but also provide ample and easy-to-understand grounds for removing certain covariates. In view of these requirements, this paper puts forward an approach for transparent and nonlinear variable selection. In order to transparently decouple information within the input predictors, a three-step heuristic search is designed, via which the input predictors are grouped into four subsets: the relevant to be selected, and the uninformative, redundant, and conditionally independent to be removed. A nonlinear partial correlation coefficient is introduced to better identify the predictors which have a nonlinear functional dependence on the response. The proposed method is model-free and the selected subset can serve as competent input for commonly used predictive models. Experiments demonstrate the superior performance of the proposed method against the state-of-the-art baselines in terms of prediction accuracy and model interpretability.'\naddress:\n- 'School of Economics and Management, Beihang University, Beijing, China'\n- 'Beijing Key Laboratory of Emergency Support" -"---\nabstract: 'In addition to the light curve and energy spectrum, polarization is also important for the study of Gamma-ray burst (GRB) prompt emission. Rotation of the polarization angle (PA) with time will cause depolarization of the time-integrated polarization degree. However, it has rarely been studied before. Here, we use the magnetic reconnection model with a large-scale ordered aligned magnetic field in the emitting region to study the influence of the key parameters on the PA rotations. We find that the half-opening angle of the jet $\\theta_{j}$, the observational angle $\\theta_{V}$, and the bulk Lorentz factor $\\Gamma$ all have significant impacts on the PA rotations. For a fixed $\\theta_{j}\\Gamma_{0}$ value ($\\Gamma_{0}$ is the normalization factor of $\\Gamma$), regardless of concrete $\\theta_{j}$ and $\\Gamma_{0}$ values, PA rotation within $T_{90}$ ($\\triangle$PA) remains roughly unchanged for a given $q\\equiv\\theta_{V}/\\theta_{j}$ value. As the $\\theta_{j}\\Gamma_{0}$ value increases, the $q$ range for $\\triangle$PA$>10^{\\circ}$ becomes smaller. The most significant PA rotation with $\\triangle$PA$\\thicksim90^{\\circ}$ will happen when $\\theta_{j}\\Gamma_{0}\\thicksim100$ and $1.1\\leq q\\leq1.2$. For the top-hat jet, observations of the PA rotation within $T_{90}$ will imply a slightly off-axis observation.'\nauthor:\n- 'Hao-Bing Wang'\n- 'Mi-Xiang Lan'\ntitle: 'Rotation of Polarization Angle in Gamma-Ray Burst Prompt Phase$-$. The Influence of The Parameters'\n---\n\nIntroduction" -"---\nabstract: 'We theoretically study the low-lying collective modes of an even-parity spin-singlet superconducting bilayer, where strong spin-orbit coupling leads to a closely competing odd-parity pairing state. We develop a gauge-invariant theory for the coupling of phase fluctuations to an external electromagnetic field and show that the competing odd-parity pairing instability gives rise to a Bardasis-Schrieffer-like phase mode within the excitation gap. Accounting for the long-range Coulomb interaction, however, we find that this mode is converted into an antisymmetric plasmon and is likely pushed into the quasiparticle continuum.'\nauthor:\n- 'Nico A. Hackner'\n- 'P. M. R.
Brydon'\nbibliography:\n- 'refs.bib'\ndate: 'July 4, 2023'\ntitle: 'Bardasis-Schrieffer-like phase mode in a superconducting bilayer'\n---\n\n[*Introduction.*]{} The first-order field-induced transition within the superconducting state of CeRh$_2$As$_2$\u00a0[@khim2021; @Landaeta2022] has been interpreted as a transition between even- and odd-parity pairing\u00a0[@Mockli2021; @Schertenleib2021; @Cavanagh2022; @Nogaki2022]. This requires a near-degeneracy of these different pairing channels, which naturally arises from the sublattice structure of the unit cell\u00a0[@Fischer2011; @Yoshida2012; @Yoshida2014]. Specifically, for on-site singlet pairing, even- and odd-parity states can be constructed by setting the pair potential to have the same (\u201cuniform\") or opposite (\u201cstaggered\") sign on the two sublattices, respectively. The staggered state is" -"---\nabstract: 'Large-scale plasma simulations are critical for designing and developing next-generation fusion energy devices and modeling industrial plasmas. BIT1 is a massively parallel Particle-in-Cell code designed specifically for studying plasma-material interaction in fusion devices. Its most salient characteristic is the inclusion of collision Monte Carlo models for different plasma species. In this work, we characterize the single-node, multi-node, and I/O performance of the BIT1 code in two realistic cases using several HPC profilers, such as the perf, IPM, Extrae/Paraver, and Darshan tools. We find that the on-node performance of the BIT1 sorting function is the main performance bottleneck. Strong scaling tests show a parallel performance of 77% and 96% on 2,560 MPI ranks for the two test cases. We demonstrate that communication, load imbalance and self-synchronization are important factors impacting the performance of BIT1 on large-scale runs.'\nauthor:\n- 'Jeremy J. Williams'\n- David Tskhakaya\n- Stefan Costea\n- 'Ivy B. Peng'\n- 'Marta Garcia-Gasulla'\n- Stefano Markidis\ntitle: 'Leveraging HPC Profiling & Tracing Tools to Understand the Performance of Particle-in-Cell Monte Carlo Simulations'\n---\n\nIntroduction\n============\n\nPlasma simulations are a key asset and tool for improving current and next-generation plasma-based technologies, such as fusion devices, and industrial" -"---\nabstract: 'In this paper, we propose a novel architecture for a lens antenna array (LAA) designed to work with a small number of antennas and enable angle-of-arrival (AoA) estimation for advanced 5G vehicle-to-everything (V2X) use cases that demand wider bandwidths and higher data rates. We derive the received signal in terms of an optical analysis to account for the variability of the focal region for different carrier frequencies in a wideband multi-carrier system. By taking full advantage of the beam squint effect for multiple pilot signals with different frequencies, we propose a novel reconfiguration of the antenna array (RAA) for the sparse LAA and a max-energy antenna selection (MS) algorithm for the AoA estimation. In addition, this paper presents an analysis of the received power at the single antenna with the maximum energy and compares it to simulation results.
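The max-energy selection (MS) step itself is simple; a toy sketch follows (hypothetical per-antenna samples; the linear index-to-angle map is a placeholder assumption standing in for the actual lens focal-arc geometry):

```python
# Toy sketch of max-energy antenna selection (MS) for AoA estimation with a
# lens array: accumulate energy per antenna over the pilots, pick the
# antenna with maximal energy, and map its index to an angle. The linear
# index-to-angle map below is a placeholder, not the real lens geometry.
import numpy as np

rng = np.random.default_rng(0)
n_ant, n_pilots = 16, 8
r = rng.normal(size=(n_ant, n_pilots)) + 1j * rng.normal(size=(n_ant, n_pilots))
r[10] *= 5  # toy: antenna near the focal point of the incident beam

energy = (np.abs(r) ** 2).sum(axis=1)    # energy accumulated over pilots
k_star = int(np.argmax(energy))          # max-energy antenna
aoa_grid = np.linspace(-60, 60, n_ant)   # placeholder index-to-angle map
print(f"selected antenna {k_star}, estimated AoA {aoa_grid[k_star]:.1f} deg")
```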
In contrast to previous studies on LAA that assumed a large number of antennas, which can require high complexity and hardware costs, the proposed RAA with MS estimation algorithm is shown to meet the requirements of 5G V2X in a vehicular environment while utilizing limited RF hardware and maintaining low complexity.'\nauthor:\n- |\n Joo-Hyun Jo,\u00a0 Jae-Nam Shim,\u00a0\\\n Chan-Byoung Chae,\u00a0 \u00a0Dong Ku Kim," -"---\nabstract: 'Cross-validation is a widely used technique for evaluating the performance of prediction models. It helps avoid the optimism bias in error estimates, which can be significant for models built using complex statistical learning algorithms. However, since the cross-validation estimate is a random value dependent on observed data, it is essential to accurately quantify the uncertainty associated with the estimate. This is especially important when comparing the performance of two models using cross-validation, as one must determine whether differences in error estimates are a result of chance fluctuations. Although various methods have been developed for making inferences on cross-validation estimates, they often have many limitations, such as stringent model assumptions. This paper proposes a fast bootstrap method that quickly estimates the standard error of the cross-validation estimate and produces valid confidence intervals for a population parameter measuring average model performance. Our method overcomes the computational challenge inherent in bootstrapping the cross-validation estimate by estimating the variance component within a random effects model. It is just as flexible as the cross-validation procedure itself. To showcase the effectiveness of our approach, we employ comprehensive simulations and real data analysis across three diverse applications.'\nauthor:\n- |\n Bryan Cai$^1$, Fabio Pellegrini$^2$, Menglan" -"---\nabstract: 'In recent years, channel state information (CSI) at sub-6 GHz has been widely exploited for Wi-Fi sensing, particularly for activity and gesture recognition. In this work, we instead explore mmWave (60 GHz) Wi-Fi signals for gesture recognition/pose estimation. Our focus is on the mmWave Wi-Fi signals so that they can be used not only for high data rate communication but also for improved sensing, *e.g.*, for extended reality (XR) applications. For this reason, we extract spatial beam signal-to-noise ratios (SNRs) from the periodic beam training employed by IEEE 802.11ad devices. We consider a set of 10 gestures/poses motivated by XR applications. We conduct experiments in two environments and with three people. As a comparison, we also collect CSI from IEEE 802.11ac devices. To extract features from the CSI and the beam SNR, we leverage a deep neural network (DNN). The DNN classifier achieves promising results on the beam SNR task with state-of-the-art 96.7% accuracy in a single environment, even with a limited dataset. We also compare the robustness of the beam SNRs with that of the CSI across different environments. Our experiments reveal that features from the CSI generalize without additional re-training, while those from beam SNRs do not. Therefore, re-training" -"---\nabstract: 'Kagome spin ice is one of the canonical examples of highly frustrated magnets. The effective magnetic degrees of freedom in kagome spin ice are Ising spins residing on a two-dimensional network of corner-sharing triangles.
Due to strong geometrical frustration, nearest-neighbor antiferromagnetic interactions on the kagome lattice give rise to a macroscopic number of degenerate classical ground states characterized by ice rules. Elementary excitations at low temperatures are defect-triangles that violate the ice rules and carry an additional net magnetic charge relative to the background. We perform large-scale Glauber dynamics simulations to study the nonequilibrium dynamics of kagome ice under slow cooling. We show that the density of residual charge defects exhibits a power-law dependence on the quench rate for the class of algebraic cooling protocols. The numerical results are well captured by the rate equation for the charge defects based on reaction kinetics theory. As the relaxation time of the kagome ice phase remains finite, there is no dynamical freezing as in the Kibble-Zurek scenario. Instead, we show that the power-law behavior originates from thermal excitations that decay algebraically with time at the late stage of the cooling schedule. Similarities and differences in quench dynamics" -"---\nabstract: 'One of the mainstream schemes for 2D human pose estimation (HPE) is learning keypoint heatmaps with a neural network. Existing methods typically improve the quality of heatmaps by customized architectures, such as high-resolution representation and vision Transformers. In this paper, we propose **DiffusionPose**, a new scheme that formulates 2D HPE as a keypoint-heatmap generation problem from noised heatmaps. During training, the keypoints are diffused to a random distribution by adding noise and the diffusion model learns to recover ground-truth heatmaps from noised heatmaps with respect to conditions constructed by image features. During inference, the diffusion model generates heatmaps from initialized heatmaps in a progressive denoising way. Moreover, we further explore improving the performance of DiffusionPose with conditions from human structural information. Extensive experiments show the prowess of our DiffusionPose, with improvements of 1.6, 1.2, and 1.2 mAP on widely-used COCO, CrowdPose, and AI Challenge datasets, respectively.'\nauthor:\n- 'Zhongwei Qiu$^{1,3}$'\n- Qiansheng Yang$^2$\n- Jian Wang$^2$\n- Xiyu Wang$^3$\n- Chang Xu$^3$\n- Dongmei Fu$^1$\n- Kun Yao$^2$\n- Junyu Han$^2$\n- Errui Ding$^2$\n- Jingdong Wang$^2$\n- '$^1$ University of Science and Technology Beijing, $^2$ Baidu, $^3$ University of Sydney'\nbibliography:\n- 'ref.bib'\ntitle: 'Learning Structure-Guided Diffusion" -"---\nabstract: 'Spin qubits in semiconductor structures bring the promise of large-scale 2D integration, with the possibility to incorporate the control electronics on the same chip. In order to perform error correction on this platform, the characteristic features of spin qubits need to be accounted for. E.g., qubit readout involves an additional qubit, which necessitates careful reconsideration of the qubit layout. The noise affecting spin qubits has further peculiarities such as the strong bias towards dephasing. In this work we consider state-of-the-art error correction codes that require only nearest-neighbour connectivity and are amenable to fast decoding via minimum-weight perfect matching. Compared to the surface code, the XZZX code, the reduced-connectivity surface code, the XYZ$^2$ matching code, and the Floquet code all bring different advantages in terms of error threshold, connectivity, or logical qubit encoding.
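All of the codes listed above are decodable by minimum-weight perfect matching; a minimal sketch of that decoding step (PyMatching assumed installed, demonstrated on a 5-bit repetition code as a small stand-in for the spin-qubit layouts) might look as follows:

```python
# Minimal MWPM decoding sketch with PyMatching on a 5-bit repetition code,
# a stand-in for the matchable codes listed above (surface, XZZX, ...).
# PyMatching is assumed available (pip install pymatching).
import numpy as np
from pymatching import Matching

# Parity checks of the repetition code: each row compares adjacent bits.
H = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1],
])
matching = Matching(H)

error = np.array([0, 0, 1, 0, 0])       # a single bit-flip error
syndrome = H @ error % 2                # measured stabilizer outcomes
correction = matching.decode(syndrome)  # MWPM inference of the error
print(correction, np.array_equal(correction, error))
```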
We present the spin-qubit layout required for each of these error correction codes, accounting for reference qubits required for spin readout. The performance of these codes is studied under circuit-level noise accounting for distinct error rates for gates, readout and qubit decoherence during idling stages.'\nauthor:\n- Bence Het\u00e9nyi\n- 'James R. Wootton'\nbibliography:\n- 'references.bib'\ntitle: Tailoring quantum error correction to spin qubits\n---\n\nIntroduction" -"---\nabstract: 'We propose a presentation for central simple algebras over a field $k$ using Amitsur cohomology. We provide efficient algorithms for computing a cocycle corresponding to any such algebra given by structure constants. If $k$ is a number field, we use this presentation to prove that the explicit isomorphism problem (i.e., finding an isomorphism between central simple algebras given by structure constants) reduces to $S$-unit group computation and other related number theoretical computational problems. This also yields the first polynomial-time quantum algorithm for the explicit isomorphism problem over number fields.'\nauthor:\n- P\u00e9ter Kutas and Micka\u00ebl Montessinos\nbibliography:\n- 'biblio.bib'\ntitle: Efficient computations in central simple algebras using Amitsur cohomology\n---\n\nIntroduction\n============\n\nThe *explicit isomorphism problem* is the algorithmic problem of, given some algebra $A$ isomorphic to $M_d(k)$, constructing an explicit isomorphism $\\varphi\\colon A \\to M_d(k)$. The explicit isomorphism problem may be thought of as a natural problem in computational representation theory. Given a $k$-algebra $A$, one may wish to assay it. That is, compute the Jacobson radical of $A$, and the decomposition of the semi-simple part of $A$ as a sum of simple $k$-algebras, themselves identified with some $M_d(D)$, for $D$ a division $k$-algebra. In general, the" -"---\nabstract: 'We present [Palm]{}, a solution to the Long-Term Action Anticipation (LTA) task utilizing vision-language and large language models. Given an input video with annotated action periods, the LTA task aims to predict possible future actions. We hypothesize that an optimal solution should capture the interdependency between past and future actions, and be able to infer future actions based on the structure and dependency encoded in the past actions. Large language models have demonstrated remarkable commonsense-based reasoning ability. Inspired by that, [Palm]{}\u00a0chains an image captioning model and a large language model. It predicts future actions based on frame descriptions and action labels extracted from the input videos. Our method outperforms other participants in the EGO4D LTA challenge and achieves the best performance in terms of action prediction. Our code is available at .'\nauthor:\n- |\n Daoji Huang Otmar Hilliges Luc Van Gool Xi Wang\\\n ETH Z\u00fcrich\nbibliography:\n- 'egbib.bib'\ntitle: |\n [Palm]{}: Predicting Actions through Language Models\\\n @ Ego4D Long-Term Action Anticipation Challenge 2023\n---\n\nIntroduction {#sec:intro}\n============\n\nPredicting future actions from egocentric videos is inherently a challenging task given the uncertainty of the future, and very often there exist multiple plausible action candidates and execution orders." -"---\nabstract: 'A Laval nozzle can accelerate expanding gas above supersonic velocities, while cooling the gas in the process. 
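The spin-qubit error-correction record above compares codes under noise strongly biased towards dephasing. As a toy illustration of why such a bias matters (not the paper's circuit-level model), here is a Monte Carlo estimate of the logical error rate of a 3-qubit phase-flip repetition code under i.i.d. dephasing, with majority-vote decoding:

```python
import numpy as np

rng = np.random.default_rng(2)

def logical_error_rate(p, n_qubits=3, trials=200_000):
    """Phase-flip repetition code under i.i.d. dephasing: each qubit
    suffers a Z error with probability p; majority-vote decoding fails
    when more than half of the qubits are flipped."""
    flips = rng.random((trials, n_qubits)) < p
    return float(np.mean(flips.sum(axis=1) > n_qubits // 2))

for p in (0.01, 0.05, 0.10):
    # For n = 3 the exact failure probability is 3p^2(1-p) + p^3.
    print(f"p = {p:.2f}: simulated {logical_error_rate(p):.5f}, "
          f"exact {3 * p**2 - 2 * p**3:.5f}")
```

A code tailored to the dominant error type suppresses it quadratically at low physical error rates, which is the basic effect the tailored codes in the record exploit.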
This work investigates this process for microscopic Laval nozzles by means of non-equilibrium molecular dynamics simulations of stationary flow, using grand canonical Monte-Carlo particle reservoirs. We study the expansion of a simple fluid, a mono-atomic gas interacting via a Lennard-Jones potential, through an idealized nozzle with atomically smooth walls. We obtain the thermodynamic state variables pressure, density, and temperature, but also the Knudsen number, speed of sound, velocity, and the corresponding Mach number of the expanding gas for nozzles of different sizes. We find that the temperature is well-defined in the sense that each velocity component of the particles obeys the Maxwell-Boltzmann distribution, but it is anisotropic, especially for small nozzles. The velocity auto-correlation function reveals a tendency towards condensation of the cooled supersonic gas, although the nozzles are too small for the formation of clusters. Overall we find that microscopic nozzles act qualitatively like macroscopic nozzles in that the particles are accelerated to supersonic speeds while their thermal motion relative to the stationary flow is cooled. We find that, like macroscopic Laval nozzles, microscopic nozzles also exhibit a sonic" -"---\nabstract: 'The initial transient phase of an emerging epidemic is of critical importance for data-driven model building, model-based prediction of the epidemic trend, and articulation of control/prevention strategies. In principle, quantitative models for real-world epidemics need to be memory-dependent or non-Markovian, but this presents difficulties for data collection, parameter estimation, computation and analyses. In contrast, the difficulties do not arise in the traditional Markovian models. To uncover the conditions under which Markovian and non-Markovian models are equivalent for transient epidemic dynamics is outstanding and of significant current interest. We develop a comprehensive computational and analytic framework to establish that the transient-state equivalence holds when the average generation time matches the average removal time, resulting in minimal Markovian estimation errors in the basic reproduction number, epidemic forecasting, and evaluation of control strategy. Strikingly, the errors depend on the generation-to-removal time ratio but not on the specific values and distributions of these times, and this universality will further facilitate estimation rectification. Overall, our study provides a general criterion for modeling memory-dependent processes using the Markovian frameworks.'\nauthor:\n- Mi Feng\n- Liang Tian\n- 'Ying-Cheng Lai'\n- Changsong Zhou\ntitle: 'Validity of Markovian modeling for transient memory-dependent epidemic dynamics'\n---\n\nIntroduction" -"---\nabstract: 'We combine searches for scalar resonances at the electroweak scale performed by the Large Hadron Collider experiments ATLAS and CMS where persistent excesses have been observed in recent years. Using both the side-bands of Standard Model Higgs analyses as well as dedicated beyond the Standard Model analyses, we find significant hints for new scalars at $\\approx 95\\,$GeV ($S^\\prime$) and $\\approx152\\,$GeV ($S$). The presence of a $95\\,$GeV scalar is preferred over the Standard Model hypothesis by $3.8\\sigma$, while interpreting the $152\\,$GeV excesses in a simplified model with resonant pair production of $S$ via a new heavier scalar $H(270)$, a global significance of $\\approx5\\sigma$ is obtained. 
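The Laval-nozzle record above reports Mach numbers of the expanding gas. For orientation, the following sketch inverts the textbook quasi-one-dimensional isentropic area-Mach relation by bisection; this is the macroscopic idealization against which molecular-dynamics results of that kind are compared, not the paper's method, and the value γ = 1.4 is an illustrative choice.

```python
def area_ratio(M, gamma=1.4):
    """Isentropic area-Mach relation A/A* for a quasi-1D nozzle."""
    t = (2.0 / (gamma + 1.0)) * (1.0 + 0.5 * (gamma - 1.0) * M * M)
    return t ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))) / M

def mach_from_area(ratio, gamma=1.4, supersonic=True):
    """Invert the area-Mach relation by bisection on the chosen branch."""
    lo, hi = (1.0, 50.0) if supersonic else (1e-6, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        too_big = area_ratio(mid, gamma) > ratio
        # A/A* grows with M on the supersonic branch, shrinks on the subsonic one.
        if too_big == supersonic:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

for r in (1.5, 2.0, 5.0):
    print(f"A/A* = {r}: exit Mach ~ {mach_from_area(r):.3f}")
```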
While the production mechanism of the $S^\prime$ cannot yet be determined, data strongly favours the associated production of $S$, i.e.\u00a0via the decay of a heavier boson $H$ ($pp\to H\to SS^*$). A possible alternative or complementary decay chain is $H\rightarrow SS^{\prime}$, where $S\to WW^*$ ($S^{\prime}$) would be the source of the leptons ($b$-quarks) necessary to explain the multi-lepton anomalies found in Large Hadron Collider data.'\nauthor:\n- Srimoy Bhattacharya\n- Guglielmo Coloretti\n- Andreas Crivellin\n- 'Salah-Eddine Dahbi'\n- Yaquan Fang\n- Mukesh Kumar\n- Bruce Mellado\nbibliography:\n- 'apssampMukesh.bib'\ntitle: Growing Excesses of" -"---\nabstract: 'In this paper, we extend the work of Grohe & Neuen (*ACM T. Comput. Log.*, 2023) to show that the $(6k+3)$-dimensional Weisfeiler\u2013Leman (WL) algorithm can identify graphs of rank width $k$ using only $O(\log n)$ rounds. As a consequence, we obtain that graphs of bounded rank width are identified by $\textsf{FO} + \textsf{C}$ formulas with $6k+4$ variables and quantifier depth $O(\log n)$. Furthermore, in light of the parallel WL implementation due to Grohe & Verbitsky (ICALP 2006), we obtain $\textsf{TC}^{1}$ upper bounds for isomorphism testing of graphs of bounded rank width. Prior to this paper, isomorphism testing for graphs of bounded rank width was not known to be in $\textsf{NC}$.'\nauthor:\n- Michael Levet\n- Nicholas Sieger\nbibliography:\n- 'references.bib'\ntitle: |\n Logarithmic Weisfeiler\u2013Leman Identifies All Graphs of\\\n Bounded Rank Width[^1] \n---\n\nIntroduction {#sec:introduction}\n============\n\nThe Graph Isomorphism problem (GI) takes as input two graphs $G$ and $H$, and asks if there exists an isomorphism $\\varphi : V(G) \\to V(H)$. $\\textsc{GI}$ is in particular conjectured to be $\\textsf{NP}$-intermediate; that is, belonging to $\\textsf{NP}$ but neither in $\\textsf{P}$ nor $\\textsf{NP}$-complete [@Ladner]. Algorithmically, the best known upper-bound is $n^{\\Theta(\\log^{2} n)}$, due to Babai [@BabaiGraphIso]. It remains open as" -"---\nauthor:\n- 'Johannes Milz [^1]'\nbibliography:\n- 'JMilz\\_bilinear\\_epi.bib'\ndate: 'June 19, 2023'\ntitle: 'Consistency of sample-based stationary points for infinite-dimensional stochastic optimization'\n---\n\n[ [ **Abstract.** [We consider stochastic optimization problems with possibly nonsmooth integrands posed in Banach spaces and approximate these stochastic programs via a sample-based approach. We establish the consistency of approximate Clarke stationary points of the sample-based approximations. Our framework is applied to risk-averse semilinear PDE-constrained optimization using the average value-at-risk and to risk-neutral bilinear PDE-constrained optimization. ]{}\\\n]{} ]{}\n\n[ [ **Key words.** [ stochastic programming, sample average approximation, optimization under uncertainty, PDE-constrained optimization, uncertainty quantification, bilinear optimal control]{}\\\n]{} ]{}\n\n[ [ **AMS subject classifications.** [ 65C05, 90C15, 35R60, 90C48, 90C30, 60H25, 49M41, 35Q93 ]{} ]{} ]{}\n\nIntroduction {#sec:intro}\n============\n\nInfinite-dimensional optimization problems arise in a plethora of research fields such as dynamic programming [@Langen1981], statistical estimation [@Gine2016], feedback stabilization of dynamical systems [@Kunisch2020], and optimization problems governed by partial differential equations (PDEs) [@Kouri2018a]. 
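The record above concerns consistency of sample-based (sample average approximation, SAA) stationary points for stochastic programs with possibly nonsmooth integrands. A minimal finite-dimensional toy, far from the Banach-space/PDE setting of the paper, already illustrates the consistency phenomenon: for f(x) = E|x − ξ| the SAA minimizer is the sample median, which converges to the true minimizer, the median of ξ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Nonsmooth stochastic program: minimize f(x) = E|x - xi|, xi ~ Exp(1).
# The true minimizer is the median of xi, i.e. log(2).
true_x = np.log(2.0)

for N in (10, 100, 1_000, 10_000, 100_000):
    xi = rng.exponential(1.0, size=N)
    # The SAA objective (1/N) * sum_i |x - xi_i| is minimized by the
    # sample median, so the SAA minimizer is available in closed form.
    x_N = np.median(xi)
    print(f"N = {N:>6}: SAA minimizer {x_N:.4f} (true {true_x:.4f})")
```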
PDE-constrained optimization is an active research field with a focus on modeling, analyzing and solving complex optimization problems with PDE constraints. For example, numerous applications in the field of renewable and sustainable energy yield challenging PDE-constrained optimization problems, such as" -"---\nabstract: |\n Mathematical morphology is a theory concerned with non-linear operators for image processing and analysis. The underlying framework for mathematical morphology is a partially ordered set with well-defined supremum and infimum operations. Because vectors can be ordered in many ways, finding appropriate ordering schemes is a major challenge in mathematical morphology for vector-valued images, such as color and hyperspectral images. In this context, the irregularity issue plays a key role in designing effective morphological operators. Briefly, the irregularity follows from a disparity between the ordering scheme and a metric in the value set. Determining an ordering scheme using a metric provides reasonable approaches to vector-valued mathematical morphology. Because total orderings correspond to paths on the value space, one attempt to reduce the irregularity of morphological operators would be defining a total order based on the shortest length path. However, this paper shows that the total ordering associated with the shortest length path does not necessarily minimize the irregularity.\n\n [**Keywords**]{}. Vector-valued mathematical morphology, irregularity issue, shortest length path.\nauthor:\n- 'Samuel Francisco[^1] and Marcos Eduardo Valle[^2]'\nbibliography:\n- 'refs.bib'\ntitle: 'Shortest Length Total Orders Do Not Minimize Irregularity in Vector-Valued Mathematical Morphology[^3]'\n---\n\nIntroduction\n============\n\nMathematical morphology is" -"---\nabstract: 'The general sequential decision-making problem, which includes Markov decision processes (MDPs) and partially observable MDPs (POMDPs) as special cases, aims at maximizing a cumulative reward by making a sequence of decisions based on a history of observations and actions over time. Recent studies have shown that the sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs). Despite these advancements, existing approaches typically involve oracles or steps that are not computationally efficient. On the other hand, the upper confidence bound (UCB) based approaches, which have served successfully as computationally efficient methods in bandits and MDPs, have not been investigated for more general PSRs, due to the difficulty of optimistic bonus design in these more challenging settings. This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models. We further characterize the sample complexity bounds for our designed UCB-type algorithms for both online and offline PSRs. In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational efficiency, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.'\nauthor:\n- 'Ruiquan Huang[^1], Yingbin Liang[^2], Jing" -"---\nabstract: 'Temporal Point Processes (TPPs) serve as the standard mathematical framework for modeling asynchronous event sequences in continuous time. However, classical TPP models are often constrained by strong assumptions, limiting their ability to capture complex real-world event dynamics. 
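The PSR record above builds on upper-confidence-bound methods. For readers unfamiliar with the pattern, here is the classical UCB1 rule for a Bernoulli bandit, the simplest instance of the "empirical estimate plus optimism bonus" idea; the paper's total-variation bonus for PSRs is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(4)

def ucb1(means, horizon=20_000):
    """UCB1 on a Bernoulli bandit: play the arm maximizing
    empirical mean + sqrt(2 log t / n_pulls)."""
    k = len(means)
    pulls, rewards = np.zeros(k), np.zeros(k)
    for t in range(1, horizon + 1):
        if t <= k:
            a = t - 1                       # initialize: play each arm once
        else:
            bonus = np.sqrt(2.0 * np.log(t) / pulls)
            a = int(np.argmax(rewards / pulls + bonus))
        r = float(rng.random() < means[a])  # Bernoulli reward
        pulls[a] += 1.0
        rewards[a] += r
    return pulls

print("arm pull counts:", ucb1([0.3, 0.5, 0.6]))  # concentrates on the 0.6 arm
```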
To overcome this limitation, researchers have proposed Neural TPPs, which leverage neural network parametrizations to offer more flexible and efficient modeling. While recent studies demonstrate the effectiveness of Neural TPPs, they often lack a unified setup, relying on different baselines, datasets, and experimental configurations. This makes it challenging to identify the key factors driving improvements in predictive accuracy, hindering research progress. To bridge this gap, we present a comprehensive large-scale experimental study that systematically evaluates the predictive accuracy of state-of-the-art neural TPP models. Our study encompasses multiple real-world and synthetic event sequence datasets, following a carefully designed unified setup. We thoroughly investigate the influence of major architectural components such as event encoding, history encoder, and decoder parametrization on both time and mark prediction tasks. Additionally, we delve into the less explored area of probabilistic calibration for neural TPP models. By analyzing our results, we draw insightful conclusions regarding the significance of history size and the impact of architectural components on predictive" -"---\nabstract: 'The broad line region (BLR) size-luminosity relation has paramount importance for estimating the mass of black holes in active galactic nuclei (AGNs). Traditionally, the size of the H$\beta$ BLR is often estimated from the optical continuum luminosity at 5100 Å, while the size of the H$\alpha$ BLR and its correlation with the luminosity is much less constrained. As a part of the Seoul National University AGN Monitoring Project (SAMP), which provides six-year photometric and spectroscopic monitoring data, we present our measurements of the H$\alpha$ lags of 6 high-luminosity AGNs. Combined with the measurements for 42 AGNs from the literature, we derive the size-luminosity relations of H$\alpha$ BLR against broad H$\alpha$ and 5100 Å continuum luminosities. We find the slope of the relations to be $0.61\pm0.04$ and $0.59\pm0.04$, respectively, which are consistent with the [[H$\beta$]{}]{} size-luminosity relation. Moreover, we find a linear relation between the 5100 Å continuum luminosity and the broad H$\alpha$ luminosity across 7 orders of magnitude. Using these results, we propose a new virial mass estimator based on the H$\alpha$ broad emission line, finding that the previous mass estimates based on the scaling relations in the literature are overestimated by up to 0.7 dex at masses lower than" -"---\nabstract: 'Masked image modelling (MIM) is a powerful self-supervised representation learning paradigm, whose potential has not been widely demonstrated in medical image analysis. In this work, we show the capacity of MIM to capture rich semantic representations of Haematoxylin & Eosin (H&E)-stained images at the nuclear level. Inspired by Bidirectional Encoder representation from Image Transformers ([BEiT]{}) [@beit], we split the images into smaller patches and generate corresponding discrete visual tokens. In addition to the regular grid-based patches, typically used in visual Transformers, we introduce patches of individual cell nuclei. We propose positional encoding of the irregular distribution of these structures within an image. We pre-train the model in a self-supervised manner on H&E-stained whole-slide images of diffuse large B-cell lymphoma, where cell nuclei have been segmented. 
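The neural-TPP record above benchmarks models of asynchronous event sequences. A basic simulation primitive behind such point processes is Ogata-style thinning; here is a minimal sketch for an inhomogeneous Poisson process (the rate function and its bound below are illustrative, and neural TPPs replace the fixed rate with a learned history-dependent intensity).

```python
import numpy as np

rng = np.random.default_rng(5)

def thinning(intensity, lam_max, t_end):
    """Simulate an inhomogeneous Poisson process with rate
    intensity(t) <= lam_max on [0, t_end] by thinning."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate from rate lam_max
        if t > t_end:
            return np.array(events)
        if rng.random() < intensity(t) / lam_max:
            events.append(t)                     # accept w.p. intensity(t)/lam_max

rate = lambda t: 2.0 + 1.5 * np.sin(t)           # bounded above by 3.5
ev = thinning(rate, 3.5, 100.0)
print(len(ev), "events; empirical mean rate", len(ev) / 100.0)
```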
The pre-training objective is to recover the original discrete visual tokens of the masked image on the one hand, and to reconstruct the visual tokens of the masked object instances on the other. Coupling these two pre-training tasks allows us to build powerful, context-aware representations of nuclei. Our model generalizes well and can be fine-tuned on downstream classification tasks, achieving improved cell classification accuracy on the PanNuke dataset by more than $5\%$ compared to" -"---\nabstract: |\n > Large Language Models (LLMs) exhibit exceptional abilities for causal analysis between concepts in numerous societally impactful domains, including medicine, science, and law. Recent research on LLM performance in various causal discovery and inference tasks has given rise to a new ladder in the classical three-stage framework of causality. In this paper, we advance the current research of LLM-driven causal discovery by proposing a novel framework that combines knowledge-based LLM causal analysis with data-driven causal structure learning. To make LLM more than a query tool and to leverage its power in discovering natural and new laws of causality, we integrate the valuable LLM expertise on existing causal mechanisms into statistical analysis of objective data to build a novel and practical baseline for causal structure learning.\n >\n > We introduce a universal set of prompts designed to extract causal graphs from given variables and assess the influence of LLM prior causality on recovering causal structures from data. We demonstrate the significant enhancement of LLM expertise on the quality of recovered causal structures from data, while also identifying critical challenges and issues, along with potential approaches to address them. As a pioneering study, this paper aims to emphasize the" -"---\nabstract: 'Incremental random weight neural networks (IRWNNs) have gained attention in view of their easy implementation and fast learning. However, a significant drawback of IRWNNs is that the relationship between the hidden parameters (node) and the residual error (model performance) is difficult to interpret. To address the above issue, this article proposes an interpretable constructive algorithm (ICA) with a geometric information constraint. First, based on the geometric relationship between the hidden parameters and the residual error, an interpretable geometric information constraint is proposed to randomly assign the hidden parameters. Meanwhile, a node pool strategy is employed to obtain hidden parameters that are more conducive to convergence from hidden parameters satisfying the proposed constraint. Furthermore, the universal approximation property of the ICA is proved. Finally, a lightweight version of ICA is presented for large-scale data modeling tasks. Experimental results on six benchmark datasets and a numerical simulation dataset demonstrate that the ICA outperforms other constructive algorithms in terms of modeling speed, model accuracy, and model network structure. 
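The ICA record above grows a random-weight network under a geometric constraint with a node pool. The sketch below shows only the generic incremental-plus-node-pool pattern: candidate random nodes are scored by their alignment with the current residual and the output weights are refit by least squares. The paper's specific geometric constraint and lightweight variant are not implemented, and all hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def incremental_random_net(X, y, n_nodes=50, pool=20):
    """Grow a single-hidden-layer network one random node at a time:
    draw a pool of candidate (w, b), keep the candidate whose activation
    aligns best with the residual, then refit output weights by least squares."""
    H = np.ones((len(X), 1))                       # bias column
    for _ in range(n_nodes):
        resid = y - H @ np.linalg.lstsq(H, y, rcond=None)[0]
        best, best_score = None, -np.inf
        for _ in range(pool):                      # the 'node pool'
            w, b = rng.normal(size=X.shape[1]), rng.normal()
            h = np.tanh(X @ w + b)
            score = abs(h @ resid) / np.linalg.norm(h)
            if score > best_score:
                best, best_score = h, score
        H = np.column_stack([H, best])
    beta = np.linalg.lstsq(H, y, rcond=None)[0]
    return H, beta

X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)
H, beta = incremental_random_net(X, y)
print("train RMSE:", float(np.sqrt(np.mean((y - H @ beta) ** 2))))
```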
Besides, two industrial application cases are used to validate the effectiveness of ICA in practical applications.'\nauthor:\n- 'Jing Nan, Wei Dai,\u00a0, Guan Yuan, and Ping Zhou,\u00a0, [^1][^2]'\ntitle: An" -"---\nauthor:\n- 'Yifan Zhang, Zhiyu Zhu, Junhui Hou,\u00a0, and Dapeng Wu,\u00a0'\nbibliography:\n- 'multi\\_frame\\_3DOD.bib'\ntitle: 'Spatial-Temporal Enhanced Transformer Towards Multi-Frame 3D Object Detection'\n---\n\n[Shell : Bare Demo of IEEEtran.cls for Computer Society Journals]{}\n\nThree-dimensional (3D) object detection is one of the fundamental tasks in the computer vision community that aims to identify and localize the oriented 3D bounding boxes of objects in specific classes. It plays a critical role in broad applications, including autonomous driving, object manipulation, and augmented reality. Recent years have witnessed the emergence of a large number of deep learning-based single-frame 3D detectors[@shi2020pv; @zhang2023glenet; @zhang2023upidet] with the advent of large-scale datasets[@Sun_2020_CVPR; @caesar2020nuscenes]. Nonetheless, given the intricacies of traffic environments, including long distances and inter-object occlusion, the object information encapsulated within point clouds may inevitably suffer from sparsity or incompleteness. Consequently, these aspects typically engender a sub-optimal performance of the single-frame 3D detectors\u00a0[@yin2021center]. As the point cloud sequence intrinsically provides multiple views of objects, it suggests promising approaches to extract vital spatiotemporal information for facilitating more accurate detection, especially for objects that pose significant detection challenges. By incorporating complementary information from other frames, a multi-frame 3D object detector" -"---\nabstract: 'We study free fermion systems under adaptive quantum dynamics consisting of unitary gates and projective measurements followed by corrective unitary operations. We further introduce a classical flag for each site, allowing for an active or inactive status which determines whether or not the unitary gates are allowed to be applied. In this dynamics, the individual quantum trajectories exhibit a measurement-induced entanglement transition from critical to area-law scaling above a critical measurement rate, similar to previously studied models of free fermions under continuous monitoring. Furthermore, we find that the corrective unitary operations can steer the system into a state characterized by charge-density-wave order. Consequently, an additional phase transition occurs, which can be observed at both the level of the quantum trajectory and the quantum channel. We establish that the entanglement transition and the steering transition are fundamentally distinct. The latter transition belongs to the parity-conserving (PC) universality class, arising from the interplay between the inherent fermionic parity and classical labelling. We demonstrate both the entanglement and the steering transitions via efficient numerical simulations of free fermion systems, which confirm the PC universality class of the latter.'\nauthor:\n- Vikram Ravindranath\n- 'Zhi-Cheng Yang'\n- Xiao Chen\nbibliography:\n- 'biblio\\_quantum.bib'\ntitle:" -"---\nabstract: 'Metamaterials are artificial materials designed to exhibit effective material parameters that go beyond those found in nature. 
Composed of unit cells with rich designability that are assembled into multiscale systems, they hold great promise for realizing next-generation devices with exceptional, often exotic, functionalities. However, the vast design space and intricate structure-property relationships pose significant challenges in their design. A compelling paradigm that could bring the full potential of metamaterials to fruition is emerging: data-driven design. In this review, we provide a holistic overview of this rapidly evolving field, emphasizing the general methodology instead of specific domains and deployment contexts. We organize existing research into data-driven modules, encompassing data acquisition, machine learning-based unit cell design, and data-driven multiscale optimization. We further categorize the approaches within each module based on shared principles, analyze and compare strengths and applicability, explore connections between different modules, and identify open research questions and opportunities.'\nauthor:\n- |\n Doksoo Lee\\\n Dept. of Mechanical Engineering\\\n Northwestern University\\\n Evanston, IL 60208\\\n `dslee@northwestern.edu`\\\n Wei (Wayne) Chen\\\n Dept. of Mechanical Engineering\\\n Texas A&M University\\\n College Station, TX 77840\\\n `w.chen@tamu.edu`\\\n Liwei Wang\\\n Dept. of Mechanical Engineering\\\n Northwestern University\\\n Evanston, IL 60208\\\n `liwei.wang@northwestern.edu`\\\n Yu-Chin Chan\\\n Siemens Corporate Technology\\\n Princeton, New Jersey" -"---\nabstract: 'Xenon dual-phase time projection chambers (TPCs) have proven to be a successful technology in studying physical phenomena that require low-background conditions. With $\SI{40}{t}$ of liquid xenon (LXe) in the TPC baseline design, DARWIN will have a high sensitivity for the detection of particle dark matter, neutrinoless double beta decay ($0\upnu\upbeta\upbeta$), and axion-like particles (ALPs). Although cosmic muons are a source of background that cannot be entirely eliminated, they may be greatly diminished by placing the detector deep underground. In this study, we used Monte Carlo simulations to model the cosmogenic background expected for the DARWIN observatory at four underground laboratories: Laboratori Nazionali del Gran Sasso (LNGS), Sanford Underground Research Facility (SURF), Laboratoire Souterrain de Modane (LSM) and SNOLAB. We determine the production rates of unstable xenon isotopes and tritium due to muon-induced neutron fluxes and muon-induced spallation. These are expected to represent the dominant contributions to cosmogenic backgrounds and thus the most relevant for site selection.'\nauthor:\n- 'M.\u00a0Adrover'\n- 'L.\u00a0Althueser'\n- 'B.\u00a0Andrieu'\n- 'E.\u00a0Angelino'\n- 'J.\u00a0R.\u00a0Angevaare'\n- 'B.\u00a0Antunovic'\n- 'E.\u00a0Aprile'\n- 'M.\u00a0Babicz'\n- 'D.\u00a0Bajpai'\n- 'E.\u00a0Barberio'\n- 'L.\u00a0Baudis'\n- 'M.\u00a0Bazyk'\n- 'N.\u00a0Bell'\n-" -"---\nabstract: 'Procedural Content Generation via Machine Learning (PCGML) faces a significant hurdle that sets it apart from other fields, such as image or text generation: limited annotated data. Many existing methods for procedural level generation via machine learning require a secondary representation besides level images. However, the current methods for obtaining such representations are laborious and time-consuming, which contributes to this problem. 
In this work, we aim to address this problem by utilizing gameplay videos of two human-annotated games to develop a novel multi-tail framework that learns to perform simultaneous level translation and generation. The translation tail of our framework can convert gameplay video frames to an equivalent secondary representation, while its generation tail can produce novel level segments. Evaluation results and comparisons between our framework and baselines suggest that combining the level generation and translation tasks can lead to an overall improved performance regarding both tasks. This represents a possible solution to limited annotated level data, and we demonstrate the potential for future versions to generalize to unseen games.'\nauthor:\n- \n- \nbibliography:\n- 'conference\\_101719.bib'\ntitle: 'Joint Level Generation and Translation Using Gameplay Videos\\'\n---\n\nVideo Games, Procedural Content Generation, Level Design, Level Translation\n\nIntroduction\n============" -"---\nabstract: 'We study the hypothesis of deformation of the invariance of Lorentz transformations produced by the introduction of a universal minimum velocity relative to a preferred frame. Our goal in this work is to apply this hypothesis to superfluids and study its consequences, relating the minimum velocity to the idea of a fluid with superfluid properties. In previous works we related the minimum velocity to the cosmological constant and even to cosmic inflation. We could thus generate a hypothetical superfluid with the characteristics of a cosmological fluid with dark energy properties. The first excited state of this universal superfluid would be a preferred frame from which all other excited states are observed and then we would have a preferred frame $S_{V}$ associated with the critical Landau velocity, thus implying that the universal minimum velocity coincides with the critical Landau velocity, and the objects observed by the preferred frame are excited states of the superfluid. This coincidence between the concepts of minimum velocity and Landau's critical velocity makes Landau's critical velocity a type of limit velocity, modifying the usual causal structure of restricted relativity. Formulating the phenomena in this preferred frame would have the advantage of providing a" -"---\nabstract: 'Motivated by classical work on the numerical integration of ordinary differential equations we present a ResNet-styled neural network architecture that encodes non-expansive (1-Lipschitz) operators, as long as the spectral norms of the weights are appropriately constrained. This is to be contrasted with the ordinary ResNet architecture which, even if the spectral norms of the weights are constrained, has a Lipschitz constant that, in the worst case, grows exponentially with the depth of the network. Further analysis of the proposed architecture shows that the spectral norms of the weights can be further constrained to ensure that the network is an averaged operator, making it a natural candidate for a learned denoiser in Plug-and-Play algorithms. Using a novel adaptive way of enforcing the spectral norm constraints, we show that, even with these constraints, it is possible to train performant networks. 
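The record above constrains spectral norms so that a ResNet-style block is non-expansive. One standard construction consistent with that idea (a sketch under our own assumptions, not necessarily the paper's parametrization) is a gradient-descent-type block x ↦ x − τ Wᵀσ(Wx + b) with σ = ReLU, which is non-expansive whenever τ‖W‖² ≤ 2, since it is a gradient step on a convex function whose gradient is ‖W‖²-Lipschitz:

```python
import numpy as np

rng = np.random.default_rng(7)

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of W by power iteration."""
    v = rng.normal(size=W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

def nonexpansive_block(x, W, b):
    """Residual block x - tau * W^T relu(W x + b); with tau = 1/||W||^2
    the map is averaged, hence 1-Lipschitz (non-expansive)."""
    tau = 1.0 / spectral_norm(W) ** 2
    return x - tau * W.T @ np.maximum(W @ x + b, 0.0)

# Empirical check: the block never expands distances between random pairs.
d, m = 8, 16
W, b = rng.normal(size=(m, d)), rng.normal(size=m)
ratios = []
for _ in range(1000):
    x, y = rng.normal(size=d), rng.normal(size=d)
    num = np.linalg.norm(nonexpansive_block(x, W, b) - nonexpansive_block(y, W, b))
    ratios.append(num / np.linalg.norm(x - y))
print("max Lipschitz ratio observed:", max(ratios))  # should be <= 1
```

Composing such blocks keeps the overall network 1-Lipschitz, in contrast to a plain residual connection, whose Lipschitz constant compounds with depth.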
The proposed architecture is applied to the problem of adversarially robust image classification, to image denoising, and finally to the inverse problem of deblurring.'\nauthor:\n- Ferdia Sherry\n- Elena Celledoni\n- 'Matthias J.\u00a0Ehrhardt'\n- Davide Murari\n- Brynjulf Owren\n- 'Carola-Bibiane Sch\u00f6nlieb'\nbibliography:\n- 'references.bib'\ntitle: Designing Stable Neural Networks using Convex Analysis and ODEs\n---\n\nDeep learning," -"---\nabstract: |\n Speech emotion recognition (SER) is vital for obtaining emotional intelligence and understanding the contextual meaning of speech. Variations of consonant-vowel (CV) phonemic boundaries can enrich acoustic context with linguistic cues, which impacts SER. In practice, speech emotions are treated as single labels over an acoustic segment for a given time duration. However, phone boundaries within speech are not discrete events; therefore, the perceived emotion state should also be distributed over potentially continuous time-windows.\n\n This research explores the implication of acoustic context and phone boundaries on local markers for SER using an attention-based approach. The benefits of using a distributed approach to speech emotion understanding are supported by the results of cross-corpora analysis experiments. Experiments map phones and words to the attention vectors, along with the fundamental frequency, to observe the overlapping distributions and thereby the relationship between acoustic context and emotion. This work aims to bridge psycholinguistic theory research with computational modelling for SER.\nauthor:\n- |\n Anna Ollerenshaw, Md. Asif Jalal, Rosanna Milner, Thomas Hain\\\n Speech and Hearing Research Group, Department of Computer Science, University of Sheffield, UK [^1]\nbibliography:\n- 'main.bib'\ntitle: |\n Empirical Interpretation of the Relationship\\\n Between Speech Acoustic Context and\\
Our findings show that VBN outperforms other existing methods across multiple datasets, and especially in the long-tail.'\nauthor:\n- Oren Barkan\n- Avi Caciularu\n- Idan Rejwan\n- Ori Katz\n- Jonathan Weill\n- Itzik Malkiel\n- Noam Koenigstein\nbibliography:\n- 'references.bib'\ntitle: Representation Learning via Variational Bayesian Networks\n---\n\n<ccs2012> <concept> <concept\\_id>10002951.10002952.10003219.10003223</concept\\_id> <concept\\_desc>Information systems\u00a0Entity resolution</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10002950.10003648.10003649.10003650</concept\\_id> <concept\\_desc>Mathematics" -"---\nabstract: 'We present a deep learning-based iterative approach to solve the discrete heterogeneous Helmholtz equation for high wavenumbers. Combining classical iterative multigrid solvers and convolutional neural networks (CNNs) via preconditioning, we obtain a faster, learned neural solver that scales better than a standard multigrid solver. Our approach offers three main contributions over previous neural methods of this kind. First, we construct a multilevel U-Net-like encoder-solver CNN with an implicit layer on the coarsest grid of the U-Net, where convolution kernels are inverted. This alleviates the field of view problem in CNNs and allows better scalability. Second, we improve upon the previous CNN preconditioner in terms of the number of parameters, computation time, and convergence rates. Third, we propose a multiscale training approach that enables the network to scale to problems of previously unseen dimensions while still maintaining a reasonable training procedure. Our encoder-solver architecture can be used to generalize over different slowness models of various difficulties and is efficient at solving for many right-hand sides per slowness model. We demonstrate the benefits of our novel architecture with numerical experiments on various heterogeneous two-dimensional problems at high wavenumbers.'\nauthor:\n- 'Bar Lerer[^1]'\n- 'Ido Ben-Yair'\n- Eran Treister\nbibliography:\n-" -"---\nabstract: 'Tables form a central component in both exploratory data analysis and formal reporting procedures across many industries. These tables are often complex in their conceptual structure and in the computations that generate their individual cell values. We introduce both a conceptual framework and a reference implementation for declaring, generating, rendering and modeling such tables. We place tables within the existing grammar of graphics paradigm for general statistical visualizations. Our open source `rtables` software implementation utilizes these connections to facilitate an intuitive way to declare complex table structure and construct those tables from data. In the course of this work, we relax several constraints present in the traditional grammar of graphics framing. Finally, `rtables` models instantiated tables as tree structures, which allows powerful, semantically meaningful and self-describing queries and manipulations of tables after creation. 
We showcase our framework in practice by creating complex, realistic example tables.'\nauthor:\n- |\n Gabriel Becker [^1]\\\n No Affiliation\\\n and\\\n Adrian Waddell\\\n Genentech Inc.\u00a0South San Francisco, USA\\\nbibliography:\n- 'allrefs.bib'\ntitle: '**`rtables` - A Framework For Creating Complex Structured Reporting Tables Via Multi-Level Faceted Computations**'\n---\n\n\#1\n\n0\n\n[0]{}\n\n1\n\n[0]{}\n\n[**`rtables` - A Framework For Creating Complex Structured Reporting Tables Via Multi-Level" -"---\nabstract: 'Large-scale visual-language pre-trained models (VLPM) have proven their excellent performance in downstream object detection for natural scenes. However, zero-shot nuclei detection on H&E images via VLPMs remains underexplored. The large gap between medical images and the web-originated text-image pairs used for pre-training makes it a challenging task. In this paper, we attempt to explore the potential of the object-level VLPM, Grounded Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection. Concretely, an automatic prompt design pipeline is devised based on the association binding trait of VLPM and the image-to-text VLPM BLIP, avoiding empirical manual prompt engineering. We further establish a self-training framework, using the automatically designed prompts to generate the preliminary results as pseudo labels from GLIP and refine the predicted boxes in an iterative manner. Our method achieves a remarkable performance for label-free nuclei detection, surpassing other comparison methods. Foremost, our work demonstrates that the VLPM pre-trained on natural image-text pairs exhibits astonishing potential for downstream tasks in the medical field as well. Code will be released at [github.com/VLPMNuD](https://github.com/wuyongjianCODE/VLPMNuD).'\nauthor:\n- Yongjian Wu\n- Yang Zhou\n- Jiya Saiyin\n- Bingzheng Wei\n- Maode Lai\n- Jianzhong Shou\n- Yubo Fan\n- 'Yan Xu$^{(\\textrm{\\Letter})}$'\nbibliography:\n- 'mybibliography.bib'\ntitle:" -"---\nabstract: 'The edge clique cover (ECC) problem\u2014where the goal is to find a minimum cardinality set of cliques that cover all the edges of a graph\u2014is a classic NP-hard problem that has received much attention from both the theoretical and experimental algorithms communities. While small sparse graphs can be solved exactly via the branch-and-reduce algorithm of Gramm et al. \[JEA 2009\], larger instances can currently only be solved inexactly using heuristics with unknown overall solution quality. We revisit computing minimum ECCs exactly in practice by combining data reduction for both the ECC *and* vertex clique cover (VCC) problems. We do so by modifying the polynomial-time reduction of Kou et al. \[Commun. ACM 1978\] to transform a reduced ECC instance to a VCC instance; alternatively, we show it is possible to \u201clift\u201d some VCC reductions to the ECC problem. Our experiments show that combining data reduction for both problems (which we call *synergistic data reduction*) enables finding exact minimum ECCs orders of magnitude faster than the technique of Gramm et al., and allows solving large sparse graphs with up to millions of vertices and edges that have never before been solved. With these new exact solutions, we evaluate the quality" -"---\nabstract: |\n Coalition formation considers the question of how to partition a set of $n$ agents into disjoint coalitions according to their preferences. 
We consider a cardinal utility model with additively separable aggregation of preferences and study the online variant of coalition formation, where the agents arrive in sequence and whenever an agent arrives, they have to be assigned to a coalition immediately. The goal is to maximize social welfare. In a purely deterministic model, the greedy algorithm, where an agent is assigned to the coalition with the largest gain, is known to achieve an optimal competitive ratio, which heavily relies on the range of utilities.\n\n We complement this result by considering two related models. First, we study a model where agents arrive in a random order. We find that the competitive ratio of the greedy algorithm is $\Theta\left(\frac{1}{n^2}\right)$, whereas an alternative algorithm, which is based on alternating between waiting and greedy phases, can achieve a competitive ratio of $\Theta\left(\frac{1}{n}\right)$. Second, we relax the irrevocability of decisions by allowing coalitions to be dissolved into singleton coalitions, presenting a matching-based algorithm that once again achieves a competitive ratio of $\Theta\left(\frac 1n\right)$. Hence, compared to the base model, we present two ways" -"---\nabstract: 'As malicious actors employ increasingly advanced and widespread bots to disseminate misinformation and manipulate public opinion, the detection of Twitter bots has become a crucial task. Though graph-based Twitter bot detection methods achieve state-of-the-art performance, we find that their inference depends on neighbor users multiple hops away from the targets, and fetching neighbors is time-consuming and may introduce sampling bias. At the same time, our experiments reveal that after finetuning on the Twitter bot detection task, pretrained language models achieve competitive performance while not requiring a graph structure during deployment. Inspired by this finding, we propose a novel bot detection framework [`LMBot`]{}[^1] that distills the graph knowledge into language models (LMs) for graph-less deployment in Twitter bot detection to combat the data dependency challenge. Moreover, [`LMBot`]{} is compatible with graph-based and graph-less datasets. Specifically, we first represent each user as a textual sequence and feed them into the LM for domain adaptation. For graph-based datasets, the output of the LM serves as input features for the GNN, enabling [`LMBot`]{} to optimize for bot detection and distill knowledge back to the LM in an iterative, mutually enhancing process. Armed with the LM, we can perform graph-less inference with graph knowledge, which" -"---\nabstract: 'When a detailed model of a stellar population is unavailable, it is most common to assume that stellar masses are independently and identically distributed according to some distribution: the universal initial mass function (IMF). However, stellar masses resulting from causal, long-ranged physics cannot be truly random and independent, and the IMF may vary with environment. To compare stochastic sampling with a physical model, we run a suite of 100 [STARFORGE]{} radiation magnetohydrodynamics simulations of low-mass star cluster formation in $2000M_\odot$ clouds that form $\sim 200$ stars each on average. The stacked IMF from the simulated clouds has a sharp truncation at $\sim 28 M_\odot$, well below the typically-assumed maximum stellar mass $M_{\rm up} \sim 100-150M_\odot$ and the total cluster mass. 
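The record above contrasts stochastic sampling from a universal IMF with self-consistent simulations. A minimal version of the stochastic-sampling baseline is inverse-transform sampling of a truncated power law and drawing stars until the cloud mass is spent; the Salpeter-like slope and mass limits below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_powerlaw_imf(n, alpha=2.35, m_min=0.1, m_max=150.0):
    """Inverse-transform sampling of a truncated power-law IMF,
    dN/dm ~ m**(-alpha) on [m_min, m_max] (Salpeter-like slope)."""
    u = rng.random(n)
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def sample_cluster(m_total=2000.0):
    """Stochastic sampling: draw stars until the cloud mass is used up."""
    masses, total = [], 0.0
    while total < m_total:
        m = float(sample_powerlaw_imf(1)[0])
        masses.append(m)
        total += m
    return np.array(masses)

maxima = [sample_cluster().max() for _ in range(100)]
print("median max stellar mass over 100 clusters:", float(np.median(maxima)))
```

Comparing the spread of such maxima against simulated clouds is the kind of end-result check the record describes.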
The sequence of star formation is not totally random: massive stars tend to start accreting sooner and finish later than the average star. However, final cluster properties such as maximum stellar mass and total luminosity have a similar amount of cloud-to-cloud scatter to random sampling. Therefore, stochastic sampling does not generally model the stellar demographics of a star cluster as it is forming, but may describe the end result fairly well, if the correct IMF – and its" -"---\nabstract: 'Survival prediction based on whole slide images (WSIs) is a challenging task for patient-level multiple instance learning (MIL). Due to the vast amount of data for a patient (one or multiple gigapixel WSIs) and the irregular shape of WSIs, it is difficult to fully explore spatial, contextual, and hierarchical interaction in the patient-level bag. Many studies adopt a random sampling pre-processing strategy and WSI-level aggregation models, which inevitably lose critical prognostic information in the patient-level bag. In this work, we propose a hierarchical vision Transformer framework named HVTSurv, which can encode the local-level relative spatial information, strengthen WSI-level context-aware communication, and establish patient-level hierarchical interaction. Firstly, we design a feature pre-processing strategy, including feature rearrangement and random window masking. Then, we devise three layers to progressively obtain patient-level representation, including a local-level interaction layer adopting Manhattan distance, a WSI-level interaction layer employing spatial shuffle, and a patient-level interaction layer using attention pooling. Moreover, the design of the hierarchical network helps the model become more computationally efficient. Finally, we validate HVTSurv with 3,104 patients and 3,752 WSIs across 6 cancer types from The Cancer Genome Atlas (TCGA). The average C-Index is 2.50-11.30% higher than that of all the prior weakly supervised methods" -"---\nabstract: 'Next generation microwave communications systems face several challenges, particularly from congested communications frequencies and complex propagation environments. Taking inspiration from the Yagi\u2013Uda antenna, we present, and experimentally test, a framework based on the coupled dipole approximation for designing structures composed of a single simple emitter with a passive disordered scattering structure of rods that is optimised to provide a desired radiation pattern. Our numerical method provides an efficient way to model, and then design and test, otherwise inaccessibly large scattering systems.'\nauthor:\n- 'J. R. Capers'\n- 'L. D. Stanfield'\n- 'J. R. Sambles'\n- 'S. J. Boyes'\n- 'A. W. Powell'\n- 'A. P. Hibbins'\n- 'S. A. R Horsley'\ntitle: 'Generalising the Yagi\u2013Uda Antenna: Designing Disordered Metamaterials to Manipulate Antenna Radiation'\n---\n\nIntroduction\n============\n\nIn recent years metamaterials, man\u2013made materials structured at the sub\u2013wavelength scale, have attracted much interest due to their versatile wave\u2013shaping capabilities. From invisibility cloaks [@Leonhardt2006; @Pendry2006] to perfect lenses [@Kaina2015], metamaterials offer novel ways to control and to shape the propagation of electromagnetic, acoustic and elastic waves [@Kadic2019]. 
While early examples of metamaterials consist of periodically patterned metal [@Munk2000] or dielectric [@Joannopoulos2008], more recently disorder has been exploited [@Cao2022] to achieve a" -"---\nabstract: 'We recall some basic computations in the Milnor-Witt K-theory of a field, following Morel. We then focus on the Witt K-theory of a field of characteristic two and give an elementary proof of the fact that it is isomorphic as a graded ring to the Rees algebra of the fundamental ideal of the Witt ring of symmetric bilinear forms using Kato's solution to Milnor's conjecture on quadratic forms.'\nauthor:\n- 'Robin Carlier[^1]'\nbibliography:\n- 'main\\_bibliography.bib'\ntitle: 'Milnor-Witt K-theory and Witt K-theory of a field'\n---\n\nIntroduction\n============\n\nThese notes are intended as a companion to\u00a0[@deglise_KMW_2023]. While the relevance of Milnor-Witt K-theory to $\mathbb{A}^1$-homotopy theory is explained there, we take a much more elementary approach here and focus on the case of fields. In the first section, we recall the basics of Milnor-Witt K-theory following the definition of Hopkins and Morel\u00a0[@morel_puissances2004 Def. 5.1] and recall some basic computations in the ring ${\ensuremath{\mathrm{K}^{MW}}}_*(F)$. The main result in this first section is Theorem\u00a0\[prop\_iso\], which fully describes the negative part of Milnor-Witt K-theory as well as Corollary\u00a0\[cor\_loc\], which is a direct consequence of Theorem\u00a0\[prop\_iso\] and which describes the $\eta$-periodic structure of Milnor-Witt K-Theory. Our exposition in the" -"---\nabstract: 'In this paper we revisit the fixed-confidence identification of the Pareto optimal set in a multi-objective multi-armed bandit model. As the sample complexity to identify the exact Pareto set can be very large, a relaxation that allows outputting some additional near-optimal arms has been studied. In this work we also tackle alternative relaxations that instead allow identifying a relevant *subset* of the Pareto set. Notably, we propose a single sampling strategy, called Adaptive Pareto Exploration, that can be used in conjunction with different stopping rules to take into account different relaxations of the Pareto Set Identification problem. We analyze the sample complexity of these different combinations, quantifying in particular the reduction in sample complexity that occurs when one seeks to identify at most $k$ Pareto optimal arms. We showcase the good practical performance of Adaptive Pareto Exploration on a real-world scenario, in which we adaptively explore several vaccination strategies against Covid-19 in order to find the optimal ones when multiple immunogenicity criteria are taken into account.'\nauthor:\n- |\n Cyrille Kone$^1$\\\n `cyrille.kone@inria.fr`\\\n Emilie Kaufmann$^1$\\\n `emilie.kaufmann@univ-lille.fr`\\\n Laura Richert$^2$\\\n `laura.richert@u-bordeaux.fr`\\\n \\\n $^1$ Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9198-CRIStAL, F-59000 Lille, France\\\n $^2$ Univ. Bordeaux, Inserm, Inria, BPH," -"---\nabstract: 'Droplet formation happens in finite time due to the surface tension force. The linear stability analysis is useful to estimate droplet size but fails to approximate droplet shape. This is due to a highly non-linear flow description near the point where the first pinch-off happens. A one-dimensional axisymmetric mathematical model was first developed by Eggers and Dupont[@EggersDupont1994] using asymptotic analysis. 
This asymptotic approach to the Navier-Stokes equations leads to a universal scaling explaining the self-similar nature of the solution. Numerical models for the one-dimensional model were developed using the finite difference[@EggersDupont1994] and finite element method[@AmbravaneswaranWilkesBasaran2002]. The focus of this study is to provide a robust computational model for one-dimensional axisymmetric droplet formation using the Portable, Extensible Toolkit for Scientific Computation (PETSc). The code is verified using the Method of Manufactured Solutions (MMS) and validated using previous experimental studies done by Zhang and Basaran[@ZhangBasaran1995]. The present model is used for simulating pendant drops of water, glycerol, and paraffin wax, with an aspiration of extending the application to simulate more complex pinch-off phenomena.'\nauthor:\n- '**Darsh K. Nathawani**'\n- '**Matthew G. Knepley**'\nbibliography:\n- 'main.bib'\ntitle: Droplet formation simulation using mixed finite elements\n---\n\n[\\[sec:1\\]Introduction]{}\n=========================\n\nSingularity in free surface" -"---\nauthor:\n- 'V. Vecchiotti[!!]{}'\n- 'and F.L. Villante'\n- 'and G. Pagliaroli'\nbibliography:\n- 'bibliography.bib'\ntitle: Setting an upper limit for the total TeV neutrino flux from the disk of our Galaxy\n---\n\nIntroduction {#sec:outline}\n============\n\nNeutrino telescopes have finally reached the required sensitivity to probe TeV neutrino production in our Galaxy. During the past few years, several upper limits were obtained for Galactic ridge TeV neutrino emission. By considering the region $|l|<40^\\circ$, $|b|<3^\\circ$, ANTARES reported in, e.g., [@ANTARES:2016mwq] an upper bound for the one-flavor neutrino flux at the level of $6.0 \\times 10^{-5} (E_\\nu/1\\,{\\rm GeV})^{-\\Gamma_\\nu}\\, {\\rm GeV}^{-1}\\, {\\rm cm}^{-2}\\,{\\rm s}^{-1}\\,{\\rm sr}^{-1}$ for the assumed neutrino spectral index $\\Gamma_\\nu = 2.5$. Other restrictive limits were also obtained, by using different techniques and/or classes of data by ANTARES [@ANTARES:2017nlh], IceCube [@IceCube:2017trr] and by a joint analysis performed by the two experiments [@ANTARES:2018nyb]. More recently, a possible breakthrough was achieved by ANTARES collaboration that has reported the first possible hint of neutrino emission from the Galactic ridge with $2.2\\,\\sigma$ significance in the angular region $|l|<30^{\\circ}$ and $|b|<2^{\\circ}$ and in the $1-100$ TeV energy band [@ANTARES:2022izu][^1]. In this experimental context, it is particularly relevant to discuss the expected contributions to TeV Galactic" -"---\naddress: |\n $^{1}$ Mullard Space Science Laboratory, University College London, Dorking, RH5 6NT, UK;\\\n $^{2}$ Alan Turing Institute, London, NW1 2DB, UK;\nbibliography:\n- 'sources.bib'\n---\n\nIntroduction\n============\n\nModel selection is the task of evaluating which of the statistical models under consideration best describes observed data. In the Bayesian formalism this involves computing the marginal likelihood (also called the Bayesian model evidence), which gives a way to quantitatively compare the suitability of models for a given problem. Model selection is relevant in a range of fields such as astronomy, biostatistics, economics, medical research and many more. However, in practice Bayesian model selection is often difficult as computing the marginal likelihood requires evaluating a high-dimensional integral, which can be a challenging task. 
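The record above notes that the marginal likelihood is a hard high-dimensional integral. In a conjugate Gaussian toy model it is available in closed form, which makes estimators easy to sanity-check; below, the naive harmonic mean identity E_post[1/likelihood] = 1/Z is evaluated from posterior samples. This is only the classical estimator whose variance problems motivated the learned variant in the record, and all model parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(9)

# Conjugate toy model: x ~ N(0, s0^2), y | x ~ N(x, s^2), single datum y.
s0, s, y = 2.0, 1.0, 1.3
evidence = norm.pdf(y, loc=0.0, scale=np.hypot(s0, s))   # analytic Z

# The posterior is Gaussian; draw exact samples from it.
var_post = 1.0 / (1.0 / s0**2 + 1.0 / s**2)
mean_post = var_post * y / s**2
xs = rng.normal(mean_post, np.sqrt(var_post), size=50_000)

# Naive harmonic mean: 1/Z is the posterior expectation of 1/likelihood.
hm = 1.0 / np.mean(1.0 / norm.pdf(y, loc=xs, scale=s))
print(f"analytic Z = {evidence:.5f}, harmonic mean estimate = {hm:.5f}")
```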
A number of methods to compute the marginal likelihood have been proposed (for reviews see [@clyde2007current; @friel2012estimating]), such as nested sampling [@skilling2006nested; @ashton2022nested].\n\nThe learned harmonic mean estimator was proposed recently by some of the authors of the current article as an effective technique to compute the marginal likelihood [@mcewen2021machine]. The estimator requires only samples from the posterior and so is agnostic to the method used to generate samples, in contrast to nested sampling. Thus, the learned harmonic" -"---\nbibliography:\n- 'refs.bib'\n---\n\n=10000\n\n[**Superselection Rules, Quantum Error Correction, and Quantum Chromodynamics**]{}\\\n\\\n[ *${}^a$Computational Science Initiative, Brookhaven National Lab, Upton, NY 11973, USA\\\n${}^b$Department of Physics, Northeastern University, Boston, MA 02115, USA\\\n${}^c$Institute for Quantum Information and Matter, Caltech, Pasadena, CA 91125, USA\\\n${}^d$Department of Physics, Virginia Tech, Blacksburg, VA 24060, USA\\\n${}^e$Department of Physics and Astronomy, University of British Columbia\\\n6224 Agricultural Road, Vancouver, BC, V6T 1Z1, Canada\\\n${}^f$Institute for Theoretical Physics, KU Leuven\\\nCelestijnenlaan 200D B-3001 Leuven, Belgium\\\n${}^g$Maryland Center for Fundamental Physics, University of Maryland\\\nCollege Park, MD 20740, USA\\\n${}^h$IBM Quantum, T.J. Watson Research Center, Yorktown Heights, NY 10598, USA* ]{}\n\n[^1]\\\n\n**Abstract**\n\n> We investigate the relationship between superselection rules and quantum error correcting codes. We demonstrate that the existence of a superselection rule implies the Knill-Laflamme condition in quantum error correction. As an example, we examine quantum chromodynamics through the lens of quantum error correction, where the proton and neutron states in the model are explored as different superselection sectors that protect logical information. Finally we comment on topological quantum error correcting codes and supersymmetric quantum field theory within this framework.\n\nIntroduction\n============\n\nConnecting quantum error correction and the AdS/CFT correspondence" -"---\nabstract: 'Imbalanced data poses a significant challenge in classification as model performance is affected by insufficient learning from minority classes. Balancing methods are often used to address this problem. However, such techniques can lead to problems such as overfitting or loss of information. This study addresses a more challenging aspect of balancing methods - their impact on model behavior. To capture these changes, Explainable Artificial Intelligence tools are used to compare models trained on datasets before and after balancing. In addition to the variable importance method, this study uses the partial dependence profile and accumulated local effects techniques. Real and simulated datasets are tested, and an open-source Python package `edgaro` is developed to facilitate this analysis. The results obtained show significant changes in model behavior due to balancing methods, which can lead to biased models toward a balanced distribution. These findings confirm that balancing analysis should go beyond model performance comparisons to achieve higher reliability of machine learning models. 
Therefore, we propose a new method, the `performance gain plot`, for an informed data balancing strategy: it enables an optimal selection of the balancing method by analyzing the change in model behavior against the gain in performance.'\nauthor:\n- |\n \\\n [![image](orcid.pdf)Adrian Stando](https://orcid.org/0009-0006-8819-0268)\\" -"---\nabstract: 'The ever-growing complexity of reinforcement learning (RL) tasks demands a distributed RL system to efficiently generate and process a massive amount of data to train intelligent agents. However, existing open-source libraries suffer from various limitations, which impede their practical use in challenging scenarios where large-scale training is necessary. While industrial systems from OpenAI and DeepMind have achieved successful large-scale RL training, their system architecture and implementation details remain undisclosed to the community. In this paper, we present a novel system abstraction on the dataflows of RL training, which unifies practical RL training across diverse applications into a general and flexible framework and enables fine-grained system-level optimizations. Following this abstraction, we develop a scalable, efficient, and extensible distributed RL system called ReaLly Scalable RL ([SRL]{}). The system architecture of [[SRL]{}]{} separates major RL computation components and allows massively parallelized training. We also introduce a collection of techniques to further optimize the system performance. Moreover, [SRL]{} offers user-friendly and extensible interfaces, which facilitate the development of customized algorithms." -"---\nabstract: 'Immersive technologies such as virtual reality (VR), augmented reality (AR), and holograms will change users’ digital experience. These immersive technologies have a multitude of applications, including telesurgeries, teleconferencing, Internet shopping, computer games, etc. Holographic-type communication (HTC) is a type of augmented reality media that provides an immersive experience to Internet users. However, HTC has different characteristics and network requirements, and the existing network architecture and transport protocols may not be able to cope with the stringent network requirements of HTC. Therefore, in this paper, we provide an in-depth and critical study of the transport protocols for HTC. We also discuss the characteristics and the network requirements for HTC. Based on the performance evaluation of the existing transport protocols, we propose a roadmap to design new high-performance transport protocols for immersive applications.'\nauthor:\n- \n- \n- \n- \ntitle: |\n Performance Evaluation of Transport Protocols and Roadmap to a High-Performance Transport Design for Immersive Applications\\\n [^1] \n---\n\nHolographic-type communication, Transport protocols, Immersive applications, AR, VR\n\nIntroduction\n============\n\nVirtual reality (VR), augmented reality (AR), mixed reality (MR), and holographic technologies are examples of immersive technologies. These technologies have gained much attention from industry and academia in recent" -"---\nabstract: 'An important functional of Poisson random measure is the negative binomial process (NBP). 
We use NBP to introduce a generalized Poisson-Kingman distribution and its corresponding random discrete probability measure. This random discrete probability measure provides a new set of priors with more flexibility in nonparametric Bayesian models. It is shown how this random discrete probability measure relates to nonparametric Bayesian priors such as the Dirichlet process, the normalized positive $\alpha$-stable process, the Poisson-Dirichlet process (PDP), and others. An extension of the DP with its almost sure approximation is presented. Using our representation for NBP, we derive a new series representation for the PDP.'\nauthor:\n- \n- \nbibliography:\n- 'sample.bib'\ntitle: Random Discrete Probability Measures Based on Negative Binomial Process\n---\n\nIntroduction {#section1}\n============\n\nThe Poisson random measure (or Poisson point process) relates to other important processes through its functionals. The setup for a general point process follows the exposition in @Kellenberg1983 and @Resnick1987 [Ch.3]. Let $(\mathbb{E},\mathscr{E})$ be a locally compact space with a countable basis with its associated Borel $\sigma$-algebra and also let $(\mathbb{M},\mathscr{M})$ be the space of all point measures defined on $\mathbb{E}$ with its associated $\sigma$-algebra. A point process $\xi$ on $\mathbb{E}$ is a measurable map from the" -"Introduction {#sec:Intro}\n============\n\nModern machine learning models often give rise to complex high-dimensional learning problems that can be computationally very expensive to optimize. Therefore, to reduce computational costs, it becomes more important to use optimization algorithms that have the property of parallelism [@Dean_2012; @Dekel_2012], which can reduce the time required for training. This property has enabled modern state-of-the-art generative models [@Ramesh_2021; @Radford_2021], language models [@Chowdhery_2022; @Touvron_2023], and many others [@Wang_2020]. However, it is worth noting that modern machine learning algorithms with the property of parallelism are often based on Stochastic Gradient Descent (SGD) [@Robbins_1951], which relies on stochastic estimates of the gradient of the objective function, introducing a variance component that can affect convergence (when the variance is large). As a rule, the convergence of such algorithms can be improved by batching the stochastic gradient estimates, which are easily distributed across several computing resources (machines). That is why in the last 5 years algorithms with the property of parallelism for solving federated optimization problems [@Woodworth_2020; @Woodworth_2021; @Lobanov_2022] and distributed optimization problems (centralized [@Stich_2021; @Wang_2022; @Balasubramanian_2022] and decentralized [@Yu_2019; @Li_2021; @Kovalev_2022] approaches) have been actively developed.\n\nDistributed decentralized optimization arises when there is no central server that receives" -"---\nabstract: 'Increasing penetrations of electric vehicles (EVs) present a large source of flexibility, which can be used to assist in balancing the power grid. The flexibility of an individual EV can be quantified as a convex polytope and the flexibility of a population of EVs is the Minkowski sum of these polytopes. In general, computing the exact Minkowski sum is intractable. However, exploiting symmetry in a restricted but significant case enables an efficient computation of the aggregate flexibility. 
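For intuition on why restricted cases can be tractable, note that the Minkowski sum of axis-aligned boxes is again a box whose bounds are the sums of the per-box bounds. A toy sketch follows; real EV flexibility sets couple power and cumulative energy, so they are not simple boxes, and the charger limits below are invented.

```python
import numpy as np

def aggregate_boxes(lower, upper):
    """Minkowski sum of axis-aligned boxes [lower_i, upper_i] in R^T.

    lower, upper: arrays of shape (n_evs, T) with per-EV power bounds over
    T time steps. The sum of such boxes is the box of summed bounds.
    """
    return lower.sum(axis=0), upper.sum(axis=0)

rng = np.random.default_rng(1)
T, n_evs = 24, 1000                           # hypothetical horizon and fleet size
lower = np.zeros((n_evs, T))                  # no discharging in this toy model
upper = rng.uniform(3.0, 11.0, (n_evs, T))    # made-up charger limits in kW

lo, hi = aggregate_boxes(lower, upper)
print("aggregate power bounds, first 4 steps:", list(zip(lo[:4], hi[:4])))
```

In this special case the aggregation cost is linear in the fleet size, in contrast with the exponential blow-up of a generic Minkowski sum.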
This results in a polytope with exponentially many vertices and facets with respect to the time horizon. We show how to use a lifting procedure to provide a representation of this polytope with a reduced number of facets, which makes optimising over it more tractable. Finally, a disaggregation procedure that takes an aggregate signal and computes dispatch instructions for each EV in the population is presented. The complexity of the algorithms presented is independent of the size of the population and polynomial in the length of the time horizon. We evaluate this work against existing methods in the literature, and show that our method guarantees optimality with a lower computational burden.'\nauthor:\n- 'Karan Mukhi and Alessandro Abate[^1][^2]'\nbibliography:\n- 'references.bib'" -"---\nabstract: 'Machine learning from training data with a skewed distribution of examples per class can lead to models that favor performance on common classes at the expense of performance on rare ones. AudioSet has a very wide range of priors over its 527 sound event classes. Classification performance on AudioSet is usually evaluated by a simple average over per-class metrics, meaning that performance on rare classes is equal in importance to the performance on common ones. Several recent papers have used dataset balancing techniques to improve performance on AudioSet. We find, however, that while balancing improves performance on the public AudioSet evaluation data it simultaneously hurts performance on an unpublished evaluation set collected under the same conditions. By varying the degree of balancing, we show that its benefits are fragile and depend on the evaluation set. We also do not find evidence indicating that balancing improves rare class performance relative to common classes. We therefore caution against blind application of balancing, as well as against paying too much attention to small improvements on a public evaluation set.'\naddress: |\n Google Research, Mountain View, CA, and New York, NY, USA\\\n *{channingmoore, dpwe, efonseca, shershey, arenjansen, plakal}@google.com*\nbibliography:\n- 'citation.bib'\ntitle:" -"---\nabstract: 'We consider viscous steady streaming induced by oscillatory flow past a cylinder between two plates, where the cylinder’s axis is normal to the plates. While this phenomenon was first studied in the 1930s, it has received renewed interest recently for possible applications in particle manipulations and non-Newtonian flows. The flow is driven at the ends of the channel by the boundary condition, which is a series solution of the oscillating flow problem in a rectangular channel in the absence of a cylinder. We use a combination of Fourier series and an asymptotic expansion to study the confinement effects for steady streaming. The Fourier series in time naturally simplifies to a finite series. In contrast, it is necessary to truncate the Fourier series in $z$, which is in the direction of the axis of the cylinder, to solve numerically. The successive equations for the Fourier coefficients resulting from the asymptotic expansion are then solved numerically using finite element methods. We use our model to evaluate how steady streaming depends on the domain width and distance from the cylinder to the outer walls, including the possible breaking of the four-fold symmetry due to the domain shape. 
We utilize the tangential steady-streaming" -"---\nabstract: |\n Using machine learning models to generate synthetic data has become common in many fields. Technology to generate synthetic transactions that can be used to detect fraud is also growing fast. Generally, this synthetic data contains only information about the transaction, such as the time, place, and amount of money. It does not usually contain the individual user’s characteristics (age and gender are occasionally included). Using relatively complex synthetic demographic data may increase the complexity of transaction data features, thus improving the fraud detection performance. Benefiting from developments of machine learning, some deep learning models have the potential to perform better than other well-established synthetic data generation methods, such as microsimulation. In this study, we built a deep-learning Generative Adversarial Network (GAN), called DGGAN[^1], which will be used for demographic data generation. Our model generates samples during model training, which we found important to overcome class imbalance issues. This study can help improve the understanding of synthetic data and further explore the application of synthetic data generation in card fraud detection.\\\nauthor:\n- |\n Shuo Wang\\\n Department of Computer Science\\\n Memorial University of Newfoundland\\\n St. John’s, NL A1C5S7\\\n `shuow@mun.ca`\\\n Terrence Tricco\\\n Department of Computer Science\\\n Memorial University of Newfoundland\\" -"---\nabstract: 'We have performed the first systematic search of the full GALEX data archive for astrophysical variability on timescales of seconds to minutes by rebinning data across the whole mission to 30-second time resolution. The result is the GALEX Flare Catalog (GFCAT), which describes 1426 ultraviolet variable sources, including stellar flares, eclipsing binaries, $\delta$ Scuti and RR Lyrae variables, and Active Galactic Nuclei (AGN). Many of these sources have never previously been identified as variable. We have also assembled a table of observations of ultraviolet flares and accompanying statistics and measurements, including energies, and of candidate eclipsing stars. This effort was enabled by a significantly-enhanced version of the gPhoton software for analyzing time-domain GALEX data; this “gPhoton2” package is available to support follow-on efforts.'\nauthor:\n- 'Chase C. Million'\n- 'Michael St. Clair'\n- 'Scott W. Fleming'\n- Luciana Bianchi\n- Rachel Osten\nbibliography:\n- 'gfcat.bib'\ntitle: 'The GFCAT: a catalog of ultraviolet variables observed by GALEX with sub-minute resolution'\n---\n\nIntroduction {#sec:intro}\n============\n\nMany known and theorized astrophysical phenomena can or could only be observed at “fast” timescales (seconds to minutes): flares (including ‘conventional’ stellar flares, tidal disruption flares, and shock breakout flares); fast binary systems; fast AGNs;" -"---\nabstract: |\n This is a foundation for algebraic geometry, developed internal to the Zariski topos, building on the work of Kock and Blechschmidt ([@kock-sdg]\[I.12\], [@ingo-thesis]). The Zariski topos consists of sheaves on the site opposite to the category of finitely presented algebras over a fixed ring, with the Zariski topology, i.e.\u00a0generating covers are given by localization maps $A\to A_{f_1}$ for finitely many elements $f_1,\dots,f_n$ that generate the ideal $(1)=A\subseteq A$. 
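For orientation, the covering condition just stated can be written out as a partition of unity; this is a standard reformulation of generating the unit ideal, not anything specific to this paper.

```latex
% The localizations A -> A_{f_i} form a Zariski cover exactly when
% f_1, ..., f_n generate the unit ideal, i.e.
\exists\, g_1, \dots, g_n \in A
\quad\text{such that}\quad
\sum_{i=1}^{n} g_i f_i = 1 .
```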
We use homotopy type theory together with three axioms as the internal language of a (higher) Zariski topos.\n\n One of our main contributions is the use of higher types \u2013 in the homotopical sense \u2013 to define and reason about cohomology. Actually computing cohomology groups, seems to need a principle along the lines of our \u201cZariski local choice\u201d axiom, which we justify as well as the other axioms using a cubical model of homotopy type theory.\nauthor:\n- 'Felix Cherubini, Thierry Coquand and Matthias Hutzler'\ntitle: A Foundation for Synthetic Algebraic Geometry\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nAlgebraic geometry is the study of solutions of polynomial equations using methods from geometry. The central geometric objects in algebraic geometry are called *schemes*. Their basic building blocks are called *affine" -"---\nauthor:\n- 'Andreas Nygaard, Emil Brinch Holm, Thomas Tram, and Steen Hannestad'\nbibliography:\n- 'bibliography.bib'\ntitle: Decaying Dark Matter and the Hubble Tension\n---\n\nIntroduction\n============\n\nAlthough the nature of dark matter remains unknown, a brief look at the Standard Model contents of the Universe reveals that a majority of the known particles are unstable and decay. By analogy, a natural question to ask is whether dark matter may decay on cosmological timescales. Decays of dark matter into electromagnetically interacting particles are strongly constrained by CMB observations\u00a0[@Zhang:2007zzh]. Decays into a dark sector, so-called *invisible decays*, on the other hand, are much less constrained because no direct observation channel exists. Nonetheless, there are strong constraints on models that assume *all* of dark matter to decay on cosmological timescales (e.g., the simple observation that we observe it today)\u00a0[@Audren:2014bca; @Simon:2022ftd]. However, these constraints may always be evaded by considering a scenario where only a fraction of the dark matter decays invisibly. It is this class of models we study in this chapter.\n\nThere exist several phenomenological models of invisibly decaying dark matter, largely varying in their assumptions on the decaying particle (cold or warm) and on the decay products (massive" -"---\nabstract: 'Vision transformers (ViTs) have significantly changed the computer vision landscape and have periodically exhibited superior performance in vision tasks compared to convolutional neural networks (CNNs). Although the jury is still out on which model type is superior, each has unique inductive biases that shape their learning and generalization performance. For example, ViTs have interesting properties with respect to early layer non-local feature dependence, as well as self-attention mechanisms which enhance learning flexibility, enabling them to ignore out-of-context image information more effectively. We hypothesize that this power to ignore out-of-context information (which we name *patch selectivity*), while integrating in-context information in a non-local manner in early layers, allows ViTs to more easily handle occlusion. In this study, our aim is to see whether we can have CNNs *simulate* this ability of patch selectivity by effectively hardwiring this inductive bias using Patch Mixing data augmentation, which consists of inserting patches from another image onto a training image and interpolating labels between the two image classes. 
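A minimal sketch of such patch-level mixing follows; it illustrates the general idea only, not the authors' exact recipe, and the patch size, mixing ratio, and PyTorch tensor layout are assumptions.

```python
import torch

def patch_mixing(images, labels, patch=16, ratio=0.3):
    """Replace a random fraction of patches in each image with patches from
    another image in the batch, and mix the one-hot labels by the replaced area.

    images: (B, C, H, W) with H, W divisible by `patch`; labels: (B, num_classes).
    """
    B, C, H, W = images.shape
    gh, gw = H // patch, W // patch
    perm = torch.randperm(B)                              # donor image per sample
    mask = (torch.rand(B, 1, gh, gw) < ratio).float()     # patch-level selection
    mask = mask.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    mixed = images * (1 - mask) + images[perm] * mask
    lam = mask.mean(dim=(1, 2, 3)).unsqueeze(1)           # fraction actually replaced
    mixed_labels = (1 - lam) * labels + lam * labels[perm]
    return mixed, mixed_labels

x = torch.randn(8, 3, 224, 224)
y = torch.nn.functional.one_hot(torch.randint(0, 10, (8,)), 10).float()
x_aug, y_aug = patch_mixing(x, y)
```

The label weight is taken to be the replaced area fraction, matching the "interpolating labels" description above.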
Specifically, we use Patch Mixing to train state-of-the-art ViTs and CNNs, assessing its impact on their ability to ignore out-of-context patches and handle natural occlusions. We find that ViTs do not improve nor degrade when" -"---\nabstract: 'We study an optical Raman square lattice with $\mathrm{U}(1)$ synthetic gauge flux to show a chiral spin liquid (CSL) phase for cold atoms based on slave-rotor theory and spinon mean-field theory, respectively. An effective U($1$) gauge flux generated by Raman potentials plays a major role in realizing the CSL phase. By using slave-rotor techniques we find a CSL phase in the intermediate on-site Fermi-Hubbard interaction regime. For the strongly interacting regime we derive an effective spin model including up to four-spin interactions. By spinon mean-field analysis it is shown that the CSL phase is stabilized in the case of strong magnetic frustration. The two mean-field approximation methods give consistent phase diagrams and provide qualitative numerical evidence of the CSL phase.'\nauthor:\n- Jian Yang\n- 'Xiong-Jun Liu'\ntitle: 'Chiral spin liquid phase in an optical lattice at mean-field level'\n---\n\nIntroduction {#sec1}\n============\n\nMore than thirty years ago, Kalmeyer and Laughlin pointed out that the ground state wave function of the antiferromagnetic Heisenberg Hamiltonian on the two-dimensional triangular lattice is equivalent to a fractional quantum Hall state for bosons\u00a0[@Kalmeyer]. The elementary excitations of the ground state are neutral spin-$\frac{1}{2}$ particles. They obey fractional (braiding) statistics and are called anyons." -"---\nabstract: '3D single object tracking with point clouds is a critical task in 3D computer vision. Previous methods usually input the last two frames and use the predicted box to get the template point cloud in the previous frame and the search area point cloud in the current frame respectively, then use similarity-based or motion-based methods to predict the current box. Although these methods achieved good tracking performance, they ignore the historical information of the target, which is important for tracking. In this paper, compared to inputting two frames of point clouds, we input multiple frames of point clouds to encode the spatio-temporal information of the target and learn its motion information implicitly, which builds the correlations among different frames to track the target in the current frame efficiently. Meanwhile, rather than directly using the point features for feature fusion, we first crop the point cloud features into many patches, then use a sparse attention mechanism to encode the patch-level similarity, and finally fuse the multi-frame features. Extensive experiments show that our method achieves competitive results on challenging large-scale benchmarks (62.6% on KITTI and 49.66% on NuScenes).'\nauthor:\n- 'Yubo Cui, Zhiheng Li, Zheng Fang$^*$[^1][^2] [^3] [^4]" -"---\nabstract: |\n We prove a uniqueness result for the broken ray transform acting on the sums of functions and $1$-forms on surfaces in the presence of an external force and a reflecting obstacle. We assume that the considered twisted geodesic flows have nonpositive curvature. The broken rays are generated from the twisted geodesic flows by the law of reflection on the boundary of a suitably convex obstacle. 
Our work generalizes recent results for the broken geodesic ray transform on surfaces to more general families of curves including the magnetic flows and Gaussian thermostats.\n\n [**Keywords.**]{} geodesic ray transform, magnetic flows, Gaussian thermostats, broken rays, inverse problems.\n\n [**Mathematics Subject Classification (2020)**]{}: 44A12, 58C99, 37E35\naddress:\n- ' Indian Institute of Science Education and Research (IISER) Bhopal, India'\n- ' Indian Institute of Science Education and Research (IISER) Bhopal, India'\n- ' Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge CB3 0WB, UK'\nauthor:\n- 'Shubham R. Jathar'\n- Manas Kar\n- Jesse Railo\nbibliography:\n- 'math.bib'\ntitle: Broken ray transform for twisted geodesics on surfaces with a reflecting obstacle\n---\n\nIntroduction\n============\n\nThis article studies generalizations of the geodesic ray transform to general families of curves. Our main" -"---\nabstract: 'We establish the existence of generalized Busemann functions and Gibbs-Dobrushin-Landford-Ruelle measures for a general class of lattice random walks in random potential with finitely many admissible steps. This class encompasses directed polymers in random environments, first- and last-passage percolation, and elliptic random walks in both static and dynamic random environments in all dimensions and with minimal assumptions on the random potential.'\naddress:\n- |\n Sean Groathouse\\\n University of Utah\\\n Department of Mathematics\\\n 155 S 1400 E\\\n Salt Lake City, UT 84112\n- |\n Christopher Janjigian\\\n Purdue University\\\n Department of Mathematics\\\n 150 N University St\\\n West Lafayette, IN 47907\n- |\n Firas Rassoul-Agha\\\n University of Utah\\\n Department of Mathematics\\\n 155 S 1400 E\\\n Salt Lake City, UT 84112\nauthor:\n- Sean Groathouse\n- Christopher Janjigian\n- 'Firas Rassoul-Agha'\nbibliography:\n- 'firasbib2010.bib'\ndate: '[JanuaryFebruaryMarchAprilMayJuneJulyAugustSeptemberOctoberNovemberDecember, ]{}'\ntitle: Existence of generalized Busemann functions and Gibbs measures for random walks in random potentials\n---\n\nIntroduction\n============\n\nThe model of a random walk interacting with a random potential (RWRP) has been a major topic of research in probability over the last half-century. Through various specializations, it encompasses random walks in both static and dynamic random environments, directed polymers in random environments, as well as" -"---\nabstract: 'This paper introduces CORAE, a novel web-based open-source tool for *COntinuous Retrospective Affect Evaluation*, designed to capture continuous affect data about interpersonal perceptions in dyadic interactions. Grounded in behavioral ecology perspectives of emotion, this approach replaces valence as the relevant rating dimension with approach and withdrawal, reflecting the degree to which behavior is perceived as increasing or decreasing social distance. We conducted a study to experimentally validate the efficacy of our platform with 24 participants. The tool\u2019s effectiveness was tested in the context of dyadic negotiation, revealing insights about how interpersonal dynamics evolve over time. We find that the continuous affect rating method is consistent with individuals\u2019 perception of the overall interaction. 
This paper contributes to the growing body of research on affective computing and offers a valuable tool for researchers interested in investigating the temporal dynamics of affect and emotion in social interactions.'\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: 'CORAE: A Tool for Intuitive and Continuous Retrospective Evaluation of Interactions '\n---\n\nAffective computing, interpersonal perception, annotation tool,continuous affect, human-computer interaction\n\nIntroduction\n============\n\nAffect is a dynamic phenomenon. Observable behavior, subjective experience, and physiology all dynamically evolve across time [@kuppens2017emotion]. In interactions, affective dynamics co-evolve with those" -"---\nabstract: 'Video retrieval (VR) involves retrieving the ground truth video from the video database given a text caption or vice-versa. The two important components of compositionality: objects & attributes and actions are joined using correct semantics to form a proper text query. These components (objects & attributes, actions and semantics) each play an important role to help distinguish among videos and retrieve the correct ground truth video. However, it is unclear what is the effect of these components on the video retrieval performance. We therefore, conduct a systematic study to evaluate the compositional and semantic understanding of video retrieval models on standard benchmarks such as MSRVTT, MSVD and DIDEMO. The study is performed on two categories of video retrieval models: (i) which are pre-trained on video-text pairs and fine-tuned on downstream video retrieval datasets (Eg. Frozen-in-Time, Violet, MCQ etc.) (ii) which adapt pre-trained image-text representations like CLIP for video retrieval (Eg. CLIP4Clip, XCLIP, CLIP2Video etc.). Our experiments reveal that actions and semantics play a minor role compared to objects & attributes in video understanding. Moreover, video retrieval models that use pre-trained image-text representations (CLIP) have better semantic and compositional understanding as compared to models pre-trained on video-text data.'\nauthor:" -"---\nabstract: 'We show that every Borel graph $G$ of subexponential growth has a Borel proper edge-coloring with $\\Delta(G) + 1$ colors. We deduce this from a stronger result, namely that an $n$-vertex (finite) graph $G$ of subexponential growth can be properly edge-colored using $\\Delta(G) + 1$ colors by an $O(\\log^\\ast n)$-round deterministic distributed algorithm in the [$\\mathsf{LOCAL}$]{}model, where the implied constants in the $O(\\cdot)$ notation are determined by a bound on the growth rate of $G$.'\nauthor:\n- Anton\u00a0Bernshteyn\n- Abhishek\u00a0Dhawan\nbibliography:\n- 'references.bib'\ntitle: |\n Borel Vizing\u2019s Theorem for Graphs\\\n of Subexponential Growth\n---\n\nIntroduction\n============\n\nIn this note we study a classical concept in graph theory\u2014namely proper edge-colorings\u2014from the perspective of descriptive set theory. This line of inquiry forms part of the active and growing field of [[***[descriptive combinatorics]{}***]{}]{}, which was created in the seminal work of Kechris, Solecki, and Todorcevic [@KST]. For surveys of this area, see [@KechrisMarks] by Kechris and Marks and [@Pikh_survey] by Pikhurko. We use standard terminology from graph theory [@Diestel; @BondyMurty] and from descriptive set theory [@KechrisDST; @AnushDST]. 
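To fix the basic combinatorial notion with a small, purely illustrative check (none of the Borel or distributed machinery is involved): a proper edge-coloring gives distinct colors to edges sharing a vertex, and Vizing's theorem guarantees that $\Delta(G)+1$ colors suffice for finite simple graphs.

```python
from collections import defaultdict

def is_proper_edge_coloring(edges, coloring):
    """Check that no two edges sharing a vertex receive the same color.

    edges: iterable of 2-tuples (u, v); coloring: dict mapping edge -> color.
    """
    seen = defaultdict(set)          # vertex -> colors already used at that vertex
    for e in edges:
        c = coloring[e]
        for v in e:
            if c in seen[v]:
                return False
            seen[v].add(c)
    return True

# A 4-cycle has maximum degree 2; two colors (<= Delta + 1) suffice.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
coloring = {(0, 1): 0, (1, 2): 1, (2, 3): 0, (3, 0): 1}
assert is_proper_edge_coloring(edges, coloring)
```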
A graph $G$ consists of a vertex set $V(G)$ and an edge set $E(G) \\subseteq [V(G)]^2$, where for a set $X$, we" -"---\nabstract: 'Indirect time-of-flight (iToF) imaging allows us to capture dense depth information at a low cost. However, iToF imaging often suffers from multipath interference (MPI) artifacts in the presence of scattering media, resulting in severe depth-accuracy degradation. For instance, iToF cameras cannot measure depth accurately through fog because ToF active illumination scatters back to the sensor before reaching the farther target surface. In this work, we propose a polarimetric iToF imaging method that can capture depth information robustly through scattering media. Our observations on the principle of indirect ToF imaging and polarization of light allow us to formulate a novel computational model of scattering-aware polarimetric phase measurements that enables us to correct MPI errors. We first devise a scattering-aware polarimetric iToF model that can estimate the phase of unpolarized backscattered light. We then combine the optical filtering of polarization and our computational modeling of unpolarized backscattered light via scattering analysis of phase and amplitude. This allows us to tackle the MPI problem by estimating the scattering energy through the participating media. We validate our method on an experimental setup using a customized off-the-shelf iToF camera. Our method outperforms baseline methods by a significant margin by means of our scattering" -"---\nabstract: 'We compute the effect of a *d*-wave magnetization (altermagnetism) on the spectrum of bound states (Andreev levels) in a junction between two *s*-wave superconductors (gap $\\Delta_0$, phase difference $\\phi$). Compared to a nonmagnetic junction, the $\\phi$-dependence of the spectrum is shifted by an offset $\\pm\\delta\\phi$, dependent on the spin direction, so that the Andreev levels become spin-polarized. In a planar junction, oriented along the crystalline axis of $d_{xy}$-wave symmetry, the excitation energies are determined by the normal-state transmission probability $T$ according to $E=\\Delta_0\\sqrt{1-T\\sin^2\\tfrac{1}{2}(\\phi\\pm\\delta\\phi)}$. We calculate the corresponding Josephson energy and supercurrent, recovering the 0\u2013$\\pi$ transition of related studies.'\nauthor:\n- 'C. W. J. Beenakker'\n- 'T. Vakhtel'\ndate: June 2023\ntitle: 'Phase-shifted Andreev levels in an altermagnet Josephson junction'\n---\n\nIntroduction\n============\n\nAltermagnets (metals with a *d*-wave magnetization that \u201calternates\u201d direction in momentum space) differ from ferromagnets and antiferromagnets in that they combine a spin-polarized Fermi surface with a vanishing net magnetization [@Sme22a; @Sme22b; @Maz22a; @Maz23]. Candidate altermagnetic materials include $\\text{RuO}_2$, MnTe, and $\\text{Mn}_5\\text{Si}_3$ [@Fen22; @Occ22; @Gon23; @Sme22c]. The interplay of altermagnetism and superconductivity produces unusual effects [@Maz22b], including orientation-dependent Andreev reflection [@Sun23; @Pap23], negative critical supercurrent with finite-momentum Cooper pairing [@Oua23; @Zha23], and topological Majorana modes [@Zhu23;" -"---\nabstract: 'Information technology and telecommunications have rapidly permeated various domains, resulting in a significant influx of data traversing the networks between computers. Consequently, research of cyberattacks in computer systems has become crucial for many organizations. 
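As background for the PSK modulation step described in the abstract that follows, a minimal sketch of binary phase-shift keying; the carrier frequency, bit rate, and sampling rate are toy values, scaled far below the terahertz regime discussed below.

```python
import numpy as np

def bpsk_modulate(bits, fc=10e3, bit_rate=1e3, fs=100e3):
    """Map bits {0, 1} to carrier phases {pi, 0} on a sinusoid (BPSK)."""
    samples_per_bit = int(fs / bit_rate)
    symbols = 2 * np.asarray(bits) - 1             # 0 -> -1, 1 -> +1
    baseband = np.repeat(symbols, samples_per_bit)
    t = np.arange(baseband.size) / fs
    return baseband * np.cos(2 * np.pi * fc * t)   # sign flip = pi phase shift

waveform = bpsk_modulate([1, 0, 1, 1, 0])
print(waveform.shape)  # (500,) samples for 5 bits
```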
Accordingly, recent cybersecurity incidents have underscored the rapidly evolving nature of future threats and attack methods, particularly those involving the wireless injection of computer viruses. This paper aims to study and demonstrate the feasibility of remote computer virus radiation injection. To achieve this objective, digital signal processing (DSP) plays a vital role. By studying the principles and models of radiation attacks and computer virus propagation, the modulation of the binary data stream of the simulated virus onto a terahertz radar carrier signal by Phase-Shift Keying (PSK) is simulated, enabling the implementation of an attack through the “field to line” coupling of electromagnetic signals. Finally, the defense and countermeasures based on signal recognition are discussed for such attacks. Additionally, an idea of establishing a virus library for cyberattack signals and employing artificial intelligence (AI) algorithms for automated intrusion detection is proposed as a means to achieve cybersecurity situation awareness.'\nauthor:\n- |\n [![image](orcid.pdf)Ruochen Wu](https://orcid.org/0000-0003-0852-424X)[^1]\\\n Dept. of Signal Theory and Communications\\\n Universitat Politècnica de Catalunya\\\n C/ Jordi Girona" -"---\nabstract: 'Our result contains as special cases the Frobenius theorem (1895) on the number of solutions to the equation $x^n=1$ in a finite group and the Solomon theorem (1969) on the number of solutions in a group to systems of equations with fewer equations than unknowns. We consider arbitrary first-order formulae in the group language with constants instead of systems of equations. Our result substantially generalizes a theorem of Klyachko and Mkrtchyan (2014) on this topic.'\nauthor:\n- |\n Elena K. Brusyanskaya[^1]\\\n *Faculty of Mechanics and Mathematics of Moscow State University*\\\n *Moscow 119991, Leninskie gory, MSU.*\\\n *Moscow Center of Fundamental and Applied Mathematics*\\\n *ebrusianskaia@gmail.com*\ntitle: 'On the number of tuples of group elements satisfying a first-order formula'\n---\n\nIntroduction\n============\n\nThe research was inspired by two classical results on divisibility in groups: the Frobenius theorem (1895) and the Solomon theorem (1969).\n\n[Frobenius theorem [@Frob95]]{} The number of solutions to the equation $x^n=1$ in a finite group $G$ is divisible by the greatest common divisor of the group order and $n$ for any positive integer $n$.\n\n[Solomon theorem [@Solo69]]{} In any group, the number of solutions to a system of coefficient-free equations is divisible by the order of the group" -"---\nabstract: 'We explore a new approach to boundaries and interfaces in the $O(N)$ model where we add certain localized cubic interactions. These operators are nearly marginal when the bulk dimension is $4-\epsilon$, and they explicitly break the $O(N)$ symmetry of the bulk theory down to $O(N-1)$. We show that the one-loop beta functions of the cubic couplings are affected by the quartic bulk interactions. For the interfaces, we find real fixed points up to the critical value $N_{\rm crit}\approx 7$, while for $N> 4$ there are IR stable fixed points with purely imaginary values of the cubic couplings. For the boundaries, there are real fixed points for all $N$, but we don’t find any purely imaginary fixed points. We also consider the theories of $M$ pairs of symplectic fermions and one real scalar, which have quartic $OSp(1|2M)$ invariant interactions in the bulk. 
We then add the $Sp(2M)$ invariant localized cubic interactions. The beta functions for these theories are related to those in the $O(N)$ model via the replacement of $N$ by $1- 2M$. In the special case $M=1$, there are boundary or interface fixed points that preserve the $OSp(1|2)$ symmetry, as well as other fixed points that break it.'" -"---\nauthor:\n- 'Pedro Simpl\u00edcio[^1]'\n- Paul Acquatella\n- Samir Bennani\nbibliography:\n- 'main.bib'\ntitle: |\n **LAUNCHER ATTITUDE CONTROL BASED ON INCREMENTAL NONLINEAR\\\n DYNAMIC INVERSION: A FEASIBILITY STUDY TOWARDS\\\n FAST AND ROBUST DESIGN APPROACHES**\n---\n\nIntroduction {#sec:intro}\n============\n\nBackground and Motivation\n-------------------------\n\nThe space industry has undergone significant changes in recent years with the advent of the \u201cNew Space era\u201d marked by disruptive changes in the business models, manufacturing technologies, and agile practices of launch vehicle companies; all aimed at minimising their production and operating costs in an ever more competitive market. However, limited attention has been given to the benefits of control theory innovation in this context despite the potential for such innovations to increase performance limits and reduce mission preparation (or \u201cmissionisation\u201d) efforts. Moreover, government-led developments of recent launchers such as Ares I and VEGA still use the same design approach of the Saturn V, i.e. linear controllers\u00a0[@SICE2020]. This approach relies on single channel-at-a-time tuning and ad\u2013hoc gain-scheduling followed by extensive validation and verification (V&V); these are in fact quite time- and cost-consuming processes.\n\nIn contrast to the approach presented above, the past few years have seen a growing interest in the application of artificial intelligence and" -"---\nabstract: 'We present a method for obtaining unbiased signal estimates in the presence of a significant background, eliminating the need for a parametric model for the background itself. Our approach is based on a minimal set of conditions for observation and background estimators, which are typically satisfied in practical scenarios. To showcase the effectiveness of our method, we apply it to simulated data from the planned dielectric axion haloscope MADMAX.'\nauthor:\n- Johannes Diehl\n- Jakob Knollm\u00fcller\n- Oliver Schulz\nbibliography:\n- 'main.bib'\ndate: ', '\ntitle: 'Bias-Free Estimation of Signals on Top of Unknown Backgrounds'\n---\n\nIntroduction {#sec:Introduction}\n============\n\n[-5mm]{}\n\nFitting a small-amplitude signal in the presence of a large-amplitude background is both a common and often challenging problem. If one has a valid parametric model for both signal and background, and the response of the experimental apparatus can be accurately modelled as well, then a forward-modelling approach can be employed: With signal parameters $\\theta$ and background parameters $\\phi$ we can usually construct a tractable and parameterised probability distribution $p^{{\\mathrm{obs}}}_ {\\theta, \\phi}(X)$ that models the probability of observing a specific realisation of $X$. 
The combination of such a distribution with some actual observed data results in a likelihood" -"---\nauthor:\n- 'Akshatha Jagadish, Manoj Varma'\nbibliography:\n- 'sn-article.bib'\ntitle: 'Role of single particle motility statistics on efficiency of targeted delivery of micro-robot swarms'\n---\n\n**Keywords**: microbots, ABP, Chiral ABP, RTP, capture efficiency, motility statistics\n\nIntroduction {#sec1}\n============\n\nRecent advancements in micro- and nano-fabrication technology are enabling rapid advances in the interdisciplinary field of targeted drug delivery systems [@Liu19]. The vision of targeted drug delivery systems is captured well in the science fiction movie Fantastic Voyage (1966), in which a team of scientists travel to the site of infection in a shrunken submarine to treat a blood clot. While the current state of the art does not yet allow us to perform such a task, it is a future that many researchers are working towards [@Martinho11; @Nava18].\n\nCurrently, major Drug Delivery Systems (DDSs) are still oral or parenteral (intravenous, subcutaneous, and intramuscular) [@Srivastava18]. This has numerous problems such as the risk of adverse drug reactions, undesired toxic side effects, and low patient compliance, to name a few. These can be conceptually overcome with the help of targeted drug delivery systems. Paul Ehrlich put forward the seed for this thinking through his magic bullet concept [@Strebhardt08]: “Drugs that go straight" -"---\nabstract: 'We solve the Schrödinger equation for an arbitrary one-dimensional potential energy to calculate the transmission coefficient in the fission channel of compound nucleus reactions. We incorporate the calculated transmission coefficients into the statistical Hauser-Feshbach model calculation for neutron-induced reactions on $^{235,238}$U and $^{239}$Pu. The one-dimensional model reproduces the evaluated fission cross section data reasonably well considering the limited number of model parameters involved. A resonance-like structure appears in the transmission coefficient for a double-humped fission barrier shape that includes an intermediate well, which is understood to be a quantum mechanical effect in the fission channel. The calculated fission cross sections for the neutron-induced reactions on $^{235,238}$U and $^{239}$Pu all exhibit a similar structure.'\nauthor:\n- 'T. Kawano'\n- 'P. Talou'\n- 'S. Hilaire'\ntitle: 'Solving one-dimensional penetration problem for fission channel in the statistical Hauser-Feshbach theory'\n---\n\nIntroduction {#sec:introduction}\n============\n\nThe statistical compound nucleus theory describes the probability for a formed compound nucleus to decay into a channel $a$ by the partial width $\Gamma_a$, and the Hauser-Feshbach theory\u00a0[@Hauser1952] tells us that the energy average of the width $\langle \Gamma_a \rangle$ can be replaced by the optical model transmission coefficient $T_a$ in the time-reverse process. This is intuitive for particle" -"---\nauthor:\n- 'Leonardo Badurina,'\n- Ankit Beniwal and\n- Christopher McCabe\nbibliography:\n- 'main.bib'\ntitle: 'Super-Nyquist ultralight dark matter searches with broadband atom gradiometers'\n---\n\nIntroduction {#sec: intro}\n============\n\nEver since the formulation of the dark matter (DM) hypothesis, the search for dark matter in direct detection experiments has been one of the greatest priorities in particle physics\u00a0[@APPEC; @Cooley:2022ufh]. 
Until recently, the possibility of charting the strikingly diverse and vast landscape of DM models beyond conventional GeV-scale candidates seemed like a remote possibility. Now, thanks to extraordinary advancements in a wealth of cutting-edge technologies with ever-increasing sensitivity to minute effects, it is expected that large regions of DM model space will be within the reach of the next generation of direct detection experiments, such as atom interferometers.\n\nIn addition to being excellent probes of gravitational waves in the mid-frequency gap\u00a0[@Badurina:2021rgt], large-scale atom interferometer experiments, such as AION\u00a0[@Badurina:2019hst], MAGIS\u00a0[@MAGIS-100:2021etm], MIGA\u00a0[@Canuel:2017rrp], ELGAR\u00a0[@Canuel:2019abg], and ZAIGA\u00a0[@Zhan:2019quq], would be powerful probes of ultralight dark matter (ULDM). In particular, these experiments would be especially ideal probes of scalar ULDM signatures through their exquisite sensitivity to changes in atomic structures. Indeed, scalar ULDM with dilatonic couplings to Standard Model" -"---\nabstract: 'In this paper, we study the error in first order Sobolev norm in the approximation of solutions to linear parabolic PDEs. We use a Monte Carlo Euler scheme obtained from combining the Feynman\u2013Kac representation with a Euler discretization of the underlying stochastic process. We derive approximation rates depending on the time-discretization, the number of Monte Carlo simulations, and the dimension. In particular, we show that the Monte Carlo Euler scheme breaks the curse of dimensionality with respect to the first order Sobolev norm. Our argument is based on new estimates on the weak error of the Euler approximation of a diffusion process together with its derivative with respect to the initial condition. As a consequence, we obtain that neural networks are able to approximate solutions of linear parabolic PDEs in first order Sobolev norm without the curse of dimensionality if the coefficients of the PDEs admit an efficient approximation with neural networks.'\nauthor:\n- 'Patrick Cheridito[^1] Florian Rossmannek$^*$'\nbibliography:\n- 'bibfile\\_FR\\_2023\\_june\\_9.bib'\n---\n\nIntroduction\n============\n\nWe consider the following linear parabolic partial differential equation (PDE); $$\\label{Intro_PDE}\n\\begin{split}\n \\partial_t u(t,x) + \\nabla u(t,x)^T \\mu(t,x) + \\frac{1}{2} {\\mathrm{Tr}}\\big( \\sigma(t,x) \\sigma(t,x)^T \\nabla^2u(t,x) \\big) + g(t,x) &= 0 \\quad \\text{on } [0,T] \\times" -"---\nabstract: 'Plasmon resonances at the surface of plasmonic antennas allow for extremely strong enhancement of Raman scattering. Intrinsic to plasmonics, however, is that extreme field confinement lacks precise spectral control, which would hold great promise in shaping the optomechanical interaction between light and molecular vibrations at will. We demonstrate an experimental platform composed of a plasmonic nanocube-on-mirror antenna coupled to an open, tunable Fabry-Perot microcavity for selective addressing of individual vibrational lines of molecules with strong Raman scattering enhancement. Multiple narrow and intense optical resonances arising from the hybridization of the cavity modes and the plasmonic broad resonance are used to simultaneously enhance the laser pump and the local density of optical states (LDOS) and are characterized using rigorous modal analysis. 
The versatile bottom-up fabrication approach permits quantitative comparison with the bare nanocube-on-mirror system, both theoretically and experimentally. This shows that the hybrid system allows for similar SERS enhancement ratios with narrow optical modes, paving the way for dynamical back action effects in molecular optomechanics.'\nauthor:\n- Ilan Shlesinger\n- Jente Vandersmissen\n- Eitan Oksenberg\n- Ewold Verhagen\n- 'A. Femius Koenderink'\ntitle: 'Hybrid cavity-antenna architecture for strong and tunable sideband-selective molecular Raman scattering enhancement'\n---\n\nIntroduction\n============\n\nSurface-enhanced" -"---\nauthor:\n- '[^1]'\nbibliography:\n- 'caloronsSolitons.bib'\ntitle: 'Calorons, monopoles and stable, charged solitons'\n---\n\nCaloron monopoles {#Sec-Intro}\n=================\n\nAn important property of SU(2) Yang\u2013Mills theory is the existence of topologically different vacua, characterised by a winding number. There are solutions to the classical equations of motion of this theory in Euclidean space-time, instantons, describing transitions between neighbouring vacua. Instantons have minimal action with the action density concentrated around an event in 4D space-time. They are characterised by a topological quantum number, the topological charge\u00a0[@BELAVIN197585]. Periodic boundary conditions in Euclidean time model field theories at finite temperatures, where the temperature $T$ is proportional to the inverse time extent. The solutions of the Yang-Mills equations are modified by finite temperature $T$. As shown by Kraan and van Baal\u00a0[@Kraan:1998pm] and Lee and Lu\u00a0[@Lee:1998bb] finite $T$ deforms instantons to periodic solutions, calorons. With increasing $T$ calorons separate into constituents, monopoles (dyons), as can be nicely observed in the action density, see e.g. Fig.1 of Ref.\u00a0[@Kraan:1998pm]. These interesting solutions fire our imagination due to the similarity to localised quantised charges in electrodynamics, atomic and nuclear physics.\n\nOn the 4D Euclidean lattice with the gauge field $U_\\mu(x)\\in SU(2)$ defined on links," -"---\nabstract: 'An unidentified 3.55 keV X-ray line in stacked spectra of galaxies and clusters raises the interesting possibility that it originates from the decay of sterile neutrino dark matter. In this work, we explore mixed sterile neutrino dark matter models that combine cold dark matter and warmer sterile neutrino dark matter produced through lepton number-driven active-to-sterile neutrino transformation. We analyze the sensitivity of the sterile neutrino spectra on active-sterile mixing and on initial neutrino lepton numbers. Furthermore, we assess the viability of these models with estimates of the number of subhalos formed as the host sites of satellite galaxies.'\nauthor:\n- 'Emma L. Horner'\n- 'Francisco [Mungia Wulftange]{}'\n- 'Isabella A.\u00a0Ianora'\n- 'Chad T.\u00a0Kishimoto'\nbibliography:\n- 'refs.bib'\ntitle: Exploring resonantly produced mixed sterile neutrino dark matter models\n---\n\nIntroduction\n============\n\nThe nature of dark matter, which comprises nearly all of the non-relativistic matter in the universe, remains a mystery. An interesting piece of the dark matter puzzle is the discovery of an unidentified $3.55~{\\rm keV}$ X-ray line in stacked spectra of galaxies and clusters [@bulbul14; @boy14]. 
One possible explanation for this X-ray line is the decay of sterile neutrino dark matter with a mass of $7.1~{\rm keV}$" -"---\nabstract: 'In this work, we explore the possibilities of producing Axion-Like Particles (ALPs) in a future $e^-p$ collider. Specifically, we focus on the proposed Large Hadron electron collider (LHeC), which can achieve a center-of-mass energy of $\sqrt{s} \approx 1.3$\u00a0TeV, enabling us to probe relatively high ALP masses with $m_a \lesssim 300$\u00a0GeV. The production of ALPs can occur through various channels, including $W^+W^-$, $\gamma\gamma$, $ZZ$, and $Z\gamma$-fusion within the collider environment. To investigate this, we conduct a comprehensive analysis that involves estimating the production cross section and constraining the limits on the associated couplings of ALPs, namely $g_{WW}$, $g_{\gamma\gamma}$, $g_{ZZ}$, and $g_{Z\gamma}$. To achieve this, we utilize a multiple-bin $\chi^2$ analysis on sensitive differential distributions. Through the analysis of these distributions, we determine upper bounds on the associated couplings within the mass range of 5\u00a0GeV $\leq m_a \leq$ 300\u00a0GeV. The obtained upper bounds are of the order of ${\cal O}(10^{-1})$ for $g_{\gamma\gamma}$ ($g_{WW}$, $g_{ZZ}$, $g_{Z\gamma}$) in $m_a \in$\u00a0\[5, 200 (300)\]\u00a0GeV considering an integrated luminosity of 1\u00a0ab$^{-1}$. Furthermore, we compare the results of our study with those obtained from other available experiments. We emphasize the limits obtained through our analysis and showcase the potential" -"---\nabstract: 'We consider the classical Smoluchowski coagulation equation with a general frequency kernel. We show that there exists a natural deterministic solution expansion in the non-associative algebra generated by the convolution product of the coalescence term. The non-associative solution expansion is equivalently represented by binary trees. We demonstrate that the existence of such solutions corresponds to establishing the compatibility of two binary-tree generating procedures, by: (i) grafting together the roots of all pairs of order-compatible trees at preceding orders, or (ii) attaching binary branches to all free branches of trees at the previous order. We then show that the solution represents a linearised flow, and also establish a new numerical simulation method based on truncation of the solution tree expansion and approximating the integral terms at each order by fast Fourier transform. In particular, for general separable frequency kernels, the complexity of the method is linear-loglinear in the number of spatial modes/nodes.'\nauthor:\n- 'Simon\u00a0J.A.\u00a0Malham'\ntitle: 'Coagulation, non-associative algebras and binary trees'\n---\n\nSmoluchowski coagulation, non-associative algebras, binary trees\n\nIntroduction {#sec:intro}\n============\n\nHerein we consider the classical Smoluchowski coagulation equation with a general frequency kernel. We show there exists a natural deterministic solution expansion in a non-associative" -"---\nabstract: 'Sensitivity to unmeasured confounding is not typically a primary consideration in designing treated-control comparisons in observational studies. We introduce a framework allowing researchers to optimize robustness to omitted variable bias at the design stage using a measure called design sensitivity. 
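Returning briefly to the coagulation abstract above: the FFT evaluation it mentions can be illustrated on the constant-kernel gain term $\tfrac12\sum_{i+j=k} c_i c_j$, which is a self-convolution computable in $O(n\log n)$. This is a toy sketch on a uniform size grid; the paper's tree-expansion orders are not reproduced.

```python
import numpy as np

def coagulation_gain_fft(c):
    """Gain term G_k = 0.5 * sum_{i+j=k} c_i c_j for a constant kernel,
    evaluated as a linear self-convolution via zero-padded FFT."""
    n = c.size
    m = 2 * n                            # pad to avoid circular wrap-around
    C = np.fft.rfft(c, m)
    return 0.5 * np.fft.irfft(C * C, m)[:n]

c = np.exp(-np.arange(64) / 8.0)         # toy concentration profile
gain = coagulation_gain_fft(c)

# Direct O(n^2) check against the FFT result.
direct = 0.5 * np.array([sum(c[i] * c[k - i] for i in range(k + 1))
                         for k in range(c.size)])
assert np.allclose(gain, direct)
```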
Design sensitivity, which describes the asymptotic power of a sensitivity analysis, allows transparent assessment of the impact of different estimation strategies on sensitivity. We apply this general framework to two commonly used sensitivity models, the marginal sensitivity model and the variance-based sensitivity model. By comparing design sensitivities, we interrogate how key features of weighted designs, including choices about trimming of weights and model augmentation, impact robustness to unmeasured confounding, and how these impacts may differ for the two different sensitivity models. We illustrate the proposed framework on a study examining drivers of support for the 2016 Colombian peace agreement.'\nauthor:\n- 'Melody Huang[^1], Dan Soriano[^2], and Samuel D. Pimentel[^3]'\nbibliography:\n- 'references.bib'\ntitle: 'Design Sensitivity and Its Implications for Weighted Observational Studies[^4]'\n---\n\nIntroduction\n============\n\nIncreasingly, observational studies are being used to answer causal questions in the social and biomedical sciences. Estimating causal effects in observational settings often requires an assumption that unmeasured confounding is absent. In practice, this" -"---\nabstract: 'Evolutionary differential equation discovery proved to be a tool to obtain equations with fewer a priori assumptions than conventional approaches, such as sparse symbolic regression over the complete library of possible terms. The equation discovery field contains two independent directions. The first one is purely mathematical and concerns differentiation, the object of optimization, and its relation to the functional spaces, among other aspects. The second one is dedicated purely to the optimization problem statement. Both topics are worth investigating to improve the algorithm’s ability to handle experimental data in a more artificial-intelligence way, without significant pre-processing and a priori knowledge of their nature. In the paper, we consider the prevalence of either single-objective optimization, which considers only the discrepancy between selected terms in the equation, or multi-objective optimization, which additionally takes into account the complexity of the obtained equation. The proposed comparison approach is shown on classical model examples – Burgers equation, wave equation, and Korteweg-de Vries equation.'\nauthor:\n- Mikhail Maslyaev\n- Alexander Hvatov\nbibliography:\n- 'main.bib'\ntitle: 'Comparison of Single- and Multi- Objective Optimization Quality for Evolutionary Equation Discovery'\n---\n\n<ccs2012> <concept> <concept\_id>10010405.10010432.10010442</concept\_id> <concept\_desc>Applied computing\u00a0Mathematics and statistics</concept\_desc> <concept\_significance>300</concept\_significance> </concept> <concept> <concept\_id>10010147.10010178.10010205.10010206</concept\_id> <concept\_desc>Computing methodologies\u00a0Heuristic function
However, these approaches do not fully explain the emergence of synchronization behavior in *non-diffusively* coupled networks, where the coupling does not vanish on the synchronization manifold and hence the dynamics on the synchronization manifold differ from those of the uncoupled systems. Inspired by neuronal networks connected via non-diffusive chemical synapses, we extend contraction theory to establish sufficient conditions for global synchronization in general non-diffusively coupled nonlinear networks. We demonstrate the theoretical results on a network of Hindmarsh-Rose oscillators connected via chemical synapses and networks of FitzHugh-Nagumo oscillators connected via chemical synapses and additive coupling.'\nauthor:\n- 'Fatou K. Ndow$^{1}$ and Zahra Aminzare$^{2}$ [^1] [^2][^3]'\ntitle: ' **Global synchronization analysis of non-diffusively coupled networks through Contraction Theory** '\n---\n\n**Keywords** Complete synchronization, non-diffusive coupling, digraphs, contraction" -"---\nabstract: 'Neural Radiance Field (NeRF) has become mainstream in novel view synthesis with its remarkable quality of rendered images and simple architecture. Although NeRF has been developed in various directions, continuously improving its performance, the need for a dense set of multi-view images remains a stumbling block for practical application. In this work, we propose FlipNeRF, a novel regularization method for few-shot novel view synthesis that utilizes our proposed flipped reflection rays. The flipped reflection rays are explicitly derived from the input ray directions and estimated normal vectors, and serve as effective additional training rays while enabling the estimation of more accurate surface normals and effective learning of the 3D geometry. Since the surface normal and the scene depth are both derived from the estimated densities along a ray, an accurate surface normal leads to more exact depth estimation, which is a key factor for few-shot novel view synthesis. Furthermore, with our proposed Uncertainty-aware Emptiness Loss and Bottleneck Feature Consistency Loss, FlipNeRF is able to produce more reliable outputs, effectively reducing floating artifacts across different scene structures, and to enhance the feature-level consistency between the pair of the rays cast toward the photo-consistent" -"---\nabstract: |\n The generalized Chazy differential equation corresponds to the following two-parameter family of differential equations $$\label{gcdeq}\n \dddot x+|x|^q \ddot x+\dfrac{k |x|^q}{x}\dot x^2=0,$$ which has its regularity varying with $q$, a positive integer. Indeed, for $q=1$ it is discontinuous on the straight line $x=0$, whereas for $q$ a positive even integer it is polynomial, and for $q>1$ a positive odd integer it is continuous but not differentiable on the straight line $x=0$. In 1999, the existence of periodic solutions in the generalized Chazy differential equation was numerically observed for $q=2$ and $k=3$. In this paper, we prove analytically the existence of such periodic solutions. Our strategy allows us to establish sufficient conditions ensuring that the generalized Chazy differential equation, for $k=q+1$ and any positive integer $q$, actually has an invariant topological cylinder foliated by periodic solutions in the $(x,\dot x,\ddot x)$-space. In order to set forth the bases of our approach, we start by considering $q=1,2,3$, which are representatives of the different classes of regularity. 
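For the representative case $q=2$, $k=3$ just mentioned, where $|x|^q \ddot x = x^2 \ddot x$ and $|x|^q/x = x$, the numerically observed behavior can be explored with an off-the-shelf integrator. This is a sketch only; the initial condition and tolerances are arbitrary, and no claim is made about the authors' computations.

```python
from scipy.integrate import solve_ivp

def chazy_rhs(t, u, k=3.0):
    """q = 2 case of the generalized Chazy equation, where |x|^q/x = x:
    x''' = -x^2 x'' - k x (x')^2, written as a first-order system."""
    x, dx, ddx = u
    return [dx, ddx, -x * x * ddx - k * x * dx * dx]

sol = solve_ivp(chazy_rhs, (0.0, 50.0), [1.0, 0.0, -0.5],
                rtol=1e-10, atol=1e-12, dense_output=True)
print(sol.y[0, -5:])   # inspect x(t) for recurring oscillations
```

Plotting `sol.y[0]` against time and looking for recurring oscillations is a quick heuristic check; the analytical proof is the paper's contribution.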
For an arbitrary positive integer $q$, an algorithm is provided for checking the sufficient conditions for the existence of such an invariant cylinder, which we conjecture always exists. The algorithm" -"---\nabstract: 'Cybersecurity, which notoriously concerns both human and technological aspects, is becoming more and more regulated by a number of textual documents spanning several pages, such as the European GDPR Regulation and the NIS Directive. This paper introduces an approach that leverages techniques of semantic representation and reasoning, hence an ontological approach, towards checking compliance with the security measures that textual documents prescribe. We choose the ontology instrument to achieve two fundamental objectives: domain modelling and resource interrogation. The formalisation of *entities* and *relations* from the directive, and the consequent improved structuring with respect to sheer prose, are dramatically helpful for any organisation through the hard task of compliance verification. The semantic approach is demonstrated with two articles of the new European NIS 2 directive.'\naddress: 'Dipartimento di Matematica e Informatica, Universit\u00e0 di Catania, Italy'\nauthor:\n- '[^1]'\n- '[^2]'\n- '[^3]'\nbibliography:\n- 'ref.bib'\ntitle: An ontological approach to compliance verification of the NIS 2 directive\n---\n\n,\n\n,\n\nOntology, security directive, compliance\n\nIntroduction\n============\n\nThe increasingly rapid growth and complexity of security issues concern both private and public organisations. It could be argued that the broad scope of security measures is demonstrated by recent *security directives*," -"---\nabstract: 'The Red Palm Weevil (RPW) is a highly destructive insect causing economic losses and impacting palm tree farming worldwide. This paper proposes an innovative approach for sustainable palm tree farming by utilizing advanced technologies for early detection and management of RPW. Our approach combines computer vision, deep learning (DL), the Internet of Things (IoT), and geospatial data to effectively detect and classify RPW-infested palm trees. The main phases include: (1) DL classification using sound data from IoT devices, (2) palm tree detection using YOLOv8 on UAV images, and (3) RPW mapping using geospatial data. Our custom DL model achieves 100% precision and recall in detecting and localizing infested palm trees. The integration of geospatial data enables the creation of a comprehensive RPW distribution map for efficient monitoring and targeted management strategies. This technology-driven approach benefits agricultural authorities, farmers, and researchers in managing RPW infestations, safeguarding palm tree plantations\u2019 productivity.'\nauthor:\n- |\n Yosra Hajjaji\\\n RIADI Laboratory, National School of Computer Science, University of Manouba, Tunisia\\\n `yossra.hajjaji@ensi-uma.tn`\\\n Ayyub Alzahem\\\n Robotics and Internet-of-Things Laboratory, Prince Sultan University, Riyadh, Saudi Arabia\\\n `aalzahem@psu.edu.sa `\\\n Wadii Boulila\\\n Robotics and Internet-of-Things Laboratory, Prince Sultan University, Riyadh, Saudi Arabia\\\n RIADI Laboratory, National School of Computer" -"---\nabstract: 'With the integration of communication and computing, it is expected that part of the computing is transferred to the transmitter side. In this paper we address the general problem of Frequency Modulation (FM) for function approximation through a communication channel.
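As a warm-up for the DCT-based design developed next, a generic sketch of function approximation with a truncated DCT (the function, grid size, and number of retained coefficients are illustrative assumptions, not the paper's waveform design):

```python
import numpy as np
from scipy.fft import dct, idct

n = 64
x = np.linspace(-1.0, 1.0, n)
f = np.tanh(3.0 * x)           # example function to be computed over the channel

c = dct(f, norm="ortho")       # DCT-II coefficients
c[8:] = 0.0                    # keep only the 8 lowest-frequency coefficients
f_hat = idct(c, norm="ortho")  # receiver-side reconstruction
print("max reconstruction error:", np.abs(f - f_hat).max())
```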
We exploit the benefits of the Discrete Cosine Transform (DCT) to approximate the function and design the waveform. Compared with other approximation schemes, the DCT uses basis functions with controlled dynamics, which is a desirable property for a practical implementation. Furthermore, the proposed modulation allows recovering both the measurement and the function in a single transmission. Our experiments show that this scheme outperforms the double side-band (DSB) modulation in terms of mean squared error (MSE). The scheme can also be implemented with an agnostic receiver, in which the function is unknown to the receiver. Finally, the proposed modulation is compatible with some of the existing transmission technologies for sensor networks.'\naddress: |\n $^{1}$ Centre Tecnol\u00f2gic de Telecomunicacions de Catalunya, Spain\\\n $^{2}$ Dept. of Signal Theory and Communications, Universitat Polit\u00e8cnica de Catalunya, Spain\\\n $^{3}$ ICREA Acad\u00e8mia, Spain\\\nbibliography:\n- 'refs.bib'\ntitle: 'DCT-based air interface design for function computation'\n---\n\nTask-oriented communication, joint communication and computing, Over-the-Air Computing (AirComp), WSN." -"---\nabstract: 'In this work we study the autonomous dynamical system of different $F(R)$ models in the formalism of holographic dark energy using the generalized Nojiri-Odintsov cut-off. We explicitly give the expression of the fixed points as functions of the infrared cut-off for vacuum $F(R)$ gravity in flat and non-flat FRW background and for $F(R)$ coupling axion dark matter. Each fixed point component can be taken as a condition on the cut-off and on the expression of $F(R)$, leading to physically interesting constraints on these functions.'\nauthor:\n- 'Simone\u00a0D\u2019Onofrio[^1]'\nbibliography:\n- 'Bibliography.bib'\ntitle: '**Holographic description of $F(R)$ gravity coupled with Axion Dark Matter**'\n---\n\nIntroduction\n============\n\nThe importance of $F(R)$ modified gravity [@nojiri2017modified; @capozziello2010beyond; @nojiri2011unified] lies in the possibility of describing various eras of our Universe and of unifying such scenarios in a single theory. The late time evolution is driven by dark energy and a description of such an era as a type of modified gravity can be found in [@capozziello2002curvature]. Several modified models provide a unification from an early-time acceleration to a late-time acceleration [@nojiri2003modified; @nojiri2006modified; @nojiri2011unified; @capozziello2015connecting; @cognola2008class; @nojiri2007unifying; @nojiri2008modified], which provides a full description of the post-inflationary evolution. For reviews see [@faraoni2011landscape;" -"---\nauthor:\n- 'Matilde Signorini[^1], Guido Risaliti, Elisabeta Lusso, Emanuele Nardini, Giada Bargiacchi, Andrea Sacchi,'\n- Bartolomeo Trefoloni\nbibliography:\n- 'bibl.bib'\nsubtitle: 'IV. Analysis of the X-ray and UV indicators of the disc-corona relation'\ntitle: Quasars as Standard Candles\n---\n\nIntroduction {#intro}\n============\n\nQuasars are the most luminous persistent objects in our Universe. Their spectral energy distribution (SED) is complex; it goes from the radio to the X-rays, with the most intense emission emerging at optical\u2013UV wavelengths [e.g. @Sanders89; @Richards06; @Elvis2012]. The origin of this emission is attributed to accretion from an optically thick and geometrically thin disc around the central supermassive black hole [SMBH, @SS73].
Since decades, the presence of a non-linear relation between the X-ray and UV luminosities of quasars has been observed [@Tananbaum79]. This relation is usually parameterised as $\\log({L_{\\rm X}}) = \\gamma \\log({L_{\\rm UV}}) + \\beta$, where the slope is found to be $\\gamma\\simeq0.6$ over a wide range of redshifts and luminosities [e.g. @Steffen06; @Lusso10; @Young2010]. This relation must be based on the interaction between the accretion disc, which emits mainly in the UV, and the so-called X-ray corona, which consists of a hot-electron plasma. UV photons coming from the disc are up-scattered in the corona," -"---\nabstract: 'We extend the Mattis-Bardeen theory for the dynamical response of superconductors to include different types of Hall responses. This is possible thanks to a recent modification of the quasiclassical Usadel equation, which allows for analyzing Hall effects in disordered superconductors and including the precise frequency dependence of such effects. Our results form a basis for analyzing dynamical experiments especially on novel thin-film superconductors, where ordinary Hall and spin Hall effects can both show up.'\nauthor:\n- Alberto Hijano\n- 'Sakineh Vosoughi-nia'\n- 'F. Sebasti\u00e1n Bergeret'\n- Pauli Virtanen\n- 'Tero T. Heikkil\u00e4'\ntitle: Dynamical Hall responses of disordered superconductors\n---\n\nIntroduction\n============\n\nSimultaneous application of electric and magnetic fields on a conductor leads to the presence of a charge current with a transverse component perpendicular to both fields, in addition to the ordinary longitudinal current in the direction of the electric field. This ordinary Hall effect has been known since the 19th century\u00a0[@hall1879new] and it can be directly incorporated into the Drude model [@drude1900elektronentheorie; @drude1900elektronentheorie2] of electronic conduction once the Lorenz force due to the magnetic field is included. Varying the electric field in time leads to similarly varying longitudinal and transverse charge currents\u00a0[@Ashcroft-Mermin]. This dynamical" -"---\nabstract: 'An impurity particle interacting with a Bose-Einstein condensate (BEC) leads to the formation of a quasiparticle known as the Bose polaron. We investigate the properties of the two-dimensional Bose polaron, applying a variational ansatz that contains up to two Bogoliubov excitations of the BEC. Similar to its three-dimensional counterpart, we observe the existence of two quasiparticle branches, namely the attractive and the repulsive polarons, at different coupling strengths. We find that their energies agree well with recent quantum Monte Carlo calculations. In particular, we observe that the inclusion of two excitations is crucial to capture the attractive polaron energy towards the regime of strong attraction, where the quasiparticle properties are dominated by few-body correlations. We also calculate the attractive polaron effective mass and residue, where we find significant differences between considering a weakly interacting Bose medium and taking the non-interacting limit, signalling enhanced impurity dressing by excitations in the latter case. By contrast, the spectral weight of the metastable repulsive polaron is largely insensitive to the interactions in the BEC and the number of Bogoliubov excitations. Our model may be experimentally realized in dilute atomic vapors and atomically thin semiconductors.'\nauthor:\n- Yasufumi Nakano\n- 'Meera M. 
Parish'" -"---\nabstract: 'State-of-the-art model-checking algorithms like [IC3/PDR]{} are based on uni-directional modular SAT solving for finding and/or blocking counterexamples. Modular SAT-solvers divide a SAT-query into multiple sub-queries, each solved by a separate SAT-solver (called a module), and propagate information (lemmas, proof obligations, blocked clauses, etc.) between modules. While modular solving is key to [IC3/PDR]{}, it is obviously not as effective as monolithic solving, especially when individual sub-queries are harder to solve than the combined query. This is partially addressed in SAT modulo SAT\u00a0([SMS]{}) by propagating unit literals back and forth between the modules and using information from one module to simplify the sub-query in another module as soon as possible (i.e., before the satisfiability of any sub-query is established). However, bi-directionality of [SMS]{} is limited because of the strict order between decisions and propagation \u2013 only one module is allowed to make decisions until its sub-query is SAT. In this paper, we propose a generalization of [SMS]{}, called [specSMS]{}, that *speculates* decisions between modules. This makes it bi-directional \u2013 decisions are made in multiple modules, and learned clauses are exchanged in both directions. We further extend DRUP proofs and interpolation, which are useful in model checking, to [specSMS]{}. We" -"---\nauthor:\n- \nbibliography:\n- 'sn-bibliography.bib'\ntitle: 'On the exact survival probability by setting discrete random variables in E. Sparre Andersen\u2019s model'\n---\n\nIntroduction {#sec:intr}\n============\n\nIn applied probability, the following stochastic process $$\\begin{aligned}\n\\label{eq:process}\nW(0):=u,\\,W(t):=u+ct-\\sum_{i=1}^{N(t)}X_i,\\,t>0,\\end{aligned}$$ is well known. Here $u\\geqslant 0$, $c>0$, the non-negative random variables $X_1,\\,X_2,\\,\\ldots$ are independent copies of $X$, and $$N(t)=\\max\\{n\\in\\mathbb{N}:\\,\\theta_1+\\theta_2+\\ldots+\\theta_n\\leqslant t\\}$$ is the renewal process generated by the non-negative random variables $\\theta_1,\\,\\theta_2,\\,\\ldots$, which are the independent copies of $\\theta$.\n\nThe model is often met in queuing theory, where it is argued to represent the G/G/1 queue. The notation G/G/1 means that the queue length in a system with a single server is described by the interarrival times having an arbitrary distribution and the service times having some different distribution, see [@Asmussen]. In ruin theory and insurance mathematics, the process is known as E. Sparre Andersen\u2019s model or the renewal risk model. One may assume that $W(t)$ describes the insurer\u2019s wealth in time, where $u\\geqslant 0$ denotes the initial surplus, $c>0$ represents the constant premium amount paid by the customers per unit of time, and the subtracted random sum represents payoffs caused by the random-size claims occurring at random points in time, see for example [@thorin_1974] or" -"---\nabstract: 'The effect of two- and three-body interactions on the modulation instability (MI) domain formation of a spin-orbit (SO) and Rabi-coupled Bose-Einstein condensate is studied within a quasi-one-dimensional model. To this aim, we perform numerical and analytical investigations of the associated dispersion relations derived from the corresponding coupled Gross-Pitaevskii equation.
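Returning to the risk process $W(t)$ displayed above, here is a Monte Carlo sketch of a finite-horizon survival probability. The exponential choices for $\theta$ and $X$ are illustrative stand-ins (the paper itself concerns discrete random variables), and ruin is checked only at claim instants, the only times $W$ can first become negative.

```python
import numpy as np

rng = np.random.default_rng(0)

def survives(u=2.0, c=1.0, horizon=100.0):
    t, claims = 0.0, 0.0
    while True:
        t += rng.exponential(1.0)       # inter-arrival time theta_i
        if t > horizon:
            return True
        claims += rng.exponential(0.9)  # claim X_i, mean 0.9 < c * E[theta]
        if u + c * t - claims < 0.0:
            return False                # W(t) dropped below zero: ruin

est = np.mean([survives() for _ in range(20_000)])
print("estimated finite-horizon survival probability:", est)
```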
The interplay of the linear (SO and Rabi) couplings with the nonlinear cubic-quintic interactions is explored in the mixture, considering miscible and immiscible configurations, with a focus on the implications for the analysis of experimental realizations with general binary coupled systems, in which nonlinear interactions can be widely varied together with linear couplings.'\naddress: |\n $^1$Department of Physics, Presidency College (Autonomous), Chennai - 600005, India\\\n $^2$Instituto de F\u00edsica Te\u00f3rica, UNESP \u2013 Universidade Estadual Paulista, 01140-070 S\u00e3o Paulo, Brazil\nauthor:\n- 'R. Sasireka$^1$, S. Sabari$^2$, and A. Uthayakumar$^1$, and Lauro Tomio$^2$'\ntitle: ' Domain formation of modulation instability in spin-orbit-Rabi coupled Gross-Pitaevskii equation with cubic-quintic interactions'\n---\n\nBose-Einstein condensates, Spin-orbit coupling, Modulational instability, Three-body interactions, Linear stability analysis.\n\nIntroduction {#sec:1}\n============\n\n[Modulational instability (MI) is a generic phenomenon leading to large-amplitude periodic waves, which occurs in dynamic systems, like fluids, nonlinear optics, and plasmas. It results from the interplay between" -"---\nauthor:\n- 'H\u00e9l\u00e8ne Eynard-Bontemps & Andr\u00e9s Navas'\ndate:\n- \n- \ntitle: 'The space of $C^{1+ac}$ actions of ${\\mathbb Z}^d$ on a one-dimensional manifold is path-connected'\n---\n\n[**Abstract.**]{} We show path-connectedness for the space of ${\\mathbb Z}^d$ actions by $C^1$ diffeomorphisms with absolutely continuous derivative on both the closed interval and the circle. We also give a new and short proof of the connectedness of the space of ${\\mathbb Z}^d$ actions by $C^2$ diffeomorphisms on the interval, as well as an analogous result in the real-analytic setting.\n\n[**Keywords:**]{} centralizer, flow, connectedness, ${\\mathbb Z}^d$ action.\n\n[**MCS 2020:**]{} 37C05, 37C10, 37C15, 37E05, 37E10, 57S25.\n\nIntroduction {#introduction .unnumbered}\n============\n\nActions of free abelian groups on one-dimensional manifolds have been deeply studied from the 60\u2019s on, with seminal works such as [@Sz; @Ko; @Ta; @Se; @Yo], and have seen renewed interest in the past decade [@Na14; @BE].[^1] A historical motivation is the study of foliations of $3$-manifolds by surfaces, where actions of ${\\mathbb Z}^2=\\pi_1({\\mathbb T}^2)$ on the interval appear as holonomy representations of (germs of) foliations near toric leaves, which play a special role in the $3$-dimensional context. Actually, there is a deep relation between ${\\mathbb Z}^2$ actions and the problem of" -"---\nabstract: 'The electron-phonon Wannier interpolation (EPWI) method is an efficient way to compute the properties of electron-phonon interactions (EPIs) accurately. This study presents a GPU-accelerated implementation of the EPWI method for computing transport properties, followed by a performance analysis. The implementation is tested on common systems such as aluminum and silicon. The results show complete consistency with those obtained through CPU computations. The proposed algorithm has the capability of computing the conductivity of aluminum in 20 minutes on a single NVIDIA Tesla V100 GPU, adopting a $200^3$ electron and phonon sampling grid. This is 173 times faster than the CPU-based implementation running on two nodes of the Intel Xeon Platinum 8260 CPU. Such impressive acceleration is achieved by carefully designing the algorithm to exploit the GPU\u2019s specific features.
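For the modulation instability introduced at the top of the condensate abstract above, the mechanism is already visible in the single-component cubic equation; the sketch below evaluates the standard Bogoliubov dispersion relation, a deliberately simplified stand-in for the paper's coupled cubic-quintic analysis.

```python
# omega(k)^2 = (k^2/2) * (k^2/2 + 2*g*rho); a negative omega^2 (attractive
# g < 0 at small k) means exponentially growing modulations, i.e. MI gain.
import numpy as np

g, rho = -1.0, 1.0                        # illustrative attractive interaction
k = np.linspace(0.0, 3.0, 301)
omega2 = (k**2 / 2.0) * (k**2 / 2.0 + 2.0 * g * rho)
gain = np.sqrt(np.maximum(-omega2, 0.0))  # Im(omega), the MI growth rate
print("max MI gain:", gain.max(), "at k =", k[gain.argmax()])
```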
Furthermore, this methodology establishes a generic foundation for EPWI algorithms, which can be applied to other EPI-related properties.'\nauthor:\n- Zhe Liu\n- Bo Zhang\n- Zheyong Fan\n- Wu Li\nbibliography:\n- 'Citations.bib'\ntitle: 'A high-performance GPU implementation of the electron-phonon Wannier interpolation and the related transport properties'\n---\n\nIntroduction {#intro}\n============\n\nIn crystals, electron-phonon interactions (EPIs) give rise to a wide variety of phenomena, making it a critical" -"---\nabstract: 'Do neural networks, trained on well-understood algorithmic tasks, reliably rediscover known algorithms for solving those tasks? Several recent studies, on tasks ranging from group arithmetic to in-context linear regression, have suggested that the answer is yes. Using modular addition as a prototypical problem, we show that algorithm discovery in neural networks is sometimes more complex. Small changes to model hyperparameters and initializations can induce discovery of qualitatively different algorithms from a fixed training set, and even parallel implementations of multiple such algorithms. Some networks trained to perform modular addition implement a familiar *Clock* algorithm (previously described by Nanda et al.\u00a0[@nanda2023progress]); others implement a previously undescribed, less intuitive, but comprehensible procedure we term the *Pizza* algorithm, or a variety of even more complex procedures. Our results show that even simple learning problems can admit a surprising diversity of solutions, motivating the development of new tools for characterizing the behavior of neural networks across their algorithmic phase space. [^1]'\nauthor:\n- |\n Ziqian Zhong\\*, Ziming Liu\\*, Max Tegmark, Jacob Andreas\\\n Massachusetts Institute of Technology\\\n `{ziqianz, zmliu, tegmark, jda}@mit.edu`\nbibliography:\n- 'ref.bib'\ntitle: 'The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks'\n---\n\nIntroduction\n============\n\nMechanistically" -"---\nabstract: 'This work investigates interference mitigation techniques in multi-user multiple input multiple output (MU-MIMO) Intelligent Reflecting Surface (IRS)-aided networks, focusing on the base station end. Two methods of precoder design based on block diagonalization are proposed. The first method does not consider the interference caused by the IRS, seeking to mitigate only the multi-user interference. The second method mitigates both the IRS-caused interference and the multi-user interference. A comparison between both methods within an no-IRS MU-MIMO network with strong direct links is provided. The results show that, although in some circumstances IRS interference can be neglected, treating it can improve system capacity and provide higher spectral efficiency.'\nauthor:\n- |\n Wilker de O. Feitosa, Igor M. Guerreiro, Fco. Rodrigo P. Cavalcanti, Tarcisio F. Maciel,\\\n Maria Clara R. Lob\u00e3o, Fazal-E-Asim, Behrooz Makki and G\u00e1bor Fodor. [^1] [^2]\nbibliography:\n- 'IEEEabrv.bib'\n- '3gpp.bib'\n- 'refs.bib'\ntitle: 'Interference mitigation with block diagonalization for IRS-aided MU-MIMO communications'\n---\n\nMU-MIMO, Interference Mitigation, Block Diagonalization, Intelligent Reflecting Surfaces.\n\nIntroduction\n============\n\nThe of cellular networks is expected to present significant advances in terms of system capacity, energy efficiency, number of supported users and\u00a0 compared to the\u00a0\u00a0[@Zhang2019]. 
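A stylized numpy rendering of the *Clock* algorithm named in the modular-addition abstract above: tokens become rotations on the unit circle, the rotations compose, and the logit for a candidate answer $c$ peaks exactly at $c \equiv a+b \pmod p$. This illustrates the published description of the algorithm, not weights extracted from a trained network.

```python
import numpy as np

p = 59
angles = np.exp(2j * np.pi * np.arange(p) / p)   # one circle point per token

def clock_logits(a, b):
    z = angles[a] * angles[b]                    # angle addition: 2*pi*(a+b)/p
    return np.real(np.conj(angles) * z)          # cos(theta_{a+b} - theta_c)

a, b = 17, 50
print(clock_logits(a, b).argmax(), (a + b) % p)  # both print 8
```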
To accomplish this goal, new physical layer technologies are" -"---\nabstract: 'Despite the coupled nature of water and electricity demand, the two utilities are often managed by different entities with minimal interaction. Neglecting the water-energy demand nexus leads to suboptimal management decisions, particularly under climate change. Here, we leverage state-of-the-art machine learning and contemporary climate analogs to project the city-level coupled water and electricity demand of 46 major U.S. cities into the future. The results show that many U.S. cities may experience an increase in electricity (water) demand of up to 20% (15%) due to climate change under a high emissions scenario, with a clear north-south gradient. In the absence of appropriate mitigation strategies, these changes will likely stress current infrastructure, limiting the effectiveness of the ongoing grid decarbonization efforts. In the event that cities are unable to match the increasing demand, there may be increased occurrence of supply shortages, leading to blackouts with disproportionate impacts on vulnerable populations. As such, reliable projections of future water and electricity demand under climate change are critical not only for preventing further exacerbation of the existing environmental injustices but also for more effective design and execution of climate change mitigation and adaptation plans.'\nauthor:\n- Renee Obringer\n- Roshanak Nateghi\n-" -"---\nabstract: 'We show that even-order contributions to energy differences between any two iso-electronic compounds vanish when using perturbation theory around an averaged electronic reference Hamiltonian. This finding generalizes the previously introduced alchemical chirality concept \\[von Rudorff, von Lilienfeld, [*Science Advances*]{}, [**7**]{} 2021\\] by lifting the symmetry requirements for transmutating atoms in the iso-electronic reference system. The leading order term corresponds to twice the Hellmann-Feynman derivative evaluated using the electron density of the averaged Hamiltonian. Analogous analysis reveals Mel Levy\u2019s formula for relative energies \\[[*J. Chem. Phys.*]{} [**70**]{}, 1573 (1979)\\] to include the first-order contribution while overestimating the higher odd-order energy contributions by a factor linearly increasing in order. Using density functional theory, we illustrate the predictive power of the leading order term for estimating relative energies among diatomics in the charge-neutral iso-electronic 14 proton series N$_2$, CO, BF, BeNe, LiNa, HeMg, HAl, and the united atom, Si. The framework\u2019s potential for the simultaneous exploration of multiple dimensions in chemical space is demonstrated for toluene by evaluating relative energies between all the possible 35 antisymmetric BN-doped isomers (dubbed \u201calchemical diastereomers\u201d). Based solely on toluene\u2019s electron density, necessary to evaluate all the respective Hellmann-Feynman derivatives, mean absolute" -"---\nabstract: 'Surrogate modeling is a viable solution for applications involving repetitive evaluations of expensive computational fluid dynamics (CFD) models, such as uncertainty quantification and inverse problems. This study proposes two machine-learning surrogates for canopy flow statistics accommodating any approaching mean-wind angle. The first model is based on a K-nearest neighbors (KNN) approach, while the second utilizes a more advanced multi-layer perceptron (MLP) technique.
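A minimal sketch of the KNN surrogate idea just introduced, with synthetic placeholders standing in for LES flow statistics (the angles, features, and neighbor count are illustrative assumptions):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

train_angles = np.arange(0, 91, 15, dtype=float).reshape(-1, 1)  # degrees
# stand-ins for time-averaged flow statistics at each simulated wind angle
train_stats = np.c_[np.cos(np.deg2rad(train_angles[:, 0])),
                    np.sin(np.deg2rad(train_angles[:, 0]))]

knn = KNeighborsRegressor(n_neighbors=2, weights="distance")
knn.fit(train_angles, train_stats)
print(knn.predict([[22.5]]))   # interpolate the statistics at an unseen angle
```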
The training and testing of these models are based on results from large-eddy simulation of open-channel flow over and within an array of surface-mounted cuboids under neutral ambient stratification. Training datasets comprise flow statistics from various approaching wind angles, and the surrogates are asked to \u201cconnect between the dots\u201d, i.e., to predict flow statistics for unseen values of the approaching wind angle. The KNN- and MLP-based surrogates are orders of magnitude faster than the LES algorithm and use only a fraction of the computational resources. KNN and MLP can reconstruct time-averaged three-dimensional flow statistics with a coefficient of determination $R^2 > 0.96$ for combined flow statistics when trained using many training samples (big-data regime). As the number of training samples is reduced, the accuracy of the MLP model deteriorates more gradually, featuring a superior performance overall." -"---\nabstract: 'We consider quantum two-block group algebra (2BGA) codes, a previously unstudied family of smallest lifted-product (LP) codes. These codes are related to generalized-bicycle (GB) codes, except a cyclic group is replaced with an arbitrary finite group, generally non-abelian. As special cases, 2BGA codes include a subset of square-matrix LP codes over abelian groups, including quasi-cyclic codes, and all square-matrix hypergraph-product codes constructed from a pair of classical group codes. We establish criteria for permutation equivalence of 2BGA codes and give bounds for their parameters, both explicit and in relation to other quantum and classical codes. We also enumerate the optimal parameters of all inequivalent connected 2BGA codes with stabilizer generator weights $W\\le 8$, of length $n\\le 100$ for abelian groups, and $n\\le 200$ for non-abelian groups.'\nauthor:\n- 'Hsiang-Ku Lin'\n- 'Leonid P. Pryadko'\nbibliography:\n- 'lpp.bib'\n- 'qc\\_all.bib'\n- 'more\\_qc.bib'\n- 'ldpc.bib'\n- 'linalg.bib'\n- 'teach.bib'\ntitle: 'Quantum two-block group algebra codes'\n---\n\nIntroduction {#sec:introduction}\n============\n\nRecent years have seen a substantial progress in theory of quantum low-density parity-check (LDPC) codes[@Evra-Kaufman-Zemor-2020; @Hastings-Haah-ODonnell-2020; @Panteleev-Kalachev-2020; @Breuckmann-Eberhardt-2020; @Panteleev-Kalachev-2021]. Generally, any code family with bounded-weight stabilizer generators and distance scaling logarithmically or faster with the block length has a finite fault-tolerant" -"---\nabstract: 'Increasingly popular home assistants are widely utilized as the central controller for smart home devices. However, current designs heavily rely on voice interfaces with accessibility and usability issues; some latest ones are equipped with additional cameras and displays, which are costly and raise privacy concerns. These concerns jointly motivate Beyond-Voice, a novel deep-learning-driven acoustic sensing system that allows commodity home assistant devices to track and reconstruct hand poses continuously. It transforms the home assistant into an active sonar system using its existing onboard microphones and speakers. We feed a high-resolution range profile to the deep learning model that can analyze the motions of multiple body parts and predict the 3D positions of 21 finger joints, bringing the granularity for acoustic hand tracking to the next level. It operates across different environments and users without the need for personalized training data. 
A user study with 11 participants in 3 different environments shows that Beyond-Voice can track joints with an average mean absolute error of 16.47 mm without any training data provided by the testing subject.'\nauthor:\n- 'Yin Li, Rohan Reddy, Cheng Zhang, Rajalakshmi Nandakumar'\nbibliography:\n- 'main.bib'\ntitle: 'Beyond-Voice: Towards Continuous 3D Hand Pose Tracking on Commercial Home Assistant" -"---\nabstract: 'Accurate and timely detection of plant stress is essential for yield protection, allowing better-targeted intervention strategies. Recent advances in remote sensing and deep learning have shown great potential for rapid non-invasive detection of plant stress in a fully automated and reproducible manner. However, the existing models face several challenges: 1) computational inefficiency and the misclassifications between the different stresses with similar symptoms; and 2) the poor interpretability of the host-stress interaction. In this work, we propose a novel fast Fourier Convolutional Neural Network (FFDNN) for accurate and explainable detection of two plant stresses with similar symptoms (i.e., Wheat Yellow Rust and Nitrogen Deficiency). Specifically, unlike the existing CNN models, the main components of the proposed model include: 1) a fast Fourier convolutional block, with a new fast Fourier transform kernel as the basic perception unit, substituting the traditional convolutional kernel to capture both local and global responses to plant stress on various time scales and improve computing efficiency with fewer learning parameters in the Fourier domain; 2) a Capsule Feature Encoder to encapsulate the extracted features into a series of vector features to represent the part-to-whole relationship with the hierarchical structure of the host-stress interactions of the specific stress. In addition," -"---\nabstract: 'The directed bond percolation is a paradigmatic model in non-equilibrium statistical physics. It captures essential physical information on the nature of the continuous phase transition between active and absorbing states. In this paper, we study this model by means of the field-theoretic formulation with a subsequent renormalization group analysis. We calculate all critical exponents needed for the quantitative description of the corresponding universality class to the third order in perturbation theory. Using dimensional regularization with the minimal subtraction scheme, we carry out perturbative calculations in a formally small parameter ${\\varepsilon}$, where ${\\varepsilon}= 4-d$ is a deviation from the upper critical dimension $d_c=4$. We use a non-trivial combination of analytical and numerical tools in order to determine ultraviolet divergent parts of Feynman diagrams.'\nauthor:\n- 'Loran Ts. Adzhemyan'\n- Michal Hnati\u010d\n- 'Ella V. Ivanova'\n- 'Mikhail V. Kompaniets'\n- Tom\u00e1\u0161 Lu\u010divjansk\u00fd\n- Luk\u00e1\u0161 Mi\u017ei\u0161in\nbibliography:\n- 'mybib.bib'\ntitle: 'Field-theoretic Analysis of Directed Percolation: Three-loop Approximation'\n---\n\n\\[sec:level1\\] Introduction\n===========================\n\nNon-equilibrium processes are prevalent in nature and the majority of observed phenomena are in some form of non-equilibrium state\u00a0[@marro_dickman1999; @krapivsky_book2010]. Famous examples encompass turbulent flows\u00a0[@davidson], pattern formations\u00a0[@hohenberg1993], Earth\u2019s atmosphere\u00a0[@marston2012], and living organisms\u00a0[@wang2019].
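The property exploited by a Fourier-domain perception unit such as the block described above is that a pointwise product of spectra equals a circular convolution over the whole input, i.e., a global receptive field at $O(N \log N)$ cost. A generic one-dimensional sketch (not the paper's layer):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)   # stand-in for one feature channel
w = rng.standard_normal(256)   # filter spanning the entire input

y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(w)).real    # O(N log N)
y_direct = np.array([(x * np.roll(w[::-1], n + 1)).sum()   # O(N^2) reference
                     for n in range(256)])
print(np.allclose(y_fft, y_direct))                        # True
```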
A plethora of" -"---\nabstract: |\n Let ${\\mathcal{V}}$ and ${\\mathcal{U}}$ be the point sets of two independent homogeneous Poisson processes on ${\\mathbb{R}^d}$. A graph ${\\mathcal{G}}_{\\mathcal{V}}$ with vertex set ${\\mathcal{V}}$ is constructed by first connecting pairs of points $(v,u)$ with $v\\in{\\mathcal{V}}$ and $u\\in{\\mathcal{U}}$ independently with probability $g(v-u)$, where $g$ is a non-increasing radial function, and then connecting two points $v_1,v_2\\in{\\mathcal{V}}$ if and only if they have a joint neighbor $u\\in{\\mathcal{U}}$. This gives rise to a random intersection graph on ${\\mathbb{R}^d}$. Local properties of the graph, including the degree distribution, are investigated and quantified in terms of the intensities of the underlying Poisson processes and the function $g$. Furthermore, the percolation properties of the graph are characterized and shown to differ depending on whether $g$ has bounded or unbounded support.\n\n *Keywords:* Random intersection graphs, spatial random graphs, complex networks, AB percolation, degree distribution, percolation phase transition.\n\n AMS 2010 Subject Classification: 60K35.\nauthor:\n- 'Maria Deijfen[^1]'\n- 'Riccardo Michielan[^2]'\ndate: June 2023\ntitle: Geometric random intersection graphs with general connection probabilities\n---\n\nplus1pt minus1pt\n\nIntroduction {#sec:intro}\n============\n\nRandom intersection graphs have been popular in network modeling to describe networks arising from bipartite structures. In general, an intersection graph is constructed by assigning each vertex a subset" -"---\nabstract: 'This paper is the third part of a series of papers about empirical approaches to open circuit voltage (OCV) modeling of lithium-ion batteries. The first part of the series [@SlowOCVp1] proposed models to quantify various sources of uncertainties in the OCV models; and, the second part of the series [@SlowOCVp2] presented systematic data collection approaches to compute the uncertainties in the OCV-SOC models. This paper uses data collected from 28 OCV characterization experiments, performed according to the data collection plan presented in [@SlowOCVp2], to compute and analyze the following three different OCV uncertainty metrics: cell-to-cell variations, cycle-rate error, and curve fitting error. From the computed metrics, it was observed that a lower C-Rate showed smaller errors in the OCV-SOC model and vice versa. The results reported in this paper establish a relationship between the C-Rate and the uncertainty of the OCV-SOC model. This research can be thus useful to battery researchers for quantifying the tradeoff between the time taken to complete the OCV characterization test and the corresponding uncertainty in the OCV-SOC modeling. Further, quantified uncertainty model parameters can be used to accurately characterize the uncertainty in various battery management functionalities, such as state of charge and state" -"---\nabstract: 'The event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain, while still dense and redundant in the temporal domain. Although spiking neural network (SNN), the event-driven neuromorphic model, has the potential to extract spatio-temporal features from the event streams, it is not effective and efficient. Based on the above, we propose an events sparsification spiking framework dubbed as Razor SNN, pruning pointless event frames progressively. 
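The two-stage construction in the random intersection graph abstract above is straightforward to simulate; below is a sketch on a finite box with an indicator-function $g$, one valid special case of a non-increasing radial connection probability.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
side, lam_v, lam_u, radius = 10.0, 1.0, 1.0, 0.8
V = rng.uniform(0, side, size=(rng.poisson(lam_v * side**2), 2))
U = rng.uniform(0, side, size=(rng.poisson(lam_u * side**2), 2))

# V-U edges: here g(v - u) = 1 when |v - u| < radius, else 0
adj = np.linalg.norm(V[:, None, :] - U[None, :, :], axis=2) < radius

# two V-points are joined iff they share at least one U-neighbor
edges = [(i, j) for i, j in combinations(range(len(V)), 2)
         if np.any(adj[i] & adj[j])]
print(len(V), "vertices,", len(edges), "edges")
```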
Concretely, we extend the dynamic mechanism based on the global temporal embeddings, reconstruct the features, and emphasize the effect of events adaptively at the training stage. During the inference stage, we eliminate fruitless frames hierarchically according to a binary mask generated by the trained temporal embeddings. Comprehensive experiments demonstrate that our Razor SNN achieves competitive performance consistently on four event-based benchmarks: DVS 128 Gesture, N-Caltech 101, CIFAR10-DVS and SHD.'\nauthor:\n- |\n Yuan Zhang${}^{1}$ Jian Cao${}^{1} $ Ling Zhang${}^{1} $ [^1]\\\n Jue Chen${}^{1}$ Wenyu Sun${}^{2}$ Yuan Wang${}^{3}$\nbibliography:\n- 'strings.bib'\ntitle: 'Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings'\n---\n\nIntroduction\n============\n\nEvent-based neuromorphic computation utilizes sparse and asynchronous events captured by DVS to represent signals more efficiently. Unlike RGB cameras, DVS encodes the time, location, and" -"---\nabstract: 'We study the problem of PAC learning $\\gamma$-margin halfspaces with Random Classification Noise. We establish an information-computation tradeoff suggesting an inherent gap between the sample complexity of the problem and the sample complexity of computationally efficient algorithms. Concretely, the sample complexity of the problem is $\\widetilde{\\Theta}(1/(\\gamma^2 {\\epsilon}))$. We start by giving a simple efficient algorithm with sample complexity $\\widetilde{O}(1/(\\gamma^2 {\\epsilon}^2))$. Our main result is a lower bound for Statistical Query (SQ) algorithms and low-degree polynomial tests suggesting that the quadratic dependence on $1/{\\epsilon}$ in the sample complexity is inherent for computationally efficient algorithms. Specifically, our results imply a lower bound of $\\widetilde{\\Omega}(1/(\\gamma^{1/2} {\\epsilon}^2))$ on the sample complexity of any efficient SQ learner or low-degree test.'\nauthor:\n- |\n Ilias Diakonikolas[^1]\\\n UW Madison\\\n [ilias@cs.wisc.edu]{}\\\n- |\n Jelena Diakonikolas[^2]\\\n UW Madison\\\n [jelena@cs.wisc.edu]{}\\\n- |\n Daniel M. Kane[^3]\\\n UC San Diego\\\n [dakane@ucsd.edu]{}\n- |\n Puqian Wang[^4]\\\n UW Madison\\\n [pwang333@wisc.edu]{}\\\n- |\n Nikos Zarifis[^5]\\\n UW Madison\\\n [zarifis@wisc.edu]{}\\\nbibliography:\n- 'clean2.bib'\ntitle: |\n Information-Computation Tradeoffs for Learning Margin Halfspaces with Random Classification Noise\n\n [^6]\n---\n\nIntroduction\n============\n\n[[This work studies the efficient learnability of halfspaces with a margin in the presence of random label noise. Before we present our contributions, we provide the" -"---\nabstract: 'The development of Natural Language Generation models has led to the creation of powerful Artificial Intelligence-assisted writing tools. These tools are capable of predicting users\u2019 needs and actively providing suggestions as they write. In this work, we conduct a comparative user study of such tools through an information retrieval lens: pull and push. Specifically, we investigate the user demand for AI-assisted writing, the impact of the two paradigms on quality, ownership of the writing product, and efficiency and enjoyment of the writing process. We also seek to understand the impact of bias in AI-assisted writing. Our findings show that users welcome seamless assistance of AI in their writing. Furthermore, AI helped users diversify the ideas in their writing more quickly while keeping it clear and concise.
Users also enjoyed the collaboration with AI-assisted writing tools and did not feel a lack of ownership. Finally, although participants did not experience bias in our experiments, they still expressed explicit and clear concerns that should be addressed in future AI-assisted writing tools.'\nauthor:\n- Carlos Alves Pereira\n- Tanay Komarlu\n- Wael Mobeirek\nbibliography:\n- 'references.bib'\ntitle: 'The Future of AI-Assisted Writing'\n---\n\nIntroduction\n============\n\nComputer-assisted writing tools have been rapidly" -"---\nabstract: 'This paper introduces DreamDiffusion, a novel method for generating high-quality images directly from brain electroencephalogram (EEG) signals, without the need to translate thoughts into text. DreamDiffusion leverages pre-trained text-to-image models and employs temporal masked signal modeling to pre-train the EEG encoder for effective and robust EEG representations. Additionally, the method further leverages the CLIP image encoder to provide extra supervision to better align EEG, text, and image embeddings with limited EEG-image pairs. Overall, the proposed method overcomes the challenges of using EEG signals for image generation, such as noise, limited information, and individual differences, and achieves promising results. Quantitative and qualitative results demonstrate the effectiveness of the proposed method as a significant step towards portable and low-cost \u201cthoughts-to-image\u201d, with potential applications in neuroscience and computer vision. The code is available here .'\nauthor:\n- |\n Yunpeng Bai$^{1}$, Xintao Wang$^{2}$, Yan-Pei Cao$^{2}$, Yixiao Ge$^{2}$, Chun Yuan$^{1, 3}$, Ying Shan$^{2}$\\\n $^{1}$ Tsinghua Shenzhen International Graduate School,\\\n $^{2}$Tencent AI Lab, $^{3}$Peng Cheng Laboratory\\\nbibliography:\n- 'egbib.bib'\ntitle: 'DreamDiffusion: Generating High-Quality Images from Brain EEG Signals'\n---\n\n![image](./figures/eeg_teaser.pdf){width=\"\\textwidth\"} \\[fig:teaser\\]\n\nIntroduction\n============\n\nImage generation\u00a0[@goodfellow2020generative; @karras2019style; @brock2018large] has made great strides in recent years, especially after breakthroughs in text-to-image generation\u00a0[@ramesh2021zero; @ding2022cogview2; @ramesh2022hierarchical;" -"---\nabstract: 'This study investigates the impact of molecular thermal fluctuations on compressible decaying isotropic turbulence using the unified stochastic particle (USP) method, encompassing both two-dimensional (2D) and three-dimensional (3D) scenarios. The findings reveal that the turbulent spectra of velocity and thermodynamic variables follow the wavenumber scaling law of ${k}^{(d-1)}$ for different spatial dimensions $d$ within the high wavenumber range, indicating the impact of thermal fluctuations on small-scale turbulent statistics. With the application of Helmholtz decomposition, it is found that the thermal fluctuation spectra of solenoidal and compressible velocity components (${\\vec{u}}_{s}$ and ${\\vec{u}}_{c}$) follow an energy ratio of 1:1 for 2D cases, while the ratio changes to 2:1 for 3D cases. Comparisons between 3D turbulent spectra obtained through USP simulations and direct numerical simulations of the Navier-Stokes equations demonstrate that thermal fluctuations dominate the spectra at length scales comparable to the Kolmogorov length scale. Additionally, the effect of thermal fluctuations on the spectrum of ${\\vec{u}}_{c}$ is significantly influenced by variations in the turbulent Mach number. 
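The Helmholtz decomposition used in the turbulence analysis above has a compact Fourier-space implementation: the compressible part of the velocity field is its projection onto the wavevector, and the solenoidal part is the remainder. A two-dimensional periodic sketch with a random toy field:

```python
import numpy as np

n = 64
u = np.random.default_rng(3).standard_normal((2, n, n))  # toy velocity field
k = np.fft.fftfreq(n) * n
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                        # avoid 0/0 at the mean (k = 0) mode

uh = np.fft.fft2(u)                   # FFT of each velocity component
div = kx * uh[0] + ky * uh[1]         # k . u_hat
u_c = np.stack([kx, ky]) * div / k2   # compressible (curl-free) part
u_s = uh - u_c                        # solenoidal (divergence-free) part
print(np.abs(kx * u_s[0] + ky * u_s[1]).max())  # ~0: u_s is divergence-free
```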
We further study the impact of thermal fluctuations on the predictability of turbulence. With initial differences caused by thermal fluctuations, different flow realizations display significant disparities in velocity and thermodynamic fields at larger scales after" -"---\nabstract: |\n Legal language can be understood as the language typically used by those engaged in the legal profession and, as such, it may come both in spoken or written form. Recent legislation on cybersecurity obviously uses legal language in writing, thus inheriting all its interpretative complications due to the typical abundance of cases and sub-cases as well as to the general richness in detail. This paper faces the challenge of the essential interpretation of the legal language of cybersecurity, namely of the extraction of the essential Parts of Speech (POS) from the legal documents concerning cybersecurity.\n\n The challenge is overcome by our methodology for POS tagging of legal language. It leverages state-of-the-art open-source tools for Natural Language Processing (NLP) as well as manual analysis to validate the outcomes of the tools. As a result, the methodology is automated and, arguably, general for any legal language following minor tailoring of the preprocessing step. It is demonstrated over the most relevant EU legislation on cybersecurity, namely on the NIS 2 directive, producing the first, albeit essential, structured interpretation of such a relevant document. Moreover, our findings indicate that tools such as SpaCy and ClausIE reach their limits over the legal" -"---\nabstract: 'When we apply comparative phylogenetic analyses to genome data, it is a well-known problem and challenge that some of given species (or taxa) often have missing genes. In such a case, we have to impute a missing part of a gene tree from a sample of gene trees. In this short paper we propose a novel method to infer a missing part of a phylogenetic tree using an analogue of a classical linear regression in the setting of tropical geometry. In our approach, we consider a tropical polytope, a convex hull with respect to the tropical metric closest to the data points. We show a condition that we can guarantee that an estimated tree from our method has at most four Robinson\u2013Foulds (RF) distance from the ground truth and computational experiments with simulated data show our method works well.'\nauthor:\n- Ruriko Yoshida\nbibliography:\n- 'refs.bib'\ntitle: Imputing phylogenetic trees using tropical polytopes over the space of phylogenetic trees\n---\n\nIntroduction\n============\n\nDue to a new technology, today we are able to generate sequences from genome with lower cost. However, at the same time, we have a great challenge to analyze large scale datasets from genome sequences. In" -"---\nabstract: 'Class-incremental learning aims to learn new classes in an incremental fashion without forgetting the previously learned ones. Several research works have shown how additional data can be used by incremental models to help mitigate catastrophic forgetting. In this work, following the recent breakthrough in text-to-image generative models and their wide distribution, we propose the use of a pretrained Stable Diffusion model as a source of additional data for class-incremental learning. Compared to competitive methods that rely on external, often unlabeled, datasets of real images, our approach can generate synthetic samples belonging to the same classes as the previously encountered images. 
This allows us to use those additional data samples not only in the distillation loss but also for replay in the classification loss. Experiments on the competitive benchmarks CIFAR100, ImageNet-Subset, and ImageNet demonstrate how this new approach can be used to further improve the performance of state-of-the-art methods for class-incremental learning on large scale datasets.'\nauthor:\n- |\n Quentin Jodelet$^{1,2}$, Xin Liu$^{2}$, Yin Jun Phua$^{1}$, Tsuyoshi Murata$^{1,2}$\\\n [$^{1}$ Department of Computer Science, Tokyo Institute of Technology, Japan]{}\\\n [$^{2}$ Artificial Intelligence Research Center, AIST, Japan]{}\\\n [jodelet@net.c.titech.ac.jp, xin.liu@aist.go.jp, phua@c.titech.ac.jp, murata@c.titech.ac.jp]{}\nbibliography:\n- 'egbib.bib'\ntitle: 'Class-Incremental Learning using Diffusion Model for" -"---\nauthor:\n- 'L.-A. H\u00fchn'\n- 'B. Bitsch'\nbibliography:\n- 'references.bib'\ntitle: 'How does accretion of planet-forming disks influence stellar abundances?'\n---\n\nIntroduction\n============\n\nThe formation of exoplanets takes place in protoplanetary disks around young host stars, consisting of mainly hydrogen and helium gas, but also heavier elements in both solid and gaseous form. Their presence is a natural outcome of star formation (for a review, see @williams2011). In these disks, planet cores grow by accreting material from the disk. As this process takes place around the young host star, it is apparent that the stellar evolution cannot be treated as taking place in an isolated system. While the stellar irradiation is a common aspect considered in planet formation models as a form of stellar influence on the disk (e.g., @chiang1997 [@dullemond2004; @bitsch2015a; @savvidou2020]), the reverse impact of the surroundings on the star is not to be neglected.\n\nThe large fraction of stars hosting at least one planet naturally leads to the conclusion that planet formation is a ubiquitous phenomenon, further arguing for its consideration in the study of young stars. The protostar and its disk initially share the same chemical composition, having formed from collapsing molecular cloud material. With" -"---\nabstract: 'The self-interacting dark matter (SIDM) paradigm offers a potential solution to small-scale structure problems faced by the collision-less cold dark matter. This framework incorporates self-interactions among dark matter particles, typically mediated by a particle with a MeV-scale mass. Recent evidences of nano-Hertz gravitational waves from pulsar timing arrays (PTAs) such as NANOGrav, CPTA, EPTA, and PPTA suggest the occurrence of a first-order phase transition (FOPT) at a MeV-scale temperature. Considering the close proximity between these two scales, we propose that the mediator mass in the SIDM model originates from the spontaneous breaking of a $U(1)''$ symmetry, which is driven by the FOPT indicated by PTA data. Consequently, the alignment of these two scales is believed to be deeply connected by the same underlying physics. By extensively exploring the parameter space, remarkably, we find that the parameter space favored by SIDM just provides an explanation for the PTA data.'\nauthor:\n- Chengcheng Han\n- 'Ke-Pan Xie'\n- Jin Min Yang\n- Mengchao Zhang\nbibliography:\n- 'ref.bib'\ntitle: 'Self-interacting dark matter implied by nano-Hertz gravitational waves'\n---\n\nIntroduction\n============\n\nThe widely accepted cold dark matter (CDM) model successfully explains the Universe\u2019s structure and evolution. 
However, it faces challenges when addressing" -"---\nabstract: 'Students are able to produce correctly functioning program code even though they have a fragile understanding of how it actually works. Questions derived automatically from individual exercise submissions (QLC) can probe if and how well the students understand the structure and logic of the code they just created. Prior research studied this approach in the context of the first programming course. We replicate the study on a follow-up programming course for engineering students which contains a recap of general concepts in CS1. The task was the classic rainfall problem which was solved by 90% of the students. The QLCs generated from each passing submission were kept intentionally simple, yet 27% of the students failed in at least one of them. Students who struggled with questions about their own program logic had a lower median for overall course points than students who answered correctly.'\nauthor:\n- Teemu Lehtinen\n- Otto Sepp\u00e4l\u00e4\n- Ari Korhonen\nbibliography:\n- 'base.bib'\ntitle: 'Automated Questions About Learners\u2019 Own Code Help to Detect Fragile Knowledge'\n---\n\n<ccs2012> <concept> <concept\\_id>10010405.10010489.10010491</concept\\_id> <concept\\_desc>Applied computing\u00a0Interactive learning environments</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10003456.10003457.10003527.10003531.10003533</concept\\_id> <concept\\_desc>Social and professional topics\u00a0Computer science education</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nWhile novice programmers might be" -"---\nabstract: 'Integrating multiple observational studies to make unconfounded causal or descriptive comparisons of group potential outcomes in a large natural population is challenging. Moreover, retrospective cohorts, being convenience samples, are usually unrepresentative of the natural population of interest and have groups with unbalanced covariates. We propose a general covariate-balancing framework based on pseudo-populations that extends established weighting methods to the meta-analysis of multiple retrospective cohorts with multiple groups. Additionally, by maximizing the effective sample sizes of the cohorts, we propose a ible, ptimized, and ealistic (FLEXOR) weighting method appropriate for integrative analyses. We develop new weighted estimators for unconfounded inferences on wide-ranging population-level features and estimands relevant to group comparisons of quantitative, categorical, or multivariate outcomes. The asymptotic properties of these estimators are examined, and accurate small-sample procedures are devised for quantifying estimation uncertainty. Through simulation studies and meta-analyses of TCGA datasets, we discover the differential biomarker patterns of the two major breast cancer subtypes in the United States and demonstrate the versatility and reliability of the proposed weighting strategy, especially for the FLEXOR pseudo-population.'\nauthor:\n- |\n Subharup Guha[^1]\\\n Department of Biostatistics, University of Florida\\\n and\\\n Yi Li\\\n Department of Biostatistics, University of Michigan\ntitle: '**Causal Meta-Analysis by" -"---\nabstract: 'Rydberg atom-based radio frequency electromagnetic field sensors are drawing wide-spread interest because of their unique properties, such as small size, dielectric construction, and self-calibration. 
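The effective sample size that the FLEXOR weighting above seeks to maximize is, in its simplest form, the standard Kish formula; a quick sketch with synthetic weights shows how extreme weights shrink it:

```python
import numpy as np

def ess(w):
    # Kish effective sample size: (sum w)^2 / sum w^2
    return w.sum() ** 2 / (w ** 2).sum()

rng = np.random.default_rng(4)
mild = rng.uniform(0.8, 1.2, size=1000)   # nearly uniform weights
heavy = rng.pareto(1.5, size=1000) + 0.1  # heavy-tailed, extreme weights
print(ess(mild), ess(heavy))              # close to 1000 vs. far smaller
```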
These photonic sensors use lasers to prepare atoms and read out the atomic response to a radio frequency electromagnetic field based on electromagnetically induced transparency, or related phenomena. Much of the theoretical work has focused on the Autler-Townes splitting induced by the radio frequency wave. The amplitude regime, where the change in transmission observed on resonance is measured to determine electric field strength, has received less attention. In this paper, we deliver analytic expressions that are useful for calculating the absorption coefficient and sensitivity in the amplitude regime. We describe the approximations that we applied to obtain the analytic expressions and demonstrate their validity over a large range of the interesting parameter space. The effect of the thermal motion of the atoms is explicitly addressed. Residual Doppler shifts are shown to limit sensitivity. An analytic expression for the amplitude regime of Rydberg atom-based sensing has not, to our knowledge, been obtained previously. The expressions, approximations and descriptions presented in the paper are important for maximizing the sensitivity of Rydberg atom-based sensors and for providing" -"---\nabstract: |\n We study the online variant of the Min-Sum Set Cover ([[Mssc]{.nodecor}]{}) problem, a generalization of the well-known list update problem. In the [[Mssc]{.nodecor}]{} problem, an algorithm has to maintain a time-varying permutation of the list of $n$ elements, and serve a sequence of requests $R_1, R_2, \\dots, R_t, \\dots$. Each $R_t$ is a subset of elements of cardinality at most $r$. For a requested set $R_t$, an online algorithm has to pay the cost equal to the position of the first element from $R_t$ on its list. Then, it may arbitrarily permute its list, paying the number of swapped adjacent element pairs.\n\n We present the first *constructive* deterministic algorithm for this problem, whose competitive ratio does not depend on $n$. Our algorithm is $O(r^2)$-competitive, which beats both the *existential* upper bound of $O(r^4)$ by Bienkowski and Mucha\u00a0\\[AAAI \u201923\\] and the previous constructive bound of $O(r^{3/2} \\cdot \\sqrt{n})$ by Fotakis et al.\u00a0\\[ICALP \u201920\\]. Furthermore, we show that our algorithm attains an asymptotically optimal competitive ratio of $O(r)$ when compared to the best fixed permutation of elements.\nauthor:\n- Mateusz Basiak\n- Marcin Bienkowski\n- Agnieszka Tatarczuk\nbibliography:\n- 'references.bib'\ntitle: 'An Improved Deterministic Algorithm for" -"---\nabstract: 'Estimating 3D human poses only from a 2D human pose sequence has been thoroughly explored in recent years. Yet, prior to this, no such work has attempted to unify 2D and 3D pose representations in the shared feature space. In this paper, we propose [*MPM*]{}, a unified 2D-3D human pose representation framework via masked pose modeling. We treat 2D and 3D poses as two different modalities like vision and language and build a single-stream transformer-based architecture. We apply three pretext tasks, which are masked 2D pose modeling, masked 3D pose modeling, and masked 2D pose lifting, to pre-train our network, and use full supervision to perform further fine-tuning. A high masking ratio of $72.5~\\%$ in total, combined with a spatio-temporal mask sampling strategy, leads to better relation modeling in both the spatial and temporal domains.
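The cost model of the online [[Mssc]{.nodecor}]{} problem defined above can be exercised with a simple move-to-front heuristic; this is shown only to make the access and swap costs concrete and is not the $O(r^2)$-competitive algorithm of the paper.

```python
def mtf_cost(perm, requests):
    total = 0
    for request in requests:
        pos = min(perm.index(e) for e in request)  # first requested element
        total += pos + 1                           # access cost: 1-based position
        total += pos                               # one unit per adjacent swap
        perm.insert(0, perm.pop(pos))              # move that element to the front
    return total

print(mtf_cost(list("abcde"), [{"d", "e"}, {"e"}, {"a", "b"}]))  # prints 21
```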
[*MPM*]{}\u00a0can handle multiple tasks including 3D human pose estimation, 3D pose estimation from occluded 2D pose, and 3D pose completion in a **single** framework. We conduct extensive experiments and ablation studies on several widely used human pose datasets and achieve state-of-the-art performance on Human3.6M and MPI-INF-3DHP. Codes and model checkpoints are available at [this https URL](https://github.com/vvirgooo2/MPM).'\nauthor:\n- |\n **Zhenyu Zhang**$^{1}$ **Wenhao Chai**$^{2}$ [^1] **Zhongyu Jiang**$^2$" -"---\nabstract: 'Pulsar Timing Arrays (PTAs) have reported evidence for a stochastic gravitational wave (GW) background at nHz frequencies, possibly originating in the early Universe. We show that the spectral shape of the low-frequency (causality) tail of GW signals sourced at temperatures around $T\\gtrsim 1$ GeV is distinctively affected by confinement of strong interactions (QCD), due to the corresponding sharp decrease in the number of relativistic species. Bayesian analyses in the NANOGrav 15 years and the previous International PTA datasets reveal a significant improvement in the fit with respect to cubic power-law spectra, previously employed for the causality tail. This suggests that the inclusion of Standard Model effects on GWs can have a potentially decisive impact on model selection.'\nauthor:\n- Gabriele Franciolini\n- Davide\u00a0Racco\n- Fabrizio Rompineve\nbibliography:\n- 'bib\\_SGWB-PTA.bib'\ntitle: |\n Footprints of the QCD Crossover on Cosmological Gravitational Waves\\\n at Pulsar Timing Arrays \n---\n\nCERN-TH-2023-080\n\nIntroduction {#sec:introduction}\n============\n\nA stochastic background of Gravitational Waves (SGWB) may be the only direct probe into the early stages of cosmological evolution, where it can be produced by physics beyond the Standard Model (SM). The recently-reported evidence for a nHz SGWB in the NANOGrav 15 years\u00a0[@NG15-pulsars; @NG15-SGWB] (NG15), European" -"---\nabstract: 'Let $D$ be a domain and let $\\operatorname{Int}(D)$ and $\\operatorname{Int{}^\\text{R}}(D)$ be the ring of integer-valued polynomials and the ring of integer-valued rational functions, respectively. Skolem proved that if $I$ is a finitely-generated ideal of $\\operatorname{Int}({{\\mathbb Z}})$ with all the value ideals of $I$ not being proper, then $I = \\operatorname{Int}({{\\mathbb Z}})$. This is known as the Skolem property, which does not hold in ${{\\mathbb Z}}[x]$. One obstruction to $\\operatorname{Int}(D)$ having the Skolem property is the existence of unit-valued polynomials. This is no longer an obstruction when we consider the Skolem property on $\\operatorname{Int{}^\\text{R}}(D)$. We determine that the Skolem property on $\\operatorname{Int{}^\\text{R}}(D)$ is equivalent to the maximal spectrum being contained in the ultrafilter closure of the set of maximal pointed ideals. We generalize the Skolem property using star operations and determine an analogous equivalence under this generalized notion.'\nauthor:\n- Baian Liu\nbibliography:\n- 'references.bib'\ntitle: 'The Skolem property in rings of integer-valued rational functions'\n---\n\nIntroduction\n============\n\nGiven a domain $D$, the ring of integer-valued polynomials over $D$ has been studied extensively. A collection of results on integer-valued polynomials can be found in [@Cahen]. However, not much is known about the ring of integer-valued rational functions. 
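For readers meeting $\operatorname{Int}(D)$ for the first time in the abstract above, a quick check that a polynomial with non-integer rational coefficients can still be integer-valued on all of ${\mathbb Z}$, using the binomial polynomial $x(x-1)/2$:

```python
from fractions import Fraction

def binom2(x):
    # x(x - 1)/2 has a non-integer coefficient yet maps Z into Z
    return Fraction(x * (x - 1), 2)

print(all(binom2(n).denominator == 1 for n in range(-1000, 1000)))  # True
```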
Despite having" -"---\nabstract: 'Recent advancements in technology have expanded the possibilities of human action recognition by leveraging 3D data, which offers a richer representation of actions through the inclusion of depth information, enabling more accurate analysis of spatial and temporal characteristics. However, 3D human action recognition is a challenging task due to the irregularity and disarrangement of the data points in action sequences. In this context, we present our novel model for human action recognition from fixed-topology mesh sequences based on a Spiral Auto-encoder and Transformer Network, namely SpATr. The proposed method first disentangles space and time in the mesh sequences. Then, an auto-encoder is utilized to extract spatial geometrical features, and a tiny transformer is used to capture the temporal evolution of the sequence. Previous methods either use 2D depth images or sampled skeleton points, or they require a huge amount of memory, limiting them to processing only short sequences. In this work, we show a competitive recognition rate and high memory efficiency by building our auto-encoder based on spiral convolutions, which are lightweight convolutions applied directly to mesh data with fixed topologies, and by modeling temporal evolution using attention, which can handle large sequences. The proposed method is" -"---\nabstract: 'We provide an algorithm for constructing a [Kirby diagram]{}of a $4$-dimensional open book given a [Heegaard diagram]{}of the page. As an application, we show that an open book constructed with arbitrary page and trivial monodromy is diffeomorphic to an open book constructed with a punctured handlebody as page and a composition of torus twists and sphere twists as monodromy.'\naddress: 'Freie Universit\u00e4t Berlin, Germany'\nauthor:\n- 'Chun-Sheng Hsueh'\nbibliography:\n- 'Sources.bib'\ntitle: 'Kirby diagrams of 4-dimensional open books'\n---\n\nIntroduction {#intro}\n============\n\nKirby calculus is a successful approach to studying $4$-dimensional manifolds [@gs], which we would like to apply to study open books in dimension $4$. Obstructions to the existence of open books are found in all dimensions and are known to be complete in all dimensions except $4$ [@Quinn]. There are two primary goals in this paper. Firstly, we introduce the notion of *half open books* and give an algorithm for constructing a [Kirby diagram]{}of half open books. Then we show that a [Kirby diagram]{}of an open book can be obtained by adding a framed link to the [Kirby diagram]{}of a half open book. Secondly, a handlebody is a $3$-manifold with boundary obtained from $D^3$ by attaching" -"---\nabstract: 'A widely accepted definition of intelligence in the context of Artificial Intelligence (AI) still eludes us. Due to our exceedingly rapid development of AI paradigms, architectures, and tools, the prospect of naturally arising AI consciousness seems more likely than ever. In this paper, we claim that all current intelligence tests are insufficient to point to the existence or lack of intelligence **as humans intuitively perceive it**. We draw from ideas in the philosophy of science, psychology, and other areas of research to provide a clearer definition of the problems of artificial intelligence, self-awareness, and agency. We furthermore propose a new heuristic approach to test for artificial self-awareness and outline a possible implementation. 
Finally, we discuss some of the questions that arise from this new heuristic, be they philosophical or implementation-oriented.'\naddress:\n- 'The Hebrew University of Jerusalem, Department of Philosophy'\n- Independent Researcher\nauthor:\n- '\u00a0[^1]'\nbibliography:\n- 'ecai.bib'\ntitle: 'Suffering Toasters - A New Self-Awareness Test for AI'\n---\n\nIntroduction\n============\n\nThe age of information may have brought humans to the brink of a new evolutionary stage. The amount of data collected and analyzed stands today at one Petabyte (PB) a day [@kn:clissa_survey_2022]. The information explosion" -"---\nabstract: 'The proposed [*Daksha*]{} mission comprises a pair of highly sensitive space telescopes for detecting and characterising high-energy transients such as electromagnetic counterparts of gravitational wave events and gamma-ray bursts (GRBs). Along with spectral and timing analysis, [*Daksha*]{} can also undertake polarisation studies of these transients, providing data crucial for understanding the source geometry and physical processes governing high-energy emission. Each [*Daksha*]{} satellite will have 340 pixelated Cadmium Zinc Telluride (CZT) detectors arranged in a quasi-hemispherical configuration without any field-of-view collimation (open detectors). These CZT detectors are good polarimeters in the energy range 100 \u2013 400 keV, and their ability to measure polarisation has been successfully demonstrated by the Cadmium Zinc Telluride Imager (CZTI) onboard [[*AstroSat*]{}]{}. Here we demonstrate the hard X-ray polarisation measurement capabilities of [*Daksha*]{} and estimate the polarisation measurement sensitivity (in terms of the Minimum Detectable Polarisation: MDP) using extensive simulations. We find that [*Daksha*]{} will have an MDP of\u00a0$30\%$ for a fluence threshold of $10^{-4}$\u00a0[$\mathrm{erg~cm}^{-2}$]{}\u00a0(in 10 \u2013 1000 keV). We estimate that with this sensitivity, if GRBs are highly polarised, [*Daksha*]{} can measure the polarisation of about five GRBs per year.'\nauthor:\n- Suman Bala\u00a0\n- Sujay Mate\u00a0\n- Advait Mehla\u00a0\n-" -"---\nabstract: 'With the proliferation of short video applications, the significance of short video recommendations has vastly increased. Unlike other recommendation scenarios, short video recommendation systems heavily rely on feedback from watch time. Existing approaches simply treat watch time as a direct label, failing to effectively harness its extensive semantics and introducing bias, thereby limiting the potential for modeling user interests based on watch time. To overcome this challenge, we propose a framework named [[Debiased]{}]{} Multiple-semantics-extracting Labeling (DML). DML constructs labels that encompass various semantics by utilizing quantiles derived from the distribution of watch time, prioritizing relative order rather than absolute label values. This approach facilitates easier model learning while aligning with the ranking objective of recommendations. Furthermore, we introduce a method inspired by [causal adjustment]{} to refine label definitions, [[thereby directly mitigating bias at the label level.]{}]{} We substantiate the effectiveness of our DML framework through both online and offline experiments. Extensive results demonstrate that [our DML could effectively leverage watch time to discover users\u2019 real interests, enhancing their engagement in our application. 
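(The quantile-based label construction in the DML abstract above can be made concrete with a short Python sketch. The bin count is an arbitrary choice here, and the paper's causal-adjustment debiasing of the label definitions is deliberately omitted.)

```python
import numpy as np

def quantile_labels(watch_times, n_bins=10):
    """Map raw watch times to ordinal labels 0..n_bins-1 via quantiles of
    their empirical distribution: relative order is kept, absolute values
    are discarded."""
    edges = np.quantile(watch_times, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, watch_times, side="right")

watch = np.random.default_rng(0).lognormal(mean=3.0, sigma=1.0, size=1000)
labels = quantile_labels(watch)            # ordinal targets for ranking
```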
]{}'\nauthor:\n- 'Yang Zhang\\*'\n- 'Yimeng Bai\\*'\n- 'Jianxin Chang\\*'\n- Xiaoxue Zang\n- Song Lu\n- Jing Lu\n- 'Fuli Feng$^{\dag}$'\n- Yanan Niu" -"---\nauthor:\n- Sagar Ghorai\n- Rafael Martinho Vieira\n- Vitalii Shtender\n- 'Erna K. Delczeg-Czirjak'\n- 'Heike C. Herper'\n- Torbj\u00f6rn Bj\u00f6rkman\n- 'Sergei I. Simak'\n- Olle Eriksson\n- Martin Sahlberg\n- Peter Svedlindh\nbibliography:\n- 'ref.bib'\ntitle: ' Giant magnetocaloric effect in the (Mn,Fe)NiSi-system '\n---\n\n[**The search for energy-efficient and environmentally friendly cooling technologies is a key driver for the development of magnetic refrigeration based on the magnetocaloric effect (MCE). This phenomenon arises from the interplay between magnetic and lattice degrees of freedom that is strong in certain materials, leading to a change in temperature upon application or removal of a magnetic field. Here we report on a new material, Mn$_{1-x}$Fe$_x$NiSi$_{0.95}$Al$_{0.05}$, with an exceptionally large isothermal entropy at room temperature. By combining experimental and theoretical methods we outline the microscopic mechanism behind the large MCE in this material. It is demonstrated that the competition between the Ni$_2$In-type hexagonal phase and the MnNiSi-type orthorhombic phase, which coexist in this system, combined with the distinctly different magnetic properties of these phases, is a key parameter for the functionality of this material for magnetic cooling.** ]{} Materials exhibiting a large magnetic field-induced isothermal entropy change ($\Delta S_M$) are classified" -"---\nabstract: 'We calculate the momentum diffusion coefficients and energy loss of a heavy quark (HQ) traversing the quark-gluon plasma in the presence of a weak magnetic field, up to leading order in the strong coupling $\alpha_s$. $t$ channel Coulomb scatterings of the HQ with the thermal quarks and gluons are considered, whereas Compton scatterings and gluon radiation are neglected. The scale hierarchy considered in this work is $M_Q\gg T\gg eB/T$. The calculations are carried out in a perturbative framework where the interaction rate $\Gamma$ is calculated from the imaginary part of the HQ self energy. We find that the longitudinal and transverse momentum diffusion coefficients of the HQ decrease with temperature, whereas the energy loss increases with temperature. Variation with both the temperature and magnetic field is amplified for the charm quark in comparison to the bottom quark, due to the lighter mass of the former. We also find that the extent of anisotropy in the momentum diffusion coefficient depends strongly on the current mass of the HQ, with a lighter mass leading to a greater anisotropy.'\nauthor:\n- |\n  Debarshi Dey[^1]\u00a0\u00a0and\u00a0\u00a0Binoy Krishna Patra[^2]\\\n  Department of Physics,\\\n  Indian Institute of Technology Roorkee, Roorkee 247667, India\ntitle: '**[Dynamics of open
We show that $C$ has only finitely many primitive degree $d$ points, and in particular it has only finitely many degree $d$ points with Galois group $S_d$ or $A_d$. However, for any even $d \\ge 4$, a hyperelliptic curve $C/{\\mathbb{Q}}$ has infinitely many imprimitive degree $d$ points whose Galois group is a subgroup of $S_2 \\wr S_{d/2}$.'\naddress:\n- |\n School of Mathematics and Statistics\\\n Hicks Building\\\n University of Sheffield\\\n Sheffield S3 7RH\\\n United Kingdom \n- |\n Mathematics Institute\\\n University of Warwick\\\n CV4 7AL\\\n United Kingdom\nauthor:\n- Maleeha Khawaja\n- Samir Siksek\nbibliography:\n- 'Primitive.bib'\ntitle: Primitive Algebraic Points on Curves \n---\n\n[^1]\n\nIntroduction\n============\n\nBy a **curve** $C$" -"---\nabstract: |\n The AlphaGarden is an automated testbed for indoor polyculture farming which combines a first-order plant simulator, a gantry robot, a seed planting algorithm, plant phenotyping and tracking algorithms, irrigation sensors and algorithms, and custom pruning tools and algorithms. In this paper, we systematically compare the performance of the AlphaGarden to professional horticulturalists on the staff of the UC Berkeley Oxford Tract Greenhouse. The humans and the machine tend side-by-side polyculture gardens with the same seed arrangement. We compare performance in terms of canopy coverage, plant diversity, and water consumption. Results from two 60-day cycles suggest that the automated AlphaGarden performs comparably to professional horticulturalists in terms of coverage and diversity, and reduces water consumption by as much as 44%.\n\n Code, videos, and datasets are available at .\nauthor:\n- |\n Simeon Adebola$^{*1}$, Rishi Parikh$^{*1}$, Mark Presten$^{1}$, Satvik Sharma$^{1}$, Shrey Aeron$^{1}$, Ananth\\\n Rao$^{1}$, Sandeep Mukherjee$^{1}$, Tomson Qu$^{1}$, Christina Wistrom$^{2}$, Eugen Solowjow$^{3}$ , Ken Goldberg$^{1}$ [^1] [^2][^3][^4]\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\nnocite: '[@goldberg2002beyond; @harper1977population; @toshioK]'\ntitle: |\n **Can Machines Garden? Systematically Comparing\\\n the AlphaGarden vs. Professional Horticulturalists**\n---\n\nIntroduction\n============\n\nIn 1950, Alan Turing considered the question \u201cCan Machines Think?\" and proposed a test based on comparing human" -"---\nauthor:\n- 'Jianhang\u00a0Chen[^1]'\n- 'R.J.\u00a0Ivison'\n- 'Martin\u00a0A.\u00a0Zwaan'\n- Anne\u00a0Klitsch\n- C\u00e9line P\u00e9roux\n- 'Christopher C.\u00a0Lovell'\n- 'Claudia del P.\u00a0Lagos'\n- 'Andrew D.\u00a0Biggs'\n- Victoria Bollo\nbibliography:\n- 'cautionary\\_tale\\_protocluster2.bib'\ndate: 'Received 6 June 2023 / Accepted 29 June 2023'\nsubtitle: A cautionary tale\ntitle: 'ALMACAL XI: Over-densities as signposts for proto-clusters?'\n---\n\n[It may be unsurprising that the most common approach to finding proto-clusters is to search for over-densities of galaxies. Upgrades to submillimetre (submm) interferometers and the advent of the [*James Webb Space Telescope*]{} will soon offer the opportunity to find more distant candidate proto-clusters in deep sky surveys without any spectroscopic confirmation. In this letter, we report the serendipitous discovery of an extremely dense region centred on the blazar, J0217$-$0820, at $z=0.6$ in the ALMACAL sky survey. Its density is eight times higher than that predicted by blind submm surveys. 
Among the seven submm-bright galaxies, three are as bright as conventional single-dish submm galaxies, with $S_{\\rm 870\\mu m}\\!>\\!3$mJy. The over-density is thus comparable to the densest known and confirmed proto-cluster cores. However, their spectra betray a wide range of redshifts. We investigate the likelihood of line-of-sight projection effects using light" -"---\nabstract: |\n We study the zero-temperature stochastic Ising model on some connected planar quasi-transitive graphs, which are invariant under rotations and translations. The initial spin configuration is distributed according to a Bernoulli product measure with parameter $ p\\in(0,1) $. In particular, we prove that if $ p=1/2 $ and the graph underlying the model satisfies the *planar shrink property* then all vertices flip infinitely often almost surely.\n\n *Keywords:* Coarsening; zero-temperature dynamics; quasi-transitive planar graphs.\n\n *AMS MSC 2010:* 82C20, 82C35.\naddress:\n- 'University of Rome La Sapienza, Department of Mathematics Piazzale Aldo Moro, 5, 00185, Rome, Italy'\n- 'University of Rome La Sapienza, Department of Mathematics Piazzale Aldo Moro, 5, 00185, Rome, Italy'\nauthor:\n- Emilio De Santis\n- Leonardo Lelli\ntitle: 'Zero-temperature stochastic Ising model on planar quasi-transitive graphs'\n---\n\nIntroduction\n============\n\nIn this paper, we deal with the zero-temperature stochastic Ising model $ (\\sigma_t)_{t\\geq 0} $ on some connected planar quasi-transitive graphs with homogeneous ferromagnetic interactions (see e.g. [@GNS2000; @NNS2000]), i.e. all the interactions are equal to a positive constant. The initial spin configuration is distributed according to a Bernoulli product measure with parameter $ p\\in(0,1) $, see e.g. [@FSS2002; @M2011; @NNS2000]. The dynamic evolves in the following" -"---\nabstract: 'While multilinear algebra appears natural for studying the multiway interactions modeled by hypergraphs, tensor methods for general hypergraphs have been stymied by theoretical and practical barriers. A recently proposed adjacency tensor is applicable to nonuniform hypergraphs, but is prohibitively costly to form and analyze in practice. We develop tensor times same vector (TTSV) algorithms for this tensor which improve complexity from $O(n^r)$ to a low-degree polynomial in $r$, where $n$ is the number of vertices and $r$ is the maximum hyperedge size. Our algorithms are implicit, avoiding formation of the order $r$ adjacency tensor. We demonstrate the flexibility and utility of our approach in practice by developing tensor-based hypergraph centrality and clustering algorithms. We also show these tensor measures offer complementary information to analogous graph-reduction approaches on data, and are also able to detect higher-order structure that many existing matrix-based approaches provably cannot.'\nauthor:\n- 'Sinan G. Aksoy[^1]'\n- 'Ilya Amburg[^2]'\n- 'Stephen J. Young'\nbibliography:\n- 'main.bib'\ntitle: Scalable tensor methods for nonuniform hypergraphs\n---\n\nhypergraph, adjacency tensor, tensor times same vector, tensor-free methods, centrality, clustering\n\n05C65, 15A69, 05C50, 05C85\n\nIntroduction\n============\n\nThe study of hypergraphs is fraught with choices of representation. 
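(To illustrate the tensor-times-same-vector idea from the hypergraph abstract above: the action of an adjacency tensor on a vector decomposes over hyperedges, so the $O(n^r)$ tensor itself is never formed. The sketch below covers only a simplified uniform-style contribution; the paper's nonuniform adjacency tensor carries additional blow-up weights that are omitted here.)

```python
import numpy as np

def ttsv(hyperedges, b):
    """Implicit tensor-times-same-vector: for each hyperedge e and each
    vertex v in e, add the product of b over the remaining vertices of e.
    The division trick is valid while b stays strictly positive."""
    out = np.zeros_like(b)
    for e in hyperedges:
        prod = np.prod([b[u] for u in e])
        for v in e:
            out[v] += prod / b[v]          # product over e minus {v}
    return out

def h_centrality(hyperedges, n, iters=50):
    b = np.ones(n)                         # positive start keeps b positive
    for _ in range(iters):                 # power-method-style iteration
        b = ttsv(hyperedges, b)
        b /= np.linalg.norm(b)
    return b

print(h_centrality([(0, 1, 2), (1, 2), (2, 3)], n=4))
```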
From Laplacians [@bolla1993spectra; @cardoso2022signless; @rodri2002laplacian;" -"---\nauthor:\n- 'T.\u00a0Takeshita'\n- 'R.\u00a0Terada'\ntitle: ' Simulation results of a new type of sandwich calorimeter, Double readout Sandwich Calorimeter (DSC) performance '\n---\n\nIntroduction\n============\n\nFuture high-energy experiments in particle physics necessitate substantial advancements in the energy resolution of hadron calorimeters. While homogeneous calorimeters constructed from a single material are known for their excellent energy resolution, they face challenges related to light transmission, radiation tolerance, segmentation, and cost [@bib:1]. In this study, we propose a novel approach to address these challenges by introducing a fully active, three-dimensional segmented calorimeter employing two similar materials: scintillator glass and lead glass, the latter of which has a cost advantage. By combining these materials in a sandwich structure, we aim to achieve a finely segmented calorimeter that maintains high energy resolution while mitigating the limitations associated with large homogeneous calorimeters. While these materials differ only slightly, the scintillator glass measures energy through scintillation light, whereas the lead glass excels in detecting Cerenkov light, which is directly proportional to the length of charged particle trajectories.\n\nWe have previously established a correlation between energy measurements and track length of high-energy hadron and electron showers [@bib:2]. This relationship is depicted in Figure \[fig:1\] as a scatter plot, where" -"---\nabstract: 'Adversarial attack patches have gained increasing attention due to their practical applicability in physical-world scenarios. However, the bright colors used in attack patches represent a significant drawback, as they can be easily identified by human observers. Moreover, even though these attacks have been highly successful in deceiving target networks, it is still unknown which specific features of the attack patch contribute to its success. Our paper introduces a brightness-restricted patch (BrPatch) that uses optical characteristics to effectively reduce conspicuousness while preserving image independence. We also conducted an analysis of the impact of various image features (such as color, texture, noise, and size) on the effectiveness of an attack patch in physical-world deployment. Our experiments show that attack patches exhibit strong redundancy to brightness and are resistant to color transfer and noise. Based on our findings, we propose some additional methods to further reduce the conspicuousness of BrPatch. Our findings also explain the robustness of attack patches observed in physical-world scenarios.'\nauthor:\n- |\n  Mingzhen Shao\\\n  Kahlert School of Computing\\\n  University of Utah\\\n  Salt Lake City, UT 84108\\\n  `shao@cs.utah.edu`\\\nbibliography:\n- 'ijcai23.bib'\ntitle: 'Brightness-Restricted Adversarial Attack Patch'\n---\n\nIntroduction\n============\n\nDeep neural networks (DNNs) have experienced significant success across various" -"---\nabstract: 'Depth is a very important modality in computer vision, typically used as complementary information to RGB, provided by RGB-D cameras. In this work, we show that it is possible to obtain the same level of accuracy as RGB-D cameras on a semantic segmentation task using infrared (IR) and depth images from a single Time-of-Flight (ToF) camera. 
In order to fuse the IR and depth modalities of the ToF camera, we introduce a method utilizing depth-specific convolutions in a multi-task learning framework. In our evaluation on an in-car segmentation dataset, we demonstrate the competitiveness of our method against the more costly RGB-D approaches.'\ntitle: 'Achieving RGB-D Level Segmentation Performance From a Single ToF Camera'\n---\n\nmulti-modal image segmentation, depth image, infrared image\n\nIntroduction {#sec:intro}\n============\n\nThe research field of semantic segmentation is dominated by RGB images. Only recently has it shifted in the direction of RGB-D semantic segmentation [@Hazirbas2016FuseNetID; @depthawareCNN; @cao2021shapeconv; @Cheng2017deconvnet]. However, RGB images may not always be available due to practical, logistical and financial reasons. RGB-D cameras incur higher cost and require additional effort to calibrate the two cameras. Their larger package size often limits their place in real-world applications. Indeed, Time-of-Flight (ToF) depth cameras are often deployed without" -"---\nabstract: 'Galactic and extragalactic objects in the universe are sources of high-energy neutrinos that can be detected by the IceCube neutrino detector, with the former being easier to resolve due to comparatively smaller distances. Recently, a study using cascade-like events seen by IceCube reported neutrino emission from the Galactic plane with $>$4$\sigma$ significance. In this work, we put a limit on the number of Galactic sources required to explain this emission. To achieve this, we make use of a simulation package that models point sources in the Galaxy along with the neutrino and gamma-ray fluxes originating from them. Along with making use of past IceCube sensitivity curves, we also account for Eddington bias effects due to Poisson fluctuations in the number of detected neutrino events. Using a toy Monte Carlo simulation, we find that more than 10 sources, each with luminosity $10^{35}$\u00a0erg/s, are required to explain the Galactic neutrino emission. Our results constrain the number of individual point-like emission regions, which applies both to discrete astrophysical sources and to individual points of diffuse emission.'\nauthor:\n- Abhishek Desai\n- Justin Vandenbroucke\n- Samalka Anandagoda\n- Jessie Thwaites\n- 'M.J. Romfoe'\nbibliography:" -"---\naddress: 'Department of Physics, National and Kapodistrian University of Athens, University Campus, Zografos GR-157 84 Athens, Greece; vlahakis@phys.uoa.gr '\n---\n\nIntroduction\n============\n\nPlasma flows are widespread in nature. Astrophysical magnetized, relativistic jets are an important subclass, related to high energy phenomena, e.g., in active galactic nuclei and gamma-ray bursts. It is desirable to analyze them by constructing steady-state solutions of the magnetohydrodynamic (MHD) equations, but also to explore their time evolution through waves and instabilities.\n\nMore generally, the stability of magnetized flows in astrophysics, but also in the laboratory, despite its obvious importance, has not been fully understood. 
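(A caricature of the toy Monte Carlo described in the IceCube source-count abstract above: a fixed total expected event count is split equally among $N$ sources, and Poisson fluctuations, the origin of the Eddington bias, occasionally push a source above the detection threshold. All numbers below are made up for illustration.)

```python
import numpy as np

def frac_with_detection(n_sources, total_mu, threshold=10, trials=10000, seed=0):
    """Fraction of trials in which at least one source fluctuates above
    the detection threshold."""
    rng = np.random.default_rng(seed)
    counts = rng.poisson(total_mu / n_sources, size=(trials, n_sources))
    return (counts >= threshold).any(axis=1).mean()

for n in (5, 10, 50):   # fewer sources -> individually brighter -> easier to detect
    print(n, frac_with_detection(n, total_mu=50.0))
```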
There are various known modes, some of them internal and related to the current distribution inside the flows, some others related to discontinuities at interfaces, such as the Kelvin\u2013Helmholtz or Rayleigh\u2013Taylor instabilities, but in general the result is a mixture of all of these that depends on the characteristics of the unperturbed state.\n\nThere is a plethora of analytical studies in the hydrodynamic limit, but far fewer that include the magnetic field (e.g., Refs.\u00a0[@1961hhs..book.....C; @Goedbook2]), due to the increasing complexity of the mathematics involved. There are even fewer studies of the relativistic regime for cylindrical geometry, even if we use simplified ideal MHD" -"---\nabstract: 'Information projections have found important applications in probability theory, statistics, and related areas. In the field of hypothesis testing in particular, the reverse information projection (RIPr) has recently been shown to lead to so-called growth-rate optimal (GRO) [*e*]{}-statistics for testing simple alternatives against composite null hypotheses. However, the RIPr as well as the GRO criterion are undefined whenever the infimum information divergence between the null and alternative is infinite. We show that in such scenarios there often still exists an element in the alternative that is \u2018closest\u2019 to the null: the universal reverse information projection. The universal reverse information projection and its non-universal counterpart coincide whenever information divergence is finite. Furthermore, the universal RIPr is shown to lead to optimal [*e*]{}-statistics in a sense that is a novel, but natural, extension of the GRO criterion. We also give conditions under which the universal RIPr is a strict sub-probability distribution, as well as conditions under which an approximation of the universal RIPr leads to approximate [*e*]{}-statistics. For this case we provide tight relations between the corresponding approximation rates.'\nauthor:\n- 'Tyron Lardy, Peter Gr\u00fcnwald and Peter Harremo[\u00eb]{}s, [^1] [^2] [^3]'\nbibliography:\n- 'proper\_bib.bib'\n- 'master.bib'\n- 'peter.bib'\n- 'database1.bib'" -"---\nabstract: 'Dictionary learning is an effective tool for pattern recognition and classification of time series data. Among various dictionary learning techniques, dynamic time warping (DTW) is commonly used for dealing with temporal delays, scaling, transformation, and many other kinds of temporal misalignment issues. However, DTW suffers from overfitting or information loss due to its discrete nature in aligning time series data. To address this issue, we propose a generalized time warping invariant dictionary learning algorithm in this paper. Our approach features a generalized time warping operator, which consists of linear combinations of continuous basis functions for facilitating continuous temporal warping. The integration of the proposed operator and dictionary learning is formulated as an optimization problem, where the block coordinate descent method is employed to jointly optimize warping paths, dictionaries, and sparseness coefficients. The optimized results are then used as hyperspace distance measures to feed classification and clustering algorithms. 
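(A minimal sketch of the continuous warping operator described in the dictionary-learning abstract above, assuming a monotone polynomial basis; the paper's actual basis functions and the joint block-coordinate-descent optimization are not reproduced here.)

```python
import numpy as np

def warp_series(x, coeffs):
    """Continuous time warping tau(t) = sum_k c_k * t^(k+1) with c_k >= 0
    (so tau is monotone), normalised to tau(1) = 1. The warped series is x
    evaluated along tau by interpolation, not along a discrete DTW path."""
    t = np.linspace(0.0, 1.0, len(x))
    c = np.maximum(np.asarray(coeffs, dtype=float), 0.0)
    tau = sum(ck * t ** (k + 1) for k, ck in enumerate(c))
    return np.interp(tau / tau[-1], t, x)  # requires sum(c) > 0

warped = warp_series(np.sin(np.linspace(0.0, 6.0, 100)), [0.2, 1.0, 0.5])
```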
The superiority of the proposed method in terms of dictionary learning, classification, and clustering is validated on ten public datasets in comparison with various benchmark methods.'\nauthor:\n- 'Ruiyu Xu, Chao Wang, Yongxiang Li, Jianguo Wu$^*$ [^1][^2][^3] [^4] [^5]'\nbibliography:\n- 'reference.bib'\ntitle: Generalized Time Warping Invariant" -"---\nauthor:\n- 'Aurelio Amerio,[!!]{}'\n- 'Francesca Calore,'\n- 'Pasquale Dario Serpico,'\n- Bryan Zaldivar\nbibliography:\n- 'biblio.bib'\ntitle: 'Deepening gamma-ray point-source catalogues with sub-threshold information'\n---\n\nLAPTH-035/23\n\nIntroduction\n============\n\nOur view of the high-energy $\gamma$-ray sky has been revolutionised by the Large Area Telescope (LAT) onboard the [[*Fermi*]{}]{}\u00a0satellite, which has been on its surveying mission since 2008. Since the publication of the fourth source catalogue (4FGL) based on 8 years of data\u00a0[@Fermi-LAT:2019yla], incremental updates appear periodically. The latest incarnation, the data release 3 (DR3) based on 12 yrs of data\u00a0[@Fermi-LAT:2022byn], includes 6658 point-like sources in the energy range from 50 MeV to 1 TeV, with extragalactic blazars constituting the largest associated class. Apart from revealing entirely new classes of objects (such as Galactic millisecond pulsars\u00a0[@Caraveo:2013lra]), the explosion of the number of $\gamma$-ray sources has allowed for numerous applications in multi-wavelength and multi-messenger astrophysics and astroparticle physics. Unsurprisingly, the essential requirement for a source to enter a catalogue is that its signal strength is significantly above the background, dominated by $\gamma$ rays associated with energy-loss processes of cosmic rays in the interstellar gas and radiation field. Since the pioneering analysis of EGRET data\u00a0[@Mattox:1996zz], the signal strength" -"---\nabstract: 'Deep learning models undergo a significant increase in the number of parameters they possess, leading to the execution of a larger number of operations during inference. This expansion significantly contributes to higher energy consumption and prediction latency. In this work, we propose [*EAT*]{}, a gradient-based algorithm that aims to reduce energy consumption during model training. To this end, we leverage a differentiable approximation of the $\ell_0$ norm, and use it as a sparse penalty over the training loss. Through our experimental analysis conducted on three datasets and two deep neural networks, we demonstrate that our energy-aware training algorithm [*EAT*]{} is able to train networks with a better trade-off between classification performance and energy efficiency.'\nauthor:\n- Dario Lazzaro\n- 'Antonio Emanuele Cin\u00e0 [^1]'\n- Maura Pintor\n- Ambra Demontis\n- Battista Biggio\n- Fabio Roli\nbibliography:\n- 'updated\_bib.bib'\ntitle: 'Minimizing Energy Consumption of Deep Learning Models by Energy-Aware Training'\n---\n\nIntroduction {#sec:intro}\n============\n\nDeep learning is widely adopted across various domains due to its remarkable performance in various tasks. The increase in model size, primarily driven by the number of parameters, often leads to improved performance. However, this growth in model size also leads to a" -"---\nabstract: 'Moduli stabilisation is key to obtaining phenomenologically viable string models. Non\u2013geometric compactifications, like T\u2013duality orbifolds (T\u2013folds), are capable of freezing many moduli. 
However, T\u2013folds admitting free fermionic descriptions can be associated with a large number of different T\u2013folds with varying numbers of moduli, since the fermion pairings for bosonisation are far from unique. Fermion symmetries induce mappings in the bosonised description that extend the T\u2013duality group.'\nauthor:\n- 'Alon E. Faraggi'\n- Stefan Groot Nibbelink\n- Benjamin Percival\nbibliography:\n- 'papers.bib'\ntitle: 'Free fermionic webs of heterotic T\u2013folds'\n---\n\n\[sc:Introction\] Introduction\n==============================\n\nString theory realises a unification of gravity, gauge interactions and their charged matter via the properties of Conformal Field Theories (CFTs) residing on its two dimensional (2D) worldsheet. Heterotic strings on toroidal orbifolds\u00a0[@Dixon:1985jw; @Dixon:1986jc] led to some of the most realistic string\u2013derived models to date\u00a0[@Faraggi:1989ka; @Lebedev:2006kn; @Blaszczyk:2009in]. However, orbifolds and other geometrical backgrounds result in free moduli (such as the metric, B\u2013field or Wilson lines) on which detailed physics, like gauge and Yukawa couplings, depend.\n\nStrings on tori and their orbifolds admit exact quantisation. This was instrumental in the discovery of T\u2013dualities\u00a0[@Duff:1989tf], like the famous $R {\rightarrow}1/R$ duality, which sets the effective minimum" -"=1.2\n\n[**Visualizing departures from marginal homogeneity for square contingency tables with ordered categories**]{}\n\nSatoru Shinoda${}^{1}$, Takuya Yoshimoto${}^{2}$ and Kouji Tahata${}^{3}$\\\n${}^{1}$[*Department of Biostatistics, Yokohama City University, School of Medicine, Japan*]{}\\\n${}^{2}$[*Biometrics Department, Chugai Pharmaceutical Co., Ltd., Japan*]{}\\\n${}^{3}$[*Department of Information Sciences, Faculty of Science and Technology, Tokyo University of Science, Japan*]{}\\\nE-mail: shinoda.sat.cg@yokohama-cu.ac.jp\\\n\nSquare contingency tables are a special type of contingency table commonly used in various fields to analyze categorical data. Although several analysis methods have been developed to examine marginal homogeneity (MH) in these tables, existing measures are single-summary ones. To date, no visualization approach has been proposed to intuitively depict the results of MH analysis. Current measures used to assess the degree of departure from MH are based on entropy, such as the Kullback-Leibler divergence, and do not satisfy distance postulates. Hence, the current measures are not conducive to visualization. Herein we present a measure utilizing the Matusita distance and introduce a visualization technique that employs sub-measures of categorical data. Through multiple examples, we demonstrate the meaningfulness of our visualization approach and validate its usefulness in providing insightful interpretations.\n\n*Key words*: Marginal homogeneity, Matusita distance, power-divergence, visualization.\\\n\n**1. Introduction**\n\nNumerous research areas employ categorical data analysis. Such data" -"---\nabstract: 'We evaluated the capability of a generative pre-trained transformer (GPT-4) to automatically generate high-quality learning objectives (LOs) in the context of a practically oriented university course on Artificial Intelligence. Discussions of opportunities (e.g., content generation, explanation) and risks (e.g., cheating) of this emerging technology in education have intensified, but to date there has not been a study of the models\u2019 capabilities in supporting the course design and authoring of LOs. 
LOs articulate the knowledge and skills learners are intended to acquire by engaging with a course. To be effective, LOs must focus on what students are intended to achieve, address specific cognitive processes, and be measurable. Thus, authoring high-quality LOs is a challenging and time-consuming (i.e., expensive) effort. We evaluated 127 LOs that were automatically generated based on a carefully crafted prompt (detailed guidelines on high-quality LO authoring) submitted to GPT-4 for conceptual modules and projects of an AI Practitioner course. We analyzed whether the generated LOs follow certain best practices, such as beginning with action verbs from Bloom\u2019s taxonomy appropriate to the intended level of sophistication. Our analysis showed that the generated LOs are sensible, properly expressed (e.g., starting with an action verb)," -"---\nabstract: 'The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions, a practice often associated with significant privacy concerns. This concern intensifies in medical image analysis, where privacy-preserving mechanisms are paramount due to the data being sensitive in nature. Federated learning, which enables cooperative model training without direct data exchange, presents a promising solution. Nevertheless, the inherent vulnerabilities of federated learning necessitate further privacy safeguards. This study addresses this need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification. *We introduce a novel differentially private federated learning model and meticulously examine its impacts on privacy preservation and model performance.* Our research confirms the existence of a trade-off between model accuracy and privacy settings. However, we demonstrate that strategic calibration of the privacy budget in differential privacy can uphold robust image classification performance while providing substantial privacy protection.'\nauthor:\n- |\n  Kishore Babu Nampalle, Pradeep Singh,\\\n  Uppala Vivek Narayan, Balasubramanian Raman\\\n  Department of Computer Science and Engineering\\\n  Indian Institute of Technology Roorkee\ndate:\n- \n- \ntitle: |\n  [**Vision Through the Veil: Differential Privacy in Federated Learning for Medical Image Classification**]{}\\\n---\n\nIntroduction\n============\n\nMedical imaging, an integral" -"---\nabstract: 'Let $R$ be a domain that is a complete local ${\mathbb{k}}$ algebra in dimension one. In an effort to address Berger's conjecture, a crucial invariant, the reduced type $s(R)$, was introduced in [@huneke2021torsion]. In this article, we study this invariant and its max/min values separately and relate it to the valuation semigroup of $R$. We justify the need to study $s(R)$ in the context of numerical semigroup rings and consequently investigate the occurrence of the extreme values of $s(R)$ for the Gorenstein, almost Gorenstein, and far-flung Gorenstein complete numerical semigroup rings. 
Finally, we study the finiteness of the category $\cm(R)$ of maximal Cohen-Macaulay modules and the category $\rf(R)$ of reflexive modules for rings which are of maximal/minimal reduced type and provide many classifications.'\naddress:\n- 'Department of Mathematics, University of Utah, Salt Lake City, UT, USA'\n- 'Department of Mathematics, Indian Institute of Technology Delhi, Hauz Khas, India.'\nauthor:\n- Sarasij Maitra\n- Vivek Mukundan\nbibliography:\n- 'references.bib'\ntitle: Extremal behavior of reduced type of one dimensional rings\n---\n\nIntroduction\n============\n\nLet $(R,\mathfrak{m},{\mathbb{k}})$ be a non-regular one dimensional complete local domain which is a ${\mathbb{k}}$-algebra. We assume that ${\mathbb{k}}$ is algebraically closed of characteristic $0$. Assume ${\mathbb{k}}{\llbracket" -"---\nabstract: 'In open-domain dialogue generation tasks, contexts and responses in most datasets are one-to-one mapped, violating an important many-to-many characteristic: a context leads to various responses, and a response answers multiple contexts. Without such patterns, models generalize poorly and prefer to respond safely. Many attempts have been made either in multi-turn settings from a one-to-many perspective, or from a many-to-many perspective but limited to single-turn settings. The major challenge in many-to-many augmentation of multi-turn dialogues is that discretely replacing each turn with a semantically similar one breaks fragile context coherence. In this paper, we propose the DialoGue Path Sampling (DialoGPS) method in continuous semantic space, the first many-to-many augmentation method for multi-turn dialogues. Specifically, we map a dialogue to our extended Brownian Bridge, a special Gaussian process. We sample latent variables to form coherent dialogue paths in the continuous space. A dialogue path corresponds to a new multi-turn dialogue and is used as augmented training data. We show the effect of DialoGPS with both automatic and human evaluation.'\nauthor:\n- |\n  **Zhenyu Zhang**$^{1}$ **Wenhao Chai**$^{2}$ [^1] **Zhongyu Jiang**$^2$" -"=1\n\nIntroduction and statement of results\n=====================================\n\nHurwitz numbers count branched coverings of the sphere by a Riemann surface with prescribed ramification profiles. Hurwitz himself\u00a0[@Hurwitz] showed that this geometric counting problem boils down, via monodromy representation, to a combinatorial one. The latter is the problem of counting factorizations of the identity in the symmetric group with factors in prescribed conjugacy classes. Today, Hurwitz numbers have been generalized in various directions and are the subject of renewed interest because of their connections to integrable systems\u00a0[@HarnadPaquet; @HarnadOrlov; @Okounkov] and enumerative geometry\u00a0[@Dijkgraaf; @ELSV; @OkounkovPandharipande].\n\nThere are many matrix models connected with (various versions of) Hurwitz numbers, e.g., the Harish-Chandra\u2013Itzykson\u2013Zuber integral\u00a0[@GGPN] and the Br\u00e9zin\u2013Gross\u2013Witten model\u00a0[@Novak], as well as externally coupled Br\u00e9zin\u2013Hikami type models with a Meijer-G weight\u00a0[@BertolaHarnad]. A matrix model for simple Hurwitz numbers was given in\u00a0[@BEMS]. 
Moreover, it has been shown\u00a0[@GisonniGravaRuzza2021] that correlators (cf.\u00a0below) of a random Hermitian matrix distributed according to the Jacobi unitary ensemble are generating functions for a type of Hurwitz numbers ([*triple monotone*]{} Hurwitz numbers); this result extends the combinatorial interpretation of correlators for the Gaussian\u00a0[@IZ] and Laguerre\u00a0[@CundenDahlqvistOConnell; @GisonniGravaRuzza2020; @GLM; @HanlonSS] unitary ensembles.\n\nRecently, a deformation of Hurwitz" -"---\nabstract: |\n  This paper studies robust time-inconsistent (TIC) linear-quadratic stochastic control problems, formulated by stochastic differential games. By a spike variation approach, we derive sufficient conditions for achieving the Nash equilibrium, which corresponds to a time-consistent (TC) robust policy, under mild technical assumptions. To illustrate our framework, we consider two scenarios of robust mean-variance analysis, namely with state- and control-dependent ambiguity aversion. We find numerically that with time inconsistency haunting the dynamic optimal controls, the ambiguity aversion enhances the effective risk aversion faster than linearly, implying that the ambiguity in the TIC cases is more impactful than that under the TC counterparts, e.g., expected utility maximization problems.\\\n  [**Keywords:** Robust stochastic controls, equilibrium controls, model uncertainty, time inconsistency, stochastic differential game, robust mean-variance analysis]{}\\\n  [**Mathematics Subject Classification:** 91A15, 49N90, 91A80, 91G10]{}\\\nauthor:\n- 'Bingyan Han[^1]'\n- 'Chi Seng Pun[^2]'\n- 'Hoi Ying Wong[^3]'\ntitle: 'Robust Time-inconsistent Linear-Quadratic Stochastic Controls: A Stochastic Differential Game Approach[^4]'\n---\n\nIntroduction\n============\n\nStochastic control theory embraces dynamic state evolution and decision-making problems arising in many disciplines. To capture the probabilistic uncertainty of states, the agent adopts stochastic processes under some specified probability measure to model the states. Linear stochastic processes are perhaps the" -"---\nabstract: 'We study the prophet inequality when the gambler has access only to a single sample from each distribution. Rubinstein, Wang and Weinberg showed that an optimal guarantee of $1/2$ can be achieved when the underlying matroid has rank $1$, i.e. in the case of a single choice. We show that this guarantee can also be achieved in the case of a uniform matroid of rank $2$ by a deterministic mechanism, and we show that this is best possible among deterministic mechanisms. We also conjecture that a straightforward generalization of our policy achieves the guarantee of $1/2$ for all uniform matroids.'\nauthor:\n- 'Kanstantsin Pashkovich, Alice Sayutina'\nbibliography:\n- 'literature.bib'\ntitle: Single Sample Prophet Inequality for Uniform Matroids of Rank $2$\n---\n\nIntroduction\n============\n\nWe study the single-sample prophet inequalities (SSPI) for uniform matroids. This is a variation of the prophet inequalities problem where the gambler does not know the distributions $X_1, X_2, \ldots, X_n$ for the arriving items, but has access only to a single sample $s_1 \sim X_1$, $s_2 \sim X_2$, \u2026, $s_n \sim X_n$ from each of the distributions. After getting access to the samples, the gambler is presented realizations $r_1 \sim X_1$, $r_2 \sim" -"---\nabstract: 'Deep neural networks (DNNs) offer the highest performance in a wide range of applications in computer vision. 
These results rely on over-parameterized backbones, which are expensive to run. This computational burden can be dramatically reduced by quantizing (in either data-free (DFQ), post-training (PTQ) or quantization-aware training (QAT) scenarios) floating point values to ternary values (2 bits, with each weight taking value in $\{-1,0,1\}$). In this context, we observe that rounding to nearest minimizes the expected error given a uniform distribution and thus does not account for the skewness and kurtosis of the weight distribution, which strongly affect ternary quantization performance. This raises the following question: shall one minimize the highest or the average quantization error? To answer this, we design two operators, TQuant and MQuant, that correspond to these respective minimization tasks. We show experimentally that our approach allows us to significantly improve the performance of ternary quantization through a variety of scenarios in DFQ, PTQ and QAT and give strong insights to pave the way for future research in deep neural network quantization.'\naddress: 'Sorbonne Universit\u00e9$^1$, CNRS, ISIR, F-75005, 4 Place Jussieu 75005 Paris, France Datakalab$^2$, 114 boulevard Malesherbes, 75017 Paris, France'\nbibliography:\n- 'output.bib'\ntitle: Designing strong baselines" -"---\nabstract: 'Spatially-structured laser beams, possibly carrying orbital angular momentum, affect electronic transitions of atoms and their motional states in a complex way. We present a general framework, based on the spherical tensor decomposition of the interaction Hamiltonian, for computing atomic transition matrix elements for light fields of arbitrary spatial mode and polarization structures. We study both the bare electronic matrix elements, corresponding to transitions with no coupling to the atomic center-of-mass motion, as well as the matrix elements describing the coupling to the quantized atomic motion in the resolved side-band regime. We calculate the spatial dependence of electronic and motional matrix elements for tightly focused Hermite-Gaussian, Laguerre-Gaussian and for radially and azimuthally polarized beams. We show that near the diffraction limit, all these beams exhibit longitudinal fields and field gradients, which strongly affect the selection rules and could be used to tailor the light-matter interaction. The presented framework is useful for describing trapped atoms or ions in spatially-structured light fields and therefore for designing new protocols and setups in quantum optics, sensing, and information processing.'\nauthor:\n- |\n  [![image](orcid.pdf)Maurizio Verde](https://orcid.org/0000-0002-5363-1194)\\\n  Institut f\u00fcr Physik\\\n  Johannes Gutenberg-Universit\u00e4t\\\n  Mainz, 55128, Germany\\\n  `mauverde@uni-mainz.de`\\\n  [![image](orcid.pdf)Ulrich Poschinger](https://orcid.org/0000-0001-5341-7860)\\\n  Institut f\u00fcr Physik\\\n  Johannes Gutenberg-Universit\u00e4t\\\n  Mainz, 55128, Germany\\" -"---\nabstract: 'Game theory on graphs is a basic tool in computer science. In this paper, we propose a new game-theoretic framework for studying the privacy protection of a user who interactively uses a software service. Our framework is based on the idea that an objective of a user using software services should not be known to an adversary because the objective is often closely related to personal information of the user. We propose two new notions, ${\mathcal{O}}$-indistinguishable strategy (${\mathcal{O}}$-IS) and objective-indistinguishability equilibrium (OIE). 
For a given game and a subset ${\mathcal{O}}$ of winning objectives (or objectives for short), a strategy of a player is ${\mathcal{O}}$-indistinguishable if an adversary cannot shrink ${\mathcal{O}}$ by excluding any objective $O$ from ${\mathcal{O}}$ as an impossible objective. A strategy profile, which is a tuple of strategies of all players, is an OIE if the profile is locally maximal in the sense that no player can expand her set of objectives indistinguishable from her real objective from the viewpoint of an adversary. We show that for a given multiplayer game with Muller objectives, both the existence of an ${\mathcal{O}}$-IS and that of an OIE are decidable.'\nauthor:\n- Rindo Nakanishi\n- Yoshiaki Takata\n- Hiroyuki" -"---\nabstract: 'We consider a general class of nonsmooth optimal control problems with partial differential equation (PDE) constraints, which are very challenging due to their nonsmooth objective functionals and the resulting high-dimensional and ill-conditioned systems after discretization. We focus on the application of a primal-dual method, with which different types of variables can be treated individually in iterations and thus its main computation at each iteration only requires solving two PDEs. Our target is to accelerate the primal-dual method with either enlarged step sizes or operator learning techniques. The accelerated primal-dual method with enlarged step sizes improves the numerical performance of the original primal-dual method in a simple and universal way, while its convergence can still be proved rigorously. For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs. Once a neural operator is learned, solving a PDE requires only a forward pass of the neural network, and the computational cost is thus substantially reduced. The accelerated primal-dual method with operator learning is mesh-free, numerically efficient, and scalable to different types of PDEs. The acceleration effectiveness of these two techniques is promisingly validated by some preliminary numerical results.'\nauthor:\n- 'Yongcun Song[^1]'\n- 'Xiaoming" -"---\nabstract: 'The relativistic extension of the classic stellar structure equations is investigated. It is pointed out that the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the gradient equation for the local gravitational mass, can be made complete as a closed set of differential equations by adding the corresponding equation for the Tolman temperature and one equation of state, and this set is proposed as the relativistic hydrostatic structure equations. The exact forms of the relativistic Poisson equation and the steady-state heat conduction equation in curved spacetime are derived. The application to an ideal gas of particles with a conserved particle number current leads to a strong prediction that the heat capacity ratio almost becomes one in any Newtonian convection zone such as the solar surface. The steady-state heat conduction equation is solved exactly in this system, and thermodynamic observables exhibit power-law behavior, which suggests that the system may serve as a new model of the stellar corona and reveals a flaw in the earlier model obtained by using the non-relativistic stellar structure equations. 
The mixture with another ideal gas yields a multilayer structure in a stellar model, in which the classic stellar structure equations are reproduced and an analytic multilayer structure of luminous stars including the" -"---\nauthor:\n- |\n  [Subhadra Dasgupta, Holger Dette]{}\\\n  [Ruhr-Universit\u00e4t Bochum]{}\\\n  [Fakult\u00e4t f\u00fcr Mathematik]{}\\\n  [44780 Bochum, Germany]{}\nbibliography:\n- 'References.bib'\ntitle: Efficient subsampling for exponential family models\n---\n\n**Abstract**\n\n[We propose a novel two-stage subsampling algorithm based on optimal design principles. In the first stage, we use a density-based clustering algorithm to identify an approximating design space for the predictors from an initial subsample. Next, we determine an optimal approximate design on this design space. Finally, we use matrix distances such as the Procrustes, Frobenius, and square-root distance to define the remaining subsample, such that its points are \u201cclosest\u201d to the support points of the optimal design. Our approach reflects the specific nature of the information matrix as a weighted sum of non-negative definite Fisher information matrices evaluated at the design points and applies to a large class of regression models including models where the Fisher information is of rank larger than $1$. ]{}\n\n[*Keywords:*]{} Subsampling, optimal design, exponential family, matrix distances\n\nIntroduction {#sec1}\n============\n\nNowadays, with easy access to data-collecting frameworks and computing devices, large amounts of data are encountered in various fields, ranging from terrestrial data to the manufacturing sector and e-commerce, to name a few. Training statistical" -"---\nabstract: 'In this work, we study the elastic scattering of light particles, such as $^2$H, $^3$H, $^3$He and $^4$He, by heavy target nuclei with an extended Watanabe model, which uses as input the neutron-nucleus and proton-nucleus optical potentials and the ground-state wave functions of the projectile. The nucleon-nucleus optical potential used in this work was obtained within a semi-microscopic nuclear matter approach, whose real and imaginary parts are provided by the first and second-order terms, respectively, of the Taylor expansion of the Brueckner-Hartree-Fock mass operator obtained with the reaction G-matrix built up with the Gogny force [@lopez21]. The angular distributions of the scattering of $^2$H, $^3$H, $^3$He, and $^4$He from different target nuclei and at different incident energies of the projectile computed with this model are analyzed. The reaction cross sections corresponding to some of these scattering processes are also calculated. Our results are compared with the experimental values as well as with another Watanabe calculation where the nucleon-nucleus optical potential is provided by the phenomenological Koning-Delaroche model. The limitations of the extended Watanabe model used in this work are also discussed.'\nauthor:\n- 'J. L\u00f3pez Mora\u00f1a'\n- 'X. 
Vi\u00f1as'\ntitle: '**Light projectile elastic scattering by nuclei" -"---\nauthor:\n- Peter Benner\n- Kathryn Lund\n- Jens Saak\nbibliography:\n- 'mor.bib'\n- 'benchmarking.bib'\ntitle: 'Towards a Benchmark Framework for Model Order Reduction in the Mathematical Research Data Initiative (MaRDI)'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe Mathematical Research Data Initiative (MaRDI)[^1] is a consortium of the National Research Data Initiative (NFDI)[^2], whose overarching goal is to improve and promote responsible research data management practices in the German scientific landscape and beyond. MaRDI itself concentrates on several fields of mathematics featured as content-specific task areas (TA):\n\n1. Computer Algebra,\[ta1\]\n\n2. Scientific Computing,\[ta2\]\n\n3. Statistics and Machine Learning, and\[ta3\]\n\n4. Cooperation with Other Disciplines.\[ta4\]\n\nWithin TA2, we focus primarily on numerical algorithms and their software implementations, which are used to compute approximate solutions and simulations of scientific models. Via collaboration between the Max Planck Institute for Dynamics of Complex Technical Systems and the University of M\u00fcnster, TA2 addresses the following measures (M):\n\n1. Knowledge graph of numerical algorithms;\[m1\]\n\n2. Open interfaces for scientific computing;\[m2\]\n\n3. Benchmark framework; and\[m3\]\n\n4. Description and design of Findable, Accessible, Interoperable, and Reproducible (FAIR)[^3] workflows for computational science and engineering (CSE).\[m4\]\n\nFor this report, we will concentrate on progress in\u00a0\[m3\].\n\nMaRDIMark {#sec:mardimark}\n=========\n\nWe begin" -"---\nabstract: 'This work presents [pantr]{}, an efficient solver for nonconvex constrained optimization problems, which is well-suited as an inner solver for an augmented Lagrangian method. The proposed scheme combines forward-backward iterations with solutions to trust-region subproblems: the former ensure global convergence, whereas the latter enable fast update directions. We discuss how the algorithm is able to exploit exact Hessian information of the smooth objective term through a linear Newton approximation, while benefiting from the structure of box constraints or $\ell_1$-regularization. An open-source C++ implementation of [pantr]{} is made available as part of the NLP solver library [alpaqa]{}. Finally, the effectiveness of the proposed method is demonstrated in nonlinear model predictive control applications.'\nauthor:\n- 'Alexander Bodard, Pieter Pas and Panagiotis Patrinos [^1]'\nbibliography:\n- 'references.bib'\ntitle: 'PANTR: A proximal algorithm with trust-region updates for nonconvex constrained optimization'\n---\n\nIntroduction {#sec:introduction}\n============\n\nBackground and motivation\n-------------------------\n\nVarious areas of science and engineering naturally give rise to constrained, potentially nonconvex optimization problems, motivating the need for efficient solvers. A prominent example is found in the field of model predictive control (MPC) [@rawlings_model_2017]. 
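(The forward-backward iterations that PANTR combines with trust-region steps reduce, for a box constraint, to projected gradient steps, since the proximal operator of a box indicator is elementwise clipping. A minimal numpy sketch on a toy quadratic follows; all values are hypothetical and this is not the solver's actual implementation.)

```python
import numpy as np

def forward_backward_box(x, grad_f, step, lo, hi, iters=200):
    """Forward (gradient) step on the smooth term, then backward (proximal)
    step of the box indicator, which is an elementwise projection."""
    for _ in range(iters):
        x = np.clip(x - step * grad_f(x), lo, hi)
    return x

# Minimize 0.5*x'Ax - b'x over the box [-1, 1]^2:
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([4.0, -3.0])
x = forward_backward_box(np.zeros(2), lambda x: A @ x - b, step=0.2, lo=-1.0, hi=1.0)
```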
Historically, there has been a strong focus on linear MPC, where linear system dynamics lead to convex" -"---\nabstract: |\n We study the problem of extracting randomness from somewhere-random sources, and related combinatorial phenomena: partition analogues of Shearer\u2019s lemma on projections.\n\n A somewhere-random source is a tuple $(X_1, \\ldots, X_t)$ of (possibly correlated) $\\{0,1\\}^n$-valued random variables $X_i$ where for some unknown $i \\in [t]$, $X_i$ is guaranteed to be uniformly distributed. An [*extracting merger*]{} is a seeded device that takes a somewhere-random source as input and outputs nearly uniform random bits. We study the seed-length needed for extracting mergers with constant $t$ and constant error.\n\n Since a somewhere-random source has min-entropy at least $n$, a standard extractor can also serve as an extracting merger. Our goal is to understand whether the further structure of being somewhere-random rather than just having high entropy enables smaller seed-length, and towards this we show:\n\n - Just like in the case of standard extractors, seedless extracting mergers with even just one output bit do not exist.\n\n - Unlike the case of standard extractors, it [*is*]{} possible to have extracting mergers that output a constant number of bits using only constant seed. Furthermore, a random choice of merger does not work for this purpose!\n\n - Nevertheless, just like in the case of standard" -"---\nauthor:\n- 'N. M. McStay$^*$ & R. A. Reid-Edwards$^{\\dagger}$'\nbibliography:\n- 'covering\\_maps.bib'\ntitle: 'Symmetries and Covering Maps for the Minimal Tension String on $\\mathbf{AdS_3\\times S^3\\times T^4}$'\n---\n\nIntroduction\n============\n\nIn [@Maldacena:1997re], Maldacena proposed that the large $N$ limit of certain conformal field theories (without gravity) appears to be equivalent to string theories in asymptotically Anti-de Sitter spaces. Each of these theories is defined perturbatively, meaning they are only well understood for small values of their perturbative parameters. Fascinatingly, the matching of the parameters in the AdS/CFT duality is a strong-weak correspondence, meaning that when the inverse string tension is taken to be small (the supergravity approximation), the corresponding CFT is strongly coupled and vice versa. This makes it difficult to meaningfully compare observables on each side of the picture and directly prove Maldacena\u2019s conjecture in perturbation theory. Nevertheless, in a remarkable series of papers [@Gaberdiel:2018rqv; @Eberhardt:2018ouy; @Eberhardt:2019ywk; @Eberhardt:2019qcl; @Eberhardt:2020akk; @Dei:2020zui; @Gaberdiel:2020ycd; @Knighton:2020kuh; @Gaberdiel:2021njm; @Gaberdiel:2021kkp; @Gaberdiel:2022bfk; @Dei:2022pkr; @Gaberdiel:2022oeu; @Naderi:2022bus; @Eberhardt:2019][^1], Gaberdiel, Gopakumar, Eberhardt and collaborators propose a type IIB string theory on $AdS_3 \\times S^3 \\times T^4$ in the minimal tension limit[^2] of one unit of NS-NS flux wrapping the $S^3$, and argue that it is exactly dual to the" -"---\nabstract: 'In this paper, we generalize the notion of unconstrained quantization of the classical Cantor distribution to constrained quantization and give a general definition of constrained quantization. Toward this, we calculate the optimal sets of $n$-points, $n$th constrained quantization errors, the constrained quantization dimensions, and the constrained quantization coefficients, taking different families of constraints for all $n\\in \\mathbb N$.
The results in this paper show that both the constrained quantization dimension and the constrained quantization coefficient for the Cantor distribution depend on the underlying constraints. They also show that the constrained quantization coefficient for the Cantor distribution can exist and be equal to the constrained quantization dimension. These facts are not true in the unconstrained quantization for the Cantor distribution.'\naddress:\n- |\n $^{1}$Department of Mathematical Sciences\\\n Indian Institute of Technology (Banaras Hindu University)\\\n Varanasi, 221005, India.\n- |\n $^{2}$School of Mathematical and Statistical Sciences\\\n University of Texas Rio Grande Valley\\\n 1201 West University Drive\\\n Edinburg, TX 78539-2999, USA.\nauthor:\n- $^1$Megha Pandey\n- '$^2$Mrinal K. Roychowdhury'\nbibliography:\n- 'References.bib'\ntitle: Constrained quantization for the Cantor distribution\n---\n\nIntroduction\n============\n\nReal-life problems, such as information theory, data compression, signal processing, etc., involve large amounts of data that" -"---\nabstract: 'Massive star clusters are often used as tracers of galaxy formation and assembly. In order to do so, we must understand their properties at formation, and how those properties change with time, galactic environment, and galaxy assembly history. The two most important intrinsic properties that govern star cluster evolution are mass and radius. In this paper, we investigate 10 theoretically and observationally motivated initial size-mass relations for star clusters, and evolve populations of clusters through galaxy formation models. We compare our results to each other and to observations of cluster populations in M83, M31, and the Milky Way. We find that none of our size-mass relations agree with the observations after 6-10 Gyr of evolution. We can successfully reproduce the cluster mass functions with models that have a small range of initial radii, and which do not allow cluster radii to change with time. However, these models do not agree with our understanding of cluster evolution, which does involve radius evolution, and do not match the observed distributions of radii. We note that there is a region of parameter space where clusters are optimally protected from both tidal shocks and evaporation due to two-body relaxation. Clusters which are" -"---\nabstract: 'The lack of ability to adapt the motion compensation model to video content is an important limitation of current end-to-end learned video compression models. This paper advances the\u00a0state-of-the-art by proposing an adaptive motion-compensation model for end-to-end rate-distortion optimized hierarchical bi-directional video compression. In particular, we propose two novelties: i) a multi-scale deformable alignment scheme at the feature level combined with multi-scale conditional coding, ii) motion-content adaptive inference. In addition, we employ\u00a0a\u00a0gain unit, which enables a single model to operate at multiple rate-distortion operating points. We also exploit the\u00a0gain unit to control bit allocation among intra-coded vs. bi-directionally coded frames by fine-tuning corresponding models for truly flexible-rate learned video coding. Experimental results demonstrate state-of-the-art rate-distortion performance exceeding that of all prior art in learned video coding[^1].'\naddress: 'Dept.
of Electrical & Electronics Engineering, Ko\u00e7 University, Istanbul, Turkey'\nbibliography:\n- 'references.bib'\ntitle: |\n Multi-scale Deformable Alignment and Content-Adaptive Inference\\\n for Flexible-Rate Bi-Directional Video Compression\n---\n\nbi-directional video compression, hierarchical B pictures, end-to-end rate-distortion optimization, content-adaptive inference, flexible-rate coding\n\nIntroduction {#intro}\n============\n\nVideo compression technology is in the midst of a transition from the traditional approaches, such as H.265/HEVC\u00a0[@h265] and H.266/VVC\u00a0[@vvc], to deep learning" -"---\nabstract: 'Spiking Transformers have gained considerable attention because they achieve both the energy efficiency of Spiking Neural Networks (SNNs) and the high capacity of Transformers. However, the existing Spiking Transformer architectures, derived from Artificial Neural Networks (ANNs), exhibit a notable architectural gap, resulting in suboptimal performance compared to their ANN counterparts. Manually discovering optimal architectures is time-consuming. To address these limitations, we introduce [`AutoST` ]{}, a training-free NAS method for Spiking Transformers, to rapidly identify high-performance Spiking Transformer architectures. Unlike existing training-free NAS methods, which struggle with the non-differentiability and high sparsity inherent in SNNs, we propose to utilize Floating-Point Operations (FLOPs) as a performance metric, which is independent of model computations and training dynamics, leading to a stronger correlation with performance. Our extensive experiments show that [`AutoST` ]{}models outperform state-of-the-art manually or automatically designed SNN architectures on static and neuromorphic datasets. Full code, model, and data are released for reproduction.[^1]'\naddress: North Carolina State University\ntitle: 'AutoST: Training-free Neural Architecture Search for Spiking Transformers'\n---\n\nSpiking Neural Network, Transformer, Neural Architecture Search\n\nIntroduction {#sec:intro}\n============\n\nSpiking neural networks (SNNs) have gained extensive attention owing to their remarkable energy efficiency\u00a0[@maassNetworksSpikingNeurons1997]. Concurrently, the Transformer has exhibited impressive performance in" -"---\nabstract: '\\[sec:Abstract\\] Continual learning (CL) has emerged as an important avenue of research in recent years, at the intersection of Machine Learning (ML) and Human-Robot Interaction (HRI), to allow robots to continually learn in their environments over long-term interactions with humans. Most research in continual learning, however, has been *robot-centered*, focused on developing continual learning algorithms that can quickly learn new information on static datasets. In this paper, we take a *human-centered* approach to continual learning, to understand how humans teach continual learning robots over the long term and if there are variations in their teaching styles. We conducted an in-person study with 40 participants who interacted with a continual learning robot in 200 sessions. In this between-participant study, we used two different CL models deployed on a Fetch mobile manipulator robot. An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users, indicating the need for personalized adaptation to their distinct teaching styles.
The results also show that although there is a difference in the teaching styles between expert and non-expert users, the style does not have an effect on the performance of the" -"---\nabstract: 'We investigate the production of particle Dark Matter (DM) in a minimal freeze-in model considering a non-instantaneous reheating phase after inflation. We demonstrate that for low reheating temperatures, bosonic or fermionic reheating from monomial potentials can lead to a different evolution in the DM production and hence to distinct predictions for the parent particle lifetime and mass, constrained by long-lived particle (LLP) searches. We highlight that such a scenario predicts larger parent particle decay lengths compared to using the instantaneous reheating approximation. Moreover, we demonstrate the importance of an accurate definition of the reheating temperature and emphasize its relevance for the correct interpretation of experimental constraints. We explore different models of inflation, which can lead to the considered reheating potential. We find that the extent to which the standard DM freeze-in production can be modified crucially depends on the underlying inflationary model. Based on the latest CMB constraints, we derive lower limits on the decay length of the parent particle and confront these results with the corresponding reach of LLP searches. Our findings underscore the impact of the specific dynamics of inflation on DM freeze-in production and highlight their importance for the interpretation of collider signatures. At the same time," -"---\nabstract: 'We provide partial implementations of von Neumann\u2019s universal constructor and universal copier, starting out with three types of simple building blocks using minimal assumptions. Using the same principles, we also construct Turing machines. Combining both, we arrive at a proposal for a self-replicating Turing machine. Our construction allows for mutations if desired, and we give a simple description language.'\naddress: 'ralph.lano@th-nuernberg.de'\nauthor:\n- |\n Ralph P. Lano[^1]\\\n Technische Hochschule N\u00fcrnberg - Georg Simon Ohm\\\n Ke\u00dflerplatz 12, 90489 N\u00fcrnberg, Germany\\\n ralph.lano@th-nuernberg.de\nbibliography:\n- 'citations.bib'\nnocite: '[@*]'\ntitle: 'Towards a Self-Replicating Turing Machine'\n---\n\nself-replication, Turing machine, universal constructor, von Neumann, self-reproduction, artificial life, nanorobots\n\nIntroduction\n============\n\nLong before the details of biological self-reproduction were understood, von Neumann proposed a purely logical model for self-reproduction [@neumann1966theory]. It consisted of a universal constructor automaton A, a universal copier automaton B, and descriptions thereof, $\\Phi(A)$ and $\\Phi(B)$. It also included a third automaton C that would control A and B, and might be needed for additional manipulations. Finally, he introduced an automaton D that could be any automaton. Automaton D had nothing to do with self-reproduction; however, it could undergo mutation [@rocha2015neumann].\n\nClearly, living organisms provide an implementation of von Neumann\u2019s scheme." -"---\nabstract: 'Topological color codes are widely acknowledged as promising candidates for fault-tolerant quantum computing. Neither a two-dimensional nor a three-dimensional topology, however, can provide a universal gate set {H, T, CNOT}, with the T-gate missing in the two-dimensional and the H-gate in the three-dimensional case.
These complementary shortcomings of the isolated topologies may be overcome in a combined approach, by switching between a two- and a three-dimensional code while maintaining the logical state. In this work, we construct resource-optimized deterministic and non-deterministic code switching protocols for two- and three-dimensional distance-three color codes using fault-tolerant quantum circuits based on flag-qubits. Deterministic protocols allow for the fault-tolerant implementation of logical gates on an encoded quantum state, while non-deterministic protocols may be used for the fault-tolerant preparation of magic states. Taking the error rates of state-of-the-art trapped-ion quantum processors as a reference, we find a logical failure probability of $3\\%$ for deterministic logical gates, which cannot be realized transversally in the respective code. By replacing the three-dimensional distance-three color code in the protocol for magic state preparation with the morphed code introduced in\u00a0[@vasmer2022morphing], we reduce the logical failure rates by two orders of magnitude, thus rendering it a viable method for" -"---\nabstract: 'For quantum computing (QC) to emerge as a practically indispensable computational tool, there is a need for quantum protocols with end-to-end practical applications\u2014in this instance, fluid dynamics. We debut here a high-performance quantum simulator which we term *QFlowS* (Quantum Flow Simulator), designed for fluid flow simulations using QC. Solving nonlinear flows by QC generally proceeds by solving an equivalent infinite dimensional linear system as a result of linear embedding. Thus, we first choose to simulate two well-known flows using QFlowS and demonstrate a previously unseen, full gate-level implementation of a hybrid and high-precision Quantum Linear Systems Algorithm (QLSA) for simulating such flows at low Reynolds numbers. The utility of this simulator is demonstrated by extracting error estimates and power law scaling that relates $T_{0}$ (a parameter crucial to Hamiltonian simulations) to the condition number $\\kappa$ of the simulation matrix, and allows the prediction of an optimal scaling parameter for accurate eigenvalue estimation. Further, we include two speedup-preserving algorithms for (a) the functional form or sparse quantum state preparation, and (b) an *in-situ* quantum post-processing tool for computing nonlinear functions of the velocity field. We choose the viscous dissipation rate as an example, for which" -"---\nabstract: |\n We introduce the notion of an ${\\varepsilon}$-cover for a kernel range space. A kernel range space concerns a set of points $X \\subset {\\ensuremath{\\mathbb{R}}}^d$ and the space of all queries by a fixed kernel (e.g., a Gaussian kernel $K(p,\\cdot) = \\exp(-\\|p-\\cdot\\|^2)$). For a point set $X$ of size $n$, a query returns a vector of values $R_p \\in {\\ensuremath{\\mathbb{R}}}^n$, where the $i$th coordinate $(R_p)_i = K(p,x_i)$ for $x_i \\in X$. An ${\\varepsilon}$-cover is a subset of points $Q \\subset {\\ensuremath{\\mathbb{R}}}^d$ so that for any $p \\in {\\ensuremath{\\mathbb{R}}}^d$, $\\frac{1}{n} \\|R_p - R_q\\|_1\\leq {\\varepsilon}$ for some $q \\in Q$. This is a smooth analog of Haussler\u2019s notion of ${\\varepsilon}$-covers for combinatorial range spaces (e.g., defined by subsets of points within a ball query) where the resulting vectors $R_p$ are in $\\{0,1\\}^n$ instead of $[0,1]^n$.
The kernel versions of these range spaces show up in data analysis tasks where the coordinates may be uncertain or imprecise, and hence one wishes to add some flexibility in the notion of inside and outside of a query range.\n\n Our main result is that, unlike combinatorial range spaces, the size of kernel ${\\varepsilon}$-covers is independent of the input size $n$ and dimension $d$. We" -"---\nabstract: 'A dynamical system produces a dependent multivariate sequence called dynamical time series, developed with an evolution function. As variables in the dynamical time series at the current time-point usually depend on all the variables at the previous time-point, existing studies forecast the variables at the future time-point by estimating the evolution function. However, some variables in the dynamical time series are missing in some practical situations. In this study, we propose an *autoregressive with slack time series\u00a0(ARS)* model. The ARS model involves the simultaneous estimation of the evolution function and the underlying missing variables as a slack time series, with the aid of the time-invariance and linearity of the dynamical system. This study empirically demonstrates the effectiveness of the proposed ARS model.'\nauthor:\n- 'Akifumi Okuno[^1]'\n- 'Yuya Morishita[^2]'\n- 'Yoh-ichi Mototake[^3]'\ntitle: |\n Forecasting of the development of\\\n a partially-observed dynamical time series\\\n with the aid of time-invariance and linearity\n---\n\n**Keywords:** dynamical system, completely missing variables, slack time series\n\nIntroduction\n============\n\nNotwithstanding its difficulty, forecasting of the development of intricate non-linear dynamical systems has been in the spotlight of various scientific fields\u00a0[@strogatz2001nonlinear; @jackson2015applications]. A plausible approach to forecasting the development is to isolate the non-linear estimation" -"---\nabstract: 'We construct solutions to the Schwarz boundary value problem on the unit disk and the upper half-plane when the boundary condition is with respect to boundary values in the sense of distributions.'\naddress: |\n Department of Mathematical Sciences\\\n University of Arkansas\\\n Fayetteville, Arkansas\nauthor:\n- 'William L. Blair'\nbibliography:\n- 'refs.bib'\ntitle: The Schwarz boundary value problem for boundary values in the sense of distributions\n---\n\nIntroduction\n============\n\nIn this paper, we extend the classes of boundary conditions under which the Schwarz boundary value problem is solvable. The Schwarz boundary value problem is a classically studied boundary value problem in the setting of complex-valued partial differential equations. The problem is to find a holomorphic function on a domain in the plane that has a real part which agrees with a prescribed function on the boundary of the domain.\n\nWhen considered on the unit disk or upper half-plane, this problem is solvable by considering the Dirichlet problem with the same boundary condition, i.e., finding a real-valued harmonic function on the domain which agrees with the prescribed boundary condition. For the unit disk and the upper half-plane, the Dirichlet problem is solved by the Poisson integral of the boundary condition, for" -"---\nabstract: 'Gravitational waves (GWs) are useful to test gravitational theories and to probe the physics in the early universe. In this paper, we investigate the scalar induced gravitational waves (SIGWs) in symmetric teleparallel gravity with a parity-violating term.
The presence of the parity-violating term leads to the velocity birefringence effect of the SIGWs. However, after taking into account the observational constraints on the speed of GWs, the contribution from the parity-violating term to SIGWs is negligible. Nevertheless, the contribution to SIGWs from the perturbations of the connection can be significant, and results in a multipeak structure in the energy density of SIGWs. This feature makes symmetric teleparallel gravity distinguishable from general relativity.'\nauthor:\n- Fengge Zhang\n- 'Jia-Xi Feng'\n- Xian Gao\nbibliography:\n- 'main.bib'\ntitle: 'Scalar induced gravitational waves in symmetric teleparallel gravity with a parity-violating term'\n---\n\nIntroduction\n============\n\nThe detection of gravitational waves (GWs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) scientific collaboration and Virgo collaboration [@Abbott:2016nmj; @Abbott:2016blz; @Abbott:2017gyy; @TheLIGOScientific:2017qsa; @Abbott:2017oio; @Abbott:2017vtc; @LIGOScientific:2018mvr; @Abbott:2020khf; @Abbott:2020uma; @LIGOScientific:2020stg] opens a new window to probe the nature of gravity in the strong gravitational field and nonlinear regime. Although the observation from the cosmic microwave background (CMB) constrains the" -"---\nabstract: 'Sand mining is a booming industry. The river sandbank is one of the primary sources of sand mining. Detection of potential river sandbank regions for sand mining directly impacts the economy, society, and environment. In the past, semi-supervised and supervised techniques have been used to detect mining regions including sand mining. A few techniques employ multi-modal analysis combining different modalities such as multi-spectral imaging, synthetic aperture radar (*SAR*) imaging, aerial images, and point cloud data. However, the distinguishing spectral characteristics of river sandbank regions are yet to be fully explored. This paper provides a novel method to detect river sandbank regions for sand mining using multi-spectral images without any labeled data over the seasons. Association with a river stream and the abundance of minerals are the most prominent features of such a region. The proposed work uses these distinguishing features to determine the spectral signature of a river sandbank region, which is robust to other high mineral abundance regions. It follows a two-step approach, where first, potential high mineral regions are detected and next, they are segregated using the presence of a river stream. The proposed technique provides average accuracy, precision, and recall of $90.75\\%$, $85.47\\%$, and $73.5\\%$," -"---\nabstract: 'Major innovations in computing have been driven by scaling up computing infrastructure, while aggressively optimizing operating costs. The result is a network of worldwide datacenters that consume a large amount of energy, mostly in an energy-efficient manner. Since the electric grid powering these datacenters provided a simple and opaque abstraction of an unlimited and reliable power supply, the computing industry remained largely oblivious to the carbon intensity of the electricity it uses. Much like the rest of society, it generally treated the carbon intensity of the electricity as constant, which was mostly true for a fossil fuel-driven grid. As a result, the cost-driven objective of increasing energy-efficiency \u2014 by doing more work per unit of energy \u2014 has generally been viewed as the most carbon-efficient approach.
However, as the electric grid is increasingly powered by clean energy and is exposing its time-varying carbon intensity, the most energy-efficient operation is no longer necessarily the most carbon-efficient operation. There has been a recent focus on exploiting the flexibility of computing\u2019s workloads\u2014along temporal, spatial, and resource dimensions\u2014to reduce carbon emissions, which comes at the cost of either performance or energy efficiency. In this paper, we discuss the trade-offs between energy" -"---\nabstract: 'This paper addresses the scheduling problem on two identical parallel machines with a single server in charge of loading and unloading operations of jobs. Each job has to be loaded by the server before being processed on one of the two machines and unloaded by the same server after its processing. No delay is allowed between loading and processing, and between processing and unloading. The objective function involves the minimization of the makespan. This problem, referred to as $P2,S1|s_j, t_j|C_{max}$, generalizes the classical parallel machine scheduling problem with a single server which performs only the loading (i.e., setup) operation of each job. For this $\\mathcal{NP}$-hard problem, no solution algorithm has been proposed in the literature. Therefore, we present two mixed-integer linear programming (MILP) formulations, one with completion-time variables along with two valid inequalities and one with time-indexed variables. In addition, we propose some polynomial-time solvable cases and a tight theoretical lower bound. Moreover, we show that the minimization of the makespan is equivalent to the minimization of the total idle times on the machines. To solve large-sized instances of the problem, an efficient General Variable Neighborhood Search (GVNS) metaheuristic with two mechanisms for finding an initial solution is" -"---\nabstract: 'BatGPT is a large-scale language model designed and trained jointly by Wuhan University and Shanghai Jiao Tong University. It is capable of generating highly natural and fluent text in response to various types of input, including text prompts, images, and audio. At the modeling level, we employ a bidirectional autoregressive architecture that allows the model to efficiently capture the complex dependencies of natural language, making it highly effective in tasks such as language generation, dialog systems, and question answering. Moreover, the bidirectional autoregressive modeling not only operates from left to right but also from right to left, effectively reducing fixed memory effects and alleviating model hallucinations. In terms of training, we utilize a parameter expansion strategy for leveraging the pre-training of existing models and employ reinforcement learning from both AI and human feedback, aimed at improving the model\u2019s alignment performance.
Overall, these approaches significantly improve the effectiveness of BatGPT, and the model can be utilized for a wide range of natural language applications.'\nauthor:\n- |\n Zuchao Li, Shitou Zhang, Hai Zhao[^1], Yifei Yang, Dongjie Yang\\\n School of Computer Science, Wuhan University\\\n Department of Computer Science and Engineering, Shanghai Jiao Tong University\\\n `zcli-charlie@whu.edu.cn, shitouzhang@whu.edu.cn, zhaohai@cs.sjtu.edu.cn`\\\nbibliography:" -"---\nauthor:\n- Shriya Soma\n- Horst St\u00f6cker\n- 'Kai Zhou[!!]{}'\nbibliography:\n- 'biblio.bib'\ntitle: Mass and tidal parameter extraction from gravitational waves of binary neutron stars mergers using deep learning\n---\n\nIntroduction\n============\n\nThe era of gravitational wave astronomy commenced with LIGO\u2019s first detection of gravitational waves\u00a0(GWs) from the collision of two black holes on 14$^{\\text{th}}$ September 2015\u00a0[@LIGOScientific:2016emj]. Since then, the LIGO-Virgo Scientific collaboration has made several GW detections from compact binary coalescences; 11 events in the first and second observing runs\u00a0(O1 and O2), and 79 in the third observing run\u00a0(O3)\u00a0[@LIGOScientific:2018mvr; @LIGOScientific:2020ibl; @LIGOScientific:2021usb; @LIGOScientific:2021djp]. These events comprise mergers of binary black holes\u00a0(BBHs), binary neutron stars\u00a0(BNSs), neutron star-black hole\u00a0(NSBH) binaries\u00a0[@LIGOScientific:2021qlt] and also component objects from the \u2018mass gap\u2019\u00a0[@LIGOScientific:2020zkf]. GW170817, the first GW event from a BNS merger detected by Advanced LIGO and Virgo, marked a major advancement in the ongoing research on neutron stars (NSs)\u00a0[@LIGOScientific:2017vwq].\n\nPrior to the event GW170817, the NS equation of state (EoS) in the intermediate density range\u00a0(2-7[*n*]{}$_s$, where [*n*]{}$_s$ is the nuclear saturation density) was mainly constrained by precise mass measurements of pulsars, i.e., any EoS that does not satisfy the minimum lower band" -"---\nabstract: 'Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and they improve over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors in specific settings, even in over-parametrized models.'\nauthor:\n- 'Ramchandran Muthukumar[^1]'\n- 'Jeremias Sulam[^2]'\nbibliography:\n- 'references.bib'\ntitle: 'Sparsity-aware generalization theory for deep neural networks'\n---\n\nIntroduction\n============\n\nStatistical learning theory seeks to characterize the generalization ability of machine learning models, obtained from finite training data, to unseen test data. The field is by now relatively mature, and several tools exist to provide upper bounds on the generalization error, $R(h)$.
Often the upper bounds depend on the empirical risk, $\\hat{R}(h)$, and different characterizations of complexity of the hypothesis class as well" -"---\nabstract: 'Efforts are underway to magnetically confine electron\u2013positron pair plasmas to study their unique behavior, which is characterized by significant changes in plasma time and length scales, supported waves, and unstable modes. However, use of conventional plasma diagnostics presents challenges with these low-density, annihilating matter-antimatter plasmas. To address this problem, we propose to develop techniques based on the distinct emission provided by annihilation. This emission exhibits two spatial correlations: the distance attenuation of isotropic sources and the back-to-back propagation of momentum-preserving 2-$\\gamma$ annihilation. We present the results of our analysis of the $\\gamma$ emission rate and the spatial profile of the annihilation in a magnetized pair plasma from direct pair collisions, from the formation and decay of positronium, as well as from transport processes. In order to demonstrate the effectiveness of annihilation-based techniques, we tested them on annular $\\gamma$ emission profiles produced by a $\\beta^+$ radioisotope on a rotating turntable. Direct and positronium-mediated annihilation result in overlapping volumetric $\\gamma$ sources, and the 2-$\\gamma$ emission from these volumetric sources can be tomographically reconstructed from coincident counts in multiple detectors. Transport processes result in localized annihilation where field lines intersect walls, limiters, or internal magnets. These localized sources can be" -"---\nabstract: 'We use numerical simulations of circumplanetary disks to determine the boundary between disks that are radially truncated by the tidal potential, and those where gas escapes the Hill sphere. We consider a model problem, in which a coplanar circumplanetary disk is resupplied with gas at an injection radius smaller than the Hill radius. We evolve the disk using the [Phantom]{} Smoothed Particle Hydrodynamics code until a steady-state is reached. We find that the most significant dependence of the truncation boundary is on the disk aspect ratio $H/R$. Circumplanetary disks are efficiently truncated for $H/R \\lesssim 0.2$. For $H/R \\simeq 0.3$, up to about half of the injected mass, depending on the injection radius, flows outwards through the decretion disk and escapes. As expected from analytic arguments, the conditions ($H/R$ and Shakura-Sunyaev $\\alpha$) required for tidal truncation are independent of planet mass. A simulation with larger $\\alpha=0.1$ shows stronger outflow than one with $\\alpha=0.01$, but the dependence on transport efficiency is less important than variations of $H/R$. Our results suggest two distinct classes of circumplanetary disks: tidally truncated thin disks with dust-poor outer regions, and thicker actively decreting disks with enhanced dust-to-gas ratios. Applying our results to the PDS" -"---\nabstract: 'We propose a variational approach for preparing entangled quantum states on quantum computers. The methodology involves training a unitary operation to match a target unitary using the Fubini-Study distance as a cost function. We employ various gradient-based optimization techniques to enhance performance, including Adam and quantum natural gradient.
Our investigation showcases the versatility of different ansatzes featuring a hypergraph structure, enabling the preparation of diverse entanglement target states such as GHZ, W, and absolutely maximally entangled states. Remarkably, the circuit depth scales efficiently with the number of layers and does not depend on the number of qubits. Moreover, we explore the impacts of barren plateaus, readout noise, and error mitigation techniques on the proposed approach. Through our analysis, we demonstrate the effectiveness of the variational algorithm in maximizing the efficiency of quantum state preparation, leveraging low-depth quantum circuits.'\nauthor:\n- Vu Tuan Hai\n- Nguyen Tan Viet\n- Le Bin Ho\nbibliography:\n- 'refs.bib'\ntitle: '**Variational preparation of entangled states on quantum computers** '\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nQuantum computation leverages principles of quantum physics to perform calculations, and recent advances in engineering have led to the development of quantum computers with great potential for practical" -"---\nabstract: 'Environmental health studies are increasingly measuring endogenous omics data ($\\boldsymbol{M}$) to study intermediary biological pathways by which an exogenous exposure ($\\boldsymbol{A}$) affects a health outcome ($\\boldsymbol{Y}$), given confounders ($\\boldsymbol{C}$). Mediation analysis is frequently carried out to understand such mechanisms. If intermediary pathways are of interest, then there is likely literature establishing statistical and biological significance of the total effect, defined as the effect of $\\boldsymbol{A}$ on $\\boldsymbol{Y}$ given $\\boldsymbol{C}$. For mediation models with continuous outcomes and mediators, we show that leveraging external summary-level information on the total effect improves estimation efficiency of the natural direct and indirect effects. Moreover, the efficiency gain depends on the asymptotic partial $R^2$ between the outcome ($\\boldsymbol{Y}\\mid\\boldsymbol{M},\\boldsymbol{A},\\boldsymbol{C}$) and total effect ($\\boldsymbol{Y}\\mid\\boldsymbol{A},\\boldsymbol{C}$) models, with smaller (larger) values benefiting direct (indirect) effect estimation. We robustify our estimation procedure to uncongenial external information by assuming the total effect follows a random distribution. This framework allows shrinkage towards the external information if the total effects in the internal and external populations agree. We illustrate our methodology using data from the Puerto Rico Testsite for Exploring Contamination Threats, where Cytochrome p450 metabolites are hypothesized to mediate the effect of phthalate exposure on gestational age at delivery. External information" -"---\nauthor:\n- 'Carlos M. R. Rocha'\n- Octavio Roncero\n- Niyazi Bulut\n- Piotr Zuchowski\n- 'David Navarro-Almaida'\n- Asunci\u00f3n Fuente\n- Valentine Wakelam\n- 'Jean-Christophe Loison'\n- Evelyne Roueff\n- 'Javier R. Goicoechea'\n- Gisela Esplugues\n- 'Leire Beitia-Antero'\n- Paola Caselli\n- Valerio Lattanzi\n- Jaime Pineda\n- Romane Le Gal\n- 'Marina Rodr\u00edguez-Baras'\n- 'Pablo Riviere-Marichalar'\nsubtitle: 'VIII. Unlocking the CS chemistry: the CH + S$\\rightarrow$ CS + H and C$_2$ + S$\\rightarrow$ CS + C reactions'\ntitle: 'Gas phase Elemental abundances in Molecular cloudS (GEMS)'\n---\n\n[Carbon monosulphide (CS) is among a few sulphur-bearing species that have been widely observed in all environments, including the most extreme ones such as diffuse clouds.
Moreover, it has been widely used as a tracer of the gas density in the interstellar medium in our Galaxy and external galaxies. Therefore, the full understanding of its chemistry in all environments is of paramount importance for the study of interstellar matter. ]{} [Our group is revising the rates of the main formation and destruction mechanisms of CS. In particular, we focus on those which involve open-shell species for which the classical capture model might not be accurate enough. In this" -"---\nabstract: 'Recommender systems play a crucial role in helping users discover information that aligns with their interests based on their past behaviors. However, developing personalized recommendation systems becomes challenging when historical records of user-item interactions are unavailable, leading to what is known as the *system cold-start* recommendation problem. This issue is particularly prominent in start-up businesses or platforms with insufficient user engagement history. Previous studies focus on user or item cold-start scenarios, where systems could make recommendations for new users or items but are still trained with historical user-item interactions in the same domain, which cannot solve our problem. To bridge the gap, our research introduces an innovative and effective approach, capitalizing on the capabilities of pre-trained language models. We transform the recommendation process into sentiment analysis of natural language containing information on user profiles and item attributes, where the sentiment polarity is predicted with prompt learning. By harnessing the extensive knowledge housed within language models, the prediction can be made without historical user-item interaction records. A benchmark is also introduced to evaluate the proposed method under the cold-start setting, and the results demonstrate the effectiveness of our method. To the best of our knowledge, this is the first" -"---\nabstract: 'We introduce a new method of detecting when the fundamental group of a Dehn surgery on a knot admits a left-ordering, a method which is particularly useful for 2-bridge knots. As an illustration of this method, we show that all Dehn surgeries on the knot $6_2$ with slope in the interval $(-4, 8)\\cap\\mathbb{Q}$ have left-orderable fundamental groups by exhibiting a family of hyperbolic $\\widetilde{PSL}(2,\\mathbb{R})$-representations of the knot complement group.'\nauthor:\n- Ollie Thakar\nbibliography:\n- 'bib.bib'\ntitle: 'Left-Orderable Surgeries on the knot $6_2$ via hyperbolic $\\widetilde{PSL}(2,\\mathbb{R})$-Representations'\n---\n\nIntroduction\n============\n\nThe $L$-space conjecture is an ambitious conjecture in 3-manifold topology attempting to unite information about the Heegaard Floer homology of a three-manifold with information about its fundamental group. To state this conjecture, we must first go through some preliminary definitions:\n\nA group $G$ is said to be *left-orderable* if it admits a total ordering $<$ such that if $a5.5$; see @2022Fan for a recent review], implying that very massive supermassive black holes (SMBHs) existed only a few hundred million years after the Big Bang [e.g., @2017Mazzucchelli; @2021Yang; @2022Farina]. There is strong evidence that luminous AGN activity is linked to galaxy mergers [@2012Treister], although the causality of this relation is still a matter of debate.
Numerical simulations strongly suggest that in the early Universe, the most massive SMBHs reside in the densest regions, built up from the accretion and merger of massive dark matter halo seeds, and surrounded by a large number of fainter galaxies [@2005Springel; @2006Volonteri; @2014Costa; @2019Habouzit]. Mergers and high gas" -"---\nabstract: 'Motivated by constraints on the dark energy equation of state from supernova data, we propose a formalism for the Bayesian inference of functions: Starting from a functional variant of the Kullback-Leibler divergence we construct a functional Fisher-matrix and a suitable partition functional which takes on the shape of a path integral. After showing the validity of the Cram[\u00e9]{}r-Rao bound and unbiasedness for functional inference in the Gaussian case, we construct Fisher-functionals for the dark energy equation of state constrained by the cosmological redshift-luminosity relationship of supernovae of type Ia, for both the linearised and the lowest-order non-linear model. Introducing Fourier-expansions and expansions into Gegenbauer-polynomials as discretisations of the dark energy equation of state function shows how the uncertainty on the inferred function scales with model complexity and how functional assumptions can lead to errors in extrapolation to poorly constrained redshift ranges.'\nauthor:\n- |\n Rebecca Maria Kuntz$^2$[^1], Maximilian Philipp Herzog$^2$, Heinrich von Campe$^2$, Lennart R[\u00f6]{}ver$^{1,2}$, Bj[\u00f6]{}rn Malte Sch[\u00e4]{}fer$^2$[^2]\\\n $^1$ Institut f[\u00fc]{}r Theoretische Physik, Universit[\u00e4]{}t Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany\\\n $^2$Zentrum f[\u00fc]{}r Astronomie der Universit[\u00e4]{}t Heidelberg, Astronomisches Rechen-Institut, Philosophenweg 12, 69120 Heidelberg, Germany\nbibliography:\n- 'references.bib'\ntitle: 'Partition function approach to non-Gaussian likelihoods: partitions for the inference of functions and" -"---\nabstract: 'The development of practical, high-performance decoding algorithms reduces the resource cost of fault-tolerant quantum computing. Here we propose a decoder for the surface code that finds low-weight correction operators for errors produced by the depolarising noise model. The decoder is obtained by mapping the syndrome of the surface code onto that of the color code, thereby allowing us to adopt more sophisticated color-code decoding algorithms. Analytical arguments and exhaustive testing show that the resulting decoder can find a least-weight correction for all weight $d/2$ depolarising errors for even code distance $d$. This improves the logical error rate by an exponential factor $O(2^{d/2})$ compared with decoders that treat bit-flip and dephasing errors separately. We demonstrate this improvement with analytical arguments and supporting numerical simulations at low error rates. Of independent interest, we also demonstrate an exponential improvement in logical error rate for our decoder used to correct independent and identically distributed bit-flip errors affecting the color code compared with more conventional color-code decoding algorithms.'\nauthor:\n- Asmae Benhemou\n- Kaavya Sahay\n- Lingling Lao\n- 'Benjamin J.
Brown'\nbibliography:\n- 'references.bib'\ntitle: 'Minimising surface-code failures using a color-code decoder'\n---\n\nIntroduction\n============\n\nWe envisage that a large-scale quantum computer" -"---\nabstract: 'A frieze on a polygon is a map from the diagonals of the polygon to an integral domain which respects the Ptolemy relation. Conway and Coxeter previously studied positive friezes over $\\mathbb{Z}$ and showed that they are in bijection with triangulations of a polygon. We extend their work by studying friezes over ${\\mathbb{Z}}[\\sqrt{2}]$ and their relationships to dissections of polygons. We largely focus on the characterization of unitary friezes that arise from dissecting a polygon into triangles and quadrilaterals. We identify a family of dissections that give rise to unitary friezes and conjecture that this gives a complete classification of dissections which admit a unitary frieze.'\nauthor:\n- 'Esther Banaian, Libby Farrell, Amy Tao, Kayla Wright, Joy Zhichun Zhang'\nbibliography:\n- 'Arxiv.bib'\ntitle: 'Friezes over ${\\mathbb{Z}}[\\sqrt{2}]$'\n---\n\nIntroduction\n============\n\nIn this paper, we will study friezes. A frieze is a ring homomorphism from a cluster algebra $\\mathcal{A}(Q)$ to an integral domain $R$. When the cluster algebra arises from a surface $S$ with marked points $M$, the generators of the algebra correspond to arcs on the surface with relations provided by skein relations [@fomin2008cluster]. Therefore, a frieze from such a cluster algebra can instead be viewed as a map from" -"---\nabstract: 'With the commercial application of automated vehicles (AVs), the sharing of roads between AVs and human-driven vehicles (HVs) will become a common occurrence in the future. While research has focused on improving the safety and reliability of autonomous driving, it is also crucial to consider collaboration between AVs and HVs. Human-like interaction is a required capability for AVs, especially at common unsignalized intersections, as human drivers of HVs expect to maintain their driving habits for inter-vehicle interactions. This paper uses the social value orientation (SVO) in the decision-making of vehicles to describe the social interaction among multiple vehicles. Specifically, we define the quantitative calculation of the conflict-involved SVO at unsignalized intersections to enhance decision-making based on the reinforcement learning method. We use naturalistic driving scenarios with highly interactive motions for performance evaluation of the proposed method. Experimental results show that SVO is more effective in characterizing inter-vehicle interactions than conventional motion state parameters like velocity, and the proposed method can accurately reproduce naturalistic driving trajectories compared to behavior cloning.'\nauthor:\n- 'Yan Tong$^{1,2}$, Licheng Wen$^{1}$, Pinlong Cai$^{1, \\ast}$, Daocheng Fu$^{1}$, Song Mao$^{1}$, Yikang Li$^{1}$ [^1][^2][^3]'\nbibliography:\n- 'references.bib'\ntitle: 'Human-like Decision-making at Unsignalized Intersection using Social Value Orientation'\n---\n\nInteraction," -"---\nabstract: |\n Many IoT use cases demand both secure storage and secure communication. Resource-constrained devices cannot afford to have one set of crypto protocols for storage and another for communication. Lightweight application layer security standards are being developed for IoT communication.
Extending these protocols for secure storage can significantly reduce communication latency and local processing.\n\n We present BLEND, combining secure storage and communication by storing IoT data as pre-computed encrypted network packets. Unlike local methods, BLEND not only eliminates separate crypto for secure storage needs, but also eliminates the need for real-time crypto operations, reducing the communication latency significantly. Our evaluation shows that compared with a local solution, BLEND reduces send latency from 630 $\\mu$$s$ to 110 $\\mu$$s$ per packet. BLEND enables PKI-based key management while being sufficiently lightweight for IoT. BLEND needs no modifications to the underlying communication standards when extended for secure storage, and can therefore preserve those protocols\u2019 security guarantees.\nauthor:\n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'ref.bib'\ntitle: '[BLEND]{}: Efficient and blended IoT data storage and communication with application layer security'\n---\n\nSecure storage, communication security, application layer security, OSCORE, EDHOC, IoT\n\nIntroduction {#sec:intro}\n============\n\nIoT is being deployed in extremely heterogeneous and wild scenarios such as" -"---\nabstract: 'The sample-based Gibbs sampler has been the dominant method for approximating a joint distribution from a collection of compatible full-conditional distributions. However, for conditionally specified models, mixtures of incompatible full and non-full conditional distributions are the reality, and their updating orders are hard to identify. We propose a new algorithm, the Iterative Conditional Replacement (ICR), that produces distributional approximations toward the stationary distributions, dispensing with Markov chains entirely. ICR always converges, and it produces mutually stationary distributions, which will be consistent among one another when the conditional distributions are compatible. Examples show ICR to be superior in quality, while being more parallelizable and requiring little effort in monitoring its convergence. Lastly, we propose an ensemble approach to decide the final model.'\nauthor:\n- |\n Kun-Lin Kuo\\\n Institute of Statistics, National University of Kaohsiung, Kaohsiung, Taiwan\\\n and\\\n Yuchung J. Wang[^1]\\\n Department of Mathematical Sciences, Rutgers University, Camden, NJ, USA\ntitle: '**Iterative conditional replacement algorithm for conditionally specified models**'\n---\n\n[**Keywords**]{}: Dependency network; $I$-projection; Method of alternating projection; Mutually stationary distributions; Unsupervised learning.\n\nIntroduction\n============\n\nUsing the two cultures of @Breiman2001, the assumption of a joint distribution is data modeling, whereas a conditionally specified model (CSM)\u2014specifying a joint distribution via conditional distributions\u2014belongs to" -"---\nauthor:\n- |\n Fabian Baumann, Agnieszka Czaplicka, and Iyad Rahwan$^*$\\\n \\\n \\\n \\\nbibliography:\n- 'scibib.bib'\ntitle: Network Structure shapes the Impact of Diversity in Collective Learning\n---\n\nIt is widely believed that diversity arising from different skills enhances the performance of teams, and in particular, their ability to learn and innovate. However, diversity has also been associated with negative effects on the communication and coordination within collectives. Yet, despite the importance of diversity as a concept, we still lack a mechanistic understanding of how its impact is shaped by the underlying social network.
To fill this gap, we model skill diversity within a simple model of collective learning and show that its effect on collective performance differs depending on the complexity of the task and the network density. In particular, we find that diversity consistently impairs performance in simple tasks. In contrast, in complex tasks, link density modifies the effect of diversity: while homogeneous populations outperform diverse ones in sparse networks, the opposite is true in dense networks, where diversity boosts collective performance. Our findings also provide insight into how to forge teams in an increasingly interconnected world: the more we are connected, the more we can benefit" -"---\nabstract: 'We use algebraic geometry over pointed monoids to give an intrinsic interpretation for the compactification of the spectrum of the ring of integers of a number field $K$, for the projective line over algebraic extensions of ${{\\mathbb F}}_1$ and for maps between them induced by elements of $K$, as introduced by Alexander Smirnov in his approach to the ABC conjecture.'\naddress: 'Manoel Jarra, University of Groningen, the Netherlands, and IMPA, Rio de Janeiro, Brazil'\nauthor:\n- Manoel Jarra\nbibliography:\n- 'dimension.bib'\ntitle: 'On Smirnov\u2019s approach to the ABC conjecture'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nIn [@Smirnov93], Smirnov proposes an approach to the ABC conjecture based on the analogy between number fields and function fields of algebraic curves (see also [@LeBruyn16]).\n\nThe main idea is to consider the \u201ccompactification\u201d $\\overline{\\operatorname{Spec}{{\\mathbb Z}}}^\\textup{Smi}$ of $\\operatorname{Spec}{{\\mathbb Z}}$ as a curve over \u201cthe field with one element\u201d ${{\\mathbb F}}_1$. The curve $\\overline{\\operatorname{Spec}{{\\mathbb Z}}}^\\textup{Smi}$ is defined as the set of non-trivial places of the \u201cfunction field\u201d ${{\\mathbb Q}}$, [i.e.]{}, as the set $$\\{[2], [3], [5], [7], [11], \\dotsc\\} \\cup \\{[\\infty]\\},$$ where $[p]$ is the class of the $p$-adic valuation $$q \\mapsto v_p(q) = n \\quad \\text{if} \\quad q = p^n\\dfrac{a}{b} \\enspace \\text{ with }" -"---\nabstract: 'Quantum phase estimation (QPE) serves as a building block of many different quantum algorithms and finds important applications in computational chemistry problems. Despite the rapid development of quantum hardware, experimental demonstration of QPE for chemistry problems remains challenging due to its large circuit depth and the lack of quantum resources to protect the hardware from noise with fully fault-tolerant protocols. In the present work, we take a step towards fault-tolerant quantum computing by demonstrating a QPE algorithm on a Quantinuum trapped-ion computer. We employ a Bayesian approach to QPE and introduce a routine for optimal parameter selection, which we combine with a $\\llbracket n+2,n,2\\rrbracket$ quantum error detection code carefully tailored to the hardware capabilities. As a simple quantum chemistry example, we take a hydrogen molecule represented by a two-qubit Hamiltonian and estimate its ground state energy using our QPE protocol.
In the experiment, we use quantum circuits containing as many as 920 physical two-qubit gates to estimate the ground state energy within $6\\times 10^{-3}$ hartree of the exact value.'\nauthor:\n- Kentaro Yamamoto\n- Samuel Duffield\n- Yuta Kikuchi\n- David Mu\u00f1oz Ramo\nbibliography:\n- 'bib\\_prxq.bib'\ntitle: Demonstrating Bayesian Quantum Phase Estimation with Quantum Error Detection\n---" -"---\nabstract: 'We explore the possibility that a confining first-order phase transition of a nearly-conformal dark sector generates the reported NANOGrav signal of a stochastic gravitational wave background. The visible Standard Model (SM) sector and the dark sector are initially thermally decoupled so that their temperatures are different. The nearly conformal phase transition is described by the shallow potential of a dilaton (or a radion in the 5D holographic perspective) generated by a new dark Yang-Mills field coupled to the conformal sector. For a dark sector only gravitationally connected with the visible sector, the NANOGrav signal is explained by the phase transition without contradicting the $\\Delta N_{\\rm eff}$ constraint, together with a contribution from supermassive black hole binaries. While the dilaton and dark glueballs can be produced after the phase transition, they immediately decay into dark radiation, which can help ameliorate the Hubble tension and be tested by the future CMB-S4 experiment. Alternatively, for a dark conformal sector decaying into the visible sector after the phase transition, the $\\Delta N_{\\rm eff}$ constraint does not apply and the phase transition can solely explain the NANOGrav signal.'\nauthor:\n- |\n Kohei Fujikura$^{1}$,[^1] Sudhakantha Girmohanta$^{2,3}$,[^2] Yuichiro Nakai$^{2,3}$[^3] and Motoo Suzuki$^{4,5}\\footnote{\n E-mail address: }$\\" -"---\nabstract: 'Cryptocurrencies come with a variety of [*tokenomic*]{} policies as well as aspirations of desirable monetary characteristics that have been described by proponents as \u201csound money\u201d or even \u201cultra sound money.\u201d These propositions are typically devoid of economic analysis, so it is a pertinent question how such aspirations fit in the wider context of monetary economic theory. In this work, we develop a framework that determines the optimal token supply policy of a cryptocurrency, as well as investigate how such a policy may be algorithmically implemented. Our findings suggest that the optimal policy complies with the Friedman rule and it is dependent on the risk-free rate, as well as the growth of the cryptocurrency platform. Furthermore, we demonstrate a wide set of conditions under which such a policy can be implemented via contractions and expansions of token supply that can be realized algorithmically with block rewards, taxation of consumption and burning the proceeds, and blockchain oracles.'\nauthor:\n- |\n Aggelos Kiayias\\\n University of Edinburgh, IOG\\\n `akiayias@inf.ed.ac.uk`\n- |\n Philip Lazos\\\n IOG\\\n `philip.lazos@iohk.io`\n- |\n Jan Christoph Schlegel\\\n City, University of London\\\n `jansc@alumni.ethz.ch`\nbibliography:\n- 'main.bib'\ntitle: 'Would Friedman Burn your Tokens?'\n---\n\nIntroduction\n============\n\nTokenomics, referring to the algorithmic adjustment" -"---\nabstract: 'Aspect-based sentiment analysis is a long-standing research interest in the field of opinion mining, and in recent years, researchers have gradually shifted their focus from simple ABSA subtasks to end-to-end multi-element ABSA tasks.
However, the datasets currently used in research are limited to individual elements of specific tasks, usually focus on in-domain settings, ignore implicit aspects and opinions, and are small in scale. To address these issues, we propose a large-scale Multi-Element Multi-Domain dataset (MEMD) that covers the four elements across five domains, including nearly 20,000 review sentences and 30,000 quadruples annotated with explicit and implicit aspects and opinions for ABSA research. Meanwhile, we evaluate generative and non-generative baselines on multiple ABSA subtasks under the open domain setting, and the results show that open domain ABSA as well as mining implicit aspects and opinions remain ongoing challenges to be addressed. The datasets are publicly released at .'\nauthor:\n- |\n Hongjie Cai, Nan Song, Zengzhi Wang, Qiming Xie, Qiankun Zhao, Ke Li,\\\n , , , [^1]\\\n School of Computer Science and Engineering,\\\n Nanjing University of Science and Technology, China\\\n `{hjcai, nsong, zzwang, qmxie, kkzhao, kli, `\\\n `wusiwei, sjliu, jfyu, rxia}@njust.edu.cn`\\\nbibliography:\n- 'acl2021.bib'\ntitle: 'MEMD-ABSA: A" -"---\nabstract: |\n We present new explicit upper bounds for the smoothness of the distribution of the random diagonal sum $S_n=\sum_{j=1}^nX_{j,\pi(j)}$ of a random $n\times n$ matrix $X=(X_{j,r})$, where the $X_{j,r}$ are independent integer valued random variables, and $\pi$ denotes a uniformly distributed random permutation on $\{1,\dots,n\}$ independent of $X$. As a measure of smoothness, we consider the total variation distance between the distributions of $S_n$ and $1+S_n$. Our approach uses a new auxiliary inequality for a generalized normalized matrix hafnian, which could be of independent interest. This approach is also used to prove upper bounds of the L\u00e9vy concentration function of $S_n$ in the case of independent real valued random variables $X_{j,r}$.\\\n **Keywords:** generalized hafnian; Hoeffding permutation statistic; L\u00e9vy concentration function inequality; random diagonal sum; smoothness inequality\\\n **2020 Mathematics Subject Classification:** 60F05; 62E17.\nauthor:\n- |\n Bero Roos[^1]\\\n University of Trier\nbibliography:\n- 'sirds\\_32.bib'\ntitle: Smoothness and L\u00e9vy concentration function inequalities for distributions of random diagonal sums\n---\n\nIntroduction and main result {#s257568}\n============================\n\nSmoothness estimates and L\u00e9vy concentration function bounds (the latter sometimes also called anti-concentration bounds) for probability distributions are often useful in the proofs of distributional approximations or limit theorems, e.g.\u00a0see @MR0331448, @MR636780, @MR974089, @MR1368759," -"---\nabstract: |\n We report pore-scale statistical properties of temperature and thermal energy dissipation rate in a two-dimensional porous Rayleigh-B\u00e9nard (RB) cell. High-resolution direct numerical simulations were carried out for the fixed Rayleigh number ($Ra$) of $10^{9}$ and the Prandtl numbers ($Pr$) of 5.3 and 0.7. We consider sparse porous media where the solid porous matrix is impermeable to both fluid and heat flux. The porosity ($\phi$) range is $0.86 \leq \phi \le 0.98$, and the corresponding Darcy number ($Da$) range starts from $10^{-4}$. We run a dedicated algorithm to detect flares in the pipeline-produced lightcurves and find some of the most energetic flares observed to date within the NUV bandpass, with energies of $\sim10^{34}$ ergs.
Using GALEX data, we constrain flare frequency distributions for stars from M0 to M6 in the NUV up to $10^5\,$s in equivalent duration and $10^{34}$ ergs in energy, orders of magnitude above any previous study in the UV. We estimate the combined effect of NUV luminosities and flare rates of stars later than M2 to be sufficient for abiogenesis on habitable zone exoplanets orbiting them. As a counterpoint, we speculate that the high frequencies of energetic UV flares and associated coronal mass ejections would inhibit" -"---\nauthor:\n- David Alonso\n- Giulio Fabbian\n- 'Kate Storey-Fisher'\n- 'Anna-Christina Eilers'\n- 'Carlos Garc\u00eda-Garc\u00eda'\n- 'David W. Hogg'\n- 'Hans-Walter Rix'\nbibliography:\n- 'gaia\\_qsoXcorr.bib'\n- 'non\\_ads.bib'\ntitle: 'Constraining cosmology with the [*Gaia*]{}\u2013[*unWISE*]{} Quasar Catalog and CMB lensing: structure growth'\n---\n\nIntroduction {#sec:intro}\n============\n\nMuch of the progress in constraining the physical parameters governing the initial conditions and evolution of our Universe is currently driven by the analysis of tracers of the large-scale structure. In these analyses, we study the spatial distribution and time evolution of various tracers of the matter density fluctuations, which allows us to constrain the Universe\u2019s geometry, as well as the growth of structure within it. Two of the most powerful large-scale structure tracers are weak gravitational lensing and galaxy clustering [@2007.08991; @2007.15632; @2105.13549]. The former provides largely unbiased maps of the matter fluctuations integrated along the line of sight from the source redshift, while the latter is a biased tracer of these fluctuations at the redshifts of the galaxies being observed.\n\nThis complementarity (unbiased vs. biased, cumulative in redshift vs. local) motivates the combined analysis of weak lensing and galaxy clustering data in a technique commonly known as *lensing tomography* (often also labelled \u201c2$\times$2-point\u201d" -"---\nabstract: |\n Spinning black holes can transfer a significant fraction of their energy to ultralight bosonic fields via superradiance, condensing them in a co-rotating structure or \u201ccloud.\u201d This mechanism turns black holes into powerful particle detectors for bosons with extremely feeble interactions. To explore its full potential, the couplings between such particles and the Maxwell field in the presence of plasma need to be understood.\n\n In this work, we study these couplings using numerical relativity. We first focus on the coupled axion-Maxwell system evolving on a black hole background. By taking into account the axionic coupling concurrently with the growth of the cloud, we observe for the first time that a new stage emerges:\u00a0that of a stationary state where a constant flux of electromagnetic waves is fed by superradiance, for which we find accurate analytical estimates.
Moreover, we show that the existence of electromagnetic instabilities in the presence of plasma is entirely controlled by the axionic coupling; even for dense plasmas, an instability is triggered for high enough couplings.\nauthor:\n- 'Thomas F.M.\u00a0Spieksma'\n- Enrico Cannizzaro\n- Taishi Ikeda\n- Vitor Cardoso\n- Yifan Chen\nbibliography:\n- 'References.bib'\ntitle: 'Superradiance:\u00a0Axionic Couplings and Plasma Effects'\n---\n\nIntroduction" -"---\nabstract: 'This work delves into the realm of logic puzzles by focusing on the Knight and Knave problems popularized by Raymond Smullyan in his book series \u201cWhat is the Name of This Book?\" The puzzles revolve around characters known as Knights (truth-tellers) and Knaves (liars), challenging solvers to determine the true identity of each person based on their statements. This work explores the utilization of Python algorithms to automate the process of solving these puzzles, offering a computational approach that enhances efficiency and accessibility. In this research, we aim to develop a Python algorithm capable of parsing and analyzing the statements provided in the Knight and Knave puzzles. A logical reasoning framework is integrated within the algorithm to deduce the identities of the characters based on their statements. The algorithm processes the input statements, creates a knowledge base, and makes deductions following the rules of Knight and Knave logic. The developed algorithm is thoroughly tested on various instances of Knight and Knave puzzles, comparing its results to known solutions and manual approaches. We further expand the scope of the problem by introducing a Normal (who can sometimes lie and sometimes tell the truth).'\nauthor:\n- Ujaan Rakshit\n- Nishchal" -"---\nabstract: 'In the problem one is given an undirected graph $G = (V,E)$ and an integer $k$ and seeks to add or delete at most $k$ edges in $G$ to obtain a trivially perfect graph. In a recent work, proved that this problem admits a kernel with $O(k^3)$ vertices. This result heavily relies on the fact that the size of trivially perfect modules can be bounded by $O(k^2)$ as shown by Drange and Pilipczuk\u00a0[@DP18]. To obtain their cubic vertex-kernel, then showed that a more intricate structure, so-called *comb*, can be reduced to $O(k^2)$ vertices. In this work we show that the bound can be improved to $O(k)$ for both aforementioned structures and thus obtain a kernel with $O(k^2)$ vertices. Our approach relies on the straightforward yet powerful observation that any large enough structure contains unaffected vertices whose neighborhood remains unchanged by an editing of size $k$, implying strong structural properties.'\nauthor:\n- Ma\u00ebl Dumas\n- Anthony Perez\nbibliography:\n- 'bibliography.bib'\ntitle: An improved kernelization algorithm for \n---\n\n=1\n\nIntroduction {#sec:intro}\n============\n\nIn the problem one is given an undirected graph $G = (V,E)$ and an integer $k$ and seeks to *edit* (add or delete) at most $k$ edges" -"---\nabstract: 'This paper introduces a framework that utilizes the Safe Screening technique to accelerate the optimization process of the Unbalanced Optimal Transport (UOT) problem by proactively identifying and eliminating zero elements in the sparse solutions. We demonstrate the feasibility of applying Safe Screening to the UOT problem with $\ell_2$-penalty and KL-penalty by conducting an analysis of the solution\u2019s bounds and considering the local strong convexity of the dual problem.
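The Knight-and-Knave solver described in the record above lends itself to a compact brute-force formulation: enumerate all truth-teller/liar assignments and keep those consistent with every statement. A minimal sketch (my own illustration, not the authors' code; the statement encoding is an assumption):

    from itertools import product

    def solve(people, statements):
        """Brute-force Knight/Knave solver: `statements` maps each speaker to
        a predicate over an assignment (dict name -> True for Knight, False
        for Knave). A Knight's claim must be true, a Knave's must be false."""
        solutions = []
        for values in product([True, False], repeat=len(people)):
            world = dict(zip(people, values))
            if all(world[speaker] == claim(world)
                   for speaker, claim in statements.items()):
                solutions.append(world)
        return solutions

    # Classic instance: A says "we are both Knaves"; B stays silent.
    print(solve(["A", "B"], {"A": lambda w: not w["A"] and not w["B"]}))
    # -> [{'A': False, 'B': True}]: A is a Knave, B a Knight.

Extending the search space from {Knight, Knave} to a third Normal type, whose statements are left unconstrained, covers the generalization mentioned at the end of that abstract.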
Considering the specific structural characteristics of the UOT in comparison to general Lasso problems on the index matrix, we propose a novel approximate projection, an elliptical safe region construction, and a two-hyperplane relaxation method. These enhancements significantly improve the screening efficiency for UOT without altering the algorithm\u2019s complexity.'\nauthor:\n- 'Xun SU [^1]'\n- 'Zhongxi Fang [^2]'\n- 'Hiroyuki Kasai [^3]'\nbibliography:\n- 'ref.bib'\ntitle: Safe Screening for Unbalanced Optimal Transport\n---\n\nINTRODUCTION {#sec:int}\n============\n\nOptimal transport (OT), as a metric, has gained significant attention in the field of machine learning in recent years due to its remarkable ability to capture geometric relationships between data distributions. It has demonstrated impressive achievements in many fields [@Courty_PAMI_2017; @arjovsky2017wasserstein; @Chen_ICLR_2019; @Maretic_NIPS_2019]. To overcome the limitation of OT in handling data with" -"---\nabstract: |\n The randomized play-the-winner (RPW) model is a generalized P\u00f3lya Urn process with broad applications ranging from clinical trials to molecular evolution. We derive an exact expression for the variance of the RPW model by transforming the P\u00f3lya Urn process into a martingale, correcting an earlier result of Matthews and Rosenberger (1997). We then use this result to approximate the full probability mass function of the RPW model for certain parameter values relevant to genetic applications. Finally, we fit our model to genomic sequencing data of SARS-CoV-2, demonstrating a novel method of estimating the viral mutation rate that delivers results comparable to those in the existing scientific literature.\n\n **Keywords:** P\u00f3lya Urn models, branching processes, martingales, applied probability, computational genetics.\nauthor:\n- 'Ivan Specht\*'\n- Michael Mitzenmacher\ntitle: 'Analyzing Generalized P\u00f3lya Urn Models using Martingales, with an Application to Viral Evolution'\n---\n\nIntroduction\n============\n\nConsider the following generalized P\u00f3lya Urn model: An urn starts out with $u$ white balls and $v$ black balls, with $u + v > 0$. At each step $i=1, 2, 3, \dots$, a ball in the urn is chosen uniformly at random. If the chosen ball is white, a black ball is added to the urn with probability" -"---\nabstract: 'Egocentric action anticipation aims to predict the future actions the camera wearer will perform from the observation of the past. While predictions about the future should be available before the predicted events take place, most approaches do not pay attention to the computational time required to make such predictions. As a result, current evaluation schemes assume that predictions are available right after the input video is observed, i.e., presuming a negligible runtime, which may lead to overly optimistic evaluations. We propose a streaming egocentric action evaluation scheme which assumes that predictions are performed online and made available only after the model has processed the current input segment, which depends on its runtime. To evaluate all models considering the same prediction horizon, we hence propose that slower models should base their predictions on temporal segments sampled ahead of time.
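The UOT-specific screening rules above are specialized, but the safe-screening idea they build on is easiest to see on the standard Lasso, which that abstract itself uses as a reference point. A textbook gap-safe sphere test (a generic sketch, not the paper's approximate projection or elliptical region):

    import numpy as np

    def gap_safe_screen(X, y, beta, lam):
        """Gap-safe sphere test for the Lasso
        min_b 0.5*||y - X b||^2 + lam*||b||_1:
        with a dual-feasible theta and duality gap G, any feature j with
        |x_j' theta| + ||x_j|| * sqrt(2 G) / lam < 1 is provably inactive."""
        r = y - X @ beta
        s = min(1.0, lam / np.max(np.abs(X.T @ r)))  # rescale for feasibility
        theta = s * r / lam
        primal = 0.5 * r @ r + lam * np.abs(beta).sum()
        dual = 0.5 * y @ y - 0.5 * lam ** 2 * np.sum((theta - y / lam) ** 2)
        radius = np.sqrt(2.0 * max(primal - dual, 0.0)) / lam
        return np.abs(X.T @ theta) + radius * np.linalg.norm(X, axis=0) < 1.0

    rng = np.random.default_rng(1)
    X, y = rng.standard_normal((50, 200)), rng.standard_normal(50)
    lam = 0.9 * np.max(np.abs(X.T @ y))
    print("screened:", gap_safe_screen(X, y, np.zeros(200), lam).sum(), "/ 200")

Features flagged by the mask can be dropped from the optimization without changing the optimum, which is exactly the acceleration mechanism the UOT paper adapts.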
Based on the observation that model runtime can affect performance in the considered streaming evaluation scenario, we further propose a lightweight action anticipation model built on feed-forward 3D CNNs, optimized using knowledge distillation techniques with a novel past-to-future distillation loss. Experiments on the three popular datasets EPIC-KITCHENS-55, EPIC-KITCHENS-100 and EGTEA Gaze+ show that i) the proposed evaluation scheme" -"---\nabstract: 'Longitudinal studies with binary or ordinal responses are widely encountered in various disciplines, where the primary focus is on the temporal evolution of the probability of each response category. Traditional approaches build on the generalized mixed effects modeling framework. Even amplified with nonparametric priors placed on the fixed or random effects, such models are restrictive due to the implied assumptions on the marginal expectation and covariance structure of the responses. We tackle the problem from a functional data analysis perspective, treating the observations for each subject as realizations from subject-specific stochastic processes at the measured times. We develop the methodology focusing initially on binary responses, for which we assume the stochastic processes have Binomial marginal distributions. Leveraging the logits representation, we model the discrete space processes through sequences of continuous space processes. We utilize a hierarchical framework to model the mean and covariance kernel of the continuous space processes nonparametrically and simultaneously through a Gaussian process prior and an Inverse-Wishart process prior, respectively. The prior structure results in flexible inference for the evolution and correlation of binary responses, while allowing for borrowing of strength across all subjects. The modeling approach can be naturally extended to ordinal responses. Here," -"---\nabstract: 'Spatially homogeneous FLRW solutions constitute an infinite dimensional family of explicit solutions of the Einstein\u2013massless Vlasov system with vanishing cosmological constant. Each member expands towards the future at a decelerated rate. These solutions are shown to be nonlinearly future stable to compactly supported spherically symmetric perturbations, in the case that the spatial topology is that of $\mathbb{R}^3$. The decay rates of the energy momentum tensor components, with respect to an appropriately normalised double null frame, are compared to those around Minkowski space. When measured with respect to their respective $t$ coordinates, certain components decay faster around Minkowski space, while others decay faster around FLRW.'\nauthor:\n- Martin Taylor\nbibliography:\n- 'masslessVlasovFLRW.bib'\ndate: 'June 30, 2023'\ntitle: 'Future stability of expanding spatially homogeneous FLRW solutions of the spherically symmetric Einstein\u2013massless Vlasov system with spatial topology $\mathbb{R}^3$'\n---\n\nIntroduction\n============\n\nStandard homogeneous isotropic cosmological models in general relativity are described by the Friedmann\u2013Lema\u00eetre\u2013Robertson\u2013Walker (FLRW) spacetimes $$\label{eq:FLRWgeneral}\n \mathcal{M} = I \times \Sigma,\n \qquad\n g = -dt^2 + a(t)^2 g_{\Sigma},$$ where $I\subset \mathbb{R}$ is an open interval, $(\Sigma,g_{\Sigma})$ is a constant curvature manifold, and $a\colon I \to (0,\infty)$ is an appropriate *scale factor*.
See Section 5.3 of [@HaEl] for more on FLRW" -"---\nabstract: 'We study causal inference and efficient estimation for the expected number of recurrent events in the presence of a terminal event. We define our estimand as the vector comprising both the expected number of recurrent events and the failure survival function evaluated along a sequence of landmark times. We identify the estimand in the presence of right-censoring and causal selection as an observed data functional under coarsening at random, derive the nonparametric efficiency bound, and propose a multiply-robust estimator that achieves the bound and permits nonparametric estimation of nuisance parameters. Throughout, no absolute continuity assumption is made on the underlying probability distributions of failure, censoring, or the observed data. Additionally, we derive the class of influence functions when the coarsening distribution is known and review how published estimators may belong to the class. Along the way, we highlight some interesting inconsistencies in the causal lifetime analysis literature.'\nauthor:\n- |\n Benjamin R. Baer[^1], Robert L. Strawderman, Ashkan Ertefaie\\\n Department of Biostatistics and Computational Biology\\\n University of Rochester, Rochester, New York, U.S.A.\nbibliography:\n- 'references.bib'\ntitle: Causal inference for the expected number of recurrent events in the presence of a terminal event\n---\n\nIntroduction\n============\n\nSurvival analysis is a" -"---\nabstract: 'The nonlinear Schr\u00f6dinger equation (NLSE) is a rich and versatile model, which in one spatial dimension has stationary solutions similar to those of the linear Schr\u00f6dinger equation as well as more exotic solutions such as solitary waves and quantum droplets. We present a unified theory of the NLSE, showing that all stationary solutions of the cubic-quintic NLSE can be classified according to a single number called the cross-ratio. Any two solutions with the same cross-ratio can be converted into one another using a conformal transformation, and the same also holds true for traveling wave solutions. In this way we demonstrate a conformal duality between solutions of cubic-quintic NLSEs and lower-order NLSEs. The same analysis can be applied to the Newtonian dynamics of classical particles with polynomial potentials. Our framework provides a deeper understanding of the connections between the physics of the NLSE and the mathematics of algebraic curves and conformal symmetry.'\nauthor:\n- 'David B. Reinhardt'\n- Dean Lee\n- 'Wolfgang P. Schleich'\n- Matthias Meister\nbibliography:\n- 'biblio.bib'\ndate: 'July 14, 2023'\ntitle: Unified theory of the nonlinear Schr\u00f6dinger equation\n---\n\n*Introduction \u2013* The nonlinear Schr\u00f6dinger equation (NLSE) is ubiquitous in physics, where it plays a key" -"---\nabstract: 'Person re-identification (PRe-ID) is a crucial task in security, surveillance, and retail analysis, which involves identifying an individual across multiple cameras and views. However, it is a challenging task due to changes in illumination, background, and viewpoint. Efficient feature extraction and metric learning algorithms are essential for a successful PRe-ID system. This paper proposes a novel approach for PRe-ID, which combines a Convolutional Neural Network (CNN) based feature extraction method with Cross-view Quadratic Discriminant Analysis (XQDA) for metric learning. 
Additionally, a matching algorithm that employs the Mahalanobis distance and a score normalization process to address inconsistencies between camera scores is implemented. The proposed approach is tested on four challenging datasets, VIPeR, GRID, CUHK01, and PRID450S, and demonstrates its effectiveness through promising results.'\nauthor:\n- |\n \\\n \\\n \\\ntitle: 'Improving CNN-based Person Re-identification using score Normalization'\n---\n\nPRe-ID, Score Normalization, XQDA, CNN, feature extraction.\n\nIntroduction\n============\n\nPerson re-identification, or PRe-ID, involves recognizing an individual across different images or videos captured in a surveillance system [@himeur2023video]. This is critical in various real-time applications such as person retrieval, video monitoring, public safety, long-term human behavior analysis, and cross-camera tracking [@liu2021prgcn; @himeur2022deep;" -"---\nabstract: 'In this paper, we propose an adaptive sieving (AS) strategy for solving general sparse machine learning models by effectively exploring the intrinsic sparsity of the solutions, wherein only a sequence of reduced problems with much smaller sizes need to be solved. We further apply the proposed AS strategy to generate solution paths for large-scale sparse optimization problems efficiently. We establish the theoretical guarantees for the proposed AS strategy including its finite termination property. Extensive numerical experiments are presented in this paper to demonstrate the effectiveness and flexibility of the AS strategy to solve large-scale machine learning models.'\nauthor:\n- 'Yancheng Yuan, Meixia Lin, Defeng Sun, Kim-Chuan Toh'\ntitle: '**Adaptive sieving: A dimension reduction technique for sparse optimization problems** '\n---\n\n[**Keywords:**]{} Adaptive sieving, dimension reduction, sparse optimization problems\\\n[**AMS subject classification:**]{} 90C06, 90C25, 90C90\n\nIntroduction {#sec:intro}\n============\n\nConsider the convex composite optimization problems of the following form: $$\begin{aligned}\n \label{eq: lasso_model}\n \min_{x\in \mathbb{R}^n}\ \displaystyle \Big\{\Phi(x) + P(x)\Big\},\end{aligned}$$ where $\Phi:\mathbb{R}^n\rightarrow \mathbb{R}$ is a convex twice continuously differentiable function and $P: \mathbb{R}^n \to (-\infty,+\infty]$ is a closed and proper convex function. The optimization problems in this form cover a wide class of models in modern data science applications and statistical learning." -"---\nabstract: 'Chinese Text Error Correction (CTEC) aims to detect and correct errors in the input text, which benefits people\u2019s daily life and various downstream tasks. Recent approaches mainly employ Pre-trained Language Models (PLMs) to resolve the CTEC task and achieve tremendous success. However, previous approaches suffer from issues of over-correction and under-correction, and the former is especially conspicuous in the precision-critical CTEC task. To mitigate the issue of over-correction, we propose a novel model-agnostic progressive multi-task learning framework for CTEC, named [ProTEC]{}, which guides a CTEC model to learn the task from easy to difficult. We divide the CTEC task into three sub-tasks from easy to difficult: Error Detection, Error Type Identification, and Correction Result Generation.
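For the person re-identification record above, the matching step (a Mahalanobis distance under a learned metric, followed by score normalization) can be sketched in a few lines; here `M` is a placeholder for the XQDA-learned metric, and plain z-score normalization stands in for the paper's camera-score normalization:

    import numpy as np

    def match_scores(query, gallery, M):
        """Squared Mahalanobis distances (q - g)' M (q - g) between a query
        feature vector and every row of the gallery feature matrix."""
        d = gallery - query
        return np.einsum("ij,jk,ik->i", d, M, d)

    def z_norm(scores):
        # Z-score normalization, making scores from different cameras comparable.
        return (scores - scores.mean()) / (scores.std() + 1e-12)

    rng = np.random.default_rng(0)
    gallery = rng.standard_normal((100, 64))   # hypothetical CNN features
    query = rng.standard_normal(64)
    M = np.eye(64)                             # placeholder for the XQDA metric
    print("top-5:", np.argsort(z_norm(match_scores(query, gallery, M)))[:5])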
During the training process, [ProTEC]{} guides the model to learn text error correction progressively by incorporating these sub-tasks into a multi-task training objective. During the inference process, the model completes these sub-tasks in turn to generate the correction results. Extensive experiments and detailed analyses fully demonstrate the effectiveness and efficiency of our proposed framework.'\nauthor:\n- |\n Shirong Ma, Yinghui Li, Haojing Huang, Shulin Huang, Yangning Li,\\\n Hai-Tao Zheng\u00a0, and Ying Shen\u00a0 [^1][^2] [^3]\nbibliography:\n- 'IEEEabrv.bib'\ntitle: |\n Progressive Multi-task Learning Framework\\\n for" -"---\nabstract: 'This paper focuses on optimal beamforming to maximize the mean signal-to-noise ratio (SNR) for a passive reconfigurable intelligent surface (RIS)-aided multiple-input single-output (MISO) downlink system. We consider that both the direct and indirect (through RIS) links to the user experience correlated Rician fading. The assumption of passive RIS imposes the unit modulus constraint, which makes the beamforming problem non-convex. To tackle this issue, we apply semidefinite relaxation (SDR) for obtaining the optimal phase-shift matrix and propose an iterative algorithm to obtain the fixed-point solution for the statistically optimal transmit beamforming vector and RIS phase-shift matrix. Further, to measure the performance of the proposed beamforming scheme, we analyze key system performance metrics such as outage probability (OP) and ergodic capacity (EC). As in existing works, the OP and EC evaluations rely on the numerical computation of the proposed iterative algorithm, which does not clearly reveal the functional dependence of system performance on key parameters such as line-of-sight (LoS) components, correlated fading, number of reflecting elements, number of antennas at the base station (BS), and fading factor. In order to overcome this limitation, we derive closed-form expressions for the optimal beamforming vector and phase shift matrix along with OP for" -"---\nabstract: 'Deep learning-based food recognition has achieved remarkable progress in predicting food types given an eating occasion image. However, there are two major obstacles that hinder deployment in real-world scenarios. First, as new foods appear sequentially over time, a trained model needs to learn the new classes continuously without causing catastrophic forgetting of already learned knowledge of existing food types. Second, the distribution of food images in real life is usually long-tailed as a small number of popular food types are consumed more frequently than others, which can vary in different populations. This requires the food recognition method to learn from class-imbalanced data by improving the generalization ability on instance-rare food classes. In this work, we focus on long-tailed continual learning and aim to address both aforementioned challenges. As existing long-tailed food image datasets only consider the healthy population, we introduce two new benchmark food image datasets, VFN-INSULIN and VFN-T2D, which exhibit real-world food consumption for insulin takers and individuals with type 2 diabetes without taking insulin, respectively.
We propose a novel end-to-end framework for long-tailed continual learning, which effectively addresses catastrophic forgetting by applying an additional predictor for knowledge distillation to avoid misalignment" -"---\nabstract: 'Let $X$ be a projective variety over a number field $K$ endowed with a height function associated to an ample line bundle on $X$. Given an algebraic extension $F$ of $K$ with a sufficiently large Northcott number, we show that there are finitely many cycles in $X_{{\bar{\mathbb{Q}}}}$ of bounded degree defined over $F$. Fields $F$ with the required properties were explicitly constructed in [@fab] and [@oksa], motivating our investigation. We point out explicit specializations to canonical heights associated to abelian varieties and selfmaps of ${\mathbb{P}}^n$. As a crucial tool, we introduce a refinement of Northcott\u2019s theorem.'\naddress: 'Nuno Hultberg. University of Copenhagen, Institute of Mathematics, Universitetsparken 5, 2100 Copenhagen, Denmark'\nauthor:\n- Nuno Hultberg\nbibliography:\n- 'samp.bib'\ntitle: Fields with few small points\n---\n\nThere have recently been advances in the study of height properties of algebraic extensions of ${\mathbb{Q}}$ in [@fab] and [@oksa]. Let ${\mathcal N}$ denote the Northcott number with respect to the logarithmic Weil height. The key result of their work is the following theorem.\n\n\[fields\]For every $t \in [0,\infty]$ there exist sequences of prime numbers $(p_i)_{i \in {\mathbb{N}}}$, $(q_i)_{i \in {\mathbb{N}}}$, and $(d_i)_{i \in {\mathbb{N}}}$ such that the field $F = {\mathbb{Q}}((\frac{p_i}{q_i})^{1/d_i}| i" -"---\nabstract: 'We propose four-field and five-field Hu\u2013Washizu-type mixed formulations for nonlinear poroelasticity \u2013 a coupled fluid diffusion and solid deformation process \u2013 considering that the permeability depends on a linear combination of fluid pressure and dilation. As the determination of the physical strains is necessary, the first formulation is written in terms of the primal unknowns of solid displacement and pore fluid pressure as well as the poroelastic stress and the infinitesimal strain, and it considers strongly symmetric Cauchy stresses. The second formulation imposes stress symmetry in a weak sense and it requires the additional unknown of the solid rotation tensor. We study the unique solvability of the problem using the Banach fixed-point theory, properties of twofold saddle-point problems, and the Banach\u2013Ne\u010das\u2013Babu\u0161ka theory. We propose monolithic Galerkin discretisations based on conforming Arnold\u2013Winther for poroelastic stress and displacement, and either PEERS or Arnold\u2013Falk\u2013Winther finite element families for the stress-displacement-rotation field variables. The wellposedness of the discrete problem is established as well, and we show a priori error estimates in the natural norms. Some numerical examples are provided to confirm the rates of convergence predicted by the theory, and we also illustrate the use of the formulation in some typical tests in" -"---\nabstract: 'Feature alignment is the primary means of fusing multimodal data. We propose a feature alignment method that fully fuses multimodal information, which alternately shifts and expands feature information from different modalities to have a consistent representation in a feature space.
The proposed method can robustly capture high-level interactions between features of different modalities, thus significantly improving the performance of multimodal learning. We also show that the proposed method outperforms other popular multimodal schemes on multiple tasks. Experimental evaluation on the ETT and MIT-BIH-Arrhythmia datasets shows that the proposed method achieves state-of-the-art performance.'\naddress: 'Xi\u2019an Jiaotong-Liverpool University'\nauthor:\n- '\u00a0[^1]'\n- \u00a0\n- \u00a0\n- \u00a0\n- \u00a0\n- \u00a0\nbibliography:\n- 'ecai.bib'\ntitle: 'Alternative Telescopic Displacement: An Efficient Multimodal Alignment Method'\n---\n\nIntroduction\n============\n\nMultimodal data, such as images, audio, and text, are ubiquitous in the real world. Each modality has its own set of eigenvectors, which are located in different subspaces [@lahat_multimodal_2015]. Consequently, they have different distributions and statistical properties, so vectors with similar semantics are represented differently in different subspaces. This phenomenon is commonly known" -"---\nabstract: 'We employ the microcanonical inflection-point analysis method, developed for the systematic identification and classification of phase transitions in systems of any size, to study the two-dimensional Ising model at various lattice sizes and in the thermodynamic limit. Exact results for the density of states, which were obtained by exact algorithmic computation, provide evidence for higher-order transitions in addition to the well-studied second-order ferromagnetic-paramagnetic phase transition. An independent third-order phase transition is identified in the ferromagnetic phase, whereas another third-order transition resides in the paramagnetic phase. The latter is a dependent transition, i.e., it is inevitably associated with the critical transition, but it remains separate from the critical point in the thermodynamic limit. For a deeper insight into the nature of these additional transitions, a detailed analysis of spin clusters is performed.'\nauthor:\n- Kedkanok Sitarachu\n- Michael Bachmann\ntitle: |\n Evidence for Additional Third-Order Transitions\\\n in the Two-Dimensional Ising Model\n---\n\nIntroduction\n============\n\nThe (Lenz-)Ising model was introduced about a century ago for studies of the impacts of attractive local spin-spin interaction upon macroscopic cooperative ordering across the entire system\u00a0[@lenz1; @ising1]. As it turned out, the one-dimensional spin chain does not exhibit signs of a thermodynamic phase
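Microcanonical inflection-point analysis, as used in the Ising record above, works directly on the density of states. A toy illustration (my sketch, not the authors' exact-computation code): brute-force g(E) for a tiny periodic 4x4 lattice, form the microcanonical entropy S(E) = ln g(E), and inspect derivatives of beta(E) = dS/dE, whose inflection points signal transitions:

    import numpy as np
    from itertools import product

    L = 4  # tiny periodic lattice: exact enumeration of 2**(L*L) spin states

    def energy(s):
        # Nearest-neighbour Ising energy with periodic boundaries, J = 1.
        return -int((s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum())

    g = {}                                   # exact density of states g(E)
    for bits in product((-1, 1), repeat=L * L):
        E = energy(np.array(bits).reshape(L, L))
        g[E] = g.get(E, 0) + 1

    E = np.array(sorted(g))
    S = np.log([g[e] for e in E])            # microcanonical entropy S(E)
    beta = np.gradient(S, E)                 # beta(E) = dS/dE
    gamma = np.gradient(beta, E)             # transitions show up as
    print(np.round(beta, 3))                 # inflection points, i.e. extrema
    print(np.round(gamma, 3))                # or sign structure of gamma(E)

The paper works with exact g(E) for far larger lattices; this sketch only shows where the derivatives that the method classifies come from.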
Specifically, we address questions of uniqueness and convergence rate similar to those found in [@AHLBERG; @Birkhoff_deBoor; @Hall; @Pruess; @Spath]. The first family of splines satisfies $s\in C^{2k-2}[a,b]$ and $$\label{polyhyperbolic}\n(D^2-\alpha^2)^k s =0\quad\text{ on } [a,b]\setminus X,$$ where $\alpha>0$ and $X$ is a partition of $[a,b]$. Following [@ledford], we call these *k-th order polyhyperbolic* splines. These splines are examples of $L$-splines [@Schumaker_book Ch. 10]. The hyperbolic designation was chosen because the homogeneous solution of is given by $$s_\alpha(x)=p(x)\cosh(\alpha x)+q(x)\sinh(\alpha x),$$ where $p$ and $q$ are polynomials of degree at most $k-1.$ We note that, in the literature," -"---\nabstract: 'The environment-dependent dilaton field is a well-motivated candidate for dark energy and naturally arises in the strong coupling limit of string theory. In this article, we present the very first experimental constraints on the parameters of this model. For this, we employ data obtained from the [[*[q]{}*]{}]{} collaboration and the Lunar Laser Ranging (LLR) experiment. Furthermore, we forecast expected exclusion plots for the Casimir And Non Newtonian force EXperiment () soon to be realised in an improved setup. Finally, we provide a detailed analysis of the screening mechanism and additional symmetries of the dilaton field theory.'\nauthor:\n- 'Hauke Fischer, Christian K\u00e4ding, Ren\u00e9 I.P. Sedmik, Hartmut Abele'\n- Philippe Brax\n- Mario Pitschmann\nbibliography:\n- 'refs.bib'\ntitle: |\n Search for environment-dependent dilatons\\\n *Preprint Version*\n---\n\nIntroduction\n============\n\nThe origin of dark energy is one of the greatest puzzles in modern physics. Unexpectedly, type Ia supernovae data have revealed that our Universe is currently expanding at an accelerated rate\u00a0[@SupernovaCosmologyProject:1997zqe; @SupernovaSearchTeam:1998fmf; @SupernovaSearchTeam:1998bnz]. This has been confirmed by many other cosmological probes.\n\nThe theoretical framework describing the Universe on cosmological scales is general relativity (GR). As GR is a crucial ingredient in the interpretation of cosmological observations, it seems natural" -"---\nabstract: 'Image composition refers to inserting a foreground object into a background image to obtain a composite image. In this work, we focus on generating plausible shadows for the inserted foreground object to make the composite image more realistic. To supplement the existing small-scale dataset, we create a large-scale dataset called RdSOBA with rendering techniques. Moreover, we design a two-stage network named DMASNet with **d**ecomposed **m**ask prediction and **a**ttentive **s**hadow filling. Specifically, in the first stage, we decompose shadow mask prediction into box prediction and shape prediction. In the second stage, we attend to reference background shadow pixels to fill the foreground shadow. Abundant experiments prove that our DMASNet achieves better visual effects and generalizes well to real composite images.'
\nauthor:\n- 'Xinhao Tao^1^, Junyan Cao^1^, Yan Hong^2^, Li Niu^1^[^2]'\nbibliography:\n- 'main.bib'\n- 'supp.bib'\ntitle: Shadow Generation with Decomposed Mask Prediction and Attentive Shadow Filling\n---\n\nIntroduction" -"---\nabstract: 'The methods of single transferable vote (STV) and sequential ranked-choice voting (RCV) are different methods for electing a set of winners in multiwinner elections. STV is a classical voting method that has been widely used internationally for many years. By contrast, sequential RCV has rarely been used, and only recently has seen an increase in usage as several cities in Utah have adopted the method to elect city council members. We use Monte Carlo simulations and a large database of real-world ranked-choice elections to investigate the behavior of sequential RCV by comparing it to STV. Our general finding is that sequential RCV often produces different winner sets than STV. Furthermore, sequential RCV is best understood as an excellence-based method which will not produce proportional results, often at the expense of minority interests.'\naddress:\n- 'David McCune, Department of Mathematics and Data Science, William Jewell College, 500 College Hill, Liberty, MO, 64068-1896'\n- 'Erin Martin, Department of Mathematics, Brigham Young University, Provo, UT 84602'\n- 'Grant Latina, William Jewell College, 500 College Hill, Liberty, MO, 64068-1896'\n- 'Kaitlyn Simms, William Jewell College, 500 College Hill, Liberty, MO, 64068-1896'\nauthor:\n- David McCune\n- Erin Martin\n- Grant Latina\n-
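Sequential RCV, as studied in the election record above, is operationally simple: run single-winner instant-runoff voting (IRV), delete the winner from every ballot, and repeat until all seats are filled. A minimal sketch (tie-breaking and STV's quota-and-transfer logic are deliberately omitted):

    from collections import Counter

    def irv_winner(ballots):
        """Single-winner instant-runoff: repeatedly eliminate the candidate
        with the fewest first-choice votes until one has a majority."""
        remaining = {c for b in ballots for c in b}
        while True:
            tally = Counter(next(c for c in b if c in remaining)
                            for b in ballots if any(c in remaining for c in b))
            top, votes = tally.most_common(1)[0]
            if votes * 2 > sum(tally.values()) or len(remaining) == 1:
                return top
            remaining.remove(min(tally, key=tally.get))

    def sequential_rcv(ballots, seats):
        """Elect `seats` winners: run IRV, delete the winner everywhere, repeat."""
        winners = []
        for _ in range(seats):
            w = irv_winner(ballots)
            winners.append(w)
            ballots = [[c for c in b if c != w] for b in ballots]
        return winners

    ballots = [["A", "B", "C"]] * 10 + [["B", "C", "A"]] * 9 + [["C", "B", "A"]] * 2
    print(sequential_rcv(ballots, seats=2))   # ['B', 'C'] on this profile

Because each round optimizes a single winner in isolation, nothing in this procedure enforces proportionality, which is the excellence-based behavior the abstract describes.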
The first part of the series [@slowOCVp1] introduced various sources of uncertainties in the OCV models and established a theoretical relationship between uncertainties and the performance of a battery management system. In this paper, clearly defined approaches for low-rate OCV data collection are described in detail. The data collection is designed with consideration of several parameters that affect the experimental time. Firstly, a more suitable method to fully charge the battery at different C-Rates is defined. Secondly, the OCV characterization following the full charge is described for various performance comparisons. Finally, optimal and efficient resistance estimation profiles are discussed. From the voltage, current and time data recorded using the procedure described in this paper, the OCV-SOC relationship is characterized and its uncertainties are modeled in the third part [@slowOCVp3] of this series of papers.'\nauthor:\n- \nbibliography:\n- 'References.bib'\n- 'literature\\_BFG.bib'\ntitle: |\n Performance Analysis of Empirical Open-Circuit Voltage Modeling in Lithium Ion Batteries,\\\n Part-2: Data Collection Procedure\n---\n\nOCV-SOC modeling, OCV modeling, OCV-SOC characterization, OCV characterization, Li-ion batteries," -"---\nabstract: 'In this paper, we first indicate that the block error event of polar codes under successive cancellation list (SCL) decoding is composed of the path loss (PL) error event and the path selection (PS) error event, where the PL error event is that the correct codeword is lost during the SCL decoding and the PS error event is that the correct codeword is reserved in the decoded list but not selected as the decoded codeword. Then, we simplify the PL error event by assuming the all-zero codeword is transmitted and derive the probability lower bound via the joint probability density of the log-likelihood ratios of information bits. Meanwhile, the union bound calculated by the minimum weight distribution is used to evaluate the probability of the PS error event. With the performance analysis, we design a greedy bit-swapping (BS) algorithm to construct polar codes by gradually swapping information bits and frozen bits to reduce the performance lower bound of SCL decoding. The simulation results show that the BLER performance of SCL decoding is close to the lower bound in the medium to high signal-to-noise ratio region and we can optimize the lower bound to improve the BLER performance of SCL decoding by the" -"---\nabstract: 'Demystifying complex human-ground interactions is essential for accurate and realistic 3D human motion reconstruction from RGB videos, as it ensures consistency between the humans and the ground plane. Prior methods have modeled human-ground interactions either implicitly or in a sparse manner, often resulting in unrealistic and incorrect motions when faced with noise and uncertainty. In contrast, our approach explicitly represents these interactions in a dense and continuous manner. To this end, we propose a novel [[**Gr**ound-**a**ware **M**otion **M**odel for 3D Hum**a**n Motion **R**econstruction]{}]{}, named ****, which jointly learns the distribution of transitions in both pose and interaction between every joint and ground plane at each time step of a motion sequence. It is trained to explicitly promote consistency between the motion and distance change towards the ground.
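For the battery OCV record above, the SOC reference against which the OCV curve is characterized is conventionally obtained by Coulomb counting from the logged current and time data. A minimal sketch (the sign convention and the C/30 example are assumptions of this illustration, not the paper's protocol):

    import numpy as np

    def coulomb_count(current, time, q_batt, soc0=1.0):
        """SOC by Coulomb counting: SOC(t) = SOC(0) - (1/Q) * integral of i dt,
        with current in A (positive = discharge), time in s, q_batt in As."""
        charge = np.concatenate(([0.0], np.cumsum(
            0.5 * (current[1:] + current[:-1]) * np.diff(time))))  # trapezoid
        return soc0 - charge / q_batt

    # Hypothetical C/30 constant-current discharge of a 1.5 Ah (5400 As) cell.
    t = np.arange(0.0, 30 * 3600.0, 10.0)
    i = np.full_like(t, 1.5 / 30.0)
    print("final SOC:", round(coulomb_count(i, t, q_batt=1.5 * 3600.0)[-1], 3))

Pairing each logged terminal voltage with the SOC computed this way (after resting or resistance compensation) yields the OCV-SOC pairs the series characterizes.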
After training, we establish a joint optimization strategy that utilizes as a dual-prior, regularizing the optimization towards the space of plausible ground-aware motions. This leads to realistic and coherent motion reconstruction, irrespective of the assumed or learned ground plane. Through extensive evaluation on the AMASS and AIST++ datasets, our model demonstrates good generalization and discriminating abilities in challenging cases including complex and ambiguous human-ground interactions. The code will be available at" -"---\nabstract: 'Driven by the need for more efficient and seamless integration of physical models and data, physics-informed neural networks (PINNs) have seen a surge of interest in recent years. However, ensuring the reliability of their convergence and accuracy remains a challenge. In this work, we propose an efficient, gradient-less weighting scheme for PINNs that accelerates the convergence of dynamic or static systems. This simple yet effective attention mechanism is a function of the evolving cumulative residuals and aims to make the optimizer aware of problematic regions at no extra computational cost and without adversarial learning. We illustrate that this general method consistently achieves a relative $L^{2}$ error of the order of $10^{-5}$ using standard optimizers on typical benchmark cases from the literature. Furthermore, by investigating the evolution of weights during training, we identify two distinct learning phases reminiscent of the fitting and diffusion phases proposed by the information bottleneck (IB) theory. Subsequent gradient analysis supports this hypothesis by aligning the transition from high to low signal-to-noise ratio (SNR) with the transition from fitting to diffusion regimes of the adopted weights. This novel correlation between PINNs and IB theory could open future possibilities for understanding the underlying mechanisms behind the training" -"---\nabstract: 'We combine the *ab initio* symmetry-adapted no-core shell model (SA-NCSM) with the single-particle Green\u2019s function approach to construct optical potentials rooted in first principles. Specifically, we show that total cross sections and phase shifts for neutron elastic scattering from a $^4$He target with projectile energies between 0.5 and 10 MeV closely reproduce the experiment. In addition, we discuss an important new development that resolves a long-standing issue with spurious center-of-mass motion in the Green\u2019s function formalism for many-body approaches. The new development opens the path for first-principle predictions of cross sections for elastic scattering of single-nucleon projectiles, nucleon capture and deuteron breakup reactions, feasible for a broad range of open-shell spherical and deformed nuclei in the SA-NCSM approach.'\nauthor:\n- 'M. Burrows'\n- 'K. D. Launey'\n- 'A. Mercenne'\n- 'R. B. Baker'\n- 'G. H. Sargsyan'\n- 'T. Dytrych'\n- 'D. Langr'\nbibliography:\n- 'sancsmgf.bib'\ntitle: '*Ab initio* translationally invariant nucleon-nucleus optical potentials '\n---\n\nIntroduction\n============\n\nRemarkable progress has been made in recent years in the development of many-body approaches from first principles to scattering and nuclear reactions (see Refs.
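A cumulative-residual weighting of the kind described in the PINN record above can be realized as a gradient-free running update. The following is a speculative reconstruction under stated assumptions (an exponential moving average with decay gamma and rate eta), not the authors' exact rule:

    import numpy as np

    def update_weights(w, residuals, gamma=0.99, eta=0.01):
        """Gradient-free attention weights for PINN collocation points: an
        exponential moving average of normalized residual magnitudes, so
        persistently high-residual regions accumulate weight over training."""
        r = np.abs(residuals)
        return gamma * w + eta * r / (r.max() + 1e-12)

    w = np.zeros(5)
    for _ in range(300):
        res = np.array([0.1, 0.1, 1.0, 0.1, 0.1])  # hypothetical PDE residuals
        w = update_weights(w, res)
        # a weighted loss such as mean(w * res**2) would drive the optimizer
    print(np.round(w, 2))   # the stubborn third point dominates the weights

Because the update uses only residual magnitudes already computed for the loss, it adds essentially no cost, which matches the "gradient-less, no extra computational cost" framing of that abstract.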
[@FRIBTAwhite2018; @1402-4896-91-5-053002; @0954-3899-41-12-123002] for reviews), including, e.g., studies of elastic scattering [@NollettPWCH07; @HagenDHP07; @PhysRevLett.101.092501; @ElhatisariLRE15; @QuaglioniN09;" -"---\nabstract: 'We investigate the Casimir-Lifshitz force (CLF) [between two identical graphene strip gratings, laid on finite dielectric substrates, by using the scattering matrix (S-matrix) approach derived from the Fourier Modal Method with Local Basis Functions (FMM-LBF)]{}. We fully take into account the high-order electromagnetic diffractions, the multiple scattering and the exact 2D feature of the graphene strips. We show that the non-additivity, which is one of the most interesting features of the CLF in general, is significantly high and can be modulated [*[in situ]{}*]{}, without any change in the actual material geometry and this by [varying]{} the graphene chemical potential. [We discuss the nature of the geometrical effects and show the relevance of the geometric parameter $d/D$ (i.e. the ratio between separation and grating period), which allows to explore the regions of parameters where the additive result is fully acceptable or where the full calculation is needed.]{} This study can open to deeper experimental exploration of the non-additive features of the CLF with micro- or nano-electromechanical graphene-based systems.'\nauthor:\n- Youssef Jeyar\n- Minggang Luo\n- Kevin Austry\n- Brahim Guizal\n- Yi Zheng\n- 'H. B. Chan'\n- Mauro Antezza\nbibliography:\n- 'CLF.bib'\ntitle: 'Tunable non-additivity in Casimir-Lifshitz" -"---\nabstract: |\n In many applications of evolutionary algorithms the computational cost of applying operators and storing populations is comparable to the cost of fitness evaluation. Furthermore, by knowing what exactly has changed in an individual by an operator, it is possible to recompute fitness value much more efficiently than from scratch. The associated time and memory improvements have been available for simple evolutionary algorithms, few specific genetic algorithms and in the context of gray-box optimization, but not for all algorithms, and the main reason is that it is difficult to achieve in algorithms using large arbitrarily structured populations.\n\n This paper makes a first step towards improving this situation. We show that storing the population as a minimum spanning tree, where vertices correspond to individuals but only contain meta-information about them, and edges store structural differences, or *patches*, between the individuals, is a viable alternative to the straightforward implementation. Our experiments suggest that significant, even asymptotic, improvements\u00a0\u2014 including execution of crossover operators!\u00a0\u2014 can be achieved in terms of both memory usage and computational costs.\nauthor:\n- Maxim Buzdalov\nbibliography:\n- '../../../../bibliography.bib'\ntitle: Improving Time and Memory Efficiency of Genetic Algorithms by Storing Populations as Minimum Spanning Trees of" -"---\nabstract: 'We study joint downlink-uplink beamforming design for wireless federated learning (FL) with a multi-antenna base station. Considering analog transmission over noisy channels and uplink over-the-air aggregation, we derive the global model update expression over communication rounds. We then obtain an upper bound on the expected global loss function, capturing the downlink and uplink beamforming and receiver noise effect. 
We propose a low-complexity joint beamforming algorithm to minimize this upper bound, which employs alternating optimization to break down the problem into three subproblems, each solved via closed-form gradient updates. Simulations under a practical wireless system setup show that our proposed joint beamforming design solution substantially outperforms the conventional separate-link design approach and nearly attains the performance of ideal FL with error-free communication links.'\nauthor:\n- |\n Chong Zhang$^{\star}$, Min Dong$^{\dagger}$, Ben Liang$^{\star}$, Ali Afana$^{\ddagger}$, Yahia Ahmed$^{\ddagger}$\\\n $^{\star}$Dept. of Electrical and Computer Engineering, University of Toronto, Canada, $^{\ddagger}$Ericsson Canada, Canada\\\n $^{\dagger}$Dept. of Electrical, Computer and Software Engineering, Ontario Tech University, Canada[^1]\nbibliography:\n- 'Refs.bib'\ntitle: 'Joint Downlink-Uplink Beamforming for Wireless Multi-Antenna Federated Learning'\n---\n\nIntroduction {#sec:intro}\n============\n\nFederated learning (FL) is a widely recognized machine learning method to process training data locally at multiple worker nodes. In FL, a parameter server organizes" -"---\nabstract: 'Poisson noise commonly occurs in images captured by photon-limited imaging systems such as in astronomy and medicine. As the distribution of Poisson noise depends on the pixel intensity value, noise levels vary from pixel to pixel. Hence, denoising a Poisson-corrupted image while preserving important details can be challenging. In this paper, we propose a Poisson denoising model by incorporating the weighted anisotropic\u2013isotropic total variation (AITV) as a regularization. We then develop an alternating direction method of multipliers combined with a proximal operator for an efficient implementation. Lastly, numerical experiments demonstrate that our algorithm outperforms other Poisson denoising methods in terms of image quality and computational efficiency.'\naddress: |\n $^{1}$Department of Mathematics; University of California, Irvine; Irvine, CA 92697, United States\\\n $^{2}$Department of Mathematical Sciences; University of Texas, Dallas; Richardson, TX 75080, United States\\\n $^{3}$Department of Mathematics & Computer Science; Whittier College; Whittier, CA 90602, United States \nbibliography:\n- 'test.bib'\ntitle: 'Weighted Anisotropic \u2013 Isotropic Total Variation for Poisson Denoising '\n---\n\nPoisson noise, total variation, nonconvex optimization, ADMM, proximal operator\n\nIntroduction\n============\n\nIn various applications such as astronomy [@lanteri2005restoration] and medicine [@vardi1985statistical], photon-counting devices are utilized to capture images. However, these images are susceptible to Poisson
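Spelling out the Poisson/AITV model referenced above (with the standard negative Poisson log-likelihood as data fidelity; the weight $\alpha$ and the discrete gradient conventions are my assumptions), the denoising problem reads $$\min_{u>0}\;\sum_i \big(u_i - f_i\log u_i\big) + \lambda\Big(\|\nabla u\|_1 - \alpha\,\|\nabla u\|_{2,1}\Big),\qquad 0\le\alpha\le 1,$$ where $f$ is the observed image, $\|\nabla u\|_1$ is anisotropic TV, and $\|\nabla u\|_{2,1}=\sum_i\sqrt{(\nabla_x u)_i^2+(\nabla_y u)_i^2}$ is isotropic TV. ADMM then splits off the gradient variable so that the nonconvex $\ell_1-\alpha\ell_2$ term can be handled through its proximal operator, which is the combination the abstract describes.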
This fluctuation in different adaptation directions hinders convergence in meta-learning. To overcome this challenge, we use the historical local adapted model to restrict the direction of the inner loop and propose an elastic-constrained method. As a result, the inner loop of the current round retains historical goals while adapting to better solutions. Experiments show our method boosts meta-learning convergence and improves personalization without additional computation or communication. Our method achieves SOTA on all metrics on three public datasets.'\nauthor:\n- Peng Lan\n- Donglai Chen\n- Chong Xie\n- Keshu Chen\n- Jinyuan He\n- |" -"---\nabstract: 'We study the production of a stochastic gravitational wave background from an early dark energy (EDE) model. It is caused by resonant amplification of scalar field fluctuations, which easily takes place for typical EDE potentials based on the string axion or $\alpha$-attractor model. The resultant spectrum of the gravitational wave background is computed by performing 3D lattice simulations. We show that, specifically in some class of generalized $\alpha$-attractor EDE models, a significant amount of gravitational waves can be produced via tachyonic instability with a peak around the femto-Hz frequency range. Models predicting such gravitational waves can be constrained by cosmic microwave background observations.'\nbibliography:\n- 'ref.bib'\n---\n\n[**Stochastic gravitational wave background\\\nfrom early dark energy** ]{}\n\nNaoya Kitajima$^{\,a,b}$ and Tomo Takahashi$^{\,c}$\\\n1.0cm\n\n[*$^a$Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai 980-8578, Japan\\\n$^b$Department of Physics, Tohoku University, Sendai 980-8578, Japan\\\n$^c$Department of Physics, Saga University, Saga 840-8502, Japan* ]{}\n\n1.0cm\n\nIntroduction {#sec:intro}\n============\n\nThe recent observational discrepancy of the Hubble constant $H_0$, known as the Hubble tension, where the values of $H_0$ derived from direct and indirect measurements are inconsistent at almost 5$\sigma$ level (see e.g., [@DiValentino:2021izs; @Perivolaropoulos:2021jda] for a review of the current status of the tension), motivates us to
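An elastic-constrained inner loop of the kind described in the meta-learning record above can be sketched as a proximal pull toward the previous round's adapted model (my reconstruction; the penalty weight mu, step size, and step count are assumptions):

    import numpy as np

    def inner_loop(theta0, theta_hist, grad, alpha=0.05, mu=0.5, steps=10):
        """Personalization inner loop with an elastic constraint: each step
        follows the local loss gradient plus a pull toward the previous
        round's adapted model, damping fluctuation of the adaptation goal."""
        theta = theta0.copy()
        for _ in range(steps):
            theta -= alpha * (grad(theta) + mu * (theta - theta_hist))
        return theta

    target = np.array([1.0, -2.0])            # optimum of a toy client loss
    grad = lambda th: th - target             # gradient of 0.5*||th - target||^2
    theta_meta = np.zeros(2)                  # shared meta-initialization
    theta_prev = np.array([0.8, -1.5])        # last round's adapted model
    print(inner_loop(theta_meta, theta_prev, grad))

The proximal term adds only an elementwise operation per step, consistent with the claim of no additional computation or communication.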
CCA can be viewed as a generalization of principal component analysis (PCA) from one set of variables to two: in PCA the" -"---\nbibliography:\n- 'refs.bib'\n---\n\n[**Modeling the $R$-ratio and hadronic contributions to $g-2$ with a Treed Gaussian Process**]{}\n\n> [^1]\n>\n> \\\n> [*Department of Physics, School of Mathematics and Physics, Xi\u2019an Jiaotong-Liverpool University, Suzhou 215123, China*\\\n> ]{}\n>\n> [^2]\n>\n> \\\n> [*Department of Physics and Institute of Theoretical Physics, Nanjing\\\n> Normal University, Nanjing, Jiangsu 210023, China*\\\n> ]{}\n\n> The BNL and FNAL measurements of the anomalous magnetic moment of the muon disagree with the Standard Model (SM) prediction by more than $4\sigma$. The hadronic vacuum polarization (HVP) contributions are the dominant source of uncertainty in the SM prediction. There are, however, tensions between different estimates of the HVP contributions, including data-driven estimates based on measurements of the $R$-ratio. To investigate that tension, we modeled the unknown $R$-ratio as a function of CM energy with a treed Gaussian process (TGP). This is a principled and general method grounded in data science that allows complete uncertainty quantification and automatically balances over- and under-fitting to noisy data. Our tool yields exploratory results that are similar to previous ones and we find no indication that the $R$-ratio was previously mismodeled. Whilst we advance some aspects of modeling the
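For reference, the classical CCA procedure whose high-dimensional behavior the CCA record above analyzes fits in a few lines: center the data, whiten each block with the inverse square root of its sample covariance, and take the SVD of the whitened cross-covariance (a textbook sketch; the ridge term `reg` is added only for numerical stability):

    import numpy as np

    def cca(X, Y, reg=1e-8):
        """Classical CCA via whitening + SVD: returns canonical correlations
        and the vectors defining the canonical variables for each block."""
        X, Y = X - X.mean(0), Y - Y.mean(0)
        n = X.shape[0]
        Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
        Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])

        def inv_sqrt(S):
            vals, vecs = np.linalg.eigh(S)
            return vecs @ np.diag(vals ** -0.5) @ vecs.T

        Wx, Wy = inv_sqrt(Sxx), inv_sqrt(Syy)
        U, corrs, Vt = np.linalg.svd(Wx @ (X.T @ Y / n) @ Wy)
        return corrs, Wx @ U, Wy @ Vt.T

    rng = np.random.default_rng(0)
    z = rng.standard_normal((500, 1))                       # shared latent
    X = np.hstack([z, rng.standard_normal((500, 3))])
    Y = np.hstack([z + 0.5 * rng.standard_normal((500, 1)),
                   rng.standard_normal((500, 2))])
    print("canonical correlations:", np.round(cca(X, Y)[0], 2))

The vectors returned by this procedure are exactly the objects whose inconsistency in the proportional-dimension regime the paper establishes.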