diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2022_06.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2022_06.jsonl" new file mode 100644--- /dev/null +++ "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/temporal_arxiv_2022_06.jsonl" @@ -0,0 +1,1000 @@ +"---\nabstract: 'This study addresses an image-matching problem in challenging cases, such as large scene variations or textureless scenes. To gain robustness to such situations, most previous studies have attempted to encode the global contexts of a scene via graph neural networks or transformers. However, these contexts do not explicitly represent high-level contextual information, such as structural shapes or semantic instances; therefore, the encoded features are still not sufficiently discriminative in challenging scenes. We propose a novel image-matching method that applies a topic-modeling strategy to encode high-level contexts in images. The proposed method trains latent semantic instances called topics. It explicitly models an image as a multinomial distribution of topics, and then performs probabilistic feature matching. This approach improves the robustness of matching by focusing on the same semantic areas between the images. In addition, the inferred topics provide interpretability for matching the results, making our method explainable. Extensive experiments on outdoor and indoor datasets show that our method outperforms other state-of-the-art methods, particularly in challenging cases. Our code is available at [github](https://github.com/TruongKhang/TopicFM).'\nauthor:\n- Author Name\n- 'Khang Truong Giang ^a^, Soohwan Song ^b^[^1], Sungho Jo ^a^'\nbibliography:\n- 'aaai23.bib'\ntitle:\n- 'My Publication Title \u2014 Single Author'\n-" +"---\nabstract: 'Measuring semantic similarity between job titles is an essential functionality for automatic job recommendations. 
This task is usually approached using supervised learning techniques, which require training data in the form of equivalent job title pairs. In this paper, we instead propose an unsupervised representation learning method for training a job title similarity model using noisy skill labels. We show that it is highly effective for tasks such as text ranking and job normalization.'\nauthor:\n- Rabih Zbib\n- Lucas Alvarez Lacasa\n- Federico Retyk\n- Rus Poves\n- Juan Aizpuru\n- |\n \\\n Hermenegildo Fabregat\n- 'Vaidotas \u0160imkus[^1]'\n- 'Emilia Garc\u00eda-Casademont'\nbibliography:\n- 'bibliography.bib'\ntitle: Learning Job Titles Similarity from Noisy Skill Labels\n---\n\nIntroduction {#sec:intro}\n============\n\nWith the significant growth of online platforms for job postings and applications, intelligent recommendation systems have become a necessity for both applicants and recruiters. Measuring semantic similarity between job titles is an essential functionality of these systems, whether to recommend suitable job openings to candidates, or vice versa. Job title similarity can be used as the sole measure for relevance, or more generally, as a component for computing an overall score between jobs and candidates, together with other information such as" +"---\nabstract: 'The landscape of privacy laws and regulations around the world is complex and ever-changing. National and super-national laws, agreements, decrees, and other government-issued rules form a patchwork that companies must follow to operate internationally. To examine the status and evolution of this patchwork, we introduce the Government Privacy Instructions Corpus, or GPI Corpus, of 1,043 privacy laws, regulations, and guidelines, covering 182 jurisdictions. This corpus enables a large-scale quantitative and qualitative examination of legal foci on privacy. 
We examine the temporal distribution of when GPIs were created and illustrate the dramatic increase in privacy legislation over the past 50 years, although a finer-grained examination reveals that the rate of increase varies depending on the personal data types that GPIs address. Our exploration also demonstrates that most privacy laws respectively address relatively few personal data types, showing that comprehensive privacy legislation remains rare. Additionally, topic modeling results show the prevalence of common themes in GPIs, such as finance, healthcare, and telecommunications. Finally, we release the corpus to the research community to promote further study.[^1]'\nauthor:\n- |\n **Sonu Gupta**[$^1$]{} **Ellen Poplavska**[$^1$]{} **Nora O\u2019Toole**[$^1$]{}\\\n **Siddhant Arora**[$^2$]{} **Thomas Norton**[$^3$]{} **Norman Sadeh**[$^2$]{} **Shomir Wilson**[$^1$]{}\\\n [$^1$]{}Penn State University, University Park, PA\\\n [$^2$]{}Carnegie Mellon" +"---\nabstract: 'Karyotyping is an important procedure to assess the possible existence of chromosomal abnormalities. However, because of the non-rigid nature, chromosomes are usually heavily curved in microscopic images and such deformed shapes hinder the chromosome analysis for cytogeneticists. In this paper, we present a self-attention guided framework to erase the curvature of chromosomes. The proposed framework extracts spatial information and local textures to preserve banding patterns in a regression module. With complementary information from the bent chromosome, a refinement module is designed to further improve fine details. In addition, we propose two dedicated geometric constraints to maintain the length and restore the distortion of chromosomes. To train our framework, we create a synthetic dataset where curved chromosomes are generated from the real-world straight chromosomes by grid-deformation. Quantitative and qualitative experiments are conducted on synthetic and real-world data. 
Experimental results show that our proposed method can effectively straighten bent chromosomes while keeping banding details and length.'\nauthor:\n- Sunyi Zheng\n- Jingxiong Li\n- Zhongyi Shui\n- Chenglu Zhu\n- Yunlong Zhang\n- Pingyi Chen\n- Lin Yang\nbibliography:\n- 'paper1588.bib'\ntitle: 'ChrSNet: Chromosome Straightening using Self-attention Guided Networks[^1]'\n---\n\nIntroduction\n============\n\nMetaphase chromosome analysis is a fundamental step in" +"---\nabstract: 'Small bowel path tracking is a challenging problem considering its many folds and contact along its course. For the same reason, it is very costly to achieve the ground-truth (GT) path of the small bowel in 3D. In this work, we propose to train a deep reinforcement learning tracker using datasets with different types of annotations. Specifically, we utilize CT scans that have only GT small bowel segmentation as well as ones with the GT path. It is enabled by designing a unique environment that is compatible for both, including a reward definable even without the GT path. The performed experiments proved the validity of the proposed method. The proposed method holds a high degree of usability in this problem by being able to utilize the scans with weak annotations, and thus by possibly reducing the required annotation cost.'\nauthor:\n- Seung Yeon Shin\n- 'Ronald M. Summers'\n- Seung Yeon Shin\n- 'Ronald M. Summers'\ntitle:\n- |\n Deep Reinforcement Learning for\\\n Small Bowel Path Tracking using\\\n Different Types of Annotations\n- |\n Deep Reinforcement Learning for\\\n Small Bowel Path Tracking using\\\n Different Types of Annotations:\\\n Supplementary Material\n---\n\nIntroduction {#sec:intro}\n============\n\nThe small bowel is the" +"---\nabstract: 'Nuclear Berry curvature effects emerge from electronic spin degeneracy and can lead to non-trivial spin-dependent (nonadiabatic) nuclear dynamics. 
However, such effects are completely neglected in all current mixed quantum-classical methods such as fewest switches surface-hopping. In this work, we present a phase-space surface-hopping (PSSH) approach to simulate singlet-triplet intersystem crossing dynamics. We show that with a simple pseudo-diabatic ansatz, a PSSH algorithm can capture the relevant Berry curvature effects and make predictions in agreement with exact quantum dynamics for a simple singlet-triplet model Hamiltonian. Thus, this approach represents an important step towards simulating photochemical and spin processes concomitantly, as relevant to intersystem crossing and spin-lattice relaxation dynamics.'\nauthor:\n- Xuezhi Bian\n- Yanze Wu\n- Jonathan Rawlinson\n- 'Robert G. Littlejohn'\n- 'Joseph E. Subotnik'\nbibliography:\n- 'main.bib'\ntitle: 'Modeling Spin-Dependent Nonadiabatic Dynamics with Electronic Degeneracy: A Phase-Space Surface-Hopping Method'\n---\n\n![image](toc.pdf){width=\"8cm\"}\n\nWhile not always fully appreciated, the spin degrees of freedom pertaining to a molecular or material system can be of particular importance when electronic transitions occur between states with different spin multiplicities (e.g. intersystem crossing \\[ISC\\]) or when states with large multiplicity interconvert (e.g., triplet internal conversion \\[IC\\])[@Kasha1950; @Penfold2018]. After all, nonadiabatic nuclear-electronic dynamics underlie many" +"---\nabstract: 'For indoor settings, we investigate the impact of location on the spectral distribution of the received light, i.e., the intensity of light for different wavelengths. Our investigations confirm that even under the same light source, different locations exhibit slightly different spectral distribution due to reflections from their localised environment containing different materials or colours. 
By exploiting this observation, we propose *Spectral-Loc*, a novel indoor localization system that uses light spectral information to identify the location of the device. With spectral sensors finding their way in latest products and applications, such as white balancing in smartphone photography, *Spectral-Loc* can be readily deployed without requiring any additional hardware or infrastructure. We prototype *Spectral-Loc* using a commercial-off-the-shelf light spectral sensor, AS7265x, which can measure light intensity over 18 different wavelength sub-bands. We benchmark the localisation accuracy of *Spectral-Loc* against the conventional light intensity sensors that provide only a single intensity value. Our evaluations over two different indoor spaces, a meeting room and a large office space, demonstrate that use of light spectral information significantly reduces the localization error for the different percentiles.'\nauthor:\n- |\n Yanxiang Wang$^{1,2}$, Jiawei Hu$^{1,2}$, Hong Jia$^3$, Wen Hu$^1$, Mahbub Hassan$^1$\\\n Ashraf Uddin$^1$, Brano Kusy$^2$, Moustafa Youssef$^4$" +"---\nabstract: 'The generation of molecules with Artificial Intelligence (AI) is poised to revolutionize materials discovery. Potential applications range from development of potent drugs to efficient carbon capture and separation technologies. However, existing computational frameworks lack automated training data creation and physical performance validation at meso-scale where complex properties of amorphous materials emerge. The methodological gaps have so far limited AI design to small-molecule applications. Here, we report the first automated discovery of complex materials through inverse molecular design which is informed by meso-scale target features and process figures-of-merit. We have entered the new discovery regime by computationally generating and validating hundreds of polymer candidates designed for application in post-combustion carbon dioxide filtration. 
Specifically, we have validated each discovery step, from training dataset creation, via graph-based generative design of optimized monomer units, to molecular dynamics simulation of gas permeation through the polymer membranes. For the latter, we have devised a Representative Elementary Volume (REV) enabling permeability simulations at about 1,000x the volume of an individual, AI-generated monomer, obtaining quantitative agreement. The discovery-to-validation time per polymer candidate is on the order of 100 hours in a standard computing environment, offering a computational screening alternative prior to lab validation.'\nauthor:\n- 'Ronaldo" +"[ *Astronomy Letters, 2022, Vol. 48*]{}\n\n -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --\n -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --\n\n0.5cm\n\n**PARAMETERS OF THE RADCLIFFE WAVE FROM MASERS,**\n\n**RADIO STARS, AND T TAURI STARS**\n\n**V. V. Bobylev (1), A. T. Bajkova (1), Yu. N. Mishurov (2)**\n\n*(1) Pulkovo Astronomical Observatory of the Russian Academy of Sciences, St. Petersburg, Russia*\n\n*(2) Southern Federal University, Rostov-on-Don, Russia*\n\n\u2014The presence of the Radcliffe wave is shown both in the positions and in the vertical velocities of masers and radio stars belonging to the Local Arm. This gives the impression that the structure of the Radcliffe wave is not a wave in the full sense of the word. It is more like a local high-amplitude burst," +"---\nabstract: 'We offer a new, gradual approach to the [*largest girth problem for cubic graphs*]{}. It is easily observed that the largest possible girth of all $n$-vertex cubic graphs is attained by a [*$2$-connected*]{} graph $G=(V,E)$. By Petersen\u2019s graph theorem, $E$ is the disjoint union of a $2$-factor and a perfect matching $M$. 
We refer to the edges of $M$ as [*chords*]{} and classify the cycles in $G$ by their number of chords. We define $\\gamma_k(n)$ to be the largest integer $g$ such that every cubic $n$-vertex graph with a given perfect matching $M$ has a cycle of length at most $g$ with at most $k$ chords. Here we determine this function up to small additive constant for $k= 1, 2$ and up to a small multiplicative constant for larger $k$.'\nauthor:\n- 'Aya Bernstine[^1]'\n- '[Nati Linial[^2]]{}'\nbibliography:\n- 'bibl.bib'\ntitle: An approach to the girth problem in cubic graphs\n---\n\nIntroduction\n============\n\nThe [*girth*]{} of a graph $G$ is the shortest length of a cycle in $G$. Our main concern here is with bounds on $g(n)$, the largest girth of a cubic $n$-vertex graph. Namely, we seek the best statement of the form \u201cEvery $n$-vertex cubic graphs" +"---\nabstract: 'We show a Gottlieb element in the rational homotopy of a simply connected space $X$ implies a structural result for the Sullivan minimal model, with different results depending on parity. In the even-degree case, we prove a rational Gottlieb element is a terminal homotopy element. This fact allows us to complete an argument of Dupont to prove an even-degree Gottlieb element gives a free factor in the rational cohomology of a formal space of finite type. We apply the odd-degree result to affirm a special case of the $2N$-conjecture on Gottlieb elements of a finite complex. We combine our results to make a contribution to the realization problem for the classifying space $\\B(X)$. 
We prove a simply connected space $X$ satisfying $\\B(X_\\Q) \\simeq S_\\Q^{2n}$ must have infinite-dimensional rational homotopy and vanishing rational Gottlieb elements above degree $2n-1$ for $n= 1, 2, 3.$'\naddress:\n- 'Department of Mathematics, Cleveland State University, Cleveland OH 44115'\n- 'Department of Mathematics, Saint Joseph\u2019s University, Philadelphia, PA 19131'\nauthor:\n- Gregory Lupton\n- Samuel Bruce Smith\nbibliography:\n- 'Baut.bib'\ntitle: The structuring effect of a Gottlieb element on the Sullivan model of a space\n---\n\nIntroduction\n============\n\nLet $X$ be simply connected and" +"---\nauthor:\n- \nbibliography:\n- 'Probabilisticlogicbib.bib'\ntitle: Projective families of distributions revisited\n---\n\nIntroduction\n============\n\nStatistical relational artificial intelligence (AI) comprises approaches that combine probabilistic learning and reasoning with variants of first-order predicate logic. The challenges of statistical relational AI have been addressed from both directions: Either probabilistic graphical models such as Bayesian networks or Markov networks are lifted to relational representations and linked to (variants of) first-order logic, or approaches based on predicate logic such as logic programming are extended to include probabilistic facts. 
The resulting statistical relational languages make it possible to specify a complex probabilistic model compactly and without reference to a specific domain of objects.\n\nFormally, on a given input, a statistical relational model defines a probability distribution over possible worlds on the domain of the input, which can then be queried for the probabilities of various definable events.\n\nCompared to ordinary Bayesian networks or Markov networks, statistical relational AI offers several advantages:\n\n- The presentation is generic, which means that it can be transferred to other areas with a similar structure\n\n- It is possible to specify complex background knowledge declaratively. For example, different modelling assumptions can be implemented and adapted rapidly.\n\n- Statistical relational" +"---\nabstract: 'Higher-order structures of networks, namely, small subgraphs of networks (also called network motifs), are widely known to be crucial and essential to the organization of networks. Several works have studied the community detection problem, a fundamental problem in network analysis, at the level of motifs. In particular, the higher-order spectral clustering has been developed, where the notion of *motif adjacency matrix* is introduced as the algorithm\u2019s input. However, how the higher-order spectral clustering works and when it performs better than its edge-based counterpart remain largely unknown. To elucidate these problems, the higher-order spectral clustering is investigated from a statistical perspective. 
The clustering performance of the higher-order spectral clustering is theoretically studied under a [*weighted stochastic block model*]{}, and the resulting bounds are compared with the corresponding results of the edge-based spectral clustering.'\nauthor:\n- |\n Xiao Guo$^{\\dagger}$, Hai Zhang$^\\dagger$, Xiangyu Chang$^\\ddag$[^1]\\\n $^\\dagger$ School of Mathematics, Northwest University, China\\\n $^\\ddag$ School of Management, Xi\u2019an Jiaotong University, China\nbibliography:\n- 'highord.bib'\ntitle: '**On the efficacy of higher-order spectral clustering under weighted stochastic block models**'\n---\n\n\\#1\n\n0\n\n[0]{}\n\n1\n\n[0]{}\n\n[****]{}\n\n[*Keywords:*]{} Higher-order structures, Community detection, Weighted networks, Network motifs\n\nIntroduction {#sec:intro}\n============\n\nThe network is a standard representation of relationships" +"---\nauthor:\n- 'Chun Li$^1$, Yuqi Tian$^2$, Donglin Zeng$^3$, Bryan E. Shepherd$^2$'\ndate: |\n $^1$University of Southern California\\\n $^2$Vanderbilt University\\\n $^3$University of North Carolina\\\ntitle: Asymptotic Properties for Cumulative Probability Models for Continuous Outcomes\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nRegression models for continuous outcomes often require a transformation of the outcome, which is often specified a priori or estimated from a parametric family. Cumulative probability models (CPMs) nonparametrically estimate the transformation and are thus a flexible analysis approach for continuous outcomes. However, it is difficult to establish asymptotic properties for CPMs due to the potentially unbounded range of the transformation. Here we show asymptotic properties for CPMs when applied to slightly modified data where the outcomes are censored at the ends. We prove uniform consistency of the estimated regression coefficients and the estimated transformation function over the non-censored region, and describe their joint asymptotic distribution. 
We show with simulations that results from this censored approach and those from the CPM on the original data are very similar when a small fraction of data are censored. We reanalyze a dataset of HIV-positive patients with CPMs to illustrate and compare the approaches.\n\nIntroduction\n============\n\nRegression analyses of continuous outcomes often require a" +"---\nabstract: 'Without a complete theory of quantum gravity, the question of how quantum fields and quantum particles behave in a superposition of spacetimes seems beyond the reach of theoretical and experimental investigations. Here we use an extension of the quantum reference frame formalism to address this question for the Klein-Gordon field residing on a superposition of conformally equivalent metrics. Based on the group structure of \u201cquantum conformal transformations\u201d, we construct an explicit quantum operator that can map states describing a quantum field on a superposition of spacetimes to states representing a quantum field with a superposition of masses on a Minkowski background. This constitutes an extended symmetry principle, namely invariance under quantum conformal transformations. The latter allows to build an understanding of superpositions of diffeomorphically non-equivalent spacetimes by relating them to a more intuitive superposition of quantum fields on curved spacetime. Furthermore, it can be used to import the phenomenon of particle production in curved spacetime to its conformally equivalent counterpart, thus revealing new features in Minkowski spacetime with modified Klein-Gordon mass.'\nauthor:\n- Viktoria Kabel\n- 'Anne-Catherine de la Hamette'\n- 'Esteban Castro-Ruiz'\n- \u010caslav Brukner\nbibliography:\n- 'bibliography.bib'\nnocite: '[@apsrev42Control]'\ntitle: Quantum conformal symmetries for spacetimes in" +"---\nabstract: 'Droplet breakup is an important phenomenon in the field of microfluidics to generate daughter droplets. 
In this work, a novel breakup regime in the widely studied T-junction geometry is reported, where the pinch-off occurs laterally in the two outlet channels, leading to the formation of three daughter droplets, rather than at the center of the junction for conventional T-junctions which leads to two daughter droplets. It is demonstrated that this new mechanism is driven by surface tension, and a design rule for the T-junction geometry is proposed. A model for low values of the capillary number $Ca$ is developed to predict the formation and growth of an underlying carrier fluid pocket that accounts for this lateral breakup mechanism. At higher values of $Ca$, the conventional regime of central breakup becomes dominant again. The competition between the new and the conventional regime is explored. Altogether, this novel droplet formation method at T-junction provides the functionality of alternating droplet size and composition, which can be important for the design of new microfluidic tools.'\nauthor:\n- Jiande Zhou\n- 'Yves-Marie Ducimeti\u00e8re'\n- Daniel Migliozzi\n- Ludovic Keiser\n- Arnaud Bertsch\n- Fran\u00e7ois Gallaire\n- Philippe Renaud\nbibliography:\n- 'apssamp.bib'\ntitle: 'Breaking" +"---\nabstract: 'The Automatic Dependent Surveillance-Broadcast (ADS-B) protocol is increasingly being adopted by the aviation industry as a method for aircraft to relay their position to Air Traffic Control (ATC) monitoring systems. ADS-B provides greater precision compared to traditional radar-based technologies, however, it was designed without any encryption or authentication mechanisms and has been shown to be susceptible to spoofing attacks. A capable attacker can transmit falsified ADS-B messages with the intent of causing false information to be shown on ATC displays and threaten the safety of air traffic. Updating the ADS-B protocol will be a lengthy process, therefore, there is a need for systems to detect anomalous ADS-B communications. 
This paper presents [ATC-Sense]{}, an ADS-B anomaly detection system based on ontologies. An ATC ontology is used to model entities in a simulated controlled airspace and is used to detect falsified ADS-B messages by verifying that the entities conform to aviation constraints related to aircraft flight tracks, radar readings, and flight reports. We evaluate the computational performance of the proposed constraints-based detection approach with several ADS-B attack scenarios in a simulated ATC environment. We demonstrate how ontologies can be used for anomaly detection in a real-time environment and call for" +"---\nauthor:\n- 'Weilong Fu[^1], Ali Hirsa[^2], J[\u00f6]{}rg Osterrieder[^3]'\nbibliography:\n- 'references.bib'\ndate: \ntitle: '**Simulating financial time series using attention**'\n---\n\n[[*Keywords:*]{} deep learning, generative adversarial networks, attention, time series, stylized facts]{}\n\nIntroduction\n============\n\nTraining and evaluation of trading strategies need lots of data. Due to the limited amount of real data, there is a growing need to be able to simulate realistic financial data which satisfies the stylized facts. There has already been a vast literature of financial time series models. The generalized autoregressive conditional heteroskedasticity ([[GARCH]{}]{}) [@bollerslev_generalized_1986] model and its variants are applied to the stock prices and indices. The Black-Merton-Scholes model [@black_scholes_1973], the Heston model [@heston1993closed], the variance gamma model [@madan1990variance], etc. are applied to the option surfaces. The parametric models are popular for their simplicity, mathematical explicitness and robustness. However, it is difficult for a parametric model to fit all the major stylized facts.\n\nRecently, more data-driven approaches based on generative adversarial networks ([[GANs]{}]{}) [@goodfellow2014generative] are proposed to deal with the problem. 
The [[GAN]{}]{} includes a generator, which is used to generate samples, and a discriminator, which is responsible for judging whether the generated samples are similar enough to the real data. The applications of [[GANs]{}]{}" +"---\nabstract: 'We study piecewise affine policies for multi-stage adjustable robust optimization (ARO) problems with non-negative right-hand side uncertainty. First, we construct new dominating uncertainty sets and show how a multi-stage ARO problem can be solved efficiently with a linear program when uncertainty is replaced by these new sets. We then demonstrate how solutions for this alternative problem can be transformed into solutions for the original problem. By carefully choosing the dominating sets, we prove strong approximation bounds for our policies and extend many previously best-known bounds for the two-staged problem variant to its multi-stage counterpart. Moreover, the new bounds are - to the best of our knowledge - the first bounds shown for the general multi-stage ARO problem considered. We extensively compare our policies to other policies from the literature and prove relative performance guarantees. In two numerical experiments, we identify beneficial and disadvantageous properties for different policies and present effective adjustments to tackle the most critical disadvantages of our policies. Overall, the experiments show that our piecewise affine policies can be computed by orders of magnitude faster than affine policies, while often yielding comparable or even better results.'\nauthor:\n- |\n Simon Thom\u00e4 [^1], Grit Walther$^*$, Maximilian Schiffer" +"---\nabstract: 'A kernel-based quantum classifier is the most practical and influential quantum machine learning technique for the hyper-linear classification of complex data. 
We propose a Variational Quantum Approximate Support Vector Machine (VQASVM) algorithm that demonstrates empirical sub-quadratic run-time complexity with quantum operations feasible even on NISQ computers. We experimented with our algorithm on a toy example dataset using cloud-based NISQ machines as a proof of concept. We also numerically investigated its performance on the standard Iris flower and MNIST datasets to confirm the practicality and scalability.'\nauthor:\n- Siheon Park\n- 'Daniel K. Park'\n- 'June-Koo Kevin Rhee'\nbibliography:\n- 'main.bib'\ntitle: Variational Quantum Approximate Support Vector Machine with Inference Transfer\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nQuantum computing opens up new exciting prospects of quantum advantages in machine learning in terms of sample and computation complexity.[@qnflprl22; @qram2; @qml; @schuld2018supervised; @qsvm] One of the foundations of these quantum advantages is the ability to form and manipulate data efficiently in a large quantum feature space, especially with kernel functions used in classification and other classes of machine learning.[@schuld2017implementing; @schuld2019quantum; @ibm-qsvm; @lloyd2020quantum; @blank2020quantum; @park2020theory; @schuld2021supervised; @liu2021rigorous; @blank2022compact]\n\nThe support vector machine (henceforth SVM)[@svm] is one of the most comprehensive models that help conceptualize the
An application to the extrapolation of stationary heavy-tailed random functions illustrates the use of the aforementioned theory. Numerical experiments with the prediction of Gaussian, $\\alpha$-stable and further heavy\u2013tailed time series round off the paper.'\nauthor:\n- 'Vitalii Makogin[^1], Evgeny Spodarev[^2]'\nbibliography:\n- 'MakoginLit.bib'\ntitle: Prediction of random variables by excursion metric projections\n---\n\n[**Keywords**]{}: [extrapolation, (linear) prediction, forecasting, excursion, level set, Gini metric, stationary random field, $\\alpha$\u2013stable random function, heavy tails, time series, statistical learning]{}.\\\n[**AMS subject classification 2020**]{}: [Primary 60G25, 62M20; Secondary 60G10, 60G52]{}\\\n\nIntroduction {#sectIntro}\n============\n\nLet $Y:\\Omega\\to{\\mathbb R}$ be a square integrable random variable defined on a probability space $(\\Omega,{\\cal F}, {\\mathbb{P}})$, and let ${\\cal G}\\subset {\\cal F}$ be a sub\u2013$\\sigma$\u2013algebra
Vermeulen'\nbibliography:\n- 'bibliography.bib'\ntitle: |\n Abstract morphing using the Hausdorff distance\\\n and Voronoi diagrams[^1]\n---\n\nIntroduction\n============\n\nMorphing, also referred to as shape interpolation, is the changing of a given shape into a target shape over time. Applications include animation and medical imaging. Animation is often motivated by the film industry, where morphing can be used to create cartoons or visual effects. In medical imaging, the objective is" +"---\nabstract: 'X-ray ptychography allows for large fields to be imaged at high resolution at the cost of additional computational expense due to the large volume of data. Given limited information regarding the object, the acquired data often has an excessive amount of information that is outside the region of interest (RoI). In this work we propose a physics-inspired unsupervised learning algorithm to identify the RoI of an object using only diffraction patterns from a ptychography dataset before committing computational resources to reconstruction. 
Obtained diffraction patterns that are automatically identified as not within the RoI are filtered out, allowing efficient reconstruction by focusing only on important data within the RoI while preserving image quality.'\nauthor:\n- |\n Dergan Lin\\\n Mathematics and Computer Science Division\\\n Argonne National Laboratory\\\n Lemont, IL 60439, USA\\\n `dlin@anl.gov`\\\n Yi Jiang\\\n Advanced Photon Source\\\n Argonne National Laboratory\\\n Lemont, IL 60439, USA\\\n `yjiang@anl.gov`\\\n Junjing Deng\\\n Advanced Photon Source\\\n Argonne National Laboratory\\\n Lemont, IL 60439, USA\\\n `junjingdeng@anl.gov`\\\n Zichao Wendy Di\\\n Mathematics and Computer Science Division\\\n Advanced Photon Source\\\n Argonne National Laboratory\\\n Lemont, IL 60439, USA\\\n `wendydi@mcs.anl.gov`\\\nbibliography:\n- 'references.bib'\ntitle: 'Physics-Inspired Unsupervised Classification for Region of Interest in X-Ray Ptychography '\n---\n\nIntroduction\n============\n\nX-ray transmission ptychography is an" +"---\nabstract: |\n Scientists increasingly rely on Python tools to perform scalable distributed memory array operations using rich, NumPy-like expressions. However, many of these tools rely on dynamic schedulers optimized for abstract task graphs, which often encounter memory and network bandwidth-related bottlenecks due to sub-optimal data and operator placement decisions. Tools built on the message passing interface (MPI), such as ScaLAPACK and SLATE, have better scaling properties, but these solutions require specialized knowledge to use.\n\n In this work, we present [NumS]{}, an array programming library which optimizes NumPy-like expressions on task-based distributed systems. This is achieved through a novel scheduler called Load Simulated Hierarchical Scheduling (LSHS). LSHS is a local search method which optimizes operator placement by minimizing maximum memory and network load on any given node within a distributed system. 
Coupled with a heuristic for load balanced data layouts, our approach is capable of attaining communication lower bounds on some common numerical operations, and our empirical study shows that LSHS enhances performance on Ray by decreasing network load by a factor of 2$\\times$, requiring 4$\\times$ less memory, and reducing execution time by 10$\\times$ on the logistic regression problem. On terabyte-scale data, [NumS]{}\u00a0achieves competitive performance to SLATE on DGEMM," +"---\nabstract: 'Skeleton-based action recognition aims to project skeleton sequences to action categories, where skeleton sequences are derived from multiple forms of pre-detected points. Compared with earlier methods that focus on exploring single-form skeletons via Graph Convolutional Networks (GCNs), existing methods tend to improve GCNs by leveraging multi-form skeletons due to their complementary cues. However, these methods (either adapting structure of GCNs or model ensemble) require the co-existence of all forms of skeletons during both training and inference stages, while a typical situation in real life is the existence of only partial forms for inference. To tackle this issue, we present Adaptive Cross-Form Learning (ACFL), which empowers well-designed GCNs to generate complementary representation from single-form skeletons without changing model capacity. Specifically, each GCN model in ACFL not only learns action representation from the single-form skeletons, but also adaptively mimics useful representations derived from other forms of skeletons. In this way, each GCN can learn how to strengthen what has been learned, thus exploiting model potential and facilitating action recognition as well. Extensive experiments conducted on three challenging benchmarks, i.e., NTU-RGB+D 120, NTU-RGB+D 60 and UAV-Human, demonstrate the effectiveness and generalizability of the proposed method. 
Specifically, the ACFL significantly improves various" +"---\nabstract: 'Rare-earth iron garnets $R_{3}$Fe$_{5}$O$_{12}$ are fascinating insulators with very diverse magnetic phases. Their strong potential in spintronic devices has encouraged a renewal of interest in the study of their low temperature spin structures and spin wave dynamics. A striking feature of rare-earth garnets with $R$-ions exhibiting strong crystal-field effects like Tb-, Dy-, Ho-, and Er-ions is the observation of low-temperature non-collinear magnetic structures featuring \u201cumbrella-like\u201d arrangements of the rare-earth magnetic moments. In this study, we demonstrate that such umbrella magnetic states are naturally emerging from the crystal-field anisotropies of the rare-earth ions. By means of a general model endowed with only the necessary elements from the crystal structure, we show how umbrella-like spin structures can take place and calculate the canting-angle as a result of the competition between the exchange-interaction of the rare-earth and the iron ions as well as the crystal-field anisotropy. Our results are compared to experimental data, and a study of the polarised spin wave dynamics is presented. Our study improves the understanding of umbrella-like spin structures and paves the way for more complex spin wave calculations in rare-earth iron garnets with non-collinear magnetic phases.'\nauthor:\n- Bruno Tomasello\n- Dan Mannix\n- Stephan Gepr\u00e4gs" +"---\nauthor:\n- Dmitry Ponomarev\nbibliography:\n- 'hs.bib'\ntitle: 'Basic introduction to higher-spin theories'\n---\n\nIntroduction\n============\n\nHigher-spin theories is a fascinating topic of modern theoretical physics, which has deep connections to other important areas, such as holography, various forms of bootstrap and string theory. 
Broadly speaking, with the ultimate goal being the construction of quantum gravity models, higher-spin program attempts to chart the landscape of consistent field theories scrutinising and, possibly, expanding the consistency requirements along the way. For a recent review on the current status of higher-spin theories, their connection to other areas of theoretical physics and for discussions on possible future directions we refer the reader to [@Bekaert:2022poo].\n\nWhen approaching this problem, one quickly discovers the following striking feature: although consistent field theories are abundant at free level, all known interacting theories are, essentially, just variants of a handful of theories \u2013 such as scalar theories and theories of spin-$\\frac{1}{2}$ fermions, the Yang-Mills theory or General Relativity \u2013 and involve only very limited types of lower-spin fields. A notable exception to this conclusion is provided by string theory, which, however, strongly relies on the world-sheet description, while its reformulation as a field theory constitutes a separate research" +"---\naddress: |\n $^{1}$ Institute of Astronomy and Astrophysics, Universit\u00e9 Libre de Bruxelles, CP 226, Boulevard du Triomphe, B-1050 Brussels, Belgium; nicolas.chamel@ulb.be\\\n $^{2}$ Grand Acc\u00e9l\u00e9rateur National d\u2019Ions Lourds (GANIL), CEA/DRF-CNRS/IN2P3, Boulevard Henri Becquerel, 14076 Caen, France; anthea.fantina@ganil.fr\n---\n\nIntroduction\n============\n\nSoft gamma-ray repeaters and anomalous X-ray pulsars are two facets of a very active subclass of neutron stars, called magnetars, exhibiting outbursts and less frequently giant flares that release huge amounts of energy up to $\\sim$$10^{46}$\u00a0erg within a second (see e.g.,\u00a0[@esposito2021] for a recent review). These phenomena are thought to be powered by internal magnetic fields exceeding $10^{14}$\u2013$10^{15}$\u00a0G\u00a0[@duncan1992]. 
At\u00a0the date of this writing, 24 such objects have been discovered and six more candidates remain to be confirmed according to the McGill Online Magnetar Catalog\u00a0[@olausen2014]. Their persistent X-ray luminosity $\\sim$$10^{33}$\u2013$10^{35}$ erg/s, which is well in excess of their rotational energy and which implies a higher surface temperature than in weakly magnetized neutron stars of the same age\u00a0[@vigano2013], provides further evidence for extreme magnetic fields\u00a0[@beloborodov2016]. It is widely thought that heat is generated by the deformations of the crust beyond the elastic limit due to magnetic stresses (see, e.g.,\u00a0[@degrandis2020]). This mechanism is most" +"---\nabstract: 'RNN-T models have gained popularity in the literature and in commercial systems because of their competitiveness and capability of operating in online streaming mode. In this work, we conduct an extensive study comparing several prediction network architectures for both monotonic and original RNN-T models. We compare 4 types of prediction networks based on a common state-of-the-art Conformer encoder and report results obtained on Librispeech and an internal medical conversation data set. Our study covers both offline batch-mode and online streaming scenarios. In contrast to some previous works, our results show that Transformer does not always outperform LSTM when used as prediction network along with Conformer encoder. Inspired by our scoreboard, we propose a new simple prediction network architecture, $N$-Concat, that outperforms the others in our online streaming benchmark. Transformer and n-gram reduced architectures perform very similarly yet with some important distinct behaviour in terms of previous context. 
Overall we obtained up to 4.1 % relative WER improvement compared to our LSTM baseline, while reducing prediction network parameters by nearly an order of magnitude (8.4 times).'\naddress: 'Nuance Communications, Inc.'\nbibliography:\n- 'mybib.bib'\ntitle: 'On the Prediction Network Architecture in RNN-T for ASR'\n---\n\n**Index Terms**: Conformer, RNN-T, prediction" +"---\nabstract: 'Rights provisioned within data protection regulations, permit patients to request that knowledge about their information be eliminated by data holders. With the advent of AI learned on data, one can imagine that such rights can extent to requests for forgetting knowledge of patient\u2019s data within AI models. However, forgetting patients\u2019 imaging data from AI models, is still an under-explored problem. In this paper, we study the influence of patient data on model performance and formulate two hypotheses for a patient\u2019s data: either they are common and similar to other patients or form edge cases, i.e.\u00a0unique and rare cases. We show that it is not possible to easily *forget patient data*. We propose a targeted forgetting approach to perform patient-wise forgetting. Extensive experiments on the benchmark Automated Cardiac Diagnosis Challenge dataset showcase the improved performance of the proposed targeted forgetting approach as opposed to a state-of-the-art method.'\nauthor:\n- Ruolin Su\n- Xiao Liu\n- 'Sotirios A. Tsaftaris'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Why patient data cannot be easily forgotten?'\n---\n\nIntroduction\n============\n\nApart from solely improving algorithm performance, developing trusted deep learning algorithms that respect data privacy has now become of crucial importance [@abadi2016deep; @liu2020have]. Deep models can" +"---\nabstract: 'This work examines the problem of clique enumeration on a graph by exploiting its clique covers. 
The principle of inclusion/exclusion is applied to determine the number of cliques of size $r$ in the graph union of a set ${{\\cal C}} = \\{c_1, \\ldots, c_m\\}$ of $m$ cliques. This leads to a deeper examination of the sets involved and to an orbit partition, $\\Gamma$, of the power set ${\\mathcal{P}({{\\mathcal{N}_{m}}})}$ of ${\\mathcal{N}_{m}} = \\{1, \\ldots, m\\}$. Applied to the cliques, this partition gives insight into clique enumeration and yields new results on cliques within a clique cover, including expressions for the number of cliques of size $r$ as well as generating functions for the cliques on these graphs. The quotient graph modulo this partition provides a succinct representation to determine cliques and maximal cliques in the graph union. The partition also provides a natural and powerful framework for related problems, such as the enumeration of induced connected components, by drawing upon a connection to extremal set theory through intersecting sets.'\nauthor:\n- |\n Pavel Shuldiner & R. Wayne Oldford\\\n University of Waterloo\nbibliography:\n- 'proposal\\_bibliography.bib'\ntitle: 'How many cliques can a clique cover cover?'\n---\n\nClique covers, clique enumeration, graph" +"---\nabstract: |\n We present a parallel compositing algorithm for Volumetric Depth Images (VDIs) of large three-dimensional volume data. Large distributed volume data are routinely produced in both numerical simulations and experiments, yet it remains challenging to visualize them at smooth, interactive frame rates. VDIs are view-dependent piecewise constant representations of volume data that offer a potential solution. They are more compact and less expensive to render than the original data. So far, however, there is no method for generating VDIs from distributed data. We propose an algorithm that enables this by sort-last parallel generation and compositing of VDIs with automatically chosen content-adaptive parameters. 
The resulting composited VDI can then be streamed for remote display, providing responsive visualization of large, distributed volume data.\n\n <ccs2012> <concept> <concept\\_id>10003120.10003145.10003146</concept\\_id> <concept\\_desc>Human-centered computing\u00a0Visualization techniques</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept> <concept> <concept\\_id>10010147.10010371.10010372</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Rendering</concept\\_desc> <concept\\_significance>300</concept\\_significance> </concept> </ccs2012>\nauthor:\n- |\n \\\n [ ]{}\nbibliography:\n- 'egbibsample.bib'\ntitle: Parallel Compositing of Volumetric Depth Images for Interactive Visualization of Distributed Volumes at High Frame Rates\n---\n\nIntroduction\n============\n\nInteractive direct volume rendering is a commonly used technique for the analysis of large scalar field data generated by scientific simulations or experimental measurement devices. Rendering at high, consistent frame rates" +"---\nabstract: 'While deep learning based speech enhancement systems have made rapid progress in improving the quality of speech signals, they can still produce outputs that contain artifacts and can sound unnatural. We propose a novel approach to speech enhancement aimed at improving perceptual quality and naturalness of enhanced signals by optimizing for key characteristics of speech. We first identify key acoustic parameters that have been found to correlate well with voice quality (e.g. jitter, shimmer, and spectral flux) and then propose objective functions which are aimed at reducing the difference between clean speech and enhanced speech with respect to these features. The full set of acoustic features is the extended Geneva Acoustic Parameter Set (eGeMAPS), which includes 25 different attributes associated with perception of speech. 
Given the non-differentiable nature of these feature computation, we first build differentiable estimators of the eGeMAPS and then use them to fine-tune existing speech enhancement systems. Our approach is generic and can be applied to any existing deep learning based enhancement systems to further improve the enhanced speech signals. Experimental results conducted on the Deep Noise Suppression (DNS) Challenge dataset shows that our approach can improve the state-of-the-art deep learning based enhancement systems.'\naddress:" +"---\nabstract: 'The 511 keV $\\gamma$-ray emission from the galactic center region may fully or partially originate from the annihilation of positrons from dark matter particles with electrons from the interstellar medium. Alternatively, the positrons could be created by astrophysical sources, involving exclusively standard model physics. We describe here a new concept for a 511keV mission called [*511-CAM*]{} (511 keV gamma-ray CAmera using Micro-calorimeters) that combines focusing $\\gamma$-ray optics with a stack of Transition Edge Sensor (TES) microcalorimeter arrays in the focal plane. The [*511-CAM*]{} detector assembly has a projected 511 keV energy resolution of 390eV Full Width Half Maximum (FWHM) or better, and improves by a factor of at least 11 on the performance of state-of-the-art Ge-based Compton telescopes. Combining this unprecedented energy resolution with sub-arcmin angular resolutions afforded by Laue lens or channeling optics could make substantial contributions toward identifying the origin of the 511 keV emission through discovering and characterizing point sources and measuring line-of-sight velocities of the emitting plasmas.'\nauthor:\n- Farzane Shirazi\n- Ephraim Gau\n- 'Md. 
Arman Hossen'\n- Daniel Becker\n- Daniel Schmidt\n- Daniel Swetz\n- Douglas Bennett\n- Dana Braun\n- Fabian Kislat\n- Johnathon Gard\n- John Mates\n- Joel" +"---\nabstract: 'In this paper we establish a bridge between Topological Data Analysis and Geometric Deep Learning, adapting the topological theory of group equivariant non-expansive operators (GENEOs) to act on the space of all graphs weighted on vertices or edges. This is done by showing how the general concept of GENEO can be used to transform graphs and to give information about their structure. This requires the introduction of the new concepts of generalized permutant and generalized permutant measure and the mathematical proof that these concepts allow us to build GENEOs between graphs. An experimental section concludes the paper, illustrating the possible use of our operators to extract information from graphs. This paper is part of a line of research devoted to developing a compositional and geometric theory of GENEOs for Geometric Deep Learning.'\nauthor:\n- 'Faraz\u00a0Ahmad $^1$, Massimo\u00a0Ferri $^1$, Patrizio\u00a0Frosini $^1$'\nbibliography:\n- 'Gong.bib'\ntitle: Generalized Permutants and Graph GENEOs\n---\n\nPerception pair, GENEO, permutant, weighted graphs.\n\nIntroduction {#intro}\n============\n\nIn recent years, the need for an extension of Deep Learning to non-Euclidean domains has led to the development of Geometric Deep Learning (GDL)\u00a0[@BBLSV17; @BrBrCoVe21; @bronstein_2022]. This line of research focuses on applying neural networks" +"---\nabstract: 'Modulational instability of a uniform two-dimensional binary Bose-Einstein condensate (BEC) in the presence of quantum fluctuations is studied. The analysis is based on the coupled Gross-Pitaevskii equations. It is shown that quantum fluctuations can induce instability when the BEC density is below a threshold. The dependence of the growth rate of modulations on the BEC parameters is found. 
It is observed that an asymmetry of the interaction parameters and/or initial densities of the components typically decreases the growth rate. Further development of the instability results in a break-up of the BEC into a set of quantum droplets. These droplets merge dynamically with each other so that the total number of droplets decreases rapidly. The rate of this decrease is evaluated numerically for different initial parameters.'\naddress: |\n Physical-Technical Institute of the Uzbek Academy of Sciences,\\\n Chingiz Aytmatov Str. 2-B, Tashkent, 100084, Uzbekistan\nauthor:\n- 'Sherzod R. Otajonov'\n- 'Eduard N. Tsoy'\n- 'Fatkhulla Kh. Abdullaev'\ntitle: 'Modulational instability and quantum droplets in a two-dimensional Bose-Einstein condensate'\n---\n\nIntroduction {#sec:intro}\n============\n\nModulation instability (MI), or the Benjamin-Fair instability\u00a0[@Benjamin1967; @Ostrovskii1967], is a well-known phenomenon in physics. The main effect of MI is an exponential growth of the modulation amplitude" +"---\nabstract: 'We study the action of a real reductive group $G$ on a [K\u00e4hler ]{}manifold $Z$ which is the restriction of a holomorphic action of a complex reductive Lie group $U^\\mathbb{C}.$ We assume that the action of $U$, a maximal compact connected subgroup of $U^{\\mathbb{C}}$ on $Z$ is Hamiltonian. If $G\\subset U^{\\mathbb{C}}$ is compatible, there is a corresponding gradient map $\\mu_\\mathfrak{p} : Z\\to \\mathfrak{p}$, where ${\\mathfrak{g}}= {\\mathfrak{k}}\\oplus {\\mathfrak{p}}$ is a Cartan decomposition of the Lie algebra of $G$. 
Our main results are the openness and connectedness of the set of semistable points associated with $G$-action on $Z$, and the nonabelian convexity theorem for the $G$-action on a $G$-invariant compact Lagrangian submanifold of $Z.$'\naddress: |\n Dipartimento di Scienze Matematiche, Fisiche e Informatiche\\\n Universit\u00e0 di Parma (Italy)\nauthor:\n- Oluwagbenga Joshua Windare\nbibliography:\n- 'main.bib'\ntitle: Remarks on Semistable Points and Nonabelian Convexity of Gradient Maps\n---\n\nIntroduction\n============\n\nLet $U$ be a compact connected Lie group with Lie algebra $\\mathfrak{u}$ and let $U^{\\mathbb{C}}$ be its complexification. The Lie algebra $\\mathfrak{u}^{\\mathbb{C}}$ of $U^{\\mathbb{C}}$ is the direct sum $\\mathfrak{u}\\oplus i\\mathfrak{u}.$ A closed subgroup $G$ of $U^{\\mathbb{C}}$ is compatible if $G$ is closed and the map $K\\times \\mathfrak{p} \\to G,$ $(k,\\beta) \\mapsto" +"---\nabstract: 'Based on the assumption that there is a correlation between anti-spoofing and speaker verification, a Total-Divide-Total integrated Spoofing-Aware Speaker Verification (SASV) system based on pre-trained automatic speaker verification (ASV) system and integrated scoring module is proposed and submitted to the SASV 2022 Challenge. The training and scoring of ASV and anti-spoofing countermeasure (CM) in current SASV systems are relatively independent, ignoring the correlation. In this paper, by leveraging the correlation between the two tasks, an integrated SASV system can be obtained by simply training a few more layers on the basis of the baseline pre-trained ASV subsystem. The features in pre-trained ASV system are utilized for logical access spoofing speech detection. Further, speaker embeddings extracted by the pre-trained ASV system are used to improve the performance of the CM. 
The integrated scoring module takes the embeddings of the ASV and anti-spoofing branches as input and preserves the correlation between the two tasks through matrix operations to produce integrated SASV scores. Submitted primary system achieved equal error rate (EER) of 3.07% on the development dataset of the SASV 2022 Challenge and 4.30% on the evaluation part, which is a 25% improvement over the baseline systems.'\naddress: |\n $^1$Key Laboratory" +"---\nabstract: 'The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and low-level driving command, e.g. steering angle, as output. However, depth-sensing has been shown in simulation to make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging, due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR-images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. We demonstrate that such LiDAR-images are sufficient for the real-car road-following task and perform at least equally to camera-based models in the tested conditions, with the difference increasing when needing to generalize to new weather conditions. In the second direction of study, we reveal that the temporal smoothness of off-policy prediction sequences correlates equally well with actual on-policy driving ability as the commonly used mean absolute error.'\nauthor:\n- 'Ardi Tampuu, Romet Aidla, Jan Are van Gent, Tambet Matiisen [^1]" +"---\nabstract: 'Causal inference methods for treatment effect estimation usually assume independent units. 
However, this assumption is often questionable because units may interact, resulting in spillover effects between units. We develop augmented inverse probability weighting (AIPW) for estimation and inference of the direct effect of the treatment with observational data from a single (social) network with spillover effects. We use plugin machine learning and sample splitting to obtain a semiparametric treatment effect estimator that converges at the parametric rate and asymptotically follows a Gaussian distribution. We apply our AIPW method to the Swiss StudentLife Study data to investigate the effect of hours spent studying on exam performance accounting for the students\u2019 social network.'\nauthor:\n- Corinne Emmenegger\n- 'Meta-Lina Spohn'\n- Timon Elmer\n- Peter B\u00fchlmann\nbibliography:\n- 'references.bib'\ntitle: Treatment Effect Estimation with Observational Network Data using Machine Learning\n---\n\n**Keywords:** Dependent data, interference, observed confounding, semiparametric inference, spillover effects.\n\nIntroduction {#sec:intro}\n============\n\nClassical causal inference approaches for treatment effect estimation with observational data usually assume independent units. This assumption is part of the common stable unit treatment value assumption (SUTVA)\u00a0[@rubin1980]. However, independence is often violated in practice due to interactions among units that lead to so-called spillover" +"---\nabstract: 'We present the Prime Focus Spectrograph (PFS) Galaxy Evolution pillar of the 360-night PFS Subaru Strategic Program. This 130-night program will capitalize on the wide wavelength coverage and massive multiplexing capabilities of PFS to study the evolution of typical galaxies from cosmic dawn to the present. 
From Lyman $\\alpha$ emitters at $z \\sim 7$ to probe reionization, drop-outs at $z\\sim3$ to map the inter-galactic medium in absorption, and a continuum-selected sample at $z \\sim 1.5$, we will chart the physics of galaxy evolution within the evolving cosmic web. This article is dedicated to the memory of Olivier Le Fevre, who was an early advocate for the construction of PFS, and a key early member of the Galaxy Evolution Working Group.'\nauthor:\n- 'Jenny E. Greene'\n- Rachel Bezanson\n- Masami Ouchi\n- John Silverman\n- St\u00e9phane Arnouts\n- 'Andy D. Goulding'\n- Meng Gu\n- 'James E. Gunn'\n- Yuichi Harikane\n- Timothy Heckman\n- Benjamin Horowitz\n- 'Sean D. Johnson'\n- Daichi Kashino\n- 'Khee-Gan Lee'\n- Joel Leja\n- 'Yen-Ting Lin'\n- Danilo Marchesini\n- Yoshiki Matsuoka\n- Kentaro Nagamine\n- Yoshiaki Ono\n- Alan Pearl\n- Takatoshi Shibuya\n- 'Michael A. Strauss'\n- 'Allison L." +"---\nabstract: |\n In this article we focus on the pricing of exchange options when the dynamic of log-prices follows either the well-known variance gamma or the recent variance gamma++ process introduced in @Gardini2022. In particular, for the former model we can derive a Margrabe\u2019s type formula whereas, for the latter one we can write an \u201cintegral free\u201d formula. Furthermore, we show how to construct a general multidimensional versions of the variance gamma++ processes preserving both the mathematical and numerical tractability.\n\n Finally we apply the derived models to German and French energy power markets: we calibrate their parameters using real market data and we accordingly evaluate exchange options with the derived closed formulas, Fourier based methods and Monte Carlo techniques. 
0.2cm **Keywords**: [L\u00e9vy]{}\u00a0Processes; Exchange Options; Energy Markets; Derivative Pricing\nauthor:\n- 'Matteo Gardini[^1]'\n- 'Piergiacomo Sabino[^2] [^3]'\nbibliography:\n- 'library.bib'\ntitle: 'Exchange option pricing under variance gamma-like models '\n---\n\nIntroduction\n============\n\nSpread options are widely used in many fields of finance, in particular in energy markets, and their payoff depends on the difference of the value of two risky underlying assets instead of one. The spread might be between spot and futures prices, between currencies, interest rates or" +"---\nabstract: 'The COVID-19 pandemic has caused globally significant impacts since the beginning of 2020. This brought a lot of confusion to society, especially due to the spread of misinformation through social media. Although there were already several studies related to the detection of misinformation in social media data, most studies focused on the English dataset. Research on COVID-19 misinformation detection in Indonesia is still scarce. Therefore, through this research, we collect and annotate datasets for Indonesian and build prediction models for detecting COVID-19 misinformation by considering the tweet\u2019s relevance. The dataset construction is carried out by a team of annotators who labeled the relevance and misinformation of the tweet data. In this study, we propose the two stage classifier model using IndoBERT pre-trained language model for the Tweet misinformation detection task. We also experiment with several other baseline models for text classification. The experimental results show that the combination of the BERT sequence classifier for relevance prediction and Bi-LSTM for misinformation detection outperformed other machine learning models with an accuracy of 87.02%. Overall, the BERT utilization contributes to the higher performance of most prediction models. 
We release a high-quality COVID-19 misinformation Tweet corpus in the Indonesian language, indicated by" +"---\nabstract: 'It is well-known that the $\\gamma$-ray emission in blazars originate in the relativistic jet pointed at the observers. However, it is not clear whether the exact location of the GeV emission is less than a pc from the central engine, such that it may receive sufficient amount of photons from the broad line region (BLR) or farther out at 1-100 pc range. The former assumption has been successfully used to model the spectral energy distribution of many blazars. However, simultaneous detection of TeV $\\gamma$-rays along with GeV outbursts in some cases indicate that the emission region must be outside the BLR. In addition, GeV outbursts have sometimes been observed to be simultaneous with the passing of a disturbance through the so called \u201cVLBI core,\u201d which is located tens of pc away from the central engine. Hence, the exact location of $\\gamma$-ray emission remains ambiguous. Here we present a method we have developed to constrain the location of the emission region. We identify simultaneous months-timescale GeV and optical outbursts in the light curves spanning over 8 years of a sample of eleven blazars. Using theoretical jet emission models we show that the energy ratio of simultaneous optical and GeV" +"---\nabstract: |\n Due to the vast number of students enrolled in Massive Open Online Courses (MOOCs), there has been an increasing number of automated program repair techniques focused on introductory programming assignments ([IPAs]{}). Such state-of-the-art techniques use program clustering to take advantage of previous correct student implementations to repair a given new incorrect submission. Usually, these repair techniques use clustering methods since analyzing all available correct student submissions to repair a program is not feasible. 
The clustering methods use program representations based on several features such as abstract syntax tree ([AST]{}), syntax, control flow, and data flow. However, these features are sometimes brittle when representing semantically similar programs.\n\n This paper proposes [InvAASTCluster]{}, a novel approach for program clustering that takes advantage of dynamically generated program invariants observed over several program executions to cluster semantically equivalent [IPAs]{}. Our main objective is to find a more suitable representation of programs using a combination of the program\u2019s semantics, through its invariants, and its structure, through its anonymized abstract syntax tree. The evaluation of [InvAASTCluster]{}shows that the proposed program representation outperforms syntax-based representations when clustering a set of different correct [IPAs]{}. Furthermore, we integrate [InvAASTCluster]{}into a" +"---\nabstract: 'Interpreting the short-timescale variability of the accreting, young, low-mass stars known as Classical T Tauri stars remains an open task. Month-long, continuous light curves from the Transiting Exoplanet Survey Satellite (*TESS*) have become available for hundreds of T Tauri stars. With this vast data set, identifying connections between the variability observed by [*TESS* ]{}and short-timescale accretion variability is valuable for characterizing the accretion process. To this end, we obtained short-cadence [*TESS* ]{}observations of 14 T Tauri stars in the Taurus star-formation region along with simultaneous ground-based, UBVRI-band photometry to be used as accretion diagnostics. In addition, we combine our dataset with previously published simultaneous NUV-NIR *Hubble Space Telescope* spectra for one member of the sample. 
We find evidence that much of the short-timescale variability observed in the [*TESS* ]{}light curves can be attributed to changes in the accretion rate, but note significant scatter between separate nights and objects. We identify hints of time lags within our dataset that increase at shorter wavelengths, which we suggest may be evidence of longitudinal density stratification of the accretion column. Our results highlight that contemporaneous, multi-wavelength observations remain critical for providing context for the observed variability of these stars.'\nauthor:\n- 'Connor" +"---\nabstract: 'Home health care problems consist of scheduling visits to home patients by health professionals while following a series of requirements. This paper studies the Home Health Care Routing and Scheduling Problem, which comprises a multi-attribute vehicle routing problem with soft time windows. Additional route inter-dependency constraints apply for patients requesting multiple visits, either by simultaneous visits or visits with precedence. We apply a mathematical programming solver to obtain lower bounds for the problem. We also propose a biased random-key genetic algorithm, and we study the effects of additional state-of-the-art components recently proposed in the literature for this genetic algorithm. We perform computational experiments using a publicly available benchmark dataset. Compared with previous local search-based methods, we find results up to 26.1% better than those reported in the literature. We find improvements from around 0.4% to 6.36% compared to previous results from a similar genetic algorithm.'\nauthor:\n- 'Alberto F. Kummer, Olinto C.B. de Ara\u00fajo, Luciana S. Buriol, Mauricio G.C.
Resende'\nbibliography:\n- 'itor-hhcrsp-2021.bib'\ntitle: |\n A biased random-key genetic algorithm for the\\\n home health care problem\n---\n\nIntroduction\n============\n\nThe increase in life expectancy directly impacts the demand for health services, thus motivating public authorities to design and implement" +"---\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'sn-article.bib'\ntitle: 'Geometry Parameter Estimation for Sparse X-ray Log Imaging'\n---\n\nIntroduction\n============\n\nA particular case of sparse X-ray computed tomography (CT) problems is found in the sawmill industry, where X-ray scanners are employed for non-invasive detection of log features and defects such as knots, rotten parts, and foreign objects. Furthermore, X-ray scanners allow timber tracking from the raw material to the resulting product (wood fingerprinting) [@Zol2019; @Flod2008]. Non-destructive feature detection and timber tracking provide information for selection of optimal sawing strategies and efficient process control.\n\nCT image reconstruction can be described mathematically as an inverse problem [@Tar2004; @Kaip2005; @Stu2010]. The objective of two-dimensional X-ray CT is to reconstruct the cross-sectional image of a scanned object from a collection of one-dimensional line projections. The projections are obtained by measuring the intensity loss of the beam of X-rays that penetrates the object from different view angles. The projections are collected to a sinogram.\n\nIn the sawmill industry, highly limited (sparse) tomographic data is used for log imaging. The sparse tomography is known to be a severely ill-posed problem. Sparse data lead to severe artefacts when the image is reconstructed by conventional methods like" +"---\nabstract: 'Tumor infiltration of the recurrent laryngeal nerve (RLN) is a contraindication for robotic thyroidectomy and can be difficult to detect via standard laryngoscopy. 
Ultrasound (US) is a viable alternative for RLN detection due to its safety and ability to provide real-time feedback. However, the tininess of the RLN, with a diameter typically less than 3mm, poses significant challenges to the accurate localization of the RLN. In this work, we propose a knowledge-driven framework for RLN localization, mimicking the standard approach surgeons take to identify the RLN according to its surrounding organs. We construct a prior anatomical model based on the inherent relative spatial relationships between organs. Through Bayesian shape alignment (BSA), we obtain the candidate coordinates of the center of a region of interest (ROI) that encloses the RLN. The ROI allows a decreased field of view for determining the refined centroid of the RLN using a dual-path identification network, based on multi-scale semantic information. Experimental results indicate that the proposed method achieves superior hit rates and substantially smaller distance errors compared with state-of-the-art methods.'\nauthor:\n- 'Haoran Dou[^1]'\n- Luyi Han\n- Yushuang He\n- Jun Xu\n- Nishant Ravikumar\n- Ritse Mann\n- 'Alejandro F. Frangi'" +"---\nabstract: 'Tracing potentially infected contacts of confirmed cases is important when fighting outbreaks of many infectious diseases. The COVID-19 pandemic has motivated researchers to examine how different contact tracing strategies compare in terms of effectiveness (ability to mitigate infections) and cost efficiency (number of prevented infections per isolation). Two important strategies are so-called forward contact tracing (tracing to whom disease spreads) and backward contact tracing (tracing from whom disease spreads). 
Recently, Kojaku and colleagues reported that backward contact tracing was \u201cprofoundly more effective\u201d than forward contact tracing, that contact tracing effectiveness \u201chinges on reaching the \u2018source\u2019 of infection\u201d, and that contact tracing outperformed case isolation in terms of cost efficiency. Here we show that these conclusions are not true in general. They were based in part on simulations that vastly overestimated the effectiveness and efficiency of contact tracing. Our results show that the efficiency of contact tracing strategies is highly contextual; faced with a disease outbreak, the disease dynamics determine whether tracing infection sources or new cases is more impactful. Our results also demonstrate the importance of simulating disease spread and mitigation measures in parallel rather than sequentially.'\nauthor:\n- 'Jonas L. Juul'\n- 'Steven H. Strogatz'\nbibliography:\n-" +"---\nabstract: 'Active Simultaneous Localization and Mapping (SLAM) is the problem of planning and controlling the motion of a robot to build the most accurate and complete model of the surrounding environment. Since the first foundational work in active perception appeared, more than three decades ago, this field has received increasing attention across different scientific communities. This has brought about many different approaches and formulations, and makes a review of the current trends necessary and extremely valuable for both new and experienced researchers. In this work, we survey the state-of-the-art in active SLAM and take an in-depth look at the open challenges that still require attention to meet the needs of modern applications. After providing a historical perspective, we present a unified problem formulation and review the well-established modular solution scheme, which decouples the problem into three stages that identify, select, and execute potential navigation actions. 
We then analyze alternative approaches, including belief-space planning and deep reinforcement learning techniques, and review related work on multi-robot coordination. The manuscript concludes with a discussion of new research directions, addressing reproducible research, active spatial perception, and practical applications, among other topics.'\nauthor:\n- |\n Julio\u00a0A.\u00a0Placed\\*, Jared\u00a0Strader, Henry\u00a0Carrillo,\\\n Nikolay\u00a0Atanasov," +"---\nabstract: 'We consider the setting of an aggregate data meta-analysis of a continuous outcome of interest. When the distribution of the outcome is skewed, it is often the case that some primary studies report the sample mean and standard deviation of the outcome and other studies report the sample median along with the first and third quartiles and/or minimum and maximum values. To perform meta-analysis in this context, a number of approaches have recently been developed to impute the sample mean and standard deviation from studies reporting medians. Then, standard meta-analytic approaches with inverse-variance weighting are applied based on the (imputed) study-specific sample means and standard deviations. In this paper, we illustrate how this common practice can severely underestimate the within-study standard errors, which results in overestimation of between-study heterogeneity in random effects meta-analyses. We propose a straightforward bootstrap approach to estimate the standard errors of the imputed sample means. Our simulation study illustrates how the proposed approach can improve estimation of the within-study standard errors and between-study heterogeneity. 
Moreover, we apply the proposed approach in a meta-analysis to identify risk factors of a severe course of COVID-19.'\nauthor:\n- Sean McGrath\n- Stephan Katzenschlager\n- 'Alexandra J.\u00a0Zimmer'" +"---\nabstract: 'It is believed that one of the first useful applications for a quantum computer will be the preparation of groundstates of molecular Hamiltonians. A crucial task involving state preparation and readout is obtaining physical observables of such states, which are typically estimated using projective measurements on the qubits. At present, measurement data is costly and time-consuming to obtain on any quantum computing architecture, which has significant consequences for the statistical errors of estimators. In this paper, we adapt common neural network models (restricted Boltzmann machines and recurrent neural networks) to learn complex groundstate wavefunctions for several prototypical molecular qubit Hamiltonians from typical measurement data. By relating the accuracy ${\\varepsilon}$ of the reconstructed groundstate energy to the number of measurements, we find that using a neural network model provides a robust improvement over using single-copy measurement outcomes alone to reconstruct observables. This enhancement yields an asymptotic scaling near ${\\varepsilon}^{-1}$ for the model-based approaches, as opposed to ${\\varepsilon}^{-2}$ in the case of classical shadow tomography.'\nauthor:\n- Dmitri Iouchtchenko\n- 'J\u00e9r\u00f4me F. Gonthier'\n- 'Alejandro Perdomo-Ortiz'\n- 'Roger G. 
Melko'\ntitle: Neural network enhanced measurement efficiency for molecular groundstates\n---\n\nIntroduction\n============\n\nFinding the groundstate of a molecular Hamiltonian on" +"---\nabstract: 'One of the strongest justifications for the continued search for superconductivity within the single-band Hubbard Hamiltonian originates from the apparent success of single-band ladder-based theories in predicting the occurrence of superconductivity in the cuprate coupled-ladder compound Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$. Recent theoretical works have, however, shown the complete absence of quasi-long range superconducting correlations within the hole-doped multiband ladder Hamiltonian including realistic Coulomb repulsion between holes on oxygen sites and oxygen-oxygen hole hopping. Experimentally, superconductivity in Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$ occurs only under pressure, and is preceded by a dramatic transition from one to two dimensions that is still not understood. We show that understanding the dimensional crossover requires adopting a valence transition model within which there occurs a transition in Cu-ion ionicity from +2 to +1, with transfer of holes from Cu to O-ions \\[Phys. Rev. B 98, 205153 (2018)\\]. The driving force behind the valence transition is the closed-shell electron configuration of Cu$^{1+}$, a feature shared by cations of all oxides with a negative charge-transfer gap. We make a falsifiable experimental prediction for Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$ and discuss the implications of our results for layered two-dimensional cuprates.'\nauthor:\n- 'Jeong-Pil Song$^1$, R.\u00a0Torsten Clay$^2$, Sumit Mazumdar$^1$'\ntitle: 'Valence Transition Theory of the Pressure-Induced Dimensionality Crossover in Superconducting Sr$_{14-x}$Ca$_{x}$Cu$_{24}$O$_{41}$'" +"---\nabstract: 'A new control method that considers all sources of uncertainty and noise that might affect the time evolution of quantum physical systems is introduced.
Under the proposed approach, the dynamics of quantum systems are characterised by probability density functions (pdfs), thus providing a complete description of their time evolution. Using this probabilistic description, the proposed method suggests the minimisation of the distance between the actual pdf that describes the joint distribution of the time evolution of the quantum system and the external electric field, and the desired pdf that describes the system's target outcome. We start by providing the control solution for quantum systems that are characterised by arbitrary pdfs. The obtained solution is then applied to quantum physical systems characterised by Gaussian pdfs, and the form of the optimised controller is elaborated. Finally, the proposed approach is demonstrated on a real molecular system and two spin systems, showing the effectiveness and simplicity of the method.'\nauthor:\n- |\n Randa Herzallah[^1] and Abdessamad Belfakir[^2]\\\n Aston Institute for Urban Technologies and the Environment, College of Engineering and Applied Science,\\\n Aston University, Aston Triangle, Birmingham B4 7ET, UK.\ntitle: A Probabilistic Framework for Controlling Quantum Systems\n---\n\n[**Keywords:**]{} Probabilistic control, Kullback-Leibler
We show that the problem of computing the interference degree of a hypergraph is NP-hard and we prove some properties and results concerning this hypergraph invariant. We investigate which hypergraphs are realizable, i.e. which hypergraphs arise in practice, based on physical constraints, as the interference model of a wireless network. In particular, a question that arises naturally is: what is the maximal value of $r$ such that the hypergraph $K_{1,r}$ is realizable? We determine this quantity for various integral and nonintegral values of the path loss exponent of signal propagation. We also investigate hypergraphs generated by line networks.'\nauthor:\n- 'Ashwin\u00a0Ganesan [^1]'" +"---\nabstract: 'The high energy transfer efficiency of photosynthetic complexes has been a topic of research across many disciplines. Several attempts have been made in order to explain this energy transfer enhancement in terms of quantum mechanical resources such as energetic and vibration coherence and constructive effects of environmental noise. The developments in this line of research have inspired various biomimetic works aiming to use the underlying mechanisms in biological light harvesting complexes for improvement of synthetic systems. In this article we explore the effect of an auxiliary hierarchically structured environment interacting with a system on the steady-state heat transport across the system. The cold and hot baths are modeled by a series of identically prepared qubits in their respective thermal states, and we use collision model to simulate the open quantum dynamics of the system. We investigate the effects of system-environment, inter-environment couplings and coherence of the structured environment on the steady state heat flux and find that such a coupling enhances the energy transfer. 
Our calculations reveal that there exists a non-monotonic and non-trivial relationship between the steady-state heat flux and the mentioned parameters.'\nauthor:\n- Ali Pedram\n- Bar\u0131\u015f \u00c7akmak\n- '\u00d6zg\u00fcr E. M\u00fcstecapl\u0131o\u011flu'\ntitle: '**Environment-assisted modulation" +"---\nabstract: 'In this paper, we prove propagation of chaos in the context of wave turbulence for a generic quasisolution. We then apply the result to full solutions to the incompressible Euler equation.'\nauthor:\n- 'Anne-Sophie de Suzzoni[^1]'\nbibliography:\n- 'WTbib.bib'\ntitle: General remarks on the propagation of chaos in wave turbulence and application to the incompressible Euler dynamics\n---\n\nIntroduction\n============\n\nWe address the question of propagation of chaos in the context of *wave* turbulence.\n\nThe issue at stake is the following: we consider the solution to a Hamiltonian equation with a random initial datum whose Fourier coefficients are initially independent, and we want to know if this independence remains satisfied at later times. These Fourier modes must satisfy what is called in the Physics literature the *Random Phase Approximation*, which is satisfied by Gaussian variables. Here, we also address the following question: assuming that the initial Fourier modes are Gaussian, do the Fourier modes at later times conserve some sort of Gaussianity?\n\nIn the context of weak turbulence and for Schr\u00f6dinger equations, these questions have been successfully addressed by Deng and Hani in [@denghani21].
The Gaussianity in these papers consists in proving that at later times" +"---\nauthor:\n- Mrunmay Jagadale\nbibliography:\n- 'Bibliography.bib'\ntitle: Decorated TQFTs and their Hilbert Spaces\n---\n\n*Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA*\n\nE-mail: mjagadal@caltech.edu\n\n**Abstract**\n\nWe discuss topological quantum field theories that compute topological invariants which depend on additional structures (or decorations) on three-manifolds. The $q$-series invariant $\\hat{Z}(q)$ proposed by Gukov, Pei, Putrov and Vafa is an example of such an invariant. We describe how to obtain these decorated invariants by cutting and gluing, and make a proposal for Hilbert spaces that are assigned to two-dimensional surfaces in the $\\hat{Z}$-TQFT.\n\nIntroduction\n============\n\nWhen given a topological quantum field theory (TQFT), the first question one asks is, \u201cWhat does it compute?\u201d In general, given a three-manifold, a three-dimensional TQFT computes for us a topological invariant of that three-manifold. For example, the $SU(2)$ Chern-Simons theory at level $k\\in {{\\mathbb Z}}$ computes the Witten-Reshetikhin-Turaev (${\\mathrm{WRT}}$) invariants of three-manifolds[@Witten:1988hf; @Reshetikhin1991InvariantsO3]. A decorated TQFT computes a topological invariant that depends on additional data. We call this additional data \u201cdecoration\u201d. One classic example of such an invariant is the Reidemeister-Milnor-Turaev torsion which is a topological invariant of three-manifolds that depends on the $\\mathrm{Spin}^{c}$ structure of the three-manifold[@Turaev1997TorsionIO].\n\nIn" +"---\naddress:\n- |\n ${}^a$Perimeter Institute for Theoretical Physics,\\\n Waterloo, Ontario, N2L 2Y5, Canada\n- |\n ${}^b$ Department of Physics, University of Waterloo,\\\n Waterloo, ON N2L 3G1, Canada\n- |\n ${}^c$Mani L. 
Bhaumik Institute for Theoretical Physics,\\\n 475 Portola Plaza, Los Angeles, CA 90095, USA\n- |\n ${}^d$Simons Center for Geometry and Physics,\\\n SUNY, Stony Brook, NY 11794, USA\nbibliography:\n- 'main.bib'\n---\n\nDiego Delmastro,${}^{ab}$ [^1] Jaume Gomis,${}^{b}$ [^2] Po-Shen Hsin,${}^{c}$ [^3] Zohar Komargodski${}^{d}$ [^4]\n\nIntroduction {#sec:intro}\n============\n\nThe study of the symmetries of Quantum Field Theory (QFT) and quantum many-body systems has been an indispensable tool for several decades. Symmetries provide a robust concept that remains valid at strong coupling. As such, symmetry considerations oftentimes have inspired far-reaching insights into non-perturbative physics.\n\nA zero-form symmetry group $G$ in a QFT comes equipped with a collection of co-dimension one topological, invertible operators which under fusion form the group $G$. The Ward identities satisfied by correlation functions of local operators transforming in representations of $G$ are a consequence of enriching the correlation functions of local operators with topological operators that implement the action of $G$.\n\nStronger dynamical constraints arise if the QFT has an
Numerical results on different traffic conditions and penetration rates suggest that the model successfully subverts a significant portion of delays brought about by downstream bottlenecks, both globally and from the perspective of the controlled vehicles.'\nauthor:\n- 'Abdul Rahman Kreidieh$^{1,2}$, Yashar Farid$^{1}$, and Kentaro Oguchi$^{1}$[^1][^2]'\nbibliography:\n- 'neo.bib'\ntitle: '**Non-local Evasive Overtaking of Downstream Incidents in Distributed Behavior Planning of Connected Vehicles**'\n---\n\nIntroduction\n============\n\nThe adoption of vehicle autonomy and wireless communication is expected to redefine the nature of future transportation systems. In autonomous intersection scenarios, for" +"---\nabstract: 'In this work, we discuss: (i) The ratios of different parton distribution functions (PDFs), i.e., MMHT2014, CJ15, CTEQ6l1, HERAPDF15, MSTW2008, HERAPDF20 and MSHT20, and the corresponding Kimber-Martin-Ryskin (KMR) unintegrated parton distribution functions (UPDFs) sets versus the hard scale $Q^2$, to find out the sensitivity of the KMR UPDFs with respect to the input PDFs sets. It is shown that there is not much difference between the ratios of the different input PDFs or the corresponding UPDFs sets. (ii) Then, the dependence of the proton $k_t$-factorization structure functions on the different UPDFs sets, which can use the above PDFs sets as input, is presented. The results are compared with the experimental data of ZEUS, NMC and H1+ZEUS at the hard scales $Q^2 =27$ and $90$ $GeV^2$, and a reasonable agreement is found, considering different input PDFs sets. (iii) Furthermore, by fitting a Gaussian function, which depends on the transverse momentum $k_t$, to the KMR UPDFs and averaging over $x$ (the fractional parton momentum), we obtain the average squared transverse momentum, $\\langle k_t^2 \\rangle$, in the scale range $Q^2=2.4-1.44\\times10^4$ $GeV^2$, which is in agreement with the other groups' predictions, $0.25-0.44$ $GeV^2$ at $Q^2=2.4$ $GeV^2$.
(iv) Finally, we explore the average transverse momentum for which the results of proton" +"---\nabstract: 'This paper demonstrates that spherical convolutional neural networks (S-CNN) offer distinct advantages over conventional fully-connected networks (FCN) at estimating scalar parameters of tissue microstructure from diffusion MRI (dMRI). Such microstructure parameters are valuable for identifying pathology and quantifying its extent. However, current clinical practice commonly acquires dMRI data consisting of only 6 diffusion weighted images (DWIs), limiting the accuracy and precision of estimated microstructure indices. Machine learning (ML) has been proposed to address this challenge. However, existing ML-based methods are not robust to differing gradient schemes, nor are they rotation equivariant. Lack of robustness to gradient schemes requires a new network to be trained for each scheme, complicating the analysis of data from multiple sources. A possible consequence of the lack of rotational equivariance is that the training dataset must contain a diverse range of microstructure orientations. Here, we show spherical CNNs represent a compelling alternative that is robust to new gradient schemes as well as offering rotational equivariance. We show the latter can be leveraged to decrease the number of training datapoints required.'\nauthor:\n- 'Tobias Goodwin-Allcock'\n- Jason McEwen\n- Robert Gray\n- Parashkev Nachev\n- Hui Zhang\nbibliography:\n- 'references.bib'\ntitle: 'How can spherical CNNs benefit
We demonstrate the feasibility of this approach by training a neural network to separate near sounds from far sounds in single-channel synthetic reverberant mixtures, relative to a threshold distance defining the boundary between near and far. With a single nearby speaker and four distant speakers, the model improves scale-invariant signal to noise ratio by 4.4 dB for near sounds and 6.8 dB for far sounds.'\naddress: ' Google Research, Cambridge MA'\nbibliography:\n- 'refs.bib'\ntitle: 'Distance-Based Sound Separation'\n---\n\n**Index Terms**: distance-based sound separation\n\nIntroduction\n============\n\nExtracting estimates of clean speech in the presence of interference is a long-standing research problem in signal processing. This task is referred to as *speech enhancement* when the interference is non-speech, and *speech separation* when the interference is speech. More generally, *sound separation* refers to the extraction of a subset of sounds from a mixture of sounds." +"---\nabstract: 'Quantum Approximate Optimization Algorithm (QAOA) is a hybrid algorithm whose control parameters are classically optimized. In addition to the variational parameters, the right choice of hyperparameter is crucial for improving the performance of any optimization model. Control depth, or the number of variational parameters, is considered the most important hyperparameter for QAOA. In this paper we investigate control depth selection with an automatic algorithm based on proximal gradient descent. The performance of the automatic algorithm is demonstrated on $7$-node and $10$-node Max-Cut problems, which shows that the control depth can be significantly reduced during the iteration while achieving a sufficient level of optimization accuracy. With a theoretical convergence guarantee, the proposed algorithm can be used as an efficient tool for choosing the appropriate control depth as a replacement for random search or empirical rules.
Moreover, the reduction of control depth will induce a significant reduction in the number of quantum gates in the circuit, which improves the applicability of QAOA on Noisy Intermediate-scale Quantum (NISQ) devices.'\nauthor:\n- Yu Pan\n- Yifan Tong\n- Yi Yang\ntitle: Automatic Depth Optimization for Quantum Approximate Optimization Algorithm\n---\n\nIntroduction\n============\n\nQuantum Approximate Optimization Algorithm (QAOA) is considered one of" +"---\nabstract: 'Recent work introduced the *epinet* as a new approach to uncertainty modeling in deep learning [@osband2022epistemic]. An epinet is a small neural network added to traditional neural networks, which, together, can produce predictive distributions. In particular, using an epinet can greatly improve the quality of *joint* predictions across multiple inputs, a measure of how well a neural network knows what it does not know [@osband2022epistemic]. In this paper, we examine whether epinets can offer similar advantages under distributional shifts. We find that, across ImageNet-A/O/C, epinets generally improve robustness metrics. Moreover, these improvements are more significant than those afforded by even very large ensembles at orders of magnitude lower computational costs. However, these improvements are relatively small compared to the outstanding issues in distributionally-robust deep learning. Epinets may be a useful tool in the toolbox, but they are far from the complete solution.'\nauthor:\n- |\n **Xiuyuan Lu[^1], Ian Osband, Seyed Mohammad Asghari, Sven Gowal,\\\n **Vikranth Dwaracherla, Zheng Wen, Benjamin Van Roy\\\n DeepMind****\nbibliography:\n- 'references.bib'\ntitle: Robustness of Epinets against Distributional Shifts \n---\n\nIntroduction\n============\n\nEpistemic neural networks (ENNs) were recently introduced as a new framework for modeling uncertainty in deep learning [@osband2022epistemic].
ENNs offer an interface for" +"---\nabstract: '[ The causal structure is a quintessential element of continuum spacetime physics and needs to be properly encoded in a theory of Lorentzian quantum gravity. Established spin foam (and tensorial group field theory (TGFT)) models mostly work with relatively special classes of Lorentzian triangulations (e.g. built from spacelike tetrahedra only), obscuring the explicit implementation of the local causal structure at the microscopic level. We overcome this limitation and construct a full-fledged model for Lorentzian quantum geometry the building blocks of which include spacelike, lightlike and timelike tetrahedra. We realize this within the context of the Barrett-Crane TGFT model. Following an explicit characterization of the amplitudes via methods of integral geometry, and the ensuing clear identification of local causal structure, we analyze the model\u2019s amplitudes with respect to its (space)time-orientation properties and provide also a more detailed comparison with the framework of causal dynamical triangulations (CDT). ]{}'\nauthor:\n- 'Alexander F. Jercher,'\n- 'Daniele Oriti,'\n- 'Andreas G. A. Pithis'\nbibliography:\n- 'references.bib'\ntitle: 'The Complete Barrett-Crane Model and its Causal Structure'\n---\n\n[@counter>0@toks=@toks=]{}\n\n[ !! @toks= @toks= ]{} [ !! @toks= @toks= ]{}\n\nIntroduction {#sec:Introduction}\n============\n\nAmong the most important conceptual and physical insights of special and general" +"---\nabstract: 'Hundreds of years after its creation, the game of chess is still widely played worldwide. Opening Theory is one of the pillars of chess and requires years of study to be mastered. Here we exploit the \u201cwisdom of the crowd\u201d in an online chess platform to answer questions that, traditionally, only chess experts could tackle. We first define the relatedness network of chess openings that quantifies how similar two openings are to play. 
In this network, we spot communities of nodes corresponding to the most common opening choices and their mutual relationships, information which is hard to obtain from the existing classification of openings. Moreover, we use the relatedness network to forecast the future openings players will start to play and we back-test these predictions, obtaining performances considerably higher than those of a random predictor. Finally, we use the Economic Fitness and Complexity algorithm to measure how difficult to play openings are and how skilled in openings players are. This study not only gives a new perspective on chess analysis but also opens the possibility of suggesting personalized opening recommendations using complex network theory.'\nauthor:\n- 'Giordano De Marzo$^{1, 2, 3, 4}$'\n- 'Vito D. P. Servedio$^{4}$'\ntitle:" +"---\nabstract: 'The evaluation of quarkonium regeneration in ultrarelativistic heavy-ion collisions (URHICs) requires the knowledge of the heavy-quark phase space distributions in the expanding quark-gluon plasma (QGP) fireball. We employ a semi-classical charmonium transport approach where regeneration processes explicitly account for the time-dependent spectra of charm quarks via Langevin simulations of their diffusion. The inelastic charmonium rates and charm-quark transport coefficients are computed from the same charm-medium interaction. The latter is modeled by perturbative rates, augmented with a $K$-factor to represent nonperturbative interaction strength and interference effect. Using central 5.02\u00a0TeV Pb-Pb collisions as a test case we find that a good description of the measured $J/\\psi$ yield and its transverse-momentum dependence can be achieved if a large $K{\\gtrsim}5$ is employed while smaller values lead to marked discrepancies. This is in line with open-charm phenomenology in URHICs, where nonperturbative interactions of similar strength are required. 
Our approach establishes a common transport framework for a microscopic description of open and hidden heavy-flavor (HF) observables that incorporates both nonperturbative and non-equilibrium effects, and thus enhances the mutual constraints from experiment on the extraction of transport properties of the QGP.'\naddress: |\n $^1$Fakult\u00e4t fur Physik, Universit\u00e4t Bielefeld, D-33615 Bielefeld, Germany\\\n $^2$Cyclotron Institute and" +"---\nabstract: 'The VEDLIoT project targets the development of energy-efficient Deep Learning for distributed AIoT applications. A holistic approach is used to optimize algorithms while also dealing with safety and security challenges. The approach is based on a modular and scalable cognitive IoT hardware platform. Using modular microserver technology enables the user to configure the hardware to satisfy a wide range of applications. VEDLIoT offers a complete design flow for Next-Generation IoT devices required for collaboratively solving complex Deep Learning applications across distributed systems. The methods are tested on various use-cases ranging from Smart Home to Automotive and Industrial IoT appliances. VEDLIoT is an H2020 EU project which started in November 2020. It is currently in an intermediate stage with the first results available.'\nauthor:\n- \ntitle: 'VEDLIoT: Very Efficient Deep Learning in IoT [^1] '\n---\n\nThe VEDLIoT Approach {#sec:introduction}\n====================\n\n![image](figures/Project_Overview.pdf){width=\"90.00000%\"}\n\nDeep Learning has become a strong driver in IoT applications. Typically, those applications have very challenging computational demands coupled with a low energy budget. The goal of VEDLIoT is to integrate IoT with Deep Learning, accelerate applications, and optimize them towards energy efficiency. shows the architecture of VEDLIoT. 
The project is presented following a bottom-up approach, starting" +"---\nabstract: 'The purpose of this paper is to discuss a number of issues that crop up in the computation of Poisson brackets in field theories. This is specially important for the canonical approaches to quantization and, in particular, for loop quantum gravity. We illustrate the main points by working out several examples. Due attention is paid to relevant analytic issues that are unavoidable in order to properly understand how computations should be carried out. Although the functional spaces that we use throughout the paper will likely have to be modified in order to deal with specific physical theories such as general relativity, many of the points that we will raise will also be relevant in that context. The specific example of the *mock holonomy-flux algebra* will be considered in some detail and used to draw some conclusions regarding the loop quantum gravity formalism.'\naddress:\n- '$^1$ Instituto de Estructura de la Materia, CSIC, Serrano 123, 28006 Madrid, Spain'\n- '$^2$ Grupo de Teor\u00edas de Campos y F\u00edsica Estad\u00edstica, Instituto Gregorio Mill\u00e1n (UC3M), Unidad Asociada al Instituto de Estructura de la Materia, CSIC'\n- '$^3$ Departamento de Matem\u00e1ticas, Universidad Carlos III de Madrid, Avda.\u00a0 de la Universidad 30, 28911 Legan\u00e9s," +"---\nabstract: 'The speed limit of information propagation is one of the most fundamental features in non-equilibrium physics. The region of information propagation by finite-time dynamics is approximately restricted inside the effective light cone that is formulated by the Lieb-Robinson bound. To date, extensive studies have been conducted to identify the shape of effective light cones in most experimentally relevant many-body systems. However, the Lieb-Robinson bound in the interacting boson systems, one of the most ubiquitous quantum systems in nature, has remained a critical open problem for a long time. 
This study reveals an optimal light cone to limit the information propagation in interacting bosons, where the shape of the effective light cone depends on the spatial dimension. To achieve it, we prove that the speed for bosons to clump together is finite, which in turn leads to the error guarantee of the boson number truncation at each site. Furthermore, we applied the method to provide a provably efficient algorithm for simulating the interacting boson systems. The results of this study settle the notoriously challenging problem and provide the foundation for elucidating the complexity of many-body boson systems.'\nauthor:\n- 'Tomotaka Kuwahara$^{1,2}$'\n- 'Tan Van Vu$^{3}$'\n- 'Keiji Saito$^{3}$'\nbibliography:" +"---\nauthor:\n- 'John E. McCarthy [^1]'\nbibliography:\n- '../references\\_uniform\\_partial.bib'\ntitle: 'Conferences\u2014an Owner\u2019s Manual'\n---\n\nI have just attended my first in-person conferences since the start of COVID. They form a critical part of our profession, so I decided to write my opinions about how to maximize their benefits. I say nothing of online conferences, because I don\u2019t understand them.\n\nPersonal History\n================\n\nThe first out-of-town conference I attended was at the University of Arkansas in April 1988. There was a mini-course of 5 lectures by Harold Shapiro on Quadrature Domains, and individual lectures by other senior operator theorists and complex analysts (in those days, graduate students rarely traveled to conferences, and never gave talks). My adviser, Donald Sarason, had arranged for several of his students to go, and to share a room in a hotel (a Hilton! I had heard of the luxury hotel brand, but had never set foot in one, let alone slept in one.)\n\nThis conference turned out to be the most important event of my professional life. I found it terribly exciting\u2014real mathematicians talking about their work and progress on interesting problems. 
I could understand the statements, even if the proofs were complicated. Outside the" +"---\nabstract: 'Perturbation theory (PT), used in a wide range of fields, is a powerful tool for approximate solutions to complex problems, starting from the exact solution of a related, simpler problem. Advances in quantum computing, especially over the last several years, provide opportunities for alternatives to classical methods. Here we present a general quantum circuit estimating both the energy and eigenstates corrections that is far superior to the classical version when estimating second order energy corrections. We demonstrate our approach as applied to the two-site extended Hubbard model. In addition to numerical simulations based on qiskit, results on IBM\u2019s quantum hardware are also presented. Our work offers a general approach to study complex systems with quantum devices, with no training or optimization process needed to obtain the perturbative terms, which can be generalized to other Hamiltonian systems both in chemistry and physics.'\nauthor:\n- Junxu Li\n- Barbara Jones\n- 'Sabre Kais[^1]'\nbibliography:\n- 'ref.bib'\ntitle: Towards Perturbation Theory Methods on a Quantum Computer \n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nHistorically, Schr\u00f6dinger\u2019s techniques presented in 1926[@schrodinger1926undulatory] represent the first important application of perturbation theory (PT) for quantum systems, to obtain quantum eigenenergies. With the expansion of theory for atomic
In this paper, we engineer several features from in-game statistics to model players and create ratings that accurately represent their behavior and true performance level. We then compare the estimating power of our behavioral ratings against ratings created with three mainstream rating systems by predicting rank of players in four popular game modes from the competitive shooter genre. Our results show that the behavioral ratings present more accurate performance estimations while maintaining the interpretability of the created representations. Considering different aspects of the playing behavior of players and using behavioral ratings for matchmaking can lead to match-ups that are more aligned with players\u2019 goals and interests, consequently resulting in a more enjoyable gaming experience.'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'reference.bib'\ntitle: |\n Behavioral Player Rating in Competitive\\\n Online Shooter Games\n---\n\nrating systems, behavioral rating, rank prediction, online shooter games\n\nIntroduction\n============\n\nCompetitive online games have revolutionized" +"---\nabstract: 'Ongoing school closures and gradual reopenings have been occurring since the beginning of the COVID-19 pandemic. One substantial cost of school closure is breakdown in channels of reporting of violence against children, in which schools play a considerable role. There is, however, little evidence documenting how widespread such a breakdown in reporting of violence against children has been, and scant evidence exists about potential recovery in reporting as schools re-open. We study all formal criminal reports of violence against children occurring in Chile up to December 2021, covering physical, psychological, and sexual violence. This is combined with administrative records of school re-opening, attendance, and epidemiological and public health measures. 
We observe sharp declines in violence reporting at the moment of school closure across all classes of violence studied. Estimated reporting declines range from -17% (rape), to -43% (sexual abuse). While reports rise with school re-opening, recovery of reporting rates is slow. Conservative projections suggest that reporting gaps remained into the final quarter of 2021, nearly two years after initial school closures. Our estimates suggest that school closure and incomplete re-opening resulted in around 2,800 \u2018missing\u2019 reports of intra-family violence, 2,000 missing reports of sexual assault, and 230 missing" +"---\nabstract: |\n We consider a general type of non-Markovian impulse control problems under adverse non-linear expectation or, more specifically, the zero-sum game problem where the adversary player decides the probability measure. We show that the upper and lower value functions satisfy a dynamic programming principle (DPP). We first prove the dynamic programming principle (DPP) for a truncated version of the upper value function in a straightforward manner. Relying on a uniform convergence argument then enables us to show the DPP for the general setting. Following this, we use an approximation based on a combination of truncation and discretization to show that the upper and lower value functions coincide, thus establishing that the game has a value and that the DPP holds for the lower value function as well. 
Finally, we show that the DPP admits a unique solution and give conditions under which a saddle-point for the game exists.\n\n As an example, we consider a stochastic differential game (SDG) of impulse versus classical control of path-dependent stochastic differential equations (SDEs).\nauthor:\n- 'Magnus Perninge[^1]'\nbibliography:\n- 'ImpNonlinExp\\_ref.bib'\ntitle: 'Non-Markovian Impulse Control Under Nonlinear Expectation[^2]'\n---\n\nIntroduction\n============\n\nWe solve a robust impulse control problem where the aim is to" +"---\nabstract: |\n The sub-barrier fusion hindrance has been observed in the domain of very low energies of astrophysical relevance. This phenomenon can be analyzed effectively using an uncomplicated straightforward elegant mathematical formula gleaned presuming diffused barrier with a Gaussian distribution. The mathematical formula for cross section of nuclear fusion reaction has been obtained by folding together a Gaussian function representing the fusion barrier height distribution and the expression for classical cross section of fusion assuming a fixed barrier. The variation of fusion cross section as a function of energy, thus obtained, describes well the existing data on sub-barrier heavy-ion fusion for lighter systems of astrophysical interest. Employing this elegant formula, cross sections of interacting nuclei from $^{16}$O + $^{18}$O to $^{12}$C + $^{198}$Pt, all of which were measured down to $<$ 10 $\\mu$b have been analyzed. The agreement of the present analysis with the measured values is comparable, if not better, than those calculated from more sophisticated calculations. The three parameters of this formula varies rather smoothly implying its usage in estimating the excitation function or extrapolating cross sections for pairs of interacting nuclei which are yet to be measured. 
Possible effects of neutron transfers on the hindrance in" +"---\nabstract: |\n \\\n We report a study of high-resolution microwave spectroscopy of nitrogen-vacancy centers in diamond crystals at and around zero magnetic field. We observe characteristic splitting and transition imbalance of the hyperfine transitions, which originate from level anti-crossings in the presence of a transverse effective field. We use pulsed electron spin resonance spectroscopy to measure the zero-field spectral features of single nitrogen-vacancy centers for clearly resolving such level anti-crossings. To quantitatively analyze the magnetic resonance behavior of the hyperfine spin transitions in the presence of the effective field, we present a theoretical model, which describes the transition strengths under the action of an arbitrarily polarized microwave magnetic field. Our results are of importance for the optimization of the experimental conditions for the polarization-selective microwave excitation of spin-1 systems in zero or weak magnetic fields.\naddress: '$^1$ Department of Physics, Indian Institute of Science Education and Research Bhopal, Madhya Pradesh, India'\nauthor:\n- 'Shashank Kumar$^1$, Pralekh Dubey$^1$, Sudhan Bhadade$^1$, Jemish Naliyapara$^1$, Jayita Saha$^1$, and Phani Peddibhotla$^1$ [^1]'\nbibliography:\n- 'References.bib'\ntitle: 'High-resolution spectroscopy of a single nitrogen-vacancy defect at zero magnetic field'\n---\n\n[*Keywords*]{}: nitrogen-vacancy center, zero-field electron spin resonance spectroscopy, quantum sensing\n\nIntroduction\n============\n\nNitrogen-vacancy (NV) center, a" +"---\nabstract: 'Models of many-species ecosystems, such as the Lotka-Volterra and replicator equations, suggest that these systems generically exhibit near-extinction processes, where population sizes go very close to zero for some time before rebounding, accompanied by a slowdown of the dynamics (aging). 
Here, we investigate the connection between near-extinction and aging by introducing an exactly solvable many-variable model, where the time derivative of each population size vanishes both at zero and some finite maximal size. We show that aging emerges generically when random interactions are taken between populations. Population sizes remain exponentially close (in time) to the absorbing values for extended periods of time, with rapid transitions between these two values. The mechanism for aging is different from the one at play in usual glassy systems: at long times, the system evolves in the vicinity of unstable fixed points rather than marginal ones.'\nauthor:\n- Thibaut Arnoulx de Pirey and Guy Bunin\nbibliography:\n- 'My\\_Library.bib'\ntitle: 'Aging by near-extinctions in many-variable interacting populations'\n---\n\nInteractions between species in ecosystems may lead to large fluctuations in their population sizes. Theoretical models play a central role in understanding these fluctuations in nature and experiments, both for several species [@beninca2015species; @fussmann2000crossing; @gause_volterra; @venturelli2018deciphering]" +"---\nabstract: 'Large pretrained models can be fine-tuned with differential privacy to achieve performance approaching that of non-private models. A common theme in these results is the surprising observation that high-dimensional models can achieve favorable privacy-utility trade-offs. This seemingly contradicts known results on the model-size dependence of differentially private convex learning and raises the following research question: When does the performance of differentially private learning not degrade with increasing model size? We identify that the magnitudes of gradients projected onto subspaces is a key factor that determines performance. 
To precisely characterize this for private convex learning, we introduce a condition on the objective that we term *restricted Lipschitz continuity* and derive improved bounds for the excess empirical and population risks that are dimension-independent under additional conditions. We empirically show that in private fine-tuning of large language models, gradients obtained during fine-tuning are mostly controlled by a few principal components. This behavior is similar to conditions under which we obtain dimension-independent bounds in convex settings. Our theoretical and empirical results together provide a possible explanation for the recent success of large-scale private fine-tuning. Code to reproduce our results can be found at .'\nauthor:\n- |\n [^1] Xuechen Li[^2] Daogao Liu[^3]" +"---\nabstract: 'Analog to digital converters (ADCs) act as a bridge between the analog and digital domains. Two important attributes of any ADC are sampling rate and its dynamic range. For bandlimited signals, the sampling should be above the Nyquist rate. It is also desired that the signals\u2019 dynamic range should be within that of the ADC\u2019s; otherwise, the signal will be clipped. Nonlinear operators such as modulo or companding can be used prior to sampling to avoid clipping. To recover the true signal from the samples of the nonlinear operator, either high sampling rates are required or strict constraints on the nonlinear operations are imposed, both of which are not desirable in practice. In this paper, we propose a generalized flexible nonlinear operator which is sampling efficient. Moreover, by carefully choosing its parameters, clipping, modulo, and companding can be seen as special cases of it. We show that bandlimited signals are uniquely identified from the nonlinear samples of the proposed operator when sampled above the Nyquist rate. Furthermore, we propose a robust algorithm to recover the true signal from the nonlinear samples. 
We show that our algorithm has the lowest mean-squared error while recovering the signal for a given" +"---\nabstract: 'We study the ability of a social media platform with a political agenda to influence voting outcomes. Our benchmark is Condorcet\u2019s jury theorem, which states that the likelihood of a correct decision under majority voting increases with the number of voters. We show how information manipulation by a social media platform can overturn the jury theorem, thereby undermining democracy. We also show that sometimes the platform can do so only by providing information that is biased in the [*opposite direction*]{} of its preferred outcome. Finally, we compare manipulation of voting outcomes through social media to manipulation through traditional media.'\nauthor:\n- 'Ronen Gradwohl[^1]'\n- 'Yuval Heller[^2]'\n- 'Arye Hillman[^3]'\nbibliography:\n- 'socialMediaDemocracy.bib'\ntitle: Social Media and Democracy \n---\n\n**Keywords:** Bayesian persuasion; Political agenda; Information manipulation; Condorcet Jury Theorem; Biased signals. **JEL codes:** D72, D82, P16\n\nIntroduction {#sec:introduction}\n============\n\nIn early theories of voting [@hotelling1929stability; @downs1957economic], voters are fully informed and vote based on their given preferences. In subsequent expositions, however, voters do not know which policies or competing political candidates merit their support. They are then susceptible to influence by political advertising [@hillman1988domestic; @grossman2001special; @prat2002campaign] and by competing endorsements of policies and candidates [@grossman1999competing; @lichter2017theories] through mass media." +"---\nabstract: 'We present the verification, validation, and results of an approximate, analytic model for the radial profile of the stress, strain, and displacement within the toroidal field (TF) coil of a Tokamak at the inner midplane, where stress management is of the most concern. 
The model is designed to have high execution speed yet capture the essential physics, suitable for scoping studies, rapid evaluation of designs, and in the inner loop of an optimizer. It is implemented in the PROCESS fusion reactor systems code. The model solves a many-layer axisymmetric extended plane strain problem. It includes linear elastic deformation, Poisson effects, transverse-isotropic materials properties, radial Lorentz force profiles, and axial tension applied to layer subsets. The model does not include out-of-plane forces from poloidal field coils. We benchmark the model against 2D and 3D Finite Element Analyses (FEA) using Ansys and COMSOL. We find the Tresca stress accuracy of the model to be within 10% of the FEA result. We show that this model allows PROCESS to optimize a fusion pilot plant, subject to the TF coil winding pack and coil case yield constraints. This model sets an upper limit on the magnetic field strength at the coil surface" +"---\nabstract: 'Criticality has been proposed as a mechanism for the emergence of complexity, life, and computation, as it exhibits a balance between robustness and adaptability. In classic models of complex systems where structure and dynamics are considered homogeneous, criticality is restricted to phase transitions, leading either to robust (ordered) or adaptive (chaotic) phases in most of the parameter space. Many real-world complex systems, however, are not homogeneous. Some elements change in time faster than others, with slower elements (usually the most relevant) providing robustness, and faster ones being adaptive. Structural patterns of connectivity are also typically heterogeneous, characterized by few elements with many interactions and most elements with only a few. Here we take a few traditionally homogeneous dynamical models and explore their heterogeneous versions, finding evidence that heterogeneity extends criticality. 
Thus, parameter fine-tuning is not necessary to reach a phase transition and obtain the benefits of (homogeneous) criticality. Simply adding heterogeneity can extend criticality, making the search/evolution of complex systems faster and more reliable. Our results add theoretical support for the ubiquitous presence of heterogeneity in physical, social, and technological systems, as natural selection can exploit heterogeneity to evolve complexity \u201cfor free\". In artificial systems and biological design," +"---\nabstract: 'We demonstrate magnetometry of cultured neurons on a polymeric film using a superconducting flux qubit that works as a sensitive magnetometer in a microscale area. The neurons are cultured in Fe$^{3+}$ rich medium to increase magnetization signal generated by the electron spins originating from the ions. The magnetometry is performed by insulating the qubit device from the laden neurons with the polymeric film while keeping the distance between them around several micrometers. By changing temperature (12.5 \u2013 200 mK) and a magnetic field (2.5 \u2013 12.5 mT), we observe a clear magnetization signal from the neurons that is well above the control magnetometry of the polymeric film itself. From electron spin resonance (ESR) spectrum measured at 10 K, the magnetization signal is identified to originate from electron spins of iron ions in neurons. This technique to detect a bio-spin system can be extended to achieve ESR spectroscopy at the single-cell level, which will give the spectroscopic fingerprint of cells.'\nauthor:\n- Hiraku\u00a0Toida\n- Koji\u00a0Sakai\n- 'Tetsuhiko\u00a0F.\u00a0Teshima'\n- Masahiro\u00a0Hori\n- Kosuke\u00a0Kakuyanagi\n- Imran\u00a0Mahboob\n- Yukinori\u00a0Ono\n- Shiro\u00a0Saito\ntitle: Magnetometry of neurons using a superconducting qubit\n---\n\nIron is one of" +"---\nabstract: |\n We give an explicit formulation of the weight part of Serre\u2019s conjecture for $\\operatorname{GL}_2$ using Kummer theory. 
This avoids any reference to $p$-adic Hodge theory. The key inputs are a description of the reduction modulo $p$ of crystalline extensions in terms of certain \u201c$G_K$-Artin\u2013Scheier cocycles\u201d and a result of Abrashkin which describes these cocycles in terms of Kummer theory.\n\n An alternative explicit formulation in terms of local class field theory was previously given by Demb\u00e9l\u00e9\u2013Diamond\u2013Roberts in the unramified case and by the second author in general. We show that the description of Demb\u00e9l\u00e9\u2013Diamond\u2013Roberts can be recovered directly from ours using the explicit reciprocity laws of Br\u00fcckner\u2013Shaferevich\u2013Vostokov. These calculations illustrate how our use of Kummer theory eliminates certain combinatorial complications appearing in these two papers.\nauthor:\n- 'Robin Bartlett & Misja F.A. Steinmetz'\nbibliography:\n- 'biblio.bib'\ndate: 24 June 2022\ntitle: 'Explicit Serre weights for $\\operatorname{GL}_2$ via Kummer theory'\n---\n\nIntroduction\n============\n\nOverview {#overview .unnumbered}\n--------\n\nSerre conjectured in [@ser87] that every continuous irreducible odd representation $\\overline{\\rho}:G_{\\mathbb{Q}} \\rightarrow \\operatorname{GL}_2(\\overline{\\mathbb{F}}_p)$ arose as the reduction modulo $p$ of the Galois representation attached to a modular form. Furthermore, Serre predicted the possible weights of the relevant modular forms in terms of" +"---\nabstract: 'We present a critique of the many-world interpretation of quantum mechanics, based on different \u201cpictures\u201d that describe the time evolution of an isolated quantum system. Without an externally imposed frame to restrict these possible pictures, the theory cannot yield non-trivial interpretational statements. This is analogous to Goodman\u2019s famous \u201cgrue-bleen\u201d problem of language and induction. 
Using a general framework applicable to many kinds of dynamical theories, we try to identify the kind of additional structure (if any) required for the meaningful interpretation of a theory. We find that the \u201cgrue-bleen\u201d problem is not restricted to quantum mechanics, but also affects other theories including classical Hamiltonian mechanics. For all such theories, absent external frame information, an isolated system has no interpretation.'\nauthor:\n- Benjamin Schumacher\n- 'Michael D. Westmoreland'\ntitle: 'Interpretation of quantum theory: the quantum \u201cgrue-bleen\u201d problem'\n---\n\nIntroduction\n============\n\nThe many-worlds interpretation\n------------------------------\n\nAny critique of the many-worlds interpretation of quantum mechanics ought to begin by praising it. In the simplest form of the interpretation, such as that presented by Everett in 1957 [@everett1957], the universe is regarded as a closed quantum system. Its state vector (Everett\u2019s \u201cuniversal wave function\u201d) evolves unitarily according to an internal Hamiltonian. Measurements" +"---\nabstract: 'Since their introduction by G.\u00a0Gr\u00e4tzer and E.\u00a0Knapp in 2007, more than four dozen papers have been devoted to finite slim planar semimodular lattices (in short, SPS lattices or slim semimodular lattices) and to some related fields. In addition to distributivity, there have been seven known properties of the congruence lattices of these lattices. The first two properties were proved by G.\u00a0Gr\u00e4tzer, the next four by the present author, while the seventh was proved jointly by G.\u00a0Gr\u00e4tzer and the present author. Five out of the seven properties were found and proved by using lamps, which are lattice theoretic tools introduced by the present author in a 2021 paper. Here, using lamps, we present infinitely many new properties. 
Lamps also allow us to strengthen the seventh previously known property, and they lead to an algorithm of exponential time to decide whether a finite distributive lattice can be represented as the congruence lattice of an SPS lattice. Some new properties of lamps are also given.'\naddress: 'University of Szeged, Bolyai Institute. Szeged, Aradi v\u00e9rtan\u00fak tere 1, HUNGARY 6720'\nauthor:\n- G\u00e1bor Cz\u00e9dli\ntitle: Notes on congruence lattices and lamps of slim semimodular lattices\n---\n\n[^1]\n\nIntroduction {#S:Introduction}" +"---\nabstract: |\n Motivated by cognitive experiments providing evidence that large crossing-angles do not impair the readability of a graph drawing, RAC (Right Angle Crossing) drawings were introduced to address the problem of producing readable representations of non-planar graphs by supporting the optimal case in which all crossings form $90^\\circ$ angles.\n\n In this work, we make progress on the problem of finding RAC drawings of graphs of low degree. In this context, a long-standing open question asks whether all degree-$3$ graphs admit straight-line RAC drawings. This question has been positively answered for the Hamiltonian degree-3 graphs. We improve on this result by extending to the class of $3$-edge-colorable degree-$3$ graphs. When each edge is allowed to have one bend, we prove that degree-$4$ graphs admit such RAC drawings, a result which was previously known only for degree-$3$ graphs. Finally, we show that $7$-edge-colorable degree-$7$ graphs admit RAC drawings with two bends per edge. 
This improves over the previous result on degree-$6$ graphs.\nauthor:\n- Patrizio\u00a0Angelini\n- 'Michael\u00a0A.\u00a0Bekos'\n- Julia\u00a0Katheder\n- Michael\u00a0Kaufmann\n- Maximilian\u00a0Pfister\nbibliography:\n- 'bibliography.bib'\ntitle: RAC Drawings of Graphs with Low Degree\n---\n\nIntroduction\n============\n\nIn the literature, there is a wealth" +"---\nabstract: 'We perform a Bethe-Salpeter equation (BSE) evaluation of composite scalar boson masses in order to verify how these masses can be smaller than the composition scale. The calculation is developed with a constituent self-energy dependent on its mass anomalous dimension ($\\gamma$), and we obtain a relation showing how the scalar mass decreases as $\\gamma$ is increased. We also discuss how fermionic corrections to the BSE kernel shall decrease the scalar mass, whose effect can be as important as the one of a large $\\gamma$. An estimate of the top quark loop effect that must appear in the BSE calculation gives a lower bound on the composite scalar mass.'\nauthor:\n- 'A. Doff$^{a}$ and A. A. Natale$^{b}$'\ntitle: Composite scalar boson mass dependence on the constituent mass anomalous dimension \n---\n\nIntroduction\n============\n\nThe discovery of the Higgs boson at the LHC [@atlas; @cms] completed the Standard Model (SM), where a scalar boson sector is present as proposed long ago [@sw]. In many extensions of this model this scalar sector is even larger than the SM one, although experimental signals of new particles belonging to this sector are still missing. Furthermore, there are theoretical shortcomings about this scalar sector [@kw;" +"---\nabstract: 'In this work, we derive a correct expression for the one\u2013component plasma (OCP) energy via the angular\u2013averaged Ewald potential (AAEP). Unlike E.\u00a0Yakub and C.\u00a0Ronchi (J. Low Temp. Phys. 
139, 633 (2005)), who had tried to obtain the same energy expression from a two\u2013component plasma model, we used the original Ewald potential for an OCP. A constant in the AAEP was determined using the cluster expansion in the limit of weak coupling. The potential has a simple form suitable for effective numerical simulations. To demonstrate the advantages of the AAEP, we performed a number of Monte\u2013Carlo simulations for an OCP with up to a million particles in a wide range of the coupling parameter. Our computations turned out at least two orders of magnitude more effective than those with a traditional Ewald potential. A unified approach is offered for the determination of the thermodynamic limit in the whole investigated range. Our results are in good agreement with both theoretical data for a weakly coupled OCP and previous numerical simulations. We hope that the AAEP will be useful in path integral Monte Carlo simulations of the uniform electron gas.'\nauthor:\n- 'G. S. Demyanov'\n- 'P. R. Levashov'" +"---\nabstract: 'Tuple interpretations are a class of algebraic interpretation that subsumes both polynomial and matrix interpretations as it does not impose simple termination and allows non-linear interpretations. It was developed in the context of higher-order rewriting to study derivational complexity of algebraic functional systems. In this short paper, we continue our journey to study the complexity of higher-order TRSs by tailoring tuple interpretations to deal with innermost runtime complexity.'\nauthor:\n- Cynthia Kop\n- Deivid Vale\nbibliography:\n- 'references.bib'\ntitle: 'Tuple Interpretations and Applications to Higher-Order Runtime Complexity'\n---\n\nIntroduction\n============\n\nThe step-by-step computational model induced by term rewriting naturally gives rise to a *complexity* notion. Here, complexity is understood as the number of rewriting steps needed to reach a normal form. 
In the rewriting setting, a *complexity function* bounds the length of the longest rewrite sequence parametrized by the size of the starting term. Two distinct complexity notions are commonly considered: derivational and runtime. In the former, the starting term is unrestricted which allows initial terms with nested function calls. The latter only considers rewriting sequences beginning with *basic* terms. Intuitively, basic terms are those where a single function call is performed with *data* objects as arguments.\n\nThere" +"---\nabstract: 'An arithmetical structure on a finite, connected graph without loops is given by an assignment of positive integers to the vertices such that, at each vertex, the integer there is a divisor of the sum of the integers at adjacent vertices, counted with multiplicity if the graph is not simple. Associated to each arithmetical structure is a finite abelian group known as its critical group. Keyes and Reiter gave an operation that takes in an arithmetical structure on a finite, connected graph without loops and produces an arithmetical structure on a graph with one fewer vertex. We study how this operation transforms critical groups. We bound the order and the invariant factors of the resulting critical group in terms of the original arithmetical structure and critical group. 
When the original graph is simple, we determine the resulting critical group exactly.'\naddress:\n- 'Department of Mathematics and Statistics, Villanova University, 800 Lancaster Ave (SAC 305), Villanova, PA 19085, USA'\n- 'Department of Mathematics, Niagara University, Niagara University, NY 14109, USA'\nauthor:\n- 'Alexander Diaz-Lopez'\n- Joel Louwsma\nbibliography:\n- 'CriticalGroupOperation.bib'\ntitle: |\n Critical groups of arithmetical structures\\\n under a generalized star-clique operation\n---\n\nIntroduction\n============\n\nThis paper describes the" +"---\nabstract: 'Gravitational lensing of fast radio bursts (FRBs) offers an exciting avenue for several cosmological applications. However, it is not yet clear how many such events future surveys will detect nor how to optimally find them. We use the known properties of FRBs to forecast detection rates of gravitational lensing on delay timescales from microseconds to years, corresponding to lens masses spanning fifteen orders of magnitude. We highlight the role of the FRB redshift distribution on our ability to observe gravitational lensing. We consider cosmological lensing of FRBs by stars in foreground galaxies and show that strong stellar lensing will dominate on microsecond timescales. Upcoming surveys such as DSA-2000 and CHORD will constrain the fraction of dark matter in compact objects (e.g. primordial black holes) and may detect millilensing events from intermediate mass black holes (IMBHs) or small dark matter halos. Coherent all-sky monitors will be able to detect longer-duration lensing events from massive galaxies, in addition to short time-scale lensing. Finally, we propose a new application of FRB gravitational lensing that will measure directly the circumgalactic medium of intervening galaxies.'\nauthor:\n- Liam Connor\n- Vikram Ravi\nbibliography:\n- 'frb\\_grav\\_lensing.bib'\ntitle: Stellar prospects for FRB gravitational lensing\n---" +"---\nauthor:\n- 'Munshi G. 
Mustafa'\ntitle: An Introduction to Thermal Field Theory and Some of its Application\n---\n\nIntroduction\n============\n\nConventional quantum field theory is formulated at zero temperature. It is a framework to describe a wide class of phenomena in particle physics in the energy range covered by all experiments, i.e., a tool to deal with complicated many-body problems or interacting systems. The theoretical predictions under this framework, for example the cross sections of particle collisions in an accelerator, describe experimental data extremely well. With some modifications, it also plays a crucial role in atomic, nuclear and condensed matter physics. However, our real world is certainly at non-zero temperature. It is natural to wonder when and to what extent effects arising due to non-zero temperature are relevant, and what new phenomena could arise due to a thermal background. To understand these, one needs a prescription of quantum field theory in a thermal background, and the general context of thermal field theory can be illustrated as below:\n\nIn Fig.\u00a0\\[simp\\_pro\\], the simple two-body process is displayed at zero temperature and it can be characterized by an observable such as the transition amplitude $\\langle q_1 q_2|p_1 p_2\\rangle$. \\[qft1\\]\n\n![Simple $2\\rightarrow 2$ process" +"---\nabstract: |\n **Context.** The availability of clean and diverse labeled data is a major roadblock for training models on complex tasks such as visual question answering (VQA). The extensive work on large vision-and-language models has shown that self-supervised learning is effective for pretraining multimodal interactions. In this technical report, we focus on visual representations. We review and evaluate self-supervised methods to leverage unlabeled images and pretrain a model, which we then fine-tune on a custom VQA task that allows controlled evaluation and diagnosis. We compare energy-based models (EBMs) with contrastive learning (CL). 
While EBMs are growing in popularity, they lack an evaluation on downstream tasks.\n\n **Findings.** Both EBMs and CL can learn representations from unlabeled images that enable training a VQA model on very little annotated data. In a simple setting similar to CLEVR, we find that CL representations also improve systematic generalization, and even match the performance of representations from a larger, supervised, ImageNet-pretrained model. However, we find EBMs to be difficult to train because of instabilities and high variability in their results. We, therefore, investigate other purported benefits of EBMs. They prove useful for OOD detection, but other results on supervised energy-based training and uncertainty calibration are" +"---\nabstract: 'Benefiting from the digitization of healthcare data and the development of computing power, machine learning methods are increasingly used in the healthcare domain. Fairness problems have been identified in machine learning for healthcare, resulting in an unfair allocation of limited healthcare resources or excessive health risks for certain groups. Therefore, addressing the fairness problems has recently attracted increasing attention from the healthcare community. However, the intersection of machine learning for healthcare and fairness in machine learning remains understudied. In this review, we build the bridge by exposing fairness problems, summarizing possible biases, sorting out mitigation methods and pointing out challenges along with opportunities for the future.'\nauthor:\n- Qizhang Feng\n- Mengnan Du\n- Na Zou\n- Xia Hu\ntitle: 'Fair Machine Learning in Healthcare: A Review'\n---\n\nIntroduction\n============\n\nWith the rapid digitization of health data and the growth of computing power, the use of machine learning has grown rapidly in many healthcare fields in recent years, leading healthcare into a new era. 
Many studies have attempted to implement machine learning methods on medical images, electronic health records (EHRs), clinical notes, and various other medical data\u00a0[@Kwak2019DeepHealthRA]. For example, machine learning algorithms have been used in medical" +"---\nabstract: 'Domain generalization typically requires data from multiple source domains for model learning. However, such strong assumption may not always hold in practice, especially in medical field where the data sharing is highly concerned and sometimes prohibitive due to privacy issue. This paper studies the important yet challenging single domain generalization problem, in which a model is learned under the worst-case scenario with only one source domain to directly generalize to different unseen target domains. We present a novel approach to address this problem in medical image segmentation, which extracts and integrates the semantic shape prior information of segmentation that are invariant across domains and can be well-captured even from single domain data to facilitate segmentation under distribution shifts. Besides, a test-time adaptation strategy with dual-consistency regularization is further devised to promote dynamic incorporation of these shape priors under each unseen domain to improve model generalizability. Extensive experiments on two medical image segmentation tasks demonstrate the consistent improvements of our method across various unseen domains, as well as its superiority over state-of-the-art approaches in addressing domain generalization under the worst-case scenario.'\nauthor:\n- 'Quande Liu^1^, Cheng Chen^1^, Qi Dou^1^, Pheng-Ann Heng^1,2^\\'\nbibliography:\n- 'refs.bib'\ntitle: |\n Single-domain Generalization in" +"---\nabstract: 'Semi-supervised learning is the problem of training an accurate predictive model by combining a small labeled dataset with a presumably much larger unlabeled dataset. 
Many methods for semi-supervised deep learning have been developed, including pseudolabeling, consistency regularization, and contrastive learning techniques. Pseudolabeling methods however are highly susceptible to confounding, in which erroneous pseudolabels are assumed to be true labels in early iterations, thereby causing the model to reinforce its prior biases and fail to generalize to strong predictive performance. We present a new approach to suppress confounding errors through a method we describe as Semi-supervised Contrastive Outlier removal for Pseudo Expectation Maximization (SCOPE). Like basic pseudolabeling, SCOPE is related to Expectation Maximization (EM), a latent variable framework which can be extended toward understanding cluster-assumption deep semi-supervised algorithms. However, unlike basic pseudolabeling which fails to adequately take into account the probability of the unlabeled samples given the model, SCOPE introduces outlier suppression in order to identify high confidence samples for pseudolabeling. Our results show that SCOPE improves semi-supervised classification accuracy for CIFAR-10 classification task using 40, 250 and 4000 labeled samples, as well as CIFAR-100 using 400, 2500, and 10000 labeled samples. Moreover, we show that SCOPE reduces the" +"---\nabstract: 'Preparing appropriate images for camera calibration is crucial to obtain accurate results. In this paper, new suggestions for preparing such data to alleviate the adverse effect of radial distortion for a calibration procedure using principal lines are developed through the investigations of: (i) identifying directions of checkerboard movements in an image which will result in maximum (and minimum) influence on the calibration results, and (ii) inspecting symmetry and monotonicity of such effect in (i) using the above principal lines. 
Accordingly, it is suggested that the estimation of the principal point should be based on linearly independent pairs of nearly parallel principal lines, with one member of each pair corresponding to a near 180\u00b0 rotation (in the image plane) of the other. Experimental results show that more robust and consistent calibration results for the foregoing estimation can actually be obtained, compared with the renowned algebraic methods which estimate distortion parameters explicitly.'\naddress: 'Department of Computer Science, National Yang Ming Chiao Tung University, Taiwan, ROC'\nbibliography:\n- 'strings.bib'\n- 'refs.bib'\ntitle: Visualizing and alleviating the effect of radial distortion on camera calibration using principal lines \n---\n\nCamera calibration, radial distortion, pose suggestion, principal line\n\nIntroduction {#sec:intro}\n============\n\nCamera calibration estimates intrinsic and" +"---\nabstract: 'We analyse the thermal fluctuations of magnetization textures in two stray field coupled elements, forming mesospins. To this end, the energy landscape associated with the thermal dynamics of the textures is mapped out and asymmetric energy barriers are identified. These barriers are modified by changing the gap that separates the mesospins. Moreover, the coupling between the edges leads to an anisotropy in the curvature of the energy surface at the metastable minima. This yields a dynamic mode splitting of the edge modes and affects the attempt switching frequencies. Thus, we elucidate the mechanism with which the magnons in the thermal bath generate the stochastic fluctuations of the magnetization at the edges.'\nauthor:\n- 'Samuel D. Sl\u00f6etjes'\n- Bj\u00f6rgvin Hj\u00f6rvarsson'\n- Vassilios Kapaklis\ntitle: Texture fluctuations and emergent dynamics in coupled nanomagnets\n---\n\nSingle domain magnetic nanoislands with binary magnetization states - *mesospins* - are used as building blocks for magnetic metamaterials. 
Contemporary examples are *e.g.* artificial spin ices (ASI) [@Wang2006; @Gilbert_Shakti; @Gilbert_tetris; @Perrin_Nature_2016; @Ostman_natphys_2018], Ising chains and lattices [@Arnalds_2DIsing; @Ostman_Ising_2018], and reconfigurable magnonic crystals [@gliga2020dynamics; @gartside2021reconfigurable; @kaffash2021tailoring]. These magnetic metamaterials can be viewed as having thermal fluctuations, associated with switching of the magnetic states of the mesospins [@kapaklis2014thermal;" +"---\nabstract: 'I discuss here three important roles where machine intelligence, brain and behaviour studies together may facilitate criminal law. First, predictive modelling using brain and behaviour data may support legal investigations by predicting categorical, continuous, and longitudinal *legal outcomes of interests* related to brain injury and mental illnesses. Second, psychological, psychiatric, and behavioural studies supported by machine learning algorithms may help predict human behaviour and actions, such as lies, biases, and visits to crime scenes. Third, machine learning models have been used to predict recidivism using clinical and criminal data whereas brain decoding is beginning to uncover one\u2019s thoughts and intentions based on brain imaging data. Having dispensed with achievements and promises, I examine concerns regarding the accuracy, reliability, and reproducibility of the brain- and behaviour-based assessments in criminal law, as well as questions regarding data possession, ethics, free will (and automatism), privacy, and security. Further, I will discuss issues related to predictability *vs.* explainability, population-level prediction *vs.* personalised prediction, and predicting future actions, and outline three potential scenarios where brain and behaviour data may be used as court evidence. Taken together, brain and behaviour decoding in legal exploration and decision-making at present is promising but primitive. 
The derived" +"---\nabstract: 'Generative models have emerged as an essential building block for many image synthesis and editing tasks. Recent advances in this field have also enabled high-quality 3D or video content to be generated that exhibits either multi-view or temporal consistency. With our work, we explore 4D generative adversarial networks (GANs) that learn unconditional generation of 3D-aware videos. By combining neural implicit representations with time-aware discriminator, we develop a GAN framework that synthesizes 3D video supervised only with monocular videos. We show that our method learns a rich embedding of decomposable 3D structures and motions that enables new visual effects of spatio-temporal renderings while producing imagery with quality comparable to that of existing 3D or video GANs.'\nauthor:\n- |\n Sherwin Bahmani$^{1}$ Jeong Joon Park$^{2}$ Despoina Paschalidou$^{2}$ Hao Tang$^{1}$\\\n Gordon Wetzstein$^{2}$ Leonidas Guibas$^{2}$ Luc Van Gool$^{1,3}$ Radu Timofte$^{1,4}$\\\n [$^{1}$ETH Z\u00fcrich $^{2}$Stanford University$^{3}$KU Leuven$^{4}$University of W\u00fcrzburg]{.nodecor}\nbibliography:\n- 'bibliography\\_long.bib'\n- 'bibliography.bib'\n- 'bibliography\\_custom.bib'\ntitle: '3D-Aware Video Generation'\n---\n\n![[**[3D-Aware video generation.]{}**]{} We show multiple frames and viewpoints of two 3D videos, generated using our model trained on the FaceForensics dataset\u00a0[@Rossler2019ICCV]. Our 4D GAN generates 3D content of high quality while permitting control of time and camera extrinsics. Video results can" +"---\nabstract: 'Much of the current software depends on open-source components, which in turn have complex dependencies on other open-source libraries. Vulnerabilities in open source therefore have potentially huge impacts. The goal of this work is to get a quantitative overview of the frequency and evolution of existing vulnerabilities in popular software repositories and package managers. 
To this end, we provide an up-to-date overview of the open source landscape and its most popular package managers, discuss approaches to map entries of the Common Vulnerabilities and Exposures (CVE) list to open-source libraries, and show the frequency and distribution of existing CVE entries with respect to popular programming languages.'\nauthor:\n- Tobias Dam\n- Sebastian Neumaier\nbibliography:\n- 'Bibliography.bib'\ntitle: 'Towards Measuring Vulnerabilities and Exposures in Open-Source Packages'\n---\n\nIntroduction\n============\n\nAccording to a 2021 open-source security report by Synopsys,[^1] 98% of 1.5k reviewed codebases depend on open-source components and libraries. Given the number of dependencies of medium- to large-size software projects, any vulnerability in open-source code has security implications in numerous software products and involves the risk of disclosing vulnerabilities either directly or through dependencies, as famously seen in the 2014 Heartbleed Bug[^2], a vulnerability in OpenSSL which exposed" +"---\nabstract: 'We obtain bounds on the numbers of intersections between triangulations as the conformal structure of a surface varies along a Teichm[\u00fc]{}ller geodesic contained in an $\\mathrm{SL}\\left(2,\\mathbb{R}\\right)$-orbit closure of rank 1 in the moduli space of Abelian differentials. For $0 \\leq \\theta \\leq 1$, we obtain an exponential bound on the number of closed geodesics in the orbit closure, of length at most $R$, that spend at least $\\theta$-fraction of their length in a region with short saddle connections.'\naddress: 'Department of Mathematics, Purdue University, 150 N. 
University Street, West Lafayette, IN 47907, United States'\nauthor:\n- John Rached\nbibliography:\n- 'bibliography.bib'\ntitle: 'Counting Closed Geodesics in Rank 1 $\\mathrm{SL}\\left(2,\\mathbb{R}\\right)$-orbit Closures'\n---\n\nIntroduction\n============\n\nMany problems in geometry and dynamical systems lead to the study of compact surfaces without boundary which admit flat metrics away from a finite set of conical singularities, where all the curvature is concentrated. A central and classical problem is the study of billiard trajectories, for which understanding flat surfaces obtained by gluing sides of a polygon is essential (see $\\cite{wright2016rational}$). It may be very difficult to say anything about an individual flat surface, but there is a natural linear action on the space of deformations" +"---\nabstract: 'Ad platforms require reliable measurement of advertising returns: what increase in performance (such as clicks or conversions) can an advertiser expect in return for additional budget on the platform? Even from the perspective of the platform, accurately measuring advertising returns is hard. Selection and omitted variable biases make estimates from observational methods unreliable, and straightforward experimentation is often costly or infeasible. We introduce *Asymmetric Budget Split*, a novel methodology for valid measurement of ad returns from the perspective of the platform. Asymmetric budget split creates small asymmetries in ad budget allocation across comparable partitions of the platform\u2019s userbase. By observing performance of the same ad at different budget levels while holding all other factors constant, the platform can obtain a valid measure of ad returns. The methodology is unobtrusive and cost-effective in that it does not require holdout groups or sacrifices in ad or marketplace performance. 
We discuss a successful deployment of asymmetric budget split to LinkedIn\u2019s Jobs Marketplace, an ad marketplace where it is used to measure returns from promotion budgets in terms of incremental job applicants. We outline operational considerations for practitioners and discuss further use cases such as budget-aware performance forecasting.'\nauthor:\n- Johannes Hermle" +"---\nabstract: 'Inverse problems consist in reconstructing signals from incomplete sets of measurements and their performance is highly dependent on the quality of the prior knowledge encoded via regularization. While traditional approaches focus on obtaining a unique solution, an emerging trend considers exploring multiple feasible solutions. In this paper, we propose a method to generate multiple reconstructions that fit both the measurements and a data-driven prior learned by a generative adversarial network. In particular, we show that, starting from an initial solution, it is possible to find directions in the latent space of the generative model that are null to the forward operator, and thus keep consistency with the measurements, while inducing significant perceptual change. 
Our exploration approach allows to generate multiple solutions to the inverse problem an order of magnitude faster than existing approaches; we show results on image super-resolution and inpainting problems.'\naddress: 'Department of Electronics and Telecommunications \u2013 Politecnico di Torino, Italy'\nbibliography:\n- 'biblio.bib'\ntitle: Exploring the solution space of linear inverse problems with GAN latent geometry\n---\n\nInverse problems, GANs\n\nIntroduction {#sec:intro}\n============\n\nLinear inverse problems are ubiquitous in the sciences as they are tasked with reconstructing a signal of interest from a set of" +"---\nabstract: 'Pre-trained code representation models such as CodeBERT have demonstrated superior performance in a variety of software engineering tasks, yet they are often heavy in complexity, quadratically with the length of the input sequence. Our empirical analysis of CodeBERT\u2019s attention reveals that CodeBERT pays more attention to certain types of tokens and statements such as keywords and data-relevant statements. Based on these findings, we propose [DietCode]{}, which aims at lightweight leverage of large pre-trained models for source code. [DietCode]{}simplifies the input program of CodeBERT with three strategies, namely, word dropout, frequency filtering, and an attention-based strategy that selects statements and tokens that receive the most attention weights during pre-training. Hence, it gives a substantial reduction in the computational cost without hampering the model performance. 
Experimental results on two downstream tasks show that [DietCode]{}provides comparable results to CodeBERT with 40% less computational cost in fine-tuning and testing.'\nauthor:\n- 'Zhaowei Zhang$^1$, Hongyu Zhang$^2$, Beijun Shen$^1$, Xiaodong Gu$^{1}$'\nbibliography:\n- 'references.bib'\ntitle: 'Diet Code Is Healthy: Simplifying Programs for Pre-trained Models of Code'\n---\n\n<ccs2012> <concept> <concept\\_id>10010147.10010178.10010179</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Natural language processing</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nPre-trained models of code such as CodeBERT\u00a0[@feng2020codebert] have been the cutting-edge program representation" +"---\nbibliography:\n- 'biblio.bib'\n---\n\n\u00a0\\\n[ Line Defect Quantum Numbers & Anomalies]{}\n\n\u00a0\\\nT. Daniel Brennan,$^1$ Clay C\u00f3rdova,$^1$ and Thomas T.\u00a0Dumitrescu$^2$\n\n\u00a0\\\n$^1$[*Kadanoff Center for Theoretical Physics & Enrico Fermi Institute, University of Chicago*]{}\\\n$^2$[*Mani L.Bhaumik Institute for Theoretical Physics, Department of Physics and Astronomy,*]{}\\\n[*University of California, Los Angeles, CA 90095, USA*]{}\\\n\u00a0\\\n\nWe explore the connection between the global symmetry quantum numbers of line defects and \u2019t\u00a0Hooft anomalies. Relative to local (point) operators, line defects may transform projectively under both internal and spacetime symmetries. This phenomenon is known as symmetry fractionalization, and in general it signals the presence of certain discrete \u2019t Hooft anomalies. We describe this in detail in the context of free Maxwell theory in four dimensions. This understanding allows us to deduce the \u2019t Hooft anomalies of non-Abelian gauge theories with renormalization group flows into Maxwell theory by analyzing the fractional quantum numbers of dynamical magnetic monopoles. 
We illustrate this method in $SU(2)$ gauge theories with matter fermions in diverse representations of the gauge group. For adjoint matter, we uncover a mixed anomaly involving the 0-form and 1-form symmetries, extending previous results. For\u00a0$SU(2)$ QCD with fundamental fermions, the \u2019t\u00a0Hooft anomaly" +"---\nabstract: 'We propose and demonstrate a technique to control [the balance between]{} the two amplitudes of a dual-wavelength laser based on a phase-controlled optical feedback. The feedback cavity length is adjusted to achieve a relative phase shift between the desired emission wavelengths, introducing a boost in gain for one wavelength while the other wavelength experiences additional losses. Tuning the optical feedback phase proves to be an effective way to control the gain & losses, and, thus, to select one or balance the amplitude of the two emission wavelengths. This concept can be easily adapted to any platform, wavelength range and wavelength separations providing that a sufficient carrier coupling and gain can be obtained for each mode. To demonstrate the feasibility and to evaluate the performance of this approach, we have implemented two dual-wavelength lasers with different spectral separations together with individual optical feedback loops onto a InP generic foundry platform emitting around 1550\u00a0nm. An electro-optical-phase-modulator is used to tune the feedback phase. With this single control parameter, we successfully achieved extinction ratios of up to 38.6\u00a0dB for a 10\u00a0nm wavelength separation and up to 49\u00a0dB for a 1\u00a0nm wavelength separation.'\nauthor:\n- 'Robert\u00a0Pawlus," +"---\nabstract: 'We present *latent combinational game design*\u2014an approach for generating playable games that blend a given set of games in a desired combination using deep generative latent variable models. 
We use Gaussian Mixture Variational Autoencoders (GMVAEs) which model the VAE latent space via a mixture of Gaussian components. Through supervised training, each component encodes levels from one game and lets us define blended games as linear combinations of these components. This enables generating new games that blend the input games as well as controlling the relative proportions of each game in the blend. We also extend prior blending work using conditional VAEs and compare against the GMVAE and additionally introduce a hybrid conditional GMVAE (CGMVAE) architecture which lets us generate whole blended levels and layouts. Results show that [these]{} approaches can generate playable games that blend the input games in specified combinations. We use both platformers and dungeon-based games to demonstrate our results.'\nauthor:\n- 'Anurag Sarkar, Seth Cooper [^1]'\nbibliography:\n- 'refs-custom.bib'\ntitle: Latent Combinational Game Design\n---\n\nprocedural content generation, combinational creativity, game blending, variational autoencoder\n\nIntroduction\n============\n\nMethods for Procedural Content Generation via Machine Learning (PCGML) [@summerville2017procedural]" +"---\nabstract: 'Extracting cybersecurity entities such as attackers and vulnerabilities from unstructured network texts is an important part of security analysis, since fast and accurate extraction techniques can help researchers improve their working efficiency. However, the sparsity of intelligence data, resulting from the high-frequency variations and the randomness of cybersecurity entity names, makes it difficult for current methods to perform well in extracting security-related concepts and entities. To this end, we propose a semantic augmentation method which incorporates different linguistic features to enrich the representation of input tokens to detect and classify cybersecurity names over unstructured text. 
In particular, we encode and aggregate the constituent feature, morphological feature and part of speech feature for each input token to improve the robustness of the method. More than that, a token gets augmented semantic information from its most similar *K* words in cybersecurity domain corpus where an attentive module is leveraged to weigh differences of the words, and from contextual clues based on a large-scale general field corpus. We have conducted experiments on the cybersecurity datasets *DNRTI* and *MalwareTextDB*, and the results demonstrate the effectiveness of the proposed method.'\nauthor:\n- \nbibliography:\n- 'sample.bib'\ntitle: |\n Multi-features based Semantic Augmentation" +"---\nabstract:\n- \n- 'Extracting the outcome of a quantum computation is a difficult task. In many cases, the quantum phase estimation algorithm is used to digitally encode a value in a quantum register whose amplitudes\u2019 magnitudes reflect the discrete $\\operatorname{sinc}$ function. In the standard implementation the value is approximated by the most frequent outcome, however, using the frequencies of other outcomes allows for increased precision without using additional qubits. One existing approach is to use Maximum Likelihood Estimation, which uses the frequencies of all measurement outcomes. We provide and analyze several alternative estimators, the best of which rely on only the two most frequent measurement outcomes. The Ratio-Based Estimator uses a closed form expression for the decimal part of the encoded value using the ratio of the two most frequent outcomes. The Coin Approximation Estimator relies on the fact that the decimal part of the encoded value is very well approximated by the parameter of the Bernoulli process represented by the magnitudes of the largest two amplitudes. 
We also provide additional properties of the discrete $\\operatorname{sinc}$ state that could be used to design other estimators.'\nauthor:\n- Charlee Stefanski\n- Vanio Markov\n- Constantin Gonciulea\nbibliography:\n- 'main.bib'\ntitle:" +"---\nabstract: 'Two-dimensional systems such as quantum spin liquids or fractional quantum Hall systems exhibit anyonic excitations that possess more general statistics than bosons or fermions. This exotic statistics makes it challenging to solve even a many-body system of non-interacting anyons. We introduce an algorithm that allows to simulate anyonic tight-binding Hamiltonians on two-dimensional lattices. The algorithm is directly derived from the low energy topological quantum field theory and is suited for general abelian and non-abelian anyon models. As concrete examples, we apply the algorithm to study the energy level spacing statistics, which reveals level repulsion for free semions, Fibonacci anyons and Ising anyons. Additionally, we simulate non-equilibrium quench dynamics, where we observe that the density distribution becomes homogeneous for large times - indicating thermalization.'\nauthor:\n- Nico Kirchner\n- Darragh Millar\n- 'Babatunde M. Ayeni'\n- Adam Smith\n- 'Joost K. Slingerland'\n- Frank Pollmann\nbibliography:\n- './references.bib'\ntitle: 'Numerical simulation of non-abelian anyons'\n---\n\nIntroduction\n============\n\nTwo-dimensional systems can support topological point-like quasiparticle excitations, so-called anyons\u00a0[@Leinaas1977; @PhysRevLett.48.1144; @PhysRevLett.49.957], which obey statistics beyond regular bosons or fermions. These anyons can lead to novel physical phenomena and have thus attracted considerable attention over the past decades. For instance, their" +"---\nabstract: 'Extensions of the Standard Model of particle physics with new Abelian gauge groups allow for kinetic mixing between the new gauge bosons and the hypercharge gauge boson, resulting in mixing with the photon. 
In many models the mixing with the hypercharge gauge boson captures only part of the kinetic mixing term with the photon, since the new gauge bosons can also mix with the neutral component of the $SU(2)_L$ gauge bosons. We take these contributions into account and present a consistent description of kinetic mixing for general Abelian gauge groups both in the electroweak symmetric and the broken phase. We identify an effective operator that captures the kinetic mixing with $SU(2)_L$ and demonstrate how renormalisable contributions arise if the charged fields only obtain their masses from electroweak symmetry breaking. For the first time, a low-energy theorem for the couplings of novel Abelian gauge bosons with the Standard Model Higgs boson is derived from the one-loop kinetic mixing amplitudes.'\nauthor:\n- Martin Bauer\n- Patrick Foldenauer\nbibliography:\n- 'references.bib'\ntitle: 'A Consistent Theory of Kinetic Mixing and the Higgs Low-Energy Theorem'\n---\n\n***Introduction.***\u00a0\\[sec:intro\\]Extensions of the Standard Model (SM) with an additional Abelian gauge group allow for a unique" +"---\nabstract: 'This paper reports on insights by robotics researchers that participated in a 5-day robot-assisted nuclear disaster response field exercise conducted by Kerntechnische Hilfdienst GmbH (KHG) in Karlsruhe, Germany. The German nuclear industry established KHG to provide a robot-assisted emergency response capability for nuclear accidents. We present a systematic description of the equipment used; the robot operators\u2019 training program; the field exercise and robot tasks; and the protocols followed during the exercise. Additionally, we provide insights and suggestions for advancing disaster response robotics based on these observations. Specifically, the main degradation in performance comes from the cognitive and attentional demands on the operator. 
Furthermore, robotic platforms and modules should aim to be robust and reliable in addition to their ease of use. Last, as emergency response stakeholders are often skeptical about using autonomous systems, we suggest adopting a variable autonomy paradigm to integrate autonomous robotic capabilities with the human-in-the-loop gradually. This middle ground between teleoperation and autonomy can increase end-user acceptance while directly alleviating some of the operator\u2019s robot control burden and maintaining the resilience of the human-in-the-loop.'\nauthor:\n- |\n Manolis Chiou^2^, Georgios-Theofanis Epsimos^1^, Grigoris Nikolaou^1^, Pantelis Pappas^1^, Giannis Petousakis^2^,\\\n Stefan M\u00fchl^3^, and Rustam Stolkin^2^ [^1][^2][^3][^4]\nbibliography:\n-" +"---\nabstract: 'We introduce a novel approach for temporal activity segmentation with timestamp supervision. Our main contribution is a graph convolutional network, which is learned in an end-to-end manner to exploit both frame features and connections between neighboring frames to generate dense framewise labels from sparse timestamp labels. The generated dense framewise labels can then be used to train the segmentation model. In addition, we propose a framework for alternating learning of both the segmentation model and the graph convolutional model, which first initializes and then iteratively refines the learned models. 
Detailed experiments on four public datasets, including 50 Salads, GTEA, Breakfast, and Desktop Assembly, show that our method is superior to the multi-layer perceptron baseline, while performing on par with or better than the state of the art in temporal activity segmentation with timestamp supervision.'\nauthor:\n- |\n Hamza Khan\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Sanjay Haresh\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Awais Ahmed\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Shakeeb Siddiqui\\\n Andrey Konin\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0M. Zeeshan Zia\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Quoc-Huy Tran [^1]\nbibliography:\n- 'references.bib'\ntitle: '**Timestamp-Supervised Action Segmentation with Graph Convolutional Networks**'\n---\n\nIntroduction {#sec:introduction}\n============\n\nHuman activity understanding in videos has been an important research topic in the fields of robotics and computer vision, with various applications ranging from human-robot interaction, assisted living, healthcare, home" +"-1.0cm 0.9cm -0.5cm\n\n[[[**[ (In)equivalence of Metric-Affine and Metric Effective Field Theories]{}**]{}]{}]{}\\\n\nand [**Alberto Salvio** ]{}\n\n**\n\nPhysics Department, University of Rome Tor Vergata,\\\nvia della Ricerca Scientifica, I-00133 Rome, Italy\\\n\nI. N. F. N. - Rome Tor Vergata,\\\nvia della Ricerca Scientifica, I-00133 Rome, Italy\\\n\n\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2013\n\n[**Abstract**]{}\n\nIn a geometrical approach to gravity the metric and the (gravitational) connection can be independent and one deals with metric-affine theories. 
We construct the most general action of metric-affine effective field theories, including a generic matter sector, where the connection does not carry additional dynamical fields. Among other things, this helps in identifying the complement set of effective field theories where there are other dynamical fields, which can have an interesting phenomenology. Within the latter set, we study in detail a vast class where the Holst invariant (the contraction of the curvature with the Levi-Civita antisymmetric tensor) is a dynamical pseudoscalar. In the Einstein-Cartan case (where the connection is metric compatible and fermions can be introduced) we also comment on the possible phenomenological role of dynamical dark photons from torsion and compute interactions of the above-mentioned pseudoscalar with a generic matter sector and the metric. Finally, we show that in an arbitrary" +"---\nabstract: 'We developed the functional form of the two-point correlation function under the approximation of fixed particle number density $\\bar{n}$.We solved the quasi-linear partial differential equation (PDE) through the method of characteristics to obtain the parametric solution for the canonical ensemble. We attempted many functional forms and concluded that the functional form should be such that the two-point correlation function should go to zero as the value of system temperature increases or the separation between the galaxies becomes large . Also we studied the graphical behavior of the developed two-point correlation function for large values of temperature $T$ and spatial separation $r$. The behavior of the two-point function was also studied from the temperature measurement of clusters in the red-shift range of $0.023 - 0.546$.'\nauthor:\n- Durakhshan Ashraf Qadri\n- 'Abdul W. Khanday'\n- 'Prince A. 
Ganai'\ntitle: 'A simplistic approach to the study of two-point correlation function in galaxy clusters'\n---\n\nIntroduction\n============\n\nGalaxies in clusters serve as robust cosmological observatories and special astrophysical laboratories. Thus, they provide a veritable understanding about the Universe at large scale. The Universe exhibits the hierarchical behavior which exists at all scales, from the smallest quantum particles to the ultimately vast" +"---\nabstract: 'Ultrasound (US) is widely used for its advantages of real-time imaging, radiation-free and portability. In clinical practice, analysis and diagnosis often rely on US sequences rather than a single image to obtain dynamic anatomical information. This is challenging for novices to learn because practicing with adequate videos from patients is clinically unpractical. In this paper, we propose a novel framework to synthesize high-fidelity US videos. Specifically, the synthesis videos are generated by animating source content images based on the motion of given driving videos. Our highlights are three-fold. First, leveraging the advantages of self- and fully-supervised learning, our proposed system is trained in weakly-supervised manner for keypoint detection. These keypoints then provide vital information for handling complex high dynamic motions in US videos. Second, we decouple content and texture learning using the dual decoders to effectively reduce the model learning difficulty. Last, we adopt the adversarial training strategy with GAN losses for further improving the sharpness of the generated videos, narrowing the gap between real and synthesis videos. We validate our method on a large in-house pelvic dataset with high dynamic motion. Extensive evaluation metrics and user study prove the effectiveness of our proposed method.'\nauthor:\n- 'Jiamin" +"---\nabstract: 'Flows in rivers can be strongly affected by obstacles to flow or artificial structures such as bridges, weirs and dams. 
This is especially true during floods, where significant backwater effects or diversion of flow out of bank can result. However, within contemporary industry practice, linear features such as bridges are often modelled using coarse approximations, empirically based methods or are omitted entirely. Presented within this paper is a novel Riemann solver which is capable of modelling the influence of such features within hydrodynamic flood models using finite volume schemes to solve the shallow water equations. The solution procedure represents structures at the interface between neighbouring cells and uses a combination of internal boundary conditions and a different form of the conservation laws in the adjacent cells to resolve numerical fluxes across the interface. Since the procedure only applies to the cells adjacent to the interface at which a structure is being modelled, the method is therefore potentially compatible with existing hydrodynamic models. Comparisons with validation data collected from a state of the art research flume demonstrate that the solver is suitable for modelling a range of flow conditions and structure configurations such as bridges and gates.'\naddress: 'School" +"---\nabstract: 'Radio Frequency Interference (RFI) corrupts astronomical measurements, thus affecting the performance of radio telescopes. To address this problem, supervised segmentation models have been proposed as candidate solutions to RFI detection. However, the unavailability of large labelled datasets, due to the prohibitive cost of annotating, makes these solutions unusable. To solve these shortcomings, we focus on the inverse problem; training models on only uncontaminated emissions thereby learning to discriminate RFI from all known astronomical signals and system noise. 
We use Nearest-Latent-Neighbours (NLN) - an algorithm that utilises both the reconstructions and latent distances to the nearest-neighbours in the latent space of generative autoencoding models for novelty detection. The uncontaminated regions are selected using *weak-labels* in the form of RFI flags (generated by classical RFI flagging methods) available from most radio astronomical data archives at no additional cost. We evaluate performance on two independent datasets, one simulated from the HERA telescope and another consisting of real observations from LOFAR telescope. Additionally, we provide a small expert-labelled LOFAR dataset (i.e., strong labels) for evaluation of our and other methods. Performance is measured using AUROC, AUPRC and the maximum F1-score for a fixed threshold. For the simulated HERA dataset we outperform the" +"---\nabstract: 'Non-normal modal logics, interpreted on neighbourhood models which generalise the usual relational semantics, have found application in several areas, such as epistemic, deontic, and coalitional reasoning. We present here preliminary results on reasoning in a family of modal description logics obtained by combining ${\\ensuremath{\\smash{\\mathcal{ALC}}}\\xspace}$ with non-normal modal operators. First, we provide a framework of terminating, correct, and complete tableau algorithms to check satisfiability of formulas in such logics with the semantics based on varying domains. 
We then investigate the satisfiability problems in fragments of these languages obtained by restricting the application of modal operators to formulas only, and interpreted on models with constant domains, providing tight complexity results.'\naddress:\n- 'Free University of Bozen-Bolzano'\n- 'University of Bergen, Norway'\nauthor:\n- Tiziano Dalmonte\n- Andrea Mazzullo\n- Ana Ozaki\nbibliography:\n- 'bib\\_nnmdl.bib'\ntitle: 'Reasoning in Non-normal Modal Description Logics'\n---\n\n\\[orcid=0000-0002-7153-0506, email=tiziano.dalmonte@unibz.it, \\]\n\n\\[orcid=0000-0001-8512-1933, email=andrea.mazzullo@unibz.it, \\]\n\n\\[orcid=0000-0002-3889-6207, email=ana.ozaki@uib.no, url=https://www.uib.no/en/persons/Ana.Ozaki, \\]\n\nNon-normal modal logics , Description logics , Tableau algorithms\n\nIntroduction\n============\n\nContexts involving epistemic and doxastic\u00a0[@Ago; @Bal; @Var1], agency-based\u00a0[@Brown; @Elg] and coalitional\u00a0[@Pau; @Tro], as well as deontic\u00a0[@AngEtAl; @Gob; @Wright], reasoning capabilities populate the wide spectrum of settings where modal logics have found natural applications." +"---\nabstract: '\\[abstract\\] Score-based generative models (SGMs) are a recently proposed paradigm for deep generative tasks and now show the state-of-the-art sampling performance. It is known that the original SGM design solves the two problems of the generative trilemma: i) sampling quality, and ii) sampling diversity. However, the last problem of the trilemma was not solved, i.e., their training/sampling complexity is notoriously high. To this end, distilling SGMs into simpler models, e.g., generative adversarial networks (GANs), is gathering much attention currently. We present an enhanced distillation method, called straight-path interpolation GAN (SPI-GAN), which can be compared to the state-of-the-art shortcut-based distillation method, called denoising diffusion GAN (DD-GAN). 
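The name "straight-path interpolation" suggests that SPI-GAN's training signal comes from points on the straight line between a noise sample and a data sample; assuming that reading (the precise path definition is the paper's), a minimal sketch:

```python
import random

def straight_path(x0, x1, t):
    """Point at time t in [0, 1] on the straight path from x0 (e.g. a
    noise sample) to x1 (e.g. a data sample); purely illustrative."""
    return [(1.0 - t) * a + t * b for a, b in zip(x0, x1)]

noise = [random.gauss(0.0, 1.0) for _ in range(4)]
data = [0.5, -1.0, 2.0, 0.0]
mid = straight_path(noise, data, 0.5)  # halfway interpolation
```

The contrast drawn in the abstract is that such interpolations are defined without simulating intermediate steps of the reverse SDE path.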
However, our method corresponds to an extreme method that does not use any intermediate shortcut information of the reverse SDE path, in which case DD-GAN fails to obtain good results. Nevertheless, our straight-path interpolation method greatly stabilizes the overall training process. As a result, SPI-GAN is one of the best models in terms of the sampling quality/diversity/time for CIFAR-10, CelebA-HQ-256, and LSUN-Church-256.'\nbibliography:\n- 'example\\_paper.bib'\ntitle: 'SPI-GAN: Distilling Score-based Generative Models with Straight-Path Interpolations'\n---\n\nIntroduction\n============\n\nGenerative models are one of the most popular research topics for deep learning. There have been" +"---\nabstract: 'The $\\Upsilon$ invariant is a concordance invariant using knot Floer homology. F\u00f6ldv\u00e1ri[@Foldvari-grid-upsilon] gives a combinatorial restructure of it using grid homology. We extend the combinatorial $\\Upsilon$ invariant for balanced spatial graphs. Regarding links as spatial graphs, we give an upper and lower bound for the $\\Upsilon$ invariant when two links are connected by a cobordism. Also, we show that the combinatorial $\\Upsilon$ invariant is a concordance invariant for knots.'\nauthor:\n- Hajime Kubota\nbibliography:\n- 'grid.bib'\ntitle: 'Concordance invariant $\\Upsilon$ for balanced spatial graphs using grid homology'\n---\n\nIntroduction\n============\n\nThe $\\tau$ invariant and the $\\Upsilon$ invariant are defined by Ozsv\u00e1th, Szab\u00f3 [@Knot-Floer-homology-and-the-four-ball-genus],[@Concordance-homomorphisms-from-knot-Floer-homology] using knot Floer homology. The $\\tau$ invariant and the $\\Upsilon$ invariant give homomorphisms from the (smooth) knot concordance group $\\mathcal{C}$ to $\\mathbb{Z}$ and lower bounds for the slice genus and the unknotting number. The $\\tau$ invariant is known to prove the Milnor conjecture $g_4(T_{p,q})=\\frac{1}{2}(p-1)(q-1)$ [@grid-tau-sarkar]. 
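The Milnor conjecture formula quoted above, $g_4(T_{p,q})=\frac{1}{2}(p-1)(q-1)$, is easy to evaluate directly (for coprime $p,q$ at least one of $p-1$, $q-1$ is even, so the division is exact):

```python
def slice_genus_torus_knot(p, q):
    """Slice (four-ball) genus of the (p, q) torus knot via the Milnor
    conjecture formula g_4 = (p-1)(q-1)/2."""
    return (p - 1) * (q - 1) // 2

print(slice_genus_torus_knot(2, 3))  # trefoil -> 1
print(slice_genus_torus_knot(3, 4))  # -> 3
```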
The $\\Upsilon$ invariant is a family of concordance invariants $\\Upsilon_t$ defined for every $t\\in[0,2]$ and the slope of the $\\Upsilon$ invariant at $t=0$ equals the value of the $\\tau$ invariant, so the $\\Upsilon$ invariant is stronger than the $\\tau$ invariant. The $\\Upsilon$ invariant shows that the subgroup of $\\mathcal{C}$ generated" +"---\nabstract: 'A number of late \\[WC\\] stars have unique infrared properties, not found among the non-\\[WC\\] planetary nebulae, and together define a class of IR-\\[WC\\] stars. They have unusual IRAS colours, resembling stars in the earliest post-AGB evolution and possibly related to PAH formation. Most or all show a double chemistry, with both a neutral (molecular) oxygen-rich and an inner carbon-rich region. Their dense nebulae indicate recent evolution from the AGB, suggesting a fatal-thermal-pulse (FTP) scenario. Although both the colours and the stellar characteristics predict fast evolution, it is shown that this phase must last for $10^4\\,\\rm yr$. The morphologies of the nebulae are discussed. For one object in Sgr, the progenitor mass ($1.3\\, \\rm M_\\odot$) is known. The stellar temperatures of the IR-\\[WC\\] stars appear much higher in low metallicity systems (LMC, Sgr). This may be indicative of an extended \u2019pseudo\u2019 photosphere. It is proposed that re-accretion of ejected gas may slow down the post-AGB evolution and so extend the life time of the IR-\\[WC\\] stars.'\nauthor:\n- 'Albert A. Zijlstra'\ntitle: 'THE INFRARED \\[WC\\] STARS\\'\n---\n\n(First published in Astrophysics &\u00a0Space Science, 275, 90 (2001). Some references to more recent works have been added as footnotes" +"---\nabstract: 'We propose a novel framework to solve risk-sensitive reinforcement learning ([RL]{}) problems where the agent optimises time-consistent dynamic spectral risk measures. 
Based on the notion of conditional elicitability, our methodology constructs (strictly consistent) scoring functions that are used as penalizers in the estimation procedure. Our contribution is threefold: we (i) devise an efficient approach to estimate a class of dynamic spectral risk measures with deep neural networks, (ii) prove that these dynamic spectral risk measures may be approximated to any arbitrary accuracy using deep neural networks, and (iii) develop a risk-sensitive actor-critic algorithm that uses full episodes and does not require any additional nested transitions. We compare our conceptually improved reinforcement learning algorithm with the nested simulation approach and illustrate its performance in two settings: statistical arbitrage and portfolio allocation on both simulated and real data.'\nauthor:\n- 'Anthony Coache[^1]'\n- Sebastian Jaimungal\n- '\u00c1lvaro Cartea[^2]'\nbibliography:\n- 'bib-files/references.bib'\ntitle: 'Conditionally Elicitable Dynamic Risk Measures for Deep Reinforcement Learning[^3]'\n---\n\nDynamic Risk Measures, Reinforcement Learning, Elicitability, Consistent Scoring Functions, Time-Consistency, Actor-Critic Algorithm, Portfolio Allocation, Statistical Arbitrage\n\n68T37, 91-08, 91G10, 91G70, 93E35.\n\nIntroduction {#sec:introduction}\n============\n\nOne principled model-free framework for learning-based control is reinforcement learning ([RL]{}) [@sutton2018reinforcement]. In [RL]{}," +"---\nabstract: 'A long-held objective in AI is to build systems that understand concepts in a humanlike way. Setting aside the difficulty of building such a system, even trying to evaluate one is a challenge, due to present-day AI\u2019s relative opacity and its proclivity for finding shortcut solutions. This is exacerbated by humans\u2019 tendency to anthropomorphize, assuming that a system that can recognize one instance of a concept must also understand other instances, as a human would. 
In this paper, we argue that understanding a concept requires the ability to use it in varied contexts. Accordingly, we propose systematic evaluations centered around concepts, by probing a system\u2019s ability to use a given concept in many different instantiations. We present case studies of such evaluations on two domains\u2014RAVEN (inspired by Raven\u2019s Progressive Matrices) and the Abstraction and Reasoning Corpus (ARC)\u2014that have been used to develop and assess abstraction abilities in AI systems. Our *concept-based* approach to evaluation reveals information about AI systems that conventional test sets would have left hidden.'\naddress: 'Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501 USA'\nauthor:\n- Victor Vikram Odouard\n- Melanie Mitchell\nbibliography:\n- 'EBeM.bib'\ntitle: Evaluating Understanding on Conceptual Abstraction" +"---\nabstract: 'Bosonic channels describe quantum-mechanically many practical communication links such as optical, microwave, and radiofrequency. We investigate the maximum rates for the bosonic multiple access channel (MAC) in the presence of thermal noise added by the environment and when the transmitters utilize Gaussian state inputs. We develop an outer bound for the capacity region for the thermal-noise lossy bosonic MAC. We additionally find that the use of coherent states at the transmitters is capacity-achieving in the limits of high and low mean input photon numbers. Furthermore, we verify that coherent states are capacity-achieving for the sum rate of the channel. In the non-asymptotic regime, when a global mean photon-number constraint is imposed on the transmitters, coherent states are the optimal Gaussian state. 
Surprisingly, however, the use of single-mode squeezed states can increase the capacity over that afforded by coherent-state encoding when each transmitter is photon-number constrained individually.'\nauthor:\n- \n- \nbibliography:\n- './papers.bib'\ntitle: 'Fundamental Limits of Thermal-noise Lossy Bosonic Multiple Access Channel [^1] '\n---\n\nIntroduction\n============\n\nThe multiple access channel (MAC) is the principal building block of many practical networks. Quantum information [@nielsen00quantum; @wilde16quantumit2ed] governs the fundamental limits of physical channels comprising any network, and" +"---\nabstract: 'Computability on uncountable sets has no standard formalization, unlike that on countable sets, which is given by Turing machines. Some of the approaches to define computability in these sets rely on order-theoretic structures to translate such notions from Turing machines to uncountable spaces. Since these machines are used as a baseline for computability in these approaches, countability restrictions on the ordered structures are fundamental. Here, we aim to combine the theories of computability with order theory in order to study how the usual countability restrictions in these approaches are related to order density properties and functional characterizations of the order structure in terms of multi-utilities.'\nauthor:\n- 'Pedro Hack, Daniel A. Braun, Sebastian Gottwald'\nbibliography:\n- 'main.bib'\ndate: \ntitle: On the relation of order theory and computation in terms of denumerability\n---\n\nIntroduction\n============\n\nThe formalization of computation on the natural numbers was initiated by Turing [@turing1937computable; @turing1938computable] with the introduction of Turing machines [@rogers1987theory]. Such an approach is taken as canonical today, since other attempts to formalize it have proven to be equivalent [@cutland1980computability; @rogers1987theory]. 
Because of that, Turing machines are deployed as a baseline for computation from which it is transferred to other spaces of interest." +"---\nabstract: 'A novel modelling framework is proposed for the analysis of aggregative games on an infinite-time horizon, assuming that players are subject to heterogeneous periodic constraints. A new aggregative equilibrium notion is presented and the strategic behaviour of the agents is analysed under a receding horizon paradigm. The evolution of the strategies predicted and implemented by the players over time is modelled through a discrete-time multi-valued dynamical system. By considering Lyapunov stability notions and applying limit and invariance results for set-valued correspondences, necessary conditions are derived for convergence of a receding horizon map to a periodic equilibrium of the aggregative game. This result is achieved for any (feasible) initial condition, thus ensuring implicit adaptivity of the proposed control framework to real-time variations in the number and parameters of players. Design and implementation of the proposed control strategy are discussed and an example of distributed control for data routing is presented, evaluating its performance in simulation.'\naddress: 'Electrical and Electronic Engineering Department, Imperial College London, London, UK'\nauthor:\n- Filiberto Fele\n- Antonio De Paola\n- David Angeli\n- Goran Strbac\nbibliography:\n- 'biblioDR.bib'\ntitle: 'A framework for receding-horizon control in infinite-horizon aggregative games '\n---\n\nAggregative games ,Receding horizon" +"---\nabstract: 'An essential metric for the quality of a particle-identification experiment is its statistical power to discriminate between signal and background. Pulse shape discrimination (PSD) is a basic method for this purpose in many nuclear, high-energy and rare-event search experiments where scintillator detectors are used. 
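One common conventional PSD implementation (the charge-comparison method, given here as generic background rather than this paper's approach) scores each pulse by the ratio of tail charge to total charge; pulses with slower decay components yield larger tail fractions:

```python
import math

def tail_to_total(pulse, tail_start):
    """Charge-comparison PSD score: fraction of the pulse integral that
    lies in the tail (samples from index tail_start onward)."""
    total = sum(pulse)
    return sum(pulse[tail_start:]) / total if total > 0 else 0.0

# Two toy exponential pulses with different decay times (illustrative
# numbers, not real detector data).
fast = [math.exp(-t / 5.0) for t in range(50)]   # gamma-like, fast decay
slow = [math.exp(-t / 15.0) for t in range(50)]  # neutron-like, slow decay
assert tail_to_total(slow, 10) > tail_to_total(fast, 10)
```

As the abstract notes, this kind of ratio becomes unreliable at low light yield, which motivates the network-based methods studied here.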
Conventional techniques exploit the difference between decay-times of the pulse from signal and background events or pulse signals caused by different types of radiation quanta to achieve good discrimination. However, such techniques are efficient only when the total light-emission is sufficient to get a proper pulse profile. This is only possible when there is significant recoil energy due to the incident particle in the detector. But, rare-event search experiments like neutrino or dark-matter direct search experiments don\u2019t always satisfy these conditions. Hence, it becomes imperative to have a method that can deliver a very efficient discrimination in these scenarios. Neural network based machine-learning algorithms have been used for classification problems in many areas of physics especially in high-energy experiments and have given better results compared to conventional techniques. We present the results of our investigations of two network based methods [*viz.* ]{}Dense Neural Network and Recurrent Neural Network, for pulse shape discrimination and compare" +"---\nabstract: 'Decision-making is critical for lane change in autonomous driving. Reinforcement learning (RL) algorithms aim to identify the values of behaviors in various situations and thus they become a promising pathway to address the decision-making problem. However, poor runtime safety hinders RL-based decision-making strategies from complex driving tasks in practice. To address this problem, human demonstrations are incorporated into the RL-based decision-making strategy in this paper. Decisions made by human subjects in a driving simulator are treated as safe demonstrations, which are stored into the replay buffer and then utilized to enhance the training process of RL. A complex lane change task in an off-ramp scenario is established to examine the performance of the developed strategy. 
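The demonstration-aided training described above, with human decisions kept in the replay buffer alongside the agent's own transitions, can be sketched as follows (class name and the fixed demonstration sampling ratio are assumptions for illustration, not the paper's implementation):

```python
import random

class DemoAidedReplayBuffer:
    """Replay buffer that mixes human-demonstration transitions into
    every training batch at a fixed ratio."""
    def __init__(self, demo_fraction=0.25):
        self.agent_buf, self.demo_buf = [], []
        self.demo_fraction = demo_fraction

    def add(self, transition, from_human=False):
        (self.demo_buf if from_human else self.agent_buf).append(transition)

    def sample(self, batch_size):
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demo_buf))
        batch = random.sample(self.demo_buf, n_demo)
        batch += random.sample(self.agent_buf, batch_size - n_demo)
        return batch

buf = DemoAidedReplayBuffer()
for i in range(100):  # 20 human transitions, 80 agent transitions
    buf.add(("s", "a", 0.0, "s2"), from_human=(i < 20))
batch = buf.sample(8)  # 2 demo + 6 agent transitions
```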
Simulation results suggest that human demonstrations can effectively improve the safety of decisions of RL. And the proposed strategy surpasses other existing learning-based decision-making strategies with respect to multiple driving performances.'\nauthor:\n- 'Jingda Wu$^{1}$, Wenhui Huang$^{1}$, Niels de Boer$^{2}$, Yanghui Mo$^{2}$, Xiangkun He$^{1}$ and Chen Lv$^{1}$[^1][^2][^3]'\nbibliography:\n- 'itsc.bib'\ntitle: '**Safe Decision-making for Lane-change of Autonomous Vehicles via Human Demonstration-aided Reinforcement Learning** '\n---\n\nINTRODUCTION\n============\n\nThe decision-making function that receives the ambient environment information and generates high-level intentions for autonomous vehicles" +"---\nabstract: 'Emergent communication research often focuses on optimizing task-specific utility as a driver for communication. However, human languages appear to evolve under pressure to efficiently compress meanings into communication signals by optimizing the Information Bottleneck tradeoff between informativeness and complexity. In this work, we study how trading off these three factors \u2014 utility, informativeness, and complexity \u2014 shapes emergent communication, including compared to human communication. To this end, we propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to compress inputs into discrete signals embedded in a continuous space. We train agents via VQ-VIB and compare their performance to previously proposed neural architectures in grounded environments and in a Lewis reference game. Across all neural architectures and settings, taking into account communicative informativeness benefits communication convergence rates, and penalizing communicative complexity leads to human-like lexicon sizes while maintaining high utility. Additionally, we find that VQ-VIB outperforms other discrete communication methods. 
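The discretization step at the heart of vector-quantization methods like VQ-VIB maps a continuous encoder output to its nearest codebook embedding; a minimal sketch (the codebook contents are made up for illustration):

```python
def quantize(z, codebook):
    """Return (index, embedding) of the codebook vector nearest to z in
    Euclidean distance -- the discrete-signal step of VQ-style methods."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(z, c))
    idx = min(range(len(codebook)), key=lambda i: dist2(codebook[i]))
    return idx, codebook[idx]

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
idx, token = quantize([0.9, 0.2], codebook)
print(idx)  # -> 1
```

The discrete index is what gets communicated, while the embedding lives in the continuous space the agents learn.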
This work demonstrates how fundamental principles that are believed to characterize human language evolution may inform emergent communication in artificial agents.'\nauthor:\n- '[^1]'\n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: 'Towards Human-Agent Communication via the Information Bottleneck Principle'\n---\n\nIntroduction\n============\n\nGood communication is a" +"---\nabstract: 'We finish the classification of equitable 2-partitions of the Johnson graphs of diameter 3, $J(n,3)$, for $n>10$.'\nauthor:\n- |\n Rhys\u00a0J.\u00a0Evans\\\n Sobolev Institute of Mathematics\\\n Siberian Branch of the Russian Academy of Sciences\\\n 4 Acad. Koptyug avenue\\\n 630090 Novosibirsk, Russia\\\n rhysjevans00@gmail.com\n- |\n Alexander\u00a0L.\u00a0Gavrilyuk\\\n Interdisciplinary Faculty of Science and Engineering\\\n Shimane University, Matsue, Japan\\\n gavrilyuk@riko.shimane-u.ac.jp\n- |\n Sergey\u00a0Goryainov\\\n School of Mathematical Sciences\\\n Hebei Key Laboratory of Computational Mathematics and Applications\\\n Hebei Normal University\\\n Shijiazhuang 050024\\\n P.R. China\\\n sergey.goryainov3@gmail.com\n- |\n Konstantin\u00a0Vorob\u2019ev\\\n Sobolev Institute of Mathematics\\\n Siberian Branch of the Russian Academy of Sciences\\\n 4 Acad. Koptyug avenue\\\n 630090 Novosibirsk, Russia\\\n Institute of Mathematics and Informatics\\\n Bulgarian Academy of Sciences\\\n 8 Acad G. Bonchev str.\\\n 1113 Sofia, Bulgaria\\\n konstantin.vorobev@gmail.com\\\nbibliography:\n- 'references.bib'\ntitle: 'Equitable 2-partitions of the Johnson graphs $J(n,3)$'\n---\n\nIntroduction\n============\n\nThe relationship between association schemes and codes was the topic of the thesis of Delsarte [@D_1975]. Motivated by previous authors, Delsarte puts a particular emphasis on the Hamming and Johnson schemes. 
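For concreteness, the Johnson graph $J(n,k)$ has the $k$-subsets of an $n$-set as vertices, two subsets being adjacent when their intersection has size $k-1$; a quick construction confirming that $J(n,3)$ is regular of degree $3(n-3)$:

```python
from itertools import combinations

def johnson_graph(n, k):
    """Adjacency lists of J(n, k): vertices are k-subsets of {0,...,n-1},
    edges join subsets whose intersection has size k-1."""
    verts = [frozenset(c) for c in combinations(range(n), k)]
    return {v: [w for w in verts if len(v & w) == k - 1] for v in verts}

graph = johnson_graph(7, 3)
degrees = {len(nbrs) for nbrs in graph.values()}
print(len(graph), degrees)  # -> 35 {12}, i.e. C(7,3) vertices of degree 3*(7-3)
```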
He makes a comment [@D_1975 pg 55], suggesting that there do not exist any non-trivial perfect codes in the Johnson graphs.\n\nMartin [@M_1992] expands on the work of Delsarte" +"---\nabstract: 'Human brains lie at the core of complex neurobiological systems, where the neurons, circuits, and subsystems interact in enigmatic ways. Understanding the structural and functional mechanisms of the brain has long been an intriguing pursuit for neuroscience research and clinical disorder therapy. Mapping the connections of the human brain as a network is one of the most pervasive paradigms in neuroscience. Graph Neural Networks (GNNs) have recently emerged as a potential method for modeling complex network data. Deep models, on the other hand, have low interpretability, which prevents their usage in decision-critical contexts like healthcare. To bridge this gap, we propose an interpretable framework to analyze disorder-specific Regions of Interest (ROIs) and prominent connections. The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator that highlights disorder-specific biomarkers including salient ROIs and important connections. We conduct experiments on three real-world datasets of brain disorders. The results verify that our framework can obtain outstanding performance and also identify meaningful biomarkers. All code for this work is available at .'\nauthor:\n- Hejie Cui\n- Wei Dai\n- Yanqiao Zhu\n- Xiaoxiao Li\n- |\n \\\n Lifang He\n- 'Carl" +"---\nabstract: 'Modern speech synthesis techniques can produce natural-sounding speech given sufficient high-quality data and compute resources. However, such data is not readily available for many languages. This paper focuses on speech synthesis for low-resourced African languages, from corpus creation to sharing and deploying the Text-to-Speech (TTS) systems. 
We first create a set of general-purpose instructions on building speech synthesis systems with minimum technological resources and subject-matter expertise. Next, we create new datasets and curate datasets from \u201cfound\u201d data (existing recordings) through a participatory approach while considering accessibility, quality, and breadth. We demonstrate that we can develop synthesizers that generate intelligible speech with 25 minutes of created speech, even when recorded in suboptimal environments. Finally, we release the speech data, code, and trained voices for [12]{}African languages to support researchers and developers.'\naddress: |\n $^\\dagger$Language Technologies Institute, Carnegie Mellon University\\\n $^\\ddagger$Inspired Cognition\\\n Pittsburgh, PA, USA \nbibliography:\n- 'mybib.bib'\ntitle: Building African Voices\n---\n\n**Index Terms**: Speech Synthesis, Text-to-Speech, African Languages, Language Resources\n\nIntroduction\n============\n\nSpeech synthesis modelling techniques are advanced such that it is feasible to achieve almost natural-sounding output if sufficient data is available [@tacotron2; @synthasr]. Specifically, current TTS techniques mostly require high-quality single-speaker recordings with text transcription for" +"---\nabstract: 'Recently, the [*Learning by Confusion*]{} (LbC) approach has been proposed as a machine learning tool to determine the critical temperature $T_c$ of phase transitions without any prior knowledge of its even approximate value. The method has been proven effective, but it has been used only for continuous phase transitions, where the [*confusion*]{} results only from deliberate incorrect labeling of the data. However, in the case of a discontinuous phase transition, additional [*confusion*]{} can result from the coexistence of different phases. 
To verify whether the confusion scheme can also be used for discontinuous phase transitions, we apply the LbC method to three microscopic models, the Blume-Capel, the $q$\u2013state Potts, and the Falicov-Kimball models, which undergo continuous or discontinuous phase transitions depending on model parameters. With the help of a simple model, we predict that the phase coexistence present in discontinuous phase transitions can indeed make the neural network more [*confused*]{} and thus decrease its performance. However, numerical calculations performed for the models mentioned above indicate that other aspects of this kind of phase transition are more important and can render the LbC method even less effective. Nevertheless, we demonstrate that in some cases the same aspects allow us to" +"---\nabstract: 'This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images, which can be trained without the gaze angle and the head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze, and head pose information. Solving this problem using an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze ($256 \\times 256$) and high-resolution CelebHQGaze ($512 \\times 512$). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module\u00a0(GCM) and a Gaze Animation Module\u00a0(GAM). Moreover, we propose an unsupervised training strategy, i.e., Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation with semantic interpolation in this space. 
Moreover, to alleviate both the memory and the computational costs in the training and the inference stage, we propose a Coarse-to-Fine Module\u00a0(CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both" +"---\nabstract: 'Optical coherence tomography (OCT) is a non-invasive 3D modality widely used in ophthalmology for imaging the retina. Achieving automated, anatomically coherent retinal layer segmentation on OCT is important for the detection and monitoring of different retinal diseases, like Age-related Macular Disease (AMD) or Diabetic Retinopathy. However, the majority of state-of-the-art layer segmentation methods are based on purely supervised deep-learning, requiring a large amount of pixel-level annotated data that is expensive and hard to obtain. With this in mind, we introduce a semi-supervised paradigm into the retinal layer segmentation task that makes use of the information present in large-scale unlabeled datasets as well as anatomical priors. In particular, a novel fully differentiable approach is used for converting surface position regression into a pixel-wise structured segmentation, allowing to use both 1D surface and 2D layer representations in a coupled fashion to train the model. In particular, these 2D segmentations are used as anatomical factors that, together with learned style factors, compose disentangled representations used for reconstructing the input image. In parallel, we propose a set of anatomical priors to improve network training when a limited amount of labeled data is available. We demonstrate on the real-world dataset of scans with" +"---\nabstract: 'U-Net has been the go-to architecture for medical image segmentation tasks, however computational challenges arise when extending the U-Net architecture to 3D images. We propose the Implicit U-Net architecture that adapts the efficient Implicit Representation paradigm to supervised image segmentation tasks. 
By combining a convolutional feature extractor with an implicit localization network, our implicit U-Net has 40% fewer parameters than the equivalent U-Net. Moreover, we propose training and inference procedures to capitalize on sparse predictions. When compared to an equivalent fully convolutional U-Net, Implicit U-Net reduces by approximately 30% inference and training time as well as training memory footprint while achieving comparable results in our experiments with two different abdominal CT scan datasets.'\nauthor:\n- 'Sergio [Naval Marimont]{}'\n- Giacomo Tarroni\nbibliography:\n- 'ref.bib'\ntitle: 'Implicit U-Net for volumetric medical image segmentation'\n---\n\nIntroduction\n============\n\nU-Net [@Ronneberger2015] is the go-to architecture for medical image segmentation tasks [@Litjens2017]. A U-Net consists of two convolutional networks: an encoder, or feature extraction network, and a decoder, or localization network. U-Net incorporates *skip connections* that share feature maps directly from encoder to decoder layers with the same spatial resolution.\n\nDifferent approaches have been proposed to extend U-Nets to volumetric images. 3D convolutions, as
We also observe that $G(S)$ and $G(S')$ are not quasi-isometric when $S\\Delta S'$ is infinite.\n\nIn fact, our proof of residual" +"---\nabstract: 'In this paper, a novel transceiver architecture is proposed to simultaneously achieve efficient random access and reliable data transmission in massive IoT networks. At the transmitter side, each user is assigned a unique protocol sequence which is used to identify the user and also indicate the user\u2019s channel access pattern. Hence, user identification is completed by the detection of channel access patterns. Particularly, the columns of a parity check matrix [of low-density-parity-check (LDPC) code]{} are employed as protocol sequences. The design guideline of this LDPC parity check matrix and the associated performance analysis are provided in this paper. At the receiver side, a two-stage iterative detection architecture is designed, which consists of a group testing component and a payload data decoding component. They collaborate in a way that the group testing component maps detected protocol sequences to a tanner graph, on which the second component could execute its message passing algorithm. In turn, zero symbols detected by [the]{} message passing algorithm of the second component indicate potential false alarms made by the first group testing component. Hence, the tanner graph could iteratively evolve. The [provided simulation]{} results demonstrate that our transceiver design realizes a practical one-step grant-free transmission" +"---\nabstract: 'The limited and dynamically varied resources on edge devices motivate us to deploy an optimized deep neural network that can adapt its sub-networks to fit in different resource constraints. However, existing works often build sub-networks through searching different network architectures in a hand-crafted sampling space, which not only can result in a subpar performance but also may cause on-device re-configuration overhead. 
In this paper, we propose a novel training algorithm, Dynamic REal-time Sparse Subnets (DRESS). DRESS samples multiple sub-networks from the same backbone network through row-based unstructured sparsity, and jointly trains these sub-networks in parallel with weighted loss. DRESS also exploits strategies including parameter reusing and row-based fine-grained sampling for efficient storage consumption and efficient on-device adaptation. Extensive experiments on public vision datasets show that DRESS yields significantly higher accuracy than state-of-the-art sub-networks.'\nauthor:\n- |\n Zhongnan Qu [^1], Syed Shakib Sarwar, Xin Dong, Yuecheng Li, Ekin Sumbul, Barbara De Salvo\\\n Meta Reality Labs Research, US\\\n [quz@ethz.ch, {shakib7, yuecheng.li, ekinsumbul, barbarads}@fb.com, xindong@g.harvard.edu]{}\ntitle: 'DRESS: Dynamic REal-time Sparse Subnets'\n---\n\nIntroduction {#sec:introduction}\n============\n\nThere is a growing interest to deploy deep neural networks (DNNs) on resource-constrained edge devices to enable new intelligent services such as mobile assistants, augmented reality," +"---\nauthor:\n- 'Martijn Hidding,'\n- Johann Usovitsch\ntitle: Feynman parameter integration through differential equations\n---\n\nIntroduction\n============\n\nThe computation of multi-loop Feynman integrals is a crucial component in generating predictions for particle processes at high-energy colliders such as the LHC. Many modern techniques for computing Feynman integrals rely on the differential equation method [@Kotikov:1990kg; @Henn:2013pwa]. This method involves setting up a system of differential equations for the master integrals of a Feynman integral family, and solving the differential equations either analytically or numerically. Analytic solutions can often be found in terms of special classes of iterated integrals such as multiple polylogarithms [@Goncharov:1998kja; @Goncharov:2001iea] or generalizations thereof (see e.g. 
[@Broedel:2017siw; @Adams:2018yfj]), but are hard to obtain in general. In addition, there are many Feynman integrals which do not evaluate to well-studied classes of iterated integrals. Therefore, renewed interest has been expressed in solving differential equations numerically without reference to an intermediate function space.\n\nMany works in the literature have studied methods for solving differential equations for Feynman integrals numerically (see e.g. [@Pozzorini:2005ff; @Liu:2017jxz; @Lee:2017qql; @Mandal:2018cdj; @Czakon:2020vql]). In this paper, we employ the strategy outlined in [@Moriello:2019yhu]. In this approach, differential equations are repeatedly solved in terms of generalized power series" +"---\nabstract: 'While the Artificial Pancreas is effective in regulating the blood glucose in the safe range of 70-180 mg/dl in type 1 diabetic patients, the high intra-patient variability, as well as exogenous meal disturbances, poses a serious challenge. The existing control algorithms thus require additional safety algorithms and feed-forward actions. Moreover, the unavailability of insulin sensors in Artificial Pancreas makes this task more difficult. In the present work, a subcutaneous model of type 1 diabetes (T1D) is considered for observer-based controller design in the framework of contraction analysis. A variety of realistic multiple-meal scenarios for three virtual T1D patients have been investigated with $\\pm30\\%$ of parametric variability. The average time spent by the three T1D patients is found to be 77%, 73% and 76%, respectively. 
A significant reduction in the time spent in hyperglycemia ($>180$ mg/dl) is achieved without any feed-forward action for meal compensation.'\naddress:\n- 'Department of Electrical Engineering, Indian Institute of Technology Delhi, Hauz Khas, New Delhi, 110016, India'\n- 'GE Global Research, Bengaluru, Karnataka, 560066, India'\n- 'Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kalyanpur, Kanpur, Uttar Pradesh, 208016, India'\nauthor:\n- Bhabani Shankar Dey\n- Anirudh Nath\n- Abhilash Patel\n- Indra" +"---\nabstract: 'The moduli stacks of Calabi-Yau varieties are known to enjoy several hyperbolicity properties. The best results have so far been proven using sophisticated analytic tools such as complex Hodge theory. Although the situation is very different in positive characteristic (e.g. the moduli stack of principally polarized abelian varieties of dimension at least 2 contains rational curves), we explain in this note how one can prove many hyperbolicity results by reduction to positive characteristic, relying ultimately on the nonabelian Hodge theory in positive characteristic developed by Ogus and Vologodsky.'\naddress: 'Institut de Math\u00e9matiques de Bordeaux, Universit\u00e9 de Bordeaux, 351 cours de la Lib\u00e9ration, F-33405 Talence'\nauthor:\n- Yohan Brunebarbe\nbibliography:\n- 'biblio.bib'\ntitle: 'An algebraic approach to the hyperbolicity of moduli stacks of Calabi-Yau varieties'\n---\n\nIntroduction\n============\n\nMain result\n-----------\n\nThe goal of this note is to provide an algebraic proof of the following result.\n\n\\[main result\\] Let $S$ be a smooth proper connected complex variety. Let $f: X {\\rightarrow}S$ be a smooth projective morphism with connected fibres. Assume that the relative canonical bundle $\\omega_{X/S}$ is relatively trivial and that the Hodge line bundle $f_\\ast(\\omega_{X/S})$ is big. Then:\n\n1. 
the canonical bundle of $S$ is big (in other" +"---\nabstract: 'Bumblebee gravity is one of the simplest gravity theories with spontaneous Lorentz symmetry breaking. Since we know a rotating black hole solution in bumblebee gravity, we can potentially test this model with the available astrophysical observations of black holes. In this work, we construct a reflection model in bumblebee gravity and we use our model to analyze the reflection features of a *NuSTAR* spectrum of the Galactic black hole EXO\u00a01846\u2013031 in order to constrain the Lorentz-violating parameter $\\ell$. We find that the analysis of the reflection features in the spectrum of EXO\u00a01846\u2013031 cannot constrain the parameter $\\ell$ because of a very strong degeneracy between the estimates of $\\ell$ and of the black hole spin parameter $a_*$. Such a degeneracy may be broken by combining other observations.'\nauthor:\n- Jiale\u00a0Gu\n- Shafqat\u00a0Riaz\n- 'Askar\u00a0B.\u00a0Abdikamalov'\n- Dimitry\u00a0Ayzenberg\n- Cosimo\u00a0Bambi\nnocite: '[@*]'\ntitle: 'Probing bumblebee gravity with black hole X-ray data'\n---\n\nIntroduction {#sec:intro}\n============\n\nGeneral relativity was proposed by Einstein at the end of 1915\u00a0[@Einstein:1916vd] and has successfully survived until today without any modification. The predictions of general relativity have been extensively tested in the so-called weak field regime, mainly with" +"---\nabstract: 'Training an ensemble of diverse sub-models has been empirically demonstrated as an effective strategy for improving the adversarial robustness of deep neural networks. However, current ensemble training methods for image recognition typically encode image labels using one-hot vectors, which overlook dependency relationships between the labels. In this paper, we propose a novel adversarial ensemble training approach that jointly learns the label dependencies and member models. 
Our approach adaptively exploits the learned label dependencies to promote diversity among the member models. We evaluate our approach on widely used datasets including MNIST, FashionMNIST, and CIFAR-10, and show that it achieves superior robustness against black-box attacks compared to state-of-the-art methods. Our code is available at .'\nauthor:\n- 'Lele Wang[^1]'\n- 'Bin Liu[^2]'\nbibliography:\n- 'mybibfile.bib'\ntitle: Adversarial Ensemble Training by Jointly Learning Label Dependencies and Member Models \n---\n\nIntroduction {#sec:INTRODUCTION}\n============\n\nDeep neural networks (DNNs), also known as deep learning, have achieved remarkable success across many tasks in computer vision [@he2016deep; @krizhevsky2012imagenet; @russakovsky2015imagenet; @szegedy2016rethinking], speech recognition [@graves2014towards; @hannun2014deep], and natural language processing [@sutskever2014sequence; @young2018recent]. However, numerous works have shown that modern DNNs are vulnerable to adversarial attacks [@goodfellow2014explaining; @papernot2016limitations; @carlini2017towards; @madry2017towards; @xiao2018generating; @xiao2018spatially]. Even slight perturbations to input images, which
To improve the real-time monitoring of extensive road networks, transportation agencies are increasing the available sensing modalities, often in smart corridors. However, this drastic increase in the number of sensors raises an essential question from an operational perspective\u2014how can transportation agencies monitor thousands of sensors in (near) real-time to detect incidents of interest? Our conversations with local transportation agencies revealed that this monitoring is largely performed manually, an infeasible strategy in the long run. One approach to enable transportation agencies to utilize an extensive array of sensors is to detect potentially anomalous patterns in real-time using the data generated by the sensors; then, human experts" +"---\nabstract: 'In this paper, we study a quantized feedback scheme to maximize the goodput of a finite blocklength communication scenario over a quasi-static fading channel. It is assumed that the receiver has perfect channel state information (CSI) and sends back the CSI to the transmitter over a resolution-limited error-free feedback channel. With this partial CSI, the transmitter is supposed to select the optimum transmission rate, such that it maximizes the overall goodput of the communication system. This problem has been studied for the asymptotic blocklength regime, however, no solution has so far been presented for finite blocklength. Here, we study this problem in two cases: with and without constraint on reliability. We first formulate the optimization problems and analytically solve them. Iterative algorithms that successfully exploit the system parameters for both cases are presented. 
It is shown that although the achievable maximum goodput decreases with shorter blocklengths and higher reliability requirements, significant improvement can be achieved even with coarsely quantized feedback schemes.'\nauthor:\n- 'Hasan\u00a0Basri\u00a0Celebi,\u00a0 and\u00a0Mikael\u00a0Skoglund,\u00a0 [^1] [^2]'\nbibliography:\n- 'FB\\_w\\_feedback.bib'\ntitle: 'Goodput Maximization with Quantized Feedback in the Finite Blocklength Regime for Quasi-Static Channels '\n---\n\nChannel coding, channel state information, goodput maximization, low-complexity" +"---\nabstract: 'In this article, we first derive parallel square-root methods for state estimation in linear state-space models. We then extend the formulations to general nonlinear, non-Gaussian state-space models using statistical linear regression and iterated statistical posterior linearization paradigms. We finally leverage the fixed-point structure of our methods to derive parallel square-root likelihood-based parameter estimation methods. We demonstrate the practical performance of the methods by comparing the parallel and the sequential approaches on a set of numerical experiments.'\nauthor:\n- 'Fatemeh Yaghoobi, Adrien Corenflos, Sakira Hassan, Simo S\u00e4rkk\u00e4[^1]'\nbibliography:\n- 'references.bib'\ntitle: 'Parallel square-root statistical linear regression for inference in nonlinear state space models[^2]'\n---\n\niterated Kalman smoothing, parameter estimation, robust inference, parallel scan, sigma-point and extended linearization\n\n68W10, 65D32, 62F15, 62F10\n\nIntroduction\n============\n\nInference in linear and nonlinear state-space models (SSMs) is an active research topic with applications in many fields including target tracking, control engineering, and biomedicine\u00a0[@bar2004estimation; @sarkka2013bayesian]. 
In this article, we are primarily interested in state and parameter estimation problems in state-space models of the form $$\\label{eq:ss-model}\n \\begin{split}\n x_k \\mid x_{k-1} &\\sim p(x_k \\mid x_{k-1}),\\\\\n y_k \\mid x_k &\\sim p(y_k \\mid x_k),\n \\end{split}$$ where $k=1,2,\\ldots,n$. Above, $x_k \\in \\mathbb{R}^{n_x}$ and $y_k \\in \\mathbb{R}^{n_y}$ are the state" +"---\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: 'Interactive Physically-Based Simulation of Roadheader Robot'\n---\n\nIntroduction {#sec:intro}\n============\n\nRoadheader is a kind of tunneling robot, one of the most important machinery in the mining industry and underground engineering [@deshmukh2020roadheader]. Generally, underground engineering is very dangerous for people. To improve safety, unmanned underground tunneling [@li2018intelligent] and Virtual Reality (VR) based worker training [@grabowski2015virtual] have become a trend in recent years. The development of interactive computer graphics technology, especially physically-based robotics simulation, provides theoretical support for achieving these targets. The dynamics-based three-dimensional (3D) visualization system can not only simulate the working state of the roadheader robot most physically but also provide a realistic user experience. Therefore, physically-based robot simulation has become one of the core technologies of digital twin [@bilberg2019digital]. However, most of the current solutions are based on kinematics animation [@grabowski2015virtual] or directly using game-oriented commercial engines [@choi118use]. This paper presents a dynamics-based interactive roadheader simulation system that is accurate and stable.\n\nGraphical robot simulation belongs to the interdisciplinary field of computer graphics and robotics [@liu2021role]. 
Interactive robot simulation aims to compute and show the robot\u2019s motion state in real-time based on articulated rigid body dynamics (usually called" +"---\nabstract: 'Dark matter (DM) occupies the majority of matter content in the universe and is probably cold (CDM). However, modifications to the standard CDM model may be required by the small-scale observations, and DM may be self-interacting (SIDM) or warm (WDM). Here we show that the diffractive lensing of gravitational waves (GWs) from binary black hole mergers by small halos ($\\sim10^3-10^6M_\\odot$; mini-halos) may serve as a clean probe to the nature of DM, free from the contamination of baryonic processes in the DM studies based on dwarf/satellite galaxies. The expected lensed GW signals and event rates resulting from CDM, WDM, and SIDM models are significantly different from each other, because of the differences in halo density profiles and abundances predicted by these models. We estimate the detection rates of such lensed GW events for a number of current and future GW detectors, such as the Laser Interferometer Gravitational Observatories (LIGO), the Einstein Telescope (ET), the Cosmic Explorer (CE), Gravitational-wave Lunar Observatory for Cosmology (GLOC), the Deci-Hertz Interferometer Gravitational Wave Observatory (DECIGO), and the Big Bang Observer (BBO). We find that GLOC may detect one such events per year assuming the CDM model, DECIGO (BBO) may detect more than several" +"---\nabstract: 'We propose a multi-channel version of quantum electro-optic sampling involving monochromatic field modes. It allows for multiple simultaneous measurements of arbitrarily many $\\hat{X}$ and $\\hat{Y}$ field-quadrature for a single quantum-state copy, while independently tuning the interaction strengths at each channel. In contrast to standard electro-optic sampling, the sampled mid-infrared (MIR) mode undergoes a nonlinear interaction with multiple near-infrared (NIR) pump beams. 
We present a complete positive operator-valued measure (POVM) description for quantum states in the MIR mode. The probability distribution of the electro-optic signal outcomes is shown to be related to an $s$-parametrized phase-space quasiprobability distribution of the indirectly measured MIR state, with the parameter $s$ depending solely on the quantities characterizing the nonlinear interaction. Furthermore, we show that the quasiprobability distributions for the sampled and post-measurement states are related to each other through a renormalization and a change in the parametrization. This result is then used to demonstrate that two consecutive measurements of both $\\hat{X}$ and $\\hat{Y}$ quadratures can outperform eight-port homodyne detection.'\nauthor:\n- Emanuel Hubenschmid\n- 'Thiago L. M. Guedes'\n- Guido Burkard\nbibliography:\n- 'discrete\\_eos\\_final\\_EH\\_TG.bib'\ntitle: 'A complete POVM description of multi-channel quantum electro-optic sampling with monochromatic field modes'\n---\n\nIntroduction\n============\n\nUnderstanding simultaneous" +"---\nabstract: 'Neutron stars are known to show accelerated spin-up of their rotational frequency called a glitch. Highly magnetized rotating neutron stars (pulsars) are frequently observed by radio telescopes (and in other frequencies), where the glitch is observed as irregular arrival times of pulses which are otherwise very regular. A glitch in an isolated neutron star can excite the fundamental (*f*)-mode oscillations which can lead to gravitational wave generation. This gravitational wave signal associated with stellar fluid oscillations has a damping time of $10-200$ms and occurs at the frequency range between $2.2-2.8$kHz for the equation of state and mass range considered in this work, which is within the detectable range of the current generation of ground-based detectors. 
Electromagnetic observations of pulsars (and hence pulsar glitches) require the pulsar to be oriented so that the jet is pointed toward the detector, but this is not a requirement for gravitational wave emission which is more isotropic and not jetlike. Hence, gravitational wave observations have the potential to uncover nearby neutron stars where the jet is not pointed towards the Earth. In this work, we study the prospects of finding glitching neutron stars using a generic all-sky search for short-duration gravitational wave transients." +"---\nauthor:\n- 'A. Calabr[\u00f2]{}'\n- 'L. Pentericci'\n- 'M. Talia'\n- 'G. Cresci'\n- 'M. Castellano'\n- 'D. Belfiori'\n- 'S. Mascia'\n- 'G. Zamorani'\n- |\n \\\n R. Amor\u00edn\n- 'J.P.U. Fynbo'\n- 'M. Ginolfi'\n- 'L. Guaita'\n- 'N.P. Hathi'\n- 'A. Koekemoer'\n- 'M. Llerena'\n- |\n \\\n F. Mannucci\n- 'P. Santini'\n- 'A. Saxena'\n- 'D. Schaerer'\ndate: 'Submitted to A&A'\ntitle: 'Properties of the interstellar medium in star-forming galaxies at redshifts $2\\leq z \\leq 5$ from the VANDELS survey'\n---\n\nIntroduction\n============\n\nGalactic inflows and outflows are the main actors of the baryon cycle inside and outside galaxies, playing a fundamental role in the regulation of galaxy evolution across cosmic time. These phenomena of gas flows of the interstellar and circumgalactic medium (ISM and CGM, respectively) are thought to be essential for explaining the discrepancy at low and high masses between the observed shape of the galaxy luminosity function and the predicted mass function of dark matter haloes [@madau96; @behroozi13]. In addition, they are fundamental ingredients in the explanation of other important scaling relations, including the mass\u2013metallicity relation [MZR, @mannucci09; @dave11; @calabro17; @fontanot21] and the star formation rate (SFR)\u2013stellar mass (M$_\\star$) relation [e.g." 
+"---\nabstract: |\n Line-intensity mapping (LIM) is an emerging approach to survey the Universe, using relatively low-aperture instruments to scan large portions of the sky and collect the total spectral-line emission from galaxies and the intergalactic medium. Mapping the intensity fluctuations of an array of lines offers a unique opportunity to probe redshifts well beyond the reach of other cosmological observations, access regimes that cannot be explored otherwise, and exploit the enormous potential of cross-correlations with other measurements. This promises to deepen our understanding of various questions related to galaxy formation and evolution, cosmology, and fundamental physics.\n\n Here we focus on lines ranging from microwave to optical frequencies, the emission of which is related to star formation in galaxies across cosmic history. Over the next decade, LIM will transition from a pathfinder era of first detections to an early-science era where data from more than a dozen missions will be harvested to yield new insights and discoveries. This review discusses the primary target lines for these missions, describes the different approaches to modeling their intensities and fluctuations, surveys the scientific prospects of their measurement, presents the formalism behind the statistical methods to analyze the data, and motivates the opportunities for" +"---\nabstract: 'Machine learning models exhibit two seemingly contradictory phenomena: training data memorization, and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples which appeared early in training are forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure to what extent models \u201cforget\u201d the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. 
We show that, while non-convex models can memorize data *forever* in the worst-case, standard image, speech, and language models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training with extremely large datasets\u2014for instance those examples used to pre-train a model\u2014may observe privacy benefits at the expense of examples seen later.'\nauthor:\n- Matthew Jagielski\n- Om Thakkar\n- 'Florian Tram[\u00e8]{}r'\n- Daphne Ippolito\n- |\n \\\n Katherine Lee\n- Nicholas Carlini\n- Eric Wallace\n- Shuang Song\n- |\n \\\n Abhradeep Thakurta\n- Nicolas Papernot\n- Chiyuan Zhang\nbibliography:\n- 'references.bib'\ntitle: '**[Measuring Forgetting of Memorized Training Examples]{}**'\n---" +"---\nabstract: 'We introduce a data-driven approach to the modelling and analysis of viscous fluid mechanics. Instead of including constitutive laws for the fluid\u2019s viscosity in the mathematical model, we suggest to directly use experimental data. Only a set of differential constraints, derived from first principles, and boundary conditions are kept of the classical PDE model and are combined with a data set. The mathematical framework builds on the recently introduced data-driven approach to solid-mechanics [@KO; @CMO]. We construct optimal data-driven solutions that are *material model free* in the sense that no assumptions on the rheological behaviour of the fluid are made or extrapolated from the data. The differential constraints of fluid mechanics are recast in the language of constant rank differential operators. Adapting abstract results on lower-semicontinuity and $\\mathscr{A}$-quasiconvexity, we show a $\\Gamma$-convergence result for the functionals arising in the data-driven fluid mechanical problem. 
The theory is extended to compact nonlinear perturbations, whence our results apply to both inertialess fluids and flows with finite Reynolds number. Data-driven solutions provide a new *relaxed* solution concept. We prove that the constructed data-driven solutions are consistent with solutions to the classical PDEs of fluid mechanics if the data sets have the form" +"---\nabstract: 'In this paper, we propose a novel end-to-end user-defined keyword spotting method that utilizes linguistically corresponding patterns between speech and text sequences. Unlike previous approaches requiring speech keyword enrollment, our method compares input queries with an enrolled text keyword sequence. To place the audio and text representations within a common latent space, we adopt an attention-based cross-modal matching approach that is trained in an end-to-end manner with monotonic matching loss and keyword classification loss. We also utilize a de-noising loss for the acoustic embedding network to improve robustness in noisy environments. Additionally, we introduce the LibriPhrase dataset, a new short-phrase dataset based on LibriSpeech for efficiently training keyword spotting models. Our proposed method achieves competitive results on various evaluation sets compared to other single-modal and cross-modal baselines.'\naddress: |\n $^1$Naver Corporation, South Korea\\\n $^2$Dept. of Electrical and Electronic Engineering, Yonsei University, South Korea \nbibliography:\n- 'longstrings.bib'\n- 'mybib.bib'\ntitle: 'Learning Audio-Text Agreement for Open-vocabulary Keyword Spotting'\n---\n\n**Index Terms**: user-defined keyword spotting, open-vocabulary, audio-text correspondence detection\n\nIntroduction\n============\n\nKeyword spotting (KWS) is the task of identifying enrolled keywords within spoken utterances. 
It is highly challenging because it involves not only detecting keywords accurately, but also rejecting other words" +"---\nabstract: 'The R\u00e9nyi cross-entropy measure between two distributions, a generalization of the Shannon cross-entropy, was recently used as a loss function for the improved design of deep learning generative adversarial networks. In this work, we examine the properties of this measure and derive closed-form expressions for it when one of the distributions is fixed and when both distributions belong to the exponential family. We also analytically determine a formula for the cross-entropy rate for stationary Gaussian processes and for finite-alphabet Markov sources.'\nauthor:\n- \nbibliography:\n- 'citations.bib'\ntitle: 'On the R\u00e9nyi Cross-Entropy$^*$ [^1] '\n---\n\nR\u00e9nyi information measures, cross-entropy, exponential family distributions, Gaussian processes, Markov sources.\n\nIntroduction\n============\n\nThe R\u00e9nyi entropy [@renyi] of order $\\alpha$ of a discrete distribution (probability mass function) $p$ with finite support $\\mathbb{S}$, defined as $$H_\\alpha(p)=\\frac{1}{1-\\alpha}\\ln \\sum_{x\\in\\mathbb{S}}p(x)^\\alpha$$ for $\\alpha>0,\\alpha\\neq 1$, is a generalization of the Shannon entropy,[^2] $H(p)$, in that $\\lim_{\\alpha\\to1}H_\\alpha(p)=H(p)$. Similarly, the R\u00e9nyi divergence (of order $\\alpha$) between two discrete distributions $p$ and $q$ with common finite support $\\mathbb{S}$, given by $$D_\\alpha(p||q)= \\frac{1}{\\alpha-1}\\ln \\sum_{x\\in\\mathbb{S}} p(x)^\\alpha q(x)^{1-\\alpha},$$ reduces to the KL divergence, $D(p\\|q)$, as $\\alpha \\to 1$.\n\nSince the introduction of these measures, several other R\u00e9nyi-type information measures have been put forward, each obeying the" +"---\nabstract: 'As a network-based functional approximator, we have proposed a \u201cLagrangian Density Space-Time Deep Neural Networks\u201d (LDDNN) topology. 
It is qualified for unsupervised training and learning to predict the dynamics of underlying physical science governed phenomena. The prototypical network respects the fundamental conservation laws of nature through the succinctly described Lagrangian and Hamiltonian density of the system by a given data-set of generalized nonlinear partial differential equations. The objective is to parameterize the Lagrangian density over a neural network and directly learn from it through data instead of hand-crafting an exact time-dependent \u201cAction solution\u201d of Lagrangian density for the physical system. With this novel approach, we can understand and open up the information inference aspect of the \u201cBlack-box deep machine learning representation\u201d for the physical dynamics of nature by constructing custom-tailored network interconnect topologies, activation, and loss/cost functions based on the underlying physical differential operators. This article will discuss the statistical physics interpretation of neural networks in the Lagrangian and Hamiltonian domains.'\nauthor:\n- '**[Bhupesh\u00a0Bishnoi[^1]]{}**'\nbibliography:\n- 'Lagrangian\\_Density\\_Space-Time\\_DNN\\_Topology.bib'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nOur progress in understanding nature from the pre-eighteenth-century experimentation approach to mid-twentieth-century physical theory-based modeling, model-based scientific computation, and first-principles simulation into today changing towards the" +"---\nabstract: 'Procrastination, the irrational delay of tasks, is a common occurrence in online learning. Potential negative consequences include higher risk of drop-outs, increased stress, and reduced mood. Due to the rise of learning management systems and learning analytics, indicators of such behavior can be detected, enabling predictions of future procrastination and other dilatory behavior. However, research focusing on such predictions is scarce. 
Moreover, studies involving different types of predictors and comparisons between the predictive performance of various methods are virtually non-existent. In this study, we aim to fill these research gaps by analyzing the performance of multiple machine learning algorithms when predicting the delayed or timely submission of online assignments in a higher education setting with two categories of predictors: subjective, questionnaire-based variables and objective, log-data based indicators extracted from a learning management system. The results show that models with objective predictors consistently outperform models with subjective predictors, and a combination of both variable types perform slightly better. For each of these three options, a different approach prevailed (Gradient Boosting Machines for the subjective, Bayesian multilevel models for the objective, and Random Forest for the combined predictors). We conclude that careful attention should be paid to the selection of" +"---\nabstract: 'We are concerned by Data Driven Requirements Engineering, and in particular the consideration of user\u2019s reviews. These online reviews are a rich source of information for extracting new needs and improvement requests. In this work, we provide an automated analysis using CamemBERT, which is a state-of-the-art language model in French. We created a multi-label classification dataset of 6000 user reviews from three applications in the Health & Fitness field[^1]. 
The results are encouraging and suggest that it\u2019s possible to identify automatically the reviews concerning requests for new features.'\nauthor:\n- |\n Jialiang Wei$^*$, Anne-Lise Courbis$^*$, Thomas Lambolais$^*$, Binbin Xu$^*$,\\\n Pierre Louis Bernard$^{**}$ and G\u00e9rard Dray$^*$\\\n $^{*}$: [EuroMov Digital Health in Motion, Univ Montpellier, IMT Mines Ales, Ales, France]{}\\\n $^{**}$: [EuroMov Digital Health in Motion, Univ Montpellier, IMT Mines Ales, Montpellier, France]{}\\\n `jialiang.wei@mines-ales.fr`\ntitle: |\n Towards a Data-Driven Requirements Engineering Approach:\\\n Automatic Analysis of User Reviews\n---\n\n**This is the translated English version, the original work was written in French, attached in this PDF:**\n\nRequirements Engineering, Data Driven Requirements Engineering, Deep Learning, NLP, CamemBERT\n\nIntroduction\n============\n\nRequirements Engineering (RE) aims to ensure that systems meet the needs of their stakeholders including users, sponsors, and customers. Placed in the preliminary" +"---\nabstract: 'The Permuted Kernel Problem (PKP) asks to find a permutation of a given vector belonging to the kernel of a given matrix. The PKP is at the basis of PKP-DSS, a post-quantum signature scheme deriving from the identification scheme proposed by Shamir in 1989. The most efficient solver for PKP is due to a recent paper by Koussa et al. In this paper we propose an improvement of such an algorithm, which we achieve by considering an additional collision search step applied on kernel equations involving a small number of coordinates. We study the conditions for such equations to exist from a coding theory perspective, and we describe how to efficiently find them with methods borrowed from coding theory, such as information set decoding. We assess the complexity of the resulting algorithm and show that it outperforms previous approaches in several cases. 
We also show that, taking the new solver into account, the security level of some instances of PKP-DSS turns out to be slightly overestimated.'\nauthor:\n- 'Paolo Santini, Marco Baldi, Franco Chiaraluce\\'\nbibliography:\n- 'References.bib'\ntitle: A Novel Attack to the Permuted Kernel Problem\n---\n\nDigital signatures, information set decoding, permuted kernel problem, post-quantum cryptography, PKP-DSS." +"---\nabstract: 'This paper provides a critical review of the Bayesian perspective of causal inference based on the potential outcomes framework. We review the causal estimands, assignment mechanism, the general structure of Bayesian inference of causal effects, and sensitivity analysis. We highlight issues that are unique to Bayesian causal inference, including the role of the propensity score, the definition of identifiability, the choice of priors in both low and high dimensional regimes. We point out the central role of covariate overlap and more generally the design stage in Bayesian causal inference. We extend the discussion to two complex assignment mechanisms: instrumental variable and time-varying treatments. We identify the strengths and weaknesses of the Bayesian approach to causal inference. Throughout, we illustrate the key concepts via examples.'\naddress: |\n $^{1}$Duke University, Durham, NC, USA\\\n $^{2}$University of California, Berkeley, CA, USA\\\n $^{3}$University of Florence and EUI, Florence, Italy \nauthor:\n- 'Fan Li$^{1}$, Peng Ding$^{2}$, and Fabrizia Mealli$^{3}$'\ntitle: 'Bayesian Causal Inference: A Critical Review'\n---\n\nIntroduction {#sec::intro}\n============\n\nCausality has long been central to the human philosophical debate and scientific pursuit. There are many relevant questions, e.g. the philosophical meaning of causation or deducing the causes of a given phenomenon. 
Among these" +"---\nabstract: 'Magnetic Resonance Imaging (MRI) is a widely used medical imaging modality boasting great soft tissue contrast without ionizing radiation, but unfortunately suffers from long acquisition times. Long scan times can lead to motion artifacts, for example due to bulk patient motion such as head movement and periodic motion produced by the heart or lungs. Motion artifacts can degrade image quality and in some cases render the scans nondiagnostic. To combat this problem, prospective and retrospective motion correction techniques have been introduced. More recently, data driven methods using deep neural networks have been proposed. As a large number of publicly available MRI datasets are based on Fast Spin Echo (FSE) sequences, methods that use them for training should incorporate the correct FSE acquisition dynamics. Unfortunately, when simulating training data, many approaches fail to generate accurate motion-corrupt images by neglecting the effects of the temporal ordering of the k-space lines as well as neglecting the signal decay throughout the FSE echo train. In this work, we highlight this consequence and demonstrate a training method which correctly simulates the data acquisition process of FSE sequences with higher fidelity by including sample ordering and signal decay dynamics. Through numerical experiments, we show" +"---\nabstract: 'This research work provides an exhaustive investigation of the viability of different coupled wormhole (WH) geometries with the relativistic matter configurations in the $f(R,G,T)$ extended gravity framework. We consider a specific model in the context of $f(R,G,T)$-gravity for this purpose. Also, we assume a static spherically symmetric space-time geometry and a unique distribution of matter with a set of shape functions ($\\beta(r)$) for analyzing different energy conditions (ECs). In addition to this, we examined WH-models in the equilibrium scenario by employing anisotropic fluid. 
The corresponding results are obtained using numerical methods and then presented using different plots. In this case, $f(R,G,T)$ gravity generates additional curvature quantities, which can be thought of as gravitational objects that maintain irregular WH-situations. Based on our findings, we conclude that in the absence of exotic matter, WH can exist in some specific regions of the parametric space using modified gravity model as, $f(R,G,T) = R +\\alpha R^2+\\beta G^n+\\gamma G\\ln(G)+\\lambda T$.'\nauthor:\n- |\n M. Ilyas$^1$ [^1], A. R. Athar$^2$ [^2] , Fawad Khan$^1$ [^3], Nasreen Ghafoor$^1$ [^4], Haifa I. Alrebdi $^{3}$ [^5],\\\n Kottakkaran Sooppy Nisar$^{4,5}$ [^6] and Abdel-Haleem Abdel-Aty $^{6}$ [^7]\\\n \\\n $^1$ Institute of Physics, Gomal University,\\\n Dera Ismail Khan, 29220, KP," +"---\nabstract: 'In this work, we exploit a serious security flaw in a code-based signature scheme from a 2019 work by Liu, Yang, Han and Wang. They adapt the McEliece cryptosystem to obtain a new scheme and, on top of this, they design an efficient digital signature. We show that the new encryption scheme based on McEliece, even if it has longer public keys, is not more secure than the standard one. Moreover, the choice of parameters for the signature leads to a significant performance improvement, but it introduces a vulnerability in the protocol.'\nauthor:\n- |\n Giuseppe D\u2019Alconzo\\\n \\\n Department of Mathematical Sciences, Politecnico di Torino\nbibliography:\n- 'local.bib'\ntitle: 'A note on a Code-Based Signature Scheme'\n---\n\n[ ***Keywords\u2014*** Code-Based Signatures; CFS; McEliece ]{}\n\nIntroduction {#Sect:intro}\n============\n\n#### Post-Quantum Cryptography.\n\nWith the emerging menace of quantum computation, there is the urge of replacing cryptosystems used nowadays, based on the Factorisation Problem and the Discrete Logarithm Problem, with quantum-resistant alternatives. 
Because of this, in 2016 the National Institute of Standards and Technology (NIST) started a call to evaluate and standardise quantum-resistant cryptosystems[^1]. There are two categories of primitives under evaluation: Key-Encapsulation Mechanisms (KEM) and Digital Signatures. The standardisation process" +"---\nabstract: 'A perfect teleportation protocol requires pure maximally entangled shared states. In reality, however, the shared entanglement is drastically degraded due to the inevitable interaction with the noisy environment. Here, we propose a teleportation protocol to teleport an unknown qubit through amplitude damping channels with fidelity up to one. In our protocol, we utilize environment-assisted measurement during the entanglement distribution and further modify the teleportation protocol to apply weak measurement in the last step of teleportation. Furthermore, we propose a controlled teleportation protocol, where all the qubits of the state pass through the amplitude damping channel.'\nauthor:\n- Sajede Harraz\n- 'Jiao-Yang Zhang'\n- Shuang Cong\ntitle: 'High-fidelity quantum teleportation through noisy channels via weak measurement and environment-assisted measurement'\n---\n\n[^1]\n\nQuantum teleportation is a quantum communication task which sends an unknown qubit from a sender to a receiver by using shared entanglement and classical communications [@lab1; @lab2]. The original protocol proposed by Bennett $et~al.$ [@lab3] uses an EPR pair as the shared entangled state. Other teleportation protocols have also been developed using multi-qubit entangled states such as GHZ states and W states [@lab4; @lab5; @lab6; @lab7]. The W states are robust against the loss of particles: if one particle is lost, there is still genuine entanglement between the remaining two." +"---\nabstract: 'In this paper, we design, analyze the convergence properties and address the implementation aspects of *AFAFed*. 
This is a novel *A*synchronous *F*air *A*daptive *Fed*erated learning framework for stream-oriented IoT application environments, which are characterized by time-varying operating conditions, heterogeneous resource-limited devices (i.e., coworkers), non-i.i.d. local training data and unreliable communication links. The *key novelty* of *AFAFed* is the synergic co-design of: (i) two sets of adaptively tuned tolerance thresholds and fairness coefficients at the coworkers and central server, respectively; and, (ii) a distributed adaptive mechanism, which allows each coworker to adaptively tune its own communication rate. The convergence properties of *AFAFed* under (possibly) non-convex loss functions are guaranteed by a set of *new* analytical bounds, which formally unveil the impact on the resulting *AFAFed* convergence rate of a number of Federated Learning (FL) parameters, such as the first and second moments of the per-coworker number of consecutive model updates, data skewness, communication packet-loss probability, and maximum/minimum values of the (adaptively tuned) mixing coefficient used for model aggregation.'\nauthor:\n- Enzo\u00a0Baccarelli\n- Michele\u00a0Scarpiniti\n- Alireza\u00a0Momenzadeh\n- Sima\u00a0Sarv\u00a0Ahrabi" +"---\nabstract: 'We analyze the completeness of the MOSDEF survey, in which $z\\sim2$ galaxies were selected for rest-optical spectroscopy from well-studied *HST* extragalactic legacy fields down to a fixed rest-optical magnitude limit ($H_{\\rm{AB}} =$ 24.5). The subset of $z\\sim2$ MOSDEF galaxies with high signal-to-noise (S/N) emission-line detections analyzed in previous work represents a small minority ($<10$%) of possible $z\\sim2$ MOSDEF targets. 
It is therefore crucial to understand how representative this high S/N subsample is, while also more fully exploiting the MOSDEF spectroscopic sample. Using spectral-energy-distribution (SED) models and rest-optical spectral stacking, we compare the MOSDEF $z\\sim2$ high S/N subsample with the full MOSDEF sample of $z\\sim2$ star-forming galaxies with redshifts, the latter representing an increase in sample size of more than a factor of three. We find that both samples have similar emission-line properties, in particular in terms of the magnitude of the offset from the local star-forming sequence on the \\[N\u00a0II\\] BPT diagram. There are small differences in median host galaxy properties, including the stellar mass ($M_{\\ast}$), star-formation rate (SFR) and specific SFR (sSFR), and UVJ colors; however, these offsets are minor considering the wide spread of the distributions. Using SED modeling, we also demonstrate that the" +"---\nabstract: 'Duration modelling has become an important research problem once more with the rise of non-attention neural text-to-speech systems. The current approaches largely fall back to relying on previous statistical parametric speech synthesis technology for duration prediction, which poorly models the expressiveness and variability in speech. In this paper, we propose two alternate approaches to improve duration modelling. First, we propose a duration model conditioned on phrasing that improves the predicted durations and provides better modelling of pauses. We show that the duration model conditioned on phrasing improves the naturalness of speech over our baseline duration model. Second, we also propose a multi-speaker duration model called Cauliflow, that uses normalising flows to predict durations that better match the complex target duration distribution. 
Cauliflow performs on par with our other proposed duration model in terms of naturalness, whilst providing variable durations for the same prompt and variable levels of expressiveness. Lastly, we propose to condition Cauliflow on parameters that provide an intuitive control of the pacing and pausing in the synthesised speech in a novel way.'\naddress: ' Alexa AI, Amazon '\nbibliography:\n- 'refs.bib'\ntitle: 'Expressive, Variable, and Controllable Duration Modelling in TTS'\n---\n\n**Index Terms**: neural text-to-speech, normalising" +"---\nabstract: 'Graph contrastive learning has emerged as a powerful unsupervised graph representation learning tool. The key to the success of graph contrastive learning is to acquire high-quality positive and negative samples as contrasting pairs to learn the underlying structural semantics of the input graph. Recent works usually sample negative samples from the same training batch with the positive samples or from an external irrelevant graph. However, a significant limitation lies in such strategies: the unavoidable problem of sampling false negative samples. In this paper, we propose a novel method to utilize **C**ounterfactual mechanism to generate artificial hard negative samples for **G**raph **C**ontrastive learning, namely **CGC**. We utilize a counterfactual mechanism to produce hard negative samples, ensuring that the generated samples are similar but have labels that differ from the positive sample. The proposed method achieves satisfying results on several datasets. It outperforms some traditional unsupervised graph learning methods and some SOTA graph contrastive learning methods. We also conducted some supplementary experiments to illustrate the proposed method, including the performances of CGC with different hard negative samples and evaluations for hard negative samples generated with different similarity measurements. 
The implementation code is available online to ease reproducibility[^1].'\nauthor:\n- Haoran" +"---\nabstract: 'Reconstruction of the soft tissues in robotic surgery from endoscopic stereo videos is important for many applications such as intra-operative navigation and image-guided robotic surgery automation. Previous works on this task mainly rely on SLAM-based approaches, which struggle to handle complex surgical scenes. Inspired by recent progress in neural rendering, we present a novel framework for deformable tissue reconstruction from binocular captures in robotic surgery under the single-viewpoint setting. Our framework adopts dynamic neural radiance fields to represent deformable surgical scenes in MLPs and optimize shapes and deformations in a learning-based manner. In addition to non-rigid deformations, tool occlusion and poor 3D clues from a single viewpoint are also particular challenges in soft tissue reconstruction. To overcome these difficulties, we present a series of strategies of tool mask-guided ray casting, stereo depth-cueing ray marching and stereo depth-supervised optimization. With experiments on DaVinci robotic surgery videos, our method significantly outperforms the current state-of-the-art reconstruction method for handling various complex non-rigid deformations. To our best knowledge, this is the first work leveraging neural rendering for surgical scene 3D reconstruction with remarkable potential demonstrated. Code is available at: .'\nauthor:\n- Yuehao Wang\n- Yonghao Long\n- Siu Hin Fan\n-" +"---\nabstract: 'In order to extract neutrino oscillation parameters, long-baseline neutrino oscillation experiments rely on detailed models of neutrino interactions with nuclei. These models constitute an important source of systematic uncertainty, partially because detectors to date have been blind to final state neutrons. 
Three-dimensional projection scintillator trackers comprise components of the near detectors of the next generation long-baseline neutrino experiments. Due to the good timing resolution and fine granularity, this technology is capable of measuring neutron kinetic energy in neutrino interactions on an event-by-event basis and will provide valuable data for refining neutrino interaction models and ways to reconstruct neutrino energy. Two prototypes have been exposed to the neutron beamline at Los Alamos National Laboratory (LANL) in both 2019 and 2020, with neutron energies between 0 and 800 MeV. In order to demonstrate the capability of neutron detection, the total neutron-scintillator cross section as a function of neutron energy is measured and compared to external measurements. The measured total neutron cross section in scintillator between 98 and 688 MeV is 0.36 $\\pm$ 0.05 barn.'\nauthor:\n- 'A. Agarwal'\n- 'H. Budd'\n- 'J. Cap\u00f3'\n- 'P. Chong'\n- 'G. Christodoulou'\n- 'M. Danilov'\n- 'A. Dergacheva'\n- 'A. De Roeck'" +"---\nabstract: 'It has been argued that the existence of time crystals requires a spontaneous breakdown of the continuous time translation symmetry so to account for the unexpected non-stationary behavior of quantum observables in the ground state. Our point is that such effects do emerge from position ($\\hat{q}_i$) and/or momentum ($\\hat{p}_i$) noncommutativity, i.e., from $[\\hat{q}_i,\\,\\hat{q}_j]\\neq 0$ and/or $[\\hat{p}_i,\\,\\hat{p}_j]\\neq 0$ (for $i\\neq j$). In such a context, a predictive analysis is carried out for the $2$-dim noncommutative quantum harmonic oscillator through a procedure supported by the Weyl-Wigner-Groenewold-Moyal framework. This allows for the understanding of how the phase-space noncommutativity drives the amplitude of periodic oscillations identified as time crystals. A natural extension of our analysis also shows how the spontaneous formation of time quasi-crystals can arise.'\nauthor:\n- 'A. E. 
Bernardini'\n- 'O. Bertolami'\ndate:\n- \n- \n- \ntitle: 'Emergent time crystals from phase-space noncommutative quantum mechanics'\n---\n\nTime crystals are time-periodic self-organized structures that are supposed to emerge in the time domain due to the spontaneous breaking of time translation symmetry [@W1; @W2]. They are analogous to spatial crystal lattices that are formed when the spontaneous breaking of space translation symmetry takes place [@PR2018]." +"---\nauthor:\n- 'C. Stuardi, A. Bonafede, K. Rajpurohit, M. Br\u00fcggen, F. de Gasperin, D. Hoang, R. J. van Weeren'\n- 'F. 
Vazza'\nbibliography:\n- 'my\\_bib.bib'\ndate: 'Received XX; accepted YY'\ntitle: Using the polarization properties of double radio relics to probe the turbulent compression scenario\n---\n\n[Radio relics are Mpc-size synchrotron sources located in the outskirts of some merging galaxy clusters. Binary-merging systems with favorable orientation may host two almost symmetric relics, named double radio relics.]{} [Double radio relics are seen preferentially edge-on and, thus, constitute a privileged sample for statistical studies. Their polarization and Faraday rotation properties give direct access to the relics\u2019 origin and magnetic fields.]{} [In this paper, we present a polarization and Rotation Measure (RM) synthesis study of four clusters hosting double radio relics, namely 8C 0212+703, Abell 3365, PLCK G287.0+32.9, previously missing polarization studies, and ZwCl 2341+0000, for which conflicting results have been reported. We used 1-2 GHz Karl G. Jansky Very Large Array observations. We also provide an updated compilation of known double radio relics with important observed quantities. We studied their polarization and Faraday rotation properties at 1.4 GHz and we searched for correlations between fractional polarization and physical resolution, distance from" +"---\nabstract: 'Recent years have witnessed a rapid development of automated methods for skin lesion diagnosis and classification. Due to an increasing deployment of such systems in clinics, it has become important to develop a more robust system towards various Out-of-Distribution (OOD) samples (unknown skin lesions and conditions). However, the current deep learning models trained for skin lesion classification tend to classify these OOD samples incorrectly into one of their learned skin lesion categories. 
To address this issue, we propose a simple yet strategic approach that improves the OOD detection performance while maintaining the multi-class classification accuracy for the known skin lesion categories. Specifically, this approach is built upon a realistic scenario of a long-tailed and fine-grained OOD detection task for skin lesion images. Through this approach: 1) first, we target mixup amongst the middle and tail classes to address the long-tail problem; 2) then, we combine the above mixup strategy with prototype learning to address the fine-grained nature of the dataset. The unique contribution of this paper is two-fold, justified by extensive experiments. First, we present a realistic problem setting for the OOD task on skin lesion images. Second, we propose an approach to target the long-tailed and fine-grained" +"---\nabstract: 'Emerging memristor computing systems have demonstrated great promise in improving the energy efficiency of neural network (NN) algorithms. The NN weights stored in memristor crossbars, however, may face potential theft attacks due to the nonvolatility of the memristor devices. In this paper, we propose to protect the NN weights by mapping selected columns of them in the form of 1\u2019s complements and leaving the other columns in their original form, preventing the adversary from knowing the exact representation of each weight. The results show that compared with prior work, our method achieves effectiveness comparable to the best of them and reduces the hardware overhead by more than 18X.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: |\n Enhancing Security of Memristor Computing System\\\n Through Secure Weight Mapping\\\n [^1] \n---\n\n\u00a0\n\nIntroduction\n============\n\nNeural network (NN) algorithms are essential elements across industries such as robotics, visual object recognition, and natural language processing. They are typically data-intensive, involving a large number of vector-matrix multiplications (VMMs). 
Conventional computer architectures separating computation and memory are challenged because of their ineffectiveness in executing such algorithms. The computing systems based on emerging memristor devices, such as resistive random-access memory (RRAM) and phase-change memory (PCM), introduce an in" +"---\nabstract: 'Humans engaged in collaborative activities are naturally able to convey their intentions to teammates through multi-modal communication, which is made up of explicit and implicit cues. Similarly, a more natural form of human-robot collaboration may be achieved by enabling robots to convey their *intentions* to human teammates via multiple communication channels. In this paper, we postulate that a better communication may take place should collaborative robots be able to anticipate their movements to human teammates in an intuitive way. In order to support such a claim, we propose a robot system\u2019s architecture through which robots can communicate planned motions to human teammates leveraging a Mixed Reality interface powered by modern head-mounted displays. Specifically, the robot\u2019s hologram, which is superimposed to the real robot in the human teammate\u2019s point of view, shows the robot\u2019s *future* movements, allowing the human to understand them in advance, and possibly react to them in an appropriate way. We conduct a preliminary user study to evaluate the effectiveness of the proposed anticipatory visualization during a complex collaborative task. 
The experimental results suggest that an improved and more natural collaboration can be achieved by employing this *anticipatory communication mode*.'\nauthor:\n- 'Simone\u00a0Macci\u00f2, Alessandro\u00a0Carf\u00ec," +"---\nauthor:\n- Filippo Pallotta\nbibliography:\n- 'main.bib'\ntitle: |\n Bringing\\\n the Second Quantum Revolution\\\n into High School\n---\n\nplus1pt\n\n\\[abstract\\] Abstract\n=====================\n\nThe aim of this thesis is to bridge the gap between the world of physics research and secondary education on contemporary quantum physics.\n\nThe second quantum revolution is changing the boundaries of research into the foundations of physics and has enabled the creation of a new generation of technologies that are fundamental to meeting global challenges, now and in the future. Unfortunately, high school curricula are disconnected from these realities and that risks to exclude the new generations from the possibility of participating in a conscious and active way in the construction of the society in which they live.\n\nTo address this problem, the first strategy is to strengthen cohesion within the learning ecosystem related to quantum physics. Researchers, teachers and students need to be enabled to carry out activities that allow them to develop knowledge and skills through dialogue, sharing and collaboration. 
In this sense, the study of teachers\u2019 Pedagogical Content Knowledge (PCK) through the model of Educational Reconstruction for Teacher Education (ERTE) has made it possible to identify those factors that make it possible to" +"---\nabstract: 'Utilizing KAM theory, we show that there are certain levels in relative $\\SU(2)$ and $\\SU(3)$ character varieties of the once-punctured torus where the action of a single hyperbolic element is not ergodic.'\naddress:\n- 'Department of Mathematics, University of Maryland, College Park, MD 20742, USA'\n- 'Department of Mathematics, University of Maryland, College Park, MD 20742, USA'\n- 'Department of Mathematical Sciences, George Mason University, 4400 University Drive, Fairfax, Virginia 22030, USA'\n- 'CMLS, CNRS, \u00c9cole polytechnique, Institut Polytechnique de Paris, 91128 Palaiseau Cedex, France.'\nauthor:\n- Giovanni Forni\n- 'William M. Goldman'\n- Sean Lawton\n- Carlos Matheus\ntitle: 'Non-ergodicity on $\\SU(2)$ and $\\SU(3)$ character varieties of the once-punctured torus'\n---\n\nIntroduction\n============\n\nIn 1997 Goldman proved that the (pure) mapping class group of a closed orientable surface $\\Sigma$ acts ergodically on the moduli space of flat principal $G$-bundles over $\\Sigma$ when $G$ is a compact connected Lie group whose simple factors are rank 1 [@Gold6]. Goldman conjectured his theorem generalized to parabolic $G$-bundles over compact orientable surfaces $\\Sigma_{n,g}$ of genus $g\\geq 0$ with $n\\geq 0$ punctures (assuming $n\\geq 4$ if $g=0$) for any compact connected Lie group $G$. This conjecture was established in [@PE1; @PE2] when" +"---\nabstract: 'Thermal comfort in indoor environments has an enormous impact on the health, well-being, and performance of occupants. Given the focus on energy efficiency and Internet of Things enabled smart buildings, machine learning (ML) is being increasingly used for data-driven thermal comfort (TC) prediction. 
Generally, ML-based solutions are proposed for air-conditioned or HVAC ventilated buildings and the models are primarily designed for adults. On the other hand, naturally ventilated (NV) buildings are the norm in most countries. They are also ideal for energy conservation and long-term sustainability goals. However, the indoor environment of NV buildings lacks thermal regulation and varies significantly across spatial contexts. These factors make TC prediction extremely challenging. Thus, determining the impact of building environment on the performance of TC models is important. Further, the generalization capability of TC prediction models across different NV indoor spaces needs to be studied. This work addresses these problems. Data is gathered through month-long field experiments conducted in 5 naturally ventilated school buildings, involving 512 primary school students. The impact of spatial variability on student comfort is demonstrated through variation in prediction accuracy (by as much as 71%). The influence of building environment on TC prediction is also demonstrated through" +"---\nabstract: 'Spin states of two-dimensional Wigner clusters are considered at low temperatures, when all electrons are in ground coordinate states. The spin subsystem behavior is determined by antiferromagnetic exchange integrals. The spin states in such a system in the presence of a magnetic field are described in terms of the Ising model. The spin structure, correlation function and magnetic susceptibility of the cluster are found by computer simulations. It is shown that the spin susceptibility experiences oscillations with respect to the magnetic field, owing to the magnetoinduced spin subsystem rearrangements.'\nauthor:\n- 'Mehrdad M. Mahmoodian$^{1,2}$'\n- 'M.M. Mahmoodian$^{1,2}$'\n- 'M.V. 
Entin$^{1}$'\ntitle: 'Spin structure and spin magnetic susceptibility of two-dimensional Wigner clusters'\n---\n\nIntroduction\n============\n\nTwo-dimensional Wigner lattices have been the subject of study since the 1970\u2019s \u00a0[@chap; @platzman; @grimes]. In these systems the Coulomb repulsion is compensated by opposite charges separated by an insulating layer from a 2D quantum well. The classical electrons occupy the potential energy minima, and quantum electrons form wave functions near the minima. If the quantization energy becomes large, the electron lattice melts and the system converts to electron gas.\n\nAt a low temperature and in the absence of quantization, the electrons are arranged" +"---\nabstract: 'The electroencephalogram (EEG) is a powerful method to understand how the brain processes speech. Linear models have recently been replaced for this purpose with deep neural networks and yield promising results. In related EEG classification fields, it is shown that explicitly modeling subject-invariant features improves generalization of models across subjects and benefits classification accuracy. In this work, we adapt factorized hierarchical variational autoencoders to exploit parallel EEG recordings of the same stimuli. We model EEG into two disentangled latent spaces. Subject accuracy reaches $98.96\\%$ and $1.60\\%$ on respectively the subject and content latent space, whereas binary content classification experiments reach an accuracy of $51.51\\%$ and $62.91\\%$ on respectively the subject and content latent space.'\naddress: |\n $^1$KU Leuven, PSI, Dept. of Electrical engineering (ESAT), Leuven, Belgium\\\n $^2$KU Leuven, ExpORL, Dept. 
Neurosciences, Leuven, Belgium \nbibliography:\n- 'refs.bib'\ntitle: 'Learning subject-invariant representations from speech-evoked EEG using variational autoencoders'\n---\n\nfactorized hierarchical variational autoencoder, speech decoding, EEG, unsupervised learning, domain generalization\n\nIntroduction {#sec:intro}\n============\n\nRecently, much research has gone into modeling how natural running speech is processed in the human brain. A standard approach is to present natural running speech to a subject while EEG signals are recorded. Subsequently, linear models" +"---\nabstract: 'Repositories of large software systems have become commonplace. This massive expansion has resulted in the emergence of various problems in these software platforms including identification of (i) bug-prone packages, (ii) critical bugs, and (iii) severity of bugs. One of the important goals would be to mine these bugs and recommend them to the developers to resolve them. The first step to this is that one has to accurately detect the extent of severity of the bugs. In this paper, we take up this task of predicting the severity of bugs in the near future. Contextualized neural models built on the text description of a bug and the user comments about the bug help to achieve reasonably good performance. Further information on how the bugs are related to each other in terms of the ways they affect packages can be summarised in the form of a graph and used along with the text to get additional benefits.'\nauthor:\n- Rima Hazra\n- Arpit Dwivedi\n- Animesh Mukherjee\ntitle: |\n Is this bug severe? A text-cum-graph\\\n based model for bug severity prediction\n---\n\nIntroduction {#sec:intro}\n============\n\nLarge software systems have become increasingly commonplace. As these repositories grow, they become more" +"---\nabstract: |\n We provide a new perspective on the close relationship between entanglement and time. 
Our main focus is on bipartite entanglement, where this connection is foreshadowed both in the positive partial transpose criterion due to Peres \\[A. Peres, Phys. Rev. Lett., **77**, 1413 (1996)\\] and in the classification of quantum within more general non-signalling bipartite correlations \\[M. Frembs and A. D\u00f6ring, \\]. Extracting the relevant common features, we identify a necessary and sufficient condition for bipartite entanglement in terms of a compatibility condition with respect to time orientations in local observable algebras, which express the dynamics in the respective subsystems. We discuss the relevance of the latter in the broader context of von Neumann algebras and the thermodynamical notion of time naturally arising within the latter.\\\nauthor:\n- Markus Frembs\nbibliography:\n- 'bibliography.bib'\ntitle: Bipartite entanglement and the arrow of time\n---\n\nIntroduction {#sec: introduction}\n============\n\nThe connection between quantum entanglement and the arrow of time has been the subject of numerous research enterprises, some recent ones include [@PageWootters1983; @JenningsRudolph2010; @LinMarcolliOoguriStoica2015; @Susskind2016; @Racorean2019]. Here, we mainly focus on bipartite entanglement. It has been surmised that the operation of partial transposition in the positive partial transpose criterion for bipartite" +"---\nabstract: 'The 3rd Generation Partnership Project started the study of Release 18 in 2021. Artificial intelligence (AI)-native air interface is one of the key features of Release 18, where AI for channel state information (CSI) feedback enhancement is selected as the representative use case. [[This article provides an overview of AI for CSI feedback enhancement in 5G-Advanced. Several representative non-AI and AI-enabled CSI feedback frameworks are first introduced and compared. Then, the standardization of AI for CSI feedback enhancement in 5G-advanced is presented in detail. 
First, the scope of the AI for CSI feedback enhancement in 5G-Advanced is presented and discussed. Then, the main challenges and open problems in the standardization of AI for CSI feedback enhancement, especially focusing on performance evaluation and the design of new protocols for AI-enabled CSI feedback, are identified and discussed. This article provides a guideline for the standardization study of AI-based CSI feedback enhancement. ]{}]{}'\nauthor:\n- '[^1] [^2]'\nbibliography:\n- 'magazine.bib'\ntitle: 'AI for CSI Feedback Enhancement in 5G-Advanced'\n---\n\nIntroduction\n============\n\nIn June 2022, the 3rd Generation Partnership Project (3GPP) completed the latest release of the fifth generation (5G) cellular networks, namely, Release 17, which is the last version of the first" +"---\nabstract: 'Future industrial applications will encompass compelling new use cases requiring stringent performance guarantees over multiple key performance indicators (KPI) such as reliability, dependability, latency, time synchronization, security, etc. Achieving such stringent and diverse service requirements necessitates the design of a *special-purpose Industrial Internet of Things (IIoT) network* comprising a multitude of specialized functionalities and technological enablers. This article proposes an innovative architecture for such a special-purpose 6G IIoT network incorporating seven functional building blocks categorized into: *special-purpose functionalities* and *enabling technologies*. The former consists of *Wireless Environment Control*, *Traffic/Channel Prediction*, *Proactive Resource Management* and *End-to-End Optimization* functions; whereas the latter includes *Synchronization and Coordination*, *Machine Learning and Artificial Intelligence Algorithms*, and *Auxiliary Functions*. 
The proposed architecture aims at providing a resource-efficient and holistic solution for the complex and dynamically challenging requirements imposed by future 6G industrial use cases. Selected test scenarios are provided and assessed to illustrate cross-functional collaboration and demonstrate the applicability of the proposed architecture in a wireless IIoT network.'\nauthor:\n- 'Nurul Huda Mahmood, Gilberto Berardinelli, Emil J. Khatib, Ramin Hashemi, Carlos de Lima, and\u00a0Matti Latva-aho[^1] [^2] [^3] [^4] [^5] [^6]'\ntitle: |\n A Functional Architecture for 6G\\\n Special Purpose Industrial IoT Networks\n---" +"---\nauthor:\n- 'Keagan Blanchette,'\n- 'Erin Piccirillo,'\n- 'Nassim Bozorgnia,'\n- 'Louis E. Strigari,'\n- 'Azadeh Fattahi,'\n- 'Carlos S. Frenk,'\n- 'Julio F. Navarro,'\n- and Till Sawala\nbibliography:\n- 'mainbib.bib'\ntitle: 'Velocity-dependent J-factors for Milky Way dwarf spheroidal analogues in cosmological simulations'\n---\n\nIntroduction {#sec:intro}\n============\n\nIndirect dark matter (DM) searches strive to identify Standard Model particles produced through the annihilation or decay of DM particles in astrophysical environments\u00a0[@Conrad:2015bsa; @Gaskins:2016cha]. These searches require identifying environments of high DM density, and understanding how DM is distributed in these environments. Because of their high DM-to-luminous mass ratios as obtained from their stellar kinematics, and their relative proximity, dwarf spheroidal galaxies (dSphs) of the Milky Way (MW) are ideal candidates for indirect DM searches\u00a0[@Strigari:2018utn], in particular for searches using high energy gamma rays. 
Fermi-LAT observations of gamma rays from dSphs provide the most stringent exclusion limits on the DM annihilation cross section for DM masses up to $\\sim$\u00a0TeV\u00a0[@Fermi-LAT:2016uux], while H.E.S.S.\u00a0[@H.E.S.S.:2020jez] and HAWC\u00a0[@Albert:2017vtb] set the strongest bounds for DM masses greater than 1 TeV.\n\nThe aforementioned bounds on the annihilation cross section obtained from dSphs typically assume the s-wave DM annihilation model in which the" +"---\nbibliography:\n- 'biblio.bib'\n---\n\nMPP-2022-63\n\n[ **NNLO study of top-quark mass renormalization\\\nscheme uncertainties in Higgs boson production**]{}\n\n[**Javier Mazzitelli**]{}\n\nMax-Planck Institut f\u00fcr Physik, F\u00f6hringer Ring 6, 80805 M\u00fcnchen, Germany\n\n[**Abstract**]{}\n\n> 10000\n>\n> The ambiguity in the choice of a renormalization scheme and scale for the top-quark mass leads to an additional source of theoretical uncertainty in the calculation of the Higgs boson production cross section via gluon fusion. These uncertainties were found to be dominant in the case of off-shell Higgs production at next-to-leading order in QCD for large values of the Higgs virtuality $m_H^*$. In this work, we study the uncertainties related to the top-quark mass definition up to next-to-next-to-leading order (NNLO) in QCD. We include the full top-quark mass dependence up to three loops in the virtual corrections, and evaluate the real contributions in the soft limit, therefore obtaining the so-called soft-virtual (SV) approximation. We construct predictions for off-shell Higgs boson production renormalizing the top-quark mass within both the on-shell (OS) and the schemes, and study in detail the differences between them. While the differences between the two schemes are sizeable, we find that the predictions are always compatible within scale uncertainties. We also" +"---\nabstract: 'We present a theoretical framework for a stereo vision method via optimal transport tools. 
We consider two aligned optical systems and we develop the matching between the two pictures line by line. By considering a regularized version of the optimal transport, we can speed up the computation and obtain a very accurate disparity function evaluation. Moreover, via this same method we can approach successfully the case of images with occluded regions.'\nauthor:\n- 'Mattia Galeotti, Alessandro Sarti, Giovanni Citti'\nbibliography:\n- 'mybiblio.bib'\ntitle: A framework for stereo vision via optimal transport\n---\n\nIntro\n=====\n\nStereo vision is the ability of reconstructing a 3-D representation of a given situation from the data of two 2-D pictures taken with a small shift. This process is clearly fundamental in human vision, has been analyzed through a great variety of mathematical tools and it is central also in many technological applications. In particular stereo vision techniques employ two cameras with the same focal length looking at the same scene. The fundamental step in the stereo reconstruction consists in building a correspondence between the two pictures of the same scene, that is finding for any point on the left image an associated point" +"---\nabstract: |\n Consider a BV function on a Riemannian manifold. What is its differential? And what about the Hessian of a convex function? These questions have clear answers in terms of (co)vector/matrix valued measures if the manifold is the Euclidean space. In more general curved contexts, the same objects can be perfectly understood via charts. However, charts are often unavailable in the less regular setting of metric geometry, where still the questions make sense.\n\n In this paper we propose a way to deal with this sort of problems and, more generally, to give a meaning to a concept of \u2018measure acting in duality with sections of a given bundle\u2019, loosely speaking. 
Despite the generality, several classical results in measure theory like Riesz\u2019s and Alexandrov\u2019s theorems have a natural counterpart in this setting. Moreover, as we are going to discuss, the notions introduced here provide a unified framework for several key concepts in nonsmooth analysis that have been introduced more than two decades ago, such as: Ambrosio-Kirchheim\u2019s metric currents, Cheeger\u2019s Sobolev functions and Miranda\u2019s BV functions.\n\n Not surprisingly, the understanding of the structure of these objects improves with the regularity of the underlying space. We are particularly interested in the" +"---\nauthor:\n- 'J. P\u00e9tri'\nbibliography:\n- '/home/petri/zotero/Ma\\_bibliotheque.bib'\ndate: 'Received ; accepted '\ntitle: Particle acceleration and radiation reaction in a strongly magnetized rotating dipole\n---\n\n[Neutron stars are surrounded by ultra-relativistic particles efficiently accelerated by ultra strong electromagnetic fields. These particles copiously emit high energy photons through curvature, synchrotron and inverse Compton radiation. However so far, no numerical simulations were able to handle such extreme regimes of very high Lorentz factors and magnetic field strengths close or even above the quantum critical limit of \u00a0T.]{} [It is the purpose of this paper to study particle acceleration and radiation reaction damping in a rotating magnetic dipole with realistic field strengths of \u00a0T to \u00a0T typical of millisecond and young pulsars as well as of magnetars.]{} [To this end, we implemented an exact analytical particle pusher including radiation reaction in the reduced Landau-Lifshitz approximation where the electromagnetic field is assumed constant in time and uniform in space during one time step integration. The position update is performed using a velocity Verlet method. 
We extensively tested our algorithm against time-independent background electromagnetic fields like the electric drift in crossed electric and magnetic fields and the magnetic drift and mirror motion in" +"---\nabstract: 'The next generation of radio telescopes will strive for unprecedented dynamic range across wide fields of view, and direction-dependent gains such as the gain from the primary-beam pattern, or leakage of one Stokes product into another, must be removed from the cleaned images if dynamic range is to reach its full potential. Unfortunately, such processing is extremely computationally intensive, and is made even more challenging by the very large volumes of data that these instruments will generate. Here we describe a new GPU-based imager, aimed primarily at use with the ASKAP telescope, that is capable of generating cleaned, full-polarisation images that include wide-field, primary-beam, and polarisation leakage corrections.'\nauthor:\n- 'Chris J. Skipper\\*, Anna M. M. Scaife, J. Patrick Leahy'\ntitle: 'A GPU-based Imager with Polarised Primary-beam Correction'\n---\n\nIntroduction\n============\n\nAstronomical images that are made with interferometric radio telescopes have to be cleaned, or deconvolved, in order to remove the point-spread function of the interferometer. But other, weaker, effects that are direction-dependent, such as gains due to the antenna primary-beam patterns, are normally ignored due to the complexity of processing required to remove them. However, the latest - and next - generations of radio telescopes (such as
The w range is reduced by fitting the snapshot $uv$ plane, thus effectively enhancing the performance of the improved W-Stacking algorithm. We present a detailed implementation of WS-Snapshot. With full-scale SKA1-LOW simulations, we present the imaging performance and imaging quality results for different parameter cases. The results show that the WS-Snapshot method enables more efficient distributed processing and significantly reduces the computational time overhead within an acceptable accuracy range, which would be crucial for subsequent SKA science studies.'\nauthor:\n- |\n Yang-Fan Xie,$^{1,2}$ Feng Wang,$^{1,2}$[^1] Hui Deng,$^{1,2}$[^2] Ying Mei,$^{1,2}$[^3] Ying-He Celeste L\u00fc,$^{3}$ Gabriella Hodos\u00e1n,$^{4}$ Vladislav Stolyarov,$^{3,5}$ Oleg Smirnov,$^{6,7}$ Xiao-Feng Li,$^{1,2}$[^4] and Tim Cornwell$^{3}$\\\n $^{1}$ Center For Astrophysics, Guangzhou University, Guangzhou 510006, P.R. China\\\n $^{2}$ Great Bay Center, National Astronomical Data Center, Guangzhou, Guangdong, 510006, P.R. 
China\\\n $^{3}$ Cavendish Astrophysics Group, University of Cambridge, Cambridge, CB3 0HE, UK\\\n $^{4}$ RAL Space, STFC Rutherford Appleton Laboratory, Didcot, Oxfordshire, OX11 0QX, UK\\\n $^{5}$ Special" +"---\nauthor:\n- 'Najmeh Nekooghadirli, Michel Gendreau, Jean-Yves Potvin, Thibaut Vidal'\ntitle: 'Workload Equity in Multi-Period Vehicle Routing Problems'\n---\n\nWorkload Equity in Multi-PeriodVehicle Routing Problems\n\n**Najmeh Nekooghadirli$^{1}$, Michel Gendreau$^{1}$**\\\n**Jean-Yves Potvin$^{2}$, Thibaut Vidal$^{1,3,4}$**\\\n\n[$^{1}$ CIRRELT & Department of Mathematical and Industrial Engineering, Polytechnique Montreal, Canada\\\n$^{2}$ CIRRELT & Department of Computer Science and Operation Research, University of Montreal, Canada\\\n$^{3}$ Department of Computer Science, Pontifical Catholic University of Rio de Janeiro, Brazil\\\n$^{4}$ SCALE-AI Chair in Data-Driven Supply Chains\\\n]{}\n\n[**[Abstract.]{}**]{} An equitable distribution of workload is essential when deploying vehicle routing solutions in practice. For this reason, previous studies have formulated vehicle routing problems with workload-balance objectives or constraints, leading to trade-off solutions between routing costs and workload equity. These methods consider a single planning period; however, equity is often sought over several days in practice. In this work, we show that workload equity over multiple periods can be achieved without impact on transportation costs when the planning horizon is sufficiently large. To achieve this, we design a two-phase method to solve multi-period vehicle routing problems with workload balance. Firstly, our approach produces solutions with minimal distance for each period. Next, the resulting routes are allocated to drivers" +"---\nabstract: 'We numerically estimate the divergence of several two-vertex diagrams that contribute to the radiative corrections for the Lorentzian EPRL spin foam propagator. 
We compute the amplitudes as functions of a homogeneous cutoff over the bulk quantum numbers, fixed boundary data, and different Immirzi parameters, and find that for a class of two-vertex diagrams, those with fewer than six internal faces are convergent. The calculations are done with the numerical framework `sl2cfoam-next`.'\nauthor:\n- |\n \\\n \\\n \\\ntitle: Radiative corrections to the Lorentzian EPRL spin foam propagator\n---\n\nIntroduction {#sec:intro}\n============\n\nThe main goal of spin foam theory is to define the dynamics of loop quantum gravity in a background independent and Lorentz covariant way, providing transition amplitudes between spin network states [@Rovelli2015; @Perez:2012wv]. The state of the art are the EPRL and FK spin foam models [@Engle:2007wy; @Freidel:2007py]; in this paper we will focus on the Lorentzian EPRL spin foam model. These theories have a compelling connection with discrete general relativity in the double limit of finer discretization and vanishing $\\hbar$ [@Barrett:2009mw; @Dona:2020yao; @Dona:2020tvv; @Han:2021kll; @Engle:2021xfs].\n\nThe theory is ultraviolet finite; however, the unbounded summation over the bulk degrees of freedom can cause large-volume infrared divergences (although" +"---\nabstract: 'Spiculations/lobulations, sharp/curved spikes on the surface of lung nodules, are good predictors of lung cancer malignancy and hence, are routinely assessed and reported by radiologists as part of the standardized Lung-RADS clinical scoring criteria. Given the 3D geometry of the nodule and 2D slice-by-slice assessment by radiologists, manual spiculation/lobulation annotation is a tedious task and thus no public datasets exist to date for probing the importance of these clinically-reported features in the SOTA malignancy prediction algorithms. 
As part of this paper, we release a large-scale Clinically-Interpretable Radiomics Dataset, CIRDataset, containing 956 radiologist QA/QC\u2019ed spiculation/lobulation annotations on segmented lung nodules from two public datasets, LIDC-IDRI (N=883) and LUNGx (N=73). We also present an end-to-end deep learning model based on multi-class Voxel2Mesh extension to segment nodules (while preserving spikes), classify spikes (sharp/spiculation and curved/lobulation), and perform malignancy prediction. Previous methods have performed malignancy prediction for LIDC and LUNGx datasets but without robust attribution to any clinically reported/actionable features (due to known hyperparameter sensitivity issues with general attribution schemes). With the release of this comprehensively-annotated CIRDataset and end-to-end deep learning baseline, we hope that malignancy prediction methods can validate their explanations, benchmark against our baseline, and provide clinically-actionable insights. Dataset, code," +"---\nabstract: |\n Thin synchrotron-emitting filaments are increasingly seen in the intracluster medium (ICM). We present the first example of a direct interaction between a magnetic filament, a radio jet, and a dense ICM clump in the poor cluster Abell\u00a0194. This enables the first exploration of the dynamics and possible histories of magnetic fields and cosmic rays in such filaments. Our observations are from the MeerKAT Galaxy Cluster Legacy Survey and the LOFAR Two Metre Sky Survey. Prominent 220\u00a0kpc long filaments extend east of radio galaxy 3C40B, with very faint extensions to 300\u00a0kpc, and show signs of interaction with its northern jet. They curve around a bend in the jet and intersect the jet in Faraday depth space. The X-ray surface brightness drops across the filaments; this suggests that the relativistic particles and fields contribute significantly to the pressure balance and evacuate the thermal plasma in a $\\sim$35\u00a0kpc cylinder. 
We explore whether the relativistic electrons could have streamed along the filaments from 3C40B, and present a plausible alternative whereby magnetized filaments are a) generated by shear motions in the large-scale, post-merger ICM flow, b) stretched by interactions with the jet and flows in the ICM, amplifying" +"---\nabstract: 'Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions\u2014potentially causing harms once deployed. However, how to take action to address these patterns is not always clear. In a collaboration between ML and human-computer interaction researchers, physicians, and data scientists, we develop []{}, the first interactive system to help domain experts and data scientists easily and responsibly edit Generalized Additive Models (GAMs) and fix problematic patterns. With novel interaction techniques, our tool puts interpretability into action\u2014empowering users to analyze, validate, and align model behaviors with their knowledge and values. Physicians have started to use our tool to investigate and fix pneumonia and sepsis risk prediction models, and an evaluation with 7 data scientists working in diverse domains highlights that our tool is easy to use, meets their model editing needs, and fits into their current workflows. Built with modern web technologies, our tool runs locally in users\u2019 web browsers or computational notebooks, lowering the barrier to use. []{} is available at the following public demo link: [[[**`https://interpret.ml/gam-changer`**](https://interpret.ml/gam-changer)]{}]{}.'\nauthor:\n- 'Zijie J. Wang'\n- Alex Kale\n- Harsha Nori\n- Peter Stella\n- 'Mark E. Nunnally'\n- Duen Horng Chau" +"---\nabstract: 'We report the discovery of an eccentric hot Neptune and a non-transiting outer planet around TOI-1272. 
We identified the eccentricity of the inner planet, with an orbital period of 3.3 d and $R_{\\rm p,b} = 4.1 \\pm 0.2$ $R_\\earth$, based on a mismatch between the observed transit duration and the expected duration for a circular orbit. Using ground-based radial velocity measurements from the HIRES instrument at the Keck Observatory, we measured the mass of TOI-1272b to be $M_{\\rm p,b} = 25 \\pm 2$ $M_\\earth$. We also confirmed a high eccentricity of $e_b = 0.34 \\pm 0.06$, placing TOI-1272b among the most eccentric well-characterized sub-Jovians. We used these RV measurements to also identify a non-transiting outer companion on an 8.7-d orbit with a similar mass of $M_{\\rm p,c}$ sin$i= 27 \\pm 3$ $M_\\earth$ and $e_c \\lesssim 0.35$. Dynamically stable planet-planet interactions have likely allowed TOI-1272b to avoid tidal eccentricity decay despite the short circularization timescale expected for a close-in eccentric Neptune. TOI-1272b also maintains an envelope mass fraction of $f_{\\rm env} \\approx 11\\%$ despite its high equilibrium temperature, implying that it may currently be undergoing photoevaporation. This planet joins a small population of short-period Neptune-like planets within the \u201cHot" +"---\nabstract: 'It has been argued that a self-interacting massive vector field is pathological due to a dynamical formation of a singular effective metric of the vector field, which is the onset of a gradient or ghost instability. We discuss that this singularity formation is not necessarily a fundamental problem but a breakdown of the effective field theory (EFT) description of the massive vector field. 
By using a model of ultraviolet (UV) completion of the massive vector field, we demonstrate that a Proca star, a self-gravitating condensate of the vector field, continues to exist even after the EFT suffers from a gradient instability without any pathology at UV, in which the EFT description is still valid and the gradient instability in EFT may be interpreted as a standard dynamical instability of a high-density boson star from the UV perspective. On the other hand, we find that the EFT description is broken before the ghost instability appears. This suggests that a heavy degree of freedom may be spontaneously excited to cure the pathology of the EFT as the EFT dynamically tends to approach the onset of the ghost instability.'\nauthor:\n- Katsuki Aoki\n- Masato Minamitsuji\nbibliography:\n- 'ref.bib'\ntitle: |" +"---\nabstract: 'In deterministic optimization, it is typically assumed that all problem parameters are fixed and known. In practice, however, some parameters may be a priori unknown but can be estimated from historical data. A typical predict-then-optimize approach separates predictions and optimization into two stages. Recently, end-to-end predict-then-optimize has become an attractive alternative. In this work, we present the [*PyEPO*]{} package, a [*PyTorch*]{}-based end-to-end predict-then-optimize library in Python. To the best of our knowledge, [*PyEPO*]{} (pronounced like *pineapple* with a silent \u201cn\") is the first such generic tool for linear and integer programming with predicted objective function coefficients. It provides four base algorithms: a convex surrogate loss function from the seminal work of\u00a0@elmachtoub2021smart, a differentiable black-box solver approach of\u00a0@poganvcic2019differentiation, and two differentiable perturbation-based methods from\u00a0@berthet2020learning. 
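To fix ideas, here is a minimal pure-Python sketch of the SPO+ surrogate of @elmachtoub2021smart over a toy finite feasible set; this is our own illustration of the loss, not the PyEPO interface:

```python
def solve(costs, feasible):
    """Oracle: minimise sum(c_i * w_i) over a finite feasible set of decisions."""
    best = min(feasible, key=lambda w: sum(c * x for c, x in zip(costs, w)))
    return best, sum(c * x for c, x in zip(costs, best))

def spo_plus_loss(c_hat, c, feasible):
    """SPO+ surrogate: -z*(2*c_hat - c) + 2*c_hat . w*(c) - z*(c)."""
    _, z_shift = solve([2 * ch - ci for ch, ci in zip(c_hat, c)], feasible)
    w_star, z_star = solve(c, feasible)
    return -z_shift + 2 * sum(ch * x for ch, x in zip(c_hat, w_star)) - z_star

# Toy decision: choose exactly one of three edges (one-hot feasible set).
S = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
c_true = [1.0, 3.0, 2.0]
print(spo_plus_loss(c_true, c_true, S))           # 0.0: perfect prediction, no loss
print(spo_plus_loss([4.0, 1.0, 2.0], c_true, S))  # 8.0: prediction that flips the decision
```

A perfect cost prediction incurs zero loss, while a prediction that changes the chosen decision is penalized, which is the behavior a decision-focused loss should have.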
[*PyEPO*]{} provides a simple interface for the definition of new optimization problems, the implementation of state-of-the-art predict-then-optimize training algorithms, the use of custom neural network architectures, and the comparison of end-to-end approaches with the two-stage approach. [*PyEPO*]{} enables us to conduct a comprehensive set of experiments comparing a number of end-to-end and two-stage approaches along axes such as prediction accuracy, decision quality, and running time on problems such as Shortest Path, Multiple" +"---\nabstract: 'The transport and optical properties of semiconducting transition metal dichalcogenides around room temperature are dictated by electron-phonon scattering mechanisms within a complex, spin-textured and multi-valley electronic landscape. The relative positions of the valleys are critical, yet they are sensitive to external parameters and very difficult to determine directly. We propose a first-principle model as a function of valley positions to calculate carrier mobility and Kerr rotation angles. The model brings valuable insights, as well as quantitative predictions of macroscopic properties for a wide range of carrier density. The doping-dependent mobility displays a characteristic peak, the height depending on the position of the valleys. The Kerr rotation signal is enhanced when same spin-valleys are aligned, and quenched when opposite spin-valleys are populated. We provide guidelines to optimize these quantities with respect to experimental parameters, as well as the theoretical support for *in situ* characterization of the valley positions.'\nauthor:\n- Thibault Sohier\n- 'Pedro M. M. C. 
de Melo'\n- Zeila Zanolli\n- Matthieu Jean Verstraete\nbibliography:\n- 'QV.bib'\ntitle: The impact of valley profile on the mobility and Kerr rotation of transition metal dichalcogenides \n---\n\nIntroduction {#sec:intro}\n============\n\nSemiconducting transition-metal dichalcogenides (TMDs) hold center stage in the flatland[@Manzeli2017]. They" +"---\nabstract: 'In this paper, we propose a central moment discrete unified gas-kinetic scheme (DUGKS) for multiphase flows with large density ratio and high Reynolds number. Two sets of kinetic equations with a central-moment-based multiple-relaxation-time collision operator are employed to approximate the incompressible Navier-Stokes equations and a conservative phase field equation for interface-capturing. In the framework of DUGKS, the first moment of the distribution function for the hydrodynamic equations is defined as velocity instead of momentum. Meanwhile, the zeroth moments of the distribution function and external force are also suitably defined such that an artificial pressure evolution equation can be recovered. Moreover, the Strang splitting technique for time integration is employed to avoid the calculation of spatial derivatives in the force term at cell faces. For the interface-capturing equation, two equivalent DUGKS methods that deal with the diffusion term differently using a source term as well as a modified equilibrium distribution function are presented. Several benchmark tests that cover a wide range of density ratios (up to 1000) and Reynolds numbers (up to $10^5$) are subsequently carried out to demonstrate the capabilities of the proposed scheme. Numerical results are in good agreement with the reference and experimental data.'" +"---\nabstract: 'We analyze a 7.4\u00a0hr [XMM-Newton]{}\u00a0light curve of the cataclysmic variable [Swift\u00a0J0503.7$-$2819]{}, previously classified using optical periods as an intermediate polar (IP) with an orbital period of 0.0567\u00a0days. 
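The spin-orbit beat relation invoked for this system is simple enough to verify directly; the periods are the values quoted in this abstract, and the relation $1/P_{\rm beat} = 1/P_{\rm spin} - 1/P_{\rm orb}$ is the standard one for an asynchronous polar:

```python
def beat_period(p_spin, p_orb):
    """Spin-orbit beat period: 1/P_beat = 1/P_spin - 1/P_orb (for P_spin < P_orb)."""
    return 1.0 / (1.0 / p_spin - 1.0 / p_orb)

P_ORB = 0.0567  # orbital period in days, from the abstract
for p_spin in (0.0455, 0.0505):  # the two candidate spin periods
    excess = 100.0 * (P_ORB / p_spin - 1.0)
    print(f"P_spin = {p_spin} d: beat = {beat_period(p_spin, P_ORB):.3f} d, "
          f"spin {excess:.1f}% faster than orbit")
```

The small residual difference from the quoted 0.231 d comes from rounding of the input periods.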
A photometric signal at 975\u00a0s, previously suggested to be the spin period, is not present in X-rays and is readily understood as a quasi-periodic oscillation. The X-ray light curve instead shows clear behavior of a highly asynchronous polar (AP) or stream-fed IP. It can be described by either of two scenarios: one which switches between one-pole and two-pole accretion, and another in which accretion alternates fully between two poles. The spin periods in these two models are 0.0455 days and 0.0505 days, respectively. The spin frequency $\omega$ is thus either 24% faster or 12% faster than the orbital frequency $\Omega$, and the corresponding beat period between spin and orbit is 0.231\u00a0days or 0.462\u00a0days. Brief absorption events seen in the light curve are spaced in a way that may favor the longer spin and beat periods. These periods are confirmed and refined using data from the Transiting Exoplanet Survey Satellite (TESS) and the Asteroid Terrestrial-impact Last Alert System (ATLAS). The short beat cycle of [Swift\u00a0J0503.7$-$2819]{}" +"---\nabstract: 'The dynamics of particles attached to an interface separating two immiscible fluids are encountered in a wide variety of applications. Here we present a combined asymptotic and numerical investigation of the fluid motion past spherical particles attached to a deformable interface undergoing uniform creeping flows in the limit of small Capillary number and small deviation of the contact angle from $90^\circ$. Under the assumption of a constant three-phase contact angle, we calculate the interfacial deformation around an isolated particle and a particle pair. 
Applying the Lorentz reciprocal theorem to the zeroth-order approximation corresponding to spherical particles at a flat interface and the first corrections in Capillary number and contact angle allows us to obtain explicit analytical expressions for the hydrodynamic drag in terms of the zeroth-order approximations and the correction deformations. The drag coefficients are computed as a function of the three-phase contact angle, the viscosity ratio of the two fluids, the Bond number, and the separation distance between the particles. In addition, the capillary force acting on the particles due to the interfacial deformation is calculated.'\nauthor:\n- Zhi Zhou\n- 'Petia M. Vlahovska'\n- 'Michael J. Miksis'\nbibliography:\n- 'references.bib'\nnocite: '[@*]'\ntitle: Drag force" +"---\nabstract: |\n Performance in natural language processing, specifically for the question-answering task, is typically measured by comparing a model\u2019s most confident (primary) prediction to golden answers (i.e., the ground truth). We are making the case that it is also helpful to quantify how close a model came to predicting a correct answer for examples that failed, a goal that the F1 score only partially satisfies. We derive our metrics from the probability distribution from which the model ranks its predictions. We define an example\u2019s Golden Rank (GR) as the rank of its most confident prediction that matches the ground truth. Thus the GR quantifies how close a correct answer comes to being a model\u2019s best prediction. We demonstrate how the GR can be used to classify questions and visualize their spectrum of difficulty, from relative successes to persistent extreme failures.\n\n We derive a new aggregate statistic, the Golden Rank Interpolated Median (GRIM), that quantifies the proximity of correct predictions to the model\u2019s choices over a whole dataset or any slice of examples. 
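The Golden Rank as defined above reduces to a few lines of code; the prediction strings below are invented for illustration:

```python
def golden_rank(ranked_predictions, gold_answers):
    """1-based rank of the most confident prediction matching a gold answer;
    None when no prediction matches (a persistent extreme failure)."""
    for rank, pred in enumerate(ranked_predictions, start=1):
        if pred in gold_answers:
            return rank
    return None

# Predictions sorted by model confidence, most confident first (made-up example).
preds = ["in 1912", "1912", "April 1912", "on the 15th"]
print(golden_rank(preds, {"April 1912"}))  # 3: the correct answer was only the 3rd choice
print(golden_rank(preds, {"1913"}))        # None: no prediction matches
```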
To develop some intuition and explore the applicability of these scores, we use the *Stanford Question Answering Dataset (SQuAD-2)* and a few popular transformer models from the" +"---\nabstract: 'This letter shows that the TX and RX models commonly used in literature for downlink (distributed) massive MIMO are inaccurate, leading also to inaccurate conclusions. In particular, the phase-noise effect should be modeled as $+\varphi$ in the transmitter chain and $-\varphi$ in the receiver chain, i.e., different signs. A common misconception in literature is to use the same sign for both chains. By correctly modeling the TX and RX chain, one realizes that the phases are included in the reciprocity calibration and whenever the phases drift apart, a new reciprocity calibration becomes necessary (the same applies to time drifts). Thus, free-running oscillators and the commonly made assumption of perfect reciprocity calibration (to enable blind DL channel estimation) are both not that useful, as they would require too much calibration overhead. Instead, the oscillators at the base stations should be locked and relative reciprocity calibration in combination with downlink demodulation reference symbols should be employed.'\nauthor:\n- 'Ronald Nissel [^1] [^2]'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Correctly Modeling TX and RX Chain in (Distributed) Massive MIMO - New Fundamental Insights on Coherency'\n---\n\nMassive MIMO, Phase Noise, Downlink, Time-Division Duplex (TDD), Hardware, Distributed MIMO\n\nIntroduction\n============\n\nMassive MIMO is one of the most important" +"---\nabstract: 'We demonstrate the application of the YOLOv5 model, a general purpose convolution-based single-shot object detection model, in the task of detecting binary neutron star (BNS) coalescence events from gravitational-wave data of current generation interferometer detectors. 
We also present a thorough explanation of the synthetic data generation and preparation tasks based on approximant waveform models used for the model training, validation and testing steps. Using this approach, we achieve mean average precision (mAP~\\[0.50\\]~) values of 0.945 for a single class validation dataset and as high as 0.978 for test datasets. Moreover, the trained model is successful in identifying the GW170817 event in the LIGO H1 detector data. The identification of this event is also possible for the LIGO L1 detector data with an additional pre-processing step, without the need of removing the large glitch in the final stages of the inspiral. The detection of the GW190425 event is less successful, which attests to performance degradation with the signal-to-noise ratio. Our study indicates that the YOLOv5 model is an interesting approach for first-stage detection alarm pipelines and, when integrated in more complex pipelines, for real-time inference of physical source parameters.'\nauthor:\n- Jo\u00e3o Aveiro\n- 'Felipe\u00a0F.\u00a0Freitas'\n- M\u00e1rcio" +"---\nabstract: 'We consider the game-theoretic approach to [time-inconsistent]{} stopping of a one-dimensional diffusion where the [time-inconsistency]{} is due to the presence of a non-exponential (weighted) discount function. In particular, we study (weak) equilibria for this problem in a novel class of mixed (i.e., randomized) stopping times based on a local time construction of the stopping intensity. For a general formulation of the problem we provide a verification theorem giving sufficient conditions for mixed (and pure) equilibria in terms of a set of variational inequalities, including a smooth fit condition. 
We apply the theory to prove the existence of (mixed) equilibria in a recently studied real options problem in which no pure equilibria exist.'\nauthor:\n- |\n Andi Bodnariu\\\n Department of Mathematics, Stockholm University\\\n \\\n S\u00f6ren Christensen\\\n Department of Mathematics, Kiel University\\\n \\\n Kristoffer Lindensj\u00f6\\\n Department of Mathematics, Stockholm University\nbibliography:\n- 'TimeInconStopGBMAndKriSor20220630.bib'\ntitle: 'Local time pushed mixed stopping and smooth fit for time-inconsistent stopping problems'\n---\n\nIntroduction {#intro}\n============\n\nIn this paper we study the game-theoretic approach to [time-inconsistent]{} stopping where the [time-inconsistency]{} is due to the presence of a non-exponential discount function. In particular, we consider the problem of how to choose a stopping time $\\tau$ that minimizes\u2014in" +"---\nabstract: 'Deep neural networks (DNNs) achieve great success in blind image quality assessment (BIQA) with large pre-trained models in recent years. Their solutions cannot be easily deployed at mobile or edge devices, and a lightweight solution is desired. In this work, we propose a novel BIQA model, called GreenBIQA, that aims at high performance, low computational complexity and a small model size. GreenBIQA adopts an unsupervised feature generation method and a supervised feature selection method to extract quality-aware features. Then, it trains an XGBoost regressor to predict quality scores of test images. We conduct experiments on four popular IQA datasets, which include two synthetic-distortion and two authentic-distortion datasets. 
Experimental results show that GreenBIQA is competitive in performance against state-of-the-art DNNs with lower complexity and smaller model sizes.'\nauthor:\n- |\n Zhanxuan Mei\\\n University of Southern California\\\n Los Angeles, USA\\\n `zhanxuan@usc.edu`\\\n Yun-Cheng Wang\\\n University of Southern California\\\n Los Angeles, USA\\\n `yunchenw@usc.edu`\\\n Xingze He\\\n Meta Platform, Inc.\\\n Menlo Park, California, USA\\\n `xingze.he@fb.com`\\\n C.-C. Jay Kuo\\\n University of Southern California\\\n Los Angeles, USA\\\n `cckuo@sipi.usc.edu`\\\nbibliography:\n- 'references.bib'\ntitle: 'GreenBIQA: A Lightweight Blind Image Quality Assessment Method'\n---\n\nIntroduction {#sec:introduction}\n============\n\nImage quality assessment (IQA) aims to evaluate image quality at various stages" +"---\nabstract: '*A leading explanation for widespread replication failures is publication bias. I show in a simple model of selective publication that, contrary to common perceptions, the replication rate is unaffected by the suppression of insignificant results in the publication process. I show further that the expected replication rate falls below intended power owing to issues with common power calculations. I empirically calibrate a model of selective publication and find that power issues alone can explain the entirety of the gap between the replication rate and intended power in experimental economics. In psychology, these issues explain two-thirds of the gap.*'\nauthor:\n- 'By Patrick Vu[^1]'\ntitle: '[Can the Replication Rate Tell Us About Publication Bias?]{}'\n---\n\n=cmr12 at 16pt\n\nA key stylized fact about the replication crisis in the social and life sciences is that the share of statistically significant results that are replicated is relatively low. In a replication of 18 experimental economics studies, @Camerer2016 found that 61% of significant results were replicated with the same sign and significance. 
In psychology, @OpenScience2015 found that only 36% of significant results were successfully replicated. In both cases, the target for the replication rate, which we refer to as intended" +"---\nabstract: 'Many exoplanets were detected thanks to the radial velocity method, according to which the motion of a binary system around its center of mass can produce a periodic variation of the Doppler effect of the light emitted by the host star. These variations are influenced by both Newtonian and non-Newtonian perturbations to the dominant inverse-square acceleration; accordingly, exoplanetary systems lend themselves to testing theories of gravity alternative to General Relativity. In this paper, we consider the impact of Standard Model Extension (a model that can be used to test all possible Lorentz violations) on the perturbation of radial velocity, and suggest that suitable exoplanet configurations and improvements in detection techniques may contribute to obtaining new constraints on the model parameters.'\nauthor:\n- Antonio Gallerati\n- Matteo Luca Ruggiero\n- Lorenzo Iorio\nbibliography:\n- 'SMV.bib'\ntitle: Impact of Lorentz violation models on exoplanets dynamics \n---\n\nIntroduction\n============\n\nAfter the first detection of a planet orbiting a main sequence star [@Mayor:1995eh], thousands of exoplanets were detected using different techniques, such as radial velocity, transit photometry and timing, pulsar timing, microlensing and astrometry: indeed, each of these techniques is sensitive to specific properties of the planetary systems, unavoidably leading to selection" +"---\nabstract: 'The emergence of new organizational forms\u2013such as virtual teams\u2013has brought forward some challenges for teams. One of the most relevant challenges is coordinating the decisions of team members who work from different time zones. Intuition suggests that task performance should improve if the team members\u2019 decisions are coordinated. 
However, previous research suggests that the effect of coordination on task performance is ambiguous. Specifically, the effect of coordination on task performance depends on aspects such as the team members\u2019 learning and the changes in team composition over time. This paper aims to understand how individual learning and team composition moderate the relationship between coordination and task performance. We implement an agent-based modeling approach based on the *NK*-framework to fulfill our research objective. Our results suggest that both factors have moderating effects. Specifically, we find that excessively increasing individual learning is harmful for the task performance of fully autonomous teams, but less detrimental for teams that coordinate their decisions. In addition, we find that teams that coordinate their decisions benefit from changing their composition in the short-term, but fully autonomous teams do not. In conclusion, teams that coordinate their decisions benefit more from individual learning and dynamic composition than teams" +"---\nabstract: 'This work presents a novel approach to water Cherenkov neutrino detector event reconstruction and classification. Three forms of a Convolutional Neural Network have been trained to reject cosmic muon events, classify beam events, and estimate neutrino energies, using only a slightly modified version of the raw detector event as input. 
When evaluated on a realistic selection of simulated CHIPS-5kton prototype detector events, this new approach significantly increases performance over the standard likelihood-based reconstruction and simple neural network classification.'\nauthor:\n- Josh Tingey\n- Simeon Bash\n- John Cesar\n- Thomas Dodwell\n- Stefano Germani\n- Paul Kooijman\n- Petr M\u00e1nek\n- Mustafa Ozkaynak\n- Andy Perch\n- Jennifer Thomas\n- Leigh Whitehead\nbibliography:\n- 'refs.bib'\ntitle: 'Neutrino Characterisation using Convolutional Neural Networks in CHIPS water Cherenkov detectors.'\n---\n\nIntroduction {#sec:introduction}\n============\n\nIn pursuit of answers to some of the open questions in physics, the study of the phenomenon of neutrino oscillations has become paramount, owing to the implication of the neutrinos\u2019 small masses in relation to the mass of the Universe and, potentially, the question of the matter-antimatter asymmetry. The next generation of giant neutrino detectors aims to push the size and therefore detector mass to the" +"---\nabstract: 'Plasma simulation is an important and sometimes the only approach to investigating plasma behavior. In this work, we propose two general AI-driven frameworks for low-temperature plasma simulation: Coefficient-Subnet Physics-Informed Neural Network (CS-PINN) and Runge-Kutta Physics-Informed Neural Network (RK-PINN). The CS-PINN uses either a neural network or an interpolation function (e.g. spline function) as the subnet to approximate solution-dependent coefficients (e.g. electron-impact cross sections, thermodynamic properties, transport coefficients, etc.) in plasma equations. On the basis of this, the RK-PINN incorporates the implicit Runge-Kutta formalism in neural networks to achieve a large-time-step prediction of transient plasmas. Both CS-PINN and RK-PINN learn the complex non-linear relationship mapping from spatio-temporal space to the equation\u2019s solution. 
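Why an implicit Runge-Kutta formalism permits large time steps can already be seen on a scalar stiff ODE; this toy comparison of forward Euler with backward Euler (the simplest one-stage implicit RK scheme) is an independent illustration, not part of the RK-PINN implementation:

```python
# Stiff linear test problem y' = lam * y with lam << 0; the exact solution decays to 0.
lam, dt, steps = -1000.0, 0.1, 50
y_fe = y_be = 1.0
for _ in range(steps):
    y_fe = y_fe * (1.0 + dt * lam)   # forward Euler: |1 + dt*lam| = 99, iterates blow up
    y_be = y_be / (1.0 - dt * lam)   # backward Euler: |1/(1 - dt*lam)| = 1/101, iterates decay
print(f"forward Euler |y| = {abs(y_fe):.2e}, backward Euler |y| = {abs(y_be):.2e}")
```

The explicit update diverges violently at this step size, while the implicit update remains stable and decays like the true solution, which is the essential advantage being exploited for large-time-step prediction.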
Based on these two frameworks, we demonstrate preliminary applications through four cases covering plasma kinetic and fluid modeling. The results verify that both CS-PINN and RK-PINN have good performance in solving plasma equations. Moreover, the RK-PINN has the ability to yield a good solution for transient plasma simulation not only with a large time step but also with limited, noisy sensing data.'\nauthor:\n- |\n Linlin Zhong[^1], \u00a0Bingyu Wu, and Yifan Wang\\\n School of Electrical Engineering\\\n Southeast University\\\n No 2 Sipailou, Nanjing, Jiangsu Province 210096, P. R. China\\\n `mathboylinlin@gmail.com`, `linlin@seu.edu.cn`\\\ndate:" +"---\nabstract: 'Astronomical data have shown that the galaxy rotation curves are mostly flat far from the galactic cores, which reveals the insufficiency of our knowledge about how gravity works in these regimes. In this paper we introduce a resolution of this issue from the $f(R,T)$ modified gravity formalism perspective. By investigating two classes of models with separable (minimal coupling model) and inseparable (non-minimal coupling model) parts of the Ricci scalar $R$ and trace of the energy-momentum tensor $T$, we find that only in the latter models is it possible to attain flat galaxy rotation curves.'\naddress:\n- |\n University of Sistan and Baluchestan, Faculty of Sciences, Department of Physics, Zahedan, Iran\\\n $^a$School of Astronomy, Institute for Research in Fundamental Sciences (IPM) P. O. Box 19395-5531, Tehran, Iran\n- 'Universidade Federal do ABC (UFABC) - Centro de Ci\u00eancias Naturais e Humanas (CCNH) - Avenida dos Estados 5001, 09210-580, Santo Andr\u00e9, SP, Brazil'\nauthor:\n- 'H. Shabani'\n- 'P.H.R.S. 
Moraes'\ntitle: 'The galaxy rotation curves in the $f(R,T)$ modified gravity formalism'\n---\n\ndark matter, rotation curves, extended gravity\n\nIntroduction {#sec:intro}\n============\n\nThe dark matter paradigm is among the main challenges of experimental physics nowadays. Although" +"---\nabstract: 'Nuclear systems under constraints, with high degrees of symmetries and/or collectivities, may be considered as moving effectively in spaces with reduced spatial dimensions. We first derive analytical expressions for the nucleon specific energy $E_0(\rho)$, pressure $P_0(\rho)$, incompressibility coefficient $K_0(\rho)$ and skewness coefficient $J_0(\rho)$ of symmetric nucleonic matter (SNM), the quadratic symmetry energy $E_{{\mathrm{sym}}}(\rho)$, its slope parameter $L(\rho)$ and curvature coefficient $K_{{\mathrm{sym}}}(\rho)$ as well as the fourth-order symmetry energy $E_{{\mathrm{sym,4}}}(\rho)$ of neutron-rich matter in general $d$ spatial dimensions (abbreviated as \u201c$d$D\u201d) in terms of the isoscalar and isovector parts of the isospin-dependent single-nucleon potential according to the generalized Hugenholtz-Van Hove (HVH) theorem. The equation of state (EOS) of nuclear matter in $d$D can be linked to that in the conventional 3-dimensional (3D) space by the $\epsilon$-expansion, which is a perturbative approach successfully used previously in treating second-order phase transitions and related critical phenomena in solid state physics and more recently in studying the EOS of cold atoms. The $\epsilon$-expansion of nuclear EOS in $d$D based on a reference dimension $d_{{\mathrm{f}}}=d-\epsilon$ is shown to be effective with $-1\lesssim\epsilon\lesssim1$ starting from $1\lesssim d_{{\mathrm{f}}}\lesssim3$ in comparison with the exact expressions derived using the HVH theorem. Moreover, the EOS of SNM (with/without considering" +"---\nabstract: 'Oceanic internal waves often have curvilinear fronts and propagate over various currents. 
We present the first study of long weakly-nonlinear internal ring waves in a three-layer fluid in the presence of a background linear shear current. The leading order of this theory leads to the angular adjustment equation - a nonlinear first-order differential equation describing the dependence of the linear long-wave speed on the angle to the direction of the current. Ring waves correspond to the singular solution (the envelope of the general solution) of this equation, and they can exist only under certain conditions. The constructed solutions reveal qualitative differences in the shapes of the wavefronts of the two baroclinic modes: the wavefront of the faster mode is elongated in the direction of the current, while the wavefront of the slower mode is squeezed. Moreover, different regimes are identified according to the vorticity strength. When the vorticity is weak, part of the wavefront is able to propagate upstream. However, when the vorticity is strong enough, the whole wavefront propagates downstream. A richer behaviour can be observed for the slower mode. As the vorticity increases, singularities of the swallowtail-type may arise and, eventually, solutions with compact wavefronts crossing the downstream" +"---\nabstract: 'Reliable quantitative analysis of immunohistochemical staining images requires accurate and robust cell detection and classification. Recent weakly-supervised methods usually estimate probability density maps for cell recognition. However, in dense cell scenarios, their performance can be limited by pre- and post-processing as it is impossible to find a universal parameter setting. In this paper, we introduce an end-to-end framework that applies direct regression and classification for preset anchor points. Specifically, we propose a pyramidal feature aggregation strategy to combine low-level features and high-level semantics simultaneously, which provides accurate cell recognition for our purely point-based model. 
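Matching predicted points to ground-truth annotations, as used in point-based recognition, can be illustrated by a minimum-cost one-to-one assignment; the coordinates below are invented, and plain Euclidean distance is a stand-in for the paper's actual cost design:

```python
import itertools
import math

def match_points(preds, gts):
    """Minimum total Euclidean-distance one-to-one assignment of predicted
    points to ground-truth points (brute force; fine for tiny examples)."""
    best_cost, best_perm = math.inf, None
    for perm in itertools.permutations(range(len(gts))):
        cost = sum(math.dist(preds[i], gts[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

preds = [(0.0, 0.0), (5.0, 5.0), (9.0, 1.0)]  # predicted cell centres (invented)
gts = [(5.2, 4.9), (0.3, 0.1), (8.8, 1.1)]    # annotated cell centres (invented)
perm, cost = match_points(preds, gts)
print(perm, round(cost, 3))  # (1, 0, 2): each prediction pairs with its nearest cell
```

For realistic numbers of cells one would use the Hungarian algorithm rather than brute force; the principle of training against the optimal matching is the same.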
In addition, an optimized cost function is designed to adapt our multi-task learning framework by matching ground truth and predicted points. The experimental results demonstrate the superior accuracy and efficiency of the proposed method, which reveals its high potential for assisting pathologist assessments.'\nauthor:\n- Zhongyi Shui\n- Shichuan Zhang\n- Chenglu Zhu\n- Bingchuan Wang\n- Pingyi Chen\n- Sunyi Zheng\n- Lin Yang\nbibliography:\n- 'mybib.bib'\ntitle: 'End-to-end cell recognition by point annotation[^1]'\n---\n\nIntroduction\n============\n\nQuantitative immunohistochemistry image analysis is of great importance for treatment selection and prognosis in clinical practice. Assessing PD-L1 expression on tumor cells by the Tumor Proportion" +"---\nabstract: 'In this manuscript we present a theoretical framework and its numerical implementation to simulate the out-of-equilibrium electron dynamics induced by the interaction of ultrashort laser pulses in condensed-matter systems. Our approach is based on evolving in real-time the density matrix of the system in reciprocal space. It considers excitonic and non-perturbative light-matter interactions. We show some relevant examples that illustrate the efficiency and flexibility of the approach to describe realistic ultrafast spectroscopy experiments. Our approach is suitable for modeling the promising and emerging ultrafast studies at the attosecond time scale that aim at capturing the electron dynamics and the dynamical electron-electron correlations via X-ray absorption spectroscopy.'\nauthor:\n- Giovanni Cistaro\n- Mikhail Malakhov\n- 'Juan Jos\u00e9 Esteve-Paredes'\n- 'Alejandro Jos\u00e9 Ur\u00eda-\u00c1lvarez'\n- 'Rui E. F. 
Silva'\n- Fernando Mart\u00edn\n- Juan Jos\u00e9 Palacios\n- Antonio Pic\u00f3n\nnocite: '[@*]'\ntitle: 'A theoretical approach for electron dynamics and ultrafast spectroscopy (EDUS)'\n---\n\n\\[sec:intro\\]Introduction\n=========================\n\nOptical manipulation is the fastest technique to control and switch properties in a material. The advent of ultrashort laser pulses has made it possible to drive the system out of equilibrium and reach novel quantum phases with properties beyond the ones at equilibrium [@Basov2017]. Modifications of the topological phase [@Oka2009; @Inoue2010;" +"---\nabstract: |\n We give a nearly linear-time algorithm to approximately sample satisfying assignments in the random $k$-SAT model when the density of the formula scales exponentially with $k$. The best previously known sampling algorithm for the random $k$-SAT model applies when the density $\alpha=m/n$ of the formula is less than $2^{k/300}$ and runs in time $n^{\exp(\Theta(k))}$ (Galanis, Goldberg, Guo and Yang, SIAM J. Comput., 2021). Here $n$ is the number of variables and $m$ is the number of clauses. Our algorithm achieves a significantly faster running time of $n^{1 + o_k(1)}$ and samples satisfying assignments up to density $\alpha\leq 2^{0.039 k}$.\n\n The main challenge in our setting is the presence of many variables with unbounded degree, which causes significant correlations within the formula and impedes the application of relevant Markov chain methods from the bounded-degree setting (Feng, Guo, Yin and Zhang, J. ACM, 2021; Jain, Pham and Vuong, 2021). Our main technical contribution is a $o_k(\log n)$ bound on the sum of influences in the $k$-SAT model, which turns out to be robust against the presence of high-degree variables. 
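For intuition about what sampling satisfying assignments with a local Markov chain means, here is a toy single-variable Glauber dynamics on a two-variable formula; its stationary distribution is uniform over satisfying assignments, and it involves none of the paper's machinery for exponential densities or high-degree variables:

```python
import random

# Tiny CNF over 2 variables: the single clause (x1 OR x2).
clauses = [[(0, True), (1, True)]]  # (variable index, required polarity)

def satisfies(assign):
    return all(any(assign[v] == pol for v, pol in cl) for cl in clauses)

def glauber(steps, seed=0):
    """Single-variable Glauber dynamics restricted to satisfying assignments."""
    rng = random.Random(seed)
    assign = [True, True]            # start from a satisfying assignment
    visited = set()
    for _ in range(steps):
        v = rng.randrange(len(assign))
        # Resample x_v uniformly from the values that keep the formula satisfied.
        ok = [b for b in (False, True) if satisfies(assign[:v] + [b] + assign[v + 1:])]
        assign[v] = rng.choice(ok)
        visited.add(tuple(assign))
    return visited

print(sorted(glauber(2000)))  # expect all three satisfying assignments to appear
```

Every state the chain visits satisfies the formula by construction, and on this connected toy instance the chain explores the whole satisfying set quickly; the hard part, addressed by the paper, is proving fast mixing at high clause density.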
This allows us to apply the spectral independence framework and obtain fast mixing results of a uniform-block Glauber dynamics" +"---\nabstract: 'We introduce new models for Schr\u00f6dinger-type equations, which generalize standard NLS and for which different dispersion occurs depending on the directions. Our purpose is to understand dispersive properties depending on the directions of propagation, in the spirit of waveguide manifolds, but where the diffusion is of different types. We mainly consider the standard Euclidean space and the waveguide case but our arguments extend easily to other types of manifolds (like product spaces). Our approach unifies in a natural way several previous results. Those models are also generalizations of some appearing in seminal works in mathematical physics, such as relativistic strings. In particular, we prove the large data scattering on waveguide manifolds $\\mathbb{R}^d \\times \\mathbb{T}$, $d \\geq 3$. This result can be regarded as the analogue of [@TV2; @YYZ2] in our setting and the waveguide analogue investigated in [@GSWZ]. A key ingredient of the proof is a Morawetz-type estimate for the setting of this model.'\naddress:\n- 'Yannick Sire Department of Mathematics, Johns Hopkins University Krieger Hall, 3400 N. Charles St., Baltimore, MD, 21218.'\n- 'Xueying Yu Department of Mathematics, University of Washington C138 Padelford Hall Box 354350, Seattle, WA 98195,'\n- 'Haitian Yue Institute of Mathematical Sciences, ShanghaiTech" +"---\nabstract: 'We report a comprehensive study of the Zr$_5$Pt$_3$C$_x$ superconductors, with interstitial carbon comprised between 0 and 0.3. At a macroscopic level, their superconductivity, with $T_c$ ranging from 4.5 to 6.3K, was investigated via electrical-resistivity-, magnetic-susceptibility-, and specific-heat measurements. The upper critical fields $\\mu_0H_\\mathrm{c2}$ $\\sim$ 7T were determined mostly from measurements of the electrical resistivity in applied magnetic fields. 
The microscopic electronic properties were investigated by means of muon-spin rotation and relaxation ($\\mu$SR) and nuclear magnetic resonance (NMR) techniques. In the normal state, NMR relaxation data indicate an almost ideal metallic behavior, confirmed by band-structure calculations, which suggest a relatively high electronic density of states at the Fermi level, dominated by the Zr 4$d$ orbitals. The low-temperature superfluid density, obtained via transverse-field $\\mu$SR, suggests a fully-gapped superconducting state in Zr$_5$Pt$_3$ and Zr$_5$Pt$_3$C$_{0.3}$, with a zero-temperature gap $\\Delta_0$ = 1.20 and 0.60meV and a magnetic penetration depth $\\lambda_0$ = 333 and 493nm, respectively. The exponential dependence of the NMR relaxation rates below $T_c$ further supports a nodeless superconductivity. The absence of spontaneous magnetic fields below the onset of superconductivity, as determined from zero-field $\\mu$SR measurements, confirms a preserved time-reversal symmetry in the superconducting state of Zr$_5$Pt$_3$C$_x$. In contrast to a" +"---\nabstract: |\n Tree-cut width is a parameter that has been introduced as an attempt to obtain an analogue of treewidth for edge cuts. Unfortunately, in spite of its desirable structural properties, it turned out that tree-cut width falls short as an edge-cut based alternative to treewidth in algorithmic aspects. This has led to the very recent introduction of a simple edge-based parameter called edge-cut width \\[WG 2022\\], which has precisely the algorithmic applications one would expect from an analogue of treewidth for edge cuts, but does not have the desired structural properties.\n\n In this paper, we study a variant of tree-cut width obtained by changing the threshold for so-called thin nodes in tree-cut decompositions from $2$ to $1$. 
We show that this \u201cslim tree-cut width\u201d satisfies all the requirements of an edge-cut based analogue of treewidth, both structural and algorithmic, while being less restrictive than edge-cut width. Our results also include an alternative characterization of slim tree-cut width via an easy-to-use spanning-tree decomposition akin to the one used for edge-cut width, a characterization of slim tree-cut width in terms of forbidden immersions, as well as an approximation algorithm for computing the parameter.\nauthor:\n- Robert Ganian\n- Viktoriia Korchemna\nbibliography:" +"---\nabstract: 'As the machine learning and systems communities strive to achieve higher energy-efficiency through custom deep neural network (DNN) accelerators, varied precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while having accurate and fast power, performance, and area models. In this work, we present *QUIDAM*, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework can facilitate future research on design space exploration of DNN accelerators for various design choices such as bit precision, processing element type, scratchpad sizes of processing elements, global buffer size, number of total processing elements, and DNN configurations. Our results show that different bit precisions and processing element types lead to significant differences in terms of performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy varies more than $5 \\times$ and $35 \\times$, respectively. 
With the proposed framework, we show that lightweight processing elements achieve on par accuracy results and up to $5.7 \\times$ more performance per area and energy improvement when compared to the best INT16 based implementation. Finally, due" +"---\nabstract: 'We present an open-source differentiable acoustic simulator, j-Wave, which can solve time-varying and time-harmonic acoustic problems. It supports automatic differentiation, which is a program transformation technique that has many applications, especially in machine learning and scientific computing. j-Wave is composed of modular components that can be easily customized and reused. At the same time, it is compatible with some of the most popular machine learning libraries, such as JAX and TensorFlow. The accuracy of the simulation results for known configurations is evaluated against the widely used k-Wave toolbox and a cohort of acoustic simulation software. j-Wave is available from .'\nauthor:\n- |\n Antonio Stanziola$^*$\\\n Dept. of Medical Physics and Biomedical Engineering\\\n University College London\\\n London WC1E 6BT, UK\\\n \\\n Simon R. Arridge\\\n Department of Computer Science\\\n University College of London\\\n London WC1E 6BT, UK\\\n \\\n Ben T. Cox\\\n Dept. of Medical Physics and Biomedical Engineering\\\n University College London\\\n London WC1E 6BT, UK\\\n \\\n Bradley E. Treeby\\\n Dept. of Medical Physics and Biomedical Engineering\\\n University College London\\\n London WC1E 6BT, UK\\\n \\\ntitle: 'j-Wave: An open-source differentiable wave simulator'\n---\n\nMotivation and significance {#sec:motivation}\n===========================\n\nBackground\n----------\n\nThe accurate simulation of wave phenomena has many interesting applications, from" +"---\nabstract: 'Neural radiance fields, or NeRF, represent a breakthrough in the field of novel view synthesis and 3D modeling of complex scenes from multi-view image collections. 
Numerous recent works have shown the importance of making NeRF models more robust, by means of regularization, in order to train with possibly inconsistent and/or very sparse data. In this work, we explore how differential geometry can provide elegant regularization tools for robustly training NeRF-like models, which are modified so as to represent continuous and infinitely differentiable functions. In particular, we present a generic framework for regularizing different types of NeRF observations to improve the performance in challenging conditions. We also show how the same formalism can be used to natively encourage the regularity of surfaces by means of Gaussian or mean curvatures.'\nauthor:\n- |\n Thibaud\u00a0Ehret Roger\u00a0Mar\u00ed Gabriele\u00a0Facciolo\\\n Universit\u00e9 Paris-Saclay, CNRS, ENS Paris-Saclay, Centre Borelli\\\n 91190, Gif-sur-Yvette, France\\\n [thibaud.ehret@ens-paris-saclay.fr]{}\nbibliography:\n- 'refs.bib'\ntitle: Regularization of NeRFs using differential geometry\n---\n\nIntroduction\n============\n\nRealistic rendering of new views of a 3D scene or a given volume is a long-standing problem in computer graphics. The interest in this problem has been rekindled by the growth of augmented and virtual" +"---\nabstract: |\n Byzantine fault tolerance (BFT) can preserve the availability and integrity of IoT systems where single components may suffer from random data corruption or attacks that can expose them to malicious behavior. While state-of-the-art BFT state-machine replication (SMR) libraries are often tailored to fit a standard request-response interaction model with dedicated client-server roles, in our design, we employ an IoT-fit interaction model that assumes a loosely-coupled, event-driven interaction between arbitrarily wired IoT components.\n\n In this paper, we explore the possibility of *automating and streamlining the complete process of integrating BFT SMR into a component-based IoT execution environment*. 
Our main goal is to provide simplicity for the developer: We strive to decouple the specification of a logical application architecture from the difficulty of incorporating BFT replication mechanisms into it. Thus, our contributions address the automated configuration, re-wiring and deployment of IoT components, and their replicas, within a component-based, event-driven IoT platform.\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'bibliography.bib'\ntitle: |\n Automatic Integration of BFT State-Machine Replication into IoT Systems\n\n [^1]\n---\n\n[0.84]{}(0.08,0.94) This is the authors\u2019 preprint version of the work.\\\nThe definitive version is published in the proceedings of the 18th European Dependable Computing Conference," +"---\nauthor:\n- 'Aditya Bawane, Pedram Karimi and Piotr Su[\u0142]{}kowski'\nbibliography:\n- 'superint\\_bib.bib'\ntitle: 'Proving superintegrability in $\\beta$-deformed eigenvalue models'\n---\n\nIntroduction\n============\n\nIn this note we prove the superintegrability property of the $\\beta$-deformed eigenvalue models. Superintegrability is the statement that certain quantities in an integrable system can be computed explicitly, as a consequence of some extra properties in addition to those that guarantee integrability. A prototype example of such a feature is the existence of closed orbits, described by elementary functions, in the motion in the potential $r^n$ for $n=-1$ and $n=2$, due to an extra conservation law. Recently, superintegrability has been analyzed in much detail in matrix models, where these extra conditions take the form of the string equation and Virasoro constraints. 
One manifestation of superintegrability in this context is the statement that expectation values of various symmetric functions can be explicitly expressed also in terms of analogous symmetric functions evaluated with appropriate arguments [@Itoyama:2017xid; @Mironov:2017och; @Morozov:2018eiq; @CLZ; @Mironov:2022fsr; @Mironov:2022yhd; @Mishnyakov:2022bkg; @Wang:2022lzj]. As symmetric functions in this context can be identified with appropriate characters, such relations are often presented schematically as $\\langle character \\rangle \\sim character$.\n\nLet us summarize our results, and also recent developments concerning superintegrability, in more" +"---\nabstract: 'Guiding many-body systems to desired states is a central challenge of modern quantum science, with applications from quantum computation\u00a0[@laflamme1996perfect; @devitt2013quantum] to many-body physics\u00a0[@carusotto2020photonic] and quantum-enhanced metrology\u00a0[@pezze2018quantum]. Approaches to solving this problem include step-by-step assembly\u00a0[@grusdt2014topological; @gomes2012designer; @dallaire2019low], reservoir engineering to irreversibly pump towards a target state\u00a0[@poyatos1996quantum; @kapit2014induced; @lebreuilly2017stabilizing], and adiabatic evolution from a known initial state\u00a0[@albash2018adiabatic; @zurek2005dynamics]. Here we construct low-entropy quantum fluids of light in a Bose Hubbard circuit by combining particle-by-particle assembly and adiabatic preparation. We inject individual photons into a disordered lattice where the eigenstates are known & localized, then adiabatically remove this disorder, allowing quantum fluctuations to melt the photons into a fluid. Using our platform\u00a0[@Ma2019AuthorPhotons], we first benchmark this lattice melting technique by building and characterizing arbitrary single-particle-in-a-box states, then assemble multi-particle strongly correlated fluids. 
Inter-site entanglement measurements performed through single-site tomography indicate that the particles in the fluid delocalize, while two-body density correlation measurements demonstrate that they also avoid one another, revealing Friedel oscillations characteristic of a Tonks-Girardeau gas\u00a0[@tonks1936complete; @girardeau1960relationship]. This work opens new possibilities for preparation of topological and otherwise exotic phases of synthetic matter\u00a0[@goldman2016topological; @carusotto2020photonic; @ozawa2019topological].'\nauthor:\n- 'Brendan Saxberg$^*$'\n- 'Andrei" +"---\nabstract: 'Superconducting interferometers are quantum devices able to transduce a magnetic flux into an electrical output with excellent sensitivity, integrability and power consumption. Yet, their voltage response is intrinsically non-linear, a limitation which is conventionally circumvented through the introduction of compensation inductances or by the construction of complex device arrays. Here we propose an intrinsically-linear flux-to-voltage mesoscopic transducer, called bi-SQUIPT, based on the superconducting quantum interference proximity transistor as fundamental building block. The bi-SQUIPT provides a voltage-noise spectral density as low as $\\sim10^{-16}$ V/Hz$^{1/2}$ and, more interestingly, under a proper operation parameter selection, exhibits a spur-free dynamic range as large as $\\sim60$ dB, a value on par with that obtained with state-of-the-art SQUID-based linear flux-to-voltage superconducting transducers. Furthermore, thanks to its peculiar measurement configuration, the bi-SQUIPT is tolerant to imperfections and non-idealities in general. 
For the above reasons, we believe that the bi-SQUIPT could provide a significant step forward in the field of low-dissipation and low-noise current amplification with a special emphasis on applications in cryogenic quantum electronics.'\nauthor:\n- Giorgio De Simoni\n- Francesco Giazotto\ntitle: 'Ultra linear magnetic flux-to-voltage conversion in superconducting quantum interference proximity transistors'\n---\n\nIntroduction {#sec:Introduction}\n============\n\nSuperconducting interferometers are quantum devices able to transduce" +"---\nabstract: |\n In modern databases, transaction processing technology provides ACID (Atomicity, Consistency, Isolation, Durability) features. Consistency refers to the correctness of databases and is a crucial property for many applications, such as financial and banking services. However, there exist typical challenges for consistency. Theoretically, the two current definitions of consistency express quite different meanings, which are causal and sometimes controversial. Practically, it is notoriously difficult to check the consistency of databases, especially in terms of the verification cost.\n\n This paper proposes Coo, a framework to check the consistency of databases. Specifically, Coo has the following advancements. First, Coo proposes the partial order pair (POP) graph, which has better expressiveness for transaction conflicts in a schedule by considering stateful information like Commit and Abort. Coo defines consistency completely as the absence of cycles in the POP graph. Secondly, Coo can construct inconsistent test cases based on POP cycles. These test cases can be used to check the consistency of databases in accurate (all types of anomalies), user-friendly (SQL-based test), and cost-effective (one-time checking in a few minutes) ways.\n\n We evaluate Coo with eleven databases, both centralized and distributed, under all supported isolation levels. 
The evaluation shows that databases did not completely follow the ANSI" +"---\nabstract: 'Electrostatic actuators provide a promising approach to creating soft robotic sheets, due to their flexible form factor, modular integration, and fast response speed. However, their control requires kilo-Volt signals and understanding of complex dynamics resulting from force interactions by on-board and environmental effects. In this work, we demonstrate an untethered planar five-actuator piezoelectric robot powered by batteries and on-board high-voltage circuitry, and controlled through a wireless link. The scalable fabrication approach is based on bonding different functional layers on top of each other (steel foil substrate, actuators, flexible electronics). The robot exhibits a range of controllable motions, including bidirectional crawling (up to \\~0.6 cm/s), turning, and in-place rotation (at \\~1 degree/s). High-speed videos and control experiments show that the richness of the motion results from the interaction of an asymmetric mass distribution in the robot and the associated dependence of the dynamics on the driving frequency of the piezoelectrics. The robot\u2019s speed can reach 6 cm/s with specific payload distribution.'\nauthor:\n- |\n Zhiwu Zheng, Hsin Cheng, Prakhar Kumar,\\\n Sigurd Wagner, Minjie Chen, Naveen Verma and James C. Sturm[^1][^2]\nbibliography:\n- 'IEEEabrv.bib'\n- 'repeat\\_name.bib'\n- 'mybib.bib'\ntitle: '**Wirelessly-Controlled Untethered Piezoelectric Planar Soft Robot Capable of Bidirectional Crawling and" +"---\nabstract: 'Ground based observations appear to indicate that Ultra High Energy Cosmic Rays (UHECR) of the highest energies ($>10^{18.7} \\rm eV$) consist of heavy particles \u2013 shower depth and muon production data both pointing towards this conclusion. 
On the other hand, cosmic-ray arrival directions at energies $>10^{18.9}{\rm \\, eV}$ exhibit a dipole anisotropy, which disfavors heavy composition, since higher-Z nuclei are strongly deflected by the Galactic magnetic field, suppressing anisotropy. This is the composition problem of UHECR. One solution could be the existence of yet-unknown effects in proton interactions at center-of-mass (CM) energies $\\gtrsim 50$ TeV, which would alter the interaction cross section and the multiplicity of interaction products, mimicking heavy primaries. We study the impact of such changes on cosmic-ray observables using simulations of Extensive Air-Showers (EAS), in order to place constraints on the phenomenology of any new effects in high-energy proton interactions that could be probed by $\\sqrt{s}>50$ TeV collisions. We simulate showers of primaries with energies in the range $10^{17} - 10^{20} {\\rm eV}$ using the CORSIKA code, modified to implement a possible increase in cross-section and multiplicity in hadronic collisions exceeding a CM energy threshold of $50$ TeV. We study the composition-sensitive shower observables" +"---\nabstract: 'A model is presented that simultaneously describes shape coexistence and quadrupole and octupole collective excitations within a theoretical framework based on the nuclear density functional theory and the interacting boson model. An optimal interacting-boson Hamiltonian that incorporates the configuration mixing between normal and intruder states, as well as the octupole degrees of freedom, is identified by means of self-consistent mean-field calculations using a universal energy density functional and a pairing interaction, with constraints on the triaxial quadrupole and the axially-symmetric quadrupole and octupole shape degrees of freedom. 
An illustrative application to the transitional nuclei $^{72}$Ge, $^{74}$Se, $^{74}$Kr, and $^{76}$Kr shows that the inclusion of the intruder states and the configuration mixing significantly lowers the energy levels of the excited $0^+$ states, and that the predicted low-lying positive-parity states are characterized by the strong admixture of nearly spherical, weakly deformed oblate, and strongly deformed prolate shapes. The low-lying negative-parity states are shown to be dominated by the deformed intruder configurations.'\nauthor:\n- Kosuke Nomura\nbibliography:\n- 'refs.bib'\ntitle: Effect of configuration mixing on quadrupole and octupole collective states of transitional nuclei\n---\n\nIntroduction\n============\n\nThe phenomenon of shape coexistence in atomic nuclei has attracted considerable attention for many decades" +"---\nabstract: 'Localization is one of the most active and fundamental research topics in topological physics. Based on a generalized Su-Schrieffer-Heeger model with quasiperiodic non-Hermiticity emerging at the off-diagonal location, we propose a novel systematic method to analyze the localization behaviors for the bulk and the edge, respectively. The bulk is found to undergo an extended-coexisting-localized-coexisting-localized transition induced by the quasidisorder and non-Hermiticity. The edge state, in contrast, can be broken and recovered as the quasidisorder strength increases, and its localization transition is exactly synchronous with the topological phase transition. In addition, the inverse participation ratio of the edge state oscillates with an increase of the disorder strength. Finally, numerical results elucidate that the derivative of the normalized participation ratio exhibits an enormous discontinuity at the localized transition point. 
Here, our results not only demonstrate the diversity of localization properties of the bulk and edge states, but may also provide an extension of the ordinary method for investigating localization.'\nauthor:\n- 'Gang-Feng Guo'\n- 'Xi-Xi Bao'\n- Lei Tan\nbibliography:\n- 'paper.bib'\ntitle: 'Reentrant Localized Bulk and Localized-Extended Edge in Quasiperiodic Non-Hermitian Systems'\n---\n\nINTRODUCTION\n============\n\nTopological insulators have become a burgeoning research" +"---\nabstract: 'We characterize some variations of pseudoskeleton (also called CUR) decompositions for matrices and tensors over arbitrary fields. These characterizations extend previous results to arbitrary fields and to decompositions which use generalized inverses of the constituent matrices, in contrast to Moore\u2013Penrose pseudoinverses in prior works which are specific to real or complex valued matrices, and are significantly more structured.'\naddress: 'Department of Mathematics, University of Texas at Arlington, Arlington, TX 76019 USA'\nauthor:\n- Keaton Hamm\nbibliography:\n- 'references.bib'\ntitle: Generalized Pseudoskeleton Decompositions\n---\n\nIntroduction\n============\n\nPseudoskeleton decompositions were introduced in their modern form by Gore\u012dnov et al.\u00a0[@Goreinov; @Goreinov2; @Goreinov3], though their origins go back at least to Penrose [@Penrose56]. A more complete history and expository treatment can be found in [@hamm2020perspectives] (see also [@strang2022lu]). At their core, pseudoskeleton decompositions of matrices are decompositions of the form $A = CXR$ where $C$ and $R$ are column and row submatrices of $A$, respectively, and $X$ is some mixing matrix. 
While an interesting piece of linear algebra in their own right, such decompositions have also proven useful in various applications from sketching massive data matrices [@drineas2006fast; @mahoney2009cur; @voronin2017efficient], accelerating nonconvex Robust PCA algorithms [@cai2020rapid; @cai2021robust], estimating massive kernel matrices [@GittensMahoney;" +"---\nabstract: 'We give a generalized Thurston\u2013Bennequin-type inequality for links in $S^3$ using a Bauer\u2013Furuta-type invariant for 4-manifolds with contact boundary. As a special case, we also give an adjunction inequality for smoothly embedded orientable surfaces with negative intersection in a closed oriented smooth 4-manifold whose non-equivariant Bauer\u2013Furuta invariant is non-zero.'\naddress:\n- 'Department of Mathematics Tokyo Institute of Technology 2-12-1, Ookayama, Meguro, Tokyo 152-8551 Japan'\n- 'Graduate School of Mathematical Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro, Tokyo 153-8914, Japan'\n- '2-1 Hirosawa, Wako, Saitama 351-0198, Japan'\nauthor:\n- Nobuo Iida\n- Hokuto Konno\n- Masaki Taniguchi\nbibliography:\n- 'tex.bib'\ntitle: 'A note on generalized Thurston\u2013Bennequin inequalities'\n---\n\nMain results\n============\n\nAdjunction inequalities are lower bounds on genera of smoothly embedded surfaces in a 4-manifold modeled on the adjunction formulae for algebraic curves in an algebraic surface, and they have been important tools to study 4-dimensional topology. In this paper, we give a new adjunction-type inequality. The most general result is given as a generalized Thurston\u2013Bennequin-type inequality for links in $S^3$ (\\[thm: relative main\\]), and it also yields an adjunction inequality for closed surfaces with negative self-intersection in a closed 4-manifold, described in \\[Adjunction inequality for negative-self intersections\\]." 
+"---\nabstract: 'Efficient search is a critical component for an e-commerce platform with an innumerable number of products. Every day millions of users search for products pertaining to their needs. Thus, showing the relevant products on the top will enhance the user experience. In this work, we propose a novel approach of fusing a transformer-based model with various listwise loss functions for ranking e-commerce products, given a user query. We pre-train a RoBERTa model over a fashion e-commerce corpus and fine-tune it using different listwise loss functions. Our experiments indicate that the RoBERTa model fine-tuned with an NDCG based surrogate loss function (approxNDCG) achieves an NDCG improvement of **13.9%** compared to other popular listwise loss functions like ListNET and ListMLE. The approxNDCG based RoBERTa model also achieves an NDCG improvement of **20.6%** compared to the pairwise RankNet based RoBERTa model. We call our methodology of directly optimizing the RoBERTa model in an end-to-end manner with a listwise surrogate loss function **ListBERT**. Since there is a low latency requirement in a real-time search setting, we show how these models can be easily adopted by using a knowledge distillation technique to learn a representation-focused student model that can be easily deployed and" +"---\nabstract: 'Although existing machine reading comprehension models are making rapid progress on many datasets, they are far from robust. In this paper, we propose an understanding-oriented machine reading comprehension model to address three kinds of robustness issues, which are over sensitivity, over stability and generalization. Specifically, we first use a natural language inference module to help the model understand the accurate semantic meanings of input questions so as to address the issues of over sensitivity and over stability. 
Then in the machine reading comprehension module, we propose a memory-guided multi-head attention method that can further improve the understanding of the semantic meanings of input questions and passages. Third, we propose a multi-language learning mechanism to address the issue of generalization. Finally, these modules are integrated with a multi-task learning based method. We evaluate our model on three benchmark datasets that are designed to measure models\u2019 robustness, including DuReader (robust) and two SQuAD-related datasets. Extensive experiments show that our model can effectively address the mentioned three kinds of robustness issues. It achieves much better results than the compared state-of-the-art models on all these datasets under different evaluation metrics, even under some extreme and unfair evaluations. The source code of our work" +"---\nabstract: 'In this paper we consider the coupled task scheduling problem with exact delay times on a single machine with the objective of minimizing the total completion time of the jobs. We provide constant-factor approximation algorithms for several variants of this problem that are known to be ${\\mathcal{NP}}$-hard, while also proving ${\\mathcal{NP}}$-hardness for two variants whose complexity was unknown before. Using these results, together with constant-factor approximations for the makespan objective from the literature, we also introduce the first results on bi-objective approximation in the coupled task setting.'\nauthor:\n- 'David Fischer[^1] \u00a0and P\u00e9ter Gy\u00f6rgyi[^2]'\nbibliography:\n- 'mybibliography.bib'\ntitle: Approximation algorithms for coupled task scheduling minimizing the sum of completion times\n---\n\nIntroduction {#sec:intro}\n============\n\nThe problem of scheduling coupled tasks with exact delays (CTP) was introduced by Shapiro\u00a0[@shapiro80] more than forty years ago. In this particular scheduling problem, each job has two separate tasks and a delay time. 
The goal is to schedule these tasks such that no tasks overlap, and the two tasks of a job are scheduled with exactly their given delay time in between them, while optimizing some objective function. This problem has several practical applications, e.g., in pulsed radar systems, where one needs" +"---\nabstract: 'The dynamics of social relations and the possibility of reaching the state of structural balance (Heider balance) under the influence of the temperature modeling the social noise level are discussed for interacting actors occupying nodes of classical random graphs. Depending on the graph density $D$, either a smooth cross-over or a first-order phase transition from a balanced to an imbalanced state of the system is observed with an increase in the thermal noise level. The minimal graph density $D_\\text{min}$ for which the first-order phase transition can be observed decreases with the system size $N$ as $D_\\text{min}\\propto N^{-0.58(1)}$. For graph densities $D>D_\\text{min}$ the reduced critical temperature $T_c^\\star=T_c/T_c(D=1)$ increases with the graph density as $T_c^\\star\\propto D^{1.719(6)}$ independently of the system size $N$.'\nauthor:\n- Krzysztof Malarz\n- 'Maciej Wo[\u0142]{}oszyn'\nbibliography:\n- 'bib/km.bib'\n- 'bib/heider.bib'\n- 'bib/basics.bib'\n- 'bib/networks.bib'\n- 'bib/opiniondynamics.bib'\n- 'bib/percolation.bib'\n- 'bib/this.bib'\ntitle: Thermal properties of structurally balanced systems on classical random graphs\n---\n\n[^1]\n\n[^2]\n\n> Modeling social processes gives us a unique opportunity to understand the dynamics of relations between individuals or larger communities. 
Among the most studied phenomena are the processes governing changes in social opinion, while another example is the evolving structure of friendly" +"---\nabstract: 'Kpc-scale triple active galactic nuclei (AGNs), potential precursors of gravitationally-bound triple massive black holes (MBHs), are rarely seen objects and believed to play an important role in the evolution of MBHs and their host galaxies. In this work we present a multi-band (3.0, 6.0, 10.0, and 15.0 GHz), high-resolution radio imaging of the triple AGN candidate, SDSS J0849+1114, using the Very Large Array. Two of the three nuclei (A and C) are detected at 3.0, 6.0, and 15 GHz for the first time, both exhibiting a steep spectrum over 3\u201315 GHz (with a spectral index $-0.90 \\pm 0.05$ and $-1.03 \\pm 0.04$) consistent with a synchrotron origin. Nucleus A, the strongest nucleus among the three, shows a double-sided jet, with the jet orientation changing by $\\sim20\\arcdeg$ between its inner 1$''''$ and the outer 55 (8.1 kpc) components, which may be explained as the MBH\u2019s angular momentum having been altered by merger-enhanced accretion. Nucleus C also shows a two-sided jet, with the western jet inflating into a radio lobe with an extent of 15 (2.2 kpc). The internal energy of the radio lobe is estimated to be $\\rm 5.0 \\times 10^{55}$ erg, for an equipartition magnetic field strength of" +"---\nabstract: 'Sleep is particularly important to the health of infants, children, and adolescents, and sleep scoring is the first step to accurate diagnosis and treatment of potentially life-threatening conditions. But pediatric sleep is severely under-researched compared to adult sleep in the context of machine learning for health, and sleep scoring algorithms developed for adults usually perform poorly on infants. Here, we present the first automated sleep scoring results on a recent large-scale pediatric sleep study dataset that was collected during standard clinical care. 
We develop a transformer-based model that learns to classify five sleep stages from millions of multi-channel electroencephalogram (EEG) sleep epochs with 78% overall accuracy. Further, we conduct an in-depth analysis of the model performance based on patient demographics and EEG channels. The results point to the growing need for machine learning research on pediatric sleep.'\nauthor:\n- |\n Harlin Lee\\\n Department of Mathematics\\\n University of California Los Angeles\\\n Los Angeles, CA 90095\\\n `harlin@math.ucla.edu`\\\n Aaqib Saeed\\\n Philips Research\\\n Eindhoven, Netherlands\\\n `aaqib.saeed@philips.com`\\\nbibliography:\n- 'main.bib'\ntitle: 'Automatic Sleep Scoring from Large-scale Multi-channel Pediatric EEG'\n---\n\nIntroduction\n============\n\nSleep is necessary for everyone, but it is particularly important to the health and development of infants, children and adolescents. Sleep" +"---\nabstract: 'The Robust Perron Cluster Analysis (PCCA+) has become a popular spectral clustering algorithm for coarse-graining transition matrices of nearly decomposable Markov chains with transition states. Originally developed for reversible Markov chains, the algorithm only worked for transition matrices with real eigenvalues. In this paper, we therefore extend the theoretical framework of PCCA+ to Markov chains with a complex eigen-decomposition. We show that by replacing a complex conjugate pair of eigenvectors by their real and imaginary components, a real representation of the same subspace is obtained, which is suitable for the cluster analysis. We show that our approach leads to the same results as the generalized PCCA+ (GenPCCA), which replaces the complex eigen-decomposition by a conceptually more difficult real Schur decomposition. We apply the method on non-reversible Markov chains, including circular chains, and demonstrate its efficiency compared to GenPCCA. 
The experiments are performed in the Matlab programming language and codes are provided.'\nauthor:\n- 'Anna-Simone Frank'\n- Alexander Sikorski\n- Susanna R\u00f6blitz\nbibliography:\n- 'cas-refs.bib'\ntitle: Spectral clustering of Markov chain transition matrices with complex eigenvalues\n---\n\n\\[type=author,orcid=0000-0002-3728-3476\\]\n\n\\[type=author, orcid=0000-0001-9051-650X\\]\n\n\\[type=author, orcid=0000-0002-2735-0030\\]\n\nSpectral cluster analysis ,Complex eigendecomposition ,Invariant sub-space condition ,Discrete-time Markov chains ,Stochastic matrices ,Markov state model ," +"---\nabstract: 'The relativistic Vlasov-Maxwell-Landau (r-VML) system and the relativistic Landau equation (r-LAN) are fundamental models that describe the dynamics of an electron gas. In this paper, we introduce a novel weighted energy method and establish the validity of the Hilbert expansion for the r-VML system and r-LAN equation. As the Knudsen number shrinks to zero, we rigorously demonstrate the relativistic Euler-Maxwell limit and relativistic Euler limit, respectively. This successfully resolves the long-standing open problem regarding the hydrodynamic limits of Landau-type equations.'\naddress:\n- ' Department of Mathematics, University of Chicago'\n- ' Department of Mathematics, Lehigh University'\n- ' Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences'\nauthor:\n- Zhimeng Ouyang\n- Lei Wu\n- Qinghua Xiao\nbibliography:\n- 'Reference.bib'\ntitle: Hilbert Expansion for Coulomb Collisional Kinetic Models\n---\n\n[^1]\n\n[^2]\n\n[^3]\n\nIntroduction\n============\n\nRelativistic Vlasov-Maxwell-Landau System\n-----------------------------------------\n\nThe relativistic Vlasov-Maxwell-Landau (r-VML) system is a fundamental and complete model describing the dynamics of a dilute collisional ionized plasma appearing in nuclear fusion and the interior of stars, etc. 
Correspondingly, the relativistic Euler-Maxwell system, the foundation of the two-fluid theory in plasma physics, describes the dynamics of two compressible ion and electron fluids interacting with their" +"---\nabstract: |\n We propose a new technique for creating a space-efficient index for large repetitive text collections, such as pangenomic databases containing sequences of many individuals from the same species. We combine two recent techniques from this area: Wheeler graphs (Gagie et al., 2017) and prefix-free parsing (PFP, Boucher et al., 2019).\n\n Wheeler graphs are a general framework encompassing several indexes based on the Burrows-Wheeler transform (BWT), such as the FM-index. Wheeler graphs admit a succinct representation which can be further compacted by employing the idea of tunnelling, which exploits redundancies in the form of parallel, equally-labelled paths called blocks that can be merged into a single path. The problem of finding the optimal set of blocks for tunnelling, i.e. the one that minimizes the size of the resulting Wheeler graph, is known to be NP-complete and remains the most computationally challenging part of the tunnelling process.\n\n To find an adequate set of blocks in less time, we propose a new method based on the prefix-free parsing (PFP). The idea of PFP is to divide the input text into phrases of roughly equal sizes that overlap by a fixed number of characters. The phrases are then sorted lexicographically. The" +"---\nabstract: 'Stable Hamiltonian structures generalize contact forms and define a volume-preserving vector field known as the Reeb vector field. 
We study two aspects of Reeb vector fields defined by stable Hamiltonian structures on 3-manifolds: on one hand, we classify all the examples with finitely many periodic orbits under a non-degeneracy condition; on the other, we give sufficient conditions for the existence of a supporting broken book decomposition and for the existence of a Birkhoff section.'\naddress:\n- 'Robert Cardona, Laboratory of Geometry and Dynamical Systems, Department of Mathematics, Universitat Polit\u00e8cnica de Catalunya, Avinguda del Doctor Mara\u00f1on 44-50, 08028 , Barcelona'\n- 'A. Rechtman, Institut de Recherche Math\u00e9matique Avanc\u00e9e, Universit\u00e9 de Strasbourg, 7 rue Ren\u00e9 Descartes, 67084 Strasbourg, France'\nauthor:\n- Robert Cardona\n- Ana Rechtman\ntitle: |\n Periodic orbits and Birkhoff sections of\\\n Stable Hamiltonian structures\n---\n\nIntroduction\n============\n\nThe aim of this paper is to extend recent results concerning Reeb vector fields defined by contact forms ([@CDR; @CDHR; @CM]) to Reeb vector fields defined by stable Hamiltonian structures (SHS) on closed 3-manifolds, a larger set of volume-preserving vector fields. These results concern the number of periodic orbits and the existence of either a supporting broken book decomposition or" +"---\nabstract: 'Autonomous grasping of novel objects that are previously unseen to a robot is an ongoing challenge in robotic manipulation. In the last decades, many approaches have been presented to address this problem for specific robot hands. The UniGrasp framework, introduced recently, has the ability to generalize to different types of robotic grippers; however, this method does not work on grippers with closed-loop constraints and is data-inefficient when applied to robot hands with multi-grasp configurations. In this paper, we present *EfficientGrasp*, a generalized grasp synthesis and gripper control method that is independent of gripper model specifications. 
*EfficientGrasp* utilizes a gripper workspace feature rather than UniGrasp\u2019s gripper attribute inputs. This reduces memory use by 81.7% during training and makes it possible to generalize to more types of grippers, such as grippers with closed-loop constraints. The effectiveness of *EfficientGrasp* is evaluated by conducting object grasping experiments both in simulation and the real world; results show that the proposed method also outperforms UniGrasp when considering only grippers without closed-loop constraints. In these cases, *EfficientGrasp* shows 9.85% higher accuracy in generating contact points and a 3.10% higher grasping success rate in simulation. The real-world experiments are conducted with a gripper with closed-loop constraints, which UniGrasp fails
We find a qualitatively good agreement between both independent datasets, although the details depend on the specific halo model assumed. This constitutes a useful robustness" +"---\nabstract: 'We introduce the framework of performative reinforcement learning where the policy chosen by the learner affects the underlying reward and transition dynamics of the environment. Following the recent literature on performative prediction\u00a0[@PZMH20], we introduce the concept of performatively stable policy. We then consider a regularized version of the reinforcement learning problem and show that repeatedly optimizing this objective converges to a performatively stable policy under reasonable assumptions on the transition dynamics. Our proof utilizes the dual perspective of the reinforcement learning problem and may be of independent interest in analyzing the convergence of other algorithms with decision-dependent environments. We then extend our results for the setting where the learner just performs gradient ascent steps instead of fully optimizing the objective, and for the setting where the learner has access to a finite number of trajectories from the changed environment. For both the settings, we leverage the dual formulation of performative reinforcement learning, and establish convergence to a stable solution. Finally, through extensive experiments on a grid-world environment, we demonstrate the dependence of convergence on various parameters e.g. 
regularization, smoothness, and the number of samples.'\nauthor:\n- |\n Debmalya Mandal\\\n MPI-SWS\\\n `dmandal@mpi-sws.org`\\\n- |\n Stelios Triantafyllou\\\n MPI-SWS\\\n `strianta@mpi-sws.org`" +"---\nauthor:\n- 'Albert Hofstetter, Lennard B\u00f6selt, Sereina Riniker$^\\text{*}$'\nbibliography:\n- 'references.bib'\ndate: |\n *Laboratory of Physical Chemistry, ETH Zurich, Vladimir-Prelog-Weg 2, 8093 Zurich, Switzerland\\\n Email: sriniker@ethz.ch*\ntitle: 'Graph Convolutional Neural Networks for (QM)ML/MM Molecular Dynamics Simulations'\n---\n\nAbstract {#abstract .unnumbered}\n========\n\nTo accurately study chemical reactions in the condensed phase or within enzymes, both a quantum-mechanical description and sufficient configurational sampling is required to reach converged estimates. Here, quantum mechanics/molecular mechanics (QM/ MM) molecular dynamics (MD) simulations play an important role, providing QM accuracy for the region of interest at a decreased computational cost. However, QM/MM simulations are still too expensive to study large systems on longer time scales. Recently, machine learning (ML) models have been proposed to replace the QM description. The main limitation of these models lies in the accurate description of long-range interactions present in condensed-phase systems. To overcome this issue, a recent workflow has been introduced combining a semi-empirical method (i.e. density functional tight binding (DFTB)) and a high-dimensional neural network potential (HDNNP) in a $\\Delta$-learning scheme. This approach has been shown to be capable of correctly incorporating long-range interactions within a cutoff of 1.4 nm. 
One of the promising alternative approaches to efficiently take" +"---\nabstract: 'Precision measurements of transitions between singlet ($S=0$) Rydberg states of H$_2$ belonging to series converging on the $\\mathrm{X}^+\\,^2\\Sigma_g^+(v^+=0,N^+=0)$ state of H$_2^+$ have been carried out by millimetre-wave spectroscopy under field-free conditions and in the presence of weak static electric fields. The Stark effect mixes states with different values of the orbital-angular-momentum quantum number $\\ell$ and leads to quadratic Stark shifts of low-$\\ell$ states and to linear Stark shifts of the nearly degenerate manifold of high-$\\ell$ states. Transitions to the Stark manifold were observed for the principal numbers 50 and 70, at fields below 50 mV/cm, with linewidths below 500\u00a0kHz. The energy-level structure was calculated using a matrix-diagonalisation approach, in which the zero-field positions of the $\\ell\\leq 3$ Rydberg states were obtained either from multichannel-quantum-defect-theory calculations or experiment, and those of the $\\ell\\geq 4$ Rydberg states from a long-range core-polarisation model. This approach offers the advantage of including rovibronic channel interactions through the MQDT treatment while retaining the advantages of a spherical basis for the determination of the off-diagonal elements of the Stark operator. Comparison of experimental and calculated transition frequencies enabled the quantitative description of the Stark manifolds, with residuals typically below 50 kHz. We demonstrate how" +"---\nabstract: 'Ride-pooling (or ride-sharing) services combine trips of multiple customers along similar routes into a single vehicle. The collective dynamics of the fleet of ride-pooling vehicles fundamentally underlies the efficiency of these services. In simplified models, the common features of these dynamics give rise to scaling laws of the efficiency that are valid across a wide range of street networks and demand settings. 
However, it is unclear how constraints of the vehicle fleet impact such scaling laws. Here, we map the collective dynamics of capacity-constrained ride-pooling fleets to services with unlimited passenger capacity and identify an effective fleet size of available vehicles as the relevant scaling parameter characterizing the dynamics. Exploiting this mapping, we generalize the scaling laws of ride-pooling efficiency to capacity-constrained fleets. We approximate the scaling function with a queueing theoretical analysis of the dynamics in a minimal model system, thereby enabling mean-field predictions of required fleet sizes in more complex settings. These results may help to transfer insights from existing ride-pooling services to new settings or service locations.'\nauthor:\n- 'Robin M. Zech'\n- Nora Molkenthin\n- Marc Timme\n- Malte Schr\u00f6der\nbibliography:\n- 'references.bib'\ntitle: 'Collective dynamics of capacity-constrained ride-pooling fleets'\n---\n\n[^1]\n\nIntroduction {#introduction" +"---\nabstract: 'The relaxor ferroelectric transition in [Cd$_2$Nb$_2$O$_7$]{}is thought to be described by the unusual condensation of two $\\Gamma$-centered phonon modes, [$\\Gamma_4^-$]{}and [$\\Gamma_5^-$]{}. However, their respective roles have proven to be ambiguous, with disagreement between *ab initio* studies, which favor [$\\Gamma_4^-$]{}as the primary mode, and global crystal refinements, which point to [$\\Gamma_5^-$]{}instead. Here, we resolve this issue by demonstrating from x-ray pair distribution function measurements that locally, [$\\Gamma_4^-$]{}dominates, but globally, [$\\Gamma_5^-$]{}dominates. This behavior is consistent with the near degeneracy of the energy surfaces associated with these two distortion modes found in our own *ab initio* simulations. 
Our first-principles calculations also show that these energy surfaces are almost isotropic, providing an explanation for the numerous structural transitions found in [Cd$_2$Nb$_2$O$_7$]{}, as well as its relaxor behavior. Our results point to several candidate descriptions of the local structure, some of which demonstrate two-in/two-out behavior for Nb displacements within a given Nb tetrahedron. Although this suggests the possibility of a charge analog of spin ice in [Cd$_2$Nb$_2$O$_7$]{}, our results are more consistent with a Heisenberg-like description for dipolar fluctuations rather than an Ising one. We hope this encourages future experimental investigations of the Nb and Cd dipolar fluctuations, along with their associated mode" +"---\nabstract: 'Interplay of topology and competing interactions can induce new phases and phase transitions at finite temperatures. We consider a weakly coupled two-dimensional hexatic-nematic XY model with relative $Z_3$ Potts degrees of freedom, and apply the matrix product state method to solve this model rigorously. Since the partition function is expressed as a product of two-legged one-dimensional transfer matrix operators, the entanglement entropy of the eigenstate corresponding to the maximal eigenvalue of this transfer operator can be used as a stringent criterion to determine various phase transitions precisely. At low temperatures, the inter-component $Z_3$ Potts long-range order (LRO) exists, indicating that the hexatic and nematic fields are locked together and their respective vortices exhibit quasi-LRO. In the hexatic regime, below the BKT transition of the hexatic vortices, the inter-component $Z_3$ Potts LRO appears, accompanied by the binding of nematic vortices. In the nematic regime, however, the inter-component $Z_3$ Potts LRO undergoes a two-stage melting process.
An intermediate Potts liquid phase emerges between the Potts ordered and disordered phases, characterized by an algebraic correlation with formation of charge-neutral pairs of both hexatic and nematic vortices. These two-stage phase transitions are associated with the proliferation of the domain walls and" +"---\nabstract: 'Energy saving is currently one of the most challenging issues for the Internet research community. Indeed, the exponential growth of applications and services induces a remarkable increase in power consumption and hence calls for novel solutions which are capable to preserve energy of the infrastructures, at the same time maintaining the required Quality of Service guarantees. In this paper we introduce a new mechanism for saving energy through intelligent switch off of network links. The mechanism has been implemented as an extension to the Open Shortest Path First routing protocol. We first show through simulations that our solution is capable to dramatically reduce energy consumption when compared to the standard OSPF implementation. We then illustrate a real-world implementation of the proposed protocol within the Quagga routing software suite.'\naddress:\n- 'Department of Electric Engineering and Information Technology, University of Napoli Federico II, Italy'\n- 'Dipartimento di Studi Europei e Mediterranei, Seconda Universit\u00e0 di Napoli, Italy'\nauthor:\n- 'M.\u00a0D\u2019Arienzo'\n- 'S.P.\u00a0Romano'\ntitle: 'GOSPF: an Energy Efficient Implementation of the OSPF Routing Protocol'\n---\n\nRouting Protocols ,Open Shortest Path First ,Energy Efficiency\n\nIntroduction {#sec:intro}\n============\n\nIn this paper we present an extension of the OSPF protocol specifically conceived" +"---\nabstract: 'Surgical captioning plays an important role in surgical instruction prediction and report generation. 
However, the majority of captioning models still rely on computationally heavy object detectors or feature extractors to extract regional features. In addition, the detection model requires additional bounding box annotations, which are costly and need skilled annotators. These factors lead to inference delays and prevent the captioning model from being deployed in real-time robotic surgery. For this purpose, we design an end-to-end detector- and feature-extractor-free captioning model by utilizing the patch-based shifted window technique. We propose the **S**hifted **Win**dow-Based **M**ulti-**L**ayer **P**erceptrons **Tran**sformer **Cap**tioning model (SwinMLP-TranCAP), with faster inference speed and less computation. SwinMLP-TranCAP replaces the multi-head attention module with a window-based multi-head MLP. Such deployments primarily focus on image understanding tasks, but very few works investigate the caption generation task. SwinMLP-TranCAP is also extended into a video version for video captioning tasks using 3D patches and windows. Compared with previous detector-based or feature extractor-based models, our models greatly simplify the architecture design while maintaining performance on two surgical datasets. The code is publicly available at\u00a0.'\nauthor:\n- Mengya Xu\n- 'Mobarakol Islam[^1]'\n- 'Hongliang Ren[^2]'\nbibliography:\n- 'mybib.bib'\ntitle: 'Rethinking Surgical Captioning: End-to-End Window-Based MLP Transformer" +"---\nabstract: |\n The past several years have witnessed the success of transformer-based models, and their scale and application scenarios continue to grow aggressively.
The current landscape of transformer models is increasingly diverse: the model size varies drastically with the largest being of hundred-billion parameters; the model characteristics differ due to the sparsity introduced by the Mixture-of-Experts; the target application scenarios can be latency-critical or throughput-oriented; the deployment hardware could be single- or multi-GPU systems with different types of memory and storage, etc. With such increasing diversity and the fast-evolving pace of transformer models, designing a highly performant and efficient inference system is extremely challenging.\n\n In this paper, we present , a comprehensive system solution for transformer model inference to address the above-mentioned challenges. consists of (1) a multi-GPU inference solution to minimize latency while maximizing the throughput of both dense and sparse transformer models when they fit in aggregate GPU memory, and (2) a heterogeneous inference solution that leverages CPU and NVMe memory in addition to the GPU memory and compute to enable high inference throughput with large models which do not fit in aggregate GPU memory.\n\n reduces latency by up to $7.3\\times$ over the state-of-the-art for latency oriented" +"---\nabstract: 'Identification of cyber threats is one of the essential tasks for security teams. Currently, cyber threats can be identified using knowledge organized into various formats, enumerations, and knowledge bases. This paper studies the current challenges of identifying vulnerabilities and threats in cyberspace using enumerations and data about assets. Although enumerations are used in practice, we point out several issues that still decrease the quality of vulnerability and threat identification. Since vulnerability identification methods are based on network monitoring and agents, the issues are related to the asset discovery, the precision of vulnerability discovery, and the amount of data. 
On the other hand, threat identification utilizes graph-based, nature-language, machine-learning, and ontological approaches. The current trend is to propose methods that utilize tactics, techniques, and procedures instead of low-level indicators of compromise to make cyber threat identification more mature. Cooperation between standards from threat, vulnerability, and asset management is also an unresolved issue confirmed by analyzing relationships between public enumerations and knowledge bases. Last, we studied the usability of techniques from the MITRE ATT&CK knowledge base for threat modeling using network monitoring to capture data. Although network traffic is not the most used data source, it allows the modeling of" +"---\nauthor:\n- Lionel\u00a0Nganyewou\u00a0Tidjon and Foutse\u00a0Khomh\nbibliography:\n- 'bibliography.bib'\ntitle: Threat Assessment in Machine Learning based Systems \n---\n\n![image](images/vuln_arch.pdf){width=\"100.00000%\"}\n\nIntroduction\n============\n\nNowadays, Machine Learning (ML) is achieving significant success in dealing with various complex problems in safety-critical domains such as healthcare\u00a0[@kourou2015machine] and space\u00a0[@girimonte2007artificial]. ML has also been applied in cybersecurity to detect threatening anomalous behaviors such as spams, malwares, and malicious URLs\u00a0[@8735821]; allowing a system to respond to real-time inputs, containing both normal and suspicious data, and learn to reject malicious behavior. While ML is strengthening defense systems, it also helps threat actors to improve their tactics, techniques, and procedures (TTPs), and expand their attack surface. Attackers leverage the black-box nature of ML models and manipulate input data to affect their performance\u00a0[@carlini2021extracting; @jagielski2018manipulating; @biggio2018wild; @8294186]. 
Most recent work\u00a0[@8294186; @carlini2021extracting; @arp2022and; @morris2020textattack; @pierazzi2020intriguing; @tramer2020adaptive; @carlini2019evaluating; @abdullah2019practical; @eykholt2018robust; @biggio2018wild] outlined ML attacks and defenses targeting different phases of the ML lifecycle, i.e., input data, training, inference, and monitoring. ML-based systems are also often deployed on-premise or on cloud service providers; which increases attack vectors and makes them vulnerable to traditional attacks at different layers: software-level, system-level, and network-level. At the software-level, ML-based systems are" +"---\nabstract: 'A unique challenge for data analysis with the Laser Interferometer Space Antenna (LISA) is that the noise backgrounds from instrumental noise and astrophysical sources will change significantly over both the year and the entire mission. Variations in the noise levels will be on time scales comparable to, or shorter than, the time most signals spend in the detector\u2019s sensitive band. The variation in the amplitude of the galactic stochastic GW background from galactic binaries as the antenna pattern rotates relative to the galactic center is a particularly significant component of the noise variation. LISA\u2019s sensitivity to different source classes will therefore vary as a function of sky location and time. The variation will impact both overall signal-to-noise and the efficiency of alerts to EM observers to search for multi-messenger counterparts.'\nauthor:\n- 'Matthew C. Digman'\n- 'Neil J. Cornish'\nbibliography:\n- 'nonstationary.bib'\ntitle: 'LISA Gravitational Wave Sources in A Time-Varying Galactic Stochastic Background'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe Laser Interferometer Space Antenna (LISA) is a space-based mHz gravitational wave (GW) observatory set to launch in the mid-2030s [@2017arXiv170200786A]. 
Perhaps hundreds of millions of galactic ultra-compact binaries (UCBs), primarily white dwarf binaries, will continuously contribute to the gravitational-wave signal" +"---\nabstract: 'Noise is ubiquitous in real quantum systems, leading to non-Hermitian quantum dynamics, and may affect the fundamental states of matter. Here we report in experiment a quantum simulation of the two-dimensional non-Hermitian quantum anomalous Hall (QAH) model using the nuclear magnetic resonance processor. Unlike the usual experiments using auxiliary qubits, we develop a stochastic average approach based on the stochastic Schr\u00f6dinger equation to realize the non-Hermitian dissipative quantum dynamics, which has advantages in saving the quantum simulation sources and simplifies implementation of quantum gates. We demonstrate the stability of dynamical topology against weak noise, and observe two types of dynamical topological transitions driven by strong noise. Moreover, a region that the emergent topology is always robust regardless of the noise strength is observed. Our work shows a feasible quantum simulation approach for dissipative quantum dynamics with stochastic Schr\u00f6dinger equation and opens a route to investigate non-Hermitian dynamical topological physics.'\nauthor:\n- Zidong Lin\n- Lin Zhang\n- Xinyue Long\n- 'Yu-ang Fan'\n- Yishan Li\n- Kai Tang\n- Jun Li\n- Xinfang Nie\n- Tao Xin\n- 'Xiong-Jun Liu'\n- Dawei Lu\ntitle: 'Experimental quantum simulation of non-Hermitian dynamical topological states using stochastic Schr\u00f6dinger equation'\n---\n\n[^1]" +"---\nabstract: |\n In recent years, dash cams have gained international popularity for personal and commercial use\u00a0[@mehrishEgocentricAnalysisDashCam2020; @parkMotivesConcernsDashcam2016]. Although dash cams are primarily used to collect evidence for traffic incidents, further value may be gained from the videos they record through video analytics. 
Commercial dash cams lack the resources necessary to perform video analytics, so their video data must be offloaded elsewhere to be processed. Cloud computing is a popular choice for offloading computationally intensive tasks, though the high latency and bandwidth usage of cloud computing is undesirable.\n\n These issues can be mitigated through edge computing, where processing occurs close to the data source. A device that is likely to be in close proximity to a dash cam is a mobile device, one belonging to either the vehicle\u2019s driver or passengers. Modern mobile devices such as smartphones are much more powerful than commercial dash cams, yet they still have a fraction of the resources available to cloud servers. A single smartphone is capable of performing video analytics on dash cam recordings, but may be unable to produce results in a real-time manner. Instead of using a single mobile device, multiple can form a local network to share their resources" +"---\nabstract: 'Emergency Departments (EDs) are a fundamental element of the Portuguese National Health Service, serving as an entry point for users with diverse and very serious medical problems. Due to the inherent characteristics of the ED, forecasting the number of patients using the services is particularly challenging. And a mismatch between affluence and the number of medical professionals can lead to a decrease in the quality of the services provided and create problems that have repercussions for the entire hospital, with the requisition of healthcare workers from other departments and the postponement of surgeries. ED overcrowding is driven, in part, by non-urgent patients that resort to emergency services despite not having a medical emergency, representing almost half of the total number of daily patients. 
This paper describes a novel deep learning architecture, the Temporal Fusion Transformer, that uses calendar and time-series covariates to forecast prediction intervals and point predictions for a 4-week period. We have concluded that patient volume can be forecast with a Mean Absolute Percentage Error (MAPE) of 9.87% for Portugal\u2019s Health Regional Areas (HRA) and a Root Mean Squared Error (RMSE) of 178 people/day. The paper shows empirical evidence supporting the use of a multivariate approach" +"---\nabstract: 'A famous theorem in polytope theory states that the combinatorial type of a simplicial polytope is completely determined by its facet-ridge graph. This celebrated result was proven by Blind and Mani in 1987, via a non-constructive proof using topological tools from homology theory. An elegant constructive proof was given by Kalai shortly after. In their original paper, Blind and Mani asked whether their result can be extended to simplicial spheres, and a positive answer to their question was conjectured by Kalai in 2009. In this paper, we show that Kalai\u2019s conjecture holds in the particular case of Knutson and Miller\u2019s spherical subword complexes. This family of simplicial spheres arises in the context of Coxeter groups, and is conjectured to be polytopal. In contrast, not all manifolds are reconstructible. We show two explicit examples, namely the torus and the projective plane.'\naddress:\n- 'CC: TU Graz, Institut f\u00fcr Geometrie, Kopernikusgasse 24, 8010 Graz, Austria.'\n- 'JD: TU Graz, Institut f\u00fcr Geometrie, Kopernikusgasse 24, 8010 Graz, Austria.'\nauthor:\n- Cesar Ceballos\n- Joseph Doolittle\nbibliography:\n- 'biblio.bib'\ntitle: 'Subword Complexes and Kalai\u2019s Conjecture on Reconstruction of Spheres'\n---\n\nIntroduction\n============\n\nThe combinatorial structure of a simple polytope is known to" +"---\nabstract: 'Independence testing plays a central role in statistical and causal inference from observational data. 
Standard independence tests assume that the data samples are independent and identically distributed (i.i.d.) but that assumption is violated in many real-world datasets and applications centered on relational systems. This work examines the problem of estimating independence in data drawn from relational systems by defining sufficient representations for the sets of observations influencing individual instances. Specifically, we define marginal and conditional independence tests for relational data by considering the kernel mean embedding as a flexible aggregation function for relational variables. We propose a consistent, non-parametric, scalable kernel test to operationalize the relational independence test for non-i.i.d. observational data under a set of structural assumptions. We empirically evaluate our proposed method on a variety of synthetic and semi-synthetic networks and demonstrate its effectiveness compared to state-of-the-art kernel-based independence tests.'\nauthor:\n- Ragib Ahsan\n- Zahra Fatemi\n- David Arbour\n- Elena Zheleva\nbibliography:\n- 'ahsan\\_640.bib'\ntitle: 'Non-Parametric Inference of Relational Dependence'\n---\n\nIntroduction {#sec:intro}\n============\n\nMeasuring dependence is a fundamental task in statistics. However, most existing independence tests assume that the observed data is independent and identically distributed (i.i.d.). This assumption makes them unsuitable for" +"---\nabstract: 'The coupling of electrons and phonons is governed wisely by the symmetry properties of the crystal structures. In particular, for two-dimensional (2D) systems, it has been suggested that the electrons do not couple to phonons with pure out-of-plane distortion, as long as there is a $\\sigma_h$ symmetry. We show that such a statement is correct when constituents of the unit-cell layer are only located in the $\\sigma_h$ symmetric plane; a prominent example of such a system is graphene. 
For those 2D crystals in which atoms are vertically located away from the horizontal symmetric plane (e.g., 1H transition metal dichalcogenides), acoustic flexural modes do not couple to the electrons up to linear order, while optical flexural phonons, which preserve $\\sigma_h$ symmetry, do couple with the electrons. Our conclusions are supported by an analytic argument together with numerical calculations using density functional perturbation theory.'\nauthor:\n- Mohammad Alidoosti\n- Davoud Nasr Esfahani\n- Reza Asgari\nbibliography:\n- 'draft1.bib'\nnocite: '[@apsrev41Control]'\ntitle: '$\\sigma_h$ symmetry and electron-phonon interaction in two-dimensional crystalline systems'\n---\n\nIntroduction\n============\n\nAtomically thin two-dimensional (2D) materials, including one- (several-) atomic-width layer(s), have sparked a great deal of attention owing to their various applications with nano-technological instruments\u00a0[@novoselov2005two; @RevModPhys.82.2673;" +"---\nabstract: 'In this work, we present an end-to-end heterogeneous multi-robot system framework where ground robots are able to localize, plan, and navigate in a semantic map created in real time by a high-altitude quadrotor. The ground robots choose and deconflict their targets independently, without any external intervention. Moreover, they perform cross-view localization by matching their local maps with the overhead map using semantics. The communication backbone is opportunistic and distributed, allowing the entire system to operate with no external infrastructure aside from GPS for the quadrotor. We extensively tested our system by performing different missions on top of our framework over multiple experiments in different environments. Our ground robots travelled over 6 km autonomously with minimal intervention in the real world and over 96 km in simulation without interventions.'\nauthor:\n- 'Ian D. 
Miller$^{1}$, Fernando Cladera$^{1}$, Trey Smith$^{2}$, Camillo Jose Taylor$^{1}$, and Vijay Kumar$^{1}$ [^1] [^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'bib.bib'\ntitle: |\n **Stronger Together:\\\n Air-Ground Robotic Collaboration Using Semantics**\n---\n\nat (current page.south) ;\n\nIntroduction {#sec:introduction}\n============\n\nMulti-robot systems offer many advantages in terms of resilience to failure, adaptability, and the ability to complete work in parallel [@dorigo2020reflections]. In particular, heterogeneous systems can take advantage of the" +"---\nabstract: 'Large public knowledge graphs, like Wikidata, contain billions of statements about tens of millions of entities, thus inspiring various use cases to exploit such knowledge graphs. However, practice shows that much of the relevant information that fits users\u2019 needs is still missing in Wikidata, while current linked open data (LOD) tools are not suitable to enrich large graphs like Wikidata. In this paper, we investigate the potential of enriching Wikidata with structured data sources from the LOD cloud. We present a novel workflow that includes gap detection, source selection, schema alignment, and semantic validation. We evaluate our enrichment method with two complementary LOD sources: a noisy source with broad coverage, DBpedia, and a manually curated source with a narrow focus on the art domain, Getty. Our experiments show that our workflow can enrich Wikidata with millions of novel statements from external LOD sources with high quality. Property alignment and data quality are key challenges, whereas entity alignment and source selection are well-supported by existing Wikidata mechanisms. 
We make our code and data available to support future work.'\nauthor:\n- Bohui Zhang\n- Filip Ilievski\n- Pedro Szekely\nbibliography:\n- 'references.bib'\ntitle: Enriching Wikidata with Linked Open Data\n---" +"---\nabstract: 'Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent neural network layers\u2014long short-term memory and gated recurrent unit\u2014within the hls4ml framework. We demonstrate that our implementation is capable of producing effective designs for both small and large models, and can be customized to meet specific design requirements for inference latencies and FPGA resources. We show the performance and synthesized designs for multiple neural networks, many of which are trained specifically for jet identification tasks at the CERN Large Hadron Collider.'\nauthor:\n- |\n Elham E Khoda^1^[^1]\u00a0, Dylan Rankin^2^[^2]\u00a0, Rafael Teixeira de Lima^3^[^3]\u00a0\\\n **Philip Harris^2^, Scott Hauck^1^, Shih-Chieh Hsu^1^, Michael Kagan^3^, Vladimir Loncar^4^**,\\\n **Chaitanya Paikara^1^, Richa Rao^1^, Sioni Summers^4^, Caterina Vernieri^3^, Aaron Wang^1^**\\\n \u00a0\\\n $^{1}$University of Washington, Seattle, WA 98195, USA\\\n $^{2}$Massachusetts Institute of Technology Cambridge, MA 02139, USA\\\n $^{3}$SLAC National Accelerator Laboratory, Menlo Park, CA 94025, USA\\\n $^{4}$European Organization for Nuclear Research (CERN), Geneva, Switzerland" +"---\nabstract: 'We employ the Poisson-Lie group of pseudo-difference operators to define lattice analogs of classical $W_m$-algebras. 
We then show that the so-constructed algebras coincide with the ones given by discrete Drinfeld-Sokolov type reduction.'\naddress:\n- |\n Department of Mathematics\\\n University of Arizona\\\n Tucson AZ 85716\n- |\n Mathematics Department\\\n University of Wisconsin\\\n Madison WI 53706\nauthor:\n- Anton Izosimov and Gloria Mar\u00ed Beffa\nbibliography:\n- 'latticew.bib'\ntitle: 'What is a lattice W-algebra?'\n---\n\nIntroduction and Background\n===========================\n\nIt is well-known that the space of monic order $m$ periodic differential operators without sub-leading term, i.e. operators of the form $$u_0(x) + \\ldots + u_{m-2}(x) \\partial^{m-2} + \\partial^m,$$ where $\\partial := \\partial / \\partial x$ and the coefficients $u_i(x)$ are periodic functions, has a remarkable Poisson structure, called the second Adler-Gelfand-Dickey bracket [@GD; @Adler]. The corresponding Poisson algebra is known as the classical $W_m$-algebra and can be identified with the semi-classical limit of the $W_m$-algebra arising in conformal field theory [@zamolodchikov1985infinite]. In connection with integrable systems, Adler-Gelfand-Dickey structures are best known as Poisson brackets for KdV-type equations. In particular, the $W_2$-structure coincides with the second Poisson bracket for the classical KdV equation and can be interpreted as the Lie-Poisson structure on" +"---\nabstract: 'Many real-world problems contain both multiple objectives and agents, where a trade-off exists between objectives. Key to solving such problems is to exploit sparse dependency structures that exist between agents. For example in wind farm control a trade-off exists between maximising power and minimising stress on the systems components. Dependencies between turbines arise due to the wake effect. We model such sparse dependencies between agents as a multi-objective coordination graph (MO-CoG). 
In multi-objective reinforcement learning (MORL) a utility function is typically used to model a user\u2019s preferences over objectives. However, a utility function may be unknown a priori. In such settings a set of possibly optimal policies must be computed. Which policies are optimal depends on which optimality criterion applies. If the utility function of a user is derived from multiple executions of a policy, the scalarised expected returns (SER) must be optimised. If the utility of a user is derived from the single execution of a policy, the expected scalarised returns (ESR) criterion must be optimised. For example, wind farms are subjected to constraints and regulations that must be adhered to at all times, therefore the ESR criterion must be optimised. For MO-CoGs, the state-of-the-art algorithms can" +"---\nabstract: 'The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1GeV/$c$ charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1$\\pm0.6$% and 84.1$\\pm0.6$%, respectively. 
The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.'\nauthor:\n- |\n The DUNE Collaboration\\\n \\\n A.\u00a0Abed Abud\n- 'B.\u00a0Abi'\n- 'R.\u00a0Acciarri'\n- 'M.\u00a0A.\u00a0Acero'\n- 'M.\u00a0R.\u00a0Adames'\n- 'G.\u00a0Adamov'\n- 'M.\u00a0Adamowski'\n- 'D.\u00a0Adams'\n- 'M.\u00a0Adinolfi'\n- 'C.\u00a0Adriano'\n- 'A.\u00a0Aduszkiewicz'\n- 'J.\u00a0Aguilar'\n- 'Z.\u00a0Ahmad'\n- 'J.\u00a0Ahmed'\n-" +"---\nabstract: 'Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to the (i) high diversity of placenta appearance, (ii) the restricted quality in US resulting in highly variable reference annotations, and (iii) the limited field-of-view of US prohibiting whole placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task in particular in limited training set conditions. With this approach we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance as compared to intra- and inter-observer variability. Lastly, our approach can deliver whole placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion and image segmentation. 
This results in high quality segmentation of larger structures such as the placenta in US with reduced image artifacts which are beyond the field-of-view of single" +"---\nabstract: 'We build a microscopic model to study the intra- and inter-layer superexchange due to electrons hopping in chromium trihalides ($\\mathrm{CrX}_3$, X= Cl, Br, and I). In evaluating the superexchange, we identify the relevant intermediate excitations in the hopping. In our study, we find that the intermediate hole-pairs excitations in the $p$-orbitals on X ion play a crucial role in mediating various types of exchange interactions. In particular, the inter-layer antiferromagnetic exchange may be realized by the hole-pair-mediated superexchange. Interestingly, we also find that these virtual hopping processes compete with each other leading to weak intra-layer ferromagnetic exchange. In addition, we also study the spin-orbit coupling effects on the superexchange and investigate the Dzyaloshinskii-Moriya interaction. Finally, we extract the microscopic model parameters from density functional theory for analyzing the exchange interactions in a monolayer $\\mathrm{CrI}_3$.'\nauthor:\n- Kok Wee Song\n- 'Vladimir I Fal\u2019ko'\nbibliography:\n- 'CrX3.bib'\ntitle: 'Superexchange and spin-orbit coupling in monolayer and bilayer chromium trihalides'\n---\n\nintroduction\n============\n\nThe realization of two-dimensional (2D) magnetism in an atomically-thin chromium trihalide ($\\mathrm{CrX}_3$) is one of the recent breakthroughs in the field of 2D materials[@McGuire:ACScm27(2015); @Burch:Nature563(2018); @Gibertini:NatNano14(2019); @Wang:AnnalPhys532(2020); @Yao:Nanotech32(2021)]. Although 2D magnetism is relatively new to the field, the" +"---\nabstract: 'Pipe dreams and bumpless pipe dreams for vexillary permutations are each known to be in bijection with certain semistandard tableaux via maps due to Lenart and Weigandt, respectively. 
Recently, Gao and Huang have defined a bijection between the former two sets. In this note we show for vexillary permutations that the Gao\u2013Huang bijection preserves the associated tableaux, giving a new proof of Lenart\u2019s result. Our methods extend to give a recording tableau for any bumpless pipe dream.'\naddress:\n- 'Department of Mathematics, University of Florida, Gainesville, FL 32601'\n- 'Department of Mathematics, University of Florida, Gainesville, FL 32601'\nauthor:\n- Adam Gregory\n- Zachary Hamaker\nbibliography:\n- 'references.bib'\ntitle: 'Lenart\u2019s bijection via bumpless pipe dreams'\n---\n\nIntroduction\n============\n\nSchubert polynomials are fundamental objects in algebraic combinatorics and enumerative algebraic geometry. They have several combinatorial formulas, most notably the Billey-Jockusch-Stanley pipe dream formula\u00a0[@BB93; @billey1993some]. Schubert polynomials associated to vexillary permutations are flagged Schur functions, which have a formula in terms of flagged tableaux\u00a0[@wachs1985flagged]. As part of an effort to extend this type of formula to all Schubert polynomials, Lenart described a bijection from pipe dreams to flagged tableaux\u00a0[@L04 Rem.\u00a04.12\u00a0(2)]. Lenart\u2019s map relies on Edelman-Greene" +"---\nabstract: 'Precision agriculture is rapidly attracting research to efficiently introduce automation and robotics solutions to support agricultural activities. Robotic navigation in vineyards and orchards offers competitive advantages in autonomously monitoring and easily accessing crops for harvesting, spraying and performing time-consuming necessary tasks. Nowadays, autonomous navigation algorithms exploit expensive sensors which also require heavy computational cost for data processing. Nonetheless, vineyard rows represent a challenging outdoor scenario where GPS and Visual Odometry techniques often struggle to provide reliable positioning information. 
In this work, we combine Edge AI with Deep Reinforcement Learning to propose a cutting-edge lightweight solution to tackle the problem of autonomous vineyard navigation without exploiting precise localization data and overcoming task-tailored algorithms with a flexible learning-based approach. We train an end-to-end sensorimotor agent which directly maps noisy depth images and position-agnostic robot state information to velocity commands and guides the robot to the end of a row, continuously adjusting its heading for a collision-free central trajectory. Our extensive experimentation in realistic simulated vineyards demonstrates the effectiveness of our solution and the generalization capabilities of our agent.'\nauthor:\n- 'Mauro Martini$^{1,2}$, Simone Cerrato$^{1,2}$, Francesco Salvetti$^{1,2,3}$, Simone Angarano$^{1,2}$, and Marcello Chiaberge$^{1,2}$ [^1][^2][^3]'\nbibliography:\n- 'mybib.bib'\ntitle: 'Position-Agnostic Autonomous Navigation in" +"---\nabstract: 'We show that if the edges or vertices of an undirected graph $G$ can be covered by $k$ shortest paths, then the pathwidth of $G$ is upper-bounded by a single-exponential function of $k$. As a corollary, we prove that the problem [Isometric Path Cover with Terminals]{} (which, given a graph $G$ and a set of $k$ pairs of vertices called *terminals*, asks whether $G$ can be covered by $k$ shortest paths, each joining a pair of terminals) is FPT with respect to the number of terminals. The same holds for the similar problem [Strong Geodetic Set with Terminals]{} (which, given a graph $G$ and a set of $k$ terminals, asks whether there exist $\\binom{k}{2}$ shortest paths covering $G$, each joining a distinct pair of terminals). 
Moreover, this implies that the related problems [Isometric Path Cover]{} and [Strong Geodetic Set]{} (defined similarly but where the set of terminals is not part of the input) are in XP with respect to parameter $k$.'\nauthor:\n- 'Ma\u00ebl Dumas[^1]'\n- 'Florent Foucaud[^2]\u00a0[^3]'\n- Anthony Perez\n- Ioan Todinca\nbibliography:\n- 'main.bib'\ndate: April 2022\ntitle: On graphs coverable by $k$ shortest paths\n---\n\nIntroduction\n============\n\nPath problems" +"---\nabstract: 'Collision detection plays an important role in simulation, control, and learning for robotic systems. However, no existing method is differentiable with respect to the configurations of the objects, greatly limiting the sort of algorithms that can be built on top of collision detection. In this work, we propose a set of differentiable collision detection algorithms between capsules and padded polygons by formulating these problems as differentiable convex quadratic programs. The resulting algorithms are able to return a proximity value indicating if a collision has taken place, as well as the closest points between objects, all of which are differentiable. As a result, they can be used reliably within other gradient-based optimization methods, including trajectory optimization, state estimation, and reinforcement learning methods.'\nauthor:\n- 'Kevin Tracy$^{1}$, Taylor A. Howell$^{2}$, Zachary Manchester$^{1}$ [^1][^2]'\nbibliography:\n- 'rexlab.bib'\ntitle: |\n DiffPills: Differentiable Collision Detection\\\n for Capsules and Padded Polygons\n---\n\nIntroduction\n============\n\nCollision detection algorithms are used to determine if two abstract shapes have an intersection. 
This problem has been the subject of great interest from the computer graphics and video game communities, where accurate collision detection is a key part of both the simulation as well as the visualization of complex" +"---\nabstract: 'The intracluster medium (ICM) is a reservoir of heavy elements synthesized by different supernovae (SNe) types over cosmic history. Different enrichment mechanisms contribute a different relative metal production, predominantly caused by different SNe Type dominance. Using spatially resolved X-ray spectroscopy, one can probe the contribution of each metal enrichment mechanism. However, a large variety of physically feasible supernova explosion models make the analysis of the ICM enrichment history more uncertain. This paper presents a non-parametric PDF analysis to rank different theoretical SNe yields models by comparing their performance against observations. Specifically, we apply this new methodology to rank 7192 combinations of core-collapse SN and Type Ia SN models using 8 abundance ratios from *Suzaku* observations of 18 galaxy systems (clusters and groups) to test their predictions. This novel technique can compare many SN models and maximize spectral information extraction, considering all the individual measurable abundance ratios and their uncertainties. We find that Type II Supernova with nonzero initial metallicity progenitors in general performed better than Pair-Instability SN and Hypernova models and that 3D SNIa models (with the WD progenitor central density of $2.9\\times10^9 \\mathrm{g\\,cm^{-3}}$) performed best among all tested SN model pairs.'\nauthor:\n- Rebeca Batalha\n- 'Renato" +"---\nabstract: 'Despite the fact that the theory of mixtures has been part of non-equilibrium thermodynamics and engineering for a long time, it is far from complete. 
While it is well formulated and tested in the case of mechanical equilibrium (where only diffusion-like processes take place), the question how to properly describe homogeneous mixtures that flow with multiple independent velocities that still possess some inertia (before mechanical equilibrium is reached) is still open. Moreover, the mixtures can have several temperatures before they relax to a common value. In this paper, we derive a theory of mixtures from Hamiltonian mechanics in interaction with electromagnetic fields. The resulting evolution equations are then reduced to the case with only one momentum (classical irreversible thermodynamics), providing a generalization of the Maxwell-Stefan diffusion equations. In a next step, we reduce that description to the mechanical equilibrium (no momentum) and derive a non-isothermal variant of the dusty gas model. These reduced equations are solved numerically, and we illustrate the results on efficiency analysis, showing where in a concentration cell efficiency is lost. Finally, the theory of mixtures identifies the temperature difference between constituents as a possible new source of the Soret coefficient. For the sake of" +"---\nabstract: 'Assuming Jensen\u2019s diamond principle ($\\diamondsuit$) we construct for every natural number $n>0$ a compact Hausdorff space $K$ such that whenever the Banach spaces $C(K)$ and $C(L)$ are isomorphic for some compact Hausdorff $L$, then the covering dimension of $L$ is equal to $n$. The constructed space $K$ is separable and connected, and the Banach space $C(K)$ has few operators i.e. every bounded linear operator $T:C(K)\\rightarrow C(K)$ is of the form $T(f)=fg+S(f)$, where $g\\in C(K)$ and $S$ is weakly compact.'\naddress:\n- 'Institute of Mathematics of the Polish Academy of Sciences, ul. \u015aniadeckich 8, 00-656 Warszawa, Poland'\n- 'Faculty of Mathematics, Informatics, and Mechanics, University of Warsaw, ul. 
Banacha 2, 02-097 Warszawa, Poland'\nauthor:\n- Damian G\u0142odkowski\nbibliography:\n- 'bibliography.bib'\ntitle: 'A Banach space C(K) reading the dimension of K'\n---\n\n[^1]\n\nIntroduction\n============\n\nIn [@few-operators-main] Koszmider showed that there is a compact Hausdorff space $K$ such that whenever $L$ is compact Hausdorff and the Banach spaces $C(K)$ and $C(L)$ are isomorphic, the dimension of $L$ is greater than zero. In the light of this result Pe\u0142czy\u0144ski asked, whether there is a compact space $K$ with $\\dim(K)=k$ for given $k\\in \\omega\\backslash\\{0\\}$, such that if $C(K)\\sim C(L)$, then $\\dim(L)\\geq k$" +"---\nabstract: 'In this paper, we address the problem of detecting the moment when an ongoing asynchronous parallel iterative process can be terminated to provide a sufficiently precise solution to a fixed-point problem being solved. Formulating the detection problem as a global solution identification problem, we analyze the snapshot-based approach, which is the only one that allows for exact global residual error computation. From a recently developed approximate snapshot protocol providing a reliable global residual error, we experimentally investigate here, as well, the reliability of a global residual error computed without any prior particular detection mechanism. 
Results on a single-site supercomputer successfully show that such high-performance computing platforms possibly provide computational environments stable enough to allow for simply resorting to non-blocking reduction operations for computing reliable global residual errors, which provides noticeable time saving, at both implementation and execution levels.'\nauthor:\n- '[Guillaume Gbikpi-Benissan]{}[^1]'\n- '[Fr\u00e9d\u00e9ric Magoul\u00e8s]{}[^2]'\nbibliography:\n- 'ref.bib'\ntitle: Distributed asynchronous convergence detection without detection protocol\n---\n\nasynchronous iterations; convergence detection; residual error; snapshot; parallel computing; large-scale computing\n\nIntroduction\n============\n\nOne of the major questions which arise when implementing asynchronous iterations in distributed solvers consists of finding a mechanism to detect when convergence is reached, so that one" +"---\nabstract: 'Text-based simulated environments have proven to be a valid test bed for machine learning approaches. The process of affordance extraction can be used to generate possible actions for interaction within such an environment. In this paper the capabilities and challenges for utilizing external knowledge databases (in particular ConceptNet) in the process of affordance extraction are studied. An algorithm for automated affordance extraction is introduced and evaluated on the Interactive Fiction (IF) platforms TextWorld and Jericho. For this purpose, the collected affordances are translated into text commands for IF agents. To probe the quality of the automated evaluation process, an additional human baseline study is conducted. The paper illustrates that, despite some challenges, external databases can in principle be used for affordance extraction. The paper concludes with recommendations for further modification and improvement of the process.'\nauthor:\n- 'P. Gelhausen, M. Fischer, G. 
Peters'\nbibliography:\n- 'lit.bib'\ntitle: 'Challenges for Automated Affordance Extraction via External Knowledge Database for Text-Based Simulated Environments'\n---\n\nMotivation {#Section_Motivation}\n==========\n\nSimulated environments are an indispensable tool for modern studies of artificial intelligence and machine learning in particular. As such, they provide the possibility to develop new methods and even paradigms, as well as the" +"---\nabstract: 'The parameter-free computation of charge transport properties of semiconductors is now routine owing to advances in the ab-initio description of the electron-phonon interaction. Many studies focus on the low-field regime in which the carrier temperature equals the lattice temperature and the current power spectral density (PSD) is proportional to the mobility. The calculation of high-field transport and noise properties offers a stricter test of the theory as these relations no longer hold, yet few such calculations have been reported. Here, we compute the high-field mobility and PSD of hot holes in silicon from first principles at temperatures of 77 and 300 K and electric fields up to 20 [kV cm^-1^]{}\u00a0along various crystallographic axes. We find that the calculations quantitatively reproduce experimental trends including the anisotropy and electric-field dependence of hole mobility and PSD. The experimentally observed rapid variation of energy relaxation time with electric field at cryogenic temperatures is also correctly predicted. However, as in low-field studies, absolute quantitative agreement is in general lacking, a discrepancy that has been attributed to inaccuracies in the calculated valence band structure. 
Our work highlights the use of high-field transport and noise properties as a rigorous test of the theory of" +"---\nabstract: 'Observations of gaseous complex organic molecules (COMs) in cold starless and prestellar cloud cores require efficient desorption of the COMs and their parent species from icy mantles on interstellar grains. With a simple astrochemical model, we investigate if mechanical removal of ice fragments in oblique collisions between grains in two size bins (0.01 and 0.1 micron) can substantially affect COM abundances. Two grain collision velocities were considered \u2013 10 and 50 meters per second, corresponding to realistic grain relative speeds arising from ambipolar diffusion and turbulence, respectively. From the smaller grains, the collisions are assumed to remove a spherical cap with height equal to 1/3 and 1 ice mantle thickness, respectively. We find that the turbulence-induced desorption can elevate the gas-phase abundances of COMs by several orders of magnitude, reproducing observed COM abundances within an order of magnitude. Importantly, the high gaseous COM abundances are attained for long time-scales of up to 1 Myr and for a rather low methanol ice abundance, common for starless cores. The simple model, considering only two grain size bins and several assumptions, demonstrates a concept that must be tested with a more sophisticated approach.'\nauthor:\n- |\n Juris Kalv\u0101ns,$^{1}$[^1] Kedron Silsbee,$^{2}$\\\n $^{1}$Engineering" +"---\nabstract: 'Detection and tracking of moving objects is an essential component in environmental perception for autonomous driving. In the flourishing field of multi-view 3D camera-based detectors, different transformer-based pipelines are designed to learn queries in 3D space from 2D feature maps of perspective views, but the dominant dense BEV query mechanism is computationally inefficient. 
This paper proposes Sparse R-CNN 3D (SRCN3D), a novel two-stage fully-sparse detector that incorporates sparse queries, sparse attention with box-wise sampling, and sparse prediction. SRCN3D adopts a cascade structure with the twin-track update of both a fixed number of query boxes and latent query features. Our novel sparse feature sampling module only utilizes local 2D region of interest (RoI) features calculated by the projection of 3D query boxes for further box refinement, leading to a fully-convolutional and deployment-friendly pipeline. For multi-object tracking, motion features, query features and RoI features are comprehensively utilized in multi-hypotheses data association. Extensive experiments on nuScenes dataset demonstrate that SRCN3D achieves competitive performance in both 3D object detection and multi-object tracking tasks, while also exhibiting superior efficiency compared to transformer-based methods. Code and models are available at .'\nauthor:\n- |\n Yining Shi$^{1}$, Jingyan Shen$^{1}$, Yifan Sun$^{1}$, Yunlong Wang$^{1}$,\\\n Jiaxin Li$^{1}$," +"---\nabstract: 'A trigger system of general function is designed using the commercial module CAEN V2495 for heavy ion nuclear reaction experiment at Fermi energies. The system has been applied and verified on CSHINE (Compact Spectrometer for Heavy IoN Experiment). Based on the field programmable logic gate array (FPGA) technology of command register access and remote computer control operation, trigger functions can be flexibly configured according to the experimental physical goals. Using the trigger system on CSHINE, we carried out the beam experiment of 25 MeV/u $ ^{86}{\\rm Kr}+ ^{124}{\\rm Sn}$ on the Radioactive Ion Beam Line 1 in Lanzhou (RIBLL1), China. The online results demonstrate that the trigger system works normally and correctly. 
The system can be extended to other experiments.'\naddress:\n- 'Department of Physics, Tsinghua University, Beijing 100084, China'\n- 'Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000, China'\n- 'School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000, China'\nauthor:\n- Dong Guo\n- Yuhao Qin\n- Sheng Xiao\n- Zhi Qin\n- Yijie Wang\n- Fenhai Guan\n- Xinyue Diao\n- Boyuan Zhang\n- Yaopeng Zhang\n- Dawei Si\n- Shiwei Xu\n- Xianglun Wei\n- Herun Yang\n- Peng Ma\n-" +"---\nabstract: '[ We report the discovery of a candidate X-ray supernova remnant SRGe\u00a0J003602.3+605421=G121.1-1.9 in the course of *SRG*/eROSITA all-sky survey. The object is located at (l,b)=(121.1$^\\circ$,-1.9$^\\circ$), is $\\approx36$ arcmin in angular size and [ has a nearly circular shape]{}. Clear variations in spectral shape of the X-ray emission across the object are detected, with the emission from the inner (within 9\u2019) and outer (9\u2019-18\u2019) parts dominated by iron and oxygen/neon lines, respectively. The non-equilibrium plasma emission model is capable of describing the spectrum of the outer part with the initial gas temperature 0.1 keV, final temperature 0.5 keV and the ionization age $\\sim 2\\times10^{10}$ cm$^{-3}$ s. The observed spectrum of the inner region is more complicated (plausibly due to the contribution of the outer shell) and requires substantial overabundance of iron for all models we have tried. The derived X-ray absorption equals to $(4-6)\\times10^{21}$ cm$^{-2}$, locating the object at the distance beyond 1.5 kpc, and implying [ its age]{} $\\sim(5-30)\\times1000$ yrs. No bright radio, infrared, H$_\\alpha$ or gamma-ray counterpart of this object have been found in the publicly-available archival data. 
A model invoking a canonical $10^{51}$ erg explosion [(either SN\u00a0Ia or core collapse)]{} in the hot and" +"---\nabstract: 'We introduce regular languages of morphisms in free monoidal categories, with their associated grammars and automata. These subsume the classical theory of regular languages of words and trees, but also open up a much wider class of languages over string diagrams. We use the algebra of monoidal and cartesian restriction categories to investigate the properties of regular monoidal languages, and provide sufficient conditions for their recognizability by deterministic monoidal automata.'\nauthor:\n- Matthew Earnshaw\n- Pawe\u0142 Soboci\u0144ski\nbibliography:\n- 'main.bib'\ntitle: Regular Monoidal Languages\n---\n\nIntroduction {#sec:intro}\n============\n\nClassical formal language theory has been extended to various kinds of algebraic structures, such as infinite words, rational sequences, trees, countable linear orders, graphs of bounded tree width, etc. In recent years, the essential unity of the field has been better understood [@mmso; @eilenbergforfree]. Such structures can often be seen as algebras for monads on the category of sets, and sufficient conditions exist\u00a0[@mmso] for formal language theory to extend to their algebras.\n\nIn this paper, we make a first step into a programme of extending language theory to higher-dimensional algebraic structures. Here we make the step from monoids to 2-monoids, better known as monoidal categories.\n\nWe introduce a categorial" +"---\nabstract: 'Collision detection between objects is critical for simulation, control, and learning for robotic systems. However, existing collision detection routines are inherently non-differentiable, limiting their applications in gradient-based optimization tools. 
In this work, we propose DCOL: a fast and fully differentiable collision-detection framework that reasons about collisions between a set of composable and highly expressive convex primitive shapes. This is achieved by formulating the collision detection problem as a convex optimization problem that solves for the minimum uniform scaling applied to each primitive before they intersect. The optimization problem is fully differentiable with respect to the configurations of each primitive and is able to return a collision detection metric and contact points on each object, agnostic of interpenetration. We demonstrate the capabilities of DCOL on a range of robotics problems from trajectory optimization and contact physics, and have made an open-source implementation available.'\nauthor:\n- 'Kevin Tracy$^{1}$, Taylor A. Howell$^{2}$, and Zachary Manchester$^{1}$[^1][^2]'\nbibliography:\n- 'rexlab.bib'\ntitle: '**Differentiable Collision Detection for a Set of Convex Primitives** '\n---\n\nIntroduction\n============\n\nComputing collisions is of great interest to the computer graphics, video game, and robotics communities. Popular algorithms for collision detection include the Gilbert, Johnson, and Keerthi (GJK) algorithm [@gilbert1988]," +"UT-Komaba/22-1\\\nJuly 2022\n\nQuantizing multi-pronged open string junction\\\n\nMasako [Asano]{}$^{\\dagger}$ and Mitsuhiro [Kato]{}$^{\\ddagger}$\\\n${}^{\\dagger}$[*Faculty of Science and Technology*]{}\\\n[*Seikei University*]{}\\\n[*Musashino-shi, Tokyo 180-8633, Japan*]{}\\\n${}^{\\ddagger}$[*Institute of Physics*]{}\\\n[*University of Tokyo, Komaba*]{}\\\n[*Meguro-ku, Tokyo 153-8902, Japan*]{}\\\n\nABSTRACT\n\n> Covariant quantization of multi-pronged open bosonic string junction is studied beyond static analysis. Its excited states are described by a set of ordinary bosons as well as some sets of twisted bosons on the world-sheet. 
The system is characterized by a certain large algebra of twisted type which includes a single Virasoro algebra as a subalgebra. By properly defining the physical states, one can show that there are no ghosts in the Hilbert space.\n\nIntroduction\n============\n\nSince the 1990\u2019s, when D-branes and various string dualities were found, string junctions have been studied by many authors in the context of superstrings and M-theory.[^1] These analyses mainly focused on the static properties such as BPS conditions or stability, with a few exceptions (e.g.\u00a0[@Rey:1997sp; @Callan:1998sf].) String junctions are dynamical objects formed by dynamical strings, so that one can naturally ask their dynamical properties such as spectrum of their excited states and other quantum features beyond the static properties.\n\nGoing back to the 1970\u2019s, some earlier" +"---\nabstract: 'In this work, we investigate the parameter space of the Georgi-Machacek (GM) model, where we consider many theoretical and experimental constraints such as the perturbativity, vacuum stability, unitarity, electroweak precision tests, the Higgs diphoton decay, the Higgs total decay width and the LHC measurements of the signal strengths of the SM-like Higgs boson $h$ in addition to the constraints from doubly charged Higgs bosons and Drell-Yan diphoton production and the indirect constraint from the $b\\to s$ transition processes. We investigate also the possibility that the electroweak vacuum could be destabilized by unwanted wrong minima that may violate the $CP$ and/or the electric charge symmetries. We found that about 40 % of the parameter space that fulfills the above mentioned constraints are excluded by these unwanted minima. 
In addition, we found that the negative searches for a heavy resonance could exclude a significant part of the viable parameter space, and future searches could exclude more regions in the parameter space.'\nauthor:\n- Zahra Bairi\n- Amine Ahriche\ntitle: 'More constraints on the Georgi-Machacek model'\n---\n\nIntroduction\n============\n\nSince the discovery of a Standard Model (SM)-like 125 GeV Higgs boson at the Large Hadron Collider (LHC)\u00a0[@ATLAS:2012yve], many questions are" +"---\nabstract: 'Detection models trained by one party (including server) may face severe performance degradation when distributed to other users (clients). Federated learning can enable multi-party collaborative learning without leaking client data. In this paper, we focus on a special cross-domain scenario in which the server has large-scale labeled data and multiple clients only have a small amount of labeled data; meanwhile, there exist differences in data distributions among the clients. In this case, traditional federated learning methods can\u2019t help a client learn both the global knowledge of all participants and its own unique knowledge. To make up for this limitation, we propose a cross-domain federated object detection framework, named FedOD. The proposed framework first performs the federated training to obtain a public global aggregated model through multi-teacher distillation, and sends the aggregated model back to each client for fine-tuning its personalized local model. After a few rounds of communication, on each client we can perform weighted ensemble inference on the public global model and the personalized local model. We establish a federated object detection dataset which has significant background differences and instance differences based on multiple public autonomous driving datasets, and then conduct extensive experiments on the dataset. 
The" +"---\nabstract: 'The $^{16}$F nucleus is situated at the proton drip-line and is unbound by proton emission by only about 500 keV. Continuum coupling is then prominent in this nucleus. Added to that, its low-lying spectrum consists of narrow proton resonances as well. It is then a very good candidate to study nuclear structure and reactions at proton drip-line. The low-lying spectrum and scattering proton-proton cross section of $^{16}$F have then been calculated with the coupled-channel Gamow shell model framework for that matter using an effective Hamiltonian. Experimental data are very well reproduced, as well as in its mirror nucleus $^{16}$N. Isospin-symmetry breaking generated by the Coulomb interaction and continuum coupling explicitly appears in our calculations. In particular, the different continuum couplings in $^{16}$F and $^{16}$N involving $s_{1/2}$ partial waves allow to explain the different ordering of low-lying states in their spectrum.'\nauthor:\n- 'N. Michel'\n- 'J. G. Li'\n- 'L. H. Ru'\n- 'W. Zuo'\nbibliography:\n- 'references.bib'\ntitle: 'Calculation of the Thomas-Ehrman shift in $^{16}$F and $^{15}$O(p,p) cross section with the Gamow shell model'\n---\n\nIntroduction\n============\n\nThe area of the nuclear chart for which $A \\sim 20$ possesses interesting features by many aspects. On the one" +"---\nabstract: 'The formation of modified *Z*-phase in a 12Cr1MoV (German grade: X20) tempered martensite ferritic (TMF) steel subjected to interrupted long-term creep-testing at and 120 MPa was investigated. Quantitative volumetric measurements collected from thin-foil and extraction replica samples showed that modified *Z*-phase precipitated in both the uniformly-elongated gauge ($f_v$: 0.23 $\\pm$ 0.02 %) and thread regions ($f_v$: 0.06 $\\pm$ 0.01 %) of the sample that ruptured after 139 kh. 
The formation of modified *Z*-phase was accompanied by a progressive dissolution of MX precipitates, which decreased from ($f_v$: 0.16 $\\pm$ 0.02%) for the initial state to ($f_v$: 0.03 $\\pm$\u00a00.01%) in the uniformly-elongated gauge section of the sample tested to failure. The interparticle spacing of the creep-strengthening MX particles increased from ($\\lambda_{3D}$: 0.55 $\\pm$\u00a00.05 $\\mu m$) in the initial state to ($\\lambda_{3D}$: 1.01 $\\pm$\u00a00.10 $\\mu m$) for the uniformly-elongated gauge section of the ruptured sample, while the thread region had an interparticle spacing of ($\\lambda_{3D}$: 0.60 $\\pm$\u00a00.05 $\\mu m$). The locally deformed fracture region had an increased phase fraction of modified *Z*-phase ($f_v$: 0.40 $\\pm$ 0.20%), which implies that localised creep-strain strongly promotes the formation of modified *Z*-phase. The modified *Z*-phase precipitates did not form only on" +"---\nabstract: 'This work explores a synchronization-like phenomenon induced by common noise for continuous-time Markov jump processes given by chemical reaction networks. A corresponding random dynamical system is formulated in a two-step procedure, at first for the states of the embedded discrete-time Markov chain and then for the augmented Markov chain including also random jump times. We uncover a time-shifted synchronization in the sense that \u2013 after some initial waiting time \u2013 one trajectory exactly replicates another one with a certain time delay. Whether or not such a synchronization behaviour occurs depends on the combination of the initial states. We prove this partial time-shifted synchronization for the special setting of a birth-death process by analyzing the corresponding two-point motion of the embedded Markov chain and determine the structure of the associated random attractor. 
In this context, we also provide general results on existence and form of random attractors for discrete-time, discrete-space random dynamical systems.'\nauthor:\n- 'Maximilian Engel[^1]'\n- 'Guillermo Olic\u00f3n-M\u00e9ndez'\n- Nathalie Unger\n- 'Stefanie Winkelmann[^2]'\nbibliography:\n- 'bibliography.bib'\ntitle: Synchronization and random attractors for reaction jump processes\n---\n\n**Keywords:** chemical reaction networks, random attractors, random periodic orbits, reaction jump processes, synchronization\n\n**MSC2020:** 37H99, 60J27, 60J10, 92C40\n\nIntroduction\n============" +"---\nabstract: 'We consider the propagation of light in a random medium of two-level atoms. We investigate the dynamics of the field and atomic probability amplitudes for a two-photon state and show that at long times and large distances, the corresponding average probability densities can be determined from the solutions to a system of kinetic equations.'\naddress:\n- 'Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027'\n- 'Department of Mathematics and Department of Physics, Yale University, New Haven, CT 06515'\nauthor:\n- Joseph Kraisler\n- 'John C. Schotland'\ntitle: 'Kinetic equations for two-photon light in random media'\n---\n\nIntroduction\n============\n\nThe propagation of light in random media is usually considered within the setting of classical optics\u00a0[@vanRossum_1999]. However, there has been considerable recent interest in phenomena where quantum effects play a key role\u00a0[@Lodahl_2005_1; @Smolka_2009; @Smolka_2012; @Peeters_2010; @Pires_2012; @Smolka_2011; @vanExter_2012; @Lodahl_2005_2; @Lodahl_2006_1; @Lodahl_2006_2; @Patra_1999; @Beenakker_2009; @Tworzydlo_2002; @Beenakker_1998]. 
Of particular importance is understanding the impact of multiple scattering on entangled two-photon states, with an eye towards characterizing the transfer of entanglement from the field to matter\u00a0[@Beenakker_2009; @Lahini_2010; @Peeters_2010; @Ott_2010; @Cherroret_2011; @Cande_2013; @Cande_2014]. Progress in this direction can be expected to lead to advances in spectroscopy" +"---\nabstract: 'Reconfigurable intelligent surface (RIS)-based communication networks promise to improve channel capacity and energy efficiency. However, the promised capacity gains could be negligible for passive RISs because of the double pathloss effect. Active RISs can overcome this issue because they have reflector elements with a low-cost amplifier. This letter studies the active RIS-aided simultaneous wireless information and power transfer (SWIPT) in a multiuser system. The users exploit power splitting (PS) to decode information and harvest energy simultaneously based on a realistic piecewise nonlinear energy harvesting model. The goal is to minimize the base station (BS) transmit power by optimizing its beamformers, PS ratios, and RIS phase shifts/amplification factors. The simulation results show significant improvements (e.g., $ 19 \\% $ and $28 \\%$) with the maximum reflect power of $10$ mW and $15$ mW, respectively, compared to the passive RIS without higher computational complexity cost. We also show the robustness of the proposed algorithm against imperfect channel state information.'\nauthor:\n- 'Shayan Zargari, Azar Hakimi, and Chintha Tellambura, Sanjeewa Herath, [^1]'\nbibliography:\n- 'ref.bib'\ntitle: ' Multiuser MISO PS-SWIPT Systems: Active or Passive RIS? 
'\n---\n\nReconfigurable intelligent surface (RIS), active RIS, simultaneous wireless information and power transfer, energy harvesting," +"---\nabstract: 'We investigated the possibility of the homogeneous and isotropic cosmological solution in Weyl geometry, which differs from the Riemannian geometry by adding the so called Weyl vector. The Weyl gravity is obtained by constructing the gravitational Lagrangian both to be quadratic in curvatures and conformal invariant. It is found that such solution may exist provided there exists the direct interaction between the Weyl vector and the matter fields. Assuming the matter Lagrangian is that of the perfect fluid, we found how such an interaction can be implemented. Due to the existence of quadratic curvature terms and the direct interaction the perfect fluid particles may be created straight from the vacuum, and we found the expression for the rate of their production which appeared to be conformal invariant. In the case of creating the universe \u201cfrom nothing\u201d in the vacuum state, we investigated the problem, whether this vacuum may persist or not. It is shown that the vacuum may persist with respect to producing the non-dust matter (with positive pressure), but cannot resist to producing the dust particles. These particles, being non-interactive, may be considered as the candidates for dark matter.'\naddress: |\n Institute for Nuclear Research of the" +"---\nabstract: 'One powerful paradigm in visual navigation is to predict actions from observations directly. Training such an end-to-end system allows representations useful for downstream tasks to emerge automatically. However, the lack of inductive bias makes this system data inefficient. We hypothesize a sufficient representation of the current view and the goal view for a navigation policy can be learned by predicting the location and size of a crop of the current view that corresponds to the goal. 
We further show that training such random crop prediction in a self-supervised fashion purely on synthetic noise images transfers well to natural home images. The learned representation can then be bootstrapped to learn a navigation policy efficiently with little interaction data. **The code is available at** '\nauthor:\n- 'Yanwei Wang$^{1}$ and Ching-Yun Ko$^{1}$ and Pulkit Agrawal$^{1}$ [^1][^2]'\nbibliography:\n- 'egbib.bib'\ntitle: '**Visual Pre-training for Navigation: What Can We Learn from Noise?** '\n---\n\nINTRODUCTION\n============\n\nConsider a visual navigation task, where an agent needs to navigate to a target described by a goal image. Whether the agent should move forward or turn around to explore depends on if the goal image can be found in the current view. In other words, visual" +"---\nabstract: |\n These notes give an introduction to the equivalence problem of sub-Riemannian manifolds. We first introduce preliminaries in terms of connections, frame bundles and sub-Riemannian geometry. Then we arrive at the main aim of these notes, which is to give the description of the canonical grading and connection existing on sub-Riemannian manifolds with constant symbol. These structures are exactly what is needed in order to determine if two manifolds are isometric. We give three concrete examples, which are Engel (2,3,4)-manifolds, contact manifolds and Cartan (2,3,5)-manifolds.\n\n These notes are an edited version of a lecture series given at the [42nd Winter school: Geometry and Physics](https://conference.math.muni.cz/srni/), Srn\u00ed, Czech Republic, mostly based on [@Gro20b] and other earlier work. 
However, the work on Engel (2,3,4)-manifolds is original research, and illustrates the important special case where our model has the minimal set of isometries.\nauthor:\n- 'Erlend Grong[^1]'\nbibliography:\n- 'ErlendBib.bib'\ndate: 'Srn\u00ed, January 15-22, 2022'\ntitle: ' Curvature and the equivalence problem in sub-Riemannian geometry'\n---\n\n[*Keywords\u2014* Sub-Riemannian geometry, equivalence problem, frame bundle, Cartan connection, flatness theorem]{}\n\n[*Mathematics Subject Classification (2020)\u2014* 53C17,58A15]{}\n\nIntroduction\n============\n\n[*When are two spaces different?*]{} Let us start with a simple object. The two-dimensional vector space $\\mathbb{R}^2$ with
Studies of the correlation between the variabilities at multi-frequencies (from radio to $\\gamma$ rays) play an important role.\\\n3C345 is a remarkable compact flat-spectrum radio source which was one of the first quasars discovered to have a relativistic jet, emanating superluminal components steadily.\\\nVLBI-observations have revealed the parsec structure of its jet" +"---\nabstract: 'We present a calculation of the rational terms in two-loop all-plus gluon amplitudes using $D$-dimensional unitarity. We use a conjecture of separability of the two loops, and then a simple generalization of one-loop $D$-dimensional unitarity to perform calculations. We compute the four- and five-point rational terms analytically, and the six- and seven-point ones numerically. We find agreement with previous calculations of Dalgleish, Dunbar, Godwin, Jehu, Perkins, and Strong. For a special subleading-color amplitude, we compute the eight- and nine-point results numerically, and find agreement with an all-$n$ conjecture of Dunbar, Perkins, and Strong.'\nauthor:\n- 'David\u00a0A.\u00a0Kosower'\n- 'Sebastian P[\u00f6]{}gel'\ntitle: 'A Unitarity Approach to Two-Loop All-Plus Rational Terms'\n---\n\n=15 pt\n\nIntroduction {#sec:introductionsection}\n============\n\nIncreasing integrated luminosity at the Large Hadron Collider (LHC) in the coming decade will drive experimenters\u2019 search for physics beyond the Standard Model (SM). The LHC will not be surging into a new energy domain, but will be sensitive to ever-fainter discrepancies from SM predictions. The greater sensitivity will emerge both from increased statistics and from a better understanding of systematic uncertainties. Greater experimental sensitivity does not suffice, however, in order to find small deviations from theoretical expectations. 
We also need higher-precision" +"---\nauthor:\n- Jinnan Zhang\n- and Jun Cao\nbibliography:\n- 'SubPercentTheta13WithReactor.bib'\ntitle: 'Towards a Sub-percent Precision Measurement of $\\sin^2\\theta_{13}$ with Reactor Antineutrinos'\n---\n\n[!keywords!keywords[Neutrino Oscillation, Reactor Antineutrino, $\\sin^2\\theta_{13}$]{}]{}\n\nIntroduction {#sec:introduction}\n============\n\nSince first detected in 1956 by Reines and Cowan at the Savannah River reactor, neutrinos have played an inspiring role in particle physics. Plenty of experiments prove that there are three flavors of neutrinos, and they are massive. Neutrinos are created via electroweak interactions in flavor states ${\\nu}_e$, ${\\nu}_\\mu$, and ${\\nu}_\\tau$. The neutrino oscillation phenomenon reveals that the three mass eigenstates $\\nu_{1,2,3}$ with masses $m_{1,2,3}$ are non-degenerate from the three flavor eigenstates. The Pontecorvo-Maki-Nakagawa-Sakata (PMNS)\u00a0[@Pontecorvo:1967fh; @Maki:1962mu] matrix $U$ is proposed to describe the mixing among massive neutrino states, $\\nu_\\alpha=U_{\\alpha i}\\nu_i$, where $\\alpha$ indexes flavor states and $i$ indexes mass states. The mixing matrix can be parameterized with three mixing angles, $\\theta_{12}$, $\\theta_{13}$, and $\\theta_{23}$, plus a CP violation phase $\\delta_{\\rm CP}$. If neutrinos are Majorana fermions, there will be two additional phases irrelevant to the neutrino oscillation. See Ref.\u00a0[@Athar:2021xsd] for comprehensive reviews and perspectives of neutrino physics.\n\nTable\u00a0\\[tab:current:knowledge\\] summarizes the current precision estimation of the oscillation parameters and the dominant types of experiments, taken from" +"---\nabstract: 'This paper studies highly oscillatory solutions to a class of systems of semilinear hyperbolic equations with a small parameter, in a setting that includes Klein\u2013Gordon equations and the Maxwell\u2013Lorentz system. 
The interest here is in solutions that are polarized in the sense that up to a small error, the oscillations in the solution depend on only one of the frequencies that satisfy the dispersion relation with a given wave vector appearing in the initial wave packet. The construction and analysis of such polarized solutions is done using modulated Fourier expansions. This approach includes higher harmonics and yields approximations to polarized solutions that are of arbitrary order in the small parameter, going well beyond the known first-order approximation via a nonlinear Schr\u00f6dinger equation. The given construction of polarized solutions is explicit, uses in addition a linear Schr\u00f6dinger equation for each further order of approximation, and is accessible to direct numerical approximation.'\nauthor:\n- 'Julian Baumstark[^1]'\n- Tobias Jahnke\n- 'Christian Lubich[^2]'\nbibliography:\n- 'literature.bib'\ntitle: 'Polarized high-frequency wave propagation beyond the nonlinear Schr\u00f6dinger approximation[^3]'\n---\n\nIntroduction\n============\n\nWe consider semilinear hyperbolic systems of the form $$\\begin{aligned}\n\\label{PDE.uu}\n \\partial_\\t {u}+A(\\partial_{x}){u}+\\frac{1}{{\\varepsilon}} E{u}&= {\\varepsilon}\\, T({u},{u},{u}),\n \\qquad {x}\\in{\\mathbb{R}}^d, \\,\\t \\in [0,{\\tt_{\\mbox{\\tiny end}}}/{\\varepsilon}]\\end{aligned}$$ with" +"---\nabstract: |\n We present a scalable, cloud-based science platform solution designed to enable next-to-the-data analyses of terabyte-scale astronomical tabular datasets. The presented platform is built on Amazon Web Services (over Kubernetes and S3 abstraction layers), utilizes Apache Spark and the Astronomy eXtensions for Spark for parallel data analysis and manipulation, and provides the familiar JupyterHub web-accessible front-end for user access. 
We outline the architecture of the analysis platform, provide implementation details, rationale for (and against) technology choices, verify scalability through strong and weak scaling tests, and demonstrate usability through an example science analysis of data from the Zwicky Transient Facility\u2019s 1Bn+ light-curve catalog. Furthermore, we show how this system enables an end-user to iteratively build analyses (in Python) that transparently scale processing with no need for end-user interaction.\n\n The system is designed to be deployable by astronomers with moderate cloud engineering knowledge, or (ideally) IT groups. Over the past three years, it has been utilized to build science platforms for the DiRAC Institute, the ZTF partnership, the LSST Solar System Science Collaboration, the LSST Interdisciplinary Network for Collaboration and Computing, as well as for numerous short-term events (with over 100 simultaneous users). A live demo instance, the deployment scripts," +"---\nabstract: 'In this paper, we tackle the new Language-Based Audio Retrieval task proposed in DCASE 2022[^1]. Firstly, we introduce a simple, scalable architecture which ties both the audio and text encoder together. Secondly, we show that using this architecture along with contrastive loss allows the model to significantly beat the performance of the baseline model. Finally, in addition to having an extremely low training memory requirement, we are able to use pretrained models as it is without needing to finetune them. 
We test our methods and show that using a combination of our methods beats the baseline scores significantly.'\naddress: 'Nanyang Technological University[^2]'\nbibliography:\n- 'refs.bib'\ntitle: 'Language-Based Audio Retrieval with Converging Tied Layers and Contrastive Loss'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe Language-Based Audio Retrieval task is a new form of cross modal learning [@Xie_2022_audio_retrieval] which aims to rank a list of audio clips according to their relevance given a query caption. These query captions [@drossos_clotho_2019] are descriptive natural language sentences annotated by humans, and they describe the acoustic events happening in both foreground and background of the audio clip. Being able to model and interpret the relationship between audio clips and a text sequence is helpful towards many" +"---\nabstract: 'We introduce a cluster algebraic generalization of Thurston\u2019s earthquake map for the cluster algebras of finite type, which we call the *cluster earthquake map*. It is defined by gluing exponential maps, which is modeled after the earthquakes along ideal arcs. We prove an analogue of the earthquake theorem, which states that the cluster earthquake map gives a homeomorphism between the spaces of $\\mathbb{R}^\\mathrm{trop}$- and $\\mathbb{R}_{>0}$-valued points of the cluster $\\mathcal{X}$-variety. For those of type $A_n$ and $D_n$, the cluster earthquake map indeed recovers the earthquake maps for marked disks and once-punctured marked disks, respectively. 
Moreover, we investigate certain asymptotic behaviors of the cluster earthquake map, which give rise to \u201ccontinuous deformations\u201d of the Fock\u2013Goncharov fan.'\naddress:\n- 'Takeru Asaka, Graduate School of Mathematical Sciences, the University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo, 153-8914, Japan.'\n- 'Tsukasa Ishibashi, Mathematical Institute, Tohoku University, 6-3 Aoba, Aramaki, Aoba-ku, Sendai, Miyagi 980-8578, Japan.'\n- 'Shunsuke Kano, Research Alliance Center for Mathematical Sciences, Tohoku University 6-3 Aoba, Aramaki, Aoba-ku, Sendai, Miyagi 980-8578, Japan.'\nauthor:\n- Takeru Asaka\n- Tsukasa Ishibashi\n- Shunsuke Kano\ntitle: Earthquake theorem for cluster algebras of finite type\n---\n\nIntroduction\n============\n\nThurston\u2019s earthquake maps\n--------------------------\n\nEarthquakes are particular deformations" +"---\nabstract: 'We construct the exact solution for a family of one-half spin chains explicitly. The spin chains Hamiltonian corresponds to an isotropic Heisenberg Hamiltonian, with staggered exchange couplings that take only two different values. We work out the exact solutions in the one- excitation subspace. Regarding the problem of quantum state transfer, we use the solution and some theorems concerning the approximation of irrational numbers, to show the appearance of conclusive pretty good transmission for chains with particular lengths. We present numerical evidence that pretty good transmission is achieved by chains whose length is not a power of two. The set of spin chains that shows pretty good transmission is a subset of the family with an exact solution. Using perturbation theory, we thoroughly analyze the case when one of the exchange coupling strengths is orders of magnitude larger than the other. This strong coupling limit allows us to study, in a simple way, the appearance of pretty good transmission. 
The use of analytical closed expressions for the eigenvalues, eigenvectors, and transmission probabilities allows us to obtain the precise asymptotic behavior of the time where the pretty good transmission is observed. Moreover, we show that this time scales as" +"---\nabstract: 'We study two problems related to recovering causal graphs from interventional data: (i) *verification*, where the task is to check if a purported causal graph is correct, and (ii) *search*, where the task is to recover the correct causal graph. For both, we wish to minimize the number of interventions performed. For the first problem, we give a characterization of a minimal sized set of atomic interventions that is necessary and sufficient to check the correctness of a claimed causal graph. Our characterization uses the notion of *covered edges*, which enables us to obtain simple proofs and also easily reason about earlier known results. We also generalize our results to the settings of bounded size interventions and node-dependent interventional costs. For all the above settings, we provide the first known provable algorithms for efficiently computing (near)-optimal verifying sets on general graphs. For the second problem, we give a simple adaptive algorithm based on graph separators that produces an atomic intervention set which fully orients any essential graph while using ${\\mathcal{O}}(\\log n)$ times the optimal number of interventions needed to *verify* (verifying size) the underlying DAG on $n$ vertices. This approximation is tight as *any* search algorithm on an" +"---\nabstract: 'Ambient prime geodesic theorems provide an asymptotic count of closed geodesics by their length and holonomy and imply effective equidistribution of holonomy. We show that for a smoothed count of closed geodesics on compact hyperbolic 3-manifolds, there is a persistent bias in the secondary term which is controlled by the number of zero spectral parameters. 
In addition, we show that a normalized, smoothed bias count is distributed according to a probability distribution, which we explicate when all distinct, non-zero spectral parameters are linearly independent. Finally, we construct an example of dihedral forms which does not satisfy this linear independence condition.'\naddress:\n- 'Millersville University, Department of Mathematics, PO Box 1002, Millersville, PA 17551, USA'\n- 'Bryn Mawr College, Department of Mathematics, 101 N Merion Ave, Bryn Mawr, PA 19010, USA'\nauthor:\n- Lindsay Dever\nbibliography:\n- 'biblio.bib'\ntitle:\n- Bias in the Distribution of Holonomy\n- |\n Bias in the distribution of holonomy\\\n on compact hyperbolic 3-manifolds\n---\n\nIntroduction and Background\n===========================\n\nDistribution of holonomy\n------------------------\n\nThe Prime Geodesic Theorem for a hyperbolic 3-manifold $\\Gamma \\backslash \\mathbb{H}^3$ states that the count $\\pi_\\Gamma(y)$ of primitive geodesics of length up to $y$ satisfies $$\\pi_\\Gamma(y) = \\text{Ei}_\\Gamma(y) + \\mathrm{O}_{\\Gamma, \\epsilon}(e^{(\\frac53+\\epsilon)y}),$$ where" +"---\nabstract: 'In this note, the importance of spectral properties of viscous flux discretization in solving compressible Navier-Stokes equations for turbulent flow simulations is discussed. We studied six different methods, divided into two different classes, with poor and better representation of spectral properties at high wavenumbers. Both theoretical and numerical results have revealed that the method with better properties at high wavenumbers, denoted as $\\alpha$-damping type discretization, produced superior solutions compared to the other class of methods. 
The proposed compact $\\alpha$-damping method converged towards the direct numerical simulation (DNS) solution at lower grid resolution compared with the other class of methods and is, therefore, a better candidate for high fidelity large-eddy simulations (LES) and DNS studies.'\naddress: 'Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa, Israel'\nauthor:\n- Amareshwara Sainadh Chamarthi\n- Hemanth Chandravamsi K\n- Natan Hoffmann\n- Sean Bokor\n- 'Steven H.\u00a0Frankel'\ntitle: On the role of spectral properties of viscous flux discretization for flow simulations on marginally resolved grids\n---\n\nViscous, Diffusion, Finite-difference, High-frequency damping, Turbulence\n\nIntroduction {#sec1}\n============\n\n** [@debonis2013solutions]. Thus far, the conventional wisdom has been that viscous fluxes do not play a critical role in resolving turbulence. However, in this" +"---\nauthor:\n- 'Huitong\u00a0Chen, Yu\u00a0Wang, and\u00a0Qinghua\u00a0Hu,'\nbibliography:\n- 'bare\\_jrnl.bib'\ntitle: 'Multi-Granularity Regularized Re-Balancing for Class Incremental Learning'\n---\n\ndeep neural networks (DNNs) are generally applied in closed scenes, where the entire data to be learned are presumed to be fully obtained during training\u00a0[@Wang2021review; @Li2021AdaptiveNS]. In real-world applications, new classes often appear over time, and the DNNs should be capable of adapting their behaviors dynamically without having to be re-trained from scratch. For instance, autonomous vehicles are required to quickly identify new road signs when entering a new country; fault diagnosis systems should learn to recognize new fault types that occur owing to equipment aging issues\u00a0[@wang2021coarse]. To this end, class incremental learning (CIL) was proposed and has attracted much attention in recent years. DNNs need to dynamically update on the new data instead of being trained on a collective dataset with all classes.
An intuitive approach to CIL is to fine-tune the DNN model for old classes on new data\u00a0[@li2018learning]. Unfortunately, this leads to catastrophic forgetting, where the DNNs significantly lose their ability to identify the old classes\u00a0[@wu2019large; @hou2019learning].\n\nSignificant efforts have been made to address this problem\u00a0[@hou2019learning; @douillard2020podnet; @yan2021der]. Typically" +"---\nauthor:\n- |\n Jean Kaddour$^{*,1}$, Aengus Lynch$^{*,1}$, Qi Liu$^2$, Matt J. Kusner$^1$, Ricardo Silva$^1$\\\n $^*$Equal contribution. ${}^1$University College London. ${}^2$University of Oxford.\\\n {jean.kaddour.20, aengus.lynch.17}@ucl.ac.uk.\\\n \\\n 22 July 2022.\nbibliography:\n- 'references.bib'\ntitle: |\n Causal Machine Learning:\\\n A Survey and Open Problems\n---\n\nIntroduction\n============\n\nToday, machine learning (ML) techniques excel at finding associations in independent and identically distributed (i.i.d.) data. A few fundamental principles, including empirical risk minimization, backpropagation, and inductive biases in architecture design, have led to enormous advances for solving problems in fields like computer vision, natural language processing, graph representation learning, and reinforcement learning.\n\nHowever, new challenges have arisen when deploying these models to real-world settings. 
These challenges include: (1) large reductions in generalization performance when the data distribution shifts [@underspecification], (2) a lack of fine-grained control of samples from generative models [@steerability], (3) biased predictions reinforcing unfair discrimination of certain sub-populations [@mehrabi2021survey; @bender2021dangers], (4) overly abstract and problem-independent notions of interpretability [@lipton2018mythos], and (5) unstable translation of reinforcement learning methods to real-world problems [@dulac2019challenges].\n\nMultiple works have argued that these issues are partly due to the lack of causal formalisms in modern ML systems [@EOCI; @goyal2020inductive; @pearl2019seven; @causal_world; @scholkopf2021towards]. Subsequently, there has been a surge" +"---\nabstract: 'The pipeline of current robotic pick-and-place methods typically consists of several stages: grasp pose detection, finding inverse kinematic solutions for the detected poses, planning a collision-free trajectory, and then executing the open-loop trajectory to the grasp pose with a low-level tracking controller. While these grasping methods have shown good performance on grasping static objects on a table-top, the problem of grasping dynamic objects in constrained environments remains an open problem. We present [Neural Motion Fields]{}, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network. 
This object-centric representation models a continuous distribution over the SE$(3)$ space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.'\nauthor:\n- |\n Yun-Chun Chen$^{1,2,\\ast}$ Adithyavairavan Murali$^{1,\\ast}$ Balakumar Sundaralingam$^{1,\\ast}$\\\n Wei Yang$^1$ Animesh Garg$^{1,2}$ Dieter Fox$^{1,3}$\\\n $^1$NVIDIA $^2$University of Toronto $^3$University of Washington $^\\ast$Equal contribution\\\nbibliography:\n- 'references.bib'\ntitle: ' Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions '\n---\n\nIntroduction\n============\n\nCurrent robotic grasping approaches typically decompose the task of grasping into several sub-components: detecting grasp poses on point clouds\u00a0[@ContactGraspNet; @GADDPG], finding inverse kinematics solutions at these poses," +"---\nabstract: 'The natural gradient field is a vector field that lives on a model equipped with a distinguished Riemannian metric, e.g. the Fisher-Rao metric, and represents the direction of steepest ascent of an objective function on the model with respect to this metric. In practice, one tries to obtain the corresponding direction on the parameter space by multiplying the ordinary gradient by the inverse of the Gram matrix associated with the metric. We refer to this vector on the parameter space as the natural parameter gradient. In this paper we study when the pushforward of the natural parameter gradient is equal to the natural gradient. Furthermore we investigate the invariance properties of the natural parameter gradient. 
Both questions are addressed in an overparametrised setting.'\nauthor:\n- Jesse van Oostrum\n- Johannes M\u00fcller\n- Nihat Ay\nbibliography:\n- 'parametrisation\\_invariance.bib'\ntitle: 'Invariance Properties of the Natural Gradient in Overparametrised Systems[^1]'\n---\n\nIntroduction\n============\n\nWithin the field of deep learning, gradient methods have become ubiquitous tools for parameter optimisation. Standard gradient optimisation procedures use the vector of coordinate derivatives of the objective function as the update direction of the parameters. This is implicitly assuming a Euclidean geometry on the space of parameters." +"---\nabstract: 'Supermassive stars (SMSs) with masses of ${M_\\ast}\\simeq 10^4$\u2013$10^5~{{\\rm M_\\odot}}$ are invoked as possible seeds of high-redshift supermassive black holes, but it remains under debate whether their protostar indeed acquires sufficient mass via gas accretion overcoming radiative feedback. We investigate protostellar growth in dynamically heated atomic-cooling haloes (ACHs) found in recent cosmological simulations, performing three-dimensional radiation hydrodynamical (RHD) simulations that consider stellar evolution under variable mass accretion. We find that one of the ACHs feeds the central protostar at rates exceeding a critical value, above which the star evolves in a cool bloating phase and hardly produces ionizing photons. Consequently, the stellar mass reaches ${M_\\ast}\\gtrsim 10^4~{{\\rm M_\\odot}}$ unimpeded by radiative feedback. In the other ACH, where the mass supply rate is lower, the star spends most of its life as a hot main-sequence star, emitting intense ionizing radiation. Then, the stellar mass growth is terminated around $500~{{\\rm M_\\odot}}$ by photoevaporation of the circumstellar disk. 
A series of our RHD simulations provides a formula for the final stellar mass, determined either by stellar feedback or the stellar lifetime, as a function of the mass supply rate from the parent cloud in the absence of stellar radiation. Combining the results with the" +"---\nabstract: 'The properties of a 3D Bose-Einstein condensate have been studied with variational and numerical methods. In the variational approach, we use the super-Gaussian trial function, and it is demonstrated that this trial function gives a good approximation for the description of the quantum droplets. The analytical equations for the variational parameters are obtained. The frequency of small oscillations of quantum droplets near the equilibrium position is estimated. It is also found that periodic modulation of the coupling constants leads to resonance oscillations of the quantum droplet parameters or emission of waves, depending on the amplitude of the modulations. The predictions are supported by direct numerical simulation of the governing equation.'\naddress: |\n  Physical-Technical Institute of the Uzbek Academy of Sciences,\\\n  Chingiz Aytmatov Str. 2-B, Tashkent, 100084, Uzbekistan\nauthor:\n- 'Sherzod R. Otajonov'\ntitle: 'Quantum droplets in three-dimensional Bose-Einstein condensates'\n---\n\nIntroduction {#sec:intro}\n============\n\nFrom the mean-field theory, a Bose-Einstein condensate (BEC) in two and three dimensions is expected to collapse with the attraction between atoms. There are several methods to stabilize BECs, such as the application of external traps, an account of the three-body interaction term or dipolar interactions, and periodic variation of scattering parameters\u00a0[@Donley2001; @Cornish2000; @Roberts2001; @Koch2008;" +"---\nabstract: 'Ubiquity in network coverage is one of the main features of 5G and is expected to be extended to the computing domain in 6G.
In order to provide this holistic approach of ubiquity in communication and computation, an integration of satellite, aerial and terrestrial networks is foreseen. In particular, the rising amount of applications such as In-Flight Entertainment and Connectivity Services (IFECS) and SDN-enabled satellites renders network management more challenging. Moreover, due to the stringent Quality of Service (QoS) requirements, edge computing gains in importance for these applications. Here, network performance can be boosted by considering components of the aerial network, like aircraft, as potential Multi-Access Edge Computing (MEC) nodes. Thus, we propose an Aerial-Aided Multi-Access Edge Computing (AA-MEC) architecture that provides a framework for optimal management of computing resources and internet-based services in the sky. Furthermore, we formulate optimization problems to minimize the network latency for the two use cases of providing IFECS to other aircraft in the sky and providing services for offloading AI/ML-tasks from satellites. Due to the dynamic nature of the satellite and aerial networks, we propose a re-configurable optimization. For the transforming network we continuously identify the optimal MEC node for each application" +"---\nabstract: 'Dynamic community detection has prospered as a powerful tool for quantifying changes in dynamic brain network connectivity patterns by identifying strongly connected sets of nodes. However, as network science problems and network data to be processed become increasingly sophisticated, a better method is needed to efficiently learn low-dimensional representations from dynamic network data and reveal latent functions that change over time in the brain network. In this work, an adversarial temporal graph representation learning (ATGRL) framework is proposed to detect dynamic communities from a small sample of brain network data.
It adopts a novel temporal graph attention network as an encoder to capture more efficient spatio-temporal features via an attention mechanism in both spatial and temporal dimensions. In addition, the framework employs adversarial training to guide the learning of the temporal graph representation and optimizes the measurable modularity loss to maximize the modularity of the communities. Experiments on real-world brain network datasets demonstrate the effectiveness of this new method.'\nauthor:\n- Changwei Gong\n- Changhong Jing\n- Yanyan Shen\n- Shuqiang Wang\ntitle: Dynamic Community Detection via Adversarial Temporal Graph Representation Learning\n---\n\nIntroduction\n============\n\nNeuroscience is emerging into a generation marked" +"---\nabstract: |\n  We prove that maximum a posteriori estimators are well-defined for diagonal Gaussian priors $\\mu$ on $\\ell^p$ under common assumptions on the potential $\\Phi$. Further, we show connections to the Onsager\u2013Machlup functional and provide a corrected and strongly simplified proof in the Hilbert space case $p=2$, previously established by @dashti2013map [@kretschmann2019nonparametric].\n\n  These corrections do not generalize to the setting $1 \\leq p < \\infty$, which requires a novel convexification result for the difference between the Cameron\u2013Martin norm and the $p$-norm.\nauthor:\n- Ilja Klebanov\n- Philipp Wacker\nbibliography:\n- 'lit.bib'\ntitle: 'Maximum a posteriori estimators in $\\ell^p$ are well-defined for diagonal Gaussian priors.'\n---\n\n**Key words**: inverse problems, maximum a posteriori estimator, Onsager\u2013Machlup functional, small ball probabilities, sequence spaces, Gaussian measures\n\n**AMS subject classification**: 62F15, 62F99, 60H99\n\nIntroduction {#section:Introduction}\n============\n\nLet $(X,\\|\\cdot\\|_{X})$ be a separable Banach space and $\\mu$ a centred and non-degenerate Gaussian (prior) probability measure on $X$.
We are motivated by the inverse problem of inferring the unknown parameter $u\\in X$ via noisy measurements $$y = G(u) + \\epsilon \\label{eq:invProb},$$ where $G: X\\to \\mathbb R^d$ is a (possibly nonlinear) measurement operator and $\\epsilon$ is measurement noise, typically assumed to be independent of $u$. The Bayesian" +"---\nabstract: 'Inspired by the fact that humans use diverse sensory organs to perceive the world, sensors with different modalities are deployed in end-to-end driving to obtain the global context of the 3D scene. In previous works, camera and LiDAR inputs are fused through transformers for better driving performance. These inputs are normally further interpreted as high-level map information to assist navigation tasks. Nevertheless, extracting useful information from the complex map input is challenging, for redundant information may mislead the agent and negatively affect driving performance. We propose a novel approach to efficiently extract features from vectorized High-Definition (HD) maps and utilize them in the end-to-end driving tasks. In addition, we design a new expert to further enhance the model performance by considering multi-road rules. Experimental results prove that both of the proposed improvements enable our agent to achieve superior performance compared with other methods. The source code is released as an open-source package.'\nauthor:\n- 'Qingwen Zhang$^{1}$, Mingkai Tang$^{1}$, Ruoyu Geng$^{1}$, Feiyi Chen$^{1}$, Ren Xin$^{1}$, Lujia Wang$^{1,2}$'\nbibliography:\n- 'IEEEabrv.bib'\n- 'ref.bib'\ntitle: |\n **MMFN: Multi-Modal-Fusion-Net for End-to-End Driving**\n\n [^1] [^2]\n---\n\nINTRODUCTION\n============\n\nAutonomous driving is conducted via several modules[@gog2021pylot; @liu2020hercules], namely localization[@jiao2021greedy], perception[@liu2021ground], planning[@cheng2022real], and control. 
However," +"---\nabstract: 'Navigation inside luminal organs is an arduous task that requires non-intuitive coordination between the movement of the operator\u2019s hand and the information obtained from the endoscopic video. The development of tools to automate certain tasks could alleviate the physical and mental load of doctors during interventions, allowing them to focus on diagnosis and decision-making tasks. In this paper, we present a synergic solution for intraluminal navigation consisting of a 3D printed endoscopic soft robot that can move safely inside luminal structures. Visual servoing, based on Convolutional Neural Networks (CNNs) is used to achieve the autonomous navigation task. The CNN is trained with phantoms and in-vivo data to segment the lumen, and a model-less approach is presented to control the movement in constrained environments. The proposed robot is validated in anatomical phantoms in different path configurations. We analyze the movement of the robot using different metrics such as task completion time, smoothness, error in the steady-state, mean and maximum error. We show that our method is suitable to navigate safely in hollow environments and conditions which are different than the ones the network was originally trained on.'\nauthor:\n- |\n Jorge F. Lazo$ \\dagger^{1,2}$, Chun-Feng Lai$ \\dagger^{1,3}$, Sara Moccia$^{4,5}$," +"---\nabstract: 'The narrowband and far-field assumption in conventional wireless system design leads to a mismatch with the optimal beamforming required for wideband and near-field systems. This discrepancy is exacerbated for larger apertures and bandwidths. To characterize the behavior of near-field and wideband systems, we derive the beamforming gain expression achieved by a frequency-flat phased array designed for plane-wave propagation. To determine the far-field to near-field boundary for a wideband system, we propose a frequency-selective distance metric. 
The proposed far-field threshold increases for frequencies away from the center frequency. The analysis results in a fundamental upper bound on the product of the array aperture and the system bandwidth. We present numerical results to illustrate how the gain threshold affects the maximum usable bandwidth for the n260 and n261 5G NR bands.'\nauthor:\n- |\n \\\n [^1]\nbibliography:\n- 'references\\_Nitish.bib'\ntitle: 'A wideband generalization of the near-field region for extremely large phased-arrays'\n---\n\nNear-field, wideband, phased-array, frequency-selective, beamforming gain.\n\nIntroduction\n============\n\nDistinguishing between the near-field and far-field is becoming increasingly relevant as modern wireless devices begin to operate in both propagation regions. The most common near-field distance, known as the Fraunhofer distance, is proportional to the square of the aperture and" +"---\nabstract: '[Compact-object binaries including a white dwarf component]{} are unique among gravitational-wave sources because their evolution is governed not just by general relativity and tides, but also by mass transfer. While the black hole and neutron star binaries observed with ground-based gravitational-wave detectors are driven to inspiral due to the emission of gravitational radiation\u2014manifesting as a \u201cchirp-like\u201d gravitational-wave signal\u2014the astrophysical processes at work in double white dwarf (DWD) systems can cause the inspiral to stall and even reverse into an outspiral. The dynamics of the DWD outspiral thus encode information about tides, which tell us about the behaviour of electron-degenerate matter. We carry out a population study to determine the effect of the strength of tides on the distributions of the DWD binary parameters that the Laser Interferometer Space Antenna (LISA) will be able to constrain. 
[We find that the strength of tidal coupling parameterized via the tidal synchronization timescale at the onset of mass transfer affects the distribution of gravitational-wave frequencies and frequency derivatives for detectably mass-transferring DWD systems. Using a hierarchical Bayesian framework informed by binary population synthesis simulations, we demonstrate how this parameter can be inferred using LISA observations.]{} By measuring the population properties of DWDs," +"---\nabstract: 'We discuss electroweak baryogenesis in aligned two Higgs doublet models. It is known that in this model the severe constraint from the experimental results for the electron electric dipole moment can be avoided by destructive interference among CP-violating effects in the Higgs sector. In our previous work, we showed that the observed baryon number in the Universe can be explained without contradicting currently available data in a specific scenario in the same model. We here first discuss details of the evaluation of the baryon number based on the WKB method, taking into account all orders of the wall velocity. We then investigate parameter spaces which are allowed under the currently available data from collider, flavor and electric dipole moment experiments simultaneously. We find several benchmark scenarios which can explain the baryon asymmetry of the Universe. We also discuss how we can test these benchmark scenarios at future collider experiments, various flavor experiments and gravitational wave observations.'\nauthor:\n- Kazuki Enomoto\n- Shinya Kanemura\n- Yushi Mura\ntitle: |\n New benchmark scenarios of electroweak baryogenesis\\\n in aligned two Higgs doublet models\n---\n\nIntroduction\n============\n\nBaryon Asymmetry of the Universe (BAU) is one of the big mysteries of particle physics and cosmology."
+"---\nabstract: 'Dual-unitary circuits have emerged as a minimal model for chaotic quantum many-body dynamics in which the dynamics of correlations and entanglement remains tractable. Simultaneously, there has been intense interest in the effect of measurements on the dynamics of quantum information in many-body systems. In this work we introduce a class of models combining dual-unitary circuits with particular projective measurements that allow the exact computation of dynamical correlations of local observables, entanglement growth, and steady-state entanglement. We identify a symmetry preventing a measurement-induced phase transition and present exact results for the intermediate critical purification phase.'\nauthor:\n- 'Pieter W. Claeys'\n- Marius Henry\n- Jamie Vicary\n- Austen Lamacraft\nbibliography:\n- 'Library.bib'\ntitle: 'Exact dynamics in dual-unitary quantum circuits with projective measurements'\n---\n\n*Introduction.* \u2013 Understanding the dynamics of quantum many-body systems remains a crucial problem with applications ranging from condensed matter physics to quantum information. Computing dynamical quantities such as correlation functions and entanglement growth is a notoriously hard problem in interacting systems, both numerically and analytically. In recent years unitary circuits have moved beyond models of quantum computation to minimal models for the study of general unitary dynamics governed by local interactions [@nahum_quantum_2017; @khemani_operator_2018; @von_keyserlingk_operator_2018; @nahum_operator_2018; @chan_solution_2018;" +"---\nabstract: |\n The popular systemic risk measure CoVaR (conditional Value-at-Risk) is widely used in economics and finance. Formally, it is defined as an (extreme) quantile of one variable (e.g., losses in the financial system) conditional on some other variable (e.g., losses in a bank\u2019s shares) being in distress and, hence, measures the spillover of risks. 
In this article, we propose joint dynamic and semiparametric models for VaR and CoVaR together with a two-step M-estimator for the model parameters drawing on recently proposed bivariate scoring functions for the pair (VaR, CoVaR). Among others, this allows for the estimation of joint dynamic forecasting models for (VaR, CoVaR). We prove consistency and asymptotic normality of the proposed estimator and analyze its finite-sample properties in simulations. We apply our dynamic models to generate CoVaR forecasts for real financial data, which are shown to be superior to existing methods.\\\n **Keywords:** CoVaR, Estimation, Forecasting, Modeling, Systemic Risk\\\n **JEL classification:** C14 (Semiparametric and Nonparametric Methods); C22 (Time-Series Models); C58 (Financial Econometrics); G17 (Financial Forecasting and Simulation); G32 (Financial Risk and Risk Management)\nauthor:\n- 'Timo Dimitriadis[^1]'\n- 'Yannick Hoga[^2]'\nbibliography:\n- 'bibCoQR.bib'\ntitle: 'Dynamic CoVaR Modeling[^3]'\n---\n\n1.5ex plus1ex minus1ex 1.5ex plus1ex minus1ex 1.5ex plus1ex minus1ex" +"---\nabstract: 'We address the enumeration of coprime polynomial pairs over ${\\mathbb{F}}_2$ where both polynomials have a nonzero constant term, motivated by the construction of orthogonal Latin squares via cellular automata. To this end, we leverage on Benjamin and Bennett\u2019s bijection between coprime and non-coprime pairs, which is based on the sequences of quotients visited by dilcuE\u2019s algorithm (i.e. Euclid\u2019s algorithm ran backward). This allows us to break our analysis of the quotients in three parts, namely the enumeration and count of: (1) sequences of constant terms, (2) sequences of degrees, and (3) sequences of intermediate terms. For (1), we show that the sequences of constant terms form a regular language, and use classic results from algebraic language theory to count them. 
Concerning (2), we remark that the sequences of degrees correspond to compositions of natural numbers, which have a simple combinatorial description. Finally, we show that for (3) the intermediate terms can be freely chosen. Putting these three observations together, we devise a combinatorial algorithm to enumerate all such coprime pairs of a given degree, and present an alternative derivation of their counting formula.'\nauthor:\n- Enrico Formenti\n- Luca Mariot\nbibliography:\n- 'bibliography.bib'\ntitle: An Enumeration Algorithm for" +"---\nabstract: |\n  We present a new class of neurons, ARNs, which give a cross entropy on test data that is up to three times lower than the one achieved by carefully optimized LSTM neurons. The explanations for the huge improvements that often are achieved are elaborate skip connections through time, up to four internal memory states per neuron and a number of novel activation functions including small quadratic forms.\n\n  The new neurons were generated using automatic programming and are formulated as pure functional programs that can easily be transformed.\n\n  We present experimental results for eight datasets and found excellent improvements for seven of them, but LSTM remained the best for one dataset.
The results are so promising that automatic programming to generate new neurons should become part of the standard operating procedure for any machine learning practitioner who works on time series data such as sensor signals.\nauthor:\n- |\n Roland Olsson Roland.Olsson@hiof.no\\\n Chau Tran Chau.Tran@hiof.no\\\n Lars Magnusson Lars.Magnusson@hiof.no\\\n Department of Computer Science\\\n [\u00d8]{}stfold University College\\\n N-1757 Halden, Norway\nbibliography:\n- 'arn4.bib'\ntitle: Automatic Synthesis of Neurons for Recurrent Neural Nets\n---\n\nRecurrent neural nets, neuron synthesis, automatic programming\n\nIntroduction\n============\n\nTime series or sequence data is abundant in" +"---\nabstract: 'One of the outstanding problems in the holographic approach to many-body physics is the explicit computation of correlation functions in nonequilibrium states. We provide a new and simple proof that the horizon cap prescription of Crossley-Glorioso-Liu for implementing the thermal Schwinger-Keldysh contour in the bulk is consistent with the Kubo-Martin-Schwinger periodicity and the ingoing boundary condition for the retarded propagator at any arbitrary frequency and momentum. The generalization to the hydrodynamic Bjorken flow is achieved by a Weyl rescaling in which the dual black hole\u2019s event horizon attains a constant surface gravity and area at late time although the directions longitudinal and transverse to the flow expands and contract respectively. The dual state\u2019s temperature and entropy density thus become constants (instead of the perfect fluid expansion) although no time-translation symmetry emerges at late time. Undoing the Weyl rescaling, the correlation functions can be computed systematically in a large proper time expansion in inverse powers of the average of the two reparametrized proper time arguments. The horizon cap has to be pinned to the nonequilibrium event horizon so that regularity and consistency conditions are satisfied. 
Consequently, in the limit of perfect fluid expansion, the Schwinger-Keldysh correlation functions with space-time" +"---\nauthor:\n- Yuri Shtanov\ntitle: Initial conditions for the scalaron dark matter\n---\n\nIntroduction\n============\n\nIt was suggested that dark matter in our universe can be explained in frames of models of $f (R)$ gravity [@Capozziello:2006uv; @Nojiri:2008nt; @Cembranos:2008gj; @Cembranos:2015svp; @Corda:2011aa; @Katsuragawa:2016yir; @Katsuragawa:2017wge; @Yadav:2018llv; @Parbin:2020bpp; @Shtanov:2021uif; @KumarSharma:2022qdf] (see also reviews [@Sotiriou:2008rp; @DeFelice:2010aj]). In these models, a new weakly interacting degree of freedom (the scalaron) arises in the gravitational sector, which can be associated with dark matter.\n\nAmong various proposals in this direction, we will focus here on a simple suggestion, pioneered in [@Cembranos:2008gj] and recently revisited in our paper [@Shtanov:2021uif], of $f(R)$ models in which essential role is played by the term quadratic in curvature, and which are minimally coupled to the Standard Model of particle physics. Dark matter in this model is formed of a scalaron field oscillating in a close neighbourhood of the minimum of its potential. The scalaron very weakly interacts with the rest of matter, which ensures its stability.\n\nOne of the issues of this model is the issue of initial conditions for the scalaron field. In [@Cembranos:2008gj], it was assumed that the scalaron is initially shifted from the vacuum value in the very early universe" +"---\nabstract: 'The consideration of the holographic dark energy approach and matter creation effects in a single cosmological model is carried out in this work accordingly, we discuss some cases of interest. We test this cosmological proposal against recent observations. Considering the best fit values obtained for the free cosmological parameters, the model exhibits a transition from a decelerated cosmic expansion to an accelerated one. 
The deceleration-acceleration transition cannot be determined consistently from the coincidence parameter for this kind of model. At the effective level, the state parameter associated with the whole energy density contribution describes a quintessence scenario in the fluid analogy at the present time.'\nauthor:\n- 'V\u00edctor H. C\u00e1rdenas$^1$'\n- Miguel Cruz$^2$\n- Samuel Lepe$^3$\ntitle: Holography and matter creation revisited\n---\n\nIntroduction\n============\n\nOur simplest explanation for the observed cosmic acceleration [@obs] generally requires the dark energy concept, a component as strange as its own name. Its complex nature has not been fully unveiled yet, but nowadays several theoretical approaches usefully reproduce its effects at large scales. In Ref. [@entropic] the idea of a fundamental origin for the accelerated expansion is motivated by the consideration of non-equilibrium thermodynamics in" +"---\nabstract: 'In this paper we present a novel self-supervised method to anticipate the depth estimate for a future, unobserved real-world urban scene. This work is the first to explore self-supervised learning for estimation of monocular depth of future unobserved frames of a video. Existing works rely on a large number of annotated samples to generate the probabilistic prediction of depth for unseen frames. However, this makes it unrealistic due to its requirement for a large amount of annotated depth samples of video. In addition, the probabilistic nature of the task, where one past can have multiple future outcomes, often leads to incorrect depth estimates. Unlike previous methods, we model the depth estimation of the unobserved frame as a view-synthesis problem, which treats the depth estimate of the unseen video frame as an auxiliary task while synthesizing back the views using learned pose. 
This approach is not only cost-effective - we do not use any ground-truth depth for training (hence practical) - but also deterministic (a sequence of past frames maps to an immediate future). To address this task, we first develop a novel depth forecasting network, DeFNet, which estimates the depth of the unobserved future by forecasting latent features. Second, we" +"---\nabstract: |\n For a non-decreasing sequence $S = (s_1, \\ldots, s_k)$ of positive integers, an $S$-packing edge-coloring of a graph $G$ is a decomposition of the edges of $G$ into disjoint sets $E_1, \\ldots, E_k$ such that for each $1 \\le i \\le k$ the distance between any two distinct edges $e_1, e_2 \\in E_i$ is at least $s_i+1$. The notion of $S$-packing edge-coloring was first generalized by Gastineau and Togni from its vertex counterpart. They showed that there are subcubic graphs that are not $(1,2,2,2,2,2,2)$-packing (abbreviated to $(1,2^6)$-packing) edge-colorable and asked whether every subcubic graph is $(1,2^7)$-packing edge-colorable. Very recently, Hocquard, Lajou, and Lu\u017ear showed that every subcubic graph is $(1,2^8)$-packing edge-colorable and every $3$-edge-colorable subcubic graph is $(1,2^7)$-packing edge-colorable. 
Furthermore, they also conjectured that every subcubic graph is $(1,2^7)$-packing edge-colorable.\n\n In this paper, we confirm the conjecture of Hocquard, Lajou, and Lu\u017ear, and extend it to multigraphs.\nauthor:\n- 'Xujun Liu[^1]'\n- 'Michael Santana[^2]'\n- 'Taylor Short[^3]'\ntitle: 'Every subcubic multigraph is $(1,2^7)$-packing edge-colorable'\n---\n\nIntroduction\n============\n\nGiven a non-decreasing sequence $S = (s_1, \\ldots, s_k)$ of positive integers, an $S$-packing edge-coloring of a graph $G$ is a decomposition of the edge set of $G$" +"---\nabstract: 'We show that the transition matrix from the standard basis to the web basis for a Specht module of the Hecke algebra is unitriangular and satisfies a strong positivity property whenever the Specht module is labeled by a partition with at most two parts. This generalizes results of Russell\u2013Tymoczko and Rhoades.'\naddress:\n- |\n Department of Mathematics\\\n University of Notre Dame\\\n Notre Dame, IN 46556\n- |\n Department of Mathematics\\\n University of Oklahoma\\\n Norman, OK 73019\nauthor:\n- Samuel David Heard\n- 'Jonathan R. Kujawa'\nbibliography:\n- 'Biblio.bib'\ntitle: Positivity and Web Bases for Specht Modules of Hecke Algebras\n---\n\n[^1]\n\nIntroduction {#S:Intro}\n============\n\nBackground\n----------\n\nThe Specht module $S^{(n,n)}$ is a simple module for the symmetric group $S_{2n}$ defined over the rational numbers. It admits (at least) two interesting combinatorial bases. The first is the *polytabloid basis* indexed by the standard tableaux of shape $(n,n)$. Tableaux combinatorics play an important role not only in the representation theory of the symmetric group, but also symmetric functions, Schubert and Springer varieties, and more (e.g., see [@Fulton]). The second is the *web basis* given by non-crossing pairings of $\\{1, \\dotsc , 2n \\}$. 
Webs arise in the diagrammatic description of" +"---\nabstract: 'We study the ranking of individuals, teams, or objects, based on pairwise comparisons between them, using the Bradley-Terry model. Estimates of rankings within this model are commonly made using a simple iterative algorithm first introduced by Zermelo almost a century ago. Here we describe an alternative and similarly simple iteration that provably returns identical results but does so much faster\u2014over a hundred times faster in some cases. We demonstrate this algorithm with applications to a range of example data sets and derive a number of results regarding its convergence.'\nauthor:\n- |\n M. E. J. Newman\\\n Center for the Study of Complex Systems\\\n University of Michigan\\\n Ann Arbor, MI 48109, USA\ntitle: Efficient computation of rankings from pairwise comparisons\n---\n\nIntroduction\n============\n\nThe problem of ranking a set of individuals, teams, or objects on the basis of a set of pairwise comparisons between them arises in many contexts, including competitions in sports, chess, and other games, paired comparison studies of consumer choice, and observational studies of dominance behaviors in animals and humans [@Zermelo29; @BT52; @DF76; @David88; @Cattelan12]. If a group of chess players play games against one another, for example, how can we rank the players, from best" +"---\nauthor:\n- 'Cem Er\u00f6ncel,'\n- 'Ryosuke Sato,'\n- 'G\u00e9raldine Servant,'\n- Philip S\u00f8rensen\nbibliography:\n- 'references.bib'\ntitle: 'ALP Dark Matter from Kinetic Fragmentation: Opening up the Parameter Window'\n---\n\nDESY 22-106\\\nOU-HET-1148\\\n\nIntroduction {#sec:introduction}\n============\n\nAxion-Like-Particles (ALPs) appear in many extensions of the Standard Model. 
Their physics case has been extensively scrutinised in the last decades, especially in the last few years, which have seen an increased interest in novel detection strategies and new experimental proposals [@Irastorza:2018dyq; @Adams:2022pbo]. ALPs are pseudo-Nambu-Goldstone bosons, arising from the spontaneous breaking of a global $U(1)$ symmetry. As such, they are naturally light particles, with a mass $m$ much smaller than the abelian symmetry breaking scale $f$, also referred to as the ALP decay constant. ALPs are characterised by their small mass $m \\ll f$, and by their small coupling to Standard Model particles, which scales as $1/f$. The axion mass is generated at some lower scale $\\Lambda_b \\ll f$, typically related to some new strong dynamics that break the $U(1)$ symmetry explicitly, leading to $m \\propto \\Lambda_b^2/f$. For a generic ALP, $m$ and $f$ are independent parameters. In the case of the QCD axion, they are related as $m f \\propto \\Lambda_{QCD}^2$. The" +"---\nabstract: 'This paper reviews the attitude control problems for rigid-body systems, starting from the attitude representation for rigid-body kinematics. The highly redundant rotation matrix defines the attitude orientation globally and uniquely by 9 parameters; it is the most fundamental representation, without any singularities. The minimal 3-parameter Euler angles or (modified) Rodrigues parameters define the attitude orientation neither globally nor uniquely; the former exhibits kinematical singularity and Gimbal lock, while the latter two exhibit geometrical singularity. The once-redundant axis-angle or unit quaternion globally but not uniquely defines the attitude rotation using 4 parameters; the former is not appropriate for very small or very large rotations, while the latter shows the unwinding phenomenon despite the reduced computation burden. 
In addition, we explore the relationships among those attitude representations, including the connections among Gimbal lock, the unwinding phenomenon, and a nowhere dense set of zero Lebesgue measure. Based on the attitude representations, we analyze different attitude control laws, almost global and global attitude control, nominal and general robustness, as well as the relevant technical tools.'\nauthor:\n- 'Dandan Zhang, Xin Jin, Hongye Su[^1][^2]'\ntitle: A perspective on Attitude Control Issues and Techniques\n---\n\nRigid-body system, attitude control, stabilization, rotation matrix, Euler angles, (modified)" +"---\nabstract: 'The current quantum reinforcement learning control models often assume that the quantum states are known a priori for control optimization. However, full observation of the quantum state is experimentally infeasible due to the exponential scaling of the number of required quantum measurements with the number of qubits. In this paper, we investigate a robust reinforcement learning method using partial observations to overcome this difficulty. This control scheme is compatible with near-term quantum devices, where the noise is prevalent and predetermining the dynamics of the quantum state is practically impossible. We show that this simplified control scheme can achieve similar or even better performance when compared to conventional methods relying on full observation. We demonstrate the effectiveness of this scheme on examples of quantum state control and the quantum approximate optimization algorithm (QAOA). It has been shown that high-fidelity state control can be achieved even if the noise amplitude is at the same level as the control amplitude. Moreover, an acceptable level of optimization accuracy can be achieved for QAOA with a noisy control Hamiltonian. 
This robust control optimization model can be trained to compensate the uncertainties in practical quantum computing.'\nauthor:\n- Chen Jiang\n- Yu Pan\n- 'Zheng-Guang Wu'\n- Qing" +"---\nabstract: 'We show that quasicrystals exhibit anyonic behavior that can be used for topological quantum computing. In particular, we study a correspondence between the fusion Hilbert spaces of the simplest non-abelian anyon, the Fibonacci anyons, and the tiling spaces of a class of quasicrystals, which includes the one dimensional Fibonacci chain and the two dimensional Penrose tiling. A possible encoding on tiling spaces of topological quantum information processing is also discussed.'\n---\n\n[Exploiting Anyonic Behavior of Quasicrystals\\\nfor Topological Quantum Computing]{}\\\nMarcelo Amaral,[^1] David Chester, Fang Fang, and Klee Irwin\\\nQuantum Gravity Research,\\\nLos Angeles, CA 90290, USA\\\n\n[***Keywords\u2014*** Topological Quantum Computing; Anyons; Quasicrystals; Quasicrystalline Codes; Tiling Spaces]{}\n\nIntroduction {#intro}\n============\n\nWhile quantum computers have been experimentally realized, obtaining large-scale quantum computers still remains a challenge. Since qubits are very sensitive to the environment, it is necessary to solve the problem of decoherence [@Nielsen2011]. Software algorithms have been proposed by major players in the field; see citations within Ref.\u00a0[@Djordjevic2021]. A different seminal solution is to use hardware-level error correction via topological quantum computation (TQC) [@Pachos; @Wang]. In particular, non-abelian anyons can provide universal quantum computation [@Pachos]. Theoretically, low dimensional anyonic systems are a hallmark topological phase of matter," +"---\nabstract: 'We study the buckling of pressurized spherical shells by Monte Carlo simulations in which the detailed balance is explicitly broken \u2013 thereby driving the shell active, out of thermal equilibrium. 
Such a shell typically has either higher (active) or lower (sedate) fluctuations compared to one in thermal equilibrium, depending on how the detailed balance is broken. We show that, for the same set of elastic parameters, a shell that is not buckled in thermal equilibrium can be buckled if turned active. Similarly, a shell that is buckled in thermal equilibrium can unbuckle if sedated. Based on this result, we suggest that it is possible to experimentally design microscopic elastic shells whose buckling can be optically controlled.'\nauthor:\n- Vipin Agrawal\n- Vikash Pandey\n- Dhrubaditya Mitra\ntitle: 'Active buckling of pressurized spherical shells: Monte Carlo Simulation'\n---\n\nThin spherical shells are commonly found in many natural and engineering settings. Their sizes can vary over a very large range \u2013 from a hundred meters, e.g., the Avicii Arena Stockholm\u00a0[^1], down to about a hundred nanometers, e.g., viral capsules\u00a0[@buenemann2008elastic; @michel2006nanoindentation] and exosomes\u00a0[@pegtel2019exosomes; @cavallaro2019label]. The elastic properties of shells, including conditions under which buckling can occur, have been" +"---\nabstract: 'The James Webb Space Telescope (JWST) will provide an opportunity to investigate the atmospheres of potentially habitable planets. Aerosols significantly mute molecular features in transit spectra because they prevent light from probing the deeper layers of the atmosphere. Earth occasionally has stratospheric/high-tropospheric clouds at 15-20 km that could substantially limit the observable depth of the underlying atmosphere. We use solar occultations of Earth\u2019s atmosphere to create synthetic JWST transit spectra of Earth analogs orbiting dwarf stars. Unlike previous investigations, we consider both clear and cloudy sightlines from the SCISAT satellite. 
We find that the maximum difference in effective thickness of the atmosphere between a clear and a globally cloudy atmosphere is 8.5 km at 2.28 microns with a resolution of 0.02 microns. After incorporating the effects of refraction and Pandexo\u2019s noise modelling, we find that JWST would not be able to detect Earth-like stratospheric clouds if an exo-Earth were present in the TRAPPIST-1 system, as the cloud spectrum differs from the clear spectrum by a maximum of 10 ppm. These stratospheric clouds are also not robustly detected by TauREx when performing spectral retrieval for a cloudy TRAPPIST-1 planet. However, if an Earth-size planet were to orbit" +"---\nabstract: |\n We explore algorithms and limitations for sparse optimization problems such as sparse linear regression and robust linear regression. The goal of the sparse linear regression problem is to identify a small number of key features, while the goal of the robust linear regression problem is to identify a small number of erroneous measurements. Specifically, the sparse linear regression problem seeks a $k$-sparse vector $x\\in\\mathbb{R}^d$ to minimize $\\|Ax-b\\|_2$, given an input matrix $A\\in\\mathbb{R}^{n\\times d}$ and a target vector $b\\in\\mathbb{R}^n$, while the robust linear regression problem seeks a set $S$ that ignores at most $k$ rows and a vector $x$ to minimize $\\|(Ax-b)_S\\|_2$.\n\n We first show bicriteria NP-hardness of approximation for robust regression, building on the work of [@ODonnellWZ15], which implies a similar result for sparse regression. We further show fine-grained hardness of robust regression through a reduction from the minimum-weight $k$-clique conjecture. On the positive side, we give an algorithm for robust regression that achieves arbitrarily accurate additive error and uses runtime that closely matches the lower bound from the fine-grained hardness result, as well as an algorithm for sparse regression with similar runtime. 
Both our upper and lower bounds rely on a general reduction from robust linear" +"---\nabstract: |\n We initiate the study of online routing problems with predictions, inspired by recent exciting results in the area of learning-augmented algorithms. A learning-augmented online algorithm which incorporates predictions in a black-box manner to outperform existing algorithms if the predictions are accurate while otherwise maintaining theoretical guarantees is a popular framework for overcoming pessimistic worst-case competitive analysis.\n\n In this study, we particularly begin investigating the classical online traveling salesman problem (OLTSP), where future requests are augmented with predictions. Unlike the prediction models in other previous studies, each actual request in the OLTSP, associated with its arrival time and position, may not coincide with the predicted ones, which, as imagined, leads to a troublesome situation. Our main result is to study different prediction models and design algorithms to improve the best-known results in the different settings. Moreover, we generalize the proposed results to the online dial-a-ride problem.\nauthor:\n- 'Hsiao-Yu Hu'\n- 'Hao-Ting Wei'\n- 'Meng-Hsi Li'\n- 'Kai-Min Chung'\n- 'Chung-Shou Liao'\nbibliography:\n- 'references.bib'\nnocite: '[@*]'\ntitle: Online TSP with Predictions\n---\n\nIntroduction\n============\n\nIn many applications, people make decisions without knowing the future, and two approaches are widely used to address this issue: machine learning and" +"---\nabstract: 'The high R/X ratio of typical distribution systems makes the system voltage vulnerable to the active power injection from distributed energy resources (DERs). Moreover, the intermittent and uncertain nature of the DER generation brings new challenges to the voltage control. 
This paper proposes a two-stage stochastic optimization strategy to optimally place the PV smart inverters with Volt-VAr capability for distribution systems with high photovoltaic (PV) penetration to mitigate voltage violation issues. The proposed optimization strategy enables a planning-stage guide for upgrading the existing PV inverters to smart inverters with Volt-VAr capability while considering the operation-stage characteristics of the Volt-VAr control. One advantage of this planning strategy is that it utilizes the local control capability of the smart inverter that requires no communication, thus avoiding issues related to communication delays and failures. Another advantage is that the Volt-VAr control characteristic is internally integrated into the optimization model as a set of constraints, making placement decisions more accurate. The objective of the optimization is to minimize the upgrading cost and the number of the smart inverters required while maintaining the voltage profile within the acceptable range. Case studies on an actual 12.47kV, 9km long Arizona utility feeder have been conducted" +"---\nabstract: 'Current RGB-based 6D object pose estimation methods have achieved noticeable performance on datasets and real world applications. However, predicting 6D pose from single 2D image features is susceptible to disturbance from changing of environment and textureless or resemblant object surfaces. Hence, RGB-based methods generally achieve less competitive results than RGBD-based methods, which deploy both image features and 3D structure features. To narrow down this performance gap, this paper proposes a framework for 6D object pose estimation that learns implicit 3D information from 2 RGB images. Combining the learned 3D information and 2D image features, we establish more stable correspondence between the scene and the object models. 
To identify the method that best utilizes 3D information from RGB inputs, we investigate three different approaches: Early-Fusion, Mid-Fusion, and Late-Fusion. We ascertain that the Mid-Fusion approach is the best at restoring the most precise 3D keypoints for object pose estimation. The experiments show that our method outperforms state-of-the-art RGB-based methods and achieves comparable results with RGBD-based methods.'\nauthor:\n- 'Jun Wu$^{1}$, Lilu Liu$^{1}$, Yue Wang$^{1}$ and Rong Xiong$^{1}$[^1] [^2].'\nbibliography:\n- 'reference.bib'\ntitle: '**Towards Two-view 6D Object Pose Estimation: A Comparative Study on Fusion" +"---\nabstract: 'For two matrices $A$ and $B$, and large $n$, we show that most products of $n$ factors of $e^{A/n}$ and $n$ factors of $e^{B/n}$ are close to $e^{A + B}$. This extends the Lie-Trotter formula. The elementary proof is based on the relation between words and lattice paths, asymptotics of binomial coefficients, and matrix inequalities. The result holds for more than two matrices.'\naddress: 'Department of Mathematics, Texas A&M University, College Station, TX 77843-3368'\nauthor:\n- 'Michael Anshelevich, Austin Pritchett'\ntitle: |\n Product of exponentials concentrates\\\n around the exponential of the sum\n---\n\n[^1]\n\nIntroduction.\n=============\n\nMatrix products do not commute. One familiar consequence is that in general, $$e^A e^B \\neq e^{A + B}.$$ (Here for a square matrix $A$, the expression $e^A$ can be defined, for example, using the power series expansion of the exponential function.) However, a vestige of the \u201cproduct of exponentials is the exponential of the sum\u201d property remains, as long as we take the factors in a very special *alternating* order.\n\nLet $A$ and $B$ be complex square matrices. 
Then $$\\lim_{n \\rightarrow \\infty} \\left( e^{A/n} e^{B/n} \\right)^n = e^{A + B},$$ where the convergence is with respect to any matrix norm.\n\nThis result" +"---\nabstract: 'I conjecture three identities for the determinant of adjacency matrices of graphene triangles and trapezia with Bloch (and more general) boundary conditions. For triangles, the parametric determinant is equal to the characteristic polynomial of the symmetric Pascal matrix. For trapezia it is equal to the determinant of a sub-matrix. Finally, the determinant of the tight-binding matrix equals its permanent. The conjectures are supported by analytic evaluations and Mathematica, for moderate sizes. They establish connections with counting problems of partitions, lozenge tilings of hexagons, and dense loops on a cylinder.'\naddress: 'L. G. Molinari, Physics Department \u201cAldo Pontremoli\u201d, Universit\u00e0 degli Studi di Milano and I.N.F.N. sez. di Milano, Via Celoria 16, 20133 Milano, Italy.'\nauthor:\n- Luca Guido Molinari\ntitle: 'GRAPHENE NANOCONES and PASCAL MATRICES\\'\n---\n\nIntroduction\n============\n\nPhysics often entails intriguing mathematics, and Carbon Nanocones are no exception. They are conical honeycomb lattices of Carbon atoms, with a cap of Carbon rings causing different cone apertures [@Balaban94]. The ones with the smallest angle, near $19^\\circ$, were synthesized by Ge and Sattler in 1994, in the hot vapour phase of carbon; the others were found by accident, in pyrolysis of heavy oil [@Sattler08].\n\n![[Scanning electron microscope images of carbon nanocones" +"---\nabstract: 'Recent deep learning-based methods for medical image registration achieve results that are competitive with conventional optimization algorithms at reduced run times. However, deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data. 
While typical intensity shifts can be mitigated by keypoint-based registration, these methods still suffer from geometric domain shifts, for instance, due to different fields of view. As a remedy, in this work, we present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain. We build on a keypoint-based registration model, combining graph convolutions for geometric feature learning with loopy belief optimization, and propose to reduce the domain shift through self-ensembling. To this end, we embed the model into the Mean Teacher paradigm. We extend the Mean Teacher to this context by 1) adapting the stochastic augmentation scheme and 2) combining learned feature extraction with differentiable optimization. This enables us to guide the learning process in the unlabeled target domain by enforcing consistent predictions of the learning student and the temporally averaged teacher model. We evaluate the method for exhale-to-inhale lung CT registration" +"---\nauthor:\n- Mohammad Farhat\n- 'Pierre Auclair-Desrotour'\n- Gwena\u00ebl Bou\u00e9\n- Jacques Laskar\ntitle: 'The Resonant Tidal Evolution of the Earth-Moon Distance'\n---\n\nIntroduction\n============\n\nDue to the tidal interplay in the Earth-Moon system, the spin of the Earth brakes with time and the Earth-Moon distance increases [@Darwin1879] at a present rate of $3.830\\pm0.008$ cm/year that is measured using Lunar Laser Ranging (LLR) [@williams2016secular]. There exists a rich narrative exploring the long-term evolution of the system [@goldreich1966history; @mignard1979evolution; @touma1994evolution; @neron1997long] and the dynamical constraints on the origin of the Moon [@touma1998resonances; @cuk2019early]. 
Notably, it has been established that simple tidal models starting with the present recession rate and integrated backward in time predict a close encounter in the Earth-Moon system within less than 1.6 billion years (Ga) [@gerstenkorn1967controversy; @macdonald1967evidence]. This is clearly not compatible with the estimated age of the Moon of $4.425\\pm0.025$ Ga [@maurice2020long], which suggests that the present rate of rotational energy dissipation is much larger than it has typically been over the Earth\u2019s history. To bypass this difficulty, empirical models have been fitted to the available geological evidence of the past rotational state of the Earth [@walker1986; @waltham_milankovitch_2015], acquired through the analysis of" +"---\nabstract: 'Grapheme-to-phoneme (G2P) conversion is an indispensable part of the Chinese Mandarin text-to-speech (TTS) system, and the core of G2P conversion is to solve the problem of polyphone disambiguation, which is to pick the correct pronunciation from several candidates for a Chinese polyphonic character. In this paper, we propose a Chinese polyphone BERT model to predict the pronunciations of Chinese polyphonic characters. Firstly, we create 741 new Chinese monophonic characters from 354 source Chinese polyphonic characters by pronunciation. Then we get a Chinese polyphone BERT by extending a pre-trained Chinese BERT with 741 new Chinese monophonic characters and adding a corresponding embedding layer for new tokens, which is initialized by the embeddings of the source Chinese polyphonic characters. In this way, we can turn the polyphone disambiguation task into a pre-training task of the Chinese polyphone BERT. 
Experimental results demonstrate the effectiveness of the proposed model: the polyphone BERT model obtains a 2% (from 92.1% to 94.1%) improvement in average accuracy compared with the BERT-based classifier model, which is the prior state-of-the-art in polyphone disambiguation.'\naddress: SenseTime Research\nbibliography:\n- 'main.bib'\ntitle: '**A Polyphone BERT for Polyphone Disambiguation in Mandarin Chinese**'\n---\n\n**Index Terms**: text-to-speech, polyphone disambiguation, Chinese polyphone" +"---\nabstract: |\n Quantifying the similarity between two graphs is a fundamental algorithmic problem at the heart of many data analysis tasks for graph-based data. In this paper, we study the computational complexity of a family of similarity measures based on quantifying the mismatch between the two graphs, that is, the \u201csymmetric difference\u201d of the graphs under an optimal alignment of the vertices. An important example is similarity based on graph edit distance. While edit distance calculates the \u201cglobal\u201d mismatch, that is, the number of edges in the symmetric difference, our main focus is on \u201clocal\u201d measures calculating the maximum mismatch per vertex.\n\n Mathematically, our similarity measures are best expressed in terms of the adjacency matrices: the mismatch between graphs is expressed as the difference of their adjacency matrices (under an optimal alignment), and we measure it by applying some matrix norm. 
Roughly speaking, global measures like graph edit distance correspond to entrywise matrix norms like the Frobenius norm and local measures correspond to operator norms like the spectral norm.\n\n We prove a number of strong NP-hardness and inapproximability results even for very restricted graph classes such as bounded-degree trees.\nauthor:\n- Timo Gervens\n- Martin Grohe\ntitle: Graph Similarity" +"---\nabstract: 'Safety guarantees in motion planning for autonomous driving typically involve certifying the trajectory to be collision-free under any motion of the uncontrollable participants in the environment, such as the human-driven vehicles on the road. As a result they usually employ a conservative bound on the behavior of such participants, such as reachability analysis. We point out that planning trajectories to rigorously avoid the entirety of the reachable regions is unnecessary and too restrictive, because observing the environment in the future will allow us to prune away most of them; disregarding this ability to react to future updates could prohibit solutions to scenarios that are easily navigated by human drivers. We propose to account for the autonomous vehicle\u2019s reactions to future environment changes by a novel safety framework, Comprehensive Reactive Safety. 
Validated in simulations in several urban driving scenarios such as unprotected left turns and lane merging, the resulting planning algorithm, called Reactive ILQR, demonstrates strong negotiation capabilities and better safety at the same time.'\nauthor:\n- 'Fang Da$^{1}$[^1][^2] [^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'bp.bib'\ntitle: |\n **Comprehensive Reactive Safety:\\\n No Need For A Trajectory If You Have A Strategy** \n---\n\nINTRODUCTION {#sec:intro}\n============\n\nSafety is of central concern" +"---\nabstract: 'It was recently shown that a chiral topological phase emerges from the coupling of two twisted monolayers of superconducting Bi~2~Sr~2~CaCu~2~O~8$+\\delta$~ for certain twist angles. In this work, we reveal the behavior of such twisted superconducting bilayers with $d_{x^2-y^2}$ pairing symmetry in the presence of an applied magnetic field. Specifically, we show that the emergent vortex matter can serve as a smoking gun for the detection of topological superconductivity in such bilayers. Moreover, we report two distinct skyrmionic states that characterize the chiral topological phase, and provide a full account of their experimental signatures and their evolution with the twist angle.'\nauthor:\n- 'Leonardo R. Cadorim'\n- Edson Sardella\n- 'Milorad V. Milo\u0161evi\u0107'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Vortical versus skyrmionic states in the topological phase of a twisted bilayer with $d$-wave superconducting pairing'\n---\n\n\\[sec:sec1\\]Introduction\n========================\n\nChiral superconductivity [@kallin2016] has been a topic of tremendous interest in the recent literature due to its rich phenomenology [@sigrist1991; @vojta2000], including the appearance of nontrivial surface currents [@stone2004] and half-quantum vortices [@read2000; @garaud2012; @jang2011; @zyuzin2017; @becerra2016], to name a few examples. 
Being mostly characterized by several Fermi surfaces, chiral superconductors often present multiple superconducting gaps, and are thereby prone to a plethora of interesting physics typical of multicomponent" +"---\nauthor:\n- 'Tomasz \u0141ukowski,'\n- 'Robert Moerman,'\n- and Jonah Stalknecht\nbibliography:\n- 'push-forward.bib'\ntitle: Pushforwards via Scattering Equations with Applications to Positive Geometries\n---\n\nIntroduction\n============\n\nOver the past decade, the scattering equations have proven to be fundamental to the study of scattering amplitudes. They find their origins in twistor string theory where novel formulae for the complete tree-level $S$-matrix of $\\mathcal{N}=4$ supersymmetric Yang-Mills (SYM) theory [@Witten:2003nn; @Roiban:2004vt; @Roiban:2004yf] and $\\mathcal{N}=8$ supersymmetric gravity [@Cachazo:2012da; @Cachazo:2012kg; @Cachazo:2012pz] were derived. These formulae consist of integrals over the moduli space of curves from the $n$-punctured Riemann sphere to the relevant kinematic space (either twistor space or momentum space) and were later subsumed by the Cachazo\u2013He\u2013Yuan (CHY) formalism [@Cachazo:2013gna; @Cachazo:2013hca; @Cachazo:2013iaa]. In this latter setting, further simplification comes from the fact that amplitudes are expressed as integrals directly over the moduli space of the $n$-punctured Riemann sphere. These moduli space integrals fully localize to the solutions of a set of algebraic equations, called the scattering equations, which connect points in the kinematic space to points in the moduli space. Moreover, the scattering equations are relevant to the scattering of massless particles in arbitrary dimensions [@Cachazo:2013hca] and their application extends to a broad" +"---\nabstract: |\n Some sorts of \u201cmorphisms\u201d are defined from very basic mathematical objects such as sets, functions, and partial functions. 
A wide range of mathematical notions such as continuous functions between topological spaces, ring homomorphisms, module homomorphisms, group homomorphisms, and covariant functors between categories can be characterized in terms of the \u201cmorphisms\u201d. We show that the inverse of any bijective \u201cmorphism\" is also a \u201cmorphism\u201d, and so an \u201cisomorphism\" can be defined as a bijective \u201cmorphism\".\n\n Galois correspondences are established and studied, not only for the Galois groups of the \u201cautomorphisms\u201d, but also for the \u201cGalois monoids\u201d of the \u201cendomorphisms\u201d.\n\n Ways to construct the \u201cmorphisms\u201d and the \u201cisomorphisms\u201d are studied.\n\n New interpretations on solvability of polynomials and solvability of homogeneous linear differential equations are introduced, and these ideas are roughly generalized for \u201cgeneral\u201d equation solving in terms of our theory for the \u201cmorphisms\u201d.\n\n Some more results are presented. For example, we prove an isomorphism theorem that generalizes the first isomorphism theorems for groups, rings, and modules. In addition, we show that a part of our theory is closely related to dynamical systems.\nauthor:\n- Gang Hu\ntitle: 'A (Galois) theory for a generalized morphism'\n---\n\nIntroduction\n============\n\nThe (Galois)" +"---\nauthor:\n- 'B.\u00a0Vollmer, R.I.\u00a0Davies, P.\u00a0Gratier, Th.\u00a0Liz\u00e9e, M.\u00a0Imanishi, J.\u00a0F.\u00a0Gallimore, C.\u00a0M.\u00a0V.\u00a0Impellizzeri, S.\u00a0Garc\u00eda-Burillo, F.\u00a0Le\u00a0Petit'\ndate: 'Received ; accepted '\nsubtitle: 'Modelling the molecular emission of a parsec-scale torus as found in NGC\u00a01068'\ntitle: 'From the Circumnuclear Disk in the Galactic Center to thick, obscuring tori of AGNs'\n---\n\nIntroduction\\[sec:introduction\\]\n================================\n\nActive galactic nuclei (AGN) can outshine the optical light of the whole galaxy which surrounds them. 
This huge luminosity and thus this huge amount of energy is gained through the conversion of gravitational potential energy to electromagnetic radiation. The ingredients for an AGN are (i) a supermassive black hole with a mass exceeding about $10^6$\u00a0M$_{\\odot}$, (ii) interstellar gas that surrounds the central black hole, and (iii) gas infall onto the black hole. As the material falls in toward the black hole, angular momentum will cause it to form an accretion disk. During infall the disk gas is heated to temperatures between $1$ and $4 \\times 10^{5}$\u00a0K (e.g., Bonning et al. 2007) giving rise to intense optical and UV emission. Moreover, the geometrically thin accretion disk is surrounded by a hot corona giving rise to strong X-ray" +"---\nabstract: 'Powered by recent advances in code-generating models, AI assistants like Github Copilot promise to change the face of programming forever. But what *is* this new face of programming? We present the first grounded theory analysis of how programmers interact with Copilot, based on observing 20 participants\u2014with a range of prior experience using the assistant\u2014as they solve diverse programming tasks across four languages. Our main finding is that interactions with programming assistants are *bimodal*: in *acceleration mode*, the programmer knows what to do next and uses Copilot to get there faster; in *exploration mode*, the programmer is unsure how to proceed and uses Copilot to explore their options. Based on our theory, we provide recommendations for improving the usability of future AI programming assistants.'\nauthor:\n- Shraddha Barke\n- 'Michael B. 
James'\n- Nadia Polikarpova\nbibliography:\n- 'main.bib'\nnocite: '[@*]'\ntitle: 'Grounded Copilot: How Programmers Interact with Code-Generating Models'\n---\n\nIntroduction\n============\n\nThe dream of an \u201cAI assistant\u201d working alongside the programmer has captured our imagination for several decades now, giving rise to a rich body of work from both the programming languages [@raychev2014code; @miltner2019fly; @snippy; @ni2021recode] and the machine learning [@kalyan2018neural; @Xu_2020; @guo2021learning] communities. Thanks to recent breakthroughs" +"---\nabstract: 'We predict sound anomalies at the doping $\\delta_{p}$ where the pseudogap ends in the normal state of hole-doped cuprates. Our prediction is based on the two-dimensional compressible Hubbard model using cluster dynamical mean-field theory. We find sharp anomalies (dips) in the velocity of sound as a function of doping and interaction. These dips are a signature of supercritical phenomena, stemming from an electronic transition without symmetry breaking below the superconducting dome. If experimentally verified, these signatures may help to solve the fundamental question of the nature of the pseudogap \u2013 pinpointing its origin as due to Mott physics and resulting short-range correlations.'\nauthor:\n- 'C. Walsh'\n- 'M. Charlebois'\n- 'P. S\u00e9mon'\n- 'G. Sordi'\n- 'A.-M. S. Tremblay'\ntitle: 'Prediction of anomalies in the velocity of sound for the pseudogap of hole-doped cuprates'\n---\n\nIntroduction\n============\n\nUpon decreasing the temperature near half-filling, cuprate superconductors reach a temperature $T^*$ where spin susceptibility and photoemission experiments show features that suggest a loss in the density of states. This has been associated to a so-called pseudogap phase, with $T^*$ a crossover rather than a sharp phase transition\u00a0[@Alloul2013; @annurev2019]. 
In addition, below $T^*$ as defined above, broken symmetry states are" +"---\nabstract: 'The construction of a better exchange-correlation potential in time-dependent density functional theory (TDDFT) can improve the accuracy of TDDFT calculations and provide more accurate predictions of the properties of many-electron systems. Here, we propose a machine learning method to develop the energy functional and the Kohn-Sham potential of a time-dependent Kohn-Sham system is proposed. The method is based on the dynamics of the Kohn-Sham system and does not require any data on the exact Kohn-Sham potential for training the model. We demonstrate the results of our method with a 1D harmonic oscillator example and a 1D two-electron example. We show that the machine-learned Kohn-Sham potential matches the exact Kohn-Sham potential in the absence of memory effect. Our method can still capture the dynamics of the Kohn-Sham system in the presence of memory effects. The machine learning method developed in this article provides insight into making better approximations of the energy functional and the Kohn-Sham potential in the time-dependent Kohn-Sham system.'\naddress:\n- ' $1$ Department of Physics and Astronomy, Dartmouth College. Hanover, NH 03755 '\n- '$^{*}$ Corresponding author:'\nauthor:\n- 'Jun Yang$^{1}$ & James Whitfield$^{1,*}$'\ntitle: 'Machine-learning Kohn-Sham Potential from Dynamics in Time-Dependent Kohn-Sham Systems'\n---\n\nIntroduction" +"---\nabstract: 'Giant atoms are known for the frequency-dependent spontaneous emission and associated interference effects. In this paper, we study the spontaneous emission dynamics of a two-level giant atom with dynamically modulated transition frequency. It is shown that the retarded feedback effect of the giant-atom system is greatly modified by a dynamical phase arising from the frequency modulation and the retardation effect itself. 
Interestingly, such a modification can in turn suppress the retarded feedback such that the giant atom behaves like a small one. By introducing an additional phase difference between the two atom-waveguide coupling paths, we also demonstrate the possibility of realizing chiral and tunable temporal profiles of the output fields. The results in this paper have potential applications in quantum information processing and quantum network engineering.'\nauthor:\n- Lei Du\n- Yan Zhang\n- Yong Li\ntitle: A giant atom with modulated transition frequency\n---\n\nIntroduction\n============\n\nSpontaneous emission is a basic and important process that arises from the interaction between an excited quantum system and the surrounding environment\u00a0[@AgarwalSE; @OpenSys; @QuantNoise]. This process is typically irreversible and thus plays a negative role in quantum information processing, e.g., leading to the so-called quantum decoherence effect. On the other" +"---\nabstract: 'Gravitational wave (GW) memory is studied in the context of a certain class of braneworld wormholes. Unlike other wormhole geometries, this novel class of wormholes do not require any exotic matter fields for its traversability. First, we study geodesics in this wormhole spacetime, in the presence of a GW pulse. The resulting evolution of the geodesic separation shows the presence of displacement and velocity memory effects. Motivated by the same, we study the memory effects at null infinity using the Bondi-Sachs formalism, adapted for braneworld wormhole. Our analysis provides a non-trivial change of the Bondi mass after the passage of a burst of gravitational radiation and hence manifests the memory effect at null infinity. In both of these exercises, the presence of extra dimension and the wormhole nature of the spacetime geometry gets imprinted in the memory effect. 
Since future GW detectors will be able to probe the memory effect, the present work provides another avenue to search for compact objects other than black holes.'\nauthor:\n- Indranil Chakraborty\n- Soumya Bhattacharya\n- Sumanta Chakraborty\nbibliography:\n- 'mybibliography\\_whm.bib'\ntitle: '**Gravitational wave memory in wormhole spacetimes**'\n---\n\nIntroduction\n============\n\nThe direct detection of Gravitational Waves (GWs) from binary black" +"---\nabstract: 'Interpretation of vibrational inelastic neutron scattering spectra of complex systems is frequently reliant on accompanying simulations from theoretical models. [*Ab initio*]{} codes can routinely generate force constants, but additional steps are required for direct comparison to experimental spectra. On modern spectrometers this is a computationally expensive task due to the large data volumes collected. In addition, workflows are frequently cumbersome as the simulation software and experimental data analysis software often do not easily interface to each other. Here a new package, [[*Euphonic*]{}]{}, is presented. [[*Euphonic*]{}]{} is a robust, easy to use and computationally efficient tool designed to be integrated into experimental software and able to interface directly with the force constant matrix output of [*ab initio*]{} codes.'\nauthor:\n- Rebecca Fair\n- Adam Jackson\n- David Voneshen\n- Dominik Jochym\n- Duc Le\n- Keith Refson\n- Toby Perring\nbibliography:\n- 'bibliography.bib'\ntitle: '[[*Euphonic*]{}]{}: inelastic neutron scattering simulations from force constants and visualisation tools for phonon properties'\n---\n\nIntroduction\n============\n\nThe study of atomic vibrations is of both fundamental and applied interest as they have a significant role in many macroscopic material properties. 
They can drive phase transitions\u00a0[@budai_metallization_2014], transport heat\u00a0[@zheng_advances_2021], govern elastic properties\u00a0[@boer_elasticity_2018] and even" +"---\nabstract: 'Shape-morphing structures, which are able to change their shapes from one state to another, are important in a wide range of engineering applications. A popular scenario is morphing from an initial two-dimensional (2D) shape that is flat to a three-dimensional (3D) target shape. One of the exciting manufacturing paradigms is transforming flat 2D sheets with prescribed cuts (i.e. kirigami) into 3D structures. By employing the formalism of the \u2018tapered elastica\u2019 equation, we develop an inverse design framework to predict the shape of the 2D cut pattern that would generate a desired axisymmetric 3D shape. Our previous work has shown that tessellated 3D structures can be achieved by designing both the width and thickness of the cut 2D sheet to have particular tapered designs. However, the fabrication of a sample with variable thickness is quite challenging. Here we propose a new strategy \u2013 perforating the cut sheet with tapered width but uniform thickness to introduce a distribution of porosity. We refer to this strategy as perforated kirigami and show how the porosity function can be calculated from our theoretical model. The porosity distribution can easily be realized by laser cutting and modifies the bending stiffness of the sheet to" +"---\nabstract: 'We identify the finitely many arithmetic lattices $\\Gamma$ in the orientation preserving isometry group of hyperbolic $3$-space ${\\mathbb H}^3$ generated by an element of order $4$ and an element of order $p\\geq 2$. 
Thus $\\Gamma$ has a presentation of the form $$\\Gamma\\cong\\langle f,g: f^4=g^p=w(f,g)=\\cdots=1 \\rangle$$ We find that necessarily $p\\in \\{2,3,4,5,6,\\infty\\}$, where $p=\\infty$ denotes that $g$ is a parabolic element, the total degree of the invariant trace field $k\\Gamma={\\mathbb Q}(\\{\\tr^2(h):h\\in\\Gamma\\})$ is at most $4$, and each orbifold is either a two bridge link of slope $r/s$ surgered with $(4,0)$, $(p,0)$ Dehn surgery (possibly a two bridge knot if $p=4$) or a Heckoid group with slope $r/s$ and $w(f,g)=(w_{r/s})^r$ with $r\\in \\{1,2,3,4\\}$. We give a discrete and faithful representation in $PSL(2,{\\mathbb C})$ for each group and identify the associated number theoretic data.'\nauthor:\n- 'G.J. Martin, K. Salehi and Y. Yamashita [^1]'\ntitle: 'The $(4,p)$-arithmetic hyperbolic lattices, $p\\geq 2$, in three dimensions. '\n---\n\nIntroduction\n============\n\nIn this paper we continue our long running programme to identify (up to conjugacy) all the finitely many arithmetic lattices $\\Gamma$ in the group of orientation preserving isometries of hyperbolic $3$-space $Isom^+({\\mathbb H}^3)\\cong PSL(2,{\\mathbb C})$ generated by two elements of finite order $p$" +"---\nabstract: |\n Co-evolutionary algorithms have a wide range of applications, such as in hardware design, evolution of strategies for board games, and patching software bugs. However, these algorithms are poorly understood and applications are often limited by pathological behaviour, such as loss of gradient, relative over-generalisation, and mediocre objective stasis. It is an open challenge to develop a theory that can predict when co-evolutionary algorithms find solutions efficiently and reliable.\n\n This paper provides a first step in developing runtime analysis for population-based competitive co-evolutionary algorithms. We provide a mathematical framework for describing and reasoning about the performance of co-evolutionary processes. 
An example application of the framework shows a scenario where a simple co-evolutionary algorithm obtains a solution in polynomial expected time. Finally, we describe settings where the co-evolutionary algorithm needs exponential time with overwhelmingly high probability to obtain a solution.\nauthor:\n- Per Kristian Lehre\nbibliography:\n- 'bilinear.bib'\ntitle: |\n Runtime Analysis of Competitive co-Evolutionary Algorithms for Maximin Optimisation\\\n of a Bilinear Function[^1]\n---\n\nIntroduction\n============\n\nMany real-world optimisation problems feature a strategic aspect, where the solution quality depends on the actions of other \u2013 potentially adversarial \u2013 players. There is a need for adversarial optimisation algorithms that operate" +"---\nabstract: 'Physics simulators have shown great promise for conveniently learning reinforcement learning policies in safe, unconstrained environments. However, transferring the acquired knowledge to the real world can be challenging due to the reality gap. To this end, several methods have been recently proposed to automatically tune simulator parameters with posterior distributions given real data, for use with domain randomization at training time. These approaches have been shown to work for various robotic tasks under different settings and assumptions. Nevertheless, existing literature lacks a thorough comparison of existing adaptive domain randomization methods with respect to transfer performance and real-data efficiency. In this work, we present an open benchmark for both offline and online methods (SimOpt, BayRn, DROID, DROPO), to shed light on which are most suitable for each setting and task at hand. We found that online methods are limited by the quality of the currently learned policy for the next iteration, while offline methods may sometimes fail when replaying trajectories in simulation with open-loop commands. 
The code used will be released at\u00a0.'\nauthor:\n- Gabriele Tiboni\n- Karol Arndt\n- Giuseppe Averta\n- |\n \\\n Ville Kyrki\n- Tatiana Tommasi\nbibliography:\n- 'biblio.bib'\ntitle: 'Online vs. Offline Adaptive" +"---\nabstract: 'The mean field approximation is used to investigate the general features of the dynamics of a two-level atom in a ferromagnetic lattice close to the Curie temperature. Various analytical and numerical results are obtained. We first linearize the lattice Hamiltonian, and we derive the self-consistency equation for the order parameter of the phase transition for arbitrary direction of the magnetic field. The reduced dynamics is deduced by tracing out the degrees of freedom of the lattice, which results in the reduction of the dynamics to that of an atom in an effective spin bath whose size is equal to the size of a unit cell of the lattice. It is found that the dephasing and the excited state occupation probability may be enhanced by applying the magnetic field along some specific directions. The dependence on the change of the temperature and the magnitude of spin is also investigated. It turns out that the increase of thermal fluctuations may reduce the occupation probability of the excited state. The entanglement of two such atoms that occupy non-adjacent cells is studied and its variation in time is found to be not much sensitive to the direction of the magnetic field. Entanglement" +"---\nauthor:\n- |\n Xiaoxuan Cai, Xinru Wang, Li Zeng, Habiballah Rahimi Eichi, Dost Ongur, Lisa Dixon,\\\n Justin T. 
Baker, Jukka-Pekka Onnela, Linda Valeri\nbibliography:\n- 'SSMimpute.bib'\ntitle: 'State space model multiple imputation for missing data in non-stationary multivariate time series with application in digital Psychiatry'\n---\n\nIntroduction\n============\n\nMobile devices (e.g., mobile phones and wearable devices) have revolutionized the way we access information and provide convenient and scalable methods for collecting psychological and behavioral biomarkers in patients\u2019 naturalistic settings [@world2011mhealth; @ben2017mobile]. An important application of mobile technology is psychiatric research to monitor patients\u2019 psychiatric symptoms, social interaction, life habits (e.g., sleeping, smoking and alcohol consumption), as well as other health-related conditions, continuously in their real life. Intensive monitoring on participants increases the number of observations to hundreds or even thousands, resulting in longitudinal data as entangled multivariate time series. Consequently, it greatly complicates research design, statistical analysis, and the handling of missing data.\n\nSevere Mental Illness (SMI), including schizophrenia, bipolar disorder, schizoaffective disorder, and related conditions, has become one of the major burdens of disease, affecting over 13 million individuals in the United States [@NSDUH2019]. Antipsychotics, albeit critical in controlling manic and depressive episodes, fall short in improving patient\u2019s" +"---\nabstract: 'On icy worlds, the ice shell and subsurface ocean form a coupled system \u2013 heat and salinity flux from the ice shell induced by the ice thickness gradient drives circulation in the ocean, and in turn, the heat transport by ocean circulation shapes the ice shell. Therefore, understanding the dependence of the efficiency of ocean heat transport (OHT) on orbital parameters may allow us to predict the ice shell geometry before direct observation is possible, providing useful information for mission design. 
Inspired by previous works on baroclinic eddies, I first derive scaling laws for the OHT on icy moons, driven by ice topography, and then verify them against high resolution 3D numerical simulations. Using the scaling laws, I am then able to make predictions for the equilibrium ice thickness variation knowing that the ice shell should be close to heat balance. The ice shell on small icy moons (e.g., Enceladus) may develop strong thickness variations between the equator and pole driven by the polar-amplified tidal dissipation in the ice; on the contrary, the ice shell on large icy moons (e.g., Europa, Ganymede, Callisto etc.) tends to be flat due to the smoothing effects of the efficient OHT. These predictions are" +"---\nabstract: 'Linear combinations of complex gaussian functions, where the linear and nonlinear parameters are allowed to vary, are shown to provide an extremely flexible and effective approach for solving the time-dependent Schr\u00f6dinger equation in one spatial dimension. The use of flexible basis sets has been proven notoriously hard within the systematics of the Dirac\u2013Frenkel variational principle. In this work we present an alternative time-propagation scheme that de-emphasizes optimal parameter evolution but directly targets residual minimization via Rothe\u2019s method, also called the method of vertical time layers. We test the scheme using a simple model system mimicking an atom subjected to an extreme laser pulse. Such a pulse produces complicated ionization dynamics of the system. The scheme is shown to perform very well on this model and notably does not rely on a computational grid. Only a handful of gaussian functions are needed to achieve an accuracy on par with a high-resolution, grid-based solver. 
This paves the way for accurate and affordable solution of the time-dependent Schr\u00f6dinger equation for atoms and molecules within and beyond the Born\u2013Oppenheimer approximation.'\nauthor:\n- Simen Kvaal\n- Caroline Lasser\n- Thomas Bondo Pedersen\n- Ludwik Adamowicz\ndate: 7 March 2023\ntitle:" +"---\nabstract: 'In this work we make the observation that the gravitational leptogenesis mechanism can be implemented without invoking new axial couplings in the inflaton sector. We show that in the perturbed Robertson-Walker background emerging after inflation, the spacetime metric itself breaks parity symmetry and generates a non-vansihing Pontryagin density which can produce a matter-antimatter asymmetry. We analyze the produced asymmetry in different inflationary and reheating scenarios. We show that the generated asymmetry can be locally comparable to observations in certain cases, although the size of the matter-antimatter regions is typically much smaller than the present Hubble radius.'\nauthor:\n- 'Antonio L.\u00a0Maroto'\n- 'Alfredo D. Miravet'\nbibliography:\n- 'metric\\_leptogenesis.bib'\ntitle: Gravitational leptogenesis from metric perturbations\n---\n\nIntroduction\n============\n\nThe excess of matter over antimatter in the universe is one of the longstanding problems in cosmology [@Sakharov:1967dj]. This matter-antimatter asymmetry is usually quantified through the ratio of the net baryon number density with respect to the total entropy density whose value measured by the Planck collaboration is $n_B/s=8.718\\pm 0.004\\times 10^{-11}$ [@Planck:2018vyg]. One of the most interesting proposals for the generation of the baryon asymmetry is leptogenesis [@Fukugita:1986hr]. 
The original implementation of this mechanism relied on the introduction of right-handed Majorana" +"---\nabstract: 'Here we propose BC$_3$, a graphene derivative that has been synthesized, as a platform to realize exotic quantum phases by introducing a moir\u00e9 pattern with mismatched stacking. In twisted bilayer BC$_3$, it is shown that a crossover from two-dimensional to quasi one-dimensional band structure takes place with the twist angle as a control parameter. This is a typical manifestation of the guiding principle in van der Waals stacked systems: the quantum interference between Bloch wave functions in adjacent layers has a striking effect on the effective interlayer tunneling. Interestingly, quasi one-dimensionalization happens in a valley dependent manner. Namely, there is interlocking between the valley index and the quasi-1D directionality, which makes BC$_3$ a plausible candidate for valleytronics devices. In addition, the strongly correlated regime of the valley interlocked quasi-1D state reduces to an interesting variant of the Kugel-Khomskii model where intertwined valley and spin degrees of freedom potentially induces exotic quantum phases. Notably, this variant of the Kugel-Khomskii model cannot be realized in conventional solids due to the three fold valley degeneracy.'\nauthor:\n- Toshikaze Kariyado\ntitle: 'Twisted bilayer BC$_3$: Valley interlocked anisotropic flat bands'\n---\n\nIntroduction\n============\n\nNanoscale moir\u00e9 patterns in van der Waals (vdW) heterostructures [@Geim:2013aa]" +"---\nabstract: 'Functional alterations in the relevant neural circuits occur from drug addiction over a certain period. And these significant alterations are also revealed by analyzing fMRI. 
However, because of fMRI\u2019s high dimensionality and poor signal-to-noise ratio, it is challenging to encode efficient and robust brain regional embeddings for both graph-level identification and region-level biomarkers detection tasks between nicotine addiction (NA) and healthy control (HC) groups. In this work, we represent the fMRI of the rat brain as a graph with biological attributes and propose a novel feature-selected graph spatial attention network (FGSAN) to extract the biomarkers of addiction and perform identification from these brain networks. Specifically, a graph spatial attention encoder is employed to capture the features of spatiotemporal brain networks with spatial information. The method simultaneously adopts a Bayesian feature selection strategy to optimize the model and improve the classification task by constraining features. Experiments on an addiction-related neural imaging dataset show that the proposed model can obtain superior performance and detect interpretable biomarkers associated with addiction-relevant neural circuits.'\nauthor:\n- Changwei Gong\n- Changhong Jing\n- Junren Pan\n- Shuqiang Wang\ntitle: 'Feature-selected Graph Spatial Attention Network for Addictive Brain-Networks Identification'\n---\n\nIntroduction\n============\n\nMedical image computing is becoming increasingly" +"---\nabstract: 'In this work we propose a $\\mathbb{Z}_N$ clock model which is exactly solvable on the lattice. We find exotic properties for the low-energy physics, such as UV/IR mixing and excitations with restricted mobility, that resemble fractonic physics from higher dimensional models. We then study the continuum descriptions for the lattice system in two distinct regimes and find two qualitatively distinct field theories for each one of them. A characteristic time scale that grows exponentially fast with $N^2$ (and diverges rapidly as a function of system parameters) separates these two regimes. 
For times below this scale, the system is described by an effective fractonic Chern-Simons-like action, where higher-form symmetries prevent quasiparticles from hopping. In this regime, the system behaves effectively as a fracton, as isolated particles in practice never leave their original position. Beyond the large characteristic time scale, the excitations are mobile and the effective field theory is given by a pure mutual Chern-Simons action. In this regime, the UV/IR properties of the system are captured by a peculiar realization of the translation group.'\nauthor:\n- Guilherme Delfino\n- 'Weslei B. Fontana'\n- 'Pedro R. S. Gomes'\n- Claudio Chamon\nbibliography:\n- 'Draft\\_references.bib'\n---\n\nIntroduction\n============\n\nTopological order\u00a0[@wen2004]" +"---\nabstract: 'We develop a model for the production of the $P_c$ states observed at LHCb in $\\Lambda_b\\to\\jpp\\,K^-$ decays. With fewer parameters than other approaches, we obtain excellent fits to the $\\jpp$ invariant mass spectrum, capturing both the prominent peaks, and broader features over the full range of invariant mass. A distinguishing feature of our model is that whereas $P_c(4312)$, $P_c(4380)$ and $P_c(4440)$ are resonances with $\\S\\*\\D\\*$ constituents, the nature of $P_c(4457)$ is quite different, and can be understood either as a $\\S\\D^*$ threshold cusp, a $\\Lc(2595)\\D$ enhancement due to the triangle singularity, or a $\\Lc(2595)\\D$ resonance. We propose experimental measurements that can discriminate among these possibilities. Unlike in other models, our production mechanism respects isospin symmetry and the empirical dominance of colour-enhanced processes in weak decays, and additionally gives a natural explanation for the overall shape of the data. 
Our model is consistent with experimental constraints from photoproduction and $\\Lambda_b\\to \\Lc\\D^{(*)0}K^-$ decays and it does not imply the existence of partner states whose apparent absence in experiments is unexplained in other models.'\nauthor:\n- 'T.J.Burns'\n- 'E.S.Swanson'\nbibliography:\n- 'bibfile.bib'\ntitle: 'Production of $P_c$ states in $\\Lambda_b$ decays'\n---\n\nIntroduction {#Sec:introduction}\n============\n\nMuch of the considerable literature on the" +"---\nabstract: 'This paper presents the summary report on our DFGC 2022 competition. The DeepFake is rapidly evolving, and realistic face-swaps are becoming more deceptive and difficult to detect. On the other hand, methods for detecting DeepFakes are also improving. There is a two-party game between DeepFake creators and defenders. This competition provides a common platform for benchmarking the game between the current state-of-the-arts in DeepFake creation and detection methods. The main research question to be answered by this competition is the current state of the two adversaries when competed with each other. This is the second edition after the last year\u2019s DFGC 2021, with a new, more diverse video dataset, a more realistic game setting, and more reasonable evaluation metrics. With this competition, we aim to stimulate research ideas for building better defenses against the DeepFake threats. 
We also release our DFGC 2022 dataset contributed by both our participants and ourselves to enrich the DeepFake data resources for the research community ().'\nauthor:\n- 'Bo Peng^1^, Wei Xiang^1^, Yue Jiang^1^, Wei Wang^1^, Jing Dong^1^[^1], Zhenan Sun^1^, Zhen Lei^2^, Siwei Lyu^3^'\nbibliography:\n- 'egbib.bib'\ntitle: 'DFGC 2022: The Second DeepFake Game Competition'\n---\n\nIntroduction\n============\n\nAfter the DeepFake first broke" +"---\nabstract: 'Temperature asymmetry in the cosmic microwave background (CMB) data by the [*Planck*]{} satellite has been discovered and analyzed toward several nearby edge-on spiral galaxies. It provides a way to probe galactic halo rotation, and to constrain the baryon fraction in the galactic halos. The frequency independence of the observed data provides a strong indication of the Doppler shift nature of the effect, due to the galactic halo rotation. It was proposed that this effect may arise from the emission of cold gas clouds populating the galactic halos. However, in order to confirm this view, other effects that might give rise to a temperature asymmetry in the CMB data, have to be considered and studied in detail. The main aim of the present paper is to estimate the contribution in the CMB temperature asymmetry data due to the free-free emission by hot gas (particularly electrons) through the rotational kinetic Sunyaev-Zeldovich (rkSZ) effect. We concentrate, in particular, on the M31 galactic halo and compare the estimated values of the rkSZ induced temperature asymmetry with those obtained by using the SMICA pipeline of the [*Planck*]{} data release, already employed to project out the SZ sources and for lensing studies. As an" +"---\nabstract: 'Space-bounded computation has been a central topic in classical and quantum complexity theory. In the quantum case, every elementary gate must be unitary. 
This restriction makes it unclear whether the power of space-bounded computation changes by allowing intermediate measurement. In the bounded-error case, Fefferman and Remscrim \\[STOC 2021, pp.1343\u20131356\\] and Girish, Raz and Zhan\u00a0\\[ICALP 2021, pp.73:1\u201373:20\\] recently provided the breakthrough results that the power does not change. This paper shows that a similar result holds for space-bounded quantum computation with *postselection*. Namely, it is proved that intermediate postselections and measurements can be eliminated in space-bounded quantum computation in the bounded-error setting. Our result strengthens the recent result by Le Gall, Nishimura and Yakaryilmaz\u00a0\\[TQC 2021, pp.10:1\u201310:17\\] that logarithmic-space bounded-error quantum computation with *intermediate* postselections and measurements is equivalent in computational power to logarithmic-space unbounded-error probabilistic computation. As an application, it is shown that bounded-error space-bounded one-clean qubit computation (DQC1) with postselection is equivalent in computational power to unbounded-error space-bounded probabilistic computation, and the computational supremacy of the bounded-error space-bounded DQC1 is interpreted in complexity-theoretic terms.'\nauthor:\n- |\n Seiichiro Tani\\\n NTT Communication Science Laboratories, NTT Corporation, Japan\n- |\n International Research Frontiers Initiative (IRFI), Tokyo Institute
As an application, we generalize the typical construction of Poisson algebras from commutative differential algebras to the context of bialgebras.\n\nDifferential algebras\n---------------------\n\nThe notion of a differential algebra, which can be regarded as an associative algebra with finitely many commuting derivations, was introduced in [@Ritt1950Differential]. Such structures sprang from the classical study of algebraic differential equations with meromorphic functions as coefficients\u00a0[@kolchin1973differential], and the abstraction of differential operators led to the development of differential algebras, which in turn have influenced other areas such as Diophantine geometry, computer algebra and model theory\u00a0[@aschenbrenner2017asymptotic; @buium1975differential; @pillay1995model].\n\nDifferential algebras also have applications in mathematical physics such as Yang\u2013Mills theory. A $K$-cycle over the tensor product algebra is used in [@connes1992metric] to derive the full standard model, in what came to be called the Connes\u2013Lott model, and differential algebras form the basic mathematical structure in the construction. More details about differential algebras used in such an approach can be found
Our algorithm runs efficiently on classical computers and requires only a polynomial number of iterations. We demonstrate the feasibility of our approach by comparing optimal and suboptimal circuits on real devices. We next consider the implementation of the proposed optimization algorithm directly on a quantum computer and overcome inherent barren plateaus by employing a local cost function rather than a global one. By simulating realistic shot noise, we verify that the number of required measurements scales polynomially with the number of qubits. Our work offers a universal method to prepare target states using local gates and represents a" +"---\nabstract: 'Serverless computing is a buzzword commonly used in the world of technology and among developers and businesses. Using the Function-as-a-Service (FaaS) model of serverless, one can easily deploy applications to the cloud and go live in a matter of days; it allows developers to focus on their core business logic, while backend processes such as managing the infrastructure, scaling the application, and updating software and other dependencies are handled by the Cloud Service Provider. One of the features of serverless computing is the ability to scale containers to zero, which results in a problem called cold start. The challenging part is to reduce the cold start latency without consuming extra resources. In this paper, we use SARIMA (Seasonal Auto Regressive Integrated Moving Average), one of the classical time series forecasting models, to predict the time at which the incoming request arrives, and accordingly increase or decrease the number of required containers to minimize resource wastage, thus reducing the function launching time. Finally, we implement PBA (Prediction Based Autoscaler) and compare it with the default HPA (Horizontal Pod Autoscaler), which comes built into Kubernetes. 
The results showed that PBA" +"---\nabstract: '*Multi-robot motion planning* (MRMP) is the fundamental problem of finding non-colliding trajectories for multiple robots acting in an environment, under kinodynamic constraints. Due to its complexity, existing algorithms either utilize simplifying assumptions or are incomplete. This work introduces *kinodynamic conflict-based search* ([[-CBS]{}]{}), a decentralized (decoupled) MRMP algorithm that is general, scalable, and probabilistically complete. The algorithm takes inspiration from successful solutions to the discrete analogue of MRMP over finite graphs, known as *multi-agent path finding* (MAPF). Specifically, we adapt ideas from *conflict-based search* (CBS)\u2013a popular decentralized MAPF algorithm\u2013to the MRMP setting. The novelty in this adaptation is that we work directly in the continuous domain, without the need for discretization. In particular, the kinodynamic constraints are treated natively. [[-CBS]{}]{} plans for each robot individually using a low-level planner and grows a conflict tree to resolve collisions between robots by defining constraints for individual robots. The low-level planner can be any sampling-based, tree-search algorithm for kinodynamic robots, thus lifting existing planners for single robots to the multi-robot setting. We show that [[-CBS]{}]{} inherits the (probabilistic) completeness of the low-level planner. We illustrate the generality and performance of [[-CBS]{}]{} in several case studies and benchmarks.'\nauthor:\n- 'Justin Kottinger$^{1}$,"
Medical Hypotheses, 94, 63-65\\] proposed a hypothesis based on three assumptions against spinal cord regeneration after total transection. Their claims thus conclude that head transplant is not possible. However, using theoretical justifications, we show that head transplant is possible without any information loss. We design a spinal cord logic circuit bridge (SCLCB), which is a reversible logic circuit, and thereby show that there is no information loss.\\\n[**2020 AMS Classifications:**]{} 92C50, 94C11.\\\n[**Keywords:**]{} Spinal cord regeneration, entropy, laws of thermodynamics, head transplant, logic gates, SCLCB.\n\nBackground\n==========\n\nIn 2013, Canavero [@1] announced that full head (or body) transplant of a human is possible. Since then, Canavero and Ren [@2] have been facing criticisms. Often, ideas of Canavero [@1] or related procedural developments by Canavero and Ren [@3] are considered to be medically impossible tasks and unethical. One may find many articles on medical impossibilities as well as
Our code is available at: .'\nauthor:\n- 'Arij Bouazizi$^{1,2}$[^1]'\n- Adrian Holzbock$^2$\n- 'Ulrich Kressel $^{1}$'\n- |\n Klaus Dietmayer$^2$\\\n Vasileios Belagiannis$^3$ [^2]\\\n $^1$Mercedes-Benz AG, Stuttgart, Germany\\\n $^2$Ulm University, Ulm, Germany\\\n $^3$Otto von Guericke University Magdeburg, Magdeburg, Germany\\\n {arij.bouazizi, ulrich.kressel}@mercedes-benz.com, {adrian.holzbock, klaus.dietmayer}@uni-ulm.de, vasileios.belagiannis@ovgu.de\nbibliography:\n- 'ijcai22.bib'\ntitle: 'MotionMixer: MLP-based 3D Human Body Pose Forecasting'\n---\n\nIntroduction\n============\n\nForecasting 3D human motion is at the core of" +"---\nauthor:\n- |\n [Do\u011fay Kamar, Naz\u0131m Kemal \u00dcre and G\u00f6zde \u00dcnal\\\n ]{} [*{kamard, ure, gozde.unal}@itu.edu.tr*\\\n ]{}\ntitle: 'GAN-based Intrinsic Exploration For Sample Efficient Reinforcement Learning'\n---\n\nIn reinforcement learning, an agent learns which action to take depending on the current state by trying to maximize the reward signal provided by the environment [@Sutton1998]. The agent is not given any prior information about the environment or which action to take, but instead, it learns which actions return more reward in a trial-and-error manner. To do so, agents are usually incentivized to explore the state-action space before committing to the known rewards in order to avoid exploitation of a non-optimal solution.\n\nMost common approaches to exploration, such as $\\epsilon$-greedy in Deep-Q-Network (DQN) [@dqn], adding Ornstein\u2013Uhlenbeck to action in Deep Deterministic Policy Gradient (DDPG) [@ddpg] or maximizing entropy over the action space in Asynchronous Advantage Actor-Critic (A3C) [@a3c], rely on increasing the probability of a random action. This approach works fine in an environment with dense reward signals. 
However, in sparse or no reward settings, the agent fails to find a reward signal to guide itself, thus failing to find a solution.\n\nRelated works have focused on providing an extra intrinsic reward" +"---\nabstract: 'Generating expressive and contextually appropriate prosody remains a challenge for modern text-to-speech (TTS) systems. This is particularly evident for long, multi-sentence inputs. In this paper, we examine simple extensions to a Transformer-based FastSpeech-like system, with the goal of improving prosody for multi-sentence TTS. We find that long context, powerful text features, and training on multi-speaker data all improve prosody. More interestingly, they result in synergies. Long context disambiguates prosody, improves coherence, and plays to the strengths of Transformers. Fine-tuning word-level features from a powerful language model, such as BERT, appears to benefit from more training data, readily available in a multi-speaker setting. We look into objective metrics on pausing and pacing and perform thorough subjective evaluations for speech naturalness. Our main system, which incorporates all the extensions, achieves consistently strong results, including statistically significant improvements in speech naturalness over all its competitors.'\naddress: 'Alexa AI, Amazon'\nbibliography:\n- 'refs.bib'\ntitle: |\n Simple and Effective Multi-sentence TTS\\\n with Expressive and Coherent Prosody\n---\n\n**Index Terms**: neural text-to-speech, long-form TTS, multi-speaker TTS, contextual word embeddings, FastSpeech, BERT\n\nIntroduction\n============\n\nRecent advances in neural TTS [@Wangetal2017; @Shenetal2018; @Renetal2019; @Renetal2021] have unlocked a range of applications in which human-like and coherent prosody" +"---\nabstract: 'Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms. 
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner. Distortion type identification and degradation level determination are employed as an auxiliary task to train a deep learning model containing a deep Convolutional Neural Network (CNN) that extracts spatial features, as well as a recurrent unit that captures temporal information. The model is trained using a contrastive loss, and we therefore refer to this training framework and resulting model as **CON**trastive **VI**deo **Q**uality Estima**T**or (CONVIQT). During testing, the weights of the trained model are frozen, and a linear regressor maps the learned features to quality scores in a no-reference (NR) setting. We conduct comprehensive evaluations of the proposed model on multiple VQA databases by analyzing the correlations between model predictions and ground-truth quality ratings, and achieve competitive performance when compared to state-of-the-art NR-VQA models, even though it is not trained on those databases. Our ablation experiments demonstrate that the learned representations are highly robust and generalize well across synthetic and realistic distortions. Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised
Finally, an alternative approach to quantifying the opacity property based on entropy is sketched.\n\n , logic, security, verification\nauthor:\n- Chunyan Mu\n- David Clark\nbibliography:\n- 'BIB-opacLTL.bib'\ntitle: Quantitative Verification of Opacity Properties in Security Systems\n---\n\nIntroduction {#sec:intro}\n============\n\nSecurity of software can be stated and investigated only with respect to a specification of what security means in a given context. The proliferation of software has led to a profusion of security properties, many of which exhibit a great deal of similarity. This has led to a search for common abstractions, and general formalisms for expressing security properties. *Opacity*, first introduced by Mazar\u00e9 [@MazareL:usiunifop], is a promising approach for describing
Instead, we start with two basic principles established in cognitive psychology, use modified training of an attention network, and interpret attention weights in a novel way according to those principles, to infer and distinguish between these two biases. The personalized approach allows detection for specific users who are susceptible to these biases when performing" +"---\nabstract: 'We formulate a Lagrangian hydrodynamics including shear and bulk viscosity in the presence of spin density, and investigate it using the linear response functional formalism. The result is a careful accounting of all sound and vortex interactions close to local equilibrium.'\nauthor:\n- |\n Giorgio Torrieri$^1$, David Montenegro$^2$\\\n **\\\n **\\\ntitle: Linear response hydrodynamics of a relativistic dissipative fluid with spin\n---\n\nIntroduction\n============\n\nRelativistic dissipative hydrodynamics has been part of a vigorous theoretical investigation, triggered by the experimental discovery of a vorticity-correlated hyperon spin polarization and vector resonance spin alignment in heavy ion collisions [@lisa]. Recently, several versions of hydrodynamics with spin have been proposed [@dm1; @dm2; @dm3; @flork1; @hydroryb; @flork2; @flork3; @gale; @hongo; @bec; @shearspin; @nair; @dirk; @kam; @palermo] but basic conceptual questions, such as the role of spin-vorticity coupling, pseudogauge dependence, entropy production and the definition of equilibrium, remain unanswered.\n\nOne approach that has the advantage of an immediate connection with both microscopic statistical mechanics and field theory is Lagrangian hydrodynamics [@hydroryb] analyzed via linear response techniques. In this formalism, one abandons the definition of hydrodynamics as dictated by [*conservation laws*]{}, but instead develops it from a definition of [*a free energy*]{} to be locally minimized." 
+"---\nabstract: 'The classical Coulomb gas model has served as one of the most versatile frameworks in statistical physics, connecting a vast range of phenomena across many different areas. Nonequilibrium generalisations of this model have so far been studied far less extensively. With the abundance of contemporary research into active and driven systems, one would naturally expect that such generalisations of systems with long-ranged Coulomb-like interactions will form a fertile playground for interesting developments. Here, we present two examples of novel macroscopic behaviour that arise from nonequilibrium fluctuations in long-range interacting systems, namely (1) unscreened long-ranged correlations in strong electrolytes driven by an external electric field and the associated fluctuation-induced forces in the confined Casimir geometry, and (2) out-of-equilibrium critical behaviour in self-chemotactic models that incorporate the particle polarity in the chemotactic response of the cells. Both of these systems have nonlocal Coulomb-like interactions among their constituent particles, namely, the electrostatic interactions in the case of the driven electrolyte, and the chemotactic forces mediated by fast-diffusing signals in the case of self-chemotactic systems. The results presented here hint at the rich phenomenology of nonequilibrium effects that can arise from strong fluctuations in Coulomb interacting systems, and a rich variety of" +"---\nauthor:\n- 'Matthew Ho [^1]'\n- Michelle Ntampaka\n- Markus Michael Rau\n- Minghan Chen\n- Alexa Lansberry\n- Faith Ruehle\n- Hy Trac\nbibliography:\n- 'bibliography.bib'\ntitle: The Dynamical Mass of the Coma Cluster from Deep Learning\n---\n\n**In 1933, Fritz Zwicky\u2019s famous investigations of the mass of the Coma cluster led him to infer the existence of dark matter [@1933AcHPh...6..110Z]. 
His fundamental discoveries have proven to be foundational to modern cosmology; as we now know, such dark matter makes up 85% of the matter and 25% of the mass-energy content in the universe. Galaxy clusters like Coma are massive, complex systems of dark matter in addition to hot ionized gas and thousands of galaxies, and serve as excellent probes of the dark matter distribution. However, empirical studies show that the total mass of such systems remains elusive and difficult to precisely constrain. Here, we present new estimates for the dynamical mass of the Coma cluster based on Bayesian deep learning methodologies developed in recent years. Using our novel data-driven approach, we predict Coma\u2019s ${M_\\mathrm{200c}}$ mass to be $10^{15.10 \\pm 0.15}\\ {h^{-1}{\\mathrm{M}_\\odot}}$ within a radius of $1.78 \\pm 0.03\\ h^{-1}\\mathrm{Mpc}$ of its center. We show that our predictions
We show that the sequence of empirical measures of the particle system satisfies a large deviation principle as the number of particles grows to infinity, and show how this implies convergence of the empirical measure and the associated Nikaid\u00f4-Isoda error, complementing existing law of large numbers results.'\naddress: KTH Royal Institute of Technology\nauthor:\n- Viktor Nilsson\n- Pierre Nyquist\nbibliography:\n- 'references.bib'\ntitle: 'A note on large deviations for interacting particle dynamics for finding mixed equilibria in zero-sum games'\n---\n\n[^1]\n\nIntroduction {#sec:intro}\n============\n\nZero-sum games are at the heart of game theory and an important concept in a range of areas of applied mathematics. The development of efficient methods for" +"---\nabstract: 'Reconfigurable Intelligent Surface (RIS) has become a popular technology to improve the capability of a THz multiuser Multi-input multi-output (MIMO) communication system. THz wave characteristics, on the other hand, restrict THz beam coverage on RIS when using a uniform planar array (UPA) antenna. In this study, we propose a dynamic RIS subarray structure to improve the performance of a THz MIMO communication system. In more detail, an RIS is divided into several RIS subarrays according to the number of users. Each RIS subarray is paired with a user and only reflects beams to the corresponding user. Based on the structure of RIS, we first propose a weighted minimum mean square error - RIS local search (WMMSE-LS) scheme, which requires that each RIS element has limited phase shifts. To improve the joint beamforming performance, we further develop an adaptive Block Coordinate Descent (BCD)-aided algorithm, an iterative optimization method. 
Numerical results demonstrate the effectiveness of the dynamic RIS subarray structure and the adaptive BCD-aided joint beamforming scheme and also show the merit of our proposed system.'\nauthor:\n- \n- \n- \ntitle: A Dynamic Subarray Structure in Reconfigurable Intelligent Surfaces for TeraHertz Communication Systems\n---\n\nTeraHertz (THz), Reconfigurable Intelligent Surfaces (RIS), multiple-input-multiple-output" +"---\nabstract: 'We study the problem of learning the parameters for the Hamiltonian of a quantum many-body system, given limited access to the system. In this work, we build upon recent approaches to Hamiltonian learning via derivative estimation. We propose a protocol that improves the scaling dependence of prior works, particularly with respect to parameters relating to the structure of the Hamiltonian (e.g., its locality $k$). Furthermore, by deriving exact bounds on the performance of our protocol, we are able to provide a precise numerical prescription for theoretically optimal settings of hyperparameters in our learning protocol, such as the maximum evolution time (when learning with unitary dynamics) or minimum temperature (when learning with Gibbs states). Thanks to these improvements, our protocol is practical for large problems: we demonstrate this with a numerical simulation of our protocol on an 80-qubit system.'\nauthor:\n- Andi Gu\n- Lukasz Cincio\n- 'Patrick J. Coles'\nbibliography:\n- 'references.bib'\ntitle: Practical Black Box Hamiltonian Learning\n---\n\nIntroduction\n============\n\nAn important task for learning about many-body quantum systems is to learn the associated Hamiltonian operator efficiently (i.e., without requiring resources that scale exponentially in system size). There are multiple physical motivations for this problem. 
First, for" +"---\nabstract: 'This paper describes the THUEE team\u2019s speech recognition system for the IARPA Open Automatic Speech Recognition Challenge (OpenASR21), with further experimental explorations. We achieve outstanding results under both the Constrained and Constrained-plus training conditions. For the Constrained training condition, we construct our basic ASR system based on the standard hybrid architecture. To alleviate the Out-Of-Vocabulary (OOV) problem, we extend the pronunciation lexicon using Grapheme-to-Phoneme (G2P) techniques for both OOV and potential new words. Standard acoustic model structures such as CNN-TDNN-F and CNN-TDNN-F-A are adopted. In addition, multiple data augmentation techniques are applied. For the Constrained-plus training condition, we use the self-supervised learning framework wav2vec2.0. We experiment with various fine-tuning techniques with the Connectionist Temporal Classification (CTC) criterion on top of the publicly available pre-trained model XLSR-53. We find that the frontend feature extractor plays an important role when applying the wav2vec2.0 pre-trained model to the encoder-decoder based CTC/Attention ASR architecture. Extra improvements can be achieved by using the CTC model fine-tuned in the target language as the frontend feature extractor.'\naddress: |\n Beijing National Research Center for Information Science and Technology\\\n Department of Electronic Engineering, Tsinghua University, Beijing 100084, China\nbibliography:\n- 'THUEE.bib'\ntitle: The THUEE System Description
Using Kinetically Limited Minimization (KLM), an unconstrained crystal structure prediction algorithm, and prototype sampling based on first-principles calculations, we have discovered 17 new *Ferrovalley* materials (rare-earth iodides RI$_2$, where R is a rare-earth element belonging to Sc, Y, or La-Lu, and I is Iodine). The rare-earth iodides are layered and demonstrate 2H, 1T, or 1T$_d$ phase as the ground-state in bulk, analogous to transition metal dichalcogenides (TMDCs). The calculated exfoliation energy of monolayers is comparable to that of graphene and TMDCs, suggesting possible experimental synthesis. The monolayers in the 2H phase exhibit two-dimensional ferromagnetism due to unpaired electrons in $d$ and $f$ orbitals. Throughout the rare-earth series, $d$ bands show valley polarization at $K$ and $\\overline{K}$ points in the Brillouin zone near the Fermi level. Due to strong magnetic exchange interaction and spin-orbit coupling, large intrinsic valley polarization in the range of 15-143 meV without external stimuli is observed, which can be tuned and enhanced by applying a biaxial strain. These valleys can selectively be probed and manipulated for information storage and processing, potentially offering superior performance" +"---\nabstract: 'Accounting for geometry-induced changes in the electronic distribution in molecular simulation is important for capturing effects such as charge flow, charge anisotropy and polarization. Multipolar force fields have demonstrated their ability to qualitatively and correctly represent chemically significant features such as sigma holes. It has also been shown that off-center point charges offer a compact alternative with similar accuracy. Here it is demonstrated that allowing relocation of charges within a minimally distributed charge model (MDCM) with respect to their reference atoms is a viable route to capture changes in the molecular charge distribution depending on geometry. 
The approach, referred to as \u201cflexible MDCM\u201d (fMDCM), is validated on a number of small molecules and provides accuracies in the electrostatic potential (ESP) of 0.5 kcal/mol on average compared with reference data from electronic structure calculations, whereas MDCM and point charges have root mean squared errors that are a factor of 2 to 5 higher. In addition, MD simulations in the $NVE$ ensemble using fMDCM for a box of flexible water molecules with periodic boundary conditions show a width of 0.1 kcal/mol for the fluctuation around the mean at 300 K on the 10 ns time scale. The accuracy in capturing the" +"---\nabstract: 'In this work, we revisit the planar restricted four-body problem to study the dynamics of an infinitesimal mass under the gravitational force produced by three heavy bodies with unequal masses, forming an equilateral triangle configuration. We unify known results about the existence and linear stability of the equilibrium points of this problem, which have been obtained earlier either as relative equilibria or as a central configuration of the planar restricted $(3 + 1)$-body problem. It is the first attempt in this direction. A systematic numerical investigation is performed to obtain the resonance curves in the mass space. We use these curves to determine the boundary between the domains of linear stability and instability. The characterization of the total number of stable points found inside the stability domain is discussed.'\naddress:\n- 'Dept. de F\u00edsica, UAM\u2013Iztapalapa, 09340 Iztapalapa, Mexico City, Mexico.'\n- 'Dept. 
de Matem\u00e1ticas, UAM\u2013Iztapalapa, 09340 Iztapalapa, Mexico City, Mexico.'\nauthor:\n- Jos\u00e9 Alejandro Zepeda Ram\u00edrez\n- 'Martha Alvarez-Ram\u00edrez'\ntitle: 'Equilibrium points and their linear stability in the planar equilateral restricted four-body problem: A review and new results'\n---\n\nIntroduction\n============\n\nThe Newtonian planar $n$-body problem concerns the study of the dynamics of $n$
Yu'\nbibliography:\n- './Outils\\_latex/biblio\\_p.bib'\ntitle: Characterising the AGB bump and its potential to constrain mixing processes in stellar interiors\n---\n\n[ It is well-known that the AGB bump provides valuable information on the internal structure of low-mass stars, particularly on mixing processes such as core overshooting during the core He-burning phase. Here, we investigate the dependence on stellar mass and metallicity of the calibration of stellar models to observations.]{} [In this context, we analysed $\\sim$ 4,000 evolved giants observed by [*Kepler*]{}\u00a0and [TESS]{}, including red-giant branch (RGB) stars and AGB stars, for which asteroseismic and spectrometric data are available. By using statistical mixture models, we detected the AGBb both in frequency at maximum oscillation power ${\\nu{_{\\mathrm{max}}}}$ and in effective temperature ${T{_{\\mathrm{eff}}}}$. Then, we used the Modules for Experiments in Stellar Astrophysics ([MESA]{}) stellar evolution code to model AGB stars and match the AGBb occurrence with observations.]{} [From observations, we could derive the AGBb location in 15 bins of mass and metallicity. We noted that the higher the mass, the later the AGBb occurs in the evolutionary track, which agrees with theoretical works. Moreover, we found a slight increase" +"---\nabstract: 'We study integrable and superintegrable systems with magnetic field possessing quadratic integrals of motion on the three-dimensional Euclidean space. In contrast with the case without vector potential, the corresponding integrals may no longer be connected to separation of variables in the Hamilton\u2013Jacobi equation and can have more general leading order terms. We focus on two cases extending the physically relevant cylindrical\u2013 and spherical\u2013type integrals. We find three new integrable systems in the generalized cylindrical case but none in the spherical one. 
We conjecture that this is related to the presence, respectively absence, of a maximal abelian Lie subalgebra of the three-dimensional Euclidean algebra generated by first order integrals in the limit of vanishing magnetic field. By investigating superintegrability, we find only one (minimally) superintegrable system among the integrable ones. It does not separate in any orthogonal coordinate system. This system provides a mathematical model of a helical undulator placed in an infinite solenoid.'\nauthor:\n- 'O. Kub\u016f$^1$, A. Marchesiello$^2$ and L. \u0160nobl$^1$'\ntitle: 'New classes of quadratically integrable systems in magnetic fields: the generalized cylindrical and spherical cases'\n---\n\n[$^1$Czech Technical University in Prague, Faculty of Nuclear Sciences and Physical Engineering, Department of Physics, B\u0159ehov\u00e1 7, 115 19 Prague" +"---\nabstract: 'Nominally anhydrous minerals in rocky planet mantles can sequester multiple Earth-oceans\u2019 worth of water. Mantle water storage capacities therefore provide an important constraint on planet water inventories. Here we predict silicate mantle water capacities from the thermodynamically-limited solubility of water in their constituent minerals. We report the variability of upper mantle and bulk mantle water capacities due to (i) host star refractory element abundances that set mantle mineralogy, (ii) realistic mantle temperature scenarios, and (iii) planet mass. We find that transition zone minerals almost unfailingly dominate the water capacity of the mantle for planets of up to $\\sim$1.5 Earth masses, possibly creating a bottleneck to deep water transport, although the transition zone water capacity discontinuity is less pronounced at lower Mg/Si. The pressure of the ringwoodite-perovskite phase boundary defining the lower mantle is roughly constant, so the contribution of the upper mantle reservoir becomes less important for larger planets. 
If perovskite and postperovskite are relatively dry, then increasingly massive rocky planets would have increasingly smaller fractional interior water capacities. In practice, our results represent initial water concentration profiles in planetary mantles where their primordial magma oceans are water-saturated. This work is a step towards understanding planetary deep water" +"---\nauthor:\n- 'S. Farrens [^1]'\n- 'A. Guinot'\n- 'M. Kilbinger'\n- 'T. Liaudat'\n- 'L. Baumont'\n- 'X. Jimenez'\n- 'A. Peel'\n- 'A. Pujol'\n- 'M. Schmitz'\n- 'J.-L. Starck'\n- 'A. Z. Vitorelli'\nbibliography:\n- 'ref.bib'\ntitle: 'ShapePipe: A modular weak-lensing processing and analysis pipeline'\n---\n\nIntroduction\n============\n\nWeak gravitational lensing, the apparent distortion of the shapes of galaxies caused by the bending of light by mass along the line of sight, has been demonstrated to be a powerful probe of cosmology [@kilbinger:15; @mandelbaum:18b]. However, numerous steps are required in order to go from raw survey data to competitive constraints on cosmological parameters. Recent surveys, such as the Canada-France-Hawai\u2019i Telescope Legacy Survey [@erben:13], the Hyper Suprime Cam (HSC) survey [@mandelbaum:18a], the Kilo-Degree Survey [@Kuijken:19], and the Dark Energy Survey [DES; @gatti:21], have carried out detailed weak-lensing analyses. Upcoming surveys, such as [*Euclid*]{} [@laureijs:11] and the Vera C. Rubin Observatory Legacy Survey of Space and Time [LSST; @ivezic:19], will also aim to tighten cosmological constraints with weak lensing. It is clear that weak lensing remains at the forefront of cosmological studies, and hence tools for weak-lensing analysis will remain in demand for years to come.\n\nThere" +"---\nabstract: 'Optical hyperparametric oscillation based on the third-order nonlinearity is one of the most significant mechanisms to generate coherent electromagnetic radiation and produce quantum states of light. 
Advances in dispersion-engineered high-$Q$ microresonators allow for generating signal waves far from the pump and decrease the oscillation power threshold to submilliwatt levels. However, the pump-to-signal conversion efficiency and absolute signal power are low, fundamentally limited by parasitic mode competition and the attainable cavity intrinsic $Q$ to coupling $Q$ ratio, i.e., $Q_{\\rm i}/Q_{\\rm c}$. Here, we use Friedrich-Wintgen bound states in the continuum (BICs) to overcome the physical challenges in an integrated microresonator-waveguide system. As a result, on-chip coherent hyperparametric oscillation is generated in BICs with unprecedented conversion efficiency and absolute signal power. This work not only opens a path to generate high-power and efficient continuous-wave electromagnetic radiation in Kerr nonlinear media but also enhances the understanding of the microresonator-waveguide system, an elementary unit of modern photonics.'\nauthor:\n- Fuchuan Lei\n- Zhichao Ye\n- Krishna Twayana\n- Yan Gao\n- Marcello Girardi\n- '\u00d3skar B. Helgason'\n- Ping Zhao\n- 'Victor Torres-Company'\nbibliography:\n- 'ref.bib'\ntitle: Hyperparametric Oscillation via Bound States in the Continuum\n---\n\nOptical hyperparametric oscillation (H-OPO) emerges in a
In a matter-dominated universe, the expansion speed of the Universe should gradually decrease, but observations suggest that the cosmic expansion is accelerated by some unknown component of the Universe, dark energy (DE), e.g. [@1998AJ....116.1009R; @1999ApJ...517..565P].\n\nThe role of DE is to create a repulsive force against gravity. DE can be described as a perfect fluid with equation of state (EoS) $w=p/\\rho$, which is the ratio between pressure $p$ and density $\\rho$ of the fluid. The simplest model is the presence of a positive constant in the Einstein field equations, which can be modeled as a fluid with $w=-1$.\n\nHowever, as observations become more and more precise, some tensions arise such as the Hubble tension between local measurements of the Hubble-Lema\u00eetre constant $H_0$ [@2018ApJ...861..126R; @2021arXiv211204510R] and inferences of this parameter from high-redshift experiments . These tensions, in addition to" +"---\nabstract: 'Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples (AEs), which are maliciously designed to fool target models. Normal examples (NEs) with imperceptible adversarial perturbations added can be a security threat to DNNs. Although the existing AEs detection methods have achieved high accuracy, they fail to exploit the information in the detected AEs. Thus, based on high-dimension perturbation extraction, we propose a model-free AEs detection method, the whole process of which is free from querying the victim model. Research shows that DNNs are sensitive to high-dimension features. The adversarial perturbation hidden in the adversarial example is a high-dimension feature which is highly predictive and non-robust. DNNs learn more details from high-dimension data than others. In our method, the perturbation extractor can extract the adversarial perturbation from AEs as a high-dimension feature; the trained AEs discriminator then determines whether the input is an AE. 
Experimental results show that the proposed method can not only detect the adversarial examples with high accuracy, but also detect the specific category of the AEs. Meanwhile, the extracted perturbation can be used to recover the AEs to NEs.'\nauthor:\n- Mingyu Dong\n- Jiahao Chen\n- Diqun" +"---\nabstract: 'We have studied the decoherence mechanism in a fermion and scalar quantum field theory with the Yukawa interaction in the Minkowski spacetime, using the non-equilibrium effective field theory formalism appropriate for open systems. The scalar field is treated as the system whereas the fermions as the environment. As the simplest realistic scenario, we assume that an observer measures only the Gaussian 2-point correlator for the scalar field. The cause of decoherence and the subsequent entropy generation is the ignorance of information stored in higher-order correlators, Gaussian and non-Gaussian, of the system and the surrounding. Using the 2-loop 2-particle irreducible effective action, we construct the renormalised Kadanoff-Baym equation, i.e., the equation of motion satisfied by the 2-point correlators in the Schwinger-Keldysh formalism. These equations contain the non-local self-energy corrections. We then compute the statistical propagator in terms of the 2-point functions. Using the relationship of the statistical propagator with the phase space area, we next compute the von Neumann entropy, as a measure of the decoherence or effective loss of information for the system. We have obtained the variation of the entropy with respect to various relevant parameters. 
We also discuss the qualitative similarities and differences of our results" +"---\nabstract: 'We develop a set of machine-learning based cosmological emulators to obtain fast model predictions for the $C(\\ell)$ angular power spectrum coefficients characterising tomographic observations of galaxy clustering and weak gravitational lensing from multi-band photometric surveys (and their cross-correlation). A set of neural networks are trained to map cosmological parameters into the coefficients, achieving a speed-up of $\\mathcal{O}(10^3)$ in computing the required statistics for a given set of cosmological parameters, with respect to standard Boltzmann solvers, with an accuracy better than 0.175% ($<$0.1% for the weak lensing case). This corresponds to $\\sim$ 2% or less of the statistical error bars expected from a typical Stage IV photometric survey. Such overall improvement in speed and accuracy is obtained through (*i*) a specific pre-processing optimisation, ahead of the training phase, and (*ii*) a more effective neural network architecture, compared to previous implementations.'\nauthor:\n- |\n Marco Bonici$^{1}$, Luca Biggio$^{2}$, Carmelita Carbone$^{1}$, Luigi Guzzo$^{3,4}$\\\n $^{1}$INAF-IASF Milano, Via Alfonso Corti 12, I-20133 Milano, Italy\\\n $^{2}$Data Analytics Lab, Institute of Machine Learning, Department of Computer Science, ETH Z\u00fcrich, Switzerland\\\n $^{3}$Dipartimento di Fisica \u201cAldo Pontremoli\", Universit\u00e0 degli Studi di Milano, & INFN, Sez. di Milano, Via Celoria 16, I-20133 Milano, Italy\\\n $^{4}$INAF, Osservatorio Astronomico di Brera," +"---\nabstract: 'Record linkage algorithms match and link records from different databases that refer to the same real-world entity based on direct and/or quasi-identifiers, such as name, address, age, and gender, available in the records. 
Since these identifiers generally contain personally identifiable information (PII) about the entities, record linkage algorithms need to be developed with privacy constraints. Many research studies, collectively known as privacy-preserving record linkage (PPRL), have been conducted to perform the linkage on encoded and/or encrypted identifiers. Differential privacy (DP) combined with computationally efficient encoding methods, e.g. Bloom filter encoding, has been used to develop PPRL with provable privacy guarantees. The standard DP notion does not, however, address other constraints, among which the most important ones are fairness-bias and the cost of linkage in terms of the number of record pairs to be compared. In this work, we propose new notions of fairness-constrained DP and fairness and cost-constrained DP for PPRL and develop a framework for PPRL with these new notions of DP combined with Bloom filter encoding. We provide theoretical proofs for the new DP notions for fairness and cost-constrained PPRL and experimentally evaluate them on two datasets containing person-specific data. Our experimental results show that with these new notions" +"---\nabstract: 'Spectrum scarcity has been a major concern for achieving the desired quality of experience (QoE) in next-generation (5G/6G and beyond) networks supporting a massive volume of mobile and IoT devices with low-latency and seamless connectivity. Hence, spectrum sharing systems have been considered as a major enabler for next-generation wireless networks in meeting QoE demands. Specifically, the 3rd generation partnership project (3GPP) has standardized coexistence of the 4G LTE License Assisted Access (LAA) network with WiFi in the unlicensed 5 GHz bands, and the 5G New Radio Unlicensed (NR-U) with WiFi 6/6E in 6 GHz bands. While most current coexistence solutions and standards focus on performance improvement and QoE optimization, the emerging security challenges of such network environments have been ignored in the literature. 
The security framework of standalone networks (either 5G or WiFi) assumes the ownership of entire network resources from spectrum to core functions. Hence, all accesses to the network shall be authenticated and authorized within the intra-network security system and are deemed illegal otherwise. However, coexistence network environments can lead to unprecedented security vulnerabilities and breaches as the standalone networks shall tolerate unknown and out-of-network accesses, specifically in the medium access. In this paper, for the first" +"---\nabstract: 'In the context of quantum integrated photonics, this work investigates the quantum properties of multimode light generated by silicon and silicon nitride micro-resonators pumped in the pulsed regime. The developed theoretical model, formulated in terms of the morphing supermodes, provides a comprehensive description of the generated quantum states. Remarkably, it shows that a full measurement of states carrying optimal squeezing levels is not accessible to standard homodyne detection, thus leaving part of the generated quantum features hidden. By presenting and discussing this behaviour, as well as possible strategies to amend it, this work proves itself essential to future quantum applications exploiting micro-resonators as sources of multimode states.'\nauthor:\n- '\u00c9lie Gouzien$^{1,2}$, Laurent Labont\u00e9$^{2}$, Alessandro Zavatta$^{3,4}$, Jean Etesse$^{2}$, S\u00e9bastien Tanzilli$^{2}$, Virginia D\u2019Auria$^{2,5}$, Giuseppe Patera$^{6}$'\nbibliography:\n- 'biblio\\_ringSPOPO.bib'\ntitle: 'Hidden and detectable multimode squeezing from micro-resonators'\n---\n\nSilicon (Si) and Silicon Nitride (SiN) quantum photonics offer a valuable opportunity to propel practical quantum optical technologies thanks to high density integration of high-performance functions over small footprint chips\u00a0[@Wang2020]. 
In recent years, particular interest has been driven by the possibility of exploiting their optical nonlinearities to generate on-chip highly multimode entanglement among frequency-time modes. Four-wave mixing (FWM) in silicon-based rings or disk-shaped" +"---\naddress: |\n Physik-Institut, Universit\u00e4t Z\u00fcrich, Winterthurerstrasse 190,\\\n CH-8057 Z\u00fcrich, Switzerland\nauthor:\n- 'J. M. LIZANA'\ntitle: 'FLAVOR HIERARCHIES AND B-ANOMALIES FROM 5D'\n---\n\nIntroduction\n============\n\nFlavor is an intriguing feature of the Standard Model (SM). It is an exact symmetry of the gauge sector only broken by the Yukawa couplings, which have a very particular hierarchy. Although the explanation of this structure could be postponed to some very high scale of New Physics, the observation of the $B$-anomalies[@Cornella:2021sby] in recent years may suggest hints of New Physics connected to a flavor hierarchy explanation.[@Bordone:2017bld]\n\n$B$-anomalies are deviations in the $B$-meson semileptonic decays that break lepton flavor universality, both in neutral current $b\\to sll$ and charged current $b \\to c\\tau \\nu$ processes. The neutral current anomalies point to a New Physics effective scale $\\sim 40\\,{\\rm TeV}$, while for the charged current ones, the effective scale is much lower, $\\sim 3\\,{\\rm TeV}$. The difference in these scales, together with the difference in the family number of the fields involved in each process ($3_q\\to 2_l 2_l 2_q$ versus $3_q\\to 3_l 3_l 2_q$) suggests that a combined explanation should be mainly coupled to the third family. If we assume that the New Physics
In this paper, we provide a detailed analysis of CompCode from the perspective of linear discriminant analysis (LDA) for the first time. A non-trivial sufficient condition under which the CompCode is optimal in the sense of Fisher\u2019s criterion is presented. Based on our analysis, we examine the statistics of palmprints and conclude that CompCode deviates from the optimal condition. To mitigate the deviation, we propose a new method called \u201cClass-Specific CompCode\" that improves CompCode by excluding non-palm-line areas from matching. A non-linear mapping on the competitive code is also applied in this method to further enhance accuracy. Experiments on two public databases demonstrate the effectiveness of the proposed method.'\naddress: 'School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, P.R. China.'\nauthor:\n- Lingfei Song\n- Hua Huang\nbibliography:\n- 'reference.bib'\ntitle: |\n Revisiting Competitive Coding Approach for Palmprint Recognition:\\\n A Linear Discriminant Analysis Perspective\n---\n\nPalmprint" +"---\nabstract: 'We present an in-depth evaluation of four commercially available Speech-to-Text (STT) systems for Swiss German. The systems are anonymized and referred to as system a, b, c and d in this report. We compare the four systems to our STT models, referred to as FHNW in the following, and provide details on how we trained our model. To evaluate the models, we use two STT datasets from different domains: the Swiss Parliament Corpus (SPC) test set and the STT4SG-350 corpus, which contains texts from the news sector with an even distribution across seven dialect regions. We provide a detailed error analysis to detect the strengths and weaknesses of the different systems. On both datasets, our model achieves the best results for both the WER (word error rate) and the BLEU (bilingual evaluation understudy) scores. 
On the SPC test set, we obtain a BLEU score of $0.607$, whereas the best commercial system reaches a BLEU score of $0.509$. On the STT4SG-350 test set, we obtain a BLEU score of $0.722$, while the best commercial system achieves a BLEU score of $0.568$. However, we would like to point out that this analysis is somewhat limited by the domain-specific idiosyncrasies of" +"---\nabstract: |\n This paper introduces an $hp$-adaptive multi-element stochastic collocation method, which additionally allows to re-use existing model evaluations during either $h$- or $p$-refinement. The collocation method is based on weighted Leja nodes. After $h$-refinement, local interpolations are stabilized by adding Leja nodes on each newly created sub-element in a hierarchical manner. in the context of forward and inverse uncertainty quantification to handle non-smooth or strongly localised response surfaces. \u00a0\\\n \u00a0\\\n [***keywords\u2014*** $hp$-adaptivity; multi-element approximation; stochastic collocation; surrogate modeling; uncertainty quantification]{}\nauthor:\n- 'Armin Galetzka$^1$, Dimitrios Loukrezis$^{1,2,4}$, Niklas Georg$^{1,2,3}$, Herbert De Gersem$^{1,2}$, Ulrich R\u00f6mer$^{3}$'\ndate: |\n \\\n \\\n \\\n \\\ntitle: 'An hp-adaptive multi-element stochastic collocation method for surrogate modeling with information re-use'\n---\n\nIntroduction {#sec:introduction}\n============\n\nPrediction and optimization with computational models is now routinely carried out in many fields of science and engineering. For instance, reliable predictions require the propagation and quantification of uncertainties[@matthies2007quantifying] related to parameter calibration from noisy and limited data. The so-called non-intrusive approach is widely adopted, where computer codes are treated as black box functions from model parameters to . In this way, restructuring and modifying complex software is avoided. 
Exemplarily, in forward [@lee2009comparative], also referred to as uncertainty propagation, non-intrusive methods evaluate" +"---\nabstract: 'Many performance gains achieved by massive multiple-input and multiple-output depend on the accuracy of the downlink channel state information (CSI) at the transmitter (base station), which is usually obtained by estimating at the receiver (user terminal) and feeding back to the transmitter. The overhead of CSI feedback occupies substantial uplink bandwidth resources, especially when the number of the transmit antennas is large. Deep learning (DL)-based CSI feedback refers to CSI compression and reconstruction by a DL-based autoencoder and can greatly reduce feedback overhead. In this paper, a comprehensive overview of state-of-the-art research on this topic is provided, beginning with basic DL concepts widely used in CSI feedback and then categorizing and describing some existing DL-based feedback works. The focus is on novel neural network architectures and utilization of communication expert knowledge to improve CSI feedback accuracy. Works on bit-level CSI feedback and joint design of CSI feedback with other communication modules are also introduced, and some practical issues, including training dataset collection, online training, complexity, generalization, and standardization effect, are discussed. At the end of the paper, some challenges and potential research directions associated with DL-based CSI feedback in future wireless communication systems are identified.'\nauthor:\n- '[^1]" +"---\nabstract: 'The late-time behavior of spectral form factor (SFF) encodes the inherent discreteness of a quantum system, which should be generically non-vanishing. We study an index analog of the microcanonical spectrum form factor in four-dimensional $\\mathcal{N}=4$ super Yang-Mills theory. In the large $N$ limit and at large enough energy, the most dominant saddle corresponds to the black hole in the AdS bulk. 
This gives rise to the slope that decreases exponentially for a small imaginary chemical potential, which is a natural analog of an early time. We find that the \u2018late-time\u2019 behavior is governed by the multi-cut saddles that arise in the index matrix model, which are non-perturbatively sub-dominant at early times. These saddles become dominant at late times, preventing the SFF from decaying. These multi-cut saddles correspond to the orbifolded Euclidean black holes in the AdS bulk, therefore giving the geometrical interpretation of the \u2018ramp.\u2019 Our analysis is done in the standard AdS/CFT setting without ensemble average or wormholes.'\nauthor:\n- Sunjin Choi\n- Seok Kim\n- Jaewon Song\nbibliography:\n- 'refs.bib'\ntitle: Supersymmetric Spectral Form Factor and Euclidean Black Holes\n---\n\nIntroduction\n============\n\nOne form of the information paradox in quantum gravity involves the long time behavior" +"---\nabstract: 'The Rice-Mele model has two topological and spatially-inversion symmetric phases, namely the Su-Schrieffer-Heeger (SSH) phase with alternating hopping only, and the charge-density-wave (CDW) phase with alternating energies only. The chiral symmetry of the SSH phase is robust in position space, so that it is preserved in the presence of the ends of a finite system and of textures in the alternating hopping. However, the chiral symmetry of the CDW wave phase is nonsymmorphic, resulting in a breaking of the bulk topology by an end or a texture in the alternating energies. We consider the presence of solitons (textures in position space separating two degenerate ground states) in finite systems with open boundary conditions. We identify the parameter range under which an atomically-sharp soliton in the CDW phase supports a localized state which lies within the band gap, and we calculate the expectation value $p_y$ of the nonsymmorphic chiral operator for this state, and the soliton electric charge. 
As the spatial extent of the soliton increases beyond the atomic limit, the energy level approaches zero exponentially quickly or inversely proportionally to the width, depending on microscopic details of the soliton texture. In both cases, the difference of $p_y$ from" +"---\nabstract: 'Given a set of $n\\geq 1$ unit disk robots in the Euclidean plane, we consider the fundamental problem of providing mutual visibility to them: the robots must reposition themselves to reach a configuration where they all see each other. This problem arises under obstructed visibility, where a robot cannot see another robot if there is a third robot on the straight line segment between them. This problem was solved by Sharma [*et al.*]{}\u00a0[@sharma2018complete] in the luminous robots model, where each robot is equipped with an externally visible light that can assume colors from a fixed set of colors, using 9 colors and $O(n)$ rounds. In this work, we present an algorithm that requires only 2 colors and $O(n)$ rounds. The number of colors is optimal since at least two colors are required for point robots\u00a0[@di2017mutual].'\nauthor:\n- 'Rusul J. Alsaedi'\n- Joachim Gudmundsson\n- |\n \\\n Andr\u00e9 van Renssen\nbibliography:\n- 'references.bib'\ntitle: The Mutual Visibility Problem for Fat Robots\n---\n\nIntroduction\n============\n\nWe consider a set of $n$ unit disk robots in ${{\\mathbb{R}}}^2$ and aim to position these robots in such a way that each pair of robots can see each other (see Fig.\u00a0[\\[fig:16\\]]{}" +"---\nabstract: |\n Due to its powerful feature learning capability and high efficiency, deep hashing has achieved great success in large-scale image retrieval. Meanwhile, extensive works have demonstrated that *deep neural networks* (DNNs) are susceptible to adversarial examples, and exploring adversarial attack against deep hashing has attracted many research efforts. 
Nevertheless, backdoor attack, another famous threat to DNNs, has not been studied for deep hashing yet. Although various backdoor attacks have been proposed in the field of image classification, existing approaches failed to realize a truly imperceptible backdoor attack that enjoys invisible triggers and a clean label setting simultaneously, and they cannot meet the intrinsic demands of backdoor attacks on image retrieval.\n\n In this paper, we propose BadHash, the first imperceptible backdoor attack against deep hashing, which can effectively generate invisible and input-specific poisoned images with clean labels. We first propose a new *conditional generative adversarial network* (cGAN) pipeline to effectively generate poisoned samples. For any given benign image, it seeks to generate a natural-looking poisoned counterpart with a unique invisible trigger. In order to improve the attack effectiveness, we introduce a label-based contrastive learning network LabCLN to exploit the semantic characteristics of different labels, which are subsequently used for confusing and misleading" +"---\nauthor:\n- 'Olaf Kaczmarek and Hai-Tao Shu'\ntitle: Spectral and transport properties from lattice QCD\n---\n\nMotivation {#sec:1}\n==========\n\nDirect photons and leptons are produced in all stages of the heavy-ion collision and the subsequent evolution of the produced medium. Therefore they are good probes that carry information about the evolving medium, including information on the quark gluon plasma phase. As a matter of fact, these two objects form the basis for the electroweak interaction. Since their coupling to the QGP constituents is weak [@David:2006sr], once produced they escape from the interaction region hardly changed. The experimental quantities characterizing the thermal production rate of these two objects are called thermal dilepton rates and thermal photon rates. 
Fig.\u00a0\\[fig8a\\]\u00a0(left) shows an example of experimentally measured dilepton yields, in this case from electron-positron pairs detected at the PHENIX experiment [@Adare:2009qk], compared to the expected contributions from various hadronic decays. The excess in the low-mass region below 1\u00a0GeV$/c^2$ is expected to originate from the in-medium modification of the $\\rho$-meson, which may be related to the restoration of chiral symmetry. Fig.\u00a0\\[fig8a\\]\u00a0(right) shows a sketch [@Fleuret:2009zza] of the expected photon rates from different stages of the medium produced in heavy-ion collisions." +"---\nabstract: 'Magnetic holes are plasma structures that trap a large number of particles in a magnetic field that is weaker than the field in its surroundings. The unprecedented high time-resolution observations by NASA\u2019s Magnetospheric Multi-Scale (MMS) mission enable us to study the particle dynamics in magnetic holes in the Earth\u2019s magnetosheath in great detail. [We reveal the local generation mechanism of whistler waves by a combination of Landau-resonant and cyclotron-resonant wave-particle interactions of electrons in response to the large-scale evolution of a magnetic hole.]{} As the magnetic hole converges, a pair of counter-streaming electron beams form near the hole\u2019s center as a consequence of the combined action of betatron and Fermi effects. The beams trigger the generation of slightly-oblique whistler waves. Our conceptual prediction is supported by a remarkable agreement between our observations and numerical predictions from the Arbitrary Linear Plasma Solver (ALPS). Our study shows that wave-particle interactions are fundamental to the evolution of magnetic holes in space and astrophysical plasmas.'\nauthor:\n- Wence Jiang\n- Daniel Verscharen\n- Hui Li\n- Chi Wang\n- 'Kristopher G. 
Klein'\ntitle: Whistler Waves As a Signature of Converging Magnetic Holes in Space Plasmas\n---\n\nIntroduction {#sec0}\n============\n\nSpace and astrophysical" +"---\nabstract: 'This paper considers large-scale linear ill-posed inverse problems whose solutions can be represented as sums of smooth and piecewise constant components. To solve such problems we consider regularizers consisting of two terms that must be balanced. Namely, a Tikhonov term guarantees the smoothness of the smooth solution component, while a total-variation (TV) regularizer promotes blockiness of the non-smooth solution component. A scalar parameter allows to balance between these two terms and, hence, to appropriately separate and regularize the smooth and non-smooth components of the solution. This paper proposes an efficient algorithm to solve this regularization problem by the alternating direction method of multipliers (ADMM). Furthermore, a novel algorithm for automatic choice of the balancing parameter is introduced, using robust statistics. The proposed approach is supported by some theoretical analysis, and numerical experiments concerned with different inverse problems are presented to validate the choice of the balancing parameter.'\nauthor:\n- Ali Gholami and Silvia Gazzola\ntitle: |\n Automatic balancing parameter selection\\\n for Tikhonov-TV regularization\n---\n\nIntroduction\n============\n\nThis paper is concerned with the solution of ill-conditioned linear systems of equations of the form $$\\label{main_eq}\n\\bold{d=Gm + e},$$ arising from the discretization of continuous inverse problems in different fields of" +"---\nabstract: 'Invariance has recently proven to be a powerful inductive bias in machine learning models. One such class of predictive or generative models are tensor networks. We introduce a new numerical algorithm to construct a basis of tensors that are invariant under the action of normal matrix representations of an arbitrary discrete group. 
This method can be up to several orders of magnitude faster than previous approaches. The group-invariant tensors are then combined into a group-invariant tensor train network, which can be used as a supervised machine learning model. We applied this model to a protein binding classification problem, taking into account problem-specific invariances, and obtained prediction accuracy in line with state-of-the-art deep learning approaches.'\nauthor:\n- 'Brent Sprangers[^1]'\n- 'Nick Vannieuwenhoven[^2].'\ntitle: 'Group-invariant tensor train networks for supervised learning[^3]'\n---\n\ntensor networks, tensor trains, group-equivariance, group-invariance, supervised learning, representation theory\n\n15A69, 68T05, 68T09, 65F15, 20C30\n\nIntroduction {#sec_introduction}\n============\n\nThe concept of *equivariance* asserts that when the input to a function changes in some specific way, then the output of that function changes in a correspondingly predictable way. *Invariance* is a special case wherein the function\u2019s output does not change under specific input changes. For example, a function that" +"---\nauthor:\n- \n- \ntitle: Stochastic Bohmian and Scaled Trajectories\n---\n\nIntroduction {#sec1}\n============\n\nReal physical systems do not exist in complete isolation in nature and therefore they can be thought as open systems in the sense that the interaction with their environments is always present. From the very beginning, the motion of particles in quantum mechanics was analyzed in terms of stochastic processes. A formal analogy between the Brownian motion and the Schr\u00f6dinger equation was noticed by F\u00fcrth [@furth]. Subsequently, F\u00e9nyes [@fenyes] and Weizel [@weizel1; @weizel2; @weizel3] developed this approach with more mathematical detail. The search for a stochastic support for quantum mechanics comes from the early 1950s. The first attempt in the Bohmian mechanics was due to Bohm and Vigier [@bohm3]. 
These authors assumed that the electron is a particle suspended in a Madelung fluid [@madelung] whose general motion is determined by the solution of the Schr\u00f6dinger equation. In general, the aim was to show how close it is to the classical theory of Brownian motion and Newtonian mechanics, and how the Schr\u00f6dinger equation might have been discovered from this point of view. The theory was well established by Nelson [@Ne-PR-1966] starting from a different point of view of" +"---\nabstract: 'Natural language generation models are computer systems that generate coherent language when prompted with a sequence of words as context. Despite their ubiquity and many beneficial applications, language generation models also have the potential to inflict social harms by generating discriminatory language, hateful speech, profane content, and other harmful material. Ethical assessment of these models is therefore critical. But it is also a challenging task, requiring expertise in several specialized domains, such as computational linguistics and social justice. While significant strides have been made by the research community in this domain, accessibility of such ethical assessments to the wider population is limited due to the high entry barriers. 
This article introduces a new tool to democratize and standardize ethical assessment of natural language generation models: Tool for Ethical Assessment of Language generation models (TEAL), a component of Credo AI Lens, an open-source assessment framework.'\nauthor:\n- Amin Rasekh\n- Ian Eisenberg\nbibliography:\n- 'sample-base.bib'\ntitle: Democratizing Ethical Assessment of Natural Language Generation Models\n---\n\n<ccs2012> <concept> <concept\\_id>10010147.10010178.10010179.10010182</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Natural language generation</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\nNatural language generation models (LGM) create human-readable language when prompted with a sequence of words as context. They aim to" +"---\nabstract: 'In this article we study $p$-biharmonic curves as a natural generalization of biharmonic curves. In contrast to biharmonic curves $p$-biharmonic curves do not need to have constant geodesic curvature if $p=\\frac{1}{2}$ in which case their equation reduces to the one of $\\frac{1}{2}$-elastic curves. We will classify $\\frac{1}{2}$-biharmonic curves on closed surfaces and three-dimensional space forms making use of the results obtained for $\\frac{1}{2}$-elastic curves from the literature. By making a connection to magnetic geodesic we are able to prove the existence of $\\frac{1}{2}$-biharmonic curves on closed surfaces. In addition, we will discuss the stability of $p$-biharmonic curves with respect to normal variations. 
Our analysis highlights some interesting relations between $p$-biharmonic and $p$-elastic curves.'\naddress: |\n University of Vienna, Faculty of Mathematics\\\n Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria\\\nauthor:\n- Volker Branding\nbibliography:\n- 'mybib.bib'\ntitle: 'On p-biharmonic curves'\n---\n\nIntroduction and Results\n========================\n\nFinding interesting curves on a Riemannian manifold is one of the central topics in modern differential geometry. The most prominent examples of such curves are without doubt *geodesics* as they are the curves with minimal distance between two given points in a Riemannian manifold. Moreover, the existence of geodesics is guaranteed by a number of famous" +"---\nauthor:\n- Kaspar Rosager Ludvigsen\n- Shishir Nagaraja\n- Angela Daly\nbibliography:\n- 'collections.bib'\ndate: June 2022\ntitle: 'The Dangers of Computational Law and Cybersecurity; Perspectives from Engineering and the AI Act'\n---\n\nIntroduction\n============\n\nDespite law being established as an academic discipline for a significant period of time, it is still rather undefined and lacks the rigour that other disciplines possess. There are ongoing debates and unresolved questions as to when law is deductive or inductive[^1], so for computational law (CL) to even claim being future-proof misses the mark. Like the existence of exotic matter in astronomy, you may deduce or assume their existence, but empirical evidence will eventually prevail and show whether it is worthwhile[^2]. The same can be said for CL, as its implementation should not be forced because of powers outside the academic and legal sphere[^3], or for the sake of profit. Increased use must be justified, regardless of whether some types of technology are forced upon everyone without any other reason than power[^4].\n\nCL can be defined by its usage, like many other fields of law[^5]. 
Law in general, and CL too, contain an important modifier:\n\nThey are affected by the technological systems they" +"---\nabstract: |\n In online convex optimization, the player aims to minimize regret, or the difference between her loss and that of the best fixed decision in hindsight over the entire repeated game. Algorithms that minimize (standard) regret may converge to a fixed decision, which is undesirable in changing or dynamic environments. This motivates the stronger metrics of performance, notably adaptive and dynamic regret. Adaptive regret is the maximum regret over any continuous sub-interval in time. Dynamic regret is the difference between the total cost and that of the best sequence of decisions in hindsight.\n\n State-of-the-art performance in both adaptive and dynamic regret minimization suffers a computational penalty - typically on the order of a multiplicative factor that grows logarithmically in the number of game iterations. In this paper we show how to reduce this computational penalty to be doubly logarithmic in the number of game iterations, and retain near optimal adaptive and dynamic regret bounds.\nauthor:\n- 'Zhou Lu[^1] [^2]\\'\n- 'Elad Hazan\\'\nbibliography:\n- 'Xbib.bib'\ntitle: |\n On the Computational Efficiency of\\\n Adaptive and Dynamic Regret Minimization\n---\n\nIntroduction\n============\n\nOnline convex optimization is a standard framework for iterative decision making that has been extensively studied and applied" +"---\nabstract: 'Parkinson\u2019s disease (PD) is a neurological disorder that has a variety of observable motor-related symptoms such as slow movement, tremor, muscular rigidity, and impaired posture. PD is typically diagnosed by evaluating the severity of motor impairments according to scoring systems such as the Movement Disorder Society Unified Parkinson\u2019s Disease Rating Scale (MDS-UPDRS). 
Automated severity prediction using video recordings of individuals provides a promising route for non-intrusive monitoring of motor impairments. However, the limited size of PD gait data hinders model performance and clinical potential. Because of this clinical data scarcity and inspired by the recent advances in self-supervised large-scale language models like GPT-3, we use human motion forecasting as an effective self-supervised pre-training task for the estimation of motor impairment severity. We introduce **GaitForeMer**, a forecasting and impairment estimation transformer, which is first pre-trained on public datasets to forecast gait movements and then applied to clinical data to predict MDS-UPDRS gait impairment severity. Our method outperforms previous approaches that rely solely on clinical data by a large margin, achieving an F$_1$ score of 0.76, precision of 0.79, and recall of 0.75. Using GaitForeMer, we show how public human movement data repositories can assist clinical use cases through learning universal" +"---\nauthor:\n- 'B.\u00a0Schneider [[![image](orcid.png){width=\"8pt\"}](https://orcid.org/0000-0003-4876-7756)]{}'\n- 'E.\u00a0Le Floc\u2019h'\n- 'M.\u00a0Arabsalmani [[![image](orcid.png){width=\"8pt\"}](https://orcid.org/0000-0001-7680-509X)]{}'\n- 'S.\u00a0D.\u00a0Vergani'\n- 'J.\u00a0T.\u00a0Palmerio [[![image](orcid.png){width=\"8pt\"}](https://orcid.org/0000-0002-9408-1563)]{}'\nbibliography:\n- 'main.bib'\ndate: 'Accepted XXX. Received YYY; in original form ZZZ'\ntitle: 'Are the host galaxies of Long Gamma-Ray Bursts more compact than star-forming galaxies of the field?'\n---\n\n[Long Gamma-Ray Bursts (GRBs) offer a promising tool to trace the cosmic history of star formation, especially at high redshift where conventional methods are known to suffer from intrinsic biases. Previous studies of GRB host galaxies at low redshift showed that high surface densities of stellar mass and star formation rate (SFR) can potentially enhance the GRB production. 
Evaluating the effect of such stellar densities at high redshift is therefore crucial to fully control the ability of long GRBs for probing the activity of star formation in the distant Universe.]{} [We assess how the size, the stellar mass and star formation rate surface densities of distant galaxies affect their probability to host a long GRB, using a sample of GRB hosts at $z > 1$ and a control sample of star-forming sources from the field.]{} [We gather a sample of 45 GRB host galaxies at $1 <" +"---\nabstract: 'We prove the possibility of achieving exponentially growing wave propagation in space-time modulated media and give an asymptotic analysis of the quasifrequencies in terms of the amplitude of the time modulation at the degenerate points of the folded band structure. Our analysis provides the first proof of existence of k-gaps in the band structures of space-time modulated systems of subwavelength resonators.'\nauthor:\n- 'Habib Ammari[^1]'\n- Jinghao Cao\n- Xinmeng Zeng\nbibliography:\n- 'paper\\_kappa.bib'\ntitle: 'Transmission properties of space-time modulated metamaterials[^2]'\n---\n\n[**Mathematics Subject Classification (MSC2000):** 35J05, 35C20, 35P20, 74J20]{}\\\n[**Keywords:**]{} wave manipulation at subwavelength scales, unidirectional wave, subwavelength quasifrequency, space-time modulated medium, metamaterial, non-reciprocal band gaps, k-gaps\n\nIntroduction\n============\n\nThe ability to control wave propagation is fundamental in many areas of physics. In particular, systems with subwavelength structures have attracted considerable attention over the past decades [@lemoult2016soda; @yves2017crytalline; @phononic1; @phononic2]. The word *subwavelength* designates systems that are able to strongly scatter waves with comparatively large wavelengths. The building blocks of such systems which exhibit subwavelength resonance are called *subwavelength resonators*. 
Arranged in repeating patterns inside a medium with highly different material parameters, the subwavelength resonators together with the background medium can form microstructures that exhibit various new" +"---\nabstract: 'In this paper we provide new rigidity results for four-dimensional Riemannian manifolds and their twistor spaces. In particular, using the moving frame method, we prove that ${{\\mathds{C}}\\mathbb{P}}^3$ is the only twistor space whose Bochner tensor is parallel; moreover, we classify Hermitian Ricci-parallel and locally symmetric twistor spaces and we show the nonexistence of conformally flat twistor spaces. We also generalize a result due to Atiyah, Hitchin and Singer concerning the self-duality of a Riemannian four-manifold.'\nbibliography:\n- 'BiblioTwistor2.bib'\ndate:\n- \n- \ntitle: Rigidity results for Riemannian twistor spaces under vanishing curvature conditions\n---\n\n\n\nIntroduction and main results {#secIntro}\n=============================\n\nLet $(M,g)$ be an oriented Riemannian manifold of dimension $2n$, with metric $g$. The *twistor space* $Z$ associated to $M$ is defined as the set of all the couples $(p,J_p)$ such that $p\\in M$ and $J_p$ is a complex structure on $T_pM$ compatible with $g$, i.e. such that $g_p(J_p(X),J_p(Y))=g_p(X,Y)$ for every $X,Y\\in T_pM$. [^1]\n\nAlternatively, we can define $Z$ in an equivalent way as $$Z={{\\raisebox{.0em}{$O(M)$}\\left/\\raisebox{-.0em}{$U(n)$}\\right.}},$$ where $O(M)$ denotes the orthonormal frame bundle over $M$ and the unitary group $U(n)$ is identified with a subgroup of $SO(2n)$ (see [@debnan] for further details).\n\nThese structures, introduced by Penrose ([@penrose])" +"---\nabstract: 'We study the homotopy type of the space\u00a0$\\operatorname{\\mathcal{E}}(L)$ of unparametrised embeddings of a split link\u00a0$L=L_1\\sqcup \\ldots \\sqcup L_n$ in\u00a0$\\R^3$. 
Inspired by work of Brendle and Hatcher, we introduce a semi-simplicial space of separating systems and show that this is homotopy equivalent to\u00a0$\\operatorname{\\mathcal{E}}(L)$. This combinatorial object provides a gateway to studying the homotopy type of $\\operatorname{\\mathcal{E}}(L)$ via the homotopy type of the spaces\u00a0$\\operatorname{\\mathcal{E}}(L_i)$. We apply this tool to find a simple description of the fundamental group, or motion group, of\u00a0$\\operatorname{\\mathcal{E}}(L)$, and extend this to a description of the motion group of embeddings in\u00a0$S^3$.'\naddress:\n- |\n DPMMS, Centre for Mathematical Sciences\\\n Wilberforce Road\\\n Cambridge CB3 0WB\\\n UK\n- |\n Department of Mathematics and Statistics\\\n University of Southern Maine\\\n 96 Falmouth Street\\\n Portland, ME 04103\\\n USA\nauthor:\n- Rachael Boyd\n- Corey Bregman\nbibliography:\n- 'mybib.bib'\ntitle: Embedding spaces of split links\n---\n\nIntroduction\n============\n\nIn this paper we study the homotopy type of the unparametrised embedding space of a link\u00a0$L$ in\u00a0$\\R^3$. We say\u00a0$L$ is *split* if it can be written as a disjoint union\u00a0$L=L_1\\sqcup\\cdots\\sqcup L_n$, where the\u00a0$L_i$ can be contained in disjoint balls. We call each sublink" +"---\nbibliography:\n- 'references.bib'\n---\n\n[CALT-TH-2022-020, IPMU 22-0026]{}\\\n\n[Monica Jinwoo Kang,$^1$ Jaeha Lee,$^1$ and Hirosi Ooguri$^{1,2}$\\\n]{}\\\n[$^2$ Kavli Institute for the Physics and Mathematics of the Universe (WPI)\\\nUniversity of Tokyo, Kashiwa 277-8583, Japan]{}\n\n**Abstract**\n\nWe consider a $d$-dimensional unitary conformal field theory with a compact Lie group global symmetry $G$ and show that, at high temperature $T$ and on a compact Cauchy surface, the probability of a randomly chosen state being in an irreducible unitary representation $R$ of $G$ is proportional to $(\\operatorname{dim} R)^2 \\ \\exp[- c_2(R) /(b \\, T^{d-1}) ]$. 
We use the spurion analysis to derive this formula and relate the constant $b$ to a domain wall tension. We also verify it for free field theories and holographic conformal field theories and compute $b$ in these cases. This generalizes the result in [2109.03838](https://arxiv.org/abs/2109.03838) that the probability is proportional to $(\\operatorname{dim} R)^2$ when $G$ is a finite group. As a by-product of this analysis, we clarify thermodynamical properties of black holes with non-abelian hair in anti-de Sitter space.\n\nIntroduction\n============\n\nIn [@Harlow:2021trr], a simple formula is derived for the density of black hole microstates in theory with finite group gauge symmetry $G$. The formula states that, if" +"---\nabstract: 'A novel way of using neural networks to learn the dynamics of time delay systems from sequential data is proposed. A neural network with trainable delays is used to approximate the right hand side of a delay differential equation. We relate the delay differential equation to an ordinary differential equation by discretizing the time history and train the corresponding neural ordinary differential equation (NODE) to learn the dynamics. An example on learning the dynamics of the Mackey-Glass equation using data from chaotic behavior is given. 
After learning both the nonlinearity and the time delay, we demonstrate that the bifurcation diagram of the neural network matches that of the original system.'\naddress:\n- 'Department of Mechanical Engineering, '\n- |\n Department of Civil and Environmental Engineering,\\\n University of Michigan, Ann Arbor, MI 48109 USA\n- '(e-mail: xunbij@umich.edu, orosz@umich.edu)'\nauthor:\n- 'Xunbi\u00a0A.\u00a0Ji'\n- G\u00e1bor\u00a0Orosz\nbibliography:\n- 'TDS2022\\_bib.bib'\ntitle: Learning Time Delay Systems with Neural Ordinary Differential Equations\n---\n\nand\n\nneural ordinary differential equations, time delay systems, time delay neural networks, trainable time delays, Mackey-Glass equation.\n\nIntroduction\n============\n\nArtificial neural networks have been thriving in many fields over the past decade and their power in approximating and generalizing" +"---\nabstract: 'This technical report presents the 2nd winning model for AQTC, a task newly introduced in CVPR 2022 LOng-form VidEo Understanding (LOVEU) challenges. This challenge faces difficulties with multi-step answers, multi-modal, and diverse and changing button representations in video. We address this problem by proposing a new context ground module attention mechanism for more effective feature mapping. In addition, we also perform the analysis over the number of buttons and ablation study of different step networks and video features. As a result, we achieved the overall 2nd place in LOVEU competition track 3, specifically the 1st place in two out of four evaluation metrics. 
Our code is available at .'\nauthor:\n- |\n Hyeonyu Kim$^*$\\\n UNIST\\\n [khy0501@unist.ac.kr]{}\n- |\n Jongeun Kim$^*$\\\n UNIST\\\n [joannekim0420@unist.ac.kr]{}\n- |\n Jeonghun Kang\\\n UNIST\\\n [jhkang@unist.ac.kr]{}\n- |\n Sanguk Park\\\n Pyler, Inc.\\\n [psycoder@pyler.tech]{}\n- |\n Dongchan Park\\\n Pyler, Inc.\\\n [cto@pyler.tech]{}\n- |\n Taehwan Kim\\\n UNIST\\\n [taehwankim@unist.ac.kr]{}\nbibliography:\n- 'reference.bib'\ntitle: Technical Report for \u00a02022 LOVEU AQTC Challenge\n---\n\nIntroduction\n============\n\nAffordance-centric Question-driven Task Completion (AQTC) is a new challenge introduced in CVPR 2022 LOVEU workshop. The task aims to help users operate machines in the real-world from an affordance-centric perspective by answering users\u2019 questions such as," +"---\nabstract: 'We study synchronization dynamics in populations of coupled phase oscillators with higher-order interactions and community structure. We find that the combination of these two properties gives rise to a number of states unsupported by either higher-order interactions or community structure alone, including synchronized states with communities organized into clusters in-phase, anti-phase, and a novel skew-phase, as well as an incoherent-synchronized state. Moreover, the system displays a strong multistability, with many of these states stable at the same time. We demonstrate our findings by deriving the low dimensional dynamics of the system and examining the system\u2019s bifurcations using a stability analysis and perturbation theory.'\nauthor:\n- Per Sebastian Skardal\n- Sabina Adhikari\n- 'Juan G. Restrepo'\ntitle: 'Multistability in coupled oscillator systems with higher-order interactions and community structure'\n---\n\n> Spontaneous entrainment and pattern formation in large ensembles of coupled oscillator units play a role in a wide range of applications in mathematics, physics, engineering, and biology\u00a0[@Strogatz2003; @Pikovsky2003]. 
Recent insights from both neuroscience and physics\u00a0[@Petri2014Interface; @Ashwin2016PhysD] point to the presence of higher-order interactions in populations of coupled dynamical systems. While initial studies have begun to uncover the effects that higher-order interactions have on macroscopic system dynamics, the" +"---\nabstract: 'Machine learning (ML) may improve and automate quality control (QC) in injection moulding manufacturing. As the labelling of extensive, real-world process data is costly, however, the use of simulated process data may offer a first step towards a successful implementation. In this study, simulated data was used to develop a predictive model for the product quality of an injection moulded sorting container. The achieved accuracy, specificity and sensitivity on the test set was $99.4\\%$, $99.7\\%$ and $94.7\\%$, respectively. This study thus shows the potential of ML towards automated QC in injection moulding and encourages the extension to ML models trained on real-world data.'\nauthor:\n- |\n Steven Michiels$^1$, C\u00e9dric De Schryver$^2$, Lynn Houthuys$^1$,\\\n Frederik Vogeler$^1$, Frederik Desplentere$^2$ [^1]\\\n 1- Thomas More University of Applied Sciences, Belgium\\\n Dept. of Smart Technology & Design, Campus De Nayer\\\n 2- KU Leuven - University of Leuven, Belgium\\\n Dept. of Materials Engineering, ProPoliS Research Group, Campus Bruges\nbibliography:\n- 'AI4IMbib.bib'\ntitle: Machine learning for automated quality control in injection moulding manufacturing\n---\n\nIntroduction\n============\n\nWhen it comes to producing large series of plastic products, injection moulding may be the most important production technique for the manufacturing industry. The large batch sizes, however, result" +"---\nabstract: 'Generative adversarial networks (GANs) have proven to be surprisingly efficient for image editing by inverting and manipulating the latent code corresponding to an input real image. 
This editing property emerges from the disentangled nature of the latent space. In this paper, we identify that the facial attribute disentanglement is not optimal, thus facial editing relying on linear attribute separation is flawed. We thus propose to improve semantic disentanglement with supervision. Our method consists in learning a proxy latent representation using normalizing flows, and we show that this leads to a more efficient space for face image editing.'\naddress: 'InterDigital, Inc., France'\nbibliography:\n- 'main.bib'\ntitle: SEMANTIC UNFOLDING OF STYLEGAN LATENT SPACE\n---\n\nImage editing, GAN, Normalizing flow, Disentanglement\n\nIntroduction\n============\n\nGANs [@gan] have shown tremendous success in generating high quality realistic images that are indistinguishable from real ones. Yet, several open problems regarding these generative models still exist, namely: image generation control, latent space understanding and attributes disentanglement. All these features are important for the generation and editing of high quality images. Recently, many improvements have been proposed to the original GAN architecture, which led to unprecedented image quality. In particular, the state-of-the-art method StyleGAN [@stylegan; @styleganimporveing] has" +"---\nabstract: 'Delayed radio flares of optical tidal disruption events (TDEs) indicate the existence of non-relativistic outflows accompanying TDEs. The interaction of TDE outflows with the surrounding circumnuclear medium creates quasi-perpendicular shocks in the presence of [toroidal]{} magnetic fields. Because of the large shock obliquity and large outflow velocity, we find that the shock acceleration induced by TDE outflows generally leads to a steep particle energy spectrum, with the power-law index significantly larger than the \u201cuniversal\" index for a parallel shock. 
The measured synchrotron spectral indices of recently detected TDE radio flares are consistent with our theoretical expectation. This suggests that particle acceleration at quasi-perpendicular shocks can be the general acceleration mechanism accounting for the delayed radio emission of TDEs.'\nauthor:\n- Siyao Xu\nbibliography:\n- 'xu.bib'\ntitle: 'Quasi-perpendicular shock acceleration and TDE radio flares'\n---\n\nIntroduction\n============\n\nStars in galactic nuclei can be tidally disrupted by a central super-massive black hole (SMBH) [@Ree88]. The resulting tidal disruption events (TDEs) produce transient emission at optical/UV and X-ray bands [@Sax20; @Van20]. They also eject sub-relativistic outflows, as indicated by the radio flares detected several months or years after the stellar disruption [@Alex20; @Mat21]. The delayed radio flares are believed as" +"---\nabstract: 'Phenomenological studies of quantum gravity have proposed a modification of the commutator between position and momentum in quantum mechanics so as to introduce a minimal uncertainty in position. Such a minimal uncertainty and the consequent minimal measurable length have important consequences for the dynamics of quantum systems. In the present work, we show that such consequences go beyond dynamics, reaching the definition of quantities such as energy, momentum, and the Hamiltonian itself. 
Furthermore, since the Hamiltonian, defined as the generator of time evolution, turns out to be bounded, a minimal length implies a minimal time.'\nauthor:\n- Pasquale Bosso\ntitle: Space and time transformations with a minimal length\n---\n\nModels of quantum mechanics with a minimal length are motivated by phenomenological studies of quantum gravity, since several candidate theories have proposed the existence, from several perspectives and in different forms, of a minimal accessible length [@Mead:1964zz; @Gross:1987kza; @Gross:1987ar; @Amati:1987wq; @Amati:1988tn; @Konishi:1989wk; @Garay:1994en; @Adler:1999bu; @Scardigli:1999jh]. One of the most common approaches in such phenomenological studies consists of a modification of the Heisenberg algebra describing a modified uncertainty relation between position and momentum, generically called the Generalized Uncertainty Principle (GUP) [@Maggiore:1993kv; @Kempf:1994su; @Das:2008kaa; @Ali:2011fa; @Lake:2018zeg; @Bosso:2020aqm; @Petruzziello:2020wkd; @Wagner:2021bqz; @Bosso:2021koi; @Gomes:2022akh]." +"---\nabstract: 'Fault-tolerant quantum computing based on surface codes has emerged as a popular route to large-scale quantum computers capable of accurate computation even in the presence of noise. Its popularity is, in part, because the fault-tolerance or accuracy threshold for surface codes is believed to be less stringent than those of competing schemes. This threshold is the noise level below which computational accuracy can be increased by increasing physical resources for noise removal, and is an important engineering target for realising quantum devices. The current conclusions about surface code thresholds are, however, drawn largely from studies of probabilistic noise. While this is a natural assumption, current devices experience noise beyond such a model, raising the question of whether conventional statements about the thresholds apply. 
Here, we attempt to extend past proof techniques to derive the fault-tolerance threshold for surface codes subjected to general noise with no particular structure. Surprisingly, we found no nontrivial threshold, i.e., there is no guarantee the surface code prescription works for general noise. While this is not a proof that the scheme fails, we argue that current proof techniques are likely unable to provide an answer. A genuinely new idea is needed to reaffirm the feasibility of surface code" +"---\nauthor:\n- 'S. Dey^1^, S. Ghosh^2^, D. Maity^1^, A. De^3^'\n- 'S. Chandra^4,5\\*^'\ntitle: |\n Two-stream Plasma Instability as a Potential Mechanism for Particle\\\n Escape from the Venusian Ionosphere\n---\n\nIntroduction\n============\n\nPlanets that have strong magnetic fields in their interior region, like Earth, Jupiter, Saturn, and Mercury, are enclosed by an invisible magnetosphere [@Ham]. The charged particles of the solar wind radiation (electrons and protons) are deflected by their magnetic fields as they stream far away from the Sun. This deflection by the magnetic field creates a magnetosphere that acts as a protective \u201cbubble\u201d covering the planet [@zhang2012magnetic]. Venus has no intrinsic magnetic field to act as a protection against the incoming stream of charged particles. However, the solar wind and UV radiation from the Sun pull out electrons from the atoms and molecules in the upper atmosphere, creating a region of electrically charged gas referred to as the ionosphere. The planet is partially protected by the magnetic field induced by this ionosphere. However, this induced magnetic field is very small in magnitude and, unlike those of other planets, not sufficient to deflect the solar wind. 
Thus, the electrons in the solar wind end up directly interacting with the" +"---\nabstract: 'We study a class of leptogenesis scenarios with decay or scattering being the source of lepton asymmetry, which can not only give rise to the observed baryon asymmetry in the universe but also can leave behind a large remnant neutrino asymmetry. Such large neutrino asymmetry can not only be probed at future cosmic microwave background (CMB) experiments but is also motivating due to its possible role in solving the recently reported anomalies in $^4{\\rm He}$ measurements. Additionally, such large neutrino asymmetry also offers the possibility of cogenesis if dark matter is in the form of a sterile neutrino resonantly produced in the early universe via Shi-Fuller mechanism. Considering $1 \\rightarrow 2, 1 \\rightarrow 3$ as well as $2 \\rightarrow 2$ processes to be responsible for generating the asymmetries, we show that only TeV scale leptogenesis preferably of $1 \\rightarrow N \\, (N \\geq 3)$ type can generate the required lepton asymmetry around sphaleron temperature while also generating a large neutrino asymmetry $\\sim \\mathcal{O}(10^{-2})$ by the epoch of the big bang nucleosynthesis. While such low scale leptogenesis can have tantalising detection prospects at laboratory experiments, the indication of a large neutrino asymmetry provides a complementary indirect signature.'\nauthor:\n-" +"---\nabstract: 'Despite their ubiquity throughout science and engineering, only a handful of partial differential equations (PDEs) have analytical, or closed-form solutions. This motivates a vast amount of classical work on numerical simulation of PDEs and more recently, a whirlwind of research into data-driven techniques leveraging machine learning (ML). A recent line of work indicates that a hybrid of classical numerical techniques and machine learning can offer significant improvements over either approach alone. 
In this work, we show that the choice of the numerical scheme is crucial when incorporating physics-based priors. We build upon Fourier-based spectral methods, which are known to be more efficient than other numerical schemes for simulating PDEs with smooth and periodic solutions. Specifically, we develop ML-augmented spectral solvers for three common PDEs of fluid dynamics. Our models are more accurate ($2-4\\times$) than standard spectral solvers at the same resolution but have longer overall runtimes (${\\sim 2\\times}$), due to the additional runtime cost of the neural network component. We also demonstrate a handful of key design principles for combining machine learning and numerical methods for solving PDEs.'\nauthor:\n- |\n Gideon Dresdner gideond@gmail.com\\\n Google Research and ETH Zurich, Department for Computer Science Dmitrii Kochkov dkochkov@google.com\\\n Google Research" +"---\nabstract: 'We derive new concentration bounds for time averages of measurement outcomes in quantum Markov processes. This generalizes well-known bounds for classical Markov chains which provide constraints on finite time fluctuations of time-additive quantities around their averages. We employ spectral, perturbation and martingale techniques, together with noncommutative $L_2$ theory, to derive: (i) a Bernstein-type concentration bound for time averages of the measurement outcomes of a quantum Markov chain, (ii) a Hoeffding-type concentration bound for the same process, (iii) a generalization of the Bernstein-type concentration bound for counting processes of continuous time quantum Markov processes, (iv) new concentration bounds for empirical fluxes of classical Markov chains which broaden the range of applicability of the corresponding classical bounds beyond empirical averages. 
We also suggest potential application of our results to parameter estimation and consider extensions to reducible quantum channels, multi-time statistics and time-dependent measurements, and comment on the connection to so-called thermodynamic uncertainty relations.'\nauthor:\n- Federico Girotti\n- 'Juan P. Garrahan'\n- M\u0103d\u0103lin Gu\u0163\u0103\nbibliography:\n- 'biblio.bib'\ntitle: Concentration Inequalities for Output Statistics of Quantum Markov Processes\n---\n\nIntroduction\n============\n\nQuantum Markov chains describe the evolution of a quantum system which interacts successively with a sequence of identically prepared ancillary" +"---\nabstract: 'This paper proposes an effective emotional text-to-speech (TTS) system with a pre-trained language model (LM)-based emotion prediction method. Unlike conventional systems that require auxiliary inputs such as manually defined emotion classes, our system directly estimates emotion-related attributes from the input text. Specifically, we utilize generative pre-trained transformer (GPT)-3 to jointly predict both an emotion class and its strength in representing emotions\u2019 coarse and fine properties, respectively. Then, these attributes are combined in the emotional embedding space and used as conditional features of TTS model for generating output speech signal. Consequently, the proposed system can produce emotional speech only from text without any auxiliary inputs. 
Furthermore, because the LM can capture emotional context among consecutive sentences, the proposed method can effectively handle the paragraph-level generation of emotional speech.'\naddress: ' $^1$NAVER Corp., Seongnam, Korea, $^2$LINE Corp., Tokyo, Japan'\nbibliography:\n- 'mybib.bib'\ntitle: 'Language Model-Based Emotion Prediction Methods for Emotional Speech Synthesis Systems'\n---\n\n**Index Terms**: Text-to-speech (TTS), emotional TTS, emotion modeling from text, language model, GPT-3\n\nIntroduction\n============\n\nAs the quality of the neural text-to-speech (TTS) system has reached natural sound almost indistinguishable from human recordings\u00a0[@jonathan2017natural; @lee2020multi; @ren2020fastspeech2], the interest in emotional TTS \u00a0[@kwon2019effective; @cai2021emotion; @lu2021multi; @wang2018style;" +"---\nabstract: 'This paper tackles the problem of robots collaboratively towing a load with cables to a specified goal location while avoiding collisions in real time. The introduction of cables (as opposed to rigid links) enables the robotic team to travel through narrow spaces by changing its intrinsic dimensions through slack/taut switches of the cable. However, this is a challenging problem because of the hybrid mode switches and the dynamical coupling among multiple robots and the load. Previous attempts at addressing such a problem were performed offline and do not consider avoiding obstacles online. In this paper, we introduce a cascaded planning scheme with a parallelized centralized trajectory optimization that deals with hybrid mode switches. We additionally develop a set of decentralized planners per robot, which enables our approach to solve the problem of collaborative load manipulation online. 
We develop and demonstrate one of the first collaborative autonomy frameworks that is able to move a cable-towed load, which is too heavy to be moved by a single robot, through narrow spaces with real-time feedback and reactive planning in experiments.'\nauthor:\n- |\n Chenyu Yang$^{*,1}$, Guo Ning Sue$^{*,1}$, Zhongyu Li$^{*,1}$, Lizhi Yang$^{1}$, Haotian Shen$^{1}$, Yufeng Chi$^{1}$,\\\n Akshara Rai$^{2}$, Jun Zeng$^{1}$, Koushil Sreenath$^{1}$" +"---\nabstract: 'Machine learning algorithms are routinely used for business decisions that may directly affect individuals, for example, because a credit scoring algorithm refuses them a loan. It is then relevant from an ethical (and legal) point of view to ensure that these algorithms do not discriminate based on sensitive attributes (like sex or race), which may occur unwittingly and unknowingly by the operator and the management. Statistical tools and methods are then required to detect and eliminate such potential biases.'\nauthor:\n- Roberta Pappad\u00e0 and Francesco Pauli\nbibliography:\n- 'pappadapaulireferencBIBTEX.bib'\ndate: 'Received: date / Accepted: date'\ntitle: Discrimination in machine learning algorithms\n---\n\nIntroduction {#sec:intro}\n============\n\nThe kind of discrimination we refer to consists of treating a person or a group depending on some sensitive attribute (s.a., $S$) such as race (skin color), sex, religious orientation, etc. A human may discriminate either because of irrational prejudice induced by ignorance and stereotypes or based on statistical generalization: lacking specific information on an individual, he is assigned the characteristics prevalent in the sensitive attribute category he belongs to. 
For example, in the United States, lacking information on education, a black person may be assumed to have relatively low level since this" +"---\nabstract: 'We present a study of the two-photon-exchange (PE-exchange) corrections to the $S$-levels in muonic ($\\mu$D) and ordinary (D) deuterium within the pionless effective field theory (/). Our calculation proceeds up to next-to-next-to-next-to-leading order (N3LO) in the / expansion. The only unknown low-energy constant entering the calculation at this order corresponds to the coupling of a longitudinal photon to the nucleon-nucleon system. To minimise its correlation with the deuteron charge radius, it is extracted using the information about the hydrogen-deuterium isotope shift. We find the elastic PE-exchange contribution in $\\mu$D larger by several standard deviations than obtained in other recent calculations. This discrepancy ameliorates the mismatch between theory and experiment on the size of PE-exchange effects, and is attributed to the properties of the deuteron elastic charge form factor parametrisation used to evaluate the elastic contribution. We identify a correlation between the deuteron charge and Friar radii, which can help one to judge how well a form factor parametrisation describes the low-virtuality properties of the deuteron. We also evaluate the higher-order PE-exchange contributions in $\\mu$D, generated by the single-nucleon structure and expected to be the most important terms beyond N3LO. The uncertainty of the theoretical result is dominated by" +"---\nabstract: 'Training medical image segmentation models usually requires a large amount of labeled data. By contrast, humans can quickly learn to accurately recognise anatomy of interest from medical (e.g. MRI and CT) images with some limited guidance. Such recognition ability can easily generalise to new images from different clinical centres. 
This rapid and generalisable learning ability is mostly due to the compositional structure of image patterns in the human brain, which is less incorporated in medical image segmentation. In this paper, we model the compositional components (i.e. patterns) of human anatomy as learnable von-Mises-Fisher (vMF) kernels, which are robust to images collected from different domains (e.g. clinical centres). The image features can be decomposed to (or composed by) the components with the composing operations, i.e. the vMF likelihoods. The vMF likelihoods tell how likely each anatomical part is at each position of the image. Hence, the segmentation mask can be predicted based on the vMF likelihoods. Moreover, with a reconstruction module, unlabeled data can also be used to learn the vMF kernels and likelihoods by recombining them to reconstruct the input image. Extensive experiments show that the proposed vMFNet achieves improved generalisation performance on two benchmarks, especially when annotations" +"---\nabstract: 'We give a necessary condition for a domain to have a bounded extension operator from $L^{1,p}(\\Omega)$ to $L^{1,p}(\\mathbb R^n)$ for the range $1 < p < 2$. The condition is given in terms of a power of the distance to the boundary of $\\Omega$ integrated along the measure theoretic boundary of a set of locally finite perimeter and its extension. This generalizes a characterizing curve condition for planar simply connected domains, and a condition for $W^{1,1}$-extensions. We use the necessary condition to give a quantitative version of the curve condition. We also construct an example of an extension domain that is homeomorphic to a ball and has $n$-dimensional boundary.'\naddress:\n- 'Departamento de An\u00e1lisis Matem\u00e1tico y Matem\u00e1tica Aplicada, Facultad de Ciencias Matem\u00e1ticas, Universidad Complutense, 28040, Madrid, Spain'\n- |\n University of Jyvaskyla\\\n Department of Mathematics and Statistics\\\n P.O. 
Box 35 (MaD)\\\n FI-40014 University of Jyvaskyla\\\n Finland\nauthor:\n- 'Miguel Garc\u00eda-Bravo'\n- Tapio Rajala\n- Jyrki Takanen\ntitle: A necessary condition for Sobolev extension domains in higher dimensions\n---\n\nIntroduction\n============\n\nA domain $\\Omega \\subset \\mathbb R^n$ is called a $W^{k,p}$-extension domain, if we can extend each Sobolev function $u \\in W^{k,p}(\\Omega)$ to a global Sobolev function $u \\in" +"---\nabstract: 'Cell-cell adhesion is one the most fundamental mechanisms regulating collective cell migration during tissue development, homeostasis and repair, allowing cell populations to self-organize and eventually form and maintain complex tissue shapes. Cells interact with each other via the formation of protrusions or filopodia and they adhere to other cells through binding of cell surface proteins. The resulting adhesive forces are then related to cell size and shape and, often, continuum models represent them by nonlocal attractive interactions. In this paper, we present a new continuum model of cell-cell adhesion which can be derived from a general nonlocal model in the limit of short-range interactions. This new model is local, resembling a system of thin-film type equations, with the various model parameters playing the role of surface tensions between different cell populations. Numerical simulations in one and two dimensions reveal that the local model maintains the diversity of cell sorting patterns observed both in experiments and in previously used nonlocal models. In addition, it also has the advantage of having explicit stationary solutions, which provides a direct link between the model parameters and the differential adhesion hypothesis.'\nauthor:\n- 'C. Falc\u00f3, R. E. Baker, J. A. Carrillo[^1]'\ntitle: 'A" +"---\nabstract: 'A nonequilibrium system is characterized by a set of thermodynamic forces and fluxes which give rise to entropy production (EP). 
We show that these forces and fluxes have an information-geometric structure, which allows us to decompose EP into contributions from different types of forces in general (linear and nonlinear) discrete systems. We focus on the excess and housekeeping decomposition, which separates contributions from conservative and nonconservative forces. Unlike the Hatano-Sasa decomposition, our housekeeping/excess terms are always well-defined, including in systems with odd variables and nonlinear systems without steady states. Our decomposition leads to far-from-equilibrium thermodynamic uncertainty relations and speed limits.'\nauthor:\n- Artemy Kolchinsky\n- Andreas Dechant\n- Kohei Yoshimura\n- Sosuke Ito\nbibliography:\n- 'writeup.bib'\ntitle: Information geometry of excess and housekeeping entropy production\n---\n\nA major goal of nonequilibrium thermodynamics is to understand entropy production (EP) from an operational point of view, in terms of tradeoffs between EP and functional properties such as speed of dynamical evolution\u00a0[@aurell2011optimal; @shiraishi_speed_2018] and statistics of fluctuating observables\u00a0[@gingrich2016dissipation]. However, EP can arise from different factors, including relaxation from nonequilibrium states, nonconservative forces, and exchange of conserved quantities between different reservoirs. In this Letter, we use methods from information geometry" +"---\nabstract: 'This work combines control barrier functions (CBFs) with a whole-body controller to enable self-collision avoidance for the MIT Humanoid. Existing reactive controllers for self-collision avoidance cannot guarantee collision-free trajectories as they do not leverage the robot\u2019s full dynamics, thus compromising kinematic feasibility. In comparison, the proposed CBF-WBC controller can reason about the robot\u2019s underactuated dynamics in real-time to guarantee collision-free motions. The effectiveness of this approach is validated in simulation. 
First, a simple hand-reaching experiment shows that the CBF-WBC enables the robot\u2019s hand to deviate from an infeasible reference trajectory to avoid self-collisions. Second, the CBF-WBC is combined with a linear model predictive controller (LMPC) designed for dynamic locomotion, and the CBF-WBC is used to track the LMPC predictions. Walking experiments show that adding CBFs avoids leg self-collisions when the footstep location or swing trajectory provided by the high-level planner are infeasible for the real robot, and generates feasible arm motions that improve disturbance recovery.'\nauthor:\n- 'Charles Khazoom$^{1*}$, Daniel Gonzalez-Diaz$^{1*}$, Yanran Ding$^1$, and Sangbae Kim$^{1}$[^1][^2][^3]'\nbibliography:\n- 'references/IEEEabrv.bib'\n- 'references/refs\\_humanoids\\_2022.bib'\n- 'references/refs\\_YD.bib'\ntitle: '**Humanoid Self-Collision Avoidance Using Whole-Body Control with Control Barrier Functions**'\n---\n\nIntroduction\n============\n\nMotivation\n----------\n\nHumanoid robots already need to coordinate multiple joints to" +"---\nabstract: 'Adaptive optics images from the W. M. Keck Observatory have delivered numerous influential scientific results, including detection of multi-system asteroids, the supermassive black hole at the center of the Milky Way, and directly imaged exoplanets. Specifically, the precise and accurate astrometry these images yield was used to measure the mass of the supermassive black hole using orbits of the surrounding star cluster. Despite these successes, one of the major obstacles to improved astrometric measurements is the spatial and temporal variability of the point-spread function delivered by the instruments. AIROPA is a software package for the astrometric and photometric analysis of adaptive optics images using point-spread function fitting together with the technique of point-spread function reconstruction. 
In adaptive optics point-spread function reconstruction, the knowledge of the instrument performance and of the atmospheric turbulence is used to predict the long-exposure point-spread function of an observation. In this paper we present the results of our tests using AIROPA on both simulated and on-sky images of the Galactic Center. We find that our method is very reliable in accounting for the static aberrations internal to the instrument, but it does not improve significantly the accuracy on sky, possibly due to uncalibrated telescope" +"---\nabstract: 'We present a general formalism for investigating the second-order optical response of solids to an electric field in weakly disordered crystals with arbitrarily complicated band structures based on density-matrix equations of motion, on a Born approximation treatment of disorder, and on an expansion in scattering rate to leading non-trivial order. One of the principal aims of our work is to enable extensive transport theory applications that accounts fully for the interplay between electric-field-induced interband and intraband coherence, and Bloch-state scattering. The quasiparticle bands are treated in a completely general manner that allows for arbitrary forms of the intrinsic spin-orbit coupling (SOC) and could be extended to the extrinsic SOC. According to the previous results, in the presence of the disorder potential, the interband response in conductors in addition to an intrinsic contribution due to the entire Fermi sea that captures, among other effects, the Berry curvature contribution to wave-packet dynamics includes an anomalous contribution caused by scattering that is sensitive to the presence of the Fermi surface. 
To demonstrate the rich physics captured by our theory, the relaxation time matrix for different strength order is considered and at the same time we explicitly solve for some electric-field response" +"---\nabstract: 'In this report, we present the ReLER@ZJU-Alibaba submission to the Ego4D Natural Language Queries (NLQ) Challenge in CVPR 2022. Given a video clip and a text query, the goal of this challenge is to locate a temporal moment of the video clip where the answer to the query can be obtained. To tackle this task, we propose a multi-scale cross-modal transformer and a video frame-level contrastive loss to fully uncover the correlation between language queries and video clips. Besides, we propose two data augmentation strategies to increase the diversity of training samples. The experimental results demonstrate the effectiveness of our method. The final submission ranked first on the leaderboard. The code is available at []{}.'\nauthor:\n- |\n Naiyuan Liu$^{1,2}$, Xiaohan Wang$^{1}$, Xiaobo Li$^{3}$, Yi Yang$^{1}$, Yueting Zhuang$^{1}$\\\n `naiyuan.liu@student.uts.edu.au,xiaohan.wang@zju.edu.cn, yangyics@zju.edu.cn`\\\n $^1$ReLER Lab, CCAI, Zhejiang University, $^2$University of Technology Sydney, $^3$Alibaba Group\nbibliography:\n- 'egbib.bib'\ntitle: 'ReLER@ZJU-Alibaba Submission to the Ego4D Natural Language Queries Challenge 2022'\n---\n\nIntroduction\n============\n\nGiven a video clip and a text query, the goal of Ego4D NLQ task [@grauman2021ego4d] is to locate the corresponding moment span where the answer to the query can be obtained. There are two challenges to Ego4D NLQ task: extremely" +"---\nauthor:\n- 'Catarina Cosme,'\n- 'Daniel G. 
Figueroa,'\n- and Nicol\u00e1s Loayza\ntitle: Gravitational wave production from preheating with trilinear interactions\n---\n\nIntroduction\n============\n\nSignificant evidence\u00a0[@Akrami:2018odb] supports the idea of inflation as a solution to the shortcomings of the hot Big Bang framework\u00a0[@Guth:1980zm; @Linde:1981mu; @Starobinsky:1980te], and as a mechanism to create the primordial density perturbations\u00a0[@Mukhanov:1981xt; @Guth:1982ec; @Starobinsky:1982ee; @Hawking:1982cz; @Bardeen:1983qw] (see\u00a0[@Lyth:1998xn; @Riotto:2002yw; @Bassett:2005xm; @Linde:2007fr; @Baumann:2009ds] for reviews on inflation). Inflationary models must be compatible with cosmological observations\u00a0[@Martin:2013tda; @Planck:2018jri], including the most recent constraint on the B-mode polarization of the Cosmic Microwave Background (CMB), which sets an upper bound on the inflationary Hubble scale as $H_{\\rm inf} \\lesssim 4.7\\times10^{13}$ GeV\u00a0[@BICEP:2021xfz]. This rules out many scenarios, and puts pressure on the parameter space of many others.\n\nInflation must be followed by a period of *reheating* during which the Universe ultimately has to reach a radiation dominated (RD) thermal state, at least before the onset of Big Bang Nucleosynthesis (BBN) at a temperature of $T_{\\rm BBN} \\simeq 10^{-3}\\text{ GeV}$ [@Kawasaki:1999na; @Kawasaki:2000en; @Hannestad:2004px; @Hasegawa:2019jsa]. The first stage of reheating may be driven by a period of *preheating*, characterized by strong non-perturbative field excitation, typically leading to exponentially growing" +"---\nabstract: 'We study power-set operations on classes of trees and tree algebras. 
Our main result consists of a distributive law between the tree monad and the upwards-closed power-set monad, in the case where all trees are assumed to be *linear.* For *non-linear* ones, we prove that such a distributive law does not exist.'\naddress: Masaryk University Brno\nauthor:\n- Achim Blumensath\nbibliography:\n- 'Power.bib'\ntitle: 'The Power-Set Construction for Tree Algebras'\n---\n\nIntroduction\n============\n\nThe main approaches to formal language theory are based on automata, logic, and algebra. Each comes with their own strengths and weaknesses and thereby complements the other two. In the present article we focus on the algebraic approach, which is well-known for producing proofs that are often simpler than automaton-based ones, if not as elementary and at the cost of yielding worse complexity bounds. Algebraic methods are especially successful at deriving structural results about classes of languages. In particular, they are the method of choice when deriving characterisations of subclasses of regular languages. A\u00a0prominent example of such a result is the Theorem of Sch\u00fctzenberger\u00a0[@Schutzenberger65] stating that a language is first-order definable if, and only if, its syntactic monoid is aperiodic. By now algebraic" +"---\nabstract: 'Fossil groups (FG) of galaxies still present a puzzle to theories of structure formation. Despite the low number of bright galaxies, they have relatively high velocity dispersions and ICM temperatures often corresponding to cluster-like potential wells. Their measured concentrations are typically high, indicating early formation epochs as expected from the originally proposed scenario for their origin as being older undisturbed systems. This is, however, in contradiction with the typical lack of expected well developed cool cores. 
Here, we apply a cluster dynamical indicator recently discovered in the intracluster light fraction (ICLf) to a classic FG, RX J1000742.53+380046.6, to assess its dynamical state. We also refine that indicator for use as an independent age estimator. We find negative radial temperature and metal abundance gradients, the abundance achieving supersolar values at the hot core. The X-ray flux concentration is consistent with that of cool core systems. The ICLf analysis provides an independent probe of the system\u2019s dynamical state and shows that the system is very relaxed, more so than all clusters where the same analysis has been performed. The specific ICLf is $\\sim$5 times higher than that of any of the clusters previously analyzed, which is consistent with an older non-interactive galaxy" +"---\nabstract: 'We establish sufficient conditions for the quick relaxation to kinetic equilibrium in the classic Vicsek-Cucker-Smale model of bird flocking. The convergence time is polynomial in the number of birds as long as the number of flocks remains bounded. This new result relies on two key ingredients: exploiting the convex geometry of embedded averaging systems; and deriving new bounds on the $s$-energy of disconnected agreement systems. We also apply our techniques to bound the relaxation time of certain pattern-formation robotic systems investigated by Sugihara and Suzuki.'\nauthor:\n- 'Bernard Chazelle [^1]'\n- 'Kritkorn Karntikoon [^2]'\ntitle: ' Quick Relaxation in Collective Motion [^3]'\n---\n\nIntroduction\n============\n\nIn the classic Vicsek-Cucker-Smale model\u00a0[@CuckerSmale1; @vicsekCBCS95], a group of $n$ birds are flying in the air while interacting via a time-varying network\u00a0[@blondelHOT05; @chazFlockPaperI; @HendrickxB; @jadbabaieLM03]. The vertices of the network correspond to the $n$ birds and any two distinct birds are joined by an edge if their distance is at most some fixed $r\\leq 1$. 
The flocking network $G_t$ is thus symmetric and loopless. Its connected components are the [*flocks*]{}. Each bird $i$ has a position $x_i(t)$ and a velocity $v_i(t)$, both of them vectors in $\\mathbb{R}^3$. Given the state of" +"---\nabstract: |\n In a graph ${G}$ with a source $s$, we design a distance oracle that can answer the following query: ${\\textsc{Query}}(s,t,e)$ \u2013 find the length of shortest path from a fixed source $s$ to any destination vertex $t$ while avoiding any edge $e$. We design a deterministic algorithm that builds such an oracle in ${\\widetilde{O}}(m\\sqrt n)$ time[^1]. Our oracle uses ${\\widetilde{O}}(n\\sqrt n)$ space and can answer queries in ${\\widetilde{O}}(1)$ time. Our oracle is an improvement of the work of Bil\u00f2 et al. (ESA 2021) in the preprocessing time, which constructs the first deterministic oracle for this problem in ${\\widetilde{O}}(m\\sqrt n+n^2)$ time.\n\n Using our distance oracle, we also solve the [*single source replacement path problem*]{} (${\\textsc{Ssrp}}$ problem). Chechik and Cohen (SODA 2019) designed a randomized combinatorial algorithm to solve the ${\\textsc{Ssrp}}$ problem. The running time of their algorithm is ${\\widetilde{O}}(m\\sqrt n + n^2)$. In this paper, we show that the ${\\textsc{Ssrp}}$ problem can be solved in ${\\widetilde{O}}(m\\sqrt n + |{\\mathcal{R}}|)$ time, where ${\\mathcal{R}}$ is the output set of the ${\\textsc{Ssrp}}$ problem in ${G}$. Our ${\\textsc{Ssrp}}$ algorithm is optimal (upto polylogarithmic factor) as there is a conditional lower bound of $\\Omega(m\\sqrt n)$ for any combinatorial algorithm that solves this problem." +"---\nabstract: 'Recall that a group $G$ has finitely satisfiable generics (*fsg*) or definable $f$-generics (*dfg*) if there is a global type $p$ on $G$ and a small model $M_0$ such that every left translate of $p$ is finitely satisfiable in $M_0$ or definable over $M_0$, respectively. 
We show that any abelian group definable in a $p$-adically closed field is an extension of a definably compact *fsg* definable group by a *dfg* definable group. We discuss an approach which might prove a similar statement for interpretable abelian groups. In the case where $G$ is an abelian group definable in the standard model ${\\mathbb{Q}_p}$, we show that $G^0 = G^{00}$, and that $G$ is an open subgroup of an algebraic group, up to finite factors. This latter result can be seen as a rough classification of abelian definable groups in ${\\mathbb{Q}_p}$.'\nauthor:\n- Will Johnson and Ningyuan Yao\nbibliography:\n- 'references.bib'\ntitle: 'Abelian groups definable in $p$-adically closed fields'\n---\n\nIntroduction\n============\n\nIn this paper we study abelian groups definable in $p$-adically closed fields. Recall that a definable group $G$ has *finitely satisfiable generics* (*fsg*) if there is a global type on $G$, finitely satisfiable in a small model, with boundedly" +"---\nabstract: 'This work considers two linear operators which yield wave modes that are classified as neutrally stable, yet have responses that grow or decay in time. Previously, King et al. (Phys. Rev. Fluids, 1, 2016, 073604:1-19) and Huber et al. (IMA J. Appl. Math., 85, 2020, 309-340) examined the one-dimensional (1D) wave propagation governed by these operators. Here, we extend the linear operators to two spatial dimensions (2D) and examine the resulting solutions. We find that the increase of dimension leads to long-time behaviour where the magnitude is reduced by a factor of $t^{\\nicefrac{-1}{2}}$ from the 1D solutions. Thus, regions of the solution which grew algebraically as $t^{\\nicefrac{1}{2}}$ in 1D now are algebraically neutral in 2D, whereas regions which decay (algebraically or exponentially) in 1D now decay more quickly in 2D. 
Additionally, we find that these two linear operators admit long-time solutions that are functions of the same similarity variable that contracts space and time.'\naddress:\n- 'School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY, 14623, USA'\n- 'Department of Chemical Engineering, Rochester Institute of Technology, Rochester, NY, 14623, USA'\nauthor:\n- 'Colin M. Huber'\n- 'Nathaniel S. Barlow'\n- 'Steven J. Weinstein'\nbibliography:\n- 'neutral.bib'\ndate:" +"---\nabstract: 'Numerical reasoning is required when solving most problems in our life, but it has been neglected in previous artificial intelligence researches. FinQA challenge has been organized to strengthen the study on numerical reasoning where the participants are asked to predict the numerical reasoning program to solve financial question. The result of FinQA will be evaluated by both execution accuracy and program accuracy. In this paper, we present our approach to tackle the task objective by developing models with different specialized capabilities and fusing their strength. Overall, our approach achieves the **1st place** in FinQA challenge, with **71.93%** execution accuracy and **67.03%** program accuracy.'\nauthor:\n- Renhui Zhang\n- Youwei Zhang\n- |\n Yao Yu\\\n zhangrenhui.zrh@antgroup.com\\\n zhangyouwei.zyw@antgroup.com\\\n csyuyao@gmail.com\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: |\n A Robustly Optimized Long Text to Math Models for\\\n Numerical Reasoning On FinQA\n---\n\nIntroduction\n============\n\nNumerical reasoning is an useful and important ability of human being, which is also one goal of artificial intelligence. However, most researches focus on obtaining the right answer, ignore the reasoning ability of models [@talmor2018web; @yang2018hotpotqa]. In 2019, the DROP dataset is introduced to study the numerical reasoning ability of deep learning models [@dua2019drop]. 
The DROP dataset contains" +"---\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'reference.bib'\ntitle: Convergence Analysis of Dirichlet Energy Minimization for Spherical Conformal Parameterizations\n---\n\nIntroduction {#sec:Intro}\n============\n\nThe Poincar\u00e9\u2013Klein\u2013Koebe uniformization theorem [@koebe1907; @poincare1908] claims that a simply connected Riemann surface is conformally equivalent to one of three canonical Riemann surfaces, namely, a sphere $\\mathbb{S}^2=\\overline{\\mathbb{C}}=\\mathbb{R}^2\\cup\\{\\infty\\}$, a complex plane $\\mathbb{C}$, or a unit disk $\\mathbb{D}$. In this paper, we focus on the study of the spherical conformal parameterization between a simply connected closed surface $\\mathcal{S}$, i.e., a closed surface of genus zero, and the unit sphere $\\mathbb{S}^2$. It is well known from the Dirichlet principle in [@hilbert1935] that a spherical conformal map from $\\mathcal{S}$ to $\\mathbb{S}^2$ solves the optimization problem of Dirichlet energy with constraints on $\\mathbb{S}^2$. However, since the constraint of $\\mathbb{S}^2$ is not a convex domain, it is generally difficult for a gradient descent projection method or a heat diffusion flow approach on $\\mathbb{S}^2$ to show convergence. Therefore, in this paper, we reexamine the expression of the Dirichlet energy on $\\overline{\\mathbb{C}}$ and derive a theoretical foundation for Dirichlet energy minimization. 
On this basis, we will develop a new numerical algorithm for efficiently solving the spherical conformal map and prove that the numerical method" +"---\nabstract: |\n **Background:** Software Engineering regularly views communication between project participants as a tool for solving various problems in software development.\n\n **Objective:** Formulate research questions in areas related to CHASE.\n\n **Method:** A day-long discussion of five participants at the in-person day of the *15th International Conference on Cooperative and Human Aspects of Software Engineering* (CHASE 2022) on May 23rd 2022.\n\n **Results:** It is not rare in industrial SE projects that communication is not just a tool or technique to be applied but also represents a resource, which, when lacking, threatens project success. This situation might arise when a person required to make decisions (especially regarding requirements, budgets, or priorities) is often unavailable. It may be helpful to frame communication as a scarce resource to understand the key difficulty of such situations.\n\n **Conclusion:** We call for studies that focus on the allocation and management of scarce communication resources of stakeholders as a lens to analyze software engineering projects.\nauthor:\n- Christoph Matthies\n- 'Mary S\u00e1nchez-Gord\u00f3n'\n- Jens B\u00e6k J\u00f8rgensen\n- Lutz Prechelt\nbibliography:\n- 'bib.bib'\nsubtitle: 'A Summary of CHASE\u201922 Conference Discussions'\ntitle: '\u201cCommunication Is a Scarce Resource!\u201d'\n---\n\nIntroduction\n============\n\nThe 15th CHASE conference took place in May 2022" +"---\nabstract: 'We consider electronic and optical properties of group III-Nitride monolayers using first-principle calculations. The group III-Nitride monolayers have flat hexagonal structures with almost zero planar buckling, $\\Delta$. 
By tuning $\\Delta$, the strong $\\sigma\\text{-}\\sigma$ bond through sp$^2$ hybridization of the flat form of these monolayers can be changed to a stronger $\\sigma\\text{-}\\pi$ bond through sp$^3$ hybridization. Consequently, the band gaps of the monolayers are tuned due to a dislocation of the $s$- and $p$-orbitals towards the Fermi energy. The band gaps decrease with increasing $\\Delta$ for those flat monolayers which have a band gap greater than $1.0$\u00a0eV, while no noticeable change or a flat dispersion of the band gap is seen for the flat monolayers that have a band gap less than $1.0$\u00a0eV. The decreased band gap causes a decrease in the excitation energy, and thus the static dielectric function, refractive index, and the optical conductivity are increased. In contrast, the flat band gap dispersion of a few monolayers in the group III-Nitride induces a reduction in the static dielectric function, the refractive index, and the optical conductivity. We therefore confirm that tuning of the planar buckling can be used to control the physical properties of these" +"---\nabstract: 'The ability to convey relevant and faithful information is critical for many tasks in conditional generation and yet remains elusive for neural seq-to-seq models whose outputs often reveal hallucinations and fail to correctly cover important details. In this work, we advocate planning as a useful intermediate representation for rendering conditional generation less opaque and more grounded. We propose a new conceptualization of text plans as a sequence of question-answer (QA) pairs and enhance existing datasets (e.g., for summarization) with a QA *blueprint* operating as a proxy for content selection (i.e.,\u00a0what to say) *and* planning (i.e.,\u00a0in what order). We obtain blueprints automatically by exploiting state-of-the-art question generation technology and convert input-output pairs into input-blueprint-output tuples.
We develop Transformer-based models, each varying in how they incorporate the blueprint in the generated output (e.g., as a global plan or iteratively). Evaluation across metrics and datasets demonstrates that blueprint models are more factual than alternatives which do not resort to planning and allow tighter control of the generation output.'\nauthor:\n- |\n Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev,\\\n **Annie Louis, Fantine Huot, Anders Sandholm, Dipanjan Das, Mirella Lapata**\\\n Google Research\\\n `shashinarayan@google.com, joshuahm@google.com, reinald@google.com,`\\\n `kuzman@google.com, annielouis@google.com, fantinehuot@google.com,`\\\n `sandholm@google.com," +"---\nbibliography:\n- 'citation.bib'\n---\n\n**The fewest-big-jumps principle and an**\n\n\u00a0\\\n\n**application to random graphs**\n\nC\u00e9line Kerriou and Peter M\u00f6rters\n\n*Universit\u00e4t zu K\u00f6ln\\\n*\n\n=1truecm =1truecm\n\n[***Summary.***]{} We prove a large deviation principle for the sum of $n$ independent heavy-tailed random variables, which are subject to a moving [truncation]{} at location $n$. Conditional on the sum being large at scale $n$, we show that a finite number of summands take values near the [truncation]{} boundary, while the remaining variables still obey the law of large numbers. This generalises the well-known single-big-jump principle for random variables without [truncation]{} to a situation where just the minimal necessary number of jumps occur. As an application, we consider a random graph with vertex set given by the lattice points of a torus with sidelength $2N+1$. Every vertex is the centre of a ball with random radius sampled from a heavy-tailed distribution. Oriented edges are drawn from the central vertex to all other vertices in this ball. 
When this graph is conditioned on having an exceptionally large number of edges, we use our main result to show that, as $N\\to\\infty$, the excess outdegrees condense in a fixed, finite number of randomly scattered vertices of macroscopic" +"---\nabstract: 'The dual-encoder structure successfully utilizes two language-specific encoders (LSEs) for code-switching speech recognition. Because LSEs are initialized by two pre-trained language-specific models (LSMs), the dual-encoder structure can exploit sufficient monolingual data and capture the individual language attributes. However, most existing methods have no language constraints on LSEs and underutilize language-specific knowledge of LSMs. In this paper, we propose a language-specific characteristic assistance (LSCA) method to mitigate the above problems. Specifically, during training, we introduce two language-specific losses as language constraints and generate corresponding language-specific targets for them. During decoding, we take the decoding abilities of LSMs into account by combining the output probabilities of two LSMs and the mixture model to obtain the final predictions. Experiments show that either the training or decoding method of LSCA can improve the model\u2019s performance. Furthermore, combining the training and decoding methods of LSCA yields up to a 15.4% relative error reduction on the code-switching test set. Moreover, with our method the system can handle code-switching speech recognition tasks well, based on two pre-trained LSMs, without extra shared parameters or even retraining.'\naddress: |\n $^1$Tianjin Key Laboratory of Cognitive Computing and Application,\\\n College of Intelligence and Computing, Tianjin
Nevertheless, state-of-the-art models have generally struggled with tasks that require quantitative reasoning, such as solving mathematics, science, and engineering problems at the college level. To help close this gap, we introduce [[Minerva\u00a0]{}]{}, a large language model pretrained on general natural language data and further trained on technical content. The model achieves state-of-the-art performance on technical benchmarks without the use of external tools. We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences that require quantitative reasoning, and find that the model can correctly answer nearly a third of them.'\nauthor:\n- 'Aitor Lewkowycz[^1]'\n- 'Anders Andreassen[^2]'\n- 'David Dohan$^\\dagger$'\n- 'Ethan Dyer$^\\dagger$'\n- 'Henryk Michalewski$^\\dagger$'\n- 'Vinay Ramasesh$^\\dagger$'\n- Ambrose Slone\n- Cem Anil\n- Imanol Schlag\n- 'Theo Gutman-Solo'\n- Yuhuai Wu\n- 'Behnam Neyshabur$^*$'\n- 'Guy Gur-Ari$^*$'\n- 'Vedant Misra$^*$'\nbibliography:\n- 'references.bib'\ntitle: |\n **[Solving Quantitative Reasoning Problems with\\\n Language Models]{}**\n---\n\nIntroduction {#sec:introduction}\n============\n\nArtificial neural networks have seen remarkable success in a variety of domains including computer vision, speech recognition, audio and image generation, translation, game" +"---\nabstract: 'We study the Bayesian multi-task variable selection problem, where the goal is to select activated variables for multiple related data sets simultaneously. We propose a new variational Bayes algorithm which generalizes and improves the recently developed \u201csum of single effects\" model of\u00a0@wang2020simple. Motivated by differential gene network analysis in biology, we further extend our method to joint structure learning of multiple directed acyclic graphical models, a problem known to be computationally highly challenging. 
We propose a novel order MCMC sampler where our multi-task variable selection algorithm is used to quickly evaluate the posterior probability of each ordering. Both simulation studies and real gene expression data analysis are conducted to show the efficiency of our method. Finally, we also prove a posterior consistency result for multi-task variable selection, which provides a theoretical guarantee for the proposed algorithms. Supplementary materials for this article are available online.'\nauthor:\n- Guanxun Li\n- Quan Zhou\nbibliography:\n- 'reference.bib'\ntitle: 'Bayesian Multi-task Variable Selection with an Application to Differential DAG Analysis'\n---\n\nIntroduction\n============\n\nIn machine learning, multi-task learning refers to the paradigm where we simultaneously learn multiple related tasks instead of learning each task independently\u00a0[@zhang2021survey]. In the context of model" +"---\nabstract: 'The negligible intrinsic spin-orbit coupling (SOC) in graphene can be enhanced by proximity effects in stacked heterostructures of graphene and transition metal dichalcogenides (TMDCs). The composition of the TMDC layer plays a key role in determining the nature and strength of the resultant SOC induced in the graphene layer. Here, we study the evolution of the proximity\u2013induced SOC as the TMDC layer is deliberately defected. Alloyed ${\\rm G/W_{\\chi}Mo_{1-\\chi}Se_2}$ heterostructures with diverse compositions ($\\chi$) and defect distributions are simulated using density functional theory. Comparison with continuum and tight-binding models allows both local and global signatures of the metal-atom alloying to be clarified. Our findings show that, despite some dramatic perturbation of local parameters for individual defects, the low\u2013energy spin and electronic behaviour follow a simple effective medium model which depends only on the composition ratio of the metallic species in the TMDC layer. 
Furthermore, we demonstrate that the topological state of such alloyed systems can be feasibly tuned by controlling this ratio.'\nauthor:\n- Zahra Khatibi\n- 'Stephen R. Power'\ntitle: 'Proximity spin-orbit coupling in graphene on alloyed transition metal dichalcogenides'\n---\n\nIntroduction\n============\n\nSpintronics exploits the spin degree of freedom of an electron to store and transfer information" +"---\nabstract: 'Neutrino telescope experiments are rapidly becoming more competitive in indirect detection searches for dark matter. Neutrino signals arising from dark matter annihilations are typically assumed to originate from the hadronisation and decay of Standard Model particles. Here we showcase a supersymmetric model, the BLSSMIS, that can simultaneously obey current experimental limits while still providing a potentially observable non-standard neutrino spectrum from dark matter annihilation.'\nauthor:\n- 'Melissa van Beekveld [^1]'\n- 'Wim Beenakker [^2]'\n- 'Sascha Caron [^3]'\n- 'Jochem Kip [^4]'\n- 'Roberto Ruiz de Austri [^5]'\n- 'Zhongyi Zhang [^6]'\nbibliography:\n- 'main.bib'\ntitle: 'Non-Standard Neutrino Spectra From Annihilating Neutralino Dark Matter'\n---\n\nIntroduction {#sec:Introduction}\n============\n\nOne of the big unsolved mysteries in current-day physics is the exact nature of dark matter (DM), which as described by the Lambda-CDM model should make up\u00a085% of the matter content of the universe\u00a0[@LAMBDA_CDM:Bertone_2005]. A standard hypothesis is that DM is comprised of Weakly Interacting Massive Particles (WIMPs), which for example can naturally be provided by extensions of the Standard Model (SM) in which supersymmetry (SUSY) is imposed and $R$-parity conservation is assumed. 
Collider, direct, and indirect-detection experiments have not yet found any conclusive evidence for the existence" +"---\nabstract: 'A data-driven model augmentation framework, referred to as Weakly-coupled Integrated Inference and Machine Learning (IIML), is presented to improve the predictive accuracy of physical models. In contrast to [*parameter*]{} calibration, this work seeks corrections to the [*structure*]{} of the model by a) inferring augmentation fields that are consistent with the underlying model, and b) transforming these fields into corrective model forms. The proposed approach couples the inference and learning steps in a weak sense via an alternating optimization scheme. This coupling ensures that the augmentation fields remain learnable and maintain consistent functional relationships with local modeled quantities across the training dataset. An iterative solution procedure is presented in this paper, removing the need to embed the augmentation function during the inference process. This framework is used to infer an augmentation introduced within a polymer electrolyte membrane fuel cell (PEMFC) model using a small amount of training data (from only 14 training cases). These training cases belong to a dataset consisting of high-fidelity simulation data obtained from a high-fidelity model of a first-generation Toyota Mirai. All cases in this dataset are characterized by different inflow and outflow conditions on the same geometry. When tested on 1224 different configurations,
We also find that the 1D helical conducting state on the opposite surface of a step edge emerges when the electron hopping in the direction perpendicular to the step is weak. In other words, the existence of the 1D helical conducting state on the opposite surface of a step edge can be understood by considering the addition of two different-sized independent blocks of 3D higher-order topological insulators. On the other hand, when the electron hopping in the direction perpendicular to the step is strong, the location of the emergent 1D helical conducting state moves from the opposite surface of a step edge to the dip ($270^{\\circ}$ edge) just below the step edge. In this case, the existence at the dip below the step edge can be understood by assigning each surface with a sign ($+$ or $-$) of the mass of the" +"---\nabstract: 'This paper presents a new voice conversion (VC) framework capable of dealing with both additive noise and reverberation, and its performance evaluation. Some VC studies have focused on real-world circumstances in which speech data are corrupted by background noise and reverberation. To deal with more practical conditions where no clean target dataset is available, one possible approach is zero-shot VC, but its performance tends to degrade compared with VC using a sufficient amount of target speech data. To leverage a large amount of noisy-reverberant target speech data, we propose a three-stage VC framework based on a denoising process using a pretrained denoising model, a dereverberation process using a dereverberation model, and a VC process using a nonparallel VC model based on a variational autoencoder.
The experimental results show that 1) noise and reverberation additively cause significant VC performance degradation, 2) the proposed method alleviates the adverse effects caused by both noise and reverberation, and significantly outperforms the baseline directly trained on the noisy-reverberant speech data, and 3) the potential degradation introduced by the denoising and dereverberation still causes noticeable adverse effects on VC performance.'\naddress: ' Nagoya University, Japan '\ntitle: 'An Evaluation of Three-Stage Voice Conversion Framework for Noisy and" +"---\nabstract: 'Three state-of-the-art language-and-image AI models, CLIP, SLIP, and BLIP, are evaluated for evidence of a bias previously observed in social and experimental psychology: equating American identity with being White. Embedding association tests (EATs) using standardized images of self-identified Asian, Black, Latina/o, and White individuals from the Chicago Face Database (CFD) reveal that White individuals are more associated with collective in-group words than are Asian, Black, or Latina/o individuals, with effect sizes $>.4$ for White vs. Asian comparisons across all models. In assessments of three core aspects of American identity reported by social psychologists, single-category EATs reveal that images of White individuals are more associated with patriotism and with being born in America, but that, consistent with prior findings in psychology, White individuals are associated with being less likely to treat people of all races and backgrounds equally. Additional tests reveal that the number of images of Black individuals returned by an image ranking task is more strongly correlated with state-level implicit bias scores for White individuals (Pearson\u2019s $\\rho=.63$ in CLIP, $\\rho = .69$ in BLIP) than are state demographics ($\\rho = .60$), suggesting a relationship between regional prototypicality and implicit bias. 
Three downstream machine learning tasks demonstrate biases" +"---\nabstract: 'Judging by the enormous body of work that it has inspired, Elliott Lieb and Derek Robinson\u2019s 1972 article on the \u201cFinite Group Velocity of Quantum Spin Systems\u201d can be regarded as a *high-impact paper*, as research accountants say. But for more than 30 years this major contribution to quantum physics has remained pretty much confidential. Lieb and Robinson\u2019s work eventually found a large audience in the years 2000, with the rapid and concomitant development of quantum information theory and experimental platforms enabling the characterisation and manipulation of isolated quantum systems at the single-particle level. In this short review article, I will first remind the reader of the central result of Lieb and Robinson\u2019s work, namely the existence of a maximum group velocity for the propagation of information in non-relativistic quantum systems. I will then review the experiments that most closely relate to this finding, in the sense that they reveal how information propagates in specific\u2014yet \u201creal\u201d\u2014quantum systems. Finally, as an outlook, I will attempt to make a connection with the quantum version of the butterfly effect recently studied in chaotic quantum systems.'\nauthor:\n- Marc Cheneau\nbibliography:\n- '../experimental\\_tests\\_of\\_lieb-robinson\\_bounds.bib'\ntitle: 'Experimental tests of Lieb\u2013Robinson bounds'\n---\n\nIntroduction\n============" +"---\nabstract: 'A growing self-avoiding walk (GSAW) is a stochastic process that starts from the origin on a lattice and grows by occupying an unoccupied adjacent lattice site at random. A sufficiently long GSAW will reach a state in which all adjacent sites are already occupied by the walk and become trapped, terminating the process. It is known empirically from simulations that on a square lattice, this occurs after a mean of 71 steps. 
In Part I of a two-part series of manuscripts, we consider simplified lattice geometries only two sites high (\u201cladders\u201d) and derive generating functions for the probability distribution of GSAW trapping. We prove that a self-trapping walk on a square ladder will become trapped after a mean of 17 steps, while on a triangular ladder trapping will occur after a mean of 941/48$\\approx$19.6 steps. We discuss additional implications of our results for understanding trapping in the \u201cinfinite\u201d GSAW.'\nauthor:\n- |\n Alexander R. Klotz\\\n `alex.klotz@csulb.edu`\\\n Department of Physics and Astronomy\\\n California State University, Long Beach\\\n 1250 Bellflower Blvd., Long Beach, CA, 90840\n- |\n Everett Sullivan\\\n `everetts@vt.edu`\\\n Department of Mathematics\\\n Virginia Tech\\\n Blacksburg, Virginia USA\\\n Department of Mathematics\\\n Clayton State University\\\n Morrow, Georgia USA\nbibliography:\n- 'walkrefs.bib'" +"---\nabstract: 'State-of-the-art schemes for performance analysis and optimization of multiple-input multiple-output systems generally experience degradation or even become invalid in dynamic complex scenarios with unknown interference and channel state information (CSI) uncertainty. To adapt to the challenging settings and better accomplish these network auto-tuning tasks, we propose a generic learnable model-driven framework in this paper. To explain how the proposed framework works, we consider regularized zero-forcing precoding as a usage instance and design a light-weight neural network for refined prediction of sum rate and detection error based on coarse model-driven approximations. Then, we estimate the CSI uncertainty on the learned predictor in an iterative manner and, on this basis, optimize the transmit regularization term and subsequent receive power scaling factors. 
An algorithm based on deep-unfolded projected gradient descent is proposed for power scaling, which achieves a favorable trade-off between convergence rate and robustness.'\nauthor:\n- |\n Fan\u00a0Meng, Shengheng\u00a0Liu,\u00a0,\\\n Yongming\u00a0Huang,\u00a0, Zhaohua\u00a0Lu [^1] [^2]\nbibliography:\n- 'References.bib'\ntitle: 'Learnable Model-Driven Performance Prediction and Optimization for Imperfect MIMO System: Framework and Application'\n---\n\nIntelligent wireless communications, deep unfolding, digital twin, performance prediction, projected gradient descent, channel state information, linear beamforming, receive power scaling.\n\nIntroduction {#sec:introduction}\n============\n\nDeep
With both training objectives, a new encoder-decoder network, which learns interpretable cross-modal representation, is proposed for ad-hoc video search. Extensive experiments on TRECVid and MSR-VTT datasets show that the proposed network outperforms several state-of-the-art retrieval models with a statistically" +"---\nabstract: |\n We introduce a novel approach for gait transfer from unconstrained videos in-the-wild. In contrast to motion transfer, the objective here is not to imitate the source\u2019s motions by the target, but rather to replace the walking source with the target, while transferring the target\u2019s typical gait. Our approach can be trained only once with multiple sources and is able to transfer the gait of the target from unseen sources, eliminating the need for retraining for each new source independently. Furthermore, we propose novel metrics for gait transfer based on gait recognition models that enable to quantify the quality of the transferred gait, and show that existing techniques yield a discrepancy that can be easily detected.\n\n We introduce Cycle Transformers GAN (CTrGAN), that consist of a decoder and encoder, both Transformers, where the attention is on the temporal domain between complete images rather than the spatial domain between patches. Using a widely-used gait recognition dataset, we demonstrate that our approach is capable of producing over an order of magnitude more realistic personalized gaits than existing methods, even when used with sources that were not available during training. As part of our solution, we present a detector that determines" +"---\nabstract: 'Edge computing is being widely used for video analytics. To alleviate the inherent tension between accuracy and cost, various video analytics pipelines have been proposed to optimize the usage of GPU on edge nodes. 
Nonetheless, we find that GPU compute resources provisioned for edge nodes are commonly under-utilized due to video content variations, subsampling and filtering at different places of a pipeline. As opposed to model and pipeline optimization, in this work, we study the problem of opportunistic data enhancement using the non-deterministic and fragmented idle GPU resources. Specifically, we propose a task-specific discrimination and enhancement module and a model-aware adversarial training mechanism, providing a way to identify and transform low-quality images that are specific to a video pipeline in an accurate and efficient manner. A multi-exit model structure and a resource-aware scheduler are further developed to make online enhancement decisions and perform fine-grained inference execution under latency and GPU resource constraints. Experiments across multiple video analytics pipelines and datasets reveal that by judiciously allocating a small amount of idle resources on frames that tend to yield greater marginal benefits from enhancement, our system boosts DNN object detection accuracy by $7.3-11.3\\%$ without incurring any latency costs.'\nauthor:\n-
We demonstrate the efficacy of our approach with multiple experiments on both synthetic and real images.'\nauthor:\n- Marissa Ramirez de Chanlatte\n- Matheus Gadelha\n- Thibault Groueix\n- Radomir Mech\nbibliography:\n- 'bib.bib'\ntitle: Recovering Detail in 3D Shapes Using Disparity Maps\n---\n\nIntroduction\n============\n\nReconstructing full 3D shapes from single RGB images is a longstanding computer vision problem. A system capable of performing such a task needs to have some understanding of *pixels* \u2013 how the scene illumination interacts with the geometry and materials to create that image \u2013 and *shapes* \u2013 their structure, how they are usually decomposed and which symmetries arise from their function and style. To develop such
Typical members of this family are PEPA [@HillstonBook], TIPP [@Goetz94], EMPA [@Bernardo99], CASPA [@kuntz:04b], but also the reactive modules language of tools such as PRISM [@KNP11] and STORM [@storm:2017]. Originally devised for classical performance and dependability modelling, SPA models are now frequently used in" +"---\nauthor:\n- 'Nana Cabo Bizet,'\n- 'Octavio Obreg\u00f3n,'\n- 'Wilfredo Yupanqui.'\nbibliography:\n- 'References.bib'\ntitle: Modified entropies as the origin of generalized uncertainty principles\n---\n\n[subheader]{}\n\n[ !a! @toks= @toks= ]{} [@counter>0@toks=@toks=]{} [@counter>0@toks=@toks=]{} [@counter>0@toks=@toks=]{}\n\n[abstract[The Heisenberg uncertainty principle is known to be connected to the entropic uncertainty principle. This correspondence is obtained employing a Gaussian probability distribution for wave functions associated to the Shannon entropy. Independently, due to quantum gravity effects the Heisenberg uncertainty principle has been extended to a Generalized Uncertainty Principle (GUP). In this work, we show that GUP has been derived from considering non-extensive entropies, proposed by one of us. We found that the deformation parameters associated with $S_{+}$ and $S_-$ entropies are negative and positive respectively. This allows us to explore various possibilities in the search of physical implications. We conclude that non-extensive statistics constitutes a signature of quantum gravity. ]{}]{}\n\nIntroduction\n============\n\nPhysical phenomena on the smaller scales are described by Quantum Mechanics, and our world is inherently non deterministic according to this theory. Heisenberg\u2019s uncertainty principle [@heisenberg], implies that it is impossible to have a particle for which both position $q$ and momentum $p$ are sharply defined, in dramatic contrast with classical mechanics." +"---\nabstract: 'We introduce analogs of left and right RSK insertion for Schubert calculus of complete flag varieties. 
The objects being inserted are certain biwords, the insertion objects are bumpless pipe dreams, and the recording objects are decorated chains in Bruhat order. As an application, we adopt Lenart\u2019s growth diagrams of permutations to give a combinatorial rule for Schubert structure constants in the separated descent case.'\naddress:\n- 'Department of Mathematics, University of Minnesota - Twin Cities, '\n- 'Department of Mathematics, University of Minnesota - Twin Cities, '\nauthor:\n- Daoji Huang\n- Pavlo Pylyavskyy\nbibliography:\n- 'ref.bib'\ntitle: 'Bumpless pipe dream RSK, growth diagrams, and Schubert structure constants'\n---\n\nIntroduction\n============\n\nThe cohomology ring $H^*(Fl(\\C^n))$ of the complete flag variety of nested vector spaces in $\\C^n$ has basis given by the cohomology classes of the Schubert varieties, $[X_w]$. The *Schubert structure constants* are coefficients $c_{w,v}^u$ of the Schubert classes in the expansion of the product of two Schubert classes, $[X_w]\\cdot [X_v]=\\sum_{u} c_{w,v}^u [X_u]$. The Schubert structure constants $c_{w,v}^u$ bear the geometric interpretation of counting the number of points in a suitable intersection of Schubert varieties determined by $u,v,w$, but its combinatorial interpretation has yet to be understood in" +"---\nabstract: 'We analyze the behavior of the microcanonical and canonical caloric curves for a piecewise model of the configurational density of states of simple solids, in the context of melting from the superheated state, as realized numerically in the Z-method via atomistic molecular dynamics. A first-order phase transition with metastable regions is reproduced by the model, being therefore useful to describe aspects of the melting transition. Within this model, transcendental equations connecting the superheating limit, the melting point, and the specific heat of each phase are presented and numerically solved. 
Our results suggest that the essential elements of the microcanonical Z curves can be extracted from simple modeling of the configurational density of states.'\naddress:\n- |\n Research Center on the Intersection in Plasma Physics, Matter and Complexity, P$^2$MC,\\\n Comisi\u00f3n Chilena de Energ\u00eda Nuclear, Casilla 188-D, Santiago, Chile\n- 'Departamento de F\u00edsica, Facultad de Ciencias Exactas, Universidad Andres Bello. Sazi\u00e9 2212, piso 7, Santiago, 8370136, Chile.'\nauthor:\n- Sergio Davis\n- Claudia Loyola\n- Joaqu\u00edn Peralta\nbibliography:\n- 'zmodel.bib'\ntitle: Configurational density of states and melting of simple solids\n---\n\nDensity of states ,Phase transitions ,Melting\n\nIntroduction\n============\n\nOne of the widely used approaches to determine the melting point" +"---\nabstract: 'Spin precession in merging black-hole binaries is a treasure trove for both astrophysics and fundamental physics. There are now well-established strategies to infer from gravitational-wave data whether at least one of the two black holes is precessing. In this paper we tackle the next-in-line target, namely the statistical assessment that the observed system has two precessing spins. We find that the recently developed generalization of the effective precession spin parameter $\\chi_\\mathrm{p}$ is a well-suited estimator to this task. With this estimator, the occurrence of two precessing spins is a necessary (though not sufficient) condition to obtain values $1<\\chi_\\mathrm{p}\\leq 2$. Confident measurements of gravitational-wave sources with $\\chi_\\mathrm{p}$ values in this range can be taken as a conservative assessment that the binary presents two precessing spins. We investigate this argument using a large set of >100 software injections assuming anticipated LIGO/Virgo sensitivities for the upcoming fourth observing run, O4. 
Our results are very encouraging, suggesting that, if such binaries exist in nature and merge at a sufficient rate, current interferometers are likely to deliver the first confident detection of merging black holes with two precessing spins. We investigate prior effects and waveform systematics and, though these need to be better" +"---\nabstract: 'The flavor evolution of neutrinos in core collapse supernovae and neutron star mergers is a critically important unsolved problem in astrophysics. Following the electron flavor evolution of the neutrino system is essential for calculating the thermodynamics of compact objects as well as the chemical elements they produce. Accurately accounting for flavor transformation in these environments is challenging for a number of reasons, including the large number of neutrinos involved, the small spatial scale of the oscillation, and the nonlinearity of the system. We take a step in addressing these issues by presenting a method which describes the neutrino fields in terms of angular moments. We apply our moment method to neutron star merger conditions and show it simulates fast flavor neutrino transformation in a region where this phenomenon is expected to occur. By comparing with particle-in-cell calculations we show that the moment method is able to capture the three phases of growth, saturation, and decoherence, and correctly predicts the lengthscale of the fastest growing fluctuations in the neutrino field.'\nauthor:\n- Evan Grohs\n- Sherwood Richers\n- 'Sean M. Couch'\n- Francois Foucart\n- 'James P. Kneller'\n- 'G. C. McLaughlin'\nbibliography:\n- 'mff.bib'\ntitle: Neutrino Fast Flavor" +"---\nabstract: |\n We introduce cryptographic protocols for securely and efficiently computing the cardinality of set union and set intersection. 
Our private set-cardinality protocols () are designed for the setting in which a large set of parties in a distributed system makes observations, and a small set of parties with more resources and higher reliability aggregates the observations. allows for secure and useful statistics gathering in privacy-preserving distributed systems. For example, it allows operators of anonymity networks such as Tor to securely answer the questions: [*How many unique users are using the network?*]{} and [*How many hidden services are being accessed?*]{}\n\n We prove the correctness and security of in the Universal Composability framework against an active adversary that compromises all but one of the aggregating parties. Although successful output cannot be guaranteed in this setting, either succeeds or terminates with an abort, and we furthermore make the adversary *accountable* for causing an abort by blaming at least one malicious party. We also show that prevents adaptive corruption of the data parties from revealing past observations, which prevents them from being victims of targeted compromise, and we ensure safe measurements by making outputs differentially private.\n\n We present a proof-of-concept implementation of" +"---\nabstract: |\n We study the $b$-matching problem in bipartite graphs $G=(S,R,E)$. Each vertex $s\\in S$ is a server with individual capacity $b_s$. The vertices $r\\in R$ are requests that arrive online and must be assigned instantly to an eligible server. The goal is to maximize the size of the constructed matching. We assume that $G$ is a $(k,d)$-graph\u00a0[@NW], where $k$ specifies a lower bound on the degree of each server and $d$ is an upper bound on the degree of each request. This setting models matching problems in timely applications.\n\n We present tight upper and lower bounds on the performance of deterministic online algorithms. In particular, we develop a new online algorithm via a primal-dual analysis. 
The optimal competitive ratio tends to\u00a01, for arbitrary $k\\geq d$, as the server capacities increase. Hence, nearly optimal solutions can be computed online. Our results also hold for the vertex-weighted problem extension, and thus for AdWords and auction problems in which each bidder issues individual, equally valued bids.\n\n Our bounds improve the previous best competitive ratios. The asymptotic competitiveness of\u00a01 is a significant improvement over the previous factor of $1-1/e^{k/d}$, for the interesting range where $k/d\\geq 1$ is small. Recall" +"---\nabstract: 'Previous Part-Of-Speech (POS) induction models usually assume certain independence assumptions (e.g., Markov, unidirectional, local dependency) that do not hold in real languages. For example, the subject-verb agreement can be both long-term and bidirectional. To facilitate flexible dependency modeling, we propose a Masked Part-of-Speech Model ([MPoSM]{}), inspired by the recent success of Masked Language Models (MLM). [MPoSM]{}can model arbitrary tag dependency and perform POS induction through the objective of masked POS reconstruction. We achieve competitive results on both the English Penn WSJ dataset as well as the universal treebank containing 10 diverse languages. Though modeling the long-term dependency should ideally help this task, our ablation study shows mixed trends in different languages. To better understand this phenomenon, we design a novel synthetic experiment that can specifically diagnose the model\u2019s ability to learn tag agreement. Surprisingly, we find that even strong baselines fail to solve this problem consistently in a very simplified setting: the agreement between adjacent words. Nonetheless, [MPoSM]{}achieves overall better performance. 
Lastly, we conduct a detailed error analysis to shed light on other remaining challenges.[^1]'\nauthor:\n- |\n Xiang Zhou\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Shiyue Zhang\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Mohit Bansal\\\n Department of Computer Science\\\n University of North Carolina at Chapel Hill\\\n `{xzh, shiyue, mbansal}@cs.unc.edu`\\" +"---\nabstract: 'The ionization loss of a high-energy electron-positron pair in thin targets is considered. The analogue of the Landau distribution function is derived for this loss under the condition when the Chudakov effect of the pair ionization loss suppression is manifested. Expression for the most probable value of the pair ionization loss $E_{MP}$ is obtained. It is shown that the magnitude of Chudakov effect for $E_{MP}$ can be noticeably different from the magnitude of this effect for the restricted mean value of the pair ionization loss.'\naddress:\n- 'National Science Center \u2018Kharkiv Institute of Physics and Technology\u2019, 1 Akademichna st., 61108 Kharkiv, Ukraine'\n- 'Karazin Kharkiv National University, 4 Svobody sq., 61022 Kharkiv, Ukraine'\nauthor:\n- 'S.V. Trofymenko'\nbibliography:\n- 'references.bib'\ntitle: 'On the Chudakov effect for the most probable value of high-energy electron-positron pair ionization loss in thin targets'\n---\n\nElectron-positron pair ,ionization energy loss ,straggling function ,Chudakov effect\n\nIntroduction {#Introduction}\n============\n\nThe value of ionization energy loss of a fast particle traversing thin target is stochastic. It is distributed according to the law firstly derived by Landau [@Landau1944]. Such a distribution, known as straggling function, is asymmetric with respect to its single maximum corresponding to the most" +"---\nabstract: 'The rise of vehicle automation has generated significant interest in the potential role of future automated vehicles (AVs). 
In particular, in highly dense traffic settings, AVs are expected to serve as congestion-dampeners, mitigating the presence of instabilities that arise from various sources. However, in many applications, such maneuvers rely heavily on non-local sensing or coordination by interacting AVs, thereby rendering their adaptation to real-world settings a particularly difficult challenge. To address this challenge, this paper examines the role of imitation learning in bridging the gap between such control strategies and realistic limitations in communication and sensing. Treating one such controller as an \u201cexpert\", we demonstrate that imitation learning can succeed in deriving policies that, if adopted by 5% of vehicles, may boost the energy-efficiency of networks with varying traffic conditions by 15% using only local observations. Results and code are available online at .'\nauthor:\n- 'Abdul Rahman Kreidieh$^{1,2}$, Zhe Fu$^{2}$, and Alexandre M. Bayen$^{2}$[^1][^2]'\nbibliography:\n- 'main.bib'\ntitle: '**Learning energy-efficient driving behaviors by imitating experts**'\n---\n\nIntroduction\n============\n\nAutonomous driving systems have the potential to vastly improve the quality and efficiency of existing transportation systems. With fast reaction times and socially optimal behaviors, *automated vehicles* (AVs) can" +"---\nabstract: |\n In recent years, the introduction of the Transformer models sparked a revolution in natural language processing (NLP). BERT was one of the first text encoders using only the attention mechanism without any recurrent parts to achieve state-of-the-art results on many NLP tasks.\n\n This paper introduces a text classifier using topological data analysis. We use BERT\u2019s attention maps transformed into attention graphs as the only input to that classifier. 
The model can solve tasks such as distinguishing spam from ham messages, recognizing whether a sentence is grammatically correct, or evaluating a movie review as negative or positive. It performs comparably to the BERT baseline and outperforms it on some tasks.\n\n Additionally, we propose a new method to reduce the number of BERT\u2019s attention heads considered by the topological classifier, which allows us to prune the number of heads from 144 down to as few as ten with no reduction in performance. Our work also shows that the topological model displays higher robustness against adversarial attacks than the original BERT model, which is maintained during the pruning process. To the best of our knowledge, this work is the first to confront topological-based models with adversarial attacks in the context" +"---\nabstract: 'A data-driven idea is presented to test if light nuclei and hypernuclei obey the coalescence-inspired sum rule, i.e., to test if the flow of a light nucleus or hypernucleus is the summed flow of each of its constituents. Here, the mass difference and charge difference among the constituents of light nuclei and hypernuclei are treated appropriately. The idea is applied to the available data for $\\sqrt{s_{NN}} = 3$ GeV fixed-target Au+Au collisions at the Relativistic Heavy Ion Collider (RHIC), published by the STAR collaboration. It is found that the sum rule for light nuclei is approximately valid near mid-rapidity ($-0.3 < y < 0$), but there is a clear violation of the sum rule at large rapidity ($y < -0.3$). The Jet AA Microscopic Transport Model (JAM), with baryonic mean-field plus nucleon coalescence, generates a similar pattern as obtained from the experimental data. 
In the present approach, the rapidity dependence of directed flow of the hypernuclei $\\mathrm{_{\\Lambda}^{3}H}$ and $\\mathrm{_{\\Lambda}^{4}H}$ is predicted in a model-independent way for $\\sqrt{s_{NN}} = 3$ GeV Au+Au collisions, which will be explored by ongoing and future measurements from STAR.'\naddress: 'Department of Physics, Kent State University, Kent, OH 44242, USA'\nauthor:\n- '*Ashik Ikbal" +"---\nabstract: 'Popular graph neural networks are shallow models, despite the success of very deep architectures in other application domains of deep learning. This reduces the modeling capacity and leaves models unable to capture long-range relationships. The primary reason for the shallow design results from over-smoothing, which leads node states to become more similar with increased depth. We build on the close connection between GNNs and PageRank, for which personalized PageRank introduces the consideration of a personalization vector. Adopting this idea, we propose the Personalized PageRank Graph Neural Network (PPRGNN), which extends the graph convolutional network to an infinite-depth model that has a chance to reset the neighbor aggregation back to the initial state in each iteration. We introduce a nicely interpretable tweak to the chance of resetting and prove the convergence of our approach to a unique solution without placing any constraints, even when taking infinitely many neighbor aggregations. As in personalized PageRank, our result does not suffer from over-smoothing. While doing so, time complexity remains linear while we keep memory complexity constant, independently of the depth of the network, making it scale well to large graphs. We empirically show the effectiveness of our approach for various node and" +"---\nabstract: 'Information geometry and optimal transport are two distinct geometric frameworks for modeling families of probability measures. 
During the recent years, there has been a surge of research endeavors that cut across these two areas and explore their links and interactions. This paper is intended to provide an (incomplete) survey of these works, including entropy-regularized transport, divergence functions arising from $c$-duality, density manifolds and transport information geometry, the para-K\u00e4hler and K\u00e4hler geometries underlying optimal transport and the regularity theory for its solutions. Some outstanding questions that would be of interest to audience of both these two disciplines are posed. Our piece also serves as an introduction to the Special Issue on Optimal Transport of the journal [*Information Geometry*]{}.'\nauthor:\n- Gabriel Khan\n- Jun Zhang\nbibliography:\n- 'references.bib'\ntitle: |\n When Optimal Transport Meets\\\n Information Geometry \n---\n\n\\[theorem\\][Conjecture]{} \\[theorem\\][Proposition]{} \\[theorem\\][Lemma]{} \\[theorem\\][Corollary]{} \\[theorem\\][Observation]{} \\[theorem\\][Procedure]{}\n\ni\n\nOptimal Transport: A Brief Overview\n===================================\n\nThe Monge and Kantorovich formulations\n--------------------------------------\n\nOptimal transport is a classic area of mathematics which combines ideas from geometry, analysis, measure theory, and probability. Today, it is a thriving area of research, both in the pure and applied settings. Furthermore, it has many practical applications ranging from logistics, economics," +"---\nabstract: 'Disposing of simple and efficient sources for photonic states with non-classical photon statistics is of paramount importance for implementing quantum computation and communication protocols. In this work, we propose an innovative approach that drastically simplifies the preparation of non-Gaussian states as compared to previous proposals, by taking advantage from the multiplexing capabilities offered by modern quantum photonics tools. 
Our proposal is inspired by iterative protocols, where multiple resources are combined one after the other for obtaining high-amplitude complex output states. Here, conversely, a large part of the protocol is performed in parallel, by using a single projective measurement along a mode which partially overlaps with all the input modes. We show that our protocol can be used to generate high-quality and high-amplitude Schr\u00f6dinger cat states as well as more complex states such as error-correcting codes. Remarkably, our proposal can be implemented with experimentally available resources, highlighting its straightforward feasibility.'\nauthor:\n- 'M. F. Melalkia'\n- 'J. Huynh'\n- 'S. Tanzilli'\n- 'V. D\u2019Auria'\n- 'J. Etesse'\nbibliography:\n- 'RefPD.bib'\ntitle: 'A multiplexed synthesizer for non-Gaussian photonic quantum state generation'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nEfficient coding of quantum information relies on the capability to generate high-quality states of" +"---\nabstract: 'Motivated by the study of the summatory $k$-free indicator and totient functions in the classical setting, we investigate their function field analogues. First, we derive an expression for the error terms of the summatory functions in terms of the zeros of the associated zeta function. Under the Linear Independence hypothesis, we explicitly construct the limiting distributions of these error terms and compute the frequency with which they occur in an interval $[-\\beta, \\beta]$ for a real $\\beta > 0$. We also show that these error terms are unbiased, that is, they are positive and negative equally often. Finally, we examine the average behavior of these error terms across families of hyperelliptic curves of fixed genus. 
We obtain these results by following a general framework initiated by Cha and Humphries.'\naddress:\n- |\n Department of Mathematics\\\n Massachusetts Institute of Technology\\\n 77 Massachusetts Avenue\\\n Cambridge, MA 02139\n- |\n Department of Mathematics\\\n Harvard University\\\n 1 Oxford Street\\\n Cambridge, MA 02138\n- |\n Department of Mathematics\\\n Yale University\\\n 10 Hillhouse Avenue\\\n New Haven, CT 06520\n- |\n Department of Mathematics\\\n Princeton University\\\n 304 Washington Road\\\n Princeton, NJ 08544\nauthor:\n- Sanjana Das\n- Hannah Lang\n- Hamilton Wan\n- Nancy Xu" +"---\nabstract: 'The finance industry has adopted machine learning (ML) as a form of quantitative research to support better investment decisions, yet there are several challenges often overlooked in practice. (1) ML code tends to be unstructured and ad hoc, which hinders cooperation with others. (2) Resource requirements and dependencies vary depending on which algorithm is used, so a flexible and scalable system is needed. (3) It is difficult for domain experts in traditional finance to apply their experience and knowledge in ML-based strategies unless they acquire expertise in recent technologies. This paper presents Shai-am, an ML platform integrated with our own Python framework. The platform leverages existing modern open-source technologies, managing containerized pipelines for ML-based strategies with unified interfaces to solve the aforementioned issues. Each strategy implements the interface defined in the core framework. The framework is designed to enhance reusability and readability, facilitating collaborative work in quantitative research. 
Shai-am aims to be a pure AI asset manager for solving various tasks in financial markets.'\nauthor:\n- Jonghun Kwak\n- Jungyu Ahn\n- Jinho Lee\n- Sungwoo Park\ntitle: 'Shai-am: A Machine Learning Platform for Investment Strategies'\n---\n\nIntroduction\n============\n\nThe application of machine learning (ML) to solve complex" +"---\nabstract: 'This review summarizes our current understanding of the outer heliosphere and local interstellar medium (LISM) inferred from observations and modeling of interplanetary Lyman-$\\alpha$ emission. The emission is produced by solar Lyman-$\\alpha$ photons (121.567 nm) backscattered by interstellar H atoms inflowing to the heliosphere from the LISM. Studies of Lyman-$\\alpha$ radiation determined the parameters of interstellar hydrogen within a few astronomical units from the Sun. The interstellar hydrogen atoms appeared to be decelerated, heated, and shifted compared to the helium atoms. The detected deceleration and heating proved the existence of secondary hydrogen atoms created near the heliopause. This finding supports the discovery of a Hydrogen Wall beyond the heliosphere consisting of heated hydrogen observed in HST/GHRS Lyman-$\\alpha$ absorption spectra toward nearby stars. The shift of the interstellar hydrogen bulk velocity was the first observational evidence of the global heliosphere asymmetry confirmed later by Voyager in situ measurements. SOHO/SWAN all-sky maps of the backscattered Lyman-$\\alpha$ intensity identified variations of the solar wind mass flux with heliolatitude and time. In particular, two maxima at mid-latitudes were discovered during solar activity maximum, which Ulysses missed due to its specific trajectory. Finally, Voyager/UVS and New Horizons/Alice UV spectrographs discovered extraheliospheric Lyman-$\\alpha$ emission. We" +"---\nabstract: 'We study properties of Hamiltonian integrable systems with random initial data by considering their Lax representation. 
Specifically, we investigate the spectral behaviour of the corresponding Lax matrices when the number $N$ of degrees of freedom of the system goes to infinity and the initial data is sampled according to a properly chosen Gibbs measure. We give an exact description of the limit density of states for the exponential Toda lattice and the Volterra lattice in terms of the Laguerre and antisymmetric Gaussian $\\beta$-ensemble in the high temperature regime. For generalizations of the Volterra lattice to short range interactions, called INB additive and multiplicative lattices, the focusing Ablowitz\u2013Ladik lattice and the focusing Schur flow, we derive numerically the density of states. For all these systems, we obtain explicitly the density of states in the ground states.'\nauthor:\n- 'T. Grava[^1], M. Gisonni[^2], G. Gubbiotti[^3], G. Mazzuca[^4]'\nbibliography:\n- 'esplorativo.bib'\ntitle: Discrete integrable systems and random Lax matrices\n---\n\nIntroduction\n============\n\nIn this manuscript, we study properties of Hamiltonian integrable systems with random initial data by analysing the spectral properties of their Lax matrices considered as *random matrices*.\n\nOne of the first investigations in this direction was made in [@Bogomolny2009;" +"---\nabstract: 'The large majority of human activities require collaborations within and across formal or informal teams. Our understanding of how the collaborative efforts spent by teams relate to their performance is still a matter of debate. Teamwork results into a highly interconnected ecosystem of potentially overlapping components where tasks are performed in interaction with team members and across other teams. To tackle this problem, we propose a graph neural network model designed to predict a team\u2019s performance while identifying the drivers that determine such outcome. 
In particular, the model is based on three architectural channels: topological, centrality and contextual which capture different factors potentially shaping teams\u2019 success. We endow the model with two attention mechanisms to boost model performance and allow interpretability. A first mechanism allows pinpointing key members inside the team. A second mechanism allows us to quantify the contributions of the three driver effects in determining the outcome performance. We test model performance on a wide range of domains outperforming most of the classical and neural baselines considered. Moreover, we include synthetic datasets specifically designed to validate how the model disentangles the intended properties on which our model vastly outperforms baselines.'\nauthor:\n- Francesco Carli\n- Pietro" +"---\nabstract: 'Few-shot transfer often shows substantial gain over zero-shot transfer\u00a0[@lauscher2020zero], which is a practically useful trade-off between fully supervised and unsupervised learning approaches for multilingual pretrained model-based systems. This paper explores various strategies for selecting data for annotation that can result in a better few-shot transfer. The proposed approaches rely on multiple measures such as data entropy using $n$-gram language model, predictive entropy, and gradient embedding. We propose a loss embedding method for sequence labeling tasks, which induces diversity and uncertainty sampling similar to gradient embedding. The proposed data selection strategies are evaluated and compared for POS tagging, NER, and NLI tasks for up to 20 languages. Our experiments show that the gradient and loss embedding-based strategies consistently outperform random data selection baselines, with gains varying with the initial performance of the zero-shot transfer. 
Furthermore, the proposed method shows similar trends in improvement even when the model is fine-tuned using a lower proportion of the original task-specific labeled training data for zero-shot transfer.'\nauthor:\n- |\n Shanu Kumar^1^ Sandipan Dandapat^1^ Monojit Choudhury^2^\\\n ^1^ Microsoft R&D, Hyderabad, India\\\n ^2^ Microsoft Research, India\\\n [{shankum,sadandap,monojitc}@microsoft.com]{}\nbibliography:\n- 'custom.bib'\ntitle: '\u201cDiversity and Uncertainty in Moderation\u201d are the Key to Data Selection for" +"---\nabstract: 'Diagnosing hematological malignancies requires identification and classification of white blood cells in peripheral blood smears. Domain shifts caused by different lab procedures, staining, illumination, and microscope settings hamper the re-usability of recently developed machine learning methods on data collected from different sites. Here, we propose a cross-domain adapted autoencoder to extract features in an unsupervised manner on three different datasets of single white blood cells scanned from peripheral blood smears. The autoencoder is based on an R-CNN architecture allowing it to focus on the relevant white blood cell and eliminate artifacts in the image. To evaluate the quality of the extracted features we use a simple random forest to classify single cells. We show that thanks to the rich features extracted by the autoencoder trained on only one of the datasets, the random forest classifier performs satisfactorily on the unseen datasets, and outperforms published oracle networks in the cross-domain task. 
Our results suggest the possibility of employing this unsupervised approach in more complicated diagnosis and prognosis tasks without the need to add expensive expert labels to unseen data.'\nauthor:\n- 'Raheleh Salehi[^1]'\n- 'Ario Sadafi[^^]{}'\n- Armin Gruber\n- Peter Lienemann\n- |\n \\\n Nassir Navab\n- Shadi" +"---\nabstract: 'Let $L\\subset J^1M$ be a closed Legendrian, in the 1-jet space of a closed manifold $M$, with simple front singularities. We define a natural generalization of a Morse flow tree, namely, a stable flow tree. We show a result analogous to Gromov compactness for stable maps \u2013 a sequence of stable flow trees, with a uniform edge bound, has a subsequence that Floer-Gromov converges to a stable flow tree. Moreover, we realize Floer-Gromov convergence as the topological convergence of a certain moduli space of stable flow trees.'\naddress: 'Department of Mathematics, Massachusetts Institute of Technology'\nauthor:\n- Kenneth Blakey\nbibliography:\n- 'References.bib'\ntitle: '**Stable Morse flow trees**'\n---\n\nIntroduction\n============\n\nLet $(M,g)$ be a closed Riemannian $n$-manifold, let $J^0M\\coloneqq M\\times{\\mathbb{R}}$ be $M$\u2019s 0-jet space, and let $J^1M\\coloneqq T^*M\\times{\\mathbb{R}}$ be $M$\u2019s 1-jet space. We endow $J^1M$ with the standard contact structure $\\xi$, given as the kernel of $dz-\\alpha$, where $z$ is the ${\\mathbb{R}}$-coordinate and $\\alpha$ is the tautological 1-form on $T^*M$. An $n$-dimensional submanifold $L\\subset J^1M$ is called *Legendrian* if it is an integral submanifold of $\\xi$. We also endow $T^*M$ with the standard symplectic structure, given as $-d\\alpha$. 
An $n$-dimensional submanifold of $T^*M$ is called *Lagrangian* if $-d\\alpha$" +"---\nabstract: 'We introduce a tool that solves the Schr\u00f6dinger-Euler-Poisson system of equations and allows the study of the interaction between ultralight bosonic dark matter, whose dynamics is described with the Schr\u00f6dinger-Poisson system and luminous matter which, as a first approximation, is modeled with a single component compressible ideal fluid. The two matter fields are coupled through the Poisson equation, whose source is the addition of both, dark matter and fluid densities. We describe the numerical methods used to solve the system of equations and present tests for each of the two components, that show the accuracy and convergence properties of the code. As simple possible applications we present some toy scenarios: i) the merger between a core of dark matter with a cloud of gas, ii) the merger of bosonic dark matter plus fluid configurations, and iii) the post merger properties, including the dark-matter offset from gas and the correlation between oscillations of the bosonic core and those of the gas.'\nauthor:\n- |\n Iv\u00e1n \u00c1lvarez-Rios[^1] and Francisco S. Guzm\u00e1n[^2]\\\n Instituto de F\u00edsica y Matem\u00e1ticas, Universidad Michoacana de San Nicol\u00e1s de Hidalgo. Edificio C-3, Cd. Universitaria, 58040 Morelia, Michoac\u00e1n, M\u00e9xico.\nbibliography:\n- 'BECDM.bib'\ndate: 'Accepted XXX. Received YYY; in" +"---\nabstract: 'In this paper we present a framework for the design and implementation of offset equivariant networks, that is, neural networks that preserve in their output uniform increments in the input. In a suitable color space this kind of networks achieves equivariance with respect to the photometric transformations that characterize changes in the lighting conditions. We verified the framework on three different problems: image recognition, illuminant estimation, and image inpainting. 
Our experiments show that the performance of offset equivariant networks is comparable to that of the state of the art on regular data. Differently from conventional networks, however, equivariant networks do behave consistently well when the color of the illuminant changes.'\naddress: 'Dep.\u00a0of Electrical, Computer and Biomedical Engineering, University of Pavia, Via Ferrata 1, Pavia, 27100, Italy'\nauthor:\n- Marco Cotogni\n- Claudio Cusano\nbibliography:\n- 'bibliography.bib'\ntitle: Offset equivariant networks and their applications\n---\n\nEquivariant neural networks, Convolutional neural network, Image recognition, Illuminant estimation, Inpainting\n\nIntroduction {#sec:intro}\n============\n\nMany recent studies focused on the role of equivariance in neural networks\u00a0[@ravanbakhsh2017equivariance; @kondor2018generalization; @yarotsky2021universal]. Briefly, a network (or one of its components) is equivariant with respect to a group of transformations when the application of one of such transformations to the input" +"---\nabstract: 'Optically active defects in 2D materials, such as hexagonal boron nitride (hBN) and transition metal dichalcogenides (TMDs), are an attractive class of single-photon emitters with high brightness, room-temperature operation, site-specific engineering of emitter arrays, and tunability with external strain and electric fields. In this work, we demonstrate a novel approach to precisely align and embed hBN and TMDs within background-free silicon nitride microring resonators. Through the Purcell effect, high-purity hBN emitters exhibit a cavity-enhanced spectral coupling efficiency up to $46\\%$ at room temperature, which exceeds the theoretical limit for cavity-free waveguide-emitter coupling and previous demonstrations by nearly an order-of-magnitude. 
The devices are fabricated with a CMOS-compatible process and exhibit no degradation of the 2D material optical properties, robustness to thermal annealing, and 100 nm positioning accuracy of quantum emitters within single-mode waveguides, opening a path for scalable quantum photonic chips with on-demand single-photon sources.'\nauthor:\n- 'K. Parto'\n- 'S. I. Azzam'\n- 'N. Lewis'\n- 'S. D. Patel'\n- 'S. Umezawa'\n- 'K. Watanabe'\n- 'T. Taniguchi'\n- 'G. Moody'\nbibliography:\n- 'references.bib'\ntitle: 'Cavity-Enhanced 2D Material Quantum Emitters Deterministically Integrated with Silicon Nitride Microresonators'\n---\n\n\\[sec:intro\\]Introduction {#secintrointroduction .unnumbered}\n=========================\n\nSolid-state single-quantum emitters (SQEs) integrated with chip-scale" +"---\nabstract: 'We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow) that learns stochastic, continuous population dynamics from static snapshot samples taken at sporadic timepoints. MIOFlow combines dynamic models, manifold learning, and optimal transport by training neural ordinary differential equations (Neural ODE) to interpolate between static population snapshots as penalized by optimal transport with manifold ground distance. Further, we ensure that the flow follows the geometry by operating in the latent space of an autoencoder that we call a geodesic autoencoder (GAE). In GAE the latent space distance between points is regularized to match a novel multiscale geodesic distance on the data manifold that we define. We show that this method is superior to normalizing flows, Schr\u00f6dinger bridges and other generative models that are designed to flow from noise to data in terms of interpolating between populations. Theoretically, we link these trajectories with dynamic optimal transport. 
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.'\nauthor:\n- |\n Guillaume Huguet$^1$[^1] D.S. Magruder$^2$ Alexander Tong$^1$ Oluwadamilola Fasina$^2$ Manik Kuchroo$^2$ Guy Wolf$^{1}$[^2] Smita Krishnaswamy$^{2}$\\\n $^1$Universit\u00e9 de Montr\u00e9al; Mila - Quebec AI Institute $^2$ Yale University\ntitle: 'Manifold" +"---\nabstract: 'The paper introduces a novel representation for [*Generalized Planning*]{} (GP) problems, and their solutions, as C++ programs. Our C++ representation allows formally proving the termination of generalized plans, and specifying their [*asymptotic complexity*]{} w.r.t. the number of world objects. Characterizing the complexity of C++ generalized plans enables the application of a combinatorial search that enumerates the space of possible GP solutions in order of complexity. Experimental results show that our implementation of this approach, which we call [bfgp++]{}, outperforms the previous [*GP as heuristic search*]{} approach for the computation of generalized plans represented as compiler-styled programs. 
Last but not least, the execution of a C++ program on a classical planning instance is a deterministic grounding-free and search-free process, so our C++ representation allows us to automatically validate the computed solutions on large test instances of thousands of objects, where off-the-shelf classical planners get stuck either in the pre-processing or in the search.'\nauthor:\n- 'Javier Segovia-Aguas$^1$'\n- |\n Yolanda E-Mart\u00edn$^2$Sergio Jim\u00e9nez$^{3}$ $^1$Universitat Pompeu Fabra\\\n $^2$Florida Universit\u00e0ria\\\n $^3$VRAIN - Valencian Research Institute for Artificial Intelligence, Universitat Polit\u00e8cnica de Val\u00e8ncia javier.segovia@upf.edu, yescudero@florida-uni.es, serjice@dsic.upv.es\nbibliography:\n- 'ijcai22.bib'\ntitle: Representation and Synthesis of C++ Programs for Generalized Planning\n---\n\nIntroduction" +"---\nauthor:\n- 'Shuokai Li[^1]'\n- Yongchun Zhu\n- Ruobing Xie\n- Zhenwei Tang\n- Zhao Zhang\n- Fuzhen Zhuang\n- Qing He\n- Hui Xiong\nbibliography:\n- 'mybibliography.bib'\ntitle: Customized Conversational Recommender Systems\n---\n\nIntroduction\n============\n\nRecently, conversational recommender systems (CRS)\u00a0[@zhang2018towards; @sun2018conversational; @li2018towards; @chen2019towards; @zhou2020improving; @liu2020towards; @li2022user], which capture the user current preference and recommend high-quality items through real-time dialog interactions, have become an emerging research topic. They\u00a0[@li2018towards; @chen2019towards; @zhou2020improving] mainly require a dialogue system and a recommender system. The dialogue system elicits the user intentions through conversational interaction and responses to the user with reasonable utterances. On the other hand, the recommender system provides high-quality recommendations by user\u2019s intentions and inherent preferences. 
CRS has not only a high research value but also a broad application prospect\u00a0[@gao2021advances], such as \u201cSiri\", \u201cCortana\" etc.\n\n![image](intro_0627.pdf){width=\"95.00000%\"}\n\nAs a kind of human-machine interactive system, improving user experience is of vital importance. However, existing CRS methods neglect the importance of user experience. Some methods\u00a0[@zhang2018towards; @sun2018conversational] not only require lots of labor to construct rules or templates but also make results rely on the pre-processing, which hurts user experience due to the constrained interaction\u00a0[@gao2021advances]. Some other approaches\u00a0[@li2018towards; @chen2019towards; @zhou2020towards; @zhou2020improving;" +"---\nabstract: |\n Nowhere dense classes of graphs are classes of sparse graphs with rich structural and algorithmic properties, however, they fail to capture even simple classes of dense graphs. Monadically stable classes, originating from model theory, generalize nowhere dense classes and close them under transductions, i.e.\u00a0transformations defined by colorings and simple first-order interpretations. In this work we aim to extend some combinatorial and algorithmic properties of nowhere dense classes to monadically stable classes of finite graphs. We prove the following results.\n\n - In monadically stable classes the Ramsey numbers $R(s,t)$ are bounded from above by $\\mathcal{O}(t^{s-1-\\delta})$ for some $\\delta>0$, improving the bound $R(s,t)\\in \\mathcal{O}(t^{s-1}/(\\log t)^{s-1})$ known for general graphs and the bounds known for $k$-stable graphs when $s\\leq k$.\n\n - For every monadically stable class ${\\mathrm{\\mathscr{C}}}$ and every integer\u00a0$r$, there exists $\\delta > 0$ such that every graph\u00a0$G \\in {\\mathrm{\\mathscr{C}}}$ that contains an\u00a0$r$-subdivision of the biclique $K_{t,t}$ as a subgraph also contains\u00a0$K_{t^\\delta,t^\\delta}$ as a subgraph. 
This generalizes earlier results for nowhere dense graph classes.\n\n - We obtain a stronger regularity lemma for monadically stable classes of graphs.\n\n - Finally, we show that we can compute polynomial kernels for the independent set and dominating set" +"---\nabstract: 'This paper presents and validates a novel lung nodule classification algorithm that uses multifractal features found in X-ray images. The proposed method includes a pre-processing step where two enhancement techniques are applied: histogram equalization and a combination of wavelet decomposition and morphological operations. As a novelty, multifractal features using wavelet leader based formalism are used with Support Vector Machine classifier; other classical texture features were also included. Best results were obtained when using multifractal features in combination with classical texture features, with a maximum ROC AUC of 75%. The results show improvements when using data augmentation technique, and parameter optimization. The proposed method proved to be more efficient and accurate than Modulus Maxima Wavelet Formalism in both computational cost and accuracy when compared in a similar experimental set up.'\nauthor:\n- |\n I. M. Sierra-Ponce, A. M. Le\u00f3n-Mec\u00edas, D. Vald\u00e9s-Santiago\\\n \\\n Applied Mathematics Department, Faculty of Mathematics and Computer Science\\\n University of Havana\\\n Cuba\nbibliography:\n- 'references.bib'\ntitle: 'Wavelet leader based formalism to compute multifractal features for classifying lung nodules in X-ray images'\n---\n\nIntroduction\n============\n\nLung cancer is a severe respiratory disease, with high prevalence in human beings and the highest oncological mortality rate around the globe." +"---\nabstract: 'We prove the nonarchimedean counterpart of a real inequality involving the metric entropy and measure geometric invariants $V_i$, called Vitushkin\u2019s variations. 
Our inequality is based on a new convenient partial preorder on the set of constructible motivic functions, extending the one considered in\u00a0[@CL.mot]. We introduce, using motivic integration theory and the notion of riso-triviality, nonarchimedean substitutes of the Vitushkin variations $V_i$, and in particular of the number $V_0$ of connected components. We also prove the nonarchimedean global Cauchy-Crofton formula for definable sets of dimension $d$, relating $V_d$ and the motivic measure in dimension $d$.'\naddress:\n- 'Univ. Savoie Mont Blanc, CNRS, LAMA, 73000 Chamb\u00e9ry, France'\n- 'Heinrich-Heine-Universit\u00e4t D\u00fcsseldorf, Universit\u00e4tsstr. 1, 40225 D\u00fcsseldorf, Germany'\nauthor:\n- Georges Comte\n- Immanuel Halupczok\nbibliography:\n- 'references.bib'\ntitle: Motivic Vitushkin invariants\n---\n\n[^1]\n\nIntroduction {#introduction .unnumbered}\n============\n\nIn singularity theory, invariants defined through a deformation process and additive invariants (or maps) are ubiquitous. Archetypical instances of such constructions are the Milnor fibre in singularity theory and the Lipschitz Killing curvatures in differential or integral geometry.\n\nThe Milnor number of a polynomial function germ $f\\colon ({\\mathbb{C}}^n,0)\\to ({\\mathbb{C}},0)$ having an isolated singularity at $0$ may be defined as (up to sign and shift" +"---\nabstract: 'In this paper, we propose a multi-layered hyperelastic plate theory of growth within the framework of nonlinear elasticity. First, the 3D governing system for a general multi-layered hyperelastic plate is established, which incorporates the growth effect, and the material and geometrical parameters of the different layers. Then, a series expansion-truncation approach is adopted to eliminate the thickness variables in the 3D governing system. An elaborate calculation scheme is applied to derive the iteration relations of the coefficient functions in the series expansions. 
Through some further manipulations, a 2D vector plate equation system with the associated boundary conditions is established, which only contains the unknowns in the bottom layer of the plate. To show the efficiency of the current plate theory, three typical examples regarding the growth-induced deformations and instabilities of multi-layered plate samples are studied. Some analytical and numerical solutions to the plate equation are obtained, which can provide accurate predictions on the growth behaviors of the plate samples. Furthermore, the problem of \u2018shape-programming\u2019 of multi-layered hyperelastic plates through differential growth is studied. The explicit formulas of shape-programming for some typical multi-layered plates are derived, which involve the fundamental quantities of the 3D target shapes. By using these" +"---\nabstract: 'Modeling continuous dynamical systems from discretely sampled observations is a fundamental problem in data science. Often, such dynamics are the result of non-local processes that present an integral over time. As such, these systems are modeled with Integro-Differential Equations (IDEs); generalizations of differential equations that comprise both an integral and a differential component. For example, brain dynamics are not accurately modeled by differential equations since their behavior is non-Markovian, i.e. dynamics are in part dictated by history. Here, we introduce the Neural IDE (NIDE), a novel deep learning framework based on the theory of IDEs where integral operators are learned using neural networks. We test NIDE on several toy and brain activity datasets and demonstrate that NIDE outperforms other models. These tasks include time extrapolation as well as predicting dynamics from unseen initial conditions, which we test on whole-cortex activity recordings in freely behaving mice. 
Further, we show that NIDE can decompose dynamics into their Markovian and non-Markovian constituents via the learned integral operator, which we test on fMRI brain activity recordings of people on ketamine. Finally, the integrand of the integral operator provides a latent space that gives insight into the underlying dynamics, which we demonstrate on" +"---\nabstract: 'We prove a version of Bressan\u2019s mixing conjecture where the advecting field is constrained to be a shear at each time. Also, inspired by recent work of BlumenthalCoti\u00a0ZelatiGvalani, we construct a particularly simple example of a shear flow which mixes at the optimal rate. The constructed vector field alternates randomly in time between just two distinct shears.'\naddress: 'University of Chicago, Department of Mathematics. Chicago, Illinois'\nauthor:\n- William Cooperman\nbibliography:\n- 'mixing.bib'\ntitle: Exponential mixing by shear flows\n---\n\nIntroduction\n============\n\nGiven a divergence-free vector field $b \\colon {\\mathbb{R}}^2 / {(2\\pi{\\mathbb{Z}})}^2 \\to {\\mathbb{R}}^2$ on the torus, we are interested in how effectively some mean-zero initial data $u_0$ is mixed when advected by $b$. By solving the transport equation $$\\label{eq:transport}\n \\begin{cases}\n D_t u(t, x) + b(t, x) \\cdot D_x u(t, x) = 0 &\\qquad \\text{ for $t > 0$ and $x \\in {\\mathbb{R}}^2 / {(2\\pi{\\mathbb{Z}})}^2$}\\\\\n u(0, x) = u_0(x) &\\qquad \\text{ for $x \\in {\\mathbb{R}}^2 / {(2\\pi{\\mathbb{Z}})}^2$.}\n \\end{cases}$$ we can measure mixing by taking various measurements of $u(1, \\cdot)$.\n\nThere are a few natural ways to measure mixing, two of which we list here. The functional mixing scale measures $\\|u(1, \\cdot)\\|_{H^{-1}}$ or some other negative Sobolev norm." +"---\nabstract: 'A (directed) temporal graph is a (directed) graph whose edges are available only at specific times during its lifetime $\\tau$. 
Temporal walks are sequences of adjacent edges whose appearing times are either strictly increasing or non-decreasing (here called non-strict), depending on the scenario. Paths are temporal walks where no vertex repetition is allowed. A temporal vertex is a pair $(u,i)$ where $u$ is a vertex and $i\\in[\\tau]$ a timestep. In this paper we focus on the questions: *(i)* are there at least $k$ paths from a single source $s$ to a single target $t$, no two of which internally intersect on a temporal vertex? *(ii)* are there at most $h$ temporal vertices whose removal disconnects $s$ from $t$? Let $k^*$ be the maximum value $k$ for which the answer to *(i)* is yes, and let $h^*$ be the minimum value $h$ for which the answer to *(ii)* is yes. In static graphs, $k^*$ and $h^*$ are equal by Menger\u2019s Theorem and this is a crucial property to solve efficiently both *(i)* and *(ii)*. In temporal graphs such equality has been investigated only focusing on disjoint walks rather than disjoint paths. In this context, we prove that" +"---\nabstract: 'The recent proliferation of real-world human mobility datasets has catalyzed geospatial and transportation research in trajectory prediction, demand forecasting, travel time estimation, and anomaly detection. However, these datasets also enable, more broadly, a descriptive analysis of intricate systems of human mobility. We formally define *patterns of life analysis* as a natural, explainable extension of online unsupervised anomaly detection, where we not only monitor a data stream for anomalies but also explicitly extract normal patterns over time. To learn patterns of life, we adapt Grow When Required (GWR) episodic memory from research in computational biology and neurorobotics to a new domain of geospatial analysis. This biologically-inspired neural network, related to self-organizing maps (SOM), constructs a set of \u201cmemories\u201d or prototype traffic patterns incrementally as it iterates over the GPS stream. 
It then compares each new observation to its prior experiences, inducing an online, unsupervised clustering and anomaly detection on the data. We mine patterns-of-interest from the Porto taxi dataset, including both major public holidays and newly-discovered transportation anomalies, such as festivals and concerts which, to our knowledge, have not been previously acknowledged or reported in prior work. We anticipate that the capability to incrementally learn normal and abnormal road" +"---\nabstract: 'Working with stories and working with computations require very different modes of thought. We call the first mode \u201cstory-thinking\u201d and the second \u201dcomputational-thinking\u201d. The aim of this curiosity-driven paper is to explore the nature of these two modes of thinking, and to do so in relation to programming, including software engineering as programming-in-the-large. We suggest that story-thinking and computational-thinking may be understood as two ways of attending to the world, and that each both contributes and neglects the world, though in different ways and for different ends. We formulate two fundamental problems, i.e., the problem of \u201cneglectful representations\u201d and the problem of oppositional ways of thinking. 
We briefly suggest two ways in which these problems might be tackled and identify candidate hypotheses about the current state of the world, one assertion about a possible future state, and several research questions for future research.'\nauthor:\n- |\n Austen Rainer\\\n School of Electronics,\\\n Electrical Engineering\\\n & Computer Science\\\n Queen\u2019s University Belfast\\\n a.rainer@qub.ac.uk\\\n Catherine Menon\\\n School of Physics,\\\n Engineering\\\n & Computer Science\\\n University of Hertfordshire\\\n c.menon@herts.ac.uk\\\nbibliography:\n- 'references.bib'\ndate: June 2022\ntitle: 'Story-thinking, computational-thinking, programming and software engineering'\n---\n\nIntroduction {#section:introduction}\n============\n\nThe term \u201cstory\u201d is widely used in software" +"---\nabstract: 'IBM quantum computers are used to simulate the dynamics of small systems of interacting quantum spins. For time-independent systems with $N\\leq 2$ spins, we construct circuits which compute the exact time evolution at arbitrary times and allow measurement of spin expectation values. Larger systems with spins require a the Lie-Trotter decomposition of the time evolution operator, and we investigate the case of $N=3$ explicitly. Basic readout-error mitigation is introduced to improve the quality of results run on these noisy devices. The simulations provide an interesting experimental component to the standard treatment of quantum spin in an undergraduate quantum mechanics course.'\nauthor:\n- 'Jarrett L. Lancaster'\n- 'D. Brysen Allen'\ntitle: Simulating spin dynamics with quantum computers\n---\n\nIntroduction\n============\n\nThe field of quantum computation has experienced tremendous growth in recent years. 
Originally suggested as the only possible way to simulate quantum mechanical systems with many degrees of freedom,\u00a0[@Feynman] quantum computing algorithms have also been shown to play a valuable role in an efficient factorization scheme\u00a0[@Shor] which has dramatic implications for modern cryptographic systems.\u00a0[@RSA] Though such devices are still far from capable of solving these classic problems, noisy, intermediate-scale quantum (NISQ) devices\u00a0[@Preskill] are widely available" +"---\nabstract: 'The problem of the large-scale aggregation of the behind-the-meter demand and generation resources by a distributed-energy-resource aggregator (DERA) is considered. As a profit-seeking wholesale market participant, a DERA maximizes its profit while providing competitive services to its customers with higher consumer/prosumer surpluses than those offered by the distribution utilities or community choice aggregators. A constrained profit maximization program for aggregating behind-the-meter generation and consumption resources is formulated, from which payment functions for the behind-the-meter consumptions and generations are derived. Also obtained are DERA\u2019s bid and offer curves for its participation in the wholesale energy market and the optimal schedule of behind-the-meter resources. It is shown that the proposed DERA\u2019s aggregation model can achieve market efficiency equivalent to that when its customers participate individually directly in the wholesale market.'\nauthor:\n- 'Cong Chen, Ahmed S. Alahmed, Timothy D. 
Mount, and Lang\u00a0Tong [^1]'\nbibliography:\n- 'BIB.bib'\ntitle: Competitive DER Aggregation for Participation in Wholesale Markets\n---\n\n### Keywords: {#keywords .unnumbered}\n\ndistributed energy resources and aggregation, behind-the-meter distributed generation, demand-side management, net energy metering, competitive wholesale market.\n\nIntroduction {#sec:Intro}\n============\n\nThe landmark ruling of the Federal Energy Regulatory Commission (FERC) order 2222 aims to remove barriers to the direct participation" +"---\nabstract: 'Condensation in cumulus clouds plays a key role in structuring the mean, non-precipitating trade-wind boundary layer. Here, we summarise how this role also explains the spontaneous growth of mesoscale ($>O(10)$ km) fluctuations in clouds and moisture around the mean state in a minimal-physics, large-eddy simulation of the undisturbed period during BOMEX on a large ($O(100)$ km) domain. Small, spatial anomalies in latent heating in cumulus clouds, which form on top of small moisture fluctuations, give rise to circulations that transport moisture, but not heat, from dry to moist regions, and thus reinforce the latent heating anomaly. We frame this positive feedback as a linear instability in mesoscale moisture fluctuations, whose time-scale depends only on i) a vertical velocity scale and ii) the mean environment\u2019s vertical structure. In our minimal-physics setting, we show both ingredients are provided by the shallow cumulus convection itself: It is intrinsically unstable to length scale growth. The upshot is that energy released by clouds at kilometre scales may play a more profound and direct role in shaping the mesoscale trade-wind environment than is generally appreciated, motivating further research into the mechanism\u2019s relevance.'\nauthor:\n- |\n Martin Janssens^12\\*^, Jordi Vil\u00e0-Guerau de Arellano^1^,\\\n Chiel C. 
van" +"---\nabstract: 'For any positive integer $n$ along with parameters $\\alpha$ and $\\nu$, we define and investigate $\\alpha$-shifted, $\\nu$-offset, floor sequences of length $n$. We find exact and asymptotic formulas for the number of integers in such a sequence that are in a particular congruence class. As we will see, these quantities are related to certain problems of counting lattice points contained in regions of the plane bounded by conic sections. We give specific examples for the number of lattice points contained in elliptical regions and make connections to a few well-known rings of integers, including the Gaussian integers and Eisenstein integers.'\nauthor:\n- Nicholas Dent\n- 'Caleb M.\u00a0Shor'\nbibliography:\n- 'refs.bib'\ntitle: On residues of rounded shifted fractions with a common numerator\n---\n\nIntroduction\n============\n\nFor a fixed positive integer $n$, consider the integer sequence $$\\label{seq:intro}\n {\\left\\lfloor \\frac{n}{1} \\right\\rfloor}, {\\left\\lfloor \\frac{n}{2} \\right\\rfloor}, {\\left\\lfloor \\frac{n}{3} \\right\\rfloor}, \\dots, {\\left\\lfloor \\frac{n}{n} \\right\\rfloor},$$ where ${\\left\\lfloor x \\right\\rfloor}$ denotes the floor function for real $x$. Among the $n$ terms in this sequence, how many are odd? At first glance, it maybe seems reasonable to expect the proportion of odd numbers in the sequence to be roughly half. For $n=10$, the sequence is $10," +"---\nabstract: 'We analyze $K\\to \\pi \\nu\\bar \\nu$ rates in a model with a TeV-scale leptoquark addressing $B$-meson anomalies, based on the flavor non-universal 4321 gauge group featuring third-generation quark-lepton unification. We show that, together with the tight bounds imposed by $\\Delta F = 2$ amplitudes, the present measurement of ${\\mathcal{B}}(K^+ \\to \\pi^+ \\nu \\bar\\nu)$ already provides a non-trivial constraint on the model parameter space. 
In the minimal version of the model, the deviations from the Standard Model in ${\\mathcal{B}}(K^+ \\to \\pi^+ \\nu \\bar\\nu)$ are predicted to be in close correlation with non-standard effects in the Lepton Flavor Universality ratios $R_D$ and $R_{D^*}$. With the help of future data, these correlations can provide a decisive test of the model.'\nauthor:\n- '\u00d2scar L. Crosas'\n- Gino Isidori\n- 'Javier M. Lizana'\n- 'Nud[\u017e]{}eim Selimovi[\u0107]{}'\n- 'Ben A. Stefanek'\nbibliography:\n- 'references.bib'\ntitle: |\n Flavor Non-universal Vector Leptoquark Imprints in $K\\to \\pi \\nu\\bar \\nu$\\\n and $\\Delta F = 2$ Transitions\n---\n\nIntroduction {#sec:intro}\n============\n\nThe anomalies in neutral-current\u00a0[@LHCb:2014vgu; @LHCb:2017avl; @LHCb:2019hip; @LHCb:2021trn] and charged-current\u00a0[@BaBar:2012obs; @BaBar:2013mob; @Belle:2015qfa; @LHCb:2015gmp; @LHCb:2017smo; @LHCb:2017rln] $B$-meson decays have triggered a renewed interest in precision tests of semi-leptonic processes, and in particular in transitions involving third-generation fermions." +"---\nabstract: 'We consider the problem of a Bayesian agent receiving signals over time and then taking an action. The agent chooses when to stop and take an action based on her current beliefs, and prefers (all else equal) to act sooner rather than later. The signals received by the agent are determined by a principal, whose objective is to maximize engagement (the total attention paid by the agent to the signals). We show that engagement maximization by the principal minimizes the agent\u2019s welfare; the agent does no better than if she gathered no information. Relative to a benchmark in which the agent chooses the signals, engagement maximization induces excessive information acquisition and extreme beliefs. 
An optimal strategy for the principal involves \u201csuspensive signals\u201d that lead the agent\u2019s belief to become \u201cless certain than the prior\u201d and \u201cdecisive signals\u201d that lead the agent\u2019s belief to jump to the stopping region.'\nauthor:\n- 'Benjamin H\u00e9bert (Stanford and NBER) and Weijie Zhong (Stanford)[^1]'\nbibliography:\n- 'references\\_comp.bib'\ntitle: Engagement Maximization\n---\n\nKey Words: Information Acquisition, Recommendation Algorithms, Polarization, Rational Inattention\n\nJEL Codes: D83, D86\n\nIntroduction\\[sec:Introduction\\]\n================================\n\nFree-to-use online platforms such as Facebook, Instagram, Youtube, and Pinterest are used by billions of people worldwide" +"---\nabstract: |\n The computation of interaction energies on noisy intermediate-scale quantum (NISQ) computers appears to be challenging with straightforward application of existing quantum algorithms. For example, use of the standard supermolecular method with the variational quantum eigensolver (VQE) would require extremely precise resolution of the total energies of the fragments to provide for accurate subtraction to the interaction energy. Here we present a symmetry-adapted perturbation theory (SAPT) method that may provide interaction energies with high quantum resource efficiency. Of particular note, we present a quantum extended random-phase approximation (ERPA) treatment of the SAPT second-order induction and dispersion terms, including exchange counterparts. Together with previous work on first-order terms, this provides a recipe for complete SAPT(VQE) interaction energies up to second order. The SAPT interaction energy terms are computed as first-level observables with no subtraction of monomer energies invoked, and the only quantum observations needed are the the VQE one- and two-particle density matrices. 
We find empirically that SAPT(VQE) can provide accurate interaction energies even with coarsely optimized, low circuit depth wavefunctions from the quantum computer, simulated through ideal statevectors. The errors on the total interaction energy are orders of magnitude lower than the corresponding VQE total energy errors of" +"---\nabstract: 'In recent years, questions about the construction of special orderings of a given graph search were studied by several authors. On the one hand, the so called end-vertex problem introduced by Corneil et al. in 2010 asks for search orderings ending in a special vertex. On the other hand, the problem of finding orderings that induce a given search tree was introduced already in the 1980s by Hagerup and received new attention most recently by Beisegel et al. Here, we introduce a generalization of some of these problems by studying the question whether there is a search ordering that is a linear extension of a given partial order on a graph\u2019s vertex set. We show that this problem can be solved in polynomial time on chordal bipartite graphs for LBFS, which also implies the first polynomial-time algorithms for the end-vertex problem and two search tree problems for this combination of graph class and search. Furthermore, we present polynomial-time algorithms for LBFS and MCS on split graphs which generalize known results for the end-vertex and search tree problems.'\nauthor:\n- Robert Scheffler\nbibliography:\n- 'partial-orders.bib'\ndate: |\n Institute of Mathematics, Brandenburg University of Technology, Cottbus, Germany\\\n `robert.scheffler@b-tu.de`\ntitle: Linearizing" +"---\nabstract: |\n We revisit the classical pursuit curve problem solved by Pierre Bouguer in the 18th century, taking into account that information propagates at a finite speed. 
To a certain extent, this could be seen as a relativistic correction to that problem, though one does not need Einstein\u2019s theory of relativity in order to derive or understand its solution.\n\n The discussion of this generalized problem of pursuit constitutes an excellent opportunity to introduce the concept of retarded time without the complications inherent to the study of electromagnetic radiation (where it is usually seen for the first time), which endows the problem with a clear pedagogical motivation.\n\n We find the differential equation which describes the problem, solve it numerically, compare the solution to Bouguer\u2019s for different values of the parameters, and deduce a necessary and sufficient condition for the pursuer to catch the pursued, complementing previous work by Hoenselaers.\nauthor:\n- Thales Azevedo\n- Anderson Pelluso\ntitle: 'Space pirates: a pursuit curve problem involving retarded time'\n---\n\nIntroduction\n============\n\nIn 1732, French mathematician, geophysicist and hydrographer Pierre Bouguer posed and solved a problem which is nowadays regarded as the beginning of modern mathematical pursuit analysis. [@Pierre] The problem consisted of" +"---\nabstract: 'There is a well-known relationship between the binary Pascal\u2019s triangle and the Sierpinski triangle, in which the latter is obtained from the former by successive modulo 2 additions on one of its corners. Inspired by that, we define a binary Apollonian network and obtain two structures featuring a kind of dendritic growth. They are found to inherit the small-world and scale-free property from the original network but display no clustering. Other key network properties are explored as well. Our results reveal that the structure contained in the Apollonian network may be employed to model an even wider class of real-world systems.'\nauthor:\n- 'Eduardo M. K. Souza'\n- 'Guilherme M. A. 
Almeida'\ntitle: Binary Apollonian networks\n---\n\nNetwork theory has brought significant advances to the field of complex systems, being applied to the spread of diseases and information, author collaboration networks, engineering, condensed-matter physics, transport, and much more [@alb; @newbw]. While many types of networks are used to model the dynamics of those and other classes of systems, they often share a lot in common. To bypass complexity and reach for perspective, it is thus valuable to explore their universal properties. For instance, real-world networks are neither fully random nor" +"---\nabstract: 'The design of a neural network is usually carried out by defining the number of layers, the number of neurons per layer, their connections or synapses, and the activation function that they will execute. The training process tries to optimize the weights assigned to those connections, together with the biases of the neurons, to better fit the training data. However, the definition of the activation functions is, in general, determined in the design process and not modified during the training, meaning that their behavior is unrelated to the training data set. In this paper we propose the definition and utilization of an implicit, parametric, non-linear activation function that adapts its shape during the training process. This fact increases the space of parameters to optimize within the network, but it allows greater flexibility and generalizes the concept of neural networks. Furthermore, it simplifies the architectural design since the same activation function definition can be employed in each neuron, letting the training process optimize their parameters and, thus, their behavior. 
Our proposed activation function comes from the definition of the consensus variable from the optimization of a linear underdetermined problem with an $L_p^q$ regularization term, via the Alternating" +"---\nauthor:\n- Tetsutaro Higaki\n- Kohei Kamada\n- Kentaro Nishimura\nbibliography:\n- 'reference.bib'\ndate: June 2022\ntitle: Formation of Chiral Soliton Lattice\n---\n\nIntroduction\n============\n\nThe domain wall is a field configuration connecting two vacua and is identified as a two-dimensional topological defect. If the system has a periodic potential with (infinite) degenerate vacua, which typically appears for the pseudo Nambu-Goldstone bosons (pNGBs) associated with the spontaneous breaking of a global U(1) symmetry, it allows a stack of parallel domain walls, well-known as the chiral soliton lattice (CSL). This state universally appears from condensed matter physics to high-energy physics. The CSL-type magnetic structure has been originally studied in chiral magnets\u00a0[@dzyaloshinskii1964theory] and experimentally observed [@togawa2016symmetry] (see ref.\u00a0[@kishine2015theory] for review). In high energy physics, it has been shown, based on a low-energy effective theory that takes into account the chiral anomaly\u00a0[@Son:2004tq; @Son:2007ny], that the ground state of QCD at finite baryon chemical potential $\\mu_{\\textrm{B}}$ under a sufficiently strong magnetic field is the CSL of $\\pi_0$ meson [@Brauner:2016pko; @Son:2007ny; @Eto:2012qd] (see also refs.\u00a0[@Kawaguchi:2018fpi; @Chen:2021vou; @Gronli:2022cri; @Yamamoto:2015maz; @Brauner:2017mui; @Brauner:2021sci; @Evans:2022hwr] for related works). As other examples, the rotation-induced CSL\u00a0[@Huang:2017pqe; @Nishimura:2020odq; @Eto:2021gyy; @Chen:2021aiq], Floquet-engineered CSL\u00a0[@Yamada:2021jhy], the CSL in" +"---\nabstract: 'Transient noise glitches in gravitational-wave detector data limit the sensitivity of searches and contaminate detected signals. 
In this Paper, we show how glitches can be simulated using generative adversarial networks (GANs). We produce hundreds of synthetic images for the 22 most common types of glitches seen in the LIGO, KAGRA, and Virgo detectors. We show how our GAN-generated images can easily be converted to time series, which would allow us to use GAN-generated glitches in simulations and mock data challenges to improve the robustness of gravitational-wave searches and parameter-estimation algorithms. We perform a neural network classification to show that our artificial glitches are an excellent match for real glitches, with an average classification accuracy across all 22 glitch types of 99.0%.'\naddress:\n- '$^{1}$OzGrav: The ARC Centre of Excellence for Gravitational-wave Discovery, Australia'\n- '$^{2}$Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, VIC 3122, Australia'\n- '$^{3}$OzGrav-ANU, Centre for Gravitational Astrophysics, College of Science, The Australian National University, ACT 2601, Australia'\n- '$^{4}$Eliiza, Level 2/452 Flinders St, Melbourne VIC 3000'\n- '$^{5}$School of Physics and Astronomy, Monash University, Clayton, VIC 3800 Australia'\nauthor:\n- 'Jade Powell$^{1,2}$, Ling Sun$^{1,3}$, Katinka Gereb$^{4}$, Paul D. Lasky$^{1,5}$, Markus Dollmann$^{4}$'\nbibliography:" +"---\naddress: |\n ${}^\\dagger$[*Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC Canada V6T 1Z1*]{}\\\n and\\\n ${}^\\ddagger$[*Faculty of Physics, University of Vienna\\\n Boltzmanngasse 5, A-1090 Vienna, Austria*]{} \nbibliography:\n- 'papers.bib'\ntitle: |\n Cosmic time evolution and propagator\\\n from a Yang-Mills matrix model\n---\n\nUWThPh-2022-8\\\n\nJoanna L. Karczmarek[^1]${}^\\dagger$ and Harold C. 
Steinacker[^2]${}^\\ddagger$\n\n**Abstract**\n\nWe consider a solution of an IKKT-type matrix model which can be considered as a 1+1-dimensional space-time with Minkowski signature and a Big Bounce-like singularity. A suitable $i\\varepsilon$ regularization of the Lorentzian matrix integral is proposed, which leads to the standard $i\\varepsilon$-prescription for the effective field theory. In particular, the Feynman propagator is recovered locally for late times. This demonstrates that a causal structure and time evolution can emerge in the matrix model, even on non-trivial geometries. We also consider the propagation of modes through the Big Bounce, and observe an interesting correlation between the post-BB and pre-BB sheets, which reflects the structure of the brane in target space.\n\nIntroduction {#introduction .unnumbered}\n============\n\nMatrix theory can be viewed as an alternative approach to string theory. There are two prominent matrix models which can be taken as a starting point: the BFSS model [@Banks:1996vh] is" +"---\nabstract: 'Computing the Euclidean minimum spanning tree ([Emst]{}) is a computationally demanding step of many algorithms. While work-efficient serial and multithreaded algorithms for computing [Emst]{} are known, designing an efficient GPU algorithm is challenging due to a complex branching structure, data dependencies, and load imbalances. In this paper, we propose a single-tree [Borvka]{}-based algorithm for computing [Emst]{} on GPUs. We use an efficient nearest neighbor algorithm and reduce the number of required distance calculations by avoiding traversing subtrees with leaf nodes in the same component. The developed algorithms are implemented in a performance portable way using ArborX, an open-source geometric search library based on the Kokkos framework. We evaluate the proposed algorithm on various 2D and 3D datasets, and compare it with the current state-of-the-art open-source CPU implementations. 
We demonstrate 4-24$\\times$ speedup over the fastest multi-threaded implementation. We prove the portability of our implementation by providing results on a variety of hardware: [AMD EPYC 7763]{}, [Nvidia A100]{} and [AMD MI250X]{}. We show the scalability of the implementation, computing [Emst]{} for a 37\u00a0million-point 3D cosmological dataset in under 0.5 seconds on a single A100 Nvidia GPU.'\nauthor:\n- Andrey Prokopenko\n- Piyush Sao\n- 'Damien Lebrun-Grandi\u00e9'\n- 'A." +"---\nabstract: 'This paper introduces R-MelNet, a two-part autoregressive architecture with a frontend based on the first tier of MelNet and a backend WaveRNN-style audio decoder for neural text-to-speech synthesis. Taking as input a mixed sequence of characters and phonemes, with an optional audio priming sequence, this model produces low-resolution mel-spectral features which are interpolated and used by a WaveRNN decoder to produce an audio waveform. Coupled with half precision training, R-MelNet uses under 11 gigabytes of GPU memory on a single commodity GPU (NVIDIA 2080Ti). We detail a number of critical implementation details for stable half precision training, including an approximate, numerically stable mixture of logistics attention. Using a stochastic, multi-sample per step inference scheme, the resulting model generates highly varied audio, while enabling text and audio based controls to modify output waveforms. 
Qualitative and quantitative evaluations of an R-MelNet system trained on a single speaker TTS dataset demonstrate the effectiveness of our approach.'\nauthor:\n- |\n Kyle Kastner\\\n Mila\\\n Universit\u00e9 de Montr\u00e9al\\\n Aaron Courville[^1]\\\n Mila\\\n Universit\u00e9 de Montr\u00e9al\\\nbibliography:\n- 'mybib.bib'\ntitle: 'R-MelNet: Reduced Mel-Spectral Modeling for Neural TTS'\n---\n\nIntroduction\n============\n\nRecent progress in text-to-speech (TTS) synthesis has been largely driven by the integration of neural networks" +"---\nabstract: 'We propose a method to construct composite two-qubit gates with narrowband profiles with respect to the spin-spin coupling. The composite sequences are selective to the variations in the amplitude and duration of the spin-spin coupling, and can be used for highly-selective qubit addressing with greatly reduced cross talk, quantum logic spectroscopy, and quantum sensing.'\nauthor:\n- 'Boyan T. Torosov'\n- 'Nikolay V. Vitanov'\ntitle: 'Narrowband composite two-qubit phase gates'\n---\n\nComposite pulses are a powerful quantum control technique, which offers a large variety of excitation profile features, such as high fidelity, robustness, sensitivity, etc. A composite pulse is actually a sequence of pulses with well-defined and different relative phases, used as control parameters to achieve a certain objective. First developed in nuclear magnetic resonance (NMR) [@NMR] and its analogue even earlier in polarization optics [@PolarizationOptics], their efficiency was soon largely acknowledged. 
In the last two decades, they have been successfully applied in other areas, such as trapped ions [@Gulde2003; @Mount2015; @Schmidt-Kaler2003; @Haffner2008; @Timoney2008; @Monz2009; @Zarantonello2019; @Shappert2013], neutral atoms [@Rakreungdet2009; @Demeter2016], quantum dots [@Wang2012; @Eng2015; @Kestner2013; @Wang2014; @Zhang2017; @Hickman2013], NV centers in diamond [@Rong2015], doped solids [@Schraft2013; @Genov2014; @Genov2017; @Bruns2018], superconducting qubits [@SteffenMartinisChuang; @TorosovIBM], optical clocks [@Zanon-Willette2018], atom optics" +"---\nabstract: 'We study the problem of model selection in bandit scenarios in the presence of nested policy classes, with the goal of obtaining simultaneous adversarial and stochastic (\u201cbest of both worlds\") high-probability regret guarantees. Our approach requires that each base learner comes with a candidate regret bound that may or may not hold, while our meta algorithm plays each base learner according to a schedule that keeps the base learner\u2019s candidate regret bounds balanced until they are detected to violate their guarantees. We develop careful mis-specification tests specifically designed to blend the above model selection criterion with the ability to leverage the (potentially benign) nature of the environment. We recover the model selection guarantees of the CORRAL\u00a0[@agarwal2017corralling] algorithm for adversarial environments, but with the additional benefit of achieving high probability regret bounds, specifically in the case of nested adversarial linear bandits. More importantly, our model selection results also hold simultaneously in stochastic environments under gap assumptions. 
These are the first theoretical results that achieve best of both worlds (stochastic and adversarial) guarantees while performing model selection in (linear) bandit scenarios.'\nauthor:\n- |\n Aldo Pacchiano\\\n Microsoft Research, NYC\\\n `apacchiano@microsoft.com`\\\n Christoph Dann\\\n Google, NYC\\\n `cdann@cdann.net`\\\n Claudio Gentile\\\n Google, NYC\\" +"---\nabstract: 'Transition amplitudes and transition probabilities are relevant to many areas of physics simulation, including the calculation of response properties and correlation functions. These quantities can also be related to solving linear systems of equations. Here we present three related algorithms for calculating transition probabilities. First, we extend a previously published short-depth algorithm, allowing for the two input states to be non-orthogonal. Second, building on this procedure, we derive a higher-depth algorithm based on Trotterization and Richardson extrapolation that requires fewer circuit evaluations. Third, we introduce a tunable algorithm that allows for trading off circuit depth and measurement complexity, yielding an algorithm that can be tailored to specific hardware characteristics. Finally, we implement proof-of-principle numerics for models in physics and chemistry and for a subroutine in variational quantum linear solving (VQLS). The primary benefits of our approaches are that (a) arbitrary non-orthogonal states may now be used with small increases in quantum resources, (b) we (like another recently proposed method) entirely avoid subroutines such as the Hadamard test that may require three-qubit gates to be decomposed, and (c) in some cases fewer quantum circuit evaluations are required as compared to the previous state-of-the-art in NISQ algorithms for transition
A promising approach is to obtain low-dimensional hash codes representing cases and perform a similarity retrieval of cases in a Hamming space. However, previous methods based on data-independent hashing rely on random projections or manual construction, which cannot address specific data issues (e.g., high-dimensionality and heterogeneity) due to their insensitivity to data characteristics. To address these issues, this work introduces a novel deep hashing network to learn similarity-preserving compact hash codes for efficient case retrieval and proposes a deep-hashing-enabled CBR model HeCBR. Specifically, we introduce position embedding to represent heterogeneous features and utilize a multilinear interaction layer to obtain case embeddings, which effectively filters out zero-valued features to tackle high-dimensionality and sparsity and captures inter-feature couplings. Then, we feed the case embeddings into fully-connected layers, and subsequently a hash layer generates hash codes with a quantization regularizer to control the quantization loss during relaxation. To cater for incremental learning of CBR, we further propose an adaptive learning strategy to update the hash function. Extensive experiments on public datasets show HeCBR greatly reduces storage and significantly accelerates the case" +"---\nabstract: 'In a recent work the present authors have shown that the eigenvalue probability density function for Dyson Brownian motion from the identity on $U(N)$ is an example of a newly identified class of random unitary matrices called cyclic P\u00f3lya ensembles. In general the latter exhibit a structured form of the correlation kernel. Specialising to the case of Dyson Brownian motion from the identity on $U(N)$ allows the moments of the spectral density, and the spectral form factor $S_N(k;t)$, to be evaluated explicitly in terms of a certain hypergeometric polynomial. 
Upon transformation, this can be identified in terms of a Jacobi polynomial with parameters $(N(\\mu - 1),1)$, where $\\mu = k/N$ and $k$ is the integer labelling the Fourier coefficients. From existing results in the literature for the asymptotics of the latter, the asymptotic forms of the moments of the spectral density can be specified, as can $\\lim_{N \\to \\infty} {1 \\over N} S_N(k;t) |_{\\mu = k/N}$. These in turn allow us to give a quantitative description of the large $N$ behaviour of the average $ \\langle | \\sum_{l=1}^N e^{ i k x_l} |^2 \\rangle$. The latter exhibits a dip-ramp-plateau effect, which is attracting recent interest from the viewpoints" +"---\nabstract: 'We investigate the problem of finding the local analogue of the ergotropy, that is the maximum work that can be extracted from a system if we can only apply local unitary transformation acting on a given subsystem. In particular, we provide a closed formula for the local ergotropy in the special case in which the local system has only two levels, and give analytic lower bounds and semidefinite programming upper bounds for the general case. 
As non-trivial examples of application, we compute the local ergotropy for an atom in an electromagnetic cavity with Jaynes-Cummings coupling, and the local ergotropy for a spin site in an XXZ Heisenberg chain, showing that the amount of work that can be extracted with a unitary operation on the coupled system can be greater than the work obtainable by quenching off the coupling with the environment before the unitary transformation.'\nauthor:\n- Raffaele Salvia\n- Giacomo De Palma\n- Vittorio Giovannetti\nbibliography:\n- 'ErgoLocale.bib'\ntitle: Optimal local work extraction from bipartite quantum systems in the presence of Hamiltonian couplings\n---\n\nIntroduction {#sec:intro}\n============\n\nAs quantum technologies are expected to be highly sensitive to the interaction" +"---\nabstract: |\n Individuals invest in Environmental-Social-Governance (ESG)-assets not only because of (higher) expected returns but also driven by ethical and social considerations. Less is known about ESG-conscious investors\u2019 subjective beliefs about crypto-assets and how these compare to traditional assets. Controversies surrounding the ESG footprint of certain crypto-asset classes - mainly on grounds of their energy-intensive crypto mining - offer a potentially informative object of inquiry. Leveraging a unique representative household finance survey for the Austrian population, we examine whether investors\u2019 ESG preferences can explain cross-sectional differences in individual portfolio exposure to crypto-assets. We find a strong association between investors\u2019 ESG preferences and the crypto-investment exposure. 
The ESG-conscious investor attention is higher for crypto-assets compared to traditional asset classes such as bonds and shares.\\\n **JEL code:** D14, G11, G41.\\\n **Keywords:** crypto-assets; investment portfolio; financial behaviour; financial literacy; environmental-social-governance preferences.\nauthor:\n- 'Pavel Ciaian[^1] Andrej Cupak[^2] Pirmin Fessler[^3] d\u2019Artis Kancs[^4]'\nbibliography:\n- 'references.bib'\ntitle: '**Environmental-Social-Governance Preferences and Investments in Crypto-Assets**[^5]'\n---\n\nIntroduction {#sec:section1}\n============\n\nIn a standard asset pricing framework, financial decisions are determined by investor\u2019s preferences and beliefs over asset returns. A more recent literature has identified also the relevance of investor environment and non-pecuniary effects in driving cross-sectional" +"---\nabstract: 'While the 10d type IIB supergravity action evaluated on AdS$_5\\times S^5$ solution vanishes, the 5d effective action reconstructed from equations of motion using $M^5 \\times S^5$ compactification ansatz is proportional to the AdS$_5$ volume. The latter is consistent with the conformal anomaly interpretation in AdS/CFT context. We show that this paradox can be resolved if, in the case of $M^5 \\times X^5$ topology, the 10d action contains an additional 5-form dependent \u201ctopological\u201d term $\\int F_{5M} \\wedge F_{5X}$. The presence of this term is suggested also by gauge-invariance considerations in the PST formulation of type IIB supergravity action. 
We show that this term contributes to the 10d action evaluated on the D3-brane solution.'\nbibliography:\n- 'Boundary.bib'\n---\n\n[**On type IIB supergravity action on $M^5 \\times X^5$ solutions**]{}\n\nIntroduction\n============\n\nMany discussions of applications of the maximally supersymmetric case of AdS/CFT duality start with a classical action of 5d gauged supergravity or simply 5d gravity with a cosmological term $$S_5 = - \\frac{1}{2\\kappa_5^2} \\int d^5 x\\, \\sqrt{-g}\\, \\big( \\mathcal{R}_5 + 12 L^{-2} + \\dots \\big)\\,.$$ Evaluating this action on the vacuum solution with radius $L$ gives a factor of the volume of space. Assuming $S^4$
This tutorial may serve as a pedagogical and relatively self-contained introduction for graduate students and researchers new to the field, as well as an overview of the current state-of-the-art of the field and a reference guide to specialists.'\nauthor:\n- Pasquale Marra\ntitle: Majorana" +"---\nabstract: 'Previous work suggests that people\u2019s preference for different kinds of information depends on more than just accuracy. This could happen because the messages contained within different pieces of information may either be well-liked or repulsive. Whereas factual information must often convey uncomfortable truths, misinformation can have little regard for veracity and leverage psychological processes which increase its attractiveness and proliferation on social media. In this review, we argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation by reducing, rather than increasing, the psychological cost of doing so. We cover how attention may often be shifted away from accuracy and towards other goals, how social and individual cognition is affected by misinformation and the cases under which debunking it is most effective, and how the formation of online groups affects information consumption patterns, often leading to more polarization and radicalization. Throughout, we make the case that polarization and misinformation adherence are closely tied. We identify ways in which the psychological cost of adhering to misinformation can be increased when designing anti-misinformation interventions or resilient affordances, and we outline open research questions that the CSCW community can take up in further understanding this" +"---\nabstract: 'We consider the time evolution of a sessile drop of volatile partially wetting liquid on a rigid solid substrate. Thereby, the drop evaporates under strong confinement, namely, it sits on one of the two parallel plates that form a narrow gap. 
First, we develop an efficient mesoscopic thin-film description in gradient dynamics form. It couples the diffusive dynamics of the vertically averaged vapour density in the narrow gap to an evolution equation for the profile of the volatile drop. The underlying free energy functional incorporates wetting, interface and bulk energies of the liquid and gas entropy. The model allows us to investigate the transition between diffusion-limited and phase transition-limited evaporation for shallow droplets. Its gradient dynamics character also allows for a full-curvature formulation. Second, we compare results obtained with the mesoscopic model to corresponding direct numerical simulations solving the Stokes equation for the drop coupled to the diffusion equation for the vapour as well as to selected experiments. In passing, we discuss the influence of contact line pinning.'\nauthor:\n- 'Simon Hartmann , Christian Diddens, Maziyar Jalaal,'\n- Uwe Thiele\nbibliography:\n- 'evaporation\\_gap.bib'\ntitle: 'Sessile drop evaporation in a gap \u2013 crossover between diffusion-limited and phase transition-limited regime'" +"---\nabstract: 'Most organizations rely on managers to identify talented workers for promotions. However, managers who are evaluated on team performance have an incentive to hoard workers. This study provides the first empirical evidence of talent hoarding using novel personnel records from a large manufacturing firm. Temporary reductions of talent hoarding increase workers\u2019 applications for promotions by 123%. 
By reducing the quality and performance of promoted workers, talent hoarding contributes to misallocation of talent and perpetuates gender inequality in representation and pay at the firm.'\nauthor:\n- 'Ingrid Haegele[^1]'\nbibliography:\n- 'literature202202.bib'\ndate: 'This version: February, 2022\\'\nnocite: '[@*]'\ntitle: Talent Hoarding in Organizations\n---\n\nIntroduction\n============\n\nFirms must continually decide how to allocate workers to jobs, a process which has critical implications for productivity ([@rosen1982], [@holmstromtirole]). Because it is difficult to perfectly observe worker ability, most firms rely on managers to identify talented workers who can be promoted to higher-level positions. However, when a talented worker leaves their team for a promotion, team performance suffers. Since managers are rewarded based on team performance and firms cannot perfectly monitor manager actions, the conflicting interests of manager and firm create the potential for moral hazard ([@holmstrom1979]). A growing body of evidence" +"---\nabstract: 'Deeply learned representations have achieved superior image retrieval performance in a retrieve-then-rerank manner. A recent state-of-the-art single-stage model, which heuristically fuses local and global features, achieves a promising trade-off between efficiency and effectiveness. However, we notice that the efficiency of existing solutions is still restricted because of their multi-scale inference paradigm. In this paper, we follow the single-stage approach and obtain a further complexity-effectiveness balance by successfully getting rid of multi-scale testing. To achieve this goal, we abandon the widely-used convolution network, given its limitations in exploring diverse visual patterns, and resort to a fully attention-based framework for robust representation learning, motivated by the success of Transformer. 
Besides applying Transformer for global feature extraction, we devise a local branch composed of window-based multi-head attention and spatial attention to fully exploit local image patterns. Furthermore, we propose to combine the hierarchical local and global features via a cross-attention module, instead of using heuristic fusion as previous art does. With our Deep Attentive Local and Global modeling framework (DALG), extensive experimental results show that efficiency can be significantly improved while maintaining results competitive with the state of the art.'\nauthor:\n- |\n Yuxin Song$^{*}$, Ruolin Zhu$^{*}$, Min Yang$^{*}$, Dongliang He$^{*}$\\\n [songyuxinbb@outlook.com]{}\ntitle:" +"---\nabstract: |\n We study the correlators $W_{g,n}$ arising from Orlov\u2013Scherbin 2-Toda tau functions with rational content-weight $G(z)$, at arbitrary values of the two sets of time parameters. Combinatorially, they correspond to generating functions of weighted Hurwitz numbers and $(m,r)$-factorisations of permutations. When the weight function is polynomial, they are generating functions of constellations on surfaces in which two full sets of degrees (black/white) are entirely controlled, and in which internal faces are allowed in addition to boundaries. We give the spectral curve (the \u201cdisk\u201d function $W_{0,1}$, and the \u201ccylinder\u201d function $W_{0,2}$) for this model, generalising Eynard\u2019s solution of the 2-matrix model which corresponds to $G(z)=1+z$, by the addition of arbitrarily many free parameters. Our method relies both on the Albenque\u2013Bouttier combinatorial proof of Eynard\u2019s result by slice decompositions, which is strong enough to handle the polynomial case, and on algebraic arguments. Building on this, we establish the topological recursion (TR) for the model. 
Our proof relies on the fact that TR is already known at time zero (or, combinatorially, when the underlying graphs have only boundaries, and no internal faces) by work of Bychkov\u2013Dunin-Barkowski\u2013Kazarian\u2013Shadrin (or Alexandrov\u2013Chapuy\u2013Eynard\u2013Harnad for the polynomial case), and on the general idea of deformation of spectral" +"---\nabstract: 'In studying the interstellar medium (ISM) and Galactic cosmic rays (CRs), uncertainty of the interstellar gas density has always been an issue. To overcome this difficulty, we used a component decomposition of the 21-cm ${\\ion{H}{i}}$ line emission and used the resulting gas maps in an analysis of $\\gamma$-ray data obtained by the *Fermi* Large Area Telescope (LAT) for the MBM\u00a053, 54, and 55 molecular clouds and the Pegasus loop. We decomposed the ISM gas into intermediate-velocity clouds, narrow-line and optically thick ${\\ion{H}{i}}$, broad-line and optically thin ${\\ion{H}{i}}$, CO-bright ${\\mathrm{H}_{2}}$, and CO-dark ${\\mathrm{H}_{2}}$ using detailed correlations with the ${\\ion{H}{i}}$ line profiles from the HI4PI survey, the *Planck* dust-emission model, and the *Fermi*-LAT $\\gamma$-ray data. We found the fractions of optical depth correction to the ${\\ion{H}{i}}$ column density and CO-dark ${\\mathrm{H}_{2}}$ to be nearly equal. We fitted the CR spectra directly measured at/near the Earth and the measured $\\gamma$-ray emissivity spectrum simultaneously. 
We obtained a spectral break in the interstellar proton spectrum at ${\\sim}$7\u00a0GeV, and found the $\\gamma$-ray emissivity normalization agrees with the AMS-02 proton spectrum within 10%, relaxing the tension with the CR spectra previously claimed.'\nauthor:\n- 'T.\u00a0Mizuno'\n- 'K.\u00a0Hayashi'\n- 'J.\u00a0Metzger'\n-" +"---\nabstract: 'Human action recognition is a very widely investigated area, where most remarkable action recognition networks usually use large-scale coarse-grained action datasets of daily human actions as inputs to demonstrate the superiority of their networks. We intend to recognize our small-scale fine-grained Tai Chi action dataset using neural networks and propose a transfer-learning method using the NTU RGB+D dataset to pre-train our network. More specifically, the proposed method first uses the large-scale NTU RGB+D dataset to pre-train the Transformer-based network for action recognition to extract common features among human motion. Then we freeze the network weights except for the fully connected (FC) layer and take our Tai Chi actions as inputs only to train the initialized FC weights. Experimental results show that our general model pipeline can reach a high accuracy on small-scale fine-grained Tai Chi action recognition with even a few inputs and demonstrate that our method achieves state-of-the-art performance compared with previous Tai Chi action recognition methods.'\nauthor:\n- \nbibliography:\n- 'reference.bib'\ntitle: 'Spatial Transformer Network with Transfer Learning for Small-scale Fine-grained Skeleton-based Tai Chi Action Recognition [^1] '\n---\n\nSmall-scale fine-grained action recognition, Tai Chi action, Transfer learning\n\nIntroduction\n============\n\nHuman action recognition raises a lot of" +"---\nabstract: 'Audio-visual speech enhancement systems are regarded as one of the promising solutions for isolating and enhancing the speech of a desired speaker. 
Typical methods focus on predicting the clean speech spectrum via a naive convolutional neural network based encoder-decoder architecture; these methods a) do not use the data fully, and b) are unable to effectively balance audio-visual features. The proposed model alleviates these drawbacks by a) applying a model that fuses audio and visual features layer by layer in the encoding phase and feeds the fused audio-visual features to each corresponding decoder layer, and, more importantly, b) introducing a 2-stage multi-head cross attention (MHCA) mechanism to infer audio-visual speech enhancement by balancing the fused audio-visual features and eliminating irrelevant features. This paper proposes an attentional audio-visual multi-layer feature fusion model in which MHCA units are applied to feature mapping at every layer of the decoder. The proposed model demonstrates superior performance against the state-of-the-art models. Speech samples are available at:'\naddress: |\n $^1$E.E. Engineering, Trinity College Dublin, Ireland\\\n $^2$vivo AI Lab, P.R. China\\\n $^3$School of Foreign Languages, Hubei University of Chinese Medicine, P.R. China\nbibliography:\n- 'mybib.bib'\ntitle: 'Improving Visual Speech Enhancement Network by Learning Audio-visual Affinity with Multi-head Attention'" +"---\nabstract: 'In this position paper, we propose a new approach to generating a type of knowledge base (KB) from text, based on question generation and entity linking. We argue that the proposed type of KB has many of the key advantages of a traditional symbolic KB: in particular, it consists of small modular components, which can be combined *compositionally* to answer complex queries, including relational queries and queries involving \u201cmulti-hop\u201d inferences. However, unlike a traditional KB, this information store is well-aligned with common user information needs.'\nauthor:\n- |\n Wenhu Chen \u00a0 William W. 
Cohen \u00a0 Michiel De Jong \u00a0 Nitish Gupta\\\n **\u00a0 Alessandro Presta \u00a0 Pat Verga \u00a0 John Wieting**\\*\\\n `{wenhuchen,wcohen,msdejong,guptanitish,apresta,patverga,jwieting}@google.com`\\\n Google AI\\\nbibliography:\n- 'biblio.bib'\ntitle: 'QA Is the New KR: Question-Answer Pairs as Knowledge Bases'\n---\n\n=1\n\nIntroduction: QA Pairs as a KB \\[sec:intro\\]\n============================================\n\nBackground\n----------\n\n![Constructing a [QEDB]{} from generated questions and explanations. The first step is to generate question/answer pairs, and align referentially equivalent spans in the question and document (aligned spans have the same color). The aligned document spans, plus the answer spans, become nodes in a graph, and edges are labeled by abstract versions of the corresponding questions.[]{data-label=\"fig:small-qedb\"}](small-qedb.png){width=\"40.00000%\"}\n\nSince the very beginnings of Artificial Intelligence" +"---\nabstract: 'Flow through porous, elastically deforming media is present in a variety of natural contexts ranging from large-scale geophysics to cellular biology. In the case of incompressible constituents, the pore-fluid pressure acts as a Lagrange multiplier to satisfy the resulting constraint on fluid divergence. The resulting system of equations is a possibly non-linear saddle-point problem and difficult to solve numerically, requiring nonlinear implicit solvers or flux-splitting methods. Here, we present a method for the simulation of flow through porous media and its coupled elastic deformation. The pore pressure field is calculated at each time step by correcting trial velocities in a manner similar to Chorin projection methods. We demonstrate the method\u2019s second-order convergence in space and time and show its application to phase-separating neo-Hookean gels.'\naddress:\n- 'John A. 
Paulson School of Engineering and Applied Sciences, Harvard University, 29 Oxford Street, Cambridge, MA 02138'\n- 'Mathematics Group, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720'\nauthor:\n- 'Nicholas J. Derr'\n- 'Chris H. Rycroft'\ntitle: A projection method for porous media flow\n---\n\n=1\n\nIntroduction\n============\n\nDeforming porous media appear across nature, in settings such as soil consolidation [@biot1941], river bed and channel formation [@rodriguez-iturbe1997; @abrams_growth_2009]," +"---\nabstract: 'Data reduction rules are an established method in the algorithmic toolbox for tackling computationally challenging problems. A data reduction rule is a polynomial-time algorithm that, given a problem instance as input, outputs an equivalent, typically smaller instance of the same problem. The application of data reduction rules during the preprocessing of problem instances allows in many cases to considerably shrink their size, or even solve them directly. Commonly, these data reduction rules are applied exhaustively and in some fixed order to obtain irreducible instances. It was often observed that by changing the order of the rules, different irreducible instances can be obtained. We propose to \u201cundo\u201d data reduction rules on irreducible instances, by which they become larger, and then subsequently apply data reduction rules again to shrink them. We show that this somewhat counter-intuitive approach can lead to significantly smaller irreducible instances. The process of undoing data reduction rules is not limited to \u201crolling back\u201d data reduction rules applied to the instance during preprocessing. Instead, we formulate so-called backward rules, which essentially undo a data reduction rule, but without using any information about which data reduction rules were applied to it previously. 
In particular, based on the example" +"---\nabstract: 'Approximate computing is an emerging computing paradigm that offers improved power consumption by relaxing the requirement for full accuracy. Since real-world applications may have different requirements for design accuracy, one trend of approximate computing is to design runtime quality-configurable circuits, which are able to operate under different accuracy modes with different power consumption. In this paper, we present a novel framework RUCA which aims to approximate an arbitrary input circuit in a runtime configurable fashion. By factorizing and decomposing the truth table, our approach aims to approximate and separate the input circuit into multiple configuration blocks which support different accuracy levels, including a corrector circuit to restore full accuracy. By activating different blocks, the approximate circuit is able to operate at different accuracy-power configurations. To improve the scalability of our algorithm, we also provide a design space exploration scheme with circuit partitioning to navigate the search space of possible approximations of subcircuits during design time. We thoroughly evaluate our methodology on a set of benchmarks and compare against another quality-configurable approach, showcasing the benefits and flexibility of RUCA. For 3-level designs, RUCA saves power consumption by 36.57% within 1% error and by 51.32% within 2% error on average.'" +"---\nabstract: 'We define a reflective numerical semigroup of genus $g$ as a numerical semigroup that has a certain reflective symmetry when viewed within ${\\mathbb{Z}}$ as an array with $g$ columns. Equivalently, a reflective numerical semigroup has one gap in each residue class modulo $g$. In this paper, we give an explicit description for all reflective numerical semigroups. 
With this, we can describe the reflective members of well-known families of numerical semigroups as well as obtain formulas for the number of reflective numerical semigroups of a given genus or given Frobenius number.'\naddress: 'Western New England University, Springfield, MA, USA'\nauthor:\n- 'Caleb M.\u00a0Shor'\nbibliography:\n- 'refs.bib'\ntitle: Reflective numerical semigroups\n---\n\nIntroduction\n============\n\nLet ${\\mathbb{N}}_0$ denote the set of non-negative integers, a monoid under addition, and let ${\\mathbb{N}}$ denote the set of positive integers. A numerical semigroup $S$ is a submonoid of ${\\mathbb{N}}_0$ with finite complement. We denote the complement by ${\\mathrm{H}}(S)={\\mathbb{N}}_0\\setminus S$. Elements of ${\\mathrm{H}}(S)$ are called gaps of $S$. The genus of $S$, denoted $\\operatorname{\\mathrm{g}}(S)$, is the number of gaps of $S$. When the genus is positive, the Frobenius number of $S$, denoted $\\operatorname{\\mathrm{F}}(S)$, is the largest gap of $S$. It happens that every numerical semigroup" +"---\nabstract: |\n In this paper, we consider center-based clustering problems where $C$, the set of points to be clustered, lies in a metric space $(X,d)$, and the set $X$ of candidate centers is potentially infinite-sized. We call such problems [*continuous*]{} clustering problems to differentiate them from the [*discrete*]{} clustering problems where the set of candidate centers is explicitly given. It is known that for many objectives, when one restricts the set of centers to $C$ itself and applies an $\\alpha_\\mathsf{dis}$-approximation algorithm for the discrete version, one obtains a $\\beta \\cdot \\alpha_{\\mathsf{dis}}$-approximation algorithm for the continuous version via the triangle inequality property of the distance function. Here $\\beta$ depends on the objective, and for many objectives such as $k$-median, $\\beta = 2$, while for some others such as $k$-means, $\\beta = 4$. 
The motivating question in this paper is *whether this gap of factor $\\beta$ between continuous and discrete problems is inherent, or can one design better algorithms for continuous clustering than simply reducing to the discrete case as mentioned above?* In a recent SODA 2021 paper, Cohen-Addad, Karthik, and Lee prove a factor-$2$ and a factor-$4$ hardness, respectively, for the continuous versions of the $k$-median and $k$-means problems, even" +"---\nabstract: 'Artificial Intelligence (AI) solutions and technologies are being increasingly adopted in smart systems context, however, such technologies are continuously concerned with ethical uncertainties. Various guidelines, principles, and regulatory frameworks are designed to ensure that AI technologies bring ethical well-being. However, the implications of AI ethics principles and guidelines are still being debated. To further explore the significance of AI ethics principles and relevant challenges, we conducted a survey of 99 representative AI practitioners and lawmakers (e.g., AI engineers, lawyers) from twenty countries across five continents. To the best of our knowledge, this is the first empirical study that encapsulates the perceptions of two different types of population (AI practitioners and lawmakers) and the study findings confirm that *transparency*, *accountability*, and *privacy* are the most critical AI ethics principles. On the other hand, *lack of ethical knowledge*, *no legal frameworks*, and *lacking monitoring bodies* are found the most common AI ethics challenges. The impact analysis of the challenges across AI ethics principles reveals that *conflict in practice* is a highly severe challenge. Moreover, the perceptions of practitioners and lawmakers are statistically correlated with significant differences for particular principles (e.g. *fairness*, *freedom*) and challenges (e.g. *lacking monitoring bodies*, *machine distortion*)." 
+"---\nabstract: 'Understanding similarities and differences between the cuprate and nickelate superconductors is drawing intense current interest. Competing charge orders have been observed recently in the $undoped$ infinite-layer nickelates in sharp contrast to the $undoped$ cuprates which exhibit robust antiferromagnetic insulating ground states. The microscopic mechanisms driving these differences remain unclear. Here, using in-depth first-principles and many-body theory based modeling, we show that the parent compound of the nickelate family, LaNiO$_2$, hosts a charge density wave (CDW) ground state with the predicted wavevectors in accord with the corresponding experimental findings. The CDW ground state is shown to be connected to a multi-orbital Peierls distortion. Our study points to the key role of electron-phonon coupling effects in the infinite-layer nickelates.'\nauthor:\n- Ruiqi\u00a0Zhang\n- Christopher Lane\n- Johannes Nokelainen\n- Bahadur Singh\n- Bernardo\u00a0Barbiellini\n- 'Robert S. Markiewicz'\n- Arun\u00a0Bansil\n- Jianwei\u00a0Sun\nbibliography:\n- 'Ref.bib'\ntitle: 'Peierls distortion driven multi-orbital origin of charge density waves in the undoped infinite-layer nickelate'\n---\n\nIntroduction\n============\n\nRecent discovery of superconductivity in the infinite-layer nickelates\u00a0[@Li2019a; @Osada2020a; @ZhangS2021; @2021arXiv211202484R; @OsadaAM2021] has attracted intense attention\u00a0[@Sawatzky2019; @Norman2020; @Pickett2021; @Mitchell2021] as an analog of cuprate superconductivity. Much experimental and theoretical effort has been" +"---\nabstract: 'The expressive and computationally efficient bipartite Graph Neural Networks (GNN) have been shown to be an important component of deep learning enhanced Mixed-Integer Linear Programming (MILP) solvers. Recent works have demonstrated the effectiveness of such GNNs in replacing the *branching (variable selection) heuristic* in branch-and-bound ([B$\\&$B]{}) solvers. 
These GNNs are trained, offline and on a collection of MILP instances, to imitate a very good but computationally expensive branching heuristic, *strong branching*. Given that\u00a0[B$\\&$B]{} results in a tree of sub-MILPs, we ask (a) whether there are strong dependencies exhibited by the target heuristic among the neighboring nodes of the [B$\\&$B]{} tree, and (b) if so, whether we can incorporate them in our training procedure. Specifically, we find that with the strong branching heuristic, a child node\u2019s best choice was often the parent\u2019s second-best choice. We call this the \u201clookback\u201d phenomenon. Surprisingly, the branching GNN of\u00a0@gasse2019exact often misses this simple \u201canswer\u201d. To imitate the target behavior more closely, we propose two methods that incorporate the lookback phenomenon into GNN training: (a) target smoothing for the standard cross-entropy loss function, and (b) adding a *Parent-as-Target (PAT) Lookback* regularizer term. Finally, we propose a model selection framework that directly considers" +"---\nabstract: 'The paper is devoted to the $\\bar{K}NNN$ system, which is an exotic system consisting of an antikaon and three nucleons. Dynamically exact four-body Faddeev-type equations were solved and characteristics of the quasi-bound state in the system were evaluated. Three antikaon-nucleon and three nucleon-nucleon potentials were used, so the dependence of the four-body pole positions on the two-body interaction models was studied. The resulting binding energies $B^{\\rm Chiral}_{K^-ppn} \\sim 30.5 - 34.5$ MeV obtained with chirally motivated and $B^{\\rm SIDD}_{K^-ppn} \\sim 46.4 - 52.0$ MeV obtained with phenomenological antikaon-nucleon potentials are close to those obtained for the $K^- pp$ system with the same $\\bar{K}N$ and $NN$ potentials, while the four-body widths $\\Gamma_{K^- ppn} \\sim 38.2 - 50.9$ MeV are smaller.'\nauthor:\n- 'N.V. 
Shevchenko'\ntitle: 'Quasi-bound state in the $\\bar{K}NNN$ system'\n---\n\nIntroduction\n============\n\nThe attractive nature of the $\\bar{K}N$ interaction led to suggestions that quasi-bound states can exist in few-body systems consisting of antikaons and nucleons\u00a0[@AY1]. In particular, a deep and relatively narrow quasi-bound state was predicted in the lightest three-body $\\bar{K}NN$ system\u00a0[@AY2]. Many theoretical calculations of the system were performed after that using different methods and inputs. All of them agree that the quasi-bound state really exists" +"---\nabstract: 'The main objective of this article is to establish a central limit theorem for additive three-variable functionals of bifurcating Markov chains. We thus extend the central limit theorem under point-wise ergodic conditions studied in Bitseki-Delmas (2022) and, to a lesser extent, the results of Bitseki-Delmas (2022) on the central limit theorem under $L^{2}$ ergodic conditions. Our results also extend and complement those of Guyon (2007) and Delmas and Marsalle (2010). In particular, when the ergodic rate of convergence is greater than $1/\\sqrt{2}$, we have, for a certain class of functions, that the asymptotic variance is non-zero at a speed faster than the usual central limit theorem studied by Guyon and Delmas-Marsalle.'\naddress: 'S. Val\u00e8re Bitseki Penda, IMB, CNRS-UMR 5584, Universit\u00e9 Bourgogne Franche-Comt\u00e9, 9 avenue Alain Savary, 21078 Dijon Cedex, France.'\nauthor:\n- 'S. 
Val\u00e8re Bitseki Penda'\nbibliography:\n- 'biblio.bib'\ntitle: 'Central limit theorem for bifurcating Markov chains: the mother-daughters triangles case'\n---\n\n**Keywords**: Bifurcating Markov chains, binary trees.\\\n**Mathematics Subject Classification (2010)**: 60F05, 60J80.\n\nIntroduction\n============\n\nThis article is devoted to the extension of the central limit theorem for bifurcating Markov chains given in Bitseki-Delmas [@BD1] when the functions do not only depend on one variable $x$ say, but on" +"---\nabstract: 'A map graph is a graph admitting a representation in which vertices are nations on a spherical map and edges are shared curve segments or points between nations. We present an explicit fixed-parameter tractable algorithm for recognizing map graphs parameterized by treewidth. The algorithm has time complexity that is linear in the size of the graph and, if the input is a yes-instance, it reports a certificate in the form of a so-called witness. Furthermore, this result is developed within a more general algorithmic framework that allows to test, for any $k$, if the input graph admits a $k$-map (where at most $k$ nations meet at a common point) or a hole-free\u00a0$k$-map (where each point of the sphere is covered by at least one nation). We point out that, although bounding the treewidth of the input graph also bounds the size of its largest clique, the latter alone does not seem to be a strong enough structural limitation to obtain an efficient time complexity. 
In fact, while the largest clique in a $k$-map graph is $\\lfloor 3k/2 \\rfloor$, the recognition of $k$-map graphs is still open for any fixed $k \\ge 5$.'\nauthor:\n- Patrizio\u00a0Angelini\n-" +"---\nauthor:\n- Sebastian Bartling\nbibliography:\n- '/Users/sebastianbartling/Documents/Bibtex/mybib.bib'\ntitle: 'Sur la cohomologie \u00e9tale de la courbe de Fargues-Fontaine'\n---\n\nR\u00e9sum\u00e9: On analyse la cohomologie \u00e9tale des faisceaux constructibles de torsion sur le site \u00e9tale de la courbe de Fargues-Fontaine associ\u00e9e \u00e0 un corps $p$-adique et un corps alg\u00e9briquement clos, non-archimedi\u00e9n de caract\u00e9ristique $p>0.$ Dans le cas de $\\ell\\neq p$-torsion, on d\u00e9montre deux conjectures de Fargues: l\u2019annulation en degr\u00e9s sup\u00e9rieure ou \u00e9gal \u00e0 trois de la cohomologie \u00e9tale des faisceaux constructibles sur la version alg\u00e9brique de la courbe de Fargues-Fontaine et la comparaison avec la cohomologie \u00e9tale de la version adique. Dans le cas $\\ell=p$ les r\u00e9sultats sont conditionnels: sous la condition qu\u2019une hypoth\u00e8se sur la g\u00e9om\u00e9trie de la courbe adique est v\u00e9rifi\u00e9e, on d\u00e9montre l\u2019annulation de la cohomologie \u00e9tale en degr\u00e9s sup\u00e9rieure ou \u00e9gal \u00e0 trois de ces faisceaux Zariski-constructibles sur la courbe adique qui sont alg\u00e9briques.\n\nIntroduction\n============\n\nSoient $p$ un nombre premier fix\u00e9 et $E$ une extension finie de $\\mathbb{Q}_{p},$ $F$ un corps alg\u00e9briquement clos de caract\u00e9ristique $p>0,$ extension du corps r\u00e9siduel de $E,$ complet pour une valeur absolue non-archim\u00e9dienne non-triviale. 
Dans le livre [@FarguesFontaine] Fargues-Fontaine ont associ\u00e9 \u00e0 la donn\u00e9e de $E$,$F$ un objet g\u00e9om\u00e9trique qui a" +"---\nauthor:\n- Olaf Kaczmarek\ntitle: Flavored aspects of QCD thermodynamics from Lattice QCD\n---\n\nIntroduction\n============\n\nIn these lecture notes we will discuss recent lattice QCD results relevant for the understanding of strongly interacting matter under extreme conditions. We will mainly focus here on observables which are relevant for the study of such matter at high temperatures and densities that are probed in heavy ion collision experiments. We will only give a brief introduction to lattice QCD in the next section and refer to the textbooks [@Gattringer:2010zz; @Montvay:1994cy; @Rothe:1992nt; @DeGrand:2006zz] and lecture notes [@Karsch:2003jg] for more detailed introductions to lattice field theory. For the topics addressed in this lecture note we also like to refer to the overview articles on QCD thermodynamics and the QCD phase transition [@Karsch:2003jg; @Ding:2015ona; @Guenther:2020jwe] and quarkonium in extreme conditions [@Rothkopf:2019ipj].\n\nQCD Thermodynamics and strangeness\n==================================\n\n$T_c$ and the equation of state\n-------------------------------\n\nAt vanishing chemical potentials it is well established that the transition from hadronic matter at small temperatures to the deconfined and chiral symmetric phase at high temperatures is not a real phase transition but rather a smooth cross-over for physical values of the quark masses [@Bazavov:2011nk; @Bhattacharya:2014ara; @Aoki:2006we]. Strictly speaking an" +"---\nabstract: 'We study the behavior of the co-spectral radius of a subgroup $H$ of a discrete group $\\Gamma$ under taking intersections. Our main result is that the co-spectral radius of an invariant random subgroup does not drop upon intersecting with a deterministic co-amenable subgroup. 
As an application, we find that the intersection of independent co-amenable invariant random subgroups is co-amenable.'\naddress:\n- |\n Department of Mathematics\\\n University of Chicago\\\n Chicago, IL 60637\\\n USA\n- |\n Department of Mathematics, Statistics, and Computer Science\\\n University of Illinois at Chicago\\\n Chicago, IL 60647\\\n USA\nauthor:\n- Mikolaj Fraczyk\n- Wouter van Limbeek\nbibliography:\n- 'ref.bib'\ntitle: 'Co-spectral radius of intersections'\n---\n\nIntroduction {#sec:intro}\n============\n\nLet $\\Gamma$ be a countable group and let $S$ be a finite symmetric subset of $\\Gamma$. The **co-spectral radius** of a subgroup $H\\subset \\Gamma$ (with respect to $S$) is defined as the norm of the operator $M\\colon L^2(\\Gamma/H)\\to L^2(\\Gamma/H)$: $$M\\phi(\\gamma H)=\\frac{1}{|S|}\\sum_{s\\in S}\\phi(s\\gamma H),\\quad \\rho(\\Gamma/H):=\\|M\\|.$$ The groups with co-spectral radius $1$ for every choice of $S$ are called [**co-amenable**]{}. If the group $\\Gamma$ is finitely generated, one needs only to verify that the co-spectral radius is $1$ for some generating set $S$.\n\nIn this paper we investigate the behavior" +"---\nabstract: 'In this paper, we explore some significant properties associated with a fractal operator on the space of all continuous functions defined on the Sierpi\u0144ski Gasket (SG). We also provide some results related to constrained approximation with fractal polynomials and study the best approximation properties of fractal polynomials defined on the SG. 
Further, we make some remarks on the class of polynomials defined on the SG and try to estimate the fractal dimensions of the graph of the $\\alpha$-fractal function defined on the SG by using the oscillation of functions.'\naddress:\n- 'Department of Mathematical Sciences, IIT(BHU), Varanasi, India 221005 '\n- 'Department of Applied Sciences, IIIT Allahabad, Prayagraj, India 211015'\n- 'Department of Mathematical Sciences, IIT(BHU), Varanasi, India 221005'\nauthor:\n- 'V. Agrawal'\n- 'S. Verma'\n- 'T. Som'\ntitle: Fractal polynomials On the Sierpi\u0144ski gasket and some dimensional results\n---\n\nIntroduction\n============\n\nThe foundational framework of Fractal interpolation functions (FIFs) was first devised by Barnsley [@MF2] using the theory of Iterated Function Systems (IFS). FIFs were defined by Celik et al. on $SG$ in [@Celik] and by Ruan [@Ruan3] on the basis of post-critically finite (p.c.f.) self-similar sets, presented by Kigami [@Kig] to investigate fractal analysis. On
Here we investigate from this perspective the temporal evolution of the ensemble, starting in the small sample limit, i.e., when the atoms have mutual separations much smaller than the transition wavelength $\\lambda$ and pass down the ladder of symmetric Dicke states. In addition, we explore the temporal evolution for the case of non-interacting atoms with mutual separations much larger than $\\lambda$.\u00a0We show that in this case a similar superradiant burst of the emitted radiation is observed if the quantum correlations of the atoms" +"---\nabstract: 'We investigate two variants of quantum compass models (QCMs). The first, an orbital-only honeycomb QCM, is shown to exhibit a quantum phase transition (QPT) from a $XX$- to $ZZ$-ordered phase in the $3d$-Ising universality class, in accord with earlier studies. In a fractionalized parton construction, this describes a \u201csuperfluid-Mott insulator\u201d transition between a higher-order topological superfluid and the toric code, the latter described as a $p$-wave resonating valence bond state of the partons. The second variant, the spinless fermion QCM on a square lattice, is of interest in context of cold-atom lattices with higher-angular momentum states on each atom. We explore finite-temperature orbital order-disorder transitions in the itinerant and localized limits using complementary methods. In the itinerant limit, we uncover an intricate temperature ($T$)-dependent dimensional crossover from a high-$T$ quasi-$1d$ insulator-like state, via an incoherent bad-metal-like state at intermediate $T$, to a $2d$ symmetry-broken insulator at low $T$, well below the \u201corbital\u201d ordering scale. Finally, we discuss how engineering specific, tunable and realistic perturbations in both these variants can act as a playground for simulating a variety of exotic QPTs between topologically ordered and trivial phases. 
In the cold-atom context, we propose a novel way to engineer a" +"---\nabstract: 'Clinical trials are essential for drug development but are extremely expensive and time-consuming to conduct. It is beneficial to study similar historical trials when designing a clinical trial. However, lengthy trial documents and lack of labeled data make trial similarity search difficult. We propose a *zero-shot* clinical trial retrieval method, called [`Trial2Vec`]{}, which learns through self-supervision without the need for annotating similar clinical trials. Specifically, the *meta-structure* of trial documents (e.g., title, eligibility criteria, target disease) along with clinical knowledge (e.g., UMLS knowledge base [^1]) are leveraged to automatically generate contrastive samples. Besides, [`Trial2Vec`]{}encodes trial documents considering meta-structure thus producing compact embeddings aggregating multi-aspect information from the whole document. We show that our method yields medically interpretable embeddings by visualization and it gets 15% average improvement over the best baselines on precision/recall for trial retrieval, which is evaluated on our labeled 1600 trial pairs. In addition, we prove the pretrained embeddings benefit the downstream trial outcome prediction task over 240k trials. [^2]'\nauthor:\n- |\n Zifeng Wang and Jimeng Sun\\\n University of Illinois Urbana-Champaign\\\n `{zifengw2,jimeng}@illinois.edu`\\\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: '[`Trial2Vec`]{}: Zero-Shot Clinical Trial Document Similarity Search using Self-Supervision'\n---\n\n=1\n\nIntroduction\n============\n\nClinical trials are essential" +"---\nabstract: 'We consider a class of translation-invariant 2D tensor network states with a stabilizer symmetry, which we call stabilizer PEPS. The cluster state, GHZ state, and states in the toric code belong to this class. 
We investigate the transmission capacity of stabilizer PEPS for measurement-based quantum wire, and arrive at a complete classification of transmission behaviors. The transmission behaviors fall into 13 classes, one of which corresponds to Clifford quantum cellular automata. In addition, we identify 12 other classes.'\nauthor:\n- Paul Herringer\n- Robert Raussendorf\nbibliography:\n- '2dWire.bib'\ntitle: 'Classification of measurement-based quantum wire in stabilizer PEPS'\n---\n\nIntroduction\n============\n\nHow to harness multi-particle entanglement is a central question in quantum information processing. Among the many known protocols are quantum steering\u00a0[@Steer1; @Steer2], distribution of localizable entanglement [@LocE] in quantum networks, Bell-state distillation in quantum communication\u00a0[@Bennett; @Briegel], and measurement-based quantum computation (MBQC)\u00a0[@Raussendorf2001].\n\nIn some of these protocols, symmetry plays an important role. For example, we observe this feature in measurement-based quantum information processing, including measurement-based quantum state transmission (also known as quantum wire)\u00a0[@LRE; @SPT2], quantum computation on a resource state\u00a0[@Raussendorf2001; @AKLT1; @AKLT2], and computational phases of quantum matter\u00a0[@SPT1; @SPT2; @SPT3; @SPT4; @SPT6; @SPT7; @Bridge;" +"---\nabstract: 'Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks compared with Generative Adversarial Nets (GANs). Recent work on semantic image synthesis mainly follows the *de facto* GAN-based approaches, which may lead to unsatisfactory quality or diversity of generated images. In this paper, we propose a novel framework based on DDPM for semantic image synthesis. 
Unlike previous conditional diffusion models, which directly feed the semantic layout and noisy image as input to a U-Net structure and thus may not fully leverage the information in the input semantic mask, our framework processes the semantic layout and the noisy image differently. It feeds the noisy image to the encoder of the U-Net structure, while the semantic layout is fed to the decoder through multi-layer spatially-adaptive normalization operators. To further improve the generation quality and semantic interpretability in semantic image synthesis, we introduce the classifier-free guidance sampling strategy, which acknowledges the scores of an unconditional model during the sampling process. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our proposed method, achieving state-of-the-art performance in terms of fidelity\u00a0(FID) and diversity\u00a0(LPIPS). The code and models are publicly available at .'\nauthor:\n- |\n Weilun Wang[$~^{1}$]{}, \u00a0Jianmin Bao[$~^{2}$]{}[^1], \u00a0Wengang Zhou[$~^{1,4}$]{},\\\n Dongdong Chen[$~^{3}$]{}, \u00a0Dong
The brigade organization exists to develop leadership traits with different levels of responsibility throughout the four years a student attends USNA [@atwaterAndYummarino1989].\n\n![Brigade of Midshipmen in Formation [@DVIDS:2018][]{data-label=\"fig:brigadeformation\"}](images/USNABrigade-Gray-DVIDSHUBphotoid4984586.jpg)\n\nDuring the COVID-19 pandemic, midshipmen were isolated from their leadership experiences within the brigade. Restrictions due to COVID forced all midshipmen home for fully remote learning during Spring 2020. Summer training at sea was cancelled in 2020, and all students completed virtual summer school. The brigade returned in Fall 2020, but various isolation protocols and hybrid classroom learning continued through the end of spring semester 2021. Furthermore, USNA leadership felt the pressures from the COVID environment exhibited strains to individual and team morale analogous" +"---\nabstract: 'We present S8BQAT, a novel sub-8-bit quantization-aware training scheme for 8-bit neural network accelerators. Our method is inspired by Lloyd-Max compression theory with practical adaptations for a feasible computational overhead during training. With the quantization centroids derived from a baseline, we augment the training loss with a Multi-Regional Absolute Cosine (MRACos) regularizer that aggregates weights towards their nearest centroid, effectively acting as a pseudo compressor. Additionally, a periodically hard compressor is introduced to improve the convergence rate by emulating model weight quantization. [We apply S8BQAT on speech recognition tasks using the Recurrent Neural Network-Transducer (RNN-T) architecture.
With S8BQAT, we are able to increase the model parameter size to reduce the word error rate by 4-16% relative, while still improving latency by 5%.]{}'\naddress: |\n Alexa AI, Amazon, USA$^{\\dagger}$\\\n Hardware Compute Group, Amazon, USA$^{\\star}$\\\n \u00a0`{zhenk, hieng, ravitec}@amazon.com`\\\n \u00a0`{nsusanj, mouchta, tafzal, arastrow}@amazon.com` \nbibliography:\n- 'refs.bib'\ntitle: |\n Sub-8-Bit Quantization-Aware Training for 8-Bit Neural Network Accelerator\\\n On-Device \n---\n\n**Index Terms**: on-device speech recognition, sub-8-bit quantization, INT8 neural network accelerator, Lloyd-Max quantizer\n\nIntroduction {#sec:intro}\n============\n\nUbiquitous as end-to-end ASR models are [@song2021non; @gulati2020conformer; @chang2021end; @radfar2020end], deploying them on the edge can be challenging due to the strict limits on bandwidth and memory of edge devices [@sainath2020streaming; @kim21m_interspeech]. Hence, it is highly
The origin of Ly$\\alpha$ emission around QSOs is still under debate, bringing on significant implications for galaxy formation and evolution.]{} [ In this paper, we study Ly$\\alpha$ nebulae around two high redshift QSOs, SDSS J141935.58+525710.7 at $z=3.218$ (hereafter QSO1) and SDSS J141813.40+525240.4 at $z=3.287$ (hereafter QSO2)," +"---\nabstract: 'Equivalent approaches to determine eigenfrequencies of the Liouvillians of open quantum systems are discussed using the solution of the Heisenberg-Langevin equations and the corresponding equations for operator moments. A simple damped two-level atom is analyzed to demonstrate the equivalence of both approaches. The suggested method is used to reveal the structure as well as eigenfrequencies of the dynamics matrices of the corresponding equations of motion and their degeneracies for interacting bosonic modes described by general quadratic Hamiltonians. Quantum Liouvillian exceptional and diabolical points and their degeneracies are explicitly discussed for the case of two modes. Quantum hybrid diabolical exceptional points (inherited, genuine, and induced) and hidden exceptional points, which are not recognized directly in amplitude spectra, are observed. 
The presented approach via the Heisenberg-Langevin equations paves the general way to a detailed analysis of quantum exceptional and diabolical points in infinitely dimensional open quantum systems.'\nauthor:\n- 'Jan Pe\u0159ina Jr.'\n- Adam Miranowicz\n- Grzegorz Chimczak\n- 'Anna Kowalewska-Kud\u0142aszyk'\ntitle: 'Quantum Liouvillian exceptional and diabolical points for bosonic fields with quadratic Hamiltonians: The Heisenberg-Langevin equation approach'\n---\n\nIntroduction\n============\n\nNon-Hermitian Hamiltonians, for systems with properly balanced dissipation and amplification, have real energy spectra if they exhibit the parity-time" +"---\nauthor:\n- 'Weilong Fu[^1], Ali Hirsa[^2]'\nbibliography:\n- 'references.bib'\ndate: \ntitle: '**Solving barrier options under stochastic volatility using deep learning**'\n---\n\n[[*Keywords:*]{} barrier option, stochastic volatility, Bergomi model, deep learning, neural network]{}\n\nIntroduction\n============\n\nStochastic volatility models are good at replicating the volatility smiles and the correlation between the underlying asset and volatility among the pure diffusion frameworks. Some examples are the Heston model [@heston1993closed], the SABR model [@hagan2002managing] and the Bergomi model [@bergomi2008smile]. The Bergomi model is more complex since it includes multiple volatility factors and is shown to be better at replicating the term structure of forward variances. However, since the stochastic volatility models define additional dynamics of volatility, option pricing under these models is generally more challenging than that under the models which only consider dynamics of the underlying asset.\n\nBarrier options are path-dependent options whose payoff depends on whether or not the underlying asset has reached the barrier level. 
They are classified into up/down-and-in/out calls/puts based on the position of the barrier level, its payoff after the barrier level is reached, and the corresponding vanilla option. Traditional methods to price barrier options under the stochastic volatility models include the finite difference method in [@chiarella2012evaluation; @guardasoni2016fast;" +"---\nabstract: '**Background:** Technical Debt (TD) refers to the situation where developers make trade-offs to achieve short-term goals at the expense of long-term code quality, which can have a negative impact on the quality of software systems. In the context of code review, such sub-optimal implementations have a chance to be resolved in a timely manner during the review process before the code is merged. Therefore, we could consider them as Potential Technical Debt (PTD), since PTD will evolve into TD when it is injected into software systems without being resolved. **Aim:** To date, little is known about the extent to which PTD is identified in code reviews. Many tools have been provided to detect TD, but these tools lack consensus, and a large amount of PTD is undetectable by tools, while code review could help verify the quality of committed code by identifying issues such as PTD. To this end, we conducted an exploratory study to understand the nature of PTD in code reviews and to track the resolution of PTD after it is identified. **Method:** We randomly collected 2,030 review comments from the Nova project of OpenStack and the Qt Base project of Qt. We then manually" +"---\nabstract: '[We study the tree models of mildly dissipative diffeomorphisms on the disk ${{\\mathbb D}}$. These models are one-dimensional dynamical systems that retain the ergodic aperiodic data as well as some properties of the original dynamics.
Our focus in this work is their quality as a dynamical invariant in both the topological and the ergodic sense.]{}'\naddress:\n- Universidade Federal de Minas Gerais\n- Universidad Nacional Mayor de San Marcos\nauthor:\n- Javier Correa\n- Elizabeth Flores\nbibliography:\n- 'Bibliog.bib'\ntitle: On the tree models of mildly dissipative maps\n---\n\n[0,2cm]{} [0]{} [**]{} [0,5cm]{} [****]{} [0,2cm]{} [:]{}\n\n\\[section\\] \\[teo\\][Lemma]{} \\[teo\\][Proposition]{} \\[teo\\] [Definition]{} \\[teo\\][Remark]{}\n\n[0,2cm]{} [0]{} [0,5cm]{} [****]{} [0,2cm]{} [:]{}\n\n[^1]\n\nIntroduction\n============\n\nA classical way to study dynamical systems is through the search for reduced models that capture the main features of the object of study. S. Crovisier and E. Pujals introduced in [@pujcrov] a one-dimensional model to study a family of surface diffeomorphisms which they called strongly dissipative, a notion later renamed mildly dissipative in [@CPT]. In this work, we study this model from the perspective of a dynamical invariant. This inquiry is natural and also interesting because these models are topological objects, yet they are built
Surprisingly, we find that excitations due to dipolar interactions correspond to optical analogs of those found in frustrated magnets and superfluids, with closely related symmetry-breaking mechanisms despite the significant physical differences between these systems, opening the potential for simulating even quantum magnetism. Light transmitted through the array conveys information about symmetry breaking in the hysteresis behavior of the spectrum. Moreover, in a Mott-insulator state, the atomic positions are subject to zero-point quantum fluctuations. Interpreting each stochastic realization as a light-induced quantum measurement of the atomic position configuration, we find" +"---\nabstract: 'Fetal growth assessment from ultrasound is based on a few biometric measurements that are performed manually and assessed relative to the expected gestational age. Reliable biometry estimation depends on the precise detection of landmarks in standard ultrasound planes. Manual annotation can be a time-consuming and operator-dependent task, and may result in high measurement variability. Existing methods for automatic fetal biometry rely on initial automatic fetal structure segmentation followed by geometric landmark detection. However, segmentation annotations are time-consuming and may be inaccurate, and landmark detection requires developing measurement-specific geometric methods. This paper describes BiometryNet, an end-to-end landmark regression framework for fetal biometry estimation that overcomes these limitations. It includes a novel Dynamic Orientation Determination (DOD) method for enforcing measurement-specific orientation consistency during network training. DOD reduces variability in network training and increases landmark localization accuracy, thus yielding accurate and robust biometric measurements. To validate our method, we assembled a dataset of 3,398 ultrasound images from 1,829 subjects acquired in three clinical sites with seven different ultrasound devices.
Comparison and cross-validation of three different biometric measurements on two independent datasets show that BiometryNet is robust and yields accurate measurements whose errors are lower than the clinically permissible errors, outperforming other existing" +"---\nabstract: 'We present a model for the remnants of haloes that have gone through an adiabatic tidal stripping process. We show that this model exactly reproduces the remnant of an NFW halo that is exposed to a slowly increasing isotropic tidal field, and approximately so for an anisotropic tidal field. The model can be used to predict the asymptotic mass loss limit for orbiting subhaloes, solely as a function of the initial structure of the subhalo and the value of the tidal field at pericentre. Predictions can easily be made for differently concentrated host-haloes with and without baryonic components, which differ most notably in their relation between pericentre radius and tidal field. The model correctly predicts several empirically measured relations such as the \u2018tidal track\u2019 and the \u2018orbital frequency relation\u2019 that was reported by Errani & Navarro (2021) for the case of an isothermal sphere. Further, we [propose applications of]{} the \u2018structure-tide\u2019 degeneracy, which implies that increasing the concentration of a subhalo has exactly the same impact on tidal stripping as reducing the amplitude of the tidal field. Beyond this, we find that simple relations hold for the bound mass, truncation radius, WIMP annihilation luminosity and tidal ratio of tidally stripped" +"---\nabstract: |\n Large Resistive Plate Chamber systems have their roots in High Energy Physics experiments at the European Organization for Nuclear Research: ATLAS, CMS and ALICE, where hundreds of square meters of both trigger and timing RPCs have been deployed.
These devices operate with complex gas systems, equipped with re-circulation and purification units, which require a fresh gas supply of the order of 6 cm$^{3}$/min/m$^{2}$, creating logistical, technical and financial problems.\n\n In this communication, we present a new concept in the construction of RPCs which allowed us to operate a detector at ultra-low gas flow regime. With this new approach, the glass stack is encapsulated in a tight plastic box made of polypropylene, which presents excellent water vapor blocking properties as well as a good protection against atmospheric gases.\naddress:\n- 'LIP, Laborat\u00f3rio de Instrumenta\u00e7\u00e3o e F\u00edsica Experimental de Part\u00edculas, 3004-516 Coimbra, Portugal'\n- 'Hidronav Technologies SL, 36202 Vigo, Pontevedra, Spain'\nauthor:\n- 'J. Saraiva'\n- 'C. Alemparte'\n- 'D. Belver'\n- 'A. Blanco'\n- 'J. Call\u00f3n'\n- 'J. Collazo'\n- 'A. Iglesias'\n- 'L. Lopes'\ntitle: 'Advances Towards a Large-Area, Ultra-Low-Gas-Consumption RPC Detector'\n---\n\nGaseous detectors ,Resistive-plate chambers ,Particle tracking detectors, Gas systems\n\nIntroduction\n============\n\nLarge Resistive Plate" +"---\nabstract: 'Link prediction aims to predict links of a network that are not directly visible, with profound applications in biological and social systems. Despite intensive utilization of the topological feature in this task, it is unclear to what extent a particular feature can be leveraged to infer missing links. Here, we show that the maximum capability of a topological feature follows a simple mathematical expression, which is independent of how an index gauges the feature. Hence, a family of indexes associated with one topological feature shares the same performance limit. A feature\u2019s capability is lifted in the supervised prediction, which in general gives rise to better results compared with unsupervised prediction. 
The universality of the uncovered pattern is empirically verified on 550 structurally diverse networks; the results can be applied to feature selection and to the analysis of network characteristics associated with a topological feature in link prediction.'\nauthor:\n- Yijun Ran\n- 'Xiao-Ke Xu'\n- Tao Jia\nbibliography:\n- 'mybibfile.bib'\ntitle: The maximum capability of a topological feature in link prediction\n---\n\nComplex systems can be described by networks, in which nodes are the components of the system and links are the interactions between the components [@barabasi2016network; @newman2018networks]. Link prediction
The best strategy to accurately populate the t$_{\\rm rise}$\u2013 $\\Delta m_{\\rm 40-30}$ space will be to use an expanded sample of high" +"---\nabstract: 'We study the breathing (monopole) oscillations and their damping in a harmonically trapped one-dimensional (1D) Bose gas in the quasicondensate regime using a finite-temperature classical field approach. By characterising the oscillations via the dynamics of the density profile\u2019s rms width over long time, we find that the rms width displays beating of two distinct frequencies. This means that 1D Bose gas oscillates not at a single breathing mode frequency, as found in previous studies, but as a superposition of two distinct breathing modes, one oscillating at frequency close to $\\simeq\\!\\sqrt{3}\\omega$ and the other at $\\simeq\\!2\\omega$, where $\\omega$ is the trap frequency. The breathing mode at $\\sim\\!\\sqrt{3}\\omega$ dominates the beating at lower temperatures, deep in the quasicondensate regime, and can be attributed to the oscillations of the bulk of the density distribution comprised of particles populating low-energy, highly-occupied states. The breathing mode at $\\simeq\\!2\\omega$, on the other hand, dominates the beating at higher temperatures, close to the nearly ideal, degenerate Bose gas regime, and is attributed to the oscillations of the tails of the density distribution comprised of thermal particles in higher energy states. The two breathing modes have distinct damping rates, with the damping rate of the bulk" +"---\nauthor:\n- \n- \nbibliography:\n- 'library.bib'\ntitle: 'Efficient parameter estimation for parabolic SPDEs based on a log-linear model for realized volatilities'\n---\n\nIntroduction {#sec1}\n============\n\nDynamic models based on stochastic partial differential equations (SPDEs) are recently of great interest, in particular their calibration based on statistics, see, for instance, [@hambly], [@fuglstad], [@randolf2021] and [@randolf2022]. 
@trabs, @cialenco and @chong have, independently of one another, studied parameter estimation for parabolic SPDEs based on power variation statistics of time increments when a solution of the SPDE is observed discretely in time and space. @trabs pointed out the relation of their estimators to realized volatilities, which are well known as key statistics for financial high-frequency data in econometrics. We develop estimators based on these realized volatilities that significantly improve upon the M-estimation from [@trabs]. Our new estimators attain smaller asymptotic variances; they are explicit functions of realized volatilities, and we can readily provide asymptotic confidence intervals. Since generalized estimation approaches for small noise asymptotics in @kaino2021, rate-optimal estimation for more general observation schemes in @hildebrandt, long-span asymptotics in @kaino, and estimation with two spatial dimensions in @kainopre have been built upon the M-estimator from [@trabs], we expect that our new method is of interest" +"---\nabstract: '\\[abstract\\] In recent years, multi-core processor designs have found their way into many computing devices. To exploit the capabilities of such devices in the best possible way, signal processing algorithms have to be adapted to operate in parallel tasks. In this contribution, an optimized processing order is proposed for Frequency Selective Extrapolation, a powerful signal extrapolation algorithm. Using this optimized order, the extrapolation can be carried out in parallel. The algorithm scales very well, resulting in an acceleration by a factor of up to 7.7 on an eight-core computer. Additionally, the optimized processing order aims at reducing the propagation of extrapolation errors over consecutive losses.
Thus, in addition to the acceleration, a visually noticeable improvement in quality of up to 0.5 dB PSNR can be achieved.'\naddress: |\n Chair of Multimedia Communications and Signal Processing,\\\n University of Erlangen-Nuremberg, Cauerstr. 7, 91058 Erlangen, Germany\\\n {seiler, kaup}@LNT.de\ntitle: Optimized and Parallelized Processing Order for Improved Frequency Selective Signal Extrapolation\n---\n\nIntroduction {#sec:introduction}\n============\n\nSignal extrapolation is a very important task in image and video signal processing. In this process, a signal is extended from regions where the signal is known into regions where no information about the" +"---\nabstract: 'Radar must adapt to changing environments, and we propose changepoint detection as a method to do so. In the world of increasingly congested radio frequencies, radars must adapt to avoid interference. Many radar systems employ the *prediction action cycle* to proactively determine transmission mode while spectrum sharing. This method constructs and implements a model of the environment to predict unused frequencies, and then transmits in this predicted availability. For these selection strategies, performance is directly reliant on the quality of the underlying environmental models. In order to keep up with a changing environment, these models can employ *changepoint detection*. Changepoint detection is the identification of sudden changes, or changepoints, in the distribution from which data is drawn. This information allows the models to discard \u201cgarbage\" data from a previous distribution, which has no relation to the current state of the environment. In this work, *bayesian online changepoint detection* (BOCD) is applied to the sense and predict algorithm to increase the accuracy of its models and improve its performance. In the context of spectrum sharing, these changepoints represent interferers leaving and entering the spectral environment. 
The addition of changepoint detection allows for dynamic and robust spectrum sharing even as" +"---\nabstract: |\n The stabilization of \u201crigid\u201d flute and ballooning modes $m = 1$ in an axisymmetric mirror trap with the help of an ideally conducting lateral both in the presence and in the absence of end MHD anchors is studied. The calculations were performed for an anisotropic plasma in a model that simulates the pressure distribution during the injection of beams of fast neutral atoms into the magnetic field minimum at a right angle to the trap axis. It was assumed that the lateral wall has the shape of a cylinder with a variable radius, so that on an enlarged scale it repeats the shape of the plasma column.\n\n It has been found that for the effective stabilization of the listed modes by an ideally conducting lateral wall, the parameter beta ($\\beta$, the ratio of the plasma pressure to the magnetic field pressure) must exceed some critical value $\\beta_{\\text{crit}}$. When combined with a conducting lateral wall and conducting end plates imitating MHD end stabilizers, there are two critical beta values and two stability zones $0<\\beta<\\beta_{\\text{ crit}1}$ and $\\beta_{\\text {crit}2}<\\beta<1$ that can merge, making the entire range of allowable beta values $0<\\beta<1$ stable.\n\n The dependence of the critical betas on" +"---\nabstract: 'Serverless computing, in particular the Function-as-a-Service (FaaS) execution model, has recently shown to be effective for running large-scale computations, including MapReduce, linear algebra and machine learning jobs, to mention a few. Despite this wide array of applications, little attention has been paid to highly-parallel applications with *unbalanced* and *irregular* workloads. Typically, these workloads have been kept out of the cloud due to the impossibility of anticipating their computing resources ahead of time, frequently leading to severe resource over- and underprovisioning situations. 
Our main insight in this article is, however, that the *elasticity* and *ease of management* of serverless computing technology can be a key enabler for effectively running these problematic workloads for the first time in the cloud. More concretely, we demonstrate that with a simple serverless executor pool abstraction, a data scientist can achieve a better cost-performance trade-off than a Spark cluster of static size built upon large EC2 virtual machines (VMs). To support this conclusion, we evaluate three irregular algorithms: the Unbalanced Tree Search (UTS), the Mandelbrot Set using the Mariani-Silver algorithm, and the Betweenness Centrality on a random graph. For instance, our serverless implementation of UTS is able to outperform Spark by up to $55\\%$" +"---\nabstract: |\n The epoch of reionization (EoR) offers a unique window into the dawn of galaxy formation, through which high-redshift galaxies can be studied by observations of both themselves and their impact on the intergalactic medium. Line intensity mapping (LIM) promises to explore cosmic reionization and its driving sources by measuring intensity fluctuations of emission lines tracing the cosmic gas in varying phases. Using LIMFAST, a novel semi-numerical tool designed to self-consistently simulate LIM signals of multiple EoR probes, we investigate how building blocks of galaxy formation and evolution theory, such as feedback-regulated star formation and chemical enrichment, might be studied with multi-tracer LIM during the EoR. On galaxy scales, we show that the star formation law and the feedback associated with star formation can be indicated by both the shape and redshift evolution of LIM power spectra. For a baseline model of metal production that traces star formation, we find that lines highly sensitive to metallicity are generally better probes of galaxy formation models. 
On larger scales, we demonstrate that inferring ionized bubble sizes from cross-correlations between tracers of ionized and neutral gas requires a detailed understanding of the astrophysics that shape the line luminosity\u2013halo mass relation. Despite" +"---\nabstract: 'The evolution and lifetime of protoplanetary disks (PPDs) play a central role in the formation and architecture of planetary systems. Astronomical observations suggest that PPDs go through a two-timescale evolution, accreting onto the star over a few to several million years (Ma) followed by gas-dissipation within $\\lesssim$1 Ma. This timeline is consistent with gas dissipation by photoevaporation and/or magnetohydrodynamic winds. Because solar nebula magnetic fields are sustained by the gas of the protoplanetary disk, we can use paleomagnetic measurements to infer the lifetime of the disk. Here we use paleomagnetic measurements of meteorites to investigate whether the disk that formed our solar system had a two-timescale evolution. We report on paleomagnetic measurements of bulk subsamples of two CO carbonaceous chondrites: Allan Hills A77307 and Dominion Range 08006. If magnetite in these meteorites could acquire a crystallization remanent magnetization that recorded the ambient field during aqueous alteration, our measurements suggest that the local magnetic field strength at the CO parent-body location was <0.9 \u00b5T at some time between 2.7-5.1 million years (Myr) after the formation of calcium-aluminum-rich inclusions. Coupled with previous paleomagnetic studies, we conclude that dissipation of the solar nebula in the 3-7 AU region occurred <1.5 Myr" +"---\nabstract: 'Localization and navigation are basic robotic tasks that require an accurate and up-to-date map, and crowdsourced data pose an appealing solution for detecting map changes.
Collecting and processing crowdsourced data requires low-cost sensors and algorithms, but existing methods rely on expensive sensors or computationally expensive algorithms. Additionally, there is no existing dataset to evaluate point cloud change detection. Thus, this paper proposes a novel framework using low-cost sensors such as stereo cameras and an IMU to detect changes in a point cloud map. Moreover, we create a dataset and the corresponding metrics to evaluate point cloud change detection with the help of the high-fidelity simulator Unreal Engine 4. Experiments show that our visual-based framework can effectively detect the changes in our dataset.'\nauthor:\n- 'Zihan Lin$^{1}$, Jincheng Yu$^{1}$, Lipu Zhou$^{2}$, Xudong Zhang$^{1}$, Jian Wang$^{1}$, Yu Wang$^{1}$[^1][^2] [^3] [^4] [^5] [^6] [^7]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'ref.bib'\ntitle: |\n **Point Cloud Change Detection With Stereo V-SLAM:\\\n Dataset, Metrics and Baseline** \n---\n\n[Lin : Point Cloud Change Detection With Stereo V-SLAM: Dataset, Metrics and Baseline]{}\n\nat (current page.south) ;\n\nVisual-Inertial SLAM, Mapping, Data Sets for SLAM\n\nINTRODUCTION\n============\n\nUnmanned ground and aerial vehicles have become popular over the last
SImProv is robust to benign image transformations that commonly occur during online redistribution, such as artifacts due to noise and recompression degradation, as well as out-of-place transformations due to image padding, warping, and changes in size and shape. Robustness towards out-of-place transformations is achieved via the end-to-end training of a differentiable warping module within the comparator architecture. We demonstrate effective retrieval and manipulation detection over a dataset of 100 million images.'\naddress:\n- 'CVSSP, University of Surrey, UK'\n- Adobe Research\nauthor:\n- Alexander\n- Tu\n- Simon\n- Zhifei\n- Viswanathan\n- John\nbibliography:\n- 'refs.bib'\ntitle: 'SImProv: scalable image provenance framework for robust content attribution'\n---\n\nIntroduction\n============\n\nImages are a great way to share stories" +"---\nabstract: 'We employ a machine learning-enabled approach to quantum state engineering based on evolutionary algorithms. In particular, we focus on superconducting platforms and consider a network of qubits \u2013 encoded in the states of artificial atoms with no direct coupling \u2013 interacting via a common single-mode driven microwave resonator. The qubit-resonator couplings are assumed to be in the resonant regime and tunable in time. A genetic algorithm is used in order to find the functional time-dependence of the couplings that optimise the fidelity between the evolved state and a variety of targets, including three-qubit GHZ and Dicke states and four-qubit graph states. We observe high quantum fidelities (above 0.96 in the worst case setting of a system of effective dimension 96)[, fast preparation times,]{} and resilience to noise, despite the algorithm being trained in the ideal noise-free setting. 
These results show that the genetic algorithms represent an effective approach to control quantum systems of large dimensions.'\nauthor:\n- Jonathon Brown\n- Mauro Paternostro\n- Alessandro Ferraro\ntitle: 'Optimal quantum control via genetic algorithms for quantum state engineering in driven-resonator mediated networks '\n---\n\nIntroduction\n============\n\nQuantum state engineering is an essential enabling step for a variety of quantum information" +"---\nabstract: 'On Mars, seismic measurements of the InSight lander have for the first time confirmed tectonic activity in an extraterrestrial geological system: the large graben system Cerberus Fossae. In-depth analysis of available marsquakes thus allows unprecedented characterization of an active extensional structure on Mars. We show that both major families of marsquakes, characterized by low and high frequency content, LF and HF events respectively, can be located on central and eastern parts of the graben system. This is in agreement with the decrease in structural maturity towards the East as inferred from orbital images. LF marsquake hypocenters are located at about 15-50 km and the spectral character suggests a weak, potentially warm source region consistent with recent volcanic activity at those depths. The HF marsquakes occur in the brittle shallow part and might originate in the fault planes associated with the graben flanks. One particularly intriguing cluster of seismic activity is located close to a recently identified active young volcano near Zunil crater. The inferred mechanically weak source region supports the existence of an active plume below Elysium Mons, from which dikes are extending into Elysium Planitia. We find no trace of seismic activity on compressional thrust faults on" +"---\nabstract: 'We study the ranking problem in generalized linear bandits. At each time, the learning agent selects an ordered list of items and observes stochastic outcomes. 
In recommendation systems, displaying an ordered list of the most attractive items is not always optimal as both position and item dependencies result in a complex reward function. A very naive example is the lack of diversity when all the most attractive items are from the same category. We model the position and item dependencies in the ordered list and design UCB and Thompson Sampling type algorithms for this problem. Our work generalizes existing studies in several directions, including position dependencies where position discount is a particular case, and connecting the ranking problem to graph theory.'\nauthor:\n- |\n Amitis Shidani\\\n George Deligiannidis, Arnaud Doucet\n- Author Name\n- 'First Author Name^1,2^, Second Author Name^2^, Third Author Name^1^'\nbibliography:\n- 'aaai24.bib'\ntitle:\n- Ranking In Generalized Linear Bandits\n- 'My Publication Title \u2014 Single Author'\n- 'My Publication Title \u2014 Multiple Authors'\n---\n\nIntroduction\n============\n\nThe *multi-armed bandit* (MAB) problem is a sequential decision-making problem in which there are $K$ possible choices called arms, each with an unknown reward distribution. At each time" +"---\nabstract: 'In this paper, we revisit the classical linear turning point problem for the second order differential equation $\\epsilon^2 x'''' +\\mu(t)x=0$ with $\\mu(0)=0,\\,\\mu''(0)\\ne 0$ for $0<\\epsilon\\ll 1$. Written as a first order system, $t=0$ therefore corresponds to a turning point connecting hyperbolic and elliptic regimes. Our main result is that we provide an alternative approach to WBK that is based upon dynamical systems theory, including GSPT and blowup, and we bridge \u2013 perhaps for the first time \u2013 hyperbolic and elliptic theories of slow-fast systems. As an advantage, we only require finite smoothness of $\\mu$. The approach we develop will be useful in other singular perturbation problems with hyperbolic\u2013to\u2013elliptic turning points.'\nauthor:\n- 'K. 
Uldall Kristiansen and P. Szmolyan'\nbibliography:\n- 'refs.bib'\ntitle: 'A dynamical systems approach to WKB-methods: The simple turning point'\n---\n\nKeywords: turning point, WKB-method, geometric singular perturbation theory, slow manifold, blow-up method, normal forms\n\nIntroduction\n============\n\nIn this paper, we reconsider the classical linear turning point problem for the second order differential equation $${\\label{eq:secondordereq}}\n\\epsilon^2 x''+ \\mu(t)x = 0$$ for a function $x(t)$, $t \\in I \\subset {\\mathbb R}$ and $0<\\epsilon \\ll 1$. On intervals where $\\mu >0$, solutions are highly oscillatory whereas" +"---\nabstract: 'Cometary activity may be driven by ices with very low sublimation temperatures, such as carbon monoxide ice, which can sublimate at distances well beyond 20\u00a0au. This point is emphasized by the discovery of Oort cloud comet C/2014 UN$_{271}$ (Bernardinelli-Bernstein), and its observed activity out to $\\sim$26\u00a0au. Through observations of this comet\u2019s optical brightness and behavior, we can potentially discern the drivers of activity in the outer solar system. We present a study of the activity of comet [Bernardinelli-Bernstein]{} with broad-band optical photometry taken at 19\u201320\u00a0au from the Sun (2021 June to 2022 February) as part of the LCO Outbursting Objects Key (LOOK) Project. Our analysis shows that the comet\u2019s optical brightness during this period was initially dominated by cometary outbursts, stochastic events that ejected $\\sim10^7$ to $\\sim10^8$\u00a0kg of material on short ($<1$\u00a0day) timescales. We present evidence for three such outbursts occurring in 2021 June and September. The nominal nuclear volumes excavated by these events are similar to the 10\u2013100\u00a0m pit-shaped voids on the surfaces of short-period comet nuclei, as imaged by spacecraft. 
Two out of three Oort cloud comets observed at large pre-perihelion distances exhibit outburst behavior near 20\u00a0au, suggesting such" +"---\nabstract: 'In this study, we investigate $f(\\R,\\T)$ gravity in a [*non-standard*]{} theory known as [*[**K-**]{}essence*]{} theory, to explore the effect of the dark energy in the cosmological scenarios, where $\\bar{R}$ is the Ricci scalar and $\\bar{T}$ is the trace of the energy-momentum tensor of the [**K-**]{}essence geometry. We have used the Dirac-Born-Infeld (DBI) non-standard Lagrangian to produce the [**K-**]{}essence emergent gravity metric $(\\bar{G}_{\\mu\\nu})$, which is non-conformally equivalent to the gravitational metric $(g_{\\mu\\nu})$. It has been shown that under a flat FLRW background gravitational metric, the modified field equations and Friedmann equations of the $f(\\R,\\T)$ gravity are distinct from the usual ones. To obtain the equation of state (EoS) parameter $\\omega$, we have solved the Friedmann equations by considering $f(\\bar{R},\\bar{T})\\equiv f(\\bar{R})+\\lambda \\bar{T}$, where $\\lambda$ is a model parameter. For several forms of $f(\\bar{R})$, we have identified a connection between $\\omega$ and time by considering\u00a0the kinetic energy of the [**K-**]{}essence scalar field ($\\dot{\\phi}^{2}$) as the dark energy density that changes with time. Interestingly, this finding meets the requirement of the constraint on $\\dot{\\phi}^{2}$. We show through graphs of the EoS parameter with time that our model satisfies SNIa+BAO+H(z) observations for a certain time range.'\nauthor:\n- Arijit Panda\n- Surajit" +"---\nabstract: 'Based on five different ensembles of newly-generated (2+1)-flavor configurations with the pion mass around $m_{\\pi}{\\simeq} (140-310)$ MeV, we present a lattice analysis of hidden-charm and hidden strange hexaquarks with the quark content $usc\\bar{d}\\bar{s}\\bar{c}$. 
The correlation matrices of two types of operators with $J^{PC}=0^{++}, 0^{-+}, 1^{++}$ and $1^{--}$ are simulated to extract the masses of hexaquark candidates, which are then extrapolated to the physical pion mass and the continuum limit. Results indicate that the masses of the ground states are below the $\\Xi_c \\bar \\Xi_c$ threshold and provide a characteristic signal for the experimental discovery of hexaquark candidates. This may enrich the versatile structures of multiquarks and is an indispensable step to decipher the nonperturbative nature of fundamental interactions of quarks and gluons.'\nauthor:\n- Hang Liu\n- Jinchen He\n- Liuming Liu\n- Peng Sun\n- Wei Wang\n- 'Yi-Bo Yang'\n- 'Qi-An Zhang'\ntitle: 'Exploring Hidden-charm and hidden-strange Hexaquark states from Lattice QCD'\n---\n\n[*Introduction:*]{} The spectrum of hadron excitations discovered at experimental facilities around the world manifests the fundamental interactions of elementary quarks and gluons, governed by the quantum gauge field theory of QCD. Understanding the complex emergent phenomena of this field theory has captivated the attention" +"---\nabstract: 'Large-scale speech self-supervised learning (SSL) has emerged as a main field of speech processing; however, the computational cost arising from its vast size creates a high entry barrier for academia. In addition, existing distillation techniques for speech SSL models compress the model by reducing layers, which induces performance degradation in linguistic pattern recognition tasks such as phoneme recognition (PR). In this paper, we propose FitHuBERT, which is thinner in dimension throughout almost all model components and deeper in layers compared to prior speech SSL distillation works. Moreover, we employ a time-reduction layer to speed up inference time and propose a method of hint-based distillation for less performance degradation. 
Our method reduces the model to 23.8% in size and 35.9% in inference time compared to HuBERT. Also, we achieve a 12.1% word error rate and a 13.3% phoneme error rate on the SUPERB benchmark, which is superior to prior work.'\nbibliography:\n- 'template.bib'\n---\n\n**Index Terms**: knowledge distillation, speech representation learning, self-supervised learning, model compression\n\nIntroduction\n============\n\nLarge-scale speech self-supervised learning (SSL) has emerged as an important field in speech processing recently due to its powerful performance and versatility. Large amounts of speech-only data can be utilized for pre-training," +"---\nabstract: 'In this paper we study the low rank matrix completion problem using tools from the Schur complement. We give a necessary and sufficient condition such that the completed matrix is globally unique with the given data. We assume the observed entries of the matrix follow a special \u201cstaircase\u201d structure. Under this assumption, the matrix completion problem is either globally unique or has infinitely many solutions (thus excluding local uniqueness). In fact, the uniqueness of the matrix completion problem depends entirely on the ranks of the submatrices at the corners of the \u201cstaircase\u201d. The proofs of the theorems make extensive use of the Schur complement.'\nauthor:\n- '[Fei Wang](https://www.researchgate.net/profile/Fei_Wang187)[^1]'\nbibliography:\n- 'references.bib'\ndate: 'June, 2022'\ntitle: Uniqueness of Low Rank Matrix Completion and Schur Complement \n---\n\n[**Keywords:**]{} Low-rank matrix completion, matrix recovery, chordal graph\n\n[**AMS subject classifications:**]{} 65J22, 90C22, 65K10, 52A41, 90C46\n\nIntroduction\n============\n\nGiven a matrix $Z$ with partially sampled data $z$, the matrix completion problem asks whether the missing entries from $z$ can be recovered. 
Clearly, there are infinitely many ways to fill the missing entries, so in order for the matrix completion problem to be meaningful, we require the rank of recovered matrix to be not greater than" +"---\nabstract: 'We propose a margin-based loss for tuning joint vision-language models so that their gradient-based explanations are consistent with region-level annotations provided by humans for relatively smaller grounding datasets. We refer to this objective as Attention Mask Consistency ([AMC]{}) and demonstrate that it produces superior visual grounding results than previous methods that rely on using vision-language models to score the outputs of object detectors. Particularly, a model trained with [AMC]{}on top of standard vision-language modeling objectives obtains a state-of-the-art accuracy of $86.49\\%$ in the Flickr30k visual grounding benchmark, an absolute improvement of $5.38\\%$ when compared to the best previous model trained under the same level of supervision. Our approach also performs exceedingly well on established benchmarks for referring expression comprehension where it obtains 80.34% accuracy in the easy test of RefCOCO+, and 64.55% in the difficult split. [AMC]{}is effective, easy to implement, and is general as it can be adopted by any vision-language model, and can use any type of region annotations.'\nauthor:\n- |\n Ziyan Yang\\\n Rice University\\\n [zy47@rice.edu]{}\n- |\n Kushal Kafle\\\n Adobe Research\\\n [kkafle@adobe.com]{}\n- |\n Franck Dernoncourt\\\n Adobe Research\\\n [dernonco@adobe.com]{}\n- |\n Vicente Ordonez\\\n Rice University\\\n [vicenteor@rice.edu]{}\nbibliography:\n- 'egbib.bib'\ntitle: 'Improving Visual Grounding by" +"---\nabstract: 'Given a fibration $f$ between two projective manifolds $X$ and $Y$, we discuss the effective generation of the higher direct images $R^{i}f_{\\ast}(K^{m}_{X})$, where $K^{m}_{X}$ is the $m$-th tensor power of the canonical bundle of $X$. 
In particular, we answer two questions posed by Popa\u2013Schnell in [@PS14].'\naddress:\n- 'Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, People\u2019s Republic of China'\n- 'School of Mathematics, Shanghai University of Finance and Economy, Shanghai 200433, People\u2019s Republic of China'\nauthor:\n- Jixiang Fu\n- Jingcao Wu\ntitle: On the global generation of higher direct images of pluricanonical bundles\n---\n\n[^1]\n\nIntroduction {#sec:introduction}\n============\n\nAssume that $f:X\\rightarrow Y$ is a fibration, i.e. a surjective morphism with connected fibres between two projective manifolds $X$ and $Y$. Denote by $K^{m}_{X}$ the $m$-th tensor power of the canonical bundle $K_X$ of $X$. The positivity of the associated higher direct image $R^{i}f_{\\ast}(K^{m}_{X})$ is of significant importance for understanding the geometry of this fibration. Fruitful results have been obtained on this subject, such as [@Ber08; @Ber09; @Hor10; @Kaw81; @Kaw82; @Ko86a; @Ko86b; @Ko87; @Vie82b; @Vie83].\n\nPopa and Schnell [@PS14] proved the following result inspired by the brilliant work of Viehweg [@Vie82b; @Vie83] and Koll\u00e1r [@Ko86a; @Ko86b]:\n\n\\[t11\\]" +"---\nabstract: 'In 1972, at a symposium celebrating the 70th birthday of Paul Dirac, John Wheeler proclaimed that \u201ethe framework falls down for everything that one has ever called a law of physics\u201c. Responsible for this \u201ebreakage \\[\u2026\\] among the laws of physics\u201c was the general theory of relativity, more specifically its prediction of massive stars gravitationally collapsing to \u201eblack holes\u201c, a term Wheeler himself had made popular some years earlier. 
In our paper, we investigate how Wheeler reached the conclusion that gravitational collapse calls into question the lawfulness of physics and how, subsequently, he tried to develop a new worldview, rethinking in his own way the lessons of quantum mechanics as well as drawing inspiration from other disciplines, not least biology.'\nauthor:\n- Alexander Blum\n- Stefano Furlan\nbibliography:\n- 'habil.bib'\ntitle: How John Wheeler lost his faith in the law\n---\n\nThe second half of the twentieth century saw the scientific discipline of physics at its prime. It also saw the beginnings of a perceived decline, with the life sciences being perceived more and more as the scientific avantgarde.[^1] One aspect of this development is the increasing doubt in the validity and relevance of reductionist, microscopic physical laws;" +"---\nauthor:\n- Richie Diurba\n- Rob Fine\n- Mandeep Gill\n- Harvey Newman\n- Kevin Pedro\n- Alexx Perloff\n- Breese Quinn\n- Louise Suter\n- Shawn Westerdale\nbibliography:\n- 'Bibliography/common.bib'\n- 'Bibliography/main.bib'\ntitle: |\n Snowmass \u201921 Community Engagement Frontier 6: Public Policy and Government Engagement\\\n Congressional Advocacy for Areas Beyond HEP Funding \n---\n\nIntroduction {#sec:intro}\n============\n\nThis document has been prepared as a Snowmass contributed paper by the Public Policy & Government Engagement topical group (CEF06) within the Community Engagement Frontier. The charge of CEF06 is to review all aspects of how the High Energy Physics (HEP) community engages with government at all levels and how public policy impacts members of the community and the community at large, and to assess and raise awareness within the community of direct community-driven engagement of the US federal government (*i.e.* advocacy). In the previous Snowmass process these topics were included in a broader \u201cCommunity Engagement and Outreach\u201d group whose work culminated in the recommendations outlined in Ref. 
[@snowmass13recs].\n\nThe focus of this paper is the potential for HEP community advocacy on topics other than funding for basic research. The HEP community has run a very active and successful advocacy effort for" +"---\nabstract: |\n A hyperbolic group $\\Gamma$ acts by homeomorphisms on its Gromov boundary ${\\partial \\Gamma}$. We use a dynamical coding of boundary points to show that such actions are [*topologically stable*]{} in the dynamical sense: any nearby action is semi-conjugate to (and an extension of) the standard boundary action.\n\n This result was previously known in the special case that ${\\partial \\Gamma}$ is a topological sphere. Our proof here is independent and gives additional information about the semiconjugacy in that case. Our techniques also give a new proof of global stability when ${\\partial \\Gamma}= S^1$.\naddress:\n- 'Department of Mathematics, Cornell University, Ithaca, NY 14853, USA '\n- 'Department of Mathematics, Cornell University, Ithaca, NY 14853, USA '\n- 'Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA '\nauthor:\n- Kathryn Mann\n- Jason Fox Manning\n- Theodore Weisman\ntitle: Stability of hyperbolic groups acting on their boundaries\n---\n\nIntroduction\n============\n\nA discrete, hyperbolic group $\\Gamma$, viewed as a (coarse) metric space, admits a natural compactification by its [*Gromov boundary*]{}, denoted ${\\partial \\Gamma}$. This boundary is a compact metrizable space, and if $\\Gamma$ is not virtually cyclic, it has no isolated points. 
The action of $\\Gamma$ on" +"---\nbibliography:\n- 'references.bib'\n---\n\n[ **Parity effects and universal terms of $\\mathcal{O}(1)$ in the entanglement near a boundary** ]{}\n\nHenning Schl\u00f6mer^1,2$\\dagger$^, Chunyu Tan^3^, Stephan Haas^3^ and Hubert Saleur^3,4^\n\n[**1**]{} Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, M\u00fcnchen D-80333, Germany\\\n[**2**]{} Munich Center for Quantum Science and Technology (MCQST), D-80799 M\u00fcnchen, Germany\\\n[**3**]{} Department of Physics and Astronomy, University of Southern California, Los Angeles, California 90089-0484, USA\\\n[**4**]{} Institut de physique th\u00e9orique, CEA, CNRS, Universit\u00e9 Paris-Saclay\n\n$\\dagger$ H.Schloemer@physik.uni-muenchen.de\n\nAbstract {#abstract .unnumbered}\n========\n\n[**In the presence of boundaries, the entanglement entropy in lattice models is known to exhibit oscillations with the (parity of the) length of the subsystem, which however decay to zero with increasing distance from the edge. We point out in this article that, when the subsystem starts at the boundary and ends at an impurity, oscillations of the entanglement (as well as of charge fluctuations) appear which do not decay with distance, and which exhibit universal features. We study these oscillations in detail for the case of the XX chain with one modified link (a conformal defect) or two successive modified links (a relevant defect), both numerically and analytically. We then generalize" +"---\nauthor:\n- 'Veronica Panizza[!!]{}'\n- Ricardo Costa de Almeida\n- Philipp Hauke\nbibliography:\n- 'main.bib'\ntitle: Entanglement Witnessing for Lattice Gauge Theories\n---\n\n[abstract[ Entanglement is assuming a central role in modern quantum many-body physics. 
Yet, for lattice gauge theories its certification remains extremely challenging. A key difficulty stems from the local gauge constraints underlying the gauge theory, which separate the full Hilbert space into a direct sum of subspaces characterized by different superselection rules. In this work, we develop the theoretical framework of entanglement witnessing for lattice gauge theories that takes this subtlety into account. We illustrate the concept at the example of a $\\mathrm{U}(1)$ lattice gauge theory in 2+1 dimensions, without and with dynamical fermionic matter. As this framework circumvents costly state tomography, it opens the door to resource-efficient certification of entanglement in theoretical studies as well as in laboratory quantum simulations of gauge theories. ]{}]{}\n\nIntroduction {#sec:intro}\n============\n\nIn addition to its large conceptual importance for the foundations of quantum theory, entanglement also constitutes a key resource for quantum technologies [@friis_vitagliano_malik_huber_2018; @pezze_smerzi_oberthaler_schmied_treutlein_2018; @acn_bloch_buhrman_calarco_eichler_eisert_esteve_gisin_glaser_jelezko; @azzini_mazzucchi_moretti_pastorello_pavesi_2020] and plays an important" +"\\\n^a^Laboratoire de Physique des Rayonnements et de leurs Int\u00e9ractions avec la Mati\u00e8re\\\nD\u00e9partement de Physique, Facult\u00e9 des Sciences de la Mati\u00e8re\\\nUniversit\u00e9 de Batna 1, Batna 05000, Algeria\\\n^b^ D\u00e9partement de Physique , Facult\u00e9 des Sciences de la Mati\u00e8re\\\nUniversit\u00e9 de Batna 1, Batna 05000, Algeria\\\n\nE-mail: zaim69slimane@yahoo.com (Corresponding author.\\*), h.rezki94@gmail.com\\\n\n> In this work, we calculate the particle creation density in a cosmological anisotropic Bianchi I universe, by solving the Klein-Gordon and Dirac equations in the presence of a time-dependent electric field using the semi-classical method. 
We show that the particle distribution becomes thermal under the influence of the electric field when the electric interaction is proportional to the Ricci scalar of curved space-time.\n\n**Keywords:** Cosmological space-time, Bogoliubov transformation, Particle production.\n\n**PACS numbers**: 11.10.Nx, 03.65.Pm, 03.70.+k, 25.75.Dw\n\nIntroduction\n============\n\nIn the classical theory, black holes can only absorb and not emit particles. However, it was shown that quantum mechanical effects cause black holes to create and emit particles. It is also well known that the most significant prediction of this theory is the phenomenon of particle creation, which leads to the concept of quantum gravity. The particle creation phenomenon at the cost of the gravitational field in the early universe" +"---\nabstract: |\n Motivated by the work of Bestvina-Feighn ([@BF]) and Mj-Sardar ([@pranab-mahan]), we define trees of metric bundles subsuming both the trees of metric spaces and the metric bundles. Then we prove a combination theorem for these spaces. More precisely, we prove that a tree of metric bundles is hyperbolic if the following hold (see Theorem \\[main-theorem-com\\]). $(1)$ The fibers are uniformly hyperbolic metric spaces and the base is also a hyperbolic metric space, $(2)$ barycenter maps for the fibers are uniformly coarsely surjective, $(3)$ the edge spaces are uniformly qi embedded in the corresponding fibers and $(4)$ Bestvina-Feighn\u2019s flaring condition is satisfied.\n\n As an application, we provide a combination theorem for certain complexes of groups over a finite simplicial complex (see Theorem \\[application-2\\]).\naddress: 'Indian Institute of Science Education and Research (IISER) Mohali, Knowledge City, Sector 81, S.A.S. 
Nagar 140306, Punjab, India'\nauthor:\n- Rakesh Halder\nbibliography:\n- 'TMB.bib'\ntitle: A Combination Theorem for Trees of Metric Bundles\n---\n\n[^1]\n\nIntroduction\n============\n\nBestvina-Feighn ([@BF]) proved that the fundamental group of a finite graph of hyperbolic groups with the qi embedded condition and annuli flare condition is hyperbolic (see [@BF-Adn Theorem $1.2$]). Motivated by this work of Bestvina and Feighn, M. Kapovich" +"---\nabstract: 'On January 15, 2022, at 04:14:45 (UTC), the Hunga Tonga-Funga Ha\u2019apai, a submarine volcano in the Tongan archipelago in the southern Pacific Ocean, erupted and generated global seismic, shock, and electromagnetic waves, which also reached Japan, situated more than 8,000 km away. KAGRA is a gravitational wave telescope located in an underground facility in Kamioka, Japan. It has a wide variety of auxiliary sensors to monitor environmental disturbances which obstruct observation of gravitational waves. The effects of the volcanic eruption were observed by these environmental sensors both inside and outside of the underground facility. 
In particular, the shock waves made it possible to evaluate the transfer functions from the air pressure wave in the atmosphere to the underground environmental disturbances (air pressure and seismic motion).'\nauthor:\n- Tatsuki Washimi\n- Takaaki Yokozawa\n- Akiteru Takamori\n- Akito Araya\n- Sota Hoshino\n- Yousuke Itoh\n- Yuichiro Kobayashi\n- 'Jun\u2019ya Kume'\n- Kouseki Miyo\n- Masashi Ohkawa\n- Shoichi Oshino\n- Takayuki Tomaru\n- 'Jun\u2019ichi Yokoyama'\n- Hirotaka Yuzurihara\ntitle: 'Response of the underground environment of the KAGRA observatory against the air pressure disturbance from the Tonga volcano eruption on January 15th, 2022'\n---\n\nThe KAGRA gravitational wave observatory" +"---\nabstract: 'The boundary structure of $3+1$-dimensional gravity (in the Palatini\u2013Cartan formalism) coupled to to gauge (Yang\u2013Mills) and matter (scalar and spinorial) fields is described through the use of the Kijowski\u2013Tulczijew construction. In particular, the reduced phase space is obtained as the reduction of a symplectic space by some first class constraints and a cohomological description (BFV) of it is presented.'\nauthor:\n- Giovanni Canepa\n- 'Alberto S. Cattaneo'\n- 'Filippo Fila-Robattino'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Boundary structure of gauge and matter fields coupled to gravity[^1]'\n---\n\nIntroduction\n============\n\nIn this paper we will study the boundary structure of general relativity (in 3+1 dimensions in the Palatini\u2013Cartan formalism) coupled to different types of fields, such as a scalar field, a Yang\u2013Mills field, and a spinor field. Our goal is to describe the *reduced phase space* of the aforementioned theories coupled to gravity in two ways: (i) through a symplectic space and constraints on it and (ii) using a cohomological description, the BFV formalism.\n\nThe reduced phase space can be considered as the fundamental building block of the analysis of field theories on manifolds with boundary. 
If the boundary is a Cauchy surface, we can define it to be the space" +"---\nabstract: 'The anomaly in the lithium abundance is a well-known unresolved problem in nuclear astrophysics. Recent revisit to the problem tried the avenue of resonance enhancement to account for the primordial $^7$Li abundance in standard Big-Bang Nucleosynthesis (BBN). Prior measurements of the $^7$Be(d,p)$^8$Be\\* reaction could not account for the individual contributions of the different excited states involved, particularly at higher energies close to the Q-value of the reaction. We carried out an experiment at HIE-ISOLDE, CERN to study this reaction at E$_{cm}$ = 7.8 MeV, populating excitations up to 22 MeV in $^8$Be for the first time. The angular distributions of the several excited states have been measured and the contributions of the higher excited states in the total cross section at the relevant Big Bang energies were obtained by extrapolation to the Gamow window using the TALYS code. The results show that by including the contribution of the 16.63 MeV state, the maximum value of the total S-factor inside the Gamow window comes out to be 167 MeV b as compared to earlier estimate of 100 MeV b. However, this still does not account for the lithium discrepancy.'\nauthor:\n- 'Sk M. Ali$^1$'\n- 'D. Gupta$^1$'\n- 'K." +"---\nabstract: 'Equation-of-state (EOS) models underpin numerical simulations at the core of research in high energy density physics, inertial confinement fusion, laboratory astrophysics, and elsewhere. In these applications EOS models are needed that span ranges of thermodynamic variables that far exceed the ranges where data are available, making uncertainty quantification (UQ) of EOS models a significant concern. 
Model uncertainty, arising from the choice of functional form assumed for the EOS, is a major challenge to UQ studies for EOS; it is usually neglected in favor of parametric and data uncertainties, which are easier to capture without violating the physical constraints on EOSs. In this work we introduce a new statistical EOS construction that naturally captures model uncertainty while automatically obeying the thermodynamic consistency constraint. We apply the model to existing data for $B_4C$\u00a0to place an upper bound on the uncertainty in the EOS and Hugoniot, and show that neglecting thermodynamic constraints overestimates the uncertainty by factors of several when data are available and underestimates it when extrapolating to regions where they are not. We discuss extensions to this approach, and the role of GP-based models in accelerating simulation and experimental studies, defining portable uncertainty-aware EOS tables, and enabling" +"---\nabstract: 'We present a new analytical solution to the steady-state distribution of stars close to a central supermassive black hole of mass ${M_{\\bullet}}$ in the center of a galaxy. Assuming a continuous mass function of the form $dN/dm \\propto m^{\\gamma}$, stars with a specific orbital energy $x = G{M_{\\bullet}}/r - v^2/2$ are scattered primarily by stars of mass ${m_{\\rm d}}(x) \\propto x^{-5/(4\\gamma+10)}$ that dominate the scattering of both lighter and heavier species at that energy. Stars of mass ${m_{\\rm d}}(x)$ are exponentially rare at energies lower than $x$, and follow a density profile $n(x'') \\propto x''^{3/2}$ at energies $x'' > x$. Our solution predicts a negligible flow of stars through energy space for all mass species, similarly to the conclusions of [@BW_77], but in contrast to the assumptions of [@AH_09]. 
This is the first analytic solution which smoothly transitions between regimes where different stellar masses dominate the scattering.'\nauthor:\n- Itai Linial\n- 'Re\u2019em Sari'\nbibliography:\n- 'Distributions.bib'\ntitle: 'Stellar Distributions Around a Supermassive Black Hole: Strong segregation regime revisited'\n---\n\nIntroduction\n============\n\nHow are stars and stellar remnants distributed in the vicinity of a supermassive black hole (SMBH) in the center of a galaxy? [@Peebles_72] provided an early" +"---\nauthor:\n- Erik Burman\n- Peter Hansbo\n- 'Mats G. Larson'\ntitle: The augmented Lagrangian method as a framework for stabilised methods in computational mechanics\n---\n\nIntroduction {#sec:1}\n============\n\nThe Augmented Lagrangian Method (ALM) has a long history in optimisation. In its standard form it can be seen as augmenting standard Lagrange multiplier methods with a penalty term, penalising the constraint equations. It was introduced in order to combine the advantages of the penalty method and the multiplier method in the context of constrained optimisation independently by Hestenes and Powell in [@Hest69; @Pow69]. It was then extended to the case of optimization with inequality constraints by Rockafellar in [@Ro73; @Ro73b]. Soon afterwards the potential of ALM for the numerical approximation of partial differential equations (pde) and computational mechanics was explored by Glowinski and Marrocco [@GM75] and by Fortin in [@Fort77]. 
For overviews of the early results on augmented Lagrangian methods for approximation of pde we refer to the monographs by Glowinski and coworkers [@FG83; @GlLeTa89].\n\nIn computational mechanics, Lagrangian methods have the drawback of having to fulfil an [*inf\u2013sup*]{} condition to ensure stability of the discrete scheme such that the balance between the discretisation of the primal variable and" +"---\nabstract: 'Implicit radiance functions emerged as a powerful scene representation for reconstructing and rendering photo-realistic views of a 3D scene. These representations, however, suffer from poor editability. On the other hand, explicit representations such as polygonal meshes allow easy editing but are not as suitable for reconstructing accurate details in dynamic human heads, such as fine facial features, hair, teeth, and eyes. In this work, we present Neural Parameterization (NeP), a hybrid representation that provides the advantages of both implicit and explicit methods. NeP is capable of photo-realistic rendering while allowing fine-grained editing of the scene geometry and appearance. We first disentangle the geometry and appearance by parameterizing the 3D geometry into 2D texture space. We enable geometric editability by introducing an explicit linear deformation blending layer. The deformation is controlled by a set of sparse key points, which can be explicitly and intuitively displaced to edit the geometry. For appearance, we develop a hybrid 2D texture consisting of an explicit texture map for easy editing and implicit view and time-dependent residuals to model temporal and view variations. We compare our method to several reconstruction and editing baselines. 
The results show that the NeP achieves almost the same level" +"---\nabstract: 'With the increase of digital data and social network platforms, the impact of social media science in driving company decisions related to product/service features and customer care operations is becoming more crucial. In particular, platforms such as Twitter, where people can share experiences about almost everything, can drastically impact the reputation and offering of a company as well as of a place or tourism site. Text mining tools are researched and proposed in the literature in order to gain value and perform trend topic and sentiment analysis on Twitter. As data are the fuel for these models, the \u201cright\u201d data, i.e. the domain-related ones, make a difference in their accuracy. In this paper, we describe a pipeline of *DataOps / MLOps* operations performed over a tourism-related Twitter dataset in order to comprehend tourism motivation and interest. The gained knowledge can be exploited by the travel/hospitality industry in order to develop data-driven strategic services, and by travelers, who can consume relevant information about tourist destinations.'\nauthor:\n- |\n Davide Stirparo, Beatrice Penna, Mohammad Kazemi, Ariona Shashaj\\\n Network Contacts\\\n Rome, Italy\\\n `{davide.stirparo, beatrice.penna, mohammad.kazemi, ariona.shashaj}@networkcontacts.it`\\\nbibliography:\n- 'references.bib'\ntitle: 'Mining Tourism Experience on Twitter: A case study '\n---\n\nIntroduction" +"---\nabstract: 'It is a well-known result due to Bollobas that the maximal Cheeger constant of large $d$-regular graphs cannot be close to the Cheeger constant of the $d$-regular tree. We prove analogously that the Cheeger constant of closed hyperbolic surfaces of large genus is bounded from above by $2/\\pi \\approx 0.63...$ which is strictly less than the Cheeger constant of the hyperbolic plane. 
The proof uses a random construction based on a Poisson\u2013Voronoi tessellation of the surface with a vanishing intensity.'\nauthor:\n- 'Thomas Budzinski[^1]&Nicolas Curien[^2] &Bram Petri[^3]'\nbibliography:\n- 'bib\\_hyperbolic.bib'\ntitle: 'On Cheeger constants of hyperbolic surfaces'\n---\n\n![From left to right: Poisson\u2013Voronoi tessellations of the hyperbolic plane with decreasing intensity. Their limit (on the right) is the *pointless Voronoi tessellation* of the hyperbolic plane whose cells have been colored in black/white uniformly at random. This object has an average \u201clinear\u201d density equal to $2 \\times \\frac{2}{\\pi}$ per unit of area. \\[fig:voronoiH\\]](poisson1 \"fig:\"){height=\"4.5cm\"} ![From left to right: Poisson\u2013Voronoi tessellations of the hyperbolic plane with decreasing intensity. Their limit (on the right) is the *pointless Voronoi tessellation* of the hyperbolic plane whose cells have been colored in black/white uniformly at random. This object has an" +"---\nabstract: 'An improved Amati correlation was constructed in (ApJ 931 (2022) 50) by us recently. In this paper, we further study constraints on the $\\Lambda$CDM and $w$CDM models from the gamma ray bursts (GRBs) standardized with the standard and improved Amati correlations, respectively. By using the Pantheon type Ia supernova sample to calibrate the latest A220 GRB data set, the GRB Hubble diagram is obtained model-independently. We find that at the high redshift region ($z>1.4$) the GRB distance modulus from the improved Amati correlation is apparently larger than that from the standard Amati one. 
The GRB data from the standard Amati correlation only give a lower limit on the present matter density parameter $\\Omega_{\\mathrm{m0}}$, while the GRBs from the improved Amati correlation constrain the $\\Omega_{\\mathrm{m0}}$ at the $68\\%$ confidence level to be $0.308^{+0.066}_{-0.230}$ and $0.307^{+0.057}_{-0.290}$ in the $\\Lambda$CDM and $w$CDM models, respectively, which are in very good agreement with those given by other current popular observational data including BAO, CMB and so on. Once the $H(z)$ data are added to our analysis, the constraint on the Hubble constant $H_0$ can be achieved. We find that two different correlations provide slightly different $H_0$ results but the marginalized mean values seem" +"---\nabstract: 'Methods for learning from demonstration (LfD) have shown success in acquiring behavior policies by imitating a user. However, even for a single task, LfD may require numerous demonstrations. For versatile agents that must learn many tasks via demonstration, this process would substantially burden the user if each task were learned in isolation. To address this challenge, we introduce the novel problem of [*lifelong learning from demonstration*]{}, which allows the agent to continually build upon knowledge learned from previously demonstrated tasks to accelerate the learning of new tasks, reducing the number of demonstrations required. 
As one solution to this problem, we propose the first lifelong learning approach to inverse reinforcement learning, which learns consecutive tasks via demonstration, continually transferring knowledge between tasks to improve performance.'\nauthor:\n- |\n Jorge A.\u00a0Mendez[,]{} Shashank Shivkumar[, and]{} Eric Eaton\\\n Department of Computer and Information Science\\\n University of Pennsylvania\\\n `{mendezme,shashs,eeaton}@seas.upenn.edu`\nbibliography:\n- 'LifelongIRL.bib'\ntitle: Lifelong Inverse Reinforcement Learning\n---\n\nIntroduction {#intro}\n============\n\nIn many applications, such as personal robotics or intelligent virtual assistants, a user may want to teach an agent to perform some sequential decision-making task. Often, the user may be able to demonstrate the appropriate behavior, allowing the agent to learn" +"---\nauthor:\n- |\n *Massimiliano Kaucic*, $\\;$*Filippo Piccotto*, $\\;$*Gabriele Sbaiz*, $\\;$*Giorgio Valentinuz*\\\n \\\n \\\n \\\n \\\ntitle: '**[A hybrid level-based learning swarm algorithm with mutation operator for solving large-scale cardinality-constrained portfolio optimization problems ]{}**'\n---\n\n#### Keywords: {#keywords .unnumbered}\n\nlevel-based learning swarm optimizer; projection operator; mutation; exact penalty function; large-scale portfolio optimization\n\nIntroduction\n============\n\nIn modern portfolio theory, the classical mean-variance portfolio selection problem developed by Markowitz [@MW] plays a crucial role. Following this approach, investors should consider return and risk together, distributing the capital among alternative securities based on their return-risk trade-off. Since the pioneering work by Markowitz, the mean-variance optimization model (MVO) has been recognized as a practical tool to tackle portfolio optimization problems, and a large number of developments of the basic model have been investigated (see, for instance, [@GU] and [@KOL]). 
Several authors have studied the multi-objective formulation of the MVO problem, in which the expected portfolio return is maximized and, at the same time, its variance is minimized ([@CE], [@CO] and [@KAU]). These contributions aim to provide algorithms able to generate accurate dotted representations of all the sets of non-dominated portfolios in a few iterations. In contrast, in other studies, a single-objective formulation has been" +"---\nabstract: 'This paper presents, for the special case of once-punctured torus bundles, a natural method to study the character varieties of hyperbolic 3\u2013manifolds that are bundles over the circle. The main strategy is to restrict characters to the fibre of the bundle, and to analyse the resulting branched covering map. This allows us to extend results of Steven Boyer, Erhard Luft and Xingru Zhang. Both $\\operatorname{SL}(2, \\mathbb{C})$\u2013character varieties and $\\operatorname{PSL}(2, \\mathbb{C})$\u2013character varieties are considered. 
As an explicit application of these methods, we build on work of Baker and Petersen to show that there is an infinite family of hyperbolic once-punctured bundles with canonical curves of $\\operatorname{PSL}(2, \\mathbb{C})$\u2013characters of unbounded genus.'\naddress:\n- |\n Stephan Tillmann\\\n School of Mathematics and Statistics F07, The University of Sydney, NSW 2006 Australia\\\n [stephan.tillmann@sydney.edu.au\\\n \u2014\u2013]{}\n- |\n Youheng Yao\\\n School of Mathematics and Statistics F07, The University of Sydney, NSW 2006 Australia\\\n [yyao3610@uni.sydney.edu.au]{}\nauthor:\n- Stephan Tillmann and Youheng Yao\nbibliography:\n- 'references.bib'\ntitle: 'On the topology of character varieties of once-punctured torus bundles'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe first part of this paper extends results of Boyer, Luft and Zhang\u00a0[@Boyer-algebraic-2002] concerning the $\\operatorname{SL}(2, \\mathbb{C})$\u2013character variety $X(M_\\varphi)$ of $M_\\varphi$, a hyperbolic once-punctured torus" +"---\nabstract: 'This is an exciting era for exo-planetary exploration. The recently launched JWST, and other upcoming space missions such as Ariel, Twinkle and ELTs are set to bring fresh insights to the convoluted processes of planetary formation and evolution and their connections to atmospheric compositions. However, with new opportunities come new challenges. The field of exoplanet atmospheres is already struggling with the incoming volume and quality of data, and machine learning (ML) techniques lend themselves as a promising alternative. Developing techniques of this kind is an inter-disciplinary task, one that requires domain knowledge of the field, access to relevant tools and expert insights on the capability and limitations of current ML models. These stringent requirements have so far limited the development of ML in the field to a few isolated initiatives. 
In this paper, we present the Atmospheric Big Challenge Database (ABC Database), a carefully designed, organised and publicly available database dedicated to the study of the inverse problem in the context of exoplanetary studies. We have generated 105,887 forward models and 26,109 complementary posterior distributions generated with a Nested Sampling algorithm. Alongside the database, this paper provides a jargon-free introduction for non-field experts interested in diving into the" +"---\nabstract: 'This paper focuses on numerical approximation for fractional powers of elliptic operators on $2$-d manifolds. Firstly, a parametric finite element method is employed to discretize the original problem. We then approximate fractional powers of the discrete elliptic operator by the product of rational functions, each of which is a diagonal Pad\u00e9 approximant for the corresponding power function. Rigorous error analysis is carried out and sharp error bounds are presented which show that the scheme is robust for $\\alpha\\rightarrow 0^+$ and $\\alpha \\rightarrow 1^-$. The cost of the proposed algorithm is solving some elliptic problems. Since the approach is exponentially convergent with respect to the number of solves, it is very efficient. Some numerical tests are given to confirm our theoretical analysis and the robustness of the algorithm.'\naddress: 'Faculty of Computational Mathematics and Cybernetics, Shenzhen MSU-BIT University, Shenzhen 518172, P.R. China.'\nauthor:\n- Beiping Duan\nbibliography:\n- 'references.bib'\ntitle: 'Pad\u00e9-parametric FEM approximation for fractional powers of elliptic operators on manifolds'\n---\n\nIntroduction {#Se:1}\n============\n\nSuppose $\\cM$ is a $2$-dimensional compact and orientable manifold with $C^3$-smoothness in $\\bR^{3}$ (see [@dziuk2013finite Section 2.1] for the definition of the smoothness), and let $\\Gamma$ denote its boundary. 
We introduce the self-adjoint operator $\\cLg$ defined" +"---\nabstract: 'Superconducting microwave circuits with Josephson junctions are a major platform for quantum computing. To unleash their full capabilities, the cooperative operation of multiple microwave superconducting circuits is required. Therefore, designing an efficient protocol to distribute microwave entanglement remotely becomes a crucial open problem. Here, we propose a continuous-variable entanglement-swap approach based on optical-microwave entanglement generation, which can boost the ultimate rate by two orders of magnitude in the state-of-the-art parameter region, compared with traditional approaches. We further empower the protocol with a hybrid variational entanglement distillation component to provide a huge advantage in the infidelity-versus-success-probability trade-off. Our protocol can be realized with near-term device performance, and is robust against imperfections such as optical loss and noise. Therefore, our work provides a practical method to realize efficient quantum links for superconducting microwave quantum computers.'\nauthor:\n- Bingzhi Zhang\n- Jing Wu\n- Linran Fan\n- Quntao Zhuang\ntitle: Entangling remote microwave quantum computers with hybrid entanglement swap and variational distillation \n---\n\n[^1]\n\n[^2]\n\nEmpowered by the laws of quantum mechanics, quantum computers have the potential of speeding up the solution of various classically hard problems\u00a0[@Shor_1997; @harrow2009; @grover1996fast; @bloch2012quantum]. Among the candidate platforms for quantum computing, superconducting circuits with Josephson junctions
The blockchain and distributed ledger technologies represent a promising solution to address these issues, but raise other questions, for example regarding their practical feasibility. In fact, IoT devices have limited resources and, consequently, may not be able to easily perform all the operations required to participate in a blockchain. In this paper we propose a minimal architecture to allow IoT devices to perform data certification and notarization on the Ethereum blockchain. We develop a hardware-software platform through which a lightweight device (e.g., an IoT sensor), holding a secret key and the associated public address, produces signed transactions, which are then submitted to the blockchain network. This guarantees data integrity and authenticity and, on the other hand, minimizes the computational burden on the lightweight device. To show the practicality of the proposed approach, we report and discuss the results of benchmarks performed on ARM Cortex-M4 hardware architectures, sending transactions over the Ropsten testnet. Our results show that all the necessary operations can be performed with small latency, thus" +"---\nabstract: 'Many complex vehicular systems, such as large marine vessels, contain confined spaces like water tanks, which are critical for the safe functioning of the vehicles. It is particularly hazardous for humans to inspect such spaces due to limited accessibility, poor visibility, and unstructured configuration. While robots provide a viable alternative, they encounter the same set of challenges in realizing robust autonomy. In this work, we specifically address the problem of detecting foreign object debris (FODs) left inside the confined spaces using a visual mapping-based system that relies on Mahalanobis distance-driven comparisons between the nominal and online maps for local outlier identification. 
The identified outliers, corresponding to candidate FODs, are used to generate waypoints that are fed to a mobile ground robot to take camera photos. The photos are subsequently labeled by humans for final identification of the presence and types of FODs, leading to high detection accuracy while mitigating the effect of the recall-precision tradeoff. Preliminary simulation studies, followed by extensive physical trials on a prototype tank, demonstrate the capability and potential of our FOD detection system.'\naddress:\n- 'Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA'\n- 'Naval Undersea Warfare Center Division Keyport, Keyport, WA" +"---\nabstract: 'Benefiting from the message passing mechanism, Graph Neural Networks (GNNs) have been successful on flourishing tasks over graph data. However, recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure. A straightforward solution to remedy this issue is to model the edge weights by learning a metric function between pairwise representations of two end nodes, which attempts to assign low weights to adversarial edges. The existing methods use either raw features or representations learned by supervised GNNs to model the edge weights. However, both strategies are faced with some immediate problems: raw features cannot represent various properties of nodes (*e.g.*, structure information), and representations learned by supervised GNN may suffer from the poor performance of the classifier on the poisoned graph. We need representations that carry both feature information and as much correct structure information as possible and are insensitive to structural perturbations. To this end, we propose an unsupervised pipeline, named STABLE, to optimize the graph structure. Finally, we input the well-refined graph into a downstream classifier. 
For this part, we design an advanced GCN that significantly enhances the robustness of vanilla GCN\u00a0[@kipf2017semi] without increasing the time" +"---\nabstract: 'Affleck-Kennedy-Lieb-Tasaki (AKLT) states are an important class of many-body quantum states that are useful in quantum information processing, including measurement-based quantum computation in particular. Here we propose a general approach for constructing efficient verification protocols for AKLT states on arbitrary graphs with local spin measurements. Our verification protocols build on bond verification protocols and matching covers (including edge coloring) of the underlying graphs, which have a simple geometric and graphic picture. We also provide rigorous performance guarantee that is required for practical applications. With our approach, most AKLT states of wide interest, including those defined on 1D and 2D lattices, can be verified with a constant sample cost, which is independent of the system size and is dramatically more efficient than all previous approaches. As an illustration, we construct concrete verification protocols for AKLT states on various lattices and on arbitrary graphs up to five vertices.'\nauthor:\n- Tianyi Chen\n- Yunting Li\n- Huangjun Zhu\nbibliography:\n- 'all\\_references.bib'\ntitle: 'Efficient verification of Affleck-Kennedy-Lieb-Tasaki states'\n---\n\nIntroduction\n============\n\nGround states of local Hamiltonians play crucial roles in many-body physics and have also found increasing applications in quantum information processing [@StepWPW17; @Wei18; @StepNBE19; @DaniAM20; @GoihWET20; @WeiRA22]. The Affleck-Kennedy-Lieb-Tasaki (AKLT)" +"---\nabstract: 'There has been a massive increase in research interest towards applying data driven methods to problems in mechanics, with a particular emphasis on using data driven methods for predictive modeling and design of materials with novel functionality. 
While traditional machine learning (ML) methods have enabled many breakthroughs, they rely on the assumption that the training (observed) data and testing (unseen) data are independent and identically distributed (i.i.d). However, when these standard ML approaches are applied to real world mechanics problems with unknown test environments, they can be very sensitive to data distribution shifts, and can break down when evaluated on test datasets that violate the i.i.d. assumption. In contrast, out-of-distribution (OOD) generalization approaches assume that the data contained in test environments are allowed to shift (i.e., violate the i.i.d. assumption). To date, multiple methods have been proposed to improve the OOD generalization of ML methods. However, most of these OOD generalization methods have been focused on classification problems, driven in part by the lack of benchmark datasets available for OOD regression problems. Thus, the efficiency of these OOD generalization methods on regression problems, which are typically more relevant to mechanics research than classification problems, is unknown. To address" +"---\nabstract: 'We investigate the transport properties of charge carriers in AB bilayer graphene through a triple electrostatic barrier. We calculate the transmission and reflection using the continuity conditions at the interfaces of the triple barrier together with the transfer matrix method. First, we consider the case where the energy is less than the interlayer coupling $\\gamma_1$ and show that, at normal incidence, transmission is completely suppressed in the gap for a large barrier width while it appears in the gap for a small barrier width. For energies greater than $\\gamma_1$, we show that in the absence of an interlayer potential difference, transmission is less than that of a single barrier, but in its presence, transmission in the gap region is suppressed, as opposed to a double barrier. 
It is found that one, two, or three gaps can be created depending on the number of interlayer potential differences applied. Resonance in the $T_-^+$ transmission channel is observed that is not seen in the single and double barrier cases. Finally, we compute the conductance and show that the number of peaks is greater than in the double barrier case.'\nauthor:\n- Mouhamadou Hassane Saley\n- Abderrahim El Mouhafid\n- Ahmed Jellal\n-" +"---\nabstract: 'We present a proposal for a one-bit full-adder to process classical information based on the quantum reversible dynamics of a triple quantum dot system. The device works via the repeated execution of a Fredkin gate implemented through the dynamics of a single time-independent Hamiltonian. Our proposal uses realistic parameter values and could be implemented on currently available quantum dot architectures. We compare the estimated ideal energetic requirements for operating our full-adder with those of well-known fully classical devices, and argue that our proposal may provide consistently better energy efficiency. Our work serves as a proof of principle for the development of energy-efficient information technologies operating through coherent quantum dynamics.'\nauthor:\n- 'Jo\u00e3o P. Moutinho'\n- Marco Pezzutto\n- Sagar Pratapsi\n- Francisco Ferreira da Silva\n- Silvano De Franceschi\n- Sougato Bose\n- 'Ant\u00f3nio T. Costa'\n- Yasser Omar\nbibliography:\n- 'draft.bib'\ntitle: 'Quantum dynamics for energetic advantage in a charge-based classical full-adder'\n---\n\nThe ever-growing dependence of society on information technologies has led to remarkable developments over the past decades. 
Transistor counts in modern processing devices have roughly doubled every two years, as empirically described by Moore\u2019s law [@moore1965cramming; @kish2002end], accompanied by similar gains in energy efficiency" +"---\nabstract: 'Based on trace theory, we study efficient methods for concurrent integration of B-spline basis functions in IGA-FEM. We consider several scenarios of parallelization for two standard integration methods; the classical one and sum factorization. We aim to efficiently utilize hybrid memory machines, such as modern clusters, by focusing on the non-obvious layer of the shared memory part of concurrency. We estimate the performance of computations on a GPU and provide a strategy for performing such computations in practical implementations.'\naddress:\n- |\n AGH University of Sciences and Technology\\\n Institute of Computer Science, Electronics and Telecommunication\\\n Department of Computer Science\\\n al. A Mickiewicza 30, 30-059 Krak\u00f3w, Poland\\\n email: macwozni@agh.edu.pl \n- 'Instituto de Matem\u00e1ticas, Pontificia Universidad Cat\u00f3lica de Valpara\u00edso. Valpara\u00edso, Chile'\nauthor:\n- Maciej Wo\u017aniak\n- Anna Szyszka\n- Sergio Rojas\nbibliography:\n- 'bibliography.bib'\ntitle: 'A study of efficient concurrent integration methods of B-Spline basis functions in IGA-FEM'\n---\n\nIsogeometric Finite Element Method ,Numerical integration ,Trace theory ,Sum factorization\n\nIntroduction {#sec:motivation}\n============\n\nThe great success of the finite element method (FEM) can be attributed to its solid theoretical rooting in the fields of variational calculus, and functional analysis [@strangfix; @hughes2012finite]. It is widely used for numerically solving partial differential equations" +"---\nabstract: 'Studies on the circumstellar structures around evolved stars provide vital information on the evolution of the parent star and the properties of the local interstellar medium. 
In this work, we present the discovery and characterization of an optical cocoon tail behind the star HD\u00a0185806. The cocoon apex emission is puzzling, as it is detected in the infrared but shows no signal in the optical wavelength. The [H$\\alpha$]{}\u00a0and [\\[O\u00a0[iii]{}\\]]{}\u00a0 fluxes of the nebular structure vary from 2.7 to 8.5$\\times$[$10^{-12}$ erg s$^{-1}$ cm$^{-2}$]{}\u00a0and from 0.9 to 7.0$\\times$[$10^{-13}$ erg s$^{-1}$ cm$^{-2}$]{}, respectively. Through high-resolution spectroscopy, we derive the spectral type of the star, construct the position-velocity diagrams of the cocoon tail for the [H$\\alpha$]{}, [\\[O\u00a0[iii]{}\\]]{}\u00a0and [\\[N[ii]{}\\]]{}\u00a0emission lines, and determine its velocity in the range of $-$100 to 40 [km s$^{-1}$]{}. Furthermore, we use SED fitting and MESA evolutionary models adopting a distance of 900\u00a0pc, and classify HD 185806 as a 1.3 M$_{\\odot}$ star, in the transition phase between the RGB and early AGB stages. Finally, we study the morpho-kinematic structure of the cocoon tail using the astronomical software SHAPE. An ellipsoidal structure, with an inclination of $\\sim19$\u00a0with respect to the plane of" +"---\nabstract: 'We develop a novel decentralized control method for a network of perturbed linear systems with dynamical couplings subject to Signal Temporal Logic (STL) specifications. We first transform the STL requirements into set containment problems and then we develop controllers to solve these problems. Our approach is based on treating the couplings between subsystems as disturbances, which are bounded sets that the subsystems negotiate in the form of parametric assume-guarantee contracts. The set containment requirements and parameterized contracts are added to the subsystems\u2019 constraints. We introduce a centralized optimization problem to derive the contracts, reachability tubes, and decentralized closed-loop control laws. 
We show that, when the STL formula is separable with respect to the subsystems, the centralized optimization problem can be solved in a distributed way, which scales to large systems. We present formal theoretical guarantees on robustness of STL satisfaction. The effectiveness of the proposed method is demonstrated via a power network case study.'\nauthor:\n- 'Kasra Ghasemi, Sadra Sadraddini, and Calin Belta[^1][^2][^3]'\nbibliography:\n- 'references.bib'\ntitle: '**Decentralized Signal Temporal Logic Control for Perturbed Interconnected Systems via Assume-Guarantee Contract Optimization** '\n---\n\nINTRODUCTION\n============\n\nMulti agent systems benefit from decentralized control laws that require only local information. Online" +"---\nabstract: 'Tree Ensemble (TE) models (like Gradient Boosted Trees) often provide higher prediction performance compared to single decision trees. However, TE models generally lack transparency and interpretability, as humans have difficulty understanding their decision logic. This paper presents a novel approach to convert a TE trained for a binary classification task, to a rule list (RL) that closely approximates the TE and is interpretable for a human. This RL can effectively explain the model even on the minority class predicted by the model. Experiments on benchmark datasets demonstrate that, (i) predictions from the RL generated by TE2Rules have higher fidelity (with respect to the original TE) compared to state-of-the-art methods, (ii) the run-time of TE2Rules is comparable to that of some other similar baselines and (iii) the run-time of TE2Rules algorithm can be traded off at the cost of a slightly lower fidelity.'\nauthor:\n- |\n Written by AAAI Press Staff^1^[^1]\\\n AAAI Style Contributions by Pater Patel Schneider, Sunil Issar,\\\n J. 
Scott Penberthy, George Ferguson, Hans Guesgen, Francisco Cruz, Marc Pujol-Gonzalez\n- Author Name\n- 'G Roshan Lal, Xiaotong (Elaine) Chen, Varun Mithal'\nbibliography:\n- 'main.bib'\ntitle:\n- 'TE2Rules: Explaining Tree Ensembles using Rules'\n- 'My Publication Title \u2014" +"---\nauthor:\n- \n- \n- \n- \ntitle: 'Quantum Advantage Seeker with Kernels (QuASK): a software framework to speed up the research in quantum machine learning'\n---\n\nIntroduction\n============\n\nBreakthroughs in quantum technologies have allowed the construction of small-scale prototypes of quantum computers [@madsen2022quantum; @dumitrescu2022dynamical; @huang2022quantum], namely NISQ devices [@preskill2018quantum]. Even though many sources of noise may corrupt the execution on these devices [@pelofske2022quantum], we are able to run a certain class of algorithms [@nisq_algorithms_2022] which compromises the strong theoretical speedup of fault-tolerant quantum algorithms [@montanaro2016quantum] to achieve shorter, less noisy computations. A large subset of the NISQ-ready algorithms is dedicated to the development of machine learning models.\n\nOne of the most interesting technique among them are the quantum classifiers [@Schuld_2019; @havlivcek2019supervised; @mengoni2019kernel]: the function $f(x) = \\Trace[\\rho_{\\mathbf{x}}\\rho_w]$, where $\\rho_{\\mathbf{x}}$ represents the encoding of a data point ${\\mathbf{x}}$ in a quantum state through the feature map $\\ketbra{0}{0} \\mapsto U({\\mathbf{x}})\\ketbra{0}{0} = \\rho_{\\mathbf{x}}$ and $\\rho_w$ represents the weight vector encoded through the mapping $\\ketbra{0}{0} \\mapsto W\\ketbra{0}{0} = \\rho_w$, can be interpreted as a linear[^1] model [@schuld2021machine]. Such a function can be immediately used to solve supervised learning tasks. 
By choosing the weight mapping to be parametric $W(\\theta)$, we can train the parameters to" +"---\nabstract: 'In the anisotropic random geometric graph model, vertices correspond to points drawn from a high-dimensional Gaussian distribution and two vertices are connected if their distance is smaller than a specified threshold. We study when it is possible to hypothesis test between such a graph and an Erd\u0151s-R\u00e9nyi graph with the same edge probability. If $n$ is the number of vertices and $\\alpha$ is the vector of eigenvalues, [@eldan2020information] shows that detection is possible when $n^3 \\gg ({{{\\left}\\|\\alpha{\\right}\\|}}_2/{{{\\left}\\|\\alpha{\\right}\\|}}_3)^6$ and impossible when $n^3 \\ll ({{{\\left}\\|\\alpha{\\right}\\|}}_2/{{{\\left}\\|\\alpha{\\right}\\|}}_4)^4$. We show detection is impossible when $n^3 \\ll ({{{\\left}\\|\\alpha{\\right}\\|}}_2/{{{\\left}\\|\\alpha{\\right}\\|}}_3)^6$, closing this gap and affirmatively resolving the conjecture of [@eldan2020information].'\nauthor:\n- 'Matthew Brennan[^1]'\n- 'Guy Bresler[^2]'\n- 'Brice Huang[^3]'\nbibliography:\n- 'bib.bib'\ndate: 'June 29, 2022'\ntitle: |\n Threshold for Detecting High Dimensional Geometry\\\n in Anisotropic Random Geometric Graphs\n---\n\nIntroduction\n============\n\nExtracting information from large graphs is a fundamental statistical task. Because many natural networks have underlying metric structure \u2013 for example, nearby proteins in a biological network are more likely to share function, and users with similar interests in a social network are more likely to interact \u2013 a central inference problem is to infer latent geometric structure in an observed graph. Moreover," +"---\nabstract: 'We consider a class of attractive-repulsive energies, given by the sum of two nonlocal interactions with power-law kernels, defined over sets with fixed measure. It has recently been proved by R. Frank and E. 
Lieb that the ball is the unique (up to translation) global minimizer for sufficiently large mass. We focus on the issue of the stability of the ball, in the sense of the positivity of the second variation of the energy with respect to smooth perturbations of the boundary of the ball. We characterize the range of masses for which the second variation is positive definite (large masses) or negative definite (small masses). Moreover, we prove that the stability of the ball implies its local minimality among sets sufficiently close in the Hausdorff distance, but not in $L^1$-sense.'\naddress:\n- 'Department of Mathematics, University of Trento, Italy'\n- 'Department of Mathematics - IMAPP, Radboud University, Nijmegen, The Netherlands'\n- 'Department of Mathematics and Applied Mathematics, Virginia Commonwealth University, Richmond, VA, USA'\nauthor:\n- Marco Bonacini\n- Riccardo Cristoferi\n- Ihsan Topaloglu\nbibliography:\n- 'references.bib'\ntitle: 'Stability of the ball for attractive - repulsive energies'\n---\n\nIntroduction {#sec:intro}\n============\n\nIn [@BurChoTop18] the following minimization problem among" +"---\nabstract: 'In this third paper of the series reporting on the reverberation mapping (RM) campaign of active galactic nuclei with asymmetric H$\\beta$ emission-line profiles, we present results for 15 Palomar-Green (PG) quasars using spectra obtained between the end of 2016 to May 2021. This campaign combines long time spans with relatively high cadence. For 8 objects, both the time lags obtained from the entire light curves and the measurements from individual observing seasons are provided. Reverberation mapping of 9 of our targets has been attempted for the first time, while the results for 6 others can be compared with previous campaigns. We measure the H$\\beta$ time lags over periods of years and estimate their black hole masses. 
The long duration of the campaign enables us to investigate their broad line region (BLR) geometry and kinematics for different years by using velocity-resolved lags, which demonstrate signatures of diverse BLR geometry and kinematics. The BLR geometry and kinematics of individual objects are discussed. In this sample, the BLR kinematics of Keplerian/virialized motion and inflow is more common than outflow.'\nauthor:\n- 'Dong-Wei Bao'\n- 'Michael S. Brotherton'\n- Pu Du\n- 'Jacob N. McLane'\n- 'T. E. Zastrocky'\n- 'Kianna A." +"---\nabstract: 'We combine the swap Monte Carlo algorithm to long multi-CPU molecular dynamics simulations to analyse the equilibrium relaxation dynamics of model supercooled liquids over a time window covering ten orders of magnitude for temperatures down to the experimental glass transition temperature $T_g$. The analysis of time correlation functions coupled to spatio-temporal resolution of particle motion allow us to elucidate the nature of the equilibrium dynamics in deeply supercooled liquids. We find that structural relaxation starts at early times in rare localised regions characterised by a waiting time distribution that develops a power law near $T_g$. At longer times, relaxation events accumulate with increasing probability in these regions as $T_g$ is approached. This accumulation leads to a power-law growth of the linear extension of relaxed domains with time with a large, dynamic exponent. Past the average relaxation time, unrelaxed domains slowly shrink with time due to relaxation events happening at their boundaries. Our results provide a complete microscopic description of the particle motion responsible for key experimental signatures of glassy dynamics, from the shape and temperature evolution of relaxation spectra to the core features of dynamic heterogeneity. 
They also provide a microscopic basis to understand the emergence of dynamic" +"---\nabstract: 'Self-supervised learning representations (SSLR) have resulted in robust features for downstream tasks in many fields. Recently, several SSLRs have shown promising results on automatic speech recognition (ASR) benchmark corpora. However, previous studies have only shown performance for solitary SSLRs as an input feature for ASR models. In this study, we propose to investigate the effectiveness of diverse SSLR combinations using various fusion methods within end-to-end (E2E) ASR models. In addition, we will show there are correlations between these extracted SSLRs. As such, we further propose a feature refinement loss for decorrelation to efficiently combine the set of input features. For evaluation, we show that the proposed \u201cFeaRLESS learning features\" perform better than systems without the proposed feature refinement loss for both the WSJ and Fearless Steps Challenge (FSC) corpora.'\naddress: 'Center for Robust Speech Systems, University of Texas at Dallas, TX 75080'\nbibliography:\n- 'mybib.bib'\ntitle: 'FeaRLESS: Feature Refinement Loss for Ensembling Self-Supervised Learning Features in Robust End-to-end Speech Recognition'\n---\n\n**Index Terms**: End-to-End Speech Recognition, Self-Supervised Learning Representation, Feature Decorrelation\n\nIntroduction\n============\n\nDeep neural networks (DNNs) have accelerated the progress in speech processing, thus enhancing the accessibility of speech-related technologies in daily life. Since DNNs are data" +"---\nabstract: 'Continual Learning (CL) on time series data represents a promising but under-studied avenue for real-world applications. We propose two new CL benchmarks for Human State Monitoring. We carefully designed the benchmarks to mirror real-world environments in which new subjects are continuously added. 
We conducted an empirical evaluation to assess the ability of popular CL strategies to mitigate forgetting in our benchmarks. Our results show that, possibly due to the domain-incremental properties of our benchmarks, forgetting can be easily tackled even with a simple finetuning and that existing strategies struggle in accumulating knowledge over a fixed, held-out, test subject.'\nauthor:\n- |\n Federico Matteoni$^1$, Andrea Cossu$^{1,2}$, Claudio Gallicchio$^1$,\\\n Vincenzo Lomonaco$^1$ and Davide Bacciu$^1$ [^1]\\\n 1- University of Pisa - Computer Science Department\\\n Largo B. Pontecorvo, 3 - 56127 Pisa - Italy\\\n 2- Scuola Normale Superiore\\\n Piazza dei Cavalieri, 7 - 56126 Pisa - Italy\\\nbibliography:\n- 'Bibliografia.bib'\ntitle: Continual Learning for Human State Monitoring\n---\n\nIntroduction\n============\n\nContinual Learning (CL) refers to the setting where the data is modeled as a non-stationary stream composed of $n$ experiences $e_0,\\ldots,e_n$ [@lomonaco2021a]. Each experience is a set of one or multiple samples which are used to perform the training of the model." +"---\nabstract: 'Equilateral triangle-shaped graphene nanoislands with a lateral dimension of $n$ benzene rings are known as $[n]$triangulenes. Individual $[n]$triangulenes are open-shell molecules, with single-particle electronic spectra that host $n-1$ half-filled zero modes and a many-body ground state with spin $S=(n-1)/2$. The on-surface synthesis of triangulenes has been demonstrated for $n=3,4,5,7$ and the observation of a Haldane symmetry-protected topological phase has been reported in chains of $[3]$triangulenes. Here, we provide a unified theory for the electronic properties of a family of two-dimensional honeycomb lattices whose unit cell contains a pair of triangulenes with dimensions $n_a,n_b$. 
Combining density functional theory and tight-binding calculations, we find a wealth of half-filled narrow bands, including a graphene-like spectrum (for $n_a=n_b=2$), spin-1 Dirac electrons (for $n_a=2,n_b=3$), $p_{x,y}$-orbital physics (for $n_a=n_b=3$), as well as a gapped system with flat valence and conduction bands (for $n_a=n_b=4$). All these results are rationalized with a class of effective Hamiltonians acting on the subspace of the zero-energy states that generalize the graphene honeycomb model to the case of fermions with an internal pseudospin degree of freedom with $C_3$ symmetry.'\nauthor:\n- 'R. Ortiz$^{1,2}$, G. Catarina$^{1,3,4}$, J. Fern\u00e1ndez-Rossier$^{3}$[^1]$^,$[^2]'\nbibliography:\n- 'triang2D.bib'\ntitle: 'Theory of triangulene two-dimensional crystals'\n---\n\nIntroduction\n============\n\nThe" +"---\nabstract: 'Quantum algorithms based on variational approaches are one of the most promising methods to construct quantum solutions and have found a myriad of applications in the last few years. Despite their adaptability and simplicity, their scalability and the selection of suitable ans\u00e4tze remain key challenges. In this work, we report an algorithmic framework based on nested Monte-Carlo Tree Search (MCTS) coupled with the combinatorial multi-armed bandit (CMAB) model for the automated design of quantum circuits. Through numerical experiments, we demonstrate our algorithm on various kinds of problems, including the ground energy problem in quantum chemistry, quantum optimisation on a graph, solving systems of linear equations, and finding encoding circuits for quantum error detection codes. Compared to the existing approaches, the results indicate that our circuit design algorithm can explore larger search spaces and optimise quantum circuits for larger systems, showing both versatility and scalability.'\nauthor:\n- 'Pei-Yong Wang'\n- Muhammad Usman\n- Udaya Parampalli\n- 'Lloyd C. L. Hollenberg'\n- 'Casey R. 
Myers'\nbibliography:\n- 'reference.bib'\ntitle: Automated Quantum Circuit Design with Nested Monte Carlo Tree Search\n---\n\nIntroduction {#intro}\n============\n\nThe variational quantum circuit (VQC, also known as parameterised quantum circuit, PQC) approach, first proposed for" +"---\nabstract: 'Neural networks are ubiquitous in applied machine learning for education. Their pervasive success in predictive performance comes alongside a severe weakness, the lack of explainability of their decisions, especially relevant in human-centric fields. We implement five state-of-the-art methodologies for explaining black-box machine learning models (LIME, PermutationSHAP, KernelSHAP, DiCE, CEM) and examine the strengths of each approach on the downstream task of student performance prediction for five massive open online courses. Our experiments demonstrate that the families of explainers **do not agree** with each other on feature importance for the same Bidirectional LSTM models with the same representative set of students. We use Principal Component Analysis, Jensen-Shannon distance, and Spearman\u2019s rank-order correlation to quantitatively cross-examine explanations across methods and courses. Furthermore, we validate explainer performance across curriculum-based prerequisite relationships. Our results come to the concerning conclusion that the choice of explainer is an important decision and is in fact paramount to the interpretation of the predictive results, even more so than the course the model is trained on. 
Source code and models are released at `http://github.com/epfl-ml4ed/evaluating-explainers`.'\nauthor:\n- |\n Vinitra Swamy\\\n \\\n Bahar Radmehr\\\n \\\n Natasa Krco\\\n \\\n- |\n Mirko Marras\\\n \\\n Tanja K\u00e4ser\\\n \\\nbibliography:\n- 'base.bib'" +"---\nauthor:\n- 'Yi Zhong,'\n- 'Ke Yang,'\n- 'and Yu-Xiao Liu[!!]{}'\ntitle: Thick brane in Rastall gravity \n---\n\nIntroduction\n============\n\nIt has been an attractive idea for the last two decades that our spacetime may have extra dimensions and the observable universe could be a brane in a higher-dimensional spacetime. In the braneworld scenario, it is possible to explain the hierarchy between the Planck scale and the electroweak scale [@ArkaniHamed:1998rs; @Randall:1999ee; @Randall:1999vf]. The braneworlds in these models are geometrically thin. However, in a viable model, a braneworld should have a thickness. Inspired by the domain wall in Ref. [@Rubakov:1983bb], the thick braneworld generated by a background scalar field was proposed [@Gremm:1999pj; @Csaki:2000fc; @PhysRevD.62.044017; @Kobayashi:2001jd; @PhysRevD.66.024024; @Bazeia:2002sd; @Andrianov:2005hm; @Afonso:2006gi; @Neupane:2010ey]. Furthermore, since general relativity suffers from both phenomenological and theoretical problems such as the dark energy problem and the singularity problem, various modified theories of gravity have been proposed, and braneworlds in various modified gravities have been investigated [@Liu:2017gcn]. It was found that the branes may have an inner structure in some theories such as $f(T)$ gravity [@Yang:2012hu] and mimetic gravity [@Zhong:2017uhn]. Pure geometric thick branes were found in modified gravities [@Arias:2002ew; @Barbosa-Cendejas:2005vog; @Barbosa-Cendejas:2006cic; @Zhong:2015pta] as well. See Refs. 
[@Giovannini:2001ta; @Cho:2001nf; @Heydari-Fard:2007dos; @Nozari:2008ny;" +"---\nauthor:\n- 'M.\u00a0Millon'\n- 'C.\u00a0Dalang'\n- 'C.\u00a0Lemon'\n- 'D.\u00a0Sluse'\n- 'E.\u00a0Paic'\n- 'J.\u00a0H.\u00a0H.\u00a0Chan'\n- 'F.\u00a0Courbin'\nbibliography:\n- 'biblio.bib'\ntitle: 'Evidence for a milliparsec-separation supermassive binary black hole with quasar microlensing[^1], [^2]'\n---\n\nIntroduction\n============\n\nThe formation of supermassive binary black holes (SMBBHs) is an expected end product that naturally emerges from the hierarchical assembly of multiple galaxy mergers [@Haehnelt2002; @Volonteri2003]. The binding of the two black holes in the central parsec of the merging galaxies is first driven by dynamical friction until other mechanisms, such as stellar hardening and disk-driven torques, shrink the orbits further [see e.g. @LisaCollaboration2022 for a review]. Once the SMBBH reaches a separation of the order of 0.01 parsec, the emission of gravitational waves (GWs) efficiently dissipates the angular momentum and the merger of the two black holes becomes inevitable [@Begelman1980].\n\nThe process that leads to the merger of two supermassive black holes (SMBHs) is described in numerical simulations over a wide range of dynamical scales [e.g. @Merritt2006; @Dotti2007; @Cuadra2009] but remains largely unobserved[^3]. Measuring the number density of SMBBHs across redshift would improve our understanding of the mechanisms that lead to the formation of black" +"---\nabstract: 'Trial-to-trial effects have been found in a number of studies, indicating that processing a stimulus influences responses in subsequent trials. A special case are priming effects which have been modelled successfully with error-driven learning [@marsolek2008antipriming], implying that participants are continuously learning during experiments. This study investigates whether trial-to-trial learning can be detected in an unprimed lexical decision experiment. 
We used the Discriminative Lexicon Model [DLM; @baayen2019discriminative], a model of the mental lexicon with meaning representations from distributional semantics, which models error-driven incremental learning with the Widrow-Hoff rule. We used data from the British Lexicon Project [BLP; @keuleers2012british] and simulated the lexical decision experiment with the DLM on a trial-by-trial basis for each subject individually. Then, reaction times were predicted with Generalised Additive Models (GAMs), using measures derived from the DLM simulations as predictors. We extracted measures from two simulations per subject (one with learning updates between trials and one without), and used them as input to two GAMs. Learning-based models showed better model fit than the non-learning ones for the majority of subjects. Our measures also provide insights into lexical processing and individual differences. This demonstrates the potential of the DLM to model behavioural data and leads to" +"---\nabstract: 'CarbON CII line in post-rEionization and ReionizaTiOn (CONCERTO) is a low-resolution spectrometer with an instantaneous field-of-view of 18.6arcmin, operating in the 130\u2013310GHz transparent atmospheric window. It is installed on the 12-meter Atacama Pathfinder Experiment (APEX) telescope at 5100m above sea level. The Fourier transform spectrometer (FTS) contains two focal planes hosting a total of 4304 kinetic inductance detectors. The FTS interferometric pattern is recorded on the fly while continuously scanning the sky. One of the goals of CONCERTO is to characterize the large-scale structure of the Universe by observing the integrated emission from unresolved galaxies. This methodology is an innovative technique and is called line intensity mapping. 
In this paper, we describe the CONCERTO instrument, the effect of the vibration of the FTS beamsplitter, and the status of the CONCERTO main survey.'\nauthor:\n- Alessandro\u00a0Fasano\n- Alexandre\u00a0Beelen\n- Alain\u00a0Beno\u00eet\n- Andreas\u00a0Lundgren\n- Peter\u00a0Ade\n- Manuel\u00a0Aravena\n- Emilio\u00a0Barria\n- Matthieu\u00a0B\u00e9thermin\n- Julien\u00a0Bounmy\n- Olivier\u00a0Bourrion\n- Guillaume\u00a0Bres\n- Martino\u00a0Calvo\n- Andrea\u00a0Catalano\n- 'Fran\u00e7ois-Xavier\u00a0D\u00e9sert'\n- Carlos\u00a0De Breuck\n- Carlos\u00a0Dur\u00e1n\n- Thomas\u00a0Fenouillet\n- Jose\u00a0Garcia\n- Gregory\u00a0Garde\n- Johannes\u00a0Goupy\n- Christopher\u00a0Groppi\n-" +"---\nabstract: 'A global mode is shown to be unstable to non-axisymmetric perturbations in a differentially rotating Keplerian disk containing either vertical or azimuthal magnetic fields. In an unstratified cylindrical disk model, using both global eigenvalue stability analysis and linear global initial-value simulations, it is demonstrated that this instability dominates at strong magnetic field where local standard MRI becomes stable. Unlike the standard MRI mode, which is concentrated in the high flow shear region, these distinct global modes (with low azimuthal mode numbers) are extended in the global domain and are Alfv\u00e9n continuum driven unstable modes. As its mode structure and relative dominance over MRI is inherently determined by the global spatial curvature as well as the flow shear in the presence of magnetic field, we call it the magneto-curvature (magneto-spatial-curvature) instability. Consistent with the linear analysis, as the field strength is increased in the nonlinear simulations, a transition from a MRI-driven turbulence to a state dominated by global non-axisymmetric modes is obtained. 
This global instability could therefore be a source of nonlinear transport in accretion disks at higher magnetic fields than predicted by local models.'\nauthor:\n- Fatima Ebrahimi\n- Matthew Pharr\nbibliography:\n- 'references.bib'\ntitle: 'A non-local magneto-curvature" +"---\nabstract: |\n ------------------------------------------------------------------------\n\n **Abstract:** *The Last of Us* is a game focused on stealth, companionship and strategy. The game is set in a lonely post-pandemic world and thus needs AI companions to hold the interest of players. The game has three main kinds of NPCs - the Infected, human enemies and Buddy AIs. This case study discusses the challenges the developers faced in creating AI for these NPCs and the AI techniques they used to solve them. It also compares the challenges and approach with similar industry-leading games.\n\n ------------------------------------------------------------------------\nauthor:\n- |\n Harsh Panwar\\\n School of Electronic Engineering and Computer Science\\\n Queen Mary University of London\\\n London\\\n `h.panwar@se21.qmul.ac.uk`\\\nbibliography:\n- 'references.bib'\ntitle: 'The NPC AI of *The Last of Us*: A case study '\n---\n\nIntroduction\n============\n\n*The Last of Us* is a third-person shooter (TPS) action-adventure made by *Naughty Dog* and distributed by *Sony Computer Entertainment*, developed primarily for *PlayStation 3* and later remastered for *PlayStation 4* in 2014 [@dog2013last]. Since its release the game has received excellent reviews from game developer critics [@hrej.cz] [@ry_2013] [@idnes.cz_2013] as well as from the gaming community, and is considered the best game of the decade" +"---\nabstract: 'In recent years there has been a growing interest in the field of Casimir wormholes. In classical general relativity (GR), it is known that the null energy condition (NEC) has to be violated for a wormhole to be stable. 
The Casimir effect is an experimentally verified effect caused by vacuum field fluctuations in quantum field theory. Since the Casimir effect provides a negative energy density, it acts as an ideal candidate for the exotic matter needed for the stability of the wormhole. In this paper, we study the Casimir effect on the wormhole geometry in modified symmetric teleparallel gravity or $f(Q)$ gravity, where the non-metricity scalar $Q$ drives the gravitational interaction. We consider three configurations of the Casimir effect, namely (i) two parallel plates, (ii) two parallel cylindrical plates, and (iii) two spheres separated by a large distance, to make it more experimentally feasible. Further, we study the obtained wormhole solutions for each case with energy conditions at the wormhole throat of radius $r_0$ and find that some arbitrary quantity violates the classical energy conditions at the wormhole throat. Furthermore, the behavior of the equation of state (EoS) is also analyzed for" +"---\nabstract: 'We present a universal democratic Lagrangian for the bosonic sector of ten-dimensional type II supergravities, treating \u201celectric\u201d and \u201cmagnetic\u201d potentials of all RR fields on equal footing. For type IIB, this includes the five-form whose self-duality equation is derived from the Lagrangian. We also present an alternative form of the action for type IIB, with manifest $SL(2,{\\mathbb}R)$ symmetry.'\nauthor:\n- Karapet Mkrtchyan\n- Fridrich Valach\ntitle: Actions for type II supergravities and democracy\n---\n\n[\\\nImperial-TP-KM-2022-03\\\n]{}\n\nIntroduction\n============\n\nA large part of our knowledge on string theory comes from its low-energy limit \u2014 10d supergravities. 
The rich symmetry structures of the latter theories are governed by generalized geometry [@Hitchin:2003cxu; @Gualtieri:2003dx; @Grana:2005sn; @Grana:2008yw; @Coimbra:2011nw] (see also [@Siegel:1993xq; @Siegel:1993th]; for a review, see, e.g., [@Plauschinn:2018wbo]), operating not only with the metric tensor but also with form fields. It is therefore very useful to have descriptions of supergravities that manifest a large number of the underlying symmetries of string theory. We begin investigations in this direction by developing a new Lagrangian formulation for type II supergravities adapted to generalised geometry. That is, it treats the electric and magnetic degrees of freedom on an equal footing for the RR sector. This formulation we put forward in" +"---\nabstract: 'We investigate a time-independent many-boson system, whose ground states are quasi-degenerate and become infinitely degenerate in the thermodynamic limit. Out of these quasi-degenerate ground states we construct a quantum state that evolves in time with a period that is logarithmically proportional to the number of particles, that is, $T\\sim \\log N$. This boson system in such a state is a quantum time crystal as it approaches the ground state in the thermodynamic limit. The logarithmic dependence of its period on the total particle number $N$ makes it observable experimentally even for systems with a very large number of particles. 
Possible experimental proposals are discussed.'\nauthor:\n- 'Haipeng Xue (\u859b\u6d77\u9e4f)'\n- 'Lingchii Kong (\u5b54\u4ee4\u7426)'\n- 'Biao Wu (\u5434\u98d9)'\nnocite: '[@*]'\ntitle: Logarithmic Quantum Time Crystal\n---\n\n[UTF8]{}[gbsn]{}\n\n[^1]\n\n[^2]\n\n\\[sec:1\\]INTRODUCTION \n======================\n\nSpontaneous symmetry breaking is a well known physical phenomenon, where the observed ground state of a many-particle system does not possess the symmetries of its Hamiltonian[@beekman2019introduction], e.g., ferromagnetic materials break the rotational symmetry and crystals break the spatial translational symmetry. Wilczek suggested the possibility of spontaneous breaking of time translational symmetry[@shapere2012classical; @wilczek2012quantum]: the observed ground state of a closed quantum system may oscillate periodically in time. This suggestion had" +"---\nabstract: 'Let\u00a0$G$ be a linear algebraic group defined over a finite field\u00a0${\\mathbb F}_q$. We present several connections between the isogenies of\u00a0$G$ and the finite groups of rational points\u00a0$(G({\\mathbb F}_{q^n}))_{n\\geq 1}$. We show that an isogeny\u00a0$\\phi: G''\\to G$ over\u00a0${\\mathbb F}_q$ gives rise to a subgroup of fixed index in\u00a0$G({\\mathbb F}_{q^n})$ for infinitely many\u00a0$n$. Conversely, we show that if\u00a0$G$ is reductive the existence of a subgroup\u00a0$H_n$ of fixed index\u00a0$k$ for infinitely many\u00a0$n$ implies the existence of an isogeny of order\u00a0$k$. In particular, we show that the infinite sequence\u00a0$H_n$ is covered by a finite number of isogenies. This result applies to classical groups\u00a0${\\mathrm{GL}}_m$, ${\\mathrm{SL}}_m$, ${\\mathrm{SO}}_m$, ${\\mathrm{SU}}_m$, ${\\mathrm{Sp}}_{2m}$ and can be extended to non-reductive groups if\u00a0$k$ is prime to the characteristic. 
As a special case, we see that if\u00a0$G$ is simply connected the minimal indices of proper subgroups of\u00a0$G({\\mathbb F}_{q^n})$ diverge to infinity. Similar results are investigated regarding the sequence\u00a0$(G({\\mathbb F}_p))_p$ by varying the characteristic\u00a0$p$.'\nauthor:\n- Davide Sclosa$^1$\nbibliography:\n- 'refs.bib'\ntitle: |\n Algebraic Groups over Finite Fields:\\\n Connections Between Subgroups and Isogenies\n---\n\nIntroduction.\n=============\n\nLinear algebraic groups are groups of" +"---\nabstract: 'We experimentally study three-body energy exchange during Rydberg excitation near a two-body F\u00f6rster resonance. By varying the excitation pulse duration or Rabi frequency, we coherently control the excitation of three-atom entangled states. We prove coherence using an optical rotary echo technique, and compare with a model for excitation in a three-atom basis. Our results suggest a robust way to implement a three-body entangling operation.'\naddress: ' Department of Physics, Kenyon College, 201 North College Rd., Gambier, Ohio 43022, USA\\'\nauthor:\n- 'Tomohisa Yoda, Emily Hirsch, Jason Montgomery, Dilara Sen, and Aaron Reinhard'\ntitle: 'Coherent excitation of three-atom entangled states near a two-body F\u00f6rster resonance'\n---\n\n[GB]{}[ ]{}\n\nINTRODUCTION\n============\n\nInteractions among ultracold atoms have enabled an explosion of progress in fundamental physics\u00a0[@Bloch] and quantum technologies\u00a0[@Saffman_Rev]. Close-range interactions in quantum degenerate gases have revealed exotic phases of matter like Effimov states\u00a0[@Kraemer] and the Tonks-Girardeau gas\u00a0[@Kinoshita]. They have also given insight into quantum dynamics, including phase transitions\u00a0[@Greiner; @Hadzibabic], wavepacket transport\u00a0[@Mandel], Anderson localization\u00a0[@Kondov], and quantum thermalization\u00a0[@Kinoshita2; @Gring]. 
Ultracold Rydberg atoms are particularly useful because of their long-range couplings\u00a0[@Saffman_Rev]. Dipole-dipole interactions in Rydberg systems have been used to create neutral atom quantum gates" +"---\nabstract: 'Social media provides an essential platform for shaping and sharing opinions and consuming information in a decentralized way. However, users often interact with and are exposed to information mostly aligned with their beliefs, creating a positive feedback mechanism that reinforces those beliefs and excludes contrasting ones. In this paper, we study such mechanisms by analyzing the social network dynamics of controversial Twitter discussions using unsupervised methods that demand little computational power. Specifically, we focus on the retweet networks of the climate change conversation during 2019, when important climate social movements flourished. We find echo chambers of climate believers and climate skeptics that we identify based solely on the retweeting patterns of Twitter users. In particular, we study the information sources, or *chambers*, consumed by the audience of the leading users of the conversation. Users with similar (contrasting) ideological positions show significantly high (low)-overlapping chambers, resulting in a bimodal overlap distribution. Further, we uncover the ideological position of previously unobserved high-impact users based on how many audience members fall into either echo chamber. We uncover the ideology of more than half of the retweeting population as either climate believers or skeptics and find that the cross-group communication is small." +"---\nabstract: |\n Despite the ubiquitous use of materials maps in modern rendering pipelines, their editing and control remains a challenge. In this paper, we present an example-based material control method to augment input material maps based on user-provided material photos. 
We train a tileable version of MaterialGAN and leverage its material prior to guide the appearance transfer, optimizing its latent space using differentiable rendering. Our method transfers the micro and meso-structure textures of user provided target(s) photographs, while preserving the structure and quality of the input material. We show our methods can control existing material maps, increasing realism or generating new, visually appealing materials.\n\n <ccs2012> <concept> <concept\\_id>10010147.10010371.10010372.10010376</concept\\_id> <concept\\_desc>Computing methodologies\u00a0Reflectance modeling</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\nauthor:\n- |\n \\\n [ ]{}\nbibliography:\n- 'bibliography.bib'\ntitle: Controlling Material Appearance by Examples\n---\n\nIntroduction\n============\n\nRealistic materials are one of the key components of vivid virtual environments. Materials are typically represented by a parametric surface reflectance model and a set of 2D texture maps (material maps) where each pixel represents a spatially varying parameter of the model (e.g. albedo, normal, and roughness values). This representation is ubiquitous because of its compactness and ease of visualization. It is also used by most recent" +"---\nabstract: 'The Event Horizon Telescope (EHT) observation unveiled the first image of supermassive black hole Sgr A\\* showing a shadow of diameter $\\theta_{sh}= 48.7 \\pm 7\\,\\mu$as with fractional deviation from the Schwarzschild black hole shadow diameter $\\delta = -0.08^{+0.09}_{-0.09}~\\text{(VLTI)},-0.04^{+0.09}_{-0.10}~\\text{(Keck)}$. The Sgr A\\* shadow size is within $~10\\%$ of the Kerr predictions, providing us another tool to investigate the nature of strong-field gravity. 
We use the Sgr A\* shadow observables to constrain metrics of four independent and well-motivated, parametrically different from Kerr spacetime, rotating regular spacetimes and the corresponding no-horizon spacetimes. We present constraints on the deviation parameter $g$ of rotating regular black holes. The shadow angular diameter $\\theta_{sh}$ within the $1 \\sigma$ region places bounds on the parameters $a$ and $g$. Together with EHT bounds on the $\\theta_{sh}$ and $\\delta$ of Sgr A\*, our analysis concludes that the three rotating regular black holes, i.e., Bardeen, Hayward, and Simpson-Visser black holes, and corresponding no-horizon spacetimes agree with the EHT results of Sgr A\*. Thus, these three rotating regular spacetimes and Kerr black holes are indiscernible in some parameter space, and one cannot rule out the possibility of the former being strong candidates for the astrophysical black holes.'\nauthor:\n- Rahul" +"---\nabstract: 'Deep learning has proved particularly useful for semantic segmentation, a\u00a0fundamental image analysis task. However, the standard deep learning methods need many training images with ground-truth pixel-wise annotations, which are usually laborious to obtain and, in some cases (e.g., medical images), require domain expertise. Therefore, instead of pixel-wise annotations, we focus on image annotations that are significantly easier to acquire but still informative, namely the size of foreground objects. We define the object size as the maximum Chebyshev distance between a\u00a0foreground and the nearest background pixel. We propose an algorithm for training a\u00a0deep segmentation network from a\u00a0dataset of a\u00a0few pixel-wise annotated images and many images with known object sizes. The algorithm minimizes a\u00a0discrete (non-differentiable) loss function defined over the object sizes by sampling the gradient and then using the standard back-propagation algorithm. 
Experiments show that the new approach improves the segmentation performance.'\nauthor:\n- Denis Baru\u010di\u0107\n- Jan Kybic\nbibliography:\n- 'refs.bib'\ntitle: Learning to segment from object sizes\n---\n\nIntroduction\n============\n\nSemantic segmentation is the process of associating a class label to each pixel of an image. With the advent of deep learning, deep networks have achieved incredible performance on many image" +"---\nabstract: 'A need to understand and predict vehicles\u2019 behavior underlies both public and private goals in the transportation domain, including urban planning and management, ride-sharing services, and intelligent transportation systems. Individuals\u2019 preferences and intended destinations vary throughout the day, week, and year: for example, bars are most popular in the evenings, and beaches are most popular in the summer. Despite this principle, we note that recent studies on a popular benchmark dataset from Porto, Portugal have found, at best, only marginal improvements in predictive performance from incorporating temporal information. We propose an approach based on hypernetworks, a variant of meta-learning (\u201clearning to learn\u201d) in which a neural network learns to change its own weights in response to an input. In our case, the weights responsible for destination prediction vary with the metadata, in particular the time, of the input trajectory. 
The time-conditioned weights notably reduce the model\u2019s error relative to ablation studies and comparable prior work, and we confirm our hypothesis that knowledge of time should improve prediction of a vehicle\u2019s intended destination.'\nauthor:\n- Mark Tenzer\n- Zeeshan Rasheed\n- Khurram Shafique\n- Nuno Vasconcelos\nbibliography:\n- 'references.bib'\ntitle: 'Meta-Learning over Time for Destination Prediction Tasks'\n---\n\n<ccs2012>" +"---\nabstract: |\n **Offensive Content Warning**: This paper contains offensive language only for providing examples that clarify this research and do not reflect the authors\u2019 opinions. Please be aware that these examples are offensive and may cause you distress.\\\n \\\n The subjectivity of recognizing *hate speech* makes it a complex task. This is also reflected by different and incomplete definitions in NLP. We present *hate speech* criteria, developed with perspectives from law and social science, with the aim of helping researchers create more precise definitions and annotation guidelines on five aspects: (1) target groups, (2) dominance, (3) perpetrator characteristics, (4) type of negative group reference, and (5) type of potential consequences/effects. Definitions can be structured so that they cover a broader or narrower phenomenon. As such, conscious choices can be made on specifying criteria or leaving them open. We argue that the goal and exact task developers have in mind should determine how the scope of *hate speech* is defined. 
We provide an overview of the properties of English datasets from [hatespeechdata.com](hatespeechdata.com) that may help select the most suitable dataset for a specific scenario.\nauthor:\n- 'Urja Khurana$^{1*}$'\n- 'Ivar Vermeulen$^{2}$'\n- 'Eric Nalisnick$^{3}$\\'\n- '**Marloes van" +"---\nabstract: 'Although deep learning algorithms have been intensively developed for computer-aided tuberculosis diagnosis (CTD), they mainly depend on carefully annotated datasets, leading to much time and resource consumption. Weakly supervised learning (WSL), which leverages coarse-grained labels to accomplish fine-grained tasks, has the potential to solve this problem. In this paper, we first propose a new large-scale tuberculosis (TB) chest X-ray dataset, namely tuberculosis chest X-ray attribute dataset (TBX-Att), and then establish an attribute-assisted weakly supervised framework to classify and localize TB by leveraging the attribute information to overcome the insufficiency of supervision in WSL scenarios. Specifically, first, the TBX-Att dataset contains 2000 X-ray images with seven kinds of attributes for TB relational reasoning, which are annotated by experienced radiologists. It also includes the public TBX11K dataset with 11200 X-ray images to facilitate weakly supervised detection. Second, we exploit a multi-scale feature interaction model for TB area classification and detection with attribute relational reasoning. The proposed model is evaluated on the TBX-Att dataset and will serve as a solid baseline for future research. The code and data will be available at \u00a0.'\nauthor:\n- Chengwei Pan\n- 'Gangming Zhao[^1]'\n- Junjie Fang\n- Baolian Qi\n- Jiaheng Liu\n- |" +"---\nabstract: 'We present an enriched formulation of the Least Squares (LSQ) regression method for Uncertainty Quantification (UQ) using generalised polynomial chaos (gPC). 
More specifically, we enrich the linear system with additional equations for the gradient (or sensitivity) of the Quantity of Interest with respect to the stochastic variables. This sensitivity is computed very efficiently for all variables by solving an adjoint system of equations at each sampling point of the stochastic space. The associated computational cost is similar to one solution of the direct problem. For the selection of the sampling points, we apply a greedy algorithm which is based on the pivoted QR decomposition of the measurement matrix. We call the new approach sensitivity-enhanced generalised polynomial chaos, or se-gPC. We apply the method to several test cases to test accuracy and convergence with increasing chaos order, including an aerodynamic case with $40$ stochastic parameters. The method is found to produce accurate estimations of the statistical moments using the minimum number of sampling points. The computational cost scales as $\\sim m^{p-1}$, instead of $\\sim m^p$ of the standard LSQ formulation, where $m$ is the number of stochastic variables and $p$ the chaos order. The solution of the adjoint system" +"---\nabstract: 'We revisit the predictions for the duration of the inflationary phase after the bounce in loop quantum cosmology. We present our analysis for different classes of inflationary potentials that include the monomial power-law chaotic type of potentials, the Starobinsky and the Higgs-like symmetry breaking potential with different values for the vacuum expectation value. Our setup can easily be extended to other forms of primordial potentials than the ones we have considered. Independently on the details of the contracting phase, if the dynamics starts sufficiently in the far past, the kinetic energy will come to dominate at the bounce, uniquely determining the amplitude of the inflaton at this moment. 
This will be the initial condition for the further evolution that will provide us with results for the number of [*e*]{}-folds from the bounce to the beginning of the accelerated inflationary regime and the subsequent duration of inflation. We also discuss under which conditions each model considered could lead to observable signatures on the spectrum of the cosmic microwave background or else be excluded for not predicting a sufficient amount of accelerated expansion. A first analysis is performed considering the standard value for the Barbero-Immirzi parameter, $\\gamma \\simeq 0.2375$, which" +"---\nabstract: 'We study electron transport properties through a double quantum dot (DQD) system coupled to a single mode photon cavity, DQD-cavity. The DQD system has a complex multilevel energy spectrum, in which by tuning the photon energy several anti-crossings between the electron states of the DQD system and photon dressed states are produced, which have not been seen in a simple two level DQD system. Three different regions of the photon energy are studied based on anti-crossings, where the photon energy ranges are classified as \u201clow\u201d, \u201cintermediate\u201d, and \u201chigh\u201d. The anti-crossings represent multiple Rabi-resonances, which lead to a current dip in the electron transport at the \u201cintermediate\u201d photon energy. Increasing the electron-photon coupling strength, $g_\\gamma$, the photon exchanges between the anti-crossing states are changed leading to a dislocation of the multiple Rabi resonance states. Consequently, the current dip at the intermediate photon energy is further reduced. Additionally, we tune the cavity-environment coupling, $\\kappa$, to see how the transport properties in the strong coupling regime, $g_{\\gamma}>\\kappa$, are changed for different directions of the photon polarization. 
Increasing $\\kappa$ with a constant value of $g_\\gamma$, a current enhancement in the intermediate photon energy is found, and a reduction in the current is" +"---\nabstract: 'We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI). Contrary to other CMI bounds, which are black-box bounds that do not exploit the structure of the problem and may be hard to evaluate in practice, our loo-CMI bounds can be computed easily and can be interpreted in connection to other notions such as classical leave-one-out cross-validation, stability of the optimization algorithm, and the geometry of the loss-landscape. It applies both to the output of training algorithms as well as their predictions. We empirically validate the quality of the bound by evaluating its predicted generalization gap in scenarios for deep learning. In particular, our bounds are non-vacuous on large-scale image-classification tasks.'\nauthor:\n- |\n Mohamad Rida Rammal\\\n `ridarammal@g.ucla.edu`\n- |\n Alessandro Achille\\\n `aachille@amazon.com`\n- |\n Aditya Golatkar\\\n `aditya29@cs.ucla.edu`\n- |\n Suhas Diggavi\\\n `suhas@ee.ucla.edu`\n- |\n Stefano Soatto\\\n `soatto@ucla.edu`\nbibliography:\n- 'bib.bib'\ntitle: 'On Leave-One-Out Conditional Mutual Information For Generalization'\n---\n\nIntroduction\n============\n\nGeneralization in classical machine learning models is often understood in terms of the bias-variance trade-off: over-parameterized models fit the data better but are more likely to overfit. However, the push for ever larger deep network" +"---\nabstract: 'Topological data analysis refers to approaches for systematically and reliably computing abstract \u201cshapes\u201d of complex data sets. There are various applications of topological data analysis in life and data sciences, with growing interest among physicists. 
We present a concise review of applications of topological data analysis to physics and machine learning problems in physics including the unsupervised detection of phase transitions. We finish with a preview of anticipated directions for future research.'\nauthor:\n- Daniel Leykam\n- 'Dimitris G. Angelakis'\nbibliography:\n- 'TDA\\_refs.bib'\ntitle: Topological data analysis and machine learning\n---\n\nIntroduction\n============\n\nTopological quantities are invariant under continuous deformations; an often-cited example is that a doughnut can be continuously transformed into a coffee mug - both are topologically equivalent to a torus. The robustness of topological quantities to perturbations is inspiring physicists in many fields, including condensed matter, photonics, acoustics, and mechanical systems\u00a0[@RevModPhys.82.3045; @RevModPhys.91.015006; @Ma2019; @Kim2020]. In all these areas topology has enabled the prediction and explanation of surprisingly robust physical effects.\n\nMost famously, the extremely precise quantisation of the Hall conductivity observed in two-dimensional electronic systems since the 1980s was explained as a novel topological phase of matter, the quantum Hall phase\u00a0[@vonKlitzing2020]. In this and" +"---\nabstract: 'In this article, the observer-based coordinated tracking control problem for a class of nonlinear multi-agent systems (MASs) with intermittent communication and information constraints is studied under dynamic switching topology. First, a state observer is designed to estimate the unmeasurable actual state information in the system. Second, adjustable heterogeneous coupling weighting parameters are introduced in the dynamic switching topology, and the distributed coordinated tracking control protocol under heterogeneous coupling framework is proposed. Then, a new Lemma is constructed to realize the cooperative design of observer gain, state feedback gain and heterogeneous coupling gain matrices. 
Furthermore, the stability of the system is further proved, and the range of communication rate is obtained. On this basis, the intermittent communication mode is extended to three time interval cases, namely normal communication, leader-follower communication interruption and all agents communication interruption, and then the distributed coordinated tracking control method is improved to solve this problem. Finally, simulation experiments are conducted with nonlinear MASs to verify the correctness of the methods.'\nauthor:\n- 'Yuhang\u00a0Zhang,\u00a0Yulian\u00a0Jiang and\u00a0Shenquan\u00a0Wang [^1]'\ntitle: 'Observer-Based Coordinated Tracking Control for Nonlinear Multi-Agent Systems with Intermittent Communication under Heterogeneous Coupling Framework [^2] '\n---\n\n[Shell : Bare Demo of IEEEtran.cls for" +"---\nabstract: 'This paper presents a novel methodology to auto-tune an Unscented Kalman Filter (UKF). It involves using a Two-Stage Bayesian Optimisation (TSBO), based on a t-Student Process to optimise the process noise parameters of a UKF for vehicle sideslip angle estimation. Our method minimises performance metrics, given by the average sum of the states\u2019 and measurements\u2019 estimation errors for various vehicle manoeuvres covering a wide range of vehicle behaviour. The predefined cost function is minimised through a TSBO which aims to find a location in the feasible region that maximises the probability of improving the current best solution. Results on an experimental dataset show the capability to tune the UKF in less time than using a genetic algorithm (GA) and the overall capacity to improve the estimation performance in an experimental test dataset compared to the current state-of-the-art GA. 
'\nauthor:\n- 'Alberto Bertipaglia$^{1}$, Barys Shyrokau$^{1}$, Mohsen Alirezaei$^{2}$ and Riender Happee$^{1}$ [^1][^2][^3]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'references.bib'\ntitle: '**A Two-Stage Bayesian Optimisation for Automatic Tuning of an Unscented Kalman Filter for Vehicle Sideslip Angle Estimation\\*** '\n---\n\nINTRODUCTION\n============\n\nAn accurate and real-time estimation of the vehicle sideslip angle is essential to strengthen the performance of active vehicle" +"---\nabstract: 'We applied localized orbital scaling correction (LOSC) in Bethe-Salpeter equation (BSE) to predict accurate excitation energies for molecules. LOSC systematically eliminates the delocalization error in the density functional approximation and is capable of approximating quasiparticle (QP) energies with accuracy similar or better than the $GW$ Green\u2019s function approach and with much less computational cost. The QP energies from LOSC instead of commonly used $G_{0}W_{0}$ and ev$GW$ are directly used in BSE. We show that the BSE/LOSC approach greatly outperforms the commonly used BSE/$G_{0}W_{0}$ approach for predicting excitations with different characters. For the calculations for Truhlar-Gagliardi test set containing valence, charge transfer (CT) and Rydberg excitations, BSE/LOSC with the Tamm-Dancoff approximation provides a comparable accuracy to time-dependent density functional theory (TDDFT) and BSE/ev$GW$. For the calculations of Stein CT test set and Rydberg excitations of atoms, BSE/LOSC considerably outperforms both BSE/$G_{0}W_{0}$ and TDDFT approaches with a reduced starting point dependence. 
BSE/LOSC is thus a promising and efficient approach to calculate excitation energies for molecular systems.'\nauthor:\n- Jiachen Li\n- Ye Jin\n- Neil Qiang Su\n- Weitao Yang\nbibliography:\n- 'ref.bib'\n- 'software.bib'\ntitle: 'Combining Localized Orbital Scaling Correction and Bethe-Salpeter Equation for Accurate Excitation Energies'\n---\n\n\\[sec:introduction\\]INTRODUCTION" +"---\nabstract: 'An animal is a planar shape formed by attaching congruent regular polygons along their edges. In 1976, Harary and Harborth gave closed isoperimetric formulas for Euclidean animals. Here, we provide analogous formulas for hyperbolic animals. We do this by proving a connection between Sturmian words and the parameters of a discrete analogue of balls in the graph determined by hyperbolic tessellations. This reveals a complexity in hyperbolic animals that is not present in Euclidean animals.'\naddress:\n- 'Zentrum Mathematik, TU M\u00fcnchen, Garching b. M\u00fcnchen, Germany'\n- 'Maths Learning Centre, De Montfort University'\nauthor:\n- \u00c9rika Rold\u00e1n\n- 'Rosemberg Toala-Enriquez'\nbibliography:\n- 'holeyominoes.bib'\ntitle: Isoperimetric Formulas for Hyperbolic Animals\n---\n\nIntroduction\n============\n\nAn *animal* is a planar shape with a connected interior consisting of a finite number of tiles on a regular tessellation of the plane. Using Schl\u00e4fli\u2019s notation, a $\\{p,q\\}$-tessellation consists of tiling the plane with regular $p$-gons with exactly $q$ of these polygons meeting at each vertex. We call an animal living in a $\\{p,q\\}$-tessellation a $\\{p,q\\}$-animal. The polygons forming an animal are called tiles. We define the perimeter to be the number of edges that are in the boundary of the animal. Let $\\mathcal{P}^{p,q}_{min}(n)$ be the" +"---\nabstract: 'The study of extra-solar planets, or simply, exoplanets, planets outside our own Solar System, is fundamentally a grand quest to understand our place in the Universe. 
Discoveries in the last two decades have re-defined our understanding of planets, and helped us comprehend the uniqueness of our very own Earth. In recent years the focus has shifted from planet detection to planet characterisation, where key planetary properties are inferred from telescope observations using Monte Carlo-based methods. However, the efficiency of sampling-based methodologies is put under strain by the high-resolution observational data from next generation telescopes, such as the James Webb Space Telescope and the Ariel Space Mission. We are delighted to announce the acceptance of the Ariel ML Data Challenge 2022 as part of the NeurIPS competition track. The goal of this challenge is to identify a reliable and scalable method to perform planetary characterisation. Depending on the chosen track, participants are tasked to provide either quartile estimates or the approximate distribution of key planetary properties. To this end, a synthetic spectroscopic dataset has been generated from the official simulators for the ESA Ariel Space Mission. The aims of the competition are three-fold. 1) To offer a challenging application" +"---\nabstract: 'Events from alpha interactions on the surfaces of germanium detectors are a major contribution to the background in germanium-based searches for neutrinoless double-beta decay. Surface events are subject to charge trapping, affecting their pulse shape and reconstructed energy. A study of alpha events on the passivated end-plate of a segmented true-coaxial n-type high-purity germanium detector is presented. Charge trapping is analysed in detail and an existing pulse-shape analysis technique to identify alpha events is verified with mirror pulses observed in the non-collecting channels of the segmented test detector. The observed radial dependence of charge trapping confirms previous results. A dependence of the probability of charge trapping on the crystal axes is observed for the first time. 
A first model to describe charge trapping effects within the framework of the simulation software *SolidStateDetectors.jl* is introduced. The influence of metalisation on events from low-energy gamma interactions close to the passivated surface is also presented.'\nauthor:\n- 'I.\u00a0Abt'\n- 'C.\u00a0Gooch'\n- 'F.\u00a0Hagemann'\n- 'L.\u00a0Hauertmann'\n- 'X.\u00a0Liu'\n- 'O.\u00a0Schulz'\n- 'M.\u00a0Schuster'\n- 'A.\u00a0J.\u00a0Zsigmond'\nbibliography:\n- 'main.bib'\ndate: 'Received: date / Accepted: date'\ntitle: Identification and simulation of surface alpha events on passivated surfaces" +"---\nabstract: 'Placement of sensors on vehicles for safety and autonomous capability is a complex optimization problem when considered in the full-blown form, with different constraints. Considering that Quantum Computers are expected to be able to solve certain optimization problems more \u201ceasily\" in the future, the problem was posted as part of the BMW Quantum Computing Challenge 2021. In this paper, we have presented two formulations for quantum-enhanced solutions in a systematic manner. In the process, necessary simplifications are invoked to accommodate the current capabilities of Quantum Simulators and Hardware. The presented results and observations from elaborate simulation studies demonstrate the correct functionality and usefulness of the proposals.'\nauthor:\n- \nbibliography:\n- 'SPO.bib'\ntitle: 'Optimization of Sensor-Placement on Vehicles using Quantum-Classical Hybrid Methods'\n---\n\nVariational quantum algorithms, ansatz, quantum annealing, maximum set-coverage, integer linear programming, sensor placement, combinatorial optimization\n\nIntroduction {#sec:intro}\n============\n\nThe paper presents two quantum computing methods to optimize the placement of sensors on the surface of a vehicle. 
The problem, as posted in the BMW Quantum Computing Challenge 2021 [@bmw_challenge; @bmw_problem], is to arrive at the optimal configurations (position, type and orientation) of the sensors on the vehicle surface that maximize coverage of the Region of Interest" +"---\nabstract: 'We study the response of an optical system with the Kerr nonlinearity demonstrating Bloch oscillations to a periodic train of coherent pulses. It has been found that the intensity of the field excited in the system by pulses resonantly depends on the train period. It is demonstrated numerically and analytically that the response of the system is stronger when the period of the driving pulses is commensurate with the period of the Bloch oscillations. Moreover, large enough pulses are capable of inducing instabilities which eventually lead to the onset of chaotic Bloch oscillations of the wave-function envelope bouncing both in time and space. The analysis reveals that these instabilities are associated with period-doubling bifurcations. A cascade of such bifurcations with increase of the pulses\u2019 amplitude triggers the chaotic behaviour.'\nauthor:\n- 'A. Verbitskiy'\n- 'A. Balanov'\n- 'A. Yulin'\ntitle: Chaotic Bloch oscillations in dissipative optical systems driven by a periodic train of coherent pulses\n---\n\nIntroduction \\[sec:Introduction\\]\n=================================\n\nBloch oscillations are a very important fundamental phenomenon first discovered during the development of the zone theory of solid state physics [@discovery_1; @discovery_2; @discovery_3]. 
The effect manifests itself in counter-intuitive periodic oscillations of quantum particles moving in" +"---\nauthor:\n- Changqing Wang$^1$\n- Ivan Gonin$^1$\n- Anna Grassellino$^1$\n- Sergey Kazakov$^1$\n- Alexander Romanenko$^1$\n- Vyacheslav P Yakovlev$^1$\n- 'Silvia Zorzetti$^{1,2}$'\ndate: |\n $^1$Fermi National Accelerator Laboratory, Batavia, IL, USA\\\n $^2$Corresponding author: zorzetti@fnal.gov\\\ntitle: '**High-efficiency microwave-optical quantum transduction based on a cavity electro-optic superconducting system with long coherence time**'\n---\n\n**Abstract** {#abstract .unnumbered}\n============\n\nFrequency conversion between microwave and optical photons is a key enabling technology to create links between superconducting quantum processors and to realize distributed quantum networks. We propose a microwave-optical transduction platform based on long-coherence-time superconducting radio-frequency (SRF) cavities coupled to electro-optic optical cavities to mitigate the loss mechanisms that limit the attainment of high conversion efficiency. In the design, we optimize the microwave-optical field overlap and optical coupling losses, while achieving long microwave and optical photon lifetime at milli-Kelvin temperatures. This represents a significant enhancement of the transduction efficiency up to 50% under pump power of , corresponding to the few-photon quantum regime. Furthermore, this scheme exhibits high resolution for optically reading out the dispersive shift induced by a superconducting transmon qubit coupled to the SRF cavity. We also show that the fidelity of heralded entanglement generation between two remote quantum systems is enhanced" +"---\nabstract: 'Despite the success of practical solvers in various NP-complete domains such as SAT and CSP as well as using deep reinforcement learning to tackle two-player games such as Go, certain classes of PSPACE-hard planning problems have remained out of reach. 
Even carefully designed domain-specialized solvers can fail quickly due to the exponential search space on hard instances. Recent works that combine traditional search methods, such as best-first search and Monte Carlo tree search, with Deep Neural Networks\u2019 (DNN) heuristics have shown promising progress and can solve a significant number of hard planning instances beyond specialized solvers. To better understand why these approaches work, we studied the interplay of the policy and value networks of DNN-based best-first search on Sokoban and show the surprising effectiveness of the policy network, further enhanced by the value network, as a guiding heuristic for the search. To further understand the phenomena, we studied the cost distribution of the search algorithms and found that Sokoban instances can have heavy-tailed runtime distributions, with tails both on the left and right-hand sides. In particular, for the first time, we show the existence of *left heavy tails* and propose an abstract tree model that can empirically explain" +"---\nabstract: |\n The surroundings of a cancerous tumor impact how it grows and develops in humans. New data from early breast cancer patients contains information on the collagen fibers surrounding the tumorous tissue\u2014offering hope of finding additional biomarkers for diagnosis and prognosis\u2014but poses two challenges for typical analysis. Each image section contains information on hundreds of fibers, and each tissue has multiple image sections contributing to a single prediction of tumor vs. non-tumor. This nested relationship of fibers within image spots within tissue samples requires a specialized analysis approach.\n\n We devise a novel support vector machine (SVM)-based predictive algorithm for this data structure. By treating the collection of fibers as a probability distribution, we can measure similarities between the collections through a flexible kernel approach. 
By assuming the relationship of tumor status between image sections and tissue samples, the constructed SVM problem is non-convex and traditional algorithms cannot be applied. We propose two algorithms that trade off computational accuracy and efficiency to manage data of all sizes. The predictive performance of both algorithms is evaluated on the collagen fiber data set and additional simulation scenarios. We offer reproducible implementations of both algorithms of this approach in the R package" +"---\nabstract: 'We present and study the concept of $m$-periodic Gorenstein objects relative to a pair $(\\mathcal{A,B})$ of classes of objects in an abelian category, as a generalization of $m$-strongly Gorenstein projective modules over associative rings. We prove several properties when $(\\mathcal{A,B})$ satisfies certain homological conditions, like for instance when $({\\mathcal{A}},{\\mathcal{B}})$ is a GP-admissible pair. Connections to Gorenstein objects and Gorenstein homological dimensions relative to these pairs are also established.'\naddress:\n- 'Facultad de Ciencias, Universidad Nacional Aut\u00f3noma de M\u00e9xico. Circuito Exterior, Ciudad Universitaria. CP04510. Mexico City, MEXICO. '\n- 'Instituto de Matem\u00e1ticas. Universidad Nacional Aut\u00f3noma de M\u00e9xico. Circuito Exterior, Ciudad Universitaria. CP04510. Mexico City, MEXICO'\n- 'Instituto de Matem\u00e1tica y Estad\u00edstica \u201cProf. Ing. Rafael Laguardia\u201d. Facultad de Ingenier\u00eda. Universidad de la Rep\u00fablica. CP11300. Montevideo, URUGUAY'\nauthor:\n- 'Mindy Y. Huerta'\n- Octavio Mendoza\n- 'Marco A. P\u00e9rez'\nbibliography:\n- 'biblionSG.bib'\ntitle: '$m$-periodic Gorenstein objects'\n---\n\n[^1] [^2]\n\nIntroduction {#introduction .unnumbered}\n============\n\nLet $R$ be an associative ring with identity. In [@BM2], Bennis and Mahdou introduced the notion of strongly Gorenstein projective modules over $R$. 
These are particular cases of Gorenstein projective $R$-modules $M$ which are part of a short exact sequence $M \\rightarrowtail P \\twoheadrightarrow M$, with $P$ projective." +"---\nabstract: 'We prove that random groups in the Gromov density model at density $d<1\\slash 4$ do not have Property (T), answering a conjecture of Przytycki. We also prove similar results in the $k$-angular model of random groups.'\nauthor:\n- 'Calum J. Ashcroft'\nbibliography:\n- 'bib.bib'\ntitle: 'Random groups do not have Property (T) at densities below $1\\slash 4$'\n---\n\nIntroduction\n============\n\nThe *density model* of random groups was introduced by Gromov in [@gromovasymptotic] to study a \u2018typical\u2019 group. Fix $n\\geq 2, l\\geq 1$, and $01\\slash 3$ has Property (T) [@Zuk] (see also [@kotowski;" +"---\nabstract: 'We introduce a method called MASCOT (Multi-Agent Shape Control with Optimal Transport) to compute optimal control solutions of agents with shape/formation/density constraints. For example, we might want to apply shape constraints on the agents \u2013 perhaps we desire the agents to hold a particular shape along the path, or we want agents to spread out in order to minimize collisions. We might also want a proportion of agents to move to one destination, while the other agents move to another, and to do this in the optimal way, i.e. the source-destination assignments should be optimal. In order to achieve this, we utilize the Earth Mover\u2019s Distance from Optimal Transport to distribute the agents into their proper positions so that certain shapes can be satisfied. This cost is both introduced in the terminal cost and in the running cost of the optimal control problem.'\nauthor:\n- Alex Tong Lin\n- 'Stanley J. Osher'\nbibliography:\n- 'references.bib'\ntitle: 'Multi-Agent Shape Control with Optimal Transport'\n---\n\nIntroduction\n============\n\nOptimal control seeks to find the best policy for an agent that optimizes a certain criterion. 
This general formulation allows optimal control theory to be applied in numerous areas such as robotics, finance," +"---\nabstract: 'Using decentralized data for federated training is one promising emerging research direction for alleviating data scarcity in the medical domain. However, in contrast to large-scale fully labeled data commonly seen in general object recognition tasks, the local medical datasets are more likely to only have images annotated for a subset of classes of interest due to high annotation costs. In this paper, we consider a practical yet under-explored problem, where underrepresented classes only have few labeled instances available and only exist in a few clients of the federated system. We show that standard federated learning approaches fail to learn robust multi-label classifiers with extreme class imbalance and address it by proposing a novel federated learning framework, FedFew. FedFew consists of three stages, where the first stage leverages federated self-supervised learning to learn *class-agnostic* representations. In the second stage, the decentralized partially labeled data are exploited to learn an energy-based multi-label classifier for the common classes. Finally, the underrepresented classes are detected based on the energy and a *prototype*-based nearest-neighbor model is proposed for few-shot matching. We evaluate FedFew on multi-label thoracic disease classification tasks and demonstrate that it outperforms the federated baselines by a large margin.'\nauthor:\n-" +"---\nabstract: 'Core-collapse supernova remnants are the gaseous nebulae of galactic interstellar media (ISM) formed after the explosive death of massive stars. Their morphology and emission properties depend both on the surrounding circumstellar structure shaped by the stellar wind-ISM interaction of the progenitor star and on the local conditions of the ambient medium. 
In the warm phase of the Galactic plane ($n\\approx 1\\, \\rm cm^{-3}$, $T\\approx 8000\\, \\rm K$), an organised magnetic field of strength $7\\, \\mu \\rm G$ has profound consequences on the morphology of the wind bubble of massive stars at rest. In this paper we show through 2.5D magneto-hydrodynamical simulations, in the context of a Wolf-Rayet-evolving $35\\, \\rm M_{\\odot}$ star, that it affects the development of its supernova remnant. When the supernova remnant reaches its middle age ($15$$-$$20\\, \\rm kyr$), it adopts a tubular shape that results from the interaction between the isotropic supernova ejecta and the anisotropic, magnetised, shocked stellar progenitor bubble into which the supernova blast wave expands. Our calculations for non-thermal emission, i.e. radio synchrotron and inverse Compton radiation, reveal that such supernova remnants can, due to projection effects, appear as rectangular objects in certain cases. This mechanism for shaping a supernova remnant is" +"---\nabstract: 'Automatic modulation classification is of crucial importance in wireless communication networks. Deep learning based automatic modulation classification schemes have attracted extensive attention due to their superior accuracy. However, the data-driven method relies on a large number of training samples and the classification accuracy is poor in the low signal-to-noise ratio (SNR) regime. In order to tackle these problems, a novel data-and-knowledge dual-driven automatic modulation classification scheme based on radio frequency machine learning is proposed by exploiting the attribute features of different modulations. The visual model is utilized to extract visual features. The attribute learning model is used to learn the attribute semantic representations. The transformation model is proposed to convert the attribute representation into the visual space.
Extensive simulation results demonstrate that our proposed automatic modulation classification scheme can achieve better performance than the benchmark schemes in terms of the classification accuracy, especially in the low SNR. Moreover, the confusion among high-order modulations is reduced by using our proposed scheme compared with other traditional schemes.'\nauthor:\n- |\n Rui Ding$^{ \\S}$, Hao Zhang$^{ \\S}$, Fuhui Zhou$^{ \\S }$, Qihui Wu$^{ \\S }$, Zhu Han$^\\dagger$\\\n $^{\\S}$Nanjing University of Aeronautics and Astronautics, China, $^\\dagger$University of Houston, USA\\\n Email: *{rui\\_ding, haozhangcn@nuaa.edu.cn, zhoufuhui@ieee.org," +"---\nabstract: 'Parametric equations of state (EoSs) provide an important tool for systematically studying EoS effects in neutron star merger simulations. In this work, we perform a numerical validation of the $M^*$-framework for parametrically calculating finite-temperature EoS tables. The framework, introduced in Raithel et al. (2019), provides a model for generically extending any cold, [$\\beta$-equilibrium]{}\u00a0EoS to finite-temperatures and arbitrary electron fractions. In this work, we perform numerical evolutions of a binary neutron star merger with the SFHo finite-temperature EoS, as well as with the $M^*$-approximation of this same EoS, where the approximation uses the zero-temperature, [$\\beta$-equilibrium]{}\u00a0 slice of SFHo and replaces the finite-temperature and composition-dependent parts with the $M^*$-model. We find that the approximate version of the EoS is able to accurately recreate the temperature and thermal pressure profiles of the binary neutron star remnant, when compared to the results found using the full version of SFHo. We additionally find that the merger dynamics and gravitational wave signals agree well between both cases, with differences of $\\lesssim1-2\\%$ introduced into the post-merger gravitational wave peak frequencies by the approximations of the EoS. 
We conclude the $M^*$-framework can be reliably used to probe neutron star merger properties in numerical simulations.'\nauthor:" +"---\nabstract: 'Several constructions on directed graphs originating in the study of flow equivalence in symbolic dynamics (e.g., splittings and delays) are known to preserve the Morita equivalence class of Leavitt path algebras over any coefficient field $\\mathbb{F}$. We prove that many of these equivalence results are not only independent of $\\mathbb{F}$, but are largely independent of linear algebra altogether. We do this by formulating and proving generalisations of these equivalence theorems in which the category of $\\mathbb{F}$-vector spaces is replaced by an arbitrary category with binary coproducts, showing that the Morita equivalence results for Leavitt path algebras depend only on the ability to form direct sums of vector spaces. We suggest that the framework developed in this paper may be useful in studying other problems related to Morita equivalence of Leavitt path algebras.'\naddress: 'Department of Mathematics & Statistics, University of Maine. 5752 Neville Hall, Room 333. Orono, ME 04469 USA'\nauthor:\n- Tyrone Crisp\n- Davis MacDonald\nbibliography:\n- 'Leavitt.bib'\ndate: June 2022\ntitle: Flow equivalence of diagram categories and Leavitt path algebras\n---\n\nIntroduction\n============\n\nSuppose that the picture below represents a communication network, with each vertex in the picture representing a node in the network, and" +"---\nabstract: 'This paper considers the distributed information bottleneck (D-IB) problem for a primitive Gaussian diamond channel with two relays and Rayleigh fading. Due to the bottleneck constraint, it is impossible for the relays to inform the destination node of the perfect channel state information (CSI) in each realization. 
To evaluate the bottleneck rate, we provide an upper bound by assuming that the destination node knows the CSI and the relays can cooperate with each other, and also three achievable schemes with simple symbol-by-symbol relay processing and compression. Numerical results show that the lower bounds obtained by the proposed achievable schemes can come close to the upper bound on a wide range of relevant system parameters.'\nauthor:\n- \nbibliography:\n- 'IEEEabrv.bib'\n- 'Ref.bib'\ntitle: Distributed Information Bottleneck for a Primitive Gaussian Diamond Channel with Rayleigh Fading\n---\n\nIntroduction\n============\n\nIntroduced by Tishby in [@tishby2000information], the information bottleneck (IB) paradigm, where relevant information about a signal $X$ is extracted from an observation $Y$ and conveyed to a destination via a rate-constrained bottleneck link, has found remarkable applications in communication systems and neural networks [@tishby2015deep; @shwartz2017opening; @goldfeld2020information; @zaidi2020information]. An interesting application of the IB problem in communications consists of a source node," +"---\nabstract: 'Nuclear mass measurements of isotopes are key to improving our understanding of nuclear structure across the chart of nuclides, in particular for the determination of the appearance or disappearance of nuclear shell closures. We present high-precision mass measurements of neutron-rich Ca, Ti and V isotopes performed at TRIUMF\u2019s Ion Trap for Atomic and Nuclear science (TITAN) and the Low Energy Beam and Ion Trap (LEBIT) facilities. These measurements were made using the TITAN multiple-reflection time-of-flight mass spectrometer (MR-ToF-MS) and the LEBIT 9.4T Penning trap mass spectrometer. In total, 13 masses were measured, eight of which represent increases in precision over previous measurements. 
These measurements refine trends in the mass surface around $N = 32$ and $N = 34$, and support the disappearance of the $N = 32$ shell closure with increasing proton number. Additionally, our data does not support the presence of a shell closure at $N = 34$.'\nauthor:\n- 'W. S. Porter'\n- 'E. Dunling'\n- 'E. Leistenschneider'\n- 'J. Bergmann'\n- 'G. Bollen'\n- 'T. Dickel'\n- 'K. A. Dietrich'\n- 'A. Hamaker'\n- 'Z. Hockenbery'\n- 'C. Izzo'\n- 'A. Jacobs'\n- 'A. Javaji'\n- 'B. Kootte'\n- 'Y. Lan'\n- 'I. Miskun'\n-" +"---\nabstract: 'The attention mechanisms of transformers effectively extract pertinent information from the input sequence. However, the quadratic complexity of self-attention w.r.t the sequence length incurs heavy computational and memory burdens, especially for tasks with long sequences. Existing accelerators face performance degradation in these tasks. To this end, we propose SALO to enable hybrid sparse attention mechanisms for long sequences. SALO contains a data scheduler to map hybrid sparse attention patterns onto hardware and a spatial accelerator to perform the efficient attention computation. We show that SALO achieves $17.66$x and $89.33$x speedup on average compared to GPU and CPU implementations, respectively, on typical workloads, i.e., Longformer and ViL.'\nauthor:\n- 'Guan Shen, Jieru Zhao, Quan Chen, Jingwen Leng, Chao Li, Minyi Guo'\nbibliography:\n- 'ref.bib'\ntitle: 'SALO: An Efficient Spatial Accelerator Enabling Hybrid Sparse Attention Mechanisms for Long Sequences'\n---\n\nIntroduction\n============\n\nNowadays, models based on Transformer [@vaswani2017attention] have achieved an extremely high performance in deep learning research. In the area of natural language processing, Transformer and its variants, like BERT [@devlin2018bert] and GPT-3 [@brown2020language], have outperformed other models based on RNN and CNN in most tasks. 
Inspired by the great ability that transformers have shown in NLP tasks, researchers" +"---\nabstract: 'In plant breeding the presence of a genotype by environment (`GxE`) interaction has a strong impact on cultivation decision making and the introduction of new crop cultivars. The combination of linear and bilinear terms has been shown to be very useful in modelling this type of data. A widely-used approach to identify `GxE` is the Additive Main Effects and Multiplicative Interaction Effects (AMMI) model. However, as data frequently can be high-dimensional, Markov chain Monte Carlo (MCMC) approaches can be computationally infeasible. In this article, we consider a variational inference approach for such a model. We derive variational approximations for estimating the parameters and we compare the approximations to MCMC using both simulated and real data. The new inferential framework we propose is on average two times faster whilst maintaining the same predictive performance as MCMC.'\naddress:\n- 'Hamilton Institute, Department of Mathematics and Statistics, Maynooth University, Ireland'\n- 'Insight Centre for Data Analytics, Maynooth University, Ireland'\nauthor:\n- \n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: Variational Inference for Additive Main and Multiplicative Interaction Effects Models\n---\n\n,\n\n,\n\n,\n\nIntroduction\n============\n\nIn plant breeding, it is often of interest to identify which genotypes perform best in different environments." +"---\nabstract: 'We present high-resolution ($\\lesssim0.1$arcsec) ALMA observations of the strongly-lensed galaxy HATLASJ113526.2-01460 at redshift $z\\sim3.1$ discovered in the Gama 12$^{\\rm th}$ field of the *Herschel*-ATLAS survey. The gravitationally lensed system is remarkably peculiar in that neither the background source nor the foreground lens show a clearly detected optical/NIR emission. 
We perform accurate lens modeling and source morphology reconstruction in three different (sub-)mm continuum bands, and in the C\\[II\\] and CO(8-7) spectral lines. The modeling indicates a foreground lensing (likely elliptical) galaxy with mass $\\gtrsim10^{11}\\, M_\\odot$ at $z\\gtrsim1.5$, while the source (sub-)mm continuum and line emissions are amplified by factors $\\mu\\sim6-13$. We estimate extremely compact sizes $\\lesssim0.5$ kpc for the star-forming region and $\\lesssim 1$ kpc for the gas component, with no clear evidence of rotation or of ongoing merging events. We perform broadband SED-fitting and retrieve the intrinsic de-magnified physical properties of the source, which is found to feature a very high star-formation rate $\\gtrsim10^3\\, M_\\odot$ yr$^{-1}$, that given the compact sizes is on the verge of the Eddington limit for starbursts; the radio luminosity at 6 cm from available EVLA observations is consistent with the star-formation activity. The galaxy is found to be extremely rich in gas $\\sim10^{11}\\," +"---\nabstract: 'HD 166620 was recently identified as a Maunder Minimum candidate based on nearly 50 years of [ H & K]{} activity data from Mount Wilson and Keck-HIRES [@Baum2022]. These data showed clear cyclic behavior on a 17-year timescale during the Mount Wilson survey that became flat when picked up later with Keck-HIRES planet-search observations. Unfortunately, the transition between these two data sets\u2014and therefore the transition into the candidate Maunder Minimum phase\u2014contained little to no data. Here we present additional Mount Wilson data not present in @Baum2022 along with photometry over a nearly 30-year baseline that definitively trace the transition from cyclic activity to a prolonged phase of flat activity. We present this as conclusive evidence of the star entering a grand magnetic minimum and therefore the first true Maunder Minimum analog. 
We further show that neither the overall brightness nor the chromospheric activity level (as measured by [$S_{\\mathrm{HK}}$]{}) is significantly lower during the grand magnetic minimum than its activity cycle minimum, implying that anomalously low mean or instantaneous activity levels are not a good diagnostic or criterion for identifying additional Maunder Minimum candidates. Intraseasonal variability in [$S_{\\mathrm{HK}}$]{}, however, *is* lower in the star\u2019s grand minimum; this may prove" +"---\nabstract: 'Recent contrastive learning methods achieved state-of-the-art in low label regimes. However, the training requires large batch sizes and heavy augmentations to create multiple views of an image. With non-contrastive methods, the negatives are implicitly incorporated in the loss, allowing different images and modalities as pairs. Although the meta-information (i.e., age, sex) in medical imaging is abundant, the annotations are noisy and prone to class imbalance. In this work, we exploited already existing temporal information (different visits from a patient) in a longitudinal optical coherence tomography (OCT) dataset using temporally informed non-contrastive loss (**TINC**) without increasing complexity and need for negative pairs. Moreover, our novel pair-forming scheme can avoid heavy augmentations and implicitly incorporates the temporal information in the pairs. Finally, these representations learned from the pretraining are more successful in predicting disease progression where the temporal information is crucial for the downstream task. 
More specifically, our model outperforms existing models in predicting the risk of conversion within a time frame from intermediate age-related macular degeneration (AMD) to the late wet-AMD stage.'\nauthor:\n- Taha Emre\n- Arunava Chakravarty\n- Antoine Rivail\n- Sophie Riedl\n- |\n \\\n Ursula Schmidt-Erfurth\n- Hrvoje Bogunovi\u0107\nbibliography:\n- 'mybibliography.bib'\ntitle: 'TINC: Temporally" +"---\nabstract: 'We present 3D radiation-hydrodynamical (RHD) simulations of star cluster formation and evolution in massive, self-gravitating clouds, whose dust columns are optically thick to infrared (IR) photons. We use `VETTAM` \u2013 a recently developed, novel RHD algorithm, which uses the Variable Eddington Tensor (VET) closure \u2013 to model the IR radiation transport through the cloud. We also use realistic temperature ($T$) dependent IR opacities ($\\kappa$) in our simulations, improving upon earlier works in this area, which used either constant IR opacities or simplified power laws ($\\kappa \\propto T^2$). We investigate the impact of the radiation pressure of these IR photons on the star formation efficiency (SFE) of the cloud, and its potential to drive dusty winds. We find that IR radiation pressure is unable to regulate star formation or prevent accretion onto the star clusters, even for very high gas surface densities ($\\Sigma > 10^5 {\\, \\mathrm{M}_{\\sun} \\, \\mathrm{pc}^{-2}}$), contrary to recent semi-analytic predictions and simulation results using simplified treatments of the dust opacity. We find that the commonly adopted simplifications of $\\kappa \\propto T^2$ or constant $\\kappa$ for the IR dust opacities leads to this discrepancy, as those approximations overestimate the radiation force. By contrast, with realistic opacities" +"---\nabstract: |\n Transformers are neural network models that utilize multiple layers of self-attention heads and have exhibited enormous potential in natural language processing tasks. 
Meanwhile, there have been efforts to adapt transformers to visual tasks of machine learning, including Vision Transformers and Swin Transformers. Although some researchers use Vision Transformers for reinforcement learning tasks, their experiments remain at a small scale due to the high computational cost. Experiments conducted at a large scale, on the other hand, have to rely on techniques to cut the costs of Vision Transformers, which also yield inferior results.\n\n To address this challenge, this article presents the first online reinforcement learning scheme that is based on Swin Transformers: Swin DQN [^1]. Swin Transformers are promising as a backbone in neural networks by splitting groups of image pixels into small patches and applying local self-attention operations inside the (shifted) windows of fixed sizes. They have demonstrated state-of-the-art performance in benchmarks. In contrast to existing research, our novel approach reduces the computational costs while significantly improving the performance. We demonstrate the superior performance with experiments on 49 games in the Arcade Learning Environment. The results show that our approach, using Swin Transformers with
In this work we develop a many-body approach based on the thermal discrete dipole approximation (TDDA) that is able to describe the NFRHT between nonreciprocal objects of arbitrary size and shape. We illustrate the potential and the relevance of this approach with the analysis of two related phenomena, namely the existence of persistent thermal currents and the photon thermal Hall effect, in a system with several magneto-optical bodies. Our many-body TDDA approach paves the way for closing the gap between experiment and theory that is hindering the progress of the topic of NFRHT in many-body systems.'\nauthor:\n- 'E. Moncada-Villa$^{1}$'\n- 'J.\u00a0C. Cuevas$^{2}$'\ntitle:" +"---\nabstract: 'We equip the basic local crossing bimodules in Ozsv[\u00e1]{}th\u2013Szab[\u00f3]{}\u2019s theory of bordered knot Floer homology with the structure of 1-morphisms of 2-representations, categorifying the $U_q({\\mathfrak{gl}}(1|1)^+)$-intertwining property of the corresponding maps between ordinary representations. Besides yielding a new connection between bordered knot Floer homology and higher representation theory in line with work of Rouquier and the second author, this structure gives an algebraic reformulation of a \u201ccompatibility between summands\u201d property for Ozsv[\u00e1]{}th\u2013Szab[\u00f3]{}\u2019s bimodules that is important when building their theory up from local crossings to more global tangles and knots.'\naddress:\n- 'Department of Mathematics, University of Southern California, 3620 S. 
Vermont Ave., KAP 104, Los Angeles, CA 90089-2532'\n- 'Department of Mathematics, North Carolina State University, 2108 SAS Hall, Raleigh, NC 27695'\nauthor:\n- William Chang\n- Andrew Manion\nbibliography:\n- 'biblio.bib'\ntitle: 'Compatibility in Ozsv[\u00e1]{}th\u2013Szab[\u00f3]{}\u2019s bordered HFK via higher representations'\n---\n\nIntroduction\n============\n\nOzsv[\u00e1]{}th\u2013Szab[\u00f3]{}\u2019s theory [@OSzNew; @OSzNewer; @OSzHolo; @OSzHolo2] of bordered knot Floer homology, or bordered HFK, has proven to be highly efficient for computations (see [@HFKCalc] for a fast computer program based on the theory). It works by assigning certain dg algebras to sets of $n$ tangle endpoints (oriented up or down) and certain $A_{\\infty}$ bimodules" +"---\nabstract: 'With its exquisite astrometric precision, the latest Gaia data release includes $\\sim$$10^5$ astrometric binaries, each of which have measured orbital periods, eccentricities, and the Thiele-Innes orbital parameters. Using these and an estimate of the luminous stars\u2019 masses, we derive the companion stars\u2019 masses, from which we identify a sample of 24 binaries in long period orbits ($P_{\\rm orb}~\\sim~{\\rm yrs}$) with a high probability of hosting a massive ($>$1.4\u00a0${\\ifmmode {{M_\\odot}}\\else{$M_\\odot$}\\fi}$), dark companion: a neutron star (NS) or black hole (BH). The luminous stars in these binaries tend to be F-, G-, and K-dwarfs with the notable exception of one hot subdwarf. Follow-up spectroscopy of eight of these stars shows no evidence for contamination by white dwarfs or other luminous stars. The dark companions in these binaries span a mass range of 1.35\u20132.7 ${\\ifmmode {{M_\\odot}}\\else{$M_\\odot$}\\fi}$ and therefore likely includes both NSs and BHs without a significant mass gap in between. 
Furthermore, the masses of several of these objects are $\\simeq$1.7 ${\\ifmmode {{M_\\odot}}\\else{$M_\\odot$}\\fi}$, similar to the mass of at least one of the merging compact objects in GW190425. Given that these orbits are too wide for significant mass accretion to have occurred, this sample implies that some NSs are born" +"---\nabstract: 'Standard plane (SP) localization is essential in routine clinical ultrasound (US) diagnosis. Compared to 2D US, 3D US can acquire multiple view planes in one scan and provide complete anatomy with the addition of coronal plane. However, manually navigating SPs in 3D US is laborious and biased due to the orientation variability and huge search space. In this study, we introduce a novel reinforcement learning (RL) framework for automatic SP localization in 3D US. Our contribution is three-fold. First, we formulate SP localization in 3D US as a tangent-point-based problem in RL to restructure the action space and significantly reduce the search space. Second, we design an auxiliary task learning strategy to enhance the model\u2019s ability to recognize subtle differences crossing Non-SPs and SPs in plane search. Finally, we propose a spatial-anatomical reward to effectively guide learning trajectories by exploiting spatial and anatomical information simultaneously. We explore the efficacy of our approach on localizing four SPs on uterus and fetal brain datasets. The experiments indicate that our approach achieves a high localization accuracy as well as robust performance.'\nauthor:\n- 'Yuxin Zou[^1]'\n- Haoran Dou\n- Yuhao Huang\n- Xin Yang\n- Jikuan Qian\n- Chaojiong Zhen\n-" +"---\nabstract: |\n Edge-to-edge tilings of the sphere by congruent quadrilaterals are completely classified in a series of three papers. 
This last one classifies the case of $a^3b$-quadrilaterals with some irrational angle: there is a sequence of $1$-parameter families of quadrilaterals admitting $2$-layer earth map tilings together with their basic flip modifications under an extra condition, and $5$ sporadic quadrilaterals each admitting a special tiling. A summary of the full classification is presented at the end.\n\n [*Keywords*]{}: spherical tiling, quadrilateral, classification, earth map tiling, irrational angle.\n\n [*2000 MR Subject Classification*]{} 52C20, 05B45\nauthor:\n- |\n Yixi Liao, Pinren Qian, Erxiao Wang[^1], Yingyun Xu\\\n Zhejiang Normal University\ntitle: 'Tilings of the sphere by congruent quadrilaterals III: edge combination $a^3b$ with general angles'\n---\n\nIntroduction\n============\n\nIn an edge-to-edge tiling of the sphere by congruent quadrilaterals, the tile can only have four edge arrangements [@ua2; @lpwx]: $a^2bc$, $a^2b^2$, $a^3b$, $a^4$. Akama and Sakano classified tilings for $a^2b^2$ and $a^4$ in [@sa] via the list of triangular tilings in [@ua]. Tilings for $a^2bc$ are classified in the first paper [@lpwx] of this series via the methods in [@wy1; @wy2; @awy; @wy3] developed for pentagonal tilings. Tilings for $a^3b$ with rational angles are classified in
By generalizing A\\* potentials to time dependency, i.e.\u00a0the estimate of the distance from a vertex to the target also depends on the time of day when the vertex is visited, we achieve significantly faster query times than previously possible. Our evaluation shows that our approach enables interactive query times on continental-sized road networks while allowing live traffic updates within a fraction of a minute. We achieve a speedup of at least two orders of magnitude over Dijkstra\u2019s algorithm and up to one order of magnitude over state-of-the-art time-independent A\\* potentials.'\nauthor:\n- Nils Werner\n- Tim Zeitz\ntitle: 'Combining Predicted and Live Traffic with Time-Dependent A\\* Potentials'\n---\n\nIntroduction\n============\n\nAn important feature of modern routing applications" +"---\nabstract: |\n We study the Hamiltonian formulation of gauge theory on spacetime manifolds endowed with a codimension-$1$ submanifold with boundary. The latter is thought of as a corner component for the spacetime manifold. We characterise the reduced phase space of the theory whenever it is described by a local momentum map for the action of the gauge group ${\\mathscr{G}}$, by adapting Fr\u00e9chet reduction by stages to the case of gauge subgroups.\n\n The local momentum map decomposes into a bulk term called constraint map defining a coisotropic constraint set, and a boundary term called flux map. The first stage, or *constraint reduction* views the constraint set as the zero locus of a momentum map for a normal subgroup $\\mathcal{G}_\\circ\\subset\\mathcal{G}$, called *constraint gauge group*. The second stage, or *flux superselection*, interprets the flux map as the momentum map for the residual action of the *flux gauge group* $\\underline{\\mathcal{G}}\\doteq\\mathcal{G}/\\mathcal{G}_\\circ$. 
Equivariance is controlled by cocycles of the flux gauge group $\\underline{\\mathcal{G}}$.\n\n Whereas the only physically admissible value of the constraint map is zero, the flux map is largely unconstrained. As a result, the reduced phase space of the theory, when smooth, is only a partial Poisson manifold $\\underline{\\underline{\\mathcal{C}}}=\\mathcal{C}/\\mathcal{G} \\simeq \\underline{\\mathcal{C}}/\\underline{\\mathcal{G}}$. Its symplectic leaves\u2014defined" +"---\nabstract: 'Goal recognition (GR) involves inferring the goals of other vehicles, such as a certain junction exit, which can enable more accurate prediction of their future behaviour. In autonomous driving, vehicles can encounter many different scenarios and the environment may be partially observable due to occlusions. We present a novel GR method named Goal Recognition with Interpretable Trees under Occlusion (OGRIT). OGRIT uses decision trees learned from vehicle trajectory data to infer the probabilities of a set of generated goals. We demonstrate that OGRIT can handle missing data due to occlusions and make inferences across multiple scenarios using the same learned decision trees, while being computationally fast, accurate, interpretable and verifiable. We also release the inDO, rounDO and OpenDDO datasets of occluded regions used to evaluate OGRIT.'\nauthor:\n- '`{cillian.brewitt@, m.tamborski@sms., cheng.wang@, s.albrecht@}ed.ac.uk`'\nbibliography:\n- 'library.bib'\ntitle: '**Verifiable Goal Recognition for Autonomous Driving with Occlusions** '\n---\n\nIntroduction\n============\n\nTo navigate through complex urban environments, an autonomous vehicle (AV) must have the ability to predict the future trajectories of other road users. 
One method of doing this is to first perform goal recognition (GR) to infer the goals of other vehicles, such as taking certain junction exits or roundabout" +"---\nabstract: |\n In the context of Radioastronomic applications where the Analog Radio-over-Fiber technology is used for the antenna downlink, detrimental nonlinearity effects arise because of the interference between the forward signal generated by the laser and the Rayleigh backscattered one which is re-forwarded by the laser itself toward the photodetector.\n\n The adoption of the so called *dithering* technique, which involves the direct modulation of the laser with a sinusoidal tone and takes advantage of the laser chirping phenomenon, has been proved to reduce such Rayleigh Back Scattering - induced nonlinearities. The frequency and the amplitude of the *dithering tone* should both be as low as possible, in order to avoid undesired collateral effects on the received spectrum as well as keep at low levels the global energy consumption.\n\n Through a comprehensive analysis of *dithered* Radio over Fiber systems, it is demonstrated that a progressive reduction of the *dithering tone* frequency affects in a peculiar fashion both the chirping characteristics of the field emitted by the laser and the spectrum pattern of the received signal at the fiber end.\n\n Accounting for the concurrent effects caused by such phenomena, optimal operating conditions are identified for the implementation of the *dithering tone*" +"---\nabstract: 'We investigate the optimization of the photon polarization to increase the yield of the Breit-Wheeler pair production in arbitrarily polarized plane wave backgrounds. We show that the optimized photon polarization can improve the positron yield by more than $20\\%$ compared to the unpolarized case, in the intensity regime of current laser-particle experiments. 
The seed photon\u2019s optimal polarization results from the polarization coupling with the polarization of the laser pulse. The compact expressions of the coupling coefficients in both the perturbative and nonperturbative regimes are given. Because of the evident difference in the coupling coefficients for the linear and circular polarization components, the seed photon\u2019s optimal polarization state in an elliptically polarized laser background deviates considerably from the orthogonal state of the laser polarization.'\nauthor:\n- Yunquan Gao\n- Suo Tang\nbibliography:\n- 'NBW\\_optiaml\\_polar.bib'\ntitle: 'Optimal photon polarization toward the observation of the nonlinear Breit-Wheeler pair production'\n---\n\nIntroduction\n============\n\nThe production of electron-positron pairs in the collision of two high-energy photons, now referred to as the linear Breit-Wheeler process (LBW), was first proposed in the 1930s\u00a0[@PhysRev.46.1087]. The production yield depends not only on the photons\u2019 dynamical parameters, but also on the relative polarization of the two photons" +"---\nabstract: 'We address the problem of random search for a target in an environment with space-dependent diffusion coefficient $D(x)$. From a general form of the diffusion differential operator that includes It\u00f4, Stratonovich, and H\u00e4nggi-Klimontovich interpretations of the associated stochastic process, we obtain the first-passage time distribution and the search efficiency $\\mathcal{E}=\\langle 1/t \\rangle$. For the paradigmatic power-law diffusion coefficient $D(x) = D_0|x|^{\\alpha}$, with $\\alpha<2$, which controls whether the mobility increases or decreases with the distance from a target at the origin, we show the impact of the different interpretations. For the Stratonovich framework, we obtain a closed expression of the search efficiency, valid for arbitrary diffusion coefficient $D(x)$. 
We show that a heterogeneous diffusivity profile leads to lower efficiency than the homogeneous average level, and the efficiency depends only on the distribution of diffusivity values and not on its spatial organization, features that breakdown under other interpretations.'\naddress:\n- '$^{1}$ Department of Physics, PUC-Rio, Rua Marqu\u00eas de S\u00e3o Vicente 225, 22451-900, Rio de Janeiro, RJ, Brazil'\n- '$^{2}$ Institute of Science and Technology for Complex Systems, INCT-SC, Brazil'\nauthor:\n- 'M. A. F. dos Santos$^{1}$, L. Menon Jr.$^{1}$, and C. Anteneodo$^{1,2}$'\ntitle: ' Efficiency of random search with space-dependent" +"---\nabstract: 'The emerging memristive Memory Processing Unit (mMPU) overcomes the memory wall through memristive devices that unite storage and logic for real processing-in-memory (PIM) systems. At the core of the mMPU is stateful logic, which is accelerated with memristive partitions to enable logic with massive inherent parallelism within crossbar arrays. This paper vastly accelerates the fundamental operations of matrix-vector multiplication and convolution in the mMPU, with either full-precision or binary elements. These proposed algorithms establish an efficient foundation for large-scale mMPU applications such as neural-networks, image processing, and numerical methods. We overcome the inherent asymmetry limitation in the previous in-memory full-precision matrix-vector multiplication solutions by utilizing techniques from block matrix multiplication and reduction. We present the first fast in-memory binary matrix-vector multiplication algorithm by utilizing memristive partitions with a tree-based popcount reduction ($\\mbox{\\boldmath ${39}\\times$}$ faster than previous work). 
For convolution, we present a novel in-memory input-parallel concept which we utilize for a full-precision algorithm that overcomes the asymmetry limitation in convolution, while also improving latency ($\\mbox{\\boldmath ${2}\\times$}$ faster than previous work), and the first fast binary algorithm ($\\mbox{\\boldmath ${12}\\times$}$ faster than previous work).'\nauthor:\n- \nbibliography:\n- 'refs.bib'\ntitle: |\n MatPIM: Accelerating Matrix Operations\\\n with Memristive Stateful Logic\n---" +"---\nabstract: 'We present our experience as annotators in the creation of high-quality, adversarial machine-reading-comprehension data for extractive QA for Task 1 of the First Workshop on Dynamic Adversarial Data Collection (DADC). DADC is an emergent data collection paradigm with both models and humans in the loop. We set up a quasi-experimental annotation design and perform quantitative analyses across groups with different numbers of annotators focusing on successful adversarial attacks, cost analysis, and annotator confidence correlation. We further perform a qualitative analysis of our perceived difficulty of the task given the different topics of the passages in our dataset and conclude with recommendations and suggestions that might be of value to people working on future DADC tasks and related annotation interfaces.'\nauthor:\n- |\n Damian Y. 
Romero Diaz,^![image](uofa.png),![image](explosion.png)^ Magdalena Anio\u0142,^![image](explosion.png)^ John Culnan^![image](uofa.png)^\\\n `damian@explosion.ai, magda@explosion.ai, jmculnan@arizona.edu`\\\n ^![image](explosion.png)^ Explosion, ^![image](uofa.png)^ University of Arizona\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: |\n Collecting high-quality adversarial data for machine reading\\\n comprehension tasks with humans and models in the loop\n---\n\n=1\n\nIntroduction\n============\n\nWe present quantitative and qualitative analyses of our experience as annotators in the machine reading comprehension shared task for the First Workshop on Dynamic Adversarial Data Collection.[^1]. The shared task was a" +"---\nauthor:\n- 'O.\u00a0Sokoliuk'\n- 'S. Praharaj'\n- 'A.\u00a0Baransky'\n- 'P.K. Sahoo'\nsubtitle: 'I. Ray-tracing'\ntitle: Accretion flows around exotic tidal wormholes\n---\n\n[This paper investigates the various spherically symmetric wormhole solutions in the presence of tidal forces and applies numerous methods, such as test particle orbital dynamics, ray-tracing and microlensing.]{} [We make the theoretical predictions on the test particle orbital motion around the tidal wormholes with the use of normalized by $\\mathcal{L}^2$ effective potential. In order to obtain the ray-tracing images (of both geometrically thin and thick accretion disks, relativistic jets), we properly modify the open source `GYOTO` code with python interface]{} [We applied this techniques to probe the accretion flows nearby the Schwarzschild-like and charged Reissner-N\u00f6rdstrom (RS) wormholes (we assumed both charged RS wormhole and special case with the vanishing electromagnetic charge, namely Damour-Solodukhin (DS) wormhole). It was shown that the photon sphere for Schwarzschild-like wormhole presents for both thin and thick accretion disks and even for the vanishing tidal forces. 
Moreover, it was observed that $r_{\\mathrm{ph}}\\to\\infty$ as $\\alpha\\to\\infty$, which constraints $\\alpha$ parameter to be sufficiently small and positive in order to respect the EHT observations. On the other hand, for the case of RS wormhole," +"---\nabstract: 'Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks. Recent attempts have been launched to, on one side, address the problem of learning from pervasive private data, and on the other side, learn from long-tailed data. However, both assumptions might hold in practical applications, while an effective method to simultaneously alleviate both issues is yet under development. In this paper, we focus on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework. We characterize three scenarios with different local or global long-tailed data distributions in the FL framework, and highlight the corresponding challenges. The preliminary results under different scenarios reveal that substantial future work are of high necessity to better resolve the characterized federated long-tailed learning tasks.'\nauthor:\n- 'Zihan Chen$^{1,2*}$'\n- 'Songshang Liu$^1$[^1]'\n- Hualiang Wang$^1$\n- 'Howard H. Yang$^1$'\n- |\n \\\n Tony Q.S. 
Quek$^2$ Zuozhu Liu$^{1\\dag}$ $^1$ZJU-UIUC Institute, Zhejiang University, China\\\n $^2$Singapore University of Technology and Design, Singapore zihan\\_chen@mymail.sutd.edu.sg, songshang.17@intl.zju.edu.cn, hualiang\\_wang@zju.edu.cn, haoyang@intl.zju.edu.cn, tonyquek@sutd.edu.sg, zuozhuliu@intl.zju.edu.cn.\nbibliography:\n- 'ijcai22.bib'\ntitle: 'Towards Federated Long-Tailed Learning'\n---\n\nIntroduction\n============\n\nFederated learning (FL) has garnered increasing attentions from both academia and industries, as it" +"---\nauthor:\n- 'Anupam Ghosh,'\n- 'Partha Konar,'\n- Rishav Roshan\nbibliography:\n- 'ref.bib'\ntitle: 'Top-philic Dark Matter in a Hybrid KSVZ axion framework'\n---\n\nIntroduction {#Intro}\n============\n\nThe Standard Model (SM) of fundamental particles is one of the outstanding achievements of modern-day physics, which has been experimentally verified at many frontiers. Ever since the discovery of the Higgs boson [@CMS:2012qbp; @ATLAS:2012yve], the last missing piece of SM, the particle physics community is eagerly scrutinizing collider data to witness if nature can offer a glimmer of hope in the ongoing hunt for new physics. Physics beyond the SM (BSM) is envisaged as inevitable since SM still fails to account for various issues to give us a fully coherent description of nature. Some of the principal concerns are related to the Strong CP problem [@Peccei:1977hh; @Peccei:1977ur], the existence of the dark matter (DM) [@Sofue:2000jx; @Clowe:2006eq] in the Universe, non-zero but minuscule neutrino masses [@Super-Kamiokande:1998kpq; @SNO:2002tuh; @K2K:2002icj], matter-antimatter asymmetry of the Universe [@Riotto:1999yt; @Dine:2003ax] etc. 
The incapability of the SM in explaining these issues motivates us to look at possible extensions accommodating these aspects.\n\nDifferent celestial observational evidence [@Sofue:2000jx; @Clowe:2006eq] at diverse length scales suggests the existence of a non-baryonic, non-luminous, gravitationally" +"---\nabstract: 'This paper studies ICMEs detected by both Voyager spacecraft during propagation from 1 to 10\u00a0AU, with observations from 1977 to 1980. ICMEs are detected by using several signatures in the in-situ data, the primary one being the low measured to expected proton temperature ratio. We found 21 events common to both spacecraft and study their internal structure in terms of plasma and magnetic field properties. We find that ICMEs are expanding as they propagate outwards, with decreasing density and magnetic field intensities, in agreement with previous studies. We first carry out a statistical study and then a detailed analysis of each case. Furthermore, we analyse one case in which a shock can be clearly detected by both spacecraft. The methods described here can be interesting for other studies combining data sets from heliospheric missions. Furthermore, they highlight the importance of exploiting useful data from past missions.'\naddress:\n- 'CmPA/Department of Mathematics, KU Leuven, Celestijnenlaan 200 B, 3001 Leuven, Belgium'\n- 'Solar-Terrestrial Centre of Excellence \u2013 SIDC, Royal Observatory of Belgium; Avenue Circulaire 3, 1180 Brussels, Belgium'\n- 'Institute of Physics, University of Maria Curie-Sk[\u0142]{}odowska, Pl. M. 
Curie-Sk[\u0142]{}odowskiej 5, 20-031 Lublin, Poland'\nauthor:\n- Skralan\n- Luciano\n-" +"---\nauthor:\n- Richie Diurba\n- Rob Fine\n- Mandeep Gill\n- Harvey Newman\n- Kevin Pedro\n- Alexx Perloff\n- Louise Suter\nbibliography:\n- 'Bibliography/common.bib'\n- 'Bibliography/main.bib'\ntitle: |\n Snowmass \u201921 Community Engagement Frontier 6: Public Policy and Government Engagement\\\n Non-Congressional Government Engagement \n---\n\nIntroduction {#sec:intro}\n============\n\nThis document has been prepared as a Snowmass contributed paper by the Public Policy & Government Engagement topical group (CEF06) within the Community Engagement Frontier. The charge of CEF06 is to review all aspects of how the High Energy Physics (HEP) community engages with government at all levels and how public policy impacts members of the community and the community at large, and to assess and raise awareness within the community of direct community-driven engagement of the US federal government (*i.e.* advocacy). In the previous Snowmass process these topics were included in a broader \u201cCommunity Engagement and Outreach\u201d group whose work culminated in the recommendations outlined in Ref. [@snowmass13recs].\n\nThe focus of this paper is HEP community engagement of government entities other than the U.S. federal legislature (*i.e.* Congress). Congressional engagement and advocacy for HEP funding and additional areas are covered in two other CEF06 contributed papers [@cef06paper1; @cef06paper2]. This paper is" +"---\nabstract: 'The capability of generating speech with a specific type of emotion is desired for many human-computer interaction applications. Cross-speaker emotion transfer is a common approach to generating emotional speech when speech data with emotion labels from target speakers is not available for model training. This paper presents a novel cross-speaker emotion transfer system named iEmoTTS. 
The system is composed of an emotion encoder, a prosody predictor, and a timbre encoder. The emotion encoder extracts the identity of emotion type and the respective emotion intensity from the mel-spectrogram of input speech. The emotion intensity is measured by the posterior probability that the input utterance carries that emotion. The prosody predictor is used to provide prosodic features for emotion transfer. The timbre encoder provides timbre-related information for the system. Unlike many other studies which focus on disentangling speaker and style factors of speech, the iEmoTTS is designed to achieve cross-speaker emotion transfer via disentanglement between prosody and timbre. Prosody is considered the primary carrier of emotion-related speech characteristics, and timbre accounts for the essential characteristics for speaker identification. Zero-shot emotion transfer, meaning that the speech of target speakers is not seen in model training, is also realized with iEmoTTS. Extensive" +"---\nabstract: 'For a mechanical system consisting of a rotator and a pendulum coupled via a small, time-periodic Hamiltonian perturbation, the Arnold diffusion problem asserts the existence of \u2018diffusing orbits\u2019 along which the energy of the rotator grows by an amount independent of the size of the coupling parameter, for all sufficiently small values of the coupling parameter. There is a vast literature on establishing Arnold diffusion for such systems. In this work, we consider the case when an additional, dissipative perturbation is added to the rotator-pendulum system with coupling. Therefore, the system obtained is not symplectic but conformally symplectic. We provide explicit conditions on the dissipation parameter, so that the resulting system still exhibits energy growth. The fact that Arnold diffusion may play a role in systems with small dissipation was conjectured by Chirikov. 
In this work, the coupling is carefully chosen, however the mechanism we present can be adapted to general couplings and we will deal with the general case in future work.'\naddress:\n- 'Yeshiva University, Department of Mathematical Sciences, New York, NY 10016, USA '\n- 'Yeshiva University, Department of Mathematical Sciences, New York, NY 10016, USA '\n- 'Departament de Matem\u00e0tiques and IMTECH, Universitat Polit\u00e8cnica" +"---\nauthor:\n- 'Shao-Kai Jian'\n- Gregory Bentsen\n- Brian Swingle\nbibliography:\n- 'refs.bib'\ntitle: Linear Growth of Circuit Complexity from Brownian Dynamics\n---\n\n[**We calculate the frame potential for Brownian clusters of $N$ spins or fermions with time-dependent all-to-all interactions. In both cases the problem can be mapped to an effective statistical mechanics problem which we study using a path integral approach. We argue that the $k$th frame potential comes within $\\epsilon$ of the Haar value after a time of order $t \\sim k N + k \\log k + \\log \\epsilon^{-1}$. Using a bound on the diamond norm, this implies that such circuits are capable of coming very close to a unitary $k$-design after a time of order $t \\sim k N$. We also consider the same question for systems with a time-independent Hamiltonian and argue that a small amount of time-dependent randomness is sufficient to generate a $k$-design in linear time provided the underlying Hamiltonian is quantum chaotic. These models provide explicit examples of linear complexity growth that are also analytically tractable.**]{}\n\nIntroduction {#sec:intro}\n============\n\nThe ability to sample from the Haar distribution of unitaries on a Hilbert space $\\mathcal{H}$ is a widely useful capability [@bernstein1997quantum; @poulin2011quantum;" +"---\nabstract: 'Many organisms have an elastic skeleton that consists of a closed shell of epithelial cells that is filled with fluid, and can actively regulate both elastic forces in the shell and hydrostatic pressure inside it. 
In this work we introduce a simple network model of such pressure-stabilized active elastic shells in which cross-links are represented by material points connected by non-linear springs of some given equilibrium lengths and spring constants. We mimic active contractile forces in the system by changing the parameters of randomly chosen springs and use computer simulations to study the resulting local and global deformation dynamics of the network. We elucidate the statistical properties of these deformations by computing the corresponding distributions and correlation functions. We show that pressure-induced stretching of the network introduces coupling between its local and global behavior: while the network opposes the contraction of each excited spring and affects the amplitude and relaxation time of its deformation, random local excitations give rise to contraction of the network and to fluctuations of its surface area.'\n---\n\n[**[Network Model of Active Fluctuations of Thin Elastic Shells Swollen by Hydrostatic Pressure]{}**]{}\\\nAjoy Maji and Yitzhak Rabin\\\nDepartment of Physics and Institute of Nanotechnology and" +"---\nauthor:\n- \n- \n- \n- \n- \nbibliography:\n- 'bibliography.bib'\ntitle: 'A self-contained karma economy for the dynamic allocation of common resources$^\\star$'\n---\n\nIntroduction {#sec:Intro}\n============\n\nThe scarcity of resources is one of modern day society\u2019s most prominent challenges. With a strong population growth and shift to urbanization, our finite natural and infrastructure resources are seeing unprecedented levels of stress. The need to devise *fair* and *efficient* means of access to these resources is now more eminent than ever.\n\nIn this paper, we study a class of dynamic resource allocation problems in which an indivisible resource is repeatedly contested between two anonymous users who are randomly drawn from a large population. 
Figure\u00a0\\[fig:examples\\] demonstrates three motivating examples for this class of resource competitions.\n\n1. Due to excessive demand, only one of two trip requests from ride-hailing riders can be served by the closest ride-hailing driver. Which rider should be served? \\[ex:ridehailing\\]\n\n2. Two autonomous vehicles (AVs) meet at an unsignalled intersection. Which AV should go first? \\[ex:intersections\\]\n\n3. To improve quality of service for critical content, an internet service provider (ISP) splits its bandwidth into a high capacity fast channel and a low capacity slow channel, and dedicates the fast" +"---\nabstract: 'We study a family of finitely generated residually finite groups. These groups are doubles $F_2*_H F_2$ of a rank-$2$ free group $F_2$ along an infinitely generated subgroup $H$. Varying $H$ yields uncountably many groups up to isomorphism.'\naddress: |\n Dept. of Math. & Stats.\\\n McGill Univ.\\\n Montreal, QC, Canada H3A 0B9 \nauthor:\n- Hip Kuen Chong\n- 'Daniel T. Wise'\nbibliography:\n- 'Citation.bib'\ndate: 'July 27, 2021'\ntitle: An Uncountable Family of Finitely Generated Residually Finite Groups\n---\n\nIntroduction\n============\n\nA group $G$ is *residually finite* if for each $g\\neq 1$, there is a finite quotient $G\\to {\\@ifnextchar^{{\\wide@bar{G}{0}}}{\\wide@bar{G}{1}}}$ with ${\\@ifnextchar^{{\\wide@bar{g}{0}}}{\\wide@bar{g}{1}}}\\neq1$. Residually finite groups form a privileged and arguably rare class of groups. It is a routine exercise in small cancellation theory to provide uncountably many isomorphism classes of $2$-generated groups. For instance, consider the following presentation where $(p_i)$ is a sequence of distinct primes $\\geq 7$. 
$$\\langle a,b\\mid {\\left(abab^2\\cdots ab^{100i}\\right)}^{p_i}\\colon i\\in {\\mathbb{N}}\\rangle$$ Since the degrees of torsion element of the group are precisely the elements of $\\{p_i\\}$, these uncountably many groups are pairwise non-isomorphic.\n\nThis paper emerges from our curiosity to produce uncountably many finitely generated residually finite groups. The family of examples we constructed have a very" +"---\nabstract: 'A prerequisite for laser cooling a molecular anion, which has not been achieved so far, is the precise knowledge of the relevant transition frequencies in the cooling scheme. To determine these frequencies we present a versatile method that uses one pump and one photodetachment light beam. We apply this approach to C$_{2}^{-}$ and study the laser cooling transitions between the electronic ground state and the second electronic excited state in their respective vibrational ground levels, $B ^{2} \\Sigma _{u} ^{+}(v=0) \\leftarrow X ^{2} \\Sigma _{g} ^{+}(v=0) $. Measurements of the R(0), R(2), and P(2) transitions are presented, which determine the transition frequencies with a wavemeter-based accuracy of $0.7\\times10^{-3}$cm$^{-1}$ or 20MHz. The spin-rotation splitting is resolved, which allows for a more precise determination of the splitting constants to $\\gamma '' = 7.15(19)\\times10^{-3}\\,$cm$^{-1}$ and $\\gamma '''' = 4.10(27)\\times10^{-3}$cm$^{-1}$. These results are used to characterize the ions in the cryogenic 16-pole wire trap employed in this experiment. The translational and rotational temperature of the ions cooled by helium buffer gas are derived from the Doppler widths and the amplitude ratios of the measured transitions. The results support the common observation that the translational temperature is higher than the buffer gas temperature" +"---\nabstract: 'Cognitive biases distort the process of rational decision-making, including architectural decision-making. 
So far, no method has been empirically proven to reduce the impact of cognitive biases on architectural decision-making. We conducted an experiment in which 44 master\u2019s degree graduate students took part. Divided into 12 teams, they created two designs \u2013 before and after a debiasing workshop. We recorded this process and analysed how the participants discussed their decisions. In most cases (10 out of 12 groups), the teams\u2019 reasoning improved after the workshop. Thus, we show that debiasing architectural decision-making is an attainable goal and provide a simple debiasing treatment that could easily be used when training software practitioners.'\nauthor:\n- Klara Borowa\n- Maria Jarek\n- Gabriela Mystkowska\n- Weronika Paszko\n- Andrzej Zalewski\nbibliography:\n- 'references.bib'\ntitle: 'Debiasing architectural decision-making: a workshop-based training approach'\n---\n\nIntroduction\n============\n\nCognitive bias is a term that describes an individual\u2019s inability to reason entirely rationally; as such, it prejudices the quality of numerous decisions [@van2016decision]. Researchers have observed the influence of cognitive biases on day-to-day software development over two decades ago [@Stacy1995]. Since then, it was proven that almost all software development activities are affected by cognitive biases to" +"---\nabstract: 'A first-principles approach combining density functional and dynamical mean-field theories in conjunction with a quasi-atomic approximation for the strongly localized 4$f$ shell is applied to Nd$_{2}$Fe$_{14}$B-based hard magnets in order to evaluate crystal-field and exchange-field parameters at rare-earth sites and their corresponding single-ion contribution to the magnetic anisotropy. In pure Nd$_2$Fe$_{14}$B, our calculations reproduce the easy-cone to easy axis transition; theoretical magnetization curves agree quantitatively with experiment. 
Our study reveals that the rare-earth single-ion anisotropy in the \u201c2-14-1\u201d structure is strongly site-dependent, with the $g$ rare-earth site exhibiting a larger value. In particular, we predict that increased $f$ and $g$-site occupancy of $R=$ Ce and Dy, respectively, leads to an increase of the magnetic anisotropy of the corresponding (Nd,$R$)$_{2}$Fe$_{14}$B substituted compounds.'\nauthor:\n- 'James Boust$^{1}$, Alex Aubert$^{2}$, Bahar Fayyazi$^{2}$, Konstantin P. Skokov$^{2}$, Yurii Skourski$^{3}$, Oliver Gutfleisch$^{2}$ and Leonid V. Pourovskii$^{1,4}$'\ntitle: 'Ce and Dy substitutions in Nd$_{2}$Fe$_{14}$B: site-specific magnetic anisotropy from first-principles'\n---\n\nIntroduction\n============\n\nHigh-performance permanent magnets are key components of numerous energy-efficient technologies which have today to meet an increasing need, such as wind generators and electrical motors[@Skokov2018; @Coey2011; @Hono2018; @Skokov2018bis]. Understanding and optimizing their intrinsic properties arising at the atomic level as well as their" +"---\nabstract: |\n We study some strong combinatorial properties of families. An ideal $\\mathcal{I}$ is Shelah-Stepr\u0101ns if for every set $X\\subseteq\\left[\n \\omega\\right] ^{<\\omega}$ there is an element of $\\mathcal{I}$ that either intersects every set in $X$ or contains infinitely many members of it. We prove that a Borel ideal is Shelah-Stepr\u0101ns if and only if it is Kat\u011btov above the ideal $\\times$. We prove that Shelah-Stepr\u0101ns families have strong indestructibility properties (in particular, they are both Cohen and random indestructible). We also consider some other strong combinatorial properties of families. 
Finally, it is proved that it is consistent to have $\\operatorname{non}({\\mathcal{M}}) = {\\aleph}_{1}$ and no Shelah-Stepr\u0101ns families of size ${\\aleph}_{1}$.\naddress:\n- |\n Graduate School of System Informatics\\\n Kobe University\\\n Rokkodai 1-1, Nada, Kobe 657-8501, Japan\n- |\n Centro de Ciencias Matem\u00e1ticas\\\n UNAM, Campus Morelia, 58089, M\u00e9xico\n- |\n Centro de Ciencias Matem\u00e1ticas\\\n UNAM, Campus Morelia, 58089, M\u00e9xico\n- |\n Department of Mathematics\\\n National University of Singapore\\\n Singapore 119076.\nauthor:\n- 'J\u00f6rg Brendle'\n- Osvaldo Guzm\u00e1n\n- Michael Hru\u0161\u00e1k\n- Dilip Raghavan\ntitle: Combinatorial properties of MAD families\n---\n\nIntroduction and preliminaries\n==============================\n\nIn [@ProductsofFilters] Kat\u011btov introduced a preorder on ideals. The Kat\u011btov order is a very powerful tool" +"---\nabstract: 'Recently a large number of hot magnetic stars have been discovered to produce auroral radio emission by the process of electron cyclotron maser emission (ECME). Such stars have been given the name of Main-sequence Radio Pulse emitters (MRPs). The phenomenon characterizing MRPs is very similar to that exhibited by planets like Jupiter. However, one important aspect in which the MRPs differ from aurorae exhibited by planets is the upper cut-off frequency of the ECME spectrum. While Jupiter\u2019s upper cut-off frequency was found to correspond to its maximum surface magnetic field strength, that of MRPs is always found to be much smaller than the frequencies corresponding to their maximum surface magnetic field strength. In this paper, we report wideband observations (0.4\u20134.0 GHz) of the MRP HD35298 that enabled us to locate the upper cut-off frequency of its ECME spectrum. This makes HD35298 the sixth MRP with a known constraint on the upper cut-off frequency. 
With this information, we investigate for the first time what could lead to the premature cut-off. We review the existing scenarios attempting to explain this effect, and arrive at the conclusion that none of them can satisfactorily explain all the observations." +"---\nabstract: 'The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around adoption of this technology. One thread of such debate concerns Explainable AI and its promise to render AI devices more transparent and trustworthy. A few voices active in the medical AI space have expressed concerns about the reliability of Explainable AI techniques, questioning their use and inclusion in guidelines and standards. Revisiting such criticisms, this article offers a balanced and comprehensive perspective on the utility of Explainable AI, focusing on the specificity of clinical applications of AI and placing them in the context of healthcare interventions. Against its detractors and despite valid concerns, we argue that the Explainable AI research program is still central to human-machine interaction and ultimately our main tool against loss of control, a danger that cannot be prevented by rigorous clinical validation alone.'\nauthor:\n- 'Giovanni Cin\u00e0, PhD'\n- 'Tabea R\u00f6ber, MSc'\n- 'Rob Goedhart, PhD'\n- 'Ilker Birbil, PhD'\ntitle: Why we do need Explainable AI for Healthcare\n---\n\nIntroduction\n============\n\nAlong with the blooming of Artificial Intelligence (AI) and the accompanying increase in model complexity, there has been a surge of interest in explainable AI
from a knowledge base (KB) for a given mention. This module is usually a fundamental part of a more elaborate pipeline, such as that of entity linking [@guo2013improving], coreference resolution [@Zhang2019KnowledgeawarePC] or question answering [@Bao2016ConstraintBasedQA]. With few exceptions [@Gillick2019LearningDR], it is present in almost all existing entity linking techniques, where the subsequent entity disambiguation technique only has to evaluate a small set of entities from the KB, instead of all of them. Therefore, in addition to obtaining a high recall\u2014the fraction of mentions where the true entity is present in the set of candidates\u2014a desired characteristic of a candidate generator is to constitute a low computational overhead of the full pipeline. Cross-lingual entity linking [@Tsai2016CrosslingualWU; @Sil2018NeuralCE] associates mentions in non-English documents to entities from a KB\u2014typically the English Wikipedia. This poses several challenges that can often be attributed to the amount of resources of the target language, especially if it has low resources. Candidate generation is deemed trivial in the English entity linking" +"---\nauthor:\n- Milan Korda\n- 'Jean-Bernard Lasserre'\n- Alexey Lazarev\n- Victor Magron\n- Simone Naldi\ntitle: 'Urysohn in action: separating semialgebraic sets by polynomials'\n---\n\nProblem setting\n===============\n\nA classical result from topology called Urysohn\u2019s lemma asserts the existence of a continuous separator of two disjoint closed sets in a sufficiently regular topological space. In this work we make the search for this separator constructive and efficient in the context of real algebraic geometry. 
Namely, given two compact disjoint basic semialgebraic sets $\\mathbf{A}=\\{x\\in \\mathbb{R}^n\\mid g_i(x)\\geq 0\\; \\forall i = 1 \\dots r\\}$ and $\\mathbf{B}=\\{x\\in \\mathbb{R}^n\\mid h_i(x)\\geq 0\\;\\forall i = 1 \\dots s \\}$ which are contained in an $n$-dimensional box $[-1,1]^n$, we provide an algorithm that computes a separating polynomial $p$ greater than or equal to 1 on $\\mathbf{A}$ and less than or equal to 0 on $\\mathbf{B}$. This is a challenging problem with many important applications (e.g., classification in machine learning [@Kot] or collision avoidance in robotics) and has a long history. In [@AA99] the authors provide a decision algorithm for the more general separation problem without compactness assumptions. In order to obtain a correctness certificate for the separator, another well-renowned approach is to rely on positivity" +"---\nabstract: 'Excited states of spin-chains play an important role in condensed matter physics. We present a method of calculating the single magnon excited states of the Heisenberg spin-chain that can be efficiently implemented on a quantum processor for small spin chains. Our method involves finding the stationary points of the energy vs wavenumber curve. We implement our method for 4-site and 8-site Heisenberg Hamiltonians using numerical techniques as well as using an IBM quantum processor. Finally, we give an insight into the circuit complexity and scaling of our proposed method.'\nauthor:\n- Shashank Kumar Ranu\n- 'Daniel D. Stancil'\nbibliography:\n- 'ref.bib'\ntitle: 'Single magnon excited states of a Heisenberg spin-chain using a quantum computer'\n---\n\n\\[sec:Intro\\]Introduction\n=========================\n\nWe are currently in the era of Noisy Intermediate Scale Quantum (NISQ) devices wherein limited coherence time, gate imperfections, and readout errors affect the performance of quantum computers [@preskill2018quantum]. 
Though these NISQ devices are not fault-tolerant, they have been used to show quantum advantage over classical computers [@arute2019quantum; @zhong2020quantum] for some calculations. However, we are still far away from achieving Feynman\u2019s vision of simulating a large and complex quantum system using a quantum computer. Hybrid classical-quantum algorithms have been proposed to" +"---\nabstract: 'We investigate an auto-regressive formulation for the problem of smoothing time-series by manipulating the inherent objective function of the traditional moving mean smoothers. Not only do the auto-regressive smoothers enforce a higher degree of smoothing, they are just as efficient as the traditional moving means and can be optimized accordingly with respect to the input dataset. Interestingly, the auto-regressive models result in moving means with exponentially tapered windows.'\nauthor:\n- ','\nbibliography:\n- 'double\\_bib.bib'\ntitle: 'An Auto-Regressive Formulation for Smoothing and Moving Mean with Exponentially Tapered Windows'\n---\n\nIntroduction\n============\n\nData smoothing is heavily utilized in many learning problems to detect certain patterns in the data or mitigate noisy/anomalous values [@hardle1991smoothing]. The notion of smoothing stems from the inference that the data points close to each other in their features should have close mappings (functional evaluations), i.e., behave similarly. The most prominent example is a time-series dataset where data points closer in time should have similar values, e.g., unnaturally high or low values should be suppressed to match the nominal data pattern. Smoothing is mainly done on the datasets for easier extraction of their information and consequently ease their analyses [@simonoff2012smoothing].
This approach is applied in many fields including" +"---\nabstract: 'We construct the spectral functions for light vector mesons at finite density and temperature in the presence of a novel mixing between parity partners, induced by baryon density via the Wess-Zumino-Witten action. As the main origin of in-medium broadening, a set of baryon resonances that strongly couple to the vector mesons and the modifications of kaons and anti-kaons due to the Kaplan-Nelson term are included. It is shown that the vector spectra, even with the broadening effects, exhibit sizable signatures of chiral symmetry restoration thanks to the chiral mixing depending on three-momenta carried by the vector mesons. Those spectral functions are used to calculate the integrated production rates of lepton pairs, and a proper binning in momenta and potential decrease in the vector-meson masses due to chiral symmetry restoration are discussed in quantifying the signatures.'\nauthor:\n- Chihiro Sasaki\ntitle: 'Anomaly-induced Chiral Mixing in Cold and Dense Matter'\n---\n\nIntroduction\n============\n\nLight vector mesons, especially their dynamical properties arising from spontaneously broken chiral symmetry in QCD, have been studied extensively in various approaches\u00a0[@Rapp; @HH; @RWvH; @Rapp:2011zz; @FS]. In-medium modifications of vector spectral functions were anticipated due to the interactions with a hadronic medium, which carry imprints of" +"---\nabstract: |\n This is a second part of the research on AC optimal power flow being used in the lower level of the bilevel strategic bidding or investment models. 
As an example of a suitable upper-level problem, we observe a strategic bidding of energy storage and propose a novel formulation based on the smoothing technique.\n\n After presenting the idea and scope of our work, as well as the model itself and the solution algorithm in the companion paper (Part I), this paper presents a number of existing solution techniques and the proposed one based on smoothing the complementary conditions. The superiority of the proposed algorithm and smoothing techniques is demonstrated in terms of accuracy and computational tractability over multiple transmission networks of different sizes and different OPF models. The results indicate that the proposed approach outperforms all other options in both metrics by a significant margin. This is especially noticeable in the metric of accuracy where out of total 422 optimizations over 9 meshed networks the greatest AC OPF error is 0.023% that is further reduced to 3.3e-4% in the second iteration of our algorithm.\nauthor:\n- 'K. \u0160epetanc, *Student Member*, *IEEE*, H. Pand\u017ei\u0107, *Senior Member*, *IEEE* and T." +"---\nabstract: 'In this article we propose and validate an unsupervised probabilistic model, Gaussian Latent Dirichlet Allocation (GLDA), for the problem of discrete state discovery from repeated, multivariate psychophysiological samples collected from multiple, inherently distinct, individuals. Psychology and medical research heavily involves measuring potentially related but individually inconclusive variables from a cohort of participants to derive diagnosis, necessitating clustering analysis. Traditional probabilistic clustering models such as Gaussian Mixture Model (GMM) assume a global mixture of component distributions, which may not be realistic for observations from different patients. 
The GLDA model borrows the individual-specific mixture structure from a popular topic model Latent Dirichlet Allocation (LDA) in Natural Language Processing and merges it with the Gaussian component distributions of GMM to suit continuous type data. We implemented GLDA using STAN (a probabilistic modeling language) and applied it on two datasets, one containing Ecological Momentary Assessments (EMA) and the other heart measures from electrocardiogram and impedance cardiograph. We found that in both datasets the GLDA-learned class weights achieved significantly higher correlations with clinically assessed depression, anxiety, and stress scores than those produced by the baseline GMM. Our findings demonstrate the advantage of GLDA over conventional finite mixture models for human state discovery from" +"---\nabstract: 'Every simple drawing of a graph in the plane naturally induces a rotation system, but it is easy to exhibit a rotation system that does not arise from a simple drawing in the plane. We extend this to all surfaces: for every fixed surface $\\Sigma$, there is a rotation system that does not arise from a simple drawing in $\\Sigma$.'\naddress:\n- 'Institute of Software Technology, Graz University of Technology, Austria'\n- 'Instituto de F\u00edsica, Universidad Aut\u00f3noma de San Luis Potos\u00ed, SLP 78000, Mexico'\n- 'Institute of Software Technology, Graz University of Technology, Austria'\nauthor:\n- Rosna Paul\n- Gelasio Salazar\n- Alexandra Weinberger\nbibliography:\n- 'rotsys.bib'\ntitle: Rotation systems and simple drawings in surfaces\n---\n\nnamedef[subjclassname@2020]{}[ Mathematics Subject Classification]{}\n\nIntroduction {#sec:intro}\n============\n\nIn a [*drawing*]{} of a graph, distinct vertices are represented by distinct points in the plane, and each edge $e=uv$ is represented by a Jordan arc whose endpoints are the points that represent $u$ and $v$. 
It is also required that (D1) no edge contains a vertex other than its endvertices; (D2) every pair of edges intersect each other a finite number of times; (D3) every intersection of edges is either a common endvertex or" +"---\nabstract: 'The complex interactions between algorithmic trading agents can have a severe influence on the functioning of our economy, as witnessed by recent banking crises and trading anomalies. A common phenomenon in these situations are *fire sales*, a contagious process of asset sales that trigger further sales. We study the existence and structure of equilibria in a game-theoretic model of fire sales. We prove that for a wide parameter range (e.g., convex price impact functions), equilibria exist and form a complete lattice. This is contrasted with a non-existence result for concave price impact functions. Moreover, we study the convergence of best-response dynamics towards equilibria when they exist. In general, best-response dynamics may cycle. However, in many settings they are guaranteed to converge to the socially optimal equilibrium when starting from a natural initial state. Moreover, we discuss a simplified variant of the dynamics that is less informationally demanding and converges to the same equilibria. 
We compare the dynamics in terms of convergence speed.'\nauthor:\n- 'Nils Bertschinger[^1]'\n- 'Martin Hoefer[^2]'\n- 'Simon Krogmann[^3]'\n- 'Pascal Lenzner[^4]'\n- 'Steffen Schuldenzucker[^5]'\n- 'Lisa Wilhelmi[^6]'\nbibliography:\n- 'references.bib'\ntitle: 'Equilibria and Convergence in Fire Sale Games[^7]'\n---\n\nIntroduction\n============\n\nOn May 6," +"---\nabstract:\n- |\n The probing methodology allows one to obtain a partial representation of linguistic phenomena stored in the inner layers of the neural network, using external classifiers and statistical analysis.\n\n Pre-trained transformer-based language models are widely used both for natural language understanding (NLU) and natural language generation (NLG) tasks making them most commonly used for downstream applications. However, little analysis was carried out, whether the models were pre-trained enough or contained knowledge correlated with linguistic theory.\n\n We are presenting the chronological probing study of transformer English models such as MultiBERT and T5. We sequentially compare the information about the language learned by the models in the process of training on corpora. The results show that 1) linguistic information is acquired in the early stages of training 2) both language models demonstrate capabilities to capture various features from various levels of language, including morphology, syntax, and even discourse, while they also can inconsistently fail on tasks that are perceived as easy.\n\n We also introduce the open-source framework for chronological probing research, compatible with other transformer-based models. 
\n\n **Keywords:** probing, language acquisition, language modeling, transformers\n\n **DOI:** 10.28995/2075-7182-2022-20-XX-XX\n- |\n The probing methodology makes it possible to obtain a representation of the phenomena of language stored in the inner" +"---\nabstract: 'For a hypergraph $H$, the transversal is a subset of vertices whose intersection with every edge is nonempty. The cardinality of a minimum transversal is the transversal number of $H$, denoted by $\\tau(H)$. The Tuza constant $c_k$ is defined as $\\sup{\\tau(H)/ (m+n)}$, where $H$ ranges over all $k$-uniform hypergraphs, with $m$ and $n$ being the number of edges and vertices, respectively. We give an upper bound and a lower bound on $c_k$. The upper bound improves the known ones for $k\\geq 7$, and the lower bound improves the known ones for $k\\in\\{7, 8, 10, 11, 13, 14, 17\\}$.'\nauthor:\n- 'Yun-Shan\u00a0Lu$^1$, Hung-Lung Wang$^1$'\nbibliography:\n- 'ref.bib'\ndate:\n- \n- |\n $^1$Department of Computer Science and Information Engineering\\\n National Taiwan Normal University, Taipei, Taiwan\\\n {60947097s,hlwang}@ntnu.edu.tw\\\ntitle: '**A note on the Tuza constant $c_k$ for small $k$** '\n---\n\nIntroduction\n============\n\nFor a hypergraph $H$, the *transversal* is a subset of vertices that intersects every edge. The cardinality of a minimum transversal is the *transversal number* of $H$, denoted by $\\tau(H)$. The transversal number is a fundamental quantity investigated in hypergraph theory\u00a0[@HenningYeo2020].
It draws considerable attention since it generalizes several concepts, e.g. the domination number of a graph" +"---\nabstract: 'To determine the impact of including edge states on the phase diagram of a spinless Falicov-Kimball model (FKM) on the Haldane lattice, a study of a corresponding ribbon geometry with zigzag edges is conducted. By varying the ribbon widths, the distinction between the effects connected to the mere presence of the edges and those originating from interference between the edge states is established. The local doping caused by the former is shown to give rise to a topologically trivial bulk insulator with metallic edge states. Additionally, it gives rise to a charge density wave (CDW) phase with mixed character of the subbands in various parts of the phase diagram. The local doping on the CDW instability is also addressed. Two additional gapless phases are found, caused by the edges but with stability regions depending on the width of the ribbon.'\nauthor:\n- Jan Skolimowski\nbibliography:\n- 'mybiblio.bib'\ntitle: 'Interplay between edge states and charge density wave order in the Falicov-Kimball model on a Haldane ribbon'\n---\n\nIntroduction\n============\n\nThe study of topological phases of matter has been a rapidly growing research direction in condensed matter physics over the past 30 years. The hallmark of non-trivial topology is the" +"---\nabstract: |\n Abstract\n\n : We consider an inertial active Ornstein-Uhlenbeck particle in an athermal bath. The particle is charged, constrained to move in a two-dimensional harmonic trap, and a magnetic field is applied perpendicular to the plane of motion. The steady state correlations and the mean square displacement are studied when the particle is confined as well as when it is set free from the trap. With the help of both numerical simulation and analytical calculations, we observe that inertia plays a crucial role in the dynamics in the presence of a magnetic field. 
In a highly viscous medium where the inertial effects are negligible, the magnetic field has no influence on the correlated behaviour of position as well as velocity. In the time asymptotic limit, the overall displacement of the confined harmonic particle gets enhanced by the presence of the magnetic field and saturates for a stronger magnetic field. On the other hand, when the particle is set free, the overall displacement gets suppressed and approaches zero when the strength of the field is very high. Interestingly, it is seen that in the time asymptotic limit, the confined harmonic particle behaves like a passive particle and becomes independent of the" +"---\nabstract: |\n It is known that gradient based MCMC samplers for continuous spaces, such as Langevin Monte Carlo (LMC), can be derived as particle versions of a gradient flow that minimizes KL divergence on a Wasserstein manifold. The superior efficiency of such samplers has motivated several recent attempts to generalize LMC to discrete spaces. However, a fully principled extension of Langevin dynamics to discrete spaces has yet to be achieved, due to the lack of well-defined gradients in the sample space. In this work, we show how the Wasserstein gradient flow can be generalized naturally to discrete spaces. Given the proposed formulation, we demonstrate how a discrete analogue of Langevin dynamics can subsequently be developed. With this new understanding, we reveal how recent gradient based samplers in discrete spaces can be obtained as special cases by choosing particular discretizations. More importantly, the framework also allows for the derivation of novel algorithms, one of which, *Discrete Langevin Monte Carlo* (DLMC), is obtained by a factorized estimate of the transition matrix. The DLMC method admits a convenient parallel implementation and time-uniform sampling that achieves larger jump distances.
We demonstrate the advantages of DLMC on various binary and categorical distributions.\n\n 0 Langevin" +"---\nabstract: 'In socio-technical settings, operators are increasingly assisted by decision support systems. By employing these, important properties of socio-technical systems such as self-adaptation and self-optimization are expected to improve further. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this paper, we propose the usage of Learning Classifier Systems, a family of rule-based machine learning methods, to facilitate transparent decision making and highlight some techniques to improve that. We then present a template of seven questions to assess application-specific explainability needs and demonstrate their usage in an interview-based case study for a manufacturing scenario. We find that the answers received did yield useful insights for a well-designed LCS model and requirements to have stakeholders actively engage with an intelligent agent.'\nauthor:\n- Michael Heider\n- Helena Stegherr\n- Richard Nordsieck\n- J\u00f6rg H\u00e4hner\nbibliography:\n- 'XLCS\\_ext.bib'\ntitle: 'Learning Classifier Systems for Self-Explaining Socio-Technical-Systems'\n---\n\nIntroduction\n============\n\nIncreasing automation of manufacturing creates a continuous interest in properties commonly associated with lifelike or organic computing systems, such as self-adaptation or self-optimisation, within the producing industry\u00a0[@permin2016]. These properties are often achieved using data driven" +"---\nabstract: '[In the analysis of single-cell RNA sequencing data, researchers often characterize the variation between cells by estimating a latent variable, such as cell type or pseudotime, representing some aspect of the cell\u2019s state. They then test each gene for association with the estimated latent variable. 
If the same data are used for both of these steps, then standard methods for computing p-values in the second step will fail to achieve statistical guarantees such as Type 1 error control. Furthermore, approaches such as sample splitting that can be applied to solve similar problems in other settings are not applicable in this context. In this paper, we introduce *count splitting*, a flexible framework that allows us to carry out valid inference in this setting, for virtually any latent variable estimation technique and inference approach, under a Poisson assumption. We demonstrate the Type 1 error control and power of count splitting in a simulation study, and apply count splitting to a dataset of pluripotent stem cells differentiating to cardiomyocytes. ]{} [Poisson, binomial thinning, pseudotime, clustering, selective inference, sample splitting.]{}'\nauthor:\n- |\n Anna Neufeld $^{\\dagger}$, Lucy L. Gao $\\circ$, Joshua Popp$^\\bigtriangleup$, Alexis Battle$^{\\bigtriangleup \\bigtriangledown, \\cdot}$, and Daniela Witten $^{\\dagger\\ddagger}$\\\n $\\dagger$ *Department" +"---\nabstract: 'The fog-radio-access-network (F-RAN) has been proposed to address the strict latency requirements, which offloads computation tasks generated in user equipments (UEs) to the edge to reduce the processing latency. However, it incorporates the task transmission latency, which may become the bottleneck of latency requirements. Data compression (DC) has been considered as one of the promising techniques to reduce the transmission latency. By compressing the computation tasks before transmitting, the transmission delay is reduced due to the shrink transmitted data size, and the original computing task can be retrieved by employing data decompressing (DD) at the edge nodes or the centre cloud. Nevertheless, the DC and DD incorporate extra processing latency, and the latency performance has not been investigated in the large-scale DC-enabled F-RAN. 
Therefore, in this work, the successful data compression probability (SDCP), i.e., the probability of the task execution latency being smaller than a target latency and the signal to interference ratio of the wireless uplink transmission being larger than a threshold, is defined to analyse the latency performance of the F-RAN. Moreover, to analyse the effect of compression offloading ratio (COR), which determines the proportion of tasks being compressed at the edge, on the SDCP of" +"---\nabstract: 'Model-free policy learning has been shown to be capable of learning manipulation policies which can solve long-time horizon tasks using single-step manipulation primitives. However, training these policies is a time-consuming process requiring large amounts of data. We propose the Local Dynamics Model (LDM) which efficiently learns the state-transition function for these manipulation primitives. By combining the LDM with model-free policy learning, we can learn policies which can solve complex manipulation tasks using one-step lookahead planning. We show that the LDM is both more sample-efficient and outperforms other model architectures. When combined with planning, we can outperform other model-based and model-free policies on several challenging manipulation tasks in simulation.'\nauthor:\n- Colin Kohler\n- Robert Platt\ntitle: Visual Foresight With a Local Dynamics Model\n---\n\nIntroduction {#sec:introduction}\n============\n\nReal-world robotic manipulation tasks require a robot to execute complex motion plans while interacting with numerous objects within cluttered environments. Due to the difficulty in learning good policies for these tasks, a common approach is to simplify policy learning by expressing the problem using more abstract (higher level) actions such as end-to-end collision-free motions combined with some motion primitive such as pick, place, push, etc. 
This is often called the *spatial" +"---\nabstract: 'Automatic program synthesis is a long-lasting dream in software engineering. Recently, a promising Deep Learning (DL) based solution, called Copilot, has been proposed by OpenAI and Microsoft as an industrial product. Although some studies evaluate the correctness of Copilot solutions and report its issues, more empirical evaluations are necessary to understand how developers can benefit from it effectively. In this paper, we study the capabilities of Copilot in two different programming tasks: (i) generating (and reproducing) correct and efficient solutions for fundamental algorithmic problems, and (ii) comparing Copilot\u2019s proposed solutions with those of human programmers on a set of programming tasks. For the former, we assess the performance and functionality of Copilot in solving selected fundamental problems in computer science, like sorting and implementing data structures. In the latter, a dataset of programming problems with human-provided solutions is used. The results show that Copilot is capable of providing solutions for almost all fundamental algorithmic problems, however, some solutions are buggy and non-reproducible. Moreover, Copilot has some difficulties in combining multiple methods to generate a solution. Comparing Copilot to humans, our results show that the correct ratio of humans\u2019 solutions is greater than Copilot\u2019s suggestions, while the buggy solutions" +"---\nabstract: 'This work takes a critical look at the application of conventional machine learning methods to wireless communication problems through the lens of reliability and robustness. Deep learning techniques adopt a frequentist framework, and are known to provide poorly calibrated decisions that do not reproduce the true uncertainty caused by limitations in the size of the training data. 
Bayesian learning, while in principle capable of addressing this shortcoming, is in practice impaired by model misspecification and by the presence of outliers. Both problems are pervasive in wireless communication settings, in which the capacity of machine learning models is subject to resource constraints and training data is affected by noise and interference. In this context, we explore the application of the framework of *robust* Bayesian learning. After a tutorial-style introduction to robust Bayesian learning, we showcase the merits of robust Bayesian learning on several important wireless communication problems in terms of accuracy, calibration, and robustness to outliers and misspecification.'\nauthor:\n- 'Matteo Zecchin,\u00a0 Sangwoo Park,\u00a0 Osvaldo Simeone,\u00a0 Marios Kountouris,\u00a0 David Gesbert,\u00a0'\nbibliography:\n- 'ref.bib'\ntitle: 'Robust Bayesian Learning for Reliable Wireless AI: Framework and Applications [^1] '\n---\n\nBayesian learning, robustness, localization, modulation classification, channel modeling\n\nIntroduction\n============
Chip-scale optically-inspired devices also provide a pathway to much needed energy-efficient neuromorphic and edge-AI computing components [@wang2021inverse]$^,$[@papp2021nanoscale].\\\nIt has, however, remained elusive to produce spin-wave optics that approach the \u2019ideal\u2019 behavior of optical components. This is largely due to the fact that no working technology is available to control the propagation characteristics of spin waves to the extent that is possible in optics. Ideally, one would want to realize any fine-grained spatial distribution of the index of refraction, as this provides a high" +"---\nabstract: 'We study the relationship between the scale-height of the molecular gas disc and the turbulent velocity dispersion of the molecular interstellar medium within a simulation of a Milky Way-like galaxy in the moving-mesh code [Arepo]{}. We find that the vertical distribution of molecular gas can be described by a Gaussian function with a uniform scale-height of $\\sim 50$\u00a0pc. We investigate whether this scale-height is consistent with a state of hydrostatic balance between gravity and turbulent pressure. We find that the hydrostatic prediction using the total turbulent velocity dispersion (as one would measure from kpc-scale observations) gives an over-estimate of the true molecular disc scale-height. The hydrostatic prediction using the velocity dispersion between the centroids of discrete giant molecular clouds (cloud-cloud velocity dispersion) leads to more-accurate estimates. The velocity dispersion internal to molecular clouds is elevated by the locally-enhanced gravitational field. Our results suggest that observations of molecular gas need to reach the scale of individual molecular clouds in order to accurately determine the molecular disc scale-height.'\nbibliography:\n- 'bibliography.bib'\ndate: 'Accepted XXX. 
Received XXX; in original form XXX'\ntitle: 'On the scale-height of the molecular gas disc in Milky Way-like galaxies'\n---\n\n\\[firstpage\\]\n\nISM:clouds \u2013 ISM:evolution \u2013" +"---\nabstract: 'In this paper, we investigate the stabilization of a one-dimensional Lorenz piezoelectric (Stretching system) with partial viscous dampings. First, by using Lorenz gauge conditions, we reformulate our system to achieve the existence and uniqueness of the solution. Next, by using General criteria of Arendt-Batty, we prove the strong stability in different cases. Finally, we prove that it is sufficient to control the stretching of the center-line of the beam in $x-$direction to achieve the exponential stability. Numerical results are also presented to validate our theoretical result.'\naddress: |\n $^1$Univ. Polytechnique Hauts-de-France, INSA Hauts-de-France, CERAMATHS-Laboratoire de Mat\u00e9riaux C\u00e9ramiques et de Math\u00e9matiques, F-59313 Valenciennes, France\\\n $^{2}$ Department of Mathematics, College of Science, University of Sharjah, P.O.Box 27272, Sharjah, UAE.\\\n $^{3}$ \u00a0Department of Mathematics and Statistics, American University of Sharjah. Sharjah, UAE.\\\n mohammad.akil@uphf.fr, asoufyane@sharjah.ac.ae , ybelhamadia@aus.edu\nauthor:\n- 'Mohammad Akil$^{1}$ , Abdelaziz Soufyane$^2$ and Youssef Belhamadia$^3$'\ntitle: Stabilization results of a Lorenz piezoelectric beam with partial viscous dampings \n---\n\n**Keywords.** Lorenz Gauge - Piezoelectric beams - Stabilization - Electromagnetic potentials-Exponential Stability.\n\nIntroduction\n============\n\nPiezoelectric materials have become more promising in aeronautic, civil and space structures. It is known, since the 19th century that materials such as quartz, Rochelle salt and barium" +"---\nabstract: 'Novel techniques are indispensable to process the flood of data from the new generation of radio telescopes. 
In particular, the classification of astronomical sources in images is challenging. Morphological classification of radio galaxies could be automated with deep learning models that require large sets of labelled training data. Here, we demonstrate the use of generative models, specifically Wasserstein GANs (wGAN), to generate artificial data for different classes of radio galaxies. Subsequently, we augment the training data with images from our wGAN. We find that a simple fully-connected neural network for classification can be improved significantly by including generated images into the training set.'\nauthor:\n- 'Janis Kummer[^1]$^{,}$[^2]$\\ $'\n- 'Lennart Rustige$^{1,}$[^3]$\\ $'\n- 'Florian Griese$^{1,}$[^4]$\\ $'\n- 'Kerstin Borras$^{3,}$[^5]$\\ $'\n- 'Marcus\u00a0Br\u00fcggen$^{2}\\ $'\n- 'Patrick L. S. Connor$^{1,}$[^6]$\\ $'\n- 'Frank\u00a0Gaede$^{3}\\ $'\n- 'Gregor\u00a0Kasieczka$^{6}\\ $'\n- 'Peter\u00a0Schleper$^{6}\\ $'\nbibliography:\n- 'ref.bib'\ntitle: 'Radio Galaxy Classification with wGAN-Supported Augmentation'\n---\n\nRadio galaxy classification, Generative models, GANplyfication\n\nIntroduction\n============\n\nThe new generation of radio telescopes (e.g. LOFAR, MeerKAT and in the future the SKA [@LOFAR; @MeerKAT; @Carilli_2004]) will produce enormous amounts of data and the improved sensitivity of the instruments leads to a much higher source" +"---\nabstract: 'Existing methods for isolating hard subpopulations and spurious correlations in datasets often require human intervention. This can make these methods labor-intensive and dataset-specific. To address these shortcomings, we present a scalable method for automatically distilling a model\u2019s failure modes. Specifically, we harness linear classifiers to identify consistent error patterns, and, in turn, induce a natural representation of these failure modes as *directions within the feature space*. 
We demonstrate that this framework allows us to discover and automatically caption challenging subpopulations within the training dataset. Moreover, by combining our framework with off-the-shelf diffusion models, we can generate images that are especially challenging for the analyzed model, and thus can be used to perform synthetic data augmentation that helps remedy the model\u2019s failure modes. [^1]'\nauthor:\n- |\n Saachi Jain[^2]\\\n MIT\\\n `saachij@mit.edu`\\\n- |\n Hannah Lawrence\\\n MIT\\\n `hanlaw@mit.edu`\n- |\n Ankur Moitra\\\n MIT\\\n `moitra@mit.edu`\n- |\n Aleksander Madry\\\n MIT\\\n `madry@mit.edu`\nbibliography:\n- 'main.bib'\ntitle: Distilling Model Failures as Directions in Latent Space\n---\n\nIntroduction {#sec:intro}\n============\n\nThe composition of the training dataset has key implications for machine learning models\u2019 behavior [@feldman2019does; @carlini2019secret; @koh2017understanding; @ghorbani2019data; @ilyas2022datamodels], especially as the training environments often deviate from deployment conditions [@rabanser2019failing; @koh2020wilds; @hendrycks2020faces]. For example," +"---\nauthor:\n- |\n Eric R. Knorr\\*^1^, Baptiste Lemaire\\*^1^, Andrew Lim\\*^1^\\\n Siqiang Luo^2^, Huanchen Zhang^3^, Stratos Idreos^1^, Michael Mitzenmacher^1^\nbibliography:\n- 'works\\_cited.bib'\ntitle: 'Proteus: A Self-Designing Range Filter'\n---\n\n=4\n\nIntroduction {#sec:intro}\n============\n\n\\[fig:teaser\\]\n\n**The Importance of Range Filters:** Range queries are a fundamental operation in big data applications. Given a set $S$, a range query $\\texttt{[a,b]}$ returns the members of $S$ within the query interval, i.e. $S \\cap \\texttt{[a,b]}$. 
Example applications that need range queries and handle large amounts of data include social media platforms using spatio-temporal queries to aggregate user events [@gisdatabase], pattern discovery and anomaly detection in time-series data streams [@kondylakis2020coconut], scientific spatial models [@spatialrangequery], graph databases [@dgraph; @kyrola2014graphchidb] and Blockchain analytics [@vchain]. Range queries over such data sets are expensive due to the disk or network costs required to process the data. Using a filter data structure to determine when no elements are in the query range can vastly improve performance by preventing unnecessary IO operations.\n\nAs an important unifying application, large-scale data systems keep large volumes of data on cheap but high-latency storage devices. Answering range queries requires checking every data page that intersects with the queried range to retrieve the relevant data. For" +"---\nabstract: 'Physiological measurements involve observing variables that relate to the normative functioning of human systems and subsystems, directly or indirectly. The measurements can be used to detect affective states of a person with aims such as improving human-computer interactions. There are several methods of collecting physiological data, but wearable sensors are a common, non-invasive tool for accurate readings. However, valuable information is hard to extract from the raw physiological data, especially for affective state detection. Machine Learning techniques are used to detect the affective state of a person through labeled physiological data. A clear problem with using labeled data is creating accurate labels. An expert is needed to analyze some form of recording of the participants and mark sections with different states, such as stress and calm. While expensive, this method delivers a complete dataset with labeled data that can be used in any number of supervised algorithms. 
An interesting question arises from the expensive labeling: how can we reduce the cost while maintaining high accuracy? Semi-supervised learning (SSL) is a potential solution to this problem. These algorithms allow machine learning models to be trained with only a small subset of labeled data (unlike unsupervised learning, which uses no labels)." +"---\nabstract: 'In this paper we study the spontaneous emission spectra and the emission decay rates of the simplest atomic system that exhibits sub- and superradiant properties: a system which consists of two artificial atoms (superconducting qubits) embedded in a one-dimensional open waveguide. The calculations are based on the method of the transition operator which was first introduced by R. H. Lehmberg to theoretically describe the spontaneous emission of two-level atoms in free space. We obtain the explicit expressions for the photon radiation spectra and the emission decay rates for different initial two-qubit configurations with one and two excitations. For every initial state we calculate the radiation spectra and the emission decay rates for different effective distances between qubits. In every case, a decay rate is compared with a single qubit decay to show the superradiant or subradiant nature of a two-qubit decay with a given initial state.'\nauthor:\n- 'Ya. S. Greenberg'\n- 'O. A. Chuikin'\ntitle: 'Superradiant emission spectra of a two-qubit system in circuit quantum electrodynamics'\n---\n\nIntroduction\n============\n\nThe control of spontaneous emission in multi-atom (or qubit) systems that interact with a quantized radiation field in restricted geometries has received a great deal of
Since the boundary frequencies lie in the locus where the so-called Lopatinskii determinant is zero, the amplifications on the boundary give rise to a highly coupled system of equations for the profiles. A simplified model for this system is solved in an analytical framework using the [Cauchy-Kovalevskaya ]{}theorem as well as a version of it ensuring analyticity in space and time for the solution. Then it is proven that, through resonances and amplification, a particular configuration for the phases may create an instability, in the sense that the small perturbation of the forcing term on the boundary interferes at the leading order in the asymptotic expansion of the solution. Finally we study the possibility for such a configuration of frequencies to happen for the isentropic Euler equations in space dimension three.'\naddress: |\n Institut de Math\u00e9matiques de Toulouse ; UMR5219\\\n Universit\u00e9 de Toulouse ; CNRS\\\n UPS, F-31062 Toulouse Cedex 9, France\nauthor:\n- Corentin Kilque\nbibliography:\n-" +"---\nauthor:\n- 'Peng Zhao[^1]'\n- 'Hao Zou[^2]'\nbibliography:\n- 'ref.bib'\ntitle:\n- Remarks on 2d Unframed Quiver Gauge Theories\n- Remarks on 2d Unframed Quiver Gauge Theories\n---\n\nIntroduction and Summary\n========================\n\nTwo-dimensional gauged linear sigma models (GLSMs) with $\\mathcal{N} = (2,2)$ supersymmetry [@Witten:1993yc] engineer a large class of K\u00e4hler target geometries. GLSMs with Abelian gauge groups have been well studied in relation to toric geometry and mirror symmetry and then extended to non-Abelian gauge groups. 
Quiver GLSMs have recently been studied from various perspectives [@Donagi:2007hi; @Jockers:2012zr; @Bonelli:2013mma; @Benini:2014mia; @Gomis:2014eya; @Franco:2015tna; @Closset:2017yte; @Guo:2018iyr; @Closset:2018axq; @Guo:2021dlz; @Galakhov:2022uyu], leading to new mathematical structures and physical insights that would not be attainable by considering only a single gauge node.\n\nInfrared dualities play a central r\u00f4le in relating different quiver GLSMs. In [@Benini:2014mia], building on earlier works on a single gauge node [@Hanany:1997vm; @Hori:2006dk; @Benini:2012ui] and similar dualities in higher dimensions [@Seiberg:1994pq; @Berenstein:2002fi; @Benini:2011mf; @Closset:2012eq; @Xie:2013lya], it was found that under a Seiberg-like duality, gauge theories with unitary groups realize a cluster algebra structure. The gauge group ranks and the complexified Fayet-Iliopoulos (FI) parameters transform as cluster variables. The precise matching of parameters has been tested by the sphere partition function [@Doroud:2012xw; @Benini:2012ui]." +"---\nabstract: 'Electric vehicles may dominate motorized transport in the next decade, yet the impact of the collective dynamics of electric mobility on long-range traffic flow is still largely unknown. We demonstrate a type of congestion that arises if charging infrastructure is limited or electric vehicle density is high. This congestion emerges solely through indirect interactions at charging infrastructure by queue-avoidance behavior that \u2013 counterintuitively \u2013 induces clustering of occupied charging stations and phase separation of the flow into free and congested stations. The resulting congestion waves propagate forward in the direction of travel, in contrast to backward-propagating congestion waves known from traditional traffic jams. 
These results may guide the planning and design of charging infrastructure and decision support applications in the near future.'\nauthor:\n- Philip Marszal\n- Marc Timme\n- Malte Schr\u00f6der\nbibliography:\n- 'main\\_bib.bib'\ntitle: Phase separation induces congestion waves in electric vehicle charging\n---\n\nThe ongoing transition towards more sustainable mobility centrally relies on electric vehicles to provide low-emission transport. As the number of battery electric vehicles (EVs) grows rapidly [@supp; @ieadata; @kraftfahrtbundesamt], EVs may soon become the primary form of individual mobility [@ieaevoutlook]. However, with their limited range and long recharge periods, EVs critically depend" +"---\nauthor:\n- 'Mohammad Sabbaqi and Elvin Isufi [^1]'\nbibliography:\n- 'myIEEEabrv.bib'\n- 'GTCNN\\_lib.bib'\ntitle: |\n Graph-Time Convolutional Neural Networks:\\\n Architecture and Theoretical Analysis\n---\n\n\\[sec\\_intro\\]\n\nLearning from *multivariate temporal* data is a challenging task due to their intrinsic spatiotemporal dependencies. This problem arises in applications such as time series forecasting, classification, action recognition, and anomaly detection\u00a0[@mo:yu2017spatio; @mo:yan2018STGCNN; @mo:kadous2002temporal; @mo:zhang2019anomaly]. The spatial dependencies can be captured by a graph either explicitly such as in transportation networks or implicitly such as in recommender systems\u00a0[@wang2021graph]. Therefore, graph-based inductive biases should be considered during learning to exploit the spatial dependencies alongside with temporal patterns in a computationally and data efficient manner. Based on advances in processing and learning over graphs\u00a0[@GSPsurvey; @hamilton2017representation], a handful of approaches have been proposed to learn from multivariate temporal data\u00a0[@surveyDLonST]. 
The main challenge is to capture the spatiotemporal dependencies by built-in effective biases in a principled manner\u00a0[@battaglia2018inductivebias].\n\nThe convolution principle has been key to build learning solutions for graph-based data [@gama2020elvinmagazine; @bronstein2021geometric]. By cascading graph convolutional filters and pointwise nonlinearities, graph convolutional neural networks (GCNNs) have been developed as non-linear models for graph-based data\u00a0[@GamaGCNN; @CNNonGraphs]. Such a principle reduces both the number of" +"---\nabstract: 'We extend the applicability of the hydrodynamics, perturbative QCD and saturation -based EKRT (Eskola-Kajantie-Ruuskanen-Tuominen) framework for ultrarelativistic heavy-ion collisions to peripheral collisions by introducing dynamical freeze-out conditions. As a new ingredient compared to the previous EKRT computations we also introduce a non-zero bulk viscosity. We compute various hadronic observables and flow correlations, including normalized symmetric cumulants, mixed harmonic cumulants and flow-transverse momentum correlations, and compare them against measurements from the BNL Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). We demonstrate that the inclusion of the dynamical freeze-out and bulk viscosity allows a better description of the measured flow coefficients in peripheral collisions and enables the use of an extended centrality range when constraining the properties of QCD matter in the future.'\nauthor:\n- 'H.\u00a0Hirvonen${}^{a,b}$, K.\u00a0J.\u00a0Eskola${}^{a,b}$, H.\u00a0Niemi${}^{a,b}$,'\ntitle: 'Flow correlations from a hydrodynamics model with dynamical freeze-out and initial conditions based on perturbative QCD and saturation'\n---\n\nIntroduction\n============\n\nHeavy-ion collisions at ultrarelativistic energies provide the means to produce and investigate experimentally quark-gluon plasma (QGP), a strongly interacting fluid of quarks and gluons. 
In recent years the two main collider experiments that have investigated QGP properties are the Relativistic Heavy" +"---\naddress:\n- 'Direcci\u00f3n de Prestaciones Econ\u00f3micas y Sociales, Instituto Mexicano del Seguro Social, Mexico City, Mexico'\n- 'Institut f\u00fcr Statistik, Ludwig-Maximilians-Universit\u00e4t M\u00fcnchen, 80799 M\u00fcnchen, Germany'\n- 'Centre for Infectious Disease Epidemiology and Research, University of Cape Town, South Africa'\n- 'ICON-group. Non-communicable Disease Epidemiology. London School of Hygiene and Tropical Medicine. London, U.K.'\n- 'Faculty of Pharmacy and Department of Social and Preventive Medicine, Universit\u00e9 de Montr\u00e9al, Montreal, Canada'\n- 'Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montreal, Canada'\n- 'Department of Statistics and Operations Research, University of Granada, Granada, Spain.'\nauthor:\n- 'Rodrigo Zepeda-Tello'\n- Michael Schomaker\n- Camille Maringe\n- 'Matthew J. Smith'\n- Aurelien Belot\n- Bernard Rachet\n- 'Mireille E. Schnitzer'\n- 'Miguel Angel Luque Fernandez\\*'\nbibliography:\n- 'bibfile.bib'\n- 'extrabibliography.bib'\ntitle: 'The Delta-Method and Influence Function in Medical Statistics: a Reproducible Tutorial'\n---\n\nIntroduction\n============\n\nA fundamental problem in inferential statistics is to approximate the distribution of an estimator constructed from the sample (*i.e.* a statistic). The standard error (SE) of an estimator characterises its variability.[@Boos2013] Oftentimes, it is not directly the estimator which is of interest but a function of it. In this case, the Delta-Method can approximate the standard error" +"---\nabstract: |\n We present a mechanistic theory for predicting void evolution in the Li metal electrode during the charge and discharge of all-solid-state battery cells. 
A phase field formulation is developed to model vacancy annihilation and nucleation, and to enable the tracking of the void-Li metal interface. This is coupled with a viscoplastic description of Li deformation, to capture creep effects, and a mass transfer formulation accounting for substitutional (bulk and surface) Li diffusion and current-driven flux. Moreover, we incorporate the interaction between the electrode and the solid electrolyte, resolving the coupled electro-chemical-mechanical problem in both domains. This enables predicting the electrolyte current distribution and thus the emergence of local current \u2018hot spots\u2019, which act as precursors for dendrite formation and cell death. The theoretical framework is numerically implemented, and single and multiple void case studies are carried out to predict the evolution of voids and current hot spots as a function of the applied pressure, material properties and charge (magnitude and cycle history). For both plating and stripping, insight is gained into the interplay between bulk diffusion, Li dissolution and deposition, creep, and the nucleation and annihilation of vacancies. The model is shown to capture the main experimental" +"---\nabstract: |\n This paper proposes a parallelizable algorithm for linear-quadratic model predictive control (MPC) problems with state and input constraints. The algorithm itself is based on a parallel MPC scheme that has originally been designed for systems with input constraints. In this context, one contribution of this paper is the construction of time-varying yet separable constraint margins ensuring recursive feasibility and asymptotic stability of sub-optimal parallel MPC in a general setting, which also includes state constraints. Moreover, it is shown how to tradeoff online run-time guarantees versus the conservatism that is introduced by the tightened state constraints. 
The corresponding performance of the proposed method as well as the cost of the recursive feasibility guarantees is analyzed in the context of controlling a large-scale mechatronic system. This is illustrated by numerical experiments for a large-scale control system with more than $100$ states and $60$ control inputs leading to run-times in the millisecond range.\\\n **Keywords:** Model Predictive Control, Parallel Computing, Real-Time Control, Recursive Feasibility\\\nauthor:\n- 'Jiahe Shi[^1] , Yuning Jiang[^2] , Juraj Oravec[^3] [^4] , and Boris Houska'\ntitle: Parallel MPC for Linear Systems with State and Input Constraints\n---\n\nIntroduction {#sec:introduction}\n============\n\nModel predictive control (MPC)\u00a0[@Rawlings2009] is a" +"---\nabstract: |\n An *obstacle representation* of a graph\u00a0$G$ consists of a set of pairwise disjoint simply-connected closed regions and a one-to-one mapping of the vertices of\u00a0$G$ to points such that two vertices are adjacent in $G$ if and only if the line segment connecting the two corresponding points does not intersect any obstacle. The *obstacle number* of a graph is the smallest number of obstacles in an obstacle representation of the graph in the plane such that all obstacles are simple polygons.\n\n It is known that the obstacle number of each $n$-vertex graph is $O(n \\log n)$ \\[Balko, Cibulka, and Valtr, 2018\\] and that there are $n$-vertex graphs whose obstacle number is $\\Omega(n/(\\log\\log n)^2)$ \\[Dujmovi\u0107 and Morin, 2015\\]. We improve this lower bound to $\\Omega(n/\\log\\log n)$ for simple polygons and to $\\Omega(n)$ for convex polygons. To obtain these stronger bounds, we improve known estimates on the number of $n$-vertex graphs with bounded obstacle number, solving a conjecture by Dujmovi\u0107 and Morin. 
We also show that if the drawing of some $n$-vertex graph is given as part of the input, then for some drawings $\\Omega(n^2)$ obstacles are required to turn them into an obstacle representation of the graph." +"---\nabstract: 'This essay is the first systematic account of causal relationships between measurement instruments and the data they elicit in the social sciences. This problem of reflexive measurement is pervasive and profoundly affects social scientific inquiry. I argue that, when confronted by the problem of reflexive measurement, scientific knowledge of the social world is not possible without a model of the causal effect of our measurement instruments.'\nauthor:\n- James Michelson\nbibliography:\n- 'reflex\\_measure.bib'\ntitle: Reflexive Measurement\n---\n\nIntroduction\n============\n\nThe idea of a \u201cself-fulfilling prophecy\u201d is an ancient one, going back at least as far as the story of Oedipus. In its more modern, philosophical guise philosophers of science have called the idea *reflexive prediction*. A contemporary example of this phenomenon is bank runs: announcing an impending bank run may incite one. Contemporary philosophers of science have stressed the challenge reflexive prediction poses for theory development and testing in the social sciences (see [@kopec2011; @lowe2018]). Less explored, however, is the idea of *reflexive measurement*. A reflexive measurement is one that causally affects what the social scientist is trying to measure. Thus, the act of measuring some quantity about people (their beliefs or opinions, patterns of behavior, preferences, etc)" +"---\nabstract: 'Public blockchains are decentralized networks where each participating node executes the same decision-making process. This form of decentralization does not scale well because the same data are stored on each network node, and because all nodes must validate each transaction prior to their confirmation. 
One solution approach decomposes the nodes of a blockchain network into subsets called \u201cshards\u201d, each shard processing and storing disjoint sets of transactions in parallel. To fully benefit from the parallelism of sharded blockchains, the processing load of shards must be evenly distributed. However, the problem of computing balanced workloads is theoretically hard and further complicated in practice as transaction processing times are unknown prior to being assigned to shards. In this paper we introduce a dynamic workload-balancing algorithm where the allocation strategy of transactions to shards is periodically adapted based on the recent workload history of shards. Our algorithm is an adaptation to sharded blockchains of a consensus-based load-balancing algorithm. It is a fully distributed algorithm in line with network-based applications such as blockchains. Some preliminary results are reported based on simulations that shard transactions of three well-known blockchain platforms.'\nauthor:\n- 'M. Toulouse$^{\\textrm{\\Letter}}$'\n- 'H. K. Dai'\n- 'Q. L. Nguyen'\nbibliography:
We compare the experimental results and discuss important characteristics related to the performance of the configurations.'\nauthor:\n- \n- \n- \nbibliography:\n- 'bibliography.bib'\ntitle: 'Using Neural Networks for Novelty-based Test Selection to Accelerate Functional Coverage Closure'\n---\n\nSimulation-Based Verification, Functional Coverage, Novelty Detection, Neural Network\n\nIntroduction {#s:introduction}\n============\n\nSimulation-based verification is a vital technique that is used to gain confidence in the functional correctness of digital designs. Traditionally, the quality of a generated test is measured by various coverage metrics obtained during simulation. Functional coverage records whether the specified functionality has been executed by simulated tests. In practice, functional coverage closure requires the generation and simulation of many tests. The coverage gain that can be" +"---\nabstract: 'Finite simplex lattice models are used in different branches of science, e.g., in condensed matter physics, when studying frustrated magnetic systems and non-Hermitian localization phenomena; or in chemistry, when describing experiments with mixtures. An $n$-simplex represents the simplest possible polytope in $n$ dimensions, e.g., a line segment, a triangle, and a tetrahedron in one, two, and three dimensions, respectively. In this work, we show that various fully solvable, in general non-Hermitian, $n$-simplex lattice models [with open boundaries]{} can be constructed from the high-order field-moments space of quadratic bosonic systems. Namely, we demonstrate that such $n$-simplex lattices can be formed by a dimensional reduction of highly-degenerate iterated polytope chains in $(k>n)$-dimensions, which naturally emerge in the field-moments space. 
Our findings indicate that the field-moments space of bosonic systems provides a versatile platform for simulating real-space $n$-simplex lattices exhibiting non-Hermitian phenomena, and yields valuable insights into the structure of many-body systems exhibiting similar complexity. [Amongst a variety of practical applications, these simplex structures can offer a physical setting for implementing the discrete fractional Fourier transform, an indispensable tool for both quantum and classical signal processing.]{}'\nauthor:\n- 'Ievgen I. Arkhipov'\n- Adam Miranowicz\n- Franco Nori\n- '\u015eahin K. \u00d6zdemir'" +"---\nabstract: 'This article describes the dynamics of small inertial particles centrifuging out of a single vortex. It shows the formation of caustics in the vicinity of a single vortex: both for particle collisions and void formation. From these single-vortex studies we provide estimates of the role of caustics in high Reynolds number turbulence, and in the case of clouds, estimate how they may help in rain initiation by bridging the droplet-growth bottleneck. We briefly describe how the Basset-Boussinesq history force may be calculated by a method which does not involve huge memory costs, and provide arguments for its possible importance for droplets in turbulence. We discuss how phase change could render cloud turbulence fundamentally different from turbulence in other situations.'\nauthor:\n- 'S. Ravichandran'\n- Rama Govindarajan\ntitle: The waltz of tiny droplets and the flow they live in\n---\n\nIntroduction\n============\n\nThe flow situations\n-------------------\n\nThe dynamics of finite-sized particles or droplets in turbulent flow is relevant in a wide range of natural and industrial settings. 
In the dispersal of pollutants due to a volcanic eruption or from industrial flue gases, in a sandstorm, or a cloud, or when a river carrying sediment disgorges itself into the ocean," +"---\nauthor:\n- 'Shuang-Liang Li'\n- Minhua Zhou\n- Minfeng Gu\nbibliography:\n- 'x-ray.bib'\ndate: 'Received; accepted'\ntitle: 'Constraining X-ray emission of magnetically arrested disk (MAD) by radio-loud AGNs with extreme ultraviolet (EUV) deficit'\n---\n\n[Active galactic nuclei (AGNs) with EUV deficit are suggested to be powered by a MAD surrounding the black hole, where the slope of EUV spectra ($\\alpha_{\\rm EUV}$) is found to possess a clear positive relationship with the jet efficiency. In this work, we investigate the properties of X-ray emission in AGNs with EUV deficit for the first time.]{} [We construct a sample of 15 objects with EUV deficit to analyse their X-ray emission. The X-ray luminosities of 13 objects are newly processed by ourselves, while the other 2 sources are gathered from archival data.]{} [It is found that the average X-ray flux of AGNs with EUV deficit is 4.5 times larger than that of radio-quiet AGNs (RQAGNs), while the slope of the relationship between the optical-UV luminosity ($L_{\\rm UV}$) and the X-ray luminosity ($L_{\\rm X}$) is found to be similar to that of RQAGNs. For comparison, the average X-ray flux of radio-loud AGNs (RLAGNs) without EUV deficit is about 2-3 times larger than that of RQAGNs." +"---\naddress: 'Institute of Nuclear Physics Polish Academy of Sciences, 31-342 Krakow, Poland; [apoorva.bhatt@ifj.edu.pl]{}, [pawel.malecki@ifj.edu.pl]{}'\n---\n\nIntroduction\n============\n\nCosmic rays are high-energy particles that originate in outer space. The main sources of the cosmic rays which strike the earth\u2019s atmosphere from all directions are the Sun, objects in our Galaxy, and objects beyond our Galaxy, depending on the energy of the primary cosmic ray. 
The cosmic rays are mostly nuclei of atoms such as H, He, and heavier elements. They also include $e^-$, $e^+$, and other subatomic particles. The energy of cosmic rays varies from a few MeVs [(10$^6$ eVs)]{} to TeVs [(10$^{12}$ eVs)]{} and beyond. The primary cosmic rays interact with the atoms and nuclei in the atmosphere producing large cascades of secondary particles mainly hadrons, known as the Extensive Air Showers (EAS). The hadrons undergo strong interactions with the atmospheric nuclei such as nitrogen and oxygen and produce hadron showers containing mainly $\\pi$\u2019s and $K$\u2019s. Out of the secondary particles, $\\pi$\u2019s are the most abundant due to their lower mass. If the secondary particles have sufficient energy they initiate new interactions. The unstable particles like $\\pi$s and $K$s decay through the channels, $$\\pi^+\\left(\\pi^-\\right) \\rightarrow \\mu^+\\left(\\mu^-\\right) + \\nu_\\mu\\left(\\bar\\nu_\\mu\\right),$$ $$K^+\\left(K^-\\right)" +"---\nabstract: 'We present optimal survey strategies for the upcoming NIX imager, part of the ERIS instrument to be installed on the Very Large Telescope (VLT). We will use a custom 2.2$\\mu$m $K$-peak filter to optimise the efficiency of a future large-scale direct imaging survey, aiming to detect brown dwarfs and giant planets around nearby stars. We use the results of previous large scale imaging surveys (primarily SPHERE SHINE and Gemini GPIES) to inform our choice of targets, as well as improved planet population distributions. We present four possible approaches to optimise survey target lists for the highest yield of detections: i) targeting objects with anomalous proper motion trends, ii) a follow-up survey of dense fields from SPHERE SHINE and Gemini GPIES iii) surveying nearby star-forming regions and iv) targeting newly discovered members of nearby young moving groups. 
We also compare the predicted performance of NIX to other state-of-the-art direct imaging instruments.'\nauthor:\n- |\n Sophie Dubber$^{1,2}$[^1], Beth Biller$^{1,2}$, Mariangela Bonavita$^{1,2}$, Katelyn Allers$^{3}$, Cl\u00e9mence Fontanive$^{4}$, Matthew A. Kenworthy${^5}$, Micka\u00ebl Bonnefoy${^6}$, William Taylor$^{7}$\\\n $^{1}$SUPA, Institute for Astronomy, Royal Observatory, University of Edinburgh, Blackford Hill, Edinburgh EH93HJ, UK\\\n $^{2}$Centre for Exoplanet Science, University of Edinburgh, Edinburgh, UK\\\n $^{3}$Department of Physics and Astronomy," +"---\nabstract: 'We calculate the relativistic six-meson scattering amplitude at low energy within the framework of QCD-like theories with $n$ degenerate quark flavors at next-to-leading order in the chiral counting. We discuss the cases of complex, real and pseudo-real representations, i.e.\u00a0with global symmetry and breaking patterns $\\operatorname{SU}(n)\\times\\operatorname{SU}(n)/\\operatorname{SU}(n)$ (extending the QCD case), $\\operatorname{SU}(2n)/\\operatorname{SO}(2n)$, and $\\operatorname{SU}(2n)/\\operatorname{Sp}(2n)$. In case of the one-particle-irreducible part, we obtain analytical expressions in terms of 10 six-meson subamplitudes based on the flavor and group structures. We extend on our previous results obtained within the framework of the $\\operatorname{O}(N+1)/\\operatorname{O}(N)$ non-linear sigma model, with $N$ being the number of meson flavors. This work allows for studying a number of properties of six-particle amplitudes at one-loop level.'\nauthor:\n- Johan Bijnens\n- Tom\u00e1\u0161 Husek\n- Mattias Sj\u00f6\ntitle: 'Six-meson amplitude in QCD-like theories'\n---\n\n(-,1cm) LU TP 22-45\n\nIntroduction\n============\n\nQuantum chromodynamics (QCD), the fundamental theory of the strong interaction, becomes non-perturbative at low energy and it is therefore impractical for phenomenology in that regime. 
From the large-distance perspective, the fundamental quark and gluon degrees of freedom are effectively replaced by composite colorless states, the lightest of which are the mesons. These can be approximately interpreted as the Nambu\u2013Goldstone bosons" +"---\nabstract: 'We investigate the superradiant instability of Kerr black holes under a massive scalar perturbation. We obtain a potential $V_i(r)$ when expanding the scalar potential $V_K(r)$ for large $r$. The Newton potential $V_1(r)$ and the far-region potential $V_2(r)$ are used to explore the superradiant instability, while $V_3(r)$ matches a geodesic potential $V_{gK}(r)$ for a neutral particle moving around the Kerr black hole. Thus, $V_{gK}(r)$ is employed to fix the separation constant. We obtain a condition for a trapping well to possess a quasibound state in the Kerr black holes by analyzing the Newton potential $V_1(r)$ and far-region wave functions obtained from $V_2(r)$. The other condition for no trapping well (a tiny well) is found to generate an asymptotic bound state. Finally, we discuss an ultralight boson whose potential has a tiny well located in the asymptotic region.'\n---\n\n\\\nYun Soo Myung$^{a,b}$[^1]\\\n[${}^a$Institute of Basic Sciences and Department of Computer Simulation, Inje University Gimhae 50834, Korea\\\n]{}\n\n[${}^b$Asia Pacific Center for Theoretical Physics, Pohang 37673, Korea]{}\n\nIntroduction\n============\n\nA merging of two black holes in vacuum is a meticulous prediction of general relativity, which has been confirmed recently by gravitational wave observations of the LIGO/Virgo Collaboration\u00a0[@LIGOScientific:2016aoc; @LIGOScientific:2021sio]. If dark matter clusters" +"---\nabstract: 'The Double Asteroid Redirection Test (DART) is a NASA mission intended to crash a projectile on Dimorphos, the secondary component of the binary (65803) Didymos system, to study its orbit deflection. 
As a consequence of the impact, a dust cloud will be ejected from the body, potentially forming a transient coma- or comet-like tail in the hours or days following the impact, which might be observed using ground-based instrumentation. Based on the mass and speed of the impactor, and using known scaling laws, the total mass ejected can be roughly estimated. Then, with the aim of providing approximate expected brightness levels of the coma and tail extent and morphology, we have propagated the orbits of the particles ejected by integrating their equation of motion, and have used a Monte Carlo approach to study the evolution of the coma and tail brightness. For a typical power-law particle size distribution of index \u20133.5, with radii r$_{min}$=1 $\\mu$m and r$_{max}$=1 cm, and ejection speeds near 10 times the escape velocity of Dimorphos, we predict an increase of brightness of $\\sim$3 magnitudes right after the impact, and a decay to pre-impact levels some 10 days after. That would be the case if
Promising alternative classes of materials are identified, showing properties that can be leveraged to synthesize photocathode structures that can outperform III-V semiconductors in the production of spin polarized electron beams and support the operating conditions of advanced electron sources for new facilities.\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe use of spin polarized electron sources finds application in a wide range of electron accelerators of interest for HEP. The planned International Linear Collider [@1] is designed to collide spin polarized electrons and positrons at very large energies in the TeV scale, and SuperKEKB is considering upgrading to a highly spin polarized electron source [@2]. These facilities are projected to require a relatively modest amount of average current, no more than a few hundred $\\mu$A, which can be produced" +"---\nabstract: 'The problem of joint sequential detection and isolation is considered in the context of multiple, not necessarily independent, data streams. A multiple testing framework is proposed, where each hypothesis corresponds to a different subset of data streams, the sample size is a stopping time of the observations, and the probabilities of four kinds of error are controlled below distinct, user-specified levels. Two of these errors reflect the detection component of the formulation, whereas the other two reflect the isolation component. The optimal expected sample size is characterized to a first-order asymptotic approximation as the error probabilities go to 0. Different asymptotic regimes, expressing different prioritizations of the detection and isolation tasks, are considered. A novel, versatile family of testing procedures is proposed, in which two distinct, in general, statistics are computed for each hypothesis, one addressing the detection task and the other the isolation task. 
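The two-statistic recipe just described can be illustrated with a toy sketch (not the paper's actual procedure): a CUSUM-style statistic per stream, a detection decision when any statistic crosses a threshold, and isolation by arg-max. The Gaussian pre/post-change means, the threshold, and the stream values below are illustrative assumptions.

```python
def cusum_step(stat, x, mu0=0.0, mu1=1.0, sigma=1.0):
    # One CUSUM update: accumulate the log-likelihood ratio of
    # N(mu1, sigma^2) against N(mu0, sigma^2), clipped at zero.
    llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
    return max(0.0, stat + llr)

def detect_and_isolate(streams, threshold=3.0):
    """Stop as soon as any stream's statistic crosses `threshold`
    (detection) and report the arg-max stream (isolation)."""
    stats = {name: 0.0 for name in streams}
    horizon = len(next(iter(streams.values())))
    for t in range(horizon):
        for name, xs in streams.items():
            stats[name] = cusum_step(stats[name], xs[t])
        if max(stats.values()) > threshold:
            return t + 1, max(stats, key=stats.get)
    return None, None

# Stream "b" carries a mean shift; stream "a" stays in control.
streams = {"a": [0.1, -0.2, 0.0, 0.1, -0.1, 0.2, 0.0, 0.1],
           "b": [1.2, 0.9, 1.1, 1.3, 0.8, 1.0, 1.2, 1.1]}
stopping_time, culprit = detect_and_isolate(streams)
```

In the paper's framework the detection and isolation statistics are distinct and thresholded separately; here a single statistic plays both roles for brevity.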
Tests in this family, of various computational complexities, are shown to be asymptotically optimal under different setups. The general theory is applied to the detection and isolation of anomalous, not necessarily independent, data streams, as well as to the detection and isolation of an unknown dependence structure.'\naddress: 'Department of Statistics, University of Illinois," +"---\nabstract: 'In Reynolds-averaged-Navier-Stokes (RANS) simulations for turbulent scalar transport, it is common that using an eddy-viscosity (EV) model to close the Reynolds stress yields reasonable mean flow predictions but large errors in scalar transfer results regardless of scalar flux model inadequacies. This failure mode of EV models is generally related to the fact that the transport of momentum and scalar depends on different Reynolds stress components. The present work addresses two common issues relevant to such failures in the scenarios of turbulent scalar transport around obstacles. The first issue is the general overprediction of scalar transfer near the upwind surfaces, which is primarily attributed to the absence of wall-blocking mechanism in conventional EV models. We accordingly propose a Shear-Preserving-Wall-Blocking (SPWB) method to analytically correct the overpredicted wall-normal stress under the realizability constraint. The second issue is the general underprediction of scalar transfer in the downstream large separation regions, which is essentially attributed to the presence of vortex shedding invalidating the scaling ground in conventional EV models\u2019 dissipation closures. We accordingly generalize the recently proposed Double-Scale Double-Linear-EV (DSDL) model to scalar transport predictions. Consequently, a combined model, SPWB-DSDL, is developed. 
The model is then applied to two test cases, of" +"---\nabstract: 'We construct a hybrid cavity magnomechanical system to transfer the bipartite entanglements and achieve the strong microwave photon-phonon entanglement based on the reservoir engineering approach. The magnon mode is coupled to the microwave cavity mode via magnetic dipole interaction, and to the phonon mode via magnetostrictive force (optomechanical-like). It is shown that the initial magnon-phonon entanglement can be transferred to the photon-phonon subspace in the case of these two interactions cooperating. In the reservoir-engineering parameter regime, the initial entanglement is directionally transferred to the photon-phonon subsystem, so we obtain a strong bipartite entanglement in which the magnon mode acts as the cold reservoir to effectively cool the Bogoliubov mode delocalized over the cavity and the mechanical deformation mode. Moreover, as the dissipation ratio between the cold reservoir mode and the target mode increases, greater quantum entanglement and better cooling effect can be achieved. Our results indicate that the steady-state entanglement is robust against temperature. The scheme may provide potential applications for quantum information processing, and is expected to be extended to other three-mode systems.'\nauthor:\n- 'Zhi-Qiang Liu'\n- Yun Liu\n- Lei Tan\n- 'Wu-Ming Liu'\ntitle: Reservoir engineering strong quantum entanglement in cavity magnomechanical systems\n---" +"---\nauthor:\n- Andrey Grabovsky\n- and Vitaly Vanchurin\ntitle: |\n Bio-inspired Machine Learning:\\\n programmed death and replication\n---\n\n[abstract[We analyze algorithmic and computational aspects of biological phenomena, such as replication and programmed death, in the context of machine learning. 
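A minimal sketch of the programmed-death (pruning) and replication steps described in the bio-inspired machine learning work above, assuming a simple magnitude-based neuron-efficiency proxy rather than the paper's actual efficiency measures:

```python
import random

def neuron_efficiency(weights):
    # Toy efficiency measure: mean absolute outgoing weight.
    # (The paper defines two dedicated efficiency measures; this
    # magnitude proxy is only an illustrative stand-in.)
    return sum(abs(w) for w in weights) / len(weights)

def programmed_death(layer, threshold):
    """Prune neurons whose efficiency falls below `threshold`."""
    return [n for n in layer if neuron_efficiency(n) >= threshold]

def replication(layer, threshold, jitter=0.01):
    """Duplicate high-efficiency neurons, with small weight perturbations
    so the clones can specialize during further training."""
    clones = [[w + random.uniform(-jitter, jitter) for w in n]
              for n in layer if neuron_efficiency(n) > threshold]
    return layer + clones

layer = [[0.9, -0.8], [0.05, 0.02], [0.7, 0.6]]  # 3 neurons, 2 weights each
pruned = programmed_death(layer, threshold=0.1)  # removes the weak neuron
grown = replication(pruned, threshold=0.7)       # clones the strongest one
```

Pruning compresses the network (programmed death), while replication grows capacity where it is already being used effectively.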
We use two different measures of neuron efficiency to develop machine learning algorithms for adding neurons to the system (i.e. replication algorithm) and removing neurons from the system (i.e. programmed death algorithm). We argue that the programmed death algorithm can be used for compression of neural networks and the replication algorithm can be used for improving performance of the already trained neural networks. We also show that a combined algorithm of programmed death and replication can improve the learning efficiency of arbitrary machine learning systems. The computational advantages of the bio-inspired algorithms are demonstrated by training feedforward neural networks on the MNIST dataset of handwritten images. ]{}]{}\n\nIntroduction\n============\n\nArtificial neural networks [@Galushkin; @Schmidhuber; @Haykin] have been successfully used for solving computational problems in natural language processing, pattern recognition, data analysis etc. In addition to the empirical results" +"---\nabstract: 'In this paper, we introduce a synthesis technique for transmission line based decoupling networks, which find application in coupled systems such as multiple-antenna systems and antenna arrays. Employing the generalized $\\pi$-network and the transmission line analysis technique, we reduce the decoupling network design into simple matrix calculations. The synthesized decoupling network is essentially a generalized $\\pi$-network with transmission lines at all branches. 
The advantage of this proposed decoupling network is that it can be implemented using transmission lines, ensuring better control on loss, performance consistency and higher power handling capability, when compared with lumped components, and can be easily scaled for operation at different frequencies.'\nauthor:\n- 'Binbin Yang [^1]'\nbibliography:\n- 'DN.bib'\ntitle: Synthesis of General Decoupling Networks Using Transmission Lines\n---\n\nDecoupling network, mutual coupling, multi-port systems, antenna array, network synthesis, transmission lines.\n\nIntroduction\n============\n\narrays and multiple-antenna systems typically suffer from mutual coupling effect, which causes lower system efficiency, impedance mis-matching [@balanis2016antenna] and correlation-induced capacity reduction [@wallace2004mutual]. Therefore, a decoupling network that combats the mutual coupling effect is of great interest to such coupled systems. The reported works in literature on decoupling techniques typically deal with coupled networks with special characteristics, such as circular symmetry" +"---\nabstract: 'We discuss the spin susceptibility of superconductors in which a Cooper pair consists of two electrons having the angular momentum $J=3/2$ due to strong spin-orbit interactions. The susceptibility is calculated analytically for several pseudospin quintet states in a cubic superconductor within the linear response to a Zeeman field. The susceptibility for $A_{1g}$ symmetry states is isotropic in real space. For $E_g$ and $T_{2g}$ symmetry cases, the results depend sensitively on choices of order parameter. The susceptibility is isotropic for a $T_{2g}$ symmetry state, whereas it becomes anisotropic for an $E_{g}$ symmetry state. 
We also find in a $T_{2g}$ state that the susceptibility tensor has off-diagonal elements.'\nauthor:\n- 'Dakyeong Kim$^{1}$'\n- 'Takumi Sato$^{1}$'\n- 'Shingo Kobayashi$^{2}$'\n- 'Yasuhiro Asano$^{1}$'\ntitle: ' Spin Susceptibility of a $J=3/2$ Superconductor'\n---\n\nIntroduction\n============\n\nSpin-orbit interaction is a source of exotic electronic states realized in topological semimetals\u00a0[@chen:cpb2016; @armitage:rmp2018], topological insulators\u00a0[@hasan:rmp2010; @qi:rmp2011], and topological superconductors\u00a0[@tanaka:jpsj2011; @sato:jpsj2016; @mizushima:jpsj2016; @chiu:rmp2016; @sato:rpp2017; @sato:rpp2017]. In the presence of strong spin-orbit interactions, spin $S=1/2$ and orbital angular momentum $L=1$ of an electron are inseparable degrees of freedom. Electronic properties of such materials are characterized by an electron with pseudospin $J=L+S=3/2$. Recent studies have suggested a" +"---\nabstract: 'We present a broadband X-ray study of W50 (\u2018the Manatee nebula\u2019), the complex region powered by the microquasar SS\u00a0433, that provides a test-bed for several important astrophysical processes. The W50 nebula, a Galactic PeVatron candidate, is classified as a supernova remnant but has an unusual double-lobed morphology likely associated with the jets from SS\u00a0433. Using NuSTAR, XMM-Newton, and Chandra observations of the inner eastern lobe of W50, we have detected hard non-thermal X-ray emission up to $\\sim$30\u00a0keV, originating from a few-arcminute size knotty region (\u2018Head\u2019) located $\\lesssim$ 18$^{\\prime}$ (29\u00a0pc for a distance of 5.5\u00a0kpc) east of SS\u00a0433, and constrain its photon index to 1.58$\\pm$0.05 (0.5\u201330 keV band). The index gradually steepens eastward out to the radio \u2018ear\u2019 where thermal soft X-ray emission with a temperature $kT$$\\sim$0.2\u00a0keV dominates. 
The hard X-ray knots mark the location of acceleration sites within the jet and require an equipartition magnetic field of the order of $\\gtrsim$12$\\mu$G. The unusually hard spectral index from the \u2018Head\u2019 region challenges classical particle acceleration processes and points to particle injection and re-acceleration in the sub-relativistic SS\u00a0433 jet, as seen in blazars and pulsar wind nebulae.'\nauthor:\n- 'S. Safi-Harb'\n-" +"---\nabstract: |\n We consider a general non-linear model where the signal is a finite mixture of an unknown, possibly increasing, number of features issued from a continuous dictionary parameterized by a real non-linear parameter. The signal is observed with Gaussian (possibly correlated) noise in either a continuous or a discrete setup. We propose an off-the-grid optimization method, that is, a method which does not use any discretization scheme on the parameter space, to estimate both the non-linear parameters of the features and the linear parameters of the mixture.\n\n We use recent results on the geometry of off-the-grid methods to give minimal separation on the true underlying non-linear parameters such that interpolating certificate functions can be constructed. Using also tail bounds for suprema of Gaussian processes we bound the prediction error with high probability. Assuming that the certificate functions can be constructed, our prediction error bound is up to $\\log-$factors similar to the rates attained by the Lasso predictor in the linear regression model. We also establish convergence rates that quantify with high probability the quality of estimation for both the linear and the non-linear parameters.\naddress:\n- 'Cristina Butucea, CREST, ENSAE, IP Paris'\n- 'Jean-Fran\u00e7ois Delmas, CERMICS, \u00c9cole des" +"---\nabstract: 'End-to-end spoken language understanding (SLU) systems benefit from pretraining on large corpora, followed by fine-tuning on application-specific data. 
The resulting models are too large for on-edge applications. For instance, BERT-based systems contain over 110M parameters. Observing that the model is over-parameterized, we propose a lean transformer structure where the dimension of the attention mechanism is automatically reduced using group sparsity. We propose a variant where the learned attention subspace is transferred to an attention bottleneck layer. In a low-resource setting and without pre-training, the resulting compact SLU model achieves accuracies competitive with pre-trained large models.'\naddress: 'Department of Electrical Engineering-ESAT, KU Leuven, Belgium'\nbibliography:\n- 'mybib.bib'\ntitle: 'Bottleneck Low-rank Transformers for Low-resource Spoken Language Understanding'\n---\n\n**Index Terms**: low-rank transformer, bottleneck attention, low-resource spoken language understanding\n\nIntroduction\n============\n\nSpoken language understanding (SLU) systems that infer users\u2019 intents from speech have drawn increasing attention since the adoption of voice assistants such as Apple Siri and Amazon Alexa [@Ballati2018Siri; @coucke2018snips; @kumar2017just]. A traditional SLU system is the pipeline of an automatic speech recognition (ASR) component which decodes transcriptions from input speech and a natural language understanding (NLU) component which extracts meaning (intents) from the decoded transcriptions. To address several drawbacks of the pipeline structure" +"---\nabstract: 'We introduce a novel, practically relevant variation of the anomaly detection problem in multi-variate time series: *intrinsic anomaly detection*. It appears in diverse practical scenarios ranging from DevOps to IoT, where we want to recognize failures of a system that operates under the influence of a surrounding environment. 
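The attention-bottleneck idea from the low-rank transformer work above can be sketched as a projection of token features through a narrow subspace. The matrix sizes below are illustrative, not the paper's architecture, and `P` stands in for a learned projection whose columns group sparsity would drive to zero during training:

```python
def matmul(A, B):
    # Naive dense matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def bottleneck_projection(X, P):
    """Project d-dimensional token features X (n x d) into an r-dimensional
    attention subspace via P (d x r), r << d, then back out via P^T.
    Group sparsity in training would zero out whole columns of P,
    shrinking r automatically; here P is a fixed illustrative matrix."""
    low = matmul(X, P)                               # n x r (bottleneck)
    back = matmul(low, [list(c) for c in zip(*P)])   # n x d (reconstruction)
    return low, back

X = [[1.0, 2.0, 3.0, 4.0],
     [0.5, 0.5, 0.5, 0.5]]                           # n=2 tokens, d=4
P = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]] # keep 2 of 4 dims (r=2)
low, back = bottleneck_projection(X, P)
```

The attention computation then operates in the r-dimensional bottleneck, cutting parameters and compute roughly in proportion to r/d.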
Intrinsic anomalies are changes in the functional dependency structure between time series that represent an environment and time series that represent the internal state of a system that is placed in said environment. We formalize this problem, provide under-studied public and new purpose-built data sets for it, and present methods that handle intrinsic anomaly detection. These address the short-coming of existing anomaly detection methods that cannot differentiate between expected changes in the system\u2019s state and unexpected ones, i.e., changes in the system that deviate from the environment\u2019s influence. Our most promising approach is fully unsupervised and combines adversarial learning and time series representation learning, thereby addressing problems such as label sparsity and subjectivity, while allowing to navigate and improve notoriously problematic anomaly detection data sets.'\nauthor:\n- |\n Stephan Rabanser[^1] [^2]\\\n University of Toronto & Vector Institute\\\n `stephan@cs.toronto.edu`\\\n- |\n Tim Januschowski\\\n Zalando Research\\\n `tim.januschowski@zalando.de`\\\n- |\n Kashif Rasul\\\n Morgan" +"---\nabstract: 'We consider the anomalous magnetic moment of the muon $a_\\mu$, which shows a significant deviation from the Standard Model expectation given the recent measurements at Fermilab and BNL. We focus on Standard Model Effective Field Theory (SMEFT) with the aim to identify avenues for the upcoming LHC runs and future experiments such as MUonE. To this end, we include radiative effects to $a_\\mu$ in SMEFT to connect the muon anomaly to potentially interesting searches at the LHC, specifically Higgs decays into muon pairs and such decays with resolved photons. Our investigation shows that similar to results for concrete UV extensions of the Standard Model, the Fermilab/BNL result can indicate strong coupling within the EFT framework and $a_\\mu$ is increasingly sensitive to a single operator direction for high scale UV completions. 
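The notion of an intrinsic anomaly introduced above (a break in the dependency between environment and system time series) can be sketched with a linear stand-in for the learned dependency; the data, tolerance, and least-squares fit are illustrative assumptions, not the paper's adversarial representation-learning method:

```python
def fit_slope(env, sys):
    # Least-squares slope (no intercept) for the environment -> system map.
    return sum(e * s for e, s in zip(env, sys)) / sum(e * e for e in env)

def intrinsic_anomalies(env, sys, train_end, tol=0.5):
    """Learn the environment -> system dependency on a reference window,
    then flag steps where the system deviates from that prediction
    (an *intrinsic* anomaly), even if the raw system values themselves
    look unremarkable."""
    slope = fit_slope(env[:train_end], sys[:train_end])
    return [t for t, (e, s) in enumerate(zip(env, sys))
            if abs(s - slope * e) > tol]

env = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. ambient temperature
sys = [2.0, 4.0, 6.0, 8.0, 4.0]   # internal state; the ~2x dependency breaks at t=4
flags = intrinsic_anomalies(env, sys, train_end=4)
```

Note that the flagged value 4.0 is itself unexceptional; only its relation to the environment is anomalous, which is exactly what distinguishes intrinsic from ordinary anomalies.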
In such cases, there is some complementarity between expected future experimental improvements, yet with considerable statistical challenges to match the precision provided by the recent $a_\\mu$ measurement.'\nauthor:\n- Akanksha Bhardwaj\n- Christoph Englert\n- Panagiotis Stylianou\nbibliography:\n- 'paper.bib'\ntitle: Implications of the muon anomalous magnetic moment for the LHC and MUonE\n---\n\nIntroduction {#sec:intro}\n============\n\nThe search for new physics beyond the Standard Model (SM), which" +"---\nabstract: 'The purpose of this article is to initiate the investigation of the curvature operator of the second kind on K\u00e4hler manifolds. The main result asserts that a closed K\u00e4hler surface with six-positive curvature operator of the second kind is biholomorphic to $\\mathbb{CP}^2$. It is also shown that a closed non-flat K\u00e4hler surface with six-nonnegative curvature operator of the second kind is either biholomorphic to $\\mathbb{CP}^2$ or isometric to $\\mathbb{S}^2 \\times \\mathbb{S}^2$.'\naddress: 'Department of Mathematics, Statistics and Physics, Wichita State University, Wichita, KS, 67260'\nauthor:\n- Xiaolong Li\nbibliography:\n- 'ref.bib'\ntitle: 'K\u00e4hler surfaces with six-positive curvature operator of the second kind'\n---\n\n[^1]\n\nIntroduction\n============\n\nThe Riemann curvature tensor $R_{ijkl}$ of a Riemannian manifold $(M^n,g)$ defines two kinds of curvature operators: $\\hat{R}$ acting on two-forms via $$ \\hat{R}(e_i\\wedge e_j) =\\frac 1 2 \\sum_{k,l}R_{ijkl}e_k \\wedge e_l,$$ and $\\mathring{R}$ acting on symmetric two-tensors via $$ \\mathring{R}(e_i \\odot e_j) =\\sum_{k,l}R_{iklj} e_k \\odot e_l,$$ where $\\{e_1, \\cdots, e_n\\}$ is an orthonormal basis of the tangent space $T_pM$ at $p \\in M$, $\\wedge$ denotes the wedge product, and $\\odot$ denotes the symmetric product. 
The self-adjoint operator $\\hat{R}$ is the so-called curvature operator (of the first kind by Nishikawa [@Nishikawa86]) and it is" +"---\nabstract: 'Over the past decades, the incidence of thyroid cancer has been increasing globally. Accurate and early diagnosis allows timely treatment and helps to avoid over-diagnosis. Clinically, a nodule is commonly evaluated from both transverse and longitudinal views using thyroid ultrasound. However, the appearance of the thyroid gland and lesions can vary dramatically across individuals. Identifying key diagnostic information from both views requires specialized expertise. Furthermore, finding an optimal way to integrate multi-view information also relies on the experience of clinicians and adds further difficulty to accurate diagnosis. To address these, we propose a personalized diagnostic tool that can customize its decision-making process for different patients. It consists of a multi-view classification module for feature extraction and a personalized weighting allocation network that generates optimal weighting for different views. It is also equipped with a self-supervised view-aware contrastive loss to further improve the model robustness towards different patient groups. Experimental results show that the proposed framework can better utilize multi-view information and outperform the competing methods.'\nauthor:\n- Han Huang\n- Yijie Dong\n- Xiaohong Jia\n- Jianqiao Zhou\n- Dong Ni\n- 'Jun Cheng^()^'\n- 'Ruobing Huang^()^'\nbibliography:\n- 'references.bib'\ntitle: 'Personalized Diagnostic Tool for Thyroid Cancer Classification" +"---\nabstract: 'Cooling the trapped atoms toward their motional ground states is key to applications of quantum simulation and quantum computation. 
By utilizing nonreciprocal couplings between constituent atoms, we present an intriguing dark-state cooling scheme in a $\\Lambda$-type three-level structure, which is shown to be superior to the conventional electromagnetically-induced-transparency cooling in a single atom. The effective nonreciprocal couplings can be facilitated either by an atom-waveguide interface or a free-space photonic quantum link. By tailoring system parameters allowed in dark-state cooling, we identify the parameter regions of better cooling performance with an enhanced cooling rate. We further demonstrate a mapping to the dark-state sideband cooling under asymmetric laser driving fields, which shows a distinct heat transfer and promises an outperforming dark-state sideband cooling assisted by collective spin-exchange interactions.'\nauthor:\n- 'Chun-Che Wang'\n- 'Yi-Cheng Wang'\n- 'Chung-Hsien Wang'\n- 'Chi-Chih Chen'\n- 'H. H. Jen'\ntitle: 'Superior dark-state cooling via nonreciprocal couplings in trapped atoms'\n---\n\n[^1]\n\n[^2]\n\nIntroduction\n============\n\nCooling the trapped atoms toward their motional ground states engages a series of improving laser cooling techniques from Doppler to Sisyphus and subrecoil cooling schemes [@Cohen2011]. Upon approaching the motional ground state of atoms, the linewidth or the spontaneous decay rate of the" +"---\nabstract: 'In this paper, we present an incremental domain adaptation technique to prevent catastrophic forgetting for an end-to-end automatic speech recognition (ASR) model. Conventional approaches require extra parameters of the same size as the model for optimization, and it is difficult to apply these approaches to end-to-end ASR models because they have a huge number of parameters. To solve this problem, we first investigate which parts of end-to-end ASR models contribute to high accuracy in the target domain while preventing catastrophic forgetting. 
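The incremental-adaptation idea above, fine-tuning only a chosen subset of parameter groups while freezing the rest, can be sketched as follows; the group names, learning rate, and toy parameters are assumptions for illustration, not the paper's model:

```python
def select_trainable(params, adapt_keys):
    """Mark only the chosen parameter groups as trainable; freezing the
    rest limits drift away from the source domain, which is what keeps
    catastrophic forgetting in check."""
    return {name: (name in adapt_keys) for name in params}

def sgd_step(params, grads, trainable, lr=0.1):
    # Apply a gradient step only to trainable groups; frozen groups
    # keep their source-domain values untouched.
    return {name: ([w - lr * g for w, g in zip(ws, grads[name])]
                   if trainable[name] else list(ws))
            for name, ws in params.items()}

params = {"encoder.linear": [1.0, 1.0],
          "encoder.attention": [1.0, 1.0],
          "decoder.linear": [1.0, 1.0]}
grads = {name: [0.5, 0.5] for name in params}
trainable = select_trainable(params, adapt_keys={"encoder.linear"})
updated = sgd_step(params, grads, trainable)
```

In a real framework the same effect is obtained by disabling gradients on the frozen parameter groups before fine-tuning.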
We conduct experiments on incremental domain adaptation from the LibriSpeech dataset to the AMI meeting corpus with two popular end-to-end ASR models and find that adapting only the linear layers of their encoders can prevent catastrophic forgetting. Then, on the basis of this finding, we develop an element-wise parameter selection focused on specific layers to further reduce the number of fine-tuning parameters. Experimental results show that our approach consistently prevents catastrophic forgetting compared to parameter selection from the whole model.'\naddress: |\n $^1$ Hitachi, Ltd. Research & Development Group, Japan\\\n $^2$ Language Technologies Institute, Carnegie Mellon University, USA\\\n $^3$ Center for Language and Speech Processing, Johns Hopkins University, USA\nbibliography:\n- 'mybib.bib'\ntitle: |\n Updating Only Encoders" +"---\nabstract: |\n Regression learning is classic and fundamental for medical image analysis. It provides the continuous mapping for many critical applications, such as attribute estimation, object detection, segmentation and non-rigid registration. However, previous studies mainly took the case-wise criteria, such as the mean square error, as the optimization objectives. They ignored the very important population-wise *correlation* criterion, which is exactly the final evaluation metric in many tasks. In this work, we propose to revisit the classic regression tasks with novel investigations on directly optimizing the fine-grained correlation losses. We mainly explore two complementary correlation indexes as learnable losses: Pearson linear correlation (PLC) and Spearman rank correlation (SRC). The contributions of this paper are twofold. First, for the PLC at the global level, we propose a strategy to make it robust against outliers and regularize the key distribution factors. These efforts significantly stabilize the learning and magnify the efficacy of PLC. 
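A plain version of the Pearson-correlation (PLC) loss described above can be sketched as `1 - r`; the outlier-robustness strategy and distribution regularization the paper adds are not reproduced here:

```python
import math

def pearson_loss(pred, target):
    """Loss 1 - r, where r is the Pearson linear correlation between
    predictions and targets. Minimizing it optimizes the population-wise
    evaluation metric directly, instead of a case-wise error such as MSE."""
    n = len(pred)
    mp = sum(pred) / n
    mt = sum(target) / n
    cov = sum((p - mp) * (t - mt) for p, t in zip(pred, target))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in target))
    return 1.0 - cov / (sp * st)

perfect = pearson_loss([1.0, 2.0, 3.0], [10.0, 20.0, 30.0])  # r = 1, loss ~ 0
noisy = pearson_loss([1.0, 3.0, 2.0], [10.0, 20.0, 30.0])    # loss = 0.5
```

Because every term is a smooth function of the predictions, the same expression can serve directly as a differentiable training objective in an autograd framework.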
Second, for the SRC at the local level, we propose a coarse-to-fine scheme to ease the learning of the exact ranking order among samples. Specifically, we convert the learning for the ranking of samples into the learning of similarity relationships among samples. We extensively validate our method on two typical" +"---\nabstract: 'The Planck mission detected a positive correlation between the intensity ($T$) and $B$-mode polarization of the Galactic thermal dust emission. The $TB$ correlation is a parity-odd signal, whose statistical mean vanishes in models with mirror symmetry. Recent work has shown with strong evidence that local handedness of the misalignment between the dust filaments and the sky-projected magnetic field produces $TB$ signals. However, it remains unclear whether the observed global $TB$ signal is caused by statistical fluctuations of magnetic misalignment angles, or whether some parity-violating physics in the interstellar medium sets a preferred misalignment handedness. The present work aims to make a quantitative statement about how confidently the statistical-fluctuation interpretation is ruled out by filament-based simulations of polarized dust emission. We use the publicly available DUSTFILAMENTS code to simulate the dust emission from filaments whose magnetic misalignment angles are symmetrically randomized, and construct the probability density function of $\\xi_{p}$, a weighted sum of the $TB$ power spectrum. We find that Planck data has a $\\gtrsim 10\\sigma$ tension with the simulated $\\xi_{p}$ distribution. Our results strongly support that the Galactic filament misalignment has a preferred handedness, whose physical origin is yet to be identified.'\nauthor:\n- 'Zhiqi Huang $^{1,2}$'\ntitle: The" +"---\nabstract: 'During the reversible insertion of ions, lattices in intercalation materials undergo structural transformations. 
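Returning to the correlation-loss work above: the Spearman rank correlation (SRC) objective can be sketched with hard ranks and the classic rank-difference formula. The paper's differentiable coarse-to-fine relaxation via similarity learning is not reproduced here; hard ranks are shown only to make the target metric concrete:

```python
def ranks(xs):
    # Rank of each element (0 = smallest); assumes no ties for simplicity.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman_loss(pred, target):
    """1 - rho, with rho from the classic Spearman formula
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)).
    Hard ranks are not differentiable, which is why the paper learns
    pairwise similarity relationships as a relaxation instead."""
    n = len(pred)
    d2 = sum((rp - rt) ** 2 for rp, rt in zip(ranks(pred), ranks(target)))
    rho = 1.0 - 6.0 * d2 / (n * (n * n - 1))
    return 1.0 - rho

aligned = spearman_loss([0.1, 0.4, 0.9], [1.0, 2.0, 3.0])  # same order
swapped = spearman_loss([0.4, 0.1, 0.9], [1.0, 2.0, 3.0])  # one pair inverted
```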
These lattice transformations generate misfit strains and volume changes that, in turn, contribute to the structural decay of intercalation materials and limit their reversible cycling. In this paper, we draw on insights from shape-memory alloys, another class of phase transformation materials, that also undergo large lattice transformations but do so with negligible macroscopic volume changes and internal stresses. We develop a theoretical framework to predict structural transformations in intercalation compounds and establish crystallographic design rules necessary for forming shape-memory-like microstructures in intercalation materials. We use our framework to systematically screen open-source structural databases comprising $n>5,000$ pairs of intercalation compounds. We identify candidate compounds, such as Li$_x$Mn$_2$O$_4$ (Spinel), Li$_x$Ti$_2$(PO$_4$)$_3$ (NASICON), that approximately satisfy the crystallographic design rules and can be precisely doped to form shape-memory-like microstructures. Throughout, we compare our analytical results with experimental measurements of intercalation compounds. We find a direct correlation between structural transformations, microstructures, and increased capacity retention in these materials. These results, more generally, show that crystallographic designing of intercalation materials could be a novel route to discovering compounds that do not decay with continuous usage.'\nauthor:\n- Delin Zhang\n- 'Ananya" +"---\nabstract: '**Topologically protected magnetic textures, such as skyrmions, half-skyrmions (merons) and their antiparticles, constitute tiny whirls in the magnetic order. They are promising candidates for information carriers in next-generation memory devices, as they can be efficiently propelled at very high velocities using current-induced spin torques [@Tomasello2014; @Geng2017; @Gobel2019; @Gobel2021; @Yu2021; @Juge2021b]. 
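For the crystallographic design rules for intercalation compounds discussed above: one well-known compatibility condition from the shape-memory literature asks that the middle principal stretch of the transformation equal one, and negligible macroscopic volume change requires the determinant of the stretch tensor to be close to one. The sketch below checks these two rules for illustrative principal stretches; the tolerances and example numbers are assumptions, not values from the paper:

```python
def compatibility_check(principal_stretches, tol=0.02):
    """Check two shape-memory-style design rules for a transformation
    stretch tensor U, given by its principal stretches:
      (i)  middle principal stretch lambda_2 close to 1 (low-strain,
           compatible interfaces between the two lattices),
      (ii) det U close to 1 (negligible volume change on transformation).
    """
    l1, l2, l3 = sorted(principal_stretches)
    det = l1 * l2 * l3
    return abs(l2 - 1.0) <= tol and abs(det - 1.0) <= tol

# Illustrative principal stretches for two hypothetical lattice transformations:
good = compatibility_check([0.95, 1.00, 1.053])  # lambda_2 = 1, det ~ 1.0003
bad = compatibility_check([0.90, 0.95, 1.20])    # both rules violated
```

A database screen like the one described would evaluate such a predicate over the transformation stretch computed for each pair of intercalated/deintercalated structures.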
Antiferromagnets have been shown to host versions of these textures, which have gained significant attention because of their potential for terahertz dynamics, deflection free motion, and improved size scaling due to the absence of stray field [@Jani2021; @Barker2016; @Sort2006; @Wu2011; @Kolesnikov2018; @Chmiel2018]. Here we show that topological spin textures, merons and antimerons, can be generated at room temperature and reversibly moved using electrical pulses in thin film CuMnAs, a semimetallic antiferromagnet that is a testbed system for spintronic applications [@Wadley2016; @Grzybowski2017; @Olejnik2017; @Olejnik2018; @Wadley2018; @Kaspar2021; @Zubac2021]. The electrical generation and manipulation of antiferromagnetic merons is a crucial step towards realizing the full potential of antiferromagnetic thin films as active components in high density, high speed magnetic memory devices.**'\nauthor:\n- 'O. J. Amin'\n- 'S. F. Poole'\n- 'S. Reimers'\n- 'L. X. Barton'\n- 'F. Maccherozzi'\n- 'S. S. Dhesi'\n- 'V. Nov\u00e1k'\n- 'F. K\u0159\u00ed\u017eek'\n-" +"---\nabstract: 'In this paper, we reinvestigate remote state preparation by using the prepared non-maximally entangled channel. An innovative remote state preparation protocol is developed for deterministically preparing information encoded in quantum states without additional consumption of quantum resources. We have increased the success probability of preparing a d-dimensional quantum state to 1 via a non-maximally entangled quantum channel. A feasible experimental scheme is also designed to realize the above-mentioned deterministic scheme. 
This work provides a valuable method to address decoherence and environmental noise in the practical implementation of quantum communication tasks.'\nauthor:\n- Xuanxuan Xin\n- Shiwen He\n- Yongxing Li\n- Chong Li\nbibliography:\n- 'literature.bib'\ntitle: 'Optimal deterministic remote state preparation via a non-maximally entangled channel without additional quantum resources'\n---\n\nIntroduction\n============\n\nThe quantum network, the backbone of quantum computing architectures, is composed of spatially separated nodes storing quantum information in quantum bits and connected by quantum channels [@PhysRevA.29.1419; @PhysRevLett.78.3221; @ritter2012elementary; @simon2017towards; @PhysRevLett.120.030501]. The exchange of quantum information between different nodes is accomplished via quantum channels. Remote state preparation (RSP) can be considered as transferring known quantum information between two different quantum nodes [@PhysRevA.62.012313]. Two communicators called Alice and Bob are placed in two different quantum nodes" +"---\nabstract: 'A class of isotropic and scale invariant strain energy functions is given for which the corresponding spherically symmetric elastic motion includes bodies whose diameter becomes infinite with time or collapses to zero in finite time, depending on the sign of the residual pressure. The bodies are surrounded by vacuum so that the boundary surface forces vanish, while the density remains strictly positive. 
The body is subject only to internal elastic stress.'\naddress: |\n Department of Mathematics\\\n University of California\\\n Santa Barbara, CA 93106\\\n USA\nauthor:\n- 'Thomas C.\u00a0Sideris'\nbibliography:\n- 'AffineElasticity.bib'\ntitle: 'Expansion and collapse of spherically symmetric isotropic elastic bodies surrounded by vacuum\\'\n---\n\nIntroduction\n============\n\nWe shall be concerned with $C^2$ spherically symmetric and separable motions of a three-dimensional hyperelastic material based on a class of isotropic and scale invariant strain energy functions. The solid elastic body is surrounded by vacuum so that the boundary surface force vanishes, while the boundary density remains strictly positive. The body is subject only to internal elastic stress. Depending on the sign of the residual pressure, we shall show that the diameter of a spherical body can expand to infinity with time or it can collapse to zero in" +"---\nabstract: 'In recent years, deep learning has shown itself to be an incredibly valuable tool in cybersecurity as it helps network intrusion detection systems to classify attacks and detect new ones. Adversarial learning is the process of utilizing machine learning to generate a perturbed set of inputs to then feed to the neural network to misclassify it. Much of the current work in the field of adversarial learning has been conducted in image processing and natural language processing with a wide variety of algorithms. Two algorithms of interest are the Elastic-Net Attack on Deep Neural Networks and TextAttack. In our experiment, the EAD and TextAttack algorithms are applied to a Domain Name System amplification classifier. The algorithms are used to generate malicious Distributed Denial of Service adversarial examples to then feed as inputs to the network intrusion detection system\u2019s neural network to classify as valid traffic. 
We show in this work that both image processing and natural language processing adversarial learning algorithms can be applied against a network intrusion detection neural network.'\nauthor:\n- Jared Mathews\n- Prosenjit Chatterjee\n- Shankar Banik\n- Cory Nance\nbibliography:\n- 'references.bib'\ntitle: A Deep Learning Approach to Create DNS Amplification Attacks\n---" +"---\nabstract: 'The emerging class of instance-optimized systems has shown potential to achieve high performance by specializing to specific data and query workloads. Particularly, Machine Learning (ML) techniques have been applied successfully to build various instance-optimized components (e.g., learned indexes). This paper investigates how to leverage ML techniques to enhance the performance of spatial indexes, particularly the R-tree, for given data and query workloads. As the areas covered by the R-tree index nodes overlap in space, upon searching for a specific point in space, multiple paths from root to leaf may potentially be explored. In the worst case, the entire R-tree could be searched. In this paper, we define and use the overlap ratio to quantify the degree of extraneous leaf node accesses required by a range query. The goal is to enhance the query performance of a traditional R-tree for high-overlap range queries as they tend to incur long running times. We introduce a new AI-tree that transforms the search operation of an R-tree into a multi-label classification task to exclude the extraneous leaf node accesses. Then, we augment a traditional R-tree with the AI-tree to form a hybrid \u201cAI+R\u201d-tree. The \u201cAI+R\u201d-tree can automatically differentiate between the high- and" +"---\nabstract: '> Hierarchical Task Networks (HTN) planners generate plans using a decomposition process with extra domain knowledge to guide search towards a planning task. 
While domain experts develop HTN descriptions, they may repeatedly describe the same preconditions, or methods that are rarely used or possible to be decomposed. By leveraging a three-stage compiler design we can easily support more language descriptions and preprocessing optimizations that when chained can greatly improve runtime efficiency in such domains. In this paper we evaluate such optimizations with the HyperTensioN HTN planner, used in the HTN IPC 2020.'\nauthor:\n- |\n Maur\u00edcio Cec\u00edlio Magnaguagno$^1$, Felipe Meneguzzi$^2$ [and]{.nodecor}\u00a0Lavindra de Silva$^3$\\\n $^1$Independent researcher\\\n $^2$University of Aberdeen, Aberdeen, UK\\\n $^3$Department of Engineering, University of Cambridge, Cambridge, UK\\\n `maumagnaguagno@gmail.com`\\\n `felipe.meneguzzi@abdn.ac.uk`\\\n `lavindra.desilva@eng.cam.ac.uk`\ntitle: 'HyperTensioN and Total-order Forward Decomposition optimizations'\n---\n\nIntroduction\n============\n\nHierarchical planning was originally developed as a means to allow planning algorithms to incorporate domain knowledge into the search engine using an intuitive formalism\u00a0[@nau1999shop]. Hierarchical Task Network (HTN) is the most widely used formalism for hierarchical planning, having been implemented in a variety of systems rendered in different (though conceptually similar) input languages\u00a0[@de2015hatp; @nau2003shop2; @Ilghami2003]. Recent research has re-energized work on HTN planning formalisms" +"---\nabstract: 'We present GO-Surf, a direct feature grid optimization method for accurate and fast surface reconstruction from RGB-D sequences. We model the underlying scene with a learned hierarchical feature voxel grid that encapsulates multi-level geometric and appearance local information. 
Feature vectors are directly optimized such that after being tri-linearly interpolated, decoded by two shallow MLPs into signed distance and radiance values, and rendered via volume rendering, the discrepancy between synthesized and observed RGB/depth values is minimized. Our supervision signals \u2014 RGB, depth and approximate SDF \u2014 can be obtained directly from input images without any need for fusion or post-processing. We formulate a novel SDF gradient regularization term that encourages surface smoothness and hole filling while maintaining high frequency details. GO-Surf can optimize sequences of $1$-$2$K frames in $15$-$45$ minutes, a speedup of $\\times60$ over NeuralRGB-D\u00a0[@azinovic2022neural], the most related approach based on an MLP representation, while maintaining on par performance on standard benchmarks. Project page: .'\nauthor:\n- Jingwen Wang\n- Tymoteusz Bleja\n- Lourdes Agapito\nbibliography:\n- 'main.bib'\ntitle: |\n GO-Surf: Neural Feature Grid Optimization for\\\n Fast, High-Fidelity RGB-D Surface Reconstruction \n---\n\nIntroduction\n============\n\nRecent years have seen impressive progress in learning-based methods for 3D scene modelling" +"---\nabstract: 'We consider dynamic motion of a linearized elastic body with a crack subject to a modified contact law, which we call *the Signorini contact condition of dynamic type*, and to the Tresca friction condition. Whereas the modified contact law involves both displacement and velocity, it formally includes the usual non-penetration condition as a special case. We prove that there exists a unique strong solution to this model. 
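The GO-Surf pipeline described above queries a hierarchical feature voxel grid by trilinear interpolation before the shallow-MLP decoding step. As an illustration of the interpolation step only — not the authors' implementation; the dense nested-list grid layout and voxel-unit query coordinates are assumptions — a minimal sketch:

```python
# Trilinear interpolation of a feature vector stored on a dense voxel grid.
# grid[x][y][z] is a list of feature channels; p = (px, py, pz) is a
# continuous query point in voxel units, assumed strictly inside the grid.

def trilinear(grid, p):
    px, py, pz = p
    x0, y0, z0 = int(px), int(py), int(pz)
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    tx, ty, tz = px - x0, py - y0, pz - z0
    C = len(grid[0][0][0])  # number of feature channels
    out = [0.0] * C
    # accumulate the 8 corner contributions, weighted by partial volumes
    for xi, wx in ((x0, 1 - tx), (x1, tx)):
        for yi, wy in ((y0, 1 - ty), (y1, ty)):
            for zi, wz in ((z0, 1 - tz), (z1, tz)):
                w = wx * wy * wz
                for c in range(C):
                    out[c] += w * grid[xi][yi][zi][c]
    return out
```

In a learned feature grid the corner values would be optimized parameters; since interpolation is linear in them, gradients flow back to the eight corners with the same weights.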
It is remarkable that not only existence but also uniqueness is obtained and that no viscosity term that serves as a parabolic regularization is added in our model.'\naddress:\n- 'Department of Mathematics, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan'\n- 'Graduate School of Mathematical Sciences, The University of Tokyo, 3-8-1 Komaba, Meguro-ku, Tokyo 153-8914, Japan'\nauthor:\n- Hiromichi Itou\n- Takahito Kashiwabara\ntitle: 'Unique solvability of a crack problem with Signorini-type and Tresca friction conditions in a linearized elastodynamic body'\n---\n\nIntroduction\n============\n\nAnalysis of crack motion is one of the most important topics in fracture mechanics and it has also attracted much attention in material science or in seismology (e.g.\u00a0[@Bro1999; @Fre1990; @UMB2014]). However, at least from a mathematical point of view, it is far from being" +"---\nabstract: 'We consider a class of sparsity-inducing optimization problems whose constraint set is regularizer-compatible, in the sense that the constraint set becomes easy-to-project-onto after a coordinate transformation induced by the sparsity-inducing regularizer. Our model is general enough to cover, as special cases, the ordered LASSO model in [@TibSuo16] and its variants with some commonly used nonconvex sparsity-inducing regularizers. The presence of both the sparsity-inducing regularizer and the constraint set poses challenges for the design of efficient algorithms. In this paper, by exploiting absolute-value symmetry and other properties in the sparsity-inducing regularizer, we propose a new algorithm, called the Doubly Majorized Algorithm (DMA), for this class of problems. The DMA makes use of projections onto the constraint set after the coordinate transformation in each iteration, and hence can be performed efficiently. 
*Without* invoking any commonly used constraint qualification conditions such as those based on horizon subdifferentials, we show that any accumulation point of the sequence generated by DMA is a so-called $\\psi_{\\rm opt}$-stationary point, a new notion of stationarity we define as inspired by the notion of $L$-stationarity in [@BeckEldar13; @BeckHallak16]. We also show that any global minimizer of our model has to be a $\\psi_{\\rm opt}$-stationary point, again without" +"---\nabstract: |\n We analyze a uniqueness result presented by Elcrat, Neel, and Siegel [@ENS] for unbounded liquid bridges, and show that the proof they presented is incorrect. We add a choice of three hypotheses to their stated theorem and show that their result holds under this condition. Then we use Chebyshev spectral methods to build a numerical method to approximate solutions to related boundary value problems that show one of the three hypotheses holds.\\\n **Keywords.** Capillarity, Unbounded Liquid Bridges, Uniqueness, Chebyshev Spectral Methods\\\n [ **Mathematics Subject Classification**: Primary 35Q35; Secondary 76A02]{}\nauthor:\n- 'Ray Treinen[^1]'\ntitle: 'Discussion of a uniqueness result in \u201cEquilibrium Configurations for a Floating Drop\u201d '\n---\n\nIntroduction {#intro}\n============\n\nIn 2004 Elcrat, Neel, and Siegel published a collection of results on the floating drop problem and the related floating bubble problem [@ENS]. Physically, one can visualize a drop of oil resting on a reservoir of water, and the resulting free boundary problem will not be described in detail here. This work has been held in high esteem in the field of capillarity, which is evident in the review Robert Finn wrote in Math Reviews for the paper [@Finn]. We offer a select quote from" +"---\nabstract: 'One of the most pressing societal issues is the fight against false news. The false claims, as difficult as they are to expose, create a lot of damage. 
To tackle the problem, fact verification becomes crucial and thus has been a topic of interest among diverse research communities. Using only the textual form of data, we propose our solution to the problem and achieve results competitive with other approaches. We present our solution based on two approaches: a PLM (pre-trained language model) based method and a prompt-based method. The PLM-based approach uses traditional supervised learning, where the model is trained to take \u2019x\u2019 as input and output prediction \u2019y\u2019 as P(y$|$x). Prompt-based learning, in contrast, reflects the idea of designing the input to fit the model, such that the original objective may be re-framed as a problem of (masked) language modelling. We may further stimulate the rich knowledge provided by PLMs to better serve downstream tasks by employing extra prompts to fine-tune PLMs. Our experiments showed that the proposed method performs better than just fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and 7$^{th}$ position on the competition leaderboard.'\naddress:\n- 'Indian Institute" +"---\nabstract: '[The Bose-Einstein condensate (BEC) of excited states provides a different platform to explore the interplay between gravity and quantum physics. In this Letter, we study the response of excited-state BECs to an external gravitational field and their dynamics under gravity when space is expanding. We reveal the anomalous response of the center-of-mass of the BEC to the gravitational field and the exotic gravity-induced accelerating expansion phenomena. We demonstrate that these effects result from the interplay among gravity, space and quantum effects. 
We also propose related experiments to observe these anomalies.]{}'\nauthor:\n- Lijia Jiang\n- 'Jun-Hui Zheng'\ntitle: 'Gravity-induced accelerating expansion of excited-state Bose-Einstein condensate'\n---\n\nThe study of quantum matter in gravitational fields or in curved space-time continues to reveal fascinating phenomena, leading to novel physical concepts such as the Unruh effect, Hawking radiation, black hole entropy, etc [@Hawking1974; @Unruh1976; @Wald2001; @Unruh2017]. These discoveries greatly advance our understanding of the nature of the vacuum and the thermodynamics of black holes [@Dodelson2008]. Meanwhile, with the development of laser cooling technology, the Bose-Einstein condensate (BEC) in ultracold atomic systems has become a highly clean and controllable platform for quantum simulations. Since the geometry of BECs is highly sensitive to" +"---\nabstract: 'Possible coexistence of kaon condensation and hyperons in highly dense matter \\[the ($Y+K$) phase\\] is investigated on the basis of the relativistic mean-field theory combined with the effective chiral Lagrangian. Two coupling schemes for the $s$-wave kaon-baryon interaction are compared regarding the onset density of kaon condensation in the hyperon-mixed matter and equation of state for the developed ($Y+K$) phase: One is the contact interaction scheme related to the nonlinear effective chiral Lagrangian. The other is the meson-exchange scheme, where the interaction vertices between the kaon field and baryons are described by exchange of mesons ($\\sigma$, $\\sigma^\\ast$ mesons for scalar coupling, and $\\omega$, $\\rho$, $\\phi$ mesons for vector coupling). It is shown that in the meson exchange scheme, the contribution from the nonlinear scalar self-interaction gives rise to a repulsive effect for kaon effective energy, pushing up the onset density of kaon condensation as compared with the case of the contact interaction scheme. 
In general, the difference of kaon-baryon dynamics between the contact interaction scheme and the meson-exchange scheme relies on the specific forms of the nonlinear self-interacting meson terms. They generate many-baryon forces through the equations of motion for the meson mean fields. However, they should have" +"---\nabstract: 'Many-valued logics in general, and real-valued logics in particular, usually focus on a notion of consequence based on preservation of full truth, typically represented by the value $1$ in the semantics given in the real unit interval $[0,1]$. In a recent paper (*Foundations of Reasoning with Uncertainty via Real-valued Logics*, arXiv:2008.02429v2, 2021), Ronald Fagin, Ryan Riegel, and Alexander Gray have introduced a new paradigm that allows to deal with inferences in [*propositional*]{} real-valued logics based on a rich class of sentences, multi-dimensional sentences, that talk about combinations of any possible truth-values of real-valued formulas. They have given a sound and complete axiomatization that tells exactly when a collection of combinations of truth-values of formulas imply another combination of truth-values of formulas. In this paper, we extend their work to the first-order (as well as modal) logic of multi-dimensional sentences. We give axiomatic systems and prove corresponding completeness theorems, first assuming that the structures are defined over a fixed domain, and later for the logics of varying domains. As a by-product, we also obtain a 0-1 law for finitely-valued versions of these logics.'\naddress:\n- |\n University of Queensland\\\n Brisbane, St Lucia, QLD 4072, Australia\\\n `guillebadia89@gmail.com` \n- |\n IBM" +"---\nauthor:\n- 'G. Riccio'\n- 'M. Paolillo'\n- 'M. Cantiello'\n- 'R. D\u2019Abrusco'\n- 'X. Jin'\n- 'Z. Li'\n- 'T. Puzia'\n- 'S. Mieske'\n- 'D.J. Prole'\n- 'E. Iodice'\n- 'G. D\u2019Ago'\n- 'M. Gatto'\n- 'M. 
Spavone'\nbibliography:\n- 'main.bib'\ntitle: 'Properties of intra-cluster low-mass X-ray binaries in Fornax globular clusters '\n---\n\n[We present a study of the intra-cluster population of low-mass X-ray binaries (LMXB) residing in globular clusters (GC) in the central 1 $deg^2$ of the Fornax galaxy cluster. Differently from previous studies, which were restricted to the innermost regions of individual galaxies, this work is aimed at comparing the properties of the intra-cluster population of GC-LMXBs with those of the host galaxy. ]{} [The data used are a combination of *VLT Survey Telescope* (VST) and *Chandra* observations. We perform a cross-match between the optical and X-ray catalogue, in order to identify the LMXBs residing in GCs. We divide the GC-LMXBs into host-galaxy and intra-cluster objects based on their distance from the nearest galaxy in terms of effective radius ($R_{eff}$). We found 82 intra-cluster GC-LMXBs and 86 objects that are hosted in galaxies. As the formation of LMXBs also depends on the host GC colour," +"---\nabstract: 'We establish the existence of the long-debated $f_0(1370)$ resonance in the dispersive analyses of meson-meson scattering data. For this, we present a novel approach using forward dispersion relations, valid for generic inelastic resonances. We find its pole at $\\left(1245\\pm40\\right)-i\\,\\left(300^{+30}_{-70}\\right)$ MeV in $\\pi\\pi$ scattering. We also provide the couplings as well as further checks extrapolating partial-wave dispersion relations or with other continuation methods. A pole at $\\left(1380^{+70}_{-60}\\right)-i\\,\\left(220^{+80}_{-70}\\right)$ MeV also appears in the $\\pi\\pi\\to K\\bar K$ data analysis with partial-wave dispersion relations. 
Despite settling its existence, our model-independent dispersive and analytic methods still show a lingering tension between pole parameters from the $\\pi\\pi$ and $K\\bar K$ channels that should be attributed to data.'\nauthor:\n- 'J.\u00a0R.\u00a0Pel\u00e1ez'\n- 'A.\u00a0Rodas'\n- 'J.\u00a0Ruiz de Elvira'\nbibliography:\n- 'largebiblio.bib'\ntitle: 'The $f_0(1370)$ controversy from dispersive meson-meson scattering data analyses'\n---\n\nQuantum Chromodynamics (QCD) was established as the theory of strong interactions almost 50 years ago, but its low-energy regime, particularly the lightest scalar spectrum, is still under debate\u2014see [@Pelaez:2015qba; @Pelaez:2020gnd] and the \u201cScalar Mesons below 2 GeV\" note in the Review of Particle Physics [@pdg] (RPP). This may be surprising, since light scalars are relevant for nucleon-nucleon interactions, final states" +"---\nabstract: 'Voltage control generally requires accurate information about the grid\u2019s topology in order to guarantee network stability. However, accurate topology identification is a challenging problem for existing methods, especially as the grid is subject to increasingly frequent reconfiguration due to the adoption of renewable energy. Further, running existing control mechanisms with incorrect network information may lead to unstable control. In this work, we combine a nested convex body chasing algorithm with a robust predictive controller to achieve provably finite-time convergence to safe voltage limits in the online setting where the network topology is initially unknown. Specifically, the online controller does not know the true network topology and line parameters, but instead must learn them over time by narrowing down the set of network topologies and line parameters that are consistent with its observations and adjusting reactive power generation accordingly to keep voltages within desired safety limits. 
We demonstrate the effectiveness of the approach using a case study, which shows that in practical settings the controller is indeed able to narrow the set of consistent topologies quickly enough to make control decisions that ensure stability.'\nauthor:\n- Christopher Yeh\n- Jing Yu\n- Yuanyuan Shi\n- Adam Wierman\nbibliography:\n-" +"---\nabstract: 'We show that a closed four-manifold with $4\\frac{1}{2}$-positive curvature operator of the second kind is diffeomorphic to a spherical space form. The curvature assumption is sharp as both $\\mathbb{CP}^2$ and $\\mathbb{S}^3 \\times \\mathbb{S}^1$ have $4\\frac{1}{2}$-nonnegative curvature operator of the second kind. In higher dimensions $n\\geq 5$, we show that closed Riemannian manifolds with $4\\frac{1}{2}$-positive curvature operator of the second kind are homeomorphic to spherical space forms. These results are proved by showing that $4\\frac{1}{2}$-positive curvature operator of the second kind implies both positive isotropic curvature and positive Ricci curvature. Rigidity results for $4\\frac{1}{2}$-nonnegative curvature operator of the second kind are also obtained.'\naddress: 'Department of Mathematics, Statistics and Physics, Wichita State University, Wichita, KS, 67260'\nauthor:\n- Xiaolong Li\nbibliography:\n- 'ref.bib'\ntitle: 'Manifolds with $4\\frac{1}{2}$-positive curvature operator of the second kind'\n---\n\nIntroduction\n============\n\nIn 1986, Nishikawa [@Nishikawa86] conjectured that a closed Riemannian manifold with positive (respectively, nonnegative) curvature operator of the second kind is diffeomorphic to a spherical space form (respectively, Riemannian locally symmetric space). 
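The voltage-control abstract above narrows a set of network models consistent with observations and then controls robustly against the survivors. A one-dimensional toy makes the narrowing mechanic concrete. This is an illustrative set-membership sketch with a hypothetical scalar sensitivity `b` and known noise bound `EPS`, not the paper's nested convex body chasing algorithm:

```python
# 1-D analogue of consistent-set narrowing: an unknown sensitivity b relates
# a control input u to an observed voltage deviation v = b * u, with noise
# bounded by EPS. Each observation intersects the interval of consistent b.

EPS = 0.05  # assumed known noise bound (illustrative value)

def update_consistent_set(lo, hi, u, v):
    """Intersect [lo, hi] with {b : |v - b * u| <= EPS}, assuming u > 0."""
    b_lo = (v - EPS) / u
    b_hi = (v + EPS) / u
    return max(lo, b_lo), min(hi, b_hi)

def narrow(observations, lo=0.1, hi=10.0):
    """Shrink the prior interval [lo, hi] using (u, v) observation pairs."""
    for u, v in observations:
        lo, hi = update_consistent_set(lo, hi, u, v)
    return lo, hi
```

A robust controller would then pick an action that keeps voltages safe for every `b` remaining in the interval; larger control inputs shrink the interval faster, mirroring the finite-time convergence argument.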
Recall that the curvature tensor acts on the space of symmetric two-tensors $S^2(T_pM)$ via $$\\label{eq 1.1}\n \\mathring{R}(e_i \\odot e_j) =\\sum_{k,l=1}^n R_{iklj} e_k \\odot e_l,$$ where $\\{e_1, \\cdots, e_n\\}$ is an orthonormal" +"---\nabstract: '3D object detection in autonomous driving aims to reason \u201cwhat\u201d and \u201cwhere\u201d the objects of interest present in a 3D world. Following the conventional wisdom of previous 2D object detection, existing methods often adopt the canonical Cartesian coordinate system with perpendicular axis. However, we conjecture that this does not fit the nature of the ego car\u2019s perspective, as each onboard camera perceives the world in the shape of a wedge intrinsic to the imaging geometry with radial (non-perpendicular) axis. Hence, in this paper we advocate the exploitation of the Polar coordinate system and propose a new Polar Transformer ([***PolarFormer***]{}) for more accurate 3D object detection in the bird\u2019s-eye-view (BEV) taking as input only multi-camera 2D images. Specifically, we design a cross-attention based Polar detection head without restriction to the shape of input structure to deal with irregular Polar grids. For tackling the unconstrained object scale variations along Polar\u2019s distance dimension, we further introduce a multi-scale Polar representation learning strategy. As a result, our model can make best use of the Polar representation rasterized via attending to the corresponding image observation in a sequence-to-sequence fashion subject to the geometric constraints. Thorough experiments on the nuScenes dataset demonstrate that our PolarFormer outperforms" +"---\nabstract: 'Nowadays, graphs have become an increasingly popular model in many real applications. The efficiency of graph storage is crucial for these applications. Generally speaking, the tuning tasks of graph storage rely on database administrators (DBAs) to find the best graph storage. 
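The curvature-operator formula quoted a little earlier, $\mathring{R}(e_i \odot e_j) = \sum_{k,l=1}^n R_{iklj}\, e_k \odot e_l$, extends linearly to any symmetric two-tensor. A minimal numerical sketch of that linear action follows; the constant-curvature model tensor and its index/sign convention are assumptions chosen purely for illustration, not taken from the paper:

```python
# Linear action of the curvature operator of the second kind on a symmetric
# two-tensor S, following out_{kl} = sum_{i,j} R_{iklj} S_{ij}.

def second_kind_action(R, S, n):
    """R[i][k][l][j] holds R_{iklj}; S is an n x n symmetric matrix."""
    out = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for l in range(n):
            for i in range(n):
                for j in range(n):
                    out[k][l] += R[i][k][l][j] * S[i][j]
    return out

def model_R(n):
    """Toy constant-curvature tensor R_{iklj} = d_ij d_kl - d_il d_kj
    (this convention is an assumption for illustration). With it the
    action reduces to S -> tr(S) * I - S on symmetric S."""
    d = lambda a, b: 1.0 if a == b else 0.0
    return [[[[d(i, j) * d(k, l) - d(i, l) * d(k, j)
               for j in range(n)] for l in range(n)]
             for k in range(n)] for i in range(n)]
```

Conditions such as "$4\frac{1}{2}$-positive" then constrain weighted sums of the eigenvalues of this linear map restricted to $S^2(T_pM)$.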
However, DBAs make tuning decisions mainly by relying on their experience and intuition. Due to the limitations of DBAs\u2019 experience, the resulting tunings may have uncertain performance and lead to worse efficiency. In this paper, we observe that an estimator of graph workloads has the potential to guarantee the performance of tuning operations. Unfortunately, because of the complex characteristics of the graph evaluation task, there exists no mature estimator for graph workloads. We formulate the evaluation of graph workloads as a classification task and carefully design the feature engineering process, including graph data features, graph workload features and graph storage features. Considering the complex features of graphs and the huge time consumption of graph workload execution, it is difficult for a graph workload estimator to obtain a sufficient training set. We therefore propose an active auto-estimator (AAE) for graph workload evaluation by combining active learning and deep learning. AAE can achieve good evaluation efficiency with limited" +"---\nabstract: 'We use a simulation study to compare three methods for adaptive experimentation: Thompson sampling, Tempered Thompson sampling, and Exploration sampling. We gauge the performance of each in terms of social welfare and estimation accuracy, and as a function of the number of experimental waves. We further construct a set of novel \u201chybrid\u201d loss measures to identify which methods are optimal for researchers pursuing a combination of experimental aims. 
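The active-learning loop behind the AAE estimator above — query expensive labels only for the most informative examples — can be sketched in miniature. The nearest-centroid classifier, 1-D features, and label oracle below are illustrative stand-ins, not the paper's deep model or its feature engineering:

```python
# Toy pool-based active learning with uncertainty sampling.
# Labels are assumed to be 0/1; features are scalars.

def centroids(labeled):
    """Mean feature per class from a list of (x, y) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def most_uncertain(pool, cents):
    """Pick the unlabeled point nearest the decision boundary."""
    c0, c1 = cents[0], cents[1]
    return min(pool, key=lambda x: abs(abs(x - c0) - abs(x - c1)))

def active_learn(pool, oracle, seed_labeled, budget):
    """Spend `budget` label queries on the most uncertain pool points."""
    labeled, pool = list(seed_labeled), list(pool)
    for _ in range(budget):
        x = most_uncertain(pool, centroids(labeled))
        pool.remove(x)
        labeled.append((x, oracle(x)))  # the expensive label query
    return centroids(labeled)
```

The point of the strategy is that borderline examples (here, features near the midpoint between centroids) move the decision boundary the most per label, so a small labeling budget goes further than random sampling.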
Our main results are: 1) the relative performance of Thompson sampling depends on the number of experimental waves, 2) Tempered Thompson sampling uniquely distributes losses across multiple experimental aims, and 3) in most cases, Exploration sampling performs similarly to random assignment.'\nauthor:\n- |\n Samantha Horn [^1] samihorn@cmu.edu\\\n Department of Social and Decision Sciences\\\n Carnegie Mellon University\\\n Pittsburgh, PA 15213 USA Sabina J.\u00a0Sloman ssloman@andrew.cmu.edu\\\n Department of Social and Decision Sciences\\\n Carnegie Mellon University\\\n Pittsburgh, PA 15213 USA\nbibliography:\n- 'adaptive\\_experimentation.bib'\ntitle: A Comparison of Methods for Adaptive Experimentation\n---\n\nadaptive experimentation, response-adaptive randomization\n\nAdaptive experiments have recently gained popularity in the social sciences.[^2] While traditional methods for adaptive experimentation target participant welfare, a body of literature shows that these methods forgo statistical power and can introduce bias in the" +"---\nauthor:\n- Nikhil Padhye\nbibliography:\n- 'main.bib'\ntitle: '**Mechanics and Modeling of Cold Rolling of Polymeric Films at Large Strains - A Rate-Independent Approach**'\n---\n\nRecently, a new phenomenon of bonding polymeric films in solid-state, via symmetric rolling, at ambient temperatures ($\\approx$ $20$ $^o$C) well below the glass transition temperature (T$_g$ $\\approx$ $78$ $^o$C) of the polymer, has been reported. In this new type of bonding, polymer films are subject to plane strain active bulk plastic compression between the rollers during deformation. Here, we analyze these plane strain cold rolling processes, at large strains but slow strain rates, by finite element modeling. 
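Beta-Bernoulli Thompson sampling, the first of the three adaptive-experimentation methods compared above, can be sketched as follows; the two-arm setup, uniform priors, and arm means are illustrative choices, not the study's configuration:

```python
import random

# Thompson sampling with Beta posteriors over Bernoulli arm means:
# sample a plausible mean per arm, play the argmax, update the posterior.

def thompson(arms, pulls, rng):
    """arms: true success probabilities; returns per-arm pull counts."""
    k = len(arms)
    succ, fail = [1] * k, [1] * k  # Beta(1, 1) priors
    counts = [0] * k
    for _ in range(pulls):
        samples = [rng.betavariate(succ[a], fail[a]) for a in range(k)]
        a = max(range(k), key=lambda i: samples[i])
        counts[a] += 1
        if rng.random() < arms[a]:  # simulated participant outcome
            succ[a] += 1
        else:
            fail[a] += 1
    return counts
```

Because assignment probabilities track posterior beliefs, allocation concentrates on the better arm over time — which is exactly why the method trades estimation accuracy on the inferior arm for participant welfare.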
We find that at low temperatures, slow strain rates, and moderate thickness reductions during rolling (at which Bauschinger effect can be neglected for the particular class of polymeric films studied here), the task of material modeling is greatly simplified, and enables us to deploy a computationally efficient, yet accurate, finite deformation rate-independent elastic-plastic material behavior (with inclusion of isotropic-hardening) for analyzing these rolling processes. The finite deformation elastic-plastic material behavior based on (i) the additive decomposition of stretching tensor ($\\mathbf{D=D^e+D^p}$, i.e., a hypoelastic formulation) with incrementally objective time integration and, (ii) multiplicative decomposition of deformation gradient ($\\mathbf{F=F^eF^p}$)" +"---\nabstract: 'The class of *$2$-regular* matroids is a natural generalisation of regular and near-regular matroids. We prove an excluded-minor characterisation for the class of $2$-regular matroids. The class of *$3$-regular* matroids coincides with the class of matroids representable over the Hydra-$5$ partial field, and the $3$-connected matroids in the class with a $U_{2,5}$- or $U_{3,5}$-minor are precisely those with six inequivalent representations over ${\\textrm{GF}}(5)$. 
We also prove that an excluded minor for this class has at most $15$ elements.'\naddress:\n- 'School of Mathematics and Statistics, Victoria University of Wellington, New Zealand'\n- 'Department of Mathematics, Louisiana State University, Baton Rouge, Louisiana, USA'\n- 'School of Mathematics and Statistics, University of Canterbury, New Zealand'\n- 'School of Mathematics and Statistics, Victoria University of Wellington, New Zealand'\nauthor:\n- Nick Brettell\n- James Oxley\n- Charles Semple\n- Geoff Whittle\nbibliography:\n- 'lib.bib'\ntitle: 'The excluded minors for $2$- and $3$-regular matroids'\n---\n\nIntroduction\n============\n\nLet $\\mathbb Q(\\alpha_1,\\ldots,\\alpha_n)$ denote the field obtained by extending the rational numbers by $n$ independent transcendentals $\\alpha_1,\\ldots,\\alpha_n$. For a non-negative integer\u00a0$k$, a matroid is *$k$-regular* if it has a representation by a matrix $A$ over $\\mathbb Q(\\alpha_1,\\ldots,\\alpha_k)$ in which every non-zero subdeterminant of $A$" +"---\nabstract: 'The intrinsic alignment (IA) of galaxies is potentially a major limitation in deriving cosmological constraints from weak lensing surveys. In order to investigate this effect we assign intrinsic shapes and orientations to galaxies in the light-cone output of the MICE simulation, spanning $\\sim5000\\,{\\rm deg}^2$ and reaching redshift $z=1.4$. This assignment is based on a \u2019semi-analytic\u2019 IA model that uses photometric properties of galaxies as well as the spin and shape of their host halos. Advancing on previous work, we include more realistic distributions of galaxy shapes and a luminosity dependent galaxy-halo alignment. The IA model parameters are calibrated against COSMOS and BOSS LOWZ observations. The null detection of IA in observations of blue galaxies is accounted for by setting random orientations for these objects. 
We compare the two-point alignment statistics measured in the simulation against predictions from the analytical IA models NLA and TATT over a wide range of scales, redshifts and luminosities for red and blue galaxies separately. We find that both models fit the measurements well at scales above $8\,h^{-1}{\rm Mpc}$, while TATT outperforms NLA at smaller scales. The IA parameters derived from our fits are in broad agreement with various observational constraints from red galaxies." +"---\nabstract: 'The proposed planner aims to prescribe a sightseeing plan that maximizes tourist satisfaction, taking into account a multitude of parameters and constraints, such as a mobility environment consisting of a pedestrian network and a road network. Experimental results show that our approach can handle realistic instances with up to 3643 points of interest in a few seconds.'\nauthor:\n- Tommaso Adamo\n- Lucio Colizzi\n- Giovanni Dimauro\n- Gianpaolo Ghiani\n- Emanuela Guerriero\nbibliography:\n- 'mybibfile.bib'\ntitle: A multimodal tourist trip planner integrating road and pedestrian networks\n---\n\nIntroduction {#intro}\n============\n\nTourism is one of the fastest-growing sectors in the world. On the wave of digital transformation, it is experiencing a shift from mass tourism to personalized travel. Designing a tourist trip is a rather complex and time-consuming task. Hence, the adoption of expert and intelligent systems has become widespread. Such systems typically appear in the form of ICT integrated solutions that perform three main services: recommendation of attractions (Points of Interest, PoIs), route generation and itinerary customization [@gavalas2014survey]. In this research work, we focus on route generation, known in the literature as the Tourist Trip Design Problem (TTDP). 
The objective of the TTDP is to select PoIs that maximize tourist satisfaction, while taking into account a set of parameters (e.g., alternative transport" +"---\nauthor:\n- Stefan Henneking\n- Leszek Demkowicz\nbibliography:\n- 'References/journals-iso4.bib'\n- 'References/ref\\_stefan.bib'\ndate: ' Austin, TX. . '\ntitle: 3D User Manual \n---\n\nAcknowledgements {#acknowledgements .unnumbered}\n================\n\nThe authors would like to thank the developers and collaborators who have contributed to the development of the 3D software. In addition to the authors, the main contributors to the current version of the 3D finite element code are (in alphabetical order):\n\n- Jacob Badger\n\n- Ankit Chakraborty\n\n- Federico Fuentes\n\n- Paolo Gatto\n\n- Brendan Keith\n\n- Kyungjoo Kim\n\n- Jaime D. Mora\n\n- Sriram Nagaraj\n\n- Socratis Petrides\n\nAdditionally, the authors would like to thank each one of the reviewers who have helped with writing and improving this user manual.\n\nThe development of the 3D software and documentation, including this user manual, was partially funded by NSF award \\#2103524.\n\nIntroduction {#chap:introduction}\n============\n\nThe 3D finite element (FE) software has been developed by Prof.\u00a0Leszek Demkowicz and his students, postdocs, and collaborators at The University of Texas at Austin over the course of many years. The current version of the software is available at . This user manual was written to provide guidance to current and prospective users of 3D. We welcome" +"---\nabstract: |\n **Background** In stellar environments, carbon is produced exclusively via the $3\\alpha$ process, where three $\\alpha$ particles fuse to form $^{12}$C in the excited Hoyle state, which can then decay to the ground state. The rate of carbon production in stars depends on the radiative width of the Hoyle state. 
While not directly measurable, the radiative width can be deduced by combining three separately measured quantities, one of which is the $E0$ decay branching ratio. The $E0$ branching ratio can be measured by exciting the Hoyle state in the $^{12}$C$(p,p')$ reaction and measuring the pair decay of both the Hoyle state and the first $2^+$ state of $^{12}$C.\n\n **Purpose** To reduce the uncertainties in the carbon production rate in the universe by measuring a set of proton angular distributions for the population of the Hoyle state ($0^+_2$) and $2^+_1$ state in $^{12}$C in $^{12}$C$(p,p')$ reactions between 10.20 and 10.70 MeV, used in the determination of the $E0$ branching ratio of the Hoyle state.\n\n **Method** Proton angular distributions populating the ground, first $2^+$, and the Hoyle states in $^{12}$C were measured in $^{12}$C(p,p\u2019) reactions with a silicon detector array covering $22^\\circ<\\theta<158^\\circ$ in 14 small energy steps between 10.20 and" +"---\nauthor:\n- Lijing Shao\nbibliography:\n- 'refs.bib'\ntitle: 'Radio Pulsars as a Laboratory for Strong-field Gravity Tests'\n---\n\nIntroduction {#sec:intro}\n============\n\nPulsars are rotating magnetized neutron stars. On the one hand, due to their large moment of inertia ($I \\sim 10^{38}\\,{\\rm kg\\,m}^2$) and usually small external torque, their rotation is extremely stable. If a pulsar sweeps a radiating beam in the direction of the Earth, a radio pulse could be recorded using large-area telescopes for each rotation. As fundamentally known in physics, such a periodic signal can be viewed as a [*clock*]{}. Therefore, pulsars are famously recognized as astrophysical clocks in astronomy. Even better, thanks to a sophisticated technique called pulsar timing\u00a0[@Taylor:1992kea], pulsar astronomers can [*accurately*]{} record a number of periodic pulse signals. These pulses\u2019 times of arrival are compared with atomic clocks at the telescope sites. 
Some of these observations can be carried out and last for decades. From a large number of times of arrival of these pulse signals, the physical properties of pulsar systems are inferred to a great precision\u00a0[@Lorimer:2005misc]. For example, a recent study with sixteen years of timing data of the Double Pulsar,[^1] PSR\u00a0J0737$-$3039A/B, gives the rotational frequency of pulsar A" +"---\nabstract: |\n A canonical problem in social choice is how to aggregate ranked votes: that is, given $n$ voters\u2019 rankings over $m$ candidates, what *voting rule* $f$ should we use to aggregate these votes and select a single winner? One standard method for comparing voting rules is by their satisfaction of *axioms*[\u2014]{}properties that we want a \u201creasonable\u201d rule to satisfy. This approach, unfortunately, leads to several impossibilities: no voting rule can simultaneously satisfy all the properties we would want, at least in the worst case over all possible inputs.\n\n Motivated by this, we consider a relaxation of this worst case requirement. We analyze this through a \u201csmoothed\u201d model of social choice, where votes are (independently) perturbed with small amounts of noise. If no matter which input profile we start with, the probability (post-noise) of an axiom being satisfied is large, we will view it as nearly as good as satisfied[\u2014]{}called \u201csmoothed-satisfied\u201d[\u2014]{}even if it may be violated in the worst case.\n\n Our model is a mild restriction of Lirong Xia\u2019s, and corresponds closely to that in Spielman and Teng\u2019s original work on smoothed analysis. Much work has been done so far in several papers by Xia on which axioms can" +"---\nabstract: 'The chiral vortical effect is a chiral anomaly-induced transport phenomenon characterized by an axial current in a uniformly rotating chiral fluid. 
It is well-understood for Weyl fermions in high energy physics, but its realization in condensed matter band structures, including those of Weyl semimetals, has been controversial. In this work, we develop the Kubo response theory for electrons in a general band structure subject to space- and time-dependent rotation or vorticity relative to the background lattice. For continuum Hamiltonians, we recover the chiral vortical effect in the static limit and the transport or uniform limit when the fluid, strictly, is not a Fermi liquid. In the transport limit of a Fermi liquid, we discover a new effect that we dub the gyrotropic vortical effect. The latter is governed by Berry curvature of the occupied bands while the former contains an additional contribution from the magnetic moment of electrons on the Fermi surface. The two vortical effects can be understood as kinematic analogs of the well-known chiral and gyrotropic magnetic effects in chiral band structures. We address recent controversies in the field and conclude by describing device geometries that exploit Ohmic or Seebeck transport to drive the vortical effects.'" +"---\nabstract: 'We examine the thermodynamics of a regular charged black hole (RCB) added with corrections due to massive gravity and thermal fluctuations at quantum level. We then derive the expressions for all the relevant thermodynamic quantities such as entropy, Hawking\u2019s temperature, internal energy, Gibbs free energy, corrected to first-order. We also briefly discuss the stability of such a system with the help of quantities like specific heat and thermodynamic pressure, along with this we present their comparative study with the corresponding original values.'\nauthor:\n- 'Amruta Desai [^1]'\n- 'Shubham Sharma [^2]'\n- 'Prince A. 
Ganai [^3]'\ndate: June 2022\ntitle: Quantum fluctuations of a regular charged black hole in massive gravity \n---\n\nIntroduction\n============\n\nSince the past few decades, black holes have been studied immensely as a potential connection that would relate two very crucial theories of modern physics, namely quantum mechanics and general relativity. This connection is credited to the second law of thermodynamics and its possible violation. It was thought that objects disappearing in the black hole would mean loss of information because black holes are characterised only by their mass, charge and angular momentum and they carry no information about the structure of matter that" +"---\nabstract: 'The Rubin Observatory\u2019s Data Butler is designed to allow data file location and file formats to be abstracted away from the people writing the science pipeline algorithms. The Butler works in conjunction with the workflow graph builder to allow pipelines to be constructed from the algorithmic tasks. These pipelines can be executed at scale using object stores and multi-node clusters, or on a laptop using a local file system. The Butler and pipeline system are now in daily use during Rubin construction and early operations.'\nauthor:\n- Tim\u00a0Jenness\n- 'James\u00a0F.\u00a0Bosch'\n- Andrei\u00a0Salnikov\n- 'Nate\u00a0B.\u00a0Lust'\n- 'Nathan\u00a0M.\u00a0Pease'\n- Michelle\u00a0Gower\n- Mikolaj\u00a0Kowalik\n- 'Gregory\u00a0P.\u00a0Dubois-Felsmann'\n- Fritz\u00a0Mueller\n- Pim\u00a0Schellart\ntitle: 'The Vera C. Rubin Observatory Data Butler and Pipeline Execution System'\n---\n\nIntroduction\n============\n\nThe Vera C.\u00a0Rubin Observatory\u2019s Legacy Survey of Space and Time (LSST) [@2019ApJ...873..111I] will image the entire southern sky every three days and consist of tens of petabytes of raw image data and associated calibration data. 
All these files must be tracked, along with the intermediate datasets and output products from pipeline processing, and depending on where the processing occurs the files will" +"---\nabstract: 'In recent years, transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, the results in Spanish present important shortcomings, as they are either too small in comparison with other languages, or present a low quality derived from sub-optimal cleaning and deduplication. In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 Pb of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius has been released under the CC BY-NC-ND 4.0 license and is available on HuggingFace.'\nauthor:\n- |\n Asier Guti\u00e9rrez-Fandi\u00f1o\\\n LHF" +"---\nabstract: 'We present photometric metallicity measurements for a sample of 2.6 million bulge red clump stars extracted from the Blanco DECam Bulge Survey (BDBS). Similar to previous studies, we find that the bulge exhibits a strong vertical metallicity gradient, and that at least two peaks in the metallicity distribution functions appear at $b < -5^{\\circ}$. 
We can discern metal-poor (\\[Fe/H\\] $\\sim$ $-0.3$) and metal-rich (\\[Fe/H\\] $\\sim$ $+$0.2) abundance distributions that each show clear systematic trends with latitude, and may be best understood by changes in the bulge\u2019s star formation/enrichment processes. Both groups exhibit asymmetric tails, and as a result we argue that the proximity of a star to either peak in \\[Fe/H\\] space is not necessarily an affirmation of group membership. The metal-poor peak shifts to lower \\[Fe/H\\] values at larger distances from the plane while the metal-rich tail truncates. Close to the plane, the metal-rich tail appears broader along the minor axis than in off-axis fields. We also posit that the bulge has two metal-poor populations \u2013 one that belongs to the metal-poor tail of the low latitude and predominantly metal-rich group, and another belonging to the metal-poor group that dominates in the outer bulge. We detect" +"---\nabstract: |\n The reduction in latency between the discovery of vulnerabilities and the build-up and dissemination of cyber-attacks has put significant pressure on cybersecurity professionals. In response, security researchers have increasingly resorted to collective action in order to reduce the time needed to characterize and tame outstanding threats. Here, we investigate how joining and contribution dynamics on MISP, an open source threat intelligence sharing platform, influence the time needed to collectively complete threat descriptions. We find that performance, defined as the capacity to quickly characterize a threat event, is influenced by (i) its own complexity (negatively), by (ii) collective action (positively), and by (iii) learning, information integration and modularity (positively). 
Our results show how collective action can be organized at scale and in a modular way to tackle a large number of time-critical tasks, such as cybersecurity threats.\\\n [***Keywords\u2014*** cybersecurity, information sharing, collective action, information integration, economies of scale, Malware Information Sharing Platform (MISP)]{}\nbibliography:\n- 'references.bib'\n- 'references-1.bib'\n- 'references-2.bib'\n---\n\n**Efficient Collective Action for Tackling Time-Critical Cybersecurity Threats**\n\nS\u00e9bastien Gillard$^{1,2}$ $^\\textrm{[0000-0002-3237-8599]}$, Dimitri Percia David$^{1,3,4}$ $^\\textrm{[0000-0002-9393-1490]}$\\\nAlain Mermoud$^{3}$ $^\\textrm{[0000-0001-6471-772X]}$ and Thomas Maillart$^{1,5}$ $^\\textrm{[0000-0002-5747-9927]}$\n\n$^{1}$ Information Science Institute, Geneva School of Economics and Management, University of Geneva\\\n$^{2}$ Department of" +"---\nabstract: 'In *Spoken Language Understanding*\u00a0(SLU) the task is to extract important information from audio commands, like the intent of what a user wants the system to do and special entities like locations or numbers. This paper presents a simple method for embedding intents and entities into finite state transducers, and, in combination with a pretrained general-purpose *Speech-to-Text* model, allows building SLU models without any additional training. Building those models is very fast and only takes a few seconds. It is also completely language independent. 
With a comparison on different benchmarks it is shown that this method can outperform multiple other, more resource demanding SLU approaches.'\naddress: 'University of Augsburg, Institute for Software & Systems Engineering'\nbibliography:\n- 'mybib.bib'\ntitle: |\n Finstreder: Simple and fast Spoken Language Understanding\\\n with Finite State Transducers using modern Speech-to-Text models\n---\n\n**Index Terms**: spoken language understanding, speech to intent, offline voice assistant, finite state transducer decoding\n\nIntroduction {#sec:intro}\n============\n\nWhen building models for *Spoken Language Understanding*\u00a0(SLU), there are two alternative approaches: Using two separate stages of transcribing the spoken command to text (*Speech-to-Text*, STT) and then extracting the useful information out of the transcribed sentence (*Natural Language Understanding*, NLU), or using a direct SLU approach which" +"---\nabstract: 'We consider a high-dimensional random constrained optimization problem in which a set of binary variables is subjected to a linear system of equations. The cost function is a simple linear cost, measuring the Hamming distance with respect to a reference configuration. Despite its apparent simplicity, this problem exhibits a rich phenomenology. We show that different situations arise depending on the random ensemble of linear systems. When each variable is involved in at most two linear constraints, we show that the problem can be partially solved analytically, in particular we show that upon convergence, the zero-temperature limit of the cavity equations returns the optimal solution. We then study the geometrical properties of more general random ensembles. In particular we observe a range in the density of constraints at which the systems enters a glassy phase where the cost function has many minima. 
Interestingly, the algorithmic performance is only sensitive to another phase transition affecting the structure of configurations allowed by the linear constraints. We also extend our results to variables belonging to $\\text{GF}(q)$, the Galois Field of order $q$. We show that increasing the value of $q$ allows a better optimum to be achieved, which is confirmed by the Replica" +"---\nabstract: 'The dynamics of the solutions to a class of conservative SPDEs are analysed from two perspectives: Firstly, a probabilistic construction of a corresponding random dynamical system is given for the first time. Secondly, the existence and uniqueness of invariant measures, as well as mixing for the associated Markov process, is shown.'\nauthor:\n- 'Benjamin Fehrman [^1]'\n- 'Benjamin Gess [^2]'\n- 'Rishabh S. Gvalani [^3]'\nbibliography:\n- 'rds.bib'\ntitle: Ergodicity and random dynamical systems for conservative SPDEs\n---\n\nIntroduction\n============\n\nIn this work, we consider the dynamical behaviour of solutions to conservative SPDEs of the general type $$\\label{eq:spde-intro}\n\\partial_{t}\\rho=\\Delta\\Phi(\\rho)-\\nabla\\cdot\\nu(\\rho)-\\nabla\\cdot(B(\\rho))-f(\\rho)-\\sqrt{{\\varepsilon}}\\nabla\\cdot(\\sigma(\\rho)\\xi),$$ where $\\nu$ is a local function of $\\rho$, $B$ a nonlocal function, and $\\xi$ is spatially correlated noise, under assumptions on $\\Phi,\\nu,B,f,\\sigma$ specified below. 
We introduce a new probabilistic construction of an associated random dynamical system, and we prove the ergodicity and mixing properties of the associated Markovian semigroup.\n\nConservative SPDEs of this type appear, for example, as fluctuating continuum models in non-equilibrium statistical mechanics; see [@Ot2005; @Gi.Le.Pr1999; @FG21; @FehGesDir2020; @Fe.Ge2022].\n\nThe theory of random dynamical systems (RDS) aims at merging techniques from stochastic analysis with those from dynamical systems theory, as an approach to the systematic study" +"---\nabstract: 'To understand the microscopic mechanism of the charge order observed in the parent compound of the infinite-layer nickelate superconductors, we consider a minimal three-legged model consisting of a two-legged Hubbard ladder for the Ni $3d_{x^2-y^2}$ electrons and a free conduction electron chain from the rare-earths. With highly accurate density matrix renormalization group calculations, when the chemical potential difference is adjusted so that the Hubbard ladder has $1/3$ hole doping, we find a long-range charge order with period $3$ in the ground state of the model, while the spin excitation has a small energy gap. Moreover, the electron pair-pair correlation has a quasi-long-range behavior, indicating an instability of superconductivity even at half-filling. As a comparison, the same method is applied to a pure two-legged Hubbard model with $1/3$ hole doping in which the period-3 charge order is a quasi-long-range one. 
The difference between them demonstrates that the free electron chain of the three-legged ladder plays the role of a charge reservoir and enhances the charge order in the undoped infinite-layer nickelates.'\nauthor:\n- Yang Shen\n- Mingpu Qin\n- 'Guang-Ming Zhang'\nbibliography:\n- 'ladder\\_DMRG.bib'\ntitle: 'Comparative study of charge order in undoped infinite-layer nickelate superconductors'\n---\n\n[^1]\n\n[^2]" +"---\nabstract: 'The recently defined concept of a statistical depth function for fuzzy sets provides a theoretical framework for ordering fuzzy sets with respect to the distribution of a fuzzy random variable. One of the most used and studied statistical depth functions for multivariate data is simplicial depth, based on multivariate simplices. We introduce a notion of pseudosimplices generated by fuzzy sets and propose three plausible generalizations of simplicial depth to fuzzy sets. Their theoretical properties are analyzed and the behavior of the proposals is illustrated through a study of both synthetic and real data.'\naddress:\n- 'Departamento de Matem\u00e1ticas, Estad\u00edstica y Computaci\u00f3n, Universidad de Cantabria (Spain), '\n- 'Departamento de Estad\u00edstica e Investigaci\u00f3n Operativa y Did\u00e1ctica de las Matem\u00e1ticas, Universidad de Oviedo (Spain), '\nauthor:\n- \n- \n- \ntitle: Simplicial depths for fuzzy random variables\n---\n\nIntroduction\n============\n\nIn the general framework of fuzzy data, the data consists of classes of objects with a continuum of grades of membership [@zadehfuzzysets]. 
They are generally represented as functions from $\\mathbb{R}^p$ to $[0,1],$ as opposed to multivariate data which are points in $\\mathbb{R}^p.$ On the other hand, statistical depth functions are a quantification of the intuitive notion that the" +"---\nabstract: 'Dark compact objects (\u201cclumps\u201d) transiting the Solar System exert accelerations on the test masses (TM) in a gravitational-wave (GW) detector. We reexamine the detectability of these clump transits in a variety of current and future GW detectors, operating over a broad range of frequencies. TM accelerations induced by clump transits through the inner Solar System have frequency content around $f \\sim \\mu$Hz. Some of us \\[Fedderke *et al*., [Phys.\u00a0Rev.\u00a0D **105**, 103018 (2022)](https://doi.org/10.1103/PhysRevD.105.103018)\\] recently proposed a GW detection concept with $\\mu$Hz sensitivity, based on asteroid-to-asteroid ranging. From the detailed sensitivity projection for this concept, we find both analytically and in simulation that purely gravitational clump\u2013matter interactions would yield one detectable transit every $\\sim20$yrs, if clumps with mass ${\\ensuremath{m_{{\\text{cl}}}}}\\sim 10^{14}\\,\\text{kg}$ saturate the dark-matter (DM) density. Other\u00a0(proposed) GW detectors using local TMs and operating in higher frequency bands are sensitive to smaller clump masses and have smaller rates of discoverable signals. We also consider the case of clumps endowed with an additional attractive long-range clump\u2013matter fifth force significantly stronger than gravity\u00a0(but evading known fifth-force constraints). For the $\\mu$Hz detector concept, we use simulations to show that, for example, a clump\u2013matter fifth-force $\\sim 10^3$ times stronger than gravity" +"---\nabstract: 'The operation of industrial facilities is a broad field for optimization. 
Industrial plants are often a) composed of several components, b) linked using network technology, c) physically interconnected and d) complex regarding the effect of set-points and operating points in every entity. This leads to the possibility of overall optimization but also to a high complexity of the emerging optimization problems. The decomposition of complex systems allows the modeling of individual models which can be structured according to the physical topology. A method for energy performance indicators (EnPI) helps to formulate an optimization problem. The optimization algorithm OptTopo achieves efficient set-points by traversing a graph representation of the overall system.'\nauthor:\n- |\n [![image](orcid.pdf)Gregor Thiele](https://orcid.org/0000-0002-7108-5203)\\\n Department for Process\\\n Automation and Robotics,\\\n Fraunhofer Institute for\\\n Production Systems\\\n and Design Technology IPK\\\n `0000-0002-7108-5203`\\\n [![image](orcid.pdf)Theresa Johanni](https://orcid.org/0000-0002-0562-669X)\\\n Faculty of Electrical\\\n Engineering\\\n and Computer Science\\\n Technical University of Berlin\\\n Berlin, Germany\\\n `0000-0002-0562-669X`\\\n [![image](orcid.pdf)David Sommer](https://orcid.org/0000-0002-0562-669X)\\\n Weierstrass Institute\\\n for Applied Analysis\\\n and Stochastics\\\n Berlin, Germany\\\n `0000-0002-6797-8009`\\\n [![image](orcid.pdf)J\u00f6rg Kr\u00fcger](https://orcid.org/0000-0002-0562-669X)\\\n Institute for Machine Tools\\\n and Factory Management\\\n Technical University of Berlin\\\n Berlin, Germany\\\n `0000-0001-5138-0793`\\\nbibliography:\n- 'Literature.bib'\ntitle: Decomposition of Industrial Systems for Energy Efficiency Optimization with OptTopo\n---\n\nIntroduction\n============\n\nUsing energy as efficiently as possible" +"---\nabstract: 'The bit-rock interaction considerably affects the dynamics of a drill string. 
One critical condition is the stick-slip oscillations, where torsional vibrations are high; the bit angular speed varies from zero to about two times (or more) the top drive nominal angular speed. In addition, uncertainties should be taken into account when calibrating (identifying) the bit-rock interaction parameters. This paper proposes a procedure to estimate the parameters of four bit-rock interaction models, one of which is new, and at the same time select the most suitable model, given the available field data. The approximate Bayesian computation (ABC) is used for this purpose. An approximate posterior probability density function is obtained for the parameters of each model, which allows uncertainty to be analyzed. Furthermore, the impact of the uncertainties of the selected models on the torsional stability map (varying the nominal top drive angular speed and the weight on bit) of the system is evaluated.'\naddress:\n- 'Department of Mechanical Engineering - Universidade Federal do Rio de Janeiro, Rio de Janeiro, Brazil'\n- 'Department of Mechanical Engineering - University of Bristol, Bristol, UK'\nauthor:\n- 'D.A. Castello'\n- 'T.G. Ritto'\nbibliography:\n- 'bibdtwin.bib'\ntitle: 'ABC for model selection and parameter" +"---\nabstract: |\n There is abundant medical data on the internet, most of which are unlabeled. Traditional supervised learning algorithms are often limited by the amount of labeled data, especially in the medical domain, where labeling is costly in terms of human processing and specialized experts needed to label them. They are also prone to human error and biased as a select few expert annotators label them. These issues are mitigated by Self-supervision, where we generate pseudo-labels from unlabelled data by seeing the data itself. This paper presents various self-supervision strategies to enhance the performance of a time-series based Diffusion convolution recurrent neural network (DCRNN) model. 
The learned weights in the self-supervision pretraining phase can be transferred to the supervised training phase to boost the model\u2019s prediction capability. Our techniques are tested on an extension of a Diffusion Convolutional Recurrent Neural network (DCRNN) model, an RNN with graph diffusion convolutions, which models the spatiotemporal dependencies present in EEG signals. When the learned weights from the pretraining stage are transferred to a DCRNN model to determine whether an EEG time window has a characteristic seizure signal associated with it, our method yields an AUROC score $1.56\\%$ higher than the current state-of-the-art models" +"---\nabstract: 'We report the discovery of a new short period pulsating variable in the field of the exoplanet host star XO-2. The variable was identified while it was being used as a comparison star. In order to verify the variability of the candidate, a follow-up program was carried out. Period analysis of multi-band light curves revealed a very prominent and consistent pulsation periodicity of $P\\sim0.95$ hours. Given the variability period, amplitude and the color index, the object is most likely a *Delta Scuti* type variable. The absolute magnitude ($M_{v}$) and the color index $(B-V)_{0}$ of the star were determined as $2.76$ and $0.22$, respectively. This $(B-V)_{0}$ of the star corresponds to an A7 spectral type with an approximate effective temperature of 7725 K. Machine-learning analysis of the time-series data also revealed that the object is of variable type DSCT with a probability of 78%.'\nauthor:\n- |\n M. Turan Sa\u011flam$^{1,2}$[^1], Meryem \u00c7\u00f6rd\u00fck$^{2}$, Sinan Ali\u015f$^{1,2}$, G\u00f6rkem \u00d6zg\u00fcl$^{2}$,\\\n $^{1}$Istanbul University Observatory Research and Application Center, 34116 Istanbul, Turkey\\\n $^{2}$Department of Astronomy and Space Sciences, Faculty of Science, Istanbul University, 34116 Istanbul, Turkey\nbibliography:\n- 'IST40spp.bib'\ndate: 'Accepted: XXX. Revised: YYY. 
Received: ZZZ.'\ntitle: Discovery of a Short Period Pulsator from Istanbul University Observatory\n---\n\n\\[firstpage\\]" +"---\nabstract: 'It is crucial to automatically construct knowledge graphs (KGs) of diverse new relations to support knowledge discovery and broad applications. Previous KG construction methods, based on either crowdsourcing or text mining, are often limited to a small predefined set of relations due to manual cost or restrictions in text corpus. Recent research proposed to use pretrained language models (LMs) as implicit knowledge bases that accept knowledge queries with prompts. Yet, the implicit knowledge lacks many desirable properties of a full-scale symbolic KG, such as easy access, navigation, editing, and quality assurance. In this paper, we propose a new approach of harvesting massive KGs of *arbitrary* relations from pretrained LMs. With minimal input of a relation definition (a prompt and a few shot of example entity pairs), the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge of the desired relation. We develop an effective search-and-rescore mechanism for improved efficiency and accuracy. We deploy the approach to harvest KGs of over 400 new relations from different LMs. Extensive human and automatic evaluations show our approach manages to extract diverse accurate knowledge, including tuples of complex relations (e.g., `A is capable of but not good" +"---\nabstract: 'We investigate in this paper the vanishing at $s=1$ of the twisted $L$-functions of elliptic curves $E$ defined over the rational function field ${\\mathbb{F}}_q(t)$ (where ${\\mathbb{F}}_q$ is a finite field of $q$ elements and characteristic $\\geq 5$) for twists by Dirichlet characters of prime order $\\ell \\geq 3$, from both a theoretical and numerical point of view. 
In the case of number fields, it is predicted that such vanishing is a very rare event, and our numerical data seems to indicate that this is also the case over function fields for non-constant curves. For constant curves, we adapt the techniques of [@Li-vanishing; @Donepudi-Li] who proved vanishing at $s=1/2$ for infinitely many Dirichlet $L$-functions over ${\\mathbb{F}}_q(t)$ based on the existence of one, and we can prove that if there is one $\\chi_0$ such that $L(E, \\chi_0, 1)=0$, then there are infinitely many. Finally, we provide some examples which show that twisted $L$-functions of constant elliptic curves over ${\\mathbb{F}}_q(t)$ behave differently than the general ones.'\naddress:\n- 'Antoine Comeau-Lapointe: Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve West, Montr\u00e9al, Qu\u00e9bec, Canada H3G 1M8'\n- 'Chantal David: Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve West, Montr\u00e9al," +"---\nabstract: 'Elasticity typically refers to a material\u2019s ability to store energy, while viscosity refers to a material\u2019s tendency to dissipate it. In this review, we discuss fluids and solids for which this is not the case. These materials display additional linear response coefficients known as odd viscosity and odd elasticity. We first introduce odd elasticity and odd viscosity from a continuum perspective, with an emphasis on their rich phenomenology, including transverse responses, modified dislocation dynamics, and topological waves. We then provide an overview of systems that display odd viscosity and odd elasticity. These systems range from quantum fluids, to astrophysical gasses, to active and driven matter. 
Finally, we comment on microscopic mechanisms by which odd elasticity and odd viscosity arise.'\nauthor:\n- 'Michel Fruchart,$^1$ Colin Scheibner,$^1$ and Vincenzo Vitelli$^{1,2}$'\ntitle: Odd Viscosity and Odd Elasticity\n---\n\ncontinuum mechanics, fluid mechanics, non-equilibrium, active matter, hydrodynamics\n\nIntroduction\n============\n\nContinuum theories are phenomenological tools that allow us to understand and manipulate the world around us\u00a0[@Truesdell1960; @Landau6; @Landau7]. As it turns out, classical field theories such as elasticity and fluid mechanics have been built upon certain symmetries and conservation laws that describe disparate systems ranging from rivers to steel beams. Examples include" +"---\nabstract: 'We prove that for $n \\in \\mathbb N$ and an absolute constant $C$, if $p \\geq C\\log^2 n / n$ and $L_{i,j} \\subseteq [n]$ is a random subset of $[n]$ where each $k\\in [n]$ is included in $L_{i,j}$ independently with probability $p$ for each $i, j\\in [n]$, then asymptotically almost surely there is an order-$n$ Latin square in which the entry in the $i$th row and $j$th column lies in $L_{i,j}$. The problem of determining the threshold probability for the existence of an order-$n$ Latin square was raised independently by Johansson, by Luria and Simkin, and by Casselgren and H[\u00e4]{}ggkvist; our result provides an upper bound which is tight up to a factor of $\\log n$ and strengthens the bound recently obtained by Sah, Sawhney, and Simkin. 
We also prove analogous results for Steiner triple systems and $1$-factorizations of complete graphs, and moreover, we show that each of these thresholds is at most the threshold for the existence of a $1$-factorization of a nearly complete regular bipartite graph.'\naddress:\n- 'School of Mathematics, University of Birmingham, Edgbaston, Birmingham, United Kingdom'\n- 'School of Mathematics, Georgia Institute of Technology, Atlanta, GA, USA'\n- 'Department of Mathematics, ETH Z\u00fcrich, Z\u00fcrich," +"---\nauthor:\n- Mateus Carneiro\n- Richie Diurba\n- Rob Fine\n- Mandeep Gill\n- Ketino Kaadze\n- Harvey Newman\n- Kevin Pedro\n- Alexx Perloff\n- Louise Suter\n- Shawn Westerdale\nbibliography:\n- 'Bibliography/common.bib'\n- 'Bibliography/main.bib'\ntitle: |\n Snowmass \u201921 Community Engagement Frontier 6: Public Policy and Government Engagement\\\n Congressional Advocacy for HEP Funding\\\n (The \u201cDC Trip\u201d) \n---\n\nIntroduction {#sec:intro}\n============\n\nThis document has been prepared as a Snowmass contributed paper by the Public Policy & Government Engagement topical group (CEF06) within the Community Engagement Frontier. The charge of CEF06 is to review all aspects of how the High Energy Physics (HEP) community engages with government at all levels and how public policy impacts members of the community and the community at large, and to assess and raise awareness within the community of direct community-driven engagement of the U.S. federal government (*i.e.* advocacy). In the previous Snowmass process these topics were included in a broader \u201cCommunity Engagement and Outreach\u201d group whose work culminated in the recommendations outlined in Ref. [@snowmass13recs].\n\nThe focus of this paper is the advocacy undertaken by the High Energy Physics (HEP) community that pertains directly to the funding of the field by the U.S. 
federal" +"---\nabstract: 'We present a systematic and complementary study of quantum correlations near a black hole by considering the measurement-induced nonlocality (MIN). The quantum measure of interest is discussed on the same footing for the fermionic, bosonic and mixed fermion-boson modes in relation to the Hawking radiation. The obtained results show that in the infinite Hawking temperature limit, the physically accessible correlations do not vanish only in the fermionic case. However, the higher frequency modes can sustain correlations for the finite Hawking temperature, with the mixed system being more sensitive towards an increase of the fermionic frequencies than the bosonic ones. Since the MIN for the latter modes quickly diminishes, the increased frequency may be a way to maintain nonlocal correlations for the scenarios at the finite Hawking temperature.'\nauthor:\n- 'Adam Z. Kaczmarek$^{1}$'\n- 'Dominik Szcz[\u0229]{}[\u015b]{}niak$^{1}$'\n- 'Sabre Kais$^{2}$'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Measurement-induced nonlocality for observers near a black hole'\n---\n\nIntroduction\n============\n\nThe behavior of quantum information in relativistic settings has gained significant attention over recent years [@peres2002; @ahn2003; @terashima2003; @hu2018; @lanzagorta2014], and the advent of this interest can be associated with the pioneering work of Peres, Scudo and Terno, showing that the spin entropy is not Lorentz" +"---\nabstract: 'Quantum error correcting codes are designed to pinpoint exactly when and where errors occur in quantum circuits. This feature is the foundation of their primary task: to support fault-tolerant quantum computation. However, this feature could be used as the basis of benchmarking: by analyzing the outputs of even small-scale quantum error correction circuits, a detailed picture can be constructed of error processes across a quantum device.
Here we perform an example of such an analysis, using the results of small repetition codes to determine the error rate of each qubit while idle during a syndrome measurement. This provides an idea of the errors experienced by the qubits across a device while they are part of the kind of circuit that we expect to be typical in fault-tolerant quantum computers.'\nauthor:\n- 'James R. Wootton'\nbibliography:\n- 'references.bib'\ntitle: 'Syndrome-Derived Error Rates as a Benchmark of Quantum Hardware'\n---\n\nIntroduction and Motivation\n===========================\n\nThe development of quantum computers requires the benchmarking and validation of quantum hardware\u00a0[@eisert:2019]. When aiming towards the long-term goal of fault-tolerant and scalable quantum computation, it is crucial for this benchmarking to account for quantum error correction: probing to what degree its requirements are met, and" +"---\nabstract: 'For monaural speech enhancement, contextual information is important for accurate speech estimation. However, commonly used convolutional neural networks (CNNs) are weak in capturing temporal contexts since they only build blocks that process one local neighborhood at a time. To address this problem, we learn from human auditory perception to introduce a two-stage trainable reasoning mechanism, referred to as the global-local dependency (GLD) block. GLD blocks capture long-term dependencies of time-frequency bins at both the global and local levels from the noisy spectrogram to help detect correlations among the speech part, the noise part, and the whole noisy input. Furthermore, we propose a monaural speech enhancement network called GLD-Net, which adopts an encoder-decoder architecture and consists of a speech object branch, an interference branch, and a global noisy branch. The extracted speech features at the global and local levels are efficiently reasoned and aggregated in each of the branches.
We compare the proposed GLD-Net with existing state-of-the-art methods on the WSJ0 and DEMAND datasets. The results show that GLD-Net outperforms the state-of-the-art methods in terms of PESQ and STOI.'\naddress: |\n $^1$Electronic & Elect. Engineering, Trinity College Dublin, Ireland\\\n $^2$vivo AI Lab, China\\\n $^3$School of Foreign Languages, HBUCM, China\nbibliography:\n- 'mybib.bib'\ntitle: 'GLD-Net: Improving Monaural Speech Enhancement by" +"---\nabstract: 'In this paper, we give a family of online algorithms for the classical coloring problem on intersection graphs of discs with bounded diameter. Our algorithms make use of a geometric representation of such graphs and are inspired by an algorithm of Fiala [*et al.*]{}, but have better competitive ratios. The improvement comes from using two techniques of partitioning the set of vertices before coloring them. One of these is an application of a $b$-fold coloring of the plane. The method is more general, and we show how it can be applied to coloring other shapes on the plane as well as how to adjust it for online $L(2,1)$-labeling.'\nauthor:\n- 'Joanna Chybowska-Sok\u00f3\u0142$^1$[^1]'\n- |\n Konstanty Junosza-Szaniawski$^1$\\\n $^1$ Faculty of Mathematics and Information Science,\\\n Warsaw University of Technology, Poland\\\n email: [j.sokol@mini.pw.edu.pl, konstanty.szaniawski@pw.edu.pl]{}\ntitle: Online coloring of disk graphs\n---\n\nkeywords: disk graphs, online coloring, online $L(2,1)$-labeling, online coloring geometric shapes\n\nIntroduction\n============\n\nIntersection graphs of families of geometric objects have attracted much attention from researchers, both for their theoretical properties and practical applications (cf. McKee and McMorris [@McKMcM]).
For example, intersection graphs of families of discs, and in particular discs of unit diameter (called [*unit disk intersection graphs*]{}), play a crucial role" +"---\nabstract: 'The central binomial series at negative integers are expressed as a linear combination of values of two specific polynomials. We show that one of the polynomials is a special value of the bivariate Eulerian polynomial and the other polynomial is related to the antidiagonal sum of poly-Bernoulli numbers. As an application, we prove Stephan\u2019s observation from 2004.'\naddress:\n- 'Faculty of Water Sciences, University of Public Service, Baja, HUNGARY'\n- 'Institute for Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, JAPAN'\nauthor:\n- Be\u00e1ta B\u00e9nyi\n- Toshiki Matsusaka\nbibliography:\n- 'References1.bib'\ntitle: 'Remarkable relations between the central binomial series, Eulerian polynomials, and poly-Bernoulli numbers'\n---\n\nIntroduction\n============\n\nThe *central binomial series* is a Dirichlet series defined by $$\\begin{aligned}\n\\label{dir_ser}\n \\zeta_{CB}(s) = \\sum_{n=1}^\\infty \\frac{1}{n^s {2n \\choose n}} \\quad (s \\in \\mathbb{C}).\\end{aligned}$$ Borwein, Broadhurst, and Kamnitzer\u00a0[@BorweinBroadhurstKamnitzer2001] studied special values $\\zeta_{CB}(k)$ at positive integers and recovered some remarkable connections. A classical evaluation is $\\zeta_{CB}(4) = \\frac{17\\pi^{4}}{3240} = \\frac{17}{36}\\zeta(4)$.
In particular, for $k\\geq 2$, Borwein\u2013Broadhurst\u2013Kamnitzer showed that $\\zeta_{CB}(k)$ can be written as a $\\mathbb{Q}$-linear combination of multiple zeta values and multiple Clausen and Glaisher values.\n\nOn the other hand, Lehmer\u00a0[@Lehmer1985] proved that for $k\\leq 1$, $\\zeta_{CB}(k)$ is a $\\mathbb{Q}$-linear combination" +"---\nabstract: 'The presence of thermal gluons reduces the stimulated gluon emission off a heavy quark propagating in the quark-gluon plasma (QGP), while absorption causes a reduction of the radiative energy loss. On the other hand, the chromo-electromagnetic field fluctuations present in the QGP lead to a collisional energy gain of the heavy quark. The net effect of the thermal gluon absorption and field fluctuations is a reduction of the total energy loss of the heavy quark, prominent at lower momenta. We consider both kinds of energy gain along with the usual losses, and compute the nuclear modification factor ($R_{AA}$) of heavy mesons, viz., $D$ and $B$ mesons. The calculations have been compared with the experimental measurements in Au-Au collisions at $\\sqrt{s_{NN}} = 200$ GeV from the STAR and PHENIX experiments at RHIC and Pb-Pb collisions at $\\sqrt{s_{NN}} = 2.76$ TeV and $5.02$ TeV from the CMS and ALICE experiments at the LHC. We find a significant effect of the total energy gain due to thermal gluon absorption and field fluctuations on heavy flavour suppression, especially at lower transverse momenta.'\nauthor:\n- Ashik Ikbal Sheikh\ntitle: Effect of thermal gluon absorption and medium fluctuations on heavy flavour nuclear modification factor" +"---\nabstract: 'Inspired by experiments on the magnetic field induced phases of the spin-orbit coupled $2d$ Mott insulator $\\alpha$-RuCl$_3$, we study some general aspects of gapped $Z_2$ Quantum Spin Liquids (QSL) enriched by lattice translation symmetry. We show that there are $12$ distinct such phases with different implementations of translation symmetry.
In some of these phases, the vison excitations of this QSL may form topological Chern bands. We explore a phenomenological description of a putative $Z_2$ QSL as a candidate ground state at intermediate magnetic fields in $\\alpha$-RuCl$_3$. This state has broad continuum spectra in neutron scattering, a \u201cbosonic\u201d thermal Hall signal that goes to zero at zero temperature, and is naturally proximate to a zigzag magnetic ordered state, all of which are also seen in $\\alpha$-RuCl$_3$. On general grounds, continuum scattering will also be seen at multiple points in the Brillouin zone in this state.'\nauthor:\n- 'Xue-Yang Song'\n- 'T. Senthil'\ntitle: 'Translation-enriched $Z_2$ spin liquids and topological vison bands: Possible application to $\\alpha$-RuCl$_3$'\n---\n\nThe search for a Quantum Spin Liquid (QSL) state in electronic materials has led to intensive study of $\\alpha$-RuCl$_3$ in the last few years[@broholm2020quantum; @takagi2019concept; @trebst2022kitaev; @hermanns2017physics; @winter2017models]. This is a layered material" +"---\nabstract: 'Deep segmentation models often face failure risks when the testing image presents unseen distributions. Improving model robustness against these risks is crucial for the large-scale clinical application of deep models. In this study, inspired by the human learning cycle, we propose a novel online reflective learning framework (*RefSeg*) to improve segmentation robustness. Based on the reflection-on-action concept, our RefSeg first drives the deep model to take action to obtain semantic segmentation. Then, RefSeg triggers the model to reflect on itself. Because making deep models realize their segmentation failures during testing is challenging, RefSeg synthesizes a realistic proxy image from the semantic mask to help deep models build intuitive and effective reflections. This proxy translates and emphasizes the segmentation flaws.
By maximizing the structural similarity between the raw input and the proxy, the reflection-on-action loop is closed with segmentation robustness improved. RefSeg runs in the testing phase and is general for segmentation models. Extensive validation on three medical image segmentation tasks with a public cardiac MR dataset and two in-house large ultrasound datasets shows that our RefSeg remarkably improves model robustness and achieves state-of-the-art performance over strong competitors.'\nauthor:\n- 'Yuhao Huang[^1], Xin Yang'\n- Xiaoqiong Huang\n- Jiamin Liang" +"---\nabstract: 'End-2-end (E2E) models have become increasingly popular in some ASR tasks because of their performance and advantages. These E2E models directly approximate the posterior distribution of tokens given the acoustic inputs. Consequently, the E2E systems implicitly define a language model (LM) over the output tokens, which makes the exploitation of independently trained language models less straightforward than in conventional ASR systems. This makes it difficult to dynamically adapt E2E ASR systems to contextual profiles for better recognizing special words such as named entities. In this work, we propose a contextual density ratio approach for both training a context-aware E2E model and adapting the language model to named entities. We apply the aforementioned technique to an E2E ASR system, which transcribes doctor and patient conversations, for better adapting the E2E system to the names in the conversations. Our proposed technique achieves a relative improvement of up to 46.5% on the names over an E2E baseline without degrading the overall recognition accuracy of the whole test set.
Moreover, it also surpasses a contextual shallow fusion baseline by 22.1% relative.'\naddress: ' Nuance Communications, Inc., $^2$Valencia, Spain, $^3$Torino, Italy, $^4$Burlington, MA, USA '\nbibliography:\n- 'mybib.bib'\ntitle: Contextual Density" +"---\nabstract: 'Despite much research, traditional methods for pitch prediction are still not perfect. With the emergence of neural networks (NNs), researchers hope to create an NN-based pitch predictor that outperforms traditional methods. Three pitch detection algorithms (PDAs), pYIN, YAAPT, and CREPE, are compared in this paper. pYIN and YAAPT are conventional approaches considering time domain and frequency domain processing. CREPE utilizes a data-trained deep convolutional neural network to estimate pitch. It involves six densely connected convolutional hidden layers and determines pitch probabilities for a given input signal. The performance of CREPE, representing neural network pitch predictors, is compared to more classical approaches represented by pYIN and YAAPT. The figure of merit (FOM) will include the number of unvoiced-to-voiced errors, voiced-to-voiced errors, gross pitch errors, and fine pitch errors.'\nauthor:\n- \ntitle: |\n Comparing Conventional Pitch Detection Algorithms with a Neural Network Approach\\\n---\n\nIntroduction\n============\n\nPitch is a characteristic of speech that is derived from the fundamental frequency. It exists in voiced speech only and is the rate at which the cords in the vocal tract vibrate. Speech can have voiced (periodic) and unvoiced (aperiodic) segments. Thus, during the unvoiced sections of speech, pitch is not" +"---\nabstract: 'For a graph $G$, a subset $S \\subseteq V(G)$ is called a *resolving set* if for any two vertices $u,v \\in V(G)$, there exists a vertex $w \\in S$ such that $d(w,u) \\neq d(w,v)$. The [Metric Dimension]{} problem takes as input a graph $G$ and a positive integer $k$, and asks whether there exists a resolving set of size at most $k$.
This problem was introduced in the 1970s and is known to be -hard\u00a0\\[GT\u00a061 in Garey and Johnson\u2019s book\\]. In the realm of parameterized complexity, Hartung and Nichterlein\u00a0\\[CCC\u00a02013\\] proved that the problem is -hard when parameterized by the natural parameter $k$. They also observed that it is \u00a0when parameterized by the vertex cover number and asked about its complexity under *smaller* parameters, in particular the feedback vertex set number. We answer this question by proving that [Metric Dimension]{} is -hard when parameterized by the combined parameter feedback vertex set number plus pathwidth. This also improves the result of Bonnet and Purohit\u00a0\\[IPEC 2019\\] which states that the problem is -hard parameterized by the pathwidth. On the positive side, we show that [Metric Dimension]{} is \u00a0when parameterized by either the distance to cluster or" +"---\nabstract: 'We propose a novel anomaly detection method for echocardiogram videos. The introduced method takes advantage of the periodic nature of the heart cycle to learn three variants of a *variational latent trajectory* model (TVAE). While the first two variants (TVAE-C and TVAE-R) model strict periodic movements of the heart, the third (TVAE-S) is more general and allows shifts in the spatial representation throughout the video. All models are trained on the healthy samples of a novel in-house dataset of infant echocardiogram videos consisting of multiple chamber views to learn a normative prior of the healthy population. During inference, maximum a posteriori (MAP) based anomaly detection is performed to detect out-of-distribution samples in our dataset. The proposed method reliably identifies severe congenital heart defects, such as Ebstein\u2019s Anomaly or Shone-complex. Moreover, it achieves superior performance over MAP-based anomaly detection with standard variational autoencoders when detecting pulmonary hypertension and right ventricular dilation. 
Finally, we demonstrate that the proposed method enables interpretable explanations of its output through heatmaps highlighting the regions corresponding to anomalous heart structures.'\nauthor:\n- Alain Ryser\n- Laura Manduchi\n- Fabian Laumer\n- Holger Michel\n- Sven Wellmann\n- 'Julia E. Vogt'\nbibliography:\n- 'refs.bib'\ntitle: Anomaly" +"---\nabstract: 'We introduce topological invariants of semi-decompositions (e.g. filtrations, semi-group actions, multi-valued dynamical systems, combinatorial dynamical systems) on a topological space to analyze semi-decompositions from a dynamical systems point of view. In fact, we construct Morse hyper-graphs and abstract weak elements of semi-decompositions. Moreover, the Morse hyper-graphs of the set of sublevel sets of a Morse function of a compact manifold is the Reeb graph of such function as abstract multi-graphs. The abstract weak element space for a simplicial complex is the face poset. The abstract weak element spaces of positive orbits of acyclic directed graphs are their abstract directed multi-graphs.'\naddress: 'Applied Mathematics and Physics Division, Gifu University, Yanagido 1-1, Gifu, 501-1193, Japan\\'\nauthor:\n- Tomoo Yokoyama\nbibliography:\n- 'yt20211011.bib'\ntitle: 'Morse hyper-graphs and abstract weak element spaces of semi-decompositions'\n---\n\nIntroduction\n============\n\nThe concept of a recurrent point is introduced by Birkhoff to describe the limit behavior of orbits [@Birkhoff]. Conley introduced a weak form of recurrence, called chain recurrence [@conley1978isolated], and showed that dynamical systems on compact metric spaces can be decomposed into blocks, each of which is a chain recurrent one or a gradient one [@Conley1988]. 
Then this decomposition is called the Morse decomposition, and" +"---\nabstract: 'Heating and cooling systems in buildings account for 31% of global energy use, much of which are regulated by Rule Based Controllers (RBCs) that neither maximise energy efficiency nor minimise emissions by interacting optimally with the grid. Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency, but existing solutions require access to building-specific simulators or data that cannot be expected for every building in the world. In response, we show it is possible to obtain emission-reducing policies without such knowledge *a priori* \u2013 a paradigm we call zero-shot building control. We combine ideas from system identification and model-based RL to create PEARL (**P**robabilistic **E**mission-**A**bating **R**einforcement **L**earning) and show that a short period of active exploration is all that is required to build a performant model. In experiments across three varied building energy simulations, we show PEARL outperforms an existing RBC once, and popular RL baselines in all cases, reducing building emissions by as much as 31% whilst maintaining thermal comfort. Our source code is available online via [https://enjeeneer.io/projects/pearl/]{}.'\nauthor:\n- |\n Written by AAAI Press Staff^1^[^1]\\\n AAAI Style Contributions by Pater Patel Schneider, Sunil Issar,\\\n J. Scott Penberthy, George Ferguson, Hans Guesgen, Francisco" +"---\nabstract: 'We investigate how much can be learnt about four types of primordial non-Gaussianity (PNG) from small-scale measurements of the halo field. Using the quijote-png simulations, we quantify the information content accessible with measurements of the halo power spectrum monopole and quadrupole, the matter power spectrum, the halo-matter cross spectrum and the halo bispectrum monopole. 
This analysis is the first to include small, non-linear scales, up to $k_\\mathrm{max}=0.5 \\mathrm{h/Mpc}$, and to explore whether these scales can break degeneracies with cosmological and nuisance parameters making use of thousands of N-body simulations. We perform all the halo measurements in redshift space with a single sample comprised of all halos with mass $>3.2 \\times 10^{13}~h^{-1}M_\\odot$. For *local* PNG, measurements of the scale dependent bias effect from the power spectrum using sample variance cancellation provide significantly tighter constraints than measurements of the halo bispectrum. In this case measurements of the small scales add minimal additional constraining power. In contrast, the information on *equilateral* and *orthogonal* PNG is primarily accessible through the bispectrum. For these shapes, small scale measurements increase the constraining power of the halo bispectrum by up to $\\times4$, though the addition of scales beyond $k\\approx 0.3 \\mathrm{h/Mpc}$ improves constraints largely" +"---\nabstract: 'It is widely believed that magnetic flux ropes are the key structure of solar eruptions; however, their observable counterparts are not clear yet. We study a flare associated with flux rope eruption in a comprehensive radiative magnetohydrodynamic simulation of flare-productive active regions, especially focusing on the thermodynamic properties of the plasma involved in the eruption and their relation to the magnetic flux rope. The pre-existing flux rope, which carries cold and dense plasma, rises quasi-statically before the eruption onsets. During this stage, the flux rope does not show obvious signatures in extreme ultraviolet (EUV) emission. After the flare onset, a thin \u2018current shell\u2019 is generated around the erupting flux rope. Moreover, a current sheet is formed under the flux rope, where two groups of magnetic arcades reconnect and create a group of post-flare loops. 
The plasma within the \u2018current shell\u2019, current sheet, and post-flare loops is heated to more than 10 MK. The post-flare loops give rise to abundant soft X-ray emission. Meanwhile, a majority of the plasma hosted in the flux rope is heated to around 1 MK, and the main body of the flux rope is manifested as a bright arch in cooler EUV passbands such" +"---\nabstract: 'In this work, we investigate a class of quasilinear wave equations of Westervelt type with, [in general, nonlocal-in-time]{} dissipation. They arise as models of nonlinear sound propagation through complex media with anomalous diffusion of Gurtin\u2013Pipkin type. Aiming at minimal assumptions on the involved memory kernels \u2013 which we allow to be weakly singular \u2013 we prove the well-posedness of such wave equations in a general theoretical framework. In particular, the Abel fractional kernels, as well as Mittag-Leffler-type kernels, are covered by our results. The analysis is carried out uniformly with respect to the small involved parameter on which the kernels depend and which can be physically interpreted as the sound diffusivity or the thermal relaxation time. We then analyze the behavior of solutions as this parameter vanishes, and in this way relate the equations to their limiting counterparts.
To establish the limiting problems, we distinguish among different classes of kernels and analyze and discuss all ensuing cases.'\naddress:\n- |\n Department of Mathematics, Alpen-Adria-Universit\u00e4t Klagenfurt\\\n Universit\u00e4tsstra\u00dfe 65\u201367, A-9020 Klagenfurt, Austria\n- |\n Department of Mathematics\\\n Radboud University\\\n Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands\nauthor:\n- 'Barbara Kaltenbacher, Mostafa Meliani, and Vanja Nikoli\u0107'\nbibliography:\n- 'references.bib'\ntitle: |" +"---\nabstract: 'Experimental setups that study laser-cooled ions immersed in baths of ultracold atoms merge the two exciting and well-established fields of quantum gases and trapped ions. These experiments benefit both from the exquisite read-out and control of the few-body ion systems as well as the many-body aspects, tunable interactions, and ultracold temperatures of the atoms. However, combining the two leads to challenges both in the experimental design and the physics that can be studied. Nevertheless, these systems have provided insights into ion-atom collisions, buffer gas cooling of ions and quantum effects in the ion-atom interaction. This makes them promising candidates for ultracold quantum chemistry studies, creation of cold molecular ions for spectroscopy and precision measurements, and as test beds for quantum simulation of charged impurity physics. 
In this review we aim to provide an experimental account of recent progress and introduce the experimental setup and techniques that enabled the observation of quantum effects.'\naddress:\n- 'Van der Waals-Zeeman Institute, Institute of Physics, University of Amsterdam, 1098 XH Amsterdam, the Netherlands'\n- 'Van der Waals-Zeeman Institute, Institute of Physics, University of Amsterdam, 1098 XH Amsterdam, the Netherlands'\n- 'QuSoft, Science Park 123, 1098 XG Amsterdam, the Netherlands'\nauthor:\n- 'Rianne" +"---\nabstract: |\n Since the appearance of Covid-19 in late 2019, Covid-19 has become an active research topic for the artificial intelligence (AI) community. One of the most interesting AI topics is Covid-19 analysis of medical imaging. CT-scan imaging is the most informative tool about this disease.\n\n This work is part of the 2nd COV19D competition, where two challenges are set: Covid-19 Detection and Covid-19 Severity Detection from the CT-scans. For Covid-19 detection from CT-scans, we proposed an ensemble of 2D Convolution blocks with Densenet-161 models. Here, each 2D convolutional block with Densenet-161 architecture is trained separately and in testing phase, the ensemble model is based on the average of their probabilities. On the other hand, we proposed an ensemble of Convolutional Layers with Inception models for Covid-19 severity detection. 
In addition to the Convolutional Layers, three Inception variants were used, namely Inception-v3, Inception-v4 and Inception-Resnet.\n\n Our proposed approaches outperformed the baseline approach in the validation data of the 2nd COV19D competition by 11% and 16% for Covid-19 detection and Covid-19 severity detection, respectively.\nauthor:\n- |\n [![image](orcid.pdf)Fares BOUGOURZI](https://sciprofiles.com/profile/FaresBougourzi) [^1]\\\n Institute of Applied Sciences and Intelligent Systems,\\\n National Research Council of Italy (CNR),\\\n 73100 Lecce, Italy\\\n `faresbougourzi@gmail.com`\\\n [![image](orcid.pdf)Cosimo Distante](https://orcid.org/0000-0000-0000-0000)\\\n Institute" +"---\nabstract: 'A calculation of two-electron QED effects to all orders in the nuclear binding strength parameter ${{Z \\alpha}}$ is presented for the ground and $n = 2$ excited states of helium-like ions. After subtracting the first terms of the ${{Z \\alpha}}$ expansion from the all-order results, we identify the higher-order QED effects of order $m\\alpha^7$ and higher. Combining the higher-order remainder with the results complete through order $m\\alpha^6$ from \\[V.\u00a0A.\u00a0Yerokhin and K.\u00a0Pachucki, Phys.\u00a0Rev.\u00a0A [**81**]{}, 022507 (2010)\\], we obtain the most accurate theoretical predictions for the ground and non-mixing $n=2$ states of helium-like ions with $Z = 5\\,$\u2013$\\,30$. For the mixing $2\\,^1 P_1$ and $2\\,^3 P_1$ states, we extend the previous calculation by evaluating the higher-order mixing correction and show that it defines the uncertainty of theoretical calculations in the $LS$ coupling for $Z > 10$.'\nauthor:\n- 'Vladimir A. 
Yerokhin'\n- Vojt\u011bch Patk\u00f3\u0161\n- Krzysztof Pachucki\ntitle: 'QED calculations of energy levels of helium-like ions with $\\bm {5 \\leq Z \\leq 30}$ '\n---\n\n\\[4\\]\n\n{\n\n[cl]{} \\#1 & \\#2\\\n\\#3 & \\#4\\\n\n.\n\nIntroduction\n============\n\nHelium and helium-like ions are the simplest few-body atomic systems in nature. They have been extensively studied experimentally," +"---\nabstract: 'Emden-Fowler type equations are nonlinear differential equations that appear in many fields such as mathematical physics, astrophysics and chemistry. In this paper, we perform an asymptotic analysis of a specific Emden-Fowler type equation that emerges in a queuing theory context as an approximation of voltages under a well-known power flow model. Thus, we place Emden-Fowler type equations in the context of electrical engineering. We derive properties of the continuous solution of this specific Emden-Fowler type equation and study the asymptotic behavior of its discrete analog. We conclude that the discrete analog has the same asymptotic behavior as the classical continuous Emden-Fowler type equation that we consider.'\nauthor:\n- 'Christianen, M.H.M.'\n- 'Janssen, A.J.E.M.'\n- 'Vlasiou, M.'\n- 'Zwart, B.'\nbibliography:\n- 'PhD-Asymptotic\\_Analysis.bib'\ntitle: 'Asymptotic analysis of Emden-Fowler type equation with an application to power flow models'\n---\n\nIntroduction\n============\n\nMany problems in mathematical physics, astrophysics and chemistry can be modeled by an Emden-Fowler type equation of the form $$\\begin{aligned}\n\\frac{d}{dt}\\left(t^{\\rho}\\frac{du}{dt} \\right)\\pm t^{\\sigma}h(u) = 0,\\label{eq:general_fowler_emden}\\end{aligned}$$ where $\\rho,\\sigma$ are real numbers, the function $u:\\mathbb{R}\\to\\mathbb{R}$ is twice differentiable and $h: \\mathbb{R}\\to\\mathbb{R}$ is some given function of $u$. 
For example, choosing $h(u)=u^n$ for $n\\in\\mathbb{R}$, $\\rho=1$, $\\sigma=0$ and the plus sign in the equation above, is" +"---\nabstract: |\n Knowledge-based Visual Question Answering (VQA) expects models to rely on external knowledge for robust answer prediction. Significant as this task is, this paper identifies several leading factors impeding the advancement of current state-of-the-art methods. On the one hand, methods that exploit explicit knowledge treat the knowledge as a complement to the coarsely trained VQA model. Despite their effectiveness, these approaches often suffer from noise incorporation and error propagation. On the other hand, the multi-modal implicit knowledge for knowledge-based VQA still remains largely unexplored. This work presents a unified end-to-end retriever-reader framework towards knowledge-based VQA. In particular, we shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning. As for the noise problem encountered by the retrieval operation on explicit knowledge, we design a novel scheme to create pseudo labels for effective knowledge supervision. This scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering. To validate the effectiveness of the proposed method, we conduct extensive experiments on the benchmark dataset. The experimental results reveal that our method outperforms existing baselines by a noticeable" +"---\nabstract: 'The ongoing COVID-19 pandemic has already taken millions of lives and damaged economies across the globe. Most COVID-19 deaths and economic losses are reported from densely crowded cities. It is clear that the effective control and prevention of epidemic/pandemic infectious diseases is vital. According to WHO, testing and diagnosis is the best strategy to control pandemics. 
Scientists worldwide are attempting to develop various innovative and cost-efficient methods to speed up the testing process. This paper comprehensively evaluates the applicability of the recent top ten state-of-the-art Deep Convolutional Neural Networks (CNNs) for automatically detecting COVID-19 infection using chest X-ray images. Moreover, it provides a comparative analysis of these models in terms of accuracy. This study identifies the effective methodologies to control and prevent infectious respiratory diseases. Our trained models have demonstrated outstanding results in classifying COVID-19-infected chest X-rays. In particular, our trained models MobileNet, EfficientNet, and InceptionV3 achieved average classification accuracies of 95%, 95%, and 94% on the test set for the COVID-19 class, respectively. Thus, it can be beneficial for clinical practitioners and radiologists to speed up the testing, detection, and follow-up of COVID-19 cases.'\nauthor:\n- |\n Zeba Ghaffar [^1]\\\n Department of Computer Science\\\n COMSATS University" +"---\nabstract: 'Second harmonic generation is enhanced at the surface lattice resonance in plasmonic nanoparticle arrays. We carried out a parametric investigation on two-dimensional lattices composed of gold nanobars where the centrosymmetry is broken at oblique incidence. We study the influence of the periodicity, the incidence angle and the direction of the linear input polarization on the second harmonic generation. Excitation of the surface lattice resonance either at the fundamental or second harmonic wavelength, achieved by varying the incidence angle, enhances the conversion efficiency. As a special case, we demonstrate that both wavelengths can simultaneously be in resonance for a specific period of the lattice. 
In this double resonant case, maximum second harmonic power is achieved.'\nauthor:\n- Sebastian Beer\n- Jeetendra Gour\n- Alessandro Alberucci\n- Christin David\n- Stefan Nolte\n- Uwe Detlef Zeitner\nbibliography:\n- 'references.bib'\ntitle: Second harmonic generation under doubly resonant lattice plasmon excitation\n---\n\n[^1]\n\n[^2]\n\n[^3]\n\n*^^ [Opt. Express 30, 40884-40896 \\[2022\\]](https://opg.optica.org/oe/fulltext.cfm?uri=oe-30-22-40884&id=511118), Optica Publishing Group. Users may use, reuse, and build upon the article, or use the article for text or data mining, so long as such uses are for non-commercial purposes and appropriate attribution is maintained. All other rights are reserved.*" +"---\nabstract: 'We consider non-negative $\\sigma$-finite measure spaces coupled with a proper functional $P$ that plays the role of a perimeter. We introduce the Cheeger problem in this framework and extend many classical results on the Cheeger constant and on Cheeger sets to this setting, requiring minimal assumptions on the pair measure space-perimeter. 
Throughout the paper, the measure space will never be asked to be metric, at most topological, and this requires the introduction of a suitable notion of Sobolev spaces, induced by the coarea formula with the given perimeter.'\naddress:\n- 'Dipartimento di Matematica \u201cTullio Levi Civita\u201d, Universit\u00e0 di Padova, via Trieste 63, 35121 Padova (PD), Italy'\n- 'Dipartimento di Matematica, Universit\u00e0 di Trento, via Sommarive 14, 38123 Povo (TN), Italy'\n- 'Dipartimento di Matematica e Informatica \u201cUlisse Dini\u201d, Universit\u00e0 di Firenze, viale Morgagni 67/A, 50134 Firenze (FI), Italy'\n- 'Scuola Internazionale Superiore di Studi Avanzati (SISSA), via Bonomea 265, 34136 Trieste (TS), Italy'\nauthor:\n- Valentina Franceschi\n- Andrea Pinamonti\n- Giorgio Saracco\n- Giorgio Stefani\nbibliography:\n- 'biblio.bib'\ntitle: |\n The Cheeger problem\\\n in abstract measure spaces\n---\n\nIntroduction\n============\n\nIn the Euclidean framework, the *Cheeger constant* of a given set $\\Omega\\subset \\mathbb{R}^n$ is defined as $$h(\\Omega)=\\inf\\set*{\\frac{P(E)}{\\mathscr{L}^n(E)}\\,:\\," +"---\nabstract: 'Recent advances in synthetic speech quality have enabled us to train text-to-speech (TTS) systems by using synthetic corpora. However, merely increasing the amount of synthetic data is not always advantageous for improving training efficiency. Our aim in this study is to selectively choose synthetic data that are beneficial to the training process. In the proposed method, we first adopt a variational autoencoder whose posterior distribution is utilized to extract latent features representing acoustic similarity between the recorded and synthetic corpora. By using those learned features, we then train a ranking support vector machine (RankSVM) that is well known for effectively ranking relative attributes among binary classes. 
By setting the recorded and synthetic ones as two opposite classes, RankSVM is used to determine how acoustically similar the synthesized speech is to the recorded data. Then, synthetic TTS data similar to the recorded data are selected from large-scale synthetic corpora. By using these data to train the TTS model, the synthesis performance can be significantly improved. Objective and subjective evaluation results demonstrate the effectiveness of the proposed method.'\naddress: |\n $^{1}$NAVER Corp., Seongnam, Korea\\\n $^{2}$LINE Corp., Tokyo, Japan\nbibliography:\n- 'mybib.bib'\ntitle: 'TTS-by-TTS 2: Data-Selective Augmentation for Neural Speech Synthesis Using Ranking Support Vector Machine with Variational Autoencoder'\n---\n\n**Index Terms**:" +"---\nabstract: 'Recently there has been notable progress in the study of glueball states in lattice gauge theories, in particular extrapolating their spectrum to the limit of a large number of colors $N$. In this note we compare the large $N$ lattice results with the holographic predictions, focusing on the Klebanov-Strassler model. We note that the glueball spectrum demonstrates approximate universality across a range of gauge theory models. Because of this universality the holographic models can give reliable predictions for the spectrum of pure $SU(N)$ Yang-Mills theories with and without supersymmetry. This is especially important for the supersymmetric theories, for which no firm lattice predictions exist yet, and the holographic models remain the most tractable approach. For non-supersymmetric pure $SU(N)$ theories with large $N$ we find an agreement within 5-8% between the lattice and holographic predictions for the mass ratios of the lightest states in various sectors. 
In particular both lattice and holography give predictions for the $2^{++}$ and $1^{--}$ mass ratio, consistent with the known constraints on the pomeron and odderon Regge trajectories.'\nauthor:\n- Anatoly Dymarsky$^a$ and Dmitry Melnikov$^b$\nbibliography:\n- 'refs.bib'\ntitle: 'Spectrum of Large N Glueballs: Holography vs Lattice '\n---\n\n*$^a$Department of Physics and Astronomy,\\\nUniversity" +"---\nabstract: 'Neural networks are susceptible to adversarial examples \u2014 small input perturbations that cause models to fail. Adversarial training is one of the solutions that stops adversarial examples; models are exposed to attacks during training and learn to be resilient to them. Yet, such a procedure is currently expensive \u2013 it takes a long time to produce and train models with adversarial samples, and, what is worse, it occasionally fails. In this paper we demonstrate *data pruning* \u2014 a method for increasing adversarial training efficiency through data sub-sampling. We empirically show that *data pruning* leads to improvements in convergence and reliability of adversarial training, albeit with different levels of utility degradation. For example, we observe that using random sub-sampling of CIFAR10 to drop 40% of data, we lose 8% adversarial accuracy against the strongest attackers, while by using only 20% of data we lose 14% adversarial accuracy and reduce runtime by a factor of $3$. Interestingly, we discover that in some settings data pruning brings benefits from both worlds \u2014 it both improves adversarial accuracy and training time.'\nauthor:\n- |\n Maximillian Kaufmann\\\n University of Cambridge\\\n Yiren Zhao\\\n University of Cambridge\\\n Ilia Shumailov\\\n University of Cambridge and Vector Institute\\" +"---\nabstract: 'Let $E$ be a countable Borel equivalence relation on the space $\\mathcal{E}_{\\infty}$ of all infinite partitions of the natural numbers. 
We show that $E$ coincides with equality below a Carlson-Simpson generic element of $\\mathcal{E}_{\\infty}$. In contrast, we show that there is a hypersmooth equivalence relation on $\\mathcal{E}_{\\infty}$ which is Borel bireducible with $E_1$ on every Carlson-Simpson cube. Our arguments are classical and require no background in forcing.'\naddress:\n- 'Department of Mathematical Sciences, Carnegie Mellon University'\n- 'Department of Mathematical Sciences, Carnegie Mellon University'\nauthor:\n- Aristotelis Panagiotopoulos\n- Allison Wang\nbibliography:\n- 'bibliography.bib'\ntitle: 'Every CBER is smooth below the Carlson-Simpson generic partition'\n---\n\nIntroduction\n============\n\nOne of the prominent ongoing research programs of descriptive set theory concerns the complexity theory of classification problems. Formally, a classification problem is any pair $(X,E)$ where $X$ is a Polish space and $E$ is any equivalence relation on $X$ which is analytic as a subset of $X\\times X$. A classification problem $(X,E)$ is considered to be of less or equal complexity to the classification problem $(Y,F)$ if there is a [**Borel reduction**]{} from $E$ to $F$, *i.e.*, if there is a Borel map $f\\colon X\\to Y$ so that $x E" +"---\nabstract: 'Vision transformers have recently set off a new wave in the field of medical image analysis due to their remarkable performance on various computer vision tasks. However, recent hybrid-/transformer-based approaches mainly focus on the benefits of transformers in capturing long-range dependency while ignoring the issues of their daunting computational complexity, high training costs, and redundant dependency. In this paper, we propose to employ adaptive pruning to transformers for medical image segmentation and propose a lightweight and effective hybrid network APFormer. To our best knowledge, this is the first work on transformer pruning for medical image analysis tasks. 
The key features of APFormer are mainly self-supervised self-attention (SSA) to improve the convergence of dependency establishment, Gaussian-prior relative position embedding (GRPE) to foster the learning of position information, and adaptive pruning to eliminate redundant computations and perception information. Specifically, SSA and GRPE consider the well-converged dependency distribution and the Gaussian heatmap distribution separately as the prior knowledge of self-attention and position embedding to ease the training of transformers and lay a solid foundation for the following pruning operation. Then, adaptive transformer pruning, both query-wise and dependency-wise, is performed by adjusting the gate control parameters for both complexity reduction and performance" +"---\nabstract: 'Coxeter groups possess an associative operation, called variously the Demazure, greedy, or $0$-Hecke product. For symmetric groups, this product has an amusing formulation as matrix multiplication in the min-plus (tropical) semiring of two matrices associated to the permutations. We prove that this min-plus formulation extends to furnish a Demazure product on a much larger group of integer permutations, consisting of all permutations that change the sign of finitely many integers. We prove several alternative descriptions of this product and some useful properties of it. 
These results were developed in service of future applications to Brill\u2013Noether theory of algebraic and tropical curves; the connection is surveyed in an appendix.'\naddress: 'Department of Mathematics and Statistics, Amherst College'\nauthor:\n- Nathan Pflueger\nbibliography:\n- '../template/main.bib'\ntitle: ' An extended Demazure product on integer permutations via min-plus matrix multiplication '\n---\n\nIntroduction\n============\n\nThe symmetric group $S_d$, and more generally any Coxeter group, possesses a useful associative operation $\\star$, variously called the *Demazure*, *$0$-Hecke*, or *greedy* product. If $\\alpha \\in S_d$ and $\\sigma_n$ is the simple transposition of $n$ and $n+1$, then $$\\label{eq:demazureBasic}\n\\alpha \\star \\sigma_n = \\begin{cases}\n\\alpha \\sigma_n & \\mbox{ if } \\alpha(n) < \\alpha(n+1),\\\\\n\\alpha & \\mbox{ otherwise.}" +"---\nabstract: 'Hypothesis-pruning maximizes the hypothesis updates for active learning to find those desired unlabeled data. An inherent assumption is that this learning manner can derive those updates into the optimal hypothesis. However, its convergence may not be guaranteed well if those incremental updates are negative and disordered. In this paper, we introduce a black-box teaching hypothesis $h^\\mathcal{T}$ employing a tighter slack term $\\left(1+\\mathcal{F}^{\\mathcal{T}}(\\widehat{h}_t)\\right)\\Delta_t$ to replace the typical $2\\Delta_t$ for pruning. 
Theoretically, we prove that, under the guidance of this teaching hypothesis, the learner can converge into a tighter generalization error and label complexity bound than those non-educated learners who do not receive any guidance from a teacher:1) the generalization error upper bound can be reduced from $R(h^*)+4\\Delta_{T-1}$ to approximately $R(h^{\\mathcal{T}})+2\\Delta_{T-1}$, and 2) the label complexity upper bound can be decreased from $4 \\theta\\left(TR(h^{*})+2O(\\sqrt{T})\\right)$ to approximately $2\\theta\\left(2TR(h^{\\mathcal{T}})+3 O(\\sqrt{T})\\right)$. To be strict with our assumption, self-improvement of teaching is firstly proposed when $h^\\mathcal{T}$ loosely approximates $h^*$. Against learning, we further consider two teaching scenarios: teaching a white-box and black-box learner. Experiments verify this idea and show better generalization performance than the fundamental active learning strategies, such as IWAL [@beygelzimer2009importance], IWAL-D [@cortes2019active], etc.'\nauthor:\n- |\n Xiaofeng Cao[^1] xiaofeng.cao.uts@gmail.com\\\n School of Artificial" +"---\nabstract: 'We propose a general deep architecture for learning functions on multiple permutation-invariant sets. We also show how to generalize this architecture to sets of elements of any dimension by dimension equivariance. We demonstrate that our architecture is a universal approximator of these functions, and show superior results to existing methods on a variety of tasks including counting tasks, alignment tasks, distinguishability tasks and statistical distance measurements. This last task is quite important in Machine Learning. Although our approach is quite general, we demonstrate that it can generate approximate estimates of KL divergence and mutual information that are more accurate than previous techniques that are specifically designed to approximate those statistical distances.'\nauthor:\n- '[Kira A. 
Selby](mailto:?Subject=Multi-Set Transformers (UAI))'\n- Ahmad Rashid\n- Ivan Kobyzev\n- Mehdi Rezagholizadeh\n- Pascal Poupart\nbibliography:\n- 'multiset.bib'\ntitle: 'Learning Functions on Multiple Sets using Multi-Set Transformers'\n---\n\nIntroduction\n============\n\nTypical deep learning algorithms are constrained to operate on either fixed-dimensional vectors or ordered sequences of such vectors. This is a limiting assumption which prevents the application of neural methods to many problems, particularly those involving sets. While some investigation has now been done into the problem of applying deep learning to functions" +"---\nabstract: 'Breast lesion detection in ultrasound is critical for breast cancer diagnosis. Existing methods mainly rely on individual 2D ultrasound images or combine unlabeled video and labeled 2D images to train models for breast lesion detection. In this paper, we first collect and annotate an ultrasound video dataset (188 videos) for breast lesion detection. Moreover, we propose a clip-level and video-level feature aggregated network (CVA-Net) for addressing breast lesion detection in ultrasound videos by aggregating video-level lesion classification features and clip-level temporal features. The clip-level temporal features encode local temporal information of ordered video frames and global temporal information of shuffled video frames. In our CVA-Net, an inter-video fusion module is devised to fuse local features from original video frames and global features from shuffled video frames, and an intra-video fusion module is devised to learn the temporal information among adjacent video frames. Moreover, we learn video-level features to classify the breast lesions of the original video as benign or malignant lesions to further enhance the final breast lesion detection performance in ultrasound videos. Experimental results on our annotated dataset demonstrate that our CVA-Net clearly outperforms state-of-the-art methods. 
The corresponding code and dataset are publicly available at .'\nauthor:" +"---\nabstract: 'Growing evidence suggests that YouTube\u2019s recommendation algorithm plays a role in online radicalization via surfacing extreme content. Radical Islamist groups, in particular, have been profiting from the global appeal of YouTube to disseminate hate and jihadist propaganda. In this quantitative, data-driven study, we investigate the prevalence of religiously intolerant Arabic YouTube videos, the tendency of the platform to recommend such videos, and how these recommendations are affected by demographics and watch history. Based on our deep learning classifier developed to detect hateful videos and a large-scale dataset of over 350K videos, we find that Arabic videos targeting religious minorities are particularly prevalent in search results (30%) and first-level recommendations (21%), and that 15% of overall captured recommendations point to hateful videos. Our personalized audit experiments suggest that gender and religious identity can substantially affect the extent of exposure to hateful content. Our results contribute vital insights into the phenomenon of online radicalization and facilitate curbing online harmful content.'\nauthor:\n- Nuha Albadi\n- Maram Kurdi\n- Shivakant Mishra\nbibliography:\n- 'sample-base.bib'\ntitle: 'Deradicalizing YouTube: Characterization, Detection, and Personalization of Religiously Intolerant Arabic Videos'\n---\n\n<ccs2012> <concept> <concept\\_id>10003456.10003462.10003480.10003482</concept\\_id> <concept\\_desc>Social and professional topics\u00a0Hate speech</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> <concept> <concept\\_id>10003120.10003130.10011762</concept\\_id> <concept\\_desc>Human-centered" +"---\nabstract: |\n We devise constant-factor approximation algorithms for finding as many disjoint cycles as possible from a certain family of cycles in a given planar or bounded-genus graph. 
Here disjoint can mean vertex-disjoint or edge-disjoint, and the graph can be undirected or directed. The family of cycles under consideration must satisfy two properties: it must be uncrossable and allow for an oracle access that finds a weight-minimal cycle in that family for given nonnegative edge weights or (in planar graphs) the union of all remaining cycles in that family after deleting a given subset of edges.\n\n Our setting generalizes many problems that were studied separately in the past. For example, three families that satisfy the above properties are (i) all cycles in a directed or undirected graph, (ii) all odd cycles in an undirected graph, and (iii) all cycles in an undirected graph that contain precisely one demand edge, where the demand edges form a subset of the edge set. The latter family (iii) corresponds to the classical disjoint paths problem in fully planar and bounded-genus instances. While constant-factor approximation algorithms were known for edge-disjoint paths in such instances, we improve the constant in the planar case and obtain" +"---\nabstract: 'Strong measurements usually restrict the dynamics of measured finite dimensional systems to the Zeno subspace, where subsequent evolution is unitary due to the suppression of dissipative terms. Here we show qualitatively different behaviour due to the competition between strong measurements and the thermodynamic limit, inducing a time-translation symmetry breaking phase transition resulting in a continuous time crystal. We consider a spin star model, where the central spin is subject to a strong continuous measurement, and qualify the dynamic behaviour of the system in various parameter regimes. We show that above a critical value of measurement strength, the magnetization of the thermodynamically large ancilla spins develops limit cycle oscillations. 
Our result also demonstrates that a coherent drive is not necessary in order to induce continuous time-translation symmetry breaking.'\nauthor:\n- Midhun Krishna\n- Parvinder Solanki\n- Michal Hajdu\u0161ek\n- Sai Vinjanampathy\ntitle: Measurement Induced Continuous Time Crystals\n---\n\n*Introduction.\u2014* Quantum measurements are a central aspect of open quantum system dynamics with applications to quantum computing [@raussendorf2001oneway], sensing [@degen2017quantum], communications [@khatri2020principles] and cryptography [@gisin2002quantum]. Since measurements are non-unitary and act on a timescale different than Hamiltonian evolution, novel effects such as weak-value amplification [@aharonov1988result], quantum Zeno [@misra1977zeno] and anti-Zeno [@antizeno;" +"---\nabstract: 'We study the definability of convex valuations on ordered fields, with a particular focus on the distinguished subclass of henselian valuations. In the setting of ordered fields, one can consider definability both in the language of rings ${\\mathcal{L}_{\\mathrm{r}}}$ and in the richer language of ordered rings ${\\mathcal{L}_{\\mathrm{or}}}$. We analyse and compare definability in both languages and show the following contrary results: while there are *convex* valuations that are definable in the language ${\\mathcal{L}_{\\mathrm{or}}}$ but not in the language ${\\mathcal{L}_{\\mathrm{r}}}$, any ${\\mathcal{L}_{\\mathrm{or}}}$-definable *henselian* valuation is already ${\\mathcal{L}_{\\mathrm{r}}}$-definable. To prove the latter, we show that the value group and the *ordered* residue field of an ordered henselian valued field are stably embedded (as an ordered abelian group, respectively as an ordered field). 
Moreover, we show that in almost real closed fields *any* ${\\mathcal{L}_{\\mathrm{or}}}$-definable valuation is henselian.'\naddress:\n- |\n Institut f\u00fcr Algebra\\\n Fachrichtung Mathematik\\\n TU Dresden\\\n 01062 Dresden, Germany\n- |\n Mathematisches Institut\\\n Fachbereich Mathematik und Informatik\\\n Universit\u00e4t M\u00fcnster\\\n Einsteinstra\u00dfe 62\\\n 48149 M\u00fcnster, Germany\n- |\n Fachbereich Mathematik und Statistik\\\n Universit\u00e4t Konstanz\\\n 78457 Konstanz, Germany\n- |\n Fachbereich Mathematik und Statistik\\\n Universit\u00e4t Konstanz\\\n 78457 Konstanz, Germany\nauthor:\n- Philip Dittmann\n- Franziska Jahnke\n- Lothar Sebastian Krapp\n- Salma Kuhlmann" +"---\nabstract: |\n In this paper we show that there are circumstances in which the damping of gravitational waves (GWs) propagating through a viscous fluid can be highly significant; in particular, this applies to Core Collapse Supernovae (CCSNe). In previous work, we used linearized perturbations on a fixed background within the Bondi-Sachs formalism, to determine the effect of a dust shell on GW propagation. Here, we start with the (previously found) velocity field of the matter, and use it to determine the shear tensor of the fluid flow. Then, for a viscous fluid, the energy dissipated is calculated, leading to an equation for GW damping. It is found that the damping effect agrees with previous results when the wavelength $\\lambda$ is much smaller than the radius $r_i$ of the matter shell; but if $\\lambda\\gg r_i$, then the damping effect is greatly increased.\n\n Next, the paper discusses an astrophysical application, CCSNe. There are several different physical processes that generate GWs, and many models have been presented in the literature. The damping effect thus needs to be evaluated with each of the parameters $\\lambda,r_i$ and the coefficient of shear viscosity $\\eta$, having a range of values. 
It is found that in most" +"---\nabstract: 'We investigate thermal transport in a serial asymmetric double quantum dot (DQD) coupled to two electron reservoirs with different temperatures. The inter- and intra-dot Coulomb interactions are taken into account in a Coulomb blockade DQD, where electron sequential tunneling is considered via four different master equation approaches. In the absence of Coulomb interactions, negligible thermoelectric and heat currents are found, identifying the Coulomb blockade DQD regime. In the presence of intra- and inter-dot Coulomb interactions, crossings between the energies of the intra- and inter-dot many-body electron states are observed. The crossings induce extra channels in the energy spectrum of the DQD that enhance the thermoelectric and heat currents. The extra channels form several peaks in the thermoelectric and heat currents, in which the intensity and position of the peaks depend on the strength of the inter- and intra-dot Coulomb interactions. In addition, the problem of coherences and incoherences is studied using different approaches to the master equation, namely the first-order von Neumann, the Redfield, a first-order Lindblad, and the Pauli methods. We find that all methods give almost identical thermal transport when the role of the coherences is irrelevant in the DQD.'\naddress:\n- 'Physics Department, College" +"---\nabstract: 'In this paper, we present an improved parameterization of the elastic scattering of spin-0 particles, which is based on a dispersive representation for the inverse scattering amplitude. Besides being based on well-known general principles, the requirement that the inverse amplitude should satisfy the dispersion relation significantly constrains its possible forms and has not been incorporated in the existing parameterizations so far. 
While the right-hand cut of the inverse scattering amplitude is controlled by unitarity, the contribution from the left-hand cut, which comes from the crossing symmetry, is commonly ignored or incorporated improperly. The latter is parameterized using the expansion in a suitably constructed conformal variable, which accounts for its analytic structure. The correct implementations of the Adler zero and threshold factors for angular momentum $J>0$ are discussed in detail as well. The amplitudes are written in a compact analytic form and provide a useful tool to analyze current and future lattice data in the elastic region improving upon the commonly used Breit-Wigner or K-matrix approaches.'\naddress:\n- 'Institut f\u00fcr Kernphysik & PRISMA$^+$ Cluster of Excellence, Johannes Gutenberg Universit\u00e4t, D-55099 Mainz, Germany'\n- 'Helmholtz Institut Mainz, D-55099 Mainz, Germany'\nauthor:\n- Igor Danilkin\n- Volodymyr Biloshytskyi\n- 'Xiu-Lei" +"---\nabstract: 'Within the framework of many-body perturbation theory (MBPT) integrated with density functional theory (DFT), a novel defect-subspace projection GW method, the so-called p-GW, is proposed. By avoiding the periodic defect interference through open boundary self-energies, we show that the p-GW can efficiently and accurately describe quasi-particle correlated defect levels in two-dimensional (2D) monolayer MoS$_2$. By comparing two different defect states originating from sulfur vacancy and adatom to existing theoretical and experimental works, we show that our GW correction to the DFT defect levels is precisely modelled. 
Based on these findings, we expect that our method can provide genuine trap states for various 2D transition-metal dichalcogenide (TMD) monolayers, thus enabling the study of defect-induced effects on the device characteristics of these materials *via* realistic simulations.'\nauthor:\n- |\n Guido Gandus, Youseung Lee, Leonard Deuschle, Mathieu Luisier\\\n Integrated Systems Laboratory\\\n ETH\\\n Z\u00fcrich, Switzerland\\\n Daniele Passerone\\\n nanotech@surfaces\\\n EMPA\\\n Z\u00fcrich, Switzerland\\\nbibliography:\n- 'refs.bib'\ntitle: 'Efficient and accurate defect level modelling in monolayer MoS$_2$ via GW+DFT with open boundary conditions '\n---\n\nIntroduction\n============\n\nThe physical dimension of Si logic transistors is approaching the atomic limit, thus requiring novel architectures and/or high-mobility channel materials for future technology nodes. Logic switches based on" +"---\nabstract: 'A new methodology for fundamental studies of radiation effects in solids is herein introduced by using a plasma Focused Ion Beam (PFIB). The classical example of ion-induced amorphization of single-crystalline pure Si is used as a proof-of-concept experiment that delineates the advantages and limitations of this new technique. We demonstrate both the feasibility and invention of a new ion irradiation mode consisting of irradiating a single-specimen in multiple areas, at multiple doses, in specific sites. This present methodology suggests a very precise control of the ion beam over the specimen, with an error in the flux on the order of only 1%. In addition, the proposed methodology allows the irradiation of specimens with higher dose rates when compared with conventional ion accelerators and implanters. 
This methodology is expected to open new research frontiers beyond the scope of materials at extremes such as in nanopatterning and nanodevices fabrication.'\naddress: 'Materials Science and Technology Division, Los Alamos National Laboratory, United States of America'\nauthor:\n- 'M.A. Tunes'\n- 'M.M. Schneider'\n- 'C.A. Taylor'\n- 'T.A. Saleh'\nbibliography:\n- 'bibdata.bib'\ntitle: '**A New Methodology for Radiation Effects Studies in Solids using the Plasma Focused Ion Beam**'\n---\n\nRadiation Effects: Fundamentals ,Ion" +"---\nabstract: |\n A robotic platform for mobile manipulation needs to satisfy two contradicting requirements for many real-world applications: A compact base is required to navigate through cluttered indoor environments, while the support needs to be large enough to prevent tumbling or tip over, especially during fast manipulation operations with heavy payloads or forceful interaction with the environment.\\\n This paper proposes a novel robot design that fulfills both requirements through a versatile footprint. It can reconfigure its footprint to a narrow configuration when navigating through tight spaces and to a wide stance when manipulating heavy objects. Furthermore, its triangular configuration allows for high-precision tasks on uneven ground by preventing support switches.\\\n A model predictive control strategy is presented that unifies planning and control for simultaneous navigation, reconfiguration, and manipulation. It converts task-space goals into whole-body motion plans for the new robot.\\\n The proposed design has been tested extensively with a hardware prototype. The footprint reconfiguration allows to almost completely remove manipulation induced vibrations. The control strategy proves effective in both lab experiment and during a real-world construction task.\nauthor:\n- 'Johannes Pankert, Giorgio Valsecchi, Davide Baret, Jon Zehnder, Lukasz L. 
Pietrasik, Marko Bjelonic, Marco Hutter [^1]'\nbibliography:\n- 'references.bib'\ntitle:" +"---\nabstract: 'Large-sample Bayesian analogs exist for many frequentist methods, but are less well-known for the widely-used \u2018sandwich\u2019 or \u2018robust\u2019 variance estimates. We review existing approaches to Bayesian analogs of sandwich variance estimates and propose a new analog, as the Bayes rule under a form of balanced loss function, that combines elements of standard parametric inference with fidelity of the data to the model. Our development is general, for essentially any regression setting with independent outcomes. Being the large-sample equivalent of its frequentist counterpart, we show by simulation that Bayesian robust standard error estimates can faithfully quantify the variability of parameter estimates even under model misspecification \u2013 thus retaining the major attraction of the original frequentist version. We demonstrate our Bayesian analog of standard error estimates when studying the association between age and systolic blood pressure in NHANES.'\naddress:\n- 'Department of Biostatistics, St. Jude Children\u2019s Research Hospital, Memphis, Tennessee, U.S.A..'\n- 'Department of Biostatistics, University of Washington, Seattle, Washington, U.S.A..'\nauthor:\n- \u00a0\n- \u00a0\nbibliography:\n- 'references.bib'\ntitle: 'A Bayesian \u2018sandwich\u2019 for variance estimation'\n---\n\nIntroduction\n============\n\nProbabilistic models, with a finite number of parameters that characterizes the data generating distribution, are a standard statistical tool. Frequentist methods view the" +"---\nabstract: 'Current efforts to build quantum computers focus mainly on the two-state qubit, which often involves suppressing readily-available higher states. In this work, we break this abstraction and synthesize short-duration control pulses for gates on generalized *d*-state qu*dits*. 
We present Incremental Pulse Re-seeding, a practical scheme to guide optimal control software to the lowest-duration pulse by iteratively seeding the optimizer with previous results. We find a near-linear relationship between Hilbert space dimension and gate duration through explicit pulse optimization for one- and two-qudit gates on transmons. Our results suggest that qudit operations are much more efficient than previously expected in the practical regime of interest and have the potential to significantly increase the computational power of current hardware.'\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: |\n Time-Efficient Qudit Gates through\\\n Incremental Pulse Re-seeding [^1] \n---\n\nquantum computing, qudit, quantum optimal control, pulse synthesis\n\nIntroduction\n============\n\nQuantum computing traditionally focuses mainly on the realization of noise-robust two-level systems, known as qubits. However, in many quantum architectures, each qubit is embedded in a much larger Hilbert space, with all other energy levels being ignored or suppressed. Qudits, the extension of qubits to $d$ levels, are a promising topic of study with the" +"---\nabstract: |\n We present a statistical model for German medical natural language processing trained for named entity recognition (NER) as an open, publicly available model. The work serves as a refined successor to our first GERNERMED model which is substantially outperformed by our work. We demonstrate the effectiveness of combining multiple techniques in order to achieve strong results in entity recognition performance by the means of transfer-learning on pretrained deep language models (LM), word-alignment and neural machine translation. Due to the sparse situation on open, public medical entity recognition models for German texts, this work offers benefits to the German research community on medical NLP as a baseline model. 
Since our model is based on public English data, its weights are provided without legal restrictions on usage and distribution. The sample code and the statistical model are available at:\\\n \nauthor:\n- Johann Frei\n- 'Ludwig Frei-Stuber'\n- Frank Kramer\ntitle: 'GERNERMED++: Robust German Medical NLP through Transfer Learning, Translation and Word Alignment'\n---\n\nIntroduction\n============\n\nExtraction and processing of key information from medical notes and doctors\u2019 letters pose a common challenge in advanced digitization of health care systems. In particular, research-oriented data mining of non-research-centric data sources (often
Furthermore, we showcase the techniques in various application settings, numerically simulating quantum computer-based neural networks as well as classical neural networks.'\nauthor:\n- Niraj Kumar\n- Evan Philip\n- 'Vincent E. Elfving'\nbibliography:\n- 'references.bib'\ntitle: 'Integral Transforms in a Physics-Informed (Quantum) Neural Network setting: Applications & Use-Cases'" +"---\nabstract: 'Dielectric loss is known to limit state-of-the-art superconducting qubit lifetimes. Recent experiments imply upper bounds on bulk dielectric loss tangents on the order of 100 parts-per-billion, but because these inferences are drawn from fully fabricated devices with many loss channels, these experiments do not definitively implicate or exonerate the dielectric. To resolve this ambiguity, we devise a measurement method capable of separating and resolving bulk dielectric loss with a sensitivity at the level of $5\\times10^{-9}$. The method, which we call the dielectric dipper, involves the *in situ* insertion of a dielectric sample into a high-quality microwave cavity mode. Smoothly varying the sample participation in the cavity mode enables a differential measurement of the sample\u2019s dielectric loss tangent. The dielectric dipper can probe the low-power behavior of dielectrics at cryogenic temperatures and does so without the need for any lithographic process, enabling controlled comparisons of substrate materials and processing techniques. We demonstrate the method with measurements of sapphire grown by edge-defined film-fed growth (EFG) in comparison to high-grade sapphire grown by the heat-exchanger method (HEMEX). For EFG sapphire we infer a bulk loss tangent of $63(8)\\times10^{-9}$ and a substrate-air interface loss tangent of $15(3)\\times10^{-4}$\u00a0(assuming a sample surface thickness" +"---\nabstract: 'In this work, we address the task of converting SDR videos to HDR videos (SDRTV-to-HDRTV conversion). 
Previous approaches use global feature modulation for SDRTV-to-HDRTV conversion. Feature modulation scales and shifts the features in the original feature space, which has limited mapping capability. In addition, the global image mapping cannot restore detail in HDR frames due to the luminance differences in different regions of SDR frames. To address these issues, we propose a two-stage solution. The first stage is a hierarchical Dynamic Context feature mapping (HDCFM) model. HDCFM learns the SDR frame to HDR frame mapping function via a hierarchical feature modulation (HME and HM) module and a dynamic context **feature transformation** (DYCT) module. The HME estimates the feature modulation vector; the HM performs hierarchical feature modulation, consisting of global feature modulation in series with local feature modulation, and is capable of adaptive mapping of local image features. The DYCT module constructs a feature transformation module in conjunction with the context, which is capable of adaptively generating a feature transformation matrix for feature mapping. Compared with simple feature scaling and shifting, the DYCT module can map features into a new feature space and thus has a stronger feature mapping" +"---\nabstract: 'This study experimentally investigated the influence of heat transfer from a heated hydrofoil on cavitating flow to understand the evaporation phenomenon under high-heat-flux and high-speed conditions. A temperature difference was generated between the hydrofoil and the mainstream by installing an aluminum nitride heater in a NACA0015 hydrofoil fabricated from copper. A cavitation experiment was performed in a high-temperature water cavitation tunnel at the Institute of Fluid Science, Tohoku University. The effect of heating on cavitating flow was evaluated by changing the mainstream velocity and pressure, namely the cavitation number, at a fixed heater power of 860 W. 
Results showed that the heat transfer from the hydrofoil affected cavitating flow in terms of the cavity length, cavity aspect, and periodicity. The effect on the cavity length became stronger at a lower velocity owing to a higher hydrofoil temperature. The variation in periodicity implied that the heating effect reduced the unsteadiness of cavitation. A modified cavitation number was proposed by considering the heat transfer from the heated wall. A thermal correction term was derived by considering that the fluid temperature close to the heated hydrofoil was affected by the turbulent convective heat transfer between the mainstream and hydrofoil. The corrected cavitation" +"---\nabstract: 'Although wireless and IP-based access to video content gives a new degree of freedom to the viewers, the risk of severe block losses caused by transmission errors is always present. The purpose of this paper is to present a new method for concealing block losses in erroneously received video sequences. For this, a motion compensated data set is generated around the lost block. Based on this aligned data set, a model of the signal is created that continues the signal into the lost areas. Since spatial as well as temporal informations are used for the model generation, the proposed method is superior to methods that use either spatial or temporal information for concealment. Furthermore it outperforms current state of the art spatio-temporal concealment algorithms by up to 1.4 dB in PSNR.'\naddress: |\n Chair of Multimedia Communications and Signal Processing,\\\n University of Erlangen-Nuremberg, Cauerstr. 
7, 91058 Erlangen, Germany\\\n [{seiler, kaup}@LNT.de]{}\ntitle: Motion Compensated Frequency Selective Extrapolation for Error Concealment in Video Coding\n---\n\nIntroduction\n============\n\nDue to modern video codecs and increased computational power, the transmission and processing of video signals in wireless environments or IP-based access to videos became more and more usual in the past years." +"---\naddress:\n- ', , '\n- ', , '\n- ', , '\n- ', , '\n- ', , '\n- ', , '\n- ', '\n- ', '\n- ', '\n- ', , '\n- ', , '\n- ', , '\nauthor:\n- Ibrahim Aldulijan\n- Jacob Beal\n- Sonja Billerbeck\n- Jeff Bouffard\n- Ga\u00ebl Chambonnier\n- Nikolaos Delkis\n- Isaac Guerreiro\n- Martin Holub\n- Paul Ross\n- Vinoo Selvarajah\n- Noah Sprent\n- Gonzalo Vidal\n- Alejandro Vignoni\ntitle: Functional Synthetic Biology\n---\n\nIntroduction\n============\n\nOver the past decade, synthetic biologists have made great strides in the engineering of biological systems. One vision that has served as something of a roadmap is the model enunciated by Endy\u00a0[@Endy05], a model comprising four levels of increasing abstraction: DNA, Parts, Devices, and Systems. Consistent with this model, the field has developed a plethora of basic parts such as promoters, terminators, coding sequences, and functional RNAs, which can be combined into composite DNA sequences through a variety of assembly methods (e.g., \u00a0[@shetty2011assembly; @gibson2009enzymatic; @Engler2008GoldenGate; @Weber2011MoClo; @kok2014rapid]) or low-cost nucleic acid synthesis\u00a0[@CarlsonCurve]. At higher levels of abstraction, the field has produced families of biological" +"---\nabstract: |\n The emerging interlaced magnetic recording (IMR) technology achieves a higher areal density for hard disk drive (HDD) over the conventional magnetic recording (CMR) technology. IMR-based HDD interlaces top tracks and bottom tracks, where each bottom track is overlapped with two neighboring top tracks. 
Thus, top tracks can be updated without restraint, whereas bottom tracks can be updated by the time-consuming read-modify-write (RMW) or other novel update strategies. Therefore, the track layout of an IMR-based HDD differs considerably from that of a CMR-based HDD. Unfortunately, no related disk simulator or product has been available to the public, which motivates us to develop an open-source IMR disk simulator to provide a platform for further research.\n\n We implement the first public IMR disk simulator, called IMRSim, as a block device driver in the Linux kernel, simulating the interlaced tracks and implementing many state-of-the-art data placement strategies. IMRSim is built on an actual CMR-based HDD to precisely simulate the I/O performance of IMR drives. While I/O operations in a CMR-based HDD are easy to visualize, the update strategy and multi-stage allocation strategy in IMR are inherently dynamic. Therefore, we further graphically demonstrate how IMRSim processes I/O requests in the visualization mode." +"---\nabstract: 'Understanding vision and language representations of product content is vital for search and recommendation applications in e-commerce. As a backbone for online shopping platforms and inspired by the recent success in representation learning research, we propose a contrastive learning framework that aligns language and visual models using unlabeled raw product text and images. We present techniques we used to train large-scale representation learning models and share solutions that address domain-specific challenges. We study the performance using our pre-trained model as the backbone for diverse downstream tasks, including category classification, attribute extraction, product matching, product clustering, and adult product recognition. 
Experimental results show that our proposed method outperforms the baseline in each downstream task regarding both single modality and multiple modalities.'\nauthor:\n- Wonyoung Shin\n- Jonghun Park\n- Taekang Woo\n- Yongwoo Cho\n- Kwangjin Oh\n- Hwanjun Song\nbibliography:\n- 'bibliography.bib'\ntitle: 'e-CLIP: Large-Scale Vision-Language Representation Learning in E-commerce '\n---\n\n<ccs2012> <concept> <concept\\_id>10010405.10003550</concept\\_id> <concept\\_desc>Applied computing\u00a0Electronic commerce</concept\\_desc> <concept\\_significance>500</concept\\_significance> </concept> </ccs2012>\n\nIntroduction\n============\n\n![image](figures/naver_shopping.pdf){width=\"12.9cm\"}\n\nProduct search engines and recommendation systems satisfy millions of users daily and facilitate billion-level transaction records\u00a0[@sorokina2016amazon; @liu2017cascade; @zhang2020towards]. As shown in Figure\u00a0\\[fig:naver\\_shopping\\], a diverse set of unimodal and multimodal tasks, including category" +"---\nabstract: |\n We prove that the open subvariety $\\operatorname{Gr}_0(3,8)$ of the Grassmannian $\\operatorname{Gr}(3,8)$ determined by the nonvanishing of all Pl\u00fccker coordinates is sch\u00f6n, i.e., all of its initial degenerations are smooth. Furthermore, we find an initial degeneration that has two connected components, and show that the remaining initial degenerations, up to symmetry, are irreducible. As an application, we prove that the Chow quotient of $\\operatorname{Gr}(3,8)$ by the diagonal torus of $\\operatorname{PGL}(8)$ is the log canonical compactification of the moduli space of $8$ lines in ${{\\mathbb{P}}}^2$, resolving a conjecture of Hacking, Keel, and Tevelev. 
Along the way we develop various techniques to study finite inverse limits of schemes.\n\n **MSC 2020**: 14T90 (primary), 05E14, 14C05, 52B40 (secondary)\n\n **Keywords**: Chow quotient, Grassmannian, matroid, tight span\nauthor:\n- Daniel Corey\n- Dante Luber\nbibliography:\n- 'bibliographie.bib'\ntitle: 'The Grassmannian of $3$-planes in ${{\\mathbb{C}}}^{8}$ is sch\u00f6n'\n---\n\nIntroduction\n============\n\nA closed subvariety of an algebraic torus is *sch\u00f6n* if all of its initial degenerations\u2014flat degenerations arising via Gr\u00f6bner theory\u2014are smooth. This notion was introduced by Tevelev in his influential paper [@Tevelev], and admits this characterization by Helm and Katz [@HelmKatz]. Sch\u00f6n subvarieties of tori satisfy many desirable properties. Their tropical compactifications are sch\u00f6n compactifications," +"---\nabstract: 'For an integer $k\\ge 2$, a *$k$-community structure* in an undirected graph is a partition of its vertex set into $k$ sets called *communities*, each of size at least two, such that every vertex of the graph has proportionally at least as many neighbours in its own community as in any other community. In this paper, we give a necessary and sufficient condition for a forest on $n$ vertices to admit a $k$-community structure. Furthermore, we provide an $\\mathcal{O}(k^2\\cdot n^{2})$-time algorithm that computes such a $k$-community structure in a forest, if it exists. These results extend a result of \\[Bazgan et al., Structural and algorithmic properties of $2$-community structure, Algorithmica, 80(6):1890-1908, 2018\\]. We also show that if communities are allowed to have size one, then every forest with $n \\geq k\\geq 2$ vertices admits a $k$-community structure that can be found in time $\\mathcal{O}(k^2 \\cdot n^{2})$. 
We then consider threshold graphs and show that every connected threshold graph admits a $2$-community structure if and only if it is not isomorphic to a star; also if such a $2$-community structure exists, we explain how to obtain it in linear time. We further describe an infinite family of disconnected threshold" +"---\nauthor:\n- 'A.\u00a0F.\u00a0Lanza'\ndate: 'Received XXX; accepted XXX'\ntitle: |\n Tidal excitation of autoresonant oscillations\\\n in stars with close-by planets\n---\n\n[Close-by planets may excite various kinds of oscillations in their host stars through their time-varying tidal potential.]{} [Magnetostrophic oscillations with a frequency much smaller than the stellar rotation frequency have recently been proposed to account for the spin-orbit commensurability observed in several planet-hosting stars. In principle, they can be resonantly excited in an isolated slender magnetic flux tube by a Fourier component of the time-varying tidal potential with a very low frequency in the reference frame rotating with the host. However, due to the weakness of such high-order tidal components, a mechanism is required to lock the oscillations in phase with the forcing for long time intervals ($10^{3}-10^{7}$\u00a0yr) in order to allow the oscillation amplitude to grow.]{} [We propose that the locking mechanism is an autoresonance produced by the non-linear dependence of the oscillation frequency on its amplitude. We suggest that the angular momentum loss rate is remarkably reduced in hosts entering autoresonance that contributes to maintain those systems in that regime for a long time. ]{} [We apply our model to a sample of" +"---\nabstract: 'Non-orthogonal multiple access (NOMA) has been a strong candidate to support massive connectivity in future wireless networks. In this regard, its implementation into cooperative relaying, named cooperative-NOMA (CNOMA), has received tremendous attention by researchers. 
However, most of the existing CNOMA studies have failed to address practical constraints since they assume ideal conditions. In particular, the error performance of CNOMA schemes with imperfections has not yet been investigated. In this letter, we provide an analytical framework for the error performance of CNOMA schemes under practical assumptions where we take into account an imperfect successive interference canceler (SIC), imperfect channel estimation (ICSI), and hardware impairments (HWI) at the transceivers. We derive bit error rate (BER) expressions in CNOMA schemes whether or not the direct links between the source and users exist, which is, to the best of the authors\u2019 knowledge, the first study in the open literature. For comparison, we also provide a BER expression for downlink NOMA with practical constraints, which has also not yet been given in the literature. The theoretical BER expressions are validated with computer simulations, where a perfect match is observed. Finally, we discuss the effects of the system parameters (e.g., power allocation, HWI level) on the performance of CNOMA schemes to reveal" +"---\nabstract: 'Objective: Transformers, born to remedy the inadequate receptive fields of CNNs, have drawn explosive attention recently. However, the daunting computational complexity of global representation learning, together with rigid window partitioning, hinders their deployment in medical image segmentation. This work aims to address the above two issues in transformers for better medical image segmentation. Methods: We propose a boundary-aware lightweight transformer (BATFormer) that can build cross-scale global interaction with lower computational complexity and generate windows flexibly under the guidance of entropy. 
Specifically, to fully explore the benefits of transformers in long-range dependency establishment, a cross-scale global transformer (CGT) module is introduced to jointly utilize multiple small-scale feature maps for richer global features with lower computational complexity. Given the importance of shape modeling in medical image segmentation, a boundary-aware local transformer (BLT) module is constructed. Different from rigid window partitioning in vanilla transformers which would produce boundary distortion, BLT adopts an adaptive window partitioning scheme under the guidance of entropy for both computational complexity reduction and shape preservation. Results: BATFormer achieves the best performance in Dice of 92.84$\\%$, 91.97$\\%$, 90.26$\\%$, and 96.30$\\%$ for the average, right ventricle, myocardium, and left ventricle respectively on the ACDC dataset and the best performance" +"---\nabstract: 'We consider an explanation of the CDF II W boson mass anomaly by $Z-Z''$ mixing with $U(1)_R$ gauge symmetry under which right-handed fermions are charged. It is found that $U(1)_R$ is preferred to be leptophobic to accommodate the anomaly while avoiding other experimental constraints. In such a case we require extra charged leptons to cancel quantum anomalies, and the SM charged leptons get masses via interactions with the extra ones. These interactions also induce muon $g-2$ and lepton flavor violations. We discuss muon $g-2$, possible flavor constraints, neutrino mass generation via the inverse seesaw mechanism, and collider physics regarding $Z''$ production for the parameter space explaining the W boson mass anomaly.'\nauthor:\n- 'Keiko I. 
Nagao'\n- Takaaki Nomura\n- Hiroshi Okada\ntitle: |\n An alternative gauged $U(1)_R$ symmetric model\\\n in light of the CDF II $W$ boson mass anomaly\n---\n\n[APCTP Pre2022 - 010]{}\n\nIntroduction\n============\n\nPrecision measurements of electroweak observable are good test of the standard model(SM) and would provide a hint of beyond the SM. CDFII collaboration recently reported updated result of the SM charged-gauge boson (W boson) mass\u00a0[@CDF:2022hxs] $$\\begin{aligned}\nm_W = (80.433\\pm 0.0064_{\\rm stat}\\pm 0.0069_{\\rm syst})\\ {\\rm GeV},\n\\label{eq:Wmass}\\end{aligned}$$ which deviates from the SM prediction by" +"---\nabstract: 'Privacy and interpretability are two important ingredients for achieving trustworthy machine learning. We study the interplay of these two aspects in graph machine learning through graph reconstruction attacks. The goal of the adversary here is to reconstruct the graph structure of the training data given access to model explanations. Based on the different kinds of auxiliary information available to the adversary, we propose several graph reconstruction attacks. We show that additional knowledge of post-hoc feature explanations substantially increases the success rate of these attacks. Further, we investigate in detail the differences between attack performance with respect to three different classes of explanation methods for graph neural networks: gradient-based, perturbation-based, and surrogate model-based methods. While gradient-based explanations reveal the most in terms of the graph structure, we find that these explanations do not always score high in utility. For the other two classes of explanations, privacy leakage increases with an increase in explanation utility. Finally, we propose a defense based on a randomized response mechanism for releasing the explanations, which substantially reduces the attack success rate. Our code is available at .'\nauthor:\n- 'Iyiola E. 
Olatunji'\n- Mandeep Rathee\n- Thorben Funke\n- Megha Khosla\nbibliography:\n- 'main.bib'" +"---\nabstract: 'For many wireless communication applications, traffic pattern modeling of radio signals combined with channel effects is much needed. While analytical models are used to capture these phenomena, real world non-linear effects (e.g. device responses, interferences, distortions, noise) and especially the combination of such effects can be difficult to capture by these models. This is simply due to their complexity and degrees of freedom which can be hard to explicitize in compact expressions. In this paper, we propose a more model-free approach to jointly approximate an end-to-end black-boxed wireless communication scenario using software-defined radio platforms and optimize for an efficient synthesis of subsequently similar \u201cpseudo-radio-signals\u201d. More precisely, we implement a generative adversarial network based solution that automatically learns radio properties from recorded prototypes in specific scenarios. This allows for a high degree of expressive freedom. Numerical results show that the prototypes\u2019 traffic patterns jointly with channel effects are learned without the introduction of assumptions about the scenario or the simplification to a parametric model.'\nauthor:\n- |\n Haythem Chaker,\u00a0 Soumaya Hamouda,\u00a0\\\n and Nicola Michailow [^1]\nbibliography:\n- 'IEEEabrv.bib'\n- 'GAN\\_bib.bib'\ntitle: 'Generative Adversarial Networks for Pseudo-Radio-Signal Synthesis'\n---\n\n=1\n\nDeep Learning, Generative Adversarial Networks, Over-The-Air Learning, Software-Defined Radio" +"---\nabstract: 'We study the behavior of cylindrical objects as they sink into a dry granular bed fluidized due to lateral oscillations. 
Somewhat unexpectedly, we have found that, within a large range of lateral shaking powers, cylinders with flat bottoms sink vertically, while those with a foundation consisting of a shallow ring attached to their bottom tilt besides sinking. The latter scenario seems to dominate independently of the nature of the foundation when strong enough lateral vibrations are applied. We are able to explain the observed behavior by quasi-2D numerical simulations, which also demonstrate the influence of the intruder\u2019s aspect ratio. The vertical sink dynamics is explained with the help of a Newtonian equation of motion for the intruder. Our findings may shed light on the behavior of buildings and other man-made constructions during earthquakes.'\nauthor:\n- 'L. Alonso-Llanes'\n- 'G. S\u00e1nchez-Colina'\n- 'A. J. Batista-Leyva'\n- 'C. Cl\u00e9ment'\n- 'E. Altshuler'\n- 'R. Toussaint'\nbibliography:\n- 'Phys-Rev-E.bib'\ntitle: |\n Sink vs. tilt penetration into shaken dry granular matter:\\\n the role of foundation\n---\n\nIntroduction\n============\n\nThe Kocaeli earthquake that occurred on August 17, 1999 affected in various ways many constructions in the city of Adapazari, Turkey. According to observers, some buildings sank
We focus on the particularly challenging situation where the past dynamical state time series that is available for ML training predominantly lies in a restricted region of the state space, while the behavior to be predicted evolves on a larger state space set not fully observed by the ML model during training. In this situation, it is required that the ML prediction system have the ability to extrapolate to different dynamics past that which is observed during training. We investigate the extent to which ML methods are capable of accomplishing useful results for this task, as well as conditions under which they fail. In general, we found that the ML methods were surprisingly effective even in situations that were extremely challenging, but do (as one would expect) fail" +"---\nabstract: 'We investigate preprocessing for vertex-subset problems on graphs. While the notion of kernelization, originating in parameterized complexity theory, is a formalization of provably effective preprocessing aimed at reducing the total instance size, our focus is on finding a non-empty vertex set that belongs to an optimal solution. This decreases the size of the remaining part of the solution which still has to be found, and therefore shrinks the search space of fixed-parameter tractable algorithms for parameterizations based on the solution size. We introduce the notion of a $c$-essential vertex as one that is contained in all $c$-approximate solutions. For several classic combinatorial problems such as Odd Cycle Transversal and Directed Feedback Vertex Set, we show that under mild conditions a polynomial-time preprocessing algorithm can find a subset of an optimal solution that contains all 2-essential vertices, by exploiting packing/covering duality. This leads to FPT algorithms to solve these problems where the exponential term in the running time depends only on the number of *non-essential* vertices in the solution.'\nauthor:\n- Benjamin Merlin Bumpus\n- 'Bart M.P. 
Jansen'\n- 'Jari J.H. de Kroon'\nbibliography:\n- 'biblio.bib'\ntitle: 'Search-Space Reduction via Essential Vertices'\n---\n\nIntroduction {#sec:intro}\n============\n\n#####" +"---\nabstract: 'k-fac is a successful tractable implementation of Natural Gradient for Deep Learning, which nevertheless suffers from the requirement to compute the inverse of the Kronecker factors (through an eigen-decomposition). This can be very time-consuming (or even prohibitive) when these factors are large. In this paper, we theoretically show that, owing to the exponential-average construction paradigm of the Kronecker factors that is typically used, their eigen-spectrum must decay. We show numerically that in practice this decay is very rapid, leading to the idea that we could save substantial computation by only focusing on the first few eigen-modes when inverting the Kronecker-factors. Importantly, the spectrum decay happens over a constant number of modes irrespectively of the layer width. This allows us to reduce the time complexity of k-fac from cubic to quadratic in layer width, partially closing the gap w.r.t.\u00a0seng (another practical Natural Gradient implementation for Deep learning which scales linearly in width). Randomized Numerical Linear Algebra provides us with the necessary tools to do so. Numerical results show we obtain $\\approx2.5\\times$ reduction in per-epoch time and $\\approx3.3\\times$ reduction in time to target accuracy. We compare our proposed k-fac sped-up versions seng, and observe" +"---\nabstract: 'The Omnid human-collaborative mobile manipulators are an experimental platform for testing control architectures for autonomous and human-collaborative multirobot mobile manipulation. 
An Omnid consists of a mecanum-wheel omnidirectional mobile base and a series-elastic Delta-type parallel manipulator, and it is a specific implementation of a broader class of mobile collaborative robots (\u201cmocobots\u201d) suitable for safe human co-manipulation of delicate, flexible, and articulated payloads. Key features of mocobots include passive compliance, for the safety of the human and the payload, and high-fidelity end-effector force control independent of the potentially imprecise motions of the mobile base. We describe general considerations for the design of teams of mocobots; the design of the Omnids in light of these considerations; manipulator and mobile base controllers to achieve multirobot collaborative behaviors; and experiments in human-multirobot collaborative mobile manipulation of large and articulated payloads, where the mocobot team renders the payloads weightless for effortless human co-manipulation. In these experiments, the only communication among the humans and Omnids is mechanical, through the payload.'\nauthor:\n- 'Matthew L. Elwin,\u00a0, Billie Strong,\u00a0, Randy A. Freeman,\u00a0, and Kevin M. Lynch[^1],\u00a0[^2] [^3]'\nbibliography:\n- 'main.bib'\ntitle: 'Human-Multirobot Collaborative Mobile Manipulation: the Omnid Mocobots'\n---\n\nswarm robotics, human-robot collaborative" +"---\nabstract: 'In the topological dynamical system $(X,T)$, a point $x$ simultaneously approximates a point $y$ if there exists a sequence $n_1$, $n_2$, \u2026of natural numbers for which $T^{n_i} x$, $T^{2n_i}x$, \u2026, $T^{k n_i} x$ all tend to $y$. In 1978, Furstenberg and Weiss showed that every system possesses a point which simultaneously approximates itself (a multiply recurrent point) and deduced refinements of van der Waerden\u2019s theorem on arithmetic progressions. In this paper, we study the denseness of the set of points that are simultaneously approximated by a given point. 
We show that in a minimal nilsystem, all points simultaneously approximate a $\\delta$-dense set of points under a necessarily restricted set of powers of $T$. We tie this theorem to the multiplicative combinatorial properties of return-time sets, showing that all nil-Bohr sets and typical return-time sets in a minimal system are multiplicatively thick in a coset of a multiplicative subsemigroup of the natural numbers. This yields an inhomogeneous multiple recurrence result that generalizes Furstenberg and Weiss\u2019 theorem and leads to new enhancements of van der Waerden\u2019s theorem. This work relies crucially on continuity in the prolongation relation (the closure of the orbit-closure relation) developed by Auslander, Akin, and Glasner; the" +"---\nauthor:\n- |\n C. Ducourant[^1], A. Krone-Martins, L. Galluccio, R. Teixeira, J.-F. Le Campion,\\\n E. Slezak, R. de Souza, P. Gavras, F. Mignard, J. Guiraud, W. Roux,\\\n S. Managau, D. Semeux, A. Blazere, A. Helmer, and D. Pourbaix\nbibliography:\n- 'cu4\\_bibliography.bib'\ndate: 'Received 15 April 2022; accepted 12 May 2022'\ntitle: '[*Gaia*]{}Data Release 3: Surface brightness profiles of galaxies and host galaxies of quasars. [^2]'\n---\n\n[Since July 2014, the [*Gaia*]{}space mission has been continuously scanning the sky and observing the extragalactic Universe with unprecedented spatial resolution in the optical domain ($\\sim$ 180 mas by the end of the mission). [*Gaia*]{}provides an opportunity to study the morphology of the galaxies of the local Universe (z<0.45) with much higher resolution than has ever been attained from the ground. It also allows us to provide the first morphological all-sky space catalogue of nearby galaxies and galaxies that host quasars in the visible spectrum.]{} [We present the Data Processing and Analysis Consortium CU4-Surface Brightness Profile fitting pipeline, which aims to recover the light profile of nearby galaxies and galaxies hosting quasars. 
]{} [The pipeline uses a direct model based on the Radon transform to measure the two-dimensional surface brightness profile of the" +"---\nabstract: 'Quantum emitters with a $\\Lambda$-type level structure enable numerous protocols and applications in quantum science and technology. Understanding and controlling their dynamics is, therefore, one of the central research topics in quantum optics. Here, we drive two-photon Rabi oscillations between the two ground states of cesium atoms and observe the associated oscillatory Raman gain and absorption that stems from the atom-mediated coherent photon exchange between the two drive fields. The atoms are efficiently and homogeneously coupled with the probe field by means of a nanofiber-based optical interface. We study the dependence of the two-photon Rabi frequency on the system parameters and observe Autler-Townes splitting in the probe transmission spectrum. Beyond shedding light on the fundamental processes underlying two-photon Rabi oscillations, our method could also be used to investigate (quantum) correlations between the two drive fields as well as the dynamical establishment of electromagnetically induced transparency.'\naddress:\n- '$^1$ Department of Physics, Humboldt-Universit\u00e4t zu Berlin, Unter den Linden 6, 10099 Berlin, Germany'\n- '$^2$ Institute of Physics, National Academy of Sciences of Ukraine, prospect Nauki 46, Kyiv-39, 03650, Ukraine'\nauthor:\n- 'Christian Liedl$^1$, Sebastian Pucher$^1$, Philipp Schneeweiss$^1$, Leonid P. Yatsenko$^2$, and Arno Rauschenbeutel$^1$'\nbibliography:\n- 'bibliography.bib'\ntitle:\n- 'Observation" +"---\nabstract: 'The electronic structure of condensed matter can be significantly affected by the electron-phonon interaction, which leads to important phenomena such as electrical resistance, superconductivity or the formation of polarons. This interaction is often neglected in band structure calculations, but can have a strong impact, e.g. 
on band gaps or optical spectra. Commonly used frameworks for electron-phonon energy corrections are the Allen-Heine-Cardona theory and the [[Fr\u00f6hlich\u00a0]{}]{}model. The latter accounts for a single longitudinal optical mode, a single parabolic electron band, and washes out atomic details. While it shows qualitative agreement with experiment for many polar materials, its simplicity should bring hard limits to its applicability in real materials. Improvements can be made by introducing a generalized version of the model, which takes into account anisotropic and degenerate electronic bands, and multiple phonon branches. In this work, we search for trends and outliers on over a thousand materials in existing databases of phonon and electron band structures. We use our results to identify the limits of applicability of the standard [[Fr\u00f6hlich\u00a0]{}]{}model by comparing to the generalized version, and by testing its basic hypothesis of a large radius for the polaronic wavefunction and the corresponding atomic displacement cloud (large" +"---\nabstract: 'Word embedding is a fundamental natural language processing task which can learn feature of words. However, most word embedding methods assign only one vector to a word, even if polysemous words have multi-senses. To address this limitation, we propose SememeWSD Synonym (SWSDS) model to assign a different vector to every sense of polysemous words with the help of word sense disambiguation (WSD) and synonym set in OpenHowNet. We use the SememeWSD model, an unsupervised word sense disambiguation model based on OpenHowNet, to do word sense disambiguation and annotate the polysemous word with sense id. Then, we obtain top 10 synonyms of the word sense from OpenHowNet and calculate the average vector of synonyms as the vector of the word sense. In experiments, We evaluate the SWSDS model on semantic similarity calculation with Gensim\u2019s wmdistance method. It achieves improvement of accuracy. 
We also examine the SememeWSD model on different BERT models to find the more effective model.'\nauthor:\n- Yangxi Zhou\n- 'Junping Du[^1]'\n- Zhe Xue\n- Ang Li\n- Zeli Guan\ntitle: Chinese Word Sense Embedding with SememeWSD and Synonym Set\n---\n\nIntroduction\n============\n\nThe purpose of word embedding is to learn the vector representation of a" +"---\nabstract: 'Given an $n$-dimensional stochastic process ${\\boldsymbol{X}}$ driven by $\\mathbb{P}$-Brownian motions and Poisson random measures, we seek the probability measure $\\mathbb{Q}$, with minimal relative entropy to $\\mathbb{P}$, such that the $\\mathbb{Q}$-expectations of some terminal and running costs are constrained. We prove existence and uniqueness of the optimal probability measure, derive the explicit form of the measure change, and characterise the optimal drift and compensator adjustments under the optimal measure. We provide an analytical solution for Value-at-Risk (quantile) constraints, discuss how to perturb a Brownian motion to have arbitrary variance, and show that pinned measures arise as a limiting case of optimal measures. 
The results are illustrated in a risk management setting \u2013 including an algorithm to simulate under the optimal measure \u2013 and explore an example where an agent seeks to answer the question: what dynamics are induced by a perturbation of the Value-at-Risk and the average time spent below a barrier on the reference process?'\naddress:\n- 'Department of Statistical Sciences, University of Toronto'\n- 'Department of Mathematics, King\u2019s College London'\n- 'Oxford-Man Institute of Quantitative Finance, University of Oxford'\nauthor:\n- \u00a0\n- \u00a0\n- \u00a0\nbibliography:\n- 'references.bib'\ntitle: 'Minimal Kullback-Leibler Divergence for Constrained L\u00e9vy-It\u00f4 Processes'\n---\n\n,\n\nIntroduction" +"---\nabstract: 'Despite the astonishing success of COVID-19 vaccines against the virus, a substantial proportion of the population is still hesitant to be vaccinated, undermining governmental efforts to control the virus. To address this problem, we need to understand the different factors giving rise to such a behavior, including social media discourses, news media propaganda, government responses, demographic and socioeconomic statuses, and COVID-19 statistics, etc. However, existing datasets fail to cover all these aspects, making it difficult to form a complete picture in inferencing about the problem of vaccine hesitancy. In this paper, we construct a multi-source, multi-modal, and multi-feature online-offline data repository `CoVaxNet`[^1]. We provide descriptive analyses and insights to illustrate critical patterns in `CoVaxNet`. 
Moreover, we propose a novel approach for connecting online and offline data so as to facilitate the inference tasks that exploit complementary information sources.'\nauthor:\n- 'Bohan Jiang^()^'\n- Paras Sheth\n- Baoxin Li\n- Huan Liu\nbibliography:\n- 'ref.bib'\ntitle: 'CoVaxNet: An Online-Offline Data Repository for COVID-19 Vaccine Hesitancy Research'\n---\n\nIntroduction {#intro}\n============\n\nThe COVID-19 pandemic has killed over six million people and infected 536 million globally as of mid-June, 2022[^2]. It has been pointed out that the FDA-authorized COVID-19 vaccines are" +"---\nabstract: 'Between 2010 and 2017 we have collected new optical and radar observations of the potentially hazardous asteroid (2102)\u00a0Tantalus from the ESO NTT and Danish telescopes at the La Silla Observatory and from the Arecibo planetary radar. The object appears to be nearly spherical, showing a low amplitude light-curve variation and limited large-scale features in the radar images. The spin-state is difficult to constrain with the available data; including a certain light-curve subset significantly changes the spin-state estimates, and the uncertainties on period determination are significant. Constraining any change in rotation rate was not possible, despite decades of observations. The convex lightcurve-inversion model, with rotational pole at $\\lambda=210\\pm41\\degr$ and $\\beta=-30\\pm35\\degr$, is more flattened than the two models reconstructed by including radar observations: with prograde ($\\lambda=36\\pm23\\degr$, $\\beta=30\\pm15\\degr$), and with retrograde rotation mode ($\\lambda=180\\pm24\\degr$, $\\beta=-30\\pm16\\degr$). Using data from WISE we were able to determine that the prograde model produces the best agreement in size determination between radar and thermophysical modelling. 
Radar measurements indicate possible variation in surface properties, suggesting one side might have lower radar albedo and be rougher at centimetre-to-decimetre scale than the other. However, further observations are needed to confirm this. Thermophysical analysis indicates a surface covered in" +"---\nabstract: 'Automated analysis of optical colonoscopy (OC) video frames (to assist endoscopists during OC) is challenging due to variations in color, lighting, texture, and specular reflections. Previous methods either remove some of these variations via preprocessing (making pipelines cumbersome) or add diverse training data with annotations (but expensive and time-consuming). We present CLTS-GAN, a new deep learning model that gives fine control over color, lighting, texture, and specular reflection synthesis for OC video frames. We show that adding these colonoscopy-specific augmentations to the training data can improve state-of-the-art polyp detection/segmentation methods as well as drive next generation of OC simulators for training medical students. The code and pre-trained models for CLTS-GAN are available on Computational Endoscopy Platform GitHub ().'\nauthor:\n- 'Shawn Mathew$^1$\\*'\n- 'Saad Nadeem$^2$\\*[[^1]]{}'\n- Arie Kaufman$^1$\n- 'Shawn Mathew$^1$\\*'\n- 'Saad Nadeem$^2$\\*[[^2]]{}'\n- Arie Kaufman$^1$\ntitle:\n- 'CLTS-GAN: Color-Lighting-Texture-Specular Reflection Augmentation for Colonoscopy'\n- 'CLTS-GAN: Color-Lighting-Texture-Specular Reflection Augmentation for Colonoscopy (Supplement)'\n---\n\nIntroduction\n============\n\nColorectal cancer is the fourth deadliest cancer. Polyps, anomalous protrusions on the colon wall, are precursors of colon cancer and are often screened and removed using optical colonoscopy (OC). 
During OC, variations in color, texture, lighting, specular reflections, and fluid motion make" +"---\nabstract: 'A vast array of (metastable) vacuum solutions arise from string compactifications, each leading to different 4-d laws of physics. The space of these solutions, known as the string landscape, allows for an environmental solution to the cosmological constant problem. We examine the possibility of an environmental solution to the gauge hierarchy problem. We argue that the landscape favors softly broken supersymmetric models over particle physics models containing quadratic divergences, such as the Standard Model. We present a scheme for computing relative probabilities for supersymmetric models to emerge from the landscape. The probabilities are related to the likelihood that the derived value of the weak scale lies within the Agrawal et al. (ABDS) allowed window of values leading to atoms as we know them. This then favors natural SUSY models over unnatural (SUSY and other) models via a computable probability measure.'\nbibliography:\n- 'prob.bib'\n---\n\nOU-HEP-220702\n\n[**Fine-tuned vs. natural supersymmetry:\\\nwhat does the string landscape predict?**]{}\\\n\n[Howard Baer$^{1}$, Vernon Barger$^2$, Dakotah Martinez$^1$ and Shadman Salam$^1$ ]{}\\\n\n[ *$^1$Homer L. Dodge Department of Physics and Astronomy,\\\nUniversity of Oklahoma, Norman, OK 73019, USA\\\n*]{} [ *$^2$Department of Physics, University of Wisconsin, Madison, WI 53706 USA\\\n*]{}\n\nIntroduction {#sec:intro}\n============\n\nSupersymmetry" +"---\nabstract: 'Let $\\ell$ be a prime number. We classify the subgroups $G$ of ${\\operatorname{Sp}}_4({\\mathbb{F}}_\\ell)$ and ${\\operatorname{GSp}}_4({\\mathbb{F}}_\\ell)$ that act irreducibly on ${\\mathbb{F}}_\\ell^4$, but such that every element of $G$ fixes an ${\\mathbb{F}}_\\ell$-vector subspace of dimension 1. 
We use this classification to prove that the local-global principle for isogenies of degree $\\ell$ between abelian surfaces over number fields holds in many cases \u2013 in particular, whenever the abelian surface has non-trivial endomorphisms and $\\ell$ is large enough with respect to the field of definition. Finally, we prove that there exist arbitrarily large primes $\\ell$ for which some abelian surface $A/{\\mathbb{Q}}$ fails the local-global principle for isogenies of degree $\\ell$.'\nauthor:\n- Davide Lombardo and Matteo Verzobio\nbibliography:\n- 'biblio.bib'\ntitle: 'On the local-global principle for isogenies of abelian surfaces'\n---\n\n[ ]{}\n\n[ ]{}\n\nIntroduction\n============\n\nLet $K$ be a number field and $A$ be an abelian variety over $K$. For all primes $v$ of $K$ we denote by $\\mathbb{F}_v$ the residue field at $v$, and \u2013 if $A$ has good reduction at $v$ \u2013 we write $A_v$ for the reduction of $A$ modulo $v$. If $A/K$ has some kind of global level structure (say, a $K$-rational isogeny or a $K$-rational" +"---\nabstract: 'Bistable mechanical vibration is observed in a cavity magnomechanical system, which consists of a microwave cavity mode, a magnon mode, and a mechanical vibration mode of a ferrimagnetic yttrium-iron-garnet (YIG) sphere. The bistability manifests itself in both the mechanical frequency and linewidth under a strong microwave drive field, which simultaneously activates three different kinds of nonlinearities, namely, magnetostriction, magnon self-Kerr, and magnon-phonon cross-Kerr nonlinearities. The magnon-phonon cross-Kerr nonlinearity is first predicted and measured in magnomechanics. The system enters a regime where Kerr-type nonlinearities strongly modify the conventional cavity magnomechanics that possesses only a [radiation-pressure-like]{} magnomechanical coupling. Three different kinds of nonlinearities are identified and distinguished in the experiment. 
Our work demonstrates a new mechanism for achieving mechanical bistability by combining magnetostriction and Kerr-type nonlinearities, and indicates that such Kerr-modified cavity magnomechanics provides a unique platform for studying many distinct nonlinearities in a single experiment.'\nauthor:\n- 'Rui-Chang Shen'\n- Jie Li\n- 'Zhi-Yuan Fan'\n- 'Yi-Pu Wang'\n- 'J. Q. You'\ntitle: 'Mechanical Bistability in Kerr-modified Cavity Magnomechanics'\n---\n\n*Introduction.\u2014*Bistability, or multistability, discontinuous jumps, and hysteresis are characteristic features of nonlinear systems. Bistability is a widespread phenomenon that exists in a variety of physical systems, e.g., optics\u00a0[@Walls1;" +"---\nabstract: 'A dominating set of a graph is a set of vertices such that every vertex not in the set has at least one neighbor in the set. The problem of counting dominating sets is \\#P-complete for chordal graphs but solvable in polynomial time for its subclass of interval graphs. The complexity status of the corresponding problem is still undetermined for directed path graphs, which are a well-known class of graphs that falls between chordal graphs and interval graphs. This paper reveals that the problem of counting dominating sets remains \\#P-complete for directed path graphs but a stricter constraint to rooted directed path graphs admits a polynomial-time solution.'\nauthor:\n- |\n Min-Sheng Lin[^1]\\\n Department of Electrical Engineering\\\n National Taipei University of Technology\\\n Taipei 106, Taiwan, ROC\\\nbibliography:\n- 'ms.bib'\ntitle: Counting Dominating Sets in Directed Path Graphs\n---\n\n[ ***Keywords \u2014*** ]{}[Algorithms; Dominating sets; Counting problem; Directed path graphs.]{}\n\nIntroduction\n============\n\nFor a graph $G$, a subset $S$ of vertices of $G$ is a *dominating set* (DS) if every vertex of $G$ not in $S$ is adjacent to a vertex in $S$. This paper concerns the problem of computing the number of DSs in a graph. 
The problem" +"---\nabstract: 'Multilingual Neural Machine Translation (MNMT) enables one system to translate sentences from multiple source languages to multiple target languages, greatly reducing deployment costs compared with conventional bilingual systems. The MNMT training benefit, however, is often limited to many-to-one directions. The model suffers from poor performance in one-to-many and many-to-many with zero-shot setup. To address this issue, this paper discusses how to practically build MNMT systems that serve arbitrary `X-Y` translation directions while leveraging multilinguality with a two-stage training strategy of pretraining and finetuning. Experimenting with the WMT\u201921 multilingual translation task, we demonstrate that our systems outperform the conventional baselines of direct bilingual models and pivot translation models for most directions, averagely giving +6.0 and +4.1 BLEU, without the need for architecture change or extra data collection. Moreover, we also examine our proposed approach in an extremely large-scale data setting to accommodate practical deployment scenarios.'\nauthor:\n- 'Akiko Eriguchi$^{*\\,\\dagger}$, Shufang Xie[^1] $^{\\ \\ddagger}$\u00a0, Tao Qin$^{\\ddagger}$,'\n- |\n Hany Hassan Awadalla$^{\\dagger}$\\\n $^{\\dagger}$Microsoft \u00a0\u00a0\u00a0$^{\\ddagger}$Microsoft Research Asia\\\n `{akikoe,shufxi,taoqin,hanyh}@microsoft.com`\nbibliography:\n- 'anthology.bib'\n- 'custom.bib'\ntitle: |\n Building Multilingual Machine Translation Systems\\\n That Serve Arbitrary *X-Y* Translations\n---\n\n=1\n\nIntroduction\n============\n\nMultilingual Neural Machine Translation (MNMT), which enables one system to serve translation" +"---\nabstract: 'Hybrid programming is gaining prominence, but, in practice, applications perform slower with it compared to the model. The most critical challenge to the parallel efficiency of applications is slow performance. 
MPI libraries have recently made significant strides on this front, but to exploit their capabilities, users must expose the communication parallelism in their applications. Recent studies show that MPI 4.0 provides users with new performance-oriented options to do so, but our evaluation of these new mechanisms shows that they pose several challenges. An alternative design is MPI Endpoints. In this paper, we present a comparison of the different designs from the perspective of MPI\u2019s end-users: domain scientists and application developers. We evaluate the mechanisms on metrics beyond performance such as usability, scope, and portability. Based on the lessons learned, we make a case for a future direction.'\nauthor:\n- \nbibliography:\n- 'bib/refsused.bib'\ntitle: Lessons Learned on MPI+Threads Communication\n---\n\nexascale MPI, MPI Endpoints, MPI+OpenMP, MPI+threads, MPI\\_THREAD\\_MULTIPLE, partitioned communication, network parallelism\n\nIntroduction {#sec:introduction}\n============\n\nThe hybrid model is gaining prominence over the traditional approach following the evolution of modern computing architectures. Over the last decade, the number of cores on a processor has grown disproportionately to the growth in" +"---\nabstract: 'The problem of beam alignment and tracking in high mobility scenarios such as high-speed railway (HSR) becomes extremely challenging, since large overhead cost and significant time delay are introduced for fast time-varying channel estimation. To tackle this challenge, we propose a learning-aided beam prediction scheme for HSR networks, which predicts the beam directions and the channel amplitudes within a period of future time with fine time granularity, using a group of observations. Concretely, we transform the problem of high-dimensional beam prediction into a two-stage task, i.e., a low-dimensional parameter estimation and a cascaded hybrid beamforming operation. 
In the first stage, the location and speed of a certain terminal are estimated by maximum likelihood criterion, and a data-driven data fusion module is designed to improve the final estimation accuracy and robustness. Then, the probable future beam directions and channel amplitudes are predicted, based on the HSR scenario priors including deterministic trajectory, motion model, and channel model. Furthermore, we incorporate a learnable non-linear mapping module into the overall beam prediction to allow non-linear tracks. Both of the proposed learnable modules are model-based and have a good interpretability. Compared to the existing beam management scheme, the proposed beam prediction has (near)" +"---\nabstract: 'The direct detection of gravitational waves opens the possibility to test general relativity and its alternatives in the strong field regime. Here we focus on the test of the existence of extra dimensions. The classification of gravitational waves in metric gravity theories according to their polarizations in higher-dimensional space-time and the possible observation of these polarizations in 3-dimensional subspace are discussed in this work. And we show that the difference in the response of gravitational waves in detectors with and without extra dimensions can serve as evidence for the existence of extra dimensions.'\nauthor:\n- 'Yu-Qiang Liu$^{a}$$^{b}$[^1]'\n- 'Yu-Qi Dong$^{a}$$^{b}$[^2]'\n- 'Yu-Xiao Liu$^{a}$$^{b}$[^3]'\nbibliography:\n- 'reference1.bib'\ntitle: 'Classification of Gravitational Waves in Higher-dimensional Space-time and Possibility of Observation'\n---\n\nintroduction\n============\n\nSince the gravitational waves of the collision of two black holes was first observed by Advanced Laser Interferometer Gravitational-wave Observatory (aLIGO) and Virgo collaborations in 2015 [@abbott2016observation; @abbott2016gw151226], we have entered the era of gravitational waves astronomy. 
So far, over a hundred gravitational wave events have been detected by LIGO and Virgo [@abbott2021gwtc]. Among these events, some mergers of binary neutron stars were also observed. The most famous one is GW170817 [@abbott2017gw170817], which is the first event" +"---\nabstract: 'We present an analysis of 10ks snapshot [*Chandra*]{}observations of 12 shocked post-starburst galaxies, which provide a window into the unresolved question of active galactic nuclei (AGN) activity in post-starburst galaxies and its role in the transition of galaxies from actively star forming to quiescence. While 7/12 galaxies have statistically significant detections (with 2 more marginal detections), the brightest only obtained 10 photons. Given the wide variety of hardness ratios in this sample, we chose to pursue a forward modeling approach to constrain the intrinsic luminosity and obscuration of these galaxies rather than stacking. We constrain intrinsic luminosity of obscured power-laws based on the total number of counts and spectral shape, itself mostly set by the obscuration, with hardness ratios consistent with the data. We also tested thermal models. While all the galaxies have power-law models consistent with their observations, a third of the galaxies are better fit as an obscured power-law and another third are better fit as thermal emission. If these post-starburst galaxies, early in their transition, contain AGN, then these are mostly confined to a lower obscuration ($n_H \\leq10^{23}$cm$^{-2}$) and lower luminosity ($L_{2-10\\,\\rm keV}\\leq10^{42}$ergs$^{-1}$). Two galaxies, however, are clearly best fit as significantly obscured AGN. At" +"---\nabstract: 'In this work, we propose a deep U-Net based model to tackle the challenging task of prostate cancer segmentation by aggressiveness in MRI based on weak scribble annotations. This model extends the size constraint loss proposed by Kervadec et al. 
[@kervadec_constrained-cnn_2019] in the context of multiclass detection and segmentation task. This model is of high clinical interest as it allows training on prostate biopsy samples and avoids time-consuming full annotation process. Performance is assessed on a private dataset (219 patients) where the full ground truth is available as well as on the ProstateX-2 challenge database, where only biopsy results at different localisations serve as reference. We show that we can approach the fully-supervised baseline in grading the lesions by using only 6.35% of voxels for training. We report a lesion-wise Cohen\u2019s kappa score of $0.29 \\pm 0.07$ for the weak model versus $0.32 \\pm 0.05$ for the baseline. We also report a kappa score ($0.276 \\pm 0.037$) on the ProstateX-2 challenge dataset with our weak U-Net trained on a combination of ProstateX-2 and our dataset, which is the highest reported value on this challenge dataset for a segmentation task to our knowledge.'\nauthor:\n- Audrey Duran\n- Gaspard" +"---\nabstract: |\n In many reacting flow systems, the thermo-chemical state-space is known or assumed to evolve close to a low-dimensional manifold (LDM). Various approaches are available to obtain those manifolds and subsequently express the original high-dimensional space with fewer parameterizing variables. Principal component analysis (PCA) is one of the dimensionality reduction methods that can be used to obtain LDMs. PCA does not make prior assumptions about the parameterizing variables and retrieves them empirically from the training data. In this paper, we show that PCA applied in local clusters of data (local PCA) is capable of detecting the intrinsic parameterization of the thermo-chemical state-space. We first demonstrate that utilizing three common combustion models of varying complexity: the Burke-Schumann model, the chemical equilibrium model and the homogeneous reactor. 
Parameterization of these models is known *a priori* which allows for benchmarking with the local PCA approach. We further extend the application of local PCA to a more challenging case of a turbulent non-premixed $n$-heptane/air jet flame for which the parameterization is no longer obvious. Our results suggest that meaningful parameterization can be obtained also for more complex datasets. We show that local PCA finds variables that can be linked to local stoichiometry," +"---\nabstract: 'In industry, Deep Neural Networks have shown high defect detection rates surpassing other more traditional manual feature engineering based proposals. This has been achieved mainly through supervised training where a great amount of data is required in order to learn good classification models. However, such amount of data is sometimes hard to obtain in industrial scenarios, as few defective pieces are produced normally. In addition, certain kinds of defects are very rare and usually just appear from time to time, which makes the generation of a proper dataset for training a classification model even harder. Moreover, the lack of available data limits the adaptation of inspection models to new defect types that appear in production as it might require a model retraining in order to incorporate the detects and detect them. In this work, we have explored the technique of weight imprinting in the context of solar cell quality inspection where we have trained a network on three base defect classes, and then we have incorporated new defect classes using few samples. The results have shown that this technique allows the network to extend its knowledge with regard to defect classes with few samples, which can be interesting" +"---\nabstract: 'Reinforcement Learning (RL) has presented an impressive performance in video games through raw pixel imaging and continuous control tasks. However, RL performs poorly with high-dimensional observations such as raw pixel images. 
It is generally accepted that physical state-based RL policies such as laser sensor measurements give a more sample-efficient result than learning by pixels. This work presents a new approach that extracts information from a depth map estimation to teach an RL agent to perform the mapless navigation of Unmanned Aerial Vehicle (UAV). We propose the Depth-Imaged Contrastive Unsupervised Prioritized Representations in Reinforcement Learning(Depth-CUPRL) that estimates the depth of images with a prioritized replay memory. We used a combination of RL and Contrastive Learning to lead with the problem of RL based on images. From the analysis of the results with Unmanned Aerial Vehicles (UAVs), it is possible to conclude that our Depth-CUPRL approach is effective for the decision-making and outperforms state-of-the-art pixel-based approaches in the mapless navigation capability.'\nauthor:\n- |\n Junior C. de Jesus$^{1}$, Victor A. Kich$^{2}$, Alisson H. Kolling$^{2}$,\\\n Ricardo B. Grando$^{3}$, Rodrigo S. Guerra$^{2}$, Paulo L. J. Drews-Jr$^{1}$[^1] [^2][^3]\nbibliography:\n- './bibliography/IEEEabrv.bib'\n- './bibliography/IEEEexample.bib'\ntitle: '**Depth-CUPRL: Depth-Imaged Contrastive Unsupervised Prioritized Representations in Reinforcement Learning" +"---\nabstract: 'We present a single-stage casual waveform-to-waveform multichannel model that can separate moving sound sources based on their broad spatial locations in a dynamic acoustic scene. We divide the scene into two spatial regions containing, respectively, the target and the interfering sound sources. The model is trained end-to-end and performs spatial processing implicitly, without any components based on traditional processing or use of hand-crafted spatial features. 
We evaluate the proposed model on a real-world dataset and show that the model matches the performance of an oracle beamformer followed by a state-of-the-art single-channel enhancement network.'\naddress: |\n $^1$Meta, Reality Labs Research, Pittsburgh PA, USA\\\n $^2$Meta AI Research, Paris, France\nbibliography:\n- 'references.bib'\ntitle: Implicit Neural Spatial Filtering for Multichannel Source Separation in the Waveform Domain\n---\n\n**Index Terms**: source separation, multichannel processing, raw waveform, deep learning\n\nIntroduction\n============\n\nSpatial cues captured by the left and right ears help us localize sound sources and focus attention on directions of interest. Similarly, multiple microphones can be used to separate sound sources based on their spatial locations. With modern devices often featuring two to eight microphones, multichannel processing is becoming increasingly important in commercial applications. At the same time, the rise of virtual and" +"---\nabstract: 'In nuclear matter in neutron stars the flavor content (e.g., proton fraction) is subject to weak interactions, establishing flavor ($\\beta$-)equilibrium. During the merger of two neutron stars there can be deviations from this equilibrium. [By incorporating Urca processes into]{} general-relativistic hydrodynamics simulations, we [study]{} the resulting out-of-equilibrium dynamics during the collision. [We provide the first direct evidence that microphysical transport effects at late times reach a hydrodynamic regime with a nonzero bulk viscosity, making neutron star collisions intrinsically viscous.]{} Finally, we identify signatures of this process in the post-merger gravitational wave emission.'\nauthor:\n- 'Elias R. Most'\n- Alexander Haber\n- 'Steven P. Harris'\n- Ziyuan Zhang\n- 'Mark G. 
Alford'\n- Jorge Noronha\nbibliography:\n- 'inspire.bib'\n- 'non\\_inspire.bib'\ndate: 1 July 2022\ntitle: 'Emergence of microphysical viscosity in binary neutron star post-merger dynamics'\n---\n\nIntroduction\n============\n\nBinary neutron star mergers [@LIGOScientific:2017vwq; @LIGOScientific:2020aai] offer exciting prospects for constraining the dense matter equation of state (EoS) [@Lattimer:2015nhk; @Oertel:2016bki; @Ozel:2016oaf] (e.g., [@Flanagan:2007ix; @Read:2009yp; @Raithel:2019uzi; @HernandezVivanco:2019vvk; @Landry:2020vaw; @Chatziioannou:2021tdi]). This can be done either by using the tidal deformability encoded in the inspiral gravitational wave signal [@Bauswein:2017vtn; @Annala:2017llu; @Most:2018hfd; @LIGOScientific:2018cki; @Raithel:2018ncd; @De:2018uhw; @Chatziioannou:2018vzf; @Carson:2018xri], the potential presence of very massive neutron stars" +"---\nabstract: 'Text-only and semi-supervised training based on audio-only data has gained popularity recently due to the wide availability of unlabeled text and speech data. In this work, we propose incorporating text-only and semi-supervised training into an attention-based deliberation model. By incorporating text-only data in training a bidirectional encoder representation from transformer (BERT) for the deliberation text encoder, and large-scale text-to-speech and audio-only utterances using joint acoustic and text decoder (JATD) and semi-supervised training, we achieved 4%-12% WER reduction for various tasks compared to the baseline deliberation. Compared to a state-of-the-art language model (LM) rescoring method, the deliberation model reduces the Google Voice Search WER by 11% relative. 
We show that the deliberation model also achieves a positive human side-by-side evaluation compared to the state-of-the-art LM rescorer with reasonable endpointer latencies.'\naddress: 'Google LLC, USA'\nbibliography:\n- 'refs.bib'\ntitle: 'Improving Deliberation by Text-Only and Semi-Supervised Training'\n---\n\nIntroduction\n============\n\nEnd-to-end (E2E) automatic speech recognition (ASR) models have made tremendous improvements in recent years\u00a0[@sainath2021efficient; @narayanan2021cascaded; @chen2021developing; @li2020developing; @yeh2019transformer; @wang2021cascade; @saon2021advancing; @li2021recent]. In a state-of-the-art system\u00a0[@sainath2021efficient], a neural language model (LM) is used to rescore a cascaded encoder model and outperforms a conventional ASR system in both Google Voice Search" +"---\nabstract: 'Since the discovery of electron-wave duality, electron scattering instrumentation has developed into a powerful array of techniques for revealing the atomic structure of matter. Beyond detecting local lattice variations in equilibrium structures with the highest possible spatial resolution, recent research efforts have been directed towards the long sought-after dream of visualizing the dynamic evolution of matter in real-time. The atomic behavior at ultrafast timescales carries critical information on phase transition and chemical reaction dynamics, the coupling of electronic and nuclear degrees of freedom in materials and molecules, the correlation between structure, function and previously hidden metastable or nonequilibrium states of matter. Ultrafast electron pulses play an essential role in this scientific endeavor, and their generation has been facilitated by rapid technical advances in both ultrafast laser and particle accelerator technologies. This review presents a summary of the remarkable developments in this field over the last few decades. 
The physics and technology of ultrafast electron beams is presented with an emphasis on the figures of merit most relevant for ultrafast electron diffraction (UED) experiments. We discuss recent developments in the generation, manipulation and characterization of ultrashort electron beams aimed at improving the combined spatio-temporal resolution of these measurements. The" +"---\nabstract: 'The production of Higgs bosons in weak boson fusion has the second largest cross section among Higgs-production processes at the LHC. As such, this process plays an important role in detailed studies of Higgs interactions with vector bosons. In this paper we extend the available description of Higgs boson production in weak boson fusion by considering anomalous $HVV$ interactions and NNLO QCD radiative corrections at the same time. We find that, while leading order QCD predictions are too uncertain to allow for detailed studies of the anomalous couplings, NLO QCD results are sufficiently precise, most of the time. The NNLO QCD corrections alter the NLO QCD predictions only marginally, but their availability enhances the credibility of conclusions based on NLO QCD computations.'\nauthor:\n- Konstantin\u00a0Asteriadis\n- Fabrizio\u00a0Caola\n- Kirill\u00a0Melnikov\n- Raoul\u00a0R\u00f6ntsch\nbibliography:\n- 'anom.bib'\ntitle: Anomalous Higgs boson couplings in weak boson fusion production at NNLO in QCD\n---\n\n\\#1[(\\#1)]{} \u00b6[P]{}\n\nIntroduction {#sec:introduction}\n============\n\nThe discovery of the Higgs boson nearly ten years ago completed the Standard Model (SM) of particle physics, providing, for the first time, experimental support for the hypothesis that electroweak symmetry is broken by a scalar field. By now quantum" +"---\nabstract: 'This paper is a cursory study on how topological features are preserved within the internal representations of neural network layers. 
Using techniques from topological data analysis, namely persistent homology, the topological features of a simple feedforward neural network\u2019s layer representations of a modified torus with a Klein bottle-like twist were computed. The network appeared to approximate homeomorphisms in early layers, before significantly changing the topology of the data in deeper layers. The resulting noise hampered the ability of persistent homology to compute these features; however, similar topological features seemed to persist longer in a network with a bijective activation function.'\nauthor:\n- |\n Archie Shahidullah\\\n Computing + Mathematical Sciences\\\n California Institute of Technology\\\n Pasadena, CA 91125\\\n `archie@caltech.edu`\ntitle: Topological Data Analysis of Neural Network Layer Representations\n---\n\nIntroduction\n============\n\nIn recent years, deep neural networks have revolutionized many computing problems once thought to be difficult. However, they are often referred to as black boxes and the mechanisms through which they learn have remained elusive. Insight into how a neural network internally represents the dataset it is trained on will provide an understanding of what makes for effective training data, and how a neural network extracts relevant features from a
The results are expressed in terms of the eigenvalues of $A$ and $A_1$ and the sequence representing the control operator.\n\n[**Keywords:**]{} admissibility, state delay, infinite-dimensional diagonal system\n\n[**2020 Subject Classification:**]{} 34K30, 34K35, 47D06, 93C23\n\nIntroduction\n============\n\nState-delayed differential equations arise in many areas of applied mathematics, which is related to the fact that in the real world there is an inherent input-output delay in every physical system. Among sources of delay we have the spatial character of the system in relation to signal propagation, measurements processing or hatching time in biological systems, to name a few. Whenever the delay has a considerable influence on the outcome of the process it has to be incorporated into a process\u2019s mathematical model. Hence, an understanding of a state-delayed system," +"---\nabstract: '\\[sec:abstract\\] In recent years, multi-robot systems have received increasing attention from both industry and academia. Besides the need of accurate and robust estimation of relative localization, security and trust in the system are essential to enable wider adoption. In this paper, we propose a framework using Hyperledger Fabric for multi-robot collaboration in industrial applications. We rely on blockchain identities for the interaction of ground and aerial robots, and use smart contracts for collaborative decision making. The use of ultra-wideband (UWB) localization for both autonomous navigation and robot collaboration extends our previous work in Fabric-based fleet management. We focus on an inventory management application which uses a ground robot and an aerial robot to inspect a warehouse-like environment and store information about the found objects in the blockchain. 
We measure the impact of adding the blockchain layer, analyze the transaction commit latency and compare the resource utilization of blockchain-related processes to the already running data processing modules.'\nauthor:\n- \nbibliography:\n- 'bibliography.bib'\ntitle: ' Secure Heterogeneous Multi-Robot Collaboration and Docking with Hyperledger Fabric Blockchain\\'\n---\n\nRobotics; ROS\u00a02; Blockchain; Hyperledger Fabric; Multi-robot systems; Ultra-wideband (UWB); Inventory management; Fleet management; Distributed ledger technologies.\n\nIntroduction {#sec:introduction}\n============\n\nMulti-robot systems (MRS), including" +"---\nabstract: 'While empirical shell model calculations have successfully described low-lying nuclear data for decades, only recently has significant effort been made to quantify the uncertainty in such calculations. Here we quantify the statistical error in effective parameters for transition operators in empirical calculations in the $sd$ ($1s_{1/2}$-$0d_{3/2}$-$0d_{5/2}$) valence space, specifically the quenching of Gamow-Teller transitions, effective charges for electric quadrupole ($E2$) transitions, and the effective orbital and spin couplings for magnetic dipole ($M1$) transitions and moments. We find the quenching factor for Gamow-Teller transitions relative to free-space values is tightly constrained. For effective $M1$ couplings, we found isoscalar components more constrained than isovector. This detailed quantification of uncertainties, while highly empirical, nonetheless is an important step towards interpretation of experiments.'\nauthor:\n- 'Jordan M. R. Fox'\n- 'Calvin W. 
Johnson'\n- Rodrigo Navarro Perez\nbibliography:\n- 'johnsonmaster.bib'\n- 'fox\\_uq.bib'\n- 'navarroperez.bib'\ntitle: Uncertainty quantification of transition operators in the empirical shell model\n---\n\nIntroduction\n============\n\nOver the last two decades, interest in uncertainty quantification (UQ) has grown rapidly in many sciences, and theoretical nuclear physics is no exception [@dobaczewski2014error; @navarro_perez_error_analysis_2015; @xu2021bayesian; @Lovell_2020; @piekarewicz2015information; @whitehead2021prediction; @drischler2020quantifying; @perez2020uncertainty; @PhysRevC.96.054316; @PhysRevC.98.061301]. Providing theoretical predictions with error bars that reflect the true limits" +"---\nabstract: 'We present the exact diagonalization study of rotating Bose-condensed gas interacting via finite-range Gaussian potential confined in a quasi-2D harmonic trap. The system of many-body Hamiltonian matrix is diagonalized in given subspaces of quantized total angular momentum to obtain the lowest-energy eigenstate employing the beyond lowest-Landau-level approximation. In the co-rotating frame, the quantum mechanical stability of angular momentum states is discussed for the existence of phase transition between the stable states of interacting system. Thereby analyzing the von Neumann entanglement entropy and degree of condensation provide the information about quantum phase correlation in the many-body states. Calculating the conditional probability distribution, we further probe the internal structure of quantum mechanically stable and unstable states. Much emphasis is put on finding the spatial correlation of bosonic atoms in the rotating system for the formation and entry of singly quantized vortices, and then organizing into canonical polygons with and without a central vortex at the trap center. 
Results are summarized in the form of a movie depicting the vortex patterns having discrete p-fold rotational symmetry with $p = 2,3,4,5,6$.'\nauthor:\n- 'Mohd. Imran'\n- 'M. A. H. Ahsan'\ntitle: 'Novel phases in rotating Bose-condensed gas: vortices and quantum correlation'\n---" +"---\nabstract: 'There has been a growing interest in the use of Deep Neural Networks (DNNs) to solve Partial Differential Equations (PDEs). Despite the promise that such approaches hold, there are various aspects where they could be improved. Two such shortcomings are (i) their computational inefficiency relative to classical numerical methods, and (ii) the non-interpretability of a trained DNN model. In this work we present ASPINN, an anisotropic extension of our earlier work called SPINN\u2013Sparse, Physics-informed, and Interpretable Neural Networks\u2013to solve PDEs that addresses both these issues. ASPINNs generalize radial basis function networks. We demonstrate using a variety of examples involving elliptic and hyperbolic PDEs that the special architecture we propose is more efficient than generic DNNs, while at the same time being directly interpretable. Further, they improve upon the SPINN models we proposed earlier in that fewer nodes are required to capture the solution using ASPINN than using SPINN, thanks to the anisotropy of the local zones of influence of each node. The interpretability of ASPINNs translates to a ready visualization of their weights and biases, thereby yielding more insight into the nature of the trained model. This in turn provides a systematic procedure to improve the architecture based
While there are very promising application scenarios, for example in materials testing of fibrous composites or in medical diagnosis of brain cell connectivity, AXDT faces challenges in practical applicability due to the complex and time-intensive acquisitions required to fully sample the anisotropic X-ray scattering functions. However, depending on the specific imaging task at hand, a full sampling may not be required, allowing for reduced acquisitions. In this work we are investigating a performance prediction approach for AXDT using task-specific detectability indices. Based on this approach we present a task-driven acquisition optimization method that enables reduced acquisition schemes while keeping the task-specific image quality high. We demonstrate the feasibility and efficacy of the method in experiments with simulated and experimental data.**'\nauthor:\n- 'Theodor\u00a0Cheslerean-Boghiu,\u00a0 Franz\u00a0Pfeiffer,\u00a0 and\u00a0Tobias\u00a0Lasser,\u00a0 [^1][^2]'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Task-specific Performance Prediction and Acquisition Optimization for Anisotropic X-ray Dark-field Tomography'\n---\n\n= \\[diamond, draw, fill=blue!20, text width=4.5em, text badly centered, node distance=3cm, inner sep=0pt\\] = \\[rectangle, draw, fill=blue!20, text width=5em, text centered, rounded corners, minimum height=4em\\] = \\[draw, -latex\u2019\\]" +"---\nabstract: 'We show several novel aspects in the exact non-equilibrium dynamics of quantum double dark-soliton states in the Lieb-Liniger model for the one-dimensional Bose gas with repulsive interactions. We also show an exact finite-size scaling of the fraction of the [quasi-]{}Bose-Einstein condensation (BEC) in the ground state, which should characterize the quasi-BEC in quantum double dark-soliton states that we assume to occur in the weak coupling regime. 
First, we show the exact time evolution of the density profile in the quantum state associated with a quantum double dark-soliton by the Bethe ansatz. Secondly, we derive a kind of macroscopic quantum wave-function effectively by exactly evaluating the square amplitude and phase profiles of the matrix element of the field operator between the quantum double dark-soliton states. The profiles are close to those of dark-solitons particularly in the weak-coupling regime. Then, the scattering of two notches in the quantum double dark-soliton state is exactly demonstrated. It is suggested from the above observations that the quasi-BEC should play a significant role in the dynamics of quantum double dark-soliton states. If the condensate fraction is close to 1, the quantum state should be well approximated by the quasi-BEC state where the mean-field picture" +"---\nauthor:\n- Danial Langeroodi\n- Alessandro Sonnenfeld\n- Henk Hoekstra\n- Adriano Agnello\nbibliography:\n- 'aanda.bib'\ntitle: Photometric redshift estimation of strongly lensed galaxies\n---\n\n[Around $10^5$ strongly lensed galaxies are expected to be discovered with upcoming wide-field surveys such as Euclid and the LSST. Utilising these large samples to study the inner structure of lens galaxies requires source redshifts, which are needed to turn lens models into mass measurements. However, obtaining spectroscopic source redshifts for large samples of strong lenses is prohibitive with the current capacity of spectroscopic facilities. ]{}\n\n[ As an alternative to spectroscopy, we study the possibility of obtaining source photometric redshifts (photo-$z$s) for large samples of strong lenses. We pay particular attention to the problem of blending between the lens and the source light. 
]{}\n\n[ Our strategy consists of deblending the source and lens light by simultaneously modelling the lens galaxy and the background source in all available photometric bands, and then feeding the derived source colours to a template-fitting photo-$z$ algorithm. We describe the lens and the source light with a S\u00e9rsic profile, and the lens mass with a singular isothermal ellipsoid. We first test our approach on a simulated sample of" +"---\nabstract: 'Existing text recognition methods usually need large-scale training data. Most of them rely on synthetic training data due to the lack of annotated real images. However, there is a domain gap between the synthetic data and real data, which limits the performance of the text recognition models. Recent self-supervised text recognition methods have attempted to utilize unlabeled real images by introducing contrastive learning, which mainly learns the discrimination of the text images. Inspired by the observation that humans learn to recognize texts through both reading and writing, we propose to learn discrimination and generation by integrating contrastive learning and masked image modeling in our self-supervised method. The contrastive learning branch is adopted to learn the discrimination of text images, which imitates the reading behavior of humans. Meanwhile, masked image modeling is first introduced for text recognition to learn the context generation of the text images, which is similar to the writing behavior. The experimental results show that our method outperforms previous self-supervised text recognition methods by 10.2%-20.2% on irregular scene text recognition datasets. Moreover, our proposed text recognizer exceeds previous state-of-the-art text recognition methods by an average of 5.3% on 11 benchmarks, with a similar model size. 
We also demonstrate that" +"---\nabstract: 'There exist bipartite entangled states whose violations of Clauser-Horne-Shimony-Holt (CHSH) Bell inequality can be observed by a single Alice and arbitrarily many sequential Bobs \\[[Phys. Rev. Lett. **125**, 090401 (2020)](https://link.aps.org/doi/10.1103/PhysRevLett.125.090401)\\]. Here we consider its analogues for tripartite systems: a tripartite entangled state is shared among Alice, Bob and multiple Charlies. The first Charlie measures his qubit and then passes his qubit to the next Charlie who measures again with other measurements and so on. The goal is to maximize the number of Charlies that can observe some kind of nonlocality with the single Alice and Bob. It has been shown that at most two Charlies could share genuine nonlocality of the Greenberger-Horne-Zeilinger (GHZ) state via the violation of Svetlichny inequality with Alice and Bob \\[[Quantum Inf. Process. **18**, 42 (2019)](https://doi.org/10.1007/s11128-018-2161-x) and [Phys. Rev. A **103**, 032216 (2021)](https://link.aps.org/doi/10.1103/PhysRevA.103.032216)\\]. In this work, we show that arbitrarily many Charlies can have standard nonlocality (via violations of Mermin inequality) and some other kind of genuine nonlocality (which is known as genuinely nonsignal nonlocality) with the single Alice and single Bob.'\nauthor:\n- Ya Xi\n- 'Mao-Sheng Li'\n- Libin Fu\n- 'Zhu-Jun Zheng'\ntitle: Sharing tripartite nonlocality sequentially by arbitrarily many independent" +"---\nabstract: '[ In 2D semiconductors and insulators, the Chern number of the valence band Bloch state is an important quantity that has been linked to various material properties, such as the topological order. 
We elaborate that the opacity of 2D materials to circularly polarized light over a wide range of frequencies, measured in units of the fine structure constant, can be used to extract a spectral function that frequency-integrates to the Chern number, offering a simple optical experiment to measure it. This method is subsequently generalized to finite temperature and locally on every lattice site by a linear response theory, which helps to extract the Chern marker that maps the Chern number to lattice sites. The long range response in our theory corresponds to a Chern correlator that acts like the internal fluctuation of the Chern marker, and is found to be enhanced in the topologically nontrivial phase. Finally, from the Fourier transform of the valence band Berry curvature, a nonlocal Chern marker is further introduced, whose decay length diverges at topological phase transitions and therefore serves as a faithful indicator of the transitions, and moreover can be interpreted as a Wannier state correlation function. The concepts discussed in" +"---\nabstract: 'We establish Central Limit Theorems for the volumes of intersections of $B_{p}^n$ (the unit ball of $\\ell_p^n$) with uniform random subspaces of codimension $d$ for fixed $d$ and $n\\to \\infty$. As a corollary we obtain higher order approximations for expected volumes, refining previous results by Koldobsky and Lifschitz and approximations obtained from the Eldan\u2013Klartag version of CLT for convex bodies. We also obtain a Central Limit Theorem for the Minkowski functional of the intersection body of $B_p^n$, evaluated on a random vector distributed uniformly on the unit sphere.'\naddress:\n- 'Institute of Mathematics, University of Warsaw, ul. 
Banacha 2, 02-097 Warszawa, Poland'\n- 'Mathematics Department, University of Missouri, Columbia, Missouri 65211'\n- 'Mathematics Department, University of Missouri, Columbia, Missouri 65211'\nauthor:\n- 'Rados[\u0142]{}aw Adamczak'\n- Peter Pivovarov\n- Paul Simanjuntak\nbibliography:\n- 'CLT-sections.bib'\ntitle: 'Limit theorems for the volumes of small codimensional random sections of $\\ell_p^n$-balls'\n---\n\nIntroduction\n============\n\nAn important aspect of stochastic geometry is the investigation of volumes of random sets. They have been studied in a variety of contexts, including for instance volumes of convex hulls of i.i.d. Gaussian vectors in ${\\mathbb{R}}^m$ (e.g., B\u00e1r\u00e1ny and Vu [@MR2330981], Calka and Yukich [@MR3405618]), points selected from the" +"---\nauthor:\n- 'Honey,[!!]{}'\n- 'B. Satyanarayana,'\n- 'R. Shinde,'\n- 'V.M. Datar,'\n- 'D. Indumathi,'\n- 'Ram K V Thulasi,'\n- 'N. Dalal,'\n- 'S. Prabhakar,'\n- 'S. Ajith,'\n- 'Sourabh Pathak,'\n- Sandip Patel\ntitle: 'Magnetic field measurements on the mini-ICAL detector using Hall probes'\n---\n\nIntroduction and Motivation {#sec:intro}\n===========================\n\nThe proposed magnetized Iron Calorimeter (ICAL) detector to be located at the India-based Neutrino Observatory (INO) is designed to be a 51 kton iron detector optimized to detect $\\mu^{-}$ and $\\mu^{+}$ generated from the charged current interactions within iron of $\\nu_{\\mu}$ and $\\overline{\\nu}_\\mu$ respectively in the energy range of few to 10s of GeV. The detector will comprise 3 modules made up of 151 layers of 56 mm thick iron plates with 150 layers of Resistive plate chambers (RPCs) as active detector element which are sandwiched between the iron plates. The magnetic field will be generated through current passing in copper coils which are wound around the iron plates in slots constructed for the purpose. 
Hence the field is expected to vary in both magnitude and direction over the area of the iron plates.\n\nOne of the most important capabilities of ICAL will be its ability to" +"---\nabstract: 'We present a quantum-correlation-based free-space optical (FSO) link over 250 m using an outdoor active reflector 125 m from the transceiver station. The performance of free-space optical communication can be significantly degraded by atmospheric turbulence effects, such as beam wander and signal fluctuations. We used a 660 nm tracking laser to reduce atmospheric effects, by analyzing the fast beam wander and slow temporal beam drift, using this information to correct the quantum channel alignment of the 810 nm signal photons. In this work, the active reflector consisted of a mirror, a 6-axis hexapod stage, and a long-range wireless bridge. The slow drift of the beam path due to outdoor temperature changes was steered and controlled using wireless optical feedback between the receiver units and the active reflector. Our work provides useful knowledge for improved control of beam paths in outdoor conditions, which can be developed to ensure high quality quantum information transfer in real-world scenarios, such as an unmanned FSO link for urban quantum communication or retro-reflective quantum communication links.'\nauthor:\n- Dongkyu Kim\n- Dohoon Lim\n- Kyungdeuk Park\n- Yong Sup Ihn\ntitle: 'Quantum-correlation-based free-space optical link with an active reflector'\n---\n\nIntroduction\n============\n\nQuantum key distribution" +"---\nabstract: 'The Rouquier blocks, also known as the RoCK blocks, are important blocks of the symmetric group algebras and the Hecke algebras of type $A$, with the partitions labelling the Specht modules that belong to these blocks having a particular abacus configuration. 
We generalise the definition of Rouquier blocks to the Ariki-Koike algebras, where the Specht modules are indexed by multipartitions, and explore the properties of these blocks.'\naddress: 'School of Mathematics, University of East Anglia, Norwich NR4 7TJ, UK.'\nauthor:\n- Sin\u00e9ad Lyle\ntitle: 'Rouquier blocks for Ariki-Koike algebras'\n---\n\nIntroduction\n============\n\nSuppose there is a conjecture that you believe to be true for all finite groups. If no general method of attack suggests itself, perhaps you begin by proving it for specific families of groups. You might start with the abelian groups, or try to work your way through the simple groups, but fairly soon it is likely you will want to consider the symmetric groups $\\mathfrak{S}_n$. One may easily see examples of this approach. In representation theory, conjectures for which partial proofs exist include Alperin\u2019s weight conjecture (conjectured in 1987\u00a0[@Alperin] and proved for symmetric groups in 1990\u00a0[@AlperinFong]), Donovan\u2019s conjecture (conjectured in 1980\u00a0[@AlperinBook Conjecture" +"---\nabstract: 'Outlier or anomaly detection is an important task in data analysis. We discuss the problem from a geometrical perspective and provide a framework which exploits the metric structure of a data set. Our approach rests on the *manifold assumption*, i.e., that the observed, nominally high-dimensional data lie on a much lower dimensional manifold and that this intrinsic structure can be inferred with manifold learning methods. We show that exploiting this structure significantly improves the detection of outlying observations in high dimensional data. We also suggest a novel, mathematically precise and widely applicable distinction between *distributional* and *structural* outliers based on the geometry and topology of the data manifold that clarifies conceptual ambiguities prevalent throughout the literature. 
Our experiments focus on functional data as one class of structured high-dimensional data, but the framework we propose is completely general and we include image and graph data applications. Our results show that the outlier structure of high-dimensional and non-tabular data can be detected and visualized using manifold learning methods and quantified using standard outlier scoring methods applied to the manifold embedding vectors.'\nauthor:\n- |\n Moritz Herrmann, Florian Pfisterer, and Fabian Scheipl\\\n Department of Statistics, Ludwig Maximilians University, Munich, Germany" +"---\nauthor:\n- 'A. Beck'\n- 'V. Lebouteiller'\n- 'S. C. Madden'\n- 'C. Iserlohe'\n- 'A. Krabbe'\n- 'L. Ramambason'\n- 'C. Fischer'\n- 'M. Ka\u017amierczak-Barthel'\n- 'S. T. Latzko'\n- 'J. P. P\u00e9rez-Beaupuits'\nbibliography:\n- 'Literature.bib'\nsubtitle: 'I. Observations and fundamental parameters of the ionised gas'\ntitle: Infrared view of the multiphase ISM in NGC253\n---\n\n[Massive star-formation leads to enrichment with heavy elements of the interstellar medium. On the other hand, the abundance of heavy elements is a key parameter to study the star-formation history of galaxies. Furthermore, the total molecular hydrogen mass, usually determined by converting CO or $\\left[\\ion{C}{ii}\\right]158{\\,\\mu\\rm{m}}$ luminosities, depends on the metallicity as well. 
The excitation of metallicity-sensitive emission lines, however, depends on the gas density of the regions where they arise.]{} [We used spectroscopic observations from SOFIA, *Herschel*, and *Spitzer* of the nuclear region of the starburst galaxy NGC253, as well as photometric observations from GALEX, 2MASS, *Spitzer*, and *Herschel* in order to derive physical properties such as the optical depth to correct for extinction, as well as the gas density and metallicity of the central region.]{} [Ratios of the integrated line fluxes of several species were utilised to derive the gas density and" +"---\nabstract: 'Diadochokinetic speech tasks (DDK), in which participants repeatedly produce syllables, are commonly used as part of the assessment of speech motor impairments. These studies rely on manual analyses that are time-intensive, subjective, and provide only a coarse-grained picture of speech. This paper presents two deep neural network models that automatically segment consonants and vowels from unannotated, untranscribed speech. Both models work on the raw waveform and use convolutional layers for feature extraction. The first model is based on an LSTM classifier followed by fully connected layers, while the second model adds more convolutional layers followed by fully connected layers. The segmentations predicted by the models are used to obtain measures of speech rate and sound duration. Results on a dataset of young healthy individuals show that our LSTM model outperforms the current state-of-the-art systems and performs comparably to trained human annotators. 
Moreover, the LSTM model also presents comparable results to trained human annotators when evaluated on an unseen dataset of older individuals with Parkinson\u2019s Disease.'\naddress: |\n $^1$Faculty of Electrical and Computer Engineering, Technion\u2013Israel Institute of Technology, Israel\\\n $^2$Laboratoire de Sciences Cognitives et Psycholinguistique, D\u00e9partement d\u2019Etudes Cognitives, ENS, EHESS, CNRS, PSL University, France\\\n $^3$Department of Linguistics, Northwestern University, IL, USA\\\n $^4$Department" +"---\nabstract: 'In a previous paper, we had modified Non-Relativistic QCD as it applies to quarkonium production by taking into account the effect of perturbative soft-gluon emission from the colour-octet quarkonium states. We tested the model by fitting the unknown non-perturbative parameter in the model from Tevatron data and using that to make parameter-free predictions for $J/\\psi$ and $\\psi ''$ production at the LHC. In this paper, we study $\\chi_c$ production: we fit, as before, the unknown matrix-element using data from Tevatron. We then extend the results of the previous paper for $J/\\psi$ production by calculating the effect of $\\chi_c$ feed-down to the $J/\\psi$ cross-section, which, by comparing with CMS results at $\\sqrt{s}=$ 13 TeV, we demonstrate to be small. We have also computed $\\chi_c^1$ and $\\chi_c^2$ at $\\sqrt{s}=$7 TeV and find excellent agreement with data from the ATLAS experiment.'\nauthor:\n- |\n Sudhansu\u00a0S.\u00a0Biswal$^1$[^1], Sushree\u00a0S.\u00a0Mishra$^1$[^2] \u00a0and K.\u00a0Sridhar$^2$[^3]\\\n \\[0.2cm\\] [*1. Department of Physics, Ravenshaw University,*]{}\\\n \\[-0.2cm\\] [*Cuttack, 753003, India.*]{}\\\n \\[-0.2cm\\] [*2. 
School of Arts and Sciences, Azim Premji University,*]{}\\\n \\[-0.2cm\\] [*Sarjapura, Bangalore, 562125, India.*]{}\\\ntitle: '[$\\chi_c$]{} production in modified NRQCD'\n---\n\nThe effective theory for studying heavy quarkonium physics, Non-Relativistic Quantum Chromodynamics (NRQCD)\u00a0[@bbl], found much" +"---\nabstract: 'The discovery of the Higgs boson, ten years ago, was a milestone that opened the door to the study of a new sector of fundamental physical interactions. We review the role of the Higgs field in the Standard Model of particle physics and explain its impact on the world around us. We summarize the insights into Higgs physics revealed so far by ten years of work, discuss what remains to be determined, and outline potential connections of the Higgs sector with unsolved mysteries of particle physics.'\nauthor:\n- |\n Gavin P. Salam,$^{1,2}$ Lian-Tao Wang,$^{3}$ Giulia Zanderighi$^{4,5,6}$\\\n $^1$ [Rudolf Peierls Centre for Theoretical Physics,\\\n Clarendon Laboratory, Parks Road, University of Oxford, Oxford OX1 3PU, UK]{},\\\n $^2$ [All Souls College, Oxford OX1 4AL, UK]{},\\\n $^3$ [University of Chicago, 5640 South Ellis Ave. Chicago, IL 60637]{},\\\n $^4$ [Max Planck Institute for Physics, F\u00f6hringer Ring 6, 80805 Munich, Germany]{}\\\n $^5$ [Physik-Department, Technische Universit\u00e4t M\u00fcnchen, James-Franck-Strasse 1, 85748 Garching, Germany]{}\\\n $^6$ [zanderi@mpp.mpg.de]{}\\\nbibliography:\n- 'higgs.bib'\ntitle: The Higgs boson Turns Ten\n---\n\nTen years ago, on the 4$^\\text{th}$ of July 2012, scientists and journalists gathered at CERN, and remotely around the world, for the announcement of the discovery of a new fundamental particle," +"---\nabstract: 'Staged trees are a recently-developed, powerful family of probabilistic graphical models. An equivalence class of staged trees has now been characterised, and two fundamental statistical operators have been defined to traverse the equivalence class of a given staged tree. 
Here, two staged trees are said to be *statistically equivalent* when they represent the same set of distributions. Probabilistic graphical models such as staged trees are increasingly being used for causal analyses. Staged trees which are within the same equivalence class can encode very different causal hypotheses but data alone cannot help us distinguish between these. Therefore, in using score-based methods to learn the model structure and distributions from data for causal analyses, we should expect that a suitable scoring function is one which assigns the same score to statistically equivalent models. No scoring function has yet been proven to have this desirable property for staged trees. In this paper, we present a novel Bayesian Dirichlet scoring function based on path uniformity and mass conservation, and prove that this new scoring function is score-equivalent for staged trees.'\nauthor:\n- Conor Hughes\n- Peter Strong\n- Aditi Shenvi\nbibliography:\n- 'biblio.bib'\ntitle: Score Equivalence for Staged Trees\n---\n\nStaged Trees" +"---\nabstract: 'Animal behavior is driven by multiple brain regions working in parallel with distinct control policies. We present a biologically plausible model of off-policy reinforcement learning in the basal ganglia, which enables learning in such an architecture. The model accounts for action-related modulation of dopamine activity that is not captured by previous models that implement on-policy algorithms. In particular, the model predicts that dopamine activity signals a combination of reward prediction error (as in classic models) and \u201caction surprise,\" a measure of how unexpected an action is relative to the basal ganglia\u2019s current policy. In the presence of the action surprise term, the model implements an approximate form of $Q$-learning. On benchmark navigation and reaching tasks, we show empirically that this model is capable of learning from data driven completely or in part by other policies (e.g. 
from other brain regions). By contrast, models without the action surprise term suffer in the presence of additional policies, and are incapable of learning at all from behavior that is completely externally driven. The model provides a computational account for numerous experimental findings about dopamine activity that cannot be explained by classic models of reinforcement learning in the basal ganglia. These include" +"---\nabstract: |\n [nohyphenation]{}\n\n To solve tasks in complex environments, robots need to learn from experience. Deep reinforcement learning is a common approach to robot learning but requires a large amount of trial and error to learn, limiting its deployment in the physical world. As a consequence, many advances in robot learning rely on simulators. On the other hand, learning inside of simulators fails to capture the complexity of the real world, is prone to simulator inaccuracies, and the resulting behaviors do not adapt to changes in the world. The Dreamer algorithm has recently shown great promise for learning from small amounts of interaction by planning within a learned world model, outperforming pure reinforcement learning in video games. Learning a world model to predict the outcomes of potential actions enables planning in imagination, reducing the amount of trial and error needed in the real environment. However, it is unknown whether Dreamer can facilitate faster learning on physical robots. In this paper, we apply Dreamer to 4 robots to learn online and directly in the real world, without any simulators. Dreamer trains a quadruped robot to roll off its back, stand up, and walk from scratch and without resets in only" +"---\nabstract: 'Learning task-oriented dialog policies via reinforcement learning typically requires large amounts of interaction with users, which in practice renders such methods unusable for real-world applications. 
In order to reduce the data requirements, we propose to leverage data from across different dialog domains, thereby reducing the amount of data required from each given domain. In particular, we propose to learn domain-agnostic *action embeddings*, which capture general-purpose structure that informs the system how to act given the current dialog context, and are then specialized to a specific domain. We show how this approach is capable of learning with significantly less interaction with users, with a reduction of 35% in the number of dialogs required to learn, and to a higher level of proficiency than training separate policies for each domain on a set of simulated domains.'\nauthor:\n- |\n Jorge A. Mendez[^1]\\\n University of Pennsylvania\\\n `mendezme@seas.upenn.edu`\\\n Alborz Geramifard\\\n Facebook AI\\\n `alborzg@fb.com`\\\n Mohammad Ghavamzadeh\\\n Facebook AI Research\\\n `mgh@fb.com`\\\n Bing Liu\\\n Facebook Assistant\\\n `bingl@fb.com`\\\nbibliography:\n- 'MultiDomainRL.bib'\ntitle: |\n Reinforcement Learning of\\\n Multi-Domain Dialog Policies Via Action Embeddings\n---\n\nIntroduction\n============\n\nConversational systems provide a seamless interaction mechanism for users, allowing them to explore the capabilities of the system without the need" +"---\nabstract: |\n Real bug fixes found in open source repositories seem to be the perfect source for learning to localize and repair real bugs. However, the absence of large scale bug fix collections has made it difficult to effectively exploit real bug fixes in the training of larger neural models in the past. In contrast, [*artificial bugs*]{} \u2013 produced by mutating existing source code \u2013 can be easily obtained at a sufficient scale and are therefore often preferred in the training of existing approaches. Still, localization and repair models that are trained on artificial bugs usually underperform when faced with real bugs. 
This raises the question whether bug localization and repair models trained on real bug fixes are more effective in localizing and repairing real bugs.\n\n We address this question by introducing [*[RealiT]{}*]{}, a pre-train-and-fine-tune approach for effectively learning to localize and repair real bugs from real bug fixes. [RealiT]{}\u00a0 is first pre-trained on a large number of artificial bugs produced by traditional mutation operators and then fine-tuned on a smaller set of real bug fixes. Fine-tuning does not require any modifications of the learning algorithm and hence can be easily adopted in various training scenarios for bug localization" +"---\nabstract: |\n Generalizing the notion of the boundary sequence introduced by Chen and Wen, the $n$th term of the $\\ell$-boundary sequence of an infinite word is the finite set of pairs $(u,v)$ of prefixes and suffixes of length $\\ell$ appearing in factors $uyv$ of length $n+\\ell$ ($n\\ge \\ell\\ge 1$). Otherwise stated, for increasing values of $n$, one looks for all pairs of factors of length $\\ell$ separated by $n-\\ell$ symbols.\n\n For the large class of addable abstract numeration systems $S$, we show that if an infinite word is $S$-automatic, then the same holds for its $\\ell$-boundary sequence. In particular, they are both morphic (or generated by an HD0L system). To precise the limits of this result, we discuss examples of non-addable numeration systems and $S$-automatic words for which the boundary sequence is nevertheless $S$-automatic and conversely, $S$-automatic words with a boundary sequence that is not $S$-automatic. In the second part of the paper, we study the $\\ell$-boundary sequence of a Sturmian word. We show that it is obtained through a sliding block code from the characteristic Sturmian word of the same slope. We also show that it is the image under a morphism of some other characteristic Sturmian word." 
+"---\nabstract: 'The problem of algorithmic recourse has been explored for supervised machine learning models, to provide more interpretable, transparent and robust outcomes from decision support systems. An unexplored area is that of algorithmic recourse for anomaly detection, specifically for tabular data with only discrete feature values. Here the problem is to present a set of counterfactuals that are deemed normal by the underlying anomaly detection model so that applications can utilize this information for explanation purposes or to recommend countermeasures. We present an approach\u2014**C**ontext preserving **A**lgorithmic **R**ecourse for **A**nomalies in **T**abular data (*CARAT*)\u2014that is effective, scalable, and agnostic to the underlying anomaly detection model. *CARAT* uses a transformer based encoder-decoder model to explain an anomaly by finding features with low likelihood. Subsequently, semantically coherent counterfactuals are generated by modifying the highlighted features, using the overall context of features in the anomalous instance(s). Extensive experiments help demonstrate the efficacy of *CARAT*.'\nauthor:\n- Debanjan Datta\n- Feng Chen\n- Naren Ramakrishnan\nbibliography:\n- 'references.bib'\ntitle: Framing Algorithmic Recourse for Anomaly Detection\n---\n\nIntroduction\n============\n\nAlgorithmic recourse can be defined as a set of actions or changes that can change the outcome for a data instance with respect to
In this context, we propose a novel deep motion network (MoNet) that integrates images and a lightweight sensor known as the inertial measurement unit (IMU) from a velocity perspective to alleviate the obstacles mentioned above. Our contribution is two-fold. First, we introduce IMU acceleration for the first time to estimate elevational displacements outside the plane. We propose a temporal and multi-branch structure to mine the valuable information of low signal-to-noise ratio (SNR) acceleration. Second, we propose a multi-modal online self-supervised strategy that leverages IMU information as weak labels for adaptive optimization to reduce drift errors and further ameliorate the impacts of acceleration noise. Experiments show that our proposed method achieves superior reconstruction performance, exceeding state-of-the-art methods across the board.'\nauthor:\n- 'Mingyuan Luo[^1]'\n- 'Xin Yang[^^]{}'\n- Hongzhang Wang\n- Liwei Du\n- 'Dong Ni^()^'\nbibliography:\n- 'paper1226.bib'\ntitle: Deep Motion Network for Freehand 3D Ultrasound Reconstruction\n---